International Journal of Research in Computing
https://ijrcom.org/index.php/ijrc
The International Journal of Research in Computing (IJRC) publishes high-quality, refereed papers through the Faculty of Computing of General Sir John Kotelawala Defence University. It covers research from all computing-related fields, such as Computer Engineering, Computer Science, Software Engineering, Information and Communication Technologies, Information Systems, and Computational Mathematics, providing a platform for researchers and scholars worldwide to publicize their work and broaden their knowledge. The journal's focal purpose is to archive, for public access and reference, the research published at the University's annual International Research Conferences; it also accepts research from other scholars worldwide. Submissions are published after a standard peer-review process to ensure the quality and authenticity of the journal and its content. IJRC is an open-access journal, giving wider access to its content, and two volumes are published per year.

ISSN 2820-2147 (online)
ISSN 2820-2139 (print)

Faculty of Computing, General Sir John Kotelawala Defence University

Empowering the Captioning of Fashion Attributes from Asian Fashion Images
https://ijrcom.org/index.php/ijrc/article/view/131
Fashion image captioning, an evolving field in AI and computer vision, generates descriptive captions for fashion images. This paper addresses the prevalent bias in existing studies, which focus predominantly on Western fashion, by incorporating Asian fashion into the analysis. It works toward more inclusive AI technologies for the fashion industry by bridging the gap between Western and Asian fashion in image captioning. We leverage transfer learning techniques, combining the DeepFashion dataset (primarily Western fashion) with a newly curated Asian fashion dataset. Our approach employs advanced deep learning methods for the encoder and decoder components to generate high-quality captions that capture various fashion attributes, such as style, color, and garment type, tailored specifically to Asian fashion trends. Results demonstrate the efficacy of our methods, with the model achieving accuracies of 93.63% for gender, 83.42% for article type, and 61.34% for base color on the training dataset, and 94.13%, 79.25%, and 59.71%, respectively, on the validation dataset. These findings highlight the importance of inclusivity and diversity in AI research, advancing the field of fashion image captioning.

Link: https://www.ijrcom.org/download/issues/v3i1/IJRC31_01.pdf

DDA Gamini, KVS Perera
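To make the transfer-learning setup described in the abstract concrete, here is a minimal PyTorch sketch of a frozen pretrained encoder with per-attribute classification heads (gender, article type, base color). The backbone choice, head layout, and class counts are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch: transfer learning for fashion-attribute prediction.
# Backbone and class counts are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class FashionAttributeNet(nn.Module):
    def __init__(self, n_genders=2, n_article_types=40, n_base_colors=20):
        super().__init__()
        # Pretrained CNN encoder; frozen so only the heads are trained.
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])
        for p in self.encoder.parameters():
            p.requires_grad = False
        feat_dim = backbone.fc.in_features  # 2048 for ResNet-50
        # One classification head per fashion attribute.
        self.gender_head = nn.Linear(feat_dim, n_genders)
        self.article_head = nn.Linear(feat_dim, n_article_types)
        self.color_head = nn.Linear(feat_dim, n_base_colors)

    def forward(self, images):
        feats = self.encoder(images).flatten(1)
        return (self.gender_head(feats),
                self.article_head(feats),
                self.color_head(feats))

model = FashionAttributeNet()
logits = model(torch.randn(1, 3, 224, 224))  # one dummy RGB image
```

In a captioning system along these lines, the predicted attributes (or the shared image features) would condition a decoder that emits the caption text.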
Copyright (c) 2024 International Journal of Research in Computing
Published: 2024-07-17. Vol. 3, No. 1, pp. 5–9.

Artificial Neural Network Based Grey Exponential Smoothing Approach for Forecasting Electricity Demand in Sri Lanka
https://ijrcom.org/index.php/ijrc/article/view/130
A country's electricity supply greatly impacts its economy and standard of living; an accurate forecast of electricity demand is therefore essential for enhancing industrialization, farming, and residential requirements and for making proper investment decisions. Accordingly, most countries allocate significant amounts of their annual budgets to power generation. This study proposes an Artificial Neural Network (ANN) based approach to forecast electricity demand in Sri Lanka. For model validation, GM(1,1), Moving Average, and Grey Exponential Smoothing models were used, based on gross electricity generation data from 2000 to 2022. The empirical results suggest that the hybrid Grey Exponential Smoothing model is highly accurate under the non-stationary framework.

Link: https://www.ijrcom.org/download/issues/v3i1/IJRC31_02.pdf

Kumudu Nadeeshani Seneviratna Dissanayaka Mudiyanselage
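As context for one of the baselines named in the abstract, below is a minimal NumPy sketch of the classic GM(1,1) grey forecasting model. The function name and the toy demand series are illustrative, not the paper's data or implementation.

```python
# Minimal GM(1,1) grey model: fit on a short series, forecast ahead.
import numpy as np

def gm11_forecast(x0, horizon=1):
    """Fit GM(1,1) to series x0 and forecast `horizon` steps ahead."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                 # accumulated (AGO) series
    z1 = 0.5 * (x1[1:] + x1[:-1])      # background values
    # Least squares for the whitening equation x0[k] + a*z1[k] = b
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    # Inverse AGO recovers forecasts in the original scale
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])
    x0_hat[0] = x0[0]
    return x0_hat[n:]

demand = [9.7, 10.2, 10.9, 11.5, 12.3, 13.0]  # toy series, illustrative only
print(gm11_forecast(demand, horizon=2))
```

A grey-exponential-smoothing hybrid of the kind the paper evaluates would smooth the series (or the grey model's residuals) before or after this fitting step.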
Copyright (c) 2024 International Journal of Research in Computing
Published: 2024-07-17. Vol. 3, No. 1, pp. 10–14.

Automatic Bug Priority Prediction using LSTM and ANN Approaches during Software Development
https://ijrcom.org/index.php/ijrc/article/view/129
Manually assigning a priority value to a bug report takes time, and there is a high chance that a developer allocates the wrong value, which can affect several important software development processes. To address this problem, this research incorporates three distinct feature extraction approaches to create a model that automatically predicts bug priority using the Long Short-Term Memory (LSTM) deep learning algorithm and an Artificial Neural Network (ANN). First, we collected approximately 20,500 bug reports from the Bugzilla bug-tracking system. Following preprocessing, we created models using the two classifiers with three feature vectors, Global Vectors for Word Representation (GloVe), Term Frequency-Inverse Document Frequency (TF-IDF), and Word2Vec, each used individually. The final classification results were determined by comparing the results of the different models, which were integrated into an ensemble model. Accuracy, recall, precision, and F-measure were used to evaluate the models. The ensemble model produced the highest accuracy, 92%, compared with the other models: the ANN model reached 80.28%, the LSTM-GloVe model 89.58%, the LSTM-TF-IDF model 88.94%, and the LSTM-Word2Vec model 84.84%. The ensemble model also achieved higher recall, precision, and F-measure. Using the proposed LSTM-based ensemble approach, the priority level of bug reports can be determined automatically, efficiently, and effectively. In future studies, we intend to gather data from sources other than Bugzilla, such as JIRA or a GitHub repository, and to apply other deep learning algorithms to improve accuracy.

Link: https://www.ijrcom.org/download/issues/v3i1/IJRC31_03.pdf

DNA Dissanayake, RAHM Rupasingha, BTGS Kumara
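As a small illustrative sketch of one branch of such a pipeline, the snippet below feeds TF-IDF features from bug-report text into a small neural classifier via scikit-learn. The paper's actual system also trains LSTM networks over GloVe and Word2Vec embeddings and combines the models into an ensemble; the toy reports and priority labels here are invented for illustration.

```python
# One branch of a bug-priority pipeline: TF-IDF text features -> small ANN.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

reports = [
    "crash on startup when opening large project",
    "typo in settings dialog label",
    "data loss after unexpected shutdown",
    "minor misalignment of toolbar icons",
]
priorities = ["P1", "P4", "P1", "P5"]  # toy priority labels, not Bugzilla data

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # text -> TF-IDF vectors
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
)
clf.fit(reports, priorities)
print(clf.predict(["application crashes and loses unsaved work"]))
```

An ensemble along the paper's lines would train the LSTM variants in parallel and merge the per-model predictions, for example by voting, to produce the final priority.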
Copyright (c) 2024 International Journal of Research in Computing
Published: 2024-07-17. Vol. 3, No. 1, pp. 15–26.

Monocular 3D Reconstruction in Poorly Visible Environments
https://ijrcom.org/index.php/ijrc/article/view/134
3D reconstruction of real physical environments can be a challenging task, often requiring depth cameras such as LIDAR or RGB-D to capture the necessary depth information. However, this method is resource-intensive and expensive. To counter this problem, monocular 3D reconstruction has emerged as a research area of interest, leveraging deep learning techniques to reconstruct 3D environments using only sequences of RGB images, thus reducing the need for specialized hardware. Existing research has primarily focused on environments with good lighting conditions, leaving a gap for environments with poor visibility. In response, we propose a solution that addresses this limitation by enhancing the visibility of images taken in poorly visible environments. These enhanced images are then used for 3D reconstruction, allowing more features to be extracted and producing a 3D mesh with improved visibility. Our solution employs a Generative Adversarial Network (GAN) to enhance the images, providing a complete pipeline from inputting images with poor visibility to generating an output mesh file for 3D reconstruction. Through visualization of these mesh files, we observe that our solution improves the lighting conditions of the environment, resulting in a more detailed and readable 3D reconstruction.

Link: https://www.ijrcom.org/download/issues/v3i1/IJRC31_04.pdf

Nivinya Samarutilake, Tharusha Lekamge, Thilina Ilesinghe
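To illustrate the enhance-then-reconstruct idea, the PyTorch sketch below runs each dark frame through a tiny convolutional generator, a stand-in for the paper's trained GAN, before the frames would be handed to a monocular reconstruction backend. The network layout, function names, and data are assumptions for clarity, not the authors' implementation.

```python
# Illustrative pipeline stage: GAN-style enhancement of low-light frames
# prior to monocular 3D reconstruction.
import torch
import torch.nn as nn

class EnhancerG(nn.Module):
    """Tiny image-to-image generator (illustrative stand-in for the GAN)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # keep pixels in [0,1]
        )

    def forward(self, x):
        return self.net(x)

def enhance_sequence(frames, generator):
    """Enhance every low-visibility frame before reconstruction."""
    with torch.no_grad():
        return [generator(f.unsqueeze(0)).squeeze(0) for f in frames]

generator = EnhancerG()  # in practice, load trained GAN weights here
dark_frames = [torch.rand(3, 128, 128) * 0.1 for _ in range(4)]  # dim frames
bright_frames = enhance_sequence(dark_frames, generator)
# bright_frames would then feed a monocular reconstruction system that
# extracts features across the sequence and outputs a 3D mesh file.
```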
Copyright (c) 2024 International Journal of Research in Computing
Published: 2024-07-17. Vol. 3, No. 1, pp. 27–34.

Enhancing Human-Computer Interaction on Educational Websites through User Interface Design: Color Preferences for 7-8 Years Old
https://ijrcom.org/index.php/ijrc/article/view/135
For any application, the first impression matters in attracting users, and the User Interface (UI) is an application's first handshake with the user. This concern carries even more weight when creating educational applications for children, where a vibrant and playful UI helps spark joy in every click. This study evaluates the impact of effective color selection in educational website UI designs. Initially, the study conducted a comparative analysis of color preferences among 7-8-year-old students to identify the most preferred colors. A systematic color selection process gathered data from both primary and secondary sources, yielding 220 responses from the primary source and a sample of 323 records from the secondary source. Based on the findings, the most preferred colors among 7-8-year-old students were red, yellow, green, blue, and purple. These color preferences were then used to design a UI for an educational website, and the new design was compared with three existing websites to evaluate its attractiveness as an educational website UI. The study concludes that successful color selection enhances UI design: in the post-surveys conducted to validate the aim of the study, the new UI built on the most preferred colors of 7-8-year-olds was rated the most liked, with 52.9% positive feedback.

Link: https://www.ijrcom.org/download/issues/v3i1/IJRC31_05.pdf

DMS Sathsara, DVDS Abeysinghe, KGK Abeywardhane
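As a trivial illustration of the preference-tallying step behind such a survey, the snippet below counts color votes and ranks them by share. The votes shown are invented, not the study's data.

```python
# Tally survey color choices and report each color's share of votes.
from collections import Counter

votes = ["red", "blue", "red", "yellow", "green", "purple", "red", "blue"]
ranking = Counter(votes).most_common()  # colors sorted by vote count
total = len(votes)
for color, n in ranking:
    print(f"{color}: {n / total:.1%}")
```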
Copyright (c) 2024 International Journal of Research in Computing
Published: 2024-07-17. Vol. 3, No. 1, pp. 35–41.