ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal https://revistas.usal.es/cinco/index.php/2255-2863 The Advances in Distributed Computing and Artificial Intelligence Journal (http://adcaij.usal.es, ISSN: 2255-2863) is an open access (OA) journal that publishes articles contributing new results in distributed computing and artificial intelligence and their application in different areas, such as deep learning, generative AI, electronic commerce, smart grids, IoT and distributed computing, among others. These technologies change constantly as a result of the large research and technical effort being undertaken in both universities and businesses. Authors are invited to contribute to the journal by submitting articles that present research results, projects, survey works and industrial experiences describing significant advances in the areas of computing. ADCAIJ focuses on the exchange of ideas between scientists and technicians. Both academic and business areas are essential to facilitate the development of systems that meet the demands of today's society. The journal is supported by the research group BISITE (http://bisite.usal.es/en/research/research-lines). The journal commenced publication in 2012 with quarterly periodicity and has published more than 300 peer-reviewed articles. All articles are written in scientific English. From volume 12 (2023) onwards, the journal is published in continuous mode in order to improve the visibility and dissemination of scientific knowledge. ADCAIJ is indexed in Scopus and in the Emerging Sources Citation Index (ESCI) of Web of Science, in the category COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. It also appears in other directories and databases such as DOAJ, ProQuest, WorldCat, Dialnet, Sherpa ROMEO, Dulcinea, UlrichWeb, BASE, Academic Journals Database and Google Scholar. en-US adcaij@usal.es (Juan M. CORCHADO) redero@usal.es (Ángel REDERO (Ediciones Universidad de Salamanca)) Wed, 05 Jun 2024 10:51:16 +0200 OJS 3.3.0.13 http://blogs.law.harvard.edu/tech/rss 60 Evaluation of One-Class Techniques for Early Estrus Detection on Galician Intensive Dairy Cow Farm Based on Behavioral Data From Activity Collars https://revistas.usal.es/cinco/index.php/2255-2863/article/view/32508 Nowadays, precision livestock farming has revolutionized the livestock industry by providing it with devices and tools that significantly improve farm management. Among these technologies, smart collars have become a very common device due to their ability to register individual cow behavior in real time. These data provide the opportunity to identify behavioral patterns that can be analyzed to detect relevant conditions, such as estrus. Against this backdrop, this research work evaluates and compares the effectiveness of six one-class techniques for early estrus detection in dairy cows on intensive farms, based on data collected by a commercial smart collar. For this research, the behavior of 10 dairy cows from a cattle farm in Spain was monitored. Feature engineering techniques were applied to the data obtained by the collar in order to add new variables and enhance the dataset. Some techniques achieved F1-score values exceeding 95 % in certain cows.
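As a rough illustration of how such a one-class detector can be set up on collar data, with rolling features encoding the temporal context the entry goes on to describe, the sketch below uses synthetic readings and an Isolation Forest; the column names, the 8-hour window and the model choice are assumptions for illustration, not the authors' pipeline.

```python
# Hedged sketch: one-class estrus screening on synthetic collar data (not the authors' code).
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
idx = pd.date_range("2024-01-01", periods=24 * 90, freq="h")   # hourly readings, ~3 months
df = pd.DataFrame({"activity": rng.normal(50, 10, len(idx)),
                   "rumination": rng.normal(30, 5, len(idx)),
                   "feeding": rng.normal(20, 5, len(idx))}, index=idx)

# Temporal context: rolling statistics over the previous 8 hours (feature engineering step).
for col in ["activity", "rumination", "feeding"]:
    df[f"{col}_mean_8h"] = df[col].rolling("8h").mean()
    df[f"{col}_std_8h"] = df[col].rolling("8h").std()
df = df.dropna()

train = df.loc[:"2024-02-29"]       # period assumed free of estrus events: the "normal" class
test = df.loc["2024-03-01":]
model = IsolationForest(contamination=0.02, random_state=0).fit(train)
flags = model.predict(test)         # -1 marks hours whose behaviour deviates from normal
print(int((flags == -1).sum()), "candidate estrus hours flagged")
```

In practice one such model would be fitted per cow, which is also what the variability reported in the entry suggests.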
However, considerable variability in the results was observed among different animals, highlighting the need to develop individualized models for each cow. In addition, the results suggest that incorporating a temporal context of the animal's previous behavior is key to improving model performance. Specifically, it was found that when considering a period of 8 hours prior, the performance of the evaluated techniques was substantially improved. Álvaro Michelena, Esteban Jove, Óscar Fontenla-Romero, José-Luis Calvo-Rolle Copyright (c) 2024 Álvaro Michelena, Esteban Jove, Óscar Fontenla-Romero, José-Luís Calvo-Rolle https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/32508 Tue, 31 Dec 2024 00:00:00 +0100 Machine Learning based Prediction of Retinopathy Diseases using Segmented Images https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31737 Diabetes, hypertension, obesity, glaucoma, macular degeneration, etc. are among the most severe and widespread diseases today. Moreover, these diseases are the basis of several other fatal diseases. Early-stage identification and diagnosis of these diseases can prevent blindness and other life threats. The blood vessels of the retina contain information about these diseases. Therefore, feature extraction from retinal vessels and classification of these diseases are essential. Different approaches exist today to classify these diseases, but they use RGB retinal images, due to which their performance is relatively low. In this paper, we have proposed an approach based on machine learning that uses segmented retinal images generated by different efficient methods to classify diabetic retinopathy, glaucoma and multi-class diseases. We have conducted exhaustive experiments on a large number of images from the DRIVE, STARE and HRF datasets. The accuracy of the proposed approach is 90.90%, 95.00%, and 92.90% for diabetic retinopathy, glaucoma, and multi-class diseases, respectively, which is better than most approaches in this area. Sushil Kumar Saroj Copyright (c) 2024 Sushil Kumar Saroj https://creativecommons.org/licenses/by-nc-sa/4.0 https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31737 Mon, 23 Dec 2024 00:00:00 +0100 Filtering Approaches and Mish Activation Function Applied on Handwritten Chinese Character Recognition https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31218 Handwritten Chinese Characters (HCC) have recently received much attention as a global means of exchanging information and knowledge. The start of the information age has increased the number of paper documents that must be electronically saved and shared. The recognition accuracy of online handwritten Chinese characters has reached its limit, as online characters are more straightforward than offline characters. Furthermore, online character recognition enables stronger involvement and flexibility than offline characters. Deep learning techniques, such as convolutional neural networks (CNN), have superseded conventional Handwritten Chinese Character Recognition (HCCR) solutions, as proven in image identification. Nonetheless, because of the large number of comparable characters and styles, there is still an opportunity to improve the present recognition accuracy by adopting different activation functions, including Mish, Sigmoid, Tanh, and ReLU.
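For reference, the activation functions named above have standard closed forms; the short NumPy sketch below shows them side by side (a generic illustration, not the study's CNN code).

```python
# Standard activation functions: Mish(x) = x * tanh(softplus(x)), plus Sigmoid, Tanh and ReLU.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

def mish(x):
    return x * np.tanh(np.log1p(np.exp(x)))    # softplus(x) = log(1 + e^x)

x = np.linspace(-4.0, 4.0, 9)
for name, f in [("sigmoid", sigmoid), ("tanh", np.tanh), ("relu", relu), ("mish", mish)]:
    print(f"{name:>7}:", np.round(f(x), 3))    # Mish is smooth and slightly non-monotonic near zero
```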
The main goal of this study is to apply the filter and activation function that have the most positive impact on the recognition system, in order to improve the performance of the recognition CNN model. In this study, we applied different filtering techniques and activation functions in a CNN to offline Chinese characters to understand their effect on the model's recognition outcome. A two-layer CNN is proposed, given that it achieves comparable performance with fewer layers. The results demonstrate that the Wiener filter has better recognition performance than the median and average filters. Furthermore, the Mish activation function performs better than the Sigmoid, Tanh, and ReLU functions. Zhong Yingna, Kauthar Mohd Daud, Kohbalan Moorthy, Ain Najiha Mohamad Nor Copyright (c) 2024 Yingna Zhong, Kauthar Mohd Daud, Kohbalan Moorthy, Ain Najiha Mohamad Nor https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31218 Fri, 01 Nov 2024 00:00:00 +0100 ML-Based Quantitative Analysis of Linguistic and Speech Features Relevant in Predicting Alzheimer's Disease https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31625 Alzheimer's disease (AD) is a severe neurological condition that affects numerous people globally with detrimental consequences. Detecting AD early is crucial for prompt treatment and effective management. This study presents a novel approach for detecting and classifying six types of cognitive impairment using speech-based analysis: probable AD, possible AD, mild cognitive impairment (MCI), memory impairments, vascular dementia, and control. The method employs speech data from DementiaBank's Pitt Corpus, which is preprocessed and analyzed to extract pertinent acoustic features. These features are subsequently used to train five machine learning algorithms, namely k-nearest neighbors (KNN), decision tree (DT), support vector machine (SVM), XGBoost, and random forest (RF). The effectiveness of every algorithm is assessed through 10-fold cross-validation. According to the research findings, the suggested speech-based method obtains a total accuracy of 75.59% on the six-class categorization problem. Among the five machine learning algorithms tested, the XGBoost classifier showed the highest accuracy of 75.59%. These findings indicate that speech-based approaches can potentially be valuable for detecting and classifying cognitive impairment, including AD. The paper also explores robustness testing, evaluating the algorithms' performance under various circumstances, such as noise variability, voice quality changes, and accent variations. The proposed approach can be developed into a noninvasive, cost-effective, and accessible diagnostic tool for the early detection and management of cognitive impairment. Tripti Tripathi, Rakesh Kumar Copyright (c) 2023 Tripti Tripathi https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31625 Wed, 05 Jun 2024 00:00:00 +0200 An Efficient Approach to Extract and Store Big Semantic Web Data Using Hadoop and Apache Spark GraphX https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31506 The volume of data is growing at an astonishingly high speed. Traditional techniques for storing and processing data, such as relational and centralized databases, have become inefficient and time-consuming. Linked data and the Semantic Web make internet data machine-readable.
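Returning briefly to the Alzheimer's-speech entry above, its evaluation protocol (five classifiers compared under 10-fold cross-validation) can be illustrated with the scikit-learn sketch below; the synthetic features merely stand in for the extracted acoustic features, and the xgboost package is assumed to be installed.

```python
# Illustrative 10-fold cross-validated comparison of the five classifiers named in the entry above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier  # assumed available

# Synthetic stand-in for acoustic features with six cognitive-status classes.
X, y = make_classification(n_samples=300, n_features=40, n_informative=10,
                           n_classes=6, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "XGBoost": XGBClassifier(eval_metric="mlogloss"),
    "RF": RandomForestClassifier(random_state=0),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: {scores.mean():.4f} +/- {scores.std():.4f}")
```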
Because of the increasing volume of linked data and Semantic Web data, storing and working with them using traditional approaches is no longer sufficient and strains limited hardware resources. To solve this problem, storing datasets using distributed and clustered methods is essential. Hadoop can store such datasets because it can use many hard disks for distributed data clustering; Apache Spark can be used for parallel data processing more efficiently than Hadoop MapReduce because Spark uses memory instead of the hard disk. In this paper, Semantic Web data has been stored and processed using Apache Spark GraphX and the Hadoop Distributed File System (HDFS). Spark's in-memory processing and distributed computing enable efficient data analysis of massive datasets stored in HDFS. Spark GraphX allows graph-based Semantic Web data processing. The fundamental objective of this work is to provide a way of efficiently combining Semantic Web and big data technologies to utilize their combined strengths in data analysis and processing. First, the proposed approach uses the SPARQL query language to extract Semantic Web data from DBpedia datasets. DBpedia is a huge, publicly available Semantic Web dataset built from Wikipedia. Second, the extracted Semantic Web data is converted to the GraphX data format, and vertex and edge files are generated. The conversion process is implemented using Apache Spark GraphX. Third, both vertex and edge tables are stored in HDFS and are available for visualization and analysis operations. Furthermore, the proposed technique improves data storage efficiency by reducing the amount of storage space by about half when converting from Semantic Web data to a GraphX file, the RDF size being around 133.8 and the GraphX size around 75.3. Adopting the parallel data processing provided by Apache Spark in the proposed technique reduces the required data processing and analysis time. This article concludes that Apache Spark GraphX can enhance Semantic Web and Big Data technologies. We minimize data size and processing time by converting Semantic Web data to the GraphX format, enabling efficient data management and seamless integration. Wria Mohammed Salih Mohammed, Alaa Khalil Jumaa Copyright (c) 2023 Wria Mohammed Salih Mohammed, Alaa Khalil Jumaa https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31506 Wed, 05 Jun 2024 00:00:00 +0200 Optimizing Credit Card Fraud Detection: A Genetic Algorithm Approach with Multiple Feature Selection Methods https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31533 In today's cashless society, the increasing threat of credit card fraud demands our attention. To protect our financial security, it is crucial to develop robust and accurate fraud detection systems that stay one step ahead of the fraudsters. This study dives into the realm of machine learning, evaluating the performance of various algorithms - logistic regression (LR), decision tree (DT), and random forest (RF) - in detecting credit card fraud. Taking innovation a step further, the study introduces the integration of a genetic algorithm (GA) for feature selection and optimization alongside the LR, DT, and RF models. LR achieved an accuracy of 99.89 %, DT outperformed it with an accuracy of 99.936 %, and RF yielded a high accuracy of 99.932 %, whereas GA-RF (a5) achieved an accuracy of 99.98 %.
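A minimal sketch of the GA-plus-random-forest idea described above is given below: a genetic algorithm evolves binary feature masks and scores each mask by the cross-validated F1 of a random forest. The synthetic imbalanced dataset, population size and mutation rate are arbitrary choices for illustration, not the study's settings.

```python
# Hedged sketch of genetic-algorithm feature selection wrapped around a random forest.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic, imbalanced stand-in for a credit-card transaction dataset.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           weights=[0.97, 0.03], random_state=0)

def fitness(mask):
    """Cross-validated F1 of a random forest restricted to the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3, scoring="f1").mean()

n_feat, pop_size, n_gen = X.shape[1], 12, 5
pop = rng.integers(0, 2, size=(pop_size, n_feat))        # random binary feature masks
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]   # selection: keep the best half
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = parents[rng.integers(0, len(parents), size=2)]
        cut = rng.integers(1, n_feat)                     # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_feat) < 0.05                  # bit-flip mutation
        child[flip] = 1 - child[flip]
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[int(np.argmax([fitness(ind) for ind in pop]))]
print("selected feature indices:", np.flatnonzero(best))
```

The same wrapper could score LR or DT models in place of the random forest.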
Ultimately, the findings of this study fuel the development of more potent fraud detection systems within the realm of financial institutions, safeguarding the integrity of transactions and ensuring peace of mind for cardholders. Sunil Kumar Patel, Devina Panday Copyright (c) 2023 Sunil Kumar Patel https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31533 Mon, 02 Dec 2024 00:00:00 +0100 Sarcasm Text Detection on News Headlines Using Novel Hybrid Machine Learning Techniques https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31601 One of the biggest problems with sentiment analysis systems is sarcasm. The use of implicit, indirect language to express opinions is what gives it its complexity. Sarcasm can be represented in a number of ways, such as in headings, conversations, or book titles. Even for a human, recognizing sarcasm can be difficult because it conveys feelings that are diametrically contrary to the literal meaning expressed in the text. There are several different models for sarcasm detection. To identify sarcastic news headlines, this article assessed vectorization algorithms and several machine learning models. The recommended hybrid technique using the bag-of-words and TF-IDF feature vectorization models is compared experimentally to other machine learning approaches. In comparison to existing strategies, experiments demonstrate that the proposed hybrid technique with the bag-of-words vectorization model offers greater accuracy and F1-score results. Neha Singh, Umesh Chandra Jaiswal Copyright (c) 2023 Neha Singh https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31601 Wed, 05 Jun 2024 00:00:00 +0200 A Systematic Analysis of Various Word Sense Disambiguation Approaches https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31602 The process of finding the correct sense of a word in context is known as word sense disambiguation (WSD). In the field of natural language processing, WSD has become a growing research area. Over the decades, many researchers have proposed numerous approaches to WSD. The development of this field has had a significant impact on several Web-based applications, such as information retrieval and information extraction. This paper describes various approaches, such as knowledge-based, supervised, unsupervised and semi-supervised ones. This paper also describes the various applications of WSD, such as information retrieval, machine translation, speech recognition, computational advertising, text processing, classification of documents and biometrics. Chandra Ganesh, Sanjay K. Dwivedi, Satya Bhushan Verma, Manish Dixit Copyright (c) 2023 Satya Bhushan Verma https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31602 Mon, 02 Dec 2024 00:00:00 +0100 Computer-Aided Detection and Diagnosis of Breast Cancer: a Review https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31412 Statistics across different countries point to breast cancer being among the most severe cancers with a high mortality rate. Early detection is essential when it comes to reducing the severity and mortality of breast cancer. Researchers have proposed many computer-aided diagnosis/detection (CAD) techniques for this purpose.
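As a toy reading of the hybrid vectorization in the sarcasm-detection entry above, the sketch below simply concatenates bag-of-words counts with TF-IDF features before a single classifier; the headlines, labels and choice of logistic regression are invented for illustration and may differ from the authors' hybrid.

```python
# Toy hybrid of bag-of-words and TF-IDF features for sarcastic-headline classification.
from scipy.sparse import hstack
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression

headlines = ["local council approves new bus route",
             "area man heroically finishes entire to-do list",
             "city opens new library branch downtown",
             "nation thrilled to learn meeting could have been an email"]
labels = [0, 1, 0, 1]                      # 0 = regular, 1 = sarcastic (toy labels)

bow, tfidf = CountVectorizer(), TfidfVectorizer()
X = hstack([bow.fit_transform(headlines), tfidf.fit_transform(headlines)])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

new = ["report finds everything completely fine, probably"]
X_new = hstack([bow.transform(new), tfidf.transform(new)])
print(clf.predict(X_new))
```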
Many perform well (over 90% classification accuracy, sensitivity, specificity, and F1-score); nevertheless, there is still room for improvement. This paper reviews literature related to breast cancer and the challenges faced by the research community. It discusses the common stages of breast cancer detection/diagnosis using CAD models along with deep learning and transfer learning (TL) methods. In recent studies, deep learning models outperformed handcrafted feature extraction and classification, and the semantic segmentation of ROI images achieved good results. An accuracy of up to 99.8% has been obtained using these techniques. Furthermore, using TL, researchers combine the power of both pre-trained deep learning-based networks and traditional feature extraction approaches. Bhanu Prakash Sharma, Ravindra Kumar Purwar Copyright (c) 2023 Bhanu Prakash Sharma, Ravindra Kumar Purwar https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31412 Wed, 05 Jun 2024 00:00:00 +0200 Evaluation and Refinement of Elbow Recovery in Sports Medicine Using Smart Tracking Technologies https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31939 Elbow injuries, prevalent in various sports, significantly impact an athlete's performance and career longevity. Traditional rehabilitation methods, while effective to a degree, often miss the mark in terms of precision and personalised care. This gap necessitates a shift towards more sophisticated rehabilitation strategies. This study introduces a pioneering approach in elbow rehabilitation, utilising cutting-edge wearable tracking technologies along with the telerehabilitation paradigm. The focus is on increasing the precision and efficacy of rehabilitation processes. We developed a state-of-the-art wearable device, equipped with sophisticated sensors, to accurately track elbow joint movements, including position, rotation, and flexion, in real time. The device provides detailed data, allowing for nuanced diagnosis and effective monitoring during rehabilitation phases. These data are integrated into a specialised application, enabling comprehensive data analysis and the formulation of personalised rehabilitation plans with real-time feedback. The device demonstrated a notable improvement in the precision of monitoring and the effectiveness of rehabilitation strategies, allowing the measurement of the range of motion (RoM) within an error of ±3 degrees. A comparative analysis with traditional methods revealed significant advancements in accuracy, adherence to prescribed rehabilitation regimens, and overall speed of recovery. Sergio Alonso-Rollán, Sergio Márquez-Sánchez, Albano Carrera, Isaac M. S. Froes, Juan F. Blanco Copyright (c) 2024 Sergio Márquez Sánchez, Sergio Alonso Rollán, Albano Carrera, Juan F. Blanco, Juan Manuel Corchado https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31939 Tue, 31 Dec 2024 00:00:00 +0100 Investigation of the Role of Machine Learning and Deep Learning in Improving Clinical Decision Making for Musculoskeletal Rehabilitation https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31590 Musculoskeletal rehabilitation is an important aspect of healthcare that involves the treatment and management of injuries and conditions affecting the muscles, bones, joints, and related tissues.
Clinical decision-making in musculoskeletal rehabilitation involves complex and multifactorial considerations that can be challenging for healthcare professionals. Machine learning and deep learning techniques have the potential to enhance clinical judgement in musculoskeletal rehabilitation by providing insights into complex relationships between patient characteristics, treatment interventions, and outcomes. These techniques can help identify patterns and predict outcomes, allowing for personalized treatment plans and improved patient outcomes. In this investigation, we explore the various applications of machine learning and deep learning in musculoskeletal rehabilitation, including image analysis, predictive modelling, and decision support systems. We also examine the challenges and limitations associated with implementing these techniques in clinical practice and the ethical considerations surrounding their use. This investigation aims to highlight the potential benefits of using machine learning and deep learning in musculoskeletal rehabilitation and the need for further research to optimize their use in clinical practice. Madhu Yadav, Pushpendra Kumar Verma, Sumaiya Ansari Copyright (c) 2023 Pushpendra Verma https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31590 Wed, 12 Jun 2024 00:00:00 +0200 A Parallel Approach to Generate Sports Highlights from Match Videos Using Artificial Intelligence https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31615 Publishing highlights after a sports game is a common practice in the broadcast industry, providing viewers with a quick summary of the game and highlighting interesting events. However, the manual process of compiling all the clips into a single video can be time-consuming and cumbersome for video editors. Therefore, the development of an artificial intelligence (AI) model for sports highlight generation would significantly reduce the time and effort required to create these videos and improve the overall efficiency and accuracy of the process. This would benefit not only the broadcast industry but also sports fans who are looking for a quick and engaging way to catch up on the latest games. The objective of the paper is to develop an AI model that automates the process of sports highlight generation by taking a match video as input and returning the highlights of the game. The approach involves creating a list of words (wordnet) that indicate a highlight and comparing it with the commentary audio's transcript to find a similarity, making use of speech-to-text conversion, followed by some pre-processing of the extracted text, vectorization and, finally, measurement of the cosine similarity metric between the text and the wordnet. However, this process can also become time-consuming in the case of longer match videos, as the computation time of the AI models grows. Therefore, we used a parallel processing technique to reduce the time required by the AI models to compute the outputs on large match videos, which decreases the overall processing time and increases the overall throughput of the model. Arjun Sivaraman, Tarun Kannuchamy, Anmol Anand, Shivam Dheer, Devansh Mishra, Narayanan Prasanth, S. P. Raja Copyright (c) 2024 Arjun Sivaraman, Tarun Kannuchamy, Anmol Anand, Devansh Mishra, Shivam Dheer, Narayanan Prasanth N, S.P.
Raja https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31615 Tue, 31 Dec 2024 00:00:00 +0100 Resolving Covid-19 with Blockchain and AI https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31454 In the early months of 2020, a fast-spreading outbreak was brought about by the new virus SARS-CoV-2. The uncontrolled spread, which led to a pandemic, illustrated the healthcare system's slow response time to public health emergencies at that time. Blockchain technology was anticipated to be crucial in the effort to contain the COVID-19 pandemic. In that review, many potential blockchain applications were discovered; however, the majority of them were still in their infancy, and it could not yet be predicted how they could contribute to the fight against COVID-19 through the use of platforms, access kinds, and consensus algorithms. Modern innovations such as blockchain and artificial intelligence (AI) were shown to be promising in limiting the spread of a virus. Blockchain could specifically aid in the battle against pandemics by supporting early epidemic identification, assuring the ordering of clinical information, and maintaining a trustworthy medical chain during disease tracing. AI also offered smart forms of coronavirus diagnosis and therapy and supported the development of pharmaceuticals. Blockchain and AI software for epidemic and pandemic containment were analyzed in that research. First, a new conceptual strategy was proposed to tackle COVID-19 through an architecture that fused AI with blockchain. State-of-the-art research on the benefits of blockchain and AI in COVID-19 containment was then reviewed. Recent initiatives and use cases developed to tackle the coronavirus pandemic were also presented. A case study using federated intelligence for COVID-19 identification was also provided. Finally, attention was drawn to problems and prospective directions for further investigation into future coronavirus-like wide-ranging scenarios. Suyogita Singh, Satya Bhushan Verma Copyright (c) 2023 Suyogita Singh https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31454 Wed, 12 Jun 2024 00:00:00 +0200 Classification of Animal Behaviour Using Deep Learning Models https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31638 Damage to crops by animal intrusion is one of the biggest threats to crop yield. People who live near forest areas face a major issue with animals. One of the most significant tasks in deep learning is animal behaviour classification. This article focuses on the classification of distinct animal behaviours, such as sitting, standing and eating. The proposed system detects animal behaviours in real time using deep learning-based models, namely, convolutional neural networks and transfer learning. Specifically, 2D-CNN, VGG16 and ResNet50 architectures have been used for classification. 2D-CNN, VGG16 and ResNet50 have been trained on video frames displaying a range of animal behaviours. The real-time behaviour dataset contains 682 images of animals eating, 300 images of animals sitting and 1002 images of animals standing; therefore, there is a total of 1984 images in the training dataset. The experiment shows good accuracy results on the real-time dataset, achieving 99.43 % with ResNet50, compared to 2D-CNN, VGG19 and VGG16. M. Sowmya, M. Balasubramanian, K.
Vaidehi Copyright (c) 2024 M. Sowmya https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31638 Tue, 31 Dec 2024 00:00:00 +0100 Performance Research on Multi-Target Detection in Different Noisy Environments https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31710 This paper studies five classic multi-target detection methods in different noisy environments, including the Akaike information criterion, the ratio criterion, Rissanen's minimum description length, the Gerschgorin disk estimator and Eigen-increment threshold methods. Theoretical and statistical analyses of these methods have been carried out through simulations and a real-world water tank experiment. It is known that these detection approaches suffer from array errors and environmental noise. A new diagonal correction algorithm has been proposed to address the issue of degraded detection performance in practical systems due to array errors and environmental noise. This algorithm not only improves the detection performance of these multi-target detection methods at low signal-to-noise ratios (SNR), but also enhances robustness in high-SNR scenarios. Yuhong Yin, Qian Jia, Huiqi Xu, Guanglei Fu Copyright (c) 2024 Yuhong Yin, Qian Jia, Huiqi Xu, Guanglei Fu https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31710 Wed, 27 Nov 2024 00:00:00 +0100 Bus Ridership Prediction and Scenario Analysis through ML and Multi-Agent Simulations https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31866 This paper introduces an innovative approach to predicting bus ridership and analysing transportation scenarios through a fusion of machine learning (ML) techniques and multi-agent simulations. Utilising a comprehensive dataset from an urban bus system, we employ ML models to accurately forecast passenger flows, factoring in diverse variables such as weather conditions. The novelty of our method lies in the application of these predictions to generate detailed simulation scenarios, which are meticulously executed to evaluate the efficacy of public transportation services. Our research uniquely demonstrates the synergy between ML predictions and agent-based simulations, offering a robust tool for optimising urban mobility. The results reveal critical insights into resource allocation, service efficiency, and potential improvements in public transport systems. This study significantly advances the field by providing a practical framework for transportation providers to optimise services and address long-term challenges in urban mobility. Pasqual Martí, Alejandro Ibáñez, Vicente Julian, Paulo Novais, Jaume Jordán Copyright (c) 2024 Pasqual Martí, Alejandro Ibáñez, Vicente Julian, Paulo Novais, Jaume Jordán https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31866 Tue, 31 Dec 2024 00:00:00 +0100 Systematic Literature Review of Machine Learning Models for Detecting DDoS Attacks in IoT Networks https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31919 The escalating integration of Internet of Things (IoT) devices has led to a surge in data generation within networks, consequently elevating the vulnerability to Distributed Denial of Service (DDoS) attacks. Detecting such attacks in IoT Networks is critical, and Machine Learning (ML) models have shown efficacy in this realm.
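To make the forecasting half of the bus-ridership entry above more concrete, the sketch below trains a gradient-boosting regressor on synthetic hourly records with calendar and weather features; the column names and data-generating process are invented, and the study's actual models and variables may differ.

```python
# Hedged sketch: ridership forecasting from calendar and weather features (synthetic data).
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "hour": rng.integers(5, 24, n),
    "weekday": rng.integers(0, 7, n),
    "line_id": rng.integers(1, 12, n),
    "temperature": rng.normal(18, 8, n),
    "precipitation": rng.exponential(1.0, n),
})
# Invented demand pattern: weekday peaks, fewer boardings in rain, plus noise.
df["boardings"] = (40 + 3 * (df["weekday"] < 5) * np.sin(df["hour"] / 24 * 2 * np.pi)
                   - 2 * df["precipitation"] + rng.normal(0, 5, n)).clip(lower=0)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="boardings"), df["boardings"], test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("MAE:", round(mean_absolute_error(y_test, model.predict(X_test)), 2))
# Predicted boardings per hour and line can then seed demand in the agent-based simulation scenarios.
```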
This study conducts a systematic review of literature from 2018 to 2023, focusing on DDoS attack detection in IoT Networks using deep learning techniques. Employing the PRISMA methodology, the review identifies and evaluates studies, synthesizing key findings. It highlights that incorporating deep learning significantly enhances DDoS attack detection precision and efficiency, achieving detection rates between 94 % and 99 %. Despite progress, challenges persist, such as limited training data and IoT device processing constraints with large data volumes. This review underscores the importance of addressing these challenges to improve DDoS attack detection in IoT Networks. The research's significance lies in IoT's growing importance and security concerns. It contributes by showcasing current state-of-the-art DDoS detection through deep learning while outlining persistent challenges. Recognizing deep learning's effectiveness sets the stage for refining IoT security protocols, and moreover, by identifying challenges, the research informs strategies to enhance IoT security, fostering a resilient framework. Marcos Luengo Viñuela, Jesús-Ángel Román Gallego Copyright (c) 2024 Jesús Ángel Román Gallego https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31919 Tue, 31 Dec 2024 00:00:00 +0100 Evaluating the Effectiveness of Zero Trust Architecture in Protecting Against Advanced Persistent Threats https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31611 As a paradigm shift in network security, the idea of Zero Trust Architecture has attracted a lot of attention recently. This study intends to investigate the assessment and application of Zero Trust Architecture in business networks. Network segmentation, continuous authentication, least privilege access, and micro-segmentation are some of the basic ideas and elements of Zero Trust Architecture that are covered in this research. By taking a comprehensive approach to network security, the study evaluates how well Zero Trust Architecture mitigates security risks and shrinks the attack surface. It looks into the difficulties and factors to be taken into account when adopting Zero Trust Architecture, including scalability, user experience, and operational complexity. To shed light on the real-world application of Zero Trust Architecture, the paper also investigates empirical data and case studies from real-world scenarios. The influence of Zero Trust Architecture on operational processes and network performance is also covered, along with recommended practices and various deployment strategies. Additionally, the research assesses how well Zero Trust Architecture conforms to regulatory standards, compliance needs, and existing security frameworks. The results of this study help us comprehend Zero Trust Architecture and its possible advantages and disadvantages. By offering a thorough evaluation framework and useful suggestions for effective implementation, it is helpful to organizations looking to adopt Zero Trust Architecture. The study's findings add to the corpus of information on Zero Trust Architecture and its role in strengthening network security in the face of evolving cyber threats.
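As a toy illustration of two of the principles listed in the Zero Trust entry above, least-privilege access and continuous, per-request verification, the following sketch encodes them as an explicit policy-decision function; the request fields and rules are invented for illustration and are not drawn from the study.

```python
# Toy Zero Trust policy check: every request is re-verified, and access is least-privilege.
from dataclasses import dataclass

PERMISSIONS = {"analyst": {"read:reports"}, "admin": {"read:reports", "write:config"}}

@dataclass
class Request:
    role: str
    action: str
    mfa_verified: bool
    device_compliant: bool
    network_segment: str

def authorize(req: Request) -> bool:
    # Never trust by network location alone: each request re-checks identity and device posture.
    if not (req.mfa_verified and req.device_compliant):
        return False
    # Least privilege: the role must explicitly hold the requested permission.
    if req.action not in PERMISSIONS.get(req.role, set()):
        return False
    # Micro-segmentation (example rule): configuration changes only from the management segment.
    if req.action == "write:config" and req.network_segment != "mgmt":
        return False
    return True

print(authorize(Request("admin", "write:config", True, True, "mgmt")))    # True
print(authorize(Request("admin", "write:config", True, True, "office")))  # False
```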
Pushpendra Kumar Verma, Bharat Singh, Preety, Shubham Kumar Sharma, Rakesh Prasad Joshi Copyright (c) 2024 Pushpendra Verma https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31611 Mon, 02 Dec 2024 00:00:00 +0100 Federated Learning in Data Privacy and Security https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31647 Federated learning (FL) has been a rapidly growing topic in recent years. The biggest concerns in federated learning are data privacy and cybersecurity. There are many algorithms that federated models have to work with to achieve greater efficiency, security, quality and effective learning. This paper focuses on algorithms such as the federated averaging algorithm, differential privacy and federated stochastic variance reduced gradient (FSVRG). To achieve data privacy and security, this research paper presents the main data statistics with the help of graphs, visual images and design models. Later, data security in federated learning models is researched and case studies are presented to identify risks and possible solutions. Detecting security gaps is a challenge for many companies. This paper presents solutions for the identification of security-related issues, which results in a decrease in time complexity and an increase in accuracy. This research sheds light on the topics of federated learning and data security. Dokuru Trisha Reddy, Haripriya Nandigam, Sai Charan Indla, S. P. Raja Copyright (c) 2024 Trisha Reddy Dokuru, Haripriya Nandigam, Sai Charan Indla, S.P. Raja https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31647 Tue, 31 Dec 2024 00:00:00 +0100 CyberUnits Bricks: An Implementation Study of a Class Library for Simulating Nonlinear Biological Feedback Loops https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31762 Feedback loops and other types of information processing structures play a pivotal role in maintaining the internal milieu of living organisms. Although methods of biomedical cybernetics and systems biology help to translate between the structure and function of processing structures, computer simulations are necessary for studying nonlinear systems and the full range of dynamic responses of feedback control systems. Currently available approaches for modelling and simulation basically comprise domain-specific environments, toolkits for computer algebra systems and purpose-built custom software written in universal programming languages. All of these approaches have certain weaknesses. We therefore developed a cross-platform class library that provides versatile building bricks for writing computer simulations in a universal programming language (CyberUnits Bricks). It supports the definition of models, the simulative analysis of linear and nonlinear systems in the time and frequency domain and the plotting of block diagrams. We compared several programming languages that are commonly used in biomedical research (S in the R implementation and Python) or that are optimized for speed (Swift, C++ and Object Pascal). In benchmarking experiments with two prototypical feedback loops, we found the implementations in Object Pascal to deliver the fastest results. CyberUnits Bricks is available as open-source software that has been optimised for Embarcadero Delphi and the Lazarus IDE for Free Pascal. Johannes W. Dietrich, Nina Siegmar, Jonas R. Hojjati, Oliver Gardt, Bernhard O.
Boehm Copyright (c) 2024 PD Dr. med. Johannes W. Dietrich, Nina Siegmar, Jonas R. Hojjati, Oliver Gardt, Bernhard O. Boehm https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31762 Tue, 27 Aug 2024 00:00:00 +0200 Performance Analysis of Software-Defined Networking in Band Controllers for Different Network Topologies https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31674 With the great increase in the complexity of networking, software-defined networks have been developed to help administrators operate and configure network services with controllers such as Pox, Ryu, Floodlight and OpenDaylight. Those controllers offer an appropriate platform for applications that need high bandwidth. In this paper, several SDN controllers have been evaluated using in-band communication mode with different network topologies to check the performance of the in-band controllers. Some controllers, such as Pox, cannot operate in in-band mode. The controllers were evaluated with Mininet using the iperf and ping networking tools, measuring the packet round-trip time (RTT) latency and comparing the throughput of the three topologies. The results of the experiments showed that in-band controllers can be implemented and perform efficiently. Results showed that OpenDaylight has the lowest RTT, so it is best for applications that need a fast response. Ryu has the greatest bandwidth, so it is best for applications that need high bandwidth. Floodlight comes third, after OpenDaylight and Ryu. Hussein Ali Al-Gubouri Copyright (c) 2024 Hussein Al-Gbouri https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31674 Tue, 31 Dec 2024 00:00:00 +0100 An Ensemble Based Machine Learning Classification for Automated Glaucoma Detection https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31640 Glaucoma is an incurable eye disease that causes sight degeneration and is the fourth leading cause of vision impairment as per the World Report on Vision 2019. Several techniques exist for the screening, detection, treatment, and rehabilitation of glaucoma. However, they are still not sufficient to control this disease and prevent further vision loss. Studies done on the prevalence of glaucoma have reported a high proportion of undiagnosed patients. Late diagnosis is related to an increased risk of glaucoma-associated visual disability. For the effective management or prevention of blindness, the importance of early diagnosis of glaucoma cannot be overstated. This paper proposes an approach for effectively extracting the key features of colour retinal fundus images and categorizing them as normal or glaucomatous. The novel approach of an ensemble machine learning technique has been implemented with an Automated Weightage Based Voting (AWBV) algorithm. The paper evaluates the performance of Probabilistic Neural Networks (PNN), K-Nearest Neighbour (KNN), Support Vector Machines (SVM), Naïve Bayes (NB) and Logistic Regression (LR) as individual and ensemble classifiers. It includes the extraction of fused features from various retinal fundus image datasets.
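One plausible reading of such weightage-based voting, sketched below, is to weight each base classifier's vote by its standalone validation accuracy; this is only an illustration, not necessarily the paper's AWBV algorithm (PNN is omitted because scikit-learn has no implementation), and synthetic features stand in for the fused fundus-image features.

```python
# Hedged sketch: validation-accuracy-weighted soft voting over four of the five named classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=25, n_informative=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

base = [("knn", KNeighborsClassifier()), ("svm", SVC(probability=True)),
        ("nb", GaussianNB()), ("lr", LogisticRegression(max_iter=1000))]

# "Automated" weights: each model's standalone validation accuracy.
weights = [accuracy_score(y_val, m.fit(X_tr, y_tr).predict(X_val)) for _, m in base]

ensemble = VotingClassifier(estimators=base, voting="soft", weights=weights).fit(X_tr, y_tr)
print("ensemble accuracy:", round(ensemble.score(X_val, y_val), 4))
```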
The proposed Combined Features Fused Classifier (CF2C) model has had a remarkable performance with the IEEE DataPort image dataset, achieving an ensembled prediction accuracy of 96.25 %, a sensitivity of 95.83 % and a specificity of 96.67 %, which are better results than those of the five classifiers individually. Digvijay J. Pawar, Yuvraj K. Kanse, Suhas S. Patil Copyright (c) 2024 Digvijay Pawar https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31640 Tue, 31 Dec 2024 00:00:00 +0100 Optimal Positioning and Sizing of Distributed Energy Sources in Distribution System Using Hunter-Prey Optimizer Algorithm https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31639 The integration of distributed generation (DG) based on renewable energy (RE) in distribution power networks (DPN) has become indispensable for reducing power losses and voltage deviation along the DPN. Typical DGs are placed adjacent to the load in the DPN and locally distribute adequate active and reactive power. However, the appropriate placement of DG in the DPN, at the right location and size, is essential to achieve the desired objectives. In this paper, DG is optimized in a radial DPN with the aid of a recent bio-inspired hunter-prey optimization (HPO) algorithm. HPO is a population-based optimization algorithm that mimics the hunting action of an animal. The HPO algorithm evades local-optimum stagnation and reaches the optimal solution rapidly. HPO optimizes solar photovoltaic (PV) and wind turbine (WT) DG systems to minimize multi-objective functions (MOFs) including active power loss (APL) and voltage deviation (VD), and to enhance voltage stability (VS). An optimized solution has been obtained for a standard IEEE 69-bus radial DPN, and the optimized simulation result of HPO has been compared with other optimization algorithms with the aim of assessing its effectiveness. The optimized PV and WT DG integration via the proposed HPO algorithm has yielded a power loss reduction of 67.10 % and 90.4 %, respectively. Furthermore, a considerable enhancement in bus voltage and voltage stability has been seen in the radial DPN after the inclusion of DG. P. Rajakumar, M. Senthil Kumar, K. Karunanithi, S. Vinoth John Prakash, P. Baburao, S. P. Raja Copyright (c) 2024 Rajakumar Planisamy, Senthil Kumar M, Karunanithi K, Vinoth John Prakash S, Baburao P, S.P. Raja https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31594 Tue, 31 Dec 2024 00:00:00 +0100 Harigeeta: Cic Mechanism with Euclidean Steiner Tree for Service Latency Prediction in Delay-Sensitive Cloud Services https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31594 Data establishment and resource provision are the most crucial tasks in the data center. To achieve minimum service latency, it is required to have a balance between the virtual machine and the physical machine for proper execution of any query in the cloud data center. Cloud services have a huge market in world trade. These services have a large impact on every field, including research. Latency is a major problem in the growth of the cloud market in real-time scenarios. Online trade, marketing and banking have a large market of cloud services, which require minimum latency in real-time responses; otherwise the whole market would be destroyed. Latency prediction plays a crucial role in managing the load on the data center.
To perfectly maintain a request waiting queue, it is required to predict accurate latency between the virtual machines in the data center. If any approach can predict accurate latency in the data center for any particular request, then it can perfectly manage the waiting queue for the cloud data center. Thus, prediction plays a crucial role in reducing latency in the execution of any request to the cloud data center. This article presents an online latency prediction approach for VMs to improve load balancing. A Euclidean Circle Steiner Tree point approach is proposed. Results show a comparison with existing mechanisms, achieving 8-12 % higher accuracy in latency prediction. Rahul Kumar Sharma, Sarvpal Singh Copyright (c) 2024 Rahul Kumar Sharma https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31594 Tue, 31 Dec 2024 00:00:00 +0100 A Review on Covid-19 Detection Using Artificial Intelligence from Chest CT Scan Slices https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31528 The outbreak of COVID-19, a contagious respiratory disease, has had a significant impact on people worldwide. To prevent its spread, there is an urgent need for an easily accessible, fast, and cost-effective diagnostic solution. According to studies, COVID-19 is frequently accompanied by coughing. Therefore, the identification and classification of cough sounds can be a promising method for rapidly and efficiently diagnosing the disease. The COVID-19 epidemic has resulted in a worldwide health crisis, and stopping the disease's spread depends on a quick and precise diagnosis. COVID-19 has been detected using medical imaging modalities such as chest X-rays and computed tomography (CT) scans due to their non-invasive nature and accessibility. This research provides an in-depth examination of deep learning-based strategies for recognising COVID-19 in medical images. The benefits and drawbacks of various deep learning approaches and their applications in COVID-19 detection are discussed. The study also examines publicly available datasets and benchmarks for evaluating deep learning model performance. Furthermore, the limitations and future research prospects for using deep learning in COVID-19 detection are discussed. This survey's goal is to offer a comprehensive overview of the current state of advancement in deep learning-based COVID-19 detection using medical images. This can aid researchers and healthcare professionals in selecting appropriate approaches for an effective diagnosis of the disease. Copyright (c) 2024 Dhanashri Mali https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31528 Wed, 27 Nov 2024 00:00:00 +0100 Deep and Machine Learning for Acute Lymphoblastic Leukemia Diagnosis: A Comprehensive Review https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31420 The medical condition known as acute lymphoblastic leukemia (ALL) is characterized by an excess of immature lymphocyte production, and it can affect people across all age ranges. Detecting it at an early stage is extremely important to increase the chances of successful treatment. Conventional diagnostic techniques for ALL, such as bone marrow and blood tests, can be expensive and time-consuming. They may be less useful in places with scarce resources. The primary objective of this research is to investigate automated techniques that can be employed to detect ALL at an early stage.
This analysis covers both machine learning (ML) models, such as support vector machine (SVM) and random forest (RF), and deep learning (DL) algorithms, including convolutional neural networks (CNN), AlexNet, ResNet50, ShuffleNet, MobileNet and RNN. The effectiveness of these models in detecting ALL is evident through their ability to enhance accuracy and minimize human errors, which is essential for early diagnosis and successful treatment. In addition, the study also highlights several challenges and limitations in this field, including the scarcity of data available for ALL types, and the significant computational resources required to train and operate deep learning models. Mohammad Faiz, Bakkanarappa Gari Mounika, Mohd Akbar, Swapnita Srivastava Copyright (c) 2023 Mohammad Faiz, Bakkanarappa Gari Mounika, Ramandeep Snadhu https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31420 Mon, 15 Jul 2024 00:00:00 +0200 Hybrid Text Embedding and Evolutionary Algorithm Approach for Topic Clustering in Online Discussion Forums https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31448 Leveraging discussion forums as a medium for information exchange has led to a surge in data, making topic clustering in these platforms essential for understanding user interests, preferences, and concerns. This study introduces an innovative methodology for topic clustering by combining the text embedding techniques Latent Dirichlet Allocation (LDA) and BERT, trained on a singular autoencoder. Additionally, it proposes an amalgamation of K-Means and Genetic Algorithms for clustering topics within triadic discussion forum threads. The proposed technique begins with a preprocessing stage to clean and tokenize textual data, which is then transformed into a vector representation using the hybrid text embedding method. Subsequently, the K-Means algorithm clusters these vectorized data points, and Genetic Algorithms optimize the parameters of the K-Means clustering. We assess the efficacy of our approach by computing cosine similarities between topics and comparing performance against coherence and graph visualization. The results confirm that the hybrid text embedding methodology, coupled with evolutionary algorithms, enhances the quality of topic clustering across various discussion forum themes. This investigation contributes significantly to the development of effective methods for clustering discussion forums, with potential applications in diverse domains, including social media analysis, online education, and customer response analysis. Ibrahim Bouabdallaoui, Fatima Guerouate, Mohammed Sbihi Copyright (c) 2023 Ibrahim Bouabdallaoui, Fatima Guerouate, Mohammed Sbihi https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31448 Tue, 27 Aug 2024 00:00:00 +0200 Optimized Deep Belief Network for Efficient Fault Detection in Induction Motor https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31616 Numerous industrial applications depend heavily on induction motors, and their malfunction causes considerable financial losses. Induction motors in industrial processes have recently expanded dramatically in size, and the complexity of defect identification and diagnostics for such systems has increased as well.
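A much-simplified sketch of the hybrid-embedding clustering idea in the topic-clustering entry above is shown below: LDA topic proportions are concatenated with a dense text representation (a TF-IDF-plus-SVD stand-in for BERT) and clustered with K-Means, with the cluster count chosen by silhouette score in place of the genetic-algorithm search; the example posts are invented.

```python
# Hedged sketch: hybrid LDA + dense-embedding vectors clustered with K-Means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation, TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics import silhouette_score

posts = ["how do I reset my password", "password reset link not working",
         "best budget laptop for students", "laptop recommendations under 500",
         "course enrollment deadline question", "when does enrollment close"]

lda_vecs = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(
    CountVectorizer().fit_transform(posts))
dense_vecs = TruncatedSVD(n_components=3, random_state=0).fit_transform(
    TfidfVectorizer().fit_transform(posts))
X = np.hstack([lda_vecs, dense_vecs])          # hybrid representation of each post

best_k = max(range(2, 5), key=lambda k: silhouette_score(
    X, KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)))
labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(X)
print(best_k, labels)
```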
As a result, research has concentrated on developing novel methods for the quick and accurate identification of induction motor problems. In response to these needs, this paper provides an optimised algorithm for analysing the performance of an induction motor. To analyse the operation of induction motors, an enhanced methodology based on Deep Belief Networks (DBN) is introduced for recovering properties from the sensor-acquired vibration signals. Multiple Restricted Boltzmann Machine (RBM) units are stacked to build the DBN model, which is then trained using an ant colony algorithm. An innovative method of feature extraction for autonomous fault analysis in manufacturing is provided by experimental investigations utilising vibration signals, and an overall accuracy of 99.8% is obtained, which confirms the efficiency of the DBN architecture for feature extraction. Pradeep Katta, K. Karunanithi, S. P. Raja, S. Ramesh, S. Vinoth John Prakash, Deepthi Joseph Copyright (c) 2023 Pradeep Katta, K. Karunanithi, S.P. Raja, S. Ramesh, Deepthi Joseph https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31616 Wed, 24 Jul 2024 00:00:00 +0200 CNN Based Automatic Speech Recognition: A Comparative Study https://revistas.usal.es/cinco/index.php/2255-2863/article/view/29191 Recently, one of the most common approaches used in speech recognition is deep learning. The most advanced results have been obtained with speech recognition systems created using convolutional neural networks (CNN) and recurrent neural networks (RNN). Since CNNs can capture local features effectively, they are applied to tasks with relatively short-term dependencies, such as keyword detection or phoneme-level sequence recognition. This paper presents the development of a deep learning and speech command recognition system. The Google Speech Commands Dataset has been used for training. The dataset contains 65,000 one-second-long utterances of 30 short English words. 80 % of the dataset has been used for training and 20 % for testing. The dataset consists of one-second voice commands that have been converted into spectrograms and used to train different artificial neural network (ANN) models. Various variants of CNN are used in deep learning applications. The performance of the proposed model has reached 94.60 %. Hilal Ilgaz, Beyza Akkoyun, Özlem Alpay, M. Ali Akcayol Copyright (c) 2023 Özlem Alpay https://creativecommons.org/licenses/by-nc-nd/4.0/ https://revistas.usal.es/cinco/index.php/2255-2863/article/view/29191 Tue, 27 Aug 2024 00:00:00 +0200
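To illustrate the kind of model the last entry describes, the sketch below trains a small Keras CNN on fake log-mel spectrograms of one-second commands with a 30-way softmax output and an 80/20 split (used here as a validation split); the input shape, layer sizes and training settings are illustrative assumptions, not the paper's architecture, and TensorFlow is assumed to be installed.

```python
# Hedged sketch: a small 2D CNN over (time x frequency) spectrograms of one-second commands.
import numpy as np
from tensorflow.keras import layers, models

num_classes = 30
x = np.random.rand(256, 98, 40, 1).astype("float32")   # fake log-mel spectrograms (98 frames x 40 bins)
y = np.random.randint(0, num_classes, 256)              # fake command labels

model = models.Sequential([
    layers.Input(shape=(98, 40, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x, y, validation_split=0.2, epochs=2, batch_size=32)
```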