https://revistas.usal.es/cinco/index.php/2255-2863/issue/feedADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal2024-11-29T12:30:37+01:00Juan M. CORCHADOadcaij@usal.esOpen Journal Systems<p dir="ltr">The <a title="adcaij" href="http://adcaij.usal.es" target="_blank" rel="noopener">Advances in Distributed Computing and Artificial Intelligence Journal</a> (ISSN: 2255-2863) is an open access (OA) journal that publishes articles which contribute new results associated with distributed computing and artificial intelligence, and their application in different areas, such as deep learning, generative AI, electronic commerce, smart grids, IoT, distributed computing, and so on. These technologies are changing constantly as a result of the large research and technical effort being undertaken in both universities and businesses. Authors are invited to contribute to the journal by submitting articles that present research results, projects, surveys, and industrial experiences describing significant advances in the areas of computing.</p> <p dir="ltr">ADCAIJ focuses on the exchange of ideas between scientists and technicians. Both academic and business areas are essential to facilitate the development of systems that meet the demands of today's society. The journal is supported by the research group <a title="bisite" href="http://bisite.usal.es/en/research/research-lines" target="_blank" rel="noopener">BISITE</a>.</p> <p dir="ltr">The journal commenced publication in 2012 with quarterly periodicity and has published more than 300 peer-reviewed articles. 
All articles are written in scientific English.</p> <p dir="ltr">From volume 12 (2023) onwards, the journal is published in continuous mode in order to improve the visibility and dissemination of scientific knowledge.</p> <p dir="ltr">ADCAIJ is indexed in Scopus and in the Emerging Sources Citation Index (ESCI) of Web of Science, in the category COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. It also appears in other directories and databases such as DOAJ, ProQuest, Scholar, WorldCat, Dialnet, Sherpa ROMEO, Dulcinea, UlrichWeb, BASE, Academic Journals Database and Google Scholar.</p>https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31625ML-Based Quantitative Analysis of Linguistic and Speech Features Relevant in Predicting Alzheimer’s Disease2024-06-05T10:54:34+02:00Tripti Tripathitripti.gkp7@gmail.comRakesh Kumarrkiitr@gmail.com Alzheimer’s disease (AD) is a severe neurological condition that affects many people worldwide, with detrimental consequences. Detecting AD early is crucial for prompt treatment and effective management. This study presents a novel approach for detecting and classifying six types of cognitive impairment using speech-based analysis: probable AD, possible AD, mild cognitive impairment (MCI), memory impairments, vascular dementia, and control. The method employs speech data from DementiaBank’s Pitt Corpus, which is preprocessed and analyzed to extract pertinent acoustic features. These features are then used to train five machine learning algorithms, namely k-nearest neighbors (KNN), decision tree (DT), support vector machine (SVM), XGBoost, and random forest (RF). Each algorithm is assessed through 10-fold cross-validation. The proposed speech-based method achieves an overall accuracy of 75.59% on the six-class classification problem. 
Among the five machine learning algorithms tested, the XGBoost classifier achieved the highest accuracy, 75.59%. These findings indicate that speech-based approaches can potentially be valuable for detecting and classifying cognitive impairment, including AD. The paper also explores robustness testing, evaluating the algorithms’ performance under various conditions, such as noise variability, voice quality changes, and accent variations. The proposed approach can be developed into a noninvasive, cost-effective, and accessible diagnostic tool for the early detection and management of cognitive impairment. 2024-06-05T00:00:00+02:00Copyright (c) 2023 Tripti Tripathihttps://revistas.usal.es/cinco/index.php/2255-2863/article/view/31506An Efficient Approach to Extract and Store Big Semantic Web Data Using Hadoop and Apache Spark GraphX2024-06-05T10:54:41+02:00Wria Mohammed Salih Mohammedwria.mohammedsalih@spu.edu.iqAlaa Khalil Jumaaalaa.alhadithy@spu.edu.iq The volume of data is growing at an astonishing speed. Traditional techniques for storing and processing data, such as relational and centralized databases, have become inefficient and time-consuming. Linked data and the Semantic Web make internet data machine-readable. Because of the increasing volume of linked data and Semantic Web data, storing and working with them using traditional approaches is no longer sufficient and strains limited hardware resources. To solve this problem, storing datasets using distributed and clustered methods is essential. Hadoop can store such datasets because it spreads data across many hard disks in a cluster; Apache Spark can process data in parallel more efficiently than Hadoop MapReduce because Spark works in memory rather than on disk. In this paper, Semantic Web data is stored and processed using Apache Spark GraphX and the Hadoop Distributed File System (HDFS). 
Spark's in-memory processing and distributed computing enable efficient analysis of massive datasets stored in HDFS, and Spark GraphX allows graph-based Semantic Web data processing. The fundamental objective of this work is to provide a way of efficiently combining Semantic Web and big data technologies to exploit their combined strengths in data analysis and processing. First, the proposed approach uses the SPARQL query language to extract Semantic Web data from DBpedia datasets. DBpedia is a large, publicly available Semantic Web dataset built from Wikipedia. Second, the extracted Semantic Web data is converted to the GraphX data format, generating vertex and edge files. The conversion process is implemented using Apache Spark GraphX. Third, both the vertex and edge tables are stored in HDFS and are available for visualization and analysis operations. Furthermore, the proposed techniques improve storage efficiency by roughly halving the amount of storage space required when converting from Semantic Web data to a GraphX file: the RDF size is around 133.8, versus 75.3 for GraphX. Adopting the parallel data processing provided by Apache Spark in the proposed technique reduces the required data processing and analysis time. This article concludes that Apache Spark GraphX can enhance Semantic Web and Big Data technologies. We minimize data size and processing time by converting Semantic Web data to the GraphX format, enabling efficient data management and seamless integration. 2024-06-05T00:00:00+02:00Copyright (c) 2023 Wria Mohammed Salih Mohammed, Alaa Khalil Jumaahttps://revistas.usal.es/cinco/index.php/2255-2863/article/view/31601Sarcasm Text Detection on News Headlines Using Novel Hybrid Machine Learning Techniques2024-06-05T10:54:38+02:00Neha Singhnehaps2703@gmail.comUmesh Chandra Jaiswalucjitca@mmmut.ac.in One of the biggest problems with sentiment analysis systems is sarcasm. 
The use of implicit, indirect language to express opinions is what gives it its complexity. Sarcasm can appear in a number of places, such as headlines, conversations, or book titles. Even for a human, recognizing sarcasm can be difficult because it conveys feelings that are diametrically opposed to the literal meaning expressed in the text. There are several different models for sarcasm detection. To identify sarcastic news headlines, this article assesses vectorization algorithms and several machine learning models. The recommended hybrid technique using the bag-of-words and TF-IDF feature vectorization models is compared experimentally to other machine learning approaches. In comparison to existing strategies, experiments demonstrate that the proposed hybrid technique with the bag-of-words vectorization model offers greater accuracy and F1-score results. 2024-06-05T00:00:00+02:00Copyright (c) 2023 Neha Singhhttps://revistas.usal.es/cinco/index.php/2255-2863/article/view/31412Computer-Aided Detection and Diagnosis of Breast Cancer: a Review2024-06-05T10:54:44+02:00Bhanu Prakash Sharmabhanu.12016492317@ipu.ac.inRavindra Kumar Purwarravindra@ipu.ac.in Statistics across different countries show that breast cancer is among the most severe cancers, with a high mortality rate. Early detection is essential for reducing the severity and mortality of breast cancer. Researchers have proposed many computer-aided diagnosis/detection (CAD) techniques for this purpose. Many perform well (over 90% classification accuracy, sensitivity, specificity, and F1-score); nevertheless, there is still room for improvement. This paper reviews the literature related to breast cancer and the challenges faced by the research community. It discusses the common stages of breast cancer detection/diagnosis using CAD models, along with deep learning and transfer learning (TL) methods. 
In recent studies, deep learning models have outperformed handcrafted feature extraction and classification, and the semantic segmentation of ROI images has achieved good results. An accuracy of up to 99.8% has been obtained using these techniques. Furthermore, using TL, researchers combine the power of both pre-trained deep learning-based networks and traditional feature extraction approaches. 2024-06-05T00:00:00+02:00Copyright (c) 2023 BHANU PRAKASH SHARMA, RAVINDRA KUMAR PURWARhttps://revistas.usal.es/cinco/index.php/2255-2863/article/view/31590Investigation of the Role of Machine Learning and Deep Learning in Improving Clinical Decision Making for Musculoskeletal Rehabilitation2024-06-13T09:38:57+02:00Madhu Yadavmy8006069850@gmail.comPushpendra Kumar Vermadr.pkverma81@gmail.comSumaiya Ansarisumaiyaansari123@gmail.com Musculoskeletal rehabilitation is an important aspect of healthcare that involves the treatment and management of injuries and conditions affecting the muscles, bones, joints, and related tissues. Clinical decision-making in musculoskeletal rehabilitation involves complex and multifactorial considerations that can be challenging for healthcare professionals. Machine learning and deep learning techniques have the potential to enhance clinical judgement in musculoskeletal rehabilitation by providing insights into complex relationships between patient characteristics, treatment interventions, and outcomes. These techniques can help identify patterns and predict outcomes, allowing for personalized treatment plans and improved patient outcomes. In this investigation, we explore the various applications of machine learning and deep learning in musculoskeletal rehabilitation, including image analysis, predictive modelling, and decision support systems. We also examine the challenges and limitations associated with implementing these techniques in clinical practice and the ethical considerations surrounding their use. 
This investigation aims to highlight the potential benefits of using machine learning and deep learning in musculoskeletal rehabilitation and the need for further research to optimize their use in clinical practice. 2024-06-12T00:00:00+02:00Copyright (c) 2023 Pushpendra Vermahttps://revistas.usal.es/cinco/index.php/2255-2863/article/view/31454Resolving Covid-19 with Blockchain and AI2023-10-19T23:12:26+02:00Suyogita Singhsuyogitasingh0885@gmail.comSatya Bhushan VermaSatyabverma1@gmail.com In the early months of 2020, a fast-spreading outbreak was brought about by the new virus SARS-CoV-2. The uncontrolled spread, which led to a pandemic, illustrated the healthcare system’s slow response time to public health emergencies at that time. Blockchain technology was anticipated to be crucial in the effort to contain the COVID-19 pandemic. In that review, many potential blockchain applications were identified; however, the majority of them were still in their infancy, and it could not yet be predicted how they would contribute to the fight against COVID-19 in terms of platforms, access types, and consensus algorithms. Modern innovations such as blockchain and artificial intelligence (AI) were shown to be promising in limiting the spread of the virus. Blockchain could specifically aid in the battle against pandemics by supporting early epidemic identification, assuring the ordering of clinical information, and maintaining a trustworthy medical chain during disease tracing. AI also offered smart methods of diagnosing coronavirus and supported the development of pharmaceuticals. Blockchain and AI software for epidemic and pandemic containment were analyzed in that research. First, a new conceptual strategy was proposed to tackle COVID-19 through an architecture that fused AI with blockchain. State-of-the-art research on the benefits of blockchain and AI in COVID-19 containment was then reviewed. 
Recent initiatives and use cases developed to tackle the coronavirus pandemic were also presented, along with a case study using federated intelligence for COVID-19 identification. Finally, attention was drawn to problems and prospective directions for further investigation into future coronavirus-like wide-ranging scenarios. 2024-06-12T00:00:00+02:00Copyright (c) 2023 suyogita singhhttps://revistas.usal.es/cinco/index.php/2255-2863/article/view/31710Performance Research on Multi-Target Detection in Different Noisy Environments2024-04-09T09:01:19+02:00Yuhong Yin59763537@qq.comQian Jiajiaqianxd@163.comHuiqi Xuh.q.xu@mail.nwpu.edu.cnGuanglei Fufugl@nwpu.edu.cn This paper studies five classic multi-target detection methods in different noisy environments: the Akaike information criterion, the ratio criterion, Rissanen's minimum description length, the Gerschgorin disk estimator, and eigen-increment threshold methods. Theoretical and statistical analyses of these methods have been carried out through simulations and a real-world water tank experiment. These detection approaches are known to suffer from array errors and environmental noise. A new diagonal correction algorithm is proposed to address the degraded detection performance in practical systems caused by array errors and environmental noise. This algorithm not only improves the detection performance of these multi-target detection methods at low signal-to-noise ratios (SNR), but also enhances robustness in high-SNR scenarios. 2024-11-27T00:00:00+01:00Copyright (c) 2024 Yuhong Yin, Qian Jia, Huiqi Xu, Guanglei Fuhttps://revistas.usal.es/cinco/index.php/2255-2863/article/view/31762CyberUnits Bricks: An Implementation Study of a Class Library for Simulating Nonlinear Biological Feedback Loops2024-08-27T14:40:34+02:00Johannes W. Dietrichjohannes.dietrich@ruhr-uni-bochum.deNina Siegmarnina.siegmar@ruhr-uni-bochum.deJonas R. 
Hojjatijonas.hojjati@ruhr-uni-bochum.deOliver Gardtoliver.gardt@klinikum-bochum.deBernhard O. Boehmbernhard.o.boehm@mailbox.org Feedback loops and other types of information processing structures play a pivotal role in maintaining the internal milieu of living organisms. Although methods of biomedical cybernetics and systems biology help to translate between the structure and function of processing structures, computer simulations are necessary for studying nonlinear systems and the full range of dynamic responses of feedback control systems. Currently available approaches for modelling and simulation essentially comprise domain-specific environments, toolkits for computer algebra systems, and purpose-built custom software written in universal programming languages. All of these approaches have certain weaknesses. We therefore developed a cross-platform class library that provides versatile building bricks for writing computer simulations in a universal programming language (CyberUnits Bricks). It supports the definition of models, the simulative analysis of linear and nonlinear systems in the time and frequency domains, and the plotting of block diagrams. We compared several programming languages that are commonly used in biomedical research (S in the R implementation and Python) or that are optimized for speed (Swift, C++ and Object Pascal). In benchmarking experiments with two prototypical feedback loops, we found the implementations in Object Pascal to deliver the fastest results. CyberUnits Bricks is available as open-source software that has been optimised for Embarcadero Delphi and the Lazarus IDE for Free Pascal. 2024-08-27T00:00:00+02:00Copyright (c) 2024 PD Dr. med. Johannes W. Dietrich, Nina Siegmar, Jonas R. Hojjati, Oliver Gardt, Bernhard O. 
Boehmhttps://revistas.usal.es/cinco/index.php/2255-2863/article/view/31528A Review on Covid-19 Detection Using Artificial Intelligence from Chest CT Scan Slices2024-11-29T12:30:37+01:00Dhanshri M. Malimalidhanshri93@gmail.comS. A. Patilsapatil@dkte.ac.in The outbreak of COVID-19, a contagious respiratory disease, has had a significant impact on people worldwide, and stopping the disease's spread depends on a quick and precise diagnosis. To prevent its spread, there is an urgent need for an easily accessible, fast, and cost-effective diagnostic solution. According to studies, COVID-19 is frequently accompanied by coughing; therefore, the identification and classification of cough sounds can be a promising method for rapidly and efficiently diagnosing the disease. COVID-19 has also been detected using medical imaging modalities such as chest X-rays and computed tomography (CT) scans, owing to their non-invasive nature and accessibility. This research provides an in-depth examination of deep learning-based strategies for recognising COVID-19 in medical images. The benefits and drawbacks of various deep learning approaches and their applications in COVID-19 detection are discussed. The study also examines publicly available datasets and benchmarks for evaluating deep learning model performance. Furthermore, the limitations and future research prospects for using deep learning in COVID-19 detection are discussed. This survey aims to offer a comprehensive overview of the current state of deep learning-based COVID-19 detection using medical images, which can aid researchers and healthcare professionals in selecting appropriate approaches for an effective diagnosis of the disease. 
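As an aside, the classification pipeline the review surveys can be illustrated in miniature. The sketch below is purely illustrative and assumes everything: synthetic 16x16 "CT slices" stand in for real scans, and a linear classifier stands in for the deep networks discussed; it only shows the train/evaluate shape of such a pipeline.

```python
# Illustrative sketch only: a toy version of an image-classification pipeline,
# with synthetic 16x16 "CT slices" and a linear classifier in place of a deep
# network. Data, sizes, and model are all assumptions, not the surveyed methods.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 400)

def make_slice(label):
    """Synthetic slice: class 1 gets a brighter central region (toy 'opacity')."""
    img = rng.normal(0.0, 1.0, (16, 16))
    if label == 1:
        img[4:12, 4:12] += 1.5
    return img.ravel()

X = np.array([make_slice(y) for y in labels])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.25, random_state=0, stratify=labels)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

A real system would replace both the synthetic data and the linear model, but the split-train-score skeleton is the same.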
2024-11-27T00:00:00+01:00Copyright (c) 2024 Dhanashri Malihttps://revistas.usal.es/cinco/index.php/2255-2863/article/view/31420Deep and Machine Learning for Acute Lymphoblastic Leukemia Diagnosis: A Comprehensive Review2024-07-15T13:33:00+02:00Mohammad Faizfaiz.techno20@gmail.comBakkanarappa Gari Mounikamounikareddyb0305@gmail.comMohd Akbaralakbar.com@gmail.comSwapnita Srivastavaswapnitasrivastava@gmail.com The medical condition known as acute lymphoblastic leukemia (ALL) is characterized by an excess of immature lymphocyte production, and it can affect people across all age ranges. Detecting it at an early stage is extremely important to increase the chances of successful treatment. Conventional diagnostic techniques for ALL, such as bone marrow and blood tests, can be expensive and time-consuming, and they may be less useful in places with scarce resources. The primary objective of this research is to investigate automated techniques that can be employed to detect ALL at an early stage. This analysis covers both machine learning (ML) models, such as the support vector machine (SVM) and random forest (RF), and deep learning (DL) algorithms, including the convolutional neural network (CNN), AlexNet, ResNet50, ShuffleNet, MobileNet, and RNN. The effectiveness of these models in detecting ALL is evident through their ability to enhance accuracy and minimize human error, which is essential for early diagnosis and successful treatment. In addition, the study highlights several challenges and limitations in this field, including the scarcity of data available for ALL types and the significant computational resources required to train and operate deep learning models. 
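The classical ML side of this comparison (SVM and random forest with cross-validation) can be sketched as follows. Note the four "cell features" are synthetic stand-ins (hypothetical quantities such as nucleus area and shape irregularity), not real blood-smear measurements.

```python
# Hedged sketch of classical ML baselines of the kind the review discusses:
# SVM and random forest evaluated with 5-fold cross-validation. The features
# are synthetic stand-ins, not real ALL data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 300)
X = rng.normal(size=(300, 4))
X[y == 1, 0] += 2.0   # toy effect: larger "nucleus area" in the ALL class
X[y == 1, 1] += 1.5   # toy effect: higher "shape irregularity"

results = {}
for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("RF", RandomForestClassifier(n_estimators=100, random_state=0))]:
    results[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {results[name]:.2f}")
```

Cross-validated scores like these are what allow the fair model-to-model comparisons the review relies on.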
2024-07-15T00:00:00+02:00Copyright (c) 2023 Mohammad Faiz, Bakkanarappa Gari Mounika, Ramandeep Snadhuhttps://revistas.usal.es/cinco/index.php/2255-2863/article/view/31448Hybrid Text Embedding and Evolutionary Algorithm Approach for Topic Clustering in Online Discussion Forums2024-08-27T14:40:38+02:00Ibrahim Bouabdallaouiibrahim_bouabdallaoui@um5.ac.maFatima Guerouatefatima.guerouate@est.um5.ac.maMohammed Sbihimohammed.sbihi@est.um5.ac.ma Leveraging discussion forums as a medium for information exchange has led to a surge in data, making topic clustering on these platforms essential for understanding user interests, preferences, and concerns. This study introduces an innovative methodology for topic clustering by combining text embedding techniques—Latent Dirichlet Allocation (LDA) and BERT—trained on a single autoencoder. Additionally, it proposes a combination of K-Means and Genetic Algorithms for clustering topics within triadic discussion forum threads. The proposed technique begins with a preprocessing stage to clean and tokenize the textual data, which is then transformed into a vector representation using the hybrid text embedding method. Subsequently, the K-Means algorithm clusters these vectorized data points, and Genetic Algorithms optimize the parameters of the K-Means clustering. We assess the efficacy of our approach by computing cosine similarities between topics and by comparing performance in terms of coherence scores and graph visualization. The results confirm that the hybrid text embedding methodology, coupled with evolutionary algorithms, enhances the quality of topic clustering across various discussion forum themes. This investigation contributes significantly to the development of effective methods for clustering discussion forums, with potential applications in diverse domains, including social media analysis, online education, and customer response analysis. 
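The idea of letting a Genetic Algorithm tune K-Means can be sketched schematically. The toy below is an assumption-laden simplification: a mutation-only GA searches just the number of clusters k on synthetic blob data, with silhouette score as the fitness; the study's actual embeddings, parameters, and GA operators differ.

```python
# Schematic sketch of coupling K-Means with a Genetic Algorithm: a mutation-only
# GA tunes the number of clusters k on toy blob data, using silhouette score as
# fitness. All settings here are illustrative, not the paper's.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=42)
rng = np.random.default_rng(0)

def fitness(k):
    """Higher silhouette = better-separated clustering for this k."""
    labels = KMeans(n_clusters=int(k), n_init=10, random_state=0).fit_predict(X)
    return silhouette_score(X, labels)

pop = rng.integers(2, 10, size=6)                 # population of candidate k values
for _ in range(5):                                # a few GA generations
    scores = np.array([fitness(k) for k in pop])
    parents = pop[np.argsort(scores)[-3:]]        # selection: keep the fittest half
    children = parents + rng.integers(-1, 2, 3)   # mutation: perturb k by -1..+1
    pop = np.clip(np.concatenate([parents, children]), 2, 12)

best_k = max(pop, key=fitness)
print("best k found:", int(best_k))
```

A full implementation would also evolve other K-Means parameters (e.g. initial centroids) and add crossover, but the select-mutate-evaluate loop is the core of the approach.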
2024-08-27T00:00:00+02:00Copyright (c) 2023 Ibrahim Bouabdallaoui, Fatima Guerouate, Mohammed Sbihihttps://revistas.usal.es/cinco/index.php/2255-2863/article/view/31616Optimized Deep Belief Network for Efficient Fault Detection in Induction Motor2023-09-20T17:35:19+02:00Pradeep Kattapradeep.2048@gmail.comK. Karunanithik.karunanithiklu@gmail.comS. P. Rajaavemariaraja@gmail.comS. Rameshrameshsme@gmail.comS. Vinoth John Prakashvjp.tiglfg@gmail.comDeepthi Josephdeepths@gmail.com Numerous industrial applications depend heavily on induction motors, and their malfunction causes considerable financial losses. Induction motors in industrial processes have recently expanded dramatically in size, and the complexity of defect identification and diagnostics for such systems has increased as well. As a result, research has concentrated on developing novel methods for the quick and accurate identification of induction motor problems. In response to these needs, this paper provides an optimised algorithm for analysing the performance of an induction motor. To analyse the operation of induction motors, an enhanced methodology based on Deep Belief Networks (DBN) is introduced for extracting features from sensor-acquired vibration signals. Multiple Restricted Boltzmann Machine (RBM) units are stacked to build the DBN model, which is then trained using an ant colony algorithm. An innovative method of feature extraction for autonomous fault analysis in manufacturing is demonstrated through experimental investigations utilising vibration signals; an overall accuracy of 99.8% is obtained, which confirms the efficiency of the DBN architecture for feature extraction. 2024-07-24T00:00:00+02:00Copyright (c) 2023 PRADEEP KATTA, K. Karunanithi, S.P. Raja, S. 
Ramesh, Deepthi Josephhttps://revistas.usal.es/cinco/index.php/2255-2863/article/view/29191CNN Based Automatic Speech Recognition: A Comparative Study2024-08-27T14:40:42+02:00Hilal Ilgazhilalilgaz06@gmail.comBeyza Akkoyunbeyzaakkoyun9@gmail.comÖzlem Alpayozlemalpay@gazi.edu.trM. Ali Akcayolakcayol@gazi.edu.tr Recently, one of the most common approaches used in speech recognition is deep learning. The most advanced results have been obtained with speech recognition systems created using convolutional neural networks (CNN) and recurrent neural networks (RNN). Since CNNs can capture local features effectively, they are applied to tasks with relatively short-term dependencies, such as keyword detection or phoneme-level sequence recognition. This paper presents the development of a deep learning-based speech command recognition system. The Google Speech Commands Dataset has been used for training; it contains 65,000 one-second-long utterances of 30 short English words. 80% of the dataset has been used for training and 20% for testing. The one-second voice commands have been converted into spectrograms and used to train different artificial neural network (ANN) models, including various CNN variants. The performance of the proposed model has reached 94.60%. 2024-08-27T00:00:00+02:00Copyright (c) 2023 Ozlem Alpay
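The spectrogram preprocessing step described in that abstract can be sketched in a few lines of NumPy. The frame length and hop size below are illustrative choices, not the paper's settings, and the audio is a synthetic tone rather than a real command.

```python
# Sketch of the preprocessing step: turning a one-second audio command into a
# spectrogram that a CNN could consume. Pure-NumPy magnitude STFT; frame sizes
# are illustrative assumptions.
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude STFT: split into overlapping frames, window, and FFT each."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))   # shape: (n_frames, frame_len//2 + 1)

# A fake one-second "command" sampled at 16 kHz: a 440 Hz tone plus noise.
sr = 16000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.default_rng(0).normal(size=sr)

spec = spectrogram(audio)
print(spec.shape)  # → (124, 129): 124 time frames x 129 frequency bins
```

The resulting 2-D time-frequency array is exactly the kind of image-like input that the CNN variants mentioned above are trained on.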