ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal https://revistas.usal.es/cinco/index.php/2255-2863 <p dir="ltr">The <a title="adcaij" href="http://adcaij.usal.es" target="_blank" rel="noopener">Advances in Distributed Computing and Artificial Intelligence Journal</a> (ISSN: 2255-2863) is an open access (OA) journal that publishes articles contributing new results in distributed computing and artificial intelligence and their application in areas such as deep learning, generative AI, electronic commerce, smart grids, IoT, and distributed computing. These technologies change constantly as a result of the large research and technical effort being undertaken in both universities and businesses. Authors are invited to contribute to the journal by submitting articles that illustrate research results, projects, survey works, and industrial experiences describing significant advances in the areas of computing.</p> <p dir="ltr">ADCAIJ focuses on the exchange of ideas between scientists and technicians. Both academic and business communities are essential to facilitate the development of systems that meet the demands of today's society. The journal is supported by the research group <a title="bisite" href="http://bisite.usal.es/en/research/research-lines" target="_blank" rel="noopener">BISITE</a>.</p> <p dir="ltr">The journal commenced publication in 2012 with quarterly periodicity and has published more than 300 peer-reviewed articles. All articles are written in scientific English.</p> <p dir="ltr">From volume 12 (2023) onwards, the journal is published in continuous mode in order to advance the visibility and dissemination of scientific knowledge.</p> <p dir="ltr">ADCAIJ is indexed in Scopus and in the Emerging Sources Citation Index (ESCI) of Web of Science, in the category COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. 
It also appears in other directories and databases such as DOAJ, ProQuest, Scholar, WorldCat, Dialnet, Sherpa ROMEO, Dulcinea, UlrichWeb, BASE, Academic Journals Database and Google Scholar.</p> Universidad de Salamanca en-US ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal 2255-2863 ML-Based Quantitative Analysis of Linguistic and Speech Features Relevant in Predicting Alzheimer’s Disease https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31625 Alzheimer’s disease (AD) is a severe neurological condition that affects many people globally, with detrimental consequences. Detecting AD early is crucial for prompt treatment and effective management. This study presents a novel approach for detecting and classifying six types of cognitive impairment using speech-based analysis: probable AD, possible AD, mild cognitive impairment (MCI), memory impairments, vascular dementia, and control. The method employs speech data from DementiaBank’s Pitt Corpus, which is preprocessed and analyzed to extract pertinent acoustic features. These features are then used to train five machine learning algorithms, namely k-nearest neighbors (KNN), decision tree (DT), support vector machine (SVM), XGBoost, and random forest (RF). The performance of each algorithm is assessed through 10-fold cross-validation. The proposed speech-based method achieves an overall accuracy of 75.59% on the six-class classification problem. Among the five machine learning algorithms tested, the XGBoost classifier showed the highest accuracy of 75.59%. These findings indicate that speech-based approaches can potentially be valuable for detecting and classifying cognitive impairment, including AD. The paper also explores robustness testing, evaluating the algorithms’ performance under various circumstances, such as noise variability, voice quality changes, and accent variations. 
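The train-and-evaluate loop described above can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic features standing in for the Pitt Corpus acoustic features, with GradientBoostingClassifier substituted for XGBoost so that only scikit-learn is required; it is not the paper's implementation:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for extracted acoustic features (e.g. MFCCs, pitch, pauses)
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))
# Six balanced diagnostic classes: probable AD, possible AD, MCI,
# memory impairment, vascular dementia, control
y = np.repeat(np.arange(6), 20)

models = {
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "XGBoost (GB stand-in)": GradientBoostingClassifier(n_estimators=25, random_state=0),
    "RF": RandomForestClassifier(n_estimators=50, random_state=0),
}

# 10-fold cross-validation, as in the study
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = {name: cross_val_score(m, X, y, cv=cv).mean() for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```

With real features, the mean fold accuracy per model gives the comparison table from which the best classifier is chosen.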
The proposed approach can be developed into a noninvasive, cost-effective, and accessible diagnostic tool for the early detection and management of cognitive impairment. Tripti Tripathi Rakesh Kumar Copyright (c) 2023 Tripti Tripathi https://creativecommons.org/licenses/by-nc-nd/4.0/ 2024-06-05 2024-06-05 13 e31625 e31625 10.14201/adcaij.31625 An Efficient Approach to Extract and Store Big Semantic Web Data Using Hadoop and Apache Spark GraphX https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31506 The volume of data is growing at an astonishingly high speed. Traditional techniques for storing and processing data, such as relational and centralized databases, have become inefficient and time-consuming. Linked data and the Semantic Web make internet data machine-readable. Because of the increasing volume of linked data and Semantic Web data, storing and working with them using traditional approaches is no longer sufficient and strains limited hardware resources. To solve this problem, it is essential to store datasets using distributed and clustered methods. Hadoop can store such datasets because it distributes data across the many hard disks of a cluster; Apache Spark can process data in parallel more efficiently than Hadoop MapReduce because Spark works in memory rather than on disk. In this paper, Semantic Web data is stored and processed using Apache Spark GraphX and the Hadoop Distributed File System (HDFS). Spark's in-memory processing and distributed computing enable efficient analysis of massive datasets stored in HDFS, and Spark GraphX enables graph-based processing of Semantic Web data. The fundamental objective of this work is to provide a way of efficiently combining Semantic Web and big data technologies to exploit their combined strengths in data analysis and processing. First, the proposed approach uses the SPARQL query language to extract Semantic Web data from DBpedia datasets. 
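The extract-and-convert step can be sketched in miniature. The triples below are hypothetical stand-ins for live SPARQL results from DBpedia, and plain Python is used only to show the data shapes; in the paper the conversion runs on Apache Spark GraphX, whose vertices carry numeric IDs and whose edges reference source and destination IDs:

```python
# Hypothetical triples, as a SPARQL query against DBpedia might return
triples = [
    ("dbr:Berlin", "dbo:country", "dbr:Germany"),
    ("dbr:Germany", "dbo:capital", "dbr:Berlin"),
    ("dbr:Berlin", "dbo:populationTotal", "3644826"),
]

# Assign a numeric ID to every subject/object, since GraphX vertices need IDs
nodes = {}
for s, _, o in triples:
    for term in (s, o):
        nodes.setdefault(term, len(nodes))

vertices = [(vid, term) for term, vid in nodes.items()]   # (id, property) rows
edges = [(nodes[s], nodes[o], p) for s, p, o in triples]  # (src, dst, label) rows
print(vertices)
print(edges)
```

The two resulting tables correspond to the vertices and edges files that the approach stores in HDFS.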
DBpedia is a large, publicly available Semantic Web dataset built from Wikipedia. Second, the extracted Semantic Web data is converted to the GraphX data format, generating vertices and edges files. The conversion process is implemented using Apache Spark GraphX. Third, both the vertex and edge tables are stored in HDFS, where they are available for visualization and analysis operations. Furthermore, the proposed technique improves storage efficiency by roughly halving the required space when converting from Semantic Web data to a GraphX file: the RDF size is around 133.8, while the GraphX size is 75.3. Adopting the parallel data processing provided by Apache Spark also reduces the required data processing and analysis time. This article concludes that Apache Spark GraphX can enhance Semantic Web and Big Data technologies. We minimize data size and processing time by converting Semantic Web data to the GraphX format, enabling efficient data management and seamless integration. Wria Mohammed Salih Mohammed Alaa Khalil Jumaa Copyright (c) 2023 Wria Mohammed Salih Mohammed, Alaa Khalil Jumaa https://creativecommons.org/licenses/by-nc-nd/4.0/ 2024-06-05 2024-06-05 13 e31506 e31506 10.14201/adcaij.31506 Sarcasm Text Detection on News Headlines Using Novel Hybrid Machine Learning Techniques https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31601 One of the biggest problems for sentiment analysis systems is sarcasm. Its complexity stems from the use of implicit, indirect language to express opinions. Sarcasm can appear in a number of places, such as headings, conversations, or book titles. Even for a human, recognizing sarcasm can be difficult because it conveys feelings diametrically opposed to the literal meaning expressed in the text. There are several different models for sarcasm detection. To identify sarcastic news headlines, this article assesses vectorization algorithms and several machine learning models. 
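One way to realize a hybrid of the two vectorization models is to concatenate bag-of-words counts with TF-IDF weights before classification. The sketch below uses scikit-learn with made-up toy headlines; LogisticRegression is an illustrative classifier choice, not necessarily the one used in the article:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline

# Hypothetical toy headlines; 1 = sarcastic, 0 = literal
headlines = [
    "scientists discover water is wet",
    "local man wins award for outstanding service",
    "area dad thrilled to spend weekend assembling furniture",
    "city council approves new park budget",
]
labels = [1, 0, 1, 0]

# Hybrid vectorization: bag-of-words counts concatenated with TF-IDF weights
model = Pipeline([
    ("features", FeatureUnion([
        ("bow", CountVectorizer()),
        ("tfidf", TfidfVectorizer()),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(headlines, labels)
preds = model.predict(headlines)
print(preds)
```

In practice the model would be fitted on a labeled headline corpus and compared against single-vectorizer baselines on accuracy and F1-score.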
The recommended hybrid technique using the bag-of-words and TF-IDF feature vectorization models is compared experimentally to other machine learning approaches. Experiments demonstrate that the proposed hybrid technique with the bag-of-words vectorization model achieves higher accuracy and F1-score than existing strategies. Neha Singh Umesh Chandra Jaiswal Copyright (c) 2023 Neha Singh Neha Singh https://creativecommons.org/licenses/by-nc-nd/4.0/ 2024-06-05 2024-06-05 13 e31601 e31601 10.14201/adcaij.31601 Computer-Aided Detection and Diagnosis of Breast Cancer: a Review https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31412 Statistics across different countries show that breast cancer is among the most severe cancers, with a high mortality rate. Early detection is essential to reducing the severity and mortality of breast cancer. Researchers have proposed many computer-aided diagnosis/detection (CAD) techniques for this purpose. Many perform well (over 90% classification accuracy, sensitivity, specificity, and F1-score); nevertheless, there is still room for improvement. This paper reviews the literature related to breast cancer and the challenges faced by the research community. It discusses the common stages of breast cancer detection/diagnosis using CAD models, along with deep learning and transfer learning (TL) methods. In recent studies, deep learning models have outperformed handcrafted feature extraction and classification approaches, and the semantic segmentation of ROI images has achieved good results. An accuracy of up to 99.8% has been obtained using these techniques. Furthermore, using TL, researchers combine the power of both pre-trained deep learning networks and traditional feature extraction approaches. 
Bhanu Prakash Sharma Ravindra Kumar Purwar Copyright (c) 2023 BHANU PRAKASH SHARMA, RAVINDRA KUMAR PURWAR https://creativecommons.org/licenses/by-nc-nd/4.0/ 2024-06-05 2024-06-05 13 e31412 e31412 10.14201/adcaij.31412 Investigation of the Role of Machine Learning and Deep Learning in Improving Clinical Decision Making for Musculoskeletal Rehabilitation https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31590 Musculoskeletal rehabilitation is an important aspect of healthcare that involves the treatment and management of injuries and conditions affecting the muscles, bones, joints, and related tissues. Clinical decision-making in musculoskeletal rehabilitation involves complex and multifactorial considerations that can be challenging for healthcare professionals. Machine learning and deep learning techniques have the potential to enhance clinical judgement in musculoskeletal rehabilitation by providing insights into complex relationships between patient characteristics, treatment interventions, and outcomes. These techniques can help identify patterns and predict outcomes, allowing for personalized treatment plans and improved patient outcomes. In this investigation, we explore the various applications of machine learning and deep learning in musculoskeletal rehabilitation, including image analysis, predictive modelling, and decision support systems. We also examine the challenges and limitations associated with implementing these techniques in clinical practice and the ethical considerations surrounding their use. This investigation aims to highlight the potential benefits of using machine learning and deep learning in musculoskeletal rehabilitation and the need for further research to optimize their use in clinical practice. 
Madhu Yadav Pushpendra Kumar Verma Sumaiya Ansari Copyright (c) 2023 Pushpendra Verma https://creativecommons.org/licenses/by-nc-nd/4.0/ 2024-06-12 2024-06-12 13 e31590 e31590 10.14201/adcaij.31590 Resolving Covid-19 with Blockchain and AI https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31454 In the early months of 2020, a fast-spreading outbreak was brought about by the new virus SARS-CoV-2. The uncontrolled spread, which led to a pandemic, illustrated the healthcare system’s slow response to public health emergencies at that time. Blockchain technology was anticipated to be crucial in the effort to contain the COVID-19 pandemic. In that review, many potential blockchain applications were identified; however, the majority of them were still in their infancy, and it could not yet be predicted how they would contribute to the fight against COVID-19 through their platforms, access types, and consensus algorithms. Modern innovations such as blockchain and artificial intelligence (AI) were shown to be promising in limiting the spread of a virus. Blockchain could specifically aid in the battle against pandemics by supporting early epidemic identification, ensuring the ordering of clinical information, and maintaining a trustworthy medical chain during disease tracing. AI also offered smart approaches to coronavirus diagnosis and therapy and supported the development of pharmaceuticals. Blockchain and AI software for epidemic and pandemic containment were analyzed in that research. First, a new conceptual strategy was proposed to tackle COVID-19 through an architecture that fused AI with blockchain. State-of-the-art research on the benefits of blockchain and AI in COVID-19 containment was then reviewed. Recent initiatives and use cases developed to tackle the coronavirus pandemic were also presented. A case study using federated intelligence for COVID-19 identification was also provided. 
Finally, attention was drawn to open problems and prospective directions for further investigation into future wide-ranging, coronavirus-like scenarios. Suyogita Singh Satya Bhushan Verma Copyright (c) 2023 suyogita singh https://creativecommons.org/licenses/by-nc-nd/4.0/ 2024-06-12 2024-06-12 13 e31454 e31454 10.14201/adcaij.31454 Deep and Machine Learning for Acute Lymphoblastic Leukemia Diagnosis: A Comprehensive Review https://revistas.usal.es/cinco/index.php/2255-2863/article/view/31420 The medical condition known as acute lymphoblastic leukemia (ALL) is characterized by an excess of immature lymphocyte production, and it can affect people across all age ranges. Detecting it at an early stage is extremely important to increase the chances of successful treatment. Conventional diagnostic techniques for ALL, such as bone marrow and blood tests, can be expensive and time-consuming, and may be less useful in places with scarce resources. The primary objective of this research is to investigate automated techniques that can be employed to detect ALL at an early stage. This analysis covers both machine learning (ML) models, such as the support vector machine (SVM) and random forest (RF), and deep learning (DL) algorithms, including the convolutional neural network (CNN), AlexNet, ResNet50, ShuffleNet, MobileNet, and RNN. The effectiveness of these models in detecting ALL is evident in their ability to enhance accuracy and minimize human error, which is essential for early diagnosis and successful treatment. In addition, the study highlights several challenges and limitations in this field, including the scarcity of data available for ALL types and the significant computational resources required to train and operate deep learning models. 
Mohammad Faiz Bakkanarappa Gari Mounika Mohd Akbar Swapnita Srivastava Copyright (c) 2023 Mohammad Faiz Faiz, Bakkanarappa Gari Mounika, Ramandeep Snadhu https://creativecommons.org/licenses/by-nc-nd/4.0/ 2024-07-15 2024-07-15 13 e31420 e31420 10.14201/adcaij.31420