ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal <p dir="ltr">The <a title="adcaij" href="" target="_blank" rel="noopener">Advances in Distributed Computing and Artificial Intelligence Journal</a> (ISSN: 2255-2863) is an open access (OA) journal that publishes articles contributing new results in distributed computing and artificial intelligence and their application in different areas, such as deep learning, generative AI, electronic commerce, smart grids, IoT, and distributed computing. These technologies are constantly changing as a result of the substantial research and technical effort being undertaken in both universities and businesses. Authors are invited to contribute to the journal by submitting articles that present research results, projects, surveys, and industrial experiences describing significant advances in the areas of computing.</p> <p dir="ltr">ADCAIJ focuses on the exchange of ideas between scientists and technicians. Both academic and business perspectives are essential to facilitate the development of systems that meet the demands of today's society. The journal is supported by the research group <a title="bisite" href="" target="_blank" rel="noopener">BISITE</a>.</p> <p dir="ltr">The journal commenced publication in 2012 with quarterly periodicity and has published more than 300 peer-reviewed articles. All articles are written in scientific English.</p> <p dir="ltr">From volume 12 (2023) onwards, the journal is published in continuous mode, in order to advance the visibility and dissemination of scientific knowledge.</p> <p dir="ltr">ADCAIJ is indexed in Scopus and in the Emerging Sources Citation Index (ESCI) of Web of Science, in the category COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. 
It also appears in other directories and databases such as DOAJ, ProQuest, Scholar, WorldCat, Dialnet, Sherpa ROMEO, Dulcinea, UlrichWeb, BASE, Academic Journals Database and Google Scholar.</p> en-US (Juan M. CORCHADO) (Ángel REDERO (Ediciones Universidad de Salamanca)) Tue, 18 Jul 2023 00:00:00 +0200 OJS 60 SRG: Energy-Efficient Localized Routing to Bypass Void in Wireless Sensor Networks The Shift Reverse Gradient (SRG) approach presents a void-size-independent hole-bypassing scheme for wireless sensor networks. It does not require establishing any chain or hierarchical tree structure to ensure reliable delivery. SRG offers an energy-efficient solution with minimal overhead and power consumption, and its communication overhead is equivalent to that of greedy forwarding. Simulations show that SRG's energy consumption is minimal and, unlike other existing void-bypassing methods, is not greatly affected by an increase in void size. Saurabh Singh, Sarvpal Singh, Jay Prakash Copyright (c) 2023 Saurabh Singh, Sarvpal Singh, Jay Prakash Fri, 29 Dec 2023 00:00:00 +0100 Generative Artificial Intelligence: Fundamentals Generative language models have gained substantial traction, notably with the introduction of refined models aimed at more coherent user-AI interactions, principally conversational models. The epitome of this public attention has arguably been the refinement of the GPT-3 model into ChatGPT and its subsequent integration with auxiliary capabilities such as search features in Microsoft Bing. Despite voluminous prior research devoted to its developmental trajectory, the model's performance and applicability to a myriad of quotidian tasks remained nebulous and task-specific. In terms of technological implementation, the advent of models such as LLMv2 and ChatGPT-4 has elevated the discourse beyond mere textual coherence to nuanced contextual understanding and real-world task completion. 
Concurrently, emerging architectures that focus on interpreting latent spaces have offered more granular control over text generation, thereby amplifying the model's applicability across various verticals. Within the purview of cyber defense, especially in the Swiss operational ecosystem, these models pose both unprecedented opportunities and challenges. Their capabilities in data analytics, intrusion detection, and even combating misinformation are laudable; yet the ethical and security implications concerning data privacy, surveillance, and potential misuse warrant judicious scrutiny. Juan M. Corchado, Sebastian López F., Juan M. Núñez V., Raul Garcia S., Pablo Chamoso Copyright (c) 2023 Sebastian Lopez Florez Fri, 01 Dec 2023 00:00:00 +0100 Service Chain Placement by Using an African Vulture Optimization Algorithm Based VNF in Cloud-Edge Computing The use of virtual network functions (VNFs) enables the implementation of service function chains (SFCs), an innovative approach for delivering network services. The deployment of service chains on the actual network infrastructure and the establishment of virtual connections between VNF instances are crucial factors that significantly impact the quality of the network services provided. Current research on the allocation of vital VNFs under resource constraints on the edge network has overlooked the potential benefits of employing SFCs with instance reuse, a strategy that offers significant improvements in resource utilization and reduced startup time. We propose a novel technique called the African vulture optimization algorithm for virtual network functions (AVOAVNF), which optimizes the sequential arrangement of SFCs. The proposed approach demonstrates superior performance compared to existing state-of-the-art methods in sustaining inbound service chain requests, even in the complex network topologies observed in real-world scenarios. 
Extensive simulations on edge networks evaluate the AVOAVNF methodology, considering metrics such as latency, energy consumption, throughput, resource cost, and execution time. The results indicate that the proposed method outperforms the BGWO, DDRL, BIP, and MILP techniques, reducing energy consumption by 8.35%, 12.23%, 29.54%, and 52.29%, respectively. Abhishek Kumar Pandey, Sarvpal Singh Copyright (c) 2023 Abhishek Kumar Pandey Fri, 29 Dec 2023 00:00:00 +0100 Cryptocurrency Price Prediction Using Supervised Machine Learning Algorithms As a consequence of rising geo-economic issues, global currency values have declined over the last two years, stock markets have performed poorly, and investors have lost money. Consequently, there is renewed interest in digital currencies. Cryptocurrency is a new kind of asset that has evolved as a result of fintech innovations, and it presents a major research opportunity. Due to price volatility and dynamism, anticipating the price of cryptocurrencies is difficult. There are hundreds of cryptocurrencies in circulation around the world, and the demand for prediction systems for price forecasting has increased manifold. Hence, many developers have proposed machine learning algorithms for price forecasting. Machine learning is evolving fast, with numerous theoretical advances and applications in a variety of domains. This study proposes the use of three supervised machine learning methods, namely linear regression, support vector machine, and decision tree, to estimate the price of four prominent cryptocurrencies: Bitcoin, Ethereum, Dogecoin, and Bitcoin Cash. The purpose of this study is to compute and compare the precision of all three techniques over all four datasets. 
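The kind of comparison this abstract describes can be sketched in a few lines of scikit-learn. The snippet below is an illustrative sketch only, not the authors' code: it trains the three named models (linear regression, support vector machine, decision tree) on lagged values of a synthetic random-walk "price" series, whereas the study itself uses real Bitcoin, Ethereum, Dogecoin, and Bitcoin Cash data.

```python
# Hedged sketch: comparing linear regression, SVM, and decision tree
# regressors on a synthetic price series (NOT the paper's data or code).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 500)) + 100  # synthetic random-walk "price"

# Lag features: predict the next price from the previous 5 values
window = 5
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]
split = int(0.8 * len(X))  # chronological train/test split (no shuffling)
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

models = {
    "linear_regression": LinearRegression(),
    "svm": SVR(kernel="rbf", C=100.0),
    "decision_tree": DecisionTreeRegressor(max_depth=5, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    print(f"{name}: MAE = {mae:.3f}")
```

A chronological split is used deliberately: shuffling a price series before splitting would leak future information into the training set.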
Divya Chaudhary, Sushil Kumar Saroj Copyright (c) 2023 Sushil Saroj Fri, 29 Dec 2023 00:00:00 +0100 Host Detection and Classification using Support Vector Regression in Cloud Environment With the potential to provide global users with pay-per-use, utility-oriented IT services across the Internet, cloud computing has become increasingly popular. These services are provided via data centers (DCs) established across the world. These data centers are growing rapidly with the rising demand for cloud services, leading to massive energy consumption, with energy requirements soaring by 63%, and inefficient resource utilization. This paper contributes a dynamic time-series-based prediction model using support vector regression (SVR). The prediction model defines upper and lower limits, based on which a host is classified into four categories: overload, under pressure, normal, and underload. A series of migration strategies is considered in the case of load imbalance. The proposed mechanism improves load distribution and minimizes energy consumption and execution time by balancing the hosts in the data center. It also optimizes execution cost and resource utilization. In the proposed framework, the energy consumption is 0.641 kWh and the execution time is 165.39 sec. Experimental results show that the proposed approach outperforms other existing approaches. Vidya Srivastava, Rakesh Kumar Copyright (c) 2023 Vidya Srivastava, Rakesh Kumar Fri, 29 Dec 2023 00:00:00 +0100 Blockchain Enabled Hadoop Distributed File System Framework for Secure and Reliable Traceability Hadoop Distributed File System (HDFS) is a distributed file system that allows large amounts of data to be stored and processed across multiple servers in a Hadoop cluster. HDFS also provides high throughput for data access and enables the management of vast amounts of data using commodity hardware. However, security vulnerabilities in HDFS can be exploited for malicious purposes. 
This emphasizes the importance of establishing strong security measures to facilitate file sharing within Hadoop and implementing a reliable mechanism for verifying the legitimacy of shared files. The objective of this paper is to enhance the security of HDFS by utilizing a blockchain-based technique. The proposed model uses the Hyperledger Fabric platform at the enterprise level to leverage file metadata, thereby establishing dependable security and traceability of data within HDFS. The analysis of results indicates that the proposed model incurs a slightly higher overhead compared to HDFS and requires more storage space; however, this is considered an acceptable trade-off for the improved security. Manish Kumar Gupta, Rajendra Kumar Dwivedi Copyright (c) 2023 Manish Gupta Fri, 29 Dec 2023 00:00:00 +0100 A Review of Cloud Security Issues and Challenges The advancement of information technology today depends on cloud computing. Every cloud consumer continues to search for the best services available, particularly in terms of security. The absence of uniform standards in cloud security architecture, especially within the cloud computing community, has become an ongoing problem. Most cloud service providers automatically follow best security practices and actively protect the operation of their systems. Enterprises, however, are required to independently judge the security of their data, applications, and cloud workloads. The complexity of security concerns is increasing as the digital environment develops. Virtualization, multi-tenancy, security controls, rules, and risk profiling are all concepts in cloud security. DoS, service injection, user-to-root attacks, man-in-the-middle attacks, data leakage and loss, identity theft, and backdoor channel attacks are a few of the dangers associated with cloud computing. 
Cloud platforms, data outsourcing, data storage standardisation and security, data backup, and data recovery are all examples of cloud security services. Data storage, computation, untrusted computing, and virtualization are all sources of security challenges in the cloud. Anamika Agarwal, Satya Bhushan Verma, Bineet Kumar Gupta Copyright (c) 2023 Anamika Agrawal Fri, 29 Dec 2023 00:00:00 +0100 Explainable Heart Disease Diagnosis with Supervised Learning Methods The objective of this study is to develop a heart disease diagnosis model with supervised machine learning algorithms. To that end, random forest (RF), support vector machine (SVM), Naïve Bayes (NB), and extreme gradient boosting (XGBoost) are applied to a medical heart disease dataset to develop a model for heart disease prediction. The performance of the algorithms is investigated and compared for the automation of heart disease diagnosis. The best model is selected, and a grid search is applied to improve model performance. The simulation results show that the XGBoost model outperforms the others, achieving 99.10% accuracy and an area under the receiver operating characteristic curve (AUC) of 0.99, compared to RF, SVM, and NB on heart disease detection. Finally, the obtained result is interpreted with Shapley additive explanations (SHAP) to investigate the effect of each feature on the diagnosis of heart disease. A case study on heart disease diagnosis provides important insight into the impact of each feature on the diagnostic performance of the supervised learning method. The developed model had a markedly higher prediction accuracy, indicating the utility of supervised learning systems in detecting heart disease in its early stages. Tsehay Admassu Assegie, S. J. 
Sushma, Shonazarova Shakhnoza Mamanazarovna Copyright (c) 2023 Sushma S J Fri, 29 Dec 2023 00:00:00 +0100 Ensemble Learning Approach for Effective Software Development Effort Estimation with Future Ranking To provide a client with a high-quality product, software development requires a significant amount of time and effort. Accurate estimates and on-time delivery are requirements for the software industry. Software development effort estimation determines the effort, resources, time, and schedule needed to complete a software project on a tight budget. To achieve high levels of accuracy and effectiveness while using fewer resources, project managers increasingly rely on models created to properly evaluate software development effort as decision-support systems. As a result, this paper proposes a novel model capable of accurately estimating the effort for global and large-scale software products. The primary goal of this paper is to develop and apply a practical ensemble approach for predicting software development effort. There are two parts to this study: the first phase uses machine learning models to extract the most useful features from previous studies. The development effort is calculated in the second phase using an advanced ensemble method based on the components of the first phase. After a controlled experiment was conducted to develop the ensemble model, evaluate it, and tune its parameters, the developed model outperformed existing models. K. Eswara Rao, Balamurali Pydi, P. Annan Naidu, U. D. Prasann, P. Anjaneyulu Copyright (c) 2023 Eswara Rao K Fri, 29 Dec 2023 00:00:00 +0100 Stable Feature Selection using Improved Whale Optimization Algorithm for Microarray Datasets A microarray is a collection of DNA sequences that reflect an organism's whole gene set, organized in a grid pattern for use in genetic testing. 
Microarray datasets are extremely high-dimensional and have very small sample sizes, posing the challenges of insufficient data and high computational complexity. Identifying true biomarkers, i.e., the most significant features (a very small subset of the complete feature set), is desired to solve these issues. This reduces over-fitting and time complexity, and improves model generalization. Various feature selection algorithms are used for this biomarker identification. This research proposes a modification of the whale optimization algorithm (WOAm) for biomarker discovery, in which the fitness of each search agent is evaluated using the hinge loss function during the hunting-for-prey phase to determine the optimal search agent. The results of the proposed modified algorithm are compared with those of the original whale optimization algorithm, as well as with contemporary algorithms such as the marine predator algorithm and grey wolf optimization. All these algorithms are evaluated on six different high-dimensional microarray datasets. It has been observed that the proposed modification of the whale optimization algorithm significantly improves the feature selection results across all the datasets. Domain experts' trust in the resulting biomarkers/associated genes rests on the stability of the results obtained; the stability of the chosen feature set was therefore also evaluated during the research work. According to the findings, the proposed WOAm has superior stability compared to the other algorithms on the CNS, colon, Leukemia, and OSCC datasets. Dipti Theng, Kishor K Bhoyar Copyright (c) 2023 Dipti Theng Tue, 19 Dec 2023 00:00:00 +0100 Impact of VR on Learning Experience compared to a Paper based Approach <p>Different learning theories encourage different kinds of learning approaches. Following constructivist theories, learning experiences should be realistic in order to facilitate learning. 
Virtual Reality (VR) serious games could be a realistic learning approach without the challenges of the real situation. The serious game InGo allows a user to learn the intralogistics process of receiving goods. In this work we explore whether learning in VR is more effective than traditional learning approaches in terms of learning success and learning experience. No significant difference between the two approaches concerning learning success is found. However, other factors that have a long-term effect on learning, such as intrinsic motivation, flow and mood, are significantly higher for the VR approach. Thus, our research fits with past research indicating the high potential of VR-based learning and educational games. This work encourages future research to compare VR-based and traditional learning approaches in the long term.</p> Stella Kolarik, Christoph Schlüter, Katharina Ziolkowski Copyright (c) 2023 Stella Kolarik, Christoph Schlüter, Katharina Ziolkowski Wed, 17 Jan 2024 00:00:00 +0100 Comparison of Pre-trained vs Custom-trained Word Embedding Models for Word Sense Disambiguation The prime objective of word sense disambiguation (WSD) is to develop machines that can automatically recognize the actual meaning (sense) of ambiguous words in a sentence. WSD can help address various NLP and HCI challenges. Researchers have explored a wide variety of methods to resolve this issue of sense ambiguity, but their focus has mainly been on English and other well-resourced languages. Urdu, with more than 300 million users and a large amount of electronic text available on the web, is still unexplored. In recent years, word embedding methods have proven extremely successful for a variety of Natural Language Processing tasks. 
This study evaluates, compares, and applies a variety of word embedding approaches to Urdu WSD (both Lexical Sample and All-Words), including pre-trained (Word2Vec, GloVe, and FastText) as well as custom-trained (Word2Vec, GloVe, and FastText trained on the Ur-Mono corpus) models. Two benchmark corpora are used for the evaluation in this study: (1) the UAW-WSD-18 corpus and (2) the ULS-WSD-18 corpus. For the Urdu All-Words WSD task, the top results (Accuracy=60.07 and F1=0.45) have been achieved using pre-trained FastText. For the Lexical Sample WSD task, the best results (Accuracy=70.93 and F1=0.60) have been achieved using the custom-trained GloVe word embedding method. Muhammad Farhat Ullah, Ali Saeed, Naveed Hussain Copyright (c) 2023 Dr. Naveed Hussain Wed, 01 Nov 2023 00:00:00 +0100 Beaf:BD – A Blockchain Enabled Authentication Framework for Big Data <p>The widespread utilization of Internet-based applications in our daily routines has resulted in enormous amounts of data being generated every minute. This data is produced not only by humans but also by various machines such as sensors, satellites, CCTV, etc. For many organizations, Apache Hadoop is the solution for handling big data. Big data refers to extensive sets of dissimilar data that can be processed to derive meaningful insights. For its security needs, Hadoop relies on trusted third-party security providers such as Kerberos. Kerberos has several security vulnerabilities. 
The focus of this paper is to eliminate these security issues, particularly dictionary attacks and single points of failure, by proposing a model based on blockchain technology and threshold cryptography. In comparison to other existing schemes, the proposed approach offers lower computational overhead and storage requirements while maintaining the system's security level.</p> Manish Kumar Gupta, Rajendra Kumar Dwivedi Copyright (c) 2023 Manish Kumar Gupta, Rajendra Kumar Dwivedi Fri, 29 Dec 2023 00:00:00 +0100 Enhancing Energy Efficiency in Cluster Based WSN using Grey Wolf Optimization Wireless sensor networks (WSNs) are typically made up of small, low-power sensor nodes (SNs) equipped with wireless communication, processing, and sensing capabilities. These nodes collaborate with each other to form a self-organizing network. They can collect data from their surrounding environment, such as temperature, humidity, light intensity, or motion, and transmit it to a central base station (BS) or gateway for further processing and analysis. LEACH and TSEP are examples of cluster-based protocols developed for WSNs. These protocols require careful design and optimization of cluster head (CH) selection algorithms, considering factors such as energy consumption, network scalability, data aggregation, load balancing, fault tolerance, and adaptability to dynamic network conditions. Various research efforts have been made to develop efficient CH selection algorithms in WSNs, considering these challenges and trade-offs. In this paper, the Grey Wolf Optimization (GWO) algorithm is employed to address the problem of selecting CHs in WSNs. The proposed approach takes into account two parameters: the residual energy (RE) of a node and its distance (DS) from the BS. By visualizing and analyzing the GWO algorithm under variable parameters in WSNs, this research identifies the most appropriate node among all normal nodes for CH selection. 
The experimental results demonstrate that the proposed model, utilizing GWO, outperforms other approaches in terms of performance. Ashok Kumar Rai, Lalit Kumar Tyagi, Anoop Kumar, Swapnita Srivastava, Naushen Fatima Copyright (c) 2023 Ashok Kumar Rai Wed, 01 Nov 2023 00:00:00 +0100 Energy Efficient Compressor Cell for Low Power Computing <p>As the use of multimedia devices rises, power management is becoming a major challenge. Various types of compressors have been designed in this study. Compressor circuits are designed using several XOR-XNOR gate and multiplexer circuits, and combinations of these blocks have been used to construct the suggested compressor design. The performance of the proposed compressor circuits using these low-power XOR-XNOR gates and multiplexer blocks has been found to be economical in terms of space and power. This study proposes low-power and high-speed 3-2, 4-2, and 5-2 compressors for digital signal processing applications. A new compressor has also been proposed that is faster and uses less energy than the traditional compressor. The full adder circuit, constructed using various combinations of XOR-XNOR gates, has been used to develop the proposed compressor. The proposed 3-2 compressor shows an average power dissipation of 571.7 nW and an average delay of 2.41 ns, the 4-2 compressor shows an average power dissipation of 1235 nW and an average delay of 2.7 ns, while the 5-2 compressor shows an average power dissipation of 2973.50 nW and an average delay of 3.75 ns.</p> Rahul Mani Upadhyay, R. K. Chauhan, Manish Kumar Copyright (c) 2023 Rahul Mani Upadhyay, R.K. 
Chauhan, Manish Kumar Tue, 19 Sep 2023 00:00:00 +0200 A Framework for Improving the Performance of QKDN using Machine Learning Approach <div class="page" title="Page 1"> <div class="layoutArea"> <div class="column"> <p>Reliable secure communication between two remote parties can be achieved through key sharing; quantum key distribution (QKD) has attracted wide attention because the information in QKD is safeguarded by the laws of quantum physics. Many techniques deal with quantum key distribution networks (QKDNs); however, only a few of them use machine learning (ML) and soft computing techniques to improve QKDNs. ML can analyze data and improve itself through model training without having to be programmed manually. There has been a lot of progress in both the hardware and software of ML technologies. Given ML's advantageous features, it can help improve and resolve issues in QKDNs, facilitating their commercialization. The proposed work provides a detailed understanding of the role of each layer of a QKDN, addresses the limitations of each layer, and suggests a framework to improve the performance metrics for various applications of QKDNs by applying machine learning techniques, such as support vector machine and decision tree algorithms.</p> </div> </div> </div> R Arthi, A Saravanan, J S Nayana, Chandresh MuthuKumaran Copyright (c) 2023 Arthi R, Saravanan A, Nayana J S, Chandresh MuthuKumaran Tue, 19 Sep 2023 00:00:00 +0200 Comparison of Swarm-based Metaheuristic and Gradient Descent-based Algorithms in Artificial Neural Network Training <p><em>This paper compares gradient descent-based algorithms under the classical training model and swarm-based metaheuristic algorithms in feed-forward backpropagation artificial neural network training. Batch weight and bias rule, Bayesian regularization, cyclical weight and bias rule, and Levenberg-Marquardt algorithms are used as the classical gradient descent-based algorithms. 
In terms of the swarm-based metaheuristic algorithms, hunger games search, gray wolf optimizer, Archimedes optimization, and the Aquila optimizer are adopted. The Iris data set is used in this paper for training. Mean square error, mean absolute error, and the determination coefficient are used as statistical measures to determine the effect of the network architecture and the adopted training algorithm. The metaheuristic algorithms are shown to have superior capability over the gradient descent-based algorithms in terms of artificial neural network training. In addition to their success in error rates, the classification capabilities of the metaheuristic algorithms are observed to be in the range of 94%-97%. The hunger games search algorithm stands out among the metaheuristic algorithms, as it maintains good performance in terms of classification ability and other statistical measurements.</em></p> Erdal Eker, Murat Kayri, Serdar Ekinci, Davut İzci Copyright (c) 2023 Erdal Eker Tue, 19 Sep 2023 00:00:00 +0200 PCA-Chain: A Novel Medical Image Retrieval Blockchain For decades, data security has remained a challenging task for researchers. The unrivaled immutability of blockchain data and the decentralized nature of its ledger have been put forward as potential solutions to the issue. Blockchain has proven effective in securely storing textual data; however, it is unable to store image files. Researchers are now focusing on implementing blockchain for storing and securing image data, as images contain a large amount of sensitive data and are prone to data tampering attacks. The proposed PCA-Chain is a novel approach to providing a secure image-based blockchain that uses Principal Component Analysis (PCA) to compress digital leukemia images, propagate them on the blockchain via simple hash functions, and retrieve them back at their original size without major compression losses. 
Performance evaluation was conducted using the MSE, PSNR, and SSIM performance parameters. PCA-Chain provides lossless compression and can be used for the storage of medical images. Abhay Kumar Yadav, Virendra P. Vishwakarma Copyright (c) 2023 Abhay Kumar Yadav, Virendra P. Vishwakarma Fri, 29 Dec 2023 00:00:00 +0100 Eta-Reduction in Type-Theory of Acyclic Recursion <p>We investigate the applicability of classic eta-conversion in the type-theory of acyclic algorithms. While denotationally valid, classic eta-conversion is not algorithmically valid in the type theory of algorithms, with the exception of a few limited cases. The paper shows how the restricted, algorithmic eta-rule can recover algorithmic eta-conversion in the reduction calculi of the type-theory of algorithms.</p> Roussanka Loukanova Copyright (c) 2022 Roussanka Loukanova Tue, 18 Jul 2023 00:00:00 +0200 A Detailed Sentiment Analysis Survey Based on Machine Learning Techniques Sentiment analysis is a rapidly growing topic of research as a result of the tremendous growth of digital information. In the modern era of artificial intelligence, sentiment analysis is one of the most crucial technologies for obtaining sentiment data from vast amounts of data. It refers to the procedure of finding and categorising the opinions expressed in a source text. Reaching a consensus regarding business decisions is made much easier by conducting sentiment analysis on consumer data. Machine learning offers an efficient and trustworthy technique for sentiment categorization and opinion mining. State-of-the-art machine learning techniques and methodologies have evolved and expanded. In addition to summarising research articles based on movie reviews, product reviews, and Twitter reviews, this survey article covers sentiment analysis notations, needs, levels, methodologies, sources, and machine learning approaches and tools. 
This research aims to determine the significance of sentiment analysis and to generate interest in the subject. Neha Singh, Umesh Chandra Jaiswal Copyright (c) 2023 Neha Singh Tue, 19 Dec 2023 00:00:00 +0100 Restricted Computations and Parameters in Type-Theory of Acyclic Recursion <p>The paper extends the formal language and the reduction calculus of Moschovakis type-theory of recursion by adding a restrictor operator on terms with predicative restrictions. Terms with restrictions over memory variables formalise inductive algorithms with generalised, restricted parameters. The extended type-theory of restricted recursion (TTRR) provides computations for the algorithmic semantics of mathematical expressions and definite descriptors, in formal and natural languages.</p> <p>The reduction calculi of TTRR provide a mathematical foundation for the work of compilers in reducing recursive programs to iterative ones. The type-theory of acyclic recursion (TTAR) is of special importance to syntax-semantics interfaces in computational grammars.</p> Roussanka Loukanova Copyright (c) 2023 Roussanka Loukanova Thu, 20 Jul 2023 00:00:00 +0200 An Efficient Video Frames Retrieval System Using Speeded Up Robust Features Based Bag of Visual Words Most studies on content-based image retrieval (CBIR) systems use database images of multiple classes, and there is a lack of automatic video frame retrieval systems based on a query image. The low-level features, i.e., the shapes and colors, of many objects are almost the same; e.g., the sun and an orange are both round and red. Features such as speeded up robust features (SURF), used in most content-based video retrieval (CBVR) and CBIR research, are non-invariant features which may affect the overall accuracy of a CBIR system. The use of a simple and weak classifier or matching technique may also affect the accuracy of the CBIR system at a large scale. 
The unavailability of datasets for content-based video frame retrieval is also a research gap explored in this paper. Altaf Hussain Copyright (c) 2023 Altaf Hussain Fri, 29 Dec 2023 00:00:00 +0100 A Novel Study for Automatic Two-Class and Three-Class COVID-19 Severity Classification of CT Images using Eight Different CNNs and Pipeline Algorithm SARS-CoV-2, which appeared at the end of 2019, has caused a severe worldwide pandemic. The virus causes respiratory distress syndrome. Computed tomography (CT) imaging provides important radiological information in the diagnosis and clinical evaluation of pneumonia caused by bacteria or viruses, and it is widely utilized in the identification and evaluation of COVID-19. Establishing diagnostic support systems using artificial intelligence methods is an important requirement for alleviating the workload that the disease places on healthcare systems and radiologists. In this context, an important study goal is to determine the clinical severity of the pneumonia caused by the disease, which is important for determining treatment procedures and following up on a patient's condition. In this study, automatic COVID-19 severity classification was performed using three classes (mild, moderate, and severe) and two classes (non-severe and severe). Deep learning models were used for classification, and CT images were utilized as the radiological images. A total of 483 COVID-19 CT-image slices were used: 267 mild, 156 moderate, and 60 severe. These images and labels were used directly for the three-class classification. In the two-class classification, the mild and moderate images were treated as non-severe. Classifications were made with eight convolutional neural network (CNN) architectures: MobileNetv2, ResNet101, Xception, Inceptionv3, GoogleNet, EfficientNetb0, DenseNet201, and DarkNet53. 
The results of the four best-performing CNN architectures were then combined using a pipeline algorithm, which yielded significant improvements. For the three-class classification, weighted recall-sensitivity (SNST), specificity (SPCF), accuracy (ACCR), F-1 score (F-1), area under the receiver operating characteristic curve (AUC), and overall ACCR were 0.7785, 0.8351, 0.8299, 0.7758, 0.9112, and 0.7785, respectively, before the pipeline algorithm, and 0.8095, 0.8555, 0.8563, 0.8076, 0.9089, and 0.8095 after it. For the two-class classification, SNST, SPCF, ACCR, F-1, and AUC were 0.9740, 0.8500, 0.9482, 0.9703, and 0.9788 before the pipeline algorithm, and 0.9811, 0.8333, 0.9627, 0.9788, and 0.9851 after it. Hüseyin Yaşar, Murat Ceylan, Hakan Cebeci, Abidin Kılınçer, Nusret Seher, Fikret Kanat, Mustafa Koplay Copyright (c) 2023 Hüseyin Yaşar, Murat Ceylan, Hakan Cebeci, Abidin Kılınçer, Nusret Seher, Fikret Kanat, Mustafa Koplay Fri, 29 Dec 2023 00:00:00 +0100 Healthcare Data Collection Using Internet of Things and Blockchain Based Decentralized Data Storage <p>With the increase in the usage of Internet of Things (IoT) devices, IoT is used in different sectors such as manufacturing, electric vehicles, home automation and healthcare. IoT devices collect large volumes of data on different parameters at regular intervals, and storing this massive volume of IoT data securely is a complicated task. Presently, the majority of IoT devices use cloud storage, but cloud servers require large storage and high computation.
Moreover, because of the third-party cloud service provider (CSP) interaction, the management of IoT data security depends fully on the CSP. To address these problems, a decentralized blockchain-based secure storage scheme is proposed in this work. In the proposed scheme, instead of being held at a CSP storage location, the patient health information is stored on the blockchain, and the blockchain miners verify the transactions with the help of Elliptic Curve Cryptography (ECC). The miner verification process dynamically prevents adversary access. Similarly, certificateless access is used in the proposed system to avoid certificate-based issues. The blocks of the blockchain store patient details in a decentralized location, preventing unauthorized access and ensuring the authenticity of the data. The use of blockchain eliminates the need for a third-party public auditing process through immutable storage. This work demonstrates secure communication and immutable data storage without the intervention of a CSP. Compared with existing techniques, the communication overhead is reduced by nearly 10 to 40%, authentication is improved by 10 to 20%, and confidentiality is increased by 5%. Through this technique, data confidentiality, integrity and availability are ensured.</p> M. Sumathi, S. P. Raja, N. Vijayaraj, M. Rajkamal Copyright (c) 2023 Sumathi M, Raja S P, Vijayaraj N, Rajkamal M Fri, 06 Oct 2023 00:00:00 +0200 Sentiment Analysis Using Machine Learning In recent years, sentiment analysis on social media, including Facebook, Twitter and blogs, has grown in popularity. Social media generate large amounts of information, and this has contributed to the growth of sentiment analysis as a field of research. This study shows that sentiment analysis has been thoroughly researched in recent years and that numerous methods have been designed and evaluated; nevertheless, there is still much room for improvement. This paper reviews the state of the art in sentiment analysis.
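As an illustrative aside (not the paper's implementation), a machine-learning sentiment classifier of the kind surveyed here can be sketched as logistic regression over bag-of-words counts; the toy vocabulary and training data below are ours:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=200):
    """Plain stochastic-gradient-descent logistic regression (weights + bias)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    return int(sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5)

# Toy bag-of-words features: counts of ["good", "bad"]; label 1 = positive sentiment
X = [[2, 0], [1, 0], [0, 2], [0, 1]]
y = [1, 1, 0, 0]
w, b = train_logreg(X, y)
print([predict(w, b, xi) for xi in X])
```

Production systems would use a real feature extractor (TF-IDF, embeddings) and an optimized library implementation, but the decision rule is the same.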
Various machine learning procedures for sentiment analysis are discussed, and their potential to increase analysis accuracy is underscored. The paper introduces sentiment analysis types, methodologies, applications and challenges, together with a comparative study of machine learning and sentiment analysis approaches. Performance evaluation parameters for sentiment analysis have also been tested and compared using different machine learning classifiers; this evaluation points to logistic regression as the model that achieves the best results. In the future, a method that is simple, versatile, and practicable should be offered as an alternative to existing machine learning methods, and more work should be put into improving the algorithms' performance. Neha Singh, Umesh Chandra Jaiswal Copyright (c) 2023 Neha Singh, Umesh Chandra Jaiswal Fri, 29 Dec 2023 00:00:00 +0100 Appointment Booking and Drug Inventory System in Healthcare Services Using Blockchain Technology Blockchain technology has the potential to revolutionize the healthcare industry by improving data security, reducing administrative inefficiencies, and enabling the seamless sharing of medical information. In the healthcare sector, blockchain can be applied to a wide range of use cases, such as medical record-keeping, clinical trials, drug traceability, and telemedicine. By using a secure, decentralized system, healthcare organizations can ensure that sensitive patient data are kept confidential and can be accessed easily by authorized parties. Additionally, the use of smart contracts can reduce the risk of errors and save time and resources. In the proposed work, a decentralized application integrates healthcare services with blockchain technology to ensure transparency and security and to prevent tampering with electronic medical records.
Three main functionalities are implemented in this work: a transparent appointment booking system in which patients can view the real-time availability status of doctors and book an appointment with the doctor of their choice; efficient and secure data storage and retrieval; and a transparent, tamper-resistant medical inventory that prevents the unauthorized sale of medicines and drugs and verifies drug availability, helping patients obtain drugs as quickly as possible. Compared to existing centralized storage techniques, the proposed decentralized storage technique provides higher data availability, a faster response time, and immutable storage of existing data. Experimental results show that, compared to existing work, the proposed approach achieves better throughput and latency, and its communication cost is 7% lower than that of the existing Telecare Medicine Information System. M. Sumathi, Inti Dhiraj, Dhavala Sai Mahita, S. P. Raja Copyright (c) 2023 Sumathi M, Inti Dhiraj, Dhavala Sai Mahita, Raja S P Fri, 29 Dec 2023 00:00:00 +0100
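The tamper-resistance that both blockchain abstracts above rely on comes from hash-chaining: each block's hash covers the previous block's hash, so altering any stored record invalidates every later link. A deliberately simplified sketch (our own illustration; real systems add consensus, ECC signatures and distributed storage):

```python
import hashlib
import json

def block_hash(record, prev_hash):
    """Hash a record together with the previous block's hash (SHA-256)."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Chain records so that altering any one changes every later hash."""
    chain, prev = [], "0" * 64  # all-zero genesis hash
    for rec in records:
        h = block_hash(rec, prev)
        chain.append({"record": rec, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Recompute every link; returns False if any record was tampered with."""
    prev = "0" * 64
    for blk in chain:
        if blk["prev"] != prev or block_hash(blk["record"], prev) != blk["hash"]:
            return False
        prev = blk["hash"]
    return True

chain = build_chain([{"patient": "P1", "rx": "drug-A"},
                     {"patient": "P2", "rx": "drug-B"}])
assert verify(chain)
chain[0]["record"]["rx"] = "drug-C"   # tampering with a stored record...
assert not verify(chain)              # ...is detected on verification
```

This is the immutability property that lets the proposed systems drop third-party auditing: any party holding the chain can re-verify it independently.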