<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
<title>ADCAIJ, Vol.7, n.3</title>
<link href="http://hdl.handle.net/10366/139093" rel="alternate"/>
<subtitle/>
<id>http://hdl.handle.net/10366/139093</id>
<updated>2026-04-23T03:03:52Z</updated>
<dc:date>2026-04-23T03:03:52Z</dc:date>
<entry>
<title>Staff</title>
<link href="http://hdl.handle.net/10366/139230" rel="alternate"/>
<author>
<name>Editorial Team, Adcaij</name>
</author>
<id>http://hdl.handle.net/10366/139230</id>
<updated>2025-04-30T21:03:54Z</updated>
<published>2018-09-30T00:00:00Z</published>
<dc:date>2018-09-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evolutionary Algorithms for Query Optimization in Distributed Database Systems: A Review</title>
<link href="http://hdl.handle.net/10366/139229" rel="alternate"/>
<author>
<name>Ali, Zulfiqar</name>
</author>
<author>
<name>Kiran, Hafiza Maria</name>
</author>
<author>
<name>Shahzad, Waseem</name>
</author>
<id>http://hdl.handle.net/10366/139229</id>
<updated>2025-04-30T21:03:54Z</updated>
<published>2018-09-21T00:00:00Z</published>
<summary type="text">Evolutionary Algorithms are bio-inspired optimization approaches that exploit principles of biological evolution, such as natural selection and genetic inheritance. This review surveys the application of evolutionary and swarm-intelligence-based query optimization strategies in Distributed Database Systems. Query optimization in a distributed environment is a challenging and computationally hard problem, and evolutionary approaches are promising for such optimization problems. Several techniques exist and are in use for query optimization in distributed databases. This research focuses on how bio-inspired computational algorithms are applied to query optimization in distributed database environments, describing the workings of genetic algorithms, ant colony optimization, particle swarm optimization, and Memetic Algorithms.
</summary>
<dc:date>2018-09-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing a Software for Diagnosing Heart Disease via Data Mining Techniques</title>
<link href="http://hdl.handle.net/10366/139228" rel="alternate"/>
<author>
<name>Jasim, Yaser Abdulaali</name>
</author>
<author>
<name>Saeed, Mustafa G.</name>
</author>
<id>http://hdl.handle.net/10366/139228</id>
<updated>2025-04-30T21:03:54Z</updated>
<published>2018-09-21T00:00:00Z</published>
<summary type="text">This paper builds a data mining tool using a classification method based on a Multi-Layer Perceptron (MLP) with backpropagation learning and a feature selection algorithm, applied to biomedical test values for diagnosing heart disease. In addition, it develops a prototype for heart disease diagnosis with a user-friendly graphical user interface (GUI). The motivation for constructing this software is that clinical diagnosis is, in any event, based on a doctor's experience; despite this, some cases result in incorrect diagnosis and treatment, so patients are asked to take a number of tests. Moreover, not all of these tests contribute to an effective diagnosis of a disease. Using a data mining approach to diagnose heart disease supports doctors in making more efficient and subtle decisions.
</summary>
<dc:date>2018-09-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>JAMDER: JADE to MULTI-Agent Systems Development Resource</title>
<link href="http://hdl.handle.net/10366/139227" rel="alternate"/>
<author>
<name>Lopes, Yrleyjander S.</name>
</author>
<author>
<name>Cortés, Mariela I.</name>
</author>
<author>
<name>Tavares Gonçalves, Enyo José</name>
</author>
<author>
<name>Oliveira, Robson</name>
</author>
<id>http://hdl.handle.net/10366/139227</id>
<updated>2025-04-30T21:03:54Z</updated>
<published>2018-09-21T00:00:00Z</published>
<summary type="text">The semantic gap is the difference between two descriptions generated using different representations. This difference has a negative impact on developer productivity and, probably, on the quality of the written code. In the software development context, the coding phase aims to implement the system consistently with the detailed design produced from a group of design models. This paper presents an effort to consolidate different agent-type definitions and implementation concepts for Multi-Agent Systems (MAS), involving an adaptation of the JADE framework to the theoretical concepts of MAS. It additionally includes a standardization of code generation. The main benefit of the proposed extension is that it incorporates agent internal architectures, entities, and relationships into an implementation framework and increases productivity through code generation, ensuring consistency between design and code. The applicability of the extension is illustrated by developing a multi-agent system for Moodle.
</summary>
<dc:date>2018-09-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of CNN architecture for Hindi Characters</title>
<link href="http://hdl.handle.net/10366/139226" rel="alternate"/>
<author>
<name>Yadav, Madhuri</name>
</author>
<author>
<name>Kr Purwar, Ravindra</name>
</author>
<author>
<name>Jain, Anchal</name>
</author>
<id>http://hdl.handle.net/10366/139226</id>
<updated>2025-04-30T21:03:54Z</updated>
<published>2018-09-21T00:00:00Z</published>
<summary type="text">Handwritten character recognition is a challenging problem that has received attention because of its potential benefits in real-life applications. It automates manual paperwork, saving both time and money, but low recognition accuracy has so far limited its practical use. This work achieves higher recognition rates for handwritten isolated characters using a deep-learning-based Convolutional Neural Network (CNN). The architecture of these networks is complex and plays an important role in the success of a character recognizer, so this work experiments with different CNN architectures and investigates different optimization algorithms and trainable parameters. The experiments are conducted on two different grayscale datasets to make the work more generic and robust. One of the CNN architectures, in combination with Adadelta optimization, achieved a recognition rate of 97.95%. The experimental results demonstrate that CNN-based end-to-end learning achieves recognition rates much better than traditional techniques.
</summary>
<dc:date>2018-09-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>An empirical approach for software reengineering process with relation to quality assurance mechanism</title>
<link href="http://hdl.handle.net/10366/139225" rel="alternate"/>
<author>
<name>Muzammul, Muhammad</name>
</author>
<author>
<name>Awais, Muhammad</name>
</author>
<id>http://hdl.handle.net/10366/139225</id>
<updated>2025-04-30T21:03:53Z</updated>
<published>2018-09-13T00:00:00Z</published>
<summary type="text">Software development advances focus on the productivity of existing software systems, and quality is a basic demand of every engineering product. This paper discusses the complete re-engineering process, covering forward engineering, reverse engineering, and the quality assurance mechanism. The software development life cycle (SDLC) follows a complete engineering process. In forward engineering, we follow the main phases of software engineering (data, requirements, design, development, implementation). In reverse engineering, we move backward from the last phase, gathering requirements from the implemented product (implementation, coding, design, requirements, data). During re-engineering, we add quality features based on customer demands, but the actual demand is to fulfill quality needs, which can be assured by external as well as internal quality attributes such as reliability, efficiency, flexibility, reusability, and robustness. We discuss a methodological approach for moving from re-engineering to quality assurance. More than 50 studies are discussed, and the resulting throughput is presented in graph and tabular form. If the re-engineering process yields the desired quality attributes, then refactoring an old software system (code refactoring, data refactoring, and architectural refactoring) produces a quality product at a lower cost than developing a new software system, which demands more cost, time, etc. As future work, a testing methodology can be proposed for quality assurance.
</summary>
<dc:date>2018-09-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Greedy Algorithms for Approximating the Diameter of Machine Learning Datasets in Multidimensional Euclidean Space: Experimental Results</title>
<link href="http://hdl.handle.net/10366/139224" rel="alternate"/>
<author>
<name>Hassanat, Ahmad</name>
</author>
<id>http://hdl.handle.net/10366/139224</id>
<updated>2025-04-30T21:03:53Z</updated>
<published>2018-09-13T00:00:00Z</published>
<summary type="text">Finding the diameter of a dataset in multidimensional Euclidean space is a well-established problem with well-known algorithms. However, most algorithms found in the literature do not scale well with large data dimensions: their time complexity grows exponentially in most cases, which makes these algorithms impractical. We therefore implemented four simple greedy algorithms for approximating the diameter of a multidimensional dataset, based on minimum/maximum L2 norms, hill climbing search, Tabu search, and Beam search, respectively. The time complexity of the implemented algorithms is near-linear, as they scale near-linearly with data size and dimension. The results of experiments conducted on different machine learning datasets demonstrate the efficiency of the implemented algorithms, which can therefore be recommended for finding the diameter in machine learning applications when needed.
</summary>
<dc:date>2018-09-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ransomware - Kidnapping personal data for ransom and the information as hostage</title>
<link href="http://hdl.handle.net/10366/139223" rel="alternate"/>
<author>
<name>Ferreira, Márcio Ricardo</name>
</author>
<author>
<name>Kawakami, Cynthia</name>
</author>
<id>http://hdl.handle.net/10366/139223</id>
<updated>2025-04-30T21:03:53Z</updated>
<published>2018-09-13T00:00:00Z</published>
<summary type="text">Mankind faces a new challenge in legally handling the hyper-connected society, in which personal data and privacy are closely related. Globalization has directly affected criminality, both in its extent and in its structure and occurrence, giving rise to new criminal conduct such as ransomware. The problem is that the few existing laws do not successfully encompass the kidnapping of personal data for ransom. In this regard, this paper proposes an analysis of the vulnerability of personal data in the New Information and Communication Technologies; the scope is to analyze this conduct and propose new solutions, aiming to bring more legal certainty to the issue. This new criminological reality deserves reflection, so the present paper adopts the comparative-deductive method to examine what the existing doctrine and jurisprudence state, in order to assess the necessity (or not) of creating specific criminal legislation for the privacy and personal data security of Internet users. The main result found is the need to create an autonomous field of Criminal Cyber Law to handle offensive conduct against information, and the legislative evolution required to offer specific solutions to ransomware, a new form of conduct quite different from the extortion and kidnapping that society is used to, typical of the new reality in which the Information Society lives.
</summary>
<dc:date>2018-09-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Index</title>
<link href="http://hdl.handle.net/10366/139222" rel="alternate"/>
<author>
<name>Editorial Team, Adcaij</name>
</author>
<id>http://hdl.handle.net/10366/139222</id>
<updated>2025-04-30T21:03:53Z</updated>
<published>2018-09-30T00:00:00Z</published>
<dc:date>2018-09-30T00:00:00Z</dc:date>
</entry>
</feed>
