<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>ADCAIJ, Vol.8, n.2</title>
<link>http://hdl.handle.net/10366/143142</link>
<description/>
<pubDate>Wed, 13 May 2026 07:27:39 GMT</pubDate>
<dc:date>2026-05-13T07:27:39Z</dc:date>
<item>
<title>Perception Policies for Intelligent Virtual Agents</title>
<link>http://hdl.handle.net/10366/143308</link>
<description>Agents deployed in dynamic environments, such as virtual and augmented reality, need specific mechanisms to capture relevant features from the environment. These mechanisms enable agents to avoid processing useless information and to act quickly. The primary goal of this work is to investigate the perception policies of an agent situated in a virtual environment. Perception policies assign higher priority to sensors that perceive the changes occurring in the environment. In the proposed model, each sensor follows a strategy that can change its priority in the overall system. We developed two policies for changing sensor prioritization. The performance evaluation of the proposed model consists of comparing both approaches in a highly dynamic environment.
</description>
<pubDate>Thu, 14 Mar 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10366/143308</guid>
<dc:date>2019-03-14T00:00:00Z</dc:date>
</item>
<item>
<title>IoT based intelligent irrigation support system for smart farming applications</title>
<link>http://hdl.handle.net/10366/143307</link>
<description>India is an agricultural country with an ample amount of arable land that produces a wide variety of crops. Growing population and urbanization pose challenges: producing more and higher-quality yield in a limited area, utilizing water resources effectively, and integrating technology with traditional mechanisms. A crop irrigation management system with sensor-data fetch, transfer, and operate functionalities is proposed to meet these expectations. The system comprises sensing, data-processing, and actuator sections, with ambient temperature and humidity sensors placed at a height and a soil moisture sensor placed at the root zone of the subject. The sensor-generated data is compressed and then sent to an FTP server for processing. At the server, a 2-layer neural network with four inputs (plant growth, temperature, humidity, and soil moisture) is used for decision making that controls the water supply, fertilizer spray, etc., with a plant used as the test object. Results show that the error in the reconstructed data is tolerable, and that 62.5% and 67.5% compression are achieved for ambient temperature and humidity, and for soil moisture, respectively. Decisions made by the neural network on this data are only 2% erroneous. Thus, with its good data handling, its decision-making capability for precise water usage, and its portability and user-friendliness, this system proves beneficial in home gardens and greenhouses.
</description>
<pubDate>Thu, 14 Mar 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10366/143307</guid>
<dc:date>2019-03-14T00:00:00Z</dc:date>
</item>
<item>
<title>Detecting Spam Review through Spammer’s Behavior Analysis</title>
<link>http://hdl.handle.net/10366/143306</link>
<description>Online reviews of purchased products and services have become the main source of user opinions. Spam reviews are often written to promote or demote target products or services for profit or fame; this practice is known as review spamming. In recent years, different methods have been suggested to address review spamming, but new spam review detection methods are still needed to improve accuracy. In this work, the authors study six different spammer behavioral features and analyze the proposed spam review detection method using a weighting method. An experimental evaluation conducted on a benchmark dataset achieved 84.5% accuracy.
</description>
<pubDate>Thu, 14 Mar 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10366/143306</guid>
<dc:date>2019-03-14T00:00:00Z</dc:date>
</item>
<item>
<title>Education System re-engineering with AI (artificial intelligence) for Quality Improvements with proposed model</title>
<link>http://hdl.handle.net/10366/143305</link>
<description>Re-engineering (RE) of existing educational institutions (EIs) through adoption of the latest technology trends (LTT), in the form of artificial intelligence (AI), can be highly effective in terms of quality. Growing class sizes and terrorist attacks on EIs urged us to introduce an approach that can assure education quality. Monitoring a crowded class remains a major issue for the teacher during lecture delivery. In this paper, we implemented re-engineering using two AI-based systems: 1) a multi-face recognition (MFR) system and 2) a facial expression recognition (FER) system. Both systems are supported by intelligent techniques such as principal component analysis (PCA), the discrete wavelet transform (DWT), and k-nearest neighbor (KNN). After implementation of these intelligent techniques, students' attentiveness is expected to increase. The developed system can detect expressions such as happiness, repulsion, fear, anger, and confusion. Each student's attentiveness score is displayed on screen, and the teacher can interpret it as an attentiveness percentage. The system's decision making can help determine whether to continue the class or take a short break. This system is also an application of an expert system (ES) and a knowledge-based system (KBS) for educational quality assurance. A similar monitoring system was deployed in China with Hikvision Digital Technology. Prediction results show that monitoring can be an effective way to ensure education quality.
</description>
<pubDate>Tue, 14 May 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10366/143305</guid>
<dc:date>2019-05-14T00:00:00Z</dc:date>
</item>
<item>
<title>An Intelligent Multi-Resolutional and Rotational Invariant Texture Descriptor for Image Retrieval Systems</title>
<link>http://hdl.handle.net/10366/143304</link>
<description>Finding identical or comparable images in large rotated databases with high retrieval accuracy and low retrieval time is a challenging task in content-based image retrieval (CBIR) systems. To address this problem, an intelligent and efficient technique is proposed for texture-based images. In this method, a new joint feature vector is first created that inherits the properties of the local binary pattern (LBP), which is stable under changes in illumination and rotation, and the discrete wavelet transform (DWT), which is multi-resolutional and multi-oriented with high directionality. Secondly, after creation of the hybrid feature vector, classifiers are employed on the combined LBP and DWT features to increase the accuracy of the system. Two machine learning classifiers are evaluated: the Support Vector Machine (SVM) and the Extreme Learning Machine (ELM). Both proposed methods, P1 (LBP+DWT+SVM) and P2 (LBP+DWT+ELM), are tested on the rotated Brodatz dataset of 1,456 texture images and the MIT VisTex dataset of 640 images. In both experiments, the proposed methods outperform the simple DWT+LBP combination and many other state-of-the-art methods in terms of precision and accuracy when different numbers of images are retrieved. The ELM algorithm shows a further improvement over SVM: when the top 25 images are retrieved, precision with the ELM classifier reaches up to 94% on the Brodatz database and up to 96% on the MIT VisTex database, which is superior to other existing texture retrieval methods.
</description>
<pubDate>Tue, 14 May 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10366/143304</guid>
<dc:date>2019-05-14T00:00:00Z</dc:date>
</item>
<item>
<title>Segmentation and detection of cattle branding images using CNN and SVM classification</title>
<link>http://hdl.handle.net/10366/143303</link>
<description>This article presents a hybrid method that uses Convolutional Neural Networks (CNN) for segmentation and Support Vector Machines (SVM) for detection of cattle brandings. The experiments were performed on a dataset of cattle branding images. The metrics of Overall Accuracy, Recall, Precision, Kappa Coefficient, and Processing Time were used to assess the proposed tool. The results obtained were satisfactory, reaching an Overall Accuracy of 93% in the first experiment, with 39 brandings and 1,950 sample images, and an accuracy of 95% in the second experiment, with the same 39 brandings but 2,730 sample images. The processing times attained in the experiments were 32s and 42s, respectively.
</description>
<pubDate>Thu, 14 Mar 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10366/143303</guid>
<dc:date>2019-03-14T00:00:00Z</dc:date>
</item>
<item>
<title>Learning Representations from Spatio-Temporal Distance Maps for 3D Action Recognition with Convolutional Neural Networks</title>
<link>http://hdl.handle.net/10366/143302</link>
<description>This paper addresses the action recognition problem using skeleton data. A novel method is proposed that employs five Distance Maps (DMs), named Spatio-Temporal Distance Maps (ST-DMs), to capture spatio-temporal information from skeleton data for 3D action recognition. Of the five DMs, four capture the pose dynamics within a frame in the spatial domain, and one captures the variations between consecutive frames along the action sequence in the temporal domain. All DMs are encoded into texture images, and a Convolutional Neural Network is employed to learn informative features from these texture images for the action classification task. A statistics-based normalization method is also introduced to deal with the variable heights of subjects. The efficacy of the proposed method is evaluated on two datasets, UTD-MHAD and NTU RGB+D, achieving recognition accuracies of 91.63% and 80.36%, respectively.
</description>
<pubDate>Fri, 14 Feb 2020 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10366/143302</guid>
<dc:date>2020-02-14T00:00:00Z</dc:date>
</item>
<item>
<title>Index</title>
<link>http://hdl.handle.net/10366/143301</link>
<pubDate>Sun, 30 Jun 2019 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10366/143301</guid>
<dc:date>2019-06-30T00:00:00Z</dc:date>
</item>
</channel>
</rss>
