Hernandez Pedro and Espitia Edinson
This systematic mapping study tracked the scientific literature addressing analogies as a didactic strategy in science teaching. An analogy can be understood as a comparison between existing knowledge and new knowledge that, through the similarities it highlights, leads to a better understanding of the new knowledge; in other words, students' own concepts are used to introduce new concepts through comparisons between the two. The purpose of this study was to identify, analyze, synthesize and evaluate research on this topic, in order to characterize the models of analogy use, the most common didactic strategies, the research methodologies in this field, and how the learning effectiveness of working with analogies is evaluated. The methodology used was the systematic mapping study: five questions were posed to guide the information-tracking process. Electronic documents in English from the last twenty years were then traced in five databases related to the educational field. Finally, in response to the purpose of the study, it is concluded that, broadly speaking, research methodologies in this field are both quantitative and qualitative; that analogies are implemented with resources such as images, illustrations, textual indications and audiovisual aids; and that the effectiveness of using analogies is usually evaluated with multiple-choice tests and with oral tests in which students create their own analogies.
Analogies, science teaching, analog model.
Bita Bayat, Department of Computer Engineering, Azad University, Safadasht, Tehran, Iran
The aim of this paper was to optimize the system and method of identifying communication systems and evaluating the scope of system communication. An algorithmic technique was used for the simulation. The data set was the Mehr Bank data set in Iran, comprising 80 connection routes and two predicted classes (optimal and distorted). The algorithms used were a combined neural network and genetic algorithm, and a support vector machine (SVM). The results showed that, as the number of distorted-route cases decreases and the number of optimal routes increases, the accuracy of detecting the connection routes of bank users on optimal routes increases. The combined neural network and genetic algorithm improves the accuracy of detecting connection paths to bank users relative to the support vector machine. By recognizing this information, the system proposed in this paper can transfer less data for the same amount of information. Both types of algorithm were used to assess the accuracy and power of the algorithms in identifying and monitoring the connection paths of inter-system communication. Examination of the two approaches showed that, for the main tasks of classifying and identifying interconnection pathways, the combined neural network and genetic algorithm is more successful: its rate of identification and classification of computer communication systems was higher than that of the support vector machine (SVM).
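The comparison described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the data are synthetic (the Mehr Bank data set is not public here), and the "combined neural network and genetic algorithm" is approximated by a small neural network whose hyperparameters are chosen by a minimal genetic search, compared against a plain SVM.

```python
# Hypothetical sketch: neural network tuned by a tiny genetic algorithm
# vs. an SVM, on synthetic two-class "route" data (optimal/distorted).
import random
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

random.seed(0)
X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

def fitness(genome):
    """Validation accuracy of an MLP built from a (hidden, alpha) genome."""
    hidden, alpha = genome
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), alpha=alpha,
                        max_iter=300, random_state=0)
    clf.fit(X_tr, y_tr)
    return clf.score(X_val, y_val)

# Minimal GA: random population, keep the fittest, mutate the survivors.
pop = [(random.choice([4, 8, 16, 32]), 10 ** random.uniform(-5, -1))
       for _ in range(6)]
for _ in range(3):                               # a few generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:3]
    pop = parents + [(max(2, h + random.choice([-4, 4])),
                      a * random.choice([0.5, 2.0]))
                     for h, a in parents]

ga_nn_acc = fitness(max(pop, key=fitness))
svm_acc = SVC().fit(X_tr, y_tr).score(X_val, y_val)
print(f"GA-tuned NN accuracy: {ga_nn_acc:.3f}, SVM accuracy: {svm_acc:.3f}")
```

In a real study the genetic search would also evolve weights or architecture and would be evaluated on a held-out test set rather than the same validation split used for selection.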
Combined Neural Network and Genetic Algorithm, Support Vector Machine, Communication Systems, Connection Paths.
Seyed Modaresi1,2, Aomar Osmani1, Mohammadreza Razzazi2, Abdolghani Chibani3, 1Sorbonne Paris Nord University, 2Amirkabir University of Technology and Institute for Research in Fundamental Sciences (IPM), 3University Paris-Est Creteil
The Internet of Things (IoT) generates long, heterogeneous series of data; this is particularly the case with human activity recognition. Segmentation is a common bias used to divide this long (possibly infinite) data stream into a set of smaller, meaningful finite segments and thus obtain a more straightforward model. It is often defined by researchers using their prior knowledge and therefore adds uncontrollable biases to their models. In this paper, we define segmentation as a particular case of a general data-decomposition problem. We formalise it as a hyperparameter in order to control the added biases and to optimize the segmentation process for a given task. The impact of the biases should be described and evaluated in the data-decomposition step, in the problem-resolution (ML) step, and in the composition step (the connection between the ML results, the segments and the global problem results). In addition, our formalism makes it possible to dynamically select an appropriate segmentation method as a hyperparameter, independently of the considered application, which in turn reduces the implicitly added biases. Intensive experiments on several public datasets show the effectiveness of this approach.
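The idea of treating segmentation as a hyperparameter can be sketched in miniature: try several window lengths on a stream and keep the one that maximizes a downstream task score. This is an illustrative toy, not the paper's formalism; the synthetic signal, labels and feature choices below are all assumptions.

```python
# Hypothetical sketch: selecting a segmentation window length by
# validating it against the downstream classification task.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic sensor stream: activity 0 is a slow sine, activity 1 is noise.
stream = np.concatenate([np.sin(np.arange(2000) / 20),
                         rng.normal(0, 1, 2000)])
labels = np.concatenate([np.zeros(2000), np.ones(2000)])

def segment(stream, labels, win):
    """Non-overlapping windows; each segment labelled by majority vote."""
    n = len(stream) // win
    windows = stream[:n * win].reshape(n, win)
    y = labels[:n * win].reshape(n, win).mean(axis=1).round()
    # Simple per-segment features: mean and standard deviation.
    return np.c_[windows.mean(axis=1), windows.std(axis=1)], y

best_win, best_score = None, -1.0
for win in (25, 50, 100, 200):          # candidate segmentations
    X, y = segment(stream, labels, win)
    score = cross_val_score(RandomForestClassifier(random_state=0),
                            X, y, cv=3).mean()
    if score > best_score:
        best_win, best_score = win, score
print(f"selected window = {best_win}, CV accuracy = {best_score:.3f}")
```

The key point mirrored here is that the window length is chosen by the optimization loop rather than fixed a priori by the researcher.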
Activity Recognition, Segmentation, Data-Decomposition, Complex Event Recognition, IoT.
Rafael A. Spíndola and Tiago M. U. Araújo, Computer Center, Federal University of Paraiba, João Pessoa, Brazil
With increasing amounts of data to be analyzed and interpreted, Anomaly Detection emerges as one of the high-impact areas in the context of Data Mining. Its applications extend to the most diverse fields of human activity, notably medicine, administration, information science, economics and computing. In this work, we propose a support system for detecting aberrant events in stationary databases from Public Administration. The solution combines multiple supervised and unsupervised detection algorithms (OCSVM, LOF, CBLOF, HBOS, KNN, IForest and Robust Covariance) to classify events as anomalies. The results showed that, of the total events returned by the solution, 91.61% +/- 1.66% were correctly identified as outliers. There are therefore indications that the proposed solution can contribute to government audit support activities, as well as to management and decision-making processes, the latter arising from the interpretation of the phenomena present in the data.
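The ensemble idea above can be sketched with a majority vote over several unsupervised detectors. This is a simplified, hypothetical illustration using only scikit-learn detectors (IsolationForest, LocalOutlierFactor, OneClassSVM and EllipticEnvelope as a robust-covariance estimator); the paper's full set also includes CBLOF, HBOS and KNN-based detectors, and the data and contamination rate below are invented.

```python
# Hypothetical sketch: flag an event as an anomaly when a majority of
# unsupervised detectors vote -1 (outlier) on it.
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = np.r_[rng.normal(0, 1, (200, 2)),     # normal events
          rng.uniform(6, 8, (10, 2))]     # injected aberrant events

detectors = [IsolationForest(contamination=0.05, random_state=0),
             LocalOutlierFactor(contamination=0.05),
             OneClassSVM(nu=0.05),
             EllipticEnvelope(contamination=0.05, random_state=0)]

# Each detector returns -1 for outliers and +1 for inliers.
votes = np.stack([d.fit_predict(X) for d in detectors])
is_outlier = (votes == -1).sum(axis=0) > len(detectors) // 2
print(f"{is_outlier.sum()} of {len(X)} events flagged as anomalies")
```

Combining detectors this way trades a few missed outliers for fewer false positives, which matters in an audit-support setting where each flagged event is reviewed by a human.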
Anomaly detection, Outlier detection, (un)supervised learning, data mining.