
Article title CLASSIFICATION ON THE BASIS OF THE MODIFIED STRUCTURE OF ARTIFICIAL NEURAL NETWORK VIA GENETIC ALGORITHMS
Authors S.S. Alkhasov, A.N. Tselykh, A.A. Tselykh
Section SECTION III. AUTOMATION AND CONTROL
Month, Year 10, 2016
Index UDC 004.891.2
DOI 10.18522/2311-3103-2016-10-110122
Abstract This paper presents a new integrated approach to classification by means of artificial neural networks combined with genetic algorithms. An overview of previously developed methods for modifying artificial neural networks with genetic algorithms (using the inversion operator; dividing the population into subpopulations; varying the population size) is given. We describe a modification of the traditional neural-network classification algorithm that consists of two stages. The first stage is the search for a suboptimal (optimal under the specified conditions) architecture of the artificial neural network: the number of neurons in the hidden layer, the training algorithm, the learning rate, the type of activation function, etc. The second stage is the adjustment of the weight coefficients and biases towards the minimum of the fitness function of the genetic algorithm. We also study the influence of the input variables on the classification error. In the final part of the paper we compare the traditional and the new advanced approaches. The following parameters of the artificial neural network were obtained: a set of input features (12 of 13), the number of neurons in the hidden layer (46), the learning rate (0.1416), the type of activation function (logistic), and the training algorithm (Levenberg–Marquardt). The advanced neural network model achieves an average mean-square error (MSE) roughly twice as low as that of the conventional neural network based on gradient optimization methods (0.08 vs. 0.15). However, classification takes about ten times longer than with the traditional artificial neural network.
Thus, the feasibility of combining an artificial neural network with genetic-algorithm optimization of its architecture is determined by the accuracy requirements of the end user. This approach, however, offers more flexibility for working with input datasets whose internal structure changes periodically. We therefore recommend the developed integrated model when distributed computing resources are available.
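The second stage described above can be illustrated with a minimal, self-contained sketch: a genetic algorithm with elitist selection, one-point crossover, and Gaussian mutation tunes the weights and biases of a fixed one-hidden-layer network with a logistic activation so as to minimize the MSE fitness function. All names, layer sizes, and GA settings below are illustrative assumptions; they are not the authors' implementation (the references suggest the original work was done in MATLAB).

```python
# Illustrative sketch only: GA tuning of weights/biases of a fixed
# one-hidden-layer logistic network to minimize MSE (stage 2 of the paper).
# Network size, population size, mutation settings, etc. are assumptions.
import math
import random

random.seed(0)

def logistic(x):
    # clamp to avoid math.exp overflow for extreme genomes
    x = max(-60.0, min(60.0, x))
    return 1.0 / (1.0 + math.exp(-x))

def predict(genome, x, hidden=4):
    # genome layout: [w1 (hidden), b1 (hidden), w2 (hidden), b2 (1)]
    w1 = genome[0:hidden]
    b1 = genome[hidden:2 * hidden]
    w2 = genome[2 * hidden:3 * hidden]
    b2 = genome[3 * hidden]
    h = [logistic(w1[i] * x + b1[i]) for i in range(hidden)]
    return logistic(sum(w2[i] * h[i] for i in range(hidden)) + b2)

def mse(genome, data):
    # the GA fitness function: mean-square classification error
    return sum((predict(genome, x) - y) ** 2 for x, y in data) / len(data)

def evolve(data, hidden=4, pop_size=30, generations=200,
           mutation_rate=0.3, sigma=0.5):
    n_genes = 3 * hidden + 1
    pop = [[random.uniform(-1, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: mse(g, data))   # elitist selection: keep best half
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes)  # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_genes):            # Gaussian mutation
                if random.random() < mutation_rate:
                    child[i] += random.gauss(0, sigma)
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda g: mse(g, data))

# Toy binary-classification data: label is 1 when the input exceeds 0.5
data = [(x / 10.0, 1.0 if x > 5 else 0.0) for x in range(11)]
best = evolve(data)
print("MSE of best individual:", mse(best, data))
```

Because the best half of the population is carried over unchanged, the best fitness is monotonically non-increasing across generations; stage 1 of the paper would wrap a similar loop around the architecture parameters themselves (hidden-layer size, learning rate, activation type) rather than the weights.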


Keywords Classification; artificial neural networks; genetic algorithms.
References 1. Alkhasov S.S., Tselykh A.N., Tselykh A.A. Application of cluster analysis for the assessment of the share of fraud victims among bank card holders, In: 8th International Conference on Security of Information and Networks. ACM, New York, 2015, pp. 103-106.
2. Waszczyszyn Z. (Ed.) et al. Neural Networks in the Analysis and Design of Structures. Springer, Wien, 1999.
3. Bozhich, V.I., Lebedev, O.B., Shnitser, Yu.L. Razrabotka geneticheskogo algoritma obucheniya neyronnykh setey [The development of genetic algorithm for training neural networks], Izvestiya TRTU [Izvestiya TSURE], 2001, No. 4 (22), pp. 170-174.
4. Shumkov E.A. Ispol'zovanie geneticheskikh algoritmov dlya obucheniya neyronnykh setey [The use of genetic algorithms for training neural networks], Nauchnyy zhurnal KubGAU [Scientific journal of Kubsau], 2013, No. 91, pp. 1-9. Available at: http://ej.kubagro.ru/2013/07/pdf/78.pdf.
5. Panchenko T.V. Geneticheskie algoritmy [Genetic algorithms]. Astrakhan': AGU, 2007, 87 p.
6. Radcliffe N.J. Genetic Neural Networks on MIMD Computers. Ph.D. thesis, 1990.
7. Stanley K.O., Miikkulainen R. Evolving Neural Networks through Augmenting Topologies, Evolutionary Computation, 2002, No. 10, pp. 99-127.
8. Ayupov I.R. Parametricheskiy metod obucheniya neyronnoy seti pri reshenii zadach prognozirovaniya: diss. … kand. tekhn. nauk [A parametric method for training the neural network in the forecasting problem. Cand. of eng. sc. diss.], 2015.
9. Gomez F., Miikkulainen R. Incremental Evolution of Complex General Behavior, Adaptive Behavior, 1997, No. 5, pp. 317-342.
10. Tsoy Yu.R. Neyroevolyutsionnyy algoritm i programmnye sredstva dlya obrabotki izobrazheniy: diss. … kand. tekhn. nauk [Neuroevolutionary algorithm and software for image processing. Cand. of eng. sc. diss.], 2007.
11. Tsoy Yu.R., Spitsyn V.G. Issledovanie geneticheskogo algoritma s dinamicheski izmenyaemym razmerom populyatsii [A study of genetic algorithm with dynamically adjustable population size], V Mezhdunarodnaya konferentsiya «Intellektual'nye sistemy i intellektual'nye SAPR» (IEEE AIS’05/CAD–2005) [V international conference "Intelligent systems and intelligent CAD systems" (IEEE AIS'05/CAD–2005)]. Vol. 1. Moscow: Fizmatlit, 2005, pp. 241-246.
12. Rutkovskaya D., Pilin'skiy M., Rutkovskiy L. Neyronnye seti, geneticheskie algoritmy i nechetkie sistemy [Neural networks, genetic algorithms and fuzzy systems]. Moscow: Goryachaya liniya – Telekom, 2006, 383 p.
13. Gladkov L.A., Kureychik V.V., Kureychik V.M. Geneticheskie algoritmy [Genetic algorithms]. M.: Fizmatlit, 2006, 320 p.
14. Lyakhov A.L., Aleshin S.P. Iskusstvennaya neyronnaya set' kak izmeritel'nyy instrument adekvatnosti modeli s adaptivnym klassom tochnosti [Artificial neural network as a measuring tool of the adequacy of the model with adaptive precision class], Matematichni mashini i sistemi [Mathematical machines and systems], 2010, No. 2, pp. 61-66.
15. Barsegyan A.A., Kupriyanov M.S., Stepanenko V.V., Kholod I.I. Tekhnologii analiza dannykh. Data Mining, Visual Mining, Text Mining, OLAP [Data analysis technologies. Data Mining, Visual Mining, Text Mining, OLAP]. St. Petersburg: BKhV-Peterburg, 2008, 384 p.
16. Medvedev V.S., Potemkin V.G. Neyronnye seti. MATLAB 6 [Neural network. MATLAB 6]. Moscow: Dialog-MIFI, 2002, 496 p.
17. Banks D. (Ed.) et al. Classification, Clustering and Data Mining Applications. Springer, Heidelberg, 2004.
18. Burnaev E.V., Erofeev P.D. Vliyanie initsializatsii parametrov na vremya obucheniya i tochnost' nelineynoy regressionnoy modeli [The effect of initialization parameters on the training time and the accuracy of the nonlinear regression model], Informatsionnye protsessy [Information processes], 2015, No. 15, pp. 279-297.
19. Sichinava Z.I. Neyrosetevye algoritmy analiza povedeniya respondentov: diss. … kand. tekhn. nauk [Neural network algorithms to analyze the behavior of respondents. Cand. of eng. sc. diss.], 2014.
20. Kruglov V.V., Dli M.I., Golunov R.Yu. Nechetkaya logika i iskusstvennye neyronnye seti [Fuzzy logic and artificial neural networks]. Moscow: Fizmatlit, 2001, 201 p.
21. Stathakis D. How many hidden layers and nodes?, International Journal of Remote Sensing, 2009, No. 30, pp. 2133-2147. Available at: http://www.academia.edu/711697/How_many_hidden_layers_and_nodes.
22. Thomas A.J., Petridis M., Walters S.D., Gheytassi S.M., Morgan R.E. On Predicting the Optimal Number of Hidden Nodes, In: 2015 International Conference on Computational Science and Computational Intelligence, 2015, pp. 565-570.
23. Yakh"yaeva G.E. Osnovy teorii neyronnykh setey [Fundamentals of the theory of neural networks]. Available at: http://www.intuit.ru/studies/courses/88/88/info.
24. Huang G.-B. Learning capability and storage capacity of two-hidden-layer feedforward networks, IEEE Transactions on Neural Networks, 2003, No. 14, pp. 274-281.
25. Geneticheskie algoritmy v MATLAB [Genetic algorithms in MATLAB], 2011. Available at: http://habrahabr.ru/post/111417/.
