Trubarov Andrey
Faculty: Computer Science and Technology
Speciality: System Programming
Theme of master's work: The organisation of neural network training by means of a genetic optimization algorithm
Scientific advisor: Svyatnyy Vladimir

Abstract

Introduction

Every day the problems solved by means of computer systems become more and more complicated. To solve them, a multitude of methods, technologies and concepts has been devised, and these are constantly being improved. Alongside the firmly established and widely applied methods, other, non-conventional approaches are being developed and used ever more widely. One of them is connected with artificial neural networks. The first attempt to create and study artificial neural networks was the paper by W. McCulloch and W. Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity" (1943) [1], in which the main principles of constructing artificial neurons and neural networks were formulated. Although this work was only the first step, many of the ideas described in it remain relevant today.

Artificial neural networks are devices for parallel computation consisting of a set of cooperating simple processors. Such processors are usually extremely simple, especially in comparison with the processors used in personal computers. Each processor of such a network deals only with the signals that it receives and the signals that it periodically sends to other processors; nevertheless, connected into a sufficiently large network with controlled interaction, such locally simple processors are together capable of performing rather complicated tasks [2].

Artificial neural networks are directly related to biology, because they consist of elements whose functionality is similar to most functions of a biological neuron. These elements can be organised in accordance with the anatomy of the brain, and they exhibit a considerable number of properties inherent in the brain. For example, neural networks can learn from experience, can generalise previous precedents to new cases, and can extract the significant properties of input information that contains redundant data.

The human nervous system is stunningly complex. About 10^11 neurons participate in approximately 10^15 transmitting connections, some of them a metre or more in length. Each neuron possesses many qualities in common with other elements of the body, but its unique ability is the reception, processing and transmission of electrochemical signals along the nervous pathways that form the communication system of the brain [3].

An artificial neural network, like its biological prototype, must be trained in order to be able to solve the problems put before it. There is a multitude of different training algorithms for neural networks, classified by various criteria. One of the most promising directions is stochastic algorithms, in particular the genetic algorithm. The genetic algorithm is based on the biological principle of evolution, in which one generation is replaced over time by another and the better-adapted individuals have a greater chance of surviving and continuing their line. In the genetic algorithm the role of generations is played by iterations, and the role of individuals by candidate solutions, of which those closest to the desired result continue to participate in solving the problem.
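To make this principle concrete, the following minimal C++ sketch (an illustration only, not the system developed in this work; all names and the toy fitness function are assumptions) evolves a population of real-valued vectors that can be read as candidate sets of network weights: in every generation the better-adapted half is selected, recombined and mutated to produce the next one.

```cpp
#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

// A candidate solution ("individual"): one possible weight vector of the network.
struct Individual {
    std::vector<double> genes;   // network weights encoded as a chromosome
    double fitness = 0.0;        // higher is better
};

// Toy fitness used only for illustration; in the real training task it would be
// something like 1 / (1 + E(w)), where E is the network error on the training set.
double evaluate(const std::vector<double>& genes) {
    double e = 0.0;
    for (double g : genes) e += g * g;
    return 1.0 / (1.0 + e);
}

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> init(-1.0, 1.0);
    std::normal_distribution<double> mutation(0.0, 0.1);

    const int populationSize = 30, genomeLength = 10, generations = 100;

    // Initial generation: random weight vectors.
    std::vector<Individual> population(populationSize);
    for (auto& ind : population) {
        ind.genes.resize(genomeLength);
        for (auto& g : ind.genes) g = init(rng);
        ind.fitness = evaluate(ind.genes);
    }

    for (int gen = 0; gen < generations; ++gen) {
        // Selection: sort by fitness; the better-adapted half survives.
        std::sort(population.begin(), population.end(),
                  [](const Individual& a, const Individual& b) {
                      return a.fitness > b.fitness;
                  });
        // The weaker half is replaced by offspring of the surviving half.
        for (int i = populationSize / 2; i < populationSize; ++i) {
            const Individual& p1 = population[rng() % (populationSize / 2)];
            const Individual& p2 = population[rng() % (populationSize / 2)];
            for (int g = 0; g < genomeLength; ++g)     // uniform crossover
                population[i].genes[g] = (rng() % 2) ? p1.genes[g] : p2.genes[g];
            for (auto& g : population[i].genes)        // mutation
                g += mutation(rng);
            population[i].fitness = evaluate(population[i].genes);
        }
    }

    std::cout << "Best fitness after evolution: "
              << std::max_element(population.begin(), population.end(),
                                  [](const Individual& a, const Individual& b) {
                                      return a.fitness < b.fitness;
                                  })->fitness << std::endl;
    return 0;
}
```

When the genetic algorithm is applied to network training, the chromosome length equals the total number of weights in the network, so the algorithm searches the weight space directly instead of following the error gradient.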
The purposes and problems

The purpose of the current work is to study the applicability of the genetic algorithm to the training of a neural network, to organise the training of a network by means of a genetic algorithm using the problem of character recognition as an example, and to determine the optimal structure of the network. To achieve this goal, the following problems will be solved:
Subject urgency

Neural networks make it possible to solve a very wide range of problems: pattern recognition and classification, decision-making and control, forecasting and approximation, data compression and associative memory. The most important property of neural networks, testifying to their huge potential and broad applied possibilities, is the parallel processing of information by all neurons simultaneously. Another equally important property of a neural network is its ability to learn and to generalise the acquired knowledge. The network possesses features of so-called artificial intelligence: trained on a limited training sample, it generalises the accumulated information and produces the expected response to data that was not processed during training. Despite a significant number of already known practical applications of artificial neural networks, the possibilities of their further use for signal processing have not been studied exhaustively, and it may be assumed that neural networks will remain a means of developing information technology for many years to come [4].

Prospective scientific novelty and practical results

A system for organising the training and testing of a neural network assumes setting parameters that concern both the structure of the network (its topology) and the training algorithms applied to it. For a neural network such parameters are, for example, the number of neurons in a layer and the number of layers. Parameters that characterise the training algorithm directly are, for example, the various learning-rate factors, the initial values of the network weights, and the admissible training error. Determining the optimal network parameters for specific tasks is the pressing question addressed in the current work.

Review of research and development on the subject

Worldwide there is a considerable number of neural network implementations, in various programming languages, for solving the most diverse problems. One of the best known is Flood, an open-source neural network library in C++ [5]. The developments of the European Centre for Soft Computing [6] are also promising. As for the combination of the genetic algorithm and neural network training, there are several developments as well. One of the most interesting is NeuroGen [7]. NeuroGen includes a hybrid parallel algorithm of genetic search and standard backpropagation for training neural networks on distributed-memory systems. The package contains several test programs that can be compiled and run both on a cluster and as an ordinary standalone C application. At the national level there are very few studies and developments on this subject in our country. Much more material can be found at our neighbours', in Russia. A short list of software on neural networks and the genetic algorithm can be found in the corresponding section of the "Links" page. It should be noted that graduates of DonNTU make a powerful contribution to research on neural networks and genetic algorithms [8].

Own results

An object-oriented model of a neural network and the initial version of the testing and analysis system have been designed and developed. The neural network is characterised by the number of layers, which in turn are characterised by the number of neurons. A neuron is a mathematical model of a simple processor that has several inputs and one output (fig. 2) [9]. The vector of input signals is transformed by the neuron into an output signal using three functional blocks: local memory, a summation block, and a nonlinear transformation block.

Fig. 2. Structure of a neuron

The local memory vector contains the weight coefficients with which the input signals are interpreted by the neuron. These weight variables are an analogue of the sensitivity of plastic synaptic contacts; the choice of weights determines the particular integrating function of the neuron. In the summation block the total input signal (usually denoted net) is accumulated, equal to the weighted sum of the inputs:

net = Σ wᵢ·xᵢ, where xᵢ are the input signals and wᵢ the corresponding weights.
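A minimal C++ sketch of this neuron model is given below (illustrative only, with a sigmoid assumed as the nonlinear block; it is not the code of the developed system): the weight vector plays the role of the local memory, the summation block accumulates net, and the nonlinear block maps net to the output signal.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// One artificial neuron: local memory (weights), summation block and
// nonlinear transformation block.
struct Neuron {
    std::vector<double> weights;  // local memory: one weight per input
    double bias = 0.0;

    double activate(const std::vector<double>& inputs) const {
        // Summation block: net = weighted sum of the inputs (plus bias).
        double net = bias;
        for (std::size_t i = 0; i < weights.size(); ++i)
            net += weights[i] * inputs[i];
        // Nonlinear transformation block: sigmoid activation.
        return 1.0 / (1.0 + std::exp(-net));
    }
};

// Example usage:
//   Neuron n; n.weights = {0.5, -0.3}; n.bias = 0.1;
//   double y = n.activate({1.0, 2.0});   // output lies in (0, 1)
```

A layer of the network is then simply a collection of such neurons receiving the same input vector, and a multilayer network feeds the outputs of one layer to the inputs of the next.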
The passage of signals through a two-layer neural network from its inputs to its outputs is shown in fig. 3.

Fig. 3. Computation of the output signals of a two-layer neural network (animation: 7 frames, duration 7 seconds, file size 24 KB)

The neural network is trained with the help of the error backpropagation algorithm. The error signals propagate from the outputs of the neural network towards its inputs, in the direction opposite to the forward propagation of signals in the normal operating mode [10].
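A simplified sketch of one such backward step for a single sigmoid output neuron is shown below (an illustration under assumed names, not the full algorithm; in a multilayer network the computed deltas are further propagated to the hidden layers):

```cpp
#include <cstddef>
#include <vector>

// One backpropagation step for a single output neuron with sigmoid activation.
// 'output' is the neuron's forward result for 'inputs'; 'target' is the desired value.
void backpropStep(std::vector<double>& weights, double& bias,
                  const std::vector<double>& inputs,
                  double output, double target, double learningRate) {
    // Error signal: delta = (target - output) * sigmoid'(net),
    // where sigmoid'(net) = output * (1 - output).
    double delta = (target - output) * output * (1.0 - output);

    // The weights are corrected in proportion to the error signal and the
    // input that passed through each connection.
    for (std::size_t i = 0; i < weights.size(); ++i)
        weights[i] += learningRate * delta * inputs[i];
    bias += learningRate * delta;
}
```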
Conclusions

Neural networks are one of the most interesting directions among methods for solving complex problems of various kinds, since they belong to the field of artificial intelligence, which leaves no researcher indifferent. By means of neural networks it is possible to solve problems with considerably less computing power than standard methods would require. Training is the very essence of neural networks. In this work it is planned to study several training methods, paying special attention to the genetic algorithm. Genetic algorithms are a rather powerful tool and can be applied successfully to a wide class of applied problems, including those that are difficult, and sometimes even impossible, to solve by other methods. However, genetic algorithms, like other methods of evolutionary computation, do not guarantee that the globally optimal solution will be found in an acceptable time, nor that it will be found at all; but they are good at finding a "good enough" solution "quickly enough". Where a problem can be solved by specialised methods, such methods will almost always be more effective than a GA in both the speed and the accuracy of the solutions found. The main advantage of genetic algorithms is that they can be applied even to difficult problems for which no specialised methods exist. Even where existing techniques work well, improvements can be achieved by combining them with genetic algorithms.

Sources