
Neural Network Toolbox

Author: Stefan Hausler
Source: http://ailen.org/

Abstract

Stefan Hausler. Neural Network Toolbox. The article describes the Neural Network Toolbox component of the MATLAB application package.

The Neural Network Toolbox

The Neural Network Toolbox makes it easier to use neural networks in MATLAB. The toolbox consists of a set of functions and structures that handle neural networks, so we do not need to write code for every activation function, training algorithm, etc. that we want to use.

The Neural Network Toolbox is contained in a directory called nnet. Type help nnet for a listing of help topics.

A number of demonstrations are included in the toolbox. Each example states a problem, shows the network used to solve it, and presents the final results. Lists of the demo and application scripts discussed in this guide can be obtained with help nndemos.

The Structure of the Neural Network Toolbox

The toolbox is based on the network object. This object contains information about everything that concerns the neural network, e.g. the number and structure of its layers, the connectivity between the layers, etc. MATLAB provides high-level network creation functions, like newlin (create a linear layer), newp (create a perceptron) or newff (create a feed-forward backpropagation network), to allow easy construction of networks. As an example we construct a perceptron with two inputs ranging from -2 to 2:

>> net = newp([-2 2;-2 2],1)

First the architecture parameters and the subobject structures

subobject structures:

inputs: {1x1 cell} of inputs
layers: {1x1 cell} of layers
outputs: {1x1 cell} containing 1 output
targets: {1x1 cell} containing 1 target
biases: {1x1 cell} containing 1 bias
inputWeights: {1x1 cell} containing 1 input weight
layerWeights: {1x1 cell} containing no layer weights

are shown. The latter contain information about the individual objects of the network. Each layer consists of neurons with the same transfer function net.transferFcn and net input function net.netInputFcn, which in the case of perceptrons are hardlim and netsum. If neurons should have different transfer functions, they have to be arranged in different layers. The parameters net.inputWeights and net.layerWeights specify, among other things, the applied learning functions and their parameters. The next paragraph contains the training, initialization and performance functions.
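As a short sketch of this, the subobject structures of the perceptron created above can be inspected directly from the command line (the field values shown in the comments are the defaults produced by newp):

```matlab
% Create a perceptron with two inputs ranging from -2 to 2 and one neuron
net = newp([-2 2; -2 2], 1);

% Inspect the transfer and net input functions of layer 1
disp(net.layers{1}.transferFcn)   % 'hardlim' for a perceptron
disp(net.layers{1}.netInputFcn)   % 'netsum'

% The learning function applied to the input weights
disp(net.inputWeights{1,1}.learnFcn)
```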

functions:

adaptFcn: 'trains'
initFcn: 'initlay'
performFcn: 'mae'
trainFcn: 'trainc'

The trainFcn and adaptFcn are used for the two different learning types: batch learning and incremental (on-line) learning. By setting the trainFcn parameter you tell MATLAB which training algorithm should be used, which in our case is the cyclical order incremental training/learning function trainc. The toolbox includes almost 20 training functions. The performance function determines how well the network is doing its task. For a perceptron it is the mean absolute error performance function mae; for linear regression the mean squared error performance function mse is usually used. The initFcn is the function that initializes the weights and biases of the network. To get a list of the available functions, type help nnet. To change one of these functions to another one in the toolbox, or to one that you have created, just assign the name of the function to the parameter, e.g.

>> net.trainFcn = 'mytrainingfun';

The parameters that concern these functions are listed in the next paragraph.

By changing these parameters you can change the default behavior of the functions mentioned above. The parameters you will use the most are probably the components of trainParam. The most used of these are net.trainParam.epochs which tells the algorithm the maximum number of epochs to train, and net.trainParam.show that tells the algorithm how many epochs there should be between each presentation of the performance. Type help train for more information.
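Putting these pieces together, training a perceptron with modified trainParam values might look like the following sketch (the logical AND data and the parameter values are illustrative, not part of the original text):

```matlab
% Perceptron with two binary inputs and one neuron
net = newp([0 1; 0 1], 1);

% Training data: inputs as columns, targets for logical AND
P = [0 0 1 1; 0 1 0 1];
T = [0 0 0 1];

% Adjust the default training parameters
net.trainParam.epochs = 50;   % maximum number of training epochs
net.trainParam.show   = 10;   % epochs between performance displays

% Train with the network's trainFcn (trainc for perceptrons)
net = train(net, P, T);

% Simulate the trained network on the training inputs
Y = sim(net, P);
```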

The weights and biases are also stored in the network structure:

weight and bias values:

IW: {1x1 cell} containing 1 input weight matrix
LW: {1x1 cell} containing no layer weight matrices
b: {1x1 cell} containing 1 bias vector

The .IW{i,j} component is a two-dimensional cell array that holds the weight matrix of the connection between input j and network layer i. The .LW{i,j} component holds the weight matrix for the connection from network layer j to layer i. The cell array b contains the bias vector for each layer.
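For the perceptron created above, the weights and bias can be read and assigned directly through these fields (a sketch; the assigned values are arbitrary):

```matlab
net = newp([-2 2; -2 2], 1);

% Weight matrix from input 1 to layer 1, and bias of layer 1
W = net.IW{1,1};   % 1x2 matrix (one neuron, two inputs)
b = net.b{1};      % 1x1 bias vector

% Weights and biases can also be set manually
net.IW{1,1} = [1 -1];
net.b{1}    = 0.5;
```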
