Image Recognition with Neural Networks

Artificial Neural Networks are a recent development modeled on biological neural networks. The strength of this tool is its ability to solve problems that are very hard to solve with traditional computing methods (i.e. with explicit algorithms). This work briefly explains Artificial Neural Networks and their applications, and describes how to implement a simple ANN for image recognition.
Artificial Neural Networks (ANNs) are an approach to problem solving that differs from traditional computing methods. Conventional computers use an algorithmic approach: if the specific steps the computer needs to follow are not known, the computer cannot solve the problem. In other words, traditional computing methods can only solve problems that we already understand and know how to solve. ANNs, however, are in some ways much more powerful, because they can solve problems that we do not know exactly how to solve. This is why their use has recently spread over a wide range of areas, including virus detection, robot control, intrusion detection systems, pattern (image, fingerprint, noise, etc.) recognition, and so on.
ANNs have the ability to adapt, learn, generalise, cluster, and organise data. There are many ANN architectures, including the Perceptron, Adaline, Madaline, Kohonen networks, BackPropagation, and many others. BackPropagation ANNs are probably the most commonly used, as they are simple to implement and effective. In this work we deal with BackPropagation ANNs.
A BackPropagation ANN contains one or more layers, each of which is linked to the next. The first layer is the “input layer”, which receives the initial input (e.g. pixels from a letter); the last is the “output layer”, which usually holds the input's identifier (e.g. the name of the input letter). The layers in between are called “hidden layer(s)”; a hidden layer propagates the previous layer's outputs forward to the next layer, and propagates the following layer's error backward to the previous layer. These are the main operations involved in training a BackPropagation ANN, which follows a few steps.
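The layered structure described above can be sketched as follows. This is a minimal illustration, not code from this article: the layer sizes, the sigmoid activation, and the random weight initialisation are all assumptions chosen for the example.

```python
import numpy as np

def sigmoid(x):
    # standard logistic activation, assumed here for illustration
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# assumed layer sizes: 4 inputs (e.g. pixels), 3 hidden nodes, 2 outputs
sizes = [4, 3, 2]
# one weight matrix linking each layer to the next
weights = [rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x, weights):
    # each layer propagates its outputs to the next layer
    a = x
    for W in weights:
        a = sigmoid(a @ W)
    return a

out = forward(np.array([0.0, 1.0, 1.0, 0.0]), weights)
print(out)  # one activation per output node, each between 0 and 1
```

The hidden layer has no meaning on its own; it exists only to pass transformed signals forward (and, during training, errors backward).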
A typical BackPropagation ANN is depicted below. The black nodes (on the far left) are the initial inputs. Training such a network involves two phases. In the first phase, the inputs are propagated forward to compute the output of each output node. Each of these outputs is then subtracted from its desired value, yielding an error for each output node. In the second phase, each output error is passed backward and the weights are adjusted. These two phases are repeated until the sum of the squared output errors falls to an acceptable value.
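The two training phases can be sketched in code. This is a hedged illustration only: the XOR training set, the single hidden layer of 4 nodes, the learning rate, and the error threshold are all assumptions made for the example, not values from this article.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# tiny assumed training set (XOR): inputs X and desired outputs T
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1 = rng.standard_normal((2, 4))    # input -> hidden weights
W2 = rng.standard_normal((4, 1))    # hidden -> output weights
lr = 0.5                            # assumed learning rate

history = []
for epoch in range(20000):
    # phase 1: propagate inputs forward to compute each output
    H = sigmoid(X @ W1)
    Y = sigmoid(H @ W2)
    E = T - Y                       # one error per output node
    sse = float(np.sum(E ** 2))     # sum of squared output errors
    history.append(sse)
    if sse < 0.01:                  # stop at an acceptable error
        break
    # phase 2: pass errors backward and adjust the weights
    dY = E * Y * (1 - Y)            # sigmoid derivative at the output
    dH = (dY @ W2.T) * H * (1 - H)  # error propagated back to hidden layer
    W2 += lr * H.T @ dY
    W1 += lr * X.T @ dH

print(history[0], history[-1])      # error before vs. after training
```

The loop alternates the two phases exactly as described: forward to measure the error, backward to reduce it, until the summed squared error is acceptably small.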