This is a neural network simulator, written in JavaScript. The networks that can be constructed and run using the simulator have the following features:
Three layers of neuron-like units: an Input, a Hidden, and an Output Layer.
Each unit in the Input Layer is connected to each unit in the Hidden Layer, and each Hidden Layer unit
is connected to each unit in the Output Layer.
Each of the connections has a weight (a floating point number) associated with it. The starting weights are random numbers
between -1 and 1.
Learning in the network means changing the weights so that the network becomes progressively better at an assigned task.
The task the network has to learn is to map each of a set of activation patterns in the Input Layer to a particular activation pattern in the Output Layer. The Output Layer pattern the network should produce is called the target pattern.
A sigmoid activation function for the units. A unit's activation is near 1 if the input it receives from other units is a large positive value,
near 0 if it receives a large negative input, and exactly 0.5 if it receives a zero input.
Backpropagation as the learning rule (a minimal code sketch of these mechanics appears at the end of this overview).
If you wish to learn more about neural networks, please consult the FAQ for neural networks on the Internet, or one of the many introductory textbooks about the topic (such as Anderson, J. A. (1995). An Introduction to Neural Networks. Cambridge: The MIT Press).
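The following is a minimal sketch, in plain JavaScript, of the mechanics listed above: random starting weights between -1 and 1, the sigmoid activation function, and the backpropagation update for the hidden-to-output weights. It is for illustration only; the function and variable names are not taken from the simulator's source.

// Random starting weight between -1 and 1.
function randomWeight() {
  return Math.random() * 2 - 1;
}

// Sigmoid activation: near 1 for a large positive input, near 0 for a large
// negative input, exactly 0.5 for a zero input.
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

// Create a rows x cols weight matrix filled with random starting weights.
function createWeights(rows, cols) {
  return Array.from({ length: rows }, () =>
    Array.from({ length: cols }, randomWeight));
}

// Forward pass through one layer: each receiving unit sums its weighted
// inputs and passes the sum through the sigmoid.
function forward(inputs, weights) {
  return weights[0].map((_, j) =>
    sigmoid(inputs.reduce((sum, a, i) => sum + a * weights[i][j], 0)));
}

// One backpropagation step for the hidden-to-output weights (the update for
// the input-to-hidden weights is analogous): the weight change is the
// learning rate times the output unit's error signal times the sending
// unit's activation.
function updateOutputWeights(weights, hidden, output, target, learningRate) {
  output.forEach((o, j) => {
    const delta = (target[j] - o) * o * (1 - o); // error times sigmoid derivative
    hidden.forEach((h, i) => {
      weights[i][j] += learningRate * delta * h;
    });
  });
}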
Instructions
The user can set the following features of the networks:
The number of units in the three layers. At most 20 units can be defined per layer (this limitation comes from the way unit activations are displayed, not from the network itself). The default number of units is 2 in the Input Layer, 3 in the Hidden Layer, and 1 in the Output Layer.
The learning rate, that is, the rate of change of the connection weights. The default value is 0.5.
The task to learn. The default task is the logical function 'XOR': in the default network (see above) the single unit in the Output Layer should
have an activation of 0 if the input pattern is {0, 0} or {1, 1}, and it should have an activation of 1 if the input pattern is {0, 1} or {1, 0}.
Setting up a simulation:
Determine the task by defining the input and target patterns in their text
areas. Each row in the input pattern text area corresponds to an input
pattern. In each input pattern, a numerical value represents the activation of
an Input Layer unit. Each row in the target pattern text area corresponds
to a target pattern. In each target pattern, a numerical value represents the
activation of an Output Layer unit. Each input pattern must have its
corresponding target pattern, and these two patterns must have the same row
number (within their respective text areas). Each value is separated from its
neighbours by a space. In the current version of this simulator, only 1's and
0's are acceptable input and target pattern values.There should not be an
empty row after the last row in either text areas. There is no limitation
for the number of input and target patterns (except, of course, the memory
limitations of your PC or the speed of simulation).
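For example, the default XOR task described above would be entered as four rows in the input pattern text area and four matching rows in the target pattern text area:

Input patterns:
0 0
0 1
1 0
1 1

Target patterns:
0
1
1
0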
Create the network by setting the learning rate and the number of units
in each Layer, and by clicking on the Create network button. The number
of Input Layer units has to match the number of values in an input pattern
row, and the number of Output Layer units has to match the number of
values in a target pattern row (see above).
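For illustration only, assuming the patterns have already been parsed into arrays of numbers, a check of this kind could look like the following sketch (the function name and arguments are not taken from the simulator's source):

// True if every input row has one value per Input Layer unit, every target
// row has one value per Output Layer unit, and the two text areas contain
// the same number of rows.
function patternsMatchNetwork(inputPatterns, targetPatterns, numInputUnits, numOutputUnits) {
  return inputPatterns.length === targetPatterns.length &&
         inputPatterns.every(row => row.length === numInputUnits) &&
         targetPatterns.every(row => row.length === numOutputUnits);
}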
Running a simulation:
If you want the activation values to be shown while a simulation is
running, check the corresponding checkbox; otherwise, un-check it.
The activation value of each unit will be shown as one of five colours
in a square representing the unit. White corresponds to activation
values 0-0.2, and increasingly darker shades of red correspond to
higher activation values, with the darkest red representing activation
values 0.8-1.0.
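A five-step mapping of this kind could be sketched as follows; only the white and darkest-red endpoints are described above, so the intermediate colour names and the exact thresholds are assumptions, not the simulator's actual values:

// Illustrative only: map an activation value in [0, 1] to one of five shades.
function activationColour(activation) {
  if (activation <= 0.2) return 'white';
  if (activation <= 0.4) return 'mistyrose';   // lightest red (assumed shade)
  if (activation <= 0.6) return 'lightcoral';  // assumed shade
  if (activation <= 0.8) return 'indianred';   // assumed shade
  return 'darkred';                            // darkest red
}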
You can set the number of simulation cycles to run.
The network's performance on the assigned task is shown by the errors
it makes. The error measure used here is the total sum of squared errors.
The smaller this value, the better the performance of the network, and it
should decrease during learning. Note that error values smaller than
0.0001 are expressed in scientific notation (only the first part of which
fits into the textbox).
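As an illustration of this error measure, here is a sketch assuming the network's outputs and the targets for all patterns are available as arrays of arrays (the function name is not taken from the simulator's source):

// Total sum of squared errors over all patterns: for each pattern, sum the
// squared difference between each Output Layer activation and its target.
function totalSumOfSquaredErrors(outputs, targets) {
  let total = 0;
  for (let p = 0; p < targets.length; p++) {
    for (let j = 0; j < targets[p].length; j++) {
      const diff = targets[p][j] - outputs[p][j];
      total += diff * diff;
    }
  }
  return total;
}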
You can run the simulation by clicking on the Run button. You can reset and restart the simulation by clicking on the Create network button again.