Tools


The Tools menu item allows you to control several aspects of the edited neural network.
Here is a list of all the contained items:

Control Panel



The Control Panel is the tool that controls the behaviour of the neural net. It contains three buttons:


Run:

Starts the neural net beginning from the first pattern of the input data set.

Continue:

Restarts the neural net from the last pattern processed.

Stop:

Stops the neural net.

The Control Panel parameters are:

BatchSize:
The number of cycles in each epoch during which batch mode is active. If batch mode is enabled, insert here a number between 1 and TrainingPatterns. This parameter allows you to set up a mixed batch mode, where the net learns in batch mode from cycle 1 to the 'batch size' cycle, and then learns in on-line mode for the remaining cycles.
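An illustrative sketch of this mixed mode, using a toy one-weight model (y = w * x with squared error) rather than Joone's actual training loop; the function and parameter names are hypothetical:

```python
def grad(w, x, t):
    """Gradient of the squared error (w*x - t)**2 with respect to w."""
    return 2.0 * (w * x - t) * x

def train_epoch(w, patterns, batch_size, lr=0.1):
    """One epoch of 'mixed batch' training on a toy one-weight model.
    The first batch_size patterns contribute to a single accumulated
    (batch) update; the remaining patterns are applied on-line, with
    one weight update per pattern."""
    accumulated = 0.0
    for cycle, (x, t) in enumerate(patterns, start=1):
        g = grad(w, x, t)
        if cycle <= batch_size:
            accumulated += g              # batch mode: accumulate only
            if cycle == batch_size:
                w -= lr * accumulated     # single update for the whole batch
        else:
            w -= lr * g                   # on-line mode: update immediately
    return w
```

Setting batch_size equal to the number of patterns gives pure batch learning; setting it to 1 is essentially on-line learning.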

Epochs:

The total number of cycles for which the net is to be trained.

TrainingPatterns:

The total number of input rows for which the net is to be trained. This can be different from the number of rows read from the FileInput component (lastRow - firstRow + 1).

ValidationPatterns:
The total number of input rows for which the net is to be validated. Used in conjunction with the Validation parameter set to true.

Momentum:

The value of the momentum (see the literature about the back-propagation algorithm).

Learning Rate:

The value of the learning rate (see the literature about the back-propagation algorithm).
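These two parameters enter the classic back-propagation weight update. A minimal illustrative sketch in Python for a single weight (not Joone's actual code):

```python
def update_weight(w, gradient, prev_delta, learning_rate, momentum):
    """Back-propagation update with momentum for one weight:
    delta = -learning_rate * gradient + momentum * prev_delta.
    Returns the new weight and the delta (reused as prev_delta
    on the next cycle)."""
    delta = -learning_rate * gradient + momentum * prev_delta
    return w + delta, delta
```

A higher momentum lets part of the previous update carry over, smoothing oscillations; the learning rate scales the size of each gradient step.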

Learning:

True if the net is to be trained; otherwise set it to false.

LearningMode:
The learning algorithm chosen to train the network:
0 = On-Line Backprop (the default)
1 = Batch Backprop
2 = Resilient Backprop (RPROP)
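For reference, one common variant of the RPROP update for a single weight, sketched in Python (an illustration from the general literature, not Joone's implementation; the eta and step-size constants are typical textbook values, not editor parameters):

```python
def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One RPROP update for a single weight: the per-weight step size
    grows while the gradient keeps its sign and shrinks when the sign
    flips; only the sign of the gradient (not its magnitude) decides
    the direction of the weight change."""
    if grad * prev_grad > 0:          # same sign: accelerate
        step = min(step * eta_plus, step_max)
    elif grad * prev_grad < 0:        # sign change: slow down
        step = max(step * eta_minus, step_min)
        grad = 0.0                    # skip this update after a flip
    if grad > 0:
        w -= step
    elif grad < 0:
        w += step
    return w, grad, step
```

Because each weight keeps its own step size, RPROP is largely insensitive to the Learning Rate and Momentum parameters used by the back-propagation modes.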
Supervised:
True if the net must learn in supervised mode. Normally you don't need to set this parameter, as the editor understands and sets the right learning mode by checking the presence of the Teacher component.

Validation:

True if the net is to be tested on a validation data set. Used ONLY in conjunction with a Learning Switch component inserted in the net.

Pre-Learning:  

The number of initial cycles skipped by the learning algorithm. Normally this parameter is zero; it is used when there is a DelayLayer component in the net. In this case Pre-Learning must be set equal to the number of taps of that component, allowing its buffer to become filled before the learning cycle starts.

UseRMSE:
If true, the error is calculated as the Root Mean Squared Error (RMSE), otherwise as the Mean Squared Error (MSE).
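The two error measures, sketched in Python for a set of output/target pairs:

```python
import math

def mse(outputs, targets):
    """Mean Squared Error: the average of the squared differences
    between each output and its target."""
    return sum((o - t) ** 2 for o, t in zip(outputs, targets)) / len(outputs)

def rmse(outputs, targets):
    """Root Mean Squared Error: the square root of the MSE, so the
    error is expressed in the same units as the outputs."""
    return math.sqrt(mse(outputs, targets))
```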

The meaning of the pre-learning parameter

The Pre-Learning parameter serves to avoid making changes to the values of biases and synapses in the presence of a delayed layer. If these values were altered from the first cycle of the learning process, they would be adjusted using a wrong input pattern, obtained before the full input 'temporal window' has been presented to the network. For instance, suppose an input pattern is composed of one column as follows:


0.2
0.5
0.1
...

and an input delay layer with taps = 3 is present, then at the point where the network has read only the first two input values (0.2 and 0.5), the output of this first layer would be {0, 0, 0.2, 0.5}, because the remaining positions of its buffer still contain their initial zero values.

In this case the network would learn the wrong {0, 0, 0.2, 0.5} pattern.

Thus the Pre-Learning parameter must be set equal to the taps parameter so that the network starts to learn only when all the ‘taps’ values have been read.
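The effect can be simulated with a few lines of Python (a toy model of the delay buffer, not Joone code). With taps = 3, the first patterns emitted still contain the initial zeros, and those are exactly the cycles that Pre-Learning skips:

```python
def delay_outputs(values, taps):
    """Simulate a delay layer's output window: after each new input
    value the layer emits the last taps+1 values read, with zeros
    filling the positions whose values have not been read yet."""
    buffer = [0.0] * (taps + 1)
    outputs = []
    for v in values:
        buffer = buffer[1:] + [v]   # shift the newest value in
        outputs.append(list(buffer))
    return outputs
```

For the input column 0.2, 0.5, 0.1, ... the first emitted patterns are {0, 0, 0, 0.2}, then {0, 0, 0.2, 0.5}, and so on; only from the fourth input onward is the window completely filled with real data.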

To Do List

The To Do List contains a list of errors that prevent the neural network from running. When you try to run a neural network, a check is performed; in case of errors, the network doesn't start and the following message is displayed:



In this case, you must open the to-do list, where a list of all the errors is shown, as in the following example:



The first word gives the severity of the error.
You must fix at least the signalled problems having severity = 'Error'. Only when, after pressing the 'Revalidate' button, the to-do list contains the following message:



then the network is OK and it can run.
Note: even if the warnings are not blocking, it is better to fix them, as not doing so may result in unspecified behaviour, and your network might not be able to learn properly.

Add Noise

This adds a random noise component to the net and is useful for allowing the net to exit from a local minimum. At the end of a round of training, adding noise may ‘jolt’ the network out of a local minimum so that further training produces a better network.
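Conceptually, the operation perturbs each weight by a small random amount. An illustrative Python sketch (not the editor's implementation; the amplitude parameter is hypothetical):

```python
import random

def add_noise(weights, amplitude):
    """Perturb every weight by a uniform random value drawn from
    [-amplitude, +amplitude], as a way to jolt the network out of
    a local minimum before resuming training."""
    return [w + random.uniform(-amplitude, amplitude) for w in weights]
```

A small amplitude nudges the network without destroying what it has already learned; a very large one is closer to a full re-randomization.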

Randomize

This resets the weights of a neural network, initializing it to a random state.

Reset Input Streams

This command resets all the buffered input streams in input and teacher layers, permitting the reloading of their buffers with the input data. This is useful after the contents of some files have changed and it is necessary to reload the data.

Macro Editor

See the corresponding help page