Braininabox is a background application built on a client/server architecture. With
Braininabox you can create a neural network and experiment with it from client
applications.
The neural network topology used is backpropagation: a multi-layer neural network
in which each neuron in a layer is connected to every neuron in the next layer.
In this architecture, during training, internal values are propagated through the
layers up to the output layer.
Then the error value computed for each neuron propagates backward and the weights
are adjusted.
The learning process stops when the average error falls under a given value (the tolerance).
With Braininabox you create your network by giving :
- The number of layers
- The number of input and output neurons.
With Braininabox you can create several networks at once : each learning process
runs in a separate thread.
A network has to be trained before it can be run.
To create, train or run a neural network, your client application sends BMessages
to the server.
The What : LESSON 'less'
Data :
name | type | description | range |
tolerance | double | the error value that stops the training phase | 0.0 to 1.0 |
momentum_term | double | see question 2 in the FAQ below | |
learning_rate | double | used for computing the error value | 0.0 to 1.0 |
signature_app | string | the client application's signature | |
neurons_in_layer | int32 | the number of neurons for each layer. If you send four "neurons_in_layer" entries, the first and last give the number of neurons in the input and output layers respectively, and you get two hidden layers. | Braininabox can run "only" with 90 layers max. |
input_values | data | the patterns to learn. Each entry must be a single array of double values between 0.0 and 1.0. | depends on the number of patterns to learn; 3500 max. |
output_values | data | the desired output. Same description as input_values. | |
You have to send this BMessage with a synchronous reply.
When the server receives this message, it starts the learning thread in the background.
If the client is still running, it will be notified of the end of the learning phase
by the server's BMessage :
The What : LEARN_RESULT 'leok'
Data :
name | type | description | range |
LearninResult | string | often the string "OK" | |
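A client could build and send the LESSON message like this. This is a sketch against the BeOS BMessage API; the client signature and the 2-3-1 XOR topology are illustrative assumptions, not values required by Braininabox.

```cpp
// Sketch: filling in the LESSON message fields listed above and sending it
// with a synchronous reply, as the protocol requires.
#include <Message.h>
#include <Messenger.h>

#define LESSON 'less'

void TrainXor(BMessenger &server)
{
    BMessage lesson(LESSON);
    lesson.AddDouble("tolerance", 0.05);
    lesson.AddDouble("momentum_term", 0.9);
    lesson.AddDouble("learning_rate", 0.25);
    lesson.AddString("signature_app", "application/x-vnd.my-client");  // assumption

    // One "neurons_in_layer" entry per layer: 2 inputs, 3 hidden, 1 output.
    lesson.AddInt32("neurons_in_layer", 2);
    lesson.AddInt32("neurons_in_layer", 3);
    lesson.AddInt32("neurons_in_layer", 1);

    // One "input_values"/"output_values" pair per pattern (here: XOR),
    // each a single array of doubles between 0.0 and 1.0.
    const double patterns[4][2] = {{0,0},{0,1},{1,0},{1,1}};
    const double targets[4][1]  = {{0},{1},{1},{0}};
    for (int p = 0; p < 4; p++) {
        lesson.AddData("input_values", B_DOUBLE_TYPE,
                       patterns[p], sizeof(patterns[p]));
        lesson.AddData("output_values", B_DOUBLE_TYPE,
                       targets[p], sizeof(targets[p]));
    }

    BMessage reply;
    server.SendMessage(&lesson, &reply);  // synchronous reply
}
```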
The What : THINKNOW 'thin'
Data :
name | type | description | range |
signature_app | string | the client application's signature | |
generation | int32 | optional. Generation of the parameters file (see question 6 in the FAQ below). | -4 to 0 (0 by default) |
input_values | data | the values to be processed. The data must be a single array of double values between 0.0 and 1.0. The size of the array corresponds to the number of neurons in the input layer created during the training phase. | Each "input_values" entry is one scheme; you can send several schemes (that is, several "input_values" entries). |
The server answers with a BMessage containing :
name | type | description | range |
Number_Scheme | int32 | number of patterns received by the server | |
Input_Neurons | int32 | number of input neurons | |
Output_Neurons | int32 | number of output neurons | |
YouSend | double | the set of values sent by the client giving the input scheme (see "input_values" in the running BMessage) | |
IThink | double | value(s) computed by the server | 0.0 to 1.0. The number of values depends on Output_Neurons. |
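Sending a scheme and reading the answer could look like this. Again a sketch against the BeOS API; the client signature is an assumption, and the trained network is assumed to have two input neurons.

```cpp
// Sketch: a THINKNOW request followed by reading the "IThink" values
// from the server's answer.
#include <Message.h>
#include <Messenger.h>
#include <stdio.h>

#define THINKNOW 'thin'

void RunNetwork(BMessenger &server)
{
    BMessage think(THINKNOW);
    think.AddString("signature_app", "application/x-vnd.my-client");  // assumption
    think.AddInt32("generation", 0);  // most recent parameters file

    // One scheme per "input_values" entry; the array size must match the
    // number of input neurons used during training (here: 2).
    const double scheme[2] = {1.0, 0.0};
    think.AddData("input_values", B_DOUBLE_TYPE, scheme, sizeof(scheme));

    BMessage reply;
    server.SendMessage(&think, &reply);

    // The answer carries one "IThink" value per output neuron.
    int32 outputs = 0;
    reply.FindInt32("Output_Neurons", &outputs);
    for (int32 i = 0; i < outputs; i++) {
        double value;
        if (reply.FindDouble("IThink", i, &value) == B_OK)
            printf("output %ld: %f\n", (long)i, value);
    }
}
```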
The What : BRAINBOX_ERR 'boko'
Data :
name | type | description | range |
BraininboxErr | string | description of an error that occurred in the server. The error stops the running or learning thread. | |
1) Why are input values between 0.0 and 1.0 ?
Because neurons produce values between 0.0 and 1.0. Some neural networks take input
values between -1.0 and 1.0 and produce only -1 or 1: a binary output. The
backpropagation network can produce a whole range of values.
If your input data fall outside the required range, you can apply a mathematical
function to scale your values between 0.0 and 1.0.
For example, if your data are 3.2, 5.6 or 8.32, you can divide all values by 10,
giving 0.32, 0.56 and 0.832.
2) What is the momentum term ?
During training, the network tries to minimize the error between the actual and
desired output.
Sometimes the network settles into a state that is not the one giving the smallest
error. Such a state is called a local minimum; the state giving the best value is
called the global minimum.
The momentum term introduces some noise that pushes the values (weights) out of a
local minimum.
3) How many hidden layers ?
Not easy to determine. But adding lots of layers does not necessarily make your
network more powerful; you have to test different topologies.
4) How many neurons in the hidden layers ?
Again, not easy. It depends on the characteristics that matter (such as the number
of colors in a picture, or the number of distinguishing features of a typed
character).
You have to test different sizes.
5) How long does a learning process take ?
Sometimes the learning phase takes 2 seconds, sometimes 2 hours. It depends on the
patterns to learn and the parameters of the network.
Braininabox is multithreaded, so you can launch several training sessions with
different parameters at once.
6) How does the server load the appropriate parameters file for a client application ?
Each file is stored with attributes :
- The signature of the application
- The generation of the file (0 to -4)
- An identification number (not used yet)
When a run phase is started, the server retrieves the file (via a query) matching these attributes.
7) How to stop a learning phase ?
Kill the thread corresponding to your client application. The thread's name is the
signature of your application.