Package crino :: Module network :: Class MultiLayerPerceptron

Class MultiLayerPerceptron


A MultiLayerPerceptron (MLP) is a classical form of artificial neural network, which aims at predicting one or more output states given some particular inputs. An MLP is a Sequential module, made of a succession of Linear modules and non-linear Activation modules. This structure is what enables the MLP to learn non-linear decision functions.

An MLP must be trained with a supervised learning algorithm in order to work. Gradient backpropagation is by far the most widely used algorithm for training MLPs.
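For illustration, here is a minimal sketch of the kind of module stack a MultiLayerPerceptron represents, assembled by hand from the crino.module building blocks mentioned above. The constructor signatures of Linear and Sigmoid are assumptions here; in practice, MultiLayerPerceptron builds an equivalent stack for you from its nUnits argument.

    # A hypothetical hand-built equivalent of MultiLayerPerceptron([3, 5, 2]).
    # Assumed signatures: Linear(nInputs, nOutputs) and Sigmoid(nOutputs);
    # the real crino signatures may differ.
    from crino.module import Sequential, Linear, Sigmoid

    mlp = Sequential()
    mlp.add(Linear(3, 5))   # input (3) -> hidden (5)
    mlp.add(Sigmoid(5))     # non-linear activation on the hidden layer
    mlp.add(Linear(5, 2))   # hidden (5) -> output (2)
    mlp.add(Sigmoid(2))     # output activation (Sigmoid is the default)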

Instance Methods
 
__init__(self, nUnits, outputActivation=crino.module.Sigmoid)
Constructs a new MultiLayerPerceptron network.
 
getGeometry(self)

getParameters(self)

setParameters(self, params)

initEpochHook(self, finetune_vars)

checkEpochHook(self, finetune_vars)

initBadmoveHook(self, finetune_vars)

checkBadmoveHook(self, finetune_vars)

initBatchHook(self, finetune_vars)

checkBatchHook(self, finetune_vars)

checkLearningParameters(self, param_dict)

defaultLearningParameters(self, param_dict)
 
finetune(self, shared_x_train, shared_y_train, batch_size, learning_rate, epochs, growth_factor, growth_threshold, badmove_threshold, verbose)
Performs the supervised learning step of the MultiLayerPerceptron, using a batch-gradient backpropagation algorithm.
 
train(self, x_train, y_train, **params)
Performs the supervised learning step of the MultiLayerPerceptron.

Inherited from module.Sequential: prepareGeometry, prepareOutput, prepareParams

Inherited from module.Container: add

Inherited from module.Module: criterionFunction, forward, forwardFunction, holdFunction, linkInputs, linkModule, prepare, prepareBackup, restoreFunction, save, trainFunction

Instance Variables

Inherited from module.Container: modules

Inherited from module.Module: backupParams, inputs, nInputs, nOutputs, outputs, params, prepared

Method Details

__init__(self, nUnits, outputActivation=crino.module.Sigmoid)
(Constructor)

Constructs a new MultiLayerPerceptron network.
Parameters:
  • nUnits (int list) - The sizes of the (input, hidden and output) representations.
  • outputActivation (class derived from Activation) - The type of activation for the output layer.
Overrides: module.Module.__init__

Attention: The outputActivation parameter is a class, not an instance.
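For example, the following sketch (with arbitrary layer sizes) builds an MLP with a 3-dimensional input, one hidden representation of size 5, and a 2-dimensional output. Note that the Sigmoid class itself is passed, not an instance:

    from crino.network import MultiLayerPerceptron
    from crino.module import Sigmoid

    # nUnits lists the input, hidden and output sizes, in order.
    mlp = MultiLayerPerceptron([3, 5, 2], outputActivation=Sigmoid)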

finetune(self, shared_x_train, shared_y_train, batch_size, learning_rate, epochs, growth_factor, growth_threshold, badmove_threshold, verbose)

Performs the supervised learning step of the MultiLayerPerceptron, using a batch-gradient backpropagation algorithm. The learning_rate is made adaptive with the growth_factor multiplier. If the mean loss improves during growth_threshold successive epochs, then the learning_rate is increased. If the mean loss degrades, the epoch is called a "bad move", and the learning_rate is decreased until the mean loss improves again. If the mean loss cannot be improved within badmove_threshold trials, then the last trained parameters are kept anyway, and the fine-tuning proceeds.
Parameters:
  • shared_x_train (SharedVariable wrapping an ndarray) - The training examples.
  • shared_y_train (SharedVariable wrapping an ndarray) - The training labels.
  • batch_size (int) - The number of training examples in each mini-batch.
  • learning_rate (float) - The rate used to update the parameters with the gradient.
  • epochs (int) - The number of epochs to run the training algorithm.
  • growth_factor (float) - The multiplier factor used to increase or decrease the learning_rate.
  • growth_threshold (int) - The number of successive loss-improving epochs after which the learning_rate is increased.
  • badmove_threshold (int) - The number of successive non-improving gradient-descent trials after which the last trained parameters are kept anyway.
  • verbose (bool) - If true, information about the training process will be displayed on the standard output.
Returns:
The elapsed training time, as a datetime.timedelta.
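The adaptive schedule described above can be summarized by the following standalone sketch. It only illustrates the documented behaviour and is not the actual crino implementation; do_one_epoch is a hypothetical placeholder that runs one pass of batch-gradient backpropagation at the given rate and returns the mean loss.

    # Illustrative sketch of the documented adaptive learning-rate schedule.
    def adaptive_finetune(do_one_epoch, params, learning_rate, epochs,
                          growth_factor, growth_threshold, badmove_threshold):
        best_loss = float('inf')
        good_streak = 0
        for epoch in range(epochs):
            backup = list(params)                # backup parameters
            loss = do_one_epoch(learning_rate)
            if loss < best_loss:                 # mean loss improved
                best_loss = loss
                good_streak += 1
                if good_streak >= growth_threshold:
                    learning_rate *= growth_factor   # speed up
                    good_streak = 0
            else:                                # a "bad move"
                good_streak = 0
                badmoves = 0
                while loss >= best_loss and badmoves < badmove_threshold:
                    params[:] = backup               # undo the bad move
                    learning_rate /= growth_factor   # slow down and retry
                    loss = do_one_epoch(learning_rate)
                    badmoves += 1
                # after badmove_threshold trials, the last trained
                # parameters are kept anyway and fine-tuning proceeds
                best_loss = min(best_loss, loss)
        return learning_rate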

train(self, x_train, y_train, **params)

Performs the supervised learning step of the MultiLayerPerceptron. This function delegates to finetune, but displays a bit more information about the training process.
Parameters:
  • x_train (ndarray) - The training examples.
  • y_train (ndarray) - The training labels.
  • params (dict) - The learning parameters, passed as keyword arguments and forwarded to the finetune method.

    Possible keys: batch_size, learning_rate, epochs, growth_factor, growth_threshold, badmove_threshold, verbose.

Returns:
The elapsed training time, as a datetime.timedelta.

See Also: finetune
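A minimal end-to-end sketch with random data and arbitrary hyperparameter values (it assumes that prepare(), inherited from module.Module, must be called once before training, and that train accepts plain ndarrays, building the shared variables internally):

    import numpy as np
    from crino.network import MultiLayerPerceptron
    from crino.module import Sigmoid

    # 100 random training examples: 3 input features, 2 output targets.
    x_train = np.random.rand(100, 3).astype(np.float32)
    y_train = np.random.rand(100, 2).astype(np.float32)

    mlp = MultiLayerPerceptron([3, 5, 2], outputActivation=Sigmoid)
    mlp.prepare()   # assumed to be required before training

    # Keys correspond to the learning parameters documented in finetune.
    delta = mlp.train(x_train, y_train,
                      batch_size=10, learning_rate=1.0, epochs=100,
                      growth_factor=1.25, growth_threshold=5,
                      badmove_threshold=10, verbose=True)
    print("Training took", delta)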