
Class PretrainedMLP


Known Subclasses: DeepNeuralNetwork, InputOutputDeepArchitecture

A PretrainedMLP is a specialization of the MLP in which the layers are pretrained using a Stacked AutoEncoder strategy: the input part is pretrained on the training examples (x), and/or the output part on the training labels (y).

See Also: MultiLayerPerceptron, http://www.deeplearning.net/tutorial/SdA.html
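
A minimal end-to-end sketch is given below. The geometry, the random data and the learning parameters are illustrative assumptions, not library defaults, and the linkInputs/prepare setup sequence (inherited from module.Module) is assumed rather than prescribed here:

    import numpy as np
    import theano.tensor as T
    from crino.network import PretrainedMLP
    from crino.module import Sigmoid

    # Illustrative data: 100 examples, 10 features, 2 labels.
    x_train = np.random.rand(100, 10).astype('float32')
    y_train = np.random.rand(100, 2).astype('float32')

    # 10 -> 6 -> 4 -> 2 geometry; the two layers closest to the input
    # are pretrained as stacked autoencoders on x_train.
    nn = PretrainedMLP([10, 6, 4, 2], outputActivation=Sigmoid,
                       nInputLayers=2)
    nn.linkInputs(T.matrix('x'), 10)  # bind a symbolic input of size 10
    nn.prepare()                      # build the Theano graph
    delta = nn.train(x_train, y_train, batch_size=10,
                     learning_rate=1.0, epochs=100, verbose=True)
    y_pred = nn.forward(x_train)      # forward pass through the network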

Instance Methods
 
__init__(self, nUnits, outputActivation=crino.module.Sigmoid, nInputLayers=0, nOutputLayers=0, InputAutoEncoderClass=crino.network.AutoEncoder, OutputAutoEncoderClass=crino.network.OutputAutoEncoder)
Constructs a new PretrainedMLP.
 
prepareParams(self)
Initializes the params of the submodules.
 
pretrainInputAutoEncoders(self, data, **params)
Performs the unsupervised learning step of the input autoencoders, using a batch-gradient backpropagation algorithm.
 
pretrainOutputAutoEncoders(self, data, **params)
Performs the unsupervised learning step of the output autoencoders, using a batch-gradient backpropagation algorithm.
 
train(self, x_train, y_train, **params)
Performs the pretraining step for the input and output autoencoders, optionally the semi-supervised pretraining step of the link layer, and finally the supervised learning step (finetune).

Inherited from MultiLayerPerceptron: checkBadmoveHook, checkBatchHook, checkEpochHook, checkLearningParameters, defaultLearningParameters, finetune, getGeometry, getParameters, initBadmoveHook, initBatchHook, initEpochHook, setParameters

Inherited from module.Sequential: prepareGeometry, prepareOutput

Inherited from module.Container: add

Inherited from module.Module: criterionFunction, forward, forwardFunction, holdFunction, linkInputs, linkModule, prepare, prepareBackup, restoreFunction, save, trainFunction

Instance Variables

Inherited from module.Container: modules

Inherited from module.Module: backupParams, inputs, nInputs, nOutputs, outputs, params, prepared

Method Details

__init__(self, nUnits, outputActivation=crino.module.Sigmoid, nInputLayers=0, nOutputLayers=0, InputAutoEncoderClass=crino.network.AutoEncoder, OutputAutoEncoderClass=crino.network.OutputAutoEncoder)
(Constructor)

Constructs a new PretrainedMLP.
Parameters:
  • nUnits (int list) - The sizes of the (input, hidden*, output) representations.
  • outputActivation (class derived from Activation) - The type of activation for the output layer.
  • nInputLayers (int) - The number of layers, starting from the input, to be pretrained as stacked autoencoders.
  • nOutputLayers (int) - The number of layers, starting from the output, to be pretrained as stacked autoencoders.
  • InputAutoEncoderClass (AutoEncoder subclass) - The class to be used for the input autoencoders.
  • OutputAutoEncoderClass (OutputAutoEncoder subclass) - The class to be used for the output autoencoders.
Overrides: module.Module.__init__

Attention: The outputActivation parameter is a class, not an instance.
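
For illustration, a constructor call exercising every parameter (the layer sizes are arbitrary, and the two autoencoder classes shown simply restate the defaults):

    from crino.network import PretrainedMLP, AutoEncoder, OutputAutoEncoder
    from crino.module import Sigmoid

    # 20 -> 12 -> 8 -> 6 -> 3 geometry: pretrain the two input-side
    # layers on x and the one output-side layer on y.
    nn = PretrainedMLP([20, 12, 8, 6, 3],
                       outputActivation=Sigmoid,  # the class, not Sigmoid()
                       nInputLayers=2,
                       nOutputLayers=1,
                       InputAutoEncoderClass=AutoEncoder,
                       OutputAutoEncoderClass=OutputAutoEncoder)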

prepareParams(self)

Initializes the params of the submodules. The Sequential module params will include the params of its submodules.
Overrides: module.Module.prepareParams
(inherited documentation)
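
The following illustrative-only sketch (not crino's actual source) shows the aggregation idea: a container exposes the concatenation of its submodules' params, so a single gradient step updates every layer at once.

    class SequentialSketch(object):
        """Toy container illustrating params aggregation."""
        def __init__(self, modules):
            self.modules = modules
            self.params = []

        def prepareParams(self):
            # Each submodule allocates its own parameters; the
            # container simply collects them in one flat list.
            for module in self.modules:
                module.prepareParams()
                self.params.extend(module.params)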

pretrainInputAutoEncoders(self, data, **params)


Performs the unsupervised learning step of the input autoencoders, using a batch-gradient backpropagation algorithm.

Classically, in a DeepNeuralNetwork, only the input autoencoders are pretrained. The data used for this pretraining step can be the input training dataset used for the supervised learning (see finetune), or a subset of this dataset, or else a specially crafted input pretraining dataset.

Once an AutoEncoder is learned, its projection (encoding) layer is kept and used to initialize the network layers; the backprojection (decoding) part is discarded.

Parameters:
  • data (SharedVariable from ndarray) - The training data (typically example features).
  • params (dict) - The learning parameters, encoded in a dictionary, that are used in the finetune method.
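
A hedged sketch of a direct call follows; train normally drives this step itself, nn and x_pretrain are assumed from earlier context, and the parameter values are illustrative, using keys documented for train:

    import numpy as np
    import theano

    # Wrap the pretraining examples in a Theano shared variable.
    shared_x = theano.shared(np.asarray(x_pretrain,
                                        dtype=theano.config.floatX))
    nn.pretrainInputAutoEncoders(shared_x,
                                 batch_size=10,
                                 learning_rate=2.0,
                                 epochs=50,
                                 verbose=True)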

pretrainOutputAutoEncoders(self, data, **params)


Performs the unsupervised learning step of the output autoencoders, using a batch-gradient backpropagation algorithm.

The InputOutputDeepArchitecture pretrains the output autoencoders in the same way the DeepNeuralNetwork does for the input autoencoders. In this case, the given training data are the labels (y), i.e. the values that the network must predict, rather than the examples (x).

Once an AutoEncoder is learned, its backprojection (decoding) layer is kept and used to initialize the network layers; the projection (encoding) part is discarded.

Parameters:
  • data (SharedVariable from ndarray) - The training data (typically example labels).
  • params (dict) - The learning parameters, encoded in a dictionary, that are used in the finetune method.
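
Analogously, a hedged sketch for the output side: here the shared variable wraps the labels y rather than the examples x (nn and y_train assumed from earlier context, values illustrative):

    import numpy as np
    import theano

    # The output autoencoders are trained on the labels to predict.
    shared_y = theano.shared(np.asarray(y_train,
                                        dtype=theano.config.floatX))
    nn.pretrainOutputAutoEncoders(shared_y,
                                  batch_size=10,
                                  learning_rate=2.0,
                                  epochs=50)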

train(self, x_train, y_train, **params)

Performs the pretraining step for the input and output autoencoders, optionally the semi-supervised pretraining step of the link layer, and finally the supervised learning step (finetune).
Parameters:
  • x_train (ndarray) - The training examples.
  • y_train (ndarray) - The training labels.
  • params (dict) - The learning parameters, encoded in a dictionary, that are used during the autoencoders pretraining (pretrainInputAutoEncoders, pretrainOutputAutoEncoders), the link layer pretraining, and the final learning (finetune) steps.

    Possible keys: batch_size, learning_rate, epochs, growth_factor, growth_threshold, badmove_threshold, verbose, input_pretraining_params, output_pretraining_params, link_pretraining, link_pretraining_params.

    The link_pretraining parameter controls whether the link layer is pretrained or not (default: False).

    The input_pretraining_params, output_pretraining_params and link_pretraining_params parameters are themselves dictionaries containing the training parameters for each pretraining step.

Returns:
The elapsed training time, as a datetime.timedelta.
Overrides: MultiLayerPerceptron.train
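
To illustrate how the pretraining dictionaries nest inside the top-level parameters, here is a sketch of a full training call; every numeric value is an illustrative assumption, and nn, x_train and y_train are the objects described above:

    learning_params = {
        'batch_size': 10,
        'learning_rate': 1.0,
        'epochs': 300,
        'link_pretraining': False,
        # Each pretraining step takes its own parameter dictionary.
        'input_pretraining_params': {
            'learning_rate': 2.0,
            'epochs': 100,
        },
        'output_pretraining_params': {
            'learning_rate': 2.0,
            'epochs': 100,
        },
    }
    delta = nn.train(x_train, y_train, **learning_params)
    print('Training took %s' % delta)  # a datetime.timedelta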