The network module provides ready-to-use neural network architectures, along with pretraining and supervised learning methods.
See the documentation of each class for more details about its use.
Classes

MultiLayerPerceptron
    A MultiLayerPerceptron (MLP) is a classical form of artificial neural network, which aims at predicting one or more output states given some particular inputs.

AutoEncoder
    An AutoEncoder is a neural network which aims at encoding its inputs in a smaller representation space.

OutputAutoEncoder

PretrainedMLP
    A PretrainedMLP is a specialization of the MLP where the layers are pretrained, for the input part on the training examples (x), or for the output part on the training labels (y), using a Stacked AutoEncoder strategy.

DeepNeuralNetwork
    A DeepNeuralNetwork (DNN) is a specialization of the MLP where the layers are pretrained on the training examples (x) using a Stacked AutoEncoder strategy.

InputOutputDeepArchitecture
    An InputOutputDeepArchitecture (IODA) is a specialization of the DNN where the layers are divided into three categories: the input layers, the link layer, and the output layers.
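To illustrate the kind of model an MLP denotes, here is a minimal sketch of a feed-forward pass through a stack of fully connected layers. This is a hypothetical NumPy illustration, not the API of this module; the function and variable names are assumptions made for the example.

```python
# Hypothetical sketch (not this module's API): an MLP as a stack of
# fully connected layers with sigmoid activations.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, weights, biases):
    """Propagate input x through each (W, b) layer in turn."""
    h = x
    for W, b in zip(weights, biases):
        h = sigmoid(h @ W + b)
    return h

rng = np.random.default_rng(0)
# Example architecture: 4 inputs -> 8 hidden units -> 2 outputs.
Ws = [rng.normal(size=(4, 8)), rng.normal(size=(8, 2))]
bs = [np.zeros(8), np.zeros(2)]
y = mlp_forward(rng.normal(size=(3, 4)), Ws, bs)
print(y.shape)  # (3, 2): one 2-dimensional prediction per input row
```

In a real training setup the weights would of course be learned by gradient descent rather than drawn at random.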
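The "Stacked AutoEncoder strategy" mentioned above can be sketched as greedy layer-wise pretraining: each layer is first trained as an autoencoder to reconstruct its own input, then its encoded representation becomes the input of the next layer. The following is a hedged NumPy illustration of that idea with tied weights; it is an assumed, simplified version, not the implementation used by this module.

```python
# Hypothetical sketch of greedy stacked-autoencoder pretraining
# (assumed strategy; not this module's API). Each layer is trained to
# reconstruct its input, then its code feeds the next layer.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_layer(X, n_hidden, lr=0.1, epochs=200, seed=0):
    """Train one tied-weight autoencoder: h = s(XW + b), X_hat = s(hW^T + c)."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_in, n_hidden))
    b = np.zeros(n_hidden)
    c = np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W + b)        # encode
        Xh = sigmoid(H @ W.T + c)     # decode (tied weights)
        # Gradients of squared reconstruction error through both sigmoids.
        dXh = (Xh - X) * Xh * (1 - Xh)
        dH = (dXh @ W) * H * (1 - H)
        gW = X.T @ dH + dXh.T @ H     # tied weights: both paths contribute
        W -= lr * gW / len(X)
        b -= lr * dH.sum(axis=0) / len(X)
        c -= lr * dXh.sum(axis=0) / len(X)
    return W, b

rng = np.random.default_rng(1)
X = rng.uniform(size=(50, 6))
layers = []
H = X
for n_hidden in (5, 3):               # greedy: train a layer, encode, repeat
    W, b = pretrain_layer(H, n_hidden)
    layers.append((W, b))
    H = sigmoid(H @ W + b)
print(H.shape)  # (50, 3): the deepest learned representation
```

After such pretraining, the learned (W, b) pairs would typically initialize the corresponding MLP layers before supervised fine-tuning; in an IODA-style setup, the same idea is applied from the label side for the output layers.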
Generated by Epydoc 3.0.2 on Thu Aug 20 13:34:14 2015 | http://epydoc.sourceforge.net |