OpenCV  3.2.0-dev
Open Source Computer Vision
cv::ml::ANN_MLP Class Reference (abstract)

Artificial Neural Networks - Multi-Layer Perceptrons. More...

#include <opencv2/ml.hpp>


Public Types

enum  ActivationFunctions {
  IDENTITY = 0,
  SIGMOID_SYM = 1,
  GAUSSIAN = 2
}
 Possible activation functions.

enum  Flags {
  UPDATE_MODEL = 1,
  RAW_OUTPUT = 1,
  COMPRESSED_INPUT = 2,
  PREPROCESSED_INPUT = 4
}
 Predict options.

enum  TrainFlags {
  UPDATE_WEIGHTS = 1,
  NO_INPUT_SCALE = 2,
  NO_OUTPUT_SCALE = 4
}
 Train options.

enum  TrainingMethods {
  BACKPROP = 0,
  RPROP = 1
}
 Available training methods.

Public Member Functions

virtual float calcError (const Ptr< TrainData > &data, bool test, OutputArray resp) const
 Computes error on the training or test dataset.

virtual void clear ()
 Clears the algorithm state.

virtual bool empty () const
 Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read).

virtual double getBackpropMomentumScale () const =0
 BPROP: Strength of the momentum term (the difference between weights on the 2 previous iterations).

virtual double getBackpropWeightScale () const =0
 BPROP: Strength of the weight gradient term.

virtual String getDefaultName () const
 Returns the algorithm string identifier.

virtual cv::Mat getLayerSizes () const =0
 Integer vector specifying the number of neurons in each layer, including the input and output layers.

virtual double getRpropDW0 () const =0
 RPROP: Initial value \(\Delta_0\) of update-values \(\Delta_{ij}\).

virtual double getRpropDWMax () const =0
 RPROP: Update-values upper limit \(\Delta_{max}\).

virtual double getRpropDWMin () const =0
 RPROP: Update-values lower limit \(\Delta_{min}\).

virtual double getRpropDWMinus () const =0
 RPROP: Decrease factor \(\eta^-\).

virtual double getRpropDWPlus () const =0
 RPROP: Increase factor \(\eta^+\).

virtual TermCriteria getTermCriteria () const =0
 Termination criteria of the training algorithm.

virtual int getTrainMethod () const =0
 Returns the current training method.

virtual int getVarCount () const =0
 Returns the number of variables in training samples.

virtual Mat getWeights (int layerIdx) const =0

virtual bool isClassifier () const =0
 Returns true if the model is a classifier.

virtual bool isTrained () const =0
 Returns true if the model is trained.

virtual float predict (InputArray samples, OutputArray results=noArray(), int flags=0) const =0
 Predicts response(s) for the provided sample(s).

virtual void read (const FileNode &fn)
 Reads algorithm parameters from a file storage.

virtual void save (const String &filename) const
 Saves the algorithm to a file.

virtual void setActivationFunction (int type, double param1=0, double param2=0)=0
 Initialize the activation function for each neuron.

virtual void setBackpropMomentumScale (double val)=0
 BPROP: Strength of the momentum term (the difference between weights on the 2 previous iterations).

virtual void setBackpropWeightScale (double val)=0
 BPROP: Strength of the weight gradient term.

virtual void setLayerSizes (InputArray _layer_sizes)=0
 Integer vector specifying the number of neurons in each layer, including the input and output layers.

virtual void setRpropDW0 (double val)=0
 RPROP: Initial value \(\Delta_0\) of update-values \(\Delta_{ij}\).

virtual void setRpropDWMax (double val)=0
 RPROP: Update-values upper limit \(\Delta_{max}\).

virtual void setRpropDWMin (double val)=0
 RPROP: Update-values lower limit \(\Delta_{min}\).

virtual void setRpropDWMinus (double val)=0
 RPROP: Decrease factor \(\eta^-\).

virtual void setRpropDWPlus (double val)=0
 RPROP: Increase factor \(\eta^+\).

virtual void setTermCriteria (TermCriteria val)=0
 Termination criteria of the training algorithm.

virtual void setTrainMethod (int method, double param1=0, double param2=0)=0
 Sets training method and common parameters.

virtual bool train (const Ptr< TrainData > &trainData, int flags=0)
 Trains the statistical model.

virtual bool train (InputArray samples, int layout, InputArray responses)
 Trains the statistical model.

virtual void write (FileStorage &fs) const
 Stores algorithm parameters in a file storage.

Static Public Member Functions

static Ptr< ANN_MLP > create ()
 Creates an empty model.

static Ptr< ANN_MLP > load (const String &filepath)
 Loads and creates a serialized ANN from a file.

template<typename _Tp >
static Ptr< _Tp > load (const String &filename, const String &objname=String())
 Loads the algorithm from a file.

template<typename _Tp >
static Ptr< _Tp > loadFromString (const String &strModel, const String &objname=String())
 Loads the algorithm from a String.

template<typename _Tp >
static Ptr< _Tp > read (const FileNode &fn)
 Reads the algorithm from the file node.

template<typename _Tp >
static Ptr< _Tp > train (const Ptr< TrainData > &data, int flags=0)
 Create and train a model with default parameters.

Protected Member Functions

void writeFormat (FileStorage &fs) const
 

Detailed Description

Artificial Neural Networks - Multi-Layer Perceptrons.

Unlike many other models in ML that are constructed and trained at once, in the MLP model these steps are separated. First, a network with the specified topology is created using the non-default constructor or the method ANN_MLP::create. All the weights are set to zero. Then, the network is trained using a set of input and output vectors. The training procedure can be repeated more than once; that is, the weights can be adjusted based on new training data.

Additional flags for StatModel::train are available: ANN_MLP::TrainFlags.

See also
Neural Networks

Member Enumeration Documentation

enum cv::ml::ANN_MLP::ActivationFunctions

Possible activation functions.

Enumerator
IDENTITY 

Identity function: \(f(x)=x\).

SIGMOID_SYM 

Symmetrical sigmoid: \(f(x)=\beta(1-e^{-\alpha x})/(1+e^{-\alpha x})\).

Note
If you are using the default sigmoid activation function with the default parameter values fparam1=0 and fparam2=0, then the function used is y = 1.7159*tanh(2/3 * x), so the output will range over [-1.7159, 1.7159] instead of [0, 1].
GAUSSIAN 

Gaussian function: \(f(x)=\beta e^{-\alpha x^2}\).

enum cv::ml::StatModel::Flags
inherited

Predict options.

Enumerator
UPDATE_MODEL 
RAW_OUTPUT 

makes the method return the raw results (the sum), not the class label

COMPRESSED_INPUT 
PREPROCESSED_INPUT 

enum cv::ml::ANN_MLP::TrainFlags

Train options.

Enumerator
UPDATE_WEIGHTS 

Update the network weights, rather than compute them from scratch.

In the latter case the weights are initialized using the Nguyen-Widrow algorithm.

NO_INPUT_SCALE 

Do not normalize the input vectors.

If this flag is not set, the training algorithm normalizes each input feature independently, shifting its mean value to 0 and making the standard deviation equal to 1. If the network is going to be updated incrementally, the new training data may differ substantially from the original data; in that case you should take care of proper normalization yourself.

NO_OUTPUT_SCALE 

Do not normalize the output vectors.

If the flag is not set, the training algorithm normalizes each output feature independently, by transforming it to the certain range depending on the used activation function.

enum cv::ml::ANN_MLP::TrainingMethods

Available training methods.

Enumerator
BACKPROP 

The back-propagation algorithm.

RPROP 

The RPROP algorithm.

See [68] for details.

Member Function Documentation

virtual float cv::ml::StatModel::calcError ( const Ptr< TrainData > &  data,
bool  test,
OutputArray  resp 
) const
virtual, inherited

Computes error on the training or test dataset.

Parameters
data: the training data
test: if true, the error is computed over the test subset of the data; otherwise it is computed over the training subset. Note that if you load a completely different dataset to evaluate an already trained classifier, you will probably want to skip TrainData::setTrainTestSplitRatio and pass test=false, so that the error is computed for the whole new set.
resp: the optional output responses.

The method uses StatModel::predict to compute the error. For regression models the error is computed as RMS; for classifiers, as a percentage of misclassified samples (0%-100%).

virtual void cv::Algorithm::clear ( )
inline, virtual, inherited

Clears the algorithm state.

Reimplemented in cv::FlannBasedMatcher, cv::DescriptorMatcher, and cv::cuda::DescriptorMatcher.

static Ptr<ANN_MLP> cv::ml::ANN_MLP::create ( )
static

Creates empty model.

Use StatModel::train to train the model, Algorithm::load<ANN_MLP>(filename) to load the pre-trained model. Note that the train method has optional flags: ANN_MLP::TrainFlags.

virtual bool cv::ml::StatModel::empty ( ) const
virtual, inherited

Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read).

Reimplemented from cv::Algorithm.

virtual double cv::ml::ANN_MLP::getBackpropMomentumScale ( ) const
pure virtual

BPROP: Strength of the momentum term (the difference between weights on the 2 previous iterations).

This parameter provides some inertia to smooth the random fluctuations of the weights. It can vary from 0 (the feature is disabled) to 1 and beyond. The value 0.1 or so is good enough. Default value is 0.1.

See also
setBackpropMomentumScale
virtual double cv::ml::ANN_MLP::getBackpropWeightScale ( ) const
pure virtual

BPROP: Strength of the weight gradient term.

The recommended value is about 0.1. Default value is 0.1.

See also
setBackpropWeightScale
virtual String cv::Algorithm::getDefaultName ( ) const
virtualinherited

Returns the algorithm string identifier.

This string is used as top level xml/yml node tag when the object is saved to a file or string.

virtual cv::Mat cv::ml::ANN_MLP::getLayerSizes ( ) const
pure virtual

Integer vector specifying the number of neurons in each layer including the input and output layers.

The first element specifies the number of neurons in the input layer; the last element, the number of neurons in the output layer.

See also
setLayerSizes
virtual double cv::ml::ANN_MLP::getRpropDW0 ( ) const
pure virtual

RPROP: Initial value \(\Delta_0\) of update-values \(\Delta_{ij}\).

Default value is 0.1.

See also
setRpropDW0
virtual double cv::ml::ANN_MLP::getRpropDWMax ( ) const
pure virtual

RPROP: Update-values upper limit \(\Delta_{max}\).

It must be >1. Default value is 50.

See also
setRpropDWMax
virtual double cv::ml::ANN_MLP::getRpropDWMin ( ) const
pure virtual

RPROP: Update-values lower limit \(\Delta_{min}\).

It must be positive. Default value is FLT_EPSILON.

See also
setRpropDWMin
virtual double cv::ml::ANN_MLP::getRpropDWMinus ( ) const
pure virtual

RPROP: Decrease factor \(\eta^-\).

It must be <1. Default value is 0.5.

See also
setRpropDWMinus
virtual double cv::ml::ANN_MLP::getRpropDWPlus ( ) const
pure virtual

RPROP: Increase factor \(\eta^+\).

It must be >1. Default value is 1.2.

See also
setRpropDWPlus
virtual TermCriteria cv::ml::ANN_MLP::getTermCriteria ( ) const
pure virtual

Termination criteria of the training algorithm.

You can specify the maximum number of iterations (maxCount) and/or how much the error could change between the iterations to make the algorithm continue (epsilon). Default value is TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 1000, 0.01).

See also
setTermCriteria
virtual int cv::ml::ANN_MLP::getTrainMethod ( ) const
pure virtual

Returns current training method.

virtual int cv::ml::StatModel::getVarCount ( ) const
pure virtual, inherited

Returns the number of variables in training samples.

virtual Mat cv::ml::ANN_MLP::getWeights ( int  layerIdx) const
pure virtual
virtual bool cv::ml::StatModel::isClassifier ( ) const
pure virtual, inherited

Returns true if the model is classifier.

virtual bool cv::ml::StatModel::isTrained ( ) const
pure virtual, inherited

Returns true if the model is trained.

static Ptr<ANN_MLP> cv::ml::ANN_MLP::load (const String &filepath)
static

Loads and creates a serialized ANN from a file.

Use ANN_MLP::save to serialize and store an ANN to disk. Load the ANN from that file again by calling this function with the path to the file.

Parameters
filepath: path to serialized ANN
template<typename _Tp >
static Ptr<_Tp> cv::Algorithm::load (const String &filename,
                                     const String &objname = String()
                                    )
inline, static, inherited

Loads algorithm from the file.

Parameters
filename: Name of the file to read.
objname: The optional name of the node to read (if empty, the first top-level node will be used).

This is a static template method of Algorithm. Its usage is as follows (in the case of SVM):

Ptr<SVM> svm = Algorithm::load<SVM>("my_svm_model.xml");

In order to make this method work, the derived class must override Algorithm::read(const FileNode& fn).

template<typename _Tp >
static Ptr<_Tp> cv::Algorithm::loadFromString (const String &strModel,
                                               const String &objname = String()
                                              )
inline, static, inherited

Loads algorithm from a String.

Parameters
strModel: The string variable containing the model you want to load.
objname: The optional name of the node to read (if empty, the first top-level node will be used).

This is a static template method of Algorithm. Its usage is as follows (in the case of SVM):

Ptr<SVM> svm = Algorithm::loadFromString<SVM>(myStringModel);

virtual float cv::ml::StatModel::predict ( InputArray  samples,
OutputArray  results = noArray(),
int  flags = 0 
) const
pure virtual, inherited

Predicts response(s) for the provided sample(s)

Parameters
samples: The input samples, floating-point matrix.
results: The optional output matrix of results.
flags: The optional flags, model-dependent. See cv::ml::StatModel::Flags.

Implemented in cv::ml::LogisticRegression and cv::ml::EM.

virtual void cv::Algorithm::read (const FileNode &fn)
inline, virtual, inherited

Reads algorithm parameters from a file storage.

Reimplemented in cv::FlannBasedMatcher, cv::DescriptorMatcher, and cv::Feature2D.

template<typename _Tp >
static Ptr<_Tp> cv::Algorithm::read (const FileNode &fn)
inline, static, inherited

Reads algorithm from the file node.

This is a static template method of Algorithm. Its usage is as follows (in the case of SVM):

cv::FileStorage fsRead("example.xml", FileStorage::READ);
Ptr<SVM> svm = Algorithm::read<SVM>(fsRead.root());

In order to make this method work, the derived class must override Algorithm::read(const FileNode& fn) and also have a static create() method with no parameters (or with all parameters optional).

virtual void cv::Algorithm::save (const String &filename) const
virtual, inherited

Saves the algorithm to a file.

In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs).

virtual void cv::ml::ANN_MLP::setActivationFunction ( int  type,
double  param1 = 0,
double  param2 = 0 
)
pure virtual

Initialize the activation function for each neuron.

Currently the default and the only fully supported activation function is ANN_MLP::SIGMOID_SYM.

Parameters
type: The type of activation function. See ANN_MLP::ActivationFunctions.
param1: The first parameter of the activation function, \(\alpha\). Default value is 0.
param2: The second parameter of the activation function, \(\beta\). Default value is 0.
virtual void cv::ml::ANN_MLP::setBackpropMomentumScale ( double  val)
pure virtual

BPROP: Strength of the momentum term (the difference between weights on the 2 previous iterations).

See also
getBackpropMomentumScale
virtual void cv::ml::ANN_MLP::setBackpropWeightScale ( double  val)
pure virtual

BPROP: Strength of the weight gradient term.

See also
getBackpropWeightScale
virtual void cv::ml::ANN_MLP::setLayerSizes ( InputArray  _layer_sizes)
pure virtual

Integer vector specifying the number of neurons in each layer including the input and output layers.

The first element specifies the number of neurons in the input layer; the last element, the number of neurons in the output layer. Default value is an empty Mat.

See also
getLayerSizes
virtual void cv::ml::ANN_MLP::setRpropDW0 ( double  val)
pure virtual

RPROP: Initial value \(\Delta_0\) of update-values \(\Delta_{ij}\).

See also
getRpropDW0
virtual void cv::ml::ANN_MLP::setRpropDWMax ( double  val)
pure virtual

RPROP: Update-values upper limit \(\Delta_{max}\).

See also
getRpropDWMax
virtual void cv::ml::ANN_MLP::setRpropDWMin ( double  val)
pure virtual

RPROP: Update-values lower limit \(\Delta_{min}\).

See also
getRpropDWMin
virtual void cv::ml::ANN_MLP::setRpropDWMinus ( double  val)
pure virtual

RPROP: Decrease factor \(\eta^-\).

See also
getRpropDWMinus
virtual void cv::ml::ANN_MLP::setRpropDWPlus ( double  val)
pure virtual

RPROP: Increase factor \(\eta^+\).

See also
getRpropDWPlus
virtual void cv::ml::ANN_MLP::setTermCriteria ( TermCriteria  val)
pure virtual

Termination criteria of the training algorithm.

See also
getTermCriteria
virtual void cv::ml::ANN_MLP::setTrainMethod ( int  method,
double  param1 = 0,
double  param2 = 0 
)
pure virtual

Sets training method and common parameters.

Parameters
method: Default value is ANN_MLP::RPROP. See ANN_MLP::TrainingMethods.
param1: passed to setRpropDW0 for ANN_MLP::RPROP and to setBackpropWeightScale for ANN_MLP::BACKPROP.
param2: passed to setRpropDWMin for ANN_MLP::RPROP and to setBackpropMomentumScale for ANN_MLP::BACKPROP.
virtual bool cv::ml::StatModel::train ( const Ptr< TrainData > &  trainData,
int  flags = 0 
)
virtual, inherited

Trains the statistical model.

Parameters
trainData: training data that can be loaded from file using TrainData::loadFromCSV or created with TrainData::create.
flags: optional flags, depending on the model. Some of the models can be updated with new training samples instead of being completely retrained (such as NormalBayesClassifier or ANN_MLP).
virtual bool cv::ml::StatModel::train ( InputArray  samples,
int  layout,
InputArray  responses 
)
virtual, inherited

Trains the statistical model.

Parameters
samples: training samples.
layout: See ml::SampleTypes.
responses: vector of responses associated with the training samples.
template<typename _Tp >
static Ptr<_Tp> cv::ml::StatModel::train ( const Ptr< TrainData > &  data,
int  flags = 0 
)
inline, static, inherited

Create and train model with default parameters.

The class must implement a static create() method with no parameters, or with all parameters having default values.

virtual void cv::Algorithm::write (FileStorage &fs) const
inline, virtual, inherited

Stores algorithm parameters in a file storage.

Reimplemented in cv::FlannBasedMatcher, cv::DescriptorMatcher, and cv::Feature2D.

void cv::Algorithm::writeFormat (FileStorage &fs) const
protected, inherited

The documentation for this class was generated from the following file: opencv2/ml.hpp