OpenCV  4.1.1-pre
Open Source Computer Vision
cv::ml::ANN_MLP Class Reference (abstract)

Artificial Neural Networks - Multi-Layer Perceptrons. More...

#include <opencv2/ml.hpp>

Inheritance diagram for cv::ml::ANN_MLP:
Collaboration diagram for cv::ml::ANN_MLP:

Public Types

enum  ActivationFunctions {
  IDENTITY = 0,
  SIGMOID_SYM = 1,
  GAUSSIAN = 2,
  RELU = 3,
  LEAKYRELU = 4
}
 possible activation functions More...
 
enum  Flags {
  UPDATE_MODEL = 1,
  RAW_OUTPUT = 1,
  COMPRESSED_INPUT = 2,
  PREPROCESSED_INPUT = 4
}
 Predict options. More...
 
enum  TrainFlags {
  UPDATE_WEIGHTS = 1,
  NO_INPUT_SCALE = 2,
  NO_OUTPUT_SCALE = 4
}
 Train options. More...
 
enum  TrainingMethods {
  BACKPROP = 0,
  RPROP = 1,
  ANNEAL = 2
}
 Available training methods. More...
 

Public Member Functions

virtual float calcError (const Ptr< TrainData > &data, bool test, OutputArray resp) const
 Computes error on the training or test dataset. More...
 
virtual void clear ()
 Clears the algorithm state. More...
 
virtual bool empty () const CV_OVERRIDE
 Returns true if the Algorithm is empty (e.g. in the very beginning or after an unsuccessful read). More...
 
virtual double getAnnealCoolingRatio () const =0
 ANNEAL: Update cooling ratio. More...
 
virtual double getAnnealFinalT () const =0
 ANNEAL: Update final temperature. More...
 
virtual double getAnnealInitialT () const =0
 ANNEAL: Update initial temperature. More...
 
virtual int getAnnealItePerStep () const =0
 ANNEAL: Update iteration per step. More...
 
virtual double getBackpropMomentumScale () const =0
 BPROP: Strength of the momentum term (the difference between weights on the 2 previous iterations). More...
 
virtual double getBackpropWeightScale () const =0
 BPROP: Strength of the weight gradient term. More...
 
virtual String getDefaultName () const
 Returns the algorithm string identifier. More...
 
virtual cv::Mat getLayerSizes () const =0
 Integer vector specifying the number of neurons in each layer including the input and output layers. More...
 
virtual double getRpropDW0 () const =0
 RPROP: Initial value \(\Delta_0\) of update-values \(\Delta_{ij}\). More...
 
virtual double getRpropDWMax () const =0
 RPROP: Update-values upper limit \(\Delta_{max}\). More...
 
virtual double getRpropDWMin () const =0
 RPROP: Update-values lower limit \(\Delta_{min}\). More...
 
virtual double getRpropDWMinus () const =0
 RPROP: Decrease factor \(\eta^-\). More...
 
virtual double getRpropDWPlus () const =0
 RPROP: Increase factor \(\eta^+\). More...
 
virtual TermCriteria getTermCriteria () const =0
 Termination criteria of the training algorithm. More...
 
virtual int getTrainMethod () const =0
 Returns current training method. More...
 
virtual int getVarCount () const =0
 Returns the number of variables in training samples. More...
 
virtual Mat getWeights (int layerIdx) const =0
 
virtual bool isClassifier () const =0
 Returns true if the model is classifier. More...
 
virtual bool isTrained () const =0
 Returns true if the model is trained. More...
 
virtual float predict (InputArray samples, OutputArray results=noArray(), int flags=0) const =0
 Predicts response(s) for the provided sample(s) More...
 
virtual void read (const FileNode &fn)
 Reads algorithm parameters from a file storage. More...
 
virtual void save (const String &filename) const
 Saves the algorithm to a file. More...
 
virtual void setActivationFunction (int type, double param1=0, double param2=0)=0
 Initialize the activation function for each neuron. More...
 
virtual void setAnnealCoolingRatio (double val)=0
 ANNEAL: Update cooling ratio. More...
 
virtual void setAnnealEnergyRNG (const RNG &rng)=0
 Set/initialize anneal RNG. More...
 
virtual void setAnnealFinalT (double val)=0
 ANNEAL: Update final temperature. More...
 
virtual void setAnnealInitialT (double val)=0
 ANNEAL: Update initial temperature. More...
 
virtual void setAnnealItePerStep (int val)=0
 ANNEAL: Update iteration per step. More...
 
virtual void setBackpropMomentumScale (double val)=0
 BPROP: Strength of the momentum term (the difference between weights on the 2 previous iterations). More...
 
virtual void setBackpropWeightScale (double val)=0
 BPROP: Strength of the weight gradient term. More...
 
virtual void setLayerSizes (InputArray _layer_sizes)=0
 Integer vector specifying the number of neurons in each layer including the input and output layers. More...
 
virtual void setRpropDW0 (double val)=0
 RPROP: Initial value \(\Delta_0\) of update-values \(\Delta_{ij}\). More...
 
virtual void setRpropDWMax (double val)=0
 RPROP: Update-values upper limit \(\Delta_{max}\). More...
 
virtual void setRpropDWMin (double val)=0
 RPROP: Update-values lower limit \(\Delta_{min}\). More...
 
virtual void setRpropDWMinus (double val)=0
 RPROP: Decrease factor \(\eta^-\). More...
 
virtual void setRpropDWPlus (double val)=0
 RPROP: Increase factor \(\eta^+\). More...
 
virtual void setTermCriteria (TermCriteria val)=0
 Termination criteria of the training algorithm. More...
 
virtual void setTrainMethod (int method, double param1=0, double param2=0)=0
 Sets training method and common parameters. More...
 
virtual bool train (const Ptr< TrainData > &trainData, int flags=0)
 Trains the statistical model. More...
 
virtual bool train (InputArray samples, int layout, InputArray responses)
 Trains the statistical model. More...
 
virtual void write (FileStorage &fs) const
 Stores algorithm parameters in a file storage. More...
 
void write (const Ptr< FileStorage > &fs, const String &name=String()) const
 Simplified API for language bindings. This is an overloaded member function, provided for convenience. It differs from the above function only in the argument(s) it accepts. More...
 

Static Public Member Functions

static Ptr< ANN_MLP > create ()
 Creates empty model. More...
 
static Ptr< ANN_MLP > load (const String &filepath)
 Loads and creates a serialized ANN from a file. More...
 
template<typename _Tp >
static Ptr< _Tp > load (const String &filename, const String &objname=String())
 Loads algorithm from the file. More...
 
template<typename _Tp >
static Ptr< _Tp > loadFromString (const String &strModel, const String &objname=String())
 Loads algorithm from a String. More...
 
template<typename _Tp >
static Ptr< _Tp > read (const FileNode &fn)
 Reads algorithm from the file node. More...
 
template<typename _Tp >
static Ptr< _Tp > train (const Ptr< TrainData > &data, int flags=0)
 Create and train model with default parameters. More...
 

Protected Member Functions

void writeFormat (FileStorage &fs) const
 

Detailed Description

Artificial Neural Networks - Multi-Layer Perceptrons.

Unlike many other models in ML that are constructed and trained at once, in the MLP model these steps are separated. First, a network with the specified topology is created using the non-default constructor or the method ANN_MLP::create. All the weights are set to zeros. Then, the network is trained using a set of input and output vectors. The training procedure can be repeated more than once, that is, the weights can be adjusted based on the new training data.

Additional flags for StatModel::train are available: ANN_MLP::TrainFlags.

See also
Neural Networks

Member Enumeration Documentation

◆ ActivationFunctions

possible activation functions

Enumerator
IDENTITY 

Identity function: \(f(x)=x\).

SIGMOID_SYM 

Symmetrical sigmoid: \(f(x)=\beta*(1-e^{-\alpha x})/(1+e^{-\alpha x})\).

Note
If you are using the default sigmoid activation function with the default parameter values fparam1=0 and fparam2=0, then the function used is y = 1.7159*tanh(2/3 * x), so the output will range over [-1.7159, 1.7159] instead of [0, 1].
GAUSSIAN 

Gaussian function: \(f(x)=\beta e^{-\alpha x*x}\).

RELU 

ReLU function: \(f(x)=max(0,x)\).

LEAKYRELU 

Leaky ReLU function: \(f(x)=x\) for \(x>0\) and \(f(x)=\alpha x\) for \(x \leq 0\).

◆ Flags

enum cv::ml::StatModel::Flags
inherited

Predict options.

Enumerator
UPDATE_MODEL 
RAW_OUTPUT 

makes the method return the raw results (the sum), not the class label

COMPRESSED_INPUT 
PREPROCESSED_INPUT 

◆ TrainFlags

Train options.

Enumerator
UPDATE_WEIGHTS 

Update the network weights, rather than compute them from scratch.

In the latter case the weights are initialized using the Nguyen-Widrow algorithm.

NO_INPUT_SCALE 

Do not normalize the input vectors.

If this flag is not set, the training algorithm normalizes each input feature independently, shifting its mean value to 0 and making the standard deviation equal to 1. If the network is going to be updated frequently, the new training data could be much different from the original one. In this case, you should take care of proper normalization.

NO_OUTPUT_SCALE 

Do not normalize the output vectors.

If the flag is not set, the training algorithm normalizes each output feature independently, by transforming it to a certain range depending on the activation function used.

◆ TrainingMethods

Available training methods.

Enumerator
BACKPROP 

The back-propagation algorithm.

RPROP 

The RPROP algorithm.

See [78] for details.

ANNEAL 

The simulated annealing algorithm.

See [48] for details.

Member Function Documentation

◆ calcError()

virtual float cv::ml::StatModel::calcError ( const Ptr< TrainData > &  data,
bool  test,
OutputArray  resp 
) const
virtualinherited

Computes error on the training or test dataset.

Parameters
data: the training data
test: if true, the error is computed over the test subset of the data; otherwise it is computed over the training subset. Note that if you load a completely different dataset to evaluate an already trained classifier, you will probably want to skip TrainData::setTrainTestSplitRatio and pass test=false, so that the error is computed over the whole new set.
resp: the optional output responses.

The method uses StatModel::predict to compute the error. For regression models the error is computed as RMS, for classifiers as a percentage of misclassified samples (0%-100%).

◆ clear()

virtual void cv::Algorithm::clear ( )
inlinevirtualinherited

Clears the algorithm state.

Reimplemented in cv::FlannBasedMatcher, and cv::DescriptorMatcher.

◆ create()

static Ptr<ANN_MLP> cv::ml::ANN_MLP::create ( )
static

Creates empty model.

Use StatModel::train to train the model, Algorithm::load<ANN_MLP>(filename) to load the pre-trained model. Note that the train method has optional flags: ANN_MLP::TrainFlags.

◆ empty()

virtual bool cv::ml::StatModel::empty ( ) const
virtualinherited

Returns true if the Algorithm is empty (e.g. in the very beginning or after an unsuccessful read).

Reimplemented from cv::Algorithm.

◆ getAnnealCoolingRatio()

virtual double cv::ml::ANN_MLP::getAnnealCoolingRatio ( ) const
pure virtual

ANNEAL: Update cooling ratio.

It must be >0 and less than 1. Default value is 0.95.

See also
setAnnealCoolingRatio

◆ getAnnealFinalT()

virtual double cv::ml::ANN_MLP::getAnnealFinalT ( ) const
pure virtual

ANNEAL: Update final temperature.

It must be >=0 and less than initialT. Default value is 0.1.

See also
setAnnealFinalT

◆ getAnnealInitialT()

virtual double cv::ml::ANN_MLP::getAnnealInitialT ( ) const
pure virtual

ANNEAL: Update initial temperature.

It must be >=0. Default value is 10.

See also
setAnnealInitialT

◆ getAnnealItePerStep()

virtual int cv::ml::ANN_MLP::getAnnealItePerStep ( ) const
pure virtual

ANNEAL: Update iteration per step.

It must be >0 . Default value is 10.

See also
setAnnealItePerStep

◆ getBackpropMomentumScale()

virtual double cv::ml::ANN_MLP::getBackpropMomentumScale ( ) const
pure virtual

BPROP: Strength of the momentum term (the difference between weights on the 2 previous iterations).

This parameter provides some inertia to smooth the random fluctuations of the weights. It can vary from 0 (the feature is disabled) to 1 and beyond. The value 0.1 or so is good enough. Default value is 0.1.

See also
setBackpropMomentumScale

◆ getBackpropWeightScale()

virtual double cv::ml::ANN_MLP::getBackpropWeightScale ( ) const
pure virtual

BPROP: Strength of the weight gradient term.

The recommended value is about 0.1. Default value is 0.1.

See also
setBackpropWeightScale

◆ getDefaultName()

virtual String cv::Algorithm::getDefaultName ( ) const
virtualinherited

Returns the algorithm string identifier.

This string is used as top level xml/yml node tag when the object is saved to a file or string.

Reimplemented in cv::AKAZE, cv::KAZE, cv::SimpleBlobDetector, cv::GFTTDetector, cv::AgastFeatureDetector, cv::FastFeatureDetector, cv::MSER, cv::ORB, cv::BRISK, and cv::Feature2D.

◆ getLayerSizes()

virtual cv::Mat cv::ml::ANN_MLP::getLayerSizes ( ) const
pure virtual

Integer vector specifying the number of neurons in each layer including the input and output layers.

The very first element specifies the number of elements in the input layer. The last element specifies the number of elements in the output layer.

See also
setLayerSizes

◆ getRpropDW0()

virtual double cv::ml::ANN_MLP::getRpropDW0 ( ) const
pure virtual

RPROP: Initial value \(\Delta_0\) of update-values \(\Delta_{ij}\).

Default value is 0.1.

See also
setRpropDW0

◆ getRpropDWMax()

virtual double cv::ml::ANN_MLP::getRpropDWMax ( ) const
pure virtual

RPROP: Update-values upper limit \(\Delta_{max}\).

It must be >1. Default value is 50.

See also
setRpropDWMax

◆ getRpropDWMin()

virtual double cv::ml::ANN_MLP::getRpropDWMin ( ) const
pure virtual

RPROP: Update-values lower limit \(\Delta_{min}\).

It must be positive. Default value is FLT_EPSILON.

See also
setRpropDWMin

◆ getRpropDWMinus()

virtual double cv::ml::ANN_MLP::getRpropDWMinus ( ) const
pure virtual

RPROP: Decrease factor \(\eta^-\).

It must be <1. Default value is 0.5.

See also
setRpropDWMinus

◆ getRpropDWPlus()

virtual double cv::ml::ANN_MLP::getRpropDWPlus ( ) const
pure virtual

RPROP: Increase factor \(\eta^+\).

It must be >1. Default value is 1.2.

See also
setRpropDWPlus

◆ getTermCriteria()

virtual TermCriteria cv::ml::ANN_MLP::getTermCriteria ( ) const
pure virtual

Termination criteria of the training algorithm.

You can specify the maximum number of iterations (maxCount) and/or how much the error could change between the iterations to make the algorithm continue (epsilon). Default value is TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 1000, 0.01).

See also
setTermCriteria

◆ getTrainMethod()

virtual int cv::ml::ANN_MLP::getTrainMethod ( ) const
pure virtual

Returns current training method.

◆ getVarCount()

virtual int cv::ml::StatModel::getVarCount ( ) const
pure virtualinherited

Returns the number of variables in training samples.

◆ getWeights()

virtual Mat cv::ml::ANN_MLP::getWeights ( int  layerIdx) const
pure virtual

◆ isClassifier()

virtual bool cv::ml::StatModel::isClassifier ( ) const
pure virtualinherited

Returns true if the model is classifier.

◆ isTrained()

virtual bool cv::ml::StatModel::isTrained ( ) const
pure virtualinherited

Returns true if the model is trained.

◆ load() [1/2]

static Ptr<ANN_MLP> cv::ml::ANN_MLP::load ( const String filepath)
static

Loads and creates a serialized ANN from a file.

Use ANN_MLP::save to serialize and store an ANN to disk. Load the ANN from this file again by calling this function with the path to the file.

Parameters
filepath: path to the serialized ANN

◆ load() [2/2]

template<typename _Tp >
static Ptr<_Tp> cv::Algorithm::load ( const String filename,
const String objname = String() 
)
inlinestaticinherited

Loads algorithm from the file.

Parameters
filename: Name of the file to read.
objname: The optional name of the node to read (if empty, the first top-level node will be used)

This is a static template method of Algorithm. Its usage is as follows (in the case of SVM):

Ptr<SVM> svm = Algorithm::load<SVM>("my_svm_model.xml");

In order to make this method work, the derived class must override Algorithm::read(const FileNode& fn).

References CV_Assert, cv::FileNode::empty(), cv::FileStorage::getFirstTopLevelNode(), cv::FileStorage::isOpened(), and cv::FileStorage::READ.


◆ loadFromString()

template<typename _Tp >
static Ptr<_Tp> cv::Algorithm::loadFromString ( const String strModel,
const String objname = String() 
)
inlinestaticinherited

Loads algorithm from a String.

Parameters
strModel: The string variable containing the model you want to load.
objname: The optional name of the node to read (if empty, the first top-level node will be used)

This is a static template method of Algorithm. Its usage is as follows (in the case of SVM):

Ptr<SVM> svm = Algorithm::loadFromString<SVM>(myStringModel);

References CV_WRAP, cv::FileNode::empty(), cv::FileStorage::getFirstTopLevelNode(), cv::FileStorage::MEMORY, and cv::FileStorage::READ.


◆ predict()

virtual float cv::ml::StatModel::predict ( InputArray  samples,
OutputArray  results = noArray(),
int  flags = 0 
) const
pure virtualinherited

Predicts response(s) for the provided sample(s)

Parameters
samples: The input samples, floating-point matrix
results: The optional output matrix of results.
flags: The optional flags, model-dependent. See cv::ml::StatModel::Flags.

Implemented in cv::ml::LogisticRegression, and cv::ml::EM.

◆ read() [1/2]

virtual void cv::Algorithm::read ( const FileNode fn)
inlinevirtualinherited

Reads algorithm parameters from a file storage.

Reimplemented in cv::FlannBasedMatcher, cv::DescriptorMatcher, and cv::Feature2D.

◆ read() [2/2]

template<typename _Tp >
static Ptr<_Tp> cv::Algorithm::read ( const FileNode fn)
inlinestaticinherited

Reads algorithm from the file node.

This is a static template method of Algorithm. Its usage is as follows (in the case of SVM):

cv::FileStorage fsRead("example.xml", FileStorage::READ);
Ptr<SVM> svm = Algorithm::read<SVM>(fsRead.root());

In order to make this method work, the derived class must override Algorithm::read(const FileNode& fn) and also have a static create() method without parameters (or with all parameters optional).

◆ save()

virtual void cv::Algorithm::save ( const String filename) const
virtualinherited

Saves the algorithm to a file.

In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs).

◆ setActivationFunction()

virtual void cv::ml::ANN_MLP::setActivationFunction ( int  type,
double  param1 = 0,
double  param2 = 0 
)
pure virtual

Initialize the activation function for each neuron.

Currently the default and the only fully supported activation function is ANN_MLP::SIGMOID_SYM.

Parameters
type: The type of activation function. See ANN_MLP::ActivationFunctions.
param1: The first parameter of the activation function, \(\alpha\). Default value is 0.
param2: The second parameter of the activation function, \(\beta\). Default value is 0.

◆ setAnnealCoolingRatio()

virtual void cv::ml::ANN_MLP::setAnnealCoolingRatio ( double  val)
pure virtual

ANNEAL: Update cooling ratio.

See also
getAnnealCoolingRatio

◆ setAnnealEnergyRNG()

virtual void cv::ml::ANN_MLP::setAnnealEnergyRNG ( const RNG rng)
pure virtual

Set/initialize anneal RNG.

◆ setAnnealFinalT()

virtual void cv::ml::ANN_MLP::setAnnealFinalT ( double  val)
pure virtual

ANNEAL: Update final temperature.

See also
getAnnealFinalT

◆ setAnnealInitialT()

virtual void cv::ml::ANN_MLP::setAnnealInitialT ( double  val)
pure virtual

ANNEAL: Update initial temperature.

See also
getAnnealInitialT

◆ setAnnealItePerStep()

virtual void cv::ml::ANN_MLP::setAnnealItePerStep ( int  val)
pure virtual

ANNEAL: Update iteration per step.

See also
getAnnealItePerStep

◆ setBackpropMomentumScale()

virtual void cv::ml::ANN_MLP::setBackpropMomentumScale ( double  val)
pure virtual

BPROP: Strength of the momentum term (the difference between weights on the 2 previous iterations).

See also
getBackpropMomentumScale

◆ setBackpropWeightScale()

virtual void cv::ml::ANN_MLP::setBackpropWeightScale ( double  val)
pure virtual

BPROP: Strength of the weight gradient term.

See also
getBackpropWeightScale

◆ setLayerSizes()

virtual void cv::ml::ANN_MLP::setLayerSizes ( InputArray  _layer_sizes)
pure virtual

Integer vector specifying the number of neurons in each layer including the input and output layers.

The very first element specifies the number of elements in the input layer. The last element specifies the number of elements in the output layer. Default value is empty Mat.

See also
getLayerSizes

◆ setRpropDW0()

virtual void cv::ml::ANN_MLP::setRpropDW0 ( double  val)
pure virtual

RPROP: Initial value \(\Delta_0\) of update-values \(\Delta_{ij}\).

See also
getRpropDW0

◆ setRpropDWMax()

virtual void cv::ml::ANN_MLP::setRpropDWMax ( double  val)
pure virtual

RPROP: Update-values upper limit \(\Delta_{max}\).

See also
getRpropDWMax

◆ setRpropDWMin()

virtual void cv::ml::ANN_MLP::setRpropDWMin ( double  val)
pure virtual

RPROP: Update-values lower limit \(\Delta_{min}\).

See also
getRpropDWMin

◆ setRpropDWMinus()

virtual void cv::ml::ANN_MLP::setRpropDWMinus ( double  val)
pure virtual

RPROP: Decrease factor \(\eta^-\).

See also
getRpropDWMinus

◆ setRpropDWPlus()

virtual void cv::ml::ANN_MLP::setRpropDWPlus ( double  val)
pure virtual

RPROP: Increase factor \(\eta^+\).

See also
getRpropDWPlus

◆ setTermCriteria()

virtual void cv::ml::ANN_MLP::setTermCriteria ( TermCriteria  val)
pure virtual

Termination criteria of the training algorithm.

See also
getTermCriteria

◆ setTrainMethod()

virtual void cv::ml::ANN_MLP::setTrainMethod ( int  method,
double  param1 = 0,
double  param2 = 0 
)
pure virtual

Sets training method and common parameters.

Parameters
method: Default value is ANN_MLP::RPROP. See ANN_MLP::TrainingMethods.
param1: passed to setRpropDW0 for ANN_MLP::RPROP, to setBackpropWeightScale for ANN_MLP::BACKPROP, and to setAnnealInitialT for ANN_MLP::ANNEAL.
param2: passed to setRpropDWMin for ANN_MLP::RPROP, to setBackpropMomentumScale for ANN_MLP::BACKPROP, and to setAnnealFinalT for ANN_MLP::ANNEAL.

◆ train() [1/3]

virtual bool cv::ml::StatModel::train ( const Ptr< TrainData > &  trainData,
int  flags = 0 
)
virtualinherited

Trains the statistical model.

Parameters
trainData: training data that can be loaded from a file using TrainData::loadFromCSV or created with TrainData::create.
flags: optional flags, depending on the model. Some models can be updated with new training samples rather than completely overwritten (such as NormalBayesClassifier or ANN_MLP).

◆ train() [2/3]

virtual bool cv::ml::StatModel::train ( InputArray  samples,
int  layout,
InputArray  responses 
)
virtualinherited

Trains the statistical model.

Parameters
samples: training samples
layout: See ml::SampleTypes.
responses: vector of responses associated with the training samples.

◆ train() [3/3]

template<typename _Tp >
static Ptr<_Tp> cv::ml::StatModel::train ( const Ptr< TrainData > &  data,
int  flags = 0 
)
inlinestaticinherited

Create and train model with default parameters.

The class must implement a static create() method with no parameters or with all default parameter values.

◆ write() [1/2]

virtual void cv::Algorithm::write ( FileStorage fs) const
inlinevirtualinherited

Stores algorithm parameters in a file storage.

Reimplemented in cv::FlannBasedMatcher, cv::DescriptorMatcher, and cv::Feature2D.

References CV_WRAP.

Referenced by cv::Feature2D::write(), and cv::DescriptorMatcher::write().


◆ write() [2/2]

void cv::Algorithm::write ( const Ptr< FileStorage > &  fs,
const String name = String() 
) const
inherited

Simplified API for language bindings. This is an overloaded member function, provided for convenience. It differs from the above function only in the argument(s) it accepts.

◆ writeFormat()

void cv::Algorithm::writeFormat ( FileStorage fs) const
protectedinherited

The documentation for this class was generated from the following file: opencv2/ml.hpp