Darknet/YOLO v5.0-117-g31c55275-dirty
Object Detection Framework
 
Darknet::ONNXExport Class Reference (final)

Everything we need to convert .cfg and .weights files to .onnx is contained within this class. More...

#include "darknet_onnx.hpp"


Public Member Functions

 ONNXExport (const std::filesystem::path &cfg_filename, const std::filesystem::path &weights_filename, const std::filesystem::path &onnx_filename)
 Constructor.
 
 ~ONNXExport ()
 Destructor.
 
ONNXExport & add_node_activation (const size_t index, Darknet::CfgSection &section)
 
ONNXExport & add_node_bn (const size_t index, Darknet::CfgSection &section)
 
ONNXExport & add_node_conv (const size_t index, Darknet::CfgSection &section)
 
ONNXExport & add_node_maxpool (const size_t index, Darknet::CfgSection &section)
 
ONNXExport & add_node_resize (const size_t index, Darknet::CfgSection &section)
 
ONNXExport & add_node_route_concat (const size_t index, Darknet::CfgSection &section)
 
ONNXExport & add_node_route_split (const size_t index, Darknet::CfgSection &section)
 
ONNXExport & add_node_shortcut (const size_t index, Darknet::CfgSection &section)
 
ONNXExport & add_node_yolo (const size_t index, Darknet::CfgSection &section)
 
ONNXExport & build_model ()
 
ONNXExport & check_activation (const size_t index, Darknet::CfgSection &section)
 
ONNXExport & display_summary ()
 Display some general information about the protocol buffer model.
 
ONNXExport & initialize_model ()
 Initialize some of the simple protobuf model fields.
 
ONNXExport & load_network ()
 Use Darknet to load the neural network.
 
ONNXExport & populate_graph_initializer (const float *f, const size_t n, const size_t idx, const Darknet::Layer &l, const std::string &name, const bool simple=false)
 
ONNXExport & populate_graph_input_frame ()
 
ONNXExport & populate_graph_nodes ()
 
ONNXExport & populate_graph_output ()
 
ONNXExport & populate_input_output_dimensions (onnx::ValueInfoProto *proto, const std::string &name, const int v1, const int v2=-1, const int v3=-1, const int v4=-1, const size_t line_number=0)
 
ONNXExport & save_output_file ()
 Save the entire model as an .onnx file.
 Save the entire model as an .onnx file.
 

Static Public Member Functions

static void log_handler (google::protobuf::LogLevel level, const char *filename, int line, const std::string &message)
 Callback function that Protocol Buffers calls to log messages.
 

Public Attributes

Darknet::CfgFile cfg
 
std::filesystem::path cfg_fn
 
bool fuse_batchnorm
 Whether or not we need to fuse batchnorm (fuse and dontfuse on the CLI).
 
onnx::GraphProto * graph
 
std::string input_string
 The dimensions used in populate_graph_input_frame().
 
onnx::ModelProto model
 
std::map< int, std::string > most_recent_output_per_index
 Keep track of the single most recent output name for each of the layers.
 
std::map< std::string, size_t > number_of_floats_exported
 The key is the last part of the string, and the value is the number of floats.
 
std::filesystem::path onnx_fn
 
int opset_version
 Which opset version to use (10, 18, ...)?
 
std::string output_string
 The output nodes for this neural network.
 
std::filesystem::path weights_fn
 

Detailed Description

Everything we need to convert .cfg and .weights files to .onnx is contained within this class.

Constructor & Destructor Documentation

◆ ONNXExport()

Darknet::ONNXExport::ONNXExport ( const std::filesystem::path &  cfg_filename,
const std::filesystem::path &  weights_filename,
const std::filesystem::path &  onnx_filename 
)

Constructor.


◆ ~ONNXExport()

Darknet::ONNXExport::~ONNXExport ( )

Destructor.


Member Function Documentation

◆ add_node_activation()

Darknet::ONNXExport & Darknet::ONNXExport::add_node_activation ( const size_t  index,
Darknet::CfgSection &  section 
)

◆ add_node_bn()

Darknet::ONNXExport & Darknet::ONNXExport::add_node_bn ( const size_t  index,
Darknet::CfgSection &  section 
)

◆ add_node_conv()

Darknet::ONNXExport & Darknet::ONNXExport::add_node_conv ( const size_t  index,
Darknet::CfgSection &  section 
)

◆ add_node_maxpool()

Darknet::ONNXExport & Darknet::ONNXExport::add_node_maxpool ( const size_t  index,
Darknet::CfgSection &  section 
)

◆ add_node_resize()

Darknet::ONNXExport & Darknet::ONNXExport::add_node_resize ( const size_t  index,
Darknet::CfgSection &  section 
)

◆ add_node_route_concat()

Darknet::ONNXExport & Darknet::ONNXExport::add_node_route_concat ( const size_t  index,
Darknet::CfgSection &  section 
)

◆ add_node_route_split()

Darknet::ONNXExport & Darknet::ONNXExport::add_node_route_split ( const size_t  index,
Darknet::CfgSection &  section 
)


Todo:
V5: is this logic correct? This is only a guess as to how this works.

◆ add_node_shortcut()

Darknet::ONNXExport & Darknet::ONNXExport::add_node_shortcut ( const size_t  index,
Darknet::CfgSection &  section 
)
Todo:
V5: unused? Do we have weights for shortcuts?

◆ add_node_yolo()

Darknet::ONNXExport & Darknet::ONNXExport::add_node_yolo ( const size_t  index,
Darknet::CfgSection &  section 
)

◆ build_model()

Darknet::ONNXExport & Darknet::ONNXExport::build_model ( )

◆ check_activation()

Darknet::ONNXExport & Darknet::ONNXExport::check_activation ( const size_t  index,
Darknet::CfgSection &  section 
)

◆ display_summary()

Darknet::ONNXExport & Darknet::ONNXExport::display_summary ( )

Display some general information about the protocol buffer model.


◆ initialize_model()

Darknet::ONNXExport & Darknet::ONNXExport::initialize_model ( )

Initialize some of the simple protobuf model fields.

Todo:
We need a command-line parameter for this field.
Todo:
We need a command-line parameter for this field.

◆ load_network()

Darknet::ONNXExport & Darknet::ONNXExport::load_network ( )

Use Darknet to load the neural network.


◆ log_handler()

void Darknet::ONNXExport::log_handler ( google::protobuf::LogLevel  level,
const char *  filename,
int  line,
const std::string &  message 
)
static

Callback function that Protocol Buffers calls to log messages.


◆ populate_graph_initializer()

Darknet::ONNXExport & Darknet::ONNXExport::populate_graph_initializer ( const float *  f,
const size_t  n,
const size_t  idx,
const Darknet::Layer &  l,
const std::string &  name,
const bool  simple = false 
)
Todo:
V5 2025-08-13: This is black magic! I actually have no idea how the DIMS work. I saw some example Darknet/YOLO weights converted to ONNX and attempted to figure out the pattern. While this seems to work for the few examples I have, I would be extremely happy if someone can point out to me exactly how this works so I can implement it correctly!

◆ populate_graph_input_frame()

Darknet::ONNXExport & Darknet::ONNXExport::populate_graph_input_frame ( )

◆ populate_graph_nodes()

Darknet::ONNXExport & Darknet::ONNXExport::populate_graph_nodes ( )

◆ populate_graph_output()

Darknet::ONNXExport & Darknet::ONNXExport::populate_graph_output ( )

◆ populate_input_output_dimensions()

Darknet::ONNXExport & Darknet::ONNXExport::populate_input_output_dimensions ( onnx::ValueInfoProto *  proto,
const std::string &  name,
const int  v1,
const int  v2 = -1,
const int  v3 = -1,
const int  v4 = -1,
const size_t  line_number = 0 
)

◆ save_output_file()

Darknet::ONNXExport & Darknet::ONNXExport::save_output_file ( )

Save the entire model as an .onnx file.


Member Data Documentation

◆ cfg

Darknet::CfgFile Darknet::ONNXExport::cfg

◆ cfg_fn

std::filesystem::path Darknet::ONNXExport::cfg_fn

◆ fuse_batchnorm

bool Darknet::ONNXExport::fuse_batchnorm

Whether or not we need to fuse batchnorm (fuse and dontfuse on the CLI).

◆ graph

onnx::GraphProto* Darknet::ONNXExport::graph

◆ input_string

std::string Darknet::ONNXExport::input_string

The dimensions used in populate_graph_input_frame().

◆ model

onnx::ModelProto Darknet::ONNXExport::model

◆ most_recent_output_per_index

std::map<int, std::string> Darknet::ONNXExport::most_recent_output_per_index

Keep track of the single most recent output name for each of the layers.

◆ number_of_floats_exported

std::map<std::string, size_t> Darknet::ONNXExport::number_of_floats_exported

The key is the last part of the string, and the value is the number of floats.

For example, for "000_conv_bias", we store the key as "bias".

◆ onnx_fn

std::filesystem::path Darknet::ONNXExport::onnx_fn

◆ opset_version

int Darknet::ONNXExport::opset_version

Which opset version to use (10, 18, ...)?

◆ output_string

std::string Darknet::ONNXExport::output_string

The output nodes for this neural network.

◆ weights_fn

std::filesystem::path Darknet::ONNXExport::weights_fn

The documentation for this class was generated from the following files: