The Darknet/YOLO Discord and Darknet Issues regularly see the same questions come up. This page attempts to answer these frequently asked questions.

Stéphane's Darknet FAQ:

How to get started with Darknet? How long will it take?

The simple stop sign tutorial (also available as a YouTube video) is a good place to start. It will probably take you ~90 minutes to complete:

At the end, you'll have a simple custom neural network that can find stop signs.

There is a second, different "getting started" tutorial in the section that discusses How to build Darknet on Linux?.

See also: How many images will it take? and How long does it take to train?.

Which configuration file should I use?

Darknet comes with many configuration files. This is how you select which configuration file to use:

See also: Configuration Files.

Does the network have to be perfectly square?


The default network sizes in the common template configuration files are defined as 416x416 or 608x608, but those are only examples!

Choose a size that works for you and your images. The only restrictions are:

Whatever size you choose, Darknet will stretch (without preserving the aspect ratio!) your images to be exactly that size prior to processing the image. This includes both training and inference. So use a size that makes sense for you and the images you need to process, but remember that there are important speed and memory limitations:
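For reference, the network size lives in the [net] section at the top of the configuration file. Whatever values you pick, both the width and the height must be multiples of 32 (the network downsamples by a factor of 32). A typical fragment looks like this:

```ini
[net]
# Both dimensions must be multiples of 32.
# Larger sizes let the network see more detail,
# but cost more GPU memory and training time.
width=416
height=416
```

Values such as 416x416, 608x416, or 800x608 are all valid; 500x500 is not.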

See also: How much memory does it take?

Can I train a neural network with a CPU?

Not really.

While technically you could, the wait is impractically long. Inference on single images with a CPU does work (measured in seconds, versus milliseconds on a GPU), but to give you an idea of the difference between CPU and GPU, see the following:

See also: How long does it take and CPU vs GPU.

What is the difference between CPU and GPU performance?

The CPU-only version of darknet can be used for inference, but it is not as fast as when a CUDA-compatible GPU is available.

When using the CPU, inference is measured in seconds. When using a GPU, inference is measured in milliseconds.
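To put those units in perspective, here is a back-of-envelope sketch. The per-image times are illustrative assumptions, not measurements:

```python
# Illustrative per-image inference times (assumed for this sketch):
# ~2 seconds per image on a CPU vs ~20 milliseconds per image on a GPU.
images = 65  # e.g. a directory containing 65 JPG images

cpu_total_sec = images * 2.0    # total CPU time, in seconds
gpu_total_sec = images * 0.020  # total GPU time, in seconds

print(f"CPU: {cpu_total_sec:.0f} seconds")  # CPU: 130 seconds
print(f"GPU: {gpu_total_sec:.1f} seconds")  # GPU: 1.3 seconds
```

The two-orders-of-magnitude gap per image is why training (hundreds of thousands of image passes) is only practical on a GPU.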

An example, using YOLOv4-tiny, 640x480, against a directory with 65 JPG images:

See also: Training with a CPU?

How many images will it take?

It depends. There isn't a single magic number that answers this question for everyone.

A short tl;dr answer: between several hundred and many thousands. If you have to ask, it probably means many thousands.

A more detailed answer:

  1. Say your application is a camera pointed at a fixed location inside a machine, or looking at a spot on a conveyor belt in a factory. We'll call this scenario #1.
  2. Then we can imagine a camera pointed at a fixed location outdoors, such as a traffic cam mounted on a pole, always looking at the exact same scene. We'll call this scenario #2.
  3. The last example covers drones flying outdoors, or people using random images they either download from the internet or take themselves, where there is no control over the size of the image, the zoom, the angle, the quality of the image, etc. This is scenario #3.

In all cases, you can start with just a few images. In the stop sign tutorial, a neural network is trained to find stop signs with 30+ images. You can do it with fewer, but your neural network will be extremely limited. Note that in that tutorial, all signs are more-or-less the same size, taken from similar distances and angles. The resulting network will be limited to finding stop signs similar to those used during training.

  1. Scenario #1: In the first scenario, there is extremely little variability in what the camera will see. You can train with very few images. It is very important in these scenarios to have negative images during training. So if the network needs to find a part moving on a conveyor belt, then make sure to include images the camera will see of the conveyor belt without the part you need it to find. (This is mentioned briefly in the Darknet readme; search for "negative samples".) To get started, annotate 2 dozen images and include an additional dozen negative sample images. (See also: How to annotate images?) Depending on the variability in lighting conditions, object position, and how many ways the part can be rotated, you'll eventually work up to hundreds of images to get a decent network trained.
  2. Scenario #2: In the second scenario, the scene might change in lighting, but the background is relatively fixed. As in the previous scenario, negative images will help the neural network determine what background must be ignored. You may need to collect images over the span of a year to eventually teach the network the difference between summer, winter, and everything in between. To get started you'll have a few dozen images, but know that you'll need to scale to thousands almost immediately. And the more classes you want to detect, the more images you need. For example:
    • Need to recognize a blue car? That is an image.
    • A blue car in the lower right as well as the upper left? More images.
    • Do you need to find red cars too? Duplicate all the previous images.
    • Cars of any colour of the rainbow? Yet more images.
    • Small cars as well as large cars?
    • What about trucks? Pickup truck, transport trucks, garbage truck? More images of each type and colour.
    • Buses? School buses? City buses? Commercial buses? Black buses? White buses? More images!
    • Cars at noon? Cars when it is raining? Cars in winter when the road is white and covered in snow? At 9am or 9pm? With headlights on or with lights off?
    Each of these scenarios barely begins to describe all the possibilities. And if you want the neural network to function correctly, you need annotated images of each possibility.
  3. Scenario #3: The last scenario is the most complex. You have all the possibilities of the previous scenario, but in addition you don't control who takes the images, the type of camera used, or how the pictures are taken. You have no idea what the background will be. You have no idea if the camera will be in focus, zoomed in, tilted slightly to one side, what the lighting conditions are, if the image is upside down, etc. So of course you can begin with just a few dozen images, but because diversity is key to making this scenario work, your network will need thousands of annotated images per class.

Can I train a neural network using synthetic images?


Probably not. Or, to be more precise, you'll likely end up with a neural network that is great at detecting your synthetic images, but unable to detect much in real-world images.

(Yes, one of my first Darknet tutorials was to detect barcodes in synthetic images, and that neural network worked great...but mostly at detecting barcodes in synthetic images!)

What does the error message "CUDA Error: out of memory" mean during training?

If training abruptly stops with the following message:

Try to set subdivisions=64 in your cfg-file. CUDA Error: out of memory: File exists
darknet: ./src/utils.c:325: error: Assertion `0' failed.
Command terminated by signal 6

This means you don't have enough video memory on your GPU card. There are several possible solutions:

  1. decrease the network size, meaning width=... and height=... in the [net] section of your .cfg file
  2. increase the subdivisions, meaning subdivisions=... in the [net] section of your .cfg file
  3. choose a different configuration file that consumes less memory
  4. purchase a different GPU with more memory

If the network size cannot be modified, the most common solution is to increase the subdivision. For example, in the [net] section of your .cfg file you might see this:

batch=64
subdivisions=2

Try doubling the subdivisions and train again:

batch=64
subdivisions=4


batch=64
subdivisions=8

Keep doubling the subdivisions until you don't get the out-of-memory error.
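The reason this works: Darknet pushes batch/subdivisions images through the GPU at a time, so doubling the subdivisions halves the number of images (and their intermediate layer outputs) held in GPU memory at once. A small sketch:

```python
def images_per_minibatch(batch: int, subdivisions: int) -> int:
    """Darknet processes batch/subdivisions images on the GPU at once."""
    return batch // subdivisions

print(images_per_minibatch(64, 2))   # 32 images on the GPU at a time
print(images_per_minibatch(64, 64))  # 1 image at a time: the minimum-memory case
```

This is also why subdivisions can never usefully exceed batch: at subdivisions=batch you are already down to a single image per mini-batch.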

If subdivisions=... matches the value in batch=... and you still get an out-of-memory error, then you'll need to decrease the network dimensions (the width=... and height=... in [net]) or select a less demanding configuration.

See also: How much memory does it take?

How to build Darknet on Linux?

The Linux instructions for building Darknet are very simple. It should not take more than 2 or 3 minutes to get all the dependencies installed and everything built.

Taken from the DarkHelp page, it should look similar to this when building Darknet on a Debian-based distribution such as Ubuntu:

sudo apt-get install build-essential git libopencv-dev
mkdir ~/src
cd ~/src
git clone https://github.com/AlexeyAB/darknet.git
cd darknet
# edit Makefile to set LIBSO=1, and possibly other flags
make
sudo cp libdarknet.so /usr/local/lib/
sudo cp include/darknet.h /usr/local/include/
sudo ldconfig

When you edit the Makefile for the first time, if you're unsure of which flags to set, try this to get started:
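Assuming the Makefile variable names used in the AlexeyAB repository, a CPU-only starting point might look like this (the values below are a suggestion, not the only valid combination):

```makefile
GPU=0
CUDNN=0
CUDNN_HALF=0
OPENCV=1
AVX=0
OPENMP=0
LIBSO=1
```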


These are safe options that will build a CPU-only version of Darknet, which doesn't require any NVIDIA files to be installed. Remember this is only to help you get started. If you want to use your GPU, you can go back and enable it after you confirm everything is working.

This YouTube video shows how to install Darknet as described above. Watch the "Darknet" segment that begins at 0:50:

See also: What about OpenCV and CUDA?

How to build Darknet on Windows?

This gets...complicated. And as of early 2021, there isn't a single easy-to-follow tutorial that would work for everyone, as the Windows build instructions tend to be quite fragile.

At this point in time, the only solution is to look at the Darknet project's readme (we know, it isn't always up-to-date) or come ask questions in the #build-in-windows channel on the Darknet/YOLO Discord.

What about OpenCV and CUDA?

You definitely should NOT build OpenCV by hand! Yes, we know about the many tutorials. Chances are, you'll get it wrong. OpenCV is a complex library, with many optional modules. Some of those modules are not optional for Darknet, and many OpenCV tutorials don't include them.

Instead, please follow the standard way to install OpenCV for your operating system. On Debian-based Linux distributions, it is a single command that should look similar to this:

sudo apt-get install libopencv-dev

That's it! It really should never be more complicated than that and shouldn't take more than a few seconds.

Which brings up the topic of CUDA-enabled OpenCV. Please take note: Darknet DOES NOT use the CUDA portion of OpenCV! Don't bother spending hours or days trying to install it; Darknet won't use it. At all!

Darknet does use CUDA to train and run the neural network, but it does so directly, not via OpenCV. OpenCV is used to load images from disk, resize images, and create the "mosaic" images, all of which is done without the GPU.

To use CUDA and the GPU from within OpenCV, you'd need to make extensive modifications to the Darknet source code. For example, to use cv::cuda::GpuMat instead of the usual cv::Mat. And even then, it isn't as simple as people think, as described in this blog post on CUDA and OpenCV.

So save yourself the headache, install the "normal" OpenCV libraries, and don't waste any more time on trying to figure out how to enable CUDA within OpenCV.

How much memory does it take to train a custom network?

There are several factors that determine how much video memory is needed on your GPU to train a network. Except for the first (the configuration file itself), these are all defined in the [net] section at the top of the configuration file:

Typically, once a network configuration and dimensions are chosen, the value that gets modified to make the network fit in the available memory is the batch subdivision.

You'll want the subdivision to be as small as possible without causing an out-of-memory error. Here are some values showing the amount of GPU memory required using various configurations and subdivisions:

subdivisions= 64 32 16 8 4 2 1
YOLOv3 3085 MiB 4406 MiB 6746 MiB ? ? ? ?
YOLOv3-tiny 1190 MiB 1204 MiB 1652 MiB 2522 MiB 4288 MiB ? ?
YOLOv3-tiny-3l 1046 MiB 1284 MiB 1814 MiB 2810 MiB 4846 MiB ? ?
YOLOv4 4236 MiB 6246 MiB ? ? ? ? ?
YOLOv4-tiny 814 MiB 956 MiB 1321 MiB 1752 MiB 2770 MiB 5532 MiB ?
YOLOv4-tiny-3l 830 MiB 1085 MiB 1282 MiB 1862 MiB 2982 MiB 5748 MiB ?

Here is the same table but for a slightly larger network size:

subdivisions= 64 32 16 8 4 2 1
YOLOv3 4648 MiB 4745 MiB ? ? ? ? ?
YOLOv3-tiny 1278 MiB 1774 MiB 2728 MiB 4634 MiB ? ? ?
YOLOv3-tiny-3l 1473 MiB 2059 MiB 3044 MiB 5420 MiB ? ? ?
YOLOv4 6906 MiB ? ? ? ? ? ?
YOLOv4-tiny 984 MiB 1262 MiB 1909 MiB 2902 MiB 5076 MiB ? ?
YOLOv4-tiny-3l 1020 MiB 1332 MiB 1938 MiB 3134 MiB 5518 MiB ? ?

Memory values as reported by nvidia-smi. My setup is a GeForce RTX 2070 with only 8 GiB of memory, which limits the configurations I can run.

How long does it take to train a custom network?

The length of time it takes to train a network depends on the input image data, the network configuration, the available hardware, how Darknet was compiled, and (in extreme cases) even the format of the images.

Some tl;dr notes:

  1. Resize your training and validation images to match exactly the network size.
    • For example: mogrify -verbose -strip -resize 416x416! -quality 75 *.JPG
  2. Build Darknet with support for OpenCV: loading images is slower without OpenCV.
  3. Build Darknet with support for OpenCV: resizing images is slower without OpenCV.
  4. Build Darknet with support for CUDA/CUDNN. (This requires supported hardware.)
  5. Use the tiny variants of the network.

The format of the images -- JPG or PNG -- has no meaningful impact on the length of time it takes to train unless the images are excessively large. When very large photo-realistic image files are saved as PNG, the excessive file sizes mean loading the images from disk is slow, which significantly impacts the training time. This should never be an issue when the image sizes match the network size.

The following table shows the length of time it takes to train the same neural network:

Darknet compiled to use GPU + OpenCV:
  • original 4608x3456 JPG images: 10 iterations: 42.26 seconds; 10K iterations: 11h 44m
  • 4608x3456 JPG images, quality=75: 10 iterations: 35.27 seconds; 10K iterations: 9h 47m
  • 800x600 JPG images, quality=75: 10 iterations: 6.90 seconds; 10K iterations: 1h 55m
  • 416x416 JPG images, quality=75: 10 iterations: 6.76 seconds; 10K iterations: 1h 53m

Darknet compiled to use GPU + OpenCV, but using PNG images instead of JPG:
  • original 4608x3456 images: n/a
  • 4608x3456 PNG images: 10 iterations: 80.70 seconds; 10K iterations: 22h 25m
  • 800x600 PNG images: 10 iterations: 6.93 seconds; 10K iterations: 1h 56m
  • 416x416 PNG images: 10 iterations: 6.71 seconds; 10K iterations: 1h 52m

Darknet compiled to use GPU, but without OpenCV:
  • original 4608x3456 JPG images: 10 iterations: 113.31 seconds; 10K iterations: 31h 29m
  • 4608x3456 JPG images, quality=75: 10 iterations: 106.56 seconds; 10K iterations: 29h 36m
  • 800x600 JPG images, quality=75: 10 iterations: 9.19 seconds; 10K iterations: 2h 33m
  • 416x416 JPG images, quality=75: 10 iterations: 7.70 seconds; 10K iterations: 2h 8m

Darknet compiled for CPU + OpenCV (no GPU):
  • original 4608x3456 JPG images: 10 iterations: 532.86 seconds; 10K iterations: > 6 days
  • 4608x3456 JPG images, quality=75: 10 iterations: 527.41 seconds; 10K iterations: > 6 days
  • 800x600 JPG images, quality=75: 10 iterations: 496.47 seconds; 10K iterations: > 5 days
  • 416x416 JPG images, quality=75: 10 iterations: 496.03 seconds; 10K iterations: > 5 days

For these tests, the GPU was a GeForce RTX 2070 with 8 GiB of memory, and the CPU was an 8-core 3.40 GHz processor.

Note that all the neural networks trained in the previous table are exactly the same. The training images are identical, the validation images are the same, and the resulting neural networks are virtually identical. But the length of time it takes to train varies between ~2 hours and more than 6 days.

What command should I use when training my own network?

I store all my neural network projects in subfolders within /home/username/nn/ (where "nn" means "neural networks"). So if I was to start a project called "animals", all my images and configuration files would be stored in ~/nn/animals/.

This is the command I would run to train my "animals" neural network from within Linux:

cd ~/nn/animals/
~/src/darknet/darknet detector -map -dont_show train ~/nn/animals/animals.data ~/nn/animals/animals.cfg

Note how I don't use any starting weights files. When I train my own neural networks, I always start with a clean slate.

I like to capture all the output generated by darknet (STDERR and STDOUT), so I modify the previous command like this:

cd ~/nn/animals/
~/src/darknet/darknet detector -map -dont_show train ~/nn/animals/animals.data ~/nn/animals/animals.cfg 2>&1 | tee output.log

Darknet does not need to run from within the darknet subdirectory. It is a self-contained application. You can run it from anywhere, as long as it is on the path or you specify its full path.

The various filenames (data, cfg, images, ...) can be relative to the current directory, but I prefer to use absolute filenames in all the darknet configuration files to remove any ambiguity. For example, given the previous "animals" project, the content of the animals.data file might look like this:

classes = 17
train = /home/username/nn/animals/animals_train.txt
valid = /home/username/nn/animals/animals_valid.txt
names = /home/username/nn/animals/animals.names
backup = /home/username/nn/animals
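For completeness, the other files referenced by animals.data are just as simple. The sketch below shows both with explanatory comments; the actual files contain only the plain lines, and the class names here are hypothetical:

```
# animals.names -- one class name per line; the line order defines the class index
dog
cat
horse
# ...followed by 14 more names, so the line count matches "classes = 17"

# animals_train.txt -- one image filename per line; the annotations for each
# image are read from the .txt file that sits beside it
/home/username/nn/animals/img_0001.jpg
/home/username/nn/animals/img_0002.jpg
```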

Once training has started, open the image file chart.png to see the progress.

How to run against multiple images, and/or how to get JSON output?

The preferred way is to use the API. This way you load the network once, run it against as many images as you need, and process the results exactly the way you want.

Darknet has a C API, C++ bindings, and there are other open-source libraries such as DarkHelp which provide an alternate C++ API.

If you are looking for something you can run from the CLI without writing any C or C++:

To use the darknet command line instead of the API, search for the text "list of images" in the Darknet readme. It gives a few examples showing how to process many images at once to get the results in JSON format.
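As a sketch of what that looks like, the readme describes piping a list of image filenames into darknet and writing the detections to a JSON file. The paths below are hypothetical, and the exact flag names may vary between versions, so check the readme for your build:

```shell
# Process every image listed in images.txt and save the detections as JSON.
~/src/darknet/darknet detector test animals.data animals.cfg animals_best.weights \
    -dont_show -ext_output -out result.json < images.txt
```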

Similarly, DarkHelp also has a JSON/CLI mode which may be used to process many images at once.

Should I crop my training images?


Say you want a network trained to find barcodes. If you crop and label your training images like this:


...then your network will only learn to recognize barcodes when they take up 100% of the image. It is unlikely you want that; if the objects to find always took up 100% of the image, then there is little use to train a neural network.

Instead, make sure your training images are representative of the final images. Using this barcode as an example, a more likely marked up training image would be similar to this:

book and barcode

See also: Darknet & DarkMark image markup.

How to annotate images?

Images that don't contain any of the objects you want to find are called negative samples, and they are important to have in your training set. When marking up your images, the negative samples will have a blank (empty) .txt file, telling Darknet that nothing of interest exists in that image.

In DarkMark, this is done by selecting the "empty image" annotation.

empty image in DarkMark

Meanwhile, the rest of your images should have 1 annotation per object. If an image contains 3 vehicles, and "vehicle" is one of your classes, then you must mark up the image 3 times, once for each vehicle. Don't use a single large annotation that covers all 3 vehicles, unless that is what you want the network to learn. Similarly, don't break your object up into multiple smaller parts to try to get better or tighter coverage. Stick to the rule "1 object = 1 annotation".
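As a concrete sketch, the Darknet annotation .txt file for an image containing 3 vehicles (class index 0) would contain three lines, one bounding box per vehicle. The coordinates here are made up for illustration:

```
0 0.25 0.40 0.10 0.08
0 0.52 0.43 0.12 0.09
0 0.78 0.38 0.11 0.08
```

The meaning of the five fields per line is described in the question How do Darknet/YOLO annotations work? below.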

3 cars

This becomes much more challenging when trying to detect things like clouds, smoke, fire, or rain: the goal is not to cover the image with many small annotations to achieve 100% pixel coverage. Instead, you want to annotate each distinct object you'd like the neural network to find.

Image annotated incorrectly:

incorrect way to annotate images

Same image annotated correctly:

correct way to annotate images

Additional markup comments and techniques are discussed on DarkMark's "markup" page.

See also: How many images?

How do Darknet/YOLO annotations work?

Darknet/YOLO annotations are saved to text files, one per image. Each line is a bounding box within the image. The annotation coordinates are normalized, and look like this:

0 0.1279690000 0.2870280000 0.0390805000 0.0500511000
1 0.2132137000 0.2787434000 0.0685726000 0.0825132000
0 0.5547320250 0.2944417333 0.0305359500 0.0444498667
9 0.6320830000 0.2855560000 0.0625000000 0.0777778000

The five fields are:

  1. the zero-based class index
  2. the center X coordinate
  3. the center Y coordinate
  4. the width
  5. the height

Here is an example to demonstrate. Say we want to annotate the "5" mailbox digit in this image:

Knowing the center of the "5" is located at coordinate (240, 99.5) and the image dimensions are 560x420, the values are normalized like this:

The classes for this neural network are the digits from "0" to "9", so the first value at the start of the line would be "5". The annotation in the .txt file would look like this:

5 0.428571429 0.236904762 0.078571429 0.097619048
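The normalization can be checked with a few lines of Python. The center coordinate and image dimensions are from the example above; the 44x41 pixel box size is inferred from the normalized width and height shown:

```python
# Normalize a Darknet/YOLO annotation: every value is a fraction of the image size.
image_w, image_h = 560, 420     # image dimensions from the example
center_x, center_y = 240, 99.5  # center of the "5" digit, in pixels
box_w, box_h = 44, 41           # box size in pixels (inferred from the values above)

x = center_x / image_w  # center X, normalized
y = center_y / image_h  # center Y, normalized
w = box_w / image_w     # width, normalized
h = box_h / image_h     # height, normalized

# Class index 5, then the four normalized values:
print(f"5 {x:.9f} {y:.9f} {w:.9f} {h:.9f}")
# prints: 5 0.428571429 0.236904762 0.078571429 0.097619048
```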

How do I turn off data augmentation?

If you are using DarkMark, then set all data augmentation options to zero or turn them off.

If you are editing your configuration file by hand, verify these settings in the [net] section:

saturation=0
exposure=0
hue=0
cutmix=0
flip=0
mixup=0
mosaic=0

How important is image rotation as a data augmentation technique?

It depends on the type of image. Some things don't make much sense rotated (e.g., dashcam or highway cam images). But the impact of rotated images needs to be considered. For example, here is a network that is really good at detecting animals:

puppy upside down puppy

With 100% certainty, that is a very cute dog. But when the exact same image is rotated 180 degrees, all of a sudden the neural network thinks this is more likely to be a cat than a dog.

See also: Data Augmentation - Rotation

Where can I get more help?

Come see us on the Darknet/YOLO Discord!

Last modified: 2021-03-25
Stéphane Charette, stephanecharette@gmail.com