I've used darknet quite a bit over the last few months. I wouldn't recommend it for a serious project unless you happen to need an extremely fast off-the-shelf object detector. The other exception is if you need a fast convnet running on a Pi - Tiny Yolo will do about 1.2 FPS with an optimised fork (using nnpack). Other than that, you could just buy a Neural Compute Stick. Yolo itself is pretty great - it's very fast and accurate enough for a lot of things.
The original repo isn't really updated, and while AlexeyAB's fork is much improved, it's still a pain to use.
- If you make mistakes, things fail silently. This is by far the biggest problem. Training and testing are hard to get right because it's so difficult to figure out where exactly you've messed up.
- Support for image formats is arbitrary. Although you can compile with OpenCV, there are internal glob functions that simply ignore certain image types (I had to recompile with TIFF support, for example).
- Bounding boxes are stored in an awkward format that is easy to get wrong: each box is referenced to its centre and stored as fractions of the image dimensions (see the conversion sketch after this list).
- Logging is very basic. Alexey added a loss graph, but that's about it. If you restart training from a checkpoint, you only get a loss curve from where you restarted.
- Retraining on your own data can seem like dark magic. There's a lot of "copy this config file and edit these numbers", and if you get it wrong you've wasted a day of training (a sketch of the typical edits is at the end of this comment).
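On the bounding-box point: as I understand the label format, each line of the label .txt is "class x_center y_center width height", with x and width normalised by the image width and y and height by the image height. A minimal conversion sketch in plain Python (nothing darknet-specific, just illustrating the convention):

    # Convert a darknet/YOLO label (centre + size, normalised to [0, 1])
    # into pixel corner coordinates.
    def yolo_to_pixels(cx, cy, w, h, img_w, img_h):
        x_min = (cx - w / 2.0) * img_w
        y_min = (cy - h / 2.0) * img_h
        x_max = (cx + w / 2.0) * img_w
        y_max = (cy + h / 2.0) * img_h
        return x_min, y_min, x_max, y_max

    # And the reverse, for writing your own training labels.
    def pixels_to_yolo(x_min, y_min, x_max, y_max, img_w, img_h):
        cx = (x_min + x_max) / 2.0 / img_w
        cy = (y_min + y_max) / 2.0 / img_h
        w = (x_max - x_min) / img_w
        h = (y_max - y_min) / img_h
        return cx, cy, w, h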
If you need to use Yolo, I'd recommend looking at reimplementations in more mature frameworks like pytorch (e.g. https://eavise.gitlab.io/lightnet/)
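For anyone who does stick with darknet for retraining: the "edit these numbers" step for a YOLOv3-style cfg usually boils down to something like the following (a sketch based on AlexeyAB's README, assuming the usual 3 anchor masks per [yolo] layer, so filters = (classes + 5) * 3 - adjust for your own setup):

    # The [convolutional] layer immediately before each [yolo] layer:
    # filters must be (classes + 5) * 3, e.g. (2 + 5) * 3 = 21 for 2 classes.
    [convolutional]
    filters=21

    # Every [yolo] layer needs the class count set:
    [yolo]
    classes=2

You also point darknet at your train/validation lists and class names via a .data file, and getting one of those paths wrong is exactly the kind of mistake that fails silently (see the first point above).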
"THIS SOFTWARE LICENSE IS PROVIDED "ALL CAPS" SO THAT YOU KNOW IT IS SUPER
SERIOUS AND YOU DON'T MESS AROUND WITH COPYRIGHT LAW BECAUSE YOU WILL GET IN
TROUBLE"
FYI, this is the same person (people?) who came up with the YOLO object detection classifier. I haven't used Darknet before, but Tiny Darknet seems very interesting and I might use it in the future for my small projects.
It would be great if these AI/neural-network frameworks targeted OpenCL so they would be platform-agnostic. AMD, Intel, and Arm make some very cost-effective GPUs that run everywhere.
Not very useful, but since it's his own project he can do whatever he likes. It looks like he's having fun, which is great. Except his beehive died. That kinda sucks.
That title is very non-descriptive or, at worst, misleading. I thought it would be about an Internet overlay network or something. The repository's description is "Convolutional Neural Networks", so perhaps the title could be "Pjreddie/darknet: Convolutional Neural Networks"? (Not sure if pjreddie is supposed to be well-known; there is probably a reason OP added it to the title.)