
Darknet: C and CUDA open source neural network framework - alphydan
http://pjreddie.com/darknet/
======
jimduk
I use this a bit - the You Only Look Once (YOLO) implementation is good & fast
(it should be - Redmon wrote the paper
[http://arxiv.org/abs/1506.02640](http://arxiv.org/abs/1506.02640)) and it's
all C. I don't think it's competing to overtake Torch/Caffe/TensorFlow as
'the' framework. What I find it good for is reading the source code and
trying to understand how something works in code (often not easy even after
reading the papers) - easier than with the big frameworks. It may also be a good
place to start for simpler target platforms, e.g. embedded. Kudos to pjreddie.

~~~
nxzero
TensorFlow is already designed to work in embedded systems.

~~~
krona
Given the current memory utilization of TF during training, I doubt it's ready
for embedded systems.

~~~
nxzero
TensorFlow is designed to be trained on distributed systems but deployed on
embedded systems; in fact, to me, that is currently TensorFlow's single
greatest advantage.

------
imaginenore
Bad name. Darknet already has a meaning.

[https://en.m.wikipedia.org/wiki/Darknet](https://en.m.wikipedia.org/wiki/Darknet)

And why CUDA, and not OpenCL?

~~~
yeukhon
_sigh_ people just can't think of a better brand name. Agreed.

My bold guess is that it's because more training machines run on Nvidia GPUs
than on AMD's.

~~~
slizard
> My bold guess is that it's because more training machines run on Nvidia GPUs
> than on AMD's.

Likely not so bold. AMD GPUs are good, but quite a bit more painful to program
than NVIDIA's with CUDA (OpenCL on NVIDIA is more or less useless).

------
T-A
Don't get me wrong, I do appreciate the effort that went into this and fully
intend to check it out sooner or later, but lately the appearance of new deep
learning frameworks is beginning to trigger flashbacks to this:
[http://notinventedhe.re/on/2015-5-19](http://notinventedhe.re/on/2015-5-19)

~~~
zump
For some people, the only way to learn is to implement.

~~~
eggy
I agree. I always recreate or implement something to fully understand it. OT,
but his resume rocks! Very funny...

------
Tim61
This is cool. The documentation is very entertaining, although not something
you'd show to your boss. Looks like it implements a lot more than just neural
networks.

Shameless plug: For a minimal neural network implementation in ANSI C, check
out: [https://github.com/codeplea/genann](https://github.com/codeplea/genann)
Sometimes lack of features is a feature.

------
orblivion
One doesn't see web design like this much these days. I like it.

~~~
lithander
Also the résumé that's linked there. ;)

~~~
orblivion
I wonder how many jobs he got with that.

------
yarrel
Please, please, please stop basing free software projects on proprietary
libraries like CUDA.

~~~
pizza
What should I learn instead, if I have an nVidia GPU?

~~~
joe_the_user
OpenCL targets NVIDIA GPUs too, at least according to the documentation.

The main problem is that OpenCL seems an order of magnitude more complex than
CUDA. To be honest, CUDA seems semi-portable just because (according to the
documentation and tutorials I've read) you do little more than allocate
memory, tag functions, and write loops, and the compiler figures out the rest.
OpenCL seems to demand everything be specified in _gruesome_ detail, whether in
the C or the C++ version. Also, the latest PDF pamphlet from the Khronos Group
on OpenCL 1.2 just says they're "exploring" an open source implementation of
the spec, which doesn't encourage one to imagine the spec as available.

Edit: Researching this, it seems the CUDA compiler has actually been
integrated into clang/llvm. An open spec with an open compiler seems as open as
one can get with software - it only targets NVIDIA, but by that logic you could
complain Linux was proprietary when it originally only targeted Intel.

[https://research.nvidia.com/news/nvidia-contributes-cuda-compiler-open-source-community](https://research.nvidia.com/news/nvidia-contributes-cuda-compiler-open-source-community)
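For concreteness, here's roughly what "allocate memory, tag functions, write loops" means in practice - a minimal, hypothetical scale-a-vector sketch (nothing to do with Darknet's own kernels):

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

/* "Tag" a function: __global__ marks it as a GPU kernel. */
__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;  /* the per-element "loop body" */
}

int main(void) {
    const int n = 1024;
    float h[1024];
    for (int i = 0; i < n; ++i) h[i] = (float)i;

    float *d;
    cudaMalloc(&d, n * sizeof(float));                    /* allocate */
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);          /* launch   */
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d);

    printf("%f\n", h[3]);
    return 0;
}
```

The equivalent OpenCL host code additionally needs explicit platform, device, context, command-queue, program-build, and kernel-argument plumbing before the first kernel can run, which is the extra ceremony being complained about here.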

~~~
slizard
> The main problem is OpenCL seems an order of magnitude more complex than
> CUDA.

Do you have any experience to back that up? I've worked with both, and I know
there are areas where your "order of magnitude" (which, BTW, I'd like to see
quantified on paper) does hold up, e.g. dev tools, but in general your claim
is simply false.

> To be honest, CUDA seems semi-portable just because (according to the
> documentation and tutorials I've read) you do little more than allocate
> memory, tag functions, and write loops, and the compiler figures out the rest.

Nonsense; you're likely confusing OpenACC with CUDA. The CUDA runtime API is
in many cases considerably simpler than OpenCL, but it requires a special
compiler, and frankly it often screams rushed design/implementation. In
contrast, the CUDA driver API is quite similar to OpenCL.

...and BTW OpenACC is "open" only on paper; in fact it has been a very divisive
move that has created conflicts, polarized the community, and undermined
collaboration between multiple parties (in particular with the OpenMP
proponents).

> Also, the latest PDF pamphlet from the Khronos Group on OpenCL 1.2 just says
> they're "exploring" an open source implementation of the spec, which doesn't
> encourage one to imagine the spec as available.

The latest spec is 2.0, FYI. Can you explain to me how one's imagination goes
so far astray as to believe that, just because there is no reference
implementation, the spec is "not available"? Don't get me started on the many
standards that exist without an official reference implementation.

> Edit: Researching this, it seems the CUDA compiler has actually been
> integrated into clang/llvm. An open spec with an open compiler seems as open
> as one can get with software - it only targets NVIDIA, but by that logic you
> could complain Linux was proprietary when it originally only targeted Intel.

BS. Show me the "open spec of CUDA". Even if one did exist, good luck trying
to influence its design if you're not a big oil company, Google, Audi, or GM.
Also, try using the CUDA name without having to pay a ton of money.
And you seem to forget that the "open compiler" does not actually generate the
device byte-code (that's done by the proprietary JIT compiler), and even if it
did, you would still need NVIDIA's proprietary driver to run it.

------
billylindeman
Been researching the YOLO network for a while now. It's a pretty kickass setup.
Hoping to port it to TF or Torch at some point.

~~~
jimduk
Haven't used it, but this might help - mentioned on the Darknet discussion
group - a YOLO port to TensorFlow, predictions only:
[https://github.com/gliese581gg/YOLO_tensorflow](https://github.com/gliese581gg/YOLO_tensorflow)

------
IshKebab
Another one?

------
teekert
Hmm, is this really nice? It doesn't look like something I'd just recommend to
colleagues, what with the religious symbols and names like Darknet, Yolo,
Nightmare, and black magic. Are they trying to stay away from the corporate
market?

~~~
intrasight
I'm sure if they get some traction, they will rebrand.

~~~
pavpanchekha
I give you 10:1 odds they don't.

~~~
pizza
Well, the project solves a different problem than its name suggests...

