Tiny-dnn – A C++11 implementation of deep learning (github.com/tiny-dnn)
143 points by chang2301 on Nov 30, 2016 | 49 comments



If you're interested in C++ frameworks that take a similar approach (e.g. a comprehensive C++ API, statically defined networks, etc.) and you aren't afraid of metaprogramming, then I'd also take a look at the dlib implementation: http://blog.dlib.net/2016/06/a-clean-c11-deep-learning-api.h...
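To give a flavour of the metaprogramming involved: a dlib network is a single statically defined type built from nested layer templates. A rough sketch along the lines of the LeNet-style network in dlib's intro example (the layer sizes here are illustrative, not a tuned model):

```cpp
#include <dlib/dnn.h>

using namespace dlib;

// The whole network is one type: layers nest from the input (innermost)
// out to the loss layer (outermost), so the architecture is fixed at
// compile time and fully checked by the compiler.
using net_type =
    loss_multiclass_log<
        fc<10,
        relu<fc<84,
        max_pool<2, 2, 2, 2,
        relu<con<6, 5, 5, 1, 1,      // 6 filters of size 5x5, stride 1x1
        input<matrix<unsigned char>>>>>>>>>;

int main() {
    net_type net;
    // Training goes through dlib::dnn_trainer<net_type>;
    // inference is simply net(image).
}
```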


I was shocked to read in that article that MS Visual Studio still does not support the full C++11 spec, which is now over 5 years old. What? I don't use MS products at all, but this really surprises me. Why are they so far behind?


What would that be? Visual Studio 2015 seems fully covered except for the C99 preprocessor, if this table covers all C++11 features.

https://msdn.microsoft.com/en-us/library/hh567368.aspx

Why have they been behind? Wild guess: Microsoft thought that C#/.NET would be the future for everything and scaled back on C++, but with Windows 10 they realized that the world still consists largely of C++, so now it is an equal member again alongside .NET and HTML5/JavaScript, i.e. in the Universal Windows Platform.


> Wild guess: Microsoft thought that C#/.NET would be the future for everything and scaled back on C++, but with Windows 10 they realized that the world still consists largely of C++, so now it is an equal member again alongside .NET and HTML5/JavaScript, i.e. in the Universal Windows Platform.

Actually it is a bit more complex than that.

.NET used to belong to DevTools and C++ to WindowsDev.

WindowsDev lost the political wars to DevTools when MS went full .NET, but after the technical and political battles that caused Longhorn's failure, WindowsDev regained ground.

Hence, starting with Vista, the "Going Native" motto was born, and COM gained ground as the way to expose more OS APIs to user space.

Alongside this "Going Native" wave, Singularity and Midori meant bringing those efforts back to .NET, thus creating .NET Native.

UWP's original design, WinRT, can actually be traced back to .NET before the CLR was created, when they were designing the COM 2.0 Runtime as the future way to use VB and C++ on Windows.

Also, before you rejoice too much about C++'s role, check how many C++-related talks there are at Build, Ignite and Connect().

C++ is seen as the official systems programming language for kernel programming, drivers, games and GPGPU. For everything else, the documentation or conference sessions tend to focus on .NET languages.


> C++ is seen as the official systems programming language for kernel programming, drivers

Kernel and driver development is still very much C.


Probably. I am inferring from Herb's blog entries and the addition of kernel-mode support to VC++ on Windows 8.

And the addition of UMDF with COM APIs.


I thought Windows did away with C completely and only supports C++ now?


The official statement from Herb's blog pointed in that direction.

https://herbsutter.com/2012/05/03/reader-qa-what-about-vc-an...

Modern C support has only been added to the extent required by ANSI C++, and by integrating the Clang frontend with the VC++ backend (C2).

With C2, Microsoft is doing something similar to LLVM: a shared backend for all their languages (.NET Native, VC++, Clang frontend).


To further complicate matters, enter C++/CLI, the garbage-collected version of C++ present in Visual Studio 2005 through 2015.
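For anyone who never touched it, C++/CLI grafts managed handles onto C++; a minimal sketch (this is C++/CLI, not standard C++, and needs the /clr switch):

```cpp
// ref class = a type that lives on the managed (garbage-collected) heap.
ref class Counter {
public:
    Counter() : value_(0) {}
    void Increment() { ++value_; }
    int Value() { return value_; }
private:
    int value_;
};

int main() {
    Counter^ c = gcnew Counter();  // ^ is a GC handle; gcnew allocates managed memory
    c->Increment();
    System::Console::WriteLine(c->Value());
    // No delete: the garbage collector reclaims the object.
}
```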


And Managed C++, the first version of it.

And none of those variants is as good as C++ Builder in terms of RAD and OO library design experience.


Good summary, but just to be clear, I never rejoice at the progress of C++. I hope it dies a horrible death in the burning pits of awful programming languages.


VS 2015 doesn't (fully) support Expression SFINAE. Looks like VS 2017 will: https://blogs.msdn.microsoft.com/vcblog/2016/11/16/sfinae-up...
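For context, expression SFINAE means an overload is silently discarded when an expression inside its declaration (typically a decltype in a trailing return type) turns out to be ill-formed. A minimal detection-idiom sketch (the serialize() member here is made up for illustration):

```cpp
#include <type_traits>
#include <utility>

// Chosen only when t.serialize() is a well-formed expression; the
// decltype in the trailing return type exercises expression SFINAE.
template <typename T>
auto has_serialize(int) -> decltype(std::declval<T>().serialize(), std::true_type{});

template <typename T>
std::false_type has_serialize(...);  // fallback when the expression is ill-formed

struct A { void serialize() {} };
struct B {};

static_assert(decltype(has_serialize<A>(0))::value, "A has serialize()");
static_assert(!decltype(has_serialize<B>(0))::value, "B does not");
```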


But in 2017 there will be another significant new standard, C++17. I wonder whether VS won't support it until 2022, if they remain six years behind on new language standards.


People like to complain about VS, but it is quite good as far as commercial C++ compilers are concerned.

http://en.cppreference.com/w/cpp/compiler_support

And the C++ compiler support page doesn't provide information about the likes of TI and similar embedded OEMs; otherwise that reddish colour would cover even more.


In the embedded world you can consider yourself lucky if you get halfway working C++98 compliance.


VS2017 is already quite far along in supporting C++14/C++17 features, except that expression SFINAE and extended constexpr (sketched below) were not viable to implement on the original (dated) compiler frontend, and therefore the past years have been spent rewriting large portions of the compiler.

The VC team blog provides fairly decent information on the reasoning and progress behind this.
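For reference, extended constexpr is the C++14 relaxation that lets a compile-time function contain loops, branches and local variables, which C++11 forbade:

```cpp
// In C++11 a constexpr function body was essentially a single return
// statement; C++14 "extended constexpr" allows loops and mutation.
constexpr int factorial(int n) {
    int result = 1;
    for (int i = 2; i <= n; ++i)
        result *= i;
    return result;
}

static_assert(factorial(5) == 120, "evaluated entirely at compile time");
```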


C++17 won't be that significant. All of the major features have been postponed to later revisions.


I agree with your general point, but frankly the only feature of C++17 I'm interested in is await/resume and that's already supported in VS 2015.


Await/resume won't be in C++17, as it is very contentious. It will first be released as a Technical Specification.
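For reference, the syntax as implemented experimentally in MSVC (behind the /await switch) looks roughly like this; a sketch, and the header and names may shift as the TS evolves:

```cpp
#include <cstdio>
#include <experimental/generator>

// co_yield suspends the coroutine and hands one value to the consumer;
// execution resumes from the same spot on the next iteration.
std::experimental::generator<int> counter(int limit) {
    for (int i = 0; i < limit; ++i)
        co_yield i;
}

int main() {
    for (int v : counter(5))
        std::printf("%d ", v);  // prints: 0 1 2 3 4
}
```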


Yes, you are right. And the table I linked to actually said No on Expression SFINAE for Visual Studio 2015 if you read it carefully, which I did not.


I believe that VS still doesn't do two-phase lookup. Not sure whether it will be fixed in 2017.
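The classic symptom: a conforming compiler must resolve non-dependent names in the first phase, at the template's definition. A minimal illustration:

```cpp
template <typename T>
void g(T t) {
    // h() is a non-dependent name, so phase one of two-phase lookup
    // requires it to be declared before this point; GCC and Clang
    // reject the template right here, while old MSVC defers all
    // lookup to instantiation time and accepts it.
    h(1);
}

void h(int) {}

int main() {
    g(0);  // by instantiation time h exists, so a non-two-phase
           // compiler resolves the call without complaint
}
```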


When I looked for a small ANN library with few external dependencies that could be statically linked, I settled on FANN [0] (a minimal usage sketch below).

It worked reasonably well and solved my problem as well as I hoped it would. It is rather limited in features though: no training on GPU, single-threaded by design, etc.

tiny-dnn appears to offer many more choices regarding network architecture and parallelization options. I would definitely have tried tiny-dnn first if I had known about it.

[0]: http://leenissen.dk/fann/wp/
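For reference, FANN's C API is about as small as it gets; a sketch along the lines of its XOR example (the data file name is assumed):

```cpp
#include <fann.h>

int main() {
    // 3 layers: 2 inputs, 3 hidden neurons, 1 output.
    struct fann *ann = fann_create_standard(3, 2, 3, 1);

    fann_set_activation_function_hidden(ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_activation_function_output(ann, FANN_SIGMOID_SYMMETRIC);

    // Train from a plain-text data file (format documented on the FANN
    // site): max 500000 epochs, report every 1000, stop at MSE 0.001.
    fann_train_on_file(ann, "xor.data", 500000, 1000, 0.001f);

    fann_type input[2] = {-1.0f, 1.0f};
    fann_type *output = fann_run(ann, input);  // forward pass on one sample

    fann_destroy(ann);
}
```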


FANN doesn't do DCNNs, arguably the most important right now.


> arguably the most important right now.

But only for very specific uses, right? DCNNs excel at image and video, but are they also common for other tasks?


Surprisingly (to me at least), convolutional neural networks (CNNs) are also useful for natural language processing. (I am using CNNs for NLP at work.)


CNNs in general excel at inferring/decoding "context" from the vectors, so it makes sense to use them for language. But yes, there are cases where they don't make sense or just add complexity.


They're very useful for any kind of signal that can be analyzed with filters.


FANN is kind of a dead project though. No ReLU support, which I submitted a PR for, but the maintainer has been unresponsive for a while now.


"98.8% accuracy on MNIST in 13 minutes training" - that's really slow isn't it? I can probably write vanilla JS or Python code that does that in 10 lines or so and would finish in around 5 seconds.

Why is this taking so long?


> I can probably write vanilla JS or Python code that does that in 10 lines or so and would finish in around 5 seconds.

Really? There are certainly frameworks that you can use to achieve this performance in that amount of code, but I would be really interested to see that in vanilla JS/Python.


Hey, sure - here's 9 lines of code that implement a perceptron. I get ~95% precision after a second with 400 samples.

```js
// Online perceptron: label is -1 or +1, content is the feature vector.
// dotSign computes sign(w.x + 1); the constant 1 acts as a fixed bias.
const dotSign = (v1, v2) =>
  v1.reduce((sum, x, i) => sum + x * v2[i], 1) > 0 ? 1 : -1;

module.exports = (data, weights = Array(data[0].content.length).fill(0)) => {
  for (const { label, content } of data) {
    // delta is 0 on a correct prediction, +/-1 on a mistake.
    const delta = (label - dotSign(content, weights)) / 2;
    weights = weights.map((x, i) => x + delta * content[i]);
  }
  return { perceive: vector => dotSign(vector, weights), weights };
};
```

I can upload an Electron app that does this with MNIST if anyone is interested.


> I get ~95% of precision after a second with 400 samples.

MNIST has 60k training samples and 10k test samples. Are you using only 400 of them? Is 95% the accuracy on the test samples or on the same set of training samples? I believe when we talk about MNIST accuracy, we always refer to the accuracy on the 10k test samples.


How long does it take to get from 95% to 98.8%, though?


This. You can also get near 98% accuracy with vw and one quadratic interaction over the pixel space. It takes seconds. But that last 0.8%? Forget about ever getting there, even with infinite training time.


About 10 seconds with 10 times the samples does it.


Is this not a binary perceptron? How do you plan to classify all 10 digits with this?


I cheated here (this _is_ a binary perceptron), but the conversion is super easy with one-vs-rest or one-vs-one (training all pairs and then doing pairwise comparisons until we find the one that matches); that would still take ~20 seconds to train.



Sorry if the question is not relevant, but would it be difficult to use this project to implement something like DeepDream (https://github.com/google/deepdream)?


This should be possible and fairly easy to do, I guess - it's just a forward and backward pass through the net with your favorite model, collecting the loss for all octaves. You will have to port a few image utility functions (like roll, zoom, etc.) if they're not available. I ported the basic Deep Dream example to JavaScript [1] and it was not that difficult.

When looking through the code I saw that backprop for LRN is not implemented yet, so you cannot pick a model that uses LRN (or you have to implement it).

[1] https://github.com/chaosmail/caffejs/blob/master/docs/assets...


Thank you!


Contributor here. You're welcome to ask questions. We are working on GPU support now.


This looks interesting. I cloned the repository and am checking it out. The examples all use binary image training data supplied with the repo. It would be good to document data preparation better (sorry in advance if I just missed seeing that); a sketch of what I pieced together is below.

20+ years ago, I made my living on C++, but dropped it. Looking at the example programs using C++11 features, I am motivated to get up to speed on C++11. I've put writing an NLP example for tiny-dnn and sending a pull request on my todo list.
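For what it's worth, the MNIST examples seem to go through parser helpers in the headers; a sketch of how I understand the loading works (from a quick read, so exact signatures may differ):

```cpp
#include <vector>
#include <tiny_dnn/tiny_dnn.h>

int main() {
    using namespace tiny_dnn;

    std::vector<label_t> labels;
    std::vector<vec_t>   images;

    // Read the raw MNIST idx files; pixels are scaled to [-1, 1] and the
    // 28x28 images padded to 32x32 (2 pixels on each border).
    parse_mnist_labels("train-labels.idx1-ubyte", &labels);
    parse_mnist_images("train-images.idx3-ubyte", &images, -1.0, 1.0, 2, 2);

    // From here the vectors feed straight into the network's train() call.
}
```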


Can anyone recommend a book, long article or tutorial on deep learning? I'm not in the field, but I'm interested in what particular advances have made deep learning suddenly so much more attractive than regular neural networks were back in the 00s.


Check out this blog post about deep learning for computer vision applications [1]; it describes very well the historical development and the current advances. In short, not too much has changed from LeCun's ConvNets in the 80s. Disclaimer: I wrote this article.

[1] http://chaosmail.github.io/deeplearning/2016/10/22/intro-to-...


Geoffrey Hinton (one of the pioneers of deep learning) offers a machine learning course on Coursera: https://www.coursera.org/learn/neural-networks

And Udacity offers a course on deep learning: https://www.udacity.com/course/deep-learning--ud730


When I saw "tiny-dnn" I thought: Huh? Tiny DotNetNuke?? No such thing exists. DNN is too much bloat!


Wow, HN has no sense of humour today.


HN has lots of humour and an appreciation for clever wit and sarcasm. I'm afraid that your effort, while noble, failed to reach this bar.



