I think some of the raving that's going on is unwarranted. This is a very nice, very well put together library with a great landing page. It might eventually displace Torch and Theano as the standard toolkits for deep learning. It looks like it might offer performance / portability improvements. But it does not do anything fundamentally different from what has already been done for many years with Theano and Torch (which are standard toolkits for expressing computations, usually for building neural nets) and other libraries. It is not a game-changer or a spectacular moment in history as some people seem to believe.
This is "not a game-changer" in the same way map-reduce isn't a game-changer wrt for loops.
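The analogy can be made concrete in plain Python (a toy sketch, nothing framework-specific): the for loop pins down *how* the computation runs step by step, while map/reduce only states *what* is computed, leaving a runtime free to parallelize or distribute it.

```python
from functools import reduce

data = [1, 2, 3, 4, 5]

# Imperative: a for loop fixes the exact order of operations.
total_loop = 0
for x in data:
    total_loop += x * x

# Declarative: map/reduce expresses the same computation as a dataflow,
# which a framework could split across cores or machines.
total_mr = reduce(lambda a, b: a + b, map(lambda x: x * x, data))

assert total_loop == total_mr == 55
```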
Also check out TensorBoard, their visualization tool (animation halfway down the page):
Maybe it would work well enough in a service backend. But even there it would not scale that well. For example, it doesn't support multi-threading (running a theano.function from multiple threads at the same time).
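The usual workaround (a generic sketch, not anything from Theano's API) is to serialize calls to the non-thread-safe compiled function behind a lock. That keeps things correct, but every call now runs one at a time, so you get no real scaling out of extra threads:

```python
import threading

# Hypothetical stand-in for a compiled, non-thread-safe callable,
# e.g. the object returned by theano.function().
def compiled_fn(x):
    return x * 2

_lock = threading.Lock()

def thread_safe_call(x):
    # The lock serializes access: correct, but only one call
    # executes at any moment, so throughput doesn't improve.
    with _lock:
        return compiled_fn(x)

results = []
threads = [threading.Thread(target=lambda v=v: results.append(thread_safe_call(v)))
           for v in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert sorted(results) == [0, 2, 4, 6]
```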
Or perhaps you mean to say that Torch itself is difficult to learn because of design choices that were made in order to use Lua?
Seriously, there are three features in the Lua language which are not trivial: metatables, coroutines and _ENV. None of those are needed to use Torch.
It will take more time to learn Torch-specific APIs, but the same problem exists with the other ML frameworks.
An issue has been created to add TensorFlow to this, so it should show up shortly.
Personally, I feel I've been more exposed to Fabrice Bellard's work, which might not be true, but I first learned of Jeff Dean's existence yesterday.
"Jeff Dean once shifted a bit so hard, it ended up on another computer."
I'm dying for this stuff to be dumbed down enough where Joe WebUser can feed in arbitrary data in a csv or point an app at a data source and get some sort of meaningful results.
It truly seems like an area where once the barrier to entry is greatly reduced, the creativity of laymen will lead to some truly amazing executions.
I found this very approachable, and you can find the material and code from the talks on GitHub.
I'd suggest http://karpathy.github.io/2015/05/21/rnn-effectiveness/ is a good place to start.
The other option is using NVIDIA's DIGITS toolkit.
I'm dying for this stuff to be dumbed down enough[...]
There's no substitute for sweat. Have fun with the code they gave you and see where you end up!
I never said I couldn't or wouldn't put in the work to learn it. I'm saying that where it stands right now is still too advanced for someone with my background to pick up and play around with, short of sitting down to seriously study the underlying concepts, which are objectively dense subject matter that can require advanced math and CS backgrounds.
To be clear, I'm not advocating that everything should be dumbed down for the sake of it. My point was largely that when the barrier to creation gets low enough, more creative types that don't have the heavy technical backgrounds can pick it up and create things that more technical users may never have imagined.
Not everything is best served as remaining elusively complex for the layman.
Also, for the record, I will probably read up on some of this stuff because I find it interesting and enjoy learning. I just wish it was a step more accessible than it is today, even with this development.
The make command on GNU/Linux is an example of something that "dumbs down"/makes easy a quick start, as opposed to editing and configuring the Makefile yourself.
Similarly, yum/apt-get take this "dumb down" one step further.
Nothing wrong with removing friction.
In fact, there's an idea for a startup right here: remove friction from machine learning/NLP APIs.
That is why I responded to shostack in the first place. The response was specific to his question and I got plenty of downvotes on my karma. No worries there :)
I wasn't trying to bring you down. Maybe what you're looking for is a visual programming environment, where you can drag and drop functions, data, etc?
However, I am highly visual, and visualizing the impact on the results would be really helpful. I deal with a lot of analytics and data as part of my day-to-day managing digital media. I often find that I can easily spot trends just by glancing at data visualizations, and infer insights from them.
Further, being able to visualize the nature of the functions/data/etc. would also be very helpful. I tend to need to visualize something to fully grok it.
If you have any suggestions for a more visual take on machine learning that is beginner friendly, I'd love a link.
People have been asking about its fundamental differentiators. I'm not sure there are any. Theano and Torch already set a pretty high standard.
We know what good tools look like, and those tools exist even if they're getting incremental improvements.
Now it's just a matter of building really cool things with them.
He says "I don't have anything to announce" so technically not lying.
> Gradient based machine learning algorithms will benefit from TensorFlow's automatic differentiation capabilities. As a TensorFlow user, you define the computational architecture of your predictive model, combine that with your objective function, and just add data -- TensorFlow handles computing the derivatives for you.
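As a toy illustration of the idea, not TensorFlow's actual implementation, forward-mode automatic differentiation with dual numbers shows how a framework can derive exact gradients from the computation you define, just by overloading arithmetic:

```python
class Dual:
    """Dual number: a value plus its derivative, propagated through arithmetic."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def derivative(f, x):
    # Seed the input's derivative with 1.0 and run the user's computation;
    # the chain rule falls out of the overloaded operators.
    return f(Dual(x, 1.0)).dot

# d/dx (3x^2 + 2x) = 6x + 2, so at x = 4 the derivative is 26.
assert derivative(lambda x: 3 * x * x + 2 * x, 4.0) == 26.0
```

TensorFlow builds on the same principle, but over a symbolic dataflow graph (typically with reverse-mode differentiation, which is far more efficient for the many-inputs/one-loss shape of neural net training).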
Interesting, it kind of looks like a machine-learning-focused version of NASA's OpenMDAO (also a graph-based analysis and optimization framework with derivatives, but for engineering design).
OT but how much does a super engineer like him get paid at Google?
His salary will probably be in the six figures, but he'll be a millionaire many times over. He joined Google in 1999 (the IPO was in 2004), so his stock will have made him a very rich man.
Executive compensation is a very odd area; as I said, it pretty much depends on how much money he wants.
a) Stuff similar to this has been available for ages, and yet there are no (good) open source voice recognition packages.
b) It requires absolute mountains of training data which we don't have.
c) It requires designing a suitable network, which I'm not sure if we have, but I would doubt it.
d) It requires training a network on those mountains of training data using an immense computing cluster, which requires money that we don't have.
Don't hold your breath.
Case in point, ever wonder why those captchas include street addresses or 'pick the shape with a hole in it?' Spoiler: you're building training data and validating training data.
How else can we silently retrieve training data?