
WordLens was an awesome app, and it's good to see that Google is continuing its development.

The new fad for using the 'deep' learning buzzword annoys me, though. It seems so meaningless. What makes one kind of neural net 'deep' and are all the other ones suddenly 'shallow'?




> What makes one kind of neural net 'deep' and are all the other ones suddenly 'shallow'?

If this is a serious question, then googling "what is a deep neural network" would take you to any number of explanations. But to summarize very briefly, it's not a buzzword; it's a technical term referring to a network with multiple nonlinear layers that are chained together in sequence. Deep networks have been talked about for as long as neural networks have been a research subject, but it's only in the last few years that the mathematical techniques and computational power have been available to do really interesting things with them.
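To make "multiple nonlinear layers chained in sequence" concrete, here's a minimal numpy sketch - the layer sizes are made up purely for illustration, not any particular production architecture:

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    # Made-up layer sizes, purely for illustration.
    sizes = [784, 256, 128, 10]
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((m, n)) * 0.01
               for m, n in zip(sizes[:-1], sizes[1:])]

    def forward(x):
        # "Deep" just means several nonlinear layers chained in sequence:
        # each layer's output feeds the next layer's input.
        for w in weights[:-1]:
            x = relu(x @ w)
        return x @ weights[-1]  # final linear layer

    print(forward(rng.standard_normal(784)).shape)  # (10,)

Each extra relu(x @ w) step is one more layer of depth; drop down to a single hidden layer and you have the classic "shallow" net.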

The "fad" (as you call it) is not mainly because the word "deep" sounds cool, but because companies like Google have been seeing breakthrough results that are being used in production as we speak. For example:

http://papers.nips.cc/paper/4687-large-scale-distributed-dee...

http://static.googleusercontent.com/media/research.google.co...

http://static.googleusercontent.com/media/research.google.co...


I honestly didn't realise that it had any definition - I see now that calling it a 'fad' is unfair. However, the boundary between deep learning and (representational) machine learning still seems murky.


Considering the very significant accuracy gains deep learning has achieved over previous approaches (and across a number of fields), it's certainly not a simple fad. Having worked in computer vision for a good 8+ years, I can say deep learning is basically amazing.

Deep learning is a form of representation/feature learning.


Machine learning proper encompasses a swath of applied statistical techniques, of which deep learning is only one. Machine learning could refer to linear regression, SVMs, hidden Markov models, dimensionality reduction, neural nets, or any number of other loosely related methods. Intro ML classes often don't even get to deep learning because there's so much more fundamental stuff to cover.
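To make that concrete, here's a sketch (assuming scikit-learn; the dataset and hyperparameters are arbitrary) showing how several of those loosely related methods, a small neural net included, sit behind the same fit/predict interface:

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)
    models = {
        "logistic regression": LogisticRegression(max_iter=2000),
        "SVM": SVC(),
        "small neural net": MLPClassifier(hidden_layer_sizes=(64, 64),
                                          max_iter=500),
    }
    for name, model in models.items():
        # Each estimator is just one technique among many.
        print(name, cross_val_score(model, X, y, cv=3).mean())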


So was Word Lens doing this before Google even bought them? Because Word Lens worked fine, locally on a phone, long before Google was doing its whole deep learning thing.


It's not entirely clear to me, but this sentence from the article:

> In the end, we were able to get our networks to give us significantly better results while running about as fast as our old system—great for translating what you see around you on the fly.

suggests that they previously were not using neural networks, or were using less powerful ones.


> What makes one kind of neural net 'deep' and are all the other ones suddenly 'shallow'?

Number of layers.

It's that simple.


To expand on this some more: for a long time, thanks to Cybenko's theorem[1], people just used 1 hidden layer in their neural networks (also because computing was sloowww..). So, your typical NN architecture was input_layer --> hidden_layer --> output_layer.

Eventually, people realized that you could improve performance by adding more hidden layers. So while Cybenko was theoretically correct, practically stacking a bunch of hidden layers made more sense. These network architectures with stacks of hidden layers were then labelled "deep" neural networks (see the sketch after the footnote).

[1] https://en.wikipedia.org/wiki/Universal_approximation_theore...
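Side by side, as a sketch in PyTorch (layer sizes arbitrary):

    import torch.nn as nn

    # The classic one-hidden-layer net Cybenko's theorem covers:
    # input_layer --> hidden_layer --> output_layer
    shallow = nn.Sequential(
        nn.Linear(784, 1024), nn.ReLU(),
        nn.Linear(1024, 10),
    )

    # A "deep" net: the same ingredients, with hidden layers stacked.
    deep = nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 10),
    )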


It is that simple, but the more complex story is that once the number of hidden layers exceeds two, training becomes difficult. Also, convnets for example "cheat" by making the connections between layers incomplete bipartite graphs (not every node in one layer is connected to every node in the next), usually chosen to match some physical property of the domain - for computer vision, the nearest-neighbor structure of pixels.
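A quick PyTorch sketch (sizes arbitrary) of how much that incomplete connectivity buys, comparing parameter counts for a fully connected layer vs a 3x3 convolution over the same 32x32 input:

    import torch.nn as nn

    def n_params(m):
        return sum(p.numel() for p in m.parameters())

    # Fully connected: every input pixel wired to every output unit.
    dense = nn.Linear(32 * 32, 32 * 32)

    # Convolutional: each output sees only a 3x3 neighborhood, and the
    # same 3x3 weights are reused at every position.
    conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)

    print(n_params(dense))  # 1049600
    print(n_params(conv))   # 10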


Use another deep learning network to supervise the training of your DLN. You can also use it to supervise itself. It's a simple idea invented about a decade ago (at least, that's when I first heard about it here in Ukraine).
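One concrete reading of this (I'm assuming something like what's now called knowledge distillation - a trained "teacher" net supplying soft targets for a student) as a rough PyTorch sketch, with toy nets and random data standing in for the real thing:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Toy teacher and student; assume the teacher is already trained.
    teacher = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
    student = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 5))

    opt = torch.optim.SGD(student.parameters(), lr=0.1)
    x = torch.randn(32, 20)  # a batch of (here, random) unlabeled inputs

    with torch.no_grad():
        soft_targets = F.softmax(teacher(x), dim=1)  # the teacher's "supervision"

    # One training step: push the student's output distribution
    # toward the teacher's.
    loss = F.kl_div(F.log_softmax(student(x), dim=1), soft_targets,
                    reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()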


Well, if all it cares about is looks...



