> Many of these criticisms generalize to 'first make it work, then make it better, then make it fast'. Being able to solve some of these problems at all is the kicker; whether or not it is efficient is at the moment not nearly as important as being able to solve them in principle.

There are deep (and not universally applicable) assumptions baked into that framing... It works only if the way you "make it work" doesn't fundamentally limit or handicap the later steps, or fail to yield any insight into how to take them.

Based on my understanding of the subject, I think much of the recent progress in deep learning is less of a breakthrough than it is commonly made out to be, and that there are fundamental conceptual reasons why the approach is limited. It is a bit like having achieved only the first step of "how to draw an owl": https://i.kym-cdn.com/photos/images/newsfeed/000/572/078/d6d...

But the jury is still out on that one, and I think there is room for reasonable people to differ. ¯\_(ツ)_/¯

In image classification problems (which cover a lot of ground), for instance, the progress is undeniable. But we are still a long way from extracting the essence of the problem and getting to the point where what biology does effortlessly can be done, on a similar power budget, by a combination of software and hardware.

Or take language models. If you think deep learning, or any of the AI of the last decade (no, not AGI, whatever that even means), is crap/useless, then you probably never tried Google Translate before the advent of these breakthroughs. It’s not perfect, but it’s undeniably useful as it is.
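
For anyone who wants to poke at this themselves: models of the same class are a few lines away these days. Here's a minimal sketch using the open-source Hugging Face transformers library and a public MarianMT checkpoint (my own choice of library and model, nothing the thread refers to), not Google Translate itself:

    # Minimal sketch: German-to-English neural machine translation with a
    # public MarianMT checkpoint (pip install transformers torch).
    # The model name below is an example choice, not from the thread.
    from transformers import MarianMTModel, MarianTokenizer

    model_name = "Helsinki-NLP/opus-mt-de-en"
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)

    # Tokenize a source sentence, generate a translation, decode it.
    batch = tokenizer(["Das ist nicht perfekt, aber nützlich."],
                      return_tensors="pt", padding=True)
    output_ids = model.generate(**batch)
    print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
    # -> something like ["It's not perfect, but useful."]

Not a production setup, obviously, but it makes the point: pre-trained NMT of roughly this kind is now commodity infrastructure.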



