
Yann LeCun knows his stuff, but he doesn't answer the question "What is the upper bound on the results you can get just by making a bigger neural net?" The most interesting thing about GPT-3 is that they didn't appear to find that limit: they could keep going. Even if the limit exists in principle, if it's 7 orders of magnitude away, we should seriously consider whether the system will be smarter than a human before it reaches that point.

The limit could be a factor of 2 beyond GPT-3! It could be something they've already reached, if it's close! But we don't know, and without those answers, finding the limit is going to end up being one of the most interesting technical projects in the world.
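To make the "7 orders of magnitude" reasoning concrete: the published scaling results fit loss as a power law in parameter count, L(N) = a * N^(-b), and the question is how far that curve extends. Here is a minimal sketch of fitting and extrapolating such a curve; the constants and data points are made up for illustration, not OpenAI's actual measurements.

```python
import math

# Hypothetical (parameter count, loss) points that follow a power law
# L(N) = a * N**(-b) exactly. The constants a=10.0, b=0.07 are invented
# for illustration only -- they are not GPT-3's actual scaling fit.
points = [(10.0 ** k, 10.0 * (10.0 ** k) ** -0.07) for k in range(6, 12)]

# Least-squares fit of log L = log a - b * log N in log-log space.
xs = [math.log(n) for n, _ in points]
ys = [math.log(loss) for _, loss in points]
count = len(points)
mx, my = sum(xs) / count, sum(ys) / count
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
fitted_b = -slope
fitted_a = math.exp(my - slope * mx)

# Extrapolate 7 orders of magnitude beyond the largest observed model.
n_future = points[-1][0] * 1e7
predicted = fitted_a * n_future ** -fitted_b
print(f"fitted b = {fitted_b:.3f}")
print(f"predicted loss at N = {n_future:.0e}: {predicted:.3f}")
```

The catch, of course, is the extrapolation step: a power law fit to six observed points says nothing about whether the curve bends somewhere in the next seven decades of scale, which is exactly the open question.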



