
Tesla autopilot is notoriously unsafe, and still extremely bad even at simple problems. Not sure how it could be called a "solved problem" - especially when you have the CEO of Waymo saying publicly that he doesn't believe we'll "ever" reach level 5 autonomous driving.

There have been massive improvements in automated driving, but if you want to talk about solved problems, parking assist is about as far as you can get.

Translation is much better, and is often understandable, but it is far from a solved problem.

Colorizing/repairing old photos also often introduces strange artifacts where none belong. Again, workable technology, not a solved problem.

Voice transcription is also decent, but far from a solved problem. You need only look at YouTube auto-generated captions to see both how far it has come and how many trivial errors it still has.

And regarding "generalizable intelligence" and arithmetic in GPT-3: the paper can't even definitively confirm that the examples it shows are absent from the training corpus (they note that some attempts to find them turned up nothing, but they stop short of saying they are certain those particular calculations never appear in it). They also make no attempt to inspect the model itself to check whether some sub-structure has simply encoded an addition table for 2-digit numbers.
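The kind of check the paper attempted can be sketched as a substring search for a sum in its common textual formattings. This is a hypothetical toy illustration, not the paper's actual method (they worked over a corpus of hundreds of gigabytes, and only a few formatting variants are covered here):

```python
def arithmetic_variants(a: int, b: int) -> list[str]:
    """A few ways '48 + 76 = 124' might be written in web text."""
    s = a + b
    return [f"{a} + {b} = {s}", f"{a}+{b}={s}", f"{a} plus {b} is {s}"]

def appears_in_corpus(a: int, b: int, corpus_lines) -> bool:
    """Naive substring scan: does any formatting of this exact sum occur?"""
    variants = arithmetic_variants(a, b)
    return any(v in line for line in corpus_lines for v in variants)

# Toy corpus: the first sum is present verbatim, the second is not.
corpus = ["we know that 48 + 76 = 124, obviously", "totally unrelated text"]
print(appears_in_corpus(48, 76, corpus))  # True
print(appears_in_corpus(17, 25, corpus))  # False
```

Even a scan like this only rules out verbatim matches; it says nothing about near-duplicates or about what the model has internally memorized, which is the commenter's second objection.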

Also, AGI will certainly require at least some attempts to get models to learn the rules of the real world - the myriad bits of knowledge we are born with that are not normally captured in any kind of text you might train your AI on (the idea of objects and object permanence, the intelligent-agent model of the world, the mechanical-interaction model of the world, etc.).




Autopilot is distinct from full self driving - I was talking about driver assist.

'Solved' is doing a lot of work here, and it's an unnecessarily high threshold that I'm willing to concede. I think we would be more likely to agree that, in a lot of consumer categories, things went from unusably bad or impossible to usably good but imperfect in the last five years, due to ML and deep learning approaches.

The more clearly 'solved' cases of previously open problems (Go, protein folding, facial recognition, etc.) are mostly not in the consumer space.

As far as the GPT-3 bit, I encourage others to read that excerpt: they explicitly state they searched for the problems they asked in the training data and excluded matches, so it's not memorization. The failures it makes are things like failing to carry the one; it certainly seems to be deducing the rules. It'll be interesting to see what happens in GPT-4 as they continue to scale it up.
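The "failing to carry the one" error mode can be made concrete with a toy procedure: add digit by digit but drop every carry. This is an illustration of the failure pattern, not a claim about GPT-3's internals:

```python
def add_without_carry(a: int, b: int) -> int:
    """Add two non-negative ints digit by digit, discarding every carry -
    a toy model of the 'forgot to carry the one' error pattern."""
    result, place = 0, 1
    while a > 0 or b > 0:
        digit = (a % 10 + b % 10) % 10  # keep only the units digit
        result += digit * place
        a, b, place = a // 10, b // 10, place * 10
    return result

print(add_without_carry(48, 37))  # 75, not 85: the carry into the tens is lost
print(add_without_carry(21, 34))  # 55: correct whenever no carry is needed
```

The interesting property is that this bug only surfaces on sums that require a carry, which matches the shape of error the commenter describes: mostly right answers, with a systematic slip on exactly the carrying step.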



