Hacker News

No it isn't, this is exactly what babies do.



No one knows what babies do. Anything else is speculation.


Babies don't know language. They know some words. Dogs, too, recognize some words and associate them with what they see. Have they discovered language as well?


Babies do a lot more. They have feelings, like pain and pleasure, and different varieties of them. They also have a lot of things hardcoded, and they have control over their bodies and their environment. For example, they quickly learn that crying helps them get what they want.


They do, and from it they learn many other things even before language, including some non-verbal expectation of what is likely to happen next, and that they have some ability to participate in the external world. Until recently, we have not seen anything that has gained some competence in language without having these precursors.


> including some non-verbal expectation of what is likely to happen next

Some of that isn't learned at all (other than through evolution). A newborn will flinch back from a growing circle on a screen.


Indeed - a half-billion years of training since the first neurons appeared - but here I'm thinking more of the sort of understanding which leads, for example, to reactions of surprise to incongruous outcomes.

https://news.mit.edu/2011/infant-cognition-0527


Ok, it’s a lot more like a Boltzmann brain in Plato’s cave learning what language is, with zero other external context of what reality is.


None of which has much to do with learning language.


Actually the feedback part of learning is important. There's a famous experiment with cats in baskets that demonstrated that.

But AI isn't an animal so the same constraints don't necessarily apply. I think you'd have to have a particularly anti-AI & pedantic bent to complain about calling this language discovery.


Feedback, yes. Feedback from interacting with the physical reality by means of physical body with flexible appendages? Useful in general, but neither necessary nor sufficient in case of learning language.

Feedback is fundamental to deep neural networks; it's how they're trained. And to be honest, all of the things 'astromaniak mentions can be simulated and made part of the training data set too. While the "full experience" may turn out to be necessary for building AGI, the success the field has had with LLMs indicates that it's not necessary for a model to learn language (or even to learn all languages).
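To make "feedback is how they're trained" concrete, here's a minimal sketch (mine, not the commenter's) of the feedback loop at the heart of neural-network training: a single weight is nudged by the error between prediction and target, in plain Python gradient descent.

```python
# Minimal sketch of training-by-feedback: one-weight gradient descent
# learning the function y = 2x from a handful of examples.

def train(steps=200, lr=0.1):
    w = 0.0  # single weight, starts uninformed
    data = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]
    for _ in range(steps):
        for x, target in data:
            pred = w * x              # forward pass: make a prediction
            error = pred - target     # compare against the expected output
            w -= lr * error * x       # feedback: adjust the weight to shrink the error
    return w

print(round(train(), 3))  # weight converges to ~2.0
```

Real networks do the same thing at scale: the loss gradient is the feedback signal, propagated back through millions of weights instead of one.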


A lot of a baby's language pickup is also based on what other people do in response to their attempts, linguistically and behaviourally. Read-only observation is obviously a big part of it, but it's not "exactly" the same.


Right. But that happens with ML models during training too - the model makes a prediction given a training example's input, which is evaluated against the expected output; rinse, repeat. A single example is very similar to doing something in response to a stimulus and observing the reaction. Here it's more a matter of predicting a reaction and getting feedback on accuracy, but that's part of our learning too - we remember things that surprise us and tune out those we can reliably predict.
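A sketch of the loop described above (my analogy, not actual framework code): the model predicts, the expected output supplies the feedback, and the size of the correction scales with how "surprising" the example was. Once predictions are reliable, the updates shrink toward zero - the model effectively tunes out what it can already predict.

```python
# Surprise-scaled learning: the correction applied per example is
# proportional to the prediction error ("surprise") on that example.

def training_step(w, x, target, lr=0.05):
    pred = w * x                  # model's prediction for this example
    surprise = target - pred      # large when the prediction is badly wrong
    return w + lr * surprise * x  # big surprise -> big correction

w = 0.0
for epoch in range(100):
    for x, t in [(1.0, 3.0), (2.0, 6.0)]:  # examples drawn from y = 3x
        w = training_step(w, x, t)

print(round(w, 2))  # weight settles near 3.0
```

Early on the surprise term is large and the weight moves a lot; by the end the predictions match the targets and each step barely changes anything, which is the "tune out what we can reliably predict" behavior in miniature.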



