Hacker News

The way I understand it, our language is an abstraction of our logic. We think in it and we communicate our logic with words. Literally, how do you reason inside your head? By forming sentences and arguing with yourself in words? At least that is how I do it. I can't do any deep reasoning without first translating it into sentences.

So an AI, after learning from so much text and forming an extremely dense network of connections between different words and phrases, can mimic something similar to reasoning. At its core, it is still just predicting the next word. But the scale of this "prediction" is so large that it begins to mirror our own "reasoning" process. Because in the end, when we apply logic, we are doing it through our own network of concept connections, which is reflected in our language.
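A toy sketch of what "predicting the next word" means mechanically. The words and scores here are made up for illustration (a real model learns billions of such numbers from data), but the basic step is the same: turn learned scores into probabilities and pick a likely continuation.

```python
import math

# Hypothetical learned scores for candidate next words after some context.
# In a real model these come from the trained network, not a hand-written dict.
scores = {"reasoning": 2.1, "guessing": 0.3, "flying": -1.5}

# Softmax turns raw scores into a probability distribution over next words.
total = sum(math.exp(s) for s in scores.values())
probs = {w: math.exp(s) / total for w, s in scores.items()}

# Greedy decoding: take the most probable word.
next_word = max(probs, key=probs.get)
print(next_word)  # "reasoning", since it has the highest score
```

Chaining this step over and over, each time feeding the chosen word back in as context, is all "generation" is; the apparent reasoning emerges from the quality of the learned scores.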




I guess the author did give a complete explanation when they mentioned arrays of floating-point numbers representing connections. I just thought there was more to it that was omitted. It seems like a relatively simple mechanism that simulates a fascinatingly complicated process when performed at scale. Thanks for a great explanation!
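To make "arrays of floating-point numbers representing connections" concrete: a minimal sketch with invented 3-dimensional word vectors (real embeddings have hundreds or thousands of dimensions and are learned, not hand-picked). Words whose arrays point in similar directions are strongly "connected".

```python
import math

# Hypothetical toy embeddings: each word is just an array of floats.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.85, 0.82, 0.15],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

related = cosine(vectors["king"], vectors["queen"])   # close to 1.0
unrelated = cosine(vectors["king"], vectors["apple"])  # noticeably smaller
print(related, unrelated)
```

Scale that idea up to enormous vectors and billions of learned connection weights, and you get the dense network of word relationships described above.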



