Hacker News

I stopped thinking that humans were smarter than machines when AlphaGo won game 3. Of course we still are in many ways, but I wouldn't make the unfounded claims that this article does--it sounds plausible, but it never explains how humans can be trained on bodies of work and then synthesize new ideas either. Current AI models have already made discoveries that had eluded humans for decades or longer. The difference is that we (falsely) believe we understand how the machine works, so it doesn't seem magical the way our own processes do. I don't know that anyone who's played Go and appreciates the depth of the game would bet against AI--all it needs is a feedback mechanism and a way to try things and get feedback. Now the only great unknown is when it can apply this loop to its own underlying software.
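The "try things and get feedback" loop the comment describes can be sketched as a toy propose-score-keep cycle (a minimal illustration, not AlphaGo's actual algorithm; the `feedback` function here is a hypothetical stand-in for a real reward signal):

```python
import random

def feedback(candidate):
    # Hypothetical reward signal: higher is better.
    # Here, a candidate scores better the closer it is to an unknown target.
    return -abs(candidate - 42)

def improve(trials=1000, seed=0):
    """Minimal trial-and-feedback loop: propose a candidate, score it,
    and keep the best one seen so far."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = rng.uniform(0, 100)  # try something
        score = feedback(candidate)      # get feedback on it
        if score > best_score:           # keep improvements
            best, best_score = candidate, score
    return best
```

With enough trials, `improve()` converges toward whatever the feedback function rewards, without the loop "understanding" the objective--which is the point being made: a feedback channel plus cheap trials is enough to make progress.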



> The difference is that we (falsely) believe we understand how the machine works, so it doesn't seem magical the way our own processes do.

We do understand how the machine works and how it came to be. What most companies are seeking is a way to make that useful.


That's like saying we know how we think because we understand how neurons fire and accumulate electric charge through chemical changes. We have little idea how the information is encoded or what is represented. We're still trying to get models to explain themselves because we don't know how they arrive at a response.


To make that _seem_ useful enough for people to part with their money.


AlphaGo passes Hubert Dreyfus's test -- it has a world -- in a way that LLMs don't.





