
It could be represented that way. That's a long way from saying that's how brains work.

Does a thermometer predict tokens? It also produces outputs that can be represented as tokens, but it's just a bit of mercury in a tube. You can dissect a thermometer as much as you like and you won't find any token prediction machinery. There are lots of things like that. Zooming out, does that make the entire atmosphere a token prediction engine, since it's producing, e.g., wind and temperatures that could be represented as tokens?

If you need one token per particle then you're admitting that this task is impossible. Nobody will ever build a computer that can simulate a brain-sized volume of particles to sufficient fidelity. There is a long, long distance from "brains are made of chemicals" to "brains are basically token prediction engines."
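Rough back-of-envelope on what "one token per particle" would mean, assuming a ~1.4 kg brain that is mostly water (order-of-magnitude figures only):

    1400 g / 18 g/mol            ≈ 78 mol of water
    78 mol × 6×10^23             ≈ 5×10^25 molecules ≈ 1.5×10^26 atoms
    1 byte of state per atom     ≈ 10^26 bytes ≈ 10^14 TB per timestep

Even with very generous rounding, that is many orders of magnitude beyond any storage or compute we can plausibly build, which is the point.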




The argument that brains are just token prediction machines is basically the same as saying "the brain is just a computer". It's like, well, yes, in the same way that a B-21 Raider and a Cessna are both airplanes. That doesn't mean that they are anywhere close to each other in terms of performance. They incorporate some similar basic elements but when you zoom out they're clearly very different things.


But we are bringing it up in regard to what people are claiming is a "glorified next token predictor, markov chains" or whatever. Obviously LLMs are far from humans and AGI right now, but at the same time they are much more amazing than a statement like "glorified next token predictor" lets on. The question is how accurate to real life the predictor is and how nuanced it can get.
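For concreteness, the "markov chains" caricature amounts to something like the sketch below: a bigram model that samples the next word purely from co-occurrence counts (the corpus and names here are made up for illustration). An actual LLM is doing something vastly more sophisticated than this, which is the point.

    # Minimal sketch of the "glorified next token predictor / markov chain"
    # caricature: a bigram model that samples the next word purely from
    # co-occurrence counts. Corpus and names are illustrative only.
    import random
    from collections import defaultdict, Counter

    corpus = ("the brain is a prediction machine and "
              "the brain is not a thermometer").split()

    # Count how often each word follows each other word.
    transitions = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev][nxt] += 1

    def predict_next(word):
        # Sample the next word in proportion to how often it followed `word`.
        counts = transitions[word]
        if not counts:
            return random.choice(corpus)
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights, k=1)[0]

    word = "the"
    for _ in range(8):
        print(word, end=" ")
        word = predict_next(word)
    print(word)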

To me, the tech has been an amazing breakthrough. The backlash and downplaying by some people seem like some odd type of fear or cope to me.

Even if it is not that world changing, why downplay it like that?


To be fair, my analogy works if you want to object to ChatGPT being called a glorified token prediction machine. I just don't agree with hyperbolic statements about AGI.


There are so many different statements everywhere that it's hard to understand what someone is specifically referring to. Are we thinking of Elon Musk, who is saying that AGI is coming next year? Are we thinking of people who believe that an LLM-like architecture could reach AGI in 5 to 10 years given tweaks, scale and optimisations? Are we considering people who believe that some other arch breakthrough could lead to AGI in 10 years?


>> Are we thinking of people who believe that an LLM-like architecture could reach AGI in 5 to 10 years given tweaks, scale and optimisations?

Yep, that’s exactly who I’m talking about! I’m pretty sure Sam Altman is in that camp.



