On Irresponsible Foomerism (swyx.io)
35 points by swyx on Feb 6, 2023 | 4 comments



> But there is something about AI that makes it inherently susceptible to suspension of disbelief.

> the more nuanced and harder-to-meme reality is acknowledging that exponential curves do exist, but they also tap out, and run into invisible asymptotes and real-world limits

> Every technology has a #foom period and then a more “boring” (but profitable) maturity phase

I dunno. Seems to miss the point of #foom. I associate #foom with the "AI go foom" debate. If we postulate an entity which can modify itself, then we aren't near the upper bounds of intelligence at all. The ratchet never goes backward.

Simply responding "exponentials aren't real, the system will hit a wall" misses the point. Yeah sure, covid cases will hit a wall because people aren't billiard balls bouncing off each other and there are a finite number of us. Sure, more parameters in GPT-N don't guarantee sentience.

What does that have to do with the upper bounds of conceptual thinking and problem solving? The singularity is asking the question (and, IMO, answering it): what kind of mind could exist when the substrate isn't a wet bag of lipids and proteins randomly walking on a 20-year reproduction cycle? The answer is something beyond comprehension and prediction (thus, a singularity). Any refutation involving a screenshot of a bad GPT answer is irrelevant. The space of minds beyond human intelligence is going to be vast, and it will be explored not by us but by machine minds.


This is just sci-fi though. The fact of the matter is that until there is a singularity, if indeed there will be such a thing, we won't understand its constraints, and just because they aren't understood doesn't mean they don't exist.

Information can only be compressed so much, moving it takes time, and processing it requires energy. These are laws of nature that won't change.
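Those limits can even be put into rough numbers. A back-of-envelope sketch (mine, not the original commenter's; the 300 m machine size is an arbitrary assumption): the Landauer bound gives the minimum energy to erase one bit at room temperature, and the speed of light puts a floor on how long a signal takes to cross a machine of a given size.

    import math

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    T = 300.0            # room temperature, K
    c = 299_792_458.0    # speed of light, m/s

    # Landauer bound: minimum energy to erase one bit of information.
    e_bit = k_B * T * math.log(2)
    print(f"Landauer limit at 300 K: {e_bit:.2e} J per bit erased")  # ~2.9e-21 J

    # Light-speed latency across a hypothetical 300 m machine (assumed size).
    size_m = 300.0
    print(f"One-way light delay across {size_m:.0f} m: {size_m / c * 1e6:.2f} microseconds")  # ~1 us

Neither number is a near-term engineering constraint, but they are the kind of hard floor being pointed at.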


Positing the existence of constraints isn't an argument that we are anywhere near the upper asymptote though.

We are biological machines running on 70kg and 100W. Why should I imagine a being which is 70 tons and 100 MW would be unable to explore a vast and varied state space of intelligence? Especially when that exploration will be goal directed and not just random mutation with fitness selection?

Minds aren't special things. A mind is just an actor with a good world model. Machine minds will build massive, detailed models and build themselves actuators to produce effects in the world.

Human-level intelligence is a single example, just like Homo habilis-level intelligence. There is no reason to assume artificial minds won't be massively more intelligent than us.

If the only constraint you can imagine is "physics", then I'd say that's a pretty useless constraint.


> We are biological machines running on 70kg and 100W. Why should I imagine a being which is 70 tons and 100 MW would be unable to explore a vast and varied state space of intelligence? Especially when that exploration will be goal directed and not just random mutation with fitness selection?

This hinges on an assumption of fairly linear bang-for-buck scaling in a way that I don't think is justified given how graph topology works.
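To make the graph-topology worry concrete, here is a toy illustration (my own numbers, nothing measured): if every unit in a scaled-up system needs a direct link to every other unit, links grow quadratically while units grow linearly, so the interconnect cost per unit keeps climbing.

    # Toy illustration: direct pairwise links grow quadratically with node count.
    def full_mesh_links(n: int) -> int:
        """Links needed to connect every node directly to every other node."""
        return n * (n - 1) // 2

    for n in (100, 1_000, 10_000, 100_000):
        links = full_mesh_links(n)
        print(f"{n:>7} nodes -> {links:>13,} links ({links / n:,.1f} per node)")
    # Nodes scale linearly; full connectivity scales as n^2. Real systems cope by
    # going sparse, which means longer paths and more hops -- sub-linear returns.

That's obviously a cartoon of a brain or a datacenter, but it's the kind of reason 1000x the mass and power needn't buy 1000x the capability.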

> Minds aren't special things. A mind is just an actor with a good world model. Machine minds will build massive, detailed models and build themselves actuators to produce effects in the world.

I'd like to see some further justification that this is the case.

AI people are always going on about what will happen with a level of certainty that I don't think would be justified even when talking about the future of a well-understood technology.

> If the only constraint you can imagine is "physics", then I'd say that's a pretty useless constraint.

Physics (and to some extent mathematics) is always the constraint on any seemingly exponential process. It's why Covid followed an S-curve. It's why we don't have an infinite number of rabbits despite exponential growth at low numbers. No mathematical model escapes the fact that we live in reality.
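To make the S-curve point concrete, here is a minimal sketch (arbitrary parameters, purely illustrative): an exponential and a logistic curve with the same initial growth rate are nearly indistinguishable early on, and then the logistic one flattens as it approaches its carrying capacity.

    import math

    r, K, x0 = 0.5, 1_000.0, 1.0   # growth rate, carrying capacity, initial value

    def exponential(t: float) -> float:
        return x0 * math.exp(r * t)

    def logistic(t: float) -> float:
        return K / (1 + (K / x0 - 1) * math.exp(-r * t))

    for t in range(0, 31, 5):
        print(f"t={t:2d}  exp={exponential(t):>12,.1f}  logistic={logistic(t):>8,.1f}")
    # The two curves track each other while the value is far below K, then the
    # logistic one saturates: the "wall" is built into the dynamics from the start.

The point is just that an exponential and a capped process look identical until the cap starts to bind.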



