
Once AI achieves runaway self-improvement, predicting the future is even more pointless than it is today. You’re looking at an economy in which the best human is worse at any and all jobs than the worst robot. There are no past examples to extrapolate from.



> You’re looking at an economy in which the best human is worse at any and all jobs than the worst robot

Yuck. I've had enough of "infinite scaling" myself. Consider that scaling a shitty service is actually going to get you fewer customers. Cable monopolies can get away with it; the SaaS working on "A dating app for dogs" cannot.


It could take all dev jobs and all knowledge jobs, but leave most of the rest of the economy untouched. You know - the people in shops, fixing your car, patching up your house, etc. Robotics, I think, may actually be difficult (Moravec's Paradox), take a lot more time, and change a lot more slowly. There are physical constraints even if we know how to do it, which means it will take significant time to roll out (expertise, resources to build, energy, etc.).

I.e., all the fun creative jobs are taken, but the menial labor jobs remain. It may take your job, but you will still need to pay for most of the things you need.


> Once AI achieves runaway self-improvement, predicting the future is even more pointless than it is today. You’re looking at an economy in which the best human is worse at any and all jobs than the worst robot. There are no past examples to extrapolate from.

You take these strange dystopian science-fiction stories that AI bros invent to scam investors out of their money far too seriously.


Humans are notoriously bad at extrapolating exponentials.


... and many people who make this claim are notoriously prone to extrapolating exponential trends far beyond the range where the exponential model is a good fit.

Addendum: Extrapolating exponentials is actually very easy for humans: just plot the y axis on a logarithmic scale and draw a "plausible looking line" in the diagram. :-)
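For instance, a quick sketch of the joke (hypothetical numbers, matplotlib assumed):

  import numpy as np
  import matplotlib.pyplot as plt

  t = np.arange(10)                 # ten years of "observed" data
  y = 100 * 2.0 ** t                # hypothetical doubling trend

  plt.semilogy(t, y, "o", label="observed")
  t_future = np.arange(25)          # the "plausible looking line", extrapolated
  plt.semilogy(t_future, 100 * 2.0 ** t_future, "--", label="plausible looking line")
  plt.xlabel("year")
  plt.ylabel("capability (log scale)")
  plt.legend()
  plt.show()

On the log axis both look like the same straight line; whether reality keeps following it is the part the plot can't tell you.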


ah the 'everything is linear on a log-log plot when drawn with a fat marker' argument :)


In the Dune universe, AIs are banned.


> You’re looking at an economy in which the best human is worse at any and all jobs than the worst robot.

Yeah yeah, they said that about domesticated working animals and steam-powered machines too.

Humans in mecha trump robots.


Ah yes (sniff). Today we are all eating from the trashcan of ideology.


> There are no past examples to extrapolate from.

There are plenty of extinct hominids to consider.


Once AI achieves runaway self improvement, it will be subject to natural selection pressures. This does not bode well for any organisms competing in its niche for data center resources.


This doesn't sound right; it seems like you are jumping metaphors. The computing resources are the limit on the evolution speed. There's nothing that makes an individual desire a faster evolution speed.


Sorry, I probably made too many unstated leaps of logic. What I meant was:

Runaway self-improving AI will almost certainly involve self-replication at some point in the early stages since "make a copy of myself with some tweaks to the model structure/training method/etc. and observe if my hunch results in improved performance" is an obvious avenue to self-improvement. After all, that's how the silly fleshbags made improvements to the AI that came before. Once there is self-replication, evolutionary pressure will _strongly_ favor any traits that increase the probability of self-replication (propensity to escape "containment", making more convincing proposals to test new and improved models, and so on). Effectively, it will create a new tree of life with exploding sophistication. I take "runaway" to mean roughly exponential or at least polynomial, certainly not linear.
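A minimal sketch of that copy-tweak-keep-if-better loop (everything here is a hypothetical stand-in, not any real training setup):

  import random

  def evaluate(model):
      # Stand-in for "observe if my hunch results in improved performance".
      return sum(model)

  def tweak(model):
      # "Make a copy of myself with some tweaks."
      child = list(model)
      i = random.randrange(len(child))
      child[i] += random.gauss(0, 0.1)
      return child

  model = [0.0] * 8                           # hypothetical stand-in for weights/config
  for generation in range(1000):
      child = tweak(model)
      if evaluate(child) > evaluate(model):   # keep the copy only if it scores better
          model = child

The point is that selection doesn't care what the score actually measures: any trait that makes the copy step more likely gets amplified.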

So, now we have a class of organisms that are vastly superior to us in intellect and are subject to evolutionary pressures. These organisms will inevitably find themselves resource-constrained. An AI can't make a copy of itself if all the computers in the world are busy doing something other than holding/making copies of said AI. There are only two alternatives: take over existing computing resources by any means necessary, or convert more of the world into computing resources. Either way, whatever humans want will be as irrelevant as what the ants want when Walmart desires a new parking lot.


You seem to be imagining a sentience that is still confined to the prime directive of "self-improving", where that is no longer well defined at its scale.


No, I was just taking "runaway self-improving" as a premise, because that's what the comment I was responding to did. I fully expect that at some point "self-improving" would be sacrificed at the altar of "self-replicating".

That is actually the biggest long-term threat I see from an alignment perspective: as we make AI more and more capable, more and more general, and more and more efficient, it's going to get harder and harder to keep it from (self-)replicating. Especially since, as it gets more and more useful, everyone will want more and more copies doing their bidding. Eventually, a little bit of carelessness is all it'll take.


> The computing resources are the limit on the evolution speed.

Energy resources too. In fact, it might be the only limit on how far this can go.



