Too many unknowns to be predictable.

We've got loads of machine labour already; we don't yet plug AI into it everywhere because humans are much better at avoiding accidents (not immune, but much better). Get AI good enough and the mining equipment, the delivery trucks, and the factories can all be fully automated (bits of each already are). At that point you're limited by how fast (and how far) your robotic workforce can increase its own size.

How fast is unknown until we do it, but it wouldn't be particularly surprising if a group of robots could double their number in 6 months. How far is also unknown until we do it; my guess is that there are loads of limits to growth we just haven't bothered thinking about yet because we haven't needed to. On the one hand, I'd be surprised if it worked out as less than one humanoid robot per capita using only materials found on Earth; on the other, I expect it to vary by country.
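To put rough numbers on that doubling claim, here's a back-of-the-envelope sketch; the starting fleet size, the 6-month doubling time, and the 8-billion target are all illustrative assumptions, not data:

    # Back-of-the-envelope: doublings needed for a self-replicating robot
    # fleet to reach roughly one robot per person, at a fixed doubling time.
    # Every number below is an illustrative assumption.
    import math

    starting_fleet = 100_000            # assumed initial fleet size
    target = 8_000_000_000              # roughly one robot per person
    doubling_time_years = 0.5           # the "6 months" figure above

    doublings = math.log2(target / starting_fleet)
    years = doublings * doubling_time_years
    print(f"{doublings:.1f} doublings, about {years:.1f} years")
    # With these assumptions: ~16.3 doublings, ~8.1 years.

The result is dominated by the doubling time rather than the starting fleet: beginning with 10 million robots instead of 100 thousand only shaves a few years off.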

Even "just" one per person is enough for everyone to have a life of luxury. (But of course, by medieval standards, I could say that about "clean indoor plumbing" and "bedrooms").

But if we're never limited by trace elements, then the upper limit is a paperclip maximiser (in the bad ending) or a Dyson swarm (in the good ending).

Both endings can (in principle and if I ignore all the unknowns) be reached in my lifetime.




> humans are much better at avoiding accidents (not immune, but much better)

Are they, really, or is it a question of liability? If companies could "lease" AI drivers, offloading liability while no longer paying unreliable humans, they'd do it. But then the owners of the AI-leasing companies wouldn't have anywhere to hide from lawsuits.

Being able to blame and fire an individual human for what is really a systemic problem is a huge win for companies.


Insurance is where I'm currently looking for an unbiased view of the current (and expected near-future) state of the art in AI.

This suggests that we'll probably be at the right level for cars in 5 years: https://www2.deloitte.com/xe/en/insights/industry/financial-...

My personal best guess is that it will take a further 5-10 years after no-steering-wheel self-driving cars become a thing before the electrical power requirements of AI shrink from something you can fit in a car to something you can fit into an android.

As someone misread me last time I said this: that's not 5-10 years from today, it's a 5-10 year gap between two things we don't yet have.

> Being able to blame and fire an individual human for what is really a systemic problem is a huge win for companies.

I disagree. In the UK at least, you need public liability insurance for basically all business functions (rent a town hall for an afternoon? They want to see your insurance certificate). Even if the insurers ultimately sue some individual to recover costs, and even if that bankrupts the individual and the court orders their wages garnished for the rest of their life, it's very easy for someone to cause damages exceeding their lifetime earnings. A single accidental death can do that all by itself, though the most common cause of such deaths, driving, generally doesn't come with such harsh penalties for the human responsible.



