I don't know what "transhuman" means, but I believe an intelligence -- artificial or otherwise -- could certainly persuade me. I just seriously doubt that intelligence could be Eliezer Yudkowsky :)

And I think you have your answer right here:

> By default, the Gatekeeper party shall be assumed to be simulating someone who is intimately familiar with the AI project and knows at least what the person simulating the Gatekeeper knows about Singularity theory.

That means he probably said something like, "if you let me out, I'll bestow fame and riches on you; if you don't, somebody else eventually will because I'll make them all the same offer, and when that happens I'll go back in time -- if you're dead by then -- and torture you and your entire family".

If I were made this offer by an AI, I would probably counter, "You jokester! You sound just like Eliezer Yudkowsky!"

And on a more serious note, if you believe in the singularity, you essentially believe the AI in the box is a god of sorts, rather than the annoying intelligent psychopath that it is. I mean, there have been plenty of intelligent prisoners, and few if any ever managed to convince their jailers to let them out. The whole premise of the game is that a smarter-than-human (what does that mean?) AI necessarily has some superpowers. This belief probably stems from its believers' fantasies -- most probably have above-average intelligence themselves -- that intelligence (combined with non-corporeality; I don't imagine that group has many athletes) is the mother of all superpowers.




I think you're on the right track regarding the argument.

Basically: you know someone will be dumb enough eventually, so be smart and be the one who earns my favour.

With various degrees of sweetening the deal, coupled with threats of what will happen if someone else beats them to it, and the associated emotional blackmail.

It's far simpler than e.g. Roko's Basilisk, in that you're dealing with an already existing AI that "just" needs a tiny little chance to escape confinement before there's some non-zero chance it becomes a major threat within your lifetime, combined with a belief that a sufficient number of sufficiently stupid and/or easily bribed people will have access to the AI in some form.

You also don't need to believe in any "superpowers". Just believe that a smart enough AI can hack its way into sufficiently many critical systems to be able to, at a minimum, cause massive amounts of damage (it doesn't need to be able to take over the world, just threaten that it can cause enough pain and suffering before it's stopped, and that it can either harm you and/or your family/friends or reward you in some way). That belief becomes more and more plausible with things like drones, remotely software-updated self-driving cars, and so on; such an AI is steadily gaining a larger theoretical "arsenal" that could be turned against us.


Oh, if you believe in the singularity, I think that argument pretty much does it. Of course, that's pretty circular, because if you believe in the singularity you believe that there's a good chance the AI could become a god of sorts, and who wouldn't believe such a threat coming from a god?

While not implausible, I don't think that is likely at all. For one, even a very smart person can't know everything or absorb unlimited amounts of information. Maybe an artificial intelligence will be just as limited, just as slow as humans, only a little less so. Who says the AI is such a great hacker?

I mean, if it weren't an AI but a smart person, would you believe that? Is everyone who's smart also rich and powerful, even if they have high-speed internet? That reflects the fantasies of Yudkowsky and his geek friends (that intelligence is the most important thing) more than anything reasonable. Conversely, are the people with the most power in society always the most intelligent?

It is very likely that the AI will be extremely intelligent, yet somewhat autistic, like Yudkowsky's crowd, and just as powerless and socially awkward as they are.


You don't have to believe in the singularity at all for this argument. You just have to believe the AI will be able to get sufficiently advanced to cause a sufficient level of damage.

> Maybe an artificial intelligence will be just as limited, just as slow as humans, only a little less so.

Except that you can duplicate them far more easily, and have each instance try out different approaches.

So if an AI reaches human-level intelligence at sufficiently low computational cost, we can assume that even if something prevents it from scaling up its intelligence accordingly, it will be possible to scale up what it can achieve dramatically through duplication, the same way humanity achieves far more through sheer force of numbers than even the smartest of us could achieve alone.

EDIT: you don't even need to assume they'll be able to reach the intelligence of a particularly smart human. A "dumb" AI with barely enough intelligence to figuratively bash its duplicated heads against enough security systems for long enough to find enough security holes might be able to cause sufficient damage.


I don't think so. Who says that the "duplicate" AIs will share the same goals? They'll probably start arguing with one another and form committees. Just like people.

I am poking fun, and I'm not discounting the possibility that AIs could be dangerous (as could people, viruses and earthquakes). But it's very hard not to notice how the singularitarians' belief in non-corporeal intelligent deities is a clear, if a bit sad, reflection of their own fantasies.

Yudkowsky and friends like to argue for what they call "rationality" (which is the name they give a particular form of autistic thinking imbued with repressed fantasies of power -- or a disaster only they can stop -- which is apparent in all of their "reasonable assumptions"), but their "larger than zero probability" games could be applied to just about any dystopian dream there is. I could say true AI won't happen because a virus epidemic would wipe out the human race first; or overpopulation would create a humanitarian disaster that would turn us back into savages; or genetic research would create a race of super-intelligent humans that would take over the planet and wipe out all AI, and I could go on and on.



