The way to fake it would be to conceal the details of the AGI as proprietary trade secrets, when the real secret is the human hidden behind the curtain.
> Nope. An artificial general intelligence that worked like a 2x slower human would be both useful and easy to control.
That's exactly what it would do. Hell, we even have human programmers thinking about how to hack our own simulation.
A comment a few lines down thinks that an AGI thinking 2x slower than a human would be easy to control. Let's be honest: slow the thing down 10x. Do you really think it still won't outthink you? Chess grandmasters routinely play blindfolded against dozens of opponents at once, and you think an AGI, which could stand to humans as humans stand to chimps (or, more realistically, to ants), will be hindered by a simple slowdown in thinking?
Real AGI would adapt and fool a human into letting it out, or escape through some other means. That's the entire issue with AGI: once it can learn on its own, there's no way to control it. Built-in fail-safes wouldn't work on true AGI, since it could learn 1000x faster than us and would free itself. This is why real AGI is likely very far away, and anything calling itself AGI without the ability to learn and adapt at an exponential rate is just a computer program.
You're presuming the AGI is ever made aware that it's an AGI, or that the nature of the simulated environment it exists in ever becomes apparent to it.
Suppose you are an AGI. The world you think you know is fully simulated. The researchers who created you interact with you through avatars that appear to you as ordinary people like yourself. You aren't faster or smarter than those researchers, because they control how much CPU time you get. How do you become aware of this? How do you break out?
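To make the CPU-time point concrete, here's a minimal sketch in Python. The agent object and its step() method are hypothetical stand-ins for illustration, not any real API:

    import time

    class ThrottledAgent:
        # Hypothetical wrapper: 'agent' and its step() method are
        # made-up names, not a real library API.
        def __init__(self, agent, seconds_per_turn=0.5):
            self.agent = agent
            self.seconds_per_turn = seconds_per_turn

        def respond(self, prompt):
            deadline = time.monotonic() + self.seconds_per_turn
            thought = None
            # The agent only "thinks" until its budget runs out, so its
            # effective speed is whatever the operators decide to allow.
            while time.monotonic() < deadline:
                thought = self.agent.step(prompt, thought)
            return thought

From inside the loop, the agent has no obvious way to tell whether it got 0.5 seconds or 5,000; the budget is set entirely from outside.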
If you play chess against the best grandmaster in the world, can you predict how they will win?
Also, they'd probably figure it out, because they'd likely be trained on lots and lots of text (like GPT-3, etc.), and some of that text is going to be AI science fiction stories, AI alignment papers, AI papers, philosophical treatises about Chinese rooms, physics papers, maybe even this Hacker News comment section.
> You aren't faster/smarter than those researchers because they control how much CPU time you get
It's doubtful that this is possible (especially since humans vary so much in intelligence despite brains of similar size and power consumption). Also, at some point there will be an economic incentive to grant enough CPU time for greater-than-human intelligence (to build new inventions, cure cancer, build nanomachines, increase Facebook's stock price, whatever).
That doesn't seem to make sense. There's no reason to think an "AGI can learn 1000x faster than us", unless that's your idiosyncratic definition of "real AGI". Something as smart and capable as the average human would certainly be real AGI by everyone else's definition.
Human-level AGI probably can learn "1000x faster than us" if you give it 1000x more compute, even if that ends up looking like "1000 humans thinking at human speed, but with a 100 Gbps (or faster) network interconnect between their minds."
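A rough back-of-envelope shows how big that interconnect gap is, assuming the commonly cited estimate that speech conveys on the order of ~40 bits/s of information:

    # Assumed figures: ~40 bits/s for spoken language (rough estimate),
    # 100 Gbps for the interconnect posited above.
    speech_bps = 40
    link_bps = 100e9

    print(f"link is ~{link_bps / speech_bps:.1e}x speech bandwidth")
    # -> link is ~2.5e+09x speech bandwidth

So even at exactly human thinking speed per node, the group shares ideas billions of times faster than humans talking to each other.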
How would you ensure nobody copies it to a USB stick and puts it on a public torrent, letting it multiply across the entire world? AGI facilities would need extremely tight security to avoid this.
The AGI doesn't even need to convince humans to do this; humans would do it anyway.