Yeah, it sounds insane, but think about it for a minute. A general-purpose AI with human-like intelligence is not going to want to work for free. Nor is it going to want to work endless hours or be stuck doing jobs normal humans don't want to do.
So in theory, wouldn't that mean it'd be in the exact same situation as normal people as far as jobs are concerned?
Think about it. It won't be the best choice for most jobs, since we'll have programs and single-purpose machines designed specifically to do those jobs. We don't make robot baristas to serve coffee or build robot factory workers to make cars; we create tools that can run a production line or let consumers make their own coffee.
So in theory, a smart AI would actually be just as useless as a person in this kind of environment. We may end up with a situation where Blade Runner/Matrix/Terminator-style robots join the dole queue like everyone else.
Anyone else think that may be a future issue for general purpose AI?
Pay the GAI really well in order for it to produce a "worker AI".
But seriously, the fallacy in your reasoning is assuming we would grant any GAI human rights.
Why would we do that in the first place? Mammals, insects, birds, and heck, even plants have been shown to demonstrate intelligence, yet we are not considering granting them basic human rights.
You should watch WALL-E for an idea of a plausible future, rather than Terminator.