If you mean to imply that in the future it'll just be the boss sitting alone at his computer, interacting with the AI, then couldn't he still be more efficient by hiring a team of people to interact with the computer? After all, no matter how advanced the AI gets, the productivity of the company will still be limited by the speed at which the AI "operators" can interface with the machine. This is true regardless of the input choice: keyboard/mouse, voice input, or neural link.
Your argument defeats itself. If your AI is so good that its productive output is bottlenecked by the speed of your supposed operators, then you should just replace those with another instance of your AI, as well as the boss.
Which is exactly the same scenario that your parent comment is presenting.
Not sure how the paperclip maximiser is relevant here. This still happens with friendly AI.
If you accept the premise (all jobs replaced with AI, except for the boss, with no difference in productive output), then you cannot solve the problem of all the lost jobs by rehiring everyone as "operators". You said yourself that in this hypothetical scenario the bottleneck is human-AI communication (which is why you want to hire operators to increase productivity).
But if human-AI communication speed is the bottleneck (and not AI capabilities, compute, etc.), then you remove the bottleneck by replacing the humans with AI, not by adding more humans.
I don't think the original scenario (all jobs replaced except the boss's) is plausible, so don't misunderstand my argument as defending it. I'm just saying that your conclusion (just hire operators) doesn't follow from the premise.
It's not worth arguing this much further, since I don't think the premise is plausible and the discussion only matters if it were. Unless I've fatally misunderstood something you were saying?
just replace those with another instance of your AI, as well as the boss.
I interpreted your statement as meaning to replace even the boss with AI, i.e. a fully autonomous AI answerable to no one. That is just a paperclip maximizer, since it is no longer under human control.
Perhaps you never meant for the boss to be replaced; in that case, you may ignore my paperclip-maximizer statement.
It is what I meant; I just don't see what the paperclip maximiser has to do with it. As far as I understand it, the primary idea behind that particular thought experiment is that a misaligned AI leads to AGI ruin, even for simple goals.
The scenario we're talking about doesn't even contain misaligned AI. It contains friendly AI (the best-case scenario), which still drops all current human economic value to zero. The contrived scenario has all jobs except the boss's replaced with AI. You propose hiring operators to increase the company's productivity. I say this doesn't make sense.
Do you agree up to this point? If so, what does the paperclip maximiser have to do with anything? If not, what did I misunderstand?