Remember the episode of "Curb Your Enthusiasm" where Larry David makes an empty gesture to his friend: "Let me know if there's anything I can do for you"? And what do you know, his friend takes him up on it.
It's like REI saying, "Sure, if you bring back your worn-out 15-year-old shoes, we'll replace them free of charge." Then people start taking them up on that offer, and what do you know: it turns out they didn't really mean what they said in the first place. It was an empty gesture they were hoping no one (or only a few people) would act on.
As for automation, "it's not an ethical dilemma," so say we programmers, until computers learn to code well, walk, run effectively, and so on. Taken to its logical conclusion, it seems obvious to me that computers will soon (within a few decades?) be capable of doing almost all the jobs we humans currently do. And if not as soon as I'm predicting, then eventually. No matter the timeline, I'm fully convinced it will happen.
And how is this an "ethical dilemma"? Human beings in the modern world depend on finding work to make money to survive. What happens when all work can be done by robots?
Then the argument becomes: "Yeah, but we'll just find new work to replace the old work." This AI revolution is different in that technology is approaching the ability to do everything humans can do. So as soon as we find these "new things," what will keep robots from doing them too? More cheaply and more effectively, I might add. The more likely scenario is that once robots surpass human intelligence and agility, there will indeed be new jobs, but they won't be filled by humans, because humans just won't have the right skill set.