>So let's all just give zero fucks about our moral values and just multiply monetary ones.
You are misconstruing the original point. They are suggesting that the moral qualms about using AI are simply not that significant, neither to the vast majority of consumers nor to the government. A few people might exaggerate these moral issues out of self-interest, but they won't matter in the long term.
That is not to suggest there are absolutely no legitimate moral problems with AI, but they will pale in comparison to what the market needs.
If AI can make things 1000x more efficient, humanity will collectively agree, one way or another, to ignore or work around the "moral hazards" for the greater good.
You could start by explaining which specific moral value of yours conflicts with AI use. That might clarify whether these values are that important to begin with.
Is that the promise of the Faustian bargain we're signing?
Once the ink is dry, should I expect to be living in a 900,000 sq ft apartment, or be spending $20/year on healthcare? Or be working only an hour a week?
While humans have historically only mildly reduced their working time, down to today's 40-hour workweek, their consumption has gone up enormously, and whole new categories of consumption have opened up. So my prediction is that while you'll never live in a 900,000 sq ft apartment (unless we get O'Neill cylinders from our budding space industry), you'll probably consume a lot more while still working a full week.
We could probably argue until the end of time about the quality of life then versus now. In general, though, the amount consumed relative to the time spent earning it has improved over time.
I don't want to "consume a lot more". I want to work less, and for the work I do to be valuable, and to be able to spend my remaining time on other valuable things.
So you are agreeing with the parent? If consumption has gone up a lot while input hours have gone down or stayed flat, that means you are able to work less: in principle, someone content with an earlier era's level of consumption could cover it with a fraction of today's working hours.