
How to protect humans in a fully automated society - akshayB
https://www.theverge.com/2019/2/1/18205204/automation-ai-software-disruption-artificial-intelligence-big-picture
======
jelliclesfarm
I have always wondered if we would need to change the currency we use for
transactions if machines and automation could do everything humans used to
do...for example, reputation can be a form of currency...health is another
form of currency...can you create transactions based on one or more forms of
common, human-universal values?

~~~
no_dude_no
This is a really stupid idea, because it will open up loopholes of moral
relativism, and you’ll see them exploited through arbitrage.

How might this prove worse than what already happens? Simply put, amoral
transactions are weakened by risky barter trades.

A codified exchange means that a more exacting moral relativism is hardened,
and the capacity for pettiness goes up. Sort of like how vacation days work,
but now, we’re talking about prostitution in exchange for starvation, based on
whether web browsing habits indicate a propensity for racism, which carries a
cost to one’s reputation index.

Then we get into political re-education to boost credit scores, and so on.
Defensive-driver credits improve insurance prices, but what if your IQ
qualifies your decision making to a degree that authorizes participation in
targeted political assassination for the greater good? The smartest among us
are the best people to entrust with killing those who pose a potentially
devastating cost to society, no?

So, what if we needed to kill people now, to prevent the next potentially
horrible election? Would we entertain the thought of murder to cleanse the
ballot of absurd votes?

~~~
jelliclesfarm
I think I understand what you are saying.

But it makes me want to go back and ask: 1. Why does moral relativism exist?
2. Is it naturally selected for by population and culture?

Let us consider an ‘idea’. It can be 1. Moral 2. Immoral 3. Amoral. The
evolutionary fitness of an idea places it on a scale with one end being moral
and the other end being immoral. So any idea is somewhere on the spectrum.

It’s a human universal that murder is ‘wrong’, but it is ‘less wrong’ if it is
in self-defense. It borders on ‘not wrong’ if it is to save one or more
innocent lives.

I think any society’s laws and values are already on a sliding scale. We are
simply creating a reputation system. Reputation systems have always existed.
In fact, the erosion of reputation tracking, and the shunning of moral
relativism by assuming that all ‘morals’ have a static value, is likely worse
for our modern times.

I would ask you to consider the alternative: that everyone follows the same
moral code...and how that would go down if we made it a global rule. Every
culture is a unique ecosystem in a petri dish...society is always poking and
prodding and experimenting as times change.

I would even suggest that the economic construct we call ‘money’ has altered
morals and values over time and continents and cultures.

People are diverse...why can’t morality be on a sliding scale? It seems
absolutely plausible to me.

The more I think about it...a reputation system is a self-correcting
mechanism within the ecosystem that contains it.

That’s as far as I have thought this out. I will have to think more about
this.

