
Artificial Morality - how-about-this
http://blog.lareviewofbooks.org/provocations/artificial-morality/
======
Jupe
Interesting read, and some of it rings true for me, but the article is a
little bland.

A parallel observation: in the "old days" of the internet, with its shining
new capability of connecting computers and sharing documents, web pages,
software, messages, opinions, and, of course, pornography, the feeling of the
day was one of betterment. The network would let us upend the outdated
businesses and processes of the past, and we would all be better for it.
Information wants to be free!

Yeah, sure.

Information wants to be free? Well, if you consider my click-stream, search
history, and probably every keystroke I enter, then yes, _that_ information
has certainly been freed. I can even search this treasure trove of
information, if I'm just willing to share every piece of personal information
about me, from my contact list to my personal emails to my phone's location,
my calendar, and my purchase history.

I no more trust the AI "morality" pundits than I trust the (well-meaning, but
misguided, IMO) visionaries of the internet's early days (and, yes, you can
count me among those ranks).

~~~
api
I personally think "information wants to be free" (as in no money) is a major
reason the Internet went so bad in so many ways. Since everything has to be
free, the only possible business models are those that revolve around
monetizing the user or are otherwise deceptive and indirect.

This created and bootstrapped the whole surveillance-capitalism and "adtech"
industry. Instead of paying for information, we have people paying to
manipulate us via free media built to be addictive.

I'm sure some of this would have happened even if the Internet had possessed
a payment mechanism and people willing to pay for things, but it would not
have become the dominant model for everything, nor nearly as pervasive.

~~~
netsharc
I wonder how a "pay to view" Internet would've worked. How do you know if the
page/site you're going to open will be worth the 0.05 cents? There are/were
kids in Macedonia who made fake news sites to make money from Google ads:
[https://www.wired.com/2017/02/veles-macedonia-fake-news/](https://www.wired.com/2017/02/veles-macedonia-fake-news/).
Imagine that sort of behavior minus the G-middleman.

Could they have built an Internet where there was a system to claim refunds,
with a "judge" who would decide? Could there have been a rating system ("90
out of 154 visitors said the information on this page was worth the money"),
and how could that have been abused?
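
As an illustrative sketch only (the thread doesn't specify any mechanism):
one classic defense against small-sample vote manipulation is to rank pages
by the lower bound of a Wilson score confidence interval rather than by the
raw "worth the money" share, so a page propped up by a handful of planted
votes can't outrank a broadly validated one. In Python:

```python
import math

def wilson_lower_bound(positive: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval at ~95% confidence.

    Treats each visitor's "worth the money" vote as a Bernoulli trial
    and returns a rating that is deliberately pessimistic when the
    vote count is small, so a few planted votes can't buy a top rank.
    """
    if total == 0:
        return 0.0
    phat = positive / total
    z2 = z * z
    centre = phat + z2 / (2 * total)
    margin = z * math.sqrt((phat * (1 - phat) + z2 / (4 * total)) / total)
    return (centre - margin) / (1 + z2 / total)

# The comment's example page: 90 of 154 visitors said it was worth it.
print(wilson_lower_bound(90, 154))   # ~0.51

# Raw averages say 9/10 (0.90) roughly ties with 140/154 (0.91); the
# adjusted scores (0.60 vs 0.85) strongly prefer the well-tested page.
print(wilson_lower_bound(9, 10))     # ~0.60
print(wilson_lower_bound(140, 154))  # ~0.85
```

This doesn't stop vote farming outright (the Macedonian operations could buy
votes too), which is presumably where the proposed refund claims and "judge"
would have to come in.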

~~~
AstralStorm
The model would likely have resembled telco pay-per-kB for public sites (like
public TV), since any other way would be infeasible. Alongside that, something
similar to cable networks, where you would get a set of sites for a monthly
installment, with premium options similar to Netflix.

The internet would be much smaller and much more centralized.

------
SpicyLemonZest
The article's underlying thesis seems to be summarized by

> The practitioners of AI are not up-front about the genuine allure of their
> enterprise, which is all about the old-school Steve-Jobsian charisma of
> denting the universe while becoming insanely great.

But that... doesn't seem true? Most modern AI practitioners, especially the
prominent ones, profess that their work is going to change the world forever
and that's why they want to do it. So I'm not sure what the author is really
proposing that AI practitioners should do. Work on less important problems?
Play it by ear instead of trying to develop moral principles preemptively?

~~~
grabbalacious
_> I'm not sure what the author is really proposing that AI practitioners
should do._

If he wants us to work for our moral betterment ('Nobody does AI for our moral
betterment'), he's asking us to become like old-fashioned saints or modern-day
activists. But they are not problem-solvers per se. They act like they
_already have_ the solutions to our problems.

The irony is that, whatever the intentions of its creators, AGI probably will
lead to our moral betterment by giving us a clearer understanding of what it
means to be human.

------
derefr
In my experience, the AI Safety people are less concerned with a nebulous
"morality" and more concerned with practical things, like making AI self-
improve slowly enough that we can catch it before it becomes a runaway
process, or making AI not think in ways where its imagining torturing people
means there's an actual ephemeral consciousness experiencing the qualia of
being tortured. Engineering stuff that happens to interact with morality... in
about the same way that genetic engineering could be said to interact with the
morality of the resulting organism.

------
undershirt
> Technological proliferation is not a list of principles. It is a deep,
> multivalent historical process with many radically different stakeholders
> over many different time-scales. People who invent technology never get to
> set the rules for what is done with it

Well said. Pessimism is a reasonable reaction to this sweeping line, but I
offer a quick summary of what I hope is non-naive optimism:

1) Problem: When we think technology is neutral, we miss how it creates
_niches_ outside our intended design and social/political constraints, since
the economic environment thwarts them all. We have agency as individuals, yes,
but the environment rewards and selects for _any_ use of the technology that
yields an advantage. “If you incentivize something, it will happen.”

2) Solution?: Nobody fucking knows. But “Game B” is what people are calling a
nascent research effort to identify what attributes a more cohesive
civilization must have to allow better collective “sense-making” and “choice-
making”, so that we may become capable of directing technology instead of
leaving it to the emergent dynamics of hapless competition. Also interesting
are the considerations for employing antifragile, transitional social
structures that can thrive within our current entrenched “Game A” dynamics and
plant those attributes. Gotta change the underlying rules instead of designing
atop ones that are broken.

3) Future Ethics: I've also seen some excitement about a so-called non-
relativistic ethical framework within Game B, called The Immanent Metaphysics.
It claims to be commensurable with, but not derivable from, science,
rigorously formalizing intuitive ethical principles to at least create a
consistent grounding for making effective ethical choices.[1] The hope is to
better inform (but not determine) our use of technology after we step into a
more coherent phase of civilization. Tech can NOT be put back in the bag, so
we _must_ develop the wisdom to carefully wield the godlike powers it grants,
which we are currently making a mess of.

[1] Only a few people currently understand the dense framework, but I'm
working through it and may be able to offer a summary of what I know so far.
Neat stuff.

------
Gormisdomai
[https://bioethics.georgetown.edu/2016/02/oxford-uehiro-prize-in-practical-ethics-should-we-take-moral-advice-from-our-computers-written-by-mahmoud-ghanem/](https://bioethics.georgetown.edu/2016/02/oxford-uehiro-prize-in-practical-ethics-should-we-take-moral-advice-from-our-computers-written-by-mahmoud-ghanem/)

^ This is a relevant article containing an important counterpoint: it argues,
from a few assumptions, that we should let computers write our moral codes
rather than the other way around.

------
acoye
If morality is derived from a belief in God, then a nihilist could argue that
any form of morality is essentially artificial.

