
Ethical OS and Silicon Valley’s Guilty Conscience - craftsman
https://librarianshipwreck.wordpress.com/2018/08/24/striving-to-minimize-technical-and-reputational-risks-ethical-os-and-silicon-valleys-guilty-conscience/
======
shawn
This is an excellent time to ask for a counterargument to CGP Grey's stance
that immortality should be invented:
[https://www.youtube.com/watch?v=cZYNADOHhVY](https://www.youtube.com/watch?v=cZYNADOHhVY)

Suppose there were a technology X which was going to be invented eventually.
Suppose also that it's a highly unethical technology, for some definition of
unethical.

Is it therefore unethical to create X?

Note: The constraint is that X _is inevitable_. The only question is who
creates it first. And in that context, isn't it at least possible to argue
from multiple axes that you should help to create it? The limit case of this
argument would be "It's your duty to the society you live in to ensure it has
the competitive advantage, not some other society."

A less-hostile way to phrase that would be "The first company to invent a
technology can then try to _enforce ethics_ onto that technology."

That is, if you invent something, it's easier to dictate how it's used than if
you didn't.

Hence, paradoxical as it may seem, the logical conclusion would _seem_ to be
that you should work as hard as you can to invent whatever unethical
technology you're worried about -- in the hopes that you can minimize the
damage later.

If it seems like a technology can't really be controlled (e.g. nuclear
weapons), I counter with this: Bitcoin was the implementation of a set of
ideas. The exact implementation could have been very different. It could have
been inflationary rather than deflationary, for example. The precise choices
were very important, because Bitcoin has huge first-mover advantages. And that
is often true of the first X to be invented.
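
To make that concrete, here is a minimal sketch (for illustration only, not
Bitcoin's actual code, though the 50-coin initial subsidy and the 210,000-block
halving interval are Bitcoin's real parameters) contrasting the deflationary
schedule that actually shipped with a hypothetical constant-issuance variant.
One design choice flips the entire monetary policy:

    # Minimal sketch: Bitcoin-style halving schedule vs. a hypothetical
    # constant-issuance ("inflationary") variant. Not Bitcoin's actual code.

    HALVING_INTERVAL = 210_000  # blocks between halvings (Bitcoin's real value)
    INITIAL_SUBSIDY = 50.0      # coins minted per block at launch (Bitcoin's real value)

    def deflationary_subsidy(height: int) -> float:
        """Subsidy halves every 210,000 blocks, capping total supply near 21 million."""
        halvings = height // HALVING_INTERVAL
        return INITIAL_SUBSIDY / (2 ** halvings) if halvings < 64 else 0.0

    def inflationary_subsidy(height: int) -> float:
        """Hypothetical alternative: a constant subsidy forever, so supply is unbounded."""
        return INITIAL_SUBSIDY

    def total_supply(subsidy, blocks: int) -> float:
        return sum(subsidy(h) for h in range(blocks))

    blocks = 4 * HALVING_INTERVAL  # the first four halving epochs
    print(total_supply(deflationary_subsidy, blocks))  # ~19.7M, approaching the 21M cap
    print(total_supply(inflationary_subsidy, blocks))  # 42M, and still growing linearly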

So, what's the answer? Do we work as hard as we can to invent unethical
technologies in order to mitigate their effects, or do we try to suppress or
discourage the invention of new technology knowing that some less-"ethical"
society will get there first?

Or is that a false dichotomy? I'm fascinated by the possible answers.

~~~
jonathanstrange
Whoever invents it is responsible for it. You could argue that extremely
deadly nerve gas would have been invented _inevitably_, for instance, but it
is still unethical for you to help in its development. Claiming that "someone
else would have invented it anyway" is the oldest excuse in the book.

 _Do we work as hard as we can to invent unethical technologies in order to
mitigate their effects, or do we try to suppress or discourage the invention
of new technology knowing that some less-"ethical" society will get there
first?

Or is that a false dichotomy?_

This looks like a false dichotomy to me. If your argument were sound, then
e.g. attempting to limit nuclear proliferation would be pointless, since every
nation on earth would eventually develop nuclear weapons anyway. I don't think
that's true, though: national and international laws with suitable enforcement
can prevent unethical technologies.

~~~
shawn
Think of a war that shaped the world, and whose outcome is generally agreed to
be a positive one: "Good guys vs bad guys, and the good guys won."

Suppose nerve gas had been the only way for the "good guys" to win that war.
(This isn't a realistic assumption; the point is to examine ethics.)

Is it more ethical to employ the nerve gas, or to lose the war? Those being
the only two outcomes.

~~~
JohnStrangeII
Same guy as before, but from a different account. Disclaimer: I am an
ethicist, although my original AoS (area of specialization) was philosophy of
language.

First of all, there is a whole bunch of contemporary ethicists who would deny
that unrealistic scenarios can give us any ethical insight, but let's not
enter this debate.

There are good and convincing arguments against this view, but let's assume
for the sake of argument that using the nerve gas in your scenario would
be the right thing to do. That means that you have shown that there is one
hypothetical scenario in which the use of that technology could be considered
better than not using it, although its use would still be very bad and
horrific.

That's not enough to show that the technology is ethical or that its
development should be encouraged. I'd argue for the opposite. Your scenario
also does not provide any argument against my claim that the person who
develops the technology is at least indirectly responsible for its later use.
Some technologies should and maybe even need to be suppressed world-wide.

This is an important topic if you take into account the pace of technological
development. It's entirely conceivable that in the near future - let's say, in
100 years or so - just about anyone could, in theory, genetically modify
bacteria and viruses to their liking in a basement and, for example, develop
an extremely powerful biological weapon capable of wiping out 90% of mankind.
It is obvious that such a technology has to be suppressed and should probably
not be developed in this easy-to-use form.

I believe what you really want to say is that nation states should develop all
those nefarious technologies in order to control their spread, because someone
("the opponent") will invent and spread them anyway. That's indeed the
traditional rationale for MAD and the development of nerve gas, biological
weapons, and hydrogen bombs. The problem with this argument is that anybody
can use it: it appears just as sound to North Korea as to the US, and it leads
to worldwide stockpiling of dangerous technologies. So there must be something
wrong with that argument, don't you think?

~~~
eiieirurjdndjd
> That's indeed the traditional rationale for MAD and the development of nerve
> gas, biological weapons, and hydrogen bombs. The problem with this argument
> is that anybody can use it: it appears just as sound to North Korea as to
> the US, and it leads to worldwide stockpiling of dangerous technologies.

But that’s not what happened, right? I mean, it is if you stop reading history
just before the first non-proliferation treaties began being implemented. This
was almost half a century ago, though, so IMO it doesn’t make sense to stop
reading at that point.

~~~
JohnStrangeII
I agree. The solution to massive technological threats is mutual entanglement
by treaties and international laws that limit or prohibit the development of
dangerous technologies. That's my point.

------
lifeisstillgood
Many (many) years ago, I was leading business planning for Demon / Thus and,
as part of our template, introduced "Conscience Breakers" - a section (much
like the health and safety planning for school trips, I guess) that asked what
could go wrong with the products we were about to launch. It seemed a good
idea then and still does.

It got dropped pretty quickly by the higher-ups.

------
dmead
This is great, but can this really be followed by companies that have
shareholders and investors?

~~~
forapurpose
Could you go into some detail on why it couldn't be followed by them? I know
of some different arguments about why it could or couldn't, but I don't know
what you are referring to.

~~~
Nasrudith
The answer is that publicly traded companies face heavy pressure to sustain
quarterly growth indefinitely, and various "activist" investors will insist on
ousting anyone who stands in the way, even when it would be better for
long-term health not to, say, lay off experienced engineering staff in a
stable industry to inflate quarterly profits (Boeing), only for it to come
back to bite them with electrical fires in their next big plane.

------
jl2718
Most change is bad. Some change is necessary.

