
CS is a field that hasn’t encountered consequences - scribu
https://mobile.twitter.com/yonatanzunger/status/975545527973462016
======
mindslight
I love the narrative this starts off with!

But, solutions... I don't see how professional ethics could actually be
applied productively. And I say this as someone with an engineering
background.

If a company's business is building bridges, ethics are an orthogonal concern
and hence relatively straightforward to apply. But how could a civil engineer
productively apply their ethics to a company whose business is building
concentration camps? Making sure corners aren't being cut doesn't really
address the problem! If it is just one company building camps in a far off
land, then sure a review board can apply judgment and sanction their employees
from the rest of the industry. But if your domestic society shifts to where
building camps is an upstanding economic sector...

Essentially all of the funding for consumer software is based on the
assumption of surveillance, period. Low marginal costs means investors desire
home runs. Metcalfe's law means the introductory price has to be free. Success
means users end up locked in - so even if the consumer-facing front end
manages to be profitable on its own, the back end is still able to add to
profit through surveillance, or at least store everything until they figure it
out.

Even business sectors that should be outside sources of non-surveillance
development seem to prefer sitting on the sidelines. For example, it would be
pretty straightforward for banks to start offering direct public APIs for
customer access to accounts, letting users choose which software they'd like
to trust with such data. They could even, as an outside authority, put
restrictions on service providers about data retention. But instead the banks
stonewall, and so we get surveillance-service providers like Yodlee filling
the gap.
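To make the proposal concrete, here's a toy sketch of what such a direct,
read-only bank API surface could look like (all names and fields here are
invented for illustration; a real design would need scoped OAuth-style tokens
and bank-enforced retention limits):

```python
# Hypothetical read-only bank API contract: the user picks which client
# software to trust, instead of handing credentials to an aggregator.
from dataclasses import dataclass


@dataclass(frozen=True)
class Account:
    account_id: str
    balance_cents: int
    currency: str


@dataclass(frozen=True)
class Transaction:
    account_id: str
    amount_cents: int  # negative for debits
    description: str


class ReadOnlyBankAPI:
    """Read-only view over a user's accounts; no write operations exposed."""

    def __init__(self, accounts, transactions):
        self._accounts = accounts
        self._transactions = transactions

    def list_accounts(self):
        return list(self._accounts)

    def list_transactions(self, account_id):
        return [t for t in self._transactions if t.account_id == account_id]


api = ReadOnlyBankAPI(
    accounts=[Account("chk-1", 125_00, "USD")],
    transactions=[Transaction("chk-1", -4_50, "coffee")],
)
print(api.list_accounts()[0].balance_cents)  # prints 12500
```

The key design point is that the surface is read-only and per-account, so a
third-party client can be granted exactly the visibility the user chooses.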

I don't really have a solution per se. On a different day I'd go on about
end-user Free software. But that's clearly outcompeted for the reasons listed
above. When is the last time you saw a GNU TV commercial?

~~~
scribu
> But if your domestic society shifts to where building camps is an upstanding
> economic sector...

Is that the case, though? It seems to me that in the last few years the tone
has shifted in the opposite direction, where even the mainstream is starting
to question the overall utility of social networks and other surveillance-
filled software.

~~~
mindslight
Talking sure. But has any of this effectively shifted the investment focus?

Founders have not wanted to be creating surveillance-based companies _the
whole time_. The standard course is for that functionality to be added
afterwards for monetization. In order for the engineers' ethical judgment to have
mattered, they had to have prevented this change _a priori_ by designing the
software to treat the future company itself as an attacker. This is very
expensive.

------
smittywerben
Computers simply do as they are told for better or for worse.

The radar operator at Pearl Harbor misinterpreted the "awfully big flight" of
incoming planes as friendly U.S. bombers, not enemy bombers. About 55 minutes
later, the attack started.

After this attack, people were outraged at the radar technology, describing it
as "nothing more than a freak gadget". Many simply wanted the technology gone.
Ironically, the radar was working fine.

In response, the military created the receiver operating characteristic to
identify operators who misinterpret enemy planes as allied planes. Named
appropriately, I think.

[https://en.wikipedia.org/wiki/SCR-270#Use_of_SCR-270_radar_a...](https://en.wikipedia.org/wiki/SCR-270#Use_of_SCR-270_radar_at_Pearl_Harbor)

[https://en.wikipedia.org/wiki/Receiver_operating_characteris...](https://en.wikipedia.org/wiki/Receiver_operating_characteristic)
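For anyone unfamiliar with the concept, here's a toy sketch of what a ROC
computation boils down to (the data below is made up, not from the Pearl
Harbor incident): at each decision threshold, count how often hostile contacts
are correctly flagged versus how often friendly ones are falsely flagged.

```python
# Minimal ROC curve sketch: sweep a threshold over detector scores and
# record (false positive rate, true positive rate) at each setting.

def roc_points(scores, labels, thresholds):
    """Return (fpr, tpr) per threshold.

    scores: detector confidence that a contact is hostile (higher = more hostile)
    labels: 1 for an actual enemy contact, 0 for a friendly one
    """
    positives = sum(labels)
    negatives = len(labels) - positives
    points = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / negatives, tp / positives))
    return points


# Toy radar-style readings: friendly flights mostly score low, enemies high.
scores = [0.1, 0.3, 0.35, 0.8, 0.7, 0.9]
labels = [0,   0,   1,    1,   0,   1]
for fpr, tpr in roc_points(scores, labels, [0.0, 0.5, 1.0]):
    print(fpr, tpr)
```

A threshold of 0.0 flags everything (both rates 1.0), a threshold above every
score flags nothing (both rates 0.0), and the interesting operating points lie
in between, which is exactly the trade-off the military wanted to measure in
its operators.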

------
racer-v
Many computer scientists are working in the nascent field of home automation.
Surely this technology, with its myriad Internet-connected sensors and
voracious logging, has the potential to implement a perfect surveillance state
- especially combined with data mining. I would like to think the companies
building these systems are going to value user control over all else, but
somehow that seems unlikely given current models of funding.

------
incadenza
I generally agree. I think moral consequences become especially relevant in
the field of AGI.

~~~
sli
Computerphile has a series of videos on AGI that touches on this topic, using
the lens of "define a human to a computer," a seemingly simple task with a
huge slew of ramifications that come with it. Basically exploring whether or
not Asimov's Laws are actually useful or practical (they aren't, and this is a
huge reason why).

In short, the video in the series on this topic explores how edge cases become
extremely relevant in this domain, and asks whether an AGI would consider the
following to be people or sufficiently people-like:

* Dead people

* People in vegetative states

* Unborn people

* Simulated human brains

* Sufficiently intelligent animals

And so on. We can't simply list every edge case; such a list would be
literally endless, and the developer is guaranteed to miss one that the AGI
will eventually find.

They all come with implications. How can an AGI never harm a human without
also accurately predicting extremely specific details of the future? Any
action it takes _may_ cause harm or death to one or two humans 50 years later,
and it can't really know for certain. It can probably know if its actions
will affect hundreds or thousands, but not one or two.

Suddenly a whole lot of deep moral judgements have to be made, some of them
nearly impossible even for a human (e.g. the unborn human example above). The
Three Laws of Robotics unfortunately rely way too much on human intuition,
something that is extremely difficult to model in a machine.

I may be remembering incorrectly, but the whole point of that story was how
those laws fail spectacularly. That's a critical detail. But I digress.

Clearly an AI designer isn't qualified to just make those judgements on their
own, and probably doesn't want to anyway. Really no one, either living, dead,
or unborn, is qualified to be the judge and jury in this case, and a committee
isn't necessarily better.

It's a fascinating topic, to be sure, but I'll admit that it's not my area of
expertise.

~~~
WorldMaker
Asimov's Three Laws of Robotics are often remembered as being hard set rules,
but Asimov more often than not used them as the precursors to murder
mysteries. So many of the best of his Three Laws stories are locked room
murder mysteries where a robot finds some edge case that no one had yet
considered. The entire concept of the Zeroth Law is especially interesting
because it was a loophole derived by the robots themselves; its ethics are
somewhat debated in the series, but most of that debate is left as an exercise
to the reader, so readers don't often do it. But even a lot of the very edge
cases listed here are part of the meta-ethics debates across the Three Laws
stories.

Whether or not the Three Laws are a viable system for AGI governance (and I
agree it seems rather unlikely, at least in the simple forms most often
encountered in Asimov's stories; though even the stories claim that the way
the laws are written in English is not exactly how they are coded in positronic
technology, however that may be), the ethics conflicts and edge-case debates
in the stories are very fascinating reads.

It is somewhat surprising to me how many people seem to think the laws are set
in stone and don't realize how many of the stories themselves are secretly
ethics debates about edge cases. Just because some of the characters believe
the laws to be infallible doesn't mean those characters are right and the laws
are infallible. There wouldn't be so many tales about the Three Laws if there
weren't so many edge cases and exciting locked room mysteries to explore.

