
Machine Behavior Needs to Be an Academic Discipline - dnetesn
http://nautil.us/issue/58/self/machine-behavior-needs-to-be-an-academic-discipline
======
btilly
When you create a group of experts whose expertise lies in telling other
people how what they're doing is wrong, you create problems.

The experts need to justify their position by coming up with more and more
reasons that people need to listen to them. People who are actually doing the
work quickly become convinced that said experts are ignorant asses who don't
know what they are doing. This automatically sets up grounds for conflict.

I agree that the ethics of AI is important. However I expect the best work to
come from people who are experts in AI first, and only secondarily in ethics.
And I expect that the self-appointed experts in ethics won't recognize this.
(Particularly not when faced with anything that makes their position seem less
important.)

~~~
21
Your proposal has the exact opposite problem - the AI experts are incentivized
not to make too many waves about the dangers, because that would endanger the
field (and the money invested in it).

Thus we get quotes from them like "overpopulation on Mars"

~~~
JumpCrisscross
> _Thus we get quotes from them like "overpopulation on Mars"_

Context, please? :)

~~~
sgt101
The point being that AGI is at least 50 years and probably 200 years or more
away.

It is such a distant threat and yet such a focus, and all the while we sit in
our lovely apartments 40 minutes from nuclear incineration.

~~~
bllguo
Yeah, clearly Dr. Ng was addressing fears of AGI in that quote. The thing is
that machine learning can be used for nefarious purposes, intentionally or even
unintentionally, without AGI being a reality. Those are the more realistic fears.

The conflation of machine learning and AGI causes confusion yet again.

~~~
sgt101
Agree - I think your point that the use of the tool, rather than the tool
itself, should be the sole concern is really important. Clearly devices (like
nukes) that can kill thousands must be regulated tightly, but extending the
same level of control to the whole technology makes applications of great
benefit (like medical scanners using nuclear tech) potentially untenable.

------
zitterbewegung
We covered these topics in my Computer Ethics class. I took a bunch of
philosophy classes on Ethics too. You can teach a person from a book all about
Ethics. The problem is that people will make the decision to be ethical with
their gut or in the moment.

For instance, I was designing an app that would identify whether a food
contained an allergen or not. Eventually I came to the realization that my
program, if deployed, could actually hurt people, so I stopped working on the
project. I wrote a blog post about this (shameless plug) at
[https://medium.com/@zitterbewegung/making-computer-vision-sy...](https://medium.com/@zitterbewegung/making-computer-vision-systems-that-dont-kill-your-users-2c98228d9032).
The core issue was that I was giving people information on which they could
base medical decisions. This gave me a bad feeling, and a bunch of lawyers told
me as much when I referenced this post.

I eventually pivoted what I learned into this (yet another shameless plug) :
[https://steemit.com/twilio/@zitterbewegung/mms2text-let-
your...](https://steemit.com/twilio/@zitterbewegung/mms2text-let-your-
computer-look-at-your-message-pics-so-you-don-t-have-to)

which doesn't have the issue of harming people.

~~~
gowld
But not making your app killed people who didn't notice nuts in their food...

~~~
venuur
That is too strong a conclusion. People with nut allergies already have
procedures for finding out whether nuts are in food, for example asking the
restaurant or preparing food themselves.

Without knowing if his app was accurate, we cannot say whether building or not
building the app was the right decision.

~~~
zitterbewegung
The app was very inaccurate during my testing. Hiding almonds inside a dish
made of pure chocolate (a Hershey's Kiss) would be an easy counterexample.

Also, I have a nut allergy so I have a complicated set of procedures to figure
out if nuts are in the dish (that was the main motivation for the app).

~~~
gowld
Humans aren't any better at determining if a Kiss has invisible almonds. An AI
could solve that (better than a human could!) by memorizing ingredient lists
from public databases and tagging foods that have nutty variants, often ones
that people wouldn't know about.
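
Roughly, a minimal sketch of that lookup-based approach (the dish names,
`INGREDIENT_DB`, and `flag_nut_risk` are hypothetical illustrations, not a real
database or API):

    # Hypothetical sketch: instead of trying to "see" hidden almonds, look the
    # recognized dish up in an ingredient database and flag known nutty variants.
    INGREDIENT_DB = {
        # dish label -> (typical ingredients, known variants that add nuts)
        "hershey kiss": ({"chocolate", "sugar", "milk"}, {"almond", "hazelnut"}),
        "pad thai": ({"rice noodles", "egg", "tofu"}, {"peanut"}),
    }
    NUT_KEYWORDS = {"almond", "hazelnut", "peanut", "walnut", "pecan", "cashew"}

    def flag_nut_risk(dish_label: str) -> str:
        """Return a conservative warning based on lookup, not vision."""
        entry = INGREDIENT_DB.get(dish_label.lower())
        if entry is None:
            return "unknown dish: cannot rule out nuts"  # abstain, don't guess
        ingredients, variants = entry
        if ingredients & NUT_KEYWORDS:
            return "contains nuts"
        if variants & NUT_KEYWORDS:
            return "nut-containing variants exist: " + ", ".join(sorted(variants))
        return "no nut ingredients listed, but verify with the kitchen"

    print(flag_nut_risk("Hershey Kiss"))  # warns about almond/hazelnut variants
    print(flag_nut_risk("mystery stew"))  # abstains on unknown dishes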

~~~
bo1024
I think it's fair to say humans are better at reasoning about uncertainty and
risk. If the food isn't in the database, or we aren't sure if it's a match,
what does the algorithm say?

ML algorithms work on statistical performance against loss functions or error
rates. They aren't (yet) good at understanding the difference between a
mistake that causes a missed dessert and a mistake that might kill you. Maybe
they can guess correctly a higher percent of the time if shown flashcards, but
that's small consolation from the hospital bed. They also aren't good at
recognizing the limits of their own knowledge, i.e. saying "I don't know".
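
As a rough illustration of both points (all numbers and cost values below are
invented for the example): a plain argmax decision treats the two mistakes as
equally bad, whereas weighting the errors and adding an abstain option at least
encodes the asymmetry and lets the system say "I don't know."

    # p_nut is the model's estimated probability that the dish contains nuts.
    # A plain argmax decision treats both kinds of mistake as equally bad.
    def naive_decision(p_nut):
        return "contains nuts" if p_nut >= 0.5 else "safe"

    # A missed allergen is far more costly than a missed dessert, and
    # low-confidence cases should get "I don't know" rather than a guess.
    COST_FALSE_SAFE = 100.0  # saying "safe" when nuts are present
    COST_FALSE_NUTS = 1.0    # saying "contains nuts" when it's actually safe

    def cost_sensitive_decision(p_nut, abstain_band=(0.05, 0.95)):
        if abstain_band[0] < p_nut < abstain_band[1]:
            return "I don't know -- ask the kitchen"
        expected_cost_safe = p_nut * COST_FALSE_SAFE
        expected_cost_nuts = (1 - p_nut) * COST_FALSE_NUTS
        return "safe" if expected_cost_safe < expected_cost_nuts else "contains nuts"

    for p in (0.005, 0.3, 0.97):
        print(p, naive_decision(p), "|", cost_sensitive_decision(p))

The point isn't that this solves the problem, just that the asymmetry and the
"I don't know" option have to be designed in explicitly; the loss function
won't discover them on its own.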

------
Veedrac
This article seems really wishy-washy and confusing. What is actually being
proposed?

A set of standards that automated algorithms should adhere to? What kind of
standards are we talking about?

A new branch of philosophy that talks about the actions of automated
algorithms? What would be the point? How do you actually turn that into
something that progresses, and that people will act on?

More ethics classes in universities for CS students? How do you differentiate
this from what's already happening? Is it just broader?

\---

To try to steelman this a bit, I figure there are two issues at play.

The first is making sure that AI is working in the way we designed it to; it's
free of biases, it doesn't endanger people, it adheres to laws. We're doing
horrifically on this metric when it comes to long-term superhuman AI
alignment stuff, but most people are talking about next-decade issues. Those
are raw technical issues, and though we're still working on robustness and
detecting biases I really cannot see much need for external guidance outside
of the natural technical progress that the field is already heavily invested
in. These are problems we _want_ to solve already.

The second issue is use of these technologies as tools. This is where we talk
about how large companies' algorithms affect public perception, or how
automated militaries affect warfare, or how self-driving cars are litigated.
These are not AI problems, they're social problems. Yes, the technology behind
those examples looks kind'a similar behind the scenes, but these are distinct
social, legal and economic problems.

This is kind of like seeing the advent of electricity, predicting its effect
on society, and asking for people to study "ethics of electricity".

~~~
currymj
the first problem is one where CS researchers would likely benefit from more
contact with outsiders, although I agree it's a pretty lively area of research
already, at least for the shorter-term issues.

the second problem is one where social scientists could benefit from actually
understanding how these systems work. yes, they are social problems, but the
sociologists could do much better if they had a better understanding of the
details of the technology.

The authors say they want "a consolidated, scalable, and scientific approach
to the behavioral study of artificial intelligence agents in which social
scientists and computer scientists can collaborate seamlessly". So, a new
interdisciplinary category.

I get the sense they are especially concerned with the empirical study of
behavior and social impact of big ML systems in the real world. In another
post I compared this sort of thing to sociologists studying transit
infrastructure.

~~~
Veedrac
> the first problem is one where CS researchers would likely benefit from more
> contact with outsiders

Could you be more concrete about how you would like other fields to
contribute?

> the second problem is one where social scientists could benefit from
> actually understanding how these systems work [...]

> The authors say they want "a consolidated, scalable, and scientific approach
> to the behavioral study of artificial intelligence agents in which social
> scientists and computer scientists can collaborate seamlessly". So, a new
> interdisciplinary category.

I don't really agree. You don't need to know how electricity works to study
its societal effects, and the societal change from electricity-augmented
manufacturing is a completely different problem to the health and safety
regulation of indoor sockets.

The same is true for AI. You don't need to know what backpropagation is to
work on the long-term ramifications of automated warfare, and that in turn is
a largely irrelevant discussion for someone figuring out whether a company's
hiring algorithms are racially biased. There is neither a clear need for top-
down regulations to be grounded in the minutiae of the systems, nor any obvious
advantage to grouping these discussions under one umbrella.

There is a need for the disciplines already dealing with these problems to
pick up more specialised knowledge as the systems get more common, but that is
a far cry from what seems to be argued in the article.

~~~
currymj
> Could you be more concrete about how you would like other fields to
> contribute?

In the world of fair machine learning, "What are the right criteria for
fairness under [set of circumstances]?" is usually not easily answerable, and
I don't think we'll find satisfactory answers without more involvement from
non-CS researchers -- both people in economics, law, and the humanities who
are sort of generally concerned with fairness in society, and people who know
a lot about specific domains where systems are being deployed. This in
particular seems like an issue of "machine behavior" as described in the
article.
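
A toy example (all numbers invented) of why this isn't easily answerable: the
same predictions can satisfy one common criterion, demographic parity, while
violating another, equalized odds, and choosing between them is a domain and
values question rather than a purely technical one.

    import numpy as np

    # group: 0 or 1, y: true outcome, yhat: the model's decision
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    y     = np.array([1, 1, 0, 0, 1, 0, 0, 0])
    yhat  = np.array([1, 0, 1, 0, 1, 1, 0, 0])

    def selection_rate(g):          # demographic parity compares these
        return yhat[group == g].mean()

    def true_positive_rate(g):      # equalized odds compares these (TPR part)
        mask = (group == g) & (y == 1)
        return yhat[mask].mean()

    print("selection rates:", selection_rate(0), selection_rate(1))             # 0.5, 0.5
    print("true positive rates:", true_positive_rate(0), true_positive_rate(1)) # 0.5, 1.0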

With regards to transparency/explainability of models, there's a problem of
making sure the "explanation" is actually useful and intuitive to the user,
where psychologists (and HCI people who are already in CS depts) may have a
lot to contribute.

In both cases there is a little ad hoc communication and there ought to be
more.

> I don't really agree. You don't need to know how electricity works to study
> its societal effects, and the societal change from electricity-augmented
> manufacturing is a completely different problem to the health and safety
> regulation of indoor sockets.

I think electricity is not the best analogy, because it has a really subtle
theory that few people understand, but we're so familiar with its use that
many of its interesting properties are too obvious to see. It's also not at
all autonomous -- it is often fruitful to think of a specific machine learning
system as an agent, whereas this doesn't make much sense for electricity. Most
importantly, I think it's just broader and more general than machine learning
(trivially so!).

> The same is true for AI. You don't need to know what backpropagation is to
> work on the long-term ramifications of automated warfare, and that in turn
> is a largely irrelevant discussion for someone figuring out whether a
> company's hiring algorithms are racially biased.

The commonality here is you care about understanding or predicting the
empirical behavior of machine learning systems interacting with the real
world, especially with humans. ("Long-term ramifications of automated warfare"
might not qualify, but I think medium-term ramifications certainly could.)

I don't think CS researchers are trained or particularly interested in
empirical studies of human behavior, the stock market, etc., nor should they
be, so somebody else will have to help. That somebody had better know enough
about CS to be able to collaborate with actual CS researchers, though, or the
results are going to be poor.

~~~
Veedrac
> In the world of fair machine learning, "What are the right criteria for
> fairness under [set of circumstances]?" is usually not easily answerable

There is nothing particularly machine learning specific about this. If I want
to design an AI to detect bank fraud, you're right that I want to do cross-
disciplinary research in its design, but I do not understand what _AI
ethicists_ would add to that.

My disagreement is not that AI will involve itself in other fields, and in the
process we need to learn about those fields. It's with the idea that either
those people should be talking about the raw technology or that there should
be a general field about how AI specifically relates to the sum of every other
field.

> The commonality here is you care about understanding or predicting the
> empirical behavior of machine learning systems interacting with the real
> world, especially with humans.

Which is just the technical domain of AI research. Narrowing it down to the
commonalities has removed all of the interesting points we were going to
study!

I agree that when AI is added to the social systems or the stock market, we
need to involve people versed in social systems or the stock market. I still
do not see why they cannot collaborate in exactly the same way that they have
already. The only difference visible to me is that the hammers are bigger.

------
currymj
if a sociologist wanted to study the social impact of transit infrastructure,
that would be perfectly reasonable, and we wouldn't expect them to know much
going in about signaling systems, switches, or different types of rolling
stock. likely a railway engineer wouldn't be all that successful trying to do
the sociologist's job.

i think machine learning systems are different, and any researcher in a new
field of "Machine Behavior" would have to know much more about how they work
than in the railroad example. but there's no reason to suppose that work done
by people trained outside of CS departments will be nonsense.

and as the authors say, there are real blind spots where CS researchers might
just ask the wrong questions entirely.

~~~
adamsea
> if a sociologist wanted to study the social impact of transit
> infrastructure, that would be perfectly reasonable, and we wouldn't expect
> them to know much going in about signaling systems, switches, or different
> types of rolling stock.

Yes -- and a good sociologist would also learn a lot about that stuff once
they started their research, I'd imagine :). Not enough to be a railway
engineer, but presumably enough to be able to speak meaningfully &
intelligently to railway engineers. Not unlike science journalists.

------
jaddood
The problem here, I think, is that although humans can be studied without
knowing the structure of their brains (psychology, sociology, etc.), machines
of today cannot. The reason is that humans barely differ from each other
biologically (in the structure of the brain), but ANNs are very diverse and
their behavior changes depending on their "biology," so machine psychology or
machine sociology cannot be built on stable ground.
~~~
gowld
> although humans can be studied without knowing the structure of their
> brains, (psychology, sociology, etc...) machines of today cannot.

why?

Or do you mean that study of some humans is _generalizable_ to most/all
humans, whereas machines lack this consistency and generalizability?

Anyway, why would this concern prevent "feature detection" tests that apply to
all machines?

------
sgt101
"In our own work—being computer scientists ourselves—we’ve been frequently
humbled by our social and behavioral science collaborators. " this part rings
truer than any other phrase in the whole thing.

------
s17n
AI is not yet complex enough to warrant such a discipline.

Obviously, like any other tool, AI systems are part of the purview of anyone
studying how society works (and as a particularly complicated tool, deserve a
particularly extensive examination.) But at this point, studying the ethics of
a machine is like studying the ethics of a fruit fly.

------
bad_ramen_soup
"We often cannot certify that an AI agent is optimal or ethical by looking at
its source code, any more than we can certify that humans are good by scanning
their brains."

What do people make of this claim? Is that a fair comparison?

~~~
bo1024
It's fair to some extent. The human brain is vastly more complex, so I don't
love the comparison. However, with modern machine learning based on deep
neural networks, even though the source code is simple, training on data
produces a very complex network that is hard to understand and predict.

Even with simpler algorithms, you might argue that in a complex environment
they could have emergent behavior that's hard to foresee from the source code
alone. For example, if one is not careful, even simple linear regression on
some datasets can "learn" techniques like redlining -- using zip code as a
proxy for race as part of biased predictions.
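
A minimal synthetic sketch of that last point (all data below is invented): the
protected attribute is never given to the model, but an ordinary least-squares
fit still picks up the bias through the correlated zip-code feature.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    protected = rng.integers(0, 2, n)                     # never a model input
    zip_code = (protected ^ (rng.random(n) < 0.1)) * 1.0  # ~90% correlated proxy
    income = rng.normal(50, 10, n)

    # Historical outcomes are biased against the protected group.
    outcome = 0.5 * income - 20 * protected + rng.normal(0, 5, n)

    # The model sees only income and zip code -- no protected attribute.
    X = np.column_stack([np.ones(n), income, zip_code])
    coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    print("intercept, income, zip_code coefficients:", np.round(coef, 2))
    # The zip_code coefficient comes out strongly negative: the regression has
    # effectively "learned" the bias through the proxy.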

------
taneq
It sounds like the term they're looking for is "robopsychologist". I was
hoping for a mention of the great Susan Calvin.

