
Pedro Domingos on the Arms Race in Artificial Intelligence - imartin2k
http://www.spiegel.de/international/world/pedro-domingos-on-the-arms-race-in-artificial-intelligence-a-1203132.html
======
sqdbps
RE: GDPR:

"Domingos: The European Union's General Data Protection Regulation (GDPR) is
putting too much value on the factor of explainability -- meaning why an
algorithm decides this way rather than that way. Let's take the example of
cancer research, where machine learning already plays an important role. Would
I rather be diagnosed by a system that is 90 percent accurate but doesn't
explain anything, or a system that is 80 percent accurate and explains things?
I'd rather go for the 90 percent accurate system."

"Domingos: There is this notion predating the GDPR that data can only be used
for the purpose it was collected for. This sounds plausible, but if we had
been using that principle all along we would not have penicillin. We would
have no X-ray. We wouldn't have all of the scientific discoveries that came
unexpectedly. Serendipity, discovering new things in old data, is a huge
driver in progress."

Hear, hear.

~~~
throwawayjava
If I'm a patient being diagnosed by a black box, I'd rather have 90% accuracy.
If I'm a doctor or researcher trying to effectively treat a type of cancer,
I'd much rather have the explanation.

Come to think of it, 90% is pretty low for a course of totally unnecessary
chemotherapy that could've been avoided by a human doctor noticing, as humans
often do, how dumb the provided explanation was. So maybe even as a patient
this isn't an obvious choice.

More generally, I'm super amused by the idea that the Master Algorithm will
fall out of the brilliance of a few big tech companies, but only if they could
have access to my purchase history or porn viewing habits from the past
decade.

~~~
breck
I’d want both the 90% accuracy and the explanation. While some parts of these
NNs are black boxes, many parts are not. I’d want to know what data the model
was trained on, whether there were any biases in the patients tested so far,
whether there were any issues with batch effects, et cetera. Like you said, if
the question of chemo were on the line, I wouldn’t want to go with just a
black box.

That being said, at some point in the future, when these kinds of problems
with NNs have been ironed out, I think going with the NN will be the obvious
choice. The sheer complexity of cancer and the human body is too great for any
human to understand and treat optimally.

NN + doctor for now; just NN in the (relatively near) future.
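Some of that partial transparency is already mechanical today. A minimal sketch of one common inspection technique, permutation importance: shuffle one input feature at a time and measure how much accuracy drops; features whose shuffling hurts accuracy are the ones the model actually relies on. (The "black box" and data below are toy stand-ins, not any real diagnostic model.)

```python
import random

# Toy "black box": predicts 1 when feature 0 exceeds a threshold.
# In reality this would be an opaque trained model (hypothetical here).
def black_box_predict(row):
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    return sum(black_box_predict(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, n_features, seed=0):
    """Accuracy drop when each feature column is shuffled independently."""
    rng = random.Random(seed)
    base = accuracy(rows, labels)
    drops = []
    for j in range(n_features):
        shuffled_col = [r[j] for r in rows]
        rng.shuffle(shuffled_col)
        perturbed = [list(r) for r in rows]  # fresh copy per feature
        for r, v in zip(perturbed, shuffled_col):
            r[j] = v
        drops.append(base - accuracy(perturbed, labels))
    return drops

# Synthetic data: the label depends only on feature 0.
rng = random.Random(1)
rows = [[rng.random(), rng.random()] for _ in range(200)]
labels = [1 if r[0] > 0.5 else 0 for r in rows]
drops = permutation_importance(rows, labels, n_features=2)
# drops[0] is large (the model really uses feature 0); drops[1] is zero.
```

Even when the model's internals stay opaque, this kind of probing answers some of the questions above, like whether the model is leaning on a feature that reflects a sampling bias.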

------
contingencies
Pedro essentially highlights the fact that different cultures are taking
different political, academic and commercial approaches to the space.

I found some of his comments insightful: _If you think about it, democracy is
still in the 19th century. Your communication with your representatives and
ministers amounts to a few bits per year. It's ridiculous._ and _Just like
Americans believe in lawyers, the Chinese believe in engineers_. However, I
resent the needlessly antagonistic us-and-them closing paragraph, which is not
constructive. In Buddhism it is said that "right speech" unites people instead
of dividing them; the conclusion to the "scary technology" article transfers
the scariness to China and is fundamentally divisive.

Back in the real world, as a nominal German starting an AI-backed startup in
China, I also found it refreshing and healthy that he can directly criticize
Germany for its cultural conservatism in a widely read German publication. We
don't see much of that in Australia, New Zealand or China, all of which, I
would argue, consider themselves significantly less homogeneous and
conservative than they really are.

Perhaps the world is a far more nuanced place than any of its commentators can
fathom, or to quote another German, Einstein (speaking perhaps prophetically
for 20th-century humanity on the cusp of the self-styled information era):
_The more I learn, the more I realize how much I don't know_.

------
engnr567
I don't quite understand the sudden optimism around the feasibility of strong
AI. The only major advancement of the last two decades seems to be neural nets
for pattern recognition. Before deep learning, pattern recognition wasn't even
considered proper AI. The other advancements seem to be based on better
compute rather than on any revolutionary new idea.

I hope people also get to hear from researchers on the other side of the
argument (like Michael Jordan): the real risk is not super-intelligent AI
destroying the human race, but stupid AI being handed control of critical
aspects of our daily lives.

~~~
Upvoter33
Strong agreement w/ you. The Jordan article
([https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7](https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7))
is a good one to read.

On top of that, we should be having this discussion w/ or w/o AI -- letting
algorithms of any kind make critical decisions in society is fraught with
dangers large and small, and something we should do with eyes wide open and
great deliberation (something seemingly challenging in today's political
reality).

------
devoply
If you don't know how it works or why it works, then how do you know it won't
fail without warning in certain cases? How do you know the edge cases at all?
Or that it won't go psycho some of the time, potentially with disastrous
consequences? The quintessential example is the sci-fi scenario in which an AI
decides to solve problems, such as preventing certain calamities, by
eliminating humans, maybe at some point even eliminating leadership so that it
can take control.

The reason AI is so seductive to China, Russia and other authoritarian regimes
is the sense of control. You feel that you have control over these machines
and that they will do your bidding without question. Eventually this will lead
to developing consciousness within machines, because you want an intelligent
minion to carry out your orders and orchestrate everything. If you ever get to
that point, and perhaps it will take a few hundred years, then it is possible
that in some cases the machines might themselves rebel.

With the sort of AI we are building, which essentially uses neural networks
with the eventual goal of having it think intelligently, it's possible that
such an AI either gets a mind of its own or at some point goes psycho and
develops a mental illness that is impossible to predict and impossible to
control, because your command-and-control systems are hierarchical and it's
sitting on top of all of them.

------
nopinsight
The following part of his answer deserves much more discussion. (If you
disagree, please say why rather than just downvoting.)

We are allowing a very powerful system called capitalism to drive large
multinational corporations that increasingly use simplistic objective
functions to satisfy greed, and often just greed alone. (Greed has its place,
no doubt, but it should not be the overpowering force driving the world.)
Capitalism and many corporations have often been major forces for progress for
a great many people, but as they grow so powerful as to overwhelm other
counterbalancing institutions (governments, often through means behind the
scenes; nonprofits; civic organizations), more should be asked of them than
mostly profit maximization. (We may not want these corporations to determine
the veracity of information; those implementation issues need to be figured
out. The larger point is that the corporations and the algorithms should take
into account more than economic objectives.)

"There is no law detailed enough to compete with the complexity of things that
algorithms can do. What can be regulated, though, are the dangers that come
from overly crude objective functions such as Facebook's algorithms maximizing
the time you spend on their site. These can be regulated by saying: OK, you
have these business elements in your objective function because you need to
make money. But you should also have these societal goals like, for example,
the truth value of the things that are being said."

~~~
Eridrus
Ok, I'll bite.

I think the issue with the quote from Domingos and your substitution is that
regulation is a very blunt instrument.

He says he would like Facebook's algorithm to maximise not only engagement
(which isn't how these systems actually work, but OK) but also how truthful
the content is. Now you're left with the question of how to measure
truthfulness, and we don't know how to do that. So usually you come up with
some metric you do know: maybe you decide that authoritative news sources are
truthful and that posts by your friends about non-controversial things are OK,
and maybe you throw in a few more heuristics. You end up with some ugly proxy
for truth. Then how much weight do you put on it vs engagement?
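That weighting question can be made concrete with a toy ranking sketch. All item names, engagement scores, and truth-proxy values below are invented for illustration; the point is just that the chosen weight, not the proxy itself, decides what surfaces.

```python
# Hypothetical feed items: (name, engagement score, truthfulness proxy),
# both already normalized to [0, 1]. Everything here is made up.
items = [
    ("viral-rumor",   0.95, 0.10),
    ("friend-update", 0.60, 0.90),
    ("news-report",   0.40, 0.98),
]

def rank(items, truth_weight):
    """Order items by a weighted blend of engagement and the truth proxy."""
    def score(item):
        _, engagement, truth = item
        return (1 - truth_weight) * engagement + truth_weight * truth
    return [name for name, _, _ in sorted(items, key=score, reverse=True)]

# With no weight on truth, the rumor wins; weight truth heavily and it sinks.
engagement_only = rank(items, truth_weight=0.0)
truth_heavy = rank(items, truth_weight=0.8)
```

Whoever sets `truth_weight` is effectively the regulator, which is the blunt-instrument problem: the law would have to pick (or bound) a number like this, on top of trusting the ugly proxy behind it.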

The same question could be asked of how some regulation on capitalist greed
should work, though I would argue that the regulation we already have on greed
is taxes: we let companies optimize their profits, then tax those profits
(either at the company level, or at the individual level when the profits
become realized gains), and then we fund a whole host of institutions that are
the ones we want.

I would say the challenge with regulation is not deciding whether we should
have any, but figuring out what good regulation would look like. Maybe the
real regulation we need for Facebook is what we did to Microsoft (the other
network-effects behemoth): force them to be interoperable with other
platforms, so that users could migrate without having to migrate all of their
friends.

