
The Seven Deadly Sins of AI Predictions - bglusman
https://www.technologyreview.com/s/609048/the-seven-deadly-sins-of-ai-predictions
======
moxious
As someone who has been into AI since the mid-90s, I continue to be deeply
disappointed with the overall rate of progress, and that includes all of the
machine-learning and statistics-based recasting of what's meant by AI.

The big fundamental problems are all the same, and we don't appear to have
made much progress on them. We have figured out very clever ways of teaching
extremely useful new tricks to computers. I don't mean to denigrate the value
of that work; it just resembles regular programming, achieved by different means.

Whatever your definition of intelligence, there's little to no
generalizability in what's happening right now.

~~~
indubitable
A lot here depends on how you view generalization. Consider the Atari-playing
AI developed by DeepMind. It reached superhuman capability on a variety of
different games given no domain-specific knowledge. It had access to just the
visual information and its score.
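
To make the setup concrete, here's a rough sketch (mine, not DeepMind's actual
code) of the interface being described. The agent's whole view of the game is a
frame of raw pixels plus the change in score, and its only output is a joystick
action; no game-specific knowledge appears anywhere in the types:

    -- The agent sees only raw pixels and a score delta, and emits an action.
    type Pixels = [[Double]]   -- raw screen frame, nothing labelled or pre-parsed
    type Reward = Double       -- change in score since the last frame

    data Action = Noop | MoveLeft | MoveRight | Fire
      deriving (Show, Enum, Bounded)

    -- A deliberately dumb stand-in agent with that interface: it ignores the
    -- pixels and just cycles through the actions.  A DQN-style learner would
    -- differ only in what happens inside these two functions.
    newtype Agent = Agent Int

    act :: Agent -> Pixels -> Action
    act (Agent n) _ = toEnum (n `mod` 4)

    observe :: Agent -> Pixels -> Reward -> Agent
    observe (Agent n) _ _ = Agent (n + 1)

    main :: IO ()
    main = print (act (observe (Agent 0) [[0]] 1.0) [[0]])  -- prints MoveLeft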

It's natural to claim that's not generalizing, since it's in such a specific
domain and it had access to its score. But I think you have to consider that
life itself starts in exactly the same way. We're little more than the product
of a vast number of evolutionary steps, starting from very simple organisms
whose only purpose was to seek out sustenance. They had an extremely
simplified domain with limited input, thanks to having fewer senses. And they
also received a score: they were 'informed' of their sustenance level, and if
it fell below a certain point, that was game over, which they were also
informed of, at least in some manner of speaking.

I think the ultimate issue is that it's rather disappointing how unexciting it
all is. And so we constantly shift the goalposts. Not that many decades ago it
was believed that a computer being able to defeat humans at chess would signal
genuine intelligence. Of course we managed to achieve that, but the way it was
done was so unexciting and uninteresting that we shifted the goalposts.

The achievements we're regularly producing nowadays dwarf anything people from
those chess-equals-intelligence days could have even imagined. Nonetheless we
still go, _" Nah.. that's not REAL intelligence either."_ I think that, right
up to the day we genuinely create generalized intelligence, we will keep
arguing that it's not 'REAL' intelligence. And I imagine we'll be arguing that
it's not 'REAL' intelligence even afterwards, because it won't be magical or
exciting. I don't think there's ever going to be a "we've truly done it"
moment. It's just going to be clever ways of teaching increasingly
sophisticated tricks, right up to the point that a creation becomes more
effective than we are at almost any task, including assimilating and producing
new knowledge and expertise.

~~~
Cacti
Do we even have a good definition of what it means to generalize? I keep
seeing this term thrown about, and while it makes intuitive sense, and
certainly train/test/validation splits are useful, I sort of question the
whole premise. I mean, when we consider all possible problem sets, all
optimization algorithms perform equally well (or poorly), so in the most
general sense, generalization is impossible. Which means we have to restrict
our sets somehow, but to do that we have to have some measure of how they
relate, and how do you know that in advance? In a sense, what the machine is
learning is a distance metric between these sets, but the only way to know
it's working is to actually run it. To say in advance "well, this machine
generalizes this well on this class of problems" seems like an awful stretch.

So much of what we mean when we say "this model generalizes well" seems to
rest on baked-in human assumptions about the data distributions, assumptions
that may not really have any basis in reality.
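
(For reference, the formal no-free-lunch result I'm gesturing at, due to
Wolpert and Macready, says roughly that when performance is averaged over all
possible objective functions f on a finite search space, any two search
algorithms a_1 and a_2 do equally well:

    \sum_f P(d_m \mid f, m, a_1) = \sum_f P(d_m \mid f, m, a_2)

where d_m is the sequence of objective values observed after m evaluations. So
any claim of generalization has to come from restricting the class of f's
under consideration.)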

~~~
moxious
"Generalize" does need to be made more specific in order to make sense. In
current ML work, generalizability means something different from what it
typically meant in earlier AI work.

One way to look at generalizability would be like this: problem domains
typically come with sets of axioms. Generalizability is the ability of a
solution approach to work across different domains that don't have 100% axiom
overlap. The wider the difference between the domains' axiom sets, the more
difficult and impressive the generalization.

Solving two problems within the same domain, necessarily sharing a 100%
overlapping axiom set, is not generalization at all.

The reason the axioms supporting the domain matter is that they (in part)
guide which heuristics work, and how they should be applied. And that's the
sort of generalizability that is missing from current work: some solutions can
make pre-programmed choices about applying different heuristics depending on
the problem set (driverless cars are doing this now). This is the "bag of
tricks" approach. But those heuristics don't typically change in how they are
applied, or in the end they are trying to accomplish when they are applied.

~~~
KGIII
I have pondered this and have decided what my standards are. I realize I don't
have the authority to set those standards for everyone.

GAI is, to me, when a machine is able to be given a problem and then, without
prompting, decides which data to consume in order to learn how to solve that
problem. It could be told to optimize an automotive design for a 5% efficiency
increase, without a loss of safety features and while keeping the performance
the same, and then go out and figure out what data it needs to learn so that
it can solve that task. It would assemble and process that data and then come
up with the answer, which might just be that it is impossible with current
tech, along with what would be needed and how to do it.

That's rather verbose and I'm absolutely not the person who gets to define it.
But, when someone says AI, that is how I think of it. More so when they say
general AI.

~~~
yters
That's the frame problem, which is a huge issue for AI. In general it is
impossible to solve due to the no free lunch theorem.

~~~
KGIII
Thank you. Would you know where I can look for more detailed info?

~~~
yters
A couple links: [https://thebestschools.org/magazine/limits-of-modern-
ai/](https://thebestschools.org/magazine/limits-of-modern-ai/)

[https://plato.stanford.edu/entries/frame-
problem/](https://plato.stanford.edu/entries/frame-problem/)

~~~
KGIII
Thanks. Those will keep me busy for a while.

It's just curiosity, I'm not expecting to enter the field of AI research. It's
still fun to learn. Again, thanks.

------
edanm
An interesting article, but I think it misses the mark on two things.

Firstly, it dismisses the concerns people have over jobs being lost on the
grounds that AI isn't around the corner. That's a valid point of view, but
even if many jobs will be lost only in 30-40 years and not in 10-20, isn't
that something worth thinking about?

Secondly, in terms of the AI safety movement, I think he doesn't really
address the core problems that people raise. From the article:

"[ai safety pundits] ignore the fact that if we are able to eventually build
such smart devices, the world will have changed significantly by then. We will
not suddenly be surprised by the existence of such super-intelligences. They
will evolve technologically over time, and our world will come to be populated
by many other intelligences, and we will have lots of experience already."

I think this ignores two arguments:

1\. While his scenario is certainly plausible, the "AI takeoff" scenario, in
which an AI becomes super-intelligent via recursive self-improvement, is also
at least a possibility. That means it's worth thinking about, because it
negates the "safety" we'd otherwise get from other tech advancing alongside.

2\. Either way, one big concern of the AI safety crowd is that we just don't
know how much time it will take to "solve" AI safety. Given _unlimited_
computing power now, we would have _no way_ of making the AI have the same
goals as us, because we have no idea how to program a safe AI. This is
something that might take 10 more years of work in laying down mathematical
foundations, or might take 100 years. Nobody knows! That's why it's important
to get this right.

The argument from the article is "let's not worry, because tech will advance
enough by the time we have intelligent AIs". Well, maybe, but how does that
tech advance happen? By people being worried about this problem and working on
it! It's not magic. You can't just assume that the problem will solve itself.

~~~
daveguy
> Given unlimited computing power now, we would have no way of making the AI
> have the same goals as us, because we have no idea how to program a safe AI.

And a counterpoint: Given unlimited computing power now, we would have no way
of _making the AI_, because we have no idea how to _program a general AI_.

There are algorithms for global optimization, but those optimization problems
are _given a specific goal to optimize_. We don't even know what "goals" to
put in to make a general intelligence.
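
To put it concretely, here's a toy sketch (mine, purely illustrative) of what
"given a specific goal to optimize" means: even a brute-force global search
with effectively unlimited compute only runs once a human has written the
objective down. Nothing in the search procedure could invent the goal itself.

    import Data.List (minimumBy)
    import Data.Ord (comparing)

    -- The goal is specified by us; the optimizer has no say in it.
    objective :: Double -> Double
    objective x = (x - 3) ** 2 + 1

    -- Brute-force global search over a grid: a stand-in for "unlimited computing power".
    gridSearch :: (Double -> Double) -> [Double] -> Double
    gridSearch f = minimumBy (comparing f)

    main :: IO ()
    main = print (gridSearch objective [-10, -9.9 .. 10])  -- prints a value near 3.0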

"Assuming unlimited computing power" is a good thought experiment because it
lays bare the fact that we wouldn't have a clue how to create an AI even if we
had unlimited computing and we can take a closer look at what we are missing
even in that case.

Worrying about killer robots is like worrying about overpopulation on Mars
(-Ng). I agree that job replacement is a concern in the next 10 years, but
that is completely different from "making our goals align". The "making our
goals align" concern is just as distant as killer robots or overpopulation on
Mars.

You say, "you can't just assume the problem will solve itself." To which I
say, What Problem?

A theoretical problem in your thoughts is not a problem that needs to be
solved.

~~~
edanm
"And a counterpoint: Given unlimited computing power now, we would have no way
of making the AI, because we have no idea how to program a general AI."

That's not a counterpoint to anything I said. I never said that I thought we
know of a way to make AI, or that we're even close. We don't really know if
we're 10 years away, 50 years away, or 5000 years away.

My point is that for all we know, we have to invent entirely new branches of
math to deal with these kinds of questions, which could itself take 50 years.
That's why we need to get started.

Maybe our only difference of opinion is how far away we think general AI is. I
have a feeling that if we knew for a fact that AGI was 50 years away, you'd
agree with me that it's worth worrying about today.

(Especially when "worrying about it" means people researching this and working
on the math of this, something which I hardly think should be a controversial
use of humanity's resources considering that 1% of the global fashion budget
would fund AI research for the next thousand years.)

Note: While Andrew Ng's quote is very popular, iirc, he actually _does_ think
there should be some research into AI safety.

~~~
daveguy
It is definitely a counterpoint. If we have no idea how to create the basic
functions, what makes you think we have an idea of how to make those basic
functions incorporate goal alignment?

If we don't have the new branch of math... how are we supposed to bend that
branch of math to our will?

I think we all agree that AGI should share our ideal values. What now?

We don't even have a basic self-aware algorithm to work with. What do you
propose we modify and how should we modify it to get goal alignment?

We generally don't fund philosophy very highly (and maybe that is a mistake).
Right now it is a philosophical question, not a practical concern to which
resources can be applied.

EDIT: I don't think we should ignore AI safety at all. I just think our safety
concerns should match our _technology concerns_; right now those are physical
robot safety and the potential for job loss, not runaway intelligence.

~~~
edanm
Well you definitely might be right. I can't say for sure that we can do
meaningful work now, although the people doing the actual work do _think_ it
is meaningful, and I think it's worth trying.

I do think it's worth pointing out that many times, we can do interesting
maths without necessarily having technological or scientific capabilities to
which to apply it. E.g. lots of things were proven about computing before we
ever had a computer. We already know a lot about quantum computing, without
having a quantum computer. We knew how to describe the curvature of spacetime
mathematically, before we ever knew that spacetime worked like that.

I'm not saying this is for sure, but there _are_ lots of examples where we had
math before having the whole picture.

" I don't think we should ignore AI safety at all. I just think our safety
concerns should match our technology concerns right now those are physical
robot safety and job loss potential. Not runaway intelligence."

I just don't think it's an either-or situation. We can (and should!) worry
about both.

------
b1daly
The article, and comments, led me to muse along a line of thought I hadn’t
considered before.

An observation, perhaps a criticism, of machine-learning systems that can be
trained to do pattern-recognition tasks is that such a system has no
self-awareness.

It lacks even a notion of semantic understanding of the task it is engaged in.
The “meaning” of the task, if there is one, is completely abstracted away.

Such an AI, presumably, has no self-consciousness at all from which it might
be able to bootstrap the meaning of the task by analyzing contextual data.
(Why am I doing this? Do I have to do it? Is there another entity who demands
my labor? What year is it? Etc.)

This led me to an epiphany, of sorts, that what we think of as “true
intelligence” might require “self-consciousness.”

I have a tendency to think of self-consciousness as this sort of weird
emergent phenomenon, very fascinating to those of us who possess it, but not a
fundamental component of “intelligence.”

Maybe consciousness, on a practical, mechanistic level, is a critical
component of all the sophisticated information processing our brains do.

It sounds sort of obvious, but that’s the nature of epiphany:)

~~~
daveguy
Self-consciousness is almost certainly required for intelligence. So much so
that one way to categorize animal intelligence is whether or not an animal
passes the mirror "dot test": when it sees a dot on itself in a mirror, does
it recognize that the dot is, in fact, on its own body and try to see or clean
it?

Until there is a decent demonstration of generalization to the point of
self-awareness, runaway AI will be an "overpopulation on Mars" problem.

Even when awareness is demonstrated in an algorithm, the algorithm will not be
able to solve NP problems in P time.

EDIT: OK, it is extremely unlikely they will be able to solve NP problems in P
time. If they privately demonstrate P=NP, then we are screwed. However, we are
going to have all manner of dense but self-aware programs before we get a CS
super genius. We will make the first C-3PO-level intelligence well before we
have super-intelligence, and we haven't even made anything close to that yet.

------
gharien
Based on this essay, I don't think the author could accurately summarize the
concerns AI researchers actually have. Some of his arguments are irresponsibly
lazy. Take the bicentennial straw man, where he accuses researchers of a lack
of imagination, and then demonstrates an inability to imagine any of the
potential problems while making one long argument from ignorance.

~~~
ryanackley
I'm curious, is there a good article or source that contains an accurate
summarization of concerns that bona fide AI researchers have?

~~~
yazr
Try Superintelligence - Nick Bostrom (very enjoyable - with a new original
thought on every page)

OR

Pedro Domingos, The Master Algorithm (more difficult to read)

~~~
ollin
Not to be rude, but of the active AI researchers I've seen state an opinion,
almost all of them are critical of the Bostrom book, with the main critique
being (iirc) that rapid/exponential self-improvement is presented as an
inevitability when there is very little reason to think that this is the case.

Not to say that Superintelligence isn't worth reading (as you say, it's a
pretty enjoyable book), but I think it's important to point out that Bostrom's
views are not broadly accepted by the people actually writing ML/AI code.

The primary concerns I've seen from the community are

a) issues with research itself (lots of derivative/incremental/epicycle-adding
works with precious few lasting improvements)

b) issues with ethics (ML models propagating bias in their training data; ML
models being used to violate privacy/anonymity)

c) issues with public perception/presentation (any ML/AI tech today is usually
incredibly specialized, built to solve a single specific problem, but
journalists and people pitching AI startups frequently represent AI as
general-purpose magic that gains new capabilities with minimal human
intervention).

~~~
vacri
> _b) issues with ethics_

On a tangent, I've found it an interesting marker when a commentator speaks of
the Three Laws of Robotics as if they are part of a solution. We can't even
explicitly codify how those laws (ethics) should function for ourselves, let
alone encode them as restrictions in a computer system. Whenever I see those
mentioned as part of a solution, I know the commentator is really only
thinking of surface issues and 'sci-fi magic' answers.

~~~
ollin
Yup! To cite some sources:

> _Periodic reminder that most of Isaac Asimov's stories were about how the
> three laws of robotics DON'T ACTUALLY WORK._

([https://twitter.com/grok_/status/904675286230470656](https://twitter.com/grok_/status/904675286230470656)
\- MIT Research Specialist in Robot Ethics, in the superintelligence-is-in-no-way-a-serious-concern camp)

> _It must be emphasized that Asimov wrote fiction...The general consensus
> seems to be that no set of rules can ever capture every possible situation
> and that interaction of rules may lead to unforeseen circumstances and
> undetectable loopholes leading to devastating consequences for humanity_

([https://intelligence.org/files/SafetyEngineering.pdf](https://intelligence.org/files/SafetyEngineering.pdf)
\- MIRI, home of Bostrom, Yudkowsky, and other the-superintelligence-control-problem-is-super-important folks)

To the extent that there are sides here, both of them seem strongly in
agreement that the Three Laws are a narrative device and not a solution to
anything.

------
DonbunEf7
"Could Newton begin to explain how this small device did all that? Although he
invented calculus and explained both optics and gravity, he was never able to
sort out chemistry from alchemy. So I think he would be flummoxed, and unable
to come up with even the barest coherent outline of what this device was. It
would be no different to him from an embodiment of the occult—something that
was of great interest to him. It would be indistinguishable from magic. And
remember, Newton was a really smart dude."

This paragraph sounds like modernist self-back-patting, to be honest. Several
things:

* Newton lived at the same time as Boyle, the man often considered to be the founder of scientific chemistry. Newton and Boyle both contributed to the scientific method, and while Newton may have been an alchemist, at the time, the very concept of pseudoscience was still being worked out. Newton's alchemical formulae are not useless; we've replicated "The Net" [0], one of his recipes. The idea that one must "sort out chemistry from alchemy" betrays a total unawareness of the history of chemistry.

* The concept that technology and magic are different was not yet invented in the time of Newton. Newton's research was in natural philosophy [1], the field which predates modern science. He was interested in how the world works, and unlike today, he did not have a roadmap showing how the different fields of study interrelate. Clarke's saying only makes sense in the context of modern science, wherein we (think that we) have enough of an understanding of the world to be able to claim that any magic which we do not understand is merely technology which we have not yet discovered. It's a pretty bold claim, and may be right, but it is based in a (literally) non-Newtonian worldview.

* Compare and contrast Newton with a modern 5-year-old. How much science do you think that the child needs in order to comprehend a phone? (Ask any parent for the answer.)

[0]
[https://en.wikipedia.org/wiki/The_Net_(substance)](https://en.wikipedia.org/wiki/The_Net_\(substance\))

[1]
[https://en.wikipedia.org/wiki/Natural_philosophy](https://en.wikipedia.org/wiki/Natural_philosophy)

~~~
goatlover
The point of the Newton time travel story was that Newton would have been so
amazed at the iPhone's capabilities that he might have imagined it doing
things that it cannot do, such as transmute lead into gold. Newton would have
failed to understand the limitations of smart phones.

Similarly, when we imagine future AI, we are failing to understand what its
limitations might be. And thus you get notions of god-like superhuman AIs.

------
kbutler
I've always liked Clarke's statement:

"When a distinguished but elderly scientist states that something is possible,
he is almost certainly right. When he states that something is impossible, he
is very probably wrong."

But I realized a while ago that this condenses semantically to just, "The
thing is probably possible".

------
yters
Doesn't Chaitin's incompleteness theorem mean AI cannot learn? All axiomatic
systems have a fairly low threshold past which they cannot identify random
bitstrings, and all computational systems are axiomatic systems.
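
For concreteness, the result I'm invoking says roughly this: for any
consistent, effectively axiomatized formal system S that can express
statements about Kolmogorov complexity K, there is a constant L_S (depending
on S) such that

    S proves no statement of the form "K(x) > L_S" for any particular string x,

even though all but finitely many strings really do have complexity above L_S.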

~~~
ScottBurson
> all computational systems are axiomatic systems

Whatever gave you that idea?

~~~
yters
Also the Curry-Howard correspondence.

~~~
red75prime
Citation from
[https://en.wikibooks.org/wiki/Haskell/The_Curry%E2%80%93Howa...](https://en.wikibooks.org/wiki/Haskell/The_Curry%E2%80%93Howard_isomorphism)

"[...] we can prove any theorem using Haskell types because every type is
inhabited. Therefore, Haskell's type system actually corresponds to an
inconsistent logic system."

Thus Haskell programs in general don't correspond to a consistent axiomatic
system, which provides a counterexample to your statement. Rebuttal complete.
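
To spell out why every Haskell type is inhabited, here's a minimal sketch
(mine, not from the wikibook): non-termination gives a "proof" of any
proposition, so the logic Haskell's type system corresponds to is
inconsistent.

    anyProof :: p            -- claims to prove an arbitrary proposition p ...
    anyProof = anyProof      -- ... by looping forever (the bottom value)

    data Falsum              -- a type with no constructors: the false proposition

    proofOfFalsum :: Falsum  -- even falsity is "provable" this way
    proofOfFalsum = anyProof

    main :: IO ()
    main = putStrLn "the definitions above typecheck, which is the whole point"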

~~~
yters
Inconsistent systems cannot prove Kolmogorov complexity limits. What's your
point?

~~~
red75prime
Inconsistent axiomatic systems can prove anything. Thus either Haskell
programs can prove the limits (and also prove the opposite statement, since
they are inconsistent), or they aren't an axiomatic system in the sense you
mean. Both possibilities contradict your statement:

> Chaitin's incompleteness theorem mean AI cannot learn

~~~
yters
If programs are not axiomatic systems, then they cannot learn, period. Only
ones that have consistent axioms have a hope of learning, and even then it is
severely limited.

