
The Singularity Is Further Than It Appears (2015) - dkarapetyan
http://rameznaam.com/2015/05/12/the-singularity-is-further-than-it-appears/
======
leepowers
> _It’s wrong because most real-world problems don’t scale linearly. In the
> real world, the interesting problems are much much harder than that._

And this is what has been nagging me about the Singularity and its associated
predictions. Exponential growth in our problem-solving capabilities is
explosive when the problem-space is linear. But what happens when the problem-
space itself grows exponentially with each iteration? Then we're back to
linear progress.
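A back-of-the-envelope sketch (my own toy model, not from the article) makes the arithmetic concrete: if capability at step t is 2^t but tier k of the problem space costs 2^k units of compute, the number of reachable tiers is just t.

```python
import math

def reachable_tiers(steps: int) -> int:
    """Toy model: capability at step t is 2**t, while problem tier k
    costs 2**k units of compute. The number of reachable tiers is the
    largest k with 2**k <= 2**steps, i.e. just steps itself."""
    capability = 2 ** steps
    return int(math.log2(capability))
```

Exponential capability chasing an exponentially deepening problem space yields only linear cumulative progress: ten more doublings of capability clear only ten more tiers.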

Futurism, that is, thinking about the future of AI and technology, is still important.
Long-term thinking, planning, and prediction are always good. But progress
will probably be much slower than anticipated.

~~~
eli_gottlieb
>Exponential growth in our problem-solving capabilities is explosive when the
problem-space is linear. But what happens when the problem-space itself grows
exponentially with each iteration? Then we're back to linear progress.

I've always wondered: why _should_ recursive intelligence improvement be
possible? That is, if you're an agent, you're nominally searching for improved
versions of your source-code to self-improve. That's obviously going to be a
discrete, combinatorially large search space. Why should each search space
have constant or falling entropy, conditional on the agent's existing source
code and knowledge?

I don't really think "intelligence" can drive the entropy (or the compute-
time) of any given search-problem to zero _just by existing_ , so it seems
like it should need to draw some resource other than memory space and energy
from its environment. This resource is probably information: you would need
some knowledge of the world to improve yourself. Successive improvements would
then require successively better, finer-grained understandings of the world.

That makes sense, but implies that self-improvement becomes more difficult
with each round you attempt, since you've already conditioned on most of the
environmental information you can get. Entropy can't go to zero, precision
can't go to infinity.

~~~
joe_the_user
_I've always wondered: why should recursive intelligence improvement be
possible?_

Why does intelligence improvement have to be recursive improvement to the
intelligence "source code"?

Why can't it simply involve adding more hardware?

Maybe the resulting intelligence of putting a million pieces of human level
hardware next to each other would only be at the level of million human beings
working in close cooperation with perfect communication ability. But that
seems pretty powerful.

That might not be perfectly omnipotent, but it seems like it would be pretty
effective, in the fashion that humans are effective and constantly improving
themselves, slowly on average and quickly in the best instances.

~~~
eli_gottlieb
>Why can't it simply involve adding more hardware?

Because adding linear amounts of hardware won't give you linear increases in
performance. Almost no machine learning algorithms are O(n) in asymptotic
complexity. Actually, I can't think of one that _is_.
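A quick sketch of why (my own illustration; the quadratic exponent is an assumed stand-in for a typical superlinear algorithm, not a measured figure):

```python
def max_problem_size(compute: float, exponent: float = 2.0) -> float:
    """Largest problem size n affordable when the algorithm costs
    roughly n**exponent operations."""
    return compute ** (1.0 / exponent)

# With an O(n^2) algorithm, doubling the hardware buys only a
# sqrt(2) ~ 1.41x larger problem, not a 2x larger one.
gain = max_problem_size(2e12) / max_problem_size(1e12)
```

The steeper the exponent, the less each hardware doubling buys; linear hardware growth translates into sublinear capability growth.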

------
legatus
An important aspect that the author, in my opinion, doesn't touch on is
attention. Most of us humans are capable of paying attention to a _single_
task for only a short time, an hour or so. Now, take a computer. A computer,
given sufficient electricity, is capable of paying attention to such a task
until its hardware fails. Imagine if someone such as Einstein or Schroedinger
were capable of paying attention to a single task (such as unifying physical
theories) without needing food, water, waste release, sleep, or a social life.
Also, the task is poorly defined: increasing an A.I.'s intelligence isn't a
single task; it can be achieved by, for example, creating faster hardware,
increasing efficiency, or optimizing software, as well as by much higher-level
tasks.

~~~
snowwrestler
This seems to reflect a fundamental misunderstanding of cognition, which is
that only the conscious part matters. In fact we don't really understand how
the brain works on problems, but there is plenty of evidence for continuous
background processing: ever heard of someone getting an idea in the shower?
Ever had the answer to a question pop into your head when you were doing
something else?

~~~
robertk
The point stands that one can probably reduce the necessity for many daily
actions when attention and willpower are unlimited. Even if you get an idea in
the shower, you're still "wasting" time not being able to write a paper on it
immediately, instead having to take time to dry and clothe yourself first.

Whether this provides a few percentage points or an order of magnitude
advantage is a different question.

------
akuma73
What will replace Moore's law as the engine of exponential growth? The
progress in the last 30 years in computing will not be replicated in the next
30 years for the simple reason that we are hitting hard physical limits of
atomic scale and thermodynamics.

~~~
pacificmint
Right now chips are mostly two dimensional. Some of the structures might have
three dimensions, but the general layout of the chip is two dimensional.

Going into the third dimension would allow you to cram way more transistors
into a small volume of space. Once we run out of improvements along the first
two dimensions, up might be the only way to go to keep transistor growth going
for the future.

~~~
akuma73
There are thermal issues with die stacking. Where does the heat go?

This is a general problem in solid state physics. Power density is already a
huge problem.

~~~
p1esk
Getting the heat out is an engineering problem. If nature could solve it (the
brain is 3D), we will find a way too.

~~~
dkarapetyan
But we haven't yet, and saying "keep doing the thing that increases power
density" when we know that currently doesn't work doesn't really address the
main complaint.

No one knows how the brain computes and the current hardware models are very
poor approximations as the article outlines. Stacking more silicon is probably
not the way to do it.

~~~
p1esk
What does not work? Flash is already 3D. DRAM is rapidly becoming 3D. High end
FPGA chips have been 3D for a couple of years. Going vertical is the obvious
way to extend Moore's Law, and the main reason why we don't see more of it
today is that until recently it was easier to shrink transistors, so that's
what people did. Now that's changing, and people will solve engineering
problems related to 3D just like they solved engineering problems related to
transistor shrinking.

We might not know how _exactly_ the brain performs _some_ of its tasks, but we
are pretty sure neurons are arranged in 3D structures, and heat dissipation is
not a problem.

~~~
bsder
> we are pretty sure neurons are arranged in 3D structures, and heat
> dissipation is not a problem.

Actually, heat dissipation is a _HUGE_ problem for biological structures. Heat
exhaustion is a _really_ easy thing to have happen to you.

Biological organisms operate well only in a _very_ narrow temperature range.
The brain is a gigantic heat producing organ coupled to an enormous heat
dissipating organ--skin.

------
fisherjeff
> 2) There’s a huge lack of incentive

This is THE big one for me. In the classic paper-clip-maximizer-gone-wrong
example, there's no fathomable reason why generalized AI is necessary for such
a specific, mundane task. But the organizations that have the (enormous)
resources to develop such an intelligence are almost entirely focused on small
numbers of tightly scoped problems.

It's very difficult to see any marginal ROI for any organization whose end
goal is not some form of world domination.

~~~
eli_gottlieb
>It's very difficult to see any marginal ROI for any organization whose end
goal is not some form of world domination.

Of course, that just means you'll see AGI efforts from organizations whose end
goal _is_ world domination. In fact, in general, half the people I've ever
seen or heard using the term "AGI" were basically wannabe supervillains.

~~~
Eliezer
Every successful supervillain starts out as a wannabe supervillain.

~~~
eli_gottlieb
I was talking about you. And since you've now got a whole organization and
funding and other people working for you, you seem reasonably competent at it.

------
Moshe_Silnorin
Gwern's response to the complexity arguments:
[https://www.gwern.net/Complexity%20vs%20AI](https://www.gwern.net/Complexity%20vs%20AI)

~~~
dwaltrip
Great counter post.

An off-the-top-of-my-head summary of some key points:

- Don't dismiss diminishing returns. Even if it takes 10^6 times more
computations to double intelligence, we may very well build a machine powerful
enough.

- Units of computing per dollar, in contrast to Moore's law, continue to
double consistently and show no sign of slowing down.

- Computational complexity is a theoretical model that makes certain
assumptions that don't always matter in the real world, such as optimality and
worst-case performance. Approximate solutions can be far cheaper and plenty
sufficient. Average-case performance is often more important.

- Impassable barriers can sometimes be entirely avoided by solving the root
problem in a different way. Self-driving cars don't need human-level vision;
they just need a way of sensing the immediate environment, which LIDAR allows,
despite being technically inferior to the human eye.
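A toy illustration of the approximation point (my own sketch, not from gwern's post): exact TSP by brute force is O(n!) in the worst case, while a greedy nearest-neighbour heuristic is O(n^2) and often close to optimal on typical instances.

```python
import itertools
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(points, order):
    return sum(dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def exact_tsp(points):
    """Optimal tour length by brute force: O(n!) in the worst case."""
    n = len(points)
    return min(tour_length(points, (0,) + perm)
               for perm in itertools.permutations(range(1, n)))

def greedy_tsp(points):
    """Nearest-neighbour heuristic: O(n^2), no optimality guarantee,
    but cheap and often close enough in practice."""
    n = len(points)
    unvisited, order = set(range(1, n)), [0]
    while unvisited:
        here = points[order[-1]]
        nxt = min(unvisited, key=lambda j: dist(here, points[j]))
        unvisited.remove(nxt)
        order.append(nxt)
    return tour_length(points, order)
```

On a small rectangular instance the heuristic happens to find the optimum; in general it only guarantees a feasible tour, which is often all an application needs.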

Of course none of this says we will achieve strong AI or a singularity. It is
primarily a response to the type of arguments found in the linked article
above.

------
joshuak
It's curious why people try to build arguments for linear or even worse
growth. The arguments are so odd, it seems to me that they are more about the
wishes of the author than an attempt to provide a counter-argument. Perhaps
it's just a misunderstanding of the theory?

Why is this argument about AI? The idea of a technological Singularity has
nothing to do with AI. In fact the theory is pretty explicit that the details
of future technology are unknowable, and generally never predicted correctly.
The theory attempts to explain a global evolutionary model in which global
evolution continues despite the speed of localized evolutionary systems, like,
say, biological evolution. Local effects of any particular technology's growth
are not predicted, and moreover they are said to be unpredictable.

Even if the OP were successful at arguing that AI does not work like every
other technological system, that is not salient to the ETA of the Singularity.

The point is not that CPUs double in transistor density every 18 months, and
therefore Skynet. The point is that evolution was exponential, human knowledge
was exponential, electrical technology was exponential, digital technology was
exponential, biological technology was exponential, and in aggregate
exponential growth across all technologies is consistent and predictable. Even
as one technology's growth or usefulness tapers off, others supplant it.

If you arbitrarily pick something, say vacuum tubes or the printing press, and
create some sort of argument that it doesn't in and of itself experience
exponential growth, you may succeed in your argument, but you haven't said
anything.

------
jumby
Kurzweil's point is the following: "who cares". Even if it takes one-thousand
years, it's a blink in the history of humanity / life.

~~~
p1esk
Actually, Kurzweil cares. He keeps saying that according to his "laws",
Singularity will happen by 2029.

~~~
leepowers
Even more than that, Kurzweil boldly predicts that nanobots will cure
virtually all disease within ~15-25 years (in the 2030s at the earliest, but
no later than the 2040s).

I certainly hope this prediction is true. But I think it has more to do with
the fact that Kurzweil will be creeping towards the end of his life (gauged by
average U.S. life expectancy) in the next 15 years.

~~~
fapjacks
Yeah, it's the old joke about futurists: When will humans become immortal? And
every one of them gives an answer just short of their own expected demise.

~~~
Koshkin
> _humans become immortal_

The obligatory note: true immortality is unattainable in principle without
indestructibility (which is something that is hard to imagine even
theoretically). Otherwise people will continue to die, _en masse_ , from
various unnatural causes. My guess as to the half-life of a human in the best
of circumstances is just a couple of hundred years...

~~~
tim333
Just make backups.

------
samatman
Mad respect to Mez, but I was disappointed by something here. He devotes a
very thoughtful paragraph to the algorithmic complexity of intelligence
augmentation, then cites Intel achieving n^2 improvements in transistor
density in linear time as being non-transcendental.

He's correct, but the reason is obscured: Moore's Law appears to be following
a logistic curve, and is leveling out as we speak. If it wasn't, the
compounding interest of quadratic transistor increases over linear time could
well lead to a (relatively) hard takeoff at the point where a single chip
contains a human brain's worth of calculating ability. Granted, that concept
(a brain's worth of computation) is hand-wavey and poorly defined, but the
point is that if generation 20n's Intel processor has one brain's worth, then
20n+1 has two, and 20n+2 has four, and so on.
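Spelling out the compounding in that last sentence (the generation numbering follows the commenter's hypothetical "20n" shorthand, not any real Intel roadmap):

```python
def brains_per_chip(generation: int, first_brain_generation: int = 20) -> float:
    """Brain-equivalents on a single chip, assuming each chip generation
    doubles transistor count and (a big, hand-wavy assumption) that
    doubling transistors doubles 'brain-equivalents'."""
    return 2.0 ** (generation - first_brain_generation)
```

Ten generations past the one-brain threshold, a single chip would hold 1024 brain-equivalents; that is the compounding a non-leveling Moore's Law would deliver, and why a logistic flattening matters so much.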

~~~
Hondor
And all the CPUs in the world have 20 billion brains' worth. Despite the
large number of transistors, they're still not organized in a way that makes
them intelligent, no matter how many times you multiply them.

------
randallsquared
People often opine that groups of humans are more intelligent than any human
in the group. In terms of raw processing power, sure. But in any analogous way
to AGI, it would seem not: for that it would have to be the case that the
collection of humans can accomplish something in fewer human-hours than the
smartest of them could. That might be the case in carefully designed
situations, but generally it doesn't seem to be.

~~~
pbw
The essay talks about Intel. Intel the company could design its next
generation chip much much faster than its smartest member could alone.

------
armitron
He writes:

"Nothing about neuroscience, computation, or philosophy prevents it. Thinking
is an emergent property of activity in networks of matter. Minds are what
brains – just matter – do. Mind can be done in other substrates."

Yet the rest of his post is obsessed with "building" minds or AGI and tries to
extrapolate based on that premise. There is a school of thought that views AGI
as an emergent cybernetic process, "a metasystem-transition" [1]. This has
roots all the way back to the concept of "Noosphere" that comes from
Teilhard/Vernadsky [2]. Even if one does not feel aligned with such ideas, it
is intellectually dishonest to posit AGI solely as the product of directed
human engineering.

It is far more likely, in my view, that AGI will be a Black Swan event, and
that all attempts to place it on a timescale are therefore fraught with peril.

[1]
[https://en.wikipedia.org/wiki/Metasystem_transition](https://en.wikipedia.org/wiki/Metasystem_transition)

[2]
[https://en.wikipedia.org/wiki/Noosphere](https://en.wikipedia.org/wiki/Noosphere)

~~~
AnimalMuppet
There are also schools of philosophy where minds are _not_ just matter.

~~~
Filligree
None of which ever explain why computers can't do the same whatever-it-is as
flesh can.

~~~
AnimalMuppet
Because they claim that the mind is _not just flesh_.

I mean, if you want to disagree, feel free. But at least understand what the
claim is that you're disagreeing with.

~~~
Filligree
No, I got that.

But why can't computers be _not just silicon_? If natural selection can
blunder into exploiting unusual physics, why can't we do so deliberately?

~~~
AnimalMuppet
One more time: The claim is that minds are _not just physics_. Claiming that
we could use unusual physics doesn't address the issue.

The idea that the physical universe is all that exists is so deeply ingrained
that it's really hard to get people to even see that they're thinking inside a
box...

~~~
Filligree
"Physics" is our word for "The rules that the universe follows".

So what are you proposing, exactly? That it doesn't follow rules?

~~~
AnimalMuppet
That there is more that exists than the physical universe.

In short, God. If God exists, if God is someone rather than something (a "he"
rather than an "it"), and if God made humans in his image, then human
personality can be real. It can be more than just a property of the atoms that
make us up.

This may sound like a bunch of mystic woo. But I argue that it is the only
thing that explains our observations of ourselves. No matter how much our
theories say that we are just matter, that our personality is just the
impersonal plus complexity, we still live - cannot avoid living - as if we
were persons, not just machines. Why is that? I assert that our experience of
personality is _evidence_ that materialistic theories are inadequate.

------
tim333
It's a good essay. The term singularity in the AI context has always bugged me
as being ill defined. I think the interesting point will be when intelligent
machines can run things and build other intelligent machines such that if all
the humans disappeared they would keep going. That doesn't mean there needs to
be a sudden increase in a division by zero way or 'sentience' in a way that
keeps the philosophers happy, just that the robots can survive, reproduce and
evolve without us. It would be a big change in history though.

------
ilaksh
Have people had success in efforts to train grounded deep learning systems
across a variety of tasks in simulated environments? Have they had success in
transferring that learning to new tasks?

------
AndrewKemendo
The corporation example always kills me because it's a terrible metaphor.

A corporation optimizes for shareholder value, market cap or some other X
related to business/market goals. They do not optimize for global intelligence
capability. They do not benchmark the company based on quantitative capacity
to meet or beat human capabilities across the spectrum of activities.
Corporations focus on one or a handful of market specific metrics where they
meet or beat their competitors. Full stop.

You could argue that it would be in the company's interest to focus on general
corporate capabilities, in theory giving them a major advantage in the market,
but they functionally don't do that, and I would argue can't because that's
not what they are designed to do. I think the only major company that might be
doing something close is Alphabet, and even they are hamstrung by it.

What I do agree with is this part: _Lack of incentives means very little
strong AI work is happening_

Which is my primary frustration with the field. Most people don't even want to
discuss it, let alone try and specifically work on it (even if it means
working on subsets which could help lead to it).

I think there needs to be a philosophical MOVEMENT to create AGI. I think it
will take that to get there in a short horizon. I think it will happen
regardless, but without evangelical AGI proponents it's going to take a lot
longer.

~~~
eli_gottlieb
>What I do agree with is this part: Lack of incentives means very little
strong AI work is happening

Really? OpenAI is a thing. Numenta has been a lovely scam for VCs for years. I
found an "AGI" company just yesterday that actually publishes peer-reviewed
research
([http://www.maluuba.com/research/](http://www.maluuba.com/research/)).
There's another one that's kinda cranky but sent a guy to give a talk at MIT
CSAIL last October or so
([http://www.vicarious.com/](http://www.vicarious.com/)).

Seems like it's a space where you can pitch yourself as "AGI", but you have to
get your revenue from solving real-world machine learning problems. That seems
pretty appropriate to me: you have to solve someone's problem to have a
business.

~~~
AndrewKemendo
OpenAI isn't trying to create AGI. They are trying to create "Safe AI" (an
impossible oxymoron IMO) and democratize the tools _that so far we think might
lead to AGI_ so that AGI is not all in one entity's hands and is safe. Demis
is the only one so far that I have seen actually say they want to create AGI,
which is why I gave a hat tip to Alphabet in my original post.

Numenta, OpenCog et al... all of their founders want to create AGI, but they
aren't evangelical - they are focused on creating companies which can move
progress forward on NAI. Which, by the way, makes perfect sense: that's
exactly what I do, because it's impossible to fund AGI for its own sake. In
fact I NEVER talk to investors about how our goal is in the AGI space, for a
million different reasons.

 _Seems like it's a space where you can pitch yourself as "AGI", but you have
to get your revenue from solving real-world machine learning problems._

In fact it's worse. If you pitch yourself as AGI you'll get 99% of investors
to walk away immediately.

No, what I am saying we are not hearing is this:

"The purpose of humanity is to build an Artificial General Intelligence, it's
the thing that we should all dedicate our lives to because it's the offspring
of humanity and the most important thing any of us will ever contribute to."

Nobody is out there beating that drum yet, and it's a shame.

~~~
AnimalMuppet
I can't reconcile these two concepts: "We evolved due to random chance, with
no guiding hand", and "The purpose of humanity is..."

But you pretty much have to believe the first (our intelligence is just
neurons, there is no magic behind the curtain) in order to believe in AGI at
all.

~~~
AndrewKemendo
Existentialism says that because the universe is meaningless you need to
create your own meaning. That applies to humanity as well as it does an
individual.

Right now, humanity has no "goal", just as AGI currently has none: we haven't
figured out a seed goal for AGI yet. In my view we should agree that the
meaning for humanity is to create our successor: AGI. This fits into the
evolutionary progression, only it becomes self-directed.

~~~
AnimalMuppet
But by your own argument, that's just a meaning that you made up. Why should
the rest of us view it as having any validity whatsoever? Why should the human
race decide that _that_ goal is the one they should have, as opposed to, for
example, destroying all life on earth? (That's the problem with existentialism
- you can pick _anything_ as your meaning.)

~~~
AndrewKemendo
Well that's my task...to sell it so that people will buy it.

------
bryanrasmussen
hmm, I figured the Singularity was just small, and it was fusion that was far
away.

------
scottm84
The industrial-complex game that is the military dictates that in the end most
guns will point at one target.

The singularity has already picked up that flag and waved it.

------
mSparks
That bit he's bolded isn't key, is it?

The singularity isn't "AI making new AI."

The singularity is AI solving problems that we can't: "greater than human
intelligence."

Which has basically already arrived, albeit bounded such that we still have
ultimate control over what problems we direct AI to solve.

------
blueyes
He wrote this about a year before AlphaGo beat Lee Sedol... which happened 10
years before anyone expected. The singularity in the mirror could be much
closer than it appears, and everything he writes tells me he knows nothing
about AI.

This piece is full of sloppy thinking, and obsolete as well. Calling
corporations superhuman AIs doesn't clarify the problem; it introduces oranges
to a discussion of apples. And even in this irrelevant tangent, he is wrong.
As we so often see in government and the private sector, many of us can be
dumber than a few of us. Collective decision-making has pernicious emergent
properties, which means we should consider many corporations as subhuman AIs.

> The most successful and profitable AI in the world is almost certainly
> Google Search.

This, too, is false. Parts of Google ads might qualify as the most lucrative.
But other parts of Google outside search, notably DeepMind, are much more
successfully pushing AI forward. Autonomous cars and drones are two very
successful examples of tech using AI.

The fact that he even brings up Jeopardy Watson in a discussion of AI shows
that he knows little about the state of the art, which is light years ahead of
IBM's question-answer system.

Ethical issues will not prevent nation-states and corporations from continuing
to pursue the AI arms race.

And there are huge incentives to be the one to get this right. Which is why
enormous investments in AI are being made by governments and the private
sector alike. Google's DeepMind is going to more than double from 400 to 1000
people, half of whom are AI researchers. DeepMind is obviously a research
powerhouse, and that investment alone must cost hundreds of millions of
dollars beyond the acquisition price of 400M pounds.

AI advances hand in hand with hardware capacity. Distributed computing and
faster chips will continue to progress, and pull AI along with them. A
breakthrough in quantum computing will entail a huge step-wise leap in
computing power and therefore AI. So progress will be non-linear, but not in
the sense he thinks.

~~~
goatlover
Rodney Brooks and Marvin Minsky, pioneers in the fields of robotics and AI,
don't think we're anywhere close to general purpose AI. Minsky doesn't think
we've made much progress in that area in the last several decades. The things
you mention were worked on in the 60s (leaving aside Quantum Computing, which
is probably a red herring for AI).

~~~
blueyes
Marvin Minsky is dead, so you shouldn't refer to him in the present tense. He
is unable to have opinions about current events. Secondly, Minsky was
skeptical about neural nets, and he was ultimately proven wrong. Even great
minds make mistakes. In the 1960s, we did not have the confluence of big data,
much faster hardware, and certain algorithmic advances that make current deep
learning performance possible. So what you say is partially false. We had some
of the ideas in the 60s, but we were missing certain conditions necessary to
support and prove them out. Now we're not missing that, and AI progress has
greatly accelerated.

------
vonnik
This guy has no idea what he's talking about, and the original post of which
the linked article is an update was published in 2014. That's a lifetime in AI
research, which is moving very fast. Real advances are happening monthly. The
people closest to those advances in AI, at DeepMind for example, are moving
the field forward quickly and can see strong AI on the horizon. Compute will
determine how quickly we get there, but new chips and hardware are coming onto
the market that will speed this along.

~~~
dkarapetyan
You should address the actual points instead of appealing to authority and
making ad-hominem attacks.

~~~
Chronic2h
Let me help you out by responding as an _authority_ to the parent.

I worked at DeepMind as an AI researcher. I can guarantee you will not see
"AGI" or "strong AI" in your or your children's lifetimes. It's fun to
_believe_ , it gives us something to look forward to, talk about with friends,
and discuss on HN. But in reality, we are so far from artificial general
intelligence, even with an exponential curve, it will take us 100 more years.
The current deep learning era (or more aptly named, pattern recognition) will
last another 5-10 years, at best. Then another winter will come.

~~~
Eliezer
What on Earth do you think you know, and how on Earth do you think you know
it?

~~~
daveguy
Most people _actually doing AI research_ are much more conservative about
their estimates and expectations of AGI. They understand very clearly how many
of the current AI "breakthroughs" really just barely nudge the ball forward
(market speak aside). Do you use a virtual assistant -- OK Google, Alexa,
Siri, etc.? Has your experience with those assistants consistently improved,
or do they regress in annoying ways that make them seem obviously ignorant of
basic facts, previously known facts, or common sense?

~~~
vonnik
Actually, opinions about AGI are quite mixed. People like Ng are more
conservative; people like Schmidhuber are more aggressive in their
predictions; both are eminent researchers.

RE: Siri and Alexa - Industry deployments of AI are a lagging indicator of
what AI can do. AI is moving at two speeds: research and business/consumer
applications. The research is moving faster than the apps.

AlphaGo and many other fundamental papers to come out in the last two years
have done more than nudge the ball forward. They constitute significant steps,
and bundled together they are an even greater achievement.

