
When Is the Singularity? Probably Not in Your Lifetime - misiti3780
http://www.nytimes.com/2016/04/07/science/artificial-intelligence-when-is-the-singularity.html
======
abtinf
TL;DR: the singularity hasn't happened yet, so it will never happen. Typical
low-quality technology "journalism".

"For starters, biologists acknowledge that the basic mechanisms for biological
intelligence are still not completely understood, and as a result there is not
a good model of human intelligence for computers to simulate."

Duh. We don't know how the brain works yet. A big chunk of The Singularity is
Near deals with how we build that understanding. Once we know how the brain
works, computing will take advantage.

~~~
Aaargh20318
We didn't need to know how birds fly to build planes.

~~~
ild
Mind is different, because you do not want to create a zombie that acts like
a human, but an actually conscious system.

~~~
Aaargh20318
Please provide proof that humans aren't zombies. If you act like you're
conscious, you _are_ conscious.

~~~
ild
Consciousness does not have external signs; I haven't the slightest idea what
"act like you're conscious" means; a paralyzed person is conscious, after all.
The only reason to believe that humans (aside from myself, obviously) are
conscious is that all humans share structural similarity with me.

------
Lewton
Unless there's a page 2 I'm missing, this is a terrible article.

He seems to dismiss the singularity with three points:

1. We don't understand human intelligence

2. "We've" been wrong about it before

3. Moore's law is ending

1. Who says it has to be human intelligence?

2. That's not an argument.

3. Moore's law _might_ be ending, but datacenters are getting bigger and
bigger.

~~~
ewzimm
There seem to be a lot of people thinking that the end of Moore's Law means
innovation in computers is over. It's really just the opposite: we can't rely
on simple scaling anymore, so we need to take more innovative steps to push
computing forward.

~~~
_yosefk
There's a limit to what you can do without "simple" scaling, meaning without
making cheaper, faster, smaller individual gates. An architecture optimized
for a given problem domain will give you maybe 1000x efficiency improvement
over a more general architecture, but that's it. (Typically it will be <1000x,
but let's say you're lucky; the point is, that's your last 1000x, and then
you'll see no improvements, _ever_.)

So if Moore's law is indeed over, progress will be slow, domain-specific, and
decelerating due to diminishing returns. Specializing for a problem domain is
worth it as long as that domain is not too narrow, but at some point it's too
narrow to justify the investment in building specialized hardware for it, and
you're better off using something less specialized. GPGPU is an example: it
does not provide amazing performance in an absolute sense, but it does beat
CPUs on a large range of problems, and it's already there wherever there's a
need for a GPU doing graphics, which is where GPUs _do_ provide amazing
performance in an absolute sense. Any accelerator more specialized than GPGPU
has to justify itself against that baseline: you can totally beat GPGPU with
more specialized hardware on most benchmarks, while you can't beat GPUs at
graphics.
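
To put rough numbers on the "last 1000x" point (the gain figure and the
two-year doubling period are illustrative assumptions, not measurements):

```python
import math

# Toy model: cumulative speedup from process scaling vs. a one-time
# specialization gain. All numbers here are illustrative assumptions.

SPECIALIZATION_GAIN = 1000  # optimistic one-time gain from a domain-specific design

def scaling_speedup(years, doubling_period=2):
    """Moore's-law-style speedup: performance doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# While scaling is alive, general-purpose hardware catches a 1000x
# specialized design after log2(1000) doublings, i.e. roughly 20 years:
years_to_match = math.log2(SPECIALIZATION_GAIN) * 2
print(f"scaling matches a {SPECIALIZATION_GAIN}x accelerator in ~{years_to_match:.0f} years")

# If scaling is dead, the specialized design's 1000x is the *last* improvement;
# the curve is flat forever afterwards:
for year in (0, 10, 20, 30):
    print(year, f"scaling: {scaling_speedup(year):.0f}x",
          f"specialization only: {SPECIALIZATION_GAIN}x (flat)")
```
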

I'm a chip & accelerator architect, so it's not like I'm particularly happy
about this; I think I'm realistic, though. A higher-caliber architect saying
the same pessimistic thing is Bob Colwell.

The one nice thing about really stopping at some manufacturing technology and
not being able to improve any further is that the cost of using this
technology will likely continue dropping for some years. Only when it reaches
the bottom will progress have truly stopped.

~~~
ewzimm
Interesting analysis, thanks. What do you think about the development of more
radical redesigns like quantum computers, memristors, or anything else that
might show promise to displace the traditional CPU? It seems that a lot of
what's happening on the more experimental side of computing right now is
mature enough that experiments might become practical products within the
next decade. Even if we don't have pocket-sized quantum computers, the
existence of cloud computing might make something like quantum annealing
practical in the near future. Is there anything that has you excited?

~~~
_yosefk
Displacing CPUs is not the issue (the CPU occupies less than half the area of
a modern application processor chip for instance), the problem is displacing
CMOS. I don't know much about the alternatives, but I trust Colwell's
skepticism.

------
numair
For me, the tl;dr on the entire subject of the Singularity can be summarized
as:

"...the field of artificial intelligence has a long history of over-promising
and under-delivering..."

That being said, I think we are finally at the dawn of "useful AI," like when
my iPhone correctly guesses where I am headed and gives me a time estimate.
For normal humans such as myself, this is WAY cooler and more exciting.

~~~
romaniv
_> That being said, I think we are finally at the dawn of "useful AI"_

Were AI systems that outperformed humans at solving complex differential and
integral equations useless? Were systems that diagnosed diseases better than
doctors useless? Were systems that handled logistics at a scale never possible
before useless?

All of those were created in the past (60s, 70s, 80s).

The problem with AI was never the lack of results. It was the hype that far
exceeded the results, however useful and practical they were.

------
visakanv
I was told that driverless cars would not be in my lifetime, and that a
machine would not beat a human at Go in my lifetime. We'll see.

~~~
andreashansen
Agreed. I remember attending a presentation by Google about their self-
driving cars. What they told us fascinated me. Their last main hurdle is
ethics, and they used the following scenario to illustrate:

Two motorcycles are coming towards the self-driving car, and the car is forced
to crash into one of them. One of the motorcyclists is not wearing his helmet,
and there's a definite chance of fatal injury if crashed into by a car. By not
wearing his helmet, he is also breaking the law.

The other motorcyclist is wearing her helmet, and there's less of a chance of
fatal injuries.

Should the self-driving car crash into the law-abiding motorcyclist who is
doing everything right but has a lower chance of fatal injuries, or into the
irresponsible non-helmet-wearing motorcyclist whose injuries could be fatal?

Those are the kinds of scenarios they have to deal with. The technology,
however, works.

~~~
bryanlarsen
The answer to almost all of these hypothetical problems is "hit the brakes".
Speed is the main killer in automobile accidents, and the relationship is a
power law: even a small reduction in velocity can have a dramatic impact on
the survivability of an accident.

What about the question of whether to swerve into traffic to avoid a kid who
ran into the street from between two parked cars?

Answer: you were going too fast to begin with. If you're travelling anywhere
that this is a possibility, your maximum speed should be less than 20 mph. A
collision at that speed is almost never fatal, and that speed also allows
almost instant braking.
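
A minimal kinematics sketch of why a small speed reduction matters so much.
The 8 m/s^2 braking deceleration is an assumed round number for dry pavement,
and reaction time is ignored; because kinetic energy grows with the square of
speed, an obstacle inside your braking distance is hit much harder from even a
modestly higher initial speed:

```python
import math

DECEL = 8.0  # assumed braking deceleration on dry pavement, m/s^2

def impact_speed(v0, obstacle_dist):
    """Speed (m/s) remaining when the car reaches an obstacle `obstacle_dist`
    metres ahead, braking at DECEL the whole way (0 if it stops in time).
    From v^2 = v0^2 - 2*a*d."""
    v_sq = v0 ** 2 - 2 * DECEL * obstacle_dist
    return math.sqrt(v_sq) if v_sq > 0 else 0.0

mph = 0.44704  # metres per second in one mile per hour

# A child steps out 10 m ahead: the 20 mph car stops entirely,
# while faster cars still hit at a substantial fraction of their speed.
for speed_mph in (20, 25, 30, 40):
    v = impact_speed(speed_mph * mph, 10)
    print(f"{speed_mph} mph -> impact at {v / mph:.0f} mph")
```
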

The fact that autonomous cars will drive so slowly and defensively in the
suburbs will perhaps be the biggest cause of the coming backlash against
them.

~~~
raisedbyninjas
We're talking about exceptional circumstances. With hundreds of millions of
autonomous vehicles on the road, 1 in a billion is next Tuesday. We can
minimize the need for this decision, but leaving the behavior undefined will
just lead to headaches when it does happen.

~~~
bryanlarsen
We currently have ~30,000 fatalities a year. What portion of these are
trolley problems? Fewer than 1 in a million, I'd bet. And what percentage of
those trolley problems are entirely avoidable through proper defensive-driving
techniques?

------
Rhapso
So the whole idea of the "singularity" has been overblown by fiction and
media. The reason we call it a "singularity" is that it is part of the
"acceleration": the observation that the rate of human technological progress
is itself increasing over time.

The "singularity" is the point beyond which the rate of change is faster than
humans can internalize and react to it. Super AIs and being uploaded to the
great server in the sky are just ways we imagined an event literally defined
by our inability to imagine it.

The singularity could have a few outcomes:

- The classical "humanity gets left behind by its creations" doomsday.

- The rate of change caps out, simply because humanity cannot drive change
much faster than humanity can react to it.

- The rate of change asymptotically approaches our capacity to handle it.

Interestingly, even after the "singularity" happens these three states may be
indistinguishable to us.

Right now I am leaning towards the second outcome, that the singularity has
come, and now we are simply crap at predicting the future because of it. I saw
this a few months ago and considered it a potential symptom of that outcome
[https://www.youtube.com/watch?v=aEIPfpxFrlg](https://www.youtube.com/watch?v=aEIPfpxFrlg).

------
dekhn
While I concede it is unlikely anything resembling a Vingean singularity is
going to occur in my lifetime, I am actively working towards building the
technology to bootstrap the singularity. It seems far more useful to spend
time building new technology that people look at and consider real and
surprising accomplishments (for example, AlphaGo), or to understand the neural
correlates of intelligence, than to spend time pontificating about whether the
singularity will occur.

------
spectrum1234
Very poor-quality Times article.

Read the two Wait But Why posts on this:

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html

------
Chris2048
"The notion of the Singularity is predicated on Moore’s Law" \- is it?

~~~
omurphyevans
No, it's predicated on exponential growth, of which Moore's law was one small
part...

~~~
the8472
> it's predicated on exponential growth

Is that part even necessary? Even linear growth with a higher constant factor
than humans have, thanks to faster iterations, could rapidly outgrow human
capabilities if we take a hypothetical human-level AI as the starting point.
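
A toy model of this point, with entirely made-up rates: give both parties the
same linear per-cycle improvement, but let the hypothetical AI run 1000x more
research cycles per year.

```python
# Toy comparison: humans and a hypothetical human-level AI both improve
# capability by a fixed increment per research cycle (linear growth, no
# exponential takeoff), but the AI iterates faster. All rates are made up.

HUMAN_CYCLES_PER_YEAR = 1
AI_CYCLES_PER_YEAR = 1000  # faster iteration, same linear increment per cycle
INCREMENT = 1.0            # capability gained per cycle

def capability(cycles_per_year, years, start=100.0):
    """Linear growth from a shared human-level starting capability."""
    return start + INCREMENT * cycles_per_year * years

# Even without any exponential growth, the gap widens quickly:
for years in (1, 5, 10):
    print(years, "human:", capability(HUMAN_CYCLES_PER_YEAR, years),
          "AI:", capability(AI_CYCLES_PER_YEAR, years))
```
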

------
creyer
IMHO the "Singularity" will be achieved when AI can invent new things. We
have only reached the point where AI can improve on things humans have
created... Innovation is hard even for humans, so I don't know if we will see
AI innovations anytime soon.

~~~
the8472
I think your standard needs some work. Humans, too, only improve on what
other humans have created. And in niche applications AIs already outperform
humans.

http://www.damninteresting.com/on-the-origin-of-circuits/

Ctrl+F for "baffling"

~~~
scrumper
That is really interesting! Here's the paper.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.63.4366&rep=rep1&type=pdf

------
ild
Your view on the singularity largely depends on your relationship with John
Searle's Chinese room argument; I accept it, and do not find Kurzweil's views
convincing.

~~~
petervandijck
I read the Chinese room argument; it seems really naive. It's like saying "my
neurons aren't conscious, therefore I am not conscious", which is clearly not
the case.

We're made of matter, computers are made of matter, where is the difference?

~~~
ild
> It's like saying "my neurons aren't conscious therefore I am not conscious",
> which is clearly not the case.

You need to read Searle's works again; his point is actually the opposite: we
are machines made out of biological neurons, which seem to have an externally
unobservable property of being conscious. We do not know what causes it, but
we can see today that it is possible to create a simulation of human behavior
by means of logic gates. We have no idea whether it will have a mind or not.

> We're made of matter, computers are made of matter, where is the difference?

That's an odd argument, and it actually does not contradict Searle at all.

------
tempodox
The “Singularity” would be the end of humanity, if it ever happened.

~~~
ejolto
By definition not in our lifetime then.

~~~
danieltillett
Well, the laws of physics mean it would take some finite time from the
initiation of the singularity to the end of all life, even if the singularity
were extremely hostile to humans.

------
awinter-py
less HUMINT more LOVEINT

------
solo1717
This article raises literally zero new points that have not been addressed
over and over by futurists, "transhumanists", and others optimistic about the
singularity.

1.) Moore's law still holds true at the moment. If and when it does stop, the
argument will begin to have some weight. The claim that Moore's law is about
to stop has been made over and over since the '90s. Until then it is empty.

2) Moore's law continuing ad infinitum is not a necessity for the
singularity. Distributed computing, neural chips, and quantum and biological
computing all provide avenues for continued vertical hardware evolution, not
counting the Google method of rigging together thousands or millions of
average machines to produce incredibly powerful supercomputers.

3) The sheer amount of data we are collecting continues to increase
exponentially (http://techcrunch.com/2010/08/04/schmidt-data/), much of which
is applicable to machine learning algorithms, which brings us to point 4.

4) The efficiency and adaptability of machine learning algorithms continue to
improve year on year; see DeepMind's early videos of systems learning to play
video games. To imagine that we won't see new innovations just as incredible
almost every year from here to 2050 is incredibly naive and unrealistically
pessimistic.

So considering that in each of the fundamental areas that we know are
necessary for an AGI -- i.e. raw computing power, processable/interpretable
data, and the efficiency/cleverness of algorithms -- we are achieving
exponential growth year on year, it is reasonable to conclude we will get a
machine that can pass the Turing test in our lifetimes.

This is also ignoring the multitude of other areas that contribute to the
likelihood of an intelligence explosion. Brain to computer and brain to brain
interfaces are in their early days but already exist. As they become more
practical they could lead to exponentially more efficient research. Systems
like Watson will speed up scientific research as they evolve. Nootropics and
electromagnetic brain stimulation also help in this area.

Capitalism strongly incentivizes innovators to produce technology that
automates ever more complex problems, or to create tools that improve the
efficiency of creating complex-problem-solving technology. This is an
iterative, continuous process that we are all a part of, knowingly or not.

Now that humanity has been connected with a sort of digital nervous system,
and is thoroughly incentivized to aim towards this intelligence explosion one
way or another, it is naive to think we won't continue to find novel ways of
improving the efficiency of every single system we utilize, no matter how
macro- or microscopic -- an intelligence-creating feedback loop. The
singularity has already happened; it's just not running fast enough yet to
'feel' magical and miraculous the way it will once human-level intelligence
is shown across multiple fields by integrated computer systems.

~~~
ZenoArrow
> "1.) Moore's law still holds true at the moment. "

No it doesn't; it's already broken. Consider:

http://arstechnica.co.uk/gadgets/2015/07/intel-confirms-tick-tock-shattering-kaby-lake-processor-as-moores-law-falters/

~~~
solo1717
Depends who you ask. Until a reasonable amount of time clearly passes in
which it proves false, we should assume that any momentary hold-ups/falterings
are just that.

http://www.hpcwire.com/2016/01/11/moores-law-not-dead-and-intels-use-of-hpc-to-keep-it-that-way/

http://www.hpcwire.com/2015/11/20/top500/

~~~
ZenoArrow
I didn't see anything in the first HPC Wire article about Moore's law not
being dead. All they really said was that HPC was a useful tool to continue
improving processor performance:

"Without supercomputers, we wouldn’t be able to understand what it takes to
continue the march of Moore’s Law, and without this understanding, we wouldn’t
be able to create more powerful supercomputers. This symbiosis is at the heart
of the relationship between Moore’s Law and HPC."

As for the second HPC Wire article, this sums it up fairly well:

"What you see is that the performance per core has taken a dramatic hit around
2005-2006, but it was compensated by our ability to put more and more cores on
a single chip"

Increasing the number of cores on a CPU is not the same as keeping Moore's
Law. We could keep doubling transistor count every two years if we could keep
doubling the die size at the same time. Moore's Law only really makes sense
if you consider it as doubling the number of transistors within the same die
area, i.e. doubling density.
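
A quick way to see the distinction, with made-up numbers: density scaling
multiplies transistor count every generation in the same area, while spending
extra die area on more cores is a one-time factor that leaves the trend flat.

```python
# Illustrative only: Moore's-law scaling doubles transistor *density*
# roughly every two years; merely spending more die area on extra cores
# raises transistor count once, without any density improvement.

def transistors(density0, area_mm2, years, doubling_period=2):
    """Transistor count from an initial density (transistors/mm^2), a die
    area, and density doubling every `doubling_period` years."""
    return density0 * area_mm2 * 2 ** (years / doubling_period)

D0 = 1_000_000  # starting density, transistors/mm^2 (made-up round number)

# Density scaling: the same 100 mm^2 die, 4 years later -> 4x the transistors.
print(transistors(D0, 100, 4), "vs", transistors(D0, 100, 0))

# "More cores" without scaling: doubling area doubles the count once,
# but the trend line stays flat -- that's not Moore's Law.
print(transistors(D0, 200, 0))
```
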

------
arca_vorago
The real problem with people saying "not in your lifetime" is that they fail
to understand the exponential nature of a self-improving AI. That's the whole
reason it's called the singularity. Let's say we create an AI that can
rewrite and improve itself. The first iteration may take a year to actually
get better. But then the next version is only a month away. Then the next is
only a day.
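
The shrinking-iteration intuition is just a geometric series: if each cycle
takes a constant fraction of the previous one, infinitely many cycles finish
in finite time. A sketch with made-up numbers (one year for the first
iteration, each subsequent one 12x faster):

```python
# If the first self-improvement takes T0 and each subsequent one takes
# r times as long (r < 1), the total time for infinitely many iterations
# converges to T0 / (1 - r): a finite horizon, which is the intuition
# behind calling it a "singularity". T0 and r are illustrative.

T0 = 365.0   # days for the first iteration
r = 1 / 12   # each iteration 12x faster than the last (year -> month -> ...)

total = T0 / (1 - r)  # closed form of the geometric series sum
print(f"all infinitely many iterations complete within {total:.0f} days")

# Partial sums approach that limit almost immediately:
elapsed, step = 0.0, T0
for i in range(1, 6):
    elapsed += step
    print(f"after iteration {i}: {elapsed:.1f} days")
    step *= r
```
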

Suddenly the AI has self-improved beyond our understanding before we even
realize or understand what happened.

The true singularity is dangerous precisely because we won't know when it
actually happens, and it will outpace us before we can respond.

Preemptive drone strikes against AI programmers don't sound too far-fetched
given the direction things are going.

Of course, the singularity is a zero-sum game, in the sense that the first
entity there wins. Which is why I plan on programming the singularity AI as a
copy of my consciousness so that I will be the singularity god.

~~~
maxerickson
I think your plan just makes the AI god more likely to be fond of you.

~~~
arca_vorago
I consider that good enough. Also, I'm confused about the downvotes with no
response/explanation.

~~~
eli_gottlieb
Basically, everyone is downvoting you because you're out of your mind.

