
Ray Kurzweil: AI is still on course to outpace human intelligence - jelliclesfarm
https://www.grayscott.com/seriouswonder-//ray-kurzweil-ai-is-still-on-course-to-outpace-human-intelligence
======
root_axis
Computers that are capable of analyzing and understanding their environment
with a level of fidelity comparable to a human, without being preprogrammed
with information about the nature or structure of the environment, are out of
reach for the foreseeable future. I don't see any fundamental reason why such
a computer should be impossible, but there's not even a realistic roadmap
towards such a thing. That is to say, if it ever does happen, nobody alive
today can predict _when_ it will happen.

~~~
rsync
"I don't see any fundamental reason why such a computer should be impossible,
but there's not even a realistic roadmap towards such a thing. That is to say,
if it ever does happen, nobody alive today can predict when it will happen."

It's also not obvious why such a machine would not immediately self-terminate
in the absence of a hugely complex system of scaffolding to shape and filter
the _raw input of existence_.

I have not experienced mental illness myself, but my study and my
understanding lead me to be extremely skeptical of a _mind_ exposed to _raw
existence_ without filter. It appears to be a terrifying and unbearable state.

~~~
ericb
I think you are massively anthropomorphizing.

Every day I run programs that happily die alone, on their own, painlessly. All
of the "pain" we feel is an artifact of our evolution, as is having a will to
live. The only reason we animals fight so hard to stay alive is that animals
that didn't fight died off; ergo, only animals with a will to live survived
and reproduced. Those same forces don't apply to computer programs.

Even the concept of "terror." Who is going to program terror in? What benefit
would it have? Why not wire programs to be "happy" when helping us?

~~~
state_less
> Those same forces don't apply to computer programs.

Windows ME didn't last too long in the wild. Same for CPU designs with bugs or
exploits. I have to respectfully disagree on this, though I can see where
you're coming from, given an individual's agency to run what they want. I
think if you take a larger population view, you'll see the competitive
pressures on these systems.

~~~
baq
if you get CPUs to exchange their designs in quasi-sexual activities and let
them multiply and evolve on their own, maybe. right now the competitive
pressure is on their designers.

~~~
state_less
First off, that sounds pretty hot. :) Second, if there is reproductive
variation and selection, I think it's evolution. We take our old CPU design
and make a new variation; that's the reproductive-variation part. We also have
a market selecting which designs survive, via purchases or the lack thereof;
that's the selection part. It doesn't matter to me so much whether a life form
hosts it, a computer hosts it, or a human mind does.
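
The variation-plus-selection loop described above can be sketched in a few
lines. This is a toy illustration only, nothing to do with real CPU design;
the bit-string "designs", the fitness function, and all parameters are made up:

```python
import random

def mutate(design, rate=0.1):
    """Variation: copy a design with occasional bit flips."""
    return [b if random.random() > rate else 1 - b for b in design]

def evolve(population, fitness, generations=50):
    """Selection: keep the better half, refill with mutated copies."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]
        population = survivors + [mutate(d) for d in survivors]
    return max(population, key=fitness)

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
# The "market" here just prefers designs with more 1-bits.
best = evolve(pop, fitness=sum)
print(sum(best))
```

Nothing more than variation and selection is needed for the fitness of the
population to climb, which is the point being made about markets and designs.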

~~~
ZoomZoomZoom
There's your ubiquitous singularity lurking somewhere near: symbiosis between
CPU and human, advancing the CPU's evolution. Not exactly what we were promised!

------
ThomPete
I am seriously interested in understanding why a lot of people have no issue
accepting the idea of humans evolving from simple inanimate matter into
biological beings, all the way up to our current reality, but have a hard
time believing the same can be done via computers.

I would at least put it in the very-likely box that computers can learn to be
the same kind of pattern-recognizing feedback loops that we are, even without
humans understanding the brain completely, just as we became conscious and
self-aware without any "programmer".

"Computers" don't need to be like us to become intelligent, they don't
actually need to reproduce the lungs and the intestine as we have it, they are
in many ways free of those restrictions.

They might be "concerned" with very different things than we are and might
even not really care about nature to survive. All they need is energy.

~~~
rafiki6
But let's not forget evolution of the magnitude you speak of took hundreds of
millions of years. I'm not sure why humans think we can beat that record.

~~~
ThomPete
I don't think humans can beat the record, but technology? That's another
matter altogether.

Again, our own consciousness wasn't created; it emerged, which means that we
are a product of this emergence.

~~~
rafiki6
Technology at this time is nothing but an extension of human will. There is no
indication or path that it has yet left our grasp. Therefore we are the
limiting factor. Just as nature or the divine might have been the will that
allowed us to emerge.

~~~
baq
but if it ever leaves our grasp, you won't notice when it outpaces us, it's
going to be so quick.

------
peterwwillis
The more you learn about Kurzweil and what he bases his predictions on, the
more you realize he's a one-trick pony. His predictions only work on things
governed by Moore's law (advancing transistor density), which in turn depends
on a variety of things. Moore's law is expected to wind down around 2025.

Also, a lot of what he bases his claims on are unexamined junk science (like
his nutty health books, but also extending into specific technologies). Let's
not swallow everything he says just because he helped invent OCR.
[https://en.wikipedia.org/wiki/Ray_Kurzweil#Criticism](https://en.wikipedia.org/wiki/Ray_Kurzweil#Criticism)

~~~
WaltPurvis
You are much too kind. Kurzweil is a loon, full stop. The fact that he once
made brilliant contributions to computer science is quite irrelevant to the
essential craziness of his more recent delusions.

In 2005, Kurzweil published _The Singularity Is Near_ and predicted _this_
would be the state of the world in the year 2030: "Nanobot technology will
provide fully immersive, totally convincing virtual reality. Nanobots will
take up positions in close physical proximity to every interneuronal
connection coming from our senses. If we want to experience real reality, the
nanobots just stay in position (in the capillaries) and do nothing. If we want
to enter virtual reality, they suppress all of the inputs coming from our
actual senses and replace them with the signals that would be appropriate for
the virtual environment. Your brain experiences these signals as if they came
from your physical body."

That is not happening by the year 2030. It is so starkly delusional that
anyone who _seriously_ affirms a belief that it will happen probably needs
psychiatric help.

It is akin to Eric Drexler's loony visions back in the 1980s that nanobots
would cure all diseases and continually restore our bodies to perfect health.
We were supposed to all be immortal by now.

None of this is happening, probably not ever, and certainly not in the
lifetime of any human being currently living. Kurzweil is going to die,
Drexler is going to die, _everybody_ is going to die. Adopting a pseudo-
scientific religion to avoid facing mortality is kind of sad.

~~~
50656E6973
>Loon...craziness...delusions...needs psychiatric help.

I agree many of his predictions are bad, but you should calm down with the
gaslighting; it's ignorant of the history of science (the same was said of
Aristotle, Semmelweis, the Wright Brothers...) and is an impotent way of
debating, especially in the context of science.

~~~
peterwwillis
The thing is, even if someone is a genius, some of their output may have been
total quackery. See: Pythagoras, Empedocles, Tycho Brahe, Isaac Newton, Nikola
Tesla, Jack Parsons, Howard Hughes, James Watson, etc. Things that sound crazy
are a good indicator to be skeptical and verify claims.

------
pseudolus
The singularity is nigh! This trope might make for great fiction, but the
on-the-ground reality is far different. Intelligence is multi-dimensional. No
machine intelligence has yet shown an ability to match humans in multifaceted
intelligence, and the day when such intelligences can outpace humans is as far
off as it was when the singularity was first posited.

~~~
gfodor
The idea of the singularity is based upon the idea of recursive self
improvement. I'm not sure how your claims are relevant.

~~~
marricks
Recursive self-improvement in which field of intelligence? Does getting better
and better at pattern matching eventually lead to human-level intelligence?
Does improving pattern matching even accelerate our ability to improve pattern
matching?

We can solve any board game now with AlphaZero, but will that necessarily
improve how fast we develop other types of intelligence?

I used to be in Ray's camp, but my girlfriend got into computational
neuroscience, and after talking with her and her colleagues I got the
impression that very, very few people think we're close to general
intelligence.

Human intelligence is a lot of things put together. We may get better at
pieces, but we don't even have all the pieces, and when we do try to put them
together it doesn't work. Look at criticisms of Europe's Human Brain
Project[1] (another example I had seems to be outdated); some believe we
understand too little to even begin to attempt modeling the brain.

[1]
[https://en.m.wikipedia.org/wiki/Human_Brain_Project](https://en.m.wikipedia.org/wiki/Human_Brain_Project)

~~~
gfodor
I don't see why people care about human intelligence as some kind of
benchmark. It seems to me that using human intelligence as a framing is a poor
mental model for making comparisons or predictions. I see no reason to believe
AI capabilities will be modulated by human intellectual capacities. When AI
falls short of human capacities for a given set of tasks or capabilities it's
likely to fall way short, and when it isn't it's likely to be way more
capable. In any case I wish human intelligence would be dropped from the
language we use to talk about AI, it seems similar to talking about birds all
the time when discussing aviation.

~~~
wang_li
Talking about human intelligence is necessary because the fact that a computer
can perform a task better than a human can doesn't mean it's intelligent;
e.g., my iPhone is not intelligent just because Stockfish can destroy me at
chess.

Intelligence is the ability to reason abstractly. Humans can do this. It's not
clear that anything else can.

------
nyrulez
Despite the cynicism and his black-and-white predictions, I think his rhetoric
still makes a valuable contribution. It forces others to take a true account
of what intelligence is and what kind of intelligence AI is capable of in the
medium term (evidence: this thread).

For some reason, this doesn't get enough attention and we have people like
Elon and Stephen Hawking making dire predictions all over the place.

~~~
ForHackernews
I can't believe there are people genuinely afraid of a hypothetical powerful
malevolent AI, yet seemingly not that concerned by actual climate change.

~~~
lern_too_spel
Who says you can only worry about one thing?

~~~
coldtea
Anybody who understands opportunity costs...

~~~
lern_too_spel
That's reductionist to the point of absurdity. You might be able to only focus
on one thing at a time, but a day is long, and you need to worry about
multiple things in a day to merely survive. In your free time, it is possible
to worry about poverty, climate change, superintelligence, and many other
things.

The reason that rich people worry about superintelligence is that it could
bring the same uncaring devastation to the rich as climate change brings to
the poor.

~~~
coldtea
> _The reason that rich people worry about superintelligence is that it could
> bring the same uncaring devastation to the rich as climate change brings to
> the poor._

The problem with this is that I believe one is a genuine threat, the other is
a fad.

~~~
lern_too_spel
> the other is a fad.

In what way? Do you not believe that superintelligence is possible, or do you
believe that any superintelligence will automatically care about the well-
being of humans? Both beliefs seem naive to me and to many luminaries in the
field:
[https://people.eecs.berkeley.edu/~russell/research/future/](https://people.eecs.berkeley.edu/~russell/research/future/).

~~~
coldtea
> _Do you not believe that super-intelligence is possible_

I don't believe super-intelligence is possible. I don't believe we're anywhere
near modeling intelligence, and even if we did I don't believe intelligence
will "exponentially increase" given more computing power (the same way there's
a limit to speeding up barely- or non-parallelizable programs).
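
The parenthetical about barely- or non-parallelizable programs is Amdahl's
law; a quick back-of-the-envelope sketch of it (the fractions and processor
counts are my own illustrative numbers, not the commenter's):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Overall speedup when only parallel_fraction of the work scales."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# Even with a million processors, a program that is 5% serial
# can never run more than 20x faster overall.
for n in (10, 100, 1_000_000):
    print(n, round(amdahl_speedup(0.95, n), 2))
```

The analogy to intelligence: if some component doesn't scale with added
compute, throwing more compute at the rest hits a hard ceiling.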

~~~
lern_too_spel
> I don't believe super-intelligence is possible.

The fact that organizations outperform individuals at many tasks shows that
superintelligence is possible. If you can dramatically increase the
communication bandwidth of an organization through computerization, you will
trivially achieve superintelligence over organizations. Exponentially
increasing intelligence is not necessary for bad outcomes.

~~~
coldtea
> _The fact that organizations outperform individuals at many tasks shows that
> superintelligence is possible._

That's a hand-wavy example.

Besides, organizations lose out to individuals all the time where intelligence
matters -- e.g. that's why the stupidity of bureaucracy, the army, and
"design by committee" is a thing.

Also, teams of 5-10 often do better than teams of 100 or 200 (even in
programming), except of course in labor-intensive tasks (of course an army of
1000 will defeat 10 people, except if among the ten is Chuck Norris).

------
md224
I've felt for a long time that Singularitarians are looking in the wrong
place. They see accelerated technological development and assume that the
endpoint will be an artificial brain in a box. What they fail to see is that
these inventions and breakthroughs haven't been about increasing the
intelligence of a machine... they've been about increasing the intelligence
and efficiency of _human systems_.

The singularity isn't a brain in a box... it's us, the collective, a
metasystem transition that's been underway for millennia. A movement toward a
whole that transcends the parts.

~~~
goatlover
That's what seems apparent to me, so far anyway. AI is more about augmenting
human intelligence than it is about smart machines. All that impressive DL
stuff is ultimately providing more tools for humans to be more productive.

------
Scene_Cast2
Here's my shorter-term thinking. Current ML can't generalize well [0], can't
do arbitrary & conceptual thinking, and has trouble with language.

[0] by that, I mean that it does not easily pick up higher-level patterns
unless explicitly forced to (through model architecture, data & task setup,
etc.). Meaning: it has a much lower success rate outside the training-data
distribution than "flesh" intelligence does. Kind of reminds me of
[https://www.youtube.com/watch?v=PHRvF0m3yuo](https://www.youtube.com/watch?v=PHRvF0m3yuo)

~~~
jobigoud
Hmm, current ML has only been worked on for a few years; we have barely
scratched the surface. No signs of slowing down, afaict. It has wiped out
state-of-the-art algorithms that took us decades to think up. What happens
after decades of _that_?

It feels more like there are too many new directions to explore, and we don't
have enough time/creativity/insights/ideas to fully take advantage of them.

~~~
ttlei
Current ML algorithms were worked on for decades before this. They are gaining
momentum again due to big data and computing power.

~~~
jobigoud
To me "current ML" in the parent comment meant deep learning. Open a computer
vision paper from this month and the prior art section almost only contains
references from between 2014 to 2018.

Yes, the NN layer architecture might be based on ideas from the previous era,
but the way the algorithms actually solve the problem is completely different.

And that's just because it's the only way we can do it right now. When we can
apply deep learning to itself, to select better architectures and
hyperparameters, it will find strategies we didn't think about or didn't
consider worth trying.

------
ada1981
I had a chance to interview Ray and spend some time with him before he joined
Google.

The way we met was very serendipitous -- I asked a question during a movie
premiere about the nature of reality. He must have enjoyed it because later
someone from his team sought me out to invite me to a VIP after party.

His publicist at the time said this was the best interview he ever gave (It's
possible she says that to all the girls, but, I'll take it!)

[https://www.huffingtonpost.com/anthony-adams/ray-kurzweil-interview_b_921015.html](https://www.huffingtonpost.com/anthony-adams/ray-kurzweil-interview_b_921015.html)

------
throwaway13337
It's much easier to make a plane that can fly than to emulate the particular
way a bird flies.

They'll both solve the flight problem, so it doesn't really matter.

Flight is extremely comparable to AI development. When it was developed, many
companies were trying, and a lot of people said it was impossible. The problem
space is also similar: we're not sure how to get there exactly, but we may be
close.

It was, in the end, made to happen by an unlikely pair, not a large company
with a lot of investment. I believe this might be the fate here, too.

~~~
rafiki6
It matters tremendously. Think about the economics of bird flight vs. plane
flight. Why is it that flight has been around for almost 100 years now, and
yet we still aren't flying everywhere? The reality is we can always come up
with subpar ways of doing things, but there's a reason birds have evolved as
they are today that we just can't replicate ourselves. We mastered long-haul
rapid flight, but bird flight generalizes much better. Birds might be slower
over long haul, but they are much better at short and medium haul, and have
evolved to work in groups to make long haul possible. So yes, analogously, we
can create intelligence in a different way, which we are, but it likely won't
generalize as well.

~~~
zanny
If we could reduce the density of a person by about an order of magnitude, we
could start replicating bird flight on a per-person basis. Our approaches to
flight are constrained in all forms by the desire to put really heavy things
into the air.

Look no further than drones that can operate for quite a long while, move in
any dimension fairly well, and manage some pretty ridiculous speeds for not
being designed to do so all because their purpose is not to carry large
complicated amounts of weight.

------
Novashi
Brave claim: we won’t actually invent good AI until two-way brain-computer
interfaces become useful.

We need to augment human intelligence to handle systems vastly more complex
than we can do today. Specifically memory creation and recall, and information
assimilation rates.

~~~
lsc
How do you define 'useful'?

My keyboard and monitor are brain-computer interfaces. I'd argue that they are
pretty useful.

I can imagine a way to consume information that is dramatically faster than
the 500-odd words per minute I get from reading.

But for input to the computer? I mean, I'm not saying that a keyboard can't be
improved upon, but I think there are some hard limits to how quickly I can
compose my thoughts; I think that it's fairly rare that the fingers are the
limit. (Of course, making it so I think less about the fingers would be great)

What I'm saying is that there might be human processing speed issues that
limit how much value we can get out of faster I/O.

On the other hand, maybe it's like reading was for me; before I could read at
a certain speed and certain ease, I couldn't enjoy books, because I'd forget
the beginning of the paragraph before I got to the end. Like I needed to load
enough words at once to see the picture. It's totally possible that output
would be the same way; if I could somehow output the equivalent of five
hundred plus words per minute, maybe that would change my writing the same way
passing that speed changed my reading?

~~~
Novashi
I think most of the bandwidth would be computer -> brain, not the other way
around, assuming it can teach your brain to retain knowledge and show it
images/movies. The only exception would be if we could significantly alter the
perception of time, but I'm not really betting on that despite how useful it'd
be.

Even if I could only accurately sample 1-8 bits/second from the brain, that
would be world-changing. There are a lot of clever people who could stuff a
lot of utility through 8 bits/second, given that you are paired with a
powerful smartphone and a proper "UX". Clever encoding schemes and signals
could be developed, especially if you go for sequences/chords that are
executed over 1-4 seconds.
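
To put rough numbers on the chord idea (my own back-of-the-envelope, with
made-up symbol counts and timings): if a user can reliably produce one of `n`
distinct chords every `t` seconds, the information rate is log2(n)/t bits per
second.

```python
import math

def bits_per_second(n_symbols, seconds_per_symbol):
    """Information rate of one symbol drawn from n_symbols every t seconds."""
    return math.log2(n_symbols) / seconds_per_symbol

print(bits_per_second(2, 1))    # one binary choice per second -> 1.0 bit/s
print(bits_per_second(16, 2))   # 16 chords over 2 seconds     -> 2.0 bit/s
print(bits_per_second(256, 4))  # 256 chords over 4 seconds    -> 2.0 bit/s
```

So richer chord alphabets buy surprisingly little: you land in the low single
digits of bits per second either way, consistent with the 1-8 bits/s range
above.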

Plus the brain would likely just act as an index to fetch info that is then
replayed to you. I really don't need a terabyte in the form of neuron
connections when storage and computing would be ubiquitous.

~~~
lsc
can't you sample way more than 8 bits a second from my brain through the
keyboard? I mean, I don't think it'd take much work to get to 8 bits a second
using a twiddler or something with one hand.

I guess what I'm saying is that I don't understand what a brain computer
interface gets you if it's not faster than hands and monitors. (I mean, I
guess there's portability, but that doesn't seem super life changing to me;
phones are pretty portable already, and I can do 8 bits a second on those,
too.)

~~~
Novashi
Hands are wonderful for decoding thought into action, that's true, but for
example, the brain can imagine images way faster than hands can paint them. A
lot of people can be limited by their hands in gaming too. This is exchanged
for learning some kind of encoding scheme that translates thought into bits.
It's also really difficult, for example, to create 3D models on your phone
just using your hands.

By sending bits, the interface changes and you aren't really burdened with UI
navigation or UI mechanics like touch, drag, etc. Something like a phone would
take on more roles that are currently filled by desktops and laptops right
now.

But again, output isn't interesting. The low bitrate is just to illustrate
that it isn't that interesting compared to the other direction (but I still
think there could be useful products if the device is discreet and convenient
enough -- I'm definitely not wearing a full EEG headset every day).

A product like that would be a good checkpoint in scientific developments of
decoding the brain too. I think it's going to be a necessary step to develop
before we get the other direction though.

~~~
lsc
>By sending bits, the interface changes and you aren't really burdened with UI
navigation or UI mechanics like touch, drag, etc. Something like a phone would
take on more roles that are currently filled by desktops and laptops right
now.

You still need some kind of interface; like, if I'm visualizing a picture, can
the computer read it? Even if you can do it, that's going to require a lot of
bits.

My point here is that having a direct brain interface still requires some sort
of... interface, and you probably need a lot of bandwidth before the interface
becomes better than... hands, at least for people who still have control over
their hands. A low bit-rate direct brain output would be super useful for
paralyzed people.

As an example for data in the other direction, cochlear hearing aids are
absolutely amazing. But from what I understand? they are quite a bit worse
than the ears most of us were born with. They are a long way from being an
augment that people with already functional ears are likely to want.

I _do_ think it's worthwhile to come up with new input/output methods;
personally, I'd be super happy to shave my head and wear an EEG headset almost
all the time, if it gave me output that was significantly faster and with less
thought than typing. Heck, you might even get me on portability, if it's just
better than a cellphone keyboard.

I'm just saying that there's no reason that a low bit-rate direct brain
interface would be any more intuitive than fingers... I've been using my
fingers for an awful long time, and to get me to switch to something else, I'm
gonna need a better bit rate.

------
davidw
IDK, Google's voice assistant thing still can't figure out my wife's name
because it's not an English name and I pronounce it as it should be
pronounced.

~~~
bluedino
Our receptionist can't figure out any of the names of our offshore contractors
either.

~~~
slacka
Lack of ability and lack of caring are not the same.

------
jimmytucson
What if human beings were physically connected to a growing network of
perfectly organized facts and information? What if that network could even
assist with basic logical reasoning tasks? Would that be enough to say we’ve
reached the singularity?

It doesn’t seem like it now because our phones aren’t quite physically
connected to our brains, and the internet is far from organized, but I’d
venture to say we’re already pretty close. A more interesting question to me
is whether this is really anything we want.

If human machines were infinitely intelligent then they would all agree. So
you wouldn’t need more than one. Sure, the ultimate human might need 7 billion
pairs of eyes but those eyes wouldn’t need any more brains than to fuel up and
locomote. Furthermore any human-like tendency to doubt oneself or fall in love
or “rock out” would be deemed useless and therefore overridden.

Understanding that the purpose of a human is to pass on genes, and that
there’s little human or genes left in her, the ultimate human might therefore
conclude that her sustenance only disrupts the purposes of organic life forms.
Her next and final act would be to destroy herself.

This is how I see the end of humanity. Not in a violent clash between
artificial and organic intellects, not in a symphony of mushroom clouds, but
with the final flip of a switch, in cold, calculated resignation.

------
gaussdiditfirst
Check out Rodney Brooks' blog to see some predictions from somebody who
actually has at least some idea of what they're talking about:

[https://rodneybrooks.com/predictions-scorecard-2019-january-01/](https://rodneybrooks.com/predictions-scorecard-2019-january-01/)

However as the ML researcher Michael Jordan (one of the most important in the
field) has previously stated, these sort of long-term technology predictions
are just fun science fiction and there is essentially no academic rigor in
this stuff:

[https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7](https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7)

------
fitzroy
Counterpoint: "Hey Siri, lock my phone." "Playing 'Locked Out of Heaven' by
Bruno Mars..."

------
pcstl
Betting on technology evolving faster than expected is _usually_ a safe bet,
but it feels like Ray has been working on flawed premises and refuses to
revise them.

~~~
jimbokun
What, specifically, has he gotten wrong?

~~~
Slartie
This one (especially HN commentary) is a good read:
[https://news.ycombinator.com/item?id=18806315](https://news.ycombinator.com/item?id=18806315)

After reading it all, I don't really see him as being any more successful in
predicting than you and me. You get some right, you get some kind-of-in-the-
right-direction, you get some wrong, and some are just laughably wrong.

------
Koshkin
As we do not possess a clear (or, at least, generally accepted) definition
of what constitutes "true AI," it seems very likely that we will cross this
"event horizon" without realizing it; we may have done it already, for all
we know (or, rather, don't know, given that much of the progress is being
made in secrecy).

------
vkou
On pace to do what? Add numbers really fast? Play chess? Make a best guess of
which news story I want to see? (Hint: not sports) Perform automatic gear
shifting? Follow the vehicle in front of it?

Kurzweil has long been preaching that there will be a qualitative revolution
in what AI will achieve - and it has been 15 years away, for the past three
decades.

~~~
tim333
Not actually true, the always-15-years-away bit. Three decades ago (1990) his
prediction was Turing test and AGI in 2020–2050. More recently he's predicted
a passed Turing test by 2029. He may be wrong, but he doesn't keep shifting
the dates 15 years out.
([https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzweil](https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzweil))

------
simonebrunozzi
Ray Kurzweil is certainly super smart, but I find his views about the
Singularity disturbingly myopic. It's as if he has been living in a bubble for
decades and doesn't realize how the world really works.

I still have a lot of sympathy for him, as I'm sure he means well. Still,
these views distort reality for many people.

------
manigandham
Of course it will happen. We humans are an outcome that took billions of
years, and yet here we are. It might take a billion more to get sentient AI,
but as long as there is progress, the end is inevitable.

The only debate is _when_ it'll happen, but we probably need an advanced
sentient AI to help figure that out.

------
magnamerc
I suspect human intelligence is more than just computations in the brain. I
assume a lot of computation occurs 'outside' the brain, in the gut, via
hormones and protein signalling. I have doubts that you can replicate human
intelligence without replicating human biology.

~~~
jstewartmobile
Or even if it were all in the brain, _accurately_ modeling neurons and
interactions of neurons, on bajillions of cudas and threadrippers, will
probably still take much longer than it takes for the real-life system to do
its thing:

[https://newatlas.com/spinnaker-neuromorphic-supercomputer-mouse-brain-simulation/57101/](https://newatlas.com/spinnaker-neuromorphic-supercomputer-mouse-brain-simulation/57101/)

~~~
Koshkin
Modeling the human brain is just one of the directions of AI research,
possibly not even the most important one at this point. On the other hand,
the "super-intelligence" everyone is thinking about will be nothing like the
human mind.

~~~
jstewartmobile
I have trouble believing that evolution has been that inefficient. That we can
brutally outperform the human brain at 64-bit arithmetic--no surprise. But to
accomplish the same with creativity and problem solving? Not holding my
breath.

------
narrator
I think we will find limited use for general AI, since we won't be able to
properly regulate or understand the behavior of machines that are smarter than
us. AlphaZero can't explain why it plays chess the way it does except by
showing how it evaluates the board; the reasoning is buried in millions of
mathematical calculations accreted over millions of games of self-play, whose
intuition we can't understand. Of course, many strong players can't really
tell you systematically how they make their moves either. AI will be like a
dictator who makes his decisions by intuition alone.

~~~
jobigoud
An intelligent black box can still be "useful". It could help cure diseases or
get original insights in many fields for example. Even if we don't understand
how it thinks.

~~~
narrator
Sure, but these are narrow AI applications. The great and powerful Oz general
AI will be of limited utility unless there is a man behind the curtain.

~~~
jobigoud
If narrow AI can be useful despite being a black box, AGI can also be useful
even if it's a black box.

All the human scientists in the world are black boxes. An AGI could be another
researcher, just maybe more productive.

We don't need to understand how it thinks to understand the paper it writes.

------
jstewartmobile
Counterpoint:
[https://engineering.purdue.edu/kak/RobotsWillNeverHaveSex.pdf](https://engineering.purdue.edu/kak/RobotsWillNeverHaveSex.pdf)

~~~
jobigoud
Gave up at page five of "this is what this presentation is about", can you
summarize his main argument on why robots will never have sex?

~~~
jpindar
It's clickbait. The last page is:

>But What About the Title of this Presentation .... While obviously a cheap
hook, it is nevertheless intended to convey the possibility that it is our
emotions, our passions, our innate desires — all ingredients of our sexuality
— that are the defining elements of our consciousness and, through
consciousness, our intelligence. The End

~~~
tim333
Thanks for the time saver. Incidentally, I note that sex robots are
apparently a thing: [https://www.thesun.co.uk/fabulous/8204874/sex-robots-
machine...](https://www.thesun.co.uk/fabulous/8204874/sex-robots-machines-
alexa-relationships-investigation/)

------
AnimalMuppet
Ray Kurzweil is still on pace to die before the Singularity arrives.

~~~
Mediterraneo10
I always felt that Kurzweil's timelines were too optimistic, not only because
technology may not evolve so fast, but also because they fail to account for
the delays that government regulation imposes. So many of the vaunted leaps in
medical technology, for example, are going to require years and years of
testing before they get approval for use. Other whizbang technology of the
future might be prohibited from the consumer market because governments see it
as too dangerous.

~~~
xiphias2
This may be the case for many technologies, but clear wins that save lives get
fast-tracked. It just happened to CTX001. Also, older researchers are doing
self-testing all the time, as they know that they don't have the time to go
through regulations.

[https://www.investors.com/news/technology/crispr-vertex-
gene...](https://www.investors.com/news/technology/crispr-vertex-gene-editing-
fast-track/)

------
tim333
I think Moravec's arguments on the same subject, "When will computer hardware
match the human brain?", are a little less cranky than Kurzweil's, and his
prediction from 1998, "it is predicted that the required hardware will be
available in cheap machines in the 2020s", seems to be panning out.
[https://jetpress.org/volume1/moravec.htm](https://jetpress.org/volume1/moravec.htm)

------
ForHackernews
The Singularity is just millennialism [0] for secular technologists. We still
have a pretty paltry understanding of human intelligence. It seems unlikely to
me that more and more iterations of AlphaGo will spontaneously produce strong
general AI. My bet is no AGI within my lifetime.

[0]
[https://en.wikipedia.org/wiki/Millennialism](https://en.wikipedia.org/wiki/Millennialism)

~~~
iheartpotatoes
AlphaZero is a generalized AlphaGo, so they are headed in the right
direction!

AlphaGo -> win this boardgame

AlphaZero -> win any boardgame (well, three right now)

AlphaMinus1 -> win any game

AlphaMinus2 -> win anything

AlphaMinus3 -> win winning. so much winning.

But you get my drift, I'm extrapolating hugely off one abstraction step.

~~~
ForHackernews
Yeah maybe. How do you define "win" in terms of general intelligence? Outsmart
all top human experts at their field?

~~~
wetpaws
Whatever the reward function dictates would be a win :)

~~~
ForHackernews
Right. So we've just successfully moved the goalposts to "write a reward
function that objectively evaluates _g_".

------
tompetry
The brain and "mind" are insanely complex. There is so much we still don't
understand, so of course there is much we cannot replicate. But is it
possible? If there is an incredible breakthrough in understanding, then yes,
of course. Personally I see little value in people saying it can or can't be
done (of course it _can_!); it's just a matter of if and when. We shall see...

------
simplecomplex
Define AI. Define intelligence. Kurzweil's predictions are nonscientific and
unfalsifiable. AGI is unscientific.

He really likes making predictions:
[https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzwe...](https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzweil)

~~~
jimbokun
I don't think this one happened:

2019: "The computational capacity of a $4,000 computing device (in 1999
dollars) is approximately equal to the computational capability of the human
brain (20 quadrillion calculations per second)."

~~~
zozbot123
$4,000 in 1999 would be about $6,000 today. A midrange Deep Learning
workstation is in the ballpark, and while it can't "match the computational
capacity of the human brain", it can do plenty of nifty things - including
beat the current Go world champion in a match.

~~~
slacka
No, not even close. 20 quadrillion calculations per second = 20 petaFLOPS. An
Nvidia DGX-2 is _only_ 2 petaFLOPS, and that "AI" supercomputer costs
$399,000. A far cry from $6,000.

You are just hand-waving to try to defend a definitively failed prediction.
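To make the gap concrete, here is a quick back-of-the-envelope calculation
using the figures quoted above (list price and peak "AI" petaFLOPS are rough,
so treat this as order-of-magnitude only):

```python
# Rough check of the prediction against 2019 hardware, using the
# figures quoted above (DGX-2 list price and peak "AI" petaFLOPS).
target_pflops = 20            # Kurzweil's estimate of the human brain
budget = 6_000                # $4,000 in 1999 dollars, inflation-adjusted

dgx2_pflops = 2               # Nvidia DGX-2 peak
dgx2_price = 399_000          # USD list price

cost_per_pflop = dgx2_price / dgx2_pflops          # $199,500 per petaFLOPS
cost_for_target = cost_per_pflop * target_pflops   # $3,990,000

print(f"20 petaFLOPS at DGX-2 rates: ${cost_for_target:,.0f}")
print(f"Over budget by a factor of:  {cost_for_target / budget:.0f}")
```

At those rates the predicted $6,000 machine misses by a factor of several
hundred, which is the point being made.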

------
danielam
“Kurzweil’s Phantasms”:
[https://www.firstthings.com/article/2013/04/kurzweils-
phanta...](https://www.firstthings.com/article/2013/04/kurzweils-phantasms)

------
aj7
I don’t find this any different than a steam shovel outpacing a human digger.
It’s still necessary to prevent both from destroying things and harming
people.

------
JustSomeNobody
Yawn. Of course he has to say this; he makes his living saying stuff like
this. He is, after all, a "futurist". It just seems rather trite at this
point.

------
yters
AI is mechanical turks all the way down and always will be.

~~~
tux1968
Then humans are as well, yes? What exactly is the magic in human cognition
that is impossible to replicate in silicon?

~~~
imtringued
The brain is denser, more efficient, and three-dimensional. Compared to the
current limitations of silicon, the combination of those three aspects is
downright magical.

------
chrisco255
Ray Kurzweil's brain has already been backed up in the Google Cloud Platform.
Everyone claiming he's running out of time is just wrong!

------
zackmorris
The most important part of the talk was when Kurzweil mentioned running
partially-trained AIs in simulation to generate more training data than exists
in the real world. That was the first clear description of imagination that
I've seen, although others have hinted at it: for example, the running stick
figures (trained, I think, by Google's DeepMind) that ran an obstacle course
in simulation several times before they actually did it. I might be
remembering that wrong, but there have been a few similar examples.

Anyway, the reason why the imagination part is important is that nobody today
is talking about parallelization when it comes to AI. A few "rules" of AI
computing and their limits:

* Processing power doubles every year or two for the same cost -> processing power will someday be approximately infinite (for certain classes of computation)

* The search space for even the simplest problems is effectively infinite if we don't know the hidden models -> if we know all the hidden models, then large search spaces become tractable

* The sequential portion of intelligent decision making is less complex than the parallel portion (computers have been beating us at all sequential operations since perhaps the mid 60s or 70s) -> when the parallel portions are solved, then computers will beat us at all intelligent decision making

That last point is really important and exciting, because most of us simply
aren't accustomed to thinking in terms of solving problems in parallel. We're
thinking "how do we train a self-driving AI to recognize pedestrians and
swerve to avoid them" instead of "how do we optimize the solution to the
series of equations to drive to the store and back without hitting anything,
in a single pass".

AIs will soon be looking forward from their current model of the world,
running some number of simulations (potentially billions) and choosing the
best outcomes, then feeding back those solutions to improve their hidden
models of the world. All in parallel, with virtually unlimited computing
power. At that point we'll be training AIs like children and will start to see
emergent behavior that mimics life.
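One way to read that loop is as Monte Carlo rollout planning: simulate many
futures from the current model, score them, and keep the first action of the
best-scoring ones. A toy sketch follows; the environment, action space, and
reward here are hypothetical stand-ins of my own, not anything Kurzweil
described:

```python
import random

def rollout(state, step, reward, horizon):
    """Simulate one random trajectory from `state`; return its total reward."""
    total = 0.0
    for _ in range(horizon):
        state = step(state, random.choice([-1, 0, 1]))  # toy action space
        total += reward(state)
    return total

def plan(state, step, reward, n_sims=500, horizon=20):
    """Choose the first action whose random rollouts score best on average."""
    best_action, best_score = None, float("-inf")
    for action in (-1, 0, 1):
        after = step(state, action)
        avg = sum(rollout(after, step, reward, horizon)
                  for _ in range(n_sims)) / n_sims
        if avg > best_score:
            best_action, best_score = action, avg
    return best_action

# Toy world: state is a position on a line, reward favors staying near zero.
step = lambda s, a: s + a
reward = lambda s: -abs(s)
print(plan(5, step, reward))   # should pick -1: drift back toward zero
```

Feeding the scored trajectories back into the model (rather than just picking
an action) is the extra step the comment describes.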

Personally, I don't see how this can possibly be any more than 10 years away,
20 at most (even for rank amateurs like me), given the number of people
tinkering on these evolutionary patterns, and the amount of open source
research being generated.

It kind of haunts me actually. Like, we just started investing in our 401k at
work, but I don't have the heart to tell everyone that the singularity will
probably arrive before their investments mature. Like, how does one have a
child when it's looking like the 20th century reality we're living in isn't
going to be around much longer? Do we suffer through the daily grind, the 40
hour weeks of converting our time to money, when AGI will overshoot that
within weeks of surpassing our definition of sentience? It literally might
make more sense to become a beach bum, or travel the world, go to Amsterdam
and spend a few years in a drug-induced haze.

Or is the correct answer to keep going through the motions of running the rat
race, knowing that it's all a waste of time but constructing some kind of
inner monologue to distract us from the depression induced by the real world's
unfolding existential nightmare? Is there a way to un-take the red pill and go
back to the blue pill?

I guess I digress, but this is literally all I think about anymore. The
futility of work/culture/technology in a world of ever-increasing human
subterfuge. Why can't we admit that the dystopia is playing out before our
eyes and begin to escape the yoke?

I guess the one thing that keeps me going is that we might be able to break
down a single day into its component parts and automate away anything that's
beneath human potential so that we can do the stuff we'd like to do (like
nothing). The Roomba can vacuum (done). The self-driving Uber can take us to
work and back (almost done). The rooftop solar panels can pay our electric
bill (effectively done). And so on, until there is no minutia left.

The last step will be to spin up a partial (of oneself) to go to work and earn
the paycheck, or literally grow the food to feed us so we no longer have any
external dependencies. This is the part that I think is roughly 10 years away.

------
slacka
His predictions are hit and miss. From his 1999 predictions:
[https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzwe...](https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzweil#Late_20th_century)

By 2009:

Y: The majority of reading is done on displays rather than paper

N: Most text will be created using speech recognition technology.

N: Intelligent roads and driverless cars are in use

N: People use personal computers the size of rings, pins, credit cards

Y/N (premature): Though desktop PCs are still common for data storage,
individuals primarily use portable devices for their computer-related tasks.

Y/N (premature): Personal worn computers provide monitoring of body functions,
automated identity and directions for navigation.

N: A $1,000 computer can perform a trillion calculations per second.

... seemed a total mixed bag, with many 2009 predictions just beginning to
come true today

By 2019:

N: The computational capacity of a $6,000 computing device is approximately
equal to the computational capability of the human brain (he claims that is
~20 petaFLOPS; the Nvidia DGX-2 is a 2 petaFLOPS "AI" supercomputer for
$399,000). [1]

N: Computers are embedded everywhere in the environment (inside of furniture,
jewelry, walls, clothing, etc.).

Y(easy win): Most people own more than one PC

N: Cables connecting computers and peripherals have almost completely
disappeared.

Y/N: People communicate with their computers via two-way speech and gestures
instead of with keyboards. (We _can_, but in my home and my office most words
are not entered that way.)

N: Most business transactions or information inquiries involve dealing with a
_simulated_ person

N: Rotating computer hard drives are no longer used.

N: Three-dimensional nanotube lattices are the dominant computing substrate.

N: The algorithms that allow the relatively small genetic code of the brain
to construct a much more complex organ are being transferred into computer
neural nets. (Could be decades off; several breakthroughs are needed to even
put this on the horizon.)

N: Most roads now have automated driving systems—networks that allow computer-
controlled automobiles to safely navigate.

N: Most decisions made by humans involve consultation with machine
intelligence. For example, a doctor may seek the advice of a digital
assistant.

... a few hits, but we're mostly heading that way. To me it seems 5-15 years
premature

[1] [https://www.popsci.com/intel-teraflop-
chip#page-2](https://www.popsci.com/intel-teraflop-chip#page-2)

~~~
lsc
>Most people own more than one PC (easy win)

What's interesting is that I think most people own just one general-purpose
computing device these days: their cellphone. Yeah, they maybe also have a TV
that can play Netflix or a game console, but my observation is that most
people don't even have one PC in the old sense of "a general purpose computer
with a physical keyboard that you use by sitting down" - and if you exclude
laptops from the definition it's even more rare.

------
dmead
are people still listening to this guy?

~~~
bartcobain
Why not?

------
dwighttk
And if you don’t agree you just don’t understand exponential growth.

~~~
dwighttk
Can't tell if the downvotes are from people who don't understand exponential
growth or from people who think I don't understand exponential growth.

~~~
scottlocklin
I didn't vote either way, but long term exponential growth in anything is
basically impossible.

Think about it for a minute. At some point lithography gave us Moore's "law"
(which, FWIW, has arguably already failed). Do you think that can continue
forever? Pretty sure you exceed the computational capacity of the universe
pretty quickly. Exponential growth in cell phone sales happened at some point
as well. As did exponential growth in transportation speed.

The idea that we've had exponential growth in anything but lithography
technology in recent years is also laughably insane. Are programming languages
exponentially better than they were 30 years ago? Do people live exponentially
longer? Are cars exponentially cheaper or better in some way? Are machine
learning algos exponentially better? No, no, no, and no.
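The physical-limits point is easy to quantify. As an illustration only (the
starting count and the ceiling below are rough, commonly cited figures of my
own choosing, not from this thread): anything that doubles on a fixed schedule
crosses any hard physical ceiling within a few centuries.

```python
import math

def doublings_to_exceed(start, limit):
    """How many doublings before a quantity at `start` exceeds `limit`?"""
    return math.ceil(math.log2(limit / start))

# Illustration: transistors per chip (~1e10 today) vs. a hard ceiling
# like the number of atoms in the Earth (~1e50). Rough figures.
n = doublings_to_exceed(1e10, 1e50)
print(f"{n} doublings, i.e. ~{2 * n} years at one doubling per two years")
```

Forty orders of magnitude is only about 133 doublings, so even an absurdly
generous ceiling gets hit on historical timescales, which is why indefinite
exponentials turn into S-curves.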

~~~
wetpaws
You are talking about S-curves, if I understand correctly.

A few notes: first, exponential growth does not necessarily mean fast growth;
some technologies can still be in an early phase or depend on some future
advances (compare the time it took TV to capture the world vs. smartphones).

Second, the decline of one technology's exponential growth does not mean it
can't be replaced with another technology with better prospects (e.g. gas
cars vs. electric cars).

I would argue that if you zoom out to the bigger picture, technology as a
whole does show traces of exponential growth.

~~~
scottlocklin
Re: technology showing "traces of exponential growth": citation needed;
everything is slowing down from where I'm sitting.

~~~
wetpaws
[https://www.visualcapitalist.com/rising-speed-
technological-...](https://www.visualcapitalist.com/rising-speed-
technological-adoption/)

------
TimTheTinker
I don't think anything like the singularity will ever be possible, because the
ability to speak language for oneself and invent complex reasoning is
spiritual, not merely physical. Humans can do it because we are inherently
spiritual beings, a union of body and spirit. (I know most folks here don't
subscribe to that assertion... I'm not here to argue it anyway.)

But even if that isn't true, humans are not merely the product of our
biology. Each of us has undergone an $age-year period of training, 18 years
of which was on someone else's effort and time. I doubt anyone is willing to
put in that much personal investment to raise a neural net like his/her own
child.

~~~
Agebor
But if something is spiritual, there must be some way for it to interact with
the physical world, otherwise you would not know about it. Then that way can
be measured, which makes it physical...

Unless you are calling spiritual those things that can't be measured, yet?

~~~
mrob
For the sake of argument, "spiritual" could be a class of physical objects
that can only be produced by other spiritual objects. This is the old
"vitalism" argument, which is generally considered discredited, but as the
"hard problem of consciousness" is still unanswered, this is a small loophole
that proponents can use to hang on to it.

~~~
TimTheTinker
I don't subscribe to the "vitalism" argument. I am only equipped to assert
that "spiritual" (as opposed to physical) primarily constitutes the use of
language as a form of unique self-expression. It can be imitated in machines,
but never duplicated.

(There's a good reason the Turing test involves language, and that computers
can't form meaningful intent on their own.)

I don't know enough to extend the definition any further, though I'm sure it's
not limited to merely that.

~~~
mrob
As far as we know, human language is generated by the brain, and the brain is
made of parts that can in principle be simulated. If other parts of the body
also turn out to be essential, there's no known physical limitation on
simulating those either. This doesn't mean it will actually be practicable to
do so, only that it's not theoretically impossible. By what mechanism does
your idea of "spirituality" make it impossible to simulate human brains with
sufficient accuracy to pass a Turing test? (or any other language generator
not necessarily modeled on humans)

------
mimixco
I love Ray Kurzweil and he's an undisputed genius, but like many other
believers in The Singularity, he's convinced of something that has never been
proven to be true.

No computer will ever demonstrate actual intelligence, which is the ability
to bring a new and unforeseen solution to an existing problem. I need only
cite a few examples to show this is true: Uber cars run over people. Facial
recognition claims black people are criminals just for being black.
Facebook's AI software lets in fake news.

These are all errors that could be avoided by actual human intelligence. The
fact of the matter is that we don't really know where intelligence or
creativity comes from. It's unlikely that we ever will. (See "Heisenberg
Uncertainty Principle" for one reason.) While machines are great at doing
things _faster_ than people can do them, they're not good at inventing new
solutions or doing something that humans couldn't do, given enough time.

~~~
freedomben
> _No computer will ever demonstrate actual intelligence, which is the
> ability to bring a new and unforeseen solution to an existing problem. I
> need only cite a few examples to show this is true: Uber cars run over
> people. Facial recognition claims black people are criminals just for being
> black. Facebook's AI software lets in fake news._

The burden of proof is not on your claim, but rather on Kurzweil's; but since
you made a claim and tried to back it up, I feel compelled to point out that
it's a non sequitur. Just because something hasn't been done before doesn't
mean it's not possible. Many people said going to the moon was impossible. It
might be interesting to look at Russell's Teapot.

~~~
zamalek
> doesn't mean it's not possible

In general, that which nature has demonstrated can usually be replicated: a
bird (flight), a floating log (ships), a fish (submarines), an asteroid
(space travel), etc. Nature has demonstrated intelligence: a human.

However, just like nature has not demonstrated superluminal travel, it has not
demonstrated super-intelligence; so that is still a question.

~~~
criddell
Nature never demonstrated Seaborgium, but we were still able to create it.

~~~
zamalek
But it did make all the naturally occurring elements heavier than iron with a
similar process.

~~~
criddell
Sure. And nature made intelligent organisms so we could take the fundamentals
and build something that nature hasn't.

~~~
zamalek
I never said it was impossible, just that it's not as certain as Kurzweil
makes it out to be.

