
Geoffrey Hinton and Demis Hassabis: AGI is nowhere close to being a reality - z0a
https://venturebeat.com/2018/12/17/geoffrey-hinton-and-demis-hassabis-agi-is-nowhere-close-to-being-a-reality/
======
karmasimida
And no one should be surprised by this. The recent NN advances don't help
with human-style symbolic reasoning at all. All we have is a much more
powerful function approximator with drastically increased capacity (very
deep networks with billions of parameters) and a scalable training scheme
(SGD and its variants).
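
For concreteness, here's a minimal sketch of what "function approximator + SGD" means in practice, in PyTorch; the layer sizes and the random stand-in data are arbitrary placeholders, not anyone's actual setup:

    import torch
    import torch.nn as nn

    # A deep network is just a big differentiable function f(x; theta).
    model = nn.Sequential(
        nn.Linear(784, 512), nn.ReLU(),
        nn.Linear(512, 512), nn.ReLU(),
        nn.Linear(512, 10),
    )
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # Placeholder data: random "images" and labels standing in for a dataset.
    loader = [(torch.randn(32, 784), torch.randint(0, 10, (32,)))
              for _ in range(100)]

    # The scalable training scheme: nudge theta downhill on minibatches.
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()   # backpropagation computes the gradients
        opt.step()        # one SGD step

Scaling this same recipe up (more layers, more data, more compute) is essentially all that has changed lately; nothing in the loop knows anything about symbols.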

Such architectures work great for differentiable data, such as images/audio,
but the improvements on natural language tasks are only incremental.

I was thinking maybe DeepMind's RL+DL is the way that leads to AGI, since it
does offer an elegant and complete framework. But it seems like even DeepMind
had trouble getting it to work in more realistic scenarios, so maybe our
modelling of intelligence is still hopelessly romantic.

~~~
sdenton4
Maaaaybe. I tend to think that symbolic reasoning is a learning tool, rather
than a goalpost for general intelligence. For example, we use symbolic
reasoning quite extensively when learning to read a new language, but once
fluent can rely on something closer to raw processing - no more reading and
sounding out character sequences. Similarly with chess - eventually we have
good mnemonics for what makes a good play, and can play blitz reasonably well.

And - let's be real - a lot of human symbolic reasoning actually happens
outside of the brain, on paper or computer screens. We painstakingly learn
relatively simple transformations and feedback loops for manipulating this
external memory, and then bootstrap it into short-term reaction via lots of
practice.

I tend to think that the problems are: a) Tightly defined / domain-specific
loss functions. If all I ever do is ask you to identify pictures of bananas,
you'll never get around to writing the great american novel. And we don't know
how to train the kinds of adaptive or free form loss functions that would get
us away from these domain-specific losses.

b) Similarly, I have a soft spot for the view that a mind is only as good as
its set of inputs. We currently mostly build models that are only receptive
(image, sound) or generative. Reinforcement learning is getting progress on
feedback loops, but I have the sense that there's still a long way to go.

c) I have the feeling that there's still a long way to go in understanding how
to deal with time...

d) As great as LSTMs are, there still seems to be some shortcoming in how to
incorporate memory into networks. LSTMs seem to give a decent approximation of
short-term memory, but it still seems far from great. This might be the key to
symbolic reasoning, though.
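
To make (d) concrete, here is roughly what the "short-term memory" of an LSTM amounts to, sketched in PyTorch (sizes arbitrary):

    import torch
    import torch.nn as nn

    lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
    x = torch.randn(1, 100, 32)   # one sequence: 100 timesteps, 32 features

    out, (h, c) = lstm(x)
    # c is the cell state: a single fixed-size vector that learned gates
    # read from and write to at every timestep. That one vector is the
    # entire memory, which is why it behaves like decent short-term memory
    # but nothing like an addressable long-term store.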

Writing all that down, I gotta say I agree fundamentally with the DeepMind
research priorities on reinforcement learning and multi-modal models.

~~~
joe_the_user
Once someone is fluent in a language, the logical operations and judgements
involved stop being overt and highly visible to the conscious mind. But that
doesn't mean that one stops getting the benefits and results of logical
operations.

What you might see as logical operations "not mattering", I would see as
logical operations integrated so deeply into reflexive operations that it's
hard to see where one ends and the other begins. The contrast is that humans
can do pattern recognition in a neural net fashion, taking something like the
multidimensional average of a set of things. But a human can also receive a
language-level input that some characteristic is or isn't important for
recognizing a given thing and incorporate that input into their broad-average
concepts. That kind of thing can't be done by deep learning currently - well,
not in a non-kludgey sort of way.

 _Similarly, I have a soft spot for the view that a mind is only as good as
its set of inputs._

It depends on what you mean by that. A human can take inputs on one thing
and apply them seamlessly to another thing. Neural nets tend to be very
dependent on the task-focused content fed them.

~~~
barrkel
There's a parallel between something being logical and it "feeling right",
without any necessary connection at the "implementation level" between the
two. In the same way, there may be a parallel between an artificial NN
recognizer identifying something unambiguously (rather than being caught
awkwardly between multiple weak or conflicting activations) and a logical
system using rules to derive a contradiction, without the second ever needing
to be embedded in the first, however deep. It may just be that illogical
inputs never got good training, because they either don't happen or have no
meaningful training data.

I, personally, just know I don't use logical rules very often at all. Usually
I apply them retroactively as a post-hoc justification, or narrative, to
explain a sense of discomfort or internal conflict or dissonance, but I have
no way of knowing if my rationale is true other than how it makes me feel -
I'm simply relying on the same mechanism, with an extra set of pattern
recognition learned specifically to identify fallacies and incorrect logical
constructs. If I didn't have that extra training, my explanations could be
illogical and I'd be none the wiser.

I think humans are very bad at logical reasoning and very inefficient at it.
Only a small % of the population ever does it, and they usually do it
incorrectly, with biases, constructively justifying an already-held conclusion.
They're great at pattern recognition though. I don't think logical reasoning
is anywhere on the critical path to human level AGI at a deep level. It could
very well be a parallel system though to help train recognition if we don't
figure out better ways of doing that.

~~~
joe_the_user
Well, neural nets and similar things are laughably worse than AI systems when
confronted with "real world" situations.

I wouldn't argue with the point that humans use rigorous logic and overt
rules-based behavior much less than they imagine (your summary is very much a
summary of the other-NLP model of mind, which I know of).

I'd argue that while "refined", systematic logic might be rare, fairly crude
logic, more or less indistinguishable from simply using language, is
everywhere, and it's an incredibly powerful tool that humans have. Again,
being able to correct object recognition based on things people tell you is an
incredibly powerful thing. You don't need a lot of full rationality for this,
but it gets you a lot. And that's just a small-ish example.

~~~
Retric
Intelligence is not limited to what humans are good at. People are really bad
at several tasks where current AI tech excels, but those things tend to be
excluded from the conversation.

AGI that is as smart as say a rat would easily qualify as AGI even without
language skills.

~~~
joe_the_user
 _Intelligence is not limited to what humans are good at._

Being able to implement all the things humans are good at, however, should
get us everything that we could do, because anything we could create, it could
create too.

 _AGI that is as smart as say a rat would easily qualify as AGI even without
language skills._

Indeed, but while a full language-using AI is a ways away at least, using
language is one thing that's at least sort-of describable/comprehensible as a
goal. A rat is a lot more robust than any human-made robot, but how? Overall,
I keep hearing this "there's intelligence that's totally unlike what we
conceive" argument, but it seems like computer programs as they exist now
either do what a human could do rationally, only more quickly (a conventional
program), or heuristically duplicate human surface behavior (neural nets). You
could sort-of argue for more, but it's a bit tenuous. Human behavior is very
flexible _already_ (that's the point, right?). And assuming AI is hard to
create, creating something whose properties we to-some-extent understand is
more likely than creating the wild unknown AI.

Also, "Getting to rat level" might not be the useful path to AGI. If we simply
created a rat like thing, we might win the prize of "real AGI" but it would be
far less useful than something we could tell what to do the way we tell humans
what to do.

~~~
visarga
A rat can do something else that a neural net can't - it is a self replicator.
Our neural nets don't have self replication or a huge, complex environment and
timescale to evolve in. Self replication creates an internal goal for agents:
survival. This drives learning. Instead, we just train agents with human-made
reward signals. Even a simple environment, like the Go board, when used for
training many generations of agents in self-play, easily leads to super-human
intelligence. We don't have the equivalent simulator for the real world
environment or care to let loose billions of self replicating AI agents in the
real world.
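
The self-play ratchet is easy to caricature in code. A toy sketch for intuition (everything here is a made-up stub, not DeepMind's pipeline): a challenger is only promoted if it beats the incumbent, so skill can only drift upward:

    import random

    # Stub game: higher "skill" wins more often. In AlphaZero this would be
    # a full game of Go/chess/shogi played by the network against itself.
    def play(a, b):
        p = a["skill"] / (a["skill"] + b["skill"])
        return a if random.random() < p else b

    best = {"skill": 1.0}
    for generation in range(1000):
        # In reality the challenger is the incumbent net after gradient
        # updates on fresh self-play data; here we just perturb a number.
        challenger = {"skill": best["skill"] * random.uniform(0.8, 1.2)}
        wins = sum(play(challenger, best) is challenger for _ in range(100))
        if wins > 55:   # promote only on a convincing win rate
            best = challenger

    print(best["skill"])  # drifts upward over generations

The point of the toy: the environment plus the promotion gate generates the curriculum, and no human ever labels a position.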

~~~
joycian
Survival is instrumental to any goal. Self replication isn't the only thing
that would create that drive.

~~~
Retric
Bombs don’t need to survive to be useful.

------
epicureanideal
[https://en.wikiquote.org/wiki/Incorrect_predictions](https://en.wikiquote.org/wiki/Incorrect_predictions)

"Hence, if it requires, say, a thousand years to fit for easy flight a bird
which started with rudimentary wings, or ten thousand for one which started
with no wings at all and had to sprout them ab initio, it might be assumed
that the flying machine which will really fly might be evolved by the combined
and continuous efforts of mathematicians and mechanicians in from one million
to ten million years--provided, of course, we can meanwhile eliminate such
little drawbacks and embarrassments as the existing relation between weight
and strength in inorganic materials. [Emphasis added.] The New York Times, Oct
9, 1903, p. 6."

-----

A couple of the leading minds in AGI say it's a long ways away... just because
the universe likes to give us the finger, maybe AGI is on the horizon. Maybe
we'll look back at this in 10 years and laugh (if we're here).

~~~
nabla9
Arguments like the above are "platitude level arguments".

We really don't learn anything about the problem at hand by talking in generic
terms. We use these arguments when we want to justify our hopes and feelings,
but there is really nothing to learn from them.

Hinton, Hassabis, Bengio and others point out that we can't 'brute force' AI
development. There need to be actual breakthroughs in the field, and there may
be several decades between them.

AI, brain science and cognitive science are extremely difficult fields with
small advances, yet people assume that it's possible to 'brute force' AGI by
just adding more computing power and doing more of the same.

Macroeconomics is probably a less complex research subject than AI or brain
science, but nobody assumes that you can just brute force a truly great
macroeconomic model in a few years by spending a little more on resources.

~~~
edanm
> AI, brain science and cognitive science are extremely difficult fields with
> small advances, yet people assume that it's possible to 'brute force' AGI by
> just adding more computing power and doing more of the same.

Do people assume that? I mean, I'm sure _some_ people do, but I don't think
I've encountered many people, at least not in the AI safety movement, that
actually think it's a matter of more hardware power. Some people think it's
_possible_ that that's all that's necessary, but I don't think most will say
that that's the most likely path to AGI (rather than, as you say, actual
breakthroughs happening).

~~~
YeGoblynQueenne
That's pretty much the Singularity conjecture in a nutshell: that exponential
advances in computing power will drive an exponential increase in machine
intelligence.

It gets more nuanced than that but there are actually very specialised people
who argue very forcefully that AGI is a hair's breadth away and we must act
_now_ to protect ourselves from it.

Edit: so not "most" people but definitely some very high-profile people.
Although granted, they're high-profile exactly because they keep saying those
things.

~~~
spot
nope
[https://en.wikipedia.org/wiki/Technological_singularity#Algo...](https://en.wikipedia.org/wiki/Technological_singularity#Algorithm_improvements)

"Carl Shulman and Anders Sandberg suggest that algorithm improvements may be
the limiting factor for a singularity because whereas hardware efficiency
tends to improve at a steady pace, software innovations are more unpredictable
and may be bottlenecked by serial, cumulative research."

------
hacker_9
Behind every successful neural network is a human brain. Neural networks are a
tool, an advanced tool for sure, but still just a tool. If we are looking for
AGI, and assuming the brain is an AGI, then there are still many differences
to resolve. For example, back propagation has not been observed in nature. Nor
has gradient descent. So the core mechanisms for learning in nature have still
to reveal their secrets.

~~~
buboard
Behind every brain is a successful neural network. Or at least that's the
promise of connectionism.

------
cantthinkofone
If you want AGI you need to give it a world to live in. The ecological
component of perception is missing. Without full senses, a machine doesn't
have a world to think generally about. It just has the narrow subdomain of
inputs that it is able to process.

You could bet that AGI won't manifest until AI and robotics are properly
fused. Cognition does not happen in a void. This image of a purely rational
mind floating in an abyss is an outdated paradigm to which many in the AI
community still cling. Instead, the body and environment become incorporated
into the computation.

------
anonytrary
Tangential: This title is weird. As if no one but the top minds in AI knew
this? This isn't big news to anyone who has done even a modicum of AI
research.

~~~
brandonmenc
> As if no one but the top minds in AI knew this?

Anecdotal, but nearly all of my programmer friends believe that full-blown AGI
is less than a decade away.

~~~
Jach
Sounds like an opportunity to place some bets and probably win some money. Or
maybe they'll back down and widen their intervals -- less than a decade,
_maybe_ , but probably longer. Maybe quite a long time too, and maybe after
development of some other planet-changing tech.

It's worth thinking about this section of [0] when various AI experts offer
predictions:

> Two: History shows that for the general public, and even for scientists not
> in a key inner circle, and even for scientists in that key circle, it is
> very often the case that key technological developments still seem decades
> away, five years before they show up.

> In 1901, two years before helping build the first heavier-than-air flyer,
> Wilbur Wright told his brother that powered flight was fifty years away.

> In 1939, three years before he personally oversaw the first critical chain
> reaction in a pile of uranium bricks, Enrico Fermi voiced 90% confidence
> that it was impossible to use uranium to sustain a fission chain reaction. I
> believe Fermi also said a year after that, aka two years before the
> denouement, that if net power from fission was even possible (as he then
> granted some greater plausibility) then it would be fifty years off; but for
> this I neglected to keep the citation.

> And of course if you’re not the Wright Brothers or Enrico Fermi, you will be
> even more surprised. Most of the world learned that atomic weapons were now
> a thing when they woke up to the headlines about Hiroshima. There were
> esteemed intellectuals saying four years after the Wright Flyer that
> heavier-than-air flight was impossible, because knowledge propagated more
> slowly back then.

[0] [https://intelligence.org/2017/10/13/fire-alarm/](https://intelligence.org/2017/10/13/fire-alarm/)

~~~
dwiel
If there is AGI all bets are off so to speak, what's a few dollars worth then?
If there isn't AGI the money still has value. Doesn't sound like a rational
bet, even if you think the odds of AGI soon are high.

------
rozim
Ilya Sutskever of OpenAI says 5 years:

[https://medium.com/intuitionmachine/near-term-agi-should-be-considered-as-a-possibility-9bcf276f9b16](https://medium.com/intuitionmachine/near-term-agi-should-be-considered-as-a-possibility-9bcf276f9b16)

~~~
why_only_15
I watched the linked talk that the quote apparently comes from, and it was
really good. Thanks for sharing that. Ilya specifically says in the talk that
it is unlikely but that there is sufficient lack of understanding that we
can't rule it out, and that thus the questions around it are worth thinking
about.

------
goolulusaurs
It bothers me that the quotes in this article are all cut up, in some cases
ending when a sentence clearly wasn't finished. It makes it hard to judge what
they are really saying here, and I wish the full interview were published.

------
mrdoops
I wonder to what extent the data being fed to these models is the issue. Or
rather, the problem is the systems that generate these datasets and how
representative of reality they are. If we make an app that involves humans and
that data is used in a model, to what extent do user experience and other
factors warp reality?

Maybe our existing methods are good enough given enough compute to reach AGI
but our datasets are too low fidelity and non-representative of the problem
space to reach desired results?

~~~
MAXPOOL
The problem is not the data. The problem is the need for high quality data.
Current ML is data driven statistical learning. ML tries to learn a model that
describes the distribution. It's impossible to get similar performance as the
best reference implementation (human brain) using this approach.
[https://i.redd.it/kvvgv6zzhtp11.png](https://i.redd.it/kvvgv6zzhtp11.png)

Think of a 16-year-old human:

* it has received less than 400 million wakeful seconds of data, plus 100 million seconds of sleep,

* it has made only a few million high-level cognitive decisions where feedback is important and the delay is tens of seconds or several minutes (say a few thousand per day). From just a few million samples it has learned to behave in society like a human and do human things,

* assuming a 50 ms learning rate on average, at the lowest level there are at most 10 billion iterations per neuron (short-term synaptic plasticity acts on a timescale of tens of milliseconds to a few minutes).

Humans generate a very detailed model of their environment with very little
data and even less feedback. They can learn a complex concept from one
example. For example, you need only one example of a pickpocket to understand
the whole concept.
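
For what it's worth, the back-of-envelope numbers above roughly check out. A quick sanity check in Python (all constants are rough assumptions, e.g. 8 h/day of sleep and ~3000 decisions/day):

    SECONDS_PER_YEAR = 365 * 24 * 3600            # ~31.5 million

    total   = 16 * SECONDS_PER_YEAR               # ~504M seconds alive
    sleep   = 16 * 365 * 8 * 3600                 # ~168M s asleep at 8 h/day
                                                  # (the ~100M above implies fewer hours)
    wakeful = total - sleep                       # ~336M, under the 400M above

    decisions  = 16 * 365 * 3000                  # ~17.5M at 3000/day
    iterations = wakeful / 0.050                  # ~6.7e9, under the 10B above

    print(f"{wakeful:.2e} wakeful s, {decisions:.2e} decisions, "
          f"{iterations:.2e} low-level iterations")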

~~~
machiaweliczny
This pickpocket example seems like symbolic, relational reasoning.

I think we need simulation of other agents' outputs as the primary tool for
reasoning. That seems to be how intelligence emerged in evolution.

Something like this: choose a desired action > simulate other agents' outputs
based on the future state after performing the action > check the reward for
this action after simulating the others' outputs > perform the action or not >
update all agent models and relations in the "world" graph model.

I think the world could be modeled as a simple graph and each agent as a NN.

Then, based on the graph, we could conduct symbolic reasoning and very fast
learning (by updating edges).
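
A very rough sketch of that loop, purely to pin the idea down; every name and structure below is my guess at what's meant, not an established architecture:

    import random

    # Each agent keeps a cheap model of every other agent (the "edges" of
    # the world graph); a random score stands in for a learned NN.
    class Agent:
        def __init__(self, name):
            self.name = name
            self.models = {}   # other agent's name -> expected reaction

        def react(self, action):
            return random.uniform(-1, 1)   # stub for a real policy network

    # Choose an action by simulating the others' outputs, then act or don't.
    def choose(agent, others, actions):
        def score(action):
            return sum(agent.models.get(o.name, 0.0) + o.react(action)
                       for o in others)
        scored = {a: score(a) for a in actions}
        best = max(scored, key=scored.get)
        return best if scored[best] > 0 else None   # "perform action or not"

    # "Very fast learning by updating edges" in the world graph.
    def update_edges(agent, others, action):
        for o in others:
            old = agent.models.get(o.name, 0.0)
            agent.models[o.name] = 0.9 * old + 0.1 * o.react(action)

    agents = [Agent(f"a{i}") for i in range(3)]
    action = choose(agents[0], agents[1:], ["cooperate", "defect"])
    if action is not None:
        update_edges(agents[0], agents[1:], action)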

I think these models also need a good physical simulator and a good
understanding of competitiveness.

Is anyone aware of any such attempts at building AGI as I described?

Humans have natural language as a big competitive advantage (an easy, though
ambiguous, way to compress parts of the world graph and pass them to others;
I think with artificial machines this can be done more efficiently). Another
advantage is knowledge storage - also easy to do with machines.

If we can build insect AI, building human AI should be easy.

~~~
machiaweliczny
I also think it would be great if we could just do "world_model = fold(books)"
instead of simulation.

Is anyone aware of such efforts/results?
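
Taken literally, that one-liner is just a fold/reduce; the entire open problem hides inside the update function. A deliberately dumb stand-in:

    from functools import reduce

    # The hard part: revise the world model given one more book. Nobody
    # knows how to write this; the stub below just counts tokens.
    def update(world_model, book):
        for token in book.split():
            world_model[token] = world_model.get(token, 0) + 1
        return world_model

    books = ["the cat sat", "the dog ran"]   # placeholder corpus
    world_model = reduce(update, books, {})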

------
nikkwong
Not sure how I feel about this. For one, the Kurzweilian singularity, which
could largely be fueled by the advent of AGI, is both exciting and scary. The
upside could forever change humanity as we know it: vastly increased
longevity, the potential to create _anything_ via a universal assembler[0],
bringing everything feasible within the laws of physics to reality. Knowledge
is the only limiting factor stopping us from doing anything which is
physically possible in this universe, and in that light AGI could be an
enlightenment.

On the other hand, the ubiquity of knowledge once it's available could lead
any maniac to use it for the wrong purpose and wipe out humanity from their
basement.

My feelings on the potential of AGI are therefore mixed. I for one have just
found my particular niche in the workforce and am finally reaping the
dividends from decades of hard work. Having AGI displace me and millions (or
billions) of individuals is frightening and definitely keeps me on my toes.

Technology changes the world; my parents both worked for newspapers and talk
endlessly about how the demise of their industry after the advent of the
internet is so unfortunate. Luckily for them they are both at retirement age
so their livelihood was not upset by displacement.

If AGI does become a thing it will be interesting to see how millennials and
gen Z react to becoming irrelevant in what would have been the peak of their
careers.

[0]
[https://en.wikipedia.org/wiki/Molecular_assembler](https://en.wikipedia.org/wiki/Molecular_assembler)

------
diminish
I have a small experiment to discover if AGI is already a solved puzzle.

[https://news.ycombinator.com/item?id=18720482](https://news.ycombinator.com/item?id=18720482)

------
toasterlovin
Not to mention that we don't even know if general intelligence exists. All we
know is _that_ mental abilities tend to correlate, but not _why_ they tend to
correlate. And if you think about designing machines, in general, the idea of
general intelligence is utterly ridiculous. Does a fast car have general
speediness? Of course not, it has dozens or hundreds of discrete optimizations
that all contribute in some degree to the car being faster.

~~~
mac01021
I'm not sure you and the OP mean the same thing by "General Intelligence".

It seems clear that autonomous systems which can apply their computational
machinery to a diverse range of problems, and can, in a diverse range of
settings, formulate instrumental goals as part of a plan to attain a final
goal, do exist.

Because that's what humans are, at least some of the time.

~~~
mannykannot
But if human performance in these regards never exceeded what the pinnacle of
today's AI performance is, we would not regard them as intelligent in a
general sense, either.

------
mikhailfranco
Great interview with Hassabis from the BBC. It's meanderingly biographical,
with insights about his path through internships, curiosity, startups,
commitment, burnout, trusted team mates and eventual successes ...

[https://www.bbc.co.uk/sounds/play/p06qvj98](https://www.bbc.co.uk/sounds/play/p06qvj98)

------
mindgam3
Demis Hassabis' (true) statements here would be much more credible if DeepMind
weren't currently making a mint by promoting AlphaZero to the masses as a
"general purpose artificial intelligence system".

Don't believe me? Check out this series of marketing videos on YouTube by GM
Matthew Sadler.

1\. “Hi, I’m GM Matthew Sadler, and in this series of videos we’re taking a
look at new games between _AlphaZero, DeepMind’s general purpose artificial
intelligence system_ , and Stockfish” (1)

2\. “Hi, I’m GM Matthew Sadler, and welcome to this review of the World
Championship match between Magnus Carlsen and Fabiano Caruana. And it’s a
review with a difference, because we are taking a look at the games together
with _AlphaZero, DeepMind’s general purpose artificial intelligence
system_...” (2)

3\. “Hi, I’m GM Matthew Sadler, and in this video we’ll be taking a look at a
game between _AlphaZero, DeepMind’s general purpose artificial intelligence
system_ , and Stockfish” (3)

I could go on, but you get my point. Search youtube for "Sadler DeepMind" and
you'll see all the rest. This is a script.

But wait, you say, that's just some random unaffiliated independent
grandmaster who just happens to be using an inaccurate script on his own, no
DeepMind connection at all! And to that I would say, check out this same
random GM being quoted directly on DeepMind's blog, waxing eloquent and
rapturous about AlphaZero's incredible qualities. (4)

Let's be clear. I am in no way dismissing AlphaZero's truly remarkable
abilities in both chess and other games like go and shogi. Nor do I have a
problem with Demis Hassabis making headlines for stating the obvious about
deep learning (that it's good at solving certain limited types of puzzles, but
we are a long way from AGI - why is this controversial?).

My problem is that Hassabis is speaking out of both sides of his mouth.
Increasing DeepMind/Google's value by many millions with his marketing
message, while acting like he's not doing that. It feels intellectually
dishonest.

To solve this, all DeepMind needs to do is stop instructing its Grandmaster
mouthpieces to refer to AlphaZero as a "general purpose artificial
intelligence system". Let's see how long that takes.

(1)
[https://www.youtube.com/watch?v=2-wFUdvKTVQ&t=0m10s](https://www.youtube.com/watch?v=2-wFUdvKTVQ&t=0m10s)
(2)
[https://www.youtube.com/watch?v=X4T0_IoGQCE&t=0m05s](https://www.youtube.com/watch?v=X4T0_IoGQCE&t=0m05s)
(3)
[https://www.youtube.com/watch?v=jS26Ct34YrQ&t=0m05s](https://www.youtube.com/watch?v=jS26Ct34YrQ&t=0m05s)
(4) [https://deepmind.com/blog/alphazero-shedding-new-light-grand-games-chess-shogi-and-go/](https://deepmind.com/blog/alphazero-shedding-new-light-grand-games-chess-shogi-and-go/)

~~~
rozim
I suspect "general purpose artificial intelligence system" means the same
architecture applied to 3 games (western Chess, Shogi, Go).

~~~
mindgam3
Wouldn't that be a "general purpose game-playing intelligence system" at best?
(without mentioning that it only applies to certain types of perfect-
information games)

Maybe it's just me, but "general purpose artificial intelligence system"
sounds like, well, General Artificial Intelligence. Which sounds like
Artificial General Intelligence, which is the holy grail.

~~~
balfirevic
> Maybe it's just me, but "general purpose artificial intelligence system"
> sounds like, well, General Artificial Intelligence.

Well, it doesn't sound like that at all to me and I think the phrasing is
fair. Also, it folds proteins.

------
qwerty456127
If AGI (an artificial human mind with direct access to the computational power
of classic computers and the whole Internet of information) were possible,
then we would probably already be living in the Travelers TV show.

~~~
mindcrime
_... was possible then we would probably already be living in the Travelers TV
show._

How do you know we aren't?

BTW, if you hadn't noticed, Season Three just came out on Netflix. I'm
champing at the bit to binge watch that... :-)

------
yters
As I always ask regarding this sort of story, why do we believe human
intelligence is computable? The only answer I've heard is the materialist
presupposition, plus sneers at any other metaphysic as "magic," which is not
exactly a valid form of argument.

As an alternative, the human mind could be some sort of halting oracle. That's
a well defined entity in computer science which cannot be reduced to Turing
computation, thus cannot be any sort of AI, since we cannot create any form of
computation more powerful than a Turing machine. How have we ruled out that
possibility? As far as I can tell, we have not ruled it out, nor even tried.
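
For anyone unfamiliar: a halting oracle is a hypothetical device that decides, for any program and input, whether that program halts. Turing's diagonal argument shows no program can compute this, sketched here in Python:

    # Suppose halts(f, x) were a total computable function returning True
    # iff f(x) eventually halts. No such function exists -- that's the point.
    def halts(f, x):
        raise NotImplementedError

    def paradox(f):
        if halts(f, f):     # if the oracle says f(f) halts...
            while True:     # ...loop forever instead
                pass
        return              # ...otherwise halt immediately

    # paradox(paradox) halts iff halts(paradox, paradox) says it doesn't:
    # a contradiction. So an oracle for halts() is, by definition, beyond
    # Turing computation -- which is the property being claimed for the
    # mind here.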

~~~
siekmanj
We know the brain is doing something - if you don't want to call it
computation, then you might as well call it magic.

~~~
yters
There are other possibilities. For example, there can be an immaterial mind
that operates as a halting oracle and interfaces with the world through the
brain. Halting oracles are well defined, and we can empirically test for their
existence. So, there's no reason why we have to assume everything humans do is
reducible to some sort of automaton. The only reason we make the assumption is
because of prior materialistic commitments.

UPDATE: I've been rate limited for some reason, so here is my response to
whether the mind intuitively seems to be a halting oracle.

1\. It's obvious there are an infinite number of integers, because whatever
number I think of I can add one to it. A Turing machine has to be given the
axiom of infinity to make this kind of inference, it cannot derive it in any
way. This intuitively looks like an example of the halting oracle at work in
my mind. Or, an even more basic practical example: if I do something and it
doesn't work, I try something else. Unlike the game AIs that repeatedly try to
walk through walls.

2\. We programmers write halting programs with great regularity. So, it seems
like we are decent at solving the halting problem. Also, note that it is not
necessary to solve every problem in order to be an uncomputable halting
oracle. All that is necessary is being capable of solving an uncomputable
subset of the halting problems. So, the fact that we cannot solve some
problems does not imply we are not halting oracles.

~~~
frabcus
Roger Penrose basically suggests what you say in "The Emperor's New Mind".
Roughly, he argues that the brain (likely, according to him) uses quantum
computation, and so we can't make an AI out of a classical computer.

The practical flaw with this argument, of course, is that you could instead
make an AI that itself uses quantum computation. I asked Roger Penrose about
this at a university philosophy meetup over 20 years ago, and he agreed.

Likewise, if there is some kind of halting oracle, perhaps we can work out how
the brain creates and connects to that oracle, and make our AI do the same.

Meanwhile, there is no physiological or computational evidence for this
possibility. We should keep hunting though, as that's the same thing as
understanding the detail of how the brain works!

~~~
yters
Well, quantum computation is weaker than a nondeterministic Turing machine, so
it's not the same thing I'm saying. Penrose correctly identifies that the mind
cannot be a deterministic Turing machine, but his invocation of quantum
mechanics does not solve the problem he points out. A DTM can simulate an NTM
and hence anything in between, so the in-between of quantum computation does
not solve anything.

The fundamental problem Penrose identifies boils down to the halting problem,
which requires a halting oracle to be solved. Hence, a halting oracle is the
best explanation for the human mind, and no form of computation, quantum or
otherwise, suffices.

UPDATE:

Since I'm rate limited, here is my answer to the replier's comment:

A partial answer: the mind has access to the concept of infinity, and can
identify new, consistent axioms. Other possibilities: future causality and the
ability to change the fundamental probability distribution.

But, it's also important to note that we don't have to answer the "how"
question in order to identify halting oracles as a viable explanation. We
often identify new phenomena and anomalies without being able to explain them,
so the identification is a first step.

~~~
sjeohp
>Hence, a halting oracle is the best explanation for the human mind

What does it explain though? That the human brain has a black box capable of
solving certain problems... how exactly?

~~~
mannykannot
Indeed - it's essentially the homunculus fallacy, or magic dressed up in the
language of knowledge.

[https://en.wikipedia.org/wiki/Homunculus_argument](https://en.wikipedia.org/wiki/Homunculus_argument)

------
magwa101
Yep, and considering that a very high majority of our work does not require
GI, we still have huge AI job disruption looming.

------
hyperpallium
Progress-skeptics are always wrong - except for artificial intelligence.

------
izzydata
I'm not even convinced that a real AI is possible with conventional computer
hardware or anything remotely similar to it. Even without considering
software, I get the impression there is a fundamental limitation in the
hardware.

~~~
mortivore
I'm not convinced we've even defined the problem space well enough to solve
it. Like, what is the concrete measure (something to target) for intelligence?
If we develop general intelligence, is it going to be human, dog, or fish?

~~~
EliRivers
I'm not convinced any of those creatures have general intelligence. I'm
similarly unconvinced that we'd recognise general intelligence if we saw it.

~~~
izzydata
Are humans even capable of general intelligence? I feel like the philosophical
question of determinism vs free will is unsolvable.

The baseline of human capability would definitely still be impressive.

------
ludicast
Isn't that exactly what an AGI would say once it takes over the brains of
leading scientists?

~~~
MR4D
No - it would post what you just said on HN instead. :)

Seriously - that's a wicked funny post you had there!

~~~
mlthoughts2018
Nice try, Mr. AI but you won’t escape detection by pretending to be a jokester
who grew up in Boston.

------
lemoncucumber
As someone who files taxes every year, I'm quite certain that Adjusted Gross
Income is a reality ;)

------
thanatropism
I don't believe in the idea of AGI for Dreyfusard reasons, but it's possible
that it could emerge from something completely different than deep learning.

For all we know, Isabelle and Coq could be speeding down the road to
consciousness while we're busy having a blast doing Computer Vision and
pretending it's AI.

~~~
thanatropism
I'm used to random downvotes for comments about current controversies - I'll
say things about income inequality that people won't like, and that's okay,
you have your politics. Deep learning shouldn't be part of your politics; it
either does stuff or it doesn't.

Deep Learning is amazeballs for Computer Vision. It's fun because people like
looking at pictures. But sufficiently prodded Isabelle proves theorems, I've
seen it first hand, and the "sufficient prodding" is way underdeveloped yet.
At one point backpropagation was dead too.

------
roenxi
The computational power of the hardware is getting really close to what a
human brain is capable of (on an exponential scale, anyway). If "nowhere
close" means not in the next 5 years then sure.

Over the medium term I'm not sure AI researchers are the best people to ask.
They are completely dependent on how much power the electrical engineers give
them - I doubt they have a deeper understanding of what a doubling or
quadrupling of computing power will do than any programmer learning about
neural networks.

~~~
anonytrary
> The computational power of the hardware is getting really close to what a
> human brain is capable of

Why do you say that? AFAIK computing architecture and brain architecture are
completely different. How would you even begin to compare their power?

~~~
roenxi
Well, Wikipedia was my source [0] and it links
[http://hplusmagazine.com/2009/04/07/brain-chip/](http://hplusmagazine.com/2009/04/07/brain-chip/) as its source.

Google has TPUs that are off from the estimated power required to simulate a
brain by only a factor of 3, so technology is reaching the ballpark. Given
that brains were evolved, the part that does symbolic thinking is probably
"easy to stumble on" in some practical sense.

[0]
[https://en.wikipedia.org/wiki/Computer_performance_by_orders...](https://en.wikipedia.org/wiki/Computer_performance_by_orders_of_magnitude)
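
As a rough sense of scale (the TPU figure is the peak number Google quotes for a Cloud TPU v3 device; brain estimates in the literature vary over several orders of magnitude, so treat this as illustration, not measurement):

    tpu_v3 = 4.2e14   # peak ops/s quoted for one Cloud TPU v3 device

    # Brain-compute estimates range from ~1e15 to ~1e18 ops/s depending on
    # what counts as an "operation".
    for brain_ops in (1e15, 1e16, 1e18):
        print(f"brain at {brain_ops:.0e} ops/s ~ "
              f"{brain_ops / tpu_v3:,.0f} TPU v3 devices")

At the low-end estimate the gap is indeed only a small single-digit factor, which is the ballpark claim above.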

~~~
aerodude
Faster hardware will help, but I'm not convinced that it's the answer. OpenAI
Five used on the order of 2000 years of experience to train their agent. There
are clearly still _huge_ algorithmic gains to be had.

Given how we've managed to improve on nature in other domains (see solar cell
efficiency, for example), I think that if we can figure out how intelligent
organisms manage to learn so quickly we can likely beat nature's efficiency.

------
nuguy
I take huge offense to this article. They claim that when it comes to AGI,
Hinton and Hassabis “know what they are talking about.” Nothing could be
further from the truth. These are people who have narrow expertise in _one_
framework of AI. AGI does not yet exist so they are not experts in it, in how
long it will be, or how it will work. A layman is just as qualified to
speculate about AGI as these people, so I find it infinitely frustrating when
condescending journalists talk down to the concerned layman. This irritates me
because AI is a death sentence for humanity — it's an incredibly serious
problem.

As I have stated before, AI is the end for us. To put it simply, AI brings the
world into a highly unstable configuration where the only likely outcome is
the relegation of humans and their way of life. This is because of the
fundamental changes imposed on the economics of life by the existence of AI.

Many people say that automation leads to new jobs, not a loss of jobs.
Automation has never encroached on the sacred territory of sentience. It is a
totally different ball game. It is stupid to compare the automation of a
traffic light to that of the brain itself. It is a new phenomenon completely
and requires a new, from-the-ground-up assessment. Reaching for the cookie-
cutter “automation creates new jobs” simply doesn’t cut it.

The fact of the matter is that even if most of the world is able to harness AI
to benefit our current way of life, at least one country won’t. And the
country that increases efficiency by displacing human input will win every
encounter of every kind that it has with any other country. And the pattern of
human displacement will ratchet forward uncontrollably, spreading across the
whole face of the earth like a virus. And when humans are no longer necessary
they will no longer exist. Not in the way they do now. It’s so important to
remember that this is a watershed moment — humans have never dealt with
anything like this.

AI could come about tomorrow. The core algorithm for intelligence is probably
a lot simpler than is thought. The computing power needed to develop and run
AI is probably much lower than it is thought to be. Just because DNNs are not
good at this does not mean that something else won't come out of left field,
either from neurological research or pure AI research.

And as I have said before, the only way to ensure that human life continues as
we know it is for AI to be banned - for all research and inquiries to be made
illegal. Some point out that this is difficult to do, but like I said, there
is no other way. I implore everyone who reads this to become involved in
popular efforts to address the problem of AI.

~~~
nradov
Stating something more times doesn't make it true. Everything you've written
is pure speculation, and alarmist at that. There's no proof that AGI is even
possible, and if it is possible there's no proof that it will end humanity.

~~~
995533
Clearly, AGI-level intelligence is possible, because human brains exist.

So unless you posit that a function has to rely on its materialization (that
there is something untouchably magic about biological neural networks, and
intelligence is not multiply realizable), it should be possible to
functionally model intelligence. Nature shows the way.

AGI will likely obsolete humanity: either deprecate it, or consume it (make us
part of the Borg collective). Heck, even a relatively dumb autonomous atom
bomb or computer virus may be enough to wipe humanity from the face of the
earth.

~~~
nradov
It's not at all clear that AGI is technically feasible. Human brains exist but
we have only a shallow understanding of how they work.

Even if we assume for the sake of argument that AGI is possible, there's no
scientific basis to assume that it will make humanity obsolete. For all we
know there could be fundamental limits on cognition. A hypothetical AGI might
be no smarter than humans, or might be unable to leverage its intelligence in
ways that impact us.

Nuclear weapons and malware can cause damage but there's no conceivable
scenario where they actually make us extinct.

~~~
995533
Something can be possible while still being technically infeasible.

I agree our knowledge is currently lacking, but I see no reason why it will
never catch up.

There are fundamental limits on cognition. For one, our universe is limited by
the amount of computing energy available. Plenty of problems can be fully
solved, to the point where it does not matter if you are ever more intelligent
(beyond a certain point, two AGIs will always draw at chess). Another limit is
practical: the AGI needs to communicate with humans (if we manage to keep
control of it), so it may need to dumb itself down so we can understand it.

Even an AGI merely as smart as the smartest human will greatly outrun us: it
can duplicate itself and focus on many things in parallel. Then the improved
bandwidth between AGIs will do the rest (humans are stuck with letters and
formulas and coffee breaks).

Manually deployed atom bombs and malware can already wreck us. No difference
with autonomous (cyber)weapons.

------
995533
If AGI is possible, it already happened. If even AI experts put it 100-1000
years out, where some human monkeys banging on digital typewriters could
eventually create it, then, in the vastness of space, time, military
contracts, alien intelligences, and random Boltzmann brains, it must have been
reality multiple times already.

If AGI is impossible, it will never happen. We already know that perfectly
intelligent AGIs are not physically possible: per DeepMind's foundational
theoretical framework, optimal compression is non-computable, and besides
that, it is not possible for an inference machine to know all of its universe
(unless it is bigger than the universe by at least 1 bit, AKA it _is_ the
universe).

That leaves being more intelligent than all of humanity. To accomplish that,
by Shannon's own estimates, there is currently not enough information
available in datasets and on the internet. Chinese efforts to artificially
increase the intelligence of babies are still in their infancy too (the
substrate of AGI is irrelevant for computationalism, unless it absolutely
needs to run on the IBM 5100).

So until that time travels, we will have to make do with being smarter
than/indistinguishable from a human on all economic tasks. We're already there
for some subset of humanity; you may even be a part of that subset, if you
believed this post was written by a human.

