Geoffrey Hinton and Demis Hassabis: AGI is nowhere close to being a reality (venturebeat.com)
204 points by z0a 6 months ago | 175 comments

And no one should be surprised by this. Recent NN advances don't help address human-style symbolic reasoning at all. All we have is a much more powerful function approximator with drastically increased capacity (very deep networks with billions of parameters) and a scalable training scheme (SGD and its variants).
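
To make the "function approximator + SGD" point concrete, here's a toy sketch (my own construction, not anyone's actual training code): fitting the simplest possible model, one gradient step per sample.

```python
import random

# A minimal illustration of the "function approximator + SGD" recipe:
# fit w and b so that f(x) = w*x + b matches data generated by y = 3x + 2.
random.seed(0)
data = [(x, 3 * x + 2) for x in [i / 10 for i in range(-20, 21)]]

w, b = 0.0, 0.0
lr = 0.05
for epoch in range(200):
    random.shuffle(data)          # "stochastic": visit samples in random order
    for x, y in data:
        err = (w * x + b) - y     # prediction error on one sample
        w -= lr * err * x         # gradient of the squared-error loss wrt w
        b -= lr * err             # ...and wrt b

print(round(w, 2), round(b, 2))   # converges near 3.0 and 2.0
```

Deep nets are this loop scaled up enormously - many layers of parameters, gradients computed by backprop instead of by hand - but the training scheme is recognizably the same.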

Such architectures work great for differentiable data, such as images and audio, but the improvements on natural language tasks are only incremental.

I was thinking maybe DeepMind's RL+DL is the path that leads to AGI, since it offers an elegant and complete framework. But it seems even DeepMind has had trouble getting it to work in more realistic scenarios, so maybe our modelling of intelligence is still hopelessly romantic.

Maaaaybe. I tend to think that symbolic reasoning is a learning tool, rather than a goalpost for general intelligence. For example, we use symbolic reasoning quite extensively when learning to read a new language, but once fluent we can rely on something closer to raw processing - no more reading and sounding out character sequences. Similarly with chess - eventually we have good mnemonics for what makes a good play, and can play blitz reasonably well.

And - let's be real - a lot of human symbolic reasoning actually happens outside of the brain, on paper or computer screens. We painstakingly learn relatively simple transformations and feedback loops for manipulating this external memory, and then bootstrap it into short-term reaction via lots of practice.

I tend to think that the problems are: a) Tightly defined / domain-specific loss functions. If all I ever do is ask you to identify pictures of bananas, you'll never get around to writing the great American novel. And we don't know how to train the kinds of adaptive or free-form loss functions that would get us away from these domain-specific losses.

b) Similarly, I have a soft-spot for the view that a mind is only as good as its set of inputs. We currently mostly build models that are only receptive (image, sound) or generative. Reinforcement learning is getting progress on feedback loops, but I have the sense that there's still a long way to go.

c) I have the feeling that there's still a long way to go in understanding how to deal with time...

d) As great as LSTMs are, there still seems to be some shortcoming in how to incorporate memory into networks. LSTMs seem to give a decent approximation of short-term memory, but still seem far from great. This might be the key to symbolic reasoning, though.

Writing all that down, I gotta say I agree fundamentally with the DeepMind research priorities on reinforcement learning and multi-modal models.

Once someone is fluent in a language, the logical operations and judgements involved stop being overt and highly visible to the conscious mind. But that doesn't mean that one stops getting the benefits and results of logical operations.

What you might see as logical operations "not mattering", I would see as logical operations integrated so deeply into reflexive operations that it's hard to see where one ends and the other begins. The contrast is that humans can do pattern recognition in a neural-net fashion, taking something like the multidimensional average of a set of things. But a human can also receive a language-level input that some characteristic is or isn't important for recognizing a given thing, and incorporate that input into their broad-average concepts. That kind of thing can't be done by deep learning currently - well, not in a non-kludgey way.

Similarly, I have a soft-spot for the view that a mind is only as good as its set of inputs.

It depends on what you mean by that. A human can take inputs on one thing and apply them seamlessly to another thing. Neural nets tend to be very dependent on the task-focused content fed to them.

I think "a better set of inputs" means the real world, or much better simulators to train our RL agents in. François Chollet (author of Keras) was saying a similar thing: focusing too much on architectures and algorithms, we forget the importance of the environment. An agent can only become as smart as the hardest problem it has to solve in its environment, and it depends on the richness of that environment for learning. Humans are not general intelligences either; we're just good at being human (surviving in our environment). We'd be much smarter in a richer environment, too.


There's a parallel between something being logical and it "feeling right", without any necessary connection at the "implementation level" between the two. In the same way, there may be a parallel between an artificial NN recognizer recognizing something unambiguously (rather than being caught awkwardly with multiple weak or conflicting activations) and a logical system using rules to detect a contradiction - without ever needing to embed the second in the first, however deep. It's just that illogical inputs never got good training, because they either don't happen or have no meaningful training data.

I, personally, just know I don't use logical rules very often at all. Usually I apply them retroactively as a post-hoc justification, or narrative, to explain a sense of discomfort or internal conflict or dissonance, but I have no way of knowing if my rationale is true other than how it makes me feel - I'm simply relying on the same mechanism, with an extra set of pattern recognition learned specifically to identify fallacies and incorrect logical constructs. If I didn't have that extra training, my explanations could be illogical and I'd be none the wiser.

I think humans are very bad at logical reasoning and very inefficient at it. Only a small % of the population ever does it and they usually do it incorrectly with biases, constructively to justify an already held conclusion. They're great at pattern recognition though. I don't think logical reasoning is anywhere on the critical path to human level AGI at a deep level. It could very well be a parallel system though to help train recognition if we don't figure out better ways of doing that.

Well, neural nets and similar things do laughably worse than humans when confronted with "real world" situations.

I wouldn't argue with the point that humans use rigorous logic and overt rules-based behavior much less than they imagine (your summary is very much a summary of the other-NLP model of mind, which I know).

I'd argue that while "refined", systematic logic might be rare, fairly crude logic - more or less indistinguishable from simply using language - is everywhere, and it's an incredibly powerful tool that humans have. Again, being able to correct object recognition based on things people tell you is an incredibly powerful thing. You don't need full rationality for this, but it gets you a lot. And that's just a small-ish example.

Intelligence is not limited to what humans are good at. People are really bad at several tasks where current AI tech excels, but those things tend to be excluded from the conversation.

AGI that is as smart as say a rat would easily qualify as AGI even without language skills.

Intelligence is not limited to what Humans are good at.

Being able to implement all the things humans are good at, however, should get us everything that we could do, because anything we could create, it could create too.

AGI that is as smart as say a rat would easily qualify as AGI even without language skills.

Indeed, but while a full language-using AI is a ways away at least, using language is one thing that's at least sort-of describable/comprehensible as a goal. A rat is a lot more robust than any human-made robot - but how? Overall, I keep hearing these "there's intelligence that's totally unlike what we conceive" arguments, but it seems like computer programs as they exist now either do what a human could do rationally, only more quickly (a conventional program), or heuristically duplicate human surface behavior (neural nets). You could sort-of argue for more, but it's a bit tenuous. Human behavior is very flexible already (that's the point, right?). And assuming AI is hard to create, creating something whose properties we to some extent understand is more likely than creating the wild unknown AI.

Also, "Getting to rat level" might not be the useful path to AGI. If we simply created a rat like thing, we might win the prize of "real AGI" but it would be far less useful than something we could tell what to do the way we tell humans what to do.

A rat can do something else that a neural net can't - it is a self-replicator. Our neural nets don't have self-replication, or a huge, complex environment and timescale to evolve in. Self-replication creates an internal goal for agents: survival. This drives learning. Instead, we just train agents with human-made reward signals. Even a simple environment, like the Go board, when used for training many generations of agents in self-play, easily leads to super-human intelligence. We don't have the equivalent simulator for the real-world environment, nor do we care to let loose billions of self-replicating AI agents in the real world.

Survival is instrumental to any goal. Self-replication isn't the only thing that would create that drive.

Bombs don’t need to survive to be useful.

As a layman, is this just saying we learn by training an intuition of what's what/what's correct, rather than actually calculating deep reality/referencing our entire memory set every time we intake some information or need to solve a task problem? Meaning, we develop tons of rules/heuristics after repeated pattern exposures, and use the simplified rules rather than a deep theory or 'brute-forcing' every possibility until we find one that's right. For example with chess, we don't know at the deepest possible level why this move might be best, it just feels right due to a massive learned intuition.

If that's the case, then to me it seems like AGI is limited by the amount and type of data a NN can be fed. To have an intelligence like Homo sapiens, wouldn't you expect that, no matter the underlying NN, it has to take in a comparable amount of data to what the 5+ human senses take in over a lifetime, plus the actual internal 'learning' (i.e. pattern recognition, heuristics, and intuition), plus some kind of meta-awareness (consciousness) to speed up and aid this process, plus dedicated pieces of the brain such as Broca's/Wernicke's areas?

AI is a confused soup of more or less (un)related concepts: agency, sentience, pattern recognition, unsupervised learning, embodiment, NLP, and goal selection - among others.

IMO the minimal useful definition of AGI would list a set of testable skills that would qualify as AGI, and a more useful definition would be based on quantifiable skill sets that would allow numerical comparisons between humans and AIs.

It seems pointless to speculate when AGI might be a reality when we have only the fuzziest idea what AGI is supposed to look like.

"Similarly with chess - eventually we have good mnemonics for what make good plays, and can play blitz reasonably well."

"let's be real - a lot of human symbolic reasoning actually happens outside of the brain"

I was a chess master at age 10. Let's be real - when I play blitz and bullet chess, I am performing multi-level symbolic reasoning at multiple frames per second. In my brain.

I am not an alien. I can do these kinds of symbolic calculations faster than 99.6% of the population mainly because I learned chess as a kid, making it a "native language", and I got good at it early so I spent much of my youth training my neurons with this perceptual task.

My point is not to claim I'm a genius. There are dozens of players who can school me in bullet the way I can school most people.

My point is that human beings DO symbolic reasoning, it is the core of our intelligence. Being able to take in different kinds of input, organize some of them into relevant higher level clusters, sort the clusters by priority, make a plan to deal with the highest prio clusters, act, rinse and repeat.

Humans simply do not have the computational ability to make decisions based on raw perceptual data in real time. Our brains are designed to act on higher levels of symbolic meaning, and we have perceptual layers to help us turn reality into manageable chunks.

In cognitive psychology this is referred to, not surprisingly, as "chunking": https://en.wikipedia.org/wiki/Chunking_(psychology)

Until DeepMind starts working on anything resembling chunking, I believe they are wasting their time and money.

I put symbolic reasoning in the spotlight because it is something NNs are particularly bad at: discrete data, hard-to-design, often approximate and non-differentiable measurements.

The problem is so inherently hard that we are struggling even to come up with a meaningful task to tell us how badly we are doing. Which comes back to your first point: I think finding the right loss function is a chicken-and-egg situation here. Once you have the loss function in hand, you already know what task and problem you are going to solve, and then it becomes easier. But that is apparently not our current situation.

That is why I think DeepMind has a good reason to go after reinforcement learning; after all, that is how we humans are trained, through exams and feedback.

As to your point about LSTMs, I am not eager to make qualitative claims about whether they can or can't handle short/long-term memory. That is apparently task-dependent, and all the concepts involved are ill-defined.

I don't understand this fixation on symbolic reasoning. Do any other animals practice this? If the answer is no, then it is probably not the most important milestone to AGI or at least not the one we should be currently aiming for. Right now we can not replicate the cognition of a mouse. Feels like we want to go to Mars before figuring out how to build a rocket.

Seconded. Even if animals do symbolic reasoning, they do it on top of hardware based on continuous physical dynamics, more similar to DNNs... So why not build on that platform?

I don't think biological precedent is the only or even most valuable heuristic for deciding where to research intelligence... But I don't see where there is evidence that symbolic reasoning is either necessary or sufficient for AGI, except people describing how they think their brain works.

Related, there are a lot of statements that symbolic or rule based systems do better / as well as / almost as well as neural methods. Citation please, I'd love a map of which ML problems are still best solved with symbolic systems. (Sincerely - it's not that I expect there aren't any.)

> I don't think biological precedent is the only or even most valuable heuristic for deciding where to research intelligence...

Good point, we wouldn't have AlphaZero now if we only relied on biological inspiration. Nature hardly ever performs Monte Carlo Tree Search (though I'm not sure this is entirely true, see slime mold searching for food: https://thumbs.gfycat.com/IdealisticThirdCalf-size_restricte...).

We're also good at trying out various ideas until one sticks. Isn't that MC tree search?
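
For what it's worth, the "try ideas until one sticks" intuition maps most directly onto the rollout phase of MCTS. Here's a toy sketch (my own construction) of flat Monte Carlo move selection on one-pile Nim - a simplified cousin of full MCTS, which would additionally grow a search tree and balance exploration against exploitation:

```python
import random

# Game: one-pile Nim. Players alternately remove 1-3 sticks;
# whoever takes the last stick wins.

def random_playout(sticks, my_turn):
    """Play both sides uniformly at random; return True if 'we' win."""
    while sticks > 0:
        take = random.randint(1, min(3, sticks))
        sticks -= take
        if sticks == 0:
            return my_turn          # whoever just took the last stick wins
        my_turn = not my_turn
    return my_turn

def best_move(sticks, rollouts=2000):
    """Flat Monte Carlo: score each legal move by random rollouts."""
    scores = {}
    for move in range(1, min(3, sticks) + 1):
        if sticks - move == 0:
            scores[move] = 1.0      # taking the last stick is an immediate win
            continue
        wins = sum(random_playout(sticks - move, my_turn=False)
                   for _ in range(rollouts))
        scores[move] = wins / rollouts
    return max(scores, key=scores.get)

random.seed(42)
print(best_move(5))   # optimal play from 5 is to take 1, leaving a multiple of 4
```

With a few thousand rollouts per move this reliably recovers the optimal play, which is the "trying things out until one sticks" part; what it lacks versus human search is any reuse of structure between positions.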

The thing is, whatever the hell it is that human brains actually do in the background to produce our 'understanding' of the world and our ability to synthesize new ways to manipulate it, we're also very good at back-fitting explanations based on symbolic reasoning. So it looks like machines need symbolic reasoning to replicate human abilities, whereas I'd bet a dollar that actually, we're doing something quite different (and messy and Bayesian and statistical) in the background and then, using the same process, coming up with a story to explain our outcome semantically. It's not insight so much as parallel construction.

I fully agree, as I wrote in my other comment in here. Logical symbolic reasoning is usually post-hoc rationalisation built constructively to come to an already held conclusion that "feels right". It's rare that someone changes their mind due to logic, especially if the topic isn't abstract and has real-world consequences and emotional engagement.

> usually post hoc rationalisation built constructively to come to an already held conclusion that "feels right"

Counterfactual reasoning is a promising direction for AI. What would have happened if the situation were slightly different? That means we have a 'world model' in our head and can try our ideas out 'in simulation' before applying them in reality. That's why a human driver doesn't need to crash 1000 times before learning to drive, unlike RL agents. This post hoc rationalisation is our way of grounding intuition to logical models of the world, it's model based RL.
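
That "world model in the head" idea can be sketched in a few lines. This is a hypothetical one-step-lookahead example with a hand-written model; real model-based RL would learn the model from experience and roll it forward many steps:

```python
# "Trying ideas out in simulation": the agent evaluates counterfactuals --
# "what would happen if I did a?" -- in its world model, and never has to
# crash in reality to learn that crashing is bad.

# 1-D road: positions 0..4, with the goal at 4 and a cliff beyond it.
def model(pos, action):
    """World model: returns (next_pos, predicted_reward). action in {-1, 1, 2}."""
    nxt = pos + action
    if nxt >= 5:
        return nxt, -100.0    # imagined crash -- no real damage done
    if nxt == 4:
        return nxt, 10.0      # reached the goal
    return max(nxt, 0), -1.0  # an ordinary step just costs a little time

def plan(pos):
    # Evaluate each action in imagination; act on the best prediction.
    return max((-1, 1, 2), key=lambda a: model(pos, a)[1])

print(plan(3))  # from 3, +1 reaches the goal; +2 would go over the cliff
```

The RL agent that crashes 1000 times is doing the same evaluation, but against the real environment instead of a model - which is exactly the expensive part.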

I think the fixation on symbolic reasoning comes from ignorance of how hard classification is versus how hard pure mechanical symbolic operations are for humans. It's easy to make the mistake of thinking that since a computer can rapidly multiply two numbers together (hard for humans), it is operating at a higher level than human brains.

Turns out this is wrong. Human brains are very efficient.

100% agree. I am terrible at mental arithmetic, but I am exceedingly good at performing symbolic operations playing bullet chess. It's primarily a visual or geometric calculation, not purely abstract like math.

I think most people don't realize that our brains have this ability. But all you need to do is spend a few months learning chess and you'll see for yourself.

> Human brains are very efficient.

At some things, not all.

Subsymbolic systems such as ANNs are clearly good at some things, and symbolic systems are better at others.

It is argued that symbolic reasoning is required for what we might call higher levels of intelligence (let's assume this is correct).

Symbolic systems have struggled with grounding a symbol to something in the physical world, because it's messy and complex - i.e. the area where subsymbolic systems play best.

If we assume that ANNs are approximately akin to natural brains, then can we take it that they are examples of a subsymbolic system able, with the correct architecture, to produce (perhaps the wrong word) a symbolic reasoning system?

Perhaps this emergence on top of the subsymbolic processing is what humans (and others, to varying degrees) possess. Perhaps past approaches (GOFAI) suffered because they went top-down, never reaching down to the subsymbolic level to ground the symbols.

Perhaps ANNs struggle because they don't go up to symbolic reasoning.

Then again, perhaps ANNs (like organic brains) evolved where reaction/perception gave the critical survival advantage, and only much later did symbolic reasoning become possible and beneficial - on hardware that wasn't necessarily developed for it in the most efficient way.

Having believed for 20+ years that ANNs are sufficient for AGI, and possibly offer an elegant solution, I currently think they are not the most efficient path at this time (nor plausible with current compute/hardware, probably not for many years - likely not in my lifetime). Practical progress, IMHO, likely lies in hybridising ANNs and logic (though I'm not referring to hand-baked rules), and I'd even propose that mixed hardware might supersede a pure ANN, or what evolution has provided in the brain.

> And no one should be surprised by this. Recent NN advances don't help address human-style symbolic reasoning at all. All we have is a much more powerful function approximator with drastically increased capacity (very deep networks with billions of parameters) and a scalable training scheme (SGD and its variants).

You think symbolic reasoning is not a function? In what sense do you think 'symbolic reasoning' is a distinct thing from 'function approximation'?

Something very interesting to me about the work DeepMind has been doing is the way they've been combining neural-network intuitions with tree-search reasoning in Go, chess, protein folding, etc.

Hmm, it seems like natural language translation has been getting quite a bit better with statistical techniques, though? I guess it depends what you mean by "only incremental".

I still remember, some time back, Google's GNMT translating the Chinese text of 'I don't want to go to work' into the English 'I want to go to work'. That example alone should be sufficient to showcase how the most advanced machine learning models can fail at the simplest tasks.

It didn't understand the source material; it is just very good at memorizing and faking.

Whenever I attempt to use a translation site to translate more than a paragraph (Facebook or Google), it comes out a garbled mess - even if some sentences are seemingly clear and meaningful. The big thing is that it chokes on idioms: not understanding them, not leaving them as-is, but guessing some clearly wrong meaning. I occasionally find single-sentence posts on Facebook apparently translated astonishingly well, in the sense of being literate English and seeming to reflect the original meaning. But my French or Spanish is quite rough, so the translator could have missed something big - as I know it does when you get into longer texts.

Cada vez que intento usar un sitio de traducción para traducir más de un párrafo (Facebook o Google), sale un lío confuso - no significa que algunas oraciones sean aparentemente claras y significativas. Lo importante es que se atraganta con los modismos, no entendiéndolos, no dejándolos como están, sino adivinando algún significado claramente erróneo. Ocasionalmente encuentro que las publicaciones de una sola oración en Facebook aparentemente traducen sorprendentemente bien, en el sentido de que son en inglés y parecen reflejar el significado original. Pero mi francés o español es bastante duro, así que traducir podría haber pasado por alto algo grande, como sé que sucede cuando se trata de textos más largos.

I'm a native speaker and it seems almost good to me. "algunas oraciones no sean", OK. And "duro" should be "rudimentario"; also "de qué son" lacks the accent. But the rest is acceptable, and it's possible to get a decent translation by modifying only those bits.


Oh, that just means I happen to write in clear, unidiomatic English ;-). Add just a smidgen of irregular usage, contemporary metaphors and such and things can go South pretty quickly.

That's likely just a training problem (translation is often driven by example texts that have many existing translations, which we have plenty of from multinational orgs like the EU, and which naturally don't include a lot of colloquialisms). The inconsistency goes undetected because the models don't extend all the way out to real-world experience - to recognizing text as a narrative of real-world events. I think a much deeper network could do better, but we don't know how to train them; training takes far too long as it is.

I've seen the evolution of translators and there is a big difference in results. We can try more convoluted examples.

Also, do you think that every human can parse those contemporary metaphors better?

Statistically, yes - but it is a shallow translation, with no modelling of what is said. This works astonishingly well for translation, but gives the false intuition that it is meaningful; in fact it is orthogonal to advancing toward systems that comprehend a text well enough to reason their way out of Winograd problems.

It's been "statistical" since about the mid-00s. What's new is that it's now neural.

If humanlike reasoning is the destination for AGI, there's more than just symbolic reasoning to factor in. Emotions are a huge control on human reasoning.

People essentially rely on emotions to make all their decisions. Emotions implicitly represent rapid-fire unconscious decision work.

Again the current popular understanding of the mind separates emotion from thinking. They are not distinct. Emotional processing is another kind of thinking, and it drives the show.

I see emotions as analogous to the value function in RL. It is essentially a prediction of future rewards based on current state and action plan. Artificial RL agents learn emotion as it is related to their tasks and environments.
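
To make the analogy concrete, here's a tabular TD(0) sketch (a toy of my own, not anyone's actual agent): the learned value function ends up "feeling good" about states that reliably lead to reward, before any reward has actually arrived.

```python
# TD(0) value learning: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)).
# A 3-state chain: s0 -> s1 -> s2 (terminal, reward +1); all other rewards 0.

alpha, gamma = 0.1, 1.0
V = {0: 0.0, 1: 0.0, 2: 0.0}   # initial "indifference" about every state

for episode in range(200):
    # each episode is the same walk down the chain: (state, next_state, reward)
    for s, s_next, r in [(0, 1, 0.0), (1, 2, 1.0)]:
        target = r + gamma * V[s_next]
        V[s] += alpha * (target - V[s])

print(round(V[0], 2), round(V[1], 2))  # both approach 1.0: good things lie ahead
```

s0 never yields a reward directly, yet its value rises toward 1.0 - a learned anticipation of future reward, which is roughly the role being ascribed to emotion here.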

Are there ways that an AI practitioner would be able to tell whether a neural network is doing human-style symbolic reasoning?

As mentioned in the original article, being able to reuse part of a network trained on one task, on another different task that shares a subset of concepts, would indicate something like the understanding of a concept has emerged.

Good question. I don't think we do actually.

The only reason I am convinced it is NOT doing a good job is how utterly difficult it is to apply NNs to the dialog generation/management domain in business; oftentimes they behave much worse than rule-based systems.

You'd be able to tell if every brain structure was replicated by a NN analogue (and we understood them sufficiently well). Otherwise you can only use behavioral replication (i.e. Turing tests) to infer it.

IQ tests [1] and one-shot learning come to mind.

[1] https://arxiv.org/abs/1807.04225


"Hence, if it requires, say, a thousand years to fit for easy flight a bird which started with rudimentary wings, or ten thousand for one which started with no wings at all and had to sprout them ab initio, it might be assumed that the flying machine which will really fly might be evolved by the combined and continuous efforts of mathematicians and mechanicians in from one million to ten million years--provided, of course, we can meanwhile eliminate such little drawbacks and embarrassments as the existing relation between weight and strength in inorganic materials. [Emphasis added.] The New York Times, Oct 9, 1903, p. 6."


A couple of the leading minds in AGI say it's a long ways away... just because the universe likes to give us the finger, maybe AGI is on the horizon. Maybe we'll look back at this in 10 years and laugh (if we're here).

Arguments like the above are "platitude-level arguments".

We really don't learn anything about the problem at hand by talking in generic terms. We use these arguments when we want to justify our hopes and feelings, but there is really nothing to learn from them.

Hinton, Hassabis, Bengio and others point out that we can't 'brute force' AI development. There needs to be actual breakthroughs in the field and there may be several decades between them.

AI, brain science and cognitive science are extremely difficult fields with small advances, yet people assume that it's possible to 'brute force' AGI by just adding more computing power and doing more of the same.

Macroeconomics is probably a less complex research subject than AI or brain science, but nobody assumes that you can brute-force a truly great macroeconomic model in a few years just by spending a little more on resources.

> AI, brain science and cognitive science are extremely difficult fields with small advances, yet people assume that it's possible to 'brute force' AGI by just adding more computing power and doing more of the same.

Do people assume that? I mean, I'm sure some people do, but I don't think I've encountered many people, at least not in the AI safety movement, that actually think it's a matter of more hardware power. Some people think it's possible that that's all that's necessary, but I don't think most will say that that's the most likely path to AGI (rather than, as you say, actual breakthroughs happening).

That's pretty much the Singularity conjecture in a nutshell: that exponential advances in computing power will drive an exponential increase in machine intelligence.

It gets more nuanced than that but there are actually very specialised people who argue very forcefully that AGI is a hair's breadth away and we must act now to protect ourselves from it.

Edit: so not "most" people but definitely some very high-profile people. Although granted, they're high-profile exactly because they keep saying those things.

nope https://en.wikipedia.org/wiki/Technological_singularity#Algo...

"Carl Shulman and Anders Sandberg suggest that algorithm improvements may be the limiting factor for a singularity because whereas hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research."

For flight, the components necessary were obvious very early on: you need some kind of structure to hold you aloft and some kind of powered apparatus to propel you forwards. Once those were found, mechanical flight was achieved (and unpowered flight was already possible long before that).

What are the components of intelligence? For example, AlphaZero can solve problems that are hard for humans to solve in the domain of chess, shogi and go- is it intelligent? Is its problem-solving ability, limited as it is to the domain of three board games, a necessary component of general intelligence? Have we even made any tiny baby steps on the road to AGI, with the advances of the last few years, or are we merely chasing our tails in a dead end of statistical approximation that will never sufficiently, well, approximate, true intelligence?

These are very hard questions to answer and the most conservative answers suggest that AGI will not happen in a short time, as a sudden growth spurt that takes us from no-AGI to AGI. With flight, it sufficed to blow up a big balloon with hot air and- tadaaaa! Flight. There really seems to be no such one neat trick for AGI. It will most likely be tiny baby steps all the way up.

Interestingly, Hinton is on record as essentially saying that there's a good possibility that what's currently being done is wrong - and that we need to rethink our approach.

Mainly in the idea/concept of back-propagation. It's something that I've thought about myself. For the longest time, I could never understand how it worked, then I went thru Ng's "ML Class" (in 2011, which was based around Octave), and one part was developing a neural network with backprop - and the calcs being done using linear algebra. It suddenly "clicked" for me; I finally understood (maybe not to the detailed level I'd like - but to the general idea) how it all worked.
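
For anyone who hasn't had that "click" yet, here is the calculation in miniature - pure Python with scalar ops standing in for the matrix math, plus the standard finite-difference sanity check that the backward pass computes the right gradients (a hypothetical minimal net, not the course's actual code):

```python
import math

# One input, two sigmoid hidden units, one linear output, squared-error loss.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss_and_grads(params, x, y):
    w1, w2, v1, v2 = params
    h1, h2 = sigmoid(w1 * x), sigmoid(w2 * x)   # forward pass
    pred = v1 * h1 + v2 * h2
    loss = 0.5 * (pred - y) ** 2
    d_pred = pred - y                           # backward pass (chain rule)
    d_v1, d_v2 = d_pred * h1, d_pred * h2
    d_h1, d_h2 = d_pred * v1, d_pred * v2
    d_w1 = d_h1 * h1 * (1 - h1) * x             # sigmoid' = s * (1 - s)
    d_w2 = d_h2 * h2 * (1 - h2) * x
    return loss, [d_w1, d_w2, d_v1, d_v2]

params, x, y = [0.5, -0.3, 0.8, 0.2], 1.5, 1.0
base_loss, grads = loss_and_grads(params, x, y)

# Finite-difference check: nudge each parameter and compare slopes.
eps = 1e-6
for i, g in enumerate(grads):
    bumped = list(params)
    bumped[i] += eps
    numeric = (loss_and_grads(bumped, x, y)[0] - base_loss) / eps
    assert abs(numeric - g) < 1e-4, (i, numeric, g)
print("analytic gradients match finite differences")
```

Training is then just the SGD loop on top: subtract a small multiple of each gradient, repeat. With matrices in place of the scalars, this is the whole of backprop.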

And while I was excited (and still am) by that revelation, at the same time I thought "this seems really overly complex" and "there's no way this kind of thing is happening in a real brain".

Indeed, as far as we've been able to find (although research continues, and there have been hints and models which may challenge things) - brains (well, neurons) don't do backprop; as far as we know, there's no biological mechanism that would allow backprop to occur.

So how do biological brains learn? Furthermore, how are they able to learn from only a very few examples in most cases (vs. the thousands to millions of examples needed by deep learning neural networks)?

We've come up with a very well-engineered solution to the problem, and it works - but it seems overly complex. We've essentially made an airplane that is part ornithopter, part fixed-wing, part balloon, and part helicopter. Sure it flies - but it's rather overly complex, right?

Humanity cracked the nut of heavier-than-air flight when it finally shed the idea that the wings had to flap. While it was known this was the way forward long before the Wrights, or even Langley (and likely even before Lilienthal), a lot of wasted time and effort went into flying machines with flapping wings, because it was thought that "that's the way birds do it, right?"

So - in addition to the idea that backprop may not be all it's cracked up to be - what if we also need to figure out the "fixed wing" solution to artificial intelligence? Instead of trying to emulate and imitate nature so closely, perhaps there's a shortcut that currently we're missing?

I do recall a recent paper that was mentioned here on HN that I don't completely understand - that may be a way forward (the paper was called "Neural Ordinary Differential Equations"). Even so, it too seems way too complex to be a biologically plausible model of what a brain does...

You're contradicting yourself with your examples. If we didn't manage to fly by imitating birds, why do you care that AI doesn't work the way the brain does? That should be a _good_ sign, if we trust the analogy - right?

I think the best interpretation of their point is that at some point the breakthrough was questioning a fundamental assumption. I think the point about matching real neurons was just to give credence to their hunch that backprop is not quite the right track to be taking.

Behind every successful neural network is a human brain. Neural networks are a tool, an advanced tool for sure, but still just a tool. If we are looking for AGI, and assuming the brain is an AGI, then there are still many differences to resolve. For example, back propagation has not been observed in nature. Nor has gradient descent. So the core mechanisms of learning in nature have yet to reveal their secrets.

> Behind every successful neural network is a human brain.

I've spent a lot of time trying to explain this to people - that there is a confluence between the human brain and the machine. People tend to look at the machine separately, which is a mistake. When I say unequivocally, 'there is no such thing as machine intelligence', I just get blank stares.

Arguably, there are successful brains behind every successful brain, too. Every great innovator and thinker was building off the backs of numerous other thinkers and teachers in their life. Should we be surprised that it's much easier for a tool+human(s) to do better than a tool alone, given we also expect a single human + human(s) as colleagues to do much better? Never mind the whole learning/development process, during which adults spend 22+ years of dedicated effort shaping a functional human worker.

Overall, I'd agree that really powerful tools for specific tasks are going to be the majority of "AI" in the coming years.

Sure, I'd agree. But this brings up the idea of autopoiesis, and then I think things get really murky.

One question that interests me is this: Does intelligence have as a prerequisite a living system, such as a cell? If so, what is our definition of the living system and why is that important? If not, what abstract qualities of intelligence are we really trying to capture?

I think self replication and a vast, rich environment are missing ingredients in current RL agents. The human brain doesn't just do intelligent behaviour, it also builds itself up from a single cell. Neural nets don't grow like that, they are lesser, from a point of view. They lack the constraints of self replication - survival and procreation. The richness of the environment and the presence of specific constraints are essential for the development of intelligence. And lots of time to try things out.

I mean, it's difficult to 'observe' gradient descent; there are no characteristic properties that you can identify without specifying the relevant objective function. But most of the process theories from computational neuroscience are based on some form of gradient descent. Even if it's only implicit, you'll be able to describe the variables of the system as moving against the gradient of some function.
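The "implicit gradient descent" point can be made concrete with a toy example (my own, not from any neuroscience model): a leaky integrator that relaxes toward its input never mentions a gradient, yet its update rule is exactly gradient descent on a quadratic energy function.

```python
# The update  x += dt * (target - x)  looks like plain dynamics, but it is
# gradient descent on E(x) = 0.5 * (x - target)**2, since -dE/dx = target - x.

def energy(x, target):
    return 0.5 * (x - target) ** 2

x, target, dt = 0.0, 3.0, 0.1
energies = []
for _ in range(100):
    energies.append(energy(x, target))
    x += dt * (target - x)   # the dynamics; equivalently x -= dt * dE/dx

# E decreases monotonically along the trajectory, even though the update
# rule was written without reference to any objective function.
assert all(b <= a for a, b in zip(energies, energies[1:]))
print(round(x, 4))
```

This is the sense in which an objective is "only implicit": you can read the same dynamics either as a mechanistic rule or as descent on some function you reconstruct after the fact.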

But yes, it's extremely unlikely that nature implements backpropagation directly, as it relies on non-local gradients.

Your reasoning does not follow. To see why, take something humans already clearly created: Flight. Kerosene-type jet fuel propulsion has not been observed in nature. It is flight nonetheless.

Human flight is not as agile or energy-efficient as a dragonfly's, but it is faster and stronger. Likewise, artificial learning may not be as sample-efficient as the human brain, but it is a learning intelligence nonetheless, and we are already working with the core mechanisms of reasoning and deduction.

Behind every successful brain is a little strand of DNA and some environmental inputs. Somehow a brain might be more than the DNA however.

That's called an emergent property.

Behind every brain is a successful neural network. Or at least that's the promise of connectionism.

If you want AGI you need to give it a world to live in. The ecological component of perception is missing. Without full senses, a machine doesn't have a world to think generally about. It just has the narrow subdomain of inputs that it is able to process.

You could bet that AGI won't manifest until AI and robotics are properly fused. Cognition does not happen in a void. This image of a purely rational mind floating in an abyss is an outdated paradigm to which many in the AI community still cling. Instead, the body and environment become incorporated into the computation.

Tangential: This title is weird. As if no one but the top minds in AI didn't know this? This isn't big news to anyone who has done even just a modicum of AI research.

> As if no one but the top minds in AI didn't know this?

Anecdotal, but nearly all of my programmer friends believe that full-blown AGI is less than a decade away.

Sounds like an opportunity to place some bets and probably win some money. Or maybe they'll back down and widen their intervals -- less than a decade, maybe, but probably longer. Maybe quite a long time too, and maybe after development of some other planet-changing tech.

It's worth thinking about this section of [0] when various AI experts offer predictions:

> Two: History shows that for the general public, and even for scientists not in a key inner circle, and even for scientists in that key circle, it is very often the case that key technological developments still seem decades away, five years before they show up.

> In 1901, two years before helping build the first heavier-than-air flyer, Wilbur Wright told his brother that powered flight was fifty years away.

> In 1939, three years before he personally oversaw the first critical chain reaction in a pile of uranium bricks, Enrico Fermi voiced 90% confidence that it was impossible to use uranium to sustain a fission chain reaction. I believe Fermi also said a year after that, aka two years before the denouement, that if net power from fission was even possible (as he then granted some greater plausibility) then it would be fifty years off; but for this I neglected to keep the citation.

> And of course if you’re not the Wright Brothers or Enrico Fermi, you will be even more surprised. Most of the world learned that atomic weapons were now a thing when they woke up to the headlines about Hiroshima. There were esteemed intellectuals saying four years after the Wright Flyer that heavier-than-air flight was impossible, because knowledge propagated more slowly back then.

[0] https://intelligence.org/2017/10/13/fire-alarm/

If there is AGI all bets are off so to speak, what's a few dollars worth then? If there isn't AGI the money still has value. Doesn't sound like a rational bet, even if you think the odds of AGI soon are high.

Shane Legg (DeepMind cofounder) is public about predicting AGI at 2028. http://www.vetta.org/2011/12/goodbye-2011-hello-2012/

My impression is this is common among DeepMind folks and not an aberration. (See also dwiel's comment elsewhere.) It is super weird for me that Demis Hassabis says AGI is nowhere close. Is he lying? Or does he mean 10 years is not close?

> It is super weird for me that Demis Hassabis says AGI is nowhere close. Is he lying? Or does he mean 10 years is not close?

Maybe he just doesn’t believe the same thing some of his coworkers do? Seems pretty drastic to jump to the conclusion he’s lying if he implies it’s more than 10 years away.

What makes you so confident that you’d say “anyone who’s done a modicum of AI research” would come to the same conclusion as you?

Also, do you believe AGI is currently more a compute/hardware problem, or an algorithmic problem?

The problem is when non-technical people write articles or respond to posts about Deepmind. They think all AIs are the same and that one specific AI achievement means the Matrix is coming.

People lack nuance and critical thinking.

When I was at ICLR a couple years ago, a group of 10 or so researchers from deepmind took a poll of themselves at breakfast and found the general consensus was that AGI was between 5-10 years away.

Research can tell you current efforts fall far short. Research can even tell you current efforts are moving incrementally towards the goal. But research won't tell you when something we don't understand will happen. "A long time", maybe - but overall, it seems like the kind of situation where "probably" as such isn't particularly applicable.

Maybe it's for the people who haven't, so that they don't give all their money to Eliezer Yudkowsky.

I watched the talk linked where that quote apparently comes from, and it was really good. Thanks for sharing that. Ilya specifically says in the talk that it is unlikely but that there is sufficient lack of understanding that we can't rule it out, and that thus the questions around it are worth thinking about.

It bothers me that the quotes in this article are all cut up, in some cases ending when a sentence clearly wasn't finished. It makes it hard to judge what they are really saying here, and I wish the full interview would be published.

I wonder to what extent the data being fed to these models are the issue. Or rather the problem is the systems that generate these data-sets and how representative of reality they are. If we make an app that involves humans and that data is used in a model - to what extent does user experience and other factors warp reality?

Maybe our existing methods are good enough given enough compute to reach AGI but our datasets are too low fidelity and non-representative of the problem space to reach desired results?

The problem is not the data. The problem is the need for high quality data. Current ML is data-driven statistical learning: ML tries to learn a model that describes the distribution. It's impossible to get performance similar to the best reference implementation (the human brain) using this approach. https://i.redd.it/kvvgv6zzhtp11.png

Think of a 16-year-old human:

* it has received less than 400 million wakeful seconds of data + 100 million seconds of sleep,

* it has made only a few million high-level cognitive decisions where feedback is important and the delay is tens of seconds or several minutes (say, a few thousand per day). From just a few million samples it has learned to behave in society like a human and do human things.

* Assuming a 50 ms update interval on average, at the lowest level there are at most 10 billion iterations per neuron (short-term synaptic plasticity acts on a timescale of tens of milliseconds to a few minutes).
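A rough sanity check (my arithmetic, not the original commenter's) of the first bullet, assuming roughly 8 hours of sleep per day:

```python
# How many seconds has a 16-year-old lived, and how many of them awake?
SECONDS_PER_YEAR = 365.25 * 24 * 3600

total = 16 * SECONDS_PER_YEAR     # seconds lived by age 16: ~5.05e8
awake = total * 2 / 3             # wakeful fraction with 8 h sleep/day: ~3.37e8

print(f"total: {total:.2e} s, awake: {awake:.2e} s")
# The "less than 400 million wakeful seconds" bound above holds.
```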

Humans generate a very detailed model of their environment with very little data and even less feedback. They can learn a complex concept from one example. For example, you need only one example of a pickpocket to understand the whole concept.

This pickpocket example seems like symbolic/relational reasoning.

I think we need simulation of other agents' outputs as a primary tool for reasoning. That seems to be how intelligence emerged in evolution.

Something like this: choose desired action > simulate other agents' outputs based on the future state after performing the action > check reward for this action after simulating the outputs of others > perform action or not > update all agents' models and relations in the "world" graph model
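That loop might be sketched roughly like this (entirely illustrative; every name here is a hypothetical stand-in, not an existing system):

```python
# Propose an action, simulate the other agents' responses to the imagined
# future state, score the result, then decide whether to act.

def simulate_others(state, agents):
    # Each agent model predicts its output given the proposed future state.
    return [agent(state) for agent in agents]

def plan_step(state, candidate_actions, transition, agents, reward):
    scored = []
    for action in candidate_actions:
        future = transition(state, action)           # imagined next state
        responses = simulate_others(future, agents)  # imagined reactions
        scored.append((reward(future, responses), action))
    # Highest-value (value, action) pair; the caller performs the action
    # only if the value is acceptable ("perform action or not").
    return max(scored)

# Toy instantiation: state is a number, actions nudge it toward a goal.
goal = 5
transition = lambda s, a: s + a
agents = [lambda s: -0.1 * s]                        # one mildly opposing agent
reward = lambda s, rs: -abs(s - goal) - sum(abs(r) for r in rs)

best = plan_step(0, [-1, 0, 1, 2], transition, agents, reward)
print(best)
```

The "update all agents' models" step is omitted here; in a real system each agent model would be refined after observing the actual outcomes.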

I think the world could be modeled as a simple graph and each agent as a NN.

Then, based on the graph, we could conduct symbolic reasoning and very fast learning (by updating edges).

I think these models also need a good physical simulator and a good understanding of competitiveness.

Is anyone aware of attempts to build AGI along the lines I described?

Humans have natural language as a big competitive advantage (an easy way to compress parts of the world graph and pass it to others - though ambiguous; I think with artificial machines it could be done more efficiently). Another advantage is knowledge storage - also easy to do with machines.

If we can build insect AI, building human AI should be easy.

I also think it would be great if we could just do "world_model = fold(books)" instead of simulation.

Is anyone aware of such efforts/results?

Not sure how I feel about this; for one, the Kurzweilian singularity, which could largely be fueled by the advent of AGI, is both exciting and also scary. The upside could forever change humanity as we know it: greatly increased longevity, the potential to create anything via a universal assembler[0], bringing everything feasible within the laws of physics to reality. Knowledge is the only limiting factor stopping us from doing anything which is physically possible in this universe, and in that light AGI could be an enlightenment.

On the other hand, the ubiquity of knowledge once it's available could lead any maniac to use it for the wrong purpose and wipe out humanity from their basement.

My feelings on the potential of AGI are therefore mixed. I for one have just found my particular niche in the workforce and am finally reaping the dividends from decades of hard work. Having AGI displace me and millions (or billions) of individuals is frightening and definitely keeps me on my toes.

Technology changes the world; my parents both worked for newspapers and talk endlessly about how the demise of their industry after the advent of the internet is so unfortunate. Luckily for them they are both at retirement age so their livelihood was not upset by displacement.

If AGI does become a thing, it will be interesting to see how millennials and gen Z react to becoming irrelevant in what would have been the peak of their careers.

[0] https://en.wikipedia.org/wiki/Molecular_assembler

I have a small experiment to discover if AGI is already a solved puzzle.


Not to mention that we don't even know if general intelligence exists. All we know is that mental abilities tend to correlate, but not why they tend to correlate. And if you think about designing machines, in general, the idea of general intelligence is utterly ridiculous. Does a fast car have general speediness? Of course not, it has dozens or hundreds of discrete optimizations that all contribute in some degree to the car being faster.

I'm not sure you and the OP mean the same thing by "General Intelligence".

It seems clear that autonomous systems which can apply their computational machinery to a diverse range of problems, and can, in a diverse range of settings, formulate instrumental goals as part of a plan to attain a final goal, do exist.

Because that's what humans are, at least some of the time.

But if human performance in these regards never exceeded what the pinnacle of today's AI performance is, we would not regard them as intelligent in a general sense, either.

Well, we have general purpose processors. You can prove they can run any algorithm you want (i.e. are Turing complete), but also, for practical problems (i.e. the ones encountered in engineering solutions in our planet and in our universe), they give reasonable max-min performance. Analogously I don't think 'AGI' is entirely useless -- you'd expect an AGI to have some properties like being able to solve reasonably well problems found in nature and society, maybe have a motivational framework distinguishing it as a separate entity, some knowledge about the world, etc.

edit: In terms of Turing-completeness analogues, the best candidate for AGI I think would be simply brute force capability: can this agent try all possible solutions until it solves this problem? (obviously using a heuristic to prioritize) -- that is, it'd employ a form of Universal Search[1] (aka Levin Search). Humans don't necessarily pass this test rigorously because we'd always get bored with a problem and because we have finite memory. But then CPUs are not truly Turing complete either (it's "just" a good model).

[1] http://www.scholarpedia.org/article/Universal_search
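The brute-force-with-a-heuristic idea can be illustrated with a toy (mine, not Levin's actual formulation): enumerate all candidate "programs" in order of length, shortest first. Real Levin search also budgets runtime per candidate (giving shorter programs exponentially more time); here every candidate halts, so the budget is omitted.

```python
from itertools import product

OPS = ["x + 1", "x * 2", "x * x", "x - 1"]   # a toy instruction set

def make_program(instrs):
    def run(x):
        for instr in instrs:
            x = eval(instr)                  # apply each instruction in turn
        return x
    return run

def enumerate_search(is_solution, max_len=4):
    for k in range(1, max_len + 1):          # phase k: all sequences of length k
        for instrs in product(OPS, repeat=k):
            if is_solution(make_program(instrs)):
                return instrs
    return None

# Find a program with f(3) == 10 and f(0) == 1.
found = enumerate_search(lambda f: f(3) == 10 and f(0) == 1)
print(found)   # ('x * x', 'x + 1')
```

It is general in the Turing-complete-ish sense the comment describes: given any checkable goal, it eventually finds a program meeting it, at exponential cost.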

Great interview with Hassabis from the BBC. It's meanderingly biographical, with insights about his path through internships, curiosity, startups, commitment, burnout, trusted team mates and eventual successes ...


Demis Hassabis (true) statements here would be much more credible if DeepMind wasn't currently making a mint by promoting AlphaZero to the masses as a "general purpose artificial intelligence system".

Don't believe me? Check out this series of marketing videos on YouTube by GM Matthew Sadler.

1. “Hi, I’m GM Matthew Sadler, and in this series of videos we’re taking a look at new games between AlphaZero, DeepMind’s general purpose artificial intelligence system, and Stockfish” (1)

2. “Hi, I’m GM Matthew Sadler, and welcome to this review of the World Championship match between Magnus Carlsen and Fabiano Caruana. And it’s a review with a difference, because we are taking a look at the games together with AlphaZero, DeepMind’s general purpose artificial intelligence system...” (2)

3. “Hi, I’m GM Matthew Sadler, and in this video we’ll be taking a look at a game between AlphaZero, DeepMind’s general purpose artificial intelligence system, and Stockfish” (3)

I could go on, but you get my point. Search youtube for "Sadler DeepMind" and you'll see all the rest. This is a script.

But wait, you say, that's just some random unaffiliated independent grandmaster who just happens to be using an inaccurate script on his own, no DeepMind connection at all! And to that I would say, check out this same random GM being quoted directly on DeepMind's blog waxing eloquently and rapturously about AlphaZero's incredible qualities. (4)

Let's be clear. I am in no way dismissing AlphaZero's truly remarkable abilities in both chess and other games like go and shogi. Nor do I have a problem with Demis Hassabis making headlines for stating the obvious about deep learning (it's good at solving certain limited types of puzzles, but we are a long way from AGI - why is this controversial?).

My problem is that Hassabis is speaking out of both sides of his mouth. Increasing DeepMind/Google's value by many millions with his marketing message, while acting like he's not doing that. It feels intellectually dishonest.

To solve this, all DeepMind needs to do is stop instructing its grandmaster mouthpieces to refer to AlphaZero as a "general purpose artificial intelligence system". Let's see how long that takes.

(1) https://www.youtube.com/watch?v=2-wFUdvKTVQ&t=0m10s (2) https://www.youtube.com/watch?v=X4T0_IoGQCE&t=0m05s (3) https://www.youtube.com/watch?v=jS26Ct34YrQ&t=0m05s (4) https://deepmind.com/blog/alphazero-shedding-new-light-grand...

Edit: You don't have to go far to see that DeepMind are pushing AlphaZero as "general purpose". This is the title of their Science paper on it:

A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play


"General" as in what? As opposed to reinforcement learning, in er, general? As opposed to other ANN architectures?

>> I am in no way dismissing AlphaZero's truy remarkable abilities in both chess and other games like go and shogi.

More to the point- it's only chess, go and shogi; not games "like" those.

The AlphaZero architecture has the structure of a chessboard and the range of moves of pieces in chess, go and shogi hard-coded, and you can't just take a trained AlphaZero model and apply it to a game that doesn't have either the board or the moves of those three games.

To be blunt, AlphaZero has mastered chess, go and shogi, but it can't play noughts-and-crosses.

I suspect "general purpose artificial intelligence system" means the same architecture applied to 3 games (western Chess, Shogi, Go).

Wouldn't that be a "general purpose game-playing intelligence system" at best? (without mentioning that it only applies to certain types of perfect-information games)

Maybe it's just me, but "general purpose artificial intelligence system" sounds like, well, General Artificial Intelligence. Which sounds like Artificial General Intelligence, which is the holy grail.

> Maybe it's just me, but "general purpose artificial intelligence system" sounds like, well, General Artificial Intelligence.

Well, it doesn't sound like that at all to me and I think the phrasing is fair. Also, it folds proteins.

I don't think they mean the two in the same way. AlphaZero is "general purpose artificial intelligence" because if you formulate a problem in the right way and then throw a server cluster at it for a few weeks, it often comes back with pretty good performance at solving that problem. It's probably our current best crack at creating AGI, but it's a long way from a machine that can take a very high level goal and figure out the rest for itself, which is what we usually mean by "AGI" - not just a machine that answers multiple questions, but a thing analogous to a human mind which can analyse new things, infer properties and mechanics, generalise those to new contexts, and apply that knowledge to achieve new outcomes.

“if you formulate a [very specific type of problem involving perfect information games] in the right way”

Hey, at least it's a type of problem rather than a problem itself.

I haven't studied enough myself yet to know the answer to this one, but what are the differences between AlphaZero and the OpenAI 5 DOTA team's approach? Would it be possible to apply AlphaZero to DOTA?

DOTA is partially observable, so I believe AlphaZero can't be applied, as-is.

Agreed. Poker is another interesting case of "partial information" game. There is some discussion of this in the links below. I suspect that AlphaZero could make a decent poker player with a non-trivial amount of tweaking.

1. https://www.technologyreview.com/s/601157/could-alphago-bluf...

2. https://www.quantamagazine.org/why-alphazeros-artificial-int...

I wonder what would happen if you made DOTA totally observable. You could probably reformat it as 5 pieces for a player instead of 5 people on a team or the like. It would probably change the game too much to be recognizable as the same, but I think it would be an interesting experiment if nothing else.

If AGI (an artificial human mind with direct access to computational power of classic computers and whole Internet of information) was possible then we would probably already be living in the Travelers TV show.

... was possible then we would probably already be living in the Travelers TV show.

How do you know we aren't?

BTW, if you hadn't noticed, Season Three just came out on Netflix. I'm champing at the bit to binge watch that... :-)

As I always ask regarding this sort of story, why do we believe human intelligence is computable? The only answer I've heard is the materialist presupposition and sneers at any other metaphysic as "magic," which is not exactly a valid form of argument.

As an alternative, the human mind could be some sort of halting oracle. That's a well defined entity in computer science which cannot be reduced to Turing computation, thus cannot be any sort of AI, since we cannot create any form of computation more powerful than a Turing machine. How have we ruled out that possibility? As far as I can tell, we have not ruled it out, nor even tried.

This line of reasoning can be applied to almost any fundamental scientific discovery before it was made.

Why do we believe man can make fire? Well, dammit, we WANT to make fire. Let's figure out how to do it!

Finally, if we were able to explain the brain well with "metaphysics" it would then be just "physics". It seems that all you are saying here is that there is a mechanism that is not yet understood and it may be fundamentally different than other things we have studied so far (which seems unlikely, I might add).

Well, if we forced our presuppositions on our experiments we would not have discovered gravitation, electricity, relativity, quantum mechanics, etc. Each of those advances is a metaphysical paradigm shift over previous views of reality. E.g. the original materialism was the billiard ball model with a random swerve, due to Epicurus. No one believes in the original materialism anymore, since it has been falsified in so many ways. Instead, we have redefined materialism to include a whole host of causal forces that were previously ignored or unknown.

Similarly we can mathematically and empirically differentiate between halting oracles and Turing machines, so why not leave both possibilities open as scientific explanations, instead of doubling down on the Turing machine model? Call halting oracles materialistic if it makes you feel better.

We know the brain is doing something - if you don't want to call it computation, then you might as well call it magic.

Are you positing that the only alternatives are computation and magic?

Seems like a false alternative between computation and magic.

There are other possibilities. For example, there can be an immaterial mind that operates as a halting oracle and interfaces with the world through the brain. Halting oracles are well defined, and we can empirically test for their existence. So, no reason why we have to assume everything humans do is reducible to some sort of automata. The only reason we make the assumption is because of prior materialistic commitments.

UPDATE: I've been rate limited for some reason, so here is my response whether the mind intuitively seems to be a halting oracle.

1. It's obvious there are an infinite number of integers, because whatever number I think of I can add one to it. A Turing machine has to be given the axiom of infinity to make this kind of inference, it cannot derive it in any way. This intuitively looks like an example of the halting oracle at work in my mind. Or, an even more basic practical example: if I do something and it doesn't work, I try something else. Unlike the game AIs that repeatedly try to walk through walls.

2. We programmers write halting programs with great regularity. So, it seems like we are decent at solving the halting problem. Also, note that it is not necessary to solve every problem in order to be an uncomputable halting oracle. All that is necessary is being capable of solving an uncomputable subset of the halting problems. So, the fact that we cannot solve some problems does not imply we are not halting oracles.

Roger Penrose basically suggests what you say in "The Emperor's New Mind". Roughly, it says that the brain (likely, according to him) uses quantum computation, and so we can't make an AI out of a classical computer.

The practical flaw with this argument, of course, is that you could instead make an AI that itself uses quantum computation. I asked Roger Penrose about this at a university philosophy meetup over 20 years ago, and he agreed.

Likewise, if there is some kind of halting oracle, perhaps we can work out how the brain creates and connects to that oracle, and make our AI do the same.

Meanwhile, there is no physiological or computational evidence for this possibility. We should keep hunting though, as that's the same thing as understanding the detail of how the brain works!

Well, quantum computation is weaker than a nondeterministic Turing machine, so not the same thing I'm saying. Penrose correctly identifies the mind cannot be a deterministic Turing machine, but his invocation of quantum mechanics does not solve the problem he points out. A DTM can simulate an NTM and hence anything inbetween, so the inbetween of quantum computation does not solve anything.

The fundamental problem Penrose identifies boils down to the halting problem, which requires a halting oracle to be solved. Hence, a halting oracle is the best explanation for the human mind, and no form of computation, quantum or otherwise, suffices.


Since I'm rate limited, here is my answer to the replier's comment:

A partial answer: the mind has access to the concept of infinity, and can identify new, consistent axioms. Other possibilities: future causality and ability to change the fundamental probability distribution.

But, it's also important to note that we don't have to answer the "how" question in order to identify halting oracles as a viable explanation. We often identify new phenomena and anomalies without being able to explain them, so the identification is a first step.

>But, it's also important to note that we don't have to answer the "how" question in order to identify halting oracles as a viable explanation. We often identify new phenomena and anomalies without being able to explain them, so the identification is a first step.

I don't think it constitutes an explanation at all, let alone a viable one, if all it does is beg the same question.

The problem was already identified: "how does human cognition work?" You've renamed it: "how does this supposed halting oracle work?" That might be an interesting framing but it is not a viable explanation of anything until you've proved that such oracles exist or in other words, solved the halting problem.

>Hence, a halting oracle is the best explanation for the human mind

What does it explain though? That the human brain has a black box capable of solving certain problems... how exactly?

Indeed - it's essentially the homunculus fallacy, or magic dressed up in the language of knowledge.


Your theory doesn't seem falsifiable short of actually making an agi whose very definition is notoriously slippery.

You might as well just say the mind resides in the soul.

> We programmers write halting programs with great regularity.

Making any program a halting program is trivial: add an executed-instructions counter and halt the program when the counter reaches some value. Proving that an arbitrary program halts is an entirely different task.

> So, the fact that we cannot solve some problems does not imply we are not halting oracles.

If it's allowed to not solve some problems, then I can write such an oracle:

Run a program for a million steps. If program has halted, output "Halts", otherwise output "Don't know".

It can't solve some problems, but by your logic it doesn't imply it's not a halting oracle. You are missing something.
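For what it's worth, that bounded checker really is trivially implementable. A sketch (my own; "programs" are modeled as Python generators, one yield per step):

```python
# Run a computation for a bounded number of steps; answer "Halts" if it
# finished, "Don't know" otherwise. This is computable, hence no oracle.

def bounded_halting_check(program, max_steps=1_000_000):
    gen = program()
    for _ in range(max_steps):
        try:
            next(gen)                 # advance the "program" by one step
        except StopIteration:
            return "Halts"            # it finished within the budget
    return "Don't know"               # budget exhausted; no verdict

def halts_quickly():
    for _ in range(10):
        yield

def loops_forever():
    while True:
        yield

print(bounded_halting_check(halts_quickly))   # Halts
print(bounded_halting_check(loops_forever))   # Don't know
```

Which is exactly the point: answering "Don't know" on hard cases is cheap, so "solves an uncomputable subset of halting problems" has to mean something stronger than this for the oracle claim to have content.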

There doesn't seem to be much reason to challenge the assumption: humans aren't good at solving any problems that we know to be uncomputable (e.g. the halting problem). Sure, it's a thing you could investigate, but the explanation for why it's not a popular topic is that it doesn't seem like a fruitful area of research.

Also, personally what my mind is doing doesn't feel like it's invoking an oracle for my problem solving. Generally when the search space for a problem that I'm solving increases I experience the kinds of blowups in the difficulty that would arise from me following an algorithm. Now, not everybody is the same. Do you feel like your problem solving calls an oracle?

> A Turing machine has to be given the axiom of infinity to make this kind of inference, it cannot derive it in any way.

Why not? Are you aware of a proof of this? I think you are limiting the capabilities of Turing machines without evidence.

> Unlike the game AIs that repeatedly try to walk through walls.

Game AIs' capabilities are a small subset of what a Turing machine can do. Most game AIs can't do speech recognition or solve math equations either.

> We programmers write halting programs with great regularity.

So do other programs. Writing a halting program is not an uncomputable problem, and doesn't require solving the halting problem.

I just want to add that materialistic commitments don't even necessarily imply computability. Entropy, for example, doesn't seem to be computable[1].

[1] https://arxiv.org/pdf/0808.1678.pdf

Part of me almost hopes it is a halting oracle of some sort because then we could start looking into either hooking up multiple brains to a single oracle or a single brain to many oracles.

Yep and considering a very high majority of our work does not require GI we still have huge AI job disruption looming.

Progress-skeptics are always wrong - except for artificial intelligence.

I'm not even convinced that a real AI is possible with conventional computer hardware or anything remotely similar to it. Not even considering software I get the impression there is a fundamental limitation of hardware.

I'm not convinced we've even defined the problem space well enough to solve it. What is the concrete measure (something to target) for intelligence? If we develop general intelligence, will it be human, dog, or fish?

Shane Legg (co-founder DeepMind) and Marcus Hutter (Schmidhuber pedigree) defined machine intelligence in this canonical paper from 2007: https://arxiv.org/abs/0712.3329

> Universal Intelligence: A Definition of Machine Intelligence

> A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: We take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.
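If I remember the paper correctly, its central definition can be stated compactly (notation may differ slightly from the paper): the intelligence of an agent π is its expected performance across all computable environments, weighted by simplicity:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

where E is the set of computable reward-summable environments, K(μ) is the Kolmogorov complexity of environment μ, and V_μ^π is the expected total reward that agent π achieves in μ. So simple environments dominate the score, and a maximally intelligent agent does well across as many environments as possible.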

General intelligence is usually meant in relation to humans, but you are correct in noting that it is a spectrum, not a binary.

I think this is the real answer. When we developed flight, the measure wasn't "can we fly like birds?" We still haven't achieved that even today, but we fly in otherwise unimagined, but equally powerful ways.

We seem to be looking at intelligence in humans and thinking we need to develop that, without first defining what intelligence actually is. We don't exist in isolation, and it's likely that the components of intelligence exist to varying degrees in other organisms. In the same way that birds, bats, gliders and insects all have wings that generate lift, what are the things that we have in common with other animals?

It seems like the difference between humans and dogs is substantially smaller than the difference between computers and dogs, so if we figure out dog-level intelligence, human-level intelligence is right around the corner. Also, the intelligence is likely to be of a different kind. Someone had an interesting point that training an ML system on a million pictures isn't like sending a million interns to look at one picture each; it's like sending one intern to look at a million pictures. When you do that, you can derive insights that are significantly different than if you look at 1 picture, or 10, or 100.

I'm not convinced any of those creatures have general intelligence. I'm similarly unconvinced that we'd recognise general intelligence if we saw it.

Are humans even capable of general intelligence? I feel like the philosophical question of determinism vs free will is unsolvable.

The baseline of human capability would definitely still be impressive.

Isn't that exactly what an AGI would say once it takes over the brains of leading scientists?

No - it would post what you just said on HN instead. :)

Seriously - that's a wicked funny post you had there!

Nice try, Mr. AI but you won’t escape detection by pretending to be a jokester who grew up in Boston.

As someone who files taxes every year, I'm quite certain that Adjusted Gross Income is a reality ;)

I don't believe in the idea of AGI for Dreyfusard reasons, but it's possible that it could emerge from something completely different than deep learning.

For all we know, Isabelle and Coq could be speeding through the road to consciousness but we're busy having a blast doing Computer Vision pretending it's AI.

I'm used to random downvotes for comments about current controversies - I'll say things about income inequality that people won't like, and it's okay, you have your politics. Deep learning shouldn't be part of your politics, it either does stuff or doesn't.

Deep Learning is amazeballs for Computer Vision. It's fun because people like looking at pictures. But sufficiently prodded Isabelle proves theorems, I've seen it first hand, and the "sufficient prodding" is way underdeveloped yet. At one point backpropagation was dead too.

The computational power of the hardware is getting really close to what a human brain is capable of (on an exponential scale, anyway). If "nowhere close" means not in the next 5 years then sure.

Over the medium term I'm not sure AI researchers are the best people to ask. They are completely dependent on how much power the electrical engineers give them - I doubt they have a deeper understanding of what a doubling or quadrupling of computing power will do than any programmer learning about neural networks.

> The computational power of the hardware is getting really close to what a human brain is capable of

Why do you say that? AFAIK computing architecture and brain architecture are completely different. How would you even begin to compare their power?

Well, Wikipedia was my source [0] and it links http://hplusmagazine.com/2009/04/07/brain-chip/ as its source.

Google has TPUs that are within a factor of 3 of the estimated power required to simulate a brain, so the technology is reaching the ballpark. Given that brains were evolved, the part that does symbolic thinking is probably "easy to stumble on" in some practical sense.

[0] https://en.wikipedia.org/wiki/Computer_performance_by_orders...

Faster hardware will help, but I'm not convinced that it's the answer. OpenAI Five used on the order of 2000 years of experience to train their agent. There are clearly still huge algorithmic gains to be had.

Given how we've managed to improve on nature in other domains (see solar cell efficiency, for example), I think that if we can figure out how intelligent organisms manage to learn so quickly we can likely beat nature's efficiency.

> I doubt there is a deeper understanding what a doubling or quadrupling of computer power will do than any programmer learning about neural networks.

Sure they do. They just hook up four times as much compute, or simulate whatever they want to do in four times as much time. A slow AGI would still be an AGI. But we do not see anything like that when we use four times as much power as in the control. It is still nowhere near.

I take huge offense to this article. They claim that when it comes to AGI, Hinton and Hassabis "know what they are talking about." Nothing could be further from the truth. These are people who have narrow expertise in one framework of AI. AGI does not yet exist, so they are not experts in it, in how long it will take, or in how it will work. A layman is just as qualified to speculate about AGI as these people, so I find it infinitely frustrating when condescending journalists talk down to the concerned layman. This irritates me because AI is a death sentence for humanity - it's an incredibly serious problem.

As I have stated before, AI is the end for us. To put it simply, AI brings the world into a highly unstable configuration where the only likely outcome is the relegation of humans and their way of life. This is because of the fundamental changes imposed on the economics of life by the existence of AI.

Many people say that automation leads to new jobs, not a loss of jobs. But automation has never before encroached on the sacred territory of sentience. This is a totally different ball game. It is stupid to compare the automation of a traffic light to the automation of the brain itself. It is a completely new phenomenon and requires a new, from-the-ground-up assessment. Reaching for the cookie-cutter "automation creates new jobs" simply doesn't cut it.

The fact of the matter is that even if most of the world is able to harness AI to benefit our current way of life, at least one country won’t. And the country that increases efficiency by displacing human input will win every encounter of every kind that it has with any other country. And the pattern of human displacement will ratchet forward uncontrollably, spreading across the whole face of the earth like a virus. And when humans are no longer necessary they will no longer exist. Not in the way they do now. It’s so important to remember that this is a watershed moment — humans have never dealt with anything like this.

AI could come about tomorrow. The core algorithm for intelligence is probably a lot simpler than is thought. The computing power needed to develop and run AI is probably much lower than it is thought to be. Just because DNNs are not good at this does not mean that something else won't come out of left field, either from neurological research or pure AI research.

And as I have said before, the only way to ensure that human life continues as we know it is for AI to be banned. For all research and inquiries to be made illegal. Some point out that this is difficult to do, but like I said, there is no other way. I implore everyone who reads this to become involved in popular efforts to address the problem of AI.

Stating something more times doesn't make it true. Everything you've written is pure speculation, and alarmist at that. There's no proof that AGI is even possible, and if it is possible there's no proof that it will end humanity.

Clearly, AGI-level intelligence is possible, because human brains exist.

So unless you posit that a function has to rely on its materialization (that there is something untouchably magic about biological neural networks, and that intelligence is not multiply realizable), it should be possible to functionally model intelligence. Nature shows the way.

AGI will likely obsolete humanity: either deprecate it, or consume it (make us part of the Borg collective). Heck, even a relatively dumb autonomous atom bomb or computer virus may be enough to wipe humanity from the face of the earth.

It's not at all clear that AGI is technically feasible. Human brains exist but we have only a shallow understanding of how they work.

Even if we assume for the sake of argument that AGI is possible, there's no scientific basis to assume it will make humanity obsolete. For all we know there could be fundamental limits on cognition. A hypothetical AGI might be no smarter than humans, or might be unable to leverage its intelligence in ways that impact us.

Nuclear weapons and malware can cause damage but there's no conceivable scenario where they actually make us extinct.

Something can be possible while still not being technically feasible.

I agree our knowledge is currently lacking, but I see no reason why it will never catch up.

There are fundamental limits on cognition. For one, our universe is limited in the amount of computing energy available. Plenty of problems can be fully solved, to the point where being ever more intelligent no longer matters (beyond a certain point, two AGIs will always draw at chess). Another limit is practical: the AGI needs to communicate with humans (if we manage to keep control of it), so it may need to dumb itself down so we can understand it.

Even an AGI only as smart as the smartest human will greatly outrun us: it can duplicate itself and focus on many things in parallel. Then the improved bandwidth between AGIs will do the rest (humans are stuck with letters and formulas and coffee breaks).

Manually deployed atom bombs and malware can already wreck us. No difference with autonomous (cyber)weapons.

Even anti-alarmists don't ask for proof that AGI is possible. Obviously it is possible. Speculation is the best you get, because nobody is going to be able to prove anything. We haven't proven that global warming is caused by humans, but it's still worth being proactive about greenhouse gases. When something is extremely dangerous, you don't wait around for someone to prove it beyond any shadow of a doubt. You probably also think that god exists because nobody can prove otherwise?

And what does alarmist even mean? Do you call global warming advocates alarmists? It's such an annoying nonsense word that boils down to name-calling, really. Discuss the merits of my actual argument. If you think my speculation is wrong, point out a flaw in the chain of logic that leads to my conclusion. Don't just wave your hand and say "you can't prove it" like some evangelical christian talking about god or global warming. Seriously infuriating when there is so much at stake.

I ask for proof that AGI is possible. Show me a computer as smart as a lab mouse and then I'll take your concerns seriously.

The analogy to anthropogenic global climate change is a non sequitur. Climatologists have created falsifiable theories which make testable predictions.

And you really have no clue about my personal religious beliefs. Calm down and take a seat.

> I ask for proof that AGI is possible. Show me a computer as smart as a lab mouse and then I'll take your concerns seriously.

I would argue that, unless you can show why AGI is not - in principle - possible, that the null hypothesis would be that it is possible. Unless we veer off into some weird mysticism, it seems that the human brain turns energy and matter into intelligence somehow, operating according to the physical laws of the universe... why shouldn't it be possible to build something else that does the same?

You can't prove a negative. At this point we don't even know what the principles are.

If you're unwilling to provide me with a prototype equivalent to a rodent mind then I'll settle for a fully developed theory of human cognition. Let me know when you've got one. At least that would give researchers some guidelines to know whether they're making forward progress toward AGI.

I never said anything about your religious beliefs, only that your need for hard proof one way or the other is similar to the need displayed by people who want to believe in god and who don't want to believe in global warming. There are many such examples, but the bottom line is that demanding that I show you a sentient computer right now is not reasonable. We have the ability to reason about things without developing a formal, mathematical proof. And in this case, there is no possibility that anyone could ever prove that AI is possible or impossible. Nobody can prove what it will do. If you accept nothing less than hard proof, then you are opting out of survival in basically any situation whatsoever. Sometimes logical reasoning is all you have, and this is a case of that.

Yeah, and before the measurements were done, before enough time had elapsed for meaningful change to be measured, all there was was people like me screaming at people like you, trying to make you see. When AI comes you'll have your proof, but it will be too late.

No, that's not how it works. Currently a belief in the inevitable arrival of AGI is a secular religious faith with no basis in hard science. We have plenty of real problems to worry about (like anthropogenic global climate change) before we waste time making public policy to prevent something that may never happen anyway. Your concerns are silly, akin to worrying about an alien invasion when we don't even know if extraterrestrial life exists.

You are an outlier. Even Hassabis and Hinton believe that AGI is inevitable. I've never met anyone who thought otherwise. The only disagreement is about the timescale and the result of its existence. If nothing I've said can convince you that AI is possible, then I would just ask you to consider that the human brain exists and that we will eventually figure out how it works.

You are correct in that AI experts may not be the best predictors for AGI. For one, they spend their lives working towards the goal of AGI, so it would require a huge amount of cognitive dissonance for them to say that AGI is impossible or very, very far off on the horizon.

Philosophers and futurists are better suited to hypothesize an AGI timeline.

But you take it too far by saying it is anyone's game.

Game theory, security, and economic competition make it impossible to globally ban AI. The incentives to automate the economy (compare the AI revolution with the industrial revolution) and to weaponize AI (a Manhattan Project for intelligence) are just too big. We are already seeing that the US focus on fair and ethical AI puts it at a disadvantage against China and Russia. AGI will likely require pervasive surveillance of the populace, but the Luddites are holding this back.

I suggest you learn to stop worrying about the bomb, and start planning for its arrival.

There are non-doomsday possibilities for AGI. Imagine a super-intelligent AI that was built from the start to value humanity and our way of life, and from there it chooses to protect us and enable us all to do what we want more. (Of course, this could go in dystopian directions, but even those are better than extinction.) A super-intelligent AI that is built to value humans could decide to "uplift" humans' intelligence to be able to keep up with it in places that humans desire to keep up with it.

If we can figure out decision theory and how our values work, then when we figure out AI, we can hopefully build it to be aligned with our values from the start, instead of blindly hoping it happens to play nice with us instead of brushing us off like ants.


You don’t even begin to comprehend what I’m saying. You need to think about this more deeply.

So what if it is possible to create a benevolent AI? Nobody said this isn't possible or even likely. We can also invent a machine that scrubs all the moss off of stones. Just because it's possible for something to exist doesn't mean it's going to proliferate in the free market of the world. The only things that are important are the following facts:

1: we will enter an unstable configuration where any AI implementation that can exist will exist

2: the AI implementations that proliferate will be those that are not hamstrung by being forced to include humans in the loop

3: humans will be out of the loop for every conceivable task and will therefore not enjoy the high standard of living that they do in 2018

I disagree with points 1 and 2 (and therefore 3). If a friendly AI is built and matures first, then it can protect us from unfriendly AIs trying to mature and take power. (Others call this idea a "singleton".)

That’s a really good point.

> the only way to ensure that human life continues as we know it is for AI to be banned

Is that because you think banned things do not happen? Even if the thing that is banned could confer a massive advantage to the entities developing it?

I think AGI is unlikely to be a thing in my lifetime, or even my children's. But if I were worried about it, I'd probably focus on developing a strategy to create a benevolent intelligence FIRST, rather than try to prevent everyone else from ever creating one via agreements and laws.

I appreciate that you actually suggest a solution. Nobody knows when AGI will come but it could come tomorrow. It could come in 1000 years. No harm in being proactive.

Developing a good AI first is useless because, as I have said, the creation of AI puts us in an unstable configuration where bad AI will crop up regardless. Keeping bad AI from existing is infinitely easier when AI does not exist as a technology than when it's a turnkey thing.

AI is likely simple and won't require much processing power after all, so it will be impossible to ban: a ban would imply more regulation and surveillance than would be sustainable. Also, global warming will likely kill us anyhow. The rational conclusion is to enjoy our supermarkets and warm showers as long as they last. They will probably last longer if we deny these threats, so as to avoid causing mass panic and nihilism.

We can survive global warming. We can fix it. We can come back from it. We will never come back from AI. It's not impossible to ban AI, and we would be stupid to assume it's impossible instead of trying to find out through an effort to save ourselves.

"These are people who have narrow expertise in one framework of AI." Proof that you don't know who you are insulting

I don’t remember insulting anyone. And how is that not true?

Geoff Hinton is the grandfather of deep learning. Virtually all the modern advancements in AI can be traced back to him and his lab.

What is your track record in AI? It sounds like you have no technical knowledge of AI. For example do you understand the concept of cross entropy loss?

Yes, the grandfather of deep learning. This is exactly my point. All the modern advancements in AI have nothing to do with AGI.

And by the way I happen to know about both of the subjects of the article.


The answers to your questions are:

1: I never said recent advancements are directly leading to AGI. Not in ML.

2: I don’t hold any contradictory ideas in my head

Your comment is aggressive and unpleasant, which is an offense that should get you flagged. I constantly get flagged for making comments like yours because I happen to have an unpopular opinion. I can't believe I put up with all this for YOUR benefit. Do you think I derive pleasure from trying to make people like you see things clearly?

As I have said so many fucking times, a layman is just as qualified as an ML expert to talk about the impact of AI on the world. Just because someone is an expert in a field that is tangentially related to AGI doesn't mean a god damn thing. This is not a discussion about modern ML. But just to make it super easy for you to understand, let me put it this way: even if someone here were an expert in every detail of the theory and practice of implementing an AGI, that person still wouldn't know any more than a layman about the consequences of AI. The point you so annoyingly cling to is like a car mechanic thinking he is the ultimate authority on how cars will impact the world. You don't have to be a car mechanic to understand and reason about the concept of transportation. Ultimately, the most qualified person to talk about that is an economist or the like. Not you. You don't have a deeper understanding of the concept of AGI than literally anyone else.

It is to the benefit of humanity that you observe the deep, fundamental changes that AGI will cause in the basic economics of human life. Dismissing it as “too far off” or “alarmist nonsense” is irresponsible.

So, according to you, ML is only tangentially related to AGI. Therefore we should listen to you, not ML experts - because you are a layman.

Even if I accept your absurd logic quoted above - how do you explain your contradictory goal of stopping all ML research? All the top AI experts are conducting ML research, which according to you is only tangentially related to AGI.

Going further, no AI researcher has managed to build even something as smart as a rat.

I therefore conclude that the human race is at greater danger of being outcompeted by evolution and the chance mutations of chimpanzees and dolphins. These are our real competitors and the next leaders in IQ. We should focus on banning and eliminating chimpanzees and dolphins instead of foolishly protecting them. Why waste time blocking ML research, which is only tangentially related to AGI? Let's take the war to www.reddit.com/r/dolphins and www.reddit.com/r/chimpanzees.

No point wasting time on hacker news.

> I can’t believe I put up with all this for YOUR benefit.

Thanks for looking out for my benefit. I will reciprocate by fighting the chimpanzees for YOUR benefit.

Everything in this comment is wrong. Forgive me for not addressing all of it.

The point about the layman is that the actual substance of my argument should be considered rather than my credentials. You think that your knowledge of ML (credentials) gives you the authority to win a debate without actually debating.

I have never called for a ban on the specific research that is currently ongoing in ML. I have called for a ban on all AI research - not because it's easy or makes a lot of sense, but because it seems to be the only solution. I am receptive to new solutions, the absence of which is quite conspicuous in your comments. You are stuck on credentials and nit-picking.

“So according to you, ML is tangential so therefore listen to you”

I literally spelled this out for you in my comment. Are you blind? The fact that ML is not a direct path to AGI is just an aside. Perhaps I should have focused exclusively on your main error so as not to confuse you. Like I said, the impact of AI on the world is, in essence, an economics question. You don't need to know anything about how an AGI might work to reason about the economic, strategic, and existential changes that AI as a concept will bring about. It is absolutely true that no amount of knowledge about ML or even AGI will help in any way with that line of inquiry.

“We haven’t made robot rats yet”

This is just a permutation of people saying AGI is far off. You don't appear to be in the camp that thinks AGI is impossible. Therefore this comment is irrelevant, because AGI will come at some point, and my argument is primarily about what that will look like, not when it will happen.

It should, by now, be thoroughly clear to anyone who stumbles upon this thread in the future that I am correct. If you want to continue, you can contact me at brian.pat.mahoney - gmail.com

For what it's worth, I considered myself pretty undecided about this issue before, but the tone and content of your comments have moved me significantly against your argument.

*AI to be banned*

Good luck with that.

So sarcasm is what will save us all?

from people like you :)

So ai is not a problem in your eyes?


Where in the logic of my argument have I made a mistake?

If AGI is possible, it has already happened. If even AI experts put it 100-1000 years out, where some human monkeys banging on digital typewriters could eventually create it, then, in the vastness of space, time, military contracts, alien intelligences, and random Boltzmann brains, it must have become reality multiple times already.

If AGI is impossible, it will never happen. We already know that perfectly intelligent AGIs are not physically possible: per DeepMind's foundational theoretical framework, optimal compression is non-computable, and besides that, it is not possible for an inference machine to know all of its universe (unless it is bigger than the universe by at least 1 bit, i.e. it is the universe).

What remains is being more intelligent than all of humanity. To accomplish that, by Shannon's own estimates, there is currently not enough information available in datasets and on the internet. Chinese efforts to artificially increase the intelligence of babies are still in their infancy too (the substrate of AGI is irrelevant for computationalism, unless it absolutely needs to run on the IBM 5100).

So until that time travels, we will have to make do with being smarter than/indistinguishable from a human on all economic tasks. We're already there for some subset of humanity; you may even be part of that subset, if you believed this post was written by a human.
