Such architectures work great for differentiable data, such as images and audio, but the improvements on natural language tasks are only incremental.
I was thinking maybe DeepMind's RL+DL is the path that leads to AGI, since it does offer an elegant and complete framework. But it seems like even DeepMind has had trouble getting it to work in more realistic scenarios, so maybe our modelling of intelligence is still hopelessly romantic.
And - let's be real - a lot of human symbolic reasoning actually happens outside of the brain, on paper or computer screens. We painstakingly learn relatively simple transformations and feedback loops for manipulating this external memory, and then bootstrap it into short-term reaction via lots of practice.
I tend to think that the problems are:
a) Tightly defined / domain-specific loss functions. If all I ever do is ask you to identify pictures of bananas, you'll never get around to writing the Great American Novel. And we don't know how to train the kinds of adaptive or free-form loss functions that would get us away from these domain-specific losses.
b) Similarly, I have a soft spot for the view that a mind is only as good as its set of inputs. We currently mostly build models that are only receptive (image, sound) or generative. Reinforcement learning is making progress on feedback loops, but I have the sense that there's still a long way to go.
c) I have the feeling that there's still a long way to go in understanding how to deal with time...
d) As great as LSTMs are, there still seems to be some shortcoming in how to incorporate memory into networks. LSTMs seem to give a decent approximation of short-term memory, but it still seems far from great. This might be the key to symbolic reasoning, though.
Writing all that down, I gotta say I agree fundamentally with the DeepMind research priorities on reinforcement learning and multi-modal models.
What you might see as logical operations "not mattering", I would see as logical operations integrated so deeply into reflexive operations that it's hard to see where one ends and the other begins. The contrast is that humans can do pattern recognition in a neural net fashion, taking something like the multidimensional average of a set of things. But a human can also receive a language-level input that some characteristic is or isn't important for recognizing a given thing and incorporate that input into their broad-average concepts. That kind of thing can't currently be done by deep learning - well, at least not in a non-kludgey way.
Similarly, I have a soft-spot for the view that a mind is only as good as its set of inputs.
It depends on what you mean by that. A human can take inputs on one thing and apply them seamlessly to another. Neural nets tend to be very dependent on the task-focused content fed to them.
I, personally, just know I don't use logical rules very often at all. Usually I apply them retroactively as a post-hoc justification, or narrative, to explain a sense of discomfort or internal conflict or dissonance, but I have no way of knowing if my rationale is true other than how it makes me feel - I'm simply relying on the same mechanism, with an extra set of pattern recognition learned specifically to identify fallacies and incorrect logical constructs. If I didn't have that extra training, my explanations could be illogical and I'd be none the wiser.
I think humans are very bad at logical reasoning and very inefficient at it. Only a small % of the population ever does it, and they usually do it incorrectly, with biases, constructing arguments to justify an already-held conclusion. They're great at pattern recognition, though. I don't think logical reasoning is anywhere on the critical path to human-level AGI at a deep level. It could very well be a parallel system, though, to help train recognition if we don't figure out better ways of doing that.
I wouldn't argue with the point that humans use rigorous logic and overt rules-based behavior much less than they imagine (your summary is very much a summary of the other-NLP model of mind, which I know).
I'd argue that while "refined", systematic logic might be rare, fairly crude logic, more or less indistinguishable from simply using language, is everywhere, and it's an incredibly powerful tool that humans have. Again, being able to correct object recognition based on things people tell you is an incredibly powerful thing. You don't need full rationality for this, but it gets you a lot. And that's just a small-ish example.
AGI that is as smart as say a rat would easily qualify as AGI even without language skills.
Being able to implement all the things humans are good at, however, should get us everything we can do, because anything we could create, it could create too.
Indeed, but while a full language-using AI is a long way off at best, using language is one thing that's at least sort-of describable/comprehensible as a goal. A rat is a lot more robust than any human-made robot, but how? Overall, I keep hearing this "there's intelligence that's totally unlike what we conceive" argument, but it seems like computer programs as they exist now either do what a human could do rationally, only more quickly (a conventional program), or heuristically duplicate human surface behavior (neural nets). You could sort-of argue for more, but it's a bit tenuous. Human behavior is very flexible already (that's the point, right?). And assuming AI is hard to create, creating something whose properties we to some extent understand is more likely than creating the wild unknown AI.
Also, "getting to rat level" might not be a useful path to AGI. If we simply created a rat-like thing, we might win the prize of "real AGI", but it would be far less useful than something we could tell what to do the way we tell humans what to do.
If that's the case, then to me it seems like AGI is limited by the amount and type of data a NN can be fed. To have an intelligence like Homo sapiens, wouldn't you expect that, no matter the underlying NN, it has to take in a comparable amount of data to what the 5+ human senses take in over a lifetime, plus the actual internal 'learning' (i.e. pattern recognition, heuristics, and intuition), plus some kind of meta-awareness (consciousness) to speed up and aid this process, plus dedicated pieces of the brain such as Broca's/Wernicke's areas?
IMO the minimal useful definition of AGI would list a set of testable skills that would qualify as AGI, and a more useful definition would be based on quantifiable skill sets that would allow numerical comparisons between humans and AIs.
It seems pointless to speculate when AGI might be a reality when we have only the fuzziest idea what AGI is supposed to look like.
"let's be real - a lot of human symbolic reasoning actually happens outside of the brain"
I was a chess master at age 10. Let's be real - when I play blitz and bullet chess, I am performing multi-level symbolic reasoning at multiple frames per second. In my brain.
I am not an alien. I can do these kinds of symbolic calculations faster than 99.6% of the population mainly because I learned chess as a kid, making it a "native language", and I got good at it early so I spent much of my youth training my neurons with this perceptual task.
My point is not to claim I'm a genius. There are dozens of players who can school me in bullet the way I can school most people.
My point is that human beings DO do symbolic reasoning; it is the core of our intelligence: being able to take in different kinds of input, organize some of them into relevant higher-level clusters, sort the clusters by priority, make a plan to deal with the highest-priority clusters, act, rinse and repeat.
Humans simply do not have the computational ability to make decisions based on raw perceptual data in real time. Our brains are designed to act on higher levels of symbolic meaning, and we have perceptual layers to help us turn reality into manageable chunks.
In cognitive psychology this is referred to, not surprisingly, as "chunking": https://en.wikipedia.org/wiki/Chunking_(psychology)
Until DeepMind starts working on anything resembling chunking, I believe they are wasting their time and money.
The problem is so inherently hard that we are struggling even to come up with a meaningful task that would tell us how badly we are doing. That comes back to your first point: I think finding the right loss function is a chicken-and-egg situation here. Once you have the loss function in hand, you already know what task and problem you are going to solve, and then it becomes easier. But that is apparently not our current situation.
That is why I think DeepMind has a good reason to go after reinforcement learning; after all, that is how we humans are trained, through exams and feedback.
As to your point about LSTMs, I am not eager to make a qualitative claim about whether they can or can't handle short/long-term memory. That is apparently task-dependent, and all the concepts involved are ill-defined.
I don't think biological precedent is the only or even most valuable heuristic for deciding where to research intelligence... But I don't see where there is evidence that symbolic reasoning is either necessary or sufficient for AGI, except people describing how they think their brain works.
Related, there are a lot of statements that symbolic or rule based systems do better / as well as / almost as well as neural methods. Citation please, I'd love a map of which ML problems are still best solved with symbolic systems. (Sincerely - it's not that I expect there aren't any.)
Good point, we wouldn't have AlphaZero now if we only relied on biological inspiration. Nature hardly ever performs Monte Carlo Tree Search (though I'm not sure this is entirely true, see slime mold searching for food: https://thumbs.gfycat.com/IdealisticThirdCalf-size_restricte...).
Counterfactual reasoning is a promising direction for AI. What would have happened if the situation were slightly different? That means we have a 'world model' in our head and can try our ideas out 'in simulation' before applying them in reality. That's why a human driver doesn't need to crash 1000 times before learning to drive, unlike RL agents. This post hoc rationalisation is our way of grounding intuition to logical models of the world, it's model based RL.
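A toy sketch of that "try it in simulation first" idea, with invented dynamics and function names throughout (this is an illustration of planning against an internal world model, not code from any actual RL framework): the agent evaluates each action counterfactually against its model before committing in the real environment.

```python
def world_model(position, action):
    # the agent's internal (assumed) dynamics: actions shift position by +/- 1
    return position + (1 if action == "right" else -1)

def plan(position, goal, horizon=10):
    """Greedy one-step lookahead using only the internal model: for each
    step, ask "what would happen if I went left vs right?" and pick the
    imagined outcome closest to the goal. No real-world crashes needed."""
    path = []
    for _ in range(horizon):
        if position == goal:
            break
        best = min(("left", "right"),
                   key=lambda a: abs(world_model(position, a) - goal))
        position = world_model(position, best)
        path.append(best)
    return path
```

For example, `plan(0, 3)` reaches the goal purely by simulated rollouts, which is the model-based-RL flavor of "grounding intuition to a model of the world".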
Turns out this is wrong. Human brains are very efficient.
I think most people don't realize that our brains have this ability. But all you need to do is spend a few months learning chess and you'll see for yourself.
At some things, not all.
Subsymbolic systems, such as ANN are clearly good at some things and symbolic systems are better at others.
It is argued that symbolic reasoning is required for what we might call higher levels of intelligence (let's assume this is correct).
Symbolic systems have struggled in the realm of grounding a symbol to something in the physical world, because it's messy and complex, i.e. the area where subsymbolic systems play best.
If we assume that ANNs are approximately akin to natural brains, then can we take it that they are examples of a subsymbolic system able, with the correct architecture, to produce (perhaps the wrong word) a symbolic reasoning system?
Perhaps this emergence on top of the subsymbolic processing is what humans (and others, to varying degrees) possess. Perhaps GOFAI suffered in the past because it was going top-down, or not even going down to the subsymbolic level to ground its symbols.
Perhaps ANNs struggle because they're not going up to symbolic reasoning.
Then perhaps also, in ANNs (or organic brains), which evolved where reaction/perception gave the critical survival advantage, symbolic reasoning only became possible and beneficial much later, on hardware that wasn't necessarily developed for it in the most efficient way.
Having been of the belief (for 20+ years) that ANNs are sufficient for AGI, and possibly offer an elegant solution, I currently think that they are, at this time, not the most efficient path (nor a plausible one with current compute/hardware, perhaps not for many years, probably not in my lifetime). Practical progress, IMHO, is likely to come from hybridization of ANNs and logic (though I'm not referring to hand-baked rules), and I'd even propose that mixed hardware might supersede a pure ANN, or what evolution has provided in the brain.
You think symbolic reasoning is not a function? In what sense do you think 'symbolic reasoning' is a distinct thing from 'function approximation'?
It didn't understand the source material, it is just very good at memorizing and faking.
I'm a native and it seems almost good to me. "algunas oraciones no sean", OK. And "duro" should be "rudimentario", also "de qué son" lacks the accent. But the rest is acceptable and it's possible to get a decent translation, only modifying those bits.
Also, do you think that every human can parse contemporary metaphors better?
People essentially rely on emotions to make all their decisions. Emotions implicitly represent rapid-fire unconscious decision work.
Again the current popular understanding of the mind separates emotion from thinking. They are not distinct. Emotional processing is another kind of thinking, and it drives the show.
The only reason I am convinced it is NOT doing a good job is how utterly difficult it is to apply NNs to the dialog generation/management domain in business; they often behave much worse than rule-based systems.
"Hence, if it requires, say, a thousand years to fit for easy flight a bird which started with rudimentary wings, or ten thousand for one which started with no wings at all and had to sprout them ab initio, it might be assumed that the flying machine which will really fly might be evolved by the combined and continuous efforts of mathematicians and mechanicians in from one million to ten million years--provided, of course, we can meanwhile eliminate such little drawbacks and embarrassments as the existing relation between weight and strength in inorganic materials." [Emphasis added.]
The New York Times, Oct 9, 1903, p. 6.
A couple of the leading minds in AGI say it's a long ways away... just because the universe likes to give us the finger, maybe AGI is on the horizon. Maybe we'll look back at this in 10 years and laugh (if we're here).
We really don't learn anything about the problem at hand by talking in generic terms. We use these arguments when we want to justify our hopes and feelings, but there is really nothing to learn from them.
Hinton, Hassabis, Bengio and others point out that we can't 'brute force' AI development. There needs to be actual breakthroughs in the field and there may be several decades between them.
AI, brain science and cognitive science are extremely difficult fields with small advances, yet people assume that it's possible to 'brute force' AGI by just adding more computing power and doing more of the same.
Macroeconomics is probably a less complex research subject than AI or brain science, but nobody assumes that you can brute-force a truly great macroeconomic model in a few years just by spending a little more on resources.
Do people assume that? I mean, I'm sure some people do, but I don't think I've encountered many people, at least not in the AI safety movement, that actually think it's a matter of more hardware power. Some people think it's possible that that's all that's necessary, but I don't think most will say that that's the most likely path to AGI (rather than, as you say, actual breakthroughs happening).
It gets more nuanced than that but there are actually very specialised people who argue very forcefully that AGI is a hair's breadth away and we must act now to protect ourselves from it.
Edit: so not "most" people but definitely some very high-profile people. Although granted, they're high-profile exactly because they keep saying those things.
"Carl Shulman and Anders Sandberg suggest that algorithm improvements may be the limiting factor for a singularity because whereas hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research."
What are the components of intelligence? For example, AlphaZero can solve problems that are hard for humans to solve in the domain of chess, shogi and go- is it intelligent? Is its problem-solving ability, limited as it is to the domain of three board games, a necessary component of general intelligence? Have we even made any tiny baby steps on the road to AGI, with the advances of the last few years, or are we merely chasing our tails in a dead end of statistical approximation that will never sufficiently, well, approximate, true intelligence?
These are very hard questions to answer and the most conservative answers suggest that AGI will not happen in a short time, as a sudden growth spurt that takes us from no-AGI to AGI. With flight, it sufficed to blow up a big balloon with hot air and- tadaaaa! Flight. There really seems to be no such one neat trick for AGI. It will most likely be tiny baby steps all the way up.
Mainly in the idea/concept of back-propagation. It's something that I've thought about myself. For the longest time, I could never understand how it worked, then I went thru Ng's "ML Class" (in 2011, which was based around Octave), and one part was developing a neural network with backprop - and the calcs being done using linear algebra. It suddenly "clicked" for me; I finally understood (maybe not to the detailed level I'd like - but to the general idea) how it all worked.
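For what it's worth, that "it's all just linear algebra" realization can be reproduced in a few lines. This is an illustrative sketch of a tiny two-layer network trained with backprop on XOR (not the actual exercise from Ng's class, and in NumPy rather than Octave): the whole forward and backward pass is a handful of matrix products.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # forward pass: two matrix multiplies plus nonlinearities
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # backward pass: the chain rule, written as matrix products
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)
```

The loss drops steadily over training, which is about all the "magic" there is: error signals flowing backwards through the same matrices used in the forward pass.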
And while I was excited (and still am) by that revelation, at the same time I thought "this seems really overly complex" and "there's no way this kind of thing is happening in a real brain".
Indeed, as far as we've been able to find (although research continues, and there have been hints and models which may challenge this), brains (well, neurons) don't do backprop; as far as we know, there's no biological mechanism to allow for backprop to occur.
So how do biological brains learn? Furthermore, how are they able to learn from only a very few examples in most cases (vs the thousands to millions examples needed by deep learning neural networks)?
We've come up with a very well-engineered solution to the problem, one that works, but it seems overly complex. We've essentially made an airplane that is part ornithopter, part fixed-wing, part balloon, and part helicopter. Sure it flies, but it's rather overly complex, right?
Humanity cracked the nut of heavier-than-air flight when it finally shed the idea that the wings had to flap. While it was known this was the way forward long before the Wrights, or even Langley (and likely even before Lilienthal), a lot of wasted time and effort went into flying machines with flapping wings, because it was thought, "that's the way birds do it, right?"
So - in addition to the idea that backprop may not be all it's cracked up to be - what if we also need to figure out the "fixed wing" solution to artificial intelligence? Instead of trying to emulate and imitate nature so closely, perhaps there's a shortcut that currently we're missing?
I do recall a recent paper that was mentioned here on HN that I don't completely understand - that may be a way forward (the paper was called "Neural Ordinary Differential Equations"). Even so, it too seems way too complex to be a biologically plausible model of what a brain does...
I've spent a lot of time trying to explain this to people, that there is a confluence between the human brain and the machine, people tend to look at the machine separately, which is a mistake. When I say unequivocally, 'there is no such thing as machine intelligence', I just get blank stares.
Overall, I'd agree that really powerful tools for specific tasks is going to be the majority of "AI" in the coming years.
One question that interests me is this: Does intelligence have as a prerequisite a living system, such as a cell? If so, what is our definition of the living system and why is that important? If not, what abstract qualities of intelligence are we really trying to capture?
But yes, it's extremely unlikely that nature implements backpropagation directly, as it relies on non-local gradients.
Human flight is not as agile or energy-efficient as a dragonfly's, but it is faster and stronger. Likewise, artificial learning may not be as sample-efficient as the human brain. It is a learning intelligence nonetheless, and we are already working with the core mechanisms of reasoning and deduction.
You could bet that AGI won't manifest until AI and robotics are properly fused. Cognition does not happen in a void. This image of a purely rational mind floating in an abyss is an outdated paradigm to which many in the AI community still cling. Instead, the body and environment become incorporated into the computation.
Anecdotal, but nearly all of my programmer friends believe that full-blown AGI is less than a decade away.
It's worth thinking about this section, where various AI experts offer predictions:
> Two: History shows that for the general public, and even for scientists not in a key inner circle, and even for scientists in that key circle, it is very often the case that key technological developments still seem decades away, five years before they show up.
> In 1901, two years before helping build the first heavier-than-air flyer, Wilbur Wright told his brother that powered flight was fifty years away.
> In 1939, three years before he personally oversaw the first critical chain reaction in a pile of uranium bricks, Enrico Fermi voiced 90% confidence that it was impossible to use uranium to sustain a fission chain reaction. I believe Fermi also said a year after that, aka two years before the denouement, that if net power from fission was even possible (as he then granted some greater plausibility) then it would be fifty years off; but for this I neglected to keep the citation.
> And of course if you’re not the Wright Brothers or Enrico Fermi, you will be even more surprised. Most of the world learned that atomic weapons were now a thing when they woke up to the headlines about Hiroshima. There were esteemed intellectuals saying four years after the Wright Flyer that heavier-than-air flight was impossible, because knowledge propagated more slowly back then.
My impression is this is common among DeepMind folks and not an aberration. (See also dwiel's comment elsewhere.) It is super weird for me that Demis Hassabis says AGI is nowhere close. Is he lying? Or does he mean 10 years is not close?
Maybe he just doesn’t believe the same thing some of his coworkers do? Seems pretty drastic to jump to the conclusion he’s lying if he implies it’s more than 10 years away.
Also, do you believe AGI is currently more a compute/hardware problem, or an algorithmic problem?
People lack nuance and critical thinking.
Maybe our existing methods are good enough given enough compute to reach AGI but our datasets are too low fidelity and non-representative of the problem space to reach desired results?
Think of 16 year old human:
* it has received less than 400 million wakeful seconds of data, plus 100 million seconds of sleep,
* it has made only a few million high-level cognitive decisions where feedback is important and the delay is tens of seconds or several minutes (say a few thousand per day). From just a few million samples it has learned to behave in society like a human and do human things.
* assuming a 50 ms learning update on average, at the lowest level there are at most 10 billion iterations per neuron (short-term synaptic plasticity acts on a timescale of tens of milliseconds to a few minutes).
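The figures above check out as a rough back-of-the-envelope calculation (assuming 16 years, 16 wakeful hours per day, and one plasticity update per 50 ms; the exact day counts are my rounding, not the original commenter's):

```python
# Sanity-check the rounded numbers in the list above.
days = 16 * 365                            # ~16 years of life
wakeful_seconds = days * 16 * 3600         # 16 wakeful hours/day -> ~3.4e8
sleep_seconds = days * 8 * 3600            # 8 hours of sleep/day -> ~1.7e8
updates_per_neuron = wakeful_seconds / 0.050  # one update per 50 ms -> ~6.7e9
```

So "under 400 million wakeful seconds", "on the order of 100 million seconds of sleep", and "at most 10 billion iterations" are all consistent with each other.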
Humans generate very detailed models of their environment with very little data and even less feedback. They can learn a complex concept from one example. For example, you need only one example of a pickpocket to understand the whole concept.
I think we need simulation of other agents' outputs as the primary tool for reasoning. That seems to be how intelligence emerged in evolution.
Something like this:
choose desired action > simulate other agents' outputs based on the future state after performing the action > check the reward for this action after simulating the others' outputs > perform the action or not > update all agents' models and relations in the "world" graph model
I think the world could be modeled as a simple graph, and each agent as a NN.
Then, based on the graph, we could conduct symbolic reasoning and very fast learning (by updating edges).
I think these models also need a good physical simulator and a good understanding of competitiveness.
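The loop described above could be sketched roughly as follows. Every name here is made up for illustration: the "agent models" are stubs that score a proposed future state (a real system would use learned networks), and the "world graph" is reduced to a dict of relation weights.

```python
class AgentModel:
    """Stand-in for a learned model of another agent's policy."""
    def __init__(self, bias):
        self.bias = bias
    def predicted_response(self, state):
        # invented scoring rule: approval of the state, clipped to [-1, 1]
        return max(-1.0, min(1.0, self.bias + state))

def simulate_and_maybe_act(action_effect, agents, world_graph, threshold=0.0):
    """One pass of the loop: imagine the post-action state, simulate the
    other agents' responses, sum a relation-weighted reward, act only if
    the reward clears the threshold, then update the relation weights."""
    future_state = action_effect
    reward = sum(world_graph[name] * model.predicted_response(future_state)
                 for name, model in agents.items())
    acted = reward > threshold
    if acted:
        # crude "fast learning by updating edges": nudge relation weights
        for name, model in agents.items():
            world_graph[name] += 0.1 * model.predicted_response(future_state)
    return acted, reward
```

For example, with two stub agents, `simulate_and_maybe_act(0.3, {"a": AgentModel(0.5), "b": AgentModel(-0.2)}, {"a": 1.0, "b": 1.0})` simulates both responses before deciding whether to act at all.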
Is anyone aware of such trials of building AGI as I described?
Humans have natural language as a big competitive advantage (an easy way to compress parts of the world graph and pass it to others, though ambiguous; I think with artificial machines it can be done more efficiently).
Another advantage is knowledge storage, also easy to do with machines.
If we can build insect AI, building human AI should be easy.
Is anyone aware of such efforts/results?
On the other hand, the ubiquity of knowledge once it's available could lead any maniac to use it for the wrong purpose and wipe out humanity from their basement.
My feelings on the potential of AGI is therefore mixed. I for one have just found my particular niche in the workforce and am finally reaping the dividends from decades of hard work. Having AGI displace me and millions (or billions) of individuals is frightening and definitely keeps me on my toes.
Technology changes the world; my parents both worked for newspapers and talk endlessly about how the demise of their industry after the advent of the internet is so unfortunate. Luckily for them they are both at retirement age so their livelihood was not upset by displacement.
If AGI does become a thing it will be interesting to see how millennials and gen Z react to becoming irrelevant in what would have been the peak of their careers.
It seems clear that autonomous systems which can apply their computational machinery to a diverse range of problems, and can, in a diverse range of settings, formulate instrumental goals as part of a plan to attain a final goal, do exist.
Because that's what humans are, at least some of the time.
edit: In terms of Turing-completeness analogues, the best candidate for AGI I think would be simply brute force capability: can this agent try all possible solutions until it solves this problem? (obviously using a heuristic to prioritize) -- that is, it'd employ a form of Universal Search (aka Levin Search). Humans don't necessarily pass this test rigorously because we'd always get bored with a problem and because we have finite memory. But then CPUs are not truly Turing complete either (it's "just" a good model).
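A crude sketch of that "try all possible solutions" flavor, with invented names (real Levin search enumerates *programs* and interleaves them weighted by length and running time; this just enumerates candidate strings, shortest first, with a give-up bound playing the role of human boredom):

```python
from itertools import count, product

def brute_force_search(is_solution, alphabet="01", max_len=20):
    """Enumerate all strings over `alphabet` in order of increasing length
    and return the first one satisfying `is_solution`. Returns None when
    the budget runs out, since unlike the idealized searcher we give up."""
    for n in count(0):
        if n > max_len:
            return None
        for candidate in product(alphabet, repeat=n):
            s = "".join(candidate)
            if is_solution(s):
                return s
```

The `max_len` cutoff is exactly the "finite memory and patience" caveat: with it, the searcher is no longer universal, just like us.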
Don't believe me? Check out this series of marketing videos on YouTube by GM Matthew Sadler.
1. “Hi, I’m GM Matthew Sadler, and in this series of videos we’re taking a look at new games between AlphaZero, DeepMind’s general purpose artificial intelligence system, and Stockfish” (1)
2. “Hi, I’m GM Matthew Sadler, and welcome to this review of the World Championship match between Magnus Carlsen and Fabiano Caruana. And it’s a review with a difference, because we are taking a look at the games together with AlphaZero, DeepMind’s general purpose artificial intelligence system...” (2)
3. “Hi, I’m GM Matthew Sadler, and in this video we’ll be taking a look at a game between AlphaZero, DeepMind’s general purpose artificial intelligence system, and Stockfish” (3)
I could go on, but you get my point. Search youtube for "Sadler DeepMind" and you'll see all the rest. This is a script.
But wait, you say, that's just some random unaffiliated independent grandmaster who just happens to be using an inaccurate script on his own, no DeepMind connection at all! And to that I would say, check out this same random GM being quoted directly on DeepMind's blog waxing eloquently and rapturously about AlphaZero's incredible qualities. (4)
Let's be clear. I am in no way dismissing AlphaZero's truly remarkable abilities in both chess and other games like go and shogi. Nor do I have a problem with Demis Hassabis making headlines for stating the obvious about deep learning (that it's good at solving certain limited types of puzzles, but we are a long way from AGI; why is this controversial?).
My problem is that Hassabis is speaking out of both sides of his mouth. Increasing DeepMind/Google's value by many millions with his marketing message, while acting like he's not doing that. It feels intellectually dishonest.
To solve this, all DeepMind needs to do is stop instructing its Grandmaster mouthpieces to refer to AlphaZero as a "general artificial intelligence system". Let's see how long that takes.
A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play
"General" as in what? As opposed to reinforcement learning, in er, general? As opposed to other ANN architectures?
>> I am in no way dismissing AlphaZero's truly remarkable abilities in both chess and other games like go and shogi.
More to the point- it's only chess, go and shogi; not games "like" those.
The AlphaZero architecture has the structure of a chessboard and the range of moves of pieces in chess, go and shogi hard-coded and you can't just take a trained AlphaZero model and apply it to a game that doesn't have either the board or the moves of those three games.
To be blunt, AlphaZero has mastered chess, go and shogi, but it can't play noughts-and-crosses.
Maybe it's just me, but "general purpose artificial intelligence system" sounds like, well, General Artificial Intelligence. Which sounds like Artificial General Intelligence, which is the holy grail.
Well, it doesn't sound like that at all to me and I think the phrasing is fair. Also, it folds proteins.
I haven't studied enough myself yet to know the answer to this one, but what are the differences between AlphaZero and the OpenAI 5 DOTA team's approach? Would it be possible to apply AlphaZero to DOTA?
How do you know we aren't?
BTW, if you hadn't noticed, Season Three just came out on Netflix. I'm champing at the bit to binge watch that... :-)
As an alternative, the human mind could be some sort of halting oracle. That's a well defined entity in computer science which cannot be reduced to Turing computation, thus cannot be any sort of AI, since we cannot create any form of computation more powerful than a Turing machine. How have we ruled out that possibility? As far as I can tell, we have not ruled it out, nor even tried.
Why do we believe man can make fire? Well, dammit, we WANT to make fire. Let's figure out how to do it!
Finally, if we were able to explain the brain well with "metaphysics" it would then be just "physics". It seems that all you are saying here is that there is a mechanism that is not yet understood and it may be fundamentally different than other things we have studied so far (which seems unlikely, I might add).
Similarly we can mathematically and empirically differentiate between halting oracles and Turing machines, so why not leave both possibilities open as scientific explanations, instead of doubling down on the Turing machine model? Call halting oracles materialistic if it makes you feel better.
UPDATE: I've been rate limited for some reason, so here is my response whether the mind intuitively seems to be a halting oracle.
1. It's obvious there are an infinite number of integers, because whatever number I think of I can add one to it. A Turing machine has to be given the axiom of infinity to make this kind of inference, it cannot derive it in any way. This intuitively looks like an example of the halting oracle at work in my mind. Or, an even more basic practical example: if I do something and it doesn't work, I try something else. Unlike the game AIs that repeatedly try to walk through walls.
2. We programmers write halting programs with great regularity. So, it seems like we are decent at solving the halting problem. Also, note that it is not necessary to solve every problem in order to be an uncomputable halting oracle. All that is necessary is being capable of solving an uncomputable subset of the halting problems. So, the fact that we cannot solve some problems does not imply we are not halting oracles.
The practical flaw with this argument, of course, is that you could instead make an AI that itself uses quantum computation. I asked Roger Penrose about this at a university philosophy meetup over 20 years ago, and he agreed.
Likewise, if there is some kind of halting oracle, perhaps we can work out how the brain creates and connects to that oracle, and make our AI do the same.
Meanwhile, there is no physiological or computational evidence for this possibility. We should keep hunting though, as that's the same thing as understanding the details of how the brain works!
The fundamental problem Penrose identifies boils down to the halting problem, which requires a halting oracle to be solved. Hence, a halting oracle is the best explanation for the human mind, and no form of computation, quantum or otherwise, suffices.
Since I'm rate limited, here is my answer to the replier's comment:
A partial answer: the mind has access to the concept of infinity, and can identify new, consistent axioms. Other possibilities: future causality and ability to change the fundamental probability distribution.
But, it's also important to note that we don't have to answer the "how" question in order to identify halting oracles as a viable explanation. We often identify new phenomena and anomalies without being able to explain them, so the identification is a first step.
I don't think it constitutes an explanation at all, let alone a viable one, if all it does is beg the same question.
The problem was already identified: "how does human cognition work?" You've renamed it: "how does this supposed halting oracle work?" That might be an interesting framing but it is not a viable explanation of anything until you've proved that such oracles exist or in other words, solved the halting problem.
What does it explain though? That the human brain has a black box capable of solving certain problems... how exactly?
You might as well just say the mind resides in the soul.
Making any program a halting program is trivial: add an executed-instruction counter and halt the program when the counter reaches some value. Proving that an arbitrary program halts is an entirely different task.
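The instruction-counter trick is easy to sketch in Python. This is a toy illustration of my own; `make_halting` and `step_fn` are made-up names, and the "program" is modeled as a single-step transition function:

```python
def make_halting(step_fn, state, max_steps=1_000_000):
    """Force any step-wise computation to halt.

    step_fn(state) -> (new_state, done). The counter guarantees the
    wrapped computation halts no matter what step_fn does.
    """
    for _ in range(max_steps):
        state, done = step_fn(state)
        if done:
            return state, True    # the computation halted on its own
    return state, False           # forced halt: instruction budget exhausted
```

Note that the wrapper always halts, but it tells you nothing about whether the *unbounded* computation would have halted, which is exactly the distinction the comment above draws.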
> So, the fact that we cannot solve some problems does not imply we are not halting oracles.
If it's allowed to not solve some problems, then I can write such an oracle:
Run a program for a million steps. If program has halted, output "Halts", otherwise output "Don't know".
It can't solve some problems, but by your logic it doesn't imply it's not a halting oracle. You are missing something.
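The "million steps" procedure above can be written out as a runnable sketch (a toy of my own; the simulated program is modeled as a generator that yields once per step and returns when it halts):

```python
def partial_decider(program, limit=1_000_000):
    """Sound-but-incomplete halting check: simulate for at most `limit` steps."""
    steps = program()
    for _ in range(limit):
        try:
            next(steps)            # advance the simulation by one step
        except StopIteration:
            return "Halts"         # the simulated program finished
    return "Don't know"            # step budget exhausted, no verdict

def halts_soon():
    for _ in range(10):
        yield

def runs_forever():
    while True:
        yield
```

`partial_decider(halts_soon)` answers "Halts", while `partial_decider(runs_forever, limit=1000)` answers "Don't know": it is never wrong, merely incomplete, which is why "we can't solve some problems" doesn't distinguish a mind from an ordinary program.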
Also, personally what my mind is doing doesn't feel like it's invoking an oracle for my problem solving. Generally when the search space for a problem that I'm solving increases I experience the kinds of blowups in the difficulty that would arise from me following an algorithm. Now, not everybody is the same. Do you feel like your problem solving calls an oracle?
Why not? Are you aware of a proof of this? I think you are limiting the capabilities of Turing machines without evidence.
> Unlike the game AIs that repeatedly try to walk through walls.
Game AIs' capabilities are a small subset of what a Turing machine can do. Most game AIs can't do speech recognition or solve math equations either.
> We programmers write halting programs with great regularity.
So do other programs. Writing a halting program is not an uncomputable problem, and doesn't require solving the halting problem.
> Universal Intelligence: A Definition of Machine Intelligence
> A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: We take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.
General intelligence is usually meant in relation to humans, but you are correct in noting that it is a spectrum, not a binary.
We seem to be looking at intelligence in humans and thinking we need to develop that, without first defining what intelligence actually is. We don't exist in isolation, and it's likely that the components of intelligence exist to varying degrees in other organisms. In the same way that birds, bats, gliders and insects all have wings that generate lift, what are the things that we have in common with other animals?
The baseline of human capability would definitely still be impressive.
Seriously - that's a wicked funny post you had there!
For all we know, Isabelle and Coq could be speeding through the road to consciousness but we're busy having a blast doing Computer Vision pretending it's AI.
Deep Learning is amazeballs for Computer Vision. It's fun because people like looking at pictures. But sufficiently prodded, Isabelle proves theorems, I've seen it first hand, and the "sufficient prodding" is still way underdeveloped. At one point backpropagation was dead too.
Over the medium term I'm not sure AI researchers are the best people to ask. They are completely dependent on how much power the electrical engineers give them, and I doubt they have any deeper understanding of what a doubling or quadrupling of computing power will do than any programmer learning about neural networks does.
Why do you say that? AFAIK computing architecture and brain architecture are completely different. How would you even begin to compare their power?
Google has TPUs that are off from the estimated power required to simulate a brain by a factor of 3, so technology is reaching the ballpark. Given that brains were evolved, the part that does symbolic thinking is probably "easy to stumble on" in some practical sense.
Given how we've managed to improve on nature in other domains (see solar cell efficiency, for example), I think that if we can figure out how intelligent organisms manage to learn so quickly we can likely beat nature's efficiency.
Sure they do. They just hook up four times as much compute power or simulate whatever they want to do in four times as much time. A slow AGI would still be an AGI. But we do not see anything like that if we use four times as much power as in the control. It is still nowhere near.
As I have stated before, AI is the end for us. To put it simply, AI brings the world into a highly unstable configuration where the only likely outcome is the relegation of humans and their way of life. This is because of the fundamental changes imposed on the economics of life by the existence of AI.
Many people say that automation leads to new jobs, not a loss of jobs. Automation has never encroached on the sacred territory of sentience. It is a totally different ball game. It is stupid to compare the automation of a traffic light to that of the brain itself. It is a new phenomenon completely and requires a new, from-the-ground-up assessment. Reaching for the cookie-cutter “automation creates new jobs” simply doesn’t cut it.
The fact of the matter is that even if most of the world is able to harness AI to benefit our current way of life, at least one country won’t. And the country that increases efficiency by displacing human input will win every encounter of every kind that it has with any other country. And the pattern of human displacement will ratchet forward uncontrollably, spreading across the whole face of the earth like a virus. And when humans are no longer necessary they will no longer exist. Not in the way they do now. It’s so important to remember that this is a watershed moment — humans have never dealt with anything like this.
AI could come about tomorrow. The core algorithm for intelligence is probably a lot simpler than is thought. The computing power needed to develop and run AI is probably much lower than it is thought to be. Just because DNNs are not good at this does not mean that something else won’t come out of left field, either from neurological research or pure AI research.
And as I have said before, the only way to ensure that human life continues as we know it is for AI to be banned. For all research and inquiries to be made illegal. Some point out that this is difficult to do but like I said, there is no other way. I implore everyone who reads this to become involved in popular efforts to address the problem of AI.
So unless you posit that a function has to rely on its materialization (that there is something untouchably magic about biological neural networks, and intelligence is not multiply realizable), it should be possible to functionally model intelligence. Nature shows the way.
AGI will likely obsolete humanity. Either deprecate it, or consume it (make us part of the Borg collective). Heck, even a relatively dumb autonomous atom bomb or computer virus may be enough to wipe humanity from the face of the earth.
Even if we assume for the sake of argument that AGI is possible, there's no scientific basis to assume that will make humanity obsolete. For all we know there could be fundamental limits on cognition. A hypothetical AGI might be no smarter than humans, or might be unable to leverage its intelligence in ways that impact us.
Nuclear weapons and malware can cause damage but there's no conceivable scenario where they actually make us extinct.
I agree our knowledge currently is lacking, but see no reasons why this will never catch up.
There are fundamental limits on cognition. For one, our universe limits the amount of computing energy available. Plenty of problems can be fully solved, to the point where it no longer matters how much more intelligent you are (beyond a certain point, two AGIs will always draw at chess). Another limit is practical: the AGI needs to communicate with humans (if we manage to keep control of it), so it may need to dumb down so we can understand it.
Even an AGI as smart as the smartest human will greatly outrun us: it can duplicate itself and focus on many things in parallel. Then the improved bandwidth between AGIs will do the rest (humans are stuck with letters and formulas and coffee breaks).
Manually deployed atom bombs and malware can already wreck us. No difference with autonomous (cyber)weapons.
And what does alarmist even mean? Do you call global warming advocates alarmists? It’s such an annoying, nonsense word that boils down to name-calling really. Discuss the merits of my actual argument. If you think my speculation is wrong, point out a flaw in the chain of logic that leads to my conclusion. Don’t just wave your hand and say that “you can’t prove it” like some evangelical Christian talking about god or global warming. Seriously infuriating when there is so much at stake.
The analogy to anthropogenic global climate change is a non sequitur. Climatologists have created falsifiable theories which make testable predictions.
And you really have no clue about my personal religious beliefs. Calm down and take a seat.
I would argue that, unless you can show why AGI is not - in principle - possible, that the null hypothesis would be that it is possible. Unless we veer off into some weird mysticism, it seems that the human brain turns energy and matter into intelligence somehow, operating according to the physical laws of the universe... why shouldn't it be possible to build something else that does the same?
If you're unwilling to provide me with a prototype equivalent to a rodent mind then I'll settle for a fully developed theory of human cognition. Let me know when you've got one. At least that would give researchers some guidelines to know whether they're making forward progress toward AGI.
Yeah and before the measurements were done, before enough time had elapsed for meaningful change to be measured, all there was was people like me screaming at people like you, trying to make you see. When ai comes you’ll have your proof but it will be too late.
Philosophers and futurists are better suited to hypothesize an AGI timeline.
But you take it too far by saying it is anyone's game.
Game theory, security, and economic competition make it impossible to globally ban AI. The incentives to automate the economy (compare the AI revolution with the industrial revolution) and to weaponize AI (a Manhattan Project for intelligence) are just too big. We are already seeing that the US focus on fair and ethical AI puts it at a disadvantage against China and Russia. AGI will likely require pervasive surveillance of the populace, but the Luddites are holding this back.
I suggest you learn to stop worrying about the bomb, and start planning for its arrival.
If we can figure out decision theory and how our values work, then when we figure out AI, we can hopefully build it to be aligned with our values from the start, instead of blindly hoping it happens to play nice with us instead of brushing us off like ants.
So what if it is possible to create a benevolent ai? Nobody said this isn’t possible or even likely. We can also invent a machine that scrubs all the moss off of stones. Just because it’s possible for it to exist doesn’t mean it’s going to proliferate in the free-market of the world and everything in it. The only things that are important are the following:
1: we will enter an unstable configuration where any AI implementation that can exist will exist
2: the AI implementations that proliferate will be those that are not hamstrung by being forced to include humans in the loop
3: humans will be out of the loop for every conceivable task and therefore not enjoy the high standard of living that they do in 2018
Is that because you think banned things do not happen? Even if the thing that is banned could confer a massive advantage to the entities developing it?
I think AGI is unlikely to be a thing in my lifetime, or even my children's. But if I were worried about it, I'd probably focus on developing a strategy to create a benevolent intelligence FIRST, rather than try to prevent everyone else from ever creating one via agreements and laws.
Developing a good ai first is useless because as I have said, the creation of ai enters us into an unstable configuration where bad ai will crop up regardless. Keeping bad ai from existing is infinitely easier when ai does not exist as a technology as opposed to when it’s a turnkey thing.
What is your track record in AI? It sounds like you have no technical knowledge of AI. For example do you understand the concept of cross entropy loss?
And by the way I happen to know about both of the subjects of the article.
1: I never said recent advancements are directly leading to AGI. Not in ML.
2: I don’t hold any contradictory ideas in my head
Your comment is aggressive and unpleasant which is an offense that should get you flagged. I constantly get flagged for making comments like yours because I happen to have an unpopular opinion. I can’t believe I put up with all this for YOUR benefit. Do you think I derive pleasure from trying to make people like you see things clearly?
As I have said so many fucking times, a layman is just as qualified as a ML expert to talk about the impact of AI on the world. Just because someone is an expert in a field that is tangentially related to AGI doesn’t mean a god damn thing. This is not a discussion about modern ML. But just to make it super easy for you to understand, let me put it this way: even if someone here were an expert in every detail of the theory and practical aspects of implementing an AGI, that person still doesn’t know any more than a layman about the consequences of AI. The point that you so annoyingly cling to is like a car mechanic thinking he was the ultimate authority on how cars would impact the world. You don’t have to be a car mechanic to understand and reason about the concept of transportation. Ultimately, the most qualified person to talk about that is an economist or someone. Not you. You don’t have a deeper understanding of the concept of AGI than literally anyone.
It is to the benefit of humanity that you observe the deep, fundamental changes that AGI will cause in the basic economics of human life. Dismissing it as “too far off” or “alarmist nonsense” is irresponsible.
Even if I accept your absurd logic quoted above - how do you explain your contradictory goal of stopping all ML research. All the top AI experts are conducting ML research, which according to you is only tangentially related to AGI.
Going further, no AI researcher has managed to build even something as smart as a rat.
I therefore conclude that the human race is at a greater danger of being out-competed by evolution and chance mutations of chimpanzees and dolphins. These are our real competitors and next position leaders in IQ. We should focus on banning and eliminating chimpanzees and dolphins instead of foolishly protecting them. Why waste time blocking ML research which is only tangentially related to AGI? Let's take the war to www.reddit.com/r/dolphins.
No point wasting time on hacker news.
> I can’t believe I put up with all this for YOUR benefit.
Thanks for looking out for my benefit. I will reciprocate by fighting the chimpanzees for YOUR benefit.
The point about the layman is that the actual substance of my argument should be considered rather than my credentials. You think that your knowledge of ML (credentials) gives you the authority to win a debate without actually debating.
I have never called for a ban on the specific research that is currently ongoing in ML. I have called for a ban on all AI research — not because it’s easy or makes a lot of sense but because it seems to be the only solution. I am receptive to new solutions, the absence of which is quite conspicuous in your comments. You are stuck on credentials and nit-picking.
“So according to you, ML is tangential so therefore listen to you”
I literally spelled this out for you in my comment. Are you blind? The fact that ML is not a direct path to AGI is just an aside. Perhaps I should have focused exclusively on your main error so as to not confuse you. Like I said, the impact of AI on the world is an economics question in essence. You don’t need to know anything about how an AGI might work to reason about the economic, strategic, and existential changes that AI as a concept will bring about. It is absolutely true that no amount of knowledge about ML or even AGI will help in any way with that line of inquiry.
“We haven’t made robot rats yet”
This is just a permutation of people saying AGI is far off. You don’t appear to be in the camp that thinks AGI is impossible. Therefore this comment is irrelevant because it will come at some point and my argument is primarily about what that will look like, not when it will happen.
It should, by now, be thoroughly clear to anyone who stumbles upon this thread in the future that I am correct. If you want to continue, you can contact me at brian.pat.mahoney - gmail.com
Good luck with that.
If AGI is impossible, it will never happen. We already know that perfectly intelligent AGIs are not physically possible: per DeepMind's foundational theoretical framework, optimal compression is non-computable, and besides that, it is not possible for an inference machine to know all of its universe (unless it is bigger than the universe by at least 1 bit, AKA it is the universe).
That leaves being more intelligent than all of humanity. To accomplish that, by Shannon's own estimates, there is currently not enough information available in datasets and the internet. Chinese efforts to artificially increase the intelligence of babies are still in their infancy too (the substrate of AGI is irrelevant for computationalism, unless it absolutely needs to run on the IBM 5100).
So until that time travels, we will have to make do with being smarter than/indistinguishable from a human on all economic tasks. We're already there for some subset of humanity; you may even be part of that subset, if you believed this post was written by a human.