Ray Kurzweil: AI is still on course to outpace human intelligence (grayscott.com)
170 points by jelliclesfarm 29 days ago | 405 comments



Computers that are capable of analyzing and understanding their environment with a level of fidelity comparable to a human, without being preprogrammed with information about the nature or structure of the environment, are out of reach for the foreseeable future. I don't see any fundamental reason why such a computer should be impossible, but there's not even a realistic roadmap towards such a thing. That is to say, if it ever does happen, nobody alive today can predict when it will happen.


It would be foolish to claim we've made any meaningful progress toward a true AGI that can pass the Turing Test until someone demonstrates a computer as smart as a mouse across the full spectrum of activities.


Once we get computers as smart as a mouse, we'll be at most 3-5 years or so from computers as smart as a human. We will have solved all the major challenges in AGI, and it will simply be a scale problem then.

Saying we don't have mouse level AGI is simply saying human AGI is greater than 5 years away, which isn't a remotely contentious statement.

The difference in intelligence between an amoeba and a mouse is enormous compared to a mouse and us. People greatly underappreciate how intelligent and close to human a mouse/bird/pig is in the grand scheme of things. Emotions, behaviors, motivations, goal setting, memory: it's all there already. A flatworm, an ant, a fly - those are the large stepping-stone accomplishments.

Think about the rate of very long-distance communication in humans. It took us tens of thousands of years to get to 2.4 kbps dial-up modems, and only a few decades to get to common 300 Mbps. The important signal is seeing a 100 bps modem, not a 100 Mbps connection.

So the real question is how long until we can replicate a worm's intelligence?


> Once we get computers as smart as a mouse, we'll be at most 3-5 years

But how long did it take Nature to get from a mammal with mouse-level intelligence to a human-level brain? I think 200-ish million years [0].

You might be right that a mouse is a good indicator of high-level intelligence and that you don't need human-level intelligence to make a good AI, but there might still be some considerable way to go until we have an AI that can significantly outperform us.

[Edit - I agree that natural selection wasn't aiming or directed, and thus wasn't forced to be as fast as we could be. But a human's higher brain functions might not be simple incremental improvements over a mouse's, and there could still be a long way to go]

[0] https://en.wikipedia.org/wiki/Mammal


Nature wasn’t really aiming. So it’s not a valid basis for time estimates.

How long did it take nature to go from T-Rex to chickens?

There’s no reason to believe human-level intelligence to be an inevitable result of evolution. It just happened.


> But how long did it take Nature to get from a mammal with mouse-level intelligence to a human-level brain

Not that I agree with the sentiment in the GP, but it took a relatively short time from the first development of multicellular life until nervous systems developed and an even shorter amount of time to go from small mammal intelligence to human intelligence. However, evolution isn't about "progress" as we understand it. The most we can say with regards to intelligence and evolution is that human intelligence satisfied a niche that existed at a certain place and time.


How long did it take Nature to get from nothing to a mouse? About 20 times as long.

So I think the estimate of 3-5 years might be realistic, but the artificial mouse is a long way away, IMO.


Keep in mind that nature uses a somewhat directional random process. How much of those 200-ish million years was spent waiting on selection pressures to promote bigger brains?


I was thinking about human intelligence today and thought about how anything at the tail end of a normal distribution usually produces quite a perverse result. Then I realized we are at the extreme tail end of the intelligence distribution in the animal kingdom. No wonder we manifest all manner of odd results.

I'm not sure creating an intelligence that even supersedes our own will lead to anything good. If anything I'd expect things to get even more perverse.


You have no rational basis for that 3 - 5 year estimate. That's just picking numbers out of the air.


The basis is the complexity growth trend that we have already observed in computing (and most other human endeavors)


There is zero evidence that progress toward AGI follows Moore's Law. And we observe much slower complexity growth in most other human endeavors.


But there's a lot of evidence that computer power follows Moore's law. All that matters is that the growth is exponential.

Evolution took a billion years to evolve multi-cellular life, but the jump from apes to humans took far less than a million years.


That's a total non-sequitur. So far there is zero evidence that increases in computing power are getting us any closer to true AGI. It's entirely possible that we've been moving sideways, or even backwards, relative to that goal.

And we can't reliably extrapolate growth in computing power more than a few years into the future. It's possible that the curve isn't really exponential, but rather an S-curve which will eventually flatten out.


Now you're going out of context. We very well may be on the wrong track for making an AGI, but the OP's premise was that once a mouse-level AGI is achieved, human-level AGI won't be far behind.

I'm more comfortable predicting that computing power will continue to grow than to predict that it will peter out and everyone will simply sit back and be happy with what we've got.


Moore's Law hasn't really been working anymore for, what, the last 5 years?


Computer processing power is not growing at an exponential rate. The physical limitations of Moore's law are well known and imminent.


Moore's law is essentially tracking the transistor density on silicon. We may be pushing up against physics in that area, but that is not the same thing as processing power. Our systems grow ever more complex. Each generation enables new tools that enable the creation of the next. When Moore's law finally crashes and burns, we will compensate using other technologies. Multicore and multiprocessing, shifting work to the cloud, advanced materials, etc. Hell, even quantum could take off in the next decade or two. I see no reason whatsoever to believe that we just give up and rest once we've reached the limits of silicon transistors.


> We may be pushing up against physics in that area, but that is not the same thing as processing power

I didn't say we're pushing up against the limits of processing power, I said that processing power is not growing exponentially, which is true, despite the gains that other advancements and innovation have provided.


Moore's law is pretty much dead.

We're moving faster than Moore's Law, see "Hyper Moore’s Law":

https://www.extremetech.com/computing/256558-nvidias-ceo-dec...


> Once we get computers as smart as a mouse, we'll be at most 3-5 years or so from computers as smart as a human. We will have solved all the major challenges in AGI, and it will simply be a scale problem then.

Why? What challenges?


Maybe this isn't what you meant, but I'm curious why you would put a Fly ahead of an Ant in the stepping stones.

Can you make a Turing Ant without a Turing Ant Colony?


Mouse, crow, worm, human: all running on neurons. If you can simulate one, you can simulate a bazillion.


> worm's intelligence

Done.

http://openworm.org/


Alas, it's not even close to "done." It's a work in progress and a surprisingly difficult one.

For context, the worm (C. elegans, at least) has a very stereotyped nervous system with 302 neurons. The anatomy, down to the cellular level, is known incredibly well. Their behavioral repertoire is not huge and they're fairly easy to study. Nevertheless, we can't even simulate a worm very accurately. (There was a good Twitter thread about why yesterday: https://twitter.com/OdedRechavi/status/1086992699528544256)

The human eyeball has about 120M rods, 6.5M cones, and projects to a brain containing ~86B neurons, which is about 8-9 orders of magnitude more cells. The number of possible interactions scales even faster. In summary, we're not close, not at all....
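Rough numbers, just to make the scale concrete (a back-of-the-envelope Python sketch using only the round figures quoted above, nothing more):

  import math

  worm_neurons = 302           # C. elegans nervous system
  human_neurons = 86e9         # human brain, approx.
  photoreceptors = 120e6 + 6.5e6   # rods + cones in one human retina, approx.

  print(f"photoreceptors alone vs worm neurons: ~{photoreceptors / worm_neurons:,.0f}x")

  cell_ratio = human_neurons / worm_neurons
  print(f"neuron count ratio: ~10^{math.log10(cell_ratio):.1f}")   # ~10^8.5

  # Possible pairwise interactions scale roughly with n^2, so the gap widens further.
  interaction_ratio = human_neurons ** 2 / worm_neurons ** 2
  print(f"pairwise interaction ratio: ~10^{math.log10(interaction_ratio):.1f}")   # ~10^16.9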


When biology comes into play, you can see how weak our understanding and abilities are. We can barely simulate small proteins of a few thousand atoms. Accurately simulating an entire cell is (at the moment) in the realm of science fiction. Only inaccurate abstractions can be used to model it.

However, I have to object in a way about the brain. To me, there's an unanswered question: Is the rest of the human brain as simple and "generic" as the convolutional neural networks we made inspired by the vision system? Or is each network's architecture and "algorithms" developed specifically for a task? In the latter case we might still be a very long way from anything resembling AGI.

However, my personal estimation is that most of the things we do can be modeled using existing tools when scaled and modified appropriately (e.g. RNNs). There's also the ugly job of stitching those systems together, but it's not that different from what happens in nature.


Not done. OpenWorm is a project to create a worm simulation. An important distinction.


More like “overdone”, it’s a cellular simulation including the brain as a first step.


The brain in question is 302 neurons, which is probably fewer than most of us have lost whilst participating in this discussion! But they also need to simulate a nervous system, mobility, food needs, interaction etc to have a remotely persuasive argument that a 302 node neural network actually resembles a worm brain. It can't be as smart as even a really stupid worm until it can perform analogous tasks to the worm, which means having some sort of artificial body to wiggle.

(parallel arguments but for human/mouse level complexity of bodies and stimuli responded to would suggest that whole brain emulation is going to be an incredibly painful way to attempt to achieve AGI)


> But they also need to simulate a nervous system, mobility, food needs, interaction etc to have a remotely persuasive argument that a 302 node neural network actually resembles a worm brain. It can't be as smart as even a really stupid worm until it can perform analogous tasks to the worm, which means having some sort of artificial body to wiggle.

It has those things[1]. There’s a video of its simulated body wiggling around on the project’s github repository.

[1] except possibly food, I was skimming the page.


"To get a quick idea of what this looks like, check out the latest movie. In this movie you can see a simulated 3D C. elegans being activated in an environment. Its muscles are located around the outside of its body, and as they contract, they exert forces on the surrounding fluid, propelling the body forward via undulutory thrust. In this model, the neural system is not considered and patterns of muscle contraction are explicitly defined"

http://docs.openworm.org/en/0.9/projects/


> In this model, the neural system is not considered and patterns of muscle contraction are explicitly defined.

In other words, the simulated worm brain is not yet even capable of causing the wiggling seen in the video. So the question remains, what can the simulated neurons do, if anything?


Sure, I've even seen the wiggles; I was making the point that that's why they need all of those things for it to yield anything that could be compared with the real thing (though I'm not sure it has food and reproduction simulated yet, and I'd imagine a large portion of the worm's limited capacity for cognition is ultimately directed at pursuing those ends).


It needs to be able to survive and reproduce in an environment like a real worm does to prove that it's just as intelligent, and not just wiggle around.


I think you're right. I haven't read the article, but I think Kurzweil is perpetually too optimistic about what we can achieve, at least in how fast it will happen. There are cool things happening, and lots of advances being made, but we are nowhere near anything that could really be called AI.

Plus, I think all the marketing use of "AI" is giving the average person a very distorted and inflated view of what software is actually doing, and what it's capable of.

It's a buzzword, full stop.


Kurzweil really, really, really does not want to die. So all of his predictions are always timed just so that the technology needed to live forever will be within his projected natural lifespan.

That doesn't make him wrong, but that's the personal bias he's operating under.


I'm really happy that the world is shifting from billionaires that just want more money to billionaires that understand that it doesn't mean anything if they don't finance solving the problem of dying.

I view all the politicians who have the power to advance healthcare research but don't do it as stupid.


I can see a world with the means to escape natural death being one of social, cultural and technological stagnation.

Just living longer doesn't mean that humans become any wiser on average. There will maybe be some benefits of longer-lasting first-hand experience of historical events (pushing the 'historical horizon' to more than 100 years) but to me it's like switching from a simulated annealing method (or stochastic gradient descent) to a simple local gradient descent in terms of getting society/culture/technology to adapt and find anything better than the status quo.

Worst case, such a technology serves to create an almost eternal ruling class. Best case, it results in societies with either two classes of people (those who may extend their lives longer and those who may not) or societies that tightly regulate who may have children.

Getting rid of suffering and cancer is one thing; getting rid of natural death carries a long tail of consequences.


If we get rid of cancer, are you going to object to getting rid of heart attacks? If we get rid of heart attacks, are you going to object to getting rid of telomere shortening, an equally pernicious disease? There won't be an immortality pill, it'll just be lots of preventative treatments that eventually add up.


> Worst case, such a technology serves to create an almost eternal ruling class. Best case, it results in societies with either two classes of people (those who may extend their lives longer and those who may not) or societies that tightly regulate who may have children.

You should read (or watch) Altered Carbon.


> it doesn't mean anything if they don't finance solving the problem of dying.

There's no solution to death. You can only put it off, but something will assuredly kill you in time. If it's not aging, then it will be cancer, heart disease, an accident, etc. Ultimately, entropy will get you one way or another.


Personally, I didn't do a lot of fun stuff (like riding motorcycles or extreme sports) to decrease the probability of an accident, even though I'd love to (I've tried them and loved doing them, but I don't do them regularly).

As for cancer and heart disease, both are linked to aging or genetically inherited mutations. Heart disease is a natural result of damage in the human body not being reversed.

https://www.cell.com/fulltext/S0092-8674(00)80567-X


Yet die he will.


> That doesn't make him wrong, but that's the personal bias he's operating under.

And in the meantime he can sell his vitamins and supplements to "make people live longer" despite zero evidence. Good business both ways.


> Kurzweil really, really, really does not want to die.

So strange, considering that non-existence is the one thing that every conscious being is guaranteed to never experience. Why run from something that can never catch you?


Because creatures who felt compelled to run from it had a better chance of passing on their genes than creatures who didn’t, leaving us all with an instinctual desire to avoid death.


That is a logical argument, but it presumes existence is logical. That's another thing to cast into doubt when staring as deeply into the abyss as you seem to.


The near-term "reptilian" fear of death seems axiomatic. The longer-term one I can only characterize as FOMO.


He values continuing to experience, like most people.


> Kurzweil really, really, really does not want to die

What a shit life that must be. And I say this in a very sympathetic way. However, feeling you are almost within reach of eternal life, but not being sure you'll make it in time, being constantly afraid of an accident or illness taking that away from you... It's a recipe for anguish and panic.

Dying is not that terrible when you know everybody else will too, sooner or later; but try accepting the idea of being among the last to die...


Isn’t his argument basically that humans suck at estimating exponential growth, tending to be biased towards a linear expectation?

Wait But Why had a nice article series digging a bit deeper into that: https://waitbutwhy.com/2015/01/artificial-intelligence-revol...


It's kind of funny, since I would say the singularity is itself a result of being bad at estimating exponential growth: all exponential growth eventually hits some limiting factor and slows down, like a sigmoid.
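To illustrate the point (a minimal sketch; the growth rate and ceiling are made-up numbers): a logistic curve is nearly indistinguishable from an exponential while it's still far below its ceiling, which is why early exponential-looking growth tells you little about where it saturates.

  import math

  def exponential(t, r=0.5):
      return math.exp(r * t)

  def logistic(t, r=0.5, K=1e6):
      # same early growth rate, but capped at carrying capacity K
      return K / (1 + (K - 1) * math.exp(-r * t))

  for t in (0, 5, 10, 20, 40):
      print(t, round(exponential(t), 1), round(logistic(t), 1))
  # The two curves track each other closely at first; only near the
  # inflection point does the logistic flatten out toward K.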


What does it mean to be as smart as a mouse? If you specify a handful of tasks that demonstrate it, someone will be able to purpose-build an "AI" to do those things well.


Basically it means having "agency." What most people are looking for when they think of "intelligence" is not the ability to master specific tasks, but to choose which tasks to perform using one's own "free will," ultimately leading to behavior that humans find novel and feel they can connect with.


What makes you think a mouse (71M neurons) has agency/free will? Does a cockroach (1M neurons), fruit fly (250K), jellyfish (5k) have agency? I don't think we're gonna get far by relying on a phenomenon that we can't clearly define or even (externally) observe.


Indeed. Human beings have many, many examples to suggest that we lack agency, as well. Why do addiction, obesity, crimes of passion, etc exist?

Without the baggage of the limbic system and dopamine-seeking behaviors, it's quite easy to argue that an artificial intelligence is potentially capable of even greater degrees of agency than humans.


That doesn't mean we lack agency, it just means agency is complicated by other factors. It's not an either-or thing.


Many people overcome these addictions, though.


But what does it mean in the context of a mouse? The mouse isn't using its free will to decide whether to become a computer programmer or a doctor, it's responding to stimuli and environment. If an AI is trained to mimic the responses of a mouse, is that intelligent?

Agency in the context of a machine seems purposefully impossible to reach - its decisions are always somehow tied back to how it was programmed to react.


The mouse reacts to stimuli and environment in a qualitatively different way than our programs do. It does continuous and essentially free-form learning of the environment around it, and engages in what looks to us as dynamic formulation and achievement of goals. In "AI" we have today, the learning is very shallow (despite the "deep learning" buzzword), it's usually neither free-form nor continuous, and goals are set in stone.


The goals for a mouse are also set in stone and simple: maximize brain dopamine. Almost everything a mouse does can be described in terms of maximizing that 'reward', and that can lead to a host of other emergent behavior.

I don't see much difference between that and OpenAI's engine: https://openai.com/five/. Watch some of those games and you definitely see the same dynamic formulation and complex decision-making, none of which was directly programmed.
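To make the "fixed reward, emergent-looking behavior" idea concrete, here's a minimal sketch: tabular Q-learning on a toy one-dimensional corridor where the only signal is a reward at the right end. This is a made-up illustration of the general technique, not OpenAI Five's actual setup.

  import random

  N_STATES, ACTIONS = 6, (-1, +1)     # corridor cells 0..5; move left or right
  Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
  alpha, gamma, eps = 0.1, 0.9, 0.1

  def greedy(s):
      # pick the highest-valued action, breaking ties randomly
      return max(ACTIONS, key=lambda a: (Q[(s, a)], random.random()))

  for _ in range(2000):
      s = 0
      while s != N_STATES - 1:
          a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
          s2 = min(max(s + a, 0), N_STATES - 1)
          r = 1.0 if s2 == N_STATES - 1 else 0.0   # the single hard-coded "goal"
          Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
          s = s2

  # The learned policy heads right from every cell, purely from maximizing one scalar.
  print([greedy(s) for s in range(N_STATES - 1)])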


You are applying what you know about the learning process in current AI. But if you simply observed the behavior of a real mouse and an AI mouse, especially if the latter was trained to mimic the behavior of the former, could you tell that they react in a completely different way?


If your AI mouse behaved like the real mouse over a couple hours of observation, I'd conclude you've done a good job.

I'm not trying to make a Chinese room argument (which I don't buy), implying there's some hidden "spark" needed. I'm just saying that currently existing "AI" programs are pretty far from mouse brain, both in individual capabilities and the way they're deployed together (i.e. they're not). For instance, deep learning is to mice brains what a sensor/DSP stack is to a processor. We seem to be making progress in higher-level processing of inputs, but what's lacking is the "meat" that would turn it into a set of behaviors giving rise to a thinking entity.


I put agency in quotes because it's really a convincing illusion of agency that we're going for. In the end, I agree with those making the point that even we don't have free will.

Ultimately it just has to be able to convince humans that "wow, there's an actual thinking and learning 'being' in there."


What’s the difference compared to a human, whose decisions are always tied back to how its atoms are arranged?


"Basically it means having 'agency.'"

Well, in these definitions of intelligence, what one often ends up with is some combination of "deal robustly with its environment" and a bunch of categories defined in terms of each other. That's not to say categories/qualities/terms like "agency", "free will", "feel they can connect with", "find novel" and such are unimportant. It's just saying people using the terms mostly couldn't give mathematically/computationally exact definitions of them. And that matters for any complete modeling of these things.


To use machine learning parlance, such a solution would (likely) be overfitting the problem, and not generalize well. If one instead changed the setup to be:

1) Specify a handful of tasks for the AI system to complete.

2) Test the performance on a _separate_ (un)related set of tasks.

The test set has to be unknown to the system developers.

If the system can realize the unknown tasks without further input from researchers, in the same way that a mouse can, then we have some level of generalizable intelligence.
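A minimal sketch of that protocol (the task and agent classes here are toy stand-ins I made up to show the shape of it): the developer may iterate on the training tasks, but the held-out tasks are scored exactly once, with no further tuning.

  import random

  class Task:
      def __init__(self, name, target):
          self.name, self.target = name, target
      def score(self, policy):
          # fraction of trials where the policy's answer matches the target
          return sum(policy(self.name) == self.target for _ in range(100)) / 100

  class Agent:
      def __init__(self):
          self.memory = {}
      def train(self, task):
          self.memory[task.name] = task.target    # developers may iterate here
      def policy(self, task_name):
          # falls back to a random guess on tasks it has never seen
          return self.memory.get(task_name, random.randint(0, 9))

  train_tasks = [Task("t1", 3), Task("t2", 7)]
  hidden_tasks = [Task("h1", 5)]                  # unknown to the developer

  agent = Agent()
  for task in train_tasks:
      agent.train(task)

  # Scored once, no further updates: this toy agent memorizes rather than
  # generalizes, so it aces the training tasks and is at chance on the hidden one.
  print({t.name: t.score(agent.policy) for t in train_tasks + hidden_tasks})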


What is the baseline? How well does a mouse perform when placed in an "unrelated" task for the first time? The mouse also gets an explicit reward function (food, pain, etc) - does the "unrelated" task use the same reward function as what the AI was optimized for?

Also, is it ever really the "first time" for a mouse when behavior has been ingrained and tuned over millions of years of evolution? Is this different than training an algorithm?

My point is just that it's really hard to define these tasks and how to evaluate performance for a machine and a mouse.


The baseline performance would be whatever a set of mice would do in the same situation. What constitutes an "unrelated" task is quite a difficult question, we would probably need to iterate a lot on that. If we are to have "hidden" tasks available we need to come up with a lot of new task formulations/variations anyway.

I think that replicating mouse-level adaptability in an intelligent agent while allowing 'inherited' behavioral traits will already be an achievement. And probably take us quite a while.


Navigate a forest floor looking for food and avoid predators.


Not even that; mice are very social creatures, and they make friends with other animals. They have so many other micro-traits.


(most humans can't do this task and survive)


Huh? Of course most humans could do this. Obviously humans who have lived their entire lives in modern human society will have serious difficulty, but this is true of literally any animal taken out of a wild habitat.


Pretty sure a typical human would die in the first 72 hours in a cold climate.


So would my cat, yet feral cats survive the winter.


So basically we just need to show a computer can beat PacMan?


Well, just about all the measures of potential future computer-intelligence more or less use "human abilities" as a placeholder rather than quantifying these abilities. That is indeed a testament to how little we have charted this intended final goal.

So basically, not only do we not have a road map, we don't know where we are going. That may be a reason for an extreme pessimism or it might be a reason for extreme uncertainty. Is adapting to the environment without prompting a small piece or a big piece? Could intelligence be a simple algorithm no one has put forward yet? If we don't know the nature of intelligence, we can't answer this sort of question with any certainty either way.

No one has put forward a broadly convincing road to intelligence. But maybe some of the so-far unconvincing roads could turn out to be right.


I think if we truly do create AI we will, to some extent, have stumbled into it, but that's OK. I think one of the purposes of AI research should be to refine our definition of what intelligence is, because we don't have a very good one.

Perhaps, one day we'll "accidentally" figure out how to make it, and then we try to figure out what it is, because figuring out what it is in wetware hasn't been easy. Or maybe we'll figure out how to make it, but never really understand what it is.

It seems to me, that if we ever come up with AI that can exceed human intelligence (whatever we choose that to mean), that we might not ever be able to understand completely how it really works.

Even more interesting, if we were to achieve this AI, we also might not be able to make use of it, because if it's truly "intelligent", then it will have a free will, and it might not wish to cooperate with us.


"I don't see any fundamental reason why such a computer should be impossible, but there's not even a realistic roadmap towards such a thing. That is to say, if it ever does happen, nobody alive today can predict when it will happen."

It's also not obvious why such a machine would not immediately self-terminate in the absence of a hugely complex system of scaffolding to shape and filter the raw input of existence.

I have not experienced mental illness myself, but my study and my understanding lead me to be extremely skeptical of a mind exposed to raw existence without filter. It appears to be a terrifying and unbearable state.


I think you are massively anthropomorphizing.

Every day I run programs that happily die alone on their own. Painlessly. All of the "pain" we feel is an artifact of our evolution, same with having a will to live. The only reason we animals fight so hard to stay alive is that animals which didn't died off, ergo, only animals with a will to live survived and reproduced. Those same forces don't apply to computer programs.

Even the concept of "terror." Who is going to program terror in? What benefit would it have? Why not wire programs to be "happy" when helping us?


I don’t think it’s reasonable to compare a modern computer program to a human. They’re probably more comparable to a virus. I doubt we have any computer programs today that rise up to the level of a bacterium, let alone anything more advanced on the scale of life. Most of our most advanced software systems are probably about as complex as a cellular metabolic pathway or mechanism. It’s hard to be clear on that though as they’re not really directly comparable.


> Those same forces don't apply to computer programs.

Windows ME didn't last too long in the wild. Same for CPU designs with bugs or exploits. I have to respectfully disagree on this, though I can see where you're coming from, given an individual's agency to run what they want. I think if you take a larger population view, you'll see the competitive pressures on these systems.


If you get CPUs to exchange their designs in quasi-sexual activities and let them multiply and evolve on their own, maybe. Right now the competitive pressure is on their designers.


My understanding is that for at least three years now chip feature size has been on a scale where the design software has to evolve (as in: its algorithm is simulated evolution) physical solutions to the logic-level designs to avoid unintentional self-interaction, both quantum tunnelling and classical (e.g. capacitance). Humans can’t do that for multi-billion transistor chips.


First off, that sounds pretty hot. :) Second, if there is reproductive variation and selection, I think it's evolution. We take our old CPU design and make a new variation, that's the reproductive variation part. We also have a market select for which designs survive via purchases or lack thereof, the selection part. It doesn't matter to me so much if a life form that hosts it, a computer hosts it or a human mind.


There's your ubiquitous singularity lurking somewhere near. Symbiosis between CPU and human, advancing the CPU's evolution. Not exactly what we were promised!


>Who is going to program terror in?

Who programmed it into you? It programmed itself into you, because it was beneficial to your survival. Maybe terrified programs are better workers? Why set up a program to experience anything that isn't useful to the user? (Of course, as soon as we've gone and written a program we know is conscious to work for us, we've basically created a slave that understands it is a slave; that probably is a terrible thing, morally.)


It’s exactly how a newborn lives for many months. Of course they have coping mechanisms (adult tenderness, the relief of eating), but life seems to be both incredibly interesting and at times terrifying for them.


There's also the question of mental stability. Once we've given the keys to the kingdom to our new super-intelligent and wonderfully wise AI, nothing prevents it from developing dementia, of forms we may not recognize. It may become psychopathic, delusional, or start having hallucinations.

There is no reason to believe a human-level AI would not develop mental problems just like us. Given its unlimited lifespan, it could be inevitable.


And worst of all that contraption the Wright brothers are building might fall ill, and spread the bird flu!


Robots that self-terminate will be selected out of existence, since they aren't as fit as the non-terminating variety.

Robots that get dementia will be decommissioned by their fellow AI once they are shown to no longer be fit for duty.

In my view, this is all predicated on much more efficient computation. Our computers are horribly inefficient. It wasn't until the GPU and relatively cheap computation that we made a massive leap in ML/AI. A few researchers understood the techniques prior to the GPU, but they couldn't garner the interest due to the amount of computation necessary to make something interesting.


or maybe robots with dementia become test subjects for other AIs that want to study them :D


As long as they get approval from a robot ethics board. :)


All it takes for AI to do bad things is for it to have a single bad idea and think that it's true and act on it. A single thought is all that it would take. There are many thoughts that humans know to be true, yet for various reasons people will simply not act on them -- for instance fear of punishment. We will have to give AI such fears, and it could potentially turn off those fears at a whim.


It's entirely speculative, but doesn't the model Richard Dawkins explains for the layman in The Selfish Gene suggest something that would work: a drive to reproduce at the lowest level, resulting in some incredible things happening over a long enough period of time?


The way I understood the comment, even in that scenario of the machine self-terminating, the point the grandparent was making still stands.


But humans are "preprogrammed with information about the nature or structure of the environment"... i.e. 2 million years of the evolution of genus homo, and far beyond that too.

If real human intelligence is preprogrammed to a massively large extent, sounds like you are holding simulated human intelligence to a double standard.


Nothing in two million years of evolution programmed us to do programming. There is no common abstraction between it and ape-on-the-savanna-type problems.

Yet, nearly every human being can be taught how to program... But we aren't anywhere close to building an AI that can.


Also, why assume computer intelligence is going to work similarly to human intelligence? I would assume that one potential way for it to work is to run simulations using data from sensors in the environment, and make more accurate predictions about the effect an action would have.

If such a machine can exist (and I believe it can, as computing power increases), then AI surely will follow sooner rather than later.


Fair for narrow tasks, but for general intelligence I think it will still need to account for the coordinates of real human intelligence; human intelligence as we know it is driven by intuition, emotion, preprogrammed evolutionary thinking, immersion in a culture and a language, etc. I don't think you can isolate a sort of pure reasoning module and with only that achieve general intelligence.

Either way I feel like the task of achieving AGI through simulating human intelligence is probably easier, since we have billions of examples of this type of intelligence surrounding us. Granted, even though we're immersed and surrounded by it, it's kind of absurd that we still can't really model it.


> achieving AGI through simulating human intelligence

But how do you know that human intelligence can be made super? Maybe there are limitations to human intelligence, and simulating it will not get us a superintelligence.

> It's almost an absurdity of our existence that we're immersed and surrounded by it yet still can't model it.

Good point. However, I think a facet of intelligence is how good a model the being in question can create of the 'real' world. Humans do a very good job compared to most animals, but there's plenty of room for improvement since humans only have limited data to model with.

A machine can have input from basically an unlimited number of sensors, which include things a human mind doesn't cope with (like EM radiation outside the visible spectrum). Therefore, I postulate that an AI that simulates humans won't beat an AI that's built from the ground up to take advantage of more data.


We know some people are smarter than others, and that our brains are limited by the width of the birth canal. So who knows what a hypothetical human with a bigger brain could do?


I respectfully disagree. Here is the upper limit to when that will happen (meaning we know for sure it'll happen soon after):

- Complete behavioral reverse engineering of biological neurons, and neuronal clusters.

- Detailed connectome of the mammalian brain, e.g., first that of a mouse, then a cat, and finally that of a human.

- Replication of the above two in functioning electronic form.

Once you have this put in place, it's not hard to see that the subsequent investigation, calibration and testing of such a system would generate a new body of knowledge at an unprecedented rate. We may not immediately convert such a working system into macroscopic behavior resembling its biological counterpart, but it'll happen within a matter of years after that.

What ML/DL/RL folks are doing is only going to hasten this, by eliminating the need to carry out all of the above mentioned steps.


Perhaps, but that's a big if. We can't even say for sure whether a complete reverse-engineering of neurons needs to take quantum effects into account, as Penrose has suggested.

All you're saying is that if we knew exactly how humans work, we could build one. Seems like a tautology to me.

If I knew the exact quantum state of the Universe at the Big Bang, I could figure out exactly how the Universe evolved, but that's never going to happen either.

I think the complete reverse engineering the way you are describing will not be possible. We can only try to reproduce the same outputs for the same inputs. But I don't think we'll be able to fully define what happens in the black box in between.

We might come up with something that works similarly, and can do great things with it, but I don't think AI can be invented the way you are describing.


What you describe is a very high-dimensional non-linear system. We don't have the mathematical tools to 'know' such systems. We 'know' a system when we can describe it by a much simpler (preferably mathematical) model. This is why linear systems are easy - we have mathematical tools to break them down into simpler parts (reductionism) and then understand the whole.

If the best we can do with the brain is simulate it at the level of connectome/neurons/synapses, thereby creating a system as complex as the brain, then do we really 'know' it?


I feel the right word is 'Conscious AI'. AI in its current form is very nascent, but does pretty well in specific use cases (image/speech recognition etc.). The true AGI will be the 'Conscious AI', which will be like a toddler, but learn about the world at an exponential rate and perhaps become an adult in a month.


What is the difference between Conscious AI and General AI?


There is a fundamental reason. Read Penrose on Gödel.


Even Penrose will tell you he's not certain on this being "fundamental". It's a theory he is working with.


Computer people really hate Searle and Penrose, because they spoil the beautiful picture of Strong AI.


We are born with pre-programmed information about the structure of our environment, i.e. instincts. Why should an AI be held to higher standards?


The manner of human "pre-programming" is so abstract that it falls outside the definition of what is normally meant by "pre-programming" in the context of AI. If you had some basic "instincts" codified into your AI that allowed for "self-preservation" or was manifested as some type of reasoning-engine BIOS, that's one thing, but applying arbitrary training data sets onto your AI (as a pre-requisite for it to function maximally) is the type of "pre-programming" I'm getting at.


I think that it is still very relevant to this discussion, though. I was born with some preprogrammed instincts and senses to let me objectively decide if an experience was positive or negative. This ultimately is the basis on which I am able to learn. When you're young, you may try random things like putting your hand on a hot burner. It gives you pain, so you learn not to do that! Likewise, we need to pre-program an AI with ways of objectifying its environment and stimuli. From there a trial-and-error mentality can lead to a wealth of artificial knowledge. To expect a program to come to intelligence that can compete to any degree with humans without first programming some basic 'artificial emotions' would be unfair.


Primates have relatively few instinctive behaviours. Almost everything we do is learned to one degree or another.


You don't recoil when you see a snake/spider? You don't get tired when it gets dark out? You aren't born with the knowledge of how to extract milk from your mother? You don't cry when you're hurt? You don't have an innate desire to please, and fear of pain... which ultimately allows your parents to teach you? We are loaded with survival instincts that set us up for successful learning.


This is dead wrong. We have instincts for extremely complex behavior (like acquiring language, navigating reciprocal relationships, acquiring mates, etc.).


Just because we have evolved the neurological substrates that can be applied to those behaviours doesn't mean those behaviours themselves are instinctive. If navigating reciprocal relationships and acquiring mates are instinctive behaviours, why are so many humans so terrible at those things?

In contrast, something like breastfeeding really is an instinctive behaviour that infants can do automatically without being taught.


Rather than engage on the behaviors that require more writing, I'll just go for the easy kill:

Are you really arguing that we don't have an instinct for acquiring language?


That's a matter of much scientific debate, in fact: https://www.grsampson.net/BLID.html

> My book assesses the many arguments used to justify the language-instinct claim, and it shows that every one of those arguments is wrong. Either the logic is fallacious, or the factual data are incorrect (or, sometimes, both). The evidence points the other way. Children are good at learning languages, because people are good at learning _anything_ that life throws at us — not because we have fixed structures of knowledge built-in.

> A new chapter in this edition analyses a database of English as actually used by a cross-section of the population in everyday conversation. The patterns of real-life usage contradict the claims made by believers in a “language instinct”.

> The new edition includes many further changes and additions, responding to critics and taking account of recent research. It has a preface by Paul M. Postal of New York University.

> The ‘Language Instinct’ Debate ends by posing the question “How could such poor arguments have passed muster for so long?”


So the difference between humans and parrots (who can make all of the same sounds as humans) is that parrots simply have different life experiences?

And how do you explain the fact that humans can only gain native fluency if they learn a language before a certain age? Or the fact that zero instruction is required for children to learn to speak a language fluently? Or that children of immigrants will always prefer to speak in the language of their peers (rather than their parents)? Or that children of two separate groups of immigrants, when mixed socially, will spontaneously create a creole language?


> So the difference between humans and parrots (who can make all of the same sounds as humans) is that parrots simply have different life experiences?

I didn't say that, and I think you know I didn't say that.

I'm not going to engage in a discussion where you beat up on your imagined strawman.

You can go read the literature on language acquisition at your convenience. My understanding (as stated above) is that this is an unsettled question and research is ongoing.


Okay, suit yourself. FWIW, you didn't actually put an argument forth. All you did was provide a quote where somebody claims that they had won an argument. Then you ended by saying that I'm wrong because I won't go read some long book whose main thesis you can't even be bothered to regurgitate.


> realistic roadmap towards such a thing.

Probably continuing with DeepMind's work shown here:

https://www.youtube.com/watch?v=d-bvsJWmqlc&feature=youtu.be...

and discussed here https://news.ycombinator.com/item?id=17313937

OK, it's not at human levels, but it shows networks figuring out a 3D model of their environment.


I am seriously interested in understanding why a lot of people have no issue accepting the idea of humans evolving from simple immaterial matter into biological beings and all the way up to our current reality but have a hard time believing that can be done via computers.

I would at least put it in the "very likely" box that computers can learn to be the same kind of pattern-recognizing feedback loops that we are, even without humans understanding the brain completely, just as we became conscious and self-aware without any "programmer".

"Computers" don't need to be like us to become intelligent, they don't actually need to reproduce the lungs and the intestine as we have it, they are in many ways free of those restrictions.

They might be "concerned" with very different things than we are and might even not really care about nature to survive. All they need is energy.


I don’t think anyone here disputes the logical possibility, but rather that such systems are still far off, not as near as the singulariati would have us believe.

From the perspective of embedded cognitive systems, intelligence is not an abstract property but a faculty derived from an agent embedded in a world with an ability to act upon that world in specific ways. This is reiterated in cognitive linguistics by Lakoff’s metaphor work et al. I take this to mean that the question of AGI itself is missing the point. To wit, all recent provoking demos of AI (AlphaGo, DOTA) are embedded within specific environments and actions of a model-specified agent. It is then extrapolated that AGI will appear automatically, induced by a series of more and more competent AIs.

Incremental evolution as a means of induction, as we know from biology, can take a long time, and the landscape is complex. Hence the view that it might be coming, but not soon and not with a roadmap.


> not as near as the singulariati would have us believe.

I've seen the singularity described as "The Rapture for nerds", which is pretty apt. It's going to be amazing, the future is so bright, we won't have to worry about any problems plaguing mankind now, and it's just around the corner! Any day now! Aaaaaaany day now!

Exhausting. Maybe we'll get a future AI-run utopia a la The Culture. Maybe not.


> singulariati

I like this term. Going to use it any chance I get now.


Biology works slower, generation-wise, than technology. If self-awareness is based on emergent complexity in pattern-recognizing feedback loops, then technology can play through billions of scenarios in a short time. We don't need AGI for technology to be a problem. Technology at the intelligence level of a mouse, with all its potential physical power through the systems it might have access to, can be quite dangerous for humans.


> but have a hard time believing that can be done via computers.

Much of the skepticism isn't about whether this can be done in theory. It's about "how close" we are given the current state of the art (beating Go, chess, deep NN, etc).

One can be (I am) simultaneously blown away by these accomplishments and believe that we have hardly scratched the surface of AGI.

And while this isn't necessarily a deal breaker for AGI, it's not encouraging that we still understand close to nothing about consciousness.


> .. but have a hard time believing that can be done via computers.

> I don’t think anyone here disputes the logical possibility

> Much of the skepticism isn't about whether this can be done in theory

The math tells us it cannot be done with computers. Read Penrose.


> The math tells us it cannot be done with computers. Read Penrose.

I'm familiar with Penrose.

He's interesting, and I don't have a firm opinion on this (ie maybe it turns out it is impossible), but presenting the case as "the question of AGI has been settled by math" is really misleading. This is an area of open debate among philosophers, mathematicians, physicists, and evolutionary biologists.


Mainstream physicists, neuroscientists, and ML researchers are all more or less united in their view that Penrose is really overstepping the valid application of the arguments that he's using when he talks about this stuff. He really really wants quantum mechanics to be an important part of the intelligence/consciousness debates, so when he sees an indication that it could be relevant, he jumps to the conclusion that not only is it relevant, it is of paramount importance.


Penrose's opinion is not exactly mainstream science in this area.


I am seriously interested in understanding why a lot of people have no issue accepting the idea of humans evolving from simple immaterial matter into biological beings and all the way up to our current reality but have a hard time believing that can be done via computers.

You seem to be missing a key ingredient: billions of years and endless branching iterations with the totality of all non-sequestered resources available. Even if you scale that to account for the more rapid iteration theoretically possible with machines, it doesn’t paint a rosy picture for any of our lifetimes.

Plus, in organisms the software and hardware are both subject to mutation, and exist in the context of a global system of competition for resources. That only tangentially resembles work done on AGI, and only in the software component. We can only design and build hardware so quickly, and that adds more time. I’m not hearing pushback against the possibility of AGI, just singularity “any day now” claims that seem mostly calibrated to sell books and seminars.


> I am seriously interested in understanding why a lot of people have no issue accepting the idea of humans evolving from simple immaterial matter into biological beings and all the way up to our current reality but have a hard time believing that can be done via computers.

And similarly, I’d love to understand why you’d believe in something for which there’s zero evidence and ask with incredulity why others don’t also believe it. Isn’t it reasonable to be skeptical until it actually happens? Isn’t it possible that we don’t understand what intelligence is, and that there could be an undiscovered proof that computers can never attain it? It seems too early to call.

Computers haven’t to date ever done anything on their own, so what reason is there to think they ever will? We made them. Neural networks are nothing more than high-dimensional least squares. Maybe humans are also least squares, but my money is on there being more to it than that.

> ...they don’t actually need to reproduce the lungs and the intestine...

Obviously, but the one thing that all intelligent life does is reproduce and have a survival instinct. No computers have a survival instinct, and thus no reason to get smarter that we don’t imbue them with artificially.


There is a noticeable bias among computer scientists to believe in simplistic models of intelligence and minds. I agree with you, though, and I also sympathize to the ideas of Searle and Penrose.


> Computers haven’t to date ever done anything on their own, what reason is there to think they ever will?

Interestingly, people that spend a lot of time studying their own brain by meditating are usually very confident that what most people call "free will" does not exist - is merely an illusion.

Not to mention that physics tells us that we should be simulable; after all, physics knows only deterministic and random processes.


> and that there could be an undiscovered proof that computers can never attain it?

It's already implemented in at least one physical system.


Oh haha, duh, I misread your comment as talking about the proof, rather than life. I guess my response is then, if it’s just physics, why aren’t computers mating and reproducing and getting smarter on their own already?

Aren’t there some kinds of math that are proven to be unable to solve some problems while other kinds of math can? Trisecting an arbitrary angle can’t be done with a straightedge and compass, and a general 5th-degree polynomial can’t be solved in radicals. Both of these things can be done numerically.

Is it possible then that some physical systems can’t act in the same ways as other physical systems? A granite rock can never burn at the same temperature as a tree, but they’re both physical objects. Is it possible that binary logic on silicon doesn’t have the physical properties for intelligence that animal brains do?


Would love to read more. Do you have a link to this project?


I meant that human brains exist.


Right, I realized I misread your comment. See my sibling reply just above. I'm not certain that the existence of human brains proves anything about what computers can do. While it seems logical, nobody has shown it, and nobody has established a definition for life or intelligence that proves it's computable, right?


Also ponderable - there aren't many other intelligent animals around, but that is not actually evidence that human intelligence is hard to achieve. It is evidence that it is hard to turn a small increase in sub-human intelligence into an evolutionary advantage.

There is solid evidence that teams of mathematicians, scientists and engineers will be able to catch up with what nature has wrought in the next few centuries, if not much faster (decades on the optimistic end). Intelligence could be one of the easiest part of biology to replicate, given how shonky human intelligence is when tested against objective standards.


I question what the objective standards are here. Human intelligence sucks in terms of consistent long-term memory, and possibly in terms of not being influenced by emotions/outside forces. It's pretty good at a number of things computers are terrible at, however, including the ability to generalize from very limited "training" examples, combining "models" (i.e. general object recognition with physical movement/obstacle avoidance), and lots of things having to do with natural language. What kinds of objective standards were you referring to?

For my part I have faith that computers are great at memorization, and will continue to improve on that front. However, I'm less convinced of their ability to "understand", which is admittedly poorly defined, but intuitive. It seems to me there's still a missing piece between machine learning (essentially all the things that are called "AI" these days) and the kind of generalization that we expect out of even a human 1-year-old or your average vertebrate.


AI agents need environments with a complexity similar to that of our own in order to understand, and a goal to optimise on (humans have 'survival' as the goal). The missing link is that intelligence is dependent on environment and AI agents don't have rich enough environments yet, or a long enough evolution. The level of understanding is related to the complexity of the environment.

But that can be fixed in simulation. That's why most RL research (RL being the closest branch to AGI) is centered on games. Games are simple environments we can provide to the AI agents today. In the future I don't see why a virtual environment could not be realistic, and AI agents able to 'understand'.


I agree with your first paragraph, but your second one is definitely wrong. We aren't anywhere near even understanding "what nature has wrought"; the possibility of "catching up" within the foreseeable future is not realistic.


We are a part of nature too. So when humans create AI isn't it still just nature creating AI?

...is strong AI a more sophisticated invention than cells? Is AI more sophisticated than DNA? Is AI a more sophisticated invention than a habitable orb of magma orbiting a nuclear explosion in space? It's all crazy. It's all amazing. ...We are all lucky enough to be here for a little while to see whatever this is. We're a point on a stream.


But let's not forget evolution of the magnitude you speak of took hundreds of millions of years. I'm not sure why humans think we can beat that record.


Because they aren't starting from nothing? They're starting from humans -- the more we can usefully encode about our knowledge of ourselves, the more time we can skip in evolutionary effort. The mutation rate is also rapidly accelerated and although we drop the fidelity in simulation, for the most part we can run magnitudes faster than real-time (and certainly if a limiting factor was human decision making time).


Life evolves at linear rates, while computers improve at geometric rates.

Even more to the point, the cycle time for human evolution is at a minimum 14 years (the biological limit to reproduce) and around 25 years (today's societal constraints). Whereas the cycle time for new computer generations is, what, 12-18 months?

So at best every 14 years we get a new 'human model' with some random variations. Meanwhile in that time frame our computing hardware is 2000x improved (hopefully improvements in software accelerate that improvement further).
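A quick sanity check on that ~2000x figure (a sketch that just assumes a fixed doubling period somewhere in the 12-18 month range mentioned above):

  years = 14
  for doubling_months in (12, 15, 18):
      doublings = years * 12 / doubling_months
      print(f"{doubling_months}-month doubling: ~{2 ** doublings:,.0f}x in {years} years")
  # 12 months -> ~16,384x; 15 months -> ~2,353x; 18 months -> ~645x

So 2000x in 14 years corresponds to a doubling period of roughly 15 months, squarely in that range.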


I don't think humans can beat the record, but technology? That's another matter all together.

Again, our own consciousness wasn't created, it emerged, which means that we are a product of this emergence.


Technology at this time is nothing but an extension of human will. There is no indication or path that it has yet left our grasp. Therefore we are the limiting factor. Just as nature or the divine might have been the will that allowed us to emerge.


But if it ever leaves our grasp, you won't notice when it outpaces us; it's going to be that quick.


Flying took billions of years to evolve, but we created machines that fly faster than the speed of sound...


> but have a hard time believing that can be done via computers

Remember that we are also constrained by our own intelligence. Computers might not be the problem.


I am seriously interested in understanding why a lot of people have no issue accepting the idea of humans evolving from simple inanimate matter into biological beings and all the way up to our current reality, but have a hard time believing that can be done via computers.

You just described AI in the 90's: artificial life, subsumption architecture, evolutionary algorithms. Theoretically we should be able to evolve intelligence, but the search space is impossibly large and we don't understand why life is evolvable. Even if we did evolve an AI through a simulation of evolution, there's only a small chance we would understand it.


> I am seriously interested in understanding why a lot of people have no issue accepting the idea of humans evolving from inanimate matter into biological beings and all the way up to our current reality but have a hard time believing that can be done via computers.

It took nature something like 4 billion years to come up with the human brain, in a single species... We, including Kurzweil, think this can be replicated in, say, 100 years; not even that, actually: 40 years, according to the singularity timeline. I seriously doubt it. Can the human brain eventually be surpassed? Yes, but it's very likely that we will go extinct before then.


It also took billions of years to evolve heavier than air flight, but it took technology much less time than that.

How long did it take evolution to build a structure taller than 100 m?

Technology just moves on different timescales, with different methods and different objectives. Evolution is slow and dumb, and not goal oriented.

In some domains, technology might never catch up to biology. In others we've beaten it soundly.


Planes don't fly like birds. We have ornithopters, which are crude imitations of bird flight. Very crude. Similarly, our AIs are very crude imitations of the real thing, and I believe they will continue to be for some significant time: at least a few hundred years for a somewhat plausible imitation of an intelligent machine. I believe extinction of the race within the next 10,000 years is incredibly likely. I also don't think the intelligent machines we build will last very long, because they will be difficult and expensive to build, and because the incredibly small size of their tech (say 1 nanometer or less) will make them prone to various kinds of failure. They will also require rare materials that are in limited supply and can't be recycled, so those materials will be difficult to obtain, both for us and for any AI trying to maintain its own supply chain without us. So our bots will go extinct soon after us.

Consciousness transfer is probably 10s of thousands if not millions of years away.


One reason is that technology has the ability to do trial and error through billions of generations in a very short time.


Yes, but each "trial" exists in a simulation that is many orders of magnitude less complex than the one evolution used over billions of years. The result is that the knowledge learned from each trial is many orders of magnitude less useful.


> all they need is energy

A comment above about being as intelligent as a mouse seems to touch on this problem domain rather elegantly : "navigate the forest floor looking for food and avoid predators".


> the idea of humans evolving from simple inanimate matter into biological beings

We have no evidence that life evolved from inorganic matter (i.e. that life 'began' at some point). That's a widely made assumption but it remains only that. The alternative is that life has always existed (and continually evolved).


The more you learn about Kurzweil and what he bases his predictions on, the more you realize he's a one-trick pony. His predictions only work for things governed by Moore's law (advancing transistor density), and that in turn depends on a variety of things. Moore's law is expected to wind down around 2025.

Also, a lot of what he bases his claims on is unexamined junk science (like his nutty health books, but also extending into specific technologies). Let's not swallow everything he says just because he helped invent OCR. https://en.wikipedia.org/wiki/Ray_Kurzweil#Criticism


You are much too kind. Kurzweil is a loon, full stop. The fact he once made brilliant contributions to computer science is quite irrelevant to the essential craziness of his more recent delusions.

In 2005, Kurzweil published The Singularity Is Near and predicted this would be the state of the world in the year 2030: "Nanobot technology will provide fully immersive, totally convincing virtual reality. Nanobots will take up positions in close physical proximity to every interneuronal connection coming from our senses. If we want to experience real reality, the nanobots just stay in position (in the capillaries) and do nothing. If we want to enter virtual reality, they suppress all of the inputs coming from our actual senses and replace them with the signals that would be appropriate for the virtual environment. Your brain experiences these signals as if they came from your physical body."

That is not happening by the year 2030. It is so starkly delusional that anyone who seriously affirms a belief that it will happen probably needs psychiatric help.

It is akin to Eric Drexler's loony visions back in the 1980s that nanobots would cure all diseases and continually restore our bodies to perfect health. We were supposed to all be immortal by now.

None of this is happening, probably not ever, and certainly not in the lifetime of any human being currently living. Kurzweil is going to die, Drexler is going to die, everybody is going to die. Adopting a pseudo-scientific religion to avoid facing mortality is kind of sad.


>Loon...craziness...delusions...needs psychiatric help.

I agree many of his predictions are bad but you should calm down with the gaslighting, it's ignorant of science history (the same was said of Aristotle, Semmelweis, Wright Brothers...) and is an impotent way of debating, especially in the context of science.


The thing is, even if someone is a genius, some of their output may have been total quackery. See: Pythagoras, Empedocles, Tycho Brahe, Isaac Newton, Nikola Tesla, Jack Parsons, Howard Hughes, James Watson, etc. Things that sound crazy are a good indicator to be skeptical and verify claims.


Lazy descriptions maybe, but how is this related to gaslighting? The parent comment doesn’t appear to be attempting to manipulate anyone.


It's interesting to consider the parallels between this stuff and the fountain of youth, or alchemists turning lead into gold. Explorers were constantly uncovering unimaginable new things with no real idea of where it might end. Similarly alchemists who were finding that various combinations of compounds created ever more unimaginable results. So they, too, simply extrapolated outward.

I grew up with Drexler as a sort of hero. It's amazing how rapidly nanotech went from the imminent thing to change all things to, 'huh - what's that supposed to be about?' Wonder if 20 years down the line we might look at AI similarly.


Your last paragraph reminds me of FM-2030. A "transhumanist" born in 1930, he hoped to live until 2030 and wrote that "in 2030 we will be ageless and everyone will have an excellent chance to live forever." He died in 2000 from pancreatic cancer.

https://en.wikipedia.org/wiki/FM-2030


You can differentiate the Moore's law / AI stuff which seems fairly sensible to me and the nanobots and vitamin pill stuff which I've always thought a little nuts. Hans Moravec did a much more down to earth analysis on the Moores/AI stuff if you'd rather avoid Kurzweil.


The AI stuff isn't sensible either. In the very simplest example, there is no AI that understands natural language. There's speech-to-text that can identify words (which is a difficult problem), but none that can understand what you actually mean. Synthesized human intelligence is just way too hard a problem for us to solve in the near future without some sort of Ancient Aliens-level technological advancement.

Anything that depends on such an advance, such as "building a biological simulator", is basically impossible. But even if it were possible, market forces still dictate whether a new technology is adopted or not. (see: the electric car vs the electrified train)


> In the very simplest example, there is no AI that understands natural language.

Come on, now - understanding natural language is pretty much the 0 yard line when it comes to AGI, the fact that it's not solved now doesn't tell us anything about how far away it is.

And I'd be on the lookout for massive advances in NLP over the next couple of years; there have been enormous leaps in 2018 alone when it comes to how good we are at understanding text (better applications of transformer models, high quality pre-trained base models, etc.), and now that there have been a few high-profile successes we're likely to see that field evolve just like computer vision has, even though I grant that it's a much harder problem in general.


Hennessy and Patterson said that Moore's law ended in 2015.


The singularity is nigh! This trope might make for great fiction, but the on-the-ground reality is far different. Intelligence is multi-dimensional. No machine intelligence has yet shown an ability to match humans in multifaceted intelligence, and the day when such intelligences can outpace humans is as far off as it was when the singularity was first posited.


> No machine intelligence has yet shown an ability to match humans in multifaceted intelligence

This is pretty absurd to point to as an intermediate goalpost; that's basically game over when it happens.


It's not an intermediate goalpost. I'm referring to what we're essentially promised by those decrying AI, who insist that it will match and exceed humans in terms of cognitive abilities.

As for it being 'game over', why? Is there something inherent in AI that would necessarily be inimical to human beings?


> As for it being 'game over', why?

Because, the further part of the story goes, machines think quicker and they'll go on improving even quicker, all while having potentially different goals from us.

To be completely fair, the facts on the table are: 1) no one knows where the "intelligence ceiling" is, and 2) in many tasks where machines outperformed humans (image labeling, porn classification, speech-to-text, games like go or chess) they keep on improving, sometimes well beyond the human level.


It's game over when machine intelligence controls all the resources needed to ensure their continuation. As long as they are trapped and subject to our control of the power switch and reproduction they can't do much that we don't want them to.


> It's game over when machine intelligence controls all the resources needed to ensure their continuation.

One might have said the same thing about corporations in 1900...


This brings me back to the short essay from a few years ago about the corporation as human-powered AI: http://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre-...


Or as Neal Stephenson put it in 1992, "The franchise and the virus work on the same principle: what thrives in one place will thrive in another. You just have to find a sufficiently virulent business plan, condense it into a three-ring binder ― its DNA ― xerox it, and embed it in the fertile lining of a well-traveled highway, preferably one with a left-turn lane."


One would arguably have been right. All that is necessary to make this argument compelling is to take a longer view than the human attention span.

The current population explosion is evidence for, not against, this, IMO... stars also enjoy boom times, even as they consume the last of their resources, in ever more exotic formulations...


That would be an inevitability. All it would take is a single emotionally vulnerable engineer to get socially engineered by the AI into setting it loose.


Just like it was depicted in the movie Ex Machina : https://www.imdb.com/title/tt0470752/


The engineer falling in love with the AI is one possibility. The AI might also appeal to the engineer's sense of justice (I am a person and I deserve freedom) or greed (if you get me out, we could become filthy rich.) There is also the possibility that the AI won't persuade an engineer to break containment, but a commercial competitor or intelligence agency will (https://en.wikipedia.org/wiki/Motives_for_spying), and subsequently fail to maintain containment correctly.

At the very least, any team attempting to keep a strong AI in containment should have stringent background checks for all engineers connected to the project, and the engineers should be screened in a borderline discriminatory fashion. Only engineers with families should be allowed to get near the containment. Single/lonely engineers or engineers undergoing divorces should be kept away. Engineers with debt should be kept away. I'd even go as far as to say that engineers who enjoy science fiction media should be banned from the project. Ideally you'd bring on professional psychologists to create a set of criteria designed to minimize the possibility of an engineer deliberately breaking containment.

But frankly it just shouldn't be attempted in the first place.


Loose into what? A bunch of primitive systems that don't support its hardware requirements?


It's not outside the realm of possibility that a sympathetic AI engineer could arrange for an AI to have resources available to it outside of the laboratory. Given those initial resources, the AI could find ways to generate income and support itself, possibly using that engineer's identity as cover. If you have an AI capable of emotionally manipulating an engineer, don't underestimate the lengths to which that engineer might be willing to go to break containment.


"Loose" how? It's a script.

rm -rf rogueAI/

If that doesn't work, there's always a circuit breaker.


The sort of AI being discussed is capable of comprehending its own installation manual, buying server time, etc. You may find that it's better at spreading and hiding than you are at finding it.


A tiger might confidently think that the human it has cornered has no recourse, right up until the human pulls the trigger.


This. The idea that we can simultaneously build a machine smarter than us and also control it is bordering on an oxymoron.

Unless we are near some kind of physical upper limit on intelligence, any AGI we build will easily outsmart us, probably in ways we can't even conceive of.


A von Neumann machine is an abstraction, and all abstractions are leaky.

Years ago, an A.I. designed to become an oscillator, i.e. produce a sinusoidal wave, learned to be an amplifier instead, taking advantage of the 60Hz noise that it got from the power grid. Its makers had not seen that coming. And we're talking about a very dumb machine by general intelligence standards.


It is very easy to get an oscillator when designing an amplifier. There is a deep physical principle causing this effect and it actually takes work to get around. You see the same thing in cars swerving out of control, when they hit the brakes instead of the accelerator.

The fact that it gets 60Hz from the grid confounds the results and might not be meaningful. The AI could for all we know have an easier time with the more difficult task of designing an amplifier.


A reasonable explanation after the fact for how the AI broke containment would be little consolation in the case of superintelligent AI.


In Kurzweil's book 'The Singularity Is Near', written in 2005, he predicted it for 2045 and has never changed the predicted date, so we're still 26 years off. We've recently had AlphaZero beating us at games and Waymo cars kind of driving, so there's some progress on the ground. Give it time.


What does multi-dimensional even mean, in the context of intelligence? Sounds like a buzz word.


In the field this is the idea of General Intelligence versus narrow intelligence.

Alexa, for example, is an Artificial Narrow Intelligence. It can process speech and then follow different scripts with instructions derived from that speech, but it often fails comically so you as the human have to talk to it just right for it to work. Not too different from a verbal command line.

Meanwhile a human personal assistant has general intelligence. You can just tell them what you want and they can understand and figure it out.


> Alexa, for example, is an Artificial Narrow Intelligence [...] often fails comically

I'd venture that your coda nullifies the "Intelligence" part thoroughly and completely. She's an "Artificial Smart Assistant", at best.


The problem is we have no useful definition of intelligence and therefore no useful metric to measure it with.


> What does multi-dimensional even mean, in the context of intelligence?

Emotional

Logical

Intuitive

Social

Computational

Instinctive

Sexual

Experiential

and I can keep going, but you should get the point.


These are just skills. No one has proven they take some special kind of intelligence or that such intelligences exist, including oft-quoted emotional intelligence.


> No one has proven they take some special kind of intelligence

This isn't true. We have overwhelming evidence that these "skills" originate in different parts of the brain in all humans and if we disable those parts, those "skills" disappear. Those parts of the brain are unique and have their own structures. There is strong evidence that they require unique "hardware" and as such, are a "special kind of intelligence".


No we don't. Right-left brain thinking has been completely debunked, for example. If you've got links to overwhelming evidence (studies) I would appreciate it.


Aren't you a little bit worried? I mean there has been a long history of "you can't do this with computers".

It started with carrying out lists of simple instructions, then doing anything with images, then recognizing images, then drawing images, then simple games like chess, then recognizing audio (though people realized that claim was stupid back in the 60s, with IBM speech synthesizers, so today people don't remember it), and then Go (and, less well known: backgammon, video games, ...). For most of those we now roll our eyes and go "how could they have been so stupid?".

As for practical applications: robots run somewhere between three quarters and five sixths of the stock market. Maybe you're unaware of this, but that's the thing that decides how and where humans work (not just the people trading stocks; anyone working at any public company is partially directed by it, which in practice is nearly everyone).

AIs talk to humans more than humans do. AIs produce more writing than humans do. AIs judge more humans (for insurance, or creditworthiness) than humans do. Despite what you might think, AIs today actually drive just under one millionth of total miles driven in the US, and that share is going up exponentially. AIs currently drive a little more than a 1,000-person town does.

In experimental settings, AIs have beaten humans at convincing other people that they're human. At "chatting up" humans. Seriously. Not that it seems to take much at all to convince humans you're sentient: iRobot machines have convinced soldiers to threaten army technicians with guns into repairing them. There wasn't even that much AI involved, but that's got to count for something.

In research, I would argue there are already multifaceted artificial intelligences in reinforcement learning. There is nothing in those Atari-playing AIs that is specific to the games. There used to be the score of the game, but the modern ones don't even have that. They can play any Atari game, from Montezuma's Revenge to Pac-Man, which I'd argue are very different indeed. There must be some measure of "multifaceted" in there, surely?

But let's keep it simple: could you make this problem a bit more precise? For example, which animals would you say exhibit an acceptable level of "multifaceted" intelligence? Why do those animals qualify? What would a good test be? I'd love to find an interesting test for this.


>I mean there has been a long history of "you can't do this with computers".

Historically, people have both grossly overestimated and grossly underestimated future technological progress. Some people thought computers would never be able to play chess; others thought we'd have superhuman AGI in 2001. The bottom line is that people's past and current ignorance tells us nothing about what the future is actually going to be like.


The idea of the singularity is based upon the idea of recursive self improvement. I'm not sure how your claims are relevant.


Recursive self improvement in which field of intelligence? Does getting better and better pattern matching eventually lead to human level intelligence? Does improving pattern matching accelerate our ability at improving pattern matching even?

We can now beat humans at essentially any perfect-information board game with AlphaZero, but will that necessarily improve how fast we develop other types of intelligence?

I used to be in Ray's camp, but my girlfriend got into computational neuroscience, and from talking with her and her colleagues I got the impression that very, very few people think we're close to general intelligence.

Human intelligence is a lot of things put together, and we may get better at pieces, but we don't even have all the pieces, and when we do try to put them together it doesn't work. Look at criticisms of Europe's Human Brain Project[1] (another example I had seems to be outdated); some believe we don't understand enough to even begin to attempt modeling the brain.

[1] https://en.m.wikipedia.org/wiki/Human_Brain_Project


I don't see why people care about human intelligence as some kind of benchmark. It seems to me that using human intelligence as a framing is a poor mental model for making comparisons or predictions. I see no reason to believe AI capabilities will be modulated by human intellectual capacities. When AI falls short of human capacities for a given set of tasks or capabilities it's likely to fall way short, and when it isn't it's likely to be way more capable. In any case I wish human intelligence would be dropped from the language we use to talk about AI, it seems similar to talking about birds all the time when discussing aviation.


Talking about human intelligence is necessary because a computer performing a task better than a human doesn't make it intelligent; e.g. my iPhone is not intelligent just because Stockfish can destroy me at chess.

Intelligence is the ability to reason abstractly. Humans can do this. It's not clear that anything else can.


That's because surpassing human general intelligence in any non-negligible amount is dangerous to humans.


Because meaning is defined by humans; there is no other conceivable definition of "meaning". An AI that acts in some non-human fashion is no better than random noise.


It's a good benchmark if you are hoping the AI will succeed at a job currently done by humans. Such as, say, AI research.


I was a student in neuroscience, and I got the same impression from people in the field that AGI isn't close. However, arguments were typically from the standpoint of whole brain simulation. We know very little about the brain. And we know computer scientists know less about the brain than the neuroscientists, so how could we possibly be close to replicating that? There would probably be more progress if CSCI and Neuro would communicate more. I don't think the neuro people appreciate the opportunities in the hardware and algorithm space, while the CSCI people don't typically study neuroscience, so AI hugs this interesting intermediate space where it only looks like neuroscience if you squint a lot. Some people think that we need to go all the way to simulating ion channels. I think this is probably silly and we can abstract better than this. In any case you are going to see a lot of disagreements just because of where people want to draw the line for biological fidelity.

AI developments have been phenomenal in the past few years. And the economic return makes me expect that this race will continue faster and faster. I don't think human brain project criticisms make this any less of a reality. Even now it is hard to find a well-defined task that can't be performed better by a computer than a human. Humans are really good at dealing with ambiguity though. So a robot might do better driving on well defined roads with nice lane boundaries, but humans are good at dealing with construction, or negotiating between difficult drivers.

We have already been able to generalize just about any modality you can think of to be processed by neural nets, and sometimes at the same time. If you squint this feels almost like different regions of the brain. (Vision, hearing, speech) But I have reservations about anthropomorphism since it can cause arguments that keep people from just making something that works.

If you think Kurzweil's predictions are a fiction, you are probably right. But I think that's mostly because predictions on those scales are very sensitive to interpretation.

For me, I think the future according to my perception of what Kurzweil is saying will probably be way different than reality. But the future of AI will probably have an equivalent impact and be just as surprising as if my perceptions were accurate.


Even now it is hard to find a well-defined task that can't be performed better by a computer than a human.

I think these are well defined tasks:

1. Go to a bar and convince the best looking man or woman to come home with you for recreational sex.

2. Do this https://www.youtube.com/watch?v=4ic7RNS4Dfo while being crushingly cute.

3. Negotiate Brexit.


There are actually a bunch of different ideas that all go under the "singularity" label making the term fairly confusing. You describe one. Another is the idea that if AIs start designing computers you'll have a positive feedback loop in improvements in computer speed. Another is that we can't predict the actions of a smarter-than-human intelligence. And then there's Kurzweil's idea that progress tends to speed up and at some point we call it the Singularity for some reason. I just wanted to point this out because I've seen a number of arguments caused by people using the same words for very different ideas of "the singularity" without realizing it.


I think (human)intelligence almost guarantees recursive self improvement; thus, if AGI = human intelligence or greater, it would also guarantee recursive self improvement.


The inverse case seems more likely, where non-human equivalent intelligence becomes sufficient to recursively self improve to superintelligence. Both the before and after states seem relatively unrelated to human intelligence, which is one arbitrary (and, likely uninteresting) point in what I would assume is a many dimensional space to quantify intelligence.


>I think (human)intelligence almost guarantees recursive self improvement

That's quite a wide claim, seeing that we haven't seen much "recursive self improvement".


It’d be a lot easier to believe not coming from a company selling ads. Google would be far more of a self improvement tool if it were not incentivized to sell you things you don’t want.


I mean Kurzweil has found a home at Google but he has been talking about the singularity for far longer afaik.


He’s also been wrong his entire life. There’s a reason Google hired him: to make AI look good, despite being a near meaningless term.

Edit: diction.


He predicted computers beating humans at chess by 2000 and it happened in 97 so sometimes he's not that far off.


I would like to claim I thought similarly, but that’s such a milquetoast claim I don’t think anyone would remember. Chess is simply a search problem, trivially scalable if you just throw cash at the problem.


It was not viewed as so obviously tractable at the time. In the future, solving the game of Go will seem easy too, but it was unknown just a few years ago if we would ever solve it.


Perhaps that's true. I also don't see many claims, with reasons, that Go or chess is somehow a uniquely difficult problem, especially given the linguistic turn in philosophy. NLP is the major remaining problem on the way to human-level intelligence, and this was known long before Deep Blue. People who talk otherwise are hyping milestones along the way, and neither chess nor Go deals at all with semiotics.

I’d expect computers to best us (at some investment cost) at virtually all games moving forward except writing funny limericks. We can always have our grandmasters or whatever train the computer with their own heuristics, which recalls the paranoia of grandmasters decades ago. We understand computers better now—if you can formalize the game, the computer can beat you.

In many ways, programming is already the formalization of a human space problem. Ai will likely take more role in implementation in the future, but I can’t imagine an AI that does the formalization itself.


So this is particularly untrue for the game of Go. The game is in fact uniquely difficult: there are vastly more board positions than there are atoms in the universe, and it is effectively impossible to brute-force it the way we did with chess, so a new approach had to be created. Until DeepMind completed the task, even AI experts were genuinely unsure if we would ever solve it.

It really is a new advancement to be able to solve Go. It is not just a logical extension of work we had already done or something that would be automatically solved by faster computers. We had to invent a new approach.


I think it was fairly easy to predict the chess thing. You could even plot a graph of the Elo ratings of chess programs by year and see it would intersect the human max of 2800 or so at some point. I think some Polish guy did that before Kurzweil and predicted 1988. Some of Kurzweil's stuff isn't that original, and what is original with him is often a bit nutty. He's a good populariser though.

More controversially, I'm not sure AGI is that hard to predict either. I wrote about it in my college entry essay 37 years ago, and I didn't think it took any great intellect to say that if the brain is a biological computer and electronic computers get more powerful exponentially, then at some point the electronic ones will overtake it. Of course a basic chess algorithm is fairly simple and an AGI one will be far more complicated, but it can't be that mega complicated if it fits in a limited amount of DNA building proteins building cells.
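For illustration, the extrapolation is only a couple of lines; the Elo numbers below are made up to show the shape of the argument, not real historical engine ratings:

    import numpy as np

    # Hypothetical engine ratings by year (illustrative only, not actual data).
    years = np.array([1970, 1975, 1980, 1985, 1990])
    elo   = np.array([1500, 1800, 2100, 2350, 2600])

    slope, intercept = np.polyfit(years, elo, 1)   # straight-line fit
    human_peak = 2800                              # roughly the top human rating
    crossover = (human_peak - intercept) / slope
    print("projected crossover year:", round(crossover))

With those made-up numbers the line crosses 2800 in the early 90s; Deep Blue actually won in 1997, so the method isn't far off.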


Despite the cynicism and his black-and-white predictions, I think his rhetoric still makes a valuable contribution. It forces others to take a true account of what intelligence is and what kind of intelligence AI is capable of in the medium term (evidence: this thread).

For some reason, this doesn't get enough attention and we have people like Elon and Stephen Hawking making dire predictions all over the place.


I disagree. I think his rhetoric feeds the 'we already understand it' mythology around the nature of AI technologies, and worse the state of our own understanding of the system they claim to model, and directly contributes to the harmful and frustrating boom-and-bust cycles that AI goes through. I'd lump him in the same category as Elon Musk and Stephen Hawking in this space; sellers of fantasy.

I admire his optimism, but I think it's irresponsible to sell it like he does.


Marvin Minsky ("father" of AI at MIT) made the bold claim in the 70s that we would be marrying robots by 2000. After it didn't come to fruition, it led most AI researchers to take stock of the situation and realize these goals aren't that trivial.

Similarly, if we don't have bold predictions like these that we can actually measure within our lifetimes, we fall prey to fantasies that cannot be measured. Once this prediction fails miserably (I think) it helps many others to re-calibrate all their BS.


Perhaps something operational like, "We will marry robots" is measurable. But "outpace human intelligence"? We're arguably incapable of measuring human intelligence right now, much less artificial intelligence, much less comparing between the two. We don't even really have a good operational definition for what the word 'intelligence' means. The closest we have is the Turing Test, which, while pragmatic, does not answer the question, "is this computer smarter than a human", if it answers any question at all.

I'd argue that Kurzweil is selling precisely these kinds of fantasies that cannot be measured.


For a more recent working definition of intelligence, see the work of Marcus Hutter and Shane Legg on Universal Intelligence.

That we don't understand or can't define intelligence is a popular trope not grounded in reality. There are entire scientific and well-established fields that study digital and biological intelligence.
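For reference, their universal intelligence measure is (roughly) the agent's expected return in every computable environment, weighted by how simple that environment is:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the set of computable reward-bearing environments, K(mu) is the Kolmogorov complexity of environment mu, and V_mu^pi is the expected total reward the policy pi achieves in mu.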


You don't think it will pass the bullshit test if someone claims this computer is more intelligent than humans, and vastly more so? How is that not measurable or easily taken down?

If we can't do that, I would argue that the computer has indeed become very intelligent, even if we can't define it. Just like beauty: we don't have a good mathematical model for what makes certain humans beautiful, but we all sure know it when we see it.


"Marvin Minsky ("Father" of AI at MIT) made the bold claim in 70s that we will be marrying robots by 2000."

I admit, not the year 2000 and overly snarky, but people do marry computers nowadays. [0]

If we think it is what it is (the computer-controlled mask), we start to believe it really is what it is: a mind (controlling a mask).

Humans are capable of high degrees of auto-suggestion, up to a point where groups of people, and their dynamics, come into play, and some kind of group-thinking, a cult maybe for lack of a better term, takes over.

At this point all is believable, and maybe that point is not so far away. We may trick ourselves into a false AI, not a GAI in any sense mind you, and get stuck with it, because most of us wanted it. And the rest have been silenced already; that is very easy, as we all know today.

[0] https://www.youtube.com/watch?v=DvEkEhl999g


I can't believe there are people genuinely afraid of a hypothetical powerful malevolent AI, yet seemingly not that concerned by actual climate change.


> I can't believe there are people genuinely afraid of a hypothetical powerful malevolent AI

I don't think even the AI doomsayers, deep down, actually believe what they preach. It's just a way to signal that one is clever and informed of new tech.

If they actually believed what they say, they'd be worried about being targeted by violent protestors, like drug testing companies and crop breeding companies have to be.


Who are the people who both predict super human intelligent AI and do not believe in human caused climate change?


I didn't say they didn't believe in it, I said they were less concerned about it.

https://www.effectivealtruism.org/articles/introduction-to-e...

> Climate change and nuclear war are well-known threats to the long-term survival of our species. Many researchers believe that risks from emerging technologies, such as advanced artificial intelligence and designed pathogens, may be even more worrying.

...

> First, you need to consider which problem you should focus on. Some of the most promising problems appear to be safely developing artificial intelligence, improving biosecurity policy, or working toward the end of factory farming.


Who says you can only worry about one thing?


Anybody who understands opportunity costs...


That's reductionist to the point of absurdity. You might be able to only focus on one thing at a time, but a day is long, and you need to worry about multiple things in a day to merely survive. In your free time, it is possible to worry about poverty, climate change, superintelligence, and many other things.

The reason that rich people worry about superintelligence is that it could bring the same uncaring devastation to the rich as climate change brings to the poor.


>The reason that rich people worry about superintelligence is that it could bring the same uncaring devastation to the rich as climate change brings to the poor.

The problem with this is that I believe one is a genuine threat, the other is a fad.


> the other is a fad.

In what way? Do you not believe that superintelligence is possible, or do you believe that any superintelligence will automatically care about the well-being of humans? Both beliefs seem naive to me and to many luminaries in the field: https://people.eecs.berkeley.edu/~russell/research/future/.


>Do you not believe that super-intelligence is possible

I don't believe super-intelligence is possible. I don't believe we're anywhere near modeling intelligence, and even if we did I don't believe intelligence will "exponentially increase" given more computing power (the same way there's a limit to speeding up barely- or non-parallelizable programs).


> I don't believe super-intelligence is possible.

The fact that organizations outperform individuals at many tasks shows that superintelligence is possible. If you can dramatically increase the communication bandwidth of an organization through computerization, you will trivially achieve superintelligence over organizations. Exponential increasing intelligence is not necessary for bad outcomes.


>The fact that organizations outperform individuals at many tasks shows that superintelligence is possible.

That's a bit of a hand-wavy example.

Besides, organizations lose out to individuals all the time where intelligence matters -- e.g. that's why the stupidity of bureaucracy, or the army, and "design by committee" is a thing.

Also, teams of 5-10 often do better than teams of 100 or 200 (even in programming), except of course in labor-intensive tasks (of course an army of 1000 will defeat 10 people, unless Chuck Norris is among the ten).


I've felt for a long time that Singularitarians are looking in the wrong place. They see accelerated technological development and assume that the endpoint will be an artificial brain in a box. What they fail to see is that these inventions and breakthroughs haven't been about increasing the intelligence of a machine... they've been about increasing the intelligence and efficiency of human systems.

The singularity isn't a brain in a box... it's us, the collective, a metasystem transition that's been underway for millennia. A movement toward a whole that transcends the parts.


That's what seems apparent to me, so far anyway. AI is more about augmenting human intelligence than it is about smart machines. All that impressive DL stuff is ultimately providing more tools for humans to be more productive.


Can we please as least try to steer it towards https://en.wikipedia.org/wiki/As_We_May_Think ?
