The Road to Superintelligence (2015) (waitbutwhy.com)
65 points by rbanffy on July 25, 2017 | 60 comments



I'm not convinced that progress has continued to accelerate. Compare the past 70 years to the previous 70 years and ask yourself which period of time experienced more radical progress, upheaval, and scientific discovery. My suspicion is that there was more from 1877-1947 than from 1947 till now.


You're looking at the impact on everyday life, which has nothing to do with the abstract advancement of technology. Superintelligence only requires continued exponential growth of computing power, whose consistency is well documented.


Not over the last 10 years vs the prior 10 years.

Sigmoid functions (https://en.wikipedia.org/wiki/Logistic_function) look exponential at first, but there is no longer a lot of room at the bottom. When things were 10,000 atoms wide, dropping that to 1,000 was no big deal yet a massive speedup. Now that they're hitting 5 atoms wide, what do we use next, 0.5 atoms?
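To see the point numerically, here's a toy sketch in Python (every parameter is made up, nothing is fitted to real hardware): early on a logistic curve is nearly indistinguishable from an exponential, and the divergence only shows up near the ceiling.

    import math

    L, k, t0 = 1000.0, 1.0, 10.0   # ceiling, growth rate, midpoint (all invented)

    def logistic(t):
        return L / (1 + math.exp(-k * (t - t0)))

    def exponential(t):
        # same starting value as the logistic, but no ceiling
        return logistic(0) * math.exp(k * t)

    for t in range(0, 16, 3):
        print(t, round(logistic(t), 2), round(exponential(t), 2))
    # The two columns track closely at first; past the midpoint the
    # logistic flattens out while the exponential keeps on doubling.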

PS: Arrays of low-power processors like GPUs are still making progress, but we are rapidly approaching the point where there is more than one core per pixel. We might be able to use 1,000 cores per pixel, but I doubt it. E.g. at 1080p, an SLI pair of GTX 1080 Tis works out to 289 pixels per CUDA core. Sure, 4k/8k will push this off for a while, but monitor resolution has not been going up all that fast.
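Back-of-the-envelope, in Python (assuming 3,584 CUDA cores per 1080 Ti and two cards in SLI):

    pixels_1080p = 1920 * 1080      # 2,073,600 pixels
    cores = 2 * 3584                # SLI pair of GTX 1080 Tis
    print(pixels_1080p / cores)     # ~289.3 pixels per CUDA core
    print(3840 * 2160 / cores)      # 4k only raises this to ~1,157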


Why are pixels relevant to a discussion on computing power?


Pointing out that adding more cores is yet another wall, because FLOPS is only a proxy and few things are truly embarrassingly parallel once you start talking 1+ billion cores.


Furthermore, adding cores is not an exponentially-growing process. On the other hand, there is some evidence, from the fact that intelligence is possible within the human brain, that intelligence is embarrassingly parallel.


I think neurons are closer to transistor analogs than core analogs. The brain wires them up into networks that have meaning, vs passing along data that has meaning. And in that context we recently crossed the milestone of 10 billion transistors on a chip, vs 100 billion neurons in a human brain.


And yet here we are, made of neurons.


A GTX 1080 Ti has 12 billion transistors, which is a closer analogy to the 100 billion neurons in a human than saying we have 100 billion cores.


It also requires learning how to create software that has general intelligence. Some people think that current techniques will be sufficient, but that remains a conjecture at this point.


The bipolar junction transistor was invented in 1948, so I have a hard time going with you here.


From 1877-1947, the following became an everyday reality: the telephone, television, cars, flight, the light bulb, flush toilets, skyscrapers.

The computer age has changed things in a big way, sure, but it's pretty hard to overstate how profoundly many aspects of human life changed in that period.

There was a Planet Money episode recently about this exact topic: http://www.npr.org/sections/money/2017/05/19/529178937/episo...


Electrical applications were developed during the late 19th and early 20th century, so I don't have a hard time going there. And that's just one of many areas.


Power, and a few signal, applications were developed starting in the mid- to late 19th century, sure. But it strikes me as extremely difficult to argue other than that the development of solid-state electronics has vastly accelerated every field of human endeavor with which it is coincident - and, as we've broadened the range of its applications, that set of fields has turned out to be most of them.

In any case I would be very interested in hearing more about a historiography which holds otherwise!


By mid- to late-19th century you mean to include the 20th century up to 1947, including television & vacuum-tube computers, right?

The combustion engine, powered flight, QM, Relativity, modern warfare (including the 2 World Wars that reshaped a lot of the world), commercial refrigeration, widespread use of vaccinations and penicillin, atomic weaponry, and nitrogen for fertilizer are pretty big deals.

The question to ask is which time traveller would have more difficulty understanding the world. The one from 1877 transported to 1947, or the one transported from 1947 to 2017?


That depends where the time traveler landed. In 1947, there were still a lot of places on earth that were unchanged from 1877. Now pretty much every place on earth has power and a cell network.


You know what the GP meant.

And even today, if you take a random poke at a map of Eurasia, there's an overwhelming chance you'll end up in the middle of a Siberian forest with no cell reception.


Exactly. Most things in use today had a long history before they became popular; then they entered a short burst of activity where they became smaller and cheaper, after which they entered a very long plateau again. These bursts create the illusion of ever-increasing acceleration, but they happened in the past as well, and just as quickly, and the majority of them have been forgotten, just as the majority of 'startup innovations' will be forgotten in 20 years.


I wouldn't even say there was a burst of progress. There was simply a long time of incremental progress before the respective technology crossed over into practical viability. It's only at that point that people generally start to notice.


Progress is fractal, by definition. Technically the rate of change possible is limited by the number of edges on the fractal and the number of people who can work on it.


Superintelligence, The Idea That Eats Smart People: http://idlewords.com/talks/superintelligence.htm


This is the best thing I have read in a long time. I have always been skeptical of "superintelligence" and I'm glad to see I am not alone.


Personally I believe that a huge part of intelligence and superintelligence is about emergent complexity in networks, which has increased along with computational power.

Humans (and biological life in general) are, as far as I can see, pattern-recognizing feedback loops. Humans have more connections in the brain and better/longer memory than other species, which seems to be at least somewhat fundamental to our difference from the rest.

So while we don't know for sure IF this complexity is fundamental to intelligence, we sure seem to be on the right track.


If you add too much complexity of the wrong type, you get a star. Just making something more complex by no means guarantees making it cleverer. And beware of using "emergence" to hide the need for an actual explanation: http://lesswrong.com/lw/iv/the_futility_of_emergence/


I am not talking about just complexity, any more than an evolutionist is talking about simple randomness in the evolution of species.

Emerging in this context means incremental, i.e. in small steps based on previous steps. Plants react to light; they don't think about what light is.

Not sure what that article has to do with what we're talking about here.


It is in response to "So while we don't know for sure IF this complexity is fundamental to intelligence we sure seem to be on the right track."

Which to me reads as "hey, if we make things more complex then they might become intelligent - who knows!" which seems extremely optimistic.


That's not what I said. I said if emerging complexity is at the core, we are on the right track. I am willing to hear arguments against complexity being at the center of this, but then that should be what we argue. The star comparison doesn't work in this context any more than randomness does in evolution.


Pattern recognition is peripheral, I think. It's there and is a prerequisite, but not primary. The main thing we have that other advanced mammals don't, to the same extent, is the ability to build models, or internal simulations, of situations, individuals, groups and systems in our heads, modify those simulations, and model how they will behave in different situations or over time. It's this deductive reasoning that distinguishes us, and I'm not sure that pattern recognition plays much of a role in it.

Focusing on complexity is putting the cart before the horse. Human brains are complex because that is a requirement for doing the things that they do; they are sophisticated systems that solve complex problems and therefore need to be complex. That doesn't mean that arbitrarily complex systems will somehow become intelligent because of emergence. Consciousness, maybe; that might be an emergent property of our brains' other functions, but intelligence and consciousness are not the same thing.


Pattern recognition is fundamental to reasoning. Without pattern recognition there is no ability to establish separation between things experienced, and no way to build models or simulate. Without pattern recognition there would be only noise.


Sure, but I don't think that our modelling and reasoning abilities are superior to other primates primarily due to superior pattern recognition. Yes we couldn't function without it, but no I don't think it's the secret sauce we have as humans. It's what we do after we have resolved the patterns that makes the difference IMHO.

Why do I think this? Pattern matching will only tell you that this thing or situation is like some other thing or situation you have recognised before. That only allows you to implement a simple strategy of re-use or a random-variation algorithm. To go beyond that you need to reason about the ways this situation is different from previous patterns, analyse it, figure out how previous successful strategies might work or might need adjusting, or conceive a new strategy. Pattern matching is just the first phase. All animals have this to some extent. But the rest is stupendously more advanced in humans than in any other animal, if they have it at all.


Not the secret sauce, as all organic material has it to some extent. The point is that emerging complexity plus pattern recognition creates more and more consciousness.


I don't think you'll find many neuroscientists that believe this. What's emerging complexity?


You don't believe many neuroscientists believe that the brain requires a certain level of complexity to become conscious?

You don't believe the many scientists who hold that human life started in simple form and evolved into more and more complex life forms, ending with us and our brains?

Unless you are with Searle and his magical thinking in the Chinese Room argument, I don't think you'll find many who disagree with this.


No, as I have explained I don't think complexity by itself inevitably leads to consciousness. That's what I meant by cart before the horse. You need both a cart and a horse, but the dependency relationship matters. But what I was referring to was the proposition that all intelligent behaviour is composed of pattern matching.

Yes of course I believe our brains evolved that way. I don't believe that arbitrary complex systems in arbitrary environments are bound to evolve in the same way. The environmental conditions, the fitness criteria, the selective pressures, the inheritance mechanism, or even having an inheritance mechanism, these are all crucial. For a designed system, the architecture matters.

Searle constructs convoluted arguments using intellectual sleight of hand to support a really dumb conclusion.


My point is that complexity is a necessary building block. It's not the cart before the horse, exactly, because I am talking about emerging complexity, i.e. increasing complexity in a very specific context, namely human brains and computer AI.

No one is claiming arbitrary complex systems in arbitrary environments are bound to anything. I am talking about something very specific, namely how our brain came to be and how computers seem to be getting closer and closer to our brains. Just like how evolution in itself is not just arbitrary randomness.

Unless you are claiming that extremely simple systems can create the kind of consciousness we are talking about here, I have a hard time understanding what you are disagreeing with in what I am saying.


As a counter, see the previous discussion on Kevin Kelly - the AI Cargo Cult & the myth of a Superhuman AI (1).

See also several other articles making similar points (2).

And "In late 2008, economist Robin Hanson and AI theorist Eliezer Yudkowsky conducted an online debate about the future of artificial intelligence, and in particular about whether generally intelligent AIs will be able to improve their own capabilities very quickly (a.k.a. “foom”)."

(1) https://news.ycombinator.com/item?id=14205042

(2) http://www.salon.com/2015/10/15/calm_down_artificial_intelli...

(3) https://intelligence.org/ai-foom-debate/


Ah, Yudkowsky... The autodidact "AI theorist" who does not understand simple math and philosophy concepts, is afraid of being resurrected and punished by a future super AI, has not added anything to human knowledge despite collecting millions of dollars to fund his research institute (which changes its name every few years for some reason) and has actually argued with a bot on reddit. Even when you ignore the singularity bullshit he spews, it is astonishing that anyone takes him seriously.


> is afraid of being resurrected and punished by a future super AI

I hope you similarly mock those Christians or Hindus you come across. What makes this belief system so much more worthy of scorn, even if Yudkowsky actually were afraid of this (given that, AFAIK, he has never expressed anything to suggest that he is)?

"Autodidact" is not the insult you seem to think it is.

Remove those two, and tone down the attacking voice, and you might have something one could discuss.


> I hope you similarly mock those Christians or Hindus you come across. What makes this belief system so much more worthy of scorn, even if Yudkowsky actually were afraid of this

When said person is trying to argue that his belief system is factual, I do very much believe that mockery is called for. E.g. if a christian were to tell me that the big bang did not happen and instead the universe was created by a magical sky fairy who is his own father and who has a really unhealthy obsession with my sexuality. Said christian is very much free to hold and express this belief. But I'm also free to express my incredulity.

> "Autodidact" is not the insult you seem to think it is.

Did I say it's an insult? It simply means that the guy does not know what he is talking about. Which is kinda important when people try to claim he is worth listening to.


Just nitpicking, but any irrational belief is mockable, if not the people holding it themselves. And when people holding risible beliefs speak from authority, it is not unfair to cite those beliefs.


There, but for the grace of God, go I. (And I repeat that I don't know of an instance where Yudkowsky has ever expressed fear that he will be revived by a future AI and tortured. This sounds like it's referring crudely to the memetic hazard known as Roko's Basilisk, which he has explicitly stated he doesn't believe in.)


You are right, I misremembered that.


Yudkowsky is the most well-reasoned author I have ever read.


I agree with some of the below comments that there is more hype than reason. For me the observation is from a different starting point: progress in mathematics is not increasing exponentially, and personally I feel that revisiting parts of mathematics is more enlightening than the current trends in AI.

NNs, for example, are mostly just an application of linear algebra, where design determines the nodes and training determines the weights; and then the "decision making" is done by a notion of product. Very useful, but not at all superintelligence.
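To make that concrete, here's a minimal sketch in plain numpy (layer sizes and weights are illustrative stand-ins, not a trained model): the forward pass is nothing but matrix products plus a nonlinearity, and the "decision" is an argmax over the final product.

    import numpy as np

    rng = np.random.default_rng(0)

    # Design fixes the shapes; training would fix the values.
    # Here the weights are random placeholders.
    W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)   # input dim 4 -> hidden dim 8
    W2, b2 = rng.standard_normal((8, 3)), np.zeros(3)   # hidden dim 8 -> 3 classes

    def forward(x):
        h = np.maximum(x @ W1 + b1, 0.0)   # linear map, then ReLU
        return h @ W2 + b2                 # another linear map: class scores

    x = rng.standard_normal(4)             # one input vector
    print(forward(x).argmax())             # the "decision" is the biggest product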


My feelings are similar. Current AI seems to be the infinite monkey theorem put into practice. I'm not saying it doesn't work, it obviously does, but I wonder what its limitations are. When do we start running out of monkeys, to the point that only nation states, and companies with the resources of one, are able to achieve the next level?


Viewing math as a delimiter of intelligence only works if you can also factor the requisite skills of intelligence and know that math is sufficient to achieve it, which we can't.

The grand AI question has always been, "How much cognitive power + information is needed to implement strong AI/AGI?" The power of linear algebra can't answer that since it measures only one of the two criteria, and only some aspects of cognition. The information threshold to achieve HAL 9000 remains a mystery, not to mention the necessary aspects of the facility to acquire it, namely learning.


Quite a good write-up, I thought; especially interesting about the OpenWorm project and their successfully funded Kickstarter [1].

[1] https://www.kickstarter.com/projects/openworm/openworm-a-dig...


So what's the evidence that exponential growth of AI will continue at this rate for the next few decades?


Nothing conclusive, but:

1. If we look at the rate of change over the last few thousand years, it doesn't seem completely unreasonable to extrapolate that forward.

2. It's hard to argue that no more breakthroughs can possibly be made in physical computing architecture or computer science, though admittedly it's equally hard to argue breakthroughs definitely will be made. However...

3. Modern industry is heavily incentivized toward AI advancement, because of the optimization capabilities it provides. Where such incentives exist, barriers tend to dissolve.


It doesn't really matter whether it continues exponentially, even if the singularity is hundreds of years away the dangers are mostly the same.


The exponential hype?


Pure hype.


> Imagine taking a time machine back to 1750—a time when ... all transportation ran on hay

wait, what? weren't ocean-going ships using wind power in 1750? come to think of it, weren't ships using wind power in 1750 BC?


In my view, we should avoid terms like "progress" and "intelligence" in these conversations, since they make implicit value judgments and tend to draw us irresistibly into certain unproductive patterns of argument. The topic of evolution has a similar pitfall.


> "In order for someone to be transported into the future and die from the level of shock they’d experience, they have to go enough years ahead that a “die level of progress,” or a Die Progress Unit (DPU) has been achieved."

I suspect that it is no longer possible to achieve a DPU going forward. We now know that amazing things have been achieved and will continue to be achieved. Plus we have science fiction that can conceive of almost any possibility, from intelligence at sub-atomic scale to creatures the size of galaxies, parallel universes, humans capable of controlling time and space by mere thought etc etc.

Based on recent history, I expect the future to be unrecognisable!


Why assume intelligence is computable?


Some thoughts:

Progress is not like a liquid that increases continually with research. Progress is an unknown function, composed of a huge number of discrete "discoveries". Each discovery is hidden behind research of variable difficulty. Each discovery may or may not enable more discoveries. Reality may hold either a finite or an infinite number of discoveries waiting to be made; we don't know for sure. This universe could have a finite number of discoveries, but it is possible that we may find a way to travel to infinite other universes, and those may have infinite discoveries waiting to be made.

The shape of the technical-progress function depends on these unknown factors, so it is wrong to assume that it is an exponential, although an exponential seems like a good approximation around the point where we currently are. The population has been growing, and the economy has been growing. These two factors have enabled an increased amount of resources dedicated to research. The more research being done, the higher the probability of making the discoveries available at the current technological level. The speed of progress depends on the amount of research and on the number of latent discoveries hidden in our reality.
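A toy simulation of that model (every number here is invented, purely to show the shape): research effort grows with the population, each researcher-year has a fixed chance of hitting one of the discoveries still left in a finite pool, and the output looks exponential early on, then bends as the pool thins out.

    import random

    random.seed(1)
    POOL, HIT_RATE = 200, 0.01      # latent discoveries; chance per researcher-year
    researchers, found = 10.0, 0

    for year in range(101):
        for _ in range(int(researchers)):
            # chance of a hit scales with the fraction of discoveries remaining
            if random.random() < HIT_RATE * (POOL - found) / POOL:
                found += 1
        researchers *= 1.05         # growing population/economy
        if year % 20 == 0:
            print(year, int(researchers), found)
    # Early decades look exponential-ish; later ones flatten as the
    # remaining-discoveries factor (POOL - found) / POOL shrinks.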

We are approaching the point at which we can build artificial general intelligence. This will be a machine similar to a human mind, but capable of dramatically faster reasoning. Its internal dialogue will be millions of times faster than a human mind's, because it will move at electrical speeds instead of biological speeds. It will also have practically perfect and unlimited memory (compared to human capabilities), and an almost instantaneous ability to resolve mathematical calculations of reasonable complexity. With these improvements, it can be expected to be much more effective at making discoveries than a human. Additionally, these machines will be industrially replicable, so it will be possible to put a large number of them to work on problems. It is reasonable to expect that these machines will work through the chain of available discoveries faster than humans.

These artificial machines will maximize discoveries from the chain of discoveries available. What this is going to mean depends on the actual, unknown number of discoveries available in this universe. If things like nanotechnology, molecular machines, biological machines, etc., are actually possible, these intelligent machines are well equipped to discover them dramatically faster than we humans would. If there is new physics available to be discovered, these machines have a much better chance than humans of discovering it.

Will machine superintelligence actually create a singularity? Maybe, or maybe not. It depends on the number of discoveries still to be made in this universe, and on their level of difficulty. It could be the case that our universe is running out of hidden discoveries, so any prediction of the shape of the curve of progress is pure speculation. For example, we could have already run out of exploitable significant discoveries in physics. Or we could be on the verge of discovering faster-than-light/instantaneous communications, and lots of other things.

In my opinion the invention of artificial general intelligence, and then superintelligence, is imminent; a matter of years. I base this on introspective observation of the thinking process of my own mind, and on comparing it with the operation of artificial neural networks. They show similarities. The thinking process of the mind is entirely reproducible with deep learning networks assembled in the right structure. An interesting question is who is doing this research. Obviously the big tech corporations are working on it, but are state organizations also working on it? Who is making the biggest investment? Who has the best odds of inventing it? What will happen when someone gets it? Are they going to immediately announce it?


Can we put (2015) in the title? Maybe replace the publication name, which is visible twice. This was big when it was published, now slightly outdated.


Also put "AI" in the title. Upon reading the title, I thought it was about genetics/breeding a superhuman.


This is HN. This gives AI a slightly higher Bayesian prior than genetics.



