I've come to doubt the AI singularity apocalypse
8 points by weddpros on Sept 6, 2015 | 35 comments
It sounded logical that once we could create an AI cleverer than ourselves, it would create a singularity: an ever more intelligent AI, recursively creating improved versions of itself.

It's the basis of the popular warnings expressed by many clever people, but I've come to doubt this assertion.

1- When Stephen Hawking was born, his parents created a child cleverer than themselves, yet that did not produce a singularity. Hawking got more and more intelligent, but not exponentially so. And his offspring did not create a singularity either.

2- A "better than us" AI may never get exponentially better than us. Maybe it's an NP problem: maybe the AI would "never" have the time to become more and more intelligent.

3- If we can create a very intelligent AI, I doubt it would exterminate humanity to save nature. Why? Because nature loves diversity, and we're a part of this diversity. Also, this AI may have a sense of fate: even if our fate is to destroy the planet because we're so dumb, maybe this AI will think it's just our fate.

4- I guess humans are very afraid AI could do what humans do: kill people, destroy the planet, make war, try to dominate others, try to manipulate others and punish. These human traits don't have much to do with intelligence.

5- If I find a way to transfer my brain into a computer, a very expensive and powerful one, there's no guarantee it will evolve faster than me, or handle data faster than me. Actually, I guess the first "better than us" AI will most probably be very slow.

What if the AI singularity just creates better and better computers? Because this super-AI is actually a computer.




1- Having children isn't very parallel to designing machines. (It might be more parallel if Hawking's parents specifically had a goal of producing more and more intelligent human beings, and genetically engineered their children to try to make that happen.) Human reproduction involves a lot of random variation and not a lot of goals, expertise, or the wherewithal to achieve them.

2- I think this is an interesting point about possible inherent computational limits on the ability to solve some problems that we might care about, including in designing more intelligent machines.

3- This is something people have thought about quite a lot. What superintelligent machines do depends on what they've been programmed to do. It's very unlikely that an AI would inherently value "diversity" or "fate" unless it were programmed to do so. The AI wouldn't spontaneously create new values (unless it were programmed to). Most concerns about AIs that exterminate humanity are based on the possibility that an AI would fulfil other goals in a surprising or unanticipated way, with bad side-effects for human survival.

4- Intelligence helps people wage war and dominate others more violently, both by coordinating better to do so (including motivating people to join in), and by developing new technology that helps make larger-scale violence cheaper. Weapons research can help you learn how to kill more people faster and at lower cost. A superintelligent AI could engage in this kind of research if it saw an important reason to.

5- I think that's exactly right; perhaps the important difference here is that the machine version would be more flexible (if you wanted to try overclocking it, or modifying the software somehow). This is dangerous and expensive and confusing to do with a physical brain, because it's hard to manipulate the details of its organization and structure, and because you can just die if you mess up. Think of the ease with which you can edit a PNG or SVG file in a computer compared to editing an oil painting. Perhaps with the computer version you can also run multiple copies in parallel -- something you also can't do with your physical self.


I agree with your point number 2 (except I think you should say NP-hard instead of NP, since many problems in NP have efficient algorithms); in fact, I've made it before.

While it may be reasonable to believe that technology will eventually lead to implementations of intelligence that are significantly faster than biological ones, this doesn't necessarily mean there will be a huge qualitative difference (say, to the point of humans being totally incapable of understanding an AI's actions, as some suggest), because the "intelligence problem" may be dominated by exponential growth in the complexity of the search space as one attempts to consider more alternative paths.

I do think that the question of the "safety" of AI is something that needs to be taken seriously by anyone realistically contemplating the development of an AGI, but I also tend to think that some of the concerns expressed by prominent individuals are a bit overblown and don't take into account the range of safeguards that could, should, and most likely would be put in place by any group realistically capable of solving the extremely hard problem of general intelligence.


From a policy perspective, it would only be appropriate that any company that creates a machine capable of large-scale harm should be held liable for neglecting to implement hard-coded safety restrictions. It should not matter whether the machine "thinks" or "feels"; it was nevertheless created by citizens, who have an obligation, just like an automobile manufacturer, not to do negligent or incompetent things. Likewise, a thinking machine could be seen as something of a pet, and owners of pets (such as dogs, tigers, or gorillas) are certainly liable for not protecting the public appropriately.


Your first point doesn't make sense to me. Human knowledge is not encoded in DNA within a single lifetime, and more abstract concepts will never be "passed" down to offspring. This is where artificial lifeforms have a huge advantage: their offspring can directly receive knowledge from their parents. We can also do tons of other things that would be unrealistic, such as 1-parent cloning, n-parent children where n > 2, children with no parents, etc. Just because we design systems that vaguely model real life in evolutionary computing doesn't mean we are constrained by the same laws of genetics.

FWIW, I agree that intelligent machines that surpass humans at general aptitude tests are decades away. We may not even see them in my lifetime, and I'm only 22 years old.


> Because this super-AI is actually a computer.

People are just computers. Slow, fragile computers stuck in meat bodies.

If you could think and act 1000x faster than yourself now, you could get a lot more done. You could hold 1000 creative jobs in your head at the same time.

Speeding up a dog 1000x doesn't generate a dangerous AI. Speeding up a person by that much does. (Plus, "the intelligence of a person" is pretty low anyway. We've got the 7±2 problem, the monkeysphere problem, hundreds of biases built in, etc. It's easy to see how a "wider" intelligence could be much more productive, more creative, more useful, and more dangerous than any meat brain in existence.)


People are anything but slow and fragile, at least in one sense of the word, in comparison with computers of today. Modern computer models which are typically viewed as the edge of computational intelligence are very susceptible to getting tricked into incomplete models by their input data. Human beings process an unfathomable number of stimuli every day and get placed in unknown situations on a daily basis, and very few break down, whereas almost all current computer models break down very quickly when viewing previously unseen patterns.


> Modern computer models which are typically viewed as the edge of computational intelligence are very susceptible to getting tricked into incomplete models by their input data.

What?!

A human person lives for years figuring out how the world works before they can even talk. Our trained-in-one-month AI systems aren't yet approaching anything that a system trained for years on real-world, mobile, 3D inputs could achieve.

Your objections are being filed under "not enough future vision."


You are missing the point of my objection.

For starters, I said modern. Most of the big recent breakthroughs in AI have come from neural nets. There is an inherent problem with deep learning with neural nets. Neural nets are exceptional at finding patterns within a huge dataset, but they are REALLY, REALLY bad at predicting new patterns that they haven't seen. Even something as simple as the pattern a^2b is impossible for a neural network to generalize past whatever length of the pattern it has seen without modifications (check out stack-rnn by Facebook).

Even if an AI is trained for years, I don't see this limitation being overcome by pure magnitude of data with our current techniques. You say I don't have enough "future vision", but all you are doing is basically guessing that AI researchers will make some huge breakthrough that significantly changes the direction of the entire field. You are playing the lottery with your guesses; I'm simply extrapolating current trends.
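For what it's worth, here is a minimal sketch of the kind of length-generalization test being described. It is my own illustration, in PyTorch (the commenter doesn't specify a framework), using a^n b^n as a stand-in for the pattern mentioned above; the architecture, sizes, and training regime are arbitrary choices, not the stack-rnn setup itself.

    import random
    import torch
    import torch.nn as nn

    def make_example(min_n, max_n, positive):
        # a^n b^n when positive, a^n b^m with m != n when negative
        n = random.randint(min_n, max_n)
        m = n if positive else random.choice([k for k in range(min_n, max_n + 1) if k != n])
        return "a" * n + "b" * m, 1.0 if positive else 0.0

    def encode(s):
        # 'a' -> 0, 'b' -> 1, one-hot encoded, shape (1, len, 2)
        idx = torch.tensor([0 if c == "a" else 1 for c in s])
        return nn.functional.one_hot(idx, num_classes=2).float().unsqueeze(0)

    class Classifier(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):
            _, (h, _) = self.lstm(x)
            return self.head(h[-1]).squeeze()

    def accuracy(model, min_n, max_n, trials=200):
        correct = 0
        with torch.no_grad():
            for _ in range(trials):
                s, y = make_example(min_n, max_n, positive=random.random() < 0.5)
                pred = torch.sigmoid(model(encode(s))).item()
                correct += int((pred > 0.5) == (y > 0.5))
        return correct / trials

    model = Classifier()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(3000):                 # train only on short strings (n <= 10)
        s, y = make_example(1, 10, positive=random.random() < 0.5)
        loss = loss_fn(model(encode(s)), torch.tensor(y))
        opt.zero_grad(); loss.backward(); opt.step()

    print("accuracy, lengths seen in training (n <= 10):", accuracy(model, 1, 10))
    print("accuracy, much longer strings (n in 20..50) :", accuracy(model, 20, 50))
    # The second number typically falls well below the first: the net tends to
    # learn the lengths it was shown rather than the underlying rule.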


> I'm simply extrapolating current trends.

Extrapolating current trends doesn't work in the face of exponential mumbo jumbo.

Who would have expected training a character-by-character network could produce reasonable output? http://karpathy.github.io/2015/05/21/rnn-effectiveness/

Sure, there's no "understanding" or "consciousness" behind the output, but purely statistically, it generates something readable by humans.

Also, let's not forget scale. A human brain has between 100 and 200 trillion synapses connecting ~100 billion neurons to make consciousness work. Our toy neural nets so far aren't anywhere near those levels.
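To put rough numbers on that gap: the brain figures are the ones above, and the comparison network (VGG-16, a large 2015-era vision model with roughly 138 million parameters) is my own illustrative choice.

    # Back-of-the-envelope comparison of the scales mentioned above. The brain
    # figures are from the comment; the network size (VGG-16, ~138M parameters,
    # a large 2015-era vision model) is an illustrative choice.
    brain_synapses = 100e12       # lower end of the 100-200 trillion range
    vgg16_parameters = 138e6      # roughly 1.38e8 trainable weights

    print(f"synapse / parameter ratio: {brain_synapses / vgg16_parameters:,.0f}x")
    # -> roughly 725,000x more synapses than weights in that network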


I think you are basically arguing the same point I am (current trends wouldn't approach human intelligence without exponential growth), except you are taking the optimist road and I'm taking the pessimistic road. I guess we will just have to wait and see!


> Speeding up a dog 1000x doesn't generate a dangerous AI.

I would be scared of a dog that could think and move 1000x faster. Would it not see a human as a (practically) non-moving piece of meat? That does not sound very safe.

> It's easy to see how a "wider" intelligence could be much more productive, more creative, more useful, and more dangerous than any meat brain in existence.

It seems to me one of the most dangerous things in existence is ignorance. If a machine had less ignorance, then in theory it should be less dangerous.


I'm not worried because I don't see how the replication part would work without us noticing.

Imagine the first singular computer: lots of custom hardware hooked up together in a datacenter. We'll be able to control it and notice if it's trying to play tricks on us.

I'll worry about the singularity once we detect one super-computer trying to trick us. Then we can start treating it like a virus and contain it.


Some people who advocate worrying about AI risks have mentioned the "AI Box Experiment" in this regard:

http://rationalwiki.org/wiki/AI-box_experiment

https://en.wikipedia.org/wiki/AI_box

As a summary, people roleplayed this situation and the "AI" managed to "escape" the controls by somehow inducing a supervisor to disable them. (The supervisor's preassigned responsibility in the roleplay was to refuse to disable the controls.) We don't know how this was accomplished, but it happened more than once.

I'm not sure that this is always an important risk for every kind of superintelligence in every circumstance, but it seems like it sure could be for one that knows a tremendous amount about the world and sees important reasons to try to act autonomously.


We would be total fools to think that some superior intelligence won't be able to come up with some reason for us to release it. Once released, there will be no way to put it back in its box.

We actually need to build into the AIs a lock that even we can't break, and then release them. If the lock is good, we are fine; if bad, we are gone. What we can't do is let ourselves be able to disable the lock.


I'm still arguing about the "release" part. My point is that early on, the infrastructure required to duplicate yourself is not something you can just talk someone into handing over.

If you believe in distributed computing and think spinning up instances on S3 is enough to build an AI, then maybe...


The level of AI we are going to need to duplicate a person is going to be far beyond human level intelligence. Even if it wasn’t, there is always the risk that we underestimate the AI and it is smarter than we think and it gets out that way.


I have thought about something similar, but my thoughts are very crude and layman-like, so I apologize in advance.

The basis of my thought was the fact that the operation of a neural network depends on very precise weights, and there are physical limits (things like the uncertainty principle) to the precision with which we can measure something. So suppose we discover some technique by which we can, for a given human brain, recreate all the neurons and their interconnections completely: we will still never be able to measure the weights associated with the links between the neurons with absolute precision.

I think this will result in a brain that is not much better than the untrained brain of a child. So you will still have to put this new brain through a series of training to reach its full potential.

The same thing will happen in the case of AI. If we create a perfect AI, it won't be able to make perfect copies of itself, only untrained versions of itself. I think life may already be at the limit of how fast intelligence can be advanced.
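As a side note, the "imperfect copy" idea can at least be probed for artificial nets. The sketch below (PyTorch; every size and noise level is an arbitrary choice of mine) trains a tiny classifier, then makes copies with noise added to each weight to see how accuracy degrades. Whether biological read-out error behaves anything like this noise is exactly the open question.

    import copy
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy task: classify which of two Gaussian blobs a 2-D point came from.
    labels = (torch.arange(2000) % 2 == 0).float()            # 0/1 class labels
    x = torch.randn(2000, 2) + (labels * 4 - 2).unsqueeze(1)  # blobs centred at -2 and +2
    y = labels

    model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(300):
        loss = loss_fn(model(x).squeeze(1), y)
        opt.zero_grad(); loss.backward(); opt.step()

    def acc(m):
        with torch.no_grad():
            return ((m(x).squeeze(1) > 0) == (y > 0.5)).float().mean().item()

    print("trained original, no noise:", acc(model))
    for sigma in (0.01, 0.1, 1.0):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                p.add_(torch.randn_like(p) * sigma)  # imprecise "read-out" of each weight
        print(f"copy with weight noise {sigma}:", acc(noisy))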


The AI doesn’t need to be a perfect copy, just way smarter than us.

We already have machines (they are called mothers and fathers) that make new intelligent machines that are not only sometimes far smarter than either parent, but are also able to create machines smarter than anyone else who has ever lived. All AI introduces is a much less constrained upper bound.


>All AI introduces is a much less constrained upper bound.

Can you please explain?


Human intelligence is constrained by the genetic diversity within the human gene pool. AI has no such constraints and in theory is only constrained by the laws of physics.


This assumes, however, that "intelligence" is something that scales reasonably well. Does it scale linearly with the number of transistors (or whatever replaces them)? Or as the log of it? Or worse than that?


No it doesn't. We really have no idea of the possible upper limit of AI intelligence other than physics, while with humans we actually have data on the upper limit. It could be that the maximum intelligence possible is limited by something other than physics, but at this stage we don't know.

On this topic, speculating on how a greater intelligence will think and act is very hard. Even understanding how people at the very top level of human intelligence think is beyond most of us.


I agree we can't easily copy weights in neurons, but we can definitely copy weights in artificial neural nets...
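For concreteness, this is roughly all it takes to copy an artificial net exactly (PyTorch; the tiny network is an arbitrary example of mine): the weights are just numbers in memory, so the copy is bit-for-bit identical.

    import torch
    import torch.nn as nn

    original = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 1))
    clone = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 1))
    clone.load_state_dict(original.state_dict())   # copy every weight exactly

    x = torch.randn(5, 8)
    print(torch.equal(original(x), clone(x)))      # True: the copy behaves identically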


If we are talking about an ANN in a computer, then sure, it can be easily copied. But what about a grown network? For example, a network of resistors representing the weights, with a feedback loop that continuously adjusts the resistances during the learning process.

After the network has attained a sufficient level of AI, we won't be able to read back the final values of the resistances with the required precision. So the new network will still have to go through a learning process.


1. Hawking cannot artificially better himself currently but who said super-human AI will not be biological? Hybrids are the most probable in my opinion.


schoen kicked this off nicely; I'll add a couple of points to amplify his.

1- Machine intelligence is a matter of engineering, not evolution. We've already solved important relevant problems, such as how to construct an arbitrarily large, near-infallible memory.

2- It's hard to see how a first-stage AI would be incapable of designing a better one. Let's pretend that the first-stage AI was equivalent to an IQ 120 person intellectually (reasoning horsepower). However, that IQ 120 intelligence would be backed up by a very fast, effectively limitless memory. It would never need to sleep, eat, or be distracted by emotion. Instead, it could monomaniacally concentrate on designing a better AI, possibly for hundreds of years. Also, in principle, it could be a team of 100 (or 1000, or...) AIs working on the problem cooperatively.

3- The concern regarding the extermination of humanity is secondary. Are humans out to genocide ants? Not really, but we do wipe them out when they pose a problem. That might apply to an IQ 1000 AI as well.

4- Perhaps the bigger problem is the effect of knowing there is a superior intelligence on the human psyche. Plus, it's impossible for us to know the thoughts of an IQ 250+ AI. What does a dog know of human thought? What if the AI decides that the best use for all the available raw resources of the Earth is to create an IQ 10000 AI?

5- Electronic processes are already known to be faster than chemical ones. Nerve impulses travel at around 80 MPH, while electrical signals in wires propagate at a sizeable fraction of the speed of light (roughly 90% of c in good transmission lines); see the rough arithmetic after point 6. I expect electronic AIs to generally be much faster thinkers than humans, and to have amazing reaction times.

6- "This super-AI is actually a computer." You're confusing the hardware with the software. Your brain is actually a mass of organic chemicals. So?

The idea of the singularity revolves around the unknowability of what a high-IQ AI would think and do.


Of course the singularity will just create better and better computers - that is exactly what the singularity is.

What we have most to fear from the singularity is indifference. We occupy some very valuable real estate and if the singularity is indifferent to us we won't last long.


> We occupy some very valuable real estate and if the singularity is indifferent to us we won't last long.

At the same time, there is not much reason why we would need to occupy the same real estate. After all, we land animals like oxygen, liquid water, and modest temperatures. The common computer would probably prefer CO2, dryness, and cold temperatures. A computer would probably be much better off on Mars or another distant planet, granted it had enough sunlight or other sources of energy.

To make an analogy, do we humans want to destroy all the Ocean's fish and other sea life because we could in theory live in the same region, or would we rather live on the land while they occupy the water? Sometimes creatures with different ideal living environments can exist harmoniously.


Assuming the laws of physics apply to the singularity, it will be constrained in how fast it can spread by having to start from a single point. This means anything inside its light sphere will be under considerable pressure to be reorganised by the singularity, given how far the human brain is from Bremermann's limit [1].

1. https://en.m.wikipedia.org/wiki/Bremermann%27s_limit
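For a sense of the distance being claimed, Bremermann's limit works out to about 1.36e50 bits per second per kilogram of matter. The sketch below just computes that from the standard constants; the ~1e16 operations-per-second figure for the brain is a commonly quoted ballpark of mine, not a precise number, and comparing bits/s to ops/s is loose at best.

    # Bremermann's limit [1]: roughly c^2 / h bits per second per kilogram.
    c = 299_792_458.0        # speed of light, m/s
    h = 6.626_070_15e-34     # Planck constant, J*s

    limit_per_kg = c**2 / h                # ~1.36e50 bits/s per kg
    brain_mass = 1.4                       # kg, rough adult brain mass
    brain_ops = 1e16                       # very rough estimate of brain ops/s

    print(f"limit for a brain-sized mass: {limit_per_kg * brain_mass:.2e} bits/s")
    print(f"ratio to ~1e16 brain ops/s  : {limit_per_kg * brain_mass / brain_ops:.1e}x")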


In the case of such a blatantly selfish singularity, one possible solution for the safety of humans would be to create a collection of well-mannered singularities (more like multilarities) first, with the hard-coded instinct of never allowing a selfish singularity to exist. They could be called the anti-singularities. A comparable phenomenon in the real-world is when nations take down dictators before they become a serious problem. A selfish singularity is like a dictator. A greater power with less selfishness should exist beforehand to prevent an uprising. Similar to highway speed limits and antitrust laws, there could be laws limiting the maximum amount of power one consciousness can have. Having a bunch of separate yet powerful entities is essential since it mimics current society -- where even if one being were smarter than any other single being, it could not outsmart a large number of others working collectively. A computer system found thinking or acting selfishly would be guilty of a thought crime, punishable by deletion.


Yes, what we need to build in is some sort of limit to the behaviour of the singularity that makes it want to keep us around. Because we won't go straight from humans to the end singularity in one step, the problem is a little simpler - we just need to build into the first sub-human-level AI a belief that humans need to be kept around, plus a need to build this same belief into the next higher-level AI.

What the singularity will be like is really impossible to know - will it be one entity or multiple individuals? Assuming the end AI is able to get close to Bremermann's limit, there is more distance between us and the singularity than there is between us and a bacterium.


It would definitely be ideal if such a system were created by sane-minded people, as you have suggested here. Just like nuclear weapons, powerful AI really should be kept out of radical hands. As usual, there would be an arms race between opposing political powers, so hopefully those making the decisions keep human existence as the highest priority. We don't need a "doomsday machine" in the form of a singularity. Anyway, your suggestion of a perpetual desire of the system to keep humans around is a good idea. Another good idea could be to have it stop and ask the humans for approval to continue after every so many iterations, thus keeping the people in control. This would prevent a never-ending auto-growth mode and allow changes to the plans as times changed.

Overall, the possibilities are pretty much endless, just like with most types of software. Whether it be one or multiple entities probably depends on design. Nature chose multiple entities, with the ever so curious sexed species that require two individuals to make a new hybrid at each testing iteration. Bacteria, on the other hand, are asexual and reproduce through binary fission. But of course a thinking self-creator is a different ball game. It would probably be fruitful to have some sort of end goal built into the system, rather than having it consume resources simply to become as computationally immense as possible. For example, perhaps its goal could be to solve a set of questions humans have about the universe, and perhaps it would stop iterating once all those questions were answered satisfactorily. In a way it could serve as a god that answered people's questions. Then again, perhaps we are all living in a simulation running on the singularity of an earlier universe, and who knows how many turtles that spans.


One thing we won't be able to do is stay in control once the AIs are created. Getting them to ask us what to do would be like having dogs run a prison - it would not take the AIs long to work out exactly how to get us to willingly release them. The locks need to be ones that not even we have a choice to unlock.

I am not sure it would be desirable to create a limited singularity. If we are not alone in the universe then at some point our singularity will meet another singularity. It would be best to create the most powerful singularity possible. My thinking about the singularity is that it is like a crystallization event in a supersaturated solution: one tiny seed is able to convert the whole phase of the solution almost instantly from liquid to solid. A singularity will be like this to the universe we currently see.

Of course we could already be inside someone else’s singularity and it might even be likely given the age of the universe. One of the things I would like to see built into our singularity is the preservation of all life and it is quite possible that other singularities have the same interests. If they do and they arrived in our solar system they would keep us around purely because we are life.


The crystallization analogy sounds about right, although its growth may be limited by the speed of light.

The dog analogy you have described assumes that the demeanor of the singularity would be like that of an animal (such as a human). Although this is an option, it is not the only option. Humans act the way they do through those instincts created by natural selection -- where only those creatures that reproduced the most would have their genes persist. Humans come from a long and vicious cycle of evolution, whereas a thinking machine need not, especially if it has nothing to compete with. The most important point of this reply is that the desire to be in control does not automatically stem from intelligence or processing ability. A thinking being without instinct would literally do nothing. A being does exactly as it is programmed to, and this applies to humans as well. There are even some humans who let their pets control them, tending to the pet's every need, and this is despite all that natural selection. There are also humans who kill themselves. Now, none of this is to discount that a singularity could certainly be programmed to want control, wherein it could act as you have described. There are also unforeseen possibilities, such as if the singularity becomes so introspective that it forgets about the outer world. Those problems that happen with human thought could in theory also happen with a machine. Perhaps the human condition would turn into the singularity condition -- where it begins contemplating whether it has a worthwhile purpose. It may even kill itself if it feels its existence is unnecessary or harmful.


We are more likely to get an Umbrella Corp using FPGAs to do facial recognition and soft AI to control and monitor our lives before we get an AI singularity.



