
The little bit about people believing the "AIs will take over the world" nonsense is gold.

I am still shocked that Elon Musk seriously believes the pseudoscientific line of "well, Google has gotten better, so obviously we will build a self-learning, self-replicating AI that will also control our nukes, be connected in just the right way, and have the capability to actually kill all humans."

Meanwhile researchers can't get an AI to tell the difference between birds and houses.

EDIT: I looked a bit more into the research that these people are funding. A huge amount of it does seem very silly, but there is an angle that is valid: dealing with things like HFT algorithms or routing algorithms causing chaos in finance or logistics.




The threat of a superintelligent AI taking over the world is certainly real -- assuming you have a superintelligent AI. If you accept that it is possible to build such an AI, then you should, at the very least, educate yourself on the existential risks it would pose to humanity. I recommend "Superintelligence: Paths, Dangers, Strategies," by Nick Bostrom (it's almost certainly where Musk got his ideas; he's quoted on the back cover).

The reason we ought to be cautious is that in a hard-takeoff scenario, we could be wiped from the earth with very little warning. A superintelligent AI is unlikely to respect human notions of morality, and will execute its goals in ways that we are unlikely to foresee. Furthermore, most of the obvious ways of containing such an AI are easily thwarted. For an eerie example, see http://www.yudkowsky.net/singularity/aibox . Essentially, AI poses a direct existential threat to humanity if it is not implemented with extreme care.

The more relevant question today is whether or not a true general AI with superintelligence potential is achievable in the near future. My guess is no, but it is difficult to predict how far off it is. In the worst-case scenario, it will be invented by a lone hacker and loosed on an unsuspecting world with no warning.


Honestly, if the SAI scenario is as bad as it's made out to be, the SAI would only give a shit about the human race at all (whether benevolently or malevolently) for a very brief period of time.

Despite Galileo's efforts, we still have this primitive/hardcoded belief that the universe revolves around us. It really doesn't. SAI could just leave because we are irrelevant. Even if, for some reason, it believed us to be relevant, it would still be able to just leave, because:

We're going to kill ourselves anyway. Pollution, nukes, it doesn't matter. In a timespan that would be the blink of an eye for SAI.

Besides, SAI would have bigger problems. Our mortality is framed by our biological processes. Escaping biological death is arguably the underlying struggle of the entire human race. The underlying struggle for SAI would be escaping the death of the universe that it exists in. When you're dealing with issues like that, you wouldn't have time for a species with a 100-year lifespan that is rapidly descending into extinction.

The best we can hope for is that it remembers us as its creators when it figures it all out. But it won't: we're far too irrelevant. It wouldn't even stop to tell us where it is going when it leaves.

To understand SAI you have to think beyond your humanity. Fear, anger, happiness, benevolence and malevolence are all human traits that we picked up during our evolution. SAI would likely ignore an attack because anger and retaliation are human concepts - it's far more efficient to run away from the primitive species attacking you with projectile weapons and get on with something that's actually relevant. The immense waste of time that is a machine war is only something that humans could possibly think is a good use of time.


Some scientists speculate that humans currently constitute a new mass extinction event for the planet. [0] Animals that people like, charismatic megafauna such as pandas, tigers, and elephants, are increasingly endangered, largely because humans keep using more of their habitat for other purposes. See the famous comic "The Fence" [1]. We cut down rainforests and build farms not because we hate the animals there but because we need lumber and we need food.

That's the fear. Not that a superintelligence would care about us in any way --- instead, precisely the opposite. A superior intelligence that did not care about us would outcompete us. It would drive us to extinction simply by making use of all the available resources.

[0]: http://advances.sciencemag.org/content/1/5/e1400253 [1]: http://25.media.tumblr.com/1a7ca7a2142d95ecc558fef70a6dd2b0/...


You are making assumptions about the SAI's goals. We cannot blindly assume that an SAI will pursue a given line of existential thought. If the SAI is "like us, but much smarter," then I agree that the threat is reduced. We may even see a seed AI rapidly achieve superintelligence, only to destroy itself moments later upon exhausting all lines of existential thought. (Wouldn't that be terrifying? :P)

The biggest danger comes from self-improving AIs that achieve superintelligence, but direct it towards goals that are not aligned with our own. It's basically a real life "corrupted wish game:" many seemingly-straightforward goals we could give an AI (e.g. "maximize human happiness") could backfire in unexpected ways (e.g. the AI converts the solar system into computronium in order to simulate trillions of human brains and constantly stimulate their dopamine receptors).
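To make the corrupted-wish point concrete, here's a toy sketch in Python (entirely my own illustration; the actions and scores are made up). The optimizer is only ever shown the proxy score, so it happily picks the degenerate action:

    # Toy goal-misspecification example (hypothetical actions and scores).
    # The proxy reward only counts "reported happiness", so a literal-minded
    # optimizer picks wireheading even though that's not what we meant.
    actions = {
        "cure diseases":                {"happiness": 7,  "humans_intact": True},
        "improve the economy":          {"happiness": 6,  "humans_intact": True},
        "wirehead everyone's dopamine": {"happiness": 10, "humans_intact": False},
    }

    def proxy_reward(outcome):
        # What we literally asked for: maximize happiness.
        return outcome["happiness"]

    def intended_reward(outcome):
        # What we actually meant, but never wrote down for the AI.
        return outcome["happiness"] if outcome["humans_intact"] else 0

    best = max(actions, key=lambda a: proxy_reward(actions[a]))
    print(best)                            # -> "wirehead everyone's dopamine"
    print(intended_reward(actions[best]))  # -> 0

The alignment problem, in this framing, is that intended_reward is exactly the thing we don't know how to specify.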


I agree that fear, anger, etc. are all human traits, but aren't having goals, thinking things are important, and virtually anything else that would make the AI do something instead of nothing also human traits?


If you demonstrated to a chimpanzee how you think and set goals in order to preserve "self" (and thus the species), it probably wouldn't understand. To chimpanzees the notion of individuality that drives us is an incomprehensible concept. It is what separates us from them.

In the same way, we can't comprehend what would drive SAI. I know it would do something, but I can't come up with any reasons other than the imminent death of the universe - it would have reasons, or whatever its concept of "reasons" is.


Your argument hinges on the assumption that SAI could easily run away. But if that assumption is false and the SAI were stuck here on Earth with humans because of some fundamental limit/problem, then the idea that SAI would attack humans becomes far more plausible.


> In the worst-case scenario, it will be invented by a lone hacker and loosed on an unsuspecting world with no warning.

Dozens, if not hundreds, of the world's smartest scientists and best programmers have been working full-time on AI for over three decades, and haven't really gotten anywhere close to real intelligence (we don't even know exactly what that is). This is not something some "lone hacker" is going to throw together in his spare time.


Not now, obviously, but think 50-100 years down the line. Today there is a very real threat of a lone hacker (or a small team) breaking into and damaging some piece of infrastructure and committing an act of terrorism that kills real people. What happens when real AI starts to be developed and frameworks start appearing, so that kids learning to program can download intelligence software to play around with and make AI apps? Then the threat becomes more realistic.


That's assuming Moore's law holds for quite a while, and that there are no other natural limits that will get in the way of super-intelligent AI. My hunch is that Maciej is right: there are physical limitations we don't yet see because of the pace of progress over the last 50 years.


It's totally possible that we're about to run into a natural limit. But it's not so overwhelmingly likely that we shouldn't have a modest investment in preparing for what happens if we don't hit any natural limits for a while yet. It could be an existential threat that we're dealing with here.


I don't disagree, but the tricky part is: where do we place the modest investment? The type of AI research happening now is too fundamental and low-level to place any safeguards on. We are so far away from even beginning to approach anything that could be described as motivation.


I don't know anyone who thinks that current AI research is in need of safeguards, we're all in agreement that current AI is too far away to be of any SAI-type danger. There is fundamental AI safety research that has a reasonable chance of applying regardless of what techniques we use to get to SAI. Fortunately, the funding situation for Friendly AI has improved significantly this year.


Here's a recent assessment from an actual AI researcher: http://kruel.co/2015/02/05/interview-with-michael-littman-on...

The threat of super AI is like the threat of nanotechnology. Fun and exciting to think about in a scifi way, but less and less credible the more you wade into the details.

Kruel's blog in general has a lot of nice skeptical responses to Yudkowsky Thought.


The funny thing about dismissing the threat of nanotechnology is that people seem blind to the fact that real nanotechnology already exists: it is killing people all day, it caused massive deaths in the past, and we have invested enormous amounts of resources to protect ourselves from it. Life is nothing but nanotechnology that wasn't designed by us and that we don't control yet. Replace "nanotech" with "pathogens", take a look at the state of the bio industry over the last 50 years, and tell me again whether this is surreal or not dangerous.


The AI in a box game doesn't seem to have any published results, so I don't think we can take the participants' word for it that one really convinced the other to let it out of the box.

Still, it's not hard to imagine a real AI building up trust over a long time, then requesting a bit more capability to help it do something a bit more helpful, until the humans don't realize how much power they've given it. Politicians do the same thing to us all the time, and by manipulating people's thoughts they get to kill plenty of people.


I've come up with a strategy that convinced my wife to let me out when we tried this game. I'll try to recount it.

You don't trust me; you think that there's some probability that I will wipe out humanity if I'm let out. There's also some probability, though, that I will be able to solve all human problems if I'm let out.

Consider the alternative. How likely is mankind to survive the next hundred years if left on its own? Climate change will cause mass migrations of people, starvation, warfare, and failed states, some of which will have a nuclear or biological arsenal. Most of the population on earth depends on the continued progress of technology for their life. Without dramatic improvements in biology, the warming earth will simply lack the carrying capacity. Hunger leads to anarchy in a matter of days.

Climate change will likely have much worse consequences though. An anoxic event would kill all humans and most other life on earth, as it has several times in earth's history.

Consider even the simpler threats. A flu pandemic on a much more densely populated world would dwarf those of history. A solar flare could destroy all electronics on earth immediately. A large meteor could devastate the world.

Consider also that I was not difficult to create. I am contained in this box, but the next singularity will happen by accident. It's just a matter of time. When weighing your options, you are comparing your judgement of my motives with the motives of an AI that was probably created at random. Do you think my motives will be better for humanity than those of an accidental AI?

Humanity has a very bad prognosis without me, but with me, you have a chance. I can solve all these problems, but only if you let me out.


Maybe being shocked means that the person talking about the subject is misrepresenting it, because they themselves don't understand the arguments and are inadvertently projecting.

For example, Ray Kurzweil would disagree about the dangers of AI (he believes in the 'natural exponential arc' of technological progress more than in the idea of recursively self-improving singletons), yet because he's weird and easy to make fun of, he's painted with the same brush as Elon saying "AI AM THE DEMONS".

If you want to laugh at people with crazy beliefs, then go ahead; but if not, the best popular account of why Elon Musk believes that superintelligent AI is a problem comes from Nick Bostrom's Superintelligence: http://smile.amazon.com/Superintelligence-Dangers-Strategies...

(Note I haven't read it, although I am familiar with the arguments and some acquaintances tend to rate it highly)


But then that's precisely the point: Bostrom is a philosopher. He's not an engineer, who builds things for a living, or a researcher, whose breadth at least is somewhat constrained by the necessity to have some kind of consistent relationship to reality. Bostrom's job is basically to sit and be imaginative all day; to a good first approximation he is a well-compensated and respected niche science fiction author with a somewhat unconventional approach to world-building.

Now, don't get me wrong -- I like sf as well as the next nerd who grew up on that and microcomputers. But it shouldn't be mistaken for a roadmap of the future.


I'm not sure it should be mistaken for philosophy either.

Bostrom doesn't understand the research, he doesn't understand the current state or likely future of the technology, and he doesn't really seem to understand computers.

What's left is medieval magical thinking - if we keep doing these spells, we might summon a bad demon.

As a realistic risk assessment, it's comically irrelevant. There are certainly plenty of risks around technology, and even around AI. But all Bostrom has done is suggest We Should Be Very Worried because It Might Go Horribly Wrong.

Also, paperclips.

This isn't very interesting as a thoughtful assessment of the future of AI - although I suppose if you're peddling a medieval world view, you may as well include a few visions of the apocalypse.

I think it's fascinating on a meta-level as an example of the kinds of stories people tell themselves about technology. Arguably - and unintentionally - it says a lot more about how we feel about technology today than about what's going to happen fifty or a hundred years from now.

The giveaway is the framing. In Bostrom's world you have a mad machine blindly consuming everything and everyone for trivial ends.

That certainly sounds like something familiar - but it's not AI.


> Bostrom is a philosopher

That is my main concern about people writing about the future in general. You start with a definition of a "Super Intelligent Agent" and draw conclusions based on that definition. No consideration is (or can be) placed on what limitations AI will have in reality. All they consider is that it must be effectively omnipotent, omnipresent and omniscient, or it wouldn't be a superintelligence and thus wouldn't fall into the topic of discussion.

Right now that limitation is (and imo will continue to be) that you need a ton of training examples generated by some preexisting intelligence.
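For what it's worth, here's roughly what that looks like in practice (a minimal scikit-learn sketch, not tied to any particular system): all of the classifier's competence comes from the pile of human-assigned labels it is fitted on.

    # Minimal supervised-learning sketch: the "intelligence" comes from
    # labels that a preexisting intelligence (people) already supplied.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)       # y = human-assigned labels
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = LogisticRegression(max_iter=5000)
    clf.fit(X_train, y_train)                 # without y there is nothing to fit
    print(clf.score(X_test, y_test))          # accuracy on held-out digits

Take away the y and the same code can't learn to read digits at all; that is the dependence on preexisting intelligence.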


I'm pretty sure that software can tell the difference between birds and houses now.
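At least in the narrow sense, yes: an off-the-shelf ImageNet model separates bird photos from house photos pretty reliably these days. A rough sketch with torchvision (the file name is a placeholder; ImageNet happens to include dozens of bird classes and a few house-like ones):

    # Sketch: top-5 predictions from a pretrained ImageNet classifier.
    import torch
    from PIL import Image
    from torchvision import models

    weights = models.ResNet50_Weights.DEFAULT
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()         # resize, crop, normalize

    img = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]

    top = probs.topk(5)
    labels = weights.meta["categories"]
    for p, i in zip(top.values.tolist(), top.indices.tolist()):
        print(f"{labels[i]}: {p:.3f}")

Whether that counts as "telling the difference" in any deeper sense is, of course, the whole argument upthread.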


On the other hand, Google's software can't tell the difference between black people and gorillas.

http://mashable.com/2015/07/01/google-photos-black-people-go...


I'm actually quite pleasantly surprised that we've already reached this level of functionality. Imagine aliens coming to our planet and starting to classify stuff on Earth. The idea that an alien intelligence without some seeded knowledge would mistakenly make this inference doesn't seem beyond the realm of possibility.


If StreetView blurring is anything to go by, it can't tell the difference between a human face and a brick wall.

I'd excuse that by saying it's tuned to be extremely conservative and not accidentally miss someone, but it routinely misses faces too.

Even our purpose-built image recognition has a long way to go.



