Ask HN: Gates, Musk, Hawking, and others have voiced concerns over AI. Why?
7 points by Billesper on July 16, 2015 | 8 comments
What evidence is there that developing even a rudimentary general/strong AI is remotely likely in the near future (within the next 10, 20, 50, 100 years)?

1) What is the state of general AI research? Are there any promising (i.e. having some measure of real progress) approaches currently?

2) Are the major problems/roadblocks to creating a strong AI even known in any sort of nontrivial sense?

(To be clear, please leave discussion about "if we have a strong A.I., it will/could be a threat because...." for a different thread. The question is about why the prospect of a general A.I. is being seriously considered in the first place)

From doing a bit of research, it seems like most contemporary AI methods are statistical/optimization techniques in Machine Learning, etc. These can be extremely powerful tools, but as far as I can tell, they are applied to very specific problem instances. Is there any hope of more powerful/general techniques emerging from this area?

It seems like a real solution is not simply a matter of time (within the next 100 years or so), but instead will probably take several major breakthroughs and insights coming from unknown places to achieve.




...because they were asked. Put yourself in their position: somebody who may very well have the power to make you look like an idiot asks you a question. Do you insult them? No. Do you give them the boring "oh, I won't see it in my lifetime"? Not if you're smart. If a reporter honors you with their time and attention, and you want them to do right by you, you give them something they can use.

And it's completely harmless to warn people about something 100 years off.


When I interviewed John D. Cook for a Profile in Computational Imagination, I asked him a similar question. Here is his answer:

John: One danger that I see is algorithms without a human fail-safe. So you could have false positives, for example, in anti-terrorist algorithms. And then there’s some twelve-year-old girl that’s arrested for being a terrorist because of some set of coincidences that set off an algorithm, which is ridiculous. Something more plausible would be more dangerous, right? I think the danger could increase as the algorithms get better.

Mike: Because we start to trust them so much, because they’ve been right so often?

John: Right. If an algorithm is right half the time, it’s easy to say well, that was a false positive. If an algorithm is usually right--if it’s right ninety-nine percent of the time--that makes it harder when you’re in the one percent. And the cost of these false positives is not zero. If you’re falsely accused of being a terrorist, it’s not as simple as just saying oh no, that’s not me. Move along nothing to see here. It might take you months or years to get your life back.
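To put rough numbers on John's point, here is a minimal sketch of the base-rate effect that makes those false positives so costly. The figures are made-up assumptions for illustration (a "99% accurate" screening algorithm and a 1-in-100,000 base rate), not anything from the interview:

    # Base-rate sketch: even a very accurate screening algorithm flags mostly
    # innocent people when what it screens for is rare (hypothetical numbers).

    def positive_predictive_value(sensitivity, specificity, base_rate):
        """Probability a flagged person is a true positive, via Bayes' rule."""
        true_pos = sensitivity * base_rate
        false_pos = (1 - specificity) * (1 - base_rate)
        return true_pos / (true_pos + false_pos)

    # Algorithm "right 99% of the time" in both directions; 1 in 100,000
    # people screened is an actual threat.
    ppv = positive_predictive_value(sensitivity=0.99, specificity=0.99, base_rate=1e-5)
    print(f"Chance a flagged person is a real threat: {ppv:.2%}")  # roughly 0.10%

So under these assumptions, well over 99% of the people the algorithm flags are in "the one percent" John describes, which is why trusting it more as it gets more accurate can make the false positives worse for the people caught by them.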

If you want to read more of our conversation it is available at http://computationalimagination.com/interview_johndcook.php


I am not a visionary like the people you mentioned, but one thing I can see going wrong with large-scale AI automation is not the robot-uprising scenario, but the ordinary software bugs we are all familiar with.

There have been plenty of cases where a bug in an automated trading system caused millions in losses[0][1]. If we have a larger system that manages everything from the power grid, the gas lines, self-driving cars, and your smart home to your own personal calendar, it's not hard to imagine the potential damage from even a single bug.

It could be something like an edge case nobody thought of, or simply an unexpected sensor reading caused by a natural disaster, etc.

[0](http://dougseven.com/2014/04/17/knightmare-a-devops-cautiona...) [1](http://www.bloomberg.com/news/articles/2013-08-20/goldman-sa...)


Yes, it's something we're thinking about in the insurance industry. Lots of headaches and edge cases come about with self-driving cars, smart homes, etc. I also recall the Patriot Missile Software Problem [0].

[0] http://sydney.edu.au/engineering/it/~alum/patriot_bug.html


To answer the title if not the content of your question:

* AI is something that potentially poses an existential risk to humanity, i.e., it might wipe out our species.

* AI is a trendier topic to write about than other things: there are lots of possible pop-culture references, and the general public doesn't need to feel guilty about it.

In comparison, something like climate change probably doesn't pose an existential threat to our species: it may merely wipe out a fraction (20%? 80%?) of the human population over the next hundred years.

Edit: to be a bit more on topic, do you think there are very dangerous technologies around these days that would have been very difficult to anticipate / seemingly ludicrous to consider back 100 years ago? Would the development of nuclear weapons in the cold war have seemed like a serious concern or viable development in the 1850s?


This book by Nick Bostrom will help you find answers: Superintelligence: Paths, Dangers, Strategies [1]

The author is the director of the Future of Humanity Institute at Oxford.

[1] http://www.amazon.co.uk/Superintelligence-Dangers-Strategies...



BUMP... following for getting answers....



