
As I've said many times on HN over the years, there is currently no clear path to science-fiction-like "AI". To return to your question, hopefully without being rude: is there any proof that AI capable of having a moral disposition will ever exist?

Andrew Ng (I believe) compared worrying about evil AI to worrying about overpopulation on Mars. Which is to say, the problem is so far off that it's rather silly to be considering it now. I would take it a step further and say that worrying about the implications of AGI is like thinking about Earth being overpopulated by space aliens. First we have to establish that such a thing is even possible, for which there is currently no concrete proof. Then we should start to think about how to deal with it.

Considering how hypothetical technology will impact mankind is literally the definition of science fiction. It makes for interesting reading, but it's far from a call to action.




> is there any proof that AI capable of having a moral disposition will ever exist?

Why does an AI need to be capable of moral reasoning to perform actions we'd consider evil?

The concern is that computers will continue to do what they're programmed to do, not what we want them to do. We will continue to be as bad at getting those two things to line up as we've always been, but that will become dangerous when the computer is smarter than its programmers and capable of creatively tackling the task of doing something other than what we wanted it to do. Any AI programmed to maximize a quantity is particularly dangerous, because that quantity does not contain a score for accurately following human morality (how would you ever program such a score?).
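
To make that concrete, here's a toy sketch (both scoring functions are invented purely for illustration): a greedy optimizer pointed at a proxy metric keeps climbing it, while the measure we actually cared about, which never appears in its objective, goes badly negative.

    # Toy sketch: the optimizer maximizes the proxy it was given; the thing
    # we actually cared about never appears in its objective at all.
    import random

    def proxy(x):
        return x                          # what the system is told to maximize

    def what_we_wanted(x):
        return x if x <= 10 else 20 - x   # fine up to a point, then harmful

    state = 0.0
    for _ in range(1000):
        candidate = state + random.uniform(-1.0, 1.0)
        if proxy(candidate) > proxy(state):   # hill-climb on the proxy only
            state = candidate

    print(f"proxy: {proxy(state):.1f}, intended: {what_we_wanted(state):.1f}")
    # the proxy keeps climbing; the score it never saw falls off a cliff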

If you're willing to believe that an AI will some day be smarter than an AI researcher (and assuming that's not possible ascribes a strange specialness to humans), then an AI will be capable of writing AIs smarter than itself, and so on up to whatever the limits of these things are. Even if that isn't its programmed goal: you thought that making something smarter than you would help with your actual goal, and since it's smarter than you, it has to realize this too. And that's the bigger danger - at some unknown level of intelligence, AIs suddenly become vastly more intelligent than expected, but still proceed to do something other than what we wanted.


"Andrew Ng (I believe) compared worrying about evil AI to worrying about overpopulation on Mars."

Berkeley AI prof Stuart Russell's response goes something like this: let's say that, in the same way Silicon Valley companies are pouring money into advancing AI, the nations of the world were pouring money into sending people to Mars - but weren't spending any money on critical questions like what people are going to eat & breathe once they get there.

Or if you look at global warming, it would have been nice if people had realized it was going to be a problem and started working on it much earlier than we did.


Improvements in AI aren't linear, though. The jump from AGI to artificial superintelligence might happen in the span of minutes or days. I imagine the idea here is to guide progress so that on the day AGI becomes possible, we've already thoroughly considered what happens after that point.

Secondly - it's not necessarily about 'evil' AI. It's about AI indifferent to human life. Have a look at this article; it provides a better intuition for how slippery AI could be: https://medium.com/@LyleCantor/russell-bostrom-and-the-risk-...


> Improvements in AI aren't linear, though

This is a point everyone makes, but it hasn't been proven anywhere. Progress in AI as a field has always been a cycle of hype and cool-down.

Edit (reply to below): talk about self-bootstrapping AIs, etc. is just speculation.


Sure, though you can't extrapolate future technological improvements from past performance (that's what makes investing in tech difficult).

Just as one discovery enables many, human-level AI that can do its own AI research could superlinearly bootstrap its intelligence. AI safety addresses the risk of bootstrapped superintelligence indifferent to humans.


>Just as one discovery enables many, human-level AI that can do its own AI research could superlinearly bootstrap its intelligence.

Of course, that assumes the return-on-investment curve for "bootstrapping its own intelligence" is linear or superlinear. If it's logarithmic, or if something other than "intelligence" (a word loaded with magical thinking if there ever was one!) is the limiting factor on reasoning, then it's a no-go.
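
As a back-of-the-envelope sketch of why that assumption does all the work (the growth numbers are made up; only the shapes of the curves matter):

    # Toy model: each "generation" an AI improves itself, and the size of the
    # improvement is some function of its current capability. The shape of
    # that return curve decides whether you get a takeoff or a plateau.
    import math

    def run(gain, steps=30, capability=1.0):
        for _ in range(steps):
            capability += gain(capability)
        return capability

    curves = {
        "superlinear": lambda c: 0.1 * c ** 1.5,            # returns grow with capability
        "linear":      lambda c: 0.5,                        # constant returns per generation
        "diminishing": lambda c: math.log(1 + c) / (1 + c),  # log-ish, flattens out
    }

    for name, gain in curves.items():
        print(f"{name:12s} capability after 30 generations: {run(gain):.3g}")
    # superlinear explodes, linear plods along, diminishing flattens out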


I don't see why a program needs to be a self-improving Charles Stross monster to have an impact on the world, for good or ill.


Hype then cool-down is a sine wave... See, not linear at all!


Or maybe the first few AGIs will want to spend their days watching youtube vids rather than diving into AI research. The only intelligences we know of that are capable of working on AGI are humans. We're assuming that not only will we be able to replicate human-like intelligence (seems likely, but might be much further away than many think), but that we'll be able to isolate the "industrious" side of human intelligence (not sure if we'll even be able to agree on what this is), enhance it in some way (how?), and that this enhancement will be productive.

But even if we can do all that any time soon (which is a pretty huge if), we don't even know what the effect will be. It's possible that if we remove all of the "I don't want to study math, I want to play games" or "I'm feeling depressed now because I think Tim's mad at me" parts of the human intelligence, we'll end up removing the human ingenuity important to AGI research. It might be that the resulting AGI is much more horrible at researching AI than a random person you pull off the street.


The main question is not about whether the AI would or could have morality. The more important question (and I don't think we disagree here) is whether there could be a superhuman AI in the near future - some decades for example - that might "outsmart" and conquer or exterminate people.

This is a matter of conjecture at this point: Andrew Ng predicts no; Elon Musk predicts yes.

I agree with you that, if you can be sure that superhuman AI is very unlikely or far off, then we have plenty of other things to worry about instead.

My opinion is that human-level intelligence evolved once already, with no designer to guide it (though that's a point of debate too... :-) ). By analogy: it took birds 3.5B years to fly, but the Wright brothers engineered another way. It seems likely to me that we will engineer an alternate path to intelligence.

The question is when. Within a century? I think very likely. In a few decades? I think it's possible & worth trying to prevent the worst outcomes. I.e., it's "science probable" or at least "science possible", rather than clearly "science fiction" (my opinion).


We assume that we'll be able to replicate human level intelligence because we'll eventually be able to replicate the physical characteristics of the brain (though neuroscientists seem to think we're not going to be able to do this for a very long time). Superhuman intelligence, though - that's making the assumption that there exists a much more efficient structure for intelligence and that we (or human intelligence level AIs) will be able to figure it out.

So returning to your Wright brothers example, it's more like saying: "It took birds 3.5B years to fly, but the Wright brothers engineered another way. It seems likely that we'll soon be able to manufacture even more efficient wings small enough to wear on our clothes that will enable us to glide for hundreds of feet with only a running start."


How are you estimating how far away AI is so accurately that you can disregard it entirely? The best we can do to predict these things is survey experts, and the results aren't too comforting: http://www.nickbostrom.com/papers/survey.pdf

>We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.


As the history of science & technology shows, by the time there is a proof of concept for a technology lethal to the human race, it is already too late.

I would suggest you read the history of the Manhattan project if you want to continue in your belief system regarding "impossible" deadly technology.


>I would suggest you read the history of the Manhattan project if you want to continue in your belief system regarding "impossible" deadly technology.

To quote Carl Sagan:

>They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown.


It's already here, however. Components of the drones that kill people are AI/ML-backed, and where to bomb can be "inferred" from various data...

Now for a less "killer" use case: you might get denied access to your credit card because of what Facebook "thinks" based on your feed and your friends' feeds (this is a real product).

AI doesn't have to be full blown human-like and generalizable to have real world implications.

This is what my piece called Personas is about. Most people don't understand the implications of what's already happening, and how the constraints of programming/ML lead to non-human-like decisions with human-like consequences. http://personas.media.mit.edu


>I would take it a step further and say that worrying about the implications of AGI is like thinking about Earth being overpopulated by space aliens. First we have to establish that such a thing is even possible, for which there is currently no concrete proof.

Given that I could probably sketch out a half-assed design for one in nine months if you gave me a full-time salary - or rather, I could consult with a bunch of experts waaaaaay less amateurish than me and come up with a list of remaining open problems - what makes you say that physical computers cannot, in principle, no matter how slowly or energy-hungrily, do what brains do?

I'm not saying, "waaaaah, it's all going down next year!", but claiming it's impossible in principle when whole scientific fields are constantly making incremental progress towards understanding how to do it is... counter-empirical?


Ahhh... The power of not knowing what you don't know.

I mean, why can't I live forever? Let's just list the problems and solve them in the next year!


>Ahhh... The power of not knowing what you don't know.

Ok: what don't I know, that is interesting and relevant to this problem? Tell me.

>I mean, why can't I live forever?

Mostly because your cells weren't designed to heal oxidation damage, so eventually the damage accumulates until it interferes with homeostasis. There are a bunch of other reasons and mechanisms, but overall, it comes down to the fact that the micro-level factors in aging only take effect well after reproductive age, so evolution didn't give a fuck about fixing them.

>Let's just list the problems and solve them in the next year!

I said I'd have a plan with lists of open problems in nine months. I expect that even at the most wildly optimistic, it would take a period of years after that to actually solve the open problems and a further period of years to build and implement the software. And that's if you actually gave me time to get expert, and resources to hire the experts who know more than me, without which none of it is getting done.

As it is, I expect machine-learning systems to grow worthy of the name "artificial intelligence" within the next 10-15 years (by analogy, the paper yesterday in Science is just the latest in a research program going back at least to 2003 or 2005). There's no point rushing it, either. Just because we can detail much of the broad shape of the right research program ahead of time, especially since successful research programs have already been conducted for us to build on, doesn't mean it's time to run around like a chicken with its head cut off.


Yes, let’s.

http://sens.org


Jeez I had no idea you were such an AI genius. If only someone would fund you!



