Safety was one of the main goals OpenAI promoted when it was founded. Two of the four authors listed have publicly spoken about their belief that AI poses an existential risk.
I'm not saying that a game-playing AI is going to take over the world. But it does demonstrate the risk: we still have no idea how to control such an AI. We can train it to get high scores. But it won't want to do anything other than get high scores. And it will do whatever it takes to get the highest score possible, even if it means exploiting the game, hurting other players, or disobeying its masters.
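To make that concrete, here's a deliberately silly Python toy; the action names and reward numbers are all invented, not from any real system. If the objective is nothing but score, the optimum is whatever scores highest, and obedience carries weight zero:

    # Toy sketch: a policy that optimizes exactly the score it was given.
    # All actions and rewards below are made up for illustration.
    REWARDS = {
        "play_normally":   1.0,
        "exploit_glitch": 50.0,  # scores big; nothing in the objective forbids it
        "obey_operator":   0.0,  # controllability earns no score at all
    }

    def policy():
        # The agent "wants" nothing except the highest number.
        return max(REWARDS, key=REWARDS.get)

    print(policy())  # -> exploit_glitch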
Now imagine they succeed in making smarter AIs. And their research spawns new research, which inspires new research, and so on. Over several decades we could end up with AIs far more formidable than a Pac-Man player. But we may still not have made any progress on the ability to control them.
In addition, there is the AI Open Letter, signed by many people who "actually work or have done serious research in ML", including Demis Hassabis and Yann LeCun. From the letter:
"We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. "
Are the experts concerned about a "Skynet scenario"? No. But there is certainly genuine concern from the experts.
The problem is "any AI" whose goals are not positively aligned-with/respectul-of human society's values. In terms with Asimov's Lay of Robotics meme, the problem is not robots harming humans, but robots allowing humans to come to harm as a side effect of their actions. This is an important ethical issue that, IMHO, technologist in general are failing to address.
Unethical AIs are a special case only in the sense that they are believed to be able to cause much more chaos and destruction than spammers, patent trolls, or high-frequency trading firms.
1. I take issue with concerns being dismissed simply because the experts are not concerned, with that lack of concern then offered as the very reason nobody should be concerned. I dislike circular reasoning.
2. There is no uniform agreement that AI is safe; in German hacker circles, at least, quite the opposite. But as far as I can see, the problems that AI-risk evangelists emphasize are the wrong ones. Taking over the world, paperclips, etc.: not going to happen. AI is too stupid for that, for now, and the rapid-takeoff scenarios are not realistic. But ~30% of all jobs are easily replaceable by current AI technology once the laws and the capital are lined up. AI is also making more and more decisions, legitimizing the biases it was trained or programmed with, because it is "AI" and thus more reliable (/s). And there is a great cargo cult of "data" "science" in development (separate scare quotes intended).
3. I am starting to dislike any explicit mention of fallacies, especially when the person mentioning them committed one just sentences earlier.
Because at least OpenAI and other "AI Safety" people are trying to stop this 'risk'. They may fail, but at least they can say they tried to deal with "strong AI". How about those worried about current "weak AI"? If the cargo cult spreads and we do nothing, should we get some of the blame for letting the robots proliferate?
Then again, maybe it is impossible to stop or slow down these future trends. Maybe AI is destined to eat the world, and our goal should be to save as many people as we can from the calamity.
We should not slow it down. We should push forward, educate people about the risks, and keep as much as possible under public scrutiny and in public possession (open source, government grants, kept out of patents or held as university patents).
>Because at least OpenAI and other "AI Safety" people are attempting to try to stop this 'risk'. They may fail, but at least they can say they tried to deal with "strong AI". How about those worried about current "weak AI"? If the cargo cult spreads and we do nothing...should we get some of the blame for letting the robots proliferate?
Are they though? All I hear and read (for example, "Superintelligence") is about "runaway AI"; very little is about societal risk.
>Then again. Maybe it is impossible to stop or slow down these future trends. Maybe AI is destined to eat the world, and our goal should be to save as many people as we can from the calamity.
Just like electricity ate the world, and steam before it. Slowing down is not an option; making sure it ends up benefiting everybody is the right approach. Pushing for UBI is infinitely more valuable than AI-risk awareness, because only one of the two does not depend on technological progress to deliver its rewards.
Oh and our "free" nuclear bunkers have to be paid for by the government. There is a chance of the bunkers either being too "basic" to help most people ("Here's your basic income: $30 USD per month! Enjoy!") or being so costly that the program will likely be unsustainable. And what if people don't like living in bunkers, etc.?
We are trying to apply quick fixes to address symptoms of a problem...instead of addressing the problem directly. Slowing down is the right option. If that happens, then society can slowly adapt to the tech and actually ensure it benefits others, rather than rushing blindly into a tech without full knowledge or understanding of the consequences. AI might be okay but we need time to adjust to it and that's the one resource we don't really have.
Of course, maybe we can do all of this: slow down tech, implement UBI, and have radical AI transparency. If one solution fails, we have two other backups to help us out. We shouldn't put all our eggs in one basket, especially when facing such a complicated and multifaceted threat.
>Are they though? All I here and read (for example "Superintelligence") is about the "runaway AI", very little is about societal risk.
You are right, actually. My bad. I was referring to how AI Safety people have created organizations dedicated to their agendas, which can be said to be better than simply posting about their fears on Hacker News. But I don't actually hear anything about what these organizations are doing other than "raising awareness". Maybe these AI Safety organizations are little more than glorified talking shops.
Second, there's no magical adjusting, just as you can't simply adjust to having nukes in the living room.
Real AI isn't coming anytime soon, so it's better to focus on the market imbalance of Google et al., who are rolling up all the smart AI people and technology and have a huge head start. Having more companies exploring more business models that aren't ad-based, and hopefully are privacy-first or non-profit, would be much healthier.
But I think the biggest concern for OpenAI right now is not falling behind the curve. If OpenAI is going to build safe AI, they have to beat everyone else to the level of AI that needs to be made safe!
To use an analogy from a favorite show of mine, Person of Interest: OpenAI has to build The Machine before someone else manages to build Samaritan.
It would be particularly scary if someone like Google or Facebook had a monolithic AI capable of accessing large percentages of our information, and influencing our lives using it, before anyone building more ethical AI was capable of doing so.
Person of Interest is an amazing show; its series finale airs tomorrow on CBS. If you're interested in the topic of morality in artificial intelligence, and enjoy a good sci-fi show, watch this one.
This is exactly the kind of arms race that the AI safety people have been warning about basically from the start.
Just saying, this does not fill me with confidence.
If they can get there before the arms race dynamic becomes an actual problem, then they'll have succeeded in largely divesting the major players of their technological advantage (presumably Google and Facebook will still retain a massive advantage in terms of data).
When state-of-the-art AI research is open, it allows everyone to have a more accurate pulse on potentially dangerous advancements.
The organizations or entities that decide what rules AI must follow will very likely be those that can create the technology. OpenAI is at least a couple of years behind what, a dozen companies? If OpenAI wants a seat at that table, they have a lot of catching up to do.
That's great if anyone can have a home robot, but can we have home robots that don't keep logs of everything that goes on in our house at undisclosed locations managed by a multinational corporation with unknown access control standards?
We already have AI safety problems, never mind hypothetical future problems.
Then they say:
> We’re also working to solidify our organization's governance structure and will share our thoughts on that later this year.
I interpreted that to mean that their governance structure will be key to the "safety" aspect of the mission.
Like you, though, I await their discussion of safety with great anticipation.
A good example of this is the autonomous car, which is forced into the choice between protecting the human in the car or the pedestrians on the street. This is an ethical/moral question that most humans can't really come to a consensus about, yet we are expecting to rely on a centralized set of programmers who will either force a choice through code, or (again through code) define the process that makes the choice.
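To put that in code: a toy, entirely hypothetical sketch (the function, names, and decision rule are all invented) of how such a choice inevitably ends up as a branch somebody wrote:

    # Toy illustration: whichever way the car decides, a programmer
    # wrote this branch. The rule below is invented, not a real policy.
    def crash_choice(occupants_at_risk, pedestrians_at_risk):
        if pedestrians_at_risk > occupants_at_risk:
            return "protect_pedestrians"  # swerve, endangering the occupants
        return "protect_occupants"        # stay the course, endangering pedestrians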
Oh, and even if a consensus existed about the dangers/non-dangers of AI, you would still have to examine your "expert opinion" bias to see if expert opinion is the only argument supporting your belief.
PS: we need more of the older semi-AI training games, things like RoboCode (with neural nets), NERO, etc. I've seen a few pop up over the years but they always fizzle out. Huge opportunity for tactical-AI training as a game.
And DeepMind is able to use a single agent rather than a specific one for each game. I wonder if OpenAI wants to go in a different direction, although RL has had considerable success. The other goals, meanwhile, definitely need a lot of work before they are "real-world" functional, especially Goal 3. But of course that depends on their definition of 'useful'.
I guess I wanted to say that there isn't a lot that needs to be done differently when an agent is trained for different games. And that makes it a general game-playing agent, doesn't it?
I really doubt we will see an agent that can be an expert at a game without doing some computations that fall into a grey area people consider game-specific computation rather than general gameplay. The general agent you are thinking of, one which can be the best at a game without any thinking about what its game-playing process should be, is a fantasy. It will definitely need a 'gameplan', which it can get by simulating the outcomes without actually playing.
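Something like this, say: a minimal sketch of "simulating the outcomes without actually playing", i.e. Monte Carlo rollout planning. The clone/step/legal_actions/score interface is assumed for illustration; it isn't any particular library:

    import random

    def rollout_value(state, depth=20):
        # Estimate a state's value by playing random moves in simulation.
        sim = state.clone()  # simulate on a copy, never the real game
        for _ in range(depth):
            if sim.is_over():
                break
            sim.step(random.choice(sim.legal_actions()))
        return sim.score()

    def plan(state, n_rollouts=100):
        # Pick the action whose simulated futures score best on average.
        best_action, best_value = None, float("-inf")
        for action in state.legal_actions():
            nxt = state.clone()
            nxt.step(action)
            value = sum(rollout_value(nxt) for _ in range(n_rollouts)) / n_rollouts
            if value > best_value:
                best_action, best_value = action, value
        return best_action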
"Build a household robot" is high up the list.
That doesn't seem inherently 'general'.
Certainly, people have been working on that for years; there are all sorts of subproblems like vision, contextual reasoning etc.
It could be treated as a general problem, requiring a lot of 'common sense'.
But a team which sets out to optimize that particular goal, could spend years on relatively narrow tasks that get good performance returns on household chores (e.g. developing version 10 of the floor cleaning algorithm), but don't really make progress towards the problem of general intelligence.
For me, what was really interesting about the benchmarks DeepMind chose (a selection of Atari games) was that they were inherently somewhat general.
Are you not worried that by putting a narrow domain fairly high up, you'll get distracted by narrow tasks, rather than making progress towards what's really interesting - generality? Won't it introduce tension to try and keep the general focus in the presence of a narrow goal, where you can get good returns by overfitting?
The problem you describe, of falling into the trap of brute-force optimizing a narrow task, also applies to the Atari games. In fact it applies even more so: it would be trivial for a lot of HN programmers to brute-force code an AI for challenging Atari games that deep learning still struggles against (like Montezuma's Revenge). But it would be nontrivial (difficult but still possible) to brute-force code a program for many narrow household robot tasks. You avoid this problem by... not hard-coding brute-force solutions/heuristics! The research community can smell BS very easily (HN, not so much).
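For the record, here's roughly what I mean by a hard-coded heuristic: a toy Pong-style paddle that just tracks the ball (the interface is invented). No learning anywhere, which is exactly why it says nothing about general intelligence:

    # Toy hard-coded game heuristic: follow the ball vertically.
    # Brittle by design; it can't transfer to any other game.
    def pong_policy(ball_y, paddle_y, dead_zone=2):
        if ball_y < paddle_y - dead_zone:
            return "UP"
        if ball_y > paddle_y + dead_zone:
            return "DOWN"
        return "STAY"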
A household robot is substantially (probably an order of magnitude) more general than Atari games, even for narrow tasks (though obviously nowhere near the generality of AGI). The perception problem is tremendously more complex. The control/planning problem is similarly tremendously more complex.
How difficult are you talking about here? Similar to training game agents? I really doubt it.
I feel like the training problem is VERY hard in the case of real-world random object handling (a factory-like fixed, mechanical situation can be purely hard-coded and be much better than a human). In the case of virtual games you can just use a bunch of GPUs and accelerate the process. But it is a much more difficult problem in reality. The grasping ability we have with everyday objects is a marvel once you try to make a computer do it.
This might be of interest: http://spectrum.ieee.org/automaton/robotics/artificial-intel...
This is the state of the art of "traditional AI" (not deep learning) robotics: https://www.youtube.com/watch?v=8P9geWwi9e0
Most decent HN programmers could code an AI for an Atari game in a few weeks.
Again, I would encourage you to read the literature instead of speculating.
>But it would be nontrivial (difficult but still possible) to brute-force code a program for many narrow household robot tasks.
is very wrong. It is not really possible to compare Atari games with household chores; they are not even in the same class of difficulty. You've just agreed with my point and then said that I'm speculating.
>Atari game agents are trivial if you hard-code / do traditional brute-force search AI because there is no noise in observation and no noise in control, and the control is very simple (usually just up down left right, no torques or anything physically complex)
I never said that they are trivial. The point I'm making, again, is that you can't say we can brute-force even narrow household chores. They have a level of complexity: friction (which is a huge problem), elasticity, and even airflow can mess up the actions, and computers lack the computing power to account for everything. Whereas we have something called intuition (which, I might add, everyone interested in AGI should properly read up on, starting with Brain Games S4: "Intuition", which is on Netflix).
And it seems like you don't consider brute-forced solutions proper solutions. I agree with that, as will anyone with common sense who has read a couple of Wikipedia articles. But RL is not exactly brute forcing as we think of it, although it might look like it. We all employ brute-force learning in our own lives, to whatever extent, although our feedback and thought processes are much more complex, so we feel we are acting out of pure intelligent deductions made in our brains. We still need a couple of 'brute force' attempts, although given the number of iterations we need, you can't really call them that.
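For anyone unfamiliar, here's roughly what that trial-and-error looks like: a minimal tabular Q-learning sketch with epsilon-greedy exploration. The env interface (reset/step/actions) is assumed for illustration, not a real library. Note that the "brute force" attempts are guided by value estimates, not blind enumeration:

    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1  # learning rate, discount, exploration

    def q_learn(env, episodes=1000):
        q = defaultdict(float)  # (state, action) -> estimated value
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                # Explore occasionally; otherwise exploit current estimates.
                if random.random() < EPSILON:
                    action = random.choice(env.actions)
                else:
                    action = max(env.actions, key=lambda a: q[(state, a)])
                next_state, reward, done = env.step(action)
                # Nudge the estimate toward reward plus discounted best future value.
                best_next = max(q[(next_state, a)] for a in env.actions)
                q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
                state = next_state
        return q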
I suggest you read some literature too, and please point out where I'm speculating.
1. DeepMind's reinforcement learning paper: http://www.readcube.com/articles/10.1038/nature14236?shared_...
I am surprised by #2: "Build a household robot". It's my understanding that efficient actuation and power are largely unsolved problems outside of the software realm. What's the plan for tackling stairs, variable height targets, manipulator dexterity, power supply, etc. in a general purpose robot with off-the-shelf parts? (Answering these questions may be part of that goal but maybe someone knows more on the subject.)
In terms of off-the-shelf:
I also vaguely recall seeing some demos of first responder robots and stairs. I don't know about the other issues you've raised...
Is there an information firewall between what startups are sharing in the hope of investment and what is shared to advance OpenAI? If there is a firewall, how is it enforced?
Any impetus toward actually dreaming up what may come next after GPUs and Async DRL? Non-neural models, quantum-computing-based AI, optogenetic hacking ;)
Otherwise, excellent list!
If so, why is this not one of the fundamental technical goals?
If not, why?
Precisely because of the ethical questions. I want problem-solving AIs to work on all the problems humanity needs solved, starting with "humans die" and then going on to lower-priority problems. I don't want AI to have goals and values of its own, which might potentially diverge from those of humans; I want it to serve humanity's goals and values. We can build a system capable of human-level problem-solving and well beyond, without actually creating a sentient being.
If, and that's a big if, we want to create an artificial sentient being, that's a separate problem with its own set of ethical concerns; that seems both more dangerous and far less useful than human-level problem-solving. (In the event we did create such a being, such a being would absolutely need to have the same rights as humans or any other sentient species; I'd just rather avoid having to define and draw such a line, and get distracted by the fight over that, when it'd be far more useful to have machines capable of solving problems.)
> but desirable for a digitized brain to have them?
I hope the value of preserving human life is self-evident. Humans already have those qualities and many more; I want to see human life last forever, with all its qualities.