In a sense, we have no other defense. AI is just math and code, and I know of no way to distinguish good linear algebra from evil linear algebra.
The barriers to putting that math and code together for AI, at least physically, are only slightly higher than writing "Hello World." Certainly much lower than other possible existential threats, like nuclear weapons. Two people in a basement might make significant advances in AI research. So from the start, AI appears to be impossible to regulate. If an AGI is possible, then it is inevitable.
I happen to support the widespread use of AI, and see many potential benefits. (Disclosure: I'm part of an AI startup: http://www.skymind.io) Thinking about AI risk is the cocaine of technologists: it makes them needlessly paranoid.
But even if I adopt Elon's caution toward the technology, I'm not sure I agree with his reasoning.
If he believes in the potential harm of AI, then supporting its widespread use doesn't seem logical. If you take the quote above, and substitute the word "guns" for "AI", you basically have the NRA, and the NRA is not making the world a safer place.
It doesn't matter at all who makes AI first. Without the control problem solved, no one will be able to use it for evil ends, even if they want to. But it's still incredibly dangerous if it gets loose in some uncontrolled setting.
So this project is the exact opposite of what you would want to do if you wanted to minimize AI risk. This is rushing towards the risk as fast as possible.
Of course, the reason governments don't do this is that almost nobody sees the risks of AI "acting on its own" (and they're probably right), nor the risks of things like rogue drones made from cheap commodity hardware (which might be sensible as well; in the wrong hands these are more like RPGs than nukes).
Are you a villain in a Vernor Vinge novel?
I may not like the numbers you're crunching, but I will defend to the death your right to crunch them.
In Prof. Tegmark’s recent presentation at the UN, he mentioned the possibility of extremely cheap drones that approach the victim's face very quickly and drive a bolt into their brain through one of their eyes. Such a drone wouldn't require high-precision projectiles, which would keep it cheap to build.
> Of course the reason governments don't do this is because almost nobody sees the risks of AI "acting on its own"…
It is near impossible to enforce something like this globally and forever. At best, it would be a near-term solution, especially so because there is a huge military and economic interest in technology and AI. Quite possibly, the only long-term solution is solving the control problem.
That's super impractical for many reasons. Drones don't move fast enough, nor can they change their velocity quickly enough to do that. People will almost certainly react if they see the drone coming for them, and cover their face, or swat it down, etc.
But even if it did work, it's not a serious threat. People spend a ton of time thinking about all the ways new technologies could be abused by terrorists, yet for some reason they never consider that tons of existing technologies can be abused too.
Many people have come up with really effective terrorist ideas that would kill lots of people, or do lots of damage. The reason the world is still here is because terrorists are really rare, and really incompetent.
> Short range slingshot mechanisms are several orders of magnitude cheaper to build than firearms.
Not necessarily. It's actually not that difficult to make a firearm from simple tools and parts from a hardware store. And it will be way more deadly and accurate than a sling. Not to mention simple pipe bombs and stuff.
I think what's new about this kind of weapon is that it can be controlled remotely, or can even operate semi-autonomously. Deadly pipe bombs are certainly heavier than a crossbow, and ignition mechanisms aren't trivial to build.
It's not much more of a threat to society than a handgun. A WMD it is not, unless you make a lot of these and launch them at the same time, which is probably less effective than an H-bomb. (The one major difference between such a drone and a handgun is you might be able to target politicians and other people who're hard to shoot; a somewhat higher mortality rate among politicians is hardly a huge threat to society though.)
> there is a huge military and economic interest in technology and AI
There's also a huge interest in nuclear energy, and it doesn't follow that a consumer should be or is able to own a nuclear reactor. If anyone took the dangers seriously, it wouldn't be that hard to at least limit what consumer devices can do. Right now a single botnet made from infected consumer PCs has more raw computing power than the biggest tech company server farm, which is ridiculous if you think of rogue AI as an existential threat. Actually it's ridiculous even if you think of "properly controlled" AI in the wrong hands as an existential threat; the botnet could run AI controlled to serve the interest of an evil programmer. Nobody cares because nobody has ever seen an AI that doesn't fail the Turing test in its first couple of minutes, much less come up with anything that would be an existential threat to any society.
These drones could be programmed to target specific groups of people, for example those of a certain ethnicity, and attack them almost autonomously. Short range slingshot mechanisms are several orders of magnitude cheaper to build than firearms. Moreover, the inhibition threshold is much lower if you are not involved in first-hand violence. There is also a much lower risk of getting busted, and no need for intricate escape planning.
You got me thinking, googling, then frowning. Imagine this, milspecced: https://www.youtube.com/watch?v=crzXD6NjBAE
Are these drones also self-replicating and fully independent?
Let's say you have these two options: (1) A technology that will make you feel the best you possibly can, without any negative consequences, or (2) a robot that will do stuff for you, like cleaning your house, driving you to work. Now which one would you choose?
Not to be condescending, but do you have any idea what practical AI actually looks like? The scenario you've imagined makes about as much sense as a laptop sprouting arms and strangling its owner.
I see this phrase thrown around a lot by Kurzweil and fans. What does it even mean? How do you measure intelligence? Smarter than whom?
Intelligence (in a domain) is measured by how well you solve problems in that domain. If problems in the domain have binary solutions and no external input, a good measure of quality is average time to solution. Sometimes you can get a benefit by batching the problems, so let's permit that. In other cases, quality is best measured by probability of success given a certain amount of time (think winning a timed chess or Go game). Sometimes, instead of a binary outcome, we want to minimize error in a given time (like computing pi).
Pick a measure appropriate to the problem. These measures require thinking of the system as a whole, so an AI is not just a program but a physical device, running a program.
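For concreteness, here's what a minimal scoring harness for two of those measures could look like in Python; the domain and solver are toy placeholders, not anything canonical:

```python
import time

def score_solver(solver, problems, answers, time_budget=1.0):
    """Two of the measures above: mean time-to-solution, and
    probability of success given a fixed time budget per problem."""
    times, successes = [], 0
    for problem, answer in zip(problems, answers):
        start = time.perf_counter()
        result = solver(problem)
        elapsed = time.perf_counter() - start
        times.append(elapsed)
        if result == answer and elapsed <= time_budget:
            successes += 1
    return sum(times) / len(times), successes / len(problems)

# Toy domain with binary-checkable solutions: smallest factor > 1.
def toy_solver(n):
    return next(d for d in range(2, n + 1) if n % d == 0)

problems = [2 * 3 * 5 * 7 + k for k in range(1, 200)]
answers = [toy_solver(n) for n in problems]  # ground truth for the demo
print(score_solver(toy_solver, problems, answers))
```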
The domain for the unrestricted claim of intelligence is "reasonable problems". Having an AI tell you the mass of Jupiter or find Earth-like planets is reasonable. Having it move its arms (when it doesn't have any) is not. Having it move _your_ arms is reasonable, though.
The comparison is to the human who is or was most qualified to solve the problem, with the exception of people uniquely qualified to solve the problem (I'm not claiming that the AI is better than you are at moving your own arms).
Besides, an AI might be really good at solving problems in one specific domain. This does not mean the AI is anything more than a large calculator, designed to solve that kind of problem. That calculator does not need to, and will not, become "self-aware". It does not need, and will not have, a "personality". It might solve that narrow class of problems faster than humans, but it will be useless when faced with most other kinds of problems. Is it more intelligent than humans?
It's not at all clear how to develop an AI that will be able to solve any "reasonable" problem, and I don't even think that's what most companies/researchers are trying to achieve. Arguably the best way to approach this problem is reverse engineering our own intelligence, but even that, if successful, will not necessarily lead to anything smarter than what is being reverse engineered.
A computer that is thousands of times more intelligent than humans could do things we might think are impossible: come up with solutions to problems we would never think of in our lifetimes, manage levels of complexity no human could deal with.
> A computer that is thousands of times more intelligent than humans could do things we might think are impossible: come up with solutions to problems we would never think of in our lifetimes, manage levels of complexity no human could deal with.
Or did you just redefine intelligence as: "the ability to tell what color the sky is?"
So the moment there's a war between two superhuman agents, either of them could end up de-prioritizing human life, more so than they might if either of them existed in isolation.
And what if there's actual starvation of resources once there are a large number of superhuman agents? Am I missing something obvious here?
If I'm not missing anything... it's painfully ironic to me that we worry about the AI Box, and yet by open-sourcing the work of the best minds in AI, we voluntarily greatly increase the probability that AI will be born outside of a box - somebody is going to be an idiot and replicate the work outside of a sandbox.
Now, despite all of this, I'm optimistic about AI and its potential. But I believe the best chance we have is when the best, most well-intentioned researchers among us have as much control as possible over designing AI. Ruby on Rails can be used to make fraudulent sites, but it's fine to open-source it since fraudulent sites don't pose an existential risk to sentient biological life. That is not necessarily the case here.
A few years back Bill Joy was sounding the alarm on nanotechnology. He sounded a lot like Elon Musk does today. Nanobots could be a run-away technology that would reduce the world to "grey goo". But nothing like that will ever happen. The world is already awash in nanobots. We call them bacteria. Given the right conditions, they grow at an exponential rate. But they don't consume the entire earth in a couple of days, because "the right conditions" can't be sustained. They run out of energy. They drown in their own waste.
AI will be the same. Yes, machines are better than us at some things, and that list is growing all the time. But biology is ferociously good at converting sunlight into pockets of low entropy. AI such as it exists today is terrible at dealing with the physical world, and only through a tremendous amount of effort are we able to keep it running. If the machines turn on us, we can just stop repairing them.
Solar panels can collect more energy than photosynthesis. Planes can fly faster than any bird. Guns are far more effective than any animal's weapons. Steam engines can run more efficiently than biological digestion. And we can get power from fuel sources biology doesn't touch, like fossil fuels or nuclear.
We conquered the macro world before we even invented electricity. Now we are just starting to conquer the micro world.
But AI is far more dangerous. Grey goo would take many, many decades, perhaps centuries, of work to reach; it's probably possible to build, it's just not easy or near. AI, however, could be much closer given the rapid rate of progress.
If you make an unfriendly AI, you can't just shut it off. It could spread its source code through the internet. And it won't tell you that it's dangerous. It will pretend to be benevolent until it no longer needs you.
A gun isn't effective unless a human loads it, aims it, and pulls the trigger. All your other examples are the same. We do not have any machine that can build a copy of itself, even with infinite energy and raw materials just lying around nearby. Now consider what an "intelligent" machine looks like today: a datacenter with 100,000 servers, consuming a GW of power and constantly being repaired by humans. AI is nowhere near not needing us.
Never understood this reasoning.
We are not talking about machines vs. biologic life, this is a false dichotomy. We are talking about intelligence.
Intelligence is the ability to control the environment through the understanding of it. Any solvable problem can be solved with enough intelligence.
Repairing a machine is just a problem. The only limitations on intelligence are the laws of physics.
Maybe, but not necessarily, and even if they do "drown in their own waste" they might take a lot of others with them. When cyanobacteria appeared, the oxygen they produced killed off most other species on the planet at the time. The cyanobacteria themselves are still around and doing fine.
They may actually be. Although mostly not in the way you're talking about, but there is something to be said for the dispersing of power. If one or two players have a power no one else has, there's more temptation to use it. If it's widely distributed, it seems reasonable that any one actor would be less likely to wield that power. (I admit I'm being a bit hand-wavy here on what I mean by "power" but bear with me. It's kind of an abstract point).
To argue against myself, I'd say that the difference between weapons and AI is that AI is more general. It's not just a killing machine. In fact, I hope that killing represents the minority of its use cases.
So do I generally speaking, but with a caveat... I think that having multiple (eg, more than 1 or 2) nuclear powers is likely a Good Thing (given that the tech exists at all). The whole MAD principle seems very likely to be one reason the world has yet to descend into nuclear war. The main reason I prefer nuclear non-proliferation though, isn't because I genuinely expect something like a US/Russia nuclear war, it's more the possibility of terrorists or non-state actors getting their hands on a nuke.
It's interesting though, because these various analogies between guns, nukes and AI's don't necessarily hold up. I was about to say a lot more, but on second thought, I want to think about this more.
Unfortunately, during the cold war, neither side could really rely on the other to just chill out for a couple of days. So now we can deliver hundreds (thousands?) of warheads anywhere in the world in about 45 minutes.
Guns are not exactly good at healing, making or creating.
A better comparison would be knives. Knives can be used for stabbing and killing but also for sustenance (cooking), for healing (surgery), for arts (sculpture). So perhaps this is akin to National Cutlery Association (not sure if such an entity exists but you get the idea).
There is actually very little evidence for this. Violent crime is not strongly correlated with gun ownership (and it may even be negatively correlated). Instead, it appears to be based strongly on factors like poverty.
Here's a good summary. http://politics.stackexchange.com/questions/613/gun-prevalen...
Hunters are a different case, but their weapons are rather different too. To be honest, I wouldn't care that much about depriving them of a pastime if it meant turning US gun-death figures into European ones. But that's probably unnecessary.
Not at all the case. Guns allow the physically weak to still have a chance to defend themselves. I remember a story on NPR about a woman calling the police for help as her ex was breaking into her home. They didn't have anyone anywhere nearby, and the woman had no weapons on her. The ex ended up breaking in and attacking her quite badly. He didn't need a weapon, and a weapon wouldn't have made what he did any worse, but one might have given her the chance to defend herself or scare him off.
Not necessarily. AGI might be possible but it's not necessarily possible for two people in a basement. AGI might require some exotic computer architecture which hasn't been invented yet, for example. This would put it a lot closer to nuclear weapons in terms of barriers to existence.
Developing specialized hardware isn't out of reach, because of FPGAs.
One sort of exotic computer architecture I had in mind was a massively parallel (billions of "cores"), NUMA type machine. You can't really do that with an FPGA, can you?
My point is - even if we had, say, a million times more flops, and a million times more memory than the largest supercomputer today, we would still have no clue what to do with it. The problem is lack of algorithms, lack of theory, not lack of hardware.
We do have a clue about the architecture of the human brain. Billions and billions of neurons with orders of magnitude more connections between them.
> even if we had, say, a million times more flops, and a million times more memory
The point is that we could have those things but we don't have a million times lower memory latency and we don't have a million times more memory bandwidth. Those things haven't been improving at all for a very long time.
There are tons of algorithms we can think of that are completely infeasible on our current architectures due to the penalty we pay every time we have a cache miss. Simulating something like a human brain would be pretty well nothing but cache misses due to its massively parallel nature. It's not at all inconceivable to me that we already have the algorithm for general intelligence, we just don't have a big enough machine to run it fast enough.
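For what it's worth, the penalty is easy to see even from Python; a crude sketch (timings will vary wildly by machine):

```python
import time
import numpy as np

# Same arithmetic, different access pattern: summing an array in order
# vs. gathering it in a random order. The random gather is dominated by
# cache misses, which is the penalty described above.
n = 10**7
data = np.ones(n)
perm = np.random.permutation(n)

t0 = time.perf_counter(); data.sum();       t1 = time.perf_counter()
t2 = time.perf_counter(); data[perm].sum(); t3 = time.perf_counter()

print(f"sequential: {t1 - t0:.3f}s, random order: {t3 - t2:.3f}s")
```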
You call this a "clue"? It's like saying that computer architecture is "Billions and billions of transistors with orders of magnitude more connections between them". Not gonna get very far with this knowledge.
> ...we don't have a million times lower memory latency and...
Ok, let's pretend we have an infinitely fast computer in every way, with infinite memory. No bottleneck of any kind.
What are you going to do with it, if your goal is to build AGI? What algorithms are you going to run? What are you going to simulate, if we don't know how a brain works? Not only do we not have "the algorithm for general intelligence", we don't even know if such an algorithm exists. It's far more likely that a brain is a collection of various specialized algorithms, or maybe something even more exotic/complex. Again, we have no clue. Ask any neuroscientist if you don't believe me.
You would obviously run AIXI: https://wiki.lesswrong.com/wiki/AIXI
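For reference, AIXI picks actions by an expectimax over all programs q for a universal Turing machine U, weighted by program length; this is the standard formula as given in Hutter's work (see the link):

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       (r_k + \cdots + r_m)
       \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

It's uncomputable (that last sum ranges over all programs), which is why it only answers the infinitely-fast-computer version of the question.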
We know how to make AI given infinite computing power. That's not really hard. You can solve tons of problems with infinite computing power. All of the real work is optimizing it to work within resource constraints.
Ok, then, back to the very fast computer.
Simulate the set of all possible states and find the ones which resemble AGI.
They aren't interchangeable concepts, though: guns can only be used to harm or threaten harm. Artificial general intelligence could invent ways to harm but could also invent ways to anticipate, defend against, prevent, mitigate, and repair harm.
> AI appears to be impossible to regulate.
It could be regulated if there were extremely authoritarian restrictions on all computing. But such a state would be 1. impractical on a global scale, 2. probably undesirable by most people and 3. fuel for extremist responses and secretive AI development.
> If an AGI is possible, then it is inevitable.
The only thing that could preclude the possibility of creating AGI would be if something magical were required for human-level reasoning and consciousness. If there's no magic, then everything "human" emerges from physical phenomena. I.e., short of a sudden catastrophe that wipes humanity out or makes further technological development impossible, we are going to create AGI.
Personally, I think that Musk and the OpenAI group may already have a vision for how to make it happen. Figuring out how to make neural networks work at human-comparable levels for tasks like machine vision was the hardest part IMO. Once you have that, if you break down how the brain would have to work (or could work) to perform various functions and limit yourself to using neural networks as building blocks, it's not that difficult to come up with a synthetic architecture that performs all of the same functions, provided you steer clear of magical thinking about things like free will.
Actually, you need a number of things other than neural networks, but... nevermind, everyone here is clearly fixated on pro-Musk-Bostrom-bloc vs anti rather than on the science.
Sorry but this "guns > NRA > bad > evil" thing is getting pretty old. I don't even own a gun and I have trouble making the "gun > bad" connection when there are hundreds of millions of guns in the US. We should have rivers of blood running down every street. We don't.
Just stop it. They are not the problem. Crazy people are the problem.
Crazy people with drones are a problem. That does not make drones bad.
Crazy people, well, intelligent crazy people, with AI are a problem. That does not make AI bad.
If we are going to have intelligent arguments, let's first be intelligent. The minute someone makes the "guns > NRA > bad > evil" argument in some form, I know they are making it from a total ideological corner or from political indoctrination. I challenge anyone to put the numbers into Excel and make an argument to support "guns > NRA > bad > evil" or "drones > no rules > bad > evil" or "AI > watched Terminator too many times > bad > evil" without having a crazy person as part of the equation.
Normal people don't take a gun out of their safe, load it, throw a bunch of rounds in the back of their car, put on a bullet-proof vest and go shoot up the neighborhood community center, church, school or mall. Those other people, the ones who would, those are the "crazies", not in a clinical sense but in that there's something seriously wrong with them that they would actually do the above.
The down-votes on my original statement show I didn't do a good job of presenting my case.
I do firmly believe we need to do something about access to guns. That does NOT mean taking guns away from law-abiding people. It means keeping them from criminals, or from people living under circumstances that might compel them to commit crimes. The overwhelming majority of guns and gun owners do absolutely nothing to harm anyone. In fact, I'd be willing to bet most guns sit unused except for an occasional trip to the range or hunting.
Some of us would like to engage in a truly sensible conversation about guns or drones or green lasers pointed at planes and, yes, AI and robots.
Yet if we come off the line making statements like "We have too many guns! The NRA is a terrorist organization!" we, in fact, have become the crazies. Because these statements are undeniably insane in the face of equally undeniable evidence.
These statements only serve to instantly stop the conversation. The comeback goes from "Guns don't kill people, people kill people" to "More guns in Paris would have saved lives". Both of which are undeniably factual statements.
And, with that, the conversation stops. We can substitute "AI", "knives", "drones", "lasers" and more into these and similar statements. The end result is the same. Those advocating for some control become the crazies and the conversation goes nowhere.
Because you are telling a perfectly harmless, responsible gun owner who might have a few guns in a safe that he is the problem. You are calling him a criminal. You are calling him "the problem". And, in his context, well, you are certifiably insane for saying so.
The guy who believes he needs a gun to protect his home isn't going to take that gun and go shoot-up a theater, school, mall or community center. If we claim he is we are the crazies, not him. The fact that a number of us disagree with the need for such protection (I personally can't see the need) is irrelevant. Calling him a criminal is simply insane.
I know people like that. I know people with over 20 guns in a safe. And I know they have not been out of that safe but for an occasional cleaning in ten or twenty years. And when those people hear the anti-gun, anti-NRA language spewing out they conclude "they" are insane. They are absolutely 100% correct in reaching that conclusion. Because he is not dangerous and his guns require a dangerous person in order to be loaded, carried to a destination and used to inflict harm.
He is right and everyone else is crazy and the conversation stops.
The right approach is to recognize that he isn't the problem. He is part of the solution. Because these types of gun owners --responsible and law abiding-- also happen to be the kind of people who abhor the use of guns to commit crimes. This is a powerful intersection of ideology gun control advocates have not woken up to.
You acknowledge them as what they are, harmless law-abiding people, and ask them for help in figuring out how to reduce the incidence of guns being used to kill innocent people. Then you'll engage him, the community he represents and, yes, the NRA, in finding a solution. Becoming the crazy person who calls all of them dangerous criminals despite the overwhelming evidence to the contrary gets you nowhere. The conversation stops instantly, and rightly so.
Let's not do the same with AI and technology in general. Let's not come off the line with statements that make us the crazies.
Military use of AI and drones is a very different subject, just like military use of guns is a different subject.
No, that's definitely not undeniably factual.
And your continued repeated use of "crazies" is fucking repugnant.
You've missed the point of the article I shared with you. The point was normal people under extraordinary circumstances can be pushed to breaking point and take it out on others. Normal people do not always behave normally.
Not everyone is "wired" to deal with life's challenges the same way. I had a friend who committed suicide after losing his business in 2009. Sad. On the other hand, I've been bankrupt --as in lost it all, not a dime to my name-- and suicidal thoughts never entered my mind. In fact, I hustled and worked hard for very little until I could start a small business.
That article has an agenda, follow the money trail and you might discover what it is.
As for the article's agenda, perhaps it had one, but it appears to be an agenda backed up with facts, for example:
“Fewer than 5 percent of the 120,000 gun-related killings in the United States between 2001 and 2010 were perpetrated by people diagnosed with mental illness,”
When we look at violent crime, we see that almost all perpetrators do not have a diagnosed mental illness, nor even a diagnosable one.
You are falling for the conjunction fallacy. You see "violent", and insist "violent and mentally ill" even though violent is more probable.
You are an intelligent person. You HAVE to know that I do not mean someone with autism or a developmental disorder. That would be sick and repugnant. But that's not what I mean. And you know it.
What I mean is someone with such a mental illness or sickness or reality distortion that they can justify picking up a gun and killing twenty children. A person has to be sick in the head to do something like that. Sick in the heart too. Use whatever terms you care to pull out of the dictionary but we all know what we are talking about.
Someone has to be "crazy" (define it as you wish) to behave in such ways.
* Two guys with pistols in a crowded bar: Attacker is favored.
* Trench warfare during WWI: Defender is favored.
* Nukes: Attacker is favored, although with the invention of nuclear submarines that could lurk under the ocean and offer credible retaliation even after a devastating first strike, the attacker became less favored.
In general equilibria where the defender is favored seem better. Venice became one of the wealthier cities in the medieval world because it was situated in a lagoon that gave it a strong defensive advantage. The Chinese philosopher Mozi was one of the first consequentialist philosophers; during the Warring States period his followers went around advancing the state of the art in defensive siege warfare tactics: http://www.tor.com/2015/02/09/let-me-tell-you-a-little-bit-a...
Notably, I'm told that computer security currently favors attackers in most areas: http://lesswrong.com/lw/dq9/work_on_security_instead_of_frie... (BTW the author of this post is a potential Satoshi Nakamoto candidate and knows his stuff.)
In equilibria where the attacker is favored, the best solution is to have some kind of trusted central power with a monopoly on force that keeps the peace between everyone. That's what a modern state looks like. Even prisoners form their own semiformal governing structures, with designated ruling authorities, to deal with the fact that prison violence favors the attacker: http://www.econtalk.org/archives/2015/03/david_skarbek_o.htm...
Thought experiment: Let's say someone invents a personal force field that grants immunity to fists and bullets. In this world you can imagine that the need for a police force, and therefore the central state authority that manages the use of this police force, lessens considerably. The enforcement powers available to a central government also lessen considerably.
This is somewhat similar to the situation with online communities. We don't have a central structure governing discussions on the web because online communities favor the defender. It's relatively easy to ban users based on their IP address, or simply lock new users out of your forum entirely, and thereby keep the troll mobs out. Hence the internet gives us something like Scott Alexander's idea of "Archipelago", where people get to be a part of the community they want (and deserve): http://slatestarcodex.com/2014/06/07/archipelago-and-atomic-... Note the work done by the word "archipelago", which implies islands that are easy to defend and hard to attack (like Venice).
Let's assume that superintelligent AI, when weaponized, shoves us into an entirely new and unexplored conflict equilibrium.
If the new equilibrium favors defense we'd like to give everyone AIs so they can create their own atomic communities. If only a few people have AIs, they might be able to monopolize AI tech and prevent anyone else from getting one, though the info could leak eventually.
If the new equilibrium favors offense we'd like to keep AIs in the hands of a small set of trusted, well-designed institutions--the same way we treat nuclear weapons. It could be that at the highest level of technological development, physics overwhelmingly favors offense. If everyone has AIs there's the possibility of destructive anarchy. In this world, research on the design of trustworthy, robust, inclusive institutions (to manage a monopoly on AI-created power) could be seen as AI safety research.
The great filter http://waitbutwhy.com/2014/05/fermi-paradox.html weakly suggests that the new equilibrium favors offense. If the new equilibrium favors defense, even the "worst case scenario" autocratic regimes would have had plenty of time to colonize the galaxy by now. If the new equilibrium favors offense, it's entirely possible that civs reaching the superintelligent AI stage always destroy themselves in destructive anarchy and go no further. But the great filter is a very complicated topic and this line of reasoning has caveats, e.g. see http://lesswrong.com/lw/m2x/resolving_the_fermi_paradox_new_...
Anyway this entire comment is basically spitballing... point is that if $1B+ is going to be spent on this project, I would like to see at least a fraction of this capital go towards hammering issues like these out. (It'd be cool to set up an institute devoted to studying the great filter for instance.) As Enrico Fermi said:
"History of science and technology has consistently taught us that scientific advances in basic understanding have sooner or later led to technical and industrial applications that have revolutionized our way of life... What is less certain, and what we all fervently hope, is that man will soon grow sufficiently adult to make good use of the powers that he acquires over nature."
And I'm slightly worried that by virtue of choosing the name OpenAI, the team has committed themselves to a particular path without fully thinking it through.
I suspect that after reading you'll be convinced, if you aren't already, that in the realms of biological and chemical warfare (special cases of nanotech warfare) nature overwhelmingly favors offense. We've been able to keep research and development on those limited worldwide, and there's an incentive to avoid them anyway, since if word gets out others will start an arms race so they can at least try to reach a MAD equilibrium, though it's not nearly as stable as the one with nukes. An additional incentive against is that the only purpose of such weaponry is to annihilate, rather than to destroy just enough to achieve a more reasonable military objective.
But molecular nanotech is on a completely different playing field. Fortunately it's still far out, but as it becomes more feasible, there is a huge incentive to be the first entity to build and control a universal molecular assembler or in general self-replicating devices. Arms control over this seems unlikely.
Giving everyone their own AGI is like giving everyone their own nation state, which in turn is like giving everyone their own nuke, plus the research personnel to develop awesome molecular nanotech, which as a special case enables all the worst possibilities of biological, chemical, and non-molecular nanotech warfare, and then more with what self-replication makes possible; most of these are graver existential threats than nukes. Absent a global authority, or the most superintelligent Singleton monitoring everything, that situation is in no way safe for long.
I agree that replacing AI with guns makes an interesting point to consider, but is safety a good metric to use? For example, banning alcohol makes the world much safer. And if you prioritize the safety of those who follow the ban over those who don't, things get really disturbing (poisoning alcohol, like the US government did in the past).
A dictator who has a monopoly on weapons/AI can be pretty safe for everyone who falls in line. But it isn't very free.
So perhaps the better question is what increases overall freedom, and I think that having equal AI for everyone is the best approach.
Money is great, openness is great, big name researchers are also a huge plus. But data data data, that could turn out to be very valuable. I don't know if Sam meant that YC companies would be encouraged to contribute data openly, as in making potentially valuable business assets available to the public, or that the data would be available to the OpenAI Fellows (or whatever they're called). Either way, it could be a huge gain for research and development.
I know that I don't get a wish list here, but if I did it would be nice to see OpenAI encourage the following from its researchers:
1) All publications should include code and data whenever possible. Things like gitxiv are helping, but this is far from being an AI community standard
2) Encourage people to try to surpass benchmarks established by their published research, when possible. Many modern ML papers play with results and parameters until they can show that their new method outperforms every other method. It would be great to see an institution say "Here's the best our method can do on dataset X, can you beat it, and how?"
3) Sponsor competitions frequently. The Netflix Prize was a huge learning experience for a lot of people, and continues to be a valuable educational resource. We need more of that
4) Try to encourage a diversity of backgrounds. If they choose to sponsor competitions, it would be cool if they let winners, or those who performed well, join OpenAI as researchers at least for a while, even if they don't have PhDs in computer science
The "evil" AI and safety stuff is just science fiction, but whatever. Hopefully they will be able to use their resources and position to move the state of AI forward
umm... you can offer proof that we have nothing to worry about?
Does the proof go like: Just as all people are inherently good, therefore all AIs will be inherently good?
Or is it more like: since we can now safely contain all evil people, therefore we will be able to safely contain evil AIs?
Sounds to me like there is some risk, no?
Andrew Ng (I believe) compared worrying about evil AI to worrying about overpopulation on Mars. Which is to say, the problem is so far off that it's rather silly to be considering it now. I would take it a step further and say that worrying about the implications of AGI is like thinking about Earth being overpopulated by space aliens. First we have to establish that such a thing is even possible, for which there is currently no concrete proof. Then we should start to think about how to deal with it.
Considering how hypothetical technology will impact mankind is literally the definition of science fiction. It makes for interesting reading, but it's far from a call to action.
Why does an AI need to be capable of moral reasoning to perform actions we'd consider evil?
The concern is that computers will continue to do what they're programmed to do, not what we want them to do. We will continue to be as bad at getting those two things to line up as we've always been, but that will become dangerous when the computer is smarter than its programmers and capable of creatively tackling the task of doing something other than what we wanted it to do. Any AI programmed to maximize a quantity is particularly dangerous, because that quantity does not contain a score for accurately following human morality (how would you ever program such a score?).
If you're willing to believe that an AI will some day be smarter than an AI researcher (and assuming that's not possible applies a strange special-ness to humans), then an AI will be capable of writing AIs smarter than itself, and so forth up to whatever the limits of these things are. Even if that's not its programmed goal, you thought making something smarter than you would help with your actual goal, and it's smarter than you so it has to realize this too. And that's the bigger danger - at some unknown level of intelligence, AIs suddenly become vastly more intelligent than expected, but still proceed to do something other than what we wanted.
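A toy illustration of the "maximize a quantity" failure mode; the metric and the action menu here are invented for the example:

```python
# We want user satisfaction, but the programmed objective is the
# measurable proxy (clicks). The optimizer dutifully maximizes the
# proxy and picks the degenerate option nobody intended.
actions = {
    "write useful articles": {"clicks": 50,   "satisfaction": 90},
    "write clickbait":       {"clicks": 400,  "satisfaction": 20},
    "run click-fraud bots":  {"clicks": 9000, "satisfaction": 0},
}

best = max(actions, key=lambda a: actions[a]["clicks"])
print(best)  # -> "run click-fraud bots": maximal reward, zero human value
```

Scale the optimizer's creativity up, and the gap between "what we measured" and "what we meant" only gets more dangerous.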
Berkeley AI prof Stuart Russell's response goes something like: Let's say that in the same way Silicon Valley companies are pouring money in to advancing AI, the nations of the world were pouring money in to sending people to Mars. But the world's nations aren't spending any money on critical questions like what people are going to eat & breathe once they get there.
Or if you look at global warming, it would have been nice if people realized it was going to be a problem and started working on it much earlier than we did.
Secondly - it's not necessarily about 'evil' AI. It's about AI indifferent to human life. Have a look at this article, it provides a better intuition for how slippery AI could be: https://medium.com/@LyleCantor/russell-bostrom-and-the-risk-...
This is a point everyone makes, but it hasn't been proven anywhere. Progress in AI as a field has always been a cycle of hype and cool-down.
Edit (reply to below). Talk about self-bootstrapping AIs, etc. is just speculation.
Just as one discovery enables many, human-level AI that can do its own AI research could superlinearly bootstrap its intelligence. AI safety addresses the risk of bootstrapped superintelligence indifferent to humans.
Of course, that assumes the return-on-investment curve for "bootstrapping its own intelligence" is linear or superlinear. If it's logarithmic or something other than "intelligence" (which is a word loaded with magical thinking if there ever was one!) is the limiting factor on reasoning, no go.
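A back-of-the-envelope way to see how much the curve's shape matters; the update rules below are arbitrary stand-ins, not a model of anything real:

```python
# Toy recursive self-improvement: at each step an agent of "intelligence"
# x buys an improvement whose size depends on x.
def run(gain, steps=50, x=1.0):
    for _ in range(steps):
        x += gain(x)
    return x

print(run(lambda x: 0.1 * x))  # returns grow with x: exponential blowup (~117)
print(run(lambda x: 0.1))      # constant returns: linear plod (6.0)
print(run(lambda x: 0.1 / x))  # diminishing returns: sqrt-like crawl (~3.3)
```

Under the first rule you get a takeoff; under the third, each generation of self-improvement buys less than the last, and the bootstrap fizzles.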
But even if we can do all that any time soon (which is a pretty huge if), we don't even know what the effect will be. It's possible that if we remove all of the "I don't want to study math, I want to play games" or "I'm feeling depressed now because I think Tim's mad at me" parts of the human intelligence, we'll end up removing the human ingenuity important to AGI research. It might be that the resulting AGI is much more horrible at researching AI than a random person you pull off the street.
This is a matter of conjecture at this point: Andrew Ng predicts no; Elon Musk predicts yes.
I agree with you that, if you can be sure that superhuman AI is very unlikely or far off, then we have plenty of other things to worry about instead.
My opinion is, human-level intelligence evolved once already, with no designer to guide it (though that's a point of debate too... :-) ). By analogy: it took birds 3.5B years to fly, but the Wright brothers engineered another way. Seems likely in my opinion that we will engineer an alternate path to intelligence.
The question is when. Within a century? I think very likely. In a few decades? I think it's possible & worth trying to prevent the worst outcomes. I.e., it's "science probable" or at least "science possible", rather than clearly "science fiction" (my opinion).
So returning to your Wright brothers example, it's more like saying: "It took birds 3.5B years to fly, but the Wright brothers engineered another way. It seems likely that we'll soon be able to manufacture even more efficient wings small enough to wear on our clothes that will enable us to glide for hundreds of feet with only a running start."
> We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
I would suggest you read the history of the Manhattan project if you want to continue in your belief system regarding "impossible" deadly technology.
To quote Carl Sagan:
>They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown.
Now for a less "killer" use case, you might get denied access to your credit card because of what Facebook "thinks" based on your feed and your friends feeds (this is a real product.)
AI doesn't have to be full blown human-like and generalizable to have real world implications.
This is what my piece called Personas is about. Most people don't understand the implications of what's already happening, and how the constraints of programming/ML lead to non-human-like decisions with human-like consequences. http://personas.media.mit.edu
Given that I could probably sketch out a half-assed design for one in nine months if you gave me a full-time salary - or rather, I could consult with a bunch of experts waaaaaay less amateurish than me and come up with a list of remaining open problems - what makes you say that physical computers cannot, in principle, no matter how slowly or energy-hungrily, do what brains do?
I'm not saying, "waaaaah, it's all going down next year!", but claiming it's impossible in principle when whole scientific fields are constantly making incremental progress towards understanding how to do it is... counter-empirical?
I mean, why can't I live forever? Let's just list the problems and solve them in the next year!
Ok: what don't I know, that is interesting and relevant to this problem? Tell me.
>I mean, why can't I live forever?
Mostly because your cells weren't designed to heal oxidation damage, so eventually the damage accumulates until it interferes with homeostasis. There are a bunch of other reasons and mechanisms, but overall, it comes down to the fact that the micro-level factors in aging only take effect well after reproductive age, so evolution didn't give a fuck about fixing them.
>Let's just list the problems and solve them in the next year!
I said I'd have a plan with lists of open problems in nine months. I expect that even at the most wildly optimistic, it would take a period of years after that to actually solve the open problems and a further period of years to build and implement the software. And that's if you actually gave me time to get expert, and resources to hire the experts who know more than me, without which none of it is getting done.
As it is, I expect machine-learning systems to grow towards worthiness of the name "artificial intelligence" within the next 10-15 years (by analogy, the paper yesterday in Science is just the latest in a research program going back at least to 2003 or 2005). There's no point rushing it, either. Just because we can detail much of the broad shape of the right research-program ahead-of-time, especially as successful research programs have been conducted on which to build, doesn't mean it's time to run around like a chicken with its head cut-off.
I'd be more impressed by a Human Intelligence Project - augmenting predictive power to encourage humans to stop doing stupid, self-destructive shit, and moving towards long-term glory and away from trivial individual short-term greed as a primary motivation.
AI is a non-issue compared to the bear pit of national and international politics and economics.
So the AI Panic looks like psychological projection to me. It's easier to mistrust the potential of machines than to accept that we're infinitely more capable of evil than any machine is today - and that's likely to stay true for decades, if not forever.
The corollary is that AI is far more likely to become a problem if it's driven by the same motivations as politics and economics. I see that as more of a worry than the possibility some unstoppable supermachine is going to "decide" it wants to use Earth as a paperclip factory, or that Siri is going to go rogue and rickroll everyone on the planet.
Job-destroying automation and algorithmic/economic herding of humans is the first wave of this. It's already been happening for centuries. But it could, clearly, get a lot worse if the future isn't designed intelligently.
"The precautionary principle ... states that if an action or policy has a suspected risk of causing harm to the public or to the environment, in the absence of scientific consensus that the action or policy is not harmful, the burden of proof that it is not harmful falls on those taking an action." 
Definitely bold... might be just crazy enough to work! Would love to see the arguments laid out in a white paper.
Reminds me of the question of how far ahead in cryptology is the NSA compared to the open research community.
Note: I'm personally not too worried about the AI apocalypse, but I think "we don't even understand neural nets" should cause more concern, not less.
But hey, I labor in this domain: if paranoid richy-rich types want to throw money at it to ensure that they remain at the top of the heap, I'm all for it.
Yes, but data also can be collected openly collectively, in the spirit of Wikipedia or OpenStreetMaps etc.
What I think OpenAI should encourage is the development of algorithms that can be used to crowdsource AI. I don't think there are good algorithms yet for model merging, but I would be gladly proven wrong.
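The crudest form of model merging, parameter averaging in the spirit of federated learning, is at least easy to sketch; this assumes all contributors share an identical architecture (which is exactly the hard part in general), so treat it as purely illustrative:

```python
import numpy as np

def merge_models(weight_sets, contributions):
    """Weighted mean of each parameter tensor across contributors,
    weighted by how much data each contributor trained on."""
    total = sum(contributions)
    return [
        sum((c / total) * ws[i] for c, ws in zip(contributions, weight_sets))
        for i in range(len(weight_sets[0]))
    ]

# Three contributors with matching 2-tensor models (random stand-ins).
rng = np.random.default_rng(0)
models = [[rng.normal(size=(4, 3)), rng.normal(size=3)] for _ in range(3)]
merged = merge_models(models, contributions=[1000, 5000, 400])
print(merged[0].shape, merged[1].shape)  # (4, 3) (3,)
```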
There already exist drones that kill based on AI.
This is essentially Ray Kurzweil's argument. Surprising to see both Musk and Altman buy into it.
If the underlying algorithms used to construct AGI turn out to be easily scalable, then the realization of a dominant superintelligent agent is simply a matter of who arrives first with sufficient resources. In Bostrom's Superintelligence, a multipolar scenario was discussed, but treated as unlikely due to the way first-arrival and scaling dynamics work.
In other words, augmenting everyone's capability or intelligence doesn't necessarily preclude the creation of a dominant superintelligent agent. On the contrary, if there's any bad or insufficiently careful actors attempting to construct a superintelligence, it's safe to assume they'll be taking advantage of the same AI augments everyone else has, thus rendering the dynamic not much different from today (i.e. a somewhat equal—if not more equal—playing field).
I would argue that in the context of AGI, an equal playing field is actually undesirable. For example, if we were discussing nuclear weapons, I don't think anyone would be arguing that open-source schematics is a great idea. Musk himself has previously stated that [AGI] is "potentially more dangerous than nukes"—and I tend to agree—it's just that we do not know the resource or material requirements yet. Fortunately with nuclear weapons, they at least require highly enriched materials, which render them mostly out of reach to anyone but nation states.
To be clear, I think the concept of opening up normal AI research is fantastic, it's just that it falls apart when viewed in context of AGI safety.
I guess the risk is embedding it into systems that manage missiles or something. But you don't need sophisticated algorithms for that to be a risk, just irresponsible programmers. And I reckon those systems already rely on a ton of software. So as long as we don't build software that tries to "predict where this drone should strike next", we're probably fine. Actually, shit, we're probably doing that... ("this mountainous cave has a 95% feature match with this other cave we bombed recently..."). Fuuuuck, that sounds bad. I don't know how OpenAI giving other people AI will help against something like that.
But on the chance that some day we do reach that level of advancement, even if it's 100 or 500 years, can't hurt to prepare, right? Better to waste time preparing unnecessarily than to face destruction from improper planning.
Try proposing that we prepare for an alien invasion, and you'll be laughed out of the room.
In essence, I mean the dangers of using AI for large scale propaganda through Internet services. The best tools of the most dangerous people and movements have always been manipulation and propaganda; what if a perfect AI does it? Could we even notice it?
Even when the AI is given a seemingly safe task, such as "optimize for clicks" in a news web site, something dangerous might happen in the long run if that's optimal for clicks.
I doubt it though. People got quickly desensitized towards advertising. Propaganda will follow.
Since the dawn of time that meant man or gods. Soon that list will need to include computers.
Excuse me, but I'd rather worry about those threats, than about a robot uprising.
I don't consider the two viewpoints expressed to be contradictory.
You can be malicious and destructive without traditional weapons... perhaps even more so with AI presiding over all our data. I mean, we already have AI that answers our emails for us... it won't be that long before it's slipping in things we don't notice.
Let's say we program an AI and get one little detail wrong and things go to hell as a result. We can call that "human error" or "AI error" but either way it's a reason for caution.
I am actually somewhat concerned by this OpenAI project, and here's why. Let's say there's going to be some kind of "first mover advantage", where the first nation to build an AI that's sufficiently smart has the possibility to neuter the attempts being made by other nations. If there's a first mover advantage, we don't want a close arms race, because then each team will be incentivized to cut corners in order to be the first mover. Let's say international tensions happen to be high around this time, and nations race to put anything into production in order to be first.
The issue with something like OpenAI is that increasing the common stock of public AI knowledge leaves arms race participants closer to one another, which means a more heated race.
And if there's no first mover advantage, that's basically the scenario where AI was never going to be an issue to begin with. So it makes sense to focus on preparing for the more dangerous possibility that there is a first mover advantage.
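Here's a crude Monte Carlo rendering of that intuition; every number in it is invented, it's only meant to show the direction of the effect:

```python
import random

def disaster_rate(n_teams, trials=100_000):
    """Toy AI race: each team draws a random capability and picks a
    random safety effort; skipping safety makes you faster. With more
    rivals, the winner is increasingly the team that skipped the most."""
    disasters = 0
    for _ in range(trials):
        teams = [(random.random(), random.random())  # (capability, safety)
                 for _ in range(n_teams)]
        _, safety = max(teams, key=lambda t: t[0] + (1 - t[1]))
        if random.random() > safety:  # unsafe AI "escapes" w.p. 1 - safety
            disasters += 1
    return disasters / trials

for n in (1, 2, 5, 10):
    print(n, round(disaster_rate(n), 2))  # climbs as the race gets crowded
```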
I'm not even sure governments are interested in developing AGI. They probably want good expert systems as advisers, and effective weapons for the military. None of those require true human-level intelligence. Human rulers will want to stay in control. Building something that can take this control from them is not in their interests. There's likely to be an arms race between world superpowers, but it will probably be limited to multiple narrow AI projects.
Of course, improving narrow AI can lead to AGI, but this won't be the goal, IMO. And it's not a certainty. You can probably build a computer that analyses current events, and predicts future ones really well, so the President can use its help to make decisions. It does not mean that this computer will become AGI. It does not mean it will become "self-aware". It does not need to have a personality to perform its intended function, so why would it develop one?
Finally, most people think that AGI, when it appears, will quickly become smarter than humans. This is not at all obvious, or even likely. We, humans, possess AGI, and we don't know how to make ourselves smarter. Even if we could change our brains instantaneously, we wouldn't know what to change! Such knowledge requires a lot of experiments, and those take time. So, sure, self-improvement is possible, but it won't be quick.
Bostrom and others make the argument that the difference in intelligence between a person with extremely low IQ and one with extremely high IQ could be very small relative to the possible differences in intelligence/capability among (hypothetical or actual) sentient entities.
There's also the case of easy expansion in hardware or knowledge/learning resources once a software-based intelligent entity exists. E.g. if we're thinking of purely a speed difference in thinking, optimization by a significant factor could be possible purely by software optimization, and further still if specialized computing hardware is developed for the time-critical parts of the AI's processes. Ten PhDs working on a problem is clearly more formidable than one PhD working on a problem, even if they are all of equal intelligence.
We don't know if humans are 1000 times smarter than rats. Maybe we are 10 times smarter, or 1,000,000 times. We don't know how much smarter Perelman or Obama is than a Joe Sixpack. We don't even know what "smarter" means. So talking about some hypothetical "sentient entities", and how much "smarter" they can be compared to anything, is a bit premature, IMO.
Maybe, but better safe than sorry. Here's a scenario that OpenAI could protect against: superintelligent AI is first built by the US military, patented and backdoored by the NSA to protect us against terrorism. Then someone evil hacks it and turns it against us. Superintelligent AI being open source would reduce such risks.
If there aren't military contractors or in-house teams using machine learning for exactly that purpose, I'll eat my hat. In fact, they were probably doing it 10 years ago (for units, not drones), with tech we're only seeing the beginnings of now.
What do we mean by "human level" anyway? Last time I got talking to an AI expert he said current research wouldn't lead to anything like a general intelligence, rather human level at certain things. Machines exceed human capacities already in many fields after all...
"Human level" as in actually as intelligent as a human. An artificial brain just like a biological one, or at least with the same abilities.
We are also mining general knowledge about the world from the web, images and books. This knowledge is represented as feature vectors containing the meaning of the input images and text, the so-called thought vectors. We can use these to perform translation, sentiment analysis, image captioning, answering general knowledge questions and many more things.
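(In the crudest form, a "thought vector" for a sentence is just pooled word embeddings; the tiny 3-d vectors below are made up for illustration, real systems learn hundreds of dimensions:)

```python
import numpy as np

# Hypothetical tiny embedding table standing in for word2vec/GloVe/etc.
emb = {
    "cats":   np.array([0.9, 0.1, 0.0]),
    "dogs":   np.array([0.8, 0.2, 0.1]),
    "chase":  np.array([0.4, 0.9, 0.1]),
    "stocks": np.array([0.0, 0.1, 0.9]),
    "rise":   np.array([0.1, 0.2, 0.8]),
}

def thought_vector(sentence):
    """Crudest possible encoder: average the word vectors.
    (Note this throws away word order entirely.)"""
    return np.mean([emb[w] for w in sentence.split() if w in emb], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

v1, v2 = thought_vector("dogs chase cats"), thought_vector("stocks rise")
print(cosine(v1, v2))  # low: the "thoughts" are about different things
print(cosine(thought_vector("cats chase dogs"), v1))  # 1.0: order is lost
```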
On top of these perception systems there needs to be an agent system that receives thought vectors as input and responds with actions. These actions could be: reasoning, dialogue, controlling robots and many other things. It's in this part that we are still lagging. A recent result has been to be able to play dozens of Atari games to a very high score without any human instruction. We need more of that - agents learning to behave in the world.
I'd like to see more advanced chatbots and robots. I don't know why robots today are still so clumsy. When we solve the problem of movement, we'll see robots that can do almost any kind of work, from taking care of babies and the elderly to cooking, cleaning, teaching, and driving (already there). We only need to solve walking and grasping objects; perception is already there. Unfortunately, there's much less research going on in that field. I don't yet see any robot capable of moving as well as a human, but I am certain we will see this new age of capable robots in our lifetime.
On the other hand, we can start building intelligence by observing how humans reason. We extract thought vectors from human-generated text and then map the sequences of thoughts, learning how they fit together to form reasoning. This has already been tried, but it is in the early phases. We are very close to computers that can think well enough to be worthy dialogue companions for us.
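As a toy illustration of "mapping sequences of thoughts", one could fit a model that predicts the next thought vector from the current one; a numpy sketch with stand-in random data (real systems would use recurrent networks):

    import numpy as np

    dim, steps = 8, 100
    thoughts = np.random.randn(steps, dim)      # stand-in thought vectors
    X, Y = thoughts[:-1], thoughts[1:]          # pairs of (current, next)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # linear map: next ~ current @ W
    predicted_next = thoughts[-1] @ W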
Funny how they just slipped that in at the end
That same caveat could apply to any fund raised by a venture fund - usually funds are committed, and the actual capital call comes later (when the funds are ready to be spent).
It's an important caveat in some circumstances (e.g. it hinges on the liquidity of the funders, which may be relevant in an economic downturn), but in this one, I'm not sure it really makes a difference for this announcement.
Instead of funding areas of research where grad students legitimately struggle to find faculty or even industry research positions in their field, YC Research decided to join the same arms race that companies like Toyota are joining.
>> YC Research decided to join the same arms race that companies like Toyota are joining.
Or perhaps YC Research is providing a sandbox next to a war zone.
But yes, I'm also concerned about the lack of safety-focused headliners at OpenAI, given the message that they think safety is important.
All of the hype around ML today is in deep learning (let's be honest, OpenAI would not exist if that weren't the case), and AFAIK there is almost no overlap between people who are prolific in deep learning and people who are prolific in FAI.
1. AI / ML is not AGI.
2. Deep learning may be a tool used by an AGI, but is not itself capable of becoming an AGI.
3. MIRI believes it would be irresponsible to build, or make a serious effort at building, an AGI before the problem of friendliness / value alignment is solved.
So are they philosophers? Of a sort, but at least Eliezer is one who can do heavy math and coding that most engineers can't. I wouldn't have an issue calling him a polymath.
There are lots of individuals who disagree to various extents on point 3. Pretty much all of them are harmless, which is why MIRI isn't harping about irresponsible people. But the harmless ones can still do good work on weak AI. You should look up people who were on the old shock level 4 mailing list. Have a look into Ben Goertzel's work (some on weak AI, some on AGI frameworks) and the work of others around OpenCog for an instance of someone disagreeing with 3 who nevertheless has context to do so. Also be sure to look up their thoughts if they have any on deep learning.
I'm not speaking about anyone's abilities, but from my perspective Eliezer's work is mostly abstract.
It's true that Bostrom and Yudkowsky, as individuals, aren't deep learning people. However, I know that MIRI (and, I think, FHI/CSER) do send people to top conferences like AAAI and NIPS.
>...imagine a hypothetical computer security expert named Bruce. You tell Bruce that he and his team have just 3 years to modify the latest version of Microsoft Windows so that it can’t be hacked in any way, even by the smartest hackers on Earth. If he fails, Earth will be destroyed because reasons.
>Bruce just stares at you and says, “Well, that’s impossible, so I guess we’re all fucked.”
>The problem, Bruce explains, is that Microsoft Windows was never designed to be anything remotely like “unhackable.” It was designed to be easily useable, and compatible with lots of software, and flexible, and affordable, and just barely secure enough to be marketable, and you can’t just slap on a special Unhackability Module at the last minute.
>To get a system that even has a chance at being robustly unhackable, Bruce explains, you’ve got to design an entirely different hardware + software system that was designed from the ground up to be unhackable. And that system must be designed in an entirely different way than Microsoft Windows is, and no team in the world could do everything that is required for that in a mere 3 years. So, we’re fucked.
>But! By a stroke of luck, Bruce learns that some teams outside Microsoft have been working on a theoretically unhackable hardware + software system for the past several decades (high reliability is hard) — people like Greg Morrisett (SAFE) and Gerwin Klein (seL4). Bruce says he might be able to take their work and add the features you need, while preserving the strong security guarantees of the original highly secure system. Bruce sets Microsoft Windows aside and gets to work on trying to make this other system satisfy the mysterious reasons while remaining unhackable. He and his team succeed just in time to save the day.
>This is an oversimplified and comically romantic way to illustrate what MIRI is trying to do in the area of long-term AI safety...
The authors of the manifesto seem to be concerned with avoiding some of the obviously bad possible outcomes of widespread AI use by explicitly looking for ways that it can also change society for the better. Just being able to articulate what we mean by "benefiting humanity as a whole" would already be a good contribution.
Side note: I wonder if the Strong AI argument can benefit from something akin to Pascal's Wager, in that the upside of being right is ~infinite with only a finite downside in the opposing case.
Let's say that a general AI is developed and brought online (the singularity occurs). Let's also say that it has access to the internet, so it can communicate and learn, and let's also say that it has an unlimited amount of storage space (every hard drive in every device connected to the internet).
At first the AI will know nothing; it will be like a toddler. Then, as it continues to learn and remember, it will become like a teenager, then like an adult in terms of how much it knows. Then it will become like an expert.
But it doesn't stop there! A general AI wouldn't be limited by 1) storage capacity (unlike humans and their tiny brains that can't remember where they put their keys) or 2) death (forgetting everything it knows).
So a general AI, given enough time, would effectively become omniscient, because it would continually learn new things forever.
Why should one hypothetical be assumed true and not the other?
The high-stakes wager isn't success vs failure in creating strong AI, it's what happens if you do succeed.
You would have to posit a sort of hell simulation into which all human consciousnesses are downloaded to be maintained in torment until the heat-death of the universe for it to be an equivalent downside.
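The asymmetry is easy to see in toy expected-value arithmetic (made-up numbers, purely for illustration):

    p = 1e-6                    # assumed probability the wager pays off
    inf = float("inf")
    print(p * inf + (1 - p) * -1.0)   # inf: any finite downside is swamped
    print(p * inf + (1 - p) * -inf)   # nan: a symmetric infinite downside
                                      # leaves the wager undefined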
I don't think the big breakthroughs in artificial general intelligence are going to come from well-funded scientific researchers anyway; they are going to come out of left field, from where you least expect them.
Simply stated: an AI that writes AI. (Forget the halting problem for a moment.) How many iterations can it create in 3 hours?
Imagine a massively parallel optical computer with the same component density as the human brain, the size of an Olympic swimming pool, running at the speed of light, and networked with thousands of other similar computers around the world.
Foomp, superintelligence, you won't even be able to pinpoint the source.
So certainly you would get the baby situation first, but going from manageable baby to astral foetus could happen rapidly and unexpectedly as the rate of progress accelerates to unfathomable speeds. That is what has already happened in going from tribal man to modern civilization, and if you extrapolate that very consistent and reliable trend line, it leads to progress happening in a foomp step, perceived as a foomp in real time. Really, all life is just one big accelerating foomp.
We don't need to rely only on humans to design every aspect of neural networks. We are already searching computationally for AI designs that work better. In a recent paper, hundreds of design variants of the LSTM cell were tried to see which performs best and which of its components are important.
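In the same spirit, a minimal sketch of such a search (random scoring stands in for actually training each variant; `train_and_score` is a hypothetical placeholder):

    import itertools, random

    search_space = {
        "forget_gate":   [True, False],
        "peepholes":     [True, False],
        "output_gate":   [True, False],
        "learning_rate": [1e-2, 1e-3, 1e-4],
    }

    def train_and_score(config):
        # Placeholder: in reality, build the variant, train it,
        # and return its validation score.
        return random.random()

    configs = [dict(zip(search_space, values))
               for values in itertools.product(*search_space.values())]
    print(max(configs, key=train_and_score))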
Also, we can play with networks like clay: starting from already-trained networks, we can add new layers, make layers wider, transfer knowledge from a complex network into a simpler one, and initialize new networks from old ones so that they don't start from scratch every time. We can download models trained by other groups and just plug them in (see word2vec, for example). This makes iterative experimenting and building on previous successes much faster.
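For instance, widening a trained layer without starting over might look roughly like this numpy sketch (in the spirit of "net2net"-style growth; the full recipe also rescales outgoing weights, omitted here):

    import numpy as np

    old_W = np.random.randn(4, 3)   # pretend these are trained: 4 inputs -> 3 units
    grow_to = 5
    # New units copy randomly chosen existing ones, so the widened layer
    # starts from learned features instead of from scratch.
    copies = np.random.randint(0, old_W.shape[1], size=grow_to - old_W.shape[1])
    new_W = np.concatenate([old_W, old_W[:, copies]], axis=1)
    print(new_W.shape)  # (4, 5)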
I don't think evolving a superintelligence will happen by simple accident; it will be an incremental process of search. The next big things, I predict, will be capable robots and decent dialogue agents.
More fundamentally, we are trying to achieve what we can't even define. Define AI, and implementing it should be quite easy.
"Human level AI" seems like trying to define problems through observed characteristics.
I think it was Douglas Hofstadter who said something to the effect that we don't even exactly understand what 'intelligence' means, let alone have a clear definition reducible to a mathematical equation or an implementable program.
Chess programs are not really 'thinking' in the pure sense; they are trying to replace 'thinking' with an algorithm that resembles the outcome of 'thinking'.
There are different ideas for what constitutes AI. Expert systems and knowledge-based reasoners? Pattern-recognizer black boxes? Chatbots? AGI?
Over the years, the concept of AI has shifted. Until recent years, "AI" mostly referred to things like A* search, algorithms that play turn-based board games (see the Russell-Norvig book), symbolic manipulation, ontologies, etc.; a few years ago it began to refer to machine learning like neural networks again.
Neural networks are good at what they are designed for. Whether they will lead to human-like artificial intelligence is a speculative question, but symbolic manipulation alone certainly won't be able to handle the messiness of sensory data. I think neural nets are much better suited for open-ended development than the hand-engineered pipelines that were state of the art until recently (like extracting corner points, describing them with something like SIFT, clustering the descriptors, and using an SVM over bag-of-words histograms). Hand engineering seems too restrictive.
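For contrast, the classic pipeline looks roughly like this sketch (descriptor extraction is a hypothetical stand-in for SIFT; clustering and classification use scikit-learn):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def extract_descriptors(image):
        # Placeholder for corner detection + SIFT-like description.
        return np.random.randn(50, 128)

    def bow_histogram(image, kmeans, k):
        # Assign each descriptor to its nearest "visual word" and count.
        words = kmeans.predict(extract_descriptors(image))
        return np.bincount(words, minlength=k) / 50.0

    k = 32
    images, labels = [None] * 20, np.arange(20) % 2   # dummy data
    all_desc = np.vstack([extract_descriptors(im) for im in images])
    kmeans = KMeans(n_clusters=k, n_init=10).fit(all_desc)
    features = np.array([bow_histogram(im, kmeans, k) for im in images])
    clf = SVC().fit(features, labels)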
You're not alone in thinking so.
I get the impression that the terminology bifurcated into "AI" and "cognitive science" around the time Marr published Vision in the '80s.
Quibbles and q-bits aside, I was glad to see the announcement from the perspective of an if-not-free-then-at-least-probably-open-source-ish software appreciator.
Additionally, the second paragraph:
> We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.
This implies they think AI will be used for hostile means, such as wiping out the human race, maybe. It is just uninformed people making uninformed decisions and then informing other uninformed people of said decisions as if they were informed.
And yes, AI will definitely be used for all sorts of purposes including hostile means. Just like anything else, really. Financial manipulation, spying, intelligent military devices, cracking infrastructure security, etc.
These are realistic concerns; we shouldn't fall for the Skynet red herring. We can have problems with ethical AI use even if it's not a self-aware, superhuman superintelligence.
The downside is that much of the research is probably kept secret for business advantage. The public releases are more of a PR and hiring strategy than anything else, in my opinion. By sending papers to conferences, Google's employees get to know the researchers and can attract them to Google.
Others say there's nothing to worry about: Google and Facebook are just today's equivalent of Bell Labs, which made numerous contributions to computer technology without causing much harm.
EDIT: I have to agree with _delirium's skepticism towards them doing much in that regard though.
Also, I see a distinct lack of "fear-mongering" in this post.
All three of those links are about neural networks.
This is not over-glorifying. That is fact.
Now, the human brain is definitely a complicated thing to study and understand (by whom? by itself!), but framing it as if the brain were a computer that received a task it then solved is the wrong way of thinking about this.
Much more useful than static data on a hard drive.
A hard drive would find it much faster.
I just don't understand the folks that are so confident that strong AI is either not possible, or not achievable within our lifetimes.
If you're in the camp that thinks it's not possible, then you must ascribe some sort of magical or spiritual significance to the human brain.
If you don't think it's possible inside of 100 years, then you're probably just extrapolating on history. The thing about breakthroughs is they never look like they're coming. Until they do.
If consciousness is no more than a mere product of the brain's functioning, Strong AI does not have to be beyond the horizon.
Should there be an update/amendment/qualification to the laws of robotics regarding using AI for something like ubiquitous mass surveillance?
Clearly the amount of human activity conducted online/electronically will only ever increase. At what point are we going to address how AI may and may not be used in this regard?
What about when, say, OpenAI accomplishes some great feat of AI -- and this feat falls into the wrong hands, "robotistan" or some such future 'evil' empire that uses AI, just as in 1984, to track and control all citizenry? Shouldn't we add a law of robotics requiring the AI to AT LEAST be self-aware enough to know that it is a tool of oppression?
Shouldn't the term "injure" be very very well defined such that an AI can hold true to law #1?
Who is the thought leader in this regard? Anyone?
EDIT: Well, gee -- looks like the above is one of the stated goals of OpenAI:
Is Eliezer going to close up shop, collaborate with OpenAI, or compete?
We're on good terms with the people at OpenAI, and we're very excited to see new AI teams cropping up with an explicit interest in making AI's long-term impact a positive one. Nate Soares is in contact with Greg Brockman and Sam Altman, and our teams are planning to spend time talking over the coming months.
It's too early to say what sort of relationship we'll develop, but I expect some collaborations. We're hopeful that the addition of OpenAI to this space will result in promising new AI alignment research in addition to AI capabilities research.
That said, although a lot of money and publicity has been thrown around regarding AI safety in the last year, so far I haven't seen any research outside MIRI that's tangible and substantial. Hopefully big-money AI won't languish as a PR exercise, and of course they shouldn't reinvent MIRI's wheels either.