What you should fear is military drones being given the ability to make decisions on targets or to fire, even with human assistance. And these systems won't just reside in the hands of large governments, either.
Police and militaries around the world are already using abstracted forms of force, wherein targets are identified with algorithms and force is then trained on those targets.
What do you think is going to happen first: SkyNet, or a Predator imaging drone falsely telling a human operator that the current image shows a terrorist?
What's going to happen first: SkyNet, or self-driving cars putting millions of people out of jobs through a collapse in demand for drivers in transportation and for car manufacturing? (I'm not saying it's a bad thing, but it will be very, very disruptive.)
If SkyNet is a threat, it's 50 or 100 years off, I think. "AI" as it is now is nowhere near the capability people are talking about. It's sheer hyperbole.
> a predator imaging drone telling a human
> operator falsely that the current image
> is a terrorist?
Are we going to have hearings auditing what the algorithm is like? Is there going to be a scandal because isBrownPerson() is discovered in there somewhere? Is someone going to run it against a video collection of regular daily life and discover the drones would be indiscriminately bombing innocent situations if given the chance?
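That audit would at least be mechanically simple to run. A minimal sketch of what it might look like, with every name invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class Clip:
        frames: bytes     # raw footage fed to the targeting model
        is_hostile: bool  # ground-truth label from human review
        group: str        # demographic annotation, for the disparity check

    def false_positive_rates(model, clips):
        # Share of innocent clips the model would have flagged, per group.
        rates = {}
        for group in sorted({c.group for c in clips}):
            innocents = [c for c in clips if c.group == group and not c.is_hostile]
            flagged = sum(1 for c in innocents if model(c.frames))
            rates[group] = flagged / len(innocents) if innocents else 0.0
        return rates

A large gap between groups is exactly the isBrownPerson() scandal, no source-code inspection required.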
The end result is that we're effectively going to have to come up with rules of engagement for every scenario. I think this will lead to war being fought more justly and with less collateral damage than ever before.
Thinking we can wage wars without risk to ourselves or our "soldiers" is preposterous.
"Just" wars are a ridiculous idea rooted in superstition and hubris.
> drones justify terrorism because it's the only available
> response to fighters from the "other side".
More generally, people resort to terrorism in the face of an overwhelming enemy. The West could relatively easily win the wars it's fighting now in the Middle East with its WWI-era armies if it were willing to care less about civilian casualties; that's how lopsided the odds are.
> Thinking we can wage wars without risk to ourselves or our
> "soldiers" is preposterous.
2. People will not come up with "responsible" rules of engagement as you seem to suggest. isBrownPerson() will be coded into the algorithm, just not in a discernible way. Sure, you could verify this against a video collection, the same way we all already know the statistics on profiling and incarceration rates for different races.
If that seems extreme, consider the self-driving car problem. 30,000 people in the US alone are killed per year in car accidents involving human drivers. What is the acceptable number of motor vehicle related deaths per year when all cars are self-driving?
Obviously because those humans then share and bear the moral dilemma of doing it or not, which also affects the outcome, as opposed to being some inconsequential detail. Humans can opt out of an unjust war (as they've done time and again). Killing machines cannot.
It's the difference between blindly following orders (which not even all Nazis did) and being able to have a change of heart.
It's also the sheer killing efficiency of the algorithms, orders of magnitude greater: a machine could kill 10,000 people in the time it takes a single human to make a decision about one of them. At least until we find there's a != where we needed an == in the algorithm.
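To make that one-character failure concrete, a toy sketch (names invented):

    def should_engage(observed_id, hostile_id):
        # Intended rule: fire only when the observed target matches the
        # confirmed hostile.
        return observed_id == hostile_id

    def should_engage_buggy(observed_id, hostile_id):
        # The slip: != where == was needed inverts the rule, so the system
        # now fires at everyone EXCEPT the confirmed hostile.
        return observed_id != hostile_id

A human reviewer makes that mistake one decision at a time; an autonomous system applies it to all 10,000 targets before anyone notices.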
> 30,000 people in the US alone are killed per year in car
> accidents involving human drivers. What is the acceptable number
> of motor vehicle related deaths per year when all cars are self-driving?
What we won't have are Vasili Arkhipovs, Oskar Schindlers, and Chris Taylors...
This is an odd statement. In my reading of historical battles, I've never seen a mention of the idea that soldiers before 1950 did not shoot to kill when ordered to.
However, the original statistic apparently comes from a not-so-credible source: a study by S.L.A. Marshall in WWII that isn't well regarded. This thread goes into some depth:
Not sure what the truth of it is.
An old thread on the topic:
Including many variations of that same problem. "Sorry, you don't qualify for ____ because our 'clever' assessment algorithm doesn't account for your particular situation." Humans with decision-making power correct these kinds of problems all the time.
Computers with clever algorithms can be incredibly useful, but they are still just tools. Unfortunately, given how bad we humans are with statistics, it's easy to overestimate their accuracy.
In addition to the raw accuracy problems, there is the risk of prejudices being baked into algorithms. We already see this with "redlining" and other housing practices, where loan availability ends up encoding racism. With the added complexity of machine learning and other prediction and analysis methods, it is probably a lot easier to hide improper discrimination.
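The mechanism fits in a few lines. A sketch with made-up numbers: the rule never mentions race, yet imports it through a correlated input:

    # Per-zip default statistics shaped by decades of redlining (made up).
    HISTORIC_DEFAULT_RATE = {"60601": 0.04, "60621": 0.12}

    def approve_loan(zip_code, income):
        # Facially neutral, but the zip-code term smuggles the historical
        # discrimination into the decision; a learned model buries the same
        # proxy inside millions of weights instead of one dict.
        rate = HISTORIC_DEFAULT_RATE.get(zip_code, 0.08)
        return rate < 0.10 and income > 30_000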
Reason #57 why statistics should be a mandatory HS math class, right after algebra.
While I agree that humans may enshrine their prejudices in code that will later be hard to adjust, again what I fear more is the reverse scenario. A machine can be perfectly fair and people won't like it, because it fails to apply the biases they want. People often apply counter-discrimination to compensate for what they believe is unfair treatment.
A mandatory class in statistics and applied probability theory could maybe help the next generation to accept that what is fair may not look so at a first glance.
To be honest, I don't think this will help much, since most people won't be interested in it and will forget what they learn as soon as the class is over. Also, being an expert in statistics can help you better understand reality, or it can help you present data in a way that confirms your biases (even unintentionally). It's sort of like sophistry, except not quite to that degree of moral ambivalence.
edit: If you want a good example of this, see Scott Alexander show how parapsychology can actually come up with some pretty good statistics while appearing to check off all the right boxes with regard to experimental design: http://slatestarcodex.com/2014/04/28/the-control-group-is-ou...
A small flying drone operating a 9mm pistol has extremely poor accuracy, especially with multiple shots. If you increase the accuracy, you have to increase the mass, which makes it a bigger target. Currently, a regular guy with a little training and a rifle can probably shoot down a gun-wielding drone most of the time before the opposite can happen.
Then there are drones with bombs attached to them. One of the most sophisticated is the Hellfire missile, but it's needlessly big and expensive. The quadcopter in the next video costs $100 and has the payload capacity to carry a regular hand grenade.
You could go the RC car route, but then you again need a bit of size to conquer stairs. It's also easier to track where it comes from and goes.
I'm less worried about the government doing bad shit and more worried about private citizens getting nasty. Autonomous flight, GPS navigation, an address from Google Maps, and a bit of facial recognition: the first civilian drone murder is just a matter of time. People kill with more ease the further away they can be when it's done.
I'm guessing this internet privacy thing will seem like child's play when all of a sudden everybody wants to hide their physical address. Once that is handled, it's only a bit like having rabid dogs with homing devices.
The MSF Trauma Hospital in Kunduz, Afghanistan (October 3rd) didn't have this luxury, and in the fog of war, 12 MSF medical staff and 10 patients were killed.
Killing by remote sensing is indiscriminate. Adding AI is another order of stupidity.
True, read the Ars article: "The BMC crew is responsible for steering the aircraft to targets, identifying them, and shooting them; the aircraft's battery is slaved to the sensor suite for targeting." - Aircrew, well trained, professional. As ethical as you can get.
Now what happened is "A US special operations team on the ground, given coordinates of the Afghani NDS building by the Afghan forces they were working with, passed them to the AC-130. But when the AC-130 crew punched the grid coordinates into their targeting system, it aimed at an open field 300 meters away from the actual target. Working from a rough description of the building provided from the ground, the sensor operators found a building close to the field that they believed was the target. Tragically, it was actually the hospital."
Will adding AI to this situation make any improvement when the real issue is the 'lack of a "common operating picture"'?
This sort of thing: https://youtu.be/DTqa-NEwUbs?t=94
It could even "phone home" over radio, allow the operators to point out the targets at that point, and then fire.
But governments could greatly increase their capability for surveillance.
> Adding AI is another order
> of stupidity.
Boeing is developing compact portable laser weapons technology that could conceivably be mounted on drones.
If you look past the hype, lasers are far from ready.
Another scenario that is less likely, but still more likely than killer robots, would be an AI that simply loses interest in human affairs, stops interacting with us, and does its own thing, as depicted in the movie "Her"!
Politics and morals only exist because there is a large quantity of humans.
Snowden-like episodes have shown us that we should not trust our government. The government does blatantly illegal things, lies when confronted, and in many cases kills or jails our own citizens just because they are exposing the government. I am more scared of the government using this AI to oppress its own people.
Imagine this: a person who is 18 years old today becomes a potential presidential candidate or major activist in the future. Some clerk in DC would simply run a computer program that brings up the fact that this guy sent a sext to his 17-year-old girlfriend in the past, smoked weed at 19, bought beer at 20, etc. Such a government would then implicate him in various court cases and jail him, or worse, simply blackmail him into shutting up.
Well, we work on global warming even though the worst effects are likely decades off. I think one computer science professor put it something like this: "If experts told us aliens were likely arriving on Earth in 50 or 100 years, it'd make sense to start preparing now." You don't want to wait until it's too late to do whatever needs to be done.
Though that stuff isn't nice, it's probably an improvement on previous technology: looking out of the bomber, thinking "those look like some good buildings to bomb", and similar.
Your food shopping point still needs middlemen. You have "chain of trust" issues with the producers. How do you know that you'll get good oranges from the orange grove? Who capitalizes the distribution system and designs the business model?
Considering historical examples, we'll see more use for people in the economic problems that are still machine-hard. Machine-easy economic problems will converge to resource costs plus any premium for monopoly. This will free up more demand for other products. People still want to buy an easier, better life.
First off, we'll never let such an AI be completely objective, because, well, the paperclip scenario. And second, the AI will be about as "objective" as Google search is; in other words, not really. At the end of the day it's still humans who decide the algorithms for Google search, and it will be humans who decide the algorithms for the "smarter than human" AI.
It's not a close risk comparison. The world is far less dangerous today than it has ever been, that includes both terrorism and war. AI and autonomous weapons systems are not going to suddenly make the developed countries that deploy those weapons want to slaughter each other.
How many people die from medical mistakes per year in the US? More than died in all wars and all acts of terrorism combined globally in 2015. The AI that is no doubt going to show up throughout healthcare over the coming 30 or 40 years, will probably improve on the rate of human mistakes - while it still kills tens of thousands per year through mistakes in just the US.
While the military can get away with remote action, it won't happen with police in democratic countries, except possibly with non-lethal force. I could see remote shutdown of cars, maybe even a flash-bang or sonic system.
The real danger from AI isn't it taking over; it's too many people checking out and not participating in life anymore.
Once you get beyond 8 researchers, you'll have problems with politics and egos if people aren't focused on a problem. Everyone will have their pet approach for specific problems, and they won't compose into something generally useful. AI is really like 10 or 20 different subfields (image understanding, language understanding, motion planning, etc.)
I think self-driving cars are a perfect example of a great problem for AI (and something that many organizations are interested in: Google, Tesla, Apple). Solving that problem will surely advance the state of the art in AI (and already has).
tl;dr "OpenAI" is too nebulous.
Get into bed with the government and they will piss in it. The most likely outcome is costly complicated regulations that hobble legitimate development and accomplish nothing in terms of making us safer from anything. The end result will be like ITAR and crypto export controls: pushing development off shore and making the USA less competitive.
I say this not as a hard-line anti-government right-winger or dogmatic libertarian, but as someone who has a realistic view of government competence in highly technical domains. Look at other areas and you don't see much better. Corn ethanol, for example, is literally the pessimum choice for biofuels-- it is technically the worst possible candidate that could have been chosen to push. The sorts of folks who ascend to political power simply lack any expertise in these areas, and so they fall back on listening to the agenda-driven lobbyists that swarm around them like flies. The results are awful. Government should do government but should stay the hell away from specific micromanagement of anything technical.
If regulations do turn out to be the right path, I'd suggest that people within the AI field form their own informal regulatory body first, fortifying it against institutional failure modes like corruption by lobbyists etc. Then get the government to grant them legal authority. Hopefully that would go a ways towards addressing the issues you describe.
From the article:
> Sam, since OpenAI will initially be in the YC office, will your startups have access to the OpenAI work?
> Altman: If OpenAI develops really great technology and anyone can use it for free, that will benefit any technology company. But no more so than that.
You could for instance make the same kind of point about YC companies' access to investor networks, advice of partners, all those sorts of things which aren't explicitly reserved just for them but of course are more readily accessible by virtue of being in the program. It's not something that's inherently bad, it's just how it works.
I'm not saying that having very close contact with this research group won't be advantageous to YC companies, of course it will, but with that as a given, the ethos of this group's findings being made open and freely available for anyone to use is well-intentioned and to be applauded, when it's a privately funded initiative that could just as easily be justified in being somewhat or completely closed and proprietary. Is it really important if some YC companies happen to have a slight advantage from this, as an inevitable side effect, in the big picture?
Keep the team distributed across the world and make all communication surrounding the projects open as well. If it's for the world it should be by the world.
>You could for instance make the same kind of point about YC companies' access to investor networks, advice of partners, all those sorts of things which aren't explicitly reserved just for them but of course are more readily accessible by virtue of being in the program. It's not something that's inherently bad, it's just how it works.
I wouldn't argue that; that's just how business works. I would argue that the founders are playing OpenAI up as humanitarian aid when really it disproportionately benefits them (autonomous cars, paid for by research grants? Investment in early adopters of the technology? Uh, yeah).
>I'm not saying that having very close contact with this research group won't be advantageous to YC companies, of course it will, but with that as a given, the ethos of this group's findings being made open and freely available for anyone to use is well-intentioned and to be applauded, when it's a privately funded initiative that could just as easily be justified in being somewhat or completely closed and proprietary. Is it really important if some YC companies happen to have a slight advantage from this, as an inevitable side effect, in the big picture?
YC's business is growing businesses, and they'll take any advantage they can get. If it benefits them more at all, then it's not charity or non-profit, and they shouldn't be billing it as such.
It doesn't sound like this project has any scope to address this practical concern, which to me, is largely economic. I don't see how universal access to AI puts food on the table.
There are also a few positive ones, and I hope we can move towards them. One way would be to shift from taxation of human labour to taxation of the means of production. Another is if access to quality-of-life products becomes so cheap that they require very little labour to earn.
If you extrapolate the progress of solar power, 3D printing, and synthetic meat, you can imagine a machine that is cheap to produce, but which would make each human completely self-sustainable. Not needing to work to put food on the table every single goddamn day would transform our society quite a lot in a positive way.
Extrapolate further and imagine a machine that runs on solar power and creates whatever food you want from water, carbon dioxide, and human poop. Essentially, short-circuit the whole raise-crops-feed-cattle-slaughter-get-meat cycle. Make the food out of the machine perfectly nutritious as well, because why not.
There would still be things to strive for, to work for, if you want. But baseline survival is just taken care of. Sounds like a good future to me.
When it comes to food, baseline survival is already taken care of in western countries and we are wasting about one third of the food we produce. It's not food that's the problem, but living space and forever rising health care costs.
But you know what the irony is? We don't know a thing about what constitutes a nutritious, healthy diet, as the reductionist science we've been applying is not up to this task. Even more aggravating is that trying to shorten the "raise-crops-feed-cattle-slaughter-get-meat" cycle and do it on an industrial scale (by replacing the sun's energy with fossil fuels in concentrated operations) is precisely the root cause of many of the problems we find ourselves in.
Meddling with the things we ingest has given us modern-day diseases such as cancer, diabetes, obesity, and heart disease, not to mention that we're on the brink of going back to the dark ages due to the upcoming "antibiotics apocalypse".
And yet here you are, hoping that some future 3D printer will synthesize meat out of thin air, instead of fixing the real problem in our society, which is that we consume and waste too much through processes that aren't sustainable. But yeah, 3D printing will save us; it seemed to work for Star Trek characters, at least. Good luck with that, mate.
Benevolent AI dictator that runs farm machinery and food distribution networks?
I only half joke. At some point, we're going to need to ditch the puritanical bullshit that work is required, and realize that GDP as a metric is hogwash. Quality of life, happiness, those are what need to be measured and delivered on.
That's why I agree with your second point of rethinking "work". Instituting a Universal Basic Income lets us keep capitalism while putting more power in the hands of consumers and not relying on the kindness of AI/strangers.
Otherwise, we'll continue to see an increase in wealth disparity until the market no longer functions at all.
However, looking at the current trends, it's clear that the owners of the hardware are going to be the gatekeepers and middlemen. If a benevolent AI that's working for you requires you to run it on AWS/Google/Azure, power will be concentrated in them, and they will always be able to run a more powerful AI, since they can utilize their entire hardware capacity.
The threat of superintelligent computing is a serious risk to humankind, and this threat magnifies if there are few AIs and those few are only accessible by the rich and/or powerful.
I imagine that Musk would rather live in a world in which superintelligent AI never comes into existence, but since he has no power to stop that future, this seems like the next best alternative.
Probably not, because at large scales there is a positive correlation between automation and employment. That is, the nations with the most automation are also the nations that have developed the best employment. The U.S., for example, has much better employment than it did 100 years ago, and better than China today. China would love to have the economy of the U.S., automation and all.
But if AI were only as dangerous to society as, for example, cars, then we wouldn't need such an initiative. So to me the whole thing seems like a marketing stunt by sleep-deprived billionaires who read the wrong books.
Do current AAPL/GOOG/FB engineers dislike this so much? There's secrecy within most for-profit entities, what makes AI so different?
It's not about AI vs other fields, it's about research vs. engineering. Admittedly, that line is more blurry than most, especially in highly technologically competitive fields like AI and graphics (and less blurry in fields where the research-to-implementation gap is larger, like PL or algorithmic complexity).
Open technology will empower the expression of many human wills, individual and collective. Human wills are today constrained and empowered by many human-imagined systems of thought, and we can invent new ones. Will there be an AI which explores the possibility space of constraints on AI-using humans?
1) Growth rates in nature are never truly exponential; they are sigmoidal. Sigmoids look very exponential when you're in the middle of them (numeric check after this list), but we are starting to see Moore's law level off (yeah yeah, it's technology, not just ICs; still sigmoidal).
2) Even if we had a computer that was 1,000 times faster than the ones we have today and used 1/1,000th of the power, we STILL don't have the algorithms to produce a human intelligence, and that is one hell of an algorithm.
3) The focus for a long time has been Moore's law and the associated increase in FLOPS. I think what is more important and more limiting is bandwidth, which is a couple of orders of magnitude lower than FLOPS if you count one byte per FLOP, and an extra order of magnitude lower if you count one word per FLOP.
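Point 1 is easy to check numerically (the scales below are made up): well before its midpoint, a logistic curve tracks a pure exponential almost exactly.

    import math

    def logistic(t, cap=100.0, midpoint=10.0):
        return cap / (1.0 + math.exp(-(t - midpoint)))

    def exponential(t):
        # Exponential matched to the logistic's early slope.
        return 100.0 * math.exp(-10.0) * math.exp(t)

    for t in range(2, 9):
        print(t, round(logistic(t), 3), round(exponential(t), 3))

The columns only diverge as t nears the midpoint; in other words, you can't tell you're on a sigmoid until it's already bending over.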
> Even if we had a computer that was 1,000 times faster than the ones we have today and used 1/1,000 of the computing power we STILL don't have the algorithms to produce a human intelligence and that is one hell of an algorithm.
- One has to put such an algorithm in the realm of "unknown unknowns". Anyone who says they know for sure that a human-capability-equivalent algorithm is complex would have to know at least a lot about such an algorithm. No one can yet make that claim. So the non-existence of such an algorithm isn't a certainty, just a trend.
And the "unknown unknown" things might a long time out, might be just around the corner or might be utterly impossible.
2. I didn't say that we are close to human-level intelligence, just that if generalised Moore's law holds then we will not get much warning: we would move from the 1% level to human level in about 10 years (see the arithmetic check after point 3). How long it will take to get to the 1% level is currently unknown.
3. I was talking about generalised Moore's law (a doubling of processing power every 18 months), not the mechanism of how we do this. Just increasing transistor counts does not have much of a future (it appears), but there are many other ways of increasing computational power that I am sure will be used over the coming decades.
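For point 2, the "10 years" figure is just doubling arithmetic, assuming the generalised 18-month doubling holds:

    import math

    doublings = math.log2(100)   # from 1% of human level to 100%
    years = doublings * 1.5      # one doubling every 18 months
    print(round(doublings, 2), round(years, 1))  # ~6.64 doublings, ~10 years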
It's quite likely that human intelligence is actually highly optimized to solve specific classes of problems, such that increasing performance in one respect decreases performance in another.
If you want to see this in play, look at human variability. There are plenty of examples of humans with extreme intelligence, but those people don't dominate other humans in every regard. They are brilliant on certain subjects while other subjects don't even register. The most brilliant physicist in the world might say the wrong thing to the wrong person and get shot. And software has no intrinsic way to integrate the two systems that might be able to perform well at those two tasks respectively. The very computational structures which lead to physics breakthroughs might be involved in the social mistake.
The integration of conflicting models is solved by human beings through society. We empathize, coordinate, fight, and kill until some consensus emerges. People seem to assume that AIs will just be able to automatically integrate their knowledge with one another, but that doesn't make sense. If that's true you haven't gotten to the "conflicting models" scenario yet.
The whole notion of "general" intelligence is highly suspect. Intelligence is really a certain kind of adaptability to your environment. But there's no such thing as general adaptability. Features that let you adapt to the sea will be liabilities if you find yourself on land. Some combinations of adaptations may be compatible, but many are antagonistic.
The realistic scenario is that AIs will just join our society, occupying their own place on the spectrum of specialization.
So, yeah, we're not too far off from the end of the road for silicon lithography. We've had quite a good run, and we've advanced so, so far from the humble beginnings 60 years ago.
With each generation of chips, with ever shrinking process size, the designs get harder and harder. At some point soon, we're going to decide that we just can't improve this technology any further. That silicon-lithography based computers just aren't going to get any better.
So what happens then? What happens to Intel, and all the other semiconductor vendors when this year's newly released chips aren't any better than last year's chips?
Is the market going to accept that? Will it be OK for Apple to say to everyone that the iPhone 20 (or whatever) is as good as it is going to get, and no new whizzy features are going to be implemented?
I think the investors will file lawsuits, and all the heads of the technology companies will be replaced with people who are going to try harder, and use some other, better, technology instead to keep the profits rolling in.
And what is that going to be? Molecular nanotechnology. Precise placement of individual atoms to create materials and structures with superior properties to what we have now.
And that will enable a whole bunch of things, including AGI.
We have a rough idea of how much processing the human brain is capable of by looking at the eye and optic nerve. Our most powerful computers are many orders of magnitude below human level processing.
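The usual back-of-envelope behind that claim, in the style of Moravec's retina argument; every number here is a rough order-of-magnitude assumption, not a measurement:

    retina_equiv_ops = 1e9        # assumed compute to match retinal processing
    brain_to_retina_ratio = 1e5   # assumed size of brain relative to retina
    brain_equiv_ops = retina_equiv_ops * brain_to_retina_ratio
    print(f"{brain_equiv_ops:.0e}")  # ~1e14 ops/sec

Estimates vary wildly with the assumed ratio; per-synapse accounting pushes the figure several orders of magnitude higher.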
How can this be true? The human brain consumes only so much energy, and our chips are already running close to single-electron-level switching and consume comparable amounts of energy (not even talking about computer clusters/supercomputers). Maybe the layouts/programming are not good enough, but the bare computing power is there.
Modern deep neural networks demonstrate that they make comparable decisions with relatively little power (modern speech and image recognition running on laptops, for example).
So I say for modern computers it's all about correct programming.
Isn't this like gun control all over again?! You give more guns to people so that they can be safe, and instead you end up with everyone killing each other.
Also, this is amazing; making a serious effort towards AGI is what we need. We'll play with RNN configurations for a long time, but I think it's a good call to fund people who think about the broad picture.
Of course, as a pure hypothetical, it's virtually impossible to come up with a definite danger-model for AGIs.
With stuff like CRISPR, perhaps Elon should invest to stop the zombie apocalypse. :)
If they truly believe AI is dangerous, how is promoting/accelerating it supposed to help?
Or is it a way to commoditize R&D in machine learning so that it will never be a bottleneck for startups?
Maybe if I was a billionaire I'd understand.
Nuclear weapons come to mind. Would we prefer that the knowledge of how to make them be more widespread?
If we believe that DNA is a kind of information, and that our genes are "looking for" better vessels to survive through, then it's only natural to also see technology as a much better carrier of that information than us.
The problem many have with coming to grips with the idea that AI could be a threat is that they look at where technology is right now and then try to imagine a computer being anywhere near our capabilities.
But this is because many think of it as a thing. As in: "Now we have finally built a strong AI thingiemagick." However, just as human consciousness and intelligence aren't a single thing, neither will AI be. It's going to be a lot of things. Some are better developed than others, but most are moving at impressive speed, and at some point enough of them are going to be put together to create some sort of pattern-recognizing feedback loop with enough memory and enough smart sub-algorithms to become what we would consider sentient. </tinfoil hat>
In contrast, what organizations like the Machine Intelligence Research Institute and the Future of Humanity Institute (MIRI and FHI) consider the main danger (and have considered the main danger for over 11 years) is that the AI will not care about any person at all.
For the AI to do an adequate job of protecting human welfare it needs to understand human morality, human values and human preferences -- and to be programmed correctly to care about those things. Designing an AI that can do that is probably significantly more difficult than designing an AI that is so intelligent that the human race cannot stop it or shut it down (although everyone grants that designing an AI that cannot be stopped or shut down by, e.g., the US military is in itself a difficult task).
The big danger, in other words, seems to come not from a research group using AI research to try to take over the world or to gain a persistent advantage over other people, but rather from a research group that means well, or at least has no intention of being reckless or destroying the human race, but ends up doing so through an insufficient appreciation of the technical and scientific challenges around protecting human welfare, then building an AI that is so smart that it cannot be stopped by humans (including the humans in the other AI research groups).
I fail to see how changing the AI-research landscape so that more of the results of AI research will be published helps against that danger. If one team has 100% of the knowledge and other resources that it needs to build a smarter-than-human AI (and has the will to build it) and all the other teams have 99.9% of the necessary knowledge, there might not be enough time to stop the first team or (more critically IMHO) to stop the AI created by the first team. In particular, if the first AI is able to build (e.g., write the source code for) its own successor -- a process that has been called recursive self-improvement -- it might rapidly become smart enough to stop any other smarter-than-human AI from being built (e.g., by killing all the humans).
Rather than funding a non-profit that will give away its research output to all research groups, a better strategy is to give the funds to MIRI, who for over 11 years have been exhibiting in their writings a vivid appreciation of the difficulty of creating smarter-than-human AI that will actually care about humans, rather than simply killing them because they might interfere with the AI's goal, or because the habitat and resources of the humans can be repurposed by the AI.
Any effective AI -- or any AI at all, really -- will have some goal (or some set or system of goals, which for brevity I will refer to as "the goal"), which may or may not be the goal that the builders of the AI tried to give it. In other words, everything worthy of the name "mind", "intelligence" or "intelligent agent" has some goal, by definition. If the AI is powerful enough -- in other words, if the AI is efficient enough at optimizing the world to conform to the AI's goal -- then all humans will die, at least for the vast majority of possible goals one could put into a sufficiently powerful optimizing process (i.e., into a sufficiently powerful AI). Only a very few, relatively complicated goals lack the unfortunate property that all the humans die if the goal is pursued efficiently enough, and learning how to define such goals and ensure that they are integrated correctly into the AI is probably the most difficult part of getting smarter-than-human AI right.
That used to be called the Friendliness problem and is currently usually called the AI goal alignment problem. The best strategy on publication is probably to publish freely any knowledge about the AI goal alignment problem, while keeping unpublished most other knowledge useful for creating a smarter-than-human AI.
I will patiently reply to all emails on this topic. (Address in my profile.) I do not get a salary from FHI or MIRI and donating to FHI or MIRI does not benefit me in any way except by decreasing the probability that my descendants will be killed by an AI.
Andrew Ng thinks people are wasting their time with evil AI:
AI luminary Stuart Russell also takes on this analogy in this presentation: https://www.cs.berkeley.edu/~russell/talks/russell-ijcai15-f...
>OK, let’s continue [the overpopulation on Mars] analogy:
>Major governments and corporations are spending billions of dollars to move all of humanity to Mars [analogous to the billions that are being spent on AI]
>They haven’t thought about what we will eat and breathe when the plan succeeds
>If only we had worried about global warming in the late 19th C.
We've already fucked this planet so I sincerely hope a few people are thinking of ways to avoid fucking another one.
EDIT: Actually, this is nearing "crazy" levels. Just ignore unless you really enjoy stream-of-consciousness. Sorry about this, HN! :)
I know I'm really late to the party here, but there's a premise in this whole discussion that I'm not sure I understand.
Why should we prevent AI from taking over? I mean, I "get it"... it wouldn't be HI and that feels kind of weird, but what's objectively special about HI? Why are we treating "HI==good" as axiomatic? I mean even us tribal, overly-emotional (&c) humans value DI (Dog Intelligence) even if we're pretty sure that it can't contemplate the fact that we're all made of the remnants of supernovae. There's no evidence as of yet that a greater intelligence a) exists, even in principle, or b) would be any less benevolent towards us. Perhaps they would even create nice little simulations for us to exist in. Though, I wonder what the purpose of my simulation is, given current circumstances :).
Yes, a transition from HI->AI would inevitably lead to a lot of human death (unless we're talking really out-there take-over plans involving disease and such), but would AI really be worse? And for whom and why? Humans themselves have caused a lot of death and we seem to value ourselves pretty highly overall (and undeservedly, IMO).
It might be that HI is the "end of the road" just like the Turing Machine appears to be the end of the road in terms of what you can compute... but not in terms of how fast you can compute it. Would "faster" automatically mean "better" (see footnote)? I dunno.
 The existence of a "higher" ("faster" is probable) intelligence is an interesting question. How would you judge such a thing? Is there more "power" to be gained through something other than being able to reflect on yourself? AFAIUI self-reflection is one of the distinguishing features of intelligence, but given that we're "better" than chimps -- who have an idea of "self", thus self-reflection -- it may not be the decider. And even so, such reflection is still subject to physics and thus without "free will".
 Not just faster, but "better", in some non-linear sense.
Good for them. I expect some great work to come out of this. :) I'm most excited to automate travel as quickly as possible; too many people die each year from automobile accidents.
So, you have 'red team' and 'blue team'. Blue team is super rich and builds itself an awesome AI. Red team needs some "rally round the flag" pick me up and so, looking around for targets, decides that attacking a bunch of machines is a safe bet. If they win, awesome. If not, then they didn't kill any persons, just made a bunch of junk.
Blue team's response is to internalize the threat (as is only natural, or is at least politically expedient to some subset of blue team) and frame the situation as follows: "This is what we built our AI for. This is an existential threat. It has the capacity. We only need to let it off the leash. The choices are 'destroy' or 'be destroyed'. This is nothing less than an existential moment for our civilization."
And, with that horrible, non-technical, propaganda-riddled rationalization, the AI developed by the most well-meaning of people will be let off the leash, will run away, and nothing that we know about the AI up to that point will be worth diddly squat.
I respect anyone that tries to tackle this issue. But, the nature of the issue, the kernel of the problem, is nothing less than Pandora's box. We won't know when it is opened. But, the AI will.
AI should definitely be constrained by financial means. Computing, unbounded by financial constraints, will eat everything.
Well, for Y Combinator it's easy: by ensuring funding goes to "Uber for X" and "Facebook for Y" startups instead of real technology-advancing businesses.
As opposed to (almost) the entire startup ecosystem which is focused on ... profits.
Edit: And what does "to much power" even mean, other than trying to use hyperbole to make some kind of point?