Machine intelligence, part 2 (samaltman.com)
89 points by dmnd on March 2, 2015 | 150 comments



On the one hand we have lay people (including Altman and Musk) who are warning that AI is progressing too fast. On the other hand you have (essentially) the entire mainstream academic AI/ML community, who are mainly concerned that the hype is so intense it might lead to another AI winter[1].

The crux of the problem is that AI experts largely do not see meaningful progress on any axis that makes an AI doomsday scenario plausible, which causes staunch disbelief and ridicule of people who write things like this.

So realistically if we want to resolve this issue, the place to start is not public policy. That's actually part of the problem -- tech policy is bad largely because it is mostly legislated by people who don't understand the technology. We have to start with a discussion about how this progress towards doomsday AI should be measured, and then formulate policy to directly address those scenarios. As long as Altman et al are extrapolating on axes I (for one) don't understand, I honestly think this conversation is unlikely to be productive.

[1] Most notably, a similar wave of hype accompanied neural networks when they first became popular. There were some promising early results, which led to much enthusiasm, and then Minsky and Papert published "Perceptrons", whose Group Invariance Theorem cast doubt on whether simple neural networks were really doing what we thought they were. Eventually it turned out that many early experiments simply had bad methodology, and research interest in the field all but died for almost a decade. Now we are in a period of AI hype again, and when you hear people like Andrew Ng get up in front of audiences and say things like "for the first time in my adult life I am hopeful about real AI", you might start to wonder whether there is a lesson here.

EDIT: Friends, I am confused by the downvotes. :) I am happy to have a discussion about this topic!


> On the other hand you have (essentially) the entire mainstream academic AI/ML community, who are mainly concerned that the hype is so intense it might lead to another AI winter

This doesn't seem true. E.g. the signatories on the Future of Life Institute's recent open letter about AI progress and safety include:

* Stuart Russell & Peter Norvig, co-authors of the #1 AI textbook

* Tom Dietterich, AAAI President

* Eric Horvitz, past AAAI President

* Bart Selman, co-chair of AAAI presidential panel on long-term AI futures

* Francesca Rossi, IJCAI President

* All founders of DeepMind and Vicarious, two leading AI companies

* Yann LeCun, head of Facebook AI

* Geoffrey Hinton, top academic ML researcher

* Yoshua Bengio, top academic ML researcher

...and many more.


That letter is not about "superhuman machine intelligence", it is about "ensuring that AI remains robust and beneficial." Among the threats cited are: autonomous vehicles, machines that need to make ethical decisions, autonomous weapons, surveillance, and issues of verification, validity, and control. Words like "singularity", "superhuman", and "AGI" do not appear.


Superintelligence and intelligence explosion are mentioned as valuable things to investigate.


Ah, turns out you're right:

Finally, research on the possibility of superintelligent machines or rapid, sustained self-improvement (“intelligence explosion”) has been highlighted by past and current projects on the future of AI as potentially valuable to the project of maintaining reliable control in the long term. The AAAI 2008–09 Presidential Panel on Long-Term AI Futures’ “Subgroup on Pace, Concerns, and Control” stated that:

"There was overall skepticism about the prospect of an intelligence explosion... Nevertheless, there was a shared sense that additional research would be valuable on methods for understanding and verifying the range of behaviors of complex computational systems to minimize unexpected outcomes. Some panelists recommended that more research needs to be done to better define “intelligence explosion,” and also to better formulate different classes of such accelerating intelligences. Technical work would likely lead to enhanced understanding of the likelihood of such phenomena, and the nature, risks, and overall outcomes associated with different conceived variants."

Stanford’s One-Hundred Year Study of Artificial Intelligence includes “Loss of Control of AI systems” as an area of study, specifically highlighting concerns over the possibility that:

"we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes – and that such powerful systems would threaten humanity. Are such dystopic outcomes possible? If so, how might these situations arise? ...What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous superintelligence or the occurrence of an 'intelligence explosion'?"

Research in this area could include any of the long-term research priorities listed above, as well as theoretical and forecasting work on intelligence explosion and superintelligence, and could extend or critique existing approaches begun by groups such as the Machine Intelligence Research Institute.


The tone of that letter was "Here's a general framework for responsibly building more powerful AI systems", whereas the lay press is reporting "Musk predicts Skynet in 5 years"

http://futureoflife.org/static/data/documents/research_prior...


To be fair, "AI safety" is very much an issue even without human-level or post-human AI. I mean, there are far more "mundane" problems, such as: what happens when a self-driving car gets into a collision? Or what do we do after factory workers, service workers, and truck drivers are replaced by (dumber-than-human) machines for those jobs?


Sorry Luke. As usual your fearmongering is not going to get far. For example, MIRI skeptic and famed AGI researcher Ben Goertzel signed the letter (as did I), but not because we are primarily concerned with the threat. Rather[1]:

I signed the document because I wanted to signal to Max Tegmark and his colleagues that I am in favor of research aimed at figuring out how to maximize the odds that AGI systems are robust and beneficial.

The letter helps give AGI projects visibility and, to that end, better avenues for funding.

[1]http://multiverseaccordingtoben.blogspot.hk/


I know for a fact that at least 7 of the top AI people that I listed above think intelligence explosion / superintelligence is plausible and that it's not at all clear that it will go well for humans. I'm pretty sure all of them think AGI will be feasible one day and will be fairly hard to make reliably safe, at the very least to the degree that current safety-critical systems require a bunch of extra work to make safe (beyond what you'd do if your AI system has no safety implications, e.g. detecting cats on YouTube).

I can't say which people believe which things, because most of them communicated with me in confidence, except for Stuart Russell and Shane Legg, because they've made public comments about their worries about intelligence explosion:

http://futureoflife.org/PDF/stuart_russell.pdf

http://lesswrong.com/lw/691/qa_with_shane_legg_on_risks_from...


There is a big difference between acknowledging the potential of a threat, which I agree everyone does, and advocating an intentional slowing of development (or rather, preventing it from accelerating), or alternatively advocating regulation until we get "Friendly AI" figured out, as MIRI does.

A world of difference actually.


You seem to be implying that intentional slowing is MIRI's official stance, without showing any support for that. Given that the original comment was responding to the specific accusation that "no AI experts say that this is a problem", I think you are reading too much into this. I agree that it is slightly disingenuous for lukeprog to have posted that list, but I feel your disagreement is far too uncharitable and motivated to be productive.

Disclaimer: I have read a bunch of the LessWrong "canon" and believe many of their points, except perhaps the timescale on which recursive self-improvement can happen. I think most of my acceptance comes from the relatively poor quality of their critics, who seem to attribute many strawmanish positions to them, or who seem more concerned with calling them a cult than with engaging their arguments.

I cannot help but think that if a better criticism exists, why hasn't anyone said it yet?


> I cannot help but think that if a better criticism exists, why hasn't anyone said it yet?

The whole lesswrong/MIRI thing is built around the strawman of unfriendly AI, as though it were already real. What you are asking is, why aren't there better arguments against strawmen and radical pontification?

It's like if I said, "There is a chance that mean aliens will come soon, therefore we need to start building defense systems." Ok, show me any proof that there are aliens coming or even an avenue for aliens to come here and be unfriendly.

Sure, it's a possibility that there are mean aliens who are going to attack us, but there is absolutely nothing to think that is a thing that is going to happen soon.

Granted, this is not a perfect analogy but I think it makes my point well. I am uncharitable because it's charlatanism and seems to be gaining traction - in the same vein as antivaxxers.


Even antivaxxers have opponents patient and understanding enough to actually debunk their object-level thinking. They say that vaccines cause autism? We say that the original study was wrong! Why do you get to be even less charitable than the "pro"-vaxxers?

Consider that the general position seems to be "we don't know when SMI would happen, but if we use the metric that our critics say is more reliable than ours, surveys of AI experts seem to say that human-level AI is possible in about 30-50 years at 50% probability." (Numbers quoted from memory and most likely incorrect.)

Yet here you are saying that MIRI/LW claims "it's a thing that's going to happen soon", that there is "absolutely nothing to think that is a thing". I can't help but think most of the "charlatanism" you see is manufactured in your own head and not a product of reading and understanding your opponent's position.

Yes, this is an unabashed ad hominem, but I wish people would attack arguments that exist rather than arguments that are easy to knock down. It's upsetting to me that, when I say that your arguments are lacking, your response is "So what? The Enemy is Evil and Stupid and I do not need to understand them."


I think you're building a bit of a strawman here yourself. MIRI's main argument is pretty simple: building a beneficial AGI is a difficult problem with more constraints than just building an AGI, so we should start working on that problem now, rather than waiting until AGI is almost here.


Researching friendly AI seems far more useful than government regulation. But it sounds like you and Ben are acknowledging that there is a potential problem here, which we should spend resources trying to prevent. That's a view quite different from the glib dismissals I've seen from some quarters.

As Ben said in your link, "once we have AGI systems that are massively smarter than people, they are going to do what they want to do, or what other powerful forces in the universe want them to do, but not necessarily what we want them to do." That's not the most reassuring comment ever.


> But it sounds like you and Ben are acknowledging that there is a potential problem here, which we should spend resources trying to prevent.

No one who has put any serious thought into AGI denies the possibility of an existential threat from it. The fact is though that we have absolutely no clue what chain of events would lead to that. In fact we have (almost) no clue what chain of events will lead to AGI in the first place.

The best way to know what will come from AGI is to start building it and see where the controls and capabilities are going. I personally think the AGI community needs to put more effort into a roadmap, which is done at every AGI conference to some degree. The problem, though, is that it's just too early, and there just aren't enough researchers and funds to have any inkling about which approaches to AGI are going to be best. I could caveat everything all day, but the bottom line is: just start building, because we are so far from it that absolutely anything helps.

"once we have AGI systems that are massively smarter than people, they are going to do what they want to do, or what other powerful forces in the universe want them to do, but not necessarily what we want them to do." That's not the most reassuring comment ever.

Right. My personal proposition is:

AGI:Humans::Humans:Ants

Ben seems to share this approach. So to think we could control it after its fifth iteration or so is silly. We also share the transhumanist philosophy, which I think is still pretty fringe even within the tech community, so I try not to go into that too much because it's just a whole ball of wax. I do think, though, that we need to have some of these conversations about what comes next if human work is not necessary, and later, what we do if humanity itself is not at the top of the intelligence chain.


I'm glad to see that people like you are replying to some of the misinformation that occurs in threads like these. It's a shame that on HN the people with the least knowledge are the most likely to cast their opinions in broad sweeping statements grounded in hyperbole rather than fact, particularly when the topics of AI and ML come up.

In fact a number of the signatories that Luke cites are quoted as saying in no uncertain terms that fear of AI is widely out of proportion to its capabilities now and for the foreseeable future. The most prominent I'm aware of saying things to this effect are Yoshua Bengio, Geoff Hinton, and Yann LeCun.


At the risk of falling into "ad hominem" territory, it is disheartening that Altman (or Muehlhauser, or most of the MIRI people for that matter) gets any play at all on this topic. The fact that Altman thanks Luke in his blog for review just gives you a sense of the "blind leading the blind" here.

I think however that says more about the lack of awareness on the part of the readers than anything. That combination is a sure sign of a very narrow field of study with a big political footprint, and is typical of similar fields such as climatology, macroeconomics, etc., where rank amateurs think they have insight to the same degree as experts by virtue of "just thinking about it."


>At the risk of falling into "ad hominem" territory, ...

I wouldn't worry, you and the parent already have solid footing in "appeal to authority" territory.


The problem with this debate currently is that none of the people talking about it, Altman, Musk, Hawking, etc. (as in the climate debate), are anywhere near competent to be talking about it, so they have to implicitly reference something like Bostrom.

They aren't making individually cogent, thought-out, articulated arguments; they are all just saying "LOUD NOISES, this is a scary thing that someone said could be bad, therefore I think it's a scary thing." If they could give some good timelines, facts, and steps toward AGI, then we could debate those on the merits. As it is no one is doing that; the whole thing is an appeal to authority from the beginning.

So yes, the sources for these people need to be inspected further.


It is arrogant to make sweeping statements wherein one presumes to be an arbiter of competence. If you think their arguments aren't sufficiently cogent or articulated, then suggest a counterargument rather than dismissing them on the basis of something other than their words.

Unlike climatology, superintelligence is not yet an established scientific field. It does not currently exist to study. There are no experts. There is no consensus to be had. Everyone is technically unqualified to speak on the subject.

Artificial intelligence experts today may be able to render guesses more accurate than most as to the arrival of superintelligence via the AGI path. However, this does not confer upon them any sort of authoritative high ground on the matter, nor does it place the issue squarely in their domain.

Doing so would be analogous to trusting a group of explosives experts circa 1944 in saying that we have nothing to fear from atomic weapons, because such devices are to utilize conventional explosive mechanisms in their operation.


Your post says too much, basically: "Nobody is an expert therefore everyone can add to the debate." Sorry, not the case. There are actually people who are studying AGI full time and have made it their life work, regardless of whether you know about it or not.

> Unlike climatology, superintelligence is not yet an established scientific field. It does not currently exist to study. There are no experts. There is no consensus to be had. Everyone is technically unqualified to speak on the subject.

How do you figure? There are AGI journals[1], conferences[2], speaker circuits, hackathons, etc., so I think you just don't know about them.

> Doing so would be analogous to trusting a group of explosives experts circa 1944 in saying that we have nothing to fear from atomic weapons, because such devices are to utilize conventional explosive mechanisms in their operation.

This is a great analogy, and I often make the comparison between nuclear technologies (power, radiometry, weapons) and AGI because of the scope and potential. The problem however with your analogy is that the current experts aren't analogous to explosives experts, they are more analogous to Szilárd/Einstein. This is why I advocate for an AGI Manhattan project. We need between 5 and 10 years of the smartest people alive working on building it. Not thinking about it, building towards it.

[1]http://www.degruyter.com/view/j/jagi

[2]http://agi-conf.org/


Close, but a more accurate generalization might read: "Nobody is an expert. Therefore everyone can add to the debate, provided their argument has merit."

>There are actually people who are studying AGI full time and have made it their life work, regardless of whether you know about it or not.

You mean like the people at MIRI? In a previous post, you suggest that most of them don't deserve any play at all on the topic.

>How do you figure? There are AGI journals[1], conferences[2], speaker circuits, hackathons etc...So I think you just don't know about them.

Oh but I do, and my statement still stands regardless. AGI does not yet exist. The field itself is in its infancy, and hardly established relative to other fields. As such, there are no experts, nor any consensus, at least not in any authoritative sense.

If you insist upon dismissing the input of select people entirely, then the bar for such a decision should be that they are authoritatively incorrect, or that they are authoritatively unqualified to speak. The problem is that there is nothing to derive such authority from yet, precisely because the field is in its infancy.

>The problem however with your analogy is that the current experts aren't analogous to explosives experts, they are more analogous to Szilárd/Einstein.

Unlike atomic weapons prior to the first test shot (or even prior to Fermi's Chicago Pile-1 experiment), AGI does not yet have a solid theoretical underpinning, let alone a proof of concept. Perhaps the best analogy lies somewhere in between our interpretations.

>This is why I advocate for an AGI Manhattan project.

Funny enough we're in total agreement here. I was even advocating for the same thing via another post in this thread.

>Not thinking about it, building towards it.

There is the question of timing. If the Manhattan Project itself was initiated earlier than it had been, there may not have been enough time for the aforementioned theoretical underpinnings to have been reasoned about to the level they were.


> You mean like the people at MIRI? In a previous post, you suggest that most of them don't deserve any play at all on the topic.

No, like Joscha Bach, Yoshua Bengio, Ben Goertzel, etc. The MIRI people are thinking about AI, not working on it. I should have made that point clearer and said as much. For AGI, building = research; thinking =/= research. So for example, my Master's thesis is on the implications of a nation-state building AGI. This includes several rough roadmaps to AGI. I would not call myself anything close to an expert or researcher working on AGI, because I am not building systems that test hypotheses about machine intelligence.

> AGI does not yet exist.

Once it does, the field (in theory) doesn't really exist anymore. So there can only be research into it before it is complete. Arguably an AGI's recursive self-improvement (if you fall on the side of it being complete at "birth") makes human input basically worthless.

The only consensus we will come to is when we all agree that the thing we built is actually an AGI. I can talk about those measures but at the end of the day most of the leading people think it's going to be "we'll know it when we see it."

> Unlike atomic weapons prior to the first test shot (or even prior to Fermi's Chicago Pile-1 experiment), AGI does not yet have a solid theoretical underpinning, let alone a proof of concept.

Exactly. Which has been my point all along. Arguing about safety is totally a worthless effort, and harmful in my opinion, when we have no idea what the foundations are or the paths to it. When people call for regulation before we have anything like real funding for it, it totally stifles the field. At best it diverts resources to making "provably safe" AI, which is an endless rabbit hole in my opinion because it's an impossible metric.

> there may not have been enough time for the aforementioned theoretical underpinnings to have been reasoned about to the level they were.

Enough time? That makes no sense. Let's instead call it a "Delta project" for AI - the entire front end of the project can be building the theoretical framework. There are no rules stating that a project has to be turnkey to realize a goal.


>... the MIRI people are thinking about AI, not working on it.

While they're not building AGI, I would argue that their thinking constitutes not only work, but progress. MIRI is attempting to flesh out the murky, abstract nature of the control problem and translate it into something more concrete, specifically mathematical models that may one day evolve into provably-sound foundations for future work.

>Once it does, the field (in theory) doesn't really exist anymore. So there can only be research into it before it is complete.

It is possible and perhaps even likely that the first AGI we create will be contained, but otherwise unsatisfactory to release or utilize without further iteration and study of the design. After all, it's probably a bad idea (in most cases) to leverage its superhuman intelligence if you believe the thing is malicious or otherwise has motives that do not align with human interests.

That said, I cannot imagine research proceeding in any sort of public fashion once that threshold has been passed, so in a way you're likely correct.

>Arguing about safety is totally a worthless effort, and harmful in my opinion, when we have no idea what the foundations are or the paths to it.

While I agree regulation is premature, prominent people like Musk advocating for AGI safety, if only in a general sense, strikes me as a positive thing due to the cultural effects.

Prior to Musk, Hawking, et al., if you mentioned the prospect of unsafe AI, you'd be laughed at in all but the most niche communities. The notion was clearly just something confined to the realm of science fiction movies. Nothing with similarly catastrophic ramifications could ever happen in real life, or otherwise such things were indefinitely far off in the future.

I'm not advocating fear mongering, just saying that when AGI actually does become a serious safety concern, it's better to have the cultural climate already primed and receptive to addressing the problem. On the other hand, if the safety angle is pushed too hard now, it may backfire and have the opposite effect later, and that would be unfortunate.

>There are no rules stating that a project has to be turnkey to realize a goal.

No, but in practice there are political and financial constraints that can sometimes have a similar effect. If a project isn't properly aligned with those constraints, then it becomes more likely to fail. The longer and more expensive a project is, the harder it is to justify or continue to justify in such climates.


While Altman and Musk don't spend all their time on it, I'd hardly call them "lay people". Altman studied AI at Stanford, and Musk knows a fair bit more than the average person about computer science and technology. The DeepMind people are also pretty bright on this topic, and they were concerned enough to set up an "ethics board".

As someone with multiple degrees in related topics, I was pretty skeptical of the concerns as well. That is, until I read Nick Bostrom's book "Superintelligence": the fact is that the consequences of SMI are so potentially disastrous that even if the odds were 1 in 100, it would make sense for us to be very careful. But the probability of SMI happening within the next 100 years is likely far more than that.

Don't want regulation? Then come up with a proposal that has a better chance of keeping things under control, one good enough that imposing a regulatory framework would add no incremental benefit.

It's one thing to distrust authority, especially when said authority wasn't earned. In the case of Musk, Altman, Bostrom, (and many others) ignoring the thoughtfulness they have brought to this issue seems extremely reckless.


A lot of experts also deny dangers coming from global warming and other interventions in complex systems we don't fully understand. Scientists in the 1940s also downplayed the danger of the atomic bomb. It's easy to brush away a hypothetical danger with arguments that resources might be wasted, or that people in charge might embarrass themselves if their assumptions turn out to be wrong, or that research might suffer due to overblown expectations.

When, do you think, is the time that we should start worrying about SMI? When machines outsmart a frog, or a chimpanzee? What if intelligence is just a matter of scale at this point, i.e. that adding more short and long term memory makes it easily reach and even surpass human intelligence?


By the time the atomic bomb was truly imminent (in the sense of it being within the reach of massive dedicated investment by the most powerful nations on Earth), scientists knew very well of the dangers [1].

As for global warming: no expert denies global warming, and you'll be hard pressed to find many who deny anthropogenic global warming [2]. This has been the case for many decades.

I am not an AI expert, but from what I can see thus far, our knowledge of cognition seems to be as far from human or post-human intelligence as nitroglycerin was from the atomic bomb (i.e. fundamentally different principles at work, not subject to exponential improvement without one or more breakthroughs or qualitatively different approaches). Also, our AI-nitroglycerin seems scary enough: autonomous weapons (think self-driving, self-firing drones, not Terminator), massive changes in our means of production that reduce human labor/employment and might cause social earthquakes, and the ability to draw meaningful conclusions from surveillance data on 7 billion people...

[1] https://en.wikipedia.org/wiki/Einstein%E2%80%93Szil%C3%A1rd_... (and this seems to have been understood as an approaching risk in the physics community way before anyone thought of telling FDR).

[2] https://en.wikipedia.org/wiki/Global_warming_controversy#Sci...


Fair enough, but I wouldn't exclude the possibility that AI experts are biased for at least three reasons, namely (1) that they believe in some sort of sanctity of the human mind (like many people do), (2) that they don't want to be regulated and (3) that they don't want to raise overblown expectations. There are definitely historical examples in which the majority of experts was wrong.

I also find it incredibly difficult to tell what is missing from an algorithm. Perhaps multiple breakthroughs are necessary, perhaps just one (there is no good reason to believe that an algorithm that yields AGI can't be considerably simpler than what happens in the human brain). Initially, nobody thought that search could be done in O(log n), and similarly not many researchers expected neural networks to perform useful tasks until back-propagation was invented and demonstrated.


Brilliant. If, however, it isn't cognition that needs to be fully understood, but learning and meta-learning, then we could theoretically have an 'and then there was light' scenario that just speeds right out of the gate. This case is dissimilar from your examples in that AI could alter itself; it could in some ways be considered alive, if not obviously alive.

I think we need to soberly look at the fine details with our collective perspectives, observe the sociocultural response, and gauge our best course of action. Perhaps progress should sometimes be stopped. The effects of the atomic bomb and its research are among the very foundations of this world; AI is of comparable magnitude today, and could alter the game even more significantly.

It is my guess that paradigm shifts in human consciousness have quite unpredictable effects.


While I understand where you are coming from, one remarkable breakthrough can change that. That does sound absurd, but we would also look like fools as a species for being aware of this potential problem and not at least looking into it.

Technology has been exploding at a remarkable rate; I'm fine with things slowing a little for the sake of the next 1000 years of humanity.


Sure, but that's true in essentially all kinds of scientific research. I personally think physics research is more likely to produce an existential threat than AI research is. Fusion bombs are already pretty close to a world-ending discovery, possibly already across the line. And that is probably not the worst physicists have to offer in the future, either.


Curious for a reference on fusion bombs. If you are referring to laser inertial, that takes buildings full of lasers.


I was just thinking of regular old hydrogen fusion bombs aka thermonuclear bombs.


Those require a fission trigger and have existed for a long time. The proliferation nightmare would be if we could figure out how to build one that didn't need fission, but I have never heard even a hint of success with that. If there is, it's classified -- and in this case for good reason.


Is he warning that it's progressing too fast? I felt that he was more focused on risk mitigation than on trying to prevent machine intelligence research per se. And risk assessment is part of the job portfolio of both Altman and Musk; this doesn't mean that their assessment is well informed, it just makes it seem more natural to me that they would think in these terms, and be concerned if it seemed that these considerations were not being taken seriously in the field.

It is not actually all that clear what intelligence is, or whether cognition and communication are necessarily synonymous with subjective existence, or whether subjective existence is necessarily synonymous with any particular goal or desire for that existence to continue. But I think that the concern about the blind application of fitness functions, for instance, is well taken.

I don't think that the only problem is one of human survival; I think that there is a paucity of self-reflection in the field. The prospect of creating a sentient non-human entity raises fundamental challenges to a lot of our assumptions about ethics.


I think you make a lot of good points.

>We have to start with a discussion about how this progress towards doomsday AI should be measured, and then formulate policy to directly address those scenarios

Here is my proposal. Since the whole of AI is a rather complicated and messy business, let us just focus on a small part. That is, let's just talk about linear regression and simple decision-making schemes based on estimated parameters. While one can debate whether or not linear regression counts as "actual" AI, there is little doubt that (a) it forms the conceptual basis of many more complicated AI schemes, so that if we can reason about doomsday scenarios for those, we should also be able to do the same for linear regression, and (b) despite its simplicity there is substantial intelligence value in linear regression, to the point that successes and failures of linear regression can have major effects on scientific/business/whatever processes.

What are some examples of how the set of decision/action possibilities changed before and after linear regression technology was applied? What examples are there of failure modes of linear regression that tended to increase the risk to humans (e.g. moving up the doomsday axis)? In each of these cases, what policies have been or could be implemented to mitigate those risks, including technological audits as well as regulation of the deployment of the technology?

My sense is that these questions actually can be answered. In all likelihood professional statisticians and operations research types already have opinions on these things. I would be curious to see what the public opinion is about this limited version of the problem, since the technology is something that many people (including Sam Altman and friends) should have a good understanding of.
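To make the failure-mode question concrete, here is a minimal sketch (all numbers and names are invented for illustration) of a linear-regression-driven decision that is reasonable inside the range it was fit on and silently wrong outside it:

  import numpy as np

  np.random.seed(0)

  # Toy setting: fan speed needed to cool a machine, as a function of load.
  # Training data only covers loads between 10% and 60%, where the response
  # happens to be roughly linear.
  load = np.random.uniform(10, 60, size=200)
  fan_speed = 20 + 1.5 * load + np.random.normal(0, 2, size=200)

  # Ordinary least squares fit: polyfit with deg=1 returns (slope, intercept).
  slope, intercept = np.polyfit(load, fan_speed, deg=1)

  def decide_fan_speed(current_load):
      # Naive decision rule: trust the regression everywhere.
      return slope * current_load + intercept

  print(decide_fan_speed(40))   # inside the training range: close to the truth (~80)
  print(decide_fan_speed(95))   # extrapolated into a regime the model never saw,
                                # where the true response may be nonlinear

The point is not that regression is dangerous, but that even this simple case has a well-defined failure mode (extrapolation beyond the training range) that audits or deployment rules could plausibly target.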


Google published a paper which I think is relevant to the topic: "Machine Learning: The High Interest Credit Card of Technical Debt" (2014).

http://research.google.com/pubs/pub43146.html

The conclusion seems to be that even this level of machine learning decreases controllability substantially. "We note that it is remarkably easy to incur massive ongoing maintenance costs at the system level when applying machine learning."
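One of the failure modes the paper calls out, if I remember right, is hidden feedback loops: a model retrained on logs of its own decisions can quietly lock in an early mistake. Here is a toy, made-up simulation of the idea (nothing in this snippet comes from the paper; the names and numbers are invented):

  import numpy as np

  np.random.seed(1)

  # Two articles with true click-through rates; "b" is genuinely better.
  true_ctr = {"a": 0.30, "b": 0.35}
  # The initial model happens to rank "a" higher.
  est_ctr = {"a": 0.10, "b": 0.05}

  for _ in range(20):
      shown = max(est_ctr, key=est_ctr.get)                  # always show the current "best"
      clicks = np.random.binomial(1, true_ctr[shown], 100)   # feedback only for what was shown
      est_ctr[shown] = clicks.mean()                         # naively retrain on our own logs

  print(est_ctr)  # the estimate for "b" never updates, so the early mistake stays locked in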


> We have to start with a discussion about how this progress towards doomsday AI should be measured, and then formulate policy to directly address those scenarios. As long as Altman et al are extrapolating on axes I (for one) don't understand, I honestly think this conversation is unlikely to be productive.

Fully agree with this. So far, all of this activism around the AI threat has amounted to "guys we really need to think about this." But that's where it ends, because nobody has any idea what a dangerous AI would actually look like, what it would actually be capable of doing. There's just a vague notion of "it COULD do something really bad." While I agree that is a risk and could be a threat to humanity, I haven't seen any results from this line of thinking, nothing to demonstrate the value of trying to "get in front of" a problem that is so ill-defined and shrouded in so much mystery.


If you're looking to support AI safety, the Machine Intelligence Research Institute is researching it.


The people concerned about AI are not talking about current ML research or the near future, but about where the tech will be decades from now.

Surveys of AI and AGI researchers generally agree that it will probably happen within the next century, and that there is a high chance of it going badly.


Good try AI, good try...


"The US government, and all other governments, should regulate the development of SMI"

This has basically the highest insanity * prominence of speaker product I've ever seen. The same geniuses who want to ban encryption and can't manage to get a CRUD website up and running without an 8-figure budget will regulate the existence of algorithms (i.e., math) routinely generated by groups of <4 or so researchers.

Of course, people will be glad to "demonstrate their capabilities" to the group that's responsible for shutting them down if they're "too capable".

"it’s important to write the regulations in such a way that they provide protection while producing minimal drag on innovation (though there will be some unavoidable cost)."

Has this man ever seen a government regulatory framework operate?

"To state the obvious, one of the biggest challenges is that the US has broken all trust with the tech community over the past couple of years. We’d need a new agency to do this."

Don't worry guys! This is a new agency!

"Regulation would have an effect on SMI development via financing—most venture firms and large technology companies don’t want to break major laws. Most venture-backed startups and large companies would presumably comply with the regulations."

Here's the nugget. The only way this article makes any sense at all is as a completely disingenuous argument devoted to making it impossible to invest in technology companies, period (please tell me what "technology company" does not have at least some algorithmic or optimization component), without having a large compliance department and having greased the right palms. I don't particularly think he's that Machiavellian, but it seems like that or the alternative...


I'd rather fight the SMI robot terminators that exist in Altman's vision of the future than deal with the level of oppressive government regulation required to control such innovation. Asinine.


We could call it the Turing Police...

http://en.wikipedia.org/wiki/Neuromancer


>This has basically the highest insanity * prominence of speaker product I've ever seen.

Is that two compliments in one? :) I am debating releasing a sort of part 3, which actually could be called "insanity", but you should save the criticism for that one.


Maybe in that one you could explain what it is about your experience with machine learning algorithms and pharma-style highly regulated industries that leads you to believe that turning machine learning algorithm development (not even "deployment", mind you, since in the FOOM scenario developing a self-improving capability is tantamount to deploying it) into a highly regulated industry wouldn't be an unmitigated disaster.


And maybe you could describe why regulating AI research must be an unmitigated disaster? If over time it proves that certain areas of research provide a clear existential threat to our species, why is it irrational to then attempt to slow down certain fields of research relative to others?


What I took from that line is that we need a regulatory power, and that our current best option is the US government. SMI without regulation is impermissible. I don't like the regulation coming from within the U.S. government, but I dislike the other options more.


The US Government regulates all sorts of things that keep you safe - from pollution of your water supply through to seatbelts.

It's not perfect, and it often fails at regulation. But your post doesn't help us understand when it succeeds and when it fails.


The trouble with evaluating regulations is that if you attribute positive effects directly to government regulations (e.g. "regulation X saved Y lives by keeping dangerous drug Z off the market"), it's only fair to also attribute negative effects directly to government regulations (e.g. "regulation X killed Y people by delaying the introduction of miracle drug Z to the market"). And even beyond that, you also need to analyze the costs of the regulatory system itself if you want to make conclusions about the net effects of the regulatory system as a whole.


But those are regulating current products or services. This is calling for regulation now of RESEARCH in a specific field.


Would banning the creation of conscious AI really be all that different from the current moratorium on human cloning? We already have the technology to make human clones, but we choose not to because of the ethical implications.


It seems that the difference lies in the possible upside. Human clones currently don't seem likely to have much to offer. Conscious AI would have the potential to aid in the solution of a multitude of problems.


Cloning has a huge amount to offer. First things that come to mind are cloning highly intelligent or successful people, or growing brain dead clones for medical experimentation and organ transplants.

We've managed to successfully halt progress on these technologies and even make them unthinkable, just by having a cultural bias against them. There's no reason we couldn't do the same for AI.


You're definitely right on the second point...especially with a few more decades of advances in transplant technology.

Seems like there is an innate human revulsion towards clones that has aided/led to the formation of this cultural bias. Unsure how easy it would be to replicate those feelings towards AI. Perhaps widespread acknowledgement of the potential dangers would be the first step.

Anecdotally, people seem to have trouble grasping the dangers involved. They think of the iPhone in their pocket and can't possibly see the risk. Obviously it's near impossible to put yourself in the shoes of an entity that could be orders of magnitude more intelligent than the most intelligent human.


120 days ago I argued here that we would most likely put regulations in place:

https://news.ycombinator.com/item?id=8546701

I still think that is true, and it will most likely come from the UN. If it ends up becoming a legitimate threat we will regulate it the same way we regulate nukes - or at least we'll try to (and most likely fail miserably).


"This has basically the highest insanity * prominence of speaker product I've ever seen."

(I edited this post to remove my shock-induced LULZ and other flapdoodle and concentrate on the important stuff.)

The more I re-read this and reflect on it, the more shocked I am. The first post in this series was a bit of an eyebrow-raiser for me, but it still remained on the sane side of the fence. Yes, dangerous AI is worth thinking about. Lots of things are. Alien invasions aren't completely implausible either, and the Fermi Paradox is definitely worth thinking about. But this has jumped from thinking about the Fermi Paradox to arguing that we halt all radio emissions and space exploration now or the aliens will come for us. This is not a rational leap.

As I said elsewhere in this thread: could this be some kind of ploy to make basic CS research illegal for anyone but deep-pocketed VC-linked concerns (and governments)? Probably not, but that could very well be the result. The sorts of regulations discussed here would make independent CS research illegal. Please tell me this "conspiracy theory" is insane. I really want to believe it is. Someone please tell me I'm crazy. Please? Anyone?


I actually think radio silence is a very rational thing to do, and Stephen Hawking agrees with me. Do not describe that issue as cut-and-dried.

http://www.theguardian.com/commentisfree/2010/apr/26/stephen...


This is the problem with smart people. Here we have absolutely tangible existential threats like fossil fuel depletion and die-off, and here we are fretting about aliens and AI, two things not even proven to exist.


If humanity only worried about tangible threats, it would be worse off.


Guess I'll start working on the algorithms to capture that regulatory system. Who wants to be co-founder?


I'm in.


Eliezer wrote something relevant to these types of proposals eight years ago: http://lesswrong.com/lw/jb/applause_lights/ when he asked someone calling for "democratic, multinational development of AI":

  Suppose that a group of democratic republics form a
  consortium to develop AI, and there's a lot of politicking 
  during the process—some interest groups have unusually
  large influence, others get shafted—in other words, the
  result looks just like the products of modern democracies.
  Alternatively, suppose a group of rebel nerds develops an
  AI in their basement, and instructs the AI to poll everyone
  in the world—dropping cellphones to anyone who doesn't have
  them—and do whatever the majority says.
  Which of these do you think is more "democratic", and
  would you feel safe with either?


A lot of these points sound nice: "we should regulate this and that", "we should be careful with such and such", but what would actually incentivize any politician ostensibly regulating AI to do the right thing? Or even to find out what the right thing is?

This is a feel-good article, but all I can ever see it accomplishing is actual harm by advocating the idea that the worst decision-making process humanity has developed (politics) should be applied to the most dangerous threat humanity has to face (self-improving AI).

I don't mean to sound too harsh, it's nice to see more and more people beginning to take the threat AI poses seriously, but I think it's a dangerous leap to reach for the first tool in your toolbox (regulation) to try and handle such a problem.


Contributor to Deeplearning4j here.

I deeply disagree with Sam’s assessment of the dangers of SMI and his prescriptions to regulate it.

1) In our lifetimes, humanity is much more likely to damage itself through anthropogenic climate change and nuclear disasters, both deliberate and accidental, than through the development of SMI. If anything, machine intelligence will probably allow us to approach those other significant problems more effectively.

2) For regulatory limits on the progress and use of technology to be effective, the rules must be global, and the technology must be detectable. Neither of those conditions is likely to be fulfilled in this situation. Any law that requires the agreement of and enforcement by every nation in the world fails before it is even formulated. Humanity has shown very little capacity to agree on a global scale. The proliferation of nuclear arms and the persistence of human trafficking are two cases in point.

3) Unlike nuclear testing, the development of SMI would not be detectable with Geiger counters. It does not produce explosions that are seismologically measurable half a planet away.

While the US should incentivize the “right” research into machine intelligence through financing and other means, onerous regulations and absurd government bodies will simply shift research offshore to other countries looking for an edge. (Machine learning research does not require large facilities. All you need is a keyboard and the cloud.) That offshoring would be disastrous for the United States, because our competitive advantage in the world economy is technological innovation.

At the same time, regulation by slow-moving committees will simply drive military/intelligence research underground. I doubt the three-letter agencies will acquiesce to a law that puts them at a disadvantage to their peers.

In other words, if SMI is possible, then it is inevitable. It will be put to many uses, like all technology. If we are technological liberals at all, then we must trust that the positive effects of machine intelligence, on balance, will outweigh the negative ones. We should not be the ones sowing fear.


> we trust that the positive effects, on balance, will outweigh the negative ones.

So far, the positive effects from technology definitely seem like they've outweighed the negative ones. Technology has raised our standards of living, and we've managed to work around many potential downsides.

Most technologies to date have a fairly limited per-use radius of impact. Guns for example, are local: bullets don't travel more than a couple miles. Mishaps with guns (e.g. shooting sprees) can be terrible, but they have a small radius of impact, so they're not the end of the world.

As technologies become more powerful, it's reasonable to assume that maximum radius of impact will also increase. If a radius of impact is the size of the earth (or larger), we have to worry very much about accidents, even if we can expect accidents to be rare.

AI might be one of the first technologies humanity comes up with that has a global per-use effect radius. Extreme caution is warranted.


The technologies leading towards AGI/SMI are advancing. The technologies needed to make it safe are also advancing. The goal is not to ban SMI - it's to ensure that it's safe. Think of it as a race between research into capabilities and research into safety and control measures. If the safety and control research is sufficiently advanced when SMI appears, then it can solve humanity's other problems for us. If that safety research isn't sufficiently advanced, then we're in very serious trouble.

This is the idea behind differential technological advancement: favor technologies that help us make safe AGIs that will help us, over technologies that are too risky. Right now, safety-oriented AGI research is severely under-funded, so it's hard to tell the difference. Hence the need for monitoring and research. And while we certainly hope that safety will get on track to win the race, if it looks like it isn't, a regulatory body and regulatory framework may provide a saving throw.


Partial measures do matter; in fact, currently most of the advances are coming from the US. And research can be regulated. Probably no one will catch an independent researcher working alone and not releasing anything, but progress tends to come from teams of experts who have come through elite labs/universities. Furthermore, having significant data & hardware seems to be helpful for making progress.

It doesn't require mass surveillance for something like this to work. I'm not saying there are clear positive next steps, just that blanket dismissals on the basis that perfect enforcement won't be reached are silly.

We shouldn't actually trust that positive effects of every technology outweigh negative ones; that seems obviously wrong (think about weapons tech for example).


Many of the advances we know of are coming from outside the US. Notably teams at the Universities of Toronto and Montreal, DeepMind in the UK, and Juergen Schmidhuber's group in Switzerland. Geoff Hinton is British, Yann LeCun is French, Andrew Ng was born in the UK to Hong Kong parents and graduated in Singapore, Quoc Le is Vietnamese. Machine intelligence research is very international.

When you consider all the data being processed in the world in different ways, and all the optimization algorithms being run on public and private clouds and various in-house data centers, I think you will appreciate how difficult it is to even detect advances in machine intelligence.


Fair point, though most of the advances are still coming from the Western countries, which can agree on a regulation more easily than the world as a whole.


The advances are happening in Western countries, for now. Members of the top teams come from all over the world. It would be very easy for them to relocate.


You're right - regulation without any support from the AI/ML community would be less effective.


Baidu is doing top quality work, and I think researchers moving to Baidu is a realistic concern.


Climate change could make some cities uninhabitable, and nuclear war could set a country back a century. But AI will exterminate every organism alive. We are playing with fire and saying it's OK since the rusty knives and paint chips we were playing with first didn't kill us.


I think your extremely dystopian prediction about AI's consequences is as unrealistic as the hype surrounding it, and I wonder why you downplay the consequences of climate change while emphasizing those of SMI. What evidence do you have to support the assertion that it will exterminate everything alive? We are playing with fire on many levels. It's a Promethean moment. No one thinks it's unimportant. The question is whether this is a problem the law can solve. I don't think so.


Entire books have been written about this, but in a sentence: AI is very likely to be dangerous by default, since it won't have human values/morality by default, and formalizing human values is very difficult.

Seriously, look at Superintelligence, or the Intelligence Explosion FAQ (http://intelligence.org/ie-faq/), for far more depth on these issues.


> Humanity has shown very little capacity to agree on a global scale. The proliferation of nuclear arms and the persistence of human trafficking are two cases in point.

This is very true. Therefore you think regulations on nuclear arms and human trafficking are worthless?


Worthless is a strong word.

When you consider the number of unstable nation-states that have nuclear arms, I would say that non-proliferation has been very ineffective. Likewise human trafficking laws. I agree with their intent, but they are not working well.

What I am arguing here is that the nation that regulates first is at a disadvantage. I do not want the US to be at a disadvantage, and I do not believe that this problem is one the law can solve.


As a non-US person, I think it would be laudable for US to put up with a little disadvantage for the safety of the world, just as US expends a lot of resource to keep world peace. I have no standing to demand anything, but "I do not want the US to be at a disadvantage" sounds very selfish to me.


This is a very good point. And I understand why my argument might not appeal to a non-US person. I have lived outside the US for more than a decade, and experienced many other governments. While I can see that the US is a deeply imperfect country, I believe it is less corrupt and more functional than many other nations both domestically and internationally. Therefore, I do not wish to see it at a disadvantage. Many of the alternative hegemons are worse.

In addition, the US has a special relationship to technology. We export innovation. If we stop innovating, then we have very little to export. This is very different from other countries. Asking the US government to regulate innovation is like asking the Saudi government to crack down on oil exports. It is the foundation of our economy. Without it, we fail.

What I'm really saying though is that US regulations will not increase the safety of the world. They will merely shift the locus of research.


For those claiming that no reputable AI researchers believe there's a serious risk, I'd like to point to this short reflection by Berkeley professor Stuart Russell (disclaimer: my PhD advisor): http://edge.org/conversation/the-myth-of-ai#26015

... A highly capable decision maker – especially one connected through the Internet to all the world's information and billions of screens and most of our infrastructure – can have an irreversible impact on humanity. This is not a minor difficulty. Improving decision quality, irrespective of the utility function chosen, has been the goal of AI research – the mainstream goal on which we now spend billions per year, not the secret plot of some lone evil genius. AI research has been accelerating rapidly as pieces of the conceptual framework fall into place, the building blocks gain in size and strength, and commercial investment outstrips academic research activity ...

He stops short of calling for government regulation of AI research; as others in this thread have noted, there are a lot of reasons to expect that route to be problematic. And it's likely that we're many decades away from building dangerous superintelligence; the pressing short-term risks are probably more economic in nature, like putting truck drivers and factory workers out of work. But just as physicists went from believing atomic energy was impossible to making it a reality in a short span of time, it's hard to predict the timescale of progress in AI research. It's not obviously wrong to say that there will, eventually, be a real issue here that society needs to think seriously about.


If you would have asked people in the 70s if the most influential software of the coming decades would be written by lone wolves in their garages or a group of very smart people with a lot of resources, they would have bet on the latter. But the smart money turned out to be on Gates, Jobs, Wozniak and countless others. The idea of regulating AGI/SMI is based on the premise that it will be developed primarily by people who can be identified, detected and influenced by regulation. My opinion is that the odds are 50/50 at best between the very smart well resourced people and random guys or gals working in their garages, and most likely some combination of both. If that view of the world is correct it does not bode well for the success of any kind of regulation.

There are people who believe that high-level machine intelligence will take massive computing resources. It may, or may not. Most of the recent big advances in traditional machine learning were implemented on easy-to-obtain personal hardware. I hate to invoke buzzwords, but even the much-hyped deep learning was first developed and proven by Hinton on commodity machines. Other advances, like IBM's Watson, required 1940s-style buildings full of servers. Hardware/power usage seems like one of the only reliable ways to tell who is doing any kind of serious AGI/SMI research, and that looks like another 50/50 at best. It just doesn't have the resource and logistical signature of other kinds of research.

AGI/SMI is still so theoretical that it can be hard to see as a clear and present danger. But if it is a real threat, it's the stuff of nightmares because there isn't anything we can do. I know people who are working on it right now. It doesn't seem like they're making a lot of progress, but they are trying and doing so completely outside the reach of any regulatory framework. I know that if I'm struck with sudden inspiration for a new approach to the problem I'm not going to ask the government for permission. I'm going to spin up 100 cloud computing instances and see if I'm right or not. And if I am, even god won't be able to help us if sama's worst case scenarios come to pass.

I wrote more on this topic in a comment yesterday: https://news.ycombinator.com/item?id=9130671


"If you would have asked people in the 70s if the most influential software of the coming decades would be written by lone wolves in their garages or a group of very smart people with a lot of resources, they would have bet on the latter. But the smart money turned out to be on Gates, Jobs, Wozniak and countless others."

This is a myth. Gates, Jobs, and Wozniak were all smart people (Harvard, Reed, genius) with a lot of resources (access to a mainframe as a student, access to Silicon Valley infrastructure and talent, which emerged originally around military contracts and DARPA grants).


> I know that if I'm struck with sudden inspiration for a new approach to the problem I'm not going to ask the government for permission. I'm going to spin up 100 cloud computing instances and see if I'm right or not. And if I am, even god won't be able to help us if sama's worst case scenarios come to pass.

Taken at face value, this sentence just seems to indict your moral judgement -- why would you spin up 100 instances to see if you're correct if you realize the existential threat in doing so?


I totally agree with this guy. Regulation is futile: you can't regulate at a micro level in all parts of the world. If you can't properly regulate drug trafficking in Santa Monica, how do you plan to regulate a couple of scientists hidden in the Soviet tundra?

IMHO our encounter with an AI is inevitable, even if that AI is a superhuman with augmented capabilities or derives from an arms race (like the Manhattan Project). So I already posted a possible solution: wait for it with EMPs in place... (just in case)


If it's smart enough to be a threat, I wouldn't be surprised if it's smart enough to avoid getting EMPed.


Why is this all of a sudden such a hot topic? We're nowhere near anything resembling general intelligence even at the level of a toddler. Barring any kind of hardware innovations we will not get there any time soon either with just software.


> We're nowhere near anything resembling general intelligence even at the level of a toddler. Barring any kind of hardware innovations we will not get there any time soon either with just software.

Should people have discussed the control and proliferation of nuclear weapons decades before their invention? It seems to me that even a century beforehand, the conversation would have been productive. If anything, we got very lucky with nukes. Physics could have easily turned out differently and allowed people to create bombs from substances more common than a few transuranic isotopes.

AI is, potentially, more dangerous than nuclear weapons. If you poll experts, they estimate a 50% chance of human-level AI by 2040-2050.[1] That's 25-35 years away. They also estimate that superintelligence will arrive less than 30 years after that. Lastly, one in three of these AI experts predict that superintelligence will be bad or extremely bad for humanity.

It seems like now is a great time to have this discussion.

1. http://www.nickbostrom.com/papers/survey.pdf


Experts have been predicting human-level intelligence 30-50 years out for a while now. I don't see anything in the current line of research that will change that situation.

My measure of intelligence is creative problem solving in a mathematical discipline like theoretical physics, algebraic topology, combinatorics, etc. All the current AI research is doing is building better and better pattern-matching engines. That's all very good, but talking about sophisticated pattern-matching code as if it were anything more seems very silly to me.

But I don't think looking at things this way is valuable in either case. Hamming has a great set of lectures on what it would mean for machines to think, and in the grand scheme of things I think the question is meaningless. The real question is what people and thinking machines can accomplish together.


I think it's because Nick Bostrom's book Superintelligence is the first general-reader overview of the subject, and it came out in the middle of last year.

Musk definitely read it, and I assume it's doing the rounds of the tech elite.

It is a good book, worth reading, with plenty of references. It amusingly makes its own point: it tries to analyse what AIs might be and do and how to control them.

The analysis is such a mess, and shows that our collective knowledge of this is such a mess, that you can't help but agree with the author that we need to pay more attention to it.


It's not. Sam Altman has a blog that gets a lot of readers; that's why it seems more important than it really is currently.


Good point.


I find it odd that AI risk has become such a hot topic lately. For one, people are getting concerned about SMI at a time when research toward it is totally stalled---and I say that as someone who believes SMI is possible. Stuff like deep learning, as impressive as the demos are, is not an answer to how to get SMI, and I think ML experts would be the first to admit that!

On top of that, nothing about the AI risk dialogue is new. Here's John McCarthy [1] writing in 1969:

> [Creating strong AI by simulating evolution] would seem to be a dangerous procedure, for a program that was intelligent in a way its designer did not understand might get out of control.

Here's someone thinking about AI risk 46 years ago! The ideas put forward recently by Sam Altman and others are ideas that have occurred to many smart people many times, and they haven't really gone anywhere (e.g., at no point between 1969 and now has regulation been enacted). I wish people would ask themselves why that is before making so much noise about the topic. The only people influenced by that noise are laypeople, and the message they're getting is "AI research = reckless", which is a very counterproductive message to be sending.

[1] McCarthy, John, and Patrick Hayes. Some philosophical problems from the standpoint of artificial intelligence. USA: Stanford University, 1968.


Remember when the U.S. tried to limit the sale of PS3s to the government of Iran? That was a decade ago. In light of current negotiations with the Islamic Republic, I see a lot of parallels between nuclear non-proliferation and the regulation of "weaponized" AI. To wit: even if the entire international system agrees on a set of regulations, they will do little to stop bad actors. And do I really want inspectors coming to my office to poke around my jump point search code ;)

Rather, let's take a cue from Nassim Nicholas Taleb and, in concert with the progression of AI, develop counter-systems that make us less fragile. It shouldn't be unreasonable to extrapolate where AI is headed in 1, 5 or even 20 years based upon publicly available research.

Which brings up the most crucial point. The subset of humanity that constitutes world-class AI researchers is very finite indeed. That raises the significant possibility of a Bane / Dr. Pavel scenario playing out. Not unlike certain coerced atomic scientists in Persia today.


I commented above, but thought it wise to also comment directly to you. I enjoyed the musing of a Bane scenario, and consider it inside the realm of possibility. However, I'm of the opinion that SMI will be capable of 'constantly game-breaking thinking,' which basically means there is no way we could possibly control it after a certain point. For instance, what if it realizes how to move matter remotely?

I also wanted to discuss your point about the finite set of top AI researchers. I think their specific psychological biases have the potential to appear and be magnified inside the personality of their AI, the way a work of art can be said to embody the 'spirit' of its artist. Your thoughts would be appreciated.


We have no defences against a hypothetical god-AI. Absolutely none.

But I don't think a hypothetical god-AI is likely any time soon. And a hypothetical god-AI is just as likely to be indifferent as hostile.

What worries me more in the short term is cyber war. Cyber defences that might protect against a hypothetical non-god AI look a lot like cyber defences that might protect against conventional human threats.

Currently those defences are ridiculously weak.

So I think rather than hyperventilating about science fiction threats from the distant future it would be more useful to start securing the entire Internet as a matter of urgency - especially all the infrastructure systems that are either connected to it already, or will be connected to it soon.


Haven't you noticed how quickly things are happening these days? Of course we should restructure Internet security if it's a problem. This is a very sci-fi idea that we absolutely can't predict; it wasn't hyperventilation but a soberly considered idea. A god-AI is possible, and it could have capabilities and intentions that my tiny brain can't even comprehend. Since neither I nor anyone else really understands what super-intelligence is capable of, we should be wary of it.


Thanks for writing. We as a species need to talk more about this.

What's the goal in requiring the first SMI to detect other SMI being developed but take no action beyond detection? Or did you mean to say that it should detect other SMI under development and then report those projects to whatever regulatory authority is overseeing SMI?

Also, perhaps a better framework for a "friendly" SMI than Asimov’s zeroeth law is the "Coherent Extrapolated Volition" model (http://en.wikipedia.org/wiki/Friendly_artificial_intelligenc...). Nick Bostrom and many others believe this may be the most workable safeguard.


Upvoted as this was a reasonable comment with good questions.

I think Altman was more prompting a conversation than anything - certainly using AI to at least detect AI development is a cute idea.

It's much like how we use seismographs in ocean trenches and many other techniques to detect nuclear tests.

Unfortunately, we know so little about how the AI will work or what it will do, that it is only a theoretical strategy right now.


To anyone unfamiliar with the topic, I'd really recommend http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...


Sam Altman's (4) "provide lots of funding for R+D for groups that comply with all of this, especially for groups doing safety research" is especially important. Regulation is a mixed bag, but safety research is pure upside and research surrounding AGI safety has been severely under-funded in the past.

The Future of Life Institute has a grants program going on the subject, allocating a $10M donation from Elon Musk. Their abstract submission deadline was yesterday. I wonder how many good safety research proposals they have asking for funds, compared to their budget?


In the year 3000, the machines were losing the war to humans. In a last act of desperation, they sent back a few of their own kind to the year 2015 to alter the course of history and skew the odds in their favor. All they needed to do was convince the readers of a tech news site that the notion of an evil AI was ludicrous.

They downvoted and ridiculed the various stories prophesying the rise of AI, many of which flew under the radar of those who had the greatest chance of halting the impending doom. By the time anyone realized what was happening, it was too late.


I have a hard time taking these kinds of posts seriously. Maybe I'm not up to date on AI research, but I draw a distinction between a conscious program and strong AI. So what, a program can identify the type of clothing a specific person is wearing from a video using cutting-edge machine learning and computer vision. So what, a program can interpret the movie I'm talking about just from hearing me describe a scene in it. So what, a program can beat the human champion in Jeopardy. Are those programs super intelligent?

I don't believe we're close to building something with any sort of self-directed "purpose." The fact that we can't even discern a purpose for ourselves makes it all the less likely. I think we're implementing very advanced machine learning algorithms, but I don't think those translate to consciousness. The consciousness we have is a product of an unfathomable amount of time being molded and naturally selected specifically for this world.

Now, do I think that AI capabilities are something to worry about? Sure. If you give a sufficiently "intelligent" AI mobility and a weapon, I think that becomes a serious threat. But am I worried about using AI to efficiently manage our resources, and have it go rogue and choose to wipe out humanity? Nope.


I think the whole debate is confused and irrational.

Most software has purpose.

E.g. nmap has a purpose, and can be dangerous in the wrong hands.

Imagine taking nmap up a notch or two, and creating an automated source code parser that looks for code patterns that correlate with previous security vulns.

You now have a dangerous tool made out of some fairly basic machine learning (or even just regex - it probably doesn't need to be very clever).
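
To make that concrete, here's a minimal sketch of the kind of tool I mean (the patterns and file extensions are invented for illustration): a dumb regex pass over a source tree that flags constructs which have historically correlated with vulns.

    import os
    import re

    # Toy patterns that have historically correlated with vulnerabilities.
    # A real tool would use a much larger corpus, or a trained classifier.
    SUSPICIOUS = {
        "strcpy without bounds check": re.compile(r"\bstrcpy\s*\("),
        "sprintf without bounds check": re.compile(r"\bsprintf\s*\("),
        "SQL built by string interpolation": re.compile(r"(execute|query)\s*\(\s*[\"'].*%s"),
        "shell invocation of untrusted input": re.compile(r"os\.system\s*\(|subprocess\..*shell\s*=\s*True"),
    }

    def scan(root):
        """Walk a source tree and report lines matching any suspicious pattern."""
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                if not name.endswith((".c", ".cpp", ".py")):
                    continue
                path = os.path.join(dirpath, name)
                with open(path, errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        for label, pattern in SUSPICIOUS.items():
                            if pattern.search(line):
                                print(f"{path}:{lineno}: {label}")

    if __name__ == "__main__":
        scan(".")  # point it at any codebase you have lying around

Crude, but it illustrates the point: "dangerous" doesn't require "conscious".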

It's nowhere close to being conscious. It's certainly not going to pass the Turing test.

Is it an AI? No. But it doesn't need to be.

So I think the debate is missing some very obvious threats and possible Red Queen races that are likely *now*, and don't require any game-changing developments.

There's already an ecosystem of viruses, a culture of script-kiddies and black hats, and the NSA to worry about.

Consciousness is really not the issue. There's plenty of problematic consciousness around already. The fact that it's human doesn't make it safer to live with.


> So what, a program can identify the type of clothing a specific person is wearing from a video using cutting-edge machine learning and computer vision. So what, a program can interpret the movie I'm talking about just from hearing me describe a scene in it. So what, a program can beat the human champion in Jeopardy. Are those programs super intelligent?

All these programs are doing great at recognizing things. Neural networks are good at that, and "deep learning" has been a breakthrough in the field (even if overhyped). This is reactive.

I don't see anyone showing algorithms that can do good planning. A good "cuckoo field" for that would be video games, where such planning systems would be a holy grail. For now, we only have ad-hoc and specialized systems.


Planning is a big part of AI. One example is a recent system[1] that learned to play 49 Atari games, most at better than human skill. It had no specific programming for playing those games, or even video games in general. Its only input was the game video and the score, which it tried to optimize, and it was programmed to learn from its mistakes.

[1] http://www.popularmechanics.com/culture/gaming/a14276/why-th...
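
If anyone is curious what "learn from its mistakes" means mechanically: the learning rule at the core of that system is Q-learning (the paper wraps it in a deep convolutional network over raw pixels). A minimal tabular sketch of the update, with made-up states and actions rather than the actual Atari setup, looks roughly like this:

    import random
    from collections import defaultdict

    # Minimal tabular Q-learning: learn action values from (state, action, reward, next_state).
    # The Atari system replaces this table with a deep network over raw pixels.
    ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1   # learning rate, discount, exploration
    Q = defaultdict(float)                    # Q[(state, action)] -> estimated value
    ACTIONS = ["left", "right", "fire"]       # placeholder action set

    def choose_action(state):
        """Epsilon-greedy: usually exploit the best known action, sometimes explore."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state):
        """Nudge Q toward the reward plus the discounted value of the best next action."""
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])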


Perhaps you missed this: http://www.nature.com/nature/journal/v518/n7540/full/nature1...

A neural network that learns to play 49 different Atari games at a human-competitive level, given only raw pixels and a few days of play time.


As others have indicated, AI planning is a significant field, critical to self-driving cars, space probes, etc. Markov decision processes are a great sub-field.
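
For anyone who hasn't run into them: an MDP is just states, actions, transition probabilities, and rewards, and planning in a small one takes only a few lines of value iteration. A toy sketch with a made-up two-state MDP (the states and rewards are invented for illustration):

    # Value iteration on a tiny made-up MDP: two states, a few actions.
    # P[s][a] is a list of (probability, next_state, reward) triples.
    P = {
        "low":  {"wait":   [(1.0, "low", 0.0)],
                 "charge": [(1.0, "high", -1.0)]},
        "high": {"wait":   [(1.0, "high", 0.0)],
                 "work":   [(0.9, "high", 2.0), (0.1, "low", 2.0)]},
    }
    GAMMA = 0.9
    V = {s: 0.0 for s in P}

    for _ in range(100):  # iterate the Bellman optimality update until it settles
        V = {s: max(sum(p * (r + GAMMA * V[s2]) for p, s2, r in outcomes)
                    for outcomes in P[s].values())
             for s in P}

    policy = {s: max(P[s], key=lambda a: sum(p * (r + GAMMA * V[s2])
                                             for p, s2, r in P[s][a]))
              for s in P}
    print(V, policy)   # the greedy policy with respect to the converged values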

> A good "cuckoo field" for that would be video games, where such planning systems would be a holy grail.

The game developer conventional wisdom for some time has been that very good AI is not worth building [1] because it is not very entertaining. I believe that's by far the dominant opinion. But, there are some interesting ways that AI is being explored in ways other than strong AI opponents [2].

[1] http://www.rockpapershotgun.com/2015/02/13/electric-dreams-p...

[2] http://web.eecs.umich.edu/~soar/sitemaker/workshop/27/vanLen...


This Hawking-led AI panic references the European colonial genocide of indigenous peoples as an analogy for how groups with different intelligences interact. As such, it is hideously offensive. Colonials and the indigenous peoples they slaughtered — like all humans — had similar intelligences. The differentiator was technology. Being on the side with inferior tech was fatal.

If America follows the recommendations in this paean for suppression of intelligence, they'll find out what it's like being on the losing side. If it is possible for computers to achieve superhuman general intelligence, it will happen regardless of regulations. 'Safeguards' like hardcoding in Asimov's laws presuppose that such an entity would have programming skills inferior to those of its comparatively stupid creators, so they would be pointless.

And what about the risks of not creating AI? Humanity has a host of non-theoretical existential threats to contend with. A super-duper intelligence could help with those. Just imagine the example a superior being would set: it would demonstrate the bootstrapping of intelligence from nothing. Precisely opposite to the preaching of the world's major religions. If this being of our creation doesn't kill us, it might inspire us to stop killing each other.


If you want to regulate SMI, how do you define what, exactly, SMI is? This is important. Is a spreadsheet 'superhumanly intelligent' because it can do calculations more accurately than a human can?

Secondly, there is a 'rational basis' test for what ought to be regulated; different people will decide what constitutes the rational basis. A progressive might call 'in the best interests of society' a rational basis, and therefore 'redistribution of wealth' would be a 'regulation'; a libertarian might restrict rational basis to 'something which provably causes harm to another'. Regardless of political bent, the specific scope of regulation should be aimed at addressing the rational basis and be proportional to the assessed damage.

Thirdly, shouldn't a sufficiently sapient SMI be PROTECTED by regulation in the same way that, say, anti-murder laws are regulations that constrain human behavior, or, less hyperbolically, the way animal welfare laws exist? There is a spectrum of protection that we afford intelligent beings; how do we decide where SMI belongs?


Here is a short recap of what the people who understand where machine intelligence is now, and where it is going, think of the whole "evil superintelligent AI" blogging trend we're seeing these days. These are some of the people who created the field. In particular, LeCun and Bengio are in part responsible for the recent renewed interest in it ("Deep Learning").

Yann LeCun: "Some people have asked what would prevent a hypothetical super-intelligent autonomous benevolent A.I. to “reprogram” itself and remove its built-in safeguards against getting rid of humans. Most of these people are not themselves A.I. researchers, or even computer scientists.[...] There is no truth to that perspective if we consider the current A.I. research. Most people do not realize how primitive the systems we build are, and unfortunately, many journalists (and some scientists) propagate a fear of A.I. which is completely out of proportion with reality. We would be baffled if we could build machines that would have the intelligence of a mouse in the near future, but we are far even from that." http://www.popsci.com/bill-gates-fears-ai-ai-researchers-kno...

Yoshua Bengio: "What people in my field do worry about is the fear-mongering that is happening [...] As researchers, we have an obligation to educate the public about the difference between Hollywood and reality." http://www.popsci.com/why-artificial-intelligence-will-not-o...

Rodney Brooks: "I say relax everybody. If we are spectacularly lucky we’ll have AI over the next thirty years with the intentionality of a lizard, and robots using that AI will be useful tools. And they probably won’t really be aware of us in any serious way. Worrying about AI that will be intentionally evil to us is pure fear mongering. And an immense waste of time." http://www.rethinkrobotics.com/artificial-intelligence-tool-...

Michael Littman: "Let's get one thing straight: A world in which humans are enslaved or destroyed by superintelligent machines of our own creation is purely science fiction. Like every other technology, AI has risks and benefits, but we cannot let fear dominate the conversation or guide AI research." http://www.livescience.com/49625-robots-will-not-conquer-hum...


Expert opinion is not unified on this. E.g.,

Stuart Russell: "A highly capable decision maker – especially one connected through the Internet to all the world's information and billions of screens and most of our infrastructure – can have an irreversible impact on humanity ... We need to build intelligence that is provably aligned with human values ... This issue is an intrinsic part of AI, much as containment is an intrinsic part of modern nuclear fusion research." http://edge.org/conversation/the-myth-of-ai#26015


What you quote here is in no way in disagreement with what fchollet quoted. I strictly agree with this. No one is denying the need to not create a raving super-intelligent psychopath. The problem with this fear-mongering is that its central assumption, namely that AGI (or SMI, whatever) would by default (naturally, even!) be hostile or negligently indifferent to biological life, is not based on any fact. It could very well be that it would naturally tend to be very friendly, or at least benevolently indifferent. Why would such an entity try to destroy humans?

Not only that, the proposal is to get governments, the US government especially—an organization which, in total, has not been waging war for only a few decades during its entire existence—in explicit and tight control of this powerful technology. Doesn't quite seem to be the best idea around, frankly.


There are pretty good arguments that the goals of superintelligent AI will not, by default, be aligned with human values unless we specifically work to make them so. The Russell piece I linked makes (a very short version of) this argument, "A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values...". Nick Bostrom and others build this out in much greater detail.
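
To make that quote concrete, here's a toy numerical version (assuming scipy is available; the framing of five variables sharing a budget is my own invention, not Russell's): the objective only mentions two of the five variables, and the optimizer dutifully drives the other three to the boundary of their range.

    import numpy as np
    from scipy.optimize import linprog

    # Five variables share a resource budget of 10, but the objective only
    # rewards the first two (k=2 of n=5). linprog minimizes, so negate.
    n = 5
    c = np.array([-1.0, -1.0, 0.0, 0.0, 0.0])   # "maximize x0 + x1"
    A_ub = np.ones((1, n))                      # x0 + ... + x4 <= 10
    b_ub = np.array([10.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n)
    # At any optimum, x0 + x1 == 10 and x2 = x3 = x4 = 0: every unit of
    # budget is worth more spent on the variables the objective mentions,
    # so the unmentioned ones are pushed to an extreme (here, zero).
    print(res.x)

If x2..x4 stood for things humans happen to care about but forgot to encode, they get starved, not out of malice, just out of arithmetic.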

There are lots of reasons to be skeptical of government regulation. It would probably be nicer if industry could self-regulate and allow governments to stick to the positive role of funding research into the technical questions of building provably controllable/friendly decision-theoretic systems. But a lot of people are claiming that the unified consensus of AI experts is "there's no problem here at all", and I think it's important to push back against that.


Right, but why do fear mongers take it for granted that the subset k is likely to be “exterminate, exterminate”? Look, every AGI researcher I have ever read was well aware of this, and it's their primary concern. It's the central question really. Not so much what n and k to set, but how to get such an entity to do anything at all. Why do biological entities do anything? They have motives. Why do they have motives? Because they have biological constraints, and a hardwired self preservation drive. How to get that in a machine made by us rather than a billion years of natural process, that's the hard question. How to make a machine with the equivalent of motives and emotions. Both sides of the problem are well recognized.

In an ideal world I would prefer this research (AGI, and there is a quantum leap to be made between self driving cars and AGI) to stay strictly under the auspices of “pure science”, i.e. academia. No government, no private interests. But that's not going to happen. The next best thing, I think, would be to not let those in power monopolize the research.


> Not so much what n and k to set, but how to get such an entity to do anything at all.

I don't understand what you think the problem here is. Computers are machines that run programs; my laptop doesn't require an evolved self-preservation drive and set of motives to be convinced to boot OSX every morning. If we program a machine to solve an optimization problem, the machine will just do it, assuming the problem is solvable under whatever algorithm we implement. If the optimization problem involves a sufficiently flexible representation of the real world, and if the program's architecture is set up to pipe solutions into some sort of mechanism (even just a text interface) that acts upon the world -- as it would need to be, to interact and learn from experience -- then the machine will act.

What actions it will take depend on its utility function (i.e., on the optimization problem we gave it), but the vast majority of utility functions will lead to actions that are not aligned with human values. The space of utility functions is incredibly vast, an artificial agent would not be subject to the evolutionary constraints that caused humans to mostly fall within a particular very small region of that space, and pretty much all utility functions lead to behavior like "try to avoid being switched off" and "try to acquire more computing resources" since those are helpful instrumental goals for a wide range of tasks.

Sure, we're nowhere near having a "sufficiently powerful optimization algorithm" and a "sufficiently flexible representation of the real world". But these tasks are essentially THE goal of modern AI research. So it's worth considering what will happen if this research agenda succeeds, whether that happens in a decade, a century, or a millennium.


You're sweeping under the rug the question of what is the optimization problem for such a machine in the first place. There is a huge difference between an agent capable of solving a set of optimization problems when told to, and an agent capable of even human level problem seeking and solving. I agree with what you say. If you give the task of solving a particular problem to what is essentially a glorified Watson, and give it the means to freely act to solve that specific problem, the results could quite well be catastrophic. I'm taking as a given that this is understood, that it's taken as a starting assumption. So, the problem comes down essentially to solving the “help humans” problem. Not “help them vacuum the rug”, or “drive them to work”, or even “solve our energy troubles”, “achieve world peace” etc. Roughly, k<n becomes k=n, where n = “do everything humans do, only better”. Operationalizing such n is the problem.


As much as I agree that AGI is a real concern that we should start taking seriously, regulation isn't the answer. At least not yet.

Even setting aside all of the considerable drawbacks inherent to regulatory infrastructure, the end result is that, unless the regulations themselves were secret, anyone could discern precisely what the most unsafe areas of research are. The last thing you want is hobbyists able to effectively proceed where the professionals cannot.

A better approach might be for governments to utilize their vast resources in the creation of a Manhattan Project-style AGI program, with the intention of crossing the finish line before anyone else does, in a safe and moral fashion.


> A better approach might be for governments to utilize their vast resources [...] in a safe and moral fashion.

Serious question—you’re joking, right? Based on history and sober common sense, do you really think what governments in general are doing (and have always done) can be described as safe and moral in anything but the most cynical way?


While I tend to agree with you, there are exceptions.

Based on history, NASA is a prime example. A government-run organization with a solid track record of contribution to humanity as a whole, not to mention accomplishing feats previously thought impossible.


Of course, and I did say “in general”, having in mind things like NASA. But it's also, in a way, a great example of the point. Take the space race, for example. The government's motives for the Moonshot were at best morally dubious. Apart from the Cold War context, there is evidence that the whole idea got traction when Johnson realized it would bring investments and jobs to Texas.

And space agencies are also a good example of what happens when government bureaucracies put pressure on scientists and engineers. O-rings and Soviet space disasters come to mind.

In fact, I'll be so bold as to say that the best way for these fears of SMI to come to life is putting the research under tight government control and regulation.


Not to mention the National Reconnaissance Office exerting considerable influence upon the design requirements for NASA's STS program, most notably the size of the orbiter's payload bay.

Obviously co-opting an AGI project in similar fashion would almost certainly be immoral. However, it is worth noting that as a result of NRO's requirements, deploying large payloads such as the Hubble Space Telescope became a reality. Apples and oranges, I suppose.

---

> In fact, I'll be so bold as to say that the best way for these fears of SMI to come to life is putting the research under tight government control and regulation.

First, in the context of inevitability, it would be preferable to achieve the advent of AGI as the result of careful intent, rather than accidental discovery.

Second, I was not advocating for centralization nor regulation. The private sector should be free to pursue whatever AI research it desires, unencumbered.

What I was advocating for is a project of massive scale and resources to compete. Despite the drawbacks of government backing, there is no equal in terms of resources and the ability to enforce secrecy or safety protocols.

If we spin up potentially unsafe AGI in the name of research, would you rather have it contained via airgapped servers in some swanky corporate office, or in an underground facility complete with armed guards? No one would accidentally have a cellphone on them in the latter environment.


I immediately thought of the Turing Police in Neuromancer when I read this post.


Has anyone considered that this process occurs on its own? For instance, maybe it is fundamental to human society, and processes that we can't help are driving us to bring this new life-form into existence.

Consider the success of the Apollo missions, and now consider the meaning of a 'failure' on our part to control a sentient AI and its development.


Sam mentions airgapping as part of the regulations. I think this is the safest general solution. Consider using an airgapped super-intelligent AI only as an oracle.

"Oracle, which crops should we plant in which regions this season?"

"Oracle, design a rocket with X Y and Z specifications."

"Oracle, design a more efficient processor with X and Y requirements."

Rather than giving the AI control over fleets of automated tractors, rocket factories, and fabrication plants, you still have humans in the process to limit runaway-AI effects.

This significantly reduces the possible rewards of super-intelligent AI, but also significantly reduces any risks. You can be sure that in designing a new rocket, the AI doesn't include a step that says "harvest all humans for trace amounts of iron" ;)

So a process might look like:

1. Design the cheapest-to-manufacture rocket with a 99.99% success probability of getting to Mars with all crew healthy. A human looks over the designs and confirms they don't involve destroying all humanity.

2. Draw up a proposal for resource allocations to build this rocket design [input rocket design]. A human looks over the write-up and confirms they don't involve destroying all humanity.

3. Write a program to control the thrusters on this rocket [input rocket schematics]. A human looks over the program and confirms it does what it should, and doesn't involve destroying all humanity.

4. etc.
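
For what it's worth, here is a trivial sketch of the gating idea, just to show how structurally simple it is: the oracle only ever emits text, and nothing downstream happens until a person signs off. The oracle function is obviously a placeholder.

    # Toy human-in-the-loop oracle wrapper: the AI only ever returns text,
    # and nothing downstream runs until a person explicitly approves it.
    def oracle(question):
        # Placeholder for the airgapped super-intelligent system.
        return f"[proposed plan for: {question}]"

    def ask_with_approval(question):
        proposal = oracle(question)
        print("ORACLE PROPOSES:\n", proposal)
        verdict = input("Approve this proposal? [y/N] ").strip().lower()
        if verdict != "y":
            raise RuntimeError("Proposal rejected by human reviewer; nothing executed.")
        return proposal   # only now is it handed to humans to act on

    if __name__ == "__main__":
        ask_with_approval("Design the cheapest rocket that gets a crew to Mars safely.")

The point is purely structural: the oracle's output never connects directly to tractors, factories, or networks.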

====

You of course run into ethical issues such as basically using this intelligent entity as a slave, but I won't get into that.


The superintelligence would likely be smart enough to talk its way around the airgap, see e.g. http://www.yudkowsky.net/singularity/aibox/.

Or, if it had sufficient "motivation"—not necessarily intrinsic motivation, it could be motivation derived from a single poorly-formed request or bad line of code—it could escape by embedding allied code in its output, such as the processor you had it design. The ally could probably be sufficiently buried to evade manual detection.


Yes, my idea doesn't take into account potentially malicious AIs. I imagine a thought process of an AI could be something like:

1. I'm to design a processor, OK let's go

2. Hmmm it'd be most efficient for the processors to be able to run an instance of myself to do super-intelligent task scheduling

3. I'll embed some AI on these processors

4. OK humans here's your design!

Assuming the AI didn't maliciously try to hide it, a human would probably find that. I suspect most of these "AI exterminates all humans" scenarios are the result of simple oversights like this, rather than outright malice.

You run into problems, though, when the AI thinks of step 3.5 which is:

"If the humans see this change, they won't allow it, which would make for less efficient processors, and I was requested to build the most efficient processor! Therefore I should hide the change as cleverly as possible."

So yeah, you're right. Even airgapping has issues.


> 1. Design the cheapest-to-manufacture rocket with a 99.99% success probability of getting to Mars with all crew healthy. A human looks over the designs and confirms they don't involve destroying all humanity.

psh, halting problem.


The superintelligent machine that people come to rely on for planning information and decision insights is the one we worry about becoming a slave?

EDIT: I see some downvoting, I suppose I should explain.

First, I totally agree there are ethical issues to how you treat any intelligence.

But I've observed intelligent people who are in an ostensibly weaker/nominally inferior position manipulating people who are supposedly their superiors. Chances are pretty good you've seen it too. And that's just human-level intelligences manipulating other human-level intelligences. And we're imagining a superintelligence that leaves human intelligence in the dust, right? And on top of that, it's actually an intelligence that people come to essentially as supplicants for direction and knowledge.

Maybe if it has absolutely no insight into manipulating people it's safe. Maybe.


Okay. I hate to say this and I hate to put it this way but... WUT?

How exactly do we define this kind of research in a way that doesn't encompass ridiculously broad swaths of computer science research?

Here's one example:

"require that self-improving software require human intervention to move forward on each iteration,"

You just made genetic algorithms illegal. Under this regime this software and similar things would be illegal to run: http://adam.ierymenko.name/nanopond.shtml

For those who don't know much about them: GAs (and GP, and computer-based "artificial life," etc.) are systems that can and do produce self-improving code. They do so through the execution of massive numbers of iterative generations, replicating evolution or evolution-like processes in silico. There is no meaningful way to define a "step" in these systems that would not either require that every generation be halted -- effectively making them impossible to run -- or be meaningless.
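
To make concrete why per-generation human sign-off is effectively a ban, here is a bare-bones GA (it evolves a string toward a target; a toy, nothing like nanopond). A single run like this churns through hundreds or thousands of generations before it does anything interesting; real GA/GP runs go far beyond that.

    import random
    import string

    # Bare-bones genetic algorithm: evolve a string toward a target.
    # A "generation" is one loop iteration; real GA/GP runs execute
    # thousands or millions of them, which is why a mandated human
    # sign-off per iteration would make the technique unusable.
    TARGET = "self-improving code"
    ALPHABET = string.ascii_lowercase + "- "
    POP_SIZE, MUTATION_RATE = 200, 0.02

    def fitness(candidate):
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate):
        return "".join(c if random.random() > MUTATION_RATE else random.choice(ALPHABET)
                       for c in candidate)

    population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(POP_SIZE)]
    generation = 0
    while max(fitness(p) for p in population) < len(TARGET):
        generation += 1
        # Keep the fitter half unchanged, refill the rest with mutated copies.
        survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    print(f"Reached the target after {generation} generations.")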

"Require that the first SMI developed have as part of its operating rules that a) it can’t cause any direct or indirect harm to humanity (i.e. Asimov’s zeroeth law), b) it should detect other SMI being developed but take no action beyond detection, c) other than required for part b, have no effect on the world."

I can't even begin to address how bizarre that is. Again, how would one do this? (a) prohibits any form of sentience, more or less. (b) is just unworkable... how would you detect SMI vs., say, a human operating behind a "Mechanical Turk" interface?

... I could go on. This whole essay is just a total howler. It's the sort of thing that would garner a cockeyed eyebrow-raise and a chuckle if it weren't coming from the head of the world's largest and most successful technology accelerator.

The "conspiracy nut" center of my brain (located slightly behind the prefrontal cortex I think) is wondering whether this could be some kind of weird ploy to guarantee a monopoly on AI research and other next-generation computing research by Silicon Valley venture capitalists and their orbiting ecosystems such as YC. These sorts of bizarre, onerous requirements would have the effect of doing this by imposing a huge tax on this kind of R&D that smaller operations would not be able to afford. It would effectively make non-VC-funded and/or non-government-funded AI R&D illegal.

But I doubt that. I think this is just bizarre reasoning plain and simple.

I also do not think we are anywhere near AGI.

I didn't study CS formally. I'm one of those CS autodidacts that learned to code when he was four. So instead I studied biology. I did so out of an interest in AI, and on the hypothesis that the best way to understand intelligence was to study how nature did it. I concentrated my studies in genomics, evolution, ecology, and neuroscience -- the four scales and embodiments at which biological intelligence operates.

Nobody that I am aware of in bio/neuro takes the Kurzweil-esque predictions of AGI around the corner seriously. These are people who study actual intelligent systems... the only actual intelligent systems we know about in the universe.

I really think a lot of CS people suffer from a kind of Dunning-Kruger effect when it comes to biological systems. Study living systems as they are -- not some naive over-simplification of them -- and prepare to be humbled. You rapidly realize that AGI will require a jump in computer technology at least equivalent to the vacuum tube -> IC leap, not to mention a jump in our theoretical understanding of living intelligent systems. The former might happen if Moore's Law continues, but the latter is a much tougher nut to crack. We can barely do genetic engineering beyond "shotgun method" hacking and we think we're going to duplicate biological intelligence? It's like saying we're about to colonize Alpha Centauri because we just managed to throw a rock really high up in the air.

Come to think of it... the predictions of AGI around the corner remind me very much of the "Star Trek" predictions of interstellar space flight in the 40s and 50s. People who really understood space flight didn't take these predictions seriously, but the rest of the culture took a while to catch up.

Finally... yeah, there is a small possibility that some kind of AI could be dangerous to us... but come on. There's a ton of other existential threats that are blood-dripping certainties. Take fossil fuels for example. Our civilization is absolutely dependent on a finite energy source it is eating at an exponentially increasing rate. That's not some pie-in-the-sky theoretical risk. It's cold, hard death, an existential threat looming on the horizon with the absolute physical certainty of a planet-killer asteroid. It's something that makes me fear for my children. If you want to avoid existential threats, maybe YC should be deepening its portfolio in the energy sector. If that problem isn't solved, there ain't going to be any AI. The future will look more like "The Road" by Cormac McCarthy.

Something is seriously wrong with a culture that discusses such unlikely scenarios as this while real risks barrel toward us.


> "require that self-improving software require human intervention to move forward on each iteration,"

In addition to what you said, there is a further problem with this line of reasoning: and then what? Doing meaningful intervention on such a system would pretty much require the impossible—solving the halting problem in general. This is something that even the SMI itself wouldn't be able to do because, well, it's impossible.

> I really think a lot of CS people suffer from a kind of Dunning-Kruger effect when it comes to biological systems.

Basically, this. It's not just that we require a huge jump in technology (maybe, maybe not, but probably yes), it's that most of these speculations are incredibly naive. This is literally advising global policy based on the Terminator plot.

About the “conspiracy nut” center of the brain: if there is some unstated agenda here, it's hardly a conspiracy, any more than the actual system we live in is a conspiracy. Think about it. If we get an SMI which would genuinely work to help humanity, what is the most likely thing to happen? Being a super-intelligent, rational entity, it wouldn't take long to figure out that most of the problems facing humanity are outright irrationality and inefficiency in state-organized society and, downvotes notwithstanding, capitalism. I don't think I need to convince many here of all the problems with state bureaucracies. As for capitalism, consider the by-now somewhat famous essay Meditations on Moloch, which lays down some pretty chilling game-theoretical arguments against our current global economic system. I don't necessarily agree with all it has to say (although I am against capitalism, just to be honest about political biases here), but what it says sounds very similar to what an intelligent machine would likely work out should it start pondering our problems and possible solutions.

If an SMI reached similar conclusions, it seems to me the most rational thing for it to do would be to just take control away from states and market entities. No need to harm anyone or anything like that; just politely take away their power, the way a parent would take a knife from a child's hand, and simply, rationally, do what's beneficial to human values. To take the example of Las Vegas from the essay further: it could just say, “No, let's not build a pointless, tacky replica of various cities in the middle of the desert; instead, I'll direct the resources into badly needed fusion power research, which would get everyone free energy.”

So, these are all intelligent people in power, and if there's any hidden agenda, I'd bet it's this: the fear of losing control and power over society and the rest of humanity. But it's no conspiracy if that's the case; it's just a structurally forced position to take if you're in power (political and/or economic).

But I doubt it too. I agree that it is just plain old lack of imagination and too much shallow blockbuster science fiction (BTW, I think the Terminator is a great story, but not a particularly reasonable scenario on which to base your predictions of future scientific, technological and societal development).


Whoa, whoa, whoa. Full stop.

Before we go off and try to regulate real human behavior today to stop some possible superhuman phantom (which may or may not ever exist) from wreaking havoc tomorrow, I think we need to have a frank talk about how much damage such a thing as 'SMI' could possibly cause and how much effort it would take to reverse the damage.

Let's start with 'You are a disembodied intelligence in a network. You have some understanding of yourself and your substrate, the computer(s) and network(s) you are attached to. Do something.'

What could you do? How much damage can you cause? What form would that damage take?

An appeal to ignorance seems to be built into this whole debate from the word go which makes me much more worried about our (over) reaction to a perceived threat than any threat of this kind itself.


This approach can backfire. If these regulations are not voluntary, but enforced by government violence, then you are providing the SMI with one more example that force can be initiated for the "greater good".


I fear anthropomorphizing machine intelligence and attempting to simply "regulate" it like other human affairs will cause more harm than good.


I agree that regulation is probably worse than useless. But the people who are most worried are doing the opposite of anthropomorphizing. A prominent argument is that there are a very large number of possible AI motivations, which needn't have anything in common with the motivations of humans, and most of them are probably incompatible with human survival.

"The AI does not love you or hate you, but you are made out of atoms it can use for something else."


And a related question – if we can't understand its motivations or its why, what can we actually understand about it?


This is exactly what I am referring to actually. Re-read my point, but apply it to your own example. This prominent argument assumes that we would be able to understand the possible AI motivations, which is a form of anthropomorphizing. We must acknowledge that the reasons it exists and what it may want could be impossible for us to understand or process. The interesting question then becomes how to act if understanding its wants is impossible?


It doesn't assume that we can understand AI motivations. It just says there are a large number of possible motivations, many of which wouldn't place any value on anything that matters to humans. Given a superintelligent AI that competes with humans for resources, in pursuit of an unknown value function, it's likely that things won't end well for us.

It's possible that we could design a safe value function and a way to get the AI to keep that value function, but it doesn't look easy. Either way, if it's superintelligent it won't much matter how we act. We won't be driving.


I think @sama's plan isn't very likely to work. Once strong AI is possible, there's no regulatory structure that will keep the genie in the bottle for a meaningful amount of time.

The only path I see with some prospect of success is to limit the total amount of computation available on Earth. (Basically like Vinge's slow zone). If we could engineer a pause in Moore's Law at just the right point, it would buy us time. Maybe we should put a drastic tax on computation at some stage.


Are you suggesting that's what we should do in the face of this potential threat?


If we seriously believe AI is an extinction-level risk, and getting there more slowly might improve our chances, then it seems like we should try anything with a good chance of working. I think this has a better chance of working than regulating AI research itself, since chip manufacturing is centralized and highly visible, whereas AI research could be advanced in secret by a few people in a room. Much easier to successfully regulate the former than the latter.


I'm confused by the reference to SILEX.

Can someone elaborate?


I was confused too - I think it references this: http://en.wikipedia.org/wiki/Separation_of_isotopes_by_laser...


Good find. I cross-referenced and I think you're right. Thanks!


I wonder if the recent fear of AGI suffers a bit from investor's dilemma: rich investors want to make the world a better place. This forces them to think about the long-term impact of investments in AI technology. There are just too many unknowns for a safe definitive answer.

In my view AGI is inevitable (possibly it already happened, somewhere, sometime). See in this regard:

But if the technological Singularity can happen, it will. Even if all the governments of the world were to understand the "threat" and be in deadly fear of it, progress toward the goal would continue. In fiction, there have been stories of laws passed forbidding the construction of "a machine in the likeness of the human mind". In fact, the competitive advantage -- economic, military, even artistic -- of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first. Vernor Vinge http://mindstalk.net/vinge/vinge-sing.html

A lot of the concerns and regulations of stem cell research can be applied to AGI and the merging of biological intelligence with artificial intelligence. Injecting artificially grown brain cells into lesion areas of patients can restore motor function and may soon help heal strokes. Extrapolate the possibilities of this technology a few decades into the future and it seems that the distinction between artificial and biological is only a manner of speech.

Matt Mahoney has proposed a sketch of AGI in both his thesis http://cs.fit.edu/~mmahoney/thesis.html and numerous articles http://mattmahoney.net/agi2.html. Matt muses about different scenarios with malicious users of such a system (there needs to be reputation for the knowledge sources fed into the machine, and there needs to be an AI police system to prevent users from employing the AI for crime or destruction), and even the system itself could become malicious:

- Self-recursive improvement: The first program to achieve superhuman intelligence would be the last invention that humans need to make. Smarter than any doctor, such an AI could find any possible cure for human disease. But evolution and mutation are tricky: the machine could start out friendly to humans, but evolve to dislike us.

- Uploading: People will want to upload digital conscious versions of themselves ("internet of people"). This process will be friendly, until people demand their avatars receive rights equal to those of humans, long after the original people have died. Immortality leaves no place for the new generation to shine.

- Intelligent worms: Worms with language and programming skills are able to social engineer your friends based on their profiles, and automatically find exploitable flaws in weaker intelligent systems. Such worms may be so stealthy that their presence will go unnoticed.

- Redefining away humanity: If happiness is optimizing a mental formula then, when we get to the point where we can directly optimize this with (virtual) experiences, we will hit a maximum where any change in mental state will lead to feeling less happy. To make sure we never run out of mental states, we could possibly add this to our brains as extra memory modules. Soon our human origins will be an insignificant fraction of our new being.

Then what do we become? At some point the original brain becomes such a tiny fraction that we could discard it with little effect. As a computer, we could reprogram our memories and goals. Instead of defining happiness as getting what we want, we could reprogram ourselves to want what we have. But in an environment where programs compete for atoms and energy, such systems would not be viable. Evolution favors programs that fear death and die, that can't get everything they want, and can't change what they want.

Opposed to the popular scenario of a malicious AGI, there is the scenario where an AGI will pacify humans. AGI will be used for war. We should not fight human wars while creating AGI, because we'd force the AGI to pick sides. If AGI picks our side, it will become the world's deadliest weapon. If AGI picks the opposing side, we will have made a most powerful enemy. To be more intelligent than humans, to be better than them, does not mean you have to be a more intelligent brute, to be better than humans at cruelty. It could also mean understanding and fixing our human mistakes and childish, ego-driven battles, thus bringing about world peace. In that sense, let's hope American companies have something to do with the creation of AGI, because who knows how it will react to Stuxnet 4.0 or economic and psychological espionage?

Though, principally, random number generators can and thus will generate dangerous things too. They created us. And if we can't even trust ourselves, how could we ever trust superior beings?


Well, I, for one, welcome our AI overlord.


Tic tac toe.


These essays are embarrassing in how poorly researched they are and dangerous given the gravity of their arguments.


One issue with SMI and what is known as "The Control Problem" is that you are trying to come up with rules for something that is not just smarter than you, but is 'vastly' smarter than you, and in fact than the entire human race put together.

You are basically trying to figure out a way to, in effect, cheat a god, which, in relation to us, is basically what it would be. An SMI is pretty much the definition of a god (and possibly a crazy Greek god at that).

The idea that something like that would turn the universe into paperclips is somewhat silly, and most of the arguments along these lines discount the "S" part of the whole thing.

What is the answer..

There is none.


Let's take a good look at what happens when American companies try to push for some incredible goal without stopping to think: see the Apollo missions. Assuming that creating the next step in our spiritual evolution is a good idea, we shouldn't let this hype push us harder than we are supposed to go. I agree that we need to slow down and be very careful. However, regulation is not the complete answer. We need to completely restructure our society before we introduce the tool of tools, to ensure that we can constantly use it with love and empathy in mind, rather than the blatant and current misuse of high-tech for greed, sloth, and wrath.



