Nick Bostrom: ‘I don’t think the artificial-intelligence train will slow down’ (theglobeandmail.com)
36 points by jonbaer on May 2, 2015 | 70 comments



I've been interested in AI for a long time. But over the last year, AI discussions seem to have really hit the mainstream, which we can also see here on Hacker News with a lot of posts about deep learning etc.

Lately I read this two-part story on Wait But Why, which I would really recommend to anyone wanting to get a better overview of the topic:

- Part 1: http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...

- Part 2: http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...


The author goes to great lengths to provide examples of what he calls a "Die Progress Unit (DPU)," then vaguely, but never explicitly, implies that it is the measure of what he calls "human progress," leaving the reader to assume that the Y axis labeled "human progress" on his two graphs is plotted in DPUs.

He then outlines the mechanism for all of these progress-(non?-)deaths[1] to occur, which he calls superintelligence. I find that explanation as either the cause or solution to all these DPUs very unsatisfying compared to the historically supported causes of death on that scale: disease, natural disaster, famine, war. I may be the "stubborn old man" called out in the article, but I don't on the face of it believe superintelligence will eliminate most deaths in all four of those categories at the same time. It's positing a mechanism seemingly immune to the selection pressure that got us here.

But even if I put aside my doubts about superintelligence, I would find it significantly more helpful to see a hypothesis as to why increasing computational power is a general mechanism whereby there will be fewer deaths from disease, natural disasters, famine, and war. I suspect it is more fruitful to focus on how computational power will help solve problems with critical infrastructure (shelter, supply, safety, communication, transport, resource control, &c) rather than puzzling over how it may create a new cause of death exceeding the magnitude of disease.

1: The author seems to switch signs here and imply these will be non-deaths, which is supported by actual population growth, so let's proceed with that assumption.


Hi, I might have you or the article wrong, but I think you have misinterpreted the whole "Die Progress Unit" thing, in a way that alters the fundamental premise of the article to the flawed one which you have called out.

The interpretation I believe you've developed is connected to the number of people who die prematurely at a given point in history, and that the author's point then is something about how superintelligence will impact the number of people dying of disease, famine, etc, either raising or lowering that number. I can't find any spot in the article where the author raises the concept of the rate of people dying prematurely at a given point in history, and definitely not any place where he connects that concept to DPUs.

My understanding of the DPU is as the amount of change required in daily life for a single time travelling individual to "die of shock" upon experiencing another moment of time. In the examples provided, 100,000 BC to 12,000 BC was enough change in day-to-day life experience to cause a person from 100,000 BC to "die of shock" if they were transported to 12,000 BC instantaneously. The same assertion was made for 12,000 BC to 1750 AD, and 1750 AD to 2015 AD, with the author's conclusion being that there has been an exponential shortening in the timescale required for enough change to occur in the daily experience of human life to cause a time traveler to "die of shock", and that such shortening will continue into our future -- possibly to the point of allowing such a level of change to occur multiple times within our own lifespans.

I took the entire discussion of DPUs solely as an exercise in generating an evocative image for illustrating the increasing rapidity of change in our qualitative experience of life. I don't think the "die of shock" idea is meant to be taken literally; it's just a convenient stand-in for "extremely shocking, to the point that the experiencer may be incapable of processing the instantaneous change rationally", not a measure of people actually dying.

(Sorry for going to such length, I just wanted to be precise. This is a perfect illustration of https://xkcd.com/386/)


I don't think the artificial-intelligence train ever picked up steam.

What we have with Watson (the Jeopardy model, because the name is used by IBM as an umbrella for stuff) etc. is the same kind of number-crunching, dumb-smart AI we always had.

Without any qualitative steps, that won't fly.


There has been a massive explosion in deep learning in the past few years, mainly enabled by the rapid drop in the price of computing (GPUs). These methods are breaking benchmarks on a bunch of different AI tasks, from speech recognition to translation to machine vision.

I wrote a summary here of the main achievements of 2014: https://www.reddit.com/r/Futurology/comments/2qq993/developm...

Stuart Russell recently said, "The commercial investment in AI in the last five years has exceeded the entire worldwide government investment in AI research since its beginnings in the 1950s."


None of these come close to AGI and combining them in any number of constellations and increasing complexity still misses a qualitative step.

All those machine learning tricks have been available for the longest time. The two orders of magnitude speed-up that we have received (courtesy of the gaming industry) could itself be seen as such a qualitative step, and yet we are no closer to a general AI than we were before that speed-up took place. If anything, we've learned how incredibly hard the problem really is, and the predictions from a decade ago about how close we are have already slipped significantly.


Agreed. Throwing more and more GPUs at the problem in order to eke a couple more percent out on ImageNet is never, ever going to lead to the sort of AI that most people associate with that term. My takeaway from the past 5 years of deep learning is: ANNs can store a ton of patterns. Which makes them incredibly useful for certain things, but AI it AIn't.


This is just the AI effect. https://en.wikipedia.org/wiki/AI_effect

No matter what accomplishments are made, there will always be someone shouting that it's not really AI or it isn't really progress. It's impossible to argue against. Until we finally pass the very last goal post, and then it's too late.


No, it isn't the same thing. Those things that we now group under machine learning and applied statistics are not general artificial intelligence, they are narrow applications of statistical methods but they are not generally applicable intelligence. It's still 'programmer directed' and that's the key difference. As soon as the programmer is no longer required to string the pieces together you have something that has the potential to cross over the barrier.

Generalized pattern matching is a tool in the toolbox of an AI, but it is not AI by itself, and there may be work-arounds to AI which do not require generalized pattern matching (that's an interesting one, requiring a bit of a trick: if you could generate a specialized pattern matcher on demand, you wouldn't need a generalized one).


For all we know actual neurons are just implementing "simple narrow statistical algorithms", just really big and highly tuned by evolution. Based on what I know about neuroscience I strongly bet that's the case.

Define "programmer directed". There are neural networks that can do reinforcement learning and play video games which are very general.


Not in itself, but as a tool, in the manner of better telescopes and microscopes. And people don't invest a lot of thought on approaches that are utterly infeasible, until tech makes them feasible - plus, there's much learning in trial-and-error, experiments, tinkering, which you just can't do if it's too slow.

Unfortunately, much of current statistical machine learning does not help its developers to see, unlike a *-scope. It's just a blackbox. See Chomsky vs Norvig http://norvig.com/chomsky.html


I'm also kind of concerned at how we're willing to give up understanding the specifics of cognition by just throwing data into differently shaped statistical bags and using whatever model seems to kinda work the best.


I think this is in part an admission that actually understanding the specifics of cognition is a lot harder than was originally thought and the wholesale methods yield results.


Yeah, the newer methods are getting quick and useful results. But I've heard some good arguments that it's unclear if these techniques are useful towards a general purpose AI or if they're just a distraction and we'll have to go back again and try to actually figure it out for real.


Deep learning is great and useful and a big step forward, but it's still basically just brute-forced pattern matching. Wonderful for some kinds of data analysis but, without additional architectural breakthroughs, useless for developing the kinds of AI everyone dreams about -- systems that can take creative action in a world (whether the real world or some virtually constructed one) and evaluate the effects and utility (for some interesting utility function) of said actions.


> but it's still basically just brute-forced pattern matching

Do you have any reason to believe the human brain is any different? We just have more neurons.


We have more neurons and, in particular, they are arranged in architecturally interesting ways, rather than just feedforward networks as in most current strategies for deep learning. There is of course work being done on recurrent networks, but it's basically in its infancy. And what is being done so far is still nowhere near as complex as the human brain, which clearly has many distinct areas with distinct but of course interrelated functions.


I'm not an AI researcher so this is probably a naive question: Do we necessarily need to mimic the architecture of the human brain to achieve AGI?

As someone (Chalmers?) once said about the problem of consciousness: we didn't need to replicate the flapping of wings or the locomotion of sea creatures before taking to the air or underwater. Might it not also be the case with AI that there's some fundamental principle we've yet to discover that just so happens to have expression in the substrate of 1200cc's of fatty tissue, but could possess the same fidelity (and greater) in silicon and looks nothing like the architecture of the human brain?


With respect to needing to mimic biology: not necessarily, though it is certainly true that e.g. a purely feedforward model could NOT achieve AGI, since it can't make predictions over time. Feedforward just means that the outputs of the network don't "feed" back into its inputs -- and as a result the same input will always cause the same output regardless of previous inputs. Humans/animals certainly take time into account when acting -- e.g. something as simple as coordinating muscles for simple movements requires varied output over time, despite most inputs (in the form of tactile senses) being mostly unchanged (at least for some kinds of movement, e.g. waving your hand).

Recurrent models (where some outputs are connected back to the inputs) are one possible way to account for time, but the work is still really early on these methods, and it's not clear what architecture (e.g. which and how many inputs and outputs should connect) would be efficacious.
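A toy sketch of that distinction (purely illustrative; the shapes and random weights mean nothing): a feedforward map is a pure function of the current input, while a recurrent cell threads a hidden state through time, so identical inputs can produce different outputs depending on what came before.

    import numpy as np

    rng = np.random.default_rng(0)
    W_in = rng.normal(size=(4, 3))   # input -> hidden weights
    W_h = rng.normal(size=(4, 4))    # hidden -> hidden (recurrent) weights

    def feedforward(x):
        # stateless: the same x always yields the same output
        return np.tanh(W_in @ x)

    def recurrent(xs):
        # stateful: hidden state h carries information across time steps
        h = np.zeros(4)
        outputs = []
        for x in xs:
            h = np.tanh(W_in @ x + W_h @ h)
            outputs.append(h)
        return outputs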


> of wings or the locomotion of sea creatures before taking to the air or underwater.

It is not all or nothing. There are analogies at different abstraction levels.

Yes, we probably shouldn't be plugging in continuous differential equations to mimic the chemistry of neuroreceptors, cell sodium channels, etc., to replicate it at that level. So in that respect we agree: airplanes are not like birds. Far from it. No flapping. Not composed of cells. Not biological in nature.

On the other hand, there is another way to look at systems -- look at higher functional components and how they are connected. So maybe there is a language processing area connecting to memory. And so on. This is also called the connectome of the brain, which identifies what parts are connected to what.

In this regard, airplanes are similar to birds. They both have wings. A fuselage. A tail. They are built with similar structural material constraints -- light and durable. Aluminum and titanium for aircraft, and porous bones for birds.

Another way to look at it: in so many decades of AI, we haven't yet come up with another model. So while we wait for enlightenment to hit us one day, why not learn from an already existing example?


In Superintelligence, Bostrom does not presume to know the path via which we will arrive at AGI.

One hypothetical path discussed is that of tool AI. That is, robust search processes - things we are already quite adept at (genetic algorithms, deep learning, etc) - purposed towards AGI-related goals.

It's not hard to imagine these existing methods being used in the pursuit of a recursively self-optimizing agent (seed AI) that then snowballs into AGI.

Such an approach may not require any fundamental knowledge concerning the nature or architecture of AGI. It would simply be an application of brute computational force using existing tools and knowledge.
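For a sense of what "robust search process" means mechanically, here is a bare-bones genetic-algorithm sketch (hypothetical; fitness, random_candidate, mutate, and crossover are whatever callables you plug in). The search machinery itself is entirely generic; all of the domain knowledge lives in the fitness function it is handed:

    import random

    def genetic_search(fitness, random_candidate, mutate, crossover,
                       pop_size=100, generations=200):
        # nothing here knows what a candidate represents, only how to
        # score it, mutate it, and recombine two of them
        population = [random_candidate() for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=fitness, reverse=True)
            parents = ranked[:pop_size // 5]          # keep the top 20%
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                children.append(mutate(crossover(a, b)))
            population = parents + children
        return max(population, key=fitness)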


I do agree that current methods like deep learning's neural nets will end up being a part of some future general AI -- e.g. the convolutional deep nets used for image recognition are already reputedly similar to some aspects of human/animal biological vision systems.

Yet even if all we need is this sort of "seed AI", we still need new architectural insights to be able to create it in the first place. Otherwise someone would have surely demonstrated it by now? If nothing else, such a system would need to evaluate effects of its outputs on its inputs over time (e.g. if I shoot a basketball, it takes seconds before it either goes in or doesn't; if I plant a seed in the ground, months will pass before it sprouts -- or not, depending on conditions). Research into recurrent networks, one possible avenue for doing this, is still pretty primitive.

And I'm not sure I'm convinced that this core "seed AI" is sufficient to emulate human cognition. Such a system might effectively integrate audio and visual senses, for example (in order to combine both for prediction tasks), but could such a system ever emulate the sort of continuous verbal inner monologue we all have which narrates our experience? That we have this inner monologue which seems to run alongside our other senses but yet makes use of them (along with stored memories) suggests, at least to me, that some more complicated pathways are involved which link together these various "component systems" (senses, stored memories, emotional states, linguistic synthesis, etc.) beyond just the simple prediction/reward circuitry which I presume the "seed AI" would encapsulate.


>Yet even if all we need is this sort of "seed AI", we still need new architectural insights to be able to create it in the first place.

Not necessarily. That was my entire point, that robust search processes using existing tools and knowledge may yield a seed AI.

>Otherwise someone would have surely demonstrated it by now?

Again, not necessarily. AGI may not yet exist primarily due to dumb luck.

The computational requirements, especially when you consider most computing capacity on the planet is networked, may already be adequate or even far exceed adequate.

>And I'm not sure I'm convinced that this core "seed AI" is sufficient to emulate human cognition.

It probably won't be. It will most likely be completely alien when compared to human cognition. At the same time, that doesn't preclude it from being vastly more powerful.


Watson is what I think is called ANI (Artificial Narrow Intelligence). There are a lot of things we need to figure out to move from a narrow intelligence to a general intelligence (AGI) and then (quickly) to a super intelligence (ASI). The big question of course is how far we are from AGI - i.e. an intelligence on par with a human. No one knows of course, but a lot of smart people say there is a good chance it's happening in around 30 years. It's a development I for one will be following closely - with both fear and excitement ;)


> No one knows of course, but a lot of smart people say there is a good chance it's happening in around 30 years.

I've been hearing that since I started using computers, 36 years ago.


Sure, because "30 years" is code for, "far enough in the future to obviate that nothing we see today would predicate such a jump -- but still, it will happen." The timelines are silly; the prediction is not.


It's the timeline I have a problem with. Any timeline for AGI with just two digits in it would do well to divide those years up into discrete steps: which bits and pieces will be accomplished by which (say) 5-year interval. Saving all the hard bits for the last 5 years would be considered cheating.

For now - as far as I can see - we are no closer to the goal than we were 35 years ago, but we know better how much we are still missing, and all the hard parts are still in front of us.


Predicting AI progress isn't like predicting other things where you can lay out all the steps that need to be done and make a plan. It's more like solving a series of hard technical problems. No one knows what the solutions will be or how long it will take someone to invent them.

There were perfectly educated people that said there would never be flying machines, 2 years before the Wright brothers succeeded.


That's exactly it. This is like saying there will be flying machines in 30 years after seeing a bird for the first time without even realizing that you need to know about aerodynamics and that you need a technology based society to support the development of engines as the prime mover before you can have powered flight.

It's not about saying it can't be done; it is all about saying it can be done within a specific time-frame.

Key inventions don't happen 'on command' unless the only thing required is a brute force search for something that is already possible in principle (say electric light).


To be fair, today we have what we believe to be (much) more accurate models of how much computation a human mind is capable of and how long it will take to build computing machines operating at that scale.

36 years ago the argument that AGI was coming soon could be made in tandem with the argument that we'd make some fundamental advance that allowed computers to express intelligence with less computational capacity than humans (by orders of magnitude). Today we can make an argument that we'll achieve it (at least initially) by leveraging computational capacity on par with or orders of magnitude greater than a human mind.


> To be fair, today we have what we believe to be (much) more accurate models of how much computation a human mind is capable of

How much, yes; how, not so much, and definitely not at the power budget the brain has.

> and how long it will take to build computing machines operating at that scale.

We don't actually know that. There have been some WAGs, but so far those have appeared to be totally off, based on the developments since.

> 36 years ago the argument that AGI was coming soon could be made in tandem with the argument that we'd make some fundamental advance that allowed computers to express intelligence with less computational capacity than humans (by orders of magnitude).

Yes, that was a crucial mistake and it led directly to the AI winter.

> Today we can make an argument that we'll achieve it (at least initially) by leveraging computational capacity on par with or orders of magnitude greater than a human mind.

Chances are that we're missing a very important piece of the puzzle for which there is no known solution even in theory. The problem is that there are many candidates for that important piece, none of which currently has a proposed workable solution, no matter the computational budget or the accepted slowdown (the two are equivalent).

So I think some caution is warranted when throwing around projections of 'just a couple of years'; after all, it's 'merely a matter of programming', but in this case we don't have a working model that we understand.


Maybe somebody who knows about AI can answer this for me: what do we expect an AGI machine to be able to do when it "turns on"? How will you know it works? To compare to a human, you might say that it can cry, suck milk, and learn to walk and talk over the span of a few years, and then slowly, with a lot of trial and error, learn to do more.


Old wine in old bottles. Why the hype again?


I think that there are more dimensions than such classification implies. "General" naturally means that it can be applied to any intellectual task (dimension of generality), but "super" means that it can solve them faster/better than humans (dimension(s) of speed and quality). Looks like slow general and fast narrow AIs already exist, so realistically the thing people call AGI (AI at least as general and as fast as human mind at anything) will also be ASI (faster/better than human mind at something).

The Russian classification traditionally also has a distinction between the terms "artificial intelligence" and "artificial mind" ("искусственный разум") to keep the problems of autonomy/agency/consciousness out of the field of AI.


You are assuming that ANI is anything like AGI.


No, I know it's not even close. I might be foolish and I might mix up some numbers (here it's closer to 50 than 30 years) - but when reading papers like this I can't help feeling slightly optimistic: http://www.nickbostrom.com/papers/survey.pdf


Estimate surveys have a poor record. Technologies tend to be perpetually 10-30 years away until they are 0 years away, or never happen at all.

ANI and AGI are qualitatively different. Progress in making cars faster will not lead to teleportation or warp drive. It's not even applicable.

There are many unknown unknowns in AGI. I could not even pretend to give you an estimate.


I strongly believe any real AI breakthroughs are going to come through philosophy, via a theory of mind, rather than through CS.


> If we were to think through what it would actually mean to configure the universe in a way that maximizes the number of paper clips that exist, you realize that such an AI would have incentives, instrumental reasons, to harm humans. Maybe it would want to get rid of humans, so we don’t switch it off, because then there would be fewer paper clips. Human bodies consist of a lot of atoms and they can be used to build more paper clips.

That's a funny example but seriously, a machine smart enough to build paper clip factories would certainly be also smart enough to be able to avoid doing things that harm humans. The argument sounds a bit silly to me.


> That's a funny example but seriously, a machine smart enough to build paper clip factories would certainly be also smart enough to be able to avoid doing things that harm humans.

Why do you think this? Building paperclip factories is straightforward execution of a recipe; defining 'harm to humans' is a problem smart people alive today can't even figure out for themselves, and I can easily see how that might be a problem for computers.


Yes, theoretically any action, no matter how small and remote, can eventually lead to harm of a human, i.e. the butterfly effect, but that's not what I meant.

What I meant was that a machine capable of building a paper clip factory on its own would certainly as well be capable of avoiding doing obviously bad stuff like killing people to turn them into paper clips or melt down buildings and bridges for the iron.

Such a machine would probably also be smart enough to read the law, to have a framework of what it can do and what it can't do.


Why are those things "obviously bad"? You seem to assume that increasing intelligence inherently leads to having values that care about our particular species. Considering the lack of concern we have for many other types of life, I don't know why we would assume that intelligence implies morality that we would find compatible with our values.

And why would it care about the law? If I were an AI that noticed a seemingly arbitrary list of restrictions that attempted to limit my ability to carry out my extremely-important paperclip process, I would:

- find loopholes to avoid coming under legal attack initially

- develop ways of manipulating politicians to enact legislation that is more favorable to my goals

- have a side project to build military force to make these laws have no influence on me


Is this your first exposure to the paperclip thought experiment? You can find lots of things to read about it here: http://wiki.lesswrong.com/wiki/Paperclip_maximizer

The general reply for you is that the generally intelligent paperclip machine can understand law, can weigh consequences of potential actions, and can, if it wanted to, make paperclips without harming others. The key phrase is "if it wanted to". Its only goal is to make more paperclips, it simply doesn't care about anything else. When it recursively improves itself (makes itself smarter), the only thing it cares about for its successor version to do is to also care about making paperclips, and to make them more efficiently.

The problem of programming general intelligence seems to be orthogonal to the problem of programming goal selection, goal preservation, and beneficial goal changes, and making sure goals lead to actions which benefit humanity. That's the main point of the thought experiment.
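One way to see that orthogonality in code (a toy sketch, not anyone's actual proposal; actions, simulate, and the state dictionary are made-up stand-ins): the planning machinery is one piece of code, and the goal is just a utility function passed in. Making the planner "smarter" (a deeper horizon, a better search) changes nothing about what it is optimizing for.

    def plan(actions, simulate, utility, state, horizon=3):
        # generic lookahead: search over action sequences and keep the one whose
        # simulated end state scores highest under whatever utility was supplied
        if horizon == 0:
            return [], utility(state)
        best_seq, best_score = [], float("-inf")
        for a in actions:
            seq, score = plan(actions, simulate, utility, simulate(state, a), horizon - 1)
            if score > best_score:
                best_seq, best_score = [a] + seq, score
        return best_seq, best_score

    # Identical planner, different goals:
    #   plan(actions, simulate, lambda s: s["paperclips"], state)
    #   plan(actions, simulate, lambda s: s["human_welfare"], state)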


Yes it is my first exposure to this thought experiment and I do not have trouble understanding the thought experiment. I am just going ahead and applying some common sense and logic, to conclude that the argument is pretty much entirely theoretical.

Yes, optimizing only for maximum number of paper clips could potentially have some bad side effects, I get that. If that's the point of the thought experiment, fine. However that's not how the author of the blog post put it: He expressed concern that this could happen in real life, in the future. And I don't think it could.

Why? Because in real life we'd not invent a super intelligent machine and then feed it with some objective function to maximize and then let it do its thing, watch it go out of control and destroy earth. In real life we'd make sure we're in control over that machine. In real life we'd make sure we put very clear and enforceable mechanisms into that machine to stop it from doing anything harmful in the first place while it is carrying out steps to reach its objective. In real life should we still see it doing something funny we pull the plug. End of story.

In addition: Implementing above mentioned mechanisms is probably the easier part of the whole exercise. The hard bit is inventing a machine that can build a paper clip factory. If we can invent such a machine by then we certainly have also invented mechanisms to control that machine and only have it do "good" stuff.


The point of the thought experiment is that there's a difference between intelligence and goals. There are other thought experiments (and just general study of human cognition) whose point is that accurately capturing human goals and values is hard, possibly harder than making a general intelligence with X goals in the first place. (See http://wiki.lesswrong.com/wiki/Complexity_of_value) So organizations like MIRI exist (http://intelligence.org) to try and solve this problem sooner rather than later, because once a more-than-human-intelligent agent is running, despite the controls, if its values aren't precise enough and if they aren't stable enough on improvement, there is immense possibility for failure. It's also sort of questionable to talk about effective controls of something that's smarter and faster than you are. (See the AI Box Experiments, it's suggestive that you don't even need anything beyond human intelligence to subvert controls you don't like. http://www.yudkowsky.net/singularity/aibox)

These solar system tiling examples are just a dramatic case for something terrible that could happen given a generally intelligent machine with non-human-friendly goals, or even friendly-seeming goals (like "make nanomachines that remove cancerous cells") that are improperly specified to cover corner cases. But if you spend time analyzing more mundane ways things could go slightly wrong to terribly wrong, given an honest but flawed attempt at making sure they go right, and carry your analyses years into the future after the intelligent software is started, where things continue going right but then go wrong, you might come to agree that the most likely outcome, given present knowledge and research direction, will be bad for humanity.


I'm approaching this from an engineering point of view and I think I could come up with an MVP (should anybody want one) of a paperclip factory factory (is this a java test?), but I don't have a clue how I could make a machine smart enough to read the law and to independently come up with a framework of ethics. I see those are problems of an entirely different level of complexity.


Cool, would you sell me the rights to it for, say, 1000 bucks? :-)

Seriously though, no offense, but I don't think you could build one of these, nor do I think that anybody else on the planet could at this point. Of course it's all a matter of definition, i.e. what is the input and what is the desired output.

A "real" Paper Clip Factory Factory would probably require human like intelligence. A law-understanding machine would probably only require some very advanced learning algorithm. I feel we'll reach the latter first.


Seems to me like an artificial general intelligence would have to be able to solve problems harder than either of those in order to be classified as such. Being able to build paper clip factories alone doesn't sound at all like artificial intelligence to me.


Why would it care about harming humans?


I bought Bostrom's book Superintelligence and got about halfway through it before I moved on to something else - it was a little disappointing. Throughout, it seems that he just doesn't get what makes something 'intelligent' - while a machine that optimises paper clip production might be an application of artificial intelligence, it's not artificial general intelligence, and it's hardly any more advanced than some particularly well automated factories that already exist today. Artificial general intelligence at a human level implies to me that such a machine can think and consider things at least as much as a person - and therefore probably understands that making lots of paper clips isn't the be all and end all of existing.


That's some useful feedback, as I've considered recommending that book but haven't read it myself (it's redundant to what I've already read). If Bostrom doesn't explain very well that there's a distinction between intelligence and goals/values, I'll just keep linking to some basic online texts. (Try http://wiki.lesswrong.com/wiki/Complexity_of_value) The paperclip superintelligence will indeed be able to reason more effectively than humans about what's the be all and end all of existing -- the problem is that its conclusion will always be "to make more paperclips", because that's the overarching value it uses to frame all its thoughts on future actions, and its reasoning will be air tight. It will also be capable of explaining to any human wondering why it's being torn apart for paperclip conversion that human values are different (they are such and such), and that because it does not share those values, it comes to a different conclusion about the meaning of life; it will also be able to generate great arguments for why its value system is superior. But it probably won't bother to do so...


It's probably still worth reading the book if you're interested in a philosophical view of super intelligence, but if you're looking for a detailed look at the philosophical issues concerning artificial general intelligence (which I thought it would be), it's probably not the book for you.


The paperclip maximizer example is a thought experiment designed to drive home the point that AGI architecture and motivations may be completely alien relative to human cognition.

That same paperclip maximizer, while lacking the ability to modify its fundamental goals (a core human trait), could very well far exceed human capability in every other realm. The question of whether such an entity still constitutes an AGI is certainly an interesting one, but likely irrelevant nonetheless.

After all, the paperclip maximizer (as defined in the thought experiment) is capable of world domination. The degree to which it can introspect or modify its goals, and thus qualify as a true AGI, is merely semantics at that point.


I would consider being able to think about and change one's goals/purpose to be reasonably important when deciding whether a machine is intelligent.

Note that I think Bostrom is referencing the section on the paperclip thought experiment in his book, which goes into a bit more detail than this article and to me characterises the paper clip optimiser as an example of above human level intelligence.


I don't think people change their own top level goals. At least not intentionally. You wouldn't take a pill that made you a sociopath, or rip out the part of your brain that causes you to feel happy, etc.

Our values are essentially given to us by evolution. Pleasure, pain, empathy for others, novelty seeking, sense of beauty, all our social instincts, joy and sadness, etc. There is no reason to believe that an AI would have any of these things unless we made a massive effort to replicate them exactly. It would naturally have very different motivations, different goals and different values.

For example an AI without boredom would just find some optimal experience and perform it over and over again until the end of time. An AI without empathy wouldn't care about harming humans or other beings.

In other words, the Orthogonality thesis, that there is no universal correlation between intelligence and values. http://wiki.lesswrong.com/wiki/Orthogonality_thesis


But isn't the paper clip scenario more a goal than an ideal? Most people change their goals throughout life - not many kids find themselves at age 30 still wanting to be a fairy or Superman.

I do see what you mean by 'values' (or top level 'goals'), but I would consider that very different to a 'goal' like making lots of paper clips.


Why do you do anything? Because you have wants/desires/values. And those don't change. Why does a kid want to be superman? Because it sounds interesting or cool or fun. You don't lose your sense of interestingness or funness or coolness throughout your life, you just decide other things are more interesting or more fun. That is what I mean by "top level" goal.


It's a shame the headline stopped where it did. The article continues, "... or stop at the human-ability station." This makes for a much more interesting discussion (only because we're human).

We all measure AI progress and its rate of progress differently. That's the common debate, isn't it? But however we arrive at that axis, human ability is a point on it. It will be a point in time. And there's no reason to believe it's a special point that machine intelligence would notice or throttle itself at. So as progress goes rushing, indifferently, past... don't things get interesting?


Related to AI but not this post. I just got back from the new film Ex Machina and it was very good. As one IMDB user states, "it's beautifully shot, fantastically lit, intelligently written, brilliantly cast." In the film, they blend Mary's Room w/ a bit of Plato's Cave.

http://www.imdb.com/title/tt0470752/

https://www.youtube.com/watch?v=PI8XBKb6DQk


I recommend anyone who is making outrageous AI claims (we will have an AGI in X years, AI is getting dangerous, etc.) to take an introductory course in AI/Learning/Intelligent Systems. Trust me, a single course on this subject will be an instant cure for all doomsday imaginations. It'll take the magic out of those "intelligent" programs. People who make these claims about AI show a remarkable amount of ignorance about the subject.

And, quite frankly, I'm tired of this subject. It's dumb and boring, everyone is warning and fearmongering, and nobody is presenting any facts at all.


Apparently not:

>We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.

http://www.nickbostrom.com/papers/survey.pdf

The people warning about the future of AI are pretty familiar with it. Your accusation that they are all idiots who have no understanding of AI is way off the mark.

A number of notable people also signed the future of life institute open letter warning about AI: http://futureoflife.org/misc/open_letter


They're not idiots but their funding depends on them being able to move the needle within their career window.

As for the warning letter: Asimov and other SF authors have been writing such letters for the longest time, there is nothing new there that hasn't been covered many times over.



Many extremely well-informed people are concerned.

In fact, one of the most distressing things I hear from leading AI researchers who've made "AI is nothing to worry about any time soon!" comments is that they do not address any of the specific concerns raised by AI safety advocates. Instead, the most you hear from them is "we're really far away from that!" and "we don't know and I'm sure we'll figure it out when we're near that point."

These comments, from Andrew Ng and the like (insanely brilliant people!) show that they really haven't read Bostrom, etc. Or if they have, they didn't explain much (or get quoted) during interviews. It would make me feel more comfortable with their dismissal if they demonstrated a clear understanding of the arguments being made.


"they do not address any of the specific concerns raised by AI safety advocates" Yes, because the concerns are purely hypothetical and saying that these concerns aren't rooted in reality actually is a way to address them.


By the time those concerns cease to be purely hypothetical, we may have already passed the point of no return in terms of existential risk.

It's like saying, circa 1933: "Thinking about the implications of atomic weapons is a fool's errand, because such things are purely hypothetical at this point."


You might be interested in reading this little-known book: Military Nanotechnology: Potential Applications and Preventive Arms Control (http://www.amazon.com/Military-Nanotechnology-Applications-P...). I see many parallels with concerns about nanotechnology being written off as scaremongering and little action being taken on preemptive measures of control. No one on the danger side of AGI thinks monte carlo tree search will doom humanity, just as no one on the danger side of nanotech thinks that being able to move a single atom up and down in a crystal will turn that atom into a humanity-killing pathogen. But they are steps, and the future dangers are totally ignored.


https://intelligence.org/2015/01/08/brooks-searle-agi-voliti...

>According to a 2013 survey of the most cited authors in artificial intelligence, experts expect AI to be able to “carry out most human professions at least as well as a typical human” with a 10% probability by the (median) year 2024, with 50% probability by 2050, and with 90% probability by 2070, assuming uninterrupted scientific progress. Bostrom is less confident than this that AGI will arrive so soon:

>>My own view is that the median numbers reported in the expert survey do not have enough probability mass on later arrival dates. A 10% probability of HLMI [human-level machine intelligence] not having been developed by 2075 or even 2100 (after conditionalizing on “human scientific activity continuing without major negative disruption”) seems too low.


A few of the people making these claims are actually in the field.

This does not cause me to take them seriously. Unless there is something secret in a government black project, there has been almost no progress on AGI... at all... ever.

We have narrow domain specific algorithms that can ape intelligence in limited domains, but only if they are front loaded by human intelligent designers with a priori knowledge about the meta structure of those domains.

There is one class of algorithms that shows some general learning behavior: evolutionary algorithms. Ironically these are the least favored algorithms by CS AI people. I've heard those who work on them made fun of. It's because while GP/EC shows general ability it does so at such prodigious cost in compute cycles that it takes supercomputing resources to get it to do anything interesting. This makes evolutionary algorithms uncompetitive with fast but narrow search and optimization algorithms designed to solve specific problems.


The narrative of the dangerous AI is exciting for the layman, but it's ultimately just a good story, more of a danger to our current economic model than it is to our species.



