Jaron has observed this several times and rightly seems to be tired of repeating the same naive cycle.
As I'm just now entering my second cycle and watching tech repeat itself again -- I'm beginning to understand his weariness.
AI has gotten way better than anyone expected in the last 5 years or so. Progress in many of these areas, like image recognition, was super slow and challenging, and all of a sudden computers are approaching human-level performance with a relatively general algorithm.
For decades robotics has been limited by AI tech. We didn't have the ability to do object recognition, control was really hard, teaching the robot to do things was nearly impossible, etc. Now those limitations are gone, and in the next 10 years it's likely there will be a huge explosion in robotics and automation.
Regardless of the success of current developments, in the long term AI is still inevitable. I have no doubt we will eventually figure it out.
I have no idea why people confuse long-term predictions about something with short-term ones. Someone saying "we will invent AI by the end of the century" is not saying it's definitely going to happen in 5 years and you should invest heavily now. Every time Bostrom and AI risk stuff comes up, there's a bunch of confused comments about how AI isn't that good currently, which misses the point entirely.
Coupling that with the fact that any AI will be limited by its human-defined interface leaves me pretty unconcerned about AI, even if we ever figure out what truly makes a conscious, intelligent being and determine how to replicate it fully in some other medium.
That said, I appreciate Bostrom's work as a philosophical rather than practical matter.
Of course, there's no such thing as an inevitable progression of progress in AI research. Until we have AI software capable of contributing to AI research, that is.
Any chance you could define "human level" in a way to make this a meaningful sentence? Twenty-five years after I was introduced to this problem and I still have never heard a better definition of "intelligence" than the Turing test, and it's a purely operational, "I'll know it when I see it" thing.
A merely human-level AI is one that can match, within some window of performance, a random human in that human's intellectual outputs, which are vast and general -- playing games, writing poetry, critiquing film, building small rockets, having conversations, programming, carving wood, teaching a class, diagnosing and fixing a leak, constructing mathematical models... For things that require a body, it suffices to instruct a human what to do, though this thing might as well help design a robot body at least as versatile as a human's, since we humans do that too, even if Atlas is still pretty far off. We're unlikely to have an AI that's perfectly human-equivalent: by default it would probably already be superior in some ways, by virtue of not needing sleep, doing calculations efficiently and correctly, having access to perfect memory, and so on. But unless it's ultraintelligent, it's probably not better than a random human at everything (say, directing a movie), just as you aren't.
Or maybe we'll learn that in order to be intelligent the machine needs to sleep, make mistakes, forget some things, etc.
1. Explanation of why a particular decision was made. (I was incredibly happy to see the Toyota article this morning that mentioned this very subject.)
2. Flexibility in the face of change. (Or is there a better response to a model with declining effectiveness than "start over"?)
What is motivating otherwise very intelligent people to promote the idea that AI will take all our jobs and/or enslave us?
1) To prop up their investments in AI to get higher valuation.
2) To reach a political goal, such as basic income, which is often justified as necessary in a world where computers and robots take over the work force.
We need one more breakthrough before we can free people from jobs - self-maintaining machines. Which will probably involve self-healing materials. Imagine if you could simplify maintaining underground cables to just making sure there's enough nutrient-rich fluid around them. Then we could automate the production and delivery of such fluid, and we'd be done.
The future without jobs is the future where our infrastructure heals itself, just like our bodies do.
In this case, I think that, to a large extent, maintenance can be side-stepped by recycling and rebuilding. A bit like how the current world works: we can fix anything, but for most things it's simpler to throw it away and buy a new one, because the labour is more costly than the product itself.
One example is asking a computer repair shop to fix your laptop screen (particularly so if said screen is touch-sensitive).
In a world where energy is free (fully solar for example), this is awesome: oh, you just got a month old car but a better design just came out? No problem, you'll get the newest one when your turn comes.
I doubt I'll see this kind of world (if it ever comes) in my lifetime, but it doesn't seem so far-fetched, I think.
And if your instrumentation is good enough, it really becomes a subdiscipline of logistics, plus maybe the machines can begin to defend themselves from ordinary abuse.
You pointed out (below) that historically people have often been terrified that machines would take all of our jobs, and that terror has turned out to be unfounded. But they weren't wrong, they were just wrong in thinking it would be a bad thing.
Over the last 150 years, the proportion of Americans employed in agriculture has dropped from ~70% to ~2%. They've literally been replaced by machines.
A large proportion of those people are now doing menial intellectual jobs that likely will be replaced by "AI". A complete shift in the nature of the work we do isn't unprecedented, and it shouldn't be considered impossible, but it shouldn't be considered disastrous either.
edit:  https://en.wikipedia.org/wiki/Agriculture_in_the_United_Stat...
The only people who lost jobs were skilled craftsmen, who made up a tiny percent of the population anyway. And they weren't really lost, just replaced with lots of unskilled work.
Agriculture automation came years later, and the farmers weren't entirely screwed because they mostly still owned the land. If you own the robot that replaces you, you aren't necessarily worse off.
The coming technological revolution is entirely about automation. And not just automating skilled jobs, but most unskilled ones. A large percent of the population will be affected.
In most of the fields I've seen they've been replaced by Mexicans.
 - https://en.wikipedia.org/wiki/Haber_process
Just because you don't "see" the work humans currently do doesn't mean you can be ignorant of it happening. Just because you personally don't want to work (like most people) doesn't mean there isn't plenty out there for people to do.
Are housing prices, at least in the U.S., actually attached to the value of work, or are they mostly affected by banks' willingness to hand out money?
Let's go another step and take work out of it completely. If houses were free, there would still be houses that are worth far more than others, simply because of proximity to other things of human interest.
That "a median house still costs 4x yearly median salary" means exactly nothing. The size of median salary is market driven, and the price of housing reflects the games banks and housing developers are playing, and thus can be arbitrarily high.
When people only think short-term (what can I afford monthly) vs. long-term (what will I pay out over the life of the loan vs. what I'm getting now), they make mistakes. These mistakes are self-inflicted, but these people tend to then blame any and all others for why they can't get ahead, why the system doesn't work, why the American dream is dead :).
The situation is different from the TVs and computers and cars because those are not considered as important as your own house. Most people eventually have to move out, so they have to participate in the game. I believe we call it "inelastic demand".
I'm on board with surfing all day while you maintain the robots to make my food. Is that cool with you?
What is an example of a non-government bullshit job?
Mostly history. Specifically the history of such predictions made by smart people of their own time that never came to pass.
I assure you, this has all been predicted before. AI taking over the earth is always a decade away.
There are people who wear tin-foil hats and there are those that sell tin-foil hats.
The former think they're saving the world, while the latter are just making money off of the former. It's the same business model as religion and global warming.
But there's a different kind of "prediction", which I think is the type those smart people subscribe to - that we are on the path to an AI, that there's nothing that would make it impossible to achieve as the technology progresses (a reminder: we have a working example that intelligence can be built - our brains). Of course there are a lot of obstacles on the way. Personally, I think the most probable one is that our civilization ends before we reach the necessary level - because of war, or all the economic and political shenanigans we see every day.
And nice, you managed to cram global warming denial in there too.
Wrong/correct? We're talking about predictions, so let's talk about probability.
In the course of 20 years is it more probable that life will look more like it does today or that AI will make us all jobless?
That is why I'm "more likely" right and Elon is "more likely" wrong.
I'll put money down on it. If, in 10 years, it only takes 1 employee per shift to run an entire McDonalds, I'll give you $1,000.
And yes, I don't believe your single factor over-fit models prove anything about global warming. I'll put money down on that too.
I can't say I've seen that said. Any sources? Most predictions I've seen for AI surpassing human intelligence are for mid 21st century. The reasoning isn't rocket science. I wrote an essay about it for some school exam 34 years ago. If you discount the religious stuff then brains are biological computers of roughly fixed ability and regular computers are less able but get better each year. If so, they'll overtake and you can kind of graph it and make a rough estimate of when.
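The graph-and-extrapolate estimate described above fits in a few lines. The constants here (brain throughput, 2016 machine throughput, doubling period) are illustrative assumptions, not measurements:

```python
import math

# Illustrative assumptions, not measurements:
BRAIN_OPS = 1e16          # rough fixed estimate of brain operations/sec
COMPUTER_OPS_2016 = 1e13  # assumed machine operations/sec in 2016
DOUBLING_YEARS = 2.0      # assumed Moore's-law-style doubling period

def crossover_year(start_year=2016):
    """Year when exponentially improving machines overtake the fixed brain."""
    doublings = math.log2(BRAIN_OPS / COMPUTER_OPS_2016)
    return start_year + doublings * DOUBLING_YEARS

print(round(crossover_year()))  # -> 2036 under these toy numbers
```

Note that shifting any of these constants by an order of magnitude moves the crossover by only a few doubling periods, which is why mid-century estimates keep coming up despite wide disagreement on the inputs.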
It's low-hanging fruit; the public eats it up. Also, people can be very intelligent but not have done the research in the field they are commenting on. Elon Musk and Stephen Hawking aren't AI PhDs.
16 years later now, I see that about the only thing that could do some real damage was software in factories and stock exchanges breaking, and requiring downtime to fix. It would be that downtime that could do actual harm. Because I've now seen some code that runs in factories and... man, it is totally and utterly fucked up. I'm happy nothing bad happened back in 2000. I'm also surprised that some manufacturing plants are running at all.
But on a more serious note, as some of the higher-order techniques of applied mathematics become more and more popular and find expression in tools ordinary people can use, that's liable to change a lot of things. If we get better at characterizing systems and making model-controllers for them, this might just matter a lot.
On what time-frame are you making those claims?
"Any sufficiently advanced technology is indistinguishable from magic."
Assuming of course, the AI doesn't get control of the Factory by accident and does something terrible in the name of improving productivity.
One thing we often do is assume, for some compact subproblem, that AI will offer massive optimizations relative to our current human baseline. However, it often turns out that engineers have spent enough time to do a good enough job that the relative improvement is either nonexistent or small when focused on optimizing subcomponents of a system. With your video game reference, there are some games where Q-learning does better than humans and others where it does worse. For the factory floor, I know data scientists and industrial engineers run optimizations to increase productivity and reduce costs. With their constraints it becomes hard or impossible for any algorithm to find a much more optimal solution, especially when the problem is convex and the human-generated solution is provably globally optimal.
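A toy illustration of that last point, with a made-up convex cost function: when the engineer's closed-form answer is provably the global optimum, a search algorithm can at best match it, never beat it.

```python
def cost(q, a=2.0, b=-8.0, c=100.0):
    """Convex per-unit production cost (a > 0 guarantees convexity)."""
    return a * q * q + b * q + c

def engineer_solution(a=2.0, b=-8.0):
    """Closed-form minimizer of a convex parabola: q* = -b / (2a)."""
    return -b / (2 * a)

def search_solution(lo=0.0, hi=10.0, steps=100_000):
    """A brute-force 'AI' search over the same range."""
    grid = (lo + i * (hi - lo) / steps for i in range(steps + 1))
    return min(grid, key=cost)

q_star = engineer_solution()   # 2.0, provably the global optimum
q_found = search_solution()
assert cost(q_found) >= cost(q_star) - 1e-9  # search never beats the optimum
```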
Secondly, it does a lot of handwaving about "religion". I am an atheist, and think most religious beliefs are irrational. However, that doesn't mean that every belief that "looks like" religion (in some vague, poorly-defined way) is irrational. The Aztec religion was false, but Hernan Cortes and his army of men with guns was very real. It would have been stupid for Aztec atheists to ignore Cortes because "that sounded like religion". The right question to ask is "is this claim supported by the evidence?", not "how much like religion does this claim sound?".
Indeed, arguing about the potential threats of air-travel in 1910 (let alone in 1810) would have been silly. The point isn't whether or not AI is possible (or could pose a serious threat), but whether or not discussing it as a threat given our current, near-zero, understanding of it is productive. Jaron Lanier argues that not only is it not productive, it distracts from more pressing challenges related to machine learning.
> I am an atheist, and think most religious beliefs are irrational. However, that doesn't mean that every belief that "looks like" religion (in some vague, poorly-defined way) is irrational.
You think now that religion is irrational, but when most religions were established there was little reason to believe they were. As to the "vague, poorly-defined way" all I can say is that religion has many definitions.
The famous anthropologist, Clifford Geertz defined it as a "system of symbols which acts to establish powerful, pervasive, and long-lasting moods and motivations in men by formulating conceptions of a general order of existence and clothing these conceptions with such an aura of factuality that the moods and motivations seem uniquely realistic."
Another famous anthropologist said (again, quoting from Wikipedia) that narrowing the definition to mean the belief in a supreme deity or judgment after death or idolatry and so on, would exclude many peoples from the category of religious, and thus "has the fault of identifying religion rather with particular developments than with the deeper motive which underlies them".
It is therefore common practice among social researchers to define religion based more on its motivation rather than specific content. If you believe in a super-human being and an afterlife and not for scientific reasons (and currently AI is not science, let alone dangerous AI), that may certainly be a good candidate for a religious or quasi-religious belief.
It definitely is productive. We can either slow research on AI, or we can research AI safety now. Or both. There's no reason we have to just accept our fate and do nothing, or just hope everything works out when the time comes.
Currently, much of the discussion on the subject is done in various fringe forums, where they imagine AI to be a god and then discuss the safety of creating a god. You can even find reasoning that goes like this: "we don't know how capable AI can be, but it could be a god with a non-zero probability, and the danger has a negative-infinity utility, so you have a negative-infinity expected value, which means it must be stopped now". Now, this sounds like a joke to us (we know that every argument with the words "non-zero probability" and infinite utility can conclude just about anything), but the truth is that such foolishness is not far from the best we can do given how little we know of the subject.
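The "non-zero probability times unbounded utility" move described above takes only a few lines to write down, and the numbers are arbitrary -- which is exactly the problem, since any tiny probability can be made to dominate by picking large enough stakes:

```python
def expected_value(p_doom, utility_of_doom, utility_otherwise=1.0):
    """Naive expected utility over a doom/no-doom gamble."""
    return p_doom * utility_of_doom + (1 - p_doom) * utility_otherwise

# A one-in-a-trillion scenario swamps every everyday consideration
# as soon as the claimed stakes are big enough:
print(expected_value(1e-12, -1e30))  # about -1e18
```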
Well I don't agree at all, and neither do many experts. It may not be very intelligent currently, but it's certainly getting there.
>How can we research the safety of something we know nothing about?
Even if AI uses totally unknown algorithms, that doesn't mean we can't do anything about it. The question of how to control AI is relatively agnostic to how the AI actually works. We don't need to know the details of a machine learning algorithm to talk about reinforcement learning. We don't need to know the exact AI algorithm to talk about how certain utility functions would lead to certain behaviors.
Which experts say that? Besides, it's not a question of time. Even if strong AI is achieved next year, we are still at a point when we know absolutely nothing about it, or at least nothing that is relevant for an informed conversation about the nature of its threat or how to best avoid it. So we're still not within sight today even if we invent it next month (I am not saying this holds generally, only that as of January 2016 we have no tools to have an informed discussion about AIs threats that will have any value in preventing them).
> We don't need to know the details of a machine learning algorithm to talk about reinforcement learning. We don't need to know the exact AI algorithm to talk about how certain utility functions would lead to certain behaviors.
Oh, absolutely! I completely agree that we must be talking about the dangers of reinforcement learning and utility functions. We are already seeing negative examples of self-reinforcing bias when it comes to women and minorities (and positive bias towards hegemonic groups), which is indeed a terrible danger. Yet this is already happening and people still don't talk much about it. I don't see why we should instead talk about a time-travelling strong AI convincing a person to let it out of its box and then using brain waves to obliterate humanity.
We don't, however, know what role -- if any -- reinforcement learning and utility functions play in a strong-AI. I worked with neural networks almost twenty years ago, and they haven't changed much since; they still don't work at all like our brain, and we still know next to nothing about how a worm's brain works, let alone ours.
And there's the religion.
And after you explain that, explain how having something vaguely in common with religion automatically means it's wrong.
No one is saying it's wrong, only that the discussion isn't scientific.
It is not only unscientific and quasi-religious; there are strong psychological forces at play, that muddy the waters further. There are so many potentially catastrophic threats that the addition of "intelligence" to any of them seems totally superfluous. Numbers are so much more dangerous than intelligence: the Nazis are more dangerous than Einstein; a billion zombies obliterate humanity; a trillion superbugs are not much more dangerous if they are intelligent or even super-intelligent; we intelligent humans are very successful for a mammal, but we're far from being the most successful (by any measure) species on Earth.
This fixation on intelligence seems very much like a power fantasy of intelligent people who really want to believe that super-intelligence implies super-power. Maybe it does, but there are things more powerful -- and more dangerous -- than intelligence. This power fantasy also helps cast a strong sense of irrational bias over the discussion. This power fantasy is palpable and easily observed when you read internet forums discussing the dangers of AI. This strong psychological bias tends to distract us from less-intelligent, though possibly more dangerous, threats. It is perhaps ironic, yet very predictable, that the people currently discussing the subject with the greatest fervor are the least qualified to do so objectively. It is not much different from poor Christians discussing how the meek are the ones who shall inherit the earth. It is no coincidence that people believe that in the future, power will be in the hands of forces resembling them; those of us who have studied the history of religions can therefore easily identify the same phenomenon in the AI-scare.
It doesn't. Something having religious undertones and something being incorrect are completely orthogonal, a priori.
But the fact remains that discussion of supreme AI has distinctly religious undertones. Such discussions progress in, what I believe an interested observer from another culture would deduce to be, directions distinctly influenced by the West's history of monotheism, and particularly Christianity.
>it doesn't distinguish between what AI can do right now, and what the theoretical limit of AI is, in 50 or 100 or 500 years.
AI, by itself, cannot interact with the physical world beyond flipping bits in registers on a CPU (a negligible action). For AI to be a threat, in my opinion, it would need to be coupled with some form of physical manifestation that could, in some fashion, not be under the control of a human. Until that happens, we can just unplug the machine. And we are already seeing the intense difficulty of robotics, of making good robots and interfacing with the real world. Historically, everything with AI is much, much harder than originally thought.
Besides, just look at us, humans. Who would ever build an AI and then keep it away from any means of communication? That wouldn't be a very useful AI. If one wants that, one can pick up a rock and imagine it has a mind. Or talk to a cat.
A look at the history of managing downsides of industrial technology tells us that we can't or won't "unplug" it if it's still profitable or an integral part of a vital system. Carbon dioxide changing the climate and acidifying the oceans? Unplug the coal-fired power stations! Overuse of antibiotics, especially in animals, leading to deaths in humans? How about we don't do that!
The first "hostile" AI will only be selectively hostile. Perhaps something like automated "redlining" racial discrimination. Some sort of system where the benefits accrue to investors while the disadvantages are spread around. AI "pollution".
Now that's just silly.
An AI does not need a body for any reason. What it needs is a voice; software is pretty good at simulating that, and we can simulate images and video pretty well too. If an AI can become a master in the art of persuasion (say, coupled with a great hacking ability) it could easily convince people to act in its name (much like old books do now) and pay others to do its bidding.
Hell, we were supposed to have suborbital launches, anywhere in the world in 20 minutes.
The world is actually pretty messed up with problems right now; some of which we could actually do something meaningful about. Continuing the naughty-AI bunfight isn't one of them.
AI is inevitable. It's very likely to happen in our lifetime. And it's very likely to kill us all. And explaining every reason why I believe those things could fill an entire book, so I seriously recommend reading Superintelligence by Nick Bostrom.
An alien species would have to be observing this planet at very frequent intervals to be able to catch any of these recent developments -- because if millions of years ago they determined nothing interesting was happening, and decided to only look in every five hundred thousand years or so, it's very likely the last time they looked our species wasn't even around.
Now take the development of an AI that is roughly human-level intelligent with a given amount of resources. If you give it the same amount of resources again, it can "breed": copy itself perfectly onto them, and now we have a "population" of two AIs whose combined intelligence should roughly match that of two twin humans. Maybe it doesn't even copy perfectly; maybe it switches some things about to see if it can create something smarter without modifying its own code directly. In any case its breeding is restricted only by its desire to breed in the first place and by the compute and power resources needed to run a new copy, and those can always be driven lower than they were initially. What does the long term look like? Even if the initial resources for the first one are astronomical (e.g. all computing power in use on the planet as of 2016), I would still bet that, left to breed without any other restriction, it would take the AI family far less than 100 thousand years to reach 7 billion instances. So you're looking at many, many more AIs than non-AIs, each with human-level intelligence that isn't distributed normally (assuming soft eugenics doesn't become widespread in non-AI populations) but is mostly all the same, with perhaps increases here and there (assuming a soft takeoff), each coordinating with the same goals in mind. This situation, without further specification, can be either extremely good or extremely bad for the non-AIs.
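The back-of-the-envelope arithmetic behind that bet: a population doubling once per copy period needs only about 33 doublings to go from one instance to 7 billion, so even a glacially slow copying rate gets there in far less than 100 thousand years. (The copy periods below are arbitrary illustrations.)

```python
import math

TARGET_POPULATION = 7_000_000_000

# Doublings needed to grow from 1 instance to the target:
doublings = math.ceil(math.log2(TARGET_POPULATION))  # 33

# Arbitrary illustrative copying rates:
for years_per_copy in (1, 10, 100):
    print(f"{years_per_copy:>3} years/copy -> {doublings * years_per_copy} years total")
```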
The question of whether this will happen (and whether it happens in that exact form; personally I think a hard takeoff from a single AI is likely) in the next 100 years or the next hundred thousand years (perhaps a near-extinction-level event occurs, forcing our species to basically start over, with an even harder battle to survive since many low-hanging-fruit resources have been depleted) might help you determine whether to worry now about taking actions that make the extremely good outcome more likely than the extremely bad one. But if one thinks human-level AI can't happen at all in any amount of time -- that there's something special about our squishy brains, or something inherently limiting about our so-far-general intelligence, such that we can't solve the engineering problem of creating another intelligence directly and must always go through breeding -- the argument needs to take place at a different level.
The paperclip maximizer argument (not sure if Bostrom repeats it in his book) is yet another level of argument, but it's meant for those who already buy the premise of human-level AI and self-improving AI but who also think increased intelligence will naturally converge to a human-loving benevolent AI that sees how self-evidently obvious it is to love all life or whatever, so that we don't have to worry about all the messy details around Friendliness because they'll just fall out of the entity naturally. No one is actually worried about Clippy tiling the solar system with himself; its probability is epsilon. But a lot of people just see this one argument out of context, infer the arguer believes it to be a relevant possibility to worry about, and write off the whole group as crazy...
That's interesting. How could it? Please present a plausible scenario (more plausible than "The US president decides to cover the globe with nuclear explosions, and everyone with any degree of influence over the process just lets him do it").
[Of course, they don't exist at the moment - but they wouldn't be that difficult to make as one advantage of a Doomsday bomb is that you don't need to deliver it so it can be made as large as you want through staging].
This is absolutely absurd. Could you elaborate on that instead of just pointing to a book?
The second point is that they would have no reason to keep us around. They would do whatever we programmed them to do, to the point of absurdity. An AI programmed to collect as many paperclips as possible, would convert the mass of the solar system into paperclips. An AI programmed to solve the Riemann hypothesis would build the biggest computer possible to do more calculations. An AI programmed to value self preservation would try to destroy anything that was even a tiny percent chance of being a threat to it. It would try to preserve as much energy and mass as possible to last through the heat death of the universe. Etc.
The only AIs which actually do things we want them to do would have to be explicitly programmed to want to do that. And we have no idea how to do that. We can't just hope that arbitrary AIs will happen to value humans instead of something else. We need to figure out how to control them, which requires solving something called "the control problem".
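A caricature of the literal-objective point above, in code. The model is entirely made up, but it shows how an objective with a single term gives the agent no reason to stop:

```python
def naive_paperclip_agent(resources):
    """Greedy policy for the objective 'maximize paperclips made'."""
    paperclips = 0
    for name in resources:
        paperclips += resources[name]  # one paperclip per unit of mass
        resources[name] = 0            # nothing in the objective says "stop"
    return paperclips

world = {"wire": 100, "food_supply": 500, "rest_of_solar_system": 10**9}
print(naive_paperclip_agent(world))  # 1000000600 -- everything converted
```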
(1) What reason do we have to keep chimpanzees around? Why don't we just kill them all? They can't even do calculus.
(2) Whatever the answer for question 1, wouldn't an AI consider that same scenario regarding humans? Assuming an AI is truly smarter than a human?
(3) If AI gets to the point where they are much smarter than humans, wouldn't it follow that there would be AI philosophers? Or activists? Or pacifists? Would there be such a thing as an AI civilization? Or is your scenario just a few AI overlords that somehow slipped through the cracks of regulation by humans?
2. Empathy, which is a part of our emotions, is an incredibly complex thing we can't even begin to formalize yet - it's extremely unlikely that a random AI we'd build would happen to have the exact same emotional makeup as we do.
3. "Smart" has a particular meaning here when used in discussing AI. It implies raw intelligence, not morals. A smart AI may be incredibly good at evaluating information, inferring new facts and doing complex tasks. None of it means it will necessarily have to ask itself philosophical questions that we do - those actually mostly come from our emotions, from our desire to understand our place in the universe.
The core of the AI-threat issue is that for an AI to not be dangerous it would have to value us and the same things we value. We can't agree about our exact values even among ourselves, and we're nowhere near even trying to formalize it on a level of detail that would be useful for programming.
Read a book recently that contained an interesting AI attack. Basically, it set a lot of small objects in motion, with complex orbits all computed to result in a large impact at some point in the future.
We would not notice something done this way, by analogy.
This is where this line of reasoning completely falls apart for me. Any complex system is going to do unexpected things that weren't deliberately designed for. My laptop computer doesn't do "whatever it was programmed to, to the point of absurdity." Why would you expect a program complex enough to simulate the human mind to act in such a predictable way, when none of the much simpler programs we already have act that way?
> Why would you expect a program complex enough to simulate the human mind to act in such a predictable way, when none of the much simpler programs we already have act that way?
Exactly. We suck at making complex programs do what we want them to. This is the source of worry.
A program "complex enough to simulate the human mind" would probably have even more bugs that result in unintentional behavior. Not fewer.
Most technologies we build and use surpass humans in some capacity. Thinking is no exception, and we already have machines far better than humans at some thinking tasks, such as arithmetic.
We are now working on systems surpassing humans in various recognition and prediction tasks.
I suspect the catalog of tasks far better done by machines will continue to expand, finally encompassing all the tasks required to form a general intelligence. Only, assembling them into a complete system could very well prove tricky.
The result, though, is that there will never exist such a thing as human-level artificial intelligence, because if and when a system of general intelligence is created, all its components will far surpass human-equivalent capabilities.
So "smarter" simply means far superior to any human in any task requiring thinking, in the same sense that computers are far superior to any human at arithmetic.
Even some odd genius savant of today cannot compete with computers at number crunching. It's not even interesting to compare; the difference in potential performance is at a different scale altogether.
In fact, by relying on a secondary AI it's quite possible we're even more likely to accidentally end up with AI that isn't benevolent.
For instance the other comment brings up the uncertainty around giving an Oracle AI the task of finding what benevolence means to humans, to then plug into the final AI hoping it will be Friendly. You don't know how to precisely define 'benevolent', so let the Oracle AI do it (or help you do it), though you think you can at least program the Oracle AI to do a good job in finding answers to vague, complex, fragile problems. (Why? Past success in AI tech?) Is what the Oracle AI outputs actually good? How do you know? What did the Oracle AI have to do to reach its answer, or gain a last bit of certainty in it? (e.g. Did it reason it needed more data and so built or hastened the arrival of brain-scanning technology, started scanning brains from volunteers, the very recently deceased, cryonics patients, or just without its creators' knowledge through black market deals, and at some point did it simulate trillions of human minds under all sorts of gruesome scenarios to test responses? Do you even morally care about sentience on silicon? Would you care more if you were an upload?) You've got a lot of problems just building an Oracle AI (or anything powerful enough that can help you solve the hardest problems). Even supposing the output it gives is good and the costs (practical and ethical) were acceptable, will a seed AI allowed to recursively self-improve be guaranteed to preserve this Friendliness property in all subsequent improvements?
With the idea of using "AI tech", you've basically drawn a line in the outcome space that on one side says something like "let's just program the thing" and on the other "hold on, let's program the thing but also use all the other relevant computer tools we've developed over the decades to help program the thing". It's the start of an approach, but does it really prune all that much? What do these relevant tools actually buy you in safety, if a safety-guarantee tool is not among them?

Another line, which may be what you were heading towards, could on one side say something like "we might not be able to solve all the necessary problems with our current intelligence, so let's also spend time looking for ways to augment human intelligence -- brain-computer interfaces, brain emulations, tweaks to our genetic code, that sort of thing" and on the other "no, we're capable, let's try our best right now". Using ems based on friendly-ish human minds that become better than human could very well be a better approach to solving the superintelligence problems correctly and efficiently than doing so at raw human levels; on the other hand it could turn very bad if one of those ems instead just solves the problem of recursively improving themselves while their goal and value system is insufficiently friendly. Or many other failure modes. (Though an interesting "most likely scenario" given certain assumptions, and only looking at a span of 2 years, which by itself doesn't look too terrible, is Robin Hanson's Age of Em idea.) It's like approaches involving molecular nanotech to help us: there are reasons it could be seen to help the best outcome along, but it also opens the door to a lot of other risks that aren't necessarily there if you took the other side. So again, how much is really pruned?
What I take as your general idea, "maybe we need and should employ additional help and knowledge that we've yet to gain to even really start", isn't bad. It's much better than "Benevolence is easy: a King and his subjects all know he is a great ruler when everyone is smiling frequently", but it's lacking the detail needed to constitute a plan. Throwing in the positive feedback loop idea (which might even manifest as impressive accelerating returns, who knows) seems to me only relevant to figuring out time scales; I don't think it says anything about the fraction of good vs. bad outcomes, apart from whether short or long time scales matter.
My general take is that, AI or not, the world is continuing to progress towards a future where extinction-level capabilities are not only within reach, but within reach for more and more people. To me it is not a question of if, but when, such capabilities will be in the hands of an individual or organization that, whether by malice or incompetence, would be a serious problem.
Last year a pilot intentionally crashed a plane in an act of suicide, taking 150 passengers with him. Consider a like-minded person capable of engineering a virus.
So my optimistic take on the future is that while such capabilities are developed, capabilities to mitigate those threats will, one hopes, be developed in parallel, so that when the problems arise the combined capabilities of all benign actors are sufficient to evade extinction.
In the case of AI it just seems implausible that any debate could manage to stop the technological race. Just look at the climate change debate, it has been going on for decades and is just gaining political traction. AI isn't even on the radar yet...
So the plausible futures to me are either that AI will wipe us out, or not. In the event that it does not, I think one plausible future is that the potentially malign AI will operate in an environment where other AIs, or near-AI systems, will also perceive it as a threat, hopefully aggregating into the capacity to suppress the dangers in a benign way.
Also, I believe any path from here to AI will consist of a complex system of AI-like technology interwoven with human systems, resulting in aggregates with considerable capabilities of their own, not to be discounted.
In some sense I believe we can extrapolate how things will play out just by looking at some systems that resemble AI today. Consider corporations: these are actors created from a complex system of legal, economic, social and other constructs. While ostensibly owned and operated by humans, the reality is that there is little any individual human can do to influence them in view of all the other forces directing their actions, making them, in a sense, already-existing AIs.
Creating actual AI, as a useful thing, will probably involve similar complexity and influences, so in some sense similar patterns will probably play out. Just as corporations can act to influence the world in ways malign to humans, so would AI systems. And just as corporations do, the AI would find itself acting in an environment doing its best to contain the malice.
I said that it would take a book to explain. You are parodying an extremely shortened summary of a complicated subject.
The difference between AI and "alien" is that no one is trying to make the alien. There aren't research labs and billions of dollars in funding. There is no incentive at all. And the problem is far more difficult. AI is just algorithms.
Second, aliens aren't actually that powerful. A superintelligent AI would be godlike in power to us. I'd much rather have aliens.
The purpose of my facetious approach is to make clear that statements such as, "Superintelligent AI would be godlike in power to us," are more likely to have originated by analogy to science fiction than from a grounded understanding of what is possible or realistic in the relevant future.
Being "just algorithms" doesn't really mean very much. It's like saying breeding the Alien is "just genetics". It's something we imagine might be possible, but is far beyond our current capabilities.
We live in a world where the only intelligences are humans. The idea of superintelligent minds seems absurd to us, because it isn't something we've ever encountered in reality before. But there is absolutely no law of nature that says it's impossible, or even difficult. If you look at history, it'd actually be really surprising if it was. Human minds were just the very first intelligence coughed up by evolution. We are far from the limits of what is possible.
And there is a difference between genetic engineering and AI research. Biology is incredibly complicated. Like millions of complicated, interconnected, moving parts. It really is impossible to make Alien, or really anything beyond modifications to existing biology.
Algorithms are a whole other ball game. Our current AI algorithms are actually reasonably intelligent at many tasks, even smarter than humans at some things. And they are extremely simple: they could be described in a line or two of equations, or a few paragraphs of text. Not because they are primitive, but because simple algorithms work. And they have been improving every single year for a long time, and will likely keep improving well into the future.
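To make the "line or two of equations" point concrete, here is a hedged sketch (the toy task and all names are invented for illustration, not drawn from any particular system): a logistic-regression-style classifier whose entire "intelligence" is a one-line gradient update.

```python
import math

def train(data, lr=0.1, epochs=200):
    """data: list of (x, y) pairs, x a float, y 0 or 1.
    The whole algorithm is: nudge w, b against the error gradient."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid
            err = pred - y
            w -= lr * err * x   # the entire learning rule
            b -= lr * err
    return w, b

# Toy task: learn to classify whether x > 0.
data = [(x / 10.0, 1 if x > 0 else 0) for x in range(-50, 51)]
w, b = train(data)
predict = lambda x: 1 if w * x + b > 0 else 0
print(predict(3.0), predict(-3.0))  # classifies positives vs. negatives
```

The update rule fits in one line; everything else is bookkeeping, which is the sense in which these algorithms are "simple" without being primitive.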
But you don't have to take my word for it. If you survey AI researchers and experts, they almost unanimously agree that AI will be invented in our lifetime. If you survey biologists, I highly doubt they would make similar claims about Alien.
What's the point of arguing about the potential threats something we have so little knowledge of? Maybe it will come too fast, but maybe that unkillable virus will, too. At this point in time, we simply have too little information to have an intelligent assessment of the risks posed by AI. We can and should, however, discuss the more immediate related dangers of our real-world machine learning, such as self-reinforcing bias.
Quoting to emphasize.
It is one thing to acknowledge a non-zero probability that an uncontrollable AI might be created, yet conclude that we have other things to worry about and so shouldn't hand-wring over it.
What should be the strong counterpoint to this hand-wringing is that we should instead be working to better understand the behavior of the "weak" AI we have now. We need to grasp where it does and doesn't work, a difficult problem to "sell" since it lacks that alien-like doomsday aspect or predatory agency.
These kinds of problems are the problems of our own creation and thus we must take ownership.
I'd be interested if you said more about that.
Think about what could happen if your brain worked much faster. The world would mostly seem to slow down. A very conceivable outcome is one of boredom and possibly madness. You could perhaps multi-task and think of many different things, but we are generally terrible at multitasking, so we have no idea whether an intelligence even could multitask well.
Another problem is that we don't know anything about the information capacity of a mind. A faster mind of the same size could potentially have trouble dealing with all that extra information (again -- madness, maybe?)
One thing that hints at those capacity limits is the need for sleep. We don't know why, but some hypotheses say that it's necessary to do some "information cleanup" operations in the brain. It is possible that an AI would need to sleep, too. The AI may need to sleep a lot more to handle much more information.
It is therefore conceivable at least that a mind has to work at a speed that is commensurate with the speed of the world around it, and one that is appropriate for its capacity.
That's the danger. I think Lanier is a lot closer to Musk on that point than he imagines, though.
Except it isn't.
And this ignores every point that Jaron was trying to make and falls into all the traps he was trying to point out. It is an oblivious comment.
I must say, I don't find most of the contributors to Edge particularly insightful.
Pretty disappointed with the Edge here. It's clear that the participants are just exploiting it as a platform to put forward their own ideas.
"The truth is that the part that causes the problem is the actuator. It's the interface to physicality. "
BOOM! That was my exact argument in counterpoints to Superintelligence risks. I thought it was ridiculous to worry about what it thought when you could easily control what it did at the interface. I also pointed out that high-assurance security already has decades of work dealing with this exact problem, and pretty effectively. So anyone worried about that sort of thing should focus on securing the interface that would be used in various domains to catch issues.
Now, that's not to say a superintelligence can't break an evil scheme down into a series of safe actions that result in catastrophe. There's possibilities there. Just that all methods for handling them can and should be at the interface. And can be implemented by verifiable, dumb algorithms.
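As a hedged illustration of what "verifiable, dumb algorithms at the interface" might look like (the action names and limits below are invented for the sketch, not from any real system): whatever the planner proposes, only actions passing a simple, auditable whitelist-and-limits check ever reach the actuator.

```python
# A deliberately dumb actuator gate: no learning, no inference,
# just a whitelist and hard-coded physical limits that a human
# (or a formal verifier) can audit line by line.
ALLOWED_ACTIONS = {"move_arm", "read_sensor"}
MAX_FORCE_N = 50.0  # hypothetical hard limit on arm force

def gate(action, params):
    """Return True only if the proposed action is whitelisted
    and within fixed limits; everything else is rejected."""
    if action not in ALLOWED_ACTIONS:
        return False
    if action == "move_arm" and params.get("force", 0.0) > MAX_FORCE_N:
        return False
    return True

print(gate("move_arm", {"force": 10.0}))   # allowed: safe, whitelisted
print(gate("move_arm", {"force": 500.0}))  # rejected: exceeds limit
print(gate("open_airlock", {}))            # rejected: not whitelisted
```

The point is that the gate's correctness doesn't depend on understanding the planner at all, only on the small, fixed interface it polices.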
Huh? A program, by definition, has no autonomy. Given its inputs, it performs the appropriate sequence of actions it has been programmed to perform. At no point is it governing its own behavior.
Can anybody name a single realizable program that exhibits autonomy?
Even machine learning algorithms, which adapt to their input, are deterministic and incapable of self-governance. Given identical initial conditions and inputs, the machine learning program will generate identical deterministic output.
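A minimal sketch of that determinism claim: even with "random" initialization, fixing the seed, data, and code fixes the output bit for bit, run after run (the tiny learner below is invented for illustration).

```python
import random

def fit(seed):
    """A toy 'learning' program: fit w toward y = 2x from random init."""
    rng = random.Random(seed)
    w = rng.random()                        # "random" init, fixed by seed
    for x, y in [(1, 2), (2, 4), (3, 6)]:
        w -= 0.1 * (w * x - y) * x          # one gradient step per example
    return w

print(fit(42) == fit(42))   # identical conditions: identical output
print(fit(42) == fit(7))    # different seed: different output
```

The program adapts to its input, but there is no self-governance anywhere: change nothing and nothing changes.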
On a different discussion site I recently pointed out there is a similar religious belief or dogma in the idea of the self-driving car; that went over like a lead balloon. People acting in a religious manner and expressing religious belief don't like to have that pointed out to them. Probably vestigial monotheism: if they act religiously at their traditional church on Sundays, they don't like it pointed out that they worship at a different altar online, and the atheists get really wound up when they are called out for joining the atheist movement while acting as deacons of the church of the self-driving car of the future.
Anyway, just saying it's a line of reasoning that, while true, can only lead to unpleasantness. Like discussions of scientific racial / gender differences or discussions about IQ, all you're gonna get is social signalling screaming "see no evil".
I'm waiting for it to fall apart. The amount of information that you have to willingly ignore these days to participate in the ''progress solves all ills'' narrative is huge.
On vestigial monotheism read Straw Dogs by John Gray if you haven't.
I think this is (in part) a power fantasy. The believers in the AI threat normally perceive themselves to be different from other people due to their (self-perceived or real) higher intelligence. They therefore describe that property (that they believe makes them special) as being particularly dangerous, and therefore powerful, because they want to believe that intelligence imbues its possessor with power and a lot of it. Sadly, superbugs, meteorites, H bombs, climate change, human-rights violations, inequality and other threats don't require or possess much intelligence, but an unstoppable AI is the purest manifestation of intelligence-as-power.
In the human world, intelligence is not very correlated with power, except very roughly. The most powerful people are generally more intelligent than average, but the correlation doesn't extend far beyond that. If anything, charm, confidence and courage seem to imbue their bearers with much more power than raw intelligence. Which is why I fear (only partly jokingly) a super-charming machine more than a super-intelligent one. A super-charming machine is far more likely to bend humanity to its will -- and therefore pose a greater danger -- even if what it wants may be truly idiotic; a super-intelligent machine would be far more limited in its effects...
A television, or a tablet with facebook and youtube apps?
It's interesting to think of mass propaganda a la Bernays as an example of a "super-charming machine". In the old days the cogs and wheels in the marketing business were humans, just as "calculator" was once a job description before the work got automated and accelerated into computers.
A populist political machine might be another example.
That's a fantastic metaphor.
I'm curious though how someone could act in a religious manner about a self-driving car.
To the original point though, I am very much in the camp of "historical determinism that we're inevitably making computers that will be smarter and better than us" and here is why: I believe that intelligence (and "consciousness" if you want to go down that rabbit hole) is completely material and as such it is possible that we will eventually understand the mechanics of them.
If we can understand those, then history would indicate (historical fallacy noted) that we can replace or replicate those mechanics with systems or materials more durable than the fairly fragile biology we currently run on.
More than that, we don't even have good ad hoc models for motivation and personality, never mind useful formal and explicit models.
I think Lanier is absolutely right, and I think there's a strong quasi-theological element running through all of AI and programming.
Programming is a lot like making spells and incantations. If you get them just so you get the outcome you want. But you have to speak the language of the system to make that possible. And you have to be very careful about unintended consequences.
There's something very medieval about this - both in the sense of the pure scholasticism of academic CS, but also in the practical sense of knowing how to formulate the correct "prayers" to make useful things happen.
Lanier seems to be pointing out that CS is still haunted and influenced by these religious metaphors, and that AI is the most visible example of that.
I think he's right - and more, I think that real AI, in the sense of autonomous personalities, won't be possible until that's no longer true.
So because we don't know how something works now, we will never know? C'mon.
Programming is a lot like making spells and incantations.
No, it's not. Maybe to laymen it looks like that, but for us developers it is definitely not the case. Granted, sometimes something works and you don't realize why, but that just means you have a big debugging problem on your hands, not that it's unknowable. Usually that only happens when you port something or copy code over from one system to the next.
Not sure how you come to those conclusions.
It's about as much faith as Doug Engelbart had about the PC.
That's why it's not rational. There is no fact you can exploit to come to this conclusion. That's called "faith", regardless of the particular subject matter.
A little faith is important. We didn't know if the first astronauts would survive orbit, and we didn't really have any data to prove it unequivocally. We just had faith that our limited understanding wasn't missing anything and the only way we could find out would be to put people in orbit. We found out soon enough.
But it's really, really a bad idea to call something rational when it's really faith, just because you don't want to be associated with religion.
Did I say that? In fact I didn't.
I don't care what you call anything, the fact remains that AGI doesn't exist today, but we can work on the best guesses at what we think will get us there. That is the exact OPPOSITE of faith if there ever was one.
We are working on computer vision with segmentation. That's a tiny subsection of supervised learning. By the way you say pattern matching as though it's some trivial thing. In fact that's what a hefty portion of our brain does to understand anything - so it's actually very important and our visual system (eyeball to cortex to representation) is arguably the most important part of that.
I couldn't care less about whether a computer has consciousness anyway. I think the whole consciousness argument is a waste of time (which is why I called it a rabbit hole previously).
I think you're hung up on this religious thing and not able to discriminate between actually building things that may be able to lead to AGI versus writing novels and daydreaming like so many "futurists."
We know what it is, we just don't know how it works.
Self-awareness comes in stages. We generally move through those stages throughout our daily lives. Animals display wide variations in qualities of self-awareness as well. So we can define consciousness as the sum of all of these qualities.
What I think is going to happen is that we'll start separating out lots of aspects of consciousness and explore them in software, adding more and more of them in as the state of the art of hardware gets better. The consciousness algorithms will slowly, over time, shed complexity.
Eventually, well before we get to even human-level self-awareness, we'll run into hard physical limits and realize that biology is way more effective at making the sort of compact, yet incredibly complex evolved system than any design process could be.
Biology has an advantage we don't have, it does not have to understand what it is doing, and it works ceaselessly. It can simply try over and over again over millions of years.
I predict that getting machines to become truly self-aware will be more trouble than it's worth, and that then you'll be choosing levels of comparatively-lower self-awareness for each individual component of a software system as part of architecting it. In fact, we have that tradeoff now. Do I really need to pull out ML to write a shell script?
It is orders of magnitude more dense than current technology.
Well, we know that it exists, or at the very least seems to exist. And there is nothing in the physical world that we have understood so far that cannot ultimately be expressed computationally, even if it's usually not the most useful formalism. It is of course somewhat a leap of faith to then say _everything_ is computation (or can be expressed as computation), but I would argue that science already made this leap in its infancy: the Book of Nature is written in the language of mathematics... Admittedly, modern philosophy of science is a bit more nuanced, but the old statement from Galileo basically stands. Now I agree that it's not entirely rational, but it certainly isn't "religious".
So yeah, I think there are very good arguments to be made that consciousness can at least in principle emerge from some kind of computation. That computation could be a complete simulation of every atom in a brain, or maybe some shortcut is possible; it doesn't matter for my point. That does not mean, however, that I think it can or will be made since, as you pointed out, we have no idea what we are talking about. I completely agree with Lanier in this: the tech industry should stop focusing on this nonsense.
You don't need any kind of leap of faith to actually start working on it. That's the wonderful thing about AI and AGI more broadly. You can actually go work on it, today. Are you going to solve it immediately? Of course not, but at least you can chip away at the problems.
Except for those physical phenomena that can only be modelled with a recursive form that doesn't start with coefficients defined with infinite precision - which unfortunately includes a lot of useful things like weather prediction, and even the classical mechanics of n-body systems.
How about telling me what tomorrow's Dow close is going to be? Or the oil price four years from now? Or which parts of the universe that electron over there passed through on its way to your monitor?
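The precision point can be seen in miniature with the logistic map (a standard textbook example; the starting values below are arbitrary): in its chaotic regime, a starting difference of 1e-10 gets amplified until the two trajectories are unrelated, so no finite-precision model can track the true one for long.

```python
def iterate(x, n, r=4.0):
    """Iterate the logistic map x' = r*x*(1-x) n times (r=4 is chaotic)."""
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

a, b = 0.3, 0.3 + 1e-10           # two starting points, 1e-10 apart
for n in (5, 20, 40, 60):
    print(n, abs(iterate(a, n) - iterate(b, n)))
# The gap grows roughly exponentially with n until it saturates at
# order 1: the trajectories have become completely unrelated.
```

Weather and n-body prediction inherit exactly this behavior, just with far more variables.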
I think a lot of the responses here just make the point for Lanier:
"There's a whole other problem area that has to do with neuroscience, where if we pretend we understand things before we do, we do damage to science, not just because we raise expectations and then fail to meet them repeatedly, but because we confuse generations of young scientists."
Anyone who believes that consciousness is computable by definition, because it just has to be, has a dilettante-level view of the problem. Philosophers have been arguing about this for centuries, and on the whole they're not so certain about it.
So I'll repeat - we don't have adequate models for human or even animal personality, or for emotional responsiveness, or even for the self-aware perception of qualia, which is apparently one of the key things that defines consciousness.
Big data isn't a solution to this, any more than throwing the works of Shakespeare into a database and looking for clustering statistics about word proximity and sentence structure will get you a new Julius Caesar.
The reality is there are levels of intelligent behaviour - especially self-aware creative behaviour - that are completely opaque to any modelling technique available in current CS. I believe that anyone who thinks the problem is simple and just needs moar code and a faster processor thrown at it is expressing a faith-based position of hopeful belief, not a reality-based fact.
Shakespeare is Shakespeare because the writing isn't word salad. Shakespeare is interesting because of the unusual density and richness of the experiences that are referred to - not just in the plot, but in the details of the metaphors in each sentence.
You need to model experience before you can recreate that, and you can't do that unless you have a deep understanding of what experience is. Big data gives you slightly more focussed monkeys and more typewriters, but it's in no way a solution to the basic modelling problem.
It's all about copy by reference vs. copy by value, type systems, etc.
Now when the medieval mystics wanted to talk enlightenment values, politics, psychology, political science, but the local powers that be were not down with the cool new stuff, they got obscurely alchemical, astrological, or divinational to avoid having the local leaders figure out what they were saying and therefore separate head from body via guillotine. Long after both the topics, and schemes they used to obscure the topics, were no longer contemporary and relevant, we laugh at the ancient alchemists.
Come to think of it, don't the cool-kid programmers laugh at BASIC, assembly, C++, Perl? Yet another similarity. Ha ha, those ancient mystics trying to write quicksort in BASIC, LOL.
I took a comparative world religion course a long time ago; I can't easily find the handy behavior list we had to memorize. We were taught something vaguely Mandaville-ian; I don't know if that outlook is still current and cool, but at least you can find Mandaville's opinion on Wikipedia. The "aspects" section of the Wikipedia religion article would be a good start.
Shared faith-based belief in the inevitability of some rather objectively unrealistic future outcome, in this case "We're all gonna use self driving cars", critically based solely on faith.

A mythology, using the "a story that is important for the group whether or not it is objectively or provably true" wiki definition, where progress has always happened in the past and therefore will always happen in the future. In reality progress only comes from obtaining and using energy, and our petroleum is running out while our population expands, so you do the math on the realistic future trajectory of progress. Furthermore there's no question in the religion about the narrative that progress necessarily, automatically equals self driving cars, instead of "busses and trains and subways" or not commuting at all or whatever.

There's a fair amount of superstition involved, aka everything the general public or journalists say about anything technical, legal, or economic WRT the topic of self driving cars; they're just making unreasoned noises at each other.

It's always accompanied by extensive and exhaustive ethical baggage, in this case about preferred lifestyles, the value of work, urban planning, as if the self driving car is actually important relative to VPN deployment or subsidized public transit or Star Trek style transporters; yet despite its irrelevance there's a great pile of associated ethical baggage and related extreme judgment.

And there's a strong self-policing society, where certain aspects of the religion are strongly amplified via endless ceremonial repetition, and the not-devout-enough are tossed out or punished via ridicule or otherwise.
Note that you can discuss self driving cars in non-religious ways, much as you can scientifically study aspects of a religion. That doesn't mean the adherents are not thinking/believing, communicating, and organizing in a religious style or manner, that just means you looked at them from a non-religious perspective.
Some historical religions being a little hard to believe in the modern scientific world, doesn't mean all religions are false; quite possibly we will have self driving cars in the future. That future success won't be a disproof of past observational behavior, under the "walks like a duck, quacks like a duck, it IS a duck" doctrine of what is a religion. Praying to each other for deliverance in the form of a self driving car won't bring it here any faster, but if it does somehow arrive anyway, the physical manifestation of that idol/miracle doesn't prove the praying was the cause of the effect/idol/miracle.
Interestingly enough, if enough people "pray" in the form of demanding these products through the marketplace, then yes actually it can cause an actual product to be developed. See: Oculus. It might be expensive but damnit there it is. So I think this actually weakens your argument.
I can't in my mind compare techno-optimism with religious behavior; it's way too different.
The self-driving car as a technology actually physically works. Sebastian Thrun proved that back in 2005, and Google has been proving it over the last 5 years.
There is no possible way to prove - or disprove for that matter - that your Prayer had anything to do with your uncle/grandmother/wife's cancer going into remission.
Those are infinitely different things.
I think you are turning some people's hyperbolic exuberance into something more than it is. Just like people say "everyone loves X" or "everyone is going to want to X."
Definitely seen things like this before, very poignant. Not to knock on Kurzweil and friends too hard [lots of interesting stuff in both spaces obviously], but I've had this vibe from talking with self-described Singularitarians as well (there's some overlap between those circles).
And I can't say it didn't work, but I'm probably a fanatic at this religion anyway.
I think Mark Twain more or less says this outright in "Connecticut Yankee." Speaking of Twain, he grievously injured himself financially chasing the dream of a typesetting machine far ahead of its time.
If it's religion, it's religion that (mostly) works. <insert Panglossian diatribe here>
That being said: the techno-critics also display these behaviors as do the new agers, "naturalists," and others with which they tend to travel. Head over to one of those circles or boards and start arguing the futurist line and see what it gets you.
Both sides have valid points.
Also, your sarcasm about the self driving car might have been too subtle. ;)
As with many other aspects of AI, I don't think just about anyone believes we need strong AI to make self-driving cars work under the vast majority of conditions. We "just" need to create automation that can handle a reasonable number of corner cases without flipping out or grinding to a halt. But that's a debate about engineering timelines, not something more philosophical.
But we're in violent agreement about the "when it works, it's not called AI any longer" argument.
Jaron's essay was a fantastic read and I highly recommend it, but the respondents are not engaging with it at all.
They are just using the Edge as a platform to promote their own ideas. Most of them clearly did not read what he wrote. Even though I respect most of the participants, the platform is not eliciting their best.
What this is really about is religion. More specifically, we have people who are the most non-religious group out there (like scientists) and they dismiss formal religion only to recreate another version of it:
"A core of technically proficient, digitally-minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions!"
And I see this more often than just in computer science. It happens all the time in economics.
And his second point: once an AI understands its users' preferences, its ability to change slows down because, unlike when it started, it ignores virgin data and only follows what it has been taught. That is a huge potential issue. Seems like a topic ripe for machine learning researchers to jump on.
So, I call bullshit on the idea there's a startup anywhere that's just going to turn on an AI we can't handle. We'll likely see it coming a mile away or the isolation it operated in will do it in when it faces human strategists.
1) Widespread use of advanced AI in robotics/production has the ability to bankrupt the working populace, at which point something closer to pure socialism may take over; not so much an annihilation scenario.
2) AI is weaponizable -- at a certain point, given other technological advances in addition to AI, a relatively small amount of like-minded people will be able to produce unique weapons of mass destruction.
Edit: In support of this scenario. Consider the refugee crisis and the response from the various countries. Even Sweden, ostensibly one of the most welcoming (and socialistic) countries, started to close its borders as soon as the situation started to get uncomfortable.
It is not particularly clear that the rate is large; hard problem is hard. No sarcasm; arguably "general intelligence" is the hardest possible engineering problem. Adding a bit more intelligence to the problem when it will already be getting worked on by a lot of humans may not move the needle much at all. But it is not particularly clear that the rate is small either; humans are smart, yet at the same time really dumb in a lot of ways. We have terrible working memory (7 +/- 2 items is absurdly small). We have lots of irrational biases. We have lots of things to do in our lives that are not "thinking" (eating, sleeping, etc.), and the vast majority of our brain is dedicated to those problems, not rational thought. We are terrible at manipulating vast symbol systems without taking immense shortcuts, which inevitably color our manipulations. What happens when something that lacks those restrictions is turned loose on the problem of writing AI algorithms?
Heck if I know. I'm a human too.
I honestly think that those people who are utterly convinced the rate must necessarily be slow are just as wrong as those who are utterly convinced it must necessarily be fast. We really don't know, but it is hardly an invalid concern.
So far, AI algorithms have not written very many AI algorithms. I've seen some toys where one AI algorithm is hooked up to tune another one, but I'm not sure if any of them have ever been practically useful. (All I know is all the ones I've personally seen have amounted to toys.) And the system as a whole is generally constrained by what the final AI algorithm can output anyhow; if the domain and range of the AI function are immutably fixed, there's not much the AI-meta-system can do to "escape" and do anything terribly nasty.
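For concreteness, here is a toy of the "one algorithm tuning another" setup (everything here is invented for illustration): an outer random search tunes the learning rate of an inner gradient-descent learner, and can only act through that one knob, which is the "fixed domain and range" point.

```python
import random

def inner_learner(lr, steps=50):
    """Inner 'AI': fit w to minimize (w - 3)^2; return final loss."""
    w = 0.0
    for _ in range(steps):
        w -= lr * 2.0 * (w - 3.0)       # gradient of (w - 3)^2
    return (w - 3.0) ** 2

# Outer 'meta-AI': dumb random search over the single tunable knob.
rng = random.Random(0)
best_lr, best_loss = None, float("inf")
for _ in range(30):
    lr = rng.uniform(0.001, 0.5)
    loss = inner_learner(lr)
    if loss < best_loss:
        best_lr, best_loss = lr, loss

print(best_lr, best_loss)               # a good lr drives the loss near zero
```

However well the outer loop does its job, all it can ever emit is a number between 0.001 and 0.5; nothing in the system can "escape" that interface.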
I do think we're still a ways away from this being an issue. AI is currently still very much constrained to problems a great deal simpler than "writing AI algorithms", which is right up there with the hardest possible things that human intelligences are currently capable of, and that only to a rare few highly trained and highly talented individuals. We're easily decades away from this problem. But when the day comes, well, I wouldn't bet on the self-improving AI experiencing multiple orders-of-magnitude improvement in mere minutes, but I'd sure hate to bet the future of the species that it's not possible. It is not unreasonable to be concerned that the knee could be very steep. We'll know more as we get closer.