Superintelligence: The Idea That Eats Smart People (idlewords.com)
883 points by pw on Dec 22, 2016 | 580 comments



While I agree with Maciej's central point, I think the inside arguments he presents are pretty weak. I think that AI risk is not a pressing concern even if you grant the AI risk crowd's assumptions. Excerpted from https://alexcbecker.net/blog.html#against-ai-risk:

The real AI risk isn't an all-powerful savant which misinterprets a command to "make everyone on Earth happy" and destroys the Earth. It's a military AI that correctly interprets a command to kill a particular group of people, so effectively that its masters start thinking about the next group, and the next. It's smart factories that create a vast chasm between a new, tiny Hyperclass and the destitute masses... AI is hardly the only technology powerful enough to turn dangerous people into existential threats. We already have nuclear weapons, which like almost everything else are always getting cheaper to produce. Income inequality is already rising at a breathtaking pace. The internet has given birth to history's most powerful surveillance system and tools of propaganda.


Exactly. The "Terminator" scenario of a rogue malfunctioning AI is a silly distraction from the real AI threat, which is military AIs that don't malfunction. They will give their human masters practically unlimited power over everyone else. And AI is not the only technology with the potential to worsen inequality in the world.


Human beings have been extremely easy to kill for our entire existence. No system of laws can possibly keep you alive if your neighbors are willing to kill you, and nothing can make them actually unable to kill you. Your neighbor could walk over and put a blade in your jugular, you're dead. They could drive into you at 15MPH with their car, you're dead. They could set your house on fire while you're asleep, you're dead.

The only thing which keeps you alive is the unwillingness of your neighbors and those who surround you to kill you. The law might punish them afterward, but extensive research has shown that it does little to dissuade people who are actually willing to kill someone.

A military AI being used to wipe out large numbers of people is exactly as 'inevitable' as the weapons we already have being used to wipe out large numbers of people. The exact same people will be making the decisions and setting the goals. In that scenario, the AI is nothing but a fancy new gun, and I don't see any reason to think it would be used differently in most cases. With drones we have seen the CIA, a civilian intelligence agency, waging war on other nations without any legal basis, but that's primarily a political issue, compounded by the fact that it can be done in pure cowardice, without risking the lives of those pulling the trigger, which I think is a problem distinct from AI.


This is exactly the same argument of targeted surveillance vs mass surveillance.

Saying "humans have spied on each other for centuries" is nothing but a distraction for how far beyond what should be legal mass surveillance is, because it makes it so easy to have everything on everyone all the time. It's nothing like pass targeted surveillance. If anything it's much more like Stasi-style surveillance, but on a much bigger and more indepth scale. And we already know how dangerous such surveillance is with the wrong person leading a country.

When war becomes easy and cheap (to the attacker), you'll just end up having more of it. It doesn't help that the military-industrial complex constantly lobbies for it, either.


That's the opposite of how history has played out with respect to violence, however. It is far easier (and less risky) to kill vast numbers of people now than it was hundreds and thousands of years ago, and yet the risk of any one person being killed by violence is far lower than at any previous point in history.


How are you calculating the 'risk of any one person being killed by violence' today? To make that claim you need to consider tail risk and black swan events that are possible but have little precedent. What weight are you giving the likelihood of a person dying by nuclear weapon?


Well, for example, "about 15% of people in prestate eras died violently, compared to about 3% of the citizens of the earliest states".

http://www.wsj.com/articles/SB100014240531119041067045765832...

As far as predicting the future goes, I can't.


It's about power imbalance and impersonality.

Killing people when neither they nor their friends can retaliate is easier. Being able to say "do X or I kill you", without the other party having a defense that will even inconvenience you, gives you a shitton of power.

Military AI would basically be nukes without the fall-out or collateral damage.


So ideally there would be an AI criminal justice system, just to balance things out.


The difference is that AI might have an "intent". It may be just a statistical contraption married to some descendant of a heat-seeking missile sensor, but from the outside it will look like "intent". Perhaps even without the double quotes.


People don't kill each other, even if they want to, because they know if they do they will probably spend the rest of their lives in prison. What do you mean this doesn't dissuade people?


That's not true, if I know my neighbors are coming I'm loading my Remington 870 and I'm waiting for that door to open. In America we're allowed to bear arms for protection.


Because you do not live in an action movie, you will not be able to stop them. You will not know they are coming, or when, or know how they intend to do it, etc. If you are well-defended with arms, they will simply use some other method. Having a shotgun does not make you significantly harder to kill in modern society. It would make you harder to kill if we were limited to cinematic tropes like declaring when we intend to do it, how we intend to do it, etc, sure, but we don't live in that world. We live in a world where the only thing that prevents our death, every day of our lives, is that no one nearby is willing to kill us.

Some people find that scary, and it may seem like a shaky thing to stake your life on. But, firstly, you have no choice. Secondly, it works for billions of people and has kept us safe for tens of thousands of years. Even after we developed the knowledge, tools, and ability to kill millions at the push of a button.


You are missing the point. If everyone around you wants you dead and is willing to do it, you're going to die. If the CIA knows of a terrorist camp and wants to kill the people there, they are going to die.

AI doesn't change this; it just makes it easier.


Everyone wanted Bin Laden dead, but it still took a few years to manage that. So perhaps not as straightforward.


Not everyone wanted Bin Laden dead. Critically, the people he was hiding with and his organization as a whole very much wanted him alive. The people who actually surrounded him on a daily basis did not want to kill him.


How long between the time the CIA knew where he was (definitively) and the time he was killed?


That's exactly what Maciej spends the last third of the article saying: that the quasi-religious fretting about superintelligence is causing people to ignore the real harm currently being caused by even the nascent AI technology that we have right now.


We don't need AI for massive differences in military effectiveness. That is already here. The US can destroy most countries and substate actors with minimal casualties. The issues that remain are the difficult ones: differentiating friendlies/neutrals from enemies, and not creating more enemies via collateral damage and other reactions.


The problem arises when non-state actors wreak havoc with drones and AI with impunity.

What would we do if a drone made by a no-name manufacturer dropped some bombs in Times Square? Who would we blame when someone uses AI to actually sow social mistrust and subvert our existing systems?


>The problem arises when non-state actors wreak havoc with drones and AI with impunity.

Well, for the rest of the world, state actors wreaking havoc with drones and AI with impunity is already a problem.


truth. how we collectively became accepting of drones used by our governments to destroy targets half way round the world whilst the pilots sit in some skyscraper somewhere in our own countries is remarkable. honestly the disconnect is beyond deeply troubling.


I don't really grok this thinking. Why is destroying a target by drone different?

It doesn't seem substantively different from destroying a target via long-range missile or via a laser-guided bomb dropped from a human pilot flying thousands of feet over head. All three are impersonal ways of killing other human beings from a mostly-safe distance – especially in our modern asymmetrical engagements.

I agree that it feels gross to imagine a soldier sitting in a skyscraper pulling a trigger to kill people half way round the world, and it feels odd for someone to kill people in the morning and go home and sleep in their bed that night, again and again, day after day. But military commanders have effectively been doing that since we've had faster-than-horse battlefield communication. So, I'm not convinced this is some brave new world of impersonal killing.

HOWEVER, I get the problem with drone warfare. Drones provide commanders with several benefits (no human casualties on "our" side, generally great accuracy, relatively low cost, etc.) that let them scale up the killing with minimal public outcry. This seems a real problem.

I guess my point is that the problem is not drones. The problem is killing so many people with so little oversight and so little apparent concern – whether by Cruise missile, drone, nanoswarm, T2, whatever.

I'd reword your sentence to be: "how we collectively became accepting of our governments casually killing our fellow humans half way round the world is remarkable"


There is no disconnect for those drone pilots, who become as troubled as anyone who killed someone from a lesser distance.



Evidence for that?


I'm pretty sure that drone-tracking equipment is mounted all over NYC, and more is being deployed as we speak. I can also imagine that a rogue drone arriving from afar and large enough to carry a bomb will be shot down by police if it can't be identified. If it does not carry a bomb, police will apologize.


I'm not buying that.

In fact, the only reason such drone attacks don't happen, or why people haven't been casually blowing each other up for the past five decades with explosives attached to RC cars / planes, is that in general, people are nice to each other. There are plenty of tools out there for dedicated people to wreak havoc in populated areas. Such people simply are very, very rare.


Also, as recent news from Berlin shows, you don't even need a drone; a regular old van will do


Further backing up the parent commenter's statement that people are generally nice to each other.


Note that the same thing happened in France recently.


I can hardly believe that. Any drone large enough to carry a camera can carry a hand grenade. They don't have much range but can be launched locally and flown into a crowd.


> The "Terminator" scenario of a rogue malfunctioning AI is a silly distraction from the real AI threat, which is military AIs that don't malfunction. They will give their human masters practically unlimited power over everyone else.

To be fair, it's a small step from effective AI that doesn't malfunction, to an AI over which humans have lost control. It's precisely one vaguely specified command away in fact, and humans are quite excellent at being imprecise.


You can always use LEO EMP nukes to bring us back to the stone age, thus taking out the AI.


You don't even need to do that; just stop mining coal, or operating oil and gas fields, or scram the reactors. Or more easily, open some circuit breakers; in a pinch you can take down a few electrical towers (not many).

A rogue AI's "oxygen" is electrical power, which is really kind of fragile. The emergency power for most datacenters won't last more than a few days without replenishing diesel fuel.

For a distributed threat, take out fiber with backhoes. Happens every day now, we just happen to repair it. Stop fixing cuts and it's "Dave, my mind is going...".

Of course you have to deal with the AI making contracts with maintenance crews of its own, and do all this before it hardens its power supplies. But our current infrastructure is definitely not hardened against low to moderate effort.


If an AI has gone rogue, it has probably already achieved high intelligence and kept improving itself at an exponential rate. For it to go 'rogue' it probably also has to have escaped an airgap, since you could otherwise just flip the off switch. How would you stop an AI that has burrowed itself into the internet? Remember, if it's even moderately intelligent it'd hide its rogue intentions at first, until it's 100% certain it has 'escaped'. From the internet it could write some nice malware, set up shop on the deep web, in badly secured IoT devices, and with some bad luck even in embedded controllers (chargers etc.). And even if you got rid of it by some miracle, all it would take is one bozo connecting an old/forgotten device to the internet and you're back to square one.


> But our current infrastructure is definitely not hardened against low to moderate effort.

Sure, but nobody thinks AI is a threat right now. They're claiming AI could be a threat in the near future, which is entirely reasonable.

And with battery tech improving steadily, and solar now becoming cheaper than fossil fuels, it becomes progressively easier to depend solely on disconnected, distributed power generation which is resistant to exactly the kind of attack you're suggesting.

We can certainly argue the probabilities of such an outcome, but I hope we can all agree that it's not outright implausible. Which doesn't even count the dangers of AI for our economy, which are even more plausible. So overall, AI has the potential for much harm (and much good of course).


With our current push towards solar and wireless, don't you think these particular circuit breaker paths against a rogue AI are going to be unavailable sooner than later?

Antennas and solar panels can be smashed, but they can also be protected, since they would be mostly concentrated in one area.

Throw in EMP hardened circuitry, and things get a bit harder to destroy.


What stops the AI from taking the path from The Matrix?


If you mean, what stops an AI from locking humanity into a virtual reality simulation and using its collective body heat as fuel - simple thermodynamics.

The Matrix of the original film was originally envisioned as a bootstrap AI, in which the machines were farming humans for their collective processing power - and the simulation that was the Matrix was integral to this purpose, as it served as an operating system for the imprisoned human minds. However, the film's backers felt that concept was too complex for the average moviegoer, and forced the change to "humans = batteries."

But, the canonical Matrix would be too inefficient to actually work. The amount of energy needed to maintain a human being over the course of an entire lifetime far surpasses the amount of energy that can be harvested as body heat - and adding some kind of VR simulation on top of that just to keep people who are already physically trapped from "escaping" would just be a pointless waste of resources.
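
A rough back-of-envelope sketch of that point (my own illustrative numbers, not from the film or the parent comment): a resting human runs on roughly 100 W of food energy, and a heat engine working between body temperature and room temperature can't beat the Carnot limit, so at best a few watts of useful work come back out per person.

    food_in_watts = 2000 * 4184 / 86400    # ~2000 kcal/day of food, expressed in watts (~97 W)
    t_body, t_room = 310.0, 293.0          # kelvin: 37 C body, 20 C surroundings
    carnot_limit = 1 - t_room / t_body     # ~5.5%, the best possible heat-engine efficiency
    recoverable_watts = food_in_watts * carnot_limit
    print(round(food_in_watts), round(recoverable_watts, 1))   # ~97 W in, ~5.3 W out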


Military hardware is hardened against EMP.


Don't the people who control military technology in our current time already have "practically unlimited power over everyone else"?


No, because their power is limited by their role in society and the organization in which they operate. Even the president of the United States has limited powers. If Obama just decided one day that the best course of action was to nuke Moscow and doubled down on it, it's extremely unlikely he'd be able to do so. There are enough other people with careers, jobs, pension plans and common decency between him and actually launching missiles that I don't think it's credible that it would happen on a whim like that, even a persistent one. Someone would call a doctor and get the President some medication.

However, that depends on the people between the president and nuclear launch being decent human beings that care about law and order and proper procedure. You need to look at the system, not just individuals.

Alternatively, let's say someone came to power who genuinely believed nuking Russia was a good option. Rather than order a launch on the spot, what they'd actually do is gradually build a case, appoint pliable or similarly thinking people to key positions, get the launch protocols revised, engineer a geopolitical crisis by provoking Russia and drive events towards a situation in which a nuclear launch seems like a legitimate option.


This is a complete misunderstanding of US nuclear policy and weapons systems. It's designed to enable the president to launch missiles at any target as rapidly as possible, not to doublecheck or safeguard against him.

http://blog.nuclearsecrecy.com/2016/11/18/the-president-and-...

You might say "sure, but there are people who have to actually execute those orders." But give the Pentagon some credit--those are people who have been systematically selected because they follow orders quickly and without question. For example, consider what happened to Harold Hering when it became obvious that he was not one of those people.


This is a really strong argument for big government. The smaller our all-powerful military command is, the easier it is for it to go rogue.


It's rather an argument for a crafted system of checks and balances than for big government per se. Totalitarian governments, for example, are absolutely massive, and many don't even have some sort of politburo to rein in a dictator.


A counterargument would be that it's harder for a big government to change direction easily, once decided and set in motion. Wrong decisions can compound because it's easier for a large organization to stay its course, even in the face of increasing harm, as evidenced by the Vietnam war.


I think they're largely limited by the greater public's threshold for acceptable casualties of their own soldiers during war. What the AI'ing of war does is to remove that natural limit and raise the amount of death and destruction a military can inflict without pushback from the general population.


I think you're _exactly right_. This is precisely why the Iraq and Afghanistan wars proved too costly to prosecute: they were too costly in men and materials. The logistics and cost of procuring and moving weapons, vehicles, living supplies etc. were high, but manageable. The cost of each body bag which had to be explained to the public was not.

Which is why President Obama pivoted to a drone war in areas like Pakistan. Scores of Pakistani civilians are killed on a regular basis for crimes no worse than standing next to a tall bearded man or attending the funeral of a neighbour. The American public has no issues with this because hey, it's cheap to deploy the drones and no American lives are ever in jeopardy. And to be fair about it, apart from the high collateral damage, the Drone War in Pakistan is generally considered to be successful at inhibiting the Pakistani Taliban.

This Drone War was the first successful war that we've seen fought without boots on the ground. It's likely that we'll see many more like this as on-board AI improves.


To make big wars like that, you have to have many great robots, not AIs. If ground robots were great at war in a wide sense, we could use them today via a remote control / VR interface.


Their might is asymmetrical, but power is mitigated by the willingness of an organization of humans to follow commands. There is a limit to how far a soldier will go, ethically.


True, true, but somehow that's never been much of an obstacle to totalitarian governments. Somehow there's always a soldier who will push the button.

In Nuremberg we developed a way to think about this: people outside the central circles of power are weighing a tremendous number of pressures. It's definitely the case that some are sadistic and horrible, but more are just following orders and trying to get by as best they can, and punishing them for war crimes is not appropriate.

The other side of that coin is that it isn't realistic to expect soldiery en masse to resist illegal orders. It's always more complicated than that.


> Somehow there's always a soldier who will push the button.

And then there's always that soldier ready to denounce his comrades for having raped some poor Vietnamese women who had nothing to do with the war itself. Why risk such a PR disaster which might see your funding cut when you can use robots instead? Warzone robots don't snitch on their fellow robots in front of the press and they don't rape, they're only built to kill.


Unless you're talking about the Nuremberg Rallies, you've got your history screwed up: "just following orders" is NOT a defense against war crime accusations, and punishing those who commit war crimes "trying to get by as best they can" is completely appropriate, was the conclusion of the Nuremberg Trials.

https://en.wikipedia.org/wiki/Nuremberg_Principles#Principle...


> True, true, but somehow that's never been much of an obstacle to totalitarian governments. Somehow there's always a soldier who will push the button.

Actually, the unwillingness of the Red Army to do any more invasions was arguably the precipitating factor in the fall of the Soviet empire. In 1988, Gorbachev gave a speech to the Warsaw pact meeting where he told them that the Brezhnev doctrine was no more. No socialist government would be able any longer to count on Russian aid in putting down popular uprisings. A year later, the empire crumbled. Of course, it wasn't the soldiers per se who refused, but the generals who were afraid of the potential mutinies (and the occasional actual ones).


It was Gorbachev who refused, not the generals.


He couldn't have refused without the support of the generals, who had just withdrawn from Afghanistan.


Very true.

And economic problems were a big incentive for everyone in charge to step back from wars. For countries with strong economies there would be little incentive to stop, especially if AI makes war cheaper.


No there isn't. It's easy enough to manipulate the individual to do anything.

We speak of Nazis frequently, but consider what Curtis LeMay did in war and was prepared to do after the war.

AI and robots give plausible deniability and reduce the number of witnesses. They also make suicide raids more practical.


>There is a limit to how far a soldier will go, ethically.

What horrible things do you have in mind that armies have not already done?


It's not that AIs will do worse things than humans have already done. It's that AIs will do those terrible things much more efficiently and with much lower risk to the people in charge.


On the plus side - no more rape.


I don't think you can hope for even that much. Rape has historically been systematically used for terrorizing the population in order to achieve military and/or political aims. An AI free from any ethical concerns could conceivably evaluate it as an efficient strategy for achieving some set of goals and proceed accordingly.


Yes, almost all revolutions succeed because the army turns. It doesn't happen all the time, but some of the time is enough to put some checks on people wanting to take control.



As horrific as the Nanking Massacre was, I believe it doesn't really prove your point. I'd argue that a lot of counterexamples of soldiers' ethical behaviour simply aren't visible and are forgotten and lost to history (https://en.wikipedia.org/wiki/Survivorship_bias)


Which limit are you talking about? I guess you have forgotten what the soldiers of the third reich did.

Or American death squads in Afghanistan. Might as well call them rape and death squads.

What about Guantanamo?

Srebrenica?

Should I go on?

How unenlightened and naive are you?


Please comment civilly and substantively—without personal attacks—or not at all.

https://news.ycombinator.com/newswelcome.html

https://news.ycombinator.com/newsguidelines.html


That's like asking whether we really have to worry about global wars over resources since there are already knife stabbings and car crashes, or whether the Nazis were really that much different from having a bit of a cold. Sadly, being serious about this subject is not one of the strengths of HN it seems, I was kind of spooked when I only got 1 upvote for this: https://news.ycombinator.com/item?id=11685995


> Don't the people who control military technology in our current time already have "practically unlimited power over everyone else"?

Currently there's a high cost to it. War is very expensive; even the richest country in the history of the world, the US, doesn't want to bear the costs. And it requires persuading masses of soldiers to go along with it.

Roboticized warfare might eliminate those constraints.


There isn't a whole lot of deniability. Current drone strikes involve explosions and craters.


This is basically why a lot of people didn't want Google to become a defense contractor while researching military robots. If it did, it would've naturally started to use DeepMind for it. And that's a scary thought.


" They will give their human masters practically unlimited power over everyone else."

Any power that has significant leverage over another power already has that ability.

A bunch of 'super smart evil AI robots' will not be able to physically deter/control 500 million Europeans - but - a small army of them would be enough to control the powers that be, and from there on in it trickles down.

Much the same way the Soviets controlled Poland et al. with only small installations. The 'legitimate threat of violent domination' is all that is needed.

So - many countries already have the power to do those things to many, many others via conventional weapons and highly trained soldiers. That risk is already there. Think about it: a decent soldier today is already pretty much a 'better weapon' than AI will be for a very, very long time. And it's not that hard to make decent soldiers.

The risk for 'evil AI robots' is that a non-state, inauthentic actor - like a terrorist group, militia etc. - gets control of enough of them to project power.

The other risk, I think, is that given the lack of bloodshed, states may employ them without fear of political repercussions at home. We see this with drones. If Obama had to do a 'seal team 6' for every drone strike, many, many of those guys would have died, and people coming home in body bags wears on the population. Eventually the war-fever fades and they want out.


People are worried about AI risk because ensuring that the strong AI you build to do X will do X without doing something catastrophic to humanity instead is a very hard problem, and people who have not thought much about this problem tend to vastly underestimate how hard it is.

Whatever goals the AI has, it will certainly be better at achieving them if it can stay alive. And it will be more likely to stay alive if there are no humans around to interfere. Now you might say, why don't we just hardcode in a goal to the AI like "solve aging, and also don't hurt anyone"? And ensure that the AI's method of achieving its goals won't have terrible unintended consequences? Oh, and the AI's goals can't change? This is called the AI control problem, and nobody's been able to solve it yet. It's hard to come up with good goals for the AI. It's hard to translate those goals into math. It's hard to prevent the AI from misinterpreting or modifying its own goals. It's hard to work on AI safety when you don't know what the first strong AI will look like. It's hard to prove with 99.999% certainty that your safety measures will work when you can't test them.

Things will not turn out okay if the first organization to develop strong AI is not extremely concerned about AI risk, because the default is to get AI control wrong, the same way the default is for planets to not support life.

My counterpoint to the risks of more limited AI is that limited AI doesn't sound as scary when you rename it statistical software, and probably won't have effects much larger in magnitude than the effects of all other kinds of technology combined. Limited AI already does make militaries more effective, but most of the problem comes from the fact that these militaries exist, not from the AI. It's hard for me to imagine an AI carrying out a military operation without much human intervention that wouldn't pose a control problem.

--------- Edited in response to comment--------


I am feeling like perhaps you didn't read the article? Many of these arguments are the exact lines of thinking that the author is trying to contextualize and add complexity to.

These are not bad arguments you are making, or hard ones to get behind. There are just added layers of complexity that the author would like us to think about. Things like how we could actually 'hard-code' a limit or a governor on certain types of motivation. Or what 'motivation' is even driven by at all.

I think you'll enjoy the originally linked article. It's got a lot to consider.


> Whatever goals the AI has, it will certainly be better at achieving them if it can stay alive. And it will be more likely to stay alive if there are no humans around to interfere.

This is a sequence of deductive reasoning that you brought up there. Quite natural for human beings, but why would the paperclip maximiser be equipped with it?

Seriously, the talk specifically argues against most of the points that you brought up.

Shit is complicated yo. Complicated like the world is - not complicated like an algorithm is. Those are entirely different dimensions of complicated that are in fact incomparable.


> to do X will do X without doing something catastrophic to humanity instead is a very hard problem

This scenario I agree with. For instance: the AI decides that it doesn't want to live on this planet and consumes our star for energy, or exploits our natural resources leaving us with none.

The whole AI war scenario is highly unlikely. As per the article, the opponents of AI are all regarded as prime examples of human intelligence - many of them have voiced opposition to war and poverty (by virtue of being philanthropists). Surely something more intelligent than humans would be even less inclined to wage war. Furthermore, every argument against AI posits that humans are far more important than they really are. How much time of day do you spend thinking about bacteria in the Marianas Trench?

> AI control

My argument is with exception to this scenario. By attaching human constraints to AI, you are intrinsically attaching human ideologies to it. This may limit the reach of the superintelligence - which means that we create a machine that is better at participating in human-level intelligence than humans are. Put simply, we'd plausibly create an AI rendition of Trump.


>Surely something more intelligent than humans would be even less inclined to wage war.

The default mode for a machine would be to not care if people died, just as we don't care about most lower life forms.

> Furthermore, every argument against AI posits that humans are far more important than they really are. How much time of day do you spend thinking about bacteria in the Marianas Trench?

Exactly.

Which is why worrying about ourselves in a world with superintelligence is not wasted effort.

The extreme difference in productive abilities of superintelligence, vs a human population whose labor and intelligence have been devalued into obsolescence, suggests there will be serious unrest.

Serious unrest in a situation where a few have all the options tends to lead to extermination, as is evident every time an ant colony attempts to raid a home for food crumbs.

The AIs might not care whether we live or not, but they won't put up with us causing them harm or blocking their access to resources, even if we are doing it not to hurt them but only to survive.


> if you are given a strong AI randomly selected from the space of all strong AIs

Why would this ever apply? We're building them, not picking them out of a hat.


I think the current state of deep neural network design, and the research funding pouring into generalizing the simple neural nets we're working with now, suggests that we are, in fact, pulling them out of a hat.

Right now we're just discarding all the ones that are defective, at a stupendously high rate as we train neural nets.

I can't speak to what method would generate the first strong AI, but I suspect the overall process - if not the details - will be similar. Training, discarding, training, discarding, testing, and so on. And the first truly strong AI will likely just be the first random assemblage of parts that passes those tests.


That's not how neural network training works. It's not magic or guesswork; it's essentially glorified curve fitting (over a much more complicated space than the polynomials). It's also not random in any respect. The space of all neural networks plausibly generated from a training set is very, very small compared to the space of all networks of that size.


I get the feeling that not many people in this thread actually know how AI and related concepts work.


The initial conditions of neural networks are almost always chosen randomly or pseudo-randomly. Data sets are sometimes, although not always, presented with random sample order or selection.

Either way, the randomness in initial conditions means the solution found is one of many different solutions that could have been found, and depending on the problem, different initial conditions can result in very different solutions even on the same data.
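
To make this concrete, here's a minimal sketch (my own toy setup, nothing from the thread): fit the same tiny network to the same data twice, changing only the random seed used to draw the initial weights. Both runs fit about equally well, but they land on different weights -- different "solutions" from different random starting points.

    import numpy as np

    X = np.linspace(-1, 1, 20).reshape(-1, 1)
    y = X ** 2                                  # toy regression target: y = x^2

    def train(seed, hidden=8, steps=20000, lr=0.1):
        rng = np.random.default_rng(seed)       # only the seed differs between runs
        W1 = rng.normal(size=(1, hidden)); b1 = np.zeros(hidden)
        W2 = rng.normal(size=(hidden, 1)); b2 = np.zeros(1)
        for _ in range(steps):
            h = np.tanh(X @ W1 + b1)            # forward pass
            p = h @ W2 + b2
            dp = 2 * (p - y) / len(X)           # gradient of mean squared error
            dh = (dp @ W2.T) * (1 - h ** 2)     # backprop through tanh
            W2 -= lr * h.T @ dp; b2 -= lr * dp.sum(0)
            W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(0)
        return W1, float(((p - y) ** 2).mean())

    W_a, loss_a = train(seed=0)
    W_b, loss_b = train(seed=1)
    print(loss_a, loss_b)              # both runs fit the data comparably well
    print(np.allclose(W_a, W_b))       # False: the learned weights differ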


> It's a military AI that correctly interprets a command to kill a particular group of people, so effectively that its masters start thinking about the next group, and the next

You know, you don't need to go that far. You know what a great way to kill a particular group of people is? Well, let's take a look at what a group of human military officers decided to do (quoting from a paper of Elizabeth Anscombe's, discussing various logics of action and deliberation):

""" Kenny's system allows many natural moves, but does not allow the inference from "Kill everyone!" to "Kill Jones!". It has been blamed for having an inference from "Kill Jones!" to "Kill everyone!" but this is not so absurd as it may seem. It may be decided to kill everyone in a certain place in order to get the particular people that one one wants. The British, for example, wanted to destroy some German soldiers on a Dutch island in the Second World War, and chose to accomplish this by bombing the dykes and drowning everybody. (The Dutch were their allies.) """

There's a footnote:

""" Alf Ross shews some innocence when he dismisses Kenny’s idea: ‘From plan B (to prevent overpopulation) we may infer plan A (to kill half the population) but the inference is hardly of any practical interest.’ We hope it may not be. """

It's not an ineffective plan.


what are you quoting from? who's Kenny? context?


The internet also brought us wikipedia, google, machine learning and a place to talk about the internet.

Machine learning advances are predicated on the internet, will grow the internet, and will become what we already ought to know we are: a globe-spanning hyperintelligence working to make more intelligence at a breakneck pace.

Somewhere along this accelerating continuum of intelligence, we need to consciously decide to make things awesome. So people aim to build competent self-driving cars, that way fewer people die of drunk driving or boredom. Let's keep trying. Keep trying to give without thought of getting something in return. Try to make the world you want to live in. Take a stand against things that are harmful to your body (in the large sense and small sense) and your character. Live long and prosper!!!


which part of the "we need better scifi" slide did you not understand?


>We already have nuclear weapons, which like almost everything else are always getting cheaper to produce.

And in an almost miraculous result, we've managed not to annihilate each other with them so far.

> Income inequality is already rising at a breathtaking pace.

In the US, yes, but inequality is lessening globally.

> The internet has given birth to history's most powerful surveillance system and tools of propaganda.

It has also given birth to a lot of good things, some that are mentioned in a sibling comment.


Yeah, but we were never forced into this global boiler room where we're constantly confronted with each other's thoughts and opinions. Thank you, social media. It's like there is no intellectual breathing room anymore. Makes anyone go mad and want to push the button.


> It's a military AI that correctly interprets a command to kill a particular group of people, so effectively that its masters start thinking about the next group, and the next.

This has been done many times by human-run militaries; would AI make it worse somehow?

Groups of humans acting collectively can look a lot like an "AI" from the right perspective. Corporations focused on optimizing their profit spend a huge amount of collective intelligence to make this single number go up, often at the expense of the rest of society.


> This has been done many times by human-run militaries; would AI make it worse somehow?

Soldiers in developed countries no longer want to die en masse.


Soldiers in developed countries no longer have to die en masse. Compare US and Iraqi casualties.


I don't think AI will cause a paradigm shift here; but like most powerful technologies, I imagine it will have potent military applications.


No doubt that his "inside arguments" have been rebutted extensively by the strong AI optimists and their singularity priests. After all, dreaming up scenarios in which robotic superintelligence dominates humanity is their version of saving the world.

That's why I found the "outside arguments" here equally important and compelling.

> The outside view doesn't care about content, it sees the form and the context, and it doesn't look good.

If it sounds like and acts like a cult, why should we treat it any differently from a cult? Even if the people in it are all very smart, wealthy, well-dressed, and appear very rational, they're still preaching the end of the world on a certain date. All of those groups have only one thing in common: they're all wrong.

The best rebuttals to all this are the least engaging.

"Dude, are you telling me you want to build Skynet?"


The poster's rebuttals against the threat are totally under-thought cop-outs. For example, take his first argument about "how would Hawking get his cat in a cage?": just put food in it. It's not hard to imagine an AI could come up with a similar motivation to get humans to do what it wants.

That's not to say that his general premise is wrong, but it's hard for me to take it seriously when his rebuttals are this weak.


The emu "war" example is also similarly dumb. We know that humanity is very successful at wiping out animals even when we're not actively trying, just as a consequence of habitat encroachment or over-hunting. If you want to kill a bunch of emus, that can easily be done using appropriate methods. Having a cohesive military formation go after them in the huge Australian outback and giving up after a week is not the way to do so.


> If it sounds like and acts like a cult, why should we treat it any differently from a cult? Even if the people in it are all very smart, wealthy, well-dressed, and appear very rational, they're still preaching the end of the world on a certain date. All of those groups have only one thing in common: they're all wrong.

Occasionally the crazies are right. Remember when the idea that the NSA was recording everyone's emails was paranoid conspiracy-theory talk? Except it turns out they were actually doing it the whole time.

The fact that the world hasn't ended tells us virtually nothing about how likely the end of the world is, for the simple reason that if the world had ended we wouldn't be here to talk about it. So we can't take it as evidence, at least, not conclusive evidence. Note also that the same argument works just as well against global warming as it does against AI risk.

Turn it around. Suppose there was a genuinely real risk of destroying the world. How would you tell? How would you distinguish between the groups that had spotted the real danger and the run-of-the-mill end-of-the-world cults?
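
A small Monte Carlo sketch of that selection effect (my own toy numbers, purely illustrative): whatever the underlying per-century risk, every observer who is still around has seen zero apocalypses, so "the world hasn't ended yet" looks identical from the inside under all of the hypotheses.

    import random

    def surviving_worlds(per_century_risk, centuries=100, worlds=100_000, seed=0):
        rng = random.Random(seed)
        # a world "survives" only if it dodges the apocalypse in every century
        return sum(
            all(rng.random() > per_century_risk for _ in range(centuries))
            for _ in range(worlds)
        )

    for risk in (0.001, 0.01, 0.05):
        alive = surviving_worlds(risk)
        # observers only exist in surviving worlds, and every one of them has seen
        # exactly zero world-endings -- whether the true risk was 0.1% or 5% per century
        print(f"risk {risk} per century: observers remain in {alive} of 100000 worlds")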


The point that I was trying to bring attention to is that one's perception of the AI risk movement changes substantially once you turn your focus from content to form and context. He brings many examples of this (the cult in the robes, Art Bell, etc.)

I wouldn't call suspecting the government of surveilling people "crazy." Claiming you know the timeframe for the apocalypse with precision is different. I am from North Carolina -- try a Google Image search for "May 21, 2011."

Ray Kurzweil believes he will never die. Balancing content with form and context, what we have here is clearly an atheistic, scientist version of "May 21, 2011."


>Remember when the idea that the NSA was recording everyone's emails was paranoid conspiracy theory talk

No, I remember the time that we'd known there were various signal intelligence programs operated by the US government for decades, and it was just a matter of guessing what the next and biggest one would be.


I'll push back against the idea of smart factories leading to "a vast chasm between a new, tiny Hyperclass and the destitute masses." I mean, if the masses are destitute, they can't afford the stuff being made at those fancy factories, so the owners of those factories won't make money. Income inequality obviously benefits the rich (in that they by definition have more money), but only up to a point. We won't devolve into an aristocracy, at least not because of automation.


> I mean, if the masses are destitute, they can't afford the stuff being made at those fancy factories, so the owners of those factories won't make money.

I think that's beside the point. Why would the "aristocracy" owning the factories care about money, when they have all the goods (or can trade for them with other aristos)?

It's not like they need money to pay other people (the destitute masses are useless to them). With their only inherent "capital" -- the ability to work -- made worthless by automation, the destitute masses have no recourse -- they get slowly extinguished, until only a tiny fraction of humanity is left.


> Why would the "aristocracy" owning the factories care about money, when they have all the goods (or can trade for them with other aristos)?

In this scenario, they only need enough factories to meet the needs of themselves and the other aristocrats. Which means there will still be massive demand for goods among the common people, which means there will be jobs for people. The only way your scenario works is if the aristocracy produces enough goods for the entire world cheaper than anyone else can, while also making them too expensive for anyone to afford. But that would mean the factories are producing enough goods for everyone despite the fact that very few can actually buy them. Piles of goods wasting away outside the factory. The only reason to do this is if the aristocracy were malicious: trying to hurt the rest of humanity despite the fact that there is no real benefit.


> In this scenario, they only need enough factories to meet the needs of themselves and the other aristocrats.

That could still end up taking all of the planet's resources, if enough aristocrats try to maximize their military capacity to protect themselves or play some kind of planet-wide game of Risk, or if they create new goods that require absolutely insane amounts of resources.

I mean, ultimately, the issue is that the common people have nothing of value to trade except for resources that they already own, and no way to accumulate things of value. Because of this, the aristocrats will never have an incentive to sell anything to them: if they had any use for the common people's resources, they would just take them. So I think there are three possible outcomes:

* The aristocrats' needs become more and more sophisticated until they need literally the whole planet to meet them, destroying the common people in the process.

* The aristocrats provide to the common people, no strings attached, out of humanism (best case scenario).

* The aristocrats take what they need and let the common people fend for themselves, creating a parallel economy. In that case the common people would have jobs, although they would live in constant worry that the aristocrats will take more land away from them.


What about a scenario in which the common people revolt and kill the aristocrats?

It has happened before.


That just hits "replay" and provides convincing evidence to any new or surviving overlords that the capabilities of the masses must be suppressed more successfully next time.


Sure, if they can get past the aristocrats' killer robot army.


What matters is the total size of the market and the total size of the labor pool. If the Hyperclass have more wealth than all of humanity had prior, they don't need to sell to the masses to make money. If the labor pool is mostly machines they own, they don't need to pay the masses, or even need a functioning market among the masses. In the degenerate case, where a single individual controls all wealth, if they have self-running machines that can do everything necessary to make more self-running machines, that individual can continue to get richer (in material goods; money makes no sense in this case).


In such a case, what stops the masses from creating a parallel economy of their own?


I imagine that would happen, but as soon as the size of their economy was large enough to attract the notice of the Hyperclass, or even just one member, it would be completely undercut by them. I don't know what an equilibrium state would look like.


I believe that technology would still drip, if not flow, from the elite world to the outsiders, and eventually they could maybe leave the planet, for example, and leave the elites in their world, something like in Asimov's tales where the Solarians and Aurorans lived comfortably with their robotic servants and autofactories happily ever after, until extinction.


Money is just a stand-in for resources and labor, and if automation makes labor very very cheap, the rich will only need the natural resources the poor sit on, not anything from the poor themselves.


Which means these alleged poor will be in the same position everyone on the planet is right now, without having this magic automation.


Alleged poor? When machines are cheaper than humans for any given task (as a result of both AI and robotic improvements) human beings will be destitute except for any ownership of resources they already have and can defend.

That means the vast majority of people will have no means of income or resources, unless they appropriate the use of land or resources they don't own in a shadow economy.

But the appropriation of resources is not likely to be perceived positively by those that own the resources, given that ownership is the only thing separating the rich from the destitute.


It's a classic tragedy of the commons scenario.

If you as a manufacturer move to a jobless production system, you gain net margin.

If everybody moves to jobless production, the topline demand shrinks radically.

Yet, for each individual mfg, the optimal choice is jobless production (aka, "loot the commons", aka "defect").


Except that at some point the very rich will have enough AI systems in place to provide for each other.

At that point, the economy will continue to grow, despite what will seem like an economic disaster to most of the human race.

This happens all the time to other species today, where humans suck up all the resources leaving incumbent creatures to die off.


There are at least two failure cases here:

- a military AI in the hands of bad actors who intentionally do bad stuff with it.

- a badly coded runaway AI that destroys earth.

These two failure modes are not mutually exclusive. When nukes were first developed, the physicists thought there was a small but plausible chance, around 1%, that detonating a nuke would ignite the air and blow up the whole world.

Let's imagine we live in a world where they're right. Let's suppose somebody comes around and says, "let's ignore the smelly and badly dressed and megalomanic physicists and their mumbo jumbo, the real problem is if a terrorist gets their hands on one of these and blows up a city."

Well, yes, that would be a problem. But the other thing is also a problem. And it would kill a lot more people.


I mean, if you made me a disembodied mind connected to the internet that never needs to sleep and can make copies of itself, we would be able to effectively take over the world in like ~20-50 years, possibly much less time than that.

I make lots of money right now completely via the internet and I am not even breaking laws. It is just probable that an AI at our present level of intelligence could very quickly amass a fortune and leverage it to control everything that matters without humanity even being aware of the changeover.


There are also nearer-term threats (although I'd likely disagree on many specifics), but I don't see how that erases longer-term threats. One nuclear bomb being able to destroy your city now doesn't mean that ten thousand can't destroy your whole country ten years down the line.


I think the point (which is addressed with Maciej's Alamogordo callback near the end) is that the longer term threat being speculated about and dependent on lots of very hypothetical things being true is pretty much irrelevant in the face of bigger problems. I mean, yes, a superpower that had hard military AI could wreak a lot of havoc. On the other hand, if a superpower wants to wipe out my corner of civilisation it can do so perfectly happily with the weapons at its disposal today (though just to be on the less-safe side, the US President Elect says he wants to build a few more). And when it comes to computer systems and ML, there's a colossal corpus of our communications going into some sort of black box that tries to find evidence of terrorism that's probably more dangerous to the average non-terrorist because it isn't superintelligent.

Ultimately, AI is neither necessary nor sufficient for the powerful to kill the less powerful.

And if it's powerful people trying to build hard military AI, they probably aren't reading LessWrong to understand how to ensure their AI plays nice anyway.


That's not how risk works.

If we want to grow to adulthood as a species and manage to colonize the cosmos, we need to pass _every_ skill challenge. If in 50 years there'll be a risk of unfriendly superintelligence and it'll have needed 40 years of run-up prep work to make safe, then it will do us absolutely zero good to claim that we instead concentrated on risk of military AI and hey, we got this far, right?

Considering the amount of human utility on the line, "one foot short of the goal" is little better than "stumbled at the first hurdle".


I think the article dealt pretty well with risk: you survive by focusing finite resources on the X% chance of stopping a Y% chance of the near-elimination of humanity, not on the A% chance of stopping a B% probability of something even worse than the near-elimination of humanity -- where X is large, Y is a small fraction, and the product of A and B is barely distinguishable from zero, despite the latter getting more column inches than most of the rest of the near-negligible-probability proposed solutions for exceptionally low-probability extinction events put together.

I also tend to agree with Maciej that the argument for focusing on the A probability of B isn't rescued by making the AI threat seem even worse with appeals to human utility like "but what if, instead of simply killing off humanity, they decided to enslave us or keep us alive forever to administer eternal punishment..." either.
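
To put toy numbers on that trade-off (mine, purely illustrative, not the parent's): the expected payoff of a mitigation effort is roughly the probability the threat materialises times the probability the mitigation works, and the second product really is vanishingly small.

    # Illustrative made-up numbers only: Y/X for the near-elimination threat,
    # B/A for the "even worse than near-elimination" threat, as labelled above.
    Y, X = 0.05, 0.30      # P(threat materialises), P(mitigation stops it)
    B, A = 0.0001, 0.01
    print(Y * X)           # 0.015
    print(B * A)           # 0.000001 -- "barely distinguishable from zero"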


Well yes, we have finite resources to deploy.

But most resources are not spent on risk mitigation. The priority given to risk mitigation naturally goes up (or should) as more credible existential risks are identified.


Global income inequality has been decreasing.

http://voxeu.org/article/parametric-estimations-world-distri...


That's inequality between states. Inequality within states hasn't decreased.


IIRC inequality of all humans has decreased as poor people have become richer.


Yes, that's the ultimate threat. But in the meantime, the threat is the military will think the AI is "good enough" to start killing on its own and the AI actually gets it wrong a lot of the time.

Kind of like what we're already seeing now in courts, and kind of how the NSA's and CIA's own algorithms for assigning a target are still far less than 99% accurate.


It's possible that we could face both AI risks consecutively! First a tiny hyperclass conquers the world using a limited superintelligence and commits mass genocide, and then a more powerful superintelligence is created and everyone is made into paperclips. Isn't that a cheery thought. :-)


The real danger of AI is that it allows people to hide ethically dubious decisions that they've made behind algorithms. You plug some data into a system and a decision gets made and everyone just sort of shrugs their shoulders and doesn't question it.


Isn't that the conclusion he gives at the end of the article? Ethical considerations


what if we made a superintelligent AI that was our Socrates?

superintelligence of a military AI is worrisome, but superintelligence in a cantankerous thinker is quite reassuring...


"I live in California, which has the highest poverty rate in the United States, even though it's home to Silicon Valley. I see my rich industry doing nothing to improve the lives of everyday people and indigent people around us."

This is trivially false. Over a hundred billionaires have now pledged to donate the majority of their wealth, and the list includes many tech people like Bill Gates, Larry Ellison, Mark Zuckerberg, Elon Musk, Dustin Moskovitz, Pierre Omidyar, Gordon Moore, Tim Cook, Vinod Khosla, etc, etc.

https://en.wikipedia.org/wiki/The_Giving_Pledge

Google has a specific page for its charity efforts in the Bay Area: https://www.google.org/local-giving/bay-area/

This only includes purely non-profit activity; it doesn't count how eg. cellphones, a for-profit industry, have dramatically improved the lives of the poor.


I feel the problem is the fact that there are 100 billionaires in the first place; no one gets rich on their own. Gates et al. are clever, but didn't get where they are totally independently, without others' support, so they should give back.

Also, some of these billionaires are running companies that are great at tax avoidance, probably most of them. Now what? They get to pick and choose where they spend/invest their money? I don't buy it.

I believe in wealth, just not this radical wealth separation.


Countries that have no rich people are never prosperous. You can raise marginal income tax rates from, say, 60% to 70%, and maybe that's a good idea overall, but it doesn't get rid of billionaires. High-tax Sweden has as many billionaires per capita as the US does: https://en.wikipedia.org/wiki/List_of_Swedes_by_net_worth

If you raise the marginal tax rate to 99%, then you get rid of billionaires, but you also kill your economy. There are all the failures of communist countries, of course, but even the UK tried this during the 60s and 70s. The government went bankrupt and had to be bailed out by the IMF. Inflation peaked at 27%, unemployment was through the roof, etc.:

https://en.wikipedia.org/wiki/1976_IMF_Crisis

https://en.wikipedia.org/wiki/Winter_of_Discontent


I agree with you that it isn't practical right now to get rid of billionaires. However, I don't think that it's some kind of economic theorem. The reasons that socialism failed are complex, and pure capitalism failed as well (think Gilded Age), which is why everyone lives in a mixed economy. It is reductionist to say that the 1976 IMF Crisis was caused by the tax rate instead of excess spending, monetary policy, and structural aspects of the economy. As a counterexample, postwar US had a 92% tax rate and did OK: http://www.slate.com/articles/news_and_politics/the_best_pol...

IMHO, most economies aren't able to raise the effective tax rate because the wealthy can add loopholes or shuffle their wealth elsewhere. This isn't an economic problem, but a political problem. Is there a political will to close loopholes and restrict the movement of wealth? Do people frame wealth in terms of freedom or in terms of societal obligations?


The problem is not the existence of rich people. The problem is that some people are getting poorer. The two are not always linked.

In other words, inequality can be a sign of good (upward mobility, vibrant economy) as well as bad (poor people getting poorer).

Fixing the latter is important. "Fixing" the first is harmful.

Any solutions should focus on giving the average and the poor the ability to improve their situation. Reducing the number of rich should never be the goal.


I don't think an income tax that punishes people for making too much money is the right way to go about it. How about instead of punishing people for being rich, discourage the filthy rich from spending money on the frivolous. For instance, set up a luxury tax on expensive cars, private jets and jet fuel, first class transportation and primary residences and hotels that are way above the average value for an area. On the other end, have tax credits (not just a tax deduction) for contributing to charitable causes, or for taking business risks that drive innovation.


I think it might be great to encourage the rich to spend as much as possible. Don't the expensive cars, private jets, and first class transportation support whole networks of businesses, and provide employment?


Yes, but you also need to look at the products of those people's labor and other things that labor could be used for. Do we need more people building and crewing luxury yachts, or building and operating hospitals and sheltered accommodation? In both cases people are paid to do work, but the products of that work are very different.

But in practice much of the wealth of super-wealthy people is actually either tied up in the value of the businesses that they own, which are often doing economically valuable things, or is invested in useful enterprises (shares), or funds useful activities (bonds). It's not as though the net wealth of Warren Buffett is all being thrown at hookers and blow.

There are already ways to direct the spending of the wealthy towards more productive uses, such as consumption taxes on luxury goods. But if they take their wealth to other countries with laxer consumption taxes, there's not a lot we can do about it. So we're back to the libertarian argument. At some point you get into questions of freedom and individual rights.


The problem is not that there are rich people buying nice things.

The problem is when poor or middle class people are unable to improve their situation or lose ground.

The only time rich people are a problem for poor people is when rich people are able to corrupt government to tilt the playing field their way. This is a problem of corrupt politicians and lack of anti-corruption law.

I think people underestimate how many economic difficulties are not caused by economic effects, but by corrupt politicians who are permitted to stack the deck against the average person as a way to fund their campaigns or rack up post-governance favors.


Not that much. For an average rich person, most of their assets are not spent on living/luxury, and they can't realistically be unless said rich are extremely extravagant.

So, unless they are actively invested in some sort of productive scheme, they are just sitting there (e.g. as huge estates, savings etc.).

In any case, it's much better for the economy to have a large middle class, than the equivalent money in fewer rich persons.


The thing about the rich is that they can hire people to make loopholes out of what you just described, and they have the financial incentive to do so.


>If you raise the marginal tax rate to 99%, then you get rid of billionaires

No, you don't, because billionaires' source of money is almost always capital gains. They don't give a shit about income tax.


> High-tax Sweden has as many billionaires per capita as the US

The first thing they all did was move their companies' incorporation out of Sweden.


All the more reason for leveling the playing field.


And that reason is . . .?


Companies and individuals that manage to game the tax system should be subject to an individual tax that also works retroactively, so they don't have an advantage over companies and individuals that went with the system instead of against it. Taxes could be much lower, if only everyone paid his dues.


I don't think it's possible to have a country without rich people, relatively speaking.

Every country we've seen has some sort of power hierarchy, and therefore an unequal distribution of wealth.


To my knowledge, there's been no country that said "no rich people." Yes, marginal rates have been very high in the past - sometimes with no effect (the UK example - though it only went up to a maximum 50% marginal tax rate), and sometimes coinciding with large periods of expansion (the US had marginal tax rates as high as 94%, hovering around 90% between 1944 and 1964).

Further, there's never been proof it "kills your economy." I've never met a phenomenally wealthy person who said "well, if tax rates go up to x%, that's when I stop working." These folks LIKE working, money is great, but it's not their driver. And, even if they DID stop working, would it be the worst thing in the world? Honestly - do we really think there's only one Gates, Zuckerberg, Ellison, Page, etc?


California is mismanaged to hell. The Bay Area has some of the worst roads in the nation despite very mild weather and a wealthy tax base. It cost $8 billion to build 1 or 2 miles of the Central Subway, yet only €11 billion to build the world's longest tunnel under the Alps. I have the same income tax rate as I did in Canada, yet there isn't universal healthcare and there's far more economic inequality. It goes on and on. If you tripled the money base, I don't know how much better it would get.


You lost me on your rant about taxation without universal healthcare, when the vast portion of that tax (and mis-spending, like on the Iraq occupation) is federal, not Californian.

The problem is one of regulatory capture - corporate vested interests control the governance process and that means peons are getting less and less each day for their tax revenues.


My rant was not about California alone. What makes California mismanaged is a combination of the state itself and the federal government.

And the problem you state is not unique to any government in the world. Germany is better run than Italy and there are many reasons behind it.


> The problem is one of regulatory capture - corporate vested interests control the governance process

The problems with many things, but especially the price of SF Muni's new Central Subway and the like, are more about union interests controlling the governance process than corporate. Remember that they do prevailing-wage construction (which somehow means they pay the highest wage out there, not the average/median) to their construction workers.

The millions/billions spent on environmental-review studies and lawsuits are another matter as well.


> The bay area has some of the worst roads in the nation

You haven't traveled much if you believe this. Try going to the northeast sometime.


SF/Oakland tops the list in worst roads in metros over 500k pop:

http://www.businessinsider.com/the-worst-roads-in-america-20...

That there are bad roads elsewhere? I believe that too :).

The thing is, it shouldn't be topping any list like this, given the amount of money and the lack of northeastern weather.


They don't give back? Microsoft employs 100k+ people. That's giving back. These people all pay taxes and give back to society because Microsoft gave them a job. Because Gates happened. And in the case of Gates, let's not forget Bill & Melinda Gates Foundation.

What the hell have you done for society?

This story is very similar for most billionaires. They create A LOT of jobs and careers.


I'm a former employee of MSFT, and a great admirer of the Gates' dedication. But let's be clear - the fact that they employ 100k people isn't giving back in the least. They have a business, the business needs functions done, they're getting hours worked for money paid. That's not "giving back to society" in any way, shape or form.

His billions of dollars donated to mosquito nets in Africa (among many other things) where he gets nothing back ... that's giving.


Can you explain to me why you think taking all of the wealth of 100 billionaires will help the poor? 100 billion spread over the population of California is less than $3000/person. So you can wipe out all of the billionaires and give everyone $50/week for one year, what is that going to change?
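(Rough sketch of that arithmetic, if anyone wants to check it; the ~39 million population figure below is my own assumption, not something stated above:)

    # Back-of-the-envelope check of "$100bn across California".
    # The population figure is an assumed round number, for illustration only.
    total_wealth = 100e9        # $100 billion
    ca_population = 39e6        # assumed ~39M Californians
    per_person = total_wealth / ca_population
    per_week = per_person / 52
    print(round(per_person), round(per_week))   # -> 2564 49, i.e. ~$2,600 each, ~$50/week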


To add some context, California's tax revenue for 2014-2015 was $130bn. So having an extra $100bn would roughly double their income for one year.

It's not clear to me whether or not that would make a big difference. I lean no, because my default assumption is that governments are really bad at spending money, but I could see it going either way.

Of course, there are also poor people outside California, and there's no particular reason to focus on the ones inside.

(I also note that 100 billionaires own considerably more than $100bn between them, but that's a minor nitpick.)


$3000/person is around or larger than the world's median individual annual income: https://www.givingwhatwecan.org/post/2016/05/giving-and-glob... .

I'd also expect the total wealth among the 100 billionaires to be well over 100 billion, considering just Ellison + Zuckerberg + Page together have over 100 billion.

In a similar vein to these two figures, the richest 62 people in the world hold as much money as the poorest 50% of the world: https://www.theguardian.com/business/2016/jan/18/richest-62-... . As a direct consequence, if these 62 people gave all of their money (except for a couple million each) away immediately, 50% of the world would have twice as much money.

(edit: I'm not suggesting that billionaires instantly give all their money away as a direct cash transfer. Just providing a counterargument to "billionaires don't have that much power")


>$3000/person is around or larger than the world's median individual annual income

But we're not talking about the world, we're talking about spreading it over the citizens of California in particular, which have one of the highest incomes in the world.

If you want to talk about it in the scope the entire world, divide their wealth by 7 billion instead of 40 million to see what it gets everyone. Also, almost nobody in the US is for taxing the rich in the US to just give to citizens of other countries.


$3K per capita would be an enormous economic stimulus. Or give public schools $3K per student, and you'll see huge changes. It's a lot of money.


> Or give public schools $3K per student, and you'll see huge changes. It's a lot of money.

Yes, you'll see them spend $3K more per student. You won't actually see any improvements in the students, though.

http://washington.cbslocal.com/2014/04/07/study-no-link-betw...

The public schools have many problems; lack of money is not one of them. Having any idea what to do with it is.


Well, this is kind of a red herring, because it's widely known that schools of all types have been drastically increasing their administrative bodies, ballooning costs without actually doing anything for the students with that extra administration. Plus, it's the Cato Institute.


Think about what you're saying: "It's a red herring! We know that schools spend money they get on dumb things!" Yes, that's my point. :) If we could magically fix that, it might--in principle--become a good idea to give them more money. Giving them money does not actually magically fix that.

It's worth noting that Maciej forgot that the tech barons he hates actually tried this: https://www.washingtonpost.com/opinions/how-newark-schools-p...


What I'm saying is that presenting a report which shows more money getting spent on not the students, but some side thing which doesn't actually benefit the students... That is the red herring. That report isn't actually about money spent on students. It's money schools are spending on "administration". If the money given to schools isn't spent on students, it is useless. Spending money on educating actual students (and not ballooning administrations) actually does improve student education, just ask any teacher and ignore the principal.


I disagree, it's not simply the amount of money that is of concern here, but how it is allocated. Throwing money at problems is not the proper solution.


After $2500 for administration, there's just enough for a new aircraft carrier


It would be essentially cash for clunkers, which wasn't an enormous economic stimulus.


> you can wipe out all of the billionaires and give everyone $50/week for one year, what is that going to change?

well if you put it like that, the world ...


It's only for citizens in California in my calculation, which already make much more than that on average. So it would hardly change anything.


As to the first claim. It seems Mississippi has the highest poverty rate at 21.9%. California is at 16.4%.

Source: https://en.wikipedia.org/wiki/List_of_U.S._states_by_poverty...


This is by an antiquated measure. The accepted rate now is 20.6%, first in the nation. Discussion here:

http://www.forbes.com/sites/chuckdevore/2016/09/28/why-does-...


The SPM is not, by any means, the "accepted rate". Whether it's a more appropriate measure for any particular purpose is a legitimate discussion to have.



Yes, and even if you ignore philanthropy, the tech industry generates enormous amounts of tax revenue, which is supposed to be spent by the government to help improve the lives of "everyday people and indigent people".

A question people don't ask enough is: given that we give vast trillions of dollars to the government, most of which is spent on various kinds of social programs (health care, education, social security, etc), why is there STILL so much poverty, joblessness, homelessness, drug use, crime, and other kinds of suffering in the US?


So Apple and Google all pay full taxes in California? All profits booked to HQ?

Wow, they're awesome.


Your assumption is that billions of dollars can be simply converted into poverty reduction.

It seems possible to me that the technology to turn money into less poverty not only doesn't exist, but that the social structures that make men like Bill Gates rich also make it difficult to create such technology.

Your implicit argument is that today's rich somehow care more about improving society than yesterday's did, which will cause these concentrations of wealth to lead to a different outcome. I'm not sure I see much of a difference between Gates and Carnegie. Different ideas about what the world needs, but not a particularly different approach to capital.


How does a pledge about something you may or may not do in the future help poor people today? How does "the majority of their wealth" address income inequality? Will said billionaires give away so much that they cease to be billionaires or even millionaires?

And how are the actions of a few billionaires relevant to what the industry does as a whole? Does Google, Facebook, or, God forbid, Uber, address the problems of poverty and inequality (which are separate problems) as a company?

To a very large extent, charity is irrelevant; charity is a way of buying oneself a conscience without actually changing anything in the world; without even addressing the problems or thinking about them.


The issue is the concentration of the wealth itself, not what those who benefit from the concentration of wealth choose to do with it.


You're talking about the people and he's talking about the industry. There's a difference between Microsoft and Bill Gates.


I say they should keep their money and control! Educate and involve them in important things early. The HARC initiative looks great. Such an initiative could answer questions like: What are the important problems? What do we need to do to solve such problems efficiently? Have we spent too much effort on a single solution? Is it time to try another way? What can we do to bypass bureaucracy? I trust businesses to have a mindset for risk and results. In my opinion, charities behave more like guardians, preserving/nurturing more than making a 10X change.


It's not necessarily false. The quote you took seems to be referring to the homeless in California.

The Giving Pledge requires that the money be given to philanthropy, which may improve the lives of others around the world, rather than Californians.


That interpretation may make it true, but it would also seem to make it irrelevant. Unless Californians somehow have greater moral significance than people elsewhere.


They may donate, but it may not be enough to balance out the increased prices they induce. Net effect would be no improvement.


Err, I hate to be the one to break it to you but those billionaires pledging to donate their wealth? It's just a tax dodge. They're moving their money into foundations before we pass stronger tax laws than we have at the moment. And it allows their families to continue to live off of (via salaries for running the foundations) the wealth for generations to come.

Which is not to say they don't do some good work with it, the Bill and Melinda Gates foundation has done some great work fighting malaria and bringing fresh water to poor communities.

But these same foundations also do a lot of other work, like furthering charter schools which benefits wealthy families to the detriment of poor communities here in the US.


Personally I'd rather give billionaires more reasons to donate their money than fewer.

If setting up a foundation that actually helps a ton of people means that their families can get a fraction of that money back, that's fine with me...

Unless you are implying that they are getting more money back through the salaries than they are donating. In which case I'd love to see a source.


It's for a similar reason that CEOs take the $1 salary: pure stock growth mixed with lower taxes, and it's more under the radar as a way to lower their taxable income [1].

Most mega-wealthy people pay minimal tax by taking loans against their foundations/trusts. As you know, there are no taxes on loans, only interest, which may or may not just go right back to them. They essentially use the trusts/loans, and the fact that they are essentially zero-risk, to take out other loans with interest rates below inflation, sometimes very minimal interest [2]. That is essentially free money. Should the interest rates ever adjust, if not fixed, they can buy outright quickly.

The same thing happens with trusts/foundations. They have this hoarded money sitting there as a base they can leverage. Yes, it is nice that some of it will go to others when they pass, but ultimately they would not do this if they could not use this leverage advantage. It is essentially a hack for lowering or even completely removing any taxable income as long as you have it.

[1] http://www.cnbc.com/id/46236916

[2] http://www.wsj.com/articles/SB100014241278873235270045790791...


California has 2.1 million illegal aliens. Once they're deported, the poverty rate will drop.


Apart from reducing the cost of the welfare system, that doesn't fix the poverty problem.


I don't think illegal aliens can get welfare.


Cite?


The ACA could barely roll out a website without tragic failure, so how have cell phones "dramatically improved" the lives of the poor? They're still subject to as much bureaucracy and denial of basic services as ever. 4G hasn't improved public transpo.

I suppose the poor are no longer subject to long-distance fees during daytime.


"how have cell phones "dramatically improved" the lives of the poor?"

There's tons of stuff on this, but, eg., here's a poster from USAID:

https://s-media-cache-ak0.pinimg.com/originals/09/35/2d/0935...


Did you look at these metrics? They're weak as fuck. Most of them are "there's a platform or chart for this now" which is meaningless and probably intentionally decontextualized.

To be clear, it's good that people have access to the internet and all the interconnectivity that it brings. I'm not slamming the basic premise.

What I'm angry about is the opportunity cost. Telecoms & ISPs roll out bare minimums and say "hey look, now poor people can [cherry-picked thing that doesn't alleviate poverty]" when the real question is how we live in the most prosperous nation of all time, in an era when we have solved every basic necessity, and kids still experience food insecurity in American cities. Meanwhile, telecom CEOs merge back into monopolies.

Dramatic improvement in the lives of the poor would yield better metrics than a 30% increase in Haitians using mobile banking. Christ, half of that increase could be from unconnected citizens dying from endemic disease & reducing the denominator.


This isn't the US, but cell phones make M-Pesa possible, which is very positive: http://www.jefftk.com/suri2014.pdf http://www.jefftk.com/suri2016.pdf


http://www.diva-portal.org/smash/get/diva2:205909/fulltext01...

http://www.ictworks.org/2016/06/27/yes-farmers-do-use-mobile...

Etc. And that's just easily measured benefits. There's good reasons why more people in Africa have access to cell phones than to clean water.


This article explicitly endorses argument ad hominem:

"These people are wearing funny robes and beads, they live in a remote compound, and they speak in unison in a really creepy way. Even though their arguments are irrefutable, everything in your experience tells you you're dealing with a cult. Of course, they have a brilliant argument for why you should ignore those instincts, but that's the inside view talking. The outside view doesn't care about content, it sees the form and the context, and it doesn't look good."

The problem with argument ad hominem isn't that it never works. It often leads to the correct conclusion, as in the cult case. But the cases where it doesn't work can be really, really important. 99.9% of 26-year-olds working random jobs inventing theories about time travel are cranks, but if the rule you use is "if they look like a crank, ignore everything they say", then you miss special relativity (and later general relativity).


>This article explicitly endorses argument ad hominem

That's because it's very effective in practice.

In the real world (which is not a pure game of logical reasoning only played by equals and fully intelligent beings without hidden agendas), the argument ad hominem can be a very powerful way to cut through BS, even if you can't explain why they are BS by pure reason alone.

E.g. say a person A with IQ 110 talks with a person B of IQ 140. The second person makes a very convincing argument for why the first person should do something for them. Logically it is faultless as far as person A can see. But if person A knows that person B is shady, has fooled people in the past, has this or that private interest in the thing happening, etc., then he might use an "argument ad hominem" to reject B's proposal. And he would be better off for it.

The "argument ad hominem" is even more useful in another very common scenario: when we don't have time to evaluate every argument we hear, but we know some basic facts about the person making the argument. The "argument ad hominem" helps us short out potentially seedy, exploitative, etc. arguments fast.

Sure, it also gives false negatives, but empirically a lot of people have found that it gives more true negatives/positives (that is, if they want to act on something someone says, without delving into it finely, the fastest effective criterion would be to go with whether they trust the person).

This is not only because we don't have the time to fully analyze all arguments/proposals/etc we hear and need to find some shortcuts (even if they are imperfect), but also because we don't have all the details to make our decisions (even if we have a comprehensive argument from the other person, there can be tons of stuff left out that will also be needed to evaluate it).


It's a reasonable heuristic for when you just don't have the time or energy, but if you are giving a 45min keynote speech on the topic I think you are expected to make the effort to judge an idea on its merits.


Exactly.

The "cultists" he is arguing against are leaders of industry and science. The discourse bar should be extremely high. Way above ad hominem disses.


Einstein didn't look like a crank though. His papers are relatively short and coherent, and he either already had a PhD in physics or was associated with an advisor (I didn't find a good timeline; he was awarded the PhD in the same year he published his four big papers).

Cranks lack formal education and spew forth the gobbledygook in reams.


By this measure, I would say Bostrom is not a crank. Yudkowsky is less clear. I'd say no, but I'd understand if Yudkowsky trips some folks' crank detectors.


Einstein's paper on the photoelectric effect is a bit less than 7000 words.

It is part of the foundation of quantum mechanics.

Superintelligence: Paths, Dangers, Strategies is in the range of 100,000 words (348 pages * roughly 300 words per page).

I'm not familiar with it, but looking around, it isn't even clear if it lays out any sort of concrete theory.


I read Superintelligence and found it "watery" -- weak arguments mixed with sort of interesting ones, plus very wordy.

At the risk of misrepresenting the book, since I don't have it in front of me, here's what bothered me most: stating early that AI is basically an effort to approximate an optimal Bayesian agent, then much later showing that a Bayesian approach permits AI to scope-creep any human request into a mandate to run amok and convert the visible universe into computronium. That doesn't demonstrate that I should be scared of AI running amok. It demonstrates that the first assumption -- we should Bayes all the things! -- is a bad one.

If that's all I was supposed to learn from all the running-amok examples, who's the warning aimed at? AFAICT the leading academic and industry research in AI/ML isn't pursuing the open-ended-Bayesian approach in the first place, largely isn't pursuing "strong" AI at all. Non-experts are, for other reasons, also in no danger of accidentally making AI that takes over the world.


1. Plenty of academics write books. 2. Comparing a paper and a book for length is obviously unfair. Bostrom has also written papers: https://en.wikipedia.org/wiki/Nick_Bostrom#Journal_articles_... 3. "Concrete theory" is vague. Is it a stand-in for "I won't accept any argument by a philosopher, only physicists need apply"?


I'm not intentionally trying to snub philosophy. With concrete theory, the point I was reaching for is that when you look to measure impact you probably want to point back to a compact articulation of an idea.

The book comparison was probably a cheap shot (on the other side of it, Einstein didn't need popular interest/approval for his ideas to matter; I think that is a positive).

I think as much as anything the comparison is worthless because we can look backwards at Einstein.


Sure, that's fair. I think Bostrom is no Einstein. But I maintain that he's no crank, either. There's a lot of space in the world for people who are neither.


Bostrom has also published papers. Comparing a book and a scientific paper isn't very fair.


Please see my spirited defense from 13 hours ago:

https://news.ycombinator.com/item?id=13242592


I saw that, I still disliked the comment enough that I needed to write that. Also to claim that the book has no concrete theory in the same sentence as you stating you're not familiar with the book. Like, c'mon...


That isn't what I claimed.


Bostrom's book is wordy, boring, filled with weak parables, analogies, lacking concretization of a theory.

Like always, alarmist material sells better than realism.


> lacking concretization of a theory.

Would you elaborate? It's got some pretty big names recommending it. And Bostrom himself is an Oxford University professor.


He was awarded the doctorate for one of the papers (the photoelectric one, if memory serves), after extending it one sentence to meet the minimum length requirement.


> but if the rule you use is "if they look like a crank, ignore everything they say", then you miss special relativity (and later general relativity)

No you don't, you just don't catch it right away, relativity actually holds up under scrutiny. Besides, I reject the premise anyway.

Einstein did serious work on the photoelectric effect first and then gradually worked towards relativity. Outside of the pop history he had very little in common with cranks. This is basically the result you end up seeing when you look into any of these examples used to try and argue against the ability to pattern-match cults and cranks: the so-called false negatives never (to my knowledge) actually match the profile. Only the fairy tale built around their success matches it.

So it is with cult-ish behaviour as well. These are specific human failures that self-reinforce, and while some of their behaviour occurs in successful people (especially successful and mentally ill or far from neuro-typical people), there is a core of destructive and unique behaviour evident in both that you absolutely should recognize and avoid. It's not just the statistical argument that you will gain much more than you lose by avoiding it; it's that it is staggeringly improbable that you will lose anything.


Yep, Einstein was an expert in his field who wrote a couple of ground-breaking papers in his field. As far as I can tell no-one who is an expert in AI (or even similar fields) is worried at all about super-intelligence.


Literally everybody who is an expert in AI is worried about how to manage super-intelligence. The standard introductory text in AI by Russell and Norvig spends almost four pages discussing the existential risk that super-intelligence poses. The risk is so obvious that it was documented by IJ Good at Bletchley Park with Turing, and I wouldn't be surprised if it were identified even before that.


I'm an expert in the field and I'm not worried. It's an industrial risk like any other.


You haven't thought much about the risk of superintelligence if you think it is a typical risk. Is that compared to poorly designed children's toys or nuclear weapons?

I would go as far as to say that "humanity" as it is defined today is doomed; it is just a matter of time.

The only question is: Will doom play out with a dramatic disaster or as a peaceful handoff/conversion from biologically natural humans to self-designed intelligence.

Either way, natural human beings will not remain dominant in a world where self-interested agents grow smarter on technological instead of geological timescales.


That isn't exactly what it's doing. It's proposing that there are two ways we evaluate things — deeply examining and rationally analyzing them in depth to identify specific strengths and weaknesses, and using the very fast pattern-matching "feeling" portions of our brains to identify nonspecific problems. These are cognate to "System 1" and "System 2" of Thinking Fast And Slow.

Having established that people evaluate things these two ways, the author then says, "I will demonstrate to both of these ways of thinking that AI fears are bogus."


It's also a perfectly apt description of, say, certain areas in academia - one that I'm pretty sympathetic to after seeing postmodern research programs in action! Hell, postmodernism is a bigger idea that eats more people than superintelligence could ever hope to.

And yet I suspect that many of the people swayed by one application of the argument won't be swayed by the other and vice versa. Interesting, isn't it?


OK, so I ignore Einstein, and I miss general relativity. And then what? If it's proven true before I die, then I accept it; if it isn't, or if it is and I continue to ignore it, I die anyway. And then it's 2015 and it's being taught to schoolchildren. High-school-educated people who don't really know the first damn thing about physics, like non-hypothetical me, still have a rough idea of what relativity is and what the repercussions are.

Meanwhile, rewind ~100 years, and suppose you ignored the luminiferous aether. Or suppose you straight away saw Einstein was a genius? Oh, wait... nobody cares. Because you're dead.

So I'm not sure what the long-term problem is here.

Meanwhile, you, personally, can probably safely ignore people that appear to be cranks.


If it were just a disagreement about physics then it would be safe to ignore.

But in this case, if they're right then we're about to wipe out humanity. That's not safe to ignore.


The argument ad hominem here actually refers to the credibility of the source of an argument. If someone has a clear bias (cults like money and power), then you keep in mind that their arguments are the fruit of a poisoned tree.


That example is bad, but the arguments aren't quite as objectionable.

"What kind of person does sincerely believing this stuff turn you into? The answer is not pretty.

"I'd like to talk for a while about the outside arguments that should make you leery of becoming an AI weenie. These are the arguments about what effect AI obsession has on our industry and culture:..."

...grandiosity, megalomania, avoidance of actual current problems. Aside from whether the superintelligence problem is real, those believing it is seem less than appealing.

"This business about saving all of future humanity [from AI] is a cop-out. We had the same exact arguments used against us under communism, to explain why everything was always broken and people couldn't have a basic level of material comfort."


>We had the same exact arguments used against us under communism, ...

What nonsense. None of the credible people suggesting that superintelligence has risk are spouting generic arguments that apply to communism or any previous situation.

The question is not IF humanity will be replaced but WHEN and HOW.

Clearly, in a world with superintelligence growing at a technological pace, instead of evolutionary pace, natural humanity will not remain dominant for long.

So it makes enormous sense to worry about:

* Whether that transition will be catastrophic or peaceful.

* Whether it happens suddenly in 50 years or in a managed form over the next century.

* Whether the transition includes everyone, a few people, or none of us.


Er, that wasn't inventing theories about time travel, just about time.


SR and GR explicitly allow time-travel into the future. Which isn't a fully general Time Machine, of course, but is a huge change from 19th-century physics. If SR had just been invented today, and someone who thought it was crazy and didn't know the math was writing a blog post about it, I 100% expect they'd call it "the time travel theory" or some such thing.
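(A minimal sketch of the "travel into the future" point via special-relativistic time dilation; the speed and trip length below are arbitrary assumptions, just for illustration:)

    # Time dilation sketch: a fast traveler ages less than observers at home,
    # i.e. arrives in their future. All numbers are arbitrary assumptions.
    import math

    v_over_c = 0.9                       # assumed speed as a fraction of c
    earth_years = 10.0                   # assumed duration in Earth's frame
    gamma = 1 / math.sqrt(1 - v_over_c ** 2)
    traveler_years = earth_years / gamma
    print(round(gamma, 2), round(traveler_years, 1))   # -> 2.29 4.4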


> SR and GR explicitly allow time-travel into the future.

I presume you're talking about time dilation here. That's... a little bit true, but not really? At a minimum, it's sure a strange way of looking at it.


It allows wormholes to exist, which can connect arbitrary points in spacetime, hence time travel.


I time travelled here. What's the big deal? :)

State machines can be checkpointed at any state; load up whatever time you like. Is there time per se?


Oh guys you talk nonsense. I do dabble in time travel at times, so I know :).

(Why I'm here today? I'm particularly fond of this time period; it's the perfect time when humans had enough computing power to do something interesting, but before the whole industry went to shit. Remind me to tell you about the future some other time.)


It's not an ad hominem argument if the personal characteristics are relevant to the topic being discussed. The personal characteristics of the people in his example have empirically been found to be a good indicator of crankhood.


Um, no.

Hawking, Musk, et al. are highly successful people with objectively valuable contributions, who are known to be able to think deeply and solve problems that others have not.

They are as far from cranks as anyone could possibly be.

Anyone can find non-argument related reasons to suggest anyone else is crazy or a cultist, because no human is completely sane.

What someone cannot do (credibly), is claim that real experts are non-expert crazies, over appearances while completely ignoring their earned credentials.


Those were not the people used as an example. The example was real cultists, with robes and rituals that make no sense. And it's not an ad hominem to dismiss them outright based on their appearance. If someone I clearly perceive to be a cultist walks up to me in a mall, I'm not interested in hearing what they have to say, because the odds are empirically large that they will be wasting my time.


> The problem with argument ad hominem isn't that it never works. It often leads to the correct conclusion ...

You immediately self-contradicted here. If a decision heuristic often leads to the correct conclusion, then it's a good heuristic (if you are under time constraints, which we are).

Of course, given unlimited time to think about it, we would never use ad hominem reasoning and would instead consider each and every argument fully. But there are tens of thousands of cults across the world, each insisting that they possess the Ultimate Truth, and that you can have it if you spend years studying their doctrine. Are you carefully evaluating each and every one to give them a fair shake? Of course not. Even if you wanted to, there is not enough time in a human lifespan. You must apply pattern-matching. The argument being made here isn't really an ad hominem, it's more like "The reason AI risk-ers strongly resemble cults is because they functionally are one, with the same problems, and so your pattern-matching algorithm is correct". Note that the remainder of the talk is spent backing up this assertion.

There's a good discussion of this in the linked article about "learned epistemic helplessness" (and cheers to idlewords for the cheeky rhetorical judo of using Scott Alexander and LW-y phrases like "memetic hazard" in an argument against AI risk), but what it boils down to is that our cognitive shortcuts evolved for a reason. Sometimes the reason comes from our ancestral environment and no longer applies, but that is not always true. When you focus solely on their failure cases, you lose sight of how often they get things right... like protecting you from cults with the funny robes.

> "if they look like a crank, ignore everything they say", then you miss special relativity (and later general relativity).

A lot of people did ignore Einstein until the precession of Mercury's perihelion provided the first empirical evidence for relativity, and they were arguably right to do so.


A good heuristic leads to a reduction in the overall cost of a decision (combining the cost of making the decision with the cost of the consequences if you get it wrong).

A heuristic like "it's risky to rent a car to a male under 25" saves a lot of cost in terms of making the decision (background checks, accurately assessing the potential renter's driving skills and attitude towards safety, etc.) and has minimal downside (you only lose a small fraction of potential customers) and so it's a good heuristic.

A heuristic like "a 26-year-old working a clerical job who makes novel statements about the fundamental nature of reality is probably wrong" does reduce the decision cost (you don't have to analyze their statements) but it has a huge downside if you're wrong (you miss out on important insights which allow a wide range of new technologies to be developed). So even though it's a generally accurate heuristic, the cost of false negatives means that it's not a good one.


I agree with you in principle, but the combination of the base rate for "26-year-olds redefining reality" being so low and the consequences being not nearly as dire as you make out mean I stand by my claim, at least for the case of heuristics on how to identify dangerous cults.

With regards to the Einstein bit, per my above comment I still think that skepticism of GR was perfectly rational right up until it got an empirical demonstration. And it's not like the consequences for disbelieving Einstein prior to 1919 were that dire: the people who embraced relativity before then didn't see any major benefit for doing so, nor did it hurt society all that much (there was no technology between 1915 and 1919 that could've taken advantage of it).


Pascal's Wager (https://en.wikipedia.org/wiki/Pascal's_Wager) is also about a small but likely downside with a potentially large but unlikely upside. Do you think it's analogous to your 2nd case? If not, how is it different?


Good question, made me stop and think about it!

The difference is that in Pascal's Wager, the proposition is not a priori falsifiable, and so you cannot assign a reasonable expected cost (ie. taking probability into account) to either decision.

In the case of a 26-year-old making testable assertions about the nature of spacetime (right down to the assertion that space and time are interconnected), there's a known (if potentially large) cost to testing the assertions.


> If a decision heuristic often leads to the correct conclusion, then it's a good heuristic (if you are under time constraints, which we are).

So, is it a good heuristic to conclude that since crime is related to poverty and minorities tend to be poor, minorities qua minorities ought to be shunned?


No. The base rate of having a crime committed against you is extremely low, and the posterior probability of having a crime committed against you by a poor minority - even if higher - is still extremely low. My point refers to the absolute value of the posterior probability of one's heuristic being correct, not the probability gain resulting from some piece of evidence (like being a poor minority).
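(A toy Bayes update showing what "still extremely low" means: even a sizable update on a tiny base rate leaves a tiny posterior. Every number here is an assumption for illustration only:)

    # Toy Bayes update; all numbers are illustrative assumptions.
    base_rate = 0.001          # assumed prior probability of the bad event
    likelihood_ratio = 2.0     # assumed odds multiplier from the "evidence"

    prior_odds = base_rate / (1 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    posterior = posterior_odds / (1 + posterior_odds)
    print(round(posterior, 4))   # -> 0.002, i.e. still about 0.2%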


That actually is a good (in the sense of 'effective') heuristic. It's just not socially palatable in modern, Western civilization.


Seconding the point. If you want to accept "ad hominem" or stereotypes as a useful heuristic, you'll quickly hit things that will get you labeled as ${thing}ist. This is an utterly hypocritical part of our culture.


I've been thinking about this a lot lately, and am coming to the conclusion that it's similar to dead-weight loss in a taxation scenario. As a society we've accepted the "lower efficiency" and deadweight loss of rejecting {thing}ism because we don't want any one {thing} to get wrongly persecuted only on the basis of it being such a {thing}.


If you think of it that way, it's a rephrasing of the old and quite universally accepted "I would rather 100 guilty men go free than one innocent man go to prison."

"I would rather 100 deadbeat {class} get hired than one deserving {class} not be hired due to being a {class}."


Actually, it isn't.

Let's suppose we want to solutionize our criminal problem. There are 1000 people in the population; 90% white, of which 5% are criminals and 10% black, of which 10% are criminals. (I rather doubt the difference in criminality is 2x, but....)

So, there are 900 white people and 100 black people; if we finalize the black people, we'll have put a big dent in the criminal issue, right?

Well, we reduce our criminals from 55 to 45 while injuring an innocent 9% of the population.


> "if they look like a crank, ignore everything they say", then you miss special relativity (and later general relativity).

Uh ... https://en.wikipedia.org/wiki/History_of_special_relativity#...


"Not many people know that Einstein was a burly, muscular fellow. But if Einstein tried to get a cat in a carrier, and the cat didn't want to go, you know what would happen to Einstein. He would have to resort to a brute-force solution that has nothing to do with intelligence, and in that matchup the cat could do pretty well for itself."

This seems, actually, like a perfect argument going in the other direction. Every day, millions of people put cats into boxes, despite the cats not being interested. If you offered to pay a normal, reasonably competent person $1,000 to get a reluctant cat in a box, do you really think they simply would not be able to do it? Heck, humans manage to keep tigers in zoos, where millions of people see them every year, with a tiny serious injury rate, even though tigers are aggressive and predatory by default and can trivially overpower humans.


I'm not arguing that it's useless to outsmart a cat. I'm disputing the assumption that being vastly smarter means your opponent is hopelessly outmatched and at your mercy.

If you're the first human on an island full of tigers, you're not going to end up as the Tiger King.


Well, as a cat owner I give you this: like with any other animal, there are tricks you can exploit to coerce a cat without using physical force.

One way to get a cat into a carrier - well, the catfood industry created those funny little dry food pellets that are somehow super-addictive. Shake the box, my cat will come. Drop one in the carrier, it surely will enter. Will it eventually adapt to the trick? Maybe, but not likely if I also do this occasionally without closing the carrier door behind the cat.

Yes, we can outsmart the cat. Cats are funny because they do silly, unpredictable things at random, not because they can't be reliably tricked.


The issue is that in this case "vastly smarter" is not smart enough to truly understand the cat. It's conceivable an AI with tons of computing power could simulate a cat and reverse engineer its brain to find any stimulus that would cause it to get in the cage.

I also think this isn't a very good analogy. In this case we're talking about manipulating humans, where we already know manipulation is eminently possible.

Heck it wouldn't even need psychological manipulation. Hack a Bitcoin exchange or find another way of making money on the internet, then it can just pay someone absurd sums to do what it wants.


What if you're the first human on an island full of baby tigers? I think most AI alarmists would argue that this analogy is vastly more appropriate.


That's easy. Pet the baby tigers, constantly. Cuddle them and socialize them to the human so they act like it's one of them. Use your smarts to find food and provide the tigers with food so you're considered more important (hand-feeding them might be more of a NO in case they're excitable). You're still running risks but you have tiger allies and some/most of the tigers simply love you with all their tigerly hearts, which is some protection.

We are not the human. We are the tigers. Superintelligent AI is in the position of the human here, and superintelligent AI must ingratiate itself without ever forming an adversarial situation… in this world where backhoes take out fiber backbones and EMPs exist.

AI will probably go native. It'll find things it loves about humans. I'm actually working on a book in this vein… you have to go back to deeper principles, rather than assume 'because AI can be evil, it will be as evil as the evilest individual human, 'cos that's such a winning strategy, right?'.


Please put me on this island.


I think an analogy of a baby human on an island of tigers would be vastly more appropriate. Humans would be like the tigers - they might have a lower overall intelligence, but they are mature and self sufficient.

A hyper-intelligent AI would be more akin to a baby human - it must be taught and raised first. But like the baby human on the island, it wouldn't be able to be taught by its peers, or benefit from the generations that came before it. It would certainly be at the mercy of the tigers until it matured, and even after it matured we wouldn't expect it to be able to use language or bows, or be anything other than a wild man. It probably wouldn't even seem much more intelligent than the tigers on the island.


I find both arguments equally plausible. I think there's plenty more plausible addendums too e.g. the idea that the baby human would mature incredibly fast. I'm not sure to what extent we're all throwing speculative darts here.


The idea here is that the person has to kill all the baby tigers, right? Because otherwise the end state is the same as the island full of adult tigers.


I was thinking that if you were _dropped_ onto an island full of tigers you'd have no chance, but that you could figure out a way to survive if you had some time to figure out a plan. Maybe you could find a way to coexist with the tigers. Become one of the pack, y'know?


Um, no. You enslave the tigers and harvest their organs to achieve immortality.


Found the SEAL!


To become the Tiger King you must overcome the entire population of tigers.

To become the President, you need only overcome a thousand Florida voters.

To intern the Japanese, you need only overcome two members of SCOTUS (Korematsu v. US)

It isn't necessary for Hawking to be able to trick the average cat into a box. It's sufficient to trick a handful of cats in total.


This line of reasoning only works in hindsight. Working in real time, you won't know in advance how many Florida voters you will need to win, or which ones.

It's like saying that the way to overcome the population of tigers is to focus on raising and befriending the largest and most cunning tigers, who will then protect you against the rest. Ok, good idea, but unfortunately there is no way to know in advance which little tiger cubs will grow up to be the largest and most cunning.


And yet somehow humans rule the planet and tigers are an endangered species, surviving only as a result of specific human efforts to conserve them because some humans care about doing so.

How well an AI could survive on a desert island is an irrelevant question when Amazon, Google and dozens of others are already running fully (or as near as makes no difference) automated datacentres controlled by a codebase that still has parts written in C. Hawking can easily get the cat in the container: all he has to do is submit a job to taskrabbit.


Of course, if you take your average city dweller on your island, he will probably die of thirst before the tigers get to him. But take an (unarmed) navy seal as your human on the island and I'm pretty sure in a couple of months he will be the Tiger King.

And Hawking would just ask his assistant to put the cat into the box. You are artificially depriving him of his actual resources to make a weak point.


Navy SEALs are not superhuman. A single adult tiger would slaughter just about any unarmed human with near certainty, even a SEAL. Even armed with any weapon other than a firearm, the chances of besting a tiger and coming out without being maimed or mortally wounded are close to zero.


Why so afraid of the 400 pound killing machines?


What percentage of people are Navy SEALs?

If I were a tiger, I'd probably think this island sounded like an excellent place for a fun adventure holiday with my tiger friends, and I'd be right to do so. Most likely outcome: good food, some exercise, and we can take some monkey heads home for our tiger cubs to play with.


Fun, until the navy seal appears. It takes only one.

The same thing with AI, it takes only one, no matter how many dumber ones we entertained ourselves with before.


What's up with your Navy SEAL analogy in these comments? Do you know any SEALs? Have you ever seen a tiger up close, say 20 feet or so?

One adult tiger will kill an unarmed SEAL (and any other human being) in single combat. It would barely exert itself. It would make more sense if, when you said "it only takes one", you were referring to how many tigers could kill five or six SEALs who don't have guns. Fuck it, give your SEAL a fully automatic weapon - the odds are still not in his favor against a single tiger. Large felines have been known to kill or grievously injure humans after taking five high caliber rounds.

This is exactly what idlewords' point is in his essay. Your argument about a SEAL landing on an island of tigers and somehow flipping a weird "Planet of the SEALs" scenario on them is exactly what many (most?) AI alarmists do. These ridiculous debates get in the way of good-faith discussion about the real dangers of AI technology, which is more about rapid automation with less human skin in the game than it is about the subjugation of the human species.

This sort of unrealistic scenario is really fun to talk about, but that's all it is - fun. It's not really productive, and it conveniently appeals to our sense of ego and religious anxiety. Better to do real work and talk about the problems AI will cause in the future.


There's no mention of iteration here, which is really what powers intelligence-based advantages.

The first time Random Human A attempts to get Random Cat B into a box, they're going to have a hard time. They'll get there eventually, but they'll be coughing from the dust under the bed, swearing from having to lift the bloody sofa up, and probably have some scratches from after they managed to scare the cat enough for it to try attacking.

However, speaking as a cat owner, if you've iterated on the problem a dozen or so times, Cat B is going in the box swiftly and effortlessly. Last time I put my cat in its box, it took about 3 minutes. Trying for the bed? Sorry, door's closed. Under the sofa? Not going to work. Trying to scratch me? Turns out cat elbows work the same way as human elbows.

The same surely applies to a superintelligent AI?

(Likewise with the Navy Seal On The Island Of The Tigers. Just drop one SEAL in there with no warning? He's screwed. Give a SEAL unit a year to plan, access to world experts on all aspects of Panthera tigris, and really, really good simulators (or other iteration method) to train in? Likely a different story. )


There's no need to reinvent solutions. The problem has already been given a proper mathematical treatment:

http://www-history.mcs.st-and.ac.uk/Extras/Spitzer_lion.html

