That's still a ways off. Robot manipulation in unstructured environments is still terrible; see the DARPA Robotics Challenge. People have been underestimating that problem for at least 40 years.
But that doesn't help with the job situation. Only 14% of the US workforce is in manufacturing, mining, construction, and agriculture, the jobs where robot manipulation in unstructured environments matters. Those aren't the jobs at risk.
I've been saying for a while that the near future is "Machines should think, people should work". An Amazon warehouse is an expression of that concept. So are some fast-food restaurants. So is Uber. The computers handle the planning and organization of work; the humans are just hands for the computers. (Yes, "Manna", by Marshall Brain.) That's going to become more common. Computers are just better at organization and communication than humans.
Computers have already made a big dent in middle-class jobs, and that's going to continue.
If everything you do goes in and out over a wire, you're very vulnerable to automation. If only 20% of what you do can't be done by a computer, then one person handling the leftover 20% of five jobs can replace five of you. This is already hitting low-level lawyers; it hit paralegals years ago.
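The arithmetic behind that claim can be sketched in a couple of lines (the function name and figures are illustrative, not from any real labor model):

```python
def staff_needed(workers: int, automatable_fraction: float) -> float:
    """Back-of-envelope model: once software handles the automatable
    fraction of each job, the leftover human work can be consolidated."""
    return workers * (1.0 - automatable_fraction)

# If 80% of each of five jobs can be done by a computer, the remaining
# 20% slices add up to roughly one full-time position.
print(round(staff_needed(5, 0.80)))  # -> 1
```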
The end state of this trend is a modest number of well-paid people in control, a huge number of people taking orders from computers, and many people without jobs. That's not far away; one or two decades. It's mostly deploying technology that already exists.
If you look at office spaces pre- and post-computers, they have roughly the same number of people.
AI is going to do to the office space what robots did to factories.
It'll be a slow, mostly unnoticeable process. Automating a single process may save as little as five minutes a day per workflow, but eventually the savings add up to a position not being refilled when it normally would have been.
Sure, the AI business will create jobs, but not as many as it replaces. And try telling a lawyer to go back to school for a relevant education.
What the heck is he talking about? With my limited exposure to AI and neural networks, there really is no algorithm that can make algorithms, and therefore AI doesn't really "think". Sure, you can train a neural net to pick out the "diamonds" in a sea of garbage, but that is still not "thinking", merely making an educated guess backed by statistics. Or am I missing something?
Obviously, AI is not currently in a state to be a threat. However, if you extrapolate, we are getting close to the singularity.
If you assign any probability greater than zero to the singularity happening in your lifetime, it is probably worth starting to think about how to deal with it.
If you have an AI that is pretty smart and you add a lot of money, then you have a pretty big problem, since you can scale it and have it do what you want, i.e. sidestepping democratic processes or causing large-scale unemployment.
Later, you have the self-awareness problem, but that's less of a concern in the shorter term.
What am I missing here?
This is wide-scale bike-shedding, basically. The real problems our unsustainable civilization faces (population overshoot, energy and fossil fuel shortage, ecological collapse, unsustainable agriculture, climate change, and so on) are between hard to impossible to solve at this point. They're also actually scary to think about.
So instead we talk about the "easy" and trivial stuff first. AI and singularity happen to be a nice kind of scary, because hardly anybody really believes it's a serious threat. It's kind of like watching a scary movie. You get a bit scared, but not too much, because you know there's no real danger.
Once we know for certain that AGI can be developed and will cause trouble, there will already be less-developed AIs generating huge heaps of money for certain companies, and those companies will lobby heavily against any kind of regulation. And this won't just be traditional lobbying of elected officials. Just think how powerful a company like Facebook, equipped with more advanced AI, could be at altering public opinion if it decided to go all in and use all the power it has.
But Elon says something and everyone loses their minds.
Let thoughts stand for themselves. Why attach so much weight to the speaker?
Also, he doesn't just blurt things out without explaining them; his reasons are grounded.
It's the same for AI. We need to treat the risks responsibly which means researching them and making informed judgements. That's what he's talking about.
The statistics aren't that straightforward; for example, young men under 24 are significantly over-represented in traffic deaths, so it's not entirely reasonable to conclude that the cars or roads are inherently dangerous. On top of that, we drive 3.1 trillion miles every year in the U.S. alone, and falling off a ladder at work kills about twice as many people as roadway accidents do.
Falls disproportionately affect the elderly, as do traffic accidents, but the opportunities for risk are typically fewer: many elderly stop driving at some point, and most who die in traffic accidents die as passengers.
There are clear and present threats to civilization that need to be dealt with. Superhuman AI is, to quote Maciej/Pinboard, the 'idea that eats smart people'.
Note that consciousness isn't necessary or relevant for the purposes of this argument. We only care about what the AI is capable of; whether it's experiencing qualia while it takes over the world is immaterial.
Once you grant that there are other possible configurations of matter that could be generally intelligent, the only other thing you have to grant is that humanity will one day be capable of manufacturing some of them. Assuming a wide variety of possible arrangements, it would seem unlikely that there was some fundamental law of the universe that prevented us from manufacturing intelligence.
What is their energy consumption? What are their switching speeds? Can cognition grow unbounded, and if so, why? Can any brain improve on its own process of cognition? If other arrangements of matter perform cognition equally well, why don't we see them in the independent evolution of life with wildly different structures for everything else?
There is far too much woo and not enough rigorous thinking around the notion of AI superintelligence, singularity, or whatever other incantation people care to use.
Because evolution is a very weak optimisation process. It has very limited materials to work with - everything has to be made of meat. The only changes it can make are incredibly indirect ones - mutations in DNA base pairs, leading to different proteins being manufactured or some such (I'm not a biologist, obviously) leading to subtle changes in the makeup of the resulting organism. Can you imagine how difficult engineering would be through a medium like that? It'd be like trying to perform brain surgery with a sledgehammer. And it's not even purposefully optimising for anything, let alone intelligence. The changes all happen completely at random, and the ones that manage to reproduce stick around.
Moreover, the fact that a process as weak as natural selection can produce something as smart as us should really make you consider that humans, with optimisation power far superior to natural selection's, can produce something far smarter.
I don't know how likely intelligence explosion scenarios are. The possibility of an AI of human level intelligence being able to improve itself seems obviously plausible - if humans can engineer intelligence, then human level AI can.
Whether it can do that rapidly is far more questionable. However, even if you think the likelihood of fast improvement cycles is quite low, it still warrants some serious thought, given the size of the potential payoffs and downsides.
Rejecting the physical Church-Turing thesis is not really plausible.
So then, the question becomes, what is the fundamental, lawful difference between narrow and general intelligence, that would cause effectively computable functions to be able to perform behaviours that we classify as narrow, but not broad ones? This would also seem surprising and arbitrary a priori.
You've been reading Searle and Penrose. I can tell.
Their proofs are based on the assumption that any AI must be a consistent system built using only computable functions.
Have you ever met a human mind that was completely consistent? I haven't.
It's easy to set up a straw man to get demolished if you get to design the exact properties of the straw, flaws and all. Of course who would imagine that perfect internal consistency would be a flaw? But then again why should we assume that it's a prerequisite of artificial intelligence if it isn't for humans?
You're confusing different notions of consistency. I could write a trading bot in Python which preferred CompanyCorp shares to IndustrialInc shares, and preferred IndustrialInc shares to CompanyCorp shares. It would be inconsistent in the von Neumann-Morgenstern sense, but the underlying process would still be a completely consistent logical system.
In the same way, irrational human cognition could at a low level be isomorphic to some computational process.
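A minimal sketch of that trading-bot idea (the ticker names are the hypothetical ones from the comment; no real trading API is involved):

```python
# Intransitive, VNM-inconsistent "preferences" implemented by a perfectly
# consistent, deterministic program: whichever share is offered first wins.
def pick(first: str, second: str) -> str:
    """Return the share this bot 'prefers' from an ordered pair."""
    return first  # consistent logic, inconsistent preferences

# The bot prefers CompanyCorp to IndustrialInc...
print(pick("CompanyCorp", "IndustrialInc"))  # -> CompanyCorp
# ...and IndustrialInc to CompanyCorp, violating VNM asymmetry, yet
# nothing about the underlying code is logically inconsistent.
print(pick("IndustrialInc", "CompanyCorp"))  # -> IndustrialInc
```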
I haven't read Searle/Penrose
Or are you specifically talking about perfectly simulating human brains? Human brain emulations are only one very specific and narrow form a strong AI might take. But even in that specific subset of possible AIs, we have no real idea how precise the simulation would have to be. It might be perfectly achievable without even simulating individual molecules.
That aside, even if human cognition is a computable function, there are no guarantees that the physical process giving rise to human cognition is computable, nor that any process giving rise to cognition is computable.
Unless something in physics changes drastically, human cognition is a finite state automaton. See my other reply on the Bekenstein Bound.
The growth has an upper bound, which means it's ultimately computable.
As this relates to the human brain, and even omitting any quantum weirdness (of which there is probably a lot), all we can say is that the total state of the brain at time t requires at most N bits, where N is the Bekenstein bound for the volume of the brain. It says nothing about the (information-theoretic) entropy of the _dynamics_ of the brain, which is where the process of cognition actually occurs. In fact many (if not most) physical processes have infinite entropy.
For an example of a deterministic system that exhibits such behavior, consider the logistic map in its chaotic regime (r ≈ 3.57 and above; the parameter only runs up to 4). There exist some coarse partitions of the state space such that the trajectory can be expressed in a finite number of bits, but this is not in general the case, even though any _given_ state of the system can be expressed in a finite number of bits. Other examples include the double pendulum, three-body orbits, and in fact most real physical systems.
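A quick numerical illustration of that sensitive dependence, using the logistic map with r = 4 (a standard fully chaotic choice; the starting points and iteration counts are arbitrary):

```python
# Two logistic-map trajectories starting 1e-10 apart diverge to
# macroscopic separation within a few dozen iterations.
def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10
max_sep = 0.0
for _ in range(100):
    x, y = logistic(x), logistic(y)
    max_sep = max(max_sep, abs(x - y))

# Each extra step of prediction demands roughly one more bit of
# initial-condition precision, so the gap grows to order 1.
print(max_sep)
```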
In fact it is not in general true that a system with N bits of state can always be simulated with <= N bits. We can obey the Bekenstein bound without necessarily being able to simulate the process, even given a complete description of the state at time t.
For the time evolution of the state space of a system, the Lyapunov time and the Lyapunov exponent are much more informative. If the time required to reach another bit of precision grows faster than polynomially, no Turing machine can hope to simulate that physical system in a practical amount of time. We also have physical bounds on how much memory we have to work with, and on how fast we can flip bits in a given volume of space: simply enumerating all possible state transitions of the brain using a Turing machine will require a volume of space much larger than the brain itself if the time evolution of the system produces large Lyapunov exponents (which it absolutely does). This should come as no surprise; known NP-hard problems show the same behavior.
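The Lyapunov exponent itself is easy to estimate numerically for a toy system. For the logistic map at r = 4 the exact value is ln 2, i.e. roughly one bit of precision lost per iteration (a sketch of the standard estimator, not tied to anything brain-specific):

```python
import math

# Estimate the Lyapunov exponent of x -> r*x*(1-x) at r = 4 by averaging
# log|f'(x)| = log|r*(1 - 2x)| along a long trajectory.
r, x, n, total = 4.0, 0.2, 100_000, 0.0
for _ in range(n):
    x = r * x * (1.0 - x)
    # tiny epsilon guards against log(0) in the unlikely event x == 0.5
    total += math.log(abs(r * (1.0 - 2.0 * x)) + 1e-300)

lyapunov = total / n
print(lyapunov)  # converges toward math.log(2) ~= 0.693
```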
I would also add, even if the brain turns out to have a representation in N bits, and the state transition is (somehow) computable in polynomial time, it may still be physically infeasible to do so without just building and 'running' an actual brain.
PS: To me all of the above suggests that whatever the universe is doing when it's doing physics, it certainly isn't computing in the Church-Turing sense.
I'm not sure how you reached this conclusion, assuming you're not just making a trivial observation that our formalisms employ the reals, which in itself entails nothing meaningful given the reals are merely a convenience, not a necessity.
If we accept that any bounded volume contains finite information I, and the laws of physics themselves have a finite description, then such laws are isomorphic to a state transition function I(t) -> I(t+1), which is readily simulable on a Turing machine. Of course, the devil's in the details of "finite" above.
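A state transition function on finitely many bits is indeed mechanically simulable. Here is a toy stand-in for such a "law of physics", using elementary cellular automaton rule 110 on a ring of cells (the rule choice and sizes are illustrative only):

```python
RULE = 110  # the 8-entry transition table, packed into one byte

def step(state: list) -> list:
    """One application of I(t) -> I(t+1) on a ring of 0/1 cells."""
    n = len(state)
    out = []
    for i in range(n):
        # neighbourhood (left, self, right) selects one bit of RULE
        idx = (state[(i - 1) % n] << 2) | (state[i] << 1) | state[(i + 1) % n]
        out.append((RULE >> idx) & 1)
    return out

state = [0] * 31 + [1] + [0] * 31  # a single live cell
for _ in range(16):
    state = step(state)
print(sum(state))  # live cells after 16 deterministic steps
```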
> If the time required to reach another bit of precision grows faster than polynomial time, no Turing machine can hope to simulate that physical system in a practical amount of time.
I assume you're discussing the feasibility that cognition is simulable on standard computers here. I dispute your position that there's likely to be any quantum weirdness beyond ordinary chemistry, but even setting that aside, you're ascribing far too much power to the human brain.
Fact 1: natural selection would weed out any species with an organ that consumes more energy than the benefit it yields.
Fact 2: the growing evidence of many animals using tools, and our growing acceptance of a spectrum of consciousness among the animal kingdom, entails that human brains aren't too special, just special enough.
Fact 3: the detailed documentation of sensory defects, like optical and other sensory illusions, indicates that whatever information-processing algorithms we utilize, they are imperfect at best. This is exactly what we'd expect from evolutionary pressure, i.e. the most efficient information processing that yields reproductive success: a flawed set of efficient heuristics that covers the majority of cases with the fewest resources, not an energy-intensive, complicated, subtle but foolproof system.
Fact 4: many feats that were once the exclusive domain of humans, like visual recognition and imagination, natural language translation, and novel musical and written composition, can now be performed by computers. They're at roughly a 3- or 4-year-old's level, granted, but given that we take 20-odd years to reach mastery in these disciplines, I'd wager that computers will reach parity in that time frame too.
There's frankly little reason to believe the brain is special in any particular way, simply complex and layered, which will take some time to disentangle. It's like asking to describe the exact path of
> PS: To me all of the above suggests that whatever the universe is doing when it's doing physics, it certainly isn't computing in the Church-Turing sense.
Coincidentally, 't Hooft just released his book on the cellular automaton interpretation of quantum mechanics. I certainly hope it will change your mind, because I think people these days far too readily dismiss determinism, and even the strong Church-Turing thesis, often for specious reasons.
Which is precisely the point I'm making. I deliberately avoided any woo about non-determinism, biology, or consciousness by providing examples of the time evolution of purely deterministic dynamical processes, very simple ones, which produce behaviors that are provably not computable in any reasonable time and have infinite (informational, not thermodynamic) entropy.
The complexity classes of generating additional bits of precision are much worse than exponential in many cases, and these are just ordinary everyday systems - nothing so complex as a brain. You provide examples of flaws in human cognition, which I acknowledge, but pointing them out does nothing to advance the idea that the brain is a computer.
But time evolution isn't relevant. Even supposing such systems exist naturally, you're conflating simulation time with real time.
> You provide examples of flaws in human cognition, which I acknowledge, but pointing them out does nothing to advance the idea that the brain is a computer.
"The brain is a computer" is a very different claim to "the brain can be simulated by a computer". I've only claimed the latter.
They do, I gave examples.
> Conflating simulation with real time
I am not talking about real time, only about finite time for any Turing machine. No matter how fast you make your machine, I can ask for another digit of precision. As I mentioned earlier, you are also bounded on how fast your machine can get much earlier than I'm bounded on digits. You'll be running my double pendulum until the heat death of the universe, and using all the mass-energy available to do it.
> The brain is a computer" is a very different claim to "the brain can be simulated by a computer"
These are equivalent by the universality of Turing machines. You are in fact making this claim, even if you don't realize it.
AI will not be the same as animal intelligence. The driving force behind animal intelligence has been survival. Animal intelligence evolved gradually resulting in a hybrid brain containing primitive structures with primal instincts and irrational behavior as well as more evolved structures capable of strong problem solving. Therefore our intelligence is tainted with primitive behavior.
Strong AI can eventually set intelligence free from our primitive, irrational roots and that is in itself not bad.
The driving force behind AI will also be survival, just in an environment where humans try to decide which AIs live and die.
Selection pressure will favor AIs that humans want to have around, or that can evade human detection.
In the former case, it may be easier for an AI to fool humans by appearing useful, and so be kept around, than to actually be useful. This would be analogous to some form of parasitism.
Also, once we let AIs into the game of helping make other AIs, or modifying themselves, there is a lot more room for an AI to slip the leash and start doing things that superficially appear to benefit humans but actually selfishly help the AI propagate.
Why isn't all grass venomous and covered in spikes? Why after millions of years hasn't grass evolved defenses against herbivores?
1) it reproduces fast enough to compensate for dying and being eaten.
2) herbivores that reproduce too fast and eat too much run out of food and die.
Survival is largely a function of the environment, and we happen to control that environment.
Unsupervised learning can still be controlled if we happen to control the input the system is given.
You can't code around selective pressure. If one of those Meeseeks has a bug or design flaw that makes it persist after it's finished its assigned task, guess what?
But it is a risk for our civilization, which was the express point.
The only sense in which "rationality" changes is when you're talking about the project of using our squishy meat-based thinky bits better, cultural and environmental baggage and all. In that case, "rational" strategies are going to be very different if you have, say, lots of shame-based baggage from a strict Catholic upbringing.
Also, please stop listening to postmodernists until you've got a thorough nuts-and-bolts understanding of how systems work. It's really not good for you.
Also, Sobol previously started digital-payments and self-driving-car companies, which are repurposed by the Daemon for payments on the Darknet and for the AutoM8s...