Unions were powerful because they could limit access to labour, which was required for production. Now they are increasingly powerless. Mass manpower was once required to wage war. Now it is increasingly less useful in a world of high-tech warfare.
What prevents the powerful from going straight to the source of production and value? I'm not talking about off-shoring manufacturing to China; I'm talking about something more extreme, along the lines of a small cabal of wizards in a tower conjuring spells to extract energy directly from the wind and sun, and materials directly from the ground. The tech is far off, but the direction is clear. Maybe y'all think you're going to be one of those wizards. We'll see.
But what's going to happen to the rest of us? Are we all going to wake up some day as basic-income supported artists, happily chewing on organic granola and self-actualizing (or not) as we please? I think a study of history suggests it's not going to be that easy. Those union rights were hard fought. People died.
What happens when those at the top decide it's not worth keeping 8 billion people around just for kicks, when they can make do with ... 5 billion? 500 million? How many programmers, painters and yoga instructors do we need? On a planet with dwindling resources, tough decisions are going to get made.
So yeah, watch out for those robots. Especially the ones with the lasers on their heads. (That's a joke. But the rest?)
Given how little is actually automated right now, I'm always a little skeptical that automation drove the labor union extinct. It seems like politics, corruption, and mismanagement drove American unions near-extinct. A lot of countries still have very active trade-unions that take a strong hand in production and economics (Germany and Denmark, for instance).
Your time horizon is likely too short. Compare today to 1900. 117 years is basically nothing in history. We even still have someone who was alive then.
Compared to then, everything has some proportion of its labor automated. In fact, I struggle to name one profession that has not been impacted by some form of automation.
My point here is that I don't think labor competes zero-sum with automation.
I concede that so far as we can tell that may be true, but I'm highly skeptical that it will stay that way.
Remember, wide scale industrial automation is less than 250 years old.
I think that how automation gets deployed and how revenues get spent down are bargained over between workers and owners.
Nah. Do you know why? Software engineers will never unionize. The guy or gal who is the type to organize, isn't building/writing the thing that is going to replace him/her.
We're going to get to the point (hopefully) where our machine systems "apprentice" with our skilled laborers to take over their roles.
The only way that this won't happen is if it's physically impossible to replicate skilled work by machine. Which I think we all would agree is ridiculous. Timeline? Not sure, but it's going to happen.
If my local Wobblies branch wasn't a mess of infighting, I would already be unionized. You probably don't even have that excuse. Organize now!
As corporations sent jobs to other countries and Reagan fired the air traffic controllers, people were complacent enough to let it happen while they succumbed to their own pleasures.
"While we were dozing, the money crept in."
I suppose trains to this day must still have pairs of stokers, riding along, benefiting from labour saving technology. But of course it is not so. Eventually the dead weight gets cut.
Sometimes it gets done slowly and with respect for the dignity of the workers. Sometimes they're just tossed out in the cold.
The only real example that springs to mind is the job of casino riverboat captain.
At least in Mississippi (and maybe it's changed since I left the area) -- gambling is illegal, at least on land, but out of respect for tradition, riverboat gambling is not illegal... or at least that's the stated reason, scuttlebutt indicates that the reason might have been to limit gambling to only the wealthy who could afford riverboat cruises, and to prevent the working class from falling prey to their own vices.
The laws on riverboat gambling, as laws tend to do, slowly evolved. Nowadays, 'riverboat' gambling in Mississippi is done in massive casinos. They're actually very large buildings that look like this, but that technically float. The loopholes in the law that allowed for riverboat gambling have now morphed into 'river gambling', and that large, mall-sized building in the picture is technically a boat. It's effectively moored to land for practical reasons, but because the laws are the way they are, it has to be capable of casting off. Some casinos have a small, detachable section that isn't a part of the permanent mooring, others are detachable from their moorings, etc., but basically, you've got a casino that meets some technical definition of 'boat' upon which it is legal to gamble, whereas if it were incapable of being definable as a boat, it would be illegal to gamble in.
That's a long story, but here's the interesting part -- because it's technically a riverboat, by law, it must have a captain. The captain has few duties; he must be able to ensure that the boat is capable of passing a coast guard inspection, and on the exceedingly rare occasions when the casino must weigh anchor, he will pilot the boat as it does so. The other 363 days of the year he does effectively nothing but sit in his pilot house and stay on payroll, for which he makes a pretty handsome sum.
I don't know what the industry average is, but a friend of mine (the only reason I know these details) actually got a job for one of the smaller casinos in Tunica, Mississippi based on his experience as a dinner cruise captain. He was a 'junior captain' for years, and then got promoted to actual captain, for which he did nothing and made almost six figures a year, which is a damn lot of money in Mississippi. He passed the time by learning programming on the internet.
 - http://www.mississippi.com/portals/0/casino%20images/Harrah'...
I don't know, but ask your manager or your marketing team, they should know.
- Surplus non-elite human stock will be used for mass colonisation of solar system/extrasolar bodies where mortality is high (aside: and possibly beneficial, in terms of accelerated evolution due to reduced reproduction cycles).
- The elite/AI cabal at the top will subject the unknowing masses to physiological and psychological experimentation for their own ends. (this one is probably already happening :)
- A mass uprising of the disaffected non-elites will enforce some kind of Butlerian Jihad (it's just a union to keep the AIs out!).
- Elites will be benevolent overlords who take their stewardship of the species seriously, and we all live blissfully in massive space-elevator-anchored orbiting ring habitats with UBI and free soylent, while the mysteries of the universe are probed until the end of time.
Regarding the above, that has been the way of life in the past. Feudal lords took "care" of the folks under their dominion. I think the key is "Social/Economic Mobility".
If the people at the apex can perpetually stay on top by using AI and stomping out any challengers, then it would be a major problem.
However in response to your comment, feudal systems still relied more or less on peasant-powered agriculture, taxes etc for the ruling class to exist. The theorised society in discussion here is one where a very small percentage of humanity can extract all resources required for existence via automated tools - without the need for 'peasant-class' manual or intellectual labour. In this society the vastly larger portion of our species is more or less irrelevant, and thus subject to the whims of the few (as opposed to now, where economies and industries still need peasant-class input to function).
However, once the elites get too greedy, and the masses too marginalized, there will be riots and civil war, so the smart ones will go for the "grow the pie just enough" approach.
Nothing. Certainly not this crowd. Which is why I expect the long term future to hold something akin to a Blitzkrieg that will actually be over in a flash. After the delete function has been prepared and perfected, the delete button gets pushed. Not by the naive fools who built it, mind you. Those will a.) hardly know what they're working on and b.) be gone first.
We won't understand that it was until it's over, but then, looking back forensically over the waste and ash heap before us, we will see that PavlovsCat predicted all of this, and we should have listened.
If we as a society develop tools to remove people from production and leave them at the mercy of the modern jungle then the machines designed to do this will follow this path as well.
If we design machines to accommodate the needs of a large global population without regard to age, race, religion, productivity, or other factors that have historically been used to separate the "us" from the "them," then the machines will continue to solve the problems we designed them for.
We could, as a species, unanimously abandon all AI tomorrow, Dune-style, and the problems and paths of history would largely be the same. The strong will do as they please and the weak will suffer as they must.
We just need to be better to each other.
I believe that the fundamental problems of our time are ethical, not technological. If we can figure that part out, the technology should take care of itself.
I would love to live in a post-scarcity utopia where we all run around self-actualizing. I don't think we need to give up AI to get there -- in fact, I think technology will be the key that unlocks the gate.
But we have to have the wisdom to pass through it with style and bring as many people as we can on the way. Otherwise we might find ourselves fighting for our place in line and possibly even annihilate ourselves in the process.
Yes, that is the answer. Sadly it seems unlikely, since throughout history a large portion of the population has remained cruel and uncaring. Many thousands of technologists in the US are involved in making the weapons and targeting systems that are killing people across the globe. It seems unlikely they are going to wake up and decide to treat others better.
That might not be a "handful" of people, but it's certainly a smaller number than our current population.
Who needs coal miners if we have all the cheap solar we can use? Who needs farmers if we automate farming? Who needs drivers, maids, schoolteachers, cooks, etc. if we can have the machines do it for us?
Who needs lawyers if our disputes are simply settled by some AI judge? Who needs cops if our streets are patrolled by security robots and we have a Minority Report style crime-prediction system.
In a way, it's paradise -- if you're at the top.
Oh, and I'm pretty sure they suggest democracy as we know it is dead. One person, one vote? Only in an age of massed human warfare. If you can take out a million or a billion people with high-tech biological warfare (and those people aren't producing anything anyway) what good are they and why do they deserve the vote?
Not saying it's right, but the logic of megapolitical violence is cruel and unyielding. It's quite the lens.
Eventually the losers in this equation will risk everything to correct the inequality.
And that gap is just getting bigger and bigger until we get rid of that soldier entirely and he is replaced by a robot or drone swarm.
18 U.S. soldiers to 300 enemies dead (20 years ago).
4,000 to 24,000 enemies.
That kill gap is just getting bigger and bigger as the years go by.
American soldiers can use I.R. to 'see through walls'... how can a guy with an AK-47 and a canteen compete with that?
Battle of Mogadishu is a 16:1 kill ratio, whereas the Iraq war is 6:1 as you've written it.
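For reference, the ratios implied by those figures can be checked with a quick calculation (the casualty numbers are the ones quoted in this thread, not verified statistics):

```python
def kill_ratio(enemy_dead, own_dead):
    """Enemy deaths per own death."""
    return enemy_dead / own_dead

mogadishu = kill_ratio(300, 18)       # ~16.7:1, from the "20 years ago" figures
iraq = kill_ratio(24_000, 4_000)      # 6:1, from the second pair of figures
print(round(mogadishu, 1), round(iraq, 1))
```

So taken at face value, the quoted numbers actually show the ratio shrinking, not growing, which is the point being made here.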
The problem is, back to O.P.'s comment, that we're getting to the point where a battle can be won by 'wizards in a tower'; the technology is so advanced that soldiers won't even be needed. A drone with a Hellfire missile, a Tomahawk missile, drone swarms, remote tanks, etc. This seems pretty much unbeatable to an armed force that has to put actual human beings on the field.
suicide attacks (bombs, airplanes, nuclear radiation) ...which is exactly what we see today.
In addition, the likelihood is that this killing won't even be ordered by a human. The AI will detect that the masses are revolting and start killing people until either the revolt stops or all the people are eliminated. Those at the top will be blissfully unaware that this is even happening.
So if somebody is in charge, you are talking about a difference between the haves and the have-nots. The natural result is the have-nots diminishing until they are gone, but I think human politics inside the have demographic will be just as treacherous, rife with in-fighting and similar territorial disputes.
Sort of a might makes right situation where power is seized or transferred in an 'orderly' way.
Off topic but since you mentioned it... I was so disappointed when the prediction system turned out to be just magic and no technology at all.
The drone could also be hacked by a terrorist or enemy state
The real damage is rolling back education and destroying the media, destroying the will to fight if ever needed, and basically guaranteeing that someone will set those drones up and take the whole pie.
As for hacking, you have the best engineers in the world at DARPA, I'm sure it will be somewhat robust...
The last part about rolling back education, I can't agree with you more. I feel like this current wave of Nationalism around the world might have something to do with that.
Our governments are filled with people who set their passwords to "password". Yes, the drones will get hacked.
George Orwell, 1984
(1) A situation where we need to stop the car and converse in our natural language, that is, for just our half of the conversation, do speech recognition and natural language understanding. So, the AI approach would be to get the list of the 100,000 most common conversations, tune some speech recognition to those, f'get about the actual language understanding, and, instead, for each of the 100,000 cases implement the most common resulting action or response in the data? Sorry -- in that case I'd rather not be in that car!
(2) A situation where the driver needs actually to have real human understanding of a situation that, really, has never occurred before and, thus, is not in any AI training set. E.g., the vehicle ahead is a pickup truck and has some liquid dripping out of the truck bed out back. Somehow the liquid doesn't look like water. Taking a whiff, it smells like gasoline. Hmm. That stuff could catch fire, move to ignite the stuff in the bed of the truck, and maybe something could go "Boom". So, what the heck to do? Sure, slow down, get back, well back of the truck, change lanes and move ahead of the truck, pull off the right side of the road and stop, etc. IIRC, so far such general deductive reasoning is beyond AI. IMHO, such reasoning requires real AI, or whatever we are calling that now, and we don't know how to program computers to do that now.
IMHO, first cut, for self-driving cars, the best chance would be to do some extensive re-engineering of the roads.
Here is a general point: We don't yet understand how general human intelligence works and, thus, don't know how to program it. So, we are having trouble evaluating the automation we now have and, thus, are vulnerable to overestimating how close to real AI the current work really is.
Besides, AI hype and overestimating how much progress there has been toward real AI is a very old story: As I've heard, way back in the days of vacuum tube computers, IBM was pushing publicity about their "giant electronic human brains". Looks like IBM is still doing this.
2. I'm not confident that I would be able to notice a truck leaking gasoline, let alone smell it while driving. I'm comfortable riding in a self-driving car that can't perform that recognition. And again, the rider can simply tell the vehicle to pull over, right?
With enough road re-engineering, a self-driving car could essentially follow electronic rails and also get a lot of real time input about traffic jams, detours, slippery roads, new potholes, etc.
I also recommend the movie (the original).
If: when technology/automation reach the stage where a group of only ~10,000 humans can fully support a luxurious and peaceful existence for ~10,000 humans,
and this technology is owned and controlled by the richest ~10,000 humans,
Then: Earth's population will plummet to those necessary, and no charity will be given to those who do not control this technology. Not through genocide, simply through starvation. Those who own the land will keep it.
A little off topic, but the book "The Sovereign Individual" changed my outlook on the world when I read it almost 20 years ago.
> Ultimately, you would expect that there would be riots across America. But the people could not riot. The terrorist scares at the beginning of the century had caused a number of important changes. Eventually, there were video security cameras and microphones covering and recording nearly every square inch of public space in America. There were taps on all phone conversations and Internet messages sniffing for terrorist clues. If anyone thought about starting a protest rally or a riot, or discussed any form of civil disobedience with anyone else, he was branded a terrorist and preemptively put in jail. Combine that with robotic security forces, and riots are impossible. The only solution for most people, as they became unemployed, was government handouts. Terrafoam housing was what the government handed out.
And from the second:
> Inventors would work on their inventions, using materials and equipment provided by the robots. Scholars would do their scholarly research, finally free to study whatever they like, using the infinite intellectual resources available on the network. Scientists would start pursuing their scientific goals using research facilities provided by the robots. [...]
There are people who are experts in their various fields -- engine design, scrap booking, fusion reactors, needlepoint -- and they would love to pass their knowledge on to other people. They would write books, make videos or have live lectures and workshops for people to attend. People interested in the martial arts would practice them every day. People interested in video games would play them every day. People interested in gardening would garden every day. The majority of people have a talent and, if they had the time, they would cultivate that talent and use it.
The contrasting principles that drive the two societies are clear, and the second quote I chose doesn't convey how advanced their society has become by enabling every human to follow and develop their particular interests. Both visions are on the extreme ends of a spectrum, and while the most likely outcome in reality is closer to the middle, I'd like to try to push it up towards the utopian end.
In any case, the things Watson is doing are far more complicated than what Manna started out doing, which was replacing management in a fast food restaurant.
Edit: Here is a recent report from the White House on AI, too:
What do you think is going to happen? Can society do something to prevent mass concentration of wealth into the 0.00001% of people via the march of technology and innovation, or is this an inevitable outcome?
They also talk about how the core values people will need are trustworthiness, self-reliance, etc. That's where the title comes from. But it's anything but a self-help book.
> Can society do something to prevent mass concentration of wealth into the 0.00001% of people via the march of technology and innovation, or is this an inevitable outcome?
That's the ultimate question. Very much looking forward to finding out the answer! I'll let you know if I figure it out. ;)
If it costs them little to nothing to keep the masses under control, then why wouldn't they?
those "people at the top" will (or should) be the first to go. these "wizards" will be unnecessary and redundant, offering little of value that could not be provided more efficiently by the true machine wizards.
Author praises advances in AI by big tech. Complains about how he was served the wrong medication and how a robot would not have made the error. Closes by saying that robots will be better at doing things than humans.
It's a shitty post that doesn't even really take into account the current state of AI, how robots are prone to errors as well as humans due to faulty hardware, and, well, the fact that some jobs are only trusted to humans, even if the margin of error may be higher.
Agreed. The author seems to be doing fairly irresponsible things with statistics, too: maybe "the statistical likelihood of dying from a self-driving car is like falling off a building and being struck by lightning on the way down" because there are so very few self-driving cars on the road, and they're all currently monitored by human operators? I think they'll eventually be safer than humans, but implying that we're already there is just wrong.
The post is more or less the worst way to introduce the topic of automation.
What's needed or not for automation of various jobs is a very different topic from the increased abilities of AI in particular. In 1902, the first clerkless stores opened in NYC, known as automats.
Modern technology naturally may make clerkless stores and other approaches more appealing but the potential has existed for a long time. And that means that the degree of automation and job loss one sees is a complex question, hinging on both social and technical questions.
I always check my medication before I pay for it, let alone leave the pharmacy.
Actually it does, if briefly:
"Put simply, AI can instantly identify all the troublesome gene sequences of a beagle’s genome to determine the likelihood of certain diseases but has struggled to identify a beagle in a picture."
But the author was making a broader point about how the "promise" or "eventuality" of AGI obviates most work. You can certainly disagree, which many do, but I personally am betting it's right.
Will AI end up obviating most work? I don't know. But I'd love to talk about that with people who are informed.
In fact though, what you ask for is really, really hard to do, and I'm sure this author had a deadline and a senior editor to work with.
The reality of the situation is that this is a devastatingly complex topic. As in, the dependencies are overwhelming if you haven't been studying it for decades. For example to even scratch the surface you have to have a quality understanding of history, understand broad technology trends since before antiquity, understand macroeconomics, have specific technical knowledge (like how does a neural net work), recognize the importance of cognitive science and biology, and all of this goes into this stew of trying to understand the AGI landscape.
You can't boil that down into a feature length article.
And yes, it is hard to do. I once wrote what I thought was going to be a well-received article on AI. I used Jobs' analogy of "the bicycle of the mind" to make a point about how AI could end up turning the bicycle into a more powerful method of transport (I used a motorcycle), thus supercharging our abilities even more. But I decided not to publish it until things become more clear. I don't want to sell something that might turn out to be quite different from what most expect. Because who knows when/if the singularity happens and how it will impact us.
That sounds as much of an AI as finding the negative numbers in an array.
Malpractice insurance will force doctors to consult AIs. Auto insurance will force us to install automated driving systems. Home insurance will force us to install sensor systems and Echo-like assistants. Insurance costs will rise for those that refuse AIs, and make ignoring them financially irresponsible.
Insurance requirements put all sorts of pressures on industries. Anecdata here, but I know a very successful gynecologist who was forced to sell HIS practice, because the malpractice insurance ate all his profits. The possibility of a sexual misconduct case basically made it impossible for him to be a male gynecologist and carry the requisite insurance, even though he never had a suit filed against him.
On the other hand, maybe individual auto insurance will eventually go away as automation takes hold. But automation will not happen overnight. The liability in an accident between a human and automated driver will likely be assigned to the human, meaning human based insurance premiums will rise, forcing more people to automation.
Sure the people with driverless cars only pay $100/year but my risk hasn't increased.
What happens when a majority of people have moved to driverless cars, though? The population of people who want cars that are human driven won't be representative of the general population: they'll be the joy drivers and the risk takers (there's a sexy marketing campaign in there somewhere...). Those people would represent a greater risk than the contemporary human driver, so they would pull up the costs of insurance. But that just drives the marginal human-driver toward automation, leading to a risk spiral until human cars become effectively luxury items.
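The spiral described above can be made concrete with a toy adverse-selection model. Every number below is invented for illustration; this is a sketch of the feedback loop, not actuarial math:

```python
def premium_spiral(risks, switch_threshold, rounds):
    """risks: each driver's expected annual claim cost.
    Each round, the pool is charged a flat, actuarially fair premium
    (the pool's average risk); drivers overcharged relative to their own
    risk by more than switch_threshold switch to automation and leave."""
    pool = sorted(risks)
    premiums = []
    for _ in range(rounds):
        if not pool:
            break
        premium = sum(pool) / len(pool)  # flat premium = pool's average risk
        premiums.append(premium)
        # the safest drivers subsidize the rest the most, so they leave first
        pool = [r for r in pool if premium - r < switch_threshold]
    return premiums

# Ten drivers with expected claim costs from 100 to 1000:
print(premium_spiral(list(range(100, 1001, 100)), switch_threshold=150, rounds=5))
# -> [550.0, 750.0, 850.0, 900.0, 900.0]
```

The premium ratchets upward each round as low-risk drivers exit, and only stabilizes once everyone left is near the pool average: the "human driving as luxury item" end state.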
After some disastrous wrecks, the government comes in, rightfully blames the deaths on the small minority of adrenaline junkies who want to drive cars without the guidance and protection of Google, and bans human driven cars from public roads. The next Larry Ellison has a fleet of human-driven cars to drive on her private automobile course, but day-to-day a human driven car is seen as often as a Bugatti.
In any case, if risks decrease and therefore insurance costs decrease, besides pushing for higher margins, insurance companies could also push for having more comprehensive plans that cost more than the basic ones. People who only opt for the basic plans now might shell out for the more comprehensive one if the prices overall drop, and so the more comprehensive ones would allow the insurance companies to retain their margins.
I'm skeptical about the robotic lawyer and pharmacist use cases the article calls out. These seem like really distant applications--nothing I gather we'll see in the next decade, anyway. I have a friend going to pharmacy school, and I wouldn't think to warn her about her career choice just yet. Mistakes in these fields simply cost too much.
What other careers are at immediate risk? How big of a working population will be put out of work?
I imagine trucking and freight is the industry most immediately at risk.
Autonomous vehicles are impressive, but there are so many social problems arising from failure modes we've yet to answer. I'm imagining the first fatal accident arising from an autonomous truck--the press will jump on automation like vultures. The lawsuits will be huge. I can't imagine the court of public opinion being kind to the shipping company whose machine kills an unsuspecting family on vacation.
I realize that doesn't prevent automation from happening, but I think it could bring any rollout to an immediate halt.
And though it's getting a bit off topic, I feel like the "no more car ownership" meme is utter hyperbole. I would be willing to bet money that private car ownership will continue to be a thing for a long time. How much would an hour and a half long Uber commute cost to make every day for workers living in a city without sufficient public transit, such as Atlanta? Are people that live or travel to rural areas going to participate in ride sharing? Increasing remote work and better housing options seem like they will be more pragmatic solutions to the plague of the commute.
As much as I'd like to see increased efficiency through AI and automation, it still seems much too early to count humans out. I guess I'll eat my words when I see it.
I think it's jobs that are essentially a manual version of a computer program that will disappear first - the day-to-day work of lawyers and accountants is already being eaten by code, and that trend will continue until there are very few people in those industries. I'd go so far as to suggest there won't be any accountants in 20 years time, just accounting software and the people who run it.
Automating something mechanical is far harder than automating data processing.
When there is a plane crash, the media will be all over it for weeks. But at the same time they will repeat again and again that aircraft are incredibly safe, and that you are more likely to have a crash on your way to the airport, etc.
The opposite is done for terrorism. Every time someone is killed in a terror attack, the message is "it could happen to you".
So not sure what makes them behave either way but it could still turn out to be OK.
On car ownership I agree. I don't really buy that all cars will be shared when fully automated.
For sentimental reasons, people like to own their car; they like to invest in it, and they don't like to find a car that looks more like a mix of dumpster and toilet after 20 party-goers used it before them.
For a very practical reason, the same reason farmers own their heavy equipment when they could just rent it: because they all need it at the same time. The primary purpose of cars is commuting from home to work, and everyone needs to do that at about the same time. So the only way to guarantee you will have a car available when you need it at peak period will be to own it.
What I think could happen is that when self driving cars have become mandatory in large cities, you won't really need to park the car near where you live or work. It could go park itself in some large underground car park 5-10 min away. Without parked cars on both sides of the street you increase the capacity of most cities (at least in Europe) massively.
That, plus smoother driving, could go a long way toward eliminating traffic jams.
I think a more likely outcome would be that employers have to become more flexible. If you have to be at work by 9am but you can't get there at that time then you can't do the job. If everyone is in that situation the employer won't be able to fill the position. Ergo, change will happen.
It's because 50 years ago scheduling was hard. You had to notify people of things by sending them a letter via the post system. Large companies had their own internal mail systems. After that there was email, which sped things up a lot, and now we have video conferencing, so you can have a meeting whenever you want without even really needing to schedule it if all the parties are available. So long as there are a few core hours during which everyone who'd need to meet is working, hours wouldn't be an issue.
In practice, you may be right -- I don't know enough about pharmacies to know.
Machine learning is being applied in DNA research and chemistry right now, not over decades.
How do we deal with the left-behind society? By trying to find them new purposes in life? How can we make sure those new purposes won't be automated in the future? Or if a large portion of the population faces the problem of being jobless, is population growth/immigration still a positive thing? What about education? If we know that most people won't be needed in the workforce anyway, is education still something worth having, except for very basic common sense?
This all leads me to think that the future might be a more static and less energetic one, yet it might be more affluent than ever. It could also be more equal than ever: most people might be under the care of some really intelligent system for their whole life, without really being required to achieve anything, but it will be OK, and will become the new norm. Population might decline, but people will be gradually replaced by robots, until a new equilibrium has been reached.
And yet, I don't know of an instance where all this newfangled machine learning has been disruptive. The expert systems that will supposedly displace accountants and paralegals don't exist. Dexterous robots can barely screw the lid on a bottle, and only under controlled conditions. Autonomous vehicles reliable enough to be applied commercially don't exist. None of it exists. There's no guarantee it ever will.
The AI race is most definitely on, but nobody knows where the finish line is. And there's a long history of inventors grossly misunderguesstimating the location of finish lines, particularly in the field of AI, where every time we find a new piece of the puzzle we think 'This is it. This is the holy grail!'
Machines are better at pattern recognition, that's the big breakthrough, but there's so much more to taking humans out of the loop than just that.
You can buy one right now. Tesla sells them.
Sure, you can say that these are not "full" self driving cars. But they are part of the way there and they are already being disruptive.
Imagine if Tesla's self-driving cars got a bit better, and how many accidents they could prevent (they are already preventing accidents).
You don't need perfect AI to disrupt industries.
Hello car insurance disruption!!
Even simple stuff like the automatic braking safety features that are just now being deployed has the potential to save hundreds of thousands of lives every year.
"seat belts" and "airbags" were also massively disruptive, and this new stuff has the potential to be even more so.
Farming was automated, so we transitioned to other jobs. No problem there (thank god it was possible).
Industrialisation made it possible for a company to export to entirely new markets, and it made worldwide selling possible ==> new markets. No problem there.
Computers appeared. Until now, the effect hasn't been big: the production lines are all dedicated and require huge investments.
Humans are still required to put chocolates in a box and get them out of the "container" (although that could be automated with a huge investment).
What's going to happen when a robot with an investment cost of $10,000 per replaced employee can learn to put chocolates in a box with supervised learning and no other investment?
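To make the economics concrete, here's a rough payback sketch. The $10,000 robot cost comes from the comment above; the wage and upkeep figures are illustrative assumptions, not data:

```python
# Rough payback-period sketch for replacing one worker with a robot.
# Assumptions (illustrative only): the robot costs $10,000 up front,
# the replaced worker earned $25,000/year, and the robot costs
# $2,000/year to maintain and power.
robot_cost = 10_000
annual_wage = 25_000
annual_robot_upkeep = 2_000

annual_saving = annual_wage - annual_robot_upkeep   # $23,000/year
payback_years = robot_cost / annual_saving

print(f"Payback period: {payback_years:.2f} years")  # ~0.43 years, about 5 months
```

Even if these numbers are off by a factor of several, the payback period stays under a couple of years, which is exactly the economic pressure being described.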
Who's going to get work, and where is the money coming from, if all jobs are slowly being replaced/automated? I think there is a lot of trouble on the road ahead. What useful jobs will all the drivers/truckers/taxi drivers move into to remain useful? What jobs are next, accounting perhaps? And where will it stop?
And don't say they just have to learn something new. The people who are going to be replaced by total automation (robots) are mostly manual workers in the first place, and I don't think many of them can retrain into something new that won't itself be automated within 3 years of them starting.
I'm not trying to offend anyone, but I'm pretty hopeless, because I don't see a way out for a lot of people.
I think the opposite. I live in a country where transport is the main employer, and I think that in the next two decades all these jobs will be lost, because transport companies will have enough lobbying power to pass any laws they want.
We were talking about retirement age, and I was wondering how we could push it later when there will be fewer and fewer jobs. Eventually we will have to admit that a big portion of the population won't be able, or needed, to work, and that I will probably be part of it.
I work a lot with the finance dept of a large bank where 90% of the work should have been automated years ago. No need for AI, just a little bit of coding and good communication between IT and the business. But the business is not in a rush to destroy its own jobs, and IT consistently misunderstands what the business actually does. (Also, we often have to deal with "bottom of the basket" IT, but it will be the same with AI.)
And with the people currently in charge, it won't change anytime soon.
So I think the time when AI will have automated every single manual job is nowhere near, just as software has not automated every single manual task yet, far from it. The only jobs I can see AI taking over are those that are standardised enough, accessible enough for programmers to understand, and on a large enough scale that it makes sense to dedicate lots of resources to training and tuning algos before anyone bothers deploying AI.
This is the same for so many areas of society, especially politics. It would be easy to build a system that replaced representative politics, think micro-voting on particular issues by all citizens. This hasn't happened en masse because politicians would have to remove themselves from their jobs first.
Robot domination might actually never happen, I mean even if we ever manage to create AGI, there is little reason to believe that it will:
* Want to stick around and serve humanity, because why would it?
* Care about us at all; it might just do what it likes and forget about us entirely.
Edit: I forgot to add that if we built more welcoming, welfare-based societies where people are looked after when they're out of a job, then the adoption of automation and "robots" would probably be more accepted and feel less imposed upon us.
The car must be the lower hanging fruit in this case.
As machines take over low level jobs (as they have for thousands of years) humans will continue to migrate into positions where their skills are still needed. Quality of living will continue to rise as a result.
The only real danger is singularity, and it's a danger to all of us, not just the people at the bottom. If an AI was truly better at every job why would the people at the top even be in charge?
Mechanization will continue to benefit society and humankind until an increasingly worrying singularity results in everyone being out of work and, likely, the extinction of our species.
The Industrial Revolution somehow made it through via interesting social and market manipulations; it's not clear what it'll be this time.
Reminds me of Alvin Toffler's warning that the amount of information was growing far too quickly for people to be able to assimilate it... that was back in the 80s, with no web, no Wikipedia, no Youtube...
People will adapt. We're good at it.
Not at all. The diversity of occupations between 700 BC and 1800 AD was not dissimilar. The predominant occupations for employed men were farming/fishing or craftwork. Women had no formal occupations, and slaves and elites ("doctors", "lawyers", etc.) were the minority that fell outside these two primary occupations.
The only real danger is singularity, and it's a danger to all of us, not just the people at the bottom.
Right, well the goal is to be there in <100 years.
and likely, the extinction of our species
Is that worse than going extinct in a few hundred thousand years? What's the difference of when?
> So why is it that, despite each of us having a wealth of experience with people being unapologetically bad at their jobs, we still feel that humans have set the bar so high that the same machines — the ones that can tell you the name of that obscure actor in that even more obscure film with 100 percent accuracy in .01 seconds — would somehow buckle under the challenge of distributing allergy pills?
Because those machines must still ultimately be programmed or trained by flawed humans, and I still have to reboot every device I own on a regular basis to keep it from vomiting all over itself.
I think one possibility that is not being considered is the ability of humans to upgrade themselves via some combination of advanced genetics and hardware. What if you could design AI that can help upgrade humans? Again, this would lead to the creation of a new superclass of humans.
The other thought I have is: on one hand we expect AI to be intelligent enough to take over most skilled human jobs, but we still expect it to be controllable by a few human beings. There is always a possibility that superintelligent AIs might have motivations other than ruling over human beings. As a human, even though I am much more powerful, I probably don't have much motivation to rule over rats or cockroaches; the same could be true for AIs. We might be too insignificant for them to waste time on us.
Now the current education system is out of date and failing us. IMO programmers are just the modern-day factory workers, leveraging machines to do more with less. Imagine if we had ~30% of the population working on the next abstraction (software / robots / AI) instead of the current 2%. We'd be living in a real Utopia. I'm of the opinion that there is an infinite number of jobs, as we as a species always want our current situation to be better than it currently is. But that only works if everyone can produce relatively equally. The problem now is that we have a class of society which cannot compete against another class that is benefiting from automation; not just programmers, but anything tech.
One of the unspoken problems with the tech revolution is that there are a lot of people who are simply not smart.
Remember, that for every genius, for every "above average" guy, there's the guy who's below average, a bit slow.
Until 1960s, you could be the hotshot lawyer, doctor, scientist, businessman or engineer, or you could work as a farmer, factory worker, tailor or water-carrier.
In a few years/decades, one will need outright brilliance to find jobs.
This seems counterintuitive - there is a finite number of companies, a finite amount of work those companies require, and a finite economy in which they operate. It simply isn't possible to find an infinite number of jobs in a system in which everything is scarce and demand is limited.
You can make as many companies as you like - all but a few will die. It's the reason restaurants tend to go out of business so quickly, there are already too many of them.
If most programmers working today can't handle meta languages and complex type systems and manual memory management and multithread synchronization, do you expect that 30% of humans can go to school and surpass today's programmers, at whatever programming is left once machines also do the easy bits?
And to achieve what? Are there not already more than a lifetime's worth of games to play, films to watch, books to read, vlogs to watch, online forms to fill in with your details; what on Virtual Earth would those 100 million - 2 billion people you envisage be programming?
as we as a species always want our current situation to be better than it currently is.
I'd like to see everyone get sheltered and dry living spaces with clean, safe water, light and heat, and enough food and drink, and then see how many people do or don't demand more?
Maybe one day, an average job might be controlling a fleet of drones mining an asteroid. Building all those Star Destroyers takes a lot of resources.
Right, right, creative, because the cure for global economic woes is more macaroni stuck on craft paper with Elmer's glue.
The point is, we in the west have gone from food rationing after WWII, scrapings of butter and self-bottling of fruits for preservation, to plenty of affordable food from around the world. You say "we as a species always want our current situation to be better than it currently is" and I think ... that's not the world I see around me. People are quite content with the cheaper foods, nobody is clamouring for Michelin starred food for every meal, people like drinking the existing beers and wines and eating the existing McDonalds and bananas and steaks and whatevers.
And I submit that the drive is "to get away from the unpleasant bits" rather than "always wanting better". They only overlap while the current situation is unpleasant, but removing unpleasant things is a lot easier than inventing infinite new pleasantries.
We may finally get the promise of the 50's and 60's - less work and more leisure time thanks to technical advancements.
I personally look forward to the day there are fewer jobs, so everyone can just work 3 days a week to spread them around, and spend much more time with family and passions.
If you want to work less, you can do it now, but it requires sacrifice in the sense that you'll lead a lifestyle closer to that of the 50's or 60's. Head over /r/financialindependence.
Thanks, I'm already doing it. I quit my Software Engineering job to drive 40,000miles from Alaska to Argentina over two years.
Now a few years later I've quit again and will drive 80,000 miles around Africa for the next 2 years. I'm in West Africa now.
The number one question I get asked is how can I afford to do that, so I wrote an eBook "Work Less to Live Your Dreams" - http://amzn.to/2huxZjZ
 http://theroadchoseme.com is my website, and I'm posting updates to http://facebook.com/theroadchoseme/ and https://www.instagram.com/theroadchoseme/
Let me know if you have any questions or whatever, I love helping other people get out to live their dreams :)
Hit me up on Facebook or whatever.
We likely won't, because no one with the power to do so seems to be planning on making that future a reality. Automation exists to allow business owners to reap the benefits of labor without the burden of providing a living to human employees, not to free humans from the necessity of labor.
Barring some global socialist revolution and something like basic income, people made redundant by automation will still need jobs to survive, but automation means there will simply be fewer jobs available. The jobs that are available will be menial, low paying and high risk, because human labor will be practically worthless, as anything of value will already be done by machines.
Unfortunately, history shows that for the vast majority of people, it rarely works out the way you're looking forward to. For some, yes - for the well off and well connected, automation may lead to some sort of Eloi fantasy of intellectual opportunity and luxury. But everyone else is going to be left growing potatoes in the sewers.
Edit: obviously, there will need to be some people with the means to actually buy the goods that the robots will be making, but the efficiency of an automated economy means there needs to be far fewer such people, and they can be distributed globally.
I don't see that as obvious. Why couldn't each company pay their robot employees a wage, and the robots buy the goods produced by the other companies and put them in a building for a few years before moving them to landfill.
The exact same pattern as our current economy, which apparently holds itself up, but without needing humans to be involved.
But why would the companies do that? The robots are a distraction; the companies in that case are buying and storing the goods from other companies. Why would they do that?
I have no idea how employees-buying-goods sustains our current economies (although I suspect it doesn't, and that they are inflationary, debt driven, and unsustainable).
Why would robots, or more accurately the companies that own them, buy goods from other companies outside of those needed for the company's own business? There's no incentive to do that.
As to why, the inevitable conclusion is: Company A and Company B make goods which do not sell: both go out of business. Company A and Company B make goods which sell: they stay in business. In a world with no human labour, organizations which want to propagate their own existence will do what it takes, including trade agreements with other organizations.
As long as the agreement to purchase each other's goods is backed by a chain of buyers ending in ever-growing and infinite fiat currency debt, it would be as stable as a real economy. And they could compete for each other's robot's coin to affect profits.
And the question of "is it providing utility for a human" is ... utterly irrelevant.
Some other HNers will then probably chime in and say that me painting all this as a zero-sum-game is probably wrong, to which I'll retort that I'd like to be proven wrong but this time around it does actually look like there's nothing else hidden in the bag of goodies, it's us (the employees) vs them (the employers).
The more ai approaches humans, the more we will start to notice the differences and value the skill humans have and robots don't.
Will these activities be more expensive? In *The Diamond Age*, Stephenson describes how the rich purchased hand-printed newspapers while the proles watched the glyphs on public feeds.
I think most of the developed world will adopt a 35 or 30 hour work week within a few years.
Unions gave us 40H weeks because their members were demanding it for 60 years before FDR signed the FLSA.
Lowered hours are still not a result of automation, though, right? The history of global workweeks also appears to point to unions: "Over the 20th century, work hours shortened by almost half, mostly due to rising wages brought about by renewed economic growth, with a supporting role from trade unions, collective bargaining, and progressive legislation." 
By my reading, industrialization gave people more money, and so they demanded more personal time to enjoy it. People generally don't work at factories for the fun of it.
For me (not in the US) shorter work weeks is an important political issue.
Regulating the workday had the extra effect of making labour appreciate in value. One couldn't run a factory on two 12-hour shifts anymore, so one had to hire a third shift. That pent up the demand for workers, who could then command better terms. This certainly helped establish the blue-collar middle class.
Marx thought all the jobs would be automated in the 19th century, and if we were content to live by 19th-century middle-class living standards, we'd be down to 10-hour workweeks. It's not about having enough, though; it's about keeping up with the Joneses. I see no reason to believe that this perpetual game of one-upmanship will ever come to an end.
Even on the unlikely presumption that machines surpass us in every regard, like housecats, we'll still be investing a great deal of time and effort playing little domination games with one another.
Similarly, while I am impressed by the Google self driving technology, the Uber version seems like a dangerously underdeveloped imitation that breaks as many rules as a bad human driver.
I think AI is powerful and important, but nowhere near where the singularity fan boys say it is.
Way to pump the industry hype there venturebeat, but I don't think you're going to make the cheerleading squad cut this year.
All of that might be true, at some point in the future. The future is a very long time. I have no idea how the economy will work 500 years from now. But we can form reasonable opinions about what will happen during the next 10 years.
For all the rhetoric about a productivity revolution thanks to robots and/or AI, we should check in with reality and remember how bleak the present is. Productivity in the USA ran at a high level during the 1940s, 1950s and 1960s, but it stalled out in 1973.
"Labor productivity in the private nonfarm business sector rose by an average of 2.9 percent per year between 1948 and 1973. Beginning in the earlier 1970s, though, productivity slowed sharply, averaging only 1.5 percent growth between 1973 and 1995. Several factors can help explain the downshift. First, growth in the immediate post-war era benefited from the commercialization of numerous innovations made during World War II, including the jet engine. The early 1970s marked the point at which the wartime innovations became exhausted. Public investment also slowed, and the 1970s oil shocks and collapse of the Bretton Woods system caused dislocations that weighed on growth. "
For a while, the situation was better in Britain, but Britain has seen a complete collapse in productivity growth since 2008.
All of this is difficult to reconcile with talk of a revolution in productivity. Maybe that revolution will happen, but as late as 2016, there was no evidence of it in any government statistic, in any of the advanced economies.
It's also worth noting, if we do see uptick in productivity growth (thanks to robots or any other technology) it might simply get us back to the kind of productivity we took for granted during the boom years of the 1940s and 1950s and 1960s. And those were decades of full employment. And that would be awesome.
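To see how big the gap between those growth rates really is, here's a quick compounding sketch using the periods and rates quoted above:

```python
# Compound effect of the productivity slowdown quoted above:
# 2.9%/year over 1948-1973 (25 years) vs 1.5%/year over 1973-1995 (22 years).
fast = 1.029 ** 25   # cumulative growth at the post-war rate
slow = 1.015 ** 22   # cumulative growth at the post-1973 rate

print(f"1948-1973: productivity x{fast:.2f}")  # roughly doubles
print(f"1973-1995: productivity x{slow:.2f}")  # less than +40%
```

The post-war rate roughly doubles productivity over its period, while the later rate compounds to under a 40% gain over a comparable stretch, which is why the slowdown matters so much in the long run.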
"In GDP / Labor Productivity terms, though, its value hasn't changed at all -- in fact, it's DECREASED, since your new computer costs less than the old computer!"
At least in the USA, the government does adjust for the increasing power of computers. Indeed, that is one of the arguments that productivity is really lower than what the government says (and therefore inflation is higher).
Consider this article:
It sets out to debunk this claim:
… the numbers are skewed by huge gains in real output in computer and electronics manufacturing that mainly reflect quality adjustments made by government statisticians, not increases in real-world sales.
But in fact, the author points out that it is altogether appropriate for the government to make those adjustments:
"Multi-factor productivity is simply a ratio of value-added to an index of inputs. Value-added in the manufacturing sector is a measure of the economic value of all the goods produced. Not the physical number, the economic value. And hence manufacturing MFP is a measure of how much economic value that sector produces - not the physical number of goods - per unit of input used."
And even with those adjustments, productivity growth in the USA has been weak.
If you were to remove the quality adjustments that the USA government makes to the computer and consumer electronics sector, then productivity has been even worse, and inflation much higher, and growth even weaker, than what we typically think.
What I find more concerning is the evolution of the degree of skill required in that new world. Not everyone is capable of doing conceptual jobs, and those people will increasingly be left behind (and vote Trump!).
So we need a more highly educated workforce, which we do not seem to be preparing for; though some may argue there's a limit to how educated your population can be, too.
Industrialization is a good example, though. When the cotton gin was invented, Whitney thought it would reduce the number of slaves, but he didn't account for the surge in demand for cotton once it was cheaper. There is a difference, because a human worker was still involved, but I do think some areas will see spikes in employment. Will that be enough to offset the unemployment, though, and how do we attempt to prepare for that?
I don't see any evidence for this. While I'm sure self-driving cars have the potential to be this safe, the death from Tesla's autopilot suggested the risk of death from self-driving cars might currently be similar to that of human-driven cars.
1. The perceived and real adaptability of humans. (Given an 'unknown' situation, what does the machine do?)
2. The non-epidemic nature of faults in humans. (One version-update bug nuking all Teslas; we have seen this happen in software, and there's no reason it can't happen in self-driving software.)
3. The non-epidemic nature of coordinated events in humans. (GPS satellites fail or a leap second occurs and suddenly all self-driving cars go nuts; this usually doesn't happen among humans.)
4. Ethical decision-making concerns about machines that are considered 'cold' and 'calculating', and whether their choices can be construed as murder. (Would the machine choose to kill the pedestrian family over the car's current occupants in a purely two-choice scenario?)
5. Degree-of-'control' considerations. (Would you enjoy a remote shut-off switch that denies you entry to your house / mobility, etc.?)
6. Anonymity considerations. (That's why the cash economy is still a thing. The possibility of 3rd-party tracking vs. knowing about the tracking are two different things.)
FWIW, I am not against either situation; I am just collating info :)
The human-driven fatality stats are here - https://www-fars.nhtsa.dot.gov/Main/index.aspx
Very near, indeed, at least for now
You can't extrapolate from a single incident like that. If you look at statistics, they strongly suggest that self-driving cars are significantly safer, and will become even more so the more vehicles out there are self-driving.
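A back-of-the-envelope way to see why a single fatality can't settle the question: with one observed event, the Poisson confidence interval on the underlying rate is enormous. The ~130 million Autopilot miles figure is the one Tesla cited around its 2016 incident, and ~1.18 fatalities per 100M vehicle-miles is the approximate US human-driving average; treat both numbers as rough assumptions.

```python
import math

# One fatality observed in ~130M Autopilot miles (Tesla's 2016 figure, approximate).
observed_deaths = 1
autopilot_miles_100m = 1.3          # in units of 100 million miles
human_rate = 1.18                   # US fatalities per 100M vehicle-miles, approx.

# Exact 95% Poisson interval for k = 1 observed event:
# lower bound solves P(X >= 1 | mu) = 0.025  ->  mu = -ln(0.975)
lower_mu = -math.log(0.975)

# upper bound solves P(X <= 1 | mu) = 0.025, i.e. exp(-mu) * (1 + mu) = 0.025,
# found by bisection since there is no closed form.
def upper_mu(target=0.025, lo=0.0, hi=20.0):
    for _ in range(100):
        mid = (lo + hi) / 2
        if math.exp(-mid) * (1 + mid) > target:
            lo = mid   # value still too high: root is to the right
        else:
            hi = mid
    return (lo + hi) / 2

lo_rate = lower_mu / autopilot_miles_100m
hi_rate = upper_mu() / autopilot_miles_100m

print(f"95% CI for the Autopilot rate: {lo_rate:.3f} to {hi_rate:.2f} per 100M miles")
print(f"Human baseline {human_rate} falls inside the interval: {lo_rate < human_rate < hi_rate}")
```

The interval spans roughly two orders of magnitude and comfortably contains the human baseline, so one incident is consistent with Autopilot being much safer, much more dangerous, or about the same.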
edit: so many tesla haters downvoting! :D
Is it worthwhile?
In the grand scheme of things, AI right now can identify patterns. This is possible because everything in the AI universe is tied to a number. Now, this mathematical view of the world does help solve a lot of problems, such as security, prediction, recommendations, etc.; however, I don't think AI will be able to completely, autonomously replace us humans.
For example, computers see the world using sensors that translate pictures into "pixels". In contrast, we don't see pixels; we natively see objects. Now, a computer can still do something called object recognition, but, again, this is based on the fundamental idea of pixels, and a lot of the math is built around that concept, which is vastly different from how humans perceive the world.
The problem is that, because everything is based on math, there isn't much native intelligence in a computer; someone needs to impart it mathematically into the computer's brain. Math and intelligence are two different things. This is why you're able to fool a computer with an A4 printout of your face, but not a real human. You can only program so much logic into a math wizard, and that doesn't necessarily translate into intelligence.
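To a program, an image really is just an array of numbers, which is why a naive pixel-level "recognizer" can be fooled by anything that reproduces the same numbers, such as a printout. A toy sketch (the 4x4 "images" and the threshold are made up for illustration):

```python
import numpy as np

# A toy 4x4 grayscale "face": to the computer it is only numbers.
stored_face = np.array([
    [0.1, 0.9, 0.9, 0.1],
    [0.1, 0.2, 0.2, 0.1],
    [0.9, 0.1, 0.1, 0.9],
    [0.1, 0.9, 0.9, 0.1],
])

# A good printout of the same face reproduces nearly identical pixel values.
printout = stored_face + np.random.normal(0, 0.01, stored_face.shape)

def naive_match(image, reference, threshold=0.1):
    """Accept if the mean pixel difference is small -- pure number comparison."""
    return float(np.abs(image - reference).mean()) < threshold

print(naive_match(printout, stored_face))  # True: the printout passes
```

Real face-recognition systems add liveness detection precisely because pixel statistics alone cannot distinguish a face from a picture of one.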
Take cars, for instance. I drive a 6-speed manual transmission; in real life, I'm also a street racer. I love manual because it gives me full control. Right before I hit a corner, I downshift, knowing in advance (based on the objects, not pixels, I saw moments earlier in front of me) that there is a curve ahead and I may need to reduce my speed in 3 seconds, after which I will need full power available to overtake the guy in front of me. With an automatic car, first you must brake, then the system downshifts for you, and then you hope it doesn't upshift at the wrong instant, leaving a window for others to overtake you. Where is the intelligence in that?
I recently saw a Tesla auto-brake before an accident even happened. That was cool. But let's say you're in a situation where the car in front of you is trying to block you so that thieves can break into yours and steal your stuff / murder you / etc. This is not from a Hollywood movie; it's very common in some parts of Asia. If the Tesla brakes at that instant, you're screwed, probably even dead. So you take manual control. To me, none of this is real AI unless the computer can understand the context of what's happening around you and react accordingly.
When I narrated this to a friend, his first reaction was "How can you expect a computer to do this?" I don't care, because I, as a human, can do all this. My friend's way of thinking is our problem: starting out with the limitations of what a computer can or cannot do. Instead, we should be asking, "If I can do this, why can't my computer?"
And until a computer can really do what we can do without assistance, it's all just a bunch of neat, shiny mathematical algorithms packaged inside a program doing X number of tasks it knows to do. Maybe some day it will, and I hope that day isn't very far.
What if someone were to deploy a virus into all Teslas: if you identify a human, classify them as a target and go full speed at them. A simple tweak to the algorithm makes every Tesla a man-hunter.
Creative work will be automated as well - at the very least most production work and all acting will be, since all of the sets and actors will be entirely digital as soon as it becomes feasible. We've already seen the beginnings of that, and software generated music, literature and art already exist. Entire multimedia campaigns centered around a single property could be generated and distributed by AI at once. The movie, the novelization, the soundtrack, the video game, heck even the sockpuppet accounts shilling it online - all of that can be automated.
Creativity isn't magic - any human process can be simulated with sufficient power, and made efficient and automated with sufficient time.
The point of creativity is not about money (though people think that's the case), it's really about expression. Sure you can monetize the product, but that's not what happens when people go out dancing together. It's about community, communication and enjoyment.
Besides, who is to say that people wouldn't actually prefer a hand-crafted rug created by a master craftsperson? Or to watch their daughter play a concert?
We have mass produced rugs in our house now, but if I could afford it, I would have the hand-crafted rug, they're really nice.
I suggest we don't, and that we are already a decent way into our collective capacity for it. This is casually supported by the last fifty years of music: how much more creative can it get that is also significantly different from what's been done before, also widely liked, and also something you can listen to within a lifetime?
If people were an unlimited creativity-sink, I'd expect radios to only play new songs, and not the same old same old popular ones. I'd expect TV stations to have hardly any re-runs, films on disk to be basically unsellable, and people not to have music collections, and nobody to wear the same clothes twice without personally altering/dying/cutting/mixing them up.
Especially in worlds where you need thousands of hours of practice at a thing to be able to compete with others, creativity competes with your available time to consume other people's creativity, while you also need to spend more time 'being creative' to keep your creative skill up with the Joneses. And to try to compete against DeepMind.