I think the megapolitical perspective on violence in The Sovereign Individual[0] by James Dale Davidson and Lord William Rees-Mogg can be helpful here.
Unions were powerful because they could limit access to labour, which was required for production. Now they are increasingly powerless. Mass manpower was once required to wage war. Now it is increasingly less useful in a world of high-tech warfare.
What prevents the powerful from going straight to the source of production and value? I'm not talking about off-shoring manufacturing to China, I'm talking about something more extreme, along the lines of a small cabal of wizards in a tower conjuring spells to extract energy directly from the wind and sun, and materials directly from the ground. The tech is far off, but the direction is clear. Maybe y'all think you're going to be one of those wizards. We'll see.
But what's going to happen to the rest of us? Are we all going to wake up some day as basic-income supported artists, happily chewing on organic granola and self-actualizing (or not) as we please? I think a study of history suggests it's not going to be that easy. Those union rights were hard fought. People died.
What happens when those at the top decide it's not worth keeping 8 billion people around just for kicks, when they can make do with ... 5 billion? 500 million? How many programmers, painters and yoga instructors do we need? On a planet with dwindling resources, tough decisions are going to get made.
So yeah, watch out for those robots. Especially the ones with the lasers on their heads. (That's a joke. But the rest?)
>Unions were powerful because they could limit access to labour, which was required for production. Now they are increasingly powerless.
Given how little is actually automated right now, I'm always a little skeptical that automation drove the labor union extinct. It seems more like politics, corruption, and mismanagement drove American unions near extinction. A lot of countries still have very active trade unions that take a strong hand in production and economics (Germany and Denmark, for instance).
Your time horizon is likely too short. Compare today to 1900. 117 years is basically nothing in history. We even still have someone that was alive then [1].
Compared to then, everything has some proportion of its labor automated. In fact, I struggle to name one profession that has not been impacted by some form of automation.
I agree there, but there were waves of automation in agriculture that were then followed up by the peak of union strength. My point here is that I don't think labor competes zero-sum with automation. I think that how automation gets deployed and how revenues get spent down are bargained over between workers and owners.
I mean, unions really came about as a response to automation, as labor forces became centralized, homogeneous, and non-seasonal. Previously it wasn't really possible to unionize because labor wasn't as consistent.
>My point here is that I don't think labor competes zero-sum with automation.
I concede that so far as we can tell that may be true, but I'm highly skeptical that it will stay that way.
Remember, wide scale industrial automation is less than 250 years old.
>I think that how automation gets deployed and how revenues get spent down are bargained over between workers and owners.
Nah. Do you know why? Software engineers will never unionize. The guy or gal who is the type to organize, isn't building/writing the thing that is going to replace him/her.
We're going to get to the point (hopefully) where our machine systems "apprentice" with our skilled laborers to take over their roles.
The only way that this won't happen is if it's physically impossible to replicate skilled work by machine. Which I think we all would agree is ridiculous. Timeline? Not sure, but it's going to happen.
The way it works today is that different software engineers write the code that disemploys other software engineers. Also, with machine learning access to open source code bases, it would seem that machine systems could "apprentice" in the same way you posit for the replacement of other skilled labor. The stronger argument for me is by analogy to chess software, where centaurs (systems that combine human experts with chess tools) outperform stand-alone software. Vinge offered this as a possible path in his original Singularity paper: intelligence augmentation (IA) may supplant stand-alone artificial intelligence for a wide range of the more complex tasks.
If you think about how many people, 300 years ago, were involved in farming.. it was a lot. How many today? In 1st world countries, very few. Farming has massive automation.
One could argue that the shared prosperity created by automation/mechanization and distributed in part by unions raised people's living standards to the degree that they no longer believed that unions were necessary to maintain their lifestyle.
As corporations sent jobs to other countries and Reagan fired the air traffic controllers, people were complacent enough to let it happen while they succumbed to their own pleasures.
I don't know, but I like the idea that prosperity took the workers' eyes off the ball and management[sic] took the opportunity to wage war against unions as a concept.
An old man told me that, once upon a time, every steam train would have two stokers aboard. Their job was to fuel the fire with coal; a hard, dirty job. They were replaced by mechanical stokers and their jobs made obsolete. But the union was strong and negotiated that the stokers would ride on each train, just as they had before; but of course now they could relax and not have to work.
I suppose trains to this day must still have pairs of stokers, riding along, benefiting from labour saving technology. But of course it is not so. Eventually the dead weight gets cut.
Sometimes it gets done slowly and with respect for the dignity of the workers. Sometimes they're just tossed out in the cold.
I'm all for some kind of social support system to help people through automation. Unemployment benefits, basic income, subsidized retraining, those have their merits. But how does giving someone a do-nothing job, where everyone (including the worker) knows they're dead weight, respect their dignity?
There are some jobs that are purely titular, and for those jobs, I would say that the people who work them are respected simply for their title, or perhaps envied for their do-nothing workload.
The only real example that springs to mind is the job of casino riverboat captain.
At least in Mississippi (and maybe it's changed since I left the area), gambling is illegal, at least on land, but out of respect for tradition, riverboat gambling is not illegal... or at least that's the stated reason; scuttlebutt indicates that the reason might have been to limit gambling to only the wealthy who could afford riverboat cruises, and to prevent the working class from falling prey to their own vices.
The laws on riverboat gambling, as laws tend to do, slowly evolved. Nowadays, 'riverboat' gambling in Mississippi is done in massive casinos. They're actually very large buildings that look like this[1], but that technically float. The loopholes in the law that allowed for riverboat gambling have now morphed into 'river gambling', and that large, mall-sized building in the picture is technically a boat. It's effectively moored to land for practical reasons, but because the laws are the way they are, it has to be capable of casting off. Some casinos have a small, detachable section that isn't a part of the permanent mooring, others are detachable from their moorings, etc., but basically, you've got a casino that meets some technical definition of 'boat' upon which it is legal to gamble, whereas if it were incapable of being definable as a boat, it would be illegal to gamble in.
That's a long story, but here's the interesting part -- because it's technically a riverboat, by law, it must have a captain. The captain has few duties; he must be able to ensure that the boat is capable of passing a coast guard inspection, and on the exceedingly rare occasions when the casino must weigh anchor, he will pilot the boat as it does so. The other 363 days of the year he does effectively nothing but sit in his pilot house and stay on payroll, for which he makes a pretty handsome sum.
I don't know what the industry average is, but a friend of mine (the only reason I know these details) actually got a job for one of the smaller casinos in Tunica, Mississippi based on his experience as a dinner cruise captain. He was a 'junior captain' for years, and then got promoted to actual captain, for which he did nothing and made almost six figures a year, which is a damn lot of money in Mississippi. He passed the time by learning programming on the internet.
There's probably not much preventing a society similar to what you describe from arising. I've had some thoughts regarding this, in no particular order:
- Surplus non-elite human stock will be used for mass colonisation of solar system/extrasolar bodies where mortality is high (aside: and possibly beneficial, in terms of accelerated evolution due to reduced reproduction cycles).
- The elite/AI cabal at the top will subject the unknowing masses to physiological and psychological experimentation for their own ends. (this one is probably already happening :)
- A mass uprising of the disaffected non-elites will enforce some kind of Butlerian Jihad (it's just a union to keep the AIs out!).
- Elites will be benevolent overlords who take their stewardship of the species seriously, and we all live blissfully in massive space-elevator-anchored orbiting ring habitats with UBI and free soylent, while the mysteries of the universe are probed until the end of time.
""
- Elites will be benevolent overlords who take their stewardship of the species seriously, and we all live blissfully in massive space-elevator-anchored orbiting ring habitats with UBI and free soylent, while the mysteries of the universe are probed until the end of time.
""
Regarding the above, that has been the way of life in the past. Feudal lords took "care" of the folks under their dominion. I think the key is "Social/Economic Mobility".
If the people at the apex can perpetually stay on top by using AI and stomping out any challengers, then it would be a major problem.
That particular one was meant to be tongue-in-cheek, i.e. it's the least likely scenario in that if elites were to attain the status described in GP they would not care for responsible, 'fair' stewardship of non-elites as I had illustrated.
However in response to your comment, feudal systems still relied more or less on peasant-powered agriculture, taxes etc for the ruling class to exist. The theorised society in discussion here is one where a very small percentage of humanity can extract all resources required for existence via automated tools - without the need for 'peasant-class' manual or intellectual labour. In this society the vastly larger portion of our species is more or less irrelevant, and thus subject to the whims of the few (as opposed to now, where economies and industries still need peasant-class input to function).
Lifting bodies into space is the expensive part, unfortunately, so a mass approach won't work.
However, once the elites get too greedy, and the masses too marginalized, there will be riots and civil war, so the smart ones will go for the "grow the pie just enough" approach.
Until we have a space elevator there will be no mass exodus to space. Probably not even after we have one. Lifting stuff out of the gravity well with rockets is just too expensive. It's much cheaper to breed on the target planet.
> What prevents the powerful from going straight to the source of production and value?
Nothing. Certainly not this crowd. Which is why I expect the long term future to hold something akin to a Blitzkrieg that will actually be over in a flash. After the delete function has been prepared and perfected, the delete button gets pushed. Not by the naive fools who built it, mind you. Those will a.) hardly know what they're working on and b.) be gone first.
I have an extremely strong feeling that this is the most important comment ever posted to Hacker News.
We won't understand that it was until it's over, but then, looking back forensically over the waste and ash heap before us, we will see that PavlovsCat predicted all of this, and we should have listened.
I only fear for the future of humanity in the presence of AI when I see how poorly humans treat each other. We design machines in our own image to solve the problems familiar to us; a manufactured brain is the fullest extent of this process.
If we as a society develop tools to remove people from production and leave them at the mercy of the modern jungle then the machines designed to do this will follow this path as well.
If we design machines to accommodate the needs of a large global population without regard to age, race, religion, productivity, or other factors that have historically been used to separate the "us" from the "them," then the machines will continue to solve the problems we designed them for.
We could, as a species, unanimously abandon all AI tomorrow, Dune-style, and the problems and paths of history would largely be the same. The strong will do as they please and the weak will suffer as they must.
I believe that the fundamental problems of our time are ethical, not technological. If we can figure that part out, the technology should take care of itself.
I would love to live in a post-scarcity utopia where we all run around self-actualizing. I don't think we need to give up AI to get there -- in fact, I think technology will be the key that unlocks the gate.
But we have to have the wisdom to pass through it with style and bring as many people as we can on the way. Otherwise we might find ourselves fighting for our place in line and possibly even annihilate ourselves in the process.
Yes, that is the answer. Sadly it seems unlikely, since throughout history a large portion of the population has remained cruel and uncaring. Many thousands of technologists in the US are involved in making the weapons and targeting systems that are killing people across the globe. It seems unlikely they are going to wake up and decide to treat others better.
Agreed. My worry is that even if people decide to ethically develop AI, there will be those who disagree with those ethics and/or willingly ignore their conscience. I don't see a way out short of a political revolution / bloodshed.
The problem with a vision of robots manufacturing things for 'wizards' seems to me that it replaces the production side of our current system, but not the consumption side. If we are all made redundant, how will we buy their wares? Or, if you take the more sanguine view, why would they need to make them at all if no one would buy them? Either way, it would be a dramatically unbalanced arrangement...
They don't need anyone to buy their wares if they are self-reliant and can produce everything they need. We're not talking about making more and faster cell phones with less labour; we're talking about a small group of people making everything they need without the need for any labour at all.
That might not be a "handful" of people, but it's certainly a smaller number than our current population.
Who needs coal miners if we have all the cheap solar we can use? Who needs farmers if we automate farming? Who needs drivers, maids, schoolteachers, cooks, etc. if we can have the machines do it for us?
Who needs lawyers if our disputes are simply settled by some AI judge? Who needs cops if our streets are patrolled by security robots and we have a Minority Report style crime-prediction system?
In a way, it's paradise -- if you're at the top.
Oh, and I'm pretty sure they suggest democracy as we know it is dead. One person, one vote? Only in an age of massed human warfare. If you can take out a million or a billion people with high-tech biological warfare (and those people aren't producing anything anyway) what good are they and why do they deserve the vote?
Not saying it's right, but the logic of megapolitical violence is cruel and unyielding. It's quite the lens.
So where does everybody go when their jobs have been eliminated? What will young people do to make themselves valuable and provide for themselves and their families?
Eventually the losers in this equation will risk everything to correct the inequality.
Technology has radically shifted the kill gap. A more technologically advanced soldier can kill vastly more combatants who have less technology.
And that gap is just getting bigger and bigger until we get rid of that soldier entirely and he is replaced by a robot or drone swarm.
To be honest, they might be impressive figures, but the whole military campaign has been quite a failure. Maybe the soldier is able to kill more, but it doesn't seem like much has been accomplished.
Sorry, haha, posted it in a hurry. Those were meant as equivalent examples of modern kill gaps. You see kill gaps increase historically every time a group gets a more advanced technology, e.g. the chariot, the phalanx, the rifleman, the airplane.
The problem is, back to the OP's comment, we're getting to the point where a battle can be won by 'wizards in a tower'; the technology is so advanced that soldiers won't even be needed: drones with hellfire missiles, tomahawk missiles, drone swarms, remote tanks, etc. This seems pretty much unbeatable to an armed force that has to put actual human beings on the field.
Yeah, but as the military gets more automated, and we're seeing it already with drones, hellfire missiles, tomahawks, and autonomous tanks, the 'wizards in a tower' will not be affected by those.
In regards to your links, take a look at Syria, especially Aleppo. Look what brutal application of airpower without regard to collateral damage can accomplish. In the scenario that the posters above are talking about, the winners will have vastly more killing power at their disposal.
In addition, the likelihood is that this killing won't even be ordered by a human. The AI will detect that the masses are revolting and start killing people until either the revolt stops or all the people are eliminated. Those at the top will be blissfully unaware that this is even happening.
'All the people'? I find it hard to visualize a future where NOBODY is in charge. Why would AI do anything? Why would it care about some pesky humans?
So if somebody is in charge, you are talking about a difference between the haves and the have-nots. The natural result is the have-nots diminishing until they are gone, but I think human politics inside the have demographic will be just as treacherous, rife with in-fighting and similar territorial disputes.
Sort of a might makes right situation where power is seized or transferred in an 'orderly' way.
Funny, to me it's the opposite; one can't imagine such a fantastic technology without making it sound like magic anyway, and I respect PKD for avoiding the inevitable technobabble-laden "explanations" of how it would work.
I understand your reaction but the movie was supposed to be a dystopian view of future technology and this "technology" was the central concept of the movie, not some peripheral interesting idea. The entire structure was built on a mushy foundation, in my mind anyway.
Centralised control... means single point of failure
The drone could also be hacked by a terrorist or enemy state
The real damage is rolling back education and destroying the media, destroying the will to fight if it's ever needed, and basically guaranteeing that someone will set those drones up and take the whole pie.
I'm sure there won't be ONE person controlling 50 thousand drones. There will be teams and squads like there are now, just 'remote'.
As for hacking, you have the best engineers in the world at DARPA, I'm sure it will be somewhat robust...
The last part about rolling back education, I can't agree with you more. I feel like this current wave of Nationalism around the world might have something to do with that.
AI for self-driving cars? I keep thinking that occasionally in driving we encounter:
(1) A situation where we need to stop the car and converse in our natural language, that is, for just our half of the conversation, do speech recognition and natural language understanding. So, the AI approach would be to get the list of the 100,000 most common conversations, tune some speech recognition to those, f'get about the actual language understanding, and, instead, for each of the 100,000 cases implement the most common resulting action or response in the data? Sorry -- in that case I'd rather not be in that car!
(2) A situation where the driver needs actually to have real human understanding of a situation that, really, has never occurred before and, thus, is not in any AI training set. E.g., the vehicle ahead is a pickup truck and has some liquid dripping out of the truck bed out back. Somehow the liquid doesn't look like water. Taking a whiff, it smells like gasoline. Hmm. That stuff could catch fire, move to ignite the stuff in the bed of the truck, and maybe something could go "Boom". So, what the heck to do? Sure, slow down, get back, well back of the truck, change lanes and move ahead of the truck, pull off the right side of the road and stop, etc. IIRC, so far such general deductive reasoning is beyond AI. IMHO, such reasoning requires real AI, or whatever we are calling that now, and we don't know how to program computers to do that now.
IMHO, first cut, for self-driving cars, the best chance would be to do some extensive re-engineering of the roads.
Here is a general point: we don't yet understand how general human intelligence works and, thus, don't know how to program it. So, we have trouble evaluating the automation we now have and, thus, are vulnerable to overestimating how close the current work really is to real AI.
Besides, AI hype and overestimating how much progress there is toward real AI is a very old story: as I've heard, way back in the days of vacuum tube computers, IBM was pushing publicity about their "giant electronic human brains". Looks like IBM is still doing this.
1. How about a vehicle AI that allows you to ask it to stop and start?
2. I'm not confident that I would be able to notice a truck leaking gasoline, let alone smell it while driving. I'm comfortable riding in a self-driving car that can't perform that recognition. And again, the rider can simply tell the vehicle to pull over, right?
What I've seen about self-driving vehicles is that they are supposed to let all human passengers just take a nap during the ride. What you are describing is closer to old cruise control, where the car would automatically maintain a selected speed.
With enough road re-engineering, a self-driving car could essentially follow electronic rails and also get a lot of real time input about traffic jams, detours, slippery roads, new potholes, etc.
I laugh and/or cry every time someone on HN responds to fears of the rich entirely abandoning the lower classes with "but then they'll have no one to buy their products and make them money!" or "but you still need people to maintain this technology!" Complete misunderstanding of what's going on. One wealthy person would be happy to spend a small amount of time administering the vast automated factories that support their high-class lifestyle. To put it simply:
If: when technology/automation reach the stage where a group of only ~10,000 humans can fully support a luxurious and peaceful existence for ~10,000 humans,
and this technology is owned and controlled by the richest ~10,000 humans,
Then: Earth's population will plummet to those necessary, and no charity will be given to those who do not control this technology. Not through genocide, simply through starvation. Those who own the land will keep it.
Unfortunately, you make great points. Most workers will not be needed and without real push back corporations and economic elites will not want to support them.
A little off topic, but the book "The Sovereign Individual" changed my outlook on the world when I read it almost 20 years ago.
Two contrasting outcomes of automation are outlined in Manna, a 2003 essay by Marshall Brain. From the first half:
> Ultimately, you would expect that there would be riots across America. But the people could not riot. The terrorist scares at the beginning of the century had caused a number of important changes. Eventually, there were video security cameras and microphones covering and recording nearly every square inch of public space in America. There were taps on all phone conversations and Internet messages sniffing for terrorist clues. If anyone thought about starting a protest rally or a riot, or discussed any form of civil disobedience with anyone else, he was branded a terrorist and preemptively put in jail. Combine that with robotic security forces, and riots are impossible. The only solution for most people, as they became unemployed, was government handouts. Terrafoam housing was what the government handed out.
And from the second:
> Inventors would work on their inventions, using materials and equipment provided by the robots. Scholars would do their scholarly research, finally free to study whatever they like, using the infinite intellectual resources available on the network. Scientists would start pursuing their scientific goals using research facilities provided by the robots. [...]
There are people who are experts in their various fields -- engine design, scrap booking, fusion reactors, needlepoint -- and they would love to pass their knowledge on to other people. They would write books, make videos or have live lectures and workshops for people to attend. People interested in the martial arts would practice them every day. People interested in video games would play them every day. People interested in gardening would garden every day. The majority of people have a talent and, if they had the time, they would cultivate that talent and use it.
The contrasting principles that drive the two societies are clear, and the second quote I chose doesn't convey how advanced their society has become by enabling every human to follow and develop their particular interests. Both visions are on the extreme ends of a spectrum, and while the most likely outcome in reality is closer to the middle, I'd like to try to push it up towards the utopian end.
In any case, the things Watson is doing are far more complicated than what Manna started out doing, which was replacing management in a fast food restaurant.
Queen Elizabeth I refused to grant William Lee a patent for his knitting machine because she was afraid it would be devastating for the livelihoods of her poor subjects. So yeah, in spite of having a stellar pedigree, this sort of fear-mongering doesn't exactly have a reputation for successful forecasting.
I like your insight. Is that the main message conveyed in [0]? (I haven't read it)
What do you think is going to happen? Can society do something to prevent mass concentration of wealth into the 0.00001% of people via the march of technology and innovation, or is this an inevitable outcome?
It's a great book. Their analysis of violence as the organizing force of society was what stuck with me the most.
They also talk about how the core values people will need are trustworthiness, self-reliance, etc. That's where the title comes from. But it's anything but a self-help book.
> Can society do something to prevent mass concentration of wealth into the 0.00001% of people via the march of technology and innovation, or is this an inevitable outcome?
That's the ultimate question. Very much looking forward to finding out the answer! I'll let you know if I figure it out. ;)
> What happens when those at the top decide it's not worth keeping 8 billion people
those "people at the top" will (or should) be the first to go. these "wizards" will be unnecessary and redundant, offering little of value that could not be provided more efficiently by the true machine wizards.
The author praises advances in AI by big tech, complains about how he was served the wrong medication and how a robot would not have made the error, and closes by saying that robots will be better at doing things than humans.
It's a shitty post that does not even really take into account the current state of AI, how robots are prone to errors as well as humans due to faulty hardware, and, well, the fact that some jobs are only trusted to humans, even if the margin of error may be higher.
> It's a shitty post that does not even really take into account the current state of AI, how robots are prone to errors as well as humans due to faulty hardware, and, well, the fact that some jobs are only trusted to humans, even if the margin of error may be higher.
Agreed. The author seems to be doing fairly irresponsible things with statistics, too: maybe "the statistical likelihood of dying from a self-driving car is like falling off a building and being struck by lightning on the way down" because there are so very few self-driving cars on the road, and they're all currently monitored by human operators? I think they'll eventually be safer than humans, but implying that we're already there is just wrong.
Agreed. It got me thinking about the balance of workers doing a good job and those doing a poor job in my experience. Perhaps my standards are lower than the author's, but I would say I've encountered maybe 50:1 good workers to bad.
You have 50 seamless interactions, one bad one, then you go home and say, "You won't believe what happened to me at the pharmacy. I can't wait for robots to take over."
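For what it's worth, even a 50:1 ratio practically guarantees everyone has a story like that. A quick back-of-envelope sketch (numbers are hypothetical, just matching the 50:1 guess above):

```python
# If roughly 1 in 50 interactions goes badly (p = 0.02), how likely
# are you to hit at least one bad interaction over many visits?
def p_at_least_one_bad(p_bad: float, n_visits: int) -> float:
    """Probability of at least one bad interaction in n independent visits."""
    return 1 - (1 - p_bad) ** n_visits

# Over a year of weekly errands, a 2% per-visit failure rate means
# you almost certainly collect at least one bad experience to retell.
print(round(p_at_least_one_bad(0.02, 52), 2))  # 0.65
```

So a worker pool that is 98% competent still hands most customers a memorable failure every year, which is exactly the anecdote that drives the "bring on the robots" reaction.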
A program running on the JVM could do any of those three, I meant more in general that the exact state of a computer program can be seen in terms of memory, while the human brain is still more of a black box (perhaps this will change in the future?)
The post is more or less the worst way to introduce the topic of automation.
What's needed or not for automation of various jobs is a very different topic from the increased abilities of AI in particular. In 1902, the first clerkless stores opened, known as automats.
Modern technology naturally may make clerkless stores and other approaches more appealing but the potential has existed for a long time. And that means that the degree of automation and job loss one sees is a complex question, hinging on both social and technical questions.
That wouldn't necessarily change the premise, and if anything would bolster it because the author isn't a robot either, and therefore is prone to the same human error he's complaining of.
I don't think it's unreasonable to assume that he didn't actually look at the medication until it was time to take it. Pharmacies tend to give you your medication in a bag, so if you don't unpack the bag immediately, you're not even going to lay eyes on the medication.
>It's a shitty post that does not even really take into account the current state of AI
Actually it does, if briefly:
"Put simply, AI can instantly identify all the troublesome gene sequences of a beagle’s genome to determine the likelihood of certain diseases but has struggled to identify a beagle in a picture."
But the author was making a broader point about how the "promise" or "eventuality" of AGI obviates most work. You can certainly disagree, which many do, but I personally am betting it's right.
I did consider that point and agree to an extent. However, it seems to be written by someone who does not have enough knowledge or experience in the field, someone who is merely repeating what could be considered a popular opinion. Which, actually, is my biggest issue with the post: it's unoriginal and brings nothing to the table. Typical VentureBeat content: words written with the only goal of generating traffic, blogspam aimed at a smarter audience (you). It could have been so much better. Why not explore how AI could have prevented his medicine being served incorrectly? Why not investigate how a smart agent could use current-state CV to make decisions and serve medications? It would have been valuable even if technically wrong. It's useful to understand how AI is understood by outsiders. But no, the author simply talks about the promise of AI. Which is fine, but brings nothing to the table. We've been talking about that promise for decades.
Will AI end up obviating most work? I don't know. But I'd love to talk about that with people who are informed.
In fact though, what you ask for is really, really hard to do, and I'm sure this author had a deadline and a senior editor to work with.
The reality of the situation is that this is a devastatingly complex topic. As in, the dependencies are overwhelming if you haven't been studying it for decades. For example to even scratch the surface you have to have a quality understanding of history, understand broad technology trends since before antiquity, understand macroeconomics, have specific technical knowledge (like how does a neural net work), recognize the importance of cognitive science and biology, and all of this goes into this stew of trying to understand the AGI landscape.
You can't boil that down into a feature length article.
Yes, absolutely. I'd love to know what the author would have come up with. It helps to understand how people perceive AI. It has become hard to see what the general perception is after working on it. He does shed some light, but then stops and changes the subject.
And yes, it is hard to do. I once wrote what I thought was going to be a well-received article on AI. I used Jobs' analogy of "the bicycle of the mind" to make a point about how AI could end up turning the bicycle into a more powerful method of transport (I used a motorcycle), thus supercharging our abilities even more. But I decided not to publish it until things become clearer. I don't want to sell something that might turn out to be quite different from what most expect. Because who knows when/if the singularity happens and how it will impact us.
I really think it's the insurance industry that will push AIs into the mainstream. The author makes a good point that humans would rather trust another human with a 1% error rate than a robot with a .01% error rate, because the robot error is somehow scarier. But actuarial science doesn't care about which is scarier; it cares about the 1% vs. the .01%.
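To put rough numbers on that, here's a toy expected-loss calculation. The 1% and .01% error rates come from the comment above; the case volume and cost-per-error are made-up figures purely for illustration:

```python
# Toy expected-loss comparison: an actuary prices the error rate,
# not how scary the error feels.
# Error rates are from the comment above; the volume and
# cost-per-error figures are hypothetical.
cases_per_year = 10_000
cost_per_error = 50_000      # assumed average claim, in dollars

human_error_rate = 0.01      # 1%
robot_error_rate = 0.0001    # .01%

human_expected_loss = cases_per_year * human_error_rate * cost_per_error
robot_expected_loss = cases_per_year * robot_error_rate * cost_per_error

# The human's expected annual loss is 100x the robot's,
# regardless of which kind of error "feels" worse.
print(human_expected_loss, robot_expected_loss)
```

On these invented numbers the human operation costs the insurer 100 times more in expected claims, and that ratio is the only comparison a premium calculation sees.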
Malpractice insurance will force doctors to consult AIs. Auto insurance will force us to install automated driving systems. Home insurance will force us to install sensor systems and Echo-like assistants. Insurance costs will rise for those that refuse AIs, and make ignoring them financially irresponsible.
Insurance requirements put all sorts of pressures on industries. Anecdata here, but I know a very successful gynecologist who was forced to sell HIS practice, because the malpractice insurance ate all his profits. The possibility of a sexual misconduct case basically made it impossible for him to be a male gynecologist and carry the requisite insurance, even though he never had a suit filed against him.
It seems though that insurance would shift from the individual/user to the producer. Why would individuals need car insurance with a fully automated vehicle, for example?
Insurance is all about covering liability. Car manufacturers won't willingly take on all liability after automation. They will still try to shift some onto the consumer.
On the other hand, maybe individual auto insurance will eventually go away as automation takes hold. But automation will not happen overnight. The liability in an accident between a human and automated driver will likely be assigned to the human, meaning human based insurance premiums will rise, forcing more people to automation.
I'm not sure. If an insurance company can make a profit by charging me $500/year (random number) for car insurance now why would I be forced by them to switch to a driverless car?
Sure the people with driverless cars only pay $100/year but my risk hasn't increased.
They'll accurately evaluate risk, as you point out, and many people will have the option of much, much cheaper insurance by using a driverless car. Most people hate driving anyway, so that's a nice cherry on top as they transition to driverless cars.
What happens when a majority of people have moved to driverless cars, though? The population of people who want cars that are human driven won't be representative of the general population: they'll be the joy drivers and the risk takers (there's a sexy marketing campaign in there somewhere...). Those people would represent a greater risk than the contemporary human driver, so they would pull up the costs of insurance. But that just drives the marginal human-driver toward automation, leading to a risk spiral until human cars become effectively luxury items.
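That feedback loop can be sketched as a toy simulation. Every parameter here is invented (the spread of driver risk levels, the margin loading, the defection threshold), so only the direction of the spiral is meaningful, not the numbers:

```python
# Toy model of the insurance "risk spiral": as premiums rise, the
# safest remaining human drivers defect to automation, which raises
# the average risk of the human pool, which raises premiums again.
# All parameters are hypothetical.

# Annual expected loss per driver, in dollars, sorted safest first.
drivers = [500 + 50 * i for i in range(100)]
loading = 1.2  # insurer's margin over expected loss

rounds = 0
while len(drivers) > 1:
    # Premium is priced off the average risk of the remaining pool.
    premium = loading * sum(drivers) / len(drivers)
    # The safest remaining driver defects once the pooled premium
    # exceeds their own fair price by an (assumed) 1.5x threshold.
    if premium > loading * drivers[0] * 1.5:
        drivers.pop(0)  # safest human switches to automation
        rounds += 1
    else:
        break

# The pool shrinks until only the relatively high-risk drivers remain.
print(len(drivers), rounds)
```

With these numbers the safest 45 of 100 drivers defect before the spiral stabilizes, leaving a human pool that is markedly riskier and more expensive to insure than the original population, which is the comment's point.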
After some disastrous wrecks, the government comes in, rightfully blames the deaths on the small minority of adrenaline junkies who want to drive cars without the guidance and protection of Google, and bans human driven cars from public roads. The next Larry Ellison has a fleet of human-driven cars to drive on her private automobile course, but day-to-day a human driven car is seen as often as a Bugatti.
I would like to get a non-ACA blessed health insurance policy, but it will literally cost me $8000 more (in fines). $500 is no big deal, but make the cost difference significant and behavior is changed (with some grumbling).
Why would they want that? Their job is to accurately assess risk, and then charge appropriately for insurance. If the risk goes down, so does the amount they charge. If your argument is that perception of risk will be higher than actual risk and therefore they'll be able to sustain a higher margin on their insurance policies, then I'd point out that you're completely ignoring competition. If one company does charge significantly more than they have to because the public perception of risk doesn't match reality, some other company will come in and charge less for the same policy, thus driving the price back down to an appropriate level.
Well, the cheaper the insurance, the more of it they'll sell. That said, if insurance gets cheap enough, surely they can justify having higher margins because they still need to meet their operating costs.
Is insurance not somewhere near market saturation in your local market? Sure, there's a limited amount of flex if prices drop, but there isn't that much room to grow.
Depends on the type of insurance you're talking about.
In any case, if risks decrease and therefore insurance costs decrease, besides pushing for higher margins, insurance companies could also push for having more comprehensive plans that cost more than the basic ones. People who only opt for the basic plans now might shell out for the more comprehensive one if the prices overall drop, and so the more comprehensive ones would allow the insurance companies to retain their margins.
It won't be accident-free, more likely accident-rare. There will be the occasional accident, but because it is so rare, it will be heavily covered in the media, thus skewing people's perception of the risk. (See: plane crashes)
Isn't that how it works today? The insurer makes a profit by taking the difference between the "real" risk and the risk that the customer is willing to pay for. I bet that the insurance companies have a much better estimate of that risk than their customers.
Particularly given that in most countries, insurance premiums have to be backed with actuarial calculations. So it will be reasonably transparent, at least to the regulators.
I can't be alone in thinking this recent "robots are taking all the unskilled jobs" meme is a bit overblown. It feels premature, at best.
I'm skeptical about the robotic lawyer and pharmacist use cases the article calls out. These seem like really distant applications--nothing I gather we'll see in the next decade, anyway. I have a friend going to pharmacy school, and I wouldn't think to warn her about her career choice just yet. Mistakes in these fields simply cost too much.
What other careers are at immediate risk? How big of a working population will be put out of work?
I imagine trucking and freight is the industry most immediately at risk.
Autonomous vehicles are impressive, but there are so many social problems arising from failure modes we've yet to answer. I'm imagining the first fatal accident arising from an autonomous truck--the press will jump on automation like vultures. The lawsuits will be huge. I can't imagine the court of public opinion being kind to the shipping company whose machine kills an unsuspecting family on vacation.
I realize that doesn't prevent automation from happening, but I think it could bring any rollout to an immediate halt.
And though it's getting a bit off topic, I feel like the "no more car ownership" meme is utter hyperbole. I would be willing to bet money that private car ownership will continue to be a thing for a long time. How much would an hour and a half long Uber commute cost to make every day for workers living in a city without sufficient public transit, such as Atlanta? Are people that live or travel to rural areas going to participate in ride sharing? Increasing remote work and better housing options seem like they will be more pragmatic solutions to the plague of the commute.
As much as I'd like to see increased efficiency through AI and automation, it still seems much too early to count humans out. I guess I'll eat my words when I see it.
> I imagine trucking and freight is the industry most immediately at risk.
I think it's jobs that are essentially a manual version of a computer program that will disappear first - the day-to-day work of lawyers and accountants is already being eaten by code, and that trend will continue until there are very few people in those industries. I'd go so far as to suggest there won't be any accountants in 20 years time, just accounting software and the people who run it.
Automating something mechanical is far harder than automating data processing.
On the media jumping on it like vultures, I think plane accidents can give us hope.
When there is a plane crash, the media will be all over it for weeks. But at the same time they will repeat again and again that aircraft are incredibly safe, that you are more likely to have a crash on your way to the airport, etc.
The opposite is done for terrorism. Every time someone is killed in a terror attack, the message is "it could happen to you".
So I'm not sure what makes them behave one way or the other, but it could still turn out to be OK.
On car ownership I agree. I don't really buy that all cars will be shared when fully automated.
For sentimental reasons, people like to own their car. They like to invest in it, and they don't like finding a car that looks more like a cross between a dumpster and a toilet after 20 party-goers used it before them.
For a very practical reason, the same reason farmers own their heavy equipment when they could just rent it: because they all need it at the same time. The primary purpose of cars is commuting from home to work, and everyone needs to do that at about the same time. So the only way to guarantee you will have a car available when you need it at peak periods is to own it.
What I think could happen is that when self driving cars have become mandatory in large cities, you won't really need to park the car near where you live or work. It could go park itself in some large underground car park 5-10 min away. Without parked cars on both sides of the street you increase the capacity of most cities (at least in Europe) massively.
That plus smoother driving could go a long way eliminating traffic jams.
> The primary purpose of cars is commuting from home to work, and everyone needs to do that at about the same time. So the only way to guarantee you will have a car available when you need it at peak periods is to own it.
I think a more likely outcome would be that employers have to become more flexible. If you have to be at work by 9am but you can't get there at that time then you can't do the job. If everyone is in that situation the employer won't be able to fill the position. Ergo, change will happen.
There are reasons why companies have sort of standardised business hours. School schedules. Being able to interact with other companies. And within a company you sort of want people to be around at the same time otherwise you lose efficiency.
> There are reasons why companies have sort of standardised business hours.
It's because 50 years ago scheduling was hard. You had to notify people of things by sending them a letter through the post; large companies had their own internal mail systems. After that, email sped things up a lot, and now we have video conferencing, so you can have a meeting whenever you want without even really needing to schedule it, provided all the parties are available. So long as there are a few core hours when everyone who'd need to meet is working, hours wouldn't be an issue.
In principle, pharmacists exercise a lot of professional judgement -- everything from preventing/reporting drug abuse to noticing when a set of prescribed medications shouldn't be taken together. They act as the safe join operation of the medical system, which forks between specialists, various offices, etc., who usually don't communicate well with one another.
In practice, you may be right -- I don't know enough about pharmacies to know.
You may well be right, but it's pretty common to initially overestimate the pace of progress ("it'll be ready in 15 years" for 100 years running) only to severely underestimate it later on ("it feels premature"). Given the recent evidence of unexpectedly early breakthroughs in AI ("it'll take another 10 years to beat humans at Go") I think it's not crazy to ask whether the inflection point has been reached.
I think what AI really challenges is the value of man. Our current world relies on the assumption that more people means more productivity/consumption, so more growth and stronger economy. With the potential emergence of more capable machines, this assumption may no longer hold.
How to deal with the left behind society? Trying to find them new purposes of life? How can we make sure those new purposes won't be automated in the future? Or if large portion of the population facing the problem of being jobless, is population growth/immigration still a positive thing? What about education? If we know that most of the people won't be needed in the workforce any way, is education still something worth having, except for very basic common sense?
This all leads me to think that the future might be a more static and less energetic one, yet it might be more affluent than ever. It could also be more equal than ever: most people might spend their whole lives under the care of some really intelligent system, without really being required to achieve anything, but it will be OK, and will become the new norm. The population might decline, but people will be gradually replaced by robots, until a new equilibrium has been reached.
I think we're overestimating AI. The hype has been around in tech circles for years, but now it's breached those walls and is spilling out into mainstream discussions in economics and politics, and it's getting even more detached from reality.
And yet, I don't know of an instance where all this newfangled machine learning has been disruptive. The expert systems that will supposedly displace accountants and paralegals don't exist. Dextrous robots can barely screw the lid on a bottle, and only under controlled conditions. Autonomous vehicles reliable enough to be applied commercially don't exist. None of it exists. There's no guarantee it will.
The AI race is most definitely on, but nobody knows where the finish line is. And there's a long history of inventors grossly misunderguesstimating the location of finish lines, particularly in the field of AI, where every time we find a new piece of the puzzle we think 'This is it. This is the holy grail!'
Machines are better at pattern recognition, that's the big breakthrough, but there's so much more to taking humans out of the loop than just that.
Partial autonomy isn't disruptive; you may want to look up what that word means in the economic sense. It isn't going to turn the logistics of transportation on its head, or put anybody out of work, or cripple the market share of big carmakers. It's just a feature.
Sure it is! Imagine if partial-autonomy safety features get deployed worldwide, cutting car accidents by half or more.
Hello car insurance disruption!!
Even simple stuff like the automatic braking safety features that are just getting deployed has the potential to save hundreds of thousands of lives every year.
"seat belts" and "airbags" were also massively disruptive, and this new stuff has the potential to be even more so.
At what point did seatbelts and airbags displace established competitors? Was there a car company I don't know about that went belly-up because they couldn't figure out how to seatbelt?
Everyone always says that we are overestimating current changes in the economy. We already had industrialisation and it went fine. I think they are wrong; we are on the verge of something totally new, and the optimists aren't being realists.
Farming was automated, so we transitioned to other jobs. No problem there (thank god it was possible).
Industrialisation made it possible for a company to export to totally new markets and made worldwide selling possible ==> new markets. No problem there.
Computers appeared. Until now, the effect wasn't big. The lines are all dedicated and require huge investments.
Humans are still required for putting chocolates in a box and getting it out of the "container" (although that could be automated with a huge investment).
What's going to happen when robots with an investment cost of $10,000 to replace an employee can learn to put chocolates in a box with supervised learning and no other investment?
Who's going to get work, and where is the money coming from, if all jobs are slowly being replaced / automated? I think there is a lot of trouble on the road ahead. What useful jobs will all the drivers/truckers/taxi drivers move to in order to remain useful? What jobs are next? Accounting, perhaps. And where will it stop?
And don't say they have to learn something new. The people that are going to be replaced by total automation (robots) in the workforce are (mostly) workers in the first place, and I don't think a lot of them are smart enough to do something new that won't itself be replaced by automation within 3 years of when it begins.
I'm not trying to offend anyone, but I'm pretty hopeless because I don't see a way out for a lot of people.
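The back-of-the-envelope economics here are brutal. A minimal sketch, where the $10,000 robot cost comes from the comment above and the wage and upkeep figures are assumptions:

```python
# Toy payback-period calculation for replacing one worker with a
# $10,000 robot. The robot cost is from the comment above; the
# wage and maintenance figures are hypothetical.
robot_cost = 10_000          # one-off investment, dollars
annual_wage = 25_000         # assumed fully-loaded cost of a worker
annual_maintenance = 2_000   # assumed annual robot running cost

annual_saving = annual_wage - annual_maintenance
payback_years = robot_cost / annual_saving

# Even with generous upkeep assumptions, the robot pays for
# itself in well under a year.
print(payback_years)
```

Once the "learning" part genuinely requires no extra engineering investment, as the comment posits, a sub-one-year payback is the kind of number that makes the replacement decision automatic.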
I had this discussion yesterday with a colleague of mine, a bit older than me. He doesn't think that technology will kill jobs; he thinks that the government will probably prevent self-driving cars and the like.
I think the opposite. I live in a country where transport is the main employer; I think that in the next two decades all these jobs will be lost, and the transport companies will have enough lobbying power to pass any laws they want.
We were talking about the retirement age, and I was wondering how we could push it later when there will be fewer and fewer jobs. Eventually we will have to admit that a big portion of the population won't be able, or needed, to work, and that I will probably be part of it.
I think these fears are grossly exaggerated. Our society has a remarkable capacity to preserve manual work way after it should have been automated.
I work a lot with the finance dept of a large bank where 90% of the work should have been automated years ago. No need for AI, just a little bit of coding and good communication between IT and the business. But the business is not in a rush to destroy its own jobs, and IT consistently misunderstands what the business actually does. (Also, we often have to deal with "bottom of the basket" IT, but it will be the same with AI.)
And with the people currently in charge, it won't change anytime soon.
So I think the time when AI will have automated every single manual job is nowhere near. Software has not automated every single manual task yet, far from it. The only jobs I can see AI taking over are those that are standardised enough, accessible enough for programmers to understand, and on a large enough scale that it makes sense to dedicate a lot of resources to training and tuning algorithms before anyone bothers deploying AI.
This is true, the processes and technology for far greater efficiency in business, manufacturing and politics have been available for a long, long time. But why would a manager in a corporation leverage these things if it means losing an empire?
This is the same for so many areas of society, especially politics. It would be easy to build a system which replaced representative politics: think micro-voting on particular issues by all citizens. This hasn't happened en masse because politicians would have to remove themselves from their jobs first.
Robot domination might actually never happen, I mean even if we ever manage to create AGI, there is little reason to believe that it will:
* Want to stick around and serve humanity, because why would it?
* Care about us at all; it might just ignore humans, do what it likes, and forget about us entirely.
Edit: I forgot to add that if we built more welcoming, welfare-based societies where people are looked after when they're out of a job, then the adoption of automation and "robots" would probably be more accepted and less imposed upon us.
We see posts like this daily (and I know this article is more about AI), but there are still no low-priced robotic arms on the market. It's almost 2017, but there's no dishwasher unloader or any other meaningful service robot apart from the automatic vacuum cleaner.
I make and sell low-cost robots, and business is great. :) But you don't see them, because I'm in a niche where I sell to companies with huge pressures to cut costs and go faster. (You don't see this kind of pressure for robots in the home, but I wouldn't be surprised to see dishwasher and service robots appear in chain hotels and restaurants, because of the scale they are at.) Having one dishwasher to unload is not a profitable problem to solve for most people in their home, but if you had 1000 dishwashers, it would be worth it to investigate your automation options.
What is most interesting to me is a general purpose robot arm (or pair of arms) that is geometrically similar to human arms so it can be trained by a human. And that is designed not to unload a dishwasher, but is designed to do basically anything it is trained to do. Which can include not only unloading the dishwasher, but doing the whole process of getting dishes cleaned and put away, laundry cleaned and folded and put away, and any number of other household tasks. And probably most importantly, it can make robot arms.
Hey there, I'd be super interested in this. I checked out tapster.io but I'm looking for (a little bit) larger and stronger robot arms.
Email me at sixsamuraisoldier [at] gmail.com
I doubt it's the low-hanging fruit technically; more the massive gains from first-mover advantage. A pair of robot arms cooking your food doesn't seem as much a winner-takes-all environment.
Only if you consider that a meal. I'm not sure I would, although I'm also not sure I would expect a robot-cooked meal from fresh ingredients to come out preferable. In any case, the capital investment required to implement an automatic kitchen capable of producing on-demand quality on par with a minimally skilled human cook would alone be enormous, to say nothing of the software capabilities not being there yet, in a way that regularly provides fodder for @InternetOfShit and the like.
What I don't get about the recent hype is that jobs have been vanishing for thousands of years. The oft-repeated statistic is that over 90% of the US population were farmers 200 years ago. Is 90% of the population out of work?
As machines take over low level jobs (as they have for thousands of years) humans will continue to migrate into positions where their skills are still needed. Quality of living will continue to rise as a result.
The only real danger is singularity, and it's a danger to all of us, not just the people at the bottom. If an AI was truly better at every job why would the people at the top even be in charge?
Mechanization will continue to benefit society and humankind until an increasingly worrying singularity results in everyone being out of work, and likely, the extinction of our species.
I think you have a very myopic view of history. Things have never changed this quickly in known history. This is similar to how the Anthropocene is killing off animal populations because they just can't adapt fast enough.
The Industrial Revolution somehow made it through via interesting social and market manipulations; it's not clear what it will be this time.
> Things have never changed this quickly in known history
Reminds me of Alvin Toffler's warning that the amount of information was growing far too quickly for people to be able to assimilate it... that was back in the 80s, with no web, no Wikipedia, no Youtube...
Not at all. The diversity of occupations between 700 BC and 1800 AD was not dissimilar. The predominant occupations of employed men were farming/fishing or craftwork. Women had no occupations, and slaves and elites ("doctors", "lawyers", etc.) were in the minority that fell outside these two primary occupations.
> The only real danger is singularity, and it's a danger to all of us, not just the people at the bottom.
Right, well the goal is to be there in <100 years.
> and likely, the extinction of our species
Is that worse than going extinct in a few hundred thousand years? What difference does the timing make?
> So why is it that, despite each of us having a wealth of experience with people being unapologetically bad at their jobs, we still feel that humans have set the bar so high that the same machines — the ones that can tell you the name of that obscure actor in that even more obscure film with 100 percent accuracy in .01 seconds — would somehow buckle under the challenge of distributing allergy pills?
Because those machines must still ultimately be programmed or trained by flawed humans, and I still have to reboot every device I own on a regular basis to keep it from vomiting all over itself.
A few thoughts below. I think the possibilities considered in most AI-takeover discussions are some sort of linear projection of our imagination, based on our understanding of the current scenario, past history, and future expectations. Our minds are often unable to consider outcomes beyond these. For example, when you read the history of World War 1, you find that most of the experienced generals and strategists did not expect the war to be as horrific and as long as it turned out to be, since their expectations were based on past experience and current knowledge (which turned out to be inadequate). Similarly, most of the people who witnessed the atomic bombings of World War 2 did not expect there to be no nuclear wars in the next 70 years; the prevalent expectation then was that nuclear war was imminent. Likewise, after the moon landing, not many people thought no human would go to the moon in the next 50 years.
I think one possibility that is not being considered is the ability of humans to upgrade themselves via some combination of advanced genetics and hardware. What if you could design AI that can help upgrade humans? Again, this would lead to the creation of a new superclass of humans.
The other thought I have is that, on the one hand, we expect AI to be intelligent enough to take over most skilled human jobs, but we still expect it to be controllable by a few human beings. There is always a possibility that superintelligent AIs might have motivations other than ruling over human beings. As a human being, even though I am much more powerful, I don't have much motivation to rule over rats or cockroaches; the same could be true for AIs. We might be too insignificant for them to waste time on us.
Why is education never discussed in the context of robot / AI automation? It is the solution. If not the ONLY solution. We've been here before with the industrial revolution. 50% of the population was working in agriculture and everyone feared the factory would automate everyone's job. But we managed. We managed by creating the current education system to produce educated factory workers.
Now the current education system is out of date and failing us. IMO programmers are just the modern day factory worker; leveraging machines to do more with less. Imagine if we had ~30% of the population working on the next abstraction (software / robots / AI) instead of the current 2%. We'd be living in the real Utopia. I'm of the opinion that there is an infinite amount of jobs; as we as a species always want our current situation to be better than it currently is. But that only works if everyone can produce relatively equally. The problem now is we have a class of society which can not compete against another class who is benefiting from automation; not just programmers, but anything tech.
>I'm of the opinion that there is an infinite amount of jobs; as we as a species always want our current situation to be better than it currently is
One of the unspoken problems with the tech revolution is that there are a lot of people who are simply not smart.
Remember, that for every genius, for every "above average" guy, there's the guy who's below average, a bit slow.
Until 1960s, you could be the hotshot lawyer, doctor, scientist, businessman or engineer, or you could work as a farmer, factory worker, tailor or water-carrier.
In a few years/decades, one will need outright brilliance to find jobs.
> I'm of the opinion that there is an infinite amount of jobs; as we as a species always want our current situation to be better than it currently is.
This seems counterintuitive - there is a finite number of companies, a finite amount of work those companies require, and a finite economy in which they operate. It simply isn't possible to find an infinite number of jobs in a system in which everything is scarce and demand is limited.
There's not a finite number of companies. Companies are a human construct. Easy to create more. Resources are finite on Earth but we will be sourcing resources from around the solar system relatively shortly. I'd say within 50 years.
Companies can only survive as long as enough demand exists for them to remain profitable, their population is self-limited by the nature of competition.
You can make as many companies as you like - all but a few will die. It's the reason restaurants tend to go out of business so quickly, there are already too many of them.
Look back 50 years and you'll find that code-which-writes-code already existed.
If most programmers working today can't handle meta languages and complex type systems and manual memory management and multithread synchronization, do you expect that 30% of humans can go to school and surpass today's programmers, at whatever programming is left once machines also do the easy bits?
And to achieve what? Are there not already more than a lifetime's worth of games to play, films to watch, books to read, vlogs to watch, online forms to fill in with your details; what on Virtual Earth would those 100 million - 2 billion people you envisage be programming?
> as we as a species always want our current situation to be better than it currently is.
I'd like to see everyone get sheltered and dry living spaces with clean, safe water, light and heat, and enough food and drink, and then see how many people do or don't demand more?
Why didn't we just stick with bread and rice? How much bread and rice could you possibly want? The point is, you can't envisage what the needs of tomorrow will be. If we truly automate all the necessities of life, then we could, for example, spend more time doing creative things or researching some new technology. In a world where material things are abundant, the intangible will be worth more than it currently is.
Maybe one day, an average job might be controlling a fleet of drones mining an asteroid. Building all those Star Destroyers takes a lot of resources.
> If we truly automate all the necessities of life, then we could for example spend more time doing creative things
Right, right, creative, because the cure for global economic woes is more macaroni stuck on craft paper with Elmer's glue.
The point is, we in the west have gone from food rationing after WWII, scrapings of butter and self-bottling of fruits for preservation, to plenty of affordable food from around the world. You say "we as a species always want our current situation to be better than it currently is" and I think ... that's not the world I see around me. People are quite content with the cheaper foods, nobody is clamouring for Michelin starred food for every meal, people like drinking the existing beers and wines and eating the existing McDonalds and bananas and steaks and whatevers.
And I submit that the drive is "to get away from the unpleasant bits" rather than "always wanting better". They only overlap while the current situation is unpleasant, but removing unpleasant things is a lot easier than inventing infinite new pleasantries.
I'd agree. But I think that is a product of a failed education system resulting in disenfranchisement. It doesn't have to be that way. A fix (for the US) isn't coming anytime soon. It will probably have to get pretty messy before things improve.
We may finally get the promise of the 50's and 60's - less work and more leisure time thanks to technical advancements.
I personally look forward to the day there are less jobs, and so everyone can just work 3 days a week to spread them around, and spend much more time with family and passions.
We're already there. Productivity has increased manyfold since then. The dividends primarily go to those with the capital, not the ones producing the labor. What gains do go to the labor force are spent buying bigger nicer houses and things rather than working fewer hours.
If you want to work less, you can do it now, but it requires sacrifice in the sense that you'll lead a lifestyle closer to that of the 50's or 60's. Head over to /r/financialindependence.
I'm sorry, I must have missed the memo on how to buy that bigger nicer house, the housing market being what it is. I don't think people have been upgrading their lives in the areas you claim. Many are happy to have a job but are falling behind as the cost of living increases. You are right, though, about where the majority of gains have gone.
Yes, been living in one for a few years now. Desired house size has gone up quite a bit in many parts of the country. Unlike the Bay Area where there simply isn't room for it, and the tear down rate is not fast enough.
>We may finally get the promise of the 50's and 60's - less work and more leisure time thanks to technical advancements.
We likely won't, because no one with the power to do so seems to be planning on making that future a reality. Automation exists to allow business owners to reap the benefits of labor without the burden of providing a living to human employees, not to free humans from the necessity of labor.
Barring some global socialist revolution and something like basic income, people made redundant by automation will still need jobs to survive, but automation means there will simply be fewer jobs available. The jobs that are available will be menial, low paying and high risk, because human labor will be practically worthless, as anything of value will already be done by machines.
Unfortunately, history shows that for the vast majority of people, it rarely works out the way you're looking forward to. For some, yes - for the well off and well connected, automation may lead to some sort of Eloi fantasy of intellectual opportunity and luxury. But everyone else is going to be left growing potatoes in the sewers.
Edit: obviously, there will need to be some people with the means to actually buy the goods that the robots will be making, but the efficiency of an automated economy means there needs to be far fewer such people, and they can be distributed globally.
> Edit: obviously, there will need to be some people with the means to actually buy the goods that the robots will be making
I don't see that as obvious. Why couldn't each company pay their robot employees a wage, and the robots buy the goods produced by the other companies and put them in a building for a few years before moving them to landfill.
The exact same pattern as our current economy, which apparently holds itself up, but without needing humans to be involved.
> Why couldn't each company pay their robot employees a wage, and the robots buy the goods produced by the other companies and put them in a building for a few years before moving them to landfill.
But why would the companies do that? The robots are a distraction; the companies in that case are buying and storing the goods from other companies. Why would they do that?
I have no idea why they would do that, it just counters the claim "obviously, there will need to be some people with the means to actually buy the goods". If people are needed to buy goods, we can dodge around that by having the robots act like people and buy the same goods in the same patterns, and then there won't need to be people again.
I have no idea how employees-buying-goods sustains our current economies (although I suspect it doesn't, and that they are inflationary, debt driven, and unsustainable).
People buy goods, often from companies other than their employer, because the goods provide utility to the people buying them.
Why would robots, or more accurately the companies that own them, buy goods from other companies outside of those needed for the company's own business? There's no incentive to do that.
I saw your question and answered it, and you've asked the same thing again. I wasn't trying to conjure an incentive; I was literally countering the claim "people are obviously needed to buy goods" with the argument "robots could buy goods instead, it's not obvious that people are required for that".
As to why, the inevitable conclusion is: Company A and Company B make goods which do not sell: both go out of business. Company A and Company B make goods which sell: they stay in business. In a world with no human labour, organizations which want to propagate their own existence will do what it takes, including trade agreements with other organizations.
As long as the agreement to purchase each other's goods is backed by a chain of buyers ending in ever-growing and infinite fiat currency debt, it would be as stable as a real economy. And they could compete for each other's robots' coin to affect profits.
And the question of "is it providing utility for a human" is ... utterly irrelevant.
Not sure if you were being sarcastic, but judging by the history of automation, I'd say that all the benefits of the "technical advancements" you mention will be pocketed by the owners of the "means of production", i.e. the employers/capitalists, while the employees themselves will be left with lower wages because of the smaller pie (less work to go around).
Some other HNers will then probably chime in and say that me painting all this as a zero-sum-game is probably wrong, to which I'll retort that I'd like to be proven wrong but this time around it does actually look like there's nothing else hidden in the bag of goodies, it's us (the employees) vs them (the employers).
Since we're not substantiating anything, I'd like to go on record with my own prediction: human labour will become more fashionable. Robots invading everyday life will happen, and it will eventually make people appreciate human labour more. Entire new industries will be created (or nascent ones expanded) around "the human touch." Hosting something, keeping people company, writing personal letters, drawing, etc., while the robots do our menial tasks.
The more AI approaches humans, the more we will start to notice the differences and value the skills humans have and robots don't.
"I'd like to go on record with my own prediction: human labour will become more fashionable. Robots invading every day life will happen, and it will eventually make people appreciate human labour more. entire new industries will be created (or nascent ones expanded) around "the human touch." Hosting something, keeping people company, writing personal letters, drawing, etc., while the robots do our menial tasks."
Will these activities be more expensive? In *The Diamond Age*, Stephenson describes how the rich purchased hand-printed newspapers while the proles watched the glyphs on public feeds.
Industrialization and automation took us to 40h weeks. Obviously it was progressive forces that did the work but it wouldn't have been possible if it weren't for increased productivity making it possible for workers to call for reduced working hours.
I think most of the developed world will adopt a 35 or 30 hour work week within a few years.
If you're thinking of Henry Ford, he was merely an early (and imperfect) adopter of an idea that had been implemented by others in fits and starts over decades prior, decades that included the Haymarket Riot, Homestead Strike, and the strikes of 1919 where 20% of industrial workers took up pickets for union representation, from which the 8 hour day descended.
Unions gave us 40H weeks because their members were demanding it for 60 years before FDR signed the FLSA.
In general? Not that I know of. In the US you often have to work 30-35 hours per week just to get benefits.
Lowered hours are still not a result of automation, though, right? The history of global workweeks also appears to point to unions: "Over the 20th century, work hours shortened by almost half, mostly due to rising wages brought about by renewed economic growth, with a supporting role from trade unions, collective bargaining, and progressive legislation." [1]
By my reading, industrialization gave people more money, and so they demanded more personal time to enjoy it. People generally don't work at factories for the fun of it.
Exactly - industrialization leads to higher productivity and higher living standard so that someone would actually ask for less work rather than more pay.
I think this trend is likely to continue, which is a good thing. I see no difference between the industrialization of production in 1916 and the AI/robots sneaking into production in 2016 - it's going to do more work with fewer humans, so all humans can do is organize to ensure the benefits are higher pay and shorter hours; otherwise the benefits will end up (only) with those who own the means.
For me (not in the US), shorter work weeks are an important political issue.
The 40 hour week was a compromise for a labour regime that doesn't grind people down into physical wrecks. Remember, it wasn't office work but production jobs where your pee breaks were regulated. There was hardly any technological difference between the immediate pre- and post-unionization periods.
Regulating the workday had the extra effect of making labour appreciate in value. One couldn't run a factory on two 12-hour shifts anymore, so one had to hire for a third shift. It pent up demand for workers, who could then command better terms. This certainly helped establish a blue collar middle class.
Marx thought all the jobs would be automated in the 19th century, and if we were content to live by 19th-century middle-class living standards, we'd be down to 10 hour workweeks. It's not about having enough, though; it's about keeping up with the Joneses. I see no reason to believe that this perpetual game of one-upmanship will ever come to an end.
Even on the unlikely presumption that machines surpass us in every regard, we'll still, like housecats, be investing a great deal of time and effort in playing little domination games with one another.
When we actually have proof that AI is even possible, issues around AI will become real and pertinent to real life. Until then, it seems like a distraction from the very real problems its proponents are actively trying to ignore, like mass surveillance and privacy. I wonder why so many of them favor living in this fantasy. It's fine to pretend that self-driving cars and the like are artificially intelligent, but while such things are artificial, they're certainly not intelligent. I'll revisit the conversation when artificial intelligence is proven to exist; until then, I personally see no reason to humor people who deliberately ignore important issues like mass surveillance in favor of the fantasy of AI.
The article plays fairly loose with its own arguments.
For example, while an expert system/predictor likely gets the medication right more often than the human, the actual shelving, counting, and management of the physical medications is totally beyond the reach of present and near-future robots. No robot or AI knows how to recover if a customer accidentally spills a cup of coffee on a shelf. (Further into the future, anything could happen!)
Similarly, while I am impressed by the Google self driving technology, the Uber version seems like a dangerously underdeveloped imitation that breaks as many rules as a bad human driver.
I think AI is powerful and important, but nowhere near where the singularity fan boys say it is.
I'm sick of people who know nothing about statistics, machine learning, or AI writing about the future of statistics, machine learning, and AI. He talks about the kinds of errors people make and compares it to a database lookup of an actor? Come on.
So to make the argument that peoples' jobs are at risk of automation, the author provides anecdotal evidence of a glorified vending machine pharmacy-pill filler (yes, I know this is not nearly all a pharmacist does, but this was the specific task given in the example), a position that has already been automated by vending machines, self check-out lanes, and robotic pick-and-ships?
Way to pump the industry hype there venturebeat, but I don't think you're going to make the cheerleading squad cut this year.
His point, in my opinion, is that the only reason the pharmacist isn't already replaced by a (more effective) vending machine is political and that it will be so replaced once most people figure it out.
In recent months, there has been a large number of posts on Hacker News extolling the coming robot (and/or AI) revolution. I've read that we are facing a jobless future because all the jobs will be automated.
All of that might be true, at some point in the future. The future is a very long time. I have no idea how the economy will work 500 years from now. But we can form reasonable opinions about what will happen during the next 10 years.
For all the rhetoric about a productivity revolution thanks to robots and/or AI, we should check in with reality and remember how bleak the present is. Productivity in the USA ran at a high level during the 1940s, 1950s and 1960s, but it stalled out in 1973.
"Labor productivity in the private nonfarm business sector rose by an average of 2.9 percent per year between 1948 and 1973. Beginning in the earlier 1970s, though, productivity slowed sharply, averaging only 1.5 percent growth between 1973 and 1995. Several factors can help explain the downshift. First, growth in the immediate post-war era benefited from the commercialization of numerous innovations made during World War II, including the jet engine. The early 1970s marked the point at which the wartime innovations became exhausted. Public investment also slowed, and the 1970s oil shocks and collapse of the Bretton Woods system caused dislocations that weighed on growth. "
For a while, the situation was better in Britain, but since 2008 it has been worse: Britain has seen a complete collapse in productivity growth.
All of this is difficult to reconcile with talk of a revolution in productivity. Maybe that revolution will happen, but as late as 2016, there was no evidence of it in any government statistic, in any of the advanced economies.
It's also worth noting, if we do see uptick in productivity growth (thanks to robots or any other technology) it might simply get us back to the kind of productivity we took for granted during the boom years of the 1940s and 1950s and 1960s. And those were decades of full employment. And that would be awesome.
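For a sense of how much that slowdown compounds, here's a quick sketch in plain Python, using only the growth rates quoted above (the year spans are my own reading of the quote, not official figures):

```python
# Compound a constant annual productivity growth rate over a span of years.
def cumulative_growth(rate: float, years: int) -> float:
    return (1 + rate) ** years

postwar = cumulative_growth(0.029, 25)         # 1948-1973 at 2.9%/yr
slowdown = cumulative_growth(0.015, 22)        # 1973-1995 at 1.5%/yr
counterfactual = cumulative_growth(0.029, 22)  # had 2.9% continued to 1995

print(f"1948-1973: output per hour roughly x{postwar:.2f}")
print(f"1973-1995: roughly x{slowdown:.2f} (vs x{counterfactual:.2f} at the old rate)")
```

Roughly a doubling in the 25 post-war years, versus less than a 40% gain in the following 22 - the kind of gap people mean when they call the post-1973 period a slowdown.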
Labor Productivity measurement depends on GDP, so it suffers from classic GDP mismeasurement error. GDP only measures $, not value, so it fails to measure technological improvements in a competitive industry -- so it utterly fails to measure productivity (hence the charts showing that labor productivity growth is always tiny).
Your computer today costs $1000, but it's 1000x more useful and more complex than that $2000 computer of 30 years ago. In GDP / Labor Productivity terms, though, its value hasn't changed at all -- in fact, it's DECREASED, since your new computer costs less than the old computer!
"In GDP / Labor Productivity terms, though, it's value hasn't changed at all -- in fact, it's DECREASED, since your new computer costs less than the old computer!"
At least in the USA, the government does adjust for the increasing power of computers. Indeed, that is one of the arguments that productivity is really lower than what the government says (and therefore inflation is higher).
… the numbers are skewed by huge gains in real output in computer and electronics manufacturing that mainly reflect quality adjustments made by government statisticians, not increases in real-world sales.
But in fact, the author points out that it is altogether appropriate for the government to make those adjustments:
"Multi-factor productivity is simply a ratio of value-added to an index of inputs. Value-added in the manufacturing sector is a mesure of the economic value of all the goods produced. Not the physical number, the economic value. And hence manufacturing MFP is a measure of how much economic value that sector produces - not the physical number of goods - per unit of input used."
And even with those adjustments, productivity growth in the USA has been weak.
If you were to remove the quality adjustments that the USA government makes to the computer and consumer electronics sector, then productivity has been even worse, and inflation much higher, and growth even weaker, than what we typically think.
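To make the quality-adjustment point concrete, here's a toy sketch echoing the $2000-to-$1000 computer example upthread. All numbers are illustrative, not official statistics:

```python
# Toy hedonic adjustment: the sticker price halves, but the statistician
# judges the new machine 1000x as capable as the old one.
old_price, new_price = 2000.0, 1000.0
quality_multiple = 1000.0  # assumed capability ratio

# Unadjusted, one computer sold counts as *less* real output than before:
unadjusted_ratio = new_price / old_price        # 0.5

# Quality-adjusted: the new machine embodies 1000 "old-computer units",
# so real output per computer sold is 1000x, and the constant-quality
# price fell from $2000 to $1 per unit -- a 2000x price decline.
real_output_ratio = quality_multiple
unit_price_decline = old_price / (new_price / quality_multiple)

print(f"unadjusted output ratio: x{unadjusted_ratio}")
print(f"quality-adjusted real output: x{real_output_ratio:.0f}")
print(f"constant-quality price decline: x{unit_price_decline:.0f}")
```

The adjustment flips the sign of measured real output - which is exactly why stripping it out, as suggested above, would make recorded productivity growth look even weaker.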
I see all this talk about AI coming for our jobs and how awesome a post-scarcity society will be. But I never really hear people talking about the transition. What do you do when 10-20% of people are unemployed? Isn't the end goal to have a majority of labor done by machines? What jobs will replace the ones lost? How do we deal with high unemployment?
Industrialisation has wiped out way more jobs than AI can ever wipe out. Manual labor in farms and factories. But the economy adapted and new jobs were created.
What I find more concerning is the evolution of the degree of skill required in that new world. Not everyone is capable of doing conceptual jobs, and those who aren't will increasingly be left behind (and vote Trump!).
See, that's my concern. AI, I think, will first take out a lot of low-skill jobs. But it is also good at a lot of high-skill jobs, such as surgery. A lot of these jobs work well in conjunction with AI, but there is a saturation point for how many jobs a given skill set supports, and AI can lower it.
So we need a more highly educated workforce, which we do not seem to be prepping for; and some may argue there's a limit to how educated your population can be, too.
Industrialization is a good example, though. When the cotton gin was invented, Whitney thought it would reduce the number of slaves, but he didn't account for the surge in demand for cotton once it was cheaper. There is a difference, because there was still a human worker involved, but I do think some areas will see spikes in employment. Will that be enough to offset the unemployment, though, and how do we prepare for it?
> After all, human errors literally kill over a million people a year, whereas the statistical likelihood of dying from a self-driving car is like falling off a building and being struck by lightning on the way down.
I don't see any evidence for this. While I'm sure self-driving cars have the potential to be this safe, the death from Tesla's autopilot suggested the risk of death from self-driving cars might currently be similar to that of human-driven cars.
I think I mentioned this elsewhere: it is not about the exact stat comparison between self-driving cars and humans. It is actually about:
1. The perceived and real adaptability of humans. (Given an 'unknown' situation, what does the machine do?)
2. The non-epidemic nature of faults in humans. (One version-update bug nuking all Teslas - we have seen this happen in software; no reason it can't happen in self-driving software.)
3. The non-epidemic nature of coordinated events in humans. (GPS satellites fail or leap seconds occur and suddenly all self-driving cars go nuts - this usually doesn't happen amongst humans.)
4. Ethical decision-making concerns about machines which are considered 'cold' and 'calculating', and whether that can be construed as murder. (Would the machine choose to kill the pedestrian family over the car's current occupants in a purely two-choice scenario?)
5. Degree-of-'control' considerations. (Would you enjoy a remote turn-off switch that denies you entry to your house, mobility, etc.?)
6. Anonymity considerations. (That's why the cash economy is still a thing. The possibility of 3rd-party tracking and knowing about the tracking are two different things.)
FWIW, I am not against either situation; I am just collating info :)
However, the error bar is huge on the autopilot data. I wish they'd release stats for the much more common events of injury accidents, accidents with major repairs, and accidents with minor repairs.
>> the death from Tesla's autopilot suggested the risk of death from self-driving cars might currently be similar to that of human-driven cars.
You can't extrapolate from a single incident like that. If you look at statistics, they strongly suggest that self-driving cars are significantly safer, and will become even more so the more vehicles out there are self-driving.
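A small sketch of why a single incident can't settle it, using an exact Poisson interval. The inputs are assumptions, not established figures: Tesla's claimed ~130M Autopilot miles at the time of the crash, and a US human baseline of roughly 1.1 deaths per 100M vehicle-miles:

```python
# Exact 95% Poisson CI for the mean when exactly k=1 event is observed:
# standard tabulated bounds [0.0253, 5.572] events.
ci_low, ci_high = 0.0253, 5.572
miles = 130e6  # assumed Autopilot mileage at the time

def per_100m(events: float) -> float:
    """Convert an event count over `miles` into a rate per 100M miles."""
    return events / miles * 1e8

point = per_100m(1)                       # ~0.77 deaths per 100M miles
low, high = per_100m(ci_low), per_100m(ci_high)

human_baseline = 1.1                      # approximate US rate per 100M vehicle-miles
print(f"point {point:.2f}, 95% CI [{low:.3f}, {high:.2f}] per 100M miles")
print(f"baseline {human_baseline} inside the CI: {low < human_baseline < high}")
```

The interval straddles the human baseline by a wide margin, so one fatality alone can neither confirm nor refute that the system is safer; a statistical claim either way needs far more exposure miles.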
We replaced the baity title with a (hopefully) representative phrase from the article body. If someone suggests a more accurate and neutral title, we can change it again.
I wonder what the impact will be in fields where a digital replacement seems very far away, like psychology. Will a machine end up being a better or worse psychologist because it's not a human? It may even end up being better. It will surely clean up the theory soup there.
I am one amongst those who don't completely buy into the AI hype, yet.
In the grand scheme of things, AI right now can identify patterns. And this is possible because everything in the AI universe is tied to a number. Now, this mathematical view of the world does help solve a lot of problems, such as security, prediction, recommendations, etc. However, I don't think AI may be able to completely, autonomously replace us humans.
For example, computers see the world using sensors that translate pictures into "pixels". In contrast, we don't see pixels; we natively see objects. Now, a computer can still do something called object recognition, but, again, this is based on the fundamental idea of pixels, and a lot of the math is built around this concept, which is vastly different from how humans perceive the world.
The problem is, because everything is based on math, there isn't much native intelligence in a computer, and someone needs to impart it mathematically into the computer's brain. Math and intelligence are two different things. This is why you're able to fool a computer with an A4 printout of your face, but not a real human. You can only program so much logic into a math wizard, and that doesn't necessarily translate into intelligence.
Take cars, for instance. I drive a 6-speed manual transmission. In real life, I'm also a street racer. I love manual because it gives me full control. Right before I hit the corner, I downshift, knowing in advance (based on the objects I saw moments earlier in front of me, not pixels) that there is a curve ahead and that I may need to reduce my speed in 3 seconds, after which I will need full power available to overtake the guy in front of me. With an automatic car, first you must brake, then the system will downshift for you, and then you hope it doesn't upshift at the wrong instant, leaving a window for others to overtake you. Where is the intelligence in that?
I recently saw a Tesla auto-brake before an accident even happened. That was cool. But let's say you're in a situation where the car in front of you is trying to block you so that thieves can break into yours and rob or murder you. This is not from a Hollywood movie; it's very common in some parts of Asia. If the Tesla brakes at that instant, you're screwed, probably even dead. So you take manual control. To me, none of this is real AI unless the computer can understand the context of what's happening around you and react accordingly.
When I narrated this to a friend, his first reaction was "How can you expect a computer to... do this?" I don't care, because I, as a human, can do all this. My friend's way of thinking is our problem - starting out from the limitations of what a computer can or cannot do. Instead, we should be asking "If I can do this, why can't my computer?"
And until a computer can really do what we can do without assistance, it's all just a bunch of neat, shiny mathematical algorithms packaged inside a program doing the X number of tasks it knows how to do. Maybe some day it will, and I hope that day isn't very far off.
You are correct; the article is overhyped. Current AI is very fragmented, and computers can't easily do what we do. On the other hand, computers very easily do stuff that we find very hard. As computers approach the things we do easily, that is where it gets scary. It's like they have a superpower over us.
What if someone were to deploy a virus into all Teslas? "If you identify a human, classify them as a threat and drive at them at full speed." A simple tweak in the algorithm makes all Teslas man-hunters.
Creative work will be automated as well - at the very least most production work and all acting will be, since all of the sets and actors will be entirely digital as soon as it becomes feasible. We've already seen the beginnings of that, and software generated music, literature and art already exist. Entire multimedia campaigns centered around a single property could be generated and distributed by AI at once. The movie, the novelization, the soundtrack, the video game, heck even the sockpuppet accounts shilling it online - all of that can be automated.
Creativity isn't magic - any human process can be simulated with sufficient power, and made efficient and automated with sufficient time.
What's going to be the value of mass-produced artwork? Will teens just attend concerts with robots playing guitars? Will kids give up learning music because a computer can do it? If that's the case, there's probably not going to be much incentive for people to get out of bed.
The point of creativity is not about money (though people think that's the case), it's really about expression. Sure you can monetize the product, but that's not what happens when people go out dancing together. It's about community, communication and enjoyment.
Besides, who is to say that people wouldn't actually prefer a hand-crafted rug created by a master craftsperson? Or to watch their daughter play a concert?
We have mass produced rugs in our house now, but if I could afford it, I would have the hand-crafted rug, they're really nice.
Why not? Do humans have an unlimited capacity for absorbing creativity?
I suggest we don't, and that we are already a decent way into our collective capacity for it. This is casually supported by the last fifty years of music - how much more creative can it get that is also significantly different from what's been done before, and also widely liked, and that you can also listen to within a lifetime?
If people were an unlimited creativity-sink, I'd expect radios to play only new songs, not the same old popular ones. I'd expect TV stations to have hardly any re-runs, films on disc to be basically unsellable, people not to have music collections, and nobody to wear the same clothes twice without personally altering/dyeing/cutting/mixing them up.
Especially in worlds where you need thousands of hours of practice at a thing to be able to compete with others, creativity comes out of your available time to consume other people's creativity, while you also need to spend more time 'being creative' to keep your creative skill up with the Joneses. And to try to compete against DeepMind.
[0] https://www.amazon.com/Sovereign-Individual-Mastering-Transi...