The article is focused on robotic automation of menial tasks, primarily in industrial settings (factories, warehouses), not automation of judgements which could be subject to bias...
Thriftwy raises an interesting point, speculatively, but there's little bias to be found in supply logistics or assembly of parts.
Marginally touched upon in the article was the displacement of lower-class workers. Is expansion of automation going to lead humanity toward dispossession of low-income workers, or will universal welfare become the result of outmoded jobs?
I don't think Asimov's "Aurorans" -- an elite few humans supported by legions of robotic servants -- are possible, but it's a thing.
I started working in robotics because I wanted to empower people and make the world a better place. But what I have started to appreciate is that increasing automation is by default a transfer of wealth from labor to capital owners. The two million long-haul truck drivers who are going to be replaced by self-driving trucks are going to lose income; the owners of the self-driving trucks are going to capture that wealth. I am not sure what the right social policies are to address this; retraining, basic income, and universal healthcare all seem reasonable. But I think a lot of people don't appreciate how automation leads to both increased productivity and also increased inequality.
> But what I have started to appreciate is that increasing automation is by default a transfer of wealth from labor to capital owners.
I think this is correct. However, one aspect not yet covered in the media as much is that if wealth is concentrated and dispossessed from labour classes, there will be a corresponding decline or collapse in many areas of the economy.
If an entire generation of labour workers is tipped out of employment, who will purchase FMCG (fast-moving consumer goods)? Sure, the market for premium/elite products could increase, but I do not think this will compensate nearly enough for the lack of volume.
It is in the interest of capital owners to ensure that significant markets remain so that industries that rely on economy of scale can continue to operate. Basic income doesn't help this situation.
I don't think we'll have a lack of volume. Global incomes continue to rise and push demand higher. Oftentimes developing economies' growth comes at the expense of developed ones, but on the whole it's a net positive.
Although I agree automation necessarily moves wealth from labor to capital owners, I think a concept underappreciated by those concerned about automation is how increased productivity has, at least historically, benefited the overall economy. The industrial revolution went a long way towards advancing economic inequality, but had it not happened we collectively wouldn't enjoy anywhere near the quality of life we now do (and that goes for pretty much anyone who's worn a t-shirt).
But yes, it would be best if we could find ways to help laborers adapt to new automation and advance technology in a socially responsible way. Not that I have any great ideas for how that would work.
Problem is: Tell that to the people losing their jobs. "Sorry you have to choose between medication and rent now. But hey, the overall economy is doing great, so you should feel happy!" This idea that a rising overall economy benefits everyone is a lie told by the small number of people whom it actually benefits.
I agree with that so much that I spend all my time working on robots. I am not saying we should stop automating; rather, that we should work toward applying the technology in a sustainable way. The advances in labor protections during the industrial revolution did not come for free. People fought (and died!) for advances such as the eight-hour work day, the end of child labor, and many other things.
At this moment in history, with so much automation happening, and also so much inequality, we should keep fighting for it to happen in a good way.
You have to think about how it benefits the overall economy. Typically this has been by replacing menial, low-paying jobs with more leveraged and better paying ones. But in the industrial age these upgrades were still creating jobs the average person could train for in a few months, and then follow for the rest of their working life.
But there's no guarantee anywhere that this is how it "has" to work out, and as things accelerate there's a lot of evidence to say it's not going to be that way for working class people in highly developed economies. (It seems to be ramping up developing economies much the same as it worked for us.) If you don't have a paycheck, are you really benefitting from the "overall economy"?
"The relatively steady rate of employment in etching and engraving fits into the optimist’s narrative of what will happen when machines take over work. Unlike the pessimist’s narrative, which points out the instances in which companies view automation as an opportunity to cut labor costs, the sunny view assumes that there’s always more work to do. If workers aren’t bogged down with repetitive, boring tasks, they’ll be able to do more work or focus on new business segments.
“A lot of the automation has just made it faster, not necessarily taken away jobs,” says Dalton. “For us, I would say it’s increased our capability to hire more employees, because we can do more work.”
Coincidentally, the whole magic robots stealing jobs meme neatly exculpates austerity and outsourcing.
> increasing automation is by default a transfer of wealth from labor to capital owners
Things are more complex than that. Whether labor gains or loses depends on whether productivity saves labor faster or slower than lower prices increase consumption. For example, if a machine lets one worker do the work of two but the resulting price drop triples demand, total employment in that sector actually rises.
I don't think that increased automation will necessarily kill jobs. When Eli Whitney invented the cotton gin, it could triple the output of a slave's labor, and the number of slaves increased to keep the machines fed.
For a more modern analogy, AWS enables one engineer to deploy machines at, let's say, 100 times the speed of an engineer doing it all manually. If your business scales with the number of deployed machines, each additional engineer adds significant value to your business until you reach a point where adding another engineer does not justify the salary you would pay her.
The issue then becomes training engineers to use AWS. Your business could justify bearing this cost if the cost of training does not outweigh the additional value.
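To make the arithmetic concrete, here's a toy sketch of that hiring logic (every number in it is hypothetical, not from any real business): keep hiring while the marginal value of the next engineer exceeds their salary.

    # Toy model of diminishing returns on hiring (all numbers made up).
    def marginal_value(n, machines_per_engineer=100, value_per_machine=1500, saturation=5000):
        """Extra annual value added by hiring engineer number n."""
        before = (n - 1) * machines_per_engineer
        after = n * machines_per_engineer
        # Diminishing returns: machines beyond the saturation point add no value.
        useful = max(0, min(after, saturation) - min(before, saturation))
        return useful * value_per_machine

    salary = 120_000
    n = 1
    while marginal_value(n) > salary:
        n += 1
    print(f"Hiring stops after {n - 1} engineers")  # marginal value no longer covers salary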
I think that's definitely true, in terms of net jobs. (But probably not always true.) However it will lead to disruption - the old jobs won't be the same as the new jobs. This disruption is potentially harmful to the people doing the old jobs. Additionally, most of the benefit of the cotton gin went to the slave owners, not the slaves, even though the slaves still had jobs. Hopefully we can do better this time!
I think that soon it will be considered bad practice to possess a skill or to use personal knowledge to make decisions.
If you have learned to do something, you will proceed by teaching that skill to a machine learning system. And then you will use the system, not your internal knowledge.
The reason will be cited as follows: if you know something, this introduces a bus factor. People can't get in your head to validate whether you're applying your knowledge fairly, or with bias, or maliciously misappropriating things. Your reasoning is unverifiable. You can leave for greener pastures, carrying your skills and knowledge with you.
A machine learning system, by contrast, is repeatable; you can have all kinds of checks and KPIs. And it's protected by IP laws.
This is an interesting thought experiment, but I imagine it having precisely the opposite effect. Modernism, as a technology, works precisely by making power relationships seem obvious, natural, and therefore not something subject to critique. These relationships include definitions of gender, race, and class, as well as capitalist and colonial relationships, and many more. Probably the most salient power relationship to this article is deciding who should benefit from the work of robots. Acting through computers amplifies this effect, and makes it easier to uphold oppressive power relationships while keeping one's hands clean.
I recognize that this is not immediately an empirical claim, but I will observe that discarding it offhand in favor of the equally ideological dominant interpretation would be exactly the dynamic I'm pointing out.
I dunno... I just read about AlphaGo Zero. It only needed to be taught the rules of the game, and it thrashed the current level of cumulative human expertise (AlphaGo) 100-0 in matches -- and AlphaGo had itself thrashed a single human champion just the year before.
This seems like it's happening at some hedge funds as well for stock picking.
I guess the closed system of the game rules is more manageable than all of medicine, but... I can't agree with "stuck at the current level of expertise".
I wasn't able to determine from what I read yesterday if AlphaGo Zero played vs AlphaGo or vs itself (AlphaGo Zero). This probably isn't the place to ask, but if it is, do you know?
It was trained with self-play and evaluated against AlphaGo Lee (the version that beat Lee Sedol) and AlphaGo Master (the version that beat Ke Jie and, I believe, won a number of online matches against top-ranked professional players).
>If every doctor trains an ML algo, then forgets how to be a doctor, then medicine stops dead at that level.
It's very rare that technology becomes ubiquitous and free very quickly. You will still need doctors in poor areas, destitute countries, and in the military.
Still, education and society would likely optimize for getting people to the edge of knowledge more quickly if ML algos truly took over a field like medicine.
I don't know if this adds or subtracts from your dystopian view, but the act of training the machine would introduce the same reasoning/bias flaws one has.
Someone wise once told me, even after we have strong AI and don't need to program, he will still program, because the process of programming leads to understanding, and he will still want to understand things. The AI might be able to write the program better and faster, but it won't be able to help him gain the understanding he would have gotten from writing the program himself.
This is why you will learn to program, write the program, gain the understanding, train the ML system properly to reproduce your results, and then basically unlearn everything.
Why would you unlearn though? Eventually the AI will do all of the above as well. Maybe eventually programming will be like blacksmithing or weaving is today...
I was thinking more like a Renaissance fair today. You'd visit a fair that recreates a hackathon from the late '90s.
At the fair, you'd see the old programmer at his terminal, coding in Perl in an emacs window... instead of turkey legs, you'd have Fritos and Mountain Dew...
Weapons of Math Destruction is also a good resource on this topic, about how bias is already being encoded in "black box" algorithms, with real effects on things like the availability of credit.
> the instructions of the red light have to be followed regardless
No they don't. If there's no enforcement, and no moral reason to follow the law, you've reduced yourself to a robot if you follow it.
As far as red lights are concerned, though, you'd better be damn sure cars or bikes or pedestrians aren't going to leap out from around a corner before you run one. Especially if the intersection is empty at 3am and visibility is poor. In fact, in that situation it's better to play it safe.
If there is no camera and no police, you can treat the light as a stop sign. Just be safe about it.
By can, I mean that a human being can choose to do so. Maybe smart cars in the future won't allow the driver to make that decision. But nothing forces us to always obey traffic laws right now.
In China and many other countries, rules are typically so numerous, draconian or fuzzy that you are nominally constantly breaking something. If the authorities want to alter your behavior, they call in the specter of potential enforcement to encourage behavioral change. On the one hand this seems horrible - shifting sands, constant unknowns, zero security - but in reality it's kind of liberating - less need to hire lawyers and asking for forgiveness is tolerated. The gap between enforcement and law is precisely where most people seek to operate, because it typically creates additional efficiencies. In short, enforcement is law. Or to rephrase this in the words of Snowden - Policy is a one-way ratchet that only loosens over time.
Reminds me of a Karl Schroeder novel (Permanence) in which characters encounter an alien race (or the remains of an extinct one?) which evolved to have intelligence but not consciousness. Or are we turning ourselves into philosophical zombies?
This is an interesting take, but I think part of what makes us human is our inability to disassociate our “domain knowledge” from our sense of self.
Much domain knowledge is in some sense innate or even subconscious, picked up in the field. I think it would be more efficient just to spy on people’s thoughts (and work) in some manner if there is a fear they could misapply their knowledge or leave a critical process hanging.
Yes, they will allow for a degree of inefficiency if it removes the human factor.
The reasoning will go: yes, we are losing $1 on every automated transaction, but we can make more of them, and we certainly avert one $10,000 corruption case and one $100,000 case where we get sued over a human decision.
You can't sue a machine learning algorithm (that's how the argument would go). But a human you can sue, even frivolously.
Yes, but you can sue the company that owns the machine learning algorithm, probably for much more money than you could sue an individual for. You can even imagine class-action lawsuits and the like.
(I am one of the researchers mentioned in the article.)
It is much harder to ask an algorithm about its reasoning than to ask a person about theirs.
"Joe, why did you fire Carla?"
"She was late to work every other day and spent all day on her phone"
"HR-bot 5000, why did you fire Carla?"
"Weights indices 59738, 837, and 28836 added to over 0.67, yielding a score of only .85, which is below the firing threshold"
Certainly you could look at the input data and try to convince a jury that her performance wasn't acceptable, but you can't prove that is why the AI fired her.
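To make that concrete, here's a toy sketch (not any real HR system; the weights, features, and "firing threshold" are all invented) of why the model's own "explanation" is just an arithmetic artifact rather than a reason a jury could evaluate:

    # Toy illustration of the "HR-bot 5000" explanation problem (all numbers made up).
    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.normal(size=100_000)   # learned weights, addressable only by index
    features = rng.random(100_000)       # opaque feature vector standing in for "Carla"

    score = float(weights @ features) / 100_000
    FIRING_THRESHOLD = 0.0
    decision = "fire" if score < FIRING_THRESHOLD else "retain"

    # The most honest "explanation" available: which indices contributed most.
    top = np.argsort(np.abs(weights * features))[-3:]
    print(f"Decision: {decision} (score={score:.4f})")
    print(f"'Explanation': largest contributions from weight indices {top.tolist()}")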
We cannot do this today, but this capability, of using language, is one we are actively working towards. Eventually, you can imagine an AI that can engage in a dialog like this one, in words. For example, see the DARPA Explainable AI program: https://www.darpa.mil/program/explainable-artificial-intelli....
At my previous shop, it was normal to silently tolerate developers breaking the rules (not locking their stations when they stepped away, browsing unapproved sites, coming in late, leaving early)... until it was decided that they needed to be fired because they were rude to someone important, or the like. At that point they lost their privilege to act like a professional, and their badge-access records were pulled and compared against their time entries.
Developers got fired because of clear and well-defined rules... that were never enforced except when the decision to fire them for an arbitrary reason had already been made.
I think HR-bot 5000 would treat employees more equitably by forcing employers to confront the inequity of some HR regulations. The same way self-driving cars "drive weird" because they stop at stop signs and drive the speed limit.
Well, where "Machine Learning" (aka: black box you can only get weights for a given question), one can train a machine to be racist, sexist, ageist, or whatever.
The problem is that the end weight distribution is different than the GB's or TB's of training data. How do we know the training was fair and impartial? How did the trainers even know if it was? What biases crept in on this stage?
Worse yet, what if the bias of the black box does denigrate black people... Say, we take in all pictures of convicted criminals- it's disproportionaly black. I would argue part of that is because of inherent policing biases, but that's embedded in "guilty" verdict. Who's at fault for this "bias"? Is there a fault? How do we detect, other than exhaustively?
I'm eagerly awaiting for methods to "open up" ML black boxes and see what makes them tick. See their decision trees, their neural weights. I want to poke and prod to see what's behind those series of numbers like [.888271829 1.10999292992 37.999999921 1000.32 .73] . Right now, it's shove data in exhaustively and hope for the best. I don't particularly care for that way of analysis.
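For what it's worth, some crude tools for this kind of poking already exist. Here's a minimal sketch using scikit-learn on synthetic data (the model, feature count, and data are all stand-ins, not any particular system) of permutation importance -- it tells you which inputs a black box leans on, though not why:

    # Rough sketch of "opening up" a black-box model with permutation importance.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    # Synthetic stand-in for the GBs of training data.
    X, y = make_classification(n_samples=2000, n_features=10,
                               n_informative=3, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Shuffle each feature in turn and measure how much the score drops:
    # features the model leans on heavily cause a big drop.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, imp in enumerate(result.importances_mean):
        print(f"feature {i}: importance {imp:.3f}")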
Every single technological advancement relieves someone of work. This allows people to spend their money on things they would rather spend it on, creating more jobs there. This makes the world a better place.
I agree, but technical advancements can also lead to a transfer of wealth from worker to capital owner. For example, there are roughly two million long-haul truck drivers in the US today. When we have self-driving trucks, they will lose their jobs. It's not a very fun job, and self-driving trucks will be better and safer. But it will also transfer wealth from the worker to the capital owner.
It may also create other jobs etc., but clearly the wealth transfer is one part of the effect. I think that as a result it is important to pursue social policies to mitigate the disruption experienced by displaced workers, and to spread the benefit of the technology to a larger fraction of society.
Jobs have been made obsolete many times over. There are fewer blacksmiths than there used to be when the horse was a primary method of transportation. Manufacturing jobs declined with globalization and offshoring during the eighties. Did knowledge work take the place of those manufacturing jobs? Would the employment numbers look different if VisiCalc hadn't come along?
The questions are: what do the population numbers look like for those impacted by these changes in job-type demand? How many blacksmiths were made obsolete by the car? How many manufacturing jobs were removed from the pool by moving manufacturing offshore? How many manufacturing/warehouse/etc. jobs will be replaced by automation?
The last question is probably the hardest to answer, because we don't know yet. It depends on the evolution of the sensors (solid state LIDAR), other hardware, and automation methods.