Hacker News
Brain work may be going the way of manual work (economist.com)
36 points by josephby on May 23, 2013 | 74 comments



It's the finance and legal professions that are most at risk with the current level of AI and machine learning. Contrary to what most finance and legal professionals believe, there is little creativity in their vocations - and that is exactly what AI and machine learning do best: rote procedural operations from a complex set of rules. It will be very interesting to see how our "captains of corruption" (the fucks that actually run this planet) react when they are pushed to the curb.


I laughed until I realized you were serious. AI can't even get basic speech processing correct. The combined computer processing power in the world cannot yet accurately emulate the thought processes of the average human mind.

It will be decades before it can tackle the "creativity" needed to draft complex transactional legal documents, let alone litigation-related documents. I can't comment on finance, but law is not about "rote procedural operations from a complex set of rules." The law is relatively simple--it is, and always has been, the application of the law that is complex, and this is why lawyers get paid so much. (The same logic generally applies to programming -- the syntactic rules of a language are simple, but the application of those rules is highly complex.)

Indeed, anything beyond the simplest legal work requires a highly contextual understanding of both the applicable law and the facts of the situation, which is beyond the capabilities of current software and hardware (and that includes currently available or near-available quantum optimizers).

AI will replace legal work about the same time as it replaces programming.


Low-level legal work, such as discovery, is already disappearing. For example: http://www.blackstonediscovery.com


What about legalzoom.com and similar services? All those wills, patent applications, incorporations, etc. were once prepared by legal assistants, paralegals and junior lawyers. Now someone just fills in some forms and ticks off various options. Or think of online insurance quotes, tax preparation and basic accounting: again, mostly junior-level jobs to be sure, but this is only the beginning!


Before LegalZoom, and even after it, those things were available in hard-copy volumes of model contracts, wills, etc., complete with instructions and optional sections for different uses. The online services streamlined the UX a bit, but it's simply not the case that prior to them all of those documents were prepared individually by professionals. And I suspect that both the books and the online services displaced people doing their own documents without professional assistance as much as they took business from pros.


You're quite right about the books; I remember the Self-Help series for all kinds of legal documents, but they were essentially dumb. If the person using those forms made an error, it probably wouldn't be caught and could be costly. Modern electronic systems have what might be described as intelligence, though certainly not understanding, such that user error should be much less common. Also, with consumer-grade financial software, different scenarios can be evaluated and actions suggested based on data analysis; so there's more than just UX improvement occurring in these systems.


As always, the fun part is the interface between the rules engine and the real world. How good is AI at gathering requirements from people who don't quite know what they want?

The other fun part is what happens when AI is good enough to replace the gophers who do most of the grunt work, but not yet good enough to replace the experts who figure out what the customers really want. If this stage lasts long enough for the experts to retire, where do new experts come from?


Even if this is true, it will be the low-level grunts and support staff who feel the burn, not the high-level people who actually run the show. Operating at the pinnacle of those professions requires the ability to form strong personal relationships with the right people - not a skill that can be replicated by machines just yet.


What part of finance do you think will go first? Finance is highly subjective, and while I'm not saying it isn't at risk of being taken over by AI, I think it will be quite a while before a machine can make the kind of subjective decisions a finance professional makes.


Insurance underwriting will probably become more and more automated. The exceptions, at least in the short run, are cases with very large values involved (though these will surely be automated some day as well) and lines of insurance where only a small part of the population buys coverage, so the sample is too small to do any meaningful statistics.

Anyway, there'll probably always be people behind the algorithms, doing the modeling or the programming at least.
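To put a rough number on the small-sample point: the uncertainty in an estimated claim rate shrinks only with the square root of the number of policies, so thin lines of business stay statistically murky. A quick sketch (illustrative numbers only, not actuarial data):

    // Standard error of an estimated claim rate p from n policies: sqrt(p*(1-p)/n).
    // The numbers below are purely illustrative assumptions.
    const claimRateStdErr = (p, n) => Math.sqrt(p * (1 - p) / n);

    const p = 0.02;                            // assume a 2% true claim rate
    console.log(claimRateStdErr(p, 1000000));  // ~0.00014 -> estimate is tight, easy to price on
    console.log(claimRateStdErr(p, 500));      // ~0.0063  -> about a third of the rate itself, too noisy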


Uh, they already do: high-frequency trading.


As a finance professional (HF equities trader), I find this surprising. We are using computers more and more in this world. Of course, if a person has no familiarity or skill with computers - for analytics, monitoring, and so forth - they are likely to be left behind.


No they will not.

The amount of effort required to shrink real knowledge-worker jobs is astronomical compared to automating something like driving a car. After the singularity, fine. But even that will require a massive, massive amount of work. The low-hanging fruit for increasing knowledge-work output is in Gattaca, not AI.


I guess the point where an AI can replace a programmer is the point where a technological singularity begins, because at that point the AI is capable of improving its own code, which would lead to a 'runaway reaction'.

And if you're in a post-singularity world you don't really have to worry about jobs anymore. So I guess being a programmer is pretty future proof ;)


The fact that automation will take most of the population out of work doesn't necessarily mean that your job will be completely automated. Look at a manufacturing plant, for instance: human workers are still necessary, but you need 10x fewer of them and they produce 10x more than 50 years ago.

The same holds for technical areas. We can't say for sure when we'll have strong AI, but in the meantime we're going to need fewer and fewer people to do the jobs that a lot of people do today. This is all to say that the problem of unemployment isn't about every single person in the world being unable to get a job; if you have 35% of your population unable to exchange their labour for a living, that is already a huge social problem.


Software is already replacing programmers through gains in efficiency if not AI.

Just look at what a small number of programmers can accomplish today vs 10 or 20 years ago.


There's not a finite amount of software that the world needs, though. We're standing on the shoulders of giants but the top of the ladder is not even in sight.


It depends, I think. Let's say you develop mobile games for a living and then the unemployment rate rises to 20-30%. The economy collapses, and nobody buys games. Not the best place to be while paying a mortgage.

Ironically, if you're a programmer who increases productivity, i.e. causes unemployment, you save businesses money, and in a sour economy the need to save money is greatest.


So, post-singularity, people with no wealth or ability to work will be prevented from starving? There are many people dying from starvation today. Why would this change?


The idea of the singularity also includes post-scarcity, at least for basic survival needs. The logic is that when robots & AIs take over all the jobs, we can increase output to a point where goods are exceedingly abundant. Granted, that path involves quite a bit of faith that whoever controls that output has some reason to provide for everyone, but it's not terribly unreasonable, if a bit idealistic.


This idea fails because we have the resources to feed everybody today, yet it doesn't happen. In the future, when techno-baron A sees that techno-baron B is getting ahead, techno-baron A consumes more resources in order to defeat techno-baron B. If people are in the way of that, and they are of no use to any techno-baron, they are pushed aside.

There was "post-scarcity for basic survival needs" for millions of years already. When life first formed on earth, there was food enough for every organism. That didn't prevent organisms from fighting each other.

Competition and the extinction of the losers have been a central theme of life since it began. Being at the top of the food pyramid, and in a first-world country, has made us forget that fact. The idea that humanity can rise above this, formed from living in this socio-economic bubble for far too long, is just hubris.


"Things have always been this way" isn't really a strong argument here. There are many ways we could set up a society in which everyone's basic survival needs are met and the surplus is allocated in a way that most benefits society. A perfectly impartial central planning AI comes to mind.

That's where the big difference from your example of '"post-scarcity for basic survival needs" for millions of years already' comes in: there has never been the coordination and impartiality to plan and distribute at that level.

As for today: if we coordinated, and there were no economic considerations, we could feed everyone. It's just that the planning and monetary forces aren't in place to make it happen.


A central planning AI to impose fairness and the right to basic survival could only come about and remain standing if there were enough support for it from those already in power. How do you suppose that would come about?

The future will hold a very different scenario from a typical first-world country today, where the populace actually produces useful work. As a leader, you would instead have a population who in-fight amongst themselves for the best of the consumable goods and produce absolutely nothing of value for you. To make matters worse, they continually attempt to push their boundaries, growing bigger and bigger if not thwarted (as life normally does).

My other point, that techno-baron A will want to fight/defend himself against techno-baron B still applies. Won't anything that techno-baron A needs to do to defend himself against techno-baron B trump the humanitarian concerns for maintaining the lifestyle of his subjects that give absolutely nothing back to him?

Even if techno-baron A is morally inclined to do so, there will be others who are more brutal and would not. And even if the moral techno-baron A is able to completely conquer the world (which seems very unlikely, considering how schizophrenically both brutal and compassionate he would have to be), his hold on it will at some point cease, probably under duress from threats within.


People have been trying to distribute resources fairly, to give everyone a basic standard of living, for ages. It's the basic idea behind welfare, socialism, and communism; there's precedent for trying to do exactly what I described as the CPAI.


So why do you think that an AI could do any better? You really think it's too complicated for people to figure out how to distribute wealth fairly enough, and that's why it failed? All an AI tasked with making things fair for everybody can do is create a plan. It cannot execute it, it cannot enforce it.

Why would people follow it? Why would some rich and powerful guy with connections forgo all his power to some CPAI that was set up? How would he maintain his ego when the world no longer shakes from his every footstep? How would he repay all the favors he made to those that got him there?

AI is just a tool, like the gun, the ship or the railroad. It's not some panacea that is going to protect human nature from itself.


I think one of the reasons it failed in the past was a mixture of corruption and incomplete technology. There just wasn't the tech in place for things to be planned accurately and efficiently.


> This idea fails because we have the resources to feed everybody today, yet it doesn't happen.

The idea is that in a post singularity[1] world we won't be making those decisions. A hopefully wise AI will manage and decide everything for us. (Even if we wouldn't want it to.)

[1] http://en.wikipedia.org/wiki/Technological_singularity


> The logic is that when robots & AIs take over all the jobs, we can increase output to a point where goods are exceedingly abundant.

That or the AI decides to exterminate us. In both cases jobs won't be a major concern.


Yeah, that's always an option for how this could go. I prefer to be positive, because if a post-singularity AI wants to kill me, there's not much I can do.


What if natural resources are scarce?


Basic living is perfectly sustainable for our current population and beyond, I think (and I believe I've read as much, but have no source). Anything above that could be allotted based on the newly scarce resource of human creativity, or whatever we decide. Also, many scarce resources could likely be made non-scarce with the 'free' labor of robots/AIs.


You don't have to fully automate a job. You just have to replace most of the human labour. The "secretary" sector wasn't obliterated by word processing and email, but it was crushed down to a vestigial remnant. The "lawyer" sector could be as easily crushed by Watson.


> You don't have to fully automate a job

Right, and a lot of today's technology bridges the gap by shifting work to the end user. How often does anyone call a travel agent anymore?


It's already happening. The number of "knowledge worker" jobs is shrinking. The low-skilled are the first to go and software is replacing them, just as low-skilled labor jobs have been replaced by machines.

It's a slow process: as the quantity and depth of software increase, the number of knowledge workers decreases. That is literally the reason for the existence of most software.


Low-skill and knowledge worker are at opposite ends of the spectrum.


In this case, "low-skill" is an adjective modifying "knowledge worker", not a category unto itself.


You don't need AI to see the number of programmers required on a project shrink exponentially.

Five years ago, to build a simple video chat you needed a team of hardcore programmers with expertise in video, audio, networking, real-time protocols and telecom. Today it takes a couple of lines of JavaScript through WebRTC.
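For a sense of scale, here's roughly what that "couple of lines" looks like in practice: a minimal sketch of the promise-based browser API for the calling side. The signaling channel, which WebRTC leaves to the application, is stubbed out as a hypothetical sendToSignalingServer() helper, and handling the returned answer is omitted.

    // Minimal sketch of WebRTC call setup (caller side only).
    // sendToSignalingServer() is a hypothetical helper for your own signaling channel.
    async function startCall() {
      const pc = new RTCPeerConnection({
        iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
      });

      // Capture local camera and microphone, and add the tracks to the connection.
      const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
      stream.getTracks().forEach(track => pc.addTrack(track, stream));

      // Forward ICE candidates to the other peer via the signaling channel.
      pc.onicecandidate = e => {
        if (e.candidate) sendToSignalingServer({ candidate: e.candidate });
      };

      // Render the remote peer's media when it arrives.
      pc.ontrack = e => {
        document.querySelector('video#remote').srcObject = e.streams[0];
      };

      // Create and send the session offer; the answer comes back over signaling.
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      sendToSignalingServer({ offer });
    }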


But aren't users demanding more and more features from software as time goes on, and won't this added size and complexity create additional work for programmers even though the stuff we did in the previous release is now easier to do? For example, compare the amount of software that went into a phone ten years ago with the gigabytes of software that sits on a smartphone today, or the complexity of the UI on a phone ten years ago vs. today. Not to mention the increased functionality of the servers at the back-ends of all these apps.

Also, if you're starting a project from scratch, you can use the latest and most streamlined technology. But if you're supporting a large code base (millions of lines of code) and adding features to it, you can't just suddenly re-write all your code to use the latest techniques. Also, you may be locked into a technology by your customers. For example, in the enterprise software space, lots of customers have a huge investment in Java application servers on which all their software runs, and if you want to compete in that space, it's easier to sell them a Java-based system than it is to convince them to install and learn to support a Ruby-based system just to run your product. And there's probably much more code (and programmers) in enterprise software -- think of all the software that supports banks, insurance companies, hospitals, government agencies, pharmaceutical companies, etc. -- than in the start-up world. I don't think AI is going to make a dent in that huge, complex pile of legacy software any time soon.


Look at state govt employees in the US. A major portion of this labor force is involved in managing data. Even if you take just the "knowledge workers", the number still would be in the hundreds of thousands range. Not counting outsourcing.

Facebook manages the data of 1 billion people, spanning some 200 countries with 5000 employees. No great AI involved.

Glad you brought up enterprise: most of those programmers now come from Indian IT companies. The only sector still employing American programmers is defence. Now, if you are right and all those hospitals, pharma companies, insurance companies, banks etc. require more and more features to be built for their customers, then why do we see this:

http://www.indianexpress.com/news/its-time-to-move-on-as-tec...

Why do we see Infosys tanking after almost a decade of being a darling of the stock market?

http://www.bloomberg.com/news/2013-05-22/infosys-investors-b...

Of course it's not all happening overnight; there are some legacy systems that are still too expensive to replace, but it is happening.


Sadly, I couldn't read the article due to the paywall. But I agree with you. I don't think AI will replace knowledge workers. A couple of decades ago, people used to say that workers in factories would disappear as well. That never happened, at least not completely.

The costs are always a problem. Whether AI will ever really replace us is still a mystery, because for it to work, it would have to be cost-effective.


> Sadly, I couldn't read the article due to the paywall.

Just open it in Incognito Mode. The Economist uses a simple, cookie-based porous paywall.


Thanks for the tip. It worked.


>>A couple of decades ago, people used to say that workers in factories would disappear as well. That never happened, at least not completely.

It hasn't happened completely, but that is only a matter of time. Manufacturing jobs have been declining at a sharp and steady rate in the US. One part of this may be due to outsourcing, but the other part is due to automation. When machines can do most of the labor, eventually you will only need a few high-level engineers to calibrate those machines.


Jobs in U.S. factories disappeared to Asia. Now that offshore manufacturing costs are rising, we can expect more companies to follow Foxconn's example, and invest heavily in machines.

If a job is offshored (as much knowledge work has been), it's a good bet that the next step will be to automate it.


It's already started: http://online.wsj.com/article/SB1000142405270230337920457747... Lawyers are being replaced by computers for doing pretrial document review. Currently only low-skill low-creativity jobs are being replaced, but that will expand over time.


Gattaca, and computer assisted knowledge work. Let's use humans for what humans are good at and computers for what computers are good at. Let's also work on the interface between those two systems.


I would rather prepare for it rather than shove my head in the sand. Is there reason to believe it will come later rather than sooner?


FWIW, this sounds a lot like Schumpeterian (or Marxist) "creative destruction"[1] at play. The idea of "creative destruction" is an interesting one, and it's an area of much debate with regard to the long-term viability of the capitalist model.

The key idea of "creative destruction" is basically that technological innovation both creates (duh) and destroys (in that it destroys economic value based on pre-innovation technologies). And since - in Schumpeter's view anyway - capitalism depends on a constant flow of new innovations and entrepreneurship, we have a constant state of churn where value is being "creatively destroyed".

I started reading Joseph Schumpeter's Capitalism, Socialism and Democracy[2] a while back but got distracted and never finished it - but based on what I know so far, I recommend it. What I'm not yet clear on, from my limited reading of the original source material and a few related works, is exactly how bullish (or not) Schumpeter was on capitalism. Which reminds me, I really want to go back and finish the book, as I find this topic both fascinating and important.

[1]: http://en.wikipedia.org/wiki/Creative_destruction

[2]: http://en.wikipedia.org/wiki/Capitalism,_Socialism_and_Democ...


You mention Schumpeter; guess who wrote the article! (Or it could be someone with the same name, I honestly don't know.)


Schumpeter died in 1950. "Our Schumpeter columnist and his colleagues consider business, finance and management, in a blog named after the economist Joseph Schumpeter"


Yeah, the original Schumpeter is long dead. But apparently that blog is meant as a tribute to him or something. I actually wasn't completely clear on that point myself, when I first saw the word "Schumpeter" on the page. I kind of assumed it was either something like that, or an amazing coincidence.


I assume it's the name of the column, unless the dead have risen. And creative destruction is mentioned in the article too!


"Moore’s law—that the computing power available for a given price doubles about every 18 months—continues to apply. "

Then why have Intel desktop CPUs not been getting any better for the last 3 years?


Because the article, like just about damn everyone else, completely misunderstands Moore's law, which actually observes that the number of transistors we can fit on a chip doubles every two years. Transistor count does not directly correlate to performance.
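For what it's worth, the stated doubling period matters a lot once you compound it over a decade. A quick back-of-the-envelope calculation (pure arithmetic, no claims about any particular chip):

    // Cumulative growth after `years` for a given doubling period (in years).
    const growthFactor = (years, doublingPeriodYears) =>
      Math.pow(2, years / doublingPeriodYears);

    // "Doubles every 18 months" vs "doubles every 2 years", over a decade:
    console.log(growthFactor(10, 1.5).toFixed(0)); // "102" -> roughly 100x
    console.log(growthFactor(10, 2).toFixed(0));   // "32"  -> roughly 30x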


Wrong again. Moore's law is actually not any kind of law at all.

It's merely a business practice adopted by Intel, which they attempt to adhere to. The fact that they refer to it as a law, and that journalists around the world parrot their term, is all merely a magical marketing sales pitch.

They TRY to double the transistor count on a periodic basis, and, when they do, suddenly, they've got a newer, better, more expensive thing to market and sell. Wowee! The future is such a miracle!


"Moore's law" is the name, it doesn't matter if it's a real law or not.


If they don't try to double the transistor count, then somebody else will come along and eat their lunch. Trying to make better products is the only viable strategy for them.

Moore's law is about their success in doing so. Your car's horsepower doesn't double every 1.5 years, nor does its gas consumption decrease at that rate, but with transistors it actually happens.


It's not a law, nor is it a business practice. Read what I said again:

Moore's law ... OBSERVES that the number of transistors we can fit on a chip doubles every two years.

If you are trying to argue with me calling it "Moore's law", you are too late. You missed the boat. That's what it is called, law or not.


> Hey, guys, I bet every 18 months we can draw a picture on a thin slice of substrate, and make it twice as small as we did 18 months ago. Let's call it a law, and proclaim that it "drives the economy."

> Jesus H. Christ, Gordon, that's a fucking fantastic idea! Get marketing on the phone. Holy shit, we're gonna be fucking rich.

Yeah, okay, "observe" whatever you want. Sorry to contradict you on a website. Don't forget to down vote this comment as well. Boo hoo.


I "observed" nothing. That's what Moore's law says, not me.

P.S. HN doesn't let me downvote your comments, because you are replying to my comments. But, I suppose it's impossible I'm the only one that disagrees with you, so I must be hacking the website.


The "Moore's Law" used in the article "that the computing power available for a given price doubles about every 18 months" is correct, but it is a slight redefinition of the actual "Moore's Law".

In any case, the costs are going down, the number of cores is growing, and, more importantly, computing power is not restricted to CPUs (see GPUs/APUs).


It's wrong. It's not the computing power that's doubling, it's the number of transistors. Big difference. http://en.wikipedia.org/wiki/Moores_law


You misread "computing power" as CPU clock speed.


No I didn't. Intel desktop CPUs did not gain a significant amount of computing power the last 3 years. Compare 1995 CPUs to 1998. Now compare 2010 CPUs to 2013.


http://www.anandtech.com/bench/Product/701?vs=191

That's a big enough difference to count as significant. Especially when you consider that Intel's been lowering their power budget and spending larger portions of their transistor budget on the integrated GPU. The Pentiums on the market in 1995 were 10-12W, and the Pentium 2s on the market in 1998 were 30-45W followed by a die shrink that brought power back down to 20-30W.


think parallel


Yes, they had 4-core CPUs in 2006, and still have 4-core CPUs in 2013 (with a few rare Xeon 6-cores).


You can get 8-core Xeons (X7550 to pick a random one) and 16-core Opterons (6378 for example).

More importantly, 2013 Xeons do much more per clock cycle than 2006 Xeons (and 2013 Opterons, for that matter). Improvements in pipelining, branch prediction, caches, etc. are harder to nail down to one number like clock speed or core count, but can be huge contributors to real-world performance.
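As a rough first-order mental model (with purely illustrative numbers, not measurements of any real Xeon), sustained throughput scales with cores × clock × instructions per cycle, which is why per-clock improvements matter even when the headline clock speed stays flat:

    // First-order model: throughput ≈ cores × clock (GHz) × instructions per cycle (IPC).
    // The figures below are purely illustrative, not benchmarks of any real part.
    const throughput = (cores, clockGHz, ipc) => cores * clockGHz * ipc;

    const older = throughput(4, 2.6, 1.0);  // hypothetical 2006-era chip
    const newer = throughput(8, 2.6, 2.0);  // hypothetical 2013-era chip: more cores, higher IPC
    console.log(newer / older);             // 4 -> a 4x gain with no clock-speed increase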


While I doubt there will be some Reverse Big Bang of creative destruction, it's tough to argue that the need for knowledge workers is being reduced. In theory, even Excel reduces the need for knowledge workers by allowing a single analyst to produce more output than an analyst with graph paper.

My question to HN-ers, as people who write code that further reduces the need for knowledge workers: how should we feel about our role in contributing to a Vonnegut-style dystopia?

Indifference? If not us, someone else will do it. And job loss isn't really a net loss. It's just capital reallocating to another area.

Pride? Automation brings advances down in price, creating a better standard of living for all.

Something else?

edit: The "need" for knowledge workers isn't reduced; I stated that incorrectly. I meant that fewer knowledge workers are needed for a given task or a given output. Clearly, the need for knowledge workers as a percentage of the workforce is higher than in the past and will continue to rise until the machines take over.


Some technologies are inherently populist, while others are authoritarian. ENIAC was authoritarian, pocket calculators populist (and the abacus more so, because it can be produced from more easily available materials). Tanks authoritarian (because too expensive for one person), IEDs populist. Large solar plants authoritarian, "small-solar" one-family panel setups populist. Agribusiness authoritarian, home fertilizer production for home gardens populist.

We may end up as net job destroyers, but if we create enough new entrepreneurs, or make off-the-grid more viable, then I don't feel so bad.


I'm optimistically looking for ways not to feel guilty about this. I've always believed that better technology creates new jobs and a wealthier society, but it's pretty clear now that businesses are doing very well by automating away mid-skill jobs. Just contrast the stock market with the job market: companies are becoming more profitable but hiring/paying less.

Tech entrepreneurs are trained to think about optimizing/automating. For example, here's an old blog post by Kopelman: http://redeye.firstround.com/2006/04/shrink_a_market.html Summary: there's a lot of opportunity for you if you find a way to shrink a market.

Without doing too much handwringing, I think it's our responsibility as engineers/technologists to at least try to be aware of what we destroy with what we create.


Seems like this article is conflating service workers with knowledge workers. Being a chauffeur isn't the same as being a designer.


I don't see AI leading the reduction in jobs available to knowledge workers for a while yet. Knowledge workers will continue to lose their jobs, though, but that will simply be the result of specialized software (not what I'd call AI).


Unfortunately, the world is very imperfect and there's a huge amount of things that need improvement. To quote M King Hubbert: "Our ignorance is not so vast as our failure to use what we know." I don't see work running out.



