It will be decades before it can tackle the "creativity" needed to draft complex transactional legal documents, let alone litigation-related documents. I can't comment on finance, but law is not about "rote procedural operations from a complex set of rules." The law is relatively simple--it is, and always has been, the application of the law that is complex, and this is why lawyers get paid so much. (The same logic generally applies to programming -- the syntactic rules of a language are simple, but the application of those rules is highly complex.)
Indeed, anything beyond the simplest legal work requires a highly contextual understanding of both the applicable law and the facts of the situation, which is beyond the capabilities of current software or hardware (and that includes currently available or near-available quantum optimizers).
AI will replace legal work about the same time as it replaces programming.
The other fun part is what happens when AI is good enough to replace the gophers who do most of the grunt work, but not yet good enough to replace the experts who figure out what the customers really want. If this stage lasts long enough for the experts to retire, where do new experts come from?
Anyway, there'll probably always be people behind the algorithms doing the modeling or the programming, at least.
The amount of effort required to shrink real knowledge-worker jobs is astronomical compared to simply driving a car. After the singularity, fine. But even that will require a massive, massive amount of work. The low-hanging fruit for increasing knowledge-work output is in Gattaca, not AI.
And if you're in a post-singularity world you don't really have to worry about jobs anymore. So I guess being a programmer is pretty future proof ;)
The same holds for technical areas: we can't say for sure when we'll have strong AI, but in the meantime we're gonna need fewer and fewer people to do the jobs that a lot of people do today. This is all to say that the problem of unemployment isn't about every single person in the world being unable to get a job; if 35% of your population is unable to exchange their labour for a living, then that is a huge social problem.
Just look at what a small number of programmers can accomplish today vs 10 or 20 years ago.
Ironically, if you're a programmer who increases productivity, i.e. causes unemployment, you save money. In a sour economy, the need to save money is greatest.
There was "post-scarcity for basic survival needs" for millions of years already. When life first formed on earth, there was food enough for every organism. That didn't prevent organisms from fighting each other.
Competition and the extinction of the losers has been a central theme of life since it began. Being at the top of the food pyramid, and in a first-world country has made us forget that fact. The idea that humanity can rise above this, formed from living in this social-economic bubble for far too long, is just hubris.
That's where the big difference from your example of '"post-scarcity for basic survival needs" for millions of years already' comes in: there's never been the coordination and impartiality needed to distribute at that level.
As for today: if we coordinated, and there were no economic constraints, we could feed everyone. It's just that the planning and the monetary incentives aren't in place to make it happen.
The future will hold a very different scenario than a typical first-world country today, where the populace actually produces useful work. As a leader, you would instead have a population who in-fight amongst themselves for the best of the consumable goods and produce absolutely nothing of value for you. To make matters worse, they continually attempt to push their boundaries, growing bigger and bigger if not thwarted (as life normally does).
My other point, that techno-baron A will want to fight/defend himself against techno-baron B still applies. Won't anything that techno-baron A needs to do to defend himself against techno-baron B trump the humanitarian concerns for maintaining the lifestyle of his subjects that give absolutely nothing back to him?
Even if techno-baron A is morally inclined to do so, there will be others that are more brutal and would not. And even if the moral techno-baron A is able to completely conquer the world (which seems very unlikely, considering how schizophrenically both brutal and compassionate he would have to be), his hold on it will at some point cease, probably under duress from threat(s) within.
Why would people follow it? Why would some rich and powerful guy with connections cede all his power to some CPAI that was set up? How would he maintain his ego when the world no longer shakes from his every footstep? How would he repay all the favors he owes to those who got him there?
AI is just a tool, like the gun, the ship or the railroad. It's not some panacea that is going to protect human nature from itself.
The idea is that in a post-singularity world we won't be making those decisions. A hopefully wise AI will manage and decide everything for us. (Even if we wouldn't want it to.)
That or the AI decides to exterminate us. In both cases jobs won't be a major concern.
Right, and a lot of today's technology bridges the gap by shifting work to the end user. How often does anyone call a travel agent anymore?
It's a slow process: as the quantity and depth of software increases, the number of knowledge workers needed decreases. That's literally the reason most software exists.
Also, if you're starting a project from scratch, you can use the latest and most streamlined technology. But if you're supporting a large code base (millions of lines of code) and adding features to it, you can't just suddenly re-write all your code to use the latest techniques. Also, you may be locked into a technology by your customers. For example, in the enterprise software space, lots of customers have a huge investment in Java application servers on which all their software runs, and if you want to compete in that space, it's easier to sell them a Java-based system than it is to convince them to install and learn to support a Ruby-based system just to run your product. And there's probably much more code (and programmers) in enterprise software -- think of all the software that supports banks, insurance companies, hospitals, government agencies, pharmaceutical companies, etc. -- than in the start-up world. I don't think AI is going to make a dent in that huge, complex pile of legacy software any time soon.
Facebook manages the data of 1 billion people, spanning some 200 countries with 5000 employees. No great AI involved.
Glad you brought up enterprise; most of those programmers now come from Indian IT companies. The only sector still employing American programmers is defence. Now, if you are right and all those hospitals, pharma companies, insurance companies, banks, etc. require more and more features built for their customers, then why do we see Infosys tanking after almost a decade of being a darling of the stock market?
Of course, it's not all happening overnight; there are some legacy systems that are still too expensive to replace, but it is happening.
The costs are always a problem. Whether AI will ever really replace us is still a mystery, because for it to work, it would have to be cost-effective.
Just open it in Incognito Mode. The Economist uses a simple, cookie-based porous paywall.
It hasn't happened completely, but that is only a matter of time. Manufacturing jobs have been declining at a sharp and steady rate in the US. One part of this may be due to outsourcing, but the other part is due to automation. When machines can do most of the labor, eventually you will only need a few high-level engineers to calibrate those machines.
If a job is offshored (as much knowledge work has been), it's a good bet that the next step will be to automate it.
The key idea of "creative destruction" is basically that technological innovation both creates (duh) and destroys (in that it destroys economic value based on pre-innovation technologies). And since - in Schumpeter's view anyway - capitalism depends on a constant flow of new innovations and entrepreneurship, we have a constant state of churn where value is being "creatively destroyed".
I started reading Joseph Schumpeter's Capitalism, Socialism and Democracy a while back but got distracted and never finished it - but based on what I know so far, I recommend it. What I'm not yet clear on, from my limited reading of the original source material and a few related works, is exactly how bullish (or not) Schumpeter was on capitalism. Which reminds me, I really want to go back and finish the book, as I find this topic both fascinating and important.
Then why have Intel desktop CPUs not gotten any better in the last three years?
It's merely a business practice adopted by Intel, which they attempt to adhere to. The fact that they refer to it as a law, and that journalists around the world parrot their term, is all merely a magical marketing sales pitch.
They TRY to double the transistor count on a periodic basis, and, when they do, suddenly, they've got a newer, better, more expensive thing to market and sell. Wowee! The future is such a miracle!
Moore's law is about their success in doing so. Your car's horsepower doesn't double every 18 months, nor does its gas consumption halve, but with transistors it actually happens.
Moore's law ... OBSERVES that the number of transistors we can fit on a chip doubles every two years.
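As a back-of-the-envelope sketch of what that observation implies (the function name here is just illustrative, and the two-year doubling period is the commonly cited figure):

```python
def projected_transistors(initial: int, years: float, doubling_period: float = 2.0) -> float:
    """Project a transistor count forward, assuming a fixed doubling period.

    This is purely illustrative arithmetic for Moore's observation,
    not a claim about any particular product roadmap.
    """
    return initial * 2 ** (years / doubling_period)

# 20 years of doubling every 2 years is 2**10 = 1024x growth:
print(projected_transistors(1_000_000, 20))  # 1024000000.0
```

The point of the exponent is that ten doubling periods already buy you three orders of magnitude, which is why the "law" framing stuck despite being an empirical observation.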
If you are trying to argue with me calling it "Moore's law", you are too late. You missed the boat. That's what it is called, law or not.
> Jesus H. Christ, Gordon, that's a fucking fantastic idea! Get marketing on the phone. Holy shit, we're gonna be fucking rich.
Yeah, okay, "observe" whatever you want. Sorry to contradict you on a website. Don't forget to downvote this comment as well. Boo hoo.
P.S. HN doesn't let me downvote your comments, because you are replying to my comments. But, I suppose it's impossible I'm the only one that disagrees with you, so I must be hacking the website.
In any case, the costs are going down, the number of cores is growing, and, more importantly, computing power is not restricted to CPUs (see GPUs/APUs).
That's a big enough difference to count as significant. Especially when you consider that Intel's been lowering their power budget and spending larger portions of their transistor budget on the integrated GPU. The Pentiums on the market in 1995 were 10-12W, and the Pentium 2s on the market in 1998 were 30-45W followed by a die shrink that brought power back down to 20-30W.
More importantly, 2013 Xeons do much more per clock cycle than 2006 Xeons (and 2013 Opterons, for that matter). Improvements in pipelining, branch prediction, caches, etc. are harder to nail down to one number like clock speed or core count, but they can be huge contributors to real-world performance.
My question to HN-ers, as people who write code that further reduces the need for knowledge workers, how should we feel about our role in contributing to a Vonnegut-style dystopia?
Indifference? If not us, someone else will do it. And job loss isn't really a net loss. It's just capital reallocating to another area.
Pride? Automation brings advances down in price point creating a better standard of living for all.
edit: The "need" for knowledge workers isn't reduced; I stated that incorrectly. I meant that fewer knowledge workers are needed for a given task or given output. Clearly, the need for knowledge workers as a percentage of the workforce is higher than in the past and will continue to rise until the machines take over.
We may end up as net job destroyers, but if we create enough new entrepreneurs, or make off-the-grid more viable, then I don't feel so bad.
Tech entrepreneurs are trained to think about optimizing/automating. For example, here's an old blog post by Kopelman: http://redeye.firstround.com/2006/04/shrink_a_market.html Summary: there's a lot of opportunity for you if you find a way to shrink a market.
Without doing too much handwringing, I think it's our responsibility as engineers/technologists to at least try to be aware of what we destroy with what we create.