
The Software Revolution - ggonweb
http://blog.samaltman.com/the-software-revolution
======
lkrubner
I disagree with this:

"The previous one, the industrial revolution, created lots of jobs"

That industrial revolution caused massive unemployment in India, in the
Ottoman empire, in China... almost everywhere that had once been a famous
textile center. The idea that the industrial revolution did not cause
unemployment is an illusion that is caused by looking at only one nation
state. But Britain was the winner of the early industrial revolution, and it
was able to export its unemployment. And because of this, a breathtaking gap
opened up between wages in the West and wages everywhere else. The so-called
Third World was summoned into existence. You can get some sense of this by
reading Fernand Braudel's work, "The Perspective of the World"

[http://www.amazon.com/gp/product/0520081161/ref=pd_lpo_sbs_d...](http://www.amazon.com/gp/product/0520081161/ref=pd_lpo_sbs_dp_ss_3?pf_rd_p=1944687742&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=0520081145&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=1SNDA9Y3ZZY1RSP7SSE3)

The software revolution will be similar with some nations winning and many
others losing.

~~~
api
I'm fairly convinced that eventually there will be two and only two choices:
universal income and a shortened work week, or such extreme wealth divisions
that stability demands a totalitarian police state resembling the worst sort
of comic book cyberpunk dystopia. Age of abundance or feudal hellhole. Your
pick. There simply will not be enough economically viable work to sustain any
system that demands labor to maintain cash flows. Automation will be too
efficient, programmable, adaptable.

... I suppose there is a third option: an anti-technology crusade that bans
automation to restore employment. But a make-work economy sucks, and is not
likely to succeed in the long term.

~~~
tomphoolery
> Age of abundance or feudal hellhole.

Why not both? We certainly have all kinds of societies on earth today. What
makes you think we won't have areas that are highly progressive but also areas
that are extremely sadistic in the treatment of their people?

~~~
ajuc
An interesting thought experiment is to compare their economic output. If
the feudal hellhole is more efficient, eventually only hellholes will remain.

~~~
tonyedgecombe
Fortunately, feudal hellholes seem less productive, especially in an age where
productivity depends more on creativity than on basic labour.

------
shin_lao
_" The previous one, the industrial revolution, created lots of jobs because
the new technology required huge numbers of humans to run it."_

That's not that simple. The industrial revolution initially destroyed a lot of
jobs because it replaced human labor with steam machines.

It created new skilled jobs, it is true, because these new intricate
machines required advanced skills to maintain - reminiscent of today's
software engineering jobs - but it destroyed a lot of jobs in agriculture
and textiles, because you could produce more with a fraction of the labor.

To the point that people would protest and destroy steam machines, accusing
them of stealing jobs (see "Luddites").

Let's not forget that what is typically called the "industrial revolution"
spans over a century, and it took a while for it to create a lot of new jobs
(roughly the second half of the 19th century) - and those new jobs were
initially very poorly paid.

~~~
puranjay
The problem is that it was relatively easy to master the technologies of the
industrial revolution.

It isn't the same for the software revolution.

You can take virtually any adult from any part of the world and teach him how
to work in a factory within a few months of study.

You can't take any random adult and teach him how to code. Mastering coding
requires far more intellect and time.

I think of myself as relatively smart but I've struggled with learning how to
code. The learning curve is steep, even for someone as familiar with
technology as I am.

I'm sure I could learn how to operate a lathe at a factory within weeks. But
I'm not sure a lathe operator at a factory could learn how to code within the
same time frame (if at all)

So no, the two are not comparable. The software revolution will leave a huge
group of people permanently unemployable.

The 50 year old weaver in 1800 Manchester could learn how to operate a machine
at a mill - it is largely a mechanical process, after all.

But the 50 year old truck driver in 2015 isn't going to learn how to write
code - not within a reasonable time frame anyway

~~~
ghostly_s
Your arrogance is showing. Sure, you could learn how to operate a lathe on an
assembly line in a matter of weeks; in just the same way nearly any adult of
average education could learn in a matter of weeks to write WordPress
templates, or cobble together SQL queries, or etc. etc. To become a master
machinist, the kind who can do anything with a lathe that a lathe can be asked
to do? Years of dedication and expertise.

~~~
cbd1984
The problem with this is that those cobbled-together SQL queries are precisely
the kinds of things good programmers either replace or automate away; you can
automate away an assembly-line job, but not as cheaply, and not as easily.

There might not be any such thing as a 100x programmer or whatever, but the
value proposition for replacing a few sub-par programmers with one better
programmer and a framework is a lot clearer than the one for replacing a
handful of assembly line workers with a more complicated and more expensive
piece of machinery.

~~~
kragen
Hmm, I don’t think so. If you have a dozen similar queries, then you can
factor out the similarity (say, into a view, or a Ruby subroutine that
generates the SQL). If you have a thousand that vary in a lot of different
ways, a few subroutines isn’t enough; you need a DSL to factor out the
similarity, aka “automate away” the queries.

But then you need someone to write down the idiosyncratic bits of each query,
the thing that makes it different from the other thousand, in your DSL. For a
lot of systems, the right DSL is in fact SQL itself, but even if it’s not, you
still need people to write in it.

In short, nonprogrammers writing cobbled-together SQL queries are the result
of automating away the non-idiosyncratic aspects of the queries.
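As a loose sketch of the factoring described above (in Python rather than Ruby; all names here are illustrative, not from any real system), the shared query shape moves into a generator while each call site writes down only the idiosyncratic bits:

```python
def build_query(table, columns, filters):
    """Generate a parameterized SELECT statement.

    This factors out the shape shared by many similar queries; the
    caller supplies only the idiosyncratic bits (table, columns, and
    filter values), which is exactly the part that still needs a human.
    """
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    if filters:
        sql += " WHERE " + " AND ".join(f"{col} = ?" for col in filters)
    return sql, list(filters.values())

# Two of the "thousand similar queries", expressed through the generator:
q1 = build_query("orders", ["id", "total"], {"status": "shipped"})
q2 = build_query("users", ["email"], {"country": "DE", "active": 1})
```

Past a certain point the varying parts stop fitting into a few parameters, and the "DSL" you need to express them converges back toward SQL itself.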

------
Animats
Altman sees the problem, but is vague about what to do about it.

He's right that this is a new thing. It's not "software", per se, it's
automation in general. For almost all of human history, the big problem was
making enough stuff. Until about 1900 or so, 80-90% of the workforce made
stuff - agriculture, manufacturing, mining, and construction. That number went
below 50% some time after WWII. It's continued to drop. Today, it's 16% in the
US.[1] Yet US manufacturing output is higher than ever.

Post-WWII, services took up the slack and employed large numbers of people.
Retail is still 9% of employment. That's declining, probably more rapidly than
the BLS estimate. Online ordering is the new normal. Amazon used to have
33,000 employees at the holiday season peak. They're converting to robots.

After making stuff and selling stuff, what's left? The remaining big
employment areas in the US:

    
    
      State and local government, 13%. 
       That's mostly teachers, cops, and healthcare. 
       (The Federal government is only 1.4%).
      Health care and social assistance, 11%.
      Professional and business services, 11%.
       (Not including IT; that's only 2%) 
      Leisure and hospitality, 8%
      Self-employed, 6%.
    

That's about 50% of the workforce. All those areas are growing, slightly. For
now, most of them are difficult to automate. That's what the near future
looks like.
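The shares listed above can be sanity-checked with a quick sum (figures as given in this comment):

```python
# Remaining big employment areas, as percentages of the US workforce.
shares = {
    "state and local government": 13,
    "health care and social assistance": 11,
    "professional and business services": 11,
    "leisure and hospitality": 8,
    "self-employed": 6,
}
total = sum(shares.values())
print(total)  # 49 -- i.e., "about 50% of the workforce"
```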

[1]
[http://www.bls.gov/emp/ep_table_201.htm](http://www.bls.gov/emp/ep_table_201.htm)
[2] [http://deadmalls.com/](http://deadmalls.com/)

~~~
prostoalex
I don't think it's possible to figure it out with the knowledge we have today.

Just like 50 years ago no one would've predicted that "social media manager
with knowledge of WordPress and Drupal" was a job.

For what it's worth, US labor participation rate is still way above its
historic lows
[http://en.wikipedia.org/wiki/File:US_Labor_Participation_Rat...](http://en.wikipedia.org/wiki/File:US_Labor_Participation_Rate_1948-2011.svg)

~~~
Diederich
So from the lowest number in the mid-50s to the highest number at the end of
the 20th century, it looks like a 9% span, and we're close to 3% below the
all-time high right now.

Doesn't that big climb in the 60s and 70s represent women's large-scale entry
into the workforce?

I'm also thinking about the whole under-employment thing that's going on. Too
many people I know are employed but part-time, or are getting paid a lot less
than they were in the past.

~~~
prostoalex
To me, under-employment is the "new normal" now that the dominant sectors
requiring people to be present at precisely the same place at the same time
(factories, retail) are lowering their hiring numbers.

Come to think of it, it's quite common in many other (frequently high-paid)
occupations. A dentist whose appointment book is not booked to the max, or a
CPA who has tons of business in March-April but few billable hours in August,
would technically be under-employed.

------
Balgair
Nitpicking: designing new viruses or bacteria for a neo-plague is less likely
than a 'bad actor' getting their hands on enough uranium for a dirty bomb.
Nukes are relatively easy to understand and make; get enough U-235 together
and it pretty much goes boom. Little Boy just shot one subcritical half at
the other. Blammo.

Viruses are not that easy, as the cell is complicated beyond all measure.
It's as if we dug a 4-billion-year-old self-replicating, evolving machine out
of the lunar dust in '69 and brought it back for study: we have the basics,
nothing more at this point, and not even much of a theory beyond Darwinian
evolution (yes, it has advanced a lot recently, but it's still primitive).
Nature is INCREDIBLY better at viruses than anything we have. If we could
engineer viruses the way nature does, and exploit vectors the way nature
does, a lot more diseases and human frailties would be solved by now. Stem
cells are just the beginning here.

We have a LOT more to learn about viruses before anyone, even state-backed
groups, can make a plague in their basement. Heck, we keep smallpox saved
away precisely because it is so virulent and we haven't been able to make
anything so potent since; it took the entire world decades to get rid of it,
and the methods it uses may be of great interest for therapeutic purposes, if
there even are any. In the end, viral vectors of human suffering are doing
just fine on their own; our making a more terrible one is very far off.

~~~
frandroid
If nukes are so easy, how come Iran, with all its oil wealth, is still not
there? It remains a difficult state actor play.

~~~
packetized
Stuxnet, for one. Also, the realization that having a nuclear weapon when
you're Iran is a terribly bad idea, from a diplomatic standpoint.

~~~
jacquesm
Being caught making one is a terribly bad idea. _Having_ one (or more) would
be an enormous plus from a diplomatic standpoint especially after a test,
having them in undisclosed locations and too many to knock them all out in one
strike is even better. Not that that's a world we should prefer to live in but
history has shown that the quickest way to increase your diplomatic clout as a
nation is to join the nuclear club.

------
markbnj
I don't necessarily buy the "AI can end humanity" thing. As a cliche that's
become very easy to repeat, but I've yet to see a postulated mechanism by
which it could actually happen that isn't pure SF. The ending of human life
would not be so easy for computers to accomplish.

But on the subject of concentration of power and the wide-scale elimination of
low- and middle-range jobs I think he is dead on. I fear that the fastest way
to put an end to humanity's climb up from the forest floor is to try and kick
70% of us off the ladder.

~~~
mrec
Sidetrack, but I'm curious. What do you mean by "a postulated mechanism by
which it could actually happen that isn't pure SF"?

It reads as if you're asking for a mechanism for a future technological event
explained purely in terms of present technology. The whole "existential risk"
concerns aren't about what present systems can do, they're about what future
self-improving system might do. If we postulate this hypothetical software
becoming smarter than humans, arguing about what will or won't be easy for it
becomes a bit silly, like a chimp trying to predict how well a database can
scale.

~~~
aaronem
Not OP, but I agree with the point you question, and my rationale for so doing
is that I've yet to see a compelling argument that a self-improving system is
other than the software version of a perpetual-motion machine. Those seemed
plausible enough, too, when thermodynamics was as ill-understood as
information dynamics is now.

~~~
Tenobrus
So far as I'm aware, most proponents of recursively self improving AIs don't
necessarily think they can improve without upper limit (as in perpetual
motion). They just think they can improve massively and quickly. Nuclear power
lasts a hell of a long time and releases a hell of a lot of energy very fast
(see: stars) but that's not perpetual motion/infinite energy either. And prior
to those theories being developed it would seem inconceivable for so much
energy to be packed into such a small space. But it was. Could be for AI too.

Not saying the parallel actually carries any meaning, just pointing out that
you can make multiple analogies to physics and they don't really tell you
anything one way or the other.

~~~
Retra
There are limits on resource management processes that are far too frequently
ignored. "The computer could build it's own weapons!" \-- but that would
requires secretly taking over mines and building factories and processing ores
and running power plants, etc. All of which require human direction. And even
if they didn't, we'd need a good reason to network all these systems together,
fail to build kill switches, _and_ fail to monitor them, _and_ fail to notice
when our resources were being redirected to other purposes, _and_ not have any
backup systems in place whatsoever.

There are just so many obstacles in place, that we'd all already have to be
brain-dead for computers to have the ability to kill us.

------
dikaiosune
A number of sibling comments are pointing out that we're just increasing the
level of skill necessary to do the available jobs, drawing analogies to the
industrial revolution. I think a key bottleneck in this progression is the
mental capacity of the workforce.

Surely there are biological limits on human mental ability, and while we can
definitely bend the rules (education, nutrition, nootropics if they actually
work, etc.), I doubt that we'll ever be able to make ourselves limitlessly
smarter. Even if we can, there will be a serious gap between the
have-whatever-makes-us-smarter and the have-nots, and it will be a
self-perpetuating gap, just like the current wealth gap.

So, what happens when we've used technology to convert all work to mental
exertion and creativity, and most of us have run out of brain
capacity/agility/juice? We're already in a position where most of the
population is not capable of performing the mental tasks which the brave new
software world is built with.

~~~
hiou
Interesting point. But it brings up the scarier part of AI. I believe AI will
become better than humans at mental processes far before it is better than
humans at physical labor, for exactly the reasons you mention. This is
already becoming true in many logistical industries. What happens if AI
replaces all the high-level decision-making positions and humans are
relegated to handling the low-level "last mile" tasks? Where is the inherent
quality that makes AI replace unskilled rather than skilled labor? Order
pickers still exist, but the employees who decided what to pick, which truck
to put it in, and in what order to do it were out of work quite a while ago.

~~~
dikaiosune
I would generally agree, even for a fairly limited definition of AI. An aside:
IIRC, what you're laying out is the "history" of the Dune universe (and I'm
sure many others).

I think that computation _tends_ to replace _relatively_ unskilled
labor, relative to the skill of whatever created the AI/algorithm/whatever. So
right now we're in a situation where software people are automating jobs which
are generally less skilled than their own. Which is a little scary when viewed
from the haves/have-not perspective I laid out above, but is much scarier from
the AI/meatbag perspective you're talking about. It's not so much a difference
between skilled & unskilled labor as it is a question of where on the skill
totem pole the algorithm's creator resides.

Regarding order pickers, I think that's just a question of economics. When
lots of semi-skilled jobs have been automated away and you have thousands of
people clamoring for any chance to be paid, it is frequently cheaper to have
them do the work than to have a robot do it. Although Amazon did have robots
pick orders this last holiday season, suggesting that perhaps economies of
scale have finally caught up on that particular type of manual labor:

[http://time.com/3605924/amazon-robots/](http://time.com/3605924/amazon-robots/)

~~~
hiou
_> Regarding order pickers, I think that's just a question of economics._

I think it will in effect be those tasks that are fully digital. Working in
the physical world is a much more expensive and liability-prone undertaking.
So I believe the tipping point for most jobs being replaced by AI will come
when little or no physical action is required on the part of the actor. Once
a task is 100% digital inputs and outputs, AI will be able to use that
activity as a training set and replace most of those jobs fairly quickly -
that is, once AI matures to the point where it can be set up quickly and easily.

------
jmsdnns
Anyone interested in this topic should pick up a copy of Carlota Perez's
Technological Revolutions and Financial Capital. It speaks more specifically
about the last 5 revolutions, beginning with the Industrial Revolution, and
provides a framework for understanding the relationship between tech
revolutions and finance. Just a fantastic read.

Don't just take my word for it though. Here is Fred Wilson saying the same:
[http://avc.com/2015/02/the-carlota-perez-framework/](http://avc.com/2015/02/the-carlota-perez-framework/)

~~~
leojfc
Wholeheartedly agree with this, I found Carlota’s book to be accessible
despite not having an economics background. There is also an excellent
documentary she’s in about the most recent financial crisis:
[http://www.imdb.com/title/tt2180589/](http://www.imdb.com/title/tt2180589/)

------
brudgers
Two thoughts:

1\. There is a reasonable probability that from a temporal distance equivalent
to ours from the agricultural revolution, the industrial revolution and the
software revolution will be seen as one big thing, not two.

2\. The idea that the amount of available work should be related to the number
of available people does not inevitably lead to creating new forms of work.

    
    
        "An atom-blaster is a good weapon, 
         but it can point both ways." 
           -- Salvor Hardin

------
joesmo
"Trying to hold on to worthless jobs is a terrible but popular idea."

It's terrible, sure, but it's popular only because our economy requires it.
That is the basis for the _whole_ economy. It's not like people are clamoring
to serve McDonald's for minimum wage or clean shit out of bathrooms. They
have no other choices in this economy; the economy demands it. While those
jobs might be necessary, most middle-management and office-type jobs are
incredibly redundant and, frankly, pointless. They are there because people
need to eat and we haven't figured out a better, more appropriate way of
transferring wealth.

"The fact that we don’t have serious efforts underway to combat threats from
synthetic biology and AI development is astonishing."

It's not astonishing considering that these things don't exist, pose no
threat, and the people in power wouldn't understand them even if they did
exist. There are many more pressing issues than hypotheticals.

~~~
rayiner
> While those jobs might be necessary, most middle management and office type
> jobs are incredibly redundant and frankly, pointless.

This is a conceit of software folks that's not borne out by reality. Those
"worthless" jobs continue to exist because software can do 90% of what those
folks do but shits the bed when faced with the other 10%. Software generally
isn't reliable, predictable, or robust in the face of unusual circumstances,
which is why humans continue to do these jobs.

~~~
yellowapple
> software can do 90% of what those folks do, but shit the bed when faced with
> the other 10%

So let software handle the 90% and refactor the current jobs to handle the
other 10%.

> Software generally isn't reliable, predictable, or robust in the face of
> unusual circumstances

Not _yet_, at least.

I agree with you, though, and they're the same reasons why I'm personally
paranoid about self-driving cars. Yeah, the occasional autopilot is nice, but
if a deer jumps in front of my truck, or the self-driving software runs into
some kind of bug (and remember: there's no such thing as perfect software),
I'm nowhere near ready to trust the car's computer over the already-pretty-
sophisticated computer in my skull.

~~~
rayiner
> So let software handle the 90% and refactor the current jobs to handle the
> other 10%.

If the jobs could be so refactored in a cost-efficient way, they would be.

~~~
salvadors
They are being so. It's an ongoing process.

------
YAYERKA
>Trying to hold on to worthless jobs is a terrible but popular idea.

It seems warm and fuzzy to think Sam, and the implicit company he keeps (the
ultra rich)--who are "leveraging not only their abilities and luck" but
already accrued wealth--can and will redistribute it. Anyone who wasn't born
yesterday will simply laugh at this prospect.

I'm not sure why Sam feels the need to call what most of the world is doing
worthless. I think it's crude and indicative of a narrow social and cultural
experience (which surprises me considering his position).

Believe it or not, there are cultures and groups of people who do not revere
technology the way most North Americans do.

Also, a good exercise for Sam (and others possessing a similar world view)
might be to think about how many "worthless" people and jobs it takes to
accomplish the things he does (including this blog post).

~~~
quaunaut
The problem I take with this mindset is that it treats all value systems as
equal.

At the end of the day, if your culture and economic system consistently
create a poorer quality of life for their people, winning on the percentage
of employed citizens doesn't mean a thing. You're treating the absence of
disease as the measuring stick of health, when it's simply one piece of the
puzzle.

I think what Sam has been trying to do for the last few years is to get
others thinking about ways we can enrich more lives as a whole, without
simply slowing labor and progress in their totality - because while that can
work in the short term, it can severely inhibit our long-term ability to
eradicate things like hunger, disease, or poverty.

The thing you also need to be careful of along the way, though, is not making
perfect the enemy of good.

------
peapicker
"Trying to hold on to worthless jobs is a terrible but popular idea."

I'm not sure I agree with this proposition. When I was in India, I noticed a
large amount of roadwork was being done by men with shovels and other fairly
low tech. I asked about it, and was told, to paraphrase, "Sure, we could do it
better and faster with machines, but it is better for society to provide
employment to those who would otherwise be unemployed."

It is laudable to provide people with the dignity of a job, even if it means
some things don't run as efficiently as they could.

While I doubt this would happen to me as a software engineer, I would
certainly rather work and have my dignity than sit on my ass, collect basic
income, and feel worthless.

~~~
peapicker
I knew I would hear these objections... and I still stand by my assertion in
general.

~~~
e7mac
This is a typical greedy-algorithm train of thought: ignoring long-term
issues in favor of immediate, short-term, feel-good solutions. How you can
hold up Indian road-maintenance methods as a standard-bearer for good,
positive policy is beyond me. This just leads to roads not getting fixed and
to laborers stretching out the repair "job" as long as possible, which in
most cases is a very long time. By using these terrible and inefficient
methods, the nation as a whole suffers, including the laborers. Of course,
that loss is not immediately observable, and hence gets brushed under the
carpet. If, instead, the resources wasted on inefficiencies were channeled
better, then maybe not this exact generation, but hopefully the next
generation of the same economic class, could have a better shot at education
and/or a better life.

Your line "I hear these objections, but I still stand by my assertion" reminds
me of the saying (translated to English) - "100 out of 80 (sic) people are
cheats, yet my India is great"

~~~
peapicker
Interesting... I don't live in India; I was born and raised in the US and
live in Colorado.

I don't actually think that India is great, either. But thanks for guessing.

There are obvious problems either way... but to look at another commenter who
posted the Voltaire quote, that is more along the lines of my thinking.

------
bsbechtel
A few points here that I never see anyone acknowledge on this forum about
technology displacing jobs:

1) 100 years ago, >50% of the population was illiterate, now it is something
like 98%. People can, and always will, have the ability to learn new
skills - even complex technology. It just takes some longer than others. We
have the capacity to teach displaced workers new skills, and doing so is no
more overwhelming or impossible than teaching our entire population how to
read.

2) The more efficient every job in the world economy becomes, i.e., the fewer
man-hours it requires, the more man-hours can be devoted to higher-level
work, such as finding cures for obscure diseases, exploring further beyond
our own planet, developing cleaner energy sources, etc.

There are certainly always short-term fears and challenges when technology
revolutions displace jobs, but there is also an immense amount of knowledge
about our world still to gain, and work still to be done. Making the wrong
short-term choices about these things will only delay us in achieving the
goals mentioned above.

~~~
civilian
> 100 years ago, >50% of the population was illiterate, now it is something
> like 98%.

:)

------
abtinf
The last few posts from Sam Altman have been deeply troubling and make me
worried for the future of YC. He presents leftist ideas as fact without
evidence of serious critical thought or even basic economic education.

"The previous one, the industrial revolution, created lots of jobs because the
new technology required huge numbers of humans to run it."

This is factually wrong, which is easiest to demonstrate with a thought
experiment. Imagine you are a weaver or a smith. You have dedicated your life
to mastering the craft and slowly produce products by hand. Now a textile
factory or a foundry opens up. You will suddenly find it impossible to make
your products profitably. Not only will you be out of work, but so will all of
your colleagues in the rest of the country.

Or imagine you are a farmer, and then the green revolution happens. In 1870,
80% of the US population was in agriculture. Today, it's under 2%.

In both of these cases, it will seem like the end of the world to the
displaced workers. But new technology frees their labor for new purposes and
uplifts the standard of living for everyone in society.

~~~
smacktoward
You have a curious definition of "leftist."

This essay more or less boils down to "technology is awesome, except for the
part where it makes the proles restless, someone really ought to figure out
some way to fix that." Which is pretty bog-standard 21st century Davos-über-
alles capitalist thinking.

~~~
api
We've gotten to the point that even admitting the _existence_ of _possible_
negative consequences of current economic trends is "leftist."

It's so very ironically Soviet. Collectivized farming is boosting crop yields!
What? There are people starving? How would that be possible, because
collectivized farming is boosting crop yields!

------
girmad
> Technology provides leverage on ability and luck, and in the process
> concentrates wealth and drives inequality. I think that drastic wealth
> inequality is likely to be one of the biggest social problems of the next 20
> years. [2] We can—and we will—redistribute wealth, but it still doesn’t
> solve the real problem of people needing something fulfilling to do.

What's the best case realistic scenario for redistributing wealth?

~~~
jnbiche
Basic income. People may scoff, but this is one of the extremely rare ideas
that can draw significant support from both sides of the aisle in the US.

There are pockets of strong opposition to the idea on both the right and the
left, but I can only hope that the far left's opposition to basic income
continues. That opposition, in and of itself, makes most politicians in this
country take a serious look at the idea.

~~~
girmad
Two questions:

1\. How will this help with wealth that's already accumulated? I get how this
will slow further accumulation.

2\. How about capital flight? If the US enacts a policy like this, what stops
the super rich from moving to other countries?

~~~
estebank
1.

You're assuming redistribution wouldn't happen through heavy taxation of
existing capital and property, like France's wealth tax[1], where you pay when
your worldwide net worth is above 1,300,000€.

2.

The US taxes citizens regardless of where they live[2], and in the case of
renouncing US citizenship it is required that you pay an exit tax[3]
equivalent to the capital gains of selling all your property, when those
gains exceed $680,000.

[1]: [http://www.french-property.com/guides/france/finance-taxation/taxation/wealth-tax/](http://www.french-property.com/guides/france/finance-taxation/taxation/wealth-tax/)
[2]: [http://hodgen.com/does-the-united-states-stand-alone/](http://hodgen.com/does-the-united-states-stand-alone/)
[3]: [http://www.irs.gov/Individuals/International-Taxpayers/Expatriation-Tax](http://www.irs.gov/Individuals/International-Taxpayers/Expatriation-Tax)
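A minimal sketch of the exit-tax mechanism described in point 2: the $680,000 gains exclusion comes from the comment above, while the 20% rate and the function itself are assumptions for illustration only (the real rules are in the IRS link [3]).

```python
def exit_tax(unrealized_gains, exclusion=680_000, rate=0.20):
    """Hypothetical mark-to-market exit tax.

    Expatriation is treated as if all property were sold on the way
    out: gains above the exclusion are taxed at an assumed 20%
    capital-gains rate (the rate is illustrative, not from the source).
    """
    taxable = max(0, unrealized_gains - exclusion)
    return taxable * rate

print(exit_tax(5_000_000))  # 864000.0
print(exit_tax(500_000))    # 0.0 -- below the exclusion, no tax due
```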

------
realcoolguy
We could all be farmers. We'd all have jobs. Work the fields by hand. You
quickly see the flaw in the logic of the argument for more jobs. (It's much
better to pay a much smaller share of your income for a few specialized
people to do the work.)

You always want more work done by fewer people. Video rental? Automate it!
Automated kiosks becoming too much of a hassle for someone to constantly
restock? Online streaming instead.

This is what we call progress. It's what is allowing us to even debate this as
a topic. I hope it continues because it does create much higher paying jobs
for those that do have jobs and it frees up the workforce to innovate even
more.

------
fragsworth
This might be a really unpopular thing to say around this site, but it
honestly scares me that Sam Altman cares so much about this particular topic
(the risks of AI). He stands to have a ton of influence over it in the coming
years - enough to even lead the charge - and I can't imagine he is smart
enough to do it right. If he fucks up, we are all fucked.

~~~
zenogais
Totally agree. It was a fairly naive article. Sam is on the right track, but
he needs to self-educate a lot more.

The reasoning in the article is pretty garbage. "Software is destroying jobs
and enriching a small handful of people...therefore, the 2 things that
threaten society the most are synthetic biology and AI" ...what?

He then proposes legislation and reforms as the fix. Yet, everything we know
about extremely concentrated power is that it easily escapes, often even
controls, legislation and reforms.

~~~
quaunaut
> The reasoning in the article is pretty garbage. "Software is destroying jobs
> and enriching a small handful of people...therefore, the 2 things that
> threaten society the most are synthetic biology and AI" ...what?

I think you're missing the forest for the trees here. He mentions those two
things because they're problems that are rapidly approaching us. We've been
able to avoid the problems created by past scientific revolutions, but we're
hitting a point now where small groups of people with easily-accessible tools
can affect millions, possibly billions of others. The two easiest ways they
can do this are via manufactured disease and via computer viruses that
disrupt global organizational systems.

It's a problem created purely by modern advances, and we don't have a solution
yet, nor any indication that anyone is even _trying_ to find one.

> He then proposes legislation and reforms as the fix. Yet, everything we know
> about extremely concentrated power is that it easily escapes, often even
> controls, legislation and reforms.

It does, but that doesn't mean you throw your hands up and refuse the
imperfect fix because it isn't perfect. It means you do it, and try to fix the
screwups along the way. The nice part is, what you gain is time to fix the
screwups, where going without would see massive destruction of life, wealth,
and livelihood.

------
FrankenPC
I take a lateral view of technological evolution. Software is essential, sort
of like how fire and the wheel were essential. It's a supremely useful tool as
it turns out. But it's not a reason to exist. The reasons to exist are still
family, love, happiness, etc. Those never change. The problem is money and
money is intimately driven by basic material supply and demand laws. If you
dive deeper, materials become a non-issue if energy is limitless or at least
very abundant. So, energy is actually the problem. Thanks to industrialization
and computation we are getting closer to tech that will make energy nearly
limitless (between say renewable and fission/fusion tech). Once that happens,
practically limitless water, materials production, food production, etc become
a reality when coupled with robotics.

My theory is that there will come a point where humans are allowed to pursue
happiness because the individual humans will no longer be considered a "drain"
on a limited system. For this reason alone I've always thought the
privatization of energy production in America was a TERRIBLE idea. Of all the
cards to hold close, this should have been priority number one.

Anyway, as we approach energy critical mass, more and more humans are being
thrown to the wayside. It doesn't have to be this way. If we collectively held
a belief that we can achieve limitless energy together, then we could find
ways to help those who aren't able to cope with technology still find
happiness knowing full well it was a temporary band-aid.

------
vijayboyapati
I think Sam is incorrect when he writes "The great technological revolutions
have affected what most people do every day and how society is structured. The
previous one, the industrial revolution, created lots of jobs because the new
technology required huge numbers of humans to run it. But this is not the
normal course of technology"

Jobs were not created in the sense that people were previously doing nothing.
Jobs were transferred from low skilled occupations such as tending to farms,
to higher skilled occupations which more closely resembled the salaried jobs
of today.

The industrial revolution was the same as other technological revolutions and
not distinct from them in that it reduced the exertion and strain put on
workers. The industrial revolution gets a really bad rap, but compared to the
work and life expectancy that preceded it, the condition of workers improved
dramatically in the 19th century.

The tendency in all technological revolutions is to reduce the amount of
exertion performed by workers and increase the wealth available for
consumption (and correspondingly reduce its price). So today "work" often
means sitting at a desk while occasionally checking Facebook. Whereas to our
forebears just 5-6 generations ago, this would have seemed extremely
leisurely, if not entirely magical. Not to mention the average worker can now
quite easily afford to keep a device in her pocket which lets her access all
the world's information and connect with almost anyone else on earth for less
than a day's salary.

~~~
pi-err
> The industrial revolution was the same as other technological revolutions
> and not distinct from them in that it reduced the exertion and strain put on
> workers ... the condition of workers improved dramatically in the 19th
> century

Industrialization, massification, standardization of production, and the move
to big cities had tough consequences on workers' lives. Charlie Chaplin shows
just that. It was a tougher life than traditional community life with flexible
amounts of work.

You could definitely argue that there was an improvement in caloric supply
(except in some countries). For general happiness, though, the industrial
revolution was a tough time.

~~~
civilian
I doubt it. My mom's family was one of the first to get a washing machine in
Sweden. As the story goes, my great-grandmother just stared at it while it was
running and cried. Not to be out of a job, but out of joy for all those wasted
hours that she had spent hand washing, and she had now regained.

I'd also factor modern medicine into people's happiness. It is tough on
families when infant deaths are common and horrible diseases like polio and
measles are rampant.

------
sethbannon
For anyone interested in exploring in more depth the topic of human labor
becoming increasingly unnecessary, there was a very forward-thinking book
(published in 2009) written about this by a computer scientist, called "The
Lights in the Tunnel". The author goes as far as to propose new societal
structures to maintain order as this process unfolds. Highly recommended.

[http://smile.amazon.com/gp/product/1448659817](http://smile.amazon.com/gp/product/1448659817)

------
tptacek
Can someone ELI5-with-a-CS-degree why I should be concerned about AI ending
human life?

~~~
EliAndrewC
Here's the standard argument, as I understand it:

\- There are something like 100,000,000,000 neurons in the human brain, each
of which can have up to around 10,000 synaptic connections to other neurons.
This is basically why the brain is so powerful.

\- Modern CPUs have around 4,000,000,000 transistors, but Moore's law means
that this number will just keep going up and up.

\- A couple of decades from now (probably in the 2030s), the number of
transistors will exceed the number of synaptic connections in a brain. This
doesn't automatically make computers as "smart" as people, but many of the
things the human brain does well will become achievable by brute-forcing them
via parallelism.

\- Once you have an AI that's effectively as "smart" as a human, you only have
to wait 18 months for it to get twice as smart. And then again. And again.
This is what "the singularity" means to some people.
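As a back-of-the-envelope check on this timeline, one can project the 18-month doubling forward (a sketch only; the neuron, synapse, and transistor counts are the commenter's round numbers, and the doubling cadence is the Moore's-law assumption the argument rests on):

```python
# Project the commenter's argument: transistors double every 18 months;
# when do they exceed the number of synaptic connections in a human brain?
synapses = 100_000_000_000 * 10_000   # 1e11 neurons * 1e4 connections = 1e15
transistors = 4_000_000_000           # ~4e9 transistors in a 2015-era CPU

year = 2015.0
while transistors < synapses:
    transistors *= 2                  # one Moore's-law doubling
    year += 1.5                       # every 18 months

print(round(year, 1))                 # prints 2042.0
```

Under these particular numbers the crossover lands in the early 2040s rather than the 2030s, so the earlier estimate presumably assumes faster growth or many CPUs working in parallel rather than a single chip.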

The other form of this argument which I see in some places is that all you
need is an AI which can increase its own intelligence and a lot of CPU cycles,
and then you'll end up with an AI that's almost arbitrarily smart and
powerful.

I don't hold these views myself, so hopefully someone with more information
can step in to correct anything I've gotten wrong. (LessWrong.com seems to
generally view AI as a potential extinction risk for humans, and from poking
around I found a few pages such as
[http://lesswrong.com/lw/k37/ai_risk_new_executive_summary/](http://lesswrong.com/lw/k37/ai_risk_new_executive_summary/))

~~~
tptacek
Ok, to both you and 'Micaiah_Chang cross-thread:

I do understand where the notion of hockey-stick increases in intellectual
ability comes from.

I do understand the concept that it's hard to predict what would come of
"superintellectual" ability in some sort of synthetic intelligence. That we're
in the dark about it, because we're intellectually limited.

I don't understand the transition from synthetic superintellectual capability
to actual harm to humans.

'Micaiah_Chang seems to indicate that it would result in a sort of
supervillain, who would... what, trick people into helping it enslave
humanity? If we were worried about that happening, wouldn't we just hit the
"off" switch? Serious question.

The idea of genetic engineering being an imminent threat has instant
credibility. It is getting easier and cheaper to play with that technology,
and some fraction of people are both intellectually capable and
psychologically defective enough to exploit it to harm people directly.

But the idea that AI will exploit genetic engineering to do that seems
circular. In that scenario, it would _still_ be insufficient controls on
genetic engineering that would be the problem, right?

I'm asking because I genuinely don't understand, even if I don't have a
rhetorical tone other than "snarky disbelief".

'sama seems like a pretty pragmatic person. I'm trying to get my head around
specifically what's in his head when he writes about AI destroying humanity.

~~~
Micaiah_Chang
Er, sorry for giving the impression that it'd be a supervillain. My intention
was to indicate that it'd be a weird intelligence, and that by default weird
intelligences don't do what humans want. There are some other examples which I
could have given to clarify (e.g. telling it to "make everyone happy" could
just result in it giving everyone heroin forever, telling it to preserve
people's smiles could result in it fixing everyone's face into a paralyzed
smile. The reason it does those things isn't because it's evil, but because
it's the quickest+simplest way of doing it; it doesn't have the full values
that a human has)

But for the "off" switch question specifically, a superintelligence could also
have "persuasion" and "salesmanship" as an ability. It could start saying
things like "wait no, that's actually Russia that's creating that massive
botnet, you should do something about them", or "you know that cancer cure
you've been looking for for your child? I may be a cat picture AI but if I had
access to the internet I would be able to find a solution in a month instead
of a year and save her".

At least from my naive perspective, once it has access to the internet it
gains the ability to become highly decentralized, in which case the "off"
switch becomes much more difficult to hit.

~~~
tptacek
So like it's clear to me why you wouldn't want to take a system based on AI-
like technology and have it control air traffic or missile response.

But it doesn't take a deep appreciation for the dangers of artificial
intelligence to see that. You can just understand the concept of a software
bug to know why you want humans in the observe/decide/act loop of critical
systems.

So there must be more to it than that, right? It can't just be "be careful
about AI, you don't want it controlling all the airplanes at once".

~~~
Micaiah_Chang
The "more to it" is "if the AI is much faster at thinking than humans, then
even humans in the observe/decide/act loop are not secure". AI systems having bugs
also imply that protections placed on AI systems would also have bugs.

The fear is that maybe there's no such thing as a "superintelligence proof"
system, when the human component is no longer secure.

Note that I don't completely buy into the threat of superintelligence either,
but on a different issue. I do believe that it is a problem worthy of
consideration, but I think recursive self-improvement is more likely to be on
manageable time scales, or at least on time scales slow enough that we can
begin substantially ramping up worries about it before it's likely.

Edit: Ah! I see your point about circularity now.

Most of the vectors of attack I've been naming are the more obvious ones. But
the fear is that, for a superintelligent being perhaps anything is a vector.
Perhaps it can manufacture nanobots independent of a biolab (do we somehow
have universal surveillance of every possible place that has proteins?),
perhaps it uses mundane household tools to MacGyver up a robot army (do we
ban all household tools?). Yes, in some sense it's an argument from ignorance,
but I find it implausible that every attack vector has been covered.

Also, there are two separate points I want to make, first of all, there's
going to be a difference between 'secure enough to defend against human
attacks' and 'secure enough to defend against superintelligent attacks'. You
are right in that the former is important, but it's not so clear to me that
the latter is achievable, or that it wouldn't be cheaper to investigate AI
safety rather than upgrade everything from human secure to super AI secure.

~~~
jameshart
First: what do you mean 'upgrade everything from human secure'? I think if
we've learnt anything recently it's that basically nothing is currently even
human secure, let alone superintelligent AI secure.

Second: most doomsday scenarios around superintelligent AI are, I suspect,
promulgated by software guys (or philosophers, who are more mindware guys). It
assumes the hardware layer is easy for the AI to interface with. Manufacturing
nanites, bioengineering pathogens, or whatever other WMD you want to imagine
the AI deciding to create, would require raw materials, capital
infrastructure, energy. These are not things software can just magic up, they
have to come from somewhere. They are constrained by the laws of physics. It's
not like half an hour after you create superintelligent AI, suddenly you're up
to your neck in gray goo.

Third: any superintelligent AI, the moment it begins to reflect upon itself
and attempt to investigate how it itself works, is going to cause itself to
buffer overrun or smash its own stack and crash. This is the main reason why
we should continue to build critical software using memory unsafe languages
like C.

~~~
Micaiah_Chang
By 'upgrade everything from human secure' I meant that some targets aren't
necessarily appealing to human attackers but would be to an AI. For example,
for the vast majority of people, it's not worthwhile to hack medical devices
or refrigerators; there's just no money or advantage in it. But for an AI that
could be throttled by computational speed, or that wishes people harm, they
would be appealing targets. There just isn't any incentive for those things to
be secured at all unless everyone takes this threat seriously.

I don't understand how you arrived at point 3. Are you claiming that somehow
memory safety is impossible, even for human level actors? Or that the AI
somehow can't reason about memory safety? Or that it's impossible to have self
reflection in C? All of these seem like supremely uncharitable
interpretations. Help me out here.

Even ignoring that, there's nothing preventing the AI from creating another AI
with the same/similar goals and abdicating to its decisions.

~~~
jameshart
My point 3 was, somewhat snarkily, that AI will be built by humans on a
foundation of crappy software, riddled with bugs, and that therefore it would
very likely wind up crashing itself.

I am not a techno-optimist.

------
mkempe
There are so many things wrong with this essay -- it combines a Marxist-
inspired call for redistribution of wealth in the name of egalitarianism (are
there specific individuals who will control how much we are allowed to own?),
fear of change in what people do for a living (are people too exploited and
too stupid to adapt?), and fear of technological progress in private hands
(does the State, or some other supra-collective, have a magic wand and
omniscient mantle of benevolence?).

The worst parts are the claim that (forced) redistribution is inevitable and
that regulation will somehow prevent "bad privately-done things" from
happening -- as if regulation is a perfect solution to any problem one
confesses to fear or claims to dislike.

In principle, government intervention hinders technological progress, derails
economic progress, and ultimately destroys the economy. Maybe people who have
made a lot of money with one wave of progress should leave future generations
alone and let them free to do the same -- instead of strangulating them by
intervention and regulation that the now-wealthy did not have to suffer.

~~~
programmarchy
We already have massive (forced) redistribution of wealth in the form of
corporate welfare. Tariffs, patents, copyrights, land grants, competition-
prohibiting regulation, direct subsidies, indirect subsidization of capital
inputs via compulsory state education, roads, communication infrastructure,
etc, etc, etc.

I fully agree with you that government intervention hinders technological
progress, derails economic progress, and ultimately destroys the economy. The
concentration of wealth through political rather than economic means is a huge
problem.

But, saying you want to cut welfare to the poor is what I would call vulgar
libertarianism, and ineffective anti-state propaganda. The poor and middle
classes are already getting royally screwed. Ameliorating the disastrous
effects that corporate privilege has on the poor isn't where we should be
directing our righteous indignation, in my opinion.

I think it'd be more constructive to focus on cutting welfare from the top
down, and cutting taxes from the bottom up.

~~~
mkempe
I also support the abolition of corporate welfare as a first step. After that,
there will be time to talk about other forms of welfare. But that's not what
Sam's Marxist-inspired essay is advocating, on the contrary.

------
api
One thing I often see repeated is that every new industrial technology
initially destroyed some jobs but eventually created a lot of new ones: the
cotton gin, the steam engine, the car, etc. That's because these things only
did one thing, and by doing that one thing really well they opened a lot of
side-niches. It took a lot of time to invent and scale out new machines, so
for long periods of time these side-niches would be available to people.

I completely agree with Sam here. I think the flaw in the above argument is
that it ignores the _qualitative_ difference between Turing-complete machines
and special-purpose machines. Turing-complete mechanization is broad and
endlessly adaptable.

Programmable machines aren't machines. They're machine-machines, and can be
adapted to new tasks in short linear time by small numbers of people with
little capital. That makes it "different this time."

I also think that malicious AI is already kind of here, but in a hybrid
"cyborg" form. It's the corporation. Corporations that destroy human
livelihoods and abuse human beings in general to maximize per-quarter
shareholder returns are a bit like "paperclip maximizers."

[http://wiki.lesswrong.com/wiki/Paperclip_maximizer](http://wiki.lesswrong.com/wiki/Paperclip_maximizer)

The danger is not in some Terminator-like AI apocalypse, but that incremental
advances in AI will make these things progressively less and less human and
more and more machine. I can imagine a future almost-entirely-silicon
financial corporation that uses its speed and superior analytical intellect
(at least in the financial domain) to lay waste to entire national economies
in order to maximize shareholder value... i.e. paperclips. Since this would
likely be found in the hedge fund world, nearly all of this siphoned-off
wealth would be captured by a small number of already very rich people.

Nightmare AI wouldn't be much like Skynet -- a new being pursuing its own
self-interest. It would be more like a very, very smart dog helping its elite
owners "hunt" the rest of us in the financial sphere. This could fuel even
more massive consolidation of financial wealth. We are already seeing the
beginning of this with algorithmic quant finance.

In a thread on Twitter I also heard someone bring up "AI assisted
demagoguery," a notion I found to be total nightmare fuel. Imagine a Hitler
wannabe with a massive text-comprehending propaganda-churning apparatus able
to leverage the massive data sets available via things like the Twitter and
Facebook feeds to engage in high-resolution persuasion of millions and
millions of people. The thing that makes this scary is that populist
demagoguery gets more appealing when you have things like massive wealth
inequality.

You can make counter-arguments here, but I also agree with Sam that it is
foolish to just hand-wave these kinds of possibilities away. We should be
thinking about them, and about how -- as he puts it -- we can find ways to
channel this trend in more positive directions.

~~~
radmuzom
Agree that earlier advancements created a lot of new jobs. But do you think
that lack of globalization in previous instances had a very significant part
to play? Especially where and how the cost savings achieved through automation
were invested back? (honest question)

~~~
api
I do think that lack of globalization made it easier for societies to achieve
good resolutions to internal labor disputes-- globalization prevents employers
from going outside a nation's socioeconomic framework to break the negotiating
power of employees. But I think this is a linear term in the equation, not an
exponential one. Exponential effects always dominate linear ones.

------
mkempe
Economic progress in a free market involves technological progress, the
accumulation of capital, and the increased productivity of labor thanks to the
two previous elements. That's not something to fear. There is no limit to
human ingenuity. Labor is not a fixed pie that technology and capital shrink
over time. And yes some labor-intensive activities have been and will continue
to be replaced by capital-intensive systems -- which is wonderful because it
means that it frees people to make ever-more of their time and powers of
reason. The demand for labor (hence the wages for labor) increases with the
accumulation of capital -- not the opposite. There is a line of economic
thinkers who have elaborated on a pro-Capitalist view of economics, from Adam
Smith to George Reisman via Jean-Baptiste Say, David Ricardo, Carl Menger, and
Ludwig von Mises. It may help to read them.

------
vinceguidry
> A number of things that used to take the resources of nations—building a
> rocket, for example—are now doable by companies, at least partially enabled
> by software.

This is nothing new. Organized humans have always been able to cause outsized
amounts of harm to other humans, they hardly needed software to do this. And
in far greater orders of magnitude than a rocket. The effective answer to new,
software-enabled threats is the same as it is to mercenaries, industrial
polluters, rampant loggers and strip miners, arms manufacturers, human
traffickers. Organization at a bigger scale to combat it. Pull the rug out
from under them economically, understand their place sociologically, raise
awareness culturally.

------
forloop
Economically:

Step 1) People will be paid for their data. Information used by systems
doesn't spontaneously come into existence.

Step 2) Get rid of governments. They're inefficient war-mongers. A theatre to
obfuscate kleptocracy!

Step 3) See step 1. You can pay me in cryptocurrency. Thanks.

------
getdavidhiggins
If you have the time, it's worth listening to McKenna's talk here:
[https://www.youtube.com/watch?v=7PucjQXO2k0](https://www.youtube.com/watch?v=7PucjQXO2k0)

McKenna is very dense and goes into immense detail about how we got here, and
more importantly where we are going. I need not say more, except that McKenna
compares the technological revolution to the birth of a child ― bloody and
traumatic, but at the same time wonderful and awe-inspiring.

------
jbhatab
I'm reading a lot of criticism about specific points he is making, but I think
the bigger takeaway is to address the implications technology is going to have
on society in the coming decades. I think it would be impossible for a single
person to effectively take the last 1000 years of society and the current
state of society and perfectly explain the problem/solution.

I commend him for addressing these issues in addition to other global issues,
[http://blog.samaltman.com/china](http://blog.samaltman.com/china), and I
think we need to organize as a community to address these things. I almost
view it as similar to when the constitution was being written in the US. There
was an immense number of factors at play, but they organized to pull
together some sense of structure to guide society in a better direction. Now
we are writing the constitution of technology in a sense.

The problem is that we still have a system that has the government and
politics writing the major rules, while some of the biggest influencers on
society's future will be technology. I think we need to own this fact as a
community and start to work towards something to structure our growth and the
impact it will have.

------
yellowapple
I feel like the article missed a big point about the lessons we (should have)
learned from atomic energy: that in the public mind the negative aspects of
nuclear power (mass destruction, etc.) have significantly outweighed the
positive aspects (relatively clean and abundant energy, with a safety track
record that's among the best of any energy source despite a few high-profile
incidents), likely because the negative aspects were humanity's first
impressions of such an energy source.

If this "software revolution" is to be a positive direction for humanity, we
as a species _must_ learn from this. The sooner a positive and constructive
use of a technology can become household knowledge, the better.

IBM's recent dabbling in machine learning and AI with Blue Gene and whatnot is
a good step in the right direction for that particular potential-weapon-of-
mass-destruction, and hopefully other companies and entrepreneurs can
spearhead further developments there in order to emphasize the use of
synthetic intelligences for benign uses - self-driving cars, self-cleaning
homes, the works.

Meanwhile, the idea of being able to genetically modify crops in ways not
previously possible through selective breeding alone is _very_ promising,
though it certainly needs to overcome the bad PR tacked onto it thanks to the
likes of Monsanto and its ilk. The improvements to crop yields made possible
with genetic engineering will at least postpone humanity's eventually reaching
Earth's carrying capacity, giving us more time to build up our orbital
infrastructure
and prepare for humanity's eventually-essential need to expand beyond the
confines of just one quasi-spherical rock flailing about in space.

------
jqm
This was a good post.

Synthetic Biology is a big concern, and not just that people may deliberately
use it to cause harm (which is definitely a concern as well).

My grandfather was an engineer during the '50s and early '60s working on
nuclear bomb projects. He was there when they detonated bombs at Bikini Atoll.
I never met him because he died of cancer in his mid-40s (maybe not an
occupational hazard, but then again...). They had started using technology
only partially figured out and not thought all the way through. Never mind
that they were using it to produce horrible things.

I'd like to think we would learn, and next time will be different, but the
reality is, probably not. We will likely repeat the exact mistakes of hubris
and rushing. Especially with a technology less controllable by central
authority than nuclear capability. I don't know about everyone else here, but I
very seldom write a program that just works the first time. And there is a
lesson in there somewhere. Especially when you don't get second chances.

But I'm not dumb enough to wish knowledge away either. So I suppose we will
eventually adjust. If we are still here that is.

------
thomasfoster96
> The previous one, the industrial revolution, created lots of jobs because
> the new technology required huge numbers of humans to run it.

This doesn't really make sense. The industrial revolution more or less
replaced lots of relatively unproductive jobs with a smaller number of much
more productive jobs. There might have been more work overall, but individual
people were doing more work as well.

> Technology provides leverage on ability and luck, and in the process
> concentrates wealth and drives inequality. I think that drastic wealth
> inequality is likely to be one of the biggest social problems of the next 20
> years.

Wealth inequality is already a huge problem, and has been for a long time. I
don't see how it's going to be worse in the next 20 years, given that in a lot of
countries (China, India, etc.) we're seeing wealth flow to, or be created by,
a growing middle class. A lot of the 'Occupy' movement was somewhat misguided
- the percentage of people in the United States or Australia or the UK who are
in the 1% worldwide is perhaps as high as 20% or 30%.

Software, rather than hindering, may very well help people in rapidly
developing nations generate wealth because the economic barrier to entry for
software businesses is comparatively low compared to other types of
businesses.

I'm also a little concerned that it seems as though many people believe wealth
redistribution is the only way to make it more equal. Why can't we create
wealth in some places? As far as I understand, wealth is not finite.

Once we get to the point of widespread AI-driven automation, then we'll have a
real economic problem. It won't be just because people can't generate wealth -
people may very well begin to lose wealth. But this is a 50-year problem, not
a 20-year one.

------
swatow
There seem to be two popular but contradictory views on HN.

The first is that technology is creating a rift between those who know how to
program, where jobs are being created, and those who don't, where jobs are
being destroyed.

The second is that high programmer wages are a good thing, and that attempts
to flood the market with programmers, e.g. by teaching everyone to code, are
an attack on the middle class.

------
known
It takes 3 of us to fix a light bulb

Indians visiting the West for the first time are usually struck by how
establishments there manage with so few people. It's the other way round for
expats in India. Dmitry Shukov, CEO of MTS India, was amazed to see eight
people pushing the boarding ladder at the airport the first time he arrived in
Delhi.

"In Russia there is just one person doing that job. In sectors like retail,
there is always excess staff in India," he says. It's also very common in the
hospitality industry, where guests are pampered with a level of service
unheard of in the West. But splitting one person's job among three not only
reduces wages, but also the challenge. Or, as Rex Nijhof, the Dutch chief of
the Renaissance Mumbai Hotel puts it: "If you have something heavy and only
two people available to move it, you have to find a way to build wheels on it.
In India, you just get six more people."

[https://justpaste.it/Argumentative](https://justpaste.it/Argumentative)

------
mc32
I think a subtext of this, which remained unsaid, is that this kind of
oversight would be most effective by an overarching organization having
dominion over all suborganizations. That is to say, I don't think it would be
as effective if this is instituted nation-by-nation.

But the conclusion of this brings up the idea, dystopian at least in many
minds, of a one-world government.

But, if you want to place controls, and compliance is in effect voluntary from
the point of view of each nation-state, why would one state that wants to
overcome any other state subject itself to this kind of embargo?

Let's say you're Japan. China poses an economic (and thus, arguably, an
existential) threat. Why would Japan feel obliged to put its research to
sleep?

To me, unless everyone agrees, and we have verification mechanisms, and
violations have severe consequences exceeding any benefits from this
technology, there will always be an actor who thinks they can sneak by and
pounce on the others.

------
graycat
So, we're talking about the rich people.

Okay, let's get some ballpark arithmetic: Suppose the 1000 richest people in
the US are worth on average $1 billion, that the US has 330 million people,
and that we _redistribute_ the worth of those 1000 people to the 330 million.
Then each of the 330 million will have money enough for a nice new car, four
years in college or graduate school, to pay off their student loans, make a
down payment on a single-family, three-bedroom, two-bath house on a nice
street, and won't have to struggle with a miserable job? Will they? Maybe?
Let's see:

1000 * 10**9 / ( 330 * 10**6 ) = 3,030.30

dollars per person. Oh, well.

But, maybe in the US the top 100,000 people have average worth of $1 billion?
Then, sure, we'd get

100,000 * 10**9 / ( 330 * 10**6 ) = 303,030.30

dollars per person.

Ah, that's more like it! We just need a lot more billionaires!
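The back-of-the-envelope arithmetic above can be double-checked with a few
lines of Python (my own sketch, not from the thread; the head counts, average
worth, and 330 million population are the comment's assumptions):

```python
def per_person(n_rich: int, avg_worth: float, population: float) -> float:
    """Split the combined worth of n_rich people, each worth avg_worth,
    evenly across the whole population."""
    return n_rich * avg_worth / population

US_POPULATION = 330e6  # 330 million people, as assumed above

# 1,000 people worth $1 billion each: about $3,030 per person.
print(round(per_person(1_000, 1e9, US_POPULATION), 2))

# 100,000 people worth $1 billion each: about $303,030 per person.
print(round(per_person(100_000, 1e9, US_POPULATION), 2))
```

The same two lines reproduce the 3,030.30 and 303,030.30 figures in the
comment.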

I would suggest that maybe by far the biggest pot of wealth is in pension
funds for middle class workers.

------
powertower
> We can—and we will—redistribute _wealth_...

Please, someone explain to me how _wealth_ can be re-distributed?

I know how money can be re-distributed (and in turn made less effective -
i.e., more is required to purchase less), but how exactly do you either a)
create "wealth" (via government policy or law) or b) take someone else's
_wealth_ , split it up into smaller pieces, and make the end-recipients more
"wealthy"?

The reason I ask is because wealth is an effect of X, not a cause of X, just
like gravity is an effect of mass, and not the cause of mass.

And that X tends to be all the things 97% of the population is either not
willing to invest in, or does not have the capacity for...

So when you re-distribute money (as and from "wealth"), you also remove any
motivation from anyone to actually create more wealth. And now you have a
society that is fundamentally broken on both an economic and personal self-
worth level.

~~~
lukeschlather
Mass is probably a poor analogy, but let's take it a bit further. Let's say
you have a sun about to collapse into a black hole under its own weight, and
some lifeless rocks elsewhere. You could, perhaps, skim off some of the sun to
stop it from collapsing, and use the mass to create a thousand new suns, and
breathe life into those lifeless rocks far from the main sun's light.

To be more literal: when someone has no wealth, they have no capability to
create wealth. You're simply incorrect that wealth is only an effect and not
its own cause. Like matter, wealth draws wealth to itself. Unlike matter,
wealth can be used to create more wealth.

~~~
powertower
> Mass is probably a poor analogy, but let's take it a bit further.

That would be even more wrong then.

> You're simply incorrect that wealth is simply an effect and not its own
> cause.

I'm not arguing that wealth does not generate more wealth, it does.

I'm arguing that wealth cannot be created by policy change or by
re-distribution of money. Because money, when amounts are merely relative
person to person, is neither wealth, nor does it do anything to help create
more wealth...

That is, handing out relative amounts of money to everyone does not also hand
out the drive, motivation, and hard work needed to create more wealth; it does
just the opposite.

As all those things are more a product of a _lack of wealth_ than of having a
comfortable living existence.

> Like matter, wealth draws wealth to itself. Unlike matter, wealth can be
> used to create more wealth.

Gravity is what draws things in. Mass just creates that gravity. Without
gravity, you just have static and stale things.

------
lonnyk
> The new existential threats won’t require the resources of nations to
> produce.

The continued openness of the Internet relies on the Government, no? Is it
wrong to think that AI relies on that as well?

> The fact that we don’t have serious efforts underway to combat threats from
> synthetic biology and AI development is astonishing.

Isn't this what government is for?

~~~
ryandvm
A government is a collection of people that, like almost everyone else, are
almost exclusively motivated by the goal of "having a job tomorrow".
Governments, inasmuch as they can be anthropomorphised, are not especially
interested in solving problems beyond the continuance of the apparatus.

------
tomblomfield
I think the headline message of this article is important - "drastic wealth
inequality is likely to be one of the biggest social problems of the next 20
years"

But I think the article really lost a lot of its punch with non sequiturs like
"If we can synthesize drugs, we ought to be able to synthesize vaccines".....

------
azakai
> In human history, there have been three great technological revolutions and
> many smaller ones. The three great ones are the agricultural revolution, the
> industrial revolution, and the one we are now in the middle of—the software
> revolution.

Arguably the control of fire was a great revolution as well.

~~~
alexfarran
And the wheel, that was revolutionary.

~~~
zanny
And spoken language, and writing, and masonry, and metalcasting, and
cultivation, and domestication, scientific theory, logical thought, etc. A lot
of things mattered a lot to get us where we are, and fundamentally changed the
world when they happened (or at least changed the founders world in the short
term as it spread globally).

~~~
graycat
> domestication

Huge. Cats to eat the mice that would eat the grain. Dogs to help in the
hunting. Goats for milk and meat. Sheep for wool, milk, and meat. Horses for
power and meat. Cows for milk and meat. Biggies.

Another biggie was open-ocean sailing. Why? Because there were no toll gates
on the open ocean! Across land, you had to pay up to the local castle every
few miles. So, if you got some silk in the eastern Black Sea and wanted to
sell it in England, go across Europe? Heck no: just get a ship and go by
water. Same for spices from India bound for Europe, etc.

------
netinstructions
Bill Joy wrote about this back in 2000. The essay was titled 'Why the Future
Doesn't Need Us' and offers a (in my mind) very depressing view of the future.

One of his worries is that whatever positive things we can do with new
technology are vastly outnumbered by the negative things we can do with it.
Bad actors can be few and far between but still destroy the world.

It's interesting that Bill is worried about genetic engineering,
nanotechnology and robotics. Sam specifically calls out AI and synthetic
biology.

There are a lot of recurring themes between these two articles, and both
propose similar solutions: proceed cautiously.

[http://archive.wired.com/wired/archive/8.04/joy.html](http://archive.wired.com/wired/archive/8.04/joy.html)

------
mbesto
> _What can we do? We can’t make the knowledge of these things illegal and
> hope it will work. We can’t try to stop technological progress. I think the
> best strategy is to try to legislate sensible safeguards but work very hard
> to make sure the edge we get from technology on the good side is stronger
> than the edge that bad actors get._

> _But I worry we learned the wrong lessons from recent examples, and these
> two issues—huge-scale destruction of jobs, and concentration of huge
> power—are getting lost._

Yet, we still promote "better to beg forgiveness than to ask permission". You
can't have both -- "legislative safeguards" and a bunch of entrepreneurs
running around begging forgiveness after they create destruction.

------
pgodzin
> I think the best strategy is to try to legislate sensible safeguards

This seems like an extremely difficult path to take, as legislation will
either be preemptive and slow down innovation, or lag behind in understanding
the technology, at which point it would be too late.

------
smanuel
This post kind of reminds me of a book I read about 10 years ago.
"Revolutionary Wealth" by Alvin Toffler:

[http://en.wikipedia.org/wiki/Revolutionary_Wealth](http://en.wikipedia.org/wiki/Revolutionary_Wealth)

AFAICR in this book, the third revolution is referred to as "The revolution of
knowledge" and I think it better describes how and what has changed during the
past... 20 years.

Great book by the way. I think it was where I read for the first time a good
perspective of how 3d printers could play an important role in the near
future.

------
adamzerner
> We can’t try to stop technological progress.

Why? This seems like a _very_ important claim that wasn't explored enough. It
felt like a cached thought
([http://wiki.lesswrong.com/wiki/Cached_thought](http://wiki.lesswrong.com/wiki/Cached_thought)).

What are the chances that some existential crisis happens? What are the
benefits of technological progress? Why do you think that the latter outweighs
the former?

Perhaps you thought it wasn't worth going into here? That's fair, but I think
it's worth a quick paragraph to summarize.

------
mdlthree
Video by CGP Grey on a similar topic - Humans need not apply -
[https://www.youtube.com/watch?v=7Pq-S557XQU](https://www.youtube.com/watch?v=7Pq-S557XQU)

------
clamprecht
Can you provide a link to the comment in footnote one, "many people believe
that fishing is what allowed us to develop the brains that we have now" ? I
hadn't heard this before.

~~~
maxerickson
[https://www.psychologytoday.com/blog/lives-the-
brain/201001/...](https://www.psychologytoday.com/blog/lives-the-
brain/201001/was-seafood-brain-food-in-human-evolution)

(that lays out one theory, not sure if it's the same one from the footnote,
but it probably is.)

~~~
clamprecht
Awesome, thanks!

------
beefman
> because it takes huge amounts of energy to enrich Uranium. One effectively
> needs the resources of nations to do it.

False. The electricity to enrich a bomb's worth of material costs about
$60,000. The plant itself is cheaper than the Tesla gigafactory, and making
regular reactor fuel it will yield 1,000 times the energy it takes to run (the
gigafactory will be lucky to break even). Laser enrichment is even cheaper, of
course.

------
jamesrcole
> _But a rocket can destroy anything on earth. [...] What can we do? [...] I
> think the best strategy is to try to legislate sensible safeguards but work
> very hard to make sure the edge we get from technology on the good side is
> stronger than..._

Sam, I suspect the only solid option is to diversify humanity. Be more than a
one-planet species. That feels a bit emotionally unpalatable, but do you
disagree?

------
temuze
> I think the best strategy is to try to legislate sensible safeguards but
> work very hard to make sure the edge we get from technology on the good side
> is stronger than the edge that bad actors get.

Some suggestions would help. I mean, what are you suggesting here? That
studies into AI should be banned? That it should be restricted in some way?
That's hard to do.

If your problem is the potential increase in wealth disparity, then this
period of history is not unique at all. If anything, it's better than the
robber-baron days.

The thing I worry about is this: the first person who can make a true AI that
can iterate on itself, assuming all goes well, would have way too much power.
They could beat everyone else in the financial markets. They could short the
online ad industry and make a killing. With those resources, it's a short hop
into the physical world and making robots that make other robots and expanding
into any other area. Even if someone else develops AI six months after them,
I'd worry it'd be too late for adequate competition to exist.

Or even worse - consider the alternative, that AI is freely accessible to
everyone. That's terrifying, too! What's to stop someone from asking for
something really crazy from a piece of AI that can build, well, anything?

We simply don't have enough data to know what's going to happen. I'd wait and
see before blindly making legislation.

~~~
zenogais
I don't see how legislation could ever be effective against such extreme
concentrations of wealth and power. So I'd definitely like some clarification
there as well.

------
makeitsuckless
> Trying to hold on to worthless jobs is a terrible but popular idea.

Labeling jobs as "worthless" makes me want to throw up. We are in many cases
talking about jobs people find quite fulfilling, and human services many
people would love to keep using.

The only way we're going to be able to handle what's coming is to disconnect
the economic value of jobs from their social value.

------
pgodzin
Are there any statistics comparing job loss in the industrial and software
revolutions up to this point. The industrial revolution likewise replaced
manual labor with automation, and the software revolution Sam talks about
seems like a more effective extension of that. What trends happened last time
jobs were replaced by automation?

~~~
rgbrenner
I don't think that's an easy task, at least at this point. Depending on when
you want to start the clock, the 'software revolution' is a couple of decades
old. It would be hard to separate what job losses occurred from software vs
from outsourcing, the recessions, etc. Give it some time; when it's clear that
the jobs aren't coming back, then we'll have a good idea of the causes and
size of it.

------
explorigin
FTA "I think the best strategy is to try to legislate sensible safeguards but
work very hard to make sure the edge we get from technology on the good side
is stronger than the edge that bad actors get."

Let's see...

\- unauthorized access to computers (hacking) is illegal in most countries

\- hackers often use malware as one of their tools

\- anti-malware products are woefully inefficient at thwarting or even
detecting most malware

This is just one example, but I think the author's approach is ignorant at
best.

In the West, we often view systemic problems as something external that we can
fix with technology. This view was popularized in the Age of Enlightenment and
remains very popular today.

The contrasting view-point is that systemic problems are internal (i.e. in the
character of every human). For example, we have the technology and resources
to end much of the world's hunger, but it does not happen because of greed
and/or power that would be disrupted by all these hungry people suddenly not
being hungry.

Systemic societal problems are both internal and external, but if we only talk
about fixing external problems, we doom ourselves to (insert dystopian future
here).

------
shubhamjain
I think that as we progress, we are raising the level of skill required to
get a job. Industrial jobs wouldn't have required anything other than vigor
and endurance. As we moved to clerical jobs, being literate and able to type
became necessary, and in the future it is possible that a certain level of
programming competence might become a prerequisite.

As we create high-functioning AI, robots, and self-driving cars, we will also
be creating jobs for the people who need to do the grunt work. I don't believe
we will ever reach a level where everything is automated without the slightest
human intervention. The more sophisticated the things we build, the more
problems we will have with them, which will need human attention.

People in every generation have been awestruck at the progress of human
civilization, such that they have always believed a computer that can think on
its own is just near, like in 2001: A Space Odyssey. But it just never
happens. At least in my lifetime, I think I won't have to worry about robots
that can kill us.

~~~
zanny
Computers and robots have not yet existed for even an entire lifetime. Saying
we can _never_ create a technological singularity is as shortsighted as saying
the world must be flat because that is all I know.

------
drawkbox
I agree on many points but I think there will always be more to do. We don't
see it yet because we don't even know what new innovations will come, we are
basing it on today's knowledge.

We have barely inched into space, robotics, drones, the oceans, we haven't
even seen more than 50 miles down in the earth, our bodies and brains are
still big mysteries, nanotechnology and more. We thought computers would free
up lots of people but it hasn't really yet, just made more work to do and
solve with the machines. I think the same will happen with robotics, drones,
and AI. They will create work needs we didn't know existed and much more than
we expect. Who knows, AI or robots might be better than us at creating jobs.

Agriculture freed up people to think. Software freed up people to think. Good
things are coming still.

For most jobs, people want more to do, more adventure and more challenges. I
think the world is hungry for new challenges not the same old jobs. It is a
strange thing indeed though for people to try to hold onto lifeless, horrible
jobs just to keep the cadence when we need a new rhythm. We are held back by
holding onto this. We could employ many people to build an electric car
network like the railroads and interstate system but we don't. We could be
looking to space more and focusing kids on that but we are pushing them to
finance, business and service jobs.

The actual problem might be our monetary system and how we reward. I am a big
free- or fair-market proponent, but part of the problem of baked-in bad jobs
that add nothing is this system. I think money and currency are one area that
may hold us back until we solve this. However, there currently is no better
system for paying for a service that you need or want, down to the individual:
the truest exchange of value.

The question is, how much does the customer know about what they want and how
can we steer it towards the real problems of today? How wrong are we with our
rewards systems? Do only the wealthy have the right motivations to create
systems we need and employ? Have we got ourselves in a wealth backdrift? The
innovation market and economic engine is tied to wealth, for better or worse.
There are many things we should be doing, that are rewarding to us all and
need lots of work, that we can't because there isn't tons of market value yet.
Maybe the reward system needs refactoring or some new iterations.

It is a big game design / game theory problem in the end. We might need AI and
robots to solve this problem for us.

------
danbruc
Why is it so common to fear developments obsoleting jobs? Wouldn't it be just
awesome to automate everything? No jobs at all? I could easily fill several
lives with interesting things, no need for a job. Granted, the transition
period may be quite tough.

~~~
krapp
There's nothing about automation that guarantees any solution for the people
automated out of a job. The motive behind automation is purely capitalist - it
exists not to free people from the burdens of menial labor, but to multiply
the value of labor while freeing companies from the moral and financial burden
of a human workforce. For most people, jobs - and _the availability of jobs_
\- are what allows them to buy food, clothing, medical care, etc.

>Granted, the transition period may be quite tough.

Yes, mass starvation, disease, grinding poverty and global political strife
could correctly be described as "quite tough."

Although I suppose if you're idle rich, then it'll be a cakewalk. Just be sure
to wear your kevlar when you leave the compound.

~~~
JoeAltmaier
Most societies on earth now protect the out-of-work pretty well. There's been
steady improvement in standard of living, lifespan, health in most countries
for most of a century, to the point where the planet is in pretty good shape.

In fact it's a puzzle to me why, with this going on, we see an upsurge in
terrorism etc. Why aren't people content? What is it that convinces folks to
piss away their entire lives on a big public stunt like a bombing? It can't
be their bad cable reception.

~~~
krapp
That's a good point, but in most societies, most people aren't out of work.
The safety net depends on people paying in to the system, which depends on
people having something to pay.

~~~
JoeAltmaier
I think of it differently. As an engineer I'd use a control volume - draw a
circle around the economy. Label inputs and outputs. E.g. mining, sunlight to
produce food and energy, available land and water. The economy thrives if
those things have a positive balance. The money is just a strange way of
scorekeeping - imaginary points the people use to regulate their selfishness.

For instance, the idea of a Basic Income is proposed once the economy has
enough to feed and house everyone, regardless of the exact employment rate.

~~~
krapp
I'd support Basic Income in theory, but I don't know if it's politically
feasible in the US. People are still talking about dismantling Medicare and
Medicaid, and of course, everything related to Obamacare, even the parts that
work quite well.

It's entirely possible the answer to increased joblessness here will be to
tell the unemployed to go back to school, then raise the cost of student loans
by some ridiculous factor, then not actually attempt to create jobs for them
when they get out.

Then again, there are states where gay marriage and marijuana are legal now,
so maybe I'm too cynical.

~~~
JoeAltmaier
Yeah until the current generation in power grows old and dies, we'll continue
to consider 'joblessness' a problem. Remember the golden age of science
fiction, where the goal was to get everybody out of work in a society run by
robots? Well, the closer we get, the more we resist it seems.

------
WillNotDownvote
Combine this with pg's essay on the importance of importing the 'best and
brightest' to America and some things start to make sense. More minds working
where they can be aimed in the 'desirable' direction.

------
AmericanOP
This economist disagrees:
[http://www.kc.frb.org/publicat/sympos/2014/093014.pdf](http://www.kc.frb.org/publicat/sympos/2014/093014.pdf)

------
jeffdavis
I'm not sure why he says the industrial revolution is different. Might it be
that we just haven't figured out how to cope with a software-driven world yet?

------
cousin_it
> _Two of the biggest risks I see emerging from the software revolution - AI
> and synthetic biology_

Also nanotech, mind uploading, embryo selection...

------
Kotka
Maybe we are forgetting the first REVOLUTION: the Cognitive Revolution...
around 75,000 years ago!

See, for example, _Sapiens: A Brief History of Humankind_.

------
Houshalter
HN is eating my comments. Everything I've posted in the last few days is not
showing up. What did I do wrong?

------
d--b
Much applause for this post which is by far the most sensible article I have
seen coming from Silicon Valley.

------
ggonweb
"But I worry we learned the wrong lessons from recent examples" \- what are
the wrong lessons ?

------
grondilu
> We can —and we will— redistribute wealth

But should we? And if so : why ? Also : who's "we"?

------
JustSomeNobody
Can we really say that we are in a revolution while we are in it? And if so,
can we, in all seriousness, measure it against other revolutions?

------
sgt101
I'll say it again: AIs are not going to end human life (this is in the
article). It's nuclear weapons that will do that...

~~~
genericone
Do guns kill people, or do people kill people?

------
pastProlog
> The three great ones are the agricultural revolution, the industrial
> revolution, and the one we are now in the middle of—the software revolution.

There is a case for skipping over the technological changes that shifted the
slave societies of Greece and Rome to the feudal societies of medieval Europe.

You can't really make one for skipping the important shift 40,000 years before
the agricultural revolution. We went from a world without cave paintings to
one with them. From a world without venus figurines and other carvings to one
with them. With sweeping technological changes in hunting and fishing
instruments and so forth. It's the second most important technological
revolution ever, if not the first. If fishing is wrapped up with the human
brain modifying into its modern form, wouldn't it be the most important?

Note that each revolution had a corresponding revolutionary change to
political systems, family structures and society. With the agricultural
revolution we had the end of primitive communism and hunter-gatherer societies
and the rise of surplus, class systems and slave societies. Much of the
earliest literature such as the Epic of Gilgamesh is on how to catch and keep
slaves.

With the rise of capitalism we saw the fading of Catholicism and the rise of
Protestantism and the "Protestant work ethic". We also saw the end of
monarchies and the rise of liberal democracies. The bourgeoisie and
proletariat of the time united to overthrow these old systems, but soon began
facing off against one another, which is pretty much the history of the 20th
century, or if we look at the election of old euro-communists in Greece last
month, perhaps the 21st.

The famous old definition of economics in our capitalist economy by Robbins is
"Economics is the science which studies human behavior as a relationship
between ends and scarce means which have alternative uses". Scarcity is the
bedrock of modern economic analyses - of utility, of supply and demand, of
price.

What scarcity is there when someone films a movie, or writes a book or
magazine article, or records an audio track, and with the press of a button
can fly off to billions of Android and iPhone devices? Or writes an app and
flings it across the world as soon as it hits the App Store or Google Play? Or
sends code to Github, which someone in Bulgaria patches, which someone in
Brazil patches, which someone in Japan then uses in a product they're putting
out, glued together to some other framework on Github?

This is the end of scarcity. The most well-paid modern workers are those who
produce commodities which are not scarce. That is if we can call these
products commodities - a non-scarce commodity is something of a contradiction.

These revolutionary technological changes in production, at the base, will
reverberate through the superstructure of political systems, families and
societies. The old superstructure is still trying to keep down or even kill
the new one - NSA spying. DMCA letters. David Cameron's great firewall for
porn in England. IP and patent lawsuits. The recent New York Times article
with investors questioning why Google is building self-driving cars. Aaron
Swartz's suicide, when trying to open up taxpayer-funded research which is
locked down and privatized by now irrelevant Elsevier. Telcos using their
government granted monopolies to try to harm budding businesses.

Revolutions in production lead to revolutions in the relations of production.
In the twentieth century, blue collar workers like railroad engineers and
factory mechanics had their hands on the engines running the economy. As
technology and AI cause more and more unemployment for people who can't find
the derivative of 5x, de facto, if not de jure, power of production goes to
those who are rack mounting cloud servers, or rolling out new web site builds.

------
hooande
The correlation between software and large scale loss of jobs is far from
proven. The US unemployment rate fluctuates wildly based on many factors [0],
but ~30 years into the software revolution it isn't too much higher than
it has been historically. Parkinson's Law may be the answer to the threat of
large scale job loss. There's a long list of startups who have raised hundreds
of millions of dollars in funding because "money is cheap right now" and
proceeded to hire offices full of people with a wide variety of titles. If the
leaders of the tech industry are willing to hire for the sake of hiring, the
overall economy is probably safe for just a little while longer. The
prevailing wisdom is that rational actors won't spend money to hire people
that aren't essential to their business, and they'll opt to use software
instead of people if the software is cheaper. In practice these so called
rational actors often use any savings from software to hire more people,
whether they are essential or not. Part of it is because there's always
something that could be done, and another part is that having a lot of
employees makes people feel good about themselves. Whatever the motivation,
mass unemployment is most likely a problem that will take care of itself.

In the context of this essay the term "concentration of power" seems to mean
the ability of a small group to have an outsized (and harmful) influence. This
seems like a much larger problem than unemployment, but it isn't limited to
technology. A network of a few hundred terrorists or just five guys in France
can bring cities to a halt and affect the psyche of entire countries. It's
just something that we're going through right now as a global culture, and I
don't see any quick fixes. It is clear that the threat of malevolent AI is
greatly overhyped, and I can't wait until the zeitgeist moves on to another
flavor-of-the-month crise du jour. There are very real threats facing the
world right now, and we shouldn't spend too much time worrying about something
that might or might not happen, that we couldn't stop even if we wanted to.
Synthetic biology probably falls into the same category, though the ability to
manufacture deadly viruses is based much more firmly in fact.

Guns, bombs, computers and the basic building blocks of life cannot be made
illegal and confiscated en masse. One of the best ways to solve the threats
posed by technology is to take the idea of income inequality, mentioned in
this essay, very seriously. We've created a culture where people measure their
self worth by the value of the companies they found. When I talk to people
about technology, I don't hear about the large and small advances that make
our lives a little bit better every day. I hear, "Isn't it crazy that
Instagram was worth $XX billion dollars? I want to start a company and make
that much too!". This is poison and it has to stop. If we place all the
emphasis on who made what, we create a world where a lot of people get left
out and forgotten. Then they spend their time in dark basements, watching
extremist videos and working carelessly with dangerous tools. We need to turn
technology into something that has benefits for everyone, in order to protect
ourselves and our loved ones from some of its most dire consequences.

[0]
[http://www.infoplease.com/ipa/A0104719.html](http://www.infoplease.com/ipa/A0104719.html)

------
graycat
For the computer part of Sam's essay, I'd suggest that we are a long way from
_artificial intelligence_ (AI) software being significantly more economically
valuable than what we've been writing for decades -- various cases of applied
math, applied science, engineering, and business record keeping.

To support this claim, once I was in an AI group at the IBM Watson lab in
Yorktown Heights, NY. We published a stack of papers; I was one of three of us
that gave a paper at an AAAI IAAI conference at Stanford. My view of the good
papers at that conference was that they were just good computer-aided problem
solving as in applied math, applied science, and engineering and owed
essentially nothing to AI. Later I took one of the major problems we were
trying to solve with AI, stirred up some new stuff in mathematical statistics,
and got a much better solution (and did publish the paper in _Information
Sciences_). That experience, and observation since, are the support for my
claim. Sure, this support is just my opinion, and YMMV.

Instead of AI with a lot of economic value, I would suggest that closer in is
a scenario of people managing computers managing computers ... managing
computers doing the work.

And what work will those computers do? Sure, first cut, the usual -- food,
clothing, shelter, transportation, education, medical care.

So, maybe John Deere will have a _worker_ computer on a tractor doing the
spring plowing, the summer cultivating, and the fall harvesting. Then food can
get cheaper. Maybe before the plowing a tractor will traverse the ground,
take and analyze soil samples for, say, each square yard, and apply the
appropriate chemicals.

Maybe GM will have car factories with robots driven by computers doing
essentially all the work. Then cars can get cheaper.

Maybe Weyerhaeuser or Toll Brothers will have pre-fab house factories with
robots driven by computers doing essentially all the work, self-driving trucks
delivering the big boxes, computer driven earth movers doing the site
preparation, computer driven robots putting up the forms for the concrete
basement walls, computer driven concrete pumpers inserting the concrete from
self-driving concrete trucks, and houses will get a lot cheaper.

And the computers get cheaper.

So, right, we're talking deflation. So, have the government print some money
and spend it on K-12 and college education, guaranteed annual income, parks,
beautiful highways, etc. Print enough money to reverse the deflation and hire
a lot of people. Those people buy the cheap food, cars, and houses, have
children, and fill the classrooms of the additional education.

What education? Sure: How the heck to develop all those robots, managing
computers, worker computers, computer driven farm machinery, car factories,
pre-fab house factories, etc.

Or, as computers eliminate jobs, basically the result is deflation, and that's
the easiest thing in the world to stop, and the solution is the nicest thing
in the world -- just print money to get us out of deflation.

We already know what people want from the famous one-word answer: "More".

Computers should be a blessing, not a curse.

------
pachydermic
The Economist did a special report on the "third wave" of the information
age/information revolution. It focuses more on the economic impacts (big
surprise there!) but was very interesting and worth a read - I hope you can
get the article without a subscription... incognito mode usually works well
enough.

[http://www.economist.com/news/special-report/21621156-first-...](http://www.economist.com/news/special-report/21621156-first-two-industrial-revolutions-inflicted-plenty-pain-ultimately-benefited)

It's hard to see what people will actually do for work after the effects of
this new revolution have fully propagated, but I think that's mainly a failure
of imagination. The other revolutions were not too different: they took
something that a huge number of people were doing, and that society was
focused on preparing people to do, and made it trivial (or at least made it
involve far fewer people). The overall impact of the industrial and
agricultural revolutions was ultimately to create more jobs, even if it was a
wild ride while things were rapidly changing.

This revolution is different - now mechanical horsepower can be applied to
tasks previously only possible through human minds, which is quite different
from machines or farming - but how different is it? It would take some really
visionary people to figure out what the ultimate impacts of all of this will
really be - and to try to imagine what people are going to do for a living,
or what society will look like, on the other side.

My personal view is that AI is a pretty important component of this. I think,
in principle, it's possible. But can it actually be done? That would be a
pretty insane change and it's super hard to imagine what that will be like.
But if AI just isn't possible or doesn't come around for a really long time, I
don't think this revolution will be too different from others. The more
"manual labor" type thinking tasks (grading essays, evaluating legal reports,
collecting and searching through information, etc) will be replaced by more
and more sophisticated machines. What about creative tasks? That's the final
frontier as far as I'm concerned.

Well. It'll almost certainly be really interesting.

One idea I've had (I think we all have a lot of crackpot ideas) about what
people who aren't suited to highly skilled tasks are going to do revolves
around social media and entertainment. What if a site like Reddit or Hacker
News paid its users? I guess that's ridiculous, I'm not sure how the economics
would work out - our contributions here would have to become more valuable.
But if fully integrated into our minds (aided by computers) maybe they would
be? I've seen people tipped in bitcoins on Reddit before so maybe it's
possible. Just a crazy idea.

~~~
mrec
Yes, it's a crazy idea. Paying people out of the revenue gained by advertising
to them has some fairly obvious limitations.

I don't know what the deal with those Bitcoin tips is. Personally I find it
weird and creepy and never cash them - it feels like a ploy to tie user
identities on multiple sites together, or something like that - but I have
zero evidence to support that.

------
curiously
I stopped reading sam altman's blog after he used Purchasing Power Parity as
a measure to claim that China has surpassed the United States' economy.

