
Google self-driving car pioneer wants to teach people how to face the future - w1ntermute
http://www.economist.com/news/technology-quarterly/21662654-sebastian-thrun-pioneer-googles-autonomous-cars-wants-teach-people-how
======
bsaunder
I agree with the premise that machines will increasingly exceed our capabilities
(and are improving more quickly than we are). This seems so obvious as to be
beyond debate.

I also think there is a lot of merit to online education, and Udacity seems
to have the right general idea. Online education needs to be more engaging
and entertaining than just videos and online collaboration.

However, this whole "people need more skills" and "we need more jobs" meme
feels like a losing fight with reality. In fact, this push for increased human
skills seems like it will only widen the gap between the uber-skilled and the
99%. In our current economic model, this seems like increased pain for most of
us.

IMHO it would be much better if we could stop this denial. We need to pivot
our economic system toward a basic income model. A much better strategy would
be to start focusing our best and brightest on full automation of our needs
and basic wants, so that we can provide most essential services to people at
minimal human and resource cost.

~~~
mathattack
Many great scientists (Einstein amongst them) get into trouble when they veer
into economics. Einstein predicted social unrest from massive unemployment due
to productivity improvements. Productivity improvements help people in
aggregate.

The question here is: will a small subset of the world accrue all the benefit
from the AI productivity improvement? One answer is basic income, presumably
funded by taxation of the rich. Is another answer some kind of universal
ownership scheme for the companies creating the AI? I'm not advocating
government ownership of business (that rarely ends well), but perhaps some way
for the government to give shares in index funds in lieu of basic cash
payments? Essentially equity-based social security or welfare payments in lieu
of some portion of the cash payments. I haven't thought this fully through
yet, but it does allow some aspect of shared upside. (My sense is there will
be a lot of shared upside anyway, similar to how we all benefit from free maps
and search.)

~~~
hiddencost
Russia tried that. When the USSR ended, they gave everyone vouchers to bid
for shares of state-owned companies. The little people had no idea what they
were doing and got screwed brutally. It created many of the ridiculously
wealthy Russian oligarchs you see today.
[https://en.m.wikipedia.org/wiki/Privatization_in_Russia](https://en.m.wikipedia.org/wiki/Privatization_in_Russia)

~~~
nhaehnle
Interesting. The fundamental goal seemed a noble one, but it was badly
implemented. Any lessons to take away?

One immediate idea is some kind of vesting, i.e. you can't actually sell your
vouchers for X years (or in stages, being able to sell certain percentages
after a certain number of years).

A second idea is to encourage the formation of public interest associations to
manage the ownership. There are already some non-traditional funds run with
social goals in addition to the goal of making money; perhaps that would
help. (But I'm really just brainstorming right now.)

~~~
ambicapter
My knee-jerk reaction is that these people got bitten because they weren't
educated on the matter, so free education would have been helpful. Incidentally,
I feel like that would help with the original problem as well.

I am really starting to think that government should reduce the cost of
education as a public good.

------
ericjang
Even if the standard of living is raised for all humans, it seems like
advances in AI and automation will drastically reduce the percentage of
productive members of society, despite educational opportunity.

Improving educational opportunities (a la Udacity) is a noble idea because it
levels the playing field a bit (giving the underprivileged a chance), but the
end result is the same - a small number of people will inevitably own AI, and
the world by extension.

Assuming that the new AI titans are compelled to share their wealth with
humanity, wouldn't less productive members of society still face an existential
threat? An improved standard of living only takes care of the first 2-3 layers
of Maslow's hierarchy of needs - but "esteem" and "self-actualization" are
really important as well.

~~~
seiji
_but "esteem" and "self-actualization" are really important as well._

Many people fill those needs with banal things like drinking contests or
sports.

~~~
eli_gottlieb
You mean, as opposed to banal things like their jobs?

------
astazangasta
This is not about humans vs machines, it is STILL, after all these centuries,
about labor vs capital.

~~~
bduerst
Pretty much. The massive automation that happened during early 20th-century
modernization sparked a unionizing movement among laborers. Kind of interesting
that we don't see that level of organization now.

------
arstin
If Thrun is right about AI "outsmarting people in every dimension", how in the
world could more entertaining job-training videos organized into "nanodegrees"
help us at all? I know this article was just a free ad for Udacity so he had
to say self-serving things, but do you think the dude really believes this???

As others have pointed out, the real problem before us this century is to
_somehow_ decouple having a basic standard of living from performing work---
more specifically from "contributing to productivity". My amateur guess is
that even with our current state of food and energy production, the biggest
barrier isn't the massively difficult economic and organizational problem, but
advancing as a culture beyond our entrenched moral assumption that the means
for basic living is something to be earned rather than a human right.

------
stonogo
When this company can build a phone that won't crash, I'll believe they can
build a car that won't crash. Until that day comes I feel like stern warnings
of the coming economic revolution are not really in order.

------
sandworm101
Arrogant, as is typical of those with a financial stake in a particular
version of progress. This guy believes that his version of the future will
happen and that it is his job to help others realise that truth.
Technological progress may be inevitable, but the progress envisioned by any
particular person is not.

Nanodegrees may seem a good idea, and they no doubt make sense financially to
Google, but that does not make them inevitable. People have been attending
today's traditional schools since long before the Aztecs were even a thing
(Cambridge, 1200s). There is wisdom in those years. It may be time to change
that wisdom, but it won't happen within a generation.

Self-driving cars seem likely but are not inevitable. All sorts of safety-
enhancing technologies are dropped for apparently irrelevant reasons. Why do
we sell cars that can break speed limits? Why do school buses not have seat
belts? Why do planes still have error-prone pilots? Why are alcohol and
cigarettes still a thing? Each of those has at some point been challenged by
technological progressives who thought their version of the future was
unavoidable. Each was proven wrong. Only the arrogant assume the future.

FYI, anyone who thinks driverless cars are inevitable should look at the
Futurama exhibit of 1939. We were then going to have them by 1960. Then it was
Ford. Now it is Google. I'll believe it when I see it.

[https://en.wikipedia.org/wiki/Futurama_%28New_York_World%27s...](https://en.wikipedia.org/wiki/Futurama_%28New_York_World%27s_Fair%29)

~~~
moe
_Self-driving cars seem likely but are not inevitable._

Yes they are inevitable.

 _All sorts of safety-enhancing technologies are dropped_

Self-driving cars are not only a "safety-enhancing technology".

First and foremost they are a money-saving technology.

Over 4 million people are employed in the US transportation industry[1], and
1.7 million of them are truck drivers[2].

Global logistics companies will save billions of dollars per year by upgrading
their fleets to self-driving vehicles.

[1]
[http://www.bls.gov/emp/ep_table_201.htm](http://www.bls.gov/emp/ep_table_201.htm)

[2] [http://www.bls.gov/ooh/transportation-and-material-moving/he...](http://www.bls.gov/ooh/transportation-and-material-moving/heavy-and-tractor-trailer-truck-drivers.htm)

~~~
sandworm101
My point: there are often seemingly irrational reasons why new technologies
are not adopted. One can make all the rational arguments, only to have them
fall down when it comes to implementation. It is arrogant to assume that
anyone today can perfectly predict the technology of tomorrow. (I say
perfectly because the OP references only a narrow range of self-driving tech,
not acknowledging all the other options.)

------
ihsw
> To the extent we are seeing the beginning of a battle between artificial
> intelligence (AI) and humanity, I am 100% loyal to people.

Perhaps I'm just jaded or cynical, but _fuck people_.

We, as a species, are the greatest threat to ourselves, and we are our
greatest asset. An individual may be brilliant, and large groups congregating
into governments that empower hundreds of millions of individuals may be the
greatest feat our species has accomplished, but there is just so much
instability across a wide spectrum of life.

We, as a species, are approaching the limit of what 7B people on this planet
are capable of. Globalization is peaking and revolutionary growth will peak
with it, and maintaining the rate at which we grow will require broad-sweeping
reforms with the end goal of tearing down inefficiencies -- starting with
economic ones.

We, as a species, absolutely can bring a first-world quality of life to
everybody, so why shouldn't we?

We, as a species, absolutely can cooperate and govern on a global scale, where
such cooperation is codified into law, so why shouldn't we?

The answer is that we don't want to because we can't handle giving up control
to _those other people_ because they don't have _our best interests_ in mind.

How do we face the future? By letting go of our ill-conceived notion that we
are fit to govern ourselves in the current manner, and accepting that
operational control of various aspects of humanity (e.g. supply chain
management, namely natural resource allocation and food distribution) will be
automated, and as such outside the purview of _human judgement_.

~~~
d883kd8
I disagree. You are probably being downvoted out of disagreement. (d_-1?)

I'd like to thank you for contributing your perspective in this reasoned and
impassioned advocacy for authoritarianism.

~~~
ihsw
It's not authoritarianism, it's trust. We already trust large aspects of our
lives to machines; it's time we take that trust to the next level.

If and when it happens, nobody will miss having to manage large fleets of
container ships that currently move hundreds of millions of dollars of goods
per day.

If and when it happens, nobody will miss having to manage large fleets of
trucks, vans, and cargo planes that currently move hundreds of millions of
dollars of goods per week.

If and when it happens, nobody will miss having to find and drill for oil and
nat-gas.

If and when it happens, nobody will miss having to raise and slaughter farm
animals.

If and when it happens, nobody will miss having to tend to land for our apples
and oranges.

Freedom of movement will diminish, in exchange we will achieve riches nobody
ever dreamed of.

