
A.I. will have implications for education, welfare and geopolitics (2016) - magoghm
https://www.economist.com/special-report/2016/06/25/re-educating-rita
======
minimaxir
The post talks a lot about how MOOCs can be sufficient education for AI/data
science; I disagree with that approach since although MOOCs can teach people
the concepts, they don't teach the soft skills behind the tech, which are
much harder to learn and a major bottleneck to using it practically (full
blog post on the subject:
[https://minimaxir.com/2018/10/data-science-protips/](https://minimaxir.com/2018/10/data-science-protips/))

That's not even considering getting _employment_ in a relevant position, which
is much harder than it was in 2016 due to increased competition and the
requirement for a Master's/PhD nowadays
([https://towardsdatascience.com/the-economics-of-getting-hired-as-a-data-scientist-e3882933b43c](https://towardsdatascience.com/the-economics-of-getting-hired-as-a-data-scientist-e3882933b43c)).

~~~
komali2
What kind of "soft skills" do you mean? Typically when I hear "soft skills" I
think "interpersonal communication, leadership, organization ability" type
things.

~~~
minimaxir
That's a part of it, which doesn't come up in MOOCs at all (specifically,
collaboration in a corporate environment within the software development
lifecycle, and negotiating with project stakeholders about specs,
requirements, and results).

~~~
komali2
I also didn't learn anything about that in school, but I did from my coding
bootcamp, so if anything I think that is a general criticism of the education
system.

------
corydominguez
I started my career as a 28-year-old college dropout doing data entry for a
small surf and skate ecommerce company in 2012. I never finished any of the
MOOC courses I started, but their availability helped me get my first
internship, which has led to me becoming a successful backend engineer in SF.
I especially credit the original Coursera database class and Udacity's web
development in Python.

------
bluishgreen
Premature deindustrialization might not necessarily be a bad thing.
"Premature" is a judgement. Why is putting fossil carbon that was buried deep
in the earth for millions of years up into the air in a matter of decades a
"mature" anything?

Developing countries went directly to cell phones without going via landlines.
There could be another way. There should be another way. That is, if we are to
sustain 10 billion people at the same standard of living as the West without
the fossil fuel disadvantage.

------
thrmsforbfast
The "premature deindustrialisation" phenomenon was the most interesting/novel
part of this article. The rest is the normal
AI/MOOC/software-eating-everything hand-wringing.

However, somewhat ironically, the actual paper that's cited [1] seems to
disagree with the author's characterization:

 _> In sum, while technological progress is no doubt a large part of the story
behind employment deindustrialization in the advanced countries, in the
developing countries trade and globalization likely played a comparatively
bigger role._

[1] [http://www.nber.org/papers/w20935](http://www.nber.org/papers/w20935)

------
candiodari
For welfare? Good.

It'd be hard to fuck it up as badly as humans have. And God forbid, AIs will
do whatever improves outcomes.

Humans employed in welfare, and even in the police, have been known to relish
the power the law gives them, as it does in many cases, and to use it not even
for personal empowerment, but for showing off to girls, for taking petty
revenge, and worse.

~~~
bobthepanda
They're not talking about using AI to allocate welfare, they're talking about
basic income.

Using AI to allocate things is dangerous IMO, mostly because AI will use past
decision-making as training data, and the training data was created by those
power-mad human administrators with all their implicit and explicit biases.
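A minimal sketch of that failure mode, with entirely made-up groups and
approval rates, assuming the historical labels themselves encode the
administrators' bias:

```python
import random

random.seed(0)

# Hypothetical historical welfare decisions: administrators approved
# group "A" applicants at 80% and group "B" applicants at 40%,
# regardless of actual need. (Groups and rates are invented here.)
history = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    need = random.random()  # true need on a 0..1 scale; never reflected in labels
    approved = random.random() < (0.8 if group == "A" else 0.4)
    history.append((group, need, approved))

# "Train" the simplest possible model: the per-group historical approval rate.
rate = {
    g: sum(a for grp, _, a in history if grp == g)
       / sum(1 for grp, _, _ in history if grp == g)
    for g in ("A", "B")
}

def model(group, need):
    # need never influenced the training labels, so the model ignores it
    # and simply reproduces the historical disparity
    return rate[group] > 0.5

print(model("A", need=0.1))  # low-need A applicant: approved
print(model("B", need=0.9))  # high-need B applicant: denied
```

The model is perfectly faithful to its training data, which is exactly the
problem: fidelity to biased decisions is fidelity to the bias.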

~~~
candiodari
So it'll start as bad as humans, and improve. It's got my vote.

~~~
bobthepanda
Improve, or get stuck in a local maximum and make the situation worse
permanently.

At least with human decision making, there is a paper trail and a means of
legal recourse. AI provides neither.

~~~
candiodari
Ehm, just the opposite. Human paper trails, especially those kept by
government employees... I've found them lacking, or outright
stupid/useless/falsified. That makes legal recourse dubious.

With AI decisions, the paper trail will be perfect, and the AI won't have any
emotions about the matter, before or after getting sued.

------
crooked-v
Don't forget about the implications for paperclips.

~~~
_iyig
Context (I assume):

[https://wiki.lesswrong.com/wiki/Paperclip_maximizer](https://wiki.lesswrong.com/wiki/Paperclip_maximizer)

~~~
crooked-v
There's also a game about the subject.

[http://www.decisionproblem.com/paperclips/](http://www.decisionproblem.com/paperclips/)

Other articles that really help in grokking the general idea:

[https://www.lesswrong.com/posts/HFyWNBnDNEDsDNLrZ/the-true-prisoner-s-dilemma](https://www.lesswrong.com/posts/HFyWNBnDNEDsDNLrZ/the-true-prisoner-s-dilemma)

[https://www.lesswrong.com/posts/mMBTPTjRbsrqbSkZE/sorting-pebbles-into-correct-heaps](https://www.lesswrong.com/posts/mMBTPTjRbsrqbSkZE/sorting-pebbles-into-correct-heaps)

~~~
DuskStar
IMO Universal Paperclips is one of the best clicker games, even without the
AI safety elements, because it has an end that is reachable in a reasonable
amount of time.

