
You Can Always Get What You Want, But Not What You Need (2016) [video] - AlanTuring
https://www.youtube.com/watch?v=QCw_D7dr7Rw
======
tw1010
Great video. I think the book _The Elephant in the Brain_ is required reading
in this context:
[http://elephantinthebrain.com/](http://elephantinthebrain.com/)

------
jimbokun
Yes!

The sentiment is something I (and I suspect many other engineers) feel, but
have not been able to express nearly so simply and elegantly.

Before you sit down and start writing that code or creating that new
algorithm, stop and think about what you really want to achieve.

------
truculation
Fascinating. Instead of banning money or fixing prices (didn't they try that
in Russia?), we retain free markets but add a new layer of government
intervention. Not intervention through social spending or regulation, but
through buying and selling according to the dictates of an AI. And the utility
function of this AI is set according to what, exactly?

My guess is it would have to be according to our values, not to specific
goals:

[https://www.youtube.com/watch?v=tZBViI8ZaU0](https://www.youtube.com/watch?v=tZBViI8ZaU0)

~~~
fossuser
This is the goal alignment problem that MIRI, FHI, and some safety groups at
OpenAI are working on. It seems like a hard problem to get right (with bad
consequences if an unaligned superintelligence happens first).

------
rmateus
Multicriteria decision analysis theory and methods can provide a way to model
the need (utility). For instance, in reinforcement learning the payoff (goal)
function is usually taken as a given, without much consideration of how it is
constructed. It could instead be modeled using the methods mentioned above.
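
To make the idea concrete, here's a minimal sketch (hypothetical names and
numbers, not from any particular MCDA library) of the simplest multicriteria
value model, an additive weighted sum, used to build a scalar RL-style payoff:

```python
# Hypothetical sketch: combining several criteria into one scalar payoff
# via an additive value model, u(x) = sum_i w_i * v_i(x).
# Criterion values are assumed to be normalized to [0, 1] already.

def scalar_utility(criteria, weights):
    """Weighted-sum value model over normalized criterion scores."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(w * c for w, c in zip(weights, criteria))

# Example: a reward built from safety, throughput, and cost criteria.
reward = scalar_utility([0.9, 0.5, 0.7], [0.5, 0.3, 0.2])
print(round(reward, 2))  # 0.74
```

The modeling effort MCDA methods add is in eliciting the value functions and
the weights, rather than treating them as given.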

~~~
xapata
I thought that theory told us multicriteria optimization is intractable?

~~~
arbie
If that's the same as multivariate opt, there are billion-dollar industries
that rely on it every day.

~~~
xapata
No, unless I've mistaken the phrase, they're not the same. I'm thinking of the
problem when competing criteria can't be satisfactorily combined as inputs to
a function with a scalar output. This comes up in ethical paradoxes, for
example.
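
A small sketch of what that looks like (illustrative numbers only): when
competing criteria can't be collapsed into one scalar, the best you can
recover is the Pareto set of non-dominated options, which may leave several
options mutually incomparable:

```python
# Hypothetical sketch: with no scalar objective, we can only filter out
# dominated options. Higher is better on every criterion.

def dominates(a, b):
    """a dominates b if a is >= on all criteria and > on at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(options):
    """Keep the options that no other option dominates."""
    return [a for i, a in enumerate(options)
            if not any(dominates(b, a)
                       for j, b in enumerate(options) if i != j)]

# Three options scored on (fairness, efficiency): no single winner.
print(pareto_front([(1, 3), (2, 2), (3, 1), (1, 1)]))
# (1, 1) is dominated; the other three remain incomparable.
```

Choosing among the surviving incomparable options is exactly the value
trade-off the thread is circling around.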

~~~
rmateus
Making value trade-offs between competing criteria ultimately boils down to
hard ethical choices (not exactly paradoxes).

------
akshayB
Companies make things they can sell in a marketplace, which is not necessarily
what that place needs. For example, social media companies just want to sell
their social networks in any country with internet access, irrespective of
whether that country has a drinking-water problem or a humanitarian crisis.

------
debt
Peter Norvig’s book _Artificial Intelligence: A Modern Approach_ is pretty
cool. I don’t know what they use now in 101 AI courses, but I remember that
book being very complete.

I still believe, though, that hard AI will only ever be possible if we make
serious breakthroughs in materials science.

------
dikkechill
To differentiate between needs and (economic) wants, I found the "fundamental
human needs" framework quite useful:
[https://en.wikipedia.org/wiki/Fundamental_human_needs](https://en.wikipedia.org/wiki/Fundamental_human_needs)
(as developed by the Chilean economist Manfred Max-Neef).

Basically: Subsistence, Protection, Affection, Understanding, Participation,
Leisure, Creation, Identity and Freedom.

------
jonbaer
The key point @ 3:15 ...
[https://youtu.be/QCw_D7dr7Rw?t=193](https://youtu.be/QCw_D7dr7Rw?t=193)

------
dang
Url changed from
[https://www.youtube.com/watch?v=Zsbb1Ze4T_M&list=PLrAXtmErZg...](https://www.youtube.com/watch?v=Zsbb1Ze4T_M&list=PLrAXtmErZgOeTplq3WVIwlGn8dC9lbjy3),
which is a clip from this.

