
An Alphabet X concept from 2016 is an unsettling vision of social engineering - m1
https://www.theverge.com/2018/5/17/17344250/google-x-selfish-ledger-video-data-privacy
======
pi-squared
On the one hand, we want freedom of speech inside organizations so that they
can discuss ideas like this. On the other hand, it's dangerous if an idea like
this leaks, because it presumably shows how evil Google is.

Now imagine a leaker inside your own brain. Every thought you may have could
be displayed on the front page of The Verge or whatever. "Peter's internal
thoughts hint he may be a pedophile".

I prefer Google having these internal discussions of highly disturbing
concepts vs not having them.

"I do not agree with what you have to say, but I'll defend to the death your
right to say it."

~~~
pjc50
> freedom of speech inside organizations so that they can discuss ideas like
> this

What does that actually mean in practice? "We're going to float this idea
which we all think is unethical, but without actually labelling it as
unethical in the presentation?"

(The strange thing seems to be a software engineering organisation that thinks
of things like this without immediately coming up with dozens of examples of
how they could go wrong!)

~~~
Willamin
Directly from the article, it seems that they are well aware that it's not
going to be the most popular idea.

> When reached for comment on The Selfish Ledger, an X spokesperson provided
> the following statement to The Verge:

>> “We understand if this is disturbing -- it is designed to be. This is a
>> thought-experiment by the Design team from years ago that uses a technique
>> known as ‘speculative design’ to explore uncomfortable ideas and concepts in
>> order to provoke discussion and debate. It’s not related to any current or
>> future products.”

------
alleyshack
As a former Googler, I'm not at all surprised by this video. Google tends to
have a bit of a... culture bubble, I suppose, where the engineers and
designers forget that not everyone is okay with having all their data
harvested and used in whatever manner Google thinks is best. I suspect this is
related to the fact that the vast majority of these engineers and designers
are straight, cis, college-educated men who are white or Asian and between the
ages of 18 and 35, a group which on average tends to have fewer reasons to be
concerned about their data being used against them. Then add in the fact that
most employees have bought into the general "all-knowing, all-caring Google"
mindset, and you have a perfect recipe for the kinds of thoughts presented in
the video.

The scary thing is, most of the people I worked with genuinely believed that
ideas like this are good for the user. The fact that such things require
extensive privacy invasions doesn't even cross their minds, because they don't
think of it as a privacy invasion. It's just another way to "optimize" toward
some goal or another.

------
brudgers
One of my great grandmothers worked in a button factory as a child. My
grandfather on that side was able to go to college because of The Cooper
Union. It was explicitly non-Lamarckian. As is all social mobility. Google's
vision literally pairs the idea that the options offered children will be
bounded by the data trail of their parents with the image of a man working on
a rowboat. A literal interpretation -- the only kind computers make --
suggests that the artifacts this system will produce for the man's children
are more likely to be oars than iPads. That's not to say this fetishization of
Lamarckian genetics doesn't make sense inside Google. The median salary for the
people there is $200,000 and the system will probably cough up Hondas if not
BMWs. Google's vision reduces to using computation to create and maintain a
caste system. The image of the man in the boat is a deliberate design
decision.

~~~
GuiA
_> That's not to say this fetishization of Lamarckian genetics doesn't make
sense inside Google. The median salary for the people there is $200,000 and
the system will probably cough up Hondas if not BMWs._

This is a problem I have with all tech companies who are designing products
used by hundreds of millions, if not billions, of humans - ranging from
farmers in Germany to factory workers in India or mothers in Ecuador or ...

How can the employees of those companies - a group of people who are used to
driving Teslas, getting their groceries delivered, etc - relate to those users
and understand them in any meaningful way at all? (Replying that a bunch of
Google UX designers once spent 3 days in the Philippines doing “user research”
is a terrible answer.)

------
GuiA
Here are the concrete ideas presented in 6 mins of narrated video:

- a system could highlight choices aligned with the user’s values (examples:
recommend taking Uber Pool instead of Uber X in the Uber app, suggest locally
grown bananas in a shopping app); a minimal sketch of this one follows the list

- a system could manufacture custom objects for a user to gather missing data
about that user

And the broader, higher level concept:

- data aggregated about a user can be considered to be a “genome”, and
perhaps concepts applicable to genomes (sequencing, ...) are similarly
applicable
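
For concreteness, the first idea is trivial to sketch in Python; every name,
attribute, and weight below is invented for illustration, not taken from the
video:

    # Rank the options an app would show by how well they match values the
    # user has explicitly declared. All identifiers are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Option:
        name: str
        attributes: dict  # e.g. {"shared_ride": 1.0}

    def alignment_score(option, user_values):
        # Weighted sum of the option's attributes against declared values.
        return sum(w * option.attributes.get(v, 0.0)
                   for v, w in user_values.items())

    def rank_options(options, user_values):
        # Most value-aligned choice first, so the UI can highlight it.
        return sorted(options, key=lambda o: alignment_score(o, user_values),
                      reverse=True)

    user_values = {"shared_ride": 0.8, "local_produce": 0.6}  # declared, not inferred
    rides = [Option("Uber X", {"shared_ride": 0.0}),
             Option("Uber Pool", {"shared_ride": 1.0})]
    print([o.name for o in rank_options(rides, user_values)])
    # -> ['Uber Pool', 'Uber X']

The contentious part is everything this sketch leaves out: where the values and
weights come from, and who gets to set them.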

This whole subfield of “speculative design” feels particularly useless (this
video is part of a broader novel practice, see for instance
[https://www.primerconference.com/2017/](https://www.primerconference.com/2017/)).
A few very vague points are raised, with no direct way to probe the questions
or start answering them. This is in contrast to for example the scientific
approach, where the base hypothesis usually gives us a clue as to what we
might want to measure, change, etc.

So sure, at the end of your multi-week process you get a slick video, except
you’re not much further down the line of inquiry (and if it gets leaked you
have the whole internet turn on you).

If one were assigned to think about this topic, it seems like actually
exploring the base hypothesis (“personal data can be thought of as a genome”)
with real experiments designed to test the limits of that statement would be a
much more productive use of time.

~~~
otakucode
Why did you simply skip over the whole "a system could ignore the user's
values and instead only offer suggestions and goals aligned with Google's
values"? That was clearly stressed.

------
walterbell
One theoretical foundation for some of the “nudging” ideas:

[http://www.nybooks.com/articles/2014/10/09/cass-sunstein-its...](http://www.nybooks.com/articles/2014/10/09/cass-sunstein-its-all-your-own-good/)

 _> there is very little awareness in these books about the problem of trust.
Every day we are bombarded with offers whose choice architecture is
manipulated, not necessarily in our favor. The latest deal from the phone
company is designed to bamboozle us, and we may well want such blandishments
regulated. But it is not clear whether the regulators themselves are
trustworthy. Governments don’t just make mistakes; they sometimes set out
deliberately to mislead us. The mendacity of elected officials is legendary
and claims on our trust and credulity have often been squandered. It is
against this background that we have to consider how nudging might be abused.

... Sunstein’s idea is that we who know better should manipulate the choice
architecture so that those who are less likely to perceive what is good for
them can be induced to choose the options that we have decided are in their
best interest. Thaler and Sunstein talk sometimes of “asymmetric paternalism.”

... Deeper even than this is a prickly concern about dignity. What becomes of
the self-respect we invest in our own willed actions, flawed and misguided
though they often are, when so many of our choices are manipulated to promote
what someone else sees (perhaps rightly) as our best interest? Sunstein is
well aware that many will see the rigging of choice through nudges as an
affront to human dignity: I mean dignity in the sense of self-respect, an
individual’s awareness of her own worth as a chooser. The term “dignity” did
not appear in the book he wrote with Thaler, but in _Why Nudge?_ Sunstein
concedes that this objection is “intensely felt.” Practically everything he
says about it, however, is an attempt to brush dignity aside._

~~~
dsfyu404ed
>... Sunstein’s idea is that we who know better should manipulate the choice
architecture so that those who are less likely to perceive what is good for
them can be induced to choose the options that we have decided are in their
best interest. Thaler and Sunstein talk sometimes of “asymmetric paternalism.”

That just sounds like the parent class of "we should convert all these
barbarians to Christianity and get rid of most of their culture in the process
for their own good so they don't burn in hell."

------
_bxg1
One of the greatest and most common fallacies I see Silicon Valley companies
make on a regular basis is assuming that human beings are purely rational
entities. To be more specific, in many cases, they ignore mental health.

When applied to dispassionate entities this type of reasoning might make
sense, but real people:

- Experience anxiety. Chilling effects, stress, etc. become abundant when we
have no internal life; no room for our minds to move and experiment and be
messy without being locked into step with the outside world.

- Are changed by observations we or others make about ourselves. Our minds
are not static systems, they are infinitely recursive and dynamic. Observation
of our own minds - even perfectly accurate observation - influences them. The
cycle keeps going, ad infinitum.

The world is simpler when it's homogenized. It's easier to reason about. I
sympathize; I'm a programmer myself. But these aren't just approximation
errors. These are ways in which recent technology actively damages the human
psyche, both on an individual and a societal level. The Olympians of Silicon
Valley are trying to shape the world in their own image, and they'll burn it
to the ground before they admit their fault.

------
dmayle
I think you could go back to the early seventies and make this same kind of
video about gene sequencing, DNA, CRISPR, etc.

Just like we didn't know to what extent DNA changes can alter an individual,
and what the repercussions of making those changes are, the same is true of
our actions and experiences.

This video is a look at a nascent field that requires thinking and ethical
explorations.

What are the ramifications of an individual who, through modern science, has
the ability to not just alter their genetic makeup, but also their experiences
and behaviors so as to achieve a desired outcome?

Where is free will in all of this? Does it exist? Will we all become beholden
to our former selves who, making decisions with less information, made
decisions we no longer agree with?

What about our parents? They make choices, sending us to schools or acting
classes, and in doing so they shape us in the same way.

...and where is the limit? As long as we're not very good at it (like right
now), it's ok, but once we've accomplished a certain level of proficiency, is
that when it becomes dangerous?

This is a thought-provoking video, but I think a lot of the issues expressed
here are a reflection of the viewers, and not of the video itself. It even
raises the concept of responsible behavior (it refers to 'stewardship').

Our current society values the concept of 'humanity', and the 'greater good',
which gives me hope that we will take action and correct evolutionarily
unstable systems and behaviors.

To end on a joke: I, for one, welcome our new selfish ledger overlords.

------
cmiles74
It's like they've forgotten that they mostly sell ads. In the end, they won't
be training the human race to be more generous. They'll be training them to
buy more crap.

~~~
gaius
The article mentions this

 _which would “reflect Google’s values as an organization,”_

~~~
cmiles74
At heart, I disagree that Google's "values as an organization" differ
from "sell more ads, make more money." For sure, there may be individuals with
loftier goals and some of them may even have positions high up in the
organization. But in the end, it's a corporation, and in the end they all have
the same goals.

The whole presentation strikes me as insulated and disconnected from reality.

------
skummetmaelk
The video is describing mind control. Literally... I have no words.

First they want to let users select how they want to change their
behaviour. I guess that's fine; we all want to improve ourselves. This segment
lasts a couple of seconds and the rest of the time is dedicated to describing
how the system can be made to predict and target "bad" behaviours of humans as
a species.

Who greenlit this.

EDIT: I understand this is not a product, but clearly it means Google is
seriously thinking about doing such things. This is just as scary for me.

~~~
hn_throwaway_99
> EDIT: I understand this is not a product, but clearly it means Google is
> seriously thinking about doing such things. This is just as scary for me.

I think this is bullshit. You are essentially accusing Google of
"Thoughtcrime". I think Google takes plenty of actually questionable _actions_
that deserve opprobrium, so I'd rather focus on that than internal
deliberations.

~~~
skummetmaelk
I'm not accusing them of thoughtcrime. However, if this much effort is put
into creating a "speculative design" video, there have clearly been deep
discussions about the topic beforehand. It was not cast aside even after these
discussions. This is to me an indication that there are internal forces in
Google seriously considering going after such applications.

As an aside: this whole thoughtcrime stuff is ridiculous. Just because
someone has a right to say something does not mean you are not allowed to be
disgusted by them or think they are bad people because of what they say.
Imagine a coworker saying that women deserve to be raped. You would not think
of him as a good person after that, even though he has every right to make
that statement.

------
sqdbps
When did tech reporting become the thought police and a guardian of the status
quo?

They seem to react to anything more exciting than a new phone release with
fear and scorn.

Exploring concepts and ideas is how progress happens and it's telling how they
see "Duplex" \- the most impressive tech showcased at these keynotes - as a
misstep.

~~~
zeusk
> Exploring concepts and ideas is how progress happens and it's telling how
> they see "Duplex", the most impressing tech showcased at these keynotes, as
> a misstep.

Creating agents that can imitate human speech is spectacular but their
demonstrated use of it is far from it. It's essentially expressing "my time
isn't worth speaking to the human on the other side"; perhaps you feel okay
with it when the robot is working for you, but I wonder how you'd feel on the
other side of the equation.

So much for don't be evil.

~~~
pythonaut_16
As if businesses haven't been using robocalls and automated response systems
for years.

~~~
zeusk
Yes, and people don't like that. This way we'll end up with robots talking to
robots using human language, which couldn't be more idiotic.

~~~
fixermark
> which couldn't be more idiotic

Actually, that sounds like a perfectly reasonable solution to piggyback on the
existing telecom infrastructure in a backwards-compatible way.

I think you just described a stepping-stone for moving from a system where
humans burn a bunch of time negotiating deals to one where deals are auto-
negotiated on behalf of humans, without having to build out a new protocol from
scratch or run a new initiative through ANSI or W3C.

Neat. Beats the pants off the transition plan for IPv4 to IPv6 at least.

~~~
zeusk
> Actually, that sounds like a perfectly reasonable solution to piggyback on
> the existing telecom infrastructure in a backwards-compatible way.

> I think you just described a stepping-stone for moving from a system where
> humans burn a bunch of time negotiating deals to where deals are auto-
> negotiated on behalf of humans without having to build out a new protocol
> from scratch or run a new initiative through ANSI or W3C.

Except for the fact that human language is not entirely precise. We can't even
implement error-free "smart" contracts in a specialized language, and you think
doing so in a language with far more ambiguity, using computers, is a good
idea?

------
jessriedel
Can anyone speak to whether this style of video is used widely internally at
Alphabet X, or if this is just an anomaly? I'm not talking about the content,
just the practice of sending around videos about ostensibly intellectual ideas
that have very little content and instead rely on music/imagery/etc.? It's
very embarrassing.

~~~
GuiA
It’s an emerging subfield of design that calls itself “speculative design”.

Can’t speak for Google, but it is gaining traction. See for instance
[https://www.primerconference.com/2017/](https://www.primerconference.com/2017/)

~~~
jessriedel
TED talks have gained a lot of traction too...

[https://www.youtube.com/watch?v=DkGMY63FF3Q](https://www.youtube.com/watch?v=DkGMY63FF3Q)

------
joejerryronnie
This is the most terrifying dystopian thing I have ever seen (mostly because
we're not that far away from this becoming a reality). To think that just a
couple decades from now, Google could literally be directing a majority of our
waking actions based on a soulless machine-learning algorithm. In this
reality, do you have the ability to opt out or is the only solution violent
revolution and the destruction of the machines?

------
PuffinBlue
Even though speculative, collective multi-generational Behavioural Sequencing
(and influencing) seems like psychohistory.

Sci-fi becoming possible dystopian reality yet again.

------
walterbell
Could we see the positive speculation video? There must have been one, right?
Standard scenario planning technique.

------
AnsisMalins
Good to know I'm not the only one to think of this: a phone app that asks what
you want and then orders you around. And when you tick the online box, it
optimizes over the population of all online users. For example, it might
suggest places and times so as to increase the chance of serendipitous
encounters.
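
The population-level matching part is easy to sketch with toy data; the names,
places, and slots below are all invented, and a real system would need consent
and scale:

    # Suggest the (place, hour) slot of one user that overlaps with the most
    # other online users, to maximize chance encounters. Toy data only.
    from collections import Counter

    plans = {
        "alice": {("cafe", 9), ("park", 17)},
        "bob":   {("cafe", 9), ("gym", 7)},
        "carol": {("park", 17), ("cafe", 9)},
    }

    def suggest(user):
        # Count how many other users could be at each of this user's slots.
        others = Counter(slot for u, slots in plans.items() if u != user
                         for slot in slots)
        return max(plans[user], key=lambda slot: others[slot])

    print(suggest("alice"))  # -> ('cafe', 9): two others could be there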

------
sizzle
This is eerily similar to the concept of a 'cookie' seen in the Black Mirror
episode, White Christmas.

( _Spoiler Alert_ )

A cookie is a device "that is inserted under the client's head by the brain and
kept there for a week, giving it time to accurately replicate the individual's
consciousness. It is then removed and installed in a larger, egg-shaped device
which can be connected to a computer or tablet."

Highly recommend this episode for those interested:
[https://en.m.wikipedia.org/wiki/White_Christmas_(Black_Mirro...](https://en.m.wikipedia.org/wiki/White_Christmas_\(Black_Mirror\))

------
taneq
Another perfect illustration of the way that Google recasts "your personal
information is required to generate that outcome" as "you must surrender your
personal information to us in order to benefit from that outcome."

Most of the functionality they describe is perfectly achievable without any
personal data ever leaving your phone / home computer / personal server, but
the quid pro quo of "service in exchange for data" is too deeply ingrained.
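
For example, a "nudge" of the kind the video shows could run entirely
on-device. A minimal sketch, with the file name and event schema invented, and
assuming the module never touches the network:

    # Keep the ledger in local storage and derive suggestions locally.
    import json, pathlib

    LEDGER = pathlib.Path.home() / ".local_ledger.json"  # never uploaded

    def record(event):
        # Append an observation to the on-device ledger file.
        events = json.loads(LEDGER.read_text()) if LEDGER.exists() else []
        events.append(event)
        LEDGER.write_text(json.dumps(events))

    def suggest_goal():
        # No network calls anywhere: read the local file, emit a suggestion.
        events = json.loads(LEDGER.read_text()) if LEDGER.exists() else []
        rides = [e for e in events if e.get("type") == "ride"]
        shared = sum(1 for e in rides if e.get("shared"))
        if rides and shared / len(rides) < 0.5:
            return "Try a shared ride next time?"
        return "Keep it up."

    record({"type": "ride", "shared": False})
    print(suggest_goal())  # -> "Try a shared ride next time?"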

------
falcolas
This underscores to me why I have no fear of the Singularity. It would be
impossible for an AI to do worse things to us than we as humanity are willing
to do to ourselves.

We have tortured and maimed in the name of advancing medicine. We have killed
millions in the name of saving more millions. We manipulate people's thoughts
and behaviors in order to make them "better". We evaluate other people based
on those we perceive as their peers.

AI only makes those processes easier.

~~~
wilsonnb
A better reason not to fear the singularity is that it is a science fiction
concept whose connections to the real world are tenuous at best.

Anyways, I think the fear most people have is either that the AI would be far
better at doing these bad things to humans than humans are (which you allude
to in your comment), or that they always envision themselves on the "good guys"
side in human vs human conflict and don't have that comfort when it comes to
AI vs human conflict.

Those fears seem valid to me if you believe in the underlying premise.

------
xab9
Did anyone think of Asimov's Foundation? I doubt that we will ever have "big
enough data" for such a scale, but the socio-genetical ledger is a concept
that may lead toward ubiquitous informational systems.

------
Shank
With a lot of leaked internal videos, you get a lot of spin from whatever
agency is reporting on it. With GDPR and Facebook's data problems, a video
like this is surely going to be reported on as a dystopian future and
indicative of everything that's wrong with Google.

But taken at face value -- just judging the video itself -- it doesn't seem
that bad. To the untrained eye it can appear horrible, but this is largely
because people conflate all data into one group. There are two different types
of data: the data that you create intentionally, and the data
that's the passive result of you being around technology. There's data that,
if shared, could be potentially deadly in some parts of the world (messages,
photos, videos, calls, etc.) -- and there's data that exists, but is not
captured about people. This video is clearly showing more of the latter.

The canonical example the video uses is a _scale_. The idea is that the ledger
thinks that it can make better decisions if it knows your weight. It's not
sure that you'd buy an existing smart scale, so it wants to create one that
could fit the bill for capturing that data from you. This is passive data --
it already exists about you, but it isn't collected. Of all the types of data
collection, this is the most legitimate! If you go to a doctor's
office, the first thing they do is weigh you and take your temperature, for
good reason. It's one of the biggest factors in health and treatment for a
patient. It can give off warning signs or indicate more effective treatments.
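
To make that concrete: the "missing data" step is really just a diff between
the fields the ledger wants and the fields it already has (field names
invented):

    # Surface uncollected fields as candidates for a new device, like the
    # video's custom scale. Both sets are hypothetical.
    desired = {"steps", "location", "weight", "sleep_hours"}
    collected = {"steps", "location"}

    missing = desired - collected
    print(sorted(missing))  # -> ['sleep_hours', 'weight']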

This video isn't about taking in your random personal data that you store on
Google and using it nefariously. It's about taking in data that you already
have and trying to make actionable decisions based on it. The video
characterizes the data as if it's an independent living thing as an exercise
-- not as the grand end outcome. The idea is simply that if you can track what
users do and how they behave in certain situations, you can use those past
decisions to help inform future generations. The video doesn't even make this
a mandatory thing -- it shows a user turning it on and using it to make
specific goals (like eat healthier).

If Google wants to tackle a problem like depression through opt-in deep
learning on habits, nudging people in the right way, then I don't see a
problem with it. If you could categorically learn how to avoid pitfalls and
make things better for future generations, why wouldn't you? It actually kinda
gives every single life a little more meaning and purpose -- actually acting
as inputs on how to better the human race.

Everyone wants to cast things like this in the worst possible light.
"Google is evil or wants to sell ads, so they're going to build a system for
being evil and selling ads." But look at the facts: there's a lot of fear over
not a lot of time tested results. Google search is really, really good. Waymo
cars really don't crash that much. They do a lot of projects for the "greater
good," and haven't been historically known to take advantage of the data they
collect.

They're pitching this internal video to their employees as inspiration to
build a better quality of life for future generations. They aren't pitching it
to, uh, data mine everyone for their own profit. If ethics is about what you
do when nobody is looking, then this is a good example of consistent ethics.
When Google says publicly they care, and then privately says they care, I
think it's safe to say they genuinely do care.

------
otakucode
This fits perfectly with the views expressed by Eric 'If you don't have
anything to hide you shouldn't worry' Schmidt in his book 'The New Digital
Age.' Google is rich. Therefore, Google is Better. Because they are Better,
they need to rein in society and guide it for its own protection and benefit.
It's really a very old school mindset, the kind that ruled the world for a
very long time in the ages of kings, god-kings, theocracies, etc. It's the
idea that some people are fundamentally born to lead and others born to
follow. This is precisely and exactly the viewpoint that "all men are created
equal" was penned to spit in the face of.

------
dwighttk
tangent: videos like this are so irritating... Just give me the transcript,
possibly with pictures/video interleaved if necessary (but probably fewer than
you think).

~~~
vertexFarm
Right? Remember when you could search for a tutorial online and you got an
actual written process instead of some dolt on YouTube awkwardly recording his
screen and saying "uhhhhhhh" for the first third of the video? And wasting
time doing stupid things like opening the program and setting up a test file
instead of just recording the actual thing the tutorial is supposed to
instruct? Ten minutes of bullshit just to show me where that one menu is
hiding.

I hate video tutorials, and it's all you can find now. Sorry, I'll shut up.
This is way off topic. Carry on.

~~~
monort
I hate it too, but try to create a tutorial and you will understand the reason:
it takes much less time; you just record what you are already doing. The
alternative to video is not text but the absence of information.

~~~
dwighttk
one could always edit out large chunks of useless video

------
aoner
I feel like Yuval Noah Harari is spot on with dataism emerging as the new
dominant intersubjective truth. Go and read Homo Deus; you will not be
disappointed. The idea of dataism will slowly replace our current humanistic
ideas of liberty/individualism. I'm not saying I agree with the video/general
direction, but I think we're foolish to disregard this idea as an isolated
event produced in a "culture bubble".

------
jusujusu
Great music!

------
hprotagonist
_Of all tyrannies, a tyranny sincerely exercised for the good of its victims
may be the most oppressive. It would be better to live under robber barons
than under omnipotent moral busybodies. The robber baron’s cruelty may
sometimes sleep, his cupidity may at some point be satiated; but those who
torment us for our own good will torment us without end for they do so with
the approval of their own conscience. They may be more likely to go to Heaven
yet at the same time likelier to make a Hell of earth. This very kindness
stings with intolerable insult. To be “cured” against one’s will and cured of
states which we may not regard as disease is to be put on a level with those who
have not yet reached the age of reason or those who never will; to be classed
with infants, imbeciles, and domestic animals._

C. S. Lewis, 1948

~~~
mhomde
I kinda want to see a Black Mirror episode of AI overlords shepherding humans
for their own good :)

~~~
fixermark
Just read the last story in "I, Robot".

Twist: from the point of view of at least one of the characters (and the
general tone of the story), it's a good thing. President of the World makes
the case that the world has always been too complicated for humans to control
their own fate, and the ultimate goal of technology has always been to guard
against chaos with machines big enough to tackle that complexity. ;)

~~~
mhomde
Yeah, he has a point :) It's always struck me as weird that we "allow"
political leaders who put personal gain and career in front of what's
good for everyone else. Also, as you touch upon, the world is rapidly becoming
too complex to be governed by us. AI could save us from bureaucratic
inefficiency, corruption, shortsightedness and cognitive bias.

I'd actually welcome our AI overlords if they could execute according to a
reasonable value system (maybe execute is the wrong word). On the other hand,
how do you set its goals, and what could possibly go wrong :)

------
s_kilk
> The title is an homage to Richard Dawkins’ 1976 book The Selfish Gene.

Another instance of techies being led astray by flimsy, reactionary pseudo-
science.

~~~
igravious
I think you might be confusing this with Dawkins’ 1977 book The Selfish
Horoscope.

------
dmead
This is getting gross. I really fail to understand the type of person who
believes this is a good idea.

~~~
fixermark
I believe the article clarifies that the creator of this video doesn't believe
it's a good idea. It's speculative.

