An Alphabet X concept from 2016 is an unsettling vision of social engineering (theverge.com)
159 points by m1 5 months ago | 104 comments



On the one hand, we want freedom of speech inside organizations so that they can discuss ideas like this. On the other hand, it's dangerous if an idea like this leaks, because it presumably shows how evil Google is.

Now imagine a leaker inside your own brain. Every thought you may have could be displayed on the front page of The Verge or whatever. "Peter's internal thoughts hint he may be a pedophile".

I prefer Google having these internal discussions of highly disturbing concepts vs not having them.

"I do not agree with what you have to say, but I'll defend to the death your right to say it."


> freedom of speech inside organizations so that they can discuss ideas like this

What does that actually mean in practice? "We're going to float this idea which we all think is unethical, but without actually labelling it as unethical in the presentation?"

(The strange thing seems to be a software engineering organisation that thinks of things without immediately coming up with dozens of examples of how this could go wrong!)


Directly from the article, it seems that they are well aware that it's not going to be the most popular idea.

> When reached for comment on The Selfish Ledger, an X spokesperson provided the following statement to The Verge:

>> “We understand if this is disturbing -- it is designed to be. This is a thought-experiment by the Design team from years ago that uses a technique known as ‘speculative design’ to explore uncomfortable ideas and concepts in order to provoke discussion and debate. It’s not related to any current or future products.”


Keep in mind that we're seeing a leak taken out of its context. We have no idea whether the "immediately coming up with dozens of examples" happened one room over and simply wasn't leaked.

(... which leads me personally to ask the question: what was the intent of the leaker here? ;) ).


On the contrary, the earnestness of the presentation implies that they believe it _is_ the ethical thing to do. That is a part of what struck me as creepy about the whole thing.


> Now imagine a leaker inside your own brain.

But it isn't like that at all. We don't judge people on their internal thoughts because they are fleeting and can even be random. But we do judge when someone distills those thoughts and starts opening their mouth or acting on them.

This was a well-produced video and concept, which took quite some time. It was not a fleeting discussion of an interesting idea. It was distilled.

It isn't so much a leaker in your mind, but someone leaking what you gave a presentation on.


Motive is a relevant consideration in legal cases, in addition to speech and action. So no, we do in fact judge people on their internal thoughts.


But as you are pointing out, they took action on that thought. It was more than just fleeting.


> But as you are pointing out, they took action on that thought

They didn't take action if I correctly understood the allegory. If an organisation is a person (my friend...), then individual workers are like neurons/brain regions. This internal video is analogous to a thought that may or may not be acted out by creating a product that impacts other people outside of the org.

I agree that the idea is dystopian, but the article was a little over the top with the scaremongering.


Making a video is action. It is not implementation, but it is definitely action.


Of course we're just arguing about semantics here, but I would absolutely argue that under the organisation-as-person metaphor, this video is absolutely not an "action" by the organism, but rather just part of its internal process. I'd say it's similar to how leading a military exercise does not actually constitute military action, unless you end up actually invading some place.


The action alone is not what is judged. It is action and intent.


We don't judge people on their internal thoughts because they are fleeting and can even be random.

I disagree. I think we don't judge people on their internal thoughts because we can't see those internal thoughts. If they became visible, people would start judging each other on them.


Discussion is fine. It's only when it turns to action that there are issues. At least, that is my view. The problem is, Google acts. And its leaders publish books announcing their intent to act. And it's exactly in line with The Selfish Ledger. That suggests strongly that this has not been a discussion which was had and then decided against through good reasoning and a glance at the history of the tragedies that have come from every attempt at large-scale social engineering.


> your right to say it.

I think what's more relevant is the right not to say something publicly, and by extension the right to have thoughts free of external judgement and consequence.


Exactly. Just as Google (and everyone in it) has the right to express their thoughts freely, I have the exact same right to express my thoughts about their speech.


This Mitchell & Webb sketch may serve illustrative and/or informative purposes related to the topic at hand (may contain moving pictures and British accents): https://www.youtube.com/watch?v=owI7DOeO_yg


Links are great, but please provide some explanation in future (or edit in one now); people generally don't like surprise links.


Done!


Can you describe in what way you believe this program to be evil?


It's a building block for a command-side society. Just take a look at another article on the HN home page: https://apnews.com/6e151296fb194f85ba69a8babd972e4b

The Chinese are rounding up Muslims by the thousand and re-training them out of their culture. Do you want people behind desks thousands of miles away from you manipulating your culture with computer aids? My data is mine, not the fodder for a corporation.

The U.S. is about freedom. Want to create your own cult? Sure, find some land and do it. But I don't want to be involved in efforts to command-side my life. I don't trust Google's corporate values for driving my life. I don't trust advertisers and election-riggers using psychological monitoring to influence consumption and voting choices. They haven't demonstrated they have our best interests in mind - or that they can be disentangled from evil foreign powers.

This video talks about synthesizing objects of desire based on intimate details of people's lives, throughout their lives, then anticipating the decisions they make throughout their life. I don't want a server room full of computers figuring out how to make me comply with Google's corporate values.

This video even admits explicitly that "the notion of a global good is problematic," so it weasels to creating optimal health. In fact, the keys to optimal health are already widely known: eating well and getting exercise. Society's problem is that we have overwhelming advertising influences using scientific techniques to push us away from what we already know. We need distance from this asymmetric power relationship, not deep learning to enhance it.


Seems like a false distinction in that Google doesn't have coercive power the same way the Chinese state does. Rather, the video is showing what is clearly an opt-in system to use data to guide a user toward some goal of theirs - in the example's case of "health." Which in theory would then expand into all areas of life so that someone could optimize their decisions with data.

Do you make the claim that it is impossible to align a user's goals with a company's goals?


At one company I've worked with, health care costs an extra $50/month unless one complies with their health-tracking system's goals. I recently read of an ADD sufferer who was unable to obtain enough medicine to last a full week due to a DUI infraction decades ago. Massive government control over our health and permanent records is extant and nascent.

Companies' goals are absolutely at odds with individual goals in many, many cases. That's why Consumer Confidence is tracked by the government. That's why manipulative advertising is ubiquitous. I'm not saying it's impossible, but why would we trust an advertising company with our health, especially one with a lengthy list of privacy violations? At the very least, health programs should originate from trusted health-related organizations.


> Do you want people behind desks thousands of miles away from you manipulating your culture with computer aids?

Did you mean to describe Hollywood or did you just accidentally describe Hollywood?


It doesn't just talk about synthesizing objects of desire based on intimate details of people's lives. It says the reason for creating those desirable objects and having the user acquire them is not to make money on the sale, but to give the ledger access to that object's observations of the user, for a better understanding of the user.

"The ledger is missing a key data source which it requires in order to better understand this user." (4:22)

It determines if the user is likely to interact with an object that will produce the data and if not then,

"In situations where no suitable product is found, the ledger may investigate a bespoke solution."

The video[1] says it much better than I can. It has amazingly high production values and writing for an internal company video. Although Google has been hiring tens of thousands of the best young people for a while now, so I should not be too surprised.

[1]https://www.youtube.com/watch?v=iqUCX5rPQug&feature=youtu.be


This is such a massive straw man. We're not concerned with their right to distribute memos internally. We're concerned with their ability to almost unilaterally direct the progress of civilizations.


This would be a good time to share internal free speech discussions of “highly non-disturbing concepts”.


But it is so much easier to jump to hatred and call them Nazi scum. It's much harder to accept that uncomfortable conversations with conflicting ideas happen.


It's actually pretty easy to accept that these conversations happen. I do. The thing is, in conversations, people claim to hold and/or appear to hold certain views and beliefs. Other people become aware of them (somehow), and make judgements.

That first part is basically universal human nature, and I think it's basically a necessity and a right. The variety and possible problem come in whether and how people express these judgements. I don't think the specific act of writing or speaking your opinion about someone's beliefs and opinions is ever wrong. You can do other bad things, like inciting violence or committing fraud, but those are separate things.


I agree with you completely; I meant my comment to be sarcastic but forgot the /s. I think a large part of why people seem to fear free speech now is that they see people in the media and on TV get enraged about others' beliefs, and begin to think that it is the social norm to get outraged.


Kinda off.

This could have been discussed just fine, but instead they got five PhDs for a few months and then a video editing team to make a cheesy slideshow of why Peter might be a pedophile.

There's some tangible difference that you can't defend with your brain-reading generalization.


As a former Googler, this video doesn't surprise me at all. Google tends to have a bit of a ...culture bubble, I suppose, where the engineers and designers forget that not everyone is okay with having all their data harvested and used in whatever manner Google thinks is best. I suspect this is related to the fact that the vast majority of these engineers and designers are straight, cis, college-educated men who are white or Asian and between the ages of 18 and 35, a group which on average tends to have fewer reasons to be concerned about their data being used against them. Then add in the fact that most employees have bought into the general "all-knowing, all-caring Google" mindset, and you have a perfect recipe for the kinds of thoughts presented in the video.

The scary thing is, most of the people I worked with genuinely believed that ideas like this are good for the user. The fact that such things require extensive privacy invasions don't even cross their minds, because they don't think of it as a privacy invasion. It's just another way to "optimize" toward some goal or another.


One of my great-grandmothers worked in a button factory as a child. My grandfather on that side was able to go to college because of The Cooper Union. It was explicitly non-Lamarckian. As is all social mobility. Google's vision literally pairs the idea that the options offered children will be bounded by the data trail of their parents with the image of a man working on a rowboat. A literal interpretation -- the only kind computers make -- suggests that the artifacts this system will produce for the man's children are more likely to be oars than iPads. That's not to say this fetishization of Lamarckian genetics doesn't make sense inside Google. The median salary for the people there is $200,000, and the system will probably cough up Hondas if not BMWs. Google's vision reduces to using computation to create and maintain a caste system. The image of the man in the boat is a deliberate design decision.


> That's not to say this fetishization of Lamarckian genetics doesn't make sense inside Google. The median salary for the people there is $200,000, and the system will probably cough up Hondas if not BMWs.

This is a problem I have with all tech companies who are designing products used by hundreds of millions, if not billions, of humans - ranging from farmers in Germany to factory workers in India or mothers in Ecuador or ...

How can the employees of those companies - a group of people who are used to driving Teslas, getting their groceries delivered, etc. - relate to those users and understand them in any meaningful way at all? (Replying that a bunch of Google UX designers once spent 3 days in the Philippines doing “user research” is a terrible answer.)


The video does not examine the idea that “locally grown” produce may actually be a net negative to society: it brings up the problems inherent in a “global good” and then proceeds in the next breath without addressing any of them, carrying on as if they don’t exist.

It’s a clever instance of self-deception.


Here are the concrete ideas presented in 6 mins of narrated video:

- a system could highlight choices aligned with the user’s values (examples: recommend taking Uber pool instead of Uber X in the Uber app, suggest locally grown bananas in a shopping app)

- a system could manufacture custom objects for a user to gather missing data about that user

And the broader, higher level concept:

- data aggregated about a user can be considered to be a “genome”, and perhaps concepts applicable to genomes (sequencing, ...) are similarly applicable

This whole sub field of “speculative design” feels particularly useless (this video is part of a broader novel practice, see for instance https://www.primerconference.com/2017/). A few very vague points are raised, with no direct way to probe the questions or start answering them. This is in contrast to for example the scientific approach, where the base hypothesis usually gives us a clue as to what we might want to measure, change, etc.

So sure, at the end of your multi week process you get a slick video, except you’re not much further down the line of inquiry (and if it gets leaked you have the whole internet turn on you).

If one were assigned to think about this topic, it seems like actually exploring the base hypothesis (“personal data can be thought of as a genome”) with real experiments designed to test the limits of that statement would be a much more productive use of time.


Why did you simply skip over the whole "a system could ignore the user's values and instead only offer suggestions and goals aligned with Google's values"? That was clearly stressed.


One theoretical foundation for some of the “nudging” ideas:

http://www.nybooks.com/articles/2014/10/09/cass-sunstein-its...

> there is very little awareness in these books about the problem of trust. Every day we are bombarded with offers whose choice architecture is manipulated, not necessarily in our favor. The latest deal from the phone company is designed to bamboozle us, and we may well want such blandishments regulated. But it is not clear whether the regulators themselves are trustworthy. Governments don’t just make mistakes; they sometimes set out deliberately to mislead us. The mendacity of elected officials is legendary and claims on our trust and credulity have often been squandered. It is against this background that we have to consider how nudging might be abused.

... Sunstein’s idea is that we who know better should manipulate the choice architecture so that those who are less likely to perceive what is good for them can be induced to choose the options that we have decided are in their best interest. Thaler and Sunstein talk sometimes of “asymmetric paternalism.”

... Deeper even than this is a prickly concern about dignity. What becomes of the self-respect we invest in our own willed actions, flawed and misguided though they often are, when so many of our choices are manipulated to promote what someone else sees (perhaps rightly) as our best interest? Sunstein is well aware that many will see the rigging of choice through nudges as an affront to human dignity: I mean dignity in the sense of self-respect, an individual’s awareness of her own worth as a chooser. The term “dignity” did not appear in the book he wrote with Thaler, but in Why Nudge? Sunstein concedes that this objection is “intensely felt.” Practically everything he says about it, however, is an attempt to brush dignity aside.


>... Sunstein’s idea is that we who know better should manipulate the choice architecture so that those who are less likely to perceive what is good for them can be induced to choose the options that we have decided are in their best interest. Thaler and Sunstein talk sometimes of “asymmetric paternalism.”

That just sounds like the parent class of "we should convert all these barbarians to Christianity and get rid of most of their culture in the process for their own good so they don't burn in hell."


One of the greatest and most common fallacies I see Silicon Valley companies make on a regular basis, is assuming that human beings are purely rational entities. To be more specific, in many cases, they ignore mental health.

When applied to dispassionate entities this type of reasoning might make sense, but real people:

- Experience anxiety. Chilling effects, stress, etc. become abundant when we have no internal life; no room for our minds to move and experiment and be messy without being locked into step with the outside world.

- Are changed by observations we or others make about ourselves. Our minds are not static systems, they are infinitely recursive and dynamic. Observation of our own minds - even perfectly accurate observation - influences them. The cycle keeps going, ad infinitum.

The world is simpler when it's homogenized. It's easier to reason about. I sympathize; I'm a programmer myself. But these aren't just approximation errors. These are ways in which recent technology actively damages the human psyche, both on an individual and a societal level. The Olympians of Silicon Valley are trying to shape the world in their own image, and they'll burn it to the ground before they admit their fault.


I think you could go back to the early seventies and make this same kind of video about gene sequencing, DNA, CRISPR, etc.

Just like we didn't know to what extent DNA changes can alter an individual, and what the repercussions of making those changes are, the same is true of our actions and experiences.

This video is a look at a nascent field that requires thinking and ethical explorations.

What are the ramifications of an individual who, through modern science, has the ability to not just alter their genetic makeup, but also their experiences and behaviors so as to achieve a desired outcome?

Where is free will in all of this? Does it exist? Will we all become beholden to our former selves who, making decisions with less information, made decisions we no longer agree with?

What about our parents? They make choices, sending us to schools, or acting classes, but they shape us in the same way.

...and where is the limit? As long as we're not very good at it (like right now), it's OK, but once we've accomplished a certain level of proficiency, is that when it becomes dangerous?

This is a thought-provoking video, but I think a lot of the issues expressed here are a reflection of the viewers, and not of the video itself. It even raises the concept of responsible behavior (it refers to 'stewardship').

Our current society values the concept of 'humanity', and the 'greater good', which gives me hope that we will take action and correct evolutionary unstable systems and behaviors.

To end on a joke: I, for one, welcome our new selfish ledger overlords.


It's like they've forgotten that they mostly sell ads. In the end, they won't be training the human race to be more generous. They'll be training them to buy more crap.


The article mentions this:

> which would “reflect Google’s values as an organization,”


At heart, I disagree that Google's "values as an organization" differ from "sell more ads, make more money." For sure, there may be individuals with loftier goals, and some of them may even have positions high up in the organization. But in the end, it's a corporation, and in the end they all have the same goals.

The whole presentation strikes me as insulated and disconnected from reality.


The video is describing mind control. Literally... I have no words.

First they want to let users select how they want to change their behaviour. I guess that's fine; we all want to improve ourselves. This segment lasts a couple of seconds, and the rest of the time is dedicated to describing how the system can be made to predict and target "bad" behaviours of humans as a species.

Who greenlit this.

EDIT: I understand this is not a product, but clearly it means Google is seriously thinking about doing such things. This is just as scary for me.


> Who greenlit this.

> “We understand if this is disturbing -- it is designed to be. This is a thought-experiment by the Design team from years ago that uses a technique known as ‘speculative design’ to explore uncomfortable ideas and concepts in order to provoke discussion and debate. It’s not related to any current or future products.”

So the answer seems to be "nobody".


> EDIT: I understand this is not a product, but clearly it means Google is seriously thinking about doing such things. This is just as scary for me.

I think this is bullshit. You are essentially accusing Google of "Thoughtcrime". I think Google takes plenty of actually questionable actions that deserve opprobrium, so I'd rather focus on that than internal deliberations.


I'm not accusing them of thoughtcrime. However, if this much effort is put into creating a "speculative design" video there have clearly been deep discussions about the topic beforehand. It was not cast away even after these discussions. This is to me an indication that there are internal forces in Google seriously considering going after such applications.

As an aside: this whole thoughtcrime stuff is ridiculous. Just because someone has a right to say something does not mean you are not allowed to be disgusted by them, or to think they are bad people because of what they say. Imagine a coworker saying that women deserve to be raped. You would not think of him as a good person after that, even though he has every right to make that statement.


Nobody. As clarified near the bottom of the article, the whole thing is speculative concept work. It doesn't reflect anything that's actually being built.

I'd assume it's a useful exercise for a company with the kind of power a Google has to take a hard look at the limits of that power.


Unless you're saying you're privy to inside information, YOU don't know what is or isn't being built inside Google.


I took the statement from X at face value. "It’s not related to any current or future products."


> I understand this is not a product, but clearly it means Google is seriously thinking about doing such things. This is just as scary for me.

Adversarial modeling. With organizations like the Chinese government actually implementing schemes like this, you really do want other organizations---especially ones not particularly sympathetic to the Chinese Communist Party's interests---considering these topics.


> you really do want other organizations considering these topics.

Why?


So they can poke holes in them, recognize the pattern when some other org actually implements them (because they already know what the thing looks like), and already either be aware of the ways the thing breaks down or wrenches an adversary could throw into the thing to make it break down, were one so inclined.


Naturally, and I would welcome this kind of analysis. Just not from a company that would benefit immensely financially speaking from eventually building such an application.


Yes, but did you see any hint of poking holes in there at all? It was 100% behind the idea.


It's a leak, so I don't assume we're seeing all sides of the story.


In a theoretical sense this is essentially how Google works. See Big Other: Surveillance Capitalism and the Prospects of an Information Civilization https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2594754


When did tech reporting become the thought police and a guardian of the status quo?

They seem to react to anything more exciting than a new phone release with fear and scorn.

Exploring concepts and ideas is how progress happens and it's telling how they see "Duplex" - the most impressive tech showcased at these keynotes - as a misstep.


The whole video strikes me as tone deaf and out-of-touch. The earnestness of it all gives it a creepy tone that makes it all seem unreal. I have a hard time believing this isn't some elaborate joke.

Putting that to the side, they had me thinking things like "well, this isn't so crazy, kind of..." all the way up to the bit where they talk about literally creating a one-off product for a customer in order to collect a little bit of data they feel they are missing. Sure, it's creepy to have Google constantly suggest I buy a Fitbit, eventually pushing one uniquely designed just for me, but I can see some usefulness there. Kind of. For the advertisers, at least.

But their repeated invocation of Lamarck, paired with this idea that the data they collect from one person's parents will somehow be bolted onto their children, strikes me as classist, elitist and (at the core) entirely deranged. They somehow made a leap from the ridiculousness of "The Circle" right into the worst bits of "Brave New World" in the span of twenty minutes.

It's fun to make fun of the mis-steps of a large corporation and their silly blunders (all phones need a keyboard!), but this presentation is in a whole different ballpark, in my opinion.


It's not about whether you liked it or not; it wasn't meant for public consumption.

It's someone's thoughts shared with colleagues as part of the intellectual process, and that is being demonized.


I don't think anyone is demonizing the idea of sharing ideas with colleagues. It's the content, in my opinion, that has grabbed people's attention.


To say they see it as a misstep is, I think, simplistic. The article links to a headline that calls it "stunning," for instance.

> The outcry over Duplex’s potential to deceive prompted Google to add the promise that its AI will always self-identify as such

Tech reporters aren't the only ones who have picked up on the potential to deceive. Just read the first HN threads about it. The tech is undoubtedly impressive, but the fact that Google didn't appear to anticipate its social engineering potential is off-putting.


> Exploring concepts and ideas is how progress happens and it's telling how they see "Duplex" - the most impressive tech showcased at these keynotes - as a misstep.

Creating agents that can imitate human speech is spectacular, but their demonstrated use of it is far from it. It essentially expresses "my time isn't worth speaking to the human on the other side"; perhaps you feel okay with it when the robot is working for you, but I wonder how you'd feel on the other side of the equation.

So much for don't be evil.


I really don't care if I'm talking to a person or a robot. Anonymous social interactions that are required to achieve an unrelated goal are a waste of time. I truly believe that my time is not worth talking to a human to set up an appointment that could have been set up in much less time if I could interact with a computer instead, nor is that worth anyone else's time.

In many cases, I'm never going to see the person I'm talking to. We won't discuss anything other than the details of the appointment I want to schedule. For all intents and purposes they are no different than a computer to me.

I understand why people would want to keep meaningful anonymous social interactions, like conversing with strangers on a bus (or on the internet!), around but I fail to see why anyone cares if we get rid of meaningless ones.


As if businesses haven't been using robocalls and automated response systems for years.


Yes, and people don't like that. This way we'll end up with robots talking to robots using human language, which couldn't be more idiotic.


> which couldn't be more idiotic

Actually, that sounds like a perfectly reasonable solution to piggyback on the existing telecom infrastructure in a backwards-compatible way.

I think you just described a stepping-stone for moving from a system where humans burn a bunch of time negotiating deals to where deals are auto-negotiated on behalf of humans without having to build out a new protocol from scratch or run a new initiative through ANSI or W3C.

Neat. Beats the pants off the transition plan for IPv4 to IPv6 at least.


> Actually, that sounds like a perfectly reasonable solution to piggyback on the existing telecom infrastructure in a backwards-compatible way.

> I think you just described a stepping-stone for moving from a system where humans burn a bunch of time negotiating deals to where deals are auto-negotiated on behalf of humans without having to build out a new protocol from scratch or run a new initiative through ANSI or W3C.

Except for the fact that human language is not entirely precise. We can't even implement error-free "smart" contracts in a specialized language, and you think doing so in a language with so much more ambiguity, using computers, is a good idea?


To me at least, the value of something like Google Duplex would be doing things like scheduling things over the phone, i.e. things that I should already be able to do on a computer anyways.


Can anyone speak to whether this style of video is used widely internally at Alphabet X, or if this is just an anomaly? I'm not talking about the content, just the practice of sending around videos about ostensibly intellectual ideas that have very little content and instead rely on music/imagery/etc. It's very embarrassing.


It’s an emerging sub field of design that calls itself “speculative design”.

Can’t speak for Google, but it is gaining traction. See for instance https://www.primerconference.com/2017/


TED talks have gained a lot of traction too...

https://www.youtube.com/watch?v=DkGMY63FF3Q


The music/imagery is to lighten it up... imagine if he had shared his thoughts in a 'memo' - it could be, you know, misconstrued...


The difference is that if you start sending around a (purely hypothetical) memo unrelated to your work and unsanctioned by your employer, you might (hypothetically) get fired. This video has production value and ties into core corporate objectives. I'd be genuinely surprised to learn it wasn't backed by a serious corporate initiative.


God that music is horrible. 20-second loop in the same key, same tempo, same volume over and over repeated 26 times.

It's like being on hold. Sounds like a corporate initiative to me!


This is the most terrifying dystopian thing I have ever seen (mostly because we're not that far away from this becoming a reality). To think that just a couple decades from now, Google could literally be directing a majority of our waking actions based off a soulless machine learning algorithm. In this reality, do you have the ability to opt out or is the only solution violent revolution and the destruction of the machines?


Even though it's speculative, collective multi-generational behavioural sequencing (and influencing) seems like Asimov's psychohistory.

Sci-fi becoming possible dystopian reality yet again.


Could we see the positive speculation video? There must have been one, right? Standard scenario planning technique.


Good to know I'm not the only one to think of this: a phone app that asks what you want and then orders you around. And when you tick the online box, it optimizes over the population of all online users. For example, it might suggest places and times as to increase chances of serendipitous encounters.
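The "optimize over the population" idea above could be sketched as a simple vote count over candidate slots. This is a toy illustration only; the function name and data shapes here are invented, not from any real product.

```python
# Toy sketch: each opted-in user submits candidate (place, hour) slots,
# and the app suggests the slot the most users could plausibly attend,
# maximizing the chance of a "serendipitous" encounter.
from collections import Counter
from itertools import chain

def suggest_slot(user_slots):
    """user_slots: one list of (place, hour) tuples per user.
    Returns the (place, hour) shared by the most users, with its count."""
    # Count each slot at most once per user so no single user dominates.
    votes = Counter(chain.from_iterable(set(slots) for slots in user_slots))
    return votes.most_common(1)[0]

users = [
    [("cafe", 9), ("park", 12)],
    [("cafe", 9), ("gym", 18)],
    [("park", 12), ("cafe", 9)],
]
print(suggest_slot(users))  # ('cafe', 9) is attendable by all three users
```

A real system would weight slots by each user's stated preferences and travel time, but the core is still an argmax over shared availability.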


This is eerily similar to the concept of a 'cookie' seen in the Black Mirror episode, White Christmas.

(Spoiler Alert)

A cookie is a device "that is inserted under the client's head by the brain and kept there for a week, giving it time to accurately replicate the individual's consciousness. It is then removed and installed in a larger, egg-shaped device which can be connected to a computer or tablet."

Highly recommend this episode for those interested: https://en.m.wikipedia.org/wiki/White_Christmas_(Black_Mirro...


Another perfect illustration of the way that Google recasts "your personal information is required to generate that outcome" as "you must surrender your personal information to us in order to benefit from that outcome."

Most of the functionality they describe is perfectly achievable without any personal data ever leaving your phone / home computer / personal server, but the quid-pro-quo of "service in exchange for data" is too deeply ingrained.


This underscores to me why I have no fear of the Singularity. It would be impossible for an AI to do worse things to us than we as humanity are willing to do to ourselves.

We have tortured and maimed in the name of advancing medicine. We have killed millions in the name of saving more millions. We manipulate people's thoughts and behaviors in order to make them "better". We evaluate other people based on those we perceive as their peers.

AI only makes those processes easier.


A better reason not to fear the singularity is that it is a science fiction concept whose connections to the real world are tenuous at best.

Anyways, I think the fear most people have is either that the AI would be far better at doing these bad things to humans than humans are (which you allude to in your comment), or they always envision themselves on the "good guys" side in human vs human conflict and don't have that comfort when it comes to AI vs human conflict.

Those fears seem valid to me if you believe in the underlying premise.


Did anyone think of Asimov's Foundation? I doubt that we will ever have "big enough data" for such a scale, but the socio-genetical ledger is a concept that may lead toward ubiquitous informational systems.


With a lot of leaked internal videos, you get a lot of spin from whatever agency is reporting on it. With GDPR and Facebook's data problems, a video like this is surely going to be reported on as a dystopian future and indicative of everything that's wrong with Google.

But taken at face value -- just judging the video itself -- it doesn't seem that bad. To the untrained eye it can appear horrible, but this is largely because people conflate all data into one group. There are two different types of data: the data that you create intentionally, and the data that's the passive result of you being around technology. There's data that, if shared, could be potentially deadly in some parts of the world (messages, photos, videos, calls, etc.) -- and there's data that exists about people but is not captured. This video is clearly showing more of the latter.

The canonical example the video uses is a scale. The idea is that the ledger thinks that it can make better decisions if it knows your weight. It's not sure that you'd buy an existing smart scale, so it wants to create one that could fit the bill for capturing that data from you. This is passive data -- it already exists about you, but it isn't collected. Of all of the genuine types of data collection, this is the most genuine! If you go to a doctor's office, the first thing they do is weigh you and take your temperature, for good reason. It's one of the biggest factors in health and treatment for a patient. It can give off warning signs or indicate more effective treatments.

This video isn't about taking in your random personal data that you store on Google and using it nefariously. It's about taking in data that you already have and trying to make actionable decisions based on it. The video characterizes the data as if it's an independent living thing as an exercise -- not as the end grand outcome. The idea is simply that if you can track what users do and how they behave in certain situations, you can use those past decisions to help inform future generations. The video doesn't even make this a mandatory thing -- it shows a user turning it on and using it to make specific goals (like eat healthier).

If Google wants to tackle a problem like depression through opt-in deep learning on habits, nudging people in the right way, then I don't see a problem with it. If you could categorically learn how to avoid pitfalls and make things better for future generations, why wouldn't you? It actually kinda gives every single life a little more meaning and purpose -- actually acting as inputs on how to better the human race.

Everyone wants to take things like this in the worst possible scenarios. "Google is evil or wants to sell ads, so they're going to build a system for being evil and selling ads." But look at the facts: there's a lot of fear over not a lot of time tested results. Google search is really, really good. Waymo cars really don't crash that much. They do a lot of projects for the "greater good," and haven't been historically known to take advantage of the data they collect.

They're pitching this internal video to their employees as inspiration to build a better quality of life for future generations. They aren't pitching it to, uh, data mine everyone for their own profit. If ethics is about what you do when nobody is looking, then this is a good example of consistent ethics. When Google says publicly they care, and then privately says they care, I think it's safe to say they genuinely do care.


This fits perfectly with the views expressed by Eric 'If you don't have anything to hide you shouldn't worry' Schmidt in his book 'The New Digital Age.' Google is rich. Therefore, Google is Better. Because they are Better, they need to rein in society and guide it for its own protection and benefit. It's really a very old school mindset, the kind that ruled the world for a very long time in the ages of kings, god-kings, theocracies, etc. It's the idea that some people are fundamentally born to lead and others born to follow. This is precisely and exactly the viewpoint that "all men are created equal" was penned to spit in the face of.


tangent: videos like this are so irritating... Just give me the transcript, possibly with pictures/video interleaved if necessary (but probably fewer than you think).


Right? Remember when you could search for a tutorial online and you got an actual written process instead of some dolt on youtube awkwardly recording his screen and saying "uhhhhhhh" for the first third of the video? And wasting time doing stupid things like opening the program and setting up a test file instead of just recording the actual thing the tutorial is supposed to instruct? Ten minutes of bullshit just to show me where that one menu is hiding.

I hate video tutorials, and it's all you can find now. Sorry, I'll shut up. This is way off topic. Carry on.


I hate it too, but try to create a tutorial and you will understand the reason: it takes much less time to just record what you are already doing. The alternative to video is not text but the absence of information.


one could always edit out large chunks of useless video


Yes, and this video isn't even captioned...


I feel like Yuval Noah Harari is spot on with dataism emerging as the new dominant inter-subjective truth. Go and read Homo Deus, you will not be disappointed. The idea of dataism will slowly replace our current humanistic ideas of liberty/individualism. I'm not saying I agree with the video/general direction, but I think we're foolish to disregard this idea as an isolated event produced in a "culture bubble".


Great music!


Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron’s cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience. They may be more likely to go to Heaven yet at the same time likelier to make a Hell of earth. This very kindness stings with intolerable insult. To be “cured” against one’s will and cured of states which we may not regard as disease is to be put on a level of those who have not yet reached the age of reason or those who never will; to be classed with infants, imbeciles, and domestic animals.

C. S. Lewis, 1948


I kinda want to see a black mirror episode of AI overlords shepherding humans for their own good :)


Just read the last story in "I, Robot".

Twist: from the point of view of at least one of the characters (and the general tone of the story), it's a good thing. President of the World makes the case that the world has always been too complicated for humans to control their own fate, and the ultimate goal of technology has always been to guard against chaos with machines big enough to tackle that complexity. ;)


Yeah, he has a point :) It's always struck me as weird that we "allow" political leaders that can put personal gain and career in front of what's good for everyone else. Also as you touch upon the world is rapidly becoming too complex to be governed by us. AI could save us from bureaucratic inefficiency, corruption, shortsightedness and cognitive bias.

I'd actually welcome our AI overlords if they could execute according to a reasonable value system (maybe execute is the wrong word). On the other hand, how do you set its goals, and what could possibly go wrong :)


Up is down, black is white, wrong is right.

If I have to choose between cynical, selfish, sociopathic corporations and ones that at least try to do the right thing, I'll choose the latter every time.

Google doesn't always get it right, but the world is a far better place because of them.


Serious question: how did the world become a better place because of Google?


> The title is an homage to Richard Dawkins’ 1976 book The Selfish Gene.

Another instance of techies being led astray by flimsy, reactionary pseudo-science.


I think you might be confusing this with Dawkins’ 1977 book The Selfish Horoscope.


This is getting gross. I really fail to understand the type of person that believes this is a good idea.


I believe the article clarifies that the creator of this video doesn't believe it's a good idea. It's speculative.



