Now imagine a leaker inside your own brain. Every thought you may have could be displayed on the front page of The Verge or whatever. "Peter's internal thoughts hint he may be a pedophile".
I prefer Google having these internal discussions of highly disturbing concepts vs not having them.
"I do not agree with what you have to say, but I'll defend to the death your right to say it."
What does that actually mean in practice? "We're going to float this idea which we all think is unethical, but without actually labelling it as unethical in the presentation?"
(The strange thing is a software engineering organisation that can dream up things like this without immediately coming up with dozens of examples of how they could go wrong!)
> When reached for comment on The Selfish Ledger, an X spokesperson provided the following statement to The Verge:
>> “We understand if this is disturbing -- it is designed to be. This is a thought-experiment by the Design team from years ago that uses a technique known as ‘speculative design’ to explore uncomfortable ideas and concepts in order to provoke discussion and debate. It’s not related to any current or future products.”
(... which leads me personally to ask the question: what was the intent of the leaker here? ;) ).
But it isn't like that at all. We don't judge people on their internal thoughts because they are fleeting and can even be random. But we do judge when someone distills those thoughts and starts opening their mouth or acting on them.
This was a well-produced video and concept that took quite some time to make. It was not a fleeting discussion of an interesting idea. It was distilled.
It isn't so much a leaker in your mind, but someone leaking what you gave a presentation on.
They didn't take action, if I correctly understood the allegory. If an organisation is a person (my friend...), then individual workers are like neurons or brain regions. This internal video is analogous to a thought that may or may not be acted on by creating a product that impacts people outside the org.
I agree that the idea is dystopian, but the article was a little over the top with the scaremongering.
I disagree. I think we don't judge people on their internal thoughts because we can't see those internal thoughts. If they became visible, people would start judging each other on them.
I think what's more relevant is the right not to say something publicly, and by extension the right to have thoughts free of external judgement and consequence.
The Chinese are rounding up Muslims by the thousand and re-training them out of their culture. Do you want people behind desks thousands of miles away manipulating your culture with the aid of computers? My data is mine, not fodder for a corporation.
The U.S. is about freedom. Want to create your own cult? Sure, find some land and do it. But I don't want to be subject to efforts to centrally plan my life. I don't trust Google's corporate values to drive my life. I don't trust advertisers and election-riggers using psychological monitoring to influence consumption and voting choices. They haven't demonstrated that they have our best interests in mind - or that they can be disentangled from evil foreign powers.
This video talks about synthesizing objects of desire based on intimate details of people's lives, and then anticipating the decisions they will make throughout those lives. I don't want a server room full of computers figuring out how to make me comply with Google's corporate values.
This video even admits explicitly that "the notion of a global good is problematic," so it weasels its way to "optimal health" instead. In fact, the keys to optimal health are already widely known: eating well and getting exercise. Society's problem is that overwhelming advertising influences use scientific techniques to push us away from what we already know. We need distance from this asymmetric power relationship, not deep learning to enhance it.
Do you make the claim that it is impossible to align a user's goals with a company's goals?
Companies' goals are absolutely at odds with individual goals in many, many cases. That's why Consumer Confidence is tracked by the government. That's why manipulative advertising is ubiquitous. I'm not saying alignment is impossible, but why would we trust an advertising company with our health, especially one with a lengthy list of privacy violations? At the very least, health programs should originate from trusted health-related organizations.
Did you mean to describe Hollywood or did you just accidentally describe Hollywood?
"The ledger is missing a key data source which it requires in order to better understand this user." (4:22)
It determines whether the user is likely to interact with an existing object that will produce the data, and if not:
"In situations where no suitable product is found, the ledger may investigate a bespoke solution."
The video says it much better than I can. It has amazingly high production values and writing for an internal company video. Then again, Google has been hiring tens of thousands of the best young people for a while now, so I should not be too surprised.
That first part is basically universal human nature, and I think it's basically a necessity and a right. The variety, and the possible problem, comes in whether and how people express these judgements. I don't think the specific act of writing or speaking your opinion about someone's beliefs and opinions is ever wrong. You can do other bad things, like incite violence or commit fraud, but those are separate matters.
This could have been discussed just fine, but instead they got 5 PhDs for a few months, and then a video editing team, to make a cheesy slideshow of why Peter might be a pedophile.
There's a tangible difference here that you can't defend with your brain-reading generalization.
The scary thing is, most of the people I worked with genuinely believed that ideas like this are good for the user. The fact that such things require extensive privacy invasions doesn't even cross their minds, because they don't think of it as a privacy invasion. It's just another way to "optimize" toward some goal or another.
This is a problem I have with all tech companies who are designing products used by hundreds of millions, if not billions, of humans - ranging from farmers in Germany to factory workers in India or mothers in Ecuador or ...
How can the employees of those companies - a group of people who are used to driving Teslas, getting their groceries delivered, etc. - relate to those users and understand them in any meaningful way at all? (Replying that a bunch of Google UX designers once spent 3 days in the Philippines doing “user research” is a terrible answer.)
It’s a clever instance of self-deception.
- a system could highlight choices aligned with the user’s values (examples: recommend taking UberPool instead of UberX in the Uber app, suggest locally grown bananas in a shopping app; see the sketch after this list)
- a system could manufacture custom objects for a user to gather missing data about that user
And the broader, higher level concept:
- data aggregated about a user can be considered to be a “genome”, and perhaps concepts applicable to genomes (sequencing, ...) are similarly applicable
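To make the first concept concrete, here is a minimal sketch (all names and tags are hypothetical assumptions, not anything the video specifies): options get re-ranked by how many of the user's explicitly declared values they match.

```python
def rerank_by_values(options, user_values):
    """Put options that match more of the user's declared values first."""
    return sorted(options,
                  key=lambda o: len(o["value_tags"] & user_values),
                  reverse=True)

# Hypothetical data mirroring the examples above.
user_values = {"shared_transport", "local_produce"}
options = [{"name": "UberX",         "value_tags": set()},
           {"name": "UberPool",      "value_tags": {"shared_transport"}},
           {"name": "local bananas", "value_tags": {"local_produce"}}]

print([o["name"] for o in rerank_by_values(options, user_values)])
# -> ['UberPool', 'local bananas', 'UberX']
```

Note that everything interesting is hidden in where `value_tags` and `user_values` would come from; the ranking itself is trivial.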
This whole subfield of “speculative design” feels particularly useless (this video is part of a broader, novel practice; see for instance https://www.primerconference.com/2017/). A few very vague points are raised, with no direct way to probe the questions or start answering them. This is in contrast to, for example, the scientific approach, where the base hypothesis usually gives us a clue as to what we might want to measure, change, etc.
So sure, at the end of your multi-week process you get a slick video, except you’re not much further down the line of inquiry (and if it gets leaked, you have the whole internet turning on you).
If one were assigned to think about this topic, it seems like actually exploring the base hypothesis (“personal data can be thought of as a genome”) with real experiments designed to test the limits of that statement would be a much more productive use of time.
> there is very little awareness in these books about the problem of trust. Every day we are bombarded with offers whose choice architecture is manipulated, not necessarily in our favor. The latest deal from the phone company is designed to bamboozle us, and we may well want such blandishments regulated. But it is not clear whether the regulators themselves are trustworthy. Governments don’t just make mistakes; they sometimes set out deliberately to mislead us. The mendacity of elected officials is legendary and claims on our trust and credulity have often been squandered. It is against this background that we have to consider how nudging might be abused.
... Sunstein’s idea is that we who know better should manipulate the choice architecture so that those who are less likely to perceive what is good for them can be induced to choose the options that we have decided are in their best interest. Thaler and Sunstein talk sometimes of “asymmetric paternalism.”
... Deeper even than this is a prickly concern about dignity. What becomes of the self-respect we invest in our own willed actions, flawed and misguided though they often are, when so many of our choices are manipulated to promote what someone else sees (perhaps rightly) as our best interest? Sunstein is well aware that many will see the rigging of choice through nudges as an affront to human dignity: I mean dignity in the sense of self-respect, an individual’s awareness of her own worth as a chooser. The term “dignity” did not appear in the book he wrote with Thaler, but in Why Nudge? Sunstein concedes that this objection is “intensely felt.” Practically everything he says about it, however, is an attempt to brush dignity aside.
That just sounds like the parent class of "we should convert all these barbarians to Christianity and get rid of most of their culture in the process for their own good so they don't burn in hell."
When applied to dispassionate entities this type of reasoning might make sense, but real people:
- Experience anxiety. Chilling effects, stress, etc. become abundant when we have no internal life; no room for our minds to move and experiment and be messy without being locked into step with the outside world.
- Are changed by observations we or others make about ourselves. Our minds are not static systems, they are infinitely recursive and dynamic. Observation of our own minds - even perfectly accurate observation - influences them. The cycle keeps going, ad infinitum.
The world is simpler when it's homogenized. It's easier to reason about. I sympathize; I'm a programmer myself. But these aren't just approximation errors. These are ways in which recent technology actively damages the human psyche, both on an individual and a societal level. The Olympians of Silicon Valley are trying to shape the world in their own image, and they'll burn it to the ground before they admit their fault.
Just as we didn't know to what extent DNA changes can alter an individual, or what the repercussions of making those changes are, the same is true of our actions and experiences.
This video is a look at a nascent field that requires thinking and ethical explorations.
What are the ramifications of an individual who, through modern science, has the ability to not just alter their genetic makeup, but also their experiences and behaviors so as to achieve a desired outcome?
Where is free will in all of this? Does it exist? Will we all become beholden to our former selves who, making decisions with less information, made decisions we no longer agree with?
What about our parents? They make choices - sending us to schools, or to acting classes - and they shape us in the same way.
...and where is the limit? As long as we're not very good at it (like right now), it's OK, but once we've reached a certain level of proficiency, is that when it becomes dangerous?
This is a thought-provoking video, but I think a lot of the issues expressed here are a reflection of the viewers, and not of the video itself. It even raises the concept of responsible behavior (it refers to 'stewardship').
Our current society values the concepts of 'humanity' and the 'greater good', which gives me hope that we will take action and correct evolutionarily unstable systems and behaviors.
To end on a joke: I, for one, welcome our new selfish ledger overlords.
which would “reflect Google’s values as an organization,”
The whole presentation strikes me as insulated and disconnected from reality.
First they offer to let users select how they want to change their behaviour. I guess that's fine; we all want to improve ourselves. But this segment lasts a couple of seconds, and the rest of the time is dedicated to describing how the system can be made to predict and target "bad" behaviours of humans as a species.
Who greenlit this.
EDIT: I understand this is not a product, but clearly it means Google is seriously thinking about doing such things. This is just as scary for me.
> “We understand if this is disturbing -- it is designed to be. This is a thought-experiment by the Design team from years ago that uses a technique known as ‘speculative design’ to explore uncomfortable ideas and concepts in order to provoke discussion and debate. It’s not related to any current or future products.”
So the answer seems to be "nobody".
I think this is bullshit. You are essentially accusing Google of "Thoughtcrime". I think Google takes plenty of actually questionable actions that deserve opprobrium, so I'd rather focus on that than internal deliberations.
As an aside: this whole thoughtcrime stuff is ridiculous. Just because someone has a right to say something does not mean you are not allowed to be disgusted by them or to think they are bad people because of what they say. Imagine a coworker saying that women deserve to be raped. You would not think of him as a good person after that, even though he has every right to make that statement.
I'd assume it's a useful exercise for a company with the kind of power a Google has to take a hard look at the limits of that power.
Adversarial modeling. With organizations like the Chinese government actually implementing schemes like this, you really do want other organizations---especially ones not particularly sympathetic to the Chinese Communist Party's interests---considering these topics.
They seem to react to anything more exciting than a new phone release with fear and scorn.
Exploring concepts and ideas is how progress happens and it's telling how they see "Duplex" - the most impressive tech showcased at these keynotes - as a misstep.
Putting that to the side, they had me thinking things like "well, this isn't so crazy, kind of..." all the way up to the bit where they talk about literally creating a one-off product for a customer in order to collect a little bit of data they feel they are missing. Sure, it's creepy to have Google constantly suggest I buy a Fitbit, eventually pushing one uniquely designed just for me, but I can see some usefulness there. Kind of. For the advertisers, at least.
But their repeated invocation of Lamarck, paired with this idea that the data collected from one person's parents will somehow be bolted onto their children, strikes me as classist, elitist and (at the core) entirely deranged. They somehow made a leap from the ridiculousness of "The Circle" right into the worst bits of "Brave New World" in the span of twenty minutes.
It's fun to mock the missteps of a large corporation and their silly blunders (all phones need a keyboard!), but this presentation is in a whole different ballpark, in my opinion.
It's someone's thoughts, shared with colleagues as part of the intellectual process, and that is being demonized.
> The outcry over Duplex’s potential to deceive prompted Google to add the promise that its AI will always self-identify as such
Tech reporters aren't the only ones who have picked up on the potential to deceive. Just read the first HN threads about it. The tech is undoubtedly impressive, but the fact that Google didn't appear to anticipate its social engineering potential is off-putting.
Creating agents that can imitate human speech is spectacular, but their demonstrated use of it is far from it. It's essentially expressing "my time isn't worth speaking to the human on the other side"; perhaps you feel okay with it when the robot is working for you, but I wonder how you'd feel on the other side of the equation.
So much for don't be evil.
In many cases, I'm never going to see the person I'm talking to. We won't discuss anything other than the details of the appointment I want to schedule. For all intents and purposes they are no different than a computer to me.
I understand why people would want to keep meaningful anonymous social interactions, like conversing with strangers on a bus (or on the internet!), around but I fail to see why anyone cares if we get rid of meaningless ones.
Actually, that sounds like a perfectly reasonable solution to piggyback on the existing telecom infrastructure in a backwards-compatible way.
I think you just described a stepping-stone for moving from a system where humans burn a bunch of time negotiating deals to where deals are auto-negotiated on behalf of humans without having to build out a new protocol from scratch or run a new initiative through ANSI or W3C.
Neat. Beats the pants off the transition plan for IPv4 to IPv6 at least.
> I think you just described a stepping-stone for moving from a system where humans burn a bunch of time negotiating deals to where deals are auto-negotiated on behalf of humans without having to build out a new protocol from scratch or run a new initiative through ANSI or W3C.
Except for the fact that human language is not entirely precise. We can't even implement error-free "smart" contracts in a specialized language, and you think doing so in a far more ambiguous language, using computers, is a good idea?
Can’t speak for Google, but it is gaining traction. See for instance https://www.primerconference.com/2017/
It's like being on hold. Sounds like a corporate initiative to me!
Sci-fi becoming possible dystopian reality yet again.
A cookie is a device "that is inserted into the client's head, next to the brain, and kept there for a week, giving it time to accurately replicate the individual's consciousness. It is then removed and installed in a larger, egg-shaped device which can be connected to a computer or tablet."
Highly recommend this episode for those interested: https://en.m.wikipedia.org/wiki/White_Christmas_(Black_Mirro...
Most of the functionality they describe is perfectly achievable without any personal data ever leaving your phone / home computer / personal server, but the quid-pro-quo of "service in exchange for data" is too deeply ingrained.
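As a sketch of what "never leaving your device" could look like (the file name, event shape, and nudge rule are all made up for illustration): the ledger is just a local file, and the goal-nudging logic reads it in place, with no network calls anywhere.

```python
import json, pathlib

LEDGER = pathlib.Path("ledger.json")  # lives on the device; nothing is uploaded

def record(event):
    """Append an observed event to the local ledger."""
    history = json.loads(LEDGER.read_text()) if LEDGER.exists() else []
    history.append(event)
    LEDGER.write_text(json.dumps(history))

def nudge(goal):
    """Derive a suggestion from local history alone."""
    history = json.loads(LEDGER.read_text()) if LEDGER.exists() else []
    meals_out = sum(1 for e in history if e.get("type") == "meal_out")
    if goal == "eat_healthier" and meals_out >= 3:
        return "You've eaten out a lot lately -- cook tonight?"
    return "On track."

for _ in range(3):
    record({"type": "meal_out"})
print(nudge("eat_healthier"))
# -> You've eaten out a lot lately -- cook tonight?
```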
We have tortured and maimed in the name of advancing medicine. We have killed millions in the name of saving more millions. We manipulate people's thoughts and behaviors in order to make them "better". We evaluate other people based on those we perceive as their peers.
AI only makes those processes easier.
Anyways, I think the fear most people have is either that the AI would be far better at doing these bad things to humans than humans are (which you allude to in your comment), or they always envision themselves on the "good guys" side in human vs human conflict and don't have that comfort when it comes to AI vs human conflict.
Those fears seem valid to me if you believe in the underlying premise.
But taken at face value -- just judging the video itself -- it doesn't seem that bad. To the untrained eye it can appear horrible, but this is largely because people conflate all data into one group. There are two different types of data: the data that you create intentionally, and the data that is the passive result of you being around technology. There's data that, if shared, could be potentially deadly in some parts of the world (messages, photos, videos, calls, etc.) -- and there's data that exists, but is not captured, about people. This video is clearly showing more of the latter.
The canonical example the video uses is a scale. The idea is that the ledger thinks it can make better decisions if it knows your weight. It's not sure that you'd buy an existing smart scale, so it wants to create one that could fit the bill for capturing that data from you. This is passive data -- it already exists about you, but it isn't collected. Of all the types of data collection, this is the most legitimate! If you go to a doctor's office, the first thing they do is weigh you and take your temperature, for good reason. It's one of the biggest factors in health and treatment for a patient. It can give off warning signs or indicate more effective treatments.
This video isn't about taking the random personal data you store with Google and using it nefariously. It's about taking data that already exists about you and trying to make actionable decisions based on it. The video characterizes the data as an independent living thing as an exercise -- not as the grand end goal. The idea is simply that if you can track what users do and how they behave in certain situations, you can use those past decisions to help inform future generations. The video doesn't even make this mandatory -- it shows a user turning it on and using it to set specific goals (like eating healthier).
If Google wants to tackle a problem like depression through opt-in deep learning on habits, nudging people in the right way, then I don't see a problem with it. If you could categorically learn how to avoid pitfalls and make things better for future generations, why wouldn't you? It actually kinda gives every single life a little more meaning and purpose -- actually acting as inputs on how to better the human race.
Everyone wants to take things like this in the worst possible scenarios. "Google is evil or wants to sell ads, so they're going to build a system for being evil and selling ads." But look at the facts: there's a lot of fear over not a lot of time tested results. Google search is really, really good. Waymo cars really don't crash that much. They do a lot of projects for the "greater good," and haven't been historically known to take advantage of the data they collect.
They're pitching this internal video to their employees as inspiration to build a better quality of life for future generations. They aren't pitching it to, uh, data mine everyone for their own profit. If ethics is about what you do when nobody is looking, then this is a good example of consistent ethics. When Google says publicly they care, and then privately says they care, I think it's safe to say they genuinely do care.
I hate video tutorials, and they're all you can find now. Sorry, I'll shut up. This is way off topic. Carry on.
C. S. Lewis, 1948
Twist: from the point of view of at least one of the characters (and the general tone of the story), it's a good thing. President of the World makes the case that the world has always been too complicated for humans to control their own fate, and the ultimate goal of technology has always been to guard against chaos with machines big enough to tackle that complexity. ;)
I'd actually welcome our AI overlords if they could execute according to a reasonable value system (maybe "execute" is the wrong word). On the other hand, how do you set its goals, and what could possibly go wrong :)
If I have to choose between cynical, selfish, sociopathic corporations and ones that at least try to do the right thing, I'll choose the latter every time.
Google doesn't always get it right, but the world is a far better place because of them.
Another instance of techies being led astray by flimsy, reactionary pseudo-science.