
The New Wilderness - OrwellianChild
https://idlewords.com/2019/06/the_new_wilderness.htm
======
nonwifehaver3
The dragon with its hoard of gold (private user data) is a great description
of Facebook and Google's regulatory goals. As Maciej has also put it, "privacy
is an essential right — your most intimate moments should be kept strictly
between you and Google".

The terror that could be unleashed by their already existing hoard is almost
unimaginable. For all the concern about rising authoritarian politics, few
have figured out how bad it would be for the future equivalent of the Gestapo
or the Red Guard (pick your country) to get hold of a decades-long dump of
medical and financial data, metadata on relationships, posting history,
browsing and location history, behavioral fingerprints, stylometry, device
IDs, reading history, private photos, and on and on. The tech industry's "vow"
to not create a Muslim registry for the US government had a touch of absurdity
to it, since there are already companies that could plausibly give you a list
of almost all Muslims in the US with a few queries, along with far more
personal information than any census or registry has ever asked for.

~~~
cm2012
Google or FB don't matter for this stuff, since the government goes straight
to the ISPs (which covers incognito mode, etc)

~~~
hoseja
But ISPs have access to much less detailed data. Or do you think encryption is
trivially broken by them?

------
throwanem
"Liberty" might not be completely off the mark. You can get into the habit of
looking over your own shoulder all the time, interrogating your every
expression and action for what an unsympathetic stranger might make of it and
how it might be used against you. And that consciousness doesn't only extend
to spaces where you know you're being monitored, because you _can't_ know
where and when and how you're being monitored. So you eventually experience it
all the time.

~~~
skybrian
This isn't intended as a shallow dismissal, but this sounds strikingly similar
to the experience of paranoia. Nobody would say that the feeling that the
Invisible Other is watching us is a good feeling. But unless something
happens, it's also "just" a feeling. Might the feeling be doing more harm than
the reality?

On the other hand, our ancestors would probably wonder at our obsession over
the danger of invisible entities getting into our food or water, but germ
theory is still real.

The problem with guarding against invisible dangers, even when they're real,
is that they can be hard to distinguish from superstition. It's easy to point
at some religious customs and say they are irrational, but is today's folk
understanding of nutritional science all that much better? Badly understood
science can create especially virulent memes.

When the harms are hard to demonstrate, privacy disputes sometimes feel
similar.

~~~
throwanem
It’s funny: to you this reads as paranoia, but if anything I’m drawing on my
experience of social anxiety. Which, yes, _is_ irrational, because no one _is_
watching. The point of the discussion at hand is that, in an environment
pervaded by automated, recorded invasion of privacy, such feelings may cease
to be irrational.

------
kennethfriedman
Great points here. We definitely don't have the language to discuss these
problems currently. "ambient privacy" is a good start.

"I’ve lost something by the fact of being monitored." is very true, but
deniers will say: "what's the _something_?" and we just don't have the
language to explain it yet.

~~~
tobr
That’s where the comparison to environmental protection is so powerful. For
the longest time, if you would say “We lose something by exploiting natural
resources”, people would have just pointed out that there’s a seemingly
infinite supply of unexploited nature left.

We’ve always been monitored in some situations, but never before around the
clock, in the bedroom and the bathroom, at the doctor, etc. There’s no way to
know what we lose, when we don’t even know how the information will be used.
It is a passive blackmail situation. These companies have compromising
information about all of us, and we really don’t have any idea who can or will
be able to see it.

~~~
skybrian
Perhaps ironically, environmental protection often requires sophisticated
monitoring of nature. If nobody is watching, you don't know what's lost.

------
AlphaWeaver
The nature analogy is useful, but it's also worth thinking about where this
new problem differs.

Nature is a large interconnected system that, in some cases, has the benefit of
being self-correcting: population regulation and food chains are examples of
this. I worry that, in the case of privacy, there are far fewer natural
"self-correcting" aspects. How will this impact our response to this problem?

(My personal prediction is that it will simply make it possible to do more
damage before we begin seeking a solution en masse.)

------
cpeterso
> _Facebook’s early motto was “move fast and break things” (the ghost of that
> motto lives on as the Facebook guest wifi password)._

What is the password?

~~~
ljlolel
Sounds like something like `m0vefast`

~~~
idlewords
That's what it is.

------
emtel
> The large tech companies point to our willing use of their services as proof
> that people don’t really care about their privacy. But this is like arguing
> that inmates are happy to be in jail because they use the prison library.

I used to really enjoy Maciej as a writer, but I've become really bothered by
how bad some of his recent arguments are. This is just an amazingly bad
analogy. Structurally, it's on the level of the standard libertarian "proof"
that income tax is morally equivalent to slavery. Even though there seems to
be some analytical validity in both cases, they are _obviously_ bad analogies
because slavery is _obviously_ not the same as income tax, and ad tracking
networks are _obviously_ not the same as being in prison. This is a cute
rhetorical flourish, but nothing more.

The truth is, most people don't care about this stuff. They really just don't.
You might want them to, and you might think they would if they had the same
understanding of the issues as you do, but as of today, they don't. And I
don't see any benefit to the mental gymnastics people go through to avoid
accepting this obvious truth.

~~~
idlewords
I am glad you used to enjoy my writing!

I think in this case I may be trying to draw out a narrower point than you
think. My argument is that you can't infer consent from how people adapt to
circumstances in a situation where they are not given a choice.

The whole issue of consent in online privacy is fascinating, because I don't
think even experts in the field could understand how their data is used (or
will be used) enough to give meaningful consent. I certainly couldn't.

I also agree with you that there is an open empirical question of how much
people actually care about this stuff. One way I would like to see it tested
is having a legal basis for competitors to sites like Google to give binding
privacy guarantees. Then at least we'd be able to put a dollar value (positive
or negative) on privacy with a market test.

~~~
emtel
> I am glad you used to enjoy my writing!

And I am sorry for being a bit of a dick!

> My argument is that you can't infer consent from how people adapt to
> circumstances in a situation where they are not given a choice.

I see what you mean, but I don't buy the premise that people don't have a
choice, at least not in the same way in which inmates can't choose to leave
prison. Technically savvy people can avoid a lot (if perhaps not all)
tracking, today, if they care to put in the effort - many don't. The
technology for doing this has been productized and is available for purchase
to less savvy users, if they choose to buy it - hardly any do.

I don't mean to suggest that there's a silver bullet that fixes this problem,
that you can buy today for $19.99. My point is that when there is real demand
for something (e.g. privacy protection), even imperfect solutions succeed in
the market and grow in capability over time. Lots of people are attempting to
satisfy this demand, DuckDuckGo being one good example. It looks to me like
most of these products are having limited success, which I think is exactly
what you'd expect if only a small number of people care about this issue.

------
kzrdude
More choice is misleading. We need to gain more freedom by raising the bar on
what is protected, and that minimum should not be possible to negotiate away.

------
komali2
I always wondered why Google or Facebook or hell Comcast didn't just start
owning senators: "Mr. Senator, we want to be the fiber contractor for the new
Vet building. Here are pages of logs of you attempting to learn how to use the
darknet to access child pornography."

Any anti-choice politician whose mistress or daughter has had an abortion,
_some_ tech company probably knows about it.

Any anti-LGBT politician that's actually gay, Google, Facebook, hell maybe
even Grindr is aware.

Any White Knight with a heinous secret fetish, Pornhub knows.

People were able to map out military bases using smartwatch GPS data. Imagine
if you had straight up access to the databases of this information.

I guess individuals at these companies are caught within rigid corporate power
structures? Maybe the internal tooling prevents that sort of thing (didn't
prevent someone from deleting Trump's Twitter feed that one time), or just
rule of law in the USA is still too strong and the risk of being annihilated
by the legal system is too great.

Still, I bet the temptation is strong.

~~~
nostrademons
It's unprofitable. The cost to user-trust (and hence future data collection
and revenues off that data) is more than they stand to gain from blackmailing
any one person - even a U.S. Senator or President, or Chinese Premier. So they
don't. The only thing that would potentially justify this from a corporate
strategy perspective would be an existential threat. Politicians in question
know this, and so they don't bother trying to get rid of Google or Facebook,
only rein them in enough to please their constituents.

When I was working at Google (which was close to a decade ago now, before tech
overreach became a household buzzword) I'd say "People regularly underestimate
how much data Google has on them and overestimate how interesting they are."
Everybody's first fear is that Google's going to look up their search history
and blackmail them with their porn fetishes. They never stop to think that
their porn fetishes, no matter how hardcore, are _boring_, and shared by
millions of other people. As a Google engineer you get desensitized very
quickly to the fact that 10% of search queries are porn-seeking (17% on
mobile), and looking at other people's kinky pastimes is about the very last
thing you would want to be doing.

Similarly, it's significantly more profitable to advertise to people than it
is to kill them. Every dead person is one less potential customer in the
global economy.

~~~
meruru
This is a point I've made repeatedly on discussions about the value of
privacy. It's not about protecting the fact you're gay or have some weird
fetish or cheated on your wife or whatever. That stuff doesn't matter in the
slightest. The important thing is that the establishment shouldn't be able to
quickly pinpoint and disable every potential whistleblower and every other
kind of threat to the establishment itself.

~~~
TheSpiceIsLife
I’d argue both things are a concern.

Corporo-government overreach is a concern for obvious reasons.

But it’s also concerning that any _individual_ rogue employee could have
malicious intent toward _you_ for no other reason than you have something you
might not want publicly known.

~~~
meruru
Sure, but one is comparatively minor vs the biggest threat to democracy and
human freedom.

~~~
TheSpiceIsLife
That's a good point.

It could be argued that, for a person being blackmailed right now, one is a
very real present threat, while the other is a hypothetical future.

Fortunately there are enough of us to go round, so we can collectively
advocate for protections against both possibilities.

------
burlesona
Submitted this previously:
[https://news.ycombinator.com/item?id=20188689](https://news.ycombinator.com/item?id=20188689)

How does HN de-dupe stuff? In the past when I’ve submitted something that was
a dupe it took me to the already posted link instead of making a new one.
Curious why that didn’t happen here.

Really good article, gave me a lot to think about. I think the nature metaphor
is a good one, though it doesn’t fill me with optimism for the future of
ambient privacy.

~~~
dang
This is in the FAQ: we don't count submissions as dupes if the story hasn't
had significant attention yet. In other words, reposts are ok until the
article has had a significant discussion, or at least significant upvotes.
This is to give good articles more than one chance at getting attention, since
it can be pretty random which articles get noticed on /newest.
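The rule described here (a resubmission only counts as a dupe once an earlier submission of the same URL has had significant attention) could be sketched roughly as follows. The thresholds and field names below are illustrative assumptions, not HN's actual values or implementation:

```python
# Hypothetical sketch of the dedupe rule: a resubmission is treated as a
# duplicate only if some earlier submission of the same URL already drew
# significant attention (upvotes or comments above a threshold).
# min_points / min_comments are made-up cutoffs for illustration.

def is_dupe(url, prior_submissions, min_points=10, min_comments=3):
    """Return True if `url` was already submitted and got significant attention."""
    for sub in prior_submissions:
        if sub["url"] == url and (
            sub["points"] >= min_points or sub["comments"] >= min_comments
        ):
            return True
    return False

history = [
    # An earlier submission of the same article that went unnoticed on /newest.
    {"url": "https://idlewords.com/2019/06/the_new_wilderness.htm",
     "points": 2, "comments": 0},
]

# The earlier submission got no traction, so a repost is allowed (not a dupe).
print(is_dupe("https://idlewords.com/2019/06/the_new_wilderness.htm", history))
```

Under this sketch, once any submission of the URL crosses either threshold, later submissions of it would be redirected to the existing discussion instead of creating a new one.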

In this case, we actually boosted the submission into the second-chance pool
(see
[https://news.ycombinator.com/item?id=11662380](https://news.ycombinator.com/item?id=11662380)),
which lobbed it on the front page. The reason we picked this post rather than
yours is that it was the first submission of the article. (It doesn't look
like it right now, because the timestamp was adjusted to be the resubmission
time, but you can tell from the ID in the URL.) In the future we want to do
some sort of karma sharing so multiple submitters can get credit, but that's
not implemented yet.

~~~
tobr
If I may, I would like to request a little more transparency when this
happens, as it can be confusing. For example, my comment elsewhere in the
thread was written days ago, and I only happened to see that it has bubbled
back up with a modified timestamp. When someone replies to a comment that
appears to have been posted mere hours ago, they might reasonably expect a
response.

~~~
dang
I'm not sure how to do it in any way that wouldn't be too complicated. The
status quo leads to confusion, which is bad, but other things we've tried have
led to more, or to a lot of distracting meta comments.

