
Social Cooling – How big data is increasing pressure to conform - milly1993
https://www.socialcooling.com/
======
bicubic
I was going to write at length about my concerns on this topic, and then I
decided not to. Because it's entirely possible that one day many years from
now, a prospective employer/insurer/whatever finds such a comment and flags me
for it.

That's the cooling effect in a nutshell.

~~~
captainmuon
I've been saying this for a while: We as a society have to stop taking
everything that is said on the internet so fucking seriously.

You shouldn't have to think through everything you write perfectly. It's
OK to say something dumb, or even offensive. It's not great, but no big deal
either. I don't want to live in a world where we have to polish and double-
check every thought that leaves our minds.

It used to be that a written statement was something important, with gravitas,
with thought and meaning put into it. You rarely sat down and wrote a letter
or a book. But the vast majority of utterances on the net are not like that,
so we shouldn't treat them as if they were. We shouldn't apply yesterday's
standards to them.

----

Actually, I believe this will all be moot in a few years. With the rise of AI,
and the continuing increase in storage and bandwidth, we might reach a "one
million monkeys with typewriters" scenario. There will be every possible
utterance and every possible embarrassing photo of everyone on the net. It
will be trivially possible to fake your voice and image. (Unless we enter the
cryptopocalypse, in which everything is signed...)

This is currently an odd period of time, in which we can create data, but can
hardly fake it. There is authenticity, proof of authorship. We can hold people
responsible for how they behave and what they think. Before this period, and
after it, everything was and will be just hearsay. It sounds super scary, but
to be honest I find the thought quite liberating.

~~~
Pigo
I completely agree, and it's the very reason social media holds no interest
for me anymore. I think people have forgotten what it used to be like on
Myspace and the early days of Facebook. Social media was about self expression
and finding out where the party is, now it's a finely tuned marketing platform
for grandmothers.

~~~
captainmuon
> Social media was about self expression and finding out where the party is

Exactly! My next project is going to be something like that. Based on the
fediverse/Mastodon, but first and foremost about self-expression, connecting
with real friends, and meeting people. You curate your own home page, share
what you want to share, and you are invited to interact with strangers. No
social media BS. Trying to capture the feeling of local / university social
networks pre-Facebook, or the feeling of Myspace.

~~~
Pigo
The kids seem to be more into picture and messaging hangouts, but those things
are lost to the ether (you hope, anyway). Myspace jumped the shark
customization-wise, but it had a unique mix: the site helped you find people
and hang out in real life, rather than being the actual hangout itself.
Meetup was a good idea, but it doesn't seem to drive engagement. I don't know
what the answer is, but I'd love to help out on a project that could be fun.

------
imron

      You better watch out
      You better not cry
      Better not pout
      I'm telling you why
      Big Data is coming to town
    
      It knows when you are sleeping [0]
      It knows when you're awake
      It knows if you've been bad or good
      So be good for goodness sake.
    

0: [https://medium.com/@sqrendk/how-you-can-use-facebook-to-
trac...](https://medium.com/@sqrendk/how-you-can-use-facebook-to-track-your-
friends-sleeping-habits-505ace7fffb6)
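
The trick in [0] boils down to gap-finding over presence data. A minimal
sketch in Python, assuming you already have a list of (timestamp, online)
samples for one person (no real Facebook API is used here; the function name
is made up):

    # Toy sketch: estimate someone's nightly "offline window" -- a rough
    # proxy for sleep -- from (timestamp, online) presence samples.
    from datetime import timedelta

    def offline_windows(samples, min_gap_hours=4):
        """samples: list of (datetime, bool) pairs, sorted by time.
        Returns gaps between 'online' sightings longer than min_gap_hours."""
        online_times = [t for t, active in samples if active]
        gaps = []
        for prev, nxt in zip(online_times, online_times[1:]):
            if nxt - prev >= timedelta(hours=min_gap_hours):
                gaps.append((prev, nxt))  # likely asleep in between
        return gaps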

------
voidhorse
I think the creator may be better off conducting a thorough philosophical
investigation in an effort to pin down the concept. You would be much better
off giving full citations to Foucault and Deleuze, and a full analysis of the
Snowden incident and its fallout, rather than a sort of half nod to what I
think are the foundation beams of this concept.

This page kind of assumes the audience is already willing to admit social
cooling as a legitimate phenomenon, and if not, will be convinced to do so
after a few short bullets and very little in the way of actual analysis.
(Ironically, this sort of approach leverages one of the modern patterns the
piece could tackle -- short bursts of information, instant delivery, decreased
skepticism, and reduced reflective thought.)

Also, I'd highly recommend avoiding the global warming comparison. It does a
disservice to your cause. It basically comes off, at least to me, as saying:
"our problem isn't a substantial thing in its own right, so let's compare it
to this other big problem people already care about and hope the very loose
and forced analogy strings them along."

All this being stated, y'all should check out Horkheimer's essay "The Concept
of Man." He wrote it around 1952 (might've been '53 or '57; I'm forgetting the
exact date), and it's crazy how prophetic that essay turned out to be. It
shows how all our innovation really just led to an amplification of social
structures and patterns that were already emerging during the dawn of
automation and mechanization. I think it's relevant to your project.

~~~
socialcooling
Author here:

Having been trained as a media theorist, I understand your criticism (and am
going to check out Horkheimer's essay, thanks for the tip!).

But this website purposefully tries to keep things accessible in order to
reach a wider audience.

I often see how academics have a deep understanding of what's going on, but
just aren't as good at spreading that insight to a wider audience, like the
startup community.

~~~
voidhorse
Ah, that is a fair concern, and this approach makes sense if that is your
goal.

Still, I think it's useful to point to some of the academic backing--like you
already do with Foucault, just perhaps in greater depth. Maybe add some of
that academic/conceptual source material to the further reading section--then
again, might just distract from the main point. You know your target audience
better than I do, I only have my particular reaction (which is probably a bit
idiosyncratic and outside of the scope of your intended audience).

In any case this is a cool project and a noble effort. Hope you stick with it.

------
zeteo
> People are starting to realize that this 'digital reputation' could limit
> their opportunities. (And that these algorithms are often biased, and built
> on poor data.)

That's interesting in itself, but the bigger underlying issue is that
opportunities are becoming more concentrated. When only a few companies
dominate hiring in many fields, their mistakes get seriously amplified. Back
in the day you were fine if Google's hiring process misjudged you - you could
work for Excite or Altavista instead. Nowadays if some ML algo decides that
people wearing blue sneakers are worse job performers you can get screwed
(without even knowing why). And even worse, the major companies (where the
jobs are) often share algorithms.

~~~
api
Worse still -- past employment at the majors is seen as a strong "social
proof" indicator by many, including other employers and investors.

I saw an angel.co drinking game once. I think you had to chug two drinks for
"worked at X" (where X is any major) being put forward as the sole
qualification of a founder or key early employee. This is starting to edge out
"went to X" where X is a top-tier school.

China is supposedly deploying its own horrific state-sponsored "social
credit score" system, but we're doing it too. We're just doing it in a less
centralized way. In a way, that's worse. In China everyone will know of this
system and its existence, and I'm sure people will figure out so many ways to
game it that it'll become irrelevant. In the West people will remain
blissfully ignorant, as ours has no name or formal identity.

Ultimately I am still more creeped out by what our private sector is doing
than what our NSA and CIA are doing. Neither is good, but the latter has some
oversight and regulation. The former has absolutely no regulation or oversight
whatsoever, and in any case the private sector is very often better at such
things than the public sector is. I wouldn't be at all surprised if Facebook's
data analytics are far superior to the NSA's.

~~~
fivestar
One reason I think management in the US is so bad is that the entire corpus
of managers went to the same few groupthink institutions -- no real risk
takers, no real anything. It's like that in politics, too. We aren't promoting
the right ideals, but merely conformity, risk-aversion, and foregone
conclusions accepted as fact.

------
nippples
> People are starting to realize that this 'digital reputation' could limit
> their opportunities.

Thanks to the moral police and keyboard warriors out there normalizing
contacting employers over an internet argument.

~~~
creepydata
I think you meant to say "hiring managers Googling potential employees."

~~~
Bartweiss
Surely this is a case of "why not both?"

Campaigns to get people fired when their online posts are revealed are well-
attested from all parts of the political spectrum, and generally come out of
doxxing efforts that hiring managers don't undertake. There are even campaigns
that boil down to "this person said X, harass/troll their employer so that
even if the employer doesn't object to X it becomes too costly to keep them
employed".

But at the same time, hiring managers have Google and all kinds of tools
random harassers don't, like the ability to check criminal records and credit
scores. (And _there's_ a great example of an opaque and inaccurate tool
governing people's lives - just read about the people sharing a name and
birthdate with someone who has bad credit or legal issues!)

So yeah, hiring managers with Google. But I wouldn't discount the other issue
either, since it can cause people problems even for comments that don't
violate any general social standard.

------
dalbasal
Hmmm...

There's one side of this which is straightforward. Companies and governments
are compiling data for their own purposes, which range from modeling user
behaviour to profiling you so that they can sell you stuff or arrest you for
dissidence.

The lines we previously defended for privacy, freedoms of conscience,
affiliation and speech _have_ been disturbed, to say the least. This has
generally been done under the surface, without involving users. It is
increasingly felt on the surface, via the ads you see on FB or the
recommendations YouTube feeds you.

The other side of this is what I think of as a "post-history" problem. We're
now transitioning into a period where reality is simply recorded. Your comment
on Chelsea Manning's release is now a matter of public record. Your next
Tinder date might see it and so might the HR manager reviewing your
application for senior talent accumulator in 2032.

There are all sorts of implications to that, but mostly people just feel weird
about it for now. Anxious and uncertain.

So... FB (HN, whatever) is a space for casual discussion. Casual generally
meant private in the past. Now, some of the most casual discussions are the
extreme opposite of private. This inevitably comes with stress.

Calling it a chilling (or cooling) effect evokes a political dimension, one
that speaks to the first part of the issue. The second issue is more of a
social one. It's political too, but I don't think that's where the centre of
mass is.

~~~
jxramos
We should start compiling data on the government and turn the tables.

------
bem94
I really feel that engineers need to wake up to this kind of thing.

I'm not saying we should stop (although that's what might happen), just that
we pause and consider what this is doing to the world. It is the undercurrent
for so many profound changes going on right now.

Are we really comfortable as individuals building systems which predict
someone's mental (ill) health, personality traits or ethnicity just so we can
sell them things, or worse, not sell them things?

~~~
inanutshellus
It's like anything else in capitalism. You'll have a contingent of folks who
won't do the work, for ethical reasons, and folks who will, for financial
reasons.

Anecdotally, the few folks I know who work for data collection companies are
all "tinfoil hat" types. They have flip-phones, they have no online presence,
they smile like the Cheshire Cat when you ask them about it, and you generally
get the impression they've just decided to categorize it as "us" and "them".
:-\

~~~
bem94
> It's like anything else in capitalism. You'll have a contingent of folks
> who won't do the work, for ethical reasons, and folks who will, for
> financial reasons.

That is true, but judging by my peers (especially the ones fresh from
university), I think a dangerous proportion of people simply aren't aware on
any level of the ethical implications of what they do. It's that which worries
me.

~~~
sixstringtheory
I don't think I've ever seen as much apathy in a classroom as my fellow CS
majors displayed in our society and ethics course.

~~~
Bartweiss
> society and ethics course

How was the course?

I didn't have one, but many other "engineering ethics" courses I've seen
inspire apathy just because they're terrible. It's like school anti-bullying
campaigns - even if you're vehemently anti-bullying, most of the campaigns are
too ridiculous to feel anything good about.

On the other hand, something like Canada's Iron Ring seems to get taken very
seriously. It seems like a nontrivial part of the challenge is teaching ethics
in a way that reaches even the people who _want_ to behave ethically.

~~~
sixstringtheory
It's true, it wasn't a mind-blowingly exciting class or delivery. There were a
handful of people who cared to ask questions beyond the prompts for group
assignments or during lectures. A lot of people accepted as obvious fact that
every household should have a humanoid robot, or that e-government would make
perfect decisions, or that complete quantification of the individual couldn't
possibly be abused. (These are just the ones that stand out in my memory.)
Then you also have the garden-variety folks playing Minecraft, doing other
coursework, etc.

I'll also grant that I'm not very visionary, or even great at working with or
leading large groups of people. How would you teach a class exciting enough
that virtually all students would attend it, enthusiastically, even if it were
elective? (It was required for us.)

At the end of the day, it's only going to be as exciting as the _students_
make it by involving themselves and _thinking_. They are the ones creating
tomorrow's startups, not the professors. As it stands, it seemed like quite
the accurate litmus test for how many people care to think about issues in
this way in our field.

------
DanielBMarkham
Our species is hardwired to mix among multiple social groups, each having
different norms and hierarchies. We have positively proven that we evolve over
time by allowing repugnant minorities the ability to publicly speak. Out of
100 reprehensible social opinions, one turns out to be the next Martin Luther
King, Jr. Our perception of something as being "good" or "bad" for public
discourse is notoriously fallible and broken.

The things I say in a social group of former college buddies and the things I
say in a group of the local clergy are two different things. That doesn't make
me two-faced: it makes me human. In fact, the ability to converse and trade
with drastically different social groups is probably the essence of humanity.

Yet our current overlords that program the internet are convinced that the
entire world should run as if it were just a huge version of their favorite
social group. Joe tells racist jokes? Maybe we let Joe continue, but we
definitely ought to score that. After all, Joe could offend somebody -- and
then they would be mad at our platform, not Joe.

We are inflicting a terrible evil on our species, even more evil than the
security and surveillance state, if such a thing could be possible. Skynet has
finally attacked, and because there are no T-1000s leading the way, the vast
majority of the population doesn't even know it's at war.

------
captainmuon
This sounds like a self-fulfilling prophecy. Tell people that having data
about them out there will change their behavior, and then they'll become
aware of that idea, and _will_ change their behavior. If they don't know about
the concept, they might not.

You could even say that this page, and people trying to raise awareness for
this issue, are harmful!

Imagine a few important people stepping up and saying, no, we will not
disadvantage applicants because of their "unprofessional" facebook profiles.
In fact, we value authentic, unintimidated people. The act of saying so will
make it a little bit so!

We need to shift the blame from people expressing themselves, to those people
punishing them for it, or even to people giving well-meaning advice like this.

(Just a crazy thought I had. I didn't want to be too harsh on the creator,
who raises an important discussion.)

~~~
gedrap
>>> people trying to raise awareness for this issue, are harmful

Harmful to whom? If, let's say, they make people actually aware of this, that
might lead to a change towards privacy. But if people are unaware, and they
remain unaware, the probability of change is smaller.

~~~
captainmuon
That's what I'm wondering - maybe "privacy" is the chilled state. Maybe what
we should be aiming for is not "privacy", but rather being at ease with modern
communication.

The following is a kind of evil comparison, and I'm sorry, but I can't come up
with a better one right now - it's a bit like saying: "If 'chubby' people were
aware that they should wear flattering clothes, then this might lead to other
people liking them better." Maybe well-intended advice, but WTF, no! You
shouldn't tell somebody to hide their body, and likewise you shouldn't tell
somebody to hide their emotions, political ideas, drinking pictures, social
moments, and so on.

------
joekrill
The first thing I thought of was the "Nosedive" episode of Black Mirror this
season. Seems more and more plausible every day.

~~~
zeotroph
What is missing is a government actively pushing this scheme - which China is
doing with its "social credit score".

An intermediate step would be the selling of derived data to anyone - not
just companies interested in hiring you, but really anybody.

~~~
vlasev
It is never mentioned how things came to be that way in that episode. I'm
sure there was a lot of government in it, as there is in all the episodes of
the series.

------
fibbidd
I am not saying there is nothing to this, but this is just a catchy concept
with a convincing narrative. The sources are news articles and YouTube videos,
not scientific papers specifically addressing the issues mentioned, for
example showing a link between self-censorship and online monitoring or
quantifying that effect.

~~~
socialcooling
Author of the website here:

The website does link to a lot of scientific studies, actually. Both
throughout the page and at the bottom.

But I purposefully didn't want to link directly to the PDFs of those studies
too much. By pointing to accessible news articles about those studies in the
"further reading" section I was hoping to keep things accessible to a wider
audience.

This article that appeared in The Guardian today about Social Cooling has some
more sources you may like.
[https://www.theguardian.com/commentisfree/2017/jun/18/google...](https://www.theguardian.com/commentisfree/2017/jun/18/google-
not-gchq--truly-chilling-spy-network)

And you may also like "Postscript on the Societies of Control" by philosopher
Deleuze, which greatly inspired this view.
[https://www.qwant.com/?q=Postscript+op+societies+of+Control](https://www.qwant.com/?q=Postscript+op+societies+of+Control)

We need catchy concepts to reach a wider audience.

------
ptero
I agree with the general premise, but I think privacy is not the perfect
weapon here: if someone is making a public statement (e.g., a personal view
expressed on a weblog), they cannot claim that it is a private statement. The
goal (I think) is to prevent them from being hounded for it outside the
channel where it was expressed.

One option is to bring back anonymity so people can make public, anonymous
comments. Anonymity has been sharply curtailed (because terrorism) and this
is, IMO, bad for society.

Another is to mandate short-term limitations on use. For example, if an
employer wants to look at your online presence, they can only look at the
last week of your posts, and only for initial employment consideration. IMO
employers should not look there at all, but maybe this is a palatable
compromise.

The chap in HR is not itching to dig up dirt on employees -- he just has a
distorted notion of due diligence forced on him. If he has a clear, legal
definition of what he can and cannot look into, I suspect he will gladly
comply. My 2c.

~~~
bradmwalker
What stops hiring managers and potential colleagues?

------
peter_retief
We are in the age of "virtue signalling": people pretend to hold what they
believe are popular virtues. Is it possible to mention "Donald Trump" and not
cause an uproar of virtue signalling?

~~~
jerf
Virtue signalling is only a subset of this problem. It's an important one
because of the way it interacts with human psychology, but it's not the whole
story. This goes into things like "not being able to post those photos of me
getting blasted last weekend", being unable to partition one's identity such
that you might be able to keep your sexual orientation details away from
people you may not want to know them, and all sorts of other things well
beyond the political issues of the day. (Sexual orientation may be a "big
political issue" too, but in this case I'm referring to the personal
dimensions of those issues.)

~~~
peter_retief
I absolutely agree with you about the subset; however, the fear of being
different is huge, and I believe that plays into what people say online.
People have lost jobs over inappropriate online rants -- sometimes racial or
religious discrimination, sometimes being negative toward an employer, or
even just looking for another job. Wrong signals can cost you a lot. I am
really glad it is being debated/discussed.

------
golemotron
In one model of human behavior we all just become better people because we
self-censor. I think that may be true for the first generation of people
experiencing these effects in a society. Later generations will have more of a
problem.

The problem comes when they repress negative emotions and other status
detractors because the cost of even being aware of them is too high. Then you
have people who, in psychological terms, are prisoners of their shadow selves.
They become anxious and depressed because they fear confronting it.

~~~
socialcooling
Author here:

I also worry about Learned Helplessness, where we believe that there is
nothing we can do about it.
[https://en.wikipedia.org/wiki/Learned_helplessness](https://en.wikipedia.org/wiki/Learned_helplessness)

In Silicon Valley, the Technological Determinist view that technology has its
own will, that it is some unstoppable force, is the dominant but dangerous
viewpoint.

That idea only creates a self-fulfilling prophecy.

The reality is that we as a society have always taken the rough edges off new
technologies through the creation of laws and new norms. For example, we
pretty much put a halt to nuclear energy.

We can and we must regulate the Big Data world much more. And the first step
is that we must help people understand the problem.

~~~
golemotron
I agree. We need to be more conscious of the Precautionary Principle, but
having said that, it is easy to see how technological determinism is an
accurate view, regardless of whether we like it or not. As an example, no one
quite knew how much cars, television, or the ready availability of cameras on
cellphones would impact society. But consumers were so excited by the upsides
that there could never be a debate about the downsides, or a call for
regulatory frameworks, until the downsides manifested themselves.

The trick we use is to hook the consumer faster than the law can react. Uber
did this successfully. Segway flubbed it: Kamen wanted a big rollout, it
attracted attention, and municipalities started passing legislation
restricting Segways before they were off the assembly line.

------
njarboe
I had a high school teacher once tell me, "Never write anything down you would
not want published in the local paper". This was before the internet and smart
phones. It seemed like good and not very restrictive advice that I used as a
heuristic for many years. Maybe at that time there was a better balance
between personal privacy and society's right to know things about you.
Equivalent advice today might be, "Never say or do anything within 100ft of a
smart phone or put into a computer anything that you would not want everyone
in the world to be able to see now and at all times in the future." That is
quite a bit of change and I would feel unjustly controlled following this
heuristic.

------
mncharity
_Harvard Rescinds Acceptances for At Least Ten Students for Obscene Memes_
[1]: "Harvard College rescinded admissions offers to at least ten prospective
members of the Class of 2021 after the students traded sexually explicit memes
and messages that sometimes targeted minority groups in a private Facebook
group chat." (June 5)

A related NYTimes opinion piece [2] encourages "help young social media users
realize that their online and real-life experiences are more intertwined than
they may think. Parents might, for example, cite current events, like the
Harvard episode, to remind them that nothing online is ever completely
private". Which is true, good advice, and social cooling.

And the nytimes/reuters version [3] is currently "Page No Longer Available".
How does that affect your confidence that "if it was going on, you would know
about it"? :)

[1] [http://www.thecrimson.com/article/2017/6/5/2021-offers-
resci...](http://www.thecrimson.com/article/2017/6/5/2021-offers-rescinded-
memes/) [2] [https://www.nytimes.com/2017/06/07/well/family/the-secret-
so...](https://www.nytimes.com/2017/06/07/well/family/the-secret-social-media-
lives-of-teenagers.html) [3]
[https://www.nytimes.com/reuters/2017/06/05/business/05reuter...](https://www.nytimes.com/reuters/2017/06/05/business/05reuters-
usa-harvard.html)

------
vladf
I might be misunderstanding the point being made here, but one part doesn't
make sense to me. The digital reputation argument seems to be saying "big data
is bad because your reputation (i.e., people's valuations of your past
actions) is now more accessible to people." Such an argument can hold two
ways:

1. Giving anyone access to your reputation is inherently bad.

2. Giving some number of people access to your reputation is OK, but the
number of people big data gives access to is somehow magically worse.

(1) is definitely untrue, at least to most people. We all definitely use our
knowledge of others' reputations to make judgements, and apply social pressure
to make them conform. For instance, if someone you know is a rich snob, or a
vehement racist, you won't hang out with them.

(2) seems ad hoc. Why would letting more people know about your reputation
magically be worse? Whether someone knows about your reputation should either
be bad or not -- it's not dependent on how many other people are aware of your
reputation.

~~~
WalterSear
The problem here, like all mass surveillance issues, is twofold. The first,
and generally less serious: you will be discriminated against due to your
views/behaviour, or your past views/behaviour.

The second is much more insidious and difficult to resolve: since systems are
never perfect, it is likely that many will be discriminated against due to
over-generalization and mistakes in the system. The more complete the
surveillance appears to be, the more confidence authorities have in the
system, and the more likely people are to get into serious trouble through no
fault of their own.

~~~
vladf
Sure, but what I'm saying is that the above holds for non-mass surveillance as
well. So if you're OK with offline systems of social reputation (which most
people are, and it seems essential to the function of society), then you owe
an explanation as to why your two points [discrimination on views, imperfect
generalization] don't apply to offline systems of reputation.

~~~
socialcooling
The difference is:

- scale

- transparency (the ability to complain about or discuss decisions)

- culture: people can recognize normal discrimination, but think algorithmic
judgements are 'neutral'.

Check out [https://www.mathwashing.com](https://www.mathwashing.com)

~~~
vladf
I wholeheartedly agree with the linked page (we should encourage this kind of
scrutiny of algorithmic judgements). However, the point of disagreement seems
to be this: an algorithmic judgement is a _step forward_ compared to offline,
human judgement. An algorithm has fixed code, and fixed code is traceable and
auditable. Yes, it might be hard. Yes, the legislation may not be there yet,
but it's _possible_. Compare this to human judgement: 10 years ago, if HR
threw out your resume, you had no recourse. Today, if an algorithm
automatically rejects your resume, we at least have a path to _potential
recourse_ in the future, since you can analyze an algorithm's decision making
(see the sketch at the end of this comment). A human's judgment is only more
opaque. Essentially, when you say:

> people can recognize normal discrimination

I don't see how you can reliably recognize discrimination in any way that
can't also be applied to the decisions of a computer program.
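
To make "traceable and auditable" concrete, here is a minimal sketch; the
field names and thresholds are invented for illustration, not taken from any
real screening system:

    # Toy sketch: an automated screening rule whose every rejection
    # carries a machine-readable reason an auditor can replay and contest.
    def screen_resume(applicant):
        reasons = []
        if applicant.get("years_experience", 0) < 2:
            reasons.append("years_experience < 2")
        if not applicant.get("has_degree", False):
            reasons.append("has_degree is False")
        decision = "reject" if reasons else "advance"
        return {"decision": decision, "reasons": reasons}

    # Unlike a human gut call, this rule can be logged, re-run on test
    # cases, and checked for disparate impact:
    print(screen_resume({"years_experience": 1, "has_degree": True}))
    # -> {'decision': 'reject', 'reasons': ['years_experience < 2']}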

~~~
elefanten
Broadly, I agree with you. For discussion's sake, I think the counterargument
would be to focus on the scale and culture points. Specifically:

1. The scale means we're applying many more judgments in many more places
that would've slid by the human judgment radar. This is arguably dangerous in
itself, because if we hold judgments to be generally unreliable, we're just
adding many more points of unreliability.

2. The culture could conceivably develop in the direction of blindly trusting
automated judgments, such that the type of scrutiny you encourage will dwindle
as a practice. This would put us in a much more vulnerable state with respect
to bad judgments.

That said, I still side with you. I think the ultra-dystopian scenario
outlined as a possibility by OP is unlikely precisely because power/control
are decentralized and very difficult to wield intentionally. And there's
massive inherent conflict between various actors that have greater means to
try to affect that control.

However, I also don't think it's inconceivable that we end up in the
ultra-dystopian scenario. Technological progress is generally good and
generally can't be stopped, but it also continually introduces undesirable
possibilities.

~~~
vladf
To be honest, I don't see how an increase in algorithmic judgments will
necessarily lead to a culture where we are more blindly trusting of them. I
think it's up in the air, and there are forces working both ways: the more
present it is, the more that engineers and policymakers have to think about
it, though perhaps end-users will start noticing it less (?). In any case, it
seems like those latent forces are to blame, not "big data" in and of itself.

The scale point makes some sense, I'll admit. We're raising the volatility of
applying potentially discriminating judgements in the first place. Perhaps
it's indeed a tradeoff between the expected boons of algorithmic decision
making and an increased risk of discrimination. That said, the scale argument
would then _not_ hold for _replacing_ situations that currently have opaque
human judges with robots.

------
syphilis2
It seems to me we need to write 'privacy in public' into our laws. We've
reached a time where government possesses the power to track most citizens
simultaneously. Businesses can do the same with everyone they come in contact
with, and technology is cheap enough that I believe an individual could do a
decent job of tracking people in his community. We all have to exist in
public to some extent; I believe it's reasonable to demand that more privacy
be extended into public space.

------
Asooka
OK, but isn't this the system working as intended? Yes, to me this is
absolutely awful, a new form of Gestapo with similarly awful consequences, but
isn't this _exactly_ what Big Data is supposed to do? Make everyone conform
and punish those who step out of line.

Just raising awareness won't change anything - the system is working as
intended for the people who were sold on it and the people who implemented it
(bar a few unfortunate engineers who had to do it for the money). History is
rife with examples of people trying to enforce a more rigid social order with
varying degrees of success. Letting people _different from you_ have freedom
is not something that many people want. Think hard about the last time you
thought "the world would be a better place if everyone thought like me". Then
realise how many people don't follow that with "but enforcing a mind-police on
society is awful".

------
chicob
I would add that, in an age where old values fade away, people seem to be
caught in a strange economy of visible virtue.

By reducing moral relativism to the self and ignoring its role in
relationships at large, individuality overcomes any collective moral system
(be it religious, political or philosophical), and so self-righteousness
assumes a form that values spontaneity and originality - the tools of personal
promotion - above ethical soundness. This seems to be, in my opinion, the
humus of the most visible social outcry. Social media outrage took the place
of discussion, just as opinion articles are taking the place of news
reports.

Uncritical adherence to this logic harms us all. And the chilling effect
strengthens it.

In the past, people fought against a static, conservative religious or
political morality, in order to make room for individuality, liberty and
democracy. Now we have an agglomerate of individual perspectives fighting for
visibility in social media, where popularity (by any shallow measure) has
taken the place of reasoning. The chilling effect makes public virtue even
more black and white, and conformity (or social cooling) is just settling in
on either side. Living on the fringe that is the refusal of conformity
(social heating?) has become more difficult and exhausting than ever...

I don't know. Maybe I'm wrong and things were like this for ages. Maybe there
is an answer in all the valuable teachings of the past that we simply choose
to ignore for the sake of the here and now.

------
lithos
All of this with 4 different share buttons. Just to flag everyone who visits
the site.

~~~
socialcooling
Those buttons are privacy-friendly (as stated right above them). They don't
load any tracking scripts.

------
Mz
Oh. My. God. Fearmongering is everywhere.

We also might conclude that "meh, teens getting drunk occasionally" or "meh,
people actually having a sex life" is pretty goddamn normal and get over a
bunch of nonsense.

No matter what goes on around us, we still have a choice in how we interpret
things and what kind of world we choose to build. There is zero inevitability
here.

When Demi Moore posed naked on the cover of a magazine while pregnant, this
was some sort of shocking dramatic thing. Now, it seems like every pregnant
celebrity does the exact same pose and posts it somewhere. It has become
prosaic.

Seriously, we can choose to be more humane to people. Things going to hell is
not some inevitability.

Edit: Maybe a better example is that when 24-hour news channels became a
thing, it changed the news. Before that, people were very straitlaced and
serious for the 30 minutes that they reported the news. This was not
sustainable when reporters had to talk live all day, every day. They became
less stiff and formal, more able to crack a joke and be human. They still had
to treat some subjects with appropriate respect, but 24/7 news channels
caused the news to lighten up some. Geez.

------
dredmorbius
A related question: can privacy be quantified? And if so, how? On what bases?

Thoughts I've had:

Total quantity of data available?

Ability to define boundaries?

Ability to enforce those boundaries?

Knowledge of what boundaries to even define?

Who knows what about a person?

How many agents know what?

How aware is the subject of actual knowlesdgee?

How rapidly can that knowledge be further transferred?

Does the surveillor know more of the subject than the subject?

Can the subject access that knowledge?

Can others?

What level of benefit (or harm) can be transacted on the basis of
surveillance? Does this accrue to the subject or others?
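
One way to make these questions concrete is a toy composite score. A minimal
sketch, with dimension names and weights invented purely for illustration --
there is no accepted metric here:

    # Toy sketch: a weighted "exposure" score over some of the
    # dimensions above. 1.0 = worst case on a dimension.
    EXPOSURE_WEIGHTS = {
        "data_volume": 0.25,        # total quantity of data available
        "boundary_control": 0.25,   # ability to define/enforce boundaries
        "audience_size": 0.20,      # how many agents know what
        "subject_awareness": 0.15,  # does the subject know what is known?
        "transfer_speed": 0.15,     # how fast knowledge can propagate
    }

    def exposure_score(ratings):
        """ratings: dict of dimension -> value in [0, 1]."""
        return sum(w * ratings.get(d, 0.0)
                   for d, w in EXPOSURE_WEIGHTS.items())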

~~~
tpeo
_" Knowlesdgee"?_

Anyway, I think you're forgetting one important dimension: whether the person
in question would like that particular piece of information to be known.

~~~
dredmorbius
Sigh. Soft keyboard keeps duping and misregistering keystrokes.

That dimension is the setting of boundaries. E.g., "I don't want you to know,
or share, or seek, or ask of some X." Or, if it's acquired, not to share it
except as explicitly specified -- only with notice, on request, within a
given group, for (or not for) a specific time, etc., etc.

------
randommm
It is obviously bogus that Foucault first raised that issue in 1975. For
example, Erving Goffman introduced the term "Total Institution", which
implies total surveillance, as early as 1961. Subsequently, there have been
many public reports concerning the information society and the problem of
databases. For a legal assessment see Westin, Columbia Law Review 1966, pp.
1003ff.

~~~
socialcooling
It's not 'obvious'. The goal of that chart is not to be a perfect
representation of history; it's to show the 'greatest hits'. Foucault was a
superstar philosopher who was a regular TV guest. His panopticon analysis is
far more widely known than Goffman's analysis (as much as I love Goffman).

~~~
randommm
Sure. But why this urge to rewrite history, then? Just because Foucault is
the grandfather of surveillance studies? Why forget all these older and more
fruitful discourses?

------
bogomipz
I see this chilling effect manifested sometimes on HN as well. People will
create throwaway accounts to comment on certain subjects.

An example - there was a discussion a couple of days ago about FB, and I
questioned why a commenter felt the need to create a fake account simply to
comment on FB. It turned out they weren't even a current employee but an
ex-employee.

~~~
20170619-2
on HN, the mods are directly responsible for the chilling effect. they censor
anyone who doesn't toe the company line, or has any kind of strong dissenting
opinion. this is achieved by shadow bans, single-IP bans, and then finally by
flagging every single IP they've ever logged in under as blacklisted for new
account creation.

i cycled through 3 or 4 accounts, each with multiple thousands of karma
points, but in the end i just stopped giving a shit. this sort of
amateur-level banning may work against your typical troll, but on HN the end
result is you're wiping out diversity of opinion, because although the people
on here are smart and resourceful enough to get around any ban that happens
over the internet, at some point it just becomes not worth it to express your
opinion -- that's how censorship actually works in the real world.

at the end of the day, YC is a VC and has interests to protect.

~~~
bogomipz
>"on HN, the mods are directly responsible for the chilling effect. they
censor anyone who doesn't tow the company line, or has any kind of strong
dissenting opinion ..."

What is the company line exactly? Is it just maintaing agreement with YC-
backed companies?

I hadn't heard about the shadow bans, are they only on submissions or
commenting too?

------
michaelcampbell
How is this not the well-documented Hawthorne Effect? Being watched digitally
and with more ubiquity seems to be a distinction without a (real) difference.
Calling something new does not make it so.

~~~
socialcooling
Social Cooling is related to a number of concepts, including Foucault's
panopticon. But Social Cooling is about much more than "individuals change
their behavior when observed" (the Hawthorne effect). That's only the
starting point. Social Cooling aims to cover the large-scale societal
consequences of that effect, and makes a comparison to global warming to
point to the scale of the problem, as well as the possible path that leads us
out of it.

~~~
randommm
I sincerely doubt that anything that places the 'panopticon' at the center of
the analysis will be able to point us to the path that leads out of the
problem. The discourse of ubiquitous surveillance (or, less politically:
observation) is a discourse of weakness, a discourse of lacking alternatives.
The only normative proposal it makes is: watch us less!

------
f_allwein
of course, "social credit systems" to rate citizens are already being tested
in China: [http://www.economist.com/news/briefing/21711902-worrying-
imp...](http://www.economist.com/news/briefing/21711902-worrying-implications-
its-social-credit-project-china-invents-digital-totalitarian)

~~~
wu-ikkyu
>It is planning what it calls a “social-credit system”. This aims to score not
only the financial creditworthiness of citizens, as happens everywhere, but
also their social and possibly political behaviour.

This already exists across the world in the form of credit rating agencies
and social media. It's merely an issue of data integration.

~~~
elefanten
Except it's:

1. centrally regulated and collected,

2. mandatory, and

3. explicitly inclusive of matters of ideology and opinion.

So, it's hardly comparable to what is extant under the surface in other
countries.

~~~
wu-ikkyu
1. So are Facebook and the credit rating agencies in the US.

2. No, it's not; it was unsuccessfully tested in _a single county_.

3. So do the judgments of employers on whether or not to hire or fire someone
based on publicly expressed political ideology.

It may make us feel better to point the finger _over there_ to distract from
the parallels with what is going on locally, but doing so is hardly
practical.

------
rdrey
HN can help us investigate this phenomenon by tracking how often users click
on 'reply' but then shy away from submitting it.
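
A minimal sketch of how such a "self-censorship index" might be computed,
assuming a hypothetical event log with reply_opened / reply_submitted events
(HN exposes no such data):

    # Toy sketch: share of reply boxes opened but never submitted.
    def self_censorship_index(events):
        """events: iterable of (user, action) tuples."""
        opened = sum(1 for _, a in events if a == "reply_opened")
        submitted = sum(1 for _, a in events if a == "reply_submitted")
        if opened == 0:
            return 0.0
        return 1.0 - (submitted / opened)

    # e.g. 100 reply boxes opened, 60 comments submitted -> index 0.4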

~~~
nxc18
I've done this multiple times over the last 2ish months, and every time it has
been because I don't want to create the wrong perception of myself. In fact, I
almost did that with this comment, but decided the irony was too much.

~~~
rdrey
Then HN can sell the "self-censorship index" data to data brokers... oh, wait.
:P

------
gedrap
I think one of the reasons awareness is still relatively low is that it is
often hard to imagine the implications of this for the casual internet user;
HN is quite a bubble in this regard.

And by implications I mean something more than not seeing job ads, or not
getting a loan.

Fortunately, there are no such cases that I am aware of. Unfortunately, it
might be just a matter of time.

------
nachocode
This is a very delicate topic indeed. On one hand, users are not willing to
pay to use the services of platforms such as YouTube or Facebook, so those
platforms need to find ways to monetize. On the other hand, as a user, you
feel that your trust in platforms has been betrayed when you learn that they
are selling the data that you willfully post to the public.

~~~
socialcooling
Yes, it's becoming more difficult to argue that users are giving 'informed
consent' when they sign up to 'free' services.

In the EU the new GDPR law is already making the 'informed consent'
requirement more strict. [http://www.eudataprotectionlaw.com/consent-under-
the-general...](http://www.eudataprotectionlaw.com/consent-under-the-general-
data-protection-regulation/)

------
avip
Domainistication - How inventing a quite meaningless term made of N words,
then registering the domain made of these words glued together, and uploading
a single-page thingy became a thing.

More on that in
[https://domainisticationofngrams.com](https://domainisticationofngrams.com)

------
jxramos
wow, this is a huge question and a huge distinction "If they say they don't
sell your data, ask if they are selling theirs." I've never even contemplated
the ramifications of derived data. Can anyone enlighten the gang about what
risks reside along this derived data surface?

------
dmartinez
In the future, the right to anonymity will become worth fighting for.

~~~
bballer
That future is now.

------
SamBoogieNYC
If public expression of a thought MAY result in more sophisticated communal
understanding revealed by discourse, then it follows that self-censoring said
thought MAY result in eliminating more advanced thoughts.

That is a real concern to me.

------
vlasev
Serious question - how viable would it be to implement some sort of data
decay?

I don't know how it would be implemented, on what schedule, and to what
extent.
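
A minimal sketch of one naive approach, assuming the data holder cooperates:
attach a time-to-live to every record and treat expired rows as gone.

    # Toy sketch: a store whose records decay after a fixed TTL.
    import time

    class DecayingStore:
        def __init__(self, ttl_seconds):
            self.ttl = ttl_seconds
            self.rows = {}  # key -> (value, stored_at)

        def put(self, key, value):
            self.rows[key] = (value, time.time())

        def get(self, key):
            item = self.rows.get(key)
            if item is None:
                return None
            value, stored_at = item
            if time.time() - stored_at > self.ttl:
                del self.rows[key]  # decayed; forget it
                return None
            return value

Of course, this only binds stores that play along; any copy made before
expiry escapes the decay.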

~~~
jellicle
If even one person that has access to it makes a non-decaying copy, the decay
has failed...

~~~
dredmorbius
That depends on the person and their capabilities. It's more a probability
distribution.

------
Fuxy
And this is why Facebook has become my go-to place for contacting old friends
I haven't talked to in years, and relatives I speak to even less.

For people I interact with all the time, there are other, less data-mined
ways of making contact, like a good old text message or phone call.

Oh yeah, and my last Facebook post was well over a year ago. There's no way
in hell I will post random pictures that will show me in a bad light. It's
basically a slightly less official business profile.

~~~
andreasgonewild
Forget about posting. Even visiting that stupid site will mess with your mind
and others' in unseen ways. All parties are at all times interacting with a
set of algorithms optimized for profit / integrity intrusions / behavioral
changes / god knows what. You might think you're smart enough, but then why
are they spending so much money on something that doesn't work? Just say no.

[https://github.com/andreas-gone-wild/snackis](https://github.com/andreas-
gone-wild/snackis)

------
Symmetry
Regardless of current issues, I figure that in 30 years everybody will have
access to AIs capable of trawling the internet and correlating all my online
identities. At least barring things like nuclear war, global draconian
censorship, etc. I've taken to just using my real name in most places online
to remind me of this. If I'd created this HN account a few years later I'd be
AndrewClough rather than Symmetry.

------
sn41
I guess it's time for Samizdat all over again.

------
mhh__
I know that it's crazy

I know that it's nowhere

But there is no denying that

It's hip to be square

?

------
grillvogel
the formatting of this site is truly awful

------
Katastrophial
It's a great presentation of facts in infographic form, easily digested by
lots of people.

It could use some serious fine-tuning for grammar though, likely as a result
of English being a secondary language.

If the person who owns the website is on here, I'd be happy to help out with
the syntax and grammar. PM me -- I'd love to help with this; it's really well
laid out.

