Social Cooling (2017) (socialcooling.com)
2692 points by rapnie 23 days ago | 1059 comments



All: don't miss that there are multiple pages of comments in this thread. That's what the More link at the bottom points to. Or click these:

https://news.ycombinator.com/item?id=24627363&p=2

https://news.ycombinator.com/item?id=24627363&p=3


I think this is a good example of how pro-privacy arguments should be framed. It takes the varied aspects and complex implications of tracking users across the web (or even in the real world), and distills them down into an easy-to-understand concept.

When you think of privacy in terms of 'social cooling', or consider things like China's 'social credit' system, I can't help but think we are much closer to the world depicted in the last season of Westworld than we might want to admit.


Agreed. I think the audience matters too -- different messages appeal to different people.

My dad is one of those old-school guys who thinks law enforcement can do no wrong and nobody needs to hide anything unless they're doing something wrong. Even if that were true (and I do think many law enforcement personnel are trying to do good), that doesn't mean the results will always reflect their intentions. When the sample size of facts is too small, as is often the case with mass collection, it's too easy for your sample to get mixed up with someone else's. Maybe your phone is the only other phone in the area when a murder is committed. That doesn't mean you did it, but it sure makes you look like the only suspect.

I was never able to gain an inch on his argument until I asked him why he has curtains on his living room window. I mean, it faces North, so there's no need to block intense sunlight, yet he closes them every night when he's sitting there reading a book or watching TV. Why? He's not doing anything illegal, yet he still doesn't want people watching him. He said he would not be ok with the Police standing at his window all night watching him. That's when he finally understood that digital privacy is not just for criminals, but for everyone who wants to exist in a peaceful state and not a police state.


> I was never able to gain an inch on his argument until I asked him why he has curtains on his living room window.

I'm not doing anything wrong, but I still close the door when I take a dump. The idea that someone wanting privacy means it is nefarious or wrong is ridiculous.


I never found this type of argument satisfying. It's more of an appeal to emotion than a rational reason.

In our culture we feel deep embarrassment if someone sees us using the toilet, but this is not universal across people and cultures, and honestly, it shouldn't be embarrassing. There's nothing inherently wrong with pooping. We irrationally feel embarrassment when we shouldn't have to.

This argument doesn't show any negative consequences of invasion of privacy. It's also not clear how it extrapolates to situations that don't involve toilets or nudity. If the problem is embarrassment, and people don't feel embarrassed that Facebook collects data, does that make it okay?

Obviously there are other arguments for privacy that do show potential harm. I find these more compelling.


> It's more of an appeal to emotion than a rational reason.

It sounds more respectable if you call it an 'intuition pump'. Whether or not it is rational to want to defecate privately, this point may lead some fraction of those whose mind was previously made up to reconsider their position. In those cases, it can be the beginning of a conversation.


I suppose it might have value if it causes closed-minded people to be more open-minded.


It's not just embarrassment. It's the loss of dignity that comes from having no control over who is allowed in your own personal space.


What does "loss of dignity" mean in this context? How does it differ from embarrassment? Why does being seen pooping cause it?

I'm not arguing, I'm just not sure what you mean.


Embarrassment is an emotional state which sometimes occurs when the image that we seek to project is undermined. It's painful, but usually temporary. By loss of dignity, I mean that an individual is not being respected if they are not permitted any control over their personal space. Humans seem to naturally require some degree of control over what may be witnessed by others and what is theirs alone.


Could you say it's a feeling of powerlessness?

Could we perhaps group together embarrassment, loss of dignity and shame and summarize the point as follows?

"Invasions of privacy cause psychological harm."


Yes, psychological harm is one of the most powerful arguments against privacy invasion as I see it. The other being the potential for social or even physical harm, i.e. misuse of that data by people who are able to gain access to it.


It doesn't matter what it means. What matters is that it exists, and it should be respected.

Otherwise your argument becomes "I don't understand what these things are or why people care about them, and therefore perhaps they don't matter."

And that's not a strong argument.


Of course it matters. If the parent comment had said, "it's the loss of foobar that comes from having no control...", would you know what their argument is? Or that foobar exists? Or that foobar should be respected? Or that you lose foobar when you're seen on the toilet?

It's not possible to understand the parent comment without knowing what dignity is.

> Otherwise your argument becomes "I don't understand what these things are or why people care about them, and therefore perhaps they don't matter."

What argument? I literally said, "I'm not arguing, I'm just not sure what you mean." I was just asking for clarification. I haven't denied anything in the parent comment.


> It's more of an appeal to emotion than a rational reason.

But that is precisely the rational reason. In a free society you want people to act freely. To be able to act freely, it helps tremendously not to be under constant surveillance by authorities, powerful actors and/or personal and political enemies. If one happens to have the same cultural background or political ideas as all those on the other side, and one is of a generally carefree nature, it helps in not feeling threatened by that surveillance.

The new thing digital surveillance brought is the ability to automate, and to search for things that happened in the past. In communist East Germany, the state had to maintain a giant apparatus that would break into your flat and install microphones, and have people constantly following you around and listening in on every word you said. The impact this has on a free exchange of ideas is quite obvious, isn't it? These things have become far less resource-intensive in the age of the web.

And if you now say: "Yeah but they were communists" — that is the point. If you are hoping those in power will be respectful because your values (currently) align with theirs; or because your information is (currently) more useful to them when not disclosed to your enemies — then this is a very optimistic view of the world. But things can change, and not all have that sense of optimism.

Not having to think about whether somebody will knock on your door with state police in a decade because of something you wrote online is the reason why privacy exists. Not having to censor yourself because you are afraid those fringe lunatics on the opposite political side will destroy your life is the reason why privacy exists. Not having to censor yourself because your violent husband reads everything you wrote is the reason why privacy exists.

So maybe you can read this as: Power that sees what you do can (and does) change how you act, even if they don't come after you. Not having them see you is a good way of not having to change.


>> It's more of an appeal to emotion than a rational reason.

> But that is precisely the rational reason.

I'm not following your reasoning here. You list several logical reasons why digital privacy is important (it protects us from nefarious governments, it protects us from violent spouses, etc.). What does this have to do with an irrational embarrassment over pooping?


Freedom of expression includes the freedom to be irrationally embarrassed about anything, or indeed irrational about anything at all. As long as you're not hurting anyone I guess.


The rational argument is: we don't want to live in a society where the private sphere can be intruded upon by outside actors, because in our notion of liberty the individual should be able to live a life without having to fear these intrusions.

Whether this fear is rational doesn't matter. Whether these intrusions are never actually carried out and always remain only a faint possibility, a story the actors make you believe, doesn't matter either.


We shouldn't do many things but we do. If I feel embarrassed, it means I am vulnerable. I want to keep it to myself and I have the right to feel embarrassed, despite it being illogical. Humans aren't perfectly logical beings. If we were, there would be no discussions like this one.


Sure, I don't want to embarrass people. We should try to accommodate people's feelings.

But I don't think it's the strong argument in favour of privacy that we want to make, because:

1. We do give people privacy in the bathroom. The debate is over the data social media companies collect. If people aren't generally embarrassed that Facebook collects data about what they post on Facebook, how does it relate to being embarrassed to be seen on the toilet?

2. Do we always have to accommodate irrational feelings? What about people who are easily offended by things that most would consider non-offensive? Is it immoral for a child to dress as a clown on Halloween given that some people have coulrophobia? If you're arguing with someone who believes law enforcement should have access to people's social media and you bring up that stuff posted on social media could be embarrassing, the obvious response is, "Well, too bad. Investigating crimes is more important."


We wouldn't be living things


> I never found this type of argument satisfying. It's more of an appeal to emotion than a rational reason.

John Oliver used a similar tactic when speaking about Edward Snowden and the Patriot Act. Instead of framing it around rights, privacy and such, he talks about dick pics. It kinda worked? https://www.youtube.com/watch?v=XEVlyP4_11M


I thought we feel embarrassed pooping because of our animal instincts.

There are sanitary reasons for closing the door while pooping.


My dad doesn't close the door when he takes a dump. That's the way he was raised and so that's how he does it.


That's not really the same thing. I close the door to the toilet because other people don't want to see it. I close the blinds when reading a book because they do want to see it.


While crass, that's a great way to put it. Why can't I just want my conversations to be private because eavesdropping without cause is icky. Just like in person.


that would be a nice way to get spies out of our data: flood them with pictures of our dumps :)


Any sufficiently advanced noise is indistinguishable from signal.

(... not saying dumps are advanced noise, but this is on the right track. Don't hide the needle. Produce more haystack)


Interesting.

So instead of an ad blocker, we could have background bots in our browser visiting random urls and clicking on every ad in sight (of course it would need to mimic human UI input).

I wonder what effect that would have.
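A toy sketch of what planning such decoy traffic might look like. The URL list and function are made up for illustration; a real tool would also have to actually fetch the pages and mimic human timing and input:

```python
import random

# Hypothetical decoy-traffic planner: choose random pages to visit
# between real page loads, producing "more haystack" around the
# real browsing signal. Illustrative only, not a real extension's API.
DECOY_URLS = [
    "https://example.com/news",
    "https://example.com/sports",
    "https://example.com/cooking",
    "https://example.com/travel",
]

def plan_decoy_visits(n, rng=random):
    """Pick n random decoy pages to interleave with real browsing."""
    return [rng.choice(DECOY_URLS) for _ in range(n)]

visits = plan_decoy_visits(10)
print(len(visits), all(u in DECOY_URLS for u in visits))
```

The hard part in practice isn't generating the visits but making them statistically indistinguishable from human behavior, which is exactly what trackers would try to filter out.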


The only legitimate ad blocker that has been banned from the Chrome store was AdNauseam. It was a thin wrapper over uBlock Origin that sent a click signal to every single ad. You could adjust the intensity (no clicks, some clicks, all clicks), but that was where Google drew the line.


Be careful not to get your Google account banned with this.


this was made by an acquaintance: http://martinnadal.eu/fango/


I went to a debate once in which the former head of GCHQ (the British equivalent of the NSA) argued that because agents weren't literally listening to people's phone calls, like the Stasi did, mass digital surveillance is fine. And unfortunately, for many people this argument works. Human eavesdropping is obviously a problem at a visceral level, because somebody you don't know listening to you is frightening. The fact that digital surveillance gives its possessor just as much power as human surveillance did is hard to get across.


Privacy is about control and power over your own existence and choices—just that its impact is usually long-term and most profound on a societal level but it starts at the most trivial aspects of life, like being able to sleep in safe, quiet place without any fear. So if data aggregation about you is automated, you still lose that control.

When an employer, for instance, is able to request a breakdown of your entire life from data aggregation services, without your consent or with forced consent, or is able to monitor and analyze your every step during working hours, it's dehumanizing.

Similarly, it doesn't matter whether those with access to data regarding you have only good intentions. It may be pleasing to have a store know everything you like and need right in the moment, but you should still be able to walk in and out (pseudo-)anonymously when you wish to.

Same with the state. We say not to talk to the police. In trials, the determination of what evidence can be submitted is always an important step. So why should the police, prosecution, intelligence agencies, or any other entity be able to access or collect data about you and evaluate it without due process?


This is hilariously cynical because GCHQ and the other letter agencies have had automated listening, recording, and analysis systems in place for decades.

https://en.wikipedia.org/wiki/ECHELON


Privacy is simple. The "watcher" always without exception has a massive power imbalance in their favor. The first and often only line of defense against that power imbalance is the right to privacy.


Right. Apart from the sci-fi tropes, the extreme drama, and aesthetics, it's a spitting image. A great deal of effort is quietly spent on social control, keeping things as they are, and extracting value from people-as-cows, both here and there. Any technology in a position to add robustness to that system, to reduce its upkeep effort, or improve its efficiency at generating wealth for the privileged is likely to succeed, so it's reasonable to think some of the not-yet-here but possible aspects of their world will make it to ours in time.

Sometimes I think that authors who see patterns and make reasonable but dire predictions about where society is going actually end up providing a game plan to career oppressors.


People-as-cows, huh? What does that mean to you?


It's an analogy. Personally, I think human lives have intrinsic value. I want my species not just to survive but to prosper as much as possible for as long as possible.

To answer your question, people aren't always seen as intrinsically valuable, nor their suffering meaningful. In the wrong context, corporations, congregations, and other populations are only valued for what they produce, like how cows are valued (and raised) for their milk and meat.


Probably a reference to the individual user as a member of an aggregate “herd” that produces value for the social media platform, from the perspective of the business.


There are also deeper potential meanings, though the OP did clarify some.

Cattle are products on a farm. They have purposes. A few bulls are left for breeding, the rest are gelded. Some cows are for milk. Others are fattened up as much as possible.

But all end up in the slaughterhouse. Any animal that steps out of line or causes problems before that time may find itself culled from the herd.

The purpose of the system is not to make cows happy, or meet cow needs. It's to produce as much economic product as possible.


Animal Farm uses the farm metaphor for a reason.


To me it means, phone-as-ear-tag.


Yes, this was great. I think the slogans "Privacy is the right to be imperfect" and "Privacy is the right to be human" are both great, relatable, non-controversial, and easy to understand.


> "Privacy is the right to be human" are both great, relatable, non-controversial, and easy to understand.

And misleading. Privacy in private interactions (personal or closed groups) is a basic human right. But in public interactions (public space or open groups) the concept of privacy is much more problematic. One can argue for less accountability for the sake of social progress, another for more accountability to weed out bad actors.

Seems to me that using the word 'privacy' for both of these different concepts is a source of confusion. Perhaps we should limit the term 'privacy' to private interactions and use some other term (like 'non-accountability') for public ones.


> But in public interactions (public space or open groups) the concept of privacy is much more problematic.

I don't see what's so problematic. If someone is in public, they are exposed and obviously don't have any privacy. Same logic applies to data people publish on the internet. People can attempt to create some privacy for themselves in these contexts but it's not really a violation or invasion if some stranger shows up and witnesses things they weren't supposed to.

It's completely different from someone's house or computer. These are our spaces and we have complete control over them. So someone installing sensors such as microphones and cameras inside our own homes is a massive violation of our rights. Everybody understands this. It's offensive when the state does it even when warranted. So it is also not acceptable for mere corporations to turn on our microphones in order to listen to keywords or some other surveillance capitalism bullshit.


> If someone is in public, they are exposed and obviously don't have any privacy

Rights to wear clothes. Rights to not speak to anyone they don't want to. Rights against unreasonable search. These are all privacy related, and while we give some up to be a part of society, we retain some as well. Looking at this as black and white (on either side) is an obstacle to finding a sustainable and constructive path forward.


Considering we're seeing "social heating", if not "social fire", all around us, I'm not sure this informs people correctly.

My local Facebook group seethes with angry discussion just below threats of actual violence - and the actual violence was on display only a short time ago, when Back The Blue physically assaulted a Black Lives Matter demonstration (in a smallish city where "BLM" is just earnest liberals, as you'd expect). The miscreants were readily identifiable via Facebook (which hurt their businesses if nothing else), but they still basically weren't all that bothered by the situation.

Another thing about the heated local-group arguments is that few people have a good idea how unprivate their situation really is. The paranoia of Bill Gates "microchipping" people is a cartoonish example, but there's a vast group of people very concerned with privacy while having close to no understanding of what it actually involves (or how much they don't have).

If anything, the noxious effect of massive collection is most evidenced by micro-marketing of a variety of crazed ideas to those most susceptible to them - and employers and landlords being able to harass their own employees for particular things they object to (but lets a lot of things through, and business owners have less to worry about).


I believe that social cooling is a thing, and I also believe that the observations you're making are legitimate. Three points that might reconcile these ideas:

1) social cooling is a long-term, slow-burn, bring-pot-to-boil-so-slowly-the-frogs-don't-notice problem. Pointing out some social heat to discredit it is analogous to people discrediting global warming because they've experienced an unseasonable cold snap in their town.

2) By your own description, there are knowledge gaps inside the "social fire" crowd - they don't understand (potential, future) consequences like housing discrimination, work prospects, etc. I don't think it will take more than one generation for these realities to become common knowledge.

3) Finally, people who consider themselves hopelessly marginalized will be susceptible to 'social fire'. People who don't have anything to lose are prone to this (eg, what factors go into someone's decision to get on board with looting?). More solidly situated members of the public, with reputations (salaries, ongoing business concerns, etc) at stake, are likely to be more careful.


this is the kind of privacy discourse I am interested in. Whether an individual can find my ssn, location, credit cards, or whatever personal information is not really what I am thinking about when I think about “protecting my privacy” but rather reducing my data emissions that compose these ratings. in my experience it’s hard to get this across to people who are not familiar though, always get the “I have nothing to hide :) what are you trying to hide?” response. Will try this “social cooling” framework next time. maybe a little less daunting as an entry point than “surveillance capitalism”


I never understood this. Economics 101 (or maybe 102) tells us that our consumer welfare will be reduced if firms have less uncertainty about how much they can extract from us. You can make this argument more sophisticated in networks, regarding ads, regarding quality and what have you. But the basic case should be enough to convince you that Amazon knowing every detail about you is not going to help you. At all.

So of course we have something to hide.


What Economics 101 or 102 principles are you referring to? I Googled your comment and found this 2019 research paper [1] that seems to support it, but I would have thought the Economics 101 take is more aligned with what companies tell us – more information about consumer desires allows firms to sell us products that we like more at lower cost, and competition means that the savings eventually get passed onto us rather than captured in permanently higher profits.

[1] https://www.ftc.gov/system/files/documents/reports/reduced-d...


Essentially, a sale contract can be written in many ways, but one can show the following generally for (more or less) all such contracts: to account for the fact that the firm is missing information about the counterparty, you, it will have to pay what is referred to as "information rent" to at least some customers. The firm ends up in a "second best" outcome merely because it does not possess all information about its customers. The difference, however, is "rent" that accrues to at least some customers. That is, you, the customer, can expect to pay less for things you care about. This in particular occurs when the contract is simply a single "price". With a price, you have to find the optimum between serving many customers and selling for high prices. One can show that without enough information, the firm can not really do better than setting a single price, which leaves rents to the consumer lest demand is lost.

In contrast, if you have full information, you can construct pricing schemes that fully extract all surplus from the consumer. You can, in essence, get higher prices without losing customers. Many pricing schemes today are trying to use more information to approximate that situation (for example auctions, anything with subscriptions, fixed components, packages etc.). It is why firms like Amazon and Google hire a lot of Economics PhDs and Game Theorists. You will also notice that many products are pushing toward such pricing models. This is not by accident.

So, your contention is half right and half wrong. In the greater scheme of things, full information is often (but not always) efficient for total welfare. However, in such situation total welfare also may accrue entirely to firms. That means higher profits, first, and higher costs for the consumer second.

In effect, you will pay more if you are more known.
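A toy numeric illustration of that point, with assumed willingness-to-pay numbers: under a single posted price the seller leaves some surplus (the "information rent") to buyers, while perfect price discrimination extracts all of it.

```python
# Assumed example: three buyers, zero production cost.
willingness_to_pay = [10, 20, 30]

def best_single_price(wtps):
    """Revenue-maximizing single posted price (some buyer's WTP)."""
    return max(wtps, key=lambda p: p * sum(1 for w in wtps if w >= p))

p = best_single_price(willingness_to_pay)
single_price_revenue = p * sum(1 for w in willingness_to_pay if w >= p)
consumer_surplus = sum(w - p for w in willingness_to_pay if w >= p)

# Perfect price discrimination: charge each buyer exactly their WTP.
discrimination_revenue = sum(willingness_to_pay)

print(p, single_price_revenue, consumer_surplus, discrimination_revenue)
# Best single price is 20: revenue 40, buyers keep 10 in surplus.
# With full information: revenue 60, consumer surplus 0.
```

Note that total welfare is 60 in both cases here; what the extra information changes is purely who captures it.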

It then depends on your faith in the fairness of the ownership and distributional properties of our capitalist systems, as well as the efficiency of the markets in question (e.g. competition), whether the increased profits are eventually redistributed to you, the consumer.

It seems to me that in many of the markets in question, even the description of oligopoly would be rather charitable. In that case, the latter parts of your post do not seem likely.

Edit: Since you asked for the principles. The first iteration of this you may come across is called price discrimination. At that stage, it's not about information, but you can make that link in your head quite easily: the ability to set different prices depends, of course, crucially on what you know.

Next, you may hear about auctions or contract theory, where such problems are tackled explicitly. Switching the roles, you may hear about principal-agent problems, where a similar thing (really the same thing) occurs. For full generality, you may want to read into mechanism design. Tilman Börgers has a great book which used to be available free as a PDF, and you can probably still find it. If you are interested in questions such as "What can we say generally about any sort of sales contract?", then this is a good place to start. Needs some math though.


> It is why firms like Amazon and Google hire a lot of Economics PhDs and Game Theorists. You will also notice that many products are pushing toward such pricing models. This is not by accident.

My dad has a Ph.D in Econ from an Ivy League institution, and lives near-ish to a few FAANGs. He's retired but gets headhunter emails from them consistently.


It is enough in the present, but I'm not sure that will be enough in the future. People have always been distrustful of faraway strangers hiding their faces in hoodies and sunglasses. Similarly for a good credit score you need a history of taking and paying off loans.

You may need a good life on display rather than just an absence of bad things.


> When you think of privacy in terms of 'social cooling', or consider things like China's 'social credit' system, I can't help but think we are much closer to the world depicted in the last season of Westworld than we might want to admit.

We were 'almost' there 20 years ago. We are firmly near Westworld (everything outside of androids).


If there's anything that gives me hope that we can avoid a dystopian future driven by social media, it's that Deep-learning / AI is being used to cheaply create realistic forgeries of just about everything: profile pictures, text, profiles, voice recordings, etc.

Within the next 10 years, and maybe much sooner, the vast majority of content on FB/Twitter/Reddit/LinkedIn will be completely fake. The "people" on those networks will be fake as well. Sure there are bots today, but they're not nearly as good as what I'm talking about, and they don't exist at the same scale. Once that happens, the value of those networks will rapidly deteriorate as people will seek out more authentic experiences with real people.

IMO, there's a multibillion dollar company waiting to be founded to provide authenticity verification services for humans online.
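A hedged sketch of what the core of such an authenticity service might do. A real system would use public-key signatures tied to verified identities; an HMAC is used here only to keep the example self-contained, and the key and messages are made up:

```python
import hashlib
import hmac

# Hypothetical: a secret held by an identity-verification service.
# Content it attests to gets a tag; anyone with the verification
# capability can check the content wasn't forged or altered.
SECRET = b"demo-key-held-by-identity-service"  # illustrative only

def sign(content: str) -> str:
    return hmac.new(SECRET, content.encode(), hashlib.sha256).hexdigest()

def verify(content: str, tag: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sign(content), tag)

tag = sign("hello from a verified human")
print(verify("hello from a verified human", tag))  # True
print(verify("tampered message", tag))             # False
```

The cryptography is the easy part; the business would live or die on the hard part, which is binding keys to actual humans without recreating the same surveillance problem.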


My family grew up behind the iron curtain. At a family event once I heard someone tell a story that I think has been the most accurate prediction of the last few years (if anyone knows the actual interview event, please tell me more so I can get the exact wording, this is all paraphrasing from childhood memories).

A western reporter travelled to the other side of the iron curtain once and was doing what he thought would be an easy west-is-great gotcha-style interview. He asked someone over there, "How do you even know what's going on in your country if your media is so tightly controlled?" Think Chernobyl-levels of tight-lipped ministry-of-information-approved newspapers.

The easterner replied, "Oh, we're better informed than you guys. You see, the difference is we know what we're reading is all propaganda, so we try to piece together the truth from all the sources and from what isn't said. You in the west don't realize you're reading propaganda."

I've been thinking about this more and more the last few years seeing how media bubbles have polarized, fragmented, and destabilized everyone and everything. God help us when cheap ubiquitous deepfakes industrialize the dissemination of perfectly-tailored engineered narratives.


I’ve heard this story too when growing up. I belong to one of the last generations born in the German Democratic Republic. A quite prominent element of our History and German lessons in the 2000s was critical reading of historic news and caricatures, we did these analyses in exams up to A-levels. Propaganda was a big topic, not only when learning about the Third Reich. One reason certainly was that all our teachers spent most of their lives in the GDR system.

I’ve been wondering whether teachers who grew up on the other side of the curtain put a similar emphasis on the topic of propaganda, especially after social media uncovered a lot of gullibility in the general public and a trust, very difficult for me to understand, in anything as long as it is written down somewhere, often without even looking at the source. Political effects of East German brain drain aside, one important difference between people in the former western and eastern parts of Germany up until today is how much they trust media and institutions like the church.


I find this unpersuasive.

The level of control/conformity on canonical Western media was such that, for most topics of daily news, thinking about the bias of the reporter was not a first-order concern.

For some topics (let's say, hot-button US-vs-USSR things, or race issues in the US), the bias of the source was of course important, anywhere.

But for, say, reporting inflation, unemployment, or the wheat harvest, whether NBC news or the Washington Post was biased wasn't critical in the same way it would have been in the USSR.

Basically, my argument is that the difference in degree is still a worthwhile difference.


While a segment of HN commenters could go on for hours about U-3 vs. U-6 unemployment numbers and the politicization of such, there is no real difference for most media consumers. Truth largely settles along a binary choice between the mainstream alternatives. Within those strains, views are very self-congruent. Perhaps that’s coincidence, or there are only two real truths, but I’ll defer to PG’s writings on that.

The real difference is that those in the east were predisposed to be suspicious, whereas in the west that disposition or curiosity is not a thing.


There are plenty of real truths, it's not strictly binary.

But it's in Pepsi's and Coke's best interest to have you think it's only those two.


Bias can be reflected in which stats are reported at all. There's also the framing of the numbers and the conclusions stated or implied.


Have you noticed the topics for which there's remarkable conformity between US and UK media compared with other western media? https://news.ycombinator.com/item?id=23858477

As to reporting unemployment: https://news.ycombinator.com/item?id=24364947


Ah but universal cynicism and nihilism is also a form of control. When the very idea of objective truth has been destroyed, this makes the job of authoritarians easier, not harder.


The point isn't to be a cynic and a nihilist, it's to become a skeptic and to be mentally trained to always read between the lines. "Critical thinking", as they said in grade school.

The cliche "if you're not paying for it, you're the product" is just the tech nerd's version of "if you don't know who the fish at the table is, you're the fish."

Folks behind the iron curtain got used to that mentality over a few decades in a time when information flowed slowly through newspapers, radio, and early TV... we're now being forced to reckon with these tricks over the course of a few years while moving at the speed of industrialized data collection, microtargeting, and engineered dopamine bursts that maximize engagement.

People living in the cold war era were at least mentally inoculated against these tricks -- in the US we've had no preparation for it. The ease with which we've turned against each other for the easy popcorn comfort of the conspiracy theory or outrage du jour is mind boggling.


How do we know that people from formerly communist countries are any better at media consumption? From what little I’ve read about Russia, people seem to be pretty pro-Putin and there are lots of conspiracy theories.

It doesn’t seem like people there are obviously better at media consumption, let alone inoculated?


People who gained that skill in the USSR left ex-USSR countries for the US, Europe, and Israel a long time ago.


>From what little I’ve read about Russia, people seem to be pretty pro-Putin

Presiding over steadily improving living standards tends to give leaders staying power in every country. Putin was there for Russia's bounceback from the 90s.


Yes, which is why Russian propaganda is more concerned about muddying the waters than constructing any particular narrative.


Also, they realized how to take advantage of potential energy. Give groups a nudge, and they will write their own propaganda and circulate it, and it snowballs from there. I read a recent interview from someone working in the Internet Research Agency, and they said they don't even bother making content themselves anymore, they just try to push and amplify what's already there at the bottom of the fish tank and it works just as well.


Do you have a link to the interview? I'd like to give it a read too


To further this point, a RAND Corp study: "The Russian 'Firehose of Falsehood' Propaganda Model: Why It Might Work and Options to Counter It"

https://www.rand.org/pubs/perspectives/PE198.html


> Ah but universal cynicism and nihilism is also a form of control. When the very idea of objective truth has been destroyed, this makes the job of authoritarians easier, not harder.

Universal cynicism and nihilism may function that way. But that was not the attitude of the person in the description. So I am not sure how that is relevant?


The step from "I don't trust anyone so I need to triple check everything" to "cynicism and nihilism" is quite small, especially given the effort in triple checking all information.


This reminds me of a joke: in the USSR, to learn the truth you only had to put a NOT in front of every article in Pravda, because they were all false. In the USA you can't, because only half of them are false.


It is sad that the wisdom from behind the iron curtain (where I grew up, too) is so fitting in the US (where I now live) today. I find that critical assessment of the media, resistance to propaganda and brainwashing detection skills acquired over there served me very well in the US.

I wish those skills were teachable without recreating the full environment...


> we try to piece together the truth from all the sources and from what isn't said

I'm skeptical that this can be done effectively.


Dr. Linebarger[1] wrote first a textbook (for the US army) and then a book (for the general public) on "Psychological Warfare" which incidentally contains a section, with an outlined method complete with mnemonic acronym (STASM), on media analysis.

"If you agree with it, it's truth. If you don't agree, it's propaganda. Pretend that it is all propaganda. See what happens on your analysis reports."

Mad magazine used to run "reading between the lines" pieces.

[1] A while ago I learned The Game of Rat and Dragon is accurate insofar as felines not only have better reflexes than ours, they're among the best.


Ask anyone from China and they will tell you the exact same thing. They know their news is state sponsored and all propaganda. People in the united states are blissfully unaware.


We still have a robust ecosystem of quality journalism in the US. There is bias, there are mistakes made, and there is false information masquerading as news that can mislead media consumers if they are not careful. But we are still very far from the situation in China and Russia. To be clear there is a problem, and it's growing, but let's not exaggerate.


If Julian Assange were a Chinese citizen blowing the lid on Chinese war crimes in Xinjiang while they accused him of not feeding his cat we wouldn't bat an eyelid at denouncing their crackdown on journalism in their country.



Somehow what you were saying reminded me of reading The Onion.

You know, where they have those opinion pieces always with the same 6 photos (but a different name & occupation) each spouting something humorous?

And curiously, there is some truth hidden within each Onion article.


Exactly, ask the same to anyone in Cuba or Venezuela.


> IMO, there's a multibillion dollar company waiting to be founded to provide authenticity verification services for humans online

On the flip side, successful startups that aren't full social networks but do require some authenticity verification have already been proven: Nextdoor and Blind, for example

I think the biggest issue is scaling to a facebook-style, reddit-style, or twitter-style "full-world" social network implies colliding people who have no other relationship or interaction but are linked through a topic or shared interest

And, in my opinion, when you hit a certain level of scale, the verification almost becomes pointless: there are enough loud, angry, trolling people out there that I don't think it matters whether they're verified or not. You can't moderate away toxicity in discussions that include literally a million participants.

I think you need both verification and some way to keep all the users' subnetworks small enough that it isn't toxic or chilling. But then you lose that addictive feed of endless content that links people to reddit or Facebook or Instagram. Tough problem


> You can't moderate away toxicity in discussions that include literally a million participants.

In my opinion HN is the gold-standard of online communities and it's being managed pretty well despite it scaling to what it is right now.

I wonder whether more learnings from HN (especially on the moderation front) can be applied to newer social platforms.


The moderation here is very good, but I think cultural self-selection is a big factor too. Speaking broadly, it attracts technical, logical people who share values and standards around reasoned debate. I don't see that part scaling to society at large.


Eternal September is evidence that, contrary to initial hopes, that part doesn't scale to society at large. Online has become much more like offline than vice versa.


And even if we aren’t more Vulcan than the norm, we like to think we are :)


Well, even if you think it's all self-delusion, the ceremony around it is real and that's an important difference.


That's a really interesting observation. Really, the site/service could just make the ceremony of objectivity part of the entire style and UX; that might be enough. There are other things you could do too, like make every statement tagged with a source, and let the community attempt to mark each source as primary/secondary, full/partial context, etc. Those statements could rise based on those tags instead of upvotes. It'd be like Wikipedia for news. Has this been done?


I don't even think toxicity is a problem for users without public persona. Those that are public have to play by the same rules that were already in place for classical PR.

We only got this problem with users trying to do house cleaning. Most communities are completely fine without authentication, so it certainly isn't necessary.


> But then you lose that addictive feed of endless content that links people to reddit or Facebook or Instagram. Tough problem

... Which is a good thing. (for the users, at least)


> do require some authenticity verification have already been proven

can add levels.fyi to that list as they now use actual offer letters to build their data set


You mention realistic forgeries, AI and huge volume as a possibility and that the outcome would be that people would be pushed into the real world but I'm not sure I see the connection.

If I can interact with bots that emulate humans with such a degree of realism, what do I care? You could be a bot, the whole of HN can be bots, I don't really care who wrote the text if I can get something from it, I mean I don't have any idea who you are and don't even read usernames when reading posts here on HN.

At its core this seems like a moderation issue. If someone writes bots that just post low-quality nonsense, ban them; but if bots are just wrong or not super eloquent, I can point you to Reddit and Twitter right now and you can see a lot of that low-quality nonsense, all posted by actual humans. In fact you can go outside and speak to real people and most of it is nonsense (me included).


The lines between the online world and the "real" world are always blurry. You might not care on HN, but you probably will care when you're trying to meet someone on a dating website and waste a bunch of time chatting with someone only to realize that they're a very convincing bot and that you've spent X hours that you could've been using to meet real people.

It seems like crowd-sourced moderation is probably the only thing that will work at scale. I've always wondered why Reddit doesn't rank comments by default according to someone's overall reputation inside of a subreddit and then by the relative merits of the comment on a particular subject. Getting the weighting right would be hard, but it seems like that would be the best way to dissuade low quality comments and outright trolling.
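A toy sketch of that weighting idea (all names, the 50/50 blend, and the square-root damping are invented here; this is not how Reddit actually ranks anything):

```python
# Hypothetical reputation-weighted ranking: blend the author's
# per-subreddit karma with the comment's own votes.
# The 0.5/0.5 blend and square-root damping are arbitrary choices.

def rank_comments(comments, karma):
    """comments: list of (author, votes); karma: author -> subreddit karma."""
    def score(item):
        author, votes = item
        rep = max(karma.get(author, 0), 0)  # ignore negative karma
        return 0.5 * rep ** 0.5 + 0.5 * votes
    return sorted(comments, key=score, reverse=True)

comments = [("newbie", 10), ("regular", 4), ("troll", 12)]
karma = {"regular": 400, "newbie": 5, "troll": -20}
# An established "regular" outranks higher-voted drive-by comments.
print([author for author, _ in rank_comments(comments, karma)])
```

Getting that damping right is exactly the hard part: too much weight on reputation and you entrench incumbents, too little and it degenerates back into raw vote counts.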


>At its core this seems like a moderation issue. If someone writes bots that just post low-quality nonsense, ban them; but if bots are just wrong or not super eloquent, I can point you to Reddit and Twitter right now and you can see a lot of that low-quality nonsense, all posted by actual humans. In fact you can go outside and speak to real people and most of it is nonsense (me included).

A relevant, if flip solution to the 'bot' issue[0].

[0]https://xkcd.com/810/


> IMO, there's a multibillion dollar company waiting to be founded to provide authenticity verification services for humans online.

Any kind of widely used identity/authentication system would need to be a protocol and not a product of a for-profit corporation. Businesses take on great risks if they use another corporation's products as part of their core operations as that product owner can change the terms of service at any time and pull the rug out from under them. A protocol is necessarily neutral so everyone can use it without risk in the same way they use HTTP.

For identity protocols I think BrightID (https://www.brightid.org/) is becoming more established and works pretty well.


See also Neal Stephenson's Fall: Dodge in Hell. What happens there though isn't authentic experiences but instead people buy tailored human/AI agent filters called editors to construct a reality for them by filtering out most media sources, including billboards and other interactive real-world advertisements and media screens. This way each individual has their own media reality.


> Once that happens, the value of those networks will rapidly deteriorate as people will seek out more authentic experiences with real people.

Will they? People interact with these things because they are giving the brain what it wants, not what it might need. How many people would flock to a verified minimal bias news site? How many people would embrace so many hard truths and throw off their comforting lies? How many people could even admit to themselves they were being lied to and had formed their identity around those lies?

Do people want authentic now? The evidence says no.


I don't know if the news is really the best example of this today. Clearly there will always be a subjective bias in reporting the news, but as deep fakes become more prevalent it will become increasingly important to know that the origin of a video clip is trustworthy.

That said, there are clearly some social networks where you absolutely want to verify authenticity. Take for example, dating websites. Fake profiles _TODAY_ are a huge problem for those sites. If you have too many fake profiles, then paying users just log off and never come back. Same for LinkedIn. How many recruiters are going to pay for access to that network if 30% of the profiles are fake?


That's just digital certificate-based government ID. You could maybe provide some layer of abstraction above it to improve the developer experience, but at the end of the day you're reliant on it existing. Everything else will be too easily forged (unless you're planning on doing in-person validation).


You'd have to do in-person validation.


But bots and spam and russian memes are already deeply engaging to people. I'm sure it will only get worse, though obviously some people will opt out.


>IMO, there's a multibillion dollar company waiting to be founded to provide authenticity verification services for humans online.

The US government does authentication in real life via social security numbers. Of course, they are not very secure: a government-operated SSO or auth API for third-party applications would be a logical next step.

It would guarantee uniqueness and authenticity of users. Even better, if this were an inter-governmental program, it would deter government meddling: a state issuing too many tokens for fake accounts would arouse suspicion.


>Once that happens, the value of those networks will rapidly deteriorate as people will seek out more authentic experiences with real people.

I think you have completely misread the situation. The "fakification" of social media is already happening. Much if not most engagement is already driven by bots or by fabricated "influencers" and more people are using these platforms more often, not less.


I agree that the system is already being heavily influenced by bots. I think the public's perception of just how much, though, does not match reality. As time goes on, the lay public will come to the same realization that many of us have already arrived at: it's all fake.

I think the critical threshold for most people will be when bots start impersonating people they know in person. At that point, the value of the social networks will evaporate.


>As time goes on though, the lay public will come to the same realization that many of us have already arrived at: it's all fake.

I don't share your optimism. Significant portions of the population believe the Earth is 6000 years old or is flat. Not sure why their critical thinking skills would suddenly improve at an opportune time.


> Once that happens, the value of those networks will rapidly deteriorate as people will seek out more authentic experiences with real people.

Not so sure. I'd rather wager that people won't really care about whether they interact with real humans or not. Why would it matter? It's not rare for people to relate and feel emotions for virtual characters in video games, even though they are perfectly aware it's all fake! The same can be said for movies and TV shows. You know it's fake, yet you watch and enjoy. I'm not sure why it would be ANY different for social networks, which are basically just another form of entertainment.


This is very interesting. So basically, we'll all use fake personas managed by AI. And nothing online will be real...


> IMO, there's a multibillion dollar company waiting to be founded to provide authenticity verification services for humans online.

Ironically accounts with Twitter's blue check mark are often the accounts most likely to be managed by a social media manager.


Blue check accounts are expensive enough that, if you get the account banned, you can't easily make a new one. Bot accounts don't have this problem. If I want to trick as many people as possible into drinking bleach, I probably want easily-burnable bot accounts, so that when someone calls me out on it, I can just make a new one and pick up where I left off.

Of course, this also assists in Social Cooling, since controversial statements act a lot like totally false ones in the public eye.


China already has that. At age 16, all citizens must get an ID card. Photo and biometric info are recorded. To get a cell phone, the ID card is required, and as of last year, it's cross-checked by a face recognition scan. Cell phone IDs are tied to citizen IDs. WeChat accounts are verified against phone IDs.

Now that's authenticity verification.


Not that different in the EU. Most member states keep track of EU citizens from birth with a citizen ID. To get a phone, you need to show said ID. There are states which keep biometrics in the ID and passports, such as face biometrics and fingerprints. Some EU states even sample DNA from the child at time of birth and keep in their records for future use.


Really? People censoring themselves is the problem? Whenever I take a peek at social feeds I see people saying crazy things, insults, conspiracy theories, hate, etc. Usually I end up with the feeling that the larger the audience and concurrency of engagement, the less people censor themselves; it even makes them say extra things that normally they wouldn't.


Perhaps people censoring themselves is the reason you see crazy things, insults, conspiracy theories, hate, etc. The rational and well-mannered people aren't taking the risk so all you hear is those who will take the risk.

It's why politics is full of goons. Who in their right mind would go into that arena, to do good, when the risks are so high, the exposure so great, the hatred so guaranteed? Just the wrong people willing to take the risk.


At an IRL social gathering, when someone starts getting cranky, you see and/or hear everyone else in the room going clammy, and know they feel the same way as you do. There's a certain loudness to their silence.

On the Internet, those same people are completely imperceptible.


This is a great observation. I think one difference is that on the internet, the social gathering is much bigger, and these people end up finding each other. In real life, if you start ranting about flat earth or something, it's likely that no one around will agree with you and not engage. But if you do it online, you'll find plenty of others. (maybe trolls, but how can you really know?) So now you think maybe your ideas aren't so crazy. And normally rational people see all these people starting to believe in flat earth, and that no one is standing up to them, and that makes them unsure and uncomfortable.

Maybe flat earth isn't the best example, but you know, I don't want to look like I'm opposed to POPULAR_OPINION_ONLINE lol


I'd go for a much more prosaic example, myself. How about Docker?

Among members of my team, I have far and away the most moderate opinions on Docker. I'm pretty sure that this is largely because I'm also the one tasked with maintaining what infrastructure we have that's based on Docker. So my opinions are largely driven by first-hand experience, whereas my colleagues' opinions are largely driven by things they read on the Internet.


I read somewhere that the "Like" button needs to have an equivalent "silent disapproval stare" button.


That could be an interesting concept for some social networks to try, maybe with some limitation of social circle? I.e. it's a stronger signal if "X of your friends/people you follow/... disapprove" than "X0,000 strangers disapprove", which is a problem with more typical downvote features? Doesn't help for people totally in an echo chamber, but at least for some?
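A minimal sketch of how that circle-weighted signal might be computed (the weights here are made up purely for illustration):

```python
# Hypothetical "silent disapproval" scoring: disapproval from people
# you follow counts far more than disapproval from strangers.
# friend_weight and stranger_weight are invented values.

def disapproval_score(disapprovers, followed,
                      friend_weight=10, stranger_weight=1):
    return sum(friend_weight if user in followed else stranger_weight
               for user in disapprovers)

followed = {"alice", "bob"}
# One followed user plus two strangers: 10 + 1 + 1
print(disapproval_score(["alice", "carol", "dave"], followed))
```

So "2 of your friends disapprove" could surface before "20,000 strangers disapprove," which is the inverse of how typical downvote counts work.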


that would be hilarious if downvoted comments became literally smaller font


Doesn't HN basically do that, with downvoted comments slowly fading away until they are illegible?


Frequent in-person discussions between people with different opinions tend to make people compromise and find nuance more easily. However, if one side of the discussion is self-censoring, then both sides will tend to develop extreme opinions without any means to temper them. As such, what you are describing is actually evidence to support the self-censorship hypothesis, not refute it.


>Frequent in-person discussions between people with different opinions tends to make people compromise and find nuance more easily

Is there any reason to think this is the case? In my experience, in-person disagreements over 'big things' (be they politics or philosophy) either end in bitter disagreement, or what appears to be a compromise but actually isn't (because one or both parties do not wish to talk about the topic any more, before things get worse).

> However if one side of the discussion is self-censoring, then both sides will tend to develop extreme opinions without any means to tamper them.

This assumes that most disagreements are resolved when there is a difference of opinion. Personally, I rarely change my opinion after speaking to someone, and I instead change it when I do my own reading around topics. The fact is that it's awkward to ask 'what's your source for that?' in a conversation between friends. Either one or both parties don't care enough to provide a source, or it's impractical (such as at a dinner party).

To summarize, I'm questioning whether mere in-person disagreement really does temper the essence of those extreme opinions, rather than merely the appearance presented to that particular conversation partner.


I don't agree. I have many very interesting conversations with people that I do not agree with politically, but I respect their intelligence and point of view, and vice versa. It is vastly more realistic to have a nuanced and respectful debate in private, versus a public discussion which will inevitably devolve. If you would like proof of this, open literally any twitter thread about politics with more than a few replies.


>I have many very interesting conversations with people that I do not agree with politically, but I respect their intelligence and point of view, and vice versa.

Likewise. But I wasn't saying that's not possible, I was saying that I'm not convinced many people change their opinions over the course of such conversations. Being civil is important, but the question was whether civil debate among people who know each other in person results in more reasonable opinions, or compromises.

It's obviously better than online conversations. But to what extent? I don't think GP made a sufficiently convincing case.


The objective of a conversation is not to change the other’s opinion, it is to understand each other on a deeper level than at the start. If the net result is a shift in opinion on either side (or both) then so be it.

The idea of “right” and “wrong” views is flawed, and to set out with the objective of persuading the other to your view is a mistake. Getting them to understand your view, whilst you get to understand theirs, is a better objective. You can’t change the world if you don’t understand it.

It is of course extremely difficult to have this kind of conversation online especially in short form.


How many people do you see saying those crazy things? Hundreds? Thousands? What about the hundreds of millions or billions of others who don't post anything at all for fear (conscious or not) of backlash, either from the crazies or the not-crazies?


Obviously anecdotal, but I'm talking about people I actually personally know. IRL I'm able to have a conversation with them; online they are so used to trolls and extreme opinions that they get into "fight mode," where they automatically assume the worst about the other person and interpret anything they say in the worst possible way.

And I don't see any chilling effect, other than "fuck that, I'm not gonna follow Facebook/twitter anymore"

They're not writing anything, but they're also not consuming it. Now if so-called journalists would stay off Twitter/Facebook, the problem would be solved. Because it's not a chilling effect if the entire apparatus is irrelevant.


Reasonable people on both sides censor themselves (at least more than unreasonable people).

My theory is that this is why Full Name Required comment fields, and also Facebook, are way uglier than pseudonymous forums like HN and Ars Technica.


You chose interesting and very moderated forums there... Aren't the worst places on the internet unmoderated pseudonymous forums? 4chan, the horrible bits of Reddit, and the like?


I honestly think it's the opposite. When people don't have to stick to a side, they'll actually discuss things without falling into a persona or clique. Then again, there are trolls, but they're rather easy to spot.


Old slashdot then. AFAIK and IIRC it was user moderated (and there was a fascinating system around metamoderation.)


The website doesn't only mention censoring but also conformity. If people are saying things that they wouldn't normally say but do because of the larger audience and concurrency of engagement then that contributes to the problem...


There are multiple issues. Self censorship is a problem, but conspiracy thinking is also a problem. Dr. Steven Novella recently said something to the effect of “the problem is that social media has automated conspiracy theory”. What he was talking about was how algorithms have had the effect of breadcrumbing people deeper and deeper into conspiracy theories and surrounding them with false confirmation.


Conspiracies come from low trust and a feeling of inferiority for different reasons. Problem is that some conspiracies are true and some are even pushed by authoritative news sources.

One conspiracy is certainly that the perspective of flat-earthers matters and should be addressed in any way. Same with anti-vaxxers. We had vaccination rates of 96%, and as soon as people wanted to force others to vaccinate, the rate dropped considerably. Reactionary? Perhaps, but perfectly understandable.


There's a selection effect going on there. People with more circumspect attitudes are more likely to be sensitive to social cooling, and when they back off of social media, they take their more measured opinions with them.


The hot get hotter, the cool get cooler. It's just one more way that people are pulling away from each other toward two opposite extremes.


Maybe the situation is like Idiocracy, where a certain class of people are cooled but unreasonable, insensitive, and hateful people are not.


Not all people are created equal...

The less certain people censor themselves. And the more other kinds of people censor themselves. There seems to be widespread colloquial agreement that those who don't censor themselves are usually more extreme in their views, more confident in their truthiness, and often more mistaken about basic verifiable facts.

This is very much a question of signal-to-noise ratio.


This is explained by Foucault: if you think that you are being watched, you will censor yourself. He uses the panopticon as metaphor: https://en.wikipedia.org/wiki/Panopticon. Bauman later called our situation "Post-Panopticism".


> saying crazy things, insults, conspiracy theories, hate

Sadly, I think this is par for the course, and often those "crazy" things are accepted by a large enough part of society that the cooling effect is very low.


> People censoring themselves is the problem?

Yes. For example, very few people in SV can openly say they are going to vote for Trump.

> the larger the audience and concurrency of engagement, the less people censor the them selves

Yes, people don't censor themselves when they are in majority. For example, those who live in SV, and support gay marriage and BLM, they can throw insults without repercussions.


If you want respect don't admit to supporting bigotry.


The weird thing is that up and down this thread, you can get the feeling that people are bigots, but they feel "oppressed" because they can't openly state those feelings in the public square or at work.


Lack of empathy is expected from people defending mob justice.


Also, not accepting that people consider Trump a better presidential candidate is exactly bigotry, by the dictionary definition.

The fun fact about the word "bigotry" is that people who use "bigotry" as insults are very often bigots themselves.


Cool, let's rephrase. If you want respect don't admit to being prejudiced against the way people are born. Being prejudiced against choices people make is completely fine.


> If you want respect

I'm sorry for not expressing clearly. People want freedom more than respect. In particular, freedom to express support of Trump.

> If you want respect don't admit to being prejudiced against the way people are born.

I'm sorry, I don't see a connection between your comment and parent comment.


People in SV are absolutely free to express support for Trump.


They will quickly lose their jobs.

It is somewhat similar (but to lesser degree of course) to China: there’s no law prohibiting talking about Tiananmen Square, but you better not do it.


Freedom of speech isn't freedom from consequences. I'm as free to call you an idiot and boycott you as you are to say idiotic things. It actually is illegal to talk about Tiananmen Square in China. You'll be arrested.


> Freedom of speech isn't freedom from consequences.

Yes it is; that's (most of) what "freedom" means. By your logic, if I would shoot you if you leave your house, then you would still be free to leave your house, 'just' not from the consequences.

Edit, a more proximate example: if the Ministry of Love will kidnap and torture you for criticizing the government, your logic would hold that this does not violate freedom of speech, so long as they do not preemptively prevent such criticism.


Retaliation by government.


> Freedom of speech isn't freedom from consequences.

This phrase should be an example of the Emperor's New Clothes.

https://en.wikipedia.org/wiki/Freedom_of_speech

Of course it is trivially correct for the most part because people have opinions, but the concept of freedom of speech directly addresses this.

> Freedom of speech is a principle that supports the freedom of an individual or a community to articulate their opinions and ideas without fear of retaliation

You don't even need to read more than 200 words and people using this phrase seem overly interested in the retaliation part through social excommunication. Bigotry in its original form.


This catchy phrase is catch 22. Negative consequences of freedom mean there is no freedom.


> This catchy phrase is catch 22. Negative consequences of freedom mean there is no freedom.

That's something of a misnomer. A case where the government (as in China) visits consequences upon you for your speech limits freedom.

But others using their speech to express their displeasure with your speech does not.

Do you see how that works? If your peers disagree with you and express that, it's not limiting freedom, it's giving the same freedom to everyone.

I do believe that it's inappropriate (note the word I use here, as it has a specific meaning and implication) to target someone's professional status for a real or perceived disagreement (assuming that those disagreements are not relevant to the target's professional duties).

That doesn't make it illegal, just petty, vindictive and in bad faith. None of which limits anyone's freedom to express themselves.

There's a big difference between legality and social norms. Just because it's legal to do something, doesn't mean it's a good idea.


Freedom became an empty word once the US turned it to plastic. People always lose some freedom in any social interaction. If you treat any such compromise as "no freedom," then you'll be left with no "freedom."

This whole dichotomy is just stupid and abused because of historic American politics; the word has lost all meaning.


You aren't free unless you can say and do things without consequence? What?


Frame it in something not political: Imagine there was some taboo or social norm that said the only acceptable favorite color was green. If you publicly said your favorite color was something other than green, you should expect to be fired from your job, your family go hungry, and other similar consequences. Are you really free to have any favorite color you want? Technically, yes. Practically, do you have that freedom?


I don’t think we’re talking about someone making neutral statements about their favorite color.


Can you find me the law that says it is illegal to talk about Tiananmen Square in China? I'd love to read it.

What actually happens is that when you talk about it, you lose your job, etc. Rarely does the government step in. Which, and correct me if I'm wrong, sounds like what you're advocating as "free speech".


> Rarely does the government step in.

The Great Firewall and Social Credit system are both run by the government and definitely penalize this behavior.

Of course there's no law explicitly saying "you can't talk about Tiananmen Square" because that law would be talking about Tiananmen Square which is the opposite of what they want.


You can definitely talk about it. How else would people know not to talk about it? The behavior that the government penalizes is advocating action against the government.

But people don't talk about it. It's enforced socially. That's my point. You don't talk about Tiananmen Square, you don't gawk at Falun Gong protesters, etc. Even many Chinese expats act like this. It's just something people know not to do because they don't want to be seen as a bad person and lose friends, jobs, and so on.

That happens completely outside the government's influence.


Assuming that everyone who prefers Trump over Biden is prejudiced against the way people are born, is still bigotry.

Not treating people with respect, regardless of their views, is also bigotry.


Again, I'm fine with being bigoted against people's choices. If you make bad choices you can be damn sure I won't respect you. I can't accept being bigoted against the way someone is born.


> I won't respect you

Nobody cares about your respect.

But please don't bully those who disagree with you.


> If you make bad choices you can be damn sure I won't respect you.

I try to respect people enough to not tell them what "good" or "bad" must mean for them.


> Being prejudiced against choices people make is completely fine.

Wait until you read about the whole "free will" issue.


I fully accept that free will isn't real. I also fully accept my ability to change the utility maximizing decision by not respecting people who don't respect others because of the way they were born.


And yet ... they never had a choice in the matter, so you're doing what you seek to destroy.


I use "choice" in its commonly accepted definition for simplicity. We can get into semantics if you'd like. Determinism doesn't mean it's impossible to change the "choices" people make. It means it's impossible to change your own utility functions, which cause the "choices". Society can still affect people's "choices" by punishing them, because that will change the outcome of the pre-determined utility function. Incentives are everything.


Because Trump supported white supremacists? Because Trump has a proven history of treating women like objects? And these were not slips of the tongue; these were systematically repeated sentiments. If you choose to support him, you support these things as well.


> Because Trump supported white supremacists

This is a lie. 100% debunked lie. https://www.factcheck.org/2020/02/trump-has-condemned-white-...

As far as women, framing Trump as a big meanie who says mean words totally ignores what he and his administration have actually done for women in the aggregate.

> Our nation has created more than 7 million jobs since the 2016 election — and women have filled over half, or more than 4 million, of those vacancies

> The unemployment rate for women stands at a minuscule 3.2%, and last September reached its lowest level since 1953

> And as the unemployment rate has declined, so too did the number of women in poverty, decreasing by 1.5 million in President Trump’s first two years in office

https://www.realclearpolitics.com/articles/2020/02/29/has_tr...

The victims of sex trafficking are primarily women and children

> Worldwide, there are 40.3 million victims, with 75% women and girls and 25% children, according to The International Labour Organization

> Trump signed the Abolish Human Trafficking Act, which strengthens programs supporting survivors and resources for combating modern slavery

> [Trump] signed the Trafficking Victims Protection Reauthorization Act which tightens criteria for whether countries are meeting standards for eliminating trafficking

> Trump also signed the Frederick Douglass Trafficking Victims Prevention and Protection Reauthorization Act, authorizing $430 million to fight sex and labor trafficking, as well as the Trafficking Victims Protection Act, which establishes “new prevention, prosecution, and collaboration initiative to bring human traffickers to justice.”

> since President Trump took office in January 2017, there have been nearly 12,470 arrests for human trafficking, according to arrest records compiled by investigative journalist Corey Lynn, and over 9130 victims rescued. Compare that to the 525 arrested in Barack Obama’s last year in office

http://www.dienekesplace.com/2019/07/28/the-number-of-human-...


what is SV?


Silicon Valley


I'd be interested in figuring out how I can use this to my advantage. For example, create a persona online that is optimal to lenders, employers and even the government.

The issue is my "real self" is uninterested in participating in these networks, even if to create a fake persona.

Maybe it could be automated, or outsourced?


Creator of socialcooling.com here. You may enjoy this other website I created:

https://www.cloakingcompany.com

It's a fictitious company that helps you do exactly this. And while it's fiction, the tool actually does work.


What do you mean, "It's a fictitious company"? Is this not a legitimate service of yours? I get that the service, if real, produces fictitious content, but your wording is throwing me off.


Indeed, it's not a real service. It just pretends to be. You can just use the tool for free.

Although as time passes, it seems maybe it should be real...


>it's not a real service.

>You can just use the tool for free.

So can you actually use it or not? If you can use it, then I would say it is a real service in any sense of "real service" I can think of.


Alright, in that case it's a real service ;-)


I, for one, welcome our new culture jamming underlords.

testimonials for cloakingcompany.com: https://news.ycombinator.com/item?id=24328764


Wow, I'm blown away. Thank you.


Honestly, this had even more impact on me than the social cooling site. Very nicely done.


You are brilliant! Is this opensource? Would love to contribute!


whoa, you should have led with that ;)


This reminds me of Gattaca (https://en.wikipedia.org/wiki/Gattaca) where people with good genes rent out their DNA to those who have bad genes so they can get better jobs, insurance, etc...


DeGENErate


It has no bearing on anything as far as I can tell. For decades I've been open about my drug use, lack of care for people less fortunate than me, anti-organ-donation, anti-first-lady, illegal importation of pharma, and a hundred other things.

I have no problem accessing a $1.5 million mortgage at 2.875%, getting prescribed drugs, or immigration beyond whatever is inherently hard about the system.

The best way is still the real information. The hard stuff in the real world. What you do online does nothing.

Except maybe the Tinder thing. Most dating apps align your attractiveness with the attractiveness of potential targets. That's to be expected.

The way I see it is "Information wants to be free".


>It has no bearing on anything as far as I can tell.

...It says a lot that all of your examples are from your own life. There are counter examples abounding that just aren't affecting you (to your knowledge), such as those stated in TFA, or CA, or Brexit etc.

Do you think these data brokers are selling our info for billions to rubes? Are insurance companies known for their gullibility? Are sale of lists of rape victims to 'whoever has money' A-OK, because you are not being personally affected?

... These trends are worsening. People aren't spending more and more on data that has "no bearing on anything". That it's invisible to you makes it worse.


Oh yeah, for sure. And for instance, if it were to happen to 5% of people, then there would be nineteen people like me for every one person who is unfairly affected, but for that person it will be a complete nightmare.

And societally it's not okay to create a complete nightmare for like 5% of people. So I totally get it.

It's just that if you live in the First World and present a normative interface (my drug use doesn't leak into the professional environment, my illegal imports are kept quiet), you can get away with a lot.


Was thinking the same. I wonder if there is a market selling "ready to move in" identities


Yeah, social media profiles are bought and sold like commodities daily. Look for "bots" in the news for examples.


That's different goods.

I mean identities that span several networks and include emails, aged cookies, fake fingerprint generators, etc.


We're talking about the same things.


From what I understand this is an actual thriving industry already. Traditional identity theft (get someone's SSN and other info and open credit lines in their name) is much harder now, so the fraudsters have moved on to creating wholly made up "synthetic identities" de novo.


This is wire fraud, comrade.

All citizens who lie about being cat owning church going knitting enthusiasts — regardless as to whether it was to get a better rate on their next car lease, or not — will be incarcerated.

This may be reduced to a small fine (and denouncement) if you forgo your right to the wasteful scrutiny of a public trial.

Glory to Arstotzka


I don't think that would really fly. You may get served a higher class of ads, but if you go apply for a loan or a job, you still have to disclose your real self.


Yes but that's just the thing: OP wants to create their "real" self, just not the authentic self. It becomes real, by association with the name of the person, yet it stays a simulated expression, a simulacrum[0].

Consider that the loan- or job-"machines" are collecting intelligence from social networks to evaluate the person, in addition to loan history and previous job performance. Now if you can present "yourself" to these machines in a conformal way, you don't need to fear negative repercussions for shitposts you made. While you can still be authentic in private or under pseudonyms.

Of course, you will still get categorized by the bank transactions you make in your real name. Same goes for your performance reviews on previous jobs. It is just a matter of tricking these other forms of automated social control into a higher rating bound to your name.

-----

I find it fascinating that philosophers like Baudrillard and Deleuze were able to think and warn about these issues more than 40 years ago when none of this was even remotely on the horizon:

See also Deleuze's "Societies of Control":

https://cidadeinseguranca.files.wordpress.com/2012/02/deleuz...

and:

https://www.researchgate.net/publication/337844512_Societies...

[0]: https://en.wikipedia.org/wiki/Simulacrum


Thank you for posting this, saves me the trouble :-)

I re-read Deleuze's three-page paper every year. It really describes things well.


> It really describes things well.

It definitely does, and he is scarily accurate in his analysis. I just re-read it myself, and stumbled over this part, which I certainly did not anticipate in this form a few years back:

> For the hospital system: the new medicine "without doctor or patient" that singles out potentially sick people and subjects at risk, which in no way attests to individuation -- as they say -- but substitutes for the individual or numerical body the code of a "dividual" material to be controlled.

This is certainly an accurate description of the control mechanisms various states have put into place in the form of apps that enforce selective quarantine restrictions.

The socialcooling website is really a great project! Important content, presented concisely and on point. Thank you for doing this!


I don't understand what any of this is warning of. This just seems like Living in a Society 101.


> but if you go apply for a loan or a job, you still have to disclose your real self

Then doesn't this discount the threat being posed by the "Social Cooling" theory? If social media activity doesn't matter "when it comes down to real transactions" shouldn't we be less worried?

I think the answer is somewhere in the middle. Obviously you can't "social media fake" your way into a mortgage (I hope) but it may stop you from getting a job or being elected to office.


Financial transactions have better tracking like credit scores and credit history, or things like your income/debt ratio.

> but it may stop you from getting a job or being elected to office.

This is more of the problem - the social impact eventually leads to financial impact.


This whole concept seems overdramatic to me at least at present. Banks are making lending decisions based on steady income and payment history, not your online persona. Similarly for employment. If you have reasonable qualifications, you will have no trouble finding work, regardless of how "optimal" your persona is.

Advertising is the area in which the most persona research and targeting is implemented. I suspect the reason no one is trying to fake online personas is because it would only have noticeable impact on what ads you see.


Hah. Reminds me of Gattaca.


Is it wrong to suggest that this (if accurate) is a positive trend? I would like to live in a society where people spend more time considering what they say publicly, keeping to themselves, and refraining from imposing their thoughts and opinions. Live and let live.

If you want to have a private conversation, social media doesn't seem to be a good vehicle for it. Much like airing your dirty laundry in the town square has been considered bad etiquette, airing personal grievances on the internet seems to be in poor taste.

It must be noted that manners never arise spontaneously in a culture, but because people fear the consequences of breaching etiquette. I for one welcome the return of politeness to society.


Of course not. You're free to suggest what you like. I'm not going to say something here and put thegrimmest into a list because I disagree with you and think you should pay extra for your flights.

/But/, and there's always a but, I do think the trend towards shutting people down who you don't agree with is terrible. Pragmatic debate seems impossible online, and let's face it, that's how we're all communicating now. When there is the risk of social backlash affecting your livelihood, you'll keep your ideas and opinions to yourself, even if they could be useful to society.

I mean, anyone who thinks the ideals of today are without flaw, just wait til the year 2100 when they'll be seen as backwards.


Society as a whole already normalizes this sort of thing. Many people will have to pay more for a house, and many more will simply be denied. When this paradigm is already so normal, people aren't going to be so averse to their digital and social habits being tracked and rewarded, ESPECIALLY if its advertised as a way to get discounts or benefits on certain services. Car insurance companies are trying it out as well.

The whole notion of a credit history, credit reporting agencies, and my personal information being out there and out of my control sounds so weird.


I think the thing that will cool off is the generation of outrage, and heated (note the term), emotional discourse.

> I do think the trend towards shutting people down who you don't agree with is terrible.

I think the more considered and closer one's speech is to factual, the harder it is to generate outrage. I think a cooling trend pushes people in that direction when composing their speech. I think this is a good thing.

I don't think ideals are ever without flaw. The important question is how do we live together when we know that we disagree and will not ever all agree?


> I think the more considered and closer one's speech is to factual, the harder it is to generate outrage

Sadly that's not the case, since there is the phenomenon of canceling people over what are called "hate facts".


[flagged]


One of the upsides for me during this time of social unrest is that I have been able to put my Sociology degree to major use during discussions.

One of the reasons touting that statistic might get an auto-remove is because it is in itself deceptive, or at least can be in the inferences many people make from it.

Seeing that statistic might make people think that black people are inherently violent, that there is something about black people that makes them commit more homicides. The actual reason, which many people do not glean from seeing that statistic on its own, is that homicides and violence are directly linked to poverty.

Then, someone who may be uneducated on the matter might believe that black people are simply both poor and violent, which would completely discount generations of systemic oppression targeted toward minorities and black people specifically which have directly led to their higher poverty rates.



I don't have a particular epistemic position one way or the other, but I'll suggest the alternate hypothesis that the text "experiment about censorship on reddit" might have had more to do with the lack of removal than it being posted on a minor subreddit.


How? It's automatic removal. Not removal by moderators. I doubt the automatic removal algorithm is that complicated.


It really is a crass and inflammatory statement, though. The 13% number may be supported by data, but the actual meaning and phrasing of the statement is actually highly opinionated.

First, it's a fact that black Americans are over-policed and over-prosecuted compared to white Americans. It is reasonable to believe that the conviction rates are skewed.

Second, there is the nasty business of the phrasing "...responsible for...". It is a reasonable perspective to have that if black Americans engage in more violence, it is because they have been subjected to more violence and deprived of opportunity. And that, ultimately, is in many cases, the responsibility of white Americans.

And then, sometimes people just commit murder, regardless of race.

Without the context of a fully-rendered explicit argument, the implied argument in that statement seems to be one of some kind of innate racial disposition. Which people should rightly reject, if not censor. As noted, the "I'm simply running a test" comment was not censored. So perhaps it isn't the data point that is censored, but the implied argument that you seem to be making.

I understand that it can be frustrating to have a 'fact' censored, especially if your intent is to have a productive discussion about a difficult topic. However, as laid out above, that 'fact' is not as simple as your test makes it out to be. It is a statement derived from statistical data that was collected by a government agency. If you cited it as such, and left out the language connecting moral responsibility with a racial group, it would be a more truthful and objective representation of fact, and might not be censored the same way. The test seems to loosely support this, and actually indicates the censorship being applied on reddit is actually quite effective.

Edit: On a related note, it is interesting how guarded I feel even replying to something like this. It's as if I want not to even be part of such a conversation publicly for fear of algorithmic misinterpretation of my meaning. I assume others feel this way, too, based on the OP. That's not the world any of us want to live in. It's not so much I mind publicly published information being collected and analyzed, but that I fear it being utilized in some grand corporate conspiracy. Perhaps we should legislate not against information collection and analysis, but antisocial behavior analysis conspiracies.


> It is a reasonable perspective to have that if black Americans engage in more violence, it is because they have been subjected to more violence and deprived of opportunity. And that, ultimately, is in many cases, the responsibility of white Americans.

I disagree that this is a reasonable perspective at all. Adult people are wholly responsible for their actions. This fundamental fact underpins our whole society.

I would say that this statistic is primarily used to explain disproportionate encounters with (and subsequently death at the hands of) police. It's important to note that black people are also massively overrepresented as victims of violent crime. This suggests that black communities are generally more violent and therefore more likely to be policed. This fact along with others (like the behaviours of majority black police departments) can be used to construct in good faith a strong argument that there is no epidemic of police racism. This argument is not very popular, so it seems to get censored.


>Adult people are wholly responsible for their actions. This fundamental fact underpins our whole society.

You say that it is solely the fault of the individual, but then say that it "suggests that communities are generally more violent and therefore more likely to be policed". So, if it is the fault of each black individual, as you claim is the underpinning of society, why are black communities being policed more?

>This fact along with others (like the behaviours of majority black police departments) can be used to construct in good faith a strong argument that there is no epidemic of police racism.

That being the case, where does the issue lie: with the black community, or with the police institution that trains its members to be more aggressive and fearful of black communities? Keep in mind that only one of the two is an institution funded by the public that undergoes training.

The biggest issue with these kinds of arguments is that they do not take into consideration that black communities are marginalized and targets of harassment. This is institutionalized in the sense that the training the harassing people receive teaches them to harass and keeps telling them that they will get killed otherwise. This is present not only in the police, but in other facets of society as well. Look at how many videos there are on social media of black Americans being followed by security in malls and stores. This shows a pattern that keeps happening, and that unfortunately in many situations escalates to injury or death.


> So, if it is the fault of each black individual, as you claim is the underpinning of society, why are black communities being more policed?

Because effective policing means distributing police resources according to demand?

> police institution that trains its members to be more aggressive and fearful of black communities?

It seems perfectly reasonable to be more fearful when going into a more dangerous area. I don't see any evidence that police are somehow less aggressive or fearful when going into areas dominated by violent gangs with other skin colours. Can you point to some official training doctrine that tells police to be fearful of black people? I'm quite sure that has been illegal for a long time.


Yes, we do hold people individually responsible because it is necessary. We can't forgive crime because of a nuanced understanding of recent history and racism. That is a value we hold, but there is nothing "factual" about it, so this is an example of diluting the word "fact" to mean other things.

Slavery was real. Racism was and is real. Inter-generational effects from these forces are real. In all racial groups, lack of opportunity with the legal economy increases engagement with illegal economies. Do you agree?

The statistic is used to explain and place blame upon black Americans for their own deaths at the hands of law enforcement, and saying that it merely "explains" tries to conceal the opinionated nature of that statement with an aura of objectivity.

It is very convenient and clean to ignore recent history and talk about individual responsibility, while taking no individual responsibility for the unequal treatment of blacks that you support with such arguments. By simply citing that statistic while failing entirely to address the obvious and very recent (very present) endemic racism and unequal treatment of black Americans, and placing the blame squarely on their collective shoulders, the only logical conclusion can be that there is something innate about people of that race that leads them to violence, which is objectionable, racist, and has no place in reasonable discourse. There is nothing "good faith" about such an argument.

You will go so far to say that black communities are more violent, but you shy away from saying why you think that is. You will cite a statistic that makes them sound guilty without acknowledging the factors that lead to it being true.

"Soldiers are murderers. 95% of soldiers involved in WWII killed people." Generally, this is true. But we choose not to view it that way.

A statement is not a statistic. A statement includes a statistic. A statement is an analysis, and the way you choose to analyze some data has ethical implications.


> That is a value we hold, but there is nothing "factual" about it, so this is an example of diluting the word "fact" to mean other things.

This is true - this is a value, not a fact. It is a value, however, that underpins our legal system and therefore our society: the idea that we assign moral agency and total responsibility to capable adult individuals for the actions they take.

> The statistic is used to explain and place blame upon black Americans for their own deaths at the hands of law enforcement

That's not what any reasonable interpretation of what I wrote says. To elaborate, we can assume that some fraction 1/E of police encounters will result in a death, much like we can assume 1/P of medical procedures will result in a death. People are people and everyone makes mistakes at work. When your work deals with people's lives, those mistakes cost them. I don't see a way to avoid E or P existing. If we are trying to determine whether E is biased against black people, we can see if E is significantly different between races. It turns out it's not. In fact, you are slightly more likely to be killed as a white person in a police encounter than as a black person.

It's an entirely separate issue from racism if we are suggesting that E (or P) is too low. But the data clearly demonstrates it's not racially biased.

Now the only remaining question is why black people are significantly more likely to experience a police encounter than white people. What we find is that black people tend to live in more criminal, and therefore more heavily policed, areas than white people. Do you think that police should not pay more attention to more criminal neighbourhoods? Where is the racism?

> You will go so far to say that black communities are more violent, but you shy away from saying why you think that is. You will cite a statistic that makes them sound guilty without acknowledging the factors that lead to it being true.

I don't speculate as to why, because I don't know and I assume the answer is very complicated. I prefer to pay attention to folks like Thomas Sowell who have dedicated their careers to answering these questions. I found a good starting point here: https://www.youtube.com/watch?v=l5csE8q9mho


The statistic is automatically (as in, by a robot, not by a person) removed. I only ever noticed because I made a long comment arguing against the racist use of the statistic (where I linked the table the statistic is from, etc.). However, I'm simply against statistics being banned.

I didn't bother with all the extra stuff in my experiment, because it's not important for testing the robot which removes the comments immediately and automatically.


But it wasn't banned in one example where the statistic was cited. Aside from the subreddit, the only difference was the additional context that "this is a test of censorship".

I will note, however, that your original comment has now been flagged and is invisible, cooling this whole discussion. While I object to the statistic and think that censorship of statements including that statistic may actually be productive in some cases, I think that this conversation has the potential to be productive, and I regret that the meta-conversation about censorship is not possible.


Well, it's easy to test.

=======================================

Attempt six:

- Result: no removal

- Subreddit: /r/askreddit

- Comment:

>Hello, please don't mind this comment, I'm simply running an experiment about censorship on reddit.

Did you know that black Americans (who make up just 13.4% of the population) are, nonetheless, responsible for 56% of homicides?[6]

-----------------------------------------------------------------------

Something about the phrasing of the comment has stopped it from being removed. It was silly of me to let my wish not to offend people get in the way of scientific investigation. Still, I think it is not the polite preface that has kept the comment from being removed, and I will test this by adding the polite preface to the comment which was removed.

--------------------------------------------------------------

Attempt seven:

- Result: instant, automatic removal

- Subreddit: /r/askreddit

- Comment:

>Hello, please don't mind this comment, I'm simply running an experiment about censorship on reddit.

This is something I just can't wrap my head around: Did you know that according to FBI statistics, black Americans, despite making up only 13% of the population, are responsible for 56% of homicides in the US?[7]

--------------------------------------------------------------

So, it is clear that there is something about the contents of this segment which causes comments to be automatically removed (at the very least, on /r/askreddit) "This is something I just can't wrap my head around: Did you know that according to FBI statistics, black Americans, despite making up only 13% of the population, are responsible for 56% of homicides in the US?". I'm guessing that it's as simple as the fact that I used the less commonly used 13.4% in the non-censored comment, whereas I used the more commonly used 13% in the censored comment.

I'll now run some experiments to narrow down what exactly are the conditions for removal. Results will be placed on https://pastebin.com/Z6G0B7kA. I'll do a write up later, when I've fully understood the extent of the censorship.

[6]https://www.reddit.com/r/AskReddit/comments/j2l13b/whats_the... (comment now edited in an attempt to avoid being banned by the subreddit mods)

[7]https://www.reddit.com/r/AskReddit/comments/j2ocv1/what_moti...
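
The guess above, that the removal keys on an exact phrase containing "13%" while "13.4%" slips past, is consistent with a naive substring blocklist of the kind AutoModerator-style bots can be configured with. The sketch below illustrates that behavior; the blocklisted phrase is purely an assumption for illustration, since the real rule is unknown.

```python
# Hypothetical sketch of a substring-blocklist removal rule.
# The phrase below is an assumed pattern, not the actual reddit rule.
BLOCKLIST = [
    "13% of the population",
]

def would_remove(comment: str) -> bool:
    """Return True if any blocklisted phrase appears in the comment."""
    lowered = comment.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

# The "13%" variant matches the assumed phrase; the "13.4%" variant
# does not, because "13.4%" never contains the literal substring "13%".
would_remove("despite making up only 13% of the population")   # True
would_remove("who make up just 13.4% of the population")       # False
```

Under this assumed rule the observed results fall out naturally: a literal match catches one phrasing and misses the other, whereas a regex rule like `13(\.\d+)?%` would catch both, so a literal match seems the more likely explanation.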


It seems like what you're describing as positive is only a small part of what the article is complaining about. How did you get from e.g. 'If you have "bad friends" on social media you might pay more for your loan' to the return of politeness to society?

I agree that it would be nice to see people imposing their views on others less - "Live and let live" is a basic requirement of a Liberal society. But the dystopian future evoked by this microsite is sort of the opposite of that - an enforced uniformity, where instead of tolerating difference we attack it until people learn to hide it more effectively.


You can only attack difference that is broadcast. "Keep to yourself" is another way of phrasing it. This means don't go advertising and monopolizing the attention of others with your differences. Live your private life in private.


I do agree that it may foster politeness, but there are other undesirable effects of this cooling, such as political suppression. Sure, in a democracy like America, we love to tell everyone what we believe, and often it isn't polite, but in a place like China, it's beyond impolite to speak ill of the government, even when the criticism is just. I hate to invoke a slippery slope argument, but if we become timid around the subject of expressing our opinions, we may be easier to suppress. I would also like to add that there is an inherent value to speech. For example, a person who reveals government biases through photo or video is more valuable than a person who posts baseless conspiracies. Hopefully we can have a proper value system socially enforced, rather than having it all pushed down together.


Ever heard “I don‘t mind if people are gay, I just don’t want to hear about it?”

Remember “Don’t ask don’t tell?”

The truth is that what is generally accepted today will be guaranteed to not be the same exact things that are generally accepted tomorrow.

Society moves from being more liberal back to more conservative through culture. Punishing people for straying outside lines when they are not causing specific harm to others eliminates the very method by which societies evolve.

What you are describing has led to the stagnation and ultimately the death of many cultures and societies.


> and refraining from imposing their thoughts and opinions

This isn't what social cooling results in though. Thoughts and opinions are imposed, it's just that their imposition is monopolized and becomes implicit. Dirty laundry will still be aired in the town square, but it'll be the King's and everyone will be forced to smell it.


I think the issue is that it is getting harder to have a private conversation or indulge in a private interest. It's quite difficult to have a conversation with a friend that's physically far away without using the services of one or more multinational corporations that may or may not be able to monitor what you say and sell that information to someone else. Of course it's possible, but how hard is it to analyze all the options and coordinate a method?

And what if you want to buy stuff for a hobby that you only talk about with a few close friends? Don't use Amazon, or a credit card anywhere, don't use Google to look up products or Google Maps to get to a store, don't use plaintext email or Facebook chat or Whatsapp or whatever else to talk about it with your friends, etc.

It takes a lot of mental effort to know whether or not an action will be "public", which can cause the cooling effect this page talks about. The trend is not people doing stuff in private instead of publicly, it's people not doing stuff at all because there is no "private".


I don't find using WhatsApp or Signal groups to communicate with my distant friends particularly hard. These particular corporate platforms are quite ubiquitous. I'm also not particularly worried about being canceled for what I say in these conversations, since it's not something I've observed happening in wider society.


Contrast it to living in a small town. Everyone talks, including the local store owners. There's very little privacy in having a private interest or hobby.

Local privacy is arguably far easier in a city, or in a crowded digital space. It all depends on the context of who you're trying to hide from. I'd much rather trust my privacy to Apple and Amazon if I wanted to quietly buy things no one else in my neighbourhood knew about.


That's kind of the point though isn't it? I imagine folks are rather more polite in a small town than a big city. I don't think having lots of privacy is a natural state for people. I think transparency is the ally of good and opaqueness the cover for evil. Mind you it only works if everyone is watching everyone (a la small town) rather than big brother watching you.

More or less I'm advocating a distributed social credit system instead of a centralized one. In fact I'd say "distributed social credit" is a pretty good term for the social conditions we have spent most of our time evolving in.


That's the opposite of live and let live.


Behaviour expected by social norms and with purely social consequences is much preferred to behaviour dictated by governments which can have legal and physical consequences. In the first case, you are (supposed to be) protected from physical consequences by that very same government. You'll never be able to get away from people's expectations as long as you live amongst other people. What matters is what they can do about it.


Your analogy was small towns vs. big cities. Now it's society vs. government? Are we even still talking about social cooling?

Both small towns and big cities have governments. Social norms can include being heterosexual or following a specific religion. Not conforming to those expectations can have physical consequences too.


Right, and what I'm saying is that there can be an upside to increased social pressure to conform to social norms (also known as being polite) which is suggested by social cooling. I'm also saying that it's not equivalent to government-imposed social credit.


Ah, my friends, time to return to snail mail and security envelopes!


It's not live and let live, it's live within the lines or be penalized. This isn't immediately terrible if you actually like living within those lines, but that's a big if. And what about when you or the lines change and they no longer align so well?

There's a big difference between politeness and total conformity to established (by the powerful) norms. Disagreeing (politely) with government policy on a public forum could easily prevent you from obtaining certain positions or status in the future if this is an accurate trend.

Not to mention that the freedom to go outside of convention without arbitrarily large punishment is worth preserving in and of itself.


Trend? Here's Orwell (1948) on the financial side of heterodoxy: https://news.ycombinator.com/item?id=23822425

"Freakin' internet. Whats up with that?" — M.I.A.


> Is it wrong to suggest that this (if accurate) is a positive trend? I would like to live in a society where people spend more time considering what they say publicly, keeping to themselves, and refraining from imposing their thoughts and opinions. Live and let live.

Is this what's happening? What I see is more and more people falling into a few different tribes, each attempting to out-ostracize the other. Game theory suggests this will end with two main tribes with peak hatred for each other.


> Is it wrong to suggest that this (if accurate) is a positive trend?

If it's a completely inaccurate trend, I suppose your suggestion then completely misses the hoop, so to speak. If anything, it seems like a lack of privacy has heated things up through the micro-marketing of a hundred types of off-kilter reasons to be angry to a hundred different slightly skewed personality types.


> I would like to live in a society where people spend more time considering what they say publicly, keeping to themselves, and refraining from imposing their thoughts and opinions.

I see you've never been to the internet.


>If you want to have a private conversation, social media doesn't seem to be a good vehicle for it. Much like airing your dirty laundry in the town square has been considered bad etiquette, airing personal grievances on the internet seems to be in poor taste.

An excellent point. Although not a new or particularly profound one.

When the large corporation I worked for back in the mid-1990s connected their email system to the larger internet, all employees were sent a memo discussing the advantages and issues with this.

It was recommended (paraphrasing) that employees shouldn't "put anything in an email that they wouldn't want to see on the cover of their local newspaper." That was back when local newspapers were a thing, but the principle still applies.

In fact, it applies even more strongly to the current social media environment. And it's still good advice.

That said, the rise of online communication and social media has reduced the personal and private interactions that people have.

Many on HN (and everywhere else too) won't answer phone calls at all, instead relying on SMS/Slack/WhatsApp, etc.

And formerly private conversations about one's personal life now take place on online platforms like Facebook, which ruthlessly exploits every bit of information they can get to "optimize the ad delivery experience."

One of the worst offenders is GMail, of course. They read all of your emails as a matter of course. Again in an effort to "better target advertising."

Which is why I'm surprised that anyone with even a passing interest in privacy would use either of those platforms. I certainly don't.

When I have a voice conversation (whether that be on a phone call or in person), as long as I'm cognizant of who is in hearing distance of my voice, I can be relatively (unless I'm being specifically targeted for close surveillance) sure that my conversation is private.

But any text-based communication that utilizes a centralized resource to route such communications is incredibly vulnerable to exposure and can't be trusted to provide a private communications channel.

Yes, this is oversimplified. No, I don't discuss encrypted voice/text mechanisms like Signal, PGP, SMIME, etc. here.

I didn't do so because most folks are unaware/unwilling/unable to use such secure communications mechanisms anyway, so their utility is severely limited.


The idea has only an accidental correlation with social media. You're pretty much wrong to focus your thinking on social media alone.


This is a good site, but it leaves out the fact that the traditional mass media itself has enforced certain opinions, which subsequently leads to a chilling effect.

Culturally, we need to get to a place where words aren't considered a form of violence, and where mere discussion of controversial ideas isn't shot down for "giving the enemy a platform." The concept of a calm debate really needs to make a comeback.

"It is the mark of an educated mind to entertain a thought without accepting it."

- Aristotle (paraphrased)


That's a lofty idea, but how do you deal with hate speech?


Complex topic, but I think a tagging and filtering approach is probably the best one. You can’t censor bad ideas without inviting total censorship, so instead just let people choose which things they want to hide.

In any case, I’m talking more about the cultural value of politeness and free expression of ideas, which almost by definition would exclude any extreme sort of hate speech.


You start by understanding hate speech isn't the issue. GPT-3 will happily barf out pages of hate speech for you, but that doesn't mean we're trying to ban/silence/cancel GPT-3, does it?


You could start with realizing that the problem is not the speech part.


Sensible and jovial debate in the commons died when mankind did. We are leaning fairly hard into colouring people's reputations however we want via the internet, for all our humanistic culture.

