That's the cooling effect in a nutshell.
You shouldn't have to think through everything you write perfectly, though. It's OK to say something dumb, or even offensive. It's not great, but no big deal either. I don't want to live in a world where we have to polish and double-check every thought that leaves our minds.
It used to be that a written statement was something important, with gravitas, with thought and meaning put into it. You rarely sat down and wrote a letter or a book. But the vast majority of utterances on the net are not like that, so we shouldn't treat them as if they were. We shouldn't apply yesterday's standards to them.
Actually, I believe this will all be moot in a few years. With the rise of AI, and the continuing increase in storage and bandwidth, we might reach a "one million monkeys with typewriters" scenario. There will be every possible utterance and every possible embarrassing photo of everyone on the net. It will be trivially possible to fake your voice and image. (Unless we enter the cryptpocalypse in which everything is signed...)
This is currently an odd period of time, in which we can create data, but can hardly fake it. There is authenticity, proof of authorship. We can hold people responsible for how they behave and what they think. Before this period it was, and after it, it will be, just hearsay. It sounds super scary, but to be honest I find the thought quite liberating.
Exactly! My next project is going to be something like that. Based on the fediverse/Mastodon, but first and foremost about self-expression, connecting with real friends, meeting people. You curate your own home page, share what you want to share, and you are invited to interact with strangers. No social media bs. Trying to capture the feeling of local / university social networks pre-Facebook, or the feeling of MySpace.
That's how it is for me.
That's nice. And we shouldn't have to maintain physical appearances or be judged for it. We should just accept all of our imperfections, celebrating all forms of expression. But lookism and online-lookism are here to stay. Conventions of "good" info content are strengthened by karma/gamification, by every vote you give, including here on HN. It's noble to long for a time when we can all just be ourselves, but with attention spans vanishing I don't think anyone will care.
The input box gets hidden behind the keyboard so I have to type blindly.
Probably going to get downvoted for this, and next time I'll have to censor myself to become a robot so I don't lose the points by which my future employer will judge my worthiness.
But seriously, please make HN a GitHub project so I can send a PR to fix this really annoying issue every time I type a fucking comment.
Some people seem to be doing just fine saying whatever is on their mind and getting away with it. A notable example is the current president...
I'm no friend of Trump, but it is disingenuous and maybe harmful if most of the criticism is directed against the stupid things he says, his unstatesmanlike behavior, his faux pas. Or, in the spirit of this thread, how unadapted and uncensored his behavior is. Because that can be good and bad, and it's what so many people elected him for.
That he is spewing so much hate, he should be (and is) criticized for. I wish he would receive more critique for his (equally bad, in my personal opinion) policies. I'm observing from Europe, so my view might be wrong, but it seems most of the critique is on the level of form.
The techno-utopia is always portrayed as some society where open mindedness and diversity are embraced. This is the paradox of modern political correctness. If the hivemind of society rejects you and your ideas, it is in fact not open at all.
This is why you should use pseudonyms and strive for anonymity. It's trivial to sign up to Hacker News under an assumed name, or handle, and start venting on contentious issues. Hacker News might shadow-ban your throwaway account, so you might have to lurk moar and share some interesting links before you can comment without being censored. I know from experience. Last time I checked, HN has no strict policy on multiple accounts and you can do this very easily.
In terms of OPSEC, you obviously shouldn't contaminate your real iden with your anon iden, or contaminate your anon idens with other anon idens. You should also deliberately alter the stylometry of your writing so nobody can link two pieces of text to each other. Anonymouth is my favorite tool for doing just that.
Also, just a note, but you have some interesting tells in your text. 'anon iden' is a phrase I don't recall seeing, at least not very often. You used "it's" correctly, another signal, along with the 'moar' spelling, and a few other shorthand phrases.
You might want to start thinking about methods for scrubbing your text if you're actually interested in drawing a line between your personae and you.
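To make that linking concrete, here's a minimal toy sketch (my own illustration, not Anonymouth, and not how any real attribution system is built): even a handful of crude features, like function-word rates and average sentence length, can be enough to tie two pieces of text to the same writing style.

    # Toy stylometry sketch: compare two texts by function-word rates and
    # average sentence length, using cosine similarity. Real attribution
    # systems use far richer feature sets; this only illustrates the idea.
    import re
    from collections import Counter
    from math import sqrt

    # A handful of common function words; real tools track hundreds of features.
    FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it", "is", "was"]

    def style_features(text):
        words = re.findall(r"[a-z']+", text.lower())
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        counts = Counter(words)
        total = max(len(words), 1)
        vec = [counts[w] / total for w in FUNCTION_WORDS]        # function-word rates
        vec.append(len(words) / max(len(sentences), 1) / 100.0)  # scaled avg sentence length
        return vec

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
        return dot / norm if norm else 0.0

    # Texts written with similar habits score close to 1.0; deliberately
    # altering your style pushes the score down.
    a = "The cat sat on the mat. It was the end of the day, and that was that."
    b = "The dog lay in the sun. It was the start of the week, and that was fine."
    print(round(cosine(style_features(a), style_features(b)), 3))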
Even though the user was banned (several times), even though all the posts are now deleted, along with my entire comment and submission history, my username there is now permanently tied to my real name.
While it was easy for a human user to doxx me using my comment history, it's even easier for computers, which save everything, no matter how briefly it is posted, to comb through your data and determine who you are. The only way to stay anonymous is to completely avoid ever talking about anything remotely identifying.
Maybe an antidote to this phenomenon is for us to collectively go punk on it. If everyone trolls nobody trolls.
except you would be lucky if one well-meaning employee would tell you the above sentence off the record, in violation of every contract he/she signed with ProfitCorp.
more likely, your life would be a mysterious series of surprise rejections with no reasons, or spurious ones, the latter being especially insidious, since it would lead you to believe there are aspects of your resume that need improvement, when in reality the missing piece is a "positive comment generator" to flood every relevant online community with comments like "awesome", "let me know when it is finished" and "we should definitely have lunch sometime".
EDIT: okay i'm being sarcastic again, so in the spirit of improving my web sentiment analysis score, the points i wanted to make are:
* these systems are invisible
* once you know about them, they can be gamed
It took me a while to realize that the uptick in friendly conversations with drugstore workers, and the onset of being stalked in my local supermarket, was likely because I now matched some shoplifting profile. It's been a useful reminder of privilege. Though it seemed unfortunate to be wasting people's time.
But here's summer, and sometimes not carrying a laptop at all. And it appears my supermarket, of more than a decade, has retained state. And given they certainly have my card information, I have to wonder how far that state has propagated.
So when choosing a laptop bag, or breaking a zipper, or paying cash, or spotting a possible misunderstanding, you have to wonder, can you really afford to appear different than the norm?
You might be significantly impacted, before (or never) realizing what happened. And thus you get to share in that joy of racial discrimination, pervasive uncertainty. Did the cab really not see me, or choose to not see me? Why did X happen to me, what's going on here?
And yet, the concept of "nudge" has public policy value. Doing noisy profiling, and helping people do the right things.
There's an old line, that the internet is creating a global village. But villages are extremely diverse. From warm and fuzzy, to amazingly toxic. There are tremendous social benefits to "everyone knows you". I just wish I saw more thoughtful discussion of the roles of anonymity, and on aiming us away from toxic.
Also, there are all kinds of unconscious social biases that can induce people to talk to us. Perhaps you now expect to be interacted with, and thus this orients you towards social interactions.
Like listing the full stack of every product you ever worked on in white 2pt font in your resume to pass naive keyword filters, or pasting in irrelevant blocks of tags on Craigslist.
It's too dangerous to be honest under your real name, and has been for years.
It's a lot like Roko's basilisk that way. Once you know the capability exists, you have to destroy it or help it. There isn't really any middle ground.
I sort of wish Roko's hadn't played out as such a joke, because the general sentiment is actually a really under-appreciated one.
There are all kinds of settings where the best outcomes are gained by either preventing a thing or enabling it - and succeeding. Revolutions seem like the obvious case, where the highest payoffs accrue to the vanguard revolutionaries (if they win) or the establishment (if they win). Various doomsday cults in fiction also count, where people produce a bad outcome on the logic that if someone else does it first, that would be even worse.
It's actually really nice to have the idea of something which is sensible to restrain, right up until it gets out of control and turns on the people who restrained it.
Curiosity killed the cat. Now I have to decide whether to destroy it or help it..
I see it pretty often here: the startup idea in stealth mode gets mocked (reasonably, I think) because the advantage of getting feedback on your idea far outweighs the potential disadvantage of getting your idea stolen
seems like something similar might be going on here, lots of people worry about something they say being used against them later, but there's a cost to that
when you put your ideas in a public place, and expose them to smart people, those critiques sharpen your ideas and give you useful feedback
if you're not actively trying to troll people, and are legitimately trying to make your points in good faith
it's probably highly unlikely that the potential downsides will outweigh the positives from improving your ideas by getting feedback on them
You can't easily "dissolve" your personal identity if things go south. You'll forever be "that stupid person who said X online" to search engines. Unless someone's working on a stealth startup to fix this…
If you neglect the truth, and instead traffic in half truths and innuendos... in the off chance you are pilloried anyway, where will you retreat to?
The world is full of second chances. You may lose your chance at becoming a senator, or a university professor, or some such. But there's almost always another opportunity somewhere. In the internet age, you can eke out a living off of a motley crew of diffuse patrons more easily than ever. You don't need to please everyone the way Walter Cronkite did. You just need to please your core following.
If you accept that you have no entitlement to any particular space or industry or position then it becomes much easier to accept that things might go sideways. It's not the end of the world. Just the end of your story in one slice of it.
... says someone who just wrote a post this morning on HN about the notion that whiteness and masculinity could be associated with brain damage. I may come to regret it. But I think it's worth it, to put up my sail and allow it to be pushed closer to truth.
You better watch out
You better not cry
Better not pout
I'm telling you why
Big Data is coming to town
It knows when you are sleeping 
It knows when you're awake
It knows if you've been bad or good
So be good for goodness sake.
This page kind of assumes the audience is already willing to admit social cooling as a legitimate phenomenon, and if not, will be convinced to do so after a few short bullets and very little in the way of actual analysis (ironically, this sort of approach leverages one of the modern patterns the piece could tackle--short bursts of information, instant delivery, decreased skepticism and amounts of reflective thought).
Also, I'd highly recommend avoiding the global warming comparison. It does a disservice to your cause. It basically comes off, at least to me, as saying "our problem isn't a substantial thing in its own right, so let's compare it to this other big problem people already care about and hope the very loose and forced analogy strings them along".
All this being stated, y'all should check out Horkheimer's essay "The Concept of Man." He wrote it in ~1952 (might've been '53 or '57, I'm forgetting the exact date)--and it's crazy how prophetic that essay turned out to be. It shows how all our innovation really just led to an amplification of social structures and patterns that were already emerging during the dawn of automation and mechanization. I think it's relevant to your project.
Being trained as a media theorist I understand your criticism (and am going to check out Horkheimer's essay, thanks for the tip!).
But this website purposefully tries to keep things accessible in order to reach a wider audience.
I often see how academics have a deep understanding of what's going on, but just aren't as good at spreading that insight to a wider audience, like the startup community.
Still, I think it's useful to point to some of the academic backing--like you already do with Foucault, just perhaps in greater depth. Maybe add some of that academic/conceptual source material to the further reading section--then again, might just distract from the main point. You know your target audience better than I do, I only have my particular reaction (which is probably a bit idiosyncratic and outside of the scope of your intended audience).
In any case this is a cool project and a noble effort. Hope you stick with it.
Do you mind sharing a link to it? A quick search didn't return anything close to that title written by Horkheimer.
I read it in the Verso edition of Critique of Instrumental Reason
Be careful, reading them is like choosing the red pill.
How much background knowledge would I need to take advantage of it? I'm utterly ignorant of such matters, but I'm trying to figure out how to go about learning this.
Alternatively, you can just dive in and look up what you don't understand as you read.
The Stanford encyclopedia of philosophy is a great resource for that sort of thing: https://plato.stanford.edu/
I'm still starting my learning of philosophy, going through Plato's works.
I'm keeping Kant and Nietzsche in mind :)
PS: That Stanford page is awesome.
That's interesting in itself, but the bigger underlying issue is that opportunities are becoming more concentrated. When only a few companies dominate hiring in many fields, their mistakes get seriously amplified. Back in the day you were fine if Google's hiring process misjudged you - you could work for Excite or AltaVista instead. Nowadays if some ML algo decides that people wearing blue sneakers are worse job performers, you can get screwed (without even knowing why). And even worse, the major companies (where the jobs are) often share algorithms.
I saw an angel.co drinking game once. I think you had to chug two drinks for "worked at X" (where X is any major) being put forward as the sole qualification of a founder or key early employee. This is starting to edge out "went to X" where X is a top-tier school.
China is supposedly deploying their own horrific state-sponsored "social credit score" system, but we're doing it too. We're just doing it in a less centralized way. In a way that's worse. In China everyone will know of this system and its existence and I'm sure people will figure out so many ways to game it it'll become irrelevant. In the West people will remain blissfully ignorant as ours has no name or formal identity.
Ultimately I am still more creeped out by what our private sector is doing than what our NSA and CIA are doing. Neither is good, but the latter has some oversight and regulation. The former has absolutely no regulation or oversight whatsoever, and in any case the private sector is very often better at such things than the public sector is. I wouldn't be at all surprised if Facebook's data analytics are far superior to the NSA's.
Thanks to the moral police and keyboard warriors out there normalizing contacting employers over an internet argument.
Campaigns to get people fired when their online posts are revealed are well-attested from all parts of the political spectrum, and generally come out of doxxing efforts that hiring managers don't undertake. There are even campaigns that boil down to "this person said X, harass/troll their employer so that even if the employer doesn't object to X it becomes too costly to keep them employed".
But at the same time, hiring managers have Google and all kinds of tools random harassers don't, like the ability to check criminal records and credit scores. (And there's a great example of an opaque and inaccurate tool governing people's lives - just read about the people sharing a name and birthdate with someone who has bad credit or legal issues!)
So yeah, hiring managers with Google. But I wouldn't discount the other issue either, since it can cause people problems even for comments that don't violate any general social standard.
There's one side of this which is straightforward. Companies and governments are compiling data for their own purposes, which range from modeling user behaviour to profiling you so that they can sell you stuff or arrest you for dissidence.
The lines we previously defended for privacy, freedoms of conscience, affiliation and speech have been disturbed, to say the least. This has generally been done under the surface, without involving users. It is increasingly felt on the surface, via the ads you see on FB or the recommendations YouTube feeds you.
The other side of this is what I think of as a "post-history" problem.
We're now transitioning into a period where reality is simply recorded. Your comment on Chelsea Manning's release is now a matter of public record. Your next Tinder date might see it and so might the HR manager reviewing your application for senior talent accumulator in 2032.
There are all sorts of implications to that, but mostly people just feel weird about it for now. Anxious and uncertain.
So... FB (HN, whatever) is a space for casual discussion. Casual generally meant private in the past. Now, some of the most casual discussions mean an extreme opposite of private. This inevitably comes with stress.
Calling it a chilling (or cooling) effect evokes a political dimension, one that speaks to the first part of the issue. The second issue is more of a social issue. It's political too, but I don't think that's where the centre of mass is.
I'm not saying we should stop (although that's what might happen), just that we pause and consider what this is doing to the world. It is the undercurrent for so many profound changes going on right now.
Are we really comfortable as individuals building systems which predict someone's mental (ill) health, personality traits or ethnicity just so we can sell them things, or worse, not sell them things?
Anecdotally, the few folks I know that work for data collection companies are all "tinfoil hat" types. They have flip-phones, they have no online presence, they smile like a Cheshire Cat when you ask them about it and you generally get the impression they've just decided to categorize it as "us" and "them". :-\
That is true, but going on my peers (especially the ones fresh from university), I think a dangerous proportion of people simply aren't aware on any level of the ethical implications of what they do. It's that which worries me.
How was the course?
I didn't have one, but many other "engineering ethics" courses I've seen inspire apathy just because they're terrible. It's like school anti-bullying campaigns - even if you're vehemently anti-bullying, most of the campaigns are too ridiculous to feel anything good about.
On the other hand, something like Canada's Iron Ring seems to get taken very seriously. It seems like a nontrivial part of the challenge is teaching ethics in a way that reaches even the people who want to behave ethically.
I'll also grant that I'm not very visionary or even great working/leading large groups of people. How would you teach a class exciting enough that virtually all students would attend it, enthusiastically, even if it were elective? (It was required for us.)
At the end of the day, it's only going to be as exciting as the students make it by involving themselves and thinking. They are the ones creating tomorrow's startups, not the professors. As it stands, it seemed like quite the accurate litmus test for how many people care to think about issues in this way in our field.
"It is difficult to get a man to understand something when his salary depends upon his not understanding it." - Upton Sinclair
Very few people in the upper echelons of society (like highly-paid Silicon Valley engineers) truly believe they're doing something wrong, or could even be convinced they are doing something wrong. Above a certain level on Maslow's hierarchy, people have selected where they work in part because of the mission. They have bought the company line because, in part, the company line is what they're there for.
People at Google don't work there because they really love advertising and really love putting banner ads in front of people. They work at Google because they believe they're 'making the world a better place'.
And good luck convincing anyone that their purpose in life is a lie and that they're part of the problem. I've convinced exactly zero people so far.
I mean, I'm sure Uber employees feel they're empowering people to work for themselves and helping people make a decent living who might otherwise be stuck unemployed. Maybe it took that many blatant scandals and fiascos for Travis Kalanick to come to terms with the fact that he wasn't making the world a better place?
One of my favourite quotes too, but I refuse to accept it as law.
If people really do work at places like Google, Uber and Palantir to make the world a better place, then it suggests they would care if they are inadvertently making the world a worse place, right? In which case it just takes education.
If, on the other hand, they don't want to look too deeply into the social consequences of what they work on, then that is more difficult to deal with.
All of these companies (especially Google) make some fantastic contributions. I just wish sometimes engineers looked up from the keyboard to see the bigger social picture.
Wake up to this thing? Who do you think is enabling it and laughing all the way to the bank and/or the VC money?
We all enable it, though I'm not sure the awareness of the full effect is there.
The other day I met a guy from a company that branded itself a social startup. Do you know what social means? Social, as in society, as in living and working together? It means: I employ people to think about the best way to monetise your relationships with others.
I admire technological progress and I believe it's the only chance we have. But the corporate, SV or VC doublespeak, and the weak and unresisting minds of maybe 90% of the population, make western societies an awkward place to live.
The things I say in a social group of former college buddies and the things I say in a group of the local clergy are two different things. That doesn't make me two-faced: it makes me human. In fact, the ability to converse and trade with drastically different social groups is probably the essence of humanity.
Yet our current overlords that program the internet are convinced that the entire world should run as if it were just a huge version of their favorite social group. Joe tells racist jokes? Maybe we let Joe continue, but we definitely ought to score that. After all, Joe could offend somebody -- and then they would be mad at our platform, not Joe.
We are instrumenting a terrible evil on our species, even more evil than the security and surveillance state, if such a thing could be possible. SkyNet has finally attacked, and because there are no T-1000s leading the way the vast majority of the population doesn't even know it's at war.
You could even say that this page, and people trying to raise awareness for this issue, are harmful!
Imagine a few important people stepping up and saying, no, we will not disadvantage applicants because of their "unprofessional" facebook profiles. In fact, we value authentic, unintimidated people. The act of saying so will make it a little bit so!
We need to shift the blame from people expressing themselves, to those people punishing them for it, or even to people giving well-meaning advice like this.
(Just a crazy thought I just had. Didn't want to be too harsh with the creator, who raises an important discussion.)
In 2015 the data broker market was already worth 150 billion dollars in the US alone. (source: FCC report on data brokers)
Harmful for who? If, let's say, they make people actually aware of this, that might lead to change towards privacy. But if people are unaware, and they remain unaware, the probability of change is smaller.
The following is a kind of evil comparison, and I'm sorry, but I can't come up with a better one right now - it's a bit like saying: "If 'chubby' people were aware that they should wear flattering clothes, then this might lead to other people liking them better." - Maybe well-intended advice, but WTF no! You shouldn't tell somebody to hide their body, and likewise you shouldn't tell somebody to hide their emotions, political ideas, drinking pictures, social moments, and so on.
An intermediate step would be the selling of derived data to anyone - not just companies interested in hiring you, but really anybody.
The website does link to a lot of scientific studies actually. Both throughout the page and at the bottom.
But I purposefully didn't want to link directly to the PDFs of those studies too much. By pointing to accessible news articles about those studies in the "further reading" section, I was hoping to keep things accessible to a wider audience.
This article that appeared in The Guardian today about Social Cooling has some more sources you may like. https://www.theguardian.com/commentisfree/2017/jun/18/google...
And you may also like "Postscript on the Societies of Control" by the philosopher Deleuze, which greatly inspired this view.
We need catchy concepts to reach a wider audience.
I can't satisfy anyone's need for a thorough scientific paper offhand, but I can certainly add anecdotal evidence: I regularly self-censor online precisely because I know I'm being tracked in some fashion (and probably in ways I haven't even thought of: retroactive big data analysis 20 years from now is likely to be more sophisticated than it is today). I doubt I'm the only one.
One option is to bring back anonymity so people can make public, anonymous comments. Anonymity has been sharply curtailed (because terrorism) and this is, IMO, bad for society.
Another is to mandate short-term limitations on use. For example, if an employer wants to look at your online presence, they can only look at the last week of your posts, and only for initial employment consideration. IMO employers should not look there at all, but maybe this is a palatable compromise.
The chap in HR is not itching to dig up dirt on employees -- he just has a distorted notion of due diligence forced on him. If he has a clear, legal definition of what he can and cannot look into, I suspect he will gladly comply. My 2c.
It's also too simple to equate advocating wealth redistribution with needing to give away your money. Giving away your fortune is probably not the best angle for wealth redistribution. Certain rules of fairness for the wealth redistribution may be needed. For example, it may be useful to fight for rules that get all relatively rich people to join in (aka taxes), which will create a much more powerful push for equality.
It's not just HN. When I talk to other companies I often get asked "are you in the Valley or the city?" Answer: we are in SoCal, so keep going South. Sometimes they are surprised, though they seem to find it reassuring that we're still in California. If we were in Texas or North Carolina I could see that being disorienting to some people.
The problem comes when they repress negative emotions and other status detractors because the cost of even being aware of them is too high. Then you have people who, in psychological terms, are prisoners of their shadow selves. They become anxious and depressed because they fear confronting it.
I also worry about Learned Helplessness, where we believe that there is nothing we can do about it.
In Silicon Valley the Technological Determinist view that technology has its own will, that it is some unstoppable force, is the dominant but dangerous viewpoint.
That idea is only creating a self-fulfilling prophecy.
The reality is that we as a society have always taken the rough edges off new technologies through the creation of laws and new norms. For example, we pretty much put a halt to nuclear energy.
We can and we must regulate the Big Data world much more. And the first step is that we must help people understand the problem.
The trick that we use is to hook the consumer faster than law can react. Uber did this successfully. Segway flubbed it. With Segway Kamen wanted a big rollout. It attracted attention and municipalities started passing legislation restricting Segways before they were off the assembly line.
What we're seeing now is technology connecting us more and more and amplifying the potential disqualifiers. There is more social freedom when people have more independence.
A related NYTimes opinion piece encourages parents to "help young social media users realize that their online and real-life experiences are more intertwined than they may think. Parents might, for example, cite current events, like the Harvard episode, to remind them that nothing online is ever completely private". Which is true, good advice, and social cooling.
And the NYTimes/Reuters version is currently "Page No Longer Available". How does that affect your confidence that "if it was going on, you would know about it"? :)
1. Giving anyone access to your reputation is inherently bad.
2. Giving some amount of people access to your reputation is OK, but the amount of people big data gives it access to is now magically worse.
(1) is definitely untrue, at least to most people. We all definitely use our knowledge of others' reputations to make judgements, and apply social pressure to make them conform. For instance, if someone you know is a rich snob, or a vehement racist, you won't hang out with them.
(2) seems ad hoc. Why would letting more people know about your reputation magically be worse? Whether someone knows about your reputation should either be bad or not -- it's not dependent on how many other people are aware of your reputation.
The second is much more insidious and difficult to resolve: since systems are never perfect, it is likely that many will be discriminated against due to over-generalization and mistakes in the system. The more complete the surveillance appears to be, the more confidence authorities have in the system, the more likely that people get into serious trouble due to no fault of their own.
Check out https://www.mathwashing.com
> people can recognize normal discrimination
I don't see how you can reliably recognize discrimination in any way that can't also be applied to the decisions of a computer program.
1. The scale means we're applying many more judgments in many more places that would've slid by the human judgment radar. This is arguably dangerous in itself because if we hold judgments to generally be unreliable, we're just adding many more points of unreliability.
2. The culture could conceivably develop in the direction of blindly trusting automated judgments, such that the type of scrutiny you encourage will dwindle as a practice. This would put us in a much more vulnerable state with respect to bad judgments.
That said, I still side with you. I think the ultra-dystopian scenario outlined as a possibility by OP is unlikely precisely because power/control are decentralized and very difficult to wield intentionally. And there's massive inherent conflict between various actors that have greater means to try to affect that control.
However, I also don't think it's inconceivable to end in the ultra-dystopian scenario. Technological progress is generally good and generally can't be stopped, but it also continually introduces undesirable possibilities as well.
The scale point makes some sense, I'll admit. We're raising volatility in applying potentially-discriminating judgements in the first place. Perhaps indeed it's a tradeoff between the expected boons of algorithmic decision making and their increased risk of discrimination. That said, the scale argument would then not hold for replacing situations that currently have opaque human judges with robots.
That's the big thing here: the internet strips context and nuance from everything you post unless you take a lot of time and thought over exactly what you say. It's probably fair to say most people on the internet don't do that.
Just raising awareness won't change anything - the system is working as intended for the people who were sold on it and the people who implemented it (bar a few unfortunate engineers who had to do it for the money). History is rife with examples of people trying to enforce a more rigid social order with varying degrees of success. Letting people different from you have freedom is not something that many people want. Think hard about the last time you thought "the world would be a better place if everyone thought like me". Then realise how many people don't follow that with "but enforcing a mind-police on society is awful".
By reducing moral relativism to the self and ignoring its role in relationships at large, individuality overcomes any collective moral system (be it religious, political or philosophical), and so self-righteousness assumes a form that values spontaneity and originality - the tools of personal promotion - above ethical soundness. This seems to be, in my opinion, the humus of the most visible social outcry. Social media outrage took the place of discussion, just like opinion articles are taking the place of news reports.
Uncritical adherence to this logic harms us all. And the chilling effect strengthens it.
In the past, people fought against a static, conservative religious or political morality, in order to make room for individuality, liberty and democracy. Now we have an agglomerate of individual perspectives fighting for visibility in social media, where popularity (by any shallow measure) took the place of reasoning. The chilling effect makes public virtue even more black and white, and conformity (or social cooling) is just settling in on either side. Living on the fringe that is refusal of conformity (social heating?) has become more difficult and exhausting than ever...
I don't know. Maybe I'm wrong and things were like this for ages. Maybe there is an answer in all the valuable teachings of the past that we simply choose to ignore for the sake of the here and now.
We also might conclude that "meh, teens getting drunk occasionally" or "meh, people actually having a sex life" is pretty goddamn normal and get over a bunch of nonsense.
No matter what goes on around us, we still have a choice in how we interpret things and what kind of world we choose to build. There is zero inevitability here.
When Demi Moore posed naked on the cover of a magazine while pregnant, this was some sort of shocking dramatic thing. Now, it seems like every pregnant celebrity does the exact same pose and posts it somewhere. It has become prosaic.
Seriously, we can choose to be more humane to people. Things going to hell is not some inevitability.
Edit: Maybe a better example is that when 24-hour news channels became a thing, it changed the news. Before that, people were very straitlaced and serious for the 30 minutes that they reported the news. This was not sustainable when reporters had to talk live all day, every day. They became less stiff and formal, more able to crack a joke and be human. They still had to treat some subjects with appropriate respect, but 24/7 news channels caused news to lighten up some.
Thoughts I've had:
Total quantity of data available?
Ability to define boundaries?
Ability to enforce those boundaries?
Knowledge of what boundaries to even define?
Who knows what about a person?
How many agents know what?
How aware is the subject of actual knowledge?
How rapidly can that knowledge be further transferred?
Does the surveillor know more of the subject than the subject?
Can the subject access that knowledge?
What level of benefit (or harm) can be transacted on the basis of surveillance? Does this accrue to the subject or others?
Anyway, I think you're forgetting one important dimension: whether the person in question would like that particular piece of information to be known.
That dimension is the setting of boundaries. E.g., "I don't want you to know, or share, or seek, or ask of some X." Or if it's acquired, not to share it except as specifically specified -- only with notice, on request, within a given group, for (or not for) a specific time, etc., etc.
Or possibly just better questions.
Or perhaps, what possibilities occur to you?
An example - there was a discussion a couple of days ago about FB and I questioned why a commenter felt the need to create a fake account simply to comment on FB. It turned out they weren't even a current employee but an ex-employee.
i cycled through 3 or 4 accounts with multiple-thousand-points of karma, but in the end i just stopped giving a shit. this sort of amateur-level banning may work against your typical troll, but on HN, the end result is you're wiping out diversity of opinion, because although the people on here are smart and resourceful enough to get around any ban that happens over the internet, at some point it just becomes not worth it just to express your opinion -- that's how censorship actually works in the real world.
at the end of the day, YC is a VC and has interests to protect.
What is the company line exactly? Is it just maintaining agreement with YC-backed companies?
I hadn't heard about the shadow bans, are they only on submissions or commenting too?
And with "show up" I mean as blatantly as in China. I'm actually fully expecting these things to happen below the water already.
This already exists across the world in the form of credit rating agencies and social media. It's merely an issue of data integration.
1. Centrally regulated and collected
3. Explicitly includes matters of ideology and opinion
So, it's hardly comparable to what is extant under the surface in other countries.
2. No it's not, it was unsuccessfully tested in a single county
3. So do the judgments of employers on whether or not to hire or fire someone based on publicly expressed political ideology
It may make us feel better to point the finger over there to distract from the parallels of what is going on locally but doing so is hardly practical.
And by implications I mean something more than not seeing job ads, or not getting a loan.
Fortunately, there are no such cases that I am aware of. Unfortunately, it might be just a matter of time.
In the EU the new GDPR law is already making the 'informed consent' requirement more strict.
More on that in https://domainisticationofngrams.com
That is a real concern to me.
I don't know how it would be implemented, on what schedule, and to what extent.
If it's people I interact with all the time, there are other ways of contact that are less data-mined, like a good old text message or phone call.
Oh yeah, and my last Facebook post is well over a year ago. There's no way in hell I will post random pictures that will show me in a bad light. It's basically a slightly less official business profile.
I know that it's nowhere
But there is no denying that
It's hip to be square
It could use some serious fine tuning for grammar though, likely as a result of English being a secondary language.
If the guy who owns the website is on here, I'd be happy to help out with the syntax and grammar. PM me, I'd love to help out with this - it's really well laid out.