Hacker News
The internet is increasingly a low-trust society (wired.com)
261 points by Balgair on June 24, 2019 | 165 comments



If I'm not misremembering, there's an interesting section in Dawkins' The Selfish Gene about cooperative grooming behavior (I think it was in some sort of waterfowl) and how the birds deal with cheating (individuals that accept grooming without reciprocating). My takeaway was roughly:

Any exchange of value based on trust is exploitable. The simplest cure is excluding the exploiter, but this doesn't scale well. The exploiter can skate on anonymity if the community is large enough to continually prey on someone new. Spreading news of an exploiter's behavior to others can greatly improve how well this scales, but this behavior also requires trust.
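The exchange-and-exclusion dynamic above is commonly modeled as an iterated prisoner's dilemma, where a simple reciprocating strategy pays the cost of being cheated exactly once and then withholds cooperation. A toy sketch (the payoff numbers are the standard illustrative ones, not from the book):

```python
# Iterated prisoner's dilemma sketch: a reciprocator ("groom those who
# groomed you") quickly stops paying the cost for a cheat, while two
# reciprocators keep cooperating. Payoff values are illustrative.

COOP, DEFECT = "C", "D"
PAYOFF = {  # (my move, their move) -> my payoff
    (COOP, COOP): 3, (COOP, DEFECT): 0,
    (DEFECT, COOP): 5, (DEFECT, DEFECT): 1,
}

def tit_for_tat(history):
    """Cooperate first, then copy the partner's previous move."""
    return history[-1] if history else COOP

def always_defect(history):
    """The cheat: accept grooming, never reciprocate."""
    return DEFECT

def play(a, b, rounds=10):
    ha, hb = [], []  # each side's record of the *other's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        ma, mb = a(ha), b(hb)
        score_a += PAYOFF[(ma, mb)]
        score_b += PAYOFF[(mb, ma)]
        ha.append(mb)
        hb.append(ma)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited only once: (9, 14)
```

Note how the cheat only gets the big payoff in round one; the scaling problem the comment raises is exactly that an anonymous cheat can restart at round one with a fresh victim indefinitely.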

I think the more direct problem is with scale, and that the internet is at the nexus of many trust issues only because it has ramped up the scale and scope of many interactions.

I'm not super optimistic on solving this intrinsic problem of trust in social exchanges, but I do see this framing as a silver lining. It seems at least plausible to iterate offline at significantly smaller scales on mechanisms for building and maintaining trust--and rectifying its breaches--in ways that do actually scale.


It's true that "The Selfish Gene" has been a science-based classic of evolutionary biology for over 40 years. But it's also true that in those 40 years a lot of studies have taken aim at the central arguments of the book and, in my opinion, cast serious doubt on the accuracy of the book's conclusions (or perhaps limit the scope of those views to a projection of Western culture, but not of all cultures on Earth, particularly not Eastern philosophy, and not the animal kingdom at large).

If you have never read any of these counterpoints, but find that the conclusions of Selfish Gene have shaped your world view, please consider reading some of these counterpoints and seeing if they persuade you to consider new perspectives.

A particularly thorough treatment of these counterpoints is Matthieu Ricard's book:

Altruism: The Power of Compassion to Change Yourself and the World


The book you're recommending seems to be about psychological and spiritual relationships between human beings, from what I can gather from a quick skim on Amazon. But not one of the reviews actually describes an argument the book makes (strange), so it's hard to judge.

Dawkins' "The Selfish Gene" is a scientific work about evolutionary selection at the gene level -- I don't recall him touching on psychology at all. (He provides evolutionary explanations for certain altruistic behaviors, but I don't recall him even starting on how they might be expressed via a psychological mechanism or at any conscious level.)

I can't quite imagine what the two have to do with each other; they seem to be such different topics. Or what Dawkins' work has to do with "western culture", or culture at all. At heart it's a quite mathematical/statistical argument.

I'm curious, what exactly do you see being refuted?


The author is definitely a "spiritual person": he's a Buddhist monk and the French translator for the Dalai Lama. But he is also very scientifically minded. He has a PhD in cellular genetics from the Institut Pasteur [1].

One of the central themes of the book, and one of the author's motivations for writing it, is to address the contrast between other-oriented (altruistic) and self-oriented (selfish) societies. The author asserts that in the Western fields of psychology, evolutionary theory, and economics, it's often taken for granted that an individual's deeds, words, and thoughts are motivated by selfishness, to the extent that this assumption has nearly become dogma. The 868-page book, including a massive 160+ pages of notes and bibliography, systematically lays out the scientific arguments against "the hypothesis of human selfishness".

You can read the sections of the book that specifically address "the selfish gene" with the following Google search [2].

[1]: https://www.pasteur.fr/en/education/programs-and-courses/doc...

[2]: https://books.google.com/books?id=1k_2AwAAQBAJ&printsec=fron...


From the links, it seems like his main criticism of Dawkins is actually merely the word "selfish" in the title of his book, and that the CEO of Enron liked it?

Yes, Dawkins is saying that "universal love" does not have an evolutionary component, which seems like a fairly uncontroversial claim.

It seems like your criticism of Dawkins is more a criticism of how other people have misunderstood him, rather than any criticism of the arguments in The Selfish Gene itself?

If you haven't, I highly suggest you read Jonathan Haidt's "The Righteous Mind". While it's at a popular level, it does a fairly good job of presenting a plausible framework for how moral behavior (like altruism) can emerge from evolutionary principles. [1] Haidt is probably one of the most influential moral psychologists today.

[1] https://www.amazon.com/Righteous-Mind-Divided-Politics-Relig...


Thank you, I'm familiar with the book you are recommending, The Righteous Mind. It is discussed in Altruism and cited in its bibliography. [1]

I get the impression you've decided to take a stance on the book Altruism without reading it. The summary you provide for The Righteous Mind could work just as well as a summary for Altruism too. At this point, we aren't even disagreeing, we're just citing different sources, and I'm content to just drop it.

[1]: https://books.google.com/books?id=1k_2AwAAQBAJ&printsec=fron...


>Altruism: The Power of Compassion to Change Yourself and the World

Aware that I might be jumping to conclusions but that doesn't sound like the title of an objective, scientific text.


Perhaps it is a failing of science that we have that perception


Any chance you can post some of these counterpoints?

At first glance I’m not sure how a book on altruism/compassion refutes the gene-centred view of evolution, which holds that evolution is best viewed as acting on genes and that selection at the level of organisms or populations almost never overrides selection based on genes.


I think that as far back as 1902, some of these topics were treated scientifically by Kropotkin in his "Mutual Aid: A Factor of Evolution" [0]. His book was based on his observations of animal behaviour in Siberia.

[0]: http://www.gutenberg.org/ebooks/4341


I think it is just a formalization of the thoughts of egoists of the 19th century.

Trust is a resource and is (only) used up if exploited. There are countless examples of how this has happened on the net. Remember the latest disclaimer you agreed to, and consequently had your data stolen or abused? Don't even get me started on penis enlargement spam.

Then additionally, people pretending to have identified the abusers very frequently become the largest abusers themselves.

Lastly, trust needs room to grow, so trying to enforce it through surveillance and advertising will only strengthen the individual's ego. So...

> can greatly improve how well this scales, but this behavior also requires trust.

can backfire immensely. People have already tried and failed.

There is no intrinsic problem with trust. On the net you are potentially connected to everyone on the planet. Nobody can claim the trust of everyone. And I firmly believe achieving this is the wrong goal.

People simply need to not let their trust be abused, and that is very possible. For example, not letting ad companies exploit your biases or believing the next random spam mail you get.

Anonymity is a tool to shield you from abuse. Not the only one, but for non-public personas, nobody has found something better yet.

If you keep looking for new predators, everybody will look like one at some point. Because you have let your trust be abused and have none left.


> There is no intrinsic problem with trust. On the net you are potentially connected to everyone on the planet. Nobody can claim the trust of everyone. And I firmly believe achieving this is the wrong goal.

> People simply need to not let their trust be abused, and that is very possible. For example, not letting ad companies exploit your biases or believing the next random spam mail you get.

> Anonymity is a tool to shield you from abuse. Not the only one, but for non-public personas, nobody has found something better yet.

I'm not sure it's possible to distinguish the world you describe from the one we already live in.


My favorite concept from the book as well!

The name of the behavior you're looking for is Evolutionarily Stable Strategy [1] often abbreviated to just ESS.

[1] https://en.m.wikipedia.org/wiki/Evolutionarily_stable_strate...
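For a concrete instance of an ESS, the classic hawk-dove game has a mixed equilibrium at a hawk frequency of V/C; at that frequency neither strategy can invade, because both earn the same expected payoff. A minimal numeric check (the V and C values are illustrative, not from the thread):

```python
# Hawk-dove game: with resource value V and fight cost C > V, the ESS is
# a mixed population with hawk frequency p = V/C. At that frequency,
# hawks and doves earn equal expected payoffs, so neither can invade.
# V and C below are illustrative values.
V, C = 2.0, 4.0

def hawk_payoff(p):
    """Expected payoff to a hawk when a fraction p of the population plays hawk."""
    return p * (V - C) / 2 + (1 - p) * V

def dove_payoff(p):
    """Expected payoff to a dove against the same population mix."""
    return p * 0 + (1 - p) * V / 2

p_ess = V / C  # = 0.5 here
print(hawk_payoff(p_ess), dove_payoff(p_ess))  # equal at the ESS: 0.5 0.5
```

Away from p_ess the rarer strategy does better (e.g. with 90% hawks, doves out-earn hawks), which is what pulls the population back to the stable mix.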


Interestingly, one could view China's social credit score system as a way of dealing with this trust issue.

Note well: I am advocating neither a social credit score system nor China's use of one. Instead, I am warning that any national-level attempt to solve the trust issue will likely be open to the same abuses as China's system.


That only works until enough people exploit the system in some way. (Both good and bad score...)

Specifically, once it becomes a means of political control it ceases to be useful as a system for trustworthiness.


> Any exchange of value based on trust is exploitable

I was thinking "money screws things up" and its corollary "free screws things up", then read your comment. It gets to the root of things.

With trust, trade is unlimited, so I wonder how to architect things to stand up to the problems we have.


The obvious answer is a robust welfare state. I mean, it seems pretty clear by now: inequality goes up, social stability decreases, major overhaul, a generation of babysitting it... So just stop pandering to the meme that we can't afford universal healthcare and college education. Because we also cannot afford leashing the next generation to soon-to-be-dead men's gambling debts.

But that effectively ends the point of aristocracy, the meme that we must cater to all these rugged individuals living off grandpa's old money. So that'll never fly.

Violent revolt it is! Front row to the apocalypse! /s


Maybe I'm reading this wrong, but isn't this just describing an echo chamber? Sure, that's nice with concepts like justice, but if the internet shows anything, it's that we thought we agreed on things far more than we actually do.


So, should we just try to ignore the larger-scale exploitation of trust?


No, we shouldn't ignore it.

But I think there's a risk of sending a lot of bright, productive people off to tilt at technological/interface/regulatory windmills in a way that leaves fundamental trust issues out of scope.

Even if Amazon, Twitter, Facebook, etc. could wave a magic wand and remove every fake review, counterfeit product, scam, personal threat, or piece of false/fraudulent media, there would still be a trust issue. We still have to trust that they did what they said, that whatever disappeared was correctly identified, and that they didn't wrongly remove many legitimate items in the process.

Even if the magic wand that makes these decisions has the utmost ethical and logical integrity, there will still be some mix of skeptics, cynics, malign actors, competitors, bots, etc. who chum the waters with accusations to the contrary. We'd still have to choose whether or not to trust this process and the actors behind it.

So, I think it might be productive to focus on some smaller questions first. To be semi-arbitrary: can we find a protocol for reliably building a 50-person trust network with a limited scope/focus (maybe identifying reliable providers of a single service), where each participant knows a small fraction of the network, and which is capable of both meeting its purpose and detecting and reforming or ejecting exploiters? If the first is tractable, can you expand the scope/focus of the networks and retain these properties? Can you compose a higher-order network that retains these properties?


One of the things the internet has been really great at is allowing those small (n~50) networks with high trust for e.g. identifying trusted vendors, but for really niche things where it’s hard to get an organic local network. Think new programming languages, or to crib from a recent HN topic, building tube headphone amps.

These kind of emergent niche communities can only exist when the broader network gets sufficient scale to have a critical mass in the niche.

So in limiting scale overall let’s take care not to throw out the baby etc.


I'm not sure I see hard top-down scale limits as a goal. I don't think this would be in the best interest of most people with a hand on the steering wheel of large social networks.

I'm imagining something more along the lines of: can we learn how to turn what small/niche networks do well into building blocks, and does that knowledge teach us anything we could use to (I guess adversarially) re-structure/reform some environments that are currently trust bonfires.


These networks would be scalable by using derived trust values. In simple terms you trust others to provide you with the trust value of people you have never met. I'm sure one could mathematically prove certain implementations to be unexploitable.
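One simple way to derive such trust values, in the spirit of web-of-trust schemes, is to take the best product of direct trust scores along some path to the stranger. A toy sketch (all names and scores are invented):

```python
# Derived trust sketch: direct trust is a score in [0, 1]; trust in a
# stranger is the best product of direct scores along a path to them.
# This is a toy model (names and scores are invented), and schemes like
# this need extra safeguards in practice, e.g. against Sybil identities.

def derived_trust(graph, src, dst, seen=None):
    """Best multiplicative trust from src to dst over simple paths."""
    if src == dst:
        return 1.0
    seen = (seen or set()) | {src}
    best = 0.0
    for friend, score in graph.get(src, {}).items():
        if friend in seen:
            continue  # avoid cycles
        best = max(best, score * derived_trust(graph, friend, dst, seen))
    return best

trust = {
    "alice": {"bob": 0.9, "carol": 0.6},
    "bob":   {"dave": 0.8},
    "carol": {"dave": 0.9},
}
# alice has never met dave; via bob: 0.9*0.8 = 0.72, via carol: 0.6*0.9 = 0.54
print(round(derived_trust(trust, "alice", "dave"), 2))  # 0.72
```

Whether any concrete implementation is provably unexploitable is a much stronger claim; a multiplicative model like this one, at least, decays trust with distance, so a distant bad actor can't borrow much credibility.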


>>In simple terms you trust others to provide you with the trust value of people you have never met.

References? Credit Score?


> References? Credit Score?

Both of those are easily gamed in the absence of trust. For references the mechanism is obvious. They list co-conspirators as references who then lie about their titles.

For credit reporting it seems like you could do the same thing over time. Create fictitious entities to make positive filings with credit reporting agencies. I assume the reason that isn't more popular is that it's even faster to do it the other way and steal the identity of somebody with good credit.


This looks more like people new to the internet discovering what everyone else already knew: You can't trust strangers on the internet. You never could.

If anything the internet nowadays is a more regulated, safer place than it used to be. The "wild west" internet of old now only endures in certain corners. It's no longer synonymous with the thing itself.

Posting with your full name instead of anonymously or pseudonymously is now the norm for many people. This changed how people interact online, but it also changed their expectations.

A full name however doesn't make a person or their opinions any more real than a pseudonym will. People will lie to you with a fake name just as shamelessly as they would hiding behind a pseudonym.


I think it's more that the perception of it is being twisted. The media twists the way that the internet has worked since time immemorial as something that's suddenly gotten a lot worse, that all trolls are on par with criminals, and that we need to take drastic measures.


I wholly agree. But.

The people who write those kinds of articles are mostly just your average Joe who uses the internet exclusively to browse Facebook and the like. Of course this gives them a twisted understanding of the medium.

And then there's those who are just lying, I guess (duh!). Plenty of reason for that with the whole fake news hysteria. If you don't want to risk ending up on the wrong end of that debate, your article is already written for you. It's risky to defend the free(-er) internet nowadays, or even just to refrain from attacking it. Further, there's the fact that "a war on fake news" might give the established media an edge, but I'm unconvinced any reputable journalist would consciously consider that when deciding on the tone of their article.

In any case you can also find plenty of articles displaying a better understanding of the matter. It's not like there's some secret global conspiracy against the free internet.


Yeah, people forget that pre-Facebook, posting your real name or address online was a real taboo.

And honestly, it was better that way. The internet was built to be public space.


This goes beyond trust, I think. The social structures of Internet communities are very different from those in real life, and I'm highly disturbed when people try to transplant IRL social norms to the Internet thoughtlessly. This is where we get bone-headed ideas like the real-names policy for "the Internet".

Now, I am not discounting that there is actually toxic, horrific harassment that goes on the Internet. Doxxing and things like it are terrible and should never happen.

Rather, I'm arguing that people need to acquire a sort of "street smarts" when navigating the Internet. Much like when you travel to a foreign country, you may expect a different culture with different norms, so it is the same with the Internet. What's rude in one culture is simply social convention in another. What is casual conversation in one is taboo in another. I spent a lot of time on the Internet growing up and I feel like I have been inoculated against the worst parts of the Internet, but at times it feels like many people haven't.

A good example of this is how a lot of people and media take 4chan posts at face value, without realizing that a lot of it is completely self-aware and that the worst posts are almost always a game of who can post the most needlessly offensive thing possible. A lot of 4chan is an exercise in communication with no names and no filter. And yet some of my most informative conversations have been on 4chan precisely because there is no politeness filter, and posters can be as devastatingly critical as they want. But a reader also needs to learn how to tune in and out, discount accordingly, and read between the lines to get the most out of 4chan conversation; otherwise you simply come away with the idea that the community hates everything and everyone.

For a more tame example, I follow a couple of Twitch streamers. For streamer A, most communication with their chat is saccharine and supportive. But for streamer B, the chat makes fun of and insults him the whole way, especially when he makes a mistake, and the streamer gives as good as he gets and makes fun of the chat the whole time too. And this is normal and fun engagement for all parties. From afar, streamer B's community looks extremely toxic, but that's the furthest thing from the truth. (What is abhorrent is when you take streamer B's chat behavior into streamer A's chat, and that is frowned upon by all parties.)

I believe that Internet activity should be diverse. There should be places where communication is an extension of real life (Facebook, emails), places of semi-anonymous and professional communication (Hacker News, certain subreddits/slack communities/GitHub), and then places where you should be able to go hogwild with whatever you want to say (fun subreddits, discord).

Twitter is unfortunately one of those places that has had all of these mixed together, which is why I decided to have multiple Twitter accounts targeting professional/personal hobby communication. From what I hear, kids are already catching on to this idea, with things like public/private instagram accounts.

The Internet is not all the same, and that's great. The Internet is not just real life, and that's great.


A serious problem with the "let the people live in their own worlds" attitude is that many communities do not stay in their realms. You allude to this a little, but in my opinion it's not a minor problem to be glossed over; it's the core of the trust issue: any online community that doesn't keep itself small and insular (and enforce this with sufficient opsec) is at risk of being invaded and exploited (for lulz, for cash, or for political manipulation).


I absolutely agree. Invasions are bad; so is crossing "community" boundaries. I think that's also part of the "Internet Etiquette" to be learnt: what happens in one domain of the Internet should stay in that domain of the Internet. You don't bring your mischievous trolling behavior from Discord to Facebook, but you also don't bring your "comfortable-space" standards from a support group forum to a video game forum.

I acknowledge your argument that "diverse worlds" system may not be sustainable over time because it is unstable, particularly to new entrants who do have to learn the local lingo. I personally disagree, and would argue instead that that's something Internet users should learn to deal with. We've long had the solution for this: good moderation. You can have hyper-extensive moderation like r/askhistorians or Facebook support groups, you can have moderate moderation for spam and the like for things like generic hobby groups, and then you can have light moderation for spaces deliberately made for that. From heavy to light moderation, the burden shifts from the moderator to the user. Different Internet spaces should have different arrangements like this. While this solution may not be perfect, I argue that this is what works. I put this in contrast to another prevailing idea that all spaces should be like IRL (because that has maximal accountability and requires no context switching), and I argue that that's what will diminish the diversity and dynamism of the Internet.

I would further add that a key ingredient to making this work is the tools provided by the platform to perform this moderation. That's one reason why Twitter, for all the good content it has, is also a mess: there's no limit on cross-pollination of communities. Contrast this to, say, Reddit, which gives subreddit moderators significant powers to curate and protect their communities.


> A good example of this is how a lot of people and media take 4chan posts at face value, without realizing that a lot of it is completely self-aware

But a lot of it is not, and a fair number of people do become indoctrinated to alt-right ways of thinking through that sort of medium, some of them going as far as murder.

I'm not saying "Ban this sick filth!" but I am increasingly of the opinion that such places shouldn't just be left to fester, because the genuinely twisted do go there, and they recruit, and that has real world consequences.


I agree that the issue is not quite as simple as "just let everyone do their own thing". I also don't know how I would ease my kids onto the Internet; I feel like it was a less dangerous place when I was growing up. My very subjective and still liable to change opinion is that it will come down to education, developing maturity in Internet consumption over time, and the broad swathes of the Internet calling trash out when that's what it is. And of course banning the truly extreme sick filth.

(I have a whole separate argument on how I think the problem may be that the progressive movement has simply assumed that it has won the cultural war, and has stopped making effective arguments for itself, which inadvertently leaves room for alt-right type movements that are still actively recruiting and promoting themselves - e.g. the so-called "red pilling".)


Well, the alt-right's a broad, unconcentrated counterculture. It's easy to see hypocrisy and dishonesty amongst the mainstream political culture, so there's an allure to aligning against it, secretly or not. I imagine if the mainstream political culture in e.g. america had been headed rightward for the last however long, we'd see a bigger alt-left... but who knows.


> This looks more like people new to the internet discovering what everyone else already knew: You can't trust strangers on the internet. You never could.

Like everything, this isn't absolutely true. People meet others they met through online means all the time, be it for clubs, dating, or commerce (e.g. craigslist). There is a certain societal expectation for how strangers are treated and that requires a bit of trust in someone you've never met.


In my mind, focusing on bots is kind of barking up the wrong tree. A lot of fake reviews on Amazon are written by real humans -- my main concern is not figuring out whether a human or a bot wrote a comment, it's figuring out whether or not the comment/review is trustworthy.

If we got rid of all of the bots, that wouldn't make Amazon easier for me to use. I feel like scammers have already figured out that humans themselves are relatively cheap to buy.


Right -- it's ultimately a trust problem, and we're in an unfortunate self-reinforcing feedback loop here.

Bots aside, what we post on the internet (and believe in general) is a function of the information we consume. That information is increasingly consumed through the internet, and the internet (as typically used) is feeding us increasingly poor quality information. (To be consumed, then shared. Recurse...)

We can (and I think do) cope by having different levels of trust for different sorts of internet information. I generally have higher trust in an HN post than an Amazon review, for example. This is useful, but has a dark side: some of the things I (and likely, you) take as a sign to "increase trust" happen to correlate with things that have nothing to do with trust. I trust Amazon reviews more if they have good grammar and spelling, largely because I trust general internet information more when it has good grammar and spelling. But perhaps I shouldn't when e.g. buying a screwdriver (what does writing ability -- and the things it correlates with -- have to do with evaluating a screwdriver?). Coming up with social/political-flavored-information examples is an interesting (and worrying) exercise.


I find it interesting that when something uses proper grammar and spelling, it may appear more trustworthy.

Consider this: a consumer that employs above average grammar and spelling skills to write product reviews may also be more skillful in forming and expressing valuable opinions and assessments of any product.

I think that people are naturally inclined to pick up on those signals, whether the correlation is real or not.


Even if the correlation is real, what I'm getting at is that it isn't causal. Good grammar probably makes for a good bet on more trustworthiness, but I'm wondering when/how that bet is going to be wrong (and it is, if it's not causal), and how that affects us (on the massive scale that is our info consumption across the entire internet).

Back to the Amazon example: an expert car mechanic may have worse spelling/grammar than a hobbyist, but I should probably trust their Amazon review of a car part more.


Not only that: the exact same written text will appear more trustworthy depending on the typeface [1] (e.g. Baskerville > Helvetica > Comic Sans).

[1] https://opinionator.blogs.nytimes.com/2012/08/08/hear-all-ye...


There is a component to terribly written phishing emails where the poor spelling and grammar becomes a filter for people who won't fall for it. People who will fall for it either won't care or aren't paying attention anyway.


Amazon is weird. I bought a product the other day, and in the package they offered me a $20 coupon (the product only cost $25) if I wrote a five-star review. So I wrote a one-star review explaining that I don't want to be bribed, that they can't make me be dishonest, and warning people. But Amazon didn't allow my review. So that sucks, and it makes me not trust any reviews on Amazon.


I mean, how could they trust you weren't a fraudster from a competing product?


Because they actually purchased the product?


I don't think that's a great deterrent. Especially for a low cost item, I'm pretty sure it happens a lot that competitors would buy a few products of yours and then leave negative reviews as "Verified Purchase".


Amazon is not a neutral referee though. They have an incentive to block (both fraudulent and legitimate) one star reviews but keep (even fraudulent) five star reviews: better reviews lead to more sales.


Guessing that Amazon has some way for the seller to dispute the review, or they use some heuristics to determine that a comment is “on topic”.


The article doesn’t focus on bots ( in fact it specifically mentions times when humans pretend to be bots). It’s really about internet fakery in general eroding trust.


Yep, just expanding on the premise of the article and reinforcing that this is a general trust problem, since a large portion of the current conversation around internet trust does revolve around bots.


This is the most intelligent take on internet trust issues that I've seen. The conclusion is overly glib though: it took centuries for high-trust societies to develop the mechanisms that allow them to function as such, and even small changes can upset that balance (as we've been seeing in the real world over the last couple of years). We shouldn't assume it will be trivial to bootstrap those kinds of institutions on the internet.


People forget that institutions take time to develop. They also forget that high-trust institutions frequently begin with a dictator. The end-point is high-trust/low-coercion, but you can't get there from low-trust/low-coercion. First you have to go through a high-trust/high-coercion state, after which you can gradually taper off the coercive elements as institutions mature and the individual actors get used to the new normal.

This is exactly how the liberal societies of the West evolved. But we don't remember that any more.


This isn't how the United States evolved. Arguably, the success of the experiment still hasn't been fully established, but the major institutions that govern the US system were more or less present right from the beginning.


There was never any coercion in the US. All those black people simply begged to be chained and beaten for centuries. The tribes? They just really wanted to travel to Oklahoma. Women were naturally granted the right to vote in 1776 without any strife or suffering.

Whitewashing history to make it more palatable is dangerous.


We're talking about whether or not coercion is necessary to bootstrap a high-trust society. Are you arguing that those types of coercion were fundamentally necessary to establish the United States? Or are you forgetting this context in order to have something to be upset about?


Yes, I am arguing that. There is no way the US is the world power it was through the late 1800s and beyond, without centuries of coercion [0].

[0] https://www.history.com/news/slavery-profitable-southern-eco...


One of the key arguments against slavery was that it held the South back from being able to have a modern, industrial economy. If slavery was the key ingredient to being a world power, the slave states would not have had their asses thoroughly handed to them by the free states in the Civil War.


> Women were naturally granted the right to vote in 1776 without any strife or suffering.

Depends on the state, if I'm remembering correctly. New Jersey allowed certain women and African Americans to vote until (I believe) 1806.

Dirtying history to make our advancements larger is also dangerous.


Provide some evidence then. What percentage of women could vote then? What percentage of blacks? If it's not a significant amount, your reply isn't much of a rebuttal.


The original requirements were the same for everyone - white or black, male or female - though thanks to the doctrine of coverture, married women would not have the assets to qualify (married women could generally not own property independent of their husbands). Coverture was not applied universally, however, and many married women still voted. See [0]

> That all inhabitants of this Colony, of full age, who are worth fifty pounds proclamation money, clear estate in the same, and have resided within the county in which they claim a vote for twelve months immediately preceding the election, shall be entitled to vote for Representatives in Council and Assembly; and also for all other public officers, that shall be elected by the people of the county at large.

It's probably also worth noting that these restrictions applied to men, too.

0: https://en.wikipedia.org/wiki/History_of_the_New_Jersey_Stat...


On that note, a couple states in the Western US were early adopters of women's suffrage, mostly to boost their numbers and/or to sway women into moving west; Wyoming is the famous example, but Colorado, Idaho, and - surprisingly enough - Utah all had formally legalized women's right to vote in general elections prior to 1900.


Rolling back to the original point, that the US was high trust right from the start: doesn't the fact that there were major barriers to entry for being part of the "democracy", for both men and women, show that there was some inaccuracy to that statement?


Oh, didn’t you read? All men are created equal. It’s a classless society!


The ruling part of the early United States were English settlers, so they brought their high trust society with them.


Eh. Reality is more complicated.

Some of the English settlers were decent people with functional societies.

Some of them were basically the christian Taliban.


No, they were not.

The heavily religious communities in the early US didn't display any of the tribal behaviour commonly seen amongst the Taliban (which is the cause of most of the governance issues in Afghanistan), just the extremely strict adherence to religion. That is in some ways a superficial similarity, as the average person in those times was as fanatically religious as only the most hardcore Christian today.

For instance, cousin marriage (one of the main ways of maintaining tribal cohesion) wasn't a thing at all, while it is still extremely common in Afghanistan and the Middle East today.


Certain English settlers were such hardliners, and so terrible to everyone who didn't adhere perfectly to their doctrine, that Rhode Island and parts of New Hampshire were settled by people expressly fleeing them. No, they didn't have the tribal politics, but the hard-line religiousness and correspondingly harsh treatment of everyone else was the comparison I was going for, and anyone who has studied the first hundred years of the Massachusetts Bay Colony knows that comparison is apt.

Nit picking over the particulars of how they ran their society does not make them any less a bunch of fundamentalist jerks.


That's a take. I'm assuming you mean "The US as created by white European invaders who had to decimate a local indigenous population using slavery and warfare after purging itself of imperial overlords" right?

Like the US in its current coast to coast form didn't spring out fully formed from Liberty's bosom. It also had centuries of European innovation that led to it.


The Fed?


Careful. Soon you'll be arguing for the dictatorship of the proletariat. "Dictatorship does not necessarily mean the abolition of democracy for the class that exercises the dictatorship over other classes; but it does mean the abolition of democracy (or very material restriction, which is also a form of abolition) for the class over which, or against which, the dictatorship is exercised." -- Lenin.

I'm not being completely serious, but I do find it extremely interesting that this whole idea of high coercion at the start, followed by a gradual easing off as people accept the new culture, is one that has been tried explicitly. Or... at least claimed to have been tried ;-). Marx and Engels made no bones about the authoritarian nature of revolution, and when Lenin and Stalin later tried (presumably) to manage the situation, they found it necessary to maintain that dictatorship in order to combat the ever-present trend toward the bourgeoisie and capitalism. But it was all supposed to end eventually.

For me, I see parallels between that and the current air of "It's a scary world out there. Trust your government to take care of you. Give us more powers so that we can make our communities whole." No authoritarian revolution, but a very real reach towards cracking down on bad people in the name of the community. And once utopia (though not a communist one) is reached, the power will simply not be necessary.

Having said all that, it was Confucius who said that if you are lax at the beginning and then become more strict, you will be seen as a tyrant. However, if you are very strict at the beginning and become more liberal over time, people will see you as magnanimous. When I used to teach at a high school, I used that idea and it did, indeed, work very well.


Reminds me of The Republic, where Plato describes an ideal government of selfless, enlightened rulers that would over time devolve down into democracy, which then would end up in tyranny, because the people would choose a tyrant to get rid of the chaos.

Unrelated, I was unaware of the dictatorship of the proletariat, but that explains to me why communism ends up as a system of control: because it started out as a system of control in the first place. And why it was liberal intellectuals (at least in the 1910s) who were attracted to it: they were the ones ruling society in that system, which, as every intellectual knows, is The Way It Ought To Be. Of course, it got co-opted by the political sorts, because control in the real world requires weapons, not ideas.


I disagree profoundly with pretty much every claim made here. The analysis of the inception of liberal societies is very likely wrong.


A bare "wrong!" is a very weak reply that adds only noise to the discussion.


Can we trust the article's claim that it is the internet that is becoming low trust, or is the internet reflecting society's trends? Trust has been fading in societies for decades [1], and apparently millennials are the most cynical [2]

https://ourworldindata.org/exports/trust-attitudes-in-the-us...

https://img.washingtonpost.com/blogs/monkey-cage/files/2014/...

First, the internet has opened up, and its prominent platforms (which were always predominantly American-culture oriented) are now global and thus reflect low trust cultures as well. Second, Goodhart's law: internet metrics exist mainly to be gamed, which is why they should change. Reviews worked for a while, when the internet had a very different audience; they no longer work, and that's normal wear and tear. Third, it's time platforms start paying specialists for crucial things like product reviews. Crowdsourcing no longer works. (Which explains why Wikipedia should maintain a very conservative editing policy from now on.)


Third, it's time platforms start paying specialists for crucial things like product reviews

Given that platforms earn a commission on every sale, why should they be trusted to procure specialists' reviews? Maybe they would prefer to hire professional charlatans to boost the reviews on everything, in order to drown out the legitimate opinions of aggrieved customers?


Sure that's a great recipe to lose your customers. Shops have an interest in having honest reviews, and if they are gaming them, someone else will provide more honest ones.


Classically, you're right, but Amazon seems to operate by a different set of rules. Prices, availability, and recommendations change constantly according to some internal algorithm. In effect, no two users are shopping in the same store. This disrupts the ability for users to communicate about what's going on at Amazon. The whole thing is a big mystery and most people just see what they think is a good deal and snap it up before it's gone.


Where are the footnotes?


The first and second link I presume.


It is simply one facet of more pervasive deceit in society.

Back in the day, when you placed a long distance call, an operator would come on to ask you what your number was so the call could be billed to you. A single call might be a dollar or a few dollars, which would be ten to several tens of dollars in today's money.

No one would imagine that such a system would work today.


It is simply one facet of more pervasive deceit in society.

And that's probably rooted in people being fundamentally devalued and taken advantage of.

We are seeing worse income inequality than in The Gilded Age and people act like it's a natural and unavoidable side effect of the existence of tech, the internet, whatever. That's BS. People created these things. If these things exploit people, it's because people designed them to exploit people.

If people want people to be treated better, then people need to stop blaming machines for our social values. Tech merely magnifies those values. It doesn't cause them to exist.

Bill Gates said that automating an efficient system amplifies the efficiency and automating an inefficient system amplifies the inefficiency. I propose that you can similarly amplify whatever underlying social values you have, whether that's something good -- pro education! -- or something bad -- racism and misogyny!

And classism. We are using technology to amplify extractive economic practices, then blaming the robots as if Judgement Day had already arrived and Skynet is now in charge of our lives. (A la the Terminator movies.)


I'd almost rather society just admit it's classist so it can be handled appropriately. Instead we're just in denial and no policies (for better or worse) will ever go towards what we need. I might sound conservative but I'm definitely democrat/liberal in most of my values. I just don't want to live near someone that is thuggish and may steal from my house.


That’s back when those systems could only be used by wealthy, educated people. Like usenet before the eternal September.


And, of course, when the automated billing system we're using now hadn't been developed yet. If asked, I'm sure the telephone company would have preferred not to trust their customers like that.


I'm a bit surprised this article has zero reference to any interdisciplinary work on trust metrics, and stuff like trust propagation algorithms. I'm even more surprised, with blockchain having become such a buzzword these days, that Bitcoin's proof-of-work approach was not even mentioned, let alone evaluated for effectiveness. And where's the love for Bruce Schneier [0]?

I think the article's approach of referencing historic strategies from government and social infrastructure is a bit useless in this context, because of how much the problem changes when scarce information becomes abundant light-speed communication. I understand that they are trying to make the point that corporate overlords are bad, but maybe it would serve the purposes of the article better to just focus on the variables that matter, now that there are new capacities for transparency and accountability metrics based on some combination of historic behavior, web of trust node size, and other situational variables based on the medium -- like users that spend a lot of money, or consistently downvote known bad actors. Forcing users to have skin in the game with strict enforcement of transgressions is of course a reasonably effective, if coercive, strategy as well, and you can take the edge off this authoritarian approach if you pair it with some variant of restorative justice.

My approach is perhaps not rigorously useful, but for my personal conception of trust in a world of bad actors, I like looking at strategies from Axelrod's iterated prisoner's dilemma [1]. Tit for Tat is famously a good strategy, and there's also a good strategy where you forgive after repeated cooperations but gradually increase the punishment of defectors, punishing n times for their nth defection [2]. Though I should mention that tribalist collusion with other bad actors is unfortunately a very viable approach as well.

[0]: https://www.schneier.com/books/liars_and_outliers/

[1]: https://axelrod.readthedocs.io

[2]: http://jasss.soc.surrey.ac.uk/20/4/12.html
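The two strategies named above are simple enough to sketch in a few lines. This is a toy illustration, not code from the Axelrod library linked at [1]; the payoff values are the conventional iterated-prisoner's-dilemma ones (T=5, R=3, P=1, S=0), and the escalating-punishment logic is a simplified take on the "Gradual"-style strategy from [2]:

```python
# Toy iterated prisoner's dilemma: Tit for Tat vs. a gradual punisher
# that retaliates n times for the opponent's nth defection.
# Standard payoff matrix: (my move, their move) -> (my score, their score).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then mirror the opponent's previous move.
    return their_hist[-1] if their_hist else 'C'

def gradual(my_hist, their_hist, state):
    # On a fresh defection, queue up n punishment rounds, where n is
    # the total number of defections seen so far; otherwise cooperate.
    if their_hist and their_hist[-1] == 'D' and state['punish'] == 0:
        state['defections'] += 1
        state['punish'] = state['defections']
    if state['punish'] > 0:
        state['punish'] -= 1
        return 'D'
    return 'C'

def play(rounds=20):
    h1, h2 = [], []
    state = {'defections': 0, 'punish': 0}
    score1 = score2 = 0
    for _ in range(rounds):
        m1 = tit_for_tat(h1, h2)
        m2 = gradual(h2, h1, state)
        p1, p2 = PAYOFF[(m1, m2)]
        score1 += p1
        score2 += p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2

print(play())  # -> (60, 60): two cooperators settle into 3 points per round
```

Against each other both strategies cooperate indefinitely; the difference only shows up against exploiters, where the gradual punisher's escalating retaliation makes repeat defection increasingly expensive.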


There's no mention of what I think is by far the biggest negative influence on trust: the media. Trust in media is at an all time low:

https://www.washingtonexaminer.com/washington-secrets/trust...

It's a totally self inflicted wound too. Clickbait, outrage bait, blatant political spin, stealthy retractions, journalists on social media starting mobs...

Bots do not even register in most people's minds compared to that.


What you see is the clickbait. What you don't see are the structural changes to the media landscape over the past 20 years that led to this.

Newspapers per 100 million people fell from 1200 (in 1945) to 400 (in 2014). This is from a Brookings study cited in a Wikipedia article on the topic [0]. In 2013, the Chicago Sun Times laid off all its photographers and tasked journalists to take photos as well as provide the research and writing [1]. How would the quality of your work be affected if you had to do the job of 2 people?

The classifieds ads business is dead, and subscriptions have been declining for years because "news on the Internet is free". The only "media" that makes serious money is talk radio, which isn't journalism so much as diatribes of political invective.

As it turns out, that's what people are willing to pay for, or at least sit through ads for. If anything, "the media" is giving the people what they want.

[0]: https://en.wikipedia.org/wiki/Decline_of_newspapers#Performa...

[1] https://www.nytimes.com/2013/06/01/business/media/chicago-su...


The idea that the internet transformed media from oracles of truth into professional manipulators turns the whole situation inside out. The fact that people used to trust media more is not an indicator that media used to tell more truth in the past. Journalists lived in a high tower, pretty much unreachable by an average reader, so producing rubbish or being manipulative was a billion times easier. In the past only an important journalist was able to stand against another important journalist, and even that was a slow, inconclusive pushing match. Nowadays, bullshit and incompetence can be revealed in hours, and even small mistakes are publicly noted. Everybody can be media now. Naturally, journalists have lost their demigod status. But it's important to understand that they weren't demigods before; it's just that you used to look up at them. And now you don't.


> The only “media” that makes serious money is

...

> Google Made $4.7 Billion From the News Industry in 2018, Study Says

https://www.nytimes.com/2019/06/09/business/media/google-new...

Hmm


Obviously rchaud is referring to people that create media, or in this case, the journalists. I don't see what relevance Google's revenue is to this conversation. They found a way to make money by directing people to other people's work.


While those "other people" were doing unpaid internships, freelancing, and filing for unemployment insurance. Something seems very profoundly wrong with this picture.


Google made $4.7B off of the news industry, by aggregating other people's stories and advertising on them. None of that money actually went to writers or publishers.

Which is the entire point of the article you linked. If you're gonna be snarky, you should really check to make sure your information is accurate.


That article -- and the lobbying group "study" it is based on have been pretty roundly condemned even by other journalism organizations e.g. https://www.cjr.org/the_new_gatekeepers/nyt-google-media.php

tl;dr the number is total fabrication


Which only cements the claim that no one is making serious money off of proper news media.


Has the news industry's economics ever been truly separate from entertainment industry economics? Is the only reason news ever made money was because it offered people a novel form of entertainment?

Did people ever buy news because it was news? Or was this decline inevitable as actual entertainment was always going to eventually be able to offer a better match for what people actually buy?


Most newspaper articles don't have or need a photograph. Did the Chicago Sun Times really have a 1:1 ratio of journalists and photographers?


It's not just media, but the education system as well. More educated people are significantly more, not less, likely to fail intellectual Turing tests about people with opposing views; i.e. far from increasing openness to experience, education itself is functioning much like indoctrination into a fundamentalist religion. The effect starts already at a high-school level and becomes worse and worse as educational attainment rises:

https://www.theatlantic.com/ideas/archive/2019/06/republican...

If this doesn't scare the s--t out of you, I don't know what would.


What that article tells me is that people say they belong to teams they don't actually belong to, and that very few people actually do. Seems to me to be a problem with the teams.


> What that article tells me is that people say they belong to teams they don't actually belong to

I'm not sure that's consistent with the evidence. People are actively getting worse and worse at modeling the other "team", which strongly suggests that polarization is quite real indeed and not just a matter of different labels.


Democrats with higher education are getting worse and worse at modeling Republicans. I think that means that, in higher education, Republicans get exposed to real Democrats, and Democrats get exposed to a caricature of Republicans.


If true, that seems odd. There was an awful lot of very public searching for "what are Republicans, and how can we understand and empathize with them more?" after the last election. Tons. It's ongoing, in fact. It's mostly been—to an almost absurd degree—very gentle and good-hearted. If there's ever been anything like this sort of massive effort by Republican media figures and public intellectuals, I'd love to know about it, because their findings would surely be fascinating. I doubt there has been, at least in the last 30 years or so.


> what are Republicans, and how can we understand and empathize with them more?" after the last election. Tons.

> very gentle and good-hearted.

That has not been my impression. Just one data point.


I would say there were some people trying to understand. It just gets lost in all the people saying "Trump is evil! How could you be so stupid? You're all such idiots!" Those people are both more numerous (in my opinion) and closer to the microphones.

And of those trying to understand, some were trying to understand, and some were trying to understand how to get votes better.


I'd like to see a comparison between a survey of espoused party values and the opposition's understanding of them. I wonder, if someone were to look at the values that the political leaders are stating and voting for, would there be as wide a gap, or would it fall more in line with what people believe of a party and its members?

On a person per person basis there's dehumanizing caricature, but I wonder if you compare it to the party to which the person claims to belong, if it would be as out to lunch.


Perhaps it's more a reflection of the incentives at work? How do you drive engagement with your content? Clearly 'clickbait' is (was?) successful at driving traffic, otherwise it wouldn't be called clickbait. Polarization is used successfully by other 'non-political' sites such as YouTube to drive engagement statistics (and hence ad impressions), and so forth. If you're trying to stay afloat in a competitive media environment, what would you do?

There is no mass incentive for thought-out, contemplative, long-form journalism. In my opinion, we have dug our own grave - the truth is that the primitive, animal parts of our brain vastly overpower the analytical parts of our brain, and so your 'outrage bait' is a news (or tech) executive's bonus for the year. If the only metric by which we measure anything is $$$, then, well...


"Journalism" has a "push" element in addition to "pull"


Do you think a group of journalists selling verified, accurate, non clickbait articles based on data and first hand accounts would be able to generate enough revenue to feed the journalist’s children and send them to college?


That's part of the issue; there is no need for commercial first-hand accounts anymore. If someone on social media is talking about a news event and provides a video documenting their presence, that is about as verified and accurate as you can hope for.


Which is incredibly susceptible to someone with an agenda who wants to spin a narrative and knows how to leverage social media.

It's tailor made for propaganda. Joe Schmoe who is actually there doesn't know how to inflate his likes and SEO his account. His voice gets drowned out by the people who are getting paid to push a particular viewpoint. There is no editor providing oversight. No ethics board. No neutral point of view. It's just whomever shouts the loudest and gets there first.


No neutral point of view. It's just whomever shouts the loudest and gets there first.

And that's exactly how the news industry has always worked, since the invention of the press (actually even before, though it wasn't an industry back then).


You're being downvoted, but to a large extent, you're probably right here. For many topics journalists used to cover, you genuinely don't need them any more, since enthusiasts posting Patreon/donation funded content is at least as good in terms of quality. Tech, games, TV, music, sports, celebrity gossip, all of them can be covered just as well or better by some random guy on YouTube or some online blog site.

And that may in fact be another reason why the media as a whole is doing so poorly. In the old days, a fair few people read the papers for this stuff. Nowadays they can get the same information elsewhere, without the stories they're less interested in taking up space.


There absolutely is a need for multiple, verified, consistency- and fact-checked first-hand accounts. One person on social media does not provide an objective and difficult-to-falsify viewpoint.


Deepfakes are getting more and more accessible...

https://en.wikipedia.org/wiki/Deepfake


Maybe not, but it's still a problem. It's the same in Spain: most media is hard to trust on basically anything. They even adhere to stupid projects like "The Trust Project" and spin off fact-checking brands, but none of that solved the problem. It's still the same people claiming that, oh no, this time you can trust us!

I even experienced being the target of a report (well, the company I work for), and they did an awful job. I felt that the story they made up was only tangentially related to reality.


The purpose of my comment is to illustrate that the collective "we" as a society are to blame for the lack of good journalism in the market. "We" don't demand it (i.e. pay for it), and so we get what we (don't) pay for. The person I was responding to claimed the media self inflicted it upon itself (I presume based on the prose of their comment), my claim is society self inflicted it upon itself.


Ultimately, both views are true. We do not "demand" it with regular market means, but it's important to remember that free markets tend to structurally favor incrementally cutting corners. Even assuming we've demanded good journalism in the past (which is doubtful), we'd still end up where we are. As for media self-inflicting it upon itself, well, they shouldn't have started cutting corners. But since "media" is really a lot of competing actors, "not cutting corners for short-term gain" was an impossible outcome anyway - the whole thing has dynamics of a Prisoner's dilemma.

My conclusion is that it's less about demanding - the way we fund news, i.e. open and competitive market, is structurally unable to support good journalism. Usually problems like these are solved by governments setting standards and giving funding, but journalism is a special case (it's perceived by many as protection against government overreach), so with that option out, I have no idea how to even begin solving it.


> Maybe not, but it's still a problem.

Of course it's still a problem. The point is that it's actually a much bigger problem than you implied. People blame the media like it's all just down to some greedy jerks making malicious decisions, and if only we could replace them with someone with integrity, everything would be okay.

The fact is that we've created an environment where it's nearly impossible for honest, non-sensationalized news to exist. Putting all the blame on the media is like blaming a starving man for stealing bread. They're culpable, certainly, but you're missing the root cause.


The truth is the opposite. They always lied. Then we did not know they lied. Now we know. It is not they who changed, but us.


> It's a totally self inflicted wound too. Clickbait, outrage bait, blatant political spin, stealthy retractions, journalists on social media starting mobs...

The media isn't one unified conglomerate. It's like equating the entire tech industry to just SV gig economy startups.


It's a totally self inflicted wound too.

No, it's really not. In a nutshell, the root cause is the currently pervasive idea that good writing should be available completely for free.


Ideas follow from reality. Supply and demand would dictate that infinite supply should drive prices to zero. Isn't that what we're seeing?

The other problem is that most people can't tell good writing apart from bad, but that problem is far older than all technology (apart from writing itself, of course).


That's a two-way street. Reality also follows from ideas.

Perhaps most people can't tell good writing from bad, but the HN crowd is more able than average to distinguish the two. Yet the attitude here boils down to "writing isn't a real job -- if you don't like working for peanuts, then STFU and get a real job!" HN members actively find ways around pay walls, more aggressively than most use ad blockers etc. Most people here are well heeled and can afford to pay for subscriptions, but the dominant attitude is "fuck no to that -- writing should be free!"


Not so much "writing should be free", but "I don't trust and consume a specific source enough to justify a subscription", with the addendum that "ads are evil". At least for me.

Edit: having read the "hand licking incident" I believe it did give me value and I would be willing to pay 1 dollar or so as thanks (not implying that I got only 1 dollar value, or even that I got as much as 1 dollar value. Just a number that seems reasonable).

There is the matter of how: to do it I would probably have to spend much more than 1 dollar in effort.

And there is the matter of scale: I want to pay you for your work in giving me an interesting insight. But if we started to do that massively, people would optimize for "things that seems insights" not for insights... (see https://slatestarcodex.com/2014/07/30/meditations-on-moloch/)


I primarily try to monetize my blog writing with tips and Patreon. Someone who doesn't want a recurring monthly charge can leave a one time tip.

Even though I removed ads from most of my sites in part to respect the boundaries of people who hate ads, I mostly get endless excuses rather than funds.

When I ask "How can I monetize my work?" people don't actually have a solution. They seem to think if you get enough traffic, that automagically leads to money, overlooking the fact that this concept only really works for an ad-based model and widespread use of adblockers kills it.

I sometimes get told "Product sales of some kind." Nevermind that this is another form of whoring out my writing to the need to sell something other than the value of the writing per se and also people on HN equally bitch about the evils of content marketing and how it is one of the things ruining the internet.

I've heard these arguments for years. I've tried to find a means to make money without being evil in some manner. The result for many years now is virtuous and intractable poverty.

When push comes to shove, the real answer boils down to: We expect large quantities of quality writing on a regular basis and we refuse to pay for most of it. We also will get up on our high horses and get all offended if you dare to use expressions like _slave labor_ to describe our entrenched expectations and the de facto outcome. Don't confuse me with the facts. My mind is made up!

It's quite tiresome to keep hearing the same BS over and over while I continue to live in poverty and yadda.

Edit in response to your edit: I call bullshit. If you honest to God want to give me a single dollar, you can do so via either PayPal or Venmo right now without further hypothesizing about how giving me a single Goddamned dollar is some new means to ruin the internet, along with every other means to pay for writing. Because beneath all the hot air is the fact that most people simply expect slave labor to create good writing. If this weren't true, I could pay cash for a cheap house in my small town and quit whining on HN about being poor.


fyi: paypal refused to allow me to change my password, venmo refused to accept my non-US account and patreon(1) refused my non-US credit card. As expected, I did spend much more than one dollar in time trying to send you money. What I did not expect was to fail.

(1) I meant to be a one month only patron.

Also, take my data point and do with it as you will. Call me evil/slaver/bullshiter/whatever. I was trying to help and also to discuss, but I no longer feel inclined to do either.


Thank you for trying and for following up with this detailed reply.

I was trying to help and also to discuss, but I no longer feel inclined to do either.

Yes, this is par for the course: People get mad at me for effectively communicating that there are no good solutions here, no matter how hard I try. And it becomes a new excuse to blame me for my financial problems and declare "Not my problem! I'm done here!"

I genuinely bear you no ill will, but you and I are posting on a very public forum, so I think there are larger things at stake than your feelings. Other people need to genuinely understand how this works if it is ever going to change and being too polite about this fails to get it through to people.

They just keep arguing that I must be wrong, there must be some means for someone to make money as a writer that doesn't violate any of their constraints for how to make money as a writer without being evil. And, besides, their desire to have an ad-free, whatever whatever internet is far more important than my financial difficulties.

I got these kinds of arguments even when I was literally homeless and going hungry, which I found completely mind boggling. But, to their minds, my homelessness was merely evidence that I was incompetent and there was no reason to take me seriously, not evidence that, no, seriously, most writers really just can't make the money they need in the current climate.

You have a good day. I know I can be hard to take.


It isn't just a dichotomy that bad writers produce bad writing and good writers produce good writing. Good writers are even better at producing bad writing, because they produce bad writing that masquerades as good. We're not dealing with random defects, we're dealing with agents and their agendas.


We're not dealing with random defects, we're dealing with agents and their agendas.

That's a reasonable point, but people mostly don't tell me "It's a trust issue. I would pay if I believed I could trust the motives of the author/source."

The overall attitude expressed is consistently "I'm simply not going to pay for writing. If writers want a middle class income, they should get a real job."

Once in a great while someone will agree with the general point that if you want to be able to trust what an author is saying, you need to pay them for their writing and not expect them to monetize with ads or sponsors because that introduces a conflict of interest. One person cited Consumer Reports as an example of this model and why they pay for a subscription.

But that's the exception, not the rule. Most comments here consistently express the attitude that they simply will not pay for writing and writing is not a real job.

At the same time, journalists get attacked for not doing their job adequately well, etc. It mostly falls on deaf ears to point out that journalism simply doesn't pay what it used to and there is a cause-and-effect relationship between the lack of adequate pay and the lack of quality writing.


The overall attitude expressed is consistently "I'm simply not going to pay for writing. If writers want a middle class income, they should get a real job."

There's more to trust than belief in the veracity (or lack thereof) of a statement. When you trust a writer, you not only trust their claims, you trust that the substance of their writing is worth your time. The attitude you highlight suggests to me that many people do not see a lot of writing as being worth their time.

Unfortunately, people's judgements of value can be strongly influenced by price. When the quantity of readily available, free writing increases dramatically, people's judgement of its value goes down. Simply put, they no longer trust in the institution of writers as a medium.


I'm a writer. Some of my writing hits the front page of HN. This piece did fairly well on the front page in terms of both karma count and discussion: https://raisingfutureadults.blogspot.com/2019/01/the-hand-li...

It also got copied and reblogged, sometimes legitimately with my permission and sometimes not. For me, it is the first hit if you google the expression "the hand licking incident." It seems plenty of people found the piece worth reading.

It made not one thin dime.

I spent around two weeks on that piece. It's at least my third attempt at a parenting blog. I get paid for freelance writing, have years of experience blogging, about six years of college and if karma count is anything to judge by I'm a "respected member of the community." (My old account has 25k karma and this one currently has 19k karma. If it was all under one account, I would be decently high on the leader board.)

It has no ads on it in part because I would rather not be a shill for god-knows-what. I would rather be paid for my writing. But it also has no ads in part because I know how much the internet in general and HN in specific hate ads these days. It is supported via tips and Patreon.

I'm quite open about how much I struggle financially and that I make my living as a writer in part because I'm medically handicapped and can't do a lot of so-called "real jobs." Given that we have worse economic inequality than in The Gilded Age, "get a real job" is a specious argument anyway.

The reality is that the current attitude is that writing simply should be slave labor. Period. If you don't like it, go do something else. Not our problem that you are literally homeless and going hungry, bitch.

Meanwhile, five million monthly visitors to HN expect the front page to be filled daily with good writing and they bitch and moan about how there isn't enough good stuff on HN and the front page moves too slow and on and on.

I don't particularly care to continue this discussion further. It's not likely worth my time.

(Edit: Not currently homeless, but I was for nearly six years. I still struggle with food insecurity and general poverty.)


The rot had set in long before they were giving away free content on the web. Before then people were paying for the distribution and not the content, the web destroyed the ability to profit off the distribution.


It's unfortunate, because while there are many bad actors, there are some outlets that don't employ these tactics that get painted with the same brush. I'm not sure how to make that better.


The Washington Examiner is a horrible source for anything. Maybe that was your point? (That link in your comment doesn't work for me.)


Fabricating attention.


All modern forms of yellow journalism.


> In low-trust societies, you never know. You expect to be cheated, often without recourse. You expect things not to be what they seem and for promises to be broken, and you don’t expect a reasonable and transparent process for recourse. It’s harder for markets to function and economies to develop in low-trust societies. It’s harder to find or extend credit, and it’s risky to pay in advance.

'Bout sums it up.


Maybe I missed something, but when was the internet ever high-trust?

I feel like since its inception, savvy users have always applied a healthy amount of skepticism and made their own choice about whether or not they believed something "on the internet". This included both facts on websites and conversations with other humans on chat.


> but when was the internet ever high-trust?

Exactly. It started when big corp tried to lure in more users by giving them free cheese, making them believe that the internet is a safe place to post your name, address, work life and sex habits. Everyone at the time thought it was crazy that people did that, and yeah they were right.


Maybe the problem is that ever since google the only metric anyone has bothered to devise is "popularity", which was never a particularly good indicator of truth to begin with.


Google was built heavily on authority, not just popularity.

Otherwise you could easily buy your way above Wikipedia, run by the non-profit Wikimedia Foundation, which ranks at or near the top for every query.

Quora, or some other VC-backed knowledge service with a couple hundred million to vaporize, would overtake Wikipedia through the direct purchasing of popularity. Then others would quickly follow, entirely wiping Wikipedia from the first pages. That can't be done. If it could, private equity, via Answers.com and other such very low-quality sites, would have already done it, seeking billions of dollars in return on such positioning.


Yeah google isn't built on popularity, but everything else is. And it's partly our fault for using aggregators.


Interesting, but how do they assign this authority? Anything to cite?


As it should be, I think. You don't inherently trust a stranger on the street; you shouldn't do so on the internet either.


Not quite.

While I wouldn't trust a random stranger on the street with, e.g., my money or private data, I have no problem trusting them to honestly and truthfully answer questions like "how do I get to X from here?" or "here, can you please mark my bus ticket in that marking machine over there and hand it back to me? I can't quite reach it in this crowded bus".

On the internet, however, I don't trust a random stranger (e.g. a youtube commenter, or a reddit user) with answering "2+2" correctly, let alone with something that involves my possessions, no matter how low-value they might be.


> Things are increasingly getting worse

General purpose headline for attracting views, tying into the seductiveness of a general perception of doom and gloom.

https://www.forbes.com/sites/stevedenning/2017/11/30/why-the...


At a probably naive glance, it seems related to anonymity. Would tying identity to online posts increase this trust? What if you had to insert your National ID Card to comment in some feedback forums?

I remember YouTube tried this and failed due to pushback.

What's the driver of this clickbait/instant gratification/hot take culture that's developing? Is it just ads?


Some of the worst clickbait is published in national newspapers under a byline and photograph of the author.

The Boris Johnson fiasco has got journalists openly saying "who are the people who called the police when they heard shouting and screaming next door? We need to find and expose them".

Requiring ID means the culture war side with the best harassment capability can take over the space and decide who becomes the story and who gets made unsafe.


> Boris Johnson fiasco

Could you provide some context?

> who are the people who called the police when they heard shouting and screaming next door? We need to find and expose them".

Could you provide some links?

(sorry, not doubting you, just truly uninformed)


> Would tying identity to online posts increase this trust?

No. People happily reshare fake news all day under their real names.

If you don't recall, Facebook also requires "Real Names" and we saw how much that helps.

The driver is human nature. The internet didn't invent the conspiracy theory, or the clickbait news article. That stuff existed almost since the first newspaper. False accusation is specifically called out in the 10 commandments. Lying and cheating is human nature and likely requires some checks in place when it gets to the point of harming others for your own profit.


> What's the driver of this clickbait/instant gratification/hot take culture that's developing? Is it just ads?

It's essentially a race-to-the-bottom situation. Media that don't employ clickbait will lose out to others that are making tons of money catering to instant gratification.


People on Facebook are posting with their real names, and much of the worst, most polarizing crap gets spread on there. Some people do post with their real names as a way of visibly holding themselves accountable for what they say, but it's something that really only makes sense as a voluntary choice.


True. Is it similar to "road rage" where people are safe and secure in their moving "castle", and freely criticize and yell at others without fear of consequences?


I don't think so, because people should stop treating the internet as a private place. It really isn't, and the whole "we are real people with real names" thing is an illusion created by for-profit companies. I know that people desire safety by default, but they also have a very long history of trusting the wrong tyrants.


Since YouTube monetizes through ads, they would indeed be interested in your identity. Saying it is to prevent abuse is a pretext. People who believed that would certainly have their trust abused.


That is not really surprising, is it? 20 years ago the Internet was populated by different people, because subscriptions were expensive. Besides that, there was no mobile Internet, so the user demographic was completely different. Now that practically everybody on this planet has access to the Internet, and, well, there are some rather unpleasant people out there, I think we've reached a new historical period with these "low-trust societies".


Things were worse 20 years ago for the general internet, with few "trustable" brand names anyway. Piracy, viruses and scams were all over the place. If anything, it's cleaner today, thanks to investments in security and antispam, but the mode of operation for scammers has changed.


>Better rules and technologies that authenticate online transactions; a different ad-tech infrastructure that resists fraud and preserves privacy;

I don't disagree with the premise, but this solution seems vague at best. Authentication and privacy? Great things, to be sure, but where is the connection? As the article itself notes, even "verified" purchases can be (and sometimes are) made by paid shills.

I guess the proposal is to create laws that prohibit lying on the internet? This person has clearly fallen on the wrong side of the authoritarian argument if that's the solution they came up with to prevent power consolidation in big business.


In Trust We Trust


The less trust there is, the more hardship there is.

We are due for tough times ahead.


Scale is indeed the problem. Scale has perverted our capitalist society as well, long before the internet: Adam Smith wrote about capitalism, but he never envisioned Amazon. Changing a system from one-to-one interactions to one-to-many interactions should not allow us to apply the same rules. In math, for example, commutativity of multiplication holds for single numbers, but not for matrices.
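The matrix analogy is easy to check numerically; a minimal sketch in Python (the `matmul` helper is just an illustrative hand-rolled 2x2 product, not from any library):

```python
def matmul(A, B):
    # Plain 2x2 matrix product, written out to avoid external dependencies.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Scalars commute: a * b == b * a for any two numbers.
a, b = 3, 5
print(a * b == b * a)  # True

# Matrices generally do not: A @ B != B @ A.
A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B) == matmul(B, A))  # False
```

The rule that held for the small, one-to-one case quietly stops holding once the objects being combined get more structure, which is the point of the analogy.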


Funny title change


[flagged]


Ignoring the politics behind this article, it still has a point in how fake, scripted posts alter the global conversation.


What is this comment intended to mock?


It assumes the entire post is about a throwaway example given as an eye-catching part of the piece. It's also an example of the low-trust society discussed: the commenter assumed the worst of the article and attacked immediately.


Welcome to digital socialism. It always ends with long lines for a few blocks of cheese.



