Any exchange of value based on trust is exploitable. The simplest cure is excluding the exploiter, but this doesn't scale well. The exploiter can skate on anonymity if the community is large enough to continually prey on someone new. Spreading news of an exploiter's behavior to others can greatly improve how well this scales, but this behavior also requires trust.
I think the more direct problem is with scale, and that the internet is at the nexus of many trust issues only because it has ramped up the scale and scope of many interactions.
I'm not super optimistic on solving this intrinsic problem of trust in social exchanges, but I do see this framing as a silver lining. It seems at least plausible to iterate offline at significantly smaller scales on mechanisms for building and maintaining trust--and rectifying its breaches--in ways that do actually scale.
If you have never read any of these counterpoints, but find that the conclusions of "The Selfish Gene" have shaped your worldview, please consider reading some of them and seeing whether they persuade you toward new perspectives.
A particularly thorough treatment of these counterpoints is Matthieu Ricard's book:
Altruism: The Power of Compassion to Change Yourself and the World
Dawkins' "The Selfish Gene" is a scientific work about evolutionary selection at the gene level -- I don't recall him touching on psychology at all. (He provides evolutionary explanations for certain altruistic behaviors, but I don't recall him even starting on how they might be expressed via a psychological mechanism or at any conscious level.)
I can't quite imagine what the two have to do with each other -- they seem to be such different topics. Or what Dawkins' work has to do with "western culture", or culture at all. At heart it's a quite mathematical/statistical argument.
I'm curious, what exactly do you see being refuted?
One of the central themes of the book, and one of the author's motivations for writing it, is the contrast between other-oriented (altruistic) and self-oriented (selfish) societies. The author asserts that in the Western fields of psychology, evolutionary theory, and economics, it's often taken for granted that an individual's deeds, words, and thoughts are motivated by selfishness, to the extent that this assumption has nearly become dogma. The 868-page book, including a massive 160+ pages of notes and bibliography, systematically lays out the scientific arguments against "the hypothesis of human selfishness".
You can read the sections of the book that specifically address "the selfish gene" with the following Google search.
Yes, Dawkins is saying that "universal love" does not have an evolutionary component, which seems like a fairly uncontroversial claim.
It seems like your criticism of Dawkins is more a criticism of how other people have misunderstood him, rather than any criticism of the arguments in The Selfish Gene itself?
If you haven't, I highly suggest you read Jonathan Haidt's "The Righteous Mind". While it's at a popular level, it does a fairly good job at presenting a plausible framework for how moral behavior (like altruism) can emerge from evolutionary principles.  Haidt is probably one of the most influential moral psychologists today.
I get the impression you've decided to take a stance on the book Altruism without reading it. The summary you provide for The Righteous Mind could work just as well as a summary for Altruism too. At this point, we aren't even disagreeing, we're just citing different sources, and I'm content to just drop it.
I'm aware that I might be jumping to conclusions, but that doesn't sound like the title of an objective, scientific text.
At first glance I’m not sure how a book on altruism/compassion refutes the gene-centred view of evolution, which holds that evolution is best viewed as acting on genes and that selection at the level of organisms or populations almost never overrides selection based on genes.
Trust is a resource and is (only) used up if exploited. There are countless examples of how this has happened on the net. Remember the latest disclaimer you agreed to, only to have your data stolen or abused? Don't even get me started on penis enlargement spam.
Then additionally, people pretending to have identified the abusers very frequently become the largest abusers themselves.
Lastly, trust needs room to grow, so trying to enforce it through surveillance and advertising will only strengthen the individual's ego. So...
> can greatly improve how well this scales, but this behavior also requires trust.
can backfire immensely. People have already tried and failed.
There is no intrinsic problem with trust. On the net you are potentially connected to everyone on the planet. Nobody can claim the trust of everyone. And I firmly believe achieving this is the wrong goal.
People need to simply not let their trust be abused, and that is very possible. For example, not letting ad companies handle your biases, or not believing the next random spam mail you get.
Anonymity is a tool to shield you from abuse. Not the only one, but for non-public personas, nobody has found something better yet.
If you keep looking for new predators, everybody will look like one at some point, because you have let your trust be abused and have none left.
> People need to simply not let their trust be abused, and that is very possible. For example, not letting ad companies handle your biases, or not believing the next random spam mail you get.
> Anonymity is a tool to shield you from abuse. Not the only one, but for non-public personas, nobody has found something better yet.
I'm not sure it's possible to distinguish the world you describe from the one we already live in.
The name of the behavior you're looking for is "evolutionarily stable strategy", often abbreviated to just ESS.
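For anyone unfamiliar, the textbook illustration of an ESS is the hawk-dove game. Here's a minimal sketch in Python; the payoff values `V` (resource value) and `C` (cost of a fight) are arbitrary numbers chosen for illustration, not anything from the thread above:

```python
# Hawk-dove game: the classic illustration of an evolutionarily
# stable strategy (ESS). V = value of the contested resource,
# C = cost of an escalated fight (illustrative values only).
V, C = 2.0, 4.0

def payoff(p_hawk):
    """Average payoff to a hawk and to a dove when a fraction
    p_hawk of the population plays hawk."""
    hawk = p_hawk * (V - C) / 2 + (1 - p_hawk) * V
    dove = (1 - p_hawk) * V / 2
    return hawk, dove

# When C > V, the ESS is a mixed strategy: play hawk with
# probability V/C. At that mix, hawks and doves do equally well,
# so no mutant strategy can invade.
p_star = V / C
h, d = payoff(p_star)
print(p_star)               # 0.5
print(abs(h - d) < 1e-12)   # True: payoffs are equal at the ESS
```

The point of the equilibrium check is that at `p_star` neither pure strategy out-earns the other, which is exactly the "no invader can do better" condition that defines an ESS.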
Note well: I am not advocating either a social credit score system, nor China's use of it. Instead, I am warning that any national-level attempt to solve the trust issue will likely be open to the same abuses as China's system.
Specifically, once it becomes a means of political control it ceases to be useful as a system for trustworthiness.
I was thinking "money screws things up" and its corollary "free screws things up", then read your comment. It gets to the root of things.
With trust, trade is unlimited, so I wonder how to architect things to stand up to the problems we have.
But that effectively ends the point of aristocracy, the meme that we must cater to all these rugged individuals living off grandpa's old money. So that'll never fly.
Violent revolt it is! Front row to the apocalypse! /s
But I think there's a risk of sending a lot of bright, productive people off to tilt at technological/interface/regulatory windmills in a way that leaves fundamental trust issues out of scope.
Even if Amazon, Twitter, Facebook, etc. could wave a magic wand and remove every fake review, counterfeit product, scam, personal threat, or piece of false/fraudulent media, there would still be a trust issue. We still have to trust that they did what they said, that whatever disappeared was correctly identified, and that they didn't wrongly remove many legitimate items in the process.
Even if the magic wand that makes these decisions has the utmost ethical and logical integrity, there will still be some mix of skeptics, cynics, malign actors, competitors, bots, etc. who chum the waters with accusations to the contrary. We'd still have to choose whether or not to trust this process and the actors behind it.
So, I think it might be productive to focus on some smaller questions first. To be semi-arbitrary: can we find a protocol for reliably building a 50-person trust network with a limited scope/focus (maybe identifying reliable providers of a single service), where each participant knows a small fraction of the network, and which is capable of both meeting its purpose and detecting and reforming or ejecting exploiters? If the first is tractable, can you expand the scope/focus of the networks and retain these properties? Can you compose a higher-order network that retains these properties?
These kinds of emergent niche communities can only exist when the broader network reaches sufficient scale to have a critical mass in the niche.
So in limiting scale overall let’s take care not to throw out the baby etc.
I'm imagining something more along the lines of: can we learn how to turn what small/niche networks do well into building blocks, and does that knowledge teach us anything we could use to (I guess adversarially) re-structure/reform some environments that are currently trust bonfires.
References? Credit Score?
Both of those are easily gamed in the absence of trust. For references the mechanism is obvious: they list co-conspirators as references, who then lie about their titles.
For credit reporting it seems like you could do the same thing over time. Create fictitious entities to make positive filings with credit reporting agencies. I assume the reason that isn't more popular is that it's even faster to do it the other way and steal the identity of somebody with good credit.
If anything the internet nowadays is a more regulated, safer place than it used to be. The "wild west" internet of old now only endures in certain corners. It's no longer synonymous with the thing itself.
Posting with your full name instead of anonymously or pseudonymously is now the norm for many people. This changed how people interact online, but it also changed their expectations.
A full name however doesn't make a person or their opinions any more real than a pseudonym will. People will lie to you with a fake name just as shamelessly as they would hiding behind a pseudonym.
The people who write those kinds of articles are mostly just your average Joe who uses the internet exclusively to browse Facebook and the like. Of course this gives them a twisted understanding of the medium.
And then there's those who are just lying I guess (duh!). Plenty of reason for that with the whole fake news hysteria. If you don't want to risk ending up on the wrong end of that debate, your article is already written for you. Risky to defend the free(-er) internet nowadays or even just to refrain from attacking it. Further there's the fact that "a war on fake news" might give the established media an edge, but I'm unconvinced any reputable journalist would consciously consider that when deciding on the tone of his article.
In any case you can also find plenty of articles displaying a better understanding of the matter. It's not like there's some secret global conspiracy against the free internet.
And honestly, it was better that way. The internet was built to be public space.
Now, I am not discounting that there is actually toxic, horrific harassment that goes on the Internet. Doxxing and things like it are terrible and should never happen.
Rather, I'm arguing that people need to acquire a sort of "street smarts" when navigating the Internet. Much like when you travel to a foreign country, you may expect a different culture with different norms, so it is the same with the Internet. What's rude in one culture is simply social convention in another. What is casual conversation in one is taboo in another. I spent a lot of time on the Internet growing up and I feel like I have been inoculated against the worst parts of the Internet, but at times it feels like many people haven't.
A good example of this is how a lot of people and media take 4chan posts at face value, without realizing that a lot of it is completely self-aware and that the worst posts are almost always a game of who can post the most needlessly offensive thing possible. A lot of 4chan is an exercise in communication with no names and no filter. And yet some of my most informative conversations have been on 4chan precisely because there is no politeness filter, and posters can be as devastatingly critical as they want. But a reader also needs to learn how to tune in and out, discount accordingly, and read between the lines to get the most out of a 4chan conversation; otherwise you simply come away with the idea that the community hates everything and everyone.
For a more tame example, I follow a couple of Twitch streamers. For streamer A, most communication with their chat is saccharine and supportive. But for streamer B, the chat makes fun of and insults him the whole way, especially when he makes a mistake, and the streamer gives as good as he gets and makes fun of the chat the whole time too. And this is normal and fun engagement for all parties. From afar, streamer B's community looks extremely toxic, but that's the furthest thing from the truth. (What is abhorrent is when you take streamer B's chat behavior into streamer A's chat, and that is frowned upon by all parties.)
I believe that Internet activity should be diverse. There should be places where communication is an extension of real life (Facebook, emails), places of semi-anonymous and professional communication (Hacker News, certain subreddits/slack communities/GitHub), and then places where you should be able to go hogwild with whatever you want to say (fun subreddits, discord).
Twitter is unfortunately one of those places that has had all of these mixed together, which is why I decided to have multiple Twitter accounts targeting professional/personal hobby communication. From what I hear, kids are already catching on to this idea, with things like public/private instagram accounts.
The Internet is not all the same, and that's great. The Internet is not just real life, and that's great.
I acknowledge your argument that "diverse worlds" system may not be sustainable over time because it is unstable, particularly to new entrants who do have to learn the local lingo. I personally disagree, and would argue instead that that's something Internet users should learn to deal with. We've long had the solution for this: good moderation. You can have hyper-extensive moderation like r/askhistorians or Facebook support groups, you can have moderate moderation for spam and the like for things like generic hobby groups, and then you can have light moderation for spaces deliberately made for that. From heavy to light moderation, the burden shifts from the moderator to the user. Different Internet spaces should have different arrangements like this. While this solution may not be perfect, I argue that this is what works. I put this in contrast to another prevailing idea that all spaces should be like IRL (because that has maximal accountability and requires no context switching), and I argue that that's what will diminish the diversity and dynamism of the Internet.
I would further add a key ingredient to making this world is the tools provided by the platform to perform this moderation. That's one reason why Twitter, for all the good content it has, is also a mess, because there's no limit of cross-pollination of communities. Contrast this to, say, Reddit, which gives subreddit moderators significant powers to curate and protect their communities.
But a lot of it is not, and a fair number of people do become indoctrinated to alt-right ways of thinking through that sort of medium, some of them going as far as murder.
I'm not saying "Ban this sick filth!" but I am increasingly of the opinion that such places shouldn't just be left to fester, because the genuinely twisted do go there, and they recruit, and that has real world consequences.
(I have a whole separate argument on how I think the problem may be that the progressive movement has simply assumed that it has won the cultural war, and has stopped making effective arguments for itself, which inadvertently leaves room for alt-right type movements that are still actively recruiting and promoting themselves - e.g. the so-called "red pilling".)
Like everything, this isn't absolutely true. People meet others they met through online means all the time, be it for clubs, dating, or commerce (e.g. craigslist). There is a certain societal expectation for how strangers are treated and that requires a bit of trust in someone you've never met.
If we got rid of all of the bots, that wouldn't make Amazon easier for me to use. I feel like scammers have already figured out that humans themselves are relatively cheap to buy.
Bots aside, what we post on the internet (and believe in general) is a function of the information we consume. That information is increasingly consumed through the internet, and the internet (as typically used) is feeding us increasingly poor quality information. (To be consumed, then shared. Recurse...)
We can (and I think do) cope by having different levels of trust for different sorts of internet information. I generally have higher trust in an HN post than an Amazon review, for example. This is useful, but has a dark side: some of the things I (and likely, you) take as a sign to "increase trust" happen to correlate with things that have nothing to do with trust. I trust Amazon reviews more if they have good grammar and spelling, largely because I trust general internet information more when it has good grammar and spelling. But perhaps I shouldn't when e.g. buying a screwdriver (what does writing ability -- and the things it correlates with -- have to do with evaluating a screwdriver?). Coming up with social/political-flavored-information examples is an interesting (and worrying) exercise.
Consider this: a consumer that employs above average grammar and spelling skills to write product reviews may also be more skillful in forming and expressing valuable opinions and assessments of any product.
I think that people are naturally inclined to pick up on those signals, whether the correlation is real or not.
back to the Amazon example: an expert car mechanic may have worse spelling/grammar than a hobbyist, but I should probably trust their Amazon review of a car part more.
This is exactly how the liberal societies of the west evolved. But we don't remember that any more.
Whitewashing history to make it more palatable is dangerous.
Depends on the state, if I'm remembering correctly. New Jersey allowed certain women and African Americans to vote until (I believe) 1806.
Dirtying history to make our advancements larger is also dangerous.
> That all inhabitants of this Colony, of full age, who are worth fifty pounds proclamation money, clear estate in the same, and have resided within the county in which they claim a vote for twelve months immediately preceding the election, shall be entitled to vote for Representatives in Council and Assembly; and also for all other public officers, that shall be elected by the people of the county at large.
It's probably also worth noting that these restrictions applied to men, too.
Some of the English settlers were decent people with functional societies.
Some of them were basically the christian Taliban.
The heavily religious communities in the early US didn't display any of the tribal behaviour commonly seen amongst the Taliban (which is the cause of most of the governance issues in Afghanistan), just an extremely strict adherence to religion, which is in some ways a superficial similarity, as the average person in those times was as fanatically religious as only the most hardcore Christian today.
For instance, cousin marriage wasn't a thing at all (it is one of the main ways of maintaining tribal cohesion), while it is still extremely common in Afghanistan and the Middle East today.
Nit picking over the particulars of how they ran their society does not make them any less a bunch of fundamentalist jerks.
Like the US in its current coast-to-coast form didn't spring out fully formed from Liberty's bosom. It also had centuries of European innovation that led to it.
I'm not being completely serious, but I do find it extremely interesting that this whole idea of high-coercion at the start followed by a gradual easing off as people accept the new culture is one that has been tried explicitly. Or... at least claimed to have been tried ;-). Marx and Engels split no hairs about the authoritarian nature of revolution and when Lenin and Stalin later tried (presumably) to manage the situation they found it necessary to maintain that dictatorship in order to combat the ever present trend toward bourgeoisie and capitalism. But it was all supposed to end eventually.
For me, I see parallels between that and the current air of "It's a scary world out there. Trust your government to take care of you. Give us more powers so that we can make our communities whole." No authoritarian revolution, but a very real reach towards cracking down on bad people in the name of the community. And once utopia (though not a communist one) is reached, the power will simply not be necessary.
Having said all that, it was Confucius who said that if you are lax at the beginning and then become more strict, you will be seen as a tyrant. However, if you are very strict at the beginning and become more liberal over time, people will see you as magnanimous. When I used to teach at a high school, I used that idea and it did, indeed, work very well.
Unrelated, I was unaware of the dictatorship of the proletariat, but that explains to me why communism ends up as a system of control: because it started out as a system of control in the first place. And why it's liberal (at least in the 1910s) intellectuals that were attracted to it: they were the ones ruling society in that system, which, as every intellectual knows, is The Way It Ought To Be. Of course, it got co-opted by the political sorts because control in the real world requires weapons not ideas.
First, the internet has opened up, and its prominent platforms (which were always predominantly American-culture oriented) are now global and thus reflect low-trust cultures as well. Second, Goodhart's law: internet metrics exist mainly to be gamed, which is why they should change. Reviews worked for a while, when the internet had a very different audience; they no longer work, and that's normal wear and tear. Third, it's time platforms started paying specialists for crucial things like product reviews. Crowdsourcing no longer works. (Which explains why Wikipedia should maintain a very conservative editing policy from now on.)
Given that platforms earn a commission on every sale, why should they be trusted to procure specialists' reviews? Maybe they would prefer to hire professional charlatans to boost the reviews on everything, in order to drown out the legitimate opinions of aggrieved customers?
Back in the day, when you placed a long distance call, an operator would come on to ask you what your number was so the call could be billed to you. A single call might be a dollar or a few dollars, which would be ten to several tens of dollars in today's money.
No one would imagine that such a system would work today.
And that's probably rooted in people being fundamentally devalued and taken advantage of.
We are seeing worse income inequality than in The Gilded Age and people act like it's a natural and unavoidable side effect of the existence of tech, the internet, whatever. That's BS. People created these things. If these things exploit people, it's because people designed them to exploit people.
If people want people to be treated better, then people need to stop blaming machines for our social values. Tech merely magnifies those values. It doesn't cause them to exist.
Bill Gates said that automating an efficient system amplifies the efficiency and automating an inefficient system amplifies the inefficiency. I propose that you can similarly amplify whatever underlying social values you have, whether that's something good -- pro education! -- or something bad -- racism and misogyny!
And classism. We are using technology to amplify extractive economic practices, then blaming the robots as if Judgement Day had already arrived and Skynet is now in charge of our lives. (A la the Terminator movies.)
I think the article's approach of referencing historic strategies from government and social infrastructure is a bit useless in this context, because of how much the problem changes when scarce information becomes abundant, light-speed communication. I understand that they are trying to make the point that corporate overlords are bad, but maybe it would serve the purposes of the article better to just focus on the variables that matter, now that there are new capacities for transparency and accountability metrics based on some combination of historic behavior, web-of-trust node size, and other situational variables based on the medium -- like users that spend a lot of money, or consistently downvote known bad actors. Forcing users to have skin in the game with strict enforcement of transgressions is of course a reasonably effective, if coercive, strategy as well, and you can take the edge off this authoritarian approach if you pair it with some variant of restorative justice.
My approach is perhaps not rigorously useful, but for my personal conceptions of trust in a world of bad actors, I like looking at strategies from Axelrod's iterated prisoner's dilemma. Tit for Tat is famously a good strategy, and there's also a good strategy where you forgive after repeated cooperation but gradually increase the punishment of defectors, to n rounds for their nth defection. Though I should mention that tribalist collusion with other bad actors is unfortunately a very viable approach as well.
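A minimal sketch of that iterated prisoner's dilemma setup, using the standard Axelrod tournament payoffs; only Tit for Tat and a pure defector are implemented here (the "gradual" punisher described above is left out to keep the sketch short):

```python
# Iterated prisoner's dilemma with the standard Axelrod payoffs:
# mutual cooperation -> 3 each; mutual defection -> 1 each;
# lone defector -> 5, the exploited cooperator -> 0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then mirror the opponent's last move.
    return their_hist[-1] if their_hist else 'C'

def always_defect(my_hist, their_hist):
    return 'D'

def play(strat_a, strat_b, rounds=10):
    """Run two strategies against each other and return total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

# Tit for Tat loses only the first round to a pure defector,
# then is never exploited again.
print(play(tit_for_tat, always_defect, rounds=10))  # (9, 14)
print(play(tit_for_tat, tit_for_tat, rounds=10))    # (30, 30)
```

The numbers illustrate the thread's point: a trusting-but-retaliating strategy pays a small one-time cost to a pure exploiter, while sustaining full cooperation with its own kind.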
It's a totally self inflicted wound too. Clickbait, outrage bait, blatant political spin, stealthy retractions, journalists on social media starting mobs...
Bots do not even register in most people's minds compared to that.
Newspapers per 100 million people fell from 1200 (in 1945) to 400 (in 2014). This is from a Brookings study cited in a Wikipedia article on the topic. In 2013, the Chicago Sun-Times laid off all its photographers and tasked journalists with taking photos as well as providing the research and writing. How would the quality of your work be affected if you had to do the job of 2 people?
The classifieds ads business is dead, and subscriptions have been declining for years because "news on the Internet is free". The only "media" that makes serious money is talk radio, which isn't journalism so much as diatribes of political invective.
As it turns out, that's what people are willing to pay for, or at least sit through ads for. If anything, "the media" is giving the people what they want.
> Google Made $4.7 Billion From the News Industry in 2018, Study Says
Which is the entire point of the article you linked. If you're gonna be snarky, you should really check to make sure your information is accurate.
tl;dr the number is total fabrication
Did people ever buy news because it was news? Or was this decline inevitable as actual entertainment was always going to eventually be able to offer a better match for what people actually buy?
If this doesn't scare the s--t out of you, I don't know what would.
I'm not sure that's consistent with the evidence. People are actively getting worse and worse at modeling the other "team", which strongly suggests that polarization is quite real indeed and not just a matter of different labels.
> very gentle and good-hearted.
That has not been my impression. Just one data point.
And of those trying to understand, some were trying to understand, and some were trying to understand how to get votes better.
On a person per person basis there's dehumanizing caricature, but I wonder if you compare it to the party to which the person claims to belong, if it would be as out to lunch.
There is no mass incentive for thought-out, contemplative, long-form journalism. In my opinion, we have dug our own grave - the truth is that the primitive, animal parts of our brain vastly overpower the analytical parts of our brain, and so your 'outrage bait' is a news (or tech) executive's bonus for the year. If the only metric by which we measure anything is $$$, then, well...
It's tailor made for propaganda. Joe Schmoe who is actually there doesn't know how to inflate his likes and SEO his account. His voice gets drowned out by the people who are getting paid to push a particular viewpoint. There is no editor providing oversight. No ethics board. No neutral point of view. It's just whomever shouts the loudest and gets there first.
And that's exactly how the news industry has always worked, since the invention of the press (actually even before, though it wasn't an industry back then).
And that may in fact be another reason why the media as a whole is doing so poorly. In the old days, a fair few people read the papers for this stuff. Nowadays they can get the same information elsewhere, without the stories they're less interested in taking up space.
I even experienced being the target of a report (well, the company I work for), and they did an awful job. I felt that the story they made up was only tangentially related to reality.
My conclusion is that it's less about demanding - the way we fund news, i.e. open and competitive market, is structurally unable to support good journalism. Usually problems like these are solved by governments setting standards and giving funding, but journalism is a special case (it's perceived by many as protection against government overreach), so with that option out, I have no idea how to even begin solving it.
Of course it's still a problem. The point is that it's actually a much bigger problem than you implied. People blame the media like it's all just down to some greedy jerks making malicious decisions, and if only we could replace them with someone with integrity, everything would be okay.
The fact is that we've created an environment where it's nearly impossible for honest, non-sensationalized news to exist. Putting all the blame on the media is like blaming a starving man for stealing bread. They're culpable, certainly, but you're missing the root cause.
The media isn't one unified conglomerate. It's like equating the entire tech industry to just SV gig economy startups.
No, it's really not. In a nutshell, the root cause is the currently pervasive idea that good writing should be available completely for free.
The other problem is that most people can't tell good writing apart from bad, but that problem is far older than all technology (apart from writing itself, of course).
Perhaps most people can't tell good writing from bad, but the HN crowd is more able than average to distinguish the two. Yet the attitude here boils down to "writing isn't a real job -- if you don't like working for peanuts, then STFU and get a real job!" HN members actively find ways around pay walls, more aggressively than most use ad blockers etc. Most people here are well heeled and can afford to pay for subscriptions, but the dominant attitude is "fuck no to that -- writing should be free!"
Edit: having read the "hand licking incident" I believe it did give me value and I would be willing to pay 1 dollar or so as thanks (not implying that I got only 1 dollar value, or even that I got as much as 1 dollar value. Just a number that seems reasonable).
There is the matter of how: to do it I would probably have to spend much more than 1 dollar in effort.
And there is the matter of scale: I want to pay you for your work in giving me an interesting insight. But if we started to do that massively, people would optimize for "things that seem like insights", not for insights... (see https://slatestarcodex.com/2014/07/30/meditations-on-moloch/)
Even though I removed ads from most of my sites in part to respect the boundaries of people who hate ads, I mostly get endless excuses rather than funds.
When I ask "How can I monetize my work?" people don't actually have a solution. They seem to think if you get enough traffic, that automagically leads to money, overlooking the fact that this concept only really works for an ad-based model and widespread use of adblockers kills it.
I sometimes get told "Product sales of some kind." Never mind that this is another form of whoring out my writing to the need to sell something other than the value of the writing per se, and that people on HN equally bitch about the evils of content marketing and how it is one of the things ruining the internet.
I've heard these arguments for years. I've tried to find a means to make money without being evil in some manner. The result for many years now is virtuous and intractable poverty.
When push comes to shove, the real answer boils down to: We expect large quantities of quality writing on a regular basis and we refuse to pay for most of it. We also will get up on our high horses and get all offended if you dare to use expressions like _slave labor_ to describe our entrenched expectations and the de facto outcome. Don't confuse me with the facts. My mind is made up!
It's quite tiresome to keep hearing the same BS over and over while I continue to live in poverty and yadda.
Edit in response to your edit: I call bullshit. If you honest to God want to give me a single dollar, you can do so via either PayPal or Venmo right now without further hypothesizing about how giving me a single Goddamned dollar is some new means to ruin the internet, along with every other means to pay for writing. Because beneath all the hot air is the fact that most people simply expect slave labor to create good writing. If this weren't true, I could pay cash for a cheap house in my small town and quit whining on HN about being poor.
(1) I meant to be a one month only patron.
Also, take my data point and do with it as you will. Call me evil/slaver/bullshitter/whatever. I was trying to help and also to discuss, but I no longer feel inclined to do either.
> I was trying to help and also to discuss, but I no longer feel inclined to do either.
Yes, this is par for the course: People get mad at me for effectively communicating that there are no good solutions here, no matter how hard I try. And it becomes a new excuse to blame me for my financial problems and declare "Not my problem! I'm done here!"
I genuinely bear you no ill will, but you and I are posting on a very public forum, so I think there are larger things at stake than your feelings. Other people need to genuinely understand how this works if it is ever going to change and being too polite about this fails to get it through to people.
They just keep arguing that I must be wrong, there must be some means for someone to make money as a writer that doesn't violate any of their constraints for how to make money as a writer without being evil. And, besides, their desire to have an ad-free, whatever whatever internet is far more important than my financial difficulties.
I got these kinds of arguments even when I was literally homeless and going hungry, which I found completely mind-boggling. But, to their minds, my homelessness was merely evidence that I was incompetent and there was no reason to take me seriously, not evidence that, no, seriously, most writers really just can't make the money they need in the current climate.
You have a good day. I know I can be hard to take.
That's a reasonable point, but people mostly don't tell me "It's a trust issue. I would pay if I believed I could trust the motives of the author/source."
The overall attitude expressed is consistently "I'm simply not going to pay for writing. If writers want a middle class income, they should get a real job."
Once in a great while someone will agree with the general point that if you want to be able to trust what an author is saying, you need to pay them for their writing and not expect them to monetize with ads or sponsors because that introduces a conflict of interest. One person cited Consumer Reports as an example of this model and why they pay for a subscription.
But that's the exception, not the rule. Most comments here consistently express the attitude that they simply will not pay for writing and writing is not a real job.
At the same time, journalists get attacked for not doing their job adequately well, etc. It mostly falls on deaf ears to point out that journalism simply doesn't pay what it used to and there is a cause-and-effect relationship between the lack of adequate pay and the lack of quality writing.
There's more to trust than belief in the veracity (or lack thereof) of a statement. When you trust a writer, you not only trust their claims, you trust that the substance of their writing is worth your time. The attitude you highlight suggests to me that many people do not see a lot of writing as being worth their time.
Unfortunately, people's judgements of value can be strongly influenced by price. When the quantity of readily available, free writing increases dramatically, people's judgement of its value goes down. Simply put, they no longer trust in the institution of writers as a medium.
It also got copied and reblogged, sometimes legitimately with my permission and sometimes not. For me, it is the first hit if you google the expression "the hand licking incident." It seems plenty of people found the piece worth reading.
It made not one thin dime.
I spent around two weeks on that piece. It's at least my third attempt at a parenting blog. I get paid for freelance writing, have years of experience blogging, about six years of college and if karma count is anything to judge by I'm a "respected member of the community." (My old account has 25k karma and this one currently has 19k karma. If it was all under one account, I would be decently high on the leader board.)
It has no ads on it in part because I would rather not be a shill for god-knows-what. I would rather be paid for my writing. But it also has no ads in part because I know how much the internet in general and HN in specific hate ads these days. It is supported via tips and Patreon.
I'm quite open about how much I struggle financially and that I make my living as a writer in part because I'm medically handicapped and can't do a lot of so-called "real jobs." Given that we have worse economic inequality than in The Gilded Age, "get a real job" is a specious argument anyway.
The reality is that the current attitude is that writing simply should be slave labor. Period. If you don't like it, go do something else. Not our problem that you are literally homeless and going hungry, bitch.
Meanwhile, five million monthly visitors to HN expect the front page to be filled daily with good writing and they bitch and moan about how there isn't enough good stuff on HN and the front page moves too slow and on and on.
I don't particularly care to continue this discussion further. It's not likely worth my time.
(Edit: Not currently homeless, but I was for nearly six years. I still struggle with food insecurity and general poverty.)
'Bout sums it up.
I feel like since its inception, savvy users have always applied a healthy amount of skepticism and made their own choice whether or not they believed something "on the internet". This included both facts on websites and conversing with other humans on chat.
Exactly. It started when big corp tried to lure in more users by giving them free cheese, making them believe that the internet is a safe place to post your name, address, work life and sex habits. Everyone at the time thought it was crazy that people did that, and yeah they were right.
Otherwise you could easily buy your way above Wikipedia, run by a non-profit organization (Wikimedia), which ranks at or near the top for every query.
Quora, or some other VC-backed knowledge service with a couple hundred million to vaporize, would overtake Wikipedia through the direct purchasing of popularity. Then others would quickly follow, wiping Wikipedia entirely from the first pages. That can't be done. If it could, private equity, via Answers.com and other such very low-quality sites, would already have done it, seeking billions of dollars in return on such positioning.
I wouldn't trust a random stranger on the street with, e.g., my money or private data. However, I have no problem trusting them to honestly and truthfully answer questions like "how do I get to X from here?" or "here, can you please mark my bus ticket in that marking machine over there and hand it back to me? I can't quite reach it in this crowded bus".
On the internet, however, I don't trust a random stranger (e.g. a youtube commenter, or a reddit user) with answering "2+2" correctly, let alone with something that involves my possessions, no matter how low-value they might be.
General purpose headline for attracting views, tying into the seductiveness of a general perception of doom and gloom.
I remember YouTube tried this and failed due to pushback.
What's the driver of this clickbait/instant gratification/hot take culture that's developing? Is it just ads?
The Boris Johnson fiasco has got journalists openly saying "who are the people who called the police when they heard shouting and screaming next door? We need to find and expose them".
Requiring ID means the culture war side with the best harassment capability can take over the space and decide who becomes the story and who gets made unsafe.
Could you provide some context?
> "who are the people who called the police when they heard shouting and screaming next door? We need to find and expose them".
Could you provide some links?
(sorry, not doubting you, truly uninformed)
In case you don't recall, Facebook also requires "Real Names," and we saw how much that helped.
The driver is human nature. The internet didn't invent the conspiracy theory, or the clickbait news article. That stuff existed almost since the first newspaper. False accusation is specifically called out in the 10 commandments. Lying and cheating is human nature and likely requires some checks in place when it gets to the point of harming others for your own profit.
It's essentially a race-to-the-bottom situation. Media that don't employ clickbait will lose out to others that are making tons of money catering to instant gratification.
I don't disagree with the premise, but this solution seems vague at best. Authentication and privacy? Great things to be sure, but where is the connection? As the article itself notes, even "verified" purchases could be (and sometimes are) done by paid shills.
I guess the proposal is to create laws that prohibit lying on the internet? This person has clearly fallen on the wrong side of the authoritarian argument if that's the solution they came up with to prevent power consolidation in big business.
We are due for tough times ahead.