
I feel like all of this debate is painfully missing the point. The problem is not some content moderation policy; the problem is that social media has changed social conversations from small local interactions into monstrous virtual fight clubs among millions of people simultaneously, where the most extreme opinions are rewarded with the most attention. Boring level headed opinions used to at least have a chance of rising above the noise. Not anymore.



The other problem is that the business model of social media is based on generating "engagement" at all costs, so the platforms are built to encourage outrage as it generates lots of engagement, among other addictive behaviors (the infinite "algorithmic" feed for example). Social media was supposed to be a tool that serves people but its current business model encourages it to work against people.

There were plenty of other technologies that could've been used to organize large-scale virtual fight clubs (forums, BBSes, chatrooms, maybe even the telephone) but this didn't happen because nobody actually wanted to foster such toxic behavior.


> nobody actually wanted to foster such toxic behavior

I think no one realized you could make disgusting amounts of money by fostering it. There have been plenty of flame wars on BBSes and forums, but the tension between "engagement" and "quality" always favored the latter in small, private communities (HN is a good example). However, when it comes to Facebook and Twitter, the former is always favored (due to market forces, shareholder interests, etc.).


"BBSes and forums"

Observe that flamewars on BBSes and forums tended to drive people away, which in turn tended to confine them through natural mechanisms. Our current systems encourage and fan them, and by profiting from them, create a system that sustains them indefinitely instead of naturally isolating them to just the people who actively want to participate.

As someone who has a low-grade hobby of keeping track of this sort of thing, I think one of the major challenges of trying to structure communities is that the most powerful drivers of what is going to happen are the low-level decisions that create persistent second-order effects on the community's nature, and this is one of the somewhat unusual cases where the second-order effects utterly dominate the first-order effects. In this case, you can read "first-order effects" as "the things the developers intended to happen", which I think are dominated by the structural effects of decisions that weren't intentional.

If you create a community that literally profits from conflict and flamewar, you'll never be able to fix that with any number of systems you add via first-order effects. The underlying structure is too powerful. Facebook can't be fixed with any amount of "moderation"; the entire structure, from the very foundation, is incapable of supporting a friendly community at scale. Until Facebook stops profiting from engagement, they will never be able to fix their problems with toxicity, no matter how many resources they pour into naively fighting it via direct fixups.

(Now, I have questions about whether a friendly community at the scale of Facebook is even possible: https://news.ycombinator.com/item?id=20146868 . But even if it isn't possible, that doesn't change the fact that Facebook is obviously not that solution.)


I'm always reminded of this quote from Larry Wall, the creator of Perl:

"The social dynamics of the net are a direct consequence of the fact that nobody has yet developed a Remote Strangulation Protocol."

It's tough to get all this right. Humans are OK at dealing with one another in person most of the time, but that mostly doesn't carry over when we're behind a screen.

I have no idea what the solution is... it's a people problem, so probably not a strictly technological solution.

HN works fairly well because of the hard work of the moderators.

FB is kind of maddening to me. You can't just put it in a 'good' or 'bad' bucket.

Some things I get a lot of value out of:

* Being able to keep track of acquaintances from all the places I've lived. There are a lot of friends and family I have in Italy that I can't see often, and I do enjoy hearing what they're up to.

* As a tool for organizing it's been a very handy, low-friction way to get people involved in some political issues where I live in Oregon.

On the other hand, lately it has also been a source of stress. The sheer amount of anti-science, poorly thought out political comments and plain hatred is really depressing at a time when a lot of things are not going well.


> it's a people problem

I dislike this trend (especially in Silicon Valley) to blame problems on the users - e.g. creating a startup, and becoming frustrated with users when people use it "incorrectly." Technology is supposed to be used by people, not the other way around. When technology is using people for its own interest (in this case, ad revenue), then we have a real problem with the technology, and it is absolutely not a people problem.

Only 80 years ago, plain images on posters could be used to motivate people to die for their country in World War II. 400 years ago, images were so rare that it was enough to paint church walls with them to fill people with belief in God and afterlife. And as of the last ten years, we're suddenly expecting people to drop their belief in images and use their "rational" logic to see through fallacies, saying it's a "people problem" when they can't? It's just too fast for evolution, and the onus is on the ones who create the technology that disseminates images to be careful, lest they create the perfect conditions for a society to fall apart because they were too busy looking out for their bottom line.


>The sheer amount of anti-science, poorly thought out political comments and plain hatred is really depressing at a time when a lot of things are not going well.

On this front, I prune my contacts when my feed starts stressing me out. It used to be you had to totally unfriend someone, but Facebook wised up and now you have a variety of options. You can put them on a 30-day timeout so their posts won't show up on your feed while they get their rant on, or you can unfollow entirely while remaining friends (so you can still actively check on them but won't get passively bombarded with dumb stuff). You can also opt out of seeing content from specific sources they share if the only problem is they're sharing dumb links.

It's still not perfect, but keeping in touch with people who post dumb stuff is always gonna be a balancing act and Facebook's come a long way in facilitating that act even though most of the options are not obvious (most of the above are found in the ellipsis icon in the upper right of every post).


"HN works fairly well because of the hard work of the moderators."

Moderators and size limits, the latter keeping the amount of work small enough that a couple of moderators can handle it without being subjected to the sort of stuff Facebook moderators deal with. Obviously, not having images or video also helps. (Though I recall some times when Slashdot trolls were taking some good swings at Can't Unsee even with those limits.)

HN is on the upper end of what a community structured in the way it is can handle, I think, and it has taken some tweaks such as hiding karma counts on comments. I'm not deeply in love with reddit-style unlimited upvote/downvote systems... in their defense, they do seem to scale to a larger system than a lot of alternatives, but it comes at a price. I do fully agree it tends to create groupthink in a community, as a structural effect, though I think that's both a positive and a negative, rather than a pure negative as some people suggest. Some aspects of "groupthink" become "community cohesion" when looked at from another point of view.

Never thought about it that way, but maybe that's why a reddit-style karma system does tend to hold relatively large communities together.

But even as one of the more scalable known systems, it still breaks down long before you hit Facebook scales, or "default Reddit subreddit" scales.


> There are a lot of friends and family I have in Italy that I can't see often, and I do enjoy hearing what they're up to.

In the old days we had to actively do that using letters, or emails, or phone calls. I think it was a better system because it forced you to choose who you cared enough about to stay up-to-date with. Minimalism isn't just about things, it's also about relationships.


I only have so much time, and FB makes it easier to keep in touch with more people. Sure, I'll find the time for really good friends, but it's a benefit to be able to keep in touch with more people who I enjoy having in my life.


If I were king of Facebook (or of a social media company that had its network) and could change things for users without worrying about the company's revenue, I'd do two things.

1. No links to outside content.

2. Mandatory deletion of all historical data with a max retention option no longer than 1 year with a default of 30 days. (Let users pick the 'fuse' length within this time frame).

I think that would double down on the things I like about it (keeping track of acquaintances like you mentioned, handling events, etc.) - while also removing a lot of the things I don't (arguing about news, targeted ads based on historical data).

That said I'd just like to make it easier for people to control their own nodes (https://zalberico.com/essay/2020/07/14/the-serfs-of-facebook...), but I also recognize that getting the social element to work in a federated way is not easy. Maybe Urbit will pull it off eventually.


I kinda wonder about just having a no-politics policy. Allow user reports... If N users report a post for being political, or it triggers some regex, penalize the post in some way. Facebook doesn't have to be about politics; Instagram wasn't for a long time, but with the onset of the BLM movement, I've seen my timeline filled to the brim with politics, and at one point people were even saying that no one should be posting non-political content because it shows how privileged you are to be ignoring the movement. I don't use Facebook for the politics... I just want to know what's going on in my friends' lives. You can still have engagement without politics.
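Roughly, I'm imagining something like this toy Python sketch (the patterns, threshold, and penalty are placeholders I made up for illustration, not anything any platform actually exposes):

    import re
    from dataclasses import dataclass

    # Arbitrary placeholders for the "N reports or a regex match" idea.
    POLITICS_PATTERNS = [re.compile(p, re.IGNORECASE)
                         for p in (r"\belection\b", r"\bsenator\b", r"\bprotest\b")]
    REPORT_THRESHOLD = 5    # N user reports before a post gets penalized
    RANKING_PENALTY = 0.5   # multiply the post's feed score by this factor

    @dataclass
    class Post:
        text: str
        score: float = 1.0
        reports: int = 0

    def maybe_penalize(post: Post) -> Post:
        """Downrank a post if enough users reported it or it matches a politics pattern."""
        regex_hit = any(p.search(post.text) for p in POLITICS_PATTERNS)
        if post.reports >= REPORT_THRESHOLD or regex_hit:
            post.score *= RANKING_PENALTY
        return post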


While that sounds appealing, that just moves the goalposts. Now, who gets to define what's politics? Is mentioning global warming politics? Is advocating for wearing a mask during COVID-19 politics? And the meta-discussion of what content constitutes politics is also inherently political.

As for user reports: I would expect the same kind of dog-piling you see now, with people flagging people/brands/content they don't like politically as "politics". Post a picture of The Origin of Species? Politics! Post a link to Chick-fil-A? Politics! Etc.


Ultimately, politics aren't the issue. The lack of clear, consistent, enforced rules, and of real consequences for breaking them, is the problem.

People aren't encouraged to think twice before they post because there aren't going to be any significant consequences for breaking the rules.

Even if you somehow manage to get permanently banned from a social network, it's very easy to come back; it doesn't cost anything besides spending some time creating a new account.

From a business perspective it makes sense - why would you ban an abusive user that makes you money? Just give them a slap on the wrist to pretend that you want to discourage bad behavior and keep collecting their money.

Proper enforcement of the rules with significant consequences when broken (losing the account, and new accounts cost $$$ to register) would discourage a lot of bad behavior to begin with.

You could then introduce a karma/reputation system to 1) attach even more value to accounts (you wouldn't want to lose an account it took you years to level up and gain access to exclusive privileges) and 2) allow "trusted" users beyond a certain reputation level to participate in moderation, prioritizing reports from those people and automatically hiding content reported by them pending human review (with appropriate sanctions if the report was made in bad faith) to quickly take down offensive content.
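To make that concrete, here's a toy Python sketch of reputation-gated reporting (all thresholds, penalties, and names are made up for illustration, not any real platform's API):

    TRUSTED_REPUTATION = 1000   # reputation needed to become a "trusted" reporter
    BAD_FAITH_PENALTY = 250     # reputation lost if a human reviewer rejects the report

    class User:
        def __init__(self, name: str, reputation: int = 0):
            self.name = name
            self.reputation = reputation

    class Report:
        def __init__(self, reporter: User, post_id: int):
            self.reporter = reporter
            self.post_id = post_id
            self.auto_hidden = reporter.reputation >= TRUSTED_REPUTATION

    def file_report(reporter: User, post_id: int, hidden_posts: set) -> Report:
        """Reports from trusted users hide the post immediately, pending human review."""
        report = Report(reporter, post_id)
        if report.auto_hidden:
            hidden_posts.add(post_id)
        return report

    def review(report: Report, upheld: bool, hidden_posts: set) -> None:
        """Human review: uphold the report, or sanction a bad-faith trusted reporter."""
        if upheld:
            hidden_posts.add(report.post_id)
        elif report.auto_hidden:
            hidden_posts.discard(report.post_id)             # restore the post
            report.reporter.reputation -= BAD_FAITH_PENALTY  # sanction the reporter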


Likely to produce a weird world where a bunch of demonstrators get shot downtown and... you never hear about it.


You can't solve social problems with technology, only policy. Companies like Facebook should be broken up and regulated such that the whole model of profiting from social division is removed. This would be highly beneficial to society and is an appropriate role of government.


> There were plenty of other technologies that could've been used to organize large-scale virtual fight clubs (forums, BBSes, chatrooms, maybe even the telephone) but this didn't happen because nobody actually wanted to foster such toxic behavior.

“Nobody” is leaving out some pretty notorious sources of toxicity (e.g. 4chan). I think a key difference is that these huge platforms dramatically increase the reach of those communities by giving them much better tools, highly-available servers, etc. and in particular mainstreaming them into the same place everyone else is, making it easier to recruit and share outside those communities.

In the forum era, people had to learn about a particular site, learn the community, maybe create an account, etc. to know these existed — now it's just one Facebook share away and there's an advanced “engagement” system ensuring that anyone who likes something widely shared will continue to see other content from the same source without needing to seek it out. Brigading was most noticeable with a wave of new accounts, and there wasn't an ML system using that activity to push the content at unrelated users.


4chan is only 4 months older than Facebook and is hardly that profitable. Which is what changed: online dumpster fires were not attractive to advertisers, but add a veneer of a social network and some basic location/demographic data and suddenly things change.


Facebook really had two periods because it was limited scale prior to mid-2006, when the number of users grew dramatically.

I think the more important point is really the advertising angle: taking a bunch of small communities and giving them high-quality hosting powered by a billion dollar company along with a pre-developed audience.


And some moderation. 4chan wasn't ever going to attract ad revenue because nobody at Coca-Cola wants their brand run alongside a goatse link.

But massive political fights aren't something that the advertisers (until recently) see as bad imagery for their brands.


4chan has moderation. They just tend to take their duties exactly as seriously as everything else on the website.


You could say that some topics discussed on 4chan are "toxic", but is the discussion in general also toxic? There are no cliques, no friends and foes, no flame wars, just a background level of ad hominem (i.e. short, very formalized phrases used to express disagreement with the message). There is no point in slinging insults at other users and no "honor" to protect - everyone is anonymous, there is no identity, not even a pseudonym or post history. It's incredible how much of the discussion (online and in real life) is about asserting one's status and preventing others from asserting theirs. Anonymous image boards are free from this.


Not every message is terrible, yes, but it’s not without cause that it has such a bad reputation. The idealized version you describe is far from representative.


They certainly fostered toxic behavior. The social interactions we see on social media—even in HN from time to time—are familiar to anyone who frequented forums in the past. The ad-supported business model industrialized it, though.


My point is that in most cases it wasn't the desired behavior, just sometimes tolerated if they were high-profile users with connections with the moderation team and/or otherwise provided valuable contributions.


Yep, generates outrage, feeds on it and turns it into money. The added insult is that advertising money used to be used to fund investigative journalism and real community news coverage at local newspapers in every city and town in the US. Anyone know what that money is spent on now?


> the business model of social media is based on generating "engagement" at all costs

I haven't seen evidence that outrage driven engagement is profitable, though the statement is frequently repeated.

Is it possible that divisiveness is an intended effect?

We know that governments sow discord to achieve selfish ends. Why wouldn't similarly powerful business interests?


> I haven't seen evidence that outrage driven engagement is profitable, though the statement is frequently repeated.

Let's assume that you are able to create a social network the scale of Facebook and network effects aren't an issue (let's imagine a magical solution that interoperates with existing social networks in such a way that accounts work on either side and content is visible from both sides) with the caveat that you instantly ban accounts that participate in political flamewars, intentionally spread misinformation/fake news, etc. You're going to end up banning a significant chunk of people that would otherwise make you money if you just turned a blind eye to the problem like current social networks do.


Is it true that failing to ban is the source of the divisiveness? I haven't used Facebook before, but it seems like the mechanism is the feed, which prioritizes divisive content.

Even as a relatively senior employee, you may be unable to discern intent. You might just get an incentive scheme that compensates you for engagement rather than profitable engagement.


The feed brings visibility to offensive content, but ultimately someone has to create that content in the first place. Even if you kept the feed as-is, as long as you had bulletproof moderation that would nuke any offensive content (or other mechanisms that are effective at discouraging people from posting such content), there wouldn't be any bad content for the feed to recommend.


> The other problem is that the business model of social media is based on generating "engagement" at all costs (emphasis my own)

Well, FB and the like have been bringing the ban hammer out a fair bit. Most recently was the QAnon kiddos.

If they really were keeping engagement up at all costs, then banning a lot of accounts, with all the hand-wringing that comes with it, would be a strange thing to do time and time again.


Yes, I think some people are becoming uncomfortable with unfettered free speech because the distribution model has changed in ways that are difficult to control and which are elevating what some people view as the "wrong" messages.

Prior to this era in which we're constantly online and connected, for a message to be heard it usually had to go through a professional editor at a newspaper, radio station, publishing company, or TV station. If you had an opinion you wanted to share, you had free speech but you probably didn't have a platform. Now messages are global, immediate, viral, and permanent.

Max Wang's video makes a strong case that Facebook's content policy, and Zuckerberg's concern with adherence to it, amount to a stringent ideology which lacks the flexibility to address clear cases that it should cover but doesn't. In other words, it lacks good judgement.

I think it's going to be a long time before an automated system can exercise good judgement. Obviously having a professional editor review every tweet is impossible. So in the meantime, I think we will see a continual elevation of tension between people who value free speech at any cost and people who think that certain messages shouldn't be given a platform.


> I feel like all of this debate is painfully missing the point.

Your point also applies to news -- articles these days miss the point by design. You get more clicks by being provocative and hyperbolic. And yellow journalism isn't really new, but its comeback has been triumphant.


Over the last few weeks, Bing's news feed, the one that appears directly on the main page of bing.com, has turned into a tabloid cesspool (at least in its French version). Gradually but very quickly, it went from something reasonable, rather faithful to the range of important events (there could be a 'people' entry here and there, but why not, after all, as long as it remains very minor), into pure clickbait. The titles are clickbait ('incredible', 'exceptional', etc.) and the content behind them is crap. It is possible that they put first the items that already get more clicks.

If I translate the first entries' titles right now:

* Amazing confidences -> an actress

* Big controversy -> a female singer

* Corruption suspicion -> OK, a politician

* Accused of lying -> an actor's wife

* a few trivialities and filler items

* Finally calm -> a dead singer's wife

* Turkish activities -> OK, international news

* Only from that item on do you get a regular mix of normally interesting/important news items. You had to click the Next arrow once or twice to get there; everything or almost everything on what I'd call the front page was clickbait crap.

So, that's a feed, but the content comes from actual newspapers, and if I go to my local newspaper, half of the top content is also often national/international 'people' crap.


That's hardly surprising. Both social media and regular media are now part of the same competition to get more clicks. The whole model of online ad-driven content is rotten and creates the worst incentives.


It's not so simple: social media is how regular media get most of their clicks. Not that I'm a huge fan of social media companies, but the constant push by regular media companies for social media companies to promote "authoritative sources" (i.e. regular media) and the accompanying hit pieces make a lot more sense in this context. To me, these articles are being honest in their criticism when they call for less social media in general, not just more control of the existing social media. This is one of the latter.


Indeed.

I have a theory, that many of the high-profile "advertiser boycotts" recently aren't really about content moderation. I suspect that companies simply don't want to promote their body wash in the middle of a constant screaming match about wearing masks.

Being on Facebook bums me out. It makes me dislike my extended family, and old colleagues or people I knew from school. I've been unable to completely wean myself off it (the "engagement" hooks are strong), but I genuinely feel dirty and unhappy with myself after browsing the site. It certainly doesn't foster a positive view toward brands that I see advertising there.

I wonder if: (1) that perspective is common, (2) large advertisers have figured it out, and (3) their "boycott over hate speech" is really just a clever way to get social-trend-extra-credit for a move they'd like to make anyway?


One of the canned responses we have observed Zuckerberg make to the media has been something along the lines of "Facebook gives people who would not otherwise have a voice a chance to be heard" (words are mine, not theirs).

The web already did that. By advocating that the world use only a single website to "speak", we allowed a single website owner to create a business by charging its users for the opportunity to publish their "speech" to segments of the website's audience.

The reason most of those millions of people "joined" Facebook was not to voice their opinions to an audience. It was to communicate with family and friends. A new mode of communication.

If those people who wanted to voice their opinions to audiences had their own websites, who would visit them? Would they have an audience of millions?

(The term "millions" is used here as an expression for "many", not as an accurate estimate of the numbers in question)

I have heard that there is talk in the US media of doing away with the US Postal Service. AKA Ye Olde Mode of Communication. It is unsettling how far things have come in eroding the American's right to be free from commercial or ideological pandering. There used to be a legally recognised principle that an American could say "No more", and then the pandering must stop. Try that with Facebook or any other "tech" company.

https://en.wikipedia.org/wiki/PS_Form_1500

https://en.wikipedia.org/wiki/Rowan_v._United_States_Post_Of...


You are right, that is the central point. We have not been able to scale the small-scale interactions our species evolved with to include thousands, let alone millions of people. Perhaps it's impossible to do this with brains such as ours. Nevertheless, the toxic result we get when we do try is incredibly engaging, which is all that is required for the brokers to make money. This is not very different from a drug addiction. In both cases the victims are compelled, despite themselves, to engage in something that is harmful, while the drug dealers have absolutely no incentive to reduce or eliminate the damage.


I don't think it's painfully missing the point, it's just studiously ignoring that they've let the cat out of the bag and focusing on ways to mitigate the problem.

Social media is a natural consequence of increased communication of larger groups, and as an emergent phenomenon based on everything else you aren't going to get rid of it. So instead we focus on how to mitigate the problems and live with it, because even if you shut down every single social network today, there would be new popular ones used by almost everyone a year from now.


I am developing a site where the landing page only shows news about topics from sources you have chosen to see. There will be some limited interaction with those news stories. On a shared page, everything is brought together and you'll see what news sources other people are reading.

I am hoping this allows people to get the news they want, interact with it and see what news other people are interacting with... without the flame wars and hate brigades that Twitter and FB prompt
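The core of it is simple enough to sketch in Python, roughly like this (the Story record and the per-user selections are made up for illustration, not the actual implementation):

    from typing import NamedTuple

    class Story(NamedTuple):
        source: str
        topic: str
        title: str

    def personal_feed(stories, chosen_sources, chosen_topics):
        """Landing page: only stories whose source and topic the user opted into."""
        return [s for s in stories
                if s.source in chosen_sources and s.topic in chosen_topics]

    def shared_page(readers):
        """Shared page: count which sources other people are reading (source -> readers)."""
        counts = {}
        for sources in readers.values():
            for src in sources:
                counts[src] = counts.get(src, 0) + 1
        return counts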


sounds like Reddit. Or Digg before that. Or RSS feeds before that.


Google Reader had a limited kind of social networking where you could comment on the articles and your friends saw the comments if they were subscribed to the same feed, but there were no replies to comments and they never got outside your contacts. No popularity contests with Like buttons or similar.


Yeah, not too different. No user submitted links though and no comment system (at least when I first make it live). To be honest, I'll probably be the only user!


"The only winning move is not to play."


Satanic ritual panic and several other moral panics say hello. Loud and extreme opinions and bullshit have always been rewarded with more attention because the adherents are more passionate.


> where the most extreme opinions are rewarded with the most attention

Which is amusing given that even here we have to deal with an upvote mechanism that boosts posts expressing strong, extreme opinions.


Personally, I think the retweet model is slightly more problematic than the upvote model. For one, upvotes have an equivalent downvote (usually), which a retweet doesn't. If a famous account upvotes a post and I downvote it, they basically cancel each other out. If a famous account retweets a post, then millions more will see it thanks to that one button click, and nothing I can do balances that.

Facebook also thrives on a similar "re-share" model, which leads to the most toxic content spreading extremely quickly and bouncing to new audiences, whereas no matter how many times I upvote this thread, only a few % of the world browses HN.


It seems to me HN rewards views that are "interesting" but the most highly rated comments seem to be pretty mainstream nerd stuff. Do you see evidence to the contrary?


This is exactly the point I'm trying to explain to my friends that are outraged that Facebook didn't censor Trump's post. I think Facebook is one of the worst companies out there, but for other reasons.

The issue with Facebook is not about censoring the post that we don't like, it's about removing that polarization that comes with a platform that wants to get you hooked so they can sell you more ads.

Facebook and other social media are making us HATE each other because that's what they make money on. The debate should be around how can you build an ethical social platform that values your time and opinions.


I do agree that the ad-based business model is designed to have us hate each other because that is what drives engagement. So I'd like to get your thoughts on this thought experiment. What if it was illegal for social media platforms to base their revenue stream on advertising? Instead, it is mandated by law that their primary revenue is based on a subscription model? "Want to use Facebook? You'll need to pay $4 a month to use it." What do you suppose the nature of Facebook would be like then and, to an extent, its impact to society at large? Perhaps better? It has been my impression so far that services that require paid subscriptions over free usage tend to respect privacy better, have higher quality content, and are less addicting in usage.


I would argue that the problem started with the introduction of “youth marketing”. Marketers saw “getting them early” as a way to have a customer for life and therefore make more money overall, and they dove in.

This resulted in prioritizing those things that younger people are drawn to, such as novelty and attention.

Imo, this resulted in a huge cultural shift that has now resulted in “boring level headed opinions [based on years of experience]” being lost in the sea of people fighting for more attention through more novelty.

The internet just threw rocket fuel on the problem.


This is so right. But are you aware of any ideas about what to do about this? It seems like everyone has just accepted that's the way it is now.


Unfortunately this doesn't work unless a massive group of people does it, but: stop sharing things that make you angry. They make you angry so that you share them.

So much toxicity is rooted in exploiting this dynamic.


I'm not sure what your whole list of requirements is for a solution, but: how about simply not using Facebook at all?


Favor moderated communities over "free" communities

Push for ad-free social networks (probably not free to users, then)


> Boring level headed opinions used to at least have a chance of rising above the noise.

... Because they were heavily promoted by the people who owned the media platforms. It wasn't because of any of their particular virtues.

Consider your bias when making this claim. What makes an opinion boring and level-headed? That it is squarely in the middle of the Overton window? Who sets the Overton window, in the pre-social-network world where the overwhelming majority of media is state/corporate-sponsored?

It's not some natural evolutionary process that caused boring, level-headed opinions to rise to the top. It was self-serving opinions rising to the top, that, because of their ubiquity, are believed to be boring and level-headed.

When viewed from a different cultural lens, many of those boring and level-headed opinions appear completely batshit insane. For example, the current American model for healthcare seems, for the most part (Everyone has a few minor tweaks they want to make - but setting that aside), boring and level-headed and reasonable to ~half the country. To the rest of the world, it appears completely insane, and anyone proposing switching to it would be a complete lunatic. In the US, however, the converse holds - switching away from it is considered by many to be absolutely crazy.

Which of those opinions is the reasonable, level-headed one? It depends on your current social setting, not on the virtues or drawbacks of the expressed idea.


The policy and the design of the interface is inextricably linked to that change in interactions. A hands-off policy is still a policy for a site that rewarded more inflammatory content in pursuit of certain metrics.


Amen. The thing that's funny is it seems relatively straightforward to measure the positive and negative interactions and promote appropriately and transparently.


I wonder how social networks would work without sharing and liking mechanisms.


If you want to stand out from the crowd... Don't stand in crowds... BLHOs are nowhere to be found... Because they have already left the building.

"Die looted" is not referencing the information quality/concentration, but BLHO types know better.


How can I upvote this twice?


It's more than just the toxicity of the conversations. These vapid online brawls have engaged millions more people to vote who never would have voted in the past.

Hence, introducing an easily swayed group of uneducated voters into the pool, and debasing politics to their level.

If democracy represents the intelligence of the average voter, then engaging literally tens of millions of uneducated/emotional/immature people to vote significantly diminishes the ability of our government to make decisions and/or speak about problems honestly.


> These vapid online brawls have engaged millions more people to vote who never would have voted in the past.

Voter turnout in presidential elections is the same as it has been for 50 years at least. Voter turnout in the most recent midterm spiked a crazy amount, but I'm not willing to credit social media so much as a natural reaction to the extremism of the current administration. Which sources are you using that I missed?


I don’t really see anything very different there. Voters have always been easily manipulated: by newspapers, TV, whatever. Though I suppose one key difference is that the source of that manipulation has been hidden: you used to know what newspaper pushed what, now actors hide behind sockpuppet accounts and promoted ads.


This is nothing new in kind. The founders were fearful of mass democracy too and sought to avoid creating one. For better or for worse, they failed.


Reference?

I don't see us as having created a representative democracy to protect democracy from the masses. I like to think we are a representative democracy because the masses are supposed to have better things to do than keep the lights of government on.


One of the authors of the Constitution, James Madison.

https://billofrightsinstitute.org/founding-documents/primary...


It's pretty well documented in the Federalist Papers. I recommend reading them in their entirety, because it's IMO one of the most profound writings of political exposition in written history, even if you disagree with the philosophies being promoted.

Federalist No. 10[1] is probably one of the most highly regarded of the papers, and explicitly lays out these concerns. Selected quotes:

"AMONG the numerous advantages promised by a well-constructed Union, none deserves to be more accurately developed than its tendency to break and control the violence of faction. The friend of popular governments never finds himself so much alarmed for their character and fate, as when he contemplates their propensity to this dangerous vice."

"Complaints are everywhere heard from our most considerate and virtuous citizens, equally the friends of public and private faith, and of public and personal liberty, that our governments are too unstable, that the public good is disregarded in the conflicts of rival parties, and that measures are too often decided, not according to the rules of justice and the rights of the minor party, but by the superior force of an interested and overbearing majority."

"By a faction, I understand a number of citizens, whether amounting to a majority or a minority of the whole, who are united and actuated by some common impulse of passion, or of interest, adversed to the rights of other citizens, or to the permanent and aggregate interests of the community."

"If a faction consists of less than a majority, relief is supplied by the republican principle, which enables the majority to defeat its sinister views by regular vote. It may clog the administration, it may convulse the society; but it will be unable to execute and mask its violence under the forms of the Constitution. When a majority is included in a faction, the form of popular government, on the other hand, enables it to sacrifice to its ruling passion or interest both the public good and the rights of other citizens."

"From this view of the subject it may be concluded that a pure democracy, by which I mean a society consisting of a small number of citizens, who assemble and administer the government in person, can admit of no cure for the mischiefs of faction. A common passion or interest will, in almost every case, be felt by a majority of the whole; a communication and concert result from the form of government itself; and there is nothing to check the inducements to sacrifice the weaker party or an obnoxious individual. Hence it is that such democracies have ever been spectacles of turbulence and contention; have ever been found incompatible with personal security or the rights of property; and have in general been as short in their lives as they have been violent in their deaths."

Hamilton also talks a little bit about this phenomenon in Federalist No. 68, and Federalist No. 62 lays out the impetus for having an equally represented Senate (which is inherently undemocratic), part of which was to check the immediate impulses/passions of the people, as can be the case with the House of Representatives.

[1] https://avalon.law.yale.edu/18th_century/fed10.asp



