Facebook admits it must do more to stop the spread of misinformation (techcrunch.com)
710 points by CodeGenie on Nov 10, 2016 | 614 comments



This article criticizes Facebook for firing the human editors who had been keeping things sane. However, they were pushed to that by accusations of bias from the right-wing media, accusations that looked likely to lead to a Congressional investigation.

See, for example, http://thehill.com/policy/technology/279361-top-republican-d....

Now they are in a situation where they are damned if they do, damned if they don't. And people immersed in echo chambers will accuse them of bias no matter what.

But the entire system is fundamentally broken. Pay-per-ad incentives lead to rewarding viral content. And content that induces outrage is far more likely to go viral than pretty much anything else. Plus, it goes viral before people do pesky things like fact-checks. And the more of this you have been exposed to, the more reasonable you find outrageous claims, even if you know that the ones you have seen were all wrong.

For an in depth treatment of the underlying issues, I highly recommend Trust Me, I'm Lying.


Yes, trying to solve this problem by creating a definition of "misinformation" sufficiently precise to act on for all possible articles, and then trying to remove it somehow, is probably an AI-complete problem. If it's not AI-complete, it's probably just an ill-formed question. That's before we ask questions about the biases that get embedded into the misinformation detector.

This can't be fixed at Facebook's scale by building a platform that so highly incentivizes low-quality content of all kinds, then trying to stop the content so late in the cycle. The entire incentive structure has to be rethought. Unfortunately, that's probably a problem that Facebook is literally incapable of solving without going out of business, because Facebook the corporate entity is that incentive structure. To fix Facebook requires Facebook to become not-Facebook.


This is a problem with human wetware - there are several cognitive biases [1][2][3][4] that make people engage more strongly with ideas that fit their preconceptions and discount ideas that don't. Add information-sharing to the mix and the aggregate effect you get is large information cascades that fracture the population into tribes that believe things not necessarily because they're true, but because they are stated early, vehemently, and frequently.

Remember that evolution favors individuals who survive & gain status, not those whose representation of the world is true. Someone who is right but killed for it (e.g. Copernicus) is still dead.

The fix is for people to be aware of these cognitive biases in themselves, to actively seek out information that differs from their preconceptions, and to challenge inaccurate information (with evidence!) when presented with it. You can't outsource this to a communication platform, because it's inherently less comfortable and more effort for the reader, and the reader will just go to a competing communication platform that makes them feel better.

[1] https://en.wikipedia.org/wiki/Illusory_truth_effect

[2] https://en.wikipedia.org/wiki/Confirmation_bias

[3] https://en.wikipedia.org/wiki/Bandwagon_effect

[4] https://en.wikipedia.org/wiki/Congruence_bias


Fact check: Copernicus was not killed.

He died of natural causes at 70. According to viral stories at the time, he died moments after having seen the first print of his book.


Maybe people confuse Copernicus with Giordano Bruno, an advocate of related theories, who was killed in 1600 for his beliefs and advocacy (though not only for his beliefs about astronomy).

https://en.wikipedia.org/wiki/Giordano_Bruno

People may also confuse Bruno with Galileo Galilei, who was punished by the Inquisition for his defense of Copernican theory, but not executed.


See, just like that. Would be more effective with sources, though.


Given the amount of misinformation put out by the MSM this election they'd do well to blanket ban the NYT, CNN, et al.


> The fix is for people to be aware of these cognitive biases in themselves, to actively seek out information that differs from their preconceptions, and to challenge inaccurate information (with evidence!) when presented with it.

You are right, but I hope you are aware that this would require a giant leap to an essentially superhuman state of rationality and clear thinking for a vast majority of people.

Just stop for a moment and look around you. Is that how most people actually function in this world?


Separate ideology (or whatever happens to be the object of bias) from identity, and you're halfway there. People are much more amenable to changing their minds if their identities aren't at stake. For example, most people can discuss the pros and cons of F-150s and Silverados fairly rationally; those with the Calvin pissing logos, not so much.


Again agreed. But how would you accomplish that?

For large majorities of voters, dropping that ballot in the box is over 90% an identity game.


On a societal level, your guess is as good as mine. Perhaps increased infrastructure spending, so that high-skill, high-pay jobs are more evenly spread out and ideological segregation is reduced?

On an individual level, being active in (non-political) hobbies and volunteer groups can help, especially ones that attract people from both the cities and the countryside.


Prove that assumption, please.


We have to design systems that work with the people we have, not the people we wish we had.

Fail to recognize that, and you've failed before you've started.


>> most people can discuss the pros and cons of F-150s and Silverados fairly rationally; those with the Calvin pissing logos, not so much.

How is this different from stereotyping then?


Who am I stereotyping? People with "Calvin pissing on Ford/Chevy/Ram logo" stickers? The vast majority of truck drivers don't really care about brand--they just want the most appropriate truck for their situation.


None of us function in this way. While part of the solution does have to involve people becoming better at vetting their information-sources, it is equally important to acknowledge that for everyone to do this individually would be absolutely impossible given time constraints.

What we need, fundamentally, is a media environment that can harness the collective capacity of humans to vet and debate new information in an organized fashion that can be trusted. This site is a good example: a knowledgeable community and the simple mechanism of votes do some automatic, crowd-based content curation. But the Reddit-like model is very basic; so much more could be done to allow individual users to contribute to a collective process of knowledge-building.

I think we need more development and discussion devoted to what kind of systems could be designed to encourage transparency, quality, and critical analysis in our news-making.


Well, a good first step would be an education system that encouraged a culture of curiosity and respectful questioning, rather than just "obey".


Hah! It's worse.

As a kid I learnt this and tried to be as rational and aware of my fallibilities as possible.

That makes you an alien to normal human beings. Conversation becomes impossible, processing fast enough becomes impossible, and the penalties for acting weird and communicating weird are huge.

Instead you need to find a way to act in a manner that allows you to blend in with people.


Is that how you function?


This is absolutely the right answer, but maybe Facebook can do some things to help. It's possible to gamify the process of learning to spot biases by presenting people with situations and getting them to think critically to solve the biases or fallacies. Especially on Facebook, people love filling out questionnaires that tell them something about themselves.

I'm not saying that one smart game is going to fix the entire problem, but adding the development of logic skills into the types of activities that people enjoy doing on Facebook is definitely possible.


It doesn't work that way. It assumes that what people are doing is wrong - it's not. The mental overhead to assess everything clinically is sufficient that faster thinkers can just outrun you. People/brains are making trade offs to deal with the world around them.

Other humans are taking advantage of those gaps to mobilize votes or advertisements.

There's no way out, and I've been wondering when people would realize the depth and intricacy of the mess we're in. Hopefully people pay more attention to the way cognitive cheat codes are being abused.

Hopefully a protocol to deal with it can be made.


There's really an amazing amount of misinformation and manipulation going on. I think there's a place for people developing critical thinking skills, but there's also a limit to what people can sort through. There's a big need for more software to fact-check and compare sources across the Internet. Having Facebook run it is one part of the solution, but people need to have transparent software that they control to do the same kind of work so it isn't entirely centralized. Google, Amazon, and Facebook assistants are great, but we need personal assistants that work for us.


> The fix is for people to be aware of these cognitive biases in themselves, to actively seek out information that differs from their preconceptions, and to challenge inaccurate information (with evidence!) when presented with it.

Exactly right. This should be strongly emphasized in our educational curriculum.


As any alien arriving in your world would immediately ask, why doesn't having a true representation of the world help you survive and gain status? And if it doesn't, why does it matter?


An interesting TED talk on how evolution does not necessarily favor a true representation of reality in our minds and bodies: https://www.ted.com/talks/donald_hoffman_do_we_see_reality_a...


It helps me gain status even more if I can plant false information in others.


As someone points out, Copernicus was not killed. And it illustrates the problem of FB putting its finger on the scales. Ever.


It's almost as if solving the problem of truth is antithetical to the notion of having a for-profit entity be the main channel of [legit|mis-]information in our world.

It's almost as if the way we understand and use money has hardened into a giant screwball of lies, base instincts, and perverse incentives.

Almost.


I think I would argue that this is beyond an AI-complete problem, it is an unanswerable problem. For instance, consider the following spectrum of ideas:

- The earth is flat

- 9/11 conspiracy theories

- Barack Obama was born in Kenya

- Anti-vax

- Climate change denial

- Anti-GMO

- Denial of underlying quantum randomness in the universe

- The Clinton Foundation has been or is involved in some shady dealings

- Muslims want to implement Sharia law in Europe

- The minimum wage decreases employment

- Donald Trump has sexually assaulted a number of women

- Monetary policy impacts the real economy

- The sun will, on balance, probably rise tomorrow

I've sorted this in a (rough, personal) order of increasing plausibility / conformance with facts. Where is the line between misinformation and alternative information? As a human, it's pretty unclear to me.


Could you elaborate on what "AI-complete" means? I have not heard this term before. I am assuming it is similar to NP-complete, but in the domain of AI?

If this is correct, I assume that there are also problems classified as AI-hard?


AI-complete, I believe, means 'full AI'. An AI-Complete problem is a problem that can't be solved without creating full human-level (or above) AI.


That's a meaningless term since defining "human-level" is already a lost cause.

Some people can multiply ten digit numbers in their head. Some can't even tell you what a number is. There's a very wide bell-curve here.


Man, it must be me, but you ranked "Monetary policy" WAY more likely than I would (rates near zero for so long doing so little must at least put SOME doubt in your model), and "denial of quantum randomness" way less credible than I see it as. Like, man, reasonable people can disagree about pilot waves and many worlds.

Are there some confusing, emotionally laden, difficult questions of fact in there? Yes. Are some things ultimately unknowable (like what people want, which involves scrying into their minds)? Yes.

These, as it turns out, are not the things I can't help but be side-tracked on. I'm diverted by you claiming monetary policy has an effect with nearly the certainty of the sun rising.


Haha, sorry I knew this list would cause this sort of trouble. I'm a big believer in pilot wave theory myself, it's just very much not the 'mainstream' view. And I didn't mean to imply that the last two were close together, either. I do think monetary policy probably has some effect (whether we understand or can model that effect is another matter), but I didn't mean at all to imply that it was anywhere near the certainty of the sun rising, it just happened to be the closest :).

I think perhaps we disagree more about the probability of these two:

- The Clinton Foundation has been or is involved in some shady dealings

- Muslims want to implement Sharia law in Europe

than about the other two. I think both of those are, in some sense, almost certainly true. There are Muslims who want to implement Sharia law in Europe. The Clinton Foundation has been involved in transactions that at least have the appearance of moral uncertainty. The headlines that I listed are just overly strong statements/generalizations of those (IMO undeniable) facts.


I keep getting sidetracked by other issues, but your statements are about certainty and uncertainty and where to draw the line for heavy-handed intervention, so I might as well get into them:

"undeniable" does not mean what you think it means. Example: http://www.cbsnews.com/videos/hillary-clinton-defends-founda...

There, Clinton denies the foundation has been engaged in anything shady. Boom, definitely not undeniable #hackernewsdrama, hahaha.

If you want a disagreement, though, I prefer many-worlds over pilot wave theories, largely because I'm not entirely sure _what_ the propagation speed of the pilot waves would be. 'Greater than the speed of light' either (1) doesn't really narrow it down at all or (2) narrows it down to having no options at all. But they should have _a_ speed, because I'm not sure you can even have standing waves if the disturbances propagate instantly.


Rates near zero is part of the reason the stock market has been on a tear the last few years. It also was a driver of the housing bubble.

It didn't cause the hyperinflation that a lot of people thought would come, but the Great Depression is Bernanke's area of expertise, and he had sound theoretical reasons to believe QE wouldn't spark serious inflation.


The near-zero rates occurred in response to the economic crisis after the housing bubble burst; they were not a driver of the bubble.


Rates were also comparatively low during the 2000s to juice the economy after the tech bust and 9/11. They got even lower after the crash and QE, but they were the lowest they had been in the 00s since the early 60s.


I don't deny quantum randomness per se, I just deeply hope that it isn't true, for some reason.


There is no need to create a Grand Unified Theory of misinformation, here.

This problem isn't that hard. You filter this stuff and rank it by its SOURCE, not by the content of individual articles.

It's not hard to figure out which online sources are pushing out most of the abjectly-bad propaganda, and de-rank them.
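To make that concrete, here's a minimal sketch (Python; the domains and weights are invented placeholders, not a real list) of what source-level de-ranking could look like:

    from urllib.parse import urlparse

    # Invented trust weights per domain; a real system would maintain
    # these from corrections history, fact-checker data, etc.
    SOURCE_WEIGHTS = {
        "outrage-farm.example": 0.1,   # abject propaganda: bury
        "tabloid.example": 0.5,        # sensationalist: demote
        "wire-service.example": 1.0,   # baseline
    }

    def rank_score(url, engagement):
        """Scale an item's engagement score by its domain's trust weight."""
        domain = urlparse(url).netloc.lower()
        if domain.startswith("www."):
            domain = domain[4:]
        return engagement * SOURCE_WEIGHTS.get(domain, 0.8)  # mild haircut for unknowns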


"This problem isn't that hard. You filter this stuff and rank it by its SOURCE, not by the content of individual articles."

I don't think reifying ad hominem into code is the solution to the problem.

Of course, if you don't mind a rare few false positives here and there on articles, I've got your filter right here:

    def source_is_trustworthy(source):
        return False
 
Remember as you sit here thinking through your exceptions what we're talking about; if your "trustworthy" source was confidently telling you about how Clinton was going to win, it's not one of the exceptions. I won't say I was confident Trump would win, but I was certainly less surprised than most; the fact that I generated this somewhat more predictive model by tossing out pretty much every "mainstream" news source is not a good sign for them. And to be honest, Trump news isn't the only thing that I find this useful for. Beyond the bare facts, most media outlets really aren't good for much anymore, and do you ever have to de-spin their news just to get those bare facts in the first place. (I can't, however, pack up into a recipe how to do this yourself. I think we're in a period of transition in the news industry, much greater than just "the internet makes the old dinosaurs stumble" makes it sound, and I'm at the moment not really all that confident in anything.)


As someone who vehemently opposes Trump, I feel that allegations of anti-Trump bias in the mainstream media were entirely correct and somewhat to be expected. Although Trump is certainly a name that sells papers, Trump's repeated threats to open up news media to broader libel laws were not well received. The news organizations' endorsements were pretty one-sided[0]. I don't want to take you too literally, but it seems to me pretty obvious that a model ignoring the MSM entirely would not be preferable to one that took into account the MSM opinions and then applied a correctional factor. I'm sure we agree that truth is a function of our means for determining truth, but I suspect we disagree strongly on the reliability of Internet news sources.

[0] https://en.wikipedia.org/wiki/Newspaper_endorsements_in_the_...


You must have missed the leaked emails in which Clinton exerts massive influence on the media, planning every last detail in exchange for all kinds of rewards [0]

[0] http://observer.com/2016/08/wikileaks-reveals-mainstream-med...


I did miss that. Was that article intended to support that view? The incidents listed don't seem to have been terribly well planned or executed, or to have garnered any positive results. That seems difficult to reconcile with the idea of a powerful media conspiracy.


"source_is_trustworthy" applies to all sources, not just "MSM". As I said, I don't have anything right now that I can point to and say "If they say it, I trust it."

I used to try to apply a correctional factor, but I've found it not even worthwhile anymore. I honestly don't know entirely why. I don't know if the news has gotten that much more biased to the point where it almost swamps the signal entirely, or if the combined financial pressures away from expensive real reporting and towards click-bait headlines have removed the signal, or what, but beyond very bare facts they just aren't worth much anymore. It is also possible this has always been the case and I wasn't aware enough to realize it; there's some classic history stories that back that possibility up.

I do know primary sources are getting easier and easier to consult directly. Which could itself also be a reason my opinion of them has plummeted so much in the past 10 years; it was a lot harder to see through media spin and simplification (deliberately for their audience, and accidentally when they in fact don't understand what's going on themselves, see especially science journalism for that) when they were the only source of information.


What do you think was unfair about the Trump coverage in mainstream newspapers?

What is the correct balance of endorsements? Is anything other than 50:50 incorrect?


0:0 would be fair.

It won't be reached in practice, but if you make it the policy, you have an argument to take down any bullshit. With 50:50 it's going to be an endless quarrel over who has had enough so far, who is underrepresented and why everything is dominated by two polar opposites with no voice given to people who don't want to belong to either camp.


The case of news sources failing to predict events seems like a red herring in the larger debate here - news media is generally about reporting events that have actually happened. Predicting presidential elections or other uncertain events is a side hustle they've been dragged into because it sells. And if you throw out their predictive failures, I think it's pretty easy to compose a trustworthiness function based on overall factual accuracy. And sure, there may be lingering biases that need to be corrected for, but even so, the MSM biases generally don't extend into the realm of pure fabrication, whereas clickbait outrage farms absolutely do - and they are a large chunk of the problem.


I don't think you're ever going to detect truth correctly, but you can absolutely detect falsehood.

Compare "{Candidate} up over 5000% in polls" vs "{Candidate} up over 15% in polls".

Baby steps. PageRank wasn't built in a day.
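As a toy illustration of that falsehood check, for just this one claim type (the regex and the plausibility bound are pure assumptions):

    import re

    # One narrow claim type: percentage movement "in polls". Anything
    # beyond what a poll can physically show gets flagged.
    POLL_CLAIM = re.compile(r"up over (\d+(?:\.\d+)?)% in polls", re.IGNORECASE)

    def is_implausible(headline, max_pct=50.0):
        m = POLL_CLAIM.search(headline)
        return bool(m) and float(m.group(1)) > max_pct

    assert is_implausible("{Candidate} up over 5000% in polls")
    assert not is_implausible("{Candidate} up over 15% in polls")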


PageRank is also not unbiased; it has been gamed many times before, and pure PageRank would be pretty easy to game - that's why Google keeps tweaking it years after it was invented. And still the SEO industry exists. Now, SEOs just compete on being on the first page, not on something as grandiose as "truth". Add that to the mix, plus all the biases there are, and you get a pretty much hopeless task. And of course you come under a constant stream of criticism, and maybe also government regulation - it's one thing to just produce search results (even that is controversial; remember the "right to be forgotten"?), it's another thing to claim to be able to "detect falsehood" - that's a very sure way to get sued and for the government to get involved.


It's a lot easier to look for inconsistency than it is to look for incorrectness.

Here's a PageRank-like reputation system I've built:

http://github.com/neyer/respect


Why so? I defer to your experience, but I would expect that incorrectness is simply inconsistency measured against a more complete/complex basis?


To determine correctness, the adjudicator requires knowledge outside the system. In the scientific method, this is done by generating falsifiable hypotheses and performing the experiment. Failing that, you have to fall back on heuristics, like "what I already believe", Occam's razor (simplest theory consistent with observations wins), or reliance on authority/consensus.

Consistency is simply majority wins, which completely stifles new ideas or minority opinions.


> Consistency is simply majority wins, which completely stifles new ideas or minority opinions.

No it isn't.

You don't need to have a "non objective view of the truth" - i.e. you don't need the ability to ask the system "Is this statement true?"

You just need the ability to ask the system "Will I think this is true, given the other things I have claimed to be true?"

Maintaining internal consistency of your own claim graph gets really tricky unless you keep it as accurate as possible.

For example, the consensus view was that Trump would not win the election. Trump did win the election. Thus, those two claims:

* (before) Trump will not win
* (later) Trump won

These two are inconsistent. It's possible to rectify that inconsistency by marking the earlier statement cancelled. So now we have three statements:

* (before) Trump will not win
* (later) Trump won
* (now) I was wrong to say, "Trump will not win."

The gap between "now" and "before" can be used to compute the probability of a statement being retracted. The smaller those gaps, the more likely it is that whatever you say comes with an asterisk next to it, saying "the expected time to retraction of this statement is: x days".
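A minimal sketch of that retraction bookkeeping (all data shapes here are hypothetical):

    from statistics import mean

    def retraction_stats(claims):
        """claims: list of (made_at, retracted_at or None) pairs, in days.

        Returns the fraction of statements eventually retracted and the
        average now-minus-before gap, which together give the asterisk:
        "expected time to retraction of this statement: x days"."""
        gaps = [r - m for m, r in claims if r is not None]
        rate = len(gaps) / len(claims) if claims else 0.0
        return rate, (mean(gaps) if gaps else None)

    # e.g. a source that retracted 2 of 4 claims, after 3 and 7 days:
    print(retraction_stats([(0, 3), (10, 17), (20, None), (30, None)]))  # (0.5, 5)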


Ah, you're only speaking of temporal consistency (or "constancy" - does the source's statement change over time).

I was speaking of consistency with the general body of knowledge in the system, (correspondence?).

In the specific case of evaluating major news sources for trust, I think the vast majority of their statements are never retracted/changed, and therefore they would rate highly on an overall "constancy" scale as well. Their many statements about Trump's likelihood of winning were very constant as well - for months they said he'd lose, then they changed, and forever after they will say he won.

Sounds very temporally consistent, with just a single change.

Not sure if the adjudicator could narrow it just to "trust of the source's future predictions about politics", but basically the whole country just got a negative feedback on the trust weighting for major media in that domain.


I am not saying build a machine to detect correctness, I'm saying build one to detect incorrectness.


Correct. The singular value decomposition it was based on was first described 100 years before.


>I can't, however, pack up into a recipe how to do this yourself.

Sum (from: 0, to: PosInf, Enumerable: Source - Bias)

converges asymptotically to Truth


Sorry. Most of the official sources in this recent campaign were dead wrong on a lot of issues, mostly because of their own massive biases.


Are you referring to their being wrong when they described what they thought was going on, which contains a large amount of opinion, or wrong on the facts as they exist?

Facts can be presented in a biased way (generally through omission of other contextual facts), but if what is said is factual, at least that bit of information is still correct. If you combine multiple sources to generate a grouping of facts, that presentation bias may well disappear. If the system weeds out people who present opinion as fact, all the better.


Frankly, papers that lie through crafted contexts are much more dangerous, because they give a false sense of authority due to their correctness at the micro level but incorrectness at the macro level.

Imagine I develop a newspaper that only reports crimes committed by a particular race, and I excruciatingly fact-check every incident and highlight when the perpetrators are uneducated (but don't mention education otherwise). The result is one of the biggest macro lies, fueling racism through its broader message, while every detail is perfect to a fact-checker. This would rank perfectly in such a system, while broader newspapers get hammered for messing up picture captions and confusing Airbus A319s with 737s.


Or opinions can be presented as facts, or at least presented in a way that makes it hard to tell if it is reporting or analysis.


The polls were not that far off. If 1 in 100 Trump voters had gone for Clinton instead she would have won.

I think polling is pretty broken though. A lot has changed from 20 years ago when everyone had a landline phone.


The polls were off by the same percentage amount that would have allowed Romney to win the popular vote over Obama.


Using the source to figure out whether something is accurate/correct has its own issues unfortunately. These include:

1. Editors/writers/management change, since quite a few media publications have gone from being fairly reliable to biased as all hell based purely on who's taken over there. Or in some (rarer) cases, the opposite has happened.

So you'd have to make sure older less biased pieces weren't being punished for the actions of the publication in the present.

2. A lot of sources are correct about some things but not about others. It's very possible to have a publication that's terrible at writing about politics in an accurate and unbiased way, but fantastic at writing about technology or sports. So ideally your system would have to detect the subject of the article as well as the publication.

However, I guess you could at least add known 'satire' and fake information sites to a 'non credible' list. Like the ones listed here:

http://www.snopes.com/2016/01/14/fake-news-sites/


This assumes their interest is in being fair, but I suspect their goal will gravitate mostly toward "not being sued/investigated", which may very well create a selection bias.


Non-starter. The NYTimes and Washington Post, for example, are respected but have a very strong yet subtle bias.

On the one hand, they seem more interested in fact-checking. On the other, they propagate concepts like: on-balance the US is a force for good and should lead the world. This is highly contentious and non-obvious for many (most?) people outside the US.


Pretty much all mainstream media outlets did, at some point, publish something that can be qualified as "bad propaganda" by at least some observers. Now, either you filter out pretty much all the internet except maybe for bare statistics, numbers, math, etc., or you say "well, these guys are not as bad as those guys, because obviously being completely wrong about X is much worse than being slightly less than right about Y", at which point you are just implementing your own biases.


Yes. We're talking about a platform that at it's best is birthday wishes, vacation and cat pictures. It's running on a paid content model, with advertisers selling through viral videos and click-bait. It's good not to expect too much out of it.


We're actually not; that's simply false. No, Facebook is not "at it's best" [sic] when it sticks to birthday wishes and cat pictures.

It's actually at its best when it is truly social, in all senses of the word. When it facilitates real communication on real issues across a broad spectrum of people. When it spreads information, not misinformation.


Not trying to speak for the original poster, but what I took away from the argument is that when FB is "truly social", involving real issues, the potential rewards are sufficiently high that misinformation becomes valuable, thus dangerous. When it sticks to birthdays and cat pictures, there's no incentive to go hostile. Think of it as being somewhat similar to the arguments about the Flash Crash regarding HFT and liquidity: that HFT is great for liquidity until you need it, at which point it goes away[1].

I agree with others that detecting misinformation in the general case likely requires a theory of mind, and so something approaching "real" AI. I also agree that FB has a serious problem on their hands. And I'm so very glad I don't use it.

[1] I advance no comment here about the correctness of that; just a comparison to illustrate what seems like an overlooked point.


This is the same as reddit. As social platforms get larger they start diverging from their original userbase (programmers/geeks for reddit, college students for Facebook) to appeal to a wider and wider audience. Eventually the wider audience starts to look a whole lot like the demographics of a whole country (or maybe the world), and it becomes a platform with as many arguments as society in general.

The solution to not becoming a battleground, which is inherently destabilizing to the platform, is to not grow, or to grow in only a single direction. State up front what your targeted user base is and keep to them. NeoGAF as a "gamers only" forum will survive a very long time, for instance.

Facebook has sidestepped it by becoming a personal community for each user, but whenever they try to connect that personal community to the wider world through "shared" news, "shared" trends, etc. that are not personalized, it causes a big ruckus.


If it is truly social then it will spread misinformation like wildfire. That's what people do -- gossip game magnified. That said, maybe Facebook is best when it is just spreading obviously unimportant content (in the sense of national and global affairs), like birthday wishes and cat pictures. No harm if the cat is implicated in wild conspiracy theories.

It's not the cat pic sharing that is paid for by entities with motive and interest in spreading misleading messages.


When Facebook is "at its best", it's taking money from nefarious actors and fucking up democracy.

In Poland, for a good half year before our presidential election, and then for a year after, I was getting obviously paid-for posts from dozens upon dozens of right-wing extremists.

I am left-centrist (although my dad jokes that I'm a bolshevik). I don't have many friends who skew right-wing either. Yet there was my wall, chock full of sensationalist bs, for over A YEAR.

Wonder how much money Facebook earned in that time just from sad little Poland.


Social communication is in-person, face-to-face communication. It's not moving bits around over an http interface.

I love idealists, but it's time to take an honest accounting of the differences between the promises of the internet and the actuality.


Not anymore. I'm sorry, but that view is now obsolete.

The fact is that large and increasing quantities of our social communication now occur online.

I'm not claiming this is salutary or calling it a good thing. I'm just stating the fact.


I have no doubt that a lot of social information is being transferred in a new way, much the same as the introduction of the telephone allowed a lot of social information to be transferred in a new way. My point is that true socializing and socialization are human, not digital experiences.

Not trying to be pedantic here. It's just important to sort out the terms used if any analysis is going to be useful. What's happening on Facebook is not social. It's bits of data about being social. A different thing entirely. Perhaps we need a new term for whatever it is, but whatever we call it, it's not the same as face-to-face, real socializing with real people.


Aren't Bud and you socializing right now? Hell, I even feel I'm socializing a bit with you, how wrong am I?


Are you picking on me? Making a joke that we share? Challenging me to come up with a retort? Asking me to join in to a shared humorous narrative? Giving me a friendly jibe about an error or weakness you perceive?

We assume positive intent online because otherwise we get flame wars, but there's a helluva lot of body language and inflection that can take the comment you made and make it mean a dozen things. All of that is lost. And all of it is important.

In addition to the nuance, would I know severine if I saw them on the street? Miss them if they died? Remember the nice way they did X every time we met?

We are exchanging written opinions and statements, yes. These exchanges also occur in social interactions. But we are not socializing. There's no bond here that's become stronger, no subtle nuance or interplay that we're managing at a subconscious level. For all either of us knows, the other one is a bot. Or an alien. There's no humanity here.


I disagree. For me, and I think for many, online interactions are just as much a form of socialization as going out in person. For very small groups, I generally prefer the in-person version, but for larger groups, I think I get more out of the online setting.

Let's take our relationship, for example. I've been cohabiting some of the same online spaces as you for at least the last decade. For me, you are a valued part of that ecosystem, more so than most of my physical neighbors who I chat with in person a couple times a year.

No, I wouldn't recognize you on the street, but I struggle with face blindness, so it's a weak criterion. Would I "miss you" if you died? No, but I'd certainly reflect sadly on your death if I learned about it. "Remember the nice way you did X"? No, but I have trouble coming up with such an X for most people I interact with locally.

> There's no humanity here.

I'm surprised you would say this. Is this a recent change in your view of online interactions, or have you always felt this way? Quantitatively I find about the same amount of humanity here as I do elsewhere, and qualitatively I prefer many things about what I find here to what I see when I go out in person. Are you OK?

(ps. I notice that many of the links in the sidebar of your site are broken.)


Facebook completely controls the presentation priority, no? Bolt a Watson-like truth-scoring system on to evaluate content and use the resulting confidence to boost or bury content in others' feeds.

It would absolutely have to be automatically generated, but it doesn't seem impossible (just much harder NLP than Watson playing Jeopardy).
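Roughly, the boost-or-bury step could look like the sketch below; truth_confidence is a stub standing in for the hard NLP part, which is exactly what's in dispute:

    def truth_confidence(text):
        """Stand-in for a Watson-like claim scorer in [0, 1]; this is the
        hard, unsolved part."""
        return 0.5  # placeholder

    def feed_rank(engagement, text):
        """Scale a post's rank by the confidence that its claims are true."""
        c = truth_confidence(text)
        if c < 0.2:
            return engagement * 0.1           # bury likely falsehoods
        return engagement * (0.5 + c)         # mildly boost well-supported content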


If the scoring system does anything other than maximize engagement with Facebook, Facebook ends up making less money and performing worse on KPIs than they otherwise would. It'd be very difficult for any decision maker at Facebook to build, deploy, and maintain that feature. It'd be the sort of thing that'd only be there if otherwise the company would be under existential threat.


> under existential threat

You mean government regulation a la China, because some memes really are dangerous? If you increase the velocity of transmission, then even in a free system there are some dangerously infectious falsehoods that it might be in the public good to slow. Think War of the Worlds on Facebook.


Ah, but herein lies the difference. Memes seem to universally promote freedom of communication, communication about freedom, and a way to react to the oppression of the status quo. In China, this is counter to the powers that be, but in America, this is fully aligned with the anti-regulation party that is sweeping through all branches of government. Memes play into their hands, and so Facebook would never be under existential threat.


That, or a public reaction to their product (ie, clickbait fatigue - Facebook gets really good at local optimization for the most engaging news posts but creates a news feed that convinces people that Facebook as a whole isn't worthwhile).

It basically needs to be serious enough to overwhelm the internal incentives of maximizing key performance indicators.


I think there's the global maximum vs local maxima argument in favor of doing something too. Why do people ultimately spend time on Facebook?

I would say they don't do so because of micro-optimized KPI targeting: they do so because they feel time on Facebook is valuable and makes them happy. If Facebook can't deliver increases in that, then optimizations only take them so far.


Yup. There's definitely some Goodhart's Law going on here as well. Micro-optimized KPI targeting is an inexact measure of feeling like ones time on Facebook is valuable.


Watson scoring something for truth is only as good as the data drawn from. Who decides that data? Jeopardy questions center around long-established, non-controversial information; how does a Watson-like system evaluate unprecedented breaking news? It's not a question of the language used, it's a much, much higher-order issue that current Watson sidesteps.


Agreed, much more difficult, but ultimately the same way we do so? We believe certain ideas to be facts (hopefully based on scientific sources), then we parse incoming information in light of those.

I can't imagine an unprecedented breaking news story that has no basis in any factual information. I'm not talking about an upvoter here, but primarily a downvoter (incongruence with accepted facts being easier to prove than vice versa).


Sure, and that's the higher-order problem. If we could make machines that solve any problem the way we do, we'd be a lot closer to AGI, but for now, it's still science fiction for a computer to have beliefs, and understand information in the context of those beliefs.


I don't think ranking new information according to held beliefs is necessarily that far towards AGI (admittedly, strong NLP may be, though).

There's no fundamentally creative step in deciding "How well does this new piece of information match pieces I previously had?"
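As a toy version of "how well does this match what I previously had": bag-of-words cosine similarity against a store of accepted claims. Real systems would need the strong NLP conceded above:

    from collections import Counter
    from math import sqrt

    def cosine(a, b):
        ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(ca[w] * cb[w] for w in ca)
        na = sqrt(sum(v * v for v in ca.values()))
        nb = sqrt(sum(v * v for v in cb.values()))
        return dot / (na * nb) if na and nb else 0.0

    def belief_consistency(new_claim, accepted_claims):
        """Score a new claim by its best match among claims already held."""
        return max((cosine(new_claim, c) for c in accepted_claims), default=0.0)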


It seems to me the hardest part would be separating truthiness from virality. How do they seed legit sources in the current media climate of racing to the bottom for eyeballs?


Isn't this essentially the problem Google's been solving with continued iteration on PageRank?

From an untrustworthy collection of input relationships, how do I produce the most reliably correct output?


No because popularity does not equal truth.


It doesn't? Because in the sense that popularity is "agreement with consensus scientific opinion" then it absolutely does.


Well, to be more specific about what I mean: Google's algorithm is successful if it finds the pages that people find valuable. It's sort of baked into the value proposition that the end user will be able to gauge the quality of results, so there is a sort of feedback loop. The problem with misinformation is that there is no in-band signal for Google about its truthiness. I suppose there are heuristics you can use related to the fact-checking quality of certain publications, etc., but it's much trickier than general perceived quality.

Nevertheless, I agree that if anyone can do it algorithmically, it would probably be Google.


When was this ever the definition of popularity?

I'd like it if it was, but I've never witnessed popularity that embodies that statement.

Popular opinion is usually informed by the most often repeated narrative.


No, it doesn't. Sometimes, popularity blocks the process of scientific inquiry. As Kuhn pointed out. But in any case, there is no truth, as Popper pointed out.


This is basically impossible given the current and near future state of the art.

The Allen AI group has been working on solving year 4 multiple choice science tests for about 3 years. They now score a bit over 60% correct, and that is a much easier task.


What I'm suggesting we start with is not exactly the same. The analogous question in that domain is "How accurate have they been in eliminating wrong answers?"

Everything has to be right for an answer option to be correct. Only one thing has to be wrong for it to be incorrect.


And how do you think that they would do that?

Choosing a random story from Politifact: http://www.politifact.com/missouri/statements/2016/nov/06/ro...

Like most false stories, there is a hint of truth to this, but that doesn't make it true. Politifact spends 3 pages discussing it, and concludes:

> Courts did object in three cases to decisions Kander made as secretary of state. But the specifics of those cases are much more nuanced than Blunt lets on. And the claim that Kander tried to manipulate the election is, at best, unproven.

A SOTA computer system could probably parse that last paragraph I posted, but couldn't get anywhere near doing the research needed to reach that conclusion. We are years off that (I work on research in this area).


I'm not surprised parsing something nuanced is beyond state of the art. But what about "illegal immigrants are committing an epidemic of rape against our children" or "we have a flood of illegal immigrants"?

A lot of claims that were made this cycle are rooted in measured quantities where the measurements we have disagree with the claims.


Define "epidemic of rape" or "flood of illegal immigrants".

If there is at least one rape or at least one illegal immigrant then it becomes subjective or contextual.

Is 3 rapes a flood? What if it is in a week in one town? What if one was by a former illegal immigrant? What if it is over the course of a year, but all 3 occurred in one day?

Is 20 illegal immigrants a flood? What if they all arrive in a town with a population of 100?

Who is "we"? Is a story about millions of illegal immigrants in Greece relevant? What if it about thousands of illegal immigrants working in Athens? What if Athens is in Georgia?

A real example:

"Broken Families: Raids Hit Athens' Immigrant Community Hard" vs "Athens crackdown shows no hospitality for illegal migrants"

One is a real headline from Georgia, the other from Greece. Can you tell which is which?


They can just form a group of truth checkers, a ministry of sorts, whose job is to inform people of the truth. They can call it the Ministry of Truth, where busy citizens who have no time to research can get their truths from.


How would AI resolve religious issues?

The Jews say they are waiting for the Messiah.

The Christians say the Messiah came once and will come again.

The Muslims say there is only one God and Muhammad is his Prophet.

Which of these groups is correct?

I would love to see the AI algorithm that can finally settle this issue! Seriously, mad props to the programmer who writes that code!


Fortunately, you don't really have to solve that problem. It would be sufficient to assess claims of actual fact about the actual world. I think the first step in writing the classifier would be to write something like

    float IsStatementAboutFacts(string statement);  // Returns confidence estimate


Your statement is that of an atheist. You don't seem to realize that many of the followers of these religions believe their assertions to be factual in every way. When I got off the subway today, at the Port Authority, in New York City, there were 2 women, holding up signs about Jesus. One of the signs literally said "Historical fact: Jesus rose from the dead".


To me that's less of an issue than a statement like "if Trump is elected Jesus will return."


This statement is true because Jesus promised to return regardless of election results :)


Right, they have a few historical claims. But none of them have any proof for those claims and their confidence is off the charts.

Some of their statements pass the first filter, far fewer pass the second.


Why not do what Twitter does and let the floodgates open, i.e. do no filtering?

I'm however doubtful that a "successful" social network would learn from a struggling one. Well, c'est la vie.


Is it established that that is in fact what Twitter does? I see something non-chronological on Twitter and it hasn't been clear for years what criteria are used.


They also banned the Hillary-for-prison hashtag, I believe. The supporters had to spell it wrong in order to get it to trend again.


Indeed, it would take more than likes and shares, just as an election of just votes and promises is insufficient. Yet, there we are.


"they were pushed to that by accusations of bias in the right-wing media."

Accusations of bias? I think you mean to say that actual Facebook employees stated this as a fact. And the story was covered by ALL news outlets, starting with Gizmodo.


And Zuckerberg meeting right-wing leaders shows that there was a great element of truth in it.


It could also mean he was simply running damage control on a bullshit accusation he had no way of denying.

If sites are spreading factually provable lies through Facebook as fact, and people are acting on it, should Facebook have any say in telling people "Hey, this is garbage"?

It's an open question.


It's also partly a constitutional question. I want my friends and I to be able to share articles we find in whatever press on FB. I'll filter by friend. (Mea Culpa: I haven't read TFA yet, so I may be missing some context.)


> ALL news outlets, starting with Gizmodo.

That's like saying "the truth of this information is at least 0"


> Now they are in a situation where they are damned if they do, damned if they don't. And people immersed in echo chambers will accuse them of bias no matter what.

What if they simply change the algorithm to perform some graph clustering / principal component analysis on the stories that are shared organically among groups, and then inject, let's say, 10% stories that are disproportionately shared by other clusters, but not this one?

The granularity could be tweaked to chunk everything into let's say ~5 clusters per nationality and language.

The machines do not have to assess facts, they just have to allow people to see beyond their friend-horizon. So call it the bubble-burster.
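A rough sketch of that bubble-burster, assuming a users-by-stories share matrix and plain k-means (all the data shapes here are assumptions):

    import numpy as np
    from sklearn.cluster import KMeans

    def cross_cluster_stories(share_matrix, user, n_clusters=5, k=10):
        """share_matrix: users x stories binary matrix of organic shares.
        Returns k story indices shared disproportionately by clusters
        other than the given user's."""
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(share_matrix)
        rates = np.array([share_matrix[labels == c].mean(axis=0)
                          for c in range(n_clusters)])
        own = labels[user]
        others = rates[np.arange(n_clusters) != own].mean(axis=0)
        # Highest "others share it, my cluster doesn't" differential first:
        return np.argsort(others - rates[own])[::-1][:k]

The feed would then be topped up with ~10% of these.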


Quite sensible. The more options the better


Human editors can separate sane from insane? I don't think so.

The recent presidential poll fiasco is a prime example. All the large media houses like CNN, ABC, WaPo, NYT completely failed to detect and acknowledge the silent right-wing voters, claiming Clinton would win hands down. This is not just about the political bias of the media but, more importantly, their complete failure to judge the pulse of the nation. In my opinion the mainstream media was just insane here.

On the contrary, the media engaged in active social bullying of conservative voters by calling them bigots, racists, misogynists and what not. Had this media been a lot more sensitive and sane in understanding both liberal and conservative voters, had they focused on what issues really mattered to the conservative voter-base instead of ridicule and bullying, I am pretty sure it would have changed things.

I blame the social bullying by the mainstream media as the prime reason why the country seems divided.


> The recent presidential poll fiasco is a prime example. All the large media houses like CNN, ABC, WaPo, NYT completely failed to detect and acknowledge the silent right-wing voters, claiming Clinton would win hands down.

The fake news on Facebook is a whole different thing; it's easy to spot, and it's literally news stories made specifically to game Facebook. It's insane that it has gotten this far.

https://www.buzzfeed.com/craigsilverman/how-macedonia-became...


I don't think the "bias" of mainstream media sources is solvable, it's not that problem we need to solve.

The problem we need to solve is the sharing of hysterical, blatantly false information, presented as facts, from specific, intentionally propagandistic sites. And I wouldn't hide or disallow it, I would flag it with a warning message.


I think the solution to "damned if you do, damned if you don't" here is to just remove the stupid news sidebar.


I imagine the news sidebar drives a lot of traffic.

(I agree, though.)


Facebook fixes misinformation with this one weird trick! Publishers hate it.

(I couldn't resist)


Yes. Remember that Facebook itself is essentially becoming what AOL used to be. AOL News was huge and it's still probably one of their most popular sites to this day.


No. The solution is to continue to provide news, to curate it, to shut down professional liars, and to tell the critics that if they don't like it, they are free to encourage their followers to use other social media.


> shut down professional liars

Would they really boot CNN?

CNN told us this: https://www.youtube.com/watch?v=_X16_KzX1vE

They were laughably wrong: https://popehat.com/2016/10/17/no-it-is-not-illegal-to-read-...

Maybe one can claim it wasn't an intentional lie, but this guy is someone who really should have known better. So if it wasn't an intentional lie, they were negligent. Not sure where that leaves us.


>Maybe one can claim it wasn't an intentional lie, but this guy is someone who really should have known better.

It would be great if an algorithm could fight this type of content, but it probably can't.

But we can probably start with sites that publish stories like "Rudy Giuliani SLAPPED A REPORTER WITH HIS DICK last Tuesday" and that might solve enough of this problem to call it a decent win.


Even a stopped clock is right twice a day.

Yes, you can completely filter them out figuring it's not worth the effort, but the problem is that's how you form bubbles to begin with.

Might have more luck boiling them down to verifiable facts and stripping all opinions (especially every explanation of "why X did Y" or "what X means").


I just don't know where the cutoff is for professional or habitual liars. There are many ways to mislead: cherry-picking data, methodological errors, straight-up confusion, and overwhelming technical data. Do we need to model the frequency of offense, or severity?

I'm not really going for the whole slippery slope aspect, more I feel like I'm miscategorizing the problem.

Maybe the metadata on misinformation looks different from the real deal. How quickly does it spread, and through which people? No need to actually judge the content.
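For instance, a sketch of content-blind spread features; the thresholds are pure guesses:

    def spread_features(share_times, sharer_followers):
        """share_times: sorted timestamps in hours; returns crude virality features."""
        n = len(share_times)
        span = share_times[-1] - share_times[0] + 1e-9
        bursts = sum(1 for a, b in zip(share_times, share_times[1:]) if b - a < 0.01)
        return {
            "velocity": n / span,                    # shares per hour
            "burstiness": bursts / max(n - 1, 1),    # fraction of near-simultaneous shares
            "avg_reach": sum(sharer_followers) / n,  # bot rings often have tiny reach
        }

    def smells_like_misinfo(f):
        # Hypothesis from the comment above: misinformation spreads
        # unusually fast and in bursts, via characteristic accounts.
        return f["velocity"] > 1000 and f["burstiness"] > 0.5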


Agreed! They just need a panel of curators from diverse backgrounds (men, women, race, religion, education...) who can provide different perspectives and reduce bias (I say reduce... human decisions will always have bias). Hell, maybe an independent company does the curation so there's no bias regarding anti-FB postings.

If there's no consensus on the "truthiness" then the default is benefit of the doubt that it's true.

I know we are all in IT, but not all problems are best solved by algorithms.


Would your definition of professional liars include Hillary Clinton and Barack Obama?


The news sidebar to me now is only a "Click to see if this random celebrity died or not" bar


It's amazing how much Trust Me, I'm Lying explains 90% of what you see shared on social networks these days. A great book, and still really relevant.


Not only in social media. The trend of the news in what were once subscription-only papers is disturbing as well now that they're online and supported in part by online advertising and eyeballs. I agree. Great book, written by someone who's been there.


Just a reminder that anyone in the UK or with an interest in UK politics should subscribe to Private Eye.

Ian Hislop likely has the distinction of being the most-sued person in English legal history - almost entirely by people seeking libel writs in attempts to bury the truth.

There is a podcast and selected content online. Funnily enough, it's the only periodical in the UK that has seen a rise in readership.


It seems like the best thing to do would be to penalize a site if it shows verifiably false content, until it both updates the original story and issues a retraction. Facebook could prioritize retractions in its algorithm and ensure that people who saw the original story would see the retraction as well.

Content providers who abide by journalistic guidelines could be prioritized. Providers who actually have an editor could even associate the editor of each article with the post, so that the editor's individual track record could be checked alongside the site itself. This would work better for large organizations that have multiple departments and don't deserve to be globally penalized for a single bad editor.
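A minimal sketch of that penalty, with hypothetical per-story fields:

    def site_multiplier(stories):
        """stories: dicts with 'false' (verifiably false), 'updated',
        and 'retracted' flags; all field names are hypothetical."""
        outstanding = sum(1 for s in stories
                          if s["false"] and not (s["updated"] and s["retracted"]))
        # Each unresolved false story cuts the site's distribution further;
        # fixing and retracting restores the multiplier toward 1.0.
        return 1.0 / (1.0 + outstanding)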


Do you really want Facebook in this role?

I guess it works as long as you agree with their choices, prioritization, and opinions, but as we saw this cycle, there was collusion between "journalists" and the Clinton campaign. Even if you see no problem with that, would you see a problem if the Trump campaign had done it?


Not particularly. Personally, I've always advocated for some type of "internet journalistic credentials" that could be embedded in meta tags to identify content producers, editors, originators, and retractions, according to a process of journalistic ethics that actually valued real journalism over content spam - content originators over scrapers and ad-farms.

I'd honestly envision something closer to the BBB to manage this type of thing and track membership, ratings, disputes and resolution that all browsers and search engines could tap into. The key would be that freelance journalists would need to be able to gain membership in the same way that journalists get officially credentialed to cover different types of events.
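To make the meta-tag idea concrete, a sketch that reads such credential tags; the tag names are invented, not any existing standard:

    from html.parser import HTMLParser

    CREDENTIAL_TAGS = {"journalist-id", "editor-id", "originator", "retraction-of"}

    class CredentialParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.credentials = {}

        def handle_starttag(self, tag, attrs):
            if tag == "meta":
                a = dict(attrs)
                if a.get("name") in CREDENTIAL_TAGS:
                    self.credentials[a["name"]] = a.get("content")

    p = CredentialParser()
    p.feed('<meta name="journalist-id" content="press-registry:12345">')
    print(p.credentials)  # {'journalist-id': 'press-registry:12345'}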


The BBB is just a way for angry consumers to snipe at vendors. It doesn't do any sort of investigation of issues, nothing close to Politifact or Snopes.


You can't win against clickfarmers and clickbait networks.

No machine learning will save you from botters, as the lion's share of commercial botters live in the country that most of the math research behind machine learning comes from.

In Russia, few people in their sane mind will start an advertising-network startup; commercial-scale clickfraud, farming, and botting give an order of magnitude higher returns.

The term "Internet marketing business" is used here mostly as a euphemism for all of the aforesaid.


I think a lot of the problem arose because they kept the content editors close to home.

A lot could be solved by just adding people to that department, located across the country in states where they already have offices anyway, or by hiring editors to work from home. That way, the editorial content wouldn't be so biased by region.


That would help.

Another problem is the type of people who would take this job. They must be literate, but not able to find a better job. This will be people with degrees in English, journalism, and similar -- giving a pretty obvious political bias. You won't be getting mechanical engineers, coal miners, patent lawyers, and stock traders.


> lead to a Congressional investigation.

That really seems like "abridging the freedom of speech" to me. Constitutional conservatives sure love pushing the limits of the constitution when it inconveniences them.

But, you know, the election is over so now that the right has total control we must all put aside our differences and work together.


It is kind of Orwellian to claim an investigation of suppression of speech is "abridging the freedom of speech." Nonetheless, it isn't the government's business what Facebook does with its feed, and constitutional conservatives should recognize that.

http://thehill.com/policy/technology/279498-some-conservativ...


You don't have freedom of speech on Facebook; if Facebook doesn't want you to have it, it's their platform...

They may lose credibility doing that, and it would become a worse place to discuss things, but that is up to them unless you have some kind of contract with them that entitles you to something else.

That for me is the main problem: people just want to trust stuff blindly and then get all jived up when that doesn't work. You're mad at your own stupidity, folks.


The original "bias squad" was not a first amendment issue. Facebook isn't the government (despite their incredible reach).


I think if they control 60% of the news pipeline (and climbing), they're categorically different. The U.S. government regularly investigates similarly large entities.


Facebook, et al, campaigned to have internet access be considered a utility instead of a luxury. The UN is pushing it as a human right.

If you follow that line of reasoning, the companies who run the pipes and control the flow of information aren't just companies playing favorites. They're potentially influencing - if not outright regulating - human rights worldwide, which means they should face a much higher level of scrutiny.

And they walked right into that one.


FB would need to abandon the algo-driven feed altogether. Yes, manual forwarding is bad, but that existed in the email days.

Nowadays you get all that suggested content driven by your past behavior, acting as a giant amplifier.

To go back to something idlewords keeps harping on: all of this is possible because of the persistent storage of user data. Kill the deep profiles, kill the tracking, only use what the user directly, willingly entered.

In SV speak: utter heresy.


> Now they are in a situation where they are damned if they do, damned if they don't

On the topic of "don't," an unspoken option is that Facebook simply not try to do news.

> But the entire system is fundamentally broken...

Why would they persist in the face of this? Hamartia?


> Why would they persist in the face of this? Hamartia?

"If you're not a part of the solution, there's good money to be made in prolonging the problem."

See https://despair.com/products/consulting for the poster that I stole that from.


Facebook consumes too many eyeball hours not to do news.


> Now they are in a situation where they are damned if they do, damned if they don't. And people immersed in echo chambers will accuse them of bias no matter what.

You are assuming that algorithms cannot be biased. Earlier, humans were biased. Now, algorithms are biased.


Provided the data isn't wrong, we can typically put tight statistical error bounds on the output of various ML algorithms. Most people's understanding of how algorithms can be "biased" is completely wrong. There was a whole furor a while back about algorithms discriminating due to language differences in the data for people of different races. Guess what: if you give race as a parameter to any half-decent algorithm trained on that data, the algorithm will learn that the written data contains those biases. Decent algorithms trained on true data are pretty much guaranteed to have results within a very small margin of reality.


> Guess what; if you give race as a parameter to any half-decent algorithms trained on that data, the algorithm will learn that the written data contains those biases.

You know what? Even if you do not give race as a parameter, an algorithm can still be biased. It can easily learn race from secondary or tertiary parameters.
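A toy demonstration of that, with synthetic data and a from-scratch logistic regression (nothing here resembles any real system): the protected attribute is withheld from the model, yet a correlated proxy reproduces the disparity baked into the labels.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)                # protected attribute, hidden from the model
    proxy = group * 0.8 + rng.normal(0, 0.3, n)  # e.g. zip code, correlated with group
    # The labels carry a human bias: group 1 gets positive outcomes far less often.
    label = (rng.random(n) < np.where(group == 1, 0.3, 0.7)).astype(float)

    # Fit logistic regression on the proxy alone, by plain gradient descent.
    w = b = 0.0
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-(w * proxy + b)))
        w -= 0.5 * float(np.mean((p - label) * proxy))
        b -= 0.5 * float(np.mean(p - label))

    scores = 1.0 / (1.0 + np.exp(-(w * proxy + b)))
    print("mean score, group 0:", scores[group == 0].mean())  # roughly 0.7
    print("mean score, group 1:", scores[group == 1].mean())  # roughly 0.3

The model never sees `group`, but it scores the two groups very differently anyway.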


I don't think you understood my post; giving race as a parameter allows ML algorithms to detect and counteract human racism.

If an ML algorithm notices a disparity along e.g. racial lines, it's because it's actually there, not because a human imagined it.


> damned if they do, damned if they don't

The level of damnation differs significantly.

Having a human team accused of bias (possibly falsely) and pressured by politicians to change puts them in the same category as the NYT, the Wall Street Journal, and every serious print and broadcast service on earth.

Algorithmically spreading exciting and blatantly false claims puts them in the category of the National Enquirer and 'The Protocols of the Elders of Zion'.

One category is significantly less damned than the other.


I think that's a genuine concern. The only way I can think of to break this paradigm is to have users elect moderators.

This would also give a platform for new and grassroots politicians to rise up over time. Right now, grassroots enthusiasm seems to flow toward senators (Ron Paul, Bernie Sanders, ...) because they didn't vote for something most of the other senators voted for.

Meanwhile, having the access to capital and the connections needed to even become a senator is very constrained and unlikely to come without strings attached.

Anyway, having users elect moderators solves the problem of "trusting the moderator," and since I assume they would be salaried positions, Facebook can apply and enforce fairness guidelines.


It seems pretty obvious to me that they replaced the human-curated news section with a far (and intentionally) inferior product. They knew people would eventually begin complaining about the new system and asking for the old one back.


Why exactly wouldn't it be beneficial to let the market (the viewer), without any regulation, decide which information should spread and which shouldn't?

If people prefer seeing outrageous stuff then let them have it.

The reason why governments and institutions want to control this is to control their people; all the other reasons given are just attempts to mask this.


> Why exactly wouldn't it be beneficial to let the market (the viewer), without any regulation, decide which information should spread and which shouldn't?

1) It doesn't work. It leads to the lowest common denominator winning.

2) Humans aren't rational.

3) The world is complicated enough that the median person simply doesn't know much about anything outside of their silo.

4) There are powerful forces conspiring to profit off of people's ignorance, rather than to turn them into less ignorant, more rational, more empathetic, more curious people.

5) There are many people who simply aren't that curious. Left to their own devices, they will be overly susceptible to trickery and populism.

This might be a nice idea in a socially homogeneous group of a small number of like-minded farmers. It fails surely and predictably in a large group trying to make good decisions in a complex and technical world.

Read: Why Smart People Make Big Money Mistakes And How To Correct Them: Lessons From The New Science Of Behavioral Economics


It doesn't matter whether I am rational or not; it is my freedom to consume information from whichever source I prefer, just as it doesn't matter whether it's justified to spend double the money on an Apple product over something from a competitor.

It's also irrelevant how you would like people to be (more rational, empathetic, and so on); actually, no one on this planet cares about your opinion on that matter enough to change themselves. (And no one ever will, trust me.)

What matters is that people are free to choose to be whatever kind of person they want to be.

Some of them choose to be persons that you don't like, well then don't be around them, issue solved.

You seem to believe that we exist to follow some great plan devised by other people who believe they are smarter and know what we should actually be instead of what we choose to be. This is not the case.

We've all seen the results of ideology that seeks to reach utopia by changing man into something elevated (Soviet Union, Nazi Germany, North Korea, Khmer Rouge, Venezuela). It always failed in a catastrophic manner and often caused unimaginable suffering because of the authoritarianism inherent in such ideology.


We've learned enough neuroscience to know that humans make predictable systemic errors in judgement. This is when pursuing their own preferences, not someone else's preferences.

This has real effects that harm other people. Do what you want to yourself. The problem is when you harm others due to your own ignorance or avoidable biases.

None of the big bad movements you mention were genuinely trying to make humans into better thinkers. In fact, they all relied on humans not being such great thinkers.


> This has real effects that harm other people.

That's why we have courts where you can claim damages.

> None of the big bad movements you mention were genuinely trying to make humans into better thinkers. In fact, they all relied on humans not being such great thinkers.

Oh really? Then the Soviet man, the Übermensch as understood by the Nazis, and all similar socialist concepts are completely new to you? Some of these concepts are pretty close to what you described a person should be like.

https://en.wikipedia.org/wiki/New_Soviet_man

So it seems to me that you basically share many of the views of a Hitler or a Stalin on questions like these.


You seem to be pattern matching on certain features and neglecting that details matter.

How can conscientious teachers or coaches exist in your worldview?

Surely educators are all evil because they dare to seek to improve people. Education has been used as a pretext to imprison people in oppressive regimes; therefore education is evil. Absurd!


I have no issue with education and teaching without coercion; what you are proposing, though, is the use of force to prevent the spread of information you do not like or that might cause people to reject your ideology.

This is the real issue here. You are arguing in favour of using force (the basis of Socialism) while I'm arguing for freedom (the basis of Capitalism).

And please don't tell me that censorship or regulating media isn't really using force. Go break censorship laws in countries that have them and let's see what they do to you.


You just don't have enough contempt for humanity. The masses are unaccountable and will lie as much as they think they can get away with. It won't help them. They will lose sight of their self-interests as they become more and more detached from the facts of the world.


That's what happens when you try to control others' speech. You will be blamed for bias, and the worst thing is that the accusations would be completely correct, because humans are biased. Humans have opinions, humans have cognitive biases, humans are prone to groupthink and a long list of fallacies. It is extremely hard work to get even close to objectivity, let alone achieve it - professional scientists, who are trained and have tons of tools to avoid biases, still regularly fail and produce irreproducible results and wishful-thinking papers that are later debunked - and for somebody involved in politics, with no such tools, it's doubly hard. Hoping Facebook somehow magically solves this problem for us is naive at best.


The solution is simple: stop being gullible and stupid, trusting things you hear or read without something credible to back them up.

Facebook has its own bias; most people do. Some professional reviewers and political commentators try to be as unbiased as humanly possible, but why blindly trust them? Even if that's their intention, there is nothing that says they cannot simply be factually wrong.


Maybe their AI should be trained to fact-check stuff posted on Facebook? So if you post something that claims outrageous things, it would be visible right from the start that the claim is false.
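Verifying claims is the genuinely hard part, but as a sketch of just the flagging plumbing -- assuming a human-curated list of debunked claims already exists -- something like this would do:

    import difflib

    # Hypothetical, human-curated database of debunked claims.
    KNOWN_FALSE = [
        "the pope endorsed a presidential candidate",
        "fbi agent investigating the emails found dead",
    ]

    def flag_if_debunked(post, threshold=0.6):
        """Return a warning if the post closely resembles a debunked claim."""
        post = post.lower()
        for claim in KNOWN_FALSE:
            similarity = difflib.SequenceMatcher(None, post, claim).ratio()
            if similarity >= threshold:
                return "Disputed: resembles debunked claim '%s' (%.2f)" % (claim, similarity)
        return None

    print(flag_if_debunked("BREAKING: FBI agent investigating the emails found dead!"))

Fuzzy string matching is of course trivially evaded by rewording; a real system would need something much smarter, which is the whole problem.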


There was a quote regarding this which I can't find, but paraphrased it was something like,

"Bullshit flies halfway around the world before the truth has even gotten its pants on."


It's a sign of the times that we shit on Facebook's AI for not quite being smart enough.


There's a proper response to "accusations of bias" from alt-right white-nationalist thug media. Never mind whether the accusations "looked likely to lead to a Congressional investigation".

The response is to tell them to fuck off, and that the professional editors in your private company will continue to do their job however they see fit.

Period, paragraph.


The only reason it looks alt-right white-nationalist to you is because anyone on the left who disagrees with prevailing left wing dogma is branded a heretic and a bigot, and shut down quickly.

Take The Guardian. They employ people like Jessica Valenti and Laurie Penny who publish feminist clickbait. Then they aggressively moderate comments which explain why those two are ill-informed at best and dishonest manipulators at worst. Then they do a "study" where they demonstrate how much harassment their female writers receive, by considering every deleted comment to be an instance of harassment.

Alternative hypotheses are not welcome, and there is no accountability. Actual studies like Pew's about the magnitude and nature of real harassment are ignored, or spun by cherry picking. The dogma remains: women have it worse, and it's because they are women. This system gets them more traffic and attention, so they are encouraged to continue, even as they insist it's terrible and someone should stop it.


I may be missing something here, but what do your two paragraphs of axe-grinding about women have to do with his remark about American "alt-right white-nationalist thug media"?


What do two paragraphs of axe-grinding about women like Jessica Valenti and Laurie Penny have to do with American "alt-right white-nationalist thug media"? Take a look at the narratives they've been pushing about Trump and the evils of the alt-right lately.


SOME women. You're already altering his words.


Where does he use the words "SOME women" then, if you want to be this anal?


Your comment implies that the commenter was axe-grinding about women in general. The complaint was about two particular women writers at the Guardian using their own sex and feminism as a shield against criticism and fuel for their own clickbait.


> Your comment implies that the commenter was axe-grinding about women in general. The complaint was about two particular women writers at the Guardian using their own sex and feminism as a shield against criticism and fuel for their own clickbait.

So it's okay for you to infer what other people mean, but not okay for me to do it? Where did I say "women in general"? Why would my post imply what he said, when everyone can read exactly what he said and interpret it for themselves, just like you did?

He's grinding an axe (completely out of nowhere) about "feminist clickbait", "female writers", the "dogma" that "women have it worse", and singling out two women (and "people like" them) as examples. What am I supposed to do, pretend he's talking about men?

Likewise, why should I pretend that he said "some women" when he didn't? I have no idea what proportion of women he has a problem with.

None of this is relevant to my point though, which was that his rant about women (however many it may be) had absolutely nothing to do with what he was replying to, and was simply an attempt to inject this particular hobby horse of his into a thread where it doesn't belong.


He has a problem with two women. It's you that made it ambiguous by claiming he had a problem "with women".


> He has a problem with two women.

How do you know he has a problem with (just) two women? Did he say that? No. Even now he hasn't said that. On the contrary, he specifically said "people like" the two examples he named.

Are you genuinely telling me you don't know what "people like" means in this context?

> It's you that made it ambiguous by claiming he had a problem "with women".

The words "with women" don't even appear in any of my comments, in any context at all, so I don't know where you're quoting that from. I actually said "I have no idea what proportion of women he has a problem with".

And if my understanding of another person's comment (which is still there, in his own words) somehow made that comment "ambiguous", why are you not finding it ambiguous?


I think The Guardian is a bad example of this. Their articles follow a particularly narrow group of viewpoints. They will never have the mainstream (liberal) appeal of the New York Times or the Washington Post.


Don't you mean a good example of this?


Incidentally, which Pew studies? In my anecdotal experience, harassment is a pervasive issue; e.g., I personally don't know of any urban woman not adversely affected by street harassment. Like a lot of things, it is a couple of men doing a whole lot of damage.

I keep feeling like the whole alt-right anti-PC movement was born of things people said online or corner-case behavior in insulated universities. Like, who cares? Most men I know who love the anti-PC movement would, in the moment, gladly intervene if a woman is getting verbally harassed and is visibly shaken up. Mention Valenti to the same person and they fly into a rage. Like wut?!

Most feminist and LGBT activist groups aren't talked about by Jezebel and are doing great work.

tl;dr: Message to alt-right activists everywhere: Talk to minority rights activists in person outside of emotionally charged protests. You'll find you don't disagree by much.


> Most men I know who love the anti-PC movement would, in the moment, gladly intervene if a woman is getting verbally harassed and is visibly shaken up. Mention Valenti to the same person and they fly into a rage. Like wut?!

I'm a man; I dislike men who are asses to women and women who are asses to men. I find it consistent and deeply un-wut-worthy. Heck, it seems I sometimes even care about men vs men and women vs women too.


Telling people who aren't aligned with the liberal intelligentsia to fuck off is exactly what got us into the current mess. Most people are reasonable, but when the reasonable and rational media ignore their issues, we leave the door open to demagogues and dictators. How else could a silver spoon like Donald Trump ever claim to represent the everyman?


I think you're misinterpreting me, and also distorting my words significantly, which I object to. I didn't say we should tell people "who aren't aligned with the liberal intelligentsia" to fuck off. I was speaking narrowly about lawyers from places like Breitbart; i.e., the kinds of folks who have been trying to apply pressure to Facebook and making accusations.

NOT the general population.


Sorry, I apologize as I definitely was using your comment as more of a general soapbox than directly responding. Upvoted.


Don't apologize, it's how I took the comment as well. You can assume someone is guilty of generalizing when they generalize.


> Telling people who aren't aligned with the liberal intelligentsia to fuck off is exactly what got us into the current mess.

I am so sick of this argument that Trump's victory is somehow the fault of bubble-trapped city-dwelling liberals for not understanding how the rest of the country thinks. Trump supporters are the ones who have either been manipulated to vote against their own interests; implicitly elected a Republican administration no different from the ones that they now claim to have rejected; ignored the obvious signs that Trump is of the same "elite" ilk that seeks to exploit them; been duped to amplify social hatred and reinforce the worst bigotry and sexism of the right-wing; benefited from many of the federal supports they claim to hate; or neglected the existential peril of their belief that the economy is more important than the planet's ability to sustain life; if not all of the above. Should the left have picked a candidate that is more aligned with the issues of the working class? Absolutely. Does that mean it's entirely on them? Hell no.


To put some perspective on this, I am from Minnesota, I was there in 1998 and I helped Jesse Ventura get elected. It had a remarkably similar vibe of unexpected victory up-ending the establishment. Of course Jesse Ventura is much more of a rational and thinking man than Trump, but the race was similar in that no one took him seriously and he won in a huge upset against the major party candidates.

The thing with Trump is that liberals everywhere still can't get over what an asshole he is and what a disaster this is, but the fact that an asshole like him gets elected is very telling. We need to observe and understand this. Saying that people were duped and are idiots is not an interesting story; it doesn't help us make a better future, and quite frankly Trump supporters are not all idiots and bigots. What the intelligent Trump supporter would say is that establishment politicians have duped everyone too, and that Hillary was not going to further your interests. Granted, Hillary was definitely the lesser of evils, but I don't think there's a cut-and-dried case that her policy was going to help the low to middle classes. Plenty of blame to go around here.


If you want to know why Trump got elected, it's because most people don't have a basic clue as to how the foundations of this country actually work. Seriously, go ask 100 random people what the national debt is, and I doubt you'll get more than 1 or 2 who have any sort of clue.


>What the intelligent Trump supporter would say is that establishment politicians have duped everyone too, and that Hillary was not going to further your interests

And that is why your "intelligent" Trump supporter is still clueless in this case, because he just voted in a white house full of career politicians (Newt Gingrich? Are you kidding me?). It's clear that Trump was mostly a protest vote against the establishment, but it was an incredibly hollow one and made literally no sense if the supporters looked beyond catchy slogans and strongman signaling. And every Trump voter, no matter how intelligent or decent or caring, still cast a vote for a racist and sexist candidate and that is something I find to be unconscionable.


Again with the racist accusation. You are either trolling or you have no intention of ever understanding why Trump supporters supported Trump, or you're just venting your anger from Hillary's humiliating defeat.


Trump actually promised to surround himself with experienced people, so "a white house full of career politicians" should not surprise anybody. It's keeping a campaign promise.

There was little protest vote. This was about corruption, job loss in the heartland, and some very non-racial issues with immigration.


Personally, I blame it on the left's constant, ruthless berating and demonization of Christian|white|rural|blue-collar|conservative people. I'm not surprised they pushed back after a decade of bullying in the name of tolerance.


Oh yeah I forgot it was the evil left that was demonizing people during the past ten years. Certainly no Christians during that time berated and demonized someone just on the basis of say, their sexual orientation or religious beliefs.

Give me a break. The right lost the culture wars (for good reason, as their regressive social policies trampled all over civil rights), and this is them driving our country toward suicide because they'd rather die than face the reality that America is no longer a white monoculture.

Your reasoning is basically "maybe if you weren't so mean to the white supremacists Trump wouldn't have happened". Are we supposed to appease neo-Nazis and ignorance now? There were racially motivated attacks across the country last night, all done in the name of Trump. It does not look good.


Have you been on Twitter lately? If you phrase one thing the wrong way you get branded as a bigot, racist, sexist, whatever, and the torches and pitchforks come right out. If you don't see the parallels between the extreme right and the extreme left and prescribed moral code of what ideas and views are acceptable to express then you are not as smart and objective as you think you are.


> face the reality

Talk about reality checks.

You guys wanted to have it too fast and too easy. You thought you could change things by passing laws and silencing criticism with media hysteria. You celebrated your superficial legal victories while completely ignoring the fact that sentiment against "affirmative action", "LGBT rights", feminists, Muslims, Blacks, and whatnot had been steadily growing for ten years all over the West.

You gave voice to the most bitter representatives of minorities to sell them as victims to the general population, and didn't mind that the venom they spilled enraged everybody else. As if you completely didn't expect that one day somebody might give voice to the most bitter of your opponents.


I rest my case.


> vote against their own interests

If an unemployed factory worker votes for a protectionist who promises to bring back factories, how is he voting against his own interests?


That's the first-level analysis. Let's dig deeper...

Why would one believe a so-called "protectionist" who has actually off-shored jobs?

Why would one believe a billionaire who claims to not be part of the establishment? A billionaire who was a celebrated part of the establishment during the '80s, in New York. That den of conservative values?

Why would one trust a candidate who was essentially a left-leaning Democrat until he decided to run for President?

Why would one believe a candidate who contradicts himself, is difficult to pin down on any policy details, and who says different things to different groups to a degree that's unprecedented, even for a politician?

Anybody can say anything. If you believe someone this erratic, you almost have to be wilfully ignorant of human nature.


Well, for many of your examples I would say they fit both candidates, with small adjustments. So the only choice is to go with the one who currently says what the voters agree with.

I'm always fascinated that people don't realize that what one side says today the other side said yesterday.


You make some good points, but this:

> Most people are reasonable

Is blatantly false. People are tribal.


How does tribal automatically mean being unreasonable?


It doesn't. I'm conflating two different things that make sense in my context, but that I don't have time to explain.


First of all, your use of biased language is ridiculous; you are as bad as the 'alt-right white-nationalist thug media', and I wish people like you had no place in rational discussions.

Second, if you tell them to "fuck off" you risk losing a significant chunk of your user base, who believe their content and information is being manipulated against them. Facebook isn't in the politics business; they are in the social networking business. If Facebook is seen as political at all, then a competing business will set up a service to sell to the other side. Look at what happened to mainstream media channels: claims of liberal bias in the media giants directly led to the success of Fox News.


You're right. Telling bullies "no" does risk that other bullies will leave your site. But that risk is already there.

And Facebook is already seen as political; that ship has sailed.


>And Facebook is already seen as political; that ship has sailed.

I don't think that's true. Certainly the extremes on each side will say that. But I think everyone in the middle doesn't. I don't. Certainly not in the same way as I see Fox News/CNN/MSNBC as political.


We're only talking about the extremes. We're talking about spurious pressure from alt-right media.


It's funny that you don't think the people marching in the streets, blocking traffic, screaming obscenities, and destroying things right now are bullies.


I think you're inferring something here that was never explicitly stated.


Correct. I don't know why editorial bias was a problem in the first place.

Right now they have false-equivalency bias that drives fake stories.


> I don't know why editorial bias was a problem in the first place.

Because, like it or not, Facebook has become a trusted news source. Because of that, they have a duty to accurately report the news.

Their human editors fell down by vanishing stories they found politically uncomfortable.

Their algorithmic editor fell down by promoting stories that were false.


> Because, like it or not, Facebook has become a trusted news source. Because of that, they have a duty to accurately report the news.

What other trusted news source has this "duty?" What other trusted news source does not have biased editors?


> What other trusted news source has this "duty?"

All of them.

> What other trusted news source does not have biased editors?

None of them, of course. Man is a fallible creature.

But other news sources try, most of the time, to be objective and accurate.


"OMG, Trump has won through lies and deception! We failed to stop him. How on Earth did that happen? We must out-manipulate our opponents next time."

If you read between the lines, this is what the article condenses to.

The discussion here is mostly creepy groupthink shit.

Social networks fact-checking their content? What's next? Should AT&T stop the spread of misinformation over its phone lines? Should USPS fact-check your mail?

Facebook is not a real news source and never will be one. At best it's a communication medium. At worst it's a giant propaganda machine. Any moves to get it further away from the former and closer to the latter are just machinations to change who benefits from the propaganda, and nothing else.

We don't need an "improved" Facebook. We need a working replacement for old-school newspapers, TV stations and radio channels. The "new media" eroded all of those, but failed (so far, at least) to provide anything of equal utility and value. Hence all the issues involved in the coverage of these elections.


Agreed.

The problem is the users, not the platform. If users propagate misinformation, the users propagate misinformation.

The easy solution people often jump to is fighting negatives with negatives, assuming it yields a positive. But often a positive approach is more effective. Offering incentives for good acts, not just disincentives for bad acts, is a fairly popular recent trend backed by research.

In this case I can't help but think that the best solution is to focus on education and the propagation of correct information, not censorship (or at least, something that smells like censorship) of bad information. If "new media" is a problem we should be fighting it closer to the source.

I don't know what Facebook's role in that would be, but ideally, as a platform, it would be minimal.

But ultimately, it's worth remembering that it's hard to build a good system with bad raw materials. If people are interested in falsehoods and echo chambers, their social media will reflect that.


Hmm, I mostly agree, but it's important to remember that a bad mechanism can encourage bad behavior as well. Facebook must make choices about design of the feed algorithm that have important effects on incentives and behavior. So even if they take a minimal censorship role, we cannot ignore the effect of the feed algorithm - in fact, we should focus on how to design it!

For example, the extent to which one "bubbles" users into their own echo chamber is largely up to the algorithm designer. If you look at "content aggregator" sites like HN, reddit, Facebook, they all have pros and cons. HN and reddit give incentives in the form of karma. They all have different levels of "bubbling" and opting into or out of bubbles.

Good design of such sites is an open problem -- even formalizing good goals for such sites is an open problem -- but it is a design problem we should be thinking about and addressing.


If the news feed were just a reverse chronological list of everything your friends posted, then I'd agree with you -- Facebook would be just a medium. But they filter and sort it.

I would actually like to have a knob or a switch I could use to add or remove certain elements from the news feed filter algo. I think it would be great if one of those parameters was an (open source) "factuality score."

They're already doing this with clickbait, so why not brain-rot? Especially if it's an optional feature.
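A sketch of what that knob could look like, assuming a per-item factuality score in [0, 1] already exists (producing that score is of course the contested part):

    # Hypothetical feed ranking with a user-tunable factuality knob.
    def rank_feed(items, factuality_weight=0.0):
        # weight 0 reproduces pure engagement ranking; higher values
        # trade raw engagement for the (assumed) factuality score.
        def score(item):
            return item["engagement"] + factuality_weight * item["factuality"]
        return sorted(items, key=score, reverse=True)

    feed = [
        {"title": "Outrageous viral claim",  "engagement": 9.0, "factuality": 0.1},
        {"title": "Dry but accurate report", "engagement": 4.0, "factuality": 0.9},
    ]
    print([i["title"] for i in rank_feed(feed)])                        # viral first
    print([i["title"] for i in rank_feed(feed, factuality_weight=8.0)]) # accurate first

Exposing the weight to the user, rather than fixing it server-side, is what would keep it from being censorship.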


>>But ultimately, it's worth remembering that it's hard to build a good system with bad raw materials. If people are interested in falsehoods and echo chambers, their social media will reflect that.

I wouldn't phrase it quite so negatively, but I think you're on to something here. If people don't enjoy going on Facebook, they won't go on Facebook. What do people enjoy going to Facebook for? Interacting with their Facebook friends, whom they likely friended in at least some part due to their shared beliefs.

"Fixing" Facebook in this regard means forcing people to interact more with entities outside their self-selected social group, when the whole reason they got on Facebook in the first place is to interact with their self-selected social group! Facebook would be foolish to fix this problem, it would drive their users away.


I don't see why Facebook is attempting to manipulate its users at all, nor how anyone thinks this is acceptable.

If people spread misinformation, then the way to stop it is to spread correct information.

If Facebook takes a more active role in shaping people's perceptions, I think it would be incredibly immoral, and I would hope the company tanks.


The irony here is the idea that all of this so-called propaganda comes from the right.

The left is constantly selecting its own set of "facts" and tactfully leaves out whatever doesn't agree with its agenda.

Simply look at all the disingenuous pro-Clinton fact-checking throughout the election. Just look at the amount of rape, assault, race-baity bullshit that is circulated by liberals every day. Their articles may not fly in the face of science, but they are usually equally misleading.

Is this a veiled attempt at silencing Wikileaks? Infowars? There is a reason they are being called the regressive left. They want some articles to be banned from social media because they believe they aren't true. They believe, yet again, that people should not be allowed to make their own decisions in life.


Please don't make generalizations about large groups of people. It leads to irreconcilable feuds.


Should the world stop generalizing because it hurts your feelings? You could just look it up in the dictionary and understand that generalizing does not mean that whatever is said applies to every single person in the generalized group. That would be way easier for both you and the world.


You shouldn't let a fear of feuds control you.


Maybe the word "feud" wasn't strong enough. Have you seen what's happening in Turkey? My Turkish friend suggested that's where this leads.


Eh... unless you have numbers, I'm going to disagree. I don't visit FB much, but I've seen enough other social media to know that both sides play the same game. They both leave out data, mis-use existing data, and outright lie. I'd love statistics to prove me wrong.


Isn't that what the person you are replying to is essentially saying? That it's both sides, not just the right?


Is it? I took it to mean that it's coming from one side. Maybe I'm mis-understanding?


Yes, you are misunderstanding. The comment correctly accuses both sides of misleading propaganda.


>They both leave out data, mis-use existing data, and outright lie. I'd love statistics to prove me wrong.

I don't disagree with this at all. I just said that they are framing this as if the only propaganda on social media comes from the right.


My apologies then for mis-understanding!


I'm sure it is true, but only one side has dominant control over internet services.


What would be so bad about at&t offering a service that banned fraudsters from calling you? Or if the USPS refused to deliver mail that said "URGENT TIME SENSITIVE" on the envelope unless the sender could prove it really was urgent?

As someone with elderly relatives who've fallen for scams over email (which major email providers already do block, automatically), I would celebrate both of those outcomes.


> What would be so bad about at&t offering a service that banned fraudsters from calling you?

That isn't what we're talking about, now is it?


Labeling something "misinformation" is not an objective act, period.

Facebook will be engaging in censorship if they pursue this.


Objectivity is literally about things being factual regardless of subjective opinions.

There's nuance in the world, but if someone says "The earth is flat", it's not dystopian for a newspaper or online newsfeed to editorialize and minimize those claims.


It's impossible to label an article presenting a theory or perspective as non-factual. No one ever said it was fact. Anyone trying to claim it is fact is a fool.

Facts are things like:

I have $20 in my pocket.

Creationism isn't misinformation, even if it is lacking in evidence.


Agreed. This appears to advocate to fight propaganda with more propaganda. I don't trust anyone at Facebook to determine "facts."


Why? Facts are facts. The definition of a fact is that it is an objective, verifiable statement. Trust does not come into play, because you don't need to trust anyone's judgment to verify a fact. Judgment is not involved.

Whether Fox fired Megyn Kelly or not is a fact. Facebook's judgment is not involved in verifying it.

Whether an FBI agent who examined Clinton's emails is now dead or not is a fact. Facebook's judgment is not involved in verifying this fact.

You are conflating fact-checking with opinion editing. The suggestion here is not that FB should filter out any kind of objectively verifiable fact, no matter which way the fact is used. It's that perhaps FB should filter out, or at least not signal-boost, objectively verifiably false information.


Humans are bad at determining facts. That is a fact.

Consider the irony of refuting that statement. Yes the word "fact" has a definition, but that doesn't mean humans can or will obey that definition. In early 2003 the NY Times reported Iraq had WMD, so that was a fact, right? How long would it take me to find a Facebook employee who would tell me it's a "fact" that vaccines cause autism?

Anyone who claims they can differentiate facts from opinion and truth from fiction without bias, is the last person that should have that responsibility.


> "OMG, Trump has won through lies and deception! We failed to stop him. How on Earth did that happen? We must out-manipulate our opponents next time."

This is a ridiculous oversimplification of a complex and important set of issues.

The issue is the spread of facts vs misinformation, not liberalism vs conservativism, nor Democrats vs Republicans, nor Trump vs anti-Trump. Facts can work on both sides, as can misinformation. It's kind of fucked up to assume facts somehow only go one way.

> Social networks fact-checking their content? What's next? Should AT&T stop the spread of misinformation over its phone lines? Should USPS fact-check your mail?

Well, gee... Let's think carefully. Are your private phone calls on AT&T a public discourse? Is your mail? Do either AT&T or USPS signal boost some of your discussions during public transmittal? Is your analogy even logical?

Facebook is a communication medium, as you said. Moreover, it has a set of rules and policies governing what can be shared, which shared content gets shown to a given user, and how the content is further propagated and boosted. The entire discussion, which your post is completely sidestepping, is what content should or shouldn't be propagated and boosted.

One can make arguments for more rules or fewer on this, for different rules or keeping the same rules. But asserting that it is a non-issue or drawing false equivalences are non-arguments irrelevant to the discussion at hand.


I agree with some of this, but the main difference between AT&T and Facebook is that FB is controlling what you're receiving.

Imagine if USPS sent you things that were sent to you, but also random packages they thought were good for you.

Facebook, even just through its algorithm, is exercising some editorial control and distribution.


Facebook will be less effective as a medium/content distributor if people become aware of the filtering they do; I think they are shooting themselves in the foot if they start to actively censor messages. People will just move on to Twitter for politics and use Facebook for the personal stuff only.

Trust the market; there is plenty of competition that puts things into their proper place. Facebook doesn't own its users, either.


Couldn't agree more.

There is no actual problem: social networks are simply not the right media to be a news source, and they were never meant to be.

Moreover, to expect to be fed "correct" information all the time, with no effort on the consumer's part, is flat-out delusional. What we need, and what we have always needed, is to apply critical thinking.

Do you believe everything someone says? Well, then you have a problem.


I agree that the idea of "fact-checking" stories is troubling and is overall a bad idea.

You are not at all addressing what is actually in the article: the Newsfeed feature that surfaces the most shared stories. Facebook isn't discussing deleting homeopathy, Clinton body count, or Trump kompromat articles from our various Facebook walls.


They can fact-check whatever they want; it's their site. It's stupid, but then people should just leave for greener pastures. It's not like in the beginning there was Facebook and we're bound to use it forever.

The problem is people want to 1) trust blindly and 2) not be taken advantage of. You can only have one of those.


Such a comment betrays an incredibly unsophisticated understanding of the issues being discussed. Perhaps a revisit to McLuhan's maxim would be enlightening:

https://en.wikipedia.org/wiki/The_medium_is_the_message

https://www.youtube.com/watch?v=Ko6J9v1C9zE

https://www.youtube.com/watch?v=UoCrx0scCkM


Perhaps you could avail yourself of some of your erudition and explain what relationship these three links have to your objection.



It has less to do with "the digital age" and more to do with governments ignoring large numbers of voters. This is what's supposed to happen when the ruling class gets out of touch.


The idea of a unified "ruling class" is laughable. The current president is constantly at odds with the political party that controls the Senate and the House of Representatives.

I also disagree with the notion that most congressmen are out of touch with their voters. Each individual congressman is pursuing the agenda his or her constituents voted him or her into office to pursue. An example: some constituents want to repeal Obamacare while those in other districts want to preserve it. Each set is affected differently, so it is reasonable for different districts to have different opinions.

Deadlock is a feature of this Republic; the founders considered it to be superior to a tyranny of the majority over minority interests.


>The idea of a unified "ruling class" is laughable. The current president is constantly at odds with the political party that controls the Senate and the House of Representatives.

Sort of. Not really. The political parties do represent different power factions.

But they are unified in the sense that people in the "deep state" - mid- to high-level bureaucrats, academia, and the media - have more in common with each other than they do with you and me. And anything that threatens their collective control will be dealt with more harshly than they deal with each other.


Can you go into more depth? Also, I believe you are leaving out a group: high-level business leaders, who often do a tour in government posts. (I'm not criticizing the practice; we want competent subject-matter experts working in government.)


I agree. Business leaders belong on the list.

I once worked a job that involved dealing with a lot of civil service folks. We worked on a land-locked Navy base which had exactly two uniformed naval personnel (that I ever saw, anyway), who were in theory the #1 and #2 people in charge of a few hundred civil servants and a similar number of contractors.

Now, everyone knew the Navy guys (a captain and his XO) would be there for about 18 months and then they'd get transferred or retire. If the captain wanted something to get done, it would only get done if the civil servants (who had been there 20+ years) wanted it to get done, because they knew how to gum up the works until he was gone.

They also knew how to undermine and embarrass him, which at flag ranks (or wannabe flag ranks) will end your career. You can't fire a civil servant unless there's a felony involved, so even if he figured out what was going on he couldn't do much.

That's what happens in Washington, too. Political leaders come and go. They put appointees at the top positions of giant bureaucracies, but the bureaucrats have their own agendas, and they know how to work the system. They know which reporters to leak what to if the president upsets them.

In Congress, the congressmen (and women) come and go. But they all rely on staff for information, and the same people pop up on congressional staffs over and over. Those are the people who actually write the laws (or edit what the lobbyists produce); the congressmen don't even read what they're voting on.

The point is there's an entire layer of people, what I've seen called the "deep state", that you don't get to vote on except in the most indirect way. You could say they're not very ideological, if you're generous, or you could say their ideology is power. They went to the same schools, they go to the same parties, they marry each other, they read the same books, watch the same TV shows, etc.

It's not some grand conspiracy. It's just one of those self-organizing aristocracies that pops up whenever a government isn't overthrown for a long time.


Of course there are multiple elements of the ruling class with different interests. But I think it's also safe to say there are a lot of issues where Republicans and Democrats are in complete agreement and their constituents are not.


What would be an example of one of these issues?


How to handle Wall Street post-meltdown or various military ventures seem like the most glaring examples to me.


Immigration.


Brexit and Trump and Syriza and Bernie and all the rest of it point to the collapse of the middle class across the industrialized world more than anything else.


The middle class is doing fine. It's the working class that has been suffering, and they are the ones who voted for Trump in overwhelming numbers in swing states.


Sure, whichever. The terms are imprecise and many of the people we're talking about probably could have reasonably considered themselves middle-class in the past.


You've got it backwards: people imbue meaning into the medium.

Also, saying the comment you replied to shows a poor understanding is both unsupported by anything you shared (your argument is as factual as theirs) and obnoxious. Keep it to yourself, please.

