This also explains why some issues seem difficult to change public opinion on. For instance, if you want to get a majority of people to care about privacy, or more specific issues such as encryption backdoors and spying, you need those people's social circles to include many people who care. Since those people also tend to disproportionately avoid social networks...
"That's one Scientologist for about every 12,000 Americans. In other words, the total number of active U.S. Scientologists is about the size of your run-of-the-mill local credit union."
"In 2008, the American Religious Identification Survey found that the number of American Scientologists had dropped to 25,000."
Please don't make comments or edits about being downvoted or downvoting somebody else, see the HN guidelines for more information: https://news.ycombinator.com/newsguidelines.html
We should encourage and increase the connections and links between people, rather than disrupt or change the links of the few with the most connections.
I wonder what the nature of a "Pravda effect" would be where people suspect their information source has been corrupted and made unrepresentative and thus will distrust many traditional mechanisms of authority.
Is there an actual assuaging of opinion, a disengagement from the process, an increase in conspiratorial beliefs ... what happens?
Propaganda does this through smear campaigns - e.g., painting an academic as being egotistical and seeking money (thus making them corrupted by undisclosed motives). I see this with regard to climate change, evolution, and formerly tobacco smoke.
But I had missed the point about the others, who were not able to connect with those around them. Increasing connections between those around you is the solution here, not disrupting the connections of the highly networked ones.
As a couple of folks have noted, this means that all one has to do is suborn a relatively few influencers in order to effect mass change, despite the fact that the majority would have resisted that change if they'd known the truth.
As the Hitchhiker's Guide to the Galaxy might have said, some suspect that this has already happened…
This is misphrased: they vote based on their friends' opinions, not the way their friends would vote. Otherwise it becomes a chicken-and-egg situation.
An interesting example of this is explored in this article:
What if, instead of each person checking whether an opinion is held by the majority of the people they know, the operation were doubled? That is, each person would first check what the majority opinion is among those they know, and store that value. Then each person would check what the majority of /those/ stored values is among the people they know.
I was wondering if this might partially undo the effects of some people being known to more people.
So, if person x's opinion on a topic is f(x), and the friends of a person are x.friends,
then instead of majority(map(f, x.friends)), use majority(map(lambda z: majority(map(f, z.friends)), x.friends)).
So instead of using "what opinions do I see", use "what opinions do I see people saying they see". Would that be a more or less accurate measure of "what opinions do people have"?
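A minimal sketch of the idea, on a hypothetical toy network (the names, opinions, and friendship lists below are invented for illustration): two well-connected hubs hold a minority opinion, so the first-order "majority of what I see" suffers the majority illusion, while the doubled operation partially corrects it.

```python
# Hypothetical example: 8 people, only the two hubs "a" and "b" hold opinion True.
opinions = {
    "a": True, "b": True,
    "c": False, "d": False, "e": False, "f": False, "g": False, "h": False,
}

# The hubs know everyone; each peripheral person knows both hubs and one peer.
friends = {
    "a": ["b", "c", "d", "e", "f", "g", "h"],
    "b": ["a", "c", "d", "e", "f", "g", "h"],
    "c": ["a", "b", "d"], "d": ["a", "b", "c"],
    "e": ["a", "b", "f"], "f": ["a", "b", "e"],
    "g": ["a", "b", "h"], "h": ["a", "b", "g"],
}

def majority(values):
    """True iff strictly more than half of the values are True."""
    vals = list(values)
    return sum(vals) * 2 > len(vals)

def perceived(x):
    """First-order: majority(map(f, x.friends)) -- what opinions x sees."""
    return majority(opinions[z] for z in friends[x])

def perceived_twice(x):
    """Doubled operation: majority of the majorities x's friends each see."""
    return majority(perceived(z) for z in friends[x])

if __name__ == "__main__":
    print("actual True holders:", sum(opinions.values()))
    print("first-order True perceivers:", sum(perceived(x) for x in opinions))
    print("doubled-op True perceivers:", sum(perceived_twice(x) for x in opinions))
```

In this toy network only 2 of 8 people hold the opinion, yet every peripheral person's first-order view is dominated by the two hubs, so most people perceive it as the majority view. Averaging over friends' perceptions instead dilutes the hubs' weight, so the doubled operation lands much closer to the true count, which is the partial undoing the comment above is asking about.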