There has always been misleading information. Local TV news promos always tell you to stay tuned to hear about x at 6pm, only to find out x really meant y.
What we see now is the decentralization of media and its influence.
First we had the really old days, where few books were printed at all and at great cost. Less misleading information in print, but not very democratic.
Then came the printing press, and more people could express themselves. So misleading books and documents became more common.
Later the media grew, and with it the number of sources went up even further. Niche publications that spread 'unpopular' opinions appeared, and misleading info spread further still.
And with every medium since, the effect grew: radio, TV, the internet in general, social media sites. Really, it's just an inevitable consequence of fewer gatekeepers and more people being able to publish their thoughts.
What's really going on is very hard to figure out. The Arab Spring took everyone by surprise; it's not enough to look at individual leaders and what they say and do, it's the superposition of multiple trends that are the cause of things.
Different people have different appetites for trying to figure it all out, and some have a hard time with unpleasant or contradictory information. They can become numb and start becoming avoidant, clinging to simpler inflexible positions that make life easier to live.
Others develop shortcuts to understanding the world and start coming to very strange conclusions: conspiracy theories that connect events not through cause and effect, but by who benefits; outcome-oriented, post-hoc rationalisation. Cynicism protects the ego by dismissing people who talk about the actual complexities as naive. But fundamentally they want to believe there's order to the chaos.
I personally think that the lack of a common enemy has created some kind of social autoimmune disorder: reality isn't testing and proofing our ideas, so our collective attention is free to amplify small irritants into national issues. We don't have enough real problems. Even terrorism today is a non-problem when viewed in proportion to its lethality.
What if we could instead provide the tools to establish a soundness graph, evaluating arguments on their own merits rather than their source?
Even arguments based on false premises can be valid. Just as arguments can be based on true premises and still be unsound. Helping people identify which is which should at least raise the quality of disagreements.
When its premises are false, an argument is always unsound. But it can still be valid. A sound argument is one that is both valid and has true premises.
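The validity/soundness distinction above can even be checked mechanically for simple propositional arguments. Here's a minimal sketch; the argument, the variable names, and the `is_valid` helper are all illustrative, not any existing library:

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """An argument is valid if no truth assignment makes all
    premises true while making the conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        world = dict(zip(variables, values))
        if all(p(world) for p in premises) and not conclusion(world):
            return False
    return True

# Modus ponens: "if it rains, the street is wet; it rains" => "the street is wet"
premises = [lambda w: (not w["rain"]) or w["wet"],  # rain -> wet
            lambda w: w["rain"]]                    # rain
conclusion = lambda w: w["wet"]

print(is_valid(premises, conclusion, ["rain", "wet"]))  # True: the form is valid

# Soundness additionally requires the premises to be true in the actual world.
# If it isn't actually raining, the argument stays valid but becomes unsound.
actual = {"rain": False, "wet": False}
is_sound = (is_valid(premises, conclusion, ["rain", "wet"])
            and all(p(actual) for p in premises))
print(is_sound)  # False
```

Brute-forcing truth assignments only scales to toy arguments, of course; the point is that validity is a property of the argument's form, while soundness also depends on facts about the world.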
About building a soundness/validity graph: I've dabbled with building a webapp for that (though the graph is more implied than visual). It's still very basic, but if someone has ideas about where exactly it should go or how it could be engaging to a community of critical thinkers, please contact me.
My own thinking is that it has to be less of an app and more of a protocol/federated thing augmenting existing channels. Think something like a GitHub bot doing automated reviews, with a Stack Overflow-like community of metadata authors annotating news articles and such.
As a shortcut, a trust graph is actually pretty good. Consider this example monologue:
This guy says really interesting things about cars, but my good friend Sally the Car Mechanic says he's talking nonsense; she's an expert in the domain, so I'll approach the new guy with a huge dose of scepticism.
Note how the trust graph implicitly takes care of known unknowns and unknown unknowns: I know little about cars, Sally knows a lot, so she's able to evaluate the situation better than I can. Note also how it handles intent: Sally is my good friend, and I trust her to have my best interests in mind, so I know what she's saying is her real opinion, and not e.g. an attempt to keep me as a customer of her workshop.
Intent is hard to judge, but it's unfortunately very important when dealing with information that's not directly and independently testable (which is most of it, including especially conclusions drawn from testable facts). Trust graphs, or Evidence-based Ad Hominems™, are a very powerful shortcut for evaluating information.
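The monologue above can be read as a tiny graph computation. A minimal sketch, where the graph, the edge weights, and the `path_trust` function are all made-up illustrations rather than any standard algorithm:

```python
# Trust propagates along edges; a multiplicative weight on each edge
# discounts second-hand opinions. All names and numbers are invented.
trust = {
    "me":    {"sally": 0.9},    # I trust Sally a lot on cars
    "sally": {"new_guy": 0.1},  # Sally says he's talking nonsense
}

def path_trust(graph, start, target, seen=None):
    """Best multiplicative trust along any acyclic path from start to target."""
    if start == target:
        return 1.0
    seen = (seen or set()) | {start}
    scores = [w * path_trust(graph, nxt, target, seen)
              for nxt, w in graph.get(start, {}).items() if nxt not in seen]
    return max(scores, default=0.0)

print(path_trust(trust, "me", "new_guy"))  # 0.9 * 0.1 ≈ 0.09: heavy scepticism
```

Taking the best path and multiplying weights is just one possible aggregation rule; a real system would also need to handle negative trust and conflicting paths.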
Something like code reviews / peer review, but more broadly applied
People only trust sources with which they already agree in the first place. Communities believing in a certain narrative will not re-evaluate anything, and all communities believe in specific narratives. The web just made that fact more obvious. No amount of technology or fact-checking tools will change that.
Only the people who understand that the truth is often a valuable asset will find these tools valuable as well.
People trust sources which confirm their preexisting biases.
If someone says something that contradicts the information you already have, you should only change your belief in proportion to how often they are right and you are wrong. And the only way to estimate that degree of relative correctness is to observe how often they agree with you ...
Now I'm just thinking aloud (awrite?), but maybe a general strategy to convince someone would be to find an authority they trust more than themselves, or to become such an authority yourself. That last part is hard, since you can only afford to disagree with them when they will readily change their mind, thus shifting their estimate of relative correctness in your favor.
It's also common for 4chan users to have accounts elsewhere, just to force memes or opinions. It's super easy to create multiple Twitter accounts and just start posting away.
You can't do this on 4chan with the same level of effectiveness. You can't target a user directly, and there's a lot of obscure culture that you will have to understand to properly communicate and not be outed as a shill by the community. Plus, there's no incentive to post for people who are used to Facebook, Twitter, Tumblr, etc. You're not gaining followers, you'll never get upvotes, you'll never get likes. I believe this is why the other networks will never influence 4chan the same way.
Plus, people are just plain offensive on 4chan, which can be scary if you're not used to it. I remember when there were raids on Tumblr with gore images, and Tumblr users responded by going to 4chan and posting back gore. They only ended up traumatizing themselves.
 There are various versions, some include Facebook etc: https://i.imgur.com/aAlOTA8.png