Maybe it's the headline that is misleading you, or it's a symptom of our times.
In any case: it should be blatantly obvious that the Twitter representative did not know or agree that the person in the video was a minor. The insinuation that it's Twitter policy to distribute child pornography is laughable. They have absolutely nothing to gain from it.
As to liability: Section 230 is about exactly that situation: trying to limit damaging material on your platform does not create any liability even if you fail. Because the alternative, where you either allow your platform to be flooded with swastikas and pornography or get sued for every single mistake you make, is unworkable.
> Because the alternative, where you either allow your platform to be flooded with swastikas and pornography or get sued for every single mistake you make, is unworkable.
The current model isn't workable either. Social media networks and their ad-driven models have hollowed out our democracies by sowing outrage and division at every opportunity. If one believes "The Social Dilemma", these networks have a dial for tuning public opinion that can even steer election outcomes (if you subscribe to the "Russia used bots to hack the 2016 election" theory, then you must agree).
Maybe we should rethink the role of social media networks. Perhaps they monopolize too much communication for us to trust them to curate which voices are amplified and which are suppressed. The discourse around this issue has been really strange, with the usual critics of corporate power rallying to defend social media giants and their rights and qualifications to curate such an enormous portion of our collective speech. Perhaps we should consider these networks to be more "dumb pipes" than "curators", and instead expect them to provide us with our own curation/moderation mechanisms. And if there really is no workable social media model--that is, if they really can't deliver some net-positive social good (or at least a tolerable net harm, as we accept with sex work, drugs/alcohol, tobacco, etc.)--then maybe we should regulate them out of existence?
This result could be non-ideological on the part of social media and advertisers. They may just be responding to the outrage mob. Advertisers are scared of any negative attention, but social media thrives on outrage. Platforms can maximize their earnings by choosing one side to censor and amplifying the outrage of the other. They end up choosing the side that generates the most ad views from the highest-spending audience, and often that's the most irrational one, in the "man bites dog" sort of way.
So... I was searching for the clip about this from the Howard Stern movie, and I realized that YouTube isn’t even trying to return relevant results anymore. They’re just dangling hate bait in every result. I gave up.
This honestly just reads like paranoia. Social media isn’t sowing anything. It’s the people on the platform and the recommendation algorithms behind the platform.
Social media is absolutely seeding these behaviors by leaving content discovery to engagement algorithms that push individuals further down a specific ideology without considering any ethical factors.
For example, I consider myself a centrist, and yet whenever I log into my YouTube account all I see are far right recommendations. I don't have a Facebook or Twitter account, but I remember they behaved the same way with their content discovery.
Read this study which goes further into the debate.
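To make the mechanism concrete, here's a toy sketch of an engagement-only recommender. Everything in it is hypothetical (the "provocativeness" feature, the numbers, the behavioral model), but it shows the point: nothing in the code says "radicalize the user", yet ranking purely by predicted engagement drifts the feed steadily toward more extreme content.

```python
# Toy model of an engagement-only recommender (all numbers hypothetical).

# Candidate items: (item_id, provocativeness in [0, 1])
items = [(i, i / 99) for i in range(100)]

def predicted_engagement(provocativeness, tolerance):
    # Assumed behavioral model: users engage most with content slightly
    # more provocative than what they currently tolerate.
    return 1.0 - abs(provocativeness - min(1.0, tolerance + 0.1))

tolerance = 0.2  # a hypothetical "centrist" starting point
for step in range(10):
    # Rank purely by predicted engagement -- no ethical factor anywhere.
    _, best = max(items, key=lambda item: predicted_engagement(item[1], tolerance))
    # Engaging with the recommendation shifts what the user tolerates.
    tolerance = 0.7 * tolerance + 0.3 * best
    print(f"step {step}: recommended provocativeness {best:.2f}, "
          f"user tolerance now {tolerance:.2f}")
```

Run it and the recommended provocativeness ratchets upward every step. No malice required, just an objective function with no term for anything except engagement.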
I understand what you're saying, in that platforms need to hone their algorithms to serve actually relevant and useful content to their users, but there is also the other side: consumers of content need to understand what they're ultimately consuming. I can't help but bring up the countless studies showing that violent video games do not make kids more violent. Can the same be said for content consumed on social media? At what point is content "extreme", and at what point is it just content? Who's to draw that distinction? There are very obvious examples of extreme content, but there are, I'm assuming, many more subtle ones that are likely impossible to police.
As someone who supports gay rights, women's rights, and freedom of speech, I always find it funny that I am suddenly alt-right because of the freedom-of-speech part of my approach to life.
I consider myself a centrist because I find that there is a very pernicious and unacceptable part of the left fermenting in the last decade or so.
The right in America has been pulled so far right that you're likely more right wing than you think you are. Now you're attempting to use "left" as an insult. You get right wing media suggested. "Center" in America is basically the right that wants to act like there aren't major social issues in the country. You might want to step back and reevaluate your views. What left wing things do you agree with?
> was pointing out the tribalistic attitude the modern left takes towards individuals who won’t consider themselves leftist
Do you not see the irony of what you're accusing me of doing, given what you're saying? I'm not a self-described leftist (though that is a right wing internet troll thing to say); I just have my opinions.
American politics have gotten pulled so far to the right that center is pretty right wing. Idk how you can deny that. That's all I'm saying. If you do deny it, sure, prove me wrong: explain how you're actually center.
> American politics have gotten pulled so far to the right that center is pretty right wing. Idk how you can deny that.
80+ million of us Americans voted in record numbers for the left leaning candidate, our biggest media platforms promote left leaning messages, right wing individuals have been censored, etc.
Maybe not everybody in the United States is left leaning, but I wouldn’t call this a politically right leaning country.
Also, the only irony I see is that this thread continues. You already implied I reply like a right wing troll. Perhaps it's better we both disengage, as this is going nowhere.
the right has gotten more extreme than the left has. if you think you're in the middle, you're really just right wing because the right has gotten insane. that's my point. you're free to prove me wrong whenever by explaining your thoughts on policy though
Your argument is based on a perception of America favoring right extremist ideology. Meanwhile, for every popular extreme ideology on the right there is a counter-extreme ideology on the left.
We have people advocating for extreme capitalism and people advocating for a socialist regime. Opposing citizen militias such as the Proud Boys and antifa. Individuals who want a wall around the country and people who believe in open borders. The examples don't end there.
We have extremes with plenty of support on both ends, and unless you can bring data indicating a concentration of individuals on one side, you cannot infer that America is becoming far right, or assume that the center is skewed to the right.
That first image shows what I'm talking about. America's left is center; the Republicans are all the way on the extreme right. If you think you're an American centrist, you're on the right.
> Antifa isn't real, that is right wing media fear mongering
You better tell that to the lefties that label themselves antifa. What isn't real is a group called Antifa, but the anti-fascist (and anti-capitalist) movement, otherwise known as antifa, is real.
tayo42, I'm sorry to say this, but I genuinely think that you're out of touch with reality. Maybe you ought to get out some more and actually interact with people.
It's the individual users who create the posts, but it's the social media companies using software to select posts to display to others. 'Sow' means to spread or disperse seeds, or when used figuratively, to spread around or propagate something in general. Automated recommendation systems seem to fit the word. They're not creating these seeds, but they sure are sowing them far and wide.
I understood the point. My point is that propaganda and its distribution channels have existed for as long as human speech has. A new distribution channel isn't the problem; the problem is a society that cannot think for itself and distinguish between what is real and what is not.
Blaming the platform is scapegoating the real problem.
Moderating the platform seems like a more tractable problem than changing what are effectively hard-wired aspects of human neurology. It’s not about blame, it’s about engineering a cost efficient solution.
I don't think people are much worse at thinking for themselves these days, and to the extent that they are, it's probably fallout from social media. I would like for people to have stronger independent thinking skills (just like I would like us to have infallible immune systems and no proclivity towards cancer and so on), but that's all wishful thinking. Since we can't do very much to make our society more immune to social media, we should change social media so it doesn't wreak havoc on our society.
I suppose my point and probably the point of the other commenter above is that social media companies utilizing content recommendation systems are not merely distribution channels, by virtue of recommending content. Those recommendations go beyond distribution.
And to be clear, I am blaming corporations for creating these platforms, not blaming the platforms themselves.
I hear you. I agree that there's an incentive for platforms to serve content you want to see and will engage with, but is it Facebook's responsibility to stop you from becoming more extreme? How do you even define extreme? There are obvious answers, but I'm sure there are a lot of less obvious ones too that are impossible to police.
Yes, Facebook has a dial, and they know they can turn that dial to make us more extreme (a byproduct of generating engagement). Personal responsibility is a lovely thing, but humanity isn't just going to become more personally responsible overnight, so if we're going to save our society we have to look at the options available to us in reality and not those we wish we had (specifically an extra helping of personal responsibility). We do this all over--we don't allow the sale of many harmful addictive substances even though one's health, finances, etc. are one's own responsibility. We also regulate casinos and tobacco and alcohol. There's certainly no reason why we can't regulate social media.
You don't know they have a dial; everyone is taking one sensationalized (and overly dramatic) documentary and treating it as gospel. Portraying it as a "dial" also does a complete disservice to the actual technical problem involved.
The documentary underplays the challenges of moderating/recommending at scale and, in my opinion, scapegoats social media companies as the source of the problem when they're really a symptom.
Tristan Harris, the main focal point of The Social Dilemma and creator of the Center for Humane Technology, has an obvious agenda (not saying he's wrong). Take the documentary for what it is, but throwing the baby out with the bathwater is wrong when there are obvious benefits to social media.
Regulating social media is a fool's errand imo; it's a lose-lose. Either you let ideas flow, which allows bad actors/ideas to propagate, or you create gatekeepers and censors with ever-moving goalposts. I'm not against more regulation, but do you really think the United States Congress is capable of passing legislation to effectively toe that line? Doubtful.
It's not a literal dial--it's an analogy and yes, it's oversimplified (the complexity of the implementation has no bearing on this debate), and the documentary is merely a touchstone. Lots and lots has been written about the subject with many first-hand accounts. Moreover, as discussed elsewhere, these networks have so much power that they can unilaterally influence democratic elections--at least that's certainly the necessary implication if you believe that Russia was able to indirectly influence these curation algorithms to hack the 2016 election (if Russia could manipulate these algorithms indirectly, then how much more power must Jack and Mark have given their direct access?).
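If it helps, here's roughly what the analogy maps to (a sketch with hypothetical field names and weights, not a claim about any platform's actual ranking code): feeds are typically ordered by a weighted sum of predicted-engagement signals, and those weights are tunable knobs.

```python
# A sketch of the "dial" analogy (hypothetical fields and weights).
def rank_score(item, w_outrage):
    # w_outrage is the "dial": raise it and divisive content tops the
    # feed; lower it and the feed calms down.
    return item["clicks"] + 2.0 * item["comments"] + w_outrage * item["outrage"]

feed = [
    {"name": "cat video",            "clicks": 0.9, "comments": 0.4, "outrage": 0.0},
    {"name": "culture-war hot take", "clicks": 0.3, "comments": 0.4, "outrage": 0.9},
]

for w in (0.0, 2.0):  # turn the dial
    top = max(feed, key=lambda item: rank_score(item, w))
    print(f"w_outrage={w}: top item is {top['name']!r}")
```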
> Regulating social media is a fool's errand imo; it's a lose-lose. Either you let ideas flow, which allows bad actors/ideas to propagate, or you create gatekeepers and censors with ever-moving goalposts. I'm not against more regulation, but do you really think the United States Congress is capable of passing legislation to effectively toe that line? Doubtful.
This is just a generic argument against free speech. The obvious problem is that there's no way to ensure that our censors are going to be good actors, and in particular we know with some degree of certainty that Twitter, Facebook, etc are not. Congress (or whomever) doesn't have to toe that line at all--regulating these businesses out of existence is strictly a better option than allowing them to continue poisoning our society. No doubt they deliver some value, but (1) much of that value could be realized through other means (people can still organize on web fora like they did in the brief years prior to social media proper) and (2) they certainly don't deliver enough value to justify the rapid erosion of our social and political fabric. So the worst thing we can do is continue on with the status quo.
That said, I think it's entirely reasonable that we could be more surgical about regulation. There's no reason we can't keep some of the benefits of social media while doing away with the immense costs. For one, we can require social media companies to speak an open protocol such that anyone can compete--not just ad-based businesses with established large networks. We could require their curation algorithms to be made transparent. We could require that they behave as dumb pipes, but they may afford their users mechanisms to curate their own feeds. I'm sure there are many other solutions as well, but again, we oughtn't defer action until we find the best option because we know the status quo is strictly the worst option.
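To illustrate the dumb-pipe idea, here's a minimal sketch, assuming a hypothetical open protocol that just returns posts in chronological order (the Post type and the rules below are invented for illustration). Curation happens client-side, with rules the user writes and owns:

```python
# Sketch of "dumb pipe + user-owned curation" under stated assumptions:
# the platform returns a plain chronological feed (faked here as a list),
# and all filtering happens on the user's side.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    timestamp: int

# User-defined rules: each maps a post to keep/drop. These are the
# user's choices, not the platform's.
my_rules: List[Callable[[Post], bool]] = [
    lambda p: p.author not in {"spam_bot_123"},      # personal blocklist
    lambda p: "outrage bait" not in p.text.lower(),  # personal keyword filter
]

def curate(feed: List[Post], rules) -> List[Post]:
    # Dumb pipe: everything arrives in order; the client filters.
    return [p for p in sorted(feed, key=lambda p: p.timestamp)
            if all(rule(p) for rule in rules)]

feed = [
    Post("alice", "New blog post about gardening", 2),
    Post("spam_bot_123", "BUY NOW", 1),
    Post("bob", "This outrage bait will shock you", 3),
]
for post in curate(feed, my_rules):
    print(post.author, "-", post.text)
```

The design point is that the ranking/filtering logic lives with the user, so no single company decides what everyone sees.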
It’s hard to deny that social media is an entirely new way of disseminating content. There is an algorithm that determines what you see. It’s well known how that algorithm encourages echo chambers and the internet itself changes the way we think. It’s not just “people without critical thinking have always existed”.
No, getting content shoved into your face without human review is definitely a new problem. At least before recommendation algorithms, you were either looking up something you specifically sought out, or someone took on the publishing liability to recommend you something.
You're still abdicating personal responsibility. Why is it Facebook's responsibility to keep YOU from becoming more extreme? I'm just playing devil's advocate here because the easy position right now is to blame social media.
It's the same as asking why the government should control abuse of meth and heroin. These drugs, along with social media, exploit certain aspects of how our brains work to make us addicted and alter our state of mind.
Yet we're seeing decriminalization of drugs, needle exchanges, etc. that counter your argument. The world is becoming more liberal about drugs because criminalization has created a larger problem (black markets, impure drugs, etc.).
I get that your point is to have some more regulation; however, the argument is much more nuanced than "ban all social media", which is what I'm trying to convey.
I actually agree with decriminalizing drugs, but I think they should be controlled like any other prescription, or like weed is currently controlled--specifically, portion-controlled so that the drug does not destroy a person's life. I have experimented with meth and many other drugs. Mind that if you haven't done meth and you try to compare it to some other drug like alcohol, then you are just ignorant. It's easy to get lost in meth, and a lot of people don't have the mental will (or maybe the capacity) to get off the drug, so this type of thing does need to be controlled in some manner.
Who is arguing for "banning all social media" in this comment chain? You're attacking a strawman.
We need strong regulation of recommended content, probably by moving recommendations out of the scope of Section 230. Pointing to a sector that is decriminalized but still extremely heavily regulated kind of speaks for itself. It's not like I could go to a San Francisco street corner tomorrow and start selling pot like a paperboy.
The parent comment (the original one I responded to) said that if we can't find an equitable solution, we should "regulate them out of existence". That's what I was arguing against.
Moreover, at a certain point we can either hope and dream that humanity becomes endowed with super-human personal responsibility or we can accept that this isn’t going to happen and look at our available options.
The article is from the NY Post, which makes me question it right out of the gate. And I just find it hard to believe that Twitter would want to keep something like this up, since it is illegal content. There is no incentive for them to do so, and the NY Post isn't the publication to dive into this at all.
I could place a contrarian, a nitpicker, a bully, or even an algorithm as Twitter's manager of content. Twitter users submit CP complaints. Then 99.99% of the responses are:
"After careful review of the content, we have not found that this material violates Twitter's policy of no CP on our site."
Truth is in the eye of the beholder.
Thus, if the beholder is responsible for the decision and gets it wrong, should they not be held accountable in a court of law? Even if that means lots of lawsuits?
> In any case: it should be blatantly obvious that the Twitter representative did not know or agree that the person in the video was a minor.
The lawsuit alleges precisely that Twitter had ample evidence to know that he was a minor. How about we let the case play out instead of assuming innocence?