So basically they just removed the ability for people to hold those in power accountable by making their mischievous acts known in public, while it is known that "proper, lawful" channels would not have worked due to conflict of interest?
There have been cases where independent journalists have had their footage censored by Twitter on specious grounds (riot footage as "promoting violence", etc), only for that same footage to be allowed on the accounts of larger media companies. This policy is only going to further entrench MSM institutional power.
At least HN would never censor you and stop you from posting a video like this, right? It sounds like he practiced his response with a team of lawyers, and everything said was carefully scripted.
I mean they’re right, Twitter has for years made and used their own definition of public figure independent of other media sites.
I think before we start saying that somehow identifying public figures is some literally impossible problem that needs a government registry or is somehow controlled by CNN we could at least cite some examples where Twitter’s mods have made decisions that seem erroneous.
Only decisions that survived appeal, though. All referees make mistakes now and then, but if appealed rulings repeatedly survive instant replay review, then it’s time to take a closer look at the rulebook, and consider why the outcome is surprising.
They really don't want people like Andy Ngo posting photos of people rioting / committing crimes / etc.
They also REALLY don't like that Project Veritas uses their service to link people to undercover videos of big tech companies admitting to doing shady stuff. Big tech covers for big tech.
It's frustrating to see how tech is consistently unable to implement systematic yet fair solutions that are possible in law. Not that people aren't genuinely trying, but the nature of automation and the design of platforms just seem at odds with workable privacy.
German law, for example, generally protects people's privacy (i.e. you are not allowed to take a photo of people without permission, even in public). But it implements fine-grained exceptions for people of national importance, either permanently (i.e. politicians who remain in the public eye) or temporarily (i.e. you may take photographs of such people for a time, but no longer once they have ceased to be famous).
> you are not allowed to take a photo of people without permission, even in public
This is clearly not true, though. People end up in other people's photos all the time without permission. So it sounds like a bullshit law that can be used to arbitrarily string people up rather than anything actually enforceable or reasonable.
From a privacy standpoint, the fact that everyone feels everything must be uploaded to social media for narcissism points is one of my top worries.
My problem is that attempts to legally curb this have gross implications for the rights of people to keep informed about ongoings in the world. We have a right to be able to show corrupt cops as much as we do criminals and rioters. We have a right to showcase protests when going well or when they're destructive. We have a right to film stuff like Rittenhouse's situation and use that information to jail or free him of charges.
And any corporation will exploit this citizen desire for 'privacy' and use it as a wedge to strip us of our power as citizens, never mind allowing information that is in line with establishment values while disallowing information that runs contrary to its power.
You can't solve all these problems at once.
Censorship, both from the state and from monopoly actors in the private sector, is incredibly worrisome.
The worst-case scenario is that our privacy is utterly gone AND censorship, citing privacy concerns, never really addresses privacy in a practical way but does help corporations and states manipulate the public discourse and undermine our very democracy.
As far as I'm concerned, even as a privacy advocate, our privacy is gone. We lost this battle. There are things we can do to maximize our privacy and make it better, but the toothpaste is never going back into the tube.
But this is a moment where censorship is about to be normalized in an extreme way and this is a fight that shouldn't be lost.
I think you are wrong. I think there is a lot we can do with regard to privacy. We should not give up privacy protections because they could hypothetically be abused by people in power. Especially when the privacy protections have explicit exceptions for the cases you worry about.
People have been taking photos for over a century, and only a small percentage of them are ever published. Most are taken for what they've always been primarily taken for - memories.
Most of the time, the prohibition is not on "taking the photo" but on distributing it, which is much easier to enforce. At least that's how it is in Spain, where we have a similar law.
The law in question only concerns itself with publishing. It has accounted for the case you mention: It is explicitly allowed to publish photos that include people which have wandered into frame by chance and the main focus is some locality and not the person in question.
You cannot take a picture of a person where the person is the subject of the photo. This is not the case when you take a picture of a statue and there's people in the background, for example.
You can, as long you're not using it for publications. The usage on your own blog or Facebook account is allowed but the line is drawn at billboards, on TV or such (larger) channels. For those you must have an agreement.
This is not true and hasn't been for a while. It used to basically be a grey area but courts have repeatedly stated that just taking the picture was not allowed even if there wasn't ever any intention of publishing it anywhere.
You would have to justify it anyway, and personal rights in Germany always trump the rights of others to create art, for example. You'll find many cases about surveillance cameras as well, and even there it's often decided in favor of the person not wanting to be on camera.
Semantically, it may be so. In practice, I fail to see how there would be a difference. If you are not presumed innocent and the burden of proof lies with you is that not the same as a presumption of guilt?
What do you think about the French law that didn't pass, which would have made it illegal to upload photos of the police? Arguably, the actions of police officers are more of a public concern than those of national politicians, but they also fall into the category of non-famous people.
> you are not allowed to take a photo of people without permission, even in public
I'm curious how this is expected to work in practice. Is there a clear definition of "a photo of people"? Taking a close-up portrait of a random stranger in public would be one extreme (and I guess most people would agree it should require permission). Presumably a photo of my kids and their friends having fun at the park is still clearly "a photo of people", and therefore also requires permission.
But if I take a photo of a street scene in Berlin, do I have to get permission from every passer-by who happens to appear? How about a landscape photo where I only realise later, on reviewing the picture, that there were a couple of hikers on a distant hill? To my mind, that's not "a photo of people", yet there are people in the photo.
Somewhere between the extremes, it seems to me there's an awfully wide grey area.
Edited to preface: I'm not a lawyer, I'm mostly paraphrasing and translating the source below.
There are three cases where you can publish[1] a photo without consent:
- Person of special public interest (literally: "person of contemporary history", an idiomatic term that sounds less strange in German), which means you can take a picture of a politician holding a speech, an artist giving a public performance, or a CEO addressing employees. This is an important exception for professionals, but there are various rules to be aware of; for instance, you probably cannot take a photo of the same CEO buying new shoes or eating dinner at a restaurant.
- The photograph does not primarily depict the person. Your landscape photo would be fine, for example, and your street scene may or may not be. Apparently the test criterion is: is the picture materially changed if the person is removed? So a photo of a beach landscape is permissible, but not so much if there are people bathing in the foreground.
- The photo is of a public gathering: a public concert, a political rally, though this only applies if it's a photo of the gathering as a whole, and not specifically an individual in it.
So, yes, in the end it is a bit of a case of "I can't define it, but I know it when I see it". In practice, you just err on the side of caution, and it works out fine. E.g. I don't really worry when I take a photo of a friend outside and there are a few people in the background (due to rule #2).
[1] I'm not sure how the law handles the case of merely taking a photo and never publishing it, nor what exactly constitutes publishing, e.g. presumably showing holiday snaps at a family gathering does not constitute publishing
That obviously makes sense to me, but IANAL, so I'll refrain from making any definitive judgement. A huge percentage of pictures are immediately shared with a third party (by being uploaded to the cloud).
Beyond that, while the rules outlined above deal with publishing, there are also rules for merely taking pictures. The exact rules for that aren't 100% clear to me[1], but if it's permissible to publish them, it must also be permissible to take the photo in the first place.
Yeah, who knows. It's arbitrary and probably depends mostly on how long it's been since the judge last had a meal when making their decision. For some bizarre reason things like this are thought of as very enlightened when done in Europe, but would be seen as authoritarian and dystopian if done anywhere else.
> I'm curious how this is expected to work in practice.
Most of the time, the prohibition is on distribution. And it's only applicable when the people in the picture can be reasonably identified. A street scene in Berlin where you barely see people's faces or two hikers on a distant hill wouldn't be problematic.
As many people have pointed out, the primary protection in this law is on publishing.
However, there are also penal codes preventing the mere taking of pictures where intimate privacy is affected, e.g. in intimate situations, in your own home, or when you are helpless (such as when injured in public).
Exactly. If I take a picture of you standing naked in front of your first-floor window, that's acceptable: you should have expected to be noticed. However, if I use a telephoto lens to photograph you stepping naked out of the shower on the twelfth floor, that's an invasion of your privacy, because your expectations were different.
Unable or unwilling? If their network effects endow them with an impenetrable moat, why would they voluntarily spend lots of money to address a problem that harms only a minority of customers and isn't substantial enough to drive people off the platform?
>German law, for example, generally protects people's privacy (i.e. you are not allowed to take a photo of people without permission, even in public).
So everything from the videos showing Kyle Rittenhouse doing nothing illegal, to Chauvin videos providing evidence that he killed George Floyd wouldn't be allowed in Germany. Got it.
On the other hand, there are numerous cases where Facebook has been compelled by German courts to restore posts they have deleted under their content policy to protect the user's free speech rights. Something like that is unthinkable in the US.
Calling her a Nazi or fascist would have been fine, as that's understood as an expression of opinion; calling her a swine is clearly an insult, and insulting someone is a punishable offence (not sure how this translates to the US, but it's not something the police will arrest you for; rather, it's something the insulted party can take you to court over).
There's a common misconception in Germany that there is a law about "Beamtenbeleidigung" (insult of a public official) but the truth is that public officials have no special protection in that regard per se, it's just usually easier for them to sue people (esp. when you insult a police officer as they're literally the police). There are some caveats when insulting government officials, especially foreign government officials, but insulting a Nazi politician on Facebook is not any different from insulting a celebrity on Facebook.
The problem with social media is that it can be difficult to find out who to sue and compelling a foreign company to release the likely incomplete information they have on a user in an attempt to identify them isn't great. I'm not saying the law in question (NetzDG, requiring social media companies to block such content in Germany) is a good solution to this problem but it's certainly not the worst.
If anything, the problem with NetzDG is that it lets users who manage to avoid revealing their identity engage in Holocaust denial or Volksverhetzung[0] and still have their posts visible via a proxy when blocked (allowing Nazi groups to organize and operate hidden in plain sight). And when content is deleted for those crimes, it is simply swept under the rug, making it harder to report to the actual authorities rather than to the social media company. Social media companies like Twitter have also made it nearly impossible to report ToS violations in Germany: the report button immediately funnels you into NetzDG technicalities users aren't meant to understand, like which specific law you believe the offending post violates.
I say this as an Austrian... the fact that Nazi ideology and symbols are forbidden in our country is a godsend. We have a small but persistent problem with militant far-right Nazi sympathisers, and the "Verbotsgesetz" is an invaluable tool in dealing with them.
The law is extremely clear, nobody breaks it accidentally, and it makes sure that dangerous extremists are taken seriously by the police.
A couple of times a year the police discovers illegal weapon and ammo stashes when investigating Neonazis. These guys are dangerous, and pretending it's just about "free speech" is stupid.
Hate speech and radicalizing speech isn't meant for those that aren't reading or listening to that speech, but rather to motivate those who do listen to act out the things that the speakers are saying.
The speakers hide behind "I didn't do anything, I just said something" and count on those who take their words to heart and convert them into action. This is the danger of hate speech. It's not enough for good people to just ignore it. It takes more effort to keep the talking from becoming doing. If the term "hate speech" doesn't sit well, I prefer the term "rhetorical violence": speech using the imagery and terminology of violence, intended to inspire violent thoughts in others.
The video posted below by another commenter shows how radicalizing speech is used to motivate others to commit acts that the speaker themselves would not commit or would claim not to support. In essence, the speakers are claiming the rights to rhetorical violence while being disconnected from actual violence that the speech might incite, inspire, or support.
Reality doesn't fit so neatly into these categories that you're trying to construct, where speech is perfectly harmless unless it's direct incitement to violence and then suddenly it's harmful. That might be how the legal system works but it's not how reality works.
Motivating radicals and spewing racism might not be direct incitement to violence, but history shows that it can have significant negative consequences. The causal pathway is usually non-linear and hard to attribute. But, behind many genocides is racial hate speech that's been allowed to fester for years. Behind many lone wolf terrorist attacks is propaganda, even if nobody directly incited it.
I'm not arguing for or against any specific hate speech law here. Just trying to point out there's a grey area that your categorical thinking isn't good at addressing.
are both "demand-side" solutions, which conservatives are well aware don't work when there's people dealing poison in the street.
Still, rexreed, I'll always fight for free speech, even when the people exercising it are abhorrent. And even knowing they'll take advantage of that to the fullest effect they can. Because if we really restrict it, the worst possible people will take control of who gets to say what. And it won't be the people we'd like to be making that decision. Every encroachment on free speech is like feeding steroids to the nazis.
Free speech and fighting rhetorical violence are not mutually exclusive. There are ways to reduce the visibility and spread of rhetorical violence without imposing on the rights of everyone to speak.
Let's use another mental construct if this is helpful. Imagine at your place of work, one person every day comes into the office, points at you and says "I hate this guy. Someone should beat the crap out of them". This person then posts messages on the company chat about how much of a terrible person you are, spreading all sorts of half-lies and untruths. This person goes as far as to put a message on the bulletin board in the cafeteria saying that you are a rotten person and someone should slash your tires or make your life a living hell.
One day you come to work. Your tires are slashed. Someone has trashed your desk. When you leave work at night someone assaults you, punches you and throws you on the ground. You can't see who it is.
You can point your finger and say "this person has been verbally harassing me". Would it be right for the company to say "any speech is allowed, therefore, this person has the right to continue that speech. Any actions are the fault of the perpetrator and not the speaker."
How long would you be willing to put up with that and defend that right even though it is causing you direct harm? There are indeed laws against violent and harassing speech, even though the words themselves aren't harm because of the direct harm that can be linked. I agree that the line between annoying and controversial speech and overtly violent speech is not well defined, but the lack of a well defined boundary does not mean that there is no boundary at all. Clearly some things are beyond the pale.
Now the company can't tell the verbal harasser that they are not allowed to think or express their abhorrent views. That harasser, as abhorrent as their views are, are using protected free speech. But the company can tell the harasser they are not allowed to communicate those views on company grounds, in company chat rooms, in the company cafeteria, or in any capacity as a company employee. Basically, the company can impose limits on the spread of those views. And in the vast majority of cases, it's imposing limits on the spread of views that acts to dampen actual violence.
Definitely. I think the main problem modern society (post-internet) is having is that people have conflated the right to speak with the right (or the recent privilege they've been granted) to be heard, and assumed that if you have one you should automatically have the other. It's never been so.
[edit] since you updated... so, it's often been said that "speech" for nazis is a boot to the face, and that's all the words they need. And the truth is that if violence takes over it eradicates speech. A societal commitment to free speech is what allows the victims of threats and harassment and violence to speak out where they would otherwise be afraid to - especially if the intimidating environment is not just one company, but society as a whole. And this is why it's very dangerous, and can possibly breed more violence, to ever say that speech==violence [edit2: people reading "revolutionary books" in prison can be equated with violence by the prison guards]. Yes, incitement is beyond the pale, but in the example you just delineated it's very possible to separate incitement from opinion. Remove "someone should..." &c.
Now imagine you're born and everyone you're related to is accused of horrible crimes against humanity, controlling the media, stealing from honest people and drinking babies' blood, and your grandparents' families were murdered by people who said the same thing, and you hear people saying stuff like that every day which is clearly intended to incite people to, you know, kill you. And then imagine coming to the point where you know that preserving their right to say whatever they want about you, however disgusting and evil, is the only chance you have to preserve your own rights as an individual. If you can put yourself there, mazel tov, you're Jewish.
And it's natural to wonder whether all that free speech is a terrible idea, so, like all important things it's open to debate. But it's why my grandparents came to America, and they wouldn't like the idea of a law against nazi speech any more than I do.
Twitter, of course, is a whole other story. Private enterprise and should be held accountable for every word on their platform. They should banhammer anyone they feel like.
100% this is the case. People are conflating the rights of those who have rhetorically violent speech to express those views with the supposed "right" of those violent speakers to use a given platform to spread that rhetorical violence. From the perspective of the social media outlets: I can't stop you from expressing your abhorrent views, if it's protected speech, but you do not have the right to use my platform or my loudspeaker or my venue or my publication or my social network to spread that rhetorical violence. The rhetoric might or might not be protected, but the platforms have no obligation to spread that rhetoric.
Long story short, your speech might or might not be a protected right, but your use of a given platform to spread that speech, and any obligations to spread that speech or provide visibility or virality to that speech is not a protected right. One cannot be arrested or detained or sued for simply expressing their opinions, and I agree that even that abhorrent speech is protected. However, a platform can opt to not publish hateful speech, pull the plug on the loudspeakers, prevent the use of their venues, and refuse to promote abhorrent speech. The most effective means for combating hate speech and rhetorical violence is not to suppress the speech, but rather to prevent its spread. In this way the rights are protected without increasing the harm.
You're right that not too long ago, those with rhetorically violent speech would have little access to mass media. They would have to literally stand on street corners with megaphones to shout their messages or print their own publications and then find ways to distribute those publications. Nowadays, everyone has instant and immediate access to mass media whose viewership, ease of spread, and total audience size rivals even the very largest of mass media publications 100 years ago. In the current age where a single viral Tiktok or Tweet can get millions of impressions, the power (and responsibility) of media companies is far greater than ever.
> They would have to literally stand on street corners with megaphones to shout their messages
This is the primary problem. "Speaker's corner" has always been the place for insane people to shout. Social media has elevated it to the mainstream. (And made a handsome profit).
Insanity is contagious. What I mean by that is: Mental instability, FUD, conspiracy theories, propaganda, and simple sociopathic narcissism are viruses. No one who has witnessed 2016-present could doubt that. But anyone who knows about 1932-1945 already understood it.
Individuals with violent and malevolent personality disorders are very capable of spreading their mentality to others. All they need is a channel. Radio and television, in the wrong hands, were used to mobilize millions of people to their deaths. And suddenly we open a channel for the craziest of crazies, and think their mental afflictions won't affect billions of people around the world?
There is no right to be heard. Over all of human history, being heard by the masses has been an extremely rare privilege. Creating a technology that allows crazy people to be heard is frankly the definition of insanity breeding more insanity. Speech is not the problem. Proliferation is.
>Insanity is contagious. What I mean by that is: Mental instability, FUD, conspiracy theories, propaganda, and simple sociopathic narcissism are viruses. No one who has witnessed 2016-present could doubt that. But anyone who knows about 1932-1945 already understood it.
What an implicitly condescending, shitty thing to state so casually: that obviously the only reason Trump won in 2016 is that he "spread" his sociopathic narcissism to others, who also likely happen to be mentally unstable and possibly conspiracy nuts. No chance that maybe, just maybe, millions of people voted for him of their own volition, no less rationally than those who voted for a frankly terrible Democratic candidate like Clinton. No, the Trump voters were just mentally infected, weak-minded idiots, I suppose?
I'm not talking about everyone who voted for Trump. His is not the only or even the most important species of insanity that's been allowed to spread like a virus. Yes, people have all sorts of reasons for voting in populist demagogues without needing to specifically buy their insanity wholesale. Trump's madness is a symptom and a vector, a stop on the road between Alex Jones shouting on a corner and Adolf Hitler in a bunker. The door just keeps opening wider, though.
Enough with the absurd hyperbole already. Trump's presidency was neither an Alex Jones conspiracy nutfest nor an Adolf Hitler madhouse of dictatorship. It was mostly mediocre, but hardly worse than many previous presidencies. Possibly better than some, even. I'm no fan of the guy in many ways, but he lived up to very few of the insane worst expectations that were created when he entered office. The world certainly didn't go to hell because of it. If anyone promoted idiotic, unfounded conspiracies during his presidency, it was the media endlessly harping about Russian collusion in his victory while never quite being able to provide solid evidence for a single aspect of that particular conspiracy theory. Or the obsessive fixation on the new boogeyman of "misinformation", which suddenly became a global problem according to many media sources and politicians because, god forbid, a candidate they didn't give their formal benediction to happened to win a major election.
> So impeding the speech of two people is a better outcome than impeding the speech of none? I don't get it.
I mean, it's not a better strategy and it's not right - what I'm trying to say is that impeding one person's speech leads to impeding another person's speech, and that's how you end up with totalitarianism, regardless of who's in control.
The trouble is that whoever speaks loudest never respects the mechanism that allowed them to speak in the first place, or extends that right to anyone else.
So as to what leads to a better outcome, I'd say the results aren't in yet.
If only I knew the content of something before I read it. I would have to limit my internet use to Signal conversations with my dog to avoid most of tech’s poison machine.
That is a good idea, it is what I did. I don't visit any social networks, I don't read the news, and I stop talking to those who send me information that I'm not interested in.
Yet here you are, commenting alongside everybody else on HN. Unfortunately, real-world situations are not that black and white, so they cannot be solved with such black-and-white solutions...
Is this a response to J.K. Rowling's concerns a few days ago? Three people supporting trans rights took a photo in front of her house, showing the address, and published it on Twitter. In a Twitter thread, Rowling mentions that people reporting it to Twitter Support helped get it removed [1].
Is there a definition of a public figure? Is writing a book enough? Is having an opinion enough? Is stepping out of the house enough for a public figure to be photographed and published about?
I wonder if Twitter will begin to enforce this by requiring photos of people to be tagged, and the subject of the photos to "consent" to Twitter using their photos. If so, and if Twitter behaves like any other tech company in using manipulative tactics engineered to extract "consent", I can only see this ending badly for privacy.
Since we have not had in-person conferences or tech meetups for two years, it will be interesting to see whether tweets about them in the future lack the usual photographs of the crowd, people in the corridors on laptops, and participants in sessions and at social events.
I wonder what other behaviours we have forgotten that used to occur? No photos of people in a pub, at a cafe, on vacation. This might make Twitter a less appealing place for the average person.
Why do we hold companies like Twitter in high regard? More to the point, why do we consider what is primarily a media company to be valued as a technology company, when in many regards it is more like the Washington Post or the NY Times than like Google? I think the whole framing of Twitter as a tech company needs to be rethought; they should be valued and considered in the same vein as media companies.
People always focus on the big hypotheticals with these kinds of features. The big potential abuses that could stifle speech and damage democracy.
I think it’s interesting to instead think about the number of individuals this might help. People being bullied, harassed, doxxed, etc. It could be life changing for them.
That’s just the standard “think of the children” argument though. Yes, there are some people it will help. That’s not really relevant in the conversation for what rights everyone is losing.
I disagree. You’re not losing any rights. People act like being able to say/do anything on these private social media platforms is important and censorship of it harms our society - but society seemed to function much better and be much less divided before these platforms arose.
We weren't less divided, our cultural divisions were merely swept under the rug while the powerful got to set the tenor of our cultural and societal development. We're in an age of reckoning, and I think that's a good thing.
If it’s an actual case of doxxing or canceling, as the parent comment is alluding to as a hypothetical, then dismissing this as censorship does a disservice to those who have to deal with the real censorship (of opinions) that is happening elsewhere on the internet, including on Twitter.
Taking down a Karen video shouldn’t be compared with real censorship (i.e of opinions) when these videos are weaponized to ruin lives over a bad day, and serve little other material value.
Sure, if it were fairly enforced, it might. But do we trust Twitter to fairly enforce it? Do they have a transparent, accountable process, with the right to appeal? Do they have a trustworthy track record for making fair decisons?
People always focus on the big issues because they are not hypothetical at all.
We are talking about a platform that has had dubious fact check warnings attached to tweets for a year now, which mainly seem to serve to reinforce a false consensus and prevent open debate.
People being bullied, harassed, doxxed, etc are a distraction from this larger issue, and will not benefit from this, because the only ones you ever hear about are people with enough influence and clout to turn being a target into being a public victim.
I don’t know. I had an unflattering photo of myself posted without permission once, and the guy tagged my handle that I use for business rather than the other account. It was good that the guy took it down when I asked. But he could have declined and it would have been a problem for me but not so much for him. This sort of thing happens more than you would think and is not hypothetical.
But Twitter, you haven't even fixed what's "Trending", specifically in the local context. For example, say the word "Jane" is trending in your country; Jane is a well-known politician who has probably said something interesting. You click the word to see what's up... Twitter returns some random posts that mention "Jane", probably based on the number of likes.
Whether this was the CEO's first act of business or not, people should be genuinely scared of him. He doesn't give a fuck about the First Amendment and thinks it's his job to dictate what a healthy conversation is and who gets to participate [0]. People should be abandoning this company immediately, but the sad part is that many of its most active users agree with him ideologically. At least Jack gave the appearance of giving a fuck about free speech.
I don't know much about him, nor have I read that entire transcript, but one of the main jobs of a CEO is to set the direction and tone for a company's services. They can say/do whatever they want w.r.t. free speech and their own messaging platform.
Uh huh. This will be applied unevenly. Pics that violate this that support one political cause will be allowed, pics that violate this that support the opposite will be banned.
Wouldn't this be an actual ideal use case for facial recognition? Apple wanted to do CSAM before upload, Twitter could do consent-checking before posting.
Users could declare "I do not want my images made public", and offending posts could then be blocked before posting.
Taking it down after it's posted is useless, it'll be in caches all over the world and by that time, it's too late.
For example, nearly every reddit post ever made is scraped in real time (pushbullet, BigQuery, etc.), so take-downs need to happen before content hits public APIs.
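To make the ordering concrete, here's a minimal sketch of what "consent-checking before posting" could look like. Everything here is hypothetical: the `detect_faces` stub stands in for a real face-recognition model, and the opt-out registry is invented for illustration; this is not any real Twitter API.

```python
# Hypothetical opt-out registry: handles of users who have said
# "I do not want my images made public". Purely illustrative.
OPT_OUT = {"alice", "bob"}

def detect_faces(image_bytes: bytes) -> list[str]:
    """Stand-in for a face-recognition step that maps faces in an
    image to known account handles. A real system would run an ML
    model here; this stub 'recognizes' handles embedded in the bytes
    just so the pipeline is runnable."""
    return [h for h in ("alice", "bob", "carol") if h.encode() in image_bytes]

def consent_check(image_bytes: bytes) -> tuple[bool, list[str]]:
    """Return (allowed, blocking_handles). The point is the ordering:
    the check runs BEFORE publication, so nothing reaches caches or
    public APIs if someone in the photo has opted out."""
    recognized = detect_faces(image_bytes)
    blockers = [h for h in recognized if h in OPT_OUT]
    return (len(blockers) == 0, blockers)

ok, who = consent_check(b"photo with carol at a meetup")
print(ok, who)   # carol has not opted out, so the post is allowed
ok, who = consent_check(b"photo with alice")
print(ok, who)   # alice opted out, so the post is blocked pre-publication
```

The interesting design question is where this gate sits: run it server-side at upload time and the image never becomes publicly addressable, which is exactly what a post-hoc take-down cannot guarantee.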
I also think we need to enforce consent, and begin taking action against tech firms - if I did not consent and they allowed it to be published, they should be held to account. They need stronger measures to ensure consent was freely and fairly given.
Is it time to create new platforms that decentralize power? If we agree that some level of social digital connections are important, then why should corporations be in charge? Shouldn’t it be a democratic solution?
The one thing I see a lot on Twitter is people "retweeting" the wrong photo of the wrong person. I saw all sorts of people being pictured as that school shooter last week.
Whatever we think of this policy, anyone who tweets or re-tweets a photo of an uninvolved person in a breaking news case should be, by their policy, permanently banned (or shadowbanned).