The case law around editorial control is at odds with most platforms' Section 230 protection, which makes TikTok's argument that its algorithm _is_ speech quite different from how most platforms have argued to date (in order to preserve their Section 230 protections).
My understanding is that social media companies deliberately do not identify as editors because they don't want to be responsible for the feeds they generate for users. Is this wrong? This is why I'm asking to see evidence of a specific person from a social media company taking direct responsibility for the content a user consumes.
>However, in a great act of self-incrimination, Bytedance (de facto controlled by CCP) has decided to not divest and would rather shut down instead.
How is it self-incrimination? That logic doesn't work.
80% of TikTok's users are outside of the U.S., so why would they sell the whole thing?
And the law is written in a way that there is no value in selling just the American operation without the algorithm; they would have to sell the whole thing, including the algorithm, for there to be a serious buyer.
It's technology highway robbery. Imagine if China told Apple "sell to us or be banned"; we'd tell them to pound sand too.
The West told plenty of its companies, through public pressure or laws, that they had to divest from Russia, and they did. Rationally, they recognized that selling their assets was financially more lucrative than just closing their operations and making $0.
Now why would a corporation that claims not to be controlled by a government refuse to sell and forgo billions in income, even though that is against the interest of its shareholders?
Because they don't want to have a strong competitor in case they come back, or have that competitor enter other markets they are still active in. Also, not all (if any) companies that divested from Russia sold "their assets" including IP such as algorithms.
From what I know, the bids that have been put in so far are just for the US operations, and some of those bids don't include the algo as part of the deal.
The problem with allowing the government to ban propaganda is that it allows the government to ban anything it labels as propaganda. There is no law defining what propaganda is, so you just end up with the government being able to ban any information it doesn't like.
Imagine the government drums up support for another illegal war like Iraq using fake evidence, and we ban all counter-evidence as "foreign propaganda". Do you not see how dangerous that gets?
>That, for me, is enough of a reason to ban it and justify it under our constitution
The Supreme Court has explicitly ruled in the past that foreign propaganda is protected speech under the First Amendment.
You cannot strip American citizens of their right to receive foreign propaganda if they choose to do so.
You left a few words off in my first quote. I did not say anything about banning propaganda! I am talking about the system of dissemination, not its content.
I can believe it to some degree. Believing that what the killer did is wrong and being unsympathetic to Brian Thompson's death aren't mutually exclusive.
They don't even need to do anything especially conspiratorial; I would expect those polls to have substantial anti-Mangione bias by construction.
Many people will want to avoid being on record as supporting a murderer, for fear of any consequences down the line. I know polls are almost certainly anonymous, but you need to trust the pollster to actually abide by that. If you have even an inch of worry, it's easier to just not answer (or answer insincerely) and move on.
>TikTok content is curated to fit CCP's narrative, not simply an algorithmic reflection of what its users care about.
If you can show evidence of that you should give it to the U.S. government, because it has repeatedly said there is no such evidence and that any threat remains hypothetical.
> If you can show evidence of that you should give it to the U.S. government
The evidence is that tens of millions of teenage Americans are addicted to it. The Chinese don't allow their kids to waste all day on stupid Douyin; Americans don't have that luxury, because tons of rednecks will jump up and label anything similar as anti-free-speech. As a result, you see Chinese kids spending time on STEM subjects, building toy robots and learning how to code AI stuff, while American kids all dream of being the most popular influencer on social media.
The whole system is a carefully designed algorithm. Let's just be honest. btw, Chinese national posting from China here; you'd be seeing me protesting in Tiananmen Square if some American social media app managed to waste Chinese teens' time while being carefully restricted in the US for their own kids. It is just shocking that it has taken almost a decade for the US to actually start doing something concrete.
> The evidence is that tens of millions of teenage Americans are addicted to it. The Chinese don't allow their kids to waste all day on stupid Douyin; Americans don't have that luxury, because tons of rednecks will jump up and label anything similar as anti-free-speech. As a result, you see Chinese kids spending time on STEM subjects, building toy robots and learning how to code AI stuff, while American kids all dream of being the most popular influencer on social media.
So it's China's fault that the US doesn't have the same laws restricting social media content and screen time for children? Or, if it is the rednecks' fault, then how does 'TikTok is spreading CCP propaganda' follow from 'US rednecks oppose laws that are anti-free-speech'? It seems you're skipping some steps to get to that conclusion.
Their research is not credible because their only "evidence" is that, when using certain keywords, Instagram and YT returned more "anti-China" content than TikTok.
So instead of arguing that U.S. social media has an anti-China bias, they argued that this is evidence of TikTok being more pro-China.
Using American social media as the control group for neutrality on China is absolutely insane.
The most likely cause is that TikTok is just a lot less political and more international than YT and Instagram.
For the rest of the world, people do not automatically associate words like "Xinjiang" with "Chinese government oppression"; the fact that they expect that to be the top result could just as easily be argued as evidence that American media is the one manipulating information.
The damning bits were 100% on TikTok; no need for comparison.
On TikTok, the views-to-likes ratio for anti-China content was 87% lower, despite anti-China content on TikTok getting more upvotes. Read page 4 on the suppression of anti-China content.
That alone shows the algorithm isn't merely selecting results; they are instead engaging in propaganda. The credibility question in my mind is about how they classified the videos and other bits you don't see, but that's a deeper question than the methodology.
Except the whole reason for the TikTok bill is that information/speech will be under Chinese government control on TikTok and that can be weaponized.
So make up your mind: if you say TikTok is being banned over the possibility of "weaponized propaganda", then it is information being suppressed.
If you say it's not about information suppression, then you can't use the "Chinese propaganda" argument, which is used by pretty much all ban supporters.
>This isn't like China where the government bans any services they can't control, and directs the services that they can control to suppress any information they don't want people talking about.
Except it is. The Supreme Court has actually ruled that the First Amendment protects Americans' right to receive foreign propaganda, even during the Cold War.
I don't think you know what the First Amendment is. Not only does it guarantee freedom of expression, but also the freedom to receive others' expression and speech.
The U.S. government is not allowed to ban any foreign books, movies, or even propaganda.
I really wish people like you would do a little bit of research before making such confident statements.
They may not legally be allowed to ban it, but that doesn't mean it has to be easy to access. This is probably also why banning TikTok was / is such a challenge and couldn't just be done with Trump's executive order after Zucc whispered it in his ear in 2019, and why they can't just block it, but have to subpoena the app stores to delist it.
>Deliberately misinforming people, especially under a foreign state payroll, is illegal.
First of all, if you have any evidence of TikTok engaging in it, you should present it, since even our government has said there is no such evidence and that the possibility remains hypothetical.
Secondly no, it's not illegal to spread misinformation, no matter the motive. The First Amendment absolutely guarantees that right.
> Secondly no, it's not illegal to spread misinformation, no matter the motive. The First Amendment absolutely guarantees that right.
Again, does NOBODY know what the first amendment covers???
If you yell FIRE in a crowded theatre (misinformation), that is not covered by the 1st Amendment[1]. Please stop talking confidently about something you don't understand.
Edit:
Schenck v. United States was largely overturned by Brandenburg v. Ohio but not completely, only limiting the scope. There are also many other examples that could be used to show that spreading misinformation is not blanket covered by 1a (defamation for example).
If you do understand the First Amendment, then you should also understand that foreign propaganda is protected speech and is not treated as yelling fire in a crowded theater.
Correct, the mere act of spreading foreign propaganda, without more, is not illegal.
But spreading foreign propaganda is indeed illegal despite that precedent if one does it as an agent of a foreign government within the FARA legal definition (which is reasonably implied by being on their payroll) and does not register with the US government as a foreign agent, aside from certain exceptions.
>There are a bunch and they are very easy to find.
Really? Because the U.S. government, in its own court filing, has openly admitted that there is no evidence of wrongdoing by TikTok in terms of manipulating information.
I don't think it gets much more authoritative than the U.S. government's own court filing.
The link you provided has been debunked over and over again. It was a paid-for study aimed at generating a particular conclusion.
And its methodology is silly at best, insane at worst (it uses U.S. social media companies as the control group for neutrality on China, lmao).
> Secondly no, it's not illegal to spread misinformation, no matter the motive. The First Amendment absolutely guarantees that right.
Not accurate, no, assuming that by misinformation you mean information that the author knows to be false. To name just two quite legally clear examples with no inherent connection to foreign states, US defamation law and US product liability law often create civil liability and occasionally even criminal liability for certain categories of knowingly false statements.
But, sure, spreading misinformation is not always illegal, and a blanket ban on that would indeed violate the First Amendment even though more targeted bans have been upheld as passing the relevant judicial tests for laws affecting First Amendment rights.
Such as the two examples I gave in the comment you're quoting: US defamation law and US product liability law.
To be more concrete about the defamation example:
Imagine someone has a grudge against you for some reason that doesn't involve any history of illegal behavior, like maybe your business won a lucrative contract that they wanted for their business. Motivated by a desire to hurt your personal reputation and cause you social ostracism, they tell all your friends and neighbors that you're a convicted murderer, when they know you've never even been accused of any kind of wrongdoing in any court whatsoever.
To the best of my knowledge, this is illegal defamation in every US state, and it's criminal in some of them. Although it's rarely prosecuted as a crime, criminal defamation laws have been upheld as constitutional in certain situations including ones that would cover this scenario (if the available evidence meets the criminal standard of proof in court). Civil defamation lawsuits are commonly enough made across the US, and under scenarios like this one, are also commonly enough won (or settled between the parties).
To be more concrete about the product liability example:
Imagine that you are a business selling a product and you write "safe for all ages" on the box, when you know it has components that are small enough for young children to choke on, but you lie about it on the packaging because your product really appeals to young children and you don't want to lose out on the profits from selling to their parents. If a 2-year-old then proceeds to choke on one of the components in the box, yes indeed there are lots of courts across the US that would award damages to the affected family, and maybe some courts that would find criminal liability as well although I'm less sure of that question.
>It is illegal if it is paid for by a foreign state and undeclared.
Good. Because ByteDance has never tried to hide the fact that it's a Chinese company. So that argument wouldn't matter even if there were evidence of them pushing Chinese propaganda.
I think the point isn't that they're trying to hide the fact that it's a Chinese company, but that they control the algorithms that can be used to push undeclared foreign state-sponsored content.
Here is the dividing line rarely discussed openly. I and many other Americans truly believe the United States government and its oligarchs are the greatest threat to American citizens. Not China, Russia, Iran, etc. The latter are certainly threats, but they don't have anywhere near the capacity nor the desire to limit my rights the way the United States does.
> Removing the platform entirely is not censorship. Especially when everyone will just shift to Instagram, Youtube etc.
It is, because politicians who voted for the bill have openly admitted that the Israel-Gaza content is what pushed them to vote for the ban. Instagram, YT, etc. are under U.S. control and engage in sufficient self-censorship when it comes to the Israel-Gaza conflict in order to make Israel look good.
It is, and the court acknowledged that editorial control is protected speech.
The ruling was made on data privacy grounds, not on First Amendment speech grounds.