Network of channels tried to saturate YouTube with pro-Bolsonaro content (phys.org)
93 points by belter 11 months ago | 49 comments



Society should discuss whether recommendation algorithms and targeted advertising must be regulated. It is known that people are more likely to share content that shocks them. This starts a cycle in which recommendation algorithms serve users ever more extreme videos, making people more radical and making them consume more of the same drug-like content.

Result: a polarized society, a few content creators making good money generating extremist content, and Google and advertisers making billions regardless of how much they threaten democracy elsewhere. Yes, people are making lots of money with lies and half-truths. It is well known, and I think some of the liars know they are lying but don't care because they are making money.

My girlfriend's father consumed so much of this absurd content that nobody sane can talk to him anymore. Last year, after the elections, he insisted that Lula was dead and that someone was impersonating him. On New Year's Eve, instead of spending time with his family, he was watching one of these YouTube channels expecting to see a coup in Brazil. He is still telling everyone that our money in the banks will be seized by the government.

In short: recommendation algorithms and targeted advertising are destroying families and people.


> Society should discuss whether recommendation algorithms and targeted advertising must be regulated. It is known that people are more likely to share content that shocks them.

We can have that discussion now: “No.”

To the extent that you want to apply any law, it’s to content, not the recommendation algorithm: copyright-infringing stuff[1], pornography, snuff films, and other already illegal stuff. I don’t like people like, say, Tucker Carlson, whose whole schtick is making themselves viral by making a very shocking mountain out of a molehill and riling people up, but the government also has no place passing legislation against or “regulating” that.

[1]: I’m setting aside my own issues with extant copyright laws for the sake of a more productive discussion on the proposed topic, but I’m not entirely happy with them either.


> I don’t like people like, say, Tucker Carlson, whose whole schtick is making themselves viral by making a very shocking mountain out of a molehill and riling people up, but the government also has no place passing legislation against or “regulating” that.

You can regulate recommendation algorithms (by making them transparent, opt-in, etc.) without infringing on Carlson's right to speech.

My proposed regulations would be that all automated recommendation algorithms are:

1. clearly labelled as automated (not "editor's picks" or "other people are watching," which connote social validation that may not exist)

2. opt-in (no recommendations shown unless you request them)

3. auditable (for X days after a recommendation is made, it should be possible to see each step involved in it being recommended to that user)

None of these infringe on anyone's speech. They just make it harder for massive, irresponsible companies to do psychological experiments on billions of people without transparency or culpability.


My concerns aren’t purely First Amendment related. Some of this is just product decisions and business processes.

1. What does this labeling get you? Taking YouTube as an example, it is basically some combination of what other people watch, what channels I subscribe to, and what’s picked out as similar to things I’ve watched before; even in those exceptional circumstances where I’m like 1 of maybe 40 views ever on a video, it’s because someone uploaded something similar to something I watched before. That’s the point of the algorithm: it takes multiple inputs and spits out recommendations in an automated fashion.
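To make “multiple inputs in, automated recommendations out” concrete, here is a rough, purely illustrative sketch; the signal names, weights, and numbers are invented for the example and are not any real platform’s system:

    # Toy illustration only: invented signals, weights, and data, not YouTube's actual system.
    videos = {
        "video_a": {"co_views": 0.9, "subscribed_channel": 1.0, "similar_to_history": 0.2},
        "video_b": {"co_views": 0.1, "subscribed_channel": 0.0, "similar_to_history": 0.95},
    }
    weights = {"co_views": 0.5, "subscribed_channel": 0.3, "similar_to_history": 0.2}

    def score(signals):
        # Blend the inputs into a single number; the ranking is fully automated.
        return sum(weights[name] * value for name, value in signals.items())

    ranked = sorted(videos, key=lambda v: score(videos[v]), reverse=True)
    print(ranked)  # ['video_a', 'video_b']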

I think most people also understand that these are machine generated recommendations, even if they might phrase it differently. So again, what does it get you?

2. Why opt-in? That’s a product decision. If you’re using the service, you’re opting into how the service functions as is, and the service provider may provide additional options for how it works beyond that, but why does the government get to make that kind of product decision? Then, if YouTube isn’t providing a list of recommendations when you load it, what are you seeing instead? Is that going to be decided by the government as well?

3. Why are users entitled to see an audit of what’s basically proprietary product information about how the website works, effectively making that information available to competitors as well?

> None of these infringe on anyone's speech. They just make it harder for massive, irresponsible companies to do psychological experiments on billions of people without transparency or culpability.

This experimentation is a genuine ethics concern, and from a practical standpoint I also can’t walk up to a SaaS product that I paid for and expect it to work the same way I had been using it every time. I would not support your proposed remedies, though; there may be a productive route in governing psychological experimentation.


> 1. What does this labeling get you?

It tells you that the recommendation is blind and wasn't reviewed by a human.

For example, much of the right-wing US media audience primarily consumes Fox News, while non-right audiences are spread across outlets.

So let's say your algorithm is based on what other people view. The entire feed would be Fox (and this was a common issue with Google News before I stopped using it). A minority of people are making it look like a majority of people are consuming that media, which legitimizes it. "Google" (a legitimate company) might be pushing a conspiracy theory as the #1 trending news story.
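To put rough numbers on that skew (these figures are made up for illustration, not real audience data): if 40% of viewers all watch a single outlet while the other 60% are split evenly across ten outlets, a naive "most viewed" ranking puts the concentrated outlet on top even though most people aren't watching it:

    # Made-up numbers for illustration only.
    viewers = 1000
    views = {"concentrated_outlet": int(viewers * 0.40)}  # 400 viewers
    views.update({f"other_outlet_{i}": int(viewers * 0.60 / 10) for i in range(10)})  # 60 each

    top = max(views, key=views.get)
    print(top, views[top])  # concentrated_outlet 400: tops the chart with only 40% of viewers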

> 2. Why opt-in? That’s a product decision.

All experiments should be opt-in. We don't know the effects of algorithmic recommendations, especially because they constantly evolve.

> why does the government get to make that kind of product decision?

Because the government's job is to make it illegal to do things that are dangerous. Why does the government get to decide that cars have published safety ratings or that food has lists of ingredients?

If (for example) Facebook wants to be responsible for its algorithm's recommendations, then sure, take the government out of it. It becomes Facebook's speech, which is protected.

But Facebook is hiding behind Section 230, and that should come with additional oversight and restrictions.

> 3. Why are users entitled to see an audit of what’s basically proprietary product information about how the website works, effectively making that information available to competitors as well?

Seeing how something worked doesn't tell you how to build it yourself. If TikTok tells me how it recommended a video to me, that information isn't nearly deterministic enough for me to reverse-engineer their algorithm. In fact, their algorithm is impossible to replicate without all of their training data (which remains proprietary).

I agree that some "secrets" would be exposed, but they wouldn't be useful to a competitor. They'd be useful to consumers who want to choose the most competitive product.


Facebook is not hiding behind Section 230 for their recommendation algorithm. They are hiding behind the First Amendment. By making recommendations, Facebook is expressing an opinion. US courts take a dim view of government attempts to restrict the expression of opinions. There is an extensive body of case law on this issue.


If Facebook's algorithm recommends illegal porn or a terrorist's call to arms (and it has done both), then the First Amendment doesn't protect it. Section 230 does, even though Section 230 should have been limited to the hosting and broadcast of the content (not the recommendation/endorsement of it).


Section 230 does not address recommendation algorithms at all.

https://news.bloomberglaw.com/tech-and-telecom-law/justices-...


Yes, I understand. I understand Section 230 and the First Amendment.

Imagine two scenarios:

1. The Meta corporate entity posts a call to arms to kill the president. It is algorithmically shown at the top of every user's feed.

2. Famous actor Tom Cruise posts a call to arms to kill the president. It is algorithmically shown at the top of every user's feed.

In the first scenario, Facebook would be liable for inciting violence. In the second, it would not be. The difference is that Facebook is not responsible for anything posted by Tom Cruise.

That effectively allows Facebook to "speak" with impunity as long as they use Facebook users as volunteer sockpuppets.

This is only possible because of Section 230. Otherwise they'd be liable for any illegal speech on the platform, regardless of whether the algorithm spreads it or not.


> For example, much of the right-wing US media audience primarily consumes Fox News, while non-right audiences are spread across outlets.

In the context of US Google News, why wouldn't Fox News be a legitimate news source? I mean, it's not a high-quality news source, and it wasn't even before the News side was basically subordinated to the needs of the Opinion side after the 2020 election, but Google News was not the legitimizer; the American public was. Google went where they went, and while I'm no fan of Google News either, to their credit no one news organization tends to dominate search unless that organization is literally the only one covering a story (and sometimes Fox is that organization; the left buries stories it doesn't like for a long time too).

> All experiments should be opt-in. We don't know the effects of algorithmic recommendations, especially because they constantly evolve.

All product decisions, including the algorithm in use, are a matter of someone's professional judgement in the end. When you don't trust their judgement, you shouldn't use their services, and people do make that choice all the time. I think this is conceptually severable from the A/B psychological experimentation that SaaS companies engage in on their existing userbase, which changes the product for a subset of users in order to evoke and examine a response.

> Because the government's job is to make it illegal to do things that are dangerous. Why does the government get to decide that cars have published safety ratings or that food has lists of ingredients?

To "make something illegal" is specifically in the purview of legislatures in our system of government, and the legislatures of both the States and the Federal government have a lot more other work to do besides going around looking for things to make illegal. If anything, the laws that make up our collective criminal codes could see a reduction rather than further expansion and we would be better for it. To "make something illegal", you also need to make the political case for it to be something the legislature takes up, and since that's what I'm asking you to do here, it's a bit circular to then say "because it's the government's job".

Also: websites and internet services are not automobiles, food or drugs. They don't share the same challenges, issues, and risks. To the extent we tolerate government regulation of websites so far, it's to suppress things that are illegal in meatspace too (child pornography, money laundering, conspiracies to engage in various criminal activities like selling drugs and human trafficking). The presentation of a website or internet service, manual or algorithmically-generated, is unique to that website or internet service.

> If (for example) Facebook wants to be responsible for its algorithm's recommendations, then sure, take the government out of it. It becomes Facebook's speech, which is protected.

> But Facebook is hiding behind Section 230, and that should come with additional oversight and restrictions.

They do, and they are, but their liability is limited by Section 230, and I'm not sure that it shouldn't be. I keep going back to re-read Section 230, and to be honest it's one of those rare statutes that I think is exactly what it needs to be at this time, nothing more and nothing less. Considering how bad and unclear most statutes are, that's something of an accomplishment. There is just enough room for interpretation for the Judiciary to examine its interactions with other statutes, and they'll get to it when they get to it, on narrow grounds. I thought Twitter v. Taamneh had the potential to ever so slightly constrain Section 230, but the way that case came out, it didn't even come close to addressing the possible interaction between Section 230 of the Communications Decency Act and Section 2333 of the Anti-Terrorism Act, because the holding was that the plaintiffs failed to state a Section 2333 claim in their allegations.

> Seeing how something worked doesn't tell you how to build it yourself. If TikTok tells me how it recommended a video to me, that information isn't nearly deterministic enough for me to reverse-engineer their algorithm. In fact, their algorithm is impossible to replicate without all of their training data (which remains proprietary).

I think it still tells you a lot. Even if it is insufficient on its own to build it yourself, it is sufficient for actors on the network to aggregate how a service comes by its recommendations, which gives them additional insight into how to exploit it for their own uses. And it still runs up against the question of why it is the government's place to insert itself into product decisions like this and take this choice out of the hands of the product designers of websites and internet services.


> 3. Why are users entitled to see an audit of what’s basically proprietary product information about how the website works, effectively making that information available to competitors as well?

For the same reason that ingredients and detailed nutritional information are listed on food products. Governmental agencies also physically audit food factories to ensure they conform to openly available health standards.

The products you sell have positive and negative impacts on society, and society can choose to make you disclose exactly what's in your product, or force limits on how you make your product, to limit the negative impacts.


There is also a substantive difference between the proposed legislation here and the FDA's mandate: the FDA's mandate covers substances we put into our bodies in order to live and/or not die, and even their mandate only goes so far. Even then, an ingredients list doesn't tell you everything. There are cooking processes that can affect the final appeal (read: "healthiness") of the product but which do not need to be disclosed.

You're going to need to give me something more than "it impacts society in some way" for the government to begin mandating product decisions, presentation, and the disclosure of proprietary information. Human activity and the economy don't occur in a vacuum, but that doesn't automatically give the government a mandate to regulate every aspect of society, including all forms of business activity with a direct government interest, and that's a tough sell when we're talking about voluntary services[1].

[1] Notwithstanding systems like Facebook's infamous "shadow profiles" which I am A-OK with the government axing. If you effectively have a "profile" without having a profile you created, that undermines your own agency.


> with a direct government interest

Future readers! This is supposed to read “without a direct government interest” but I am only catching this well outside the edit window.


> massive, irresponsible companies to do psychological experiments on billions of people without transparency or culpability

Exactly. They measure and optimize for monetization while completely ignoring, and being willing to sacrifice, individual mental health and society in general. The division and mass brainwashing we are experiencing do have a cost; it just happens to be an externalized one from the perspective of the content distribution / advertising platforms.

At this point people are openly being told the lie that a significant part of the population are groomers and pedophiles, which is 100% aimed at blind, destructive, unconditional rage and irreconcilable division. All for the benefit of distraction, political division, and engagement.

We need a force to limit the effects of the infinite cynicism of the people doing the brainwashing. We could at least not auto-amplify their voices.


> but the government also has no place passing legislation against or “regulating” that.

Banning recommendations would probably be fairly excessive, at least if the recommendation is consensual (that is, if the site has the user's informed consent to profile them for recommendation purposes). However, it would arguably be pretty reasonable, and in line with how, say, 'native' advertising is generally treated, to require sites to include a warning to the effect that this is a machine recommendation which is vulnerable to manipulation.

As is, users may read recommendations as "this is what my friends are watching" or "this is what a human has chosen as recommendations for everyone", whereas in fact they are generally "this is the platform's attempt at maximising the ad revenue they can squeeze out of me, but also it may have been manipulated by malicious actors and the platform knows that". This should probably be made clearer to users.


Is that really the case though? I think most people do understand that it is largely a non-human automated process that figures out what to show them. I asked someone else what the labeling really gets you, but your argument seems to be that the transparency is an end in and of itself.

So what about just a general disclosure that this service relies on automated recommendations?


People click on what they want. Those are symptoms of underlying issues from someone who shouldn't be using such services in the first place. Protecting people from their own mental and moral malleability and ignorance is futile, as there will always be an issue to be resolved. The focus should be on education, and (for parents) control on what they should be accessing.


> Those are symptoms of underlying issues from someone who shouldn't be using such services in the first place. Protecting people from their own mental and moral malleability and ignorance is futile, as there will always be an issue to be resolved.

Why is the blame on people for what is made available to them? This wouldn't be 'protecting people from their own mental and moral malleability'; this would be preventing others from making things that abuse that malleability for profit.

The same type of argument you have made here could be made for legalizing Ponzi schemes. People _should_ know that there is no get-rich-quick scheme and that the history of these schemes is full of fraudsters. Is it wrong to ban this type of abuse because their victims made the choice to engage in the first place?

And, to preempt the "Ponzi schemes are different" response, I ask that a response of this type explain how those differences matter.

  - If it is in legality, that only applies until a law is made about addiction-intended media, and it is circular (it is wrong because it is illegal <--> it is illegal because it is wrong).

  - If the difference is in the intent of the ones perpetrating it, both perpetrate primarily for monetary gain. I don't believe a Ponzi scheme's primary intent is ever malicious (with the qualifier that I'm not assuming pursuing monetary gain is malicious in and of itself), nor do I believe this about social media.

  - If it is the directness of Ponzi schemes in their fraud versus social media, I don't view that as very relevant... I think it would be reasonable not to blame someone for being the proximal cause of an issue if they didn't know beforehand, but the abusers in this case know by now that they are at _least_ a proximal cause.


> Why is the blame on people for what is made available to them? This wouldn't be 'protecting people from their own mental and moral malleability'; this would be preventing others from making things that abuse that malleability for profit.

I do believe our own actions are the only thing we have absolute control over in life. In a perfect world, maybe such efforts would be more effective. I take into account the subjectivity of human actions and behavior here. The number of safeguards you would need to put in place in order to protect people in that scenario would leak into many other areas, or create an eternal conflict at minimum (it isn't like we don't have one already...). I believe that education and knowledge are the path to solving such issues, because they create a mechanism for people to deal with them (like a vaccine), rather than ensuring they never see them.

> The same type of argument you have made here could be made for legalizing Ponzi schemes. People _should_ know that there is no get-rich-quick scheme and that the history of these schemes is full of fraudsters. Is it wrong to ban this type of abuse because their victims made the choice to engage in the first place?

People should be educated about it. It wouldn't solve all the problems immediately, but it is more efficient if people protect themselves from these issues. Ponzi schemes are illegal and they are still an issue; the same would happen with other regulations, in my opinion. Companies always find a way to profit.

Social media is a moneymaking machine, and data is valuable. It's not just internet fun; it's serious business. In a way, I don't think the two are too far apart. I dislike social media and haven't used it in many years. You could apply regulations; it's not that doing so is wrong, it's just ineffective against the underlying issues and the problem itself in the long term. The effort should go more towards a long-term solution. The line here is blurry; that's what happens when you deal with humans. You could go towards the "protect people from harm" idea or the "give them tools for self-protection" idea; I honestly prefer the latter.


> Protecting people from their own mental and moral malleability and ignorance is futile

Drugs are regulated for the very same reason. While prohibition may be a polemical topic, regulation is not.


The example is good, and look where we are. Regulation brings immediate partial results to a problem, but doesn't solve it.

I agree that the problem should be resolved, but those methods are only put in place to please shareholders and the law. The hard path, of education and compromises (and possibly lower revenue), is one no one wants to take. Those services exploit "mental and moral malleability", and no amount of regulation will solve that unless you take down all the services.


"We made an addictive product. Fuck you for being addicted to it."


>People click on what they want

Do they?

>Protecting people from their own mental and moral malleability and ignorance is futile,

Where are you from? I'm guessing some first world country that has all kinds of consumer protections.


> Do they?

They do. Can they be influenced? Yes. Can they be coerced through social pressure? Yes. Can they fall for targeted advertisement and/or manipulation? Yes as well. Does this remove their responsibility for their own actions? I don't think so. We can argue that this problem was born of their parents or of their own lack of knowledge/critical thinking later in life.

> Where are you from?

I am Brazilian.


> The focus should be on education, and (for parents) control on what they should be accessing.

I've worked in education, and spend a good deal of time working with folks who are in dire straits.

The hard fact that a community whose 'median commenter' is above the median of the general population hates to realize is that roughly 1/3 or so of all people are just not remotely the type of person who can do the abstract meta-cognition to see they're being manipulated.

No amount of education will actually make them see something too abstract for them to grok.


> The hard fact that a community whose 'median commenter' is above the median of the general population hates to realize is that roughly 1/3 or so of all people are just not remotely the type of person who can do the abstract meta-cognition to see they're being manipulated.

True! This is a complex subject that I have been interested in for a long time. It all boils down to 'digital inclusion' (making services dumber/simpler so they're easier for the average person) for profit, and its long-term effects. But that's a whole other can of worms.

There is no easy solution for that problem.


That's the first time I've heard this challenge described as 'digital inclusion' but that is exactly what it is.

We are pretty clearly in a world where people who understand computers are a new scribe class who have mountains of power over others, and hesitate to admit as much because it burdens us with the responsibility to think about and be mindful of how we use that power.


I don't know. Before I went to prison (so I guess I was in a very poor state to make choices) I somehow gravitated to a bunch of conservative websites. Instapundit was the start. And I bought into it all. Having a 'nice' 5-year 'break' and then coming back, I saw what total hateful/emotionally manipulative crap it was. But I think if I hadn't had that 'break' they would have been able to bring me along on their journey. I'm a hippie love child, love-everybody libertarian (in the hippie sense) who is pro open borders, and they were getting me to feel hate over some straight-up bullshit. I still have to keep those sites in my NextDNS blocklist to prevent me from going there to 'just see' what they are talking about.


This is what I mean by underlying problems. Every extremist organization lures people with manipulative tactics, and if you are not in a good state of mind to spot those, you will be caught. That's why education and knowledge about those subjects are important, so people can understand and think about them. If you never understood why they are so bad, you would have to rely on others to make sure you never see them on your screen.


"a complex, web-like influencer system of channels that shaped political narratives"

So, political advertising in a very slightly more modern form.

Both parties I get to choose between every few years have their political advertising machines that could be described as above.

All of it sucks and there should be laws against it, but the people who make the laws are the ones doing it.

Spend money, reach voters. Just another variation.


It's not clear that money was involved here, or how much. Rather than jumping to paid advertising, this might be a "more modern form" of political free speech and advocacy


How would we tell the difference?


You have the option of voting for a third party candidate. They won't win in this next election, but it helps to build momentum for future elections.


>but it helps to build momentum for future elections

American third parties have not shown any evidence of this. Meanwhile we have strong evidence of the spoiler effect they have. We can only play the game we currently have


The "game we currently have" does have a way to support far-left or far-right candidates: you can vote for them in the primary. You can support them and vote for them every minute until the general election, and then you just need to pick one of the top two candidates.


Not surprising. The rapid increase in the amount of pro-Bolsonaro content was clearly visible, even spilling over to users like me with no history of viewing political content on the site, especially in 'cuts' for YouTube Shorts.


Not reading the content, but being part of the team trying to figure out which new actors are trying to influence YouTube could be an interesting problem to work on, probably from an academic perspective. In actuality you probably deal with a lot of garbage content.


1,200 videos over almost two months isn’t going to saturate anything on YouTube. Isn’t this just marketing?


According to the article, “98 percent of volunteers ranked public security, race, LGBT+ issues, and the economy as political themes,” and at the same time “98 percent of volunteers ranked…transportation, health, education, indigenous and feminist issues as adjacent to politics.” Is that plausible? Did they miswrite something?


I made an effort not to click on anything political for a few months, and now there's absolutely no video from either side except for the "breaking news" section that YouTube adds, not even in YouTube Shorts. We tend not to notice when things are right, but god damn how much I love YouTube's recommendation algorithm. I really hope it never corrupts itself. "Do not recommend" works, and not just for the specific channel but for the overall category and related content. It's amazing.


Huh?! I despise their recommendation algo. So much so I've stopped clicking on even mildly-interesting-looking vids on my YT (Premium) home page.

I watched one, ONE, video about James Randi; the next day half the recommendations were clips about him, most from...I need a name for those channel operators who re-post other people's vids just for impressions. "Scumbags"? "A-holes"?

But I digress.

I click on one British historian and see 10 videos from other British historians the next day. I click on a gay comedian's video, and see 10 other gay comedian videos. Wait a day and I'll see gay British historians' videos! WTF?

I used to spend several minutes during each session selecting "Not interested" and "Do not recommend" to tweak the algo and I don't see any proof that it works.

I just ignore the home page these days. Much like I do Twitter and Reddit...


Complete opposite experience from me. I don’t seek out political content on YouTube. I mostly watch some gaming content. Much of it (GDQ streams, for instance) is pro-LGBT and left-leaning.

Despite this, YouTube constantly pumps far-right content into my recommendations, and there’s seemingly no way for me to stop it. Straight-up offensive content, including YouTube Shorts ranting about trans people and misgendering them. Misogynistic shorts with comedians joking about how “women belong in the kitchen haha”. Interviews with Andrew Tate. Not to mention the constant ads from religious nuts. Or the constant political ads. Their recommendations suck.

I skip these shorts partway through, and I avoid clicking the suggestions elsewhere, but it’s an endless stream. YouTube is constantly trying to get me to watch this awful content.

I’ve stopped using shorts, and basically exclusively check my subscription box. The home page and suggestions are useless for me. I put up with YouTube as a site, but I’d honestly prefer to watch the same content elsewhere if I could. Those recommendations are making the world a worse place.


How can it be surprising that a presidential candidate is campaigning during the election? I read the article and it strikes me as completely absurd. Of course a candidate is going to use his network of supporters to try to influence voters in every channel available, including YouTube. The writers try to present this as something strange, and I can't really understand why.

Brazil is a democracy. Candidates will use all legal means to try to influence people to vote for them; that is what they are supposed to do.


Honestly, you are giving too much credit to these guys. It was purely brute force.


[flagged]


There are networks of groups trying to saturate the mainstream news with pro-Their Issues content. That's what activist networks are - they get donations, members, PR firms, etc. and seek to influence politicians and the population with their viewpoints.

Is this shocking? It's how democratic politics works - coalitions, influence, impact.


Yeah, literally everyone is doing it. I know HN doesn't love "whataboutism" arguments, but everyone is doing it, so I tend to look askance at articles clutching their pearls about how some particular group is doing it, because I positively guarantee you "their" side is doing it too... because everyone is doing it.

Me, I'd like "everyone" to stop, but that's not really on the table. All a legal ban would be is a privileging of the party in power at the moment, as they would be the enforcers and they would not enforce it against themselves.


Illegal in many countries such as France


Do you have an example of someone being stopped from doing this sort of thing with the leadership's propaganda?

Because I'd lay quite a lot of money down at good odds for you that they absolutely have a line item somewhere on some budget for paying people to flood the internet with their message, no matter how illegal it is. Being illegal just means they have to hide the line item better.


Huh? Political ads are illegal in France. I’m sure there are people doing influencing off the books, like in every country.



