Up next: a lawsuit threatening your YouTube watch queue (mux.com)
81 points by davekiss on May 1, 2023 | 64 comments



I think the crux of the issue here is that the recommendation algorithm isn't built to say "find ISIS material and promote it". It says "find content that will generate more engagement for this user". Reddit is in a similar situation -- it promotes what other people upvote. Some submissions will be removed due to their specific content, but nothing is promoted because of it.
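
To make that distinction concrete, here is a minimal sketch of what a content-neutral engagement ranker might look like (all names and the scoring function are invented for illustration; this is not YouTube's actual code):

    # Hypothetical engagement-only ranker: scores candidates purely on
    # predicted engagement for this user, never on what the content says.
    def rank_for_user(user_history, candidates, predict_watch_seconds):
        # predict_watch_seconds(user_history, video) -> expected seconds watched.
        # Nothing here inspects a video's topic, message, or politics.
        scored = [(predict_watch_seconds(user_history, v), v) for v in candidates]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [video for _, video in scored]

Nothing in a ranker like this ever names ISIS material; it surfaces only if the engagement model predicts a given user will watch it.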

This is where Twitter could get into hot water -- if the code that was released is accurate, they were specifically promoting tweets by certain people and with certain content. They can't claim "it's just the algorithm" because their algorithm is intentionally biased based on the content itself.


The Twitter example is a really good one. Google obviously chose to take things to the extreme by arguing that even deliberately pro-ISIS algos should get protection, but a more moderate outcome could be something like "YT is only liable if they specifically designed their algo to recommend the problematic content".


Are you talking about the Musk tag used to measure visibility? I don't recall them hard-coding promotion of specific accounts; if they had, it seems like it would be a major talking point.


Lots of changes get made at Twitter every day, and each one will have to keep the Musk metric healthy. Even if the changes aren't explicitly designed to promote Musk's own tweets, having the metric visible to all employees will at minimum eliminate any change that would demote Musk's tweets. Over time, the Twitter algorithm will become biased towards promoting Musk-like content.


We only have Musk's word that the tag was used to measure visibility. Wouldn't a random sample of users of different sizes and types make more sense for that?

Even if it was, that could have the same effect. If they test every change and only keep the ones that don't harm Musk's visibility, that is in effect making the algorithm consider Musk specially.
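
A hypothetical sketch of that release gate (the account handle, metric, and `apply` method are all invented for illustration): no individual change promotes anyone, but only changes that keep one account's visibility flat or rising ever ship.

    # Hypothetical release gate: each ranking change may look neutral in
    # isolation, but the acceptance criterion applies selection pressure.
    def should_ship(change, baseline_ranker, measure_visibility):
        candidate_ranker = baseline_ranker.apply(change)
        before = measure_visibility(baseline_ranker, account="@elonmusk")
        after = measure_visibility(candidate_ranker, account="@elonmusk")
        # Any change that would demote this account is silently discarded,
        # so over many iterations the shipped ranker drifts toward favoring it.
        return after >= before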


Would promoting Twitter Blue subscribers' content over non-subscribers' be promoting specific accounts? Musk has said this was going to happen, and I think we saw it in the code + it sure feels like it when using the site.

Seems like that could even be considered hardcoding: buy Twitter Blue, get ranked higher in replies on threads.


Companies have been offering paid boosts for many years, though. Not sure how this is different. I guess as long as anyone can buy it, it's OK?


>It says "find content that will generate more engagement for this user".

Actually, I think it's probably "find content that will maximise monetisation through more engagement for this user".


>I think the crux of the issue here is that the recommendation algorithm isn't built to say "find ISIS material and promote it". It says "find content that will generate more engagement for this user".

Which would be a good point to argue, and a good distinction for the law to make, but Google's defense is that even a deliberately pro-ISIS algorithm would be exempt under 230. From the article:

>>They argue that in an internet brimming with content, there is literally no way to present things that doesn’t involve some deliberate prioritizing of what to show. Taking their argument to its logical extreme, Google’s lawyers even claimed that Section 230 would protect a platform against suit over a deliberately pro-ISIS algorithm.


When making legal arguments you'll often do things like: "this recommendation algorithm is legal because any algorithm would be legal, but even if you don't accept this then this algorithm is legal because it didn't consider the content of the video in making a recommendation, but even if you don't accept this then..."

That they asserted that any algorithm would be legal doesn't tell us that their case stands or falls on that assertion alone.


Sure, they want to argue the most extreme interpretation of 230 to get the court to either side with them completely or "compromise" by getting to what I wrote.

At the same time, they are lobbying Congress to keep 230 the way it is, but to do what I wrote as a compromise if they have to.

They just don't want to lose Section 230 protection because it would mean the end of user generated content. Without 230 protections, every user submission would have to be reviewed before going live, even here.


> Without 230 protections, every user submission would have to be reviewed before going live, even here.

Yep. And if that goes, then the last major good thing about the web will be destroyed. Or second-to-last, since some real people actually do still put up their own noncommercial websites.


I don't think that they made this argument because they want to promote ISIS. I think they made it because they want to promote content for CNN, MSNBC, Pfizer, Fox News and whoever else gives them money to do so.


I’m used to lower quality SCOTUS analysis getting posted here, but the first thing I wanted to note is that this site’s page design and playback widgets are actually pretty good. It’s apparently not a news organization, though -- I was ready to blindly throw the RSS feed into NetNewsWire and see what I thought of it by next week.

Second, it pretty much nails it in this paragraph:

> A few outcomes are possible. First, the court could uphold the lower court decisions that 230 immunizes Google from liability in this case. Second, the justices could find in favor of the Gonzalez family, taking 230 immunity off the table and causing the lawsuit to move on to its next question: whether YouTube’s conduct violates specific antiterrorism laws. Last, the court could ultimately decline to rule on the question, citing clerical issues or the low likelihood Gonzalez would succeed on the antiterrorism element.

Those are the possibilities. I had to go back and look at my notes from a few months ago because a similar and related case against Twitter had oral arguments scheduled the same day. Where I ended up coming down, based off a reading of the room, is that Section 230 most likely gets upheld entirely in the Google case, 9-0 or 8-1, and it’s not even close; but it might still be very slightly curtailed by the outcome of the Twitter case, which wasn’t a Section 230 (Communications Decency Act) case per se but a Section 2333 case under the Anti-Terrorism Act.

Here is the Oral Argument and transcript of Gonzalez v. Google LLC: https://www.supremecourt.gov/oral_arguments/audio/2022/21-13...

And this is Twitter, Inc. v. Taamneh: https://www.supremecourt.gov/oral_arguments/audio/2022/21-14...

Both are worth listening to, and if you’re going to listen to both, listen to Gonzalez v Google first, as it was argued first. Oral arguments are not always, or even usually, the determining factor in how a case comes out, but reading the room, I just don’t see the votes for curtailing interpretations of 230 in any significant way on the Court.


Is it just me or are a lot of software related questions going to these 9 people who “are not like the nine greatest experts on the internet”, according to Justice Kagan?

Although to be fair, they did really well on Google v Oracle (https://en.wikipedia.org/wiki/Google_LLC_v._Oracle_America,_....). The questions the Justices asked were very good, and the final judgment was definitely written by a clerk who understood software well.

It's possible they might mess this up. I'm having a hard time imagining a site that doesn't make a decision on what content to show. Even HN makes an editorial decision when choosing what to show on the front page. It would be absurd to hold dang accountable for that.


> Is it just me or are a lot of software related questions going to these 9 people who “are not like the nine greatest experts on the internet”, according to Justice Kagan?

We have an adversarial court system, and they don’t need to know software to know the law. It is the job of the attorneys to properly educate the Justices in their briefs, filings, and oral arguments on the facts of the case and how they think the law should be read in their favor; and the Court watches what goes on in the Appellate and District courts to gauge what issues they will be dealing with. Lastly, while the Justices themselves might have left school decades ago, they regularly rotate through clerks, bringing fresh opinions and perspectives into the Courthouse.

What’s important is what the law says and the way it should be read, and that’s going to be inconvenient for someone, but typically any perceived defects in the law lie squarely at Congress’s feet.


> What’s important is what the law says and the way it should be read, and that’s going to be inconvenient for someone, but typically any perceived defects in the law lie squarely at Congress’s feet.

Congress writes laws with the assumption that ambiguities and complications will be intelligently and justly resolved by a human being - namely, these 9 people. The issue is not that one side or the other may fail to educate the Court, it's that the problems of "what the law says" and "what justice is" and "how the internet should work" may not be entirely coherent, especially when societal norms are changing as fast as they are today.

When the Constitution was written, the word "publisher" had a completely different meaning, one formed with no awareness of the possibility of YouTube. Even in 1996, algorithmic Internet moderation meant something different from what providers are doing today. How will -- how should -- people in 5 or 10 years moderate websites and consume content? I'm not the greatest expert on the Internet; I started using it at approximately the same time that Section 230 was passed, but I'm very familiar with it, and that would be a really hard question for me to answer. A senior citizen who formed and solidified most of their ideas about how society works, or should work, three to five decades ago would have a really hard time wrapping their mind around the concept, even with excellent guidance from the best attorneys.

We need judgements that put our society in the best position for the next decade, and these 9 people may not be sufficiently able to be coached to think ahead far enough to do that.


> We need judgements that put our society in the best position for the next decade, and these 9 people may not be sufficiently able to be coached to think ahead far enough to do that.

That is a kind of judgment, but that kind of judgment is a policy judgment, not a legal one.

If you want a policy judgment, you go to Congress; they make a determination, decide if they’re going to pass a law, and if they do so then you have a law that reflects what their policy is in the text of the statute. If you want a legal one, you go to the Courts. If there’s ambiguity in the law, then you’re going to have different interpretations, and the attorneys at trial, and often the Solicitor-General of the United States are going to present their best arguments as to how the law should be interpreted.

In other words, a trial and its eventual outcome are a collaboration between the adversaries, the Justices, their staff, and the US government when it chooses to chime in (it usually can if it wants, even if it’s not one of the parties at trial). You’re giving the Justices too little credit and more responsibility than they lawfully possess; it is already sufficient that they have the responsibility of interpreting the laws, facts, and arguments before them.


The Constitution doesn't even contain the word "publisher".

Issues that come before the Supreme Court are by nature largely toss-ups with no clear right or wrong answer. Instead of taking a risk with those 9 people, lobby your members of Congress to write laws that are so clear and specific that no judicial interpretation is needed.


+1 -- SCOTUS even seems to agree that they wish Congress would just modernize 230 so they wouldn't be trying to interpret what the word "publisher" (conceived before the internet existed) means in a digital era. All SCOTUS can do is look at the law as currently written and try to apply it to current situations in a reasonable way. By contrast, Congress can wholly rewrite the law and introduce new terms/concepts into it.


Section 230 was written for the digital era, in the digital era. The only people who think this law is ambiguous are pretty ignorant of its history. It is exceedingly clear; they just don't personally like that and are throwing around bullshit that makes no sense as FUD.

I would admit that the law may need changing given the rise of internet mega-corps.


Section 230 was written in 1996, and in using phrases like "interactive computer service" and "access software provider" is absolutely ambiguous in its application to a 2023 internet. If it were clear, this case wouldn't have made its way up to SCOTUS.


It also defines its terms right there within the statute, in 230(f): https://www.law.cornell.edu/uscode/text/47/230

Actually to save everyone a click, let’s bring it here:

=====================

(f) Definitions

As used in this section:

(1) Internet: The term “Internet” means the international computer network of both Federal and non-Federal interoperable packet switched data networks.

(2) Interactive computer service: The term “interactive computer service” means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.

(3) Information content provider: The term “information content provider” means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.

(4) Access software provider: The term “access software provider” means a provider of software (including client or server software), or enabling tools that do any one or more of the following:

  (A) filter, screen, allow, or disallow content;

  (B) pick, choose, analyze, or digest content; or

  (C) transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.


Exactly, if there's some courtroom-drama-type "gotcha" that a knowledgeable Justice could ask, then it is the job of one side or the other to make that argument.


Some of their past internet-related cases have been worrisome, but if you listen to the oral arguments on this one, they do ask some good questions... though that's probably partly due to their recent-law-school-grad clerks helping with prep.


Keep in mind, when you get to the "oral argument" phase, the Justices have already had months to read the documentation and the briefs submitted in order to articulate and understand those issues.

The attorneys for both sides state their case well ahead of time before the oral arguments.


It's not far-fetched to argue that content recommendation algorithms are protected under 230, but I do recall the Twitter sidebar during the 2020 elections featured original commentary under the trending tags, which definitely is speech by Twitter. Stuff like "x wrongly claimed y" or "people are protesting z due to..."


I wonder if there are any legal protections for what the Twitter trending list used to do -- paying people to summarize the topic that was trending -- vs. the company itself stating its opinion on a given subject. The Google search answer box, for example, will show you an "answer" for a given query, but it's not Google's answer to the query; it's whatever the source they selected says.


Then again, if I ask for a ranking by average user rating, or alphabetically, then I (the viewer) am setting the criteria.

If YouTube's algorithm is deciding what to show me, then I (as the user) am asking for YouTube's opinion.

So I think that there is a coherent argument that YouTube's recommendation algorithm is expressing an opinion, and is therefore YouTube's content.

Would it have 1st amendment protection though?


>If YouTube's algorithm is deciding what to show me, then I (as the user) am asking for YouTube's opinion. So I think that there is a coherent argument that YouTube's recommendation algorithm is expressing an opinion, and is therefore YouTube's content.

To me it seems like a platform could easily skirt this by having users opt in to recommendations (and maybe even choose a sort criterion) at signup.

>Would it have 1st amendment protection though?

The 1st amendment only protects against government censorship, not private lawsuits for existing causes of action (e.g., defamation). It would be individuals (like the Gonzalez family) suing YT in a world where the recommendations were deemed to be YT's own content.


> > If YouTube's algorithm is deciding what to show me, then I (as the user) am asking for YouTube's opinion. So I think that there is a coherent argument that YouTube's recommendation algorithm is expressing an opinion, and is therefore YouTube's content.

> To me it seems like a platform could easily skirt this by having users opt in to recommendations (and maybe even choose a sorting criteria option) at signup.

How does opting in make any difference, for this particular point?


If a distinction were drawn (by SCOTUS or Congress) between search results (where you ask the platform to produce a ranked list based on search criteria) and recommendations (where you theoretically didn't "instruct" the platform to show them), I could see platforms adding a little modal to the signup flow saying "please instruct us to show you recommendations", and maybe surfacing a few settings for that, so they could later argue the recommendations were only served at your request (and thus should get search-result-style protection).


Thanks for the explanation - I'm not American, so I didn't understand the scope of the 1st Amendment.


Recommendation systems are a large part of what is wrong with the internet, and of the issues that we blame on moderation.


They're far from perfect, but are you saying you don't want recommendations at all? Just want to blindly search for things you might find interesting?


> are you saying you don't want recommendations at all

Not from the platform itself. Not from an algorithm.

I want recommendations of channels to subscribe to from my friends. I want a world where outfits like Consumer Reports that do honest hard work succeed over SEO growth hacking. I want a world where the human judgment of people I trust decides who gets excluded, not easily gamed content moderation systems.


I'm not opposed to shutting down SEO BS, but I think there's a middle ground somewhere. Like the Netflix model seems reasonable. They're promoting content they think I'll watch, but the show titles and thumbnails aren't quite clickbait. They do fiddle with the thumbnails to increase engagement, but at least they're still pics from the movie/show.


What is your definition of "blindly"?


My recommendations are things I'd never think to type into a searchbox. "Blindly" as in typing random words to discover content because I don't know how else you would find it.


Surprisingly I find myself thinking "Yes, the recommendation algorithm is Google's speech." Particularly if they claim it's secret and proprietary. If Google claims ownership of it, they own it, not any user, i.e. it's not user-generated content. That's my feeling on it, which obviously doesn't necessarily correspond to the law or to how this case eventually turns out.


I generally like the YouTube recommendation algorithm. I would hate to see it go away. On the other hand, I can foresee a recommendation algorithm that would potentially be problematic for society. It might be interesting if 3rd party pluggable recommendation algorithms were supported.
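
A sketch of what such a plugin boundary could look like (entirely hypothetical; no such YouTube API exists, and all names here are invented): the platform hands a user-chosen algorithm the candidate videos plus some signals, and renders whatever ordering comes back.

    from typing import Protocol

    class Recommender(Protocol):
        def rank(self, user_signals: dict, candidates: list[str]) -> list[str]:
            """Return candidate video IDs in the order they should be shown."""

    class ChronologicalRecommender:
        # A trivial third-party plugin: newest first, with no
        # engagement optimization at all.
        def rank(self, user_signals, candidates):
            published = user_signals["published_at"]  # video id -> upload timestamp
            return sorted(candidates, key=lambda vid: published[vid], reverse=True)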


However, besides recommendations, Google is removing content, and some of this removal appears politically motivated, or perhaps government agencies are asking for its removal. This aspect is very bad.


YouTube recommendations make me incredibly sad, to the point where I start to lose hope for humanity. I know that's a strange thing to say but bear with me.

My web browser blocks all cookies and therefore no matter how often I visit YouTube, it thinks I'm a brand new user. The video recommendations I see are therefore YouTube's "default" set; presumably what's trending at the time.

The recommended videos are, without exception, pure trash. We're talking everything from Buzzfeed-level clickbait with blatantly Photoshopped thumbnails to the lowest-effort softcore-porn-level sensationalist crap which is mostly composed of lobotomised talking heads, mouths agog, "reacting" to other people's content.

Quite often YouTube presents starkly contrasting recommendations, for example a video of violent combat footage right next to some kids dancing, or stuff like this elegant pairing of the sexualisation of women next to some cUtE pUpPiEs: https://i.imgur.com/TcxhVhH.jpg

It's psychopathic.


You're not supposed to say that out loud, but that type of content must be what appeals to the average person. Which says a lot about their intelligence. Smart people overestimate how smart other people are. See

https://www.unz.com/akarlin/stupid-people/


I used to think so too, but I've met people smarter than I am who legit binge watch cute puppy videos for a basal emotional high they enjoy, just like a lot of other people can't help but impulsively wanna see a volleyball athlete's crotch. It's junk food; people are programmed to love it regardless of intelligence.


It seems plausible that there is at least a moderate correlation between low IQ and watching these videos.


The recommender is actually trying to get you to scroll the recommendations for as long as possible, not show you a useful/good video. On mobile they show you an ad every 5 recommendations, which is more ads per minute than if you watch an ad then a long video.
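
Rough arithmetic with assumed numbers: if scrolling past five recommendations takes about 30 seconds, that's roughly 2 ads per minute; a single pre-roll ad followed by a ten-minute video works out to about 0.1 ads per minute, a ~20x difference.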


Based on the kinds of things that are most popular on YouTube I believe that the majority of YouTube watchers are children or intellectually disabled people.


I think in this case they're probably right: it's not merely user-posted content. They're actively building a list of recommendations, so instead of viewers stumbling upon such content, YouTube is pushing it to them, which IMO changes their responsibility -- a curator, if you will.


Of course, if the platforms hadn't happened, we would all have our own websites and promote a curated list of others. The www would have worked out great.


This title (and the article itself) is deeply biased. Obviously "platform" recommendations are materially different from third party content.


Author here. Sounds like you and I agree that algo-generated recommendations of content are different from actual content itself. But the point of the article (and title) is that the plaintiffs in this case argue that each time YT generates recommendations, they're making new content that YT can be liable for.

I'm mostly just recapping what was said by each side at oral argument and the potential dramatic consequences if SCOTUS finds fully for Gonzalez -- not sure I follow where you're seeing bias.


I said that recommendations themselves are materially different from hosted content and therefore are not protected by Section 230.


How would you respond to Google's argument that all content has to be ordered in _some_ manner to be displayed?


Draw the line at the number of parameters needed to produce the ordering. An A-Z or chronological ordering is based on 1 parameter, whereas a recommendation algorithm uses dozens. A HN-style algorithm needs maybe 2-3 parameters. The regulator can make a value judgment that up to a certain number of parameters is just hosting a directory of third party content, whereas above that you're promoting it and have to accept some liability for the content.
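
Toy scoring functions to illustrate the proposed test (the HN formula below is the commonly cited public approximation, not necessarily what the site runs today; `recommender_score` and its inputs are invented for illustration):

    from datetime import datetime, timezone

    # 1 parameter: pure chronology -- under the proposed test, just a directory.
    def chronological_key(post):
        return post["created_at"]

    # 2 parameters (votes, age): an HN-style score.
    # Assumes post["created_at"] is a timezone-aware datetime.
    def hn_style_score(post, now=None):
        now = now or datetime.now(timezone.utc)
        age_hours = (now - post["created_at"]).total_seconds() / 3600
        return (post["points"] - 1) / (age_hours + 2) ** 1.8

    # Many parameters: a personalized recommender. This side of the line
    # would carry liability under the proposed test.
    def recommender_score(post, user, weights):
        signals = [post["points"], post["comments"], user["topic_affinity"]]
        return sum(w * s for w, s in zip(weights, signals))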


A quantity-driven test would mean that an algorithm that says "sort by number of mentions of ISIS" is safe, but an algorithm that says "sort by a score comprised of newness, number of upvotes, number of comments, and geographical proximity" would fail... It would also doom search engines, whose results pages are generated by multi-variate algorithms.


In principle I'm fine with all of that. I suspect there's not much mainstream appeal for a YouTube that only recommends ISIS videos, so you're not pushing ISIS videos to someone who wasn't already looking for them. I suspect such a site would fall foul of law enforcement on other grounds (such a rule change wouldn't be allowing anything that wasn't allowed before). I don't see why the regulator would need to decide on the same limit for pull-based search results (I've told the site what I'm looking for) rather than push-based recommendations (the site is pushing something else alongside what I was looking for).


I think your distinction on push vs pull is a good one that Congress could consider incorporating if it chooses to revisit 230, though that approach is probably too far from the current text for SCOTUS to be willing to read it in when it rules on Gonzalez.

However, something worth noting: wherever 230 lands, there's not some regulator using discretion in enforcing it (like a prosecutor deciding when to charge someone with murder) -- this is a law that gives tech platforms a defense from the private lawsuits that could otherwise put them out of business (e.g., suits by individuals/businesses for defamation because the algo ended up recommending a "John Smith is a lying fraud" video).


No thanks. I don't want incompetent and biased government regulators dictating what software features private companies are allowed to build. If you don't like YouTube recommendations then just ignore them. No one is prying your eyes open and forcing you to watch.


No one is forcing you to smoke either, but there’d be armies of marketing teams with huge budgets encouraging you and your kids to try if it were unregulated. I can recommend The Fifth Risk by Michael Lewis if you want the long answer.


If they don't support ordering ascending and descending by date, number of views, and title alphabetically, that's their fault.


But do those need to be basically the only sorts?


It doesn't have to be "displayed" at all. It doesn't even have to be searchable. Once they start selecting things to show you, they stop being a host and start being a publisher.


Yes, it would be nice to be able to watch a WWII history documentary on YouTube and not have the recommendations instantly fill with Hitler/Nazi videos.





