
Is there any way to comment past the date? This seems like a horrible idea.



Open model weight bans will likely be struck down as First Amendment violations because, at their core, model weights are a form of expression. They embody the ideas, research, and innovations of their creators, similar to how code was deemed a form of speech protected under the First Amendment during the encryption debates of the 1990s. Just as the government's attempts to control encryption software were challenged and largely curtailed due to free speech concerns, any attempts to ban open model weights will face legal challenges arguing that such bans unjustly restrict the free exchange of ideas and information, a cornerstone of First Amendment protections. The precedent set by cases involving code and free speech strongly suggests that similar principles apply to model weights, making such bans vulnerable to being overturned on constitutional grounds.


> Open model weight bans will likely be struck down as First Amendment violations because, at their core, model weights are a form of expression. They embody the ideas, research, and innovations of their creators, similar to how code was deemed a form of speech protected under the First Amendment during the encryption debates of the 1990s

I hope you are right, but I think there are some nuances here you aren't considering.

Courts have ruled that code is protected under the 1st Amendment because it is something created by human beings, and expresses the ideas and style of its human authors. There is a clear analogy to literary works – which is also supported by the precedent of copyright law protecting computer source code on the grounds that it is a type of literary work – and literary works are a core part of the 1st Amendment's scope as traditionally understood (even back to its original framers).

Whereas, model weights are just a bunch of numbers produced by an automated process. The legal argument that they should be protected by the 1st Amendment is much less clearcut. I would be happy if they were found to be so protected, but one ought to be careful to distinguish what one would like the law to be, from what it actually is.


> Courts have ruled that code is protected under the 1st Amendment because it is something created by human beings, and expresses the ideas and style of its human authors.

They’ve also allowed non-content-based restrictions of that speech under intermediate scrutiny, and content-based restrictions under strict scrutiny. So the fact that LLMs might fall within the scope of 1A protection does not mean that restrictions on distributing open LLM weights would necessarily violate the 1A.


I agree. I just posted a reply to a sibling comment saying the same.

I personally think the odds are decent that content-neutral, parameter-size-based distribution restrictions will be upheld under intermediate scrutiny. I am opposed to such restrictions and think they are a stupid mistake, but I can separate my own feelings about a topic from how SCOTUS is likely to perceive it.

Whereas, I’m sceptical that any content-based regulations (e.g. requiring LLMs to have safeguards that prevent them from generating disinformation, extremism, hate speech, etc.) will survive strict scrutiny. Even if there is some category of content so horrible that SCOTUS concludes the government is justified in restricting LLMs from producing it, it is basically technically impossible to design safeguards that apply only to that horrible content without collateral damage to less horrible content. Given that reality, I doubt any content-based LLM restrictions will be upheld.


The statement that "if you process the following list of numbers like this, you'll get that result" is human-created speech.
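To make that concrete, here's a toy sketch (the numbers are made up, and real models have billions of weights, but the principle is the same): the weights are an inert list of numbers, and it's the human-written code around them that says how to process that list to get a result.

    # Hypothetical toy "weights": just a list of numbers.
    weights = [0.2, -1.3, 0.7]
    inputs = [1.0, 2.0, 3.0]

    # The human-authored rule: "process the list like this, and you get that result".
    result = sum(w * x for w, x in zip(weights, inputs))
    print(round(result, 1))  # -0.3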

It's generally really easy to mix one category of information with another. The DeCSS song is a great example of somebody using a little bit of cleverness to inextricably combine functional code with core protected political expression. Even if you thought "pure" code was not speech, removing the code from the song would destroy the message of the parts that clearly are speech, and there's no obvious alternative way to send that message. https://www.youtube.com/watch?v=PmLpLdGzNpo

... but you're right that it's not a slam dunk in a real court. What is almost certain is that the argument is strong enough for any regulations to end up stayed while the issue works its way through a very long legal process. Probably long enough to make the whole question moot.

Basically it's a waste of time for any US Government agency to try this.


> The DeCSS song is a great example of somebody using a little bit of cleverness to inextricably combine functional code with core protected political expression.

Keep in mind the outcome of the DeCSS case - the Second Circuit did hold the DeCSS code to be protected speech, but it also upheld the DMCA as a constitutional restriction on that speech under intermediate scrutiny. The losing side decided not to appeal, because of the risk that SCOTUS might uphold the decision.

One could fine-tune an LLM to defend a particular political viewpoint - and one might thereby argue that model weights are expressive and hence within the scope of the 1A.

But, if all the government does is ban open model distribution above a particular parameter size, the government will argue that is a content-neutral restriction (it applies to all LLMs regardless of their viewpoint). It would essentially be a “time, place and manner” restriction, and thus subject to “intermediate scrutiny” - which requires the restriction to further an important government interest and be substantially related to that interest. And I’m afraid the government has a decent chance of winning there: it just needs to convince the Court that AI risks are real and that the regulation is substantially related to controlling them. I doubt it will struggle to convince SCOTUS justices of that - none of them really understand AI, so if experts tell them it is risky, they will believe them, and they will hesitate to stop the government from trying to regulate those risks.

Whereas, if the US government went beyond a mere parameter-size limit and tried to impose “safety standards” on LLMs based on the kind of content they generate (disinformation, extremism, hate speech, etc.) - that is harder to justify as content-neutral, and lack of content neutrality makes strict scrutiny apply, where the government is much more likely to lose. The conservative justices especially are likely to view many of those standards as having a progressive political bias.

> What is almost certain is that the argument's strong enough for any regulations to end up stayed while the issue worked its way through a very long legal process. Probably long enough to make the whole question moot.

Yes, it is likely to get tied up in the courts for a long time, and there are decent (but not certain) odds that the regulations get suspended while the case proceeds. The longer they are suspended without the sky falling, the more it undermines the government’s case that they are substantially necessary. However, I’m not sure that will really be decisive, because AI safety experts will always claim “we got lucky with this generation, the risk for next year’s or next decade’s remains very real”, and SCOTUS justices will probably believe them.


That's up to the whims of the US Supreme Court. They're in the process of legalizing their own corruption right now. I don't think you should take for granted that just because you have a solid and obvious argument, your rights will be protected by them.


If it makes you feel better, the Supreme Court unanimously struck down a North Carolina law that prohibited registered sex offenders from accessing various websites, including social media platforms where children could become members. The Court held that the law imposed an unconstitutional restriction on lawful speech. Justice Kennedy, writing for the majority, noted that a state may no more suppress lawful speech online than it could on public streets or in parks, just because the expression occurs online. Packingham v. North Carolina (2017).


No money was at stake there, I think?

It's not that I think the US supreme court will never rule correctly in favor of speech or other basic freedoms. I'm sure they will, as long as none of their corrupt interests are threatened. As they say, "the rules only matter when the outcome doesn't".


> If it makes you feel better

It doesn't. Pedophiles aren't high up on the priorities of the Federalist Society or Heritage Foundation or whoever Thomas' latest sugar daddy is.


I know. These are the digital rights the Supreme Court is fighting for.


"Fighting for"? That's laughable.

They're going to flip the second one of their patrons is involved, regardless of precedent.


As if Justice Thomas hasn't had a completely consistent judicial philosophy his entire tenure. But don't let anyone stop you coping.


> That's up to the whims of the US Supreme Court.

The supreme court is certainly biased in favor of conservatives.

But conservatives these days are generally much more in favor of free speech and libertarian arguments than the people on the left.

So, for this specific issue, you should be happy that the supreme court is more conservative and willing to support free speech arguments.


> Corruption doesn't mean "they do bad things". Instead it means that they are biased.

That's totally wrong; it sounds almost like downplaying or excusing corruption (which is inherently a "bad thing" on its own) by comparing it to just having a bias.

Corruption is not just having a bias (towards "people who give me bribes", if nothing else) but also acting upon that bias and mixing it up with a job that requires impartiality.

In contrast, merely having bias is nowhere near as bad, particularly since someone can be strongly biased and still recuse themselves.

When someone talks about "corruption" on the Supreme Court, it's probably not hyperbole about bias, but a reference to hundreds of thousands of dollars in alleged bribes.


Gotcha. But in reference to the actual topic here, my point still stands, even given your caveat.

The people supporting open models are a much larger group than the minority who are trying to ban them.


Well, that too. But corruption is "The use of entrusted power for private gain". It doesn't have to be personal gain, and it doesn't have to involve bribes. You just have to serve someone else over the public who entrusted you with power.

Even without the scandals of gifts to SC judges, there's much room for judges to be corrupt.


Given how screwed up Citizens United is, I am starting to get wary of declaring every exchange of money and information as free speech.

IMO free speech is permitting the broadcast of clear political thought and the freedom from being persecuted by government for expressing arbitrary political or religious ideas.

Enabling every person to have the information equivalent of a nuclear bomb doesn't seem like a first amendment issue. But I'm not a lawyer. Maybe I just don't like the first amendment anymore.

AI is like everyone having a Star Trek holodeck, but everyone can force anyone else into their holodeck and fool them with the most unethical or criminal scams.


Open model weight bans will be impossible to implement unless the US literally implements the great firewall of China itself.


The US government can pressure tech companies like Microsoft (owner of GitHub) to delete any repos with open source AI models. These companies would voluntarily comply as they'd be happy to be able to blame the government for the anti-competitive behavior they probably want to engage in anyway.


They can do whatever they want. We will still be able to download them from Russian and Chinese mirrors.


Beyond horrible. Beyond even dystopian films. Look at the evil that mega corps and governments have shown with regard to privacy and freedoms. They've been eating away at them for years, one bite at a time, never so much as to cause a revolt. Are we to depend upon the good intentions of mega corp owners and politicians to wield AGI exclusively and for the benefit of society, when that same AGI will render any form of organized public protest impossible?


You can write to them anyway and also send a copy of the letter to your representatives.

It still sends the message, and if many people send such letters, it will force them to reconsider.



