hendrycks's comments

A few things that folks in the comments seem to be misunderstanding about the bill (full disclosure: I've been part of a group advising Senator Wiener on SB 1047):

1. The new Frontier Model Division is focused on receiving information and issuing guidelines. It’s not a licensing regime and isn’t investigating developers.

2. Folks aren’t automatically liable if their highly capable model is used to do bad things, even catastrophic things. The question is whether they took reasonable measures to prevent that. This bill could have used strict liability, where developers would be liable for catastrophic harms regardless of fault, but that's not what the bill does.

3. The bill requires developers to test their models and report whether they have hazardous capabilities (and the answer can obviously be yes or no). Even if the model does have hazardous capabilities, the developer can still deploy it if they take reasonable precautions, as outlined in the bill. As for perjury, you would need to intentionally lie; good-faith errors are not covered. I get that models can have unforeseen capabilities, but this isn't about that. If you are knowingly releasing something that could have demonstrably catastrophic consequences, it seems fair to have consequences for that. Some things that already require certification under penalty of perjury: lobbying disclosures, companies' financial disclosures, immigration compliance forms.

4. Overall it seems pretty reasonable that if your model can cause catastrophic harms (not true of current models, but maybe true of future ones), then you shouldn't release it in a way that predictably allows folks to cause those harms.

If people want a writeup of what the bill does, I recommend this one by the law firm DLA Piper (https://www.dlapiper.com/en/insights/publications/2024/02/ca...). In my opinion this is a pretty narrow proposal focused on the most severe risks (much narrower than, e.g., the EU AI Act).


If people want a more detailed writeup of what the bill does, I also recommend this thorough one by Zvi: https://thezvi.substack.com/p/on-the-proposed-california-sb-...


On point #3, as far as I can tell, the bill defines a "covered model" (a model subject to regulation under this proposal) as any model that could "cause $500,000 of damage" or more if misused.

A regular MacBook can cause half a million dollars of damage if misused. Easily. So I think any model of significant size would qualify.

Furthermore, registration and pre-clearance of models will surely happen before any public release, and that means a loss of competitive cover for startups working on new projects. I can easily see disclosure filings being monitored constantly for each new AI development, leaving startups unable to build in private against larger players.

