California residents: call your legislators about AI bill SB 1047 (twitter.com/chrislengerich)
22 points by convexstrictly 23 days ago | 11 comments



> SB 1047 creates an unaccountable agency that can refer model developers for charges that lead to jailtime (yes, literally) and is coming up for a vote in the California Senate. It's highly unpopular in the AI community, at an estimated 10:1 ratio based on recent public comments.

I know nothing about this bill other than what's in this tweet, but oh my gosh, criminal accountability for AI developers?? What a horrifying idea.


Agreed. Though if you read the bill, the only reason people bring up jail is the requirement to submit a report about the "positive safety determination" under penalty of perjury, which only applies if you knowingly lie in the report. I think that's a lot more reasonable: the same applies if you lie on, say, your driver's license forms.

If you merely fail to submit such a report, you're only liable for civil penalties. (see section 2606, https://legiscan.com/CA/text/SB1047/id/2919384).


I mean, if a model can be used in harmful ways, shouldn't we criminalize the people actually using it to do bad things (create deepfakes for pretend kidnap and ransom schemes or whatever) rather than the developers who create the model?

Prior to generative models, one could use Photoshop to create misleading content, but we didn't expect the developers at Adobe to face jail time. Why should the developers of Stable Diffusion 8 (or whatever model crosses the 10^26 line in the sand they've drawn) face it?


I mean, how about picking a lane? The AI community really doesn't get to take "oh my gosh we're on the brink of unleashing AGI! the human race is under threat!!!" and "chill out, this is just Photoshop" as simultaneous positions to argue from.


The AI community is not monolithic. I do not think we're close to AGI, and have never claimed that we are.

My lane is this: I think we're producing new and powerful tools. Some of those tools can be used as weapons. Some of those tools seem to be footguns. But they're relatively broad tools that can be used for many purposes, and the idea that the toolmakers need special oversight because one day someone might figure out how to get the tool to produce a plan for a chemical weapon (Sec 3 (n)(1)(A)) seems misguided to me.

Lots of kinds of tools are powerful. SMT solvers, logic engines, and finite element simulators all are, and it seems about as plausible that a team of smart people with a giant cluster of GPUs could use 10^26 operations to design a weapon with one of those other approaches ... or perhaps they could discover something really valuable! I don't think ML is in principle more dangerous than any of these other flexible computational tools; it's just the one changing the most and getting the most resources right now.


> My lane is this: I think we're producing new and powerful tools. Some of those tools can be used as weapons. Some of those tools seem to be footguns. But they're relatively broad tools that can be used for many purposes, and the idea that the toolmakers need special oversight because one day someone might figure out how to get the tool to produce a plan for a chemical weapon (Sec 3 (n)(1)(A)) seems misguided to me.

You do realize you're undermining your own point here? Photoshop isn't remotely similar to what you describe...

But also, note that I'm not arguing this is a great bill. Again: I haven't had a chance to read it yet. Maybe it's terrible regardless. I'm just saying that if I take the (terse...) commentary I'm reading about it online at face value, it very much achieves the opposite of "convince the reader this is a terrible idea".


AI in the specific form of generative models for images is not dissimilar to Photoshop. My claim is that AI software more broadly is like a lot of other software, in that you can use it for a lot of things, a minority of which are bad. You know you can use Excel to do accounting for drug trafficking businesses? And you can use Google Maps to plan your routes when trafficking humans? I don't think AI software is different in kind, only in degree. If we accept that developers should be on a short leash because they might build something that can be used for harm, then we should be aware that the creations of OS developers, language developers, web framework developers, P2P messaging developers, cryptography library developers, etc. are also sometimes used by people doing bad things.

Your initial reaction, which I think comes off as snide and flippant, is that it's fine for AI developers to face criminal penalties for building AI, presumably because it's capable of doing harmful things, even if those developers don't use it for harmful purposes themselves. You don't seem to attempt any positive argument for why those developers in particular should be criminally liable, but developers of other broadly applicable tools should not.

In any case, perhaps it's all moot because the person who was repeatedly downvoted to death was correct: I did look at the text and don't actually see criminal penalties anywhere. There are civil penalties, though, and I don't think AI developers in particular should have to convince regulators that their work is 'safe' to avoid fines, but at least the stakes are lower.


However, some forms of generative AI are clearly harmful and should not be released to the general public. Take tools that can clone someone's voice from a few seconds of audio. The legitimate uses of such a tool are basically limited to special effects in a small number of movies / radio plays where the original speaker is dead or otherwise unable to be recorded again. Yet the developers have released these tools to the general public, and they are now being widely used primarily to scam people via grandparent scams and fake celebrity endorsements of scams.

Personally, I think the people who released these tools should be held to account for the harm they are causing. I get that it was a cool thing to do, but the harm of their release is far, far greater than the benefit to humanity. Since the developers of these tools clearly aren't considering the consequences of releasing them to the general public, there is a need for governments to step in and regulate.

Furthermore, this isn't a new phenomenon in the tech industry. The checks and balances that existed in advertising before online advertising took over the world were irresponsibly discarded by companies like Google. Organizations like the various advertising standards bodies serve a purpose: to protect the public from harmful content, including scams. Instead we have a tech platform that has completely removed humans from the ad sales loop, which has resulted in ads for scams showing up higher in search results than the actual content. Combined with the decision to blur the visual differences between ads and legitimate search results, this makes it very difficult for less discerning people to avoid phishing and other scam websites. I've seen this with my own eyes: my elderly father gets highly targeted ads online for scams that don't show up when I do the exact same search from the same local network.

Maybe computer scientists need to be required to take ethics and sociology courses to teach them why these things are bad.


I really dislike seeing a call to action like this with all context squeezed down to a thread on X. Perhaps the bill is as crazy as the post makes it sound, but the giant claims (e.g. 24 regulators with no accountability who can put you in jail) make me suspect some level of hyperbole.

Does anyone have more thorough resources for this? I realize I can go read the bill, but I’m not sure how much I could grok from that.


here:

https://legiscan.com/CA/text/SB1047/id/2919384

and a helpful definition as you parse this:

> (f) “Covered model” means an artificial intelligence model that meets either of the following criteria:

> (1) The artificial intelligence model was trained using a quantity of computing power greater than 10^26 integer or floating-point operations in 2024, or a model that could reasonably be expected to have similar performance on benchmarks commonly used to quantify the performance of state-of-the-art foundation models, as determined by industry best practices and relevant standard setting organizations.

> (2) The artificial intelligence model has capability below the relevant threshold on a specific benchmark but is of otherwise similar general capability.
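For a rough sense of what that 10^26 threshold means in practice, here's a back-of-the-envelope sketch. It assumes the common ~6 × parameters × tokens approximation for dense transformer training compute, and the model sizes are hypothetical illustrations, not anything from the bill:

    # Rough check against the bill's 10^26 FLOP "covered model" threshold.
    # Assumes the common ~6 * N * D approximation for dense transformer
    # training compute (N = parameters, D = training tokens). The example
    # model sizes below are hypothetical, not taken from the bill.

    THRESHOLD_FLOPS = 1e26

    def training_flops(params: float, tokens: float) -> float:
        """Approximate total training FLOPs as 6 * parameters * tokens."""
        return 6 * params * tokens

    examples = {
        "70B params, 15T tokens": training_flops(70e9, 15e12),    # ~6.3e24
        "400B params, 40T tokens": training_flops(400e9, 40e12),  # ~9.6e25
    }

    for name, flops in examples.items():
        side = "over" if flops > THRESHOLD_FLOPS else "under"
        print(f"{name}: ~{flops:.1e} FLOPs ({side} the 10^26 threshold)")

On that approximation, the threshold sits somewhat above the largest publicly estimated training runs to date, which seems to be the point of where the line was drawn.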


Anybody have one of those prefilled forms I can attach my signature to and send to my representative? The Twitter links were broken for me.



