Misconceptions about SB 1047 (asteriskmag.com)
30 points by mitchbob 33 days ago | 35 comments



>If your model was trained on less than 10²⁶ flops and doesn't outperform those that were?

The speed at which the article glosses over one of the major issues with the bill is incredible.

The definition of "outperform those that were" (the wording in the bill is much the same) is part of the issue. The bill doesn't define any benchmarks, so who's to say a new benchmark doesn't come out tomorrow and your model, trained at home on three 4090s, is suddenly the best at something.

And the bill also makes you responsible for what people do with your model. If it is decided that you didn't put in enough safeguards (again, no real definition), your liability goes through the roof.

Now companies are pretty likely to just ignore all this in the short term, but laws like this essentially make it a matter of perspective whether someone is breaking the law. If you look hard enough from the right angle, any model could be argued to be in noncompliance.


In 5-10 years you'll be able to train models at home that blow right past these limits, given the rate at which chip makers are diving into the specialized AI accelerator market.

This reminds me of key size limits back during the cryptography wars of the 1990s. "Nobody would ever need a key larger than 64 bits." Then 5 years later keys that small could be cracked on a gaming desktop.
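
For a rough sense of scale, here's a back-of-envelope sketch (the per-card throughput, rig size, and price-performance doubling times are all assumptions, not figures from the bill or the article):

    import math

    # All inputs are rough assumptions, for illustration only.
    FLOPS_PER_4090 = 1.6e14   # ~160 dense FP16 TFLOP/s per RTX 4090 (assumed)
    CARDS = 3                 # the "3 4090s at home" rig mentioned above
    SECONDS_PER_YEAR = 3.15e7
    THRESHOLD = 1e26          # SB 1047's covered-model training threshold

    flops_per_year = FLOPS_PER_4090 * CARDS * SECONDS_PER_YEAR
    gap = THRESHOLD / flops_per_year  # factor between a year of home training and the limit

    # If home price-performance doubles every D years, the gap closes in D * log2(gap) years.
    for doubling_years in (1.0, 1.5, 2.0):
        years = doubling_years * math.log2(gap)
        print(f"doubling every {doubling_years} yr -> threshold reached in ~{years:.0f} yr")

Whether that crossover lands in roughly a decade or closer to 25 years depends entirely on the assumed doubling time, which is exactly the problem with writing a fixed numeric bound into law.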


I don't know that any toolmaker should be held liable for how anyone uses their tool. If that's the case, we have a long list of tool makers to hold liable before we get to AI model makers.


It wastes taxpayer funds on enforcing a moat for Sam Altman, it establishes a fixed computational bound in a legal regulation, it tries to police a free speech activity because of possible harms (but not the harms directly), and it is likely to have negative national security implications as other (less regulated) regions deal with fewer lawyers as they advance the state of the art.


Nice concise summary. The fundamental problem with all of these proposed "AI" "safety" regulations is that they adopt the corporate version of safety where LLMs refuse to talk about things that sound scary, mean, or even just controversial, while completely ignoring that these systems will be used to harm people at scale by turning gradually creeping corporate-individual power imbalances up to 11.


This exactly. I would be much happier if the regulation were "don't use GPT-4 to decide when to kick Grandma out of the hospital" or "don't use a Llama finetune to make policing decisions", which is where I see the most pressing need for regulation in the near future.


Not to worry because "neither the Attorney General nor anyone else can throw people in jail for violating the bill".

A jury of their peers on the other hand...

I honestly quit reading after that.


Wait, is this in favor of an AI regulation bill that doesn't do anything because it would only cover models that don't exist? Is that right? Then what is the reason for this bill's existence?


The models don't exist yet, but they will in the near future. It's easier to regulate something that doesn't already exist (for example, the Outer Space Treaty in 1967 banned military bases on the Moon).


There are a few differences between the Outer Space Treaty and AI regulation: it was easy back then to have a good idea of what having a military base on the Moon would lead to. On one hand, AI research is moving fast and we don't know where we will be a few years from now (I don't think the prevalence of AI hallucinations was apparent 4 or 5 years ago). On the other, AI models are already doing real damage (audiovisual deepfakes, AI face recognition and tracking...).

So let me put it a bit more pointedly: what's the point of regulating what doesn't exist when the existing models are already dangerous? Is there a legitimate reason? Is it a fig-leaf measure so that politicians can uphold the appearance of taking action?


> it was easy back then to have a good idea of what having a military base on the Moon would lead to

Genuinely curious where this comes from, because it’s still not clear what military bases on the moon might lead to. (Which is partly why the more-restrictive Moon Treaty failed.)


It doesn't take a lot of imagination to realize that whoever builds a military moonbase gains the ability to target and destroy any place on earth. No superpower wants to be threatened from space in that way.


> whoever builds a military moonbase gains the ability to target and destroy any place on earth

By turning the 20- to 60-minute flight time of an ICBM into a three-day trip? What?! This is like people with zero knowledge of orbital mechanics getting uppity about nukes in orbit.


Even if it was possible to evacuate a big city in three days, it would still be a valuable target.


> Even if it was possible to evacuate a big city in three days, it would still be a valuable target

By that logic there is military value to staging nukes to attack Earth on Alpha Centauri. You can launch more payload quicker from Earth, all without giving your enemy a polite three-day heads up that they should get around to wiping down their interceptors and nuking you back before cocktail hour.

There is no established value to a military base on the Moon, certainly not as it pertains to Earth directly. We didn’t sign the OST because of fears about lunar bombardment, we signed it because we feared teams trying to stake out conflicting claims in space starting a war on Earth. (See: China and America racing for the edge of Shackleton Crater.)


I guess it requires more imagination than I have, because I don't understand what a military moonbase gets you.


Lunar PX


and what is that?


Cheap consumer goods ON. THE. MOON!


Oh. I'm sorry, I didn't realize that AI research had stopped yesterday and we will not be making any further improvements, ever. Guess we're safe then. Pack 'em up boys, nothing to see here!

I am having a whole heck of a lot of trouble reading your comment charitably. Either you don't realize that AI research is continuing and will soon pass the limit in the bill (if it hasn't already), or you are a troll.


My favorite provision for covered models is "make sure it can be shut down". Yes, perhaps this could be a concern for future "models", but circa 2024 LLMs do not have agency and are hosted on fairly traditional server farms with obvious power switches. This legislation seems to owe more to Hollywood films (Kubrick, Cameron, Wachowski et al.) than to technological realities. While extreme shutdown measures might be desired to stop malicious software agents that depend on inference from a covered model, there are other reasonable security measures available. The legislation seems to fundamentally misunderstand what large language models actually are.


So your complaint is that, for the current AI technology, which certainly doesn't cover the entire space of possible AI technology, the provision in the bill is ... easy to follow?

It's easy to shut down your inference cluster and this is a problem?

What about people who are using language models to build agents? What about when we start adding RL to our language models and training them to act in the real world?

Doesn't it make sense to be able to shut this down? Wouldn't it be best to build in the big STOP button first when it is indeed easy?


No, my complaint is that it misunderstands where the threat actually resides for the foreseeable future. The threat is the traditional black-hat bad actor. Poorly written legislation could also harm these moat-building AI startups, who have been cynically advocating for such regulation, if their services are wantonly shut down by law instead of simply employing traditional network security countermeasures. If you had read my original reply completely, you would have seen my mention of agents.


The most dangerous thing about SB 1047 is that it is legislation without purpose. As the article rightly points out, it covers no real activities, and its requirement to be able to shut the machine down is, as far as anyone can figure, derived from watching The Terminator. A bill like this delegitimizes government altogether. I liken it to Dennis Kucinich's quixotic attempt to ban space-based mind control weapons.


The very obvious purpose of this bill is to provide a regulatory moat around the largest AI players, make no mistake. Didn't OpenAI donate a large sum to this guy's campaign pretty recently? Why aren't they up in arms about this? Or any other big AI player? It doesn't pass the smell test, especially given the barely veiled corrupt nature of the California political machine.


> Covered models are those trained on more than 10²⁶ flops (a measure of computing power that, at current prices, is estimated to cost between tens and hundreds of millions of dollars), or projected to have similar performance on benchmarks used to evaluate state of the art foundation models. If your model was trained on less than 10²⁶ flops and doesn't outperform those that were? It is not a covered model.

I think the number needs to be indexed to some measure of readily available consumer hardware.

Otherwise, there are lots of computing capabilities that were once estimated to cost tens of millions of dollars. For example, the first hard drives cost tens of thousands of dollars for a few megabytes, so a terabyte of storage would have been projected at tens of millions of dollars and seemed like an astronomical amount. And for a long time a teraflop was considered an astronomical amount of computing power, which a high-end desktop or laptop can now easily hit.
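
To make the "tens to hundreds of millions of dollars" figure concrete, here's a rough sketch of the arithmetic (GPU throughput, utilization, and rental price are all assumptions, not numbers from the bill or the article):

    # Rough cost estimate for 10^26 training FLOPs -- every input is an assumption.
    GPU_FLOPS = 1e15           # ~1 PFLOP/s dense BF16 for a current datacenter GPU (assumed)
    UTILIZATION = 0.4          # assumed training utilization
    PRICE_PER_GPU_HOUR = 2.50  # assumed rental price in dollars

    THRESHOLD = 1e26
    effective_flops_per_hour = GPU_FLOPS * UTILIZATION * 3600
    gpu_hours = THRESHOLD / effective_flops_per_hour
    cost = gpu_hours * PRICE_PER_GPU_HOUR

    print(f"~{gpu_hours:.2e} GPU-hours, roughly ${cost / 1e6:.0f} million at the assumed price")

Every one of those inputs has historically improved by orders of magnitude, which is the argument for indexing the threshold rather than freezing a dollar-scale compute figure in statute.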


Not familiar with this magazine but the article is pure propaganda in favor of an absurd government bill. Feels odd to read. I hope people think about what would motivate one to write and publish a piece like this.


> I hope people think about what would motivate one to write and publish a piece like this.

The same can be said about those who oppose the legislation. Are you against it because you're some black hat who knows how to exploit people using AI? Are you just part of Big Data and want to harvest everyone's info? Are you some kind of pseudo-anarchist who believes society is better off without any rules? Maybe you are all of those things. But it's most likely you're not, just as it's most likely that the person who wrote this is not maliciously motivated.

My point is that it's not conducive to good discussion to assume someone has nefarious reasons for an opinion without any evidence of it. Try pointing out flaws and countering them instead. Jumping to the conclusion that someone's a shill/troll/whatever is a bad idea in general.

There's a reason it's in the HN guidelines: "Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data."


Asterisk is tied to the effective altruism movement, which has been spending a lot of money lobbying for AI regulation: https://forum.effectivealtruism.org/posts/Mts84Mv5cFHRYBBA8/...


>think about what would motivate one

To be blunt: I think it's just someone who thinks differently than you. Clearly they do not think it's an absurd bill; that's what the whole piece is about...


What's a concrete example of something I could do with an AI model that I wouldn't be liable for today, but would be after SB 1047 passes?


""" This will not apply to a small developer for years. At the point that it does, yes, if you make a GPT-5-level model from scratch, I think you can owe us some reports """

This is an insane way to legislate, so at best, it makes me think of underlying issues like corruption and incompetence.

---

More holistically, it's alarming when you look at what they chose to focus on versus what they didn't.

I'm pro-regulation, but at the model provider tier, I'd rather see things like the equivalent of Net Neutrality and rules on platform providers not unfairly competing with app providers. That's the main new thing to be figured out, afaict, at the generic platform/utility provider level.

AI risk is real. At the platform level, the issue isn't models capable of doing bad things, but whether you are a platform for those doing them and how liable you are. Something like KYC is an everyone problem and already exists, so I'm unclear why we need this new legislative land grab; it's a distraction.

I'd rather see targeted risk/harm-based legislation for app providers... except that also already largely exists. The bank making loan decisions is regulated. Phishing is already illegal, irrespective of whether it uses AI. Which model provider helps AI users stay compliant is a market thing. If the government wants to be helpful, it can set clear standards at the application level where there is real consumer/business harm, and providers can decide if they help streamline those audits.

AI platform providers should be liable for knowingly enabling customers to do bad things (or intentionally looking the other way), but the nature of compliance changes when you take this stance... KYC again, which already exists. So again: corruption, incompetence, or what?

These politicians and lobbyists could have focused on regulating specific risks for specific industries, or on ensuring liability and fair competition for model providers. Instead they took a very different road, one that feels more about being anti-OSS and anticompetitive, about personal power and posturing, with weak risk reduction.


> This is an insane way to legislate, so at best, it makes me think of underlying issues like corruption and incompetence.

To me this feels more like saying "do not be hermetic and opaque when creating society-altering tools", which is a point worth considering.

A subtext to this discussion might be the slavering horde of managers who can't wait to jam AI into critical aspects of life.


A federal official was explicit about intentionally not inviting Meta (Llama 3) or other OSS leaders to their AI policy group, just OpenAI/MS/etc., and this in an environment where we know OpenAI has had a heavy DC policy and lobbying team since the beginning... Sometimes the simplest answers are the right ones.


Nobody I know is furious with AI regulation. Indeed, we welcome it.



