It’s funny because I wish there were more legislation around databases. People do a piss-poor job of securing and maintaining them, resulting in all kinds of theft, fraud, etc. Likewise, we for some reason allow people to build databases of information about people that should be illegal (e.g. real-time location tracking), which they then sell to people who have no business having that information.
So the thing is, nobody was worried about databases. And the people who were not worried about databases are worried about AI. It's like saying, "Oh, you're regulating biological weapons? Funny, you could say the same thing about bows and arrows!"
Not all things are actually the same! At some point the analogy breaks down, and one thing can be VERY BAD where other things are FINE. You have to look at the actual contents of the thing, and a lot of very smart people think that AI is *not like the other things*.
As if the world doesn’t run on databases. And now the government wants to be able to shut them down.
People were definitely worried about databases! All the jobs lost for people who did filing! Copying! Typing! It’s easy to look back with hindsight and say, eh, no one worried.
Heck, there were strikes when someone first played a record on the radio! God forbid!
BTW: a lot of people consider what they work on to be so much more important to the world, and thus consider themselves more important.
A lot of hand-waving about how "minor paperwork" for everyone isn't such a big deal. But very little coverage of what benefit that paperwork is accomplishing. This looks like a big bag of regulatory requirements, some of which might be good ideas and some of which might be bad. If it's the start of a discussion around what actually good regulations look like, then they have to start somewhere and maybe it will accomplish that. If it's an actual proposal and we have to take or leave everything, maybe they need to go back to the drawing board and figure out something better.
This is true, but I do think it's worth reflecting on why/how.
Someone writes these things. They are influenced by lobbies, yes, but even lobbies, think tanks, and the like generally draw their ideas from what's lurking around in their vicinity.
There are no secret geniuses working out how this should work. It is what it is, and it shouldn't surprise anyone.
The public sphere is not good at producing politically practical suggestions either. It's good at talking s*** and offering charismatic partial answers, speaking for some specific interest or opinion. Barely better than the lobbies, sometimes worse.
That's an interesting concept: There's the theory, that I tend to believe, that any successful complex system was born as a simple system, then evolved by iteration, etc. (it's from a 1970s systems theorist, iirc). Trying to birth a complex system directly is almost inevitably a mess.
But does it apply to legislating? One drawback might be that political will and attention is fleeting; and interests become vested in a current state, blocking change; and it's now or not for another generation or so.
It's worth keeping in mind the well-known principle in economics that incumbent market participants, with large economies of scale, have a strong incentive to pursue regulations that increase fixed costs but don't substantially affect marginal costs, or are otherwise decreasing costs with respect to scale. This makes it harder to start a competing firm.
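To make that mechanism concrete, here's a minimal sketch (all numbers are made up purely for illustration, not taken from any real regulation or firm) of how a fixed compliance cost falls much harder on a small entrant than on an incumbent operating at scale:

    # Toy numbers, invented for illustration: a fixed compliance cost F adds
    # F/q to average cost, and F/q shrinks as output q grows.
    def avg_cost(fixed_cost, marginal_cost, quantity):
        return (fixed_cost + marginal_cost * quantity) / quantity

    compliance_cost = 2_000_000   # hypothetical annual fixed cost of compliance
    marginal_cost = 10            # hypothetical per-unit cost, same for both firms

    for firm, q in [("small entrant", 10_000), ("large incumbent", 10_000_000)]:
        print(f"{firm}: {avg_cost(compliance_cost, marginal_cost, q):.2f} per unit")
    # small entrant: 210.00 per unit; large incumbent: 10.20 per unit

Same marginal cost for both, but the regulation-driven fixed cost barely dents the incumbent's economics while it can price the entrant out entirely.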
That is a risk, but regulation can also keep the market open to competition, preventing incumbents from abusing their market power. Antitrust law, for example, and the FTC do that.
Right. But this is not a thread about the general merits of government regulation in business. This is about something specific, and this principle is essential to keep in mind when evaluating any specific new regulation.
Because we should have a high "index of suspicion" whenever we see incumbent corporations pushing for regulation that would suppress competition.
If we want to have good regulations and effective regulatory agencies, we should be judicious and methodical about what regulations are adopted, who is granted enforcement authority, and what that authority consists of.
> One drawback might be that political will and attention is fleeting; and interests become vested in a current state, blocking change; and it's now or not for another generation or so.
If a problem, like tracking what non-SOTA models are doing with respect to safety, doesn't rise to being worth regulating, then why bother regulating it? If you're going to force everyone in the field, in California, to file additional paperwork (and understand what they're legally supposed to report and how and when), then you probably should have a compelling argument for why. If you don't regulate it now, and it's never a big enough problem to regulate, then you've done no real harm. If it's clearly an issue that needs to be regulated in the future then it should be easy to do so.
> If it's clearly an issue that needs to be regulated in the future then it should be easy to do so.
That hasn't really been the case, though. Lots of things have needed regulation for a long time - look at carbon emissions, for example - and it doesn't happen.
Perhaps the solution is to create the regulatory agency and give them a legitimate process to create new regulations as needs change. Legislatures have limited time and expertise, and can't possibly keep up anyway.
>A lot of hand-waving about how "minor paperwork" for everyone isn't such a big deal. But very little coverage of what benefit that paperwork is accomplishing.
I also came here to comment on my concern about this. Only big players with significant legal resources like Google/Microsoft are going to be able to compete once you let this element start to creep in.
You also end up with a growing group of individuals whose livelihood is tied to whatever the regulatory regime is. If you are paying people to write new laws and regulate things, they are going to write ever more regulations. Whether or not this is intentionally malicious doesn't matter; you end up with a bureaucracy that only stops after a revolution or significant political change.
Some of these people are directly employed by the government, others are tied to (government funded) non-profits. They may have good intentions, but it is the end results that matter. Ultimately, they will not speak against the self-interest of their own group.
From the standpoint of software development, think of it as a codebase you can't refactor but has an endless stream of new business logic which may or may not conform well with the database structure. Eventually it just stops working.
To speak about Google/Microsoft/Big Tech in particular, it isn't just legal resources - it is overall resources. Doing basic things gets a lot more expensive and slow.
This and other proposed legislation is attempting to hit the ball out of the park on the first pitch. I feel it would be a lot more sensible and effective to legislate clear and present harms, such as holding developing firms liable for deep-fake technology if used for identity theft for the purpose of fraud.
>Yes, if the Civic had a feature that made it easier to hit you, or lacked a reasonable feature that would have prevented it from hitting you.
Ah, but some cars today have automatic braking, so can I sue the manufacturer for not including one? Maybe a Toyota would have seen the pedestrian; is it reasonable to assume a Honda should have as well? Since this is a safety issue, why did Honda allow the car to even start without an up-to-date pedestrian detection system?
That one car has automatic braking means you can sue all the others for not having it. Depending on why the others don't have it you might or might not win, but once the technology exists the courts will ask why the others didn't adopt it. (Sometimes the courts will accept that the patent on the technology was too expensive to license, and sometimes the technology can't be put on this car for technical reasons - see a lawyer.)
As I said down-thread a bit... the issue with your car analogy is that we force everyone to get a license and register their car before they can drive around. Do you want to have to get a license and register your AI model before you're allowed to start generating images or text? If so, maybe that is a solution. But I doubt anyone would accept that sort of system. So, saying "you can't sue Honda if someone drives into you," while true, doesn't really get us anywhere in addressing the issues with AI.
>So, saying "you can't sue Honda if someone drives into you," while true, doesn't really get us anywhere in addressing the issues with AI.
I'm not trying to address any perceived 'issues' with AI here, I'm pointing out the flaw in holding the developer/manufacturer liable for what an end user does.
Also, I could switch the analogy to planning a robbery over WhatsApp, hacking into a bank using 'penetration testing tools', or even just Windows itself for allowing users to run any software they want. Or maybe Windows allows piracy by not scanning every file against a hash of known pirated content.
You can make up a million scenarios of end users misusing products, I'm sorry you don't like the car one.
With cars we have a metric shit-tonne of regulations so that manufacturers can be relieved of some liability.
Let's do the same for A.I., right? How about you reply with the regulations that A.I. companies face today that are equivalent to what car companies face. I'll check back for your answers. If there are any gaps, then let's get to work on that legislation.
1. *Fuel Economy Standards (Corporate Average Fuel Economy, or CAFE)*: Auto manufacturers are required to meet specific fuel efficiency targets for their fleet of vehicles. These standards aim to reduce greenhouse gas emissions and promote fuel-efficient technologies.
2. *Emissions Standards*: The Environmental Protection Agency (EPA) sets emissions limits for pollutants such as nitrogen oxides (NOx), carbon monoxide (CO), and hydrocarbons (HC). Compliance with these standards ensures cleaner air and reduced health risks.
3. *Safety Regulations (National Highway Traffic Safety Administration, or NHTSA)*: Auto manufacturers must adhere to safety standards related to crashworthiness, occupant protection, airbags, seat belts, and child safety. These regulations help prevent injuries and fatalities.
4. *Recall Requirements*: Auto manufacturers are obligated to promptly address safety defects by issuing recalls. The NHTSA oversees recall processes to protect consumers from faulty components or design flaws.
5. *Consumer Protection Laws*: Regulations ensure transparency in advertising, warranties, and pricing. Auto manufacturers must provide accurate information to consumers and address any deceptive practices.
6. *Clean Air Act*: This federal law regulates emissions from vehicles and sets emission standards for pollutants. Compliance with these standards is crucial for environmental protection.
7. *Corporate Average Emission Standards (CAES)*: Similar to CAFE, CAES focuses on reducing greenhouse gas emissions. Auto manufacturers must meet specific emission targets across their fleet.
(I'm sure the list goes on a good bit longer but I feel like this is enough for now.)
Actually I take it all back, the car is a really good model for how we should handle AI safety.
With cars, we let most people use some very dangerous but also very useful tools. Our approach, as a society, to making those tools safe is multi-layered. We require driver's ed and license drivers to make sure they know how to be safe. We register cars as a tool to trace ownership. We have rules of the road that apply to drivers. We have safety rules that apply to manufacturers (and limit what they are allowed to let those tools do). If a user continues to break the rules, we revoke their license. If the manufacturer breaks the rules, we make them do a recall.
I actually agree with you 100%, this is probably a good way to think about regulating AI. Some rules apply to individual users. Some rules apply to the makers of the tools. We can come together as a society and determine where we want those lines to be. Let's do it.
If their Civic's brakes were poorly designed or implemented, then yes, Honda should be liable. Then we get into the definition of 'poorly' - in what distance and time should the car stop? - and then we need some sophisticated regulation.
The analogue of someone using deep-fakes for fraud is for someone to purposefully hit a pedestrian with their car. Should Honda be held liable because someone tried to use their car as a weapon? The classical form of this argument is if a kitchen knife manufacturer should be held liable if someone used their knives for homicide.
This analogy is strained, because when it comes to motor vehicles, aside from the concept of "street legal" cars that limit what you can do with the vehicle, we also have cops that patrol the streets and cameras that can catch people breaking the rules based on license plate. Theoretically you can't drive around without being registered.
What's the equivalent of that for AI? Should there be a watermark so police can trace an image back to a particular person's software? If that isn't acceptable (and I don't think it would be), how do we prevent people from producing deep fakes? At the distribution level? These are hard problems, and I don't think the car analogy really gets us anywhere.
Yes, that's why I offered the kitchen knife example instead. Cars are also a problematic analogy, because even though some people still consider their operation to be fully controlled by the driver and not the manufacturer via their software, that's apparently becoming less the case.
> If that isn't acceptable (and I don't think it would be), how do we prevent people from producing deep fakes?
You don't. The problem isn't producing deepfakes. The problem is committing fraud, regardless of the tools used. Someone using deepfakes to e.g. hide facial disfigurement from their employer isn't someone I mind using deepfakes.
> Someone using deepfakes to e.g. hide facial disfigurement from their employer isn't someone I mind using deepfakes.
I agree here. But what about the harder questions? Do you think deepfake porn of celebrities should be allowed? What about deepfake porn of an unpopular student at the local high school?
If these aren't allowed, where is the best place to prevent them, but still has minimal impact on the allowed uses? At the root level of the software capable of producing them (what seems to be proposed in TFA)? At the user level (car analogy)? At the distribution level (copyright-style)? I don't know the answer to these questions, but I think we should all be talking and thinking about them.
> Do you think deepfake porn of celebrities should be allowed? What about deepfake porn of an unpopular student at the local high school?
Should be handled by impersonation/defamation laws. For celebrities, perhaps it may be handled by copyright. That would allow them to license their likeness under their own conditions.
> If these aren't allowed, where is the best place to prevent them, but still has minimal impact on the allowed uses?
By enforcing laws against the bad behaviors themselves and not trying to come up with convoluted regulations on tools just because of their potential to be used badly.
> At the root level of the software capable of producing them (what seems to be proposed in TFA)?
> At the distribution level (copyright-style)?
You'd just be increasing the costs of producing software and distribution means (thinking of stuff like YouTube). It's just setting up already powerful companies to become even more powerful by raising the bar on what potential competition must be ready for from the get-go.
> not trying to come up with convoluted regulations on tools just because of their potential to be used badly
and
> You'd just be increasing the costs of producing software and distribution means (thinking of stuff like YouTube). It's just setting up already powerful companies to become even more powerful by raising the bar on what potential competition must be ready for from the get-go.
These arguments could be made against any "custom" regulatory scheme like what we have for drugs, cars, airplanes, etc. But sometimes the unique harms presented by certain classes of products require unique regulatory schemes.
Maybe you're right (I hope you are) and the potential harms of AI are not really significant enough to warrant any special regulation. But I don't think that is _obviously_ the case, and I would be careful when it comes to talking with normies about this stuff - AI does seem to be really scary, and hearing a techie hand-wave their concerns over technology they don't understand has the potential to make it worse. Good luck out there buddy.
In that case, I think you have a point. However, consider these situations:
* Honda made their car poorly and there are sharp edges at the fenders, and the driver purposely used those to injure someone. I think Honda should still have some liability; their poor construction resulted in extra injury, regardless of the application.
* Honda intentionally or recklessly built the car in a way that would serve as a useful tool for murder, in ways that served no worthwhile purpose and could have been secured against. I don't know the law exactly, but I expect Honda would be liable, and IMHO that would be absolutely right.
Still, if Honda builds a safe car and someone simply chooses to use its mass x acceleration to kill someone, then I wouldn't hold Honda liable.
If Honda says that the Civic is good for running into people, then yes - as that is the clear purpose. Or if Honda says you don't have to worry because it is not possible to hit someone - because they promised they took care of that.
Note that courts weigh advertising over warning labels and the manual. Which is why many car ads have the text "professional driver on closed track" on screen - to make it clear that they think the car can do it, but most customers can't. Likewise, cutting tools are often advertised with "guards removed for clarity" while clearly not operating (or shown as a cartoon image and not the real tool) - if they advertise someone running the tool without the guard, they are liable.
There is also the concept of foreseeable misuse in the courts. If you can imagine someone doing that, you have to show the courts that it isn't the intended purpose and that you tried to prevent it. If someone does something you didn't think of, then you need to show the court you put a reasonable effort into figuring out all the possible misuses; otherwise it becomes a lack of creativity on your part. Thinking of a misuse doesn't mean you have to make it impossible, just that you have to make a reasonable effort to ensure it doesn't happen (guards, warning labels, training, not selling to some customers - all common tactics to sell something that can be misused without being liable, but even then you can't just put a warning label on something if you could have placed a guard on the danger).
The above just brushes the surface of what the courts deal with (and different countries have different laws). If you need details talk to a lawyer.
I'm suspicious of this bill, but your analogy does more to show how cars are horrifyingly unregulated than push for individual responsibility.
The car allows you to break the law by going 2x faster than the highest speed limit in the nation. A faster car with higher ground clearance does make it easier to fatally run into someone. The Tesla Cybertruck is a killing machine in car form.
Cars are the leading cause of death in the US. Maybe we need to have a similar 'pre-emptive manufacturer-side intervention' bill for cars too.
Software developers and researchers should not be liable for distributing information or code, even if it's used for something illegal, as long as they aren't explicitly promoting the illegal activity and don't have any involvement with it outside of creating the software.
Not only is that consistent with previous decisions, such as those regarding copyright (i.e. torrents are fine, but making a client to torrent movies specifically isn't), but also any other decision would be a violation of the social contract with regard to open-source development.
If a bridge collapses and people are hurt the engineer is at fault and should be held accountable. If software fails and people are hurt the software engineer is at fault and should be held accountable.
This is a poor analogy. A better one would be if a murderer used your bridge to escape. Should you be held liable? What if the bridge were designed to handle highway speeds so he could escape faster?
I'd agree that they shouldn't be liable in that case, since the bridge works the same for everybody (no matter why they're driving over it) and is working as designed/intended. It's really only the idea that developers shouldn't be liable
for their code as long as they aren't explicitly promoting illegal activity and don't have any involvement with it outside of creating the software that I take issue with.
General AI tools also work the same for all users. It's not as if the average AI company is optimizing for celebrity deepfake nudes or spambots.
I mean, there are really two categories of software:
* Free and/or open source software. In this case, I think there is no good reason to make the developer liable, unless they're promoting illegal use. No person wants to be attacked for giving away something for free. That's what the LICENSE is for.
* Commercial/paid software. In this case, it is reasonable to argue that companies should be liable if end users are harmed by the software. For paid software especially, disclaimers cannot be absolute.
But I do not think it is acceptable to hold developers liable for second-order effects - i.e., a user doing something illegal with the software and harming a third party - unless it was obvious to them that the user was going to do something illegal.
If they are knowingly including large numbers of celebrity photos in their training data, slurping it into their models, and doing nothing to block users from abusing what is a clearly foreseeable harm? That's on the companies making the product, not on the users.
If Honda put a big spike on the front of their vehicles because they thought it looked good and would sell more cars, but the spike was good at skewering pedestrians, they'd be at fault too. It wouldn't matter that their designers thought the spike was sexy and would sell more cars. You can't make something you know to be dangerous and expect to sell it to the public without being regulated.
Want to avoid the regulation? Don't steal a bunch of celebrity photos and provide your users with a tool that creates celebrity porn deepfakes on demand.
This isn't controversial. Go to Microsoft's AI chatbot today and try to get it to create a naked image of Taylor Swift. Microsoft has spent non-trivial engineering resources making that fail. Not doing that work is irresponsible and likely to lead to a lawsuit that may or may not be winnable, but that Microsoft and others clearly want to avoid.
Counterpoint: tons of tools are dangerous yet are still sold without much if any regulation. Knives are dangerous, but you don't need an ID to buy one from the store. We sell dangerous products all the time! We just put warnings and disclaimers on them (which AI models tend to come with).
That said, I dispute the idea that these models are "dangerous" in the first place. A box that generates texts and images is not even remotely as dangerous as a sharp spike strapped to a car. Such a comparison is hyperbolic.
People act like these models are going to be the end of us when they're literally just "instant Photoshop." A dangerous model would be one designed to run a military drone or automatic weapons, not a random text-and-image machine.
All that aside, the deepfake issue has nothing to do with the model datasets including celebrity photos (in fact, it would work fine without any of them). And no, downloading public photos is not stealing either.
the original comment explicitly said "holding developing firms ..." - so this is not about software developers, it is about corporations. The moment you start to sell stuff is the moment you become liable.
HN doesn't know how to make that distinction any more. It's so overrun with corporate bootlickers who think the software engineers ARE the company and the company IS the software engineers. I presume it's just a bunch of temporarily embarrassed billionaires planning for the future, but it's a shame that a once hacker-friendly forum is now mostly focused on compensation maximization and defending trillion dollar corporations.
There are legal purposes for generative AI tools and even deepfakes, so there should be no issues with the tools themselves.
Obviously if a site promotes "download this tool to generate infinite nude pictures of celebrities", then that is illegal, since that particular tool was only developed for illegal uses.
There are legal purposes for cars and guns and we regulate the hell out of them because there are also plenty of not-legal purposes and even just the potential for accidents. When A.I. is as heavily regulated as cars, we can revisit the "it's just a tool" argument.
> I feel it would be a lot more sensible and effective to legislate clear and present harms, such as holding developing firms liable for deep-fake technology if used for identity theft for the purpose of fraud.
s/deep-fake/photoshop
Deepfakes are simply more convenient photo/video/audio editing that has been around for decades[1], and we don't really need new legislation to deal with them. Fraud/defamation/etc, the actual harmful aspects of what can be accomplished with deepfakes, don't need any new updates to handle the technology. If we're going to hobble new technologies, we may as well go back and hold Adobe responsible for all the shady things people have done with Photoshop, and video/audio editing suites for all the deceptive clips people have spliced together.
I vaguely recall seeing some fairly convincing B&W Soviet-era photos (I think they had Stalin in them) where people were removed and other people moved around to fill the gap. And document forgery for the purposes of fraud and espionage has of course been around for centuries.
But I think the issue is less the capability itself, and more that companies will make it too easy (trivial, actually) for anyone to commit mischief. The ability to mass-manipulate images on command is no longer restricted to the General Secretary of the USSR.
That doesn't necessarily mean regulation is required, though--plenty of modern technologies make it very easy to commit crimes, but only some of them require special rules.
I understood the bill to explicitly not target misuse of the AI (from the article: "Odd that ‘a model autonomously engaging in a sustained sequence of unsafe behavior’ only counts as an ‘AI safety incident’ if it is not ‘at the request of a user.’ If a user requests that, aren’t you supposed to ensure the model doesn’t do it? Sounds to me like a safety incident."). This seems to be entirely targeted at potential risk from a rogue AI. What regulation would you propose to address that risk?
The author's entire section "What is a Covered Model Here?" kind of sums up one of the main issues with the bill, even as they themselves say how clear the bill is.
'''
(1) The artificial intelligence model was trained using a quantity of computing power greater than 10^26 integer or floating-point operations in 2024, or a model that could reasonably be expected to have similar performance on benchmarks commonly used to quantify the performance of state-of-the-art foundation models, as determined by industry best practices and relevant standard setting organizations.
(2) The artificial intelligence model has capability below the relevant threshold on a specific benchmark but is of otherwise similar general capability.
'''
They then go on to say:
>Under this definition, if no one was actively gaming benchmarks, at most three existing models would plausibly qualify for this definition: GPT-4, Gemini Ultra and Claude. I am not even sure about Claude.
Well, how did they draw this conclusion? They decided that other models that do well are simply gaming benchmarks, and can therefore be ignored for purposes of the provision?
>Um, no, because the open model weights models do not remotely reach the performance level of OpenAI?
To me the author is simply incredibly biased and not well versed in other models. Is chatgpt4 better? Yes. Is Mistral8 comparable? I'd easily say yes. Using it just once would lead most to a similar conclusion; it's only after throwing word problems and tricks at it that you can really see the differences.
And these kinds of laws open to interpretation are always an issue, because a responsible company has to err on the side of caution. If your model is often helpful, but only 80% as often as chatgpt, is that comparable? If someone released a new benchmark tomorrow, and your model actually beats out chatgpt, does your model all of a sudden become covered?
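For a rough sense of what the bill's 10^26 FLOP threshold in subsection (1) means, here's a hedged back-of-envelope sketch using the common ~6·N·D approximation for training compute (N parameters, D training tokens). The model sizes and token counts below are illustrative assumptions, not figures from the bill or from any lab:

    # Back-of-envelope training compute via the common C ~= 6 * N * D rule of
    # thumb (N = parameters, D = training tokens). Counts below are illustrative
    # guesses, not official figures for any model.
    def training_flops(params, tokens):
        return 6 * params * tokens

    THRESHOLD = 1e26  # the bill's covered-model compute threshold

    examples = {
        "70B params, 15T tokens (open-weights scale)": training_flops(70e9, 15e12),
        "1.8T params, 13T tokens (rumored frontier scale)": training_flops(1.8e12, 13e12),
    }

    for name, flops in examples.items():
        print(f"{name}: {flops:.1e} FLOPs, covered: {flops > THRESHOLD}")
    # ~6.3e24 FLOPs -> not covered; ~1.4e26 FLOPs -> covered

By this crude measure only the very largest training runs clear the compute prong, which is why the "similar performance on benchmarks" clause ends up doing most of the work - and that's exactly the part that's open to interpretation.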
I'm not even sure the covered model description only applies to LLMs. Is Segment Anything a covered model? The paper self-describes as a "foundation" model, and it presents SoTA results on various benchmarks. It surely falls under subsection (b)'s incredibly broad definition of an AI model, since it "makes predictions".
It sounds like you can't make a "positive safety determination" if you're trying to get SoTA performance on any benchmark, so is this bill asking Meta to implement a procedure to enact a "full shutdown" of image segmentation models?
It feels like this section of the bill was written by someone who once read a LinkedIn article about AI, but isn't really sure what all the terms mean.
The author believes the CalCompute model is a terrible idea. I strongly disagree! A lack of access to appropriate infrastructure (due to cost) is one of the things that smaller researchers talk about a lot. It leads to a situation where the only types of people training large models are the types of people at large institutions. Giving more researchers an opportunity to train large models could bring more variety of thought in our exploration of AI safety and general development.
As an aside, I called my local library and a real person immediately answered the phone. Public infrastructure can have unexpected benefits over commercial offerings, and I’m all for it.
What's the problem with this, it seems like the best idea in this bill:
> "Section 5 11547.7 is for the CalCompute public cloud computing cluster. This seems like a terrible idea, there is no reason for public involvement here, also there is no stated or allocated budget. Assuming it is small, it does not much matter."
There are many solid arguments for state funding of open-source programming efforts, the only thing to look out for is that this effort doesn't follow the University of California model of 'public private partnerships' in which taxpayer funds are used to generate IP which is then exclusively licensed to private interests.
> There are many solid arguments for state funding of open-source programming efforts
Perhaps there are solid arguments for that if your goal is to put open-source projects under the de facto control of political institutions. But for those of us who prefer for FOSS to continue on in a decentralized, mostly autonomous fashion, rather than have it be overwhelmed by rent-seekers looking for state subsidies, or be used as a pawn to advance various factions' political agendas, those arguments don't seem quite so solid.
Not hugely, since NASA is an organization with a very narrowly tailored scope, and it's participating in FOSS projects as a means to the end of facilitating its space-exploration mission.
FOSS is incidental to NASA's purpose: they use and modify code, and contribute their work back upstream, but aren't engaged in software development for public consumption as a primary activity.
OTOH, I'd be extremely wary of a political organization founded for the specific purpose of funding/influencing the overall FOSS ecosystem.
That doesn't seem to be state funding of open source programming efforts though. It addresses one aspect: the cost of compute when training models. But it doesn't provide incentives for people to use it and strike out on their own, abandoning the huge paycheck that the big AI research tech company gives them.
This is a “sounds good, but devil is in the details” type situation, and the details typically don’t work out all that well because the initiative effectively assumes good intent from the ecosystem, when we have seen time and time again that there are enough hostile actors with enough creativity to spoil it.
I don't think there is any benefit, and there is massive cost to moving for any company, especially one that is built on a highly specialized labor force that doesn't want to move. At least, that's my best reading of this article's interpretation of the law's impact.
> there is massive cost to moving for any company, especially one that is built on a highly specialized labor force that doesn't want to move.
California is banking way too hard on people who want to work for companies that do not want to pay them enough to live there, do not want to pay them to work remotely, actively campaign against building more housing, and actively support immigration policies that continue to push down the wages of these individuals as well as make this "specialized" labor force less "specialized."
You would assume that there's a limit to what both the workers and the businesses are willing to put up with.
This is for the employees that do not get paid enough to live in California, sure.
But those employees are not the ones that caused OpenAI to start and headquarter in San Francisco. It started there because of the employees who do get paid enough to live in the state comfortably, and who benefit massively from the aggregation of capital and experience in the Bay Area.
Perhaps somebody else can do the same elsewhere! Which would be interesting to see.
California treats anybody with less than several million in net worth and less than $400k/year in household income absolutely terribly. But that's a far bigger impact than this proposed AI legislation.
If the company just keeps paying the Californian salary but offers relocation to e.g. Texas, it may be a significant direct benefit for the employees, worth the move. Just compare:
Tesla only moved their headquarters, it was a PR move.
The people leaving California don't really want to leave, they are just being forced out by high costs. Those who have high incomes and can afford housing, like all those OpenAI employees, are staying.
OpenAI isn't looking for just another person that picked up AI on a whim, they are looking for people that are driving the field forward so that OpenAI stays at the very forefront. These sorts of people could learn quantum physics easily, and many probably have in their undergraduate (or graduate) degrees.
No way. The people at the forefront of AI, most of them will have a much harder time learning quantum physics. Yes ML is hard but it's nowhere near QE, theoretical math or physics.
The reason is simply that ML is incomplete and thus right now is more art than science. We only understand these neural networks through an analogy - the curve-fitting analogy. It's nowhere near something like general relativity, where even an analogy doesn't convey full understanding.
Does Tesla pay any state or federal taxes? From my understanding, their federal tax bill at least was $0 in 2022. If the company was operating in CA and paying little or no taxes, it may have been a net drain on the state.
Businesses threaten this, but if they really want no regulation there are countries that have none at all. It turns out that a well-regulated market is a better place to do business. It also attracts smart, talented people who want to live in the best living environment - schools, roads, culture; a place filled with people who want to build a great community - not the best environment for the billionaire founder who cares only for themself and their estate.
Much more wealth is generated, many more industries have been born and thrived, in places like CA and NY than in less-regulated, lower-wage, no/low-tax states. The correlation looks very strong by the eyeball method.
Incorporation location is irrelevant to this discussion. While governance of a Delaware corporation is litigated in Delaware business courts, a Delaware corporation that is headquartered in California, or even does business in California, needs to obey California law. And there are quite a few of them, to put it mildly.
>Allowing social media to grow unchecked without first understanding the risks has had disastrous consequences…
Not gonna argue with that, but I’m having trouble imagining the alternative. Somehow we would have understood the risks without allowing the growth and observing the consequences? That seems unlikely, considering how surprised we all seemed when the consequences occurred.
With AI we seem to have a lot more noise around imagining consequences, but in my mind there’s no reason that would correlate with completeness or accuracy of the predictions. There will be lots of very bad, and lots of very good, consequences that nobody can currently imagine.
I would argue that a more robust regulatory framework around social media could have at least prevented the worst harms. For example, more robust moderation, or at least requiring companies to have responsive liaisons who speak the local language for territories they operate in, could have mitigated/prevented the serious harms we've seen Facebook and WhatsApp cause in multiple countries, e.g. Myanmar.
This is of course assuming we'd get the kind of regulations we need, instead of useless security theater or regulations so onerous that they prevent products from developing at all. But it does seem like it would have been possible to prevent the worst of it, since many of these cases boil down to 'Facebook (or other big company) didn't care and didn't bother to try, and then bad things happened'
What is important is what we can imagine today. What we discover or imagine tomorrow needs to be handled in tomorrow's updates, but isn't a problem today. The courts require you to do due diligence in imagining and mitigating the problems we can foresee today. If we invent something next year, you will have to redo that for products you make next year - but you don't have to retrofit it to what you did this year.
> This seems like a terrible idea, there is no reason for public involvement here, also there is no stated or allocated budget.
While I dunno where I come down yet on this legislation, it seems truly bizarre to object to the CalCompute cluster. IMO, the biggest AI threat we face is from big tech being the only ones capable of creating new models. Right now, companies like Meta are aligned with open source AI, but that could change at the drop of a hat. It makes sense to preserve state capacity to level the playing field against attempts at monopoly or other forms of overreach. Also, a lot of great work is coming out of UC system labs, so why shouldn’t we invest more in making them successful?
"While I dunno where I come down yet on this legislation, it seems truly bizarre to object to the CalCompute cluster."
The CalCompute cluster is what jumped out at me.
The danger is that this will be a huge subsidy to entrenched UC/CA contractors that are in place solely to navigate - and exclude others from - state procurement mechanisms.
In fact, if you were really cynical you might think that the entire purpose of this bill is the CalCompute cluster and everything else is just scaffolding to enable a boondoggle.
I am not asserting this - I am just pointing to a possible interpretation.
Who's going to build the cluster? Who's going to operate it? Who's going to decide how time/access gets allocated? Are results of compute going to be in the public domain?
Given the glacial speed of government projects, will the cluster be using obsolete tech by the time it's finally up and running?
I absolutely agree, but I'm not sure something like CalCompute is going to do a good job counteracting that. Is the major limitation to others creating new models the availability and cost of compute, or is it the fact that big tech companies can afford to compensate AI researchers well enough that they won't work on open projects?
To me it feels like the latter, but admittedly I don't know for sure.
> With AI, we have the opportunity to apply the hard lessons learned over the past two decades. Allowing social media to grow unchecked without first understanding the risks has had disastrous consequences, and we should take reasonable precautions this time around.
> Critical harm is either mass casualties or 500 million in damage, or comparable.
So the whole regulatory premise is based on missing the boat on social media, but how many examples are there of social media causing this kind of "critical harm"?
All of these seem historically bizarre, and they make me deeply suspicious of the people & processes driving this legislation. I expected California to legislate early, but not like this.
1. What I expected:
* AI Indemnification rules for platform providers building on ideas like net neutrality on one end, and content provider protections like DMCA and copyright on the other
* Consumer AI protection rules building on ideas like the Equal Credit Opportunity Act and GDPR
2. What we got: Something about KYC and rules on cases about $500M+ incidents and deaths
“If Congress at some point is able to pass a strong pro-innovation, pro-safety AI law. . . . "
With AI this seems like an untenable position. We already have companies that are pushing the bounds with AI constantly and ever further, while telling us we shouldn't worry about the future dystopian implications.
Trying to rein these people and companies in with government regulation at this point doesn't seem feasible.
I'd pierce the corporate veil and make executives and board members personally responsible for the damages of their models. Train whatever, but if you host a model and that model's output results in significant damage being done, you have to own it.
Scott Wiener has never endorsed anything that expands personal or civil liberties in any possible way. When I read his name attached to any law, my brain just automatically requires double the amount of persuasion that it'll do any social good whatsoever.
Yes. What needs to be regulated are systems that make decisions that affect people. The EU Data Protection Regulations already cover that, using the term "automated system".
It's not AI that's the problem. It's power over others that's the problem. The trouble is, regulating this means regulating the power of businesses to mess up people's lives.
Worries about people generating porn with AI are overblown. That already works.[1] It's technically impressive, but kind of emotionless. Nobody seems to be worried about it.
But does AI itself not need some checks and balances on it? Reasonable or not, I think the big worry (especially among laypeople) is Terminator-style AI that wants to destroy humanity. Making a law that says AI can't be used to determine if people are allowed to take out loans doesn't stop that.
As is, the bill is not fair to small players. It encourages ventures to use existing models by big players to escape the regulatory burden of "safety", leading to a state-encouraged oligopoly. It also uses compute and benchmarks as the measuring stick for regulation, which can be easily gamed - and already is being gamed. Instead, the application plugged into the AI needs to be regulated. Does the application run afoul of the safety goals of the bill? Then restrict the application, not the model itself.
Oracle will need KYC of customers to determine if you are good and safe enough.
And make sure there are no future competitors to themselves (and maybe their customers as well!).