Dean Ball on the new California AI bill (marginalrevolution.com)
15 points by paulbaumgart 20 days ago | 10 comments



"If, back in the 70s, Steve Jobs and Steve Wozniak had to guarantee that their computers would not be used for serious crimes, would they have been willing to sign with potential jail time on the line? Would they have even bothered to found Apple?"

Back in the 70s, computer crime was not technically possible, or even really feasible, in the way we see it today. We didn't have an internet until the 80s.

What we have with AI is wholesale development, integration, and execution in our everyday lives across numerous platforms and services. It's used in bill pay, litigation, even war. We absolutely do need legislation to protect and secure regular people from it.

This feels like a classic case of wanting to have your cake and eat it too. Either AI is a very powerful tool for humanity, and thus would naturally require a regulatory framework around it to ensure its proper use and application, or it's a buzzword hyped up by SEOs and marketing teams to make people think a handful of big companies that ran out of steam 20 years ago still have the potential to innovate.


Why isn't the usual "regulatory framework" around powerful new technology and products – common-law liability for harms, prohibitions against fraud & various kinds of trespass & damage to established rights, plus the checks of reputational risks and vibrant competition – enough?

Do you think that because personal computers & the internet took a while to develop, states like California missed an opportunity to helpfully regulate them by forcing their makers to attest to their safety before product development (much less public release) gets underway? Should we add that sort of regulation, now?

A new website could be harmful – used for "serious crimes", "bill pay", "litigation", "even war"! Should every new website require filing paperwork guaranteeing its safety with a California Department of Technology division before going live? (There were oppressive regimes that tried to control presses, printers/fax-machines, & websites like this, using "safety" rationales!)

The mere fact a new technology is a "powerful tool for humanity" is not something that must "naturally require" a novel state-bureaucracy-run "regulatory framework around it to ensure its proper use and application".

The state in general, and the State of California in particular, is not our wise cloud-father with the foresight & disinterest to do what's best for us. It's instead a clumsy and often-corrupted tool for solving some common-coordination problems.

States usually do best when addressing a well-understood common history of specific problems & market-failures – rather than improvising new filing requirements against theoretical fears, as here.

Any new "very powerful tool for humanity" deserves the same freedom from prior restraint, & forebearance from premature budens that mostly benefit incumbents and large players, that prior technological innovations enjoyed.


> back in the 70s computer crime was not technically possible

Computers have been used for payroll processing and banking since at least the 1960s.



With the introduction of VisiCalc on the Apple II in 1979, Apple opened itself up to business use: accountants could do the books on its computers. An accountant could then use a computer in service of embezzling a serious amount of money from their employer. Should Apple be held accountable for that?


> An accountant could then use a computer in service of them embezzling a serious amount of money from their employer. Should Apple be held accountable for that?

You joke, as if we didn't do exactly this -- regulate accountants and accounting software (GAAP, DFARS, SOX, PCI DSS, etc.).

And we did the same thing with, say, auto manufacturers and automobiles (FMVSS and CAFE via NHTSA).


Those regulations don't put Intuit on the hook if my accountant embezzles money through QuickBooks, though.


> Should Apple be held accountable for that?

You could ask the same thing about gun manufacturers and shootings. This is a totally normative, political question.

And before you start moving the goalposts: an eye-opening experience for me was considering that while you can dismiss an individual's claims of "harms" as imaginary, what about a huge group of people? Specifically, if the authors unionized (they did) and they say that AI training on their work harms them (they do), does that not make it "real," in a special way, the same special way that a law that comes into being via a popular vote is more "real" than a law made by fiat by a dictator? I am just trying to open your mind past these really basic sentiments and gotchas.

No matter what, AI developers must grapple with popular opinions about AI.


Hopefully this fails; it's idiotically restrictive. Every argument like this was also used against the open internet & encryption back in the day. Bills like this are going to allow China and other adversarial nations to leapfrog the West in the long run. Open-source AI won't stop, it'll just move to places like the UAE. And good luck trying to restrict individuals from using it in the US – code is protected speech.


> Bills like this are going to allow China and other adversarial nations to leapfrog the West in the long run.

I'm not saying you're off base here, and I have never worked in tech in China ... but doesn't the state apparatus have its fingers even deeper in AI development over there? Why is state interference on the part of China not perceived as harmful to progress in the same way that the specter of CA regulation is?



