
You lost me at "While AI regulation is important" - nope, Congress does not need to regulate AI.



I’d argue that sweeping categorical statements like this are at the center of the problem.

People are coalescing into “for” and “against” camps, which makes very little sense given the broad spectrum of technologies and problems summarized in statements like “AI regulation”.

I think it’s a bit like saying “software (should|shouldn't) be regulated”. It’s a position that cannot be defended because the term software is too broad.


They might have lost you. But starting with "Congress shouldn't regulate AI" would lose the senator.

Which one do you think is more important to convince?



"important" does not mean "good." if you are in the field of AI, AI regulation is absolutely important, whether good or bad.


If AI is to be a consumer good (which it already is), it needs to be regulated, at the very least to ensure equal quality for a diverse set of customers and other users. Unregulated, there is a high risk of people being harmed by, e.g., employers and landlords using AI to discriminate, or of being sold an AI solution that isn't as advertised.

If AI is to be used by public institutions, especially law enforcement, we need it regulated in the same manner. A bad AI trained on biased data has the potential to be extremely dangerous in the hands of a cop who is already predisposed to racist behavior.


AI is being used as a consumer good, including to discriminate:

https://www.smh.com.au/national/nsw/maximise-profits-facial-...

AI is being used by law enforcement and public institutions. In fact, so much so that perhaps this is a good link:

https://www.monster.com/jobs/search?q=artificial+intelligenc...

In both cases it's too late to do anything about it; AI is "loose". Oh, and I don't know if you noticed: governments have collectively decided law doesn't apply to them, only to their citizens, and only in a negative way. For instance, just about every country has laws on the books guaranteeing timely emergency care at hospitals, with "timely" defined as within 1 or 2 hours.

Waiting times are 8-10 hours (going up to days), and this is the normal situation now; it's not a New Year's Eve or even Friday-evening thing anymore. You have the "right" to less waiting time, which can only mean the government (the worst hospitals are public ones) should be forced to fix this, spending whatever it takes. And it can be fixed, though at this point you'd have to give physicians and nurses a 50% rise, double the number employed, and 10x the number in training.

Government is simply not doing this, and if one thing's guaranteed, it's that this will keep getting worse, in direct violation of your rights in most states, for the next 10 years minimum, and probably longer.


Post hoc consumer protection is actually quite common. Just think how long cars were on the market before they were regulated. Now we have fuel standards, lead bans, seat belts, crash tests, etc. Even today we are still adding consumer protections to things like airline travel and medicine, even though commercial airliners and laboratory-made drugs have been around for almost a century.


If someone doesn't agree with this: regulate what, exactly?

Does scikit-learn count, or are we just not going to bother defining what we mean by "AI"?

"AI" is whatever congress says it is? That is an absolutely terrible idea.


> nope, Congress does not need to regulate AI.

Not regulating the air quality we breathe for decades turned out amazing for millions of Americans. Yes, let's do the same with AI! What could possibly go wrong?


I think this is a great argument in the opposite direction... atoms matter, information doesn't. A small group of people subjected many others to poisonous matter. That matter affected their bodies, and a causal link could be made.

Even if you really believe that somewhere in the chain of consequences derived from LLMs there could be grave and material damage or other affronts to human dignity, there is almost always a more direct causal link that acts as the thing which makes that damage kinetic and physical. And that’s the proper locus for regulation. Otherwise this is all just a bit reminiscent of banning numbers and research into numbers.

Want to protect people's employment? Just do that! Enshrine it in law. Want to improve the safety of critical infrastructure and make sure it's reliable? Again, just do that! Want to prevent mass surveillance? Do that! Want to protect against a lack of oversight in complex systems allowing for subterfuge by bad actors? Then make regulation about proper standards of oversight and human accountability. AI doesn't obviate human responsibility, and a lack of responsibility on the part of humans who should have been responsible, and who instead cut corners, doesn't mean the blame falls on the tool that cut the corners, but rather on the corner-cutters themselves.


Your argument could just as easily be applied to human cloning, to argue that cloning and genetic engineering for specific desirable traits should not be illegal.

And it isn't a strong argument here, for the same reason it isn't a good argument when used to say we should allow human cloning and just focus on regulating the more direct causal links: non-clone employment loss from mass-produced hyper-intelligent clones, ensuring clones have legal rights, and proper oversight with non-clone human accountability.

Maybe those things could all make ethical human cloning viable. But I think the world coming together and being like "holy shit this is happening too fast. Our institutions aren't ready at all nor will they adapt fast enough. Global ban" was the right call.

It is not impossible that a similar call is appropriate here with AI. I personally dunno what the right call is, but I'm pretty skeptical of any strong claim that it could never be right to outright ban some forms of advanced AI research, just like we did with some forms of advanced genetic engineering research.

This isn't like banning numbers at all. The blame falling on the corner-cutters doesn't mean the right call is always to just tell the blamed not to cut corners. In some cases the right call is instead taking away their corner-cutting tool.

At least until our institutions can catch up.


I get your example about eugenics. I get that the worry is that it would become pervasive due to social pressure and make doing it the dominant position. And that this would passively, gradually strip personhood away from those who didn't receive it. There's a tongue-in-cheek conversation to be had about how people already choose their mating partners this way, and how making it truly outright illegal might not reflect the real processes of reality, but that's a tad too cheeky perhaps.

But even then, that's a linear diffusion: one person, one body mod. I guess you could say that their descendants would proliferate and multiply, so the alteration slowly grows exponentially over the generations... but the FUD I hear from AI decelerationists is that it would be an explosive diffusion of harms, like, as soon as the day after tomorrow. One architect, up to billions of victims, allegedly. Not that I think it's unwise to be compelled to precaution with new and mighty technologies, but what is it that some people are so worried about that they're willing to ban all research, and choke off all the good that has come from it, already? Maybe it's just a symptom of the underlying growing mistrust in the social contract...


I mean, I imagine there are anti-genetic-engineering FUD folks who go so far as to say we should totally ban CRISPR-Cas9. I would caution against over-indexing on the take of only some AI decelerationists.

Totally agree we could be witnessing a growing mistrust in the social contract.


> atoms matter, information doesn't

Algorithmic discrimination already exists, so um, yes, information matters.

Add to that the fact that you're posting on a largely American forum where access to healthcare is largely predicated on insurance, just... imagine AI underwriters. There's no court of appeal for insurance. It matters.


I am literally agreeing with you but in a much more precise way. These are questions of “who gets what stuff”, “who gets which house”, “who gets which heart transplant”, “which human being sits in the big chair at which corporation”, “which file on which server that’s part of the SWIFT network reports that you own how much money”, “which wannabe operator decides their department needs to purchase which fascist predictive policing software”, etc.

Imagine I 1. hooked up a camera feed of a lava lamp to generate some bits and then 2. hooked up the US nuclear first strike network to it. I would be an idiot, but would I be an idiot because of 1. or 2.?

Basically I think it's totally reasonable to hold these two beliefs: 1. there is no reason to fear the LLM; 2. there is every reason to fear the LLM in the hands of those who refuse to think about their actions and the burdens they may impose on others, probably because they will justify the means through some kind of wishy-washy appeal to bad probability theory.

The -p log p that you use to judge the sense of some predicted action is just a model; it's just numbers in RAM. Only when those numbers are converted into destructive social decisions do they become something of consequence.
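
To make that concrete: the score is literally arithmetic on a probability the model emits, inert until someone wires it to a decision. A minimal Python sketch (the action names and probabilities are made up for illustration):

    import math

    # Hypothetical probabilities a model assigns to candidate actions.
    predicted_actions = {
        "approve_claim": 0.70,
        "deny_claim": 0.25,
        "escalate_to_human": 0.05,
    }

    # Surprisal -log(p) per action, and the entropy sum of -p*log(p):
    # just floats in RAM until someone hooks them to a real-world lever.
    for action, p in predicted_actions.items():
        print(f"{action}: p={p:.2f}, surprisal={-math.log(p):.3f} nats")

    entropy = sum(-p * math.log(p) for p in predicted_actions.values())
    print(f"entropy: {entropy:.3f} nats")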

I agree that society is beginning to design all kinds of ornate algorithmic beating sticks to use against the people. The blame lies with the ones choosing to read tea leaves and then using the tea leaves to justify application of whatever Kafkaesque policies they design.


> Add to that the fact that you're posting on a largely American forum where access to healthcare is largely predicated on insurance

Why do so many Americans think universal health care means there is no private insurance? In most countries, insurance is compulsory and tightly regulated. Some, like the Netherlands and France, have public insurance offered by the government. In other places, like Germany, your options are all private, but underprivileged people have access to government subsidies for insurance (Americans do too, to be fair). Get sick in one of these places as an American and you will be handed a bill, and it will still make your head spin. Most places in Europe work like this.

Of course, even in places with nationalized healthcare like the UK, non-residents still have to pay. What makes Germany, NL, and most other European countries different from that system is that if you're a resident without an insurance policy, you will also have to pay a hefty fine. As a UK resident, you are basically auto-enrolled in an invisible "NHS" insurance system. Of course, most who can afford it in the UK still pay for private insurance; the public stuff combines middling quality with generally poor availability.

Americans are actually pretty close to Germany with their healthcare. What makes the US system shitty can be boiled down to two main factors:

- Healthcare networks (and state incorporation laws) make insurance basically useless outside of a small collection of doctors and hospitals, and especially outside your state

- Very little regulation of insurance companies, pharmaceutical companies, or healthcare providers in price-setting

The latter is especially bad. My experience with American health insurance has been that I pay more for much less. Paying $300/month in premiums and still seeing a bill at all is outrageous. AI underwriters won't fix this, yeah, but they aren't going to make it any worse, because the problem is in the legislative system.

> There's no court of appeal for insurance.

No, but you can of course always sue your insurance company for breach of contract if they're wrongfully withholding payment. AI doesn't change this, but AI could make this a viable option for ordinary people by acting as a lawyer. Well, in an ideal world, anyway. The bar association cartels have been very quick to raise their hackles and hiss at the prospect of AI lawyers. Not that they'll do anything to stop AI from replacing most duties of a paralegal, of course. Can't have the average person wielding the power of virtually free, world-class legal services.


America could afford universal healthcare, but it would require convincing people to pay much higher taxes.


You ended up providing examples that involve no matter or atoms: protecting jobs, or oversight of complex systems.

These are policies which are purely imaginary. Only when they get implemented into human law do they gain a grain of substance, but they remain imaginary. Failure to comply can be kinetic, but that is a contingency, not the object (matter :D).

Personally, I see good reasons for having regulations on privacy, intellectual property, filming people in my house's bathroom, NDAs, etc. These subjects are central to the way society works today. At least Western society would be severely affected if these subjects were suddenly a free-for-all.

I am not convinced we need such regulation for AI at this point of technology readiness, but if the social implications create unacceptable imbalances, we can start by regulating in detail. If detailed caveats still do not work, then broader law can come. Which leads to my own theory:

All this turbulence about regulation reflects a mismatch between technological, political, and legal knowledge. Tech people don't know law, nor how it flows from policy. Politicians do not know the tech and have not seen its impacts on society. Naturally there is a pressure gradient from both sides that generates turbulence. The pressure gradient is high because the stakes are high: for technologists, the killing of a promising new field; for politicians, a big majority of their constituency being rendered useless.

Final point: if one sees AI as a means of production which can be monopolised by a few capital-rich actors, we may see a remake of 19th-century inequality. That inequality created one of the most powerful ideologies known: Communism.


Ironically, communism would've had a better chance of success if it had had AI for the centrally planned economy and social controls. Hardcore materialism will play into automation's hands, though.

We're more likely to see a theocratic movement centered on the struggle of human souls vs the soulless simulacra of AI.


> Ironically, communism would've had a better chance of success if it had had AI for the centrally planned economy and social controls. Hardcore materialism will play into automation's hands, though.

Exactly! A friend of mine who is into communist ideology thinks that whichever society taps AI for productive efficiency, and even policy, will become the new hegemon. I have no immediate counterpoint besides the technology not being there yet.

I can definitely imagine LLMs based on political manifestos. A personal conversation with your senator, at any time, about any subject! That is the basic part, though: the politician being augmented by the LLM.

The bad part is a party driven by an LLM or similar political model, where the human guy you see and elect is just a mouthpiece, like in "The Moon Is a Harsh Mistress". Policy would all be algorithmic, and the LLM would provide the interface between the fundamental processing and the mouthpiece.

These developments will likely lead to the conflicts you mention. I am pretty sure there will be a new -ism.



