63 Percent of Americans want regulation to actively prevent superintelligent AI (vox.com)
23 points by atothei 8 months ago | 21 comments



> And getting to it first could help the US maintain an edge over China; in a logic reminiscent of a nuclear weapons race, it’s better for “us” to have it than “them,” the argument goes.

This is interesting to me. The last two big national security "races," nuclear and space, were, as far as I know, funded primarily by the government (even though the work was carried out by the private sector).

The "AGI" race is entirely fueled by private sector. Specifically, it's fueled by private sector companies whose profits are, by and large, driven by advertising, an industry that thrives on manipulating customers to buy things, directly and indirectly.

This is my biggest problem with it. The tech _can be_ helpful, but the incentives are perverse.


It is regulated by the power and resources that are available.


The sun is a good source of power.


We are not a Kardashev Type 2 civilization, barely even Type 1.
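For a rough sense of scale, here's a minimal back-of-the-envelope sketch using Sagan's interpolation of the Kardashev scale, K = (log10(P) - 6) / 10 with P in watts; the ~2e13 W figure for current global power consumption is an assumed ballpark, not a number from this thread:

    import math

    # Assumed ballpark for current worldwide power consumption, in watts.
    world_power_watts = 2e13

    # Sagan's continuous version of the Kardashev scale:
    # Type 1 is normalized to 1e16 W, Type 2 to 1e26 W.
    kardashev = (math.log10(world_power_watts) - 6) / 10

    print(f"Kardashev rating: {kardashev:.2f}")  # ~0.73, still short of Type 1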


Experts Agree Giant, Bioengineered Crabs Pose No Threat



If America doesn't create it, China will.


> If America doesn't create it, China will.

We must create a Torment Nexus, because if we don't, our adversaries will!


Isn't this how it has been with every arms race?


The US (and Europe I'm guessing) banned human germ-line engineering in the 1970s, and so far the tech has stayed stopped worldwide. The Chinese scientist who proudly announced a success with the tech (about 10 years ago) was jailed by the Chinese government.

So, no, if the US and UK ban large training runs, there's a very good chance the rest of the world will follow.

What the Chinese government wants more than anything is a stable domestic political situation: they want to avoid a revolution, and they want to avoid the country's breaking up into 2 or 3 countries. And just as they perceived (correctly, IMHO) that the internet has a lot of potential to cause political instability and responded by vigorously regulating it, they're likely to vigorously regulate AI (while using AI to help them surveil their population). Facebook has made it more likely that China will vigorously regulate AI by releasing a potent model under open-source-like terms, because that release demonstrates to Beijing exactly how advances in AI can put power in the hands of the average Chinese citizen, which, again, the Chinese government does not want.

BTW, there's no need for you to stop running your open-source model on your Apple silicon or 4090: if those models were capable of causing significant problems for people, they would've done so already, so stopping their distribution and use is not on the agenda of the people trying to stop "foundational" progress in AI.


> The US (and Europe I'm guessing) banned human germ-line engineering in the 1970s, and so far the tech has stayed stopped worldwide.

And so far as we know the tech has stayed stopped worldwide. (With at least one exception, as you point out. But that exception was, apparently, not officially approved.)

Do you really think North Korea won't do this if they think they see some benefit?


Secret projects to continue advancing AI are much less of a danger than the current situation, in which tens of thousands of AI researchers worldwide are in constant communication with each other, with no need to hide those communications from the public or from any government.

Advancing the current publicly known state of the art to the point where AI becomes potent enough to badly bite us (e.g., to cause human extinction) is probably difficult enough that it is beyond Pyongyang's power, or even Moscow's or Beijing's, especially if the government has to do it under the constraint of secrecy. It probably requires the worldwide community of researchers continuing to collaborate freely to reach the dubious "achievement" of creating an AI model that is so cognitively capable that once deployed, no human army, no human institution, would be able to stop it.


> ...especially if the government has to do it under the constraint of secrecy. It probably requires the worldwide community of researchers continuing to collaborate freely to reach the dubious "achievement" of creating an AI model that is so cognitively capable that once deployed, no human army, no human institution, would be able to stop it.

And stopping now may help stall further advances (if they're even possible): current models already provide enough capability to pollute the potential training data going forward. If the public internet becomes a "dead internet" or a "zombie internet," it'll be much harder to economically assemble good, massive datasets.

All the AI hype (and its implications) is bringing me around to the idea of viewing spam (of all things) as a moral good.


The real threat is AI arguing/competing with itself and wasting 90% of the world's power.


What's the argument? That diatoms are better than transistors?


> Major AI companies are racing to build superintelligent AI — for the benefit of you and me, they say. But did they ever pause to ask whether we actually want that?

> Americans, by and large, don’t want it.

Silicon Valley is the theory that geeks who've misread too much sci-fi know what they want, and we deserve to get it good and hard.


In other news, 63% of Americans should stop watching so many sci-fi movies.


Post the poll question or don’t post anything at all.

I’ve seen plenty of polls that get the answer they want through the phrasing of the questions. Unless I can see the actual poll questions, I don’t trust this at all. On top of that, the idea that the American general public has any real idea of what “AI” means is somewhat laughable.


I'd say AI is already smarter than most humans...


34 percent of Americans reject evolution entirely and believe humans have existed in their present form for thousands or tens of thousands of years.


Humans have been anatomically modern for hundreds of thousands of years. [0]

[0] https://en.wikipedia.org/wiki/Early_modern_human



