> The pause to consider whether we should do <action> before we actually do <action>.

Unless there has been an effective gatekeeper, that has almost never happened in history. With nuclear weapons, the gatekeeper is that they're easy to detect. With genetics, there's pretty universal revulsion, to the point that a large portion of most populations are concerned about it.

But with AI, to most people it's just software. And it pretty much is: if you want a universal ban on AI, you're really asking for authoritarian-type controls on it.

> But with AI, to most people it's just software.

Practical AI involves cutting-edge hardware, which is produced in relatively few places. AI that runs on an ordinary CPU won't be a danger to anyone any time soon.

Also, nobody's asking for a universal ban on AI. People are asking for an upper bound on AI capabilities (e.g. on the number of nodes or tokens) until we have widely proven techniques for AI alignment (or, in other words, until we can reliably tell an AI to do something and have it do that thing, not something entirely different and dangerous).


Right, and when I was a kid, computers were things that filled entire office floors. If your 'any time soon' is only 30-40 years away, I could still be around then.

In addition, you're just asking for limits on compute, which ain't gonna go over well. How do you know whether a cluster is running a daily weather model or training an AI? And how do you even measure capability when we keep coming up with new architectures, like transformers, that are X times more efficient?

What you want from AI cannot happen. If it's 100% predictable, it's a calculation. If it's a generalization function working from incomplete information (something humans do too), it will have unpredictable modes.
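
To make that concrete, here's a minimal sketch (plain Python with numpy; the example is mine and purely illustrative, not anyone's actual system). A fixed formula is a calculation: same input, same output, forever. A model fit to incomplete data has to generalize, and outside the data it saw it can be confidently, arbitrarily wrong:

    import numpy as np

    # A calculation: 100% predictable, same input always gives the same output.
    def area(r: float) -> float:
        return np.pi * r * r

    # A generalizer: fit a degree-9 polynomial to 10 noisy samples of sin(x)
    # on [0, 3], then query it inside and outside the range it has seen.
    rng = np.random.default_rng(0)
    xs = np.linspace(0.0, 3.0, 10)
    ys = np.sin(xs) + rng.normal(0.0, 0.05, xs.shape)
    coeffs = np.polyfit(xs, ys, deg=9)

    print(np.polyval(coeffs, 1.5))  # in-distribution: close to sin(1.5) ~ 0.997
    print(np.polyval(coeffs, 6.0))  # out-of-distribution: nothing like sin(6.0) ~ -0.28

Both functions are deterministic machine code, but only the first is predictable in the sense you'd need for a guarantee: the second's behavior on inputs it never saw can't be bounded just by inspecting its training data.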
