
In the scenario where the current AI boom takes us all the way to AGI in the next decade, IMO there is little downside. The risks are very large, OpenAI/Sam have expertise, and their novel corporate structure, while far from completely removing self-centered motives, sounds better than a typical VC-funded startup that has to turn a huge profit in X years.

In the scenario where the current wave fizzles out and we have another AI winter, one risk is that we'll be left with a big regulatory apparatus that makes the next wave of innovations, the one that might actually get us all the way to an aligned-AGI utopia, near-impossible. And that regulatory apparatus will have been shaped by an org with ties to the current AI wave (imagine the Department of AI Safety were currently staffed by people trained in and invested in Expert Systems or some other old-school paradigm).



When we have 50% of AI engineers saying there's at least a 10% chance this technology could cause our extinction, it's completely laughable to think it can continue to be developed without a regulatory framework. I don't think OpenAI should get to decide what that framework is, but if this stuff is even 20% as dangerous as a lot of people in the field are saying it is, it obviously needs to be regulated.


What are the scenarios in which this would cause our extinction, and how would regulation prevent those scenarios?


You do realise it's possible to unplug something, right?



