
My worry is that as we wire non-superintelligent AI deeper into our society, we make ourselves more and more vulnerable to the case where an AGI actually gets out of control. Pulling the plug on AI may mean causing a lot of chaos and disruption. And who's to say we'll know things are out of control before it's too late? What if the people in charge just get fake reports that everything is fine? What if something smells a bit fishy, but never quite fishy enough to act on? What if the AIs are really convincing, or make deals with people, or exploit corruption?

Not just that, but it may be like fossil fuel dependency: the more people's livelihoods are intertwined with and dependent on AI, the harder it is to pull the plug. If we need to stop (and I believe we should), it may be easier to do so now, before that happens. We could just focus on getting use out of the generative AI and narrow AI we've already created in areas like medicine, deal with the massive societal changes they'll bring, and work on fixing real social problems, which I think are mostly things tech can't solve.
