The solution is to replace the board members with AGI entities, isn't it? You just have to figure out how to incorporate current data into the model in real time. I bet that's an active area of work at OpenAI. It seems to have been a hot discussion topic lately:

https://www.workbyjacob.com/thoughts/from-llm-to-rqm-real-ti...
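(For what it's worth, what "real-time incorporation of current data" usually amounts to in practice today isn't retraining the model at all, it's retrieval-augmented prompting: fetch fresh documents at query time and put them into the context window. A rough sketch below, where fetch_current_data and llm_complete are hypothetical stand-ins for whatever retrieval source and model API you'd actually use:)

    # Sketch of retrieval-augmented prompting: inject freshly fetched data
    # into the prompt instead of retraining the model. fetch_current_data
    # and llm_complete are hypothetical placeholders, not a real API.

    def fetch_current_data(query: str) -> list[str]:
        # Placeholder: in practice this would hit a search index, news
        # feed, database, etc. and return relevant recent snippets.
        return [f"<recent snippet relevant to: {query}>"]

    def build_prompt(query: str, snippets: list[str]) -> str:
        context = "\n\n".join(snippets)
        return (
            "Answer using the context below, which reflects current data.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {query}\nAnswer:"
        )

    def answer_with_current_data(query: str, llm_complete) -> str:
        # llm_complete: any callable that takes a prompt string and
        # returns the model's completion.
        return llm_complete(build_prompt(query, fetch_current_data(query)))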

The real risk is that some government will put the result in charge of their national defense system, aka Skynet, not that kids will ask it how to make illegal drugs. The curious silence on military-industrial applications of LLMs makes me suspect this is part of the OpenAI story... Good plot for a novel, at least.




> The real risk is that some government will put the result in charge of their national defense system, aka Skynet, not that kids will ask it how to make illegal drugs.

These can't possibly be the most realistic failure cases you can imagine, can they? Who cares if "kids" "make illegal drugs"? But yes, if kids can make illegal drugs with this tech, then actual bad actors can make actually dangerous substances with it.

The real risk is manifold and totally unforeseeable, in the same way that a 400 Elo chess player has no conception of "the risks" a 2000 Elo player will exploit to beat them.


Every bad actor who wants to make dangerous substances can find that information in the scientific literature with little difficulty. An LLM, however, probably won't tell you that the most likely outcome for a wannabe chemist cooking up something from an LLM recipe is poisoning themselves.

This generally fits a notion I've heard expressed repeatedly: today's LLMs are most useful to people who already have some domain expertise; they just make things faster and easier. Tomorrow's LLMs are another question, as you imply.



