
> If he is right about that, you are right that it's too late to hide it; if he's wrong, I think the AI architecture and/or training methods we have yet to invent are in the set of things we could usefully sequester.

If it takes software technology that we have already developed outside of secret government labs, it is probably too late to sequester it.

If it takes software technology that has been developed in secret government labs, it's probably too late to sequester the already-public precursors without which independent development of the same technology would be impossible, which gets us back to the preceding case.

If it takes software technology that hasn't been developed yet, we don't know what we would need to sequester, and we won't until we are in one of the two preceding states.

If it takes a breakthrough in hardware technology, and that breakthrough doesn't become widely public and widely used very quickly after being made, and the hardware is naturally amenable to control (i.e., it requires distinct infrastructure of a similar order to enrichment of material for nuclear weapons), then maybe, with the intense effort of large nations, we can sequester it to a limited club of AGI powers.

I think any control at all is most likely a pipe dream, but one which serves to justify the exercise of power in ways that will please both authoritarians and favored industry actors; and even if it is possible, it is simply a recipe for a durable global hegemony of actors who cannot be relied on to be benevolent.




> If it takes software technology that hasn't been developed yet, we don't know what we would need to sequester, and we won't until we are in one of the two preceding states.

Which in turn leads to the cautious approach for which OpenAI is criticised: not revealing things because they don't know whether those things are dangerous or not.

> I think any control at all is most likely a pipe dream, but one which serves to justify the exercise of power in ways that will please both authoritarians and favored industry actors; and even if it is possible, it is simply a recipe for a durable global hegemony of actors who cannot be relied on to be benevolent.

Entirely possible, and a person I know who left OpenAI had a fear compatible with this description, though differing on many specifics.



