Ben says over and over again that he's not making the argument about the importance of democratic oversight, but he clearly is. He doesn't like Amodei and is happy to see him fail, and it comes through loud and clear in the piece.
Anthropic, like any other US company, should be free to not sell to the government if they don't want to. These other arguments about oversight are nonsense.
I don't think he ever said in the article that he is not making the argument about the importance of democratic oversight. If anything, in the conclusion he says:
"The way to address this new reality, however, is with new laws and through strengthening accountable oversight; cheering or even demanding that an unelected executive decide how and where such powerful capabilities can be used is the road to an even more despotic future."
But I think you drastically misunderstood the point of this article. Ben is pointing out the implications of Amodei's analogy of advanced AI being like nuclear weapons. The government has a monopoly on nuclear weapons and has extreme regulations and oversight on the companies that help build nuclear weapons for it. And those companies do not tell the government how or when they can use the nukes.
So if advanced AI is like nuclear weapons, why can an unelected executive tell a democratically elected government how to use it?
This analogy makes no sense. You and I can go use Claude right now. The government clearly doesn't think they are nukes. If the government wants to control this technology like a weapon then it should do that, but that's not what's happening.
That's not what is happening, yet. If this technology is at the scale of nuclear weapons, you don't think the government is going to step in and take control of it?
Potentially. Or the US government could force a handover of control of the technology. It's impossible to predict right now the implications of this technology in 5 years, let alone 10 years.
I'm not well-read on the history of nuclear, but is it the case that nuclear developed first in government and was then spread to private industry via gov't outreach/motivation?
Here, I think we're talking about the opposite, right? Privately developed, then gov't used. It's so obvious in the first path that gov't would remain in control, but I'm not sure how to think about what's "right" in the private-to-gov't path.
You're absolutely right, and the order of operations matters. But I'm not arguing about what's right in this case. If, as Dario says, advanced AI is at the level of nuclear weapons, then governments around the world will see it as a threat to their power and sovereignty.
If Ford sells Broncos, and Pete Hegseth says “I’m Batman, we want them with RPG launchers”, Ford is not required to create Broncos with RPG launchers.
If Pete wants AI killer robots and AI domestic mass surveillance tools, he can go put out an RFQ like literally any other DOD/DARPA project in history and get bids.
You're right in those cases. But if what Dario says is true, that advanced AI is on the scale of nuclear weapons, then that is a threat to the power and sovereignty of the government.
I thought this was a flimsy piece. Agree with your conclusion.
Also - I'm surprised the government didn't say "ok" and then use Claude (as they wished) anyways. Don't know the details and am oversimplifying, but seems like a plausible path without much recourse/oversight.