> If these claims were indeed serious, they would submit it for independent analysis somewhere.
They have: 40 different companies have committed resources to patching their systems based on vulnerabilities found by Mythos. One of them, Google, is a frontier AI lab that pointedly did not claim its own models had found similar vulnerabilities.
> Defense contractors are required to submit their systems (secret sauce and all) for operational test and evaluation before they're fielded.
Does that look something like having 40 separate companies examine the system's outputs, decide they're real, and commit resources to acting on them?
At some point, “cynicism” is another word for “lalala can’t hear you”.
Another cross-check I've run: are the claims Anthropic is making for Mythos out of line with the current state of AI coding assistants?
My answer is clearly no, not even remotely. If Anthropic is outright lying about what Mythos can do, someone else will have it in a year.
In fact, the security world would have to seriously consider the possibility that even if Mythos didn't exist, nation states have the equivalent in hand already. And of course, if Mythos does exist, nation states have it now. The odds that Anthropic (and every other AI vendor) isn't penetrated by every major intelligence agency to the point where they have access to their choice of model approach zero.
I wonder about the overlap between people being skeptical of Mythos' capabilities, and those who are too skeptical of AI to have spent any time with it because they assume it can't be any good. If you are not aware of what frontier models routinely do, you may not realize that Mythos is just an evolution of existing capabilities, not a revolution. Even just taking a publicly-available frontier model, pointing it at a code base and telling it to "find the vulnerabilities and write exploits" produces disturbingly good results. I can see the weaknesses referenced by the Mythos numbers, especially around the actual writing of the exploits, but it's not like the current frontier models fall on their face and hallucinate wildly for this task. Most everything they produce when I try this is at least a "yeah, that's worth thinking about" rather than an instant dismissal.
The lead story was about the “useless” soldiers in a battle that was won. At a minimum, shouldn't one look for an example where the battle was lost? Most companies can only wish their outcomes were as good as the US's in World War II.
As someone who at first embraced the idea of prediction markets and is now ambivalent, sending them underground vastly reduces their harm. First, because discoverability is an issue. Second, there will be much less liquidity. Third, any gains will have to be laundered or hidden, making it even more difficult.
Maybe prediction markets are net positives, or maybe regulating them will make them so, but banning them does resolve most of their negative effects.
I can't believe how many betting ads I see or hear every time I consume US media. It's worse than all the ads for drugs they want you to request from your doctor.
No, that’s not accurate at all, and in case you are genuinely confused:
1. Anthropic should be free to sell its services under whatever legal terms and conditions it wants.
2. The Pentagon should be free to buy those services, negotiate for different terms, refuse to buy those services, and terminate contracts subject to any termination clauses.
You may or may not agree with what the Pentagon wants to do, but if things had stayed there, there would be no real issue.
The problem is that the Pentagon is trying to bury Anthropic as a company, calling it a danger to the United States because it exerted its non-controversial right in (1).
Any “explanation” that doesn’t address that is confused itself or trying to confuse the issue.
I leave it to you as to which category the linked source falls under.
> The problem is that the Pentagon is trying to bury Anthropic as a company, calling it a danger to the United States because it exerted its non-controversial right in (1).
My take is that the DoD very much wanted to continue using Claude. However, Amodei refused to budge on relinquishing final say over Claude usage. The DoD took this as a personal offense (how dare this guy, does he know who we are, etc.) and lashed out in retaliation. The whole sequence of events makes sense when viewed through this lens.
> Amodei refused to budge on relinquishing final say over Claude usage.
So did Altman. The terms of each company’s agreement with the DoW are roughly the same when they come out of the wash.
“Mr. Altman negotiated with the Department of Defense in a different way from Anthropic, agreeing to the use of OpenAI’s technology for all lawful purposes. Along the way, he also negotiated the right to put safeguards into OpenAI’s technologies that would prevent its systems from being used in ways that it did not want them to be.”
It is more likely the Pentagon purposely offered Anthropic terms it knew Anthropic would not accept, in order to create a certain public perception. OpenAI was always going to be the recipient, but for reasons unknown, they could not make the deal directly and had to create the perception that they had no choice.
> However, Amodei refused to budge on relinquishing final say over Claude usage.
And that's 100% acceptable and legal. They have the right to do that. And DoW can then turn around and say "no deal". And that's 100% acceptable and legal.
So Hegseth going above and beyond and lashing out on the People's behalf like a butthurt child is unwarranted at best, and should definitely be illegal if it's not already.
I agree, my point is simply that Hegseth lashing out over Amodei's refusal is more plausible than a grand conspiracy to move to OpenAI (while simultaneously locking themselves out from Claude).
Either human is a special category with special privileges or it isn’t. If it isn’t, the entire argument is pointless. If it is, expanding the definition expands those privileges, and some are zero sum. As a real, current example, FEMA uses disaster funds to cover pet expenses for affected families. Since those funds are finite, some privileges reserved for humans are lost. Maybe paying for home damages. Maybe flood insurance rates go up. Any number of things, because pets were considered important enough to warrant federal funds.
It’s possible it’s the right call, but it’s definitely a call.
If you're talking about humans being a special category in the legal sense, then that ship sailed away thousands of years ago when we started defining Legal Personhood, no?
Yes, humans tell themselves stories to justify their choices. Are you telling yourself the story that only bad humans do that, and choosing to feel that you are superior and they are worth less? It might be okay to abuse them, if you think about it…
The platonic ideal of how to dismiss any argument by anyone about anything.