OpenAI's own estimate is a $14B loss in 2026[1]. Anthropic is aiming to break even this year (and some of that is probably due to Cowork, so that at least isn't unreasonable). So it may be revenue, but it isn't profit, yet...
Profit is what you do when you're not trying to grow. Name a time in history when a company with an industry-defining product in huge demand failed under the weight of cash-flow problems or regulation.
If we judged all startups by the same standards (in terms of revenue to debt ratios), many of today's established companies would have been "failures" at the same point in their life cycle.
Excited to try this. Is this not in effect a kind of "pre-compaction," deciding ahead of time what's relevant? Are there edge cases where it remains unaware of, say, a utility function it would have coincidentally picked up had it just dumped everything?
Yeah it's basically pre-compaction, you're right. The key difference is nothing gets thrown away. The full output sits in a searchable FTS5 index, so if the model realizes it needs some detail it missed in the summary, it can search for it. It's less "decide what's relevant upfront" and more "give me the summary now, let me come back for specifics later."
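That "summarize now, come back for specifics later" pattern can be sketched with SQLite's built-in FTS5 virtual tables. This is an illustrative sketch only, not the actual implementation; the schema and function names (`tool_outputs`, `store_output`, `search_outputs`) are made up for the example.

```python
import sqlite3

# Full tool output goes into an FTS5 full-text index; only a short
# summary is surfaced up front. Nothing is thrown away.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE tool_outputs USING fts5(tool, full_output)")

def store_output(tool: str, full_output: str, max_summary_chars: int = 80) -> str:
    """Index the full output; hand back only a truncated summary."""
    conn.execute(
        "INSERT INTO tool_outputs (tool, full_output) VALUES (?, ?)",
        (tool, full_output),
    )
    return full_output[:max_summary_chars]

def search_outputs(query: str) -> list[str]:
    """Full-text search over everything that was ever indexed."""
    rows = conn.execute(
        "SELECT full_output FROM tool_outputs WHERE tool_outputs MATCH ?",
        (query,),
    ).fetchall()
    return [r[0] for r in rows]

# The model sees only the summary at first...
summary = store_output(
    "grep",
    "def compute_tax(income): ...\ndef helper_util(x): return x * 2",
)
# ...but if it later realizes it needs a detail the summary dropped,
# it can search the index for it.
hits = search_outputs("helper")
```

Note that FTS5's default `unicode61` tokenizer splits identifiers like `helper_util` into separate tokens (`helper`, `util`), so partial-word searches on code need a little care.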
For sure. If they weren't so self-righteous about not serving ads, it'd be a great revenue stream for them. It'd also align with Dario's seeming obsession with profitability.
These are literally words. The DoW could still easily exploit these platforms, and nothing Anthropic has done can prevent it, other than saying (publicly), "we disagree."
The dispute seems to be specifically about safeguards that Anthropic has in its models and/or harnesses, that the DoD wants removed, which Anthropic refuses to do, and won’t sign a contract requiring their removal. Having implemented the safeguards and refusing their removal are actions, not “literally words”.
Again, these are just words, and they're a salad of legal terms of art that provide cover for virtually any action. "We prevent the -illegitimate- use..." [define illegitimate plz?]
It’s a contract dispute. Contracts are more than just talk.
While it is true that DoW could try to bypass the contract and do whatever they want, if it were that easy they wouldn’t be asking for a contract in the first place.
Should probably look up how many private companies are suing the government at any one time because of a breach of contract. And that's publicly breaching.
NSA and other three-letter agencies happily do it under cloak and dagger.
I agree with you that the govt can and does violate contracts. So the fact that they need Anthropic to agree signals that it’s more than just lawyers preventing the DoW from doing whatever they want.
What's the US history around nationalization? Would "confiscation" ever be a likelihood on escalation?
On a quick search I came up with an article that, at least thematically, proposes such ideas about the current administration: "Nationalization by Stealth: Trump's New Industrial Playbook".
If preventing mass surveillance or fully autonomous weaponry is a -policy- choice and not a technical impossibility, this just opens the door for the Department of War to exploit backdoors, and Anthropic (or any AI company) can in good conscience say "Our systems were unknowingly used for mass surveillance," allowing them to save face.
The only solution is to make it technically -impossible- to apply AI in these ways, much like Apple has done. They can't be compelled by any government, because they don't have the keys.
I think it is a reasonable moral stance to acknowledge that such things are possible while not wanting to be part of them. Regarding making it technically impossible to do... I think that is what Anthropic means when they say they want to develop guardrails.
If you read the statement, they explicitly state these guardrails don't exist today, and they want to develop them.
Though I have a feeling we're talking about different things. In Claude Code terms, it might want to rm -rf my codebase. You sound like you might want it to never run rm -rf. Anthropic probably wants to catch dangerous commands and send them to humans to approve, like it does today.
That's my point. They formed Anthropic under the sole mandate of "guardrails first," yet now seemingly don't have them at all. So they're just another AI company with different marketing, not the purely altruistic outfit they want everyone to believe they are.
The ability of some people to never be happy, and to find a way to twist a good situation into bad, will always impress me.
Here we have a company doing something unprecedented, but it is STILL not enough for people like you. The DoD could destroy them over this statement, and has indicated an intent to do so, but it's still not enough for you that they stand up to this.
I wonder what life is like being so puritanical and unwilling to accept the good, for it is not perfect! This mindset is the road to a life of bitterness.