Hacker News | mvkel's comments

Development isn't hard on RAM. Doing what Apple claims this device is designed for, spending lots of time in multiple browser tabs, is.

Less so if you do it in Safari than in a non-Apple browser.

It's strange that the low-end machines get positioned as "everyday task" devices when the biggest RAM hogs by far are browsers and websites.

It's more that they can't think of anything else that could possibly need that much compute.

> disassembling, evaluating, and feeding back ... listened, iterated, and shipped. they didn’t declare victory and go home. They kept pushing.

Not even an attempt to clear the AI smell out of this piece.


Anthropic and OpenAI are shipping AI agent SaaS and generating tens of billions in revenue. It's safe to say yes.

OpenAI's own estimate is a $14B loss in 2026[1]. Anthropic is aiming to break even this year (and some of that is probably due to Cowork, so that at least isn't unreasonable). So it may be revenue, but it isn't profit, yet...

1: https://finance.yahoo.com/news/openais-own-forecast-predicts...


You’re not approaching it from a startup-accounting perspective. It’s only equity (Sam’s) that matters. Profit is a trifling matter.

Profit is what you do when you're not trying to grow. Name a time in history when a company with an industry-defining product with huge demand failed under the weight of cash flow, or regulation.

Twenty billion in revenue on hundreds of billions in debt is not "making money".

If we judged all startups by the same standards (in terms of revenue to debt ratios), many of today's established companies would have been "failures" at the same point in their life cycle.

I wonder what the ratio of failures and survivors would be if we really judged all startups... survivorship bias is not a great point to make.

It's just the dotcom bubble all over again, only at a larger scale.

They're selling a dollar for one cent, but they'll make up the difference with volume.



It's more like "Tesla is going to be bankrupt imminently" all over again.

> Name a time in history when a company with an industry-defining product with huge demand failed under the weight of cash flow, or regulation.

Frontier AI is very close to a zero sum game. Focus on profit, and you will lose.


And yet many of the tech incumbents in today's world came out of that era.

Name them.

1. Amazon 2. Google 3. Salesforce 4. ???

One that actually sold things. One that was legitimately sector-defining. One that wasn’t a B2C dotcom.


eBay, VMware, Akamai, PayPal, to name a few.

Actually sold things, not a dotcom, B2B infrastructure, legitimately sector-defining.

The companies that survived the bust weren't the ones doing the land-grab shenanigans.


I mean, given your very specific filter, how many companies formed in the last decade would be on the list? Stripe?

And yet many of the tech failures came out of that era.

Excited to try this. Is this not in effect a kind of "pre-compaction," deciding ahead of time what's relevant? Are there edge cases where it is unaware of, say, a utility function that it coincidentally picks up when it just dumps everything?

Yeah it's basically pre-compaction, you're right. The key difference is nothing gets thrown away. The full output sits in a searchable FTS5 index, so if the model realizes it needs some detail it missed in the summary, it can search for it. It's less "decide what's relevant upfront" and more "give me the summary now, let me come back for specifics later."
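A minimal sketch of the pattern described here, using SQLite's FTS5 extension (table and column names are hypothetical, not taken from the actual tool): the full tool output goes into a full-text index up front, and only a detail lookup happens later.

```python
import sqlite3

# Store full tool output in an FTS5 index so nothing is thrown away,
# even if only a summary is shown to the model at first.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE outputs USING fts5(tool, content)")
conn.execute(
    "INSERT INTO outputs VALUES (?, ?)",
    ("grep", "def parse_config(path): ...\ndef load_defaults(): ..."),
)

# Later, when the summary turns out to be missing a detail, search the
# full output instead of re-running the tool. snippet() returns a short
# excerpt with the matched tokens bracketed.
rows = conn.execute(
    "SELECT tool, snippet(outputs, 1, '[', ']', '...', 8) "
    "FROM outputs WHERE outputs MATCH ?",
    ("parse_config",),
).fetchall()
print(rows)
```

The "come back for specifics later" step is just a MATCH query against the stored output, so the upfront summary never has to be exhaustive.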

"as an ai safety company, we only believe in -partially- autonomous weaponry"

Ads are coming.


I'd be glad if they opened their platform enough that it could run on ads and not $200 subscriptions.

For sure. If they weren't so self-righteous about not serving ads, it'd be a great revenue stream for them. It'd also align with Dario's seeming obsession with profitability.

These are literally words. The DoW could still easily exploit these platforms, and nothing Anthropic has done can prevent it, other than saying (publicly), "we disagree."

The dispute seems to be specifically about safeguards that Anthropic has in its models and/or harnesses, that the DoD wants removed, which Anthropic refuses to do, and won’t sign a contract requiring their removal. Having implemented the safeguards and refusing their removal are actions, not “literally words”.

The "safeguards" you are referring to are contractual, i.e. words. There are no technical safeguards, per the article.

The memo literally says that the reason they have these policies is -because- actual technical guardrails are not reliable enough.



Again, these are just words, and they're a salad of legal terms of art that provide cover for virtually any action. "We prevent the -illegitimate- use..." [define illegitimate plz?]

In your second link, that team was defunded; the person heading it just left unceremoniously: https://x.com/mrinanksharma/status/2020881722003583421?s=46


It’s a contract dispute. Contracts are more than just talk.

While it is true that DoW could try to bypass the contract and do whatever they want, if it were that easy they wouldn’t be asking for a contract in the first place.


Should probably look up how many private companies are suing the government at any one time because of a breach of contract. And that's publicly breaching.

NSA and other three-letter agencies happily do it under cloak and dagger.


I agree with you that the govt can and does violate contracts. So the fact that they need Anthropic to agree signals that it’s more than just lawyers preventing the DoW from doing whatever they want.

What's the US history around nationalization? Would "confiscation" ever be a likelihood on escalation?

On a quick search I came up with an article that, at least thematically, proposes such ideas about the current administration: "Nationalization by Stealth: Trump's New Industrial Playbook"

https://thefulcrum.us/trump-state-control-capitalism


Good optics, but ultimately fruitless.

If preventing mass surveillance or fully autonomous weaponry is a -policy- choice and not a technical impossibility, this just opens the door for the Department of War to exploit backdoors, and Anthropic (or any AI company) can in good conscience say "Our systems were unknowingly used for mass surveillance," allowing them to save face.

The only solution is to make it technically -impossible- to apply AI in these ways, much like Apple has done. They can't be compelled to comply with any government, because they don't have the keys.


I think it is a reasonable moral stance to acknowledge such things are possible, yet not want to be a part of it. Regarding making it technically impossible to do... I think that is what Anthropic means when they say they want to develop guardrails.

Are the guardrails not part of their core? Isn't that the whole premise of their existence?

If you read the statement, they explicitly state these guardrails don't exist today, and they want to develop them.

Though I have a feeling we're talking about different things. In Claude Code terms, it might want to rm -rf my codebase. You sound like you might want it to never run rm -rf. Anthropic probably wants to catch dangerous commands and send them to humans to approve, like it does today.
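The "catch dangerous commands and send them to humans" flow amounts to a pattern gate in front of execution. A hypothetical sketch (the patterns and function names are illustrative, not Anthropic's actual mechanism):

```python
import re

# Illustrative deny-patterns; a real harness would use a far more
# complete list and likely semantic analysis, not just regexes.
DANGEROUS = [r"\brm\s+-rf\b", r"\bmkfs\b", r"\bdd\s+if=", r">\s*/dev/sd"]

def needs_human_approval(command: str) -> bool:
    """Return True if the shell command should be escalated to a human."""
    return any(re.search(p, command) for p in DANGEROUS)

print(needs_human_approval("rm -rf /tmp/build"))  # prints True
print(needs_human_approval("ls -la src/"))        # prints False
```

The point is that the guardrail is a routing decision (escalate vs. run), not a hard technical impossibility.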


That's my point. They formed Anthropic under the sole mandate of "guardrails first," and now seemingly don't have them at all. So they're just another AI company with different marketing, not the purely altruistic outfit they want everyone to believe.

The ability of some people to never be happy, and to find a way to twist a good situation into bad, will always impress me.

Here we have a company doing something unprecedented but it is STILL not enough for people like you. The DoD could destroy them over this statement, and have indicated an intent to do so, but it's still not enough for you that they stand up to this.

I wonder what life is like being so puritanical and unwilling to accept the good, for it is not perfect! This mindset is the road to a life of bitterness.


It's more that I'm allergic to hypocrisy.

A little pessimistic of a take, IMO. You may very well be right, though.

Convenience trumps everything, including privacy and security.

Telling the average person that they have to install their own model is a deal breaker at the outset.

As for 99% of capabilities being on-device, battery life makes it a non-starter.

