Hacker News | giwook's comments

I agree, and I'm also quite skeptical that Anthropic will be able to remain true to its initial, noble mission statement of acting for the global good once they IPO.

At that point you are beholden to your shareholders and no longer can eschew profit in favor of ethics.

Unfortunately, I think this is the beginning of the end of Anthropic and Amodei being a company and CEO you could actually get behind and believe were trying to do "the right thing".

It will become an increasingly cutthroat competition between Anthropic and OpenAI (and perhaps Google eventually, if they can close the gap between their frontier models and Claude/GPT) to win market share and revenue.

Perhaps Amodei will eventually leave Anthropic too and start yet another AI startup because of Anthropic's seemingly inevitable prioritization of profit over safety.


I think the pivot to profit over good has been happening for a long time. See Dario hyping and salivating over all programming jobs disappearing in N months. He doesn't care at all if it's true or not. In fact he's in a terrible position to even understand if this is possible or not (probably hasn't coded for 10+ years). He's just in the business of selling tokens.

And worse, he (eventually) has to sell tokens above cost - which may have so much "baggage" (read: debt to pay Nvidia) that it'll be nearly impossible; or a new company will come to play with the latest and greatest hardware and undercut them.

It's just as if Boeing were able to release a supersonic plane that was also twice as efficient tomorrow: it would destroy any airline that was deep in debt for its current, now-worthless planes.


That's why open models are going to win in the long run.

I think the key question is “when”? In a highly competitive business environment, companies are going to naturally be attracted to the most capable model if it leads to a competitive advantage and the switching costs are low. This suggests that “open” (giving away inference despite ever-higher training costs) may not win for a very long time, if ever.

When frontier models plateau and efficiency increases sufficiently that it becomes a commodity like other cloud compute.

One driver of open models might be foreign actors. With the entire US economy being held up by AI, it's a crucial vulnerability for a capable foreign actor (guess who) to exploit if they wanted to.


Skeptical is a light way to put it. It is essentially a foregone conclusion that once a company IPOs, any veil that they might be working for the global good is entirely lifted.

A publicly traded company is legally obligated to go against the global good.


It’s not really; companies like GM used to boast about how well they treated their employees and communities. It was Jack Welch and a legion of like-minded arseholes who decided they should be increasingly richer no matter who or what paid for it.

It’s funny how corporations get a bad rap. Have you ever worked with private equity? Bad to worse.

Most PE is ironically ultimately owned by publicly traded funds. If you have a 401k that you’re not personally managing odds are that PE is where most of your gains come from.

See also HP. Pretty much only Costco left.

This is where PBCs (Public Benefit Corporations) and B-Corps may have a role to play. Something like that seems necessary to enable both (A) sufficient profitability to support innovation and viability in a capitalist society and (B) consideration of the public good. Traditional public companies aren't just disincentivized from caring about externalities, they're legally required to maximize shareholder profits, full stop. Which IMHO is a big part of the reason companies ~always become "evil".

The company I currently work for is both a B-Corp and an employee-owned trust. The difference in culture, attitude and behaviour from the previous place I worked, which only cared about quarterly results, is stark.

Costco is such a strange and stark case standing in opposition to this general rule. From everything I hear, I can only gather that the reason is because of extremely experienced and level-headed executive staff.

The middle-class productive population produces common goods and resources which get exploited by elites. It's the Tragedy of the Commons applied to the wealth-generation process itself.

The previous deal was due to (a) a lower level of development of capitalism (b) a higher profit margin that collapsed in the 70s (c) a communist movement that threatened capital into behaving

"Is your washroom breeding Bolsheviks?"

Fair point.

Call me an optimist, but I'm still holding out hope that Amodei is and still can do the right thing. That hope is fading fast though.


« Don’t be evil »

If no one can buy your soul, what's its value? Every Management Consulting Firm

The problem is that people equate money to power and power to evil.

So no matter what, if you do something lots of people like (and hence compensate you for), you will be evil.

It's a very interesting quirk of human intuition.


A reasonable conclusion, considering that money and power seem to have their own gravity, so people with more of both end up getting even more of both, and vice versa.

Can't blame someone who comes to such a conclusion about money and power.


The unreasonable part is automatically labeling power as evil.

It’s a sane default to label power as evil in a society driven by greed, usury, and capital gain. Power tends to corrupt, particularly when the incentives driving its pursuit or sustenance undermine scruples or conscientiousness. It is difficult to see how power is not corrupting when it becomes an end in itself, rather than a means directed toward a worthy or noble purpose.

Labeling power evil is not automatic; it's just an observation of the common case. Money-backed power almost never works for the forces of good, and the people who claim they're gonna be good almost always end up being evil when they're rich and powerful enough. See also: Google.

Google is the company that created a class-less, non-hierarchical internet. Everyone can get the same access to the same services regardless of wealth or personhood. Google is probably the most progressive company to ever exist, because money stops no one from being able to leverage Google's products. Born in the bush of the Congo or a high-rise in Manhattan, you are granted the same Google account with the same services. The cost of entry is just to be a human, one of the most sacrosanct pillars of progressive ideology.

Yet here they are, often considered one of the most evil companies on Earth. That's the interesting quirk.


*Was

Lot of people and companies were responsible for that. Anyway, that says nothing about what Google has become.

> Google is the company that created a class-less non-hierarchical internet.

Can you explain what you mean by this? I disagree but I don't understand how you think Google did this so I am very curious.

For my part, I started using the internet before Google, and I strongly hold the opinion that Google's greatest contribution to the internet was utterly destroying its peer to peer, free, open exchange model by being the largest proponent of centralizing and corporatizing the web.


The alternative was a telco, AOL-style internet with pay tiers for access to select websites. The free web of the 90's would remain, but would be about as culturally relevant as Linux.

Surely you have to recognize the inconsistency of saying that Google "corporatized" the web, while the vast majority of people using Google have never paid them anything. In fact many don't even load their ads or trackers, and still maintain a Gmail account.

If we put on balance good things and evil things google has done, with honest intention, I struggle very hard to counter "gave the third world a full suite of computer programs and access to endless video knowledge for free with nothing more than dumpy hardware", while the evil is "conspired with credit card companies to find out what you are buying".

This might come off like I am just glazing google. But the point I am trying to illuminate is that when there is big money at play, people knee-jerk associate it with evil, and throw all nuance out the window.

Besides, IRC still exists for you and anyone else to use. Totally google free.


No I actually do understand where your opinion comes from now and I partially agree. I had forgotten about how badly the ISPs wanted the internet to mirror Cable TV plans.

There are several subjects to go into here and HN probably isn’t the best place for the amount of detail this discussion requires, but I will just note that the number of people blocking Google’s ads and trackers is negligible and has significantly shrunk in the mobile-first era.

The wave is shifting to other corporations now but for a good while most of the internet was architected to give Google money. Remember SEO? An entire practice of web publishing centered around Google’s profit share. That hasn’t disappeared- it’s just evolved and transformed into more ingrained rent-seeking.


Money and power are good when used democratically to clearly benefit the majority of the people. They are bad otherwise. It is hard to see this because we live in such a regime that exists in the negative space seemingly without beginning or end. Other countries have different relationships to their population.

> At that point you are beholden to your shareholders

No, not really: you can issue two classes of shares, where the company founders control a class with more voting power while other shareholders get a class with less voting power.

Facebook and Google have something similar.


No, they still have to act in the interest of shareholders even if they have no voting power.

As a PBC, the intent of the company is not only profit, but it's hard to analyze the counterfactual of Anthropic being a pure for-profit or a non-profit.

that's the benefit of a PBC

What will happen if they don't, given the founders control the voting power?

I know limits have been nerfed, but c'mon it's $20. The fact that you were able to implement two smallish features in an iOS app in 15 minutes seems like incredible value.

At $20/month your daily cost is about $0.67. Are you really complaining that you were able to get it to implement two small features in your app for 67 cents?
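The arithmetic is just the monthly plan price spread over a day; a quick sketch (plan price from the comment, 30-day month assumed):

```python
# Back-of-envelope: monthly subscription cost per day
monthly_price = 20.00            # $20/month plan
daily_cost = monthly_price / 30  # assumes a 30-day month
print(f"${daily_cost:.2f} per day")  # → $0.67 per day
```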


Yea, actually, people should be complaining.

If you got in a taxi, and they charged you relative to taking a horse carriage, people should be upset.


That last sentence didn't make sense so I'm not sure what your point is. But I'll run with the analogy.

You got into a taxi and they were charging you horse carriage prices initially. They're still not charging you for a full taxi ride but people are complaining because their (mistaken) assumption was that taxis can be provided as cheaply as horse carriages.

People are angry because their expectations were not managed properly which I understand.

But many of us realized that $20 or even $200 was far too low for such advanced capabilities and are not that surprised that all of the companies are raising prices and decreasing usage limits.

OpenAI is not far behind, they're simply taking their time because they're okay with burning through capital more quickly than Anthropic is, and because OpenAI's clearly stated ambition is to win market share, not to be a responsibly, sustainably run company.


Shortly after I ran out of credits in 15 min, they tweeted that they increased usage limits to compensate for the higher token usage, so perhaps it is not as bad now.

This afternoon I was able to use Codex for about two hours on the $20 plan. Maybe limits will be tighter in the future. But with new data centers, new GPU generations, and research advances it might rather get cheaper.

Anyway, as you said, this is all pretty cheap. I'll go with the $100 Codex plan, since I now figured out how to nicely work on multiple changes in parallel via the Codex app with worktrees. I imagine the same is possible in Claude Code.
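Working on multiple changes in parallel with worktrees is plain git, so it works the same regardless of which agent tool you use. A minimal sketch (repo path and branch names are examples):

```shell
# Create a throwaway repo, then one worktree per in-flight change.
# Each worktree is a separate checkout on its own branch, so two agent
# sessions can edit independently without stashing or branch-switching.
set -e
tmp=$(mktemp -d)
git -C "$tmp" init -q repo
cd "$tmp/repo"
git -c user.email=dev@example.com -c user.name=dev \
    commit --allow-empty -q -m "init"

git worktree add ../feature-a -b feature-a
git worktree add ../feature-b -b feature-b

git worktree list   # shows the main checkout plus both worktrees
```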


It seems to me a bit naive to think OpenAI would not increase prices/decrease usage limits at some point. $20 might cover a very small fraction of the actual cost that is incurred over a month of sustained usage.

No, I am happy with the results.

For a first test, it did seem like it burned through the usage even faster than usual.

GitHub Copilot’s billing factor rising from 3x to 7.5x with Opus 4.6 seems to suggest it indeed consumes more tokens.

Now I’m just waiting for OpenAI to show their hand before deciding whether to upgrade from the $20 plan to the $100 plan.
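The multiplier translates directly into how far a plan's allowance stretches. A hedged sketch: the 3x and 7.5x factors are from this thread, while the 300-request monthly allowance is a hypothetical number purely for illustration:

```python
# How a fixed premium-request allowance divides under per-model multipliers
def real_requests(allowance: float, multiplier: float) -> float:
    """Actual requests you can make before the allowance is exhausted."""
    return allowance / multiplier

allowance = 300  # hypothetical monthly premium-request allowance
print(real_requests(allowance, 3.0))  # → 100.0 requests at the old 3x factor
print(real_requests(allowance, 7.5))  # → 40.0 requests at 7.5x
```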


I'm not sure how many companies would trust a failed shoe company to be responsible for their compute.

I'd sign up if the price is right. Workloads can easily be moved if something goes down.

Expect grift in the grift economy.

Don't you know that it's okay to steal IP (and skirt laws in general) when you're a big company with lots of money?

One torrent is a crime, breaking all the laws by downloading terabytes of books and processing them is a trillion dollar business.

LOL.

This reminds me of when the Long Island Iced Tea company renamed themselves to the Long Blockchain Corp in 2017 when crypto was soaring and their stock immediately took off.

Four years later the SEC charged three people (including the company's majority shareholder) with insider trading.


Or if you want to avoid having to set new bindings, do '\ + enter' (which escapes the enter).

What a time to be alive.

This seems par for the course for OpenAI/Sam Altman.

Unfortunately they are not the first company to try and externalize their costs, and they will not be the last.

Serious question, maybe a bit naive: Is there anything we can do to push back against and discourage the externalization of costs onto others?

Is this simply a matter of greed and profit-seeking outweighing one's morals (assuming one has them to begin with)?


Change the legal definition of corporations? Corporations exist to provide liability protections to shareholders, which means they are mainly incentivized to externalize costs and avoid liability to maximize profit, or even to make profit in businesses that would not be profitable if they could be held liable for externalized costs (deep sea oil well drilling). Limit the ability of corporations to shield themselves from view through multiple levels of shell corporations and Special Purpose Vehicles. These are probably controversial stances on a board about startup culture and breaking the rules to get rich.

Stop voting for people and judges that believe in the Friedman doctrine?

Every decision has tradeoffs. Western society has largely decided to prioritize capital owners over everything else.


> Is there anything we can do to push back against and discourage the externalization of costs onto others?

On a societal scale, no. Occasionally this works in some individual cases. Like the online outrage over SOPA/PIPA 15 years ago.

But when entity X can gain $$$$$$ (or power) from doing an action, and that action costs everyone only $ (or a minor bit of inconvenience or ideological righteousness), then the average person has very little incentive to take time out of their day-to-day life to fight it.

Meanwhile the entity will do whatever it takes to get the $$$$$$/power because they have a huge incentive. This is the same mechanism that allows democracies to be eroded, as we're seeing right now in the US.


Even if they were to pass such a law which would be political suicide, it would still be up to the courts to say that it doesn't violate the Constitution. For example, a law that says anyone with a net worth of $1B can freely punch anyone in the face whenever they want and have immunity would be a clearly illegal law. That's basically what this bill is. The courts would then need to be made sufficiently corrupt to not strike down such a law as unconstitutional.

Unconstitutional doesn't mean much when it's being decided by a group of unaccountable people who weren't elected through democratic means. If SCOTUS says something is legal, it's legal. That's how the system is set up; nothing else really matters. They'll justify their decisions however they want, but the material ends are the only things that matter.

SCOTUS has ruled many terrible things over the course of our nation's history (upheld slavery, said slaves weren't people, equated money with speech, decided a presidential election while denying a recount, etc). Expecting them to somehow be better is a foolish task.

It's an institution that needs to be dismantled and rebuilt, where at minimum SCOTUS appointments should be decided by a national vote rather than letting an extreme minority decide (100 senators versus ~340,000,000 people).


That depends on your definition of "we". As a society, we can regulate companies and punish the offenders (e.g. don't dump toxic waste into sources of drinking water or you'll get prosecuted). As individuals, there's not much we can do directly. How to translate individual actions into societal action is kind of the fundamental question of civilization, and if there's a uniform solution for how to achieve it, I don't think we've managed to come up with it yet.

A lot of people will dismiss this answer, but... vote for Democrats. With Bernie and upcoming young Democrats more and more are pushing back. The parties definitely are not the same. Democrats created the Consumer Financial Protection Bureau. Republicans destroyed it.

Push your representatives to crush monopolies and manipulative practices. This happened before in the gilded age. Only a popular response can turn the tide.

Also, primaries are coming up, and not all Democrats are the same either. Plenty of the old school Democrats are facing progressive challengers. So, vote for the ones that will stand up to this garbage and follow up on whether they do. There are a lot of new faces in the Democratic party who are standing up to the BS.

The US has a lot of potential to change if we push it. A 25 point swing toward people who don't consider grift a personal priority will change a lot of things.


Do you mind elaborating on your experience here?

Just curious as I've often heard that Claude was superior for planning/architecture work while ChatGPT was superior for actual implementation and finding bugs.


Claude makes more detailed plans that seem better if you just skim them, but when analyzed they usually have a lot of errors.

It compensates for most of them during implementation if you make it use TDD, via the superpowers plugin et al, or just by telling it to do so.
It compensates for most during implementation if you make it use TDD by using superpower et al, or just telling it to do so.

GPT 5.4 makes simpler plans (compared to superpowers - a plugin from the official Claude plugin marketplace - not the plan mode), but can better fill in the details while implementing.

Plan mode in Claude Code got much better in the last months, but missing details cannot be compensated for by the model during implementation.

So my workflow has been:

Make claude plan with superpowers:brainstorm, review the spec, make updates, give the spec to gpt, usually to witness grave errors found by gpt, spec gets updates, another manual review, (many iterations later), final spec is written, write the plan, gpt finds mind boggling errors, (many iterations later), claude agent swarm implements, gpt finds even more errors, I find errors, fix fix fix, manual code review and red tests from me, tests get fixed (many iterations later) finally something usable with stylistic issues at most (human opinion)!

This happens with the most complex features that'd be a nightmare to implement even for the most experienced programmers, of course. For basic things, most SOTA models can one-shot anyway.
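The back-and-forth described above is essentially a fixed-point loop: one model revises until the other model's review comes back clean. A toy sketch of that shape (the revise/review functions here are stand-ins for illustration, not real model APIs):

```python
def refine(artifact, revise, review, max_rounds=10):
    """Alternate revision and review until the reviewer finds no issues
    (or the round budget runs out)."""
    for _ in range(max_rounds):
        issues = review(artifact)
        if not issues:
            break
        artifact = revise(artifact, issues)
    return artifact

# Toy stand-ins: a "spec" is a list of numbers; the reviewer flags odd ones,
# and the reviser bumps each flagged item until the review comes back clean.
review = lambda spec: [x for x in spec if x % 2]
revise = lambda spec, issues: [x + 1 if x in issues else x for x in spec]
print(refine([1, 2, 3], revise, review))  # → [2, 2, 4]
```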


Interesting. Have you ever had Claude re-review its plan after having it draft the original plan? Or do you give it to GPT right away to review?

Just curious as I'm trying to branch out from using Claude for everything, and I've been following a somewhat similar workflow to yours, except just having Claude review and re-review its plan (sometimes using different roles, e.g. system architect vs SWE vs QA eng) and it will similarly identify issues that it missed originally.

But now I'm curious to try this while weaving in more GPT.


I use both GH Copilot as well as CC extensively and it does seem more economical, though I wonder how long this will last, as I imagine GitHub has also been subsidizing LLM usage extensively.

FWIW it feels like GH Copilot is a cheaper version of OpenRouter but with trade-offs like being locked into VSCode and the Microsoft ecosystem overall. I already use VSCode though and otherwise I don't see much downside to using GH Copilot outside of that.


You’re not locked into vscode. There are plugins for other IDEs, and a ‘copilot’ cli tool very similar to Claude Code’s cli tool.

I also wouldn’t say you’re locked into Microsoft’s ecosystem. At work we just have skills that allow for interaction with Bitbucket and other internal tooling. You’re not forced to use GitHub at all.



I'm hopeful because Microsoft already has a partnership with and owns much of OpenAI, so they can get its models at cost to host on Azure, which they already do, and pass the savings on to the user. This is why Opus is 3x as expensive in Copilot: Microsoft needs to buy API usage from Anthropic directly.

I don’t think it’s API costs. Their Sonnet 4.6 is just 1x premium request which matches the 1x cost of the various GPT Codex models.

Sonnet is the worse model though, so it's expected that it's cheaper; the comparison would be Opus vs. GPT. That Anthropic's worse model costs the same per request as OpenAI's best model is what I mean when talking about Microsoft flexing their partnership.

You could use something like [OpenCode](https://opencode.ai), which supports integration with Copilot.

> but with trade-offs like being locked into VSCode and the Microsoft ecosystem overall

You can use GH Copilot with most JetBrains IDEs.

