AMD does a lot of work to ensure their support for Linux is first-class.
With the kernel now natively supporting their systems, you can expect good support.
It's earned them some goodwill over Nvidia, which has gotten better recently with the rise of AI but still requires users to jump through a couple of hoops due to its attempts to protect its IP.
It is more than that: I really like OpenBSD as a desktop system. This is niche enough that I have zero expectation of any sort of support from the hardware vendors. However, because the AMD drivers are open source, heroic people in the OpenBSD dev community are able to make them work there. I don't strictly need a gaming GPU for my desktop work, but it is nice to have a setup I can boot Linux on to play games.
Heroic because the amdgpu driver is strangely huge, more code than the rest of the OpenBSD kernel combined. It has something to do with GPUs having no ISA stability, so generated code for each card ends up present in the driver.
The biggest question on my mind is how the use of Cider V is being affected by the officially ordained Antigravity.
Is the trend line starting to show that it's adopting more Antigravity-style tooling, or is this causing some sort of rift?
If you are very into agentic coding, then in 2026 you're using Antigravity. But if you are less into it, Cider-V has a slightly less powerful version (e.g. no web browser harness, no multi-agent parallelism) that is backed by the same implementation. Since both are built on VSCode, this is roughly trivial.
In my experience, the Antigravity IDE is much less seamless than Cider-V. I completely moved to using web-based Antigravity for the agent and Cider-V for making manual changes and viewing code.
Ah yes, the "deep state": the formless, nebulous rhetorical tool that is always liquid enough to fill whatever container is necessary, so the user can dress up their immense personal problems as eternal doomerism.
I thought it was just an overdramatic term for the unelected bureaucrats that make up the majority of the government, and who have their own institutional momentum.
It used to be that the thinking behind receiving a portion of the sale money before delivering any product was that it allowed the company to pay suppliers and stay afloat as it drove toward the finish line of delivery.
Now it seems the grifting-meta is to make promises around a product with no plans on delivering it, take in pre-order money, and then just park it in an investment account to grow during a bull market.
By the time the grift comes due, your "investment" will have grown to a magnitude where even if you are forced to pay it back, you will have made a tidy profit.
Yes, a lot of scams are basically elaborate ways to get interest-free loans. The only way to discourage these types of scams is to require the claw-back to include high interest, which probably feels very punitive, so we don't do it. Generally we award the damages and then that's it. But like... damages from 2 years ago don't equal damages now; that's not how money works. I guess our courts don't know that.
> Now it seems the grifting-meta is to make promises around a product with no plans on delivering it, take in pre-order money, and then just park it in an investment account to grow during a bull market. By the time the grift comes due, your "investment" will have grown to a magnitude where even if you are forced to pay it back, you will have made a tidy profit.
There's never been a time where that would work. A damages theory can't make you cough up your stock market gains, but unjust enrichment will do it.
Put into an example, it's always been black-letter law that if I misappropriate $1,000 from you, put it on red 27, and turn it into $36,000, I owe you all $36,000. If I'm less lucky than that and turn it into $50, I owe you all $1,000.
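A quick back-of-the-envelope check of that example (the 35:1 straight-up roulette payout is standard; the legal framing is the parent's):

```python
# A straight-up (single-number) roulette bet pays 35:1, so a winning
# $1,000 stake comes back as the stake plus 35x in winnings.
stake = 1_000
total_returned = stake + 35 * stake  # $36,000 on a win

# Under unjust enrichment you disgorge whichever is larger:
# the gains, or the misappropriated sum itself.
owed_if_win = max(total_returned, stake)  # $36,000
owed_if_loss = max(50, stake)             # lost it down to $50: still owe $1,000
```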
> it's always been black-letter law that if I misappropriate $1,000 from you, put it on red 27, and turn it into $36,000, I owe you all $36,000.
Only if you "stole", and only if you get caught. If you asked $1,000 for an "investment" with the intention of putting it on red 27, then win, you can repay your investors and they'd be none the wiser.
Are you sure? I'd have guessed that the debt is created when they generate the $36,000. Getting caught would just make it easier for the victim to collect.
> Put into an example, it's always been black-letter law that if I misappropriate $1,000 from you, put it on red 27, and turn it into $36,000, I owe you all $36,000. If I'm less lucky than that and turn it into $50, I owe you all $1,000.
Instead of ending this sentence in a period, I would have ended it:
Some people have argued that Sam Bankman-Fried just had unlucky timing. If his Anthropic investment had had an opportunity to mature, everyone would have been happy.
I don’t subscribe, but I have seen the argument a few times.
Only civil, though, right? IIRC criminal law seeks restitution, which would be the original $1000. Civil law is where unjust enrichment would come into play, to my understanding.
>Michael Jackson did this with concert tickets, sort of. You had to pay hundreds of dollars for the chance to buy a ticket to his mega tour, to be refunded if you didn't manage to get one. People sent their money in and had to wait something like three months to find out if they managed to get one. Meanwhile, he was making money by the dump truck on the interest from all this.
This doesn't pass the sniff test. If we assume that "hundreds of dollars" is $500, and the risk free rate is 5%, and they hold it for 3 months, then you get $6.25 per victim. Hardly a huge sum. If you factor in credit card processing fees, they might even be losing money on it.
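For what it's worth, the per-victim figure above checks out under the stated assumptions ($500 held, 5% annual rate, 3 months):

```python
# Simple interest on one $500 hold at a 5% annual risk-free rate for 3 months.
principal = 500
annual_rate = 0.05
years = 3 / 12
interest = principal * annual_rate * years
# ≈ $6.25 per victim, before any card-processing fees
```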
Tour attendance: 2.5 million
Only 1 in 10 purchases were honored, so purchases for 25 million tickets were attempted.
$750 million in a money market at 7% for 6 to 8 weeks.
So, $6 to $8 million in interest, depending on the number of weeks (6 to 8) in the money market.
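Sketching that arithmetic out (the ~$30/ticket figure assumes the $120-per-4-tickets mail-in mentioned in this thread; rate and durations as above):

```python
# Back-of-the-envelope on the float: 2.5M attendees, only 1 in 10
# ticket requests honored, ~$30 per requested ticket, 7% money market.
attendance = 2_500_000
requested_tickets = attendance * 10        # 25 million tickets requested
principal = requested_tickets * 30         # $750 million held
annual_rate = 0.07
interest_6wk = principal * annual_rate * (6 / 52)   # ~$6.1M
interest_8wk = principal * annual_rate * (8 / 52)   # ~$8.1M
```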
I had some of the details wrong, by the way: you had to mail in $120 for the chance at 4 tickets, and he only held it for 6-8 weeks. Part of what was so shitty, though, was that very many of his fans couldn't really afford what was about a month's rent but scraped it together anyway. Maybe it was a poor financial decision on their part, but he took advantage of those people for his own profit, when he didn't even really need the money.
>Maybe it was a poor financial decision on their part, but he took advantage of those people for his own profit, when he didn't even really need the money.
Your own article contradicts your narrative that Jackson was somehow doing it for evil/greed reasons:
1. The scheme seems to have been cooked up by the promoters, with Jackson himself being against it
2. The "he filtered by zip code" allegation was entirely unsubstantiated, and seemed to be a side effect of making the tickets expensive.
3. Jackson donated his earnings to charity, so the "... for his own profit" claim was also questionable.
> Then when the ticket winners were selected, he filtered by zip code so he had an almost entirely white audience
How did that work in the 80s? Did he spend days and weeks poring through (paper) census data and correlating it with ZIP codes? Did he use VisiCalc on an Apple ][ or Lotus 1-2-3 on an IBM PC?
Whatever his other misdeeds, I never got a racist vibe off MJ.
If I wanted to create a furnace into which I could shovel tokens (i.e., money), I don't think I could do it quite as elegantly as Gas City.
It's novel and funny, but the hype around agentic coding is bad enough that some engineers think this represents the pinnacle of current software development practices.
I see drones as more of a side effect of the new era of warfare we are in.
The more powerful your economy, the more autonomous weapons you can create and eventually deploy. Manufacturing capacity and economic resiliency are becoming far more important than a nation's ability to equip and train its military.
The alarming part of this to me is the strong implication that wars will be decided more by who can successfully destroy their adversary's economy than by who can take and hold points of strength. Holding a city with an entrenched military doesn't matter much when there is still a factory deep in enemy territory producing the next wave of attacks.
The incentive to target non-combatant civilians is rising at an alarming rate.
Ukraine has shown that a drone factory can be set up in any old building; it's not like they need huge machines. Would you carpet bomb every usable building? Cheap drones as a defensive weapon make war far more costly for the aggressor.
No, they make war expensive for those used to overwhelmingly superior but expensive military force. Drones are perfectly capable of (and even excel at) surgical strikes, and if the enemy destroys a $1,000 drone with a $100,000 missile, it's still a win for the drone.
The spend at my organization has reached beyond the $200,000 per month level on Anthropic's enterprise tier.
The number of outages we have had over these past few months is astounding, and coupled with their horrendous support, it has our executive team furious.
It's a lot of money to be spending for a single 9 of reliability.
If you are paying API rates (not using Max subscriptions) there's no reason to use Anthropic's API directly, the same models are hosted by both AWS and Google with better uptime than Anthropic.
How do things like prompt caching etc play into that? Would I theoretically have a more stable harness backing my usage?
I'm seriously over the current Claude experience. After seemingly fixing my 4.6 usage by disabling adaptive thinking and moving to max effort, it seems that the release of 4.7 has broken that workflow, and I'm 99% certain that disabling adaptive thinking does nothing even on 4.6 now. Just egregious errors in the 2 days this week since coming back from vacation.
I'm looking at moving to Pi, and I like the minimal nature, but I disagree with a handful of decisions they make. So I'd likely need to maintain a fork, which is less than ideal.
What decisions is Mario making that you disagree with? My impression is Pi is minimal so any changes can live on top of Pi without needing to maintain a fork?
I started developing my own coding agent after using Pi for a couple months, so I’m curious what you don’t like about pi.
When I hear Mario talk about pi and his approach I find myself agreeing with a lot of it. But I also find myself agreeing with a lot of the points from this https://www.thevinter.com/blog/bad-vibes-from-pi
The opinions in question are that bash should be enabled by default with no restrictions, that the agent should have access to every file on your machine from the start, and that npm is the only package manager worth supporting. Bold choices.
To save others a click, though the article is worth reading.
He also mentions that Pi has no subagents by default.
That (and oh-my-pi) seem like an excessive swing in the other direction. I'm all for the simplicity and minimalism of Pi. There are just a few fundamental things that need updating (mainly subagent context and the open-by-default security model).
Yup, that's mine. :)
I actually had some stuff layered into mono pi, and I frankly hit my limit with the architecture issues in monopi. omp, aka oh-my-pi, is frankly better architected. If you pared back the feature set to be minimal, you would, full stop, have a better designed minimal harness.
I do have a proper next-gen no-slop harness in the works.
Amusingly, dogfooding existing tools with my improvements layered in has repeatedly validated my design choices and, if anything, has reduced my tolerance for the errors that seem to happen in vanilla or first-party harnesses.
Pi for the win. I have my own AI extend it when I want more specific features. Vibe coded shift+tab permission control, like Claude Code has, in 20 minutes.
I find it so funny that many of these harnesses sound like black magic and are completely mystical to me. I use Claude Code every day, and yet I can't imagine the workflow of Pi. I also don't care to pay API rates just to experiment with them.
Largely, though, I'm happy with Claude Code w/ IDE integration, so I don't feel the need to migrate. Nonetheless, I'm curious.
Obviously there is only so much you can say; but is that $200K due to the raw number of seats you have, or are you burning through a lot on raw API usage? I guess I'm trying to understand, large business, or large usage.
We are in the SMB space; the spend is almost entirely usage for us at this point, rather than seat cost.
For context, we are a software firm focused on difficult engineering problems, but I can't divulge much else.
Have you guys considered running your own local models? $200k a month is a ton of money and puts all your eggs in one basket. Or is it easier to just be able to walk away from it all if you are done with it or something changes?
I led the team that did the math and analysis for determining our direction in selecting Anthropic.
We initially assumed this was where we would end up, but after some investment exploring our options we found it not worth the trouble.
Local models sound great until you realize you don't get a lot of the features that we implicitly expect from hosted models. Many things would require additional investment in operations and setup to get to a comparable system.
We ended up wanting things that would require us to roll our own memory system, harnesses for the model, compliance needs, and security.
It was possible for us to invest in this, but it would require additional investment in hiring or training to get us to a state comparable to the hosted options.
Eventually, I had to recommend against the project as it was more likely to be an investment in the leading team's resume, than an actual investment into our organization.
To start, I want to be clear that I am trying to understand, not criticizing, and mistakes are how institutional knowledge grows.
Your last paragraph hints at retention struggles which complicates the issue.
But was vendor mitigation not part of the evaluation? I get that most companies view governance and compliance as a pay to play issue, but there has always been an issue with rapidly changing areas and single source suppliers.
I admit to having my own preferences and being almost completely ignorant about what your needs are, but I have seen the value in having a rabbit to pull out of the hat.
If employee retention doesn’t allow for departure of individuals without complete loss of institutional knowledge I guess my position wouldn’t hold.
But during the rise of cloud computing I introduced an openstack install in our sandbox, not because I thought that we would stay on a private cloud but because it allowed our team to pull back the covers and understand what our cloud vendor was doing.
It was an adoption accelerator that enabled us to choose a vendor that was appropriate and to avoid the long tail of implementation.
It was valuable as a pivot when AMD killed SeaMicro with short notice, and the full cloud migration period was dramatically shortened.
I have a dozen other examples, but it is like stock options, volatility and uncertainty dramatically increase the value of keeping your options open.
We will have vendors fold, and a single-source-only story couples your org to the success of that vendor.
IMHO There is a huge difference between tying your success to an Oracle, who may be ‘safe’ if expensive as a captive customer and doing the same in uncertain markets.
It's an SMB; if you need redundancy on every third-party dependency, your business will die anyway.
Better to take the risk for most things. If the worst case happens and you have to migrate, you migrate. Otherwise you risk overengineering upfront and guaranteeing reduced productivity rather than merely risking it.
We are probably closer than you think, and SMBs have zero leverage.
The point is not avoiding vendors or duplicating everything. The point is designing systems so the software/platform never becomes the point of control.
A self-hosted, minimal sandbox instance using simple containers and tools is one way to help avoid that lock-in trap.
It is not zero cost, but strategically important to make sure that vendors don't shape your enterprise, but support it.
IMHO systems should be designed to be as replaceable as possible, without adding the extreme complexity that, for example, a true 'multi-cloud' solution would require.
The point is that the vendor and/or platform can be replaced any time the business changes its goals, the market shifts, strategies change ...
Keeping the door open and trying to minimize the migration cost is my point, not boiling the ocean.
Repurposing a decommissioned server or desktop with a GPU (a 3090 or RTX PRO 6000 Blackwell, not DC-class) running Linux/Podman and llama.cpp will help a team understand without much cost, though I admit that claim is ignorant of your situation.
We both very much agree that upfront multi-vendor implementations are a very bad idea. It suffers from the same problem IMHO, trying to plan past the planning horizon with aspects you have no control over.
Probably too much nuance to discuss here, but thanks for responding.
> Local models sound great until you realize you don't get a lot of the features that we implicitly expect from hosted models. Many things would require additional investment in operations and setup to get to a comparable system. We ended up wanting things that would require us to roll our own memory system, harnesses for the model, compliance needs, and security.
That's not local models vs hosted models; that's using the enterprise services from Anthropic. Any local LLM inference engine such as vLLM gives you an OpenAI-compatible API with the exact same features as a hosted model.
I'm not sure what your use case is, but I personally found Anthropic's offerings lacking and inferior to open-source or custom-built solutions. I have yet to see any "memory" system that's better than markdown files or search, and harnesses for agentic AIs are a dime a dozen.
I don't blame you. I personally would consider revisiting it in the next month or so. A lot of people are saying some of these smaller models, like Qwen 3.6, are basically at Claude Sonnet performance, if not better.
That level of hardware, if the performance was enough is a much smaller investment and gamble.
Either way, I understand the decision. Your product isn't locally hosted LLMs, so why fuss. That said, when I see $1 million plus in external spend, I start wondering about the options. Not saying you did the wrong thing; I think you did the right thing, but things seem to be changing on the local model front, and quite rapidly.
Only if you're vibe coding, with ambiguous prompts that require the model to fill in a huge number of gaps and basically write the software for you.
The people who don't really know what they're doing (or don't care) need the full power of the SOTA models, those with experience can provide enough context and instruction to make even small local models work.
Some of the latest batch are even more vibe-code friendly. It's pretty crazy. People are few-shotting small toy games and such with Qwen3.6. I'm personally not into that workflow, but yeah. It won't be long until the efficiency wave hits and small models are really all people need.
Some of the local models are effectively there. It depends on what scale you need or want. Kimi 2.6 is up there with Opus, granted it's huge. On some benches it's actually better. Qwen3.6 is up there with Sonnet, but it's nearly microscopic. A lot has changed in the last month.
GitHub, along with MSFT in general, has massive Copilot mandates where workers are being shamed into using slop tools to fix serious ongoing issues. GitHub seems wholly incapable of resolving its issues: money isn't a problem, talent isn't a problem, but business leadership is definitely a major problem.
Look at how other companies are suffering massive outages due to LLMs too, like AWS and Cloudflare. Two companies that used to be the best in the industry at uptime but have suddenly faltered quite quickly.
Companies that have even worse standards will quickly realize how problematic these tools are. Hopefully before a recession because this industry seems to be allergic to profitable businesses and leaders that have been around since ZIRP have shown zero intelligence in navigating these times.
None of the three major Cloudflare outages in the past six months had anything to do with LLMs. They were regular old human mistakes.
We did, however, determine that at least one of them (and perhaps all) would have been easily caught by AI code reviewers, had AI code reviewers been in use. So now we mandate that. And honestly, I love it, the AI reviewer spots all sorts of things that humans would probably miss.
(We also fixed a number of problems around configuration that would roll out globally too fast, leaving no time to notice errors and stop a bad rollout, as well as cases where services being down actually made it hard to revert the change... should be in a much better place now. But again, none of that had to do with LLMs.)
> None of the three major Cloudflare outages in the past six months had anything to do with LLMs. They were regular old human mistakes.
Is that true? At least one of them seemed to involve LLM-written code from what I saw. (Not to say that human error wasn't _also_ a contributing factor, but I wouldn't say it had _nothing_ to do with LLMs).
> We did, however, determine that at least one of them (and perhaps all) would have been easily caught by AI code reviewers, had AI code reviewers been in use. So now we mandate that. And honestly, I love it, the AI reviewer spots all sorts of things that humans would probably miss.
The reviewer is decent, but the false positive rate is substantial, and the false negative rate is definitely nonzero. Not that you would know that the way our genius CTO talks about it...
> Not that you would know that the way our genius CTO talks about it...
Honestly I find it bizarre that there are people at Cloudflare who have this attitude. Without Dane, the company wouldn't be half the size it is today.
Something unexpected that LLMs have robbed from us is the grace of others assuming we failed on our own, i.e. good ol' fashioned human/organizational failure.
Our expense is roughly equivalent to 12.3 software developers when you break it down across all people-related expenses. But we've spent a lot of time and energy prior to this focusing on our ability to measure our software development output across multiple teams.
The delivery improvements are not evenly applied across all teams, but the increases that we have seen suggest a better ROI than if we had hired 12 developers.
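Combining the figures in this thread (the ~$200k/month spend is from upthread, so treat this as a rough sketch rather than the poster's actual numbers), the implied all-in comparison is:

```python
# Implied fully loaded cost per developer-equivalent, using the ~$200k/month
# Anthropic spend mentioned upthread and the 12.3-developer equivalence above.
monthly_spend = 200_000
annual_spend = monthly_spend * 12          # $2.4M per year
dev_equivalents = 12.3
cost_per_dev = annual_spend / dev_equivalents
# ≈ $195k per developer-equivalent, all-in
```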
It's genuinely hilarious how the same leadership pushing for RTO, because getting people together creates magic, seems to have no issue trading those same people for LLMs churning at specs.
Respectfully,
After a certain level of compensation, you are indeed judged purely off of input and output.
Workplace improvement does not justify your salary.
You will also find that many problems in the harder sciences do not get easier by throwing more bodies at them.
Comments like these remind me that some project managers think they'd be able to deliver a baby in 1 month if they simply had 9 women.
> Respectfully, After a certain level of compensation, you are indeed judged purely off of input and output. Workplace improvement does not justify your salary.
I'd have to disagree. There's a narrow band in the middle where that's true, but once you exceed that, your personal inputs and outputs matter less and less, and the contributions you make to the overall workplace, and how well you enable those around you, make a larger part of why you're compensated.
Even as an IC, the more you're able to mentor and elevate the people around you, the more your compensation will grow (if you're in the right place, and thus already at the right earnings bracket)
I would agree if the team I'm on were still growing/scaling.
However, we are well past our scaling phase, and at this point our concern is maintaining multi-million-dollar contracts with a tight, well-compensated team.
What local alternative could replace your Anthropic use? I have found none. I don't think many have, which is why most of us pay Anthropic, rather than using one of the numerous, far cheaper, cloud services that host "local" class models.
Most of us are paying for access to proprietary SOTA models, rather than hosting.
Speaking of developer tooling spend: IDEs such as JetBrains' are far harder to build, and I don't think any IDE charges customers this much per month.
I'm not sure how much of a productivity gain $2.5 million per year actually buys.
Run Facebook on a single Proxmox box and demand would still outstrip the supply.
What remains to be seen is whether that demand sustains at that price point in the long run or flattens out, proving to be super-elastic, given that there are many other providers catching up pretty fast.
Yeah, I feel like all of the bad downtimes happen during American business hours. We use GitHub at work in Europe and I don't remember it ever being down or broken between 0700 and 1700 local time.
That’s statistically just luck then - plenty of outages this year already in Berlin time during work hours - I do remember the forced breaks with colleagues for sure.
I think there is a lot of baseless fury behind your words, but my regular interactions with my leadership don't lead me to think they have the end goal of replacing labor.
We're blessed to have leadership with technical backgrounds, so the tools are regarded more as significant intelligence enhancers of already exceptionally smart engineers, rather than replacements.
It doesn't seem to us to be wheelbarrows of money when you consider the average AWS/Azure bill.
Throwing bodies at a problem doesn't always scale.
There are many difficult problems that do not get easier by throwing more juniors or mid level engineers at them.
> the increases that we have seen suggest a better ROI than if we had hired 12 developers.
You can’t argue “we were able to get away with not hiring more developers” and also say you aren’t replacing labor.
Morally I trend towards your side of things, but it’s also important to be realistic about what you’re actually doing. Money is going towards Anthropic and not towards new hires. That’s a replacement of labor. It doesn’t matter what the end goal was.
I’m glad your leadership isn’t trying to fire everyone. But in case you live under a rock, tech layoffs are at all time highs. Companies are rewarded by the public markets for laying off workers.
Simultaneously we have AI industry leaders warning of an employment apocalypse once AGI is achieved.
They must have hired absolutely incompetent leaders on the core software and infrastructure side. Sure their AI research is great but it’s amateur hour. Or just vibe coded slop top to bottom. It seems like every single day people are talking about outages or billing issues or secret changes to how Claude works.
> business leader throwing those salutes and backing it up with talk of a "white homeland"
It is not every commenter's duty to cite their sources when you have the ability to easily infer the context and search the internet. These are very well documented actions that they refer to.
Your attempts to drive sentiment through casting doubt are noticed.
If this were completely uncharted territory, you might have a leg to stand on here.
But you are correct that this is exactly how Facebook started, and we know exactly how that goes; the poster is correct that this just leads to harassment at scale.
The author's response was the main problem, showing a complete lack of character or ethical concern.
There is a world of difference between being a hacker with a sense of rebelliousness and a jerk who thinks there should be zero consequences to their actions.
If we're using the Facebook example to call this unacceptable, we should really be fighting a lot harder against Facebook itself. Because it still has a reasonably positive reputation overall and it's affecting billions of people.
> If we're using the Facebook example to call this unacceptable, we should really be fighting a lot harder against Facebook itself.
I don't think many here would disagree with you.
> Because it still has a reasonably positive reputation overall and it's affecting billions of people.
I'm gonna disagree with you. Maybe it's because I live in the Bay Area so the culture is affected by the proximity of tech companies. But my family in the middle of the country mostly seem to be on the same page, so I don't know how you explain that. It may be that I'm drawn to people who care about these topics and some degree of sameness is expected within family dynamics resulting from the parents' values raising us. Whatever.
I think a good portion of society considers FB a garbage product but don't know of an alternative and just accept it for what it is. I think a smaller portion of society recognizes that they are amoral and terrible for society. How many countries have now discussed legislation to limit kids accessing social media (whether you agree or disagree)? That didn't spring out of nowhere fully formed. Years of criticism got us there.
> Maybe it's because I live in the Bay Area so the culture is affected by the proximity of tech companies. But my family in the middle of the country mostly seem to be on the same page, so I don't know how you explain that.
I can explain that. 100% of Americans add up to roughly 5% of the world's population. As such, there are billions of non-American users with very different viewpoints and opinions.
Yes, we really should be! You’ve hit it on the nose with that point: Facebook has been a stalker with effectively legal immunity in a lot of people’s lives for quite a long time. I’m glad to see others realizing it, too. The more that do, the sooner their formerly-untouchable behavior becomes unacceptable.
"There is a world of difference between being a hacker with a sense of rebelliousness and a jerk who thinks there should be zero consequences to their actions."
Given the external consequences of certain actions, for all intents and purposes that "world of difference" may exist only inside their skull.