Doesn't look like it has proper worktree management. UIs that abstract away worktrees are very powerful. I vibe coded my own (https://github.com/9cb14c1ec0/vibe-manager), which unfortunately doesn't have the remote component that hapi does.
"I use arbitrarily complex software that has a rapid SDLC to obfuscate the issue with the fact that we have to have military grade encryption for displaying the equivalent of a poster over the internet".
The state of our industry is such that there will be a lot of people arguing for this absurdity in the replies to me. (or I'll be flagged to death).
Package integrity makes sense, and someone will make the complicated argument that "well ackshually someone can change the download links", completely ignoring that a person doing that would be quickly found out. And if the attacker is far enough up the chain, they can get a valid LE cert anyway; it's trivially easy if you're motivated enough and have access to an ASN.
Nah, you've simply never lived in a country which is afraid of its own population and does (or has tried to) MITM internet traffic. Mine does both; there was a scandal several years ago:
I'll take enforced HTTPS for absolutely everything, thank you very much. Preferably with certificate pinning and similar aggressive measures to thwart any attempts to repeat this.
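Certificate pinning boils down to comparing a hash of the server's public key against a value you trust out of band. A minimal sketch of the standard HPKP-style pin computation (the SPKI bytes here are a placeholder, not a real key):

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """HPKP-style pin: base64(SHA-256(SubjectPublicKeyInfo DER))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

# Placeholder bytes standing in for a real certificate's SPKI DER encoding.
fake_spki = b"\x30\x82\x01\x22placeholder-spki-bytes"
pin = spki_pin(fake_spki)
# A client that pins would refuse the connection unless `pin` matches
# one of its stored values, regardless of what CA signed the cert.
```

Because the pin covers the public key rather than the whole certificate, a state-level MITM with its own CA-issued cert still fails the check.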
Changing the links and doing nothing else would be a pretty dumb MITM. You could do a more complex variant that is not so easy to spot (targeting specific networks, injecting malware whilst modifying the checksum).
The key property of SSL that is useful for tamper resistance is that it's hard to do silently. A random ASN doing a hijack will cause an observable BGP event and is theoretically preventable via RPKI. If your ISP or similar does it, you can still detect it with CT logs.
Even issuance is a little safer, because LE validates from multiple vantage points. This doesn't protect against ISP-level interception, but it's better than no protection.
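Detecting interception via CT logs amounts to scanning the public record of issued certificates for issuers you didn't expect. A hedged sketch, using a canned response in the shape crt.sh's JSON endpoint returns (the issuer strings and dates are illustrative, not real data):

```python
# Allowlist of issuers you actually use for your domain.
EXPECTED_ISSUERS = {"C=US, O=Let's Encrypt, CN=R3"}

# Stand-in for the JSON you'd get back from a live CT search, e.g.
# requests.get("https://crt.sh/?q=example.org&output=json").json()
canned_results = [
    {"issuer_name": "C=US, O=Let's Encrypt, CN=R3", "not_before": "2024-01-01"},
    {"issuer_name": "C=XX, O=Suspicious CA, CN=Interceptor", "not_before": "2024-02-01"},
]

def unexpected_certs(results, expected=EXPECTED_ISSUERS):
    """Return CT entries whose issuer isn't on the allowlist."""
    return [r for r in results if r["issuer_name"] not in expected]

suspicious = unexpected_certs(canned_results)
```

An interceptor can forge a cert for your users, but because CAs must log issuance to CT for browsers to accept it, they can't easily keep it out of a scan like this.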
People will argue with you because your initial quoted sentence is chock full of fallacies.
* Caddy's complexity (especially when it comes to TLS) is not arbitrary, it's to meet the needs of auto-renewal and ... y'know, hosting sites on TLS.
* Caddy's SDLC is not, as far as I understand it, especially rapid.
* Implying that "military grade" is some level of encryption beyond the minimum level of encryption you would ever want to use is silly.
* The Manjaro website is not "the equivalent of a poster", and in fact hosts operating system downloads. Operating system integrity is kinda important.
You may have reasonable arguments for sites that are display only, do not out-link, and do not provide downloads, but this is not one of those circumstances.
As many others in this conversation have asked, can we have some sources on the idea that the model is spread across chips? You keep making the claim, but no one else (myself included) has any idea where that information comes from or whether it is correct.
I was indeed wrong about 10 chips. I thought they would use llama 8B at 16-bit with a few thousand tokens of context. It turns out they used llama 8B at 3-bit with only 1k context. That made me assume they must have chained multiple chips together, since the max SRAM on TSMC N6 for a reticle-sized chip is only around 3 GB.
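The arithmetic behind that assumption is simple: 8B parameters at 16 bits is far beyond a single chip's ~3 GB of SRAM, while 3-bit quantization just squeezes in (ignoring KV cache and activations):

```python
PARAMS = 8e9  # llama 8B parameter count

def weight_gb(bits_per_param: float) -> float:
    """Model weight footprint in GB (weights only; no KV cache/activations)."""
    return PARAMS * bits_per_param / 8 / 1e9

gb_3bit = weight_gb(3)    # 3.0 GB  -> fits in ~3 GB of on-die SRAM
gb_16bit = weight_gb(16)  # 16.0 GB -> would need several chips chained
```

So with 16-bit weights, multiple chips would indeed have been necessary; at 3-bit, one chip suffices.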
Using an API key is orders of magnitude more expensive. That's the difference here. The Claude Code subscriptions are being heavily subsidized by Anthropic, which is why people want to use their subscriptions in everything else.
I think the people who use more than they pay for vastly outnumber those who pay for more than they use. It takes intention to sign up (not the default, like health care) and once you do, you quickly get in the habit of using it.
This move feels poorly timed. Between their latest ad campaign about not having ads and the goodwill they'd earned lately, my opinion of them was just decimated by this. I'm sure I'm not the only one who's still just dipping their toes into the AI pool, and I'm very much a user who underutilizes what I pay for because of that. I have several clients who are scrambling to get on board with cowork. Eliminating API usage for subscription members right before a potentially large wave of turnover not only chills that motivation, it signals a lack of faith in their own marketing, which, from my POV, put out the only AI Super Bowl campaign to escape virtually unscathed.
> the goodwill they'd earned lately in my book was just decimated by this
That sounds absurd to me. Committing to not building in advertising is very important and fundamental to me. Asking people who pay for a personal subscription, rather than paying by the API call, to use that subscription themselves sounds to me like it is just clarifying the social compact that was already implied.
I WANT to be able to pay a subscription price. Rather like the way I pay for my internet connectivity with a fixed monthly bill. If I had to pay per packet transmitted, I would have to stop and think about it every time I decided to download a large file or watch a movie. Sure, someone with extremely heavy usage might not be able to use a normal consumer internet subscription; but it works fine for my personal use. I like having the option for my AI usage to operate the same way.
The problem with fixed subscriptions in this model is that the service has an actual consumption cost. For something like internet service, the cost is primarily maintenance, unless the infrastructure is being expanded. But using LLMs is more like using water, where the more you use it, the greater the depletion of a resource (electricity in this case, which is likely being produced with fossil fuel which has to be sourced and transported, etc). Anthropic et al would be setting themselves up for a fall if they allow wholesale use at a fixed price.
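The economics of flat-rate pricing over a metered resource come down to a break-even point. A toy calculation with made-up numbers (neither the subscription price nor the per-token price here is Anthropic's actual pricing):

```python
# Illustrative, assumed numbers -- not real pricing.
SUBSCRIPTION_USD = 20.0
API_USD_PER_MTOK = 15.0  # blended input/output price per million tokens

def breakeven_mtok(sub: float = SUBSCRIPTION_USD,
                   per_mtok: float = API_USD_PER_MTOK) -> float:
    """Millions of tokens at which metered cost equals the flat fee."""
    return sub / per_mtok

threshold = breakeven_mtok()
# Under these assumed prices, any user consuming more than ~1.33M
# tokens/month costs the provider more than the subscription brings in.
```

Internet service doesn't have this shape because its marginal cost per packet is near zero; inference has a real per-token cost, so heavy flat-rate users are directly unprofitable.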
It would not completely de-legitimize it. Maybe a government doesn't want anyone to know they are surveilling a suspect. But it definitely would reduce cash flow at commercial spyware companies, which could put some out of business.
It feels like Anthropic's models from 6 months ago. I mean, it's great progress in the open weight world, but I don't have time to use anything less than the very best for the coding I do. At the same time, if Anthropic and OpenAI disappeared tomorrow, I could survive with GLM-5.
Claude: you get rate-limited after one prompt, so it's hard to validate 4.6.
Codex: better with rate limits; 5.2 is strong on logic problems.
Cursor: Cursor auto is still a bit dumb, but I use it the most for writing rather than real thinking; it's also good at searching through the codebase, doing summaries, etc.
Claude / Codex still miss tons of scaffolding for sane development, or maybe it's down to sandboxes or something. For example, you ask in /plan mode to check something, giving it a link to GitHub, and it navigates GitHub via curl, hitting rate limits etc., instead of just using git clone, repomix, etc. So scaffolding still matters a lot; it still lacks a ton of common sense.
I have Claude Max plan which makes me feel like I could code anything. I'm not talking about vibe-coding greenfield projects. I mean, I can throw it in any huge project, let it figure out the architecture, how to test and run things, generate a report on where it thinks I should start... Then I start myself, while asking claude code for very very specific edits and tips.
I can also create a feedback loop and let it run wild, which also works, but that needs planning, a harness, rules, etc. Usually not worth it if you need to jump between a million things like me.
Smooth sailing and still frustrating at times. I have very high standards for the code that goes into production at my company. Nothing is getting yoloed. Everything is getting reviewed. Using Claude Code with a Max plan.
There are a lot of comments not liking Zulip. I wonder if the like/dislike feeling is tied to the size of the poster's company. My experience is that Zulip works very well in my small, fully remote three-person business. Maybe the UI workflow of Zulip breaks down with larger numbers of users?
I used it in a 3 person company and I hated it for many of the reasons stated here. The topic based UX is terrible in practice and the whole thing feels janky and ugly.
I simply do not believe that all of these people can and want to set up CI. Some maybe, but even if the agent recommends it, only a fraction of people would actually do it. Why would they?
But if you set up CI, you can pull up the mobile site on your phone, chat with Copilot about a feature, then ask it to open a PR, let CI run, iterate a couple of times, then merge the PR.
All the while you're playing Wordle and reading the news on the morning commute.
It's actually a good workflow for silly throw away stuff.
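The CI piece of that loop can be a stock GitHub Actions workflow. A minimal illustrative sketch (the job name and test command are placeholders for whatever your project uses):

```yaml
# .github/workflows/ci.yml -- runs the test suite on every PR
name: ci
on:
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test  # swap in your project's actual test entry point
```

Once this is in place, the agent's PR gets an automatic pass/fail signal you can act on from your phone.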
No, they won't be. Inference costs will continue to drop, and subscription prices will follow as AI is increasingly commoditized. There are 6 different providers among the top 10 models on OpenRouter. In a commoditized market, there will be no $60/month subscriptions.