The UX is the big benefit, especially on teams whose members may not even know what Nix is. I held off on exposing my Nix setups for a long time, but devenv has made it possible to check things in without losing a ton of time to tech support.
“Needed” is too strong, but this doesn't provide services, doesn't provide project-specific scripts, doesn't set up an LSP, doesn't set up git hooks, can't automatically dockerize your build, doesn't support multiple profiles (e.g. local and CI), etc.
Modules let you express the system in smaller, composable, reusable parts rather than expressing everything in one big file. (There are other popular tools that support modules: NixOS, home-manager, flake-parts.)
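As a rough sketch (the file names here are hypothetical), a module-based config just composes the parts:

```nix
# Hypothetical layout: each concern lives in its own file, and the
# top-level config (devenv, NixOS, and home-manager all use the same
# module mechanism) imports them.
{ ... }:
{
  imports = [
    ./languages/python.nix   # toolchain for one language
    ./services/postgres.nix  # a service another project can reuse
    ./git-hooks.nix          # shared pre-commit setup
  ];
}
```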
That devenv provides "batteries included" modules for popular languages (including linters and LSPs) is another benefit.
devenv also has tasks/services. For example, you need to start redis, then your db, then seed it, and only then start the server. All of that could be shell aliases, yeah, but if you define them as devenv tasks instead you can bring everything up with `devenv up`. It even supports dependencies between tasks ("only seed the db after migrations have run").
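A minimal devenv.nix sketch of that pipeline (the task names and the `run-migrations`/`seed-db`/`start-server` scripts are made up for illustration):

```nix
{ pkgs, ... }:
{
  # Services supervised by `devenv up`
  services.redis.enable = true;
  services.postgres.enable = true;

  # Tasks, with explicit ordering between them
  tasks."db:migrate".exec = "run-migrations";
  tasks."db:seed" = {
    exec = "seed-db";
    after = [ "db:migrate" ];  # only seed once migrations have run
  };

  # The long-running app process, also launched by `devenv up`
  processes.server.exec = "start-server";
}
```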
Right now I have bought into the Nix Kool-Aid a bit.
I have NixOS on my Linux machines and nix-darwin on my Mac.
I use Nix to install Brew and then Brew to manage casks for things like Chrome, which I'm sure updates itself. So the `flake.lock` probably isn't super accurate for the apps you described.
Who cares? Why does the world need so many fringe tools/runtimes? So much fragmentation. Why does every project have to be a long-term success? Put some stuff out of its misery. Don't waste the time of the already few open-source contributors who pour hours into something for no good reason.
If Deno moved things forward, doesn't that suggest that we do need efforts like this to support ongoing progress? There doesn't seem to be strong evidence to the contrary in the JS ecosystem.
I'd argue that the mainstream, lowest-common-denominator tools are the ones which waste people's time. (Especially when they're backed by an incumbent. Deno, on the other hand, clicked immediately.)
> Your machine runs a little slower, your bandwidth gets a little thinner, and someone halfway around the world is routing traffic through your home IP.
I wish that in 2026 the default on new computers (Windows + Mac) were not just "inbound firewall on by default" but outbound too, with users having to manually select what is allowed.
I know it's possible; it's just not the default and more of a "power user" thing at the moment. You basically have to know about it.
As a power user I agree, but how do you avoid it being like the Vista UAC popups? Everyone expects software to auto update these days and it's easy enough to social engineer someone into accepting.
Even if it were the default, there are so many services reaching out that non-technical users would get assaulted with requests from services they have no idea about. Eventually people would just click OK without reading anything, which puts you back at square one, plus the annoying friction.
I do this outbound filtering, but I don't use a computer running Windows or macOS to do it.
It doesn't make sense to expect the companies promoting Windows or macOS to let users potentially interfere with their "services" and surveillance business model.
Windows and macOS both "phone home" (unfiltered outgoing connections). If the owners of computers running these corporate OSes were given an easy way to stop this, it stands to reason that they would cut the connections back to the mothership. That means lost surveillance potential and lost revenue.
As of 2026, still nothing stops anyone from pointing the gateway of a computer running a corporate OS at a computer running a non-corporate OS that can do the outbound filtering.
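A minimal sketch of what that filtering box could look like, assuming a NixOS machine acting as the LAN gateway (the interface name and allowlist are illustrative):

```nix
# Default-deny outbound filtering on a hypothetical NixOS gateway.
# LAN clients set this box as their default route; anything not
# explicitly allowed below gets dropped on the way out.
{ ... }:
{
  boot.kernel.sysctl."net.ipv4.ip_forward" = 1;  # act as a router
  networking.nftables.enable = true;
  networking.nftables.ruleset = ''
    table inet outbound {
      chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established,related accept
        iifname "lan0" udp dport 53 accept           # DNS
        iifname "lan0" tcp dport { 80, 443 } accept  # web
      }
    }
  '';
}
```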
The writing has been on the wall since day 1. They wouldn't be marketing a subscription sold at a loss this hard if the intention wasn't to lock you in and then raise the price later.
What I expect to happen is that they'll slowly decrease the usage limits on the existing subscriptions over time, and introduce new, more expensive subscription tiers with more usage. There's a reason AI subscriptions generally don't tell you exactly what the limits are: they're intended to be "flexible" to allow for this.
I do not like reading things like this. It makes me feel very disconnected from the AI community. I defensively do not believe there exist people who would let AI do their taxes.
> How much VRAM does it take to get the 92-95% you are speaking of?
For inference, it's heavily dependent on the size of the weights (plus context). Quantizing an f32 or f16 model to q4/mxfp4 won't necessarily cut VRAM by 92-95%, but at smaller context sizes the savings track the shrinkage of the weights pretty closely.
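Back-of-the-envelope, weight memory is just parameter count × bytes per parameter. For a hypothetical 7B model (the KV cache grows on top of this with context length):

- f32: 7B × 4 bytes ≈ 28 GB
- f16: 7B × 2 bytes ≈ 14 GB
- q8: 7B × ~1 byte ≈ 7 GB
- q4: 7B × ~0.6 bytes (4-bit weights plus scales) ≈ 4-5 GB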
Thank you. Could you give a tl;dr along the lines of "the full model needs ____ VRAM, and if you apply ____ (the most common quantization method), it will run in ____ VRAM"? A rough estimate is fine.
The $20 one, but it's hobby use for me; I would probably need the $200 one if I were full time. I ran into the 5-hour limit in like 30 minutes the other day.
I've also been testing OpenClaw. It burned 8M tokens during my half hour of testing, which would have been like $50 with Opus on the API. (Which is why everyone was using it with the sub, until Anthropic apparently banned that.)
I was using GLM on Cerebras instead, so it was only $10 per half hour ;) I tried to get their Coding plan ("unlimited" for $50/mo), but it was sold out...
(My fallback: I got a whole year of GLM from ZAI for $20; it's just a bit too slow for interactive use.)
Try Codex. It's better (subjectively; objectively they're in the same ballpark), and its $20 plan is way more generous. I can use gpt-5.2 on high (I prefer overall smarter models to the -codex coding ones) almost nonstop, sometimes a few sessions in parallel, before I hit any limits (if ever).
I now have 3 x 100 plans. Only then am I able to use it full time. Otherwise I hit the limits. I am a heavy user; I often work on 5 apps at the same time.