Hacker News | MuffinFlavored's comments

Can you help me understand why devenv is needed instead of a shell like this/what is gained?

    { pkgs ? import <nixpkgs> { } }:
    
    pkgs.mkShell {
      nativeBuildInputs = with pkgs; [
        # build tools
        cmake
        ninja
        gnumake
        pkg-config
      ];
    
      buildInputs = with pkgs; [
        # java
        jdk8
    
        # compilers
        gcc
        clang
        llvmPackages.libcxx
    
        # libraries
        capstone
        icu
        openssl_3
        libusb1
        libftdi
        zlib
    
        # scripting
        (python3.withPackages (ps: with ps; [
          requests
          pyelftools
        ]))
      ];
    
      # capstone headers are in include/capstone/ but blutter expects include/
      shellHook = ''
        export CPATH="${pkgs.capstone}/include/capstone:$CPATH"
        export CPLUS_INCLUDE_PATH="${pkgs.capstone}/include/capstone:$CPLUS_INCLUDE_PATH"
      '';
    }

It is a more user friendly abstraction on top of Nix. Most people don’t want or need to understand the specifics of Nix or the Nix language.

Btw, I say this as a huge fan and heavy user of both Nix and NixOS.


To be honest, I don’t know. I just enjoy the simplicity of devenv. It’s the right amount of user friendly.

The UX is the big benefit, especially on teams who may not even know what nix is. I held off on exposing my nix setups for a long time, but devenv has made it possible to check things in without losing a ton of time to tech support.

“Needed” is too strong, but this does not provide services, does not provide project-specific scripts, does not set up an LSP, does not set up git hooks, can't automatically dockerize your build, does not support multiple profiles (e.g. local and CI), etc.

devenv lets you express shells as modules.

Modules let you express the system in smaller, composable, reusable parts rather than express everything in one big file. (There are other popular tools which support modules: NixOS, home-manager, flake-parts).

That devenv also provides "batteries included" modules for popular languages (including linters, LSPs) is also a benefit.
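To give a flavor of the module style: here is a rough sketch of how the mkShell above might look as a devenv module. Option names follow devenv's documented schema as I understand it (`packages`, `languages.*`, `enterShell`), but treat this as illustrative, not a tested drop-in replacement:

```nix
# devenv.nix -- illustrative sketch, not a verified 1:1 port
{ pkgs, ... }:

{
  # plain packages still work, much like buildInputs
  packages = with pkgs; [ cmake ninja gnumake pkg-config capstone openssl_3 ];

  # "batteries included": one line pulls in the toolchain
  # (plus LSPs/linters for languages that ship them)
  languages.python.enable = true;
  languages.java = {
    enable = true;
    jdk.package = pkgs.jdk8;
  };

  # devenv's equivalent of the shellHook in the mkShell version
  enterShell = ''
    export CPATH="${pkgs.capstone}/include/capstone:$CPATH"
  '';
}
```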


devenv also has tasks/services. For example, you need to start redis, then your db, then seed it, and only then start the server. All of that could be shell aliases, yeah, but if you define them as devenv tasks/processes you can bring them all up with `devenv up`. It even supports dependencies between tasks ("only seed the db after migrations ran").
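The redis → db → seed → server ordering might look roughly like this. The names are made up, and the exact task options (`exec`, `after`) should be double-checked against the devenv docs:

```nix
# devenv.nix fragment -- illustrative sketch only
{ ... }:

{
  services.redis.enable = true;
  services.postgres.enable = true;

  tasks."db:migrate".exec = "npm run migrate";
  tasks."db:seed" = {
    exec = "npm run seed";
    after = [ "db:migrate" ];  # only seed after migrations ran
  };

  # `devenv up` starts services and processes together
  processes.server.exec = "npm start";
}
```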

really good question.

right now I have bought into the Nix koolaid a bit.

I have NixOS Linux machines and then nix-darwin on my Mac.

I use Nix to install Brew and then Brew to manage casks for things like Chrome, which I'm sure updates itself. So the "flake.lock" probably isn't super accurate for the apps you described.


> What’s next for Deno?

Who cares? Why does the world need so many fringe tools/runtimes? So much fragmentation. Why does every project have to be a long-term success? Put some stuff out of its misery. Don't waste the time of the already few open-source contributors who pour hours into something for no good reason.


Deno is much more than a fringe tool. It's a genuine improvement in many ways.

The world doesn't need a dozen JS runtimes.

The world doesn't need a dozen JS engines.

The world doesn't need many dozens of Linux distros.

The world doesn't need a handful of BSD distros.

The world doesn't need many dozens of package managers.

The world doesn't need hundreds of JS frameworks.

The world doesn't need dozens of programming languages or chat protocols or CI/CD systems.

The world doesn't need dozens of init systems, service managers, display servers, audio stacks, universal app formats, build tools/bundlers.

Deno may have dragged the JS runtime space forward, fully agree. Maybe it served its purpose and it is time to say goodbye.


If Deno moved things forward, doesn't that suggest that we do need efforts like this to support ongoing progress? There doesn't seem to be strong evidence to the contrary in the JS ecosystem.

The world doesn't need so many people or anything they have to offer it.

I'd argue that the mainstream, lowest-common-denominator tools are the ones which waste people's time. (Especially when they're backed by an incumbent. Deno, on the other hand, clicked immediately.)

any reason why you did

    const { rows } = await client.query(
      "select id, name, last_modified from tbl where id = $1",
      [42],
    );
instead of

    const { rows } = await client.query(
      "select id, name, last_modified from tbl where id = :id",
      { id: 42 },
    );


That is the way node-postgres works. pg-typesafe adds type safety but doesn't change the node-postgres methods.
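If you want the named style anyway, a small shim can rewrite `:name` placeholders into the positional form node-postgres expects and return a `{ text, values }` query config. This is a hypothetical helper, not part of node-postgres or pg-typesafe, and it makes the simplifying choice of giving repeated names a fresh positional slot each time:

```javascript
// Rewrites ":name" placeholders into pg's positional "$1, $2, ..." style.
function named(sql, params) {
  const values = [];
  // negative lookbehind keeps Postgres "::type" casts intact
  const text = sql.replace(/(?<!:):([A-Za-z_]\w*)/g, (_, name) => {
    if (!(name in params)) throw new Error(`missing parameter: ${name}`);
    values.push(params[name]);
    return `$${values.length}`;
  });
  return { text, values };
}

// usage sketch (client.query accepts a { text, values } config object):
// const { rows } = await client.query(
//   named("select id, name from tbl where id = :id", { id: 42 }));
```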


Is trading even a real thing? Is there really a job title "trader"? Entities really think they can outperform DCA SPY?


Yes. Traders are involved in all kinds of deals that aren't like index funds.


I guess you better go tell them they don't exist.


Who do you think makes the price of SPY change?


> Your machine runs a little slower, your bandwidth gets a little thinner, and someone halfway around the world is routing traffic through your home IP.

I wish in 2026 the default on new computers (Windows + Mac) was not only "inbound firewall on by default" but also outbound and users having to manually select what is allowed.

I know it is possible, it's just not the default and more of a "power user" thing at the moment. You have to know about it basically.


I use LuLu (https://objective-see.org/products/lulu.html) to block outgoing connections and manually select which connections/apps are allowed. It's free and works just fine.


As a power user I agree, but how do you avoid it being like the Vista UAC popups? Everyone expects software to auto update these days and it's easy enough to social engineer someone into accepting.


Even if it were the default, there are so many services reaching out that non-technical users would get assaulted with requests from services they have no idea about. Eventually people would just click OK without reading anything, which puts you back at square one, with annoying friction on top.


I do this outbound filtering, but I don't use a computer running Windows or MacOS to do it.

It doesn't make sense to expect the companies promoting Windows or MacOS to allow the user to potentially interfere with their "services" and surveillance business model.

Windows and MacOS both "phone home" (unfiltered outgoing connections). If computer owners running these corporate OSes were given an easy way to stop this, then it stands to reason that owners would stop the connections back to the mothership. That means loss of surveillance potential and lost revenue.

As of 2026, still nothing stops anyone from setting the gateway of a computer running a corporate OS to point to a computer running a non-corporate OS that can do the outbound filtering.


Fort Firewall for the win.

https://github.com/tnodir/fort


I always wondered this. Is it true/does the math really come out that bad? 6x?

Is the writing on the wall for $100-$200/mo users that it's basically known to be subsidized for now and $400/mo+ is coming sooner than we think?

Are they getting us all hooked and then going to raise it in the future, or will inference prices go down to offset?


The writing has been on the wall since day 1. They wouldn't be marketing a subscription being sold at a loss as hard as they are if the intention wasn't to lock you in and then increase the price later.

What I expect to happen is that they'll slowly decrease the usage limits on the existing subscriptions over time, and introduce new, more expensive subscription tiers with more usage. There's a reason AI subscriptions generally don't tell you exactly what the limits are: they're intended to be "flexible" to allow for this.


> Imagine if Siri could genuinely file your taxes

I do not like reading things like this. It makes me feel very disconnected from the AI community. I genuinely do not believe there exist people who would let AI do their taxes.


> You can shrink the model to a fraction of its "full" size and get 92-95% same performance, with less VRAM use.

Are there a lot of options for "how far" you quantize? How much VRAM does it take to get the 92-95% you are speaking of?


> Are there a lot of options for "how far" you quantize?

So many: https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overvie...

> How much VRAM does it take to get the 92-95% you are speaking of?

For inference, it's heavily dependent on the size of the weights (plus context). Quantizing an f32 or f16 model to q4/mxfp4 won't necessarily use 92-95% less VRAM, but it's pretty close for smaller contexts.


Thank you. Could you give a tl;dr on "the full model needs ____ this much VRAM and if you do _____ the most common quantization method it will run in ____ this much VRAM" rough estimate please?


It’s a trivial calculation to make (+/- 10%).

Number of params == “variables” in memory

VRAM footprint ~= number of params * size of a param

A 4B model at 8 bits works out to about 4GB of VRAM, the same number as the params. At 4 bits, ~2GB, and so on. Kimi is about 512GB at 4 bits.
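That back-of-the-envelope rule can be written down directly. The byte arithmetic is exact; everything else (KV cache, activations, runtime overhead) is deliberately ignored, which is why real-world numbers come out a bit higher:

```python
def weights_vram_gb(n_params: float, bits_per_param: int) -> float:
    """Rough VRAM needed just for the weights (ignores KV cache/overhead)."""
    return n_params * bits_per_param / 8 / 1e9

# 4B params: 8 bits -> ~4 GB, 4 bits -> ~2 GB
print(weights_vram_gb(4e9, 8))   # 4.0
print(weights_vram_gb(4e9, 4))   # 2.0
# a ~1T-param model (Kimi-sized, assumed) at 4 bits -> ~500 GB
print(weights_vram_gb(1e12, 4))  # 500.0
```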


Did you eventually move to a $20/mo Claude plan, $100/mo Claude plan, $200/mo, or API based? if API based, how much are you averaging a month?


The $20 one, but it's hobby use for me, would probably need the $200 one if I was full time. Ran into the 5 hour limit in like 30 minutes the other day.

I've also been testing OpenClaw. It burned 8M tokens during my half hour of testing, which would have been like $50 with Opus on the API. (Which is why everyone was using it with the sub, until Anthropic apparently banned that.)

I was using GLM on Cerebras instead, so it was only $10 per half hour ;) Tried to get their Coding plan ("unlimited" for $50/mo) but sold out...

(My fallback is I got a whole year of GLM from ZAI for $20 for the year, it's just a bit too slow for interactive use.)


Try Codex. It's better (subjectively, but objectively they are in the same ballpark), and its $20 plan is way more generous. I can use gpt-5.2 on high (prefer overall smarter models to -codex coding ones) almost nonstop, sometimes a few in parallel before I hit any limits (if ever).


I now have 3 x $100 plans. Only then am I able to use it full time. Otherwise I hit the limits. I am a heavy user, often working on 5 apps at the same time.


Shouldn't the $200 plan give you 4x? Why 3 x $100 then?


Good point. Need to look into that one. Pricing is also changing constantly with Claude.

