pxc's comments | Hacker News

> updating a dependency via poetry would take on the order of ~5-30m. God forbid after 30 minutes something didn’t resolve and you had to wait another 30 minutes to see if the change you made fixed the problem

I'd characterize that as unusable, for sure.


In Python's case, as the article describes quite clearly, the issue is that the design of "working software" (particularly setup.py) was bad to the point of insanity (in much the same way as the NPM characteristics that enabled the recent Shai Hulud supply chain attacks, but even worse). At some point, compatibility with insanity has got to go.

Helpfully, though, uv retains compatibility with newer (but still well-established) standards in the Python community that don't share this insanity!


My gripe is with Rust rewrites. Not uv. Though I very much think uv is overhyped.

Actually uv retains compatibility with the setup.py “insanity,” according to the article:

> uv parses TOML and wheel metadata natively, only spawning Python when it hits a setup.py-only package that has no other option

The article implies that pip also prefers TOML and wheel metadata, but it has to shell out to parse them, unlike uv.
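Roughly, the fallback order the article describes looks something like this (an illustrative Python sketch, not uv's actual code; the helper names are made up):

    # Illustrative sketch of "prefer static metadata, only run setup.py as a
    # last resort". Not uv's implementation; helper names are hypothetical.
    import subprocess
    import sys
    import tomllib  # Python 3.11+
    import zipfile
    from pathlib import Path

    def read_wheel_metadata(wheel_path: Path) -> str:
        # Wheels carry static metadata in <name>.dist-info/METADATA.
        with zipfile.ZipFile(wheel_path) as zf:
            name = next(n for n in zf.namelist() if n.endswith(".dist-info/METADATA"))
            return zf.read(name).decode()

    def read_pyproject_metadata(src_dir: Path) -> dict | None:
        # PEP 621 metadata in pyproject.toml can be parsed without executing any code.
        pyproject = src_dir / "pyproject.toml"
        if pyproject.exists():
            return tomllib.loads(pyproject.read_text()).get("project")
        return None

    def metadata_via_setup_py(src_dir: Path) -> Path:
        # Last resort: spawn Python and let setup.py write out egg-info metadata.
        subprocess.run([sys.executable, "setup.py", "egg_info"], cwd=src_dir, check=True)
        return next(src_dir.glob("*.egg-info/PKG-INFO"))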


Ugh. Thank you for the correction. :(

I mean, you’re on the right track in that they did cut out other insanity. But it's unclear how much of the speedup is necessarily tied to breaking backward compat (are there a lot of “.egg” files in the wild?)

> (are there a lot of “.egg” files in the wild?)

Not as far as I can tell, except perhaps in extended-support legacy environments (for example, ActiveState is still maintaining a Python 2.x distribution).


Succinctly, perhaps with some loss of detail:

"Rewrite" is important as "Rust".


as important as

whoops. That's right

That it supports fetching via Git as well as via various forge-specific tarballs, even for flakes, is pretty nice. It means that if your org uses Nix, you can fall back to distribution via Git as a solution that doesn't require you to stand up any new infra or tie you to any particular vendor, but once you get rolling, it's an easy optimization to switch to downloading snapshots.

The most pain probably just comes from the hugeness of Nixpkgs, but I remain an advocate for the huge monorepo of build recipes.


Yes agreed. It’s possible to imagine some kind of cached-deltas scheme to get faster/smaller updates, but I suspect the folks who would have to build and maintain that are all on gigabit internet connections and don’t feel the complexity is worth it.

> It’s possible to imagine some kind of cached-deltas scheme to get faster/smaller updates

I think the snix¹ folks are working on something like this for the binary caches— the greater granularity of the content-addressing offers morally the same kind of optimization as delta RPMs: you can download less of what you don't need to re-download.
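The idea, as a toy sketch (not snix's actual design; the chunking scheme and names are made up): with content addressing at chunk granularity, a client that already holds most of a store path only needs to fetch the chunks it's missing.

    # Toy illustration of the delta-RPM-style win from finer-grained content
    # addressing. Fixed-size chunking for simplicity; real systems typically
    # use content-defined chunking.
    import hashlib

    CHUNK_SIZE = 64 * 1024

    def chunk_hashes(data: bytes) -> list[str]:
        return [
            hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
            for i in range(0, len(data), CHUNK_SIZE)
        ]

    def plan_download(manifest: list[str], local_chunks: set[str]) -> list[str]:
        # Only the chunks we don't already have need to cross the network.
        return [h for h in manifest if h not in local_chunks]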

But I'm not aware of any current efforts to let people download the Nixpkgs tree itself more efficiently. Somehow caching Git deltas would be cool. But I'd expect that kind of optimization to come from a company that runs a Git forge, if it's generally viable, and to benefit many projects other than Nix and Nixpkgs.

--

1: https://snix.dev/


Yes indeed. That said, Nix typically throws away the .git dir, so it would require some work to adapt a solution that operates at the Git repo level to Nix.

The ideal for Nix would be “I have all content at commit X and need the deltas for content at commit Y,” and I suspect Nix would be fairly unique in being able to benefit from that. To the point that it might actually make sense to just implement full Git repo syncs and have a local client serve those tarballs to the Nix daemon.
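Something like the following, as a hypothetical sketch of that local-client idea (the cache path and serving mechanism are assumptions, not an existing tool): keep a bare clone that Git syncs incrementally, and hand the Nix daemon tarballs produced locally with git archive.

    # Hypothetical sketch: maintain a bare nixpkgs clone (git fetch only
    # transfers objects we don't already have) and generate a tarball for any
    # requested rev locally instead of downloading fresh snapshots from a forge.
    import subprocess
    from pathlib import Path

    REPO = Path.home() / ".cache" / "nixpkgs-mirror.git"  # made-up location

    def sync() -> None:
        if not REPO.exists():
            subprocess.run(
                ["git", "clone", "--bare", "https://github.com/NixOS/nixpkgs", str(REPO)],
                check=True,
            )
        else:
            subprocess.run(["git", "--git-dir", str(REPO), "fetch", "origin"], check=True)

    def tarball_for(rev: str, out: Path) -> None:
        # This file could then be served to the Nix daemon in place of a
        # forge-downloaded snapshot.
        with out.open("wb") as f:
            subprocess.run(
                ["git", "--git-dir", str(REPO), "archive", "--format=tar.gz", rev],
                stdout=f,
                check=True,
            )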


Loved this article. Just enough detail to make the broad scope compatible with a reasonable length, and well-argued.

I feel sometimes like package management is a relatively second-class topic in computer science (or at least among many working programmers). But a package manager's behavior can be the difference between a grotesque, repulsive experience and a delightful, beautiful one. And there isn't yet a package manager that does well everything we've collectively learned how to do well, which makes it an interesting space imo.

Re: Nixpkgs, interestingly, pre-flakes Nix distributes all of the needed Nix expressions as tarballs, which does play nice with CDNs. It also distributes an index of the tree as a SQLite database to obviate some of the "too many files/directories" problem with enumerating files. (In the meantime, Nixpkgs has also started bucketing package directories by name prefix, too.) So maybe there was a lesson learned here that would be useful to re-learn.

On the other hand, IIRC if you use the GitHub fetcher rather than the Git one, including for fetching flakes, Nix will download tarballs from GitHub instead of doing clones. Regardless, downloading and unpacking Nixpkgs has become kinda slow. :-\


> Why ever use Ruby when there’s a virtually identical system [...] with a bigger community.

There was a time in the history of Python when people who chose Python did so primarily because they found it beautiful or pleasant to work with. These are reasonable factors in choosing a language, and they continue to be popular reasons for choosing relatively unpopular languages today.

A related essay has made the rounds on HN before. It might be worth revisiting if this question is on your mind: https://www.johndcook.com/blog/2011/10/26/python-is-a-volunt...


> Most people don’t even have an internal monologue

Is there any scientific indication that whether private thoughts are automatically verbalized actually has an impact on cognitive activity or function?

Also where do you get this idea that most people lack an internal monologue? Afaik research indicates that totally lacking verbal thinking is very rare.


There's one person at the bus/rail stop thinking about how to solve actual problems. The other is totally reactive (someone FaceTimes them), mostly glued to doomscrolling (consuming nonstop). There are disproportionately more of the latter than the former.

There’s nothing wrong with that; it’s just how humans are wired. It’s pretty obvious.


You can pay with an envelope of cash, so they don't need your banking data to begin with.

Perhaps so, but that's damn difficult or very risky for all but a very select few.

Because you can't mail cash? Or because it won't be delivered without a return address?

Indeed, the whole thing reads like it was written by an LLM.

They're in the gradual process of open-sourcing their driver stack by moving the bits they want to keep proprietary into the firmware and hardware, much like AMD did many years ago.

It takes a long time to become mature, but it's a good strategy. NVIDIA GPUs will probably have pretty usable open-source community drivers in 5 years or so.

