
I have only ever really used venv. Poetry was fine but didn't give me any additional benefits that I could see. What does this offer? And more broadly, why do people consider pip to be a problem? I have literally never had any issues with it in any of my projects.





Personally, I use it for everything right now. It's faster to do `uv init`, then add your dependencies with `uv add`, and then just `uv run <whatever>`. You can argue that poetry does the same, but `uv` also has a pipx alternative, which I find myself using more than the package manager that my distro offers, since I've never had compatibility issues with packages.
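
A rough sketch of that workflow (the project and package names are just examples):

    uv init myproject && cd myproject
    uv add requests                      # resolves and records it in pyproject.toml + uv.lock
    uv run python -c "import requests"   # runs inside the project venv, no activation needed
    uvx ruff check .                     # the pipx-style one-off tool run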

The main advantage these tools (poetry, pipenv, uv, ...) offer is that they let you create lock files, which make your venv reproducible. Without that, you are kind of living in the wild west, where tomorrow your project can break. These tools help with projects that are supposed to do more than "runs on my machine".

Maybe I'm not fully grasping lock files then, but why isn't it sufficient to just pin the versions in the requirements.txt? Obviously it doesn't handle the Python version itself, but I just use pyenv for that. So my stance is that pyenv + venv seems to solve those problems. But then I see people singing the praises of these newer tools and I wonder what I'm not getting.

I was in the same position and had to learn the answers myself. It still doesn't really matter very much for me - I do more library than application development, and my applications would probably generally be fine with a wide range of dependency versions - if they even have dependencies. In short, different people have very different use cases, and you might simply not have the use cases that drive so many others to the tools in question. But it's still useful to understand the theory.

>why isn't it sufficient to just pin the versions in the requirements.txt?

Because your dependencies have dependencies.

You can, in fact, pin those as well explicitly, and as long as what you pin is a valid solution, Pip will (to my understanding; I haven't done a thorough, explicit test) happily grab exactly what you asked for. And as long as the version number is enough information, that will work for your application.

But some people also want to ensure they use specific exact builds of a package, and verify their hashes. Some of them might be using private indexes (or even mixing and matching with PyPI) and need to worry about supply chain attacks. And at any rate they don't want to account for all the transitive dependencies manually.

In principle, a lock file is a record of such a solution, including all the transitive dependencies, plus file hashes, etc. There isn't perfect agreement about what needs to be in them, which is why standardizing a lock file format has been attempted a few times and is still ongoing (the current effort is https://peps.python.org/pep-0751/ ; see the linked threads for just some of the related discussion of the concept - there is much more, in the abstract).
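
To make that concrete, a hash-checked entry in a compiled requirements file looks roughly like this (the version is illustrative and the hashes are placeholders, not real values):

    urllib3==2.2.3 \
        --hash=sha256:<hash of the wheel file> \
        --hash=sha256:<hash of the sdist>
        # via requests

Pip's hash-checking mode then refuses to install any file that doesn't match, which is what helps with the supply-chain concerns mentioned above.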


Look into concrete vs abstract dependencies:

https://martin-thoma.com/python-requirements/
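
The short version (a simplified sketch; the version numbers are only illustrative, and real compiled output is longer and usually includes hashes):

    # requirements.in - abstract: what you actually care about
    requests>=2.28

    # requirements.txt, compiled from it - concrete: the full pinned solution
    certifi==2024.8.30
    charset-normalizer==3.4.0
    idna==3.10
    requests==2.32.3
    urllib3==2.2.3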


Just to give a concrete example, it helped me with my projects involving langchain and several langchain extensions. I have a clear and consistent source of version record for each dependency.

> create lock files

That's where pip-tools comes in.
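
The pip-tools loop is roughly:

    pip-compile requirements.in            # resolve the abstract deps into a fully pinned requirements.txt
    pip-sync requirements.txt              # make the venv match that file exactly
    pip-compile --upgrade requirements.in  # later, re-resolve everything to newer versions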


And you can use uv as a full replacement for that, with fewer bugs, faster performance, and generally a more pleasant experience. pip-compile -> uv pip compile and pip-sync -> uv pip sync.

(Though I think the high level interface is the better thing to use)


No disagreement here, just pointing out that it's also possible to do with existing tools.

"existing tools" -- What do you mean by that? pip-tools [1] seem to be just another PyPI package. What makes them more available than any of the other tools? The other tools "exist" as well.

[1]: https://pypi.org/project/pip-tools/


My point was that there is a way to have lock files and keep using pip (using pip-tools), without fully switching to poetry or uv or anything else.

And uv's first release was essentially a drop-in replacement for pip-tools, just faster and less buggy.

I know. Maybe you should read the comment I replied to so you understand the context:

https://news.ycombinator.com/item?id=42415924

You will notice that it is about the ability to create lock files and nothing else.

And btw I've been using pip-tools for years and it's worked great for me.


Cohesion and simplicity of use.

uv init; uv add

No more pip freeze, source .venv/bin/activate, deactivate bullcrap.


You don't have to 'activate' anything if you don't want to. The bin/ directory inside your venv contains the binaries and script entry points for the packages installed in your virtualenv.
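
For example (Linux/macOS paths; on Windows the equivalent directory is Scripts\ instead of bin/, and my_script.py stands in for whatever you run):

    python -m venv .venv
    .venv/bin/pip install requests    # no activation involved
    .venv/bin/python my_script.py     # runs with the venv's packages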

Yeah, and I don't "have" to use Make and build systems either, but it makes my life easier and bullshit free.

Why is it better if every command you use starts with `uv`? Why is it "bullcrap" the other way?

It’s not the “uv” part that is important. It’s the focus on simplifying things you do thousands of times a week.

How does putting "uv" in front of the commands that you use simplify them?

uv init; uv add requests and you automatically get an environment that can be easily shared between team members with predictable locking, with no “source .venv/bin/activate” bullshit.

Activating the venv only changes some environment variables. You can, as explained before, use the environment's Python executable directly instead. Activation does allow other tools to use the venv without having to know anything about it. The point is that you aren't then putting `uv run` in front of every command, because you don't need to have an integrator program that's aware of the venv being there.
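
Roughly, all the activate script does is something like this (simplified sketch; the real script also remembers the old values so `deactivate` can restore them, and tweaks your prompt):

    export VIRTUAL_ENV="/path/to/.venv"
    export PATH="$VIRTUAL_ENV/bin:$PATH"
    unset PYTHONHOME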

If you had a bad experience with a different locking tool, sorry to hear it, but that has absolutely nothing to do with the venv itself nor its activation script.


> What does this offer?

Referring to the entire category in general: mainly, it offers to keep track of what you've installed, and help you set up reproducible scripts to install the same set of dependencies. Depending on the specific tool, additional functionality can vary widely. (Which is part of why there's no standard: there's no agreement on what the additional functionality should be.)

> why do people consider pip to be a problem?

Many problems with Pip are really problems with the underlying packaging standards. But Pip introduces a lot of its own problems as well:

* For years, everyone was expected to use a workflow whereby Pip is copied into each new venv (this is surprisingly slow - over 3 seconds on my machine); users have accidentally invented a million different ways (mainly platform-specific) for `pip` to refer to a different environment than `python` does, causing confusion. (For a while, Setuptools was also copied in by default and now it isn't any more, causing more confusion.) You don't need to do this - since 22.3, Pip has improved support for installing cross-environment, and IMX it Just Works - but people don't seem to know about it.
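
For instance, a sketch of the post-22.3 approach (assuming a Unix layout):

    python -m venv --without-pip .venv               # skip copying pip into the new venv
    pip --python .venv/bin/python install requests   # use an existing pip to install into it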

* Pip builds projects from sdists (thereby potentially running arbitrary code, before the user has had a chance to inspect anything - which is why this is much worse than the fact that the library itself is arbitrary code that you'll import and use later) with very little provocation. It even does this when you explicitly ask it just to download a package without installing it; and it does so in order to verify metadata (i.e., to check that building the sdist would result in an installable wheel with the right name and version). There's wide consensus that it doesn't really need to do this, but the internals aren't designed to make it an easy fix. I have an entire blog post in my planned pipeline about just this issue.
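
If you want to sidestep the build-on-download behaviour entirely, you can restrict Pip to wheels, at the cost of failing for packages that only publish sdists ("somepackage" is a placeholder):

    pip download --only-binary=:all: somepackage
    pip install --only-binary=:all: somepackage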

* Pip's algorithm for resolving package dependencies is thorough at the cost of speed. Depending on the kind of packages you use, it will often download multiple versions of a package as sdists, build them, check the resulting metadata, discover that this version isn't usable (either it doesn't satisfy something else's requirement, or its own requirements are incompatible) and try again. (This is partly due to how Python's import system, itself, works; you can't properly support multiple versions of the same library in the same environment, because the `import` syntax doesn't give you a clean way to specify which one you want.)

* Because of how the metadata works, you can't retroactively patch up the metadata of already-published versions when, for example, you find out that they rely on something that's deprecated in a new Python release. This interacts especially badly with the previous point in some cases, and explaining it is beyond the scope of a post here; see for example https://iscinumpy.dev/post/bound-version-constraints/ for a proper explanation.

But the main reason why you'd use a package manager rather than directly working with Pip, is that Pip is only installing the packages, not, well, managing them. Pip has some record of which packages depend on which others, but it won't "garbage-collect" for you - if something was installed indirectly as a dependency, and then everything that depends on it is removed, the dependency is still there. Further, trying to upgrade stuff could, to my understanding, cause breakages if the dependency situation is complex enough. And above all of that, you're on your own for remembering why you installed any given thing into the current environment, or figuring out whether it's still needed. Which is important if you want to distribute your code, without expecting your users to recreate your entire environment (which might contain irrelevant things).
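
A concrete illustration of that last point (the exact dependency list varies by requests version):

    pip install requests      # also pulls in urllib3, idna, certifi, charset-normalizer
    pip uninstall requests    # removes only requests itself
    pip list                  # the transitive dependencies are still there, now orphaned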



