However, Kenneth abused his position with PyPA (and quickly bumped what was a beta product to version 18) to imply Pipenv was more stable, more supported and more official than it really was.
And worse still, for anyone saying "but it's open source, you get what you pay for": Kenneth, as former Python Overlord at Heroku, encouraged Heroku to place Pipenv above Pip as the default Python package manager in the Python buildpack. This decision impacted paying customers, and the Python buildpack used a broken version of Pipenv for a long time. So long that most people I know just went back to Pip.
Then, lastly, when people complained, he had a tizzy on Reddit and Twitter and got PyPA to help backtrack and say "no, we didn't support it, nope, it's just a thing that happened", all while the main Pipenv GitHub repository was held under the PyPA GitHub Org.
There's been a lot of work on Pipenv over the last 6 months, predominantly by Dan Ryan and Tzu-Ping Chung, and it's getting stronger and stronger with each release.
If you've gone back to using pip I'd encourage you to give Pipenv another try. Introducing a lockfile is a big step forward for Python dependency management, and the team working on Pipenv are committed and doing a great job.
I don't deny that; what I am (and the article is) saying is that we were sold on Pipenv being "the officially recommended Python packaging tool from Python.org".
And PyPA didn't refute it, and Heroku didn't refute it, so the community bought it.
Yes, introducing a lockfile is huge, and it was massively needed, and that's why, when we were told "here's the official way to do it", we got excited. Then we got daily breaking updates, rude issue-close messages, and a giant backtrack of "it's free and still under development, why do you expect so much from us?"
I don't think that the parent disagreed with that. The point, as I understood it, was that this beta stage improvement was marketed as being ready. IOW, if pipenv was not Kenneth's project, it likely would have evolved in, to use your phrase, a straighter line.
That's kind of the problem. Why on earth are libraries and apps getting different treatment? The JS ecosystem manages to have one tool for apps and libraries. One tool for installing and publishing. All of it with lockfile support, workspace/"virtualenv" support, etc. And somehow, it's not confusing.
Adding one more tool to the stack is a really funky step forward. Yes, it brings lockfiles. Cool, although we already kind of had those (pip freeze). Packaging in Python is a mess and I'm more and more in the camp that as long as Pipenv keeps flouting itself as "the better solution", all the while not covering all the basic use cases, we've gone backwards and are in even more of a mess.
This is a solved problem across a variety of popular and mainstream programming languages. I don't mean to suggest that the problem isn't complicated, but it's not as if there isn't a wealth of previously written solutions to look at for inspiration.
My Java app declares its dependencies in a build.sbt file using Scala syntax and has them cached in an Ivy directory. Yours declares them in a pom file using XML syntax and has them cached in a Maven directory. Neither of us even tries to do lockfiles and instead just has the CI server build an "uber jar" with all the dependencies bundled in.
I get why Python needs lockfiles, but goddamn, that need is a symptom of the mess of managing Python dependencies.
There's still a long way to go - I use Airflow for ETL management, and I'm using pipenv to manage it - except Pipenv can't create a lockfile because a dependency of a dependency of Airflow requires flask-wtf <= 0.7, while another dependency of a dependency requires ==0.8
In a Maven pom.xml I can easily exclude or override the conflicting dependency manually if needed, but I can't in a Pipfile.
Well, that seems naive. Almost 1/4th of Maven Central libraries broke binary compatibility in patch updates: https://avandeursen.com/2014/10/09/semantic-versioning-in-ma...
Netflix, at least, doesn't agree that Java doesn't need lockfiles: https://github.com/nebula-plugins/gradle-dependency-lock-plu...
Yes, which is part of the reason why lots of Java projects almost never update their dependencies, because no one remembers why that version was chosen. Splitting what you want from what you have is important to communicate this.
> There's still a long way to go - I use Airflow for ETL management, and I'm using pipenv to manage it - except Pipenv can't create a lockfile because a dependency of a dependency of Airflow requires flask-wtf <= 0.7, while another dependency of a dependency requires ==0.8
I'm not sure what you expect it would be able to do here...
Most shops I know use one of maven or gradle. I can’t think of any other serious contenders in Java (if you still use ant: please consider alternatives).
Poetry looks like it did just that though and I'm warming up to it at a very high speed.
Boot, and Lein. I'm partial to boot lately.
You wouldn't write your application to support 5 versions of Django, but you _probably_ would do so for a library.
That said, I do basically agree about `pip` existing already. We could have built a tool to manipulate `requirements.txt` files instead of introducing another format and a toolchain that is _much_ slower and more brittle. Though ultimately at this point Python packaging woes feel like they are at a much lower level (the fact that libraries end up being "installed" means that preparing all your dependencies to go out to multiple servers is a mess).
The "library" workflow works for applications too. Put your direct dependencies in setup.py. Build wheels of everything and upload them to an internal PyPI server. Pin everything in requirements.txt.
There's pip-compile from https://pypi.org/project/pip-tools/ that does exactly that. Pipenv uses its resolving mechanism if I'm not mistaken. It produces standard req.txt file with versions pinned and supports separate dev requirements. It had some bug with upgrades last time I checked though, not sure whether it's resolved, currently considering using it for projects at work.
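For anyone who hasn't tried it, the flow is roughly this (a minimal sketch; the .in file name is just the usual convention, not required by the tool):

    pip-compile requirements.in      # reads your direct deps, resolves and pins everything into requirements.txt
    pip-sync requirements.txt        # makes the current virtualenv match the pinned file exactly

You keep editing requirements.in by hand and re-run pip-compile when something changes; requirements.txt stays machine-generated.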
That would be weird. Perhaps you mean 'flaunting'?
Huh? I'm not really familiar with the state of dependency management for Python/dynamic languages but... there's much more out there beyond just lockfiles. I'm a bit appalled Python is so far behind.
Much more in the field of build tooling/package management. Pinning versions is fine, but dependency resolution is another legitimate choice.
Yet, I think PyPA has not been making the best decisions regarding Python packaging.
Your Kenneth story is not the only "weird event" in their history.
Did you know that we don't need "pyproject.toml" at all? That there is already a production-ready plain-text standard to replace setup.py?
Did you know that this standard has been working perfectly for TWO YEARS with regular setuptools, is incredibly simple to use and completely compatible with the standard "setup.py stuff" workflow (and hence the whole legacy tool stack)?
Yep. And nobody talks about it.
Let me (re)introduce...
Oh, I know... Most people believe it's a useless file.
After all, the Python documentation rarely mentions it, and then only for one tiny option.
But no. setup.cfg is awesome!
Put one line in setup.py:
import setuptools; setuptools.setup()
Not only has it been working since 2016, it also has fantastic goodies:
version = attr: src.__version__
"license = file: LICENCE.txt"
Try it, it just works. And you can "python setup.py sdist upload" as usual with it. You don't need any new tool.
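To make it concrete, here is a minimal sketch (the project name and package layout are made up; the section and key names are the documented setuptools ones):

setup.py:

    import setuptools; setuptools.setup()

setup.cfg:

    [metadata]
    name = mylib
    version = attr: src.__version__
    long_description = file: README.rst

    [options]
    packages = find:
    install_requires =
        requests

Then pip install -e ., sdist, wheel, etc. all work exactly as before.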
Now why did the PyPA decide to forget about this and create a new format? The short explanation in PEP 518 is a bad joke. And why does nobody talk about it?
When I asked the PyPA, they told me they were too invested in the new project to stop now. I don't like this answer at all: we suffered enough with python packaging during the "distutils, eggs, etc" fiasco.
setup.cfg works. It works now. It's nice. It's compatible. It does what we need.
Use it. Talk about it. Write about it.
Make sure a lot of people know, so that tool makers and the PyPA finally acknowledge that there is no need for the XKCD comic about standards to come true again.
Here are some examples of my libs/apps using it in the real world, if someone needs references for how to use setup.cfg with an empty or near-empty setup.py:
Edit: I see you mention attr: src.__version__. I personally prefer doing it the other way around, with the version defined in setup.cfg and a pkg_resources snippet in __init__.py (https://github.com/HearthSim/python-hearthstone/blob/master/...).
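For anyone wondering what that looks like, the general pattern is something like this (distribution name is illustrative, not the exact snippet from that repo):

    from pkg_resources import DistributionNotFound, get_distribution

    try:
        __version__ = get_distribution("mylib").version
    except DistributionNotFound:
        # not installed, e.g. running from a plain source checkout
        __version__ = None

so the version lives only in the packaging metadata and the code just reads it back.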
To be honest I wish __version__ were automatically defined like that (but more reliably). Do you know if this was discussed in a PEP?
- there are things you can't do with pyproject.toml that you can with setup.cfg. E.g. you can't use pyproject.toml with legacy tools, or with just a fresh Python setup. This may change in the future, but it would require a lot of effort because changing setuptools is a very tedious process.
- resources (time, people, money, documentation, public attention, communication, etc.) invested in creating and supporting pyproject.toml could be invested in improving setup.cfg and its ecosystem support. E.g. why does poetry support pyproject.toml and not setup.cfg? No technical reason. Why does nobody know this easy way to package Python libs? No technical reason either.
So not only does the new format bring nothing to the table, it is also a setback, AND adds clutter to a situation that was just beginning to be solved. It's not just poor engineering, it's poor manners really.
I've been coding in Python for 15 years. I've lived this: https://stackoverflow.com/a/14753678
Stop the pain.
Second, the format of setup.cfg is already defined in documentation, so there is a reference outside of configparser. Yes, the low-level format is not explicitly defined (although it is implicitly): so let's define it instead of creating a new one.
Third, it's still a much easier and saner task to refine the definition of setup.cfg than to create a new standard. I don't even understand how this is controversial, especially among people in the computing world, where we've had these kinds of problems for decades and we know the pros, the cons, and the consequences.
The "I'll add my little format because it's pure and better and current things suck" fallacy is such a stereotype we should all be able to recognize it from miles away by now.
I wouldn't call comments an edge case. The distutils documentation has a definition for comments, but I think it actually just uses configparser. setuptools just uses configparser. The pbr documentation has a slightly different definition, but I wouldn't be surprised if it just uses configparser too.
They also have different definitions of non-string types.
Even if you call those edge cases, do you think a PEP that turned edge cases into silent errors would be approved?
The most used one in the world: https://github.com/travis-ci/dpl/issues/822
Also the last time I used tox, anything complex didn't work either.
> Even if you call those edge cases, do you think a PEP that turned edge cases into silent errors would be approved?
Well the current PEP decided to turn a packaging situation that was stable into one that was not, again, after 15 years of mess with many versions of things. So you tell me.
Check the usage stats I posted in another comment to see the problem.
Besides, yes, we do make compromises on best practices to allow peaceful transitions all the time in Python: `async`/`await` silently allowed as variable names, the non-UTF-8 default encoding argument in open() on Windows. Then... we fix it later.
Because I think you conveniently skip a lot of things I wrote in my comments. I clearly state that we would and should consider setup.cfg as a version 1 of the format. Then we would increment on that. I gave a detailed procedure on one way to do that, and there are others.
The point is, all your concerns can be addressed with a progressive transition, starting from setup.cfg. Actually we could even end up with a TOML format in setup.cfg, __in the long run__, that matches exactly the current one.
While you addressed none of our concerns. You just reject them. No will to even recognize there is a problem. It's insulting, really.
We did that during the 2/3 transition. Didn't work so well, did it?
Oh but we do. Then we rationalize it away, because "this time...". Like we do for Big Rewrites.
It might have something to do with the fact that programming is mostly a craft you learn by doing it, so we overvalue "doing it again" because that's how we usually get better.
> Use it. Talk about it. Write about it.
Still happily using plain setuptools for library development and pip-tools for application development.
The reasoning is good, but we were just arriving to the point that every Python tool out there is either compatible with tox.ini, setup.cfg, or both (much like the JS ecosystem has tools reading from package.json).
Now we have both Pipfile and pyproject.toml on top of it!
For a language that prides itself on its stability and backwards compatibility (especially when compared to the JS ecosystem), we churn through boilerplate files harder than Google churns through instant messaging apps.
This can be done with setup.cfg. Setuptools is only a backend supporting it. You can create other ones. Poetry and pipenv could support it in a week if their authors decided to.
> The reasons for not using setup.cfg are explained in the PEP.
Those are not reasons, those are excuses. Let me quote it:
>> There are two issues with setup.cfg used by setuptools as a general format. One is that they are .ini files which have issues as mentioned in the configparser discussion above.
Not only does setup.cfg do the job within the current limitations of the INI format (while pyproject.toml still doesn't with its fancier one), but Python projects are not so complex that they require such rich content.
Besides, nothing prevents the PyPA from saying that the setup.cfg format now has a version header, with the current setup.cfg being headerless version 1, then making the header mandatory for version 2 and incrementing it to move toward TOML if we ever reach a limitation. That's how formats grow everywhere else in the world.
>> The other is that the schema for that file has never been rigorously defined and thus it's unknown which format would be safe to use going forward without potentially confusing setuptools installations.
That's incredibly dishonest, since I gave a link to a complete documentation of the format in my previous post. Besides, it's better to actually refine the existing standard if you ever find it lacking than to recreate one from scratch. While there are sometimes good reasons to do so, the latter is rarely a rational engineering decision, and most often driven by ego.
>> While keeping with tradition thanks to setup.py, it does not necessarily match what the file may contain in the future
So? How is that a problem? A standard is not meant to be set in stone. It evolves. But it can't do so if every time one has an itch, one reinvents the wheel.
The last sentence you quoted explains why they picked "pyproject" instead of "setup". It isn't why they picked TOML.
Also "higher-level things like key names" is half of the standard.
Besides, picking a new (even if better) serialization format is not a good reason to create a whole new standard with names, conventions, tooling, etc., as explained earlier.
There are sane ways to make the existing system evolve and improve incrementally, using the legacy standards that benefit from the existing situation, while allowing the improvements from the new one. All that without the madness of messing with the entire community once again after 15 years of unstable package management.
Yeah, it's less sexy than creating your new baby, yes, it's less fun than using that shiny new format (and I say that while I __love__ TOML), and it's less satisfying than having your name as the creator of a whole new jesus-saver format. But that's the mature and professional thing to do.
The "legacy standards" are subtly incompatible INI dialects that people recently started putting into the same files. The incompatibilities mostly don't matter because most tools just read their own sections. They do matter if you want to standardize them.
The only new tooling for TOML is a small library. A new INI dialect would need one too.
No, if you use any key, it won't work with setuptools.setup(), and just like Python code that doesn't run on CPython will never be popular, it will not be used.
Also, if you look at how poetry uses pyproject.toml, they just create a custom section. So basically, they don't use your standard.
> The "legacy standards" are subtly incompatible INI dialects that people recently started putting into the same files. The incompatibilities mostly don't matter because most tools just read their own sections. They do matter if you want to standardize them.
That's kinda my point about the comments and the non-string types. Standardize the status quo, then increment from that. Not sexy. Not pure. Welcome to real life.
Didn't you learn anything from the distutils/distribute/setuptools mess? From the Python 2 / Python 3 breakage?
And could you address any of my concerns instead of just attacking? Because I'm trying to address yours with solutions. You just write short bursts of "no, it's bad, we are good". That's not really giving me trust in your decisions, and it __lowers__ my confidence in pyproject.toml, because the people defending it are basically not behaving like engineers trying to solve a problem, but as salesmen trying only to defend their product.
> The only new tooling for TOML is a small library. A new INI dialect would need one too.
But we can start from a standard that works now, is used already, and is compatible with existing stacks, instead of arriving with the theoretical, untested, incompatible best thing that adds a layer on top of the mess.
- setup.py: 1,259,007 results (https://github.com/search?q=filename%3Asetup.py)
- setup.cfg: 165,716 results (https://github.com/search?q=filename%3Asetup.cfg)
- pyproject.toml: 2,137 results (https://github.com/search?q=filename%3Apyproject.toml)
Also, remember that setup.cfg is completely compatible with setup.py; the migration is painless. All the legacy tools work. Not the case with pyproject.toml.
However the default in poetry seems to be pyproject.toml... I'm confused.
Computing is not black and white, and perfect purity is only nice in "fizz buzz".
Now to be extra fun, poetry uses a custom section ([tool.poetry]) in pyproject.toml, not really the standard itself. What does that say about this format ?
ptest poetry add requests
I've never seen that error before.
Which version of Python do you use?
And feel free to create an issue on the issue tracker: https://github.com/sdispater/poetry/issues
I also am a big fan of pyenv, but that's of course to manage Python versions (not environments).
It is also great to see the author is very responsive.
My only concern is the lack of integrated "toolchain" management (which version of Python to use, something like rustup) that is cross-platform.
The only non-system package manager that provides Python and its own toolchains (for Linux and macOS presently), which are used to compile every C, C++ and Fortran package, including Python itself, is conda and the Anaconda Distribution.
Not doing this leads to static linking and that's inefficient and insecure.
Disclaimer: I work for Anaconda Inc.
Nix is also perfectly usable without NixOS, and provides all of that, but has far more non-Python libraries and applications packaged. It's also not constantly trying to sell you an enterprise version...
Not sure we constantly try to sell our Enterprise product. You could look at it less cynically: we sell an Enterprise product to allow us to provide the Anaconda Distribution for free.
It runs fine on macOS. It works on WSL if you disable SQLite's write-ahead log (`echo "use-sqlite-wal = false" > /etc/nix/nix.conf` before installing), but it's much slower than running it on native Linux.
> What's the oldest Linux distro upon which it will run?
It brings its own libraries, so the primary question would be what kernel you use. I haven't verified any specific version, but you'll probably be fine. You might need to disable sandboxing though, since that makes pretty elaborate use of the various namespace systems.
There's no "native" Windows support (yet), but I think it might work with some of the UNIX emulations (cygwin, mingw, wsl, etc.)
Not sure what the oldest working Linux version would be. However, NixOS has been around since 2003, so maybe quite old.
While I'm generally happy with it, some gripes:
- Using its own package format with its own repos means that for many (most) projects you can't get all dependencies from conda; you have to get some from PyPI as well.
- And it doesn't keep track of which files belong to which package. So package X will happily scribble over files installed by package Y, and vice versa, leading to either X or Y being silently broken depending on the order they were installed in! Argh! I mean, this is something dpkg/rpm/etc. figured out decades ago, it's not rocket science.
- The dependency solver seems a bit weird. Often when upgrading an environment, it will install the same version of a package with another 'build tag', then a few days later if you upgrade again, it will downgrade back to the previous build tag. Not sure if this is the fault of the dependency solver, or whether the problem is in the packages themselves.
- Similarly, there's a lot of mutual incompatibility in the repos. E.g. dependencies on openssl versions prevent upgrading, or require removal of some package etc. I think this is not so much the fault of the conda tool itself, but rather that Anaconda Inc. needs to be more picky wrt packaging policy. Again, Linux distros have been pretty good at this. E.g. https://www.debian.org/doc/debian-policy/ , https://docs.fedoraproject.org/en-US/packaging-guidelines/ .
PS: While I have above mentioned dpkg/rpm as examples to follow, it's not like those formats don't have problems either. https://nixos.org/nix/ and https://www.gnu.org/software/guix/ are perhaps the most prominent examples of 'next generation' packaging systems solving some of the problems of the old-school dpkg/rpm approaches.
I've been using conda for ~4 years now, and every single complaint lodged against any of the other package managers was never an issue with conda in the first place. And yet, it seems like there's a SEP field around it and people just ignore its existence?
In this thread, for the first time, I've seen someone mention that you might have problem porting a conda env from a Mac to Linux - never had that problem myself, but I guess it's possible; But that's easily solvable, and certainly does not require a new package manager?
1. On Lustre file systems, 'Solving environment...' can take minutes. I don't like waiting minutes to provide permission to install packages.
2. A lot of conda packages are broken. It's managed by maintainers, and unfortunately, some people maintaining believe that if it works on their machine, it will work elsewhere. Since I'm tired of ABI and runtime linking errors, I often just install from source.
W.r.t. Lustre - can't comment about that; I prefer local file systems for development for various reasons (most importantly: mmapping huge local files is 10x to 100x more efficient than going through networked file systems).
Publish on PyPI; it's just as usable in Conda for everyone.
But assuming you for some reason insist on publishing to the system you use - the vast majority of users don't ever publish a package; what's stopping them from using Conda?
I admit I have never tried to publish anything on the Anaconda cloud, but I'm a bit surprised - I was under the impression that publishing pure python packages is simple; The requirement to do it for different python versions, though, seems perfectly warranted to me - and indeed, I ran into issues with packages on PyPI not working on specific versions (but nowhere listed as such).
Conda is also annoyingly outside the python ecosystem. It seems to want to replace pip/pypi instead of working with it.
An important thing for me is that you can disable the virtualenv management and manage it yourself (poetry makes it hard to install different python versions so I'd rather do that myself). Overall it works really well and I'd highly recommend it for general python dev.
I like the idea of pyenv, but in practice I find it to be pretty buggy (e.g. failing when installing older versions of python due to openssl build issues). I kind of wish it had some competition.
As far as #2 is concerned, does poetry allow you "ignore" or change subdependency requirements for specific packages a la maven?
Can you flesh this out a little bit? What in particular disappointed you about pipenv?
How does poetry do a better job?
So you usually couple it with something else. I use "pew".
I wish a tool would merge both.
Poetry does create virtualenvs to install dependencies on a per-project basis.
curl -sSL https://raw.githubusercontent.com/sdispater/poetry/master/get-poetry.py | python
I find npm very easy to wrap my head around. I do npm install ___ and it does a lookup in its repository and downloads it and its dependencies to node_modules. If I want to start fresh, I simply delete node_modules. Everything else "just works" when I invoke node myapp.js. Period, end of story.
pipenv masquerades as the same thing, but then there is no python_modules folder to be found. Instead it downloads everything to some obscure directory halfway across my computer. If I want to start fresh I guess I have to trust the tool's uninstall function? And it's unclear if I need sudo or not. Also I can no longer just run my program, I have to run it with pipenv now? Python requires too much cognitive burden for module/dependency/virtualenv management.
For me it's to the point that developing using a docker image with globally installed python modules is easier to manage and wrap my brain around than using pipenv/virtualenv/whatever.
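(For reference, the "obscure directory" part at least has escape hatches - these are plain pipenv commands, no sudo involved:

    pipenv --venv                 # print the path of the project's virtualenv
    pipenv --rm                   # delete that virtualenv, i.e. "start fresh"
    pipenv run python myapp.py    # run against it without activating anything

but I agree it's more to keep in your head than rm -rf node_modules.)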
> a mechanism to automatically recognize a __pypackages__ directory and prefer importing packages installed in this location over user or global site-packages. This will avoid the steps to create, activate or deactivate "virtual environments".
The speed of simply extracting a bunch of wheels directly in to `__pypackages__/$PYVER/lib/` is a huge benefit. It behaves well if you symlink from a global tree of extracted packages too, like an even simpler version of pundler¹.
If others want to play with it without too much breakage, I massaged the patch on to 3.7². The 3.8 base was a little too big a change for my liking ;)
Edit: It’s a third of a second to stand up a complete environment with 63 wheels in its dependency tree for the project I'm playing with right now.
pip install numpy
The really interesting and occasionally bothersome part of managing your virtualenv is resolution of dependencies and compatibility ranges (hard to solve in general).
pip install --target deps -r requirements.txt
PYTHONPATH=deps python myapp.py
A clean reinstall is easy from there: remove the .env and just repeat.
I feel like pipenv violates KISS, and that a more traditional virtualenv/venv/pyenv setup is the way to go.
Repeat what, manually doing a bazillion `pip install`? Another nice thing about npm is the package.json file it creates. This allows us to simply add that to version control, and then all the new dev has to do is clone and run `npm install`, which reads package.json and installs everything inside of it. I'm sure there's a way to do it in python but, like everything else, I bet it's a non-intuitive multistep process.
First-time installation and cleanup with npm:

npm install express --save
rm -rf node_modules

The equivalent with pip/virtualenv:

pip3 install requests
pip3 freeze > requirements.txt
pip install -r requirements.txt
rm -rf .env
[exit bash to clear environment]
And yes, aliases and bash scripts help a lot, but do increase initial overhead. I have the entire "delete create install" sequence in an alias as well as activate in another.
But envs are just files, you can even skip activate and just do `.env/bin/python` and it works. That's powerful in a linux shell because now I can just use that environment like a regular executable from anywhere, no global installs required.
I can appreciate preferring something less manual even if I don't! Ultimately, your machine, your code.
Wat? just deactivate before you rm.
Please do not talk about things you clearly don't know well enough (which is patently the case if you can't remember freeze and deactivate).
> Out-of-the-box cognitive burden is several times greater
You got any stats on that, or is it just your opinion? Because to me, the completely counter-intuitive --save parameter is much more painful to remember.
The venv/pip workflow is not perfect, but what you've described is not the problem.
Ok sure, but starting clean is still a 2 step process (deactivate, remove)
> Please do not talk about things you clearly don't know well enough (which is patently the case if you can't remember freeze and deactivate).
> You got any stats on that, or is it just your opinion?
My opinion, of course, but shared by the dozens of people who upvoted my OP.
> Because to me, the completely counter-intuitive --save parameter is much more painful to remember.
In what way is --save counter-intuitive? If anything pip freeze is counter intuitive. How does it know what to save? Does it just save everything you've ever installed? What if you don't want to save everything, just a few of them?
Also, --save and --save-dev allow you to segregate developer dependencies from production dependencies. Is there a way to do that with python? Again, just a guess, but it's probably going to be an unintuitive 3-4 step process that I'll no doubt find on stack overflow.
And still, you are here trying to measure the length of your "commands" with others' "commands".
> My opinion, of course, but shared by the dozens of people who upvoted my OP.
Ah great, engineering by acclamation. That usually ends well. That's how we got pipenv, btw: a popular developer stood up and declared "I'll fix it!", to general acclaim from the Powers That Be... and then things broke harder, and here we are.
That's precisely the perspective that led us to the mess that is pipenv: "npm is the model, we should all be like npm". Except npm fundamentally serves only a few specific needs, and was built on the lawless prairies of an ecosystem with limited aims, no stdlib, and without 28 years of accumulated legacy practices; whereas python has been pulled in every direction for literally decades, and now has to herd all that legacy into something more coherent, slowly (because this or that constituency will be ready to scream about breaking compatibility, as we've just had to endure for about 10 years with py3) and correctly - to avoid ending up in situations like the periodic breakage that happens in npm because this or that package has misbehaved.
> In what way is --save counter-intuitive?
"I've already told you to install, why should I repeat the concept? Are you really so dumb a 'manager' that you would ignore what you just installed?"
And btw, Stack Overflow says --save is actually obsolete since 2013 at least, so it looks like you don't know npm very well either. Maybe we should just give up and build an AI that learns development from SO, and find ourselves more meaningful jobs.
"edited Sep 18 at 18:15"
`5.0.0` was introduced May 25, 2017. Most linux distributions have not picked it up yet in their repos. Ubuntu 18.04 (released this year) is still on npm 3.5. It's unreasonable to expect a developer to be familiar with the bleeding edge, especially when existing projects are locked into using older versions.
As for sudo, there is no situation in which you should use sudo with pip.
> ...my fame, while certainly categorized under “cult of personality” is not necessarily accidental. It’s called marketing. I worked very hard at becoming well known within the Python community, and toiled away at it for years.
I see this as the real issue, which led to premature adoption of a tool that wasn't stable, and the subsequent backlash.
Also, I have been using virtualenv for years and if I want to freeze my dependencies, running `pip freeze > requirements.txt` is sufficient for me.
Pipenv, on the other hand, was not only slower, it _failed_ to actually resolve our dependencies correctly. And looking up the issues, I can echo that the development team seems a bit defensive and dismissive.
We’ve been using Pipenv, but it is atrociously slow and flawed at dependency resolution. An alternative is extremely welcome, e.g. Poetry, which was mentioned above.
I've been using pipenv on my local machine but then switched to virtualenv in my ec2 deployments (because all the tutorials used it).
What makes it easier for ruby? Is there just this "registry API" that has gained enough traction that everyone uses it?
For example, the popular scikit-learn package has the following in its setup.py:
if platform.python_implementation() == 'PyPy':
    SCIPY_MIN_VERSION = '1.1.0'
    NUMPY_MIN_VERSION = '1.14.0'
else:
    SCIPY_MIN_VERSION = '0.13.3'
    NUMPY_MIN_VERSION = '1.8.2'
Until this is replaced by static version numbers, and all popular packages adopt it, a registry API cannot exist as it needs to run code on your machine to figure out the dependencies.
In addition when/if I have time I'll further debug and attempt at PRs and issues to help.
What the hell is the difference? All that I have used have worked pretty much exactly the same as the others. All work just fine. I've never had a problem. I just use virtualenv since it's the oldest. I see no more reason to switch or try other options as long as it continues to deliver.
The capabilities provided by a tool like Leiningen just makes everything even tangentially related to dependency management an absolute breeze.
If the specific version of all subdependencies are pinned, then you have a mess on your hands of keeping track of what's actually required. You have to either manually maintain your requirements.txt, or you run the risk of removing a dependency and missing the removal of its subdependencies.
Further, you can't just upgrade everything, but dependencies might have conflicting version requirements for subdependencies.
One of the big selling points of pipenv is that you can pin the versions of the packages you use and their dependencies.
None of the others do this, afaik.
Pipfile.lock ensures that the tarball I download for a given package release is exactly the same as the one I downloaded during development. This closes what I consider to be a fairly significant security hole.
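If you want that property without Pipenv, plain pip can give it to you too - a rough sketch using pip-tools:

    pip-compile --generate-hashes requirements.in     # writes requirements.txt with sha256 hashes for every pin
    pip install --require-hashes -r requirements.txt  # pip refuses anything whose hash doesn't match

The lockfile format is nicer, but the security guarantee itself isn't Pipenv-specific.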
Hmm I'll admit I only started using pipenv because of that... "endorsement".
What bugs me is we have some excellent examples of good patterns… in the JS community. Pipenv promised to be Yarn and ended up way off target :/
More context from my previous post:
I use pipenv in production and testing to simplify deployment on systems that don't natively support python 3.6+. When it works it is great. When it fails, or when the cli options fight each other and try to be smart but instead form a circular firing squad, it is one of the most insanity-inducing pieces of software I have ever used. Pipenv releases have repeatedly broken CI builds for me for the past 3 months. I was so pissed with how bad it was about 9 months ago that I actually gave up trying to use it on my development machine and learned how to write gentoo ebuilds. On reflection it seems like the perfect tool for python -- if you stay on the happy path and only use it in BDFL-APPROVED ways then it can be great, but woe to the fool who wanders from the light into madness.
Overall the article seems hyperbolic, relative to the actual events cited. Packaging is a work in progress everywhere.
I can run pipenv shell to get a new shell which runs the activate script by default, giving you the worst of both worlds when it comes to virtualenv activation: the unwieldiness of a new shell, and the activate script, which the proponents of the shell spawning dislike.
Why is `pipenv shell` any worse than that? What do "proponents of shell spawning" dislike about the activate command?
For me, it modifies my existing shell in ways that I don't fully understand. While I've never had any issues with that, I have experienced similar problems in the past with RVM and they were incredibly difficult to debug.
Using the OS's tooling to create an isolated environment seems to fit much better with the "Unix philosophy" than modifying the existing one.
That is only required obviously if you're using bash scripts which can't be directed to an actual python binary.
`deactivate` changes `$PATH` back to what it was when you ran `activate` for example, which isn’t likely to be what you want if you’ve changed it since. At least with a subshell you can know what state you’re returning to with <C-d>.
That is part of the problem with the `virtualenv` story in my eyes. It offers the illusion of isolation, but falls down in quite a few ways which are annoying when they do pop up.
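For what it's worth, activate isn't magic; stripped down it does roughly this (a simplified sketch - the real script saves and restores a few more variables, plus your prompt):

    _OLD_VIRTUAL_PATH="$PATH"
    VIRTUAL_ENV="/path/to/project/.env"
    export VIRTUAL_ENV
    PATH="$VIRTUAL_ENV/bin:$PATH"
    export PATH
    # `deactivate` is defined as a shell function that puts the saved values back

which is exactly why deactivate can clobber changes you made to $PATH in between.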
update-deps:
    pip-compile --upgrade --generate-hashes --output-file requirements/main.txt requirements/main.in
    pip-compile --upgrade --generate-hashes --output-file requirements/dev.txt requirements/dev.in

init:
    pip install --editable .
    pip install --upgrade -r requirements/main.txt -r requirements/dev.txt
    rm -rf .tox

update: update-deps init

.PHONY: update-deps init update
Working in a team it is often useful to have your versions "pinned" down so you have a reproducible environment regardless of upstream backwards compatibility policy (just to name one example).
The Pipfile + Pipfile.lock combo seemed right for the task.
Creating the virtualenv was a nice plus.
(for what it's worth, i use pipenv and rather like it.)
And then there is the speed. pipenv is pathetically slow installing packages.
And I am thinking to myself, having recently read an article about how bad npm is: "Only people who never had to deal with Python's packaging mess think npm is horrible"
The CLI can be confusing at times too, I always have to google how to create a new env or export it.
Why do you suggest conda doesn't work for that? It's one of the things conda specifically does. When recreating the same env on a different platform, it will resolve the dependencies for that platform, so there are no cross-platform issues with the underlying env unless the library being installed simply doesn't support that platform, in which case _no_ environment manager could possibly solve that specific problem.
> We just switched the project over to calver, with the explicit purpose of preventing [Kenneth Reitz] from making more than one release a day
 http://journal.kennethreitz.org/entry/r-python (Ctrl-F 'calver')
Personally, today I'd rather use `venv + pip` instead of `pipenv`, but `pipenv` somehow was the "official" tool for package management for months, until people started discussing it. I would like to have better packaging tools in Python, but `pipenv`'s approach seemed really strange; now I know why.
If you had a manic episode and made some mistakes - go and try to reverse them. Revert/change the commits. Apologise for the comments you've made. But no, let's get into the position of a victim when someone criticises you. I believe that KR may have psychological problems and/or illnesses, but his "normal self" seems to be rather egomaniacal too, and makes bolder claims than he's comfortable handling. If it were otherwise - there would be no drama, there would be no fake "official" tools. KR could just enjoy his fame from `requests` and save his time writing responses on his blog.
I understand that they want to encourage keeping dependencies up-to-date, but I think the proper approach to this is npm's, where it informs me about outdated packages, but lets me do what I will with that information.
So, for the moment, I'm still in the dark ages of manually pinning everything into `requirements.txt`.
All I want is a simpler file to look at compared to requirements.txt
Here is my guide for newbies like me
1. Use Pipenv in development.
2. Create a requirements.txt each time you make any changes to your pipenv.
Commit all three files: Pipfile, Pipfile.lock, and requirements.txt to git.
3. Use this requirements.txt in production.
Now here is my only gripe:
I believe pipenv is meant for simple people like me. I have to say I want to use Python 3. There is no way to say I accept anything 3.5+
My understanding is that pipenv is not meant for people who actually know python inside out. My use case is that it lets me keep track of what dependencies I installed as opposed to what dependencies my dependencies installed. This should have been the ONLY problem that pipenv fixes but like the old saying goes... no project is complete until it is able to send mail (sorry I probably said it wrong).
1. Put your dependencies in requirements.in.
2. Run pip-compile to generate requirements.txt.
3. Commit both files.
4. Use requirements.txt in production.
You can also use setup.py or setup.cfg instead of requirements.in. This lets you build packages and specify dependencies like "Python 3.5+". requirements.in is simpler, though.
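In setup.cfg form that's something like this (the name and the dependency are placeholders):

    [metadata]
    name = myapp

    [options]
    python_requires = >=3.5
    install_requires =
        requests

and, as above, pip-compile can take that instead of a requirements.in.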
^ This was just such a glaringly terrible design decision, and the fact that they won't even acknowledge it as being a mistake is really frustrating.
npm already found the best solution (if ./node_modules exists, it is automatically used -- no need to run any sort of shell commands). All they had to do was just copy their behavior.
Oh well, maybe the _next_ python virtual environment tool will get it right... :(
Python is a great technology but it is really a shame that is hampered by bad decisions (e.g. the 2->3 fiasco).
Kenneth used a growth hack.
But worse, setup.py is a python script with access to the full power of python. It can crash. It can be slow. It can loop forever. It can have dependencies of its own. It can require specific versions of python. It can download stuff from the internet. It can do anything.
If you wanted to implement something like npm for python, you'd have to convince all the python package maintainers to write and test package.json files for all their packages. Even if they were willing and enthusiastic about that, you'd have a chicken and egg problem because nobody could test a package.json until all their dependencies had them.
Python has such a plethora of packaging tools because people keep trying to solve the problem by writing better code. But the real problem is the lack of metadata. Python will always have a lousy packaging ecosystem because it relies on setup.py.
This does not happen with languages where a single niche accounts for most of the use, so a broad consensus is easier to achieve. The closest to Python is Java, but that is traditionally steered top-down (but still has competing toolchains, e.g. ant/maven/gradle).
Pipenv in particular has some additional problems that were completely self-inflicted (namely, a famous developer abusing his popularity to push an incomplete implementation as blessed).
For a project to be easy for me to use, it doesn't need to do anything fancy to accommodate me: 1) 2 or 3 or both? 2) a setup.py or a requirements.txt. I basically always `pyenv virtualenv 3.7.1 $PROJECT && pyenv local $PROJECT` and then more or less never worry about this problem again.
When I need to _produce_ a pinned set of dependencies, it's all just pip + virtualenv so pip freeze works just fine.
The killer feature of pipenv is allegedly pinning, but this has never made sense to me. Maybe I just don't understand it? But pyenv makes highly segregated environments easy, so as long as each project has its own internal set of environments, everything is A-OK.
The annoying thing with pip freeze is it doesn't have a concept of a "world" file like emerge (Gentoo). With emerge when you install a package it gets entered into your world file but the dependencies do not. That way you always know what you actually want, rather than just incidental dependencies. I wish pip freeze did that.
In particular, one way this has bitten me is if the OS expects a certain Python with a certain set of dependencies to be available for systems its responsible for. Isn't portage itself written in Python for example?
Portage is written in python, yes, so you can't just change your system python at will, but that's not a problem really.
Am I weird? Am I not supposed to do that? Is it bad practice?
The root problem is that just having a system like pip to manage your Python dependencies is woefully insufficient, because it installs dependencies globally, you might need different versions in different projects, and you might be trying to link in some native code, so that behavior will change based on what you already have installed or what your OS is. Also, to make matters worse, you also have to manage your dependency on Python itself, since there are multiple mutually incompatible versions.
So you can use virtualenv to handle the problem of isolating one Python project from another, pyenv to handle the problem of different Python projects using different versions of Python. Then you need a system like pipenv to tie them all together. Except pipenv, itself, is in Python, so there's a bootstrapping issue with using a tool written in Python that has a dependency on a Python interpreter to indirectly manage your dependency on a Python interpreter. Sometimes if you do something ridiculous like "using a Mac where you haven't specifically installed the right version of Python via pyenv or Homebrew yet", things will break in confusing ways.
One way around this whole mess is to put everything in a Docker container, so you get to stipulate that everything runs on a particular Linux distro with a particular set of dependencies from the ground up and there's no possibility of anything else at all on your or any other machine ever polluting that dependency chain and even if you're on a Mac it'll just install dependencies and run code inside of a VM. The isolation you're trying to accomplish with virtualenv or even pyenv, whether by hand or using a tool like pipenv, is pretty much just a poor man's Docker container anyway. But this adds complexity of its own while still punting the real work off to a tool like Pip.
To address one of the problems, Running Scripts:
The basic complaint is that you are always starting a new shell and you have to prefix all commands.
Prefixing commands sucks, but you can get around it pretty easily with an alias.
alias pr="pipenv run"
alias dm="pipenv run python manage.py"
Now 'pipenv run python manage.py startapp foobanizer' becomes 'dm startapp foobanizer', or 'pr django-admin startapp foobanizer'.
As for the shells. I really like this feature because it's a lot cleaner. With activate and deactivate you are essentially mutating the current shell which can be dangerous. Also because it loads your ENVs on every run, it's easier to switch out ENVs.
Isn't that a circular definition, not the official Free Software Foundation line? Is it just a sneaky way of not actually saying "free as in speech"?
It's not that it demonstrates a political motivation, just an ignorance of the actual free software culture they're trying to associate themselves with. It's like saying "Make America USA Again". Tautologically circular, close, but no cigar.
Can anybody confirm which codebase is being described as convoluted here?
1. Can it support different python version and virtual environment?
2. Can it support build packages for windows,osx,ubuntu,centos...?
3. Can it support different build tools, for example: wheel, cython...?
4. Can it manage dynamic dependency versions?
5. Can it manage version, like bumpversion?
Secondly, interspersed between the hate I could count at least 6 alternatives to pipenv.
So as a python user I don't know who to trust.
I stumbled across pipenv at a time when I was managing a "global" directory of virtualenvs instead of putting them inside each project dir.
So I could do source ~/.venvs/project/bin/activate because I was tired of having different .venvs inside project dirs.
Pipenv seemed like a welcome change and I especially liked doing pipenv run commands. I still run my dev servers with pipenv run. Keeps my environment clean.
Sourcing a virtualenv before would lock that shell to only working with one project, one environment.
I have noted some issues with pipenv, the first was in ansible deployment but it wasn't a show stopper.
The second issue I can't remember so it must have been fleeting.
As someone else pointed out, maybe it's for simple users like me who don't need to understand the advanced internals of Python.
Edit: Right! The 2nd issue was actually that I work with some services that are spread across three different git repos/projects. Maintaining a central virtualenv for multiple projects needed figuring out but I think I've resolved it by having a parent dir for the service where the virtualenv is created and then pipenv commands in the sub-dirs (git repos) use the parent virtualenv.
I've also noted some confusion in pipenv on whether it's using python3 or 2. A pipenv --two project might try to install packages using pip3 for some reason.
Lastly, a lot of the hate towards pipenv seems to be directed towards how it was launched and marketed. Which in my opinion has little to do with the tool and how it might help users with Python.
Open source has always been and will always be a wild ecosystem where the best tool floats to the top by word of mouth alone. So why be mad that someone used python.org to promote a tool, when you can contribute your time and knowledge to promoting your own favorite?
I just don't like when there are too many options to choose from and I don't know which one is right for me. I guess time will tell. Also I don't really do CI/CD pipelines yet so maybe a lot of issues are unknown to me.
It's a super complex topic; it needs leadership support so that you can get a reasonable percentage of the community to drill down on one solution instead of having 4-5 competing ones, but it also needs a lot of in-depth insight, which is kind of the opposite of leadership, which requires a top-down view.