I'd much rather recommend reading through https://python-packaging.readthedocs.io/en/latest/index.html to actually understand how packaging works.
That said... packaging stuff in Python could really do with being a lot simpler. pbr just doesn't seem to make it simpler; it just moves the problem to a different file.
If you think packaging is a matter of configuration, then PBR makes a lot of sense. It gives you a config file that you simply fill out, so you don't have to worry about the code yourself.
I've used both PBR and setup.py quite a bit, and I personally prefer PBR since there are fewer things I need to debug when things go wrong.
You also fail to give any examples of why it would be an improvement, aside from a very vague "fewer things I need to debug when things go wrong", which I'd argue against, since now I also need to worry about things going wrong in PBR itself, which is one more thing to debug. This seems to be the general theme with pbr: there are no clear reasons why it's better, and when you ask about it you get these kinds of hand-wavy answers.
If you actually have some concrete examples of what it improves and why, I'm all ears.
Trying to paper over the (real) problems with the Python packaging toolchain by generating setup.py from a config file strikes me as a vanity project.
import codecs

with codecs.open('requirements.txt') as f:
    requirements = f.read().splitlines()
In case anyone sees this, at a minimum you need to use
from pip.req import parse_requirements
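For context, the usual pattern looks roughly like the sketch below. Note that pip.req is pip's internal API (it has been moved/removed in later pip releases), so treat this as an illustration rather than a stable recipe:

# Sketch only: pip.req is an internal pip API and has changed between releases.
from pip.req import parse_requirements

# Older pip versions require a session argument; any placeholder value works for local files.
install_reqs = parse_requirements('requirements.txt', session='unused')
requirements = [str(ir.req) for ir in install_reqs]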
From my understanding, when you build the package, your machine extracts the list of required packages and uploads that data to PyPI, which in turn sends that data back to users when they install.
At least that's how I think it works.
If you used any other package manager, it would need to resolve dependencies from the additional info in the package index, resulting in pip being downloaded and installed before the desired package is installed.
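In practice the dependency list ends up in the package's metadata, which installers read without running any project code. A minimal sketch of peeking at that metadata from Python 3.8+ (using requests purely as an example of an installed package):

from importlib.metadata import requires

# Prints the dependency specifiers declared by an installed distribution,
# i.e. the same information an installer uses to resolve dependencies.
print(requires('requests'))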
If you control requirements.txt, there is nothing "not robust" about parsing it in setup.py.
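A minimal sketch of that, assuming requirements.txt contains only plain requirement lines (no -r/-e directives) and a placeholder project name:

from setuptools import setup, find_packages

# Skip blank lines and comments; everything else is passed through as-is
with open('requirements.txt') as f:
    requirements = [line.strip() for line in f
                    if line.strip() and not line.startswith('#')]

setup(
    name='myproject',   # placeholder name
    version='0.1.0',
    packages=find_packages(),
    install_requires=requirements,
)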
You can also use `dependency_links` in your setup.py to specify this, which allows deps on github etc.
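Roughly like this; note that dependency_links is deprecated in recent pip/setuptools, and the URL and egg name below are placeholders:

from setuptools import setup

setup(
    name='myproject',   # placeholder
    install_requires=['somepackage'],
    dependency_links=[
        # the #egg= fragment has to match the dependency's name (and optionally its version)
        'https://github.com/example/somepackage/tarball/master#egg=somepackage-1.0',
    ],
)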
- Define my dependencies
- Run pip-tools' pip-compile to generate a locked set of dependencies
It gives me some of the benefits of Rust's Cargo.toml / Cargo.lock in the Python world (and it actually respects every package's dependency version declarations, unlike other tools such as pyup).
That, along with setuptools-scm and pip-tools, has for me solved most of the issues that pbr addresses.
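For anyone unfamiliar, the pip-compile/pip-sync workflow looks roughly like this (file names follow the usual convention; the listed packages are placeholders):

$ cat requirements.in
requests
flask>=1.0
$ pip install pip-tools
$ pip-compile requirements.in    # writes requirements.txt with every transitive dep pinned
$ pip-sync requirements.txt      # makes the current environment match the lock file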
Are you really implying that learning INI would be a burden?
First of all, requirements.txt is for development requirements. Runtime ones belong in setup.py.
Also, the "extras" feature is already in setup.py via extras_require.
I see no need to use this nonstandard tool when the standard tooling works.
I could keep a pip freeze output around if I really want a full view (for debugging purposes).
I used to think this was a good idea. Then I found a huge loophole in it: if I copy-paste code from another FLOSS project, then that code's author should still be listed, but won't have any commits.
> version management based on git tags
setuptools-scm already does this for us.
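A minimal sketch of that, in the classic setup.py form (setuptools_scm also supports configuration via pyproject.toml):

from setuptools import setup

setup(
    name='myproject',                    # placeholder
    use_scm_version=True,                # derive the version from the latest git tag
    setup_requires=['setuptools_scm'],
)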
Finally, a big upside of setup.py is that I can programmatically generate information, whereas pbr's cfg file doesn't seem to allow that.
Flavors are listed as a big deal, but as the article says, setuptools already has this; nothing new here.
>I used to think this was a good idea. Then I found a huge loophole in it: if I copy-paste code from another FLOSS project, then that code's author should still be listed, but won't have any commits.
If you think that the other person deserves all the credit for the change, set the author with:
git commit --author="Guido van Rossum <firstname.lastname@example.org>"
Or, if you both worked on the change, end the commit message with a Co-authored-by trailer (the convention GitHub recognizes):
Co-authored-by: Guido van Rossum <email@example.com>
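For example (the commit message is a placeholder; the second -m puts the trailer in its own paragraph at the end of the message, which is what the convention expects):

git commit -m "Fix the version parsing" -m "Co-authored-by: Guido van Rossum <email@example.com>"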
The problem with package manifests that are arbitrary scripts (Python, Ruby and probably others) is that you lose even basic introspectability into rudimentary parameters like name, version and dependencies, unless your threat model allows for executing random code that people put into setup.py. This can be mitigated with complex static analysis carefully crafted for the specific task, but that isn't easy to implement, and there is a not-insignificant number of cases where the data legitimately cannot be determined statically. I'm just wondering how this is conceptually different from Pipfiles:
(Previous discussion https://news.ycombinator.com/item?id=13011932)
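For comparison, declarative metadata, whether pbr's setup.cfg or setuptools' own setup.cfg support, looks roughly like this (field values are placeholders); the point is that any tool can parse it without executing anything:

# setup.cfg
[metadata]
name = myproject
version = 0.1.0
author = Jane Doe
description = A placeholder package

[options]
packages = find:
install_requires =
    requests
    flask>=1.0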
You still need to install system libraries for those C bindings. You still can't have multiple packages requiring different versions of libraries. What a mess.
I guess your second point means something like symbol versioning for Python libraries, which I'm not aware of any solution to, apart from just running things in a virtualenv.
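For completeness, a minimal sketch of the virtualenv workaround, isolating two conflicting versions in separate environments (the package name and versions are placeholders):

python3 -m venv env-a
env-a/bin/pip install 'somelib==1.0'

python3 -m venv env-b
env-b/bin/pip install 'somelib==2.0'   # a conflicting version, isolated in its own environment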