Hacker News
Some packages are no longer installable after test command is removed (github.com/pypa)
56 points by pcwelder 57 days ago | 32 comments



The pypa team are just not capable stewards of core aspects of the Python ecosystem. As a maintainer and developer of Python-based tools and libraries, I find it very frustrating when these folks push some change that they want, simply oopsie a significant chunk of the Python ecosystem, and then go dark for hours.

They've done it this time by making poor architectural decisions ("Isolated builds should install the newest setuptools") and then adding in poor library maintenance decisions ("We'll remove this feature used by thousands of packages that are still in use as active dependencies today"). Possibly each of these decisions was fine in a vacuum, but when you maintain a system that people depend upon like this, you can't simply push this stuff out without thinking about it. And if you do decide to do those things, you can't just merge the code and call it a day without keeping an eye on things and figuring out whether you need to yank the package immediately! This isn't rocket science; everyone else developing important libraries in the Python world has mostly figured this stuff out. In classic pypa form, it sounds like there was a deprecation warning, but it only showed up if you ran the deprecated command explicitly, while the mere presence of this command causes package installs to fail. You have to at least warn on the things that will trigger the error!

These days I try to rely on the absolute minimum number of packages possible, in order to minimize my pypi exposure. That's probably good dev practice anyway, but it is really disappointing that I can't rely on third-party libraries basically at all in Python, when the vast pypi package repository is supposed to be a big selling point of the language. But as a responsible developer I must minimize my pip / setuptools surface area, as it's the most dangerous and unreliable part of the language. Even wrapper tools are not safe, as you can see in the thread.


You might want to try getting them from apt-get. They're usually more stable there and get patched if they fail to install or fail to work with a newer version of something else.


Particularly sad that even projects that attempt to pin their dependencies are running into this issue: if a dependency doesn’t have a pre-built wheel, then it will be built in an “isolated” environment, which means it won’t inherit a pinned setuptools version from the root project, and will instead pull the latest (unless the dependency itself is pinning setuptools).

Reproducible builds can’t come too soon.


Having "install package" and "build package from source" be indistinguishable command lines is yet another massive design failure in the Python packaging system. It may sometimes, vaguely, work on the original programmer's machine. But the build environment isn't reproducible, and on Windows is often not present at all so installing the package simply falls over.

(yes, we're using GRPC-on-Windows-Python, which means we're tied to the release schedule of wheels for that)


Until about 7 or 8 years ago, install pretty much always meant build from source; wheels only came in fairly recently in the history of Python and are still not universal, as you've found out. You'll typically have problems on:

* ARM Linux

* POWER9/10

* Alpine distros which use MUSL for the C library (https://rpep.dev/posts/alpine-python-antipattern/)

* Some packages which depend on C libraries that are difficult to build on Windows (fault of the C libraries rather than Python really).

Conda (and previous tools before it, like Enthought Canopy) were designed to try to fix this problem, since the core Python packaging tooling just wasn't good enough. Wheels were proposed as a PEP extension and adopted, but it took years for common packages to be built as wheels, and even now there's nowhere near universal coverage for them, even on common platforms.


I only have source releases for macOS as I can't figure out how to include OpenMP for the binary wheels of my proprietary package.


You can't really do it; the best workaround is to build it with Clang installed via Homebrew, and then ask users in the installation instructions to install OpenMP with `brew install libomp`.

It really frustrates me that Apple strips it out of their build of Clang in favour of pushing their own Grand Central Dispatch.


> This functionality has been deprecated for 5 years

I used to think that 5 years is a long time when I was younger... that was before I had to maintain multiple legacy codebases.

The biggest problem here is that it's yet another pinning mechanism (PIP_CONSTRAINT) which most people didn't need[0] to know about until today.

[0] need is defined here as 'production works without having to know this'.
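For reference, a minimal sketch of the PIP_CONSTRAINT workaround (the package name at the end is hypothetical): pip honours a constraints file pointed to by this environment variable even inside the isolated build environments it creates for source builds, which is exactly where an ordinary pinned requirement doesn't reach.

```python
import os

# Write a constraints file pinning setuptools below the release that
# removed the test command.
with open("constraints.txt", "w") as f:
    f.write("setuptools<72\n")

# pip reads PIP_CONSTRAINT even inside isolated build environments,
# so exporting it pins setuptools for source builds of dependencies too.
os.environ["PIP_CONSTRAINT"] = os.path.abspath("constraints.txt")

# then run, e.g.:  pip install some-sdist-package
```

The downside, as noted above, is that almost nobody knew this knob existed until their builds started failing.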


It was printing a warning about the deprecation for close to 5 years. I don't think a few more years of support would make much of a difference.

> WARNING: Testing via this command is deprecated and will be removed in a future version. Users looking for a generic test entry point independent of test runner are encouraged to use tox.

https://github.com/pypa/setuptools/commit/cd84510713ada48bf3...


The warning only appears if you run `python setup.py test` (or equivalent), but the breakage occurs regardless of whether you were actually running this; if you depend on a package that doesn’t have a pre-built wheel, and the package attempts to import the removed module in its `setup.py`, then your build will break.


Reason number 37 to hate "code as config"


Code as config is amazing. Well, it can be, easily: it needs to be descriptive and composable, meaning it needs to be able to return data according to some schema.

The point is that the problem, again, is the lack of a clear boundary between gathering, assembling, ordering, calculating, and validating the steps required to prepare/build/install/load/use/and-so-on, and the actual execution of those steps.

Terraform, for example, has separate plan and apply phases.

And in general the fear of too many layers (and in-situ DSLs) leads to very brittle and extremely inconvenient "interfaces" (e.g. GitHub Actions and Ansible programming in YAML come to mind).

...

Ideally, build/packaging/setup steps would themselves be written in a way that makes their reinterpretation[0] easily possible. (So we don't need to litter the implementation with explicit hook points, etc.)

[0] basically using interfaces and the visitor pattern, or in an FP way via the https://blog.rockthejvm.com/free-monad/


I think config _is_ the clear boundary between those gathering, ordering etc steps.

The less expressive config is, the more regular and structured it is, and it becomes very easy to be descriptive and composable.


From time to time, config is just not willing to sit there in a list. It needs to be computed.

Just an example from a few days ago: someone needs to pull data from some remote service (maybe Vault), or needs generation of keys/entropy/etc. [0][1]

Of course this just means folks should wrap the whole thing in a program, but that gets inefficient fast if every layer only accepts env vars or a plain JSON file.

[0] https://github.com/nextauthjs/next-auth/pull/9638#issuecomme...

[1] https://github.com/nextauthjs/next-auth/pull/9638#issuecomme...


Nobody reads these. It’s alert noise.

If it broke on 10% of runs, folks would notice. Brutal, but there's zero cost to ignoring the warning beforehand.


Often nobody even sees them. The software gets built by a CI system, there's no way for the CI system to alert on new warnings appearing, and nobody checks the logs.

Maybe a staged process.

Start with a warning as usual, to CYA. Nobody will care, but you can feel smug and say you told them.

Then turn it into an error and add a flag that a user can set to turn that deprecation back into a warning. Setting the flag should be a trivial change, but since builds will fail it'll be noticeable. The build failure message should include the date the deprecated feature will be removed, a link to migration steps, and instructions on how to set the flag to allow builds to succeed.

Then finally remove the feature.

That would allow users a chance to schedule work to remove use of the deprecated feature.
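The staged process described above might look something like this sketch (the flag name and removal date are hypothetical, not anything setuptools actually shipped):

```python
import os
import warnings
from datetime import date

REMOVAL_DATE = date(2025, 10, 1)        # hypothetical removal date
OPT_OUT_FLAG = "ALLOW_DEPRECATED_TEST"  # hypothetical escape-hatch flag

def run_deprecated_feature():
    """Stage two: fail hard by default, but let users opt back in briefly."""
    if os.environ.get(OPT_OUT_FLAG) == "1":
        warnings.warn(
            f"This feature will be removed on {REMOVAL_DATE}; "
            "see the migration guide.",
            DeprecationWarning,
        )
        return "ok"
    raise RuntimeError(
        f"This feature is deprecated and will be removed on {REMOVAL_DATE}. "
        f"Set {OPT_OUT_FLAG}=1 to keep builds working while you migrate."
    )
```

The hard failure is loud enough to get migration work scheduled, while setting the flag keeps a broken pipeline limping along in the meantime.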


I wouldn't say it's zero cost. It's a risk with the potential to cost you. So there's some kind of expected value to that cost, and the risk grows the longer you ignore the warning.


I kind of agree, though I make a distinction between risk and cost in the same way you say 'potential cost', i.e. cost is realized risk. Until the risk materializes, you don't pay. The base scenario in this case is the classic Thanksgiving turkey, though: everything is fine until it isn't.


It says a lot about the way this tooling is being used that five years of deprecation warnings still isn't enough to get people to port over their software.

Perhaps this should've been implemented as a gradual failure with a temporary workaround ("as of X, this feature will break unless you specify the MY_CODE_WILL_BREAK_IN_OCTOBER=1 environment variable" rather than just breaking after an update), though I doubt that would've changed much.


The warnings were only generated when doing "python setup.py test".

No warnings were emitted with "from setuptools.command import test", which is what people did to modify how the "test" command works.

If someone used to use "setup.py test" with a modified command, then switched to another way to run the tests, but forgot to remove the old code, then they would never get the warnings (because it required running "test"), yet the code broke (because the import failed).
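A quick probe of my own (not something setuptools provides) shows which side of the removal an environment is on; it's the same lookup that a stale `setup.py` performs implicitly, except done safely instead of crashing at import time:

```python
import importlib.util

# setup.py files that unconditionally do
#     from setuptools.command.test import test
# break at import time on setuptools >= 72.0, before setup() even runs.
# find_spec lets us check for the module without raising ImportError.
spec = importlib.util.find_spec("setuptools.command.test")
if spec is None:
    print("setuptools >= 72.0: the 'test' command module is gone")
else:
    print("older setuptools: module still present")
```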


Or maybe, you know, just don't knowingly break other people's working code.


And so never improve the language and ecosystem.


Linux and Windows keep supporting even blatant bugs for decades, in order not to break old working code that relies on these bugs. The ecosystem keeps evolving healthily under that constraint.

Breaking working code for an essentially stylistic change of API is not even malice. It's plain stupidity.


An OS isn't touched anywhere near as much by a diverse user base as a language and ecosystem like Python is. It can be more or less guaranteed that everyone building things directly on top of an OS already has a very good idea of what they're doing, and are likely aware of any strange quirks or workarounds they may need to consider. And if they don't, they know how and where to seek help. This is definitely not the case with most Python users, and as such, when bugs and bad practices can be remedied or mitigated they should be, so the many who just want to get their pipeline or automation scripts going can do so without polluting 101 channels with low quality questions and time-wasting problems.


Ugh. I've been trying to get my recently rebuilt desktop compiling a Python program (Redash) today, and it's been refusing to build due to this exact problem.

Have spent hours on this, trying to get it working first on my Proxmox desktop, then in VMs, and was now testing in a stock standard Debian VM. It turns out it's an upstream bug.

---

The steps here (for poetry users) worked for me:

https://github.com/pypa/setuptools/issues/4519#issuecomment-...


Can we somehow vote to just stop having setuptools be maintained? When it breaks it wastes years of collective developer time, and it really doesn't need to change at this point...


For those caught out by this, I put together a Github repo that shows how you can work around it in your Github Actions. Hope it helps! https://github.com/simonwhitaker/setuptools-demo


Looks like this was fixed a few hours ago, by the release of a new (patched) version 72.1.0 of setuptools: https://github.com/pypa/setuptools/releases/tag/v72.1.0


Looks like Python 2 -> 3 all over again. Something is deprecated for a very long time, people ignore the warning, and then cry foul when the thing is removed. Is there to be no cleanup and advancement in the language/ecosystem?


As I understand it, the warning was only generated on "python setup.py test", and not on "from setuptools.command import test", which was the deleted module.

If you never used 'setup.py test' then the warning was never generated.

Pulling up the first repo example I saw in the issue, https://github.com/IBM/python-sdk-core/commits/main/setup.py back in 2019 was created using

   from setuptools.command.test import test as TestCommand
   ...
   class PyTest(TestCommand):
     ... code to forward to py.test ...
   setup(
          ...
          cmdclass={'test': PyTest},
          ...)
This was a clear adapter to use setup.py as a test runner, rather than using "pytest".

Most likely they all started using "pytest" and forgot that the adapter was there. It wasn't used and never generated warnings.

This is confirmed in the commit message from two hours ago at https://github.com/IBM/python-sdk-core/commit/bd44dd1152e01b... :

> That means there will be no more `python setup.py test`, but it wasn't be used for a long time.


That makes sense. At least they aren't making a fuss about it, and just did the update to resolve it. I see a setuptools maintainer also did a partial rollback to provide a minimal fix. Hopefully everyone will have updated their stuff by the time the minimal fix is also removed.

Also, I feel like this says something important about testing practices. Code should be exercised with sufficient coverage, and anything not covered should be reviewed periodically to see if it can be updated or removed.


[flagged]


(affectionate)



