What, No Python in RHEL 8 Beta? (redhat.com)
88 points by collinmanderson 33 days ago | 60 comments

> In order to improve the experience for RHEL 8 users, we have moved the Python used by the system “off to the side”

This was a good move.

> we came to the tough conclusion, don’t provide a default, unversioned Python at all. Ideally, people will get used to explicitly typing python3 or python2.

This seems like a principled stance, one I can agree with, but it's probably one RHEL will suffer for. They have a big share of the enterprise Linux market (maybe #1 or #2?), so maybe they can afford it. RHEL customers should dust off their Python 2 source and seriously consider migrating it to straddle both 2 and 3 for now. Although, given that RHEL 8 provides ten years of Python 2 support, they're unlikely to feel that pressure any time soon.

Yes, yes, and yes.

The rule of (sore) thumb is: never use the system Python for anything. Pretend it does not exist (and RH just did). It's there for running system software that happens to be written in Python, and it should only be managed by the system package / dependency manager.

For anything you write or download as source, explicitly install the right Python version, and create a separate virtualenv / container / whatnot, without any link to system-provided Python packages. It will save you a huge amount of headache.
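That advice can be sketched with nothing but the standard library; the `env` directory name below is my own choice, not anything mandated:

```python
import os
import venv

# Create an isolated environment that deliberately does NOT see the
# system site-packages. with_pip=False keeps the sketch self-contained;
# pass with_pip=True in practice to get pip inside the environment.
venv.create('env', with_pip=False, system_site_packages=False)

# The environment records which interpreter it was built from in
# pyvenv.cfg, so the Python version is pinned explicitly rather than
# being whatever the distro happens to ship.
print(os.path.exists(os.path.join('env', 'pyvenv.cfg')))  # True
```

Activate it with `. env/bin/activate` (or just call `env/bin/python` directly) and the system Python packages stay out of the picture.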

> customers should dust off their Python2 source

Customers will do what they want to do with Python 2.7 code and libraries that work very well right now. The continuous improvement workers are clearly annoyed that Python 2.7 is not broken.

With the right amount of effort, Python 2.7 will continue to work well for the next ten years.

Perl comes to mind here..

It's going to break horribly in 2020 regardless of Red Hat's efforts.

Earlier this year, one of the projects I maintained started failing on CentOS 6 (Python 2.6). No code changes had been made to account for it. What had happened was that pip/pypi had updated some modules which had dropped Python 2.6 support, and now required Python 2.7. Unfortunately, the package management and dependency information is so rudimentary that pip will happily create an environment which is completely broken, or fails to build due to the scripts being incapable of running with an older interpreter. This completely broke our build.

The same fate lies in wait for Python 2.7. A number of common modules are planning on fully dropping Python 2.7 support come the EOL date. Once they do so, pip will break, and it will not be possible to construct a working environment except by hand. Or using manually packaged modules e.g. from the FreeBSD ports tree or the Debian archive. This will be the death knell for Python 2.7 for practical use and deployment. It will only take a tiny number of critical modules to make the switch for the whole pip/pypi ecosystem to be completely crippled for use with Python 2.7.

> Once they do so, pip will break, and it will not be possible to construct a working environment except by hand. Or using manually packaged modules e.g. from the FreeBSD ports tree or the Debian archive.

Or by explicitly pinning versions in requirements.txt (yes, it can be a pain for dependencies inherited from other packages). PyPI does keep the old versions; no need to scour the FreeBSD or Debian archives.
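For example, a fully pinned requirements file freezes the transitive dependencies alongside the direct ones, so pip never resolves to a 2.7-incompatible release (package names and version numbers here are purely illustrative):

```text
# requirements.txt -- every version pinned, including transitive deps
requests==2.20.1
urllib3==1.24.1       # pulled in by requests, pinned anyway
certifi==2018.11.29   # ditto
```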

Yes, you're absolutely right about requirements.txt. However as you say, it's a problem when you have to start considering transitive dependencies. In fact, for the case I mentioned, this became a problem because there were so many it became increasingly complex; you ended up having to start moving more and more of the transitive dependencies directly into the requirements file. And this isn't very scalable.

That on its own would be somewhat manageable. But if you start getting requirements to update "just one more module" for some specific functionality, you eventually end up at the point where it's not possible to provision a functional set of modules. That's where I ended up with 2.6, and 2.7 is likely to be just the same, in my opinion.

Transitive dependencies (or what I called dependencies inherited from other packages) can be handled with pip freeze in an existing working environment; it will list them too.

New development, on the other hand, will be difficult. As it usually is, when you develop in something that has been obsoleted years ago. If you are still developing the system, instead of just keeping it running, shouldn't you spend some resources on python3 migration, too? The modules you need are already there.

No disagreement here from me!

The times we have had to update things have been when the existing modules broke. One example which comes to mind from the recent past is when urllib TLS certificate stuff broke and it needed updating. In that case, it didn't cause collateral damage, but we were forced to do it to keep things working. Externally-enforced changes like this could potentially cause breakage in the future if they drop python 2.7 support directly or indirectly.

The whole ecosystem stops caring about Python 2. Everybody who keeps relying on Python 2 until 2030 will suffer.

Unfortunately though, the whole ecosystem hasn't stopped caring, and is unlikely to anytime soon. They should care, but they either don't care or can't migrate quickly enough. Migration can be phenomenally expensive and introduce a ton of bugs. Python 2 will be the IE6 of this era. It's too entrenched to abandon. On the other hand, it will slowly fade into obscurity as migrations occur and old projects die.

This approach is a fair one, which will keep all their customers happy. If you've got complex Python 2 applications deployed on CentOS 7, then this won't be a hindrance to migration. Likewise if you want to drop Python 2 and only deploy with Python 3, they have that covered as well.

All in all, it sounds pragmatic and reasonable, and from the business point of view, they aren't going to lose customers as a result, by having huge blockers to upgrading.

> In order to improve the experience ...

> yum install python results in a 404.

I'm not convinced the attention is focused on "the experience".

Returning 404 is hardly a user friendly experience, even if the linked blog post (and thus the solution) is trivially Googleable. Maybe an error/informational message with instructions to `yum install python3` or `yum install python2` could be displayed instead?

Why would application developers try to straddle 2 and 3?

If they are gonna do the work to support 3, just abandon 2.

Libraries are a different story.

Usually it's because big projects are a logistical nightmare to migrate. You can't do it in one go. Not only do you have to transition every library plus the applications and tools and tests, you also have to keep it functional, testable and deployable on all the existing python 2 platforms as well.

I'm not saying this is a good way of doing things, simply that it's the practical reality for many codebases. And with tools like python-modernise, it's also eminently achievable. I've managed to do this with several Python codebases, so that they work with either 2 or 3, and in the future we can drop 2 and fully migrate to 3-only features. This allows a smooth, seamless transition with no breakage.

It would definitely be easier if a clean break could be made, and Python 2 support is dropped on the floor, but for many projects this is simply unacceptable.

I recently installed openSUSE Tumbleweed. I'm not sure if I just had a weird installation, but both Python 2.7 and Python 3.6 were installed. That's great, but the "pip" command pointed at pip3.6 by default. Imagine how confused I was when I installed a bunch of packages using the "pip" command, but none of them were available from the default "python" shell.

I think the "let's be explicit about what we're installing" approach isn't bad.

python3 -m pip install foo

It also works for python2. I try to avoid Python tools that install their own scripts, and instead call the module through the interpreter. The same pattern works for virtualenvs: python3 -m venv venv

Great move. I recently experienced the whole Ubuntu 16.04 LTS mess with Python 3.5 and 3.6 on my system, and it is just plain messy. Yeah, they can co-exist, but now I have 3 versions of Python. How many people didn't know better, tried removing 3.5, and completely broke their install, possibly ruining their weekend or evening? It also sends a message: a proper / official way for distributions to deal with this situation is needed.

Any current Fedora users want to tell me what all the Red Hat tools use? E.g., is /usr/bin/banana actually Python 2 or Python 3?

On Fedora, there's a system Python interpreter installed at `/usr/libexec/system-python` that is used to run Python tools installed in the base system like `dnf`. On my F29 system, it's version 3.7. You can't install pip packages into the system Python environment, which helps you not break your system, like what can happen if you've ever run `sudo pip install --upgrade pip` on Ubuntu or Debian!

Fedora, at least as of F28, moved to having system tools like dnf use Python 3.

For RHEL 8, the tools included with the system will use a different binary, platform-python, that isn't in /usr/bin.

I can't remember if that's also the case for F28/F29.

Red Hat-specific tools or just all the general system utilities installed on a RHEL host? I've got a ThinkPad sitting here running the RHEL 8 beta.

(FWIW, I don't think either is installed by default, but maybe that was something else I installed recently?)

Neither ship by default. If you run a search for libpython you should find one hidden away in a nonstandard location. As far as I’m aware, only core tools necessary for a minimal install rely on it. Other packages in the repos can depend on an actual user space python version.

^ Needs verification

Fedora doesn't ship with any Python 2, all the utilities in the default installation are Python 3.

"Ship with" is a bit too ambiguous for my taste. F28/F29 "ship with" Python 2 in the sense that the packages are there, just not installed by default. If you `dnf install python2` (or something that depends on Python 2.7), it will get installed. They've also gone to the effort of moving system tooling like dnf to Python 3.

zenlot 33 days ago

"However, we do try to make it as easy as possible to get Python 2 or 3 (or both) on to your system." - it just requires quite a long blog post to explain how easy it is.

Install instructions are right at the top and are typical `yum` commands that RHEL users should already know.

So RHEL has been around for 18 years, and suddenly users need to be aware of the "install instructions", which "are right at the top", and read a blog post about it for RHEL 8?

Presumably with RHEL being around for 18 years users would be able to run "yum search python", see the packages are versioned, and yum install the version they want. All sans blog article.

It's far from the first time in 18 years a package name has changed with a major version upgrade.


Me? Nah, I'm just a user of RHEL in the healthcare space.

Woo hoo!

This is such a strange and annoying compromise to make. It's somewhat antithetical to the concept of RHEL versions being a standardized set of packages such that when someone says "I have RHEL 7", you can say "My software will work on that".

Now in order to support RHEL 8, you have to tell them to also have their sys admin install your preferred python.

A better and less RHEL-specific solution, IMO, would have been to make an Anaconda install or virtualenv the default Python for each user.

Wouldn't you just specify the dependencies in your installer so it just works?

Most people in my field don't use installers or rpms, we just share the git repo directly. I'm kind of pissed off just thinking about how many build systems will break when '/usr/bin/env python' doesn't return anything by default.

I’m pissed almost daily at how many build systems break because they need 3.5.3 vs 3.5.4 or whatever, and at how many only work because they create a new virtual environment for every single script (Virtual environment ~= copy of python auto-downloaded from some random places).

Also, FFS, why can’t pyc files have a bytecode format version number embedded in them, or at least be called .pyc3 by python 3?

The whole ecosystem needs to solve the packaging problem, or get replaced by some other language ASAP. Instead, it’s breaking entire distros with its packaging tire fire.

For many python projects I’ve dealt with, it is easier to reimplement that dang thing in some other language than make the python build reliable, secure and redistributable.

Doesn't Python 3 compile the .pyc files under the __pycache__ directory, and version them based on Python's version?
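It does: since PEP 3147, Python 3 writes bytecode under `__pycache__` with an interpreter tag embedded in the filename, which you can inspect directly:

```python
import importlib.util
import sys

# Where would this interpreter cache the bytecode for foo.py? The path
# embeds the implementation and version (e.g. __pycache__/foo.cpython-37.pyc),
# so different 3.x versions never clobber each other's .pyc files, and
# never collide with Python 2's plain foo.pyc either.
path = importlib.util.cache_from_source('foo.py')
print(path)
print(sys.implementation.cache_tag in path)  # True
```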

Maybe Python should stop changing so often?

With C++ for example you are usually either targeting C++11, C++14, or C++17 (or if you're really unfortunate, C++03). The set of features in the language only changes every few years.

Granted it's still possible that someone could have an old compiler that doesn't implement the advertised standard correctly, but that's a bug, not a deliberate design choice.

That's not really fair. Python 3.2 was released in 2011 and Python 3.4 in 2014; only 3.3 came out in between. I would not call that "so often" compared to C++. More generally, C++ had 3 major versions from 2011 to 2017, while Python had 5. It's more, but not significantly so (especially since very little compatibility breaks between, say, 3.4 and 3.5).

And if you are talking about which version is shipped with a distro, then RHEL 7 ships with GCC 4.8, which supports only parts of C++14, while, for example, the latest Ubuntu LTS ships GCC 7.3, meaning you get full C++17 support.

Don't get me wrong, the Python packaging situation is not very clean. But it is far from being an exception, and people complaining about it are just spreading FUD.

But the compiler is the same, so you specify the dialect you expect rather than running a different binary. Although, to be fair, at least one distro decided to start shipping GCC as gcc-X, where X is the version, so you can have multiple GCCs installed. I'm not sure that was the best idea, but it's what they did.

> Maybe Python should stop changing so often?

This argument is almost cyclical.

> With C++ for example you are usually either targeting C++11, C++14, or C++17 (or if you're really unfortunate, C++03). The set of features in the language only changes every few years.

C++ went from glacial-pace changes (85 / 98 / 03 / 11) to much faster ones (14, 17, 20). That change in standards behavior was motivated by closing the gap with feature-rich, faster-moving languages like Python (and Java, C#, etc.).

Python makes language changes somewhat rarely, but library changes more often. The majority of the time, those library changes are brand new functionality.

#!/usr/bin/env python is almost guaranteed to give you a surprise at some point. If someone has a virtualenv or conda env active in that shell it will be a different python environment (with different installed modules) than in other shells.
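A small demonstration of why: `env` just takes the first match on `PATH`, and activating a virtualenv prepends its `bin/` directory. This sketch emulates that lookup with `shutil.which`:

```python
import os
import shutil
import sys

# '#!/usr/bin/env python' resolves to the first 'python' on PATH.
# Activating a virtualenv prepends its bin/ directory to PATH, so the
# same shebang can pick a different interpreter in each shell.
exe_name = os.path.basename(sys.executable)
exe_dir = os.path.dirname(sys.executable)
os.environ['PATH'] = exe_dir + os.pathsep + os.environ.get('PATH', '')

resolved = shutil.which(exe_name)
print(resolved == os.path.join(exe_dir, exe_name))  # True: first PATH hit wins
```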

At least in the build-system glue language use case, the 'batteries included' modules like os, sh, glob, etc will all be there.

Similarly, if you require pandas like I do, there's a fairly comprehensive baseline of functionality you can expect regardless of what environment it comes from.

Note: You shouldn't use #!/usr/bin/env python. Instead you should explicitly specify python2 or python3, unless your script is compatible with both.

You say "you" as if you're talking to the person you're replying to. Most scripts in the world aren't authored by the person you're replying to. They're powerless to change the ocean of python out in the world.

For example, out of about 50 or so /usr/bin/python scripts I see in a RHEL7's /usr/bin I can't find a single one that runs on python3.

No individual user is going to be fixing this. It's a very silly and shortsighted move on RedHat's part.

Of the 50 or so Python files in /usr/bin on my Ubuntu 16.04 desktop, most are versioned in some way. None are called via "env".

     37 /usr/bin/python3
      4 /usr/bin/python3.5
      4 /usr/bin/python
      3 /usr/bin/python2.7
      2 /usr/bin/python2
The 4 that aren't versioned are

speedtest, speedtest-cli, dh_python2, apt-offline

So two from ubuntu/debian, and two from a package

Right, someone working in ubuntu distro development has done this work for those packages specifically.

Above, we're discussing some build system where someone hasn't done this work. Most collections of python code in the world (I'm going to hazard a guess significantly upward of 90%) just use /usr/bin/python or /usr/bin/env python.

That's the nature of the problem.

You're discussing RHEL7's /usr/bin.

> For example, out of about 50 or so /usr/bin/python scripts I see in a RHEL7's /usr/bin

I posted Ubuntu 16.04's in comparison.

I'm discussing the problem of collections of python scripts using /usr/bin/python and I included the state of RHEL7 as an example.

The actual subject is CoolGuySteve's build system, which is likely to look like my example of RHEL7 rather than yours of ubuntu. The vast majority of environments look like RHEL7 or CoolGuySteve's build system where #!/usr/bin/python is the norm.

First, this is about RHEL 8 (beta), not 7, 6, etc. Second, `#!/usr/bin/python` is a different animal than `#!/usr/bin/env python`. One is hard-coded; the other is `PATH`-based, so it will change depending on the current `PATH`.

On RHEL 7 (and earlier), `/usr/bin/python` must be the Python 2 the system shipped with, or you break `yum` and other admin tools. When people install their own Python by doing a `make install` as root, `yum` breaks and it's hard to recover the system.

What the article is saying is that RHEL 8 addresses this by having a platform python that the system tools use. So RHEL 8 tools will not use /usr/bin/python

CoolGuySteve complained that things will break because existing scripts assume /usr/bin/python. CoolGuySteve is correct. There is an enormous amount of work involved in changing or removing the function of /usr/bin/python.

Offering advice about how to write new software does nothing to help with the frustration he highlighted: His existing build systems will break. Build systems are especially frustrating as they tend to have hacked-together code that no one really wants to look at.

People will often work around this by installing a python2 as /usr/bin/python. At the end of the day this change results in a less predictable platform. On previous RHEL systems I know what /usr/bin/foo is -- it's predictable given the major version of the distro. But now I won't know what /usr/bin/python is on RHEL8. It will be uniquely different and that's not a good thing.

Totally agree in principle.

One interesting note: macOS doesn't have a /usr/bin/python2 (at least, not at the time of this writing). It supplies /usr/bin/python, and also /usr/bin/python2.7.

That might not matter at all, but it's worth noting.

I know. I filed a bug about this in Apple's tracker years ago and they still haven't done anything about it. This contravenes the official upstream Python recommendation, which is that "Unix-like software distributions (including systems like Mac OS X and Cygwin) should install the python2 command into the default path whenever a version of the Python 2 interpreter is installed". See PEP-394 https://www.python.org/dev/peps/pep-0394/.

Which is fine but how do you specify dependencies right now? Surely a minimal install of RHEL doesn't have everything you need.

Not meant to be snarky, just a tad confused on what exactly you’re asking. Have you ever made an RPM yourself? Dependencies are specified there. Or are you asking about dependencies that don’t exist in standard RHEL repositories?

The parent was saying that specifying dependencies with rpm wasn't possible because they deploy with git but surely they must have something to specify/install deps since a RHEL base likely doesn't have everything they need especially if they're using the system python.

No doubt they have one of those "curl http://site.com/script.sh && sudo bash script.sh" installation instructions.


I think at that point we’re kind of stuck with virtual environments for Python dependencies. I’m not too sure what the best solution is.

Can't you just symlink your preferred python?

No need to symlink manually, the `alternatives` mechanism will handle that for you.
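On RHEL/Fedora that looks roughly like the following (run as root; treat the exact invocations as a sketch and verify against the alternatives entries actually installed on your system):

```shell
# Point the unversioned 'python' command at Python 3 (or at python2)
alternatives --set python /usr/bin/python3

# Return control to the alternatives system's default
# (on RHEL 8, "no unversioned python")
alternatives --auto python
```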

The best solution I am aware of is to package your Python apps statically with the version they expect. For example, Facebook's XAR format (https://github.com/facebookincubator/xar/) does this very nicely.

This looks remarkably similar to AppImage (which I suppose can also be used to package your own Python).

My guess is a lot of sysadmins will just end up installing both 2 and 3.
