Python 2 will be replaced with Python 3 in the next RHEL major release (redhat.com)
684 points by alianinfo 10 months ago | 334 comments

This is good:

- Python 2 support officially ends on January 1, 2020

- most popular packages are now compatible with Python 3

- Python 3.7 performs about as well as 2.7, with future releases expected to be better

Although it still took way too long, considering Python 3.0 was released about 10 years ago.

> most popular packages are now compatible with Python 3

I often see this but I think it's a perception from the Internet/web world.

I work in CGI, and all (I'm not kidding) of our software packages (we have many) are on 2.7. You will never see them mentioned in the "web/Internet/forum/network" space, but the day-to-day job of millions of people in the industry runs on 2.7.

And we are a tiny, focused industry. So I'm sure there are many other industries like us which are on 2.7 that you will never hear of.

That's why "most popular" means nothing once you take in how Python is used as a whole. We don't use any of these web/Internet/network "popular" packages.

I'm not saying Python shouldn't move on. I'm just trying to argue against this "most popular packages" claim while millions of us, even if you don't know it, use none of those.

Well, the CGI industry had money, competent people, and 10 years to upgrade. An entire decade. And a LOT of tooling and documentation to help.

I've done a lot of code conversions from 2 to 3. Most of them took me a couple of hours to a few days.
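For flavor, most of the work in such a conversion is mechanical. A small sketch, runnable on Python 3, with the old Python 2 spellings shown in the comments:

```python
# Typical mechanical rewrites in a 2-to-3 port:

x = 10 / 4                  # py2: floor division by default (== 2); py3: true division
assert x == 2.5

d = {"a": 1}
items = list(d.items())     # py2 had d.iteritems() for the lazy variant
assert items == [("a", 1)]

try:
    int("nope")
except ValueError as e:     # py2 also accepted: except ValueError, e:
    msg = str(e)
assert "nope" in msg
```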

I'm currently working on a 2.7 project that will never migrate because they literally patched the CPython runtime, but you can't freeze a whole community because some will make bad technical decisions or accumulate tons of technical debt.

Now, Python 2.7 will not stop working after 2020. It's just that we, as a community, will stop paying the price for the ones that didn't move. If you want to stay there, you'll pay a commercial actor for it.

In 2020, CentOS 6 will stop receiving updates. Will you go complain that the new RPMs are not compatible? No: either you update, or you pay for commercial support.

Ruby and node did it in a few months and told the community to move or die. Nobody complained.

Perl took so long the language has been forgotten.

PHP literally canceled version 6 and made it taboo.

We gave years, and means, then extended the deadline, and heard nothing but complaining since.

I won't pretend the migration was masterfully executed. The whole 2/3 debacle was painful.

But, it's been since 2008, come on.

>> It's just that we, as a community, will stop to pay the price for the ones that didn't move. If you want to stay there, you'll pay a commercial actor for it.

That's very well put. But I guess this can happen only with big, important projects where you can afford to lose/upset some users...

No that's just how IT works.

Your games don't work on Windows XP anymore. Actually, your USB3 mouse doesn't work on Windows 7 out of the box.

CentOS has LTS, but still EOLs.

Ubuntu's init system was changed to Upstart, then to systemd. Also GNOME, then Unity, then new menu/notification/systray semantics, then back to GNOME (but Shell), and soon Wayland. It breaks a lot of things.

Firefox's new add-on system doesn't work with some add-ons from last year.

NodeJS had 3 incompatible forks in its short life.

Twitter and Facebook APIs break every Sunday.

JS frameworks are just madness.

Python broke compat, ONCE.

Once since 1990.

And it gave 10 years to migrate.

In our industry, that's not bad at all.

And the community held. We worked. We wrote tools, docs, blog posts. We were there all the way. We have incredible libs like python-future to help.

And if any of that is not enough, well, Anaconda Continuum will be happy to do business with you.

It's how it is.

> Your games don't work in windows XP anymore.

In fact I can play EarthSiege 2, a game from the 3.11/Win95 era, just fine on Win7 x64. The only things not working are joystick input (I guess it does some shenanigans with the MIDI/joystick port in addition to using the Windows joystick API) and the pause screen, where the view of your vehicle spinning around spins too fast (probably because its speed is tied to CPU speed).

Microsoft puts a lot of effort, with the exception of drivers, into keeping backwards compatibility. And that is why people like it, in contrast to Linux, where "things break" is the norm, and even on OS X it's not unheard of. Oh, and it's also why big enterprises stay as far away as humanly possible from anything NodeJS or more modern than PHP and Java.

Enterprises want and need stability first.

Whenever this comes up I feel it is worth pointing out that Linux's "Things break" approach is a problem exclusive to the userland tooling built up around the kernel. Linus takes a very hardline stance against breaking kernel ABI compatibility (except for drivers), but pretty much every piece of software outside of that, including GLIBC, thinks it's totally ok to break things all the time.

It's really very sad that the Linux community never adopted Linus's view.

Because they are entirely different things. Breaking the kernel ABI doesn't just break a few packages for a popular language, it breaks the entire ecosystem. More importantly there's literally no reason to break compatibility in the kernel. There are perfectly valid reasons to break compatibility in programming languages.

Apples and oranges comparison.

Even the kernel occasionally breaks interfaces. Case in point, FUTEX_FD was introduced in 2.6.0 and retired in 2.6.26 [1].

I don't think anyone working on glibc is intent on breaking user space. Sometimes it's just the most pragmatic solution.

[1] http://man7.org/linux/man-pages/man2/futex.2.html

Yes and your Python 2.x code will still run after 2020. But going the other way - new programs might not run on older distro releases, which is what GP said.

We can trade anecdotes on which games age well on which OSes all day.


Released in 2004, you can't play it on anything but Win XP because of StarForce DRM that the publisher didn't bother to patch for future compatibility.

I'm sure there is a community-created solution for this. (A nice description of a "crack".)

Meanwhile I routinely use Lisp written before I was born (in 1986) that works out of the box in 2018. I’ve used Java packages (recently) from 1999. My company still uses python/pypy 2.7 and we see no compelling reason to upgrade. If there is any sort of upgrade, it will be off of python.

When, oh when will we ever finally unlearn "worse is better?" It's hard enough writing good code without having to fight your tooling (non-orthogonal, non-homoiconic, non-malleable, non-backwards-compatible languages, some with header files propagating changes upwards and breaking user code, and some with fascist type systems causing combinatorial explosions of containers and factories, ugh)

Python broke compatibility a lot more than once. Python 3.5 and Python 3.6 are not compatible, for example. There are even cases where code written for an older version of Python 3 will not work on a newer version of Python 3. For example, see PEP 479.
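A minimal sketch of that PEP 479 change: a StopIteration leaking out of a generator silently ended iteration before Python 3.7, but surfaces as a RuntimeError from 3.7 on.

```python
def take_while_positive(numbers):
    it = iter(numbers)
    while True:
        n = next(it)     # raises StopIteration when 'it' is exhausted
        if n <= 0:
            return
        yield n

# Pre-3.7 this printed [1, 2, 3]; on 3.7+ the leaked StopIteration
# is converted into a RuntimeError instead.
try:
    print(list(take_while_positive([1, 2, 3])))
except RuntimeError as e:
    print("RuntimeError:", e)
```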

2.X would still be included as an optional package, correct? I couldn’t determine that from the link, but it seems like RedHat’s practice. Or, they could install it from source.

If so, 2.X users still have two years or so to migrate. That’s plenty of time in my opinion, even when all of your code is 2.X. There’s even a library for it.[1]

[1] https://docs.python.org/2/library/2to3.html

2to3 is a bad and outdated tool, please don't recommend it. You want http://python-future.org/ .

Thanks for posting this, I didn't know about python-future.

> 2.X would still be included as an optional package, correct?

I wouldn't bet on a Red Hat-provided 2.7 package. If they provide it, they will be stuck maintaining it, and their long-term support contracts are very long term.

On the Python side: from 2020-01-01, no commits will land on the 2.7 branch. RH is still supporting RHEL 4, which was released in 2005, and supported RHEL 3 from 2003 to 2014, which (if they keep a similar timescale for RHEL 8) could leave them trying to maintain a working build/test/development system for py27 for an extra 10 years.

"And if any of that is not enough, well, Anaconda Continuum will be happy to do business with you."

Yes, this is a key point that seems to be forgotten. The Python development team will end their support on 1st January 2020, but there are other providers with Python expertise who will almost certainly pick up the commercial opportunity, e.g. ActiveState, and possibly even Red Hat with a separate product.

This is not the end of Python 2, it's going to be a transition to a different support model. Commercial users either pay to move up, pay to rewrite the Python code with something else, or pay for a super long-life version of Python 2.

The awkward part may be academic research, where there's probably a lot of Python 2 code that has no maintenance budget. I would not be surprised if a project appears to build no-support versions of Python 2 with essential fixes after 2020.

> Ruby and node did it in a few months and told the community to move or die.

Uh, Ruby’s big compatibility breaking upgrade (1.8 -> 1.9) took more than a few months, it took years for the community to fully move. (Because of Ruby’s pre-2.0 versioning practice, 1.8.x -> 1.9.x was a major version upgrade, and it was actually a more significant one than the 1.9 -> 2.0 update. But lots of people stayed on 1.8 throughout most of the 1.9 period and the transition didn't really complete until late in that period or early in the 2.x period, and 1.8.7 was getting maintenance releases for four years after 1.9.1 stable release.)

2.7 is a different language than 3.x.

You could argue that 2.y is also a different language than 2.x, but this is beside the point: as long as there is a compiler for that language and it works (for your definition of works), you don't need to translate your programs into another language just because you have money.

You don't need to upgrade your servers to the latest Linux kernel either. But in 10 years, you won't get any security updates for it, unless you pay a lot of money.

It will be the same for Python.

You want the excellent, money-making free work of volunteers? Do your part.

You don't want to? That's OK. In 2020, plenty of companies will sell you services at the real market price of your technical debt.

Is 'ignoring security' now the free pass that gets thrown into every argument against opposing positions?

I'm being facetious there; but you know that security risks can be reasoned about and mitigated by means (sometimes less costly) other than simply upgrading software.

Companies who cannot upgrade their codebases are companies who cannot maintain their codebases.

Eventually, the technical debt hurts your ability to deliver with both speed and safety. When your org can't do that, your org stops being competitive in the market, and if the market doesn't kill your company then the brain drain will.

(small and non-notable exceptions for the literal handful of types of orgs where this is not the case)

True, but is there a situation where upgrading will hurt instead of benefit?

Not every solution is equally good for everyone. Think of the delayed-upgrade game that some people play: they wait for others to upgrade first, to get rid of new bugs at their expense. Now take this game to a 10-year extreme.

For them, the new branch (e.g. the new language, etc.) is simply too risky. It's not that they are not smart enough to upgrade; they simply have different opinions than you do on what is valuable.


Yes, upgrading is an engineering expenditure like any other. If you devote manpower to maintaining your codebase you're not devoting it to user-facing features. As such, upgrading your codebase can hurt your ability to deliver some priority feature on-time.

But that kind of zero-sum thinking is myopic. My whole point is that if you only ever prioritize feature work, eventually you lose the ability to deliver features. Nobody in their right mind thinks you can have a company with zero technical debt, there's always a balancing act in play, but if your managers are just playing the risk card without, you know, doing an actual risk analysis which includes the risk of not being able to deliver future features, then your company isn't making smart decisions.

Now, there do exist codebases where you can make the logical conclusion that there really will not be any future feature work, but it's still running in production and so it will need support, and it's a fairly large codebase, and so there would be an enormous cost to re-certifying for the upgrade with very little apparent benefit. That happens, but it only (really, truly, only) happens in large enterprises with many, large codebases and only so many engineers. Those enterprises do have the resources to hire new employees and/or outside contractors to pay down the technical debt on these half-alive projects - they just aren't prioritizing those resources on the technical debt, even long after it became clear that the clock couldn't be stretched out any longer. Through painful personal experiences, I have zero sympathy for companies in that position who think that they can solve their problems with more firewalls.

That's right, but...

It seems to me that all counter-arguments I get ignore the context that I set which is to not do things "simply because you have the money".

What I get is "hey, you have the money and it's obviously the right thing to do so do it".

"Having the money" is orthogonal to "it's the right thing to do" but this thread is too old to recover.

There is cost the individual user of an open source project has to consider, and there is cost the community of that project as a whole has to consider.

So from your perspective, just keeping the old version might be "the right thing to do", and at the same time the decision of the community to not support it anymore is also "the right thing to do" from their perspective.

As the community does their work unpaid, it seems you have no right to impose your perspective on them, except if you pay money for the necessary work. Which you are free to do.

I guess what the parent posters point out is that this will usually shift your own cost/benefit ratio in a way that upgrading becomes "the right thing to do" for you, too.


I don't think that CentOS 6 is bug-free, yet it EOLs in 2020 too.

Actually, I'm using it right now and I'm positive it has at least one bug when running in VirtualBox.

My client won't upgrade. They pay support, expensive support, to keep their old version.

In 2020 I'll open a shop to convert old Python code bases or fix bugs in them. I'll charge 4 times market rate. For me it's a net win that people don't migrate.

Sometimes I wonder if the opensource movement, with its perennial “update anxiety”, is actually busy generating an industry of legacy maintenance. In a way, it’s a natural extension of the original “development is free but you pay for support” idea, but I don’t think anyone openly elaborated it into a long-term revenue-generating strategy. It looks like a slam-dunk, to be honest, with the only caveat being that work is extremely unfashionable.

It's not the opensource movement; it's IT in general. A lot of software stopped working with the Windows 10 update that was forced on users.

Android breaks its API regularly by changing the permission game.

The PSF has limited resources (a budget of about 3 million dollars) to exist on (this includes running PyPI and organising PyCon), but I think its track record is quite good, even compared to commercial products.

Actually, Android doesn't break the API, even when changing the definition of permissions. If you see a breakage, investigate your app manifest and which API version you are targeting. The Android framework shims the old behaviors for apps that declare old target API versions.

E.g.: since API level 19, the alarm system has been completely remade, and one must check the API level to act one way or another. The same goes for permissions after the advent of Android 6.0+. But we still need the support library to provide Fragments and the ActionBar on API levels lower than 11, and VectorDrawables on API levels lower than 23. And many more objects.

It's not that simple.

The shims are managed by targetSdkVersion: if you declare targetSdkVersion >= 23, you get the new permission system (it was introduced in 23) when running on a device with API level in [23, maxSdkVersion]; if running on a device with API level in [minSdkVersion, 23), you get the old one. If you declare targetSdkVersion < 23, you always get the old one, even on newer devices.

The targetSdkVersion says what you designed against. If you claim supporting the new API, you should handle the new API (and that includes detecting what the underlying device supports). That does not preclude you from including the support library needed for lower API levels still supported.
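The dispatch rule described above can be sketched roughly like this (an assumed model for illustration, not actual framework code):

```python
# Which permission model an app sees, as a function of its declared
# targetSdkVersion and the device's API level. Runtime permissions
# arrived in API 23 (Android 6.0).
def permission_model(target_sdk, device_api):
    if target_sdk >= 23 and device_api >= 23:
        return "runtime"       # new model: permissions asked at time of use
    return "install-time"      # old model: permissions granted at install

assert permission_model(26, 26) == "runtime"
assert permission_model(22, 26) == "install-time"  # old target, new device
assert permission_model(26, 21) == "install-time"  # new target, old device
```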

Except if you release a new/updated app, it MUST target a recent (less than a year) targetSdkVersion.

"2019 onwards: Each year the targetSdkVersion requirement will advance. Within one year following each Android dessert release, new apps and app updates will need to target the corresponding API level or higher."


That's Play Store policy for new apps and app updates, not a technical limitation of the OS. Old apps will keep working just like they did before, the new APIs will not break them.

The old trick is to get your changes into the main branch so you don't need to maintain them yourself. That is also why anyone bothers to upstream in the first place.

More power to you, but my initial argument was contingent on "for your definition of works".

Eventually, your client will see that the money spent on maintaining old (more appropriately, rusty) packages would be better spent on upgrading.

> you don't need to translate your programs in another language just because you have money.

True, but if you know the support behind your current language is going to go away, it might be smart to migrate.

Yes but this is a different argument. He could have said "if you are smart" instead of "if you have the money". See?

> Well, the CGI industry had money, competent people, and 10 years to upgrade. An entire decade. And a LOT of tooling and documentation to help.

I don’t think that’s a money issue. The Python 3 upgrade is not really compelling. You got slower speed (at least until recently); tests might pass on 3.x, but documentation and edge cases are still better on 2.x, etc. Meanwhile, nothing is really exciting in the 3.x branch.

>While nothing is really exciting in the 3.x branch.

Sorry, have you looked at Python 3 lately? I don't think I can sum up all of the amazing work that's been done in one post (async? Cleaned up stdlib? Better errors? Not having an aneurysm from text encoding issues unraveling your whole project? New splat syntax?).

I would really encourage you to check out what's happened in the last ten years. I think you'll find many more exciting developments than you think.
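For a taste, two of those additions fit in a few lines (a minimal sketch; asyncio.run needs Python 3.7+):

```python
# Native coroutines plus keyword-only arguments, both 3.x-only features.
import asyncio

async def fetch(delay, *, label="job"):   # the bare '*' makes 'label' keyword-only
    await asyncio.sleep(delay)
    return label + " done"

print(asyncio.run(fetch(0, label="demo")))   # demo done
```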

He's likely referring to the extended unpacking changes, stuff like

    merged = {**base, **changes}

Which is the equivalent of what previously was

    merged = base.copy()
    merged.update(changes)

I think that he's kind of right about the problem.

In my opinion, Python's primary business value has been that sweet spot it occupies. It's not the most fun, but it's pleasant. It's not the fastest, but it's not too slow. It's not great on resources, but it's not too bad either.

Good people will work with it, and they tend to be the "let's not get creative, let's just get it done" folks that businesses love. Mediocre people do great with it, because it feels much nicer than many of the other things they've worked with and it channels them towards producing better code and being more productive.

Python makes it a huge pain in the ass to get weird/creative, and generally frowns upon it, so people tend not to as much. Junior and low-skill contributors can't really get in too much trouble as long as they stick to the program. It's an awesome "just sit the fuck down and do your work" language.

As a language, it's carved out a great space being pleasant, consistent, well-rounded, predictable, etc.. I know there are people on here using it to build their rocket ships. And it can go there too... maybe not as flexible or fun as more extreme alternatives, but it's also less likely to blow up in your face. For those software rocketeers (probably most any SV start-up), updating makes undeniable sense. Python 3 is hands-down better, and they can handle the transition no problem.

But a lot of people don't use Python to build rocket ships. They don't build rocket ships at all. They build delivery trucks and conveyor belts and coffee dispensers. And, for them, the very things that make Python a great choice - a very clean and conservative focus on simplicity and stability - are the very reasons not to switch. There's just not really anything that's been introduced in the last 10 years that is going to make any real difference for their use cases. Whatever trouble they've had with unicode and the other bs has long been lived with. Whatever's missing they've long lived without. And things have been just fine.

Python's killer feature - being a really nice and well-rounded option that works pretty damn well for a large swath of people - is sorta its undoing here. It's hard to become much more of exactly that.

I mean I switched to 3. It's clearly better. But if I was running a Python team that had been effectively just trying to get the job done in 2.7 for over a decade I would have to admit that I wanted to switch for my own sake - I think it would be unlikely to make them that much more happy or productive.

Hell, many people can't even set their same environment back up in a week after losing their laptop.

> But if I was running a Python team that had been effectively just trying to get the job done in 2.7 for over a decade I would have to admit that I wanted to switch for my own sake - I think it would be unlikely to make them that much more happy or productive.

Part of being a software engineer is keeping your environment up to date. Sure, I could keep using Node 0.10. But I'll get no updates for security or fixes for bugs.

If you're on a team that hasn't updated to a version that's supported, then you're neglecting one of the fundamental ongoing maintenance tasks involved in software engineering. If that's "too hard" to do over the course of a decade, then perhaps there's something wrong with your engineering culture.

Sure, you don't have to do it. But don't expect the rest of the world to continue to support your old setup. If you have to compile Py2.7 from source because RHEL doesn't come with it, that's fair penance for not keeping up with the community. It's just about the most entitled thing in the world to say "I didn't spend the two weeks in ten years time to upgrade to a newer version [using the numerous automated tools] and I'm mad that the world at large isn't making it easy for me to continue to not do anything."

I'll just say this... I still have 2.7 as my base install... for Ansible. Because they haven't switched yet.

Ansible is owned by... Red Hat. They acquired it in late 2015.

Seems like Red Hat - the people in the post that we're talking about that are shoveling folks off 2.7 - has been neglecting one of the fundamental ongoing maintenance tasks involved in software engineering, and perhaps has something wrong with their engineering culture.

Sorry, this shit is just too funny.

And it seems that Red Hat is booting Ansible from the core repos as well (looks like it's in that deprecation notice!), presumably instead of spending the "two weeks" to update it (you might want to go ask the Ansible team why they're too damn lazy to spend that "two weeks", see what they say).

However, unlike Red Hat and Ansible, some organizations depend on the software in question, and can't just sideline it 'cause the shit they've successfully run on for a decade-plus has lost its blessing.

The 3.x branch has developers committed to improving it and the 2.x branch basically doesn't. You shouldn't bet on stagnation in a technology industry.

Most of CGI’s customers (if it’s the CGI I’m thinking of) are used to supporting software ecosystems like Fortran, Java, and COBOL for 20+ years on major releases. 10 years is “mid-cycle” comparably. When the IRS pays Microsoft millions and millions to keep supporting Windows XP, these “niche” organizations may be small in actual engineering persons involved but from a capital perspective extremely overweight.

Once again, I'm not saying Python should not move on; actually, as a CGI "dev" (Technical Director, we say) myself, I would LOVE to trash 2.7 and move on.

I was just rambling about "most popular packages", which are only web/network/academic oriented, while a huge part of the Python user base doesn't use them at all. It's just something the "Python community" can't see, because most of them are web/network/internet/academic devs (and maybe that explains why they thought breaking compat could go well; internet bubble?).

I really want Python to move. I had the opportunity to discuss this with some vendors involved in the industry; they told me the Python 3 switch was a running gag... I hope 2.7 will be so full of security holes that my industry will finally do this once and for all and stop joking (more likely, they will never move on), but please stop saying "most popular packages" represents how Python is used, as if that were all that should be seen; it's definitely not.

TBH, I think the Python community gives great tools for the transition, and writing closely compatible Python 2 and 3 code is not that hard.
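A minimal sketch of what such straddling code looks like, using only the stdlib's __future__ machinery:

```python
# Runs unchanged on 2.7 and 3.x: the __future__ imports opt 2.7 into
# the 3.x semantics for division, print, and string literals.
from __future__ import absolute_import, division, print_function, unicode_literals

def average(values):
    return sum(values) / len(values)   # true division on both interpreters

print(average([1, 2]))   # 1.5 under 2.7 and 3.x alike
```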

Once my industry is on 3+, as a dev, I will happily follow the deprecation period.

EDIT: I love your blog SamAndMax. :D

> Most of them took me a couple hours to a few days.

Since 2008, the automated tooling has vastly improved. I'd bet most of those "few days" issues would either not exist or take hours given today's advances in automated code translation and the new knowledge on Stack Exchange.

OK, but then can the Python community stop saying "pretty much all of the packages are converted anyway!" if the answer is really "we don't care anymore if yours aren't"?

Hello from the embedded world!

The web world indeed can rapidly change their entire stack from LAMP to MEAN to Dockerized Q-Basic CGI scripts and load balance their Twitter clone. There's challenges for sure, but pushing updates is simple and quick, and you can always throw more cloud if you run into walls. If your complete platform rewrite fails, just roll the load balancer back over and try again tomorrow.

It's a lot more difficult when your Python application is communicating with a temperamental motor controller over an ancient RS-232 link with AT commands.

There are only ten of these devices in the world, and three in deployment. They perform critical functions, and they need to run 24/7 without failure or human interaction. (Did I mention these devices are remote, and it takes an afternoon to go out and investigate a failure?)

Every single failure results in a few lines of change in the code base. Those lines cost quite a bit -- in time, money, and losses. It's quite stable now, but that stability came at a price.

But haha, what am I talking about? Geez, I've had ten years to move off this "dead" platform. Let's run it through 2to3, or perhaps try a port to nodejs and deploy it tomorrow.

I’m not sure what you’re complaining about here. Those applications will still work.

The attitude of most python3 devs, which can be summarized as: "I moved my Twitter clone from python2 to python3, and it was easy. What's the big problem? You've had ten years to get this done! Hurry up, you're hurting my productivity."

Just curious, what parts of your code are not portable to Python 3? The biggest barrier usually seems to involve strings/Unicode. Or are you relying on packages that don’t have Python 3 support?

In any case, it sounds like you have a stable Python 2.7 system running, so what’s the issue? Nobody’s going to come and take your code away from you.

Well if you’re not happy about the EoL part then that’s a fair response.

If you don’t care (because your apps will still work and they don’t need or can’t be updated) then it shouldn’t really come up...?

So keep using Python 2.7 on some previous version of RHEL? There's still banks using COBOL, man. We don't insist that RHEL ships IBM COBOL with it.

But... would you not use tools that have a guarantee of long term support for such a use case?

Automated tests are extremely beneficial for such cases. You should know that the code works before you deploy.

Easier said than done. This isn't the web world, things aren't so simple.

How do I mock a temperamental motor controller responding to AT commands? I mean, I'm sure it's hard enough to replicate the oddities of their serial port parser, but the temperamental part is gonna be tricky. This is an embedded system I'm talking to, which is parsing bytes coming over a serial port in an interrupt service routine at 16 MHz with 4 kB of RAM. Not an async callback with JSON in a nodejs app with GHz of processing power and 8 GB of memory. Emphasis on: temperamental.

Oh, and don't forget. This is a mechanical system, not a website. We'll have to put torque and strain gauges on the system to make sure it's doing what we want.

The skinny is, the system my Python scripts talk to is far too complicated to just "stub out". You may be able to test 90% of the cases, but what you're really worried about is the 10% of freak accidents that break things. Look up the Therac-25 and how that failed.

I work primarily in embedded, so I feel you :)

One can separate the I/O code from the application logic, and do data-driven testing of the application logic. Use recorded data from real-life systems to build a (partial) model of the system, at least enough to test individual interactions or short sequences of interactions. Capture this data continuously in production systems, so that when an incident occurs it can be cut and pasted to form a new test case. The latter is critical to building up a library of regression tests for those nasty things one had to find the hard way - so that at least it only happens once.
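A minimal sketch of that separation, with invented AT-style replies standing in for the real protocol:

```python
# Keep the protocol logic pure (bytes in, decision out) so it can run
# against recorded traffic with no hardware attached.
def parse_reply(raw):
    """Parse one controller reply line into (ok, payload)."""
    line = raw.strip()
    if line.startswith(b"OK"):
        return True, line[2:].strip()
    if line.startswith(b"ERR"):
        return False, line[3:].strip()
    raise ValueError("unrecognised reply: %r" % line)

# Regression cases cut-and-pasted from a captured incident log:
RECORDED = [
    (b"OK 1450rpm\r\n", (True, b"1450rpm")),
    (b"ERR 07\r\n", (False, b"07")),
]
for raw, expected in RECORDED:
    assert parse_reply(raw) == expected
```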

With this basis you can optionally go to hardware-in-the-loop testing, where some of the same test cases run with real hardware instead of fake responses. This should then be instrumented with the necessary sensors/code to be able to observe its behavior. This ensures that your mock and system stay in sync, and increases coverage for non-deterministic cases. At least one such system should be hooked up to your CI server to test every revision of the software, to catch issues early.

When building the next product, consider shipping the instrumentation necessary for QA in all products. That way the testing system is closer to the real system, and one can integrate self-checking, failsafes and error reporting in production systems. This can be used by final QA (before shipping), by engineers debugging on-site, and as part of periodic (or continuous) health checks on the equipment.

I've done such things for electromechanical systems, though nothing with a potentially-dangerous motor yet.

Fortunately, on the web, things never go wrong. Hardware never fails. Traffic never spikes. You never face DDOS attacks or people actively trying to break your security. The network is never unpredictable. It isn’t a complex distributed system consisting of hundreds of interconnected parts. Every potential failure can be anticipated and easily tested against. And every component scales effortlessly as the system scales by orders of magnitude. A lowly web developers could never understand the Herculean feat of writing a Python script to communicate with a motor controller over a serial port.

Are you afraid of network security attacks to your embedded hardware? If not, then you probably don't care about missing security updates. What are you concerned about with python2.7 no longer being supported? Like, are you worried about getting python2.7 installed on a new piece of hardware five years down the road? I understand you dislike the tone from the python3 devs, but I don't know what you would like instead.

If you aren't using the packages that most people use, and decided to roll your own, then it is on you to upgrade. I'm not seeing the issue here: of course your custom niche libraries haven't been upgraded if you haven't been diligent.

The py2 end of life is supposed to give your manager hard reasons to switch. But if you never use PyPI packages and don't interact with the web, you don't need to upgrade: you don't need to install packages that might be 3-only, and you don't have much potential for security vulnerabilities.

The CG industry (at least for VFX) is moving to Python 3 starting next year. Python 3 is part of the CY2019 reference platform.

Projects that need to continue using 2.7 can use a fork that promises to preserve backwards compatibility like Tauthon:


I don't understand your argument. If you don't use any of those packages, then indeed it does not matter. As with all decisions, whether this argument applies to you depends on your situation, but in my experience incompatible packages were often the most important reason not to choose 3.x.

I know some "tiny focused industry" that's still using DOS.

I see a DOS application running on XP all the time I go and visit the local doctor (social healthcare, regional center).

Given the use case, I bet it was most likely written in Clipper.

Probably. A friend of mine saw a dBase one recently...

If you're not doing much (or any) text encoding, and I'm guessing graphics/VFX isn't, then upgrading is pretty painless these days.

Well, I also imagine that the native code is mostly C89 or C++98.

it's really not a big deal. If you absolutely need legacy python support, you will very likely be able to solve it using a virtualized or containerized environment.

by "the CGI industry" you most likely mean Disney. Disney can afford the investment to make the upgrade.

> Although it still took way too long, if you consider Python 3.0 was released about 10 years ago

Somehow, it was the plan[0]. I feel like I wrote about that extensively already way too many times[1].

[0]: (2016) https://news.ycombinator.com/item?id=11502844

[1]: (2012) https://news.ycombinator.com/item?id=4569440

I think one of the biggest mistakes is that 3.0 didn't have "PREVIEW RELEASE" right in the name. People started counting the day it was released, meanwhile the developers didn't have any expectation that (many) people would use it in production.

Right, the way I look at it:

2008-2012 3.x itself being usable.

2012-2016 Libraries getting 3.x support.

2016-2020 Applications geting 3.x support (trac, samba, ansible, etc)

> "- Python 3.7 performs about as well as 2.7 with future release expected to be better"

Are you sure about that? My laptop (2.7 GHz, 4 cores, 16 GB RAM) would fall behind my desktop (3.8 GHz, 6 cores, 64 GB RAM) because it would be paging things in and out of RAM, but once the models started running (like XGBoost), py2.x was about twice as fast.

I am not stating this definitively, because I was not trying to benchmark the two. I just casually noted that my laptop would always finish running the same code in 1 hour versus over 2 hours on my desktop.

Got a source for the performance regressions? Some things are faster in Python 3 than 2

"Most popular" packages, which ones? Most moved a long time ago to Python 3

Here's some light benchmarking (this was posted a few weeks ago I think...) https://hackernoon.com/which-is-the-fastest-version-of-pytho...

Basically once 3.7 is out (which is soon), CPython3 will finally be broadly faster than CPython2.7.

Interpreter startup time is still annoying though. I really wonder what the right strategy for addressing that is. Like honestly, 2.7 starts faster than 3.x, and 3.x CLI tools still feel sluggish to use.

If you are doing raw number crunching you shouldn't be using Python in the first place. The whole idea of Python is that it makes it easier to write better algorithms, lowering your big-O, or to use Python for lightweight orchestration and call into native processing libraries or external processes, such as numpy, for the heavy lifting. The same goes for distributed computing, which is becoming much more popular: asyncio, await, and other improvements from py3 make it easier to write fast and robust distributed code, improving your overall application performance.
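As a toy illustration of that orchestration idea (kept dependency-free, so C-implemented builtins stand in for numpy here):

```python
# Toy illustration: the same dot product as a pure-Python loop and as a
# call into C-implemented builtins (map/sum); numpy would widen the gap
# further but is left out to keep the sketch dependency-free.
import operator

def dot_pure(a, b):
    # Every iteration runs through the interpreter loop.
    total = 0
    for x, y in zip(a, b):
        total += x * y
    return total

def dot_native(a, b):
    # The inner loop runs inside C code.
    return sum(map(operator.mul, a, b))

assert dot_pure([1, 2, 3], [4, 5, 6]) == dot_native([1, 2, 3], [4, 5, 6]) == 32
```

Python stays the thin orchestration layer in both versions; only where the loop executes changes.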

So it took only a decade to bring Python 3 to levels of Python 2? Victory!

Perfect example of how not to use plots. Y-axis simply says mean, mean of what? I can't tell if lower is better or worse.

> light benchmarking

Thanks for this!

Do you know of any more longer term, less light Py2/3 benchmarking?

> "Most popular" packages, which ones? Most moved a long time ago to Python 3


Thanks for proving my point

There's Fabric3, there are MySQL libs that work with Python 3 (and Django), futures is a backport of a Python 3 feature, and uWSGI and BeautifulSoup might be worrying, but the site shows mostly green.

At this point if the library hasn't been ported to Py3, someone else would have done it for you or abandoned it.

Edit: Thanks for the updates!

BeautifulSoup4 is Python3 compatible. It was released under a new name.

uWSGI has Python3 support as a module.

I have been using both on Python3 for many years. I don't think there is any python module left which does not have Python3 support or a more modern, better alternative.

And BeautifulSoup is the not-supported older version, not BeautifulSoup4 which runs on Python3 just fine.

I think ansible is a big one for RHEL though.

Ansible works just fine on python3 for both local and remote actions. You need to explicitly tell it to use python3 at the moment, but this will go away eventually.

There are a couple modules with options that break in a Python 3 environment, but that number is I think in the single digits now. I’m very close to switching all my Ansible masters to 3 finally.

> "Most popular" packages, which ones? Most moved a long time ago to Python 3

The person you replied to said

> most popular packages are now compatible with Python 3

so it looks like you two are in agreement there.

Why would previous versions of Python3 (e.g., 3.5, 3.6) not perform as well as 2.7? What about Python3 makes it slower? Is it a language design issue?

We’re in the process of migrating several keras/TensorFlow-based projects to 3.6, and I’m struggling to get my emacs environment setup to handle Python 2 and 3 code simultaneously.

> - most popular packages are now compatible with Python 3

And if one isn't, then it's another incentive to update it or abandon it.

You forgot one:

- 90% of business code is written in Python 2.x

We've switched from 2.7 to 3.4 (and upwards to 3.5, 3.6) in about a month or so. It delayed some deployments but nothing serious.

If you do it step-by-step it's not that hard. The worst part was str/binary conversions but afterwards everything was a lot easier and better to maintain.
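For anyone who hasn't hit it, the str/binary issue looks roughly like this: Python 2 silently coerced between byte strings and text, while Python 3 forces the conversions to be explicit:

```python
# Python 3 makes the text/bytes boundary explicit; code that relied on
# Python 2's implicit coercion has to add encode()/decode() calls.

payload = "héllo"                # text (str)
wire = payload.encode("utf-8")   # bytes, e.g. for a socket or binary file
assert isinstance(wire, bytes)
assert wire.decode("utf-8") == payload

# Mixing the two now fails loudly instead of silently corrupting data:
try:
    b"prefix-" + payload
except TypeError:
    pass  # Python 2 would have attempted an implicit ASCII conversion here
```

Tedious to retrofit, but once every boundary has an explicit encode/decode, the code is much easier to reason about.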

If your company is big enough, you should maintain your code and keep it up to date, not go "it just works"(TM) and hope for the best after 10 years. The death of 2.7 has been in the air for years.

It's not that hard unless you had some very advanced code utilizing Python 2 internals that were significantly changed in Python 3. However, conversion takes a lot of time and if something works, why should you be forced to change it?

Not referring to parent specifically. By "you", I'm referring to all python 2 hardliners out there.

Nobody's telling you to change your codebase. You're free to carry on using python 2.x. If it already works, it will continue to work in its current environment.

So why are you trying to tell others in the Python ecosystem, be they core maintainers, package authors or OS vendors, to support your continued use of Python 2 for free?

You want support beyond 2020, pay for it.

Nobody is forcing you to do anything. The developers of Python aren't volunteering their labor to continue to support older versions of Python. RHEL is deciding not to continue to support older versions of Python past their EOL per their developers. Seriously, Python is free software, libre and gratis. You are completely free to do whatever you want to with Python 2.7, except expect other people to do the work for you.

Various intranet pages at my company "work" in IE6/7 or whatever and nobody seems to be rushing to fix them. That kind of sucks, though.

IE6/7 objectively sucks. One could argue Python 3 sucked more than Python 2 based on performance benchmarks even a year ago. IMO both Python 2 and Python 3 are pretty messed-up languages with similarly messed-up libraries, and "pythonic" often means "idiomatic" in the ugly sense of the word. But I do deep learning, data science and stuff with Boto/MWS on Amazon, so I have to stick with it. However, seeing all the warts Python 3 throws at me, and a crowd of people nagging me to switch all the time, I frankly don't see much value in switching from one set of warts to another, wasting the most precious thing I have - time - just to duplicate what I already had (and I disliked developing in Python anyway, but it was the fastest way).

Because non-IT users (your customers) are sick and tired of apps from 2000 that run clunky and ugly, because someone "forcing you to change it" is really just asking you to do your job, and you don't feel like it.

I know some extremely well-written apps from 2000 without trendy obfuscating flat interface, and many trendy albeit unusable monstrosities from the past 3 years. What kind of argument did you make?

Embracing pure technical churn is no one's job. That's why Python 3 is viewed as a problem by so many. People embrace true technical innovation, which this is not.

Even if it's "easy", that does not mean it's "simple" or "quick". Ever maintained a Plone/Zope instance? Life is too short to convert that to Python 3 :}

yeah and 90% of business code is also written in COBOL and another 90% in MUMPS, but somehow the world continues to work despite the fact that they've been pining for the fjords for decades.

There is obviously a massive jump from COBOL to Python, and a small but annoyingly incompatible step from Python 2 to Python 3. I hope you understand the difference. It's not as if Python 3 were the best language ever; it's just a variation on the same theme (and IMO not a great enough one to justify breaking backwards compatibility). Ask the Perl guys how they liked a similar situation.

If I didn't need to use Python, I wouldn't. Now why do I have to rewrite my old code for v3 if it works? Why do I have to waste time on such a stupid thing? All my new code is v3, but why is somebody nagging me about v3 all the time when v3 is full of warts and mind-blowing conceptual holes as well, and doesn't really address multi-threading (GIL love forever), etc.? Coming from C++/Java, the Python world was like throwing away a lot of powerful stuff in exchange for faster time to write. I didn't expect that it would be dragged down by additional time to rewrite because of some half-baked API-breaking changes.

A situation in Perl similar to Python 2->3 is going from Perl 5.6 to Perl 5.26, with the difference that Perl remained mostly backwards compatible.

If you are talking about Perl 5->6 that is more like going from C++ to D. (With a touch of Haskell+Go mixed in)

For several years we have been considering the two current Perl's as sister languages. Both are being actively developed with a yearly stable release for Perl 5 and a quarterly stable release for Perl 6. (Rakudo Perl 6 is mostly written in Perl 6 or subset language, and is also a newer codebase; so it is easier to change without breaking things.)

You don't have to rewrite it, just like you don't have to rewrite your COBOL or MUMPS or RPG or any of the other all-caps languages.

In another 20-30 years Python may very well be added to that list (though less annoyingly capitalized). No one will have to rewrite their Python into whatever the new thing is then either.

But not re-writing ProgramX into LanguageX+1 is not LanguageX's authors' problem or CompanyX-no-longer-using-LanguageX's problem.

If they had called Python 3 e.g. Cobra or Boa Constrictor, or even Turbo/Rapid/Mega Python or similar nonsense, nobody would likely have complained, as it would clearly distinguish itself from the existing ecosystem, and all the enthusiasts could jump on the wave, boasting how much better it is and that it comes from the original authors of Python, and then go through the usual Darwinian selection to see if it prevailed. But breaking compatibility in a major way doesn't make a lot of people happy when the gains are small, as with v3.

Yes they would. The apocalyptic doomsaying that would have happened if Guido had said "oh hey guys we're going to stop developing python here in a bit" would have been off the fucking charts.

At least in our shop it's the other way around.

On the one hand... This is 3 years after the first time I saw a Gentoo python3 only environment.

On the other Python2 support will go away when the community is done with Python2. And that process goes a lot slower than I think most people appreciate. Still, a big vendor changing their defaults helps a good deal with the push.

The double standard when it comes to Python is mindblowing. No other language could have gotten away with such ridiculous fragmentation and perf regression for such a long period of time.

What do you mean by double standard? Everyone in the community, including the core developers, acknowledge that the 2 to 3 story was full of mistakes. They have promised not to make it again. And they took some actions to remedy the problems, and that's why Python 3 is finally succeeding.

I recommend this presentation by Victor Stinner at FOSDEM 2018 which talks about it: https://fosdem.org/2018/schedule/event/python3/

Could someone provide a summary of the kinds of mistakes/remedies?

Did not watch the presentation but IMO the primary mistake was not declaring EOL for 2.x sooner. Other languages had similarly breaking changes and they simply gave short time before legacy version was declared EOL and developers ported their applications to the new version.

In Python it took 10 years, and even now you can't get companies to migrate, because why make developers do the work when application still can be used. Many places will probably start the migration in 2020, which shows that giving 10 years head start was a mistake and all it did was create FUD and division in the community.

Python 3 immediately picked up for new projects after the 2015 announcement that 2.7 would get no new features, only bugfixes. The industry will wait until the last moment, when all support for 2.7 ceases in 2020.

For the record, Python 2.7 EOL was originally 2015. In 2014, it was moved later, likely because RedHat would continue to maintain it for free anyway.


One mistake was providing 2to3 as the migration path. It required projects to move completely from 2 to 3; for libraries it meant they had to either stop supporting 2 (which was the majority of their users) or fork and maintain both versions. The remedy was to make it possible for libraries to be both 2.7 and 3.x compatible, using tools such as six and making some changes to 3.x to make it more compatible with 2 (e.g. re-adding u'' strings).

The problem with six is that once you decide to use it your entire code base must be written with the lowest common denominator in mind, in this case py2.7. You can't use six.py at the interface boundaries and nice python3 features inside your implementation. Six is only useful for old 2.7 libraries that want to quickly be 3.x compatible and still maintain their 2.7 compatibility, not if you have an old 2.7 application and want to incrementally add 3.x features.

The proper way to make a major breaking change like Python did is to provide a bridge between the two versions, so you could import 2.7 modules from 3.x applications, and even the other way around. As of today there is still no such thing.
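For context, straddling code (whether via six or hand-rolled) looks roughly like this; `ensure_text` mirrors the helper six provides under the same name:

```python
# Hand-rolled straddling sketch: detect the version once, then use the
# aliases everywhere. ensure_text mirrors six's helper of the same name.
import sys

PY2 = sys.version_info[0] == 2

if PY2:
    text_type = unicode              # noqa: F821 (exists only on Python 2)
    from StringIO import StringIO
else:
    text_type = str
    from io import StringIO

def ensure_text(value, encoding="utf-8"):
    """Return text on both Python 2 and 3, decoding bytes if needed."""
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value

buf = StringIO()
buf.write(ensure_text(b"works on 2 and 3"))
assert buf.getvalue() == "works on 2 and 3"
assert isinstance(buf.getvalue(), text_type)
```

This is exactly the "lowest common denominator" style the comment above describes: it works on both versions, but nothing in it can use a 3.x-only feature.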

The double standard is that people still stick to Python despite those problems. If we're being honest and all conspiracy aside, imagine the same story for Ruby (a very similar language in terms of capability, features and performance), everyone would be laughing at them and the language would be pulling a Perl right now.

Those three scripting languages actually make an interesting comparison.

Perl 5 -> 6 broke all your code in exchange for a whole new language. Some people moved to Perl 6, some to other languages, and a lot stayed with Perl (5), which still runs programs written in Perl 1.

Ruby 1.8 -> 1.9 broke most of your code in exchange for minor cleanups. But Ruby was Rails, so the culture accepted constant rewriting and breakage, and people went along.

Python 2 -> 3 also broke most of your code in exchange for minor cleanups. Users expected some stability, but the language's culture demanded conformity. The result has been a decade-long war in which Python fans try to shame Python users into moving to version 3.

> But Ruby was Rails, so the culture accepted constant rewriting and breakage, and people went along.

The main reason people switched probably wasn't rails making people expect breakage, the main reason was that (a) it was relatively easy to rewrite your code to work on both 1.8 and 1.9 (as long as your dependencies had already done it), and (b) 1.9 was much faster.

> The result has been a decade-long war in which Python fans try to shame Python users into moving to version 3.

I think that's the most succinct way to put this whole debacle I've ever seen. I'm totally stealing this.

I do not think double standard means what you think it means.

Ruby had 1.9. It was like the Python 3 situation in miniature, and Matz has said he’s keen to avoid such a mistake again.

It was also a minor version change, 1.8 -> 1.9, but people moved on because 1.8 was declared EOL; this is what Python was missing. Python 3 is 10 years old and 2.7 is still supported (bugfixes, but still).

If you look at timing, Python 3 picked up for new projects in 2015, when PSF declared that no new features will be backported to Python 2 from 3.

I believe the industry will wait for 2020 to start migrating existing code after Python 2.7 will be declared EOL.

That massive chip on your shoulder is weighing you down.

If you really think something like that is happening (big if - I’ve gone through so many py3 flames...), the only answer is: because people love the language SO DAMN MUCH that they will overlook its warts. And the next logical step would be to find out how an independent research project, staffed mostly by volunteers and without a marketing budget of any sort (and not running in every browser nor shipping preinstalled on the most popular operating system), ended up with this sort of mindshare.

There might be some good lessons there.

Ruby took off because of Rails. Rails' convention over configuration defies Ruby's flexible nature. It feels more like Python.

People prefer dumb unambiguous code over clever code. That's why Python and Rails succeeded

“Dumb, unambiguous code” does not rely on magic conventions that change with the whim of the framework developer.

I think it's the opposite. When other languages make incompatible changes, they have serious problems (e.g perl6).

Previous versions of python were back-compatible. Java versions stress back-compatibility. I don't know if HN can find any counter-examples, of a language that thrived through an incompatible, code-breaking upgrade.

Python has done everything possible to avoid this fate, by maintaining 2.7. And I'm not even convinced python has gotten away with yet (e.g. kalite, offline khan academy, needs 2.7). RHEL may end up reversing this. Give it another ten years!

And the enterprise is the most backwards-compatible thing you've ever seen. This change is going to break lots of code, and they will either reverse it or lose customers.

> I don't know if HN can find any counter-examples, of a language that thrived through an incompatible, code-breaking upgrade.

Of course. Swift 3 is one example. Ruby 1.9, as others have pointed out, is another.

PHP is yet another, and perhaps more relevant because it is close to contemporaneous. It had its "Python 3 moment" with the migration to PHP 5. Lots of BC breakages.[0] That transition took about 3-4 years for most.[1] Most users are now on the current major version, 7, and the remainder on 5 are mostly using its latest release (EOL is EOY). No one uses 4.

[0] https://secure.php.net/manual/en/migration5.php

[1] https://web.archive.org/web/20110720002753/http://www.gophp5...

Swift 3 is compiled. So you can provide the binaries and be ok.

Ruby 1.9 is a great example of why you should not be too nice: they told everybody "you have a few months, deal with it". The community moved. Python said "poor things, we understand, take these tools and years to do the thing", and the community cried, and did nothing.

PHP literally failed. They canceled V6 and jumped to V7.

The funniest part ?

None of those languages are even close to Python popularity.

Swift is not even used for most of Apple's codebase. Python supported even the Atari.

Ruby and PHP have almost no use cases outside of the web. Python is used by OSes, on the web, in GIS, by data analysts, for CGI, in AI, for sysadmin, to make GUIs, video games...

And Python is much, MUCH older: 1990, 4-5 years older than Ruby/PHP. It has way more technical debt to pay. Swift? 2014.

Yeah the migration was badly handled from some aspects. But honestly, given the challenge and track record of the competition, it's not too shabby.

Comparing Python and Ruby upgrades is not fair.

Ruby 1.9 worked mostly the same as 1.8 while providing a 2-3x speed improvement and exciting new features.

Python 3 broke even a basic Python 2 hello world, everything was notably slower, and there was nothing new and exciting feature-wise apart from unified string support.

Yes, Python really botched it.

PHP 4 to 5, which was what my comment was about, took a middle path and had success much faster than Python did.

Given how much you like Python, I'm surprised at the gall to argue that Python had more technical debt to clean up than PHP did.

The remark about PHP was the move from version four to five. It was a big move.

Swift is Apple’s baby and has a captive audience. Since Apple cannot even update their example code, I don’t think it proves any points about keeping up with a moving target.

Maybe I'm wrong, but I have the impression that Ruby is now an expert's language, and much less accessible to beginners than it used to be. And, while it remains useful and used, it's not growing...

When trying to learn it, I encountered incompatible examples and tutorials, complex and changing ways to update it... and then my toy code stopped working. I eventually found the cause of the code-breaking change, fixed it... thought a bit, and decided against ruby. That's just me though.

> I don't know if HN can find any counter-examples, of a language that thrived through an incompatible, code-breaking upgrade.

Ruby qualifies, I'd say. From 1.8 -> 1.9 there were syntax changes, changes to how strings worked and so on. Code written for 1.8 would not in general work on 1.9 and the other way around. While it was possible (sometimes with some extra if statements) to write code that worked on both versions I would definitely call it an incompatible, code-breaking upgrade.

It's notable that 1.9 came with a significant performance boost, so there was a good practical reason to want to upgrade. People were motivated by the prospect of their code running twice as fast - sure, the language was improved a bit too, but it's Ruby, it's already pretty nice, so that was hardly going to light a fire under anyone.

When Python 3 turned up and it was on average slower, my first thought was "well, good luck getting people to adopt that".

The fact that it was possible to write code that worked with both wasn't just so critical, though. It allowed you to upgrade your code incrementally to support 1.9 while doing regression testing on 1.8, and then come up with a deprecation schedule for 1.8.

Python could be written that way (2/3-compatible) as well and plenty of programs and modules were.

Uhhhh... sort of? I mean, I could go through all of the needless "why the hell did they make this part incompatible also" stuff, like removing u'' support (that was added back later), or all of the ludicrous breakage in the underlying APIs (like filesystem calls that made assumptions about character encoding that make no sense for the underlying Unix model and literally caused files to just disappear from iteration)... A lot of this stuff was fixed in later versions of Python 3, but that was years later, making it nearly impossible to deal with the existing land mines of people who had installed Python 3.0-3.3 (or even 3.4).

But what shocks me about you thinking that's the case is really basic stuff... I mean, as one obvious example, they changed the exception syntax. So, "uhhhh... sure?", it was possible to make a program that ran on both 2 and 3, but it required using "try" with no variable assignment and then digging into Python's exception reflection support to recover the variable. And you had to do this every single time. (And of course a lot of the time the underlying core library would now throw different exceptions, making the situation all the more ludicrous.)
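The dance being described looks roughly like this: catch without binding a name, then recover the exception object from sys.exc_info(), which behaves the same on 2 and 3. (As noted elsewhere in the thread, `except E as e` was backported to Python 2.6, so this was only strictly needed for 2.5 and earlier.)

```python
# Sketch of the pre-"as" compatible idiom: no "except E as e" (3.x/2.6+)
# and no "except E, e" (2.x only); sys.exc_info() works on both.
import sys

def parse_int(text):
    try:
        return int(text)
    except ValueError:
        exc = sys.exc_info()[1]  # the live exception instance
        return "could not parse: %s" % exc

assert parse_int("42") == 42
assert parse_int("nope").startswith("could not parse")
```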

I had code that straddled the 1.8/1.9 boundary of Ruby, and even though I wasn't an expert at Ruby it was trivial to have code that worked for both (and I mean 100% correctly: Unicode and everything; remember that neither Ruby 1.9 nor Python 3 added Unicode support, they only made Unicode support easier to obtain. For Python 2 there was already comprehensive, if non-default, support, and for Ruby 1.8 you just needed to be careful about where you did conversions).

Half of the problem you mention are either:

- if you tried to support Python < 2.7.

- if you tried to support Python < 3.3

But nobody used 3.0, 3.1 or 3.2. And supporting 2.6 is like supporting Ruby 1.7, 1.8 and 1.9 at once. Which nobody did.

And it's easy to rewrite libs when the language only really started to be used in 2004.

Or maybe not, given that "gem install" still regularly crashes on anything other than the top 20 packages, so it's still not a solved issue.

Python 3.0 came out in 2008 and 3.3 came out in 2012. Are you really saying that nobody used Python3 for the first 4 years of its existence? I was certainly using 3.1 and 3.2 back then.

Pretty much, yes. Some Linux distros were still using 2.6, and not a lot of them even had 3.2 in their packages.

Only a few like you and me tried it.

I can't recall any big project or lib ever supporting < 3.3.

Blender was using py3k for quite a while before 3.3. I was doing Blender dev work back then and can recall the almost immediate upgrades as soon as the newest Python release came out -- kind of painful really, since I preferred to use the Fedora-installed version instead of building from scratch, so my dev work would often stall until Fedora caught up to Blender.

They jumped on so early the python devs were saying "WTF are you people thinking?!?"

Given that Blender is mostly C bindings and a huge code base, it's kind of impressive, and a good example that migrating even a very hard project is not an impossible task, let alone the most common ones.

It wasn't a migration as much as a bottom-up rewrite of the entire bpy and big chunks of blender itself. Most of the bindings are generated by makesrna at compile (compile compile?) time which is fairly simple once you go deep diving into the sources.

It actually takes a fair bit of shenanigans to get hand-coded bindings into blender, my last(?) patch was bindings for the kdtree and I had to do some convincing to get that accepted. I'd guess that 99% of bpy is generated bindings.

If they stuck with python 2 when they started the initial overhaul they'd probably still be using it since artists are very vocal when their scripts break.

Django never had support for Python < 3.3, and numpy and scipy only supported Python 3.2 and up. The story is similar for most other packages. So if you were using Python with any of its major libraries, then Python 3 was pretty much useless to you for the first 3-4 years.

Django had support for 3.2 and 3.3, but not 3.0 and 3.1. Django didn't add support for 3.x until it was able to drop 2.5 (Feb 2013).


> it was possible to make a program that ran on both 2 and 3, but it required using "try" with no variable assignment and then digging into Python's exception reflection support to recover the variable.

What? The PEP that added the new syntax also added it to Python 2.6: https://www.python.org/dev/peps/pep-3110/#compatibility

> When other languages make incompatible changes, they have serious problems (e.g perl6).

Counter-example: The ruby 1.8 -> 1.9 -> 2.0 upgrade path. 1.9 introduced the whole encoding system, which was a major update at the time. It did break a lot of libraries that handled strings. Still, the change went over fairly quickly, partly because major players in the ruby world (rails) adopted the change. The update also brought clear benefits and it was clearly marked that 1.8 was going to go out of support. There are still some stragglers (centos 6, I'm looking in your direction), but it's clearly accepted that ruby 2.x is the standard that you should be writing code for.

> I don't know if HN can find any counter-examples, of a language that thrived through an incompatible, code-breaking upgrade.

Swift breaks things with every major version, though no longer as often or on as large a scale with point releases as in the 1.x and 2.x days. It has been like that from the beginning, and everybody who writes it has accepted it.

> Java versions stress back-compatibility.

You reminded me of the java version incompatibility nightmares that once littered the enterprise landscape. If they do that now, it's because their history includes perhaps the worst set of incompatibility jumps of any popular language.

Quite the opposite.

Python had deep problems that could simply not be fixed without BC breaks. The decision was made, and in hindsight it was the correct decision by Guido and other core devs.

Look at PHP, and look what a total cluster of madness the language has become.

I think Python 3 was not brave enough, to be honest, to future proof Python:

* an optional type system without hacks was not introduced (types in comments, really?)

* the GIL was not removed or at least a solid parallelism story was not included, one that would allow Python to use all cores on a system while sharing, if needed, memory

* no true performance-improving changes were made; by this I mean stuff that improves performance by an order of magnitude. Python is still way too dynamic, and this basically sank Google's Unladen Swallow and Dropbox's Pyston, at least in terms of worldwide adoption

I say this because I'm a bit worried that long term Python has locked itself into a corner where it will keep getting pushed into as the other languages develop a better developer UX. Python was a bit lucky to catch the ML train but who knows how long this will last. And Go, Kotlin, Typescript, Swift, Julia, Rust even are improving their ergonomics and edging towards Python's core competency.

> a bit lucky to catch the ML train

I'd argue that the fact that it was explicitly a design goal to be a teaching language meant that it saw wide-scale adoption by universities, which made it more likely that someone writing an adaptor library to old linear algebra packages would write it in python.

Perl6 was brave enough. It became a totally different language and lost much of its user base who moved on to Python, Ruby and to a lesser extent Node.

From what I saw, Python3 doesn't have type hints but it at least warns when you try to do stuff which involves magic, such as trying to increment a float.

Perl6 doesn't warn in those cases. If you use type inference it just happily goes ahead and uses the equivalent of perl5 BigNum, which makes the code run 50x slower, instead of upgrading to BigNum when int overflows like Ruby does. If you use ugly C-style loops instead of nicer native for loops, your loop will run 2x faster. Too much magic, or the wrong sort of magic, can be really annoying sometimes.

> If you use ugly C-style loops instead of nicer native for loops, your loop will run 2x faster.

This is what I currently consider a workaround. At the moment if you have a constant endpoint and it fits in a native int, such as `for ^10`, it is already highly optimized and actually runs faster than the equivalent in Perl 5. That is not done yet where the endpoint is a native int variable. :-(

Since Python 3.5 (3 years ago) type hints are part of the core syntax. Don't need to put them in comments. https://www.python.org/dev/peps/pep-0484/#type-definition-sy...
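For reference, the PEP 484 syntax annotates parameters and return types directly in the signature, no comments needed. A minimal sketch (the function name is just an example):

```python
from typing import List

def scale(vec: List[float], factor: float) -> List[float]:
    """Multiply every element of vec by factor."""
    return [x * factor for x in vec]

print(scale([1.0, 2.0], 2.0))  # [2.0, 4.0]
```

The annotations are not enforced at runtime; checkers like mypy verify them statically.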

They badly need to update the docs, then. Almost everything I see on that page does this:

vec = [] # type: Vector[float]

vec = [] # type: Iterable[Tuple[float, float]]

a = MyClass() # type: MyClass[int]

x = Foo() # type: Foo[int]

Ugh that's because PEP 484 (which introduced 'proper' type hints) only introduced it for parameters and return types and implemented in Python 3.5. Type hinting bare variables was introduced with PEP 526 and implemented in Python 3.6.

While I really like the concept of PEPs, and I think keeping original (like as they were written) PEPs around are the right thing to do, I do wish sometimes that Python did a better job at incorporating changes from PEPs into the core documentation once implemented.

> a_list = [] # type: List[T]

How do you do that without a comment?

from typing import List

a_list: List[T] = []

Visit https://docs.python.org/3/library/typing.html for more information.


the GIL was not removed or at least a solid parallelism story was not included

Not for lack of trying. People have been trying to do this since at least python 1.4. They just haven't found a way of doing it without negatively affecting single threaded performance, which Guido considers an unacceptable trade-off. http://pyparallel.org is probably the most interesting latest experiment in this field, but it never got cross platform support and seems to have stalled out.

They were breaking many things, maybe they should have rewritten the C API? From what I remember that's the blocker.

That would have broken all C extensions, but it's not like Python 3 at launch was a raving success regarding library porting.

The C api isn't the blocker for removing the GIL. The blocker is that Python is a dynamic language and there is no good way to know if another thread has just changed the underlying function on an object or not without having fine grained locking.

Fine grained locking works, but is expensive for non-multithreaded applications and slows down Python. It would speed up multithreaded applications though.
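A toy sketch of that dynamism (hypothetical class names): any thread can rebind a method on a class at any moment, so without a lock around every attribute lookup the interpreter can never assume a call target is stable.

```python
import threading

class Worker:
    def run(self):
        return "original"

w = Worker()

def swap():
    # Another thread can rebind the method at any time.
    Worker.run = lambda self: "patched"

t = threading.Thread(target=swap)
t.start()
t.join()

print(w.run())  # every attribute lookup can change under you
```

The GIL makes each such lookup atomic for free; removing it means paying for that atomicity some other way.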

They did break all C extensions in py3k -- not so bad you can't work around it with some preprocessor magic but break it they did.

And they continued to break them within the py3k lifetime, mostly unicode related stuff from what I can remember. Probably still break stuff but I haven't had time for my toy python projects in a while so haven't been messing with the C-API lately.

We're actually agreeing here, I think.

I wished they had broken everything harder and once and for all, in order to fix the issue.

If there's going to be breakage anyway, people adjust their expectations. The scale is basically:

a lot of breakage > a bit of breakage >>>>>> no breakage.

Once you do break backwards compatibility, you might as well do it in a big way, provided you:

* don't offer vaporware for a decade (hello Perl 6!)

* try to offer some mitigations for the migration pain (auto-conversions, support libraries, top-notch docs, etc.)

* offer a great carrot at the end of the tunnel (good performance would be such a carrot)

Here's a video and HN thread that covers most of the current problems with removing the GIL:


"I say this because I'm a bit worried that long term Python has locked itself into a corner where it will keep getting pushed into as the other languages develop a better developer UX."

You can stop worrying. That is absolutely what will happen.

But it is a good thing. Languages should be something, not merely an accumulation of every fad and trend that appeared over its lifetime. That means eventually they will mature, and eventually hit a peak, and then fade. But during that process, at least you'll have a nice, mature language, instead of one always chasing the latest pretty-shiny, immature, confusing, crammed with too many features, with libraries constantly jerking around trying to keep up and always just a bit broken. Indeed, I kind of think Python has already done a bit too much of the latter, and were I in charge of Python I'd give some serious attention to the idea of simply freezing the core language.

Python is what it is. Let it be what it is, because what it is is pretty good. It's by far my favorite language of the genre it is in. It's got way too much baggage to compete in the next arena, so let it be the master of the one it is in instead.

> [Languages] will mature, and eventually hit a peak, and then fade.

"Maintenance mode" should be a badge of honor: you have created something that is a stable foundation for new things. For example, how many programming languages are implemented in anything other than C and/or themselves? Not a lot, and that's a huge compliment to C. Despite its faults, you can probably count on your C program working after you're dead.

What's the last mainstream language that died? Even more so, what's the last mainstream language since the age of the internet that died?

Nothing truly dies anymore. It just lives as a ghost and haunts unfortunate journeymen.

> I think Python 3 was not brave enough, to be honest

Agree 100%. I would have added:

* fix the C API and pass the interpreter instance as the first parameter to all functions as is common in other interpreted languages, rather than relying on a global state.

They didn't really "get away with it". But as for Oracle, for example, doing something similar -- yes, open source languages get more of a pass.

Being an open community effort gives you a free-pass to make mistakes because you're methodologically sincere. Decisions are made by, and on behalf of, those who use the systems.

Having proprietary interests means you're methodologically insincere: you sell decisions as being for the developer, but they are mostly for the company.

Also the tech upgrade is free in terms of software license costs. Yes, those are generally dwarfed by time investments for tests, but still.

Double standards by whom?? Every other article or comment I’ve read since basically py3+1y has been incredibly negative about it. The only positive news about Python, recently, seems from people who were not around since the py2 days. Data science is where it’s at now, but it used to be massive and ubiquitous. Golang came at a time where public perception of Python was down the drain, and ate a big chunk of its lunch.

I've been around since ~1.5 and have no idea what you are talking about. I suspect our perception is very much coloured by where and whom you are hanging out with.

Agree that there was never a shortage of negative articles about the py2/py3 transition. Fairly certain Golang's success was down to its own advantages more than Python's perception; Python's use has grown through all this time.

My interpretation of the usage share is that Golang ate a bunch of lunch that would've otherwise gone to Python. That is, people moving from other languages (Perl, PHP) to something else for various reasons.

Absent Go, I suspect many current Gophers would've landed on Python. So in a sense, Python lost people to Go. But I've never seen anything to indicate that there was a large migration from Python to Go, and certainly not because of 2 to 3 or public perception--whatever that means in the context of a programming language.

Python 2 is still the mainstay for Data Scientists. I've seen more resistance to moving to 3 there than almost anywhere. Which is pretty funny because it's the easiest code to port assuming your data science libs support 3 (and all the big ones do.)

I think it's more accurate to say that Go ended up with some of Python's potential mindshare, not that it stole a significant amount of existing mindshare.

> and ate a big chunk of its lunch

Pretty sure that's not true. Golang might have taken some from Python web frameworks or command line tools, but that's about it. Python's popularity in scientific computing has only grown over time, and that encompasses a lot of fields and users, not just data science.

Wasn't there a big brouhaha over the change from VB6 to VB.Net? The big difference there is that if it's ordained from your corporate overlords, you have little choice, whereas you feel like you've got a bigger right to complain when it's "just" coming from the BDFL.

VB.NET is a totally new language almost unrelated to VB6. That’s like talking about C vs C++, not py2 vs py3.

Calling it a "new language" flatters VB.NET. In reality, it was a "marketing compatibility shim" on top of C# to help VB developers acclimatise to the CLR.

As VB6 is compiled, you don't need your users to have a particular interpreter version installed.

I do remember the good old days of having to make sure you had MSVBVM50.DLL or MSVBVM60.DLL in the right place...

It is no different than ensuring you have the right libc.so in place.

Many languages seem to suffer from legacy support. C++ and PHP for example. The double standard might be the only way to avoid this!?

The legacy support is a benefit for people who are still using it.

The lesson from both Perl and Python is clear: you cannot remove language features - if you do you're creating a new language which has to start from zero in adoption.

Perl 6 removes very few language features. About the only notable ones I can think of is fork and format. Since Perl 6 has built-in multithreading, fork is not needed. And hardly anyone uses format anymore (it is great for directly printing to a line printer).

What Perl 6 did is change the syntax for various features of Perl 5; while adding a lot of new features.

I guess you haven’t looked at Swift. All 4 major versions are very incompatible.

It's still a young language and doesn't have as much legacy codebases as python.

Xcode also has tooling support that will handle a lot of the language conversion for you.

Not to mention the loss of tuple unpacking in function signatures. Even if i felt morally wrong about it, for the last few years I've been writing python2 with a few `from __future__`s at the top since it was fastest and most featureful and had all the libraries.

I'm not sure that tuple unpacking in function signatures is any great loss. It's a feature I hardly ever saw used and if you do need to unpack an argument then you can do so in the first line of the function's code.
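For anyone who never saw the feature: Python 2 let you destructure tuples right in the signature, e.g. `def midpoint((x1, y1), (x2, y2)): ...`. The Python 3 equivalent (a sketch with made-up names) unpacks on the first line instead:

```python
# Python 2 allowed: def midpoint((x1, y1), (x2, y2)): ...
def midpoint(p1, p2):
    (x1, y1), (x2, y2) = p1, p2  # unpack explicitly in the body
    return ((x1 + x2) / 2, (y1 + y2) / 2)

print(midpoint((0, 0), (4, 2)))  # (2.0, 1.0)
```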

I saw it used a lot and it caused hours of rewrites to get libraries python3 ready. It's one of those things where I wonder why they didn't just deprecate it for another few iterations, allowing scripts to be changed over time.

The 2to3 tool will rewrite it automatically. It's not something that requires manual intervention.

Oh ho ho, it is _not_ that simple.

Any code with a lot of assumptions about text encoding or integer division is going to be terribly broken after 2to3. It very much does require manual intervention unless you're doing very simple scripts.

Sure but the 2to3 tool allows you to run the tuple_params fixer on its own. Text encoding and integer division are separate issues and have nothing to do with unpacking tuples in function arguments.

My point in this conversation has been that this feature doesn't require manual rewrites. And I'd add that in any case the deprecation period for python 2 will have been 12 years once Python 2 reaches EOL. That's plenty of time to update a code base.

In case anyone wonders why tuple unpacking was removed:


Php3 and php4? Php4 and 5? Then I've lost contact with pure php

> No other living language could have gotten away with such ridiculous fragmentation and perf regression for such a long period of time

FTFY. Just think about Perl6, Lua5.3 and VB.NET

I am pretty sure somewhere in this world a miserable soul has to maintain a Ruby1.8 codebase.


Well, they didn't really get away with it.

Perl isn't as widely used as it used to be. Besides, in spite of the name Perl6 isn't compatible with Perl5 but rather a completely new language.

Perl 6 is a completely new language, but that doesn't mean it's incompatible. Inline::Perl5 provides a rather extensive compatibility layer with Perl 5: https://github.com/niner/Inline-Perl5

By this logic, Perl is also compatible with the many other languages for which there are Inline:: modules.

The Perl 6 Inline::Perl5 automatically maps Perl 5's syntax, object system semantics, exceptions, etc. to Perl 6's so that Perl 6 users don't in principle have to know about Perl 5, just the logical API of any used module/library/function.

I don't think the Perl 5 inlines do this.

Perl6 is not as widely used as Python 3. Also it's practically a new language in comparison to Perl5.

Does this whole thing qualify as a "debacle"? Why not a stronger effort at backwards compatibility? Certainly not to maintain performance, correct?

About as well, but in a number of cases significantly worse. I am not saying people shouldn't upgrade, but it's not without issues.

I can see about the same happening with Java 8 to Java 11, where 9 and 10 are transitional releases with a really low level of engagement, especially from third-party libraries, kicking off jar hell again.

Ansible still doesn't support Python 3....

Ansible is an application, not a library, so it is not a problem. Python allows installing multiple versions at the same time; for example you can have Python 2.7, 3.3, 3.4, 3.5 and 3.6 installed side by side and all will work fine without issues.

So you can have Python 2.7 installed for Ansible and 3.6 for your application.

I use Ansible via the API. I want to do it Python 3. I moved the ansible directory from /usr/local/python2 ... to /usr/local/python3 and then squashed out any bugs by hand.

It works! I'm using Ansible via the API in Python 3!

It's for system use. You should never depend on the vendor-supplied interpreters for anything, because their goals are different from your (the developer/application-wrangler's) goals. Always develop in a defined, controlled virtual environment, period. If you want to be stylish, there's a fancy new thing called a container in which you execute your application in a defined, controlled environment. :-)

This is really by far the most important point in the thread. I understand the points about embedded use cases and all the other stuff that people bring up when the flames come out in 2 vs. 3. But this is specifically about RHEL.

If you're using RHEL, the point here is almost entirely moot. You should never have been depending on system python for anything important anyway. If RHEL replacing Python 2 with Python 3 breaks any part of your code or your company, you have already made a huge shit sandwich for yourself.

The fix, however, isn't that bad. Just start using virtual environments. It will take you a while to unravel hidden dependencies, but it's doable in a reasonable amount of time.
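A minimal sketch of that fix (the path is just an example):

```shell
# Create an isolated environment so your app never depends on /usr/bin/python
python3 -m venv /tmp/myapp-env
. /tmp/myapp-env/bin/activate          # python/pip now point into the env
python -c 'import sys; print(sys.prefix)'   # the venv path, not the system prefix
# pip install <your-deps> now installs into the venv only
```

Whatever RHEL ships as system python then becomes irrelevant to your application.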

Django also recently dropped python 2. I think it's nice that we're finally seeing support for Python 2 end. We've been in a frustrating middle ground of supporting both for too long now.

About goddamn time. Now let's take care of PHP <= 7 and Node <= 8 and us sysadmins can stop living a foot in a decade ago and a foot in today.

> Node <= 8

Great! Mostly no problem. The backwards-incompatible changes are few and seem to rarely be used in the projects I've seen.

> PHP <= 7

Totally different story. PHP does upgrade slowly, but many of the backwards-incompatible changes are almost impossible to detect without 100% code coverage. A lot of legacy PHP < 7.x code bases are also going to be lacking perfect coverage.

That said, I think bespoke PHP is less common than people think. The WordPress/Drupal/whatever installations are major drivers of the usage numbers for PHP.

We upgraded our bespoke, massive php5 codebase to php7 in a week.

The much bigger upgrade was 4 to 5. We have something like 6 custom C extensions and that was a huge pain.

php7 is a great language though. Worth the pain.

I believe you did that conversion and that it worked for you. I still don't think it's a slam dunk when you're talking about a stable, production product that can't have errors. It's almost impossible to do the conversion with static analysis alone, unless you know something I don't.


Would you please not post unsubstantive comments to Hacker News?


This is good, but it's sad that #!/usr/bin/env python will no longer work as the go-to shebang since python won't exist, only python3 will exist by default.

Geofft[1] has a nifty solution but it's not as cross platform as the old tried-and-true #!/usr/bin/env python:

#!/bin/sh

""""type python3 >/dev/null 2>&1 && exec python3 "$(readlink -f "$0")" "$@" #"""

""""exec python "$(readlink -f "$0")" "$@" #"""

[1] https://ldpreload.com/blog/usr-bin-python-23

I think this has been the ugliest, most badly managed update for a major language since Perl6. And this is not completely over, yet. Over the next 5 years I still expect to see people complaining about this Python3 thing. I hope the designers have learned a lesson.

People used to complain about the way Python changes were breaking their code. So it was decided to restrict all the non-reverse-compatible changes to a separate version called 3 and to stop breaking things in 2. That probably allowed Python to become more popular than it would have otherwise, but at the cost of a significant fork.

So what are you going to do? Either way has downsides.

And it isn't like developers really have a lot of power to guide the course of a language. Lua is an excellent example of this. If you want to write a Lua program you probably want to target version 5.1. The developers have released 5.2 and 5.3 but uptake has been fairly minimal, and it's a problem because 5.3 is a Python3-like fork. You can lead but that doesn't mean that anyone will follow. The Python 3 thing could have gone a lot worse (or better) depending on what you think of Python 3.

Ultimately the issue is that different people have different ideas about where a language should go (if anywhere) and no faction of developers or users have ultimate control. It isn't anyone's fault, it is everyone's fault.

Seems like the best approach here is to not introduce breaking changes to begin with. Go has done a very good job at this. And AFAIK most of the breaking changes in Java have been the introduction of new keywords and have been relatively easy to manage.

Python is a dynamic language; adding a new method or member to an existing class is a breaking change. You don't have that problem with Go and Java.
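A toy sketch of why even additions break things in a dynamic language (hypothetical names): duck-typed callers commonly feature-detect with `hasattr`, so a library "just adding" a method silently changes their behavior.

```python
# Library v1: the class has no .render method, so callers feature-detect it.
class Widget:
    pass

def display(obj):
    # Common duck-typing idiom: fall back when the method is absent
    if hasattr(obj, "render"):
        return obj.render()
    return "<default rendering>"

print(display(Widget()))  # falls back to the default

# Library v2 adds a method -- and existing callers behave differently:
Widget.render = lambda self: "<fancy rendering>"
print(display(Widget()))  # now takes the other branch
```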

Perl6 isn't an update to Perl5. It's a completely unrelated language with a misleading name.

They may not be compatible but they certainly seem at least related.

You could easily argue the same is true with Python2 and Python3.

With one main difference being that we are perfectly content to let people continue to use Perl 5. (Most of us anyway)

Another is that Perl 6 brings in many features from other very disparate languages while making them seem as if they always belonged together. It doesn't seem like Python 3 brings that many new features.

I think the process was run about as well as it could be. It's just very, very hard to change a language like this, particularly something as fundamental as the type of strings. In retrospect more time should have been put up front into transition frameworks like six and into some of the compatibility affordances added to Python 2.7 and ~3.3; it took a while for folks to figure out the best way to write code that supports both languages.

The Internet may switch to 3 but industry will stay on 2.7 for the next decade and no EoL will change that. There is absolutely nothing that can change that because there is zero benefit to rewriting a decade's worth of code and no manager will authorise that.

So don't bin those 2.7 books just yet if you want a job at a big co.

... Said the COBOL developers.

To some degree, you may be correct, that there will be companies that refuse to upgrade for many years. By and large, I think most people will start to switch:

* Small orgs will begin to see costs of maintaining legacy code skyrocket as it becomes harder and harder to get 2.7 interpreter support for newer kernels. Those that aren't already transitioning now will eventually bite the bullet.

* Medium orgs will probably be the laggards. They have enough funds to pay someone else to make compatible interpreters for them. Your observation about manager authorization very likely applies here so many probably won't bother to upgrade without an internal skunkworks-style initiative.

* Large orgs will upgrade. Their infosec departments will freak out that an old, "potentially insecure" language is being used, regardless of third party vendor support. I see this a fair bit now in the PHP space; where RHEL supports and backports patches for old, insecure versions of PHP, but the infosec people still can't stand it. These days, infosec is getting more and more pull in every huge organization, so it wouldn't surprise me at all to see them start to treat 2.7—or the old, un-updated packages that are locking someone to 2.7—as a possible attack vector and force a change.

All that said, you are right about jobs. If someone knows 2.7 inside and out, they will start to see higher and higher paying contract gigs over the next 15-20 years. Just like the COBOL programmers saw.

...and then there's Mega Large orgs, like Google, who are used to maintaining their own software.

I am super curious what Google will do. The thing to watch is whether Chrome/Chromium (and therefore Node.js) can ever be built without using Python 2.7.

Mega large orgs have already moved, in some cases. They have the advantage of being able to throw significant resources into infrastructure to make switching easy.

I'm thinking about Google, Mozilla, Dropbox, etc. They still use a lot of Python 2.

Google created Golang and then created a tool to convert Python2 -> Go.


That grumpy project seems to have stalled in the last 6 months.

I feel like Google hasn't made much progress converting away from Python 2.

Python 2.7 is still required to build Chrome, for example, and I don't think there are any plans to change that. https://bugs.chromium.org/p/chromium/issues/detail?id=61357

And, I believe they use 2.7 whenever they use TensorFlow.

Indeed, on the other hand facebook has (mostly/significantly) moved to py3, Google is chugging in that direction at a feverish pace, etc.

Let's hope this will finally change the "compatibility with py3 is a feature" to "no py3 compatibility - no library" state of things. It's really surprising how stubborn some teams are.

What's amazing is that pip3 gladly downloads and tries to install py2 stuff. Because package/dependency management is a complete afterthought in Python.

And sure, it'll get better, and finally we have lockfiles (Pipenv), and maybe eventually a proper SAT solver will help resolving dependencies https://github.com/pypa/pip/issues/988#issuecomment-36084645... , aaand gradual typing is nice too, and maybe eventually we'll have a proper async library (python-trio, as asyncio is there, but not low level and/or not ergonomic enough).

Yet Python is free and open, and amazing, and I haven't contributed much other than report bugs, so I'm not complaining, just comparing to other ecosystems.
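One mitigation for the pip3/py2 mixup does exist: since pip 9, pip honors the `python_requires` package metadata, so a py3-only package can refuse to install under 2.7. A setup.py sketch (hypothetical package name):

```python
from setuptools import setup

setup(
    name="example-lib",        # hypothetical package
    version="1.0.0",
    python_requires=">=3.5",   # pip >= 9 will skip this release on Python 2
)
```

The catch is that it only helps when maintainers declare it; older releases without the metadata still get picked up.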

Conda has a solver and has been available for however many years. It also works well with pip.

Among people who complain about python package management, I have never heard an argument against Conda other than "I don't like that they recompile base python". What's your reason to discount conda?

Our python is compiled with the latest versions of gcc (on Linux) and clang (on macOS). So you're getting gcc 7.2 compiled python on RHEL/CentOS 6. And all of the security and performance enhancements that come along with it [1]. (Ok we're still on 7.2; we'll update to 7.3 in the next couple months.)

[1] https://www.anaconda.com/blog/developer-blog/improved-securi...

P.S. If you're looking for a latest and greatest C/C++ compiler toolchain that's backed with "production" testing via hundreds of millions of package installs and works on older versions of linux...

It's time!

It's really irritating in the scientific programming world at the moment because we've had some dependencies take years to get to Python 3 compatibility, and they're now being all holier-than-thou about it.

We couldn't even start doing the work until our biggest dependency (which we use in 90% of the code base) was ported to Python 3, and while that's been available for about a year, they weren't distributing it and you had to compile from source to get the Python 3 version - and compiling this particular package, which shall remain nameless, is one of the most difficult compilations I've ever had the misfortune to try. It's notable by its absence on the UK national supercomputer because of this, for example.

So now, we're trying to port our code to it, but it's likely going to take 6 months to a year to do so. Our day job is doing scientific research, and that's what brings in grant income - so for us, it's not a huge priority to spend all our time working on the switch to Python 3, because we can run everything in a Docker container with fixed dependencies and it should continue to run fine for the next 10 years. It will happen, it's just not the most important thing on our list.


It’s not that surprising. People want to work on their tasks of interest. Smart programmers allocate some time for maintenance activity but py3 is a rewrite that is well outside the boundaries of normal maintenance. When the first thing you run has syntax errors and you might have to change almost every file to make any progress, it’s a rewrite.

And plus, many projects are maintained by people in their free time, often for free. Of course no one wants to spend a ton of free time redoing everything only to work apparently the same as before.

