We all knew this was coming, but doing it in a point release seems like exceedingly bad form, particularly as Apple does major releases annually. I never expect point releases to break stuff, at least not on purpose.
The risk of unpatched vulnerabilities between now and the next "major release" far exceeds the risk of inconveniencing anyone foolish enough to rely on the system Python 2.7 installation.
If they wanted to announce at the June 2022 WWDC that macOS 13 wasn't going to have it, that's fine. But we're several months into macOS 12.x and such a large change is bad form.
Let's face it: at this point macOS version numbers are a marketing tool, and not much more. It's only 2 major releases ago that a "x.y" release *would* have been considered "major".
I don't think that's particularly relevant. It was pretty clear that between 10.0 and 10.15, the second number was the "major" version for all intents and purposes, and now it's the first number.
Bingo.
This falls under the scope of support for an enterprise linux company-- orgs pay them to keep stuff backwards compatible long after the upstream deprecates it. Apple is decidedly not an enterprise support organization.
RHEL also doesn't want you using the system python as your dev python. RHEL 8 uses 'platform-python' for the bits the system needs, and if you install 3.6, they share the common bits.
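Roughly, the split looks like this (a sketch assuming a stock RHEL 8 box; exact package names may vary):

```
# Interpreter reserved for the OS's own tooling (dnf, firewalld, etc.)
/usr/libexec/platform-python --version

# User-facing interpreter, installed separately and exposed as /usr/bin/python3
sudo dnf install -y python36
```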
If you're using a Mac as a dev box and really need Python 2.7, you still have options.
Apple supported Python 2 for two years after it was officially sunset by the Python Software Foundation. They've been warning that they would drop support for even longer than that.
How long is long enough to avoid breaking something the users depend on? One more year? Five years? Fifty? I think we can all agree that it's reasonable to drop support at some point, making it just a question of when and at what cost.
"How long is long enough to avoid breaking something the users depend on?"
10 years - a piece of software compiled with 'supported' libraries, SDKs and what have you should function for 10 years. An appliance or a service should function for 10 years and have spare parts available for that long.
That is the guideline I use as a consumer: if I can see that something won't last a decade, I won't be spending money or time on it if I can avoid it. At that point it's no longer a durable good, it's a consumable.
If there are users that still depend on python 2.7, they can install it manually?
I mean I already think it's... interesting that an operating system comes with (multiple) programming language interpreters, available to the end-user.
That’s true if they’re using it from the command line, but not if they’re using apps that linked against the system SDK’s libpython2.7.dylib (and assumed PyObjC would be present).
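If you want to check whether a particular app does this, something like the following should show it (a sketch; the app path is a hypothetical example, and the grep catches either the Python.framework or the bare dylib):

```
# List the dynamic libraries an app binary links against and look for the system Python.
otool -L /Applications/SomeApp.app/Contents/MacOS/SomeApp | grep -i python
```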
Is there any significant software still using that?
Apple is pretty aggressive about breaking backwards compatibility, as they did with x86, so it's no real surprise for unmaintained software to become completely unusable.
Based on what they’ve done, I guess the answer is no.
My company was shipping (somewhat niche) software linking with the built-in python2 up until last year though, and it’s been a scramble since Monterey to rebuild things to avoid the warning message. Definitely thought we would have python2 until 13.0 though (and part of me wonders whether this is just a “brown-out” for the beta and 12.3 release will be back to normal)
I think an end user that wants to automate things with a script can be expected to install their interpreters themselves or use an interpreter that builds on the OS's shell.
I am not too familiar with MacOS, but I imagine it comes with a bash/sh terminal? If so, then that is something that should be first choice for automation.
Everything else, be it python, perl, ruby or whatever should be installed by a user as needed.
And I think that's totally reasonable—as long as Apple makes the change in a major update, when users can expect (some) things to break and should be encouraged to make time to prepare.
Part of it is that it's just a mental burden to keep this kind of compatibility information in my brain. I have VMs set up of past macOS releases in case I need to run something old. I don't have VMs set up for earlier versions of a release, and I don't even know where I'd get an installer. Admittedly, in this case it would just be a matter of installing Python, but I don't necessarily know that as a user, or the software could need to be linked to Cocoa or some such.
> Apple the corporation doesn’t have a package manager.
> MacOS, the ecosystem does.
> No different from Linux. There is no Linux package manager. There are numerous open source package managers for Linux.
macOS the OS has a package manager in terms of a tool to install packages. It does not have the ability to find/download packages to install though.
In the sense of a package manager being able to download and install additional software, macOS has several choices available but none of them are included with the OS, or have the ability to update the OS.
There is arguably no single "Linux ecosystem". There are numerous Linux-based distributions, which are largely comparable to macOS as an OS. One of the very key differences between each family of distros is the package manager the OS ships with by default and uses to update itself.
So no. macOS does not "have" a package manager that's comparable to those included by default with linux distributions, but there are several third party package managers available for it.
> Another false statement. Homebrew is just as capable of searching as any Linux package manager.
I wasn't talking about homebrew.
It helps if you understand the comment before you reply to it and make accusations.
Edit to add, because fuck the HN timeouts. Heaven forbid someone make 6 comments in two fucking hours:
Try reading the whole fucking comment again, with the points being made really hammered home for you:
> macOS the OS has a package manager in terms of a tool to install packages. It does not have the ability to find/download packages to install though.
This is about the built in macOS package manager, that the OS itself uses to install packages. It's invoked via `installer` on the command line, or Installer.app in the GUI. It does not expose the ability to find/fetch packages from remote URLs.
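For example, installing a downloaded .pkg from the command line looks something like this (a sketch; the package path is hypothetical):

```
# Install a local .pkg onto the boot volume using the built-in installer tool.
sudo installer -pkg ~/Downloads/SomePackage.pkg -target /
```

Note that `installer` only installs packages you already have; it has no notion of repositories or searching.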
> In the sense of a package manager being able to download and install additional software, macOS has several choices available but none of them are included with the OS, or have the ability to update the OS.
This is referring to homebrew, macports, etc.
We certainly do see things differently, because only one of us is repeatedly calling the other a liar due to poor reading comprehension.
If you don’t include homebrew as a package manager for MacOS, then you aren’t having a serious conversation.
This whole line of conversation is about whether it’s necessary to manually install python.
If you claim it is you are lying, because it can easily be installed via homebrew, and you are clearly aware of that.
If you have some need to claim that homebrew isn’t a MacOS package manager, or that MacOS doesn’t “have” homebrew because Apple doesn’t ship it, go right ahead, but it’s a red herring.
No, it means that macOS and its third party software aren't one and the same thing.
Is faker.js a macOS vulnerability because there were people using Macs who installed npm and then installed faker.js? This isn't a theoretical question—whenever you install something, you need to consider where it's coming from and whether they can be trusted.
It has a third party package manager, yes. I'm pretty sure your parent meant that there isn't a first party one.
On Debian, packages in the official repositories are considered to be literally part of the Debian project. The maintainers are considered Debian maintainers, and bugs go into the Debian bug tracker.
This matters due to the trust issues I alluded to above. It also means that on Debian, there is one definitive version of Python (or at least one per Python version). The Homebrew, MacPorts, and Python.org Python binaries are all slightly different, such that it's possible for software to work with one but not the other.
No two Linux package managers are identical either, and not all Linux distributions work like Debian. Debian is a collection of volunteers. The choice to trust it is no different to the choice to trust Homebrew. Considering the package manager to be part of Debian doesn’t change this.
However none of this has any relevance to the claim in question, which is that python must be installed manually.
MacOS certainly has a package manager that can install and maintain python.
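Assuming Homebrew is already installed, it really is a one-liner (a sketch; note that what the default formula gives you nowadays is Python 3):

```
# Install and later update a Python managed by Homebrew.
brew install python
brew upgrade python
```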
Except that most other open source package managers strive to be good at what they do.
Brew has a long history of treating any criticism of their truly woeful approach to security, dependency management, etc., with "well fuck you, we don't want to hear your opinion".
There were other, better macOS package managers before it; macports was even sort of Apple-sponsored.
Everyone switched to brew even though it was the most poorly designed because it came from the Rails hype era where everyone wanted all their tools to be written in Ruby and didn’t care about anything else.
I used and still use macports, but there were valid reasons to switch to homebrew: macports did not have precompiled packages, did not use system libraries, and machines were not as powerful as they are now.
This meant if you wanted to install, say, ImageMagick, you would spend one day compiling stuff.
Also contributing a brew recipe was (and probably still is) easier than contributing to macports, and brew casks are pretty convenient.
I deeply dislike some of the choices of homebrew, but I can understand why it was popular, and it wasn't just because of the ruby hype.
Just because someone has the resources to do something doesn't mean that doing so is the right {business, technical} decision. You're conflating the two.
Customer service is still a business reason. It serves to keep your customers happy, and the calculus here is probably that there aren't enough customers to keep happy by maintaining python2.7 that would justify the costs and headache.
As a counter-point, my app was able to rely on the system Python 2.7 for FIFTEEN YEARS because it was an extremely stable component of the OS (and it allowed the app binary to be literally 10 megabytes).
Now? I honestly can’t think of a way to release without adding something like 100-200 MB to the app bundle, consisting of an entire Python installation. With a user-facing app it is not ideal to try to say something like “just download Python and tell me where it is”.
> The risk of unpatched vulnerabilities between now and the next "major release" far exceeds the risk of inconveniencing anyone foolish enough to rely on the system Python 2.7 installation.
You're assuming that your system and way of working are the same as everyone else's on the planet. They are not.
I have two tools that I use monthly which require python 2.7. The people who made the tools never updated them to work with 3. For this reason, I have a work box that I cannot upgrade.
I know that Macs are aimed at end users, and not developers, but it really is getting harder and harder for me to use my Macs for the kind of work I do. Each major software update borks my virtual hosts. PHP has been removed, and because of code signing requirements, adding it back is so complicated it takes an entire day and filtering through a dozen half-baked online tutorials. No, the ones that say "Just use brew!" are not the answer unless you have a very simple job. I don't use Docker, but from what I read on HN, that's six miles of bad road, too. But as I understand the situation, that's Docker's fault, not Apple's.
I've previously worked at a vendor using Python2. Someone brought up this concern. I looked up the number of security releases we had cut in the past due to Python2 vulnerabilities -- it was zero.
Doesn't need to be an RCE to be considered an exploit of the user's intentions. I can't fathom how people are still trying to use an EOL language or why they would even want to. Learn some new syntax, you'll live and the writing has been on the wall for like 20 years.
I don't think a whole lot of people are writing new software in Python 2, but I think a lot of people are still running old software via Python 2. Software they didn't write themselves and wouldn't know how to modify.
And that's not to say those users won't have to find something new eventually (or migrate the software to a contained environment). It just shouldn't happen in an automatically installed update.
>The risk of unpatched vulnerabilities between now and the next "major release"
His point doesn't necessitate waiting. You could just classify it as a major release instead. That is not to say it should be a major release necessarily.
Right, Apple could have removed Python 2 last fall with the release of Monterey, and I would have been neither surprised nor unhappy. I was actually kind of surprised when it was still in Big Sur.
Then they could have done it in 2021 with the release of Monterey. Heck, Monterey was released before Omicron, so if we're concerned about COVID that should have been less disruptive than doing it now.
This wouldn't have worked either. Lots of countries, where vaccines weren't widely available yet, were going through a huge Delta wave before and after WWDC.
Now that most places have vaccines, it probably doesn't make sense to wait for the next major release because this has already been long delayed.
Other factors could have also been at play - perhaps Apple Inc had a lot of internal tools on 2.X and has only recently gotten around to migrating all of those.
I think this mindset is dangerous. There's a really long tail of niche things people do on their computers, such that I suspect a majority of users have at least one niche need. Especially today, when Chromebooks and iPads are available for anyone who only needs basic web browsing.
If you stop paying attention to the niches, they add up to a lot of people.
You are right, but I would say that this is quite an extreme case. This is a programming language whose support ended 2 years ago. The only reason to use it is to avoid converting an old code base to the new one. There are some serious questions about whether that is really a good approach. And if it is, why do you need to use the pre-installed Python 2 instead of installing it yourself?
I expect that there are users, who are not themselves developers, who are using either apps or scripts which rely on the system-installed Python 2. They likely aren't aware that they're using Python, and they may not even know what Python is. It could be one line in one shell script block in one automator action that a coworker put together five years ago.
It's okay for that stuff to break. (I wish Apple would put more effort than they do into making it break less often, but that's neither here nor there.) But, it should break in a major release, when the user is prepared for things to break. It should not break in an update that was installed automatically overnight.
It's funny how people think old, robust software is suddenly going to spring a bunch of vulnerabilities while sitting there, static, unchanging. It's far more likely the bleeding edge python(s) are going to be getting new bugs.
* One of the libraries that old code depends on has a vulnerability discovered. They provide a patch, but it's only for the current version rather than the one used 10 years ago, which means that it's not as simple as just rebuilding Python 2.
* A new attack on a cryptographic component, network protocol, etc. is developed and suddenly defaults previously considered safe are no longer considered safe. As a great example, how many old things broke when major sites and services like PyPI or GitHub deprecated SSLv3, TLS 1.0, etc.?
* One of the Python libraries you use has had a vulnerability which was only recently discovered. The maintainer quite reasonably does not provide unpaid support for free software which they updated years ago.
What all of those have in common is that the problem is less the vulnerability than the fact that you are likely to need significant amounts of unrelated work getting other components updated to the point where you can install the patch. Given how often vulnerabilities are discovered which have been present for many years, this is likely to happen for any sufficiently large code base.
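As a concrete illustration of the protocol-deprecation case (a sketch, not from any of the comments above): whether an old interpreter could keep talking to PyPI after the SSLv3/TLS 1.0 shutdowns came down to whether its linked OpenSSL exposed TLS 1.2 at all, which you can check without touching any application code:

```
# Print the OpenSSL the interpreter was built against, and whether TLS 1.2
# is even available; the print form works with either a 2.7 or 3.x interpreter.
python2.7 -c 'import ssl; print(ssl.OPENSSL_VERSION); print(hasattr(ssl, "PROTOCOL_TLSv1_2"))'
```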
Of course it's not going to get new vulnerabilities, but that doesn't mean hackers won't find previously unknown vulnerabilities that have always been there. Looking through the CVE database, core python alone has had 15 CVE reports in 2020 and 2021, and for old pythons those will never be patched.
I'll never understand why people think they can write some software and it'll stay pristine forever. Even if it had no known bugs at some point in the past, the world is a harsh place and your program, its dependencies, its programming language or even its OS can be found to be subtly broken at any moment. Even if it is not outright buggy, the world will change around you until the interfaces you depend on are no longer there. (looking at you, software program at $WORK that only works with PS/2 keyboards but not USB keyboards)
In the old days, software was like math - it had immutable, provable qualities. In the last several decades software has gradually become more and more like biology, where the core mechanisms are shrouded in mystery and we rely on observation and experiments to understand. The Star Trek game in Basic I typed into a computer is eternally true, a platonic form in a way software now no longer even aspires to be.
I'm sure it is a fine program, don't get me wrong. But citing a Basic I program as something that is eternally true is exactly what I mean when talking about the world moving on. Does it even run on 64-bit operating systems? Will it keep working correctly after the unix timestamp goes beyond its signed 32 bit maximum and overflows? I'll also eat my hat if there is anything you can prove about it in the modern meaning of formal verification.
There is plenty of software being written to very high standards of quality (formally verified real-time control software for rockets/power plants/etc comes to mind), but most of that is not available for free on the internet (or for $5/month as a SAAS) because you get what you pay for.
NASA software is very good, but not in general formally verified, I think (based mostly on reading postmortems on various NASA failures).
Even TLA+ analysis and proof of correctness doesn't stop things like "new zookeeper clients are pushing a lot of new writes => disk space needed for logs increases => across all the nodes in the zookeeper cluster => the entire formally correct cluster keeps falling over now but it didn't last week"
I meant more in the sense of the proofs in TAOCP - these are programs that don't interact with anything outside of the program itself, no non-specified input, so a lot simpler, no dates, no actual machine memory, just a little synthetic world defined by the language itself. Even TeX is imagined to be unchanging after Knuth's passing, and is basically unchanged since the 70s.
For sure the world has moved on, but we have lost the ability to have confidence in the software and have had to develop tools and ways of thinking inspired by biology.
If it is dynamically linked (probably the case, as that’s the norm on Mac OS), it might even get new vulnerabilities.
The combination (old Python, new shared library) could have vulnerabilities that neither (old python, old shared library) nor (new python, new shared library) has.
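You can see exactly which shared libraries the interpreter pulls in at load time (a sketch; the path assumes a macOS release that still ships 2.7, and the binary may just be a thin launcher for Python.framework):

```
# List the install names of every dynamic library the system Python 2.7 links against.
otool -L /usr/bin/python2.7
```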
Many of those CVEs will not impact your application code, though. This is just a side effect of Python having a huge and somewhat crusty standard library.
I think striving for software not to need maintenance when no new features need to be implemented is sound. I am very well aware that the complexity of any but a trivial piece of software on a trivial piece of hardware is beyond human comprehension.
The general approach of most projects and vendors at the bottom of the stack to not accept full responsibility for their part is striking. The amount of lying, faking, leaky abstractions, adapting but not 100% etc. is so immense it is a wonder we get something done at the higher levels of the stack at all.
I cannot comprehend how it is possible to hammer the RAM in such a way that you affect neighboring cells' charge until the information interpreted from the amount of charge in those cells changes. This should never, ever be possible, and I haven't seen any disclaimer that it was possible on the package or in the data sheet of any RAM module. The same goes for changing the kind of magnetic recording in the same hard drive model without notification (or without a change in model name, which would be more honest).

And then there are "subtle" problems, like coil whine on mainboards or PSUs for no reason - we know how to fix this. These high-pitched sounds can give you headaches without you hearing much or anything, there is no indication about it on the package, and almost no review tests for it. And the whole Meltdown and Spectre type of bugs is just the cherry on top, with major performance regressions because of it.

So how are you supposed to write reliable software on top of all this, if just a tight loop might actually change the state of the hardware in such a way that it becomes a problem? I mean, this is just crazy.
Vulnerabilities are going to come up, absolutely. Unless you believe that the older version is somehow 100% bug free and vulnerability free just by virtue of it being old. Being old doesn’t grant it any magic powers.
What’s old now is what was bleeding edge years ago - it hasn’t been updated (otherwise it wouldn’t be old), it has just gotten older.
And the problem with keeping old versions around is that when (not if) someone does discover a vulnerability, what are you going to do then?
If your answer is “we’ll patch it then”, then that’s not going to be an old version anymore, is it?
If your answer is “I guarantee it will 100% never have any vulnerabilities ever and so it will never need to be updated”, I don’t think we’re living in the same reality.
You leave out that even if software A itself doesn't change:
1. The OS and hardware under it can change and lead to new vulnerabilities.
2. Dependencies of Software A can change, which can lead to new vulnerabilities in Software A.
3. It's easier to find more and more vulnerabilities in a target that no longer moves. And once they are found, there will be fewer experts to detect/fix/report them as time goes on.
There was some recent news about a vulnerability in pkexec that's been around for 12 years. Who knows what Python 2 exploits are still undiscovered in a similar manner? Since it's no longer actively supported, attackers have all the time they want to find those problems.
Why is this so downvoted? I really missed the days of python 2.7, I spent far less time debugging breaking version changes and spent more time on my actual work.
For security critical work, like web services, it makes more sense to constantly upgrade to the newest version. But for scientific work, which is a huge audience of python users, who often use obscure poorly supported libraries there is a lot of value in a stable foundation.
> I really missed the days of python 2.7, I spent far less time debugging breaking version changes and spent more time on my actual work.
This has been the Python 3 experience for years. If you had lots of previously ignored bugs in your Python 2 code around string handling, the port took longer, but on a clean codebase it was often only a matter of minutes.
And yet just the other day on the python 3.10 migration in arch stuff broke and I wasted a good day hunting the error down. The solution was installing python 3.6, which should hopefully work for a year or two until THAT is left unsupported.
Kind of like how various Python 2 releases broke stuff and required time to fix? It's easy to forget that if it wasn't memorable but, for example, I remember the specific 2.7.9 release for that reason.
A sibling post mentions the kext breakage as another example, but I feel like they've been slipping on this as they've had increasing difficulty matching major features to their unbending yearly release schedule. So more and more (and this is the case with macOS 12 also, see Universal Control) we've seen big headline features announced at WWDC keynotes not manage to make it into the .0, but rather trickle in over the .1 to .3 releases. Once that rubicon was crossed, having other major/breaking changes start sliding into point releases doesn't seem too surprising I guess, even if the cultural slippage is irritating.
Apple probably has to either give up strict semantic versioning for breaking changes, give up their strict release schedule, or be willing to defer announced features that don't make one major version all the way to the next. Something needs to give. Looks like they've decided to go with a "semantic-lite" where breaking changes and major features aren't the rule with point releases but can still happen in old enough/deprecated areas.
It does seem more needless in cases like this though; granted, unlike other examples, it's not really clear why they'd do this now rather than 12.0. Not as if Python 2.7 was a spring chicken last year either, support ended in 2020 and that was like 11 years after release already? Heck, why not 11.0?
It's not necessarily strict semantic versioning. However, Apple has always had a distinction between major and minor updates. Minor updates are installed on most Macs automatically, whereas major ones are not. Major updates have longer developer preview and public beta periods. Major releases continue to receive security patches for a couple of years.
If Apple isn't going to follow any types of rules, then what's even the point?
I kinda view them as using a modified version of semantic versioning where, if we take A.B.C, the normal roles of A and B are combined into B, and A is the big annual release with the media hype and advertising and shiniest features at the forefront.
Meaning A and B bumps could be breaking, but C is still just for bugfixes.
Sure it's not a great system from the technical side of things, but it shouldn't be shocking that Apple puts marketing first.
> It's implied in the fact that Apple has different types of updates in the first place.
As far as I understand, Semantic Versioning is not a synonym for "versioning scheme", but refers to a specific scheme (https://semver.org). So I don't understand how Apple having any versioning scheme implies that it must be semver.
I edited my post slightly. I never meant to imply that Apple follows strict semantic versioning—and in fact, they clearly don't, since they stayed on one major release for well over a decade.
It's okay if Apple's "minor releases" occasionally have breaking changes. But I don't think "wait until the next major release to remove an entire language runtime which has been built-in for 15+ years" is too much to ask.
Semver is a particular set of rules for defining what is a patch level, what is a minor update, and what is a major update. Before semver, these ideas still existed, but different groups had different ideas of what they meant.
That is perhaps still true in some large companies.
> it's not really clear why they'd do this now rather than 12.0. Not as if Python 2.7 was a spring chicken last year either, support ended in 2020 and that was like 11 years after release already? Heck, why not 11.0?
I don't think introducing breaking changes in 2020 (with the world turned upside down) for 11.0 would have been a good idea. The same reasoning applies to 12.0, many countries were going through a Delta wave without having much vaccine availability this past summer.
I think Apple has been very reasonable about this - they've been clear that 2.x is going away, they have been extremely accommodating of changing circumstances, and 2.x can still be installed manually after this change.
Yeah, and it’s been years, if not decades, since you’d want to use the bundled software on MacOS for stuff that was important. Them removing Python 2 will probably fix more problems due to multiple Python installs than it breaks anything. I tend to get a lot of support questions on “how to make this Python thing work on my Mac laptop” and I have seen some screwed up things.
It’s not semver, but I’m willing to bet (although I haven’t checked) that there were probably a few leftover system apps that didn’t get moved to Python 3 before 12.0 (many of the features were delayed; I’m guessing their work schedule has been disrupted by the pandemic since Big Sur was feature frozen).
But see, if that is the case... Apple is a large company with lots of resources and knowledge of their internal roadmap. If it took Apple's own internal developers until now to remove their dependence on Python 2, then third-party developers and their users should get at least until next fall anyway.
(I'm not defending the use of Python 2. I'm arguing that, as a general rule, if something takes the vendor X time, users should be given X + Y time.)
It was deprecated back in 10.15, so this didn't come completely unexpected. Using it since then always carried the risk of it getting removed in a major or a point release.
Major releases are already such a pain in terms of just how many dev ecosystem things break every time. Doing it in a minor release is really a blessing in disguise -- all you have to deal with is this.
Let's say I have a little utility shell script I wrote ten years ago. I basically haven't touched it ever since. Inside the script is one line of python for fetching the current time with more precision than `date`, or something, which I forgot about a long time ago.
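For concreteness, a hypothetical reconstruction of the kind of line being described (the variable name and precision are my own invention):

```
# Grab a sub-second timestamp via whatever `python` happens to be on PATH;
# the print form works under both Python 2 and 3.
timestamp=$(python -c 'import time; print("%.6f" % time.time())')
echo "$timestamp"
```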
When I install a major update, I test all my scripts to make sure they still work. I do not test after every point release, and I shouldn't have to!
You should be testing early and often, and not relying on arbitrary OS version numbers (Apple disagrees with you regarding their utility) to provide some kind of API stability or assurance.
This is why Docker and other hermetically sealed packaging systems and languages (golang) work: by avoiding the OS implementations, which must change and update as quickly as possible.
The fact anyone would rely exclusively on macOS-level implementations and somehow expect them to remain stable and backwards compatible for any stretch of time blows my mind.
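A minimal sketch of what that hermetic approach looks like in practice (image tag and script name are illustrative, not from the comments above):

```
# Pin the interpreter inside a container image instead of relying on whatever
# the host OS happens to ship; the legacy script keeps running even after the
# host drops its bundled Python.
docker run --rm -v "$PWD":/work -w /work python:2.7-slim python legacy_script.py
```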
macOS isn't quite semver unfortunately. The mid year releases are usually quite significant.
While I'm not a fan of removing Python 2 mid major cycle, it's not without precedent, and they've had popups since the first Monterey release any time anything linked against the system Python 2.