Outdated vs. Complete: In defense of apps that don’t need updates (vivqu.com)
936 points by ingve on Sept 26, 2022 | 498 comments



I can relate to the frustration of the author. I wrote an open source library which gets downloaded 50K+ times per week. At one point, I hadn't updated it in several months and people started asking if the project was dead... In fact, it hadn't been updated in a while because it had been working perfectly and was built in a forward-compatible way; that makes it healthier, and therefore more alive, than probably 99% of libraries on the internet.

The expectation that software libraries or frameworks should break every few months is disturbing. I think this may be because many of the most popular frameworks do exactly that. I find that React-based apps tend to break every few months for a range of reasons, often because of reliance on obscure functionality or on bundled binaries which aren't forward-compatible with newer engine versions.


> The expectation that software libraries or frameworks should break every few months is disturbing

I think it is more that recent updates are used as a proxy for determining if the developer still cares about the project.

The problem is it is hard to distinguish between a project that has been abandoned, and a project that is feature complete and hasn't had any bugs reported in a while.

Although, if you assume that no software is free of bugs, and that at least some users will report bugs, then low activity suggests that either it isn't well maintained, or there isn't a critical mass of users to find and report those bugs. But then, higher quality software requires a larger number of users to hit that critical mass, so it doesn't give you much information unless you also know how good the software is.


I mean, one way to do it would be to just write: I consider this project complete and have had it running ever since.


Well, does that mean "I will never touch this again" or "I won't add any new features, but I will fix bugs"? If the former, I probably don't want to use it. If the latter, I might consider it, depending on what it does and whether it has the features I need. But then, again, how do I know if they are still watching it, given how long ago they posted that?

And there is also the state of "it has all the features I need, but I'd be open to adding a new feature if someone else shows a compelling use case for it."


"If the former, I probably don't want to use it."

I guess that really depends on what it is.

The older I get, the more change-averse I become. That doesn't mean that everything is perfect and we shouldn't fix things that are broken, but it does mean I start to embrace the philosophy of "don't fix what isn't broken" more and more.

If I have something installed on my desktop / workstation that a) is not network attached, b) is not exploitable from a security point of view (least privileges, no sensitive data), c) has little risk of corrupting data, and d) is unlikely to "just stop working" one day because of a systems / dependency update... then I'm not in any hurry to update it. When I do get around to it, it's probably because I'm installing a new version of Linux and so everything is getting an update. But if the old version still works, has no bugs or security risks, and there have been no updates for years... who cares?

I can think of several categories of applications that fall into this category:

- Music / mp3 player

- Classic Shell / Open Shell for Windows

- Office software, like LibreOffice. I'm sure newer versions offer some niceties but things "just work" for me. I don't see the reason to upgrade.

- Almost every game, ever (assuming it's stable and has no major usability bugs that need patching)

- Lots more

Things I do want to upgrade would be my web browser, operating system kernel, anything network attached. Really anything that could affect data integrity or security. Other than that, I'm lazy, comfortable and I don't want to risk the introduction of breaking changes.


> That doesn't mean that everything is perfect and we shouldn't fix things that are broken,

Okay, so this is exactly what GP said. There is a difference between "not adding new features" and "not fixing bugs".


Not to get into a semantic argument about specific words and their meanings, but my broader point is that there are certain categories of bugs where I wouldn't care if they ever got fixed or not. Maybe this is nit-picking, and maybe it's my cynical brain coming out of 25 years of software development, where half the time people can't even seem to agree on what constitutes a "bug" vs. a "change request", or how to prioritize the severity of bugs. But my point is that sometimes "it's not broken" means "it's good enough."

Would I be upset if the developer "fixed" these categories of bugs? Of course not. But would I avoid using the software at all if they never did?

...

"I guess that really depends on what it [the software in question] is."


Maybe a bot that once every three months (or whatever time period you prefer) does a commit to the changelog that says "I am [not] still maintaining this project" and it sends you an email a week before to hit a toggle to remove the "not". Then in the readme you can put a paragraph saying "I use Maintaina-bot (tm), and I commit to clicking the button every three months as long as I'm maintaining this project".

It's a little silly, but I could imagine that if this become a common enough thing, people would start to trust it.
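To make the idea concrete, here's a minimal sketch of the bot's heartbeat half. This is my own toy illustration: the MAINTAINED.md file is hypothetical, and the reminder-email / toggle part is left out. It assumes a plain git checkout with push access:

    // heartbeat.ts - run on a schedule after the maintainer clicks the button.
    import { execSync } from "node:child_process";
    import { writeFileSync } from "node:fs";

    // Record today's date as the maintainer's latest check-in.
    const stamp = new Date().toISOString().slice(0, 10); // e.g. 2022-09-26
    writeFileSync("MAINTAINED.md", `Maintainer last checked in: ${stamp}\n`);

    // Commit and push the heartbeat so it shows up as repo activity.
    execSync("git add MAINTAINED.md");
    execSync(`git commit -m "maintenance heartbeat: ${stamp}"`);
    execSync("git push");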


This is already available in reverse: you can mark a repository as unmaintained in GH to let people know you won't touch it anymore.


My thinking is that someone who has forgotten about a project, doesn't care about it, or for some reason is unable to update it (e.g. after losing access to an account, or even being physically unable due to injury or death) might not update the project to indicate this. I think a lot of the wariness of older projects that haven't been recently touched comes from it being hard to tell the difference between "no changes need to be made, so I'm leaving it untouched but monitoring" and "no one is even looking at this, and if a change needs to be made, no one knows".


Someone will then come up with "Maintaina-bot-bot", so that we don't have to bother clicking that button.

Programmers are lazy creatures.


The laziness (which I have no issues admitting I also have!) is exactly why I think having to explicitly act each time to indicate ownership is important; otherwise, it would be impossible to tell the difference between software that's maintained but hasn't needed updates and something that no one is maintaining anymore, which is the issue we have now.


I think a mix of your suggestion + https://news.ycombinator.com/item?id=32994453 might be good, no?


You can update copyright comments once a year.


You should only do that if you have changed something. Otherwise you are falsely extending the period before the work enters the public domain.


I mean, it depends on the kind of project, right? What would be catastrophic for a web framework could be totally normal for a command line tool. When was the last commit to some of the GNU utils we use every day? If your module provides math functions or implements some protocol, chances are those underlying things are not going to change, etc.

I use a ton of tools I wrote myself and haven't had to touch in years except for the occasional dependency update.


> when was the last commit to some of the GNU utils we use every day?

17 hours ago: https://github.com/coreutils/coreutils/commits/master


Sure, because the repo contains a ton of tools. Yet another advantage of the monorepo: if everything is in one repo, nobody will notice that you haven't touched that one part of the code since you wrote it.

Or let's phrase it differently: there will always be a part of the code that you do not have to touch for decades if all goes well. And if that part sits in a separate repo, is it abandoned or just finished?


All abandoned projects were active at some point in the past. A project being active now is not a guarantee it will stay so. What happens if the project is abandoned after you chose to rely on it?


> What happens if the project is abandoned after you chose to rely on it?

If the project already had the features you need, was thoroughly tested and debugged, and was built with almost no dependencies (only relying on a small number of very stable platform APIs), then it might not matter even if it was already abandoned when you started using it.

Case in point: On a recent software project, I took dependencies on two Lua libraries which were already abandoned. Their source code was short and readable enough that I had no problem going through it to understand how they worked. In one case, I added an important feature (support for a different type of PostgreSQL authentication to a Postgres client library). I also packaged both libraries for my Linux distro of choice and submitted them to the official package repositories. If anyone reports bugs against the version which I packaged, I'll fix them, but it seems unlikely that many bugs will ever be found.

As the OP rightly pointed out, the expectation that software libraries should normally need to be constantly updated is ridiculous. Many other things invented by humans stay in use for decades with no alterations. Beethoven's 5th symphony hasn't had any updates in more than 200 years.


> Many other things invented by humans stay in use for decades with no alterations. Beethoven's 5th symphony hasn't had any updates in more than 200 years.

I don't understand this argument. Are you saying that some things don't require updates, therefore we shouldn't expect software libraries to be updated as well? But there are also things that do require updates or maintenance. Would that invalidate the argument?

Some examples of things that need updates or maintenance: buildings, infrastructure, tools, machinery. Even immaterial things like books or laws receive updates.

And that includes the 5th Symphony too, which has different editions.


Re-reading the comment as a whole might help to correct this "strawman" interpretation, but anyways, here is some clarification:

In today's world of software development, many take it as given that every software library should receive periodic updates, and that a library which has not been recently updated is in some sense "dead" and should not be used.

This is logical in cases where the domain is itself constantly changing. For example, software packages which calculate the results of income tax returns must be updated yearly, because the tax laws change. Similarly, packages which perform time zone calculations must be periodically updated, because the time zone rules change.

However, a vast number of software libraries operate in domains which are not inherently subject to change. As an arbitrary example, imagine a hypothetical library which performs symbolic differentiation and integration. Since the rules for how to integrate a function do not change, once thoroughly tested and debugged, there is no reason why such a library could not be used unaltered for decades. Yes, it might not benefit from the development of more efficient algorithms; but if the library was already more than fast enough for a certain application, that might not matter.
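To make that concrete, here's a toy sketch of such a library's core (entirely made up for illustration, not a real package). The sum and product rules it encodes are fixed mathematics, so nothing in it ever goes stale:

    // A tiny expression type and d/dx over it.
    type Expr =
      | { kind: "const"; value: number }
      | { kind: "x" }
      | { kind: "add"; left: Expr; right: Expr }
      | { kind: "mul"; left: Expr; right: Expr };

    function diff(e: Expr): Expr {
      switch (e.kind) {
        case "const": return { kind: "const", value: 0 };
        case "x":     return { kind: "const", value: 1 };
        case "add":   // (f + g)' = f' + g'
          return { kind: "add", left: diff(e.left), right: diff(e.right) };
        case "mul":   // (fg)' = f'g + fg'
          return {
            kind: "add",
            left:  { kind: "mul", left: diff(e.left), right: e.right },
            right: { kind: "mul", left: e.left, right: diff(e.right) },
          };
      }
    }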

While software is different from other things created by humans, the existence of creative works such as books or musical compositions, which often survive for decades or centuries despite not receiving regular updates, provides an illustration of what can and should be possible for software as well. Note, this is an illustration and not an argument.


> Beethoven's 5th symphony hasn't had any updates in more than 200 years.

What a shame it’s been abandoned and no other musician stepped up to take over the work.


> What happens if the project is abandoned after you chose to rely on it?

Fork it and start maintaining it with other people depending on it.


An active project is more likely to stay active than an abandoned project to become less abandoned. See also the Lindy effect.


...what?

What does it matter if the developer still cares about the project?

It isn't crashing, and no new features are needed. If it were crashing or missing features, the project wouldn't be complete.

We're not talking about some early-access, half-broken npm package here; the blog specifically talks about a completed product.


What if you do find a critical bug after you started using it, and there is no one around that can help you with it?


When you find no one paying for maintenance, then congratulations: you are the maintainer!


Agreed and upvoted, but it becomes a problem if the owner is unresponsive yet still holds the main repository, domains, package repositories, etc. Forking is possible, but it's a complicated, messy, hostile path, and also a lot more work than just contributing a patch.


Forking is trivial and non-hostile. Is there a plumbing issue in your language where it is unreasonably difficult to wire other dependencies to your fork instead of the original?


If there are no updates, then even forking should be light overhead.


> a lot more work than just contributing a patch.

For you it is.

But for a maintainer, reviewing and accepting a pull request can still be a lot of work, especially if the code has been stable for a long time.


If you want an SLA on support, then pay for support. It's literally that simple.


This is a fantasy... Go offer some open source maintainer money to implement something they don't want or care about, and come back and tell us if it was simple. It may be in some cases, where they are actively looking for that kind of work, but more than likely they will have no interest.


I would expect for many developers it is really only a matter of how much you are willing to pay. If they don't already do freelance work then that cost might reasonably be higher than what it would cost to hire someone else to implement it because of the extra tax paperwork and whatnot that they now need to deal with.

Of course there will be some maintainers who are simply not interested or unable to offer paid support (e.g. due to their employer being a control freak), but that's ok. As others have pointed out, forking / maintaining out-of-tree patches (and providing them to others) is not inherently hostile.


> The problem is it is hard to distinguish between a project that has been abandoned, and a project that is feature complete and hasn't had any bugs reported in a while.

Usually I browse the issues / bug reports page and see if there are any breaking bugs that aren't getting much attention. If there are, then that's one more data point that the project might be abandoned.


Speaking to GitHub specifically:

1) The owners sometimes archive the project or write in the readme: complete, no more features will be accepted.

2) You can look at the number of open issues. A project with X stars/used-by/forks that hasn't been updated in 8 months and has zero open issues is probably stable.
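For what it's worth, that heuristic is easy to automate. A rough sketch against GitHub's public REST API (the field names are from the real API; the thresholds are made up for illustration):

    // Does this repo look "popular but quiet"?
    async function looksStable(owner: string, repo: string): Promise<boolean> {
      const res = await fetch(`https://api.github.com/repos/${owner}/${repo}`);
      const data = await res.json();
      const monthsSincePush =
        (Date.now() - new Date(data.pushed_at).getTime()) / (30 * 24 * 3600 * 1000);
      return data.stargazers_count > 500 &&  // plenty of users
             data.open_issues_count < 5  &&  // no pile of unanswered issues
             monthsSincePush > 8;            // quiet for a while
    }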


The maintainer's activity in GitHub issues is also a good metric.


I disagree. I have some Common Lisp libraries that have not been touched in years, and the majority of my day-to-day work is in company git.

I imagine that I am not the only one.


First thing I was taught when I was introduced to a CL codebase/project...

"Library/repository activity has little to do with usability, or quality. Some things are more or less finished and correct."


If you're using a non-GitHub platform without mirroring the repository, you're pretty SoL for getting those green squares of activity to reflect your actual activity.


Is that really a measure of activity? Because I'm 90% sure I saw tools to fake that into some kind of image.

Edit: https://github.com/blaise-io/contribution

For example... I guess that maybe gaming the system really is required, if that's a 'good measure'.


Why should I care about the green squares (which can be faked anyway)? This feels like Microsoft propaganda to get me to put everything on GitHub for no good reason, especially considering that if he's pushing to a company git, that company probably doesn't want it on the public internet.


SoL?


Shit outta luck.


I have code running today for which my last commit (in CVS!) was in 2002.

Some software just does what it is supposed to and doesn't need to change, and this is OK, and I have no idea why this fact infuriates some people.


I wouldn't be surprised if most software in the world has been chugging along unchanged for decades. I suspect future historians in 1000 years will find pieces of ancient code copied around unchanged for centuries.


Because people these days tend to have the attention span of squirrels.


The annoying thing about software is that we still haven't quite gotten the transfer of knowledge working to the point that you can communicate all knowledge about a piece of software purely through code (and no, being able to emulate all behaviour is not the same, otherwise decompiling wouldn't be a thing).

What this means is that for most software an essential part of that software consists of knowledge that needs to be kept alive. The only proof people can have of this is activity surrounding the project. This puts us in the uncomfortable position of having to designate someone as the 'keeper of all knowledge' surrounding a project if it is to be sustained, and failing to do so will result in knowledge being irrevocably lost.


If you have source, you can figure it out. If there's something clever in source that isn't obvious, that's either a problem, or a well documented exception to the rule.

If your whole entire source code is so innovative that you can't immediately understand it, there might be trouble, but so much code is or can be pretty boring.

At most, changing something will break behavior that's not really in the spec but that everyone wants and relies on, and that isn't immediately obvious; then you'll have to fix it.

The trouble is that if knowledge is lost, people will probably not bother to figure it out, and starting fresh might be more appealing.

The bigger issue, especially in open source, is that if there's no activity, there might not be any in the future. It could be abandoned for 10 years and perfectly fine, until a change is needed; and even if the change is easy and the knowledge is there, it might not happen if nobody cares anymore, and you'll have to fork it yourself. So maybe you choose the actively maintained thing, even if it's less complete or more complex or ugly to use.


Even if you have source, you can't necessarily figure it out. Sure, you can figure out what the program does, but you can't know precisely what the program was intended to do, or how closely the current version matches that eventual goal.

Sure, you might have the specification that the original developer was given, but every developer brings their own opinions and hard-won experience to a project. The fact that the specification says "outgoing messages are sent to a queue" and the program links with ZeroMQ doesn't tell you whether that's a design goal or a temporary stop-gap proof of concept. Maybe the program is structured so it can use an email queue for distributed operation. Maybe the program is designed to be linked into a larger system and just push message structs onto an in-memory vector.


It's exceedingly rare that you would not be able to figure it out given the source code. The real question is whether you have the time.

I think if a project was that important, the people who needed something patched would be able to get it together even if the original maintainer went missing.

The original analogy to the challenges of knowledge transfer isn't totally applicable. Knowledge transfer is important to organizations because figuring everything out all over again would be costly, not because it would be impossible.


As long as you are relying on the software, what you do care about is what it does today, which you get from the source.

You certainly care about what it will do tomorrow too (so you can keep up, or avoid having it break stuff for you), but if it goes unmaintained, you don't have to worry about that since it does the job for you.

OTOH, I've been on projects where a dependency was chosen on the promise of what they will do in the future. That future never materialized, so we were left with a bunch of unfinished dependency code, maintained a bunch of components ourselves etc — it was a mess, in short. So I never rely on promises, only on what's there today.


What a program does at the moment is exactly what it should do, unless there's a bug report filed, or the current developers want it to do something else.

Whatever it was originally meant to do is probably either obsolete, or well known to all users, if it's been so long the original devs left.

If it's got ZeroMQ, and has for 5 years, one can probably assume someone will be upset if it suddenly does not have ZeroMQ.

Regardless of intent, for practical purposes, if people are using it, they probably want that to stay around until you do an orderly transition to some other thing.


Are you confusing software dependencies for the semantics of a running program?


If I understand your point correctly, software dependencies are absolutely part of a running program.


Very much so. I've spent a lot of time trying to decipher code, and the biggest questions are usually "why is it doing this" and "what is this large piece of code intended to do".

Reverse-engineering all that can be an utter nightmare even with tests. I suggest that people who disagree consider, for example, the word "reverse-engineer" as in how they might reverse-engineer a car.

Usually you have this kind of problem when it's NOT producing the desired results, and the problem is how to make it do something that's needed without affecting some set of use cases that you don't understand yet.


I think a better way to go about it is to state your own goals. If you want it to bridge to ZeroMQ, you can evaluate the program and see if the way it is coded fits you. If you want another service, you evaluate how much work it is to retrofit. As long as you expect decades out of your software, you should prepare to own all your dependencies. The intention of the original developer does not matter that much.


Why does coding have anything to do with opinions and such? Coding is about code. Pure logic. Fuck opinion. Useless as a metric, as a standard.

gn


None of us are coders as far as I know, I don't think that's a real job title. They call us programmers or developers or engineers.

We solve problems with code, or by finding ways to avoid writing code.

We translate vague requirements into things a computer can handle, or we translate between computer languages, because any spec that was complete would just be the program. That's why they keep us. The computers can do pure logic on their own.

Opinion is very much a part of it because pure logic is just a useless pile of text unless it has something to do with the real world, which is full of opinions.

Even inside the program itself, we aren't dealing with logic. Languages are designed for human minds, and compilers are meant to catch mistakes people make. Libraries are often chosen for all kinds of perfectly good nontechnical reasons, like "That one is actually maintained" or "Everyone already knows this" or "People seem to like this" or "That seems like it won't be a thing in 3 years".


Software would be a point in space. There are multiple types of documentation which can either justify why the point would be where it is, or what direction it should move in.


I think it depends heavily on the coding style. In many corporate environments, developers tend to obfuscate or over-engineer, or to introduce too much complexity into the project.


The problem with not updating dependencies for a long time is that it can force you to make sudden version jumps if something like a CVE is discovered.

At my company we had a real scramble to upgrade log4j when the CVE for that came out. A lot of software was many versions behind. Luckily for us, no breaking changes had been made, and it was simply a matter of bumping the version to the latest one.

That was a lucky break, because if that had been a library like Jersey or Spring it would have been an entirely different ballgame.

If you don't keep your builds up to date, you open yourself up to risks like that, which are unknown and to some extent unknowable.


I think communicating this information in your README or whatever might help a lot. It's hard to know if something is finished or abandoned in general, so a message indicating the former would help people arrive at a decision sooner.


I think one of the core issues contributing to fragile, continuously-updated software is a culture of over-using dependencies. Each dependency added is a potential complexity trap. Using a dependency that has few sub-dependencies is less likely to cause problems, or one whose several sub-dependencies each have no dependencies of their own. It's deep dependency chains that are likely to cause this, i.e. dependencies that depend on multiple others.


> The expectation that software libraries or frameworks should break every few months is disturbing.

Dependencies and external APIs change, so software breaks.

An example from a customer's project today: Ruby had a Queue class up to 3.0.2. For some reason they moved it to Thread::Queue as of 3.1.0. It's only a name change, but it's still one more release you have to put out.


Depends on your choice of language and framework etc., though? The JS ecosystem is very break-y and, at least when I worked with it, so was the Ruby/RoR ecosystem. In .NET or Java, far less so; we have almost 10 year old software for which we can update the libs, and it compiles and runs the tests just fine.


Even Java broke things between java 8 and 9


With flags to recover the previous behavior; and the changes only broke code using non-public API surface, i.e. code that should never have been using those APIs, but which previously couldn't be prevented from doing so.


> With flags to recover the previous behavior

I'm not sure what flags you are referring to, but that might be because I went from 8 to 11 and skipped 9 and 10.

> the changes of which only broke code using non-public API surface

The APIs weren't non-public. They were publicly documented. They did say they were internal and you should avoid them, but they were still used in libraries and referenced in tutorials and stack overflow answers. And if you happened to transitively use a library that used one of those APIs, it would cause a fair amount of pain when you upgraded.


They were non-public, at least all the ones that I recall being broken. From memory (let me know if I've missed something important here): URLClassLoader; the Base64 classes (made public under a different name); Unsafe, whose name implies the contract to some extent, but which is also still available under a flag. Then there's the whole slew of things like DNS resolvers that override internal classes, but again, those weren't supposed to be for "public" consumption.

Ultimately, the break seems to have been intentional, but did receive appropriate consideration. Mark’s comments about moving forward do address these points.


Modularization "broke" a few public APIs since they had to remove an interface declared in swing from some classes outside of the package. I am not sure if there is even one known case of anyone being affected by that change.


I don’t know the details of your particular library but I think 50k/week downloads and no problems is an outlier. In my experience even projects with small audiences tend to accumulate outstanding bug reports faster than they’re fixed.


It's an open source pub/sub and RPC server/client (similar to gRPC). It has client and server components written in JavaScript and Node.js. I avoided using any dependency which relied on pre-built binaries. Also, I skim-read the code of all dependencies and made sure that they were well written and didn't use too many sub-dependencies. The trick is to minimize the number of dependencies and maximize their quality.


I 100% believe that it's possible to build a library without tons of bugs, and appreciate that there are people who put out quality libraries like yours.

I do think that, as a general heuristic, no updates for 3 months means it's been at least 2 months since anyone paid attention to outstanding bugs, especially in the npm ecosystem.


It really depends on the size and complexity of the library. Do you think something like leftPad needed updates every month to perform its function as expected and remove bugs? Of course not, the mere suggestion is ridiculous. But something like jQuery definitely needs constant upkeep. In the middle, there's a world of possibilities.


Leftpad shouldn't exist and any library using it as a direct dependency should be treated suspiciously.

I'm not saying that because of the leftpad fiasco, I mean that any library that small should simply be pasted into your codebase because the risk of managing the dependency outweighs the benefits.
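For instance, "pasting it in" amounts to something like this (a sketch approximating what a left-pad helper does, not code copied from the actual package):

    // Pad `input` on the left with `padChar` until it is `length` chars long.
    function leftPad(input: string, length: number, padChar = " "): string {
      const deficit = length - input.length;
      return deficit > 0 ? padChar.repeat(deficit) + input : input;
    }

    leftPad("42", 5, "0"); // "00042"

A handful of lines you own forever, versus a dependency edge you have to manage.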


I strongly agree with this. Another set of libraries I take issue with is gulp and its plugins. I find that gulp plugins keep breaking constantly and are often flagged with vulnerabilities. The sad thing is that these plugins solve very simple problems; there is no reason for them to have kept breaking every few months for the last 5 years or so.

If I see a library which is supposed to solve a simple problem but it requires a large number of dependencies to do it, that's a big red flag which indicates to me that the developer who wrote it is not good enough to have their code included in my project - I don't care how popular that library is.


gulp itself is sort of abandonware and depends on libraries that have vulnerabilities (judge for yourself whether they apply to you, whether you run them on untrusted inputs, etc.). A remedy in my project was to rip it out altogether and replace it with shell commands.


> Do you think something like leftPad needed updates every month to perform its function as expected and remove bugs?

Node libraries are a bit odd in that respect because package.json files have an "engines" property that informs users about the minimum and maximum Node version the library works with. The author of leftpad could check its compatibility with each new version of Node that's released, and update the engines version range accordingly. That would be quite helpful, because abandoned libraries that aren't known to be working with newer Node versions don't get updated, and users can use that information to look for something else.
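For illustration, a hypothetical tiny package could declare this (the "engines" field is a real package.json field; the name and version range here are made up):

    {
      "name": "some-tiny-utility",
      "version": "1.0.0",
      "engines": { "node": ">=14 <21" }
    }

Note that npm itself only warns on an engines mismatch by default, but the declared range is still useful metadata.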

What that means is that any Node library that isn't being updated as often as Node itself is probably not actively maintained, or the author isn't making full use of Node's feature set.


> "The author of leftpad could check its compatibility with each new version of Node that's released"

Really? Does that sound feasible to you at all, for all the millions of libraries on npm not to be labelled as "obsolete"? Checking whether string concatenation still works on every major Node release and tagging it as such?


> Does that sound feasible to you at all, for all the millions of libraries on npm not to be labelled as "obsolete"?

Yes. I mean, they could just leave the max version of the engines property blank, because it should always work if it's something as trivial as leftpad. But in a well-run package repository, the work of checking library compatibility should be getting done. The fact that npm is full of old libraries that haven't been tested on new versions of Node is a bad thing, and those libraries should be flagged as untested.


Can I get the name?


I’m sure you’re right, but it sounds like it’s an outlier not by random chance, but because it was built carefully.

I think there’s a useful lesson there.


I can confirm that this is the norm for all my open source projects. Forward-compatibility issues can be easily avoided by not using too many dependencies and by carefully checking dependencies before using them.

I check for things like over-engineering and I check out the author (check their other libraries if there are any).


Built carefully, extremely tiny, or single purpose.

"Do one thing and do it well" - the unix philosophy, and why I still use cd, ls (dir on windows) today.

If a project exceeds two functions, it's probably doomed to become either redundant or broken.

If a project does one thing well, even if something else uses it, you can always decompose to it (your software is always useful, even when somebody writes the fancy bells and whistles version).


What non-game-able metrics do you think an outsider can use to evaluate the quality of a tool? In some ways a "perfect" tool would have no flaws (thus no issues/pull reqs) and be so easy to use (few training updates, SO questions) that it would appear to be dormant.

SQLite seems to hit a good balance. There's always something new but nothing particularly breaking (from my light usage POV), so it looks live.


Filed issues are getting radio silence, no matter their quality.

The author chimes in, says he’ll take a look next evening, then two years whoosh by.

There is an ambitious todo and nothing moves.

The code is in alpha for years, considered unstable, huge exclamation points in the readme that it should not be used, and yet somehow there are hundreds of reverse dependencies and it’s like this for years (usually should mean “RUN!!!”). Sometimes sneaky as it’s not a direct dependency for you, but for those two or three libraries used by like 80% of the ecosystem.


We have the same problem with our SaaS product: the product is "complete" in our minds; we don't want to add more features and make it more complicated. We perform minor updates of third-party components, some minor fixes, etc. This means we don't maintain a blog with very recent posts, or a very active Twitter. We do have a daily usage counter on the homepage, though, which shows that thousands of users are using it these days. But from time to time someone still asks if the product is alive... We haven't found a solution to this yet: it's as if we are expected to have a blog post every few days or weeks, and create new stuff all the time.


Could you post release notes on Twitter and on your blog? That way users would know that the product is still supported.


We do have a release notes page, but it's really not interesting content: updates of some third-party frontend or backend packages that users never actually see, fixed a typo, minor css alignment, ...

We do announce when there is a user-visible and significant issue or update, but that's quite rare, maybe once every 6 months or more.


Try opening an old project from 3-5 years ago that has a lot of dependencies, like something from React Native, Django, etc., and you'll find many of them impossible to even install without using the original Python or JavaScript versions - and that's assuming the oldest dependencies are still available in the pinned versions.

It’s not a common use-case, I guess, but I had some old projects that I abandoned and wanted to try working on them again and found it almost impossible. The worst cases had dependencies which weren’t available unless I used an x86 laptop with an older version of Mac, because the packages were never built for arm laptops.


That's what version pinning and semver are for. If I install 1.2.3 in 2018, and the software is now 4.5.6, I don't expect it to work without updates.
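As a quick illustration of those range semantics, using the real "semver" npm package:

    import { satisfies } from "semver";

    // "^1.2.3" means >=1.2.3 <2.0.0: patch/minor bumps are assumed
    // compatible, major bumps are not.
    satisfies("1.9.9", "^1.2.3"); // true
    satisfies("4.5.6", "^1.2.3"); // false - major version jump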

In regards to the second case, I bet that's Python with the wheels situation, where it doesn't fall back to building from source by default for some reason (instead throwing a super obtuse error), at least with Poetry.


I had to choose between two libraries at work. My lead rejected my recommendation because GitHub was showing less activity; the other project saw regular total rewrites, was largely buggy because of that, and had unclear documentation, yet it was the popular choice.


I think one issue is that the world changes around an app or library. In those cases, the code base may need updating as a side effect of that change.

Also, not many people (me very much included) are able to write something that's perfect the first time. In an ideal world, other developers would be testing and challenging what's been written, which would mean the product or library needs updating.

I think it's likely (perhaps ideally so) that at least one of these factors is relevant.


When evaluating libraries, recent activity is one of the things I look at. It's a far from perfect heuristic. I've probably rejected libraries like yours for what amounts to no very good reason.

I don't know if this is a good suggestion or not but I wonder if some form of "keepalive commit" might help here. I can imagine a few ways they might work, some simple, some more elaborate.


I don't check the commit history, I check the issue tracker. If something has unmerged bugfix patches from 3 years ago without any comment from the maintainer, I assume that it's just not being maintained.


I recently added a very visible note to the README.md of one of my projects that essentially said: "lack of commits means this is STABLE, not dead. Our plan for <next major runtime> is bla bla, in a year or so"

This expectation that things need to be committed to or updated every month has to stop, we're just wasting time and energy on stuff that is already fixed (unless there are security issues, of course).

Maybe GitHub needs to automatically include an UPDATES.md into each page, I dunno.


Well, there is also the security aspect to keep in mind. Most software relies on libraries that repeatedly get CVEs over months (not years).

You might not care too much, and it clearly depends on the application. But for me, updating software so that it no longer uses a library whose handling of user input leads to a buffer overflow when you insert a certain character, or similar things (like remote code execution, etc.), can be quite important. Finally, apps are connected to the Internet most of the time, which makes them inherently vulnerable.

After searching "android software libraries CVEs" I found this in the first results' page: https://github.com/dotanuki-labs/android-oss-cves-research

It might be outdated, but the principle still applies.

It's very unlikely that your application is self-sufficient and doesn't need updates.

EDIT: I read the message from Apple as a "hopeful" attempt to say "come on, man at least update the dependencies".


> Well, there is also the security aspect in mind.

What security? "Google security", where everyone except the user has access to his data? Why does a note taking app need internet to function? Why does my phone app need access to the internet?


Those are great counterpoints. Instead of relying on developers to keep their own apps up to date (you will always have a significant number of outliers), you enforce security better at the system level. Having more tightly scoped permissions and better sandboxing of runtimes solves this way better than depending on individual app developers to secure their own apps.


How many note taking apps do you have? Without even checking my phone, I think 70% of my apps need internet access (for a reason).


I understand your frustration, also thank you for your work.

> The expectation that software libraries or frameworks should break every few months is disturbing.

As a library user, it's hard to find a good balance between fully trusting a system to stay alive for a while without maintenance and being super paranoid about every aspect I rely on. Mentally, it's easier to expect it to break every now and then than to keep thinking "it's probably fine, but I still need to be defensive about stuff that never happens".

I think the issue of a library being "alive" or not is best mitigated by the users being comfortable fixing it if it breaks under their use case. There are libraries I thought were completely dead but that were a good shortcut to where we wanted to go, and we expected to extend them here and there anyway. That can't work for everything (thinking of security libs in particular, I wouldn't want to touch those), but to me it's the ideal situation, with no undue stress on the lib maintainer as well.


If your product relies on any compilers/tools which are themselves updated, then at least your CI needs to be updated regularly enough. Maybe in your case that was happening, but if not, then that would signal to me that the product is not being maintained, even if it is feature complete.


Depending on how you feel about it you could update the readme every so often with: [date] Checked project, still is ok in current form.


"I think this may be due to the fact that many of the most popular frameworks do this."

According to the OP, it is also due to the companies that control mobile OSes, such as Apple.

One of the things I like about non-corporate open source OSes like NetBSD is that I can run old userland utilities with new kernels without any problems.


I feel like, and hope gorhill responds to this, that uMatrix was similar to this: done and stable, needing to be touched just so people wouldn't ask if it was still maintained. (Pre manifest v3, that is.)


Perhaps you should commit a new "Still working well..." line to its readme file every few weeks to satisfy the update-hungry crowd. :)


> built in a forward-compatible way

What do you mean by this?


Any general comments on how to write forward-compatible software for a hack programmer?


How do you think users would react to a monthly no-changes-required update?


Sounds like they consider the software to be complete. In that case an update provides no value.


An update provides no value except as a signal to users that the developer hasn't killed the project. I was wondering how annoyed or reassured users would be if they saw version x.x.x+1 with release notes noting no changes.


Please don't. A sentence like "I'm still around, don't have anything to release for now because the library is feature-complete and no bug has been found for a while" would be better.


If you have some kind of security review process for dependency updates, then this is annoying.


> So I had to spend four hours on Saturday upgrading all my platform libraries simply so that I could compile the damn app.

I sort of assume this is the actual point? Apple presumably wants to drop support for older versions of the SDKs, and that requires that app developers update to the newer versions. I think you can make a reasonable argument that dumping work on third-party developers to simplify things for themselves is bad, but the author's belief that it was simply pointless busywork is probably incorrect from Apple's perspective.

I suspect the minimum download threshold to be exempt from this is _very_ high. Maintaining backwards compatibility for a small fixed set of apps doing things an old way is a categorically different problem from maintaining backwards compatibility for every app out there.


If they have specific libraries or APIs that they wanted to deprecate, I think developers would understand that. But the current implementation of "anything 2 years old is too old" is arbitrary.

If this was really about deprecation, they wouldn't have a "minimum monthly downloads" exemption either. This policy is just a way to clear out old, unpopular apps from the store


Traditionally, this is a significant philosophical difference between Apple and Microsoft that goes quite a ways towards explaining how Microsoft gained and held dominance for so long in the business sector.

Businesses don't want to be told that their working software needs to be updated to make a vendor's bottom-line cheaper. They recognize cost-shifting when they see it and respond by backing towards the exits. Microsoft maintained a philosophy for decades that it was their responsibility to, if at all possible, maintain backwards compatibility with older Windows software as a market differentiator. The primary times I remember them breaking this policy were security related.

(That having been said, I got out of active development of Windows software around Windows 8, so this may have changed).


Again, I don't think people would have as much of an issue if it was clearly about deprecation.

Something like Google's minimum sdk version is annoying, but understandable. It's technical and concrete - you must link against version X because version X-1 is going to disappear.

This is not that. It's culling apps that are arbitrarily too old and arbitrarily not popular enough. They must be keeping around old sdk versions if those old but popular apps are allowed to continue on.


This is the single biggest reason why Microsoft continues to be dominant in the market of practicality. Microsoft will bend backwards 420 degrees and land a headshot to make sure something from 30 years ago will run (within reason) on the latest version of Windows.


Not anymore. That may once have been true, but circa Windows 8.1/10 that changed substantially IME. It really came to a head when Windows 7 support was dropped. I spent several years consulting and helping people migrate really, really old software (which should have been replaced) that was Win 3.11/DOS based from Win7 to either DOSBox or Wine on Linux systems, because Windows 10 specifically couldn't support running it anymore. Compatibility with very old applications that were otherwise working was broken in a rough way with 10.

It's fine to argue that typical desktop applications need to be updated, but purpose-built applications cost money. So when you have something like a POS terminal installed in thousands of fast-oil-change locations, dating from the Win 3.11 era and working perfectly fine for years, that suddenly stops working after moving to 10, that's a sudden cost on a business (especially with no warning). Yes, you can argue that companies should be updating that kind of software, if only for reasons of security (I often did). The bottom line tends to be king, especially in smaller businesses, in my experience.


That's the reputation, but it's far from true at the moment. For example, there are numerous games from the 2000s that don't work anymore (I mean crash at startup or soon after, not some soft "not working like before") in any compatibility mode on modern Windows. I'm more familiar with games, but I'm sure the same is true of various other kinds of software (though graphics drivers may well be an important part of the problem, so games may well be over-represented).


Games, especially in the 90s (and some early 2000s), did a ton of API abuse, so they are among the harder types of software to keep running. Still, most do work out of the box, and with 3rd party compatibility shims like dgVoodoo (for the older ones) and DXVK (for the more recent ones), stuff tends to work, since the breakage is still very minimal.

More regular applications tend to be much easier to get working, though. E.g. something like Delphi 2 or C++ Builder 1 works out of the box just fine. The biggest issue with older software is that it sometimes had 16bit installers, which do not work with 64bit Windows. Windows comes with some updated stubs for some popular installers, and it is possible to manually fiddle with some of the unsupported ones to run the 32bit engine directly, though something like otvdm/winevdm that emulates 16bit code on 64bit would also work.

But in general you can get things working, depending on how well the application was written. In some cases (games, 16bit installers) you do need workarounds, as they won't work out of the box, but even those workarounds rely on the 99.9% of the rest of the system preserving backwards compatibility.


I remember the old trick on classic Mac OS of calling GetNextEvent() instead of WaitNextEvent() to make sure no other processes got CPU cycles.

That worked generally okay until the era of the Internet came along; after you quit the game, all manner of programs would crash when the network stack suddenly found itself teleported an hour or two into the future and couldn't cope.


Games depend on more volatile elements like graphics drivers and APIs, like you said, so they aren't exactly representative of Microsoft's backwards compatibility efforts. That said, there are still quite a lot of games which owe their continued operation in modern Windows to Microsoft's veneration of old programs.

A better example is productivity software, like Photoshop and Illustrator or Paint Shop Pro. I can get Paint Shop Pro 5, a raster graphics editor from 1998, to run on Windows 11 just fine, for example. Another is Microsoft Office, in which Microsoft goes out of its way to make sure documents created long ago will load and work fine in modern Office, and ancient versions of Office itself will happily run mostly fine on modern Windows too (eg: I run Office XP on my Windows 7 machines).


Games were among the worst offenders in terms of writing to disk outside of user space (including Program Files and the registry).

Old games also quite often accessed graphics cards directly, outside of official APIs.

And games are not business software. An old game that stops working is "too bad, move along"; accounting software that stops working has real life consequences.


That's not the market they care about. What they care about is that the stupid labelmaking application my employer uses for its annual conference whose vendor went out of business in 2005 still runs today even though it was compiled for Windows 2000.


The only things I've ever noticed having issues in modern versions of Windows are software reliant on old, specialized hardware (for example, dental imaging software) and games. I wouldn't really put the blame in these areas on Microsoft, as they provide neither the drivers nor the custom graphics APIs (notice that most old DirectX software has no issues working) - and hell, you can still run your 40-or-so-year-old apps on a 32-bit version of Windows...


https://www.joelonsoftware.com/2004/06/13/how-microsoft-lost...

> The most impressive things to read on Raymond’s weblog are the stories of the incredible efforts the Windows team has made over the years to support backwards compatibility ...

> I first heard about this from one of the developers of the hit game SimCity, who told me that there was a critical bug in his application: it used memory right after freeing it, a major no-no that happened to work OK on DOS but would not work under Windows where memory that is freed is likely to be snatched up by another running application right away. The testers on the Windows team were going through various popular applications, testing them to make sure they worked OK, but SimCity kept crashing. They reported this to the Windows developers, who disassembled SimCity, stepped through it in a debugger, found the bug, and added special code that checked if SimCity was running, and if it did, ran the memory allocator in a special mode in which you could still use memory after freeing it.

> ... Raymond Chen writes, “I get particularly furious when people accuse Microsoft of maliciously breaking applications during OS upgrades. If any application failed to run on Windows 95, I took it as a personal failure. I spent many sleepless nights fixing bugs in third-party programs just so they could keep running on Windows 95.”

> A lot of developers and engineers don’t agree with this way of working. If the application did something bad, or relied on some undocumented behavior, they think, it should just break when the OS gets upgraded. The developers of the Macintosh OS at Apple have always been in this camp. It’s why so few applications from the early days of the Macintosh still work. For example, a lot of developers used to try to make their Macintosh applications run faster by copying pointers out of the jump table and calling them directly instead of using the interrupt feature of the processor like they were supposed to. Even though somewhere in Inside Macintosh, Apple’s official Bible of Macintosh programming, there was a tech note saying “you can’t do this,” they did it, and it worked, and their programs ran faster… until the next version of the OS came out and they didn’t run at all. If the company that made the application went out of business (and most of them did), well, tough luck, bubby.

---

This really gets into a philosophical difference of how software should work and who should be maintaining it. As a user, I love Raymond Chen's approach. As a developer, trying to maintain bug for bug compatibility with older versions of the code is something that scales poorly and continues to consume more and more resources as more and more bugs need that bug for bug compatibility across versions.


Being dominant in the PC market is kind of a Pyrrhic victory.


> The primary times I remember them breaking this policy were security related.

And they also went above and beyond to make it happen too. That time when they byte-patched a binary which they didn't have the source code anymore comes to mind.




It seems like, on a long enough timeline, promising that legacy cruft will always keep crufting as it did before becomes a huge burden that leaks into your ability to deliver quality in your current offerings.

Sure, there are old things that are good and "complete", but far more old stuff is just old and could well be burnt to the ground, except for the fact that you have some customer somewhere relying on its oddities and unintended behaviors as an essential feature of their integrations.


I feel you are arguing for real harm now, versus potential harm in the future.

There's no sign that being very-backward-compatible is holding Windows back that I can see.

On the other hand, I have collected a set of utilities into my workflow that are easily 20+ years old, and they all work fine.

The last major cull from Microsoft was that 16bit programs ceased to work on 64 bit platforms. Other than that I've never had an app fail.

Most of those utility suppliers have long since disappeared, retired, or died. But I can still keep transferring those apps to the next machine, and they keep running.

None of this impacts the quality of my current offerings. Ultimately all this costs me is some cheap disk space.


I have a couple of camera apps from the mid 'aughts that were written for Windows XP and still function fine under Windows 10. Same thing with an old photo scanner, which only recently failed: my last SCSI-dependent peripheral. It ran fine on Windows 10, on ancient software, using ancient TWAIN SCSI drivers, until the scanner itself failed. As I moved from XP -> 7 -> 8.1 -> 10, the first thing I did was load up all my old photo software to be sure it would work.


My father has an ancient scanner from the XP era and the only thing that still works with it is a Windows XP VM he happens to have (I assume later windows could work too, but the XP exists for other reasons and so we continue to connect the scanner to it).

Maybe Linux could be beaten to work with it, but this works well enough.


Microsoft struggles to release new OSes and move to new devices; it has been trying to move to ARM and has seen little success.

Backwards compatibility is very much holding MS back


Windows ARM64 works great, with (similar to macOS) transparent support for x64 programs. Successful or not, I don't see any problem with the software.


Works great as long as you don’t care about the lack of performance, not getting great battery life, and the lack of native applications…


Which is the reason that Apple is never going to conquer "traditional" gaming. Traditional games (ones that are NOT perpetually updated live services) and rock-solid-stable development targets have always been very close -- game consoles would always be guaranteed to run any game that's ever been officially released for it, even if it's an unpatched launch-day game running on the final hardware revision of console 8 years after release.

Even if Mac hardware manages to vastly outrun the top end gaming PCs on raw performance, they'll never be seen as serious mainstream targets for this reason alone.


> This policy is just a way to clear out old, unpopular apps from the store

Except popularity doesn't correlate with utility when it comes to apps. Probably only addictive games and social network apps will pass whatever arbitrary threshold has been set.

This will harm one-off apps built to satisfy a niche purpose and downloaded by a small set of users. Apple probably thinks these are not important, like all of the little high street shops, yet cumulatively they might affect a majority of users. Also, if it's measured by "downloads" rather than "installs", it could take out larger, more widely used apps that are considered complete by both authors and users but don't have enough daily new users to pop up on their radar as important enough. This is similar to the "maintenance" fallacy of npm, where frequent patches = better, even though if your package is small and well written, making no patches is a sign of quality.


It's a way to clear out old apps that don't make money. A freeware app can be quite popular but won't get updated if the developer has moved on to other jobs and can't continue to devote unpaid labor toward their passion project. In order to afford to keep maintaining it, they may have to move a happily free app toward a subscription or ad-supported model. And Apple's forced updates are nudging people in that direction.

It reminds me of a class I once took where the professor stated that some colonial governments would go into tribal areas, claim land ownership, and start taxing the indigenous people. And because these people now owed taxes, they had to give up their lifestyle, enter the workforce, and participate in the money economy whether they wanted to or not. I don't know if that scenario is historically accurate but it certainly is analogous to Apple's policy. Developers who might not even want to make money are being compelled to do so or see their apps get pulled, because forced updates amount to a tax that must be paid.


> forced updates amount to a tax that must be paid.

It's not a tax that must be paid. The developer can simply discontinue the older app, and not have to pay anything. So your analogy doesn't apply.

Switching an app from free to paid is a lot more work than recompiling and updating the app. There's a ton of coding and infrastructure you have to do. So it's not really saving you anything. The work of switching is a large upfront cost, which might not pay off, because apps don't magically sell themselves, you have to market them (which costs money!). This is especially a problem if you already have an app with low download numbers and low consumer awareness.


> But the current implementation of "anything 2 years old is too old" is arbitrary.

To an extent, but the reality is that an unacceptably-large percentage of apps that are 2+ years old are not correctly handling current screen sizes and reserved sensor areas.

> This policy is just a way to clear out old, unpopular apps from the store.

Great. This is the kind of active culling/editing an app store requires to remain vibrant.


Yes, I agree. All users should be forced to re-evaluate all the apps they are using everytime they get a new device.

It doesn't matter if that tool you currently use is perfect, or the game you play is just fun as-it-is; it is clearly harmful to you (and makes the app store "less vibrant").

Everything older than 2 years must go. Crumbs, everything older than 1 year should go... Nay, make that 1 month...

Who cares if users like an old app, "vibrancy" matters most.

/s


If she had to spend time getting it to work with the newest version of Xcode, there are probably APIs that are being deprecated.


> I suspect the minimum download threshold to be exempt from this is _very_ high.

You can suspect based on no evidence, but nobody knows, and Apple refuses to say.

The crazy thing is, if Apple truly wants to drop support for older version of the SDKs, then how in the world does it make sense to exempt the most used apps???


My guess is the income from a reduced subset of apps offsets special maintenance for only those parts of the old SDKs which that subset uses. Likely as a cost saving measure. Apple may even ask for support contracts beyond some point, to motivate those apps to update too.


> My guess is the income from a reduced subset of apps offsets special maintenance for only those parts of the old SDKs which that subset uses.

Again, a guess based on zero empirical evidence. Also, Apple's "rule" here makes no distinction between paid and free apps. Indeed, free apps tend to have more downloads than paid apps, which means that Apple would be targeting the wrong apps if they were looking to offset costs.


Do the free apps not have in app purchases? Or ad revenue which must be shared with Apple?


Some do, some don't. Why are you asking me, have you not ever seen the App Store?


Not in many years


Ad revenue is never shared with Apple in third-party apps.


It's much easier to support only a handful of apps on an old SDK than millions. Those few probably utilize only 30% of the APIs the SDK exposes, meaning you can safely assume the remaining 70% can be dropped from the maintenance list. They can also work one-on-one with those few companies on deprecation and migration.

Basically a cost saving move.


> They can also work one-on-one with those few companies on deprecation and migration.

> Basically a cost saving move.

Working 1-on-1 with companies to determine what to keep and what not is anything but cost saving, by my estimate.


They don't need to work one-on-one. Any uploaded app is automatically analyzed for API usage. At least, I've been getting warnings about deprecated or unsupported APIs in my apps once in a while, and other developers have too, even if it sometimes seems like crude string matching rather than checking for actual usage of the API.
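
For what it's worth, the crude version of that check is trivial to reproduce yourself. A minimal sketch; the symbol list and path below are made up for illustration, and this is obviously not Apple's actual tooling:

    import Foundation

    // Crude string matching for API usage: scan the compiled binary for
    // known symbol names. The list and path here are illustrative only.
    let flagged = ["UIWebView", "UIAlertView", "ABAddressBookCreate"]
    let binary = try Data(contentsOf: URL(fileURLWithPath: "MyApp.app/MyApp"))
    let text = String(decoding: binary, as: UTF8.self) // lossy, fine for ASCII names

    for symbol in flagged where text.contains(symbol) {
        print("warning: binary mentions deprecated API '\(symbol)'")
    }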


You're totally making up numbers out of thin air.


I agree that this may be the point. Software changes, and being able to compile with the latest libraries is somewhat of a minimum requirement.

In Apple's defense, are they supposed to wait until the app breaks and starts receiving many complaints from customers, triggering a review process (which they would be forced to treat as somewhat high priority), before they take action to remove the offending app? That hurts the customer experience from their perspective.

Better for them to institute a policy preemptively addressing these issues (arbitrary as the timeframe may be).

And four hours is a good chunk of time, but what percent of time is it compared to the amount of time for the app to be designed and implemented in the first place?


> In Apple's defense, are they supposed to wait until the app breaks and starts receiving many complaints from customers

Except that Apple is exempting apps with more downloads, and only punishing apps with fewer downloads, which is the opposite of worrying about "many complaints".


This is still in line with their thinking from a prioritization perspective.

For an app with more downloads, they can dedicate more labor/resources to it.


> For an app with more downloads, they can dedicate more labor/resources to it.

What resources? For older apps with more downloads, Apple is doing exactly what you said they shouldn't do: wait until the app breaks, and start receiving many complaints from customers.


I've had a few apps on the Google Play Store automatically removed because I didn't update them for some new policy or some such.

If they paid me maybe I would have. Otherwise I don't have time to keep dealing with their requests every 6 months. Is it such a hard thing to ask that if shit works, just leave it be?


My prof just ran a Win32 program written 20+ years ago on today's Windows. At least someone is paying attention to backward compatibility.


Windows even emulates bugs from old version of Windows for specific old apps so they don't break due to the bug having been fixed.


How well are those Win32 apps working on Windows mobile?


This is a great point, but if true, Apple should do a better job of communicating the reason, including specifics, not just say "your app is rejected because it is outdated".


> The intention behind Apple and Google’s policies is to force developers to make sure their apps run successfully on the latest devices and operating systems.

The intention is to not support old iOS APIs with new versions of Xcode and iOS anymore.

Apple isn't Microsoft or the Web - very old Windows programs and very old websites still run pretty fine.

Apple would rather shift the burden of updating an app to the latest APIs onto the devs than provide API support forever.


I don't think culling games/apps based on age is the right approach to improving discoverability.

I don't see Nintendo removing old Switch games from the eShop.

I don't see Apple Music removing the Beatles because they haven't updated recently.

My recommendation, which Apple's (obnoxious) ad business would never accept, would be to

1. Remove the obnoxious and generally useless ads which eat up the top half of the first page of app search on the iPhone.

2. Improve app search with more options and a fast, responsive UI. They might also consider allowing you to see ratings from other sources, such as critical reviews vs. user reviews (a la Metacritic).

3. Better human curation with more categories. This is the same approach that Apple Music takes and it seems to work well. Games should have Artist/Studio/Developer pages as well as Essentials lists to facilitate discovery. Same with genre/category essentials, which have a rotating list that can be saved/bookmarked.


> I don't see Apple Music removing the Beatles because they haven't updated recently.

I wish I could upvote you for that multiple times!


They would ask the Beatles' copyright owners to update their format if they posted an old 8-track or vinyl into the music store though, right? Audio formats change, just at a more glacial pace than software dependencies. There is remastering and all that too.


An 8 track is many decades old. TFA refers to an app that hasn't been updated in three (3) years.


It has probably been remastered, which is about the same as recompiling.


The missing point is that they are culling apps based upon age and downloads. I think this is a reasonable criterion if Apple is trying to make space for popular and potentially upcoming apps. Yes, space is an issue if you think of it in terms of the customer's attention span.

The real problem is that developers have no choice except to offer their app through Apple's store. There is no room for the developer to offer their labour of love or niche product in a storefront that better serves their needs, or even from their own website.


> The real problem is that developers have no choice except to offer their app through Apple's store

How is this different from other walled garden game systems/game stores like Nintendo's eShop?

Yet they seem to keep older games and apps, even niche titles for a limited audience (such as music production apps or BASIC programming apps) which I greatly appreciate.

I would agree that discoverability isn't great on the eShop, but there are dozens of enthusiast web sites which are pretty good for game discovery (also aggregators like metacritic.)

And, as I noted, I think Apple already has a good approach which they're using with Apple Music - better human curation including category/genre/time period/artist/etc. playlists. Podcasts/radio shows also help.

Many games (at least ones that aren't in the "live service" category) are more akin to movies, music, or books than to some sort of perishable produce, so a curation approach that balances older content with new content makes sense.


Consoles are different because they have a limited shelf life. For example, I dug out my PS3 the other day and took a peek at their eshop. The most recent title was three years old. It also sounds like they're going to shut down the PS3 eshop, effectively ending the sale of old games anyhow. Something similar appears to be true for Nintendo. I pulled up their eshop on my 3DS and they have a message saying that it will be impossible to add funds to an account after March 2023. Presumably that means sales will end.

As for other media, brick and mortar retailers were always selective about what they offered. I suspect that we will see something similar happen with their digital variants in the coming years. I also suspect that it will be sales numbers, not human curation, that will be the basis of their decisions.


It depends on how you view your phone. Is it a single use appliance? Or is it a general use computer?


Does that really matter? Would Microsoft making a web browser for Xbox[0] make it more general purpose, given you can now bank on it?

0: https://support.xbox.com/en-US/help/hardware-network/console...


I wouldn't bank on it.


> Yes, space is an issue if you think of it in terms of the customer's attention span.

I don't hear independent developers griping that Apple is failing to advertise their apps and bring those apps to the attention of iPhone users. Instead, I hear independent developers griping that Apple is preventing their apps from being run on iPhones.

Therefore, I completely disagree. This is purely a matter of data storage space (cheap and nigh infinite), not a matter of limited attention.


I hear the opposite of what you do. There are very few serious apps whose code Apple is preventing from being run.

There are huge numbers of good apps on the store that don’t get visibility.


Virtual space only makes sense as an analogy to floorspace. If you want to access a backroom special-order item (and virtual backroom space is unlimited), you should be able to get it by searching for the specific item (or URL).

Removing apps based on downloads or lack of updates is troubling.


> I think this is a reasonable criterion if Apple is trying to make space for popular and potentially upcoming apps

I do not. Do you think shovelware author x has any issues pushing a new garbage update?

I want the app store to be full of high quality stuff, not recent.


You ignored the part where they also use downloads. If a high-quality app has no downloads, is it useful?


> Yes, space is an issue if you think of it in terms of the customer's attention span.

By that argument they should also get rid of most old music, old books, old movies.


> if Apple is trying to make space for popular and potentially upcoming apps

“Make space”? This isn’t a shelf. There’s always enough space for digital items.


So do you want to sift through a bunch of apps that don't work on the current version of the operating system or are otherwise significantly dated?

Even in a less walled-garden environment, I'm mostly not interested in running 10 year old applications unless they're something fairly simple that really does do everything required for the purpose and still runs reliably.

In any case, as a user, I probably figure that if an app hasn't been updated in five years, it may or may not work and I'll probably at least subconsciously resent the store's owner a bit for clogging up the store with old stuff.


>I'm mostly not interested in running 10 year old applications unless they're something fairly simple that really does do everything required for the purpose and still runs reliably.

The 10 year old version of AutoCAD still runs, and you can use it today to do a ton of high-value CAD work. Thanks to Microsoft for not arbitrarily blocking it from running.


Yes. But that’s a case of having an old version of a program that you know still runs on a given platform. That’s rather different from looking for a CAD program and deciding to take one that hasn’t been updated for ten years for a spin.


Backwards compatibility benefits current and future users. I agree that "hasn't been updated" is a concern, but you can also look for "hasn't needed to be updated". If you can obtain a legal copy of 10 year old software, you can use it for a fraction of the cost of the new version.


> So do you want to sift through a bunch of apps that don't work on the current version of the operating system or are otherwise significantly dated?

This is precisely why Google only indexes webpages that are written in English and focused on the American market.


When I do a search in English, Google absolutely returns mostly web pages written in English. And, yes, it would be very annoying to the point of being useless if it did otherwise. Presumably if you query it in French, I assume it returns mostly French pages.


So… Do you think this solution could be applicable here?


Which is fine - if there was any other place for iOS apps for those of us that did want niche/old/obscure/historical apps.

Also - there's a cultural preservation issue here as well. That bothers me.


All developers pay an annual tax to Apple, which should cover storage and discoverability at least right?


> I don't see Nintendo removing old Switch games from the eShop.

The difference is that Nintendo shops have a limited shelf life, while App Store is forever(-ish). Nintendo will be shutting down the Wii U and 3DS eShops in 2023.

2023 is 6 years after the Wii U was discontinued. Since the platform was frozen in time in 2017, there's no point to game updates.


eShop games still end up sticking around a lot longer, however. The first 3DS eShop games came out in 2012.


Nintendo updates hardware every five to seven years, and don't old eShop apps run within an emulator?


APIs change, hardware changes, the comparison is relatively silly.


A rectangle with a capacitive screen is just that. Well-designed apps should be robust to improved screen resolutions, faster processors, and better network connectivity.


> A rectangle with a capacitive screen is just that.

If you think that’s all a smartphone is, then it’s natural to come to the conclusion that the only thing that has changed is speed and resolution.

It also happens to be simply wrong.


That's not all that a smartphone is, but it is most of what a smartphone is.

Most applications basically just need:

- a canvas to paint some bitmaps on

- some way to tell what part of the screen the user tapped on

- a way to get TCP/IP or HTTPS traffic in and out

- some sound output

- some persistence

- some way to show notifications

- a few other odds and ends like GPS, sometimes

Almost the entire list has been supported on every major platform since the late 2000s. Yes, rich multimedia apps that make good use of additional APIs and hardware features do exist. But it's inappropriate to nuke most old "normal" applications just because old rich multimedia apps stopped working over time.
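
To make that concrete, here is the same list as a toy Swift protocol. Every name is hypothetical, but each member maps to APIs that have existed on every major platform for well over a decade:

    import Foundation

    // Toy sketch: the entire platform surface most "normal" apps need.
    // All names are made up; the point is how small the surface is.
    protocol MinimalPlatform {
        func blit(_ bitmap: Data, atX x: Int, y: Int)              // canvas
        func onTap(_ handler: @escaping (Int, Int) -> Void)        // touch input
        func httpsGet(_ url: URL, done: @escaping (Data?) -> Void) // networking
        func playSound(_ pcm: Data)                                // audio out
        func save(_ value: Data, forKey key: String)               // persistence
        func load(forKey key: String) -> Data?
        func notify(title: String, body: String)                   // notifications
        func location() -> (lat: Double, lon: Double)?             // GPS, sometimes
    }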


You seem not to think the features the operating system itself provides are part of a smartphone.


Back in the iPhone 4/iPad era, there was no facility within iOS for responsive layouts.

Apple introduced size classes, and then you needed to provide different views for iPads once they supported more than one app being displayed on screen.

Apple rightfully got rid of 32-bit app support on the actual die. It introduced new permission models; design aesthetics change, new accessibility options are added, better APIs are written, etc.
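
For anyone who missed that era, the adaptation involved is small but mandatory. A minimal sketch of a size-class-aware layout; the `columns` knob is hypothetical:

    import UIKit

    // Minimal sketch: reacting to size classes (iOS 8+), the mechanism that
    // replaced fixed 320-point layouts. `columns` is a hypothetical layout knob.
    final class GalleryViewController: UIViewController {
        var columns = 2

        override func traitCollectionDidChange(_ previous: UITraitCollection?) {
            super.traitCollectionDidChange(previous)
            // Compact width (iPhone portrait) gets one column; regular width
            // (iPad, or side-by-side multitasking) gets more.
            columns = traitCollection.horizontalSizeClass == .compact ? 1 : 3
            view.setNeedsLayout()
        }
    }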


It also ensured that many apps/games that were very successful in their time will never be preserved.

https://youtu.be/KB07oxJkd2g


I bought Crash Bandicoot in particular when it first came out for iOS 2. It was broken long before the 32-bit purge.


Rectangle, lol, if only.


Older apps don’t properly fit with the curved corners and notch.


Android vendors were able to make it work.

Besides, isn’t this the company that allowed you to install iPhone apps on iPads and blow the UI up to 2x size?


Android was always written with the expectation that it would run on a diverse set of hardware.

You could run iPhone 4 apps on iPhone 5 and beyond. But they looked horrible.


Sorry, I think you’re misunderstanding. Android vendors built compatibility layers that allowed 16:9 apps to run on 18:9 and up screens. The apps weren’t built to run on 18:9 screens but the vendors found a way regardless.


Part of what makes iPhones great is that you aren't running apps sitting on a pile of compatibility hacks. It was all designed to work correctly on the exact device you use.


> running apps sitting on a pile of compatibility hacks

You do realise the only reason iPhone Retina screens became a thing was to enable double-pixel scaling, because iPhone apps at the time were coded to a fixed resolution.

That's the definition of a compatibility hack.

Of course, Apple being Apple, they managed to sell it as a feature and made every person who was happy with a 1366x768 laptop suddenly desire a Retina display.


What does the user care if there's a "pile of compatibility hacks" there or not, so long as the app still exists and works as it should?


In the case of Apple removing 32 bit support from the OS first and then the actual processors:

- they care about their apps not being evicted from memory as quickly when they switch apps, because memory isn't taken up by both 32-bit and 64-bit versions of shared libraries.

- increased battery life from needing less RAM; yes, RAM takes energy.

- the die space saved by dropping 32-bit instruction decoding can be used for enhancements that users care about, or to shrink the die and make the phone more battery efficient

- for Mac computers, you have computers that are faster, have more than twice the battery life, are more memory efficient (meaning less swap), and can be fanless without getting hot.


The 2022 SE3 has the same 750×1334 rectangle as the 2014 iPhone 6.


The 2017 iPhone X does not.


Oh no, the horror.

Let the user make the damn choice.


Apple should just have fewer apps. I don't see why developers should cater to their customers. They buy less and require more support. They are the worst segment to support.


You’re just wrong. The reason developers support iOS is that iOS users buy more apps.


Are you serious? There's way more revenue to be made on iOS. A lot of Android users spend $0/yr on paid apps.


> A lot of Android users spend $0/yr on paid apps.

As do a lot of iOS users. I think the stat you are looking for is that, on average, iPhone users purchase more apps.

Don't forget to consider Android's massive installed user base in the calculation. Even if Droid users convert to paid at 1/4 the rate of iOS, you can make it up in sheer bulk.


> The impact of the App Store Improvement policy is nonexistent for VC-funded companies.

Not true. More often than not, our iOS releases get delayed hours if not days, while our long-suffering iOS lead patiently walks yet another green reviewer through the policies and our previous interactions with reviewers to convince them that our app is in compliance. Among other things, our app repeatedly gets flagged for failing to comply with rules that don't apply to it. This is usually resolved with a simple message to the reviewer, but depending on the turnaround, that can effectively add a day to the time it takes to get a bug fix out.

Dealing with these bogus issues probably accounts for 5% of the productive time of our iOS lead. And this is despite keeping our release cadence slow (every two weeks except for the most critical bugs) and after we've made many reasonable low-to-medium effort changes in our app to stop triggering false positives in their tools.

God help us if Apple ever went the Google route. Apple reviewers might be inexperienced and undertrained, but at least they're human and capable of talking through issues.


How is releasing a new version every two weeks "slow"?


It’s slow compared to a continuous delivery process in which other dev teams (e.g. web) release multiple times daily.


Since when has releasing an application ever been as fast as releasing something that runs on the web?

Besides, users aren’t going to be updating their app multiple times per day as they would a website that is continuously updated.


In our case, Apple actively destroys user experience with this kind of stuff.

We have an old iOS app for our products that runs fine even on the latest version and while there is a next gen app available that most users have switched to, some prefer the old one. For some use-cases, even for new users, the old app just fits a lot better.

We cannot rebuild and republish that app, because it depended on third-party software that we no longer have access to. The app will continue to work fine for users who own a corresponding device but will most likely be removed from the App Store for no real reason in a few months.

This will force all users who want to keep actively using it to switch to the next-gen app that they maybe don't want, as soon as they add someone new to their device.


Does removing a program from the App Store also delete it from users' phones?


I don't think so, no, but I don't think they will be able to sync it to a new device when they buy one.

In our case, the apps are linked to hardware that the user owns and shares with multiple others. Due to very restrictive changes in iOS background activity over the years, we had to restructure some of the functionality, so the apps are not really interoperable. I think in this case, as soon as the user wants to add a new user to the device or an existing user gets a new phone, all users tied to that device would have to switch to the next gen app to keep having a consistent user experience.

This is not the kind of experience we want to provide for our users but we simply cannot invest as much money as would be required into completely mitigating the effect of Apple's decisions (such as heavily restricting background activity) year after year.
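
For context, the restructuring was about moving from long-running background work to the scheduled refresh windows iOS now grants. A rough sketch of that model; "com.example.refresh" is a hypothetical identifier:

    import BackgroundTasks

    // Rough sketch of the current background model: no long-running work,
    // only short, opportunistic refresh windows granted by the system.
    func configureBackgroundRefresh() {
        _ = BGTaskScheduler.shared.register(
            forTaskWithIdentifier: "com.example.refresh",
            using: nil
        ) { task in
            // Do a short, bounded sync with the device, then report back.
            task.setTaskCompleted(success: true)
        }
    }

    func scheduleNextRefresh() {
        let request = BGAppRefreshTaskRequest(identifier: "com.example.refresh")
        request.earliestBeginDate = Date(timeIntervalSinceNow: 15 * 60)
        try? BGTaskScheduler.shared.submit(request)
    }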


I agree.

Some software from the past that made me feel that way:

1. Adobe Photoshop 7

Felt feature-complete, never ran into a bug, not resource-intensive (I would run it on a Pentium 200), no online activation/membership BS

2. Apple Aperture 3

Also felt feature-complete, had everything I needed. Nowadays I need to use 2 or 3 different programs to get the features of a 2003 app. Unfortunately the app stopped working after the 64-bit transition, and Apple retired it in favor of the much simpler Photos. Shame on you, Apple.

3. iTunes 10

Pretty much the sweet spot of features for managing your music library. Apple replaced it with the Music app, because nowadays users consume streaming rather than maintain their own libraries, but the app is a buggy mess.

4. Winamp

Solid, just worked.

More software should be like that, but never will, because it isn't profitable to have users using the same version for many years. The revenue maximisation strategy is to release continuous beta-quality software, and apps should provide a steady revenue stream. That's why the App Store just assumes an app that hasn't updated is "abandoned". That's the perspective of users in general today, too.


Photoshop 7 struggled with larger PSDs with lots of layers or just a huge resolution/dpi. The memory management was really bad in that era too, featuring manual scratch disk configuration. Constant crashing in large projects was the norm for me. I thought CS2 was a good upgrade and my final point before the nonsense of subscription-based software took over. Now I just use Photopea, because 99% of my Photoshop needs can be met by a single dude on a single webpage.


> More software should be like that, but never will, because it isn't profitable to have users using the same version for many years.

You can get some of it back with FOSS applications, even if the quality normally isn't the same as with old freeware/proprietary applications. At least with a FOSS application you get to keep it forever, unlike SaaS which evaporates into thin air the moment the parent company decides to pull the plug.


You get to keep it, but if the software itself depends on a foundation made of sand, you can still be out of luck - especially for desktop applications that tend to rely on all sorts of libraries that just can't keep themselves from breaking their APIs and ABIs every few years (and those libraries have their own dependencies, etc).

Of course if time is of no essence you can spend it keeping the software up to date, but being able to waste that time doesn't really excuse the underlying foundations forcing application developers to waste their time.


You can still wrap it in a Docker container or a VM, statically compile it, or even use full system emulation if you eventually switch architectures. Or even pay someone to make the code run on a newer environment, if the application is worth it to you.


Every version of Geometer's Sketchpad feels like complete software. Yeah they add features every so often (pinch to zoom is pretty huge) but the core of it is the same.


This is another case where Apple fundamentally doesn't "get" games. They had a plan for clearing out crap from their store which probably sounded reasonable, but nobody even thought about an entire class of programs.

Lots of stumbles and missed opportunities - like, what if they had actually been serious about making the Apple TV a Wii-style games console? They have the hardware expertise to do that at a reasonable cost, but they just have no idea about the market, and apparently no desire to learn.


Oh, they get it.

Apple products very much have a time and a place. Their plan is recurring revenue. Something that happened 3 years ago is almost entirely irrelevant to them. It's true with both hardware and software.

Wii-style games on AppleTV would not be a win for them. They don't want to sell a few million copies of Wii Sports once every 3-5 years. They want you buying a new 99¢ game at least once a month. They want you spending $5-10 in micro-transactions a month. They want you subscribed to AppleOne for $15-30 a month. They want you to buy a whole new iPhone every 2-3 years and start the process all over again with some new AirPods, cases and bands in between.

Apple doesn't want to sell you a new game for $59 every 2 years. They want to sell you what amount to an average of $59 worth of goods and services a month… forever. And while that sounds like a lot, that's a low target. If you have your TV subscriptions through Apple and Apple Care you can easily be contributing $100/mo or more to Apple's bottom line.


I understand that Apple is trying to deal with the sort of shovelware/accumulation problem that the Wii had, but the blanket approach of culling games based on release date seems wrongheaded.

If Nintendo took that approach then they'd end up throwing away Super Mario Odyssey and Zelda: Breath of the Wild.


Probably not since they get plenty of downloads...


I am a maintainer of Chalk (a JavaScript library for outputting color codes for terminals).

We consider Chalk 'finished' and probably won't release another major version that changes the API - perhaps just TypeScript maintenance as Microsoft messes with the language even further and we have to fix types.

Sometimes, software is complete. I hate this new industry dogma that code rots over time and needs to constantly be evolving in order to be "alive".


I think when a project depends heavily on other packages, it is easy for security issues to make it too risky. Lately, I've been seeing more and more npm packages that keep "extending" a dependency to add more features, which eventually gets abandoned.

I'll use this package, multer-s3-transform, as an example. It's been abandoned. It depends on multer-s3, which depends on AWS-SDK and multer, which then depends on busboy.

Now, AWS-SDK, Multer, and Busboy have organizations maintaining them. However, the random projects overlayed on top of those are by single individuals.

The answer is two-fold. Avoid overlaid projects and avoid big projects managed by a single individual.


Chalk has no dependencies aside from those vendored into the source tree.


I took a quick look at Chalk. I would say the yoctodelay dependency is kinda useless, and it is maintained by a single individual. I sense potential danger there. The other dev dependencies should be audited too.


So? They're dev dependencies. They're not needed to actually use the code.


Looking at the metrics Apple tracks in the App Store, it seems to me that "engagement" is all that really matters. You could have some huge number of happy users, but if they open the app only occasionally, you get numbers in red. Or you could have an app that solves an important issue but only for a small number of users, and you get numbers in red. If you're not releasing hits that people use all the time, you sink. If tons of people don't spend tons of time in your app, you sink.

The author complains about the time required to update libraries, and that's an aggravating process, but that's just an unfortunate part of maintaining an app. The real issue, again it seems to me, isn't that you have to do a lot of work just to increment the version string; it's that, ultimately, modern content ecosystems are designed around engagement-driven revenue models. And solid, simple, quiet, long-lived, useful or fun little apps simply can't thrive when all the sunlight is blocked by towering marketing-tech giants.

IMHO.


You're completely right. And I wouldn't single out just Apple here; they're falling into the same trap as others. And this is going to be a recurring theme across all domains. We're attempting to model things like 'usefulness', 'credibility', 'importance', etc. into measurable metrics because of the pressure to scale and monopolize the market. It's scary to think that for all the talk of hiring the 'best and brightest', it's these metrics that the supposedly smartest engineers in the world come up with. It's pure hubris on our part.

There is a dystopian arc, but I prefer to be more optimistic. I would think a legal mandate that champions interoperability, open data standards, and platform openness would put a dent in this march towards converting the human experience into numbers.


> Looking at the metrics Apple tracks in the App Store, it seems to me that "engagement" is all that really matters.

This has been the name of the game in ad tech like fb, Google and social media in general. I think two worlds are clashing with each other, where consumer tech is somewhat aware of the problems around mindless scrolling and addiction, but the growth & engagement mindset of the 2010s is cemented in the culture. Apple has little reason to follow this model because they primarily make money from selling hardware. Having a software platform that protects the user from this crap is a competitive advantage against Google, who depends on ad-based revenue. Apple seems to have an identity crisis, fearing they lose out on sweet subscription fees and ad revenue, now that most apps are free. This in turn is creating conflicts of interest, where they end up fighting their own customers.

If regulators would bark and bite harder around anti-competitive behavior, it might actually force corporations to focus on what they're good at instead of everyone building their own mediocre walled gardens that grow like a cancer and eat the company from within. At least, that's my wishful thinking...


They are trying, just look at GDPR. It's just not happening in the US.


Additionally, updating libraries periodically is inescapable for any app involving network connections or handling arbitrary user content, because doing otherwise means exposing your users to vulnerabilities.

Fully offline, totally self-contained apps are a different matter, but those represent an increasingly small percentage of apps.


I've personally witnessed several unnecessary and costly refactorings in my job that were done due to this weird perception the blog is talking about. All those weeks and months that were spent removing a perfectly fine tool that was working without issues.


Hey, Apple, people still play Super Mario, and it hasn't changed a single bit since it was released 37 years ago.

But there are reasons I somehow understand Apple's stance. First is security: almost all apps are online now, and new vulnerabilities are found regularly. Super Mario has bugs, but no one cares; in fact, speedrunners have fun with them, because a NES is not online and there is nothing sensitive in there to begin with. Second is Apple's own fault: they break their own system and want you to make sure you keep up.


I don't think the Super Mario comparison is entirely fair here.

Super Mario Bros runs on a single piece of unchanging hardware. Even when you play it on any other platform, you're playing it through an emulator designed to mimic the original hardware.

And that's the case with all video game consoles. Even when the physical hardware changes, you're given the guarantee that it won't break the software. The few times that has not been the case have been notable.

iOS devices don't have that guarantee. The hardware can have breaking changes. Your software now has to contend with the fact that there is no guarantee that certain features will be present. Things that were previously not customizable now are. If you want your software to run on the latest iOS, it is at least partly your responsibility to ensure that it does.


Yet somehow I can load up 15 year old PC games on decade newer hardware pieced together from random vendors and it all works fine.


But he didn't compare 15 year old PC games to iOS applications. He compared a 37 year old console game.

You also need an emulator to run 31 year old PC games like Civilization on modern OS/hardware.


> My old code simply did not work anymore with the latest versions of Xcode.

In other words if there was some actual reason to do an update, like a security flaw or serious bug, she wouldn't have been able to readily do so.

This seems to support Apple's policy to make developers update the app every once in a while.


Apple is causing this issue in the first place by breaking compatibility and requiring projects to be updated for each new Xcode version.


That’s because the platform is actually changing.


That’s not a reason, unless the platform is changing in incompatible ways, which is exactly the problem, and a largely avoidable one, as other platforms demonstrate.


Other platforms don’t care if applications fingerprint the device without the user’s consent.

Closing off APIs that facilitate fingerprinting is just one of many incompatible changes Apple has been making.

Some other reasons are deprecating power hungry technologies and asking for more permissions to access private data.

These are changes that benefit users on a massive scale. Why shouldn’t a developer be expected to respect users needs?


There are ways to still maintain API compatibility in that case (Windows does it all the time), and it’s also not a reason to require projects not affected by those changes to recompile with the newest Xcode version each year. I’m maintaining a Swift library/framework that doesn’t use any iOS APIs at all, only basic Swift standard library calls, and I still have to produce a new build each year from unchanged source code so that the library can be consumed with the new Xcode version. That’s just insane.
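
(For completeness: the closest mitigation I know of is Swift's library-evolution mode, which keeps a compiled module's interface consumable across compiler versions. It's only a partial answer, and this is my reading of the toolchain rather than anything Apple promises. A SwiftPM sketch:)

    // swift-tools-version:5.7
    import PackageDescription

    // Sketch: opting a library target into library evolution so its binary
    // interface stays stable across Swift compiler releases. A partial
    // mitigation only; App Store resubmission rules are unaffected.
    let package = Package(
        name: "MyLibrary",
        products: [.library(name: "MyLibrary", targets: ["MyLibrary"])],
        targets: [
            .target(
                name: "MyLibrary",
                swiftSettings: [.unsafeFlags(["-enable-library-evolution"])]
            )
        ]
    )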


Why is it insane? Do you not test your library with new versions of the OS?

Also - “windows does it” is obviously irrelevant when we are talking about a mobile OS.


Testing with a new OS version shouldn’t (and usually doesn’t) require rebuilding the project. The need to rebuild is imposed primarily by the Xcode releases here, not by the OS. And that seems entirely unnecessary. Note that breaking compatibility and having to rebuild for new Xcode versions are two independent and orthogonal issues here.

I won't continue this discussion about maintaining compatibility. This is fundamentally a philosophical issue. I agree with the sibling comment that it is the job of an OS to provide stable APIs across versions. It requires some effort (as a long-term library maintainer, I'm very well aware of this), but it is not an impossibility at all.


The sibling and you are entitled to this preference, but I don’t see anything philosophical about it.

The job of the computer is to serve the end user. The job of the OS is to manage the computer's resources on the user's behalf.

API stability can contribute to that goal, but this is not an absolute.

Apple balances their view of what is good for users over what is good for developers and themselves.

Your preference is to prioritize developer comfort over both end users and Apple.


Being able to continue using an older, not actively maintained app they still want to use is pretty good for the end user...


Maybe a tiny minority of end users, where the breaking changes benefit everyone else.


New features in the API indirectly benefit the end users (once apps start using them). But those new features don't have to be breaking, and that part doesn't help the users at all. It does help Apple spend less resources on maintenance, though.


Changes to old features also benefit the end users, and they do have to be breaking in a significant number of cases.

If you think there are no insecure or inefficient APIs in older versions of operating systems, then you are simply wrong.


The whole point of an OS is to provide a stable API to use.


This has never been true. The point of an OS is to manage the resources of the computer on behalf of the user.

API stability sometimes serves the end user, and sometimes does not. When it does not, it should not be maintained.


Being able to use the app you want to use serves the end user pretty well.


Yes, that’s the point here. A few end users can use an app the developer has abandoned, at the expense of the millions.


If I'm aware of an issue (be it tracking, or power consumption, or ...) with the older version of the app and want to run it despite that, that should be my prerogative as a user.


It’s clearly delusional to expect iOS users to keep track of the deficiencies of all the old APIs their apps are using.


> Other platforms don’t care if applications fingerprint the device without the user’s consent.

Isn't this supposed to be why the App Store has a manual approval process? So Apple can actually check if developers are doing malicious things?

By all means, apply extra scrutiny if an old API is in use. Maybe allow the API only in updates to old apps, and not new ones.


No. The manual approval process covers the visible features, not shady stuff the reviewers can’t see.

> By all means, apply extra scrutiny if an old API is in use. Maybe allow the API only in updates to old apps, and not new ones.

If the developer is updating the app, there is no reason they shouldn’t adopt a more user friendly api.


Why not ask for those things specifically rather than saying "old is bad?"


In practice, it amounts to the same thing. To make use of the APIs with the enhanced privacy etc., you have to use the latest SDKs.


Keeping apps up to date with modern privacy concerns is a good thing. Why is anyone arguing in favor of developers abandoning apps?


> In other words if there was some actual reason to do an update, like a security flaw or serious bug, she wouldn't have been able to readily do so.

(1) There wasn't.

(2) "After 4 hours of work to re-compile my app and 44 hours waiting in the review queue"

The biggest obstacle here isn't the developer, it's Apple.


The most basic, first step of maintenance is: does it still compile? Not in this case.

So another way to look at it is Apple requires developers to once every 3 years do the absolute minimum amount of maintenance, and they get a 90 day window to do it in.

Developers needing to demonstrate once every 3 years that they still have the source code and it compiles should not be an obstacle at all.


> Developers needing to demonstrate once every 3 years

You say "Developers" without qualification, missing the crucial exemption: for some bizarre reason, developers of apps that are above a certain download threshold do not have to demonstrate that their code still compiles. Does that make sense?


What Apple actually stated is the policy is for "discoverability of apps" for "the majority of users". Clearly the intention stated is to remove old, unpopular, outdated, crappy apps from the App store.

So when you ask "but why are popular apps exempt?", well, they already told you exactly why: because they're popular. People still want the app.

I feel like the actual reason you're against this policy is left unstated.


> Clearly the intention stated is to remove old, unpopular, outdated, crappy apps from the App store.

1) The criteria do not include "crappy". Apple didn't claim that the article author's app is crappy.

2) What's the difference between "old" and "outdated"? The article author's app was recompiled but otherwise not changed.

3) The app is now back in the App Store. So if Apple's goal was to remove the app from the App Store, then Apple failed.

> People still want the app.

Alternatively, they could be scams that are getting downloads from keyword stuffing, fake reviews, search ads, and other well-known methods.

> I feel like the actual reason you're against this policy is left unstated.

Not at all. The same reason as the article author:

"The decision to remove outdated apps places a disproportional burden on indie developers and hobbyists"

Apple has said, in response to anti-trust claims that "developers, from first-time engineers to larger companies, can rest assured that everyone is playing by the same set of rules." But that's clearly a lie, is it not?


> 1) The criteria do not include "crappy".

I said that was part of their intention, indicated by "follow current review guidelines". Go ahead now and try to put Apple's intention into your own words. What do you feel is Apple's intention behind this policy?

> "... everyone is playing by the same set of rules." But that's clearly a lie, is it not?

Large developers also have to resubmit their very unpopular, obscure apps every three years. Small developers with popular apps also do not. Everyone is playing by the same rule. You made the claim, what developers are not subject to this same policy?

> "The decision to remove outdated apps places a disproportional burden on indie developers and hobbyists"

Explain why you feel a few minutes every three years is a burden. Literally the only thing Apple is requiring you do is recompile your unpopular app and submit it as an update once every three years.

It's not disproportional on indie developers and hobbyists unless you feel they have more unpopular apps that they also almost never update. Hobbyists can make popular apps, such as that Japanese guy's side-by-side calculator app that was on the HN front page the other day, and they often put more love and effort into updating their apps, even the old ones. Frankly, this view that hobbyists are oppressed by needing to recompile once every three years reads like an insult.


> Go ahead now and try to put Apple's intention into your own words. What do you feel is Apple's intention behind this policy?

I can't read Apple's mind. You can't either.

> Large developers also have to resubmit their very unpopular, obscure apps every three years.

Oh give me a break. I'm done here, this is not a good faith argument.

> Explain why you feel a few minutes every three years is a burden.

A few minutes. Again, this is not a good faith argument. Your reply is not serious.

> Literally the only thing Apple is requiring you do is recompile your unpopular app and submit it as an update once every three years.

Your reply is contradicting itself, because you claim Apple's intention is to get rid of apps, but you also claim that it's incredibly easy to not get removed.

> Frankly this view that hobbyists are oppressed by needing to recompile once every three years reads like an insult.

Do you think the original article author was insulting herself?

What a bad faith, insulting reply.


> I can't read Apple's mind. You can't either. ... Do you think the original article author was insulting herself?

I asked what your understanding was, not what Apple actually was thinking, and you absolutely can say what you believe about Apple and their intentions; this goes to theory of mind, which is a cornerstone of being a functional person.

> give me a break ... not a good faith ... reply is not serious .. bad faith, insulting reply

It's now clear to me that you have unstated reasons, because when calmly asked to explain your reasons you went off like this rather than simply stating those beliefs. You feel it's a burden, but why is such minimal effort a burden? You won't say.

Your expression of concern over the blog author's character, your inability to explain your views, and those views being irrational leads me to conclude that you are simply white-knighting.


Compare with media files. Music and movies don't expire. The most common audio and video formats will probably be supported forever. (Perhaps there are exceptions for obscure codecs?)

It seems like this could be done for at least some games, but defining a virtual machine and binary format that's both good enough that game developers will want to use it, and at the same time stable enough that it lasts forever, seems like a challenge? Also, it needs to be non-networked, or it will stop working when the server goes down.

This was done a long time ago for interactive fiction, but doing it for modern mobile games seems like a challenge. WebAssembly should probably be involved, but the user interface will need a huge API surface that's also stable.


I think Wasm + WebGPU is probably very close to what you want


No, thank you. I'd rather target Dosbox.


I want to make WebAssembly the Unicode of executable code formats, stable indefinitely far into the future. That requires backwards compatibility for all code format features and simple, stable APIs.


Not only the file format. The content also stays supported, and legal protections were extended, not shortened, over the years.


On a related note, I feel that some websites are at a point where they're complete, and adding new features makes the experience worse.

Take Github: to me it feels that it's now at an optimum where adding new features that make it more social or more "fun" would simply make it worse.

Similarly YouTube: tons of good material, and the user interface is fine. Then they introduced Shorts, and to me it feels like these types of videos are simply not part of YouTube's identity; adding them only makes it worse.

At some point it might be best to stop adding new features and just agree that things are fine now as they are.


Surely it's possible for GitHub to add features that aren't related to making it more social or "fun," though? For example, they could improve their support for stacked PRs in a number of ways; that's an area that is far from fine at the moment IMO.


they could show the line count of each file in the repo browser


"Planned obsolesence" as a matter of official policy. Any cultural expression older than a few years is not worth the bits encoding it. Meanwhile, GTA V from 2013 is still one of the top selling games on Steam.


GTA V has been updated countless times since 2013, including entire re-releases for two newer console generations.


The Windows port being sold at Steam is from 2015. But of course you have a point; there have been updates in the meantime. It's conceivable that those updates are what's keeping the game marketable.

Still, I don't think Steam would confess to deleting a game just because it is old. They carry a lot of games which haven't been updated in years and which still sell.


They also carry old games which sell but don't work anymore.


Don't forget "Feature Complete".

It's a real thing, especially when you're building something under the Unix philosophy of "do one thing only, really well".


>I think the asymmetry of App Review is still lost on Apple. For indie developers our hopes and dreams (and sometimes our finances) hang in the balance, for the App Review team it’s just another app rejection among tens of thousands. I know they think they get it, they just don’t.

I really don't think it's lost on Apple or the app review team. They simply have no incentive to change, and a captive audience in both developers and app users/buyers. Better customer service (to devs) on the app review process when there are issues, and continuing support for older SDKs, cost time and money, and Apple does not see the value in that investment.

Absent competition there's also no pressure to change this. There's effectively a duopoly in mobile software between Apple & Google. They don't even need to explicitly communicate with each other to act as a cartel, they just need to silently follow each other's lead.

Android is at least marginally better for allowing-- through a few scare-tactics hoops to jump through-- sideloading of apps, but I don't consider that sufficient competition to overcome the duopoly label. Neither is Android's allowance of alternative app stores, no more than MS allowing users to install alternate browsers was sufficient to overcome their uncompetitive practices when it came to IE. The primacy of the Play store on initial setup & its extremely deep integration in the OS is difficult to overcome.

Alternatively, I am slightly conflicted on this due to the security aspects of app installation. Mobile certainly isn't perfect, but the prevalence of malware seems significantly reduced from what is seen on PCs (well, Windows; macOS is not as bad as Windows, but I've always considered that to be a product of its lower market share and therefore lower cost/benefit ratio for attackers).


I see some people defending Apple. Bizarre. The support Apple gets for its most absurd decisions has led me to classify it as a religion, not a company.

Apple is a trillion-dollar company with over 154,000 employees. Apple has the resources to keep two-year-old code working; DOS programs still work on Windows 10.

The decision is arbitrary; someone who doesn't understand software has a lot of power within Apple.


How well do your DOS programs run on Microsoft's successful mobile platform, which doesn't have swap, has limited memory, and gets 24-hour battery life on the latest processor node?


Frankly, isn't that just what you signed up for as a mobile developer? The WorldAnimals app would probably have worked perfectly fine as a web app, but then the author would have had to figure out payments and discoverability herself. That's what the App Store offers, and in exchange Apple gets tremendous power to tweak every nook and cranny to maximise their profit. I'm sure Apple has good commercial reasons for the process that the author has experienced.

I think the real answer here isn't to try and beg Apple to be nicer to the small fish, but to instead use the "old", non-walled version of the web.


It doesn't apply here, but there are a few app ideas I have where I don't really care about payments or discoverability; I would need to go native just because the features are not supported in iOS Safari. For example, here's a list of standards that the WebKit team has said they have no plans to implement: https://webkit.org/tracking-prevention/#anti-fingerprinting (for me, Web Bluetooth and Web NFC are the big blockers).

Until those gaps are fixed, native apps are your only option.
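
For example, basic BLE scanning takes only a few lines with Core Bluetooth in a native app, and iOS Safari exposes nothing comparable to web pages. A minimal sketch:

    import CoreBluetooth

    // Minimal sketch: BLE scanning via Core Bluetooth. This is the kind of
    // capability with no web equivalent on iOS, which is the gap above.
    final class BLEScanner: NSObject, CBCentralManagerDelegate {
        private var central: CBCentralManager!

        override init() {
            super.init()
            central = CBCentralManager(delegate: self, queue: nil)
        }

        func centralManagerDidUpdateState(_ central: CBCentralManager) {
            guard central.state == .poweredOn else { return }
            central.scanForPeripherals(withServices: nil)
        }

        func centralManager(_ central: CBCentralManager,
                            didDiscover peripheral: CBPeripheral,
                            advertisementData: [String: Any],
                            rssi RSSI: NSNumber) {
            print("found \(peripheral.name ?? "unknown") at \(RSSI) dBm")
        }
    }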


Well, no, a native app and a web app aren’t one-for-one interchangeable.

Apple acts as a gatekeeper by only allowing native apps to be distributed through the App Store, so it is not unreasonable to ask that they refrain from making the process too onerous.


The real problem is that any native developer is essentially a slave to Apple's or Google's requirements (or both!). Where or how did we go wrong with native development and applications? It's all owned (and thus gated) by these industrial behemoths. The web, for example, is a much more 'open' platform. There needs to be an open platform like the web that can be used on mobile devices.


Yup, my very simple/basic/I-thought-forward-compatible app broke when the stores required 64-bit only. Maybe that's a simple change, but I can imagine how more complex apps may break much more frequently with OS changes.


What's wrong with the web on mobile devices? PWAs are going to get better support even on iOS now. What remains to be done?


Bluetooth on iOS is one example


Totally relate to the OP. I recently got my app kicked out of the Apple App Store for not being updated to the latest SDK. No patience or time to change programming language, layout engine, audio API, graphics, screen sizes, etc. I just said "screw that" and now point people to a crippled-but-usable web version of it instead when they ask.

My app is 5+ years old and I basically considered it "done". It came to be exactly how I wanted and nothing more .. and users kept discovering new possibilities even after years of usage. It's a metronome https://talakeeper.org .


Eh, this is Apple you’re talking about. They don’t know what backwards compatibility means, and your app might randomly break on the newest version of iOS. Or it might be slightly ugly around the Dynamic Island™ and Apple users would hate that.


> They don’t know what backwards compatibility means

Platforms are supposed to absorb developer pain, but Apple offloads a continuous and multiplicative update burden onto all of their developers.


And you have to pay for the luxury too.


Because Apple prioritises the best user experience.


I would advocate for Apple and other app marketplaces to adopt the carrot and not the stick.

Incentivize app developers who make good and substantial updates, whether it's new features or updating to newer system APIs, with a positive % modifier to their search results in the App Store.

This way, well-maintained apps would show up higher in search rankings and there's a natural incentive for developers to keep their programs maintained. But at the same time, this would allow developers to call a program "finished" and leave it on the store without being scared of having their hard work destroyed. Their hard work might not be as visible, but that would be on them.


I think there's something to be said about this in regards to consumer electronics as well. One of the oldest electronic devices that I still use on a daily basis is my ereader, which is a very early model. It simply does everything it needs to do, it's feature-complete.

Unfortunately there aren't many consumer electronics that fall into that category and a huge portion of consumer electronics that end up in the dump are not physically damaged in any way, but simply can't run the latest software, firmware, or OS. Or else they're missing the latest wifi or cellular radios, high end cameras, wireless charging protocols, etc.


Actually, I realized that much of my _music_ equipment falls into this category as well.

My mixer is an early example of a MIDI-controllable analog mixer, but since the MIDI protocol is still ubiquitous, it still fits right into my modern workflow. New mixers don't really bring much to the table that an average person would need.


Similar thoughts about my PlayStation 2 from 2004[0] that I still play Dance Dance Revolution on. I don’t have to worry about anything changing that breaks the experience. (Except finding a TV that minimizes upscale lag but that should have been a trivial thing for TV makers to get right.)

[0] The PS2 platform is from 2000 but I use a miniaturized PS2 slim which was released and probably manufactured in 2004.


In an ecosystem like npm, having such a thing is next to impossible. The useful libraries that don't depend on anything else from npm are rare, and those that do need to review their lockfiles and run npm audit and whatnot every once in a while, even if their own code doesn't change. Otherwise it's quite a pain to integrate them with newer codebases.

In any ecosystem at all: if it speaks HTTPS, it’s either never complete, or it ceases to function past some date.

I get the sentiment very much, but there are cases when you can’t have nice things.


> I was essentially told that I hadn’t updated my app in three years and now it counts as outdated. I needed to update the app within 90 days or it would get automatically taken down.

There are "obsolete" devices that require Internet Explorer 6 with some ancient Java or ActiveX for configuration. I have a Windows XP virtual machine for this. Recently, there was an article about someone servicing trains using Windows 98 https://news.ycombinator.com/item?id=32884814

Nowadays, devices are configured with a smartphone app. In a few years, the apps will all be unavailable, and there will be no possibility to use such devices.


Earlier this year I set out to completely rewrite an old project[1] using minimal, battle-tested dependencies to avoid having to constantly patch code rot associated with more modern languages and toolchains. The rewrite[2] is now complete, more performant and lightweight than the old one, and I'm confident I won't have to touch it much, if at all, and it will compile and work in 20 years.

[1] https://github.com/vkoskiv/NoMansCanvas

[2] https://github.com/vkoskiv/nmc2


You'll probably still need to patch every once in a while for security updates on your dependencies.


Sure, yeah. That and I'm happy to do bug fixes as well, of course.


It is ridiculous that not all apps are held to the same standard. Before traveling with my partner recently we decided to buy some of the top paid games from the App Store on her iPad. A bridge building game I wanted to get into that cost maybe $5 CAD had severe scaling issues which meant I frequently couldn't see elements. Why should high purchase count work TOWARDS exemption from updating and not AGAINST? The more people who use your app should mean more frequent updates, in my mind.


The use of recency as a proxy for quality or usefulness is one of the more perverse trends of recent years, but it's understandable. There is just so much of everything that a strong demand is created for any method of filtering.

Of course I have to add that it does matter a lot for certain types of software. Anything that interacts with ever-changing APIs, hardware specs, and protocols needs to be kept up at least frequently enough to stay useful.


Reminds me of a game I published on the play store. Years after, I got an email saying that I needed to do something (don't remember exactly what), otherwise it'll be removed from the store. I was busy at the time and it wasn't that great of a game anyway. So now it's gone forever. It's things like this that discourage me from developing anything mobile.


As a user, there are two issues I could face from this type of culling. First, I keep older devices—maybe just for RSS or news apps or podcasts or in-car music or video players. Relatively simple, straightforward uses. If I have additional uses in mind, apps won't be available. Similarly, the same would be true for someone who wants to let a child use an older device for something like games.

The other issue is the number of iOS hiccups which can lead to support documents saying to reinstall the OS. Doing that will make older, removed apps unavailable after reinstallation of the OS if the user relies on iCloud backups. This risk necessitates iTunes backups to ensure such apps remain available after reinstalling the OS.

I doubt a situation like this contradicts the overall value of the culling. But I do know it has an effect on me and there are some people—certainly, a small number—who will lose apps because of the shortcomings in iCloud backups.


This helps to drive iOS updates. My iPad 4 is locked to iOS 10.3, so I can't easily download newer releases of apps. I am aware of being able to "buy" an app on a newer device and then download the last compatible version on that iPad. This policy could impact that ability.


I think this applies to more than just apps too; it would apply to software in general.

"Update or perish" is a bad model for software.

https://gavinhoward.com/2019/11/finishing-software/


A similar thing happens in the Open Source WordPress plugin repository.

All of my WordPress plugins are free & Open Source. Most are tiny plugins using functionality (filters, actions) that is part of WordPress core. Unless WordPress becomes backwards-incompatible they will function perfectly fine for the foreseeable future. From my perspective these plugins are feature complete & unless there's a bug, don't need any attention from me. Sadly the WordPress repository expects me to update the version number or else the plugin will become less visible in search results and a notice will be placed above the plugin's title stating: "This plugin hasn’t been tested with the latest 3 major releases of WordPress. It may no longer be maintained or supported and may have compatibility issues when used with more recent versions of WordPress."
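
For reference, the notice is driven by the 'Tested up to' field in each plugin's readme.txt header, which looks something like this (plugin name and version numbers hypothetical):

    === Example Plugin ===
    Contributors: username
    Requires at least: 4.0
    Tested up to: 6.0
    Stable tag: 1.1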

So for some of my plugins I occasionally 'bump' the version number to make sure people can still find it in the search & for some plugins I just leave it be because I have better things to do. However it didn't feel quite right to keep people using my work in the dark, so I've added a text to communicate this to them. This is the text from my 'Redirect To Homepage' plugin:

"Is this plugin actively being developed? Yes and no. Let me explain: I consider this plugin to be feature complete and unless bugs are found there will be no development on this plugin. In other words this plugin is in maintenance mode and will be maintained for the foreseeable future. Due to other obligations I’m not always able to keep up with WordPress version’s and updating this readme’s ‘Tested up to’ version number. However, unless WordPress significantly changes the way the login_redirect filter works it should work perfectly fine even though the ‘Tested up to’ might be of a lower version number. As always, when in doubt, test it (and when it does give you issues, feel free to leave a comment)."

I think this balances both interests, those of people using (perhaps depending on) my work as well as my own. A similar approach could be used by commercial app stores to restore autonomy & balance interests.


I've been thinking about this topic for a while. If I ever build any kind of language or framework, I would prioritize backwards compatibility and make sure I don't waste anyone's time with unnecessary changes or deprecations.

I've also had to archive a few of my mobile apps and remove them from the App Store because it just wasn't feasible to get them running again.

I've adopted this practice with my SaaS startup (docspring.com) out of necessity, since I never want to change anything that might break a customer's templates or API integration. I also never remove or rename any database columns. It's been totally fine! I have a handful of feature flags sprinkled through the code, and a solid test suite that makes sure I keep supporting legacy behaviours. It has never bothered me. I don't understand companies or maintainers who are constantly changing things and breaking stuff.

I had a look at an Ember.js project recently and wanted to update everything to the latest version. I couldn't believe how many dependencies were broken, and how painful it was to even migrate to the next minor Ember version. Why??!?! What are all these extremely important changes people are making? The same goes for Rails, React, and every popular library. It's just a treadmill of things constantly breaking for no good reason.

But if you ignore upgrades for a while and fall behind, then suddenly a security vulnerability is announced and you've gotta spend the next 1-2 weeks upgrading libraries just so you can fix the vulnerability.

I had the idea to start an "upgrades as a service" development agency. We would monitor all your Dependabot PRs, fix any broken CI builds, and make sure you stay up to date. I wanted to build an automated code-mod tool that would know how to upgrade libraries between versions, especially for simple things like a method or class being renamed.
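
For the simplest case, a codemod can be little more than a tree walk plus a substitution. A naive sketch (all names made up; real tools like Rector for PHP or jscodeshift for JS work on the AST instead of raw strings):

    <?php
    // Naive codemod: rewrite calls to a renamed method across src/.
    // A plain string replace is only safe for unambiguous names.
    $files = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator('src', FilesystemIterator::SKIP_DOTS)
    );
    foreach ($files as $file) {
        if ($file->getExtension() !== 'php') continue;
        $code = file_get_contents($file->getPathname());
        $patched = str_replace('->oldName(', '->newName(', $code);
        if ($patched !== $code) {
            file_put_contents($file->getPathname(), $patched);
        }
    }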

I'd really love to be a customer of that service if it already exists. I waste so much time on this when I should be building new features and fixing bugs.

Maybe GitHub Copilot will be able to handle it one day.


Genuine question: what would it take for a lot of indie mobile developers to move to the web? It seems app stores are pretty punishing with their capricious rules and high fees, and the web avoids that with greater freedom, better backwards compatibility, and lower fees. Some of the tech was flaky 10 years ago, but a lot of web tech is impressive now, and still advancing (e.g. WebGPU and CSS container queries).

I guess discoverability and monetization are key points, but what about free art projects or small games built as side-projects and such? It seems to me they wouldn't have much to lose and a lot to gain by moving to the web.


I couldn't really accept this blog post because the author doesn't even realize they benefit from Apple's walled garden:

> Most people are trying to build well-designed, useful mobile apps

and then proceeds to say how the review process is not helpful because they don't weed out many malicious apps. First, confirmation bias. Second, do they have any idea how things are in Android land? It's a constant struggle against nefarious app devs trying to abuse new technologies or means of getting ad/install revenue. It's one of the things I like the least about the ecosystem I use.


The frustrating thing about software is that we still haven't quite mastered knowledge transfer to the point where you can exclusively communicate through code all of the information about a given piece of software. As long as you are depending on the software, all that matters is what it can currently do, which you can learn from the source. If it's been so long since the original developers left, whatever they originally intended it to do is probably either no longer relevant or already known to all users.


Apple and many other services are truly becoming unhealthy for society at large. Not individuals, but the global ecosystem.

They are destroying information at a rate that is almost unprecedented. I think they probably burn a library's worth of information pretty frequently and just move along after salting the earth....

I see it with all sorts of platforms that claim to be innovative.... they are so short-sighted and think they can subtly hide their greed...


Random note: I loved "malicious apps are relatively rare; arguably, Apple doesn’t do a great job filtering them out anyway". We sure are lucky that bad actors only target Android!


I hate to see a piece of open-source software abandoned and broken. But I love to see a piece of software “abandoned” and still working because it has been designed for long-term use and simply doesn’t need updating.

That is how open-source grows. Because you can’t grow a codebase if you have to keep updating and fixing what you’ve already done. As you have to keep updating and fixing stuff, new progress slows and eventually stops. It’s the same for “codebases” like ecosystems/package repos and open-source in general


Interesting argument. If you take a look at Sublime Text packages, you will see that some of them were last modified years ago but they are still downloaded and sometimes even trend, according to the package site's metrics. You then have to ask: can I trust these packages? Or do I look elsewhere, like VSCode or Neovim, whose extensions and plugins are often regularly updated and maintained in comparison?


There is a simple solution to this issue. Since Apple's distribution is not working well for these app developers, they should choose another mechanism that suits their purpose better. They can simply choose to distribute binaries for all the machines they like to support. As long as the target platform APIs work, their app will continue working for anybody who wants to run it.


In offense to apps that do not get updated, notably on Apple App Store:

You must update the software to show what the privacy levels are.

Starting in 2021, the Apple App Store mandates spelling out how users’ data are being used. Sadly, this only applies to new updates.

Hiding behind how our data are being used … by virtue of not updating your app … is not a defense.


Don't spend your time defending your app. Spend your time lobbying your representatives to either break up Apple or force them to open up their devices to other stores.

Even Ma Bell couldn't tell you who you could buy a telephone from, who you could call, or what you could do on the call, much less take a percentage of anything you ordered over the phone. Enough is enough.


Actually, Ma Bell could tell you who to buy a telephone from early on. They controlled what devices were allowed to be connected to the network.


It's just as well to produce releases every so often to prove that it's still possible. That is an aspect of safety - let's say some security problem arises in a library that your code uses - the more up to date your code, the greater the chance that merely updating the library will fix the problem without forcing an update of your app.


Maybe I misunderstood her, but she says it cost her several hours to rewrite her code to be able to compile her app again for the newest iOS and iPadOS versions.

Doesn’t this imply that her app would not have worked on the latest OSes?

In other words: if she had done nothing then her app would not work on newer devices? If so, then isn’t it useful for apple to warn her about this?


You misunderstood. Otherwise the whole article wouldn't make sense, she clearly states that it still works fine on new devices.

Compiling with the newest versions of the libraries/tools can be a lot of hassle, including downloading/installing multiple GBs.

The newest OSes will run apps made with earlier versions of the libraries/tools fine, up to a point of course.


Having read this I can only assume Apple is trying to gouge money as usual. No recourse for the developer but to waste their time and money on a pointless version bump.

Were I developing an app, I would never target an Apple device, and anything I developed would be in the same vein as the article's app.

At least with Android I could develop an app and target the F-Droid store.


I don't know what more evidence we need that iOS and Apple are rotten and regulation is necessary. Too bad legislators are walking at a turtle's pace.

Were the platform open to competition, we could have multiple stores that would have better policies. It would still elude the common user, unfortunately, but it would be a solution.


So you’re going to “legislate” that Apple must maintain compatibility forever?


Requiring an update to ensure compat with the latest SDK/API seems reasonable. Apple has never maintained eternal backward compatibility (à la Microsoft), so that behavior is pretty consistent and known for devs on their platform.

Having some rule that looks at number of downloads is unreasonable. That has nothing to do with compatibility or security.


One thing also to keep in mind is it's often hard to keep a project working because the ecosystem evolves around the project. So updates are needed just for existing functionality to remain.

Features and APIs are deprecated/ removed from libraries, and if you don't update your project stops working in most user environments.


Soooo the wild thing is, Apple gets more aggressive with its App Store policies the larger you get. Some small feature that may have been OK at a million users may be seen as inappropriate when you hit 10 or 100 million. Their requirements just balloon and they get more and more annoyingly involved in your business.


There have been a couple times I’ve built software that just worked for years on end with no need for updating.

Felt like magic both times.


I don't know why Apple is doing this if they want to maintain their monopoly. I only see them throwing fuel on the fire of calls to break up the App Store, for some pretty marginal benefits to themselves. They are literally targeting the smallest and poorest developers explicitly.


Another day, another Apple rant about the walled garden.

They know you won't leave, they know you'll put up with it.


At Microsoft, at least, it's very bad for your career to be involved in a "complete" project. Most Windows code just hums along doing its thing, generating value and not breaking. But that value will never be counted in the performance review of any engineer/manager.


So, what self-respecting programming languages are there that wouldn't offer some new syntactic sugar every year so that you can keep your codebase up-to-date? ;-)

(Basically, programming is all about engagement nowadays, like with everything else.)


I recall having a bit of an argument with IT at a previous position. I wanted netcat installed; they told me no. Since they didn't have any good arguments, they told me it hadn't been maintained. I told them that was because it was done.


As a developer myself, I can relate to the author and hope that the app review process gets reformed to give more power to developers. As a user, though, I have to say that I actually prefer apps to be updated and usable on my new devices.


Google forced Gmail users to use a JavaScript web engine to re-authenticate their account, so I wrote my own SMTP server 5 years ago, set up a Pi with it, and I don't recall adding/modifying anything since.

I wrote my own video player based on ffmpeg/vulkan/x11; changes did happen, mostly because of a core change in ffmpeg or for some workarounds for HLS streaming, since my ISP has really trash peering with live-stream CDNs.

I am still on X11, I use dwm and st, and changes are really rare.

Don't forget, open source software is not immune to planned obsolescence; this seems very true of open source software with strong ties to "corpos".


God, the UX in that karalof video is so much better than what iOS has become today.


And despite all of Apple's efforts, the number of misleading, scammy, or useless apps in the App Store is huge. I don't search the App Store anymore. I have to see a link to the app from a trusted source.


It seems like not a day goes by without someone writing an article about their experiences dealing with the well-meaning but nonetheless insane policies and procedures of the Apple app review process.


It's Apple being Apple.

I don't think apps that truly don't need updates are possible, because platform APIs will change, and if they read or write files, or talk to anything on the network, other apps will change and compatibility will need to keep up.

But we shouldn't require updates for no reason, and we should probably be way more careful about messing with those platform APIs.

Android, and I assume iOS, is much better than something like Linux, where "platform APIs" don't even exist and things are provided by stacks and stacks of distro-, version-, and customization-specific pieces; so there's no reason stuff shouldn't keep working for years.


Here’s a cheap trick: update the version number of your app regularly even if there’s no real change. People will be convinced the app is still fresh.


Thus spake the master programmer:

“Though a program be but three lines long, someday it will have to be maintained.”

The Tao of Programming, Geoffrey James


> the update description mentions the evil rival mobile platform

Do Apple really forbid apps mentioning that they are also available on Android?


The only app that doesn't need an update 2-3 years later is one that runs on a VM.


PWA

Oh, that's right, Apple doesn't fully support PWAs, because it breaks their rent model.


3 years without updates? What would it take for the author to download the newest Xcode and compile? It wouldn't take me a day. If you're in the mobile development space, you have to keep up and compile with the latest as much as possible. This is the way to get Apple and Google off your ass.


> What would it take for the author to download the newest Xcode

The author is a professional mobile engineer and has the newest Xcode.

The app is an old project not related to the author's current job.


Yes, I am aware it's a hobby project. But nonetheless, the same as every mobile project uploaded to the App Store, it should be updated frequently, including its dependencies.


> If you're in the mobile development space, you have to keep up

> This is the way to get Apple and Google off your ass.

The article author has a full-time job as a mobile developer, does keep up, and is not lazy. Your comment was insulting.

> Yes I am aware it's a hobby project.

You show no recognition that people are busy and have better things to do than to constantly update a hobby project to satisfy Apple's whims.


Is updating once a year considered constantly updating? That's the least a "professional" iOS developer should do.


> Yes I am aware it's a hobby project

> That's the least a "professional" iOS developer should do.

You continue to show no recognition of the situation, and you also continue to be insulting by putting "professional" in quotes.


Just to recap: update your iOS project once a year, be it a hobby or not. I don't want an app that hasn't been updated for 3 years on my son's phone or any device. Call me paranoid, but it's a security thing.


These are just your personal opinions. It's unclear why anyone should follow them.

Apple itself seems fine with developers updating once every 3 years, or even longer than 3 years, depending on how many downloads the app has.


Apple is starting to remind me of auditors, but instead of trying to keep people honest/push trendy practices, they're pushing whatever's trendy in Apple's management class this month.

When do we get to the point of companies hiring "review specialists" who used to work for Apple?


Anyone who could afford a "review specialist" is probably already getting the white glove treatment this post describes, and wouldn't actually need one.


Hate away, but in this case I'm on Apple's side here. The author may be the exception where no updates are actually needed, but that does seem to be the exception. In my experience, most iOS apps I've tried downloading that were last updated 2-3 years ago simply don't work.

Apple requires an update in the past 3 years or else a minimum threshold of downloads. Presumably the latter is required so there are simply enough newer reviews to indicate whether it's broken (lots of 1-star) or not. These seem like pretty reasonable indicators that it's still working.

> My old code simply did not work anymore with the latest versions of Xcode. So I had to spend four hours on Saturday upgrading all my platform libraries simply so that I could compile the damn app.

Honestly that's just good hygiene. Because if they waited another 2 years, it might have taken 4 days to get the thing working. New versions of libraries come with all sorts of improvements, whether it's for accessibility or security or a hundred other things.

It doesn't seem unreasonable to me that if you're maintaining software in a public app store, you should be able to compile it semi-regularly with new versions of packages.


Backward compatibility is Apple's job, not developers'.

(This debate is as old as computers, but I'm strongly in Raymond Chen's camp.)


I agree. Unfortunately they've abdicated their responsibility and dumped the maintenance/compatibility burden upon developers.

It seems like a multiplicative trade-off: Apple saves a small number of hours but offloads a support burden onto thousands of developers.

I think you have to be careful about this. Apple benefits from lower costs and having more time to improve the platform; but developer time/focus/creativity is also being taxed by a constant support burden.


> It seems like a multiplicative trade-off: Apple saves a small number of hours but offloads a support burden onto thousands of developers.

Backwards compatibility is not a trivial effort, and can severely constrain the future evolution of a product. Keep in mind you’re asking for a completely backwards compatible embedded device OS. Being willing to make breaking changes is part of why Apple’s software is the quality that it is.

If you want a counterexample, Windows is notorious for strange undocumented behaviors and special-case handling. These are consequences of Microsoft's choice to fully embrace backwards compatibility. This leads to a complicated platform that is continually more and more expensive to maintain, polish, and secure.

That expensive polishing cost is what tips the scales here for Apple. Backward compatibility can get in the way of the level of polish that they consider acceptable.


Android to me is better regarding backward compatibility. Of course everything is easier since it has a much more flexible architecture, being based on bytecode and a JVM (contrary to iOS, where applications are actual native binaries). The problem is that iOS is not all that well designed in that regard.

> If you want a counter example, Windows is notorious for strange undocumented behaviors and special case handling. These are consequences of Microsofts choice to fully embrace backwards compatibility. This leads to a complicated platform that is continually more and more expensive to maintain, polish, and secure.

Yes, and macOS (and iOS) are POSIX operating systems that maintain backward compatibility with something from the '80s. In that regard, Windows is much more modern.


Well, macOS also recently dropped the M1 architecture shift on everyone with no warning. Rosetta is more of a crutch than true backwards compatibility.


This is just wrong. Some older APIs are simply insecure or inefficient for example, and of course there are things that apps must now ask permission for where they didn’t in the past.

Apple can’t make these things compatible, nor should they try.


Worse, developers need forward compatibility. Predicting the future is widely considered harder than inspecting the past.


In theory if you follow Apple's guidelines then you should maximize your app longevity; in practice Apple is going to break you eventually, and quite possibly in the next OS release.


I learned this working at Apple: they don't really strive for backwards compatibility. I think I upgraded some internal tool and discovered this MO. This was in stark contrast to Microsoft, where I heard crazy stories of incredibly old software still being able to run. What exactly is the overhead of achieving such astonishing backwards compatibility as Microsoft has?


Hard disagree. Letting your platform stagnate by ossifying bugs and poor API choices is not what I want to see from any platform provider, and I for one am happy Apple is aggressively pruning the old to make room for the new and not allowing themselves to be beholden to past mistakes.

I'm sorry this developer spent a couple hours hitting "build" until their app started working. If they'd done it earlier it would have been far less painful. Their sacrifice improves the community.

I say this as an iOS engineer since like 2012 and a macOS developer since ... 2004?

Carbon [1] should never be allowed to happen again. If you really want an old system, you should be allowed to virtualize it, but the world must move forward.

[1] https://en.wikipedia.org/wiki/Carbon_(API)


I don't understand this comment, do YOU actually ship breaking changes to your users and expect them to continue using your code or programs? Just because Apple does it doesn't mean it's actually a good idea.


I, like anyone with a project sporting a major semver number >1, definitely ship breaking changes to my API customers.


Can you not? I understand sometimes you absolutely need to change a signature or deprecate a feature, but that should absolutely be the exception and not the norm.

Going back to the original post, I find it ridiculous that Apple regularly ships breaking changes (to their APIs) and developers just put up with it. In my mind, it's like being in an abusive relationship where you think if you try a little harder and stay up to date with the latest API version maybe Apple will treat you better (they won't). Apple can get away with it because they own the whole iOS/macOS tower top-to-bottom, but if you want to build trust with your users breaking changes should be the absolute last choice.


I know this isn’t an issue for the author. But for Apple to keep backward compatibility forever, they would still be using die space for 32-bit support and have duplicate versions of every shared library both in memory and on disk.

Last time I checked, because MS worships at the altar of backwards compatibility, there are nine ways to represent a string in Windows C code, and you constantly have to convert between them depending on which API you’re calling.

Every piece of code in your operating system is another vector for security vulnerabilities and something else to maintain and in Apple’s case port to a new processor. How slow has Microsoft been entering new markets because of the albatross of legacy code?


Nope, cull it all. I don't need zombie apps in the store because one guy in Australia really likes them, and I really don't need 10 years of backward compatibility in OSes. This whole debate let IE6 go on for over 21 years... 21 years, because businesses couldn't be bothered to update their apps. Sorry, if you need old shit, emulate it.


I'm advocating for me, and users like me, not for the interests of a greedy corporation. I just want the things that I paid for to continue working. I don't want to re-buy things just so Apple can line their own pockets. I don't want to chase developers to provide updates, or if they have gone out of business, be stuck. I use technology as a tool to get stuff done. I don't need Apple to come and disable my investment in software.

https://lkml.org/lkml/2012/3/8/495


If you bought the apps they will still be available to download… You can use them for at least another 2 years with an iOS 16 device, and probably later on. You just can’t buy them anymore. If you really need spacebar heating, it’s on you.


As per Apple's support, you can't.

https://discussions.apple.com/thread/252735331


That’s not Apple support, that’s the answer of a neighbor who made himself a little police badge… this is the actual support article: https://developer.apple.com/support/app-store-improvements/


I'd be on board if Apple allowed alternative app stores or internet downloads. Right now you either get curated or nothing.


Unreasonable backward compatibility is how we end up with Windows Vista.


Windows Vista was painful, but consider that:

1) Compatibility-wise, it wasn't worse than macOS Catalina or iOS 11, which both killed off 32-bit apps; actually, a typical iOS release is probably worse than Vista in terms of backward compatibility

2) Vista's security improvements (such as sandboxing device drivers, requiring app permissions, etc.) were beneficial and persist to today – and were arguably the predecessor to modern app permissions systems in macOS and iOS (and Windows 10/11)

2a) Microsoft eventually largely addressed compatibility by providing an XP compatibility mode (really an XP VM); they could have/should have managed this better

3) Vista got the UI for app permissions wrong, and users hated it; I think Apple did it better but Apple is generally better at UI and also had the chance to learn from Microsoft's mistakes.

At the end of the day, Vista's woes were largely from not paying enough attention to backward compatibility, combined with poor UI design. It was a rough transition, but Microsoft seems to have learned somewhat from the experience (although Windows 8 also had some UI issues.)


The tradeoff with killing 32-bit apps was that Apple could remove support for 32-bit code in its ARM processors. Apple knew they were moving to 64-bit ARM processors.

But should Apple have kept PPC support forever? 68K? Why not keep a 65C02 emulator around so I can run Oregon Trail from the 80s?


> Why not keep a 65C02 emulator around so I can run Oregon Trail from the 80s?

Sounds good to me! Pretty sure Apple II system software and an emulator would only be a few megabytes, and it would be awesome as something you could install for free like GarageBand etc. Apple might also be able to acquire the rights to a bunch of classic Apple II apps and games...

Maybe we can convince Apple to do it for their 50th birthday. Or maybe a Mac emulator for the 40th anniversary of the Macintosh in 2024. ;-)


Windows 7 had a better backwards compatibility story than Vista, and it's arguably the best version of Windows to date, ever. So whatever the problem was with Vista, it sure wasn't back-compat.


Agreed. Apple is not asking for more features, but a recompile of the app against the latest iOS and SDKs.

That's generally good practice, as buffer overflows and all kinds of entropy accumulate in an OS.


Apple was adding buffer overflows to their OS in 2014?


> In my experience, most iOS apps I've tried downloading that were last updated 2-3 years ago simply don't work.

Not sure what category of software that was, but e.g. for games it’s usually not an issue. I used to have a bunch of games and game-adjacent software from the first App Store year; we’re talking the time when you’d pay a buck for a virtual rain stick which’d use the accelerometer.

They kept working until the 64-bit switch killed them. Rebuilding them would undoubtedly have been difficult for the authors even if they hadn’t moved away from them, but because those used only minimal OS features (raw-ish inputs, and setting up OpenGL) there was basically nothing to break which would not be flagrant.


I also had many games killed off by the 64-bit switch (on both iOS and macOS), but even after that a number of games that still worked were removed from the app store.


Games are definitely “complete” after a while, sometimes on first release. If they continue to work on the OS, the only reason Apple is dropping them is that they want the developer to give them another $99. The busywork this creates is a headache for everyone except Apple.


> Games are definitely “complete” after a while.

More importantly, for better or worse, it’s just not how game lifecycles and financing go.


$99 from a few developers is meaningless to both Apple and the developer of a game. It's like 40 minutes of wages for an average engineer. Apple doesn't want to have to support old bugs and poor API choices in perpetuity.


What makes you think it's only a few and not large scale? And why would forcing a developer to update their feature-complete game result in stopping support of "old bugs and poor API choices"? There is no requirement to update the API level. Read the article.


Simply rebuilding and resubmitting the app with a new version of Xcode will yield a significantly different binary artifact today than a few years ago.

It isn't necessarily about API level. It could be (from time to time) about going from arm32 to arm64, adding better-optimized builds for new microarchitectures, or getting bitcode that's recompiled by Apple to support app thinning. Apple may also deprecate or remove APIs from time to time, which likewise needs to be addressed.

That there wasn't necessarily a specific change this time doesn't mean the program isn't about supporting such changes over time. It's much easier to make them if developers are in the habit of periodically making updates vs. being told to make a breaking update out of the blue.


You really think the $99 a year from developers would incentivize anything?

And before you quote the number of “registered developers”, not all of those are paying developers.


1,000,000 paying developers is $99 million in revenue -- every year. This is easily an incentive for Apple.


In your grandparent comment, you said it costs everyone but Apple. Apple has manual reviewers. It would be much cheaper if all those apps didn’t have to go through the app review process. You think Apple changes their APIs willy nilly just to collect $99 a year?


> In my experience, most iOS apps I've tried downloading that were last updated 2-3 years ago simply don't work.

The linked article already answered this:

    For instance, the App Store could factor in:
      - App Store ratings and reviews
      - Active developer membership
      - Historical behavior of the developer (ie. updating other apps)
      - Number of crashes
      - Number of active sessions, especially for the latest devices and platform versions
    These are all metrics that Apple already automatically tracks


Apple, just like most sane platforms, has a deprecation policy.

First they deprecate APIs and then remove support entirely. She said herself that she had to make changes to update her code to the latest SDK.


>Apple tells me I could have avoided all this pain if my app was being downloaded above a “minimum download threshold”. The policy is completely silent on what this download number needs to be to avoid getting flagged as outdated–is it hundreds or thousands or hundreds of thousands of downloads a month?

I think this is a fair policy as well. From Apple's POV, how can they ensure many apps in the App Store still work on the latest phones? If the developer has been updating the app, they can be relatively sure. If the developer hasn't, but the app continues to be downloaded with success, they can also be sure the app works. But for an app the developer has (seemingly) abandoned and nobody downloads? It's most likely broken. You could argue that Apple should test the app before sending such a message, but I think it's probably likely that (1) the majority of apps that are abandoned are broken and (2) the developers don't actually respond. It's easier to tell all the devs to get up to speed, and those that actually care will submit the app for review (even if all you have to do is change a version number).

What actually sucks is if you have to rewrite parts of your app because even though the iPhone 22 Pro Max Plus w/ Turbo & Knuckles supports the same SDKs as the iPhone 4, Apple won't let you resubmit your app as is.


It also makes sense because it might not be a good idea to apply the same criteria everywhere. In an app category with 10,000,000 downloads per month, an app pulling 500 downloads a month might be a drop-worthy also-ran that's mostly just directing a few people away from better options & cluttering results. In a category with 1,000 downloads a month, the app with 500 dl/m is likely the best and most-popular app in that category and dropping it would be tantamount to dropping the whole category.


Mh, changing the "ecosystem" so much means the ecosystem is not stable, which means it's a crappy mass of bugs / a working prototype, not a commercial product.

Oh, of course I like innovation, but if we're talking about mobile stuff, something tied by choice to hardware, the real update rate beyond security-related patches should be very calm.

Anything must evolve to be alive, but nothing should be stressed into changing just some crap to prove that something has changed. That's not evolution.

Besides that, I fail to see any tangible improvement in ALL modern software, to the point that I've almost quit using mobile crapware of any vendor and kind, for real. I can't avoid it completely, unfortunately, but that's it.


Quite a few changes over the past few years have been reducing access to data that can be used for fingerprinting, and requiring apps to ask permission for access to user data.

This is squarely the fault of developers abusing their users' trust.


> Presumably the latter is required so there are simply enough newer reviews to indicate whether it's broken (lots of 1-star) or not. These seem like pretty reasonable indicators that it's still working.

You think App Store ratings and reviews are accurate?

They're incredibly easy to fake and buy. Scam artists are doing it all the time.


You know what doesn't have this problem? Web apps. I have a site up from 2001 that still functions perfectly.


I doubt it. Web browsers of today can't pass the old acid tests because the spec changed.


Unless you did one of the gazillion things that were permissible in 2001 but are now considered insecure and were patched out of all the major browsers. Not saying that making browsers more secure is a bad thing, but it's definitely not true that every 20-year-old web app will work flawlessly on a modern web browser in 2022.


I’ve never even heard of anyone getting promoted by saying “this app is perfect, let’s leave it alone except for bug fixes.” The overwhelming incentive for everyone involved in most commercial software work is to continually make changes, necessary or not.


Imagine my shock, Apple's being greedy....


This is why the products I sell are built on plain PHP + MySQL; there's nothing to really update, as most PHP/MySQL language updates are usually backwards compatible.


As someone who recently did a large classic LAMP migration: nope. Lots of things to break.


What exactly broke? With migrations there might be stuff like httpd.conf or mysqld.conf that doesn't get properly copied. But if everything stays on the same machine, things rarely break by themselves, even when upgrading the OS, database, PHP version or web server.


This was a huge upgrade: PHP <6 + MySQL 5 to the latest versions of everything (except substituting MariaDB for MySQL).

A whole variety of scripts that used constants without quoting (`CONSTANT_NAME`), which later became a problem because the language mandated them in quotes (`'CONSTANT_NAME'`). We were finding those in various places literally months later.
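
(A minimal illustration of that failure mode, with made-up names. Under PHP 5/7 an unquoted bareword silently fell back to the string of the same name; PHP 8 made it fatal.)

    <?php
    $row = ['name' => 'Ada'];

    // PHP 5/7: a notice/deprecation warning ("Use of undefined constant
    // name") plus a silent fallback to the string 'name', so this "worked".
    // PHP 8: fatal "Undefined constant" error.
    echo $row[name];

    // What every occurrence had to be rewritten to:
    echo $row['name'];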

The mysql_ family of functions was gone from PHP, so we had to patch files to include a "polyfill" snippet translating those calls to mysqli_.
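
(Roughly the shape of that polyfill; a hand-rolled sketch, not the exact snippet we shipped:)

    <?php
    // Map the removed mysql_* API onto mysqli, emulating the implicit
    // "current connection" behavior of the old extension.
    if (!function_exists('mysql_connect')) {
        function mysql_connect($host, $user, $pass) {
            $GLOBALS['__mysql_shim_link'] = mysqli_connect($host, $user, $pass);
            return $GLOBALS['__mysql_shim_link'];
        }
        function mysql_select_db($db) {
            return mysqli_select_db($GLOBALS['__mysql_shim_link'], $db);
        }
        function mysql_query($sql) {
            return mysqli_query($GLOBALS['__mysql_shim_link'], $sql);
        }
        function mysql_fetch_assoc($result) {
            return mysqli_fetch_assoc($result);
        }
        function mysql_error() {
            return mysqli_error($GLOBALS['__mysql_shim_link']);
        }
    }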

MySQL (/MariaDB) itself was another idiot-fest because of the way it handles Unicode. This is a slightly too long story and would risk doxxing myself, so I will simply give you some highlights, and then ask you to imagine that everything that could go wrong, did go wrong:

1. MySQL storage engines have a fixed upper limit on how long an index key can be, measured in bytes (3072 bytes for InnoDB's newer row formats). For 4-byte UTF-8, that is an upper limit of 768 characters.

2. MySQL defaults to utf8mb3 rather than utf8mb4.

3. Unicode characters in column names are rarely a good idea.
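
(For points 1 and 2 the fix was mostly per-table conversions along these lines, assuming an open mysqli connection in $db; table and collation names are illustrative:)

    <?php
    // Convert a table to full 4-byte UTF-8. Indexed VARCHAR columns may
    // first need shortening (or prefix indexes) to fit the index byte limit.
    $db->query(
        "ALTER TABLE posts CONVERT TO CHARACTER SET utf8mb4
         COLLATE utf8mb4_unicode_ci"
    );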

Apache, surprisingly, went relatively fine. I really don't like the ancient way it does things, but it's serviceable if you're fine with banging your head against the wall to do something that seems like it would obviously work.


For me it’s all about security updates, so it depends on the posture. If it’s not taking external traffic, non-critical, and does something simple, it’s probably fine to use an old version without a lot of concern. Otherwise I’d like to see at least a steady stream of dependency updates. With Dependabot and the like this is a relatively low effort commitment to merge its PRs when they come up. If there isn’t some security auditing on it and it’s serving on the open web it’s out of the question for prod IMHO.
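For reference, the standing commitment can be as small as a .github/dependabot.yml along these lines (Composer shown to match the rest of the thread; any supported ecosystem works):

    version: 2
    updates:
      - package-ecosystem: "composer"
        directory: "/"
        schedule:
          interval: "weekly"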


We built an Android app a long time ago. When GDPR came in we did not need to do any code work, because we respected our users' data upfront. The only thing we needed to do was add a link to the policy. And people are still using it. Crashes do happen, but they're rare. So it's frustrating to get prodded to update it.


I am convinced the world doesn't understand the concept of something being "complete." It's like part of their brain can't comprehend that something is functionally whole, and any changes would actually detract from what is there.

Related: open source devs fret if a library hasn't been updated in the last month, even if it is feature complete with no bugs.


For open source code, I think the concern is practical: bit rot is real, and libraries with no recent updates are more likely to have suffered bit rot. It's not that a library can't be finished, but that the world in which it lives never stops moving. It can be a bigger or smaller problem depending on the ecosystem. Old C89 libraries have a good chance of still working, because the ecosystem isn't moving so much any more. Old C# libraries almost definitely don't, because the ecosystem is moving quickly.


Bits don't rot--vendors break things. If my code built with yesterday's SDK and ran on yesterday's OS, and it doesn't for today's, that's not some supernatural phenomenon where digital media rots away--let's clearly blame who's causing it: SDK and OS vendors making changes that break backward compatibility and dump the maintenance burden on developers. The term "bit rot" is an attempt to shift the blame away from those who are causing the rot in the first place.

Whether or not it's good to have a moving ecosystem is another story, but I request we not use the term "bit rot" when we mean "vendors breaking compatibility."


Perhaps the solution is a monthly "NOOP" release?


I would assume a noop is automated, and the library is no longer maintained.

If, however, I saw a statement at the top of the README that said:

"Update as of 2 months ago: We believe this project to be complete. We are not adding new features at this time, but if you see any issues, please open a ticket, as this project is still under active development as needed"

then I would feel very confident.


Thanks, that's a good point.


Maybe commit to a log indicating that the test suite has run correctly as of <today's date>? This assumes you have a useful test suite, but it at least indicates that you're still paying attention and trying to ensure your library works.


Quis custodiet ipsos custodes still applies. The fact that your test suite ran successfully on Esolang v27.8 doesn't mean what you-the-consumer want it to mean if the last supported version of Esolang is v96.2. It just means someone down-stack of you is committed to backwards compatibility.


That future version won’t be on the supported versions list anyway.


I think this varies by project.

I personally support every version of my projects.

My goal is not to minimize the effort to myself but to facilitate the use of my software.


I meant the new version of Esolang, v96.2, wouldn’t be explicitly supported by your project.


Ah, also a good point.


Great idea!


Agreed but this definitely isn’t in the interest of library maintainers. If they did so it’d mean they are acknowledging all the currently open issues and deciding to do nothing, which would sometimes cause backlash.


I mean, most projects just auto-close all issues now. Dunno where this desire to have 0 open issues came from. It seems like in the past projects would mark the issue as unconfirmed and ignore it; now they all want to close the thread and not allow further comments. Almost as if someone is going to judge their open source project negatively for having open issues, and if that has ever happened, that person is more insane than the devs practicing this.


Other game platforms (game consoles, Windows) seem to have drastically better backward compatibility and API stability.

If you look at the Nintendo DS/3DS (for example), the platform had many hardware revisions (about every other year) and dozens of firmware revisions, yet games from 2004 still worked on a brand new 3DS in 2020.

On the Sony side, PS4 games from 2013 still work fine on a PS5 in 2022 (and probably through 2029.)

Steam has tons of old games that work great on Windows (but the Mac side took a major hit with macOS Catalina's 32-bit apocalypse.)


The 3DS physically included most parts of a DS, which often went unused in "normal" operation.

In my experience old games often work better in Proton/Wine than in Windows 11.

My understanding is that the Steam Runtime also provides a stable base, which (or something close to it) unfortunately wasn't adopted in other ecosystems. Flatpak is great, but not widely used for binaries yet.


And Playstation is way behind Xbox when it comes to backwards compatibility. Xbox backwards compatibility goes all the way back to the original Xbox released in 2001.


A PHP library I use and used to help maintain had a bit of a fight break out in the Issues on GitHub a couple weeks ago.

They were dropping support for older versions of PHP despite gaining nothing from it and using none of the newer features. Just needlessly limiting the audience, and churn for churn's sake.
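
(The support drop in question is typically a one-line change to the composer.json platform constraint; the version here is hypothetical:)

    {
        "require": {
            "php": ">=8.0"
        }
    }

That single line locks out every site pinned to an older runtime, which is what makes the change feel gratuitous when no new language features are actually used.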


I am reminded of the LibreSSL "30 days in" report: https://youtu.be/oM6S7FEUfkU?t=1073

Removing support for old versions is a simplification of what needs to be considered. It is a reduction of complexity. If someone is running a PHP version that has been EOLed, it is perfectly reasonable to discontinue support for it in a library - even if new features that weren't available in the EOL'ed version haven't been added yet.


> If someone is running a PHP version that has been EOLed, it is perfectly reasonable to discontinue support for it in a library

It's not. Large Linux distributions support those PHP versions for a long time in their LTS releases, and the majority of web hosts and infra providers use those distros in their infra.


So what version of PHP are we talking about? https://www.php.net/supported-versions.php and https://www.php.net/eol.php

That some provider is still hosting old EOL'ed and no longer supported versions shouldn't prevent a library author from saying "I'm not going to deal with something that had its last release nearly 4 years ago."


7.4 is supported on Focal until 2025, for example. CentOS has its own LTS and supported versions, and so on.

> That some provider is still hosting old EOL'ed and no longer supported versions shouldn't prevent a library author from saying

Yes, it should. It's not 'some' provider, it's providerS. A gigantic part of the web lives on such large hosts. AWS and similar ecosystems constitute a small part of the web. This is not to say that the former is merely large in size but low in traffic compared to AWS et al.; there is similar traffic and user activity going on in both ecosystems.

You can easily 'deprecate' something and spend millions of dollars in man-hours upgrading your stack as a larger startup which uses AWS. But millions of small businesses and individuals who rely on Open Source software and such hosting providers won't have the money or time to jump through such upgrade hoops 'just because'. It is 'just because' for the simple fact that a lot of the version hops we do in Open Source software development do not bring much to end users. And they will not appreciate their businesses getting hampered trying to go through upgrade hoops which they never asked for.

An alternate approach to this is saying 'f*k you' to those people and doing Open Source for the sake of software development itself, without caring about end users. That would also work - but only for those who develop Open Source and who have the time to keep it updated. The general public would just move on from Open Source and start using reliable private service providers that don't break their businesses every other year.


How much effort should a library maintainer expend to keep 5.x code in place?

Is it worth it for a library maintainer to remove support for 5.x even if they don't replace it with code that makes use of 7.x functionality?

If a site is still running 5.x (stats put it at about 20% of the sites out there are still running these versions https://w3techs.com/technologies/details/pl-php/5 ), are they going to be updating to the latest versions of libraries?

Note that the FOSS approach to "but it doesn't support the old version that I'm running" has been "don't upgrade or fork it and maintain it yourself." It's the flip side of https://boyter.org/posts/the-three-f-s-of-open-source/ ( https://news.ycombinator.com/item?id=32591265 ).

I'm experiencing this myself with Spring Framework 6 targeting Java 17 as a minimum version even though Java 8 LTS continues through at least 2030.


> How much effort should a library maintainer expend to keep 5.x code in place?

Based on the usage of the version and the support for it in large hosting/infra providers.

> Is it worth it for a library maintainer to remove support for 5.x even if they don't replace it with code that makes use of 7.x functionality?

Breaking backwards compatibility is practically always bad. You do it once. You do it twice. By the third time it does not matter because a large part of your users would have moved on to some other stack by the second time.

> If a site is still running 5.x (stats put it at about 20% of the sites out there are still running these versions https://w3techs.com/technologies/details/pl-php/5 ), are they going to be updating to the latest versions of libraries?

These must be seen as ecosystems. The ecosystem would slowly move to a higher version over the span of 2-3 years starting from the point when a version starts nearing its EOL. And the users upgrade slowly on their end.

> https://boyter.org/posts/the-three-f-s-of-open-source/

Using the same language in the article: The users would say one single F word to such a project, without the project maintainers ever hearing about it, and silently move on to some project that is not so self-indulgent and so irreverent towards its users. Nobody has the time to jump through hoops to keep their business running on Open SOurce, less suffer such arrogant sh*t.

Such an attitude is only workable if the project is a hobby, does not aim to gather ANY kind of community/userbase, and does not aim to have any impact.

> I'm experiencing this myself with Spring Framework 6 targeting Java 17 as a minimum version even though Java 8 LTS continues through at least 2030.

I would recommend that you prioritize your users and backwards compatibility, in that order. The moment your users trust that your project will not break their software, they will upgrade readily, and adoption and retention will get a boost.

The JSON:API spec's 'never remove, only add' approach is the holy grail to chase:

https://jsonapi.org/format/

> New versions of JSON:API will always be backwards compatible using a never remove, only add strategy. Additions can be proposed in our discussion forum.
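
The reason that strategy stays backwards compatible is that clients are expected to ignore members they don't recognize. A minimal sketch of that tolerance in Java with Jackson (assuming Jackson 2.12+ for record support; the field names are invented for illustration):

    import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
    import com.fasterxml.jackson.databind.ObjectMapper;

    // An old client model that only knows "id" and "title".
    @JsonIgnoreProperties(ignoreUnknown = true) // tolerate members added later
    record Article(String id, String title) {}

    public class AdditiveDemo {
        public static void main(String[] args) throws Exception {
            // A newer server response that *added* a "summary" member.
            String json = "{\"id\":\"1\",\"title\":\"Done\",\"summary\":\"new\"}";
            Article a = new ObjectMapper().readValue(json, Article.class);
            System.out.println(a); // Article[id=1, title=Done] -- still works
        }
    }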


I hear you and I get what you're saying. However, to argue the other side, maybe it's a decision based on the burden of long-term maintenance going forward. Keeping track of, in your example, PHP and all of its various versions is mentally daunting. What's wrong with a project just saying, "Hey look, we're not saying it won't work for you, but if you're running these older versions, we just don't give you any guarantees that it does. YMMV."

E.g. say a bug comes up in the library, and say that it only affects older versions of PHP. Why can't the project just shrug its shoulders and be truthful that it has no interest in maintenance/support for those old versions? It's better that the project declares this intention now, before the bug comes up, than to part ways with the old versions more abruptly later.

This is why OSS is great: you can decide, at the time said bug is found, whether you want to provide legacy support for older PHP versions, maybe through a fork or other means. Have at it, it's your software too.


> feature complete with no bugs

I suppose for simple libraries this might be possible (e.g. left-pad) but how would you know in general that something is bug-free? `units` was introduced in 1979 and is still one of the canonical examples used in automated program repair research to show that new bugs can be found even in extremely well-studied codebases.

I think people might be:

1. Implicitly assuming that all nontrivial code has bugs, and

2. Using recent commits as a signal that means, "If I run into a bug, there is someone who might fix it (or at least accept a fix PR) to take away my immediate pain."


I've never used or heard of the left-pad library, but I'm willing to bet that I could find an RTL bug in a library with that name.
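
For what it's worth, the bet is probably safe even before getting to bidi rendering: a naive pad counts UTF-16 code units rather than visible characters, so anything outside the BMP already breaks the alignment. A hypothetical sketch (in Java here, though JS strings have the same UTF-16 behavior):

    public class LeftPad {
        // Naive left-pad: counts UTF-16 code units, not visible characters.
        static String leftPad(String s, int width, char pad) {
            StringBuilder sb = new StringBuilder();
            for (int i = s.length(); i < width; i++) sb.append(pad);
            return sb.append(s).toString();
        }

        public static void main(String[] args) {
            System.out.println(leftPad("ab", 5, '.'));  // "...ab" -- fine
            // A surrogate pair has length() == 2 but one visible glyph,
            // so the padded column comes out one character short:
            System.out.println(leftPad("\uD83D\uDC4D", 5, '.')); // "...👍"
        }
    }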


That reminds me of the overflow bug in the JDK binary search that was present for quite a while.
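
That one (written up by Joshua Bloch in 2006) sat in java.util.Arrays.binarySearch for years: the midpoint computation (low + high) / 2 overflows once low + high exceeds Integer.MAX_VALUE, which only happens on arrays of a billion-plus elements. A minimal demonstration:

    public class MidpointOverflow {
        public static void main(String[] args) {
            int low = 1_500_000_000, high = 1_600_000_000;

            // The buggy midpoint: low + high wraps around to a negative int.
            int buggy = (low + high) / 2;
            System.out.println(buggy); // -597483648

            // The fix used in the JDK: unsigned shift reads the wrapped
            // sum back as the intended positive value.
            int fixed = (low + high) >>> 1;
            System.out.println(fixed); // 1550000000
        }
    }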


The Elixir ecosystem is quite good for this. I've found smaller libraries that haven't been updated in a long time but still work just fine, since Elixir itself hardly ever changes in a breaking way (the recurring phrase in the community being: "There will probably never be a version 2"). It's weird to get used to at first, but it's actually quite refreshing.


I used an Erlang library in my Elixir project a while back that hadn't had an update in a decade. If a language's semantics don't change (and they really, really shouldn't change without either A) a very good reason or B) a really, really good backwards compatibility story) then there's really no reason for a library that does a job well to ever update.


About open source libs not being updated:

The thing is, even if the lib is "feature complete", it's pretty rare not to have to update anything, since the ecosystem in which the lib lives will undoubtedly have changed. Programming languages, hardware, OSes: everything evolves all the time.


This is only true if the ecosystem doesn’t care about compatibility, so this is just a symptom of a deeper issue. Say what you will about Wintel, but the long-term compatibility of x86 and Windows still lets me run a lot of win32 applications written two decades ago without issues.


I bet that if I moved back to Windows, KatMouse would work just as well as ever—and it was already nearly a decade out-of-date when I stopped being a daily Windows user.


There's a good chance the documentation could say things in a better way…


Security threats also evolve all the time. A library that hasn't been updated in months is very likely an insecure library.


I use an ancient library to perform some math. It's been working just fine for more than 10 years without any need for updates. Can someone explain why it is insecure, and what updates complex algorithms need to make them secure? Looking at the code, it is pure computation.

I think what you are saying is simply FUD when taken as a blanket statement.


> I use an ancient library to perform some math.

Anyone can cherry-pick examples to try to disprove a security principle. A pure, functional math library isn't representative of the vast, vast majority of libraries that are imported every day.

The most-used libraries across languages are for things like database access, logging, package management, HTTP, and (de)serialization. All of those things need to be kept updated for security.
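
To make the (de)serialization point concrete: anything that reconstructs objects from untrusted bytes is an attack surface, which is why Java eventually grew deserialization filters (JEP 290, Java 9+). A minimal sketch; the allow-list pattern here is just illustrative:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.ObjectInputFilter;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;

    public class DeserDemo {
        // Untrusted bytes can trigger "gadget chains" in classes already on
        // the classpath, so restrict what may be deserialized (JEP 290).
        static Object readUntrusted(byte[] untrusted) throws Exception {
            try (ObjectInputStream in =
                     new ObjectInputStream(new ByteArrayInputStream(untrusted))) {
                in.setObjectInputFilter(
                    ObjectInputFilter.Config.createFilter("java.lang.*;!*"));
                return in.readObject();
            }
        }

        public static void main(String[] args) throws Exception {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(Integer.valueOf(42));
            }
            System.out.println(readUntrusted(buf.toByteArray())); // 42
        }
    }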

> Can someone explain why is it insecure and what updates are needed to complex algos to make them so? When looking at the code it is pure computation.

You didn't specify the language or library.

Most libraries that people import are going to be JS, just because of the number of JS users and the minimal standard library in that language. Many are also going to operate on user input, which means they can have vulnerabilities.

The mathjs package in NPM, for example, has had tons of vulnerabilities[1].

1. https://security.snyk.io/package/npm/mathjs


C++ Library, forgot the name, need to look.

The NPM system is a security abomination, I agree. But I am not using it. I look at things from my own perspective. If 90% of the world's programmers are bound to the NPM ecosystem (doubt it), it is their problem, not mine. I do not "import" half of the Internet for my "hello worlds".


Are you passing user input into it? Does it interact with anything on the system? Poorly sanitized inputs present a very serious risk for RCE vulnerabilities.


As already said, it is a purely computational lib. It does not interact with anything, and I use it from my own code. There are tons of libraries like this. They work just fine, do not pose a security risk, and have no need for updates.

But thanks for trying anyways.


Updates may be a security threat too… didn't log4j become a security threat _because_ it had been updated?


Imagine TeX becoming unavailable because Knuth didn’t update it for three years.


From the TeX Tuneup of 2021: "The TEX family of programs seems to be nice and healthy as it continues to approach perfection. Chances are nil that any documents produced by previous versions of TEX or METAFONT will be affected by the changes in the new versions"

This is how the Gods do programming :)


I mean a heartbeat is needed sometimes to make sure something is still alive and can address critical issues if they do happen. But I do agree that libraries and apps don't need to "update" to show that.

Maybe some other way of signaling, like "Hey, we're still here, the thing we're working on just doesn't need an update", would suffice for most things.


By "the world" I think you mean a specific subset of people: investors/execs/the business side. I don't particularly believe that average end users or competent engineers feel that way...


In fact, average end users infamously don't voluntarily update things that aren't visibly broken.


I would love to see this theoretical world in which libraries have no bugs.

Thing is, even if there are no known bugs, having rare updates means if I do find a new bug it’s less likely to get addressed quickly.


I'm not sure I follow your logic. If you find a bug, wouldn't that provide a reason for updating your code? What is the utility of issuing superfluous updates when you don't have any new known bugs to fix? How does doing that shift your likelihood of fixing bugs when they are found, in any direction?


I like that so many Unix command-line tools (like cat, ls...) are rarely updated. They often come with a man page written 30 years ago. Same with a lot of C libraries.


If a library has not been updated in some capacity "recently" (coming from Java land, "recent" is on a different timescale than in Javascript), then I have little confidence that a security issue can be resolved if discovered. Even if a library is complete, it may depend on other libraries that themselves have security issues or are no longer supported, which means it needs to swap to newer versions, etc.


One word: security updates. OK, that was two. Most software has dependencies, and those tend to surface vulnerabilities over time.


Everything you use continuously improves. What do you use that hasn't seen improved versions since you bought it?

Our knowledge, our abilities, our raw materials continuously improve, and everything is expected to take advantage of this. Things that don't improve are soon outdated and fall behind.

Everything continuously improves: cars, bicycles, windows, houses, refrigerators, TVs, etc.


s/improves/churns/g

We reached the peak of software usability a long time ago, probably around the turn of the century. Now it's all about trendchasing, change for the sake of change, dark patterns to squeeze the $$$ out of you, etc.


I think there's still room for real improvement: once we have affordable displays that cover your entire wall, imagine what something like Miro could bring you. Visual collaboration, pen drawings that turn into organized information, architecture diagrams that can be turned into low-code models and running software, all whilst collaborating with others all over the world in front of their own digital display walls.

These last 2 years, the thing I miss most about working in the office is the ability to collaborate with co-workers in front of a huge whiteboard. The brainstorming, the ideation. Miro is nice, but it deserves a huge screen, it needs more interaction with your co-workers, and it needs to help you turn your drawings into real value (running software, designs that can be fed into 3D printers, CNC machines, etc.).


In a world with security concerns, can anything be complete?


Yes (especially, but not only, on mobile, where the platform is responsible for a lot of it).

Security concerns cluster heavily around attack surface that many apps just don't have.


Yes. The ls Unix command.


Here are a few implementations that did not get the memo:

coreutils ls: https://git.savannah.gnu.org/cgit/coreutils.git/log/src/ls.c

freebsd ls: https://github.com/freebsd/freebsd-src/commits/927f8d8bbbed7...

busybox ls: https://git.busybox.net/busybox/log/coreutils/ls.c

openbsd ls: https://github.com/openbsd/src/commits/master/bin/ls/ls.c

The latter seems to be the most stable, yet it was still updated two years ago, many years after its first introduction.


Haha, you got me, but I wouldn't be surprised if there were Unicode issues etc., as you have to support modern filesystems.

Like this https://www.exploit-db.com/exploits/33508


I'm not sure the ls command has to do much, if anything, to support new file systems, since the file system driver should present a standard interface for them.


The standard interface for paths on Linux is "bag of bytes": do you want to see the bag of bytes, or do you want to see a human-readable path?
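
A quick illustration of the mismatch (assuming a Linux filesystem; the byte values are arbitrary): the kernel only forbids '/' and NUL in a name, so nothing guarantees the bytes decode cleanly, and a tool like ls has to pick a policy for the leftovers:

    import java.nio.charset.StandardCharsets;

    public class BagOfBytes {
        public static void main(String[] args) {
            // A perfectly legal Linux filename that is not valid UTF-8.
            byte[] raw = {(byte) 0xC3, (byte) 0x28};
            String decoded = new String(raw, StandardCharsets.UTF_8);
            // Undecodable bytes become U+FFFD; ls must choose between raw
            // bytes, escapes like "?", or replacement characters like this:
            System.out.println(decoded); // �(
        }
    }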



