I can relate to the frustration of the author.
I wrote an open source library which gets downloaded 50K+ times per week.
At one point, I hadn't updated it in several months and people started asking if the project was dead...
In fact, it hadn't been updated in a while because it had been working perfectly and was built in a forward-compatible way; that makes it healthier and therefore more alive than probably 99% of libraries on the internet.
The expectation that software libraries or frameworks should break every few months is disturbing. I think this may be because many of the most popular frameworks do exactly that.
I find that React-based apps tend to break every few months for a range of reasons; often because of reliance on obscure functionality or bundling of binaries which aren't forward-compatible with newer engine versions.
> The expectation that software libraries or frameworks should break every few months is disturbing
I think it is more that recent updates are used as a proxy for determining if the developer still cares about the project.
The problem is it is hard to distinguish between a project that has been abandoned, and a project that is feature complete and hasn't had any bugs reported in a while.
Although, if you assume that no software is free of bugs and that at least some users will report them, then low activity suggests either that it isn't well maintained or that there isn't a critical mass of users to find and report those bugs. But then, higher-quality software will require a larger number of users to hit that critical mass, so it doesn't give you that much information unless you also know how good the software is.
Well, does that mean "I will never touch this again" or "I won't add any new features, but I will fix bugs"? If the former, I probably don't want to use it. If the latter, I might consider it, depending on what it does and whether it has the features I need; but then again, how do I know whether they are still watching it since they posted that?
And there is also the state of "it has all the features I need, but I'd be open to adding a new feature if someone else shows a compelling use case for it."
The older I get, the more change-averse I become. That doesn't mean that everything is perfect and we shouldn't fix things that are broken, but it does mean I start to embrace the philosophy of "don't fix what isn't broken" more and more.
If I have something installed on my desktop / workstation that a) is not network attached, b) is not exploitable from a security point of view (least privileges, no sensitive data), c) has little risk of corrupting data and d) is unlikely to "just stop working" one day because of a systems / dependency update ... then I'm not in any hurry to update it. When I do get around to it, it's probably because I'm installing a new version of Linux and so everything is getting an update. But if the old version still works, has no bugs or security risks and there's been no updates for years ... who cares?
I can think of several categories of applications that fit this description:
- Music / mp3 player
- Classic Shell / Open Shell for Windows
- Office software, like LibreOffice. I'm sure newer versions offer some niceties but things "just work" for me. I don't see the reason to upgrade.
- Almost every game, ever (assuming it's stable and has no major usability bugs that need patching)
- Lots more
Things I do want to upgrade would be my web browser, operating system kernel, anything network attached. Really anything that could affect data integrity or security. Other than that, I'm lazy, comfortable and I don't want to risk the introduction of breaking changes.
Not to get into a semantic argument about specific words and their meanings, but my broader point is that there are certain categories of bugs where I wouldn't care if they ever got fixed or not. Maybe this is nit-picking, and maybe it's my cynical brain coming out of 25 years of software development where people can't even seem to agree on what constitutes a "bug" vs a "change request" half the time, or how we prioritize the severity of bugs, etc., but my point is that sometimes "it's not broken" means "it's good enough."
Would I be upset if the developer "fixed" these categories of bugs? Of course not. But would I avoid using the software at all if they never did?
...
"I guess that really depends on what it [the software in question] is."
Maybe a bot that once every three months (or whatever time period you prefer) does a commit to the changelog that says "I am [not] still maintaining this project" and it sends you an email a week before to hit a toggle to remove the "not". Then in the readme you can put a paragraph saying "I use Maintaina-bot (tm), and I commit to clicking the button every three months as long as I'm maintaining this project".
It's a little silly, but I could imagine that if this became a common enough thing, people would start to trust it.
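To make it concrete, here's a rough sketch of what one scheduled run of such a bot might look like (purely hypothetical, there is no actual "Maintaina-bot"; it assumes the maintainer already clicked the confirmation email):

```js
// Hypothetical "Maintaina-bot" check-in: append a dated line to the changelog
// and push it, so the commit history itself says "someone is still here".
// Intended to run from a scheduler (cron, CI) every three months, and only
// after the maintainer has confirmed via the emailed toggle.
const { execSync } = require('child_process');
const fs = require('fs');

const stamp = new Date().toISOString().slice(0, 10); // e.g. "2024-06-01"
fs.appendFileSync('CHANGELOG.md', `\n${stamp}: maintainer check-in, project is still maintained.\n`);

execSync('git add CHANGELOG.md');
execSync(`git commit -m "Maintenance check-in ${stamp}"`);
execSync('git push');
```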
My thinking is that someone forgetting about a project, not caring about it, or for some reason being unable to update it (e.g. losing access to an account or even being physically unable due to injury or death) might not update the project to indicate this; I think a lot of the wariness of older projects that haven't been recently touched is because it's hard to tell the difference between "no changes need to be made so I'm leaving it untouched but monitoring" and "no one is even looking at this and if a change needs to be made, no one knows".
The laziness (which I have no issues admitting I also have!) is exactly why I think having to explicitly act each time to indicate ownership is important; otherwise, it would be impossible to tell the difference between software that's maintained but hasn't needed updates and software that no one is maintaining anymore, which is the issue we have now.
I mean, it depends on the kind of project, right? What would be catastrophic for a web framework could be totally normal for a command line tool. I mean, when was the last commit to some of the GNU utils we use every day? If your module provides math functions or implements some protocol, chances are those underlying things are not going to change, etc.
I use a ton of tools I wrote myself and haven't had to touch in years except for the occasional dependency update.
Sure, because the repo contains a ton of tools. Yet another advantage of the monorepo: if everything is in one repo nobody will notice that you haven't touched that one part of the code since you wrote it.
Or let's phrase it differently: there will always be a part of the code that you do not have to touch for decades if all goes well. And if that part sits in a separate repo, is it abandoned or just finished?
All abandoned projects were active at some point in the past. A project being active now is not a guarantee it will stay so. What happens if the project is abandoned after you chose to rely on it?
> What happens if the project is abandoned after you chose to rely on it?
If the project already had the features you need, was thoroughly tested and debugged, and was built with almost no dependencies (only relying on a small number of very stable platform APIs), then it might not matter even if it was already abandoned when you started using it.
Case in point: On a recent software project, I took dependencies on two Lua libraries which were already abandoned. Their source code was short and readable enough that I had no problem going through it to understand how they worked. In one case, I added an important feature (support for a different type of PostgreSQL authentication to a Postgres client library). I also packaged both libraries for my Linux distro of choice and submitted them to the official package repositories. If anyone reports bugs against the version which I packaged, I'll fix them, but it seems unlikely that many bugs will ever be found.
As the OP rightly pointed out, the expectation that software libraries should normally need to be constantly updated is ridiculous. Many other things invented by humans stay in use for decades with no alterations. Beethoven's 5th symphony hasn't had any updates in more than 200 years.
> Many other things invented by humans stay in use for decades with no alterations. Beethoven's 5th symphony hasn't had any updates in more than 200 years.
I don't understand this argument. Are you saying that because some things don't require updates, we shouldn't expect software libraries to require them either? But there are also things that do require updates or maintenance. Would that invalidate the argument?
Some examples of things that need updates or maintenance: buildings, infrastructure, tools, machinery. Even immaterial things like books or laws receive updates.
And that includes the 5th Symphony too, which has different editions.
Re-reading the comment as a whole might help to correct this "strawman" interpretation, but anyway, here is some clarification:
In today's world of software development, many take it as given that every software library should receive periodic updates, and that a library which has not been recently updated is in some sense "dead" and should not be used.
This is logical in cases where the domain is itself constantly changing. For example, software packages which calculate the results of income tax returns must be updated yearly, because the tax laws change. Similarly, packages which perform time zone calculations must be periodically updated, because the time zone rules change.
However, a vast number of software libraries operate in domains which are not inherently subject to change. As an arbitrary example, imagine a hypothetical library which performs symbolic differentiation and integration. Since the rules for how to integrate a function do not change, once thoroughly tested and debugged, there is no reason why such a library could not be used unaltered for decades. Yes, it might not benefit from the development of more efficient algorithms; but if the library was already more than fast enough for a certain application, that might not matter.
While software is different from other things created by humans, the existence of creative works such as books or musical compositions, which often survive for decades or centuries despite not receiving regular updates, provides an illustration of what can and should be possible for software as well. Note, this is an illustration and not an argument.
Agreed and upvoted, but it becomes a problem if the owner is unresponsive yet still holds the main repository, domains, package repositories, etc. Forking is possible, but it's a very complicated, messy, hostile path and also a lot more work than just contributing a patch.
Forking is trivial and non-hostile. Is there a plumbing issue in your language where it is unreasonably difficult to wire other dependencies to your fork instead of the original?
This is a fantasy ... go offer some open source maintainer some money to implement something they don't want or care about and come back and tell us if it was simple. It may work in some cases where they are actively looking for that kind of work, but more than likely they will have no interest.
I would expect for many developers it is really only a matter of how much you are willing to pay. If they don't already do freelance work then that cost might reasonably be higher than what it would cost to hire someone else to implement it because of the extra tax paperwork and whatnot that they now need to deal with.
Of course there will be some maintainers who are simply not interested or unable to offer paid support (e.g. due to their employer being a control freak), but that's ok. As others have pointed out, forking / maintaining out-of-tree patches (and providing them to others) is not inherently hostile.
> The problem is it is hard to distinguish between a project that has been abandoned, and a project that is feature complete and hasn't had any bugs reported in a while.
Usually I browse the issues / bug reports page and see if there are any breaking bugs that aren't getting much attention. If there are, then that's one more data point that the project might be abandoned.
1) The owners sometimes archive the project or write in the readme: complete, no more features will be accepted.
2) You can look at the number of open issues. A project with X stars/used-by/forks that hasn't been updated in 8 months and has zero open issues is probably stable.
If you're using a non-GitHub platform without mirroring the repository, you're pretty SoL for getting those green squares of activity to reflect your actual activity.
Why should I care about the green squares (which can be faked anyway)? This feels like Microsoft propaganda to get me to put everything on GitHub for no good reason, especially considering that if he's pushing to a company git, that company probably doesn't want it on the public internet.
I wouldn't be surprised if most software in the world has been chugging along unchanged for decades. I suspect future historians in 1000 years will find pieces of ancient code copied around unchanged for centuries.
The annoying thing about software is that we still haven't quite gotten the transfer of knowledge working to the point that you can communicate all knowledge about a piece of software purely through code (and no, being able to emulate all behaviour is not the same, otherwise decompiling wouldn't be a thing).
What this means is that for most software an essential part of that software consists of knowledge that needs to be kept alive. The only proof people can have of this is activity surrounding the project. This puts us in the uncomfortable position of having to designate someone as the 'keeper of all knowledge' surrounding a project if it is to be sustained, and failing to do so will result in knowledge being irrevocably lost.
If you have source, you can figure it out. If there's something clever in source that isn't obvious, that's either a problem, or a well documented exception to the rule.
If your entire source code is so innovative that you can't immediately understand it, there might be trouble, but so much code is, or can be, pretty boring.
At most, changing something will break behavior that's not really in the spec but that everyone wants and relies on and that isn't immediately obvious, and then you'll have to fix it.
The trouble is that if knowledge is lost, people will probably not bother to figure it out, and starting fresh might be more appealing.
The bigger issue, especially in open source, is that if there's no activity, there might not be any in the future. It could be abandoned for 10 years and perfectly fine, until a change is needed; and even if the change is easy and the knowledge is there, it might not happen if nobody cares anymore, and you'll have to fork it yourself. So maybe you choose the actively maintained thing even if it's less complete, more complex or uglier to use.
Even if you have source, you can't necessarily figure it out. Sure, you can figure out what the program does, but you can't know precisely what the program was intended to do, or how closely the current version matches that eventual goal.
Sure, you might have the specification that the original developer was given, but every developer brings their own opinions and hard-won experience to a project. Just because the specification says "outgoing messages are sent to a queue" and the program links with ZeroMQ, doesn't tell you whether that's a design goal or a temporary stop-gap proof-of-concept. Maybe the program is structured so it can use an email queue for distributed operation. Maybe the program is designed to be linked into a larger system and just push message structs onto an in-memory vector.
It's exceedingly rare that you would not be able to figure it out given the source code. The real question is whether you have the time.
I think if a project was that important, the people who needed something patched would be able to get it together even if the original maintainer went missing.
The original analogy to the challenges of knowledge transfer isn't totally applicable. Knowledge transfer is important to organizations because figuring everything out all over again would be costly, not because it would be impossible.
As long as you are relying on the software, what you do care about is what it does today, which you get from the source.
You certainly care about what it will do tomorrow too (so you can keep up, or avoid having it break stuff for you), but if it goes unmaintained, you don't have to worry about that since it does the job for you.
OTOH, I've been on projects where a dependency was chosen on the promise of what they will do in the future. That future never materialized, so we were left with a bunch of unfinished dependency code, maintained a bunch of components ourselves etc — it was a mess, in short. So I never rely on promises, only on what's there today.
What a program does at the moment is exactly what it should do, unless there's a bug report filed, or the current developers want it to do something else.
Whatever it was originally meant to do is probably either obsolete, or well known to all users, if it's been so long the original devs left.
If it's got ZeroMQ, and has for 5 years, one can probably assume someone will be upset if it suddenly does not have ZeroMQ.
Regardless of intent, for practical purposes, if people are using it, they probably want that to stay around until you do an orderly transition to some other thing.
Very much so. I've spent a lot of time trying to decipher code and the biggest questions are usually "why is it doing this" and "what's this large piece of code intended to do".
Reverse-engineering all that can be an utter nightmare even with tests. I suggest that people who disagree consider, for example, the word "reverse-engineer" as in how they might reverse-engineer a car.
Usually you have this kind of problem when it's NOT producing desired results and the problem is how to make it do something that's needed without affecting some set of usecases that you don't understand yet.
I think a better way to go about it is to state your own goals. If you want to bridge it to ZeroMQ, you can evaluate the program and see if the way it is coded fits you. If you want another service, you evaluate how much work it is to retrofit it. As long as you expect decades out of your software, you should prepare to own all your dependencies. The intention of the original developer does not matter that much.
None of us are coders, as far as I know; I don't think that's a real job title. They call us programmers or developers or engineers.
We solve problems with code, or by finding ways to avoid writing code.
We translate vague requirements into things a computer can handle, or we translate between computer languages, because any spec that was complete would just be the program. That's why they keep us. The computers can do pure logic on their own.
Opinion is very much a part of it because pure logic is just a useless pile of text unless it has something to do with the real world, which is full of opinions.
Even inside the program itself, we aren't dealing with logic. Languages are designed for human minds, and compilers are meant to catch mistakes people make. Libraries are often chosen for all kinds of perfectly good nontechnical reasons, like "That one is actually maintained" or "Everyone already knows this" or "People seem to like this" or "That seems like it won't be a thing in 3 years".
Software would be a point in space. There are multiple types of documentation which can either justify why the point would be where it is, or what direction it should move in.
I think it depends heavily on the coding style. In many corporate environments, developers tend to obfuscate, over-engineer, or introduce too much complexity into the project.
The problem with not updating dependencies for a long time is that it can force you to make sudden version jumps if something like a CVE is discovered.
At my company we had a real scramble to upgrade log4j when the CVE for that came out. A lot of software was many versions behind. Luckily for us, no breaking changes had been made, and it was simply a matter of bumping the version to the latest one.
That was a lucky break, because if that had been a library like Jersey or Spring it would have been an entirely different ballgame.
If you don't keep your builds up to date, you open yourself up to risks like that, which are unknown and to some extent unknowable.
I think communicating this information in your README or whatever might help a lot. It's hard to know if something is finished or abandoned in general, so a message indicating the former would help people arrive at a decision sooner.
I think one of the core issues contributing to fragile, continuously-updated software is a culture of over-using dependencies. Each dependency added is a potential complexity trap. Using a dependency that has few sub-dependencies is less likely to cause problems, or one that has several sub-dependencies which each have no dependencies of their own. It's deep dependency chains that are likely to cause this, i.e. dependencies that depend on multiple others.
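If you want a rough sense of how deep a given tree goes, here's a quick sketch of my own (it assumes the nested "dependencies" JSON shape that `npm ls --all --json` prints; older npm wants `--depth=Infinity` instead):

```js
// Reads `npm ls --all --json` output from stdin and reports the maximum
// nesting depth of the dependency tree. A large number here means a deep
// chain of transitive dependencies, the situation described above.
const chunks = [];
process.stdin.on('data', (chunk) => chunks.push(chunk));
process.stdin.on('end', () => {
  const tree = JSON.parse(Buffer.concat(chunks).toString());
  const depth = (node) =>
    node && node.dependencies
      ? 1 + Math.max(0, ...Object.values(node.dependencies).map(depth))
      : 0;
  console.log(`max dependency depth: ${depth(tree)}`);
});
```

Usage would be something like `npm ls --all --json | node dep-depth.js`.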
> The expectation that software libraries or frameworks should break every few months is disturbing.
Dependencies and external APIs change, so software breaks.
An example from a customer's project today: Ruby had a Queue class up to 3.0.2. For some reason they moved it to Thread::Queue in 3.1.0. It's only a name change, but it still means cutting a release.
Depends on your choice of language and framework etc. though? The JS ecosystem is very break-y and, at least when I worked with it, so was the Ruby/RoR ecosystem. In .NET or Java, far less so; we have almost 10-year-old software for which we can update the libs, and it compiles and runs the tests just fine.
With flags to recover the previous behavior, and with changes that only broke code using non-public API surface, i.e. code that should never have been using those APIs but which previously there was no way to stop from doing so.
I'm not sure what flags you are referring to, but that might be because I went from 8 to 11 and skipped 9 and 10.
> the changes of which only broke code using non-public API surface
The APIs weren't non-public. They were publicly documented. They did say they were internal and you should avoid them, but they were still used in libraries and referenced in tutorials and stack overflow answers. And if you happened to transitively use a library that used one of those APIs, it would cause a fair amount of pain when you upgraded.
They were non-public, at least all the ones that I recall being broken (from memory, so let me know if I've missed something important here): URLClassLoader, the Base64 classes (made public under a different name), Unsafe (the name implies the contract to some extent, but it's also still available under a flag). Then there's the whole slew of things like DNS resolvers that override internal classes, but again, those weren't supposed to be for "public" consumption.
Ultimately, the break seems to have been intentional, but did receive appropriate consideration. Mark’s comments about moving forward do address these points.
Modularization "broke" a few public APIs since they had to remove an interface declared in swing from some classes outside of the package. I am not sure if there is even one known case of anyone being affected by that change.
I don’t know the details of your particular library but I think 50k/week downloads and no problems is an outlier. In my experience even projects with small audiences tend to accumulate outstanding bug reports faster than they’re fixed.
It's an open source pub/sub and RPC server/client (similar to gRPC) - It has a client and server component written with JavaScript and Node.js.
I avoided using any dependency which relied on pre-built binaries. Also, I skim-read the code of all dependencies and made sure that they were well written and didn't use too many sub-dependencies.
The trick is to minimize the number of dependencies and maximize their quality.
I 100% believe that it's possible to build a library without tons of bugs, and appreciate that there are people who put out quality libraries like yours.
I do think that, as a general heuristic, no updates for 3 months means it's been at least 2 months since anyone paid attention to outstanding bugs, especially in the npm ecosystem.
It really depends on the size and complexity of the library. Do you think something like leftPad needed updates every month to perform its function as expected and remove bugs? Of course not, the mere suggestion is ridiculous, but also something like jQuery def needs constant upkeep. In the middle, there's a world of possibilities.
Leftpad shouldn't exist and any library using it as a direct dependency should be treated suspiciously.
I'm not saying that because of the leftpad fiasco, I mean that any library that small should simply be pasted into your codebase because the risk of managing the dependency outweighs the benefits.
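For reference, a minimal sketch of the kind of function in question (not the actual left-pad package, whose behavior differs in the details):

```js
// Small enough that vendoring it (pasting it into your own codebase) is less
// risky than managing it as a dependency.
function leftPad(value, length, padChar = ' ') {
  let str = String(value);
  while (str.length < length) {
    str = padChar + str;
  }
  return str;
}

console.log(leftPad(42, 5, '0')); // "00042"
```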
I strongly agree with this. Another set of libraries I take issue with is gulp and its plugins. I find that gulp plugins keep breaking constantly and are often flagged with vulnerabilities. The sad thing is that these plugins solve very simple problems; there is no reason for them to keep breaking every few months as they have for the last 5 years or so.
If I see a library which is supposed to solve a simple problem but it requires a large number of dependencies to do it, that's a big red flag which indicates to me that the developer who wrote it is not good enough to have their code included in my project - I don't care how popular that library is.
gulp itself is sort of abandonware and is dependent on libraries that have vulnerabilities (judge for yourself whether they apply to you, whether you run them on untrusted inputs, etc.). A remedy in my project was to rip it out altogether and replace it with shell commands.
> Do you think something like leftPad needed updates every month to perform its function as expected and remove bugs?
Node libraries are a bit odd in that respect because package.json files have an "engines" property that informs users about the minimum and maximum Node version the library works with. The author of leftpad could check its compatibility with each new version of Node that's released, and update the engines version range accordingly. That would be quite helpful, because abandoned libraries that aren't known to work with newer Node versions don't get updated, and users can use that information to look for something else.
What that means is that any Node library that isn't being updated as often as Node itself is probably not actively maintained, or the author isn't making full use of Node's feature set.
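For reference, the field being described looks roughly like this in a library's package.json (the name and version range here are just illustrative):

```json
{
  "name": "some-tiny-library",
  "version": "1.0.0",
  "engines": {
    "node": ">=14 <=20"
  }
}
```

npm will warn on install when the running Node version falls outside that range (and refuse outright if engine-strict is enabled), which is what would let users spot untested libraries.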
> "The author of leftpad could check its compatibility with each new version of Node that's released"
Really? Does this sound at all feasible to you for the millions of libraries in npm, just so they aren't labelled as "obsolete"? Checking that string concatenation still works on every major Node release and tagging it as such?
> Does this sound at all feasible to you for the millions of libraries in npm, just so they aren't labelled as "obsolete"?
Yes. I mean, they could just leave the max version of the engines property blank because it should always work if it's something as trivial as leftpad, but in a well-run package repository the work of checking library compatibility should be getting done. The fact that NPM is full of old libraries that haven't been tested in new versions of Node is a bad thing, and those libraries should be flagged as untested.
I can confirm that this is the norm for all my open source projects. Forward-compatibility issues can be easily avoided by not using too many dependencies and by carefully checking dependencies before using them.
I check for things like over-engineering and I check out the author (check their other libraries if there are any).
Built carefully, extremely tiny, or single purpose.
"Do one thing and do it well" - the unix philosophy, and why I still use cd, ls (dir on windows) today.
If a project exceeds two functions, it's probably doomed to become either redundant or broken.
If a project does one thing well, even if something else uses it, you can always decompose to it (your software is always useful, even when somebody writes the fancy bells and whistles version).
What non-game-able metrics do you think an outsider can use to evaluate the quality of a tool? In some ways a "perfect" tool would have no flaws (thus no issues/pull reqs) and be so easy to use (few training updates, SO questions) that it would appear to be dormant.
SQLite seems to hit a good balance. There's always something new but nothing particularly breaking (from my light usage POV), so it looks live.
Filed issues are getting radio silence, no matter their quality.
The author chimes in, says he’ll take a look next evening, then two years whoosh by.
There is an ambitious todo and nothing moves.
The code is in alpha for years, considered unstable, huge exclamation points in the readme that it should not be used, and yet somehow there are hundreds of reverse dependencies and it’s like this for years (usually should mean “RUN!!!”). Sometimes sneaky as it’s not a direct dependency for you, but for those two or three libraries used by like 80% of the ecosystem.
We have the same problem with our SaaS product: the product is "complete" in our minds, and we don't want to add more features or make it more complicated, ... We perform minor updates of third-party components, some minor fixes, ... This means we don't maintain a blog with very recent posts, or a very active Twitter. We have a daily usage counter on the homepage though, which shows that thousands of users use it these days. But from time to time someone still asks if the product is alive... We haven't found a solution to this yet: it's as if we are expected to have a blog post every few days or weeks, create new stuff all the time, ...
We do have a release notes page, but it's really not interesting content: updates of some third-party frontend or backend packages that users never actually see, fixed a typo, minor css alignment, ...
We do announce when there is a user-visible and significant issue or update, but that's quite rare, maybe once every 6 months or more.
Try opening an old project from 3-5 years ago that has a lot of dependencies like something from react-native, Django, etc. and you’ll find many of them impossible to even install without using the original Python or Javascript versions - that’s assuming the oldest dependencies are still available in the pinned versions.
It's not a common use-case, I guess, but I had some old projects that I abandoned and wanted to try working on them again and found it almost impossible. The worst cases had dependencies which weren't available unless I used an x86 laptop with an older version of macOS, because the packages were never built for ARM laptops.
That's what version pinning and semver is for. If I install 1.2.3 in 2018, and the software is now 4.5.6, I don't expect it to work without updates.
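To illustrate (package names and versions made up), the difference in package.json between an exact pin and a semver range:

```json
{
  "dependencies": {
    "some-framework": "1.2.3",
    "some-other-lib": "^1.2.3"
  }
}
```

"1.2.3" installs exactly that release, while "^1.2.3" accepts any later 1.x.y but not 2.0.0; a lockfile (or exact pins) is what actually freezes the build you had in 2018.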
Regarding the second case, I bet that's Python with the wheels situation, where it doesn't fall back to building from source by default for some reason (instead throwing a super obtuse error), at least with Poetry.
I had to choose between two libraries at work; my lead rejected my recommendation because GitHub was showing less activity. The other project was undergoing regular total rewrites, was largely buggy because of that, and had unclear documentation, yet it was the popular choice.
I think one issue is that the world changes around an app or library. In these cases, the code base could need updating as a side effect of that change.
Also, not many people (myself very much included) are able to write something that's perfect the first time. In an ideal world, other developers would be testing and challenging what's been written, which would mean the product or library needs updating.
I think it's likely (perhaps even ideal) that at least one of these factors is relevant.
When evaluating libraries, recent activity is one of the things I look at. It's a far from perfect heuristic. I've probably rejected libraries like yours for what amounts to no very good reason.
I don't know if this is a good suggestion or not but I wonder if some form of "keepalive commit" might help here. I can imagine a few ways they might work, some simple, some more elaborate.
I don't check the commit history, I check the issue tracker. If something has unmerged bugfix patches from 3 years ago without any comment from the maintainer, I assume that it's just not being maintained.
I recently added a very visible note to the README.md of one of my projects that essentially said: "lack of commits means this is STABLE, not dead. Our plan for <next major runtime> is bla bla, in a year or so"
This expectation that things need to be committed to or updated every month has to stop, we're just wasting time and energy on stuff that is already fixed (unless there are security issues, of course).
Maybe GitHub needs to automatically include an UPDATES.md into each page, I dunno.
Well, there is also the security aspect to keep in mind. Most software relies on libraries that repeatedly get CVEs over months (not years).
You might not care too much, and it clearly depends on the application, but for me, updating software so that it no longer uses a library whose user-input handling leads to a buffer overflow if you insert a certain character, or similar things (like remote code execution, etc.), can be quite important. Finally, apps are most of the time connected to the Internet, which makes them inherently vulnerable.
> Well, there is also the security aspect in mind.
What security? "Google security", where everyone except the user has access to their data?
Why does a note-taking app need internet to function? Why does my phone app need access to the internet?
Those are great counterpoints. Instead of relying on developers to keep their own apps up to date (you will always have a significant number of outliers), you enforce security better at the system level. Having more tightly scoped permissions and better sandboxing of runtimes solves this way better than depending on individual app developers to secure their own apps.
I understand your frustration, also thank you for your work.
> The expectation that software libraries or frameworks should break every few months is disturbing.
As a library user, it's hard to find a good balance between fully trusting a system to stay alive for a while without maintenance and being super paranoid about every aspect I rely on. Mentally it's easier to expect it to break every now and then than to keep thinking "it's probably fine, but I still need to be defensive about stuff that never happens".
I think the issue on a library being "alive" or not is best mitigated by the users being comfortable to fix it if it broke under their use case. There's libraries I thought were completely dead but were a good shortcut to where we wanted to go, and we expected to extend it here and there anyway. That can't work for everything (thinking of security libs in particular, I wouldn't want to touch them) but it's to me the ideal situation, with no undue stress on the lib maintainer as well.
If your product relies on any compilers/tools which are themselves updated, then at least your CI needs to be updated regularly enough. Maybe in your case that was the case, but if not, then that would suggest to me that the product is not being maintained, even if the product is feature complete.
I feel like (and hope gorhill responds to this) uMatrix was similar to this. Done and stable, and needing to be touched just so people wouldn't ask if it was still maintained. (Pre Manifest V3, that is.)
An update provides no value except as a signal to users that the developer hasn't killed the project. I was wondering how annoyed or reassured users would be if they saw version x.x.x+1 with release notes noting no changes.
Please don't. A sentence like "I'm still around, don't have anything to release for now because the library is feature-complete and no bug has been found for a while" would be better.
> So I had to spend four hours on Saturday upgrading all my platform libraries simply so that I could compile the damn app.
I sort of assume this is the actual point? Apple presumably wants to drop support for older versions of the SDKs, and that requires that app developers update to the newer versions. I think you can make a reasonable argument that dumping work on third-party developers to simplify things for themselves is bad, but the author's belief that it was simply pointless busywork is probably incorrect from Apple's perspective.
I suspect the minimum download threshold to be exempt from this is _very_ high. Maintaining backwards compatibility for a small fixed set of apps doing things an old way is a categorically different problem from maintaining backwards compatibility for every app out there.
If they have specific libraries or APIs that they wanted to deprecate, I think developers would understand that. But the current implementation of "anything 2 years old is too old" is arbitrary.
If this was really about deprecation, they wouldn't have a "minimum monthly downloads" exemption either. This policy is just a way to clear out old, unpopular apps from the store
Traditionally, this is a significant philosophical difference between Apple and Microsoft that goes quite a ways towards explaining how Microsoft gained and held dominance for so long in the business sector.
Businesses don't want to be told that their working software needs to be updated to make a vendor's bottom-line cheaper. They recognize cost-shifting when they see it and respond by backing towards the exits. Microsoft maintained a philosophy for decades that it was their responsibility to, if at all possible, maintain backwards compatibility with older Windows software as a market differentiator. The primary times I remember them breaking this policy were security related.
(That having been said, I got out of active development of Windows software around Windows 8, so this may have changed).
Again, I don't think people would have as much of an issue if it was clearly about deprecation.
Something like Google's minimum sdk version is annoying, but understandable. It's technical and concrete - you must link against version X because version X-1 is going to disappear.
This is not that. It's culling apps that are arbitrarily too old and arbitrarily not popular enough. They must be keeping around old sdk versions if those old but popular apps are allowed to continue on.
This is the single biggest reason why Microsoft continues to be dominant in the market of practicality. Microsoft will bend backwards 420 degrees and land a headshot to make sure something from 30 years ago will run (within reason) on the latest version of Windows.
Not anymore. That may once have been true but circa Windows 8.1/10 that changed substantially IME. It really came to a head when Windows 7 support was dropped but I spent several years consulting and helping people migrate really really old software (which should have been replaced) that was Win 3.11/DOS based from Win7 to either Dosbox or Wine on Linux systems because Windows 10 specifically couldn't actually support running them anymore. Compatibility was broken in a rough way with 10 for very old applications which were otherwise working.
It's fine to argue typical desktop applications need to be updated but purpose-built applications cost money, so when you have things like a POS terminal installed in thousands of fast-oil-change locations and it's from the Win3.11 era working perfectly fine for years but suddenly stops working after moving to 10, that's a sudden cost on a business (especially with no warning). Yes, you can argue that companies should be updating that kind of software, if only for reasons of security (I often did). The bottom line tends to be king, especially in smaller businesses, in my experience.
That's the reputation, but it's far from true at the moment. For example, there are numerous games from the 2000s that don't work anymore (I mean crash at startup or soon after, not some soft "not working like before") in any compatibility mode on modern Windows. I'm more familiar with games, but I'm sure the same is true of various other kinds of software (though graphics drivers may well be an important part of the problem, so games may well be over-represented).
Games, especially in the 90s (and some early 2000s), did a ton of API abuse, so they are among the harder types of software to keep running. Still, most do work out of the box, and with 3rd party compatibility shims like dgVoodoo (for the older ones) and DXVK (for the more recent ones) stuff tends to work, since the breakage is still very minimal.
More regular applications tend to be much easier to get working though. E.g. something like Delphi 2 or C++ Builder 1 works out of the box just fine. The biggest issue with older software is that it sometimes had 16-bit installers that do not work with 64-bit Windows. Windows comes with some updated stubs for some popular installers, and it is possible to manually fiddle with some of the unsupported ones to run the 32-bit engine directly, though something like otvdm/winevdm that emulates 16-bit code on 64-bit Windows would also work.
But in general you can get things working, depending on how well the application was written. In some cases (games, 16-bit installers) you do need workarounds as they won't work out of the box, but even those workarounds are based on the other 99.9% of the system preserving backwards compatibility.
I remember the old trick on MacOS of calling `getNextEvent()` instead of `waitNextEvent()` to make sure no other processes got CPU cycles.
Worked generally okay until the era of the Internet came along and after you quit the game, all manner of programs would crash when the network stack suddenly found itself teleported an hour or two into the future and couldn't cope.
Games depend on more volatile elements like graphics drivers and APIs, like you said, so they aren't exactly representative of Microsoft's backwards compatibility efforts. That said, there are still quite a lot of games which owe their continued operation in modern Windows to Microsoft's veneration of old programs.
A better example is productivity software, like Photoshop and Illustrator or Paint Shop Pro. I can get Paint Shop Pro 5, a raster graphics editor from 1998, to run on Windows 11 just fine, for example. Another is Microsoft Office, in which Microsoft goes out of its way to make sure documents created long ago will load and work fine in modern Office, and ancient versions of Office itself will happily run mostly fine on modern Windows too (eg: I run Office XP on my Windows 7 machines).
Games were among the worst offenders in terms of writing to the disk outside of user space (including Program Files and the registry).
Old games were also quite often doing direct access to graphics cards outside of official APIs.
And games are not business software. An old game that stops working is "too bad, move along"; accounting software that stops working has real-life consequences.
That's not the market they care about. What they care about is that the stupid labelmaking application my employer uses for its annual conference whose vendor went out of business in 2005 still runs today even though it was compiled for Windows 2000.
The only things I've ever noticed having issues on modern versions of Windows are software reliant on old, specialized hardware (for example, dental imaging software) and games. I wouldn't really put the blame in these areas on Microsoft, as they provide neither the drivers nor the custom graphics APIs (notice that most old DirectX software has no issues working) - and hell, you can still run your 40-or-so-year-old apps on a 32-bit version of Windows...
> The most impressive things to read on Raymond’s weblog are the stories of the incredible efforts the Windows team has made over the years to support backwards compatibility ...
> I first heard about this from one of the developers of the hit game SimCity, who told me that there was a critical bug in his application: it used memory right after freeing it, a major no-no that happened to work OK on DOS but would not work under Windows where memory that is freed is likely to be snatched up by another running application right away. The testers on the Windows team were going through various popular applications, testing them to make sure they worked OK, but SimCity kept crashing. They reported this to the Windows developers, who disassembled SimCity, stepped through it in a debugger, found the bug, and added special code that checked if SimCity was running, and if it did, ran the memory allocator in a special mode in which you could still use memory after freeing it.
> ... Raymond Chen writes, “I get particularly furious when people accuse Microsoft of maliciously breaking applications during OS upgrades. If any application failed to run on Windows 95, I took it as a personal failure. I spent many sleepless nights fixing bugs in third-party programs just so they could keep running on Windows 95.”
> A lot of developers and engineers don’t agree with this way of working. If the application did something bad, or relied on some undocumented behavior, they think, it should just break when the OS gets upgraded. The developers of the Macintosh OS at Apple have always been in this camp. It’s why so few applications from the early days of the Macintosh still work. For example, a lot of developers used to try to make their Macintosh applications run faster by copying pointers out of the jump table and calling them directly instead of using the interrupt feature of the processor like they were supposed to. Even though somewhere in Inside Macintosh, Apple’s official Bible of Macintosh programming, there was a tech note saying “you can’t do this,” they did it, and it worked, and their programs ran faster… until the next version of the OS came out and they didn’t run at all. If the company that made the application went out of business (and most of them did), well, tough luck, bubby.
---
This really gets into a philosophical difference of how software should work and who should be maintaining it. As a user, I love Raymond Chen's approach. As a developer, trying to maintain bug for bug compatibility with older versions of the code is something that scales poorly and continues to consume more and more resources as more and more bugs need that bug for bug compatibility across versions.
> The primary times I remember them breaking this policy were security related.
And they also went above and beyond to make it happen too. That time when they byte-patched a binary for which they no longer had the source code comes to mind.
It seems like, on a long enough timeline however, promising that legacy cruft will always keep crufting along as it did before becomes a huge burden that leaks into your ability to deliver quality in your current offerings.
Sure there are old things that are good and "complete" but far more old stuff is just old and could well be burnt to the ground, except for the fact that you have some customer somewhere relying on its oddities and unintended behaviors as an essential feature of their integrations.
I feel you are arguing for real harm now, versus potential harm in the future.
There's no sign that being very-backward-compatible is holding Windows back that I can see.
On the other hand, I have collected a set of utilities into my workflow that are easily 20+ years old, and they all work fine.
The last major cull from Microsoft was that 16bit programs ceased to work on 64 bit platforms. Other than that I've never had an app fail.
Most of those utility suppliers have long since disappeared, retired, or died. But I can still keep transferring those apps to the next machine and they keep running.
None of this impacts the quality of my current offerings. Ultimately all this costs me is some cheap disk space.
I have a couple camera apps from the mid 'aughts that were written for Windows XP that still function fine under Windows 10. Same thing with an old photo scanner, that just only recently failed. My last SCSI dependent peripheral. Ran fine on Windows 10, on ancient software, using ancient TWAIN SCSI drivers, until the scanner itself failed. As I moved from XP -> 7 -> 8.1 -> 10, the first thing I did was load up all my old photo software to be sure it would work.
My father has an ancient scanner from the XP era and the only thing that still works with it is a Windows XP VM he happens to have (I assume later windows could work too, but the XP exists for other reasons and so we continue to connect the scanner to it).
Maybe Linux could be beaten to work with it, but this works well enough.
Which is the reason that Apple is never going to conquer "traditional" gaming. Traditional games (ones that are NOT perpetually updated live services) and rock-solid-stable development targets have always been very close -- game consoles would always be guaranteed to run any game that's ever been officially released for it, even if it's an unpatched launch-day game running on the final hardware revision of console 8 years after release.
Even if Mac hardware manages to vastly outrun the top end gaming PCs on raw performance, they'll never be seen as serious mainstream targets for this reason alone.
> This policy is just a way to clear out old, unpopular apps from the store
Except popularity doesn't correlate with utility when it comes to apps. Probably only addictive games and social network apps will pass whatever arbitrary threshold has been set.
This will harm any one off apps built to satisfy a niche purpose downloaded by a small set of users. Which Apple probably think are not important, like all of the little high street shops, except cumulatively they might affect a majority of users. Also if it's measured by "downloads" rather than "installed", then it could take out larger more widely used apps that are considered complete by both authors and users, but don't have enough daily new users to pop up on their radar as important enough... this is similar to the "maintenance" fallacy of NPM, where frequent patches = better, even though if your package is small and well written you should be making no patches as a sign of quality.
It's a way to clear out old apps that don't make money. A freeware app can be quite popular but won't get updated if the developer has moved on to other jobs and can't continue to devote unpaid labor toward their passion project. In order to afford to keep maintaining it, they may have to move a happily free app toward a subscription or ad-supported model. And Apple's forced updates are nudging people in that direction.
It reminds me of a class I once took where the professor stated that some colonial governments would go into tribal areas, claim land ownership, and start taxing the indigenous people. And because these people now owed taxes, they had to give up their lifestyle, enter the workforce, and participate in the money economy whether they wanted to or not. I don't know if that scenario is historically accurate but it certainly is analogous to Apple's policy. Developers who might not even want to make money are being compelled to do so or see their apps get pulled, because forced updates amount to a tax that must be paid.
> forced updates amount to a tax that must be paid.
It's not a tax that must be paid. The developer can simply discontinue the older app, and not have to pay anything. So your analogy doesn't apply.
Switching an app from free to paid is a lot more work than recompiling and updating the app. There's a ton of coding and infrastructure you have to do. So it's not really saving you anything. The work of switching is a large upfront cost, which might not pay off, because apps don't magically sell themselves, you have to market them (which costs money!). This is especially a problem if you already have an app with low download numbers and low consumer awareness.
> But the current implementation of "anything 2 years old is too old" is arbitrary.
To an extent, but the reality is that an unacceptably-large percentage of apps that are 2+ years old are not correctly handling current screen sizes and reserved sensor areas.
> This policy is just a way to clear out old, unpopular apps from the store.
Great. This is the kind of active culling/editing an app store requires to remain vibrant.
Yes, I agree. All users should be forced to re-evaluate all the apps they are using everytime they get a new device.
It doesn't matter if that tool you currently use is perfect, or the game you play is just fun as-it-is; it is clearly harmful to you (and makes the app store "less vibrant").
Everything older than 2 years must go. Crumbs, everything older than 1 year should go... Nay make that 1 month...
Who cares if users like an old app, "vibrancy" matters most.
> I suspect the minimum download threshold to be exempt from this is _very_ high.
You can suspect based on no evidence, but nobody knows, and Apple refuses to say.
The crazy thing is, if Apple truly wants to drop support for older version of the SDKs, then how in the world does it make sense to exempt the most used apps???
My guess is the income from a reduced subset of apps offsets special maintenance for only those parts of the old SDKs which that subset uses. Likely as a cost saving measure. Apple may even ask for support contracts beyond some point, to motivate those apps to update too.
> My guess is the income from a reduced subset of apps offsets special maintenance for only those parts of the old SDKs which that subset uses.
Again, a guess based on zero empirical evidence. Also, Apple's "rule" here makes no distinction between paid and free apps. Indeed, free apps tend to have more downloads than paid apps, which means that Apple would be targeting the wrong apps if they were looking to offset costs.
It's much easier to support only a handful of apps on an old SDK than millions. Those few probably only utilize 30% of the APIs the SDK exposes, meaning you can safely assume the remaining 70% can be removed from the maintenance list. They can also work one-on-one with those few companies on deprecation and migration.
They don't need to work 1 on 1. Any uploaded app is automatically analyzed for API usage. At least I've been getting warnings about deprecated or unsupported APIs in my apps once in a while and other developers too - even if it sometimes seems like crude string matching rather than checking for actual usage of the API
I agree that this may be the point. Software changes, and being able to compile with latest libraries is somewhat of a minimum requirement.
In Apple's defense, are they supposed to wait until the app breaks and starts receiving many complaints from customers, triggering a review process for them (which they would be forced to treat as somewhat high priority), before they then take action to remove the offending app? That hurts the customer experience from their perspective.
Better for them to institute a policy preemptively addressing these issues (arbitrary as the timeframe may be).
And four hours is a good chunk of time, but what percent of time is it compared to the amount of time for the app to be designed and implemented in the first place?
> In Apple's defense, are they supposed to wait until the app breaks and starts receiving many complaints from customers
Except that Apple is exempting apps with more downloads, and only punishing apps with fewer downloads, which is the opposite of worrying about "many complaints".
> For an app with more downloads, they can dedicate more labor/resources to it.
What resources? For older apps with more downloads, Apple is doing exactly what you said they shouldn't do: wait until the app breaks, and start receiving many complaints from customers.
I've had a few apps on the Google Play Store automatically removed because I didn't update them for some new policy or some such.
If they paid me maybe I would have. Otherwise I don't have time to keep dealing with their requests every 6 months. Is it such a hard thing to ask that if shit works, just leave it be?
This is a great point, but if true, Apple should do a better job of communicating the reason, including specifics, not just say "your app is rejected because it is outdated".
> The intention behind Apple and Google’s policies is to force developers to make sure their apps run successfully on the latest devices and operating systems.
The intention is to stop supporting old iOS APIs in new versions of Xcode and iOS.
Apple isn't Microsoft or the Web - very old Windows programs and very old websites still run just fine.
Apple would rather shift the burden of updating an app to the latest APIs onto the devs than provide API support forever.
I don't think culling games/apps based on age is the right approach to improving discoverability.
I don't see Nintendo removing old Switch games from the eShop.
I don't see Apple Music removing the Beatles because they haven't updated recently.
My recommendation, which Apple's (obnoxious) ad business would never accept, would be to
1. Remove the obnoxious and generally useless ads which eat up the top half of the first page of app search on the iPhone.
2. Improve app search with more options and a fast, responsive UI. They might also let you weigh ratings from other sources, such as critical reviews vs. user reviews (a la Metacritic).
3. Better human curation with more categories. This is the same approach that Apple Music takes and it seems to work well. Games should have Artist/Studio/Developer pages as well as Essentials lists to facilitate discovery. Same with genre/category essentials, which have a rotating list that can be saved/bookmarked.
They would ask the Beatles' copyright owners to update their format if they tried to post an old 8-track or vinyl into the music store though, right? Audio formats change, just at a more glacial pace than software dependencies. There is remastering and all that too.
The missing point is that they are culling apps based on age and downloads. I think that's a reasonable criterion if Apple is trying to make space for popular and potentially up-and-coming apps. Yes, space is an issue if you think of it in terms of the customer's attention span.
The real problem is that developers have no choice except to offer their app through Apple's store. There is no room for the developer to offer their labour of love or niche product in a storefront that better serves their needs, or even from their own website.
> The real problem is that developers have no choice except to offer their app through Apple's store
How is this different from other walled garden game systems/game stores like Nintendo's eShop?
Yet they seem to keep older games and apps, even niche titles for a limited audience (such as music production apps or BASIC programming apps) which I greatly appreciate.
I would agree that discoverability isn't great on the eShop, but there are dozens of enthusiast web sites which are pretty good for game discovery (also aggregators like metacritic.)
And, as I noted, I think Apple already has a good approach which they're using with Apple Music - better human curation including category/genre/time period/artist/etc. playlists. Podcasts/radio shows also help.
Many games (at least ones that aren't in the "live service" category) are more akin to movies, music, or books than to some sort of perishable produce, so a curation approach that balances older content with new content makes sense.
Consoles are different because they have a limited shelf life. For example, I dug out my PS3 the other day and took a peek at its store. The most recent title was three years old. It also sounds like they're going to shut down the PS3 store, effectively ending the sale of old games anyway. Something similar appears to be true for Nintendo: I pulled up the eShop on my 3DS, and there's a message saying it will be impossible to add funds to an account after March 2023. Presumably that means sales will end.
As for other media, brick and mortar retailers were always selective about what they offered. I suspect that we will see something similar happen with their digital variants in the coming years. I also suspect that it will be sales numbers, not human curation, that will be the basis of their decisions.
> Yes, space is an issue if you think of it in terms of the customer's attention span.
I don't hear independent developers griping that Apple is failing to advertise their apps and bring those apps to the attention of iPhone users. Instead, I hear independent developers griping that Apple is preventing their apps from being run on iPhones.
Therefore, I completely disagree. This is purely a matter of data storage space (cheap and nigh infinite), not a matter of limited attention.
The "limited space" framing only makes sense in terms of physical floorspace. If you want a back-room, special-order item (where space is virtually unlimited), you should be able to get it by searching for the specific item (or URL).
Removing apps based on downloads or lack of updates is troubling.
So do you want to sift through a bunch of apps that don't work on the current version of the operating system or are otherwise significantly dated?
Even in a less walled-garden environment, I'm mostly not interested in running 10-year-old applications unless they're something fairly simple that really does do everything required for the purpose and still runs reliably.
In any case, as a user, I probably figure that if an app hasn't been updated in five years, it may or may not work and I'll probably at least subconsciously resent the store's owner a bit for clogging up the store with old stuff.
>I'm mostly not interested in running 10 year old applications unless they're something fairly simple that really does do everything required for the purpose and still runs reliably.
The 10 year old version of AutoCAD still runs, and you can use it today to do a ton of high-value CAD work. Thanks to Microsoft for not arbitrarily blocking it from running.
Yes. But that’s a case of having an old version of a program that you know still runs on a given platform. That’s rather different from looking for a CAD program and deciding to take one that hasn’t been updated for ten years for a spin.
Backwards compatibility benefits current and future users. I agree that "hasn't been updated" is a concern, but you can also look for "hasn't needed to be updated". If you can obtain a legal copy of 10-year-old software, you can use it for a fraction of the cost of the new version.
When I do a search in English, Google absolutely returns mostly web pages written in English. And, yes, it would be very annoying to the point of being useless if it did otherwise. Presumably, if you query it in French, it returns mostly French pages.
> I don't see Nintendo removing old Switch games from the eShop.
The difference is that Nintendo's shops have a limited shelf life, while the App Store is forever(-ish). Nintendo will be shutting down the Wii U and 3DS eShops in 2023.
2023 is 6 years after Wii U was discontinued. Since the platform was frozen in time in 2017, there's no point to game updates.
A rectangle with a capacitive screen is just that. Well-designed apps should be robust to improved screen resolutions, faster processors, or better network connectivity.
That's not all that a smartphone is, but it is most of what a smartphone is.
Most applications basically just need:
- a canvas to paint some bitmaps on
- some way to tell what part of the screen the user tapped on
- a way to get TCP/IP or HTTPS traffic in and out
- some sound output
- some persistence
- some way to show notifications
- a few other odds and ends like GPS, sometimes
Almost everything on that list has been supported on every major platform since the late 2000s. Yes, rich multimedia apps that make good use of additional APIs and hardware features do exist. But it's inappropriate to nuke most old "normal" applications just because old rich multimedia apps stopped working over time.
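Just to make concrete how small that surface area is, here's a purely hypothetical sketch of it as a TypeScript interface - none of these names are real platform APIs, it's simply the list above written out:

```typescript
// Hypothetical interface restating the list above - not a real SDK.
// The point is that the whole surface most "normal" apps need fits in a dozen calls,
// all of which have had stable equivalents on every major platform for over a decade.
interface MinimalAppPlatform {
  // a canvas to paint some bitmaps on
  drawBitmap(pixels: Uint8Array, x: number, y: number, width: number, height: number): void;
  // some way to tell what part of the screen the user tapped on
  onTap(handler: (x: number, y: number) => void): void;
  // a way to get HTTPS traffic in and out
  httpsRequest(url: string, body?: Uint8Array): Promise<Uint8Array>;
  // some sound output
  playSound(samples: Float32Array, sampleRate: number): void;
  // some persistence
  save(key: string, value: string): void;
  load(key: string): string | null;
  // some way to show notifications
  notify(title: string, body: string): void;
  // a few other odds and ends like GPS, sometimes
  currentLocation?(): Promise<{ latitude: number; longitude: number }>;
}
```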
Back in the iPhone 4/iPad era, there was no facility within iOS to have responsive screens.
Apple introduced size classes, and then you needed to adjust your views for iPads once they supported displaying more than one app on screen at a time.
Apple rightfully got rid of 32-bit app support on the actual die. It introduced new permission models; design aesthetics change, new accessibility options are added, better APIs are written, etc.
Sorry, I think you’re misunderstanding. Android vendors built compatibility layers that allowed 16:9 apps to run on 18:9 and up screens. The apps weren’t built to run on 18:9 screens but the vendors found a way regardless.
Part of what makes iPhones great is that you aren't running apps sitting on a pile of compatibility hacks. It was all designed to work correctly on the exact device you use.
> running apps sitting on a pile of compatibility hacks
You do realise the only reason iPhone Retina screens became a thing was to enable exact 2x pixel scaling, because iPhone apps at the time were coded to a fixed resolution.
That's the definition of a compatibility hack.
Of course, Apple being Apple, they managed to sell it as a feature and made every person who was happy with a 1366x768 laptop suddenly desire a Retina display.
In the case of Apple removing 32-bit support from the OS first and then from the actual processors:
- users care about their apps not being evicted from memory as quickly when they switch apps, because memory isn't taken up by both 32-bit and 64-bit versions of shared libraries.
- increased battery life from not needing as much RAM - yes, RAM takes energy.
- the die space saved by dropping 32-bit instruction decoding can be spent on enhancements that users care about, or used to shrink the die and make the phone more battery efficient.
- for Macs, you get computers that are faster, have more than twice the battery life, are more memory efficient (meaning less swap), and can be fanless without getting hot.
Apple should just have fewer apps. I don't see why developers should cater to their customers. They buy less and require more support. They are the worst segment to support.
> A lot of Android users spend $0/yr on paid apps.
As do a lot of iOS users. I think the stat you are looking for is that, on average, iPhone users purchase more apps.
Don't forget to consider Android's massive installed user base in the calculation. Even if Droid users convert to paid at 1/4 the rate of iOS, you can make it up in sheer bulk.
> The impact of the App Store Improvement policy is nonexistent for VC-funded companies.
Not true. More often than not, our iOS releases get delayed hours if not days, while our long-suffering iOS lead patiently walks yet another green reviewer through the policies and our previous interactions with reviewers to convince them that our app is in compliance. Among other things, our app repeatedly gets flagged for failing to comply with rules that don't apply to it. This is usually resolved with a simple message to the reviewer, but depending on the turnaround, that can effectively add a day to the time it takes to get a bug fix out.
Dealing with these bogus issues probably accounts for 5% of the productive time of our iOS lead. And this is despite keeping our release cadence slow (every two weeks except for the most critical bugs) and after we've made many reasonable low-to-medium effort changes in our app to stop triggering false positives in their tools.
God help us if Apple ever went the Google route. Apple reviewers might be inexperienced and undertrained, but at least they're human and capable of talking through issues.
In our case, Apple actively destroys user experience with this kind of stuff.
We have an old iOS app for our products that runs fine even on the latest iOS version, and while there is a next-gen app available that most users have switched to, some prefer the old one. For some use cases, even for new users, the old app just fits a lot better.
We cannot rebuild and republish that app, because it depended on third-party software that we no longer have access to. The app will continue to work fine for users who own a corresponding device, but it will most likely be removed from the App Store for no real reason in a few months.
This will force all users who want to keep actively using it to switch to the next-gen app they may not want, as soon as they add someone new to their device.
I don't think so, no, but I don't think they will be able to sync it to a new device when they buy one.
In our case, the apps are linked to hardware that the user owns and shares with multiple others. Due to very restrictive changes in iOS background activity over the years, we had to restructure some of the functionality, so the apps are not really interoperable. I think in this case, as soon as the user wants to add a new user to the device or an existing user gets a new phone, all users tied to that device would have to switch to the next gen app to keep having a consistent user experience.
This is not the kind of experience we want to provide for our users but we simply cannot invest as much money as would be required into completely mitigating the effect of Apple's decisions (such as heavily restricting background activity) year after year.
Some software from the past that made me feel that way:
1. Adobe Photoshop 7
Felt feature-complete, never ran into a bug, not resource-intensive (I would run it on a Pentium 200), no online activation/membership BS
2. Apple Aperture 3
Also felt feature-complete, had everything I needed. Nowadays I need two or three different programs to have the features of a 2003 app. Unfortunately the app stopped working after the 64-bit transition, and Apple retired it in favor of the much simpler Photos. Shame on you, Apple.
3. iTunes 10
Pretty much the sweet spot of features for managing your music library. Apple replaced it with the Music app, since nowadays users stream rather than maintain their own libraries, but the new app is a buggy mess.
4. Winamp
Solid, just worked.
More software should be like that, but never will be, because it isn't profitable to have users on the same version for many years. The revenue-maximisation strategy is to continuously release beta-quality software and treat apps as a steady revenue stream. That's why the App Store just assumes an app that hasn't updated is "abandoned". That's the perspective of users in general today, too.
Photoshop 7 struggled with larger PSDs with lots of layers or just a huge resolution/DPI. The memory management was really bad in that era too, featuring manual scratch disk configuration. Constant crashing in large projects was the norm for me. I thought CS2 was a good upgrade and my final stop before the nonsense of subscription-based software took over. Now I just use Photopea, because 99% of my Photoshop needs can be met by a single dude on a single webpage.
> More software should be like that, but never will, because it isn't profitable to have users using the same version for many years.
You can get some of it back with FOSS applications, even if the quality normally isn't the same as with old freeware/proprietary applications. At least with a FOSS application you get to keep it forever, unlike SaaS which evaporates into thin air the moment the parent company decides to pull the plug.
You get to keep it, but if the software itself depends on a foundation made of sand, you can still be out of luck - especially for desktop applications that tend to rely on all sorts of libraries that just can't keep themselves from breaking their APIs and ABIs every few years (and those libraries have their own dependencies, etc).
Of course, if time is no object you can spend it keeping the software up to date, but being able to burn that time doesn't really excuse the underlying foundations for forcing application developers to waste it.
You can still wrap it in a Docker container or a VM, statically compile it, or even use full system emulation if you eventually switch architectures. Or even pay someone to make the code run on a newer environment, if the application is worth it to you.
Every version of Geometer's Sketchpad feels like complete software. Yeah they add features every so often (pinch to zoom is pretty huge) but the core of it is the same.
This is another case where Apple fundamentally doesn't "get" games. They had a plan for clearing out crap from their store which probably sounded reasonable, but nobody even thought about an entire class of programs.
Lots of stumbles and missed opportunities - like, what if they had actually been serious about making the Apple TV a Wii-style games console? They have the hardware expertise to do that at a reasonable cost, but they just have no idea about the market, and apparently no desire to learn.
Apple products very much have a time and a place. Their plan is recurring revenue. Something that happened 3 years ago is almost entirely irrelevant to them. It's true with both hardware and software.
Wii-style games on AppleTV would not be a win for them. They don't want to sell a few million copies of Wii Sports once every 3-5 years. They want you buying a new 99¢ game at least once a month. They want you spending $5-10 in micro-transactions a month. They want you subscribed to AppleOne for $15-30 a month. They want you to buy a whole new iPhone every 2-3 years and start the process all over again with some new AirPods, cases and bands in between.
Apple doesn't want to sell you a new game for $59 every 2 years. They want to sell you what amount to an average of $59 worth of goods and services a month… forever. And while that sounds like a lot, that's a low target. If you have your TV subscriptions through Apple and Apple Care you can easily be contributing $100/mo or more to Apple's bottom line.
I understand that Apple is trying to deal with the sort of shovelware/accumulation problem that the Wii had, but the blanket approach of culling games based on release date seems wrongheaded.
If Nintendo took that approach then they'd end up throwing away Super Mario Odyssey and Zelda: Breath of the Wild.
I am a maintainer of Chalk (a JavaScript library for outputting color codes in terminals).
We consider chalk 'finished' and probably won't release another major version that changes the API - perhaps just TypeScript maintenance as Microsoft messes with the language even further and we have to fix types.
Sometimes, software is complete. I hate this new industry dogma that code rots over time and needs to constantly be evolving in order to be "alive".
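(For anyone who hasn't used it: chalk's day-to-day surface area is roughly the sketch below, and it has looked like this for years. The main caveat is that v5 is ESM-only, while v4 still supports require().)

```typescript
// Typical chalk usage - chainable style functions that wrap strings in ANSI color codes.
// With chalk v4 and CommonJS you'd write `const chalk = require('chalk')` instead.
import chalk from 'chalk';

console.log(chalk.blue('info:'), 'server started');
console.log(chalk.red.bold('error:'), 'connection refused');
console.log(chalk.green(`${128} tests passed`));
```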
I think when a project leans heavily on dependencies, it's easy for security issues to make it too risky. Lately, I've been seeing more and more npm packages that keep "extending" a dependency to add more features and eventually get abandoned.
I'll use this package, multer-s3-transform, as an example. It's been abandoned. It depends on multer-s3, which depends on aws-sdk and multer, which in turn depends on busboy.
Now, aws-sdk, multer, and busboy have organizations maintaining them. However, the random projects overlaid on top of those are by single individuals.
The answer is two-fold: avoid overlaid projects, and avoid big projects managed by a single individual.
I took a quick look at Chalk. I would say the yoctodelay dependency is kinda useless, and it's maintained by a single individual. I sense potential danger there. The other dev dependencies should be audited too.
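If you want to make that kind of audit less ad hoc, one rough heuristic is to look at the maintainer count the public npm registry reports for each direct dependency. A minimal sketch (assumes Node 18+ for global fetch, and uses the registry's package metadata at https://registry.npmjs.org/&lt;name&gt;, which includes a maintainers array; treat it as a smell test, not a real audit):

```typescript
// Rough heuristic: flag direct dependencies whose npm registry metadata
// lists at most one maintainer. Scoped package names (@scope/name) may
// need the "/" URL-encoded before hitting the registry.
import { readFileSync } from 'node:fs';

async function flagSingleMaintainerDeps(packageJsonPath = './package.json'): Promise<void> {
  const pkg = JSON.parse(readFileSync(packageJsonPath, 'utf8'));
  const deps = Object.keys({ ...pkg.dependencies, ...pkg.devDependencies });

  for (const name of deps) {
    const res = await fetch(`https://registry.npmjs.org/${name}`);
    if (!res.ok) continue; // skip private/unpublished packages
    const meta = await res.json();
    const maintainers: unknown[] = Array.isArray(meta.maintainers) ? meta.maintainers : [];
    if (maintainers.length <= 1) {
      console.log(`${name}: ${maintainers.length} maintainer(s) listed - worth a closer look`);
    }
  }
}

flagSingleMaintainerDeps().catch(console.error);
```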
Looking at the metrics Apple tracks in the App Store, it seems to me that "engagement" is all that really matters. You could have some huge number of happy users, but if they open the app only occasionally, you get numbers in red. Or you could have an app that solves an important issue but only for a small number of users, and you get numbers in red. If you're not releasing hits that people use all the time, you sink. If tons of people don't spend tons of time in your app, you sink.
The author complains about the time required to update libraries, and that is an aggravating process, but it's just an unfortunate part of maintaining an app. The real issue, again it seems to me, isn't that you have to do a lot of work just to increment the version string; it's that, ultimately, modern content ecosystems are designed around engagement-driven revenue models. And solid, simple, quiet, long-lived, useful or fun little apps simply can't thrive when all the sunlight is blocked by towering marketing-tech giants.
You're completely right. And I wouldn't single out just Apple here; they're falling into the same trap as others. This is going to be a recurring theme across all domains. We're attempting to reduce things like 'usefulness', 'credibility', and 'importance' to measurable metrics because of the pressure to scale and monopolize the market. It's scary to think that, for all the talk of hiring the 'best and brightest', these metrics are what supposedly the smartest engineers in the world come up with. It's pure hubris on our part.
There is a dystopian arc here, but I prefer to be more optimistic. I would think a legal mandate that champions interoperability, open data standards, and platform openness would put a dent in this march towards converting the human experience into numbers.
> Looking at the metrics Apple tracks in the App Store, it seems to me that "engagement" is all that really matters.
This has been the name of the game in ad tech - Facebook, Google, and social media in general. I think two worlds are clashing with each other: consumer tech is somewhat aware of the problems around mindless scrolling and addiction, but the growth-and-engagement mindset of the 2010s is cemented in the culture. Apple has little reason to follow this model because they primarily make money from selling hardware. Having a software platform that protects the user from this crap is a competitive advantage against Google, who depends on ad-based revenue. But Apple seems to have an identity crisis, fearing they'll lose out on sweet subscription fees and ad revenue now that most apps are free. This in turn creates conflicts of interest, where they end up fighting their own customers.
If regulators would bark and bite harder at anti-competitive behavior, it might actually force corporations to focus on what they're good at, instead of everyone building their own mediocre walled gardens that grow like a cancer and eat the company from within. At least, that's my wishful thinking...
Additionally, updating libraries periodically is inescapable for any app involving network connections or handling arbitrary user content, because doing otherwise means exposing your users to vulnerabilities.
Fully offline, totally self-contained apps are a different matter, but those represent an increasingly small percentage of apps.