The expectation that software libraries or frameworks should break every few months is disturbing. I think this may be due to the fact that many of the most popular frameworks do this.
I find that React-based apps tend to break every few months for a range of reasons; often because of reliance on obscure functionality or bundling of binaries which aren't forward-compatible with newer engine versions.
I think it is more that recent updates are used as a proxy for determining if the developer still cares about the project.
The problem is it is hard to distinguish between a project that has been abandoned, and a project that is feature complete and hasn't had any bugs reported in a while.
Although, if you assume that no software is free of bugs, and that at least some users will report the bugs they hit, then low activity suggests either that the project isn't well maintained, or that there isn't a critical mass of users to find and report those bugs. But then, higher-quality software requires a larger user base to hit that critical mass, so low activity doesn't tell you much unless you also know how good the software is.
And there is also the state of "it has all the features I need, but I'd be open to adding a new feature if someone else shows a compelling use case for it."
I guess that really depends on what it is.
The older I get, the more change-averse I become. That doesn't mean that everything is perfect and we shouldn't fix things that are broken, but it does mean I start to embrace the philosophy of "don't fix what isn't broken" more and more.
If I have something installed on my desktop / workstation that a) is not network attached and b) is not exploitable from a security point of view (least privileges, no sensitive data) c) has little risk of corrupting data and d) is unlikely to "just stop working" one day because of a systems / dependency update ... then I'm not in any hurry to update it. When I do get around to it it's probably because I'm installing a new version of Linux and so everything is getting an update. But if the old version still works, has no bugs or security risks and there's been no updates for years ... who cares?
I can think of several kinds of applications that fall into this category:
- Music / mp3 player
- Classic Shell / Open Shell for Windows
- Office software, like LibreOffice. I'm sure newer versions offer some niceties but things "just work" for me. I don't see the reason to upgrade.
- Almost every game, ever (assuming it's stable and has no major usability bugs that need patching)
- Lots more
Things I do want to upgrade would be my web browser, operating system kernel, anything network attached. Really anything that could affect data integrity or security. Other than that, I'm lazy, comfortable and I don't want to risk the introduction of breaking changes.
Okay, so this is exactly what GP said. There is a difference between "not adding new features" and "not fixing bugs".
Would I be upset if the developer "fixed" these categories of bugs? Of course not. But would I avoid using the software at all if they never did?
"I guess that really depends on what it [the software in question] is."
It's a little silly, but I could imagine that if this became a common enough thing, people would start to trust it.
Programmers are lazy creatures.
I use a ton of tools I wrote myself and haven't had to touch in years except for the occasional dependency update.
17 hours ago: https://github.com/coreutils/coreutils/commits/master
Or let's phrase it differently: there will always be a part of the code that you do not have to touch for decades if all goes well. And if that part sits in a separate repo, is it abandoned or just finished?
If the project already had the features you need, was thoroughly tested and debugged, and was built with almost no dependencies (only relying on a small number of very stable platform APIs), then it might not matter even if it was already abandoned when you started using it.
Case in point: On a recent software project, I took dependencies on two Lua libraries which were already abandoned. Their source code was short and readable enough that I had no problem going through it to understand how they worked. In one case, I added an important feature (support for a different type of PostgreSQL authentication to a Postgres client library). I also packaged both libraries for my Linux distro of choice and submitted them to the official package repositories. If anyone reports bugs against the version which I packaged, I'll fix them, but it seems unlikely that many bugs will ever be found.
As the OP rightly pointed out, the expectation that software libraries should normally need to be constantly updated is ridiculous. Many other things invented by humans stay in use for decades with no alterations. Beethoven's 5th symphony hasn't had any updates in more than 200 years.
I don't understand this argument. Are you saying that some things don't require updates, therefore we shouldn't expect software libraries to be updated as well? But there are also things that do require updates or maintenance. Would that invalidate the argument?
Some examples of things that need updates or maintenance: buildings, infrastructure, tools, machinery. Even immaterial things like books or laws receive updates.
And that includes the 5th Symphony too, which has different editions.
In today's world of software development, many take it as given that every software library should receive periodic updates, and that a library which has not been recently updated is in some sense "dead" and should not be used.
This is logical in cases where the domain is itself constantly changing. For example, software packages which calculate the results of income tax returns must be updated yearly, because the tax laws change. Similarly, packages which perform time zone calculations must be periodically updated, because the time zone rules change.
However, a vast number of software libraries operate in domains which are not inherently subject to change. As an arbitrary example, imagine a hypothetical library which performs symbolic differentiation and integration. Since the rules for how to integrate a function do not change, once thoroughly tested and debugged, there is no reason why such a library could not be used unaltered for decades. Yes, it might not benefit from the development of more efficient algorithms; but if the library was already more than fast enough for a certain application, that might not matter.
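As a toy illustration of why such a library could sit untouched: the differentiation rules themselves are fixed mathematics. A minimal sketch in Python (the representation and function name are my own, not from any particular library):

```python
# Toy symbolic differentiation of a polynomial, represented as a
# coefficient list [c0, c1, c2, ...] meaning c0 + c1*x + c2*x^2 + ...
# The power rule d/dx(c * x^n) = c*n * x^(n-1) never changes, so once
# this is thoroughly tested there is nothing left to update.
def differentiate(coeffs):
    derived = [i * c for i, c in enumerate(coeffs)][1:]
    return derived or [0]  # the derivative of a constant is 0

# d/dx (3 + 2x + 5x^2) = 2 + 10x
print(differentiate([3, 2, 5]))  # → [2, 10]
```

A real library would handle arbitrary expressions rather than coefficient lists, but the point stands: the domain itself imposes no need for change.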
While software is different from other things created by humans, the existence of creative works such as books or musical compositions, which often survive for decades or centuries despite not receiving regular updates, provides an illustration of what can and should be possible for software as well. Note, this is an illustration and not an argument.
What a shame it’s been abandoned and no other musician stepped up to take over the work.
Fork it and start maintaining it with other people depending on it.
What does it matter if the developer still cares about the project?
It isn't crashing and no new features are needed. If it were crashing, or features were missing, the project wouldn't be complete.
We're not talking about some early access half broken NPM package here, the blog specifically talks about a completed product.
For you it is.
But for a maintainer, reviewing and accepting a pull request can still be a lot of work, especially if the code has been stable for a long time.
Of course there will be some maintainers who are simply not interested, or unable to offer paid support (e.g. because their employer is a control freak), but that's OK. As others have pointed out, forking / maintaining out-of-tree patches (and providing them to others) is not inherently hostile.
Usually I browse the issues / bug reports page and see if there are any breaking bugs that aren't getting much attention. If there are, then that's one more data point that the project might be abandoned.
1) The owners sometimes archive the project or write in the readme: complete, no more features will be accepted.
2) You can look at the number of open issues. A project with X stars/used-by/forks that hasn't been updated in 8 months and has zero open issues is probably stable.
I imagine that I am not the only one.
"Library/repository activity has little to do with usability, or quality. Some things are more or less finished and correct."
For example... I guess that gaming the system really is required, if that's a 'good measure'.
Some software just does what it is supposed to and doesn't need to change, and this is OK, and I have no idea why this fact infuriates some people.
What this means is that, for most software, an essential part of the product is knowledge that needs to be kept alive. The only proof people can have of this is activity surrounding the project. This puts us in the uncomfortable position of having to designate someone as the 'keeper of all knowledge' around a project if it is to be sustained; failing to do so will result in that knowledge being irrevocably lost.
If your entire source code is so innovative that you can't immediately understand it, there might be trouble, but so much code is, or can be, pretty boring.
At most, changing something will break behavior that isn't really in the spec but that everyone wants and relies on, and that isn't immediately obvious; then you'll have to fix it.
The trouble is that if knowledge is lost, people will probably not bother to figure it out, and starting fresh might be more appealing.
The bigger issue, especially in open source, is that if there's no activity now, there might not be any in the future. A project could be abandoned for 10 years and be perfectly fine, until a change is needed; even if the change is easy and the knowledge is there, it might not happen if nobody cares anymore, and you'll have to fork it yourself. So maybe you choose the actively maintained thing even if it's less complete, more complex, or uglier to use.
Sure, you might have the specification that the original developer was given, but every developer brings their own opinions and hard-won experience to a project. Just because the specification says "outgoing messages are sent to a queue" and the program links with ZeroMQ, doesn't tell you whether that's a design goal or a temporary stop-gap proof-of-concept. Maybe the program is structured so it can use an email queue for distributed operation. Maybe the program is designed to be linked into a larger system and just push message structs onto an in-memory vector.
I think if a project was that important, the people who needed something patched would be able to get it together even if the original maintainer went missing.
The original analogy to the challenges of knowledge transfer isn't totally applicable. Knowledge transfer is important to organizations because figuring everything out all over again would be costly, not because it would be impossible.
You certainly care about what it will do tomorrow too (so you can keep up, or avoid having it break stuff for you), but if it goes unmaintained, you don't have to worry about that since it does the job for you.
OTOH, I've been on projects where a dependency was chosen on the promise of what they will do in the future. That future never materialized, so we were left with a bunch of unfinished dependency code, maintained a bunch of components ourselves etc — it was a mess, in short. So I never rely on promises, only on what's there today.
Whatever it was originally meant to do is probably either obsolete, or well known to all users, if it's been so long the original devs left.
If it's got ZeroMQ, and has for 5 years, one can probably assume someone will be upset if it suddenly does not have ZeroMQ.
Regardless of intent, for practical purposes, if people are using it, they probably want that to stay around until you do an orderly transition to some other thing.
Reverse-engineering all that can be an utter nightmare even with tests. I suggest that people who disagree consider what the word "reverse-engineer" implies: say, how they would go about reverse-engineering a car.
Usually you have this kind of problem when it's NOT producing desired results and the problem is how to make it do something that's needed without affecting some set of usecases that you don't understand yet.
We solve problems with code, or by finding ways to avoid writing code.
We translate vague requirements into things a computer can handle, or we translate between computer languages, because any spec that was complete would just be the program. That's why they keep us. The computers can do pure logic on their own.
Opinion is very much a part of it because pure logic is just a useless pile of text unless it has something to do with the real world, which is full of opinions.
Even inside the program itself, we aren't dealing with logic. Languages are designed for human minds, and compilers are meant to catch mistakes people make. Libraries are often chosen for all kinds of perfectly good nontechnical reasons, like "That one is actually maintained" or "Everyone already knows this" or "People seem to like this" or "That seems like it won't be a thing in 3 years".
At my company we had a real scramble to upgrade log4j when the CVE for that came out. A lot of software was many versions behind. Luckily for us, no breaking changes had been made, and it was simply a matter of bumping the version to the latest one.
That was a lucky break, because if that had been a library like Jersey or Spring it would have been an entirely different ballgame.
If you don't keep your builds up to date, you open yourself up to risks like that, which are unknown and to some extent unknowable.
Dependencies and external APIs change, so software breaks.
An example from a customer project today: Ruby had a top-level Queue class up to 3.0.2. For some reason they moved it to Thread::Queue as of 3.1.0. It's only a name change, but it still costs a release.
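The usual fix for this kind of rename is a small compatibility shim that tries the new location and falls back to the old one. Here is a hedged analogy in Python (the names are Python stand-ins chosen for illustration, not the Ruby constants; the actual case above is `Queue` vs `Thread::Queue`):

```python
# Illustrative pattern for surviving a class rename across versions:
# try the "new" location first, fall back to the "old" one, so the same
# code runs on both sides of the rename.
try:
    from queue import SimpleQueue as CompatQueue  # pretend "new" location
except ImportError:
    from queue import Queue as CompatQueue        # pretend "old" location

q = CompatQueue()
q.put("job")
print(q.get())  # → job
```

The same pattern works in Ruby with `defined?(Thread::Queue)` guarding a constant assignment, at the cost of, as the comment says, one release.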
I'm not sure what flags you are referring to, but that might be because I went from 8 to 11 and skipped 9 and 10.
> the changes of which only broke code using non-public API surface
The APIs weren't non-public. They were publicly documented. They did say they were internal and you should avoid them, but they were still used in libraries and referenced in tutorials and stack overflow answers. And if you happened to transitively use a library that used one of those APIs, it would cause a fair amount of pain when you upgraded.
Ultimately, the break seems to have been intentional, but did receive appropriate consideration. Mark’s comments about moving forward do address these points.
I do think that, as a general heuristic, no updates for 3 months means it's been at least 2 months since anyone paid attention to outstanding bugs, especially in the npm ecosystem.
I'm not saying that because of the leftpad fiasco, I mean that any library that small should simply be pasted into your codebase because the risk of managing the dependency outweighs the benefits.
If I see a library which is supposed to solve a simple problem but it requires a large number of dependencies to do it, that's a big red flag which indicates to me that the developer who wrote it is not good enough to have their code included in my project - I don't care how popular that library is.
Node libraries are a bit odd in that respect, because package JSON files have an "engines" property that informs users about the minimum and maximum Node version the library works with. The author of leftpad could check its compatibility with each new version of Node that's released and update the engines value accordingly. That would be quite helpful, because abandoned libraries that aren't known to work with newer Node versions don't get updated, and users can use that information to look for something else.
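For illustration, a package.json carrying that information might look like this (the package name and the particular version bounds are hypothetical):

```json
{
  "name": "some-tiny-lib",
  "version": "1.0.0",
  "engines": {
    "node": ">=14 <=20"
  }
}
```

By default npm only warns on an engines mismatch; the range becomes a hard constraint for consumers who opt in (e.g. with `engine-strict`).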
What that means is that any Node library that isn't being updated as often as Node itself is probably not actively maintained, or the author isn't making full use of Node's feature set.
Really? Does this sound at all feasible to you, if the millions of libraries on npm are not to be labelled "obsolete"? Checking that string concatenation still works on every major Node release and tagging it as such?
Yes. I mean, they could just leave the max version of the engines property blank because it should always work if it's something as trivial as leftpad, but in a well-run package repository the work of checking library compatibility should be getting done. The fact that NPM is full of old libraries that haven't been tested in new versions of Node is a bad thing, and those libraries should be flagged as untested.
I think there’s a useful lesson there.
I check for things like over-engineering and I check out the author (check their other libraries if there are any).
"Do one thing and do it well" - the unix philosophy, and why I still use cd, ls (dir on windows) today.
If a project exceeds two functions, it's probably doomed to become either redundant or broken.
If a project does one thing well, even if something else uses it, you can always decompose to it (your software is always useful, even when somebody writes the fancy bells and whistles version).
SQLite seems to hit a good balance. There's always something new but nothing particularly breaking (from my light usage POV), so it looks live.
The author chimes in, says he’ll take a look next evening, then two years whoosh by.
There is an ambitious todo and nothing moves.
The code is in alpha for years, considered unstable, huge exclamation points in the readme that it should not be used, and yet somehow there are hundreds of reverse dependencies and it’s like this for years (usually should mean “RUN!!!”). Sometimes sneaky as it’s not a direct dependency for you, but for those two or three libraries used by like 80% of the ecosystem.
We do announce when there is a user-visible and significant issue or update, but that's quite rare, maybe once every 6 months or more.
It’s not a common use-case, I guess, but I had some old projects that I abandoned and wanted to try working on them again and found it almost impossible. The worst cases had dependencies which weren’t available unless I used an x86 laptop with an older version of Mac, because the packages were never built for arm laptops.
Regarding the second case, I bet that's Python and the wheels situation, where it doesn't fall back to building from source by default for some reason (instead throwing a super obtuse error), at least with Poetry.
Also, not many people (certainly not me) are able to write something that's perfect the first time. In an ideal world, other developers would be testing and challenging what's been written, which would mean the product or library needs updating.
I think it's likely, perhaps even ideal, that at least one of these factors is relevant.
I don't know if this is a good suggestion or not but I wonder if some form of "keepalive commit" might help here. I can imagine a few ways they might work, some simple, some more elaborate.
This expectation that things need to be committed to or updated every month has to stop, we're just wasting time and energy on stuff that is already fixed (unless there are security issues, of course).
Maybe GitHub needs to automatically include an UPDATES.md into each page, I dunno.
You might not care too much, and it clearly depends on the application, but for me, updating software so that it no longer uses a library whose user-input handling leads to a buffer overflow when you insert a certain character (or to remote code execution, etc.) can be quite important. After all, apps are connected to the Internet most of the time, which makes them inherently vulnerable.
After searching "android software libraries CVEs" I found this in the first results' page: https://github.com/dotanuki-labs/android-oss-cves-research
It might be outdated, but the principle still applies.
It's very unlikely that your application is self-sufficient and doesn't need updates.
EDIT: I read the message from Apple as a "hopeful" attempt to say "come on, man at least update the dependencies".
What security? "Google security", where everyone except the user has access to their data?
Why does a note-taking app need internet access to function? Why does my phone app need access to the internet?
> The expectation that software libraries or frameworks should break every few months is disturbing.
As a library user, it's hard to find a good balance between fully trusting a system to stay alive for a while without maintenance and being super paranoid about every aspect I rely on. Mentally, it's easier to expect it to break every now and then than to keep thinking "it's probably fine, but I still need to be defensive about things that never happen".
I think the issue of a library being "alive" or not is best mitigated by the users being comfortable fixing it if it breaks under their use case. There are libraries I thought were completely dead but that were a good shortcut to where we wanted to go, and we expected to extend them here and there anyway. That can't work for everything (thinking of security libs in particular; I wouldn't want to touch them), but to me it's the ideal situation, with no undue stress on the lib maintainer either.
According to the OP, it is also due to companies controlling mobile OS such as Apple.
One of the things I like about non-corporate open source OS like NetBSD is I can run old userland utilities with new kernels without any problems.
What do you mean by this?
I sort of assume this is the actual point? Apple presumably wants to drop support for older versions of the SDKs, and that requires that app developers update to the newer versions. I think you can make a reasonable argument that dumping work on third-party developers to simplify things for themselves is bad, but the author's belief that it was simply pointless busywork is probably incorrect from Apple's perspective.
I suspect the minimum download threshold to be exempt from this is _very_ high. Maintaining backwards compatibility for a small fixed set of apps doing things an old way is a categorically different problem from maintaining backwards compatibility for every app out there.
If this was really about deprecation, they wouldn't have a "minimum monthly downloads" exemption either. This policy is just a way to clear out old, unpopular apps from the store
Businesses don't want to be told that their working software needs to be updated to make a vendor's bottom-line cheaper. They recognize cost-shifting when they see it and respond by backing towards the exits. Microsoft maintained a philosophy for decades that it was their responsibility to, if at all possible, maintain backwards compatibility with older Windows software as a market differentiator. The primary times I remember them breaking this policy were security related.
(That having been said, I got out of active development of Windows software around Windows 8, so this may have changed).
Something like Google's minimum sdk version is annoying, but understandable. It's technical and concrete - you must link against version X because version X-1 is going to disappear.
This is not that. It's culling apps that are arbitrarily too old and arbitrarily not popular enough. They must be keeping around old sdk versions if those old but popular apps are allowed to continue on.
It's fine to argue that typical desktop applications need to be updated, but purpose-built applications cost money. When you have something like a POS terminal installed in thousands of fast-oil-change locations, from the Win3.11 era and working perfectly fine for years, that suddenly stops working after moving to 10, that's a sudden cost on a business (especially with no warning). Yes, you can argue that companies should be updating that kind of software, if only for security reasons (I often did). But the bottom line tends to be king, especially in smaller businesses, in my experience.
More regular applications tend to be much easier to get working, though. E.g. something like Delphi 2 or C++ Builder 1 works out of the box just fine. The biggest issue with older software is that it sometimes had 16-bit installers that do not work on 64-bit Windows. Windows comes with updated stubs for some popular installers, and it is possible to manually fiddle with some of the unsupported ones to run the 32-bit engine directly, though something like otvdm/winevdm, which emulates 16-bit code on 64-bit Windows, would also work.
But in general you can get things working, depending on how well the application was written. In some cases (games, 16-bit installers) you do need workarounds, as they won't work out of the box, but even those workarounds rely on the other 99.9% of the system preserving backwards compatibility.
Worked generally okay until the era of the Internet came along and after you quit the game, all manner of programs would crash when the network stack suddenly found itself teleported an hour or two into the future and couldn't cope.
A better example is productivity software, like Photoshop and Illustrator or Paint Shop Pro. I can get Paint Shop Pro 5, a raster graphics editor from 1998, to run on Windows 11 just fine, for example. Another is Microsoft Office, in which Microsoft goes out of its way to make sure documents created long ago will load and work fine in modern Office, and ancient versions of Office itself will happily run mostly fine on modern Windows too (eg: I run Office XP on my Windows 7 machines).
Old games also quite often accessed graphics cards directly, outside the official APIs.
And games are not business software. An old game that stops working is "too bad, move along"; accounting software that stops working has real-life consequences.
> The most impressive things to read on Raymond’s weblog are the stories of the incredible efforts the Windows team has made over the years to support backwards compatibility ...
> I first heard about this from one of the developers of the hit game SimCity, who told me that there was a critical bug in his application: it used memory right after freeing it, a major no-no that happened to work OK on DOS but would not work under Windows where memory that is freed is likely to be snatched up by another running application right away. The testers on the Windows team were going through various popular applications, testing them to make sure they worked OK, but SimCity kept crashing. They reported this to the Windows developers, who disassembled SimCity, stepped through it in a debugger, found the bug, and added special code that checked if SimCity was running, and if it did, ran the memory allocator in a special mode in which you could still use memory after freeing it.
> ... Raymond Chen writes, “I get particularly furious when people accuse Microsoft of maliciously breaking applications during OS upgrades. If any application failed to run on Windows 95, I took it as a personal failure. I spent many sleepless nights fixing bugs in third-party programs just so they could keep running on Windows 95.”
> A lot of developers and engineers don’t agree with this way of working. If the application did something bad, or relied on some undocumented behavior, they think, it should just break when the OS gets upgraded. The developers of the Macintosh OS at Apple have always been in this camp. It’s why so few applications from the early days of the Macintosh still work. For example, a lot of developers used to try to make their Macintosh applications run faster by copying pointers out of the jump table and calling them directly instead of using the interrupt feature of the processor like they were supposed to. Even though somewhere in Inside Macintosh, Apple’s official Bible of Macintosh programming, there was a tech note saying “you can’t do this,” they did it, and it worked, and their programs ran faster… until the next version of the OS came out and they didn’t run at all. If the company that made the application went out of business (and most of them did), well, tough luck, bubby.
This really gets into a philosophical difference of how software should work and who should be maintaining it. As a user, I love Raymond Chen's approach. As a developer, trying to maintain bug for bug compatibility with older versions of the code is something that scales poorly and continues to consume more and more resources as more and more bugs need that bug for bug compatibility across versions.
And they also went above and beyond to make it happen too. That time when they byte-patched a binary which they didn't have the source code anymore comes to mind.
Sure there are old things that are good and "complete" but far more old stuff is just old and could well be burnt to the ground, except for the fact that you have some customer somewhere relying on its oddities and unintended behaviors as an essential feature of their integrations.
There's no sign that being very-backward-compatible is holding Windows back that I can see.
On the other hand, I have collected a set of utilities into my workflow that are easily 20+ years old, and they all work fine.
The last major cull from Microsoft was that 16bit programs ceased to work on 64 bit platforms. Other than that I've never had an app fail.
Most of the suppliers of those utilities have long since disappeared, retired, or died. But I can still keep transferring the apps to the next machine, and they keep running.
None of this impacts the quality of my current offerings. Ultimately all this costs me is some cheap disk space.
Maybe Linux could be beaten into working with it, but this works well enough.
Backwards compatibility is very much holding MS back
Even if Mac hardware manages to vastly outrun the top end gaming PCs on raw performance, they'll never be seen as serious mainstream targets for this reason alone.
Except popularity doesn't correlate with utility when it comes to apps. Probably only addictive games and social network apps will pass whatever arbitrary threshold has been set.
This will harm one-off apps built to satisfy a niche purpose and downloaded by a small set of users. Apple probably thinks those are unimportant, like all the little high street shops, except that cumulatively they might matter to a majority of users. Also, if it's measured by "downloads" rather than "installed", it could take out larger, more widely used apps that are considered complete by both authors and users, but don't have enough daily new users to pop up on Apple's radar as important... This is similar to the "maintenance" fallacy of npm, where frequent patches = better, even though if your package is small and well written, making no patches should be a sign of quality.
It reminds me of a class I once took where the professor stated that some colonial governments would go into tribal areas, claim land ownership, and start taxing the indigenous people. And because these people now owed taxes, they had to give up their lifestyle, enter the workforce, and participate in the money economy whether they wanted to or not. I don't know if that scenario is historically accurate but it certainly is analogous to Apple's policy. Developers who might not even want to make money are being compelled to do so or see their apps get pulled, because forced updates amount to a tax that must be paid.
It's not a tax that must be paid. The developer can simply discontinue the older app, and not have to pay anything. So your analogy doesn't apply.
Switching an app from free to paid is a lot more work than recompiling and updating the app. There's a ton of coding and infrastructure you have to do. So it's not really saving you anything. The work of switching is a large upfront cost, which might not pay off, because apps don't magically sell themselves, you have to market them (which costs money!). This is especially a problem if you already have an app with low download numbers and low consumer awareness.
To an extent, but the reality is that an unacceptably-large percentage of apps that are 2+ years old are not correctly handling current screen sizes and reserved sensor areas.
> This policy is just a way to clear out old, unpopular apps from the store.
Great. This is the kind of active culling/editing an app store requires to remain vibrant.
It doesn't matter if that tool you currently use is perfect, or the game you play is fun just as it is; keeping it is clearly harmful to you (and makes the app store "less vibrant").
Everything older than 2 years must go. Crumbs, everything older than 1 year should go... Nay make that 1 month...
Who cares if users like an old app, "vibrancy" matters most.
You can suspect based on no evidence, but nobody knows, and Apple refuses to say.
The crazy thing is, if Apple truly wants to drop support for older versions of the SDK, then how in the world does it make sense to exempt the most-used apps???
Again, a guess based on zero empirical evidence. Also, Apple's "rule" here makes no distinction between paid and free apps. Indeed, free apps tend to have more downloads than paid apps, which means that Apple would be targeting the wrong apps if they were looking to offset costs.
Basically a cost saving move.
> Basically a cost saving move.
Working 1-on-1 with companies to determine what to keep and what not is anything but cost saving, by my estimate.
In Apple's defense, are they supposed to wait until the app breaks and starts receiving many complaints from customers, triggering a review process they would be forced to treat as high priority, before they take action to remove the offending app? That hurts the customer experience from their perspective.
Better for them to institute a policy preemptively addressing these issues (arbitrary as the timeframe may be).
And four hours is a good chunk of time, but what percentage is it of the time it took to design and implement the app in the first place?
Except that Apple is exempting apps with more downloads, and only punishing apps with fewer downloads, which is the opposite of worrying about "many complaints".
For an app with more downloads, they can dedicate more labor/resources to it.
What resources? For older apps with more downloads, Apple is doing exactly what you said they shouldn't do: wait until the app breaks, and start receiving many complaints from customers.
If they paid me maybe I would have. Otherwise I don't have time to keep dealing with their requests every 6 months. Is it such a hard thing to ask that if shit works, just leave it be?
The intention is to stop supporting old iOS APIs in new versions of Xcode and iOS.
Apple isn't Microsoft or the Web - very old Windows programs and very old websites still run pretty fine.
Apple would rather shift the burden of updating an app to the latest API onto the devs than provide API support forever.
I don't see Nintendo removing old Switch games from the eShop.
I don't see Apple Music removing the Beatles because they haven't updated recently.
My recommendation, which Apple's (obnoxious) ad business would never accept, would be to
1. Remove the obnoxious and generally useless ads which eat up the top half of the first page of app search on the iPhone.
2. Improve app search with more options and a fast, responsive UI. They might also consider letting you weigh ratings from other sources, such as critical reviews vs. user reviews (a la Metacritic).
3. Better human curation with more categories. This is the same approach that Apple Music takes and it seems to work well. Games should have Artist/Studio/Developer pages as well as Essentials lists to facilitate discovery. Same with genre/category essentials, which have a rotating list that can be saved/bookmarked.
I wish I could upvote you for that multiple times!
The real problem is that developers have no choice except to offer their app through Apple's store. There is no room for the developer to offer their labour of love or niche product in a storefront that better serves their needs, or even from their own website.
How is this different from other walled garden game systems/game stores like Nintendo's eShop?
Yet they seem to keep older games and apps, even niche titles for a limited audience (such as music production apps or BASIC programming apps) which I greatly appreciate.
I would agree that discoverability isn't great on the eShop, but there are dozens of enthusiast web sites which are pretty good for game discovery (also aggregators like metacritic.)
And, as I noted, I think Apple already has a good approach which they're using with Apple Music - better human curation including category/genre/time period/artist/etc. playlists. Podcasts/radio shows also help.
Many games (at least ones that aren't in the "live service" category) are more akin to movies, music, or books than to some sort of perishable produce, so a curation approach that balances older content with new content makes sense.
As for other media, brick and mortar retailers were always selective about what they offered. I suspect that we will see something similar happen with their digital variants in the coming years. I also suspect that it will be sales numbers, not human curation, that will be the basis of their decisions.
I don't hear independent developers griping that Apple is failing to advertise their apps and bring those apps to the attention of iPhone users. Instead, I hear independent developers griping that Apple is preventing their apps from being run on iPhones.
Therefore, I completely disagree. This is purely a matter of data storage space (cheap and nigh infinite), not a matter of limited attention.
There are huge numbers of good apps on the store that don’t get visibility.
Removing apps based on downloads or lack of updates is troubling.
I do not. Do you think shovelware author x has any issues pushing a new garbage update?
I want the app store to be full of high quality stuff, not recent.
By that argument they should also get rid of most old music, old books, old movies.
“Make space”? This isn’t a shelf. There’s always enough space for digital items.
Even in a less walled-garden environment, I'm mostly not interested in running 10-year-old applications unless they're something fairly simple that really does do everything required for the purpose and still runs reliably.
In any case, as a user, I probably figure that if an app hasn't been updated in five years, it may or may not work and I'll probably at least subconsciously resent the store's owner a bit for clogging up the store with old stuff.
The 10 year old version of AutoCAD still runs, and you can use it today to do a ton of high-value CAD work. Thanks to Microsoft for not arbitrarily blocking it from running.
This is precisely why Google only indexes webpages written in English and are focused on the American market.
Also - there's a cultural preservation issue here as well. That bothers me.
The difference is that Nintendo shops have a limited shelf life, while App Store is forever(-ish). Nintendo will be shutting down the Wii U and 3DS eShops in 2023.
2023 is 6 years after Wii U was discontinued. Since the platform was frozen in time in 2017, there's no point to game updates.
If you think that’s all a smartphone is, then it’s natural to come to the conclusion that the only thing that has changed is speed and resolution.
It also happens to be simply wrong.
Most applications basically just need:
- a canvas to paint some bitmaps on
- some way to tell what part of the screen the user tapped on
- a way to get TCP/IP or HTTPS traffic in and out
- some sound output
- some persistence
- some way to show notifications
- a few other odds and ends like GPS, sometimes
Almost the entire list has been supported on every major platform since the late 2000s. Yes, rich multimedia apps that make good use of additional APIs and hardware features do exist. But it's inappropriate to nuke most old "normal" applications just because old rich multimedia apps stopped working over time.
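To make the point concrete, that whole surface area fits in a handful of methods. Here is a hypothetical sketch in Python (the interface and all names are mine, not any real platform SDK) just to show how small the contract most "normal" apps actually rely on is:

```python
from abc import ABC, abstractmethod


class MinimalAppPlatform(ABC):
    """Hypothetical sketch of the small API surface most apps need.
    None of these names correspond to a real platform SDK."""

    @abstractmethod
    def draw_bitmap(self, bitmap, x, y):
        """A canvas to paint some bitmaps on."""

    @abstractmethod
    def on_tap(self, handler):
        """Register a callback reporting where the user tapped."""

    @abstractmethod
    def http_request(self, url, body=None):
        """Get HTTPS/TCP traffic in and out; returns response bytes."""

    @abstractmethod
    def play_sound(self, pcm):
        """Some sound output."""

    @abstractmethod
    def store(self, key, value):
        """Some persistence."""

    @abstractmethod
    def notify(self, title, body):
        """Show a notification."""


class FakePlatform(MinimalAppPlatform):
    """Trivial in-memory implementation, enough to demonstrate that the
    whole contract is a half-dozen methods."""

    def __init__(self):
        self.kv = {}
        self.notifications = []
        self._tap_handler = None

    def draw_bitmap(self, bitmap, x, y):
        pass  # a real platform would blit to the screen here

    def on_tap(self, handler):
        self._tap_handler = handler

    def http_request(self, url, body=None):
        return b""  # a real platform would perform the request

    def play_sound(self, pcm):
        pass

    def store(self, key, value):
        self.kv[key] = value

    def notify(self, title, body):
        self.notifications.append((title, body))


platform = FakePlatform()
platform.store("last_sync", b"2022-06-01")
platform.notify("Done", "Sync complete")
```

An API this narrow is exactly the kind of thing that platforms have supported, in one form or another, for well over a decade, which is why apps built against only this subset tend to keep working.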
Apple introduced size classes and then you needed to adjust for different views for iPads once they supported more than one app being displayed on the screen.
Apple rightfully got rid of 32-bit app support on the actual die. It introduced new permission models, design aesthetics change, new accessibility options are added, better APIs are written, etc.
Besides, isn’t this the company that allowed you to install iPhone apps on iPads and blow the UI up to 2x size?
You could run iPhone 4 apps on iPhone 5 and beyond. But they looked horrible.
You do realise the only reason iPhone Retina screens became a thing was to enable double pixel scaling, because iPhone apps at the time were coded to a fixed resolution.
That's the definition of a compatibility hack.
Of course, Apple being Apple, they managed to sell it as a feature and made every person who was happy with a 1366x768 laptop suddenly desire a Retina display.
- they care about their apps not being evicted from memory as quickly when they switch apps because memory isn’t taken up by 32 bit and 64 bit versions of shared libraries.
- increased battery life by not having as much RAM - yes RAM takes energy.
- using the die space saved by not having 32 bit instruction decoding means you can use that die space for enhancements that users care about, and decrease the die size to make the phone more battery efficient
- for Mac computers, you have computers that are faster, have more than twice the battery life, are more memory efficient (meaning less swap), and can be fanless without getting hot.
Let the user make the damn choice.
As do a lot of iOS users. I think the stat you are looking for is that, on average, iPhone users purchase more apps.
Don't forget to consider Android's massive installed user base in the calculation. Even if Droid users convert to paid at 1/4 the rate of iOS, you can make it up in sheer bulk.
Not true. More often than not, our iOS releases get delayed hours if not days, while our long-suffering iOS lead patiently walks yet another green reviewer through the policies and our previous interactions with reviewers to convince them that our app is in compliance. Among other things, our app repeatedly gets flagged for failing to comply with rules that don't apply to it. This is usually resolved with a simple message to the reviewer, but depending on the turnaround, that can effectively add a day to the time it takes to get a bug fix out.
Dealing with these bogus issues probably accounts for 5% of the productive time of our iOS lead. And this is despite keeping our release cadence slow (every two weeks except for the most critical bugs) and after we've made many reasonable low-to-medium effort changes in our app to stop triggering false positives in their tools.
God help us if Apple ever went the Google route. Apple reviewers might be inexperienced and undertrained, but at least they're human and capable of talking through issues.
Besides, users aren’t going to be updating their app multiple times per day as they would a website that is continuously updated.
We have an old iOS app for our products that runs fine even on the latest version and while there is a next gen app available that most users have switched to, some prefer the old one. For some use-cases, even for new users, the old app just fits a lot better.
We cannot rebuild and republish that app, because it depended on third-party software that we no longer have access to. The app will continue to work fine for users that own a corresponding device, but it will most likely be removed from the App Store for no real reason in a few months.
This will force all users who want to actively use it to switch to the next-gen app, which they may not want, as soon as they add someone new to their device.
In our case, the apps are linked to hardware that the user owns and shares with multiple others. Due to very restrictive changes in iOS background activity over the years, we had to restructure some of the functionality, so the apps are not really interoperable. I think in this case, as soon as the user wants to add a new user to the device or an existing user gets a new phone, all users tied to that device would have to switch to the next gen app to keep having a consistent user experience.
This is not the kind of experience we want to provide for our users but we simply cannot invest as much money as would be required into completely mitigating the effect of Apple's decisions (such as heavily restricting background activity) year after year.
Some software from the past that made me feel that way:
1. Adobe Photoshop 7
Felt feature-complete, never ran into a bug, not resource-intensive (I would run it on a Pentium 200), no online activation/membership BS
2. Apple Aperture 3
Also felt feature-complete and had everything I needed. Nowadays I need two or three different programs to match the features of a 2003 app. Unfortunately the app stopped working after the 64-bit transition, and Apple retired it in favor of the much simpler Photos. Shame on you, Apple.
3. iTunes 10
Pretty much the sweet spot of features for managing your music library. Apple replaced it with the Music app, because nowadays users stream rather than maintain their own libraries, but the app is a buggy mess.
Solid, just worked.
More software should be like that, but never will be, because it isn't profitable to have users running the same version for many years. The revenue-maximisation strategy is to release continuous beta-quality software, and apps should provide a steady revenue stream. That's why the App Store just assumes an app that hasn't updated is "abandoned". That's the perspective of users in general today, too.
You can get some of it back with FOSS applications, even if the quality normally isn't the same as with old freeware/proprietary applications. At least with a FOSS application you get to keep it forever, unlike SaaS which evaporates into thin air the moment the parent company decides to pull the plug.
Of course if time is of no essence you can spend it keeping the software up to date, but being able to waste that time doesn't really excuse the underlying foundations forcing application developers to waste their time.
Lots of stumbles and missed opportunities - like, what if they had actually been serious about making the Apple TV a Wii-style games console? They have the hardware expertise to do that at a reasonable cost, but they just have no idea about the market, and apparently no desire to learn.
Apple products very much have a time and a place. Their plan is recurring revenue. Something that happened 3 years ago is almost entirely irrelevant to them. It's true with both hardware and software.
Wii-style games on AppleTV would not be a win for them. They don't want to sell a few million copies of Wii Sports once every 3-5 years. They want you buying a new 99¢ game at least once a month. They want you spending $5-10 in micro-transactions a month. They want you subscribed to AppleOne for $15-30 a month. They want you to buy a whole new iPhone every 2-3 years and start the process all over again with some new AirPods, cases and bands in between.
Apple doesn't want to sell you a new game for $59 every 2 years. They want to sell you what amount to an average of $59 worth of goods and services a month… forever. And while that sounds like a lot, that's a low target. If you have your TV subscriptions through Apple and Apple Care you can easily be contributing $100/mo or more to Apple's bottom line.
If Nintendo took that approach then they'd end up throwing away Super Mario Odyssey and Zelda: Breath of the Wild.
We consider chalk 'finished' and probably won't release another major version that changes the API; perhaps just TypeScript maintenance as Microsoft messes with the language even further and we have to fix types.
Sometimes, software is complete. I hate this new industry dogma that code rots over time and needs to constantly be evolving in order to be "alive".
I'll use this package, multer-s3-transform, as an example. It's been abandoned. It depends on multer-s3, which depends on aws-sdk and multer, which in turn depends on busboy.
Now, aws-sdk, multer, and busboy have organizations maintaining them. However, the random projects layered on top of those are by single individuals.
The answer is two-fold: avoid layered wrapper projects, and avoid big projects managed by a single individual.
The author complains about the time required to update libraries, and that's an aggravating process, but it's just an unfortunate part of maintaining an app. The real issue, again it seems to me, isn't that you have to do a lot of work just to increment the version string; it's that, ultimately, modern content ecosystems are designed according to engagement-driven revenue models. And solid, simple, quiet, long-lived, useful or fun little apps simply can't thrive when all the sunlight is blocked by towering marketing-tech giants.
There is a dystopian arc, but I prefer to be more optimistic. I would think a legal mandate that champions interoperability, open data standards, and platform openness would put a dent in this march towards converting the human experience into numbers.
This has been the name of the game in ad tech like Facebook, Google and social media in general. I think two worlds are clashing with each other, where consumer tech is somewhat aware of the problems around mindless scrolling and addiction, but the growth-and-engagement mindset of the 2010s is cemented in the culture. Apple has little reason to follow this model because they primarily make money from selling hardware. Having a software platform that protects the user from this crap is a competitive advantage against Google, who depends on ad-based revenue. Apple seems to have an identity crisis, fearing they'll lose out on sweet subscription fees and ad revenue now that most apps are free. This in turn is creating conflicts of interest, where they end up fighting their own customers.
If regulators would bark and bite harder around anti-competitive behavior, it might actually force corporations to focus on what they're good at, instead of everyone building their own mediocre walled gardens that grow like a cancer and eat the company from within. At least, that's my wishful thinking...
Fully offline, totally self-contained apps are a different matter, but those represent an increasingly small percentage of apps.