How are you supposed to deal with recalcitrant users? I work for an organization that is ending support for several long-running APIs. And by "ending support" I mean turning off the servers: users must move to an entirely new platform.
We’ve sent out industry alerts, updated documentation, and emailed all users. The problem is that the contact information goes stale. The developer who initially registered and set up the keys has moved on. The service has been running in production for years without problems, and we’ve maintained backwards compatibility.
So do we just turn it off? We’ve put messages in the responses, but if the call still returns 200 OK we know no one is looking at those. We’ve discussed doing brownouts where we fail everything for an hour with clear error messages explaining what is happening.
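To make the brownout idea concrete, here's a minimal sketch of the kind of middleware we've been discussing. Flask, the window schedule, the 503/Retry-After choice, and the dates/URLs are all placeholder assumptions, not our real setup:

```python
# Sketch of a brownout middleware: during a scheduled window, every
# request fails loudly with an explanation. All values are placeholders.
from datetime import datetime, timezone

from flask import Flask, jsonify

app = Flask(__name__)

# Assumed schedule: brown out during the first hour of every day (UTC).
BROWNOUT_HOURS_UTC = {0}

@app.before_request
def brownout():
    if datetime.now(timezone.utc).hour in BROWNOUT_HOURS_UTC:
        resp = jsonify({
            "error": "planned_brownout",
            "detail": "This API shuts down permanently on 2025-12-31. "
                      "Migrate to the new platform: https://example.com/migration",
        })
        resp.status_code = 503
        # Retry-After signals the outage is deliberate and bounded.
        resp.headers["Retry-After"] = "3600"
        return resp  # short-circuits: the real handler never runs

@app.route("/v1/data")
def data():
    return jsonify({"ok": True})  # normal behavior outside the window
```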
Is there a better approach? I can’t imagine deliberately returning wrong data at random. That seems insane.
> How are you supposed to deal with recalcitrant users?
Keep the servers running, but make the recalcitrant users pay the costs, and then some. It is actually a common strategy. Big, slow companies often have trouble with deprecation, but they also have deep pockets, and they will gladly pay a premium to keep the API stable, at least for some time.
If you ask for money, you will probably get more of a reaction, too.
Step 1: Stop thinking of them as "recalcitrant". They're not recalcitrant. They bought (presumably for money) a product, and expect that product to keep working as long as they need it to! They don't expect the vendor to pull the rug out from under them and break it just because the API is old and icky and their software engineers are sad to keep it around.
Instead of "deprecate like you mean it" the article should be: "Release software like you mean it" and by that, I mean: Be serious. Be really, really sure that you are good with your API because users are going to want to use it for a lot longer than you might think.
> They bought (presumably for money) a product, and expect that product to keep working as long as they need it to!
This depends on the terms of the contract. Typically, termination of service is covered in the license. If the license terms are valid in the respective jurisdiction, there is no fundamental ethical obligation to run a server beyond them. There might be specific cases where it would be inappropriate to follow the terms to the letter, but that also has its limits.
Contract terms usually define legal obligations, not ethical obligations. They create duties parties must perform or face legal consequences--they don't speak to what those parties should do ethically.
Following legal obligations is an important part of ethics. The law also has the purpose of relieving the individual of the burden of complex ethical considerations. This is the general situation, especially in a democracy under the rule of law.
There are, of course, exceptions and disagreements about specific regulations. But having the law on your side is a very strong indicator that what you are doing is also, ethically, more or less okay. It is very hard to say that one party is far off ethically if two parties agreed on something and the terms of their agreement are without doubt legally correct.
Most systemic evil in the modern world is done legally, IMO. There is everything legal but nothing ethical about the way John Deere screws farmers over, how big tech sells your data, how Amazon creates a consumerist race to the bottom, how United Healthcare denies you coverage capriciously, etc.
Nothing lasts forever. The second you decide to use a new 3P API you have to understand it might disappear one hour after your production launch, and that's okay.
You also have to understand that there will be APIs that eagerly rugpull and APIs that don't, and if you're offering the former then your users will end up moving to your competitors all else being equal.
Software evolves over time, along with business needs. What seemed like (or even was!) a good idea at some point will almost surely cease to be a good idea at some point in the future. Breaking the API is totally fine if there's a good reason and it's carefully managed.
A technique I used on a project was to change the URL and have the old URL return a 426 with an explanation, a link to the new endpoint, and a clear date for when the old API would be turned off entirely. This reliably breaks the API for clients so that they can't ignore it, while giving them an easy temporary fix.
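Roughly, the stub left at the old URL looked like the sketch below. This is a reconstruction, not our actual code; the URLs, paths, and cutoff date are placeholders:

```python
# Sketch of the old-URL stub: every request gets a 426 pointing at the
# replacement endpoint. URLs and the date below are placeholders.
from flask import Flask, jsonify

app = Flask(__name__)

NEW_BASE = "https://api.example.com/v2"
SHUTOFF_DATE = "2024-06-01"

@app.route("/v1/<path:rest>", methods=["GET", "POST", "PUT", "DELETE"])
def upgrade_required(rest):
    resp = jsonify({
        "error": "upgrade_required",
        "detail": f"This endpoint has moved to {NEW_BASE}/{rest}. "
                  f"The old URL stops responding entirely on {SHUTOFF_DATE}.",
        "new_url": f"{NEW_BASE}/{rest}",
    })
    # 426 is technically meant for protocol upgrades, but any 4xx that
    # clients can't mistake for success does the job here.
    resp.status_code = 426
    return resp
```

The "easy temporary fix" is just repointing the client at the new URL; the 426 body spells that out. The specific status code matters less than the fact that the expected payload is gone, so even clients that never check status codes break visibly.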
Clients weren't happy, but ultimately they did all upgrade. Our last-to-upgrade client even paid us to keep the API open for them past the date we set--they upgraded 9 months behind schedule, but paid us $270k, so not much to complain about there.
I suspect it's not so much that it was considered more cost-effective, and more that it wasn't considered at all. My impression was that nobody was even allocated to work on the transition until 8 months in, because that's when we started getting emails from their devs, and the upgrade took them less than a week once they actually did it.
No--the goal was to break the API so users noticed, with an easy fix. A lot of users weren't even checking the HTTP status codes, so it was necessary to not return the data to make sure the API calls broke.
We did roll this out in our test environment a month in advance, so that users of our test environment saw the break before it went to prod. But predictably, none of the users who had been ignoring the warnings for the year before were using our test environment (or if they were, they didn't email us about it until our breaking change hit prod).
Sleep()s that increase exponentially every month seem like a good solution. When the API has 10 seconds of latency, hopefully someone starts asking questions. If not, I think brownouts are a decent idea.
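A sketch of what that could look like; the announcement date, base delay, and growth factor are all made-up numbers:

```python
# Sketch of escalating latency: each request sleeps longer the further
# we are past the deprecation announcement. Numbers are illustrative.
import time
from datetime import date

ANNOUNCED = date(2024, 1, 1)  # placeholder announcement date
BASE_DELAY_S = 0.1            # placeholder starting delay
GROWTH_PER_MONTH = 2.0        # placeholder: delay doubles every month

def deprecation_delay(today: date) -> float:
    months = (today.year - ANNOUNCED.year) * 12 + (today.month - ANNOUNCED.month)
    return BASE_DELAY_S * (GROWTH_PER_MONTH ** max(months, 0))

def handle_request(request):
    # 0.1s at announcement, 0.2s a month later, 0.4s after two, ...
    time.sleep(deprecation_delay(date.today()))
    ...  # normal request handling continues
```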