A MITM (e.g. a router along a multi-hop route between the victim client and StackExchange) could silently drop the unsafe HTTP request and maliciously repackage it as an HTTPS request, thereby circumventing the revocation.
Also: even if an insecure HTTP request isn't dropped and eventually makes it through to StackExchange's endpoint (thereby triggering the API key revocation), a MITM with a shorter round-trip time to SE's servers could win that race and wreak havoc until the revocation happens.
Nevertheless, SE's revocation tactic contributes positively to a defense in depth strategy.
Nothing could possibly be bulletproof. You sent a key over the wire unencrypted. You were in trouble before the data even reached a server that could do anything about it.
This approach is a practical choice based on the reality that the bulk of unencrypted traffic is not being actively mitmed and is at most being passively collected. Outside of actually developing cryptosystems, security tends to be a practical affair where we are happy building systems that improve security posture even if they don't fix everything.
as an old-school reader of the cypherpunks email list from before HTTPS existed, I'm still mad about this part:
> Outside of actually developing cryptosystems, security tends to be a practical affair where we are happy building systems that improve security posture even if they don't fix everything.
there was a time in the 1990s when cryptography geeks were blind to this reality and thought we'd build a very different internet. it sure didn't happen, but it would have been better.
we had (and still have today) all the technology required to build genuinely secure systems. we all knew passwords were a shitty alternative. but the people with power were the corrupt, useless "series of tubes!" political class and the VCs, who obviously are always going to favor onboarding and growth over technical validity. it's basically an entire online economy founded on security theater at this point.
> we had (and still have today) all the technology required to build genuinely secure systems.
True, but if we actually did that, it would make those systems very unpleasant to use. The age-old tradeoff of security vs convenience is still as much a force today as it always has been.
Having technically the tightest possible security is not always the right answer. The right answer is whatever balance people are willing to work with for a particular use case. There's a reason that most people don't secure their houses by removing all windows and installing vault doors.
I’d disagree that there has to be a trade-off at all. With hardware security keys or device-based passkeys, secure authentication is actually pretty convenient now.
Interesting; I have had the opposite experience.
Many websites will directly enroll my Yubikey and will even let me use it instead of a password, and logging in is as simple as touching the button at the prompt. It’s honestly much simpler than using a password for me, and MUCH more convenient than pulling out my phone for 2FA codes (especially for the university site’s painfully short session times).
yeah, the assertion is entirely false. there doesn't need to be any such tradeoff.
there's probably a term for the cognitive fallacy where you assert that however it happens to be is how it had to be. it's like normalcy bias, but retroactive.
I'd argue your reasoning is incorrect. By the time your service is developed, you would already have switched it to https, because during development every API key you sent via http got disabled.
So an in-the-wild MITM would never get to see your http request.
I agree from a developer point of view, but the people configuring and deploying the application aren't always the same people developing it.
As a developer I like to make many options available for debugging in various situations, including disabling TLS. This isn't controversial, every Go and Rust library I've ever seen defaults to no TLS, preferring to make it easy rather than required, so reflecting those defaults in the service's configuration is natural and intuitive.
I make sure my example configurations are as close to what will be needed in production as possible, including not just TLS but at least one "mutual TLS" validation. I even sync these back from production if it turns out something had to be changed, so the examples in the repository and built artifact are in line with production.
Yet I routinely find at least some of these disabled in at least some production deployments, presumably because the operator saw a shortcut and took it.
Let's rework Murphy's original law: if there are multiple ways to deploy something and one of those will be a security disaster, someone will do it that way.
There is a non-zero number of developers out there who would sooner deploy a proxy that upgrades http to https, because the thought of changing the application code wouldn’t spring to mind.
There are complicated authentication schemes built around HMAC that try to do this, but if you're putting that much effort into it you might as well give up and use https.
Some of these include a nonce and/or are deployed over TLS to prevent replay attacks and avoid sending bearer tokens over the wire. AWS SigV4 and RFC 7616 come to mind.
Even if they copy the header, they can only perform a replay attack, which is an improvement over leaking an API key. Also, you could include a timestamp in the signature to limit the amount of time it could be replayed.
It’s preventing the theft of the API key. The attacker can, at most, replay that specific request (which you could also mitigate with a nonce and expiration).
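For concreteness, here's a minimal sketch in Go of what such a signed request could look like. The header names, skew window, and key handling are made up for illustration; this isn't AWS's or anyone else's actual scheme:

```go
// Minimal sketch of HMAC request signing with a timestamp, so the server can
// verify the caller holds the secret without the secret ever going over the
// wire, and can reject stale (replayed) requests. The header names, skew
// window, and key handling here are illustrative only.
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"net/http"
	"strconv"
	"time"
)

const maxSkew = 5 * time.Minute // replay window: reject signatures older than this

// sign computes HMAC-SHA256 over the method, path, and timestamp.
func sign(secret []byte, method, path string, ts int64) string {
	mac := hmac.New(sha256.New, secret)
	fmt.Fprintf(mac, "%s\n%s\n%d", method, path, ts)
	return hex.EncodeToString(mac.Sum(nil))
}

// signRequest attaches the key ID, timestamp, and signature as headers.
func signRequest(req *http.Request, keyID string, secret []byte) {
	ts := time.Now().Unix()
	req.Header.Set("X-Key-Id", keyID)
	req.Header.Set("X-Timestamp", strconv.FormatInt(ts, 10))
	req.Header.Set("X-Signature", sign(secret, req.Method, req.URL.Path, ts))
}

// verify is what the server would do: recompute the MAC and check freshness.
func verify(req *http.Request, secret []byte) bool {
	ts, err := strconv.ParseInt(req.Header.Get("X-Timestamp"), 10, 64)
	if err != nil {
		return false
	}
	if d := time.Since(time.Unix(ts, 0)); d < -maxSkew || d > maxSkew {
		return false // too old (or too far in the future) -- limits replay
	}
	want := sign(secret, req.Method, req.URL.Path, ts)
	return hmac.Equal([]byte(want), []byte(req.Header.Get("X-Signature")))
}

func main() {
	secret := []byte("example-shared-secret")
	req, _ := http.NewRequest("GET", "https://api.example.com/v1/answers", nil)
	signRequest(req, "key-123", secret)
	fmt.Println("signature valid:", verify(req, secret))
}
```

The secret itself never crosses the wire; a MITM who copies the headers can only replay this exact request, and only within the skew window.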
Yes, but what is meant is that it provides protection against common calibers. So you can have bulletproof security in IT. That does not mean it is blast proof, or acid resistant, or that it prevents someone from using a backup key on the second entrance. It is just a metaphor saying this security is very solid. That might be true or not, but that nothing is 100% secure is quite well known.
I've been thinking for about 5 minutes about this comment and what to write, but I've come to the conclusion that this is not just the best thing to do, but the correct thing to do.
It's not different levels of good or bad... everything else is wrong.
One of the approaches mentioned in the article is to just not listen on port 80. Supposedly that’s equally good because the connection should get aborted before the client has the chance to actually send any API keys.
But is that actually true? With TCP Fast Open, a client can send initial TCP data before actually learning whether the port is open. It needs a cookie previously received from the server to do so, but the cookie is not port-specific, so – assuming the server supports Fast Open – the client could have obtained the cookie from a prior connection over HTTPS or any other valid port. That’s the impression I get from reading the RFC, anyway. The RFC does mention that clients should distinguish between different server ports when caching refusals by the server to support Fast Open, but by that point it’s too late; the data may have already been leaked.
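For illustration, here's roughly what that looks like at the socket level on Linux (a sketch using golang.org/x/sys/unix; the address and request bytes are placeholders, and it assumes the kernel already has a cached Fast Open cookie for that server's IP, e.g. from an earlier HTTPS connection):

```go
// Minimal Linux-only sketch of a TCP Fast Open client.
// The destination address and request bytes are placeholders. If the kernel
// already holds a Fast Open cookie for this IP (e.g. from an earlier HTTPS
// connection on 443), the payload rides in the SYN itself -- so it is on the
// wire before the client learns whether anything is listening on port 80.
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	fd, err := unix.Socket(unix.AF_INET, unix.SOCK_STREAM, 0)
	if err != nil {
		log.Fatal(err)
	}
	defer unix.Close(fd)

	// Placeholder destination: port 80 on a server that also serves HTTPS.
	dst := &unix.SockaddrInet4{Port: 80, Addr: [4]byte{192, 0, 2, 1}}
	req := []byte("GET /answers?key=SECRET HTTP/1.1\r\nHost: api.example.com\r\n\r\n")

	// sendto(2) with MSG_FASTOPEN performs the connect and queues the payload
	// in one call; with a cached cookie the data is carried in the SYN.
	if err := unix.Sendto(fd, req, unix.MSG_FASTOPEN, dst); err != nil {
		log.Fatal(err)
	}
}
```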
If you're serving web traffic and API traffic on the same domain, which many services are, then not listening on port 80 may not be possible. Even if you do use a different domain, if you're behind a CDN then you probably can't avoid an open port 80. I do keep port 80 closed for those of my services I can do so for, but I don't have anything else that needs port 80 to be open on those IPs.
I think Stack Exchange's solution is probably the right one in that case -- and hopefully anyone who hits it will do so with dev keys rather than in production.
I always thought it was bad practice to use the same domain for API and non-API traffic. In the browser there'll be a ton of wasted context (cookies) attached to the API request that isn't needed.
So it's better to have "api.example.com" and "www.example.com" kept separate, rather than using "www.example.com/api/", where API requests will have inflated headers.
That very much depends on what's hitting your API and why. If it's browser clients, you might want to worry about headers and cookies -- but with http/2 and http/3 using hpack and qpack you should be able to avoid sending all the data each time. If the clients aren't browsers then the question is moot but there are other reasons to consider.
In any case, I'd recommend same-origin requests for browser access to APIs and a separate domain for non-browser access, purely for separation of concerns. That lets you tailor the access rules for your endpoints according to the type of caller you're expecting.
What matters is that there is nothing listening on port 80 on the same IP address. That may be hard to control if you are using an environment with shared ingress.
If someone is in your path they can just listen on port 80 themselves, intercept your request, then forward your call to 443.
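To make that concrete, such an interceptor is only a few dozen lines of Go (a sketch; the upstream hostname is a placeholder): accept on :80, keep a copy of the plaintext bytes as they pass, API key included, and relay them to the real server over TLS.

```go
// Illustrative sketch of an on-path interceptor: accept plaintext HTTP on :80,
// log whatever the client sends (including any API key), and relay the bytes
// to the real server over TLS on :443. The hostname below is a placeholder.
package main

import (
	"crypto/tls"
	"io"
	"log"
	"net"
)

func main() {
	ln, err := net.Listen("tcp", ":80")
	if err != nil {
		log.Fatal(err)
	}
	for {
		client, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go relay(client)
	}
}

func relay(client net.Conn) {
	defer client.Close()

	// Open a legitimate TLS connection to the real endpoint.
	upstream, err := tls.Dial("tcp", "api.example.com:443", &tls.Config{})
	if err != nil {
		log.Print(err)
		return
	}
	defer upstream.Close()

	// Copy client -> upstream, teeing the plaintext into the log as it passes.
	go io.Copy(upstream, io.TeeReader(client, logWriter{}))
	// Copy upstream -> client so the victim sees a normal response.
	io.Copy(client, upstream)
}

type logWriter struct{}

func (logWriter) Write(p []byte) (int, error) {
	log.Printf("captured %d bytes of plaintext:\n%s", len(p), p)
	return len(p), nil
}
```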
Probably best to listen on 80 and trash the token right then: the majority of the time there won't be a MITM, and breaking the application will force the developer to change to https.
But listening on port 80 and revoking the key won’t help either, as the active MitM would have been smart enough to internally proxy to port 443 or return some other fake response.
The real point is to break the application during development before the first MitM. Either approach does that equally well.
But not listening on port 80 will also usually break the application. Though I suppose the same API key may be used by multiple applications, or multiple copies of an application configured differently.
edit: and even if there's only one application but for whatever reason it doesn't get taken down despite being broken, revoking the key now still protects against a MITM later.
Well, nothing you do on the server side will protect a client willing to use http: when an MITM is present: the client can still connect to the MITM, give away its credentials, and your server won't know.
Still, I agree that this is a very good way to teach your users to not start with http:! And that this is what one should do.
It could make sense for first-party SDKs for an API to block http access to the first-party API domain, but that should be unnecessary – typically users would use the default base URL hardcoded in the client library, and only replace it if they're going through some other proxy.
When they _do_ go through some other proxy, it's commonly in an internal network of some kind, where http is appropriate and should not be blocked.
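Something like this in a client library would capture that rule (type and function names are hypothetical, and api.example.com stands in for the first-party host): plain http is refused only when the base URL points at the official domain, so internal proxies and test servers still work.

```go
// Sketch of an SDK-side guard: refuse plain http only when the base URL points
// at the first-party API domain, leaving http to internal proxies alone.
// Type and function names here are hypothetical, not any real SDK's API.
package main

import (
	"fmt"
	"net/url"
	"strings"
)

const firstPartyHost = "api.example.com" // placeholder for the official endpoint

type Client struct {
	baseURL *url.URL
	apiKey  string
}

func NewClient(baseURL, apiKey string) (*Client, error) {
	u, err := url.Parse(baseURL)
	if err != nil {
		return nil, err
	}
	// Only enforce https when the caller is talking to the first-party domain;
	// custom base URLs (internal proxies, test servers) are left alone.
	if u.Scheme == "http" && strings.EqualFold(u.Hostname(), firstPartyHost) {
		return nil, fmt.Errorf("refusing to send API key over plain http to %s", firstPartyHost)
	}
	return &Client{baseURL: u, apiKey: apiKey}, nil
}

func main() {
	if _, err := NewClient("http://api.example.com/v1", "key-123"); err != nil {
		fmt.Println("blocked:", err) // first-party over http: rejected
	}
	if _, err := NewClient("http://10.0.0.5:8080/v1", "key-123"); err == nil {
		fmt.Println("internal proxy over http: allowed")
	}
}
```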
It should, but additional server-side mitigations are good for defense in depth. There may be people using a different client-side library, maybe because they use a different programming language.
Of course. But I think the poster above was referring to just posting random keys to the server.
In other words I don't have your key, or any key, but I have "all of them".
The correct response to this though is that "there are lots of keys, and valid keys are sparse."
In other words the number of valid keys that could be invalidated in this way is massively smaller than the number of invalid keys. Think trillions of trillions to 1.
Good point. Presented that way I am seeing more positives to their policies; in particular, if a vulnerability were unearthed by the invalidation quirk, that's a far better way to find out about it than most alternatives.
This is just a run-of-the-mill DoS attack, with the astronomically unlikely jackpot of additionally invalidating a random unknown user's API key when you get a hit.
Astronomically is an understatement. If they made 1000 requests per second they might have a 1% chance of revoking a key before the heat death of the universe.
Cracking hashes requires large-scale parallel processing, something you can't do if you're rate-limited by an API.
If the API key is a UUID or similar in complexity, they'd have to send 5.3 undecillion API keys to make sure all of them were invalidated.
So yes, it would open the door to revoking random API keys, but that's not a bad thing; when using an API key, you should be ready to rotate it at any point for any reason.
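For a sense of scale, here's a quick back-of-the-envelope, assuming a v4-UUID-sized key (122 random bits) and a generous 1000 guesses per second:

```go
// Back-of-the-envelope: how big is a v4-UUID-sized keyspace, and how long
// would it take to enumerate it at 1000 guesses per second?
package main

import (
	"fmt"
	"math/big"
)

func main() {
	// A version-4 UUID has 122 random bits.
	keyspace := new(big.Int).Lsh(big.NewInt(1), 122)
	fmt.Println("keys:", keyspace) // ~5.3 x 10^36, i.e. 5.3 undecillion

	// Guesses per year at 1000 requests per second.
	perYear := big.NewInt(1000 * 60 * 60 * 24 * 365)
	years := new(big.Int).Div(keyspace, perYear)
	fmt.Println("years to cover the whole space:", years) // on the order of 10^26
}
```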