The ossification phenomenon happens to all sorts of APIs. Consider stat(2): all hell would break loose if we introduced another file type for st_mode. I'm a fan of keeping things "well oiled" by exercising extension points that are supposed to keep working.
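To make the stat(2) point concrete, here's a minimal Python sketch of the kind of "exhaustive" consumer that ossifies st_mode (classify is hypothetical, not any particular program):

    import os
    import stat

    # Plenty of real code enumerates "every" file type with no default
    # case, which is exactly why a new S_IF* constant would break the world.
    def classify(path):
        fmt = stat.S_IFMT(os.lstat(path).st_mode)
        known = {
            stat.S_IFREG: "regular file",
            stat.S_IFDIR: "directory",
            stat.S_IFLNK: "symlink",
            stat.S_IFIFO: "fifo",
            stat.S_IFSOCK: "socket",
            stat.S_IFBLK: "block device",
            stat.S_IFCHR: "character device",
        }
        return known[fmt]  # KeyError on any file type the author never imagined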
That said, I wish we would be more willing to force these rusted joints to open, breakage be damned. Breaking 3% of the web sounds like a lot, but it's not that much, especially considering that anyone broken will very quickly upgrade.
By using the elaborate workarounds described in the article, we recognize and reward the worst technical practices. The author may not want to cast blame, but I do.
Web breakage is a Prisoner's Dilemma. Unless all major vendors agree to break the same thing at the same time, the one that "defects" to being compatible with garbage will seem more reliable to users. Remember, most users don't know what TLS is, but they care whether the page they wanted to see opens "fine" or shows scary errors.
We've been there with CSS box model, DOM levels, (X)HTML(5), and CSS prefixes. There have been some cases of cooperation, e.g. SHA-1 certs and kicking out bad CAs, but most of the time browser vendors only "break" things gradually, and only when it affects < 0.1% of users.
> We've been there with CSS box model, DOM levels, (X)HTML(5), and CSS prefixes.
Other examples:
Content-Type. There were a few years where many sites were misconfigured to serve virtually every non-HTML file as text/plain. IE, deviating from the HTTP spec, always used its own algorithm to detect the actual file type and thus "worked", whereas Netscape (and later Mozilla) respected the header and would display "garbage".
Windows-1252 mojibake. Various Windows-based authoring tools simply used the native Windows code page 1252 to encode their text in HTML files, which were then claimed to be encoded as ASCII or ISO 8859-1. This led to documents that mostly looked correct but would have garbage sprinkled around wherever punctuation specific to Windows-1252 appeared (primarily the distinct open/close quotes and em-dashes, if memory serves). IE of course handled these "correctly", i.e. interpreted ASCII or ISO 8859-1 to mean Windows-1252. I think this behavior is now specified by HTML5 under certain conditions.
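The mismatch is easy to reproduce with nothing but the Python stdlib, as a sketch:

    # Smart punctuation as a Windows authoring tool would have written it:
    raw = "it\u2019s \u201cfine\u201d".encode("cp1252")
    print(raw.decode("latin-1"))   # mojibake: bytes 0x92/0x93/0x94 become C1 control chars
    print(raw.decode("cp1252"))    # it's "fine" -- what IE effectively did

(Decoding as ASCII would outright fail on those bytes, which is why Latin-1 was the forgiving-but-wrong interpretation.)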
> IE of course handled these "correctly", i.e. interpreted ASCII or ISO 8859-1 to mean Windows-1252. I think this behavior is now specified by HTML5 under certain conditions.
You can leave out "under certain conditions": WHATWG just plain specifies that the "iso-8859-1" label identifies the windows-1252 encoding [1]
Considering Chrome's market share, it's enough for Chrome to break something to force server-side action. Distrusting Symantec certificates is a prime example, I think.
The breakage rate for distrusting all Symantec certificates is significantly higher than 0.1%, but there is just one vendor that needs to fix things.
Middleboxes are considerably more problematic, since you've got a lot of manufacturers out there, some of whom might actually be out of business. Pushing firmware upgrades to all of those middleboxes would also be a nightmare. And the icing on the cake is actually finding the malfunctioning middleboxes, since they can be anywhere between the end user and the web server.
> The breakage rate for distrusting all Symantec certificates is significantly higher than 0.1%, but there is just one vendor that needs to fix things.
No. Every website that uses a Symantec cert (that was issued before 2017-12-01) needs to renew their cert early to move to their new root.
I'm okay with this if and only if browser vendors make old releases available for download, or better yet open-source their old releases. That way we can be more confident that, when archiving pages that no longer render correctly, someone will actually be able to view them.
Old Firefox releases (binary and source) are very much available for download, though whether you can run them on modern operating systems somewhat varies. And you almost certainly can't compile some of them with a modern compiler, though a compiler from the era of the release should work...
Anyway, https://ftp.mozilla.org/pub/firefox/releases/ has links to all the Firefox release binaries and corresponding source snapshots starting with version 0.8. It includes a number of the betas as well (so for example has the Firefox 58 beta builds already).
>And you almost certainly can't compile some of them with a modern compiler,
I wonder how long it takes, and how painful it is, to get all the required dependencies to compile a modern web browser...
[two hours of obscure error messages later]
"You have v2.3030392.302 of dependency 113, you need to compile v2.3030392.300 in 32-bit first, then build the 64-bit version of v2.3030392.301 then ..."
You get all the compiler errors fixed and then it's on to round two... linker errors.
Maybe old browser versions should be archived as container images with all the necessary shared libraries. Then the only interface that might break them is the kernel interface, and we know Linus' stance on breaking userspace.
I've created dev environment VMs with classic operating systems (XP, etc.) because my job involves updating lots of really old code (most clients only update when they have to, every 5-10 years).
I could definitely see someone doing that with Firefox. They may already exist. I've just, from experience, had so much trouble with compiling large codebases.
On the bright side, the larger a codebase is, the more popular it often is. And the more popular, the more likely someone has listed the EXACT dependencies.
For the current version, it's not too bad if you're on a "popular enough" OS. For example, Firefox has a bootstrap.py script for Linux which will install the right packages on either Ubuntu or Fedora Core. See https://developer.mozilla.org/en-US/docs/Mozilla/Developer_g...
"Middleboxes" is a pretty chicken shit definition of what are essentially appliances that, at best, allow companies to spy on employees and at worst enable despotic governments to target and crush dissent. My security should not have to wait for the devices that seek to violate it.
Most of these middleboxes aren't actually man-in-the-middle attacking TLS. Instead they are enforcing aspects of the protocol, as a kind of firewall. Sometimes this is silly fluff, like preventing the BitTorrent protocol from running over port 443.
But sometimes it is useful: after Heartbleed I wrote a TLS-inspecting firewall that blocks heartbeat records, and I know it really helped get some customers out of a deep hole (they were unable to upgrade the impacted software itself).
Often it is something in the middle, like firewalls that look at SNI and X.509 certificates to block "bad" domains associated with phishing and other abuse.
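The core of a heartbeat-blocking filter is genuinely small. A Python sketch (not the poster's actual firewall), assuming whole TLS records arrive in one buffer:

    import struct

    HEARTBEAT = 24  # TLS record content type registered by RFC 6520

    def drop_heartbeats(data: bytes) -> bytes:
        # Each TLS record starts with: type (1 byte), version (2), length (2).
        # A real firewall must also reassemble records split across TCP
        # segments and cope with malformed lengths; this sketch does not.
        out = bytearray()
        i = 0
        while i + 5 <= len(data):
            ctype, _version, length = struct.unpack_from("!BHH", data, i)
            if ctype != HEARTBEAT:
                out += data[i:i + 5 + length]
            i += 5 + length
        return bytes(out)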
"Spy on employees" = Monitor the network activity of a sensitive, private business network.
I'll give the same spiel I give every time proxies come up: I am a donor to EFF, friend of privacy and the 4th amendment and I love the ACLU. I also recognize the legitimacy and importance of MITM proxies that inspect the content of SSL on a corporate network. It's not despotic to deploy company assets with company root CA certs that proxy TLS traffic to make sure a random employee isn't uploading credit card data to Box.
> make sure a random employee isn't uploading credit card data to Box
Except you aren't making sure, and there are many ways sensitive data can be exported. Let's be honest with ourselves here, these measures are often implemented under the guise of security but really are just liability and risk reduction approaches. The practical security benefit is very low IME, especially considering the burdens these measures put on employees' ability to work their best (and, as this article points out, burdens on others as well).
Which is harder for a call center employee or server admin?
1) Upload a file via GDrive or Box
2) base64 encode the data and send it to a remote DNS server over the course of several days via TXT record requests
As another poster says, it's all about reduction of risk. It's about balance. And on a corporate network that owns the workstation and server configuration, running a root CA cert and installing it in a gold load is pretty easy for the level of inspection and prevention it provides.
This is not the end all of security, either. It's one of many steps.
I think it's mostly security theater without significant benefits. There are many parallels with the TSA here. It's not about ease of implementation as much as it's about employee trust. We shouldn't pretend that it doesn't also scoop up personal communications. Or you can say absolutely no personal communications using the computer you use for 8 hours a day if you're that type of person. Many SMBs survive without these measures, yet somehow they're sold as requirements by those with IT departments and the means. I hope "byopc" and remote work become more popular.
>Or you can say absolutely no personal communications using the computer you use for 8 hours a day
Exactly. It's a company resource. Most people seem to have / can afford a mobile device with cell/Wi-Fi connectivity. Why should you feel so privileged to go to reddit, Gmail, Box, etc. on a bank network?
Email attachments and a few other channels account for almost ALL nonmalicious leaks and MOST malicious leaks. Seems pretty good to me.
>Let's be honest with ourselves here, these measures are often implemented under the guise of security but really are just liability and risk reduction approaches
This is a fundamental misunderstanding of security. It is ALWAYS about risk and liability reduction.
It isn't always, but when it is, it's because the costs outweigh the benefits. Usually this is in the form of lack of employee flexibility to do the best for the company (i.e. red tape and hoops become an impediment). However, in this case the cost is employee privacy. You might argue that they have none and spying on all traffic is reasonable, but many times these policies encourage employees to use workarounds that are even more dangerous just so they can have a modicum of privacy (e.g. alternative and less-vetted software packages, non-company hardware, etc).
Having worked defensive infosec for almost 2 decades, the privacy-invasion-leading-to-insecure-workarounds scenario is not my experience. (US-based, I'll grant. I'm sure my experience would be different in Germany, for example.)
As long as the inspection was transparent (which it was until the TLS 1.3 discussion started), employees mostly didn't know they were being monitored, and they didn't care even when they learned they were. Folks who got caught downloading porn or running their side-business over the work network were surprised that we were looking for that stuff, but no one felt like their privacy had been violated.
I wish browsers would visually distinguish between the vanilla trust store from the OS or browser vendor and the current trust store as altered by the system owner. Maybe the APIs to the various certificate stores can't tell you this? But in practice there are probably some certificate qualities that are "really really good" at indicating that this connection is trusted on this computer but would not have been trusted by a newly installed Windows/macOS/Linux/Firefox/Chrome system.
If you are going to spy on me, please disclose when and where. AFAIK, with HSTS sites they generally don't interdict the traffic (not because they don't want to, but probably because they can't).
HSTS only says connections must be encrypted, not with which cert... you're thinking of a CAA record. Although a CAA record doesn't mean much if your adversary controls the DNS server.
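For concreteness, one of each (example.com and the issuer are placeholders). HSTS is a response header the browser enforces; CAA is a DNS record that CAs check at issuance time, not something clients validate:

    Strict-Transport-Security: max-age=31536000; includeSubDomains

    example.com.  3600  IN  CAA  0 issue "letsencrypt.org"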
Well actually I think you mean... no just kidding. Thanks for the correction. I thought CAA records would also tell clients if a trusted root was also trusted for that domain.
Data Loss Prevention solutions are the only class of security products I can think of that attacks the problem in a worse way than antivirus does... That says a lot.
All the time spent implementing those "solutions" would be better spent deploying proper access control, so that careless users who might inadvertently upload confidential data to Box don't have access to the information in the first place. As a bonus, it also protects you better from malice, not only stupidity.
I think middleboxes also just includes network load balancers. A lot of places do TLS termination at a load balancer and plain HTTP inside the data center. These load balancer appliances are probably where a lot of these issues come up. And they aren’t there to spy on you.
Load balancers tend to be used on the server-side though, where the servers behind the load balancer serve web API's and such. It's fine if a TLS tunnel terminates at the edge of your private network if what's behind it is aware of this — i.e., the servers within the network serve plain HTTP by design and delegate transport layer security to the load balancer. TLS 1.3 doesn't break this.
These middleboxes are a way to spy on HTTPS connections between browsers and servers (where server can mean the load balancer at the edge of your server cluster), without having to ensure every device on the local network has spyware installed on it.
This article doesn't explain why they have so much trouble with versioning a protocol that is quite a lot simpler than many IP protocols that work. They shouldn't need GREASE, because anybody implementing TLS should be testing against a dedicated server, identified in the RFC, that connects as a client and jiggles all the knobs.
What, no conformance testing address is in the RFC? There's your problem right there.
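For what it's worth, GREASE is essentially that knob-jiggling baked into every handshake rather than a dedicated test server. A sketch of the reserved values (per draft-ietf-tls-grease, later RFC 8701):

    # Clients sprinkle these into cipher suite, extension, and version lists;
    # a compliant peer must ignore unknown values, so an implementation that
    # chokes on them gets caught immediately instead of years later.
    grease_values = [0x0A0A + i * 0x1010 for i in range(16)]
    assert grease_values[0] == 0x0A0A and grease_values[-1] == 0xFAFA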
To the conspiracy-minded among us, it is clear that "some people" don't want progress on encrypted communications, and know just what percentage of devices need to be intolerant of progress to stall adoption of fixes. Making browsers treat dropped connections as part of the protocol was very convenient for "those people". I.e., this is not (significantly) a matter of incompetent implementers, this is an enemy attack. Deployment plans that assume good faith by all parties fail under enemy attack.
As with everything cryptographic, a threat model is essential. Deployment is as much a part of the system and attack surface as the ciphers.
> What, no conformance testing address is in the RFC? There's your problem right there.
The vast majority of RFCs don't have conformance testing suites. IETF focuses on interoperability testing, which demonstrates that independent implementations work together for the major use cases; it doesn't test all the edge cases (such as future version negotiation) that a good conformance test would.
Creating and maintaining a conformance test is an order of magnitude more work than creating or maintaining a specification.
Someone has to be willing to expend the effort, and that usually means it has to make economic sense, and it rarely does.
And even when it does, in order to gain funding it often has to be licensed on commercial terms, which puts it out of reach of a lot of open source projects and smaller commercial implementors. (Even many large companies wouldn't pay for a test suite unless customers start putting it in RFPs.)
NIST used to maintain a whole bunch of free conformance test suites (CGM, COBOL, FORTRAN, PHIGS, POSIX, SQL), but the US government decided to stop paying for that.
The WHATWG and the W3C have been moving in a very different direction, treating testing as a major part of spec development, because the goal is to get implementations of the specifications, and how do you determine that if not by testing?
https://blog.whatwg.org/improving-interoperability is a post from the WHATWG side about this, and the outcome isn't just the direct "it's easier to write code against tests" but also "it's easier to notice when the spec changes" (because you now get a failing test).
Usually IETF WG members do testing and implementation (they run their modified version somewhere), but no information about that becomes part of the standard.
The protocol supports versioning just fine. There's an entire negotiation phase in the handshake just for that. The problem is there are too many broken implementations of TLS 1.2 out there.
It's the "fail closed" principle in action: if I don't understand it, it must be malicious, so the connection should be rejected as swiftly as possible.
Also seen in firewalls which drop all ICMP packets ("the only real-world use of ICMP is ping floods, right?"), breaking PMTUD.
But this isn't fail-closed. The specification allows for newer versions. The problem is, you are supposed to spit back the version you actually support instead of disconnecting. I don't understand how this can be interpreted as anything but non-compliance with the standard.
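A sketch of the difference, modeling versions as (major, minor) pairs the way the wire format does (TLS 1.2 is (3, 3); both helpers are hypothetical):

    SERVER_MAX = (3, 3)  # the highest version this server implements

    def compliant_negotiate(client_version):
        # Never abort just because the client offers something newer:
        # answer with the best version you support and let the client
        # decide whether to continue.
        return min(client_version, SERVER_MAX)

    def broken_negotiate(client_version):
        # The ossifying bug: treating "newer than me" as "malicious".
        if client_version > SERVER_MAX:
            raise ConnectionError("unknown version, dropping connection")
        return client_version

    assert compliant_negotiate((3, 4)) == (3, 3)  # a future client still works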
I would say that technically it's compliant, because there's nothing saying a server can't tear down a connection whenever it wants.
Our InfoSec friends are rightfully suspicious of 'weird' looking packets and data from clients. It's one of the few ways to catch/stop zero day vulns. It does make things difficult when legitimate traffic is caught in the crossfire but such is the nature of most security practices.
Microsoft's Skype for business servers block(ed?) ICMPv6. That was a fun one to track down when there was an MTU issue on our network, especially when their diagnostic tool claimed there was a 403 error from the SIP endpoint!
If the team had good soft skills, they might have felt comfortable asking questions about edge cases. And they might have put in the work to make sure they understood the users' goals. Instead they probably just wrote code to pass some conformance test and patted themselves on the back for being very technically correct.
Reminds me of the OpenGL 2.0 version issue. So many games and programs just checked the minor version number, as OpenGL had been stuck in 1.x land for over a decade.
So when drivers suddenly started reporting 2.0 rather than 1.3 or 1.4, the flawed version check failed and reported that you didn't have a modern enough OpenGL implementation.
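The bug boiled down to something like this (a sketch; real code parsed the string returned by glGetString(GL_VERSION)):

    def modern_enough(version):            # version like "1.4" or "2.0"
        major, minor = map(int, version.split(".")[:2])
        return minor >= 4                  # flawed: assumes major is always 1

    def modern_enough_fixed(version):
        major, minor = map(int, version.split(".")[:2])
        return (major, minor) >= (1, 4)    # compare the whole pair

    assert modern_enough("1.4") and not modern_enough("2.0")          # the bug
    assert modern_enough_fixed("1.4") and modern_enough_fixed("2.0")  # the fix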
As someone who has debugged many weird TLS problems, I'm not at all surprised. The protocol documentation is pretty hard to use, so I'm not surprised that server or proxy implementations would bail out on unexpected input, and that they would only use existing clients as their base of expected input.
They essentially can via the WebRTC API, albeit in a more cumbersome way. There's nothing inherently insecure about it, and the onus rests on the serving party and the browser itself to ensure no foul play is happening.
If you allow a browser to open sockets to random places, it's the exact same principle! The onus is on the listener to ensure that the people communicating respect the agreed upon protocol.
Then every browser could be used to send spam. It would be horrible. Imagine that every person who visits a compromised page immediately started sending spam emails over smtp. Or spam irc messages.
Generally it's not a problem, as Flash needs to check a file called "crossdomain.xml" that is served from the destination server, which specifies how Flash can communicate with it.
Yes I'm aware (WebRTC actually uses DTLS, SCTP, RTP, RTCP, ICE, STUN, etc depending on what it's doing). My point is about the objection against being able to "open sockets" in the post I'm responding to...
Being able to "open a socket" kind of implies you actually get the socket, not just "Oh, if the far end negotiates the protocol we've pre-agreed then you can also send bytes over it". Web sockets "open a socket" in the sense WebRTC does too, but of course that isn't what you want either.
Only Flash actually has an API like BSD sockets where you can make a TCP connection and send an arbitrary protocol over it. Hence, since they want to test protocol compatibility, they used Flash. Doubtless if Flash didn't exist they would provide a handy Python program or something, and 99% of visitors would never run it.
This is always the proposed answer whenever someone wants to introduce X potentially insecure API. It is lazy IMO. It requires the potentially uninformed user to make a critical decision. Plus, if we introduced every insecure-but-we-prompt API proposed, the user would be inundated with prompts and it would reduce their productivity.
Forget impacting their productivity, we've already seen what happens when you defer these questions to the user. They always hit yes or okay. It's like it's not even there; people treat mindlessly accepting prompts as just how you're meant to use a computer.
Flash requires you to install a socket policy daemon[1] that typically listens on port 843 (which is why the MITM test requires that port 843 be open). When you request a raw socket, Flash first connects to port 843 and requests the policy file, which will tell it whether it's allowed to use raw sockets with that server.
I've set one of these up before, and I'm not really a fan.
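If memory serves, the policy file Flash fetches from port 843 looks something like this (the wildcard domain and port range are placeholders; a sane deployment would be much narrower):

    <?xml version="1.0"?>
    <cross-domain-policy>
        <allow-access-from domain="*" to-ports="1024-65535" />
    </cross-domain-policy>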
You probably don't run arbitrary untrusted applications on your computer but you execute arbitrary untrusted Javascript all day when visiting web sites. To make this seemingly crazy thing work at all, some restrictions have to be imposed on that untrusted JS code.
Since you wouldn't have access to cookies, I don't think it would be the end of the world. However, you would have access to the user's LAN, which would likely make many local devices' vulnerabilities exploitable. All in all, it is probably a bad idea with the current state of IoT, home routers, printers...
I'm not sure users would understand what they are being asked. Plus, whenever someone wants to do something dangerous in the browser, they say "Oh, we will ask the user". If everyone got their way, users would be inundated with dialogs.
Sigh. I'm leading an effort to clean up TLS company-wide and it's a nightmare.
I get why some people want middleboxes but honestly, I'd rather TLS1.3 take the opportunity to clean things up instead of coming up with workarounds for fallback.
The downside of hosting things all in the same domain is that cookies are shared between them, so a vulnerability in one site (e.g. XSS) leads to compromise of all sites. Choosing different domains means they are sandboxed and safe from each other.
Any domain name could be used to host porn. But not any domain name can get linked from a cloudflare blog. I think the fact that it's linked from cloudflare's blog should indicate that it's fine.
> The downside of hosting things all in the same domain is that cookies are shared between them, so a vulnerability in one site (e.g. XSS) leads to compromise of all sites. Choosing different domains means they are sandboxed and safe from each other.
I believe this is incorrect. Cookies should only be shared (by default) if the domain matches exactly, which is why it's best practice to use a www subdomain instead of the domain alone. For example, www.example.com cookies will not be shared with test.example.com by default, though this can be enabled. See here for a fuller explanation: https://stackoverflow.com/a/23086139
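Concretely (session=abc123 is a placeholder):

    Set-Cookie: session=abc123
        (host-only: sent back only to the exact host that set it)

    Set-Cookie: session=abc123; Domain=example.com
        (sent to example.com and every subdomain, including test.example.com)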
> There is no signal to developers that an implementation is flawed, and so mistakes can happen without being noticed. That is, until a new version of the protocol is deployed and your implementation fails, but by then the code is deployed and it could take years for everyone to upgrade.
> If a protocol is designed with a flexible structure, but that flexibility is never used in practice, some implementation is going to assume it is constant.
Perhaps this is a naive thought. I remember several years ago Mozilla announced its experimental browser rendering engine (Servo) passed Acid 2 tests [1]. So why can't we come together and create such standard tests so that server implementers and middlebox implementers are encouraged to include them as part of their QA score?
I know this is optional. There is no mandate, but it sounds like a start. Isn't there some network vendor organization the major players are part of?
To me it sounds like re-enabling the fallback, i.e. retrying with TLS 1.2 after TLS 1.3 fails for a connection, would be the best solution to gradually upgrade all devices.
The article says "Browsers did not want to re-enable the insecure downgrade and fight the uphill battle of oiling the protocol negotiation joint again for the next half-decade." So I guess the natural solution to that is making the TLS protocol a User-Agent-esque nightmare of compatibility patches and pretending to support something you don't which surely WONT come back to bite them in the ass years down the line...
Yes, I read it, but I think a clean new TLS 1.3 with the old retry fallback system would still have been the best way to establish TLS 1.3 without breakage. POODLE was solved by SCSV, and that could be part of TLS 1.3 again. Even a disaster that makes us turn off TLS 1.2 five years from now is a "good" outcome, because by then the middleboxes will have been upgraded.
Why do you assume that the middleshits would get upgraded?
If fallback works, fallback works. End of story. 80% of operators (governments, corporations, SOHO setups) will explicitly prefer not touching things if at all possible. They won't fix their "not broken" network.
I don't get the crisis. If 3% disconnect with 1.3, then retry with secure 1.2. It's only a performance penalty for those 3%, and obviously less than the let's-talk-about-it penalty.
The article calls this "insecure downgrade" - which is not just the performance penalty you say it is, it's also a security penalty that's been previously (ab)used via POODLE. It's also code that's been removed and would need re-implementing, re-testing, etc.
The only crisis the article mentions - in passing at that - is when they tried to roll out 1.3 initially and things broke outright at alarmingly high rates. Whoever wrote the article appears to be a fan of discussing things before they turn into crises.
It means anyone with a MitM position gets to decide you don't have TLS 1.3. At that point, why even have TLS 1.3? It's not protecting you any more than TLS 1.2 is.
> It means anyone with a MitM position gets to decide you don't have TLS 1.3.
So what? 1.2 is not weak. The connection is still secure.
Also, the MITM still gets to decide whether to have TLS or not. Then "why even have TLS?". If the MITM is blocking a protocol (or a higher version, which is equivalent), then there is nothing to be done.
HSTS was created specifically to prevent the MITM from downgrading HTTPS to HTTP.
As for MITM blocking a protocol, that is a noticeable situation, and one that does not give the attacker any control over the cryptography used on sensitive data.
A downgrade attack is very different from blocking a protocol. The user doesn't notice, and the attacker gains some control over the cryptography used.
In the end, what sense does it make to have a TLS version an attacker can opt out of? All you get is defence against passive MitM. As long as TLS 1.2 keeps us safe against those passive MitM, it makes no sense to rush TLS 1.3.
Especially because that rush would really hurt when it turns out TLS 1.2 is insecure, at which point the rushed solution becomes vulnerable to all active MitM attacks.
Now consider, what level of access to infrastructure only allows for passive MitM?
> The user doesn't notice, and the attacker gains some control over the cryptography used.
In my case the user will, because the TLS client won't accept an insecure version request. Connection broken. The client will notify the user that the server is still using an insecure version.
I think this might be the misunderstanding: a good client/server never establishes/accepts insecure versions.
Yes, the MITM gets to pick, but if it picks an insecure version, there is no connection.
POODLE was never an attack on the protocol, but on poor implementations.
One misunderstanding is that you are accepting as fact that TLS 1.2 is secure, when it's entirely plausible that one or more state actors already know that not to be the case.
There's no magic light that goes on when the NSA breaks TLS 1.2 so that we know to stop trusting it.
I could go back a few years and ask you the same question: "How's downgrading to secure SSLv3 insecure?" You could say "it wasn't", and then you could suddenly be facing a bunch of CVEs and a cute acronym. What makes 1.2 so special that truly, this time, there won't ever be any problems found with it? Especially when enough potential problems were found to motivate TLS 1.3?
> Regarding sslv3, why would client ever use it, downgrade/POODLE or not ?
At the time, downgrade attacks specifically forced SSLv3. So yes, why would a client ever enable downgrade attacks - intentionally, no less - is a very, very good question. Of course, we're only talking about downgrading to 1.2 these days - but we've already had the history to show us why downgrades are a bad idea, so why repeat that history?
> So what? 1.2 is not weak. The connection is still secure.
For now. So was SSLv3, until it wasn't. At the bare minimum, it's a larger attack surface of extremely security-sensitive code - which is worrying in its own right - and an intentionally written security vulnerability which, while "not weak" today, may become weak or even exploitable in the future.
Completely future-proofing things is a losing game, of course, but seeing as insecure downgrade moots the point of 1.3 - strengthening security - it seems reasonable to try and figure out how to do things right "now" - or failing that, "before it becomes a crisis."
> Also, the MITM still gets to decide whether to have TLS or not. Then "why even have TLS?". If the MITM is blocking a protocol (or a higher version, which is equivalent), then there is nothing to be done.
Does your browser automatically downgrade HTTPS to HTTP if the former fails? I hope not! Connections failing when MITMed by unknown third parties is a feature when you're logging in to your bank. This single feature is basically the entire point of TLS, HTTPS, and the entire CA infrastructure. That's "why even have TLS". There is something to be done: Defer connecting to your bank until you're no longer on your current, terrible MITMing wifi connection. Maybe pick one of the other 20 hotspots on your connection list. As currently implemented, this requires some minor, manual intervention, on the part of the user.
TLS 1.3 with insecure downgrade accomplishes this no more effectively than 1.2. It's a waste of bits. It's why insecure downgrade to SSLv3 is gone - it was mooting the entire point of TLS.
> So yes, why would a client ever enable downgrade attacks - intentionally, no less - is a very, very good question.
No I meant a good client/server would never use sslv3.
> Completely future-proofing things is a losing game
I am not assuming any such thing. The moment 1.2 is known to be broken, every client/server should mark it insecure and never establish/accept a 1.2 connection.
Assuming a good client and a good server are POODLEd, then no TLS connection will be established.
> Defer connecting to your bank until you're no longer on your current, terrible MITMing wifi connection.
Which a good client already does without any anti-POODLE fix.
I think this might be the misunderstanding: a good client/server never establishes/accepts insecure versions. If they get POODLEd, then no TLS connection.
> No I meant a good client/server would never use sslv3.
They did. They don't now, but they did.
Eventually, a good client/server will never use TLS 1.2.
My point is that it's the same situation - just at different points in time, with different protocols.
> The moment 1.2 is known to be broken, every client/server should mark it insecure and never establish/accept a 1.2 connection.
That's basically what happened with SSLv3. The moment SSLv3 was known to be broken, every client/server marked it insecure and never established/accepted a SSLv3 connection again. This took the form of hotfixes, CVEs, "crisis", etc. - after all, that's just when browser devs found out it was broken. Frequently, criminals and security agencies will find out before them.
To avoid repeating the exact same history of hotfixes, CVEs, and "crisis", you can patch out 1.2 proactively - before it's "known" to be vulnerable.
But either way - proactive or reactive - we end up in a future with TLS 1.3+ and no TLS 1.2 downgrade support eventually. Does releasing 1.3 early with downgrade support help accomplish that future? The TLS and browser devs seem to think that's a distraction, and that plain old TLS 1.2 will do just fine until they can make TLS 1.3 just work, without the downgrade support.
Sure, it delays the release of "TLS 1.3" a bit, but so what? It's not delaying the release of "secure against any future TLS 1.2 exploits" - the part that actually matters from a security standpoint.
And if I'm reading the "Making TLS 1.3 work" section of the article correctly, they're already nearly ready to roll out TLS 1.3 without downgrades: a 0.2% higher success rate with "Experimental changes" (TLS 1.3 without downgrade support?) than TLS 1.2 on Chrome somehow (possibly just within the sampling error rate) and a 0.05% drop in success rate in Firefox (again possibly within the sampling error rate).
> I think this might be the misunderstanding: a good client/server never establishes/accepts insecure versions. If they get POODLEd, then no TLS connection.
We agree there - but a better client/server drops support for "secure" versions before they become known to be "insecure".
Looks like we are making the case for different things.
You are arguing that everyone should update to the latest version ASAP because, as you say, the older version is more vulnerable. I don't think that's true; case in point: the macOS root bug. What if criminals and security agencies have broken 1.3 but not 1.2? Don't fix it unless it's broken. Fix it when it's broken or predicted to be broken.
But that's different from what I was arguing for: some poor 1.2 implementations aborting the connection when a 1.3 client connects is not a problem, as the article says. For my complete arguments, see my previous comments here.
> But either way ... the downgrade support.
What downgrade support? Are you saying 1.3 browsers will only connect to 1.3? Because there is no way every/most 1.2 servers will just transition to 1.3 together. Before 1.3 reaches 99.9%, some will be using 1.3 and some 1.2. It just cannot be avoided.
The argument is not that we should update ASAP. The argument is that we should not reconnect using 1.2 when 1.3 fails.
There is still the normal version negotiation mechanism in TLS that would allow willing clients to use 1.2 when the server does not support 1.3.
The issue lies with servers that do not adhere to the standard regarding version negotiation. Your proposed solution (the 'insecure fallback') is about making a client deal with this in a way that is insecure when TLS 1.2 is insecure.
Our proposed solution is to change version negotiation so the faulty servers continue to work.
As you noted, 1.2 is still secure. So at this moment, we don't need 1.3 (save for advantages like 1-RTT handshakes).
Thus there is no security benefit to adopting 1.3 earlier.
Now, if 1.2 ever becomes insecure, insecure fallback means MitM can force 1.2 and then attack, even if both server and client are willing and capable of using 1.3.
It is true that when 1.2 is insecure, neither clients nor servers should be willing to use it.
However, slow updating or 'compatibility' might see servers and clients still support it.
It is those servers and clients that are vulnerable under insecure fallback.
Under normal version negotiation the only weak systems are those that only support 1.2 (presuming higher versions are secure).
Thus, if 1.2 were broken, insecure fallback leaves more legacy systems vulnerable than normal version negotiation does.
In my personal opinion, non-compliant servers are broken, and we should not try to support them. But I understand how reality makes that infeasible.
If by normal version negotiation you mean RFC 7507 / TLS_FALLBACK_SCSV, then even that won't work for legacy systems last updated before April 2015 (the publish date of the RFC).
But nonetheless spec authors picked compatibility over security. I would not. Let the legacy insecure TLS servers be blocked by the browsers, I say.
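For reference, the RFC 7507 mechanism is tiny: a client retrying at a lower version adds the signaling cipher suite value 0x5600 to its hello, and a server that supports something newer than the offered version aborts with the inappropriate_fallback alert (86). A sketch of the server-side check (check_fallback is hypothetical; the constants are from the RFC):

    TLS_FALLBACK_SCSV = 0x5600   # RFC 7507 signaling cipher suite value
    INAPPROPRIATE_FALLBACK = 86  # alert code the server answers with

    def check_fallback(client_cipher_suites, client_version, server_max):
        # If the client signals fallback while offering less than the
        # server's best version, the "failure" that triggered the retry
        # was forged or transient, so the server refuses to proceed.
        if TLS_FALLBACK_SCSV in client_cipher_suites and client_version < server_max:
            raise ConnectionError("alert 86: inappropriate_fallback")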
I get your position now. Thanks for taking the time to explain it.
> But nonetheless spec authors picked compatibility over security. I would not.
Most of the compatibility compromises don't seem to sacrifice security - except perhaps for increased attack surface and a more complicated spec (though I'm not sure that leads to a more complicated implementation, so much as more thorough coverage of any downgrade dance?)
If there are more security sacrifices to achieve compatibility, I agree with you - that's bad.
Except it is INSECURE:
> However, insecure downgrades are called insecure for a reason. Client downgrades are triggered by a specific type of network failure, one that can be easily spoofed. From the client’s perspective, there’s no way to tell if this failure was caused by a faulty server or by an attacker who happens to be on the network path of the connection. This means that network attackers can inject fake network failures and trick a client into connecting to a server with SSLv3, even if both support a newer protocol. At this point, there were no severe publicly-known vulnerabilities in SSLv3, so this didn’t seem like a big problem. Then POODLE happened.
This is exactly what happened with POODLE and SSLv3, and it's in the article. Attackers were successfully downgrading to the vulnerable SSL when a TLS connection was attempted.
In the future when TLS1.2 is considered insecure, we do not want this workaround in place.
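The hole is visible even in a sketch (tls_handshake and NetworkError are hypothetical stand-ins):

    class NetworkError(Exception):
        pass

    def tls_handshake(host, version):
        ...  # stand-in for a real handshake attempt

    def connect_with_insecure_fallback(host):
        try:
            return tls_handshake(host, version="1.3")
        except NetworkError:
            # The client cannot tell a broken middlebox from an attacker
            # injecting a TCP RST. Either one lands the user on the older
            # protocol, even when both ends speak 1.3.
            return tls_handshake(host, version="1.2")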