Hacker News
Removing Old Versions of TLS (blog.mozilla.org)
217 points by edmorley on Oct 15, 2018 | 91 comments



Side note unrelated to TLS.

Using telemetry, Mozilla was able to precisely measure how many connections are actually established with TLS 1.0 and 1.1. Without numbers, they'd have been flying blind, making decisions with no rational basis. That's why I personally choose to leave telemetry on in applications that I trust. It helps the devs make sensible, data-driven decisions.


> Using telemetry, Mozilla was able to precisely measure how many connections are actually established with TLS 1.0 and 1.1. Without numbers, they'd have been flying blind, making decisions with no rational basis. That's why I personally choose to leave telemetry on in applications that I trust. It helps the devs make sensible, data-driven decisions.

Some unrelated advice needed: what would you do if the business side brings up Google Analytics data showing that Google Chrome, IE, and Safari are the three main browsers people use, and that Mozilla Firefox's usage share is under 5% for a particular website? I tried to point to my own use case: I browse with uBlock Origin, which means that when I visit the website, I don't show up in Google Analytics at all.

How do we make decisions based on self-reported data?

You'd think institutions funded by public monies would be more sensitive on issues like this but my pleas fall on deaf ears.


In this specific use case, if you're having trouble convincing people, maybe do the analysis based on access logs instead (depending on your traffic volume, you might only need to sample a couple of days, depending on how much access log history you store).

EDIT: I'm mostly suggesting this because it sounds like all you need is proof that your Google Analytics data might be incorrect.
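One way that suggestion could be sketched in Python, assuming the common "combined" log format where the User-Agent is the final quoted field (the regex and family heuristics here are deliberately crude and purely illustrative):

```python
import re
from collections import Counter

# In combined log format the User-Agent is the last double-quoted field.
UA_RE = re.compile(r'"([^"]*)"$')

def browser_family(ua: str) -> str:
    # Order matters: Chrome's UA string also contains "Safari",
    # and Edge's contains both "Chrome" and "Safari".
    if "Firefox" in ua:
        return "Firefox"
    if "Edg" in ua:
        return "Edge"
    if "Chrome" in ua:
        return "Chrome"
    if "Safari" in ua:
        return "Safari"
    return "Other"

def tally(log_lines):
    """Count requests per browser family from raw access-log lines."""
    counts = Counter()
    for line in log_lines:
        match = UA_RE.search(line.rstrip())
        if match:
            counts[browser_family(match.group(1))] += 1
    return counts
```

Comparing these counts against the Google Analytics numbers for the same period would show roughly how many visitors (adblocked or otherwise) GA never sees.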


I find it ridiculous that an adblocker blocks access even to first-party analytics. /piwik.php is on the EasyPrivacy list, which is one of the default uBlock Origin filter lists.

I really didn't expect my adblocker to do that when I installed it.


I find it ridiculous that first party analytics requires extra requests at all - there's enough log information from my actual page requests, you don't need to make extra ones. If your logging is so inefficient, it should be blocked.


It can make sense if they would like to know things such as how long the page load took. That is not something you can measure by only looking at the request logs, especially if the page uses a lot of javascript.


I'm always surprised people are willing to give random developers that much control over their browser when installing extensions like that.

I've had so many things break because someone somewhere didn't think a request was valid enough and blocked it and it was put into a common blocklist. WebRTC, several fonts, first party analytics, even whole services like Google shopping were blocked entirely because "they are ads" or they were "trackers". We even had to modify an app which was using css modules one time because class names were compiled down to a hex encoded hash of the css properties, and any class names beginning with `ad` were blocked by EasyList at one point...

It is giving an uncomfortable amount of power to a very small number of developers. And it ends up ruining efforts to avoid 3rd-party analytics, because not only are first-party analytics harder to implement and maintain in many cases, but they get blocked just the same for very innocuous reasons, and give worse insights in some ways.

Just look at the list of what EasyList considers "unacceptable data to gather" in first party analytics [0]. Things like the useragent, timezone, IP address, and language are considered "personal information" and should be blocked even in first party scripts with EasyList! IP address or useragent alone literally makes all first party tracking impossible...

[0] https://easylist.to/2011/08/31/what-is-acceptable-first-part...


> I'm always surprised people are willing to give random developers that much control over their browser

I will say this is why people install content blockers: to avoid giving control to the random developers behind countless websites, which have shown that there is no limit to the amount of data they will try to gather without a hint of informed consent.

On the other hand, content blockers are opt-in and people are free to install or not after reading the description/documentation and further researching them.


This discussion got bigger than I expected. I just want to take a moment to thank you not just for your hard work but the trust you have brought in uBlock Origin. From the bottom of my heart, thank you.


I get that, and I fully support that choice, and I agree with that feeling.

But it's the lack of choice in terms of what is blocked, and the limited ability to understand that it's the content blocker that is potentially breaking the application, which is what I'm lamenting here.

I can't overstate how much I want to respect the user if they don't want some aspects, or some features, or anything which they just don't like, or which they feel can impact their privacy. (I'll argue with you about the ethics of blocking ads but still using a service, but that's another point entirely!)

My issue is that there is a group of people deciding what things are allowed and which aren't, and they are necessarily going to have to make trade-offs (I don't think all of amazon.com should be blocked just because they can track your purchase history by virtue of handling your purchase!). But for the most part the trade-offs made are opaque to the user and the site owner, and the UX is lacking in showing the user that the content blocker can be the cause of the problem. Not to mention that, by default, these lists are very powerful tools able to pretty quickly harm good actors (and this isn't something I'm singling out content blockers for; antivirus vendors, ISPs (AKA net neutrality), and even social networks all have this issue to some extent).

There might not be a good answer, and I sure as hell don't pretend to have it, but I'm just wishing for a way that users can make informed decisions about what they want to block, rather than have someone decide for them. Without some kind of give and take here, the arms race is just going to continue, and as it escalates, good actors (of which I'd like to think I'm one) will get increasingly pushed away as they are blocked just the same as bad actors, leaving them with the options of "play dirty and survive" or "die peacefully".

I'm sad to say that I ended up taking a page out of the "play dirty" book and got a new domain name for the app in question to get around the content blockers, but I'd do it again in a heartbeat, since I'm not abusing that information or power, yet I'm blocked as if I am. And while I feel dirty forcing my wants onto others, the ratio of "unwilling participants" to "those who actually wanted to use my app, but not one of its core features" is so unbalanced that I ended up ignoring the latter to save the former. And while I wish I could respect the wants of the latter group, if I engage there, the former will have the content blocked for them again, and I'll be right back to square one.


If you take all those pieces of information together you're very close to uniquely identifying the user, in a way which could be correlated with similar analytics from other websites if and when your analytics data is leaked or sold.

Sometimes people really mean it when they say they don't want to be tracked: they want to leave no footprint at all. Sometimes people just want the website to load as fast as possible by discarding as much Javascript and as many requests as they can get away with.

But don't forget the good old HTTP access logs, which give you all this information anyway without having to load extra scripts.


>But don't forget the good old HTTP access logs, which give you all this information anyway without having to load extra scripts.

But that's my point: whole domains have been added to EasyList under these rules, breaking the entire application, because the person who added them assumed they were tracking-related and not part of the application. People see XHR requests which save the current timezone (so the server can update calendar notifications at the right time) as "tracking", and break the application entirely because of it. They see a request they don't understand and block it because "it includes the useragent".

Take Google shopping for instance, an entire service shut off because "it's tracking", while Amazon or eBay or any other online shopping service isn't blocked.

If you really want that blocked, I'm more than happy to not serve you! I don't want to track you if you feel that using your timezone is over the line! But when your feelings toward that are pushed to who knows how many computers and now my site is broken for a large number of users, who are now contacting me to fix it, it becomes a problem.

To use an analogy, I don't have any issue with people that are allergic to peanuts. I don't want to try and force feed you peanuts, I want to make sure everyone knows I serve peanuts, and I want to give you the option of not eating at my establishment if you are allergic or simply don't like peanuts.

But if you start removing my establishment from maps, and start putting road blocks in front of my building preventing others from getting in, now it's overstepped a boundary.

I don't pretend to have a good answer, this isn't an easy problem to solve, but giving a handful of devs a "break this website for all adblocker users" button seems like a lot of power, and it especially hurts those who are trying to accommodate you.

If I didn't care, I'd just work around it, use server logs and send them to 3rd parties, lie and actively route around your attempts to block. And I'd not only make more money, but have to deal with less issues from users...

My hope is that the devs that believe these things can give more options to the user to decide for themselves and make informed decisions about the tradeoffs, rather than saying "this is bad, I blocked it for you".


Why do you need the user's current timezone? You say calendar events. Do you really want to mess with their calendar based on the latest timezone their computer reported? What if they access your site whilst on a weekend trip? Do you really want to then shift all their appointments for the next week around? I've seen people miss out on jobs because overeager software changed the interview event's timezone... As a general practice, don't fiddle with people's data without them knowing.


And this again is the core of the issue.

In the instance I'm talking about, the app was about reminders for taking medications. Having those go off at 3am local time because you flew to Hawaii for a week doesn't do you any good. Not to mention that the app tracked and helped tune dosages and timing, and needed information like when you woke up, your local time of day, the kind of medication, and a lot more.
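As a toy illustration of why a fresh timezone report matters for an app like that (this is not the actual app's code, just a sketch using Python's zoneinfo): the same 8:00-local dose maps to very different UTC instants depending on where the user currently is.

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo  # Python 3.9+

def dose_instant_utc(day: datetime, local_dose: time, tz_name: str) -> datetime:
    """UTC instant at which a dose scheduled for local wall-clock time fires."""
    local = datetime.combine(day.date(), local_dose, tzinfo=ZoneInfo(tz_name))
    return local.astimezone(ZoneInfo("UTC"))

day = datetime(2018, 10, 15)
at_home = dose_instant_utc(day, time(8, 0), "Europe/Berlin")     # 06:00 UTC (CEST)
on_trip = dose_instant_utc(day, time(8, 0), "Pacific/Honolulu")  # 18:00 UTC

# If the app keeps firing at the Berlin instant while the user is in
# Hawaii, the reminder goes off at 20:00 the previous local evening,
# 12 hours away from the intended 8:00 morning dose.
```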

Again, you are free to not give it, but that was a core point of the application, and trying to disable or break the application in the name of privacy is like cementing your door shut to prevent breakins.

Users aren't "tricked" into having timezone information used to track them, they aren't frustrated with the timezone feature, and the app was paid and had a privacy policy in place that prevented me from selling or using any of the information for advertising. And if users become frustrated with the feature, I can, as the developer, fix it for them. You cannot, and blocking requests that you don't understand in an app you don't use is exactly what caused all the headache in the first place. And when I went to try and get it unblocked, I was met with similar hostility, and told in so many words "too bad".

You don't always know better than the user, and you don't always know better than the developer.

In the end, I switched domains, and since then have managed to fly under the adblocker radar. I'm no longer involved with it, so I don't know if they managed another solution without me.


Hm, this does actually sound like a use case for timezone info.

However. I don't think that content blockers are third parties. They are part of the user agent. The choice to use a blacklist is up to the user. It is not a third party breaking your app. It is the user themselves.


But it's the user using a tool to do one thing (block ads) which someone has taken to an extreme (block all requests which could potentially be used for tracking, and treats different players differently depending on sentiment in a lot of cases).

To use a more controversial phrase, the "blocklist" groups are now a cartel which gets to decide which apps work and which don't, and you have no way of appealing or even reasoning with them in a lot of cases. They get to decide that the Amazon cookie is okay, but a Google font is not, that login-with-facebook isn't allowed, but login-with-twitter is okay. That my app which uses timezone information was crossing a line, but my competitor wasn't.

I'm more than happy to help work with those who want to protect their privacy. I genuinely agree with a lot of viewpoints and I work to protect my own privacy as well, but at some point things go too far and I feel a lot of these tools are throwing the baby out with the bathwater here, and are even causing a false sense of security as people think they are protected from everything when really they are hurting the good actors the worst while the bad actors continue to operate just fine.


> To use a more controversial phrase, the "blocklist" groups are now a cartel which gets to decide which apps work and which don't, and you have no way of appealing or even reasoning with them

As I’m one of these people who aggressively controls what I let my web browser load maybe I can answer some of your questions.

Facebook, Twitter, Google Analytics and tag manager are blocked outright, all the time.

If your site needs several different CDNs to load its assets, for seemingly no reason, then I generally leave; unless I need something from that particular page, in which case I give it about 2 or 3 goes of playing "which CDN serves the information I actually care about?".

Any domain you use that isn't obviously yours, a CDN, or obviously related to what's on the page (web stores get to load Shopify, for example) is blocked, because I assume it's just another advertising/analytics company.

Sites that load the DOM and contents into memory, but display a blank page until I unblock a bunch of JS are my pet peeve, and I make a note to never return to them.

> I feel a lot of these tools are throwing the baby out with the bathwater here

The carte blanche approach is used because it's faster and easier for me to find the minimal set of domains and resources to load than it is to get every website out there to not stuff their site full of trackers and unnecessary JS.


> If your site needs several different CDNs to load its assets, for seemingly no reason, then I generally leave; unless I need something from that particular page, in which case I give it about 2 or 3 goes of playing "which CDN serves the information I actually care about?".

> Any domain you use that isn't obviously yours, a CDN, or obviously related to what's on the page (web stores get to load Shopify, for example) is blocked, because I assume it's just another advertising/analytics company.

I'll generally agree with you (for the most part). With the exception of using a CDN to serve static assets that I control (like Amazon S3, for instance), I have no issue with you or anyone else blocking all requests that they don't like! (And hell, I'm fine with you even blocking first-party requests; since you are the one doing it, you'll know that it's the reason the site breaks.)

> Sites that load the DOM and contents into memory, but display a blank page until I unblock a bunch of JS are my pet peeve, and I make a note to never return to them.

But I'm talking about applications, not static web pages or documents. Complaining that an application needs a programming language is like refusing to turn on any computer that runs any code at all. The code is the application, and if you don't want to run JS, then you don't want to use the program. (outside of viewing some static assets like images, or possibly the minimal DOM put into the HTML for bootstrapping, there's literally nothing else).

> The carte blanche approach is used because it's faster and easier for me to find the minimal set of domains and resources to load than it is to get every website out there to not stuff their site full of trackers and unnecessary JS.

Again, I'm more than happy to let you block what you want; I'll even help support your use case if you want and it's possible.

But when you take that approach and push it to a large percentage of the web, auto-updating their browser plugins so that things that worked yesterday are broken today, and users have no real understanding of why; or worse, when you make wild assumptions about why I need data, backed by nothing but scaremongering (displaying warnings like "this site is trying to track you" when I offer the app in multiple languages...), then we have a problem.


Well, there's a strong argument to be made that plenty of people are building web applications for information that doesn't need to be a web app. For example: your generic company website does not need to be an SPA with every conceivable JS feature thrown in. You are only presenting me some information; HTML and CSS are more than adequate. Stop making things more complex than they need to be.

In these cases, I'm more than happy to break their stuff, as it's seemingly the only way to get them to pay attention.


But that's an entirely different argument, and it's one that always muddies up the conversation whenever I bring this up.

This isn't a blog that loads 50 trackers and tons of superfluous animations and video, this is an application which is designed to do things for the user. It's stuff like a music making app, or a medication reminder and tracker application that takes a lot of your personal information to help you find accurate dosages. It's stuff like games or offline-caching documentation browsers with IDE integration and powerful client-side search, or self-hosted home automation controller systems, or chat applications, or a barcode scanner app that checks for coupons for things that you scan in a web browser, etc...

And again, i'm fine if you don't want to run one aspect of it, you are the client and I can't technically control what you do, but complaining that the car won't work when you ripped the spark plugs out is dumb, and a mechanic that goes around taking out spark plugs every chance he/she gets so that people won't speed isn't helping.


Fundamentally it's a bit of a prisoner's dilemma situation, but with three actors. The relationship between website developers and adblock developers is necessarily adversarial - adblockers only exist because users disagree with the developers how the website should appear.

> start putting road blocks in front of my building preventing others from getting in

It's more like you've erected a billboard, and the Campaign Against Ugly Billboards have given some people AR glasses which edit out the billboard .. and a bit more of your storefront. It's not really in the same category as fraudulent Yelp reviews.

> If you really want that blocked, I'm more than happy to not serve you!

I think you'd be better off just accepting this. The user and their user-agent has a "right" to display a website how they want, including in ways that the developer didn't think of - and including ways that are broken. The user can break your site if they want. I appreciate that this causes problems when they've not realised that it's their choices that break it, or put in annoying requests ("why doesn't this site work in Lynx"). But fundamentally they're customers, customers are always annoying. Given your context of a paid app it sounds like it's worth asking them to disable the adblocker pre-emptively, but presumably you've tried that already and they haven't.

> Take Google shopping for instance, an entire service shut off because "it's tracking", while Amazon or eBay or any other online shopping service isn't blocked

ublock Origin blocks bits of Amazon within the website - the internal sponsored advertising. I only noticed when some of it flashed up and then vanished.


> I think you'd be better off just accepting this. The user and their user-agent has a "right" to display a website how they want, including in ways that the developer didn't think of - and including ways that are broken. The user can break your site if they want. I appreciate that this causes problems when they've not realised that it's their choices that break it, or put in annoying requests ("why doesn't this site work in Lynx"). But fundamentally they're customers, customers are always annoying.

It’s something at least akin to unethical to deliberately break something I’ve built and then come complain to me that it’s broken.

And sure, they might be would-be customers, but not all would-be customers are worth serving.

Just as customers have the right to buy from whomever they like, so too do sellers have the right to sell to whomever they like (barring certain narrow exceptions like racial discrimination).

It’s always the negative value customers that throw the biggest tantrums when they are told a business would rather not serve them. Or in the case of ad block users, not even customers just people that want something for nothing.


>It's more like you've erected a billboard, and the Campaign Against Ugly Billboards have given some people AR glasses which edit out the billboard .. and a bit more of your storefront. It's not really in the same category as fraudulent Yelp reviews.

But your analogy misses a key aspect, which is that the adblocker breaks the app entirely. It's not just hiding my storefront; it's preventing users from entering the store at all unless they completely disable the device. Often those same users don't realize that these blocklists update silently in the background, and that it's their adblocker breaking the application they have been using every day for years.

> I think you'd be better off just accepting this.

I think I worded that poorly. I do accept this. If you don't want to use an aspect, or want to render something your own way, I'm happy to let you! But I'm not happy when you start imposing those restrictions on others who don't know it's happening and don't know why core aspects of the app aren't working.

Informed decisions are good, blanket disabling or breaking applications because a developer "knows better" isn't.

Useragent issues I can work around, bugs can be fixed, and those who don't want that part of the app to work are happy. But I can't work around an adblocker blocking things for users who don't want them blocked. If I work around it, I'm then "tracking" those who are attempting to block it (and IMO should be called out for it), and if I do nothing, I lose users and paying customers who will move to competitors which, for some reason, are not blocked. (And I don't blame them; my app doesn't work, so why would you stick with it?)

> ublock Origin blocks bits of Amazon within the website

But back a year or so ago, it blocked the entire google.com/shopping URL, and the shopping.google.com subdomain (I haven't checked if it still does).

The entire service blocked because someone somewhere decided it wasn't worth it, but Amazon and eBay and AliExpress all only have some elements blocked because it's somehow different.

And while you can go in and re-enable those parts, it takes a long time and a good amount of technical know how of things like globbing and regex to whitelist it correctly. And then must be repeated on every device you use.


There is no guarantee that your first party tracker does not relay information to a third party via your web server. Furthermore, at least some of the data you list is considered personal information under GDPR. Block access if you want but do not collect personal data about me.

What I hate most is that sites I am willing to send money to (web shops) do these things, and send data to ad networks too, no less.


> There is no guarantee that your first party tracker does not relay information to a third party via your web server.

You have no guarantees at all, but that information is always sent across, and in many cases must be sent to even make a request. You literally can't prevent an IP address from being "sent" because it's needed to route the information back to you. Sending an "Accept-Language" header lets a website be translated into multiple languages. And virtually 100% of my users want times and dates in their local time, not UTC.

Just because it's potentially identifying information doesn't mean you can block it and pat yourself on the back as "stopping tracking". When you don't allow an IP address to be sent, you have just created a system where you can arbitrarily choose literally any request and block it under that rule. It's like making a law against breathing oxygen, then arresting anyone that you don't like for any reason because they are breaking at least that law.
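To make the point concrete, here's a minimal WSGI sketch (hypothetical, not anyone's real endpoint): the IP and Accept-Language arrive with every request, script or no script, because the server receives them just to be able to respond at all.

```python
def app(environ, start_response):
    # REMOTE_ADDR is needed just to route the response back to the client;
    # Accept-Language is sent by every mainstream browser by default.
    ip = environ.get("REMOTE_ADDR", "")
    lang = environ.get("HTTP_ACCEPT_LANGUAGE", "")
    body = f"ip={ip} lang={lang}".encode()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]
```

Blocking a /piwik.php request changes none of this; the same fields still land in the server's access log.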


Don't you have logs from your http server?


The information from people that explicitly opt-in would have given them the same insight.

> That's why I personally choose to leave telemetry on in applications that I trust

The problem is that when I have to "leave it on" rather than explicitly enable it then it's no longer an application I trust.


The opt-in information would be skewed to the point of uselessness, though, right, thanks to extreme self-selection? In fact, even the data from those who don't opt out is probably pretty skewed itself, so some wrong data-driven decisions are inevitable.


There's a third, seldom-discussed alternative to opt-in and opt-out: no default.

On first launch or after a set period, show a non-dismissable dialog asking users to choose between the two, with no default option selected, so you can't just click through.

This strikes me as the best compromise between respect for user privacy and acknowledging that data collection can be useful, with the trade-off that some users will choose not to use the software/website rather than decide on an option (thus, 'no default' must be used sparingly).


That's still opt-in because the user elects to submit their telemetry data, even though they didn't seek out the setting.


It is opt-in, but typically both opt-in and opt-out in these sorts of contexts refer to what happens in the absence of user input.

In that context opt-in has a bit of an issue with people even knowing it exists, not to mention caring enough to go find the option to opt-in, and then doing so. This route will, for better or worse, produce a very self-selected group. The proposal above is a way to maintain the benefits of opt-in (namely, consent) while trying to improve the quality of the group.


As a Firefox user, Moz://a has plenty on their tab and does not deserve special treatment.


Most of the pushback here isn't going to be on the web. It's going to be in corporate systems and proxies that haven't upgraded, and reject anything they don't understand.

For instance, some corporate proxies will parse TLS and drop connections they don't understand. Theoretically, they do this to combat things like Heartbleed; in practice, they do it because the same tools will (with the flip of a switch) do termination and interception.


In my experience this kind of thing is becoming less common as time goes on, but granted, it's a long tail of behavioral change.

A lot of organisations (at least, ones I've talked with) are starting to look realistically at how they handle things like proxying and traffic monitoring in a world where they can't reasonably MITM HTTPS sessions any longer.

Reality for most is, they're starting to realize they can no longer paper over a people problem with technology, and are starting to look realistically at how the issues they were addressing via MITM can be solved in softer ways.

Genuine high-security environments are a different matter. If you absolutely have to safeguard against data-loss events and such, there are a wide variety of other techniques available such as full air gapping, or maintaining full end-client control at the tail of the tunnel so you never need to peek halfway along the pipe. The other common need for MITM is to police time-wasting behavior and perceived risks of staff web activity, and those are better solved via good old fashioned people management.


Honestly, I don't know why corporate systems don't just plain reject standard HTTP(S) traffic and require browsers to configure a corporate proxy which signs connections with their own internal certificate authority. Transparent MitM is such a hassle, and in a corporate environment you should control the client devices anyway. As an added bonus, you can choose which protocols you will or will not support on the external side, protecting your ancient Windows XP boxes from SSL2 downgrades and allowing them to contact modern websites their SSL libraries might not support.


There are still some essential government, military and corporate websites relying on these protocols that will not be updated any time soon - it should always be possible for a user to override this block.

I really dislike this "browser smarter than the user" design.


How many users:

a) Know what TLS is

b) and, have a secure channel to their destination website that allows them to determine that it intends to serve TLS 1.0

c) and, aren't in a position to just upgrade the darn thing to at least TLS 1.2?

d) and, know that there are no undisclosed weaknesses in the outdated design of TLS 1.0 or in the outdated cryptography that it mandates

Most users fail a). Basically the only way to pass b) is to be the website operator or someone that knows that person or group in real life. But, to pass c), you can't be the operator. Then, finally, no one can really pass d), but, the closest you could get would be to be a part of a sophisticated government sponsored security agency, probably working as a cryptographer and definitely being kept up to date on pretty sensitive intelligence. And for reasons that aren't clear, you are totally fine with the website in question running on outdated crypto - so, in addition, you are probably bad at your job.

How many people fit that description? Those are the people who have a right to consider this a user-hostile change. I'm willing to bet it's a pretty small group. Everyone else benefits, since they either know they can't make a good choice as to whether to accept TLS 1.0 from a website, or mistakenly think they can.
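For what it's worth, this is roughly what the change looks like at the TLS-library level (a sketch using Python's ssl module, not Firefox's actual NSS code): the client sets a protocol floor, and a TLS 1.0-only server simply fails the handshake.

```python
import ssl

# A client context with the floor Firefox plans to enforce by default.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Handshakes against servers offering only TLS 1.0/1.1 now raise
# ssl.SSLError instead of silently negotiating the old protocol.
# Lowering ctx.minimum_version back down is the library-level analogue
# of the browser override flag being argued about here.
```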


To add to this:

Supporting old stuff costs time. If the company (or a group of companies) believes they know better, they can contribute the code to change this. And maintain it. Or pay someone else to. It's perfectly viable and it accurately reflects the cost of the business's decision to not change something.

Otherwise no, most browsers must be safe for the lowest common denominator, and they do know better than most.


That's why it should be disabled by default but also be overridable. Those users would have to mess with browser flags to re-enable older versions. And if a user is willing to mess with advanced browser settings without understanding them, there are far worse security settings to mess with than outdated tls protocols.

Users MUST have ultimate control over software, and not the other way around, even if such control is used to do something very stupid. Software deliberately designed to go against the wishes of its users is defective, malicious, or both.

PS: point (d) is a non-point.


> Software deliberately designed to go against the wishes of its users is defective, malicious, or both.

I think it's quite a stretch to say that Mozilla choosing not to support a technology makes their product "defective" or "malicious". They get to choose what they support. The beauty of open source software is that if someone disagrees with that decision, they are free to support it themselves. That is unlikely to happen in this case - and that just validates Mozilla's decision.

Point d) is highly relevant - if it's hard for users to present a rational reason that a feature should exist, it further justifies Mozilla not wanting to support it. The IETF, NIST, browser vendors, PCI security standards, vendors such as Cloudflare, etc. have all moved away from TLS 1.0 or recommended no longer using it, as described in https://tools.ietf.org/html/draft-ietf-tls-oldversions-depre.... That document also lays out various technical reasons to no longer use TLS 1.0. TLS 1.2 has been the recommended version of TLS since 2008 - 10 years ago - and it will be 12 years by 2020, when Mozilla stops supporting TLS 1.0 and 1.1. That is all overwhelming evidence that anyone who thinks they are the special exception for whom using TLS 1.0 makes sense is almost certainly wrong. People have the right to be wrong, but it's hardly Mozilla's ethical obligation to enable them.


I understand the point you're making, but I still stand by my opinion. Software that goes out of its way to subvert the wishes of the user is defective, malicious, or both.


Spoken like someone who never had to support some old feature because "maybe there is one user who still uses it". Features have a cost. Disabling features and removing them if they are no longer useful for the majority is a valid response to limited resources. I'd rather see Mozilla work on features that are useful and secure than garbage from yesteryear.


> Users MUST have ultimate control over software and not the other way around

You can build your own Firefox. You can even download the source and build an old version. What more control do you want?


This will give the techies a reason to give to their bosses to pay off the technical debt that has accrued in these systems. Every site still relying on these outdated protocols will need to upgrade, and they will have two years to do it. It moves the argument from the nebulous "it will make us safer" to the concrete "things will not work".

And yes, it's a heavy-handed way, but the fact that "there are still some essential government, military and corporate websites relying on these protocols that will not be updated any time soon" shows the soft touch isn't working.


Not to mention "fringe" websites that still contain much useful information, are occasionally found in search engines, and more frequently in bookmarked site lists... gov/mil/corp have plenty of resources to add new TLS versions, but just not always the willingness to, and that isn't so bad since "adding willingness" is not impossible; it's really the "small players" out there which will be most affected, those who have personally maintained sites or even sites abandoned on servers for a long time.

Fortunately most of those sites still use "not secure" plain HTTP (I wonder if they're going to remove that too!?), but this feels to me like yet another sad sacrifice of freedom for security, and in this case it's almost --- but not quite --- book-burning. The Internet used to be a much more diverse and interesting place, if perhaps more dangerous; but in encouraging the dominance of this "safe and secure" censorship, of sites run by large corporations, and the centralisation of power into them and the CAs that essentially act as access gatekeepers, I feel like we've lost a lot of what made the Internet a really unique and fun (including the risk) experience.

I think an appropriate real-world analogy is https://en.wikipedia.org/wiki/Slum_clearance


Government, military, and corporations. Exactly the people I want pushing critical data (or my data) over insecure channels. /s

Besides, we all know there will be plenty of organizations that issue convoluted instructions that are the equivalent of "reset your clock to before the cert expiration".

As someone who had to deal with fallout from Equifax, I'm all in favor of smarter, yes smarter, parties acting in the collective security benefit of us all. As you point out, some will drag their feet otherwise.


This isn't the browser acting smarter than the user; this is the browser trying to push the web forward that last little bit so that everyone is more secure.

I'm sure that alternatives will exist for people who know they need to deal with TLS 1.0 for a while longer.


if military and government sites rely on old encryption schemes, I think there's a much bigger problem than that

And even as a person who wants to have toggle for everything i don't think this is a good option in this particular case. If someone wants legacy, they can stick with an old browser instead.


Why wouldn't they just put a TLS 1.2 reverse proxy in front of whatever legacy system is there? Or use Cloudflare? That'd be way easier than telling people to reconfigure their browsers to be insecure everywhere on the web.
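For the reverse-proxy route, a minimal nginx sketch (hostnames, paths and the backend address are made up for illustration): the proxy speaks TLS 1.2 to clients while still talking the legacy box's old TLS on the inside leg.

```nginx
server {
    listen 443 ssl;
    server_name legacy.example.com;          # hypothetical name

    ssl_certificate     /etc/nginx/certs/legacy.crt;
    ssl_certificate_key /etc/nginx/certs/legacy.key;
    ssl_protocols       TLSv1.2;             # modern TLS on the outside

    location / {
        # old TLS only on the internal hop to the legacy device
        proxy_pass          https://10.0.0.5:8443;
        proxy_ssl_protocols TLSv1;
    }
}
```

The internal hop should of course be on a trusted or isolated network segment, since that leg keeps the weak protocol.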


Essential has nothing to do with it. Upgrading to a ten year old standard as a minimum is not burdensome. If these services are so critical, they have far bigger problems due to these gaping security holes.


Honestly, if you still need something that's clearly deprecated, and you're given a two-year warning (if not more), then you can just fork Firefox and put the old TLS versions back.

With open source software you have every opportunity to customize it to your needs. The question that remains, of course, is: which is more expensive, upgrading the outdated software or maintaining your own Firefox branch?


Older versions of the browser aren't going anywhere. Users are free to keep them as long as they want.


If you can afford to spend $1 trillion on a fighter plane that is marginally better than the previous ones that you already had, you can afford to spend $100k on basic network security.


> military

But uh isn't it better that it breaks in peacetime* than in wartime?


Then don’t upgrade the browser.

Don’t jeopardize my security just so you can keep living in the Stone Age.


This will almost certainly have an `about:config` option (it has one right now)
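For reference (as of current releases - pref names and values could change), the relevant about:config prefs live under `security.tls.version.*`:

```
security.tls.version.min = 1   ; 1 = TLS 1.0, 2 = TLS 1.1, 3 = TLS 1.2, 4 = TLS 1.3
security.tls.version.max = 4
```

Bumping `min` to 3 today gives you the post-2020 behaviour early.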


Corporates be damned! (and they will be if they don't adapt). There are many issues here and you touch on one: MitM web proxies. To be honest it is pretty trivial to get Squid to do Bump and Splice, so MitM is a thing for anyone and not just corporates with expensive proxy toys.

In the Windows world, you have a weird collection of SSL/TLS settings available out of the box. Unfortunately you have at least three sets of these to worry about: SChannel, WinHTTP and .NET (this one has several sub versions). Updates enable extra features but some need reg keys to switch them on.

Then, once you get things tuned correctly so that TLS <1.2 is unavailable, something will need an older version and you have to switch it back on ... unless your IT security officer is also the Managing Director, which means I get to tell the whiner to piss off and fix their application. I do accept that I have a luxury unavailable to most people!


Where is the problem?


If you want Nginx to use TLS v1.2, this is what you need:

  ssl_protocols TLSv1.2;
…and if you compile a recent Nginx from source and bake in OpenSSL 1.1.1 while you do that, you can have TLS v1.3 with a TLS v1.2 fallback, too:

  ssl_protocols TLSv1.3 TLSv1.2;
See also:

https://caniuse.com/#feat=tls1-2

https://caniuse.com/#feat=tls1-3


This is a bad design by nginx: how many people configuring a web server are thinking to themselves, "I'd better check which version of OpenSSL I compiled with in order to set the appropriate TLS versions"? I'd guess approximately none.

The correct design would be something like:

  tls_minimum_version 1.2;
If they feel a compulsion to do so they could add a maximum version, but with a default of (none) and an explicit warning that this probably isn't what you wanted to change.
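As a point of comparison, Python's `ssl` module (3.7+) exposes exactly this minimum/maximum split, with the maximum defaulting to "whatever the linked OpenSSL supports":

```python
import ssl

# Client context that refuses anything below TLS 1.2.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# maximum_version defaults to MAXIMUM_SUPPORTED, so TLS 1.3 is
# negotiated automatically when the linked OpenSSL supports it.
print(ctx.minimum_version, ctx.maximum_version)
```

That matches the proposed directive above: you state a floor once and newer protocol versions arrive for free.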


tls 1.3 is still very new, so it makes sense that it would have both compile time and runtime concerns. Unless you are specifically trying to use tls 1.3 (which isn't most people), you don't need to turn it on, even if it is compiled in. Of course if you are specifically trying to use it, you probably know how you compiled nginx. So really it isn't bad design at all.


I reported this four years ago with a patch at https://trac.nginx.org/nginx/ticket/642. Nobody seems to have noticed.


These links work with JS disabled:

https://caniuse.com/tls1-2

https://caniuse.com/tls1-3


The default includes TLSv1.2 (along with TLSv1.1 and TLSv1). So the default does and will continue to use TLSv1.2 with browsers that support it, and will still work when browsers disable TLSv1 and TLSv1.1.


https://caniuse.com itself only supports TLS 1.0. They know about it: https://github.com/Fyrd/caniuse/issues/4198




Chrome's seems to be the only announcement that doesn't mention it's coordinated with the other major browser vendors.

Dominant position talking :(


I kind of wish they'd leave the option to re-enable them in extreme circumstances. It's really annoying to try to bring up the web interface on some crusty old piece of hardware and discovering that the SSL/TLS negotiation can't find a workable solution.


At some point that code has to be removed. They can't keep every old option and code path around forever, it's just not sustainable.

There's a cost to using outdated systems, and this is it. If you need it, you can proxy requests to upgrade their protocol to something newer browsers speak, upgrade the device in question, keep old versions of browsers around for those single purposes (and probably gap that device as much as possible as it's now a much easier target for attackers).

Calling for all software to support all outdated options and code paths forever or even to always keep the code and flags or settings to enable it is absurd, and leads to unsustainable software that becomes impossible to change and improve.


Assuming that device is actually worth accessing, then you could still keep and use an old version of a browser for that purpose. Newer browser versions should be pushing the web forward where possible.


I agree with you in the case of old encryption methods (plain DES, RC4, NULL cipher) but not all protocol problems are because of the lack of a recent encryption algorithm.

There's heaps of old modems that use a weak DH key and will never see a firmware update. You're left with either accessing the device insecurely over HTTP, hoping your ISP will send you a new one (good luck with that) or paying for your own modem which will probably never be allowed on the ISPs network.

Weak DH keys should not be that hard to keep in the code base yet still most browsers will present an impassable TLS error screen.


Those modems should no longer be being used, period. If someone cannot afford a replacement and has an incompetent ISP incapable of providing them with a subsidized replacement, then that is a separate problem that needs addressing as soon as possible.

Perpetuating it won't do, and if in doing so we're perpetuating a larger impending security issue, then we need to resolve it stat, not defer everything because there is heaps of old hardware lying around.

That may be easy to say and harder to resolve, but there comes a time when problems need to be resolved. Maybe that won't be 2020, if the desired timeline proves unrealistic, but two years is plenty of time to move on it. It generally takes far longer to deprecate and remove protocols from the web than it does to get a replacement modem.


That’s awesome. Server side software should also be actively removing those old protocols.


Also, probably it will affect accessing a range of hardware devices that include a web interface (such as routers).

A recent example:

https://msfn.org/board/topic/177834-modern-browsers-and-lega...


So if we remove TLS 1.1 from our servers and just offer 1.2, we fail on fallback when testing through Qualys.


What web server are you using? I'm running numerous servers with just TLS v1.2 and get A+ at Qualys [1].

Not bragging, just curious where you fall down.

[1] https://www.ssllabs.com/ssltest/analyze.html?d=tractor.textp...


Your link shows all the clients you’re blocking. Expand the “unsupported clients” section. You’re currently blocking a lot of clients some folks care about (I say good riddance to them, but not everyone can).


There's no reason to remove TLS 1.1 from your server. This change is about the minimum protocol version supported by the browser.

Your server can advertise SSLv3 support alongside TLS 1.2, and Chrome 70 will still happily connect to it.


People also thought that there's no reason to remove SSLv2 from your server, and then the DROWN attack happened:

https://drownattack.com/

DROWN shows that merely supporting SSLv2 is a threat to modern servers and clients. It allows an attacker to decrypt modern TLS connections between up-to-date clients and servers by sending probes to a server that supports SSLv2 and uses the same private key.


> There's no reason to remove TLS 1.1 from your server.

I posit there's no reason to support TLS 1.1 on your server. There are very few clients that support TLS 1.1, but not TLS 1.2. So, either you are willing to support clients on TLS 1.0 (or SSLv3), or you aren't.


Apple only added TLS 1.2 to their SecureTransport lib in OS X 10.9, which was released in late 2013. Not so old!


Did they actually support tls 1.1 though?


1. Downgrade attacks.

2. Preventing people from shooting themselves in the foot.


Mozilla and Chrome and others will need to work with the various sites testing TLS (and "SSL") to make sure their tests stop asking for this.


You don’t “fail” due to lack of 1.1, you can still get an A+ as evidenced by Pete’s link. That said, you’ll notice that his server is blocking a bunch of clients that maybe you care about.


As far as security goes, that sounds like a good thing.


Does anyone think that all this security handling stuff should be kept separate from the browser and moved into a local proxy that handles this?

So that people with old browsers or other clients can still access the web.


Interesting, including the comments on HN. But personally I wonder more when we can disable old TLS versions for MTAs.


A lot of MTAs are improperly configured. There's still a lot of plain text email sent across the Internet. The mail servers that do use TLS often have trouble connecting to anything with a TLS version higher than 1.0 (if even that). Many mail servers also don't have a valid server certificate (self-signed, expired or even from the wrong domain). In my opinion, the email ecosystem is hopeless with regards to TLS security. Gmail started showing red padlocks for plaintext or insecurely sent emails a while back, and I still see the red padlock to this day.

Because of this, disabling old TLS versions could make your server unreachable for large parts of the Internet, or have servers fall back to plain text if they're improperly configured. Nobody wants to be that one company that can't receive your grandma's emails, so everybody just keeps accepting improper configuration.

Another problem is that a lot of MDA servers share their TLS config with the MTA side.

I know from experience (worked at a small company that upgraded their email servers to TLS 1.2, at least on the MDA side) that old Microsoft mail clients (Office 2007 and lower, Windows Live Mail 2012) have trouble with TLS 1.2, especially on older Windows versions.

Although these clients have all been deprecated for a long time, a lot of users with few tech skills still use them because that's what their PC was set up with years ago when they bought it, or because they don't want to waste their time learning how to deal with a new UI (you can see this with a lot of elderly people).

For programs that use the Windows 7 TLS libraries, TLS 1.2 is even disabled by default, because at the time Windows 7 launched, other implementations had major bugs. It can be enabled using a registry key, though. This includes Office 2010, which still gets security fixes from Microsoft.

So, if a large company would disable TLS 1.0/1.1 on their MTAs it might get a large amount of customer support calls from their least technical customer base. You can tell your customers that their program is out of date and that their program is the reason they're getting errors, but in the end the customer will still blame you for "breaking their mail program".

Aside from a massive blow to a company's reputation, this would also overload the customer support desk and cost a lot of man hours.

<edit> Actually, I've seen the built in mail app for Samsung smart phones fail on TLS 1.2 for Android versions up to Android 8. Other Android vendors generally have trouble up to Android 5/6. With the lack of system updates on the Android ecosystem, this could be an even bigger problem. </edit>

I believe disabling old TLS versions is the right thing, but not until large parties such as Microsoft and Google decide to take the first step if you still want your server to receive any email.
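For anyone who does want to experiment, Postfix at least lets you split the two cases, so opportunistic server-to-server TLS stays permissive while the client-facing side is stricter. A sketch (directive names per the Postfix TLS documentation; exact values are a judgment call):

```
# main.cf - opportunistic TLS for incoming server-to-server mail:
# keep this permissive, since the realistic fallback is plaintext,
# not a newer TLS version
smtpd_tls_security_level = may
smtpd_tls_protocols = !SSLv2, !SSLv3

# protocols when TLS is mandatory (e.g. the submission service used
# by mail clients) - stricter, but see the old-client caveats above
smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
```

That way grandma's mail server still gets through on port 25, while password-carrying submission connections are held to a higher bar.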


What needs to be dropped the most are non-TLS non-localhost http:// URLs.



