Using telemetry, Mozilla was able to measure precisely how many connections are actually established with TLS 1.0 and 1.1. Without those numbers, they'd have been flying blind, making decisions with no rational basis. That's why I personally choose to leave telemetry on in applications that I trust: it helps the devs make sensible, data-driven decisions.
Some unrelated advice: what would you do if the business brings up Google Analytics data showing that Chrome, IE, and Safari are the three main browsers people use, and that Firefox's usage share is under 5% for a particular website? I tried to show them my own use case: I browse with uBlock Origin, which means that when I go to the website I don't show up in Google Analytics at all.
How do we make decisions based on self-reported data?
You'd think institutions funded by public monies would be more sensitive to issues like this, but my pleas fall on deaf ears.
EDIT: I'm mostly suggesting this because it sounds like all you need is proof that your Google Analytics data might be incorrect.
I really didn't expect my adblocker to do that when I installed it.
I've had so many things break because someone somewhere didn't think a request was valid enough, blocked it, and put it into a common blocklist. WebRTC, several fonts, first-party analytics, even whole services like Google Shopping were blocked entirely because "they are ads" or they were "trackers". We even had to modify an app that was using CSS modules one time, because class names were compiled down to a hex-encoded hash of the CSS properties, and any class names beginning with `ad` were blocked by EasyList at one point...
It gives an uncomfortable amount of power to a very small number of developers. And it ends up ruining efforts to avoid third-party analytics, because not only are first-party analytics harder to implement and maintain in many cases, they get blocked just the same for very innocuous reasons, and give worse insights in some ways.
Just look at the list of what EasyList considers "unacceptable data to gather" in first-party analytics. Things like the user agent, timezone, IP address, and language are considered "personal information" and, per EasyList, should be blocked even in first-party scripts! Blocking the IP address or user agent alone literally makes all first-party tracking impossible...
I will say this is why people install content blockers, to avoid giving control to random developers behind countless websites, which have shown that there is no limit in the amount of data they will try to gather without a hint of informed consent.
On the other hand, content blockers are opt-in and people are free to install or not after reading the description/documentation and further researching them.
But what I'm lamenting here is the lack of choice in terms of what is blocked, and the limited ability to understand that it's the content blocker that is potentially breaking the application.
I can't overstate how much I want to respect the user if they don't want some aspects, or some features, or anything which they just don't like, or which they feel can impact their privacy. (I'll argue with you about the ethics of blocking ads but still using a service, but that's another point entirely!)
My issue is that there is a group of people deciding which things are allowed and which aren't, and they are necessarily going to have to make tradeoffs (I don't think all of amazon.com should be blocked just because they can track your purchase history by virtue of handling your purchase!). But for the most part the tradeoffs made are opaque to the user and the site owner, and the UX does little to show the user that the content blocker can be the cause of the problem. Not to mention that, by default, these lists are very powerful tools able to pretty quickly harm good actors (and this isn't something I'm singling content blockers out for; antivirus vendors, ISPs (see net neutrality), and even social networks all have this issue to some extent).
There might not be a good answer, and I sure as hell don't pretend to have it, but I'm just wishing for a way that users can make informed decisions about what they want to block, rather than have someone decide for them. Without some kind of give and take here, the arms race is just going to continue, and as it escalates, good actors (of which I'd like to think I'm one) will get increasingly pushed away as they are blocked just the same as bad actors, leaving them with the options of "play dirty and survive" or "die peacefully".
I'm sad to say that I ended up taking a page out of the "play dirty" book and got a new domain name for the app in question to get around the content blockers, and I'd do it again in a heartbeat, since I'm not abusing that information or power but I'm blocked as if I am. And while I feel dirty that I'm forcing my wants onto others, the ratio of "unwilling participants" to "those who actually wanted to use my app, but not one of its core features" is so unbalanced that I ended up ignoring the latter to save the former. And while I wish I could respect the wants of the latter group, if I engage there, the former will have the content blocked for them all over again, and I'll be right back to square one.
But don't forget the good old HTTP access logs, which give you all this information anyway without having to load extra scripts.
But that's my point, whole domains have been added to EasyList under these rules that break the entire application because the person who added it assumed it was tracking related and not part of the application. People see xhr requests which save the current timezone so the server can update calendar notifications at the right time as "tracking" and break the application entirely because of it. They see a request they don't understand and block it because "it includes the useragent".
Take Google Shopping, for instance: an entire service shut off because "it's tracking", while Amazon, eBay, and every other online shopping service isn't blocked.
If you really want that blocked, I'm more than happy to not serve you! I don't want to track you if you feel that using your timezone is over the line! But when your feelings toward that are pushed to who knows how many computers and now my site is broken for a large number of users, who are now contacting me to fix it, it becomes a problem.
To use an analogy, I don't have any issue with people that are allergic to peanuts. I don't want to try and force feed you peanuts, I want to make sure everyone knows I serve peanuts, and I want to give you the option of not eating at my establishment if you are allergic or simply don't like peanuts.
But if you start removing my establishment from maps, and start putting road blocks in front of my building preventing others from getting in, now it's overstepped a boundary.
I don't pretend to have a good answer, this isn't an easy problem to solve, but giving a handful of devs a "break this website for all adblocker users" button seems like a lot of power, and it especially hurts those who are trying to accommodate you.
If I didn't care, I'd just work around it: use server logs and send them to third parties, lie, and actively route around your attempts to block. And I'd not only make more money, but have fewer issues from users to deal with...
My hope is that the devs that believe these things can give more options to the user to decide for themselves and make informed decisions about the tradeoffs, rather than saying "this is bad, I blocked it for you".
In the instance I'm talking about, the app was about reminders for taking medications. Having those go off at 3am local time because you flew to Hawaii for a week doesn't do you any good. Not to mention that the app tracked and helped tune dosages and timing, and needed information like when you woke up, your local time of day, the kind of medication, and a lot more.
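To make concrete why the server needs the client's timezone, here is a rough sketch of the scheduling logic involved (Python; the function and its signature are hypothetical illustrations, not the app's actual code):

```python
from datetime import datetime, time, timedelta
from zoneinfo import ZoneInfo


def next_reminder_utc(reminder_local: time, tz_name: str,
                      now_utc: datetime) -> datetime:
    """Return the next occurrence of a daily reminder, in UTC, for a
    user whose client reported the IANA timezone tz_name."""
    tz = ZoneInfo(tz_name)
    local_now = now_utc.astimezone(tz)
    # Today's reminder time in the user's local timezone.
    candidate = local_now.replace(hour=reminder_local.hour,
                                  minute=reminder_local.minute,
                                  second=0, microsecond=0)
    # If it has already passed locally, schedule for tomorrow.
    if candidate <= local_now:
        candidate += timedelta(days=1)
    return candidate.astimezone(ZoneInfo("UTC"))
```

The point of the example: the same "8:00 AM" reminder maps to a different UTC instant for "America/New_York" than for "Pacific/Honolulu", which is exactly why the client reports its timezone and why blocking that request breaks the core feature.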
Again, you are free to not give it, but that was a core point of the application, and trying to disable or break the application in the name of privacy is like cementing your door shut to prevent break-ins.
You don't always know better than the user, and you don't always know better than the developer.
In the end, I switched domains, and since then have managed to fly under the adblocker radar. I'm no longer involved with it, so I don't know if they managed another solution without me.
However, I don't think that content blockers are third parties. They are part of the user agent. The choice to use a blacklist is up to the user. It is not a third party breaking your app; it is the user themselves.
To use a more controversial phrase, the "blocklist" groups are now a cartel which gets to decide which apps work and which don't, and you have no way of appealing or even reasoning with them in a lot of cases. They get to decide that the Amazon cookie is okay, but a Google font is not, that login-with-facebook isn't allowed, but login-with-twitter is okay. That my app which uses timezone information was crossing a line, but my competitor wasn't.
I'm more than happy to help work with those who want to protect their privacy. I genuinely agree with a lot of viewpoints and I work to protect my own privacy as well, but at some point things go too far and I feel a lot of these tools are throwing the baby out with the bathwater here, and are even causing a false sense of security as people think they are protected from everything when really they are hurting the good actors the worst while the bad actors continue to operate just fine.
As I’m one of these people who aggressively controls what I let my web browser load maybe I can answer some of your questions.
Facebook, Twitter, Google Analytics and tag manager are blocked outright, all the time.
If your site needs several different CDNs to load its assets for seemingly no reason, then I generally leave; unless I need something from that particular page, in which case I give it about two or three goes of playing "which CDN serves the information I actually care about?".
Any domain you use that isn't obviously yours, a CDN, or obviously related to what's on the page (web stores get to load Shopify, for example) is blocked, because I assume it's just another advertising/analytics company.
Sites that load the DOM and contents into memory, but display a blank page until I unblock a bunch of JS are my pet peeve, and I make a note to never return to them.
> I feel a lot of these tools are throwing the baby out with the bathwater here
The carte blanche approach is used because it's faster and easier for me to find the minimal set of domains and resources to load than it is to get every website out there to not stuff their site full of trackers and unnecessary JS.
>Any domain you use that isn't obviously yours, a CDN, or obviously related to what's on the page (web stores get to load Shopify, for example) is blocked, because I assume it's just another advertising/analytics company.
I'll generally agree with you (for the most part). With the exception of using a CDN to serve static assets that I control (like Amazon S3, for instance), I have no issue with you or anyone else blocking all requests you don't like! (And hell, I'm fine with you even blocking first-party requests; since you are the one doing it, you'll know it's the reason the site breaks.)
> Sites that load the DOM and contents into memory, but display a blank page until I unblock a bunch of JS are my pet peeve, and I make a note to never return to them.
But I'm talking about applications, not static web pages or documents. Complaining that an application needs a programming language is like refusing to turn on any computer that runs any code at all. The code is the application, and if you don't want to run JS, then you don't want to use the program. (outside of viewing some static assets like images, or possibly the minimal DOM put into the HTML for bootstrapping, there's literally nothing else).
>The carte blanche approach is used because it's faster and easier for me to find the minimal set of domains and resources to load than it is to get every website out there to not stuff their site full of trackers and unnecessary JS.
Again, I'm more than happy to let you block what you want; I'll even help support your use case if you want and it's possible.
But when you take your approach, start getting a large percentage of the web to use it, and auto-update their browser plugins so that things that were working yesterday are now broken, with users having no real explanation or understanding of why; or, even worse, when you begin making wild assumptions about why I need data, backed by nothing but scaremongering (displaying warnings like "this site is trying to track you" when I offer the option of serving the app in multiple languages...), then we have a problem.
In these cases, I'm more than happy to break their stuff, as it's seemingly the only way to get them to pay attention.
This isn't a blog that loads 50 trackers and tons of superfluous animations and video, this is an application which is designed to do things for the user. It's stuff like a music making app, or a medication reminder and tracker application that takes a lot of your personal information to help you find accurate dosages. It's stuff like games or offline-caching documentation browsers with IDE integration and powerful client-side search, or self-hosted home automation controller systems, or chat applications, or a barcode scanner app that checks for coupons for things that you scan in a web browser, etc...
And again, I'm fine if you don't want to run one aspect of it; you are the client and I can't technically control what you do. But complaining that the car won't work when you ripped the spark plugs out is dumb, and a mechanic who goes around taking out spark plugs every chance he or she gets so that people won't speed isn't helping.
> start putting road blocks in front of my building preventing others from getting in
It's more like you've erected a billboard, and the Campaign Against Ugly Billboards has given some people AR glasses which edit out the billboard... and a bit more of your storefront. It's not really in the same category as fraudulent Yelp reviews.
> If you really want that blocked, I'm more than happy to not serve you!
I think you'd be better off just accepting this. The user and their user agent have a "right" to display a website however they want, including in ways that the developer didn't think of, and including ways that are broken. The user can break your site if they want. I appreciate that this causes problems when they haven't realised that it's their own choices breaking it, or when they put in annoying requests ("why doesn't this site work in Lynx?"). But fundamentally they're customers, and customers are always annoying. Given your context of a paid app, it sounds like it's worth asking them to disable the adblocker pre-emptively, but presumably you've tried that already and they haven't.
> Take Google shopping for instance, an entire service shut off because "it's tracking", while Amazon or eBay or any other online shopping service isn't blocked
uBlock Origin blocks bits of Amazon within the website: the internal sponsored advertising. I only noticed when some of it flashed up and then vanished.
It’s something at least akin to unethical to deliberately break something I’ve built and then come complain to me that it’s broken.
And sure, they might be would-be customers, but not all would-be customers are worth serving.
Just as customers have the right to buy from whomever they like, so too do sellers have the right to sell to whomever they like (barring certain narrow exceptions like racial discrimination).
It’s always the negative value customers that throw the biggest tantrums when they are told a business would rather not serve them. Or in the case of ad block users, not even customers just people that want something for nothing.
But your analogy misses a key aspect, which is that the adblocker breaks the app entirely. It's not just hiding my storefront; it's preventing users from entering the store at all unless they completely disable the device. Often those same users don't realize that these blocklists update silently in the background, and that it's their adblocker breaking the application they have been using every day for years.
> I think you'd be better off just accepting this.
I think I worded that poorly. I do accept this. If you don't want to use an aspect, or want to render something your own way, I'm happy to let you! But I'm not happy when you start imposing those restrictions on others who don't know it's happening and don't know why core aspects of the app aren't working.
Informed decisions are good, blanket disabling or breaking applications because a developer "knows better" isn't.
User-agent issues I can work around, bugs can be fixed, and those who don't want that part of the app to work are happy. But I can't work around an adblocker blocking things for users who don't want them blocked. If I work around it, I'm then "tracking" those who are attempting to block it (and IMO should be called out for it), and if I do nothing I lose users and paying customers, who will move to competitors which for some reason are not blocked. (And I don't blame them; my app doesn't work, so why would you stick with it?)
> ublock Origin blocks bits of Amazon within the website
But back a year or so ago, it blocked the entire google.com/shopping URL, and the shopping.google.com subdomain (I haven't checked if it still does).
The entire service blocked because someone somewhere decided it wasn't worth it, but Amazon and eBay and AliExpress all only have some elements blocked because it's somehow different.
And while you can go in and re-enable those parts, it takes a long time and a good amount of technical know-how (globbing, regex) to whitelist things correctly. And then it must be repeated on every device you use.
What I hate most is that even sites I'm willing to send money to (web shops) do these things, and send data to ad networks too, no less.
You have no guarantees at all, but that information is always sent, and in many cases must be sent to even make a request. You literally can't prevent an IP address from being "sent", because it's needed to route the response back to you. Sending an "Accept-Language" header lets a website be translated into multiple languages. And virtually 100% of my users want times and dates in their local time, not UTC.
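To illustrate how mundane this use of the header is, here's a minimal sketch (Python; function name and behavior are my own illustration, and a real server should use a proper parser since this ignores wildcards and other RFC subtleties) of turning `Accept-Language` into a UI language choice:

```python
def pick_language(accept_language: str, supported: list[str],
                  default: str = "en") -> str:
    """Choose the best supported UI language from a header like
    'de-CH,de;q=0.9,en;q=0.8'. Simplified: no wildcard handling."""
    choices = []
    for part in accept_language.split(","):
        part = part.strip()
        if not part:
            continue
        lang, _, q = part.partition(";q=")
        try:
            weight = float(q) if q else 1.0  # no q-value means q=1
        except ValueError:
            weight = 0.0
        choices.append((weight, lang.lower()))
    # Try the user's preferences from highest to lowest weight,
    # falling back from region-specific tags to their base language.
    for _, lang in sorted(choices, reverse=True):
        base = lang.split("-")[0]
        if lang in supported:
            return lang
        if base in supported:
            return base
    return default
```

Nothing here identifies anyone; it just decides whether to render the page in German or English.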
Just because it's potentially identifying information doesn't mean you can block it and pat yourself on the back for "stopping tracking". When you don't allow an IP address to be sent, you have created a rule under which you can arbitrarily choose literally any request and block it. It's like making a law against breathing oxygen, then arresting anyone you don't like, for any reason, because they are at least breaking that law.
> That's why I personally choose to leave telemetry on in applications that I trust
The problem is that when I have to "leave it on" rather than explicitly enable it then it's no longer an application I trust.
On first launch or after a set period, show a non-dismissable dialog asking users to choose between the two, with no default option selected, so you can't just click through.
This strikes me as the best compromise between respect for user privacy and acknowledging that data collection can be useful, with the trade-off that some users will choose not to use the software/website rather than decide on an option (thus, 'no default' must be used sparingly).
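A rough sketch of that "no default, explicit choice only" logic (Python; names are hypothetical, and a real app would wire this to its first-run dialog rather than a CLI loop):

```python
def parse_consent(answer: str):
    """Map a user's reply to an explicit telemetry choice.
    Returns True (opt in), False (opt out), or None (no valid
    choice yet, so the dialog must be shown again)."""
    normalized = answer.strip().lower()
    if normalized in {"yes", "y", "enable"}:
        return True
    if normalized in {"no", "n", "disable"}:
        return False
    return None  # no default: empty input or anything else re-prompts


def ask_until_decided(replies):
    """Simulate the non-dismissable dialog: keep asking until the
    user makes an explicit choice. `replies` is an iterable of
    answers (stdin in a real CLI, button clicks in a GUI)."""
    for reply in replies:
        choice = parse_consent(reply)
        if choice is not None:
            return choice
    raise RuntimeError("user never made an explicit choice")
```

The key design point is that `None` is not a decision: clicking through, pressing Enter, or dismissing does not silently select either option.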
In that context, opt-in has a bit of an issue with people even knowing the option exists, not to mention caring enough to go find it and then actually opting in. That route will, for better or worse, produce a very self-selected group. The proposal above is a way to maintain the benefits of opt-in (namely, consent) while trying to improve the quality of the group.
For instance, some corporate proxies will parse TLS and drop connections they don't understand. Theoretically, they do this to combat things like Heartbleed; in practice, they do it because the same tools will (with the flip of a switch) do termination and interception.
A lot of organisations (at least, ones I've talked with) are starting to look realistically at how they handle things like proxying and traffic monitoring in a world where they can't reasonably MITM HTTPS sessions any longer.
The reality for most is that they're realizing they can no longer paper over a people problem with technology, and are looking at how the issues they were addressing via MITM can be solved in softer ways.
Genuine high-security environments are a different matter. If you absolutely have to safeguard against data-loss events and such, there are a wide variety of other techniques available such as full air gapping, or maintaining full end-client control at the tail of the tunnel so you never need to peek halfway along the pipe. The other common need for MITM is to police time-wasting behavior and perceived risks of staff web activity, and those are better solved via good old fashioned people management.
I really dislike this "browser smarter than the user" design.
To rationally object to it, though, a user would have to:
a) Know what TLS is
b) and, have a secure channel to their destination website that allows them to determine that it intends to serve TLS 1.0
c) and, aren't in a position to just upgrade the darn thing to at least TLS 1.2?
d) and, know that there are no undisclosed weaknesses in the outdated design of TLS 1.0 or in the outdated cryptography that it mandates
Most users fail a). Basically the only way to pass b) is to be the website operator or someone that knows that person or group in real life. But, to pass c), you can't be the operator. Then, finally, no one can really pass d), but, the closest you could get would be to be a part of a sophisticated government sponsored security agency, probably working as a cryptographer and definitely being kept up to date on pretty sensitive intelligence. And for reasons that aren't clear, you are totally fine with the website in question running on outdated crypto - so, in addition, you are probably bad at your job.
How many people fit that description? Those are the people who have a right to consider this a user-hostile change, and I'm willing to bet it's a pretty small group. Everyone else benefits, since they either know they can't make a good choice as to whether to accept TLS 1.0 from a website, or mistakenly think they can.
Supporting old stuff costs time. If the company (or a group of companies) believes they know better, they can contribute the code to change this. And maintain it. Or pay someone else to. It's perfectly viable and it accurately reflects the cost of the business's decision to not change something.
Otherwise no, most browsers must be safe for the lowest common denominator, and they do know better than most.
Users MUST have ultimate control over software, not the other way around, even if such control is used to do something very stupid. Software deliberately designed to go against the wishes of its users is defective, malicious, or both.
PS: point (d) is a non-point.
I think it's quite a stretch to say that Mozilla choosing not to support a technology makes their product "defective" or "malicious". They get to choose what they support. The beauty of open source software is that if someone disagrees with that decision, they are free to support it themselves. That is unlikely to happen in this case, and that just validates Mozilla's decision.
Point d) is highly relevant: if it's hard for users to present a rational reason that a feature should exist, that further justifies Mozilla not wanting to support it. The IETF, NIST, browser vendors, the PCI security standards, vendors such as Cloudflare, etc. have all moved away from TLS 1.0 or recommended no longer using it, as described in https://tools.ietf.org/html/draft-ietf-tls-oldversions-depre.... That document also lays out various technical reasons to no longer use TLS 1.0. TLS 1.2 has been the recommended version of TLS since 2008, 10 years ago, and it will be 12 years by 2020 when Mozilla stops supporting TLS 1.0. That is all overwhelming evidence that anyone who thinks they are the special exception for whom using TLS 1.0 makes sense is almost certainly wrong. People have the right to be wrong, but it's hardly Mozilla's ethical obligation to enable them.
You can build your own Firefox. You can even download the source and build an old version. What more control do you want?
And yes, it's a heavy-handed way, but the fact that "There are still some essential government, military and corporate websites relying on these protocols that will not be updated any time soon" shows the soft touch isn't working.
Fortunately most of those sites still use "not secure" plain HTTP (I wonder if they're going to remove that too!?), but this feels to me like yet another sad sacrifice of freedom for security, and in this case it's almost, but not quite, book-burning. The Internet used to be a much more diverse and interesting place, if perhaps more dangerous; but in encouraging the dominance of this "safe and secure" censorship, of sites run by large corporations, and of the centralisation of power into them and the CAs that essentially act as access gatekeepers, I feel like we've lost a lot of what made the Internet a really unique and fun (risk included) experience.
I think an appropriate real-world analogy is https://en.wikipedia.org/wiki/Slum_clearance
Besides, we all know there will be plenty of organizations that issue convoluted instructions that are the equivalent of "reset your clock to before the cert expiration".
As someone who had to deal with fallout from Equifax, I'm all in favor of smarter, yes smarter, parties acting in the collective security benefit of us all. As you point out, some will drag their feet otherwise.
I'm sure that alternatives will exist for people who know they need to deal with TLS 1.0 for a while longer.
And even as a person who wants a toggle for everything, I don't think that's a good option in this particular case. If someone wants legacy protocols, they can stick with an old browser instead.
With open source software you have every opportunity to customize it to your needs. The question that remains, of course, is which is more expensive: upgrading the outdated software or maintaining your own Firefox branch.
But isn't it better that it breaks in peacetime than in wartime?
Don’t jeopardize my security just so you can keep living in the Stone Age.
In the Windows world, you have a weird collection of SSL/TLS settings available out of the box. Unfortunately you have at least three sets of these to worry about: SChannel, WinHTTP and .NET (this one has several sub versions). Updates enable extra features but some need reg keys to switch them on.
Then, once you get things tuned correctly so that TLS <1.2 is unavailable, something will need an older version and you have to switch it back on ... unless your IT security officer is also the Managing Director, which means I get to tell the whiner to piss off and fix their application. I do accept that I have a luxury unavailable to most people!
The correct design would be something like:

ssl_protocols TLSv1.3 TLSv1.2;
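For context, `ssl_protocols TLSv1.3 TLSv1.2;` is an nginx directive. A minimal, hypothetical server block using it (the server name and certificate paths below are placeholders, not from this thread) might look like:

```nginx
server {
    listen 443 ssl;
    server_name example.com;  # placeholder

    # Only offer modern protocol versions; clients stuck on
    # TLS 1.0/1.1 will fail the handshake entirely.
    ssl_protocols TLSv1.2 TLSv1.3;

    # Placeholder paths
    ssl_certificate     /etc/nginx/certs/example.crt;
    ssl_certificate_key /etc/nginx/certs/example.key;
}
```

With this in place the deprecation question moves from the browser to the server operator, which is arguably where it belongs.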
And Edge: https://blogs.windows.com/msedgedev/2018/10/15/modernizing-t...
Seems like this was coordinated.
Dominant position talking :(
There's a cost to using outdated systems, and this is it. If you need them, you can proxy requests to upgrade their protocol to something newer browsers speak, upgrade the device in question, or keep old versions of browsers around for those single purposes (and probably air-gap that device as much as possible, as it's now a much easier target for attackers).
Calling for all software to support all outdated options and code paths forever or even to always keep the code and flags or settings to enable it is absurd, and leads to unsustainable software that becomes impossible to change and improve.
There are heaps of old modems that use a weak DH key and will never see a firmware update. You're left with accessing the device insecurely over HTTP, hoping your ISP will send you a new one (good luck with that), or paying for your own modem, which will probably never be allowed on the ISP's network.
Keeping support for weak DH keys in the code base shouldn't be that hard, yet most browsers will present an impassable TLS error screen.
Perpetuating it won't do, and if in doing so we're perpetuating a larger impending security issue, then we need to resolve it stat, not defer everything because there is heaps of old hardware lying around.
That may be easy to say and harder to resolve, but there comes a time when problems need to be resolved. Maybe that won't be 2020, if the desired timeline proves unrealistic, but two years is plenty of time to move on it. It generally takes far longer to deprecate and remove protocols from the web than it does to get a replacement modem.
A recent example:
Not bragging, just curious where you fall down.
Your server can advertise SSLv3 support alongside TLS 1.2, and Chrome 70 will still happily connect to it.
DROWN shows that merely supporting SSLv2 is a threat to modern servers and clients. It allows an attacker to decrypt modern TLS connections between up-to-date clients and servers by sending probes to a server that supports SSLv2 and uses the same private key.
I posit there's no reason to support TLS 1.1 on your server. There are very few clients that support TLS 1.1, but not TLS 1.2. So, either you are willing to support clients on TLS 1.0 (or SSLv3), or you aren't.
2. Preventing people from shooting themselves in the foot.
So that people with old browsers or other clients can still access the web.
Because of this, disabling old TLS versions could make your server unreachable for large parts of the Internet, or have servers fall back to plain text if they're improperly configured. Nobody wants to be the one company that can't receive your grandma's emails, so everybody just keeps accepting improper configurations.
Another problem is that a lot of MDA servers share their TLS config with the MTA side.
I know from experience (worked at a small company that upgraded their email servers to TLS 1.2, at least on the MDA side) that old Microsoft mail clients (Office 2007 and lower, Windows Live Mail 2012) have trouble with TLS 1.2, especially on older Windows versions.
Although these clients have all been deprecated for a long time, a lot of users with few tech skills still use them, because that's what their PC was set up with years ago when they bought it, or because they don't want to waste time learning a new UI (you can see this with a lot of elderly people).
For programs that use the Windows TLS libraries, TLS 1.2 is even disabled by default on Windows 7, because at the time Windows 7 launched other implementations had major bugs. It can be enabled using a registry key, though. This includes Office 2010, which still gets security fixes from Microsoft.
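For the curious, the SChannel keys involved are, to the best of my recollection, the ones sketched below; treat this as an illustration and double-check Microsoft's SChannel documentation before touching production machines (a reboot is required, and WinHTTP-based apps additionally need a separate update plus the `DefaultSecureProtocols` value):

```reg
Windows Registry Editor Version 5.00

; Enable TLS 1.2 for client connections in SChannel on Windows 7
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001
```

The matching `\Server` subkey controls the server side, which is why the same machine can end up with different TLS behavior for inbound and outbound connections.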
So, if a large company disabled TLS 1.0/1.1 on its MTAs, it might get a large number of customer support calls from its least technical customer base. You can tell customers that their program is out of date and is the reason they're getting errors, but in the end they will still blame you for "breaking their mail program".
Aside from being a massive blow to the company's reputation, this would also overload the customer support desk and cost a lot of man-hours.
Actually, I've seen the built in mail app for Samsung smart phones fail on TLS 1.2 for Android versions up to Android 8. Other Android vendors generally have trouble up to Android 5/6. With the lack of system updates on the Android ecosystem, this could be an even bigger problem.
I believe disabling old TLS versions is the right thing, but if you still want your server to receive any email, it can't happen until large parties such as Microsoft and Google take the first step.