
Intent to deprecate: Insecure HTTP - dochtman
https://groups.google.com/forum/#!topic/mozilla.dev.platform/xaGffxAM-hs
======
jwr
How about we solve the "certificate" situation first?

I've just paid lots of money for "certificates". I quote the word, because
they don't actually certify or even signify anything. The whole procedure for
"domain-verification" is a joke, and many outfits are incompetently run (their
"verification" e-mails bounced from my servers because they ended up in RBL,
which nobody seemed fit to correct).

I see this as a scam, or extortion — pay up, or you won't be "certified". And
pay up significant amounts of money, if you want a wildcard cert.

If we care about encryption just for the sake of encryption, let's change our
browsers to allow self-signed certificates. Label them as such, but don't
label them as "unsecure", because the padlock icon really isn't any more
secure than a self-signed cert.

~~~
kej
How do you check the difference between your self-signed cert for your site
and _my_ self-signed cert for your site?

~~~
bandrami
I rarely care. Even the CA-signed certs are usually for "RL Media Enterprises
Inc." or something equally opaque to me, rather than something more
meaningful.

Bluntly, the refusal of a certain part of the security community to _simply
secure transport first_ and then worry about authentication on top of that is
both frustrating and mind-boggling.

~~~
breakingcups
I sure care. If my browser starts to accept YOUR self-signed certificate for
gmail.com, we're back at http.

~~~
bandrami
_we're back at http_

No. With http I can phish you, and anybody else can read or alter that
phishing attempt. With self-signed certificates, I can phish you, and you know
that my phishing attempt was neither altered in transit nor read by anyone
else. We now have a channel over which we can negotiate authenticity.

If you went to my blog and saw that a CA had verified that I am who I claim I
am, that doesn't particularly help you, because you don't know anything about
me. But you might like to know that, whoever I am and claim to be, no _other_
party is interfering with our communication. My issue is not with things like
Gmail or my bank, but with the thousands of "ordinary" sites where learning
the identity of the business that owns the site doesn't actually give me any
useful information. That is, even if I see the name of the company in the
certificate, I don't have a reason to trust them more than I would trust a
phisher because I have absolutely no sideband interactions with them to begin
with.

------
bifurcation
Hey, this is Richard, the author of the post. All the feedback here is great,
but if you've got thoughts on whether we should pursue this strategy or not,
please comment on the mozilla.dev.platform list.

[https://lists.mozilla.org/listinfo/dev-platform](https://lists.mozilla.org/listinfo/dev-platform)

[https://groups.google.com/forum/#!forum/mozilla.dev.platform](https://groups.google.com/forum/#!forum/mozilla.dev.platform)

~~~
yellowapple
As I mentioned in the thread in question:

"Basically, the current CA system is - again, to put this as gently and
politely as possible - fucking broken. Anything that forces the world to rely
on it exclusively is not a solution, but is instead just going to make the
problem worse."

Please keep that fact - that the CA system is, as I put it with all the
gentleness and politeness it deserves, "fucking broken" beyond any repair - in
mind.

(And no, "the CA system is fucking broken" is not an opinion; it is a
verifiable fact, as much as the concept of gravity is a verifiable fact)

~~~
brazzledazzle
If it's not a bother can you elaborate on the way(s) it's irreparably broken?

~~~
yellowapple
The idea of having to trust a central authority for verification is the root
cause of the vast majority of the brokenness; it means that proper TLS-based
security for the web is not only financially prohibitive even for individuals
in developed nations, let alone developing ones (with very few exceptions in
the CA space providing cheap or free certificates), but is also a single point
of failure in terms of security.

Nowadays, we have this magical thing called a "blockchain" that can be used
for everything from currencies (Bitcoin) to domain names (Namecoin); with some
further refinement, using a blockchain as a certificate authority would fix
both problems right away.

~~~
brazzledazzle
I certainly agree it should be replaced, but I think it's a bit off the mark
to say that self signed certificates should be trusted the same as CA signed
ones (something that bandrami was suggesting throughout the thread). Yes,
certificates insufficiently identify a site's owner but self signed
certificates are as bad as a CA's worst case scenario.

~~~
yellowapple
When it comes to security, you should always assume worst-case scenarios are
going to occur.

In which case, the equivalence of self-signed and CA-signed is entirely on-
the-mark. There's no _real_ guarantee that the certificate authority is any
more secure or trustworthy than, say, my five-year-old niece.

This is why decentralized systems are ultimately necessary here. Lately that
has been interpreted to mean either "systems using a cryptographic ledger or
blockchain" (something similar to Namecoin) or "systems that rely on mesh
topology graphs" (something similar to PGP), but those aren't the only models
out there. Either way, you don't _have_ to trust one arbitrary centralized
authority, but instead can trust, say, a majority of a collection of hundreds
or thousands or millions of such authorities coordinating via an agreed-upon
protocol/convention/etc. My own bet would be
on a cryptographic ledger (PGP-style webs-of-trust aren't nearly as end-user-
friendly, whereas a "blockchain" has more potential in that area, since it's
easier to abstract away from the end user), but pretty much anything at this
point would be less convoluted - and more secure/trustworthy/effective - than
the current system.

~~~
brazzledazzle
I disagree. There's a significant amount of security we gain from collectively
using the CA system over self-signed certificates. If a CA is subverted, my
browser or OS vendor can pull the CA, or the CA, if trustworthy, can revoke
the certificates.

Let's say a CA has issued certificates for example.com to someone with
nefarious intent. It's discovered that the CA's security is completely
compromised and my vendor pulls the plug. In our current scenario I can visit
example.com while being MitM'd and my browser vendor has made sure I get a big
alert when I connect.

In a scenario without CAs, I visit example.com and my browser vendor has no
idea that I'm being MitM'd nor do I since I've never been to example.com and
examined the certificate.

Is it perfect with CAs? No. Will some get victimized by a CA's carelessness
regardless of when it's caught? Probably. But most of us remain more secure
with it than without it. For most users on most sites it works albeit
haphazardly. It should absolutely be replaced. But to suggest that the
security benefits should be abandoned because it's possible that it could
happen is short sighted. It would be open season on internet users.

~~~
yellowapple
> I disagree. There's a significant amount of security we gain from
> collectively using the CA system over self signed certificates.

You're actually _losing_ security by trusting the CA model, though. You have
no means of control or independent audit. This is the same reasoning behind
free-and-open-source software being inherently more secure and trustworthy
than its closed-source counterparts; "transparency is a dependency of trust"
is just as applicable here as it is in any other security-sensitive situation.

This is why decentralization is absolutely essential, and the longer we go on
sitting on our haunches and pretending that the current system is "good
enough", the worse the problem becomes.

> If a CA is subverted my browser or OS vendor can pull the CA or the CA, _if
> trustworthy_ , can revoke the certificates.

That trustworthiness is a very big if.

> In a scenario without CAs, I visit example.com and my browser vendor has no
> idea that I'm being MitM'd nor do I since I've never been to example.com and
> examined the certificate.

There are numerous ways to achieve certificate verification without relying on
a centralized CA system. Even with self-signed, you can detect private key
changes (this is how SSH is protected against MITM attacks; in practice, this
rather-simple security measure has been _very_ hard to circumvent). For more
verification, there are plenty of ways to achieve that in a decentralized
manner, be it web-of-trust (PGP-style) or a cryptographic ledger (Namecoin-
style) or something else entirely. Hell, there are already systems like
DNSChain that implement the latter approach; that would be infinitely better
than the current system.
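The key-change detection mentioned above can be sketched in a few lines. This is a hypothetical illustration of SSH-style "trust on first use" (TOFU) pinning, not any real client's implementation; the in-memory dict stands in for a persistent store like `~/.ssh/known_hosts`:

```python
import hashlib

# Sketch of TOFU pinning: remember a host's certificate fingerprint the
# first time we see it, and raise the alarm if a later connection
# presents a different one. A real client would persist this store.
known_hosts = {}

def check_pin(host, cert_der):
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    pinned = known_hosts.get(host)
    if pinned is None:
        known_hosts[host] = fingerprint   # first contact: pin it
        return "pinned"
    if pinned == fingerprint:             # same key as last time
        return "ok"
    return "KEY CHANGED"                  # possible MITM: warn loudly
```

First contact pins the key; any later change, whether a MITM or a legitimate re-key, becomes loudly visible, which is exactly the trade-off SSH makes.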

> But to suggest that the security benefits

What security benefits? All the purported "benefits" are _entirely fictional_
, since they rely exclusively on arbitrary trust in arbitrary entities. That's
not security, no more than me handing you a briefcase full of cash and you
promising you'll hold onto it for me is "security".

The sense of security you feel with the current CA system is very much false.
You're relying entirely on luck, and have absolutely zero assurance that your
luck will continue to be good.

------
josho
What is the point? If you need a secure connection then use https, if there is
no value securing the connection then use http. Why deprecate a working
technology that continues to have many valid use cases?

As an analogy, this strikes me as deprecating arrays in favor of linked lists
because a list has certain features that arrays don't. I.e., there is a time
and place for both.

~~~
mabbo
Can you list some of the use cases where you can do something using HTTP that
you can't using HTTPS? I had always assumed one was a superset of the other,
feature-wise.

As long as HTTPS offers all features of HTTP, then the reason to deprecate
HTTP is to prevent accidental use of it, and websites which don't know/care
from leaving their users vulnerable.

~~~
dheera
Things you can do with HTTP that you can't do with HTTPS:

* Create a website for free (you have to pay for certificates to use HTTPS)

* Report sensor data from small embedded devices with extremely limited CPU

* Your JS can talk to APIs that don't yet support HTTPS (e.g. NextBus). If you serve your JS over HTTPS, your browser will complain if you try to access an HTTP-only API.

* Transfer large files at gigabit speeds on consumer-grade hardware

~~~
mike_hearn
_> Create a website for free (you have to pay for certificates to use HTTPS)_

No you don't. You can get free certificates.

But even if that was true, it's still a misleading argument. You can only make
a website for free if you are OK with your website not having a domain name
and running it off your personal laptop with electricity paid for by your
roommates/parents. For any _actual_ website, there are already a multitude of
real costs of which a certificate is just one more.

 _> Report sensor data from small embedded devices with extremely limited CPU_

If your CPU is really so limited then why are you using HTTP at all? Use a
custom binary protocol with or without encryption as appropriate.

 _> Your JS can talk to APIs that don't yet support HTTPS (e.g. NextBus)._

Then those APIs suck - avoid them and/or ask the providers to get with the
program.

 _> Transfer large files at gigabit speeds on consumer-grade hardware_

AES-NI can decrypt at 3.5 cycles per byte. With _modern_ consumer grade
hardware you will not find symmetric streaming crypto to be a serious
bottleneck.
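Taking the 3.5 cycles/byte figure at face value and assuming a 3 GHz core (my number, not stated above), the back-of-the-envelope arithmetic supports this:

```python
# Back-of-the-envelope check of the claim above, using the quoted
# 3.5 cycles/byte figure and an assumed 3 GHz consumer core.
cycles_per_byte = 3.5
clock_hz = 3.0e9                           # assumed clock speed
bytes_per_sec = clock_hz / cycles_per_byte
gbit_per_sec = bytes_per_sec * 8 / 1e9
print(round(gbit_per_sec, 1))              # ~6.9 Gbit/s on one core
```

Well past gigabit line rate before you even touch a second core.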

~~~
dragonwriter
> You can only make a website for free if you are OK with your website not
> having a domain name and running it off your personal laptop with
> electricity paid for by your roommates/parents.

Actually, if you are willing to do without your own second-level domain name
and just have a third- or fourth-level domain name, there are plenty of
services where you can have a free web site (or app) running over HTTP.
E.g., any number of free (or free-within-limited-quota) static site hosting
services, or even Google App Engine.

Of course, in the Google App Engine case, you also get HTTPS for free (within
usage quota), as long as you are willing to have an <app>.appspot.com domain
name, so the "create a website for free" isn't really a "with HTTP, but not
HTTPS" thing.

~~~
dheera
Sure, although GAE is blocked in China. AWS works, and has a free tier that
will get by for a year, but doesn't give you free certificates.

Also, for a lot of newbies, installing SSL certificates is a PITA.

0\. You realize you need an SSL certificate. You're presented with a dizzying
variety of options and already lost. Are you supposed to get Positive SSL,
Negative SSL, Essential SSL, Comodo SSL, Start SSL, Wildcard SSL, EV SSL,
Rapid SSL, Slow SSL, or EV SSL aux Mille Truffles et Champignons? Most newbies
ask, "Why isn't there a simple [click here to get HTTPS certificate] button?"

1\. You get your certificates by e-mail, but you still can't install them
directly. Your webserver wants a .pem file, so you Google "How do I create a
PEM file". The top 10 tutorials tell you to concatenate THREE files:
your_domain_name.crt, DigiCertCA.crt, and TrustedRoot.crt in that order. What
you received by e-mail was FOUR files: AddTrustExternalCARoot.crt,
COMODORSAAddTrustCA.crt, COMODORSADomainValidationSecureServerCA.crt, and
your_domain_name.crt. You're lost, no tutorial is helping with what to do with
FOUR files instead of THREE, in what order to concatenate them, and
StackOverflow bans your question. You're fed up, quit, and use HTTP. (Not me;
I'm describing an actual case of observing someone else's frustration trying
to set up HTTPS.)
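For what it's worth, the rule those tutorials are dancing around is simple once stated: leaf certificate first, then each intermediate from closest-to-the-leaf upward, root last (and usually optional, since clients already ship it). A sketch using the filenames from the anecdote above; the intermediate ordering shown is my best-guess reading of that particular Comodo chain, not gospel:

```python
# Assemble a PEM bundle: leaf first, intermediates ordered from
# closest-to-the-leaf upward, root last (often omitted).
CHAIN_ORDER = [
    "your_domain_name.crt",                         # leaf
    "COMODORSADomainValidationSecureServerCA.crt",  # intermediate
    "COMODORSAAddTrustCA.crt",                      # intermediate
    "AddTrustExternalCARoot.crt",                   # root (optional)
]

def build_bundle(read=lambda name: open(name, "rb").read()):
    # `read` is injectable so the ordering logic is testable without
    # the actual .crt files on disk.
    return b"".join(read(name) for name in CHAIN_ORDER)
```

With real files on disk, `open("bundle.pem", "wb").write(build_bundle())` produces the `.pem` the webserver wants.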

The only way HTTPS will gain popularity is if we can get rid of the
certificate-issuing economy and make it _easy_ for newcomers. The majority of
content creators unfortunately do not understand the basics of security nor
can we expect them to have the patience to learn it.

~~~
FullyFunctional
That. I have managed to get through the steps but there was nothing simple
about it and I wouldn't be able to replicate it without looking it all up
again. Unfortunately, SSL isn't even an option on my el-cheapo shared
webhosting service (as I understood it, it needs a dedicated precious IPv4
address).

Contrast this with Dan J. Bernstein's wonderful CurveCP. It's so simple to set
up and requires no CA involvement; you just need to be able to add an NS
entry.

~~~
yellowapple
> as I understood it, it needs a dedicated precious IPv4 address

A _good_ webserver can actually serve multiple SSL certs from a single IP
address by using "Server Name Indication" (SNI). This is definitely (as far as
I know) supported by nginx, and probably supported by Apache's httpd.
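To illustrate what SNI buys you: one listening socket, many certificates, selected by the hostname the client volunteers in its ClientHello. A minimal sketch using Python's stdlib `ssl` module (hostnames and cert paths here are made up; `sni_callback` needs Python 3.7+); nginx's `server_name` blocks do the same job declaratively:

```python
import ssl

# One SSLContext per hostname we serve; the client's SNI value picks
# which certificate actually answers the handshake.
contexts = {}   # hostname -> ssl.SSLContext loaded with that host's cert

def load_host(hostname, certfile, keyfile):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile, keyfile)
    contexts[hostname] = ctx

def sni_select(ssl_socket, server_name, default_context):
    # Invoked mid-handshake with the hostname from the ClientHello.
    # Swapping the socket's context swaps the certificate presented.
    if server_name in contexts:
        ssl_socket.context = contexts[server_name]
    # Unknown or absent SNI: fall through to the default certificate.

# Wiring it up (paths hypothetical):
# default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# default_ctx.load_cert_chain("default.pem", "default.key")
# load_host("a.example", "a.pem", "a.key")
# load_host("b.example", "b.pem", "b.key")
# default_ctx.sni_callback = sni_select
```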

------
gbl08ma
I'm not saying it's a bad idea, but the thought that once HTTP is deprecated
I'll need to go through extra effort to manage the myriad of devices in my LAN
is bothering me. The configuration interfaces of these devices (routers,
printers, IP cameras, etc.) mostly don't support HTTPS. Most never had updates
available and certainly never will. We're not just talking about 5+ year old
devices (which despite their age still function as designed): none of the most
recent routers I bought support it, except if flashed with some powerful and
well maintained alternative firmware.

I'm already annoyed enough by firmware update tools that only work with a
specific old version of Internet Explorer... I'd rather not throw dozens of
devices away just because browsers started to insistently refuse HTTP
connections. Let's not even talk about the fact that I often spin up simple
HTTP servers on my computer just to be able to transfer files to other
machines on the same network, and I'd rather not have to worry about creating
certificates (only to then find out that the machines in question are
"obsolete" too and don't support HTTPS?).

Can we just consider HTTP to be like telnet? Old, super insecure, definitely
not meant to be used by everyone daily and definitely not over the Internet,
perhaps not even available/installed by default, yet super compatible and
simple to implement both for the client and the server.

I really hope that should mandatory HTTPS become a thing (and I'm not saying
it shouldn't), an exception is added for local area connections.

~~~
higherpurpose
This just makes me think that what we need is not an "improved HTTPS", but to
rip out TCP/IP and encrypt everything at that level.

Now we only need to convince either Google or Microsoft to implement something
like that in their OS and _encourage/mandate_ its use. Only these two matter
because they own the biggest platforms.

Yes Apple has a pretty big platform, too, but it's kind of irrelevant since
it's a closed Apple-only ecosystem anyway so if it adopts something it doesn't
mean the others will too. On the other hand, if either Android or Windows
adopts something as major, I think the other one would, too.

What I'm thinking is something like MinimaLT or perhaps Trevor Perrin's
"Noise" if it ever becomes real.

[http://cr.yp.to/tcpip/minimalt-20131031.pdf](http://cr.yp.to/tcpip/minimalt-20131031.pdf)

------
wyldfire
I find myself with some cognitive dissonance. This makes perfect sense.

But selfishly: I've worked for many corporations that operate an HTTP proxy
whose traffic they scrutinize. They also proxy HTTPS, which they cannot
inspect without detection, so I feel comfortable that they are not doing so.
(Yes, I periodically review the certificate store for changes.)

If the vast majority of sites were https, they might decide to do MITM for
those https connections and either instruct everyone to ignore the warning or
install their own certs on all of the computers. Indeed I think many
corporations may already do this. I would probably not use many websites if
they made that change.

~~~
jfoutz
Wouldn't the corporation install its own root certificate in that case? You
can MITM without notification. Presumably they could tack it on after
workstation install.

If they're already going through the trouble of monitoring everyone's traffic,
a few extra steps don't seem like that big of a hassle.

Of course, you're right, some large fraction won't bother with their own root
cert, and their users will learn many bad habits.

~~~
wyldfire
> Wouldn't the corporation install its own root certificate in that case?

Yeah, but I can audit the certificate store (and I have). On some systems, I
am granted local administrator privilege. I wouldn't take out any certs that
they install, but if I found that they installed and/or used one, I'd probably
stop using most public websites (at least the ones that I have a
username/password with).

------
exelius
All I ask is please don't block non-HTTPS sites from browsers. I don't want to
have to set up a valid SSL cert for every development environment I work with.

~~~
tomjen3
I see no reason not to blindly trust localhost - there is no way to MITM the
connection anyway.

~~~
mikegioia
That's interesting. I've never tried so I don't know, but can you issue an SSL
cert to CN "localhost" or does it need a fully qualified domain name?

~~~
notatoad
I don't think any CA is going to give you a cert for localhost, but OpenSSL
is happy to make a self-signed cert for random single-word CNs, and Chrome
will trust them if you tell it to.
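The OpenSSL incantation alluded to here is a one-liner; below it is wrapped in a small Python helper so the flags are visible. Filenames, key size, and validity period are arbitrary dev-time choices of mine, not requirements:

```python
import subprocess

def self_signed_cmd(cn="localhost", days=365,
                    certfile="dev.crt", keyfile="dev.key"):
    # Build the classic openssl one-liner for a throwaway self-signed
    # cert: fresh RSA key, no passphrase, any CN you like ("localhost"
    # included).
    return [
        "openssl", "req", "-x509", "-newkey", "rsa:2048",
        "-nodes",                      # don't passphrase-protect the key
        "-keyout", keyfile, "-out", certfile,
        "-days", str(days),
        "-subj", "/CN=" + cn,          # single-word CN works fine
    ]

def make_self_signed(**kwargs):
    subprocess.run(self_signed_cmd(**kwargs), check=True)
```

Run `make_self_signed()` once, point the dev server at `dev.crt`/`dev.key`, and tell the browser to trust it.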

------
maxk42
Nobody has brought this up yet:

If we deprecate HTTP, then certificates can be used as tools of censorship by
governments.

We require an alternative to the current HTTPS scheme first.

~~~
userbinator
...and the governments will then be virtually the only ones who can still
inspect traffic, since they can still get CAs to issue certificates for them.
Moving everything to HTTPS dependent upon CAs only makes the whole Internet
more centralised, which is horrible for privacy and anonymity.

They will all say it's "for your security", and arguably having this
centralised root of trust does lower risk of attacks from random groups, but
at the same time it's allowing them more power. To CAs and governments, a lot
of things around encryption seem to be oriented in the direction of "if we
don't have control over it, we disapprove." Self-signed certs are only one
example of this; see all the other issues surrounding mobile device
encryption. It's only good encryption to them if they are the ones in control
of it.

Quite frankly, to me it seems random hacker groups have a lower chance of
cooperating and tracking you in the same way that governments can.

I believe the relevant quote in this situation is again the classic "those who
give up freedom for security deserve neither."

------
brlewis
It's unfortunate that this is necessary. I like the concept of decentralized
cache systems, but in the future a CDN will be the only option for low latency.

~~~
Animats
That's what bothers me. HTTPS Everywhere means MITM Everywhere. Terminating
HTTPS connections at a CDN means the CDN is the man in the middle. You don't
have to attack secure servers that handle important stuff any more; just get a
backdoor into Cloudflare. This makes "authorized law enforcement access"
easier than ever.

On top of that, HTTP2's "it all has to go through one hole" approach means
that CDNs become almost mandatory for big sites. The Web is becoming a lot
more centralized. It's more secure against random attackers, but much less
secure from inside jobs at CDNs, authorized or not.

The Great Firewall of China people must love this.

~~~
jerf
HTTPS has never guaranteed that you were speaking directly to a given entity.
In the case of a corporation, what does that _really_ mean, anyhow? Your web
request is being handled by the CEO him- or herself?

It only means you are speaking to a device or set of devices that the
certificate-holding entity has authorized to speak for them. No certificate
technology can prevent a company from delegating authority to another entity's
devices.

It isn't that this isn't a problem... it's that there is no way in which
certificates ever solved it, nor a way in which they could, and there's no
real alternative. Whenever you're talking to X.com, you are almost certainly
also talking to a third-party web stack, for instance, which means that trust
has been delegated by the certificate holder to some other party's software.
There's hardly a website around that doesn't have a whackload (technical term)
of third parties already in the connection anyhow.

The certificate-holding entity is ultimately _responsible_ for what they do
with your trust. But the certificate can do nothing to constrain those
actions. It's just a glorified number with some other glorified numbers
attached to it.

It does seem to me, though, that HTTP2 should actually make it _easier_ to do
without a CDN in the end. Initial HTTP2 support will just be "HTTP1,
but on HTTP2!", which really provides minimal advantages over HTTP1, but over
time as we see web frameworks start to take direct advantage of being able to
push down resources preemptively, the advantages of CDNs for all but the
largest sites start to fade. (Perhaps not "eliminated", but certainly
lessened.)

(Incidentally, as people will presumably start releasing HTTP2 benchmarks
soon, keep an eye on the details. Embedding HTTP1 inside HTTP2 is not the
interesting performance question and will never have big gains... the correct
question to investigate is what are the gains to be had from _fully_ using
HTTP2 natively. Many SPDY benchmarks had the same problem... of course SPDY
isn't faster if it's still essentially speaking HTTP1 to the target website.)

------
dragonwriter
Generally, I think this is a bad approach. I think that configurable blocking
of HTTP with user-controlled (or, for managed environments, policy-controlled)
opt-in to allow HTTP for safe domains makes some sense. But not deploying new
features for HTTP and limiting existing features on HTTP does not; true, one
could set up a root CA and deploy certificates and manage TLS for internal,
including local-box, testing, etc., but it doesn't make sense for browsers to
require it.

Configurable blocking of HTTP provides all the benefits of HTTP deprecation
without the adverse side effects for the situations where HTTPS is an
unnecessary headache, so it should be preferred.

------
0x0
The biggest downside I see is that it would no longer be possible to host a
simple website, without also having to publicly expose the vastly increased
attack surface that the OpenSSL code base brings to the table.

------
pilif
If this is a very long-term plan, I'm all up for it. In the short term, we're
just not quite where we need to be in order to enable SSL even on less-
important sites or sites that don't require any credentials.

From a hardware perspective, I'd say that by now the problem is actually
solved. Hardware is powerful enough to handle SSL connections.

What's currently tricky is that too big a chunk of clients still doesn't
support SNI, which really doesn't go well with the increasing scarcity of IP
addresses.

Needing one IP address per unique domain name, aside from the administrative
overhead (multi-homing is still somewhat inconvenient), will just not be
feasible as the cost of IP addresses starts to skyrocket.

This will be fixed by either IPv6 growth (you wish) or the death of non-SNI
systems, but it'll be years if not decades before we can ignore XP and Android
2.3, especially as there's no good fallback path for these clients as the SSL
negotiation (and subsequent hostname validation failure) happens way before
the host could react.

~~~
davidjgraph
"too big a chunk of clients still doesn't support SNI"

I'm not sure that's true today, what's your list?

XP is only a problem when using IE. Android 2.2 and 2.3 were well under 10%
in the last numbers I saw. We are very close to where we can make a
greater-good argument.

~~~
pilif
Windows XP, Android 2.3 and many scripts running on various old but still
supported Linux Distros (RHEL 6 for example) and, of course, the Bing Bot.

------
mark-r
Doesn't HTTPS require a dedicated IP address? With the exhaustion of the IPv4
address space, the timing couldn't be worse.

~~~
Dylan16807
No, they fixed that. The hostname is sent before the cert is chosen.

~~~
rstupek
No it isn't fixed for all devices. Windows XP is one example and even though
it is no longer supported by Microsoft, at least 17% of devices on the
internet are still using it.

~~~
Dylan16807
That's not a problem with XP, that's a problem with people that use obsolete
versions of IE.

Note that the subject here is talking about what to do in future browser
versions, so IE8 never comes into the picture.

------
api
I agree with some of the skeptical comments here, and wanted to add one more
use case they are not thinking about:
[http://localhost:NNN/](http://localhost:NNN/)

There are many cases where you might want a service on localhost to be
reachable via HTTP. Technically this should be exempt from https rules
entirely, since who would the man in the middle be? If someone can MITM
127.0.0.1, they own your box already.

It is _impossible_ to get https certificates for localhost/127.0.0.1, making
SSL with an approved cert impossible for local services.

------
IgorPartola
I am wondering if this will go anywhere. It seems to me that if anyone would
be willing to do this it would actually be the Chrome team, not Mozilla.
Mozilla currently lacks the market share to pull this off, and lately their
moves have been more targeted in a different direction.

Having said that, I am really happy others are thinking along the same lines
as I am: HTTP should be relegated to a legacy protocol, and the warnings need
to be very similar to what you get when accessing a site secured by a
self-signed cert.

------
aidenn0
The recent disabling of TLS1.0 on so many sites means I'm finally going to
have to upgrade my cell-phone, as there aren't any TLS1.1+ supported browsers
for PalmOS.

~~~
MrRadar
What sites are disabling TLS 1.0? According to the SSL Pulse report[1] it's
the most-supported version of SSL/TLS by far (99.7% support). TLS 1.1 and 1.2
are disabled by default in older versions of Internet Explorer so not
supporting 1.0 would lose those viewers which is enough reason to keep it
enabled for almost all HTTPS sites.

[1] [https://www.trustworthyinternet.org/ssl-pulse/](https://www.trustworthyinternet.org/ssl-pulse/)

~~~
aidenn0
Sorry, my mistake, it's SSL 3; I can't even load the Qualys SSL client test.

------
zaroth
What about devices I access via IP address which have no DNS. Like my home
router, or my printer, or my iPhone when I'm connecting to an admin web
interface, or my NAS...?

Obviously I'm not going to set up a home domain, DNS, and SSL certs on all
these devices just so they can use the latest HTTP/2.x features in their admin
panels or user interfaces?

------
fiatjaf
What the hell. Someone please stop this madness.

------
nadams
I know Google wants to do something similar with Chrome.

I have so many comments - but the first being: don't they realize that they
will have to make this an option, which will be disabled by most enterprises?
------
jackreichert
In order for this to really take off, a campaign to get the popular basic
plug-and-play hosting providers to install free/cheap SSL would be needed.
------
malkia
Do child protection software filters work with HTTPS?

~~~
zurn
Child protection filters would work much better as browser extensions.

~~~
nfoz
Why?

I'm not familiar with how those typically work, but intuitively from an
architecture standpoint, that sounds like something that should sit as a
filter between the browser and the outside world. Browser as monolith of
functionality seems undesirable.

~~~
AlyssaRowan
Trusted filters (such as ad blockers, parental censorware, network monitoring)
must be _inside_ the trust boundary to be effective. That is the best way
to ensure they are not imposed _en masse_ against people's will.

In-path filters outside the trust boundary are, I'm afraid, the very first
casualty in our efforts to mitigate the threats of nation state adversaries,
as they resemble the attacks used there too much to survive. I, for one, will
not mourn them.

------
baby
In my opinion:

* A big warning sign when the site is not using TLS, but not a pop-up

* No warning when the site is using TLS

That would be a correct way to force websites to use TLS.

------
fiatjaf
First .google domains, now this? We should stop using this stupid thing named
DNS.

[https://gnunet.org/gns](https://gnunet.org/gns)

