
Securing web sites with HTTPS made them less accessible - jwfxpr
https://thenextweb.com/contributors/2018/08/19/securing-web-sites-with-https-made-them-less-accessible/
======
r3bl
Original article discussed here (87 points, 46 comments):
[https://news.ycombinator.com/item?id=17707187](https://news.ycombinator.com/item?id=17707187)

------
Freak_NL
> Google, Wikipedia

Caching Google makes no sense beyond some static resources. Wikipedia can be
made available off-line, just not by using a man-in-the-middle server. Making
it available off-line may make sense for a school with limited connectivity.

> That’s great for modern browsers, but not everyone has the option to be
> modern. Sometimes they’re constrained by old operating systems to run older
> browsers, ones with no service-worker support: a lab full of Windows XP
> machines limited to IE8, for example.

You don't have the option to run legacy browsers _and_ expect everything to
work.

Just don't use a legacy proprietary OS if you want to go on-line with it.
Either install a free (as in gratis, but libre makes sense too) operating
system, pay for the Windows upgrades, or scrap the computers.

~~~
mehrdadn
> Caching Google makes no sense beyond some static resources.

Really? So you cannot make any sense of the idea of caching, say, news
articles? Blog posts? Software documentation? StackOverflow Qs/As? The cached
pages are 100% useless in your mind?

~~~
majewsky
None of those are Google. The OP was probably thinking of SERPs (search engine
results pages).

~~~
mehrdadn
I think lack of sleep got the better of me there, sorry :\ but I still don't
see what the issue with caching search engine results is. Why shouldn't result
pages be cached? I would totally want to cache them locally when on such a
high-latency connection, especially when I can expect similar queries (like in
a classroom).

------
mehrdadn
Regarding the caching problem, is there no way to trust a locally signed
certificate that your caching server uses so that you can cache over HTTPS?

~~~
georgecalm
It’s possible, and it’s what Charles and other MITM proxies do. Specifically,
Charles generates its own certificate for each site and signs it with a
Charles Root Certificate. Clients of the proxy trust that root CA and continue
as usual. This does, of course, make browsing less secure, since a compromised
proxy negates the HTTPS security of all of the proxied websites. A slightly
more secure but less efficient caching setup is to install the proxy locally,
but then, of course, the cache isn’t shared.
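
For a concrete sense of what that looks like in practice, here is a minimal
sketch of a caching addon for mitmproxy, an open-source MITM proxy that, like
Charles, mints per-site certificates signed by its own root CA. The
whitelisted host is made up, the cache ignores Cache-Control entirely, and the
hook API assumes a recent (7+) mitmproxy release:

    # cache_addon.py -- a minimal sketch assuming mitmproxy 7+; run with:
    #   mitmdump -s cache_addon.py
    from mitmproxy import http

    # Hypothetical whitelist: only these hosts are cached; everything else
    # is proxied as usual.
    CACHEABLE_HOSTS = {"en.wikipedia.org"}

    # Naive in-memory cache: url -> (status code, body bytes, content type).
    # A real deployment would honour Cache-Control and persist to disk.
    _cache = {}

    def request(flow: http.HTTPFlow) -> None:
        # Answer cached GETs locally without touching the upstream network.
        if flow.request.method == "GET" and flow.request.pretty_host in CACHEABLE_HOSTS:
            hit = _cache.get(flow.request.pretty_url)
            if hit is not None:
                status, body, content_type = hit
                flow.response = http.Response.make(
                    status, body, {"Content-Type": content_type}
                )

    def response(flow: http.HTTPFlow) -> None:
        # Remember successful GET responses for later replay.
        if (
            flow.request.method == "GET"
            and flow.request.pretty_host in CACHEABLE_HOSTS
            and flow.response.status_code == 200
        ):
            _cache[flow.request.pretty_url] = (
                flow.response.status_code,
                flow.response.content,
                flow.response.headers.get("content-type", "text/html"),
            )

Clients on the network would import mitmproxy's root certificate (generated
under ~/.mitmproxy/ by default) and point their proxy settings at the machine
running mitmdump; hosts outside the whitelist are still proxied, just never
cached.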

~~~
mehrdadn
Can local certificates not be restricted to the few domains you want cached,
so that the user can be sure their browser isn't using the local proxy's
certificate for anything other than the sites approved on the certificate
(like Wikipedia)? I would have thought this was possible, but if not, I feel
like browsers should implement the ability to restrict local certificates to
specific sites. It shouldn't be hard. They'd obviously still have to upgrade
their browsers, but then they wouldn't have to twiddle their thumbs waiting
for every website to implement its own service workers (or never get around
to doing so).
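
For what it's worth, X.509 name constraints look like roughly this mechanism:
a local root CA that is only valid for an explicit set of domains, which
current Firefox and Chrome do enforce. A minimal sketch with Python's
cryptography package (assuming a recent version; the CA name and file names
are made up):

    # Sketch: generate a local root CA whose Name Constraints extension only
    # permits wikipedia.org and its subdomains. Certificates it signs for any
    # other domain should be rejected by clients that enforce name constraints.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Lab Caching Proxy CA")])

    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed root
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
        # The crucial part: this CA may only vouch for wikipedia.org and below.
        .add_extension(
            x509.NameConstraints(
                permitted_subtrees=[x509.DNSName("wikipedia.org")],
                excluded_subtrees=None,
            ),
            critical=True,
        )
        .sign(key, hashes.SHA256())
    )

    with open("lab-proxy-ca.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))
    with open("lab-proxy-ca.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))

The caching proxy would then sign its per-site leaf certificates with
lab-proxy-ca.key, and users would import only lab-proxy-ca.pem; how strictly
older browsers (the IE8-on-XP lab from the article, say) enforce name
constraints is exactly the kind of thing you'd have to verify first.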

~~~
georgecalm
Yes, come to think of it, that’s possible either manually or with a PAC file.
Whitelisting only the proxied sites whose weakened security you are willing to
tolerate seems like a decent compromise, too, at least when the alternative is
not being able to teach or learn in that area at all, as in Eric’s case.

------
moviuro
1\. Webpages are fat, and not many care [0]. Some recent HN submissions
compared page weight to the number of words in Moby Dick, and it's just as bad
as you'd think.

2\. Blaming HTTPS is stupid. Intercepting HTTP without the user knowing was a
bad practice to begin with. Setting up your own computer to use an HTTPS proxy
sounds reasonable, though I understand it's quite a PITA. Having the user
click through difficult and scary messages could be a good feature IMO:
"_Setting up an HTTPS proxy can compromise your information, such as bank
account numbers, your passport information, your religious and political
leanings, etc._" As for technological solutions for one's home or
organization: see Squid, which provides configuration examples [1]; a rough
client-side sketch in Python follows the footnotes.

[0] Wikipedia actually does care, as it makes heavy use of caching, and
text-only articles such as
[https://en.wikipedia.org/wiki/Project_Xanadu](https://en.wikipedia.org/wiki/Project_Xanadu)
come in at only 271 kB, while
[https://en.wikipedia.org/wiki/United_States](https://en.wikipedia.org/wiki/United_States)
is around 5 MB.

[1] [https://wiki.squid-cache.org/ConfigExamples/Intercept/SslBumpExplicit](https://wiki.squid-cache.org/ConfigExamples/Intercept/SslBumpExplicit)
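
As for the client-side sketch mentioned above: in Python it could look
something like this, where the proxy host, port, and CA path are all made up
and a Squid SslBump setup with an exported root certificate is assumed:

    # Sketch: send traffic through a (hypothetical) caching proxy and trust
    # only that proxy's root certificate for this session, instead of
    # installing it into the system-wide trust store.
    import requests

    PROXY = "http://cache.lab.local:3128"          # hypothetical Squid/SslBump host
    PROXY_CA = "/etc/ssl/certs/lab-proxy-ca.pem"   # exported proxy root certificate

    resp = requests.get(
        "https://en.wikipedia.org/wiki/Project_Xanadu",
        proxies={"http": PROXY, "https": PROXY},
        # verify= makes requests accept certificates minted by the bumping
        # proxy; certificates from any other CA will be rejected.
        verify=PROXY_CA,
    )
    print(resp.status_code, len(resp.content), "bytes")

Installing the proxy's certificate into the OS or browser trust store is the
system-wide equivalent, and that is exactly the step the scary warning message
should sit in front of.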

~~~
mehrdadn
> Intercepting http without the user knowing was a bad practice to begin with.

It baffles me that you assert this as an absolute truth. This is just your
opinion, shaped by the environment you've lived in. It is perfectly possible
that another person, just as sane and knowledgeable as you, would have
different priorities, especially when their experiences are different from
yours.

