
Securing Web Sites Made Them Less Accessible - davezatch
https://meyerweb.com/eric/thoughts/2018/08/07/securing-sites-made-them-less-accessible/
======
013a
Large tech companies seriously do not care. They say they do, and they point
to all these heuristics and optimizations, they point to Chrome's dev tools
where you can simulate slow connections, etc. Great.

The problem is, they're taking an experience that is fundamentally
ridiculously heavy, and then spending thousands of man-hours trying to
optimize it. No one even _considers_ that maybe it's the experience itself
that is too heavy, and that no amount of optimization can fix that.

Take the YouTube home page. Load it up, and you'll find it's making over 200
requests, transferring megabytes of data. Google's most obvious solution:
let's speed up TLS, make each request go faster, let's invent new image and
video compression algorithms to shrink each response, let's batch requests to
reduce latency; technology, complexity, more code, more code.

No one actually takes a step back and asks whether the YouTube home page
should make 200 requests at all. What if it only made 20? We gotta load some
thumbnails, so there's bound to be a lot of requests there, but otherwise what
the heck is all this JS?

TLS on one request isn't the problem. The problem is the hundreds of requests
a typical website leans on.

Uncomfortable opinion: The only reason the internet has survived for so long
is Moore's Law. We've developed all of this technology and SDLC process in an
era where another 20% jump in performance is just around the corner, so who
cares if it's slow today. Yeah, that era is done. And we, as an industry, are
completely fucked. It's not an overstatement to say this is a "back to the
fundamentals" moment, and it's going to cost us billions of collective
engineering dollars to get through it.

~~~
Footkerchief
The industry is fucked because it's focusing on making core technologies
better and faster instead of optimizing specific websites? I couldn't disagree
more.

In 10 years, the fixes made to existing websites will have been replaced with
a new set of inefficiencies, but the benefits and drawbacks of today's
physical infrastructure, protocols, and algorithms will endure.

~~~
013a
The industry is fucked because we've had 30 years of "X at any cost"
mentality, which has permeated product managers, developers, the C-suite,
anyone technical.

X can be a few things. Growth. Data. Analytics. Usage. Engagement. Security.
But the point is the "at any cost" part, because we rarely consider the costs
of the engineering decisions we make across any of these.

Over 30 years, we've been able to sustain this because there's always been
"more" around the corner. Facebook and Snapchat can sustain "growth at any
cost" because there's always more people in some (moving target) country that
the service hasn't hit yet. JS, Python, or Ruby is used to support
"development speed at any cost" because we'll always have faster computers
next year. Google wants "more data at any cost" because there will always be
new applications of AI to find value in it, there will always be advancements
in storage technology to store it, and users will never care that we're
slurping up so much.

But... none of this is true over a long enough timeframe. And when this idea
of "the cost doesn't matter because we'll figure that out later" permeates the
core of our tech stacks and people, pivoting away from it is ridiculously
difficult, especially when the negative changes are beginning to happen so
quickly.

Don't get me wrong; we need the people working on optimizing TLS, new media
compression, leaner HTTP request formats, all that. But I'm saying that we
need more; we need a mind-shift in the way we build software and our software
companies.

I don't know what that should look like, but I think what we'll discover is:

* Keep it simple.

* Abstraction layers kill performance.

* Making it work exactly how you want is less important than it working,
working fast, and working for everyone regardless of physical, technical, or
geographical limitations.

* Backward compatibility might just have to be sacrificed every once in a while.

------
Spivak
The complaints seem to be two-fold:

* Websites are big and take a long time to download.

* They had previously solved this problem with a caching server but theirs broke with TLS.

The author is apparently unaware of the options they have.

* Run a proxy server that caches pages. Basically all software supports proxies. Secure and well understood. No PKI to manage. And you can allow access to the proxy with no TLS or get a public cert. Works very well with those old devices too.

* Run an HTTPS caching server and add its CA to the client systems. A little more effort, but transparent.
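The second option is roughly what tools like mitmproxy automate (mitmproxy is
my own example here, not something from the parent). As a hedged sketch of the
idea, assuming mitmproxy 7+ and clients that trust its generated CA, a tiny
caching addon might look like this (cache headers, expiry, and eviction are
all ignored for brevity):

```python
# cache_addon.py -- naive in-memory cache for GET responses (sketch only).
# Run with:  mitmdump -s cache_addon.py
from mitmproxy import http

cache = {}  # pretty_url -> (status, headers, body)

def request(flow: http.HTTPFlow) -> None:
    # On a hit, answer from the cache and skip the upstream round trip.
    if flow.request.method == "GET" and flow.request.pretty_url in cache:
        status, headers, body = cache[flow.request.pretty_url]
        flow.response = http.Response.make(status, body, headers)

def response(flow: http.HTTPFlow) -> None:
    # Remember successful GET responses (ignores Cache-Control on purpose).
    if flow.request.method == "GET" and flow.response.status_code == 200:
        cache[flow.request.pretty_url] = (
            flow.response.status_code,
            dict(flow.response.headers),
            flow.response.content,
        )
```

Clients then use it like any other proxy; the CA it generates is what you'd
distribute to the old devices.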

~~~
zerkten
> The author is apparently unaware of the options they have.

Do they really have the options you outlined? The author appears to be
dealing with a situation involving students in rural Uganda. It might not be
as simple
in that context as you suggest.

~~~
marsokod
I thought that ISPs were providing that service as part of the package. Given
that they already do a lot of IP traffic alteration (acknowledging TCP packets
at the gateway level instead of letting the client do it), this would not be
much of a stretch.

------
howard941
> Lots of things along those long and lonely signal paths can cause the
> packets to get dropped. 50% packet loss is not uncommon; 80% is not
> unexpected.

TCP doesn't perform well with 5% packet loss. 50% packet loss coupled with the
tremendous latency of the link makes it close to useless. The long and lonely
signal path needs a link-layer protocol between the terminals and the
satellite. Unfortunately, it's probably a bent-pipe transponder.
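To put rough numbers on that: the Mathis et al. approximation for steady-state
TCP throughput is MSS/RTT * C/sqrt(p). It assumes small, random loss, so the
high-loss figures below are optimistic at best, but it still shows why a
~600 ms geostationary round trip plus heavy loss is crippling (this is my own
back-of-the-envelope, not from the parent comment):

```python
# Mathis et al. approximation: throughput <= (MSS / RTT) * (C / sqrt(p))
from math import sqrt

MSS_BITS = 1460 * 8   # typical maximum segment size, in bits
C = 1.22              # constant from the Mathis model

def tcp_throughput_bps(rtt_s: float, loss: float) -> float:
    return (MSS_BITS / rtt_s) * (C / sqrt(loss))

for loss in (0.0001, 0.05, 0.5):
    kbps = tcp_throughput_bps(0.6, loss) / 1000   # 600 ms RTT
    print(f"loss {loss:>7.2%}: ~{kbps:7.1f} kbit/s per connection")
```

By this crude estimate a single connection tops out around 100 kbit/s at 5%
loss and a few tens of kbit/s at 50%, before you even count the TCP and TLS
handshakes.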

------
blakesterz
"Even in the highly-wired world, you can still find older installs of
operating systems and browsers: public libraries, to pick but one example.
Securing the web literally made it less accessible to many, many people around
the world."

I'm a librarian and I talk to hundreds of other librarians a year about
technology and security and all this stuff. In my experience it is incredibly
rare to find a library that is running anything THAT old.

~~~
GunlogAlm
Even here in rural Devon in England, nothing running in our libraries is
particularly old, and there's nothing truly "ancient" left at all.

~~~
btrettel
Ancient can be nice in some instances. I had Apple IIs in elementary school in
the late 90s and to be honest I'd prefer that for elementary school kids over
something more modern. It has fewer opportunities for distraction.

------
lousken
While I don't have to deal with high latency, I do have to use 64kbit from
time to time, so I've developed some processes to deal with this.

\- Wiki, old.reddit, most news sites, GitHub, etc.: CSS and JS are loaded from
my Tampermonkey scripts, and I update them every half year or so. Then in
uMatrix I block loading CSS and JS from their servers so that only my own is
loaded (this also has the benefit of allowing custom themes/fixes).

\- Google (YouTube) and Facebook sites are a major pain. You can use
youtube-dl to download videos, and you can even do some basic search like
`youtube-dl ytsearch5:keyword --get-title --get-description`, but I haven't
researched whether there are any better YouTube-alternative sites because on
64k it's unusable anyway. Otherwise, using mobile apps instead of the sites is
the only option here, because Google changes these assets quite a lot and the
compression/obfuscation changes the names of CSS classes.

\- Use RSS (Inoreader) as much as possible. With RSS you can get all the
updates, and Inoreader in particular has a neat feature called "Load mobilized
content" which grabs only the text from the site and sends it back. I also use
it for my YouTube subscriptions.
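If you want that "text only" behaviour locally, without relying on Inoreader,
a rough sketch with the third-party feedparser package does most of it (the
feed URL is just an example):

```python
# Pull a feed and print titles + summaries; no page CSS/JS ever loads.
# Requires:  pip install feedparser
import feedparser

feed = feedparser.parse("https://hnrss.org/frontpage")  # any RSS/Atom URL
for entry in feed.entries[:10]:
    print(entry.title)
    print(entry.get("summary", "")[:200])   # first bit of the text
    print(entry.link)
    print()
```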

------
zaarn
Middleboxes can intercept and cache HTTPS; you need to operate your own CA,
however (some middleboxes can do this fairly automatically, and it's fairly
touch-and-go).

------
3pt14159
It is a real problem. While traveling last summer and working remotely I
experienced it first hand.

Is there a really easy way of mimicking all the effects of this type of
latency so I could periodically test the stuff I set up?

Also, if it is just HTTPS, then it is possible to proxy through something that
downgrades the protocol, but it feels dirty.

~~~
callahad
> _Is there a really easy way of mimicking all the effects of this type of
> latency so I could periodically test the stuff I set up?_

Your browser's developer tools can simulate latency and constrained bandwidth,
at least in Firefox and Chrome. Firefox instructions:
[https://developer.mozilla.org/en-US/docs/Tools/Responsive_De...](https://developer.mozilla.org/en-US/docs/Tools/Responsive_Design_Mode#Network_throttling)

At a system level, Clumsy
([https://jagt.github.io/clumsy/](https://jagt.github.io/clumsy/)), Comcast
([https://github.com/tylertreat/comcast](https://github.com/tylertreat/comcast)),
and Network Link Conditioner
([https://nshipster.com/network-link-conditioner/](https://nshipster.com/network-link-conditioner/)) are relatively
user-friendly and work at a lower level. Okay, Comcast isn't _as_ user
friendly, but it has a really cheeky name. Also, the GIF on Clumsy's homepage
is brilliantly well-done.

Apparently Charles
([https://www.charlesproxy.com/](https://www.charlesproxy.com/)) and Fiddler
([https://www.telerik.com/fiddler](https://www.telerik.com/fiddler)) can also
simulate bad connections, if you're already using one of those tools.
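If none of those fit, a crude DIY option is a localhost TCP relay that injects
delay into everything it forwards. This is just a hedged sketch using Python's
standard library (host, port, and delay are made-up values); it only simulates
latency, not loss or reordering:

```python
# Point a client at localhost:8888; traffic is relayed to UPSTREAM with
# roughly 600 ms of added delay in each direction (~1.2 s extra RTT).
import asyncio

UPSTREAM = ("example.org", 80)   # hypothetical target host/port
DELAY_S = 0.6                    # one-way delay to inject

async def pump(reader, writer):
    while data := await reader.read(65536):
        await asyncio.sleep(DELAY_S)   # inject latency per chunk
        writer.write(data)
        await writer.drain()
    writer.close()

async def handle(client_r, client_w):
    remote_r, remote_w = await asyncio.open_connection(*UPSTREAM)
    await asyncio.gather(pump(client_r, remote_w), pump(remote_r, client_w))

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 8888)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```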

> _Also, if it is just HTTPS, then it is possible to proxy through something
> that downgrades the protocol, but it feels dirty._

Not necessarily. Consider HSTS, HPKP, Expect-CT, etc.

~~~
3pt14159
Thanks for these performance links, they're great.

> Not necessarily. Consider HSTS, HPKP, Expect-CT, etc.

Yeah, I'm aware of those (and use them on my own site) but the reality is that
the vast majority of content sites out there do not use HPKP. Even if they use
HSTS, many do not preload, and in the worst case the MITM can just switch the
domain to something like google.unsecure.com.

------
dredmorbius
The problem is caching, trust, and delegation. Too many proxy tools simply
don't play well with SSL/TLS, and yes, there is good cause to not trust public
infrastructure and ISPs, so HTTPS itself _is_ desirable.

There's also the problem, generally, of one-size-fits-all security so far as
websites are concerned: there's really very little content I receive that's
specific to me, and much of that is Hacker News and a few other forum sites.
The content itself is almost wholly public. But I cannot cache or otherwise
proxy this.

Locally, I've set up both Squid and Privoxy, mostly for shins and grits, but
also to explore the use and viability of proxies these days.

Squid caches less than 10% of my traffic.

Privoxy can filter by hostname, but little within pages -- no path or content
actions work for HTTPS URLs.

I've looked at the SSL options of each -- Privoxy seems a lost cause, but
Squid looks as if it _should_ be able to MITM TLS traffic, though I can't sort
out how, or sensibly verify it. And I understand browsers will start screaming
bloody murder if they detect this as well.

The notion of a trusted delegated proxy seems potentially useful. As with the
author of the article, I'm wondering whether there is any movement toward
developing HTTPS-friendly proxy tools in a sane manner.

~~~
Spivak
> Squid caches less than 10% of my traffic.

Well duh, ideally Squid without a TLS MITM would cache 0% of your traffic.

> And I understand browsers will start screaming bloody murder if they detect
> this as well.

Sorta, you need to configure your clients to accept your proxy's CA but
browsers should be good after that.

------
seangrogg
Couldn't a (relatively small) proxy server be set up that intercepts the
request, checks URL+cookies against a cache, and makes its own HTTPS requests
on cache misses? Maybe even whitelist cacheable domains so you reduce how fast
it fills?

I haven't really caffeinated yet so I may be missing something important
here, but this seems like a few hours' worth of work?
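Something naive along those lines is indeed only a screenful of code. A
hedged, stdlib-only sketch of the idea -- the `/https://...` path convention
is my own invention, and expiry, the domain whitelist, and error handling are
all left out:

```python
# HTTP-facing fetch-through cache: keys on URL (+ Cookie header), makes its
# own HTTPS request upstream on a miss, then replays the cached body.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
import urllib.request

CACHE = {}  # (url, cookie) -> (status, content_type, body)

class CachingFetcher(BaseHTTPRequestHandler):
    def do_GET(self):
        url = self.path.lstrip("/")          # e.g. /https://example.org/page
        key = (url, self.headers.get("Cookie", ""))
        if key not in CACHE:
            with urllib.request.urlopen(url) as resp:   # upstream HTTPS fetch
                CACHE[key] = (resp.status,
                              resp.headers.get("Content-Type", "text/html"),
                              resp.read())
        status, ctype, body = CACHE[key]
        self.send_response(status)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

ThreadingHTTPServer(("0.0.0.0", 8080), CachingFetcher).serve_forever()
```

The hard parts the sketch dodges are the ones this thread is really about:
deciding what's safe to cache, and getting clients to trust the box in the
middle.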

------
mg794613
A bit of a clickbait title, but let's entertain the idea for a second. This is
another one of those articles that claim securing everything is a bad idea
simply because their routine got shaken or altered. What is his alternative,
then? Remove it so you have a 'slightly less slow' experience? (Although, to
be fair, he states he does not know the solution.) It's like complaining that
trams are less accessible because they have doors instead of being just a
platform with wheels. Sure, through that lens you are right. But you are also
willingly ignoring all the other facts. The problem here is super slow
internet, not encryption. And no matter how hard and how often the security
naysayers repeat it, that does not make it a valid reason to roll back. Slow
internet in Africa needs to be solved for a multitude of reasons. Not one of
them is 'experience'.

------
gnode
With the advent of LEO communication satellite constellations enabled by
miniaturisation and reduced launch costs, hopefully the problem of satellite
Internet access's extremely high latency should go away in the future. I
expect the more localised signal could improve the bandwidth and cost too.

That said, I'm sure the weight of typical pages will keep growing by leaps
and bounds, as it has been doing.

------
souterrain
Plain text websites load just fine with HTTPS/TLS via very-small-aperture
terminal (VSAT) ISPs. This is based on my experience with 1024k/256k Hughes
service in rural US.

------
User23
Making systems less accessible is the defining characteristic of computer
security. Steve Yegge addresses this very point cogently in one of his old
rants.

------
baronswindle
This may be extremely naive, but why wouldn't a service like AWS CloudFront or
any other CDN that supports TLS solve this problem?

------
rtkwe
There's also the question of whether HTTPS even makes sense for most sites.
Why bother with the extra security overhead for a simple blog with no user
login and a basic comment system? Surely there's a middle ground between
preventing man-in-the-middle attacks for simple content sites and creating a
complete bidirectional encrypted connection, right? Signed content hashes,
maybe?

~~~
throwawaymath
_> Why bother with the extra security overhead for a simple blog with no user
login and a basic comment system?_

Can you clarify what you mean by "security overhead"? In 2018, there is
virtually no performance difference between HTTP and HTTPS. For example, see
performance evaluations and comparison overviews [1] and [2].

It's also very easy to set up HTTPS. If you have the technical capability to
administer your own Apache or Nginx server, setting up HTTPS doesn't require
much more configuration. Let's Encrypt is straightforward to use. If you're
setting up a blog on WordPress or Squarespace, it's a matter of flicking a
switch (or the choice has been conveniently made for you).

As for your idea regarding content digests, how do you propose to implement
this? They need to be signed, sure, because an attacker intercepting my
connection could just supply their own hash digest corresponding to the
content. But then where do I obtain the public key corresponding to the site?
How do I grab that key in such a way that I know it's the correct one? You'll
find that any protocol you can devise to satisfactorily resolve this problem
will substantially reimplement TLS itself.
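For what it's worth, the signing part really is the easy bit. A toy sketch
with Ed25519 via the third-party `cryptography` package (my own illustration,
not a proposal) -- the open question is exactly the one above, how the reader
obtains `public_key` and knows it's genuine:

```python
# Sign a public page and verify it; the crypto is trivial, key distribution
# is the part that ends up reinventing certificates/PKI.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # held by the site
public_key = private_key.public_key()        # must reach readers untampered

page = b"<html>...public blog post...</html>"
signature = private_key.sign(page)           # published alongside the page

# Reader side: raises InvalidSignature if page or signature was altered.
public_key.verify(signature, page)
print("content verified")
```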

____________________________

1\. [https://www.httpvshttps.com](https://www.httpvshttps.com)

2\. [https://istlsfastyet.com](https://istlsfastyet.com)

~~~
Avamander
> 1\. [https://www.httpvshttps.com](https://www.httpvshttps.com)

To be fair, this is in conjunction with HTTP/2.

