
SSL considered bloated - xylon
http://www.naughtycomputer.uk/ssl_considered_bloated.html
======
marcosdumay
I was really expecting a serious discussion about useless and dangerous flags,
outdated encryption, expensive and dangerous renegotiations... Instead I got a
one-line complaint about "network traffic" (read up on the difference between
latency and bandwidth!), caching, and bad tooling (better tooling is out
there; go learn it).
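
To put rough numbers on the latency-vs-bandwidth point (all figures below are made-up but plausible):

```python
# Back-of-envelope: a TLS handshake's cost is dominated by round trips,
# not by the extra bytes on the wire. All numbers are illustrative.
rtt_ms = 100             # assumed mobile round-trip time
bandwidth_bps = 10e6     # assumed 10 Mbit/s link
handshake_bytes = 5000   # rough size of a full TLS handshake

transfer_ms = handshake_bytes * 8 * 1000 / bandwidth_bps  # serialization time
extra_latency_ms = 2 * rtt_ms  # classic TLS adds roughly two round trips

# The round trips cost 50x more time than the bytes themselves.
print(transfer_ms, extra_latency_ms)  # → 4.0 200
```

Making the handshake bytes smaller barely moves the total; removing round trips does.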

There are plenty of things to complain about in TLS, but the article touches
none of them. What a bummer.

~~~
gant
I'm personally still trying to figure out where the hordes of new users that
just install Apache are. In these times, if you can install Apache 2 on a
computer permanently connected to the internet you can probably also install
Caddy or Certbot.

------
king_magic
I really hope I'm not the only person who mentally groans whenever I see yet
another "X considered Y" clickbait title. It's the tech equivalent of "this
one weird trick" or "X Happened And You Won't Believe What Happened Next".

~~~
watwatwatwat
What's clickbaitish about the title? If the article is aligned with the title,
then the title is fine.

~~~
king_magic
clickbait (google search): "(on the Internet) content, especially that of a
sensational or provocative nature, whose main purpose is to attract attention
and draw visitors to a particular web page."

I think this safely fits under the sensational/provocative attention-grabbing
umbrella.

------
tptacek
The problem with this argument is that there are _very_ high-security pages on
the Internet --- things that protect people's bank accounts or most sensitive
personal information --- and they're not going away. The junction, at the
protocol level, between insecure web sites and secure ones is a major design
weakness; we would have fewer attack vectors in the long run if we could count
on uniform encryption across the web.

~~~
tyleo
This is precisely my thought on SSL. I'm no expert (correct me if I'm wrong),
but if I understand the technology correctly: if your HTTP website links to an
HTTPS login page, what stops someone from tampering with the HTTP page and
pointing that link at a fake login page?

~~~
watwatwatwat
EV certificates help.

~~~
vtlynch
EV certificates may improve a user's awareness of a spoofed page, but cannot
do anything to make it more technically difficult to execute.

Providing an HTTPS login with an otherwise HTTP site is vulnerable to
redirection to HTTP or to another site.

There is lots of evidence suggesting that in this configuration, cookies are
often not set up properly (not marked Secure-only) and can therefore be
transmitted and stolen over HTTP.
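
For the cookie point above, a minimal sketch with Python's stdlib (cookie name and value are made up):

```python
from http.cookies import SimpleCookie

# Without the Secure flag, a session cookie set by the HTTPS login page
# is also sent on later plain-HTTP requests, where it can be sniffed.
cookie = SimpleCookie()
cookie["session"] = "opaque-token"
cookie["session"]["secure"] = True    # only ever send over HTTPS
cookie["session"]["httponly"] = True  # hide from page JavaScript

header = cookie.output(header="Set-Cookie:")
print(header)  # e.g. Set-Cookie: session=opaque-token; HttpOnly; Secure
```

Mixed HTTP/HTTPS sites routinely forget those two attributes, which is exactly the leak described above.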

~~~
watwatwatwat
> EV certificates may improve a user's awareness of a spoofed page, but cannot
> do anything to make it more technically difficult to execute.

This is what I meant; that's why I used "may". To benefit from it, the user
must know the details of how SSL works, and not many of them do.

------
CiPHPerCoder
> Seems to me a bit like equipping everyone with armour to make shooting them
> more difficult. Solving the problem the wrong way?

I don't know, making humans immune to bullets would be an elegant solution to
the gun control debate which doesn't involve disagreements over the second
amendment, and would make everyone win.

~~~
watwatwatwat
Just move to Australia/Europe.

~~~
zeveb
> Just move to Australia/Europe.

Being in Europe doesn't seem to have helped the editors of Charlie Hebdo …

~~~
watwatwatwat
Rare cases vs daily gun deaths.

------
atemerev
The problem with SSL/TLS is that it is binary. There's currently a very strong
pro-binary movement in the ranks of Internet infrastructure engineers,
probably originating at Google. Yes, binary protocols are marginally more
efficient, but they are inherently harder to understand, debug, and generally
see what's happening, especially in high-stress conditions when something
fails in production. Binary protocols are more complex than text protocols,
and more complexity leads to negligence and security problems (e.g. recent
OpenSSL bugs). Secure systems are simple systems (OpenBSD gets it right).

Text-based protocols are the greatest thing that UNIX brought to the world.
There should be more of them, especially in security sensitive areas.

~~~
tangent128
Text-based protocols are simple for humans to read, but anything requiring
parsing is a security smell.

Even if you are sure your buffer handling is free of bugs (reasonable in newer
languages, though the known sizes of binary fields have been a security
strength there too), the ambiguity of text is dangerous.

Interpreting as text easily corrupts binary embeds if you aren't careful, and
escaping bloats the size of what's already the largest part of your message.

Many security bugs have been triggered by implementations disagreeing about
when they interpret UTF-8 and when they don't. UTF-8-encoded ASCII characters,
for example, may cause one parser to recognize a keyword that another ignores;
nevermind different sets of accepted whitespace characters.
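
A concrete instance of two layers disagreeing about whitespace, sketched with Python's stdlib:

```python
import json

# U+00A0 (no-break space) counts as whitespace to str.strip(),
# but the JSON grammar only allows space, tab, CR and LF.
payload = '\u00a0{"admin": true}'

lenient_view = payload.strip()   # a lenient layer sees valid JSON here

try:
    json.loads(payload)          # a strict parser rejects the same bytes
    strict_ok = True
except json.JSONDecodeError:
    strict_ok = False

print(json.loads(lenient_view), strict_ok)  # → {'admin': True} False
```

If a filter makes its security decision on the lenient view while the consumer uses the strict one (or vice versa), the two can be driven to see different messages.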

You could define a very-strict encoding and delimination scheme, but at that
point you can't trust text editors to edit it- making it effectively a
needlessly-complicated binary protocol.

~~~
atemerev
Also, some sort of "parsing" (normalizing) is always required even for binary
messages. Endianness, alignment conventions, etc. — just mapping the network
bytes onto memory is inviting trouble.

(btw, network byte order is big-endian).
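
For anyone unsure what that means in practice, Python's struct module makes the convention explicit:

```python
import struct

# "Network byte order" means big-endian: most significant byte first.
# The '!' format prefix requests it explicitly, regardless of host CPU.
packed = struct.pack("!I", 0x0A0B0C0D)
print(packed)  # → b'\n\x0b\x0c\r'  (bytes 0x0a 0x0b 0x0c 0x0d, high byte first)

# Round-tripping recovers the original value on any architecture.
value = struct.unpack("!I", packed)[0]
print(hex(value))  # → 0xa0b0c0d
```

Using the explicit prefix instead of mapping structs straight onto the wire is exactly the "normalization" step being described.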

~~~
tangent128
Point, though binary normalization is easier to define in a manner everybody
will consistently implement as opposed to, say, SMTP headers.

...I guess it's abominations like SMTP that I mainly think of when I hear
"text-based protocol".

JSON and bencode are probably safe enough.

(Canonicalization is not useful from a security perspective, since an attacker
is under no obligation to canonicalize their message.)

------
Hello71
amusingly, the "one-line" server is not only "not really one line", but also
contains a number of errors and other incongruities:

1. there's no reason to put : at the start

2. z=aa is the same length as z=$r

3. there are double quotes where there shouldn't be and none where there
should

4. the sed quoting is wrong and only works since file names cannot be empty

5. useless use of subshells

6. won't work on echo implementations which don't parse escape sequences or
don't accept -e

7. parsing ls

but most importantly, the whole first part can easily use TLS with "openssl
req -x509 -newkey rsa:4096 -nodes -subj /CN=localhost -keyout server.pem -out
server.pem; openssl s_server".

------
gregmac
There are several other very important reasons missing from this article,
which I think invalidate part of the argument.

One is widespread use of open wifi networks. I know many people don't bother
to redirect traffic through a VPN when on open wifi, which means anyone on the
network can monitor their traffic. This might be mostly innocuous, but at the
worst, they can steal login credentials and personal info.

The second is ad/analytics tracking networks. By using SSL, you force your
trackers to be SSL as well. Small comfort for those who despise this anyway,
but it's better than these networks moving plain text identifiers and info
about you around, allowing it to be monitored as you surf around the web.

I believe the third is widespread government surveillance/mass spying. By
using SSL you do two things: prevent (or at least complicate) the 3rd party
interception of data, and also decrease the signal-to-noise ratio (making it
less likely that any given encrypted stream is actually something valuable and
worth breaking).

------
roliver
Hopefully the argument about back-and-forth traffic in SSL will soon be
obsolete once zero-RTT handshakes are implemented in TLS 1.3. Surely this
would then be comparable to standard HTTP requests?

~~~
tptacek
Zero-RTT addresses latency, but not "bloat"; the same amount of data is
exchanged, but application data can piggyback on the handshake messages.

------
mjmasn
Total clickbait. More like websites with black backgrounds and bright green
monospace fonts considered unreadable.

No major browser will be supporting the insecure mode of http/2. I don't
think I'm alone in thinking that is a good thing. I like to know that the page
I'm interacting with hasn't been tampered with, whatever website I'm on.
Nefarious certificate authorities aside, TLS is the way to do that.

Besides, connections (especially mobile) are getting faster all the time. I'd
say encouraging better connectivity is a more worthwhile pursuit than allowing
everyone to turn off TLS.

------
hbz
A counter to the author's "webserver in 1 line of code":
[https://gist.github.com/denji/12b3a568f092ab951456#simple-golang-httpstls-server](https://gist.github.com/denji/12b3a568f092ab951456#simple-golang-httpstls-server)

I prefer proxying SSL (and automatic generation of LetsEncrypt certificates)
using containers so that my web servers don't have to worry about that aspect
of configuration.

------
kardos
This post focuses only on the technical costs of TLS. The reality that we
currently live in contains a hostile network where unarmoured packets are the
easiest of targets. The movement to put TLS on everything is a reaction to the
hostility and is overwhelmingly driven by #1: A legitimate interest in
security.

------
t_fatus
Considered by you... And your explanation is not that convincing.

------
tscs37
SSL/TLS is bloated but that's not a reason _not_ to use it.

Rather it's a reason we need some TLSv2 that just removes the crap and focuses
only on three encryption/authentication modes:

  * Desktop: high throughput, lots of CPU, minimal latency

  * IoT: small throughput, very little CPU, latency acceptable

  * Mobile: small to medium throughput, some CPU, minimize latency

A lot of bloated protocols are still good; they're bloated because of
backwards compatibility and because everyone and their kitchen sink needs to
be able to decode them.

~~~
fkooman
It seems to make more sense to just have _ONE_ mode that can accommodate all
those scenarios in a secure way. One doesn't solve bloat by introducing more
bloat.

I'd say more could be won by removing e.g. ASN.1 and X.509 for certificate
handling and encoding, which are very difficult (impossible?) to get right,
and switching to something simple that solves the 99% use case of current TLS.

~~~
tscs37
I agree about ASN.1 and X.509.

Those two are part of my plaintext-offenders list, like SMTP. They make life
equally painful for both man and machine.

------
mkj
I didn't check the link, but I bet that "HTTP server in one line of code"
doesn't include a TCP stack.

~~~
liviu-
The "one-line" server is just multiple commands separated by ';' and joined
into one line. At best a pointless exercise, and at worst a deceptive one.

~~~
kraftman
Did you know you can rewrite jQuery in just 4 lines of code?
[https://code.jquery.com/jquery-2.2.4.min.js](https://code.jquery.com/jquery-2.2.4.min.js)

------
zeveb
In my perfect world, you'd receive a certificate from your ISP when it assigns
you one of the IP addresses it was itself assigned, and you'd receive a
certificate from your registrar when you purchase a domain name. The former
certificate would be good for the duration of your IP assignment; the latter
for the duration of your domain ownership.

The IP-level certificate would be used for IPsec; the DNS-level certificate
would be used for HTTP and other protocols; if you needed some other, stronger
sort of identity verification then you'd need to take other measures.

This would solve the accessibility problem.

As for proxying, I think that HTTP had a really interesting idea with
proxying, but it just doesn't work in practice. Proxies are untrustworthy, so
it doesn't make sense to use them.

As for speed, I don't think SSL is noticeably slow from a modern phone.

------
tlsfan
The author lists legitimate motivations for why people want to see 100% SSL
adoption.

The CA system also began with such good intentions. But motivations for profit
enveloped good intentions. Certificates became a business, and the quality of
the software became an afterthought.

The same may be happening, or already is, with SSL/TLS deployment. With a function such
as encryption, one cannot ignore software quality. Poor quality can defeat the
whole purpose of the software. There is no point in using bad encryption
software.

One of the good intentions the author cites is that people want ubiquitous
encryption. Is encryption synonymous with SSL? Why? SSL is not the only system
ever written to encrypt internet traffic. And it is probably far from the best
one that could be written.

Nothing wrong with the good intentions. But is SSL an asset or a liability?
There is a cost to taking on SSL's baggage of complexity, and maybe it's only
worth it if the benefit achieved is real and not illusory.

If SSL can so easily be exploited, then the false sense of security its name
inspires may cause more problems than SSL solves. But that's only for users.
Others with purely commercial goals stand to profit immensely from SSL
adoption, the same way businesses did from CA certificates.

SSL was not created with the intent to protect non-commercial communications.
It was created in the 1990s by Netscape to enable "e-commerce" in their
Navigator browser. It served its purpose.

SSL is old and people are attempting to retrofit it with "improvements", such
as hosting multiple sites, each with its own certificate, on a single IP
address. This is a hack. It's called SNI and it breaks a lot of software.
People should consider why such a "feature" even needs to be implemented. Is
it for the benefit of the user? The CA business has become nothing more than
an impediment for many people.

Costs vs benefits. Not just for business but for users.

------
Olap84
Missed the biggest point, which is cognitive overhead. HTTP is simple to
understand and has thrived because of this. What a pain it is to get Wireshark
to decode TLS traffic; that's not just cognitive overhead but debugging
overhead too.

------
nailer
Trusted (ie, no warnings) HTTPS localhost for Mac requires about 10 mins to
set up. After that:

    https-server

will give you: [https://certsimple.com/images/blog/localhost-ssl-fix/trusted-localhost.png](https://certsimple.com/images/blog/localhost-ssl-fix/trusted-localhost.png)

Details: [https://certsimple.com/blog/localhost-ssl-fix](https://certsimple.com/blog/localhost-ssl-fix)

------
mschuster91
> It stops proxies from caching responses between different clients. There is
> no way to fix this.

There is, at least in corp environments. We have, via proxy.pac, a couple of
ordinary proxies which act as regular caches with low TTLs, and additionally a
_huge_ (read: multiple TB of storage) proxy which caches, with extremely high
TTLs, the auto-updaters from Apple, MS, Debian and Ubuntu as well as the media
CDNs of some major newspapers.

It works because our machines have the proxy's CA certificate installed
locally.

------
krupan
Somewhat related: I went to check something on my home router for the first
time in months and learned that:

a) it uses an old version of SSL to serve up its admin page

b) all modern browsers refuse to load that page and no longer offer an
override

I had to dig up and load an old unpatched browser so I could turn off SSL
completely on my router and continue to administer it. Am I more secure now?
I'm not sure.

~~~
zokier
A better option would have been to use something like
sslstrip/sslsplit/mitmproxy to strip or bump the SSL connection. Admittedly
the situation is a bit unfortunate, but there aren't really many good
solutions when dealing with broken crypto.

------
maplecup
As a compromise between SSL and plain HTTP, wouldn't it be enough for most of
the content to be signed? E.g. background images don't necessarily have to be
encrypted; they can be sent in plain sight with a signature which ensures that
the image hasn't been modified. The signature has to be computed only once, so
the overhead is negligible.
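
The shape of such a detached integrity check can be sketched with Python's stdlib. The stdlib has no public-key signing, so HMAC with a shared key stands in below; the scheme described above would really need public-key signatures so any client can verify without holding a secret (key and bytes here are made up):

```python
import hashlib
import hmac

# Stand-in for the server's signing key; a real deployment would use a
# public/private key pair so that clients hold no secret at all.
key = b"hypothetical-signing-key"
image = b"\x89PNG...stand-in image bytes..."

# Computed once when the image is published, then served alongside it.
signature = hmac.new(key, image, hashlib.sha256).hexdigest()

def verify(blob: bytes, sig: str) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

print(verify(image, signature))                # → True
print(verify(image + b"tampered", signature))  # → False
```

Note this only gives integrity: anyone on the path can still read the image, which is exactly the trade-off being proposed.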

------
marcusarmstrong
The point he makes is valid... but the action item here should be "Make SSL
better" not "Stop using SSL".

------
cpg1111
I just want to point out that TLS verification is extremely fast in HTTP2 to
the point where speed is arguably a non-issue.

------
noja
"I think [SSL on all websites] is motivated by: ... A desire to make traffic
shaping and censorship more difficult."

Eh?

