
The Federal HTTPS-Only Standard: Necessary and Overdue - DiabloD3
https://www.eff.org/deeplinks/2015/04/the-federal-https-only-standard
======
sneak
If only Apple undertook this for OS X.

In latest Yosemite, three-finger-click definition lookups, map tiles, and even
some geolocation communications are _still_ unencrypted in flight. The whole
Starbucks knows what words you look up, what map locations you view, etc.

Added bonus: definition lookups happen for every Spotlight search now, too.

It is amazing that in 2015 this is not standard policy everywhere people care
about security (which should be any organization that touches PII or develops
software for use by more than 10 people).

~~~
amelius
I'm amazed that in 2015 every application has to implement its own encryption.

Why doesn't the OS let us open "encrypted sockets"?

~~~
lmm
You can't separate encryption from authentication, and the application is the
only thing that can tell you what qualifies as authentication.

With IPSec you can tell the OS "open an encrypted connection to 69.231.14.52"
and the OS will verify that you're actually talking to 69.231.14.52 and no-one
else can read your messages. But that's not actually terribly helpful, because
what you want to ensure is that only Bob can read your messages. And only the
application knows who Bob is.

~~~
nothrabannosir
Ok, what am I missing?

Now: Application: Hey OS, what is example.com's IP? Okay, now give me an
unencrypted connection to example.com's port 80.

Desired: Hey OS, give me an encrypted connection to example.com's port 443.
(_+ optional encryption params like minimum key strength_)

Why can't encryption be an OS level task, exactly?
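
For what it's worth, something close to this "desired" API already exists one
layer above the OS: Python's stdlib `ssl` module wraps a plain socket, does
the handshake, and checks the certificate against the hostname. A sketch
(example.com is just an illustrative host; the function isn't called here, to
avoid a live connection):

```python
import socket
import ssl

def open_encrypted(host: str, port: int = 443) -> ssl.SSLSocket:
    """Roughly the 'give me an encrypted connection to host:443' call:
    TCP connect, TLS handshake, certificate + hostname verification."""
    ctx = ssl.create_default_context()  # system CA store, strict defaults
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)

# The default context enforces both halves: encryption and
# authentication (CERT_REQUIRED plus a hostname match).
ctx = ssl.create_default_context()
print(ctx.check_hostname, ctx.verify_mode == ssl.CERT_REQUIRED)  # True True
```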

~~~
lmm
Hostname-based encryption only covers a small subset of use cases. If it's a
web browser then sure, hostnames have become the way of doing that (although
even then, browsers are starting to display the organization name in the top
left alongside the URL). But for any kind of peer-to-peer program e.g. instant
messaging or file sharing, hostnames are irrelevant; you care about the
identity of the person on the other end, who's probably using a home
connection that may not even have a public hostname (and will in any case
change rapidly). Even for something as basic as email, you'd rather encrypt to
a specific person than encrypt to gmail.com.

So in short hostname-based authentication is mostly useful for web browsers -
which is why it's in the web browser, not the OS.

(Also, pragmatically, existing OS-based key management is terrible - just try
and use the Windows dialogues to set up an SSL client certificate. For many
users the very idea that you manage your keys separately from the programs you
use them in is confusing. So browsers would rather provide their own cross-
platform UI for key management, even if the OS offers a "managed SSL keystore"
as most OSes do)

~~~
nothrabannosir
I can't think of a network-related app I've ever used where hostname-based
auth wouldn't be a net plus. Mail, chat, web browsing, VPN, VNC, network
filesystems, ... That's not to say "as long as I know it's the right host, all
my encryption needs are satisfied!" Sure, sometimes you want more. But knowing
I got the host matching the hostname would always help.

As to the UI: Yes, but that's precisely because it is currently unclear who is
responsible. Java has its own keystore, FF has its own but Chrome doesn't,
even some libraries have their own or they don't, depending on the OS. It's a
mess. A mess that I don't care about, as a user.

I want a central key store that apps tap in to. When I revoke a cert for
Diginotar, I don't want to have to grep every file in my entire filesystem for
that string on the off chance some crazy statically linked lib installed a
local cert store somewhere.
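
Incidentally, most OSes already ship such a shared CA store (the complaint
above is that not every app actually uses it). A quick Python sketch of
tapping into the default trust store (counts vary by system):

```python
import ssl

# Load whatever CA bundle the OS / Python build treats as the default
# trust store, then count what's in it.
ctx = ssl.create_default_context()
stats = ctx.cert_store_stats()  # dict with 'x509', 'crl', 'x509_ca' counts
print(stats)
```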

And sure, if an app wants to create a special case just for itself (e.g. a
client certificate only for mail), let it. But I'd like that to be the
exception, not the rule.

Cert management is a mess right now, and both developers and users are duped.

~~~
lmm
You wouldn't want to be encrypting twice or three times over - at that point
there really does start to be an overhead associated with it.

> Mail

Encrypting at the host level would be actively unhelpful (e.g. it would make
failover or forwarding harder) and wouldn't gain you much in the way of
security. You just want to encrypt the message to the actual recipient.

> chat

Again hostname-based auth is actively unhelpful in a federated system like IRC
or XMPP (you _want_ to failover to a different host if a particular host is
down), and offers minimal security in a centralized-server system.

> VPN

I think a VPN is inherently the wrong approach to security; it's trying to
create a network you can trust, but networks are too big to secure.

> VNC

How often do you VNC to a box with a stable hostname? I only ever use it to
connect between my (coffee shop or wherever) laptop, my (dynamic IP) home
machine and my (dynamic IP) dev machine at work.

------
nowarninglabel
Our community has to shoulder part of the blame for the lack of adoption of
HTTPS in many governments, both federal and local. At both universities I've
worked for, vague concerns of "we heard that it hinders performance" were
given as a reason to nix doing it.

This should never have been a conversation about performance, yet it was the
technical folks who made it one. The only issue raised should have been proper
certificate management.

Fortunately at the university we pushed through HTTPS on the product despite
the pushback, but it never would have even been a struggle in the first place
if not for the focus on the wrong metric.

~~~
adventured
I still run into technical people very regularly who believe HTTPS is a big
performance hit. I'm not sure what's perpetuating that myth (a present-day
myth; 15 years ago it was true). I build everything site-wide HTTPS, and have
rarely run into a situation where it caused even a 1% performance loss, and at
those levels it's simply irrelevant in comparison to the benefit.

~~~
Dylan16807
Was it true 15 years ago? Are you sure?

~~~
acdha
Yes. Remember that 15 years ago not only were processors much slower, but you
also didn't have the heavily-optimized software implementations and hardware
acceleration for the most common algorithms.

You can run OpenSSL today on an embedded processor which might appear to be
as slow as a 90s server processor, but it's still likely to have things like
AES support in the CPU, an optimized bignum library, and tuned assembly
implementations for the most common algorithms; at the very least, the
compilers in use now generate much better code. I'd be surprised if you didn't
see a significant benefit from taking some 90s crypto code and compiling it
with a current version of clang/gcc.

~~~
Dylan16807
Well, I've done some research to get exact numbers. The 2010 Google
announcement that gets cited a lot talks about cores capable of doing 1500
handshakes per second (1024-bit RSA), with SSL overall needing only 1% of CPU
power.

I found a couple of sources claiming that a Pentium 3 should be capable of
200 handshakes per second max (1024-bit RSA), and also that it could encrypt
over 50 Mbps with 3DES, or well over 100 with RC4-MD5.

I can't find if those Google servers had AES acceleration, but the chips with
it had only _just_ come out.

So looking at the year 2000: Pentium 3 is about 10x slower at handshaking, and
somewhere around 10x[1] slower at encrypting depending on algorithm. So if you
can spare 10-20% CPU, I don't see any major problems with going full-SSL.

[1] This one can vary more depending on exactly what you compare. Could be
close to equal speed if you accept a worse algorithm for 2000, and don't have
brand new AES-NI chips in 2010. Could be a huge multiplier between 3DES and
AES-NI, but your server doesn't need a gigabyte per second of SSL traffic.
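
The rough arithmetic above, written out (a sketch using only the figures
quoted in this thread, which are estimates, not benchmarks):

```python
# All numbers are the rough ones quoted above, not measurements.
handshakes_2010 = 1500   # per core per second, 1024-bit RSA (Google's 2010 post)
cpu_share_2010 = 0.01    # Google's reported CPU cost of enabling SSL

handshakes_2000 = 200    # claimed Pentium 3 ceiling, 1024-bit RSA

slowdown = handshakes_2010 / handshakes_2000   # 7.5x, i.e. "about 10x"
cpu_share_2000 = cpu_share_2010 * slowdown     # ~7.5% of one core in 2000
print(slowdown, cpu_share_2000)
```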

------
joelcollinsdc
While it seems easy, there are lots of challenges with this where I work. We
have 10+ different 'top level' domains (*.example.org), lots of sub-sub
domains example.example.example.org, and route everything through an external
CDN. Rough back of the napkin calculations for 100% SSL coverage with our
current setup and provider is $10k/month, and that doesn't cover maintenance
overhead expenses (my salary) which adds up probably to more than anything
we'd pay to a SSL cert provider or CDN provider.

~~~
falcolas
Ten wildcard certificates would run you in the $1,000-5,000 range per year. A
few CDN-specific certificates, another grand or so a year.

Maintenance costs shouldn't change, unless you're contract-only (and SSL only
adds one additional bottleneck every year or two over running the base web
servers).

I'd be curious to hear how your costs would change, as well as where the $10k
per month figure comes from.

The benefits of encrypting (or at least offering encryption) to your
customers would be fairly significant.
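
For comparison, the numbers in this subthread written out (the per-cert
prices are assumptions for illustration; the $10k/month is the figure quoted
above):

```python
# Assumed cert prices; only the $10k/month CDN quote comes from the thread.
cert_price_low, cert_price_high = 100, 500   # assumed USD per wildcard cert/year
certs_needed = 10

yearly_low = cert_price_low * certs_needed    # $1,000/year
yearly_high = cert_price_high * certs_needed  # $5,000/year
cdn_quote_yearly = 10_000 * 12                # the $10k/month estimate, annualized
print(yearly_low, yearly_high, cdn_quote_yearly)  # 1000 5000 120000
```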

~~~
joelcollinsdc
The certificates themselves are a small fraction of the cost; our CDN
provider's monthly fees for hosting the certificates are the lion's share.

------
adricnet
Not saying anything really about EFF's efforts or publication, however the
topic is common now ... Please don't senselessly "encrypt all the things".

May I encourage some requirements gathering and threat modelling? Who are you
trying to keep out (in?) of your application communications? Why?

The unfortunate requirement of common business systems is that we need
visibility into what these systems are doing for troubleshooting, audit, and
incident detection. Logs are good, but packets and flows are better. Trust,
but verify.

Driving to encrypt all traffic, all the time may be a noble goal^1, but please
understand that there are good people who need to adapt requirements to their
organizations and projects. This is one of the points, along with firewall
policies, user privileges, useful remote logging, and ... well, have a look at
the CC20[1] for more.

[http://www.sans.org/critical-security-controls](http://www.sans.org/critical-security-controls)

^1 though doing it effectively may be much more difficult and expensive than
you imagine.

~~~
Someone1234
I agree with doing threat modelling, but here's where I think we diverge: I,
along with the EFF, think that HTTPS should be the "default" choice. You seem
to think that HTTP should remain the "default" and that HTTPS should only be
used where expressly justified.

If there are legitimate technical reasons to use HTTP then use HTTP. However
HTTPS should be on most sites or services unless you have a specific
technical, legal, or otherwise reason not to do so.

Keep in mind that every HTTP site you visit at Starbucks can be MITM-ed. So if
you're visiting an HTTP site and then hitting a "login" or "contact" button
which redirects you to an HTTPS one, then for all intents and purposes the
entire site is unprotected against MITM, since an attacker can change that
link to point to an insecure domain they control. HSTS won't even protect you
from this if you're using sub-domains (since the attacker controls the link
itself, so can link to another site they control).
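
For reference, HSTS itself is just a response header the server sends over
HTTPS; a minimal sketch of building one (values illustrative, and as noted
above it only helps for hosts the policy actually covers, after a first
successful HTTPS visit):

```python
# Build a Strict-Transport-Security header; max_age is in seconds
# (31536000 = one year). Values here are illustrative defaults.
def hsts_header(max_age=31536000, include_subdomains=True):
    value = f"max-age={max_age}"
    if include_subdomains:
        value += "; includeSubDomains"
    return ("Strict-Transport-Security", value)

print(hsts_header())
# ('Strict-Transport-Security', 'max-age=31536000; includeSubDomains')
```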

Another popular example: what if someone sitting in Starbucks is looking up
AIDS information or something else potentially damaging to their reputation?
If all of the federal government's web-sites are HTTPS by default, then that
little AIDS informational site is likely HTTPS also; however, if HTTPS has to
be justified using some kind of threat model, then more than likely it won't
be HTTPS, as it doesn't contain the kind of content people typically protect
(just static HTML pages and images).

So in the threat model are you going to examine the possibility of reputation
damage and complex phishing scams (e.g. replacing a government department's
phone number)? And if you do look at it at that level are any sites really
going to be able to justify not being HTTPS?

~~~
adricnet
I'm not making a broad recommendation, except maybe to slow down a bit :)

It sounds like you have lots of use cases for applications that need
protection against injection, MITM, or eavesdropping. It's great that you
understand that. Those sites and apps need that protection, and the network
protocols are one place to look for it.

Also, there is a huge difference between "all web software should support
encryption" and "turn all of that on by default". I just want a broader
discussion of the costs and more discussion about the problems people are
trying to solve.

Thanks!

------
ck2
Yeah but have you seen all the SSL caveats lately?

If you passed ssllabs last year, give it a go this year and you may be shocked
to see all the new known problems.

[https://www.ssllabs.com/ssltest/](https://www.ssllabs.com/ssltest/)

HTTPS-ONLY is useless if it is not done right.

and MITM is still too easy
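
One concrete, low-effort piece of "doing it right" is refusing old protocol
versions; a sketch using Python's stdlib `ssl` (a TLS 1.2 floor is one common
ssllabs-style recommendation, not the whole checklist):

```python
import ssl

# A default context already refuses SSLv2/SSLv3; raising the floor to
# TLS 1.2 knocks out several of the older protocol-level findings.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
print(ctx.minimum_version)
```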

~~~
vtlynch
Agreed. SSL is a pain to set up and it's hard to get right. But in my
(limited) experience, SSL is simple compared to other aspects of your
network's security. "At rest" data storage, network intrusions, DDoS attacks,
etc. all seem like MUCH more difficult problems.

------
ceequof
In somewhat related news, the .trust gTLD is HTTPS-only:
[https://whodoyou.trust/globalassets/documents/trust-technical-policy.pdf](https://whodoyou.trust/globalassets/documents/trust-technical-policy.pdf)
(as well as having a bucket of JS restrictions: no eval() can be used,
apparently ever).

Adam Langley said something about enforcing secure TLD restrictions in Chrome
a while ago, not sure if that ever got done.

------
bougiefever
First reaction: What!? They're not already doing this? Second reaction: If
they start doing this, they will be hypocrites if they try to pass laws
banning encryption. Not that the White House cares about looking hypocritical.

------
PaulHoule
I dunno, I think this means that when you visit a government web site, half
the time you will get a box that says the certificate is expired.

