
How to implement HTTPS in an insufficient manner - tomelders
http://www.troyhunt.com/2013/04/5-ways-to-implement-https-in.html
======
kijin
The author lists 5 problems, but all of them have to do with the fact that the
website in question is not 100% HTTPS. So it's really just one problem that
has many different implications.

When you maintain both an HTTP site and an HTTPS site, it's nearly impossible
to toss state back and forth between them without exposing yourself to at
least one of these problems. Even if you do everything perfectly, people will
complain that they get logged out when they visit HTTP URLs. The only solution
is to treat HTTP, from now on, as if the only response it could produce were:

    HTTP/1.1 301 Moved Permanently
    Location: https://www.example.com/remainder/of/url

That's it. If you handle personal information, never send "200 OK" responses
over plain HTTP, ever.
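To make that concrete, here's a minimal sketch (the helper name is made up, not from the article) of the only thing a plain-HTTP listener would ever build:

```python
def redirect_to_https(host: str, path: str) -> str:
    """Build the one response a plain-text (port 80) listener should ever
    send: a 301 pointing at the HTTPS equivalent of the requested URL."""
    location = f"https://{host}{path}"
    return (
        "HTTP/1.1 301 Moved Permanently\r\n"
        f"Location: {location}\r\n"
        "Content-Length: 0\r\n"
        "\r\n"
    )
```

No branches, no exceptions for "harmless" pages: every plain-HTTP request gets the same answer.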

Need more server resources? Maybe. And this is probably why 99% of websites
that have the problem the author mentions resist making changes. But the
"HTTPS uses more server resources!" complaint is so 20th-century it's not even
funny anymore. If you're not going to invest in 10% more server resources (or
5%, or 25%, or whatever) to protect your users' personal information, I
seriously doubt that you deserve anyone's business in the first place.

~~~
nodata
No, please don't do this:

If a visitor goes to <http://domain/some/url> they should _not_ be redirected
to <https://domain/some/url>: it makes it harder to catch pages that use the
wrong protocol.

Better to redirect <http://domain/some/url> to <https://domain> instead.

Edit: if you're logging referrers from your own domain with non-<https://>
URLs AND using secure cookies, ignore my comment!

~~~
trust-me
Nooo, don't erase the address that took me 5 minutes to type. What you say
doesn't make any sense. You should obviously configure the HTTPS redirect as a
site-wide setting and not per page.

~~~
ivanr
That's clearly true from the usability perspective. However, the problem is
that bulk redirections like that make it very difficult to catch insecure
resources.

For example, let's suppose you have a secure page that's referencing some
JavaScript resource via a plain-text connection. Because of the redirection,
the browser will ultimately fetch the required file, but only trying plain-
text HTTP first. That one plain-text request can be hijacked by a man-in-the-
middle attacker and abused to take over the victim's browser (and account).

Further, if your users have bookmarks to plain-text pages, their first access
to your web site is always going to be insecure, which means that they can be
attacked via sslstrip.

These problems are solved with Strict Transport Security, but it will probably
take some time until we can fully rely on it.

In the meantime, the plain-text URL can be preserved, and the redirection
carried out via an intermediary page that explains why links to plain-text
pages are dangerous. It's ugly, but I don't think there's a better (safer)
way.
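For reference, Strict Transport Security is just one response header sent over HTTPS; a sketch (the helper name and the max-age value are illustrative):

```python
def hsts_header(max_age: int = 31536000, include_subdomains: bool = True) -> str:
    """Build a Strict-Transport-Security header value. Once a browser sees
    this over HTTPS, it rewrites future http:// URLs for the domain to
    https:// locally, before any plain-text request ever leaves the machine,
    which is what defeats sslstrip for returning visitors."""
    value = f"max-age={max_age}"
    if include_subdomains:
        value += "; includeSubDomains"
    return value
```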

~~~
trust-me
Yes, these are real problems, but how does redirecting to the home page help
in any way? When the user navigates back to the page she wanted, the insecure
script will still be fetched. A MITM is still able to serve a copy of your
page.

~~~
ivanr
I wasn't arguing for redirecting to the home page, only to avoid redirecting
(to the intended destination on port 443) automatically.

If a user's browser ever sends a port 80 request, you've already lost
(assuming the MITM is there). On your end you may even never see a plaintext
request. But in all other instances, displaying an intermediary page is a
chance to educate your users, and possibly get them to change their bookmarks.

Further, with little work you may be forcing the MITM to do some custom coding
(a lot of work) in order to make the attack seamless.

I wouldn't do this for just any web site, but if you're running a bank or
something similarly sensitive, it would probably be worth it.

~~~
kijin
_If a user's browser ever sends a port 80 request, you've already lost
(assuming the MITM is there)._

Not really. Although old browsers don't support HSTS, they still respect the
"secure" flag in cookies. So if an old browser ever requests an insecure
resource, no cookies are sent with it, so the bad guys can MITM your
connection all day long and no harm will be done. You only need to make sure
that your own web pages never request resources over HTTP, and this is
relatively easy to do.
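Concretely, the "secure" flag is just an attribute on the Set-Cookie header; a hypothetical helper:

```python
def session_cookie(name: str, value: str) -> str:
    """Build a Set-Cookie header value with the Secure flag, so browsers
    (including old ones with no HSTS support) will never attach the cookie
    to a plain-HTTP request. HttpOnly additionally hides it from scripts."""
    return f"{name}={value}; Secure; HttpOnly; Path=/"
```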

> _On your end you may even never see a plaintext request._

If so, there's no point doing fancy redirects anyway. How do you redirect a
request that you never see? Therefore this scenario isn't really worth losing
sleep over.

> _displaying an intermediary page is a chance to educate your users, and
> possibly get them to change their bookmarks._

Users don't want to be educated, period.

Also, technically when a browser encounters a 301 redirect, it should update
the relevant bookmark automatically. In reality no browser does this, but
that's what the standards say anyway.

~~~
ivanr
> Although old browsers don't support HSTS, they still respect the "secure"
> flag in cookies. So if an old browser ever requests an insecure resource, no
> cookies are sent with it, so the bad guys can MITM your connection all day
> long and no harm will be done.

Not true. Once a MITM hijacks the victim's communication with the server, she
can do whatever she wants, including stripping the "secure" flag from session
cookies. She may not be able to compromise a previous secure cookie, but she
can hijack a brand new session, wait for the user to authenticate, and gain
access that way. The communication from the victim to the MITM will be plain-
text with insecure session cookies; the communication from the MITM to the
server can be SSL with secure cookies.

And if we're talking just about an insecure resource (not a page, but, for
example, a JavaScript file), the MITM can simply inject malicious code into it
and hijack the browser that way.

------
markhemmings
I'm the guy who originally contacted Troy Hunt about this, as he mentions in
the blog post.

What annoys me is that I'm a very young developer, and I've only really just
become interested in security (12 months ago I didn't even know what hashing
was!!!), yet there are developers out there with years and years of experience
making huge sites for the likes of Tesco and TopCashBack for vast sums of
money, and they don't think about incorporating even the simplest foundations
of internet security that a novice like me would implement without even
thinking! How is this possible?! If I'm doing it in tiny little PHP sites with
1 unique visitor ever, why aren't these 'experts' doing it in their huge
corporate sites with hundreds of thousands of users a month?!

~~~
troyhunt
Years and years of experience? Often not, and that's speaking from years and
years of experience!

Vast sums of money? Yes, at least the outsourcing vendors who churn this sort
of thing out.

Unfortunately you're the exception, Mark, so good on you for that. Well, I
mean unfortunate for the greater web-using population, but very fortunate for
you!

~~~
ZoFreX
To expand further on those points with a personal anecdote... I worked on a
"big website" for a very big government department. I was still in university
at the time, had about a year's experience with the platform, and was earning
close to call-center wages.

The government only contracts to companies on a particular whitelist, which we
weren't. They paid a company something in the region of £300,000 for the
website.

That company then outsourced 100% of the work to the tiny (single-digit
employee count) company I was at.

They paid us £40,000.

My share of that works out at about £2,000.

The fact something cost half a million dollars in no way implies any kind of
quality, or that the people working on the software will know what they're
doing. (Really: another horror story involved a website that only needed to
work in IE6... and had been built and tested purely in Firefox)

------
csmattryder
This is the same guy who exposed similar flaws with Tesco's (UK supermarket
chain) systems [1].

Why on Earth these companies are given free advice but think they (or their
PR folks) know better is beyond me. Take the advice! You've now got a security
flaw, documented, waiting for Joe Hacker to take your customers' data and
shoot a hole through your business and its reputation.

[1] [http://www.troyhunt.com/2012/07/lessons-in-website-security-anti.html](http://www.troyhunt.com/2012/07/lessons-in-website-security-anti.html)

~~~
claudius
Could someone explain to me why storing unsalted/unhashed passwords is such a
big deal? Sure, it doesn’t hurt to salt and hash passwords[0], but since
users aren’t supposed to reuse passwords anyway, what's the problem in
storing 7UgHxJYjkgWDyCa9gsrH rather than
db5670ac4a274055e3f785300bec563c48306bf3ab8cb32223a4cd311984a3b4?

[0] And possibly normalise their length/character set, something that comes
for free with hashes.

~~~
webignition
Firstly, an attacker with access to your backend database will have full
access to user credentials if passwords are not salted and hashed. An attacker
can then sign in as any user. The attacker may not be trying to steal all you
have and may instead be looking to make life difficult for a single specific
user.

Secondly, users will reuse passwords. An attacker with access to the plaintext
password and the user's email address can use this to try to gain access to
other services.

Thirdly, the attacker in the first example could be a disgruntled employee.
You don't want a disgruntled employee causing trouble.

Lastly, I like to protect myself. A user of your services may claim you or
your employees are signing in as them. It is convenient to be able to honestly
state that this cannot happen.
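For what it's worth, doing this properly is cheap: store a per-user random salt and a slow hash. A rough sketch with Python's standard library (function names and the iteration count are illustrative, not from this thread):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash with PBKDF2-HMAC-SHA256. A unique random salt
    per user means identical passwords produce different stored hashes,
    so a leaked database can't be attacked with a single rainbow table."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Recompute the hash with the stored salt; compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)
```

With this scheme nobody, employee or attacker, can read a password back out of the database.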

~~~
claudius
I'll give you the first three points, though at least the first one sounds a
little academic, given that someone with access to the backend database could
probably execute queries ‘simulating’ an active user.

However,

> Lastly, I like to protect myself. A user of your services may claim you or
> your employees are signing in as them. It is convenient to be able to
> honestly state that this cannot happen.

sounds odd, since the user transmitted their password in clear text to the
server at least once when registering, unless you use browser-side
hashing/salting, in which case this hash would then take on the role of a
password (i.e. wouldn’t help at all). Furthermore, anyone with database access
could overwrite the hash in the database, log in with the password matching
the new hash and then put the old hash back in place.

~~~
csmattryder
> Furthermore, anyone with database access could overwrite the hash in the
> database, log in with the password matching the new hash and then put the
> old hash back in place.

With a salt, it removes all doubt. If an employee has a grudge against one
customer, they could take the unsalted password and authenticate as that
customer with no database transactions; essentially no paper trail.

If that scenario happened on a hashed and salted database, you'd have
transaction records showing that employee X changed the salt & hash, then 20
minutes later changed it back. As soon as the CEO/CTO/Mr. Manager finds that
out, employee X is held accountable for their actions.

------
ancarda
SSL should really be all or nothing. The same way browsers don't like loading
insecure content from a secure origin, they should also squirm at the idea of
submitting a form from a non-secure page to a secure endpoint, including when
the parent frame is non-secure. An iframe isn't good enough.

------
dragcdxfbv
Do I have to pay or create accounts on third-party services to use HTTPS (in
nginx)?

I've never used HTTPS on my sites because browsers show a giant warning
("dangerous") when you go to a website with a self-signed certificate. No
warnings on plain HTTP.

~~~
JennyZ
You can get a free SSL certificate at <https://www.startssl.com/>, so cost
should not be a reason to keep you from using it.

I do agree that the extreme mistrust of browsers towards self-signed
certificates is an odd thing.

~~~
derefr
> I do agree that the extreme mistrust of browsers towards self-signed
> certificates is an odd thing.

No, not at all; it's the whole point of SSL. The guy MITMing an SSLed website
can create a cert for that site himself, but he can't get it signed by a CA.
So he has to sign it himself. Thus, from the browser's perspective, all self-
signed certs are possible instances of "there used to be a CA-signed cert
here, but now you're being MITMed."

Now, that's not to say something _like_ self-signed certs wouldn't be nice--
given some sort of distributed pin cache, we could have something closer to an
SSH/PGP model where everyone's current self-signed cert "fingerprint" is on
file, and alarm bells go off if you see a cert different from the one you're
supposed to see. But without that, self-signing is literally no more secure
than no SSL at all: anyone else can also self-sign to MITM you.
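A toy sketch of that pin-cache idea (names made up; trust-on-first-use, like SSH's known_hosts):

```python
import hashlib

def fingerprint(cert_der):
    """SHA-256 fingerprint of a certificate in DER form."""
    return hashlib.sha256(cert_der).hexdigest()

def check_pin(pin_cache, host, cert_der):
    """Trust-on-first-use: pin the first certificate seen for a host;
    afterwards, any different certificate should set off alarm bells."""
    fp = fingerprint(cert_der)
    if host not in pin_cache:
        pin_cache[host] = fp  # first contact: trust and remember
        return True
    return pin_cache[host] == fp
```

The weakness, as with SSH, is the first connection: if the MITM is already there when you first pin, you pin the attacker's cert.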

~~~
claudius
> but he can't get it signed by a CA.

I strongly doubt that.

~~~
derefr
Well, like I said--it's the whole point of SSL. If he can (and the CA
responsible isn't immediately shut down), it means SSL is fundamentally
broken.

------
lucb1e
HTTPS article served over HTTP. Lovely.

Edit: Besides the fact that problem #1 is off-topic, since it has nothing to
do with HTTPS, and that 3 of the 4 other points are captain obvious, #4 is
actually a good one. It's so obscure that many will forget to enable it ("all
pages are secured anyway"), but whenever a user visits _any_ HTTP page, an
attacker can inject a small frame loading the HTTP version of my website, and
even if I redirect, the cookie was already sent and read by the attacker. Only
an HSTS header or enabling the secure-cookie option protects against this.

~~~
nwh
If every blog on the internet had a static IP to serve HTTPS over, we'd have
even fewer IP addresses available than we do now.

~~~
newman314
IPv6 or SNI

~~~
lcampbell
SNI has the added merit of working on practically everything these days.

