
The “Happy Path” to HTTPS - jgrahamc
https://www.troyhunt.com/the-6-step-happy-path-to-https/
======
jacquesm
While I love the concept of Cloudflare and know and like many of the people
behind it I'm no longer promoting them. Cloudflare royally messed up and then
messed up their response as well. I can forgive the former but I have a hard
time with the latter. Also, unless your site is high-traffic or gets hit with
DDoS attacks, I don't see any need for it. Just optimize your site and you'll
be fine even without a CDN.

~~~
feedjoelpie
Is it weird that, for a certain category of tech company I intuit would
learn from their mistakes, I trust them more _because_ they've had one
catastrophic failure? And just sort of assume that many of the ones who
haven't are still riding on a wave of blissful ignorance? Maybe that's
nonsense, but it's still a thought that occupies my mind.

~~~
jacquesm
I would totally agree with you had they not blown their chance to handle this
responsibly. The finger pointing and downplaying of what had happened totally
destroyed my will to give them the benefit of the doubt in that respect.

------
SAI_Peregrinus
The one part I disagree with is the section on Cloudflare. I agree that it's
better than nothing, but you do have to trust them. I'm not sure that's wise.
Not because I expect them to deliberately misuse the access, but because it's
an increase in attack surface.

~~~
voidmain
Yeah, it's only been 6 months since
[https://en.wikipedia.org/wiki/Cloudbleed](https://en.wikipedia.org/wiki/Cloudbleed).
I guess we have to hope that they got religion after that :-/

It's kind of funny that we have Google pushing to force everything to be
HTTPS, and in response everyone adopting a service provider that MITMs them
through shared proxies written in a memory unsafe language, and doesn't
require a certificate or even HTTPS from the origin site (so that third
parties can still do MITM attacks), etc.

It's still an improvement over an `http` scheme site in an absolute sense --
at least it protects customers from their own ISP or unencrypted wifi -- but
it also hides the insecurity from the user. Oh, well.

Maybe what we really need is strict liability for data breaches. A few
companies getting successfully sued for $100 per user account after a breach
would actually start to change the culture around security.

~~~
jacquesm
I see CF as a step backwards at this point. I'm sure they have learned their
lesson but their response was absolutely terrible and that makes me wonder
about their leadership. As long as that doesn't change there is a good chance
that there will be a repeat at some point and so I'm not comfortable with
placing that much trust in them.

Fortunately I don't need them, if you are in a position where a CDN is a must
then that is a decision you're going to have to live with (or find another one
than CF).

~~~
voidmain
It's really not my area of expertise, but it seems to me that today you could
use subresource integrity to serve everything static from a CDN without having
to trust it a lot.
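The subresource-integrity approach can be sketched with openssl; the asset name and CDN URL below are made up for illustration:

```shell
# Hypothetical static asset; in practice this is the file you publish to the CDN.
printf 'console.log("hi");\n' > app.js
# SHA-384 digest, base64-encoded, in the format browsers expect for SRI:
HASH=$(openssl dgst -sha384 -binary app.js | openssl base64 -A)
# The browser will refuse to run the script if the CDN serves different bytes.
printf '<script src="https://cdn.example.com/app.js" integrity="sha384-%s" crossorigin="anonymous"></script>\n' "$HASH"
```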

The real problem is the other stuff they do, like defense against DOS attacks.

~~~
jacquesm
It's not about subresource protection, it's about them essentially man-in-
the-middling each and every connection to your website, which defeats - imo -
the whole purpose of using HTTPS in the first place. What's the point if it
isn't end-to-end?

~~~
mrkurt
(disclaimer: my company does this too)

TLS between users and a proxy protects users from lots of different attacks
(including the lovely WPA stuff). It's useful, and the alternative is
frequently "no TLS at all", not DIY.

Yes end-to-end is better. But you're still going to have to trust
infrastructure providers along the way, whether it's at the proxy level or
they can just read your disks.

Also I pretty much agree with everything you said about CF ...

~~~
lmm
You are usually going to have to trust some third parties, i.e. your
datacenter provider (though even then, I'm a believer in locked cages etc.).
But there's a difference between trusting a named third party and trusting the
public internet between CF and your hosting.

------
floatboth
Cloudbleed, security of MITM… all that stuff is NOT why I dislike CloudFlare.

It's the centralization of the internet! If everyone except
Google/Netflix/Facebook/… is routed through CloudFlare, the internet would be
even more centralized than ever :(

------
peterwwillis
" [..] we've known it's coming for quite a while now [..]"

Who is "we" ? A lot of people are about to start calling random tech support
numbers asking them why their internet is not secure. However, I think the UI
changes coming in are a vastly better mitigation for MITM than the cheap hack
that is HSTS.

I'm also about 90% sure that in two years, most people will have at least one
custom CA cert from a job, ISP, or other as-yet-undetermined need to inspect
content. I also expect it to become commonplace to ignore certificate errors,
as the number of new HTTPS sites also increases the amount of faulty TLS
setup.

And what's _really_ annoying is that HTTPS doesn't really affect user security
much. It mainly just affects privacy. Most people are not hacked by a man in
the middle. They're hacked by a person accessing a database, or running an
authentic looking website, or exploiting a bug. So while a lot of headaches
will be caused by adopting HTTPS everywhere, people won't necessarily be any
safer.

~~~
MrManatee
> And what's really annoying is that HTTPS doesn't really affect user security
> much. It mainly just affects privacy. Most people are not hacked by a man in
> the middle. They're hacked by a person accessing a database, or running an
> authentic looking website, or exploiting a bug. So while a lot of headaches
> will be caused by adopting HTTPS everywhere, people won't necessarily be any
> safer.

Maybe I misunderstood, but it sounds like you're criticizing HTTPS for its
success. People only rarely get hacked using man-in-the-middle attacks,
because popular sites are already using HTTPS. If they didn't, I'm sure they
would be MITM-hacked all the time.

~~~
peterwwillis
MITM is a lot more work for a lot less payoff. Say you found a small ISP that
allows DNS cache poisoning. Once you succeed you get X users over a day or
two. It's still a fraction of users of a small ISP. With a botnet, they can
collect hundreds of thousands to millions of users, and all they have to do is
own one site. Or they can send spam all day and not have to own anything.

Of course MITM is a concern, it's just not the biggest concern, IMO.

~~~
MrManatee
Ah, I wasn't really thinking about DNS cache poisoning. I was thinking about
someone going to a public place (a school, a cafe, an airport), setting up a
deceptively named Wi-Fi hotspot on their smartphone, and intercepting all non-
HTTPS traffic that's going through.

Maybe this is not a lucrative opportunity for someone who also has the skills
to gather a botnet that consists of millions of computers. But this attack
requires minimal skills. If Gmail didn't use HTTPS, there would be an easy-to-
use Gmail hacking app. If Facebook didn't use HTTPS, there would be an easy-
to-use Facebook hacking app. The risk of getting caught is small. And by going
to the right place, there's a reasonable chance of targeting a particular
person, which many would find appealing. I think that the only reason attacks
like this aren't more common is that most of the high-value attack targets are
already using HTTPS.

------
knieveltech
+1 for LetsEncrypt. The web agency I work for standardized on this for all new
builds and existing clients when their existing cert expires. Zero to HTTPS in
less than five minutes.

------
DoodleBuggy
Funny timing, I was speaking earlier today with a startup CTO who was annoyed
with various HTTPS problems. Put simply, businesses don't want to deal with
this stuff. That means they'll pay for it; that's an opportunity for someone.

Offer this "happy path" as a service, companies will pay for it right now.

~~~
mrkurt
Everyone is offering this happy path. And people do pay for it, but the market
price for "a Lets Encrypt cert on my site" is $0. It's mostly just a great way
to get people started on a more valuable service.

~~~
michaelbuckbee
Lets Encrypt works fine for personal sites, etc., but it absolutely crushes
the use case of being a large-scale service needing to support thousands of
individual sites, each with its own SSL cert.

Heroku, Hubspot, Shopify, etc. have all implemented LetsEncrypt, and UX-wise
it's now pretty much just the default.

~~~
netzone
Yup. I use LetsEncrypt for 12 sites with their own individual certs, and even
that gets painful, but manageable.

~~~
majewsky
If that is painful, you should have automated it. I don't even know how many
certs I have ATM (maybe 20 or 30 or so), because it's so easy to add new
domains. I described my setup here: [https://blog.bethselamin.de/posts/how-i-
run-certbot.html](https://blog.bethselamin.de/posts/how-i-run-certbot.html)

------
systematical
Nice article. At my company, a $9-per-year SSL cert through Namecheap means we
go with that for the support, which has come in handy. For non-critical things
and my personal stuff I am on letsencrypt. If you are looking to go beyond
this article, use a free SSL scanner such as
[https://www.htbridge.com/ssl/](https://www.htbridge.com/ssl/), which will dig
into cipher suites, which come into play with HIPAA and PCI.

~~~
imron
> which has come in handy

How much support can you really get for $9 a year?

Let's Encrypt is basically set-and-forget (making sure to also set up
appropriate notifications for when things go wrong).

~~~
dalore
Yeah, for $9 a year they get a service where, when their cert expires, they
have to go through the whole process again manually: creating a signing
request and keeping the keys secure.

For $0 a year with letsencrypt you can have it autorenew and not need any
support.
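The autorenew in question is typically a single cron entry; a minimal sketch (the schedule, user, and reload hook are illustrative):

```
# /etc/cron.d/certbot -- attempt renewal twice daily; certbot only renews
# certificates that are actually close to expiry.
0 3,15 * * * root certbot renew --quiet --post-hook "systemctl reload nginx"
```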

Also, what support would an HTTPS provider offer other than "here are the
commands to generate a CSR"?

------
fps
As much as I appreciate https for sensitive information, I feel that for many
things, it also functions as a way to lock down the standard protocols of the
web and lock users in to proprietary software and services. No longer can you
spy on the requests that that piece of closed source software is making while
it phones home. No longer can you rescue and reuse IoT devices whose
manufacturers have shut down their servers, or refuse to provide updates or
promised functionality. Now all communications are inscrutable binary streams
that can't be examined or improved, especially from devices that don't allow
you to upload a replacement certificate authority.

------
greggman
Still no solutions for IoT devices serving web pages, like routers, NAS boxes,
IP cameras, etc.

~~~
prophesi
I'm hoping in the future it'll be easier (in a secure fashion) to implement
self-signed certs for those sorts of things that only have to serve pages on a
local network.

~~~
tzs
I've never found a good, single document that covers self-signed certificates
well. Googling for guidance results in a lot of different ways to do it, and
if you don't find one that does exactly what you want, trying to combine that
one with a different one that does the things missing from the first might not
work because they are using different approaches.

I did manage to glean enough to make a pair of simple scripts that seemed to
work for my needs, but have no idea if they are actually right, or if putting
together things from different guides messed things up.

Here's the script I use for my root certificate (makeroot.sh):

    
    
      #!/bin/bash
      NAME=${1:?Must specify name for root}
      CN=${CN:-My Little Certificate Authority}
      O=${O:-My Home Network}
      C=${C:-US}
      DAYS=${DAYS:-365}
      PW=${PW:--aes256}
      SUBJ="/CN=$CN/O=$O/C=$C"
      openssl genrsa $PW -out $NAME.key 2048
      openssl req -x509 -new -subj "$SUBJ" -nodes -key $NAME.key -sha256 -days $DAYS -out $NAME.pem
    

So suppose I'm setting up a private certificate authority for my iot devices.
I would do:

    
    
      $ ./makeroot.sh iot
    

That will make iot.key and iot.pem.

To make a certificate signed with the iot certificate I use this script
(makecert.sh):

    
    
      #!/bin/bash
      NAME=${1:?Must specify name for cert}
      ROOT=${2:?Must specify name for root}
      CA="-CA $ROOT.pem -CAkey $ROOT.key -CAcreateserial"
      O=${O:-My Home Network}
      C=${C:-US}
      DAYS=${DAYS:-365}
      SUBJ="/CN=$NAME/O=$O/C=$C"
      openssl genrsa -out $NAME.key 2048
      openssl req -new -key $NAME.key -subj "$SUBJ" -out $NAME.csr
      openssl x509 -req -in $NAME.csr $CA -out $NAME.crt -days $DAYS -sha256
      rm $NAME.csr
     

E.g., to make a key for an iot doorbell:

    
    
      $ ./makecert.sh doorbell iot
    

That makes doorbell.crt and doorbell.key.
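One sanity check the scripts themselves never perform is `openssl verify`. A condensed, self-contained rerun of the same steps (throwaway keys, no passphrase) plus that check:

```shell
# Recreate the essential steps from makeroot.sh/makecert.sh, then verify.
openssl genrsa -out iot.key 2048
openssl req -x509 -new -subj "/CN=My Little Certificate Authority/O=My Home Network/C=US" \
    -nodes -key iot.key -sha256 -days 365 -out iot.pem
openssl genrsa -out doorbell.key 2048
openssl req -new -key doorbell.key -subj "/CN=doorbell/O=My Home Network/C=US" \
    -out doorbell.csr
openssl x509 -req -in doorbell.csr -CA iot.pem -CAkey iot.key -CAcreateserial \
    -out doorbell.crt -days 365 -sha256
# Confirm the leaf certificate actually chains to the private root:
openssl verify -CAfile iot.pem doorbell.crt
```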

That worked well until the need arose to make an SNI certificate. I had
expected that it would just be a matter of slightly tweaking makecert.sh,
adding a few more arguments.

But all I could find in the way of examples took a different approach, where
most of the information is passed in a config file, and some of the guides I
read indicated that you cannot pass the names on the command line to openssl
when doing SNI. Anyway, this is the script I came up with (makesni.sh):

    
    
      #!/bin/bash
      function makeconf {
          CN=$1
          DIR='$dir'
          shift
          cat > tmp.conf <<HERE
      [ req ]
      distinguished_name = dn
      req_extensions = req_ext
      unique_subject  = no
      prompt = no
      
      [ ca ]
      default_ca = tzs_ca
      
      [ tzs_ca ]
      dir = ca-files
      private_key	= $DIR/iot.key
      certificate     = $DIR/iot.pem
      new_certs_dir   = $DIR
      database	= $DIR/index.txt	# database index file.
      unique_subject  = no
      default_md	= sha256		# use public key default MD
      serial		= $DIR/iot.srl 		# The current serial number
      email_in_dn     = tzs@mouse-potato.com
      default_days    = 365
      
       
      [ dn ]
      CN = $CN
      O = My Home Network
      C = US
      
      [ policy_anything ]
      countryName		= optional
      stateOrProvinceName	= optional
      localityName		= optional
      organizationName	= optional
      organizationalUnitName	= optional
      commonName		= supplied
      emailAddress		= optional
      
      [req_ext]
      HERE
      
          if [ -n "$1" ]
          then
              cat >> tmp.conf <<HERE
      subjectAltName = @alts
      
      [ alts ]
      DNS.1 = $1
      HERE
              shift
          fi
          POS=2
          while [ -n "$1" ]
          do
              echo "DNS.$POS = $1" >> tmp.conf
              shift
              POS=$((POS + 1))
          done
      }
      
      NAME=${1:?Must specify name for cert}
      makeconf $*
      openssl genrsa -out out/$NAME.key 2048
      openssl req -new -key out/$NAME.key -config tmp.conf -out out/$NAME.csr
      openssl ca -policy policy_anything -out out/$NAME.crt -config tmp.conf -extensions  req_ext -infiles out/$NAME.csr
      rm out/$NAME.csr tmp.conf
      

This assumes a directory, ca-files, that contains iot.key and iot.pem, and
also an empty file named index.txt. It also assumes an out directory exists.

To make an SNI certificate for a device that has three hosts, named fridge,
freezer, and icemaker:

    
    
      $ ./makesni.sh fridge freezer icemaker
    

That will make fridge.crt and fridge.key in the out directory, and that will
have fridge as the CN and freezer and icemaker as subject alternate names.

OK, so now I can put doorbell.crt and doorbell.key on the doorbell device, and
fridge.crt and fridge.key on the device that has the fridge, freezer, and
icemaker sites, and all is well, right?

Well...I also need to give my browsers iot.pem so they will recognize those
other certificates. I'd also like curl and Perl and Python to recognize them,
so that I can write scripts that do fancy iot things.

I didn't find any good guide to installing the CA certificate. From a bunch of
Googling I came up with:

• On Debian Linux, copy iot.pem to /usr/local/share/ca-certificates, and
change the extension to .crt.

Run update-ca-certificates.

That should make it available to Perl, Python, and curl.

For Chrome and Firefox, find their certificate management dialogs and use them
to add the certificate.

• On OS X, double-click iot.pem. That should open Keychain Access. Let it
import the certificate. Set it to be trusted for SSL. That will make it work
in Chrome and Safari.

For Firefox, same as Linux.

For Perl, do one of:

1\. Set environment variable PERL_LWP_SSL_CA_PATH or HTTPS_CA_DIR to point to
a directory containing iot.pem, or

2\. Set the environment variable PERL_LWP_SSL_CA_FILE or HTTPS_CA_FILE to
point to iot.pem.

The first requires that the directory contain symbolic links of the form
hash.N where hash is a hash of the certificate and N is a sequence number. You
need two of these, because there apparently is an old hash and a new hash.
'openssl x509 -subject_hash -noout -in iot.pem' to get the new hash, and
'openssl x509 -subject_hash_old -noout -in iot.pem' to get the old hash. Or
just go to the cert directory and run 'c_rehash .' and 'c_rehash -old .' and
those will make all the symlinks for you.

For Python3, similar to Perl but with SSL_CERT_DIR and SSL_CERT_FILE as the
environment variables. Same hash symlinks as Perl (although I think you only
need the new hash).

For curl, "\--cacert iot.pem" or "\--capath <dir>" where <dir> is a directory
with the certificate. Same symlink considerations as Python3. Or set
CURL_CA_BUNDLE env variable to point to a bundle that contains iot.pem. A
bundle is just a bunch of certificate files concatenated.

• Windows, I have no idea.

~~~
feelin_googley
Not using dhparam? Don't like ECDH?

I wrote a similar amateur script years ago which has required updates from
time to time to reflect new developments in the ongoing SSL saga.

This was started before anyone cared about SNI.

If the use case for the CA is a home network, I prefer using a local SSL-
enabled proxy for connecting to both remote and local SSL-enabled endpoints.
This lets me specify a short list of acceptable ciphers. It also permits me to
use legacy, non SNI-enabled SSL/TLS clients. The remote and local hosts are
"backends" that go in the proxy config file. I control DNS via local root or
use HOSTS to redirect to the proxy.

If I was running multiple local SSL enabled devices/servers from the same
local IP on a home network, then I believe I could just put these in the
config file as backends listening on high, unprivileged ports. I do not
believe I would need SNI because I could filter requests based on domain name
and/or the filepath in the url.

The ultimate solution IMO would be an sslwrap type utility that has been
revised to use more modern encryption, i.e. NaCl. Any application could then
use NaCl instead of SSL/TLS, without having to modify the application. The
NaCl author has mentioned several times that he has written such a utility and
even said he will release it at some point.

IMO, that would be a big step in freeing us from the ills of SSL/TLS and the
3rd party CA system. Users would get easy-to-use, high quality, high speed
cryptography without having to learn all of the SSL/TLS complexity, not to
mention having to keep up with incessant bugs and security updates.

~~~
tptacek
If you shed all legacy compatibility and narrow your configuration down to
modern AEAD ciphersuites and an ECDH handshake, you can get TLS to a point
asymptotically as secure as the best secure transport you'd achieve with NaCL
and no other crypto primitives. Which leaves you to wonder why you'd bother
doing the NaCL thing at all --- which is probably why not many people use
custom secure transports.

Most (not all) of what's gone wrong with TLS has gone wrong in things that had
already been outmoded for a very long time, and long since replaced with
better constructions in later versions of the protocol.

~~~
feelin_googley
One of the things that perplexes me as a naive user about TLS is the
ridiculous number of cipher choices. How do you explain it? Some people still
need them? Impossible/impractical to cut TLS down to size? People who comment
intelligently about crypto seem to agree that complexity is the enemy of
security, but does TLS even try to reduce complexity?

There is so much accumulated cruft to SSL/TLS and the implemented x509
certificate scheme that to me, _as a noob who knows nothing about crypto_ i.e.
the average user, the easier path is to scrap all that ("shed all legacy"
junk) and focus on learning a few things that are both flexible and known to
be useful. I believe NaCl fits this role.

I have some basic UNIX-like utilities built with NaCl, one for each function
(crypto_box, crypto_secretbox, etc.). Minimal non-djb code. In the interests
of experimentation and learning, I prefer using small, separate utilities
versus applications that do multiple jobs.

I would be interested in an "expert" opinion on my experiments with these
utilities but crypto is such a divisive topic. Though they may be initiated as
sensible, honest questions, online discussions quickly turn toward dogma
supporting status quo, mindless memes, subtle insults and are sometimes
derailed into the realm of absurdity. I would post an example of the usage for
comment, but I am not interested in being chastised by the HN peanut gallery.
It is just intellectual experimentation, nothing more.

IMO, as a general principle not limited to crypto, there is nothing wrong with
"custom" anything _if_ it has passed the same QA tests as the "mass-produced"
version. Sometimes in fact custom is higher quality than mass-produced.
Popularity does not always signify higher quality. Perhaps nowhere is this
more evident than in the world of software.

IMO, a "naclwrap" program would be very useful, even in a "TLS world". Like
stunnel but for nacl. Whether "everyone" would use it is an interesting
question but ultimately not something I care about.

~~~
tptacek
There is no question that TLS is loaded up with cruft and dangerous legacy
goo.

But, again, you can remove 80-90% of it if you don't care about compat. People
run into a cognitive block when they think about TLS because they assume TLS
means "browser compatible". But no browser speaks a NaCL transport today, so
that's out the window.

Without the requirement to support browsers, you can:

* Allow only an ECDH handshake.

* Allow only the Chapoly ciphersuite, in TLS 1.2's AEAD format.

* Eliminate CAs and do an SSH-style key-continuity scheme.
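The narrowed suite can be expressed as an OpenSSL cipher string; a quick way to see which suites survive the restriction (exact names vary slightly by OpenSSL version):

```shell
# Suites remaining after narrowing to an ECDHE handshake + ChaCha20-Poly1305 AEAD:
openssl ciphers -v 'ECDHE+CHACHA20'
```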

This TLS "subprotocol" already exists and is already supported by most of the
TLS libraries, all of which have been audited. It's supported by middleboxes
and monitoring tools so it can be deployed operationally. Every mainstream
programming language has bindings to it. Meanwhile, NaCL doesn't actually
provide a transport protocol or even the security semantics of a secure
transport; that's work you'll have to do _de novo_ , and you will generate
bugs doing it.

So what's the advantage to scrapping TLS and redoing it with NaCL?

~~~
feelin_googley
I agree. Especially with the part about CAs and a more SSH-like approach.

But unlike you I have little interest in "browsers". This is why I would want
a "naclwrap" utility but you might not see any point. CurveCP is the
experimental transport protocol. I do not need it in a browser because I am
not interested in browsers. I like _experimenting_ outside of browsers and the
"web" so I have no reason to resist CurveCP.

What is the advantage of scrapping TLS? If you mean _for everyone_ , maybe
there is none. Why not allow both TLS and NaCl to coexist? Why does TLS have
to be "redone" with NaCl?

For _me personally_ , the advantage of "scrapping TLS" is that I get to ignore
all the cruft and complexity that I have to sift through to get to the proper
"subprotocol" within TLS. Too much work.

I am not the one who will write "naclwrap" if it ever is released. The person
who wrote it does not introduce bugs and security issues. He is not like the
people who work on TLS and most developers in general. He is careful.

In any case, I am not tasked with persuading _anyone else_ to scrap TLS. I am
simply a user who 1. likes NaCl, 2. prefers the idea of per packet encryption
to the notion of encrypted "tunnels" and 3. is apt to complain about TLS only
because like other web users I am forced to use it whether I want to or not.

For someone focused on influencing developers who write programs for other
people, shaping web standards or at least very interested in where they might
be heading, your comments are poignant. But I am just a user. I write trivial
programs for myself. I am not futilely trying to tell developers what to do,
shape standards nor am I very interested in where things are heading, except
to the extent I can minimize the computer usage-related annoyances I must
endure.

~~~
feelin_googley
Compared to NaCl, for someone who wants to learn how cryptography libraries
work, "TLS" is too much of a moving target. It is unfinished software that may
never be finished. Too many versions of too many libraries supporting too many
ciphers by too many developers. Too many knobs and switches.

As for OpenSSL, not everyone is on 1.1 yet. There is no ChaCha20-Poly1305 in
the 1.0.x series. So while some bits of NaCl have been adopted into the TLS
suite, it is
only a subset of web servers that are supporting the so-called "subprotocol".
And still no Ed25519, even though it has been used in OpenSSH for some time
now.

All these factors make TLS undesirable _for me_. Too much complexity compared
to NaCl, IMO. _For others_ , TLS may be the right choice.

------
k__
I have to say, with LetsEncrypt the whole thing got rather simple.

I once needed an HTTPS server for IPA distribution and got it set up in under
an hour on my existing nginx.

------
HurrdurrHodor
The appropriate way for users to defend themselves is to simply install HTTPS
Everywhere and check "Block all unencrypted requests". This avoids sslstrip,
requires no redirect magic, and no HSTS.

Although somebody should really patch it to just display big fat warnings
because it is somewhat annoying to turn it on and off all the time.

------
ge96
Just putting this out there: check out Qualys SSL Labs to see how your HTTPS
setup fares. Maybe you're using a weak cipher suite, etc.

------
ahochhaus
Does using `Content-Security-Policy: upgrade-insecure-requests` in addition to
HSTS add value?

~~~
lstamour
Yep: HSTS only applies to _your_ site, while upgrade-insecure-requests applies
to every resource your site loads, even on third-party domains. Meanwhile,
upgrade-insecure-requests does not replace HSTS because it doesn't help secure
links from offsite or direct entry, which HSTS solves especially with
preloading. Monitoring CSP headers and actually fixing bugs would help fix
things in browsers that don't support upgrade-insecure-requests.
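In raw header form, the combination described above looks roughly like this (the max-age value is illustrative):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
Content-Security-Policy: upgrade-insecure-requests
```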

~~~
ahochhaus
Thanks for the clarification. I did not realize that `upgrade-insecure-
requests` applies cross origin. If you do not load any insecure content is
setting HSTS and `block-all-mixed-content` the best strategy?

~~~
lstamour
As pointed out by Microsoft earlier today, MDN is one of the best resources on
this sort of thing. Here they write:

> The upgrade-insecure-requests directive is evaluated before block-all-mixed-
> content and if the former is set, the latter is effectively a no-op. It is
> recommended to set one directive or the other – not both.

Source: [https://developer.mozilla.org/en-
US/docs/Web/HTTP/Headers/Co...](https://developer.mozilla.org/en-
US/docs/Web/HTTP/Headers/Content-Security-Policy/block-all-mixed-content) and
[https://developer.mozilla.org/en-
US/docs/Web/HTTP/Headers/Co...](https://developer.mozilla.org/en-
US/docs/Web/HTTP/Headers/Content-Security-Policy/upgrade-insecure-requests)

Yes they are similar, and you have to watch which one you set -- but you can
also achieve a similar effect at a more granular level using CSP as also
indicated in MDN. These rules are equivalent to saying `default-src https:` in
the CSP rule.

In fact, the best option is individual CSP directives which can get more
granular than the `https:` scheme alone, because you can then specify which
trusted third-party domains (if any) are allowed to load resources on your
pages and conditions (like nonces) for running script tags, data URIs, etc.
After all, your secure third-party resources could still have servers
compromised and they might then send malicious assets over SSL to your
unsuspecting users' browsers.

CSP, if trusted enough to set it to block instead of just report (though you
can run both modes at the same time), is one of the best defence-in-depth ways
to protect your page from attack, right up there with HttpOnly and Secure
flags on cookies. [https://developer.mozilla.org/en-
US/docs/Web/HTTP/Cookies#Se...](https://developer.mozilla.org/en-
US/docs/Web/HTTP/Cookies#Secure_and_HttpOnly_cookies)

If you're looking for checklists, have a look at
[https://wiki.mozilla.org/Security/Guidelines/Web_Security](https://wiki.mozilla.org/Security/Guidelines/Web_Security)
and [https://blog.appcanary.com/2017/http-security-
headers.html](https://blog.appcanary.com/2017/http-security-headers.html)
though remember no checklist is going to deliver bulletproof security on its
own (you'll have to inspect your app and environment for flaws, implement
monitoring tools, etc.), and blindly implementing security headers or features
without knowing what they do can obviously break your app. (Again, monitoring
your app can help.)

------
chrismorgan
I just got a 502 Bad Gateway from Cloudflare. Reloaded the page and it worked,
but mildly bizarre.

