Does my site need HTTPS? (doesmysiteneedhttps.com)
106 points by WallyFunk on Jan 31, 2022 | 138 comments



Eh, I really wish there were more pushback against HTTPS.

It comes with a lot of problems, including the reporting of browsing information through non-stapled OCSP; there are still major MITM problems (yes, still; CloudFlare, for example, is a huge-ass MITM); and no matter what this site claims, HTTPS is definitely still a lot slower than HTTP, even with HTTP/2; and it further makes it a lot easier to hide which data is extracted from a computer from the user. Encryption is great if you are using it, but it can also very much be used against you. The centralization it drives also creates unpleasant attack vectors for snooping governments.

I wish there were a way for non-sensitive data to be transmitted in plain text but signed with a server certificate. This would solve many of the same problems while avoiding many of the downsides of HTTPS.


> reporting of browsing information through non-stapled OCSP

This is a client bug, IMO. And sensible servers like Caddy (and yes, before people complain about disclosure, I made it) staple OCSP automatically.

> there are still major MITM problems (yes, still; CloudFlare, for example, is a huge-ass MITM)

But optional (and often discouraged), not required! Separate issue.

> HTTPS is definitely still a lot slower than HTTP, even with HTTP/2

That's just plain false across most dimensions. https://css-tricks.com/http2-real-world-performance-test-ana...

> it further makes it a lot easier to hide which data is extracted from a computer from the user

That's a separate issue too; don't use clients you don't trust, period. And some big-name clients like Chrome let you dump keys to inspect data transfers locally.
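
For instance (a rough sketch; assumes a Linux-ish shell, a Wireshark install, and placeholder paths): both Chrome and Firefox honor the SSLKEYLOGFILE environment variable, and Wireshark can use that key log to decrypt a local capture:

    # Launch the browser with TLS key logging enabled (works in Chrome and Firefox)
    SSLKEYLOGFILE="$HOME/tlskeys.log" google-chrome &

    # Capture traffic as usual, then point Wireshark at the key log:
    # Preferences -> Protocols -> TLS -> "(Pre)-Master-Secret log filename"
    wireshark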

> I wish there were a way for non-sensitive data to be transmitted in plain text but signed with a server certificate.

TLS supports a "NULL" cipher in certain cipher suites for signed-only transmissions, but mainstream server and client support are limited (for good reasons).


> This is a client bug, IMO. And sensible servers like Caddy (and yes, before people complain about disclosure, I made it) staple OCSP automatically.

As a client, I don't get to choose how the servers I interact with are configured.

> But optional (and often discouraged), not required. Separate issue!

Not at all a separate issue. The issue is the absolute fantasy that HTTPS makes your connection secure.

> Don't use clients you don't trust, period. And some big-name clients like Chrome let you dump keys to inspect data transfers locally.

Are you really telling me I have to implement every piece of software I want to run myself from scratch, write custom firmware for my processor, for my router, then sit down and create a custom ROM with a custom operating system for my phone?


> As a client, I don't get to choose how the servers I interact with are configured.

Clients are the ones that cause the privacy leak, not servers.

> The issue is the absolute fantasy that HTTPS makes your connection secure.

Oh, OK.

> Are you really telling me I have to implement every piece of software I want to run myself from scratch, write custom firmware for my processor, for my router, then sit down and create a custom ROM with a custom operating system for my phone?

If you don't trust them, then yes. Nothing new here though.


> Clients are the ones that cause the privacy leak, not servers.

Regardless of where the leak is, this is not a problem that exists in HTTP. You could not just go to a central server and gather a log of some NN% of the HTTP connections being made. No, DNS is not analogous. You can choose your DNS server, you can even run your own.

> If you don't trust them, then yes. Nothing new here though.

What's new is that it's become a lot harder to examine the information my software is sending to servers on the Internet. If they all talked HTTP, it would be trivial for me to intercept and examine. You can do that with HTTPS, sometimes, but it's a lot harder and relies on the software obeying OS proxy settings.


You probably want a layer 7 firewall that only allows HTTPS traffic to/from the internet to your terminating proxy.

Stuff that doesn't obey the system proxy (or a few of the other pitfalls e.g. HPKP) will break, but at least you'd be aware of what's not allowing itself to be intercepted.


> Not at all a separate issue. The issue is the absolute fantasy that HTTPS makes your connection secure.

HTTPS is reasonably secure if the person in control of the domain is making sure it is reasonably secure. Setting up your whole domain to be MITM is not that.

Your connection to whatever place the person in control of the domain has delegated your connection to is reasonably secure.


> HTTPS is reasonably secure if the person in control of the domain is making sure it is reasonably secure. Setting up your whole domain to be MITM is not that.

And how do you know how secure a certificate issued by Google or Cloudflare or Microsoft is? Those certificates are just trusted. Good luck with security.


As a client you get to choose whether to check OCSP or not. If you don't like it, don't do it.


> The issue is the absolute fantasy that HTTPS makes your connection secure.

Well, this is a kind of compromise. It is similar to COVID-19 vaccines: even though they don't fully prevent transmission, hospitalization, or death, they do reduce them (at least the last two).

Even though HTTPS cannot offer a very high protection level on its own, together with accompanying technologies it does make certain attacks more difficult, especially sniffing. So for most intents and purposes, HTTPS does make a connection more secure.


> That's just plain false across most dimensions. https://css-tricks.com/http2-real-world-performance-test-ana...

I always feel like any performance claims are nowhere near my real life experience, not even when talking about synthetic benchmarks.

For example, I recently set up an Apache server that returned a 304 response on an AWS t3.micro instance, and with the Apache Benchmark tool I got around 3.5k[0] requests per second. Once I enabled HTTPS, that fell to around 500 requests per second.

From what I read at the time (because I was aware of the general consensus that they should have comparable performance), there is some kind of hardware feature the TLS encryption/decryption routines can use to speed things up, but it's only available in AWS on m3-type instances and above. I read it in a Mozilla (I think) article that I couldn't find again just now.

[0] That 3.5k/500 rps might be higher for you, because I did a maxminddb IP lookup on each request, which I'm sure cuts the rps considerably.
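
If anyone wants to reproduce a comparison like this, it's roughly the following (hypothetical host; ab needs TLS support for the second run, and a single-URL benchmark mostly measures handshake/crypto cost rather than real page loads):

    # Plain HTTP
    ab -n 10000 -c 50 http://test.example.com/

    # Same URL over HTTPS; adding -k (keep-alive) amortizes the TLS handshake cost
    ab -n 10000 -c 50 -k https://test.example.com/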


If all your server does is respond with a single small no-body response, then yes, plain HTTP will be much quicker. If you serve a somewhat normal page (even something considered small by current standards, like 5-10 resources of 10-ish kB each), it will be a wash, or HTTP/2 will be better.


Relying on microbenchmarks is always dangerous.


Still, it's not a great look when your claim that something is "just plain false" is readily disproved by the obvious microbenchmark. You can certainly argue that it doesn't matter, other factors mitigate it, &c., but at least then you can have the honest talk about how that's true and why you need encryption to make those optimizations.

... except that you don't; they're gated behind HTTPS expressly to strong-arm HTTPS adoption.


Is there a layperson explanation for why HTTPS is faster than HTTP? On the surface, "read file -> send to client" seems like fewer steps than "read file -> encrypt -> send to client".


I didn't see any claim that https is faster. If you benchmark https on a path where encryption is the only significant work being done, and on hardware that does not have encryption acceleration, then it will appear to be a lot slower. However, in real world applications the https encryption is usually accelerated by a dedicated processor feature, and the other work being done for a request is going to make the encryption time irrelevant. If you are serving static files over http, on a machine with an old cpu, with enough volume to saturate your cpu on http, then https will slow it down significantly. But typically, any server doing that will have 100x more capacity than it needs anyway.
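
If you want to check whether your box has that acceleration, a rough way to do it (assuming Linux and OpenSSL; the CPU flag differs on non-x86):

    # x86: AES-NI shows up as the "aes" CPU flag
    grep -o -m1 '\baes\b' /proc/cpuinfo

    # Compare the accelerated (EVP) path against the generic software path
    openssl speed -evp aes-128-gcm
    openssl speed aes-128-cbc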


"Faster" is ambiguous, and there are a lot of factors involved. HTTPS isn't just "add encryption", it's also "do more at once and don't block as much" because HTTP/2 (a faster version of HTTP) only uses HTTPS in practice.

You can discuss "HTTPS performance" in a purist way that makes it look slower than plain HTTP, but in practice (the only way that matters) HTTPS is faster than HTTP.


http/2 is faster (for various technical reasons but basically it sends multiple things at once better than old http) and http/2 requires security.

So technically http/2 is faster, but it is only possible with https.


> So technically http/2 is faster, but it is only possible with https.

Because of arbitrary client restrictions. HTTP/2 can be unencrypted, and would be faster unencrypted than encrypted. It's just that mainstream browsers decided not to implement the optional cleartext mode (h2c).

I'm not arguing either way, but technically it is possible to do it without https.
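
For what it's worth, cleartext HTTP/2 ("h2c") does exist in some servers and non-browser clients; e.g. with curl against a server that has h2c enabled (hypothetical URL):

    # Skip the HTTP/1.1 Upgrade dance and speak HTTP/2 directly over plain TCP
    curl -v --http2-prior-knowledge http://example.com/

    # Or attempt the Upgrade: h2c negotiation from HTTP/1.1
    curl -v --http2 http://example.com/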


The article they linked explained it pretty simply, I thought. One of the improvements is that rather than fetching resources one by one over separate requests, it multiplexes many requests at once over a single connection.


> TLS supports a "NULL" cipher in certain cipher suites for signed-only transmissions, but mainstream server and client support are limited (for good reasons).

IIRC Windows Update used to do that in the XP days, to guarantee responses were signed while taking the pressure of encryption off the CDNs handling updates at the time. (Back when SSL/TLS was fairly heavy, and Windows Update traffic was tens of percent of internet traffic.)


> I wish there were a way for non-sensitive data to be transmitted in plain text but signed with a server certificate. This would solve many of the same problems while avoiding many of the downsides of HTTPS.

Isn't this, more-or-less, HTTPS? I think you'd end up with something very similar as you'd likely need to solve the same problems HTTPS attempts to solve:

1. How do you know the authenticity of a message? You need some out of band channel of communication or trusted intermediary to verify anything you receive (e.g. a certifying authority)

2. How do you know when somebody's signature has changed?

3. When does a message expire? Can somebody replay a previous valid response?

And stepping past a few of these implementation issues; how much overhead is HTTPS really? TLS is definitely a chatty protocol, but as I understand things that has more to do with TCP and the complexity of the CA system than the encryption itself.

I like your idea of a faster HTTP variant that only guarantees message integrity rather than secrecy, but I myself can't see how you'd get there without implementing a lot of the things that make HTTPS slower than vanilla HTTP.
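
Just to make the shape of it concrete, here's a hypothetical minimal sketch in Go. It hand-waves away points 1 and 2 above by assuming the Ed25519 public key is distributed out of band, and uses a signed timestamp as a crude answer to point 3; the header names are made up, not any real protocol:

    // Hypothetical sketch only: sign each response body plus a timestamp with an
    // Ed25519 key whose public half is distributed out of band.
    package main

    import (
        "crypto/ed25519"
        "encoding/base64"
        "fmt"
        "net/http"
        "strconv"
        "time"
    )

    func signedHandler(priv ed25519.PrivateKey, body []byte) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            ts := strconv.FormatInt(time.Now().Unix(), 10)
            // Sign timestamp + body; a verifying client would reject stale timestamps.
            sig := ed25519.Sign(priv, append([]byte(ts+"\n"), body...))
            w.Header().Set("X-Signature-Time", ts)
            w.Header().Set("X-Signature", base64.StdEncoding.EncodeToString(sig))
            w.Write(body)
        }
    }

    func main() {
        pub, priv, _ := ed25519.GenerateKey(nil) // nil -> crypto/rand
        fmt.Println("public key:", base64.StdEncoding.EncodeToString(pub))
        http.Handle("/", signedHandler(priv, []byte("hello, plaintext but signed\n")))
        http.ListenAndServe(":8080", nil)
    }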


You can in fact do this with HTTPS, but you'd have to find a client and server that support it. There's a choice of ciphers, and one of them is the "null" cipher, I believe it's called. I'd be shocked if anything really supports it, though, without it being explicitly configured on.


You don't actually have to use OCSP if you're willing to accept that you might erroneously trust a revoked certificate. Having short cert lifetimes helps reduce the need for this unless you're really worried about security. There is no MITMing TLS that isn't explicitly allowed by the client (proxy) or server (reverse proxy). Nothing about TLS makes hiding data from the user any easier or harder. If you really wanted to hide your data extraction you would just encrypt the payload and then send it in "plain text."

Centralization is the point where I agree with you, I would love to see browsers support a scheme where certs can be TOFU with a private CA.


> and it further makes it a lot easier to hide which data is extracted from a computer from the user.

I don't understand this argument. If the world was "HTTPS-optional" like it was 10 years ago, anything that wants to hide which data is extracted can still (optionally) implement HTTPS (or whatever encryption they please).

So the only way this argument makes sense to me is if 'pushback against HTTPS' means ban encryption, and it seems to me that's not what you're saying.

> The centralization it drives also creates unpleasant attack vectors for snooping governments.

The internet is already heavily centralized around a couple of large IXes. Not having HTTPS will only make snooping governments' jobs easier. It doesn't matter to them whether their wiretaps are in CloudFlare's datacenter or in the IX next door, and they'd probably get greater coverage with the latter.


Does your site need to redirect me to the French version (https://faut-il-https-sur-mon-site.fr/) just because I'm in Canada?

No, no it does not.


It's based on your browser's Accept-Language header, nothing to do with location.

(It's only a naive implementation of parsing the Accept-Language header. Sorry, it was all I could do at the time.)


Chrome sets it if you tell it you want to use its spellchecker with some languages.

I'm not sure what the standard says, but I suspect you'd surprise users considerably less if you used the first language listed in that header rather than the last (assuming equal or missing q-values).


It's been years since I looked at the code/config but I literally think it's a "strings.Contains()" call... so not even parsing at all.

A proper implementation would make for a good Caddy module.
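
A slightly-less-naive version might look roughly like this (a hypothetical sketch, not the site's actual code): split on commas, honor q-values, and keep header order for ties instead of substring matching.

    // Hypothetical sketch of Accept-Language parsing with q-values.
    package main

    import (
        "fmt"
        "sort"
        "strconv"
        "strings"
    )

    type langQ struct {
        tag string
        q   float64
    }

    // preferredLanguage picks the best match from `supported` for an Accept-Language header.
    func preferredLanguage(header string, supported []string) string {
        var prefs []langQ
        for _, part := range strings.Split(header, ",") {
            fields := strings.Split(strings.TrimSpace(part), ";")
            p := langQ{tag: strings.ToLower(fields[0]), q: 1.0}
            for _, f := range fields[1:] {
                f = strings.TrimSpace(f)
                if strings.HasPrefix(f, "q=") {
                    if q, err := strconv.ParseFloat(strings.TrimPrefix(f, "q="), 64); err == nil {
                        p.q = q
                    }
                }
            }
            prefs = append(prefs, p)
        }
        // Stable sort keeps the header's own order for equal q-values.
        sort.SliceStable(prefs, func(i, j int) bool { return prefs[i].q > prefs[j].q })
        for _, p := range prefs {
            for _, s := range supported {
                if p.tag == s || strings.HasPrefix(p.tag, s+"-") {
                    return s
                }
            }
        }
        return supported[0] // fall back to the site's default language
    }

    func main() {
        h := "en-US;q=1.0, en;q=1.0, fr-FR;q=0.5, fr;q=0.5"
        fmt.Println(preferredLanguage(h, []string{"en", "fr", "nl"})) // prints "en"
    }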


Yeah, your parser is too broken to be useful. For example, with this header, listing English and French in that order, and even explicitly giving French a lower quality value:

    Accept-Language: en-US;q=1.0, en;q=1.0, fr-FR;q=0.5, fr;q=0.5
your website redirects to faut-il-https-sur-mon-site.fr. Turn this off.



Doesn't redirect for me, also in Canada /shrug


Me neither. I am even in Quebec, but on an English device.


Oddly, I got the Dutch version. I do have Dutch language installed on my phone, but it's definitely not the default. Odd.


> "HTTPS is difficult to set up and maintain."

> It just works if Caddy is your web server.

I wonder what percentage of people who think HTTPS is difficult to set up and maintain are able to run their own VPS and properly install and configure Caddy.


If you're not running your own VPS, HTTPS should already be handled for you by your hosting provider?


Letsencrypt is a godsend compared to what we had before. But it can be difficult depending on what you run and after a few hundred domains things pile up.

You just purchase a domain. You decide to host on Apache. You first have to set up HTTP so Let's Encrypt can perform the challenge. Once that's done you can install SSL.

The Let's Encrypt auto-renewer is great until you run an unsupported version of Linux.

The extra cost per request does add up as well.

The cost to support SSL isn't free, but the certificate is, and it's pretty seamless all things considered.


> The Let's Encrypt auto-renewer is great until you run an unsupported version of Linux.

Consider using an ACME client written in shell:

* https://github.com/dehydrated-io/dehydrated

* https://github.com/acmesh-official/acme.sh

There's a minor change to the pre/post scripts to restart your web server, plus telling the web server where "/.well-known/acme-challenge/" should be served from, e.g.:

* https://salsa.debian.org/letsencrypt-team/dehydrated/-/blob/...

But otherwise I find there are a lot fewer moving parts (and dependencies) than ACME clients written in other languages.
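
For instance, with acme.sh the whole thing boils down to something like this (a sketch; paths and the reload command are assumptions for a typical Apache box):

    # Issue a cert using the webroot (http-01) challenge
    acme.sh --issue -d example.com -w /var/www/example.com

    # Install it where the web server expects it, and reload on renewal
    acme.sh --install-cert -d example.com \
        --fullchain-file /etc/ssl/example.com.fullchain.pem \
        --key-file       /etc/ssl/example.com.key \
        --reloadcmd      "systemctl reload apache2"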


I've had issues with certbot in the past as well. Modern versions of Apache have mod_md[0], which implements ACME, replacing certbot. Configuration looks like adding 2 lines to your Apache configuration file.

[0]: https://httpd.apache.org/docs/trunk/mod/mod_md.html
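
Roughly like this, going by the mod_md docs (a sketch; domains and contact email are placeholders, mod_ssl and mod_watchdog need to be loaded too, and older versions take the ACME account email from ServerAdmin instead of MDContactEmail):

    MDomain example.com www.example.com
    MDCertificateAgreement accepted
    MDContactEmail admin@example.com

    <VirtualHost *:443>
        ServerName example.com
        SSLEngine on
        # No SSLCertificateFile/SSLCertificateKeyFile lines: mod_md supplies the managed cert
    </VirtualHost>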


Caddy replaces both Apache and certbot (or whatever ACME client you picked), and runs on any platform Go can compile for (because it's pure-Go).
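
For a static site, the entire config can be as small as this (domain and path are placeholders; ports 80/443 just need to be reachable for the ACME challenges):

    example.com {
        root * /var/www/example.com
        file_server
    }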


For most web hosting providers, enabling HTTPS is a button click, if not enabled by default, thanks to ACME and LetsEncrypt et al.


Must be a lot! I'm definitely one of them. Even the server that's allegedly the simplest to configure after Caddy (Traefik) makes HTTPS unnecessarily hard to configure. Caddy is the long-missing counterpart to Let's Encrypt.


Close to zero


Another interesting side effect of not using HTTPS is that other sites won't trust yours. In particular, if you try to use Open Graph or similar metadata to generate previews that other sites can embed when your link is posted, many of them simply won't do it because they don't trust the origin.


Correct, and that's the real reason I use HTTPS: having "your connection is not private... the site may be attempting to steal your information..." come up instead of your web site doesn't exactly inspire confidence in your company or its products.


Good point! I should update this site and mention that.


I guess public linux distribution repo mirrors can still be http, if you are fine with leaking which packages you are installing.

The packages themselves are signed and checked locally before installing them, so MITM shouldn't be possible. If your local trust is broken, then you lost already.

And you can easily set up caching proxies for the repos, without having to set up your own CA.


While it's not common enough to matter much, there have been apt vulnerabilities (DSA-4371-1) that would give anyone who can exploit them with a MITM root access.

One example of where this could lead to a widespread attack is distros like whonix.org, which update over Tor. They mitigate this with https/.onion package servers.

There's also the smaller problem of package-set fingerprinting, like you said.


Note that deb packages are not usually signed unless something major changed in the last 5 years or so. The repository metadata is signed and contains a checksum of all packages. It’s all safe as long as you install from a repo, but installing a deb package directly doesn’t usually do signature checks. See https://www.debian.org/doc/manuals/securing-debian-manual/de...

AFAIR there are options to sign packages themselves, but there are at least two competing, incompatible signing schemes.

rpm packages carry a signature in the package itself.


Though HTTPS still has the privacy advantage: if I update over HTTP, my ISP knows I downloaded the Tor package; this information won't be leaked over HTTPS. And since it's minimal trouble to set up HTTPS, I think it's fine.


> if you are fine with leaking which packages you are installing

I'd rather not ...


The website misses the reason that I have not moved my domains to HTTPS: Google.

Google treats the HTTP and HTTPS pages as separate for link-ranking purposes, so there is a chance that a move will destroy 10 years of link ranking. Even with redirects, there is a non-zero chance of the business being destroyed.

If Google would treat the HTTP and HTTPS pages as the "same page", then I would move tomorrow.


Google doesn't penalize redirects from HTTP to HTTPS: https://moz.com/blog/301-redirection-rules-for-seo


I would suggest an experiment: move one single (well-ranked) page to HTTPS and see the traffic impact over some reporting period that makes sense. In terms of Google search, nowadays I'm not sure link longevity really matters; otherwise I don't think I would consistently get spammy pages as results.


I'm not that knowledgeable on SEO, but thought that this can be controlled with the `link rel=canonical` and redirecting.

https://developers.google.com/search/docs/advanced/crawling/...


For everyone here saying "Just use Let's Encrypt" - well, they've had some security issues over the last couple of years. Most recently [1]. They revoke certs and change challenges seemingly on a whim. I've had a number of fires to put out in the past 12 months because of LE.

Also, good luck using LE in a web farm type environment "easily". Given the challenge limits there's usually a fair bit of plumbing required to get multiple servers on the same domain with the same certificates. It's anything but "just works".

[1] https://www.bleepingcomputer.com/news/security/lets-encrypt-...


The instance you linked to wasn't a security issue, it was a compliance issue: https://news.ycombinator.com/item?id=30085948

> Head of Let’s Encrypt here. This is a compliance issue, there is no security or validation integrity risk.


It works for a whole lot of people and usecases. It's not perfect, the whole CA system is pretty terrible, but just as the site says, it's what we've got. The kinds of sites which don't have HTTPS to this day likely don't need high-availability. It's sad LE doesn't work well for your usecases, but you shouldn't dissuade people from using it in the many cases where it really does "just work".


If you have cron jobs/scheduled tasks running every day to try and renew the certificate (as recommended), then you'd not have any issues with them revoking. Any certificates that are going to be revoked will be renewed before then; this is how LE works. They gave 5 days' notice, and during that 5-day period any certificates that were going to be revoked would have been renewed.

For multiple servers running the same domain, you can configure them all the same and they will get certificates fine. If required, they will get a new certificate from LE; if this is not required, then LE will provide the current certificate to the server. There may be a short time where the actual certificates on two servers are different, but both would still be considered valid. So there really shouldn't be any plumbing required. (edit: This does depend on you having a sensible way to load balance them. If you're just running IP round-robin then it's going to be difficult, but that is what scp and custom routes are for.)

I use LE for multiple domains, on multiple systems. Internal and external with no issue. I've even had certificates revoked by LE and it's never had any operational impact.
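
For reference, with certbot that daily attempt is typically just a cron entry or systemd timer along these lines (a sketch; the deploy hook assumes nginx, and distro packages usually install an equivalent timer for you):

    # /etc/cron.d/certbot-renew -- runs daily; certbot only replaces certs that are actually due
    0 3 * * * root certbot renew --quiet --deploy-hook "systemctl reload nginx"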


""It's the browser's job to keep users safe."

True, but incomplete. It is not SOLELY the browser's job. Browsers can only keep the users safe if the server provides credentials through an HTTPS certificate. As a site owner, it's your responsibility to provide these credentials for your clients."

HTTPS, or even using the internet in general, is not the only way to provide credentials to clients (users). For example, public keys can be provided using other protocols or even out of band.

Not every website is engaged in commerce nor otherwise needs to "scale" in a way that only computers can enable.


I find certbot a PITA to use and maintain despite the EFF's efforts.

And caddy is still not available in the Debian repos.


You can thank Debian for that. We've tried for years and eventually gave up.

But you can add our source anyway: https://caddyserver.com/docs/install#debian-ubuntu-raspbian


What's the deal regarding that?


Packaging on Debian requires individually packaging _every single dependency_ of Caddy, because that's what they require from Go projects. That's way, way too much effort for us to spend, for very little gain. So we just have our own APT repo, graciously hosted by Cloudsmith, since we can automate pushing to it via GitHub Actions on release.


Apache now offers mod_md[0], which implements ACME directly, meaning that you don't need certbot. (I don't use Debian personally, but it looks like Buster and later support it.) I'm sure there are other options for other web servers.

[0]: https://httpd.apache.org/docs/trunk/mod/mod_md.html


> The only reason you should open port 80 on your server is to redirect all requests to port 443 and then close the connection on port 80. (Someday, maybe we can drop port 80 altogether.)

I think it is fine to support both if you are not handling forms, etc. Obviously you prefer people to use HTTPS, but there may be cases where HTTP is preferred. One example might be a large download where you can verify the hash afterwards, or interacting with old hardware/software.


I think the biggest bugbear with this approach (an approach I agree with, fwiw) is that the content of a page can be modified to include malicious content, including legitimate-looking forms.

This wasn't really a widespread problem before "https everywhere" became a thing, but it's definitely possible. I distinctly remember projects that replaced images with cat pictures in-line, or made everything upside down, by exploiting the fact that HTTP content can be modified in transit.


> [..] a page can be modified to include malicious content, including legitimate looking forms.

Sure, I've been there with "free WiFi" services injecting crap into a page. I believe some ISPs in the US would also put JS into HTTP pages. But this is why I argue for both HTTP and HTTPS.

I think it ultimately depends on your security model. Perhaps a workaround could be to disable forms in browsers whilst in HTTP mode, disable JS, parts of CSS, etc, by default. Require that the user explicitly ask for content in an insecure way.


I seriously doubt that encryption/decryption is going to bottleneck a large download as opposed to network overhead.


> I seriously doubt that encryption/decryption is going to bottleneck a large download as opposed to network overhead.

I think it really depends on the scale of what you're working with. It's not just the network overhead but also the CPU overhead. It's the cost of a copy operation (and maybe not even that with io_uring) vs an entire encryption process.


HTTPS (TLS v1.3) does add ~100% overhead. I personally prefer simple HTTP sites, but everyone is scared of the "Not Secure" message in the address bar ... So we all have to pay that penalty, even on static sites, as is easily checked against the Wayback Machine or a Tor service.


That's a lot of overhead. There's no way that's right for anything like common behavior/content.


Yes, I was also surprised to see that. Just opened a TLS website in Firefox: Inspect -> Network -> site's home page -> Timings -> TLS Setup: 84 ms; content generation + wait + receiving: 30 ms ... Fun, huh? :-)

/e: non-CDN site, hosted on EC2, safe ciphers + hash algos, no crypto HW accelerators, etc. Average business website. With WordPress it is even worse. A CDN lets us terminate TLS there and fetch the origin in plain text, plus optimisations on crypto, etc.


Why would anybody care about performance more than caring about malicious code being injected?

Very strange way of thinking.


What if you don't trust any certificate providers?


You really can't, and there's no easy way to selectively trust some. For example, one time I went through the huge list of certificate providers in Firefox and manually disabled a bunch of countries, leaving only a few major providers I'd heard of before. My day-to-day browsing worked just fine, with no noticeable impact, but it was not something I can manage on a regular basis; it was just a one-off experiment.


Tor onion service? Unfortunately, a self-signed cert will terrify most visitors when the browser starts screaming red and yellow at them ...


Trust them to do what? Trust them with what?


To not sign malicious agents so they can act like they're the website you're actually trying to connect to. There's a worryingly large number of CAs in Firefox and other browsers, many of them quite shady or government-owned.


This can happen regardless of whether you set up a certificate yourself.

If you're using plain HTTP, someone can MITM you in HTTP, and a rogue authority can still issue a certificate to someone who isn't you. This is not an argument against you using HTTPS at all.


So, you are admitting that HTTP and HTTPS are both equally insecure (they can be blocked or falsely approved by malicious authorities/service providers). At least, plain HTTP seems harder to censor: you do not need anybody's permission to transfer HTTP.

Self-signed HTTPS would be even better, but it seems to be frowned-upon by browsers these days.


> you are admitting that HTTP and HTTPS are both equally insecure

Absolutely not. I am pointing out that even within your (incorrect) assumption, you using HTTPS does not hurt your security at all.

That you take it to mean "HTTPS is insecure" is your own assumption, that you take it to mean "equally as insecure as HTTP" is something you made up.

A parallel: A lot of companies make seatbelts, you're not sure you can trust all of them. Even though they would be caught by quality testing and instantly go under, it is possible that one of them would build them with cheap materials that wouldn't offer much protection. Therefore seatbelts are not completely safe. Therefore wearing a seatbelt is equally as safe as not wearing one.


One of the major issues with web PKI is that any root certificate authority can sign for any domain.

For Google this is fixed by hard-coding the certificate hashes into Chrome (key pinning); the rest of us rely on HTTP headers, which of course are much more fungible.


The CAA dns type indicates which CAs are allowed to sign for a domain.
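
It's a one-line addition to the zone, e.g. to allow only Let's Encrypt (example domain; note it's enforced by CAs at issuance time, not by browsers):

    example.com.  IN  CAA  0 issue "letsencrypt.org"
    example.com.  IN  CAA  0 iodef "mailto:security@example.com"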


If you're in the middle, this is the most trivial thing in the world to block; in fact the CAA DNS record is so limited in what it protects against that I'm genuinely confused as to why it exists.

Maybe with DoH it’s better. But that’s more https to make https not suck.


From the linked article:

> Use CAA records[0] to restrict which CAs can issue certificates for your site.

Disclaimer, I didn't know about them either. TIL.

[0] https://datatracker.ietf.org/doc/html/rfc6844


If you're using HTTP, you are trusting them not to MITM your website already.


Yes, so what's the difference?


With HTTPS you aren't also trusting literally everyone else on the internet to not mess with you.


Having been around for a while, I remember the concerns companies and sites had about the compute power required to serve HTTPS traffic.

They used to make SSL/TLS accelerator cards you could slot into servers to make it faster.

As a devil's advocate, I wonder how much energy the world would save by using HTTP, instead of HTTPS.

That is a hell of a lot of processing going on every single second globally.

The fatter and fatter and fatter websites get, the more compute is required to encrypt and decrypt everything.

Think of how much electrical power could be freed up and used for better things. </s>


If they serve ads via HTTP, good luck: now any ISP can put their own ads in instead of theirs, and they will never know and will never get any money from it :)


Didn't realize certificates are really free (according to that site); I've been paying GoDaddy $94.99 yearly for a Standard SSL renewal.


If you change to letsencrypt, maybe route some of that to them?


ZeroSSL is another one that will give you a free certificate.



Let's Encrypt FTW


>"I can't afford a certificate."

>They're free.

They may be free, but is it still applicable on hosting providers like Godaddy?


> hosting providers like Godaddy

You mean ones that will try to exploit and defraud you on every opportunity? Probably not.

If you mean hosting providers in general, yes, they'll usually handle a Let's Encrypt certificate for you for free.


I've had DreamHost forever and Let's Encrypt is literally a check box on a domain. They may even support others like ZeroSSL but I haven't bothered to look because I don't care. I take the savings on paying for SSL and donate it to the EFF and Let's Encrypt.


Don't use GoDaddy. ¯\_(ツ)_/¯

It may look like I'm being facetious, but I'm not. GoDaddy to me has always felt like the domain registrar for non-technical people that will overpay for things because they don't know any better. Like...GoDaddy's customers are the same people that continued to pay for AOL even after getting a DSL line or cable modem. The same people that have paid for a shady ad blocker despite uBlock Origin existing and being free. The same people that, in the early days of Android, paid $5/month to Verizon to get Verizon Maps and Navigation despite their phone having Google Maps on it for free.

Anything GoDaddy does can be done somewhere else for cheaper or even free. They are ripping you off.


Yes, absolutely.


This site needs translations for the intended audience. Looking at you Japan, South Korea, et al.


I haven't signed up for a cert in a while; is some form of personal/business validation still required? I'm just wondering if forcing HTTPS everywhere will make it difficult to anonymously own your domain in the future.


https://www.ssl.com/article/dv-ov-and-ev-certificates/

There are several types of certificate. The most basic is DV, where you only need to upload a file or modify a DNS record to prove you own the domain.

EV certs used to be visually distinct in browsers, but that was shown to not be that useful and in fact counterproductive. Browsers are going in the opposite direction: show nothing special for HTTPS, and show red warnings on HTTP.

At this point EV certs are mostly only used by legacy SSL sellers to milk rich customers who can't tell the difference.


I really miss the days when EV certificates were highlighted in the address bar. I feel like nowadays it would be easier to get scammed, due to things like Unicode characters in URLs, for example.


Oh, I did my master's thesis about this [1]!

EV certs have long been known to be ineffective at preventing fraud. In fact, they can enhance fraudulent activity with a false sense of trust. And that's if users even notice it and know what it means (most don't).

Chrome has since adopted some ideas from my thesis which attempt to warn you if you're at risk based on suspicious characteristics of the site you're on, like unusual patterns in domain names.

[1]: https://scholarsarchive.byu.edu/etd/7403/


AFAIR someone bought EV certs a few years ago that visually looked like they were for apple.com and paypal.com, which showed the EV thing was useless.


(2017)

some previous discussion: https://news.ycombinator.com/item?id=14753993


While I agree with the premise, what is this about:

"Our site displays ads over HTTP."

Sorry, not sorry.


Yes.


But it runs in my LAN.


There's a reason Google and other wifi hardware providers are restricting internet DNS lookups that resolve to RFC 1918 addresses. Just because it's on your LAN does not mean you're not vulnerable to attack from outside.

True, TLS won't make a dramatic improvement, but it's still an improvement.


You can use Let's Encrypt certificates on your LAN using DNS verification.


effort / gain ratio is crap tho. Especially assuming IPv4 and reasonable firewalling.


The effort is basically none, aside from getting an API key from Cloudflare (could be any provider really) and then downloading the version of Caddy that includes support for Cloudflare DNS verification.

In terms of gains, you get the benefit of easy-to-remember names, and you don't share everything in plaintext, which is becoming more and more important as we get more and more devices on our LANs.
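
Concretely, with a Caddy build that includes the Cloudflare DNS module, the internal site's Caddyfile is roughly this (hostname, upstream, and the env var name are placeholders):

    internal.example.com {
        tls {
            dns cloudflare {env.CF_API_TOKEN}
        }
        reverse_proxy 192.168.1.50:8080
    }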


FWIW, the setup can be simpler and more portable without a specialized HTTP server and with a standardized protocol (RFC 2136): just certbot and its python3-certbot-dns-rfc2136.

Edit: certbot has plugins for a bunch of custom APIs too.
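
Something along these lines, assuming a DNS server that accepts dynamic updates with a TSIG key (server IP, key name, and secret are placeholders):

    # /etc/letsencrypt/rfc2136.ini
    dns_rfc2136_server = 192.0.2.1
    dns_rfc2136_name = keyname.
    dns_rfc2136_secret = <base64 TSIG secret>
    dns_rfc2136_algorithm = HMAC-SHA512

    certbot certonly --dns-rfc2136 \
        --dns-rfc2136-credentials /etc/letsencrypt/rfc2136.ini \
        -d internal.example.com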


It may be simpler to use split-horizon DNS and HTTP verification outside the LAN instead of dealing with DNS challenges.


"It's me, UR hax0r in your I0T, steelin your unencrypted packets"


Some counter-arguments from n-gate.com: http://n-gate.com/software/2017/07/12/0/


This article comes off as basically victim-blaming, when it comes to "not my problem" if some bad actor injects ads etc.

The arguments against Caddy are no longer true. Caddy runs on a ton of platforms, essentially any that Go can use as compile targets (except for plan9 for the moment because of a dependency of Caddy's that has a compatibility problem https://github.com/caddyserver/caddy/issues/3615#issuecommen...). Caddy also doesn't have to run as root, nor does it by default with our apt/yum packages.

Also a passing comment essentially calling Let's Encrypt... with their track record at this point, I don't think that can be said.

The rest is basically just vitriol.


Yeah, I've seen this n-gate page before.

It's nothing more than victim blaming and circular logic. Damn near every argument being made is "That attack doesn't matter to me because I don't use HTTPS because my site doesn't need HTTPS".


Classic n-gate.

> > If we encrypt only secret content, then we automatically paint a target on those transmissions.

> None of those things are my problem.

> > [HTTPS] guarantees content integrity and the ability to detect tampering.

> The legions of browser programmers employed by Mozilla, Google, Apple, and Microsoft should do something about that. It's not my flaw to fix, because it's a problem with the clients.

I re-ordered the quotes a bit, but I'm reasonably confident I didn't misrepresent what he was trying to say. The counter-arguments after this are good, but the first couple of things are, imo, already sufficient to make HTTPS a very very important thing.

Though… I find myself wondering whether he's really all that wrong, after all.

> Users must keep themselves safe. Software can't ever do that for you. Users are on their own to ensure they use a quality web client, on a computer they're reasonably sure is well-maintained, over an internet connection that is not run by people who hate them.

> It's just software. It can't fix your society.


> Users must keep themselves safe. Software can't ever do that for you. Users are on their own to ensure they use a quality web client, on a computer they're reasonably sure is well-maintained, over an internet connection that is not run by people who hate them.

And not use insecure websites, I guess. I don't know how that person expects the browser to magically protect the user if their server transmits in plain text.


What's the point you're making with your first two quotes? Are they supposed to be self-evidently incorrect? If you're just serving static content, why should you care whether there are governments out there that may be inserting content into it?

And while "encrypting only sensitive content calls out that content as being sensitive" is certainly true theoretically, almost every site has HTTPS, sensitive or not, so in practice it's not a concern.


That just brings me to a page with a never-loading captcha on Android Firefox.


Try pasting the link into a new browser tab. It's a redirect if you're coming from HN.


You have to open it in a private window


This seems to be a marketing website made by the developer of the Caddy httpd.


Four letters in the scheme bad, five letters in the scheme good.


[flagged]


Citation needed.

According to https://www.nature.com/articles/d41586-018-06610-y, datacenters use 200 TWh/yr, which is roughly 22 GW. Nuclear plants are ~1 GW each, so you're saying TLS is responsible for ~50% of all IT power usage?


Internet uses 84 to 143 gigawatts of electricity every year, HTTPS uses the majority of that.


Claims without evidence. Most processors released in the past decade have some sort of cryptographic acceleration. Even before the likes of AES-NI there were two decades of vectorized optimizations for encryption and hash algorithms. Sun, among others, released SSL accelerator expansion cards 20 years ago.

The processing overhead for TLS is trivial these days. You'd be hard pressed to find devices (in any market segment) sold in the past decade without some sort of acceleration or access to vector-optimized code paths.

The idea that HTTPS is using tons of extra power compared to exabytes of HDDs spinning up, inefficient microservice architectures, or poorly optimized interpreted code is pretty laughable.

But you've made bold claims. I'm sure you've got compelling evidence backed by extensive and reproducible testing.


You really think that the simple act of encrypting and decrypting the traffic of web requests is the heavy part?


I know it is, because I made my own HTTP server from scratch.


If your site is pure static files, sure, it's added overhead. But if you're doing anything dynamic on the server, connecting to a database etc, then HTTPS is a tiny, tiny percentage of the work being done by the hardware. CPUs are very well optimized for these tasks these days https://en.wikipedia.org/wiki/AES_instruction_set.


Imagine all the energy wasted making those instruction sets, hardcoded crypto silicon and that part of the CPU... the power that part of the CPU wastes while not in use.

For eternity, or at least until the CPU breaks.

You just made the situation worse for your own arguments it seems!


You can actually pinpoint exactly where the argument goes from not great to super dumb.


That's just because the majority of content is served over HTTPS.



