- 800 new non-TLS connections or flows per second.
- 100,000 active non-TLS connections or flows (sampled per minute).
- 1 GB per hour for EC2 instances, containers and IP addresses as targets.
- 50 new TLS connections or flows per second.
- 3,000 active TLS connections or flows (sampled per minute).
- 16x cost for new connections or flows per second.
- 32x cost for active connections or flows.
- 1x cost for traffic carried in GB per hour, which you'd expect.
If you run a multi-tenant system that serves N "vanity" domains (where N is roughly the number of users on your $5/month plan), there is still no AWS service that does transparent TLS termination at a reasonable cost. That's a pity, since generating these certificates costs almost nothing.
For example, Application Load Balancers (ALBs) have a limit of 25 certificates, which is a non-starter for us. So you can't avoid terminating TLS yourself in nginx/Caddy/etc. (which also needs to be HA, of course) and then hitting another LB in front of the actual service. You end up with a two-layer LB architecture that adds cost and complexity.
The biggest operational headache by far is renewals. If one customer on a commingled certificate adds a CAA record that doesn't include your CA of choice (or, more commonly, the customer churns and no longer points to you), your renewal fails. If you're running a SaaS business, you really want one certificate per hostname, lazy-loaded based on the incoming SNI. That keeps renewal failures and support costs down, and keeps customers from seeing a list of their competitors on their certificate.
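The one-cert-per-hostname, lazy-loaded-on-SNI approach can be sketched with Python's stdlib `ssl` module. The certificate directory layout here is an illustrative assumption, not anyone's actual deployment:

```python
import ssl

# Hypothetical layout: one cert/key pair per customer hostname,
# e.g. /etc/certs/<hostname>.crt (the path convention is an assumption).
CERT_DIR = "/etc/certs"

def cert_paths(hostname):
    """Map an SNI hostname to its dedicated cert/key files."""
    base = f"{CERT_DIR}/{hostname}"
    return f"{base}.crt", f"{base}.key"

def sni_callback(ssl_sock, server_name, initial_ctx):
    """Lazily load the per-hostname certificate when the ClientHello arrives."""
    if server_name is None:
        return  # client sent no SNI; fall back to the default context
    cert, key = cert_paths(server_name)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(cert, key)  # in practice you'd cache this per hostname
    ssl_sock.context = ctx          # swap in the hostname-specific context

def make_server_context(default_cert, default_key):
    """Default context whose sni_callback picks a cert per incoming hostname."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(default_cert, default_key)
    ctx.sni_callback = sni_callback
    return ctx
```

Because each certificate is loaded only when its hostname actually shows up in a ClientHello, a churned customer's cert simply stops being served rather than breaking a shared renewal.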
As @elithar points out, that's why we built SSL for SaaS. You make a single API call with the hostname you want a certificate issued for, indicate the validation method (HTTP /.well-known is by far the "happiest path"), and in about a minute you've got a certificate deployed worldwide. You tell us where to route traffic back to by providing a default origin (which can be a load balancer) and optionally overriding it on a per-hostname basis.
So long as your customer is CNAME'ing to your domain and that domain resolves to Cloudflare's edge, we can automatically complete—and keep completing—domain control validation (DCV). We then issue two certificates per hostname: one P-256 keyed, SHA-2/ECDSA signed certificate that gets presented to modern browsers and one RSA 2048-bit, SHA-2/RSA signed certificate that gets presented to browsers that don't support ECC.
1 - https://blog.cloudflare.com/tls-certificate-optimization-tec...
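The dual-certificate behavior described above (ECDSA for modern clients, RSA for the rest) amounts to inspecting the signature algorithms the client offers. A toy selection function, assuming TLS 1.3-style algorithm names; this is an illustration, not Cloudflare's actual implementation:

```python
# Signature algorithms (TLS 1.3 naming) that indicate ECDSA support.
ECDSA_SIG_ALGS = {"ecdsa_secp256r1_sha256", "ecdsa_secp384r1_sha384"}

def pick_certificate(client_sig_algs, ecdsa_cert, rsa_cert):
    """Serve the P-256/ECDSA cert to clients that can verify ECDSA
    signatures; fall back to the RSA-2048 cert for everyone else."""
    if ECDSA_SIG_ALGS & set(client_sig_algs):
        return ecdsa_cert
    return rsa_cert
```

A modern browser advertising `ecdsa_secp256r1_sha256` gets the smaller, faster ECDSA cert; a legacy client offering only RSA algorithms gets the RSA one.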
> You make a single API call with the name of the hostname you want a certificate issued for ... [magic TLS things happen]
This, precisely, is how the ALB _should_ work, without a silly 25-certificate limitation. Sounds like an excellent service.
I wonder what the cost of issuing a TLS certificate actually is. There is minimal storage/network load and some computation (likely done in hardware). Perhaps the main cost is collecting sufficient entropy?
You should reach out to https://twitter.com/prdonahue (PM), who can share the details.
(Used to work there, now at GCP)
Here is what Caddy's author wrote about this:
> I still don't like the idea of SAN certificates. Too much room for error... what if you go to renew a SAN cert and one of the domains fails? The other 99 don't get renewed either. (Sure, we can code in logic to make a different certificate with the 99 names, but that gets complicated quickly.) Also, we'd need a database, since we'd have a many-to-many relationship rather than 1:1, which is much easier.
If you do per hostname, you can ask a CA to send a validation request to http://sub.domain.com/.well-known/pki-validation/... and by nature of that hostname CNAME'ing to your domain (which they'll need to do anyway), you can complete it.
If you do at the apex, you'll need to complete DCV out of band, e.g., by having an email sent or adding a separate DNS record above/beyond what's being used to point to you.
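A minimal sketch of serving those /.well-known/pki-validation responses for any hostname that CNAMEs to you. The in-memory store and function names are illustrative; a real deployment would persist outstanding validations:

```python
WELL_KNOWN = "/.well-known/pki-validation/"

# Outstanding validations, keyed by the filename the CA will request.
# (Illustrative in-memory store.)
pending = {}

def add_validation(filename, body):
    """Register the CA-provided content before the CA fetches it."""
    pending[filename] = body

def handle_validation_request(path):
    """Serve CA-provided validation content.

    Because the customer's hostname CNAMEs to us, the CA's HTTP request
    for http://sub.domain.com/.well-known/pki-validation/<file> lands
    here, and we can complete DCV without customer involvement."""
    if not path.startswith(WELL_KNOWN):
        return None  # not a validation request; let normal routing handle it
    return pending.get(path[len(WELL_KNOWN):])
```

The same pattern works for ACME HTTP-01 with a different well-known prefix; the key point is that the CNAME makes you the party answering the CA's challenge.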
Does anybody know if they support ALPN to announce h2 as the HTTP/2 protocol?
They don’t do this for ELBs; I would hope they do for NLBs.
If they did, it would finally be a sane way to use NLBs + ACM to expose gRPC services at the edge.
Another possible solution (as mentioned elsewhere in the thread):
Use NLB + ACM, let it do the TLS termination (effectively a MITM), and forward the traffic to a backend service with a self-signed certificate. The hope here is that the NLB would pick up the ALPN value from your backend service and advertise it transparently to the outside world. I don’t know if this architecture makes sense.
Anyway, I’d just love gRPC-capable secure workloads on the NLB with ACM.
Self-signed certificates or Let's Encrypt already work; the game-changer would be ACM + ALPN, which would also give you upstream HTTP/2 traffic behind the NLB (the ALB only supports upstream HTTP/1).
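For reference, here is roughly what announcing h2 via ALPN looks like when you terminate TLS yourself with Python's stdlib `ssl` module, plus a toy version of the selection rule ALPN applies. This is a sketch of the protocol mechanics, not a description of NLB behavior:

```python
import ssl

def make_h2_context(cert_file, key_file):
    """Server context that advertises h2 ahead of http/1.1 via ALPN."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(cert_file, key_file)
    ctx.set_alpn_protocols(["h2", "http/1.1"])  # server preference order
    return ctx

def select_protocol(server_prefs, client_offers):
    """ALPN picks the first server-preferred protocol the client also offers,
    or fails the negotiation if there is no overlap."""
    offers = set(client_offers)
    for proto in server_prefs:
        if proto in offers:
            return proto
    return None
```

So an h2-capable client negotiating with this context ends up on HTTP/2, while an older client that only offers `http/1.1` still connects; a pass-through L4 balancer would need to surface exactly this negotiation for gRPC to work end to end.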
If there were some way to use ACM-generated SSL certs on the VM, with a method for auto-renewal, that would be ideal.
I would question whether this is a problem though; basically if someone is in a position to MITM traffic in your AWS VPC this would indicate a compromise of AWS at a fundamental level (or loss of your AWS control plane).
The approach I looked at to solving this problem was using an Envoy proxy sidecar.
Wait. That is not what "end to end" means. This is more like "piecewise end-to-end", and it is not the same thing as "end to end".
Now, people are going to say "it doesn't matter, since the only party you're disclosing the comms to is AWS, whom you already implicitly depend on because you're running all your stuff under their hypervision". That's true. (Of course, there are now two places in AWS's infra where the comms are in plaintext, so an adversary has two teams of engineers/opsen to social-engineer or compromise, and wins if they break either one.)
I'm reacting to the dilution of the term "end to end" here, because it's a vital concept.
You can still use TLS internally. What do you mean by "it decreases the defence"?
"... well, NLB runs on Amazon VPC. On VPC we encapsulate, authenticate and secure traffic at the packet level. Packets can't be spoofed or MITMd on VPC. Traffic only goes where you send it. That makes it possible to use a self-signed, or even expired, certificate on your end."
Someone posted a quote from the HIPAA regulations that could be interpreted as saying that's not the case. From a compliance standpoint, I don’t think I’d risk taking that chance, though.
I'm still waiting for GCP to support more than ten (!) certs per load balancer, which seems like a ridiculous limit when you need to serve dozens or hundreds of low-traffic customer domains from a single load balancer. If you're on Kubernetes, they're basically asking you to split your ingresses up just to avoid the limit.
We ended up going straight to running our own in-cluster load balancer (Traefik, which is meh, but works okay) with Let's Encrypt so we don't need to provision anything at all. It's so much nicer than fiddling with manual cert registration. I really wish cloud providers such as Google would get on board with Let's Encrypt already.
With the current system, we can use one consistent IP or CNAME for all new domains. With the 10-certs-per-GLB limitation, we'd have to manage the DNS accordingly so that all the domains are correctly spread out among the N different ingress GLBs.
At the time we started with Kubernetes, cert-manager was listed as alpha quality and not recommended for production. Even today, Gandi (which we use for DNS) still doesn't seem to be supported; at least it's not in the documented list. LEGO supports Gandi, so maybe the documentation is just out of date.
SSL termination becomes the responsibility of the cluster. The certificate (and private key) is stored inside the cluster, too, via secrets. To have cert-manager automatically create and renew certificates, all you have to do is update your ingress host/tls YAML configuration.
Using cert-manager, you can continue exposing your app under many domain names with a single IP (via an A record) or abstractly through a CNAME record.
I haven't added more than 10 domains yet to verify your concern firsthand, though. But Kubernetes isn't making any changes to my load balancer as I add domains. I'd revisit whether cert-manager is right for your use case: I saw no mention of alpha quality in the README, though they do point out it's 0.x and there could be breaking API changes later.
With an HTTP GLB, you get a very cheap, distributed, effectively global load balancer that doesn't require special configuration. You also get features like health checking, logging (which you can pipe to BigQuery), Google's "just works out of the box" edge caching (negating the need for an external caching CDN like Fastly), and so on.
My understanding is that the typical use case is to run cert-manager together with ingresses, where cert-manager allocates certs and creates a Kubernetes secret for each. If you wire up ingresses with those secrets, and you use the "gce" (default) ingress class, you end up with TLS termination at the GLB level very easily.
However, because of the 10-certs-per-LB, you run into the problem I described before. There's no way around it except to create ingresses that contain a maximum of 10 hosts. (It's a hard limit, not a quota; you can't petition to have the limit increased.) If you have 60 domains, that necessarily means 6 GLBs, and 6 different IPs to point DNS to.
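The bookkeeping that limit forces on you can be sketched in a few lines (the helper names are illustrative):

```python
import math

MAX_CERTS_PER_LB = 10  # GCP's hard per-load-balancer certificate limit

def lbs_needed(n_hosts, limit=MAX_CERTS_PER_LB):
    """Minimum number of load balancers for n_hosts customer domains."""
    return math.ceil(n_hosts / limit)

def partition_hosts(hosts, limit=MAX_CERTS_PER_LB):
    """Split hostnames into ingress groups of at most `limit` each;
    every group then needs its own GLB and its own external IP."""
    return [hosts[i:i + limit] for i in range(0, len(hosts), limit)]
```

For 60 domains that's 6 groups, and every time a customer domain is added or removed you have to re-balance the groups and keep DNS pointing each hostname at the right IP, which is exactly the operational overhead a single lazily loaded endpoint avoids.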
We're using Traefik today as a custom ingress controller with a TCP GLB. So we get one external IP, and Traefik handles TLS termination (via Let's Encrypt). So this is effectively the same as running cert-manager plus something like the Nginx ingress controller.
AWS will transparently renew them for you if they are associated with a load balancer or CloudFront. No moving parts required in your infra.
The current standard is TLS, which superseded SSL back in 1999. Uptake and awareness were initially slow, and SSL itself wasn't deprecated until 2015. It doesn't help that a bunch of projects (OpenSSL being a prominent one) still have "SSL" in their name.
Technically, they're not "SSL certs". They're X.509 certs. Certificates show up in many places unrelated to TLS/SSL.
If you expect your certificates to work in common client software like web browsers or some random Python client code a third party is writing then you're going to want:
* PKIX, currently RFC 5280 plus revisions, the Internet's chosen standard for how X.509 should be implemented. PKIX says a bunch of things about what you should or should not write in the X.509 certificate, you will probably have better luck conforming as much as possible even if you don't care about:
* The Web PKI. A Public Key Infrastructure has Certificate Authorities (plenty of those across all of X.509 or you can roll your own) but they're trusted by Relying Parties (parties who are _relying_ on the certificate's attestation to be true) and you probably want certificates that will be trusted by most RPs. Grandma doesn't know what the X.500 directory system is, or who a Certificate Authority is, but she does use Safari, and Safari trusts certain CAs on her behalf as does macOS. So you're going to want certs from one of those CAs. The Web PKI is a loose name for (despite that word "Web") the PKI covering SSL/TLS services on the Internet.
* You may want to get more specific. Although the Baseline Requirements which say roughly how a Certificate Authority should work are agreed across industry, each of the Major Trust Stores has their own additional rules. You probably care about all of them. They roughly correspond to operating system vendors. Microsoft and Apple (for their browsers and operating systems), then Mozilla (for Firefox on all platforms, and for Free Unix systems), Google (mostly just for Android, not Chrome), and then runners up Oracle (for Java), then a long tail of people including Nintendo and all those crappy in-car entertainment systems people...
For example, probably fifty people in the whole world have ever bothered trying to use the Web from a Nintendo WiiU (not to be confused with the popular Wii) console. If they try now though lots of things don't work. Because Let's Encrypt isn't trusted by the WiiU, and since it's an end-of-life product, probably never will be.
But "SSL Certs" gets across what you mean pretty well, only pedants are going to insist it's wrong, and unless you're currently playing "Um, Actually" they should cut it out.
> So basically, the “SSL vs TLS” taxonomy represents the ownership change in 1999, but not the technical details of the protocols. From a purely technical point of view, lumping together SSL-3.0 with SSL-2.0 but separating it from the TLS versions is about as wrong as, in biology, classifying whales as a kind of fish (despite what Herman Melville wrote about them).
> Therefore, I tend to use “SSL” (or “SSL/TLS”) to designate the whole conceptual family of protocols, from SSL-1.0 to TLS-1.2. This is the main reason why BearSSL is called BearSSL and not BearTLS.
> A secondary reason is more about marketing. People who know what TLS is also know what SSL is, but not the other way round. For instance, Web site owners look for a “SSL certificate”, not a “TLS certificate”. By using the “SSL” acronym, I thus reach a wider audience, even if it is at the cost of irritating some of the more pickish taxonomists (and I am totally ready to out-pedant them if need be, as demonstrated above).
SSL references "sockets", which is still where TLS is usually implemented, but TLS is conceptually about transport security independent of the implementation (à la sockets). For instance, consider this TLS-inspired solution (https://www.cipheredtrust.com/doc/); it has nothing to do with sockets.