Is it worth the hassle? All you're saving is latency in the backbone between the point of presence and your home data center. Does this beat directing requests to the nearest AWS region? As long as you're in roughly the right hemisphere and time zone, you're probably close enough.
If you have a stateless app, or state that can be deployed in multiple locations, then these are good solutions compared to doing it yourself, and they're easier to deal with than the IaaS options of the clouds. Having it built into the CDN layer only helps with performance.
At StackPath, our definition of Edge is basically being as close as possible to the eyeballs, a.k.a. the users. Think of it as the front door to the internet.
Today our Edge spans major IXs (internet exchange points) around the world, and that's just the beginning. 5G is approaching quickly, along with container data centers.
We built our orchestration system so it can deploy and manage workloads anywhere. In the future, that will include 5G container data centers, which get workloads even closer to things like self-driving cars, smart cities, IoT devices, <insert your idea here>.
Oh, please. If you really need those last few milliseconds of lag reduced, you need local computation. If you don't, an AWS datacenter on the same continent is probably good enough.
Our CDN will certainly validate SSL certificates at the origin if the setting is enabled.
That said, you may be on a legacy CDN product often referred to as "Secure CDN," where that setting may behave differently from our current CDN offering.
I'd be more than happy to loop our CDN team in to clarify. Feel free to reach out directly to product at stackpath.com and I'll get it set up :)
However, it should be made clear that our container and VM solution is not a "function"-type offering. You can deploy a container and/or VM workload on our Edge, similar to what you might find at cloud providers. The main difference is that we sit a layer above the cloud providers and make deploying worldwide simple, secure, and fast.
With a few clicks or a single API call, you can deploy a microservice all over the world (and even add an anycast IP if you need one) in under 60 seconds.
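To make the "single API call" claim concrete, here is a minimal sketch of what such a deploy request could look like. The endpoint URL, payload fields, and PoP codes are all hypothetical illustrations, not StackPath's actual API; the sketch only builds and prints the request body rather than sending it.

```python
import json

# Hypothetical endpoint -- StackPath's real API paths and schema will differ.
API_URL = "https://api.stackpath.example/workloads"

payload = {
    "name": "hello-edge",
    "image": "nginx:latest",            # container image to run at the edge
    "targets": [                        # one entry per edge location (PoP codes are made up)
        {"pop": pop, "replicas": 1} for pop in ("AMS", "SEA", "NRT")
    ],
    "anycastIP": True,                  # optional anycast IP, as mentioned above
}

# A real client would POST this body to API_URL with an auth token;
# here we just serialize it to show the shape of a one-call worldwide deploy.
body = json.dumps(payload, indent=2)
print(body)
```

The point is only that the whole worldwide placement decision fits in one request body, rather than one deployment per region as with typical IaaS.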
The pricing is higher, though, and some early testing showed inconsistent performance; it might be worth waiting until it matures a bit. It's also missing IPv6.
With a lot of our experience coming from the original founding team at SoftLayer, we've built an incredible platform.
More information on our network can be found at https://www.stackpath.com/platform/network/
Today our container/VM solution does not have a warmup concept beyond the initial deployment of your container. You simply specify your image and some attributes, and it's deployed to the locations you specify on our Edge. Once deployed, you can delete the workload at any time, but it does not scale elastically based on requests to the workload.
To me, this announcement sounds like the approach didn't get the expected adoption, and now they're allowing regular containers at the edge.
We've seen amazing adoption of the EdgeEngine product; it just serves a different purpose than full-on containers and/or VMs at the Edge.
Running a container/VM at the Edge makes it possible to run more complex workloads, beyond scripting in a CDN-like delivery pipeline.