If you factor in that not much work gets done in many Western countries until early January, the actual migration window is effectively less than two weeks.
Now I don't know how easy it is to move from hyper.sh to alternatives, but that is unprofessionally short regardless.
I would have expected at least three months. Is it possible that they are that short on money that they need to shut it down immediately?
Seems like it.
Let that be a lesson to not rely on cloud infrastructure too much without designing your architecture to be somewhat cloud agnostic.
My service is entirely dependent on Hyper.sh and there seems to be no trivial migration path. I think I shall have to drop everything I was planning to do after the holidays and rearchitect for Kubernetes. :(
A few years ago, I ran a similar company that I also shut down, so it would be hypocritical of me to complain much. Competing with AWS is hard. (We did give users 3 months to migrate though.)
Not disagreeing per se, but I believe it's much more important to design for migration between providers instead of designing for abstraction over providers. The difference is subtle, but trying to make predictions about what needs abstraction will most likely lead to the same (or bigger) costs when a migration actually happens.
It's a bit like the general advice to design to avoid getting stuck in a hole, as opposed to designing to get out of future holes.
A few years ago (when I last checked) their container startup times were a lot slower than hyper.sh's.
Right. If you design to be cloud agnostic that means only using the lowest common denominator of features, and while it’s doable it’s not cost effective. The price structure is designed to shepherd you into the managed services.
We need SaaS: Services as a Service
>Let that be a lesson to not rely on cloud infrastructure too much without designing your architecture to be somewhat cloud agnostic.
Yeah, but no one that matters cares -- whether it's writing blog posts or putting your entire company brand/presence/blog on Medium (which harasses all of your readers), or relying on low-margin infra startups that often amount to a thin docs/marketing layer around Docker.
It's just like the people who get hosed by hosting/DNS providers with a history of screwing users, because they didn't think it would happen to them -- people are lazy, people think they're exceptional, and I think a bunch of people don't like learning infra and don't see it as important.
But their email doesn't say anything about being out of money, just that they are focusing on a different technology.
That's why I brought it up. If you have to shut down, own up to the real reason. If not, you are screwing your users to focus on something else.
I'd bet that they just can't cover the bills anymore, though.
Personally I'd rather just pick the most popular platform to hedge against the risk.
If they do, however, it's a dick move to pull the rug out with this little notice, over a period when a lot of people are away for much of it...
I don't think they have many, but I'm one of them. Not pleased at all w/ the migration time.
And the situation has gotten a lot worse in the last year.
EDIT: Looks like an email went out (copied in a comment below) and that they are not shutting down as an organization, just this product. Even more curious as to why they'd offer such a short migration window to their customers. I would, especially in an organization, be truly hesitant to rely on any of their other technologies if this is the pattern being set.
We are writing to let you know that we decided to pursue a new direction in 2019, and will be closing down the Hyper.sh cloud platform on January 15, 2019.
Over three years ago, we set out to create an open secure container-native platform. We believed that containers represented a sea change in how software would be developed, deployed, and maintained.
Along the way, we created one of the first container-native cloud offerings, the Hyper.sh platform, which utilized our open source technology, called runV, which last year was merged with Intel’s Clear Containers project to become Kata Containers. We’re proud of the platform we built, and the influence we have had on the overall container industry. We are even more grateful to you, our customers, who have deployed hundreds of thousands of containers and built out new business on our platform.
The Hyper.sh platform, while trailblazing, is not where Hyper’s future efforts lie. Moving forward, Hyper is focusing all our attention and efforts towards the upstream Kata Containers project and in developing our Enterprise Kata stack for deployment in the major public clouds.
As of today, it is no longer possible to create a new account on Hyper.sh, and on January 15, 2019, the Hyper.sh cloud service will be shut down. Per section 11 of our terms of service, we wanted to provide you time to migrate off the platform and for the next month, our priority is to help your transition to other cloud services. If you need assistance, please feel free to reach out to us via Slack or your account dashboard. On January 15, 2019 any remaining user data and accounts will be deleted from the platform.
Please start migrating your containers and data volumes off the platform now. Directions on how to migrate your container volumes can be followed here. Please note, you will not be charged for either the container or the FIP in performing the migration.
Thank you for your business and support of our platform. It has been a privilege to serve you.
The Hyper Crew
So nice of them.
Their pricing was far too low to build a sustainable business around, and so I decided to write my own stack. What I needed they served very well (the ability to run one-shot Docker containers as an HTTP RPC) -- similar to AWS Fargate but much, much simpler.
Fargate's RunTask is close, but requires way too much configuration of roles, permissions, and "provisioning" just to run a container. And even at the scale AWS is operating, Fargate's pricing was still higher! I remember the AWS pricing being something like 5c/hr, billed per second but with a minimum charge of 1 minute, while hyper.sh was 1c/hr with a minimum charge of only 10s! So, roughly, here's why they're going out of business: the business requires very scaley things (think dinosaur, not lizard) in order to make very small amounts of money. And it's charging 1/5th of what the market-dominant service charges.
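Back-of-the-envelope, assuming those prices (treat the exact numbers as approximate):

    # cost of a single 10-second task:
    # Fargate:  60s minimum * $0.05/hr / 3600 ~= $0.00083 per run
    # hyper.sh: 10s minimum * $0.01/hr / 3600 ~= $0.000028 per run
    # => roughly 30x cheaper on hyper.sh for short one-shot containers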
Does such a blacklist exist? Given the short notice I expected that Hyper.sh would be gone (and thus free of commercial backlash) but it appears the company will continue in new areas, which makes this decision even more surprising.
I'd definitely be interested in having such a list and an easy way to determine if they are involved in new companies, which doesn't sound trivial. "Foo Co was terrible to their customers" doesn't tell me Bob made the decisions, nor does that tell me that Bar Co is also run by Bob.
If anyone needs help migrating off hyper.sh I am pretty free over the holiday period. I have extensive DevOps experience. I'm happy to help (gratis).
My email is on my profile.
(From the shutdown email users received)
I was not aware of these products. Interesting.
I think in the future there will be a big shift off of runc (Docker's default) as the k8s runtime, now that the CRI has made runtimes pluggable.
I don't think there's going to be a big shift away from runc (though I'm biased, I'm one of the runc maintainers -- and runc is quite separate from Docker) for a couple of reasons:
1. Containers are still more than decent, and will always handle certain use cases and setups better than VMs can (due to the pliability of namespaces and shared kernels -- VMs have the fixed DRAM problem just like they always have).
2. At the moment, plain containers are arguably more secure than Kata containers (though this can be fixed "fairly" easily with some minor memory penalties) because Kata disables a bunch of security features in its VM kernels -- so you don't get seccomp or AppArmor protections for your containers. Now, you do get hypervisor security, but there was a study some time ago which claimed that a well-tuned seccomp profile is about as secure as a hypervisor.
3. Hooks (like the NVIDIA ones) will always work better with plain containers because the whole idea behind hooking into a container runtime is that you can attach things to the containers' namespaces (with NVIDIA this would be vGPUs). kata is trying (and succeeding in most cases) to emulate these sorts of pluggable components with their agent, but fundamentally they're trying to pretend to be a container (which is going to cause problems).
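For a concrete sense of what a hook looks like, here's a rough sketch of an OCI prestart hook entry (an illustrative fragment; the NVIDIA hook path varies by install):

    # fragment of an OCI config.json (shown here as shell comments):
    #   "hooks": {
    #     "prestart": [
    #       { "path": "/usr/bin/nvidia-container-runtime-hook" }
    #     ]
    #   }
    # the hook binary runs on the host and can join or modify the
    # container's namespaces before the user process starts -- which is
    # exactly the part a VM boundary makes awkward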
I think kata is a really good project (and I'm happy that Intel and Hyper.sh joined forces), but I don't think it will replace ordinary containers entirely (even under Kubernetes). But hey, I could be proven wrong -- at which point I'll switch to working on LXC. :P
I'd agree though that kata (or other VM based containerization solutions) won't completely replace runc based solutions.
One of the things I like about standard linux containers is the ease with which you can remove part of the isolation without turning it all off or on. Being able to easily do `--net=host` or add a capability is very handy in some circumstances.
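For example (a minimal sketch; the image name is a placeholder):

    # share the host's network namespace, keep the rest of the isolation:
    docker run --net=host myimage
    # or keep full isolation but grant one extra capability:
    docker run --cap-add=NET_ADMIN myimage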
Also, the security story definitely isn't as clear as VMs > containers. Every isolation layer has had breakouts in the last year: VMs, gVisor, Linux containers.
What problems do you see/think arise from kata pretending to be a container?
> What problems do you see/think arise from kata pretending to be a container?
There are a few.
One of the most obvious is that anything that requires fd passing simply cannot work, because file descriptors can't be transferred through the hypervisor (obviously). This means that certain kinds of console handling are completely off the table (in fact this was a pretty big argument between us and the Hyper.sh folks about 2 years ago now -- in runc we added this whole --console-socket semantic to allow for container-originated PTYs to be passed around, and you cannot do that with VM-based runtimes without some pretty awful emulation). But it turns out that most layers above us now just have high-level PTY operations like resizing (which I think is uglier and less flexible, but that's just my personal opinion).
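Roughly, the flow looks like this (a simplified sketch; the socket path and container id are placeholders):

    # from the bundle directory, with the caller listening on an
    # AF_UNIX socket:
    runc create --console-socket /tmp/console.sock mycontainer
    # runc allocates a PTY from inside the container's context and sends
    # the master fd back over that socket via SCM_RIGHTS -- an fd handoff
    # with no equivalent across a hypervisor boundary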
Another one is that runtime hooks (such as LXC or OCI hooks) are now a bit more difficult to use. There's nothing stopping you from doing CNI with Kata, but it's one of those things where either the hook knows it's working with a VM (which requires hook work) or the hook is tricked into thinking it's dealing with a container (which requires lots of forwarding work, or running the hook in the VM). I'm really not sure how Kata handles this problem -- but the last time I spoke to the Kata folks the answer was "well, we're OCI compliant", which isn't really an answer IMHO (they also cannot be OCI compliant, because OCI compliance testing still doesn't exist -- but that's a different topic). I imagine their point was "we copy runc", which is unfortunately what most people think when they say "OCI compliance".
There was a recent issue a colleague of mine (who works on Kata) mentioned, which is that currently "docker top" operates by getting the list of PIDs from the runtime and then fetching /proc information about them. Obviously this won't work with Kata and will require some pretty big changes to containerd and Docker to handle this (though I would argue this would be a good thing overall -- the current way people handle getting host PIDs for container processes is quite dodgy). There is currently some kernel work being done by Christian Brauner to add a new concept called procfds, and all of this work will be completely useless for Kata (even though it'll fix many PID races that exist).
But as I said, Kata is quite an interesting project (the work done on the agent is quite interesting) and it fulfills a very important need -- people are still worried about container security, and adding a lightweight hypervisor will assuage those fears.
Can you recommend any great blogs? Jess's is the only one I'm aware of that sometimes dives into multitenancy/container stuff.
Here are all the container runtimes that CNCF is tracking: https://landscape.cncf.io/grouping=landscape&landscape=conta...
As in: no Kubernetes, no OpenStack, no multi-tenancy, nothing...
Just one bare-metal server, configured as KVM host, and how can I [run/start/stop] one Kata container?
I feel like everyone just assumes you're a Kubernetes Pro and runs infrastructure at the scale of Google/FB/Amazon these days... :-(
Kata is actually just several binaries that talk via gRPC (kata-agent, shim, proxy, runtime) and interface with QEMU/NEMU. For instance, kata-proxy proxies commands over a virtio serial interface that's exposed via QEMU.
You could install the binaries and qemu-lite and have a similar system, but I'm not really sure how you'd benefit, as it's the management through k8s that really won me over. I think in your scenario you'd just be making very complicated QEMU VMs. I've linked this to the contributors; maybe they have some thoughts.
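The standalone (non-k8s) setup is roughly this -- a sketch assuming a standard kata-runtime install at /usr/bin/kata-runtime:

    # register Kata as an extra Docker runtime in /etc/docker/daemon.json:
    #   { "runtimes": { "kata": { "path": "/usr/bin/kata-runtime" } } }
    # restart dockerd, then run any container inside a lightweight VM:
    docker run --runtime=kata -it alpine sh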
For context when we launched ECS years ago the goal was always to build a serverless container platform. But first we had to build out our own container orchestration platform capable of keeping track of all the containers at the scale that AWS requires, and build out a lot of underlying tech that didn't exist yet. Recently we open sourced Firecracker (https://aws.amazon.com/blogs/aws/firecracker-lightweight-vir...) which is one of the pieces we built along the way to enable the data plane. It took us a few years to get to this point, which is a really long time in startup lifetime.
The major cloud providers have the resources to spend a lot of time building the technical depth required to offer full featured serverless container platforms that can truly scale up and out. In my view the smaller providers like hyper.sh really nailed the initial developer experience but they are missing the depth behind the scenes that allows serverless containers to be scaled out cost effectively.
In time these two ends of the spectrum will hopefully converge. By open sourcing tech like Firecracker, AWS enables small providers to have access to tech that we built because we needed it, and now it's available for them to use. Conversely, AWS learns how to improve our developer experience by seeing how these small startups create a great developer experience.
Nothing puts me off faster than having to click through dozens of web UI screens (or read a hundred pages of API docs) just to get a container launched and accessible via a URL. The Zeit Now v1 model, with everything configured from a single (optional) JSON config file and deployed with a single "now" CLI command, was absolutely ideal for me.
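From memory, the whole thing was roughly this (treat it as an approximation of the v1 config, not gospel):

    # now.json:
    #   { "name": "my-app", "type": "docker" }
    # then, from the project directory:
    now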
I do wish AWS would build experiences like this without me having to turn to the wider developer ecosystem.
After several bouts of downtime I switched to Google's GKE and never looked back. Hyper is easy to start with but impossible to finish on. There was no managed database at the time (not sure whether they have one now), and when using Postgres with their persistent volumes (which I think are backed by Ceph), performance was really sad.
All in all, it was a great service for trying out ideas or running small & less important applications, but if you really want it to always be available, then probably it's not the right tool.
They have some really cool underlying tech.
I can't say I'm surprised though - there have been signs that this was coming.
Makes me wonder.. maybe it's worth building a tool that monitors the Twitter accounts of common services and products and raises a bat signal if they don't tweet for a month or two. It seems to be where budget gets tightened first.
I'm going to be honest: AWS Fargate isn't as easy to use as hyper.sh, if that's the alternative you're looking at. That said, our increased complexity is because Fargate is orders of magnitude more full-featured, for applications that outgrow small platforms. If you want to give Fargate a try, I'd recommend this open source tool (http://somanymachines.com/fargate/) as it provides an easy, opinionated starter command-line experience similar to what you might have seen with hyper.sh.
You can run a container on an f1-micro VM for free forever:
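Something like this (a sketch; the instance name, zone, and image are placeholders, and the free tier only applies in certain US regions):

    gcloud compute instances create-with-container my-vm \
      --machine-type=f1-micro \
      --zone=us-central1-a \
      --container-image=docker.io/library/nginx:latest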
If you want something more similar to Hyper.sh, serverless containers are coming out publicly soon, and you can sign up for the alpha here:
I had to drop them because it's hard to start a HIPAA compliant product on their platform as a solo dev. AWS, on the other hand, signs a BAA with you with the click of a button.
This forced me to learn AWS, and to learn that all I really need for the MVP is a single EC2 instance, RDS, a Redis subscription from Redis Labs, and CloudWatch.
Deploying is just pulling from github, shutting down the service and restarting it. It doesn't really need to be simpler than that.
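The whole deploy is basically (the systemd unit name is hypothetical):

    git pull origin master
    sudo systemctl restart myapp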
I'm pretty sure that I could scale to 10k users per day on one EC2 instance, and by that point I would hopefully have venture capital and could hire an expert to handle all this for me.
As in, is there something k8s might do better with its CronJob feature? I don't think so, no...
If you just wanted to run a container on a schedule and you didn't have a cluster running already, Hyper.sh was a lower bar for doing that.
If you do have a cluster, using a CronJob on said cluster is easier for me than maintaining something on Hyper.sh...
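For the cluster case it's a one-liner (a sketch; the name, image, and schedule are placeholders):

    kubectl create cronjob nightly-job \
      --image=alpine \
      --schedule="0 3 * * *" \
      -- sh -c "echo hello"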
I might have missed the question...
Azure has Azure Container Instances. AWS has Fargate. Zeit has Docker deploys with v1. GCP is coming soon with Serverless Containers.
"Each Fargate task has it's own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task."
Official docs on Fargate isolation here: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/...