Hyper.sh cloud service will shut down on Jan 15 (hyper.sh)
145 points by carimura on Dec 21, 2018 | hide | past | favorite | 99 comments



Less than a month of migration time.

If you factor in that not much work gets done in many Western countries until early January, the actual migration time is closer to two weeks.

Now I don't know how easy it is to move from hyper.sh to alternatives, but that is unprofessionally short regardless.

I would have expected at least three months. Is it possible that they are that short on money that they need to shut it down immediately? Seems like it.

Let that be a lesson to not rely on cloud infrastructure too much without designing your architecture to be somewhat cloud agnostic.


Indeed. This is a headache for me.

My service is entirely dependent on Hyper.sh and there seems to be no trivial migration path. I think I shall have to drop everything I was planning to do after the holidays and rearchitect for Kubernetes. :(

A few years ago, I ran a similar company that I also shut down, so it would be hypocritical of me to complain much. Competing with AWS is hard. (We did give users 3 months to migrate though.)
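In case it helps anyone else scrambling to get off the platform: because Hyper.sh speaks the Docker API, getting data out is mostly a generic volume round-trip. A sketch ("appdata" is a hypothetical volume name; against Hyper.sh you'd set DOCKER_HOST first):

```shell
# Generic escape hatch for any provider that speaks the Docker API: stream a
# named volume out as a tarball, then load it into a volume elsewhere.
#
#   docker run --rm -v appdata:/data busybox tar czf - -C /data . > appdata.tgz
#   docker volume create appdata            # on the new provider
#   docker run --rm -i -v appdata:/data busybox tar xzf - -C /data < appdata.tgz
#
# The tar round-trip itself is provider-agnostic; locally it's just:
mkdir -p /tmp/appdata && echo "state" > /tmp/appdata/db.txt
tar czf /tmp/appdata.tgz -C /tmp/appdata .
mkdir -p /tmp/restored && tar xzf /tmp/appdata.tgz -C /tmp/restored
cat /tmp/restored/db.txt   # -> state
```

Not a full migration, obviously, but it gets your stateful bits somewhere safe before the deadline.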


"Let that be a lesson to not rely on cloud infrastructure too much without designing your architecture to be somewhat cloud agnostic."

Not disagreeing per se, but I believe it's much more important to design for migration between providers instead of designing for abstraction over providers. The difference is subtle, but trying to make predictions about what needs abstraction will most likely lead to the same (or bigger) costs when a migration actually happens.

It's a bit like the general advice to design to avoid getting stuck in a hole, as opposed to designing to get out of future holes.


The thing that drew me to Hyper.sh in the first place was the fact that it used the Docker Remote API. That gave me some confidence it would be provider agnostic. Unfortunately no other providers have done this, so that idea didn't pan out.
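For reference, what made it feel provider-agnostic was just the standard Docker environment variable; a sketch (the endpoint shown is hypothetical, from memory):

```shell
# Point the stock Docker CLI at a remote Docker-Remote-API endpoint.
# The endpoint and region here are illustrative, not a real current address.
export DOCKER_HOST=tcp://us-west-1.hyper.sh:443
# From here, ordinary docker commands target the remote service, e.g.:
#   docker run -d -p 80:80 nginx
#   docker ps
echo "docker CLI now targets: $DOCKER_HOST"
```

In theory, any other provider implementing the same API would have been a one-variable switch.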


I thought Joyent had a container service that used the Docker Remote API?


Joyent Triton

https://www.joyent.com/triton/compute

A few years ago (when I last checked) their container startup times were a lot slower than hyper.sh's.


> I believe it's much more important to design for migration between providers instead of designing for abstraction over providers

Right. If you design to be cloud agnostic that means only using the lowest common denominator of features, and while it’s doable it’s not cost effective. The price structure is designed to shepherd you into the managed services.


> Let that be a lesson to not rely on cloud infrastructure too much without designing your architecture to be somewhat cloud agnostic.

We need SaaS: Services as a Service


You're basically describing Openshift.


Hm, "professionalism", to me, is a weird criticism to levy against a product or business that is presumably shutting down due to failure or lack of funding. Is it common, or expected, that startups allocate funds for clean shutdowns? I assume it's just burning through cash trying to make it work, until it becomes so unavoidable that you have to shutter suddenly.

>Let that be a lesson to not rely on cloud infrastructure too much without designing your architecture to be somewhat cloud agnostic.

Yeah, but no one who matters cares -- whether it's writing blog posts or putting your entire company brand/presence/blog on Medium (which harasses all of your readers), or relying on low-margin infra startups that often amount to a thin docs/marketing layer around Docker.

Just like the people who get hosed by hosting/DNS providers with a history of screwing users -- because they didn't think it would happen to them. People are lazy, people think they're exceptional, and I think a bunch of people don't like learning infra and don't see it as important.


If they are actually out of money, they don't have any other option than to shut it down. You can't blame anyone for that. A lot of startups wouldn't have the option to budget for an orderly shutdown.

But their email doesn't say anything about being out of money, just that they are focusing on a different technology.

That's why I brought it up. If you have to shut down, own up to the real reason. If not, you are screwing your users to focus on something else.

I'd bet that they just can't cover the bills anymore, though.


You’re still making a decision to slow feature development for the relatively small risk that your cloud provider goes under.

Personally I’d rather just pick the most popular platform as a better decision to hedge against the risk.


That way you'd be indirectly promoting a monopoly and raising their incentives to raise their prices ...


Maybe they don't have any customers for this product?

If they do, however, it's a dick move to pull the rug out with this little notice, over a period when a lot of people are away for much of it...


Well, they're not out of cash - at least not right now. It appears to be a pivot. I'd therefore have to agree with your assessment that it's a dick move.


> Maybe they don't have any customers for this product?

I don't think they have many, but I'm one of them. Not pleased at all w/ the migration time.


Could this be due to a fatal security issue? Running untrusted code side by side on native hardware doesn't give you a whole lot of room to patch vulnerabilities ...

And the situation has gotten a lot worse in the last year.


But Hyper uses lightweight virtual machines for multi-tenancy -- they were actually one of the providers who got it right. It was one of their unique selling points.


Ok, I wasn’t the only one. Plus, right during the holiday season!!! What were they thinking!?


Weird, no statement or anything that I can find, just a banner at the top of the page. As well, they're shutting off service in less than a month during a time in the year when at least many western countries have limited staff/availability due to holidays. While I understand that once they decide they will no longer be operating, they don't have a legitimate business reason to help (former) customers ease through the transition, it really seems like they went out of their way to be as customer-hostile as they could.

EDIT: Looks like an email went out (copied in a comment below) and that they are not shutting down as an organization, just this product. Even more curious as to why they'd offer such a short migration window to former customers. I would, especially in an organization, be truly hesitant to rely on any of their other technologies if this is the pattern being set.


Here's a copy of the email I received:

We are writing to let you know that we decided to pursue a new direction in 2019, and will be closing down the Hyper.sh cloud platform on January 15, 2019.

Over three years ago, we set out to create an open secure container-native platform. We believed that containers represented a sea change in how software would be developed, deployed, and maintained.

Along the way, we created one of the first container-native cloud offerings, the Hyper.sh platform, which utilized our open source technology, called runV, which last year was merged with Intel’s Clear Containers project to become Kata Containers. We’re proud of the platform we built, and the influence we have had on the overall container industry. We are even more grateful to you, our customers, who have deployed hundreds of thousands of containers and built out new business on our platform.

The Hyper.sh platform, while trailblazing, is not where Hyper’s future efforts lie. Moving forward, Hyper is focusing all our attention and efforts towards the upstream Kata Containers project and in developing our Enterprise Kata stack for deployment in the major public clouds.

As of today, it is no longer possible to create a new account on Hyper.sh, and on January 15, 2019, the Hyper.sh cloud service will be shut down. Per section 11 of our terms of service, we wanted to provide you time to migrate off the platform and for the next month, our priority is to help your transition to other cloud services. If you need assistance, please feel free to reach out to us via Slack or your account dashboard. On January 15, 2019 any remaining user data and accounts will be deleted from the platform.

Please start now migrating your containers and data volumes off the platform. Directions on how to migrate your container volumes can be followed here. Please note, you will not be charged for either the container or the FIP in performing the migration.

Thank you for your business and support of our platform. It has been a privilege to serve you.

Sincerely,

The Hyper Crew


" Please note, you will not be charged for either the container or the FIP in performing the migration."

So nice of them.


This is somewhat common. They've probably emailed all their customers with more details. One of them can post a screenshot of their email and link it here.


Wow! Just checked my email and got this. I was considering using this, but KNEW I couldn’t have built parts of my infra around this, because of one thing: the pricing. I even sent a message on their website asking what their roadmap was. After not getting a response for days, I figured the end was nigh.

It was far too low to build a sustainable business around, and so I decided to write my own stack. What I needed they served very well (the ability to run one-shot Docker containers as an HTTP RPC), similar to AWS Fargate but much, much simpler.

Fargate's RunTask is close but requires way too much configuration of roles, permissions, and "provisioning" just to run a container. And even at AWS's scale, Fargate's pricing was still higher! I remember the AWS pricing being something like 5c/hr, priced out in seconds but with a minimum charge of 1 minute, while hyper.sh was 1c/hr with a minimum charge of only 10s! So, roughly, here's why they're going out of business: the business requires operating at very large scale (think dinosaur, not lizard) to make very small amounts of money, while charging 1/5th of what the market-dominant service does.
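A back-of-envelope comparison using those remembered rates (illustrative only, not current pricing for either service):

```shell
# Cost in cents of one short container run under per-second billing with a
# minimum charge. Rates are the ones remembered above; treat as illustrative.
billable_cents() {  # args: run_seconds min_seconds rate_cents_per_hour
  awk -v t="$1" -v min="$2" -v rate="$3" \
    'BEGIN { s = (t > min ? t : min); printf "%.4f\n", s * rate / 3600 }'
}
billable_cents 10 60 5   # Fargate-style: 5c/hr, 1-minute minimum  -> 0.0833
billable_cents 10 10 1   # Hyper.sh-style: 1c/hr, 10-second minimum -> 0.0028
```

The minimum charge dominates for short jobs: at those rates the 1-minute floor makes a 10-second task roughly 30x more expensive.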


What would be good pricing in your point of view?


Micro pricing only works when your customer is running high-frequency transactions. I would have been happy to pay $25, or even $50-100/month, for a starter service that does this well.


Have you tried DigitalOcean or Joyent to see how they compare in complexity to AWS Fargate?


Whoa... I was evaluating this service a couple of months ago. The conclusion was that even though it looked nice, the major downside was that the company hadn't been around long enough and we didn't know if they were profitable. These kinds of moves, with sub-30-day notice during major holidays, are really scary. I feel for everyone out there having to work during the holidays. I hope the CEO did this out of financial necessity. Otherwise he should be put on some type of do-not-purchase-from-these-founders blacklist.


> Otherwise he should be put on some type of do-not-purchase-from-these-founders-blacklist.

Does such a blacklist exist? Given the short notice I expected that Hyper.sh would be gone (and thus free of commercial backlash) but it appears the company will continue in new areas, which makes this decision even more surprising.

I'd definitely be interested in having such a list and an easy way to determine if they are involved in new companies, which doesn't sound trivial. "Foo Co was terrible to their customers" doesn't tell me Bob made the decisions, nor does that tell me that Bar Co is also run by Bob.


The usual HN feedback for trying to track bad CEOs so you don't get bit in the future is that we shouldn't because they're humans who make mistakes and always deserve second chances. Not saying I agree but it's what I've noticed.


Hah I did the exact same thing. Neat product that I wanted to build something new on, but I didn’t trust that this company would be around in a year.


> The Hyper.sh platform, while trailblazing, is not where Hyper’s future efforts lie. Moving forward, Hyper is focusing all our attention and efforts towards the upstream Kata Containers project and in developing our Enterprise Kata stack for deployment in the major public clouds.

(From the shutdown email users received)

I was not aware of these products. Interesting.


kata (Intel Clear Containers + Hyper's runV) is big in the nested virtualization space but is still a tiny project (contributor-wise), consisting mostly of Red Hat. It's unlikely you'd run into these projects if you're not at an IaaS or dealing with containers accessing custom hardware (FPGAs, GPUs, etc.). They're really cool, along with gVisor, KubeVirt, NEMU, etc. The really exciting part is that everyone is using these projects for extremely different reasons: IaaS, rendering farms, Android emulators. It's a really fun project to watch.

I think in the future there will be a big shift off of runc (docker) as the k8s default runtime now that CRI-O has made them pluggable.


> I think in the future there will be a big shift off of runc (docker) as the k8s default runtime now that CRI-O has made them pluggable.

I don't think there's going to be a big shift away from runc (though I'm biased, I'm one of the runc maintainers -- and runc is quite separate from Docker) for a couple of reasons:

1. Containers are still more than decent, and will always handle certain use cases and setups better than VMs can (due to the pliability of namespaces and shared kernels -- VMs have the fixed DRAM problem just like they always have).

2. At the moment, plain containers are arguably more secure than kata containers (though this can be fixed "fairly" easily with some minor memory penalties) because they disable a bunch of security features in their VM kernels -- so you don't get seccomp or AppArmor protections for your containers. Now, you do get hypervisor security, but there was a study some time ago which claimed that a well-tuned seccomp profile is about as secure as a hypervisor.

3. Hooks (like the NVIDIA ones) will always work better with plain containers because the whole idea behind hooking into a container runtime is that you can attach things to the containers' namespaces (with NVIDIA this would be vGPUs). kata is trying (and succeeding in most cases) to emulate these sorts of pluggable components with their agent, but fundamentally they're trying to pretend to be a container (which is going to cause problems).

I think kata is a really good project (and I'm happy that Intel and Hyper.sh joined forces), but I don't think it will replace ordinary containers entirely (even under Kubernetes). But hey, I could be proven wrong -- at which point I'll switch to working on LXC. :P


I noticed that Kata have started integrating firecracker which will be interesting and should help their performance and security stories going forward.

I'd agree though that kata (or other VM based containerization solutions) won't completely replace runc based solutions.

One of the things I like about standard linux containers is the ease with which you can remove part of the isolation without turning it all off or on. Being able to easily do `--net=host` or add a capability is very handy in some circumstances.
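Concretely, the kind of selective sharing meant here looks like this with a runc-backed engine ("myapp" is a hypothetical image, and these need a local Docker daemon to actually run):

```shell
# Peeling off one isolation boundary at a time with standard Linux containers.
docker run --rm --net=host myapp            # share the host's network namespace only
docker run --rm --pid=host myapp            # share the host's PID namespace only
docker run --rm --cap-add=SYS_PTRACE myapp  # keep isolation, grant one extra capability
```

A VM-backed runtime has no host namespace to join from inside the guest kernel, so it can only approximate this.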

Also the security story definitely isn't as clear as VMs>containers. Every isolation layer has had breakouts in the last year, VMs, gVisor, Linux containers.


My last line was bait to get some low level container talk going on in here. Glad it worked ;p

What problems do you see/think arise from kata pretending to be a container?


Dammit, I got baited.

> What problems do you see/think arise from kata pretending to be a container?

There are a few.

One of the most obvious is that anything that requires fd passing simply cannot work, because file descriptors can't be transferred through the hypervisor (obviously). This means that certain kinds of console handling are completely off the table (in fact this was a pretty big argument between us and the Hyper.sh folks about 2 years ago now -- in runc we added this whole --console-socket semantic to allow for container-originated PTYs to be passed around, and you cannot do that with VM-based runtimes without some pretty awful emulation). But it turns out that most layers above us now just have high-level PTY operations like resizing (which I think is uglier and less flexible, but that's just my personal opinion).

Another one is that runtime hooks (such as LXC or OCI hooks) are now a bit more difficult to use. There's nothing stopping you from doing CNI with Kata, but it's one of those things where either the hook knows that it's working with a VM (which requires hook work) or the hook is tricked into thinking it's dealing with a container (which requires lots of forwarding work, or running the hook in the VM). I'm really not sure how Kata handles this problem -- but the last time I spoke to the Kata folks the answer was "well, we're OCI compliant", which isn't really an answer IMHO (they also can't be OCI compliant, because OCI compliance testing still doesn't exist -- but that's a different topic). I imagine their point was "we copy runc", which is unfortunately what most people think when they say "OCI compliance".

There was a recent issue a colleague of mine (who works on Kata) mentioned, which is that currently "docker top" operates by getting the list of PIDs from the runtime and then fetching /proc information about them. Obviously this won't work with Kata and will require some pretty big changes to containerd and Docker to handle this (though I would argue this would be a good thing overall -- the current way people handle getting host PIDs for container processes is quite dodgy). There is currently some kernel work being done by Christian Brauner to add a new concept called procfds, and all of this work will be completely useless for Kata (even though it'll fix many PID races that exist).

But as I said, Kata is quite an interesting project (the work done for the agent is quite interesting) and it fulfills a very important need -- people are still worried about container security, and adding a hypervisor which is lightweight will assuage those fears.


Really appreciate the response. I'm pretty new to the more low level aspects of containers.

Can you recommend any great blogs? Jess's is the only one I'm aware of that sometimes dives into multitenancy/container stuff.


Note that it's CRI that has made runtimes pluggable in K8s and CRI-O is one of the alternatives.

Here are all the container runtimes that CNCF is tracking: https://landscape.cncf.io/grouping=landscape&landscape=conta...


This landscape chart is awesome!


I don't think it's accurate to say it's mostly Red Hat, they are a sponsor and sometimes contributor but most of the work still seems to come out of the Intel and Hyper.sh folks who co-founded the project out of Clear Containers and runV.


Yeah, most Red Hatters in this space work on CRI-O, or Kubernetes, or runc, or KubeVirt, or the pieces on or around Kata from a VM side. There’s a lot of other things that need investment.


Yeah I totally had kubevirt on the brain when writing this


This really sucks for those that have invested in the platform and now have less than a month of warning.

If anyone needs help migrating off hyper.sh I am pretty free over the holiday period. I have extensive DevOps experience. I'm happy to help (gratis).

My email is on my profile.


Over the past 5 years I've seen too many PaaS providers shut down. Every time, those affected ask why, and my answer is the same: PaaS is a great tool but an awful business. We studied many PaaS providers and their business models, and I believe there are fundamental business issues with a PaaS provider that provides infrastructure as well as the platform. The only way to survive the market is to have a rich owner (like Heroku / Salesforce), be a cloud provider (GoogleApps), run it yourself (DIY Kubernetes-like solutions), or use a service that manages a PaaS on your own servers (Cloud 66).


Wow, a two week notice for an infrastructure provider is borderline criminal. I'm out for the holiday during this period... Guess it's going to be a scramble to get back on reliable 'ole Heroku for me. What a nightmare.


The promise of "Speed of containers and security of VMs" is enticing, but is there a simple 101/quickstart for somebody that just wants to run one container in this way?

as in no Kubernetes, OpenStack, Multi-tenancy, nothing.....

Just one bare-metal server, configured as KVM host, and how can I [run/start/stop] one Kata container?

I feel like everyone just assumes you're a Kubernetes Pro and runs infrastructure at the scale of Google/FB/Amazon these days... :-(


k8s is really just the scheduler, and gives you a uniform way to deploy the "VM" containers in the usual scenario. With k8s you can have workloads run on different runtimes (e.g. trusted=runc, untrusted=kata), and this is even easier now with RuntimeClass, which you can reference right inside a regular k8s deployment YAML.
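A hedged sketch of that per-workload selection (the API version varies by cluster release -- RuntimeClass was alpha/beta around this time -- and the "kata" handler name depends on how the runtime was registered with the node's CRI implementation):

```shell
# Declare a RuntimeClass for Kata, then opt a pod into it.
kubectl apply -f - <<'EOF'
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-job
spec:
  runtimeClassName: kata   # this pod runs under Kata; omit it for plain runc
  containers:
  - name: app
    image: nginx
EOF
```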

Kata is actually just several binaries that talk via gRPC (kata-agent, shim, proxy, runtime) and interface with QEMU/NEMU. For instance, kata-proxy proxies commands over a virtio serial interface that's exposed via QEMU.

You could install the binaries and qemu-lite and have a similar system but I'm not really sure how you'd benefit as it's the management through k8s that really won me over. I think in your scenario you'd just be making very complicated QEMU vms. I've linked this to the contribs, maybe they have some thoughts.


The documentation for kata seems fairly straightforward for a single host install. Install the kata packages, modify docker daemon to change the run time, then use Docker the usual way.

https://github.com/kata-containers/documentation/blob/master...
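Roughly the flow those docs describe (paths and names are from memory of the 2018-era packages -- verify against the link above):

```shell
# 1. Register the runtime with dockerd in /etc/docker/daemon.json:
#    {
#      "runtimes": {
#        "kata-runtime": { "path": "/usr/bin/kata-runtime" }
#      }
#    }
# 2. Restart the daemon, then select the runtime per container:
sudo systemctl restart docker
docker run --rm --runtime=kata-runtime busybox uname -r  # reports the guest VM's kernel
```

The `uname -r` check is the quick way to confirm you're inside the lightweight VM rather than on the host kernel.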


That is where serverless is really awesome, as it gives you scale and flexibility, and the providers behind it are using containers anyway.


Sad to see this. What happened to the "serverless containers", especially startups working on the idea? Months ago, Zeit.co gave up the idea of hosting containers and changed their direction to FaaS . Wondering if there is a technical reason (e.g. cost effective and scalable) behind both changes. On the other hand, big cloud vendors are all providing the serverless containers while the experience may not be as smooth as the startups provide.


AWS employee here. Serverless containers are hard, and there are a lot of technical challenges along the way.

For context, when we launched ECS years ago, the goal was always to build a serverless container platform. But first we had to build out our own container orchestration platform capable of keeping track of all the containers at the scale that AWS requires, and build out a lot of underlying tech that didn't exist yet. Recently we open sourced Firecracker (https://aws.amazon.com/blogs/aws/firecracker-lightweight-vir...) which is one of the pieces we built along the way to enable the data plane. It took us a few years to get to this point, which is a really long time in startup lifetime.

The major cloud providers have the resources to spend a lot of time building the technical depth required to offer full featured serverless container platforms that can truly scale up and out. In my view the smaller providers like hyper.sh really nailed the initial developer experience but they are missing the depth behind the scenes that allows serverless containers to be scaled out cost effectively.

In time these two ends of the spectrum will hopefully converge. By open sourcing tech like Firecracker, AWS enables small providers to have access to tech that we built because we needed it, and now it's available for them to use. Conversely, AWS learns how to improve our developer experience by seeing how these small startups create a great developer experience.


The best developer experience I have ever had was Zeit Now v1 - their serverless containers product. I would absolutely love for AWS to offer something similar (serverless Docker containers where I only pay for the periods during which they are serving traffic), ideally with a similar developer experience.

Nothing puts me off faster than having to click through dozens of web UI screens (or read a hundred pages of API docs) just to get a container launched and accessible via a URL. The Zeit Now v1 model, with everything configured from a single (optional) JSON config file and deployed with a single "now" CLI command, was absolutely ideal for me.
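From memory, the Now v1 setup was on the order of this (field names recalled from the v1 docs -- treat as illustrative, not authoritative):

```shell
# A minimal now.json for a Docker deployment; the whole config in one file.
cat > now.json <<'EOF'
{
  "name": "my-service",
  "type": "docker",
  "alias": "my-service.example.com"
}
EOF
# Then a single command built the Dockerfile and deployed it:
#   now
#   now alias   # point the alias at the fresh deployment
```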


... huh, it looks like that fargate CLI project gets much closer to what I'm looking for: http://somanymachines.com/fargate/

I do wish AWS would build experiences like this without me having to turn to the wider developer ecosystem.


AWS Fargate, Azure Container Instances, GCP Serverless Containers (private beta). All the clouds have something to offer now, but you can also look at Kubernetes which can also run a single container very easily and has managed offerings everywhere.


Unfortunately the serverless container supported by Zeit v1 is on a deprecation path...


I used Hyper.sh for my side project 2 years ago during the initial development phase. It was really easy to start with but I wouldn't say it was stable. Every month or so there would be downtimes or the container would run but for example networks wouldn't work. Or storage volumes wouldn't mount/unmount.. :)

After several downtimes I switched to Google's GKE and never looked back. Hyper is easy to start with but impossible to finish. There was no managed database at that time (not sure whether they have one now), and using Postgres with their persistent volumes (which I think are backed by Ceph), performance was really sad.

All in all, it was a great service for trying out ideas or running small & less important applications, but if you really want it to always be available, then probably it's not the right tool.


In fact, I felt this coming for quite a while. One day, they abruptly shut down their Pi (serverless K8s) offering without any notification. Their forum was deserted. Little activity in Slack. Nothing but these signs.


This seems like a sudden shut down, only giving users a few weeks to migrate.


less confusion for https://hyper.is/


Exactly what I thought the link referred to initially.


>...is an Electron-based

Alt-F4


Sad to see them go. They were easy to use and cost effective (for my use cases at least).

They have some really cool underlying tech.

I can't say I'm surprised though - there have been signs that this was coming.


I’m curious, what signs were there that they might/would be shutting down the service?


The forums quieted down. Direct product and support questions in the forums were unanswered. Frequent changes, almost pivots, in product development. No updates in github repos for key product related software.


I note that their Twitter account has had no activity for a few months. This seems to be a common thing with services that shut down or products that get discontinued.

Makes me wonder... maybe it's worth building a tool that monitors the Twitter accounts of common services and products and raises a bat signal if they don't tweet for a month or two. That seems to be where the budget gets tightened first.


I've been looking for a service like this for a while, and the first I find out about it is that it's shutting down. Does anyone know of similar offerings that are as simple? I know AWS, GCP and the likes offer container clusters, but I would like to just run a single container (for cheap) and not worry about it.


This is a short overview of Kubernetes and major cloud providers offering (AWS ECS, EKS, Fargate, Azure AKS, GKE, and DigitalOcean)

https://blog.cloud66.com/public-cloud-managed-kubernetes-our...


AWS employee here:

I'm going to be honest: AWS Fargate isn't as easy to use as hyper.sh if you are looking for an alternative. That said, our increased complexity is because Fargate is orders of magnitude more full-featured for applications that outgrow small platforms. If you want to give Fargate a try I'd recommend this open source tool (http://somanymachines.com/fargate/) as it provides an easy, opinionated starter command line experience similar to what you might have seen for hyper.sh


Azure App Service's 'Web App for Containers' [1] allows you to run a container, but might not meet your definition of cheap. Also consider Azure Container Instances [2]

[1] https://azure.microsoft.com/en-au/services/app-service/conta...

[2] https://azure.microsoft.com/en-au/services/container-instanc...


GCP Employee here.

You can run a container on a f1-micro VM for free forever: https://cloud.google.com/compute/docs/containers/deploying-c...

If you want something more similar to Hyper.sh, serverless containers are coming out publicly soon, and you can sign up for the alpha here: https://g.co/serverlesscontainers
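The one-command version of that f1-micro route, as I recall it (instance name and zone here are hypothetical, and free-tier eligibility depends on region and current GCP terms):

```shell
# Boot a container-optimized VM that runs a single container on startup.
gcloud compute instances create-with-container tiny-vm \
  --machine-type=f1-micro \
  --zone=us-central1-a \
  --container-image=docker.io/library/nginx:latest
```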


Surely you don’t actually mean “forever”


Just found out about this service. It looks like a solid concept. Any alternatives out there?


Zeit is dropping support for containers in their v2, and now Hyper is shutting down their services. Any other one-command deploys ("now" or "hyper run") for containers available out there?


App engine is really nice, it's `gcloud app deploy` and they have support for multistage deploys.

I had to drop them because it's hard to start a HIPAA compliant product on their platform as a solo dev. AWS on the other hand signs a BAA with you with the click of a button.

This forced me to learn AWS and to learn that all I really need for the MVP is a single EC2 instance, RDS, a redis subscription from redis labs, and cloud watch.

Deploying is just pulling from github, shutting down the service and restarting it. It doesn't really need to be simpler than that.

I'm pretty sure that I could scale to 10k users per day on 1 ec2 instance and by that point I would hopefully have venture capital and hire an expert to handle all this for me.


They had a cron feature which was really good. I've switched to running a Kubernetes cluster for my side projects but Hyper.sh cron was really easy to get started with.
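For anyone making the same move: the k8s equivalent, once you already have a cluster, is a CronJob (schedule, name, and image below are hypothetical; `batch/v1beta1` was the CronJob API group around this time):

```shell
kubectl apply -f - <<'EOF'
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-job
spec:
  schedule: "0 3 * * *"        # standard cron syntax, 03:00 daily
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: job
            image: busybox
            args: ["sh", "-c", "echo running nightly task"]
EOF
```

The catch, as noted above, is that you need the cluster first -- which is exactly the bar Hyper.sh cron didn't have.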


Any specific areas for Kubernetes to focus on?


I'm not sure I follow.

As in, is there something k8s might do better with its CronJob feature? I don't think so no...

If you just wanted to run a container on a schedule, and you didn't have a cluster running already, Hyper sh was a lower bar to do that.

If you do have a cluster, using a CronJob on said cluster is easier for me than maintaining something on Hyper sh...

I might have missed the question...


I was just considering using this service for a side project that needed isolated containers to run jobs in. Anyone have suggestions for similar alternative services?


To just run a container and have it available over the web?

Azure has Azure Container Instances. AWS has Fargate. Zeit has Docker deploys with v1. GCP is coming soon with Serverless Containers.


No, for short-lived (a minute or two per run) batch jobs. But, it looks like most of the ones you mentioned can also be used for that sort of task. Thanks!


Azure Container Instances is billed per second, specifically for short-lived tasks.


Me too (see thread for my comment). I decided in the end that using my own EC2 Ubuntu t2.micro was simpler and far cheaper to run than using and configuring Fargate. I assume you want to execute untrusted code (e.g. run CI on behalf of customers or provide a repl.it-like service), like I'm doing? I couldn't find a good service that did this well (well, I did find hyper.sh).


Well, in theory the code that's executing is in a sandbox already but I'd rather have that extra layer of protection. I was really looking forward to just having a little bit of glue code to hit a hyper.sh endpoint and not have to worry about any server administration. manigandham suggested a few alternatives that I'm going to look into.


Right now I believe the only major players with isolated vms are Oracle Cloud and Alibaba. There's a lot of movement in the space right now, though, so you'll likely find a number of options next year.


You can add AWS Fargate to the list as well:

"Each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task."

Official docs on Fargate isolation here: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/...


gitlab?


I think "builds" was the wrong term. (I've edited it to "jobs" above.) GitLab CI is great, but I don't know if it fits my use case, which is running intermittent batch conversions. I was planning on using hyper.sh to spin up containers on demand, instead of having to maintain a job server and queue of my own.


You say "another" -- which other products are shutting down?


We changed the title from "Yet another holiday shutdown: Hyper.sh" to one that is less baity and uses representative language from the article.

https://news.ycombinator.com/newsguidelines.html


Maybe this one:

https://news.ycombinator.com/item?id=18683362 (backplane.io)


Well, I guess that is one valuable lesson: don't choose a no-name company as your cloud provider. Big buddy is your friend.


I was planning on using this for my side project. I thought it looked great for MVPs and small projects. What happened?


Interesting to see one of these happen. Put me down for this happening to Mongo's Stitch product next.


I thought it was a brilliant idea for builds. Not many customers?


I JUST signed up for their service this week. :(.


you should consider yourself lucky. you could have signed up a while ago and invested in it


It's a shame. It was one of the most innovative cloud providers, and my personal favourite.


What is hyper.sh ?



