Hyper.sh – Effortless Docker Hosting (hyper.sh)
549 points by rocky1138 on Nov 7, 2016 | hide | past | web | favorite | 211 comments



Congratulations on launching/going public! I remember seeing your hypervisor/container tech a while back, and it's nice to see a service based around it.

A couple of thoughts:

1) Your quickstart ends with a command to remove the test container, but leaves other resources intact, like the pulled image, which is billed at 10 cents per started GB. That's probably going to surprise some people who start to play with your free credits, and then maybe end up eating those/getting a (small) bill at some point due to dangling images.

Might want to add a "hyper rmi nginx" on the end, along with commands to remove the shared volume?

2) The binary for Linux seems to work fine under "Bash/Linux Subsystem for Windows" on Windows 10.

3) Inbound bandwidth on the smallest images is abysmal - I didn't test bigger ones, so I'm not sure if it's just those that are oversold/under-provisioned. I got 2-300 Kbps from Ubuntu mirrors and http://speed.hetzner.de/1GB.bin on a fresh Ubuntu container -- while from my small VPS at Leaseweb[1] I got a solid 10 MB/s (basically 1 Gbps).

Granted the small VPS is almost 5 Euros a month - but that includes an IP - and drops with a longer term commitment (again apples to oranges, I know -- the whole point of containers on demand is that they are, well, on demand).

And Leaseweb is pretty close to Hetzner - but still, breaking a solid 1 MB/s should be an absolute minimum.

[1] https://www.leaseweb.com/cloud/public/virtual-server


10MB/s is closer to 100Mbps not 1Gbps.
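A quick unit check (1 byte = 8 bits) makes the correction easy to verify:

```shell
#!/bin/sh
# Convert throughput in MB/s to Mbps: multiply by 8 bits per byte.
throughput_mbytes=10
throughput_mbps=$((throughput_mbytes * 8))
echo "${throughput_mbytes} MB/s = ${throughput_mbps} Mbps"
```

So 10 MB/s is roughly an 80-100 Mbps link, an order of magnitude below 1 Gbps.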


Yes, of course, you're right. I'll have to check whether that old VM maybe has a 100 Mbps uplink - or whether Hetzner has started limiting the speed test.

The main point - that the hyper.sh inbound bandwidth is abysmal - still stands.


Did you try the test with a FIP?


A what? :)



Ah, no I did not. I suppose it's possible that NAT-ing to an internal IP is more contested than routing/NAT-ing to a floating IP.


Hey all, founder here. I'd like to thank you for your votes - really appreciated!

Also, I just want to share our public roadmap: https://trello.com/b/7fEwaPRd/roadmap. Feel free to comment. It actually helps a lot for us to prioritize. Thanks!


Where is Hyper.sh (specifically its containers) actually hosted?

As I explained in my other comment (https://news.ycombinator.com/item?id=12892243), it really feels like Hyper.sh is hosted on Amazon; there were references to that fact before, and you guys seem to be trying to minimize that on your site now.

If you're on Amazon, that's OK. I don't think that minimizes how cool this technology is and how much easier it makes things. Amazon has an Elastic Container Service, but this is more nuanced than ECS is, and much more painless. But if the containers aren't on Amazon, a little more detail on how that works would be awesome, because right now it really feels like they're on Amazon. Which is fine, but when folks are making decisions (like putting their stuff on multiple platforms for reliability), it's important to know.

Edit: I signed up and looked around. It appears they're hosting on ZenLayer, a Chinese hosting company that has hosting in LA as one of their options. Not sure why they stick so closely with AWS on terminology though.


RE: "Not sure why they stick so closely with AWS on terminology though"

Makes sense from a user familiarity perspective -- AWS is what most cloud users are familiar with, and describing things in terms that are most likely to be understood is generally good practice.

I'd agree, though, that it would make sense for their site to clarify who owns and runs the datacenter they're running their service out of, if only to answer the question of whether they're hosting on top of AWS or not.


LA according to their website.


"Where" as in which infrastructure provider, not the physical location of the data center.


See OP's edit.


Have you thought about allowing users to provide their own cloud provider (AWS or Google Cloud) API access? That way we can use our own VPC and servers, and you can just charge us monthly for the software. See MongoDB Atlas [1].

This also means you as Hyper.sh don't have to worry about servers, uptime, buying hardware, power, bandwidth peering, what a headache. Let AWS and Google worry about the commodity physical hardware.

[1] https://www.mongodb.com/cloud/atlas/


Fun fact: this is basically what Tutum did before they were acquired by Docker and became Docker Cloud


This is still possible with Docker Cloud. You can either choose to have a node auto-provisioned on a cloud provider or install their agent on one of your own.


No, they are in different games basically.


> serverless cron.

damn, thank you. Anywhere I can see implementation details? Are you rolling your own system from the ground up, or using something like dkron (http://dkron.io/) behind the scenes?


If you're interested in serverless cron, you can use AWS Lambda for the same experience.


Agreed for some things, but a massive limitation of Lambda is its 300-second timeout: it cannot be used for long-running tasks.

If this feature lands it sounds like it will give the ability to run any container arbitrarily and pay by the second - this would be huge for things like web scraping and other tasks that don't occur all the time yet still come with all the pain of server maintenance and uptime fees.

Imagine the scenario where you have a web scraper that runs once a week for 3 hours. Ideally you only want the machine to be on for those 3 hours, and to only pay for those hours of usage; but you also don't want the hassle of writing all the scripts that go with creating and deleting a cloud machine, mostly because those would also need to be hosted somewhere. For that kind of use case there is little out there as far as I'm aware.


There are ways to organize your lambdas to get around the 300s timeout. In classic AWS fashion... you just need to give them more money.

Look for ways to divide-and-conquer your lambdas into smaller parts. If you need to run some logic for every record in some table, give N records to a single lambda (where N is some small number which doesn't make the lambda take anywhere close to 300s).

You can orchestrate this workflow with several different AWS tools which exist all over the spectrum of cost and ease-of-use.

Easiest is definitely just having the master lambda directly invoke the other lambdas with InvocationType:Event.
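A minimal sketch of that fan-out, assuming a hypothetical worker function named `splinter-worker` and a made-up payload shape (the actual `aws lambda invoke` calls are commented out so the loop can be tried without an AWS account):

```shell
#!/bin/sh
# Fan out small batches of records to a worker Lambda asynchronously.
# "splinter-worker" and the payload shape are hypothetical examples.
invoked=0
for batch in '[1,2,3]' '[4,5,6]' '[7,8,9]'; do
    echo "dispatching records $batch"
    # Uncomment to actually fire-and-forget each batch:
    # aws lambda invoke --function-name splinter-worker \
    #     --invocation-type Event \
    #     --payload "{\"records\": $batch}" /dev/null
    invoked=$((invoked + 1))
done
echo "dispatched $invoked batches"
```

Each async invocation returns immediately, so the master stays well under its own 300s limit regardless of how long the workers run.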

SNS is another easy option. Lambda(master)->SNS(per N records)->Lambda(splinter). Downside is that you'll probably completely blast out your global AWS concurrent function execution limit pretty quickly because you have no control over how quickly SNS will trigger your functions.

Kinesis is a more powerful option. SQS also has potential, but you can't directly trigger a Lambda from SQS. One pattern I've seen used is to have a CWEvents cron trigger a lambda every M seconds to read N records from SQS. Depending on how consistent your workload is, this might make sense because it gives you really fine-grained control over that ratio between "how quickly will my jobs be processed" and "am I approaching my AWS global account limits". But if your jobs are really disparate you'd be invoking lambdas all day to do a whole bunch of nothing 90% of the time.


If I'm playing devil's advocate: yes and no. Lambda is still partially limited by the officially supported languages (I'm pretending the hacks around this don't exist; they have issues). That said, Lambda is great, but I'm not really on board with ECS, so it's nice to see an alternative here from someone who is also offering container-based serverless infrastructure.


Is there a write-up on how to use Lambda for this? I've been wanting serverless cron for ages and I feel like an idiot for not thinking of Lambda for this... this should be promoted (if it isn't) - but I'm an example learner... point me please?


Don't have an example, but I have set up a few "serverless" cron jobs:

1) Create a Lambda function

2) Trigger it using CloudWatch Events. You can set up cron like rules and AWS will trigger them for you.
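The steps above can be sketched with the AWS CLI. The rule/function names, schedule, and account ID below are hypothetical placeholders; the real calls are commented out so the snippet can be inspected without touching an AWS account:

```shell
#!/bin/sh
# Schedule a Lambda via a CloudWatch Events cron rule.
SCHEDULE="cron(0 3 * * ? *)"   # every day at 03:00 UTC
echo "schedule expression: $SCHEDULE"
# Uncomment to run against a real AWS account:
# aws events put-rule --name nightly-job --schedule-expression "$SCHEDULE"
# aws lambda add-permission --function-name nightly-job \
#     --statement-id cw-events --action lambda:InvokeFunction \
#     --principal events.amazonaws.com
# aws events put-targets --rule nightly-job \
#     --targets 'Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:nightly-job'
```

Note CloudWatch cron expressions have six fields (the `?` is required in either the day-of-month or day-of-week position), unlike classic five-field crontab syntax.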




Hey jdc0589,

We have the first version of hyper cron ready for internal demo and would love to get your feedback.

If you're interested in being involved, please email us on talk@hyper.sh and we'll show you what we have so far.


man, sorry I missed seeing this. HN doesn't do comment reply alerts very well.

I did get the email about the beta this morning, and will definitely play around with it over the next week or so.


Cool!


Another "serverless cron" option from AWS is their Data Pipeline service. It is intended for data processing, as its name implies, but fits the bill for lots of use cases as a generic "serverless cron". Note that Lambda has an execution time limit that's pretty short, while Data Pipeline can be configured with a much longer timeout.


Lambda is already limited enough when it comes to how you can write the functions. I'm not even sure how you'd write a generic JS/Python/Java function on DPL, unless it's got some functionality I'm not aware of (through a custom shell command bootstrapping your language environment, maybe?).


We are going to release it very soon. Stay tuned :)


Is serverless cron something you'd pay money for?


Sure, if I was using the same vendor for something else already it would be a nice value add.


More details coming up ASAP.


Is it possible to easily run a local Hyper backend on, say, my mac laptop?

I'd like to learn and experiment, but without "running the meter" and without sending my bits off of my laptop, for the time being.

If I find Hyper appealing, I'll certainly be willing to pay to deploy/move projects to your service!


It's almost the same as using docker locally.


Okay, but that doesn't answer my question. :-)

I'd like to run a local instance of the "hyper system," fully contained on my laptop, perhaps running atop virtualbox. If I find it appealing, with respect to the usage experience (i.e. deploying/composing containers), I'd be willing to run some apps on the real deal. I am, though, not willing to experiment with the "real deal" as a means of evaluating it.


Lol, I think they even give you a free credit to start. It takes like 5 minutes to get set up. Not sure what your hesitation is.

It's like "I absolutely won't try DigitalOcean unless they let me create a VM locally first!" Doesn't make any sense.


It takes 5 minutes to get set up, then 15 minutes to make sure you deleted everything afterwards :(


You can log in to your console; the overview page (https://console.hyper.sh/overview) shows everything you have.


I did, but I still needed to delete every single thing by hand, and some things could not be deleted because other things were using them, even though I had stopped all the things, so it had to be in a specific order that took a while to figure out.


Not even for the micro-pennies that experimenting with the real deal would cost you?

If you think you might end up using it, why try and replicate it on your machine (any more than Docker already does) rather than just cross the deployment bridge sooner?


I prefer an "always local first" approach. Before Docker, it was Vagrant + VirtualBox for dev, some VPS hosting company for prod. With docker, it's basically the same thing (better, though, imo), but I no longer think much about the virtual machine on which the docker daemon is hosted and instead rely on boot2docker, Docker Machine, and now Docker for Mac for the local side. For prod, I've used Docker Machine to setup docker daemon hosts on Digital Ocean, et al.

I'd like to worry even less about the container host and get comfortable with a system like Hyper, but it's important to me to get used to it running locally for dev prior to employing it "in the cloud" for prod.


Taking the Vagrant example, you mean you'd typically play with Vagrant locally and then move to AWS?

You can do exactly the same with Hyper. Play with Docker locally and then move to Hyper.

The CLI commands are pretty much identical, docker compose works in the same way.

Of course it's not exactly identical, but neither are a Vagrant image and an AMI.


I've not spent much time with AWS. I was thinking more along the lines of Rackspace, Linode, Digital Ocean, those kinds of hosts.

Yes there are important differences between running a dev server on VirtualBox and a prod server on one of the above, but there is parity in the workflow. The same is true when thinking about docker-machine + docker, locally then remotely.

I understand that Hyper's cli is quite similar to docker's cli. But my preference is to not consider it seriously until I can bang against a version of the Hyper backend running locally. If that's not forthcoming, fine, Hyper's not for me. :-) If it is, then great! I can't wait to play with it, locally, on my personal computer/s.


If it's really about workflow parity then I would encourage you to give it a spin. I think the parity gap will be similar to your previous experiences with VMs all things considered.


Philosophically, does the hyper.sh approach reflect Exokernel's vision? https://en.wikipedia.org/wiki/Exokernel


I would say unikernels are closer to that.


Hey gnepzhao, great work! I had a question on your quota/metering. How can I get in touch?


Sure, join our slack channel https://slack.hyper.sh/, and DM me there @gnepzhao. See you around!


Are there any plans to offer hosting in Germany, or at least Europe? We love the service you offer, but privacy laws here make it hard to use when it is only hosted in the US.


Yes, we are looking to expand to Europe, probably Frankfurt or Amsterdam.

I'd like to keep in touch. Could you drop a message to peng at hyper.sh? Thanks.


We already had a chat with him on your page, thanks!

Maybe it helps to prioritize that topic if I tell you that your service not being hosted here is the only thing keeping us from moving our complete microservice ecosystem to your service ;-)


Any more details about the Hyper Func?


Think a Docker-based, language-agnostic, unlimited version of "AWS Lambda". That's it!


I'd absolutely love to see that! At codebeat we run parsers and static analysis algorithms which are very resource-intensive but typically run for a short period of time so we have those powerful dedicated OVH machines sitting mostly underutilized. I've been doing some preliminary (mostly design and some dirty coding) work on that so if you're interested in comparing notes please feel free to reach out to me at marcinw [at] codebeat [dot] co.


Uh, can I get a more detailed, longer write-up on this?


Once the feature is released we will provide a full write-up. Please follow us on Twitter or sign up and we'll notify you ASAP.

https://twitter.com/hyper_sh


Google Cloud is not far from this. Basically instead of "hyper" you are typing "gcloud".

Google Cloud is far more complicated but its tools so far are pretty good.

I couldn't find how you do custom networks with Hyper. Also, as a Java + Postgres shop, 16 GB of memory (L3) is just not enough.

Per-second billing also seems overkill; Google Cloud has per-minute. It doesn't seem to make sense for "effortless". If you are that interested in saving money at that granularity (i.e. margins), it seems you wouldn't be using a Heroku-like PaaS?

For me, easy deployment is a small part of the story for a compelling PaaS. What I want is really easy metrics, monitoring, notification, aggregated log searching, load balancing, status pages, elastic stuff, etc. Many cloud providers provide this stuff, but it is often disparate, costly addons/partners/integrations that are still not terribly easy to work with.

IMO it is actually harder to get all the diagnostic cloud stuff vs the build + deployment pipeline.

EDIT:

As mentioned in another comment, my company tried to use Docker but it would take too long to make Docker images, so we just prefer VMs. That is, it seems with something like Hyper you save on deployment times but your build times get worse (unless I'm missing some recent magic you can do with Docker now).

EDIT again:

We didn't have Docker cache (because of some issues) so please ignore my slow docker build time comments. Apologies.


Having a CLI doesn't mean they are close. In Google cloud, you still work with VMs, cluster, schedulers. In Hyper, you work only with Docker, everything is container native!

Per-second is perfect for Serverless, Data mining, CI/CD, etc. It is simply not cost effective to go with per hour/minute rate.


I admit I'm a little behind on Docker, but I thought Google provided that with Kubernetes [1]?

I work with the JVM, and serverless is just not worth it for the JVM (not yet, but maybe someday with better AOT). Thus I know very little about instant serverless deployment. I'm sure it is useful, though.

[1]: https://cloud.google.com/container-engine/


Kubernetes lets you do stuff similar to this, but you still have to manage the infrastructure and the platform.

It seems like with Hyper, you literally are just deploying an image to be run in a container. You don't have to worry about configuring and managing a Kubernetes or Swarm cluster. Probably not worth it for very large companies, but for startups and hobby projects, this greatly lowers the barrier to entry.


If you use GKE (Google Kubernetes as a service), you really don't need to manage Kubernetes either.


Yeah, but it's still not as managed as I want. You still need to populate a Kubernetes cluster, which is 3 nodes minimum. Instances show up in your instances list, and you still have to be careful to put your instances in different zones/regions for availability.

In an ideal world I just want to run containers in a region with an LB in front; I don't care which Kubernetes cluster they are on. That's the use case hyper.sh seems to address (but I didn't test it, to be honest).


Once you've got the GKE cluster stood up (two clicks or so), you don't need to care which cluster you are on. The gcloud CLI remembers whatever you set.

It's very hands-off. And if you ever do want to take more direct control, you've still got the option of doing more or all of it on your own.


Let's say you have two images: web and db. Web containers ask for high CPU but small disk; DB requires big memory and disk.

With GKE, you either have different instance types for different container sizes; or you launch the BIG&TALL VMs for all.

The same story applies to public/private network as well. Point is that in GKE, there are two layers to manage: VM and Containers. In Hyper, the container is the infra.


That's a terrible argument considering Hyper gives you no fine-grained control over instance types. You just get a linearly-increasing allocation of CPU cores and RAM.


> 3 node minimum

Since when?


Yup, last I checked you can create GKE clusters as small as 1 node.


There is a separation of concerns slowly being baked into Kubernetes, in that as a normal user you shouldn't need to manage the infrastructure.


Right, but the point is that __someone__ has to set that up. Internally, you could use Kubernetes to build something very similar to this. But, then you have to support that yourself and need to hire people to manage that. As Hyper advertises, this takes away from your concern as a software developer: software development.

Hyper allows for the creation of a minimum viable product that you can move around. I can start on Hyper and then move to pure AWS/Kubernetes/mesos/swarm when and if I determine it makes sense to have people spending time managing the AWS infrastructure and handling deployments.

I haven't used Hyper, but this idea is really cool in principle. I'm excited to see how well they actually do it. It really seems like the Heroku of containers.


Why does this not make sense for effortless? I deployed my (already dockerized) simple db+site in 5 minutes after signing up for hyper.


How long does it take to launch an instance using gcloud docker run?

We tried Joyent Triton, which is almost identical to Hyper, but among other big problems it took a LONG time to launch containers. Minutes.


To be honest I haven't tried the docker run stuff yet and still use VMs. It does take time to provision (for me like 30 seconds). I also don't do much serverless stuff (as mentioned in other comments) so my opinion is pretty crappy at best :) .

I wasn't sure how fast Hyper was when I commented, but I suppose it is fast (I missed the 5-second sub-headline twice).

One of the big issues to why we don't use Docker is that making Docker images is really slow for us! So while we would get fast deployment/provisioning we would have to pay for it in longer build times. I'm curious how others speed up Docker image building?


What underlying filesystem does Hyper use?

I ask because, on Triton, cloning a ZFS dataset should be very fast, because it is a zero copy operation. It basically consists of copying the metadata for the data set attributes and root directory. So in principle, Triton could perform competitively.


In my experience, it takes 1min to launch a container in Triton, but 5-10s in Hyper.


> Per second also seems overkill.

For long running containers, true. But I want to manage some of my data processing as a bunch of individual components that may have very short runtimes. I don't feel like paying for 10 minutes as a minimum makes sense if I only need a machine for 90s.

Their recent examples of using it as a highly parallel build server make more sense there. Do you want to pay for 10 minutes every time you trigger a 1 minute build job?
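To make that concrete, here's the arithmetic under an illustrative, made-up rate of $0.0001 per container-second (not Hyper's actual pricing):

```shell
#!/bin/sh
# Compare per-second billing against a 10-minute minimum for a 90s job.
# RATE is a hypothetical figure used only for illustration.
RATE=0.0001
JOB_SECONDS=90
MINIMUM_SECONDS=600

per_second_cost=$(awk "BEGIN { printf \"%.4f\", $JOB_SECONDS * $RATE }")
minimum_cost=$(awk "BEGIN { printf \"%.4f\", $MINIMUM_SECONDS * $RATE }")

echo "per-second billing: \$$per_second_cost"
echo "10-minute minimum:  \$$minimum_cost"
```

The minimum-billed run costs over six times as much for the same 90 seconds of work, and the gap compounds with every short job you trigger.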


How could it possibly be the case that it takes longer to create a Docker image than a VM image?


Well, I can't speak for all IaaS, but gcloud, Digital Ocean, and even Rackspace can make a VM in less than 5 minutes, which is how long it was taking to make Docker images on a good day for us.

Besides, we don't always blow away a VM for all services (i.e. the ones that don't need a cluster of nodes). We reuse them (yes, this is frowned upon, but we get super fast deploys). I suppose this could be said for Docker as well, though.

Also with Docker our build artifacts would be much bigger since instead of a executable Jar we would have images. The IO of transferring images from the CI server can shockingly take some time.


Sure, you can create a blank, empty VM in less than five minutes... but the point of creating a Docker image is that it has everything pre-installed, ready-to-go.... you're not even remotely comparing apples-to-apples.

Large images aren't an issue anyway, since the base layers will just be cached...


I said I'm comparing with what I know and experienced (and this is for a JVM shop). I'm sure we could probably have gotten Docker to be faster, especially given your passionate comments (and it appears, after googling, that there have been improvements in Docker build time).

But I just ran gcloud to create a VM with Java and copied a jar in under a minute. I just can't figure out a way to get Docker to that speed. Are you creating images and copying that fast? We must be doing something massively wrong with Docker.

EDIT: I found out the reason... It appears we had some issues with the Docker cache and had to disable it (I don't know the exact details why yet). Please disregard my comments on slow docker building. Apologies. I wish I could delete my comments and feel a little bad about potentially spreading incorrect information...


Docker build time for adding a jar to an image that has been built before and cached is less than 10 seconds.
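That rebuild speed comes from layer caching: everything above the first changed instruction is reused verbatim. A sketch of a JVM image laid out for cache reuse (base image and paths are illustrative, not from any particular project):

```dockerfile
FROM openjdk:8-jre

# Rarely-changing layers first: these stay cached between builds.
COPY lib/ /app/lib/

# The application jar changes every build, so copy it last --
# only this layer (and those below it) is rebuilt.
COPY target/app.jar /app/app.jar

CMD ["java", "-jar", "/app/app.jar"]
```

If the jar were copied before the dependency layers, every build would invalidate the cache for everything after it.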


Surprised not to see much comparison to Joyent Triton on here.

We evaluated Triton, and while we encountered a depressing number of show-stopping bugs doing really basic things in the first week (like any container that installs `curl` failing due to a utf-8 character in the default ca set), it was pretty cool to use the native docker CLI to provision nodes. Local == remote on Triton.

Triton runs on top of SmartOS inside Zones. To me, this is the only setup I'd actually trust for production. The security story is a whole lot of hand-waving on Linux. What does Hyper run on top of?

Unfortunately for Triton, it does take as long as a minute to provision and the cost is 2x Hyper's for equivalent hardware. I haven't done CPU benchmarks on Hyper yet but the CPUs were anemic on Triton. The I/O perf was unbelievable, though, due to local SSDs and no virtualization layer.

Will keep an eye on this at least for dev and CI. Good luck!


Security concerns are what keep me on Triton/SmartOS. Not to mention security is much easier to manage, having been built into the subsystem from the origin of the OS.

FWIW scaling an existing Triton instance is nearly immediate, so my practice is to have a couple smaller containers with my running apps that I can scale up rather than having to deploy in order to start scaling. Then depending on the load I can add more instances after that. Different use case than AWS Lambda-style scaling, but works for 99% of the real world cases I've encountered.

I find the CPU is better than AWS instances, but can be a little bursty due to the way SmartOS shares resources between tenants.


10-minute review: I run phantomas for testing sites - specifically unfall24/phantomas.

docker run --rm unfall24/phantomas http://xxxx

FAILS on hyper.sh, WORKS on sdc-docker.

Conclusion: Triton works.


How long does it take to launch it on Triton?


This has a chance to do for Docker hosting (quick, painless Docker containers) what Digital Ocean did for VM hosting (quick, painless VMs in the cloud).

This will definitely be my go-to hosting for personal side projects.

I wonder if the major cloud providers will have something similar (both Azure and AWS seem to spin up VMs on which they run the containers - but you do get charged for the VMs as well)


I'm trying this out by deploying my website (static files generated from Jekyll source and served, all in a Docker image).

I've written the following instructions for updating the site (build new image, push to Docker Hub, pull into hyper.sh, stop previous container, run new one, attach floating IP). Does it seem reasonable?

    HYPER_IP=209.177.92.197
    LATEST_HASH=$(git log -1 --pretty=format:%h)
    IMAGE_NAME=beneills/website:$LATEST_HASH

    docker build -t $IMAGE_NAME .
    docker push $IMAGE_NAME

    hyper pull $IMAGE_NAME
    EXISTING_CONTAINER=$(hyper ps --filter name=website --quiet)
    hyper stop $EXISTING_CONTAINER
    hyper rm $EXISTING_CONTAINER
    hyper run --size=s1 -d -p 80 --name website $IMAGE_NAME
    hyper fip attach $HYPER_IP website


> LATEST_HASH=$(git log -1 --pretty=format:%h)

LATEST_HASH=$(git rev-parse --short HEAD)

is the more normal way to do that.

It also looks like you'll have downtime due to deleting then running. Eww.


Hey beneills, this looks reasonable, but I'm not sure where you got your HYPER_IP from.

Could you drop a note on the forum [1] or join the slack [2] and ask there?

[1] https://forum.hyper.sh/ [2] https://slack.hyper.sh/


"All our servers are built on powerful Octo-Core machines" pretty much guarantees they're using something cheaper than E5 Xeons to save money; I'm wondering if it's something in the Xeon D line. Has anyone specifically characterized what they're using? Could be Xeon D-1540s or similar, or they could also be selling 4-core hyperthreaded E3 Xeons as "8 core".


Edit: I went ahead and signed up for an account and made a container. The floating IP for the container appears to be an LA IP address. Their host appears to be ZenLayer, a Chinese hosting company that can apparently do co-location in LA, so while the IPs geolocate to China, it's possible they are indeed hosting in LA. The CPU is a E5-2630 v4.

Original: I'm pretty sure they're entirely hosted on AWS. Given that they say they're hosted in Los Angeles, I think they mean us-west-1.

Their API uses an AWS address as their endpoint, their authentication is just a veneer over AWS's authentication (including basically find and replacing header variables). They previously had docs that showed how to add floating IPs to the containers, and all the IPs were AWS Elastic IPs.

I'm pretty sure the docs specifically stated they were on AWS last time Hyper came up [1] (Hyper.sh had linked off the Hyper article), but now when I look it's not there. So either in a few days they've moved their infrastructure off AWS and just left their API up there (and are doing some crazy stuff to redirect elastic IPs), or they moved everything but their API off Amazon a while ago and hadn't updated their docs, or they've decided to make the fact they're on AWS less visible, while they're competing with Amazon's own container service. I have the feeling it's option #3.

To answer your question, though, I think they're using M4 AWS instances [2], so Xeon E5-2686 Broadwell or Xeon E5-2676 Haswell. Probably the m4.10xlarge, since they talk about the 10 Gbps networking the containers use.

1. https://news.ycombinator.com/item?id=12873089 2. https://aws.amazon.com/ec2/instance-types/#m4


I have a feeling a huge chunk of the projects we see here are run on AWS. Duckduckgo runs on AWS. For getting things started quickly at a low initial cost, it's usually more viable to use a hosted solution (AWS, RackSpace, Digital Ocean, etc.)

However, once you get big enough, the cost savings usually start falling the other way. Several companies I've been at have moved from hosting to running their own boxes, either co-located or in their own data center (the CTO likes to call this "moving to our own private cloud" or some other marketing bullshit). Even then, careful decisions are made on what to host locally and what to keep on a managed service due to cost.


Having this on AWS is perfectly fine. I even think it's viable to look at Hyper.sh as an alternative to Elastic Container Service, even though both are being hosted on AWS. But if it's hosted on AWS, it's important that people actually know that. If someone wants to build a highly reliable system, and they pick 2 resources -- let's say Hyper.sh and AWS ECS as the most likely candidates for that -- it's pretty important for the customer to know that both resources they're relying on are on the same service, and even possibly in the exact same data center, as that affects how effective their redundancy actually is.


As another side of the same point, hyper on AWS shifts the balance of costs for me as if I want to store my data in S3 it changes whether or not I've got to pay for network egress.


There are plenty of big companies that stay on AWS. Netflix is the obvious example. Companies like Zynga and Activision are moving gradually from self hosted to AWS (from what I hear).


Once you are big enough to be able to negotiate prices that are nowhere near the published prices for AWS, it clearly can start becoming cost-competitive...


Another thing worth noting: the E5-2630 v4 was released in Q1 2016, so it's not an old CPU.

http://ark.intel.com/products/92981/Intel-Xeon-Processor-E5-...


Not on AWS at all.

The hyper.sh API address is us-west-1.hyper.sh, which looks like the AWS style; however, it is not an AWS address, and it is located in an independent IDC around Los Angeles.


Edit: Tried it and confirmed it's indeed not on AWS, updated my previous comment.

Original: "Not on AWS at all" doesn't seem possible.

The docs for Floating IPs [1] list 52.68.129.19 as an example, which is an AWS Elastic IP.

The docs for the API [2] says "Hyper.sh API signature algorithm is based on AWS Signature Version 4", and then proceeds to explain the differences, which is variable names. The API Domain is us-west-1.hyper.sh, which is the same URL schema as AWS (us-west-1 is also AWS's North California region).

Maybe the containers themselves are somehow not on AWS? Sure. But not on AWS at all doesn't seem to be the answer.

1. https://docs.hyper.sh/Feature/network/fip.html 2. https://docs.hyper.sh/Reference/API/2016-04-04%20[Ver.%201.2...


"Based on AWS Signature Version 4" does not mean AWS v4 is used as-is; on the contrary, the docs explain Hyper's own signature, which uses different variable names.

Finally, you can just try it. You'll find it's totally different from AWS.


You can try the API address us-west-1.hyper.sh


Resolves to 65.255.36.153 and 65.255.36.154, which don't have reverse DNS, but MaxMind geolocates to China.


In Los Angeles


I don't think AWS allows you to do nested virtualization. Perhaps they have a hack?


Does the CPU model matter?


Yes, it does. Not all CPU cores are created equal. Performance depends on microarchitecture, cache size, memory bandwidth and latency, clock speed, hardware acceleration of specific features, and hyper-threading capabilities, to name a few.

Take the Xeon E5-2403[1] and the Xeon E5-2637 v4[2]. Both are quad-core Xeons, but they differ by pretty much everything except core count.

Here's a comparison of their performance: http://cpubenchmark.net/compare.php?cmp%5B%5D=1827&cmp%5B%5D....

Granted, this is an artificial benchmark, but the results speak for themselves. In this case, the Xeon E5-2637 v4 is almost three times faster than its little brother, the Xeon E5-2403.

Quantifying CPU performance by number of cores is disingenuous at best, and dishonest at worst.

[1]: http://ark.intel.com/products/64615/Intel-Xeon-Processor-E5-...

[2]: http://ark.intel.com/products/92983/Intel-Xeon-Processor-E5-...


Yes! Running applications that can't cope with NUMA means you need to know what model of CPU you're on, and especially whether your N cores are on the same socket or not.
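
On a Linux guest you can inspect this yourself from sysfs. A minimal sketch (the node/cpulist paths are standard Linux sysfs; everything else here is illustrative and not specific to any provider):

```python
import glob
import os

def numa_nodes():
    """NUMA node IDs visible to the kernel, read from Linux sysfs.

    Returns [] on systems without the sysfs node directory.
    """
    paths = glob.glob("/sys/devices/system/node/node[0-9]*")
    return sorted(int(os.path.basename(p)[len("node"):]) for p in paths)

def cpus_on_node(node_id):
    """The CPU list string (e.g. '0-3,8-11') belonging to one NUMA node."""
    with open(f"/sys/devices/system/node/node{node_id}/cpulist") as f:
        return f.read().strip()

if __name__ == "__main__":
    nodes = numa_nodes()
    if len(nodes) <= 1:
        print("single NUMA node: all cores share one memory domain")
    for n in nodes:
        print(f"node {n}: cpus {cpus_on_node(n)}")
```

`numactl --hardware` reports the same topology plus per-node memory sizes, if it's installed.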


Building a cloud from the ground up is no small task, even more so when you build it on your own hardware, and your own virtualization technology.

Any idea who these people are / this company is? Seems to have come out of virtually nowhere.


It probably just seems like they came out of nowhere because there's been some confusing shift in how their technology is referred to, since they have a couple of different projects -- people may be mixing up Hyper, Hypernetes (a Kubernetes distro), and now, more accurately, Hyper.sh. We wrote a couple of articles in late 2015 profiling them, and that coverage upticks again this month. http://thenewstack.io/tag/hyper-sh/


James, from Hyper here. We have kept a low profile up until now with marketing, because we were so focused on building the tech. But moving forward you will hear more about the team involved: the who, the why, and the how. Hope you try us out!



Looks an awful lot like Heroku.


Heroku docker support is still in beta: https://devcenter.heroku.com/articles/container-registry-and...

Outside of that you're limited to specific stacks.


Actually, you're not limited to specific stacks on Heroku. There are a number of options.

You can use our officially supported languages: https://devcenter.heroku.com/articles/buildpacks#officially-...

You can create a Docker image and deploy it via our container registry: https://devcenter.heroku.com/articles/container-registry-and...

You can create your own buildpack: https://devcenter.heroku.com/articles/buildpacks#creating-a-...

Or use a buildpack created by the community: https://elements.heroku.com/buildpacks


Hey jbryum, thanks for clarifying.


There's some poor English and they list a Chinese office too. Probably not Heroku.


Native-English speaking Hyper team member here. We were founded in Beijing but have since spread to NYC.

Could you point me to the poor-English in question?


e.g. https://hyper.sh/howto/

> This guide shows how you can launch a full functional Jenkins server in one minute And then configure this Jenkins works with you Github account.


Thanks for catching that. Will address.


Confusingly, zeit also has a terminal named hyper[0], despite not having the .sh TLD.

https://hyper.is


For a second, I thought it was by the same company (https://zeit.co/) as they too have a cloud service of sorts (for nodeJS apps I believe)


Even more confusingly, Zeit's "now" supports Docker deployments.


It's really a popular name...


Funny to see Deis is not mentioned here. It runs on top of Kubernetes and seems to deliver quite a similar experience to Hyper. Compared to Hyper, Deis leans a bit more towards the Heroku style of doing things, which is not a bad thing at all. And the Deis team came up with Helm, which is an amazing way to deploy whole sets of containers as if you are installing packages with a package manager.


I haven't used Deis but I have used Dokku and Flynn.

I found both easy to get started with. Flynn works well with DO and AWS although I couldn't get it to work with any third-party hosts (ssh auth issues).

Took me about an hour to setup 3 load balanced nodes and deploy their test go app which writes to Postgres.

There are issues with Flynn though:

- Missing Dokku's rich plugin support (Let's Encrypt is a breeze on Dokku)

- Log shipping

- Docs need work. I had to dive into the issues to figure out how to setup the custom domain names

- The HA is a bit murky. I downed two of my three nodes and the cluster collapsed. Since Flynn runs on one machine I expected it to work

I personally think there is a big gap in the market for a company that provides a thin layer on top of the various IaaS providers, creating the ease of Heroku at lower cost.


> The HA is a bit murky. I downed two of my three nodes and the cluster collapsed. Since Flynn runs on one machine I expected it to work

I'd expect this is by design. In a 3-node system that picks CP in the CAP theorem (consistency over availability), you can only lose one node before the system becomes unavailable. This is because, as far as the remaining node knows, the other two nodes could still be up but network-partitioned off from it. To prevent split brain in such a scenario, a majority of nodes must be reachable, or else the rest intentionally stop working.

tl;dr: A 3-node cluster with 2 nodes down is not the same thing as a 1-node cluster.
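The majority-quorum arithmetic behind this is simple; a quick sketch (illustrative only, not Flynn's actual code):

```python
def quorum(cluster_size):
    """Strict majority: the minimum number of live nodes a CP cluster
    needs in order to keep accepting writes."""
    return cluster_size // 2 + 1

def tolerable_failures(cluster_size):
    """How many nodes can fail before the cluster must stop."""
    return cluster_size - quorum(cluster_size)

# 3 nodes -> quorum of 2 -> only 1 failure tolerated.
assert quorum(3) == 2 and tolerable_failures(3) == 1
# An even-sized cluster buys nothing: 4 nodes also tolerate only 1 failure.
assert tolerable_failures(4) == 1
# To survive 2 failures you need 5 nodes.
assert tolerable_failures(5) == 2
```

This is also why consensus clusters are almost always sized at odd numbers: each extra even node adds a failure mode without adding fault tolerance.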


Yeah I assumed so. I will re-run my test and kill only one node. I assume any of the three nodes can go down?


Flynn developer here. This is correct: a three-node cluster can withstand the loss of any single host before things start failing.

Also, log shipping and Let's Encrypt support are coming soon.


Just tested killing a node and it worked great.


If you're okay with AWS, check out https://convox.com/


If you want a (in my opinion) really good alternative to Flynn, I 100% recommend Deis. Since v2 it's built on top of Kubernetes, so it has a strong infrastructure base.


Really like the look of this, feels VERY Digital Ocean-esque from the UI (which is awesome). As a big fan of DO I'm looking forward to playing with it!

Edit:

One interesting thing I've noticed is that I was charged a dollar for an IP address that I released after 1 minute and 11 seconds. I'd have assumed that it would have been by the second as well. However:

fip 209.177.88.125 - 2016/11/07 16:27:08 2016/11/07 16:28:19 0.0197 $1.0000

From pricing: "Billing begins when a new Floating IP is allocated, ends when it is released. Partial month is treated as a entire month."
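
In other words, billing rounds the holding time up to whole months. A sketch of the rule as quoted (illustrative, not Hyper's actual billing code; the $1/month rate and 30-day month are assumed from the invoice line above):

```python
import math

FIP_DOLLARS_PER_MONTH = 1.00   # assumed from the invoice line above
HOURS_PER_MONTH = 30 * 24      # assumed month length

def fip_charge(hours_held):
    """A partial month rounds up to a whole month, minimum one month."""
    months = max(1, math.ceil(hours_held / HOURS_PER_MONTH))
    return months * FIP_DOLLARS_PER_MONTH

# Held for 1 minute 11 seconds (~0.0197 hours): still a full $1.00.
print(fip_charge(71 / 3600))  # → 1.0
```

That matches the invoice: 0.0197 hours of usage, $1.0000 charged.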


Hi, sorry about the confusion. We do this just to prevent abuse of per-second billing for IPs.

PS: I work at Hyper.sh


At least now, it warns you about this:

% hyper fip allocate 1

Please note that Floating IP (FIP) is billed monthly. The billing begins when a new IP is allocated, ends when it is released. Partial month is treated as a entire month. Do you want to continue? [y/n]:


The pricing on this is amazing. It's cheap enough to run a few services that I'd prefer to keep off my home machines.

How secure are these containers though? I always thought that Linux containers were not designed to be a guaranteed firewall between multiple tenants.

EDIT: They use their own technology to run docker on "bare metal hypervisors" - https://github.com/hyperhq/hyperd. That's actually pretty cool.


They don't use Linux containers; they use hypervisor-based containers (see: github.com/hyperhq/hyperd). Therefore it is VM-level isolation.



Interesting, will give it a look. For small container-based projects I've been using Bluemix[1], and it's very simple and CLI-friendly. The web dashboard could use some help, but it works. Definitely light-years simpler than AWS or Kubernetes for simple projects.

[1]: https://www.ibm.com/cloud-computing/bluemix/


This is exactly the service I've been looking for: a fast, cheap, and easy docker deployment service for my personal projects. It'll be interesting to see this grow, and more importantly, see how they handle security and privacy.


I was thinking the same, until I realized that half a gig of ram is $5 which is what VULTR and DO charge for a full virtualized system. My personal projects are mostly webapps that require at least a half a gig of ram. What kinds of personal projects are you working on?


They are spammers. Last week they spammed an email address of mine, unused for years (but still harvestable), with an email advertising their product under the pretense that I'd find it useful "as a build it user" (an OSS project I did indeed use briefly many years ago).

Such poor judgement goes to show company culture. Wouldn't even consider them for any service after this.


So you are upset because you got a marketing email?

Oh my. You must be very careful with giving your email address to anyone, then...


No, I'm upset because this is SPAM. I did not subscribe to their mailing list, and I did not "give" them the address to email. It's a HARVESTED email address.

Do you have the same dismissive attitude when it comes to viagra spam?

Credit where credit is due, I suppose: somebody from Hyper.sh contacted me and apologized for the lapse of judgment. IOW, even they don't agree with you here.


"Hyper is a set of Linux kernel, init process and management tools, able to virtualize containers to improve their isolation and management in case of multi-tenant applications, eliminating the need of guest OS and all the components it brings. Hyper provides safe and fast isolated environments (virtual machines), on which portable environments (containers) can be easily scheduled. "[1]

The article[1] links to the hyper.sh site [2], as well as a github repo [3].

[1] https://wiki.xenproject.org/wiki/Hyper [2] https://hyper.sh/ [3] https://github.com/hyperhq/hyperd


Wow, very good experience you guys are delivering here! Congratulations on shipping!

Little question, you guys said you started in Beijing, and that your next plans are NYC and Europe... Why did you leave Beijing? Didn't the local growth attract you?


Hey Uniclaude, we didn't leave Beijing. Still there with people in NYC. We just decided to open the first AZ in Los Angeles.


This is really intriguing and seems like a strong fit for my use-cases. What are your plans to expand to other datacenters?

Personally, I'd like to see: NYC, Chicago, Germany, Middle East, and Australia


Hey, founder kicks in.

Yes, NYC and Europe are our next step. Probably Frankfurt or Amsterdam.


Would need Sydney before I could consider it, but it doesn't surprise me it's not next on your list.


A few questions...

1. If I wanted to launch my own docker service on top of this, would that be ok?

2. Any timeline on the Frankfurt/Amsterdam data centers?

3. Policy on DMCA for European data centers?

4. Thoughts on more storage? (pricing on volume storage - ~100 TB+ or so)


1. Yes, though it would take quite a bit of engineering effort (running a cloud is TOUGH)

2. In a few months

3. TBD

4. Yes, we are looking to expand the DC and add more options.

BTW, our public roadmap: https://trello.com/b/7fEwaPRd/roadmap


Unfortunate name collision with Hyper the terminal emulator, which has already changed name once...

https://hyper.is


Now we just need a Dockerized application called "Hyper" and you can schedule Hyper on Hyper from your Hyper window.


For anyone looking for easy-to-use Docker hosting, I would heartily recommend Docker Cloud (first node free, then $14/node/month), along with bare-metal providers like packet.net or Scaleway.

I have an 8GB/4-core Atom-based bare-metal server running on packet.net for only $35. It's running 30+ moderately used containers without any trouble.

Got me off heroku finally!


Do you have a relational database? If yes, how do you manage it? HA, monitoring, backups?

There seems to be about a gazillion ways to get "some code running somewhere" but I'm not aware of many budget options for data persistence.


I don't have any relational database, but I do run Couch, which also requires the things you're asking about: HA, clustering, backups, etc. I'm in the process of launching my new app, so I've thought a lot about it as well. What I came up with is:

1) Database running inside Docker, but using an externally mounted volume. Packet has external block storage ($14/month for the 50GB high-IOPS version), and you can configure backups on it at intervals ranging from 15 minutes to a week. That should completely cover the HA/backup stuff for most apps.

2) For monitoring/logging, the best solution so far seems to be Datadog. It doesn't look very expensive, and it seems to have most of the integrations you can come up with: Couch, pgsql, Docker, Express, Slack, GitHub.

Combined it costs me $70, which is a much better deal than any PaaS I can think of.

Btw, if someone from Datadog is reading this thread, your Couch integration seems to be broken.


@abhishivsaxena Ilan from Datadog here. Mind shooting me some details to ilan@datadog ? Happy to dig into the Couch integration.


This is a concern for me. I really want the database and the app hosting in the same data centres.


This looks cool, but ~$300 a month for 16GB of ram?

Where am I supposed to run my DB?


Price reduction is on the way :)


%docker in production joke goes here%

No but really, this is a neat concept and the idea of micro instances is attractive for prototyping.


I'm curious: why is the pricing linear in RAM for the S* and M* types, but double that for the L* types? L3 = 4xM3, but the pricing is 8xM3. The pricing is very appealing on the lower end, but much less appealing for the still-quite-small large end.


I tried this a few days ago after someone here suggested it as a Lambda + containers tool (https://news.ycombinator.com/item?id=12876472).

I signed up for the trial and was pretty impressed how easy it was to get up and running. I'm hesitant to use for production-level workloads. As it matures I think it will be a great platform (and then Amazon, Google, or Digital Ocean, etc. will acquire it and maybe roll it into their offerings or maybe kill it.)


I wonder if this is better for spinning up low-traffic static sites than DO.


With DO you need to manage the host system, actually installing Docker. Patch Docker, patch the host.

Hyper handles that responsibility, allowing you to free up more time for app dev. "Our platform removes the need for you to manage a VM cluster or any container orchestration engine, so you can focus solely on your containers and get back to coding!"


Why would you want to containerize low traffic sites?

If it's a low traffic site, why not just build a small site and use a cheap shared host?


I was imagining a scenario where a site is pretty low traffic 90% of the time but occasionally has an article hit the front page of HN or a popular reddit sub.


Why "low traffic sites"?


Maybe I'm just blind, but when I was looking at Hyper over the weekend, I couldn't figure out whether it's possible to perform a rolling update for a service to change its environment variables or the other settings you specify when you create the service.

https://docs.hyper.sh/Reference/CLI/service_rolling_update.h...


Feature request: allow volumes to be mounted as read-only via, e.g.:

    hyper run --name mycontainer --volume myvolume:/mnt/point/:ro myimage


Is there something like this on top of DigitalOcean/AWS instead? I'd rather rely on those providers for the hardware and uptime.


Convox for AWS: https://convox.com/


There are a lot of contenders. Cloud Foundry is one of the many, you'll see others (Flynn, Deis, OpenShift, I always miss a zillion) mentioned in this discussion.

I like Cloud Foundry because I work on it. It runs on AWS, Azure, GCP, OpenStack, vSphere, RackHD and others. If someone wrote a BOSH CPI for DigitalOcean it'd run there too without much fuss.

Disclosure: I work for Pivotal, we donate the majority of engineering on Cloud Foundry.


Supergiant.io works for AWS, Digital Ocean, and Open Stack: https://supergiant.io/tutorials


Don't think so. Though their stack is open source.


Self hosted Docker Cloud / machine.


Feedback: there are empty pages in your docs, e.g. https://docs.hyper.sh/Reference/API/2016-04-04%20[Ver.%201.2...


Thanks!


What is the best way to deploy a CRUD app? Use MySQL over TLS? Are there any sort of persistent volumes?


Went through the docs; they have persistent volumes (ext4).


Let me see if I got this straight:

1. Hyper.sh is a PaaS running their own open-source stack https://hypercontainer.io/ ?

2. That means that it's possible to host "your own hyper.sh" on AWS or DO?


Hi, founder is here.

1. Yes, our SW is open-sourced: github.com/hyperhq

2. Not really. HyperContainer runs on bare metal, as it uses hypervisors (KVM, Xen) underneath.


Looks really nice.

Given that this is yet another place to run code, I would be very interested in a third-party security review. The result of that, and any other Certs or regulator reviews, would strongly define what sort of work loads can be run on it.


Do you plan to host a container registry? My use case is wanting to build an image containing SSL server certificates, which I cannot push to, e.g. Docker Hub.

Being able to do `hyper push myimage` would streamline the process.


My solution for the moment is sending my ssl data to a volume, but having 100KiB in a 10GB (minimum) volume feels ugly.

    tar czf - ssl/ | hyper exec -i 7f0b148b478e bash -c "cd /root && cat - | tar xz"


This is really cool. Any idea when you will start offering IPV6 floating ips?


> Amazing Hardware: All our servers are built on powerful Octo-Core machines [...]

Doesn't sound so amazing to me (or is just very low density compared to what's typically done with the current Xeon lineup).


zeit also has super simple dockerfile support with `now`: https://zeit.co/blog/now-dockerfile


> L2 PRIVATE NETWORK

I'm pretty sure they mean Layer 3; the effort of running an SDN network/VPLS and reconfiguring routes, etc., for every client seems like too much to me.


This is pretty cool. I would also love to see a "micro" instance with more cores but keep the lower memory.


That pretty much never happens in hosting. Because CPU cores are the more limited resource on these machines, you'd end up being charged the same if you opted for less RAM since they wouldn't be able to put too many other customers on the same box.


Looks like a competitor of Google App Engine with much lower price (but GAE provides 28 free compute hours per day).


Are you sure the network is free? Because if I host IPFS, it will consume all the bandwidth you have available.


My question as well. This doesn't look like something that will have a long life - and that uncertainty is bad.


Ok, how the heck do I view how much credit I have? Why is this so difficult?


Ok so you do have a billing page https://console.hyper.sh/billing/credit But no link to it on the overview page, or anywhere.


Can you please add a tab that says billing or something.


Thanks, we have improved it; you can find the menu by clicking your avatar on the console.


Seems a bit expensive compared with the cheapest VPS/OpenVZ setups around.


But then can you run Docker on top of OpenVZ? Oh yes, technically you can starting from version x, but I've been told (by a VPS provider) there are a lot of issues in practice.

Of course, I'd love to hear if the opposite is true.


The bigger sizes? Yes, but the small ones work pretty sweet. And the per-second billing!


Yeah, per second billing is nice. I can see this working where you want to change/test lots of containers briefly.


The only type of service which seems comparable is Joyent and hyper seems to be a bit cheaper.

https://www.joyent.com/pricing


Hi, there are plugins for Buildbot and Jenkins (github.com/jenkinsci/hyper-slaves-plugin), which are more like a "serverless" CI/CD solution.

PS: I'm the founder :)


Are there any plans for autoscaling groups like in K8S?


Check https://docs.hyper.sh/Feature/container/service.html.

Kind of similar to LB + ReplicaSet in K8s. No automated scaling yet, but we will add it.


What kinds of payments are accepted?


Currently, Hyper.sh supports credit cards via stripe.com


Any plan for PayPal?



