A couple of thoughts:
1) Your quickstart ends with a command to remove the test container, but leaves other resources intact, like the pulled image (billed at 10 cents per started GB). That's probably going to surprise some people who start playing with your free credits and then end up eating through them, or getting a (small) bill at some point, due to dangling images.
Might want to add a "hyper rmi nginx" at the end, along with commands to remove the shared volume?
2) The binary for Linux seems to work fine under "bash/Linux subsystem for windows" on windows 10.
3) Inbound bandwidth on the smallest instances is abysmal. I didn't test bigger ones, so I'm not sure if it's just those that are oversold/under-provisioned. I got 200-300 Kbps from Ubuntu mirrors and http://speed.hetzner.de/1GB.bin on a fresh Ubuntu container, while from my small VPS at Leaseweb I got a solid 10 MB/s.
Granted, the small VPS is almost 5 Euros a month - but that includes an IP - and the price drops with a longer-term commitment (again, apples to oranges, I know - the whole point of containers on demand is that they are, well, on demand).
And Leaseweb is pretty close to Hetzner - but still, at least breaking a solid 1 MB/s should be an absolute minimum.
The main point, that hyper.sh inbound bandwidth is abysmal, still stands.
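Rough arithmetic on those figures (the comment's "Kbps" is ambiguous; this reads it generously as ~300 KB/s, against the 10 MB/s observed on the VPS):

```python
# Rough download-time arithmetic for the speeds quoted above.
# Both speeds are assumptions taken from the comment, not fresh measurements.

GB = 1024 * 1024  # 1 GB expressed in KB

def download_seconds(size_kb, speed_kb_per_s):
    """Seconds to download size_kb at a sustained speed in KB/s."""
    return size_kb / speed_kb_per_s

slow = download_seconds(GB, 300)        # ~300 KB/s on the fresh container
fast = download_seconds(GB, 10 * 1024)  # ~10 MB/s on the Leaseweb VPS

print(f"1 GB at 300 KB/s: {slow / 60:.0f} minutes")  # ~58 minutes
print(f"1 GB at 10 MB/s: {fast:.0f} seconds")        # ~102 seconds
```

Nearly an hour versus under two minutes for the same 1 GB test file, which is why the difference matters even for throwaway containers.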
A Floating IP: https://docs.hyper.sh/Feature/network/fip.html
Also, I just want to share our public roadmap: https://trello.com/b/7fEwaPRd/roadmap. Feel free to comment. It actually helps a lot for us to prioritize. Thanks!
As I explained in my other comment https://news.ycombinator.com/item?id=12892243 it really feels like Hyper.sh is hosted on Amazon, and there were references to that fact before, and you guys are trying to minimize that in your site now.
If you're on Amazon, that's OK. I don't think that minimizes how cool this technology is and how much easier it makes things. Amazon has an Elastic Container Service, but this is more nuanced than ECS is, and much more painless. But if the containers aren't on Amazon, a little more detail on how that works would be awesome, because right now it really feels like they're on Amazon. Which is fine, but when folks are making decisions (like putting their stuff on multiple platforms for reliability), it's important to know.
Edit: I signed up and looked around. It appears they're hosting on ZenLayer, a Chinese hosting company that has hosting in LA as one of their options. Not sure why they stick so closely with AWS on terminology though.
Makes sense from a user familiarity perspective -- AWS is what most cloud users are familiar with, and describing things in terms that are most likely to be understood is generally good practice.
I'd agree, though, that it would make sense for their site to clarify who owns and runs the datacenter they're running their service out of, if only to answer the question of whether they're hosting on top of AWS or not.
This also means you as Hyper.sh don't have to worry about servers, uptime, buying hardware, power, bandwidth peering, what a headache. Let AWS and Google worry about the commodity physical hardware.
damn, thank you. Anywhere I can see implementation details? Are you rolling your own system from the ground up, or using something like dkron (http://dkron.io/) behind the scenes?
If this feature lands it sounds like it will give the ability to run any container arbitrarily and pay by the second - this would be huge for things like web scraping and other tasks that don't occur all the time yet still come with all the pain of server maintenance and uptime fees.
Imagine the scenario where you have a web scraper that runs once a week for 3 hours. Ideally you only want the machine to be on for those 3 hours, and to only pay for those hours of usage; but you also don't want the hassle of writing all the scripts that go with creating and deleting a cloud machine, mostly because those would also need to be hosted somewhere. For that kind of use case there is little out there as far as I'm aware.
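Back-of-the-envelope numbers for that scenario (the hourly rate below is a made-up placeholder, not real Hyper.sh pricing):

```python
# Fraction of an always-on machine's bill that a 3h/week job actually needs.
# HOURLY_RATE is a hypothetical figure, not a quote of any provider's pricing.

HOURS_PER_MONTH = 730                       # average hours in a month
WEEKS_PER_MONTH = HOURS_PER_MONTH / (24 * 7)
job_hours = 3 * WEEKS_PER_MONTH             # ~13 hours of actual work per month

utilization = job_hours / HOURS_PER_MONTH
print(f"Utilization: {utilization:.1%}")    # ~1.8%

HOURLY_RATE = 0.05  # hypothetical $/hour
print(f"Always-on: ${HOURS_PER_MONTH * HOURLY_RATE:.2f}/mo, "
      f"pay-per-use: ${job_hours * HOURLY_RATE:.2f}/mo")
```

At under 2% utilization, an always-on VM spends 98% of its bill idle, which is the whole pitch for per-second container billing.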
Look for ways to divide-and-conquer your lambdas into smaller parts. If you need to run some logic for every record in some table, give N records to a single lambda (where N is some small number which doesn't make the lambda take anywhere close to 300s).
You can orchestrate this workflow with several different AWS tools which exist all over the spectrum of cost and ease-of-use.
Easiest is definitely just having the master lambda directly invoke the other lambdas with InvocationType:Event.
SNS is another easy option. Lambda(master)->SNS(per N records)->Lambda(splinter). Downside is that you'll probably completely blast out your global AWS concurrent function execution limit pretty quickly because you have no control over how quickly SNS will trigger your functions.
Kinesis is a more powerful option. SQS also has potential, but you can't directly trigger a Lambda from SQS. One pattern I've seen used is to have a CWEvents cron trigger a lambda every M seconds to read N records from SQS. Depending on how consistent your workload is, this might make sense because it gives you really fine-grained control over that ratio between "how quickly will my jobs be processed" and "am I approaching my AWS global account limits". But if your jobs are really disparate you'd be invoking lambdas all day to do a whole bunch of nothing 90% of the time.
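A rough sketch of that master/splinter fan-out. The batching is the interesting part; the invoker is injected as a plain callable so the sketch runs without AWS credentials (in a real master lambda it would be a boto3 Lambda invoke with InvocationType 'Event', which returns immediately and keeps the master well under its own timeout):

```python
import json

def chunks(records, n):
    """Split records into batches of at most n."""
    for i in range(0, len(records), n):
        yield records[i:i + n]

def fan_out(records, n, invoke):
    """Master-lambda logic: hand each batch of n records to a worker.

    `invoke` is any callable taking a JSON payload; swapping in an
    async boto3 invoke turns this dry-run sketch into the real thing.
    Pick n small enough that a worker never approaches the 300s limit.
    """
    count = 0
    for batch in chunks(records, n):
        invoke(json.dumps({"records": batch}))
        count += 1
    return count  # number of worker invocations triggered

# Dry run with a stub invoker:
sent = []
fan_out(list(range(10)), 3, sent.append)  # 10 records, batches of 3
print(len(sent))  # 4 invocations: 3+3+3+1
```

The same `fan_out` works behind SNS or Kinesis too; only the `invoke` callable changes, which is what makes the concurrency-limit trade-offs above an orchestration choice rather than a code change.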
1) Create a Lambda function
2) Trigger it using CloudWatch Events. You can set up cron like rules and AWS will trigger them for you.
We have the first version of hyper cron ready for internal demo and would love to get your feedback.
If you're interested in being involved, please email us on email@example.com and we'll show you what we have so far.
I did get the email about the beta this morning, and will definitely play around with it over the next week or so.
I'd like to learn and experiment, but without "running the meter" and without sending my bits off of my laptop, for the time being.
If I find Hyper appealing, I'll certainly be willing to pay to deploy/move projects to your service!
I'd like to run a local instance of the "hyper system," fully contained on my laptop, perhaps running atop virtualbox. If I find it appealing, with respect to the usage experience (i.e. deploying/composing containers), I'd be willing to run some apps on the real deal. I am, though, not willing to experiment with the "real deal" as a means of evaluating it.
It's like "I absolutely won't try DigitalOcean unless they let me create a VM locally first!" Doesn't make any sense.
If you think you might end up using it, why try and replicate it on your machine (any more than Docker already does) rather than just cross the deployment bridge sooner?
I'd like to worry even less about the container host and get comfortable with a system like Hyper, but it's important to me to get used to it running locally for dev prior to employing it "in the cloud" for prod.
You can do exactly the same with Hyper. Play with Docker locally and then move to Hyper.
The CLI commands are pretty much identical, and Docker Compose works the same way.
Of course it's not exactly identical; neither are a Vagrant image and an AMI.
Yes there are important differences between running a dev server on VirtualBox and a prod server on one of the above, but there is parity in the workflow. The same is true when thinking about docker-machine + docker, locally then remotely.
I understand that Hyper's cli is quite similar to docker's cli. But my preference is to not consider it seriously until I can bang against a version of the Hyper backend running locally. If that's not forthcoming, fine, Hyper's not for me. :-) If it is, then great! I can't wait to play with it, locally, on my personal computer/s.
I'd like to keep in touch. Could you drop a message to peng at hyper.sh? Thanks.
Maybe it helps to prioritize that topic if I tell you that your service not being hosted here is the only thing keeping us from moving our complete microservice ecosystem to your service ;-)
Google Cloud is far more complicated but its tools so far are pretty good.
I couldn't find how you do custom networks with Hyper. Also, as a Java + Postgres shop, 16 GB of memory (L3) is just not enough.
Per-second also seems overkill. Google Cloud has per-minute. It doesn't seem to make sense for "effortless". If you are that interested in saving money like that (i.e. margins), it seems you wouldn't be using a Heroku-like PaaS?
For me easy deployment is a small part of the story for a compelling PaaS. What I want is really easy metrics, monitoring, notification, aggregated log searching, load balancing, status pages, elastic stuff, etc. Many cloud providers provide this stuff, but it is often disparate, costly addons/partners/integrations that are still not terribly easy to work with.
IMO it is actually harder to get all the diagnostic cloud stuff vs the build + deployment pipeline.
As mentioned in another comment, my company tried to use Docker, but it took too long to build Docker images, so we just prefer VMs. That is, it seems with something like Hyper you save on deployment time, but your build times get worse (unless I'm missing some recent magic you can do with Docker now).
We didn't have Docker cache (because of some issues) so please ignore my slow docker build time comments. Apologies.
Per-second is perfect for Serverless, Data mining, CI/CD, etc. It is simply not cost effective to go with per hour/minute rate.
I work with the JVM, and serverless is just not worth it for the JVM (not yet, but maybe someday with better AOT). Thus I know very little about instant serverless deployment. I'm sure it is useful though.
It seems like with Hyper, you literally are just deploying an image to be run in a container. You don't have to worry about configuring and managing a Kubernetes or Swarm cluster. Probably not worth it for very large companies, but for startups and hobby projects, this greatly lowers the barrier to entry.
In an ideal world I just want to run containers in a region with an LB in front; I don't care which Kubernetes cluster they are on. That's the use case hyper.sh seems to address (but I didn't test it, to be honest).
It's very hands-off. And if you ever do want to take more direct control, you've still got the option of doing more or all of it on your own.
With GKE, you either have different instance types for different container sizes; or you launch the BIG&TALL VMs for all.
The same story applies to public/private network as well. Point is that in GKE, there are two layers to manage: VM and Containers. In Hyper, the container is the infra.
Hyper allows for the creation of a minimum viable product that you can move around. I can start on Hyper and then move to pure AWS/Kubernetes/mesos/swarm when and if I determine it makes sense to have people spending time managing the AWS infrastructure and handling deployments.
I haven't used Hyper, but this idea is really cool in principle. I'm excited to see how well they actually do it. It really seems like the Heroku of containers.
We tried Joyent Triton, which is almost identical to Hyper, but among other big problems it took a LONG time to launch containers. Minutes.
I wasn't sure how fast Hyper was when I commented but I suppose it is fast (I missed the 5 second subline twice).
One of the big reasons we don't use Docker is that building Docker images is really slow for us! So while we would get fast deployment/provisioning, we would have to pay for it in longer build times. I'm curious how others speed up Docker image builds?
I ask because, on Triton, cloning a ZFS dataset should be very fast, because it is a zero copy operation. It basically consists of copying the metadata for the data set attributes and root directory. So in principle, Triton could perform competitively.
For long running containers, true. But I want to manage some of my data processing as a bunch of individual components that may have very short runtimes. I don't feel like paying for 10 minutes as a minimum makes sense if I only need a machine for 90s.
Their recent examples of using it as a highly parallel build server make more sense there. Do you want to pay for 10 minutes every time you trigger a 1 minute build job?
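To put numbers on that (the per-second rate below is a made-up placeholder, not any provider's actual pricing):

```python
import math

# Cost of short jobs under coarse billing minimums vs per-second billing.
# RATE is a hypothetical $/second, not any provider's actual price.

RATE = 0.00001  # hypothetical $/second

def billed_seconds(runtime_s, minimum_s, granularity_s):
    """Seconds you actually pay for: runtime rounded up to the billing
    granularity, with the provider's minimum applied."""
    return max(minimum_s, math.ceil(runtime_s / granularity_s) * granularity_s)

job = 60  # a 1-minute build job
per_second = billed_seconds(job, 0, 1) * RATE
ten_min_min = billed_seconds(job, 600, 60) * RATE

print(f"per-second billing: ${per_second:.5f}")
print(f"10-min minimum:     ${ten_min_min:.5f}  ({ten_min_min / per_second:.0f}x)")
```

For frequent 1-minute build jobs, a 10-minute minimum means paying 10x the per-second price; the 90-second job mentioned above would pay ~6.7x.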
Besides, we don't always blow away a VM for all services (i.e. the ones that don't need a cluster of nodes). We reuse them (yes, this is frowned upon, but we get super fast deploys). I suppose this could be said for Docker as well, though.
Also, with Docker our build artifacts would be much bigger, since instead of an executable Jar we would have images. The I/O of transferring images from the CI server can shockingly take some time.
Large images aren't an issue anyway, since the base layers will just be cached...
But I just ran gcloud to create a VM with Java and copied a Jar in under a minute. I just can't figure out a way to get Docker to that speed. Are you creating images and copying them that fast? We must be doing something massively wrong with Docker.
I found out the reason... It appears we had some issues with the Docker cache and had to disable it (I don't know the exact details why yet). Please disregard my comments on slow docker building. Apologies. I wish I could delete my comments and feel a little bad about potentially spreading incorrect information...
We evaluated Triton, and while we encountered a depressing number of show-stopping bugs doing really basic things in the first week (like any container that installs `curl` failing due to a utf-8 character in the default ca set), it was pretty cool to use the native docker CLI to provision nodes. Local == remote on Triton.
Triton runs on top of SmartOS inside Zones. To me, this is the only setup I'd actually trust for production. The security story is a whole lot of hand-waving on Linux. What does Hyper run on top of?
Unfortunately for Triton, it does take as long as a minute to provision and the cost is 2x Hyper's for equivalent hardware. I haven't done CPU benchmarks on Hyper yet but the CPUs were anemic on Triton. The I/O perf was unbelievable, though, due to local SSDs and no virtualization layer.
Will keep an eye on this at least for dev and CI. Good luck!
FWIW scaling an existing Triton instance is nearly immediate, so my practice is to have a couple smaller containers with my running apps that I can scale up rather than having to deploy in order to start scaling. Then depending on the load I can add more instances after that. Different use case than AWS Lambda-style scaling, but works for 99% of the real world cases I've encountered.
I find the CPU is better than AWS instances, but can be a little bursty due to the way SmartOS shares resources between tenants.
docker run --rm unfall24/phantomas http://xxxx
FAILS on hyper.sh
WORKS on sdc-docker
Conclusion: Triton works.
This will definitely be my go-to hosting for personal side projects.
I wonder if the major cloud providers will have something similar (both Azure and AWS seem to spin up VMs on which they run the containers, but you do get charged for the VMs as well).
I've written the following instructions for updating the site (build new image, push to Docker Hub, pull into hyper.sh, stop previous container, run new one, attach floating IP). Does it seem reasonable?
LATEST_HASH=$(git log -1 --pretty=format:%h)
docker build -t $IMAGE_NAME .
docker push $IMAGE_NAME
hyper pull $IMAGE_NAME
EXISTING_CONTAINER=$(hyper ps --filter name=website --quiet)
hyper stop $EXISTING_CONTAINER
hyper rm $EXISTING_CONTAINER
hyper run --size=s1 -d -p 80 --name website $IMAGE_NAME
hyper fip attach $HYPER_IP website
LATEST_HASH=$(git rev-parse --short HEAD)
is the more normal way to do that.
It also looks like you'll have downtime due to deleting then running. Eww.
Could you drop a note on the forum or join the Slack and ask there?
I'm pretty sure they're entirely hosted on AWS. Given that they say they're hosted in Los Angeles, I think they mean us-west-1.
Their API uses an AWS address as their endpoint, their authentication is just a veneer over AWS's authentication (including basically find and replacing header variables). They previously had docs that showed how to add floating IPs to the containers, and all the IPs were AWS Elastic IPs.
I'm pretty sure the docs specifically stated they were on AWS last time Hyper came up (Hyper.sh had linked off the Hyper article), but now when I look it's not there. So either in a few days they've moved their infrastructure off AWS and just left their API up there (and are doing some crazy stuff to redirect Elastic IPs), or they moved everything but their API off Amazon a while ago and hadn't updated their docs, or they've decided to make the fact they're on AWS less visible while they're competing with Amazon's own container service. I have a feeling it's option #3.
To answer your question though, I think they're using AWS M4 instances, so Xeon E5-2686 Broadwell or Xeon E5-2676 Haswell. Probably the m4.10xlarge, since they talk about the 10 Gbps networking the containers use.
However, once you get big enough, the cost savings usually start falling the other way. Several companies I've been at have moved from hosting providers to running their own boxes, either co-located or in their own data center (the CTO liked to call this "moving to our own private cloud" or some other marketing bullshit). Even then, careful decisions are made about what to host locally and what to keep on a managed service, due to cost.
The hyper.sh API address is us-west-1.hyper.sh, which looks like the AWS style; however, it is not an AWS address, and it is located in an independent IDC near Los Angeles.
Original: Not on AWS at all doesn't seem possible
The docs for Floating IPs list 220.127.116.11 as an example, which is an AWS Elastic IP.
The docs for the API say "Hyper.sh API signature algorithm is based on AWS Signature Version 4", and then proceed to explain the differences, which amount to variable names. The API domain is us-west-1.hyper.sh, which follows the same URL schema as AWS (us-west-1 is also AWS's Northern California region).
Maybe the containers themselves are somehow not on AWS? Sure. But not on AWS at all doesn't seem to be the answer.
Lastly, you can just try it. Then you will find it is totally different from AWS.
Take the Xeon E5-2403 and the Xeon E5-2637 v4. Both are quad-core Xeons, but they differ by pretty much everything except core count.
Here's a comparison of their performance: http://cpubenchmark.net/compare.php?cmp%5B%5D=1827&cmp%5B%5D....
Granted, this is an artificial benchmark, but the results speak for themselves.
In this case, the Xeon E5-2637 v4 is almost three times faster than its little brother, the Xeon E5-2403.
Quantifying CPU performance by number of cores is disingenuous at best, and dishonest at worst.
Any idea who these people are / this company is? Seems to have come out of virtually nowhere.
Outside of that you're limited to specific stacks.
You can use our officially supported languages:
You can create a Docker image and deploy it via our container registry:
You can create your own buildpack: https://devcenter.heroku.com/articles/buildpacks#creating-a-...
Or use a buildpack created by the community:
Could you point me to the poor-English in question?
> This guide shows how you can launch a full functional Jenkins server in one minute And then configure this Jenkins works with you Github account.
I found both easy to get started with. Flynn works well with DO and AWS although I couldn't get it to work with any third-party hosts (ssh auth issues).
Took me about an hour to set up 3 load-balanced nodes and deploy their test Go app, which writes to Postgres.
There are issues with Flynn though:
- Missing Dokku's rich plugin support (Let's Encrypt is a breeze on Dokku)
- Log shipping
- Docs need work. I had to dive into the issues to figure out how to set up custom domain names
- The HA story is a bit murky. I downed two of my three nodes and the cluster collapsed. Since Flynn can run on one machine, I expected it to keep working
I personally think there is a big gap in the market for a company that can provide a thin layer on top of the various IaaS providers to create the ease of Heroku at lower cost.
I'd expect this is by design. In a 3-node system that takes CP out of the CAP theorem (consistency over availability), you can only lose one node before the system becomes unavailable. This is because, as far as the remaining node knows, the other two nodes could still be up but network partitioned off from it. To prevent a split brain in such a scenario, you need a majority of nodes to be accessible or else they'll intentionally stop working.
tl;dr: A 3-node cluster with 2 nodes down is not the same thing as a 1-node cluster.
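The majority rule in the comment above, as standard quorum arithmetic (not anything Flynn-specific):

```python
# Quorum math for a consistency-first (CP) cluster: a majority of nodes
# must be reachable, so an n-node cluster tolerates floor((n-1)/2) failures.

def quorum(n):
    """Smallest majority of an n-node cluster."""
    return n // 2 + 1

def tolerable_failures(n):
    """Nodes that can go down while a majority remains."""
    return n - quorum(n)  # == (n - 1) // 2

for n in (1, 3, 5):
    print(f"{n} nodes: quorum {quorum(n)}, survives {tolerable_failures(n)} down")
# A 3-node cluster needs 2 reachable nodes; losing two leaves no majority,
# which is exactly the collapse described above.
```

Note that going from 3 nodes to 5 only buys one extra tolerable failure; there's no cluster size where losing all but one node keeps a CP system available.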
Also, log shipping and Let's Encrypt support are coming soon.
One interesting thing I noticed is that I was charged a dollar for an IP address I released after 1 minute and 11 seconds. I'd have assumed it would be billed by the second as well. However:
fip 18.104.22.168 - 2016/11/07 16:27:08 2016/11/07 16:28:19 0.0197 $1.0000
"Billing begins when a new Floating IP is allocated, ends when it is released. Partial month is treated as a entire month."
PS: I work at Hyper.sh
% hyper fip allocate 1
Please note that Floating IP (FIP) is billed monthly. The billing begins when a new IP is allocated, ends when it is released. Partial month is treated as a entire month. Do you want to continue? [y/n]:
How secure are these containers though? I always thought that Linux containers were not designed to be a guaranteed firewall between multiple tenants.
EDIT: They use their own technology to run docker on "bare metal hypervisors" - https://github.com/hyperhq/hyperd. That's actually pretty cool.
Such poor judgement goes to show company culture. Wouldn't even consider them for any service after this.
Oh my. You must be very careful with giving your email address to anyone, then...
Do you have the same dismissive attitude when it comes to viagra spam?
Credit where credit is due, I suppose: somebody from Hyper.sh contacted me, apologized for lapse of judgment. IOW even they don't agree with you here.
The article links to the hyper.sh site, as well as a GitHub repo.
Little question, you guys said you started in Beijing, and that your next plans are NYC and Europe... Why did you leave Beijing? Didn't the local growth attract you?
Personally, I'd like to see: NYC, Chicago, Germany, Middle East, and Australia
Yes, NYC and Europe are our next step. Probably Frankfurt or Amsterdam.
1. If I wanted to launch my own docker service on top of this, would that be ok?
2. Any timeline on the Frankfurt/Amsterdam data centers?
3. Policy on DMCA for European data centers?
4. Thoughts on more storage? (pricing on volume storage - ~100 TB+ or so)
2. In a few months
4. Yes, we are looking to expand the DC and add more options.
BTW, our public roadmap: https://trello.com/b/7fEwaPRd/roadmap
I have an 8GB/4-core Atom-based bare-metal server running on packet.net for only $35. It's running 30+ moderately used containers without any trouble.
Got me off heroku finally!
There seems to be about a gazillion ways to get "some code running somewhere" but I'm not aware of many budget options for data persistence.
1) Database running inside Docker, but using an externally mounted volume. Packet has external block storage ($14/month for the 50GB high-IOPS version), and you can configure backups on it ranging from every 15 minutes to every week. So that should completely cover the HA/backup stuff for most apps.
2) For monitoring/logging, the best solution so far seems to be Datadog. It doesn't look very expensive, and seems to have most of the integrations you can come up with: Couch, pgsql, Docker, Express, Slack, GitHub.
Combined it costs me $70, which is a much better deal than any PaaS I can think of.
Btw, if someone from datadog is reading this thread, your couch integration seems to be broken.
Where am I supposed to run my DB?
No but really, this is a neat concept and the idea of micro instances is attractive for prototyping.
I signed up for the trial and was pretty impressed how easy it was to get up and running. I'm hesitant to use for production-level workloads. As it matures I think it will be a great platform (and then Amazon, Google, or Digital Ocean, etc. will acquire it and maybe roll it into their offerings or maybe kill it.)
Hyper handles that responsibility, allowing you to free up more time for app dev. "Our platform removes the need for you to manage a VM cluster or any container orchestration engine, so you can focus solely on your containers and get back to coding!"
If it's a low traffic site, why not just build a small site and use a cheap shared host?
hyper run --name mycontainer --volume myvolume:/mnt/point/:ro myimage
I like Cloud Foundry because I work on it. It runs on AWS, Azure, GCP, OpenStack, vSphere, RackHD and others. If someone wrote a BOSH CPI for DigitalOcean it'd run there too without much fuss.
Disclosure: I work for Pivotal, we donate the majority of engineering on Cloud Foundry.
1. Hyper.sh is a PaaS running their own open-source stack (https://hypercontainer.io/)?
2. That means that it's possible to host "your own hyper.sh" on AWS or DO?
1. Yes, our software is open source: github.com/hyperhq
2. Not really, Hypercontainer runs on bare-metal, as it uses hypervisors (KVM, Xen) underneath.
Given that this is yet another place to run code, I would be very interested in a third-party security review. The result of that, and any other Certs or regulator reviews, would strongly define what sort of work loads can be run on it.
Being able to do `hyper push myimage` would streamline the process.
tar czf - ssl/ | hyper exec -i 7f0b148b478e bash -c "cd /root && cat - | tar xz"
Doesn't sound so amazing to me (or it's just very low density compared to what's typically done with the current Xeon lineup).
I'm pretty sure they mean Layer 3; the effort to set up an SDN network/VPLS and reconfigure routes, etc., for every client seems like too much to me.
Of course, I'd love to hear if the opposite is true.
PS: I'm the founder :)
Kind of similar to LB+ReplicaSet in K8s. Not automated scaling yet, but we will add it.