We're Lyn & Colin, and we built Layer: it creates a unique staging server for every commit.
Before Layer, I was CTO of a startup with a 10-person developer team, and dealing with staging servers (and end-to-end tests) was one of the most annoying parts of our workflow.
Layer lets you define staging servers the same way you define Docker containers, and get a unique one for every commit going back a week. We run them in the cloud, so there's no need to pay for AWS servers you're not using.
Also, if you're running a webserver, you automatically get a unique URL for each server, so you can post it to a Slack channel and get feedback immediately.
When we're setting up the staging server, we use the same sort of cache as Docker, so you can skip repetitive tasks like setting up a database or running database migrations if they haven't changed.
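To make that concrete, here's a sketch of what a Layerfile-style definition could look like. The directive names below are illustrative, modeled on Docker syntax, and are not necessarily Layer's exact syntax:

```
# Illustrative Layerfile sketch (directive names are an assumption,
# not Layer's documented syntax)
FROM vm/ubuntu:18.04

# Cached like Docker layers: these steps re-run only when their inputs change
RUN apt-get update && apt-get install -y postgresql nodejs
COPY migrations/ migrations/
RUN ./migrations/run.sh

# Start the app; the unique per-commit URL would point at this webserver
COPY . .
RUN BACKGROUND node server.js
EXPOSE WEBSITE localhost:3000
```

If the migrations directory hasn't changed between commits, the database setup steps would be served from cache, just like unchanged Docker layers.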
Would love to get your feedback; the onboarding tutorial only takes five minutes.
Since we give full VMs instead of just containers, you can keep the database running in the background. Because we do memory snapshotting, you can turn a 5-minute database start + migrate + populate into a 1-second "restore from memory snapshot".
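As a rough illustration of that flow using libvirt's virsh (the VM name `staging-db` and snapshot name `warm-db` are hypothetical; Layer's own hypervisor tooling is internal and presumably different):

```
# One-time warm-up: boot the VM, let the database start + migrate + populate,
# then take a snapshot that captures guest memory while it is running
virsh start staging-db
virsh snapshot-create-as staging-db warm-db

# Per-commit: revert to the warm snapshot instead of re-running setup
virsh snapshot-revert staging-db warm-db --running
```

Reverting to a snapshot of a running domain restores the guest's memory state, so the database comes back already started and populated.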
Interesting contrast to Flockport, which runs multiple processes in an LXC container (treating it like a lightweight VM).
I'm Lyn, co-founder of Layer. Here's a quick 5-minute demo video in case you don't want to do the sign-up tutorial (even though that takes 5 minutes too): https://www.youtube.com/watch?v=PH-n70gPQgA
Happy to answer any questions here.
- 50 staging servers per seat
- 11 GB RAM per server
- 8 CPUs per server
Take your hypervisor and sell it to companies that want to efficiently lift and shift legacy systems into the cloud, and make your investors super happy.
Bake a VM with tooling > create a userdata script for cloud-init to copy the payload to the server > create a systemd oneshot service to run the payload && shut down.
Also, we keep the baked images around for a week and trigger them whenever they're requested (via, say, HTTP or terminal access).
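The bake-then-oneshot pipeline described above could be sketched as cloud-init user data; all file paths and names here are hypothetical:

```yaml
#cloud-config
# Hypothetical user data: install the payload and a oneshot unit that
# runs it once, then powers the machine off
write_files:
  - path: /opt/payload/run.sh
    permissions: "0755"
    content: |
      #!/bin/sh
      /opt/payload/build-and-test.sh   # hypothetical payload entry point
  - path: /etc/systemd/system/payload.service
    content: |
      [Unit]
      Description=Run payload once, then shut down

      [Service]
      Type=oneshot
      ExecStart=/bin/sh -c '/opt/payload/run.sh && shutdown -h now'

runcmd:
  - [systemctl, daemon-reload]
  - [systemctl, start, payload.service]
```

`Type=oneshot` makes systemd treat the service as a task that runs to completion rather than a daemon, which matches the "run payload && shutdown" step.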
Some blog posts I remember from when I was exploring this idea a few years ago:
Our hypervisor does the logic though, we don't rely on the OS of the staging server itself for anything (think PXE)
So you guys boot the image in the background, then capture a memory dump of it so you can quickly launch VMs from that snapshot? Or you boot the VMs the traditional way the first time, and then just suspend them when not in use?
Also, we monitor which files are read by each step in the process and let you skip setup chunks that don't need to re-run (e.g., installing libraries) without needing to micromanage which files you copy into the staging server.
Is there a way to try without giving access to my github account?
Can you put your email in your HN profile?
Can you add LayerCI Twitter handle to your site?
2. My email (for Layer) is email@example.com
Thanks for your feedback.
How do you run your layer file locally?
Can you describe automated tests in a layer file?
I built something similar to this for integration testing, and those were my biggest hurdles. At a previous job, where all of the code was Python/Node/Java, I was also able to generate cross-service end-to-end code coverage from these tests; that feature was extremely helpful because it helps you remove redundant tests.
Your service looks very polished and I'm extremely impressed someone is taking this idea to market.
We scan the entire repository for Layerfiles, so you could, say, create a Layerfile per service for unit tests, and a bigger Layerfile in the repo root that does the whole deployment and runs a few E2E tests.
> How do you run your layer file locally?
They can be approximated by running a Docker container and copy/pasting the commands. We haven't built this yet because I know it will be a huge time sink to make it reliable (see, e.g., Vagrant).
> Can you describe automated tests in a layer file?
We tell you if any step fails (e.g., returns a non-zero exit code), so you can just as easily run "./deploy.sh" as "./test.sh". You can also start a webserver and run E2E or acceptance tests against it, for example.
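A minimal sketch of that failure rule, in Python rather than Layer's actual runner: any step that exits non-zero marks the step (and therefore the build) as failed.

```python
import subprocess

def run_step(command: str) -> bool:
    """Run one build step; any non-zero exit code marks the step as failed."""
    result = subprocess.run(command, shell=True)
    return result.returncode == 0

# "./deploy.sh" and "./test.sh" would be treated identically:
assert run_step("exit 0")        # step passed
assert not run_step("exit 3")    # step failed, so the build is marked failed
```

This is why deploy scripts and test scripts need no special handling: the runner only cares about the exit code.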
> generate cross-service end to end code coverage
We are trying to avoid anything that isn't very general - it's possible to run https://codecov.io/ within Layer if you want functionality like this.
Our goal is actually to help teams that have unmocked, difficult-to-test codebases. You can make multiple Layerfiles and share the setup time between them, so you can run destructive tests against a real database if you want to test that, say, signing up via the API works.
(it's an internal tool, don't look for a pricing page)
So far we support Firebase: Firestore, RealtimeDB, and Cloud Functions. In action, development with Foundry looks like this:
- (1) Start Foundry session in your terminal
- (2) Specify a slice of your production Firestore/RealtimeDB we should copy for you
- (3) Specify what Cloud Functions we should trigger for you and with what data
- (4) Every time you save your code we trigger those functions as you specified and send you back the output
Every development environment is created for each developer separately on the fly (it takes less than 30 seconds to have your environment ready). You don't have to wait minutes for your Cloud Functions to get deployed; you usually get a response in just a few seconds.
We are also working on a new feature that will make it possible to have one "master" always-online environment that is publicly accessible.
Let us know if you have a chance to try out the tool and have any other feedback. :)
The "spin-down" feature sounds like the killer -- the only reason we didn't have an environment per build already is because it would be cost prohibitive. Instead, we tried to time-slice with a static set of staging servers... very frustrating with a 50 person team.
Cool stuff! I worked on a very similar project for a small chunk of my life. I was a PM whose life was made a lot harder by not having a tool like this and when I joined a startup that had built a similar tool internally, I realized that it would be a great business. I want to provide some free (and unsolicited) advice:
Your #1 challenge is going to be go-to-market strategy. There are actually a number of companies that have tried to solve this problem and either outright failed or seemingly hit some bumps. YC funded a few of them: Release, FeaturePeek, Dockup. There's also a company called Runnable that was acquihired by MuleSoft. Initially I was encouraged by the proliferation of companies in the space - we couldn't all be wrong. In hindsight, I should have realized that it wasn't as rosy as I had thought, since it doesn't seem like these companies have had gangbusters success.
In my experience, there are roughly three types of pushback you're going to get from potential customers:
1) We can build that better in a week. This one is very difficult to overcome because at the end of the day it's not really about the truth of the statement, it's more about the engineering skill of your prospect. Building this product is a unique and interesting engineering challenge, and I haven't met a DevOps person who wouldn't be excited to try to build it themselves. I tried a lot of different approaches but have never successfully convinced someone it would be a lot easier, faster and more efficient to buy an already existing solution from us. Your mileage may of course vary.
2) Our setup is too custom to work with any generic tool. This one is probably pretty frustrating to hear since you folks have clearly put in a lot of effort to make it work with a variety of build configurations. The interesting thing here is that this reason can often then turn into (1). Our solution required people to containerize their app if it was not already and most people replied that if they were going to do the work to containerize their app, they would just do a bit more work and build this internally.
3) My database is too big/too sensitive to work with this. Many startups have small databases that can be copied quickly. However, some have multi-hundred-gigabyte databases that are too large to copy on demand. You either have to include that latency in the product, which in my opinion makes it borderline useless, or figure out some way to work around it. You can use RDS snapshots, but by our calculations those will be quite expensive. You can hope your customers have some test database (they don't), or you can try writing back to their shared staging or, more realistically, their production database. None of these options is very attractive.
All of that said, it's clear you have some really cool technology here. One thing I'd suggest you look into is a related market: developer environments. Most companies I've worked at have software engineers putzing around for a few days/weeks getting their dev environments set up. This is a very expensive waste of time, IMO. If you could provision a cloud development environment on demand that doesn't require any scripts to set up or database migrations to run, that would be huge. The value of skipping that ~2-week setup time, multiplied by the number of engineers a company hires, really begins to add up. Just my 2c.
Happy to chat about this more if you're interested. I don't feel like sharing my contact info on HN but if you are determined enough I'm certain you can probably get in touch. Best of luck :)
I'll second this as a real pain point. And I would absolutely love to be able to spin up a staging/test environment for myself when needed instead of having to wait for someone else to refresh staging and not knowing what precise combination of commits are on there.
I'm new and on a Mac; everyone else has been here for years and is on Linux. My local dev experience has been somewhat challenging as a result of their assumptions.
Will give this a try.