It looks like Skaffold watches for code changes, and Draft doesn't. But Draft automatically creates the Docker and Kubernetes configs, while Skaffold requires you to create (or copy and edit) them.
Disclosure: I suspect I am partly responsible for the "Disclosure:" thing becoming a fixture.
The pace of K8s releases, and consequently of things like ingress controllers for various tech stacks, is currently a whirlwind.
I'm a big K8s fan. But I want my cloud provider to deal with it instead of me.
Otherwise, it's the worst of on-prem and cloud mixed together.
I'm also keeping an eye on riff as a k8s FaaS.
Shameless plug: we've recently rewritten our platform, which glues together open-source data engineering tools, to run on k8s. It may be worth checking out if you're interested in user analytics or ETL pipelines on k8s, as opposed to writing for it at the app level. It's almost all open source as well. We're currently rolling it out for multiple F500s :).
Disclosure: as you might have guessed, I work for Pivotal.
Thus, I continue to wait for the big 3 cloud providers to figure it out. Unrelated, but seems like an opening for Linode or DigitalOcean.
We like this model because (1) it roughly approximates the added value our customers derive and (2) it's easy to calculate without disagreement.
We do bill PWS based on total RAM-minutes, but that's simply because the existing hosted-PaaS market assumes that model. Our price for PWS is pretty close to what it costs us to run the bits on AWS.
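To make the RAM-minutes model concrete, here's a tiny sketch of how such a bill could be computed. The rate is made up for illustration; it is not PWS's actual pricing:

```python
# Illustration of RAM-minutes billing. The rate below is hypothetical,
# not Pivotal's actual PWS price.
HYPOTHETICAL_RATE_PER_GB_MINUTE = 0.0005  # USD, assumed for illustration


def ram_minutes(instances):
    """Total GB-minutes across app instances given (ram_gb, minutes) pairs."""
    return sum(ram_gb * minutes for ram_gb, minutes in instances)


def bill(instances, rate=HYPOTHETICAL_RATE_PER_GB_MINUTE):
    """Charge is simply total GB-minutes times the rate."""
    return ram_minutes(instances) * rate


# e.g. two 1 GB instances running a full 30-day month (43,200 minutes each)
usage = [(1, 43_200), (1, 43_200)]
print(ram_minutes(usage))  # 86400 GB-minutes
print(bill(usage))         # 43.2 USD at the assumed rate
```

The appeal is exactly the two points above: the total scales with what customers actually consume, and both sides can recompute it from the same two numbers.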
Of course, I should emphasise that I do not work on the sales side of things, where prices can be discussed and Baldwin quotes abound. But I kinda like the sales folk in my office, so if you want to meet one, please email me.
Among other things, it lets you package an app and Kubernetes together into a unified installer, upgrade it, etc. End users need no internet access or Kubernetes experience.
Our k8s distro is PKS, which is the closer fit here.
Disclosure: I work for Pivotal.
We wrote a short doc on the former for Kubernetes itself if you happen to be on a Mac.
As far as app code, I highly recommend building on Helm if you're not already. I imagine that's what Skaffold and Draft are doing behind the scenes.
Forge lets you do things like specify different profiles (QA, staging, prod) when you deploy, map branch names to profiles, do super-fast incremental Docker builds, handle dependencies, etc.
What we've found over time, though, is that it's less about the tool and more about the best practices for how you use these tools. That's something we're spending a lot of time on now, since our users are showing us how they're trying to use this, and it's really fascinating.
Speeding up builds is something I'm interested in. Google's FTL is one good solution; I think we'll see some more from their container tech team before long. Personally I've pitched a notion based on automatically building layers for "synthetic" images by identifying commonalities between filesystems.
Do you have any more info on the idea about identifying commonalities between filesystems?
I work on Skaffold and really want to speed up builds.
1. The performance of bind-mounted host volumes in Docker for Mac was pretty disappointing. We found our initial builds were substantially (~3x?) slower inside a (volume-mounting) container than outside.
2. Having to build all of our services when you started the stack was also slow, versus the minikube + telepresence setup where you by default run pre-built images (the same ones we deploy) and swap out services as needed. You can mitigate this with well-written Dockerfiles that do a good job of caching layers, but it can be tedious.
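The layer-caching pattern mentioned above usually means copying the dependency manifest and installing before copying the rest of the source, so edits to app code don't invalidate the install layer. A hypothetical Node.js example (file names assumed; the same idea applies to pip, bundler, etc.):

```dockerfile
FROM node:8-alpine

WORKDIR /app

# Copy only the dependency manifests first. The install layer below is
# rebuilt only when package.json / package-lock.json change.
COPY package.json package-lock.json ./
RUN npm install

# App source changes frequently, so copy it last; edits here invalidate
# only this layer, and the cached npm install is reused.
COPY . .

CMD ["node", "server.js"]
```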
I've been thinking about integrating a local minikube lately but haven't started yet. I may end up using Skaffold as a backend, or just deprecate kube_maker entirely if it becomes obvious that Skaffold is the way to go.