I think the docs kind of miss what problem this solves and how it solves it. I do all the local stuff with docker-compose while using the same images for k8s deployments, so how does this help me? Does this aim to replace docker-compose with k8s, or is it just for testing k8s resource changes in real time, or is it for testing your local k8s resource definitions against a remote cluster? Obviously this is useful for some people; I would like to understand what problems this project addresses and how it addresses them.
Also, doesn't redefining the k8s definitions in a Tiltfile, rather than reusing them, create yet another config to manage, one that can quite easily diverge from the production resource definitions? That is exactly one of the problems with using docker-compose for local development and k8s for deployment. I would really like to read about the problem and the proposed solutions on a detailed documentation page. I might have missed it if there is already one; please correct me if so.
The problem this solves is that distsys devs don't get enough feedback. You say you use docker-compose. Compared to docker-compose, Tilt has two advantages:
1) It updates the services as you edit them
2) Its UI makes it easy to see where errors are occurring, without you having to play 20 questions on the command-line or worry about things scrolling off-screen.
The Tiltfile doesn't redefine your k8s definitions; it reuses your existing k8s definitions. The Tiltfile just tells Tilt how to get them. (Many teams don't just use yaml files, but generate them, so Tilt has to be able to do that, too.)
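To make that concrete, a Tiltfile for a single service can be as small as this (the image name and yaml path below are just placeholders for the example):

    # build the image from the Dockerfile you already have
    docker_build('gcr.io/my-project/my-service', '.')
    # deploy the k8s yaml you already have, unchanged
    k8s_yaml('deploy/my-service.yaml')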
There is a landing page at https://tilt.build which has more of the "why Tilt". Also, we're adding native support for docker-compose next week. I'd love to understand your case more, because at a high level you sound like exactly the kind of dev we want to help.
AFAIK, the whole idea behind the cloud-native approaches and tools in the ecosystem, such as Docker, Kubernetes, Helm, and Terraform, is reproducible builds that are platform-independent, at least that is my understanding of the ideal. I imagine having my application code, my Terraform definitions, k8s resources, and Helm charts, and being ready to deploy my application to any of the big cloud providers. This doesn't have to be the case for each and every application, but I believe we should all be moving towards a decoupled infrastructure where the application code itself is independent of where it runs in terms of infra.
So, considering the point above, using Tilt to update production cluster state and resources with hot reloading seems like a no-no, as it may easily cause configuration drift, which would make the actual cluster state independent of the k8s resource definitions. We should be moving towards reproducible builds and infra, so this use case seems to fall short in my opinion.
Replacing docker-compose seems like an exciting prospect, though the added value is still not clear enough for me to abandon a tool I have been using for months for developing the application code (not the infrastructure stuff). I can see all the logs from all the containers in a single terminal, every developer on my team can run the whole application with a single shared docker-compose definition, and everyone on my team is already familiar with it. As far as simplicity goes, I don't see how Tilt simplifies my workflow, since it requires a configuration script instead of a docker-compose.yml file, and in that regard it is yet another tool to learn. Ideally, I would like to remove the dependency on docker-compose entirely, as keeping it in sync with the k8s resource definitions requires effort, though Tilt also requires a similar configuration AFAICT from the docs. The best workflow for local k8s-based development seems to be using minikube with mounted directories or something similar, which means every change in the k8s resources made locally would also be present in the production setup, hence keeping the production setup completely reproducible on every cluster, including local ones like minikube.
Please don't take my words the wrong way, I am really looking forward to replacing my docker-compose based setup with something based solely on k8s definitions, so I am playing devil's advocate here in order to clarify Tilt's benefits and make an informed decision about adopting it or not. Thanks for jumping in on HN to answer questions and accept feedback, I really appreciate that.
Yes, please don't use Tilt to update your production cluster!
Tilt is just for precommit development.
If you email me (dan @ windmill dot engineering), we can talk more, but we are more reproducible and less config than mounting directories into minikube.
It looks like you're basically recompiling/rebuilding the container and just updating what's running (in what I assume is a k8s cluster on OS X). This all seems like a terrible idea. No way devs want to wait for a container rebuild to see updated code changes, that's highly unproductive. They usually don't run k8s locally either, because it's an abstraction layer they hardly want/need to concern themselves with while writing code. And as a sysadmin, truthfully I don't really want them to try, because it would double my own already heavy workload. This seems to solve a problem in a way that only works in the most ideal enterprise (slow moving) work environment.
I currently solve the problem with carefully built containers which permit volume mounting and code reload natively, using typical dev flags for the run command. It requires more thought upfront than is ideal, but it does typically solve the problem. Cheers.
> There is a landing page at https://tilt.build which has more of the "why Tilt".
As a tangential by-the-way, I notice you're based in NYC. Of interest, Googlers working on Knative Build (hi Jason!) and Pivots working on Cloud Native Buildpacks (including me) are based in NYC also.
Hi! Thanks for checking in on HN. I'm very interested in k8s productivity but had some trouble understanding what Tilt is actually doing--is it a replacement for minikube, hosted kubernetes, etc.?
Is there an architecture diagram that shows what runs where?
There isn't an architecture diagram, and there should be. Thanks.
Tilt replaces `docker build && kubectl apply` or `docker-compose`. It watches your files, updates automatically, and gives you a UI that shows you errors so you don't have to spelunk with kubectl.
It uses minikube or docker-for-desktop or a cloud k8s cluster (AKS, EKS, GKE, whatever).
"Tilt replaces `docker build && kubectl apply` or `docker-compose`. It watches your files, updates automatically, and gives you a UI that shows you error so you don't have to spelunk with kubectl." => that is very clear, consider merging https://github.com/windmilleng/tilt/pull/930 to add it to your readme
Does it allow you to replace the `docker build` step? Currently we use Nix's buildLayeredImage[0] to build our images and some custom machinery to generate our Kubernetes definitions from this, but I really like Tilt's status TUI.
It's a lot like skaffold. Main improvements (for now):
1) a better UI that keeps errors on-screen. Check out our demo video: https://www.youtube.com/watch?v=MGeUUmdtdKA
2) fast_build allows you to do iterative container builds
If you use skaffold for local dev, a Tiltfile will take 10 minutes to write. Email me at dan at windmill dot engineering and we can pair to get you going. You're the user who would benefit from Tilt immediately.
It can be a pretty big pain to get local versions of all your k8s services running on, say, minikube. Tilt makes this really easy because you're developing your app against the real services in your cluster.
I met the Windmill Eng team a few weeks ago at Kubecon in Seattle, awesome to see tilt here. It's definitely a young product, but it's a good start to exactly the experience I want.
By experience I mean that when I do local dev and edit code live while reloading or restarting the service, I see changes reflected immediately in 2-3 commands that take seconds.
In moving software towards Kubernetes, I come to rely on the service abstractions available there. So it makes sense to develop in the same kind of environment. Tilt gives me the ability to run the save => build => package => push => reload loop inside Kubernetes without the manual hassle.
I've used it in minikube, on-prem, and GKE. Running a dev box in a GCE instance and running tilt against GKE is a super nice environment.
(Fair notice: I'm a Googler who focuses on GKE, so bias and all, my opinions are not Google's, etc.)
YAML, especially how it's used in k8s, is terrible. Writing it by hand is mind-numbingly toilful and extremely error prone, and templating YAML with Go (like Helm does) is a travesty.
The most sensible approach I've seen is to use jsonnet+kubecfg. This lets me use a somewhat sane, composable, Turing-complete language that spits out a tree of k8s objects. I can use as much or as little repetition (or abstraction) as I like. Then kubecfg performs all the kubectl invocations to actually make the cluster state match my calculated definitions.
I'd love to see a writeup about your process or workflow.
I keep seeing Helm, and the overall idea of a "package manager" sounds nice, but having looked at some charts and tried to understand the value proposition, I get a very uneasy feeling somehow. Perhaps it's a lack of familiarity? But your comment seems to suggest something is inherently wrong, so I'd love to understand more...
I'd also point out Pulumi. It is a service, but it solves a lot of trouble when dealing with k8s configuration, and you have a real language to create your definitions...
Being written in Node.js and running in an aaS model (unless you go for the call-us enterprise pricing), Pulumi unfortunately gets scratched off my list of software I want to use for provisioning production.
It actually uses your existing Dockerfile and existing k8s yaml definitions, contrary to what some people are saying in the comments; the Tiltfile seems to be just a handful of glue logic. The log UI is pretty nice too, I've definitely looked for an easier way to check logs from multiple running pods.
Seems like this is the main product of a full-time startup. In that case, I am curious how you are thinking about the future and the business model, considering it is 1) open source and 2) has many competitors.
Hey, Tilt CEO here, so I'm certainly biased, but also semi-informed.
We think you deserve three properties:
1) it's easy to start your whole app
2) it's fast to update your app as you edit
3) common problems can't be missed.
docker-compose gives you 1. (some people hack in mount points and file watching to get 2, but it's hacky)
skaffold and garden.io give you 1 and some of 2. (Tilt has fast_build, which can update pods in-place, without rebuilding the whole image, which I don't believe either skaffold or garden.io do)
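To give a rough flavor of what that looks like in a Tiltfile (the image name, paths, and build command below are made up for the example; check the docs for the real details):

    # do the full docker build once from a base Dockerfile,
    # then sync edits and re-run only the incremental step in-place
    (fast_build('gcr.io/my-project/api', 'Dockerfile.base')
      .add('.', '/go/src/github.com/my-org/api')
      .run('go install github.com/my-org/api'))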
Our big value-add is the UI. Which may sound weird for a Terminal UI, but if you look at the demo video on https://tilt.build , I think you'll get a sense for how Tilt is working to keep the problem that's blocking you in your face, so you don't waste time playing 20 questions with kubectl.
Hi all, Garden CTO here. First of all, great job on Tilt! A lot of interesting stuff happening in terms of DevEx in the multi-service realm.
I just wanted to chime in on the points above. Regarding 2), Garden does indeed support updating pods without re-building and re-deploying via our hot-reload feature (https://docs.garden.io/using-garden/hot-reload) which essentially copies source files into the running container on file save (works best for dynamic languages).
Regarding 3), Garden has a terminal UI that shows the status of individual services and updates as changes are made to the codebase. It will for example print error messages for failed container builds and failed deployments. If configured so, Garden can also run tests on code changes and will print the error output if tests fail. Our next release will also contain the first version of a dashboard which displays service statuses and dependency graphs and updates in real time. However, our terminal UI is not interactive like Tilt’s—which looks really nice!
At Okteto we had similar issues so we built our own tool and open-sourced it as CND (https://github.com/okteto/cnd). Our approach is to be decoupled from how your pod is deployed so it works with Helm, CNAB, kubectl, etc…
You can just call k8s_yaml(local('helm template <args>')). That tells Tilt to parse the yaml generated by running (locally) helm template. Feel free to reach out if you'd like to pair to try it.
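For example, something along these lines (the chart path and args are placeholders, but the pattern is the same):

    # render the chart locally, then hand the resulting yaml to Tilt
    k8s_yaml(local('helm template ./charts/my-app --set env=dev'))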
Is the problem with building containers that you don't have a Dockerfile and you don't want to spend time making one?
Draft does that pretty well with Draftpacks, in my experience if there is a Draftpack for what you're using then the Dockerfile that draft spits out will get you pretty close to a workable container image with basically no effort, or sometimes even all the way there.
I'm trying to understand what you mean that building containers is a problem. Is it the procedural act of actually building and rebuilding the container that is the problem? (Draft handles that too...)
I’m all aboard Docker, but I don’t use Linux natively, so I’d need to run a VM inside macOS, or have another box to build containers. Even if I did, I use GCR in production, so I’d need to template my container images in helm to use GCR only in production versus whatever I use locally for development. And what would that even be? I haven’t explored minikube to know what it does for local dev container registry.
Anyway, thanks for the link. Missed that.
EDIT: Looks like Tilt is using GCR in those examples. I'm not trying to traffic container images off of Google Cloud for local development. That requires (fast) internet just for local development, and outgoing traffic really adds up quickly in terms of cost.
PSS: never heard of Draft. Checking it out now. Not using Azure though, maybe that's not relevant.
Sounds like you should try skaffold, it has a workflow for local builds, with fast hotloading to auto-rebuild your image. It also redeploys automatically too. If you're using Minikube it'll share the docker daemon so that you don't have to push anything anywhere.
I find Docker for Mac to be completely fine, and Minikube can be a resource hog, but still workable if you make your resource allocations configurable (i.e. don't request 1G for each pod in your local env, even if you need to do so in prod). There is an xhyve driver for Minikube that should reduce the footprint by not requiring you to run virtualbox.
The Azure team's open source products can almost all be used without the Azure cloud. [1] I do use Azure, but not for everything; you can definitely use Draft without Azure, it's Kube-native software, so you can use it anywhere you have Kubernetes. (Like Tilt, from what I understand about it so far... Tilt and Draft look to be very similar tools, with different approaches.)
I helped someone on Twitter with this question last week. I have a (single-node) Kube somewhere and I want to build images in it, and not have to pull them from a repository. The Kubernetes parlance for this is an imagePullPolicy of Never. (You actually do need a repository if you don't have a single-node Kubernetes, full stop.)
Docker does actually run a VM with Linux in it, doesn't it? The modern version of Docker for MacOS also offers a Kubernetes checkbox, so if you're already using Docker, you needn't have another VM just for running Kubernetes.
This is certainly an area that could use more clarity for users, maybe a nice how-to blog post, but it's absolutely possible to do this without a registry. I think Tilt mentioning "gcr.io" in the image tag is not necessarily a signal that you should actually push your image to the gcr.io registry. It's therefore possible to build and execute your containers while never pushing or pulling. If that's what you want to do, then that's exactly what you may do by setting your imagePullPolicy to Never.
(I have no idea how this works with Tilt, but I'd assume it's possible to use a similar approach since it's also on Kubernetes.)
[1]: the exception is things like aks-engine. You can only use aks-engine with Azure as I understand it, because Azure implements their own API for creating virtual machines, and nobody else implements that particular API. But I can't say how unique the Azure API actually is, or if it's also open source, could you build your own Azure in a rack, from all open-source components? That would be pretty cool, if you could...
Local kubernetes solutions (like docker-for-mac and minikube) usually let you push your image directly to the local docker daemon. No need to upload to a remote registry only to re-download it. A lot of tools in this space are pretty good about detecting this optimization, including Tilt.
If you tried something like docker-machine on Mac a while back and were disappointed with abysmal performance, it's worth another look. They've replaced VirtualBox with HyperKit, which uses the Apple-supported Hypervisor.framework and works much, much better.
Running a VM is a total drag. I only have 16GB RAM. Then even if I run a VM, the docker registry is not the same as my production docker registry (GCR). I guess I can account for this with a variable in my helm templates, but then it’s just more complexity.
Would you recommend minikube even with 16GB RAM? How much do you allot to the VM?
I'm a long-time minikube fan, I like the isolation (totally worth whatever minor VM memory overhead), reliability, ease-of-use; it feels like an appliance in a good way, which is something I traditionally associate with VMs. minikube contributed strongly to me personally getting excited about k8s in 2016.
The local docker registry is an asset because it gives me flexibility when offline.
If you are running Docker for Mac, it should work out of the box:
Docker for Mac uses HyperKit instead of Virtual Box. Hyperkit is a lightweight macOS virtualization solution built on top of Hypervisor.framework in macOS 10.10 Yosemite and higher.
This is technically true, but in practice Docker for Mac hides this from you and does not require a prohibitive amount of memory. 99% of the developers at my company have 16GB MBPs and we all build multiple Docker containers daily.
As weird as it sounds, Microsoft has implemented the full namespaces stack in Server 2016 (unavailable in Win10, as the base for native containers is Server, where the kernel differs a bit), which Docker supports (and k8s in alpha/early beta).