
This really helps with the dev-to-production story for containers.

When people first started using Docker containers, we were promised things would run identically in dev and production - no more "but it worked on my laptop" issues. Then the rise of orchestrators meant that there again became a significant difference between running an app locally (in compose) and in production (on Kubernetes). Docker for Mac/Windows will now bridge that gap, giving me a k8s node to run against in dev.

Whilst Kubernetes has provided a great production orchestration solution, it never provided a great solution for development, meaning most users kept developing with Docker and Compose. It's great to see these worlds now coming together and hopefully leading to a first-class solution all the way from dev to prod.




In order to test your app in a prod-like environment you need to run a prod-like environment locally, i.e. a k8s cluster that is close enough to prod. For that you will have to at least simulate a multi-node setup and run all the cluster add-ons like in production.

I am excited about this move from Docker, but I don't think it will solve all the problems. I think once you have a bigger team it is worthwhile to run a second k8s cluster besides prod where people can just test things. Otherwise, it is actually not that hard to run a local k8s cluster with Vagrant - not sure how Docker wants to top that; I think there is no need to top Vagrant.


I believe when you say "simulating multi-node setup and addons" - you're seeing it from an operations perspective. Thing is, those concerns don't need to be repeated for every single application. When a consumer says "test", they mean testing functionality. Not testing operations like network I/O bandwidth, sysctl parameters, rebalancing, etc. The expectation is that, operational folks (kubernetes integration tests, ops integration tests, GKE tests, etc) already have tested and verified all of that.

This is not like ops vs dev. When you use a library or framework (say, Spring) - you don't test whether HTTP MIME types are working correctly in Spring. You assume the library already has all that tested and covered, and as a consumer, you write tests for what you code. The library's code (and tests) are abstracted from you. This is similar, except for operational stuff. In fact, there is no major difference between them. It's just layering and separation of concerns.

There is no such thing as 100% prod except prod. Even for rocket launches. 90+% is good enough for the majority of cases, and is already on the higher side.


Not necessarily. We have been using k8s for a while now where I work, and what we've been doing is running a simple minikube setup locally, with the production add-ons (DNS, nginx ingress controllers, etc.), circumventing whatever is AWS-related.

It's been working quite well, no need for multi-node-setup so far that I'm aware of.
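
For anyone curious, the gist of that kind of setup is just minikube plus its bundled add-ons; roughly something like this (addon availability varies by minikube version, and the `k8s/` manifest path is just a placeholder):

    minikube start
    minikube addons list                # see what's bundled
    minikube addons enable ingress      # nginx ingress controller
    # kube-dns is enabled by default, so in-cluster DNS works out of the box
    kubectl apply -f k8s/               # your usual manifests, minus the AWS-specific bits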


PersistentVolume (PV) support and failover have proven not to work very well on single nodes. Otherwise I agree.


It will be possible to run all your add-ons in the local setup, including networking. Multi-node is essential for some use cases, but arguably not critical for most people, and it is coming in the future.


Docker for Windows is pretty much unusable at the moment.


I mostly run it for compatibility reasons, to integrate with it or allow things to run alongside it. It works, but has major annoyances, of which the shares are one of the bigger ones. I always need to restart containers, as they are started before the share is mounted properly. And it often loses the connection...


Did they ever fix file change events? Last I tried it you had to do polling which was far from ideal.


Summarised my sentiment perfectly.


There's minikube. There's been minikube for a long while.


Minikube (and its derivative, MiniShift) have been very helpful for my team in bridging the gap between local development and production for Kubernetes and OpenShift.


Thanks... Would love to know more about your experience.

/me is one of the main contributors of Minishift


As an ordinary developer, I'd love to hear your experience as well.


Note that minikube is built on Docker's libmachine, which was used in the pre-Moby Docker Toolbox.


Yes. And I don't mean to knock minikube, but this is potentially simpler and easier to use.


Yeah, not to pile on with the comments, but more to help someone else who might not have used minikube: it's been great for the past year we've been using it. As simple as simple gets.


I don't see how it could be much simpler and/or easier. In my experience, Minikube just works, out-of-the-box.


Not everyone had the same experience.

There are a few minor UX flaws that make it frustrating to use, e.g. having to set the Docker host, poor shared-filesystem performance, and broken networking in enterprise desktop environments (just to name the top issues).

Also, a lot of folks end up running Docker for Mac and minikube VMs, why should they have to run two VMs?

Additionally, minikube is completely different from production-grade deployments: it is a single binary, which means a rewrite of the main function for etcd and all control plane components; basic performance issues in the control plane are hard to debug, because there is one large process and you don't know what is wrong; and there is no way to use your favourite network add-on.

Additionally, minikube is based on the legacy Docker libmachine, which is not really maintained anymore.


Certain things are not possible. However, we try to match functionality as much as possible with localkube (and soon kubeadm).

Shared folders, especially using 9p and/or cross-platform, have been an issue; I personally also experience this in the Minishift fork, and this is likely the performance issue you meant.

But back to an earlier question I posted, have you filed the issues you had in the issue tracker? https://github.com/kubernetes/minikube/issues

Yes, the docker/machine code is an issue. For this, libmachine has mostly been moved in-repo, and we are working on abstracting and eventually replacing it.


So, reading this link where information is scarce, this seems like an alternative to minikube (i.e., bundling Kubernetes as part of Docker CE/EE). Is that the right interpretation?

FWIW, we've found minikube a bit wonky. It's resource intensive, so if you want to run more than a couple of services, your laptop starts to melt. One of our open source projects is Telepresence, which relies heavily on Kubernetes networking, and we definitely see more weird networking issues with Telepresence/minikube than with regular K8s clusters.


Have you filed an issue about this behaviour?

Note: the resource usage might be down to the hypervisor, but it generally shouldn't be that bad.


Docker for Mac and Windows will include a single node k8s cluster, so yes, effectively a replacement for minikube. Docker EE will include full support for k8s as an orchestrator as well as Swarm mode.
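
For the curious, switching between the bundled cluster and any other cluster should just be a kubectl context change; roughly (the context name below is from memory and may differ by version):

    # after enabling Kubernetes in the Docker for Mac/Windows preferences
    kubectl config get-contexts
    kubectl config use-context docker-for-desktop
    kubectl get nodes    # should show a single local node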


I like minikube for some things, but I'm hoping this will be better by allowing multiple K8s nodes to be spun up, which is handy for some learning/training scenarios.


Is there an easy way to build an image locally and start it in minikube without an external registry or running a local one?


Yes. `eval $(minikube docker-env)` will set up the docker CLI to use minikube's docker daemon.

https://kubernetes.io/docs/getting-started-guides/minikube/#...
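
Roughly, the whole flow looks like this (`myapp` is just a placeholder name):

    eval $(minikube docker-env)
    docker build -t myapp:dev .    # builds straight into minikube's Docker daemon
    kubectl run myapp --image=myapp:dev --image-pull-policy=Never

The `--image-pull-policy=Never` part matters; otherwise k8s will try to pull the image from a registry instead of using the locally built one.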


We use minikube's support for Docker insecure registries to build and deploy images locally. It works perfectly for us.
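
In case it helps anyone, the rough shape of that setup (addresses are placeholders; the registry just has to be reachable from inside the minikube VM):

    minikube start --insecure-registry "<host-ip>:5000"
    docker run -d -p 5000:5000 --name registry registry:2
    docker tag myapp:dev <host-ip>:5000/myapp:dev
    docker push <host-ip>:5000/myapp:dev
    # then reference <host-ip>:5000/myapp:dev in your manifests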


I guess the easiest way would be to run a registry in minikube.

Another approach would be to run Docker inside a k8s pod (docker-in-docker); that way you can run images without having to push them to a registry, but still test them in a k8s environment (at least to some extent). A rough sketch of the in-cluster registry idea is below.
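
Something like this (on older kubectl versions `run` creates a deployment; on newer ones you may need `create deployment` instead):

    kubectl run registry --image=registry:2 --port=5000
    kubectl expose deployment registry --port=5000 --type=NodePort
    minikube service registry --url    # endpoint to push to from the host

You still need the insecure-registry configuration mentioned elsewhere in the thread for pushes from the host to work.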


which is actually just docker-machine. This could be an infinitely simpler, better and more scalable minikube ;-)



