When people first started using Docker containers, we were promised things would run identically in dev and production - no more "but it worked on my laptop" issues. Then the rise of orchestrators meant a significant difference re-emerged between running an app locally (in Compose) and in production (on Kubernetes). Docker for Mac/Windows will now bridge that gap, giving me a k8s node to run against in dev.
Whilst Kubernetes has provided a great production orchestration solution, it never provided a great solution for development, meaning most users kept developing with Docker and Compose. It's great to see these worlds now coming together and hopefully leading to a first-class solution all the way from dev to prod.
I am excited about this move from Docker, but I don't think it will solve all the problems. Once you have a bigger team, it is worthwhile to run a second k8s cluster beside prod where people can just test things. Otherwise, it is actually not that hard to run a local k8s cluster with Vagrant - not sure how Docker wants to top that; I don't think there is a need to.
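For what it's worth, a minimal sketch of the Vagrant route mentioned above might look like this. The box name, memory size, and the `install-kubeadm.sh` provisioning script are all illustrative assumptions, not anything from this thread:

```shell
# Hypothetical sketch: spin up a single-node VM to host a local k8s cluster.
# Box name and provisioning script are assumptions for illustration only.
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048            # kubeadm wants a reasonable amount of RAM
  end
  # Provision the node with kubeadm inside the VM (script not shown here)
  config.vm.provision "shell", path: "install-kubeadm.sh"
end
EOF
vagrant up
```

Inside the VM you would then run `kubeadm init` (or join additional VMs for a multi-node setup) - more moving parts than a bundled desktop solution, but nothing exotic.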
This is not like ops vs dev. When you use a library or framework (say, Spring), you don't test whether HTTP MIME types are working correctly in Spring. You assume the library already has all that tested and covered, and as a consumer, you write tests for what you code. The library's code (and tests) are abstracted from you. This is similar, except for operational stuff. In fact, there is no major difference between them. It's just layering and separation of concerns.
There is no such thing as 100% prod except prod. Even for rocket launches. 90+% is good enough for the majority of cases, and is already on the higher side.
It's been working quite well - no need for a multi-node setup so far that I'm aware of.
/me is one of the main contributors of Minishift
There are a few minor UX flaws that make it frustrating to use, e.g. having to set the Docker host, poor shared-filesystem performance, and broken networking in enterprise desktop environments (just to name the top issues).
Also, a lot of folks end up running Docker for Mac and minikube VMs, why should they have to run two VMs?
Additionally, minikube is completely different from production-grade deployments: it ships as a single binary, which means a rewritten main function for etcd and all control plane components; basic performance issues in the control plane are hard to debug, because there is one large process and you don't know what is wrong; and there is no way to use your favourite network add-on.
Additionally, minikube is based on the legacy Docker libmachine, which is not really maintained anymore.
Shared folders, especially using 9p and/or cross-platform, have been an issue - I personally also experience this in the fork Minishift - and this is likely the performance issue you meant.
But back to an earlier question I posted, have you filed the issues you had in the issue tracker? https://github.com/kubernetes/minikube/issues
Yes, the docker/machine code is an issue. For this, libmachine has mostly been moved in-repo, and we are working on abstracting and eventually replacing it.
FWIW, we've found minikube a bit wonky. It's resource intensive, so if you want to run more than a couple of services, your laptop starts to melt. One of our open source projects is Telepresence, which relies heavily on Kubernetes networking, and we definitely see more weird networking issues with Telepresence on minikube than with regular K8s clusters.
Note: resource intensive might be because of the hypervisor but generally shouldn't be that bad.
Another approach would be to run Docker inside a k8s pod (docker-in-docker); that way you can run images without having to push them to a registry, but still test them in a k8s environment (at least to some extent).
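A minimal sketch of that docker-in-docker idea, assuming a cluster that allows privileged pods (the pod name is made up; `docker:dind` is the upstream Docker image for this):

```shell
# Hypothetical sketch: a privileged pod running the docker:dind image.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dind
spec:
  containers:
  - name: dind
    image: docker:dind
    securityContext:
      privileged: true   # DinD needs a privileged container to run its own daemon
EOF

# Build/run images inside the pod without ever pushing to a registry:
kubectl exec -it dind -- docker run --rm alpine echo hello
```

Note the privileged flag - that's the main caveat with this approach, and why it's only a partial stand-in for testing against a real cluster.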