Show HN: Magic Sandbox – learn Kubernetes on real infrastructure (magicsandbox.com)
392 points by mstipetic 9 months ago | 89 comments



I found Kubernetes to be easier to learn than the docs would suggest, and I wrote up a short article as a high-level overview. I hope it helps someone, and that it saves you a lot of time:

https://www.stavros.io/posts/kubernetes-101/

You may want to spend two minutes reading this before going into the post's website, as an overview.


This is a really cool platform, but it kind of exposes the extremely high barrier to entry for getting up and running with k8s. It's a very powerful and useful platform, but it doesn't seem to strike the right balance with its abstractions to actually make intuitive sense to developers.


This is the problem I have with almost all container orchestration systems (DC/OS, K8s, etc.): none of them can go from 1 node running all your apps to 100 in an easy, straightforward manner. I wrote about this not too long ago:

https://penguindreams.org/blog/my-love-hate-relationship-wit...

I've also worked at both Kubernetes shops and DCOS/marathon shops and I honestly don't understand why people are choosing K8s. I'm sure a lot of it is the marketing and the power of Google behind it, but our DC/OS cluster had hundreds of nodes and could scale applications really well. Plus I found the marathon json files way less confusing. (Plus if you really want, you can use k8s to schedule containers on DC/OS, alongside marathon).

That being said, that DCOS platform team was 11 people and they had tons of custom scripts, wiki pages, labels and specific networking ingress/egress points that all worked together. (You could even use a label to specify if you wanted to run containers on the local VMWare cluster or in AWS; with nodes that auto-scaled out into AWS in high load).

I've met people at smaller startups who've gone the Nomad route instead, which seems a lot more sane in many ways, but still requires pretty careful planning and setup for large deploys.


I have used Mesos/Marathon from nearly the beginning, and K8s. We currently use K8s. Your third paragraph, about the 11-person team with custom scripts, is why K8s has won out. It wasn't really the marketing by Google; it was that K8s provided by default what pretty much everyone needs (discovery and storage management). Mesos took the approach that big shops like, where everything can be extended and calls out to APIs, but if you are not at very huge scale you really do not want to be writing scripts or doing any of that. You want an opinionated full stack.

Scaling from one node to 100s is not trivial in almost any stack. I just think that the requirement that something like that be trivial does not fit the use cases of almost everyone. There's always the "Can this be web scale?" question. We are able to add nodes on the fly, but we would certainly face other non-k8s scaling issues with databases that are more complex.

And as for moving k8s to 100 nodes at the touch of a button, that is coming/is here. That is the power of a full stack that many people are on. It will only get easier. Writing ad-hoc scripts and combining solutions will not.


So I'm heading a shop with some serious legacy deployment woes. I've got plans for everything but scaling. Now, we're likely to hire out Ops eventually, but for now I'd like to learn devops (backend eng currently) and cut my teeth by getting the ball rolling on a slightly more robust setup.

With that said, what do you recommend? We're a small shop, and our auto-scaling needs are very, very minimal. We're non-public-facing, so our user base is pretty predictable. Yet I'd like to migrate us to a more modern and mature system. Naturally I was looking towards K8s because of the overwhelming support I see for it on HN/etc., but your comment makes me question that.

Honestly, it's the vast number of options that really screws with my head. There are just so many tools, and where they fall short is not immediately evident from the outside. It's difficult to pick a direction. At the moment I'm reviewing Nomad, as it sounds like it could work alongside our existing by-hand infrastructure.


I'd suggest going in baby steps. Start with the absolute basics - standalone docker containers + docker-compose. Work in those containers, deploy those same containers. Build your deployment pipeline. That'll get you familiar with the workings of containers in the docker context, and you'll be forced to iron out issues you may find with this way of working.

Then slowly start to spread further - you could start with docker swarm, which is a smaller next-step for devs. Again it's to help you figure out what kind of deployment pipelines you would create and what constraints you need to introduce. The reason I call it a smaller next-step is because the docker compose file used for development is the same one you can use for deployment to standalone or swarm.
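For example, a minimal compose file along these lines (the service name and image are placeholders, not anything specific) can drive both `docker-compose up` on your laptop and `docker stack deploy` against a swarm later on:

    version: "3.7"
    services:
      web:
        image: myorg/myapp:latest   # placeholder image
        ports:
          - "8080:80"
        deploy:                     # honoured by swarm, ignored by plain docker-compose
          replicas: 2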

By the time you start hitting the limitations of swarm, you should be ready to move to k8s and look at the various options available to you.


I like this. This is good advice.

Containers are still fairly portable and I've worked at a shop where we moved from CoreOS+Consul to Marathon/DCOS and there were almost no app changes.

That is one of the big benefits of Docker. We had some stuff just running on nodes with Docker/docker-compose and even that stuff slowly got migrated over to the big platform.


Agreed, this method sounds nice. The only thing I'd like to find on top of it is some software to manage apps across both container resources and physical resources. I.e., I've got some non-cloud old Windows boxes that I want to deploy to, so ideally I need a mechanism that abstracts those away as well. I'm currently looking at Nomad, though I'm very early in my research.


Currently on this path almost exactly: problem description and steps, all the way up to the limitations of swarm. Our swarm currently handles 1M requests per day and we haven't really hit any limitations yet. Good advice though, really.


You almost certainly don't need containers. Just use a PAAS (platform as a service). Heroku is the "go to" default, and will be worth much more than they charge. If you're worried about scaling, I've run off-the-shelf PAAS providers past 5000+ TPS without any issues.

If for some reason you can't use Heroku, you could try a hosted Cloudfoundry (https://run.pivotal.io). If you want something that hybridises PAAS-like behaviour with the big IAAS services, Amazon's Elastic Beanstalk isn't entirely awful. All these setups largely honour the PAAS haiku of "Here is my source code / Run it on the cloud for me / I do not care how" (https://twitter.com/sramji/status/599255828772716544?lang=en).

Don't waste your time on getting container orchestration going, or even containerising your app yet. For internal, low scaling, app-focused scenarios, you'd be mad to go down this road _unless_ your code has been broken into many discrete deployable artefacts _and_ you need to run them all on the same machine (either locally or in production).

You have an app. You want to run it. PAASes do _exactly_ this.

Edit: I see you're worried about costs. PAAS is pricier than the equivalent AWS instance when you consider raw compute, sure. However, once you factor in your time to build the same level of redundancy, deployability and monitoring, I'd be shocked if Heroku wasn't an order of magnitude cheaper. Note that PWS Cloudfoundry will host any 12-factor app for $21.60/GB RAM/month with no bandwidth limit you'll ever hit.


I should have made it more clear, but one need of mine (as mentioned in another comment) is a deployment system that supports the ability to manage very old, archaic, wonky systems. WinXP, Win95, etc. Do you recommend any tooling to support this?

PAASs aren't going to help me much in managing our resources like physical legacy machines. Thoughts?


It depends on what you want to do with these machines, but for a multi-OS setup you need VMs. I guess you could check out Vagrant, maybe with some AutoIt to help you automate tasks inside Windows. It won't be pretty, but it will work.


K8S won't support them either. K8S doesn't support Windows at all at the moment. And even when it does, it likely won't support WinXP.

To add to that, the container story for Windows is pretty dismal right now, even on Azure. I was at Microsoft Build this year, and most of the PMs gave me the impression that Windows containers were an afterthought on Azure due to lack of use -- Linux support was the priority. I also did some PoCs with Windows containers, both on-prem and on Azure, and they were unwieldy and buggy. The microsoft/windowsservercore Docker image is also very heavy at 10+ GB. The microsoft/nanoserver image is smaller, but has certain limitations.

tldr; if you want to use containers, go with Linux.

If you really need XP support, VMs are your only option. Tools like Hashicorp Packer [1] may be able to help with managing machine images.

[1] https://www.packer.io/intro/index.html


I think kubevirt is being rolled in soon. It adds a VM type to the set of things k8s can manage. I'm not sure what's required inside the VM to play nice, and I haven't used kubevirt at all, but it might be worth checking out.


Ah, a bit out of my experience alas. Sorry! However, I'm still not sure how container orchestration would help with this.

I can still endorse PAAS for any "cloud" applications you may want to deploy.


Yeah, containers definitely won't. Though something like Nomad _(as I'm researching now)_ that tries to manage both physical machines and container fleets might fit nicely.


To be honest it sounds like you probably shouldn't. There is a risk to researching "the best" technology, it can be a full time job and you still wouldn't get any closer to your goal. You are likely to have a lot of low hanging fruit in a legacy systems shop. Start with those and move yourself forward from there.

Without any specific knowledge of your situation it's hard to give anything but generic advice. But first you need eyes and ears. Under no circumstances should customers know of problems before you do. Are you monitoring the right things?

If there is a product, take a long hard look at the build process. When those mostly work without that special someone that always does them because no one else can, start mapping out all the infrastructure, document it and place it under version control. Implement a configuration management system. No exceptions.

With repeatable builds and repeatable infrastructure you are in a much better place to see what your real requirements are.


What you are going through is why “Just use Heroku” is still great advice from wise people.

There are so many options out there that until you actually hit the point where Heroku is too expensive or no longer makes sense...Heroku is your best option (if AWS is an option).


Don't forget Dokku - https://github.com/dokku/dokku

Dokku is an extremely flexible Heroku alternative, and a good introduction to dev-ops. You can start out treating it like a "dumb" Heroku host, but install plugins and customise to your heart's content. You can easily migrate from apps/buildpacks to Dockerfiles one app at a time, etc.

Whilst it's great for getting your apps up and running, and for learning basic dev-ops at your own pace, it's not really designed for scale/reliability. So if your small business/app/service starts to take off, you'll quite likely need to migrate to K8s, Docker Swarm or Amazon ECS; if not for scale, then for redundancy/reliability.
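For reference, the basic Dokku flow is just a git push (the app name and hostname below are placeholders):

    # on the Dokku host
    dokku apps:create myapp

    # from your workstation
    git remote add dokku dokku@your-server:myapp
    git push dokku master   # Dokku builds via buildpack (or a Dockerfile) and deploys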


I rented a 10 EUR VPS from Hetzner and put Dokku on it. It currently runs ~10 side-projects and I couldn't be happier.


This is something I've been hearing. I have a small dev team that deploys many small (but important) apps internally.

We feel we need to hire more ops folks to help us with maintenance and troubleshooting, but people keep telling me that instead of increasing headcount, to just outsource my ops to PaaS like Heroku. After reading the comments here, I may look into this more.


As somebody who was used to running my own servers when I was first forced to use Heroku on a project...I'll go ahead and tell you there is an adjustment period.

But after using it and then going back to the alternatives, there's a lot more that I miss about it than the other way around. You take for granted how much stuff you no longer have to deal with, until you have to again. There will be a moment when you'll be irritated about some very low-level knob you want to turn that you can't get access to, but when that happens, don't take for granted all the other ones you're not dealing with.

And it simplifies life for developers tremendously.


I'm hoping to keep it cheap though, and cloud deployment services scare me a bit on price. We're a ~15-year-old company (which I joined recently) in a domain where margins are fairly thin. I'd rather spend the sweat learning this (or hiring for it) than paying Heroku, fwiw.

Plus lock-in scares me.. a lot.


Oh, Heroku doesn’t lock you in. It’s just running your code on containers. There’s even an open source project called Dokku that uses their open source buildpacks to run the same thing in Docker containers.

The minute you price out the time and training investment to factor into your deployment solution, Heroku starts winning in a landslide.

Existing scale certainly changes things, but the fact that it forces you to go ahead and build for 12 factor makes moving elsewhere pretty simple.


Good to know, thank you!

On that note, we have some oddball legacy machines that are dictated/required by our customers. Things like WinXP. So I'm surveying Nomad now as a potential way to support non-traditional OSes (though I wouldn't be surprised if Nomad doesn't support WinXP). Do you recommend anything for odd deployment needs like this?

I imagine we'll write custom plugins to handle the odd ones, since who the hell releases builds for WinXP or Win95 these days lol. Nevertheless, I figured I'd ask. Especially since broader solutions like Heroku are not going to help our WindowsX deployment woes. (note, they're physical machines currently, though virtualization might be possible)


For Windows XP, I have no idea. Windows deployments are something I’ve never had to worry about (both by chance and on purpose).


Heroku locks you in in the sense that Heroku has this huge ecosystem of "add-ons" that often can't be replicated 1:1 outside of the Heroku environment, or if they can, require 100x the configuration management that Heroku does.

Much of the task of moving a project from Heroku to e.g. AWS is taking all those "add-ons" and going through the whole process of converting the Heroku-bound account (where everything is managed through your Heroku dashboard) to a separate, full-fledged account with the same service provider. If that add-on is a data storage provider, that also often entails a data migration.


I suppose that is true if using the add on store. I always went direct to the services anyway so it wasn’t ever really an issue.


> I've met people at smaller startups who've gone the Nomad route instead, which seems a lot more sane in many ways, but still requires pretty careful planning and setup for large deploys.

As an admitted Hashicorp fanboy who has used Consul, Nomad, and Vault why do you say that?

The beauty of Nomad for me is that it can orchestrate anything - shell scripts, executables, Docker containers, etc.

Once you have a Consul cluster set up as a server, it’s dead simple. Both the Consul apps and the Nomad apps are small self contained executables that can run in server mode or client mode.

You set your Consul server cluster up first (crazy simple) and then your Nomad cluster automatically configures and registers itself using Consul.

Adding more Nomad nodes is a simple matter of copying the Consul app and the Nomad app and running them both as clients.
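To give a flavour, a Nomad job is a small HCL file; this is a rough sketch (job name, image and resource numbers are made up) that `nomad job run web.nomad` would place across whatever clients have registered:

    job "web" {
      datacenters = ["dc1"]

      group "app" {
        count = 2

        task "server" {
          driver = "docker"

          config {
            image = "myorg/myapp:latest"   # placeholder image
          }

          resources {
            cpu    = 200  # MHz
            memory = 256  # MB
          }
        }
      }
    }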

These days we do everything on AWS. It’s just as easy for us to spin up a bunch of small EC2 instances using autoscaling and we haven’t needed to use Docker for anything except for custom build environments for CodeBuild so I don’t have anything to compare it to.


GitLab's "Auto DevOps" with GCP Kubernetes Engine is pretty good at the 1 to 100 story. I clicked several buttons and got a google-managed k8s cluster that will pretend to be Heroku for me.

I've since outgrown the basic setup, but their docs showed me how to copy their "magic" configuration into my repo, and it was easy enough to edit to get what I wanted.

I kinda figured I'd be bypassing Kubernetes for this job, because I didn't want to spend a week setting it up, but they made it into an afternoon.


Thanks for the feedback, it's great to hear that our product is that easy to use.


Unless you are using DC/OS enterprise, DC/OS is incredibly insecure by default. Any container running in the cluster has access to the full marathon API (which has NO auth). So any compromised container will be able to: stop and delete everything, see all environment variables set for other containers (often contain secrets for databases or other external apis) and even start arbitrary new containers on your cluster.

So think about it, any compromise in anything you run on the cluster has the potential to access all your data.

This alone should be reason enough to never use the free DC/OS, and the enterprise offering has no public pricing information.


> None of them can go from 1 node running all your apps to 100, in an easy straight forward manner.

Redhat's Openshift platform can do that with one click.


Once you get it to install, at least.

Seriously, installing OpenShift has been the most frustrating exercise of my life.

I ended up finding Rancher, and even though their 2.0 version is pretty much beta at the moment, the installation is so much more painless than OpenShift.


The barrier is only as high as the functionality you want to take advantage of.

Running a single container as an app is 1 line with kubectl. Scaling it up in replicas is another line. Opening it up to a load balancer is another line. In 3 lines you have a fully distributed load balanced public app with monitoring and logging built in and easy deployments to the next version.
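Roughly this, with a placeholder image (on older kubectl versions `kubectl run` covered the first step):

    kubectl create deployment myapp --image=myorg/myapp:latest     # run the container as a Deployment
    kubectl scale deployment myapp --replicas=3                    # scale it up in replicas
    kubectl expose deployment myapp --type=LoadBalancer --port=80  # put a load balancer in front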

You can then graduate slowly to using YAML definitions for the parts you care about, and then put them in source control for repeatable deployments. It's a very nice system and arguably even better for simple isolated deployments than more complex apps.

Kubernetes really is no more complicated than any other major thing in the industry. Try explaining a new programming language or framework to someone... is that easy?


You are assuming k8s is already setup and running, right?

Say I want to deploy my application via k8s to bare-metal, Google, and Amazon (so I am not dependent on one provider)...what does that look like?

It seems like we are moving away from standards towards proprietary abstractions with regard to cloud environments.


> You are assuming k8s is already setup and running, right?

I've only used Google's managed k8s solution, but setting up the actual cluster is another 30 seconds of clicking 2 buttons on a website.

> Say I want to deploy my application via k8s to bare-metal, Google, and Amazon (so I am not dependent on one provider)...what does that look like?

As with any other multi-cloud solution, you need to decide how you're going to handle your dbs and other persistent data stores, but it's as simple as spinning up a cluster (rather than a vm) on each service, and then deploying to each.

With a little bit of upfront work, you get a ton of extra features. I've used Elastic Beanstalk on AWS, and running an app on Kubernetes is easier, even including the cluster setup time. As a bonus, your deployment configurations aren't tied to a single vendor (you can pick up your k8s configs and move to another provider if needed).


I have a k8s cluster running on "bare metal" of 3 old consumer desktops and 6 old laptops, mostly heterogeneous hardware. It's as easy as setting each of them up as a webserver or database server, but I only have to set up the software once per machine and forget about it.

Much better than trying to host 12 websites on three machines and dealing with each machine by hand. And also load balance by hand, network by hand, configure the database servers by hand, different logging locations, etc. And that's all assuming I'm using docker. Let alone if I don't and then also have to deal with multiple versions of software between sites, differing technology stacks at the webserver, etc.

It's the superior solution for personal clouds and hobbyist server management too.


If you have to install it from scratch then yes, it's more work. You can install from the source binaries, or use kops or Rancher or a dozen other installers to set it up in a few clicks or CLI commands. If you're on the clouds, then stick with the managed offerings since they actually give you free master nodes.

Federation/multi-cluster is being reworked so multiple K8S clusters around the world on different providers can act as one, or you can use Rancher or some bash scripts to accomplish most of it today.

Also K8S is actually moving towards standards as every release includes a new interface for things that were previously proprietary, and now just use plugins for whatever provider you want.


That's because, for the vast majority of developers, there's no point in using k8s.

It's really only necessary when you have a big app (or a few instances of a few big apps) that will have lots of containers and lots of management required.

Please. Don't use k8s as a golden hammer. Docker alone (or with docker-compose) will be just fine for most people, especially when paired with a basic CI pipeline like Drone or Jenkins.

(If you're reading this, thinking "but it makes installing and managing services I depend on easier!", it doesn't relieve you of that responsibility entirely, not in production. You still have to make sure those services deploy properly, and you still have to manage them to some degree... for some time, it was considered inadvisable to run your DB in-cluster because of how many ways it could fail.)


I am a solo developer and I have been running my side projects on Kubernetes (GKE) for several years now.

It helped me a lot to focus on the things I like: development instead of ops. I never had to SSH into any machine or do things manually. I just tag a GitHub release and in a few minutes my application is updated.

With Docker alone I believe it would be a lot more complicated: no zero-downtime upgrades, no load balancing, OS vulnerabilities. Then, whenever I needed to add a new service to my setup, I would have to yet again go into the host, or multiple hosts :)

So no, I don't believe people who say that for small projects you shouldn't use K8s. It saves a lot of time and helps you not to burn out on your side projects as it takes the pain of running a service away.

P.S. I am an author of Keel https://github.com/keel-hq/keel, it's a relatively successful CD solution, I hope it saved a lot of hours for other companies too :)


Question: Why not Heroku or any other hosted service like that? They seem to give you the right compromise for solo developers as well as mid size applications.

If it was a learning experience then I totally get your choice; otherwise I'd be interested in the motivations! Thanks!


Personally, Heroku has too much magic around the build packs and what not.

Kubernetes for stateless apps is easy:

1. Create a Docker image with your app.
2. Write about 20 lines of YAML for a Deployment.
3. Write maybe 10 lines of YAML for a Service.

Done. You can automate the entire roll out via Travis and using some `sed` replacements on the Deployment.
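For a rough idea of what those two files look like (names, image and ports are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              image: myorg/myapp:1.0.0   # the tag that sed swaps out on each roll out
              ports:
                - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      type: LoadBalancer
      selector:
        app: myapp
      ports:
        - port: 80
          targetPort: 8080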

I have got blue-green roll outs working with two deployments and two service descriptors and maybe 30 lines of `bash`.
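The flip itself can be as small as repointing the public Service's selector at the new colour (the service name and labels here are placeholders):

    NEW_COLOR=green
    kubectl patch service myapp-public \
      -p "{\"spec\":{\"selector\":{\"app\":\"myapp\",\"color\":\"${NEW_COLOR}\"}}}"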


I wanted my own ports with non-HTTP services, as I had a gRPC endpoint :) Not sure how it is now, but Heroku only offered HTTP.


For developers, or for operators?

Orchestration makes it trivial to recover from the downtime of individual nodes, whether unintentional (crashes) or intentional (managing updates of the node's underlying OS). Making load balancing, (basic) service discovery and storage part of the deployment artifact drastically reduces deployment risk.

We can have a discussion about which orchestration platform is best, and we can have a discussion on whether you need orchestration if you're a two-person startup using mostly a lot of SaaS and you have Really Only One Thing running in your production environment. But doubting the value of orchestration? Really?


Nothing is "trivial" about orchestration. It introduces entire new layers of complexity, new failure modes, as well as second-order effects that are poorly understood. People are not afraid enough of complexity.

I also think that solutions like Kubernetes should be used only when you really have to. And even then very carefully and never without a tool like Chaos Monkey.


Orchestration does introduce complexity, but it does so with a large base of users, developers and a general ecosystem. As developers and systems implementors we are already sitting on top of a huge stack of what could be an impossibly complex system (CPUs, operating systems, networks), but people have worked on their layer and made it robust, and everyone benefits.

With that said we have been running K8s for 2 years now in production and have not once had an issue that we could attribute to it. So yes we do have another abstraction (complexity) layer, but to us the complexity does not equate to complicated problems. The benefits gained from the seamless redundancy, discovery, scalability, encapsulation and resource management have not cost us anything. If we had a shaky hard to understand stack that failed often I'd 100% agree.


One other complexity that k8s (or similar) introduces is more security concerns. If you've been running it for 2 years now, I'm sure you'll have seen the changes required to secure Kubernetes since the "old days".

Managing the multiple TLS CAs, RBAC, PSP, Network Policy etc is non-trivial.


One thing I never see mentioned in these k8s threads is Openshift. Openshift solves most of the crappy things about k8s, including being much more secure and getting security updates from Red Hat.

(openshift runs on top of k8's for those who didnt know)


Yep I tend to recommend Openshift if people want a good out of the box security experience.

The only challenge with it is that Redhat have their own naming around various pieces, which I think some people find jarring to move to.


Edited my comment to make it clear that I don't doubt the value of orchestration, but rather that orchestration doesn't always make everything just work especially in prod


Kubernetes still provides easy deployments, monitoring, logging, load-balancing, scaling/high-availability that is easy to take advantage of and can work with even a single app.


But, why do you need k8s to do those things, particularly for a single app?

It seems like a very complex architecture to ensure reliability.


You don't, but K8S has it built-in which removes overhead.

If you're installing K8S yourself then it's probably not worth it, but if you're using managed services with free master nodes, why not take advantage of it? You can use it with a single worker node without problems.


My company is currently struggling a bit with building out a k8s platform. The benefit we seek is to have elastic compute capacity a la any cloud provider, but have it be on-prem, strictly governed, and also fully self-service. I might have an app with only 100-1000 users total, but I'd still like to get georedundancy and autoscaling to minimize support costs. That's what container orchestration is supposed to provide, but it's still a winding path.


I think the only high barrier to entry is actually getting a cluster going outside of Minikube. The abstractions for pods, services, replica sets etc. are pretty easy to understand, and the alternative for scaling a service / making it discoverable / keeping it highly available has a higher barrier to entry IMO. We just don't see it that way because we have lived it. If I had to walk someone with no dev-ops experience through doing this in K8s vs. setting up load balancers/DNS/health checking by hand, the latter would be much more difficult.

The barrier to entry that is currently high (setting up a cluster) is getting much easier with projects and services on many cloud platforms doing that for you.


Pods, services and replica sets are not complicated in and of themselves. The problem is that they don't reflect the mental model of a user at all. I just care about instances and load balancers.


OTOH I don’t care about instances and load balancers at all(ideally), I just care about services.


I have a few raspberry pis at home that I run a few Docker containers on, mostly personal stuff/tools/scripts/servers that I update infrequently but use a lot. Any software I write goes into a container, I like that abstraction.

I really want to reduce the friction between developing/maintaining something and deploying/running it.

Right now I just have a script for each service that stops the docker container, pulls the latest version from a private repo and starts it again.
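Roughly this, per service (the service name and registry are placeholders):

    #!/usr/bin/env bash
    set -euo pipefail

    SERVICE=myservice
    IMAGE=registry.example.com/myservice:latest

    docker pull "$IMAGE"              # grab the latest build from the private registry
    docker stop "$SERVICE" || true    # stop the running container, if any
    docker rm "$SERVICE" || true
    docker run -d --name "$SERVICE" --restart unless-stopped "$IMAGE"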

But that "solution" is getting cumbersome to manage as I add more containers to run and I have to remember what Raspberry Pi each container is running on if I'm running a service that I don't need to have on 2 nodes.

I tried to learn Kubernetes a few weeks ago by installing it on one of my Pis, but I quickly realised the learning curve was way beyond what I wanted to be spending my time with (right now - I realise this is a feeble excuse) - especially for home stuff sitting behind a private network.

I've been looking at Docker Swarm recently instead which looks a lot simpler. Maybe it will be a good stepping stone to grokking Kubernetes.


A side note, since you mention RPi-s... Do you have any trouble with SD cards (or USB sticks) breaking down on power loss? It seems like a recurring theme and I'm curious how you deal with it in your setup? I am thinking of getting a NAS for my projects instead.


K8S has many different abstractions and it's hard to get a feel for all of them from just reading the docs. I found the PluralSight videos (subscription required) [1] helpful for getting a comprehensive introduction to K8S, along with coverage of the gotchas.

The videos also helped me discern the kinds of problems K8S solves and the kinds it doesn't. If you need to deal with scale issues, the complexity of bespoke K8S may be unavoidable complexity; otherwise, it is overkill. (Exception: managed K8S services like GKE and AKS provide a middle ground.)

[1] https://app.pluralsight.com/library/courses/getting-started-...


I mean... keep in mind that this is a vendor who will make money by convincing you that it's complicated and requires a lot of extra training. So, you gotta take it with a grain of salt.

Personally, I've found it to be pretty easy to grasp so far and I'm not the smartest person in the world.


I must say, I'm presently migrating a client from Dokku to Kubernetes and finding K8s tooling to be severely lacking.

I've plenty of experience with Docker and Docker Compose; both for development, and also running on cloud infrastructure (although admittedly not Docker Swarm). I also have some limited experience with Amazon CloudFormation and ECS.

I'm no stranger to performing Docker migrations, having previously been responsible for heading up migrations to Docker cloud deployments from "old school" customised on-site rack deployments, VPS LAMPP deployments, and "install everything on this one machine, what's reproducibility?" setups.

However, I'm finding that migrating to Kubernetes is by far the worst experience I've had when compared to the alternatives that I've tried.

I know cluster deployment/management tends to be a common complaint, however I actually found that to be not too difficult. Thanks in large part to kops.

Instead my biggest gripes are:

1. Supposed Immutability

Kubernetes treats certain "Dockerisms" (e.g. image tags) as immutable, when they're clearly not. This leads to many subtle "gotchas" and associated work-arounds that are non-intuitive.
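A typical example: re-pushing an image under the same tag changes nothing Kubernetes can see, so nothing rolls out. The usual work-arounds are pulling on every start and forcing a spec change, or moving to unique tags/digests. An illustrative fragment (names are placeholders, not from a real manifest):

    spec:
      template:
        metadata:
          annotations:
            deploy-timestamp: "placeholder"   # bumped by the deploy script to force a new roll out
        spec:
          containers:
            - name: myapp
              image: myorg/myapp:latest       # same tag, new content: k8s sees no change
              imagePullPolicy: Always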

2. Templating/substitution (or lack there-of)

The argument from the K8s core team seems to be that templating results in non-reproducible deployments. However, I find the opposite to be true. I don't want to maintain slightly different resources for all my different environments and have to manually keep them in sync; that's just asking for trouble. I want a single source of truth.

I know Helm (Charts) can be used for templating. However, I find Helm to be over-kill, particularly the Tiller runtime requirement. Helm v3 sounds very promising though, and I have every intention of using Helm more extensively once released.

3. Tooling immaturity / Lack of standardisation

I seem to be writing a lot of (simple) bash scripts to solve trivial problems; no doubt others are doing similar things and I'm reinventing the wheel. With CloudFormation/ECS or Docker Compose deployments I don't seem to require any scripts at all. Problems can be solved trivially in a declarative way, or by running a single command; of course this approach is supplemented by project-specific documentation.

I suppose this largely comes down to K8s flexibility; there's so many different ways to achieve something. I'm not necessarily suggesting stripping away that flexibility, as it's part of what makes K8s great. However, there's a lot that could be simplified for 90% of use-cases, particularly if there were more standardisation and widely agreed upon "best practices". I hope this will come in time.

4. Debugging

This isn't at all a fault of K8s itself, just platform immaturity. However, despite the fact that K8s seems to be emerging as the winner of the container orchestration war, most alternatives seem to offer significantly better tooling. E.g. Docker Compose has proper integration into JetBrains IDEs, including "remote" debugging of apps running in containers, whereas the JetBrains K8s plugin is currently limited to interpretation of YAML/JSON resource semantics.

Don't get me wrong I'm using K8s because I think it's hugely powerful/reliable, and I'm betting on it long-term. Presently, it's truly fantastic from an ops perspective, just significantly less so from a development perspective.


> This leads to many subtle "gotchas" and associated work-arounds that are non-intuitive.

Exactly! Just because your pod is accepted and scheduled, you can be left in the lurch if an issue occurs at the container runtime layer that k8s didn't handle. Image pull issues are a big one: the pod just sits there and keeps trying to pull.


Hey guys, we're really trying to keep up, but we've gone to 300+ servers in a few minutes. I'm happy to answer any questions here you may have


Hug of death! Congrats :)


What does that mean? Are you going to support more servers, or can you not provide more servers?! Thank you for this, it looks interesting. I would love to get hands-on, but it says "We are booting up your cluster!"


Hey, we hit our hard limit of 500 servers. I'm trying to shut down old ones; really sorry about that, but traffic is just crazy.


Sounds like the good kind of problem! Let me chime in and say thank you for implementing and sharing this.


Thank you very much! Rationally I know this is a good thing, but doesn't feel that way right now


How is this different to www.katacoda.com?


I would also like to know. I initially thought it was Katacoda.


We really love Katacoda! We're a bit more oriented towards trying to visually explain what's happening (not there yet), and we're trying to give you a very high level of interactivity.


Super interested to see how this works once the hug has subsided. This is potentially fortuitous timing; my employer wants me to cover most of this material by end of quarter.


Thanks a lot, really didn't expect this kind of traffic. If you ever want to get in touch you can reach me at mislav@<website-domain> or through our slack that's on our landing page and inside console


Will it be possible to contribute lessons to your platform? I find conferences a terrible place to share knowledge on K8s, this makes it a lot easier and hands-on.


Yes! I'm working on an editor right now, feel free to contact me at mislav at magicsandbox dot com if you're interested


Super cool! Is only 1 level available right now?

EDIT: Further, the front page marketing and actual app don't line up 1:1 very well when it comes to listing lessons.


Yeah, sorry, we're still building it up, we didn't plan for this big of a reception, just wanted to test if there's any interest out there


Good problem to have. :)


What do the Facebook Google Microsoft logos at the bottom of this page mean? Are these companies Magic Sandbox customers?


Is this for someone who has no experience with any of this or is there prior knowledge required?


Hey, we currently host only an intro course but we're building out advanced courses now. So no kubernetes prior knowledge is necessary, but we assume you're a developer with some experience


> 1 year early bird access

> Get 1 year unlimited access to MSB Premium including all lessons and weekly releases.

> This is a non-recurring, one time payment.

You probably need to fix this, or at least clarify? Is it a yearly sum or one-time payment?


Hm, we're doing a presale with unlimited access for 1 year in one payment


That sounds good, but I would suggest clarifying this. Good luck btw!


Can't find it on your site... where are you spinning up all these servers for the sandbox? GCP? Azure? AWS? Somewhere else?


Digital Ocean currently. Just running minikube and kubectl proxy for the feed


This is awesome, I've been looking for something like this.



