I'll stand by my assertion that for 99% of users (maybe even 99.99%), Kubernetes offers entirely the wrong abstraction. They don't want to run a container, they want to run an application (Node, Go, Ruby, Python, Java, whatever). The prevailing mythology is you should "containerize" everything and give it to a container orchestrator to run, but why? They had one problem, "Run an app". Now they have two, "Run a container that runs an app" and "maintain a container". Just give the app to a PAAS, and go home early.

Most startups - most large companies - would be far better served with a real PAAS, rather than container orchestration. My experience with container orchestrators is that ops teams spend inordinate amounts of time trying to bend them into a PAAS, rather than just starting with one. This is why I don't understand why this article lumps, e.g., Cloud Foundry in with K8S - they solve entirely different problems. My advice to almost every startup I speak to is "Just use Heroku; solve your business problems first".

The article also mentions it enables a "new set of distributed primitives and runtime for creating distributed systems that spread across multiple processes and nodes". I'll throw out my other assertion, which I always thought was axiomatic - you want your system to be the least distributed you can make it at all times. Distributed systems are harder to reason about, harder to write, and harder to maintain. They fail in strange ways, and are so hard to get right that I'd bet I could find a hidden problem in yours within an hour of starting a code review. Most teams running a non-trivial distributed system are coasting on luck rather than skill. This is not a reflection on them - just an inherent problem with building distributed logic.

Computers are fast, and you are not Google. I've helped run multiple thousand TPS using Cloudfoundry, driving one of Europe's biggest retailers using just a few services. I'm now helping a startup unpick its 18 "service" containerised system back to something that can actually be maintained.

TLDR; containers as production app deployment artefacts have, in the medium and long term, caused more problems than they've solved for almost every case I've seen.




Containerization helps with one thing: end-to-end dependency hell management. You get the same executable artifact in prod and on every dev machine. You get to share arcane tricks required to bootstrap library X. You get to share the complete recipe of building your OS image. Hopefully, you pin versions so your build is not subject to the whims of upstream.

Kubernetes helps with one thing: taking your container and running it on a fleet of machines.
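
To make that concrete, here's a minimal sketch of the kind of manifest involved - a hypothetical "myapp" image, with the name and port made up for illustration. You hand this to the cluster and it finds machines in the fleet to run the three replicas on:

    # Minimal sketch; "registry.example.com/myapp" is a placeholder image.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 3                  # run three copies somewhere on the fleet
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              image: registry.example.com/myapp:1.0.0   # placeholder
              ports:
                - containerPort: 8080                   # illustrative port

`kubectl apply -f` that file and the scheduler decides which nodes the pods land on - that really is the whole job Kubernetes signs up for.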

Building 18 services is an architectural choice made by the team. It has nothing to do with containerization or Kubernetes. For a single team, a monolith just works most of the time. You may consider multiple services if you have multiple [large] teams, think Search vs. Maps. Even then, consider the trade-offs carefully.


I deploy code with all of the DLLs in separate folders. The executables/services don't share any DLLs. I kept asking the "consultants" who were trying to push us to Docker: what is the business value over raw executables + Nomad?

The build server creates one zip file that is stored as an artifact that gets decompressed and released in each environment - in a separate folder.


> what is the business value over raw executables + Nomad.

It's not a given that any of the major business value generators are relevant to your shop, your domain, and your business demands. KISS is always good advice.

Low-hanging fruit: Nomad (backed by HashiCorp) is a direct competitor to Kubernetes (backed by Google). One of those solutions is available turn-key on every major cloud provider and also on the premier Enterprise VM management solution. The other is called Nomad ;)

Raw executables pack up very nicely into containers, so if you're able to exist happily with just apps, then just apps in containers won't change much (and therefore look like extra work)... But for numerous domains raw executables are just a fraction of the deployment. Be it third-party apps/drivers that need to be installed, registry fixes, or whatever else Ops demands for server maintenance, raw executables alone are a non-starter. And then things like load balancing and dynamic scaling pop up...

More importantly, for what I do, the binary validation of an immutable server in multiple zones is critical to ensuring security. Nothing can be changed, nothing shall change, and every point of customization will be scripted, or else it can't get near our data.

Cross-platform and legacy scenarios are major players. More pressing, though, are the application-level primitives that k8s provides in a cross-platform, cross-cloud manner (which can also be federated...), so that your scaling story is adequately handled and your local apps become much more robust and cloud-native.

Bottom line: it's not a given that k8s will improve your life here and now; apps + Nomad is viable. For the broader ecosystem, though, the "other stuff" in k8s, and the rigidity/stability of dependency graphs in containers, are clear value drivers and highly meaningful.


Yeah, KISS was very important when I first started working at my current company. I was hired to set up a modern development shop with three database developers who were just learning C#, no source control, no CI/CD, and basically no development "process"; they ran a lot of things manually.

I was going to be introducing a lot of changes.

Every decision I made was based on keeping things as simple as possible to keep them from getting frustrated. If that weren't the case, I would have gone straight to Docker. Knowing that I might need that flexibility later but didn't want to commit right now, I chose Nomad because I knew it could both handle phase 1 and allow us to move to Docker once appropriate.

But now that we are in AWS, there is a big push to get to the next level of cloud maturity - not just moving VMs to the cloud, but adopting a "cloud first" approach and actually taking advantage of some of the features that AWS offers.

So in that vein, there is a need for Docker to go "serverless". Lambda is not an option - we have long running processes.

Even when we do go to Docker, we will probably make a transition from Nomad straight to Amazon's Fargate.

I see a path where we move from .Net 4.6 to .Net Core and Docker with Nomad to Fargate.

The only issue with Fargate for us now is the added complexity that Fargate only supports Linux containers. I don't know how much of a lift that would be. Theoretically it shouldn't be much with a pure .Net Core solution.


You are remarkably well positioned to take advantage of any solution, from what you've described.

My group is skipping Kubernetes to go straight to Fargate and we are... not so well positioned as you happen to be.

Much to my chagrin, as a newbie to AWS who has loads of homegrown experience with Kubernetes and its predecessors (Fleet, etcd), I am wholly reliant on the AWS solutions engineers we have in-house to help me navigate this thing via CloudFormation and friends; it's too much for one person to figure out in 20 hours during a pilot/assessment study.

I am an application developer who learned Kubernetes in his free time over the past 3 years because it was free. There are thousands of us, with computers in our basements, learning these systems on our own, with no institutional support. Sure, I needed lots of help, but I didn't have to spend money on cloud instances just to learn, or be sure to remember to terminate them when the experiment was over.

By contrast, AWS has only just made Amazon Linux 2 available to run on your own machines less than two months ago. There is still no way to set up ECS or Fargate on our own metal, and probably never will be, because Amazon does not see a reason for it.

Vendor lock-in is real and it has casualties! There are real negative effects that you don't see. If you say "I would not hire someone like you because you have specific skills I won't take advantage of," you have to ask yourself whether that's because of something I've done or something Amazon is doing.


I look at these "solutions" and they don't seem to add much to how application servers work.


If you're juggling multiple incompatible versions of application servers across multiple platforms, multiple datacenters, and multiple cloud providers with multiple dev teams... you're gonna see some real value in those kinds of "solutions". It's not random that this tech is coming from cloud leaders and Enterprise shops; they don't address problems common to development, they address problems common to cloud apps and cloud-heavy Enterprise shops.

I think Assembler looks like ass and it doesn't add much to how I want to program... It's still frequently used, though, because it solves problems other than the ones I have.


We have a bunch of apps that run based on some type of external trigger - a time interval (Nomad supports cron-like scheduling across an app pool), a file coming in, an external event, etc.

We submit a job via the API and it runs the job on whatever server has available resources. We specify the minimum amount of RAM and CPU needed to run a job. If too many jobs are queued on a regular basis, we can either add more RAM or CPU to an existing instance or add another instance and install a Nomad agent.

Yes, I know k8s can do the same thing, but with Nomad we don't have to use Docker - we can, though.


My employer already had that problem solved in 2006, thanks to JEE.

An EAR packaged with everything needed by the application.

Each service, or micro-service as it is fashionable now, got their own EAR.

Deployment of UNIX based OS, JEE application server, Oracle and respective EAR packages, done.


That is a solid solution. As long as everyone in the ecosystem is on JVM, more power to you. If, for example, one needs to interface with some cutting edge DL modules written in Python, one needs something else. The transitive closure over "something else" is called "containerization".

PS. Maybe "EAR" also supports Python. But then I'd argue "EAR" is a "container".


With Python you then use wheels + virtualenv; for Ruby you can use Bundler. Each language has this issue solved.

Using containers is essentially:

- uh, I have problem with these dependencies, dealing with RPMs is such a nightmare, I need to generate OS specific packages and there might be conflicts with existing packages that are used by the system...

- oh I know, let just bundle our app with the entire OS!


You could also use nix to handle all languages' dependencies (including OS packages) and avoid the complexity of handling disparate package managers.


Languages have the issue solved until your library is just bindings to a C shared library, in which case you also need the appropriate OS package.


You build this shared library as a relocatable conda package and add an explicit dependency in your Python (conda) package / problem solved.


I'd suggest that if containers end up looking like "the entire OS" then that misses a lot of the benefit of containers.

A container image should be "the bare minimum that allows the application to run".


Yes, I do agree with that.

If they are minimalistic and hold just the app then this makes sense, and then containers are essentially a unified packaging format that is accepted on "serverless" public clouds. This provides value because you can then easily run your application on whichever provider is cheaper at a given time.

I'm thinking that in the future your IDE could just compile your project into a single file that you then upload it anywhere and just run.

But Docker was promoted as something different, with the union fs, NATing, etc. That works fine for development but it's a bit problematic operationalizing it.


This is what I noticed as well. Most of the things containers are advocated for are already solved.

The selling point of containers is to solve certain issues (it seems like package management, removing the dependency on the OS, etc. are the most popular).

To me it looks like instead of fixing the actual issues, we are throwing a blanket over all of that crap and building our beautiful solution on top of it. We have a beautiful world with unicorns on top of a dumpster fire of mixed system dependencies and application dependencies.

Also, yesterday I found something amusing: a coworker was complaining that putting a small app into the standard base container resulted in an image that was almost 1GB in size, compared to ~50MB when using a minimalistic one. When asked why not just use the minimalistic one, I learned that it was mandated to use the standard image for everything.

To me this is absurd, since by doing that aren't we essentially coming full circle?


Absolutely. I think the actual issue is the OSes' directory structure (FHS, for example), which impedes having libs/packages isolated and coexisting in different versions.

Containers add a heavy abstraction on top of that. For me the simpler/better dependency management solution is nix.


No, and it makes perfect sense, because:

1. Container size is a minor issue; Docker images are layered, so you only fetch the diff of what's on top anyway.

2. Standardization simplifies knowledge sharing, i.e. someone else can help you.


As usual, "it depends." JEE isn't magic and app servers have their own issues. I think you're better off packaging Java web apps as self-contained fat jars (see Dropwizard, Spring Boot).


For the dependency hell management part, nix is a solution that operates at a lower level of abstraction and lower cost: it doesn't emulate a whole OS (avoiding overhead) and keeps dependencies isolated at the filesystem level (under /nix).

I think that for reproducible development environments it is a much simpler solution.


I tend to agree with you and it's one of the biggest reasons that I'm a fan of Elixir.

Here's the path that leads to K8s too early.

1. We think we need microservices

2. Look how much it will cost if we run ALL OF THESE microservices on Heroku

3. We should run it ourselves, let's use K8s

One of the big "Elixir" perks is that it bypasses this conversation and lets you run a collection of small applications under a single monolith within the same runtime...efficiently. So you can build smaller services...like a monolith...with separate dependency trees...without needing to run a cluster of multiple nodes...and still just deploy to Heroku (or Gigalixir).

Removes a lot of over-architectural hand-wringing so you can focus on getting your business problem out the door but will still allow you to separate things early enough that you don't have to worry about long term code-entanglement. And when you "need" to scale, clustering is already built in without needing to create API frontends for each application.

It solves a combination of so many short term and long term issues at the same time.


100% agreed. A lot of the cloud computing products are simply re-implementations of what was created in the Erlang/BEAM platform, but in more mainstream languages. IMO it's cheaper to invest in learning Erlang or Elixir than investing in AWS/K8s/etc.

Elixir and Erlang are basically a DSL for building distributed systems. It doesn't remove all of the complications of that task, but gives you excellent, battle tested, and non-proprietary tools to solve them.


And JEE :)


> One of the big "Elixir" perks

This is also true of Erlang, for those not aware that Elixir runs on the Erlang Virtual Machine (BEAM).

You do get a lot of cool things with clustered nodes though (Node monitors are terrific) and tools like Observer and Wobserver have facilities for taking advantage of your network topology to give you more information.




Same applies to JEE application servers.

They are basically an OS, with containerized applications.

Thanks to them I stopped caring about the underlying OS.


Not going to lie, Java app servers basically had me predisposed to see the appeal of Elixir. When I was spending a lot of time with Ruby I got really into Torquebox (Ruby-ized JBoss), specifically for the clustering aspects: the ability to spread workers and the clustered cache with Infinispan.

Elixir has a lot in common, but it takes it to another level. You can call functions from those other applications on the server with nothing more than a Module.function(arguments). You can call a function on another node in the cluster by just sending the node + module, function and arguments.

Because of immutability and message passing, this just works everywhere. With Java, a similar implementation would have to guard against memory references and mutex locks that wouldn't behave the same way on different nodes.


Interesting, I didn't know that about Elixir. Do you ever have to break them up into smaller Elixir apps or can you stick with that pseudo-monolith for good?


You break them into smaller apps. It’s little more than code rearranging though.

You can still call the functions through the same Module.function() approach you’d use if they were in the same app.

The $30 PragDave Elixir for Programmers course actually drills in this approach the whole way through if you’re looking for a good resource.


Seconded. It's a very good course for someone getting started with Elixir having already had a good amount of programming experience.

I originally bought it at $60, and even at that price point I would buy it again.


Each of the three digital production agencies I've worked with has the same problem: jobs come and go all the time, often have varied tech stacks (took over a project from a different company, resurrected 5yr old rotting dinosaur, one team prefers Node, another Django, etc), each project requires a dev/staging/live environment (and sometimes more than that, e.g. separate staging for code / content changes), and so on... In one shop we went thru 500 git repos in 4 years.

One day I spun up a k8s cluster on GKE and just started putting all projects there. This cluster enabled huge cost savings (running a fleet of 3 VMs instead of ~50), allowed cheap per-feature dev/staging environments, forced developers to consider horizontal scaling BEFORE we needed to scale (read: when we missed our only shot), and overall reduced ops workload tenfold. It wasn't without a few challenges of its own, but I would never go back.


I think you've hit on the major issue with the "anti-hype" around kubernetes and related products: they're not something you need, per se, to develop an app. They are something you need to manage multiple parallel development processes.

For devs stuck in a silo it's a little like putting margarine on butter. For DevOps looking at hundreds of little silos it's the foundation of operational sanity.


To sort of echo what you're saying, most of these articles seem to suggest that containers solve a technical problem. More often than not I've seen them as a solution to an organizational problem.


Kubernetes has helped to make our app less distributed.

Parts of the system were distributed not for capacity, but for HA reasons. So where before we had two instances of beanstalkd with their own storage and clients had logic to talk to both, we now have a single instance of beanstalkd backed by distributed storage and a Kubernetes service that points to it.
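
For anyone who hasn't seen the pattern, a rough sketch of what that looks like (placeholder image name and sizes, not our real manifests): a single-replica beanstalkd Deployment writing its binlog to a persistent volume, with a Service in front so clients only ever know one stable name.

    # Rough sketch only; the image name and storage size are placeholders.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: beanstalkd-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi                  # illustrative size
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: beanstalkd
    spec:
      replicas: 1                        # one instance; HA comes from rescheduling + the volume
      selector:
        matchLabels:
          app: beanstalkd
      template:
        metadata:
          labels:
            app: beanstalkd
        spec:
          containers:
            - name: beanstalkd
              image: beanstalkd:latest   # placeholder image name
              args: ["-b", "/var/lib/beanstalkd"]   # -b persists the binlog onto the volume
              ports:
                - containerPort: 11300
              volumeMounts:
                - name: data
                  mountPath: /var/lib/beanstalkd
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: beanstalkd-data
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: beanstalkd
    spec:
      selector:
        app: beanstalkd
      ports:
        - port: 11300
          targetPort: 11300

Clients just connect to beanstalkd:11300; if the pod dies it gets rescheduled and the Service keeps pointing at wherever it lands.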

And I think we get more benefit deploying dependencies than we do our own apps. If one of them is low volume and needs mysql, just `helm install mariadb`. No complicated HA setup, no worries about backups; we already know how to back up volumes.


> I'll stand by my assertion that for 99% of users (maybe even 99.99%), Kubernetes offers entirely the wrong abstraction. They don't want to run a container, they want to run an application (Node, Go, Ruby, Python, Java, whatever)

I agree completely, and your comment gives me the perfect opportunity to praise the flexibility of HashiCorp's Consul + Nomad.

Nomad lets you run almost anything - Docker containers, executables (the raw_exec driver), jar files, etc.

https://www.nomadproject.io/docs/drivers/index.html

Dead simple to set up - one self-contained <20MB executable that can be used in either client, server, or dev mode (client + server), and configuration is basically automatic as either a single server or a cluster if you are using Consul.

The stock UI is weak, but the third-party HashiUI is great.


Don't forget that Nomad has awesome integration with Vault, possibly the best secrets handling out there.


I played with Vault and it wasn't quite as simple as Consul and Nomad. My major issue was trying to figure out how I would "unlock" the Vault automatically in case of a system restart.

I punted for now and just stored sensitive values directly in Consul encrypted.


Since you mentioned Cloudfoundry... I think it's a thousand times easier to get up and running with k8s than with Cloudfoundry on bare metal (no cloud).

It's also a thousand times easier to maintain (thanks, CoreOS). Basically, if you want a managed, simple, no-maintenance, no-cost bare-metal K8S installation, you just use Tectonic/kubeadm and you get something which is self-contained, or close to self-contained. And the only things you need to get it done are actually way easier than reading through the CF docs (I'm pretty sure bare metal isn't even supported that easily).

Running some services on top of it is then pretty simple, especially if you want to use a single IP instead of round-robin DNS (https://github.com/kubernetes/contrib/tree/master/keepalived...)

And if you have k8s running, adding a PaaS layer on top (OpenShift) can be pretty simple.


> I'm pretty sure bare-metal isn't even supported that easily

BOSH with the RackHD CPI does this. It's the same basic operator experience across every platform with a CPI.

Disclosure: I work for Pivotal, we work on this stuff.


Bare metal without OpenStack.


RackHD is not OpenStack.

https://rackhd.github.io/


True, and that's why I think a managed Kubernetes service like GKE is the way to go. It's almost like a PaaS but you still have a lot of the control.



Amazon's EKS is still in preview. I wouldn't expect it to be generally available (that is, stable) for several months at least. I've also heard reports that getting into the preview is really difficult at the moment.

It's using a new networking model: https://github.com/aws/amazon-vpc-cni-k8s

> Alpha This is an experimental release as part of the Amazon EKS Preview. Interfaces and functionality may change. Expect bugs (and please help us squash them). DO NOT use for production workloads.


What control do you have?


I agree that most startups should work at a Heroku level of abstraction.

You mention 18 microservices; I think that small teams are better off with a monolith.

I would see Kubernetes as a new machine level. We're moving from bare metal, to VMs, to container schedulers.

Heroku was one of the first companies that ran a container scheduler internally. So I think we agree that is the future.

But a small team probably doesn't need to work at that abstraction level.

At GitLab we think most teams will want to work at a higher abstraction layer: just push your code and have it deployed on Kubernetes, without having to write a Dockerfile or Helm chart yourself.
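
For contrast, this is roughly the kind of hand-rolled .gitlab-ci.yml that a higher abstraction layer is meant to make unnecessary - a build-the-image, push-it, kubectl-it pipeline you'd otherwise maintain per project (the "myapp" deployment name and the deploy image are placeholders):

    # Illustrative only; the point is that you shouldn't have to write this yourself.
    stages:
      - build
      - deploy

    build:
      stage: build
      image: docker:latest
      services:
        - docker:dind
      script:
        - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
        - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
        - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

    deploy:
      stage: deploy
      image: kubectl:latest   # placeholder; any image with kubectl on it
      script:
        - kubectl set image deployment/myapp myapp="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"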


The funny thing is I have 3 courses on Docker and I'm a Docker Captain but I pretty much agree with what you wrote about container orchestration.

A lot of people forget that you can just put your application up on 1 server and serve hundreds of thousands or millions of requests a month without breaking a sweat.

For that type of use case (1 box deploys), Docker is still amazingly useful so I would 100% containerize your apps for that purpose, but I agree, Kubernetes and container orchestration in general is overkill for so many projects.


I agree with this for the most part, but wanted to point out that docker's first big success was as a dev tool. Solving the "it works on my machine" problem, or the "oh you forgot to install v13.1.2 of X and then uninstall it and then install v12.4 because that's the only way it works for some reason" problem. So, avoiding k8s in order to avoid docker seems odd.

That said, a good number of projects don't require anything special about the environment other than a runtime for the app's language, where the remaining dependencies can be explicitly included in the build. For those, I agree, jumping on docker/k8s right away is overkill.

An additional benefit of working with something like Heroku initially, is that it will help guide your devs to sticking with more tried and trusted stacks rather than everyone pulling in their own pet project into the business's critical path.


I agree with pretty much everything you said and it's very heartening to not be the token Cloud Foundry person in the comments.

As a nitpick:

> This is why I don't understand why this article lumps, e.g. Cloud Foundry in with K8S - they solve entirely different problems.

In fairness, the reference was to Cloud Foundry Diego, which is the most analogous component to Kubernetes. And they are of comparable vintage. Diego never found any independent traction outside of CFAR.

> I've helped run multiple thousand TPS using Cloudfoundry, driving one of Europe's biggest retailers using just a few services.

We have customers doing millions of payments per hour, billions of events per day. Running tens of thousands of apps, thousands of services, with thousands of developers, deploying thousands of times per week.

CFAR doesn't get much press out of enterprise-land, but it works really well.

Disclosure: I work for Pivotal. We have commercial distributions of both Cloud Foundry (PAS) and Kubernetes (PKS).


There's even a higher-level desire: what users really want isn't a place to run their app, but the function the app provides. E.g. in a more microservices-like environment with a service that simply looks things up in a database, what they really want is just query access to the data. But now they have the data in some db, the db in some container, some API written using the latest API style, some software to provide the API (also in a container), some container orchestration to coordinate everything, load balancers, caches and so on.

So there's all these layers of stuff that sit between the user and the data just to make the act of asking WHERE DATATHING="STUFF" convenient.


> I'm now helping a startup unpick its 18 "service" containerised system back to something that can actually be maintained.

There's a lot of work (and money) out there to fix systems implemented on the hype train.


The root of this is really people making distributed systems when they don't need to. This microservices trend really is a massive waste of resources for most smaller teams that get caught up in it.


You should check out Docker Swarm. The UX of Swarm is brilliant - use a 10-line compose.yml file to get a stack up and running. Lets you specify tons of stuff if you want to.
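
A minimal sketch of such a compose.yml, assuming a hypothetical web image (deployed with `docker stack deploy -c compose.yml mystack`):

    version: "3"
    services:
      web:
        image: registry.example.com/web:1.0   # placeholder image
        ports:
          - "80:8080"
        deploy:
          replicas: 3                         # swarm spreads these across the cluster
          restart_policy:
            condition: on-failure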

The batteries-included nature of Swarm is a huge help as well - with k8s, you have to muck around with overlay networks, ingress, etc.

However, I think the writing on the wall is clear - k8s has won. Probably even to Docker Inc, given the Kubernetes integration they are building into Swarm now.

I think Docker Swarm can exist as an opinionated distro of k8s. I wouldn't mind paying money for that.


This is why I primarily see Kubernetes as a set of low-level primitives for a PaaS to build upon.

We don't use Kubernetes directly at my shop; we've begun to use OpenShift, though, which layers PaaS tooling on top of it, and the developers on my team love it. They create a deployment, point it at the git repository containing their code, set their configuration and the app is live - the underlying primitives are available if we need them still, but that's for me to worry about as the DevOps guy and not the developers.


The Kubernetes team often says that one of its goals is to be a "low level" project which additional tools/services/... should use as a base under the hood.

Helm (https://helm.sh/) allows you to define an app as a collection of K8S components and then to manage (= deploy, update, ...) your app as a standalone component.


Clarification: 18 containerised services can absolutely be the right choice. It's just that my experience says the trade-off between the costs of maintaining that versus a smaller PAASed system rarely comes out in favour of it.


This and This.

If you are looking for “I just wanna run my app” I found CloudFoundry to be dope among all the other PAAS solutions out there.


Yeah, it's not an overblown generalization at all to suggest Heroku for '99.99%' of workloads.


Luckily, the parent does not suggest that.


Service Fabric from Microsoft comes to mind, but it is not open source.


I think it's overrated though - not open source, doesn't have an ecosystem... The dev experience is subpar - services take too long to come up even with the one-node cluster on a beefy laptop. Plus you cannot run the service outside of SF as an exe now.

I migrated a decent-sized solution still in dev from SF to .netcore and SF - 10/10 would do it again. Not to mention that you also end up saving 50% $$$ on VM costs with Linux VMs (not considering SF on Linux).


Thanks for sharing your experience, I did not understand it in full.

Do you recommend using SF or not? You mention that you would do it again - was that only about moving from Windows .NET to .NET Core on Linux (i.e. .NET Core rocks?) and the rest about SF is crap, or would you recommend SF in general for any future work (instead of, for example, Akka.NET for service coordination in a cluster)?


Kubernetes takes you to serverless, where you don't care about the hardware.

The next shift is what I've called "stackless" - why do you even care what platform it runs on?

All you want to be able to do is have your application run somewhere.

Kubernetes goes some way towards that, but there's another abstraction layer needed.

Similar to how Docker was a further abstraction on the way to Kubernetes and away from Vagrant.

This is something I wrote about not long ago[1].

1. https://wade.be/development/sysadmin/2016/11/17/stackless.ht...


> Kubernetes takes you to serverless, where you don't care about the hardware.

Serverless isn't a good name - but it doesn't stand for "don't care about the hardware". Devs already stopped caring about hardware with the arrival of VMs.

What serverless removes is the abstraction level of a server/vm/container.

A simple example is scaling your stateless components. In a serverless FaaS, functions are scaled for you. You don't have to do anything to handle a peak in web traffic. You don't have to do anything to handle a peak of msgs in your MQ.

In k8s, you still have to go and fumble around with CPU/memory limits and had better get them right. k8s also doesn't scale your containers based on the msgs in your MQ out of the box. You have to build and run that service yourself (or ask GCP to whitelist you, should you be running their MQ https://cloud.google.com/compute/docs/autoscaler/scaling-que... ). AWS Lambda has had that since 2015...
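
To illustrate the "fumble around with CPU/memory limits" part: in k8s the requests/limits and the autoscaler target are numbers you have to pick yourself, and the stock autoscaler reacts to metrics like CPU, not to queue depth. A minimal sketch, with made-up values and a placeholder image:

    # You choose these numbers; choose badly and you get throttling or OOM kills.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: worker
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: worker
      template:
        metadata:
          labels:
            app: worker
        spec:
          containers:
            - name: worker
              image: registry.example.com/worker:1.0   # placeholder image
              resources:
                requests:
                  cpu: 250m
                  memory: 256Mi
                limits:
                  cpu: 500m
                  memory: 512Mi
    ---
    # The stock HPA scales on CPU utilisation, not on MQ backlog.
    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: worker
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: worker
      minReplicas: 2
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70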


>> why do you even care what platform it runs on?

Isn't that what the JVM/wasm solved?


Well, yes — the problem is that the JVM was too big and too platform-independent. We don't want JVM-everywhere; really, we want POSIX-everywhere. The JVM's also this weird level of statically-typed hyper-extensibility — it's Greenspun's Tenth Rule in action, and the result is typically in really terrible taste. The end result is a JVM which is really, really impressive, but appallingly ugly.


Thanks to the JVM, I stopped caring about POSIX.

Not everyone is fond of it.

Same applies to any other language with rich libraries.


Yes.

JEE application servers already offer all the benefits of containers and OS independence.



