
Why Kubernetes Is the New Application Server - rterzi
https://developers.redhat.com/blog/2018/06/28/why-kubernetes-is-the-new-application-server/
======
dkhenry
I would say Kubernetes is becoming the new Application Server, but not for
anything listed in the article. Kubernetes is more and more being sold to the
"enterprise" as a solution for running services, much like application servers
were. And much like application servers, the standard Kubernetes deployment is
becoming a tangled mess of buzzwords and dreams that works great in a demo but
won't work at all without an army of consultants.

~~~
brozaman
I'm a former OpenShift (Red Hat's distribution of Kubernetes) consultant and
currently I work at Red Hat in a different position and I have to disagree on
the "won't work at all without an army of consultants".

While it's true that some of our customers have an army of consultants, the
vast majority of our customers don't use consulting at all, or only use it
very infrequently. If you don't want a lot of customization, have the right
number of people (and the right people), and have _realistic expectations_,
you don't need consulting at all.

When people want a highly customized experience (often for the wrong reasons),
or want to get into production after 2 months when their people have no
experience with kubernetes and didn't do adequate testing (load, fault
tolerance, etc.), it will cause a lot of problems. But that's the case for
everything, not only for kubernetes.

In my opinion (mine, not Red Hat's), getting someone to deploy it with you the
first time, showing how it's done and why things are done that way, and
afterwards doing a workshop explaining the basic concepts, has great value,
saves a lot of time, and isn't expensive.

~~~
mmt
> want to get into production after 2 months [...] it will cause a lot of
> problems. But that's the case for everything, not only for kubernetes

Are you asserting that any modern production environment that takes less than
2 months to set up is bound to cause problems? Or the whole
company/startup/project?

If the former, I'd say there are many tech startups who would disagree, or at
the very least, point out that it doesn't matter because they'd cease to exist
during those 2 months without an MVP in production.

~~~
marenkay
I think the two-month phase is pretty accurate. Deployment itself is not that
time-intensive, and neither is changing your applications to be
container-friendly.

The time intensive phase during the first Kubernetes deployment is changing
the mindset of the engineering team and everyone involved in IT.

Kubernetes IMHO is pretty much like moving from college into work life and
adapting to the fact that the requirements for a decent adult life are very
much different from those of college life.

~~~
mmt
That's as may be for Kubernetes, but the GP specifically stated "that's the
case for everything, not only for kubernetes", which struck me as an
extraordinarily broad claim.

One might reasonably expect "everything" to include traditional deployment
methods that require no changing of mindsets (not that that's likely a factor
in my nascent startup example).

~~~
marenkay
Transition phases actually make sense even if you apply "everything" instead
of just "Kubernetes" considering how every company transitions through stages
in terms of tooling.

Think of being a startup where deployments will most likely be manual and
undocumented, then upgrading to initial automation using some scripting, then
transitioning to CI/CD, etc.

So from various experiences at companies of all sizes I would actually support
such a broad claim, since companies are basically transitioning all the time
through new phases to greener pastures. That of course assumes that the
company values constant improvements and uses an iterative process for growth
and change.

------
rdsubhas
I feel a bit alone here. The most obvious benefit of kubernetes is that _it's
an API_. There is definitely a tooling angle, but the core disruption that
kubernetes brings to the picture is that it allows you to model infrastructure
as an API.

Take the standard cloud application stack: every app needs an artifact (disk
image), a running service, an instance template, a group of VMs running from
those templates, networking, and a load balancer.

Before kubernetes, you had to automate by writing terraform scripts which
mutate physical infrastructure as you apply.

With kubernetes, you POST a bunch of well-defined resources (container spec
for the service, pod spec for the instance, pod as a VM, deployment as the
group of VMs, services and ingress for traffic). Every physical cloud resource
has a clear API mapping. Basically, you store what you need in kubernetes as a
first-class API resource, and then kubernetes _responds_ to what you stored by
changing infrastructure to match.
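
To make that concrete, here is a minimal sketch using the official Kubernetes
Python client (assuming a cluster reachable via your kubeconfig; the image
name is a placeholder). The whole interaction is handing a resource to the API:

    # Sketch: declare a Deployment as data and hand it to the Kubernetes API.
    # Assumes `pip install kubernetes` and a working kubeconfig; image is a placeholder.
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="web",
                        image="example/web:1.0",
                        ports=[client.V1ContainerPort(container_port=8080)])]))))

    # This single POST is the whole interaction; the controllers reconcile the rest.
    apps.create_namespaced_deployment(namespace="default", body=deployment)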

You can do the same for practically any kind of infrastructure resource. Want
an S3 bucket? Duh, POST an S3 bucket resource and write a controller to react
and fulfil that bucket.

Kubernetes comes with a built-in set of functionality that fulfils a certain
core set of infrastructure using containers. But you are not restricted to
that. In theory, you could POST a VM as a resource, or an Instance Group as a
resource, and you can write a kubernetes controller to fulfil those resources.
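
A controller in this sense is little more than a watch loop against the API.
A toy sketch (the buckets custom resource and the bucket-creation call are
made up for illustration):

    # Toy controller sketch: watch a hypothetical custom resource and "fulfil" it.
    # The group/version/plural below belong to a made-up CRD.
    from kubernetes import client, config, watch

    config.load_kube_config()
    api = client.CustomObjectsApi()

    def create_bucket(name):
        # Stub: a real controller would call the cloud provider's API here.
        print(f"creating bucket {name}")

    w = watch.Watch()
    for event in w.stream(api.list_cluster_custom_object,
                          group="example.com", version="v1", plural="buckets"):
        if event["type"] == "ADDED":
            create_bucket(event["object"]["metadata"]["name"])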

Why? Because APIs are more powerful than tools. APIs allow a different axis of
infrastructure evolution, even if you distribute control to everyone. By
modeling these as APIs, you can bake a huge amount of infrastructure
intelligence into the API. Want to enforce different code-driven,
resource-based policies, linting, sane defaults, organizational context? Yeah,
make the controller do just that. If you just provide a tool to every
engineering team, you basically lose any form of cross-cutting orchestration,
and you lose the ability to evolve infrastructure along a separate axis,
independent of the tool that each team uses.
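
To illustrate the policy point (a hedged sketch, not an endorsed setup): a
validating admission webhook can be a small HTTP handler that rejects
resources violating an org-wide rule, e.g. containers without resource limits.
In a real cluster it would run behind TLS and be registered via a
ValidatingWebhookConfiguration:

    # Sketch of a validating admission webhook enforcing one example org-wide rule:
    # every container in a pod must declare resource limits.
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/validate", methods=["POST"])
    def validate():
        review = request.get_json()
        pod = review["request"]["object"]
        missing = [c["name"] for c in pod["spec"]["containers"]
                   if not c.get("resources", {}).get("limits")]
        return jsonify({
            "apiVersion": "admission.k8s.io/v1",
            "kind": "AdmissionReview",
            "response": {
                "uid": review["request"]["uid"],
                "allowed": not missing,
                "status": {"message": f"containers missing limits: {missing}"},
            },
        })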

Basically, kubernetes is "Infrastructure as an API". Thinking about it as a
"new application server" kinda hides the whole point of it.

~~~
zeckalpha
That’s not a benefit to me. Is it reproducible? Keeping desired state under
version control is valuable.

~~~
nojvek
Yep. Super reproducible. Even more than the old ways. You can store the Docker
configs and keep the container images in a registry. Store the YAML files, or
auto-generate them in your build process from some other central DSL.
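
For example (a hedged sketch of that idea, not anyone's actual build process;
the service name and image are placeholders), generating manifests from a
small central config at build time could look like:

    # Sketch: render Deployment manifests from a central service config at build time.
    # Service names and images below are placeholders.
    import yaml  # pip install pyyaml

    SERVICES = {"api": {"image": "example/api:1.4.2", "replicas": 3}}

    def deployment_for(name, cfg):
        return {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "metadata": {"name": name},
            "spec": {
                "replicas": cfg["replicas"],
                "selector": {"matchLabels": {"app": name}},
                "template": {
                    "metadata": {"labels": {"app": name}},
                    "spec": {"containers": [{"name": name, "image": cfg["image"]}]},
                },
            },
        }

    for name, cfg in SERVICES.items():
        with open(f"{name}-deployment.yaml", "w") as f:
            yaml.safe_dump(deployment_for(name, cfg), f)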

We could probably bootstrap our entire product from scratch in a couple of
hours if it disappeared off the face of the earth.

“Infrastructure as config” is a very powerful concept. I’d say that’s where
the big power of k8s lies.

------
fauigerzigerk
Here's my prediction: DevOps is dead. In 5 to 10 years developers will no
longer be mentioned in articles like this.

There will be specialisation once again, because it makes absolutely no sense
to make people whose main job is to think about use cases, business logic, and
user interfaces also deal with stuff like this.

~~~
solomatov
Another prediction: in 5 to 10 years infrastructure will be so streamlined
that developers won't need to think about it at all. I see the progression: we
came from manually maintained machines, to clumsy devops tools, to more
streamlined tools like Ansible, to kubernetes, and with each new step
maintaining a system requires less and less expertise.

~~~
some_account
No, the future is serverless applications in the major cloud providers where
every server is abstracted away and things scale automatically.

The reason for this is the enormous cost benefit (it's super cheap: you only
pay when your code actually runs, no upfront costs, no passive server costs,
no over- or under-provisioning of compute resources, etc.).

~~~
bamboozled
If you’ve ever used a serverless framework, you’ll know this argument falls
apart the second you need to run something for a long period of time, manage
complex state or glue services together. There are numerous other details
being glossed over.

The serverless approach has its place, just like everything else, but it's not
a panacea.

~~~
rhlsthrm
Eventually, it should be abstracted even further into something that looks
like Heroku but deploys your code onto serverless functions or containers,
depending on how your service needs to run. A nice, modular microservice
backend, automatically, with zero config.

------
sewercake
We're in the process of migrating our (primarily) java services from straight
AWS to kubernetes.

At the beginning the author poses the following questions:

* Do you use Mac, Windows, or Linux? Have you ever faced an issue related to \ versus / as the file path separator? What version of JDK do you use? Do you use Java 10 in development, but production uses JRE 8? Have you faced any bugs introduced by JVM differences?

* What version of the application server do you use? Is the production environment using the same configuration, security patches, and library versions?

* During production deployment, have you encountered a JDBC driver issue that you didn’t face in your development environment due to different versions of the driver or database server?

* Have you ever asked the application server admin to create a datasource or a JMS queue and it had a typo?

I've experienced problems whose root causes were some form of all of those.
Much of it could be chalked up to growing pains et cetera, but, for example,
there are concrete differences between Docker versions running on Mac and
Linux that have shown up for me.

This doesn't weaken the author's argument, but they do seem like strange
examples.

Our choice to move to Docker and kubernetes came from developers, and
specifically spoke to the need for consistent, reproducible test environments.
We had dockerized most applications many months before the notion of using
them in production was put on the table. What remains to be seen is whether
the switch in production reduces complexity and maintenance on the devops end
of things as well. I'm also curious how many other organizations had
containers introduced 'from the bottom up' like us.

~~~
knesek
Hm. My team has been running and developing a somewhat complex JVM-based
system for years and has never encountered any of the issues in the bullet
points. Having consistent test, staging, and production environments produced
by Ansible has been working out for us (and is not hard). Are we just being
lucky?

~~~
sewercake
I don’t have enough experience to say how common my experience (or yours) is,
but I don’t know how much luck is involved. I’m sure many institutions have
solved these problems without containers and Docker.

But for various reasons (political, personal, technical, et cetera),
consistent, accessible test environments were not available. It was a huge
bottleneck for developers.

This is why docker was so appealing: it allowed devs to circumvent the
political and technical issues. We didn’t have to justify provisioning more
instances, since we could just use docker compose to run stacks on local
machines.

So in that sense, the choice and its benefits arose from non-technical
hurdles.

------
testvox
Sorry, but this article implies that containers can solve issues like the
difference between path separators on Windows and Linux. They can't even solve
issues arising from differences in kernel versions, not to mention operating
systems.

~~~
craftyguy
Always use /

Windows can, surprisingly, handle it just fine.

~~~
JetSpiegel
For interactive use, sure. But it's not clear-cut; some code paths are lower
level and bypass that.

~~~
craftyguy
Yea, I guess most of my uses have been in things like Python (e.g. open() and
os.*). Got any examples of code paths that bypass that check and fail if you
use /?

~~~
testvox
I think they are wrong, and actually it's the opposite problem. Some
applications and libraries that do their own path mangling on Windows will
choke if you give them a /. The Win32 file I/O API handles it correctly.
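
To make that concrete, here's a small Python sketch meant to be run on Windows
(the path is a placeholder): the calls that go through the OS accept forward
slashes, while hand-rolled separator handling is where things break:

    # Sketch (Windows): Win32-backed calls accept forward slashes just fine.
    import os
    import ntpath

    p = "C:/temp/example.txt"   # placeholder path

    with open(p, "w") as f:     # open() hands this to the OS; '/' works
        f.write("hello")

    print(os.path.exists(p))    # True
    print(ntpath.basename(p))   # 'example.txt' -- the stdlib handles both separators

    # Where it breaks: code that hand-rolls its own separator logic.
    print(p.split("\\")[-1])    # 'C:/temp/example.txt' -- naive mangling chokes on '/'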

------
taeric
It is hard not to build this impression of containers. Worse, it seems that
the docker image format is massive compared to what most WAR files were like.
This is annoying not just in terms of raw bytes to move around, but in taking
stock of what is getting moved around.

My team originally pitched how docker solved much of the dependency upgrade
management by having layers for each major set of dependencies. That ignored
the fact that upgrading a layer is not really something you do.

So, then you can go around the path of coordinating many containers
communicating with each other. That works, but in that world, things really
don't seem any easier than the earlier alternatives. Harder, in many ways.

Don't get me wrong, the momentum and raw money being put into containers
certainly paints it as the future. It just feels like lying to say that they
have even come close to parity with what we were capable of not that long ago.

~~~
xj9
the linux world is slowly (and badly) recreating technology that has existed
in FreeBSD and Solaris/Illumos for decades.

~~~
grapeli23
You've probably never heard of or used Linux-VServer. It was available from
later versions of the 2.4 kernel.

[https://en.wikipedia.org/wiki/Linux-VServer](https://en.wikipedia.org/wiki/Linux-VServer)

~~~
xj9
> 2008

FreeBSD jails date from 2000

Solaris zones date from 2004

[https://us-east.manta.joyent.com/bcantrill/public/ppwl-cantr...](https://us-east.manta.joyent.com/bcantrill/public/ppwl-cantrill-jails.pdf)

[https://us-east.manta.joyent.com/bcantrill/public/ppwl-cantr...](https://us-east.manta.joyent.com/bcantrill/public/ppwl-cantrill-zones.pdf)

~~~
grapeli23
2003: [http://linux-vserver.org/Overview#History](http://linux-vserver.org/Overview#History)

In the Linux distribution that I used (PLD), the first vserver support landed
in January 2004. [https://github.com/pld-linux/kernel/commit/5be58c1bcc5568676...](https://github.com/pld-linux/kernel/commit/5be58c1bcc5568676a75dd9f7096827bfcde08e8)

Maybe a little earlier, because util-vserver showed up in November 2003.
[https://github.com/pld-linux/util-vserver/commit/c4036d6e748...](https://github.com/pld-linux/util-vserver/commit/c4036d6e748808e07671ba402b3fe7ca027db82a)

------
ironjunkie
I hope that the Kubernetes Steering committee will continue its good work to
keep the Core tidy and simple.

What really makes me afraid is the thousands of complex add-ons being pushed
by the community (for example Istio, networking add-ons, etc.). Those should
be kept outside the core, and it MUST be made clear that they are definitely
not needed for a normal installation of Kubernetes.

Istio, for example, is such a political brainwash power-move by some bigger
companies that benefit from it. I believe less than 5% of use cases really
require Istio, yet it is being pushed as something you should always install
in your cluster. This is bad for everyone.

------
iKSv2
Guys who have worked with Kubernetes and virtualization, I have a genuine
personal question for you: I have been working as a sysadmin and then moved
into virtualization (VMware/Hyper-V sort of things), and while the natural
progression says to go for advanced VMware courses (related to virtual
machines and their concepts), do you think it's better to switch lanes to
container tech now rather than some years down the line? I am totally at 0 on
Docker / Kubernetes / OpenStack (if that's related to containers).

~~~
clhodapp
At this point it is clear that this is a _thing_ that _will_ happen for a
whole lot of orgs. You would be well advised to dabble at this point and see
if it connects with you. At the least, you should understand what problems are
and are not solved so you don't look foolish in a conversation. That said, VMs
and scripted/config-managed deployment will continue to be with us for the
foreseeable future.

~~~
iKSv2
"At the least, you should understand what problems are and are not solved so
you don't look foolish in a conversation"

Seriously, this is so embarrassing at this point (with me having absolutely 0
understanding of the concept).

------
eecc
Well, until someone develops and releases (and strangles all naysayers) a
language-independent thing remotely like JEE's JNDI... it's just not. Sorry,
JEE was horrible, but also so beautifully ahead of its time.

~~~
hyperman1
Unfortunately, JEE was horribly behind the times. A lot of the ideas came from
the COBOL middleware world. Others were a remix of CORBA.

------
humbleMouse
I think what a lot of people forget is that this is being driven by costs.
Companies don't want to pay for disaster recovery, and it's cheaper to set up
a k8s platform. The platform itself performs disaster recovery, so now that's
one less cost for the company.

It's all about saving money on DR. When these companies realize that
developers can't handle doing ops AND complex business logic, maybe they'll
rethink it. Until then I expect this trend to spread rapidly as companies look
for ways to abstract away DR costs.

~~~
hodgesrm
Can you define what you mean by disaster recovery? To me it means the ability
to recover business applications from a site failure.

Assuming you have a similar definition how does Kubernetes solve that problem?

~~~
humbleMouse
What I mean by DR is that if a data center gets nuked, you still have a
replica of your k8s platform running in your other data center(s).

So k8s basically runs a replica of your whole system behind the scenes, so if
a physical location goes down you still have your system running.

~~~
hodgesrm
What about all the data in DBMS, file systems, object stores? Also what about
BGP routes, firewall settings, SNAT/DNAT rules and the like?

There's a lot more to replicate than just the bits in the apps.

~~~
SteveNuts
Those are the types of things developers simply don't have experience using.
And if Kubernetes tries to replace all of those it will become the new
OpenStack.

------
sheeshkebab
The software world keeps on going in circles :).

Next up: a lightweight k8s server, stripped of all the crap, that can easily
run and deploy a single container.

~~~
marcus_holmes
yes, this. I'm just skipping the whole Docker thing this time around, and
waiting until the circle turns and the herd decides that containers create too
much complexity ;)

------
amdelamar
As mentioned in the article, the EFK stack (Elasticsearch, Fluentd, Kibana) is
great for unified logging. And it's not just for OpenShift; it's working
wonders for our current project with Kubernetes on AWS.

~~~
gog
Could you give more details about why Fluentd and not Beats + Logstash on the
other end? I don't like the idea of running Ruby on every machine just to ship
logs somewhere.

Beats are pretty lightweight, and for what I used them for, they work.

~~~
mehblahwhatevs
Many people use the ELK stack, which uses Logstash. I imagine you can swap out
Fluentd for Logstash.

------
zokier
Is this a response to the Jib announcement and the inevitable comments that
followed? It sounds a lot like it is...

------
tzakrajs
Instead of the new Application Server, Kubernetes is the new OpenStack. A
conglomeration of projects that support a platform for deploying and operating
software in containers or virtual machines. Just like OpenStack, some of the
components deal with storage, others with networking, and others still with
service discovery or proxying.

Without getting too far into semantics, I think the author is using a heavily
overloaded term. When I hear "Application Server", I immediately think of
Tomcat or some other JVM.

