
Cloud Native Application Interfaces - xkarga00
http://blog.kubernetes.io/2016/09/cloud-native-application-interfaces.html
======
dkarapetyan
Reactive and evented models are much harder to reason about and design
properly. When you're just doing things sequentially in a fabric script it is
much easier to make sense of what is going on. As soon as you make things
reactive and evented to support dynamic cloud topologies you are basically on
another planet and none of the old rules apply. This is why it is hard to
design "cloud native" systems. I don't think it is the lack of interfaces and
standards but the model being inherently non-sequential. In many cases it is
also non-transactional and barely eventually consistent.

------
drieddust
> Let’s go back to first principles. To describe Cloud Native in one word,
> we'd choose "automatable". Most existing applications are not.

This is doubly true for enterprise customers. Most of them are cloud-minded
without understanding their own applications, which aren't cloud-ready.

~~~
user5994461
Existing applications will work in the cloud. There is no need to use the
fancy cloud stuff (multi region, auto scaling...) if it's not needed [or not
possible].

At a minimum, renting an instance on AWS or GCE is equivalent to renting a
physical server. Applications don't care what brand of server they're running
on.

~~~
AlexB138
> Existing applications will work in the cloud. There is no need to use the
> fancy cloud stuff (multi region, auto scaling...) if it's not needed [or not
> possible].

I'd like to unpack this.

> Existing applications will work in the cloud.

They may work, but there's a good chance they will not work well. Doing a
direct lift and shift of an application onto AWS can be catastrophic if you
don't understand storage persistence or VM availability. An EC2 VM is not
like a physical server in that it will not continue to run indefinitely until
something breaks. I would say that existing applications will likely not work
well without a shift in the way you treat the underlying infrastructure. There
are also a lot of considerations around IO and locality.

> There is no need to use the fancy cloud stuff (multi region, auto scaling...)
> if it's not needed

You've just said "There is no need... if it's not needed".

> [or not possible]

If it is not possible to use the surrounding services your application is
probably a poor fit for a cloud platform. It can become prohibitively
expensive to try to directly replicate your physical datacenter architecture
on a cloud platform.

Is it possible to just drop your existing application onto some VMs? Sure, but
it's probably a bad idea.

~~~
user5994461
> An EC2 VM is not like a physical server in that it will not continue to run
> indefinitely until something breaks.

Sorry to contradict but an EC2 VM does run indefinitely until something breaks
;)

There are differences in physical storage between local disks, SAN, NAS,
network storage, NFS, EBS volumes and GCE persistent disks. A sysadmin should
know the characteristics of each, whether it's cloud tech, in-house tech or
homelab tech.

People with all this knowledge are rare and expensive, yet critical for major
migrations to go well. I can understand that this is an obstacle for major
cloud migrations (and a boon for my paycheck).

> You've just said "There is no need... if it's not needed".

I think it's VERY important for legacy migrations. A migration should be done
in stages, starting with the fundamentals.

All the articles and talks focus on the shiny bleeding-edge stuff, which is
only the last stage(s). Depending on the applications and the organization,
that stage may or may not be worthwhile, and it may or may not be a goal worth
pursuing in the first place.

> If it is not possible to use the surrounding services your application is
> probably a poor fit for a cloud platform. It can become prohibitively
> expensive to try to directly replicate your physical datacenter architecture
> on a cloud platform.

I'm talking to clients who have to run their own datacenter right now and want
to migrate. It is prohibitively expensive.

------
jondubois
I understand the benefit of designing software components (and stacks) to run
and autoscale on Kubernetes - I actually did that with my open source project
SocketCluster. See
[https://github.com/SocketCluster/socketcluster/blob/master/s...](https://github.com/SocketCluster/socketcluster/blob/master/scc-guide.md)

I think that standardisation should happen at the level of the stack/component
(not at the application level). Most application developers don't know enough
about specific components like app servers, databases, message queues, in-
memory data stores... to be able to effectively configure them to run and
scale on K8s (it's difficult and requires deep knowledge of each component).

I think it should be the responsibility of open source project owners to
standardize their components to run and autoscale on K8s. It's not practical
to delegate this responsibility to application developers (whose primary focus
is business logic).

Application developers should be able to use an OSS stack/component at scale
on K8s without having to understand the details of how that stack/component
scales itself.

So for example, if I wanted to run Redis as a cluster on K8s, I should be able
to just upload some .yaml files (provided in the Redis repo) and it should all
just work - then I can start storing data in the Redis cluster straight away
(without having to understand how the sharding works behind the scenes).
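
A rough sketch of what those .yaml files might boil down to (illustrative
only, not an official Redis-provided manifest; the names, labels and image tag
are made up, and it leaves out the cluster bootstrapping/sharding setup that
the project itself would have to encapsulate):

    # redis.yaml -- illustrative sketch, not an official manifest
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: redis
    spec:
      serviceName: redis
      replicas: 3
      selector:
        matchLabels:
          app: redis
      template:
        metadata:
          labels:
            app: redis
        spec:
          containers:
          - name: redis
            image: redis:6
            ports:
            - containerPort: 6379
    ---
    # Headless service so each pod gets a stable in-cluster DNS name
    apiVersion: v1
    kind: Service
    metadata:
      name: redis
    spec:
      clusterIP: None
      selector:
        app: redis
      ports:
      - port: 6379

Run kubectl apply -f redis.yaml and start storing data, without ever reading
up on how the sharding is wired underneath.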

Rancher has the concept of a 'Catalog' which pretty much embodies this idea.

~~~
thorgaardian
> I think that standardisation should happen at the level of the
> stack/component (not at the application level). Most application developers
> don't know enough about specific components like app servers, databases,
> message queues, in-memory data stores... to be able to effectively configure
> them to run and scale on K8s (it's difficult and requires deep knowledge of
> each component).

Can't agree more with this, but I would add that it's not limited to the
specific components listed like databases, message queues, and others. Getting
any component or service configured to autoscale on K8s and work its way into
a larger infrastructure can often require far more working knowledge than
should be necessary. Standardizing the interface these components use to
publish themselves would help K8s take on this responsibility more fully. I
can only speak for myself, but I for one would happily adopt an interface like
this if it meant seamless distribution, autoscaling, and consumption for peer
components.

The last part, about consumption by peers, is important as well. Not only
would the standardized interface enable a higher level of scale automation,
that same standardization could also translate into interface assumptions for
external components. In the Redis example above, a standardized interface for
the service would mean that K8s can deploy it automatically, but also that
other services can make similar assumptions about its location in a deployed
environment.
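
For instance (a hypothetical consumer-side sketch; the app name and
environment variable are made up), a peer service could bake the assumption
that Redis is reachable at the Service's DNS name directly into its own
manifest:

    # consumer.yaml -- hypothetical peer relying on the "redis" Service name
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-api
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-api
      template:
        metadata:
          labels:
            app: my-api
        spec:
          containers:
          - name: my-api
            image: my-api:latest
            env:
            - name: REDIS_ADDR
              value: "redis:6379"  # resolves in-cluster via the Service (same namespace)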

------
jdc
Would be cool if this were a thing nowadays:

[https://en.wikipedia.org/wiki/Single_system_image](https://en.wikipedia.org/wiki/Single_system_image)

