
It's time to reconsider going on-prem - twakefield
http://gravitational.com/blog/time-to-reconsider-going-onprem/
======
chrissnell
I'm a huge believer in colocation/on-prem in the post-Kubernetes era. I manage
technical operations at a SaaS company and we migrated out of public cloud and
into our own private, dedicated gear almost two years ago. Kubernetes and--
especially--CoreOS has been a game changer for us. Our Kube environment
achieves a density that simply isn't possible in a public cloud environment
with individual app server instances. We're running 150+ service containers on
each 12-core, 512 GB RAM server. Our Kubernetes farm--six servers configured
like this--is barely at 10% capacity and I suspect that we will continue to
grow on this gear for quite some time.

CoreOS, however, is the real game-changer for us. The automatic updates and
ease of management is what took us from a mess of 400+ ill-maintained
OpenStack instances to a beautiful environment where servers automatically
update themselves and everything "just works". We've built automation around
our CoreOS baremetal deployment, our Docker container building (Jenkins +
custom Groovy), our monitoring (Datadog-based), and soon, our F5-based
hardware load balancing. I'm being completely serious when I say that this
software has made it fun to be a sysadmin again. It's disposed of the rote,
shitty aspects of running infrastructure and replaced it with exciting
engineering projects with high ROI, huge efficiency improvements and more
satisfying work for the ops engineering team.

~~~
shermanyo
Honest question, if you're running at 10%, why have you gone with 12-core, 512
GB RAM servers? Couldn't you start with a more reasonable 4-core, 64 GB RAM
until some threshold? What value do you get from such an early overallocation
of resources?

~~~
TheBobinator
Who rips the $1,000 processors and $200 voltage control modules out of their
servers to upgrade to $2,000 processors and $250 voltage control modules, then
re-plans their entire infrastructure and possibly code back around that?

Symmetry between servers has a value.

The main board, raid controller, network, and usually the storage is going to
be planned out meticulously ahead of time based on the maximum load the server
is going to see during its lifetime. Often, processor and memory come down to
"If we needed X Feature and we didn't have it" or "If we took the server down
for 1 hour to upgrade it, how expensive is that?".

~~~
shermanyo
I misunderstood, sorry. I was talking in the context of virtual servers that
can scale resources somewhat dynamically. (side note, isn't it awesome we see
numbers like "512GB RAM" and don't immediately assume it's not a single node in
a deployment?)

I initially pictured someone picking the highest spec option they can afford
when setting up a new service, rather than choosing based on actual demand of
each node.

> "If we took the server down for 1 hour to upgrade it, how expensive is
> that?"

Putting my CI / DevOps hat on for a sec: who takes production servers down for
upgrades without some level of HA to avoid downtime? ;)

------
nodesocket
Recently just built and deployed an enterprise (on-prem) offering for my
startup [https://commando.io](https://commando.io).

The single biggest problem by far was ripping/replacing 3rd party SaaS
services that we use/pay for with native code. While building a SaaS it is
wonderful to use 3rd parties to save time and complexity. Examples include
Stripe (payments), Rollbar (error tracking), Intercom (support/engagement),
Iron.io (queue and async tasks), Gauthify (two factor auth)... However, when
you go to on-prem often times the servers don't have a public interface, so
your entire application has to work offline.

2nd tip. While it may seem like a good idea to create a separate branch in
your repo for on-prem (i.e. an enterprise branch), this is a bad idea. It ends up
being much better to just if/else in the master branch all over the place:

    
    
       if(enterprise) {
           // do something
       }
    

If/else in the master branch is what GitHub Enterprise does (at least that's
what I was told).
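
For illustration, a minimal Python sketch of that pattern (the flag name,
paths, and stub integrations are made up, not nodesocket's actual code): one
enterprise flag checked in place, with the SaaS-backed pieces swapped for
offline equivalents.

    import os

    # Hypothetical flag; an on-prem build would bake this into the package or
    # a license file rather than read it from the environment.
    ENTERPRISE = os.environ.get("DEPLOY_MODE") == "enterprise"

    def report_error(exc):
        if ENTERPRISE:
            # On-prem boxes often have no outbound internet, so log locally
            # instead of posting to a SaaS error tracker.
            with open("/tmp/app-errors.log", "a") as f:
                f.write(repr(exc) + "\n")
        else:
            post_to_rollbar(exc)

    def charge_customer(user_id, amount_cents):
        if ENTERPRISE:
            record_invoice_locally(user_id, amount_cents)  # billed out-of-band
        else:
            charge_via_stripe(user_id, amount_cents)

    # Stubs standing in for the real integrations.
    def post_to_rollbar(exc): print("POST https://api.rollbar.com/ ...", repr(exc))
    def charge_via_stripe(uid, cents): print("stripe charge", uid, cents)
    def record_invoice_locally(uid, cents): print("INSERT INTO invoices ...", uid, cents)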

~~~
jwatte
Everybody works on master, with feature flags! Yes!

Also, if you have proper dependency injection for testing and developer
sandboxes, that's the right layer to vary your back end. If statements only
needed when setting up the dependency environment.

In PHP we do this with polymorphic classes; in Haskell with type classes and
the ReaderT monad.
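
A rough Python sketch of that layering (illustrative only; all the names are
made up): the only conditional lives where the dependency environment is
assembled, and the request-handling code just uses whatever was injected.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Backends:
        """The 'dependency environment': chosen once at startup, then injected."""
        report_error: Callable
        charge: Callable

    def build_backends(enterprise: bool) -> Backends:
        # The only place a flag check is needed.
        if enterprise:
            return Backends(report_error=log_error_locally,
                            charge=record_invoice_locally)
        return Backends(report_error=post_to_rollbar, charge=charge_via_stripe)

    def handle_checkout(backends: Backends, user_id, cents):
        # Application code never looks at the flag.
        try:
            backends.charge(user_id, cents)
        except Exception as exc:
            backends.report_error(exc)
            raise

    # Stubs standing in for the real integrations.
    def log_error_locally(exc): print("append /var/log/app/errors.log", repr(exc))
    def post_to_rollbar(exc): print("POST https://api.rollbar.com/ ...", repr(exc))
    def record_invoice_locally(uid, cents): print("invoice", uid, cents)
    def charge_via_stripe(uid, cents): print("stripe charge", uid, cents)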

------
sytse
I agree that Kubernetes is a game changer which makes it much easier to run
your own applications. If all you have is VMs then the services (RDS, EFS,
etc.) offered by AWS are more effective. With a container scheduler there is
less maintenance and the decision is harder.

BTW Shoutout to Ev from Gravitational, they are proper Kubernetes experts and
we appreciate their help on GitLab, especially making
[https://github.com/gravitational/console-
demo](https://github.com/gravitational/console-demo) for our k8s integration
efforts.

~~~
hueving
If you have VMs, AWS is not necessarily the answer if you have a relatively
static workload. You are lighting money on fire by the bucket if you pay AWS
for a fixed amount of always-on instances.

~~~
toomuchtodo
But Netflix! /s

~~~
closeparen
Netflix serves an evening rush concentrated in the 4 US time zones; they
actually use elasticity.

~~~
vidarh
Netflix is also almost certainly far into the "contact us" price tiers, so
they're a useless example for cost-effectiveness of AWS unless/until they
actually publish what they pay. If it's anything near the listed tiers,
they're being ripped off, based on what little I know of the kind of discounts
much smaller players have negotiated.

------
twakefield
Inspired by a previous blog post that was popular on HN [1]: John E. Vincent's
"So You Wanna Go On-prem Do Ya" [2].

[1]
[https://news.ycombinator.com/item?id=11757669](https://news.ycombinator.com/item?id=11757669)
[2] [http://blog.lusis.org/blog/2016/05/15/so-you-wanna-go-
onprem...](http://blog.lusis.org/blog/2016/05/15/so-you-wanna-go-onprem-do-
ya/)

------
mey
Quick note, they define Private Cloud as "Customer’s private cloud environment
on a cloud provider" but that is not the definition I go by. Just a heads up
while reading.

I would define Private Cloud as a non-bare-metal solution on-prem in a
traditional colo setting where hardware is owned by the company in question.
Hybrid Cloud would be a Private Cloud and Public Cloud bridged in some way.

~~~
rosege
Where I work we are building a private cloud but the hardware will be all
provided and managed by one of the main server manufacturers. It's private
because no other company will use these servers and they reside within our
firewalls. But we rent the infrastructure and are not responsible for
maintaining it or capacity management. We are just charged based on our
consumption of it. I would say that to be a cloud it also needs an automation
layer on top like OpenStack or vRA.

~~~
mey
I would agree that would be called private cloud. I guess it needs a
definition behind "cloud provider" according to the author. I think hardware
solely for your use (including networking/disk) would be my minimum
definition, but not bare-metal. I'm coming at this from a security boundary
perspective and where liabilities lie, and where provisioning of resources can
be dynamic and moved around.

------
dantiberian
This is a great article, but it would be good to state up-front that the
author works for a company that sells a service designed to help you go on-
prem. It's not totally clear until later in the article that this is the case.
It would also help put the article in context better.

~~~
MattRogish
Another company that does this - very little marketing - is
[https://www.replicated.com](https://www.replicated.com) . I don't get paid
for recommending them but they do great work (some of our clients are theirs
etc).

~~~
nraynaud
I'm doing a contract for one of their clients; it's quite a challenge to
develop against that.

~~~
BenjaminCoe
We've had the opposite experience using Replicated for npm's on-prem npm
Enterprise software.

I was originally trying to build our Enterprise solution using Ansible,
targeting a few common OSes (Ubuntu, CentOS, RHEL); headaches gradually began
to pile up, surrounding the "tiny" differences in each of these environments
-- I'm VERY happy to offload this work.

It took me a little while to wrap my head around best practices regarding
placing our services in Docker containers, but once I was over this conceptual
leap I was quite happy.

------
agibsonccc
Note: This may not generalize to your use case. We mainly serve big data
customers, including banks and telcos, and have also seen other environments
similar to the "air gapped environments" listed below.

That being said: I would just like to add some color to this.

>> However, not every customer that wants on-premise is a government agency
with air gapped servers.

This is the bulk of our customer base and also a very large portion of the
market still. I deliver software via DVD (neither flash drives nor wifi are
allowed).

A few notes from these kinds of customers: They won't let you just install
anything.

Docker is great but doesn't have a lot of enterprise adoption (despite the
self perpetuating hype cycle) outside the companies that already have mature
software engineering teams as a core competency.

They are often running centos 6.5 or less yet.

A lot of these environments still require deb/rpm based installation.

Admins at companies that run on prem installations tend to be very reserved
about their tech stack. Docker looks like the wild west to folks like that.

Our core demographic: we do a lot of Hadoop-related work. We have a dockerized
version of [1] that we deploy for customers.

We have also been forced to go the more traditional ssh based yum/deb route.
We have automation for both.

They are right that many "on prem" accounts are now "someone else's AWS
account".

We also have to run stuff on Windows Server. Docker won't fly in that kind of
environment either. Microsoft still has a large market share in the enterprise
and will for a long time.

K8s and co. are great where I can use them, but it shouldn't be assumed that
they will work everywhere, let alone in most places in the wild yet. Hopefully
that changes in the coming years.

Again: This is one anecdote. There are different slices of the market.

[1]:
[http://www.forbes.com/sites/curtissilver/2016/10/03/skyminds...](http://www.forbes.com/sites/curtissilver/2016/10/03/skyminds-
deep-learning-skil-seeks-to-clean-up-enterprise-data-junk-drawers/)

~~~
rdtsc
> They are often running centos 6.5 or less yet.

RHEL. Quite often 5.x still. They wouldn't let us ship CentOS to them. It was
brutal.

~~~
devonkim
Some are on Oracle Linux, which on a few occasions is not compatible with how
RHEL or CentOS work, despite the similarities and the pitches. Patching
OEL is more awkward in my experience mostly because of Oracle rather than for
any technical reason.

~~~
di4na
Oracle Solaris, anyone? I am.

------
jacques_chester
I work for Pivotal, which has a slightly different horse in this race: Pivotal
Cloud Foundry. It's based on the OSS Cloud Foundry, to which Pivotal is the
majority donor of engineering.

Lots of customers want multi-cloud capability: they want to be able,
relatively easily, to push their apps to a Cloud Foundry instance that's in a
public IaaS or a private IaaS. They want to be able to choose which apps go
where, or have the flexibility to keep baseload computing on-prem and spin up
extra capacity in a public IaaS when necessary.

It also happens that lots of CIOs have painful lock-ins to commercial RDBMSes,
and they don't want to repeat the experience. They want to avoid being locked
into AWS, or Azure, or GCP, or vSphere, or even OpenStack.

CF is designed to achieve all of these. The official GCP writeup for Cloud
Foundry[1] literally says "Developers will not be able to differentiate which
infrastructure provider their applications are running in..." (can't say I
_completely_ agree, GCP's networking is pretty fast, which is I guess what
happens when you have your own transoceanic fibre).

If I push an app to PCFDev -- a Cloud Foundry on my laptop -- it will run the
same way on a Cloud Foundry running on AWS, GCP, Azure, vSphere, OpenStack and
RackHD.

[1] [https://cloud.google.com/solutions/cloud-foundry-on-
gcp](https://cloud.google.com/solutions/cloud-foundry-on-gcp)

~~~
smashed
As a devops engineer working with CF daily, I definitely know where my apps
are running, and that moving an app from cloud to cloud or to on-prem is not
'seamless'.

I have moved dockerized apps from public cloud CF to on-prem CF. It all worked
because the apps were stateless to begin with. Pivotal appropriating all the
merit of easy migration of stateless apps is dishonest at best.

Stateless apps are easy to move. They can be moved from CF to Kubernetes to
ECS to Heroku.

Now, let's talk about the state, because that's where the real problem and the
lock-in might be?

~~~
jacques_chester
> _Now, let's talk about the state, because that's where the real problem and
> the lock-in might be?_

State is hard, state has momentum. We do what everyone else does: we push it
out into a separate parallel space. Heroku do that with their PostgreSQL and
Kafka services. We do it with services. IBM, Amazon, Google, Microsoft all do
it with services.

Cloud Foundry is agnostic to the services. More agnostic if you use BOSH. Less
agnostic if you use a non-BOSH service (like RDS).

Heroku isn't. You're married to their PostgreSQL unless you want to build your
own or switch to RDS.

Kubernetes has more of an emphasis on attaching and detaching volumes. I can
do that with BOSH; in future it'll be an app-level feature too.

(edited to remove unnecessary grump)

------
ddorian43
Can't you just, like, go dedicated first? Or better, start dedicated and spin
up cloud capacity every day at 6pm when your traffic is 10x (really, no one has
explained how they can scale their DB that fast (because you can't), only the
app tier, which is probably badly designed if it's that slow).

~~~
api
Tangent but:

Is anyone else noticing that bare metal is getting unbelievably cheap? OVH is
well known but there are now many vendors also offering crazy cheap bare metal
hosting with <1hr setup times, etc. Compared to the popular virtualized cloud
vendors it's a steal, especially when it comes to CPU, RAM, and storage.
(Bandwidth tends to be about the same.)

~~~
qaq
Bandwidth is drastically cheaper than on AWS

~~~
jacques_chester
It's not quite apples-to-apples. Bare-metal and VPS providers lump all traffic
together into a single pool.

If you do that with an IaaS, you're nuts. Your static assets (which typically
represent the bulk of bytes served) should be served from a blobstore or a
CDN, which are priced dramatically cheaper.

If your mp4s are served from an EC2 instance instead of S3 or CloudFront, for
example, you're keeping warm by setting $100 bills on fire.

~~~
qaq
S3: 10TB of transfer is $902.16; from a dedicated provider, it's $60.

~~~
e12e
Take serverloft as an example - _not_ a low-cost provider. With your $99/month
server with a 1 Gbit/s uplink you get "flat bandwidth":

"Fair Use Policy

Use as much bandwidth as required: With all dedicated servers from serverloft,
you receive a traffic flat rate free of charge. Here our Fair Use Policy
applies, which requires a fair usage of all our resources as well as a
reasonable traffic usage.

Therefore, in case the bandwidth exceeds 50 TB on a 1 Gbit/s server during the
current billing month, we will decrease the bandwidth to 100 Mbit/s.

You will be able to monitor you current usage in the customer panel. We will
also inform you via e-mail should you approach the 50 TB maximum usage. In
this case you can purchase additional bandwidth for $10 per TB in order to
extend your traffic usage."

So, $99 for the server and 50TB a month included. Now suppose you want to run
your 1 Gbit/s connection at, say, 50% of max: 1 Gbit/s is about 0.125 GB/s, so
over a 30-day month that works out to roughly 160 TB. That means an additional
commitment of 110 TB * $10 = $1,100, plus the $99, for a grand total of about
$1,200 a month. How much transfer does that get you out of S3? First off, 160TB
out of S3 each month clocks in at $13,227 for just the bandwidth - not
including any storage. For the price of 160TB on dedicated, you get
approximately 12TB out of S3 (again without storage).

120TB out of CloudFront? $25,693/month.

Now, one low-end box might not be enough to keep everything going, but notice
the price difference here. If you bought four "low-end" boxes you'd get 200TB
of bandwidth included, and still be paying _a lot_ less than for just the
bandwidth on S3. It might even be cheaper to rent dedicated boxes set up as
caching proxies than to serve your data from S3/CloudFront...

Now, I'm not saying the cloud doesn't have merit, but just pointing out that
the price difference is _real_.
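
For what it's worth, the arithmetic above roughly checks out. A quick Python
sketch using the same assumptions, taking the quoted S3 figure as a given
rather than recomputing AWS's tiered egress pricing:

    # Back-of-the-envelope check of the numbers above, same assumptions:
    # $99/month server, 50 TB included, $10 per extra TB, 50% of 1 Gbit/s.
    seconds_per_month = 30 * 24 * 3600
    tb_per_month = 1e9 * 0.5 * seconds_per_month / 8 / 1e12   # bits/s -> TB/month
    dedicated_cost = 99 + 10 * max(0, tb_per_month - 50)

    s3_egress_quoted = 13227   # the S3 figure for ~160 TB cited above

    print(f"~{tb_per_month:.0f} TB/month: dedicated ~${dedicated_cost:.0f}, "
          f"vs ~${s3_egress_quoted} quoted for S3 bandwidth alone")
    # ~162 TB/month: dedicated ~$1219, vs ~$13227 quoted for S3 bandwidth alone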

~~~
qaq
that was exactly my point :)

~~~
e12e
Indeed :-) I just felt the need to go and look at some current numbers, from a
more expensive host than what I currently use (happy hetzner customer here).

------
jwatte
We've done our own hosting for twelve years now; hundreds of physical servers
in a co-lo. We do continuous deployment (have for basically ever) and multiple
identical shards with horizontal balancing, so there's almost no "special"
server that we can't afford to lose or upgrade.

While our hardware, ops team, and premises costs are a little lower than
high-volume EC2 at the same scale, the real win comes from the network.
High-quality transit through multiple redundant links, high-capacity routers,
and our own BGP is all much lower cost and more flexible than what we can get
from EC2. It's not even in the same ballpark! (Also, we put SSDs into our
database servers years before they were usable in EC2, so there was a win
there, too!)

Finally, we're hiring into our on prem team :-)
[https://IMVU.com/jobs](https://IMVU.com/jobs)

------
alexwilliamsca
This is an interesting post, as Gravitational is doing something very similar
to us - [https://jidoteki.com](https://jidoteki.com) - in fact that blog post
reflects much of what I wrote a year ago on our company blog -
[http://blog.unscramble.co.jp/post/134388066008/why-we-
care-a...](http://blog.unscramble.co.jp/post/134388066008/why-we-care-about-
on-premises). We focus on the more commonly known concept of on-prem (not
AWS/Kubernetes), in other words on dedicated hardware in a private datacenter
or hosting facility, by creating fully offline, updateable, and secure virtual
appliances designed specifically for self-hosting.

------
mcs_
Small team here... 20 customers on-prem. It takes more than a year to update
all of them. In some cases "policy" wants us to connect to staging over
TeamViewer, sharing the mouse with the DBA.

CI is basically impossible to accomplish.

It is hilarious discovering a bug after both of us (us and the customer) have
approved going live with the latest version.

The most exotic technology we have implemented is a CDN to speed up the
delivery of some components. And... in some cases we did not yet have
authorization even for that.

Honestly... on-prem is perfect to preserve your job... but man...

~~~
SteveNuts
It's not always like that; your issue is cultural/managerial.

We've got a solid CI/CD pipeline on-prem using "old school" tools like VMware.

------
throw2016
The whole point of cloud, and one of the main drivers for rapid adoption, has
been that it removes the need to invest in and manage your own hardware, staff
and capex.

Amazon, Google or others take care of that unless regulations require you to
be on-prem.

It never really had anything to do with the lack of software or management
tools. Not to take away from Kubernetes, but VMware, Cisco, Microsoft and
other enterprise vendors have always had this market sealed up with relatively
sophisticated capabilities.

Things like vCenter seem far ahead in terms of management, capabilities,
feature set, and policy, and are fine-tuned for enterprise use. Just take
distributed storage as an example, where options like vSAN, Nutanix or ScaleIO
do not really have any open-source counterparts. But this is not surprising
given the billions of dollars spent in enterprise.

~~~
smn1234
Which regulations require you to be on prem?

~~~
Steeeve
I don't know of any that _require_ you to be on prem, but if you can't arrive
at a reasonable liability agreement with your providers, cloud can be taken
off the table pretty easily for any information that requires public
disclosure of breaches (e.g. SSNs) or comes with hefty fines (e.g. patient
information).

------
lamontcg
The biggest obstacle is going to be keeping yourself lean in the face of
employees that want to build their own empires internally.

Since I worked at Amazon for 5 years I saw how to cut costs on networking and
hardware and to build only as much as you actually need. My experience after
working at Amazon is that most Enterprise IT is full of people who hide behind
Best Practices in order to expand their budget.

You can very easily wind up with massive amounts of spending with VMware, EMC
and Cisco. Unless you have an executive with a vision of how to contain costs
on networking, storage and servers, I wouldn't recommend on-prem; you can
easily wind up spending an order of magnitude more than you should be.

------
HorizonXP
We've been running a hybrid on-prem solution for nearly 3 years now. It's been
challenging, but Kubernetes has drastically simplified it for us. It now means
we can spin up a client site in 1-2 business days, provided we have a server
on-site ready to go.

------
gomox
I think unless there is a big turn of the tide, supporting on-prem is just
trouble these days. Most large enterprise customers are willing to use SaaS
already.

Forget about the huge issues in the support organization for a second: the
impact on-prem has on your release cycle has consequences that are hard to
fully grasp. So much for "continuous development and release" if you have to
keep supporting old versions of software for a year.

Build your stack so that you can easily migrate clouds (i.e. don't use all the
super high-level AWS APIs). It's a good idea in general, and it should make
going on-prem doable enough if you are worried about having that option at
all.

~~~
old-gregg
One of the points the author makes is that "on-prem" these days often means
"someone else's AWS account". When we started about a year ago we suspected it
would come up, but today it's turning into one of the most popular use cases.
And adopting something like Kubernetes in a smart way largely takes care of
your concerns regarding agility.

Now, the story time!

When I was at Rackspace, I was trying to analyze the top reasons our startup
customers would stop using some of our SaaS offerings. The most common one,
unsurprisingly, was they'd run out of business. But another top one was "they
got successful". As they got bigger and more successful (can't mention names)
they'd bring more and more in-house, eventually getting to a point that the
only products they were interested in were just servers and bandwidth.

So it depends. Some things just don't work well as SaaS, especially for
customers with money who can't scale their IT fast enough.

------
gagabity
Managed services are the killer feature of the cloud (RDS Aurora or DynamoDB
especially for AWS), and once your DB is in AWS you pretty much have to put
everything else in there.

------
gregmac
The product I work on started life as on-prem-only software. We're mostly
hosting customers in AWS these days.

The smallest on-prem customers run on a single server. We have multiple
separate web applications, and a couple separate backend services. In AWS, we
have each separate web app/backend service on its own set of auto-scaling EC2
instances, and have ELB+haproxy and some scripts to manage everything. Each
customer (100s) gets their own DB, but the DBs are all hosted across a few
actual DB servers.

We use the same codebase for on-prem and AWS, and there's really not much
difference in operation: as far as the applications are concerned, on-prem is
just the same multi-tenant environment that happens to have a single tenant.
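
Purely as a hypothetical illustration of that layout (Python, made-up names;
none of this comes from the actual system):

    from typing import Optional

    # Tenant -> database routing: hundreds of customer DBs spread over a few
    # shared SQL Server hosts in the cloud, while an on-prem install is the
    # same lookup with exactly one tenant.
    TENANT_DBS = {
        "acme":    ("db-host-01.internal", "customer_acme"),
        "globex":  ("db-host-01.internal", "customer_globex"),
        "initech": ("db-host-02.internal", "customer_initech"),
    }
    ON_PREM_DB = ("localhost", "customer_db")  # single-tenant install

    def connection_string(tenant: Optional[str]) -> str:
        host, db = TENANT_DBS[tenant] if tenant else ON_PREM_DB
        return f"Server={host};Database={db};Trusted_Connection=yes"

    print(connection_string("acme"))   # cloud, one tenant of many
    print(connection_string(None))     # on-prem, the only tenant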

However, I would absolutely go all-cloud if I could. Two major reasons:
AWS/cloud services, and database.

The first point is easy: we just can't really use any AWS or other 3rd party
SaaS stuff ourselves, other than for our own support infrastructure. The app
has to deploy to a self-contained on-prem environment. This is really both a
blessing and a curse. On one hand, we have to re-invent some things that could
otherwise be off-the-shelf from AWS or other offerings. On the other hand,
while switching to another cloud would not be trivial, there's nothing in our
core application code that relies on AWS at all.

This also makes our technology stack a bit constrained: we could benefit
immensely from a time-series database, but instead we use an RDBMS (MS SQL
Server) because, well, we have enough trouble administering the database
on-prem as it is.

The second point -- database -- is the cause of so, so much frustration. Very
few people can run databases well. It's the same old story: when you're small,
it's no problem, but as you grow the DB performance becomes critical (for
reference, while our smaller customers have DBs that are 1-2GB, dozens of our
customers have DBs in the 100s of GB, and a couple are over 1TB, and typically
there's about 15% inbound data per day, though after processing not that much
gets stored). I/O speed is always an issue.. we've had customers trying to run
with their I/O on a NAS that's also shared with a dozen other applications
("We spent 100k on our NAS and the sales guy said it'll do everything, so it
must be your application that's slow"). I have heard our customers say things
like "But I can't make a full database backup, I don't have enough disk
space".

There's also the inevitable "what hardware do I need to buy?" question, that
even our own sales team is frustrated by. Overspec, and they are upset at
cost, or -- probably worse -- are frustrated you made them spend all that
money when their machine sits mostly idle. Underspec, and it's now your
problem that it doesn't work, because they bought what you told them. It took
me months before I convinced even our own team the best we can do is provide a
couple examples of hardware setups, and caveat the heck out of it ("with this
many devices monitored, with an average of this many parameters, at this
frequency, with this many alerts configured, this many concurrent users,"
etc).

It can be hard enough to isolate the performance problems under load when you
operate the whole stack -- it's incredibly difficult when you can only
approximate a customer's setup, and even then, you have to somehow convince
them their DB config or I/O is inadequate and then need to spend money to fix
it.

------
OliverJones
Are your premises in a flood plain? What about those of your power supplier
and network supplier? These are important questions for many of us.

~~~
johnezang
... or under the approach and departure airspace of a major airport?

------
discodave
Reading through the bullet points near the top, I was mostly on board until
this one:

> Economies of scale: some customers have the capacity and ability to acquire
> and operate compute resources at much cheaper rates than cloud providers
> offer.

AWS is probably spending a billion dollars per year or more on hardware...
they get special things that are not available to the public like chips and
hard drives. You really can't compete with that.

~~~
kuschku
Actually, you can, because AWS still has insanely high profit margins.

Even going from EC2 to classical dedicated servers (OVH, Hetzner, etc.) will
get you about an order of magnitude cost reduction.

~~~
ZenoArrow
Exactly. There's a reason multiple large IT companies have invested heavily in
cloud services, and it's not so they can be in a market with hard to obtain
profits.

In terms of why cloud services are becoming popular for their users, it makes
things cheaper in the short term, and is an easier sell to management than on-
premises resources. It's also useful for putting together solutions when you
work in a company with overly restrictive local computing resources. However
if you're looking to save money in the medium to long term, and you can find
people you trust with server administration, then you're better off with on-
premises (perhaps only keeping a cloud-hosted load balancer if you wanted to
guard against unexpected spikes in network traffic).

------
sargun
I work in on-prem infrastructure software. The reasoning that our customers
have given us is that they care about security. We have to deploy to customers
who don't air gap their systems, but have fun access control policies. For
example, we regularly support customers who have issues using a web session.
We can't control their computers, but we can ask them to type (or copy and
paste) in anything we'd like. Oftentimes, the commands we ask them to run are
like: [K || K = {kv, [navstar, _]} <- ets:lookup(kv)]. Do you know what that
does? I can almost assure you that our customers do not. We ship compiled
binaries to the customer, and although we would never ever send them code that
could hurt them, we link against a dozen libraries (that we ship), and who
knows who could have poisoned those?

We also have a requirement for our software that time has to be synced. We
also recently started to actually check if people's time was synced (using the
requisite syscalls), and it quickly became our #1 support issue for a little
bit. Customer environments are far too uncontrolled to simply be tamed by such
a system as Kubernetes.
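
The comment doesn't say how the check is implemented; purely as an
illustration of the kind of syscall involved, here is one Linux-only way to
ask the kernel whether the clock is NTP-synchronized (an assumption about the
general approach, not their code):

    import ctypes

    # adjtimex(2) with modes == 0 is a read-only query; its return value is
    # the clock state, and TIME_ERROR (5) means "not synchronized".
    TIME_ERROR = 5
    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    def clock_is_synced() -> bool:
        # Zeroed buffer comfortably larger than struct timex; with modes = 0
        # the kernel only fills it in, so the exact struct layout isn't needed.
        buf = ctypes.create_string_buffer(256)
        state = libc.adjtimex(buf)
        if state < 0:
            raise OSError(ctypes.get_errno(), "adjtimex failed")
        return state != TIME_ERROR

    print("clock synced:", clock_is_synced())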

I think that if you can avoid on-prem software and use XaaS, you should. You
probably don't need to run your own datacenters, databases, etc., because it's
unlikely that you're better than GOOG, AMZN, Rackspace, DigitalOcean, and
others. It's very unlikely that your application will be running at sufficient
economies of scale to benefit from the kind of work Amazon and others have
done to run datacenters efficiently at scale. Not only that, but Google has
figured out how to run millions of servers.

Although GOOG / EC2 tend to make hardware available (NVMe, SSDs, etc.) far
after the market releases it, it's lightning fast compared to enterprise
hardware cycles. We still have customers who run our system on SAS 15K disks,
and prefer that over SSDs, even though we recommend otherwise. On the other
hand, if you control the environment, rather than spending days, if not months
on how to make your application more I/O efficient, you can simply spend a few
more cents an hour, and get 1000 more IOPs.

I have a technique (Checmate) to make containers 20-30% faster and expose
other significant features. Unfortunately, we have customers which run Docker
1.12, the latest version of our software, want to use containers, and yet they
want to use a kernel no newer than a third of a decade old. I have literally
spent months making code work on old kernels, at a significant performance and
morale cost. If we controlled the kernel, this wouldn't be a problem, and it's
yet another problem that K8s cannot solve.

Lastly, networking on-prem tends to be an afterthought. IPv6 would make many
container networking woes go away nearly immediately. Full bisection bandwidth
could allow for location oblivious scheduling, making the likes of K8s, and
Mesos significantly simpler. BGP-enabled ToRs give you unparalleled control.
Unfortunately, I have yet to see a customer environment with any of these
features.

I really hope the world doesn't become more "on-prem".

------
fouc
This can help get your SaaS to be on-premises:
[https://jidoteki.com](https://jidoteki.com)

------
bitmapbrother
He runs an on-prem services company. So, of course, it's time to reconsider
going on-prem.

~~~
ec109685
Read the blog and react to that instead of his motivation for writing it.
There aren't too many selfless acts in the world.

------
partycoder
Nope, it is not.

Going on-premises might have some advantages, but it comes with completely
different problems that you didn't even think about. Some of them:

* You thought of all the power redundancy, ideal cooling, humidity, etc... but then your office gets robbed and all your computers stolen.

* Network wiring... some people are lousy at it and create an entire spaghetti mess, use a crappier type of cable, you get crosstalk from other equipment, or someone steps on a cable and damages it. Which one is it? Good luck finding out.

* Hardware fails, power supplies fail, disks fail, everything can fail.

You can pick these battles, or you can focus on your software based service. I
suggest the latter.

~~~
brazzledazzle
I think when people say on-prem they mean a lot of different things but
generally that doesn't include a crappy server room in your office building.
They're probably thinking of a real data center.

