
Gravity: Upstream Kubernetes packaging tools - gk1
https://github.com/gravitational/gravity
======
catern
I find each new development in the field of deploying Kubernetes to be grimly
humorous. We had lots of techniques to package and deploy things before
Kubernetes, but they were complex and inadequate in some ways, and Kubernetes
fixes some of those issues. But it turns out Kubernetes itself is
substantially more complicated to package and deploy than the old solutions,
if you're deploying it yourself rather than using some proprietary cloud.
Oops!

What I just can't decide is whether we'll successfully put another layer of
container-container-containers on top of Kubernetes, or whether the whole
effort will eventually collapse and we'll extract the good parts out into
something simpler.

~~~
cdoxsey
> But it turns out Kubernetes itself is substantially more complicated to
> package and deploy than the old solutions

Not really. It's pretty much the same level of complication if you used the
same components. You could use all the same tools: Chef, Puppet, Ansible, etc.
Once you have it available, though, other applications are easier to deploy.

At any rate, this tool provides something entirely different. It lets you
image the entire data center and reproduce it somewhere else. Not sure how you
would've done that before.

------
alexk
Hey all, I'm part of the team at Gravitational, the company behind this
effort. Happy to answer any questions about this project.

~~~
mwcampbell
I wonder if you've considered building a complete VM image instead of a
tarball. A minimal, immutable VM image could be more finely tuned and hardened
than a tarball installed on top of a general-purpose distro. You could take
inspiration from CoreOS or LinuxKit here. Or do you find that sysadmins in
the kind of organization that installs these on-prem packages really want to
install on top of their favorite distro?

~~~
alexwilliamsca
A minimal immutable VM image which is finely tuned and hardened is the exact
approach we're taking with [https://on-premises.com](https://on-premises.com)
- except we haven't focused on k8s workloads. We've found that customers
would much rather import a VM than "install" something, however there are
valid use-cases which require special monitoring tools and other
customizations which are not possible on an immutable system. Gravity is
interesting and seems to meet that demand.

~~~
moondev
This look interesting, is there a way to try out meta without being funneled
into the sales pipeline?

What is the workflow for baking a machine? Are you using packer under the
covers or some other tooling? What on-prem machine image formats are
supported?

~~~
alexwilliamsca
There's still a bit of "manual" work to get someone up and running on our Meta
appliance, so unfortunately you would have to go through our sales process.
However, if you just want a video or screen recording of how it works, I can
put something online in the next hour or two.

For baking the machine, we use Ansible under the covers and have a set of Lisp
scripts to manage everything. As for image formats: qcow2, raw, vhd (and vmdk
in the .ova file).

Not trying to hijack Gravitational's thread, please contact me (email in
profile, or 'aw-' on FreeNode) if you want to discuss more.

------
mwcampbell
Requiring a dedicated volume with at least 50 GB of space and 1500 provisioned
IOPS just for etcd [1] seems excessive to me. How big a cluster, e.g. how many
nodes and pods, is this for?

[1]:
[https://gravitational.com/gravity/docs/ver/6.x/requirements/...](https://gravitational.com/gravity/docs/ver/6.x/requirements/#etcd-disk)

~~~
alexk
We're staying on the safe side with our recommendations here. We've seen many
scenarios where high latency on AWS volumes creates a lot of problems even
for small 3-node clusters.
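For context, etcd is sensitive mostly to sync latency rather than raw
throughput. Here's a rough way to eyeball that on a candidate volume - an
illustrative sketch, not anything from Gravity's docs (real benchmarks use
tools like fio; etcd itself calls fdatasync, while os.fsync is used here for
portability):

```python
import os
import tempfile
import time

def fsync_latency_ms(dir_path, writes=50, block=b"x" * 2048):
    """Average per-write fsync latency in milliseconds on dir_path's volume.

    Rough proxy for the disk-sync latency etcd depends on; not a
    substitute for a proper fio benchmark.
    """
    fd, path = tempfile.mkstemp(dir=dir_path)
    try:
        start = time.perf_counter()
        for _ in range(writes):
            os.write(fd, block)
            os.fsync(fd)  # force the write to stable storage
        elapsed = time.perf_counter() - start
        return elapsed / writes * 1000.0
    finally:
        os.close(fd)
        os.unlink(path)

# Usage (path is hypothetical - point it at the volume you'd give etcd):
# print(fsync_latency_ms("/var/lib/etcd"))
```

If the per-write latency is consistently in the tens of milliseconds, even a
small cluster will struggle, which matches the comment above.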

------
moondev
I played with this briefly last year but couldn't get it going in the time
frame, so I gave up. I wanted to like it, but getting started was a bit
unwieldy, and I was not a fan of needing Helm to bootstrap an installation.
The installation workflow was also quirky: I could not set a key/password
successfully for the life of me, and it was unclear whether the CLI needed a
browser.

May have to check this out again; hopefully the quick-start experience has
improved.

~~~
kevin_nisbet
Sorry to hear about your experience. The getting-started experience is an
ongoing challenge; there have been improvements in the last year, but we
still have a long way to go.

Sorry if this wasn't clear, but Helm actually isn't required at all; it's
just that a majority of our examples are written to use Helm because of its
popularity. The installation hooks really just boil down to Kubernetes jobs,
so anything that can be represented as a Kubernetes job can be used for any
of the hooks. That can be a simple script, a helm command, or a complicated
custom-built application.
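To sketch what that means in practice (names and image here are hypothetical,
not taken from Gravity's docs): a hook is just an ordinary Kubernetes Job,
for example one that runs a shell script at install time:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: install-hook        # hypothetical name
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: install
        image: alpine:3.18  # any image that can run your logic
        command: ["/bin/sh", "-c"]
        args:
        - |
          echo "running install hook"
          # could be a migration script, a helm command, a custom binary...
```

Anything expressible this way - script, Helm invocation, or custom
application - can serve as a hook.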

The only feature really tied specifically to helm is the catalog feature,
which is for building additional applications to be installed on top of an
existing gravity cluster. That feature was built around a helm chart as a
building block.

The CLI should only need to invoke a browser when doing third-party
authentication flows, i.e. using GitHub for login. The gravity user-invite
flow will also generate a link to send to the enrolling user, so they can set
their own password, set up 2FA, etc. through the web interface.

We've also been trying to use our community site
[https://community.gravitational.com/](https://community.gravitational.com/)
as a place to search for answers and ask questions.

Disclaimer: I'm a developer on gravity.

------
redwood
Hard not to wonder if k8s is the new hadoop. "If you build it they will come"
platform team thinking.

~~~
Thaxll
Hadoop did not even reach 1/10 of Kubernetes' popularity.

~~~
pas
That's very hard to eyeball in my opinion. Hadoop and its ecosystem fascinated
a big chunk of the dev (and ops) world. Everybody and their dog wanted to be
"into" Big Data. Data Lake. Realtime feeds, just add nodes (Cassandra), CQRS,
and of course when Facebook said they were using HBase for Messenger, it
meant that HBase was the new MySQL/Mongo/sliced-bread.

Then there was YARN, then Tez, Spark, Flink, and Drill, and various other
projects that added to the hype (Aerospike, RamSQL, Kafka + Storm).

And every new system had to be built as if it would be web scale from day
one. Instagram was acquired in 2012, just 18 months after launch, and
everyone knew that meant every new, even barely "social", thing would blow up
even faster than that. So you absolutely needed to plan ahead: scale, scale,
scale.

Compared to that people seem to be a bit more wary of k8s, especially because
it's targeted at ops folks, and they are naturally predisposed to oppose
changes they don't understand.

But that's just my - probably ridiculously non-representative - take on this
:)

