
How containers became a tech darling, and why Docker became their poster child - tosh
https://medium.com/s-c-a-l-e/how-containers-became-a-tech-darling-and-why-docker-became-their-poster-child-bfaf9ac87825
======
staunch
This answer doesn't include any mention of OpenVZ, which means you're not
asking a Linux expert. The answer sounds like the type confused competitors
invariably give, which boils down to "we did it first, they just made it
flashy and cool" and is never the real reason.

> _When you think about why would someone pick one Linux distro over the
> other, it’s because of — literally — the aesthetics of how the directories
> are laid out and what they’re called; the default shell and how much stuff
> was in there; and the packaging system. That was it._

This is straight from a Slashdot flamewar in 1998.

~~~
dasil003
It's worse than that. I was there at the founding of TextDrive, I shelled out
the cash for "lifetime hosting" which got them off the ground. You can call
this sour grapes, because it definitely is, but this picture that Jason paints
about how awesome his technology was and how he was there first on all this
stuff is laughable—because the technology _did not_ work. When he says
something like:

> _...the smartest thing we could’ve done was never let all the web 2.0
> companies go away. We had everybody back then. It was probably the biggest
> tactical error, if you will, but the tech was fine._

The reason they lost those customers was that their tech _was not_ fine. In
the early days, Jason was posting dozens of posts a day to the TextDrive forum
waxing on about how awesome the tech they were working on was; however, at no
point did it ever stabilize into something reliable. It was initially sold as
being the first non-oversold shared hosting service, which implicitly promises
stability, but in fact it delivered half the reliability of traditional shared
hosting like Dreamhost at 10x the price.

Now it was true that they were bleeding edge, and so they got a lot of these
companies on board, and even were the first "official Rails host" before any
other shared hosting could handle Rails, and they had a lot of brand equity,
which is how they managed to have this great early customer list. But it was
all just based on hype and a compelling sales pitch from Jason. It really rubs
me the wrong way how he has parlayed his mediocre-at-best technology execution
into an image as a technology pioneer which he continues to use to leverage
himself to new heights. The guy is a mediocre technologist, but a fantastic
salesman.

~~~
jimpick
Ex-Joyent employee here, but I don't go back that far.

The company has quite the history.

I have to disagree with your characterizations. The products changed a great
deal over the years; many things were tried, and many failed. Most startups
change their product significantly over time, and it's hardly surprising that
the business they are in now isn't one of the early iterations they tried and
failed at.

I strongly believe that Jason is one of the best technologists on the planet.
So there.

~~~
wpietri
What puzzles me about your comment is that you don't directly refute what
dasil003 was saying, which is that the technology didn't actually work, that
from the customer perspective the execution was "mediocre at best".

I'm sure he's a bright and likable guy, and from his resume I figure he must
be an inspiring leader. But for me, being a great technologist is all about
things actually working, about users actually getting served. Sure, one should
fail a lot in the lab, and sure, startups should be trying enough things that
some just don't have product/market fit. But that's very different to me than
promising the moon and delivering stuff that doesn't actually work.

~~~
jimpick
During my time there, the stuff was amazingly solid. It was Solaris, not
Linux, with a heavy dose of NetBSD ports. The customer support was the best in
the biz too, but only the larger customers really saw that.

There were always those legacy customers from years before who had their stuff
running on some boxes in an old datacenter that only a few of the original
staff knew about, and it required continuous care and attention. There's a
business lesson to learn there - if you sell a "lifetime" subscription for a
product that will become obsolete, you'll pay a lifetime PR cost in the future
when you can't continue with that product.

Coming from a pretty long Linux background, Solaris took some getting used to,
but once I figured it out, it really was a more solid, better-engineered
solution. But Linux has a huge community - the people who loved Solaris are
the old Unix graybeards, and that community wasn't growing.

Before my time, there were some debacles, such as a storage platform product
that ended up losing data, but by the time I was there, they had learned the
lessons, and I'll still say their core stack is superior to any Linux-based
stack used in the datacenter today. It's been 4 years since I worked there.

~~~
wpietri
Interesting. Thanks for the further details.

In their shoes I think the lesson I would learn is "don't go back on a deal".
They could have returned the money, transitioned people to another product,
shifted people to a different vendor, or offered people the option of any of
the three. I don't think it was initially a PR problem; it was bad behavior,
unilaterally breaking a promise, that got them in hot water.

------
pjc50
> _I think Linux took off [versus Solaris and FreeBSD] because of package
> management. I think that’s basically it. Docker’s taking off because it’s the
> new package management. It’s just that simple._

Solaris used to cost money and not ship with all kinds of user-friendly things
like a nice shell or a C compiler.

He's right about package management. This is being fought out again with the
language-specific package management systems. People want to install the tip
of the iceberg that is the software they actually want, without worrying about
the rest of the iceberg of supporting software and all its security updates
and version conflicts.

Containers also exist because UNIX has no good standard way of delegating
resources that aren't files. You can give users disk quotas but not memory
quotas. You can't delegate management of a TCP port or IP address to a user.
You can't delegate users into "sub-users" for their administrative
convenience.
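The memory-quota gap described above is the one cgroups eventually filled, and it's the primitive container runtimes lean on. A minimal sketch of capping a process group's memory through the cgroup v2 filesystem (assumes root, a cgroup2 mount at /sys/fs/cgroup, and the `demo` group name is made up for illustration):

```shell
# Sketch only: cap a shell (and its children) at 64 MiB via cgroup v2.
# Assumes root and cgroup2 mounted at /sys/fs/cgroup.
mkdir /sys/fs/cgroup/demo                                      # create a new cgroup
echo $((64 * 1024 * 1024)) > /sys/fs/cgroup/demo/memory.max    # hard memory cap
echo $$ > /sys/fs/cgroup/demo/cgroup.procs                     # move this shell into it
# Allocations past 64 MiB now get this group OOM-killed - a per-group
# limit that classic UNIX quotas and rlimits never really offered.
```

This is a privileged, stateful operation, which is part of why a tool that wraps it behind a friendly CLI caught on.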

Containers also answer the question of "what do I need to clean up when this
system has been compromised?" Because security is still a disaster area,
admins need a means of effectively partitioning systems and cleaning up after
breaches.

------
amelius
I haven't used Docker, but looking at the Wikipedia page, it seems that most
of what Docker provides is actually supplied by the Linux kernel. So I'm
wondering: what is Docker really, besides a thin layer on top of the kernel?

~~~
NeutronBoy
There was this [1] posted not long back, and it basically says exactly that -
Docker is a convenient layer around existing kernel functions and systemd. You
can do it all manually.

[https://chimeracoder.github.io/docker-without-docker/#1](https://chimeracoder.github.io/docker-without-docker/#1)
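The "do it all manually" point can be sketched with nothing but `unshare` and `chroot` from util-linux. This is a rough approximation of what Docker arranges for you, not Docker's actual mechanism; it assumes a root filesystem already unpacked at `./rootfs` (that path is made up) and a kernel allowing unprivileged user namespaces:

```shell
# Rough sketch of a "container by hand" with util-linux, no Docker involved.
# Assumes ./rootfs holds an unpacked root filesystem (e.g. from an image tarball).
unshare --user --map-root-user \
        --pid --fork \
        --mount \
        chroot ./rootfs /bin/sh
# --user/--map-root-user: appear as root inside a new user namespace
# --pid --fork:           fresh PID namespace; the shell sees itself as PID 1
# --mount:                private mount table, so mounts don't leak out
# chroot ./rootfs:        the unpacked filesystem becomes /
```

What's missing relative to Docker (image layering, networking, cgroup limits, a registry protocol) is exactly the convenience layer the linked article describes.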

~~~
digi_owl
Taking systemd for granted, sheesh...

