
ClusterHQ is shutting down - henridf
https://clusterhq.com/2016/12/22/clusterf-ed/
======
chrissnell
We've been running Kubernetes (500+ containers) in production for over a year
now. I believe (and hope) that 2017 will be the year that persistent data
storage will be solved. We are ready to move our data out of OpenStack and
have our data services (Elasticsearch, Cassandra, MySQL, MongoDB) join the
rest of our apps on Kube-orchestrated infrastructure.

But, we're not there yet. The options just aren't good enough. Look at the
list of PV types for Kube [1]. You have technologies like Fibre Channel that
are simply too expensive when compared with local storage on a Linux server.
There's iSCSI, which is mostly the same story. Ceph is great for object
storage but not performant enough for busy databases. GCE and AWS volumes are
not applicable to our private cloud [2]. Cinder, to me, has the stench of
OpenStack. Maybe it's better now? NFS? No way. Not performant.

I'm looking forward to seeing what shakes out in the next few months. It's
just really hard to beat local storage right now.

[1] [http://kubernetes.io/docs/user-guide/persistent-volumes/#types-of-persistent-volumes](http://kubernetes.io/docs/user-guide/persistent-volumes/#types-of-persistent-volumes)

[2] Beyond a certain size, it becomes more cost-effective to host your own
Kubernetes cluster on managed or colocated hardware.

~~~
contingencies
_Look at the list of PV types for Kube_

What I see is a lot of complex network filesystems, vendor-specific solutions
and gateway protocols to expensive SAN solutions, which are already chalk and
cheese in terms of features and performance.

Arguably one of the best features of unix-style systems is support for
arbitrary mount points, filesystem drivers and (network or local) blockstores.
Storage is, essentially, a well-solved problem at the OS level. The fact that
this option is marked "single node testing only – local storage is not
supported in any way and WILL NOT WORK in a multi-node cluster" raises
eyebrows.

By choosing to expose individual remote storage model semantics as Kube-level
PV drivers instead of just leaving this to the OS, what I would argue we
essentially see here is the legacy of a cluster orchestration system that came
out of Google... a system optimized for large, homogeneous, dynamic workloads
to provide organization-internal IaaS, and not reduced feature-set systems
with simpler architectural properties (eg. no multi-client network-aware
filesystem locking).

I would argue that, in fact, what many people actually want is simpler, and
the current pressure to use 'one size fits all' cluster orchestration systems
with a high minimum bar of functionality and nodecount (read: minimum hardware
investment) is misplaced. At the very least, there's some legitimacy to this
line of thinking.

~~~
cookiecaper
Yes. k8s is cool but it is vastly overcomplicated for the needs of the non-
Googles. We've been porting my company's production infrastructure to it over
the last year and while it's been fun, I don't think it's been the correct
thing for us.

Since suggesting that your company is not in the same class as the companies
that see literally billions of unique users every day, and thus may not need
such overcomplicated solutions, is sure to make your boss irate, it's a good
idea to familiarize yourself with whatever new hotness has Facebook's or
Google's name attached to it.

Your clueless colleagues will race each other to announce the latest Google/FB
engineering blog post in Slack so they can look the smartest and then convince
your boss that since your Google-dom will be upon you tomorrow, you must adopt
HotNewStuff today. This impulse is behind the proliferation of Hadoop and "Big
Data", containers and orchestration, and MongoDB and NoSQL. All of these are
useful tools that are valuable and good as necessary, but widely abused
because people who don't really know what they're doing think this will give
them an out.

You'll be stuck maintaining something interesting but really not mature or
production-ready like k8s for years, just about long enough for it to become
smooth and stable, at which time something else will come along to repeat the
cycle. :)

~~~
ownagefool
Out of interest, what are you migrating from?

~~~
cookiecaper
Deployment across EC2 nodes, managed with devops scripts from a few different
tools and monitored with conventional monitoring solutions like Nagios/Munin.
We migrated from colocated racks to that a few years back.

Personally, while there is undoubtedly a convenience factor with being pure
EC2 and a cool factor with k8s, I think 80% of our stuff would be better off
in the racks (which included a couple of hypervisors, so we still had some
cloud-style flexibility and could do things like auto-scaling).

~~~
TheIronYuppie
May I ask - what's the biggest issue you've been facing? Anything we can do to
make it easier/more useful? We've found that there are a ton of things that
people just end up reinventing unless it comes in the box (e.g. autoscaling,
rolling deployments, rollbacks, replication, aggregated logging/monitoring,
etc).

Disclosure: I work at Google on Kubernetes

~~~
cookiecaper
To be honest, I haven't gotten super-into-the-weeds on Kubernetes. Another guy
is the main k8s guy, but I have used the cluster he's configured and deployed
a few containers on it. I've also had to troubleshoot a few nodes. A lot of
these complaints may be things that are already solved, but we just don't know
how/where/why yet. I think we're also using a relatively "old" version of k8s
(in young technology, "old" is anything more than a few months old), so some
of these issues may have already been addressed.

First issue for me: the recommended way to run k8s for local testing, etc., is
minikube. I've been running a hybrid Windows-Linux desktop env since June (full-time Linux
for 10+ years before that), where Windows is the host OS and my Linux install
is running as a VBox guest with raw disk passthrough. I have it configured
essentially so that Windows acts like a Linux DE that can run Photoshop and
play games, while I do all my real work through an SSH session to the local
VM, which is my Linux install (and which I can boot into natively if desired,
but dual-booting always impairs workflow, which was the reason I switched to
this setup in the first place; previously, I would reboot into Windows maybe
once a year even though there were games and things I wanted to try and photo
editing in VMs hosted on my Linux box was painfully slow).

This means that minikube, itself dependent on VMs to spin up fake cluster
members, won't work because VM hardware extensions aren't emulated through
VirtualBox's fake CPU. So that's the first hurdle that has stopped me from
tinkering more seriously with k8s clusters. I know there is "k8s the hard way"
and stuff like that too, but it'd be really nice if we had a semi-easy way to
get a test/local k8s up and running without requiring VM extensions, as I
imagine (but don't actually know) most cloud rentals don't support nested VMs
either.

Besides this big hurdle to starting out, many of the issues are high-level
complexity things that create a barrier to entry more than things that
actively get in the way of daily use once you understand them.

For example, we have 3 YAML files per service that need to be edited correctly
before something can be deployed: [service]-configmap.yaml,
[service]-deployment.yaml, and [service]-service.yaml. We have dozens of
services deployed on this cluster, so we have hundreds of these things
floating around. They're well-organized, but this alone is a headache. The
specific keys have to be looked up, they have to be in the right type of
configuration; if something that is supposed to be in the configmap is in the
deployment file, k8s will be unhappy, the right env variable won't get set
(more dangerous than it sounds sometimes), the wanted shared resource won't
get mounted correctly (and my experience is that it's not always obvious when
this is the case, and the mount behavior is not always consistent), or
whatever. Keys must be valid DNS names, or something like that, because etcd,
which runs under the covers here somewhere, doesn't accept names that would be
invalid DNS entries. This means no underscores. There's nothing wrong with any
of that per se, but it's a lot to wield/remember.
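
As a rough sketch of that three-file split, for a hypothetical service called `myservice` (all names, versions, and values here are illustrative, not from our actual configs):

```yaml
# myservice-configmap.yaml -- config values injected into the container
apiVersion: v1
kind: ConfigMap
metadata:
  name: myservice-config   # must be a valid DNS-style name: no underscores
data:
  LOG_LEVEL: info
---
# myservice-deployment.yaml -- what image to run and how many replicas
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myservice
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
      - name: myservice
        image: example/myservice:1.0
        env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:    # pulls the value from the configmap above
              name: myservice-config
              key: LOG_LEVEL
---
# myservice-service.yaml -- stable cluster-internal name/IP for the pods
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: myservice   # routes to pods carrying this label
  ports:
  - port: 80
```

Put a key in the wrong one of these three files and, as described above, k8s either rejects it or silently does the wrong thing.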

I also remember mostly thinking that the errors related to k8s configurations
and commands were unhelpful. For example, it took me a long time (a
frustrating 60-90 minutes, probably) to realize that `kubectl create
--from-file` wasn't reading in my maps as config structures, but rather as
literal strings. This seems like something that should be made obvious through
a warning on import: "--from-file imports your file as a literal; if you want
the contents parsed and used as a config, use `apply -f`". (And `apply -f`
means "apply the config read and parsed from the file", _not_ "apply with
force", while `create --from-file` means "create a literal string as a
resource instead of parsing this config into a config object". Be careful with
`kubectl apply`, too, because it will silently merge existing configs with new
values, which is sometimes helpful and sometimes can drive you nuts if you
forget about this behavior. I don't know whether deleting the configmap with
`kubectl delete configmap my-configmap` and recreating it is always feasible,
or whether that would give dependency conflicts.)
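
A sketch of the distinction, with illustrative file and object names (not our actual setup):

```shell
# Wraps the raw file content as a single string value keyed by filename;
# the YAML inside is NOT parsed:
kubectl create configmap my-config --from-file=my-configmap.yaml

# Parses the file as a Kubernetes manifest and creates/updates the object
# it describes (merging with any existing object of the same name):
kubectl apply -f my-configmap.yaml
```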

To deploy: `kubectl apply -f changed-yaml.yaml`, which sometimes does and
sometimes doesn't clean up the running pod (a service configuration thing? or
does it depend on which config type I'm applying: configmap, deployment, or
service?). Then `kubectl delete pod old_pod_id` if it isn't automatically
reaped (restarting is automatic under our config after a delete, which I'd
guess is configurable too). Then you have to `kubectl get pods | grep
service_name` to get the new pod id, and `kubectl logs pod_id` to make sure
everything started up normally, though this only shows the logs the container
wrote to stdout, not necessarily the relevant/necessary logs. Container-level
issues won't show in `kubectl logs` at all; they require `kubectl describe pod
pod_id`.
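
The redeploy loop above, sketched end to end (pod ids and service names are illustrative):

```shell
kubectl apply -f changed-yaml.yaml              # push the edited config
kubectl get pods | grep myservice               # find the old pod id
kubectl delete pod myservice-2409717304-x1abc   # if it isn't reaped automatically
kubectl get pods | grep myservice               # find the replacement pod id
kubectl logs myservice-2409717304-y2def         # stdout only
kubectl describe pod myservice-2409717304-y2def # container/scheduling events
```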

 _Then_ you have to `kubectl exec -it pod_id /bin/whatever` to get into the
right container if you need to poke around in the shell (and I know, you're
not supposed to need to do this often). Side note here: _tons_ of people today
are trying to move apps that run on Ubuntu or Debian into containers based on
Alpine, another mostly-unnecessary distraction, and it seems to result in
people just grabbing a random image from the Docker registry that claims to
provide a good Ruby runtime on Alpine or something, without looking at the
Dockerfile to confirm, which IMO is a much larger security risk than just
running a full Ubuntu container.

Lots of extended options like `kubectl get pods -o [something]` are non-
intuitive. I guess they're JSONPath expressions or something like that? Again,
that probably makes sense, but it's pretty unwieldy. I often have to do
`kubectl describe pod pod_id` to get useful container state detail.
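
For reference, the `-o` variants I mean, with an illustrative pod name (the last one is a JSONPath expression):

```shell
kubectl get pods -o wide     # extra columns: node, pod IP
kubectl get pods -o yaml     # full API-object dump
kubectl get pod myservice-2409717304-x1abc -o jsonpath='{.status.podIP}'
```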

When a running pod was going bananas, we had to `kubectl describe nodes`,
again a long and unwieldy output format, and we have to try to decipher from
the 4 numbers given there what kind of performance profile a pod is
encountering. This leads us into setting resource quotas to make sure that
pods on the same node don't starve each other out, which is something I know
the main k8s guy has had to tinker with a lot to get reasonably workable.
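
The tinkering amounts to settings like these on each container spec (values are illustrative; picking them correctly per-service is the hard part):

```yaml
# Fragment of a deployment's container spec (hypothetical values)
resources:
  requests:        # what the scheduler reserves when placing the pod
    cpu: 250m      # a quarter of a core
    memory: 256Mi
  limits:          # hard ceiling enforced at runtime
    cpu: 500m
    memory: 512Mi
```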

Yes, we have frontend visualizers like Datadog that help smooth some of this
over by giving a near-real-time graph with performance info, but there's still
a lot of requisite kubectl-fu before we can get anything done. I also know
that there are a ton of k8s and container ecosystem startups that claim to
offer a sane GUI into all of this, but I haven't tried many yet, probably
because I'm not really convinced any of this is necessary as opposed to just
cool, which it undoubtedly is, but that's not how engineers are supposed to
run production environments.

I mean all of this doesn't even scratch the surface, and I know they're not
huge complaints, but they just speak to the complexity of this, and a
reasonable person has to have some incentive to do it besides "It makes us
more like Google". Haven't talked about configuring logging (which requires
cooperation from the container to dump to the right place), inability to set a
reliable and specific hostname for a container in a pod that will persist
through deployments, YAML/JSON/etcd naming and syntax peculiarities in the
deployment configs, getting load balancing right, crash recovery, pod
deployments breaking bill-by-agent services like NewRelic and Datadog and
making account execs mad, misguided people desperately trying to stuff things
like databases into this system that automatically throws away all changes to
a container whenever it gets poked, because everything MUST be using k8s,
since you already promised the boss you were Google Jr. and he will accept
nothing less, and a whole bunch of other stuff.

All of this ON TOP OF the immaturity and complexity of Docker, which _itself_
is no small beast, on top of EC2.

That's QUITE the scaffolding to get your moderate-traffic system running when,
to be honest, straightforward provisioning with more conventional tooling like
Ansible would be more than sufficient -- it would be downright sane!

SOOOOOOOOO ok. Again, I'm not saying there's anything _wrong_ with how any of
this is done per se, and I'm sure some organizations really do need to deal
with all of this and build custom interfaces and glue code and visualizers to
make it grokable and workable, and of course Google is among them as this is
the third-generation orchestration system in use there. None of this should be
taken as disrespectful to any of the engineers who've built this amazing
contraption, because it truly is impressive. It's just not necessary for the
types of deployments we're seeing everyone doing, which has nothing to do with
the k8s team itself.

I'm sure that given the popularity of k8s, people will develop the porcelain
on top of the plumbing and make it pretty reasonable here in the not-so-
distant future (3-5 years). However, like I said in my original post in this
thread, I don't think this is benefiting many of the medium-sized companies
that are using it. I think, to be completely frank, most deployments are
engineers over-engineering for fun and resume points. And there's nothing
wrong with that if their companies want to support it, I guess. But there's no
way it's necessary for non-billion-user companies unless you REALLY want to
try hard to make it that way.

I could write something extremely similar to this about "Big Data". Instead of
concluding with suggesting Ansible, we could conclude with suggesting just
using a real SQL server instead of Hadooping it up with all of those moving
parts and quirky Apache-Something-New-From-Last-Week gadgets and then
installing Hive or something so you can pretend it's still a SQL database.

Is there a way to make over-engineering unsexy? That's the real problem
technologists who value their sanity should be focusing on.

------
BinaryIdiot
How many people were employed at ClusterHQ? Honestly I never even heard of the
company but I had heard of some of the open source projects. Maybe I'm just
out of the loop.

Also any information as to lessons learned, etc? Basically why it failed?
Looking at the marketing material I didn't see anything really remarkable
about it (nothing that stood out as a "oh this is why I would give them
money") so I'm curious.

> I’ve been part of big successes as well as failures. While the former are
> more pleasurable, the latter must be relished as a valuable part of life,
> especially in Silicon Valley.

Relished? I never really understood the Silicon Valley "failing is awesome!"
mentality. Failure is failure. It's not awesome. Why would you relish it? Take
the lessons learned, for sure, but you likely just lost several people's money
and cost your employees their jobs; what is there to take enjoyment from?
Seems a little sadistic and a tad lacking in empathy for others involved.

But maybe that's just me.

~~~
acidbaseextract
Ed Catmull, Pixar cofounder and inventor of the Z-buffer, has a great take on
mistakes and failure in his book Creativity, Inc. Here's a pretty decent
summary:

[https://www.brainpickings.org/2014/05/02/creativity-inc-ed-catmull-book/](https://www.brainpickings.org/2014/05/02/creativity-inc-ed-catmull-book/)

Essentially, we're going to fail. It happens. Might as well get it out of the
way.

Secondly, failure-averse cultures don't actually prevent failures, and they
have a tendency to squash innovation.

~~~
geofft
Having a culture where failure is okay is different from having a culture
where failure is _valued_. Like the parent commenter said, if failure means
you look back at what happened, learn some lessons, do better next time, then
that's fine, you've gotten value out of failure. If you say "Hey, I got a
failure out of the way," you're playing slots and falling victim to the
gambler's fallacy.

> _Make New Mistakes. Make glorious, amazing mistakes. Make mistakes nobody’s
> ever made before._

> _Mistakes aren’t a necessary evil. They aren’t evil at all. They are an
> inevitable consequence of doing something new (and, as such, should be seen
> as valuable; without them, we’d have no originality)._

So, was this mistake glorious, amazing, or new?

~~~
hinkley
David Anderson (kanban) once observed that if your estimates are accurate then
they should be wrong about half the time (and half of those overestimated).

The pressure to make them into promises instead of estimates is, I think, a
form of failure aversion, one that almost everyone deals with and one that
causes a lot of unnecessary drama.

I don't remember whose quote this is, but there's the old line that if you
aren't failing from time to time, you aren't trying hard enough. Your reach
can't exceed your grasp if you don't reach at all.

However, it's easy to fail from sheer stupidity as well. Failure is a trapping
of success, not an indicator.

------
ferrantim
This is Michael from ClusterHQ. Just wanted to say thanks to everyone in the
community who helped make the last 2 and a half years a great experience. Sad
that it's ending now, but excited for what's to come.

~~~
jat850
Thanks for posting, Michael, and I hope good things for you in the future.

Do you foresee Flocker living on in open-source form?

~~~
ferrantim
Yes, Flocker will remain open-source and my hope is that the community
continues to improve it. Fli too, btw, for creating and managing ZFS snapshots
[https://github.com/ClusterHQ/fli](https://github.com/ClusterHQ/fli)

~~~
jimjag
Donating the code to a FOSS foundation, like Apache, might be a way of
ensuring the community continues. Or, at least, give it a fighting chance to
do so...

DM me if curious

~~~
ryao
I raised that question during the final company meeting yesterday. The
ownership of the code is in limbo until the investors decide what to do with
it at least a few months from now.

------
calgaryeng
\- December 22, 2016: ClusterF__*ed

\- December 15, 2016: Reflecting on a Year of Change and What’s to Come in
2017 ("All in all, 2016 was full of tests and triumphs and I can promise that
2017 will also be a big year for the company.")

I'll be the first to admit I don't know anything about this company, but
that's an interesting change of heart.

~~~
dsp1234
For reference, here is the post from 12/15 where the statement "I can promise
that 2017 will also be a big year for the company" was made by the CTO. Having
the CTO make that statement when the company was about to shut down seems odd,
which implies it was not an orderly shutdown.

[https://clusterhq.com/2016/12/15/container-predictions/](https://clusterhq.com/2016/12/15/container-predictions/)

~~~
mathattack
And that the CTO wasn't kept in the loop.

~~~
B1FF_PSUVM
Or the Wotton hypothesis: "An ambassador is an honest gentleman sent to lie
abroad for the good of his country."

------
Johnny555
It seems a little odd to do this 3 days before Christmas. Holiday depression
is already a real problem, and making people unemployed a few days before the
holiday sounds like a bad thing.

And it's a terrible time to be job hunting.

Why not hold on a few more weeks, let employees enjoy the holidays and
announce in mid-January when employees can actually talk to hiring managers
and get some good job prospects instead of being met with out of office
messages?

~~~
mangeletti
It's likely that people within the organization already knew things weren't
going well.

So, the bright side of this is that:

A) employees can spend the entire holiday week or two with their families, and

B) employees can start out their new year unencumbered by the stress of
working for a failing company

Personally, I'd rather have it this way than come back from whatever fun
holiday adventure I'm on to find out a week later that I'm losing my job...
that's a terrible way to start the year's momentum.

~~~
Johnny555
Sure, there could be mitigating circumstances, but I've worked for a startup
that folded just after Christmas while the office was closed for the holiday,
we were all fired by phone.

It sucked.

Spending the holidays with family can be even more stress-inducing when you've
just learned that you lost your job. No ability to go out with local friends
or former coworkers to commiserate about being fired and talk about job
prospects. I was too distracted with job searching to really relax and enjoy
time with family.

Rumor was that the investors pulled the plug before the end of the year for
tax reasons, but I'm skeptical since it took months to sell off assets
(physical and virtual) and wind down the business.

I made out pretty well though, got a lucrative contract with the company that
bought the core software to keep it running for them until they could merge it
into their systems.

------
slantview
Or maybe the technology wasn't correct for what most people are trying to do
with Docker these days. Flocker never felt like it quite fit in the ecosystem
along with Mesos, Kubernetes, etc.

Great effort, guys; the tech is cool, but technology will continue to evolve,
and if you bought completely into something that doesn't fit nicely with the
movement, you will get left behind.

Edit: not sure why the downvotes, I was not being sarcastic. The comments
about why "pioneers get arrows" in the post made it seem like they had a
perfect product, the world was just not ready for it.

~~~
moondev
Since when is Flocker competing with those platforms? It's designed to work
with them. [http://kubernetes.io/docs/user-guide/volumes/#flocker](http://kubernetes.io/docs/user-guide/volumes/#flocker)

~~~
slantview
I never said it was competing. I said it didn't seem to fit nicely. I run
several very large clusters, and we evaluated Flocker and it didn't fit nicely
into the ecosystem. It felt very "bolted on".

~~~
moondev
I see, sorry for misunderstanding. What did you move to for persistent
volumes?

~~~
slantview
We are using DC/OS (Mesos) and found it to be much more feature-rich for our
needs.

------
wmf
I appreciate the honest tone of this announcement without any "incredible
journey" nonsense.

------
jcoffland
This is why it's a really bad idea to rely on PaaS/SaaS for your next project.
When the company tanks (or cancels the product, changes the API, raises its
prices, etc.), you're screwed. I hope no one out there was heavily committed
to FlockerHub.

What we really need is better business models for supporting Open-Source.

~~~
gkoberger
Counter-argument: if you don't rely on PaaS/SaaS, it'll take you 3-4x as long
to launch. So rely on them, but be ready to switch. A few bad things like this
shouldn't take away from how services enable rapid development and iteration
of ideas.

~~~
jcoffland
Why would it take you 3-4x longer to launch? Running your own servers in the
cloud is not that hard. You can actually save time by not being restricted by
the PaaS. For example, if you have special needs (and you will), you can go in
and hack the software.

~~~
karthikb
If you don't have special needs, then you as a small team or company are
taking on a burden that is large enough in scope for an entire company to
focus on. It's why most startups just use GMail instead of rolling their own.
Sure it's easy to set up an email server, but now it's one more thing to think
about.

~~~
jcoffland
GMail is a bad example. If GMail were to close you could just move to another
email server.

Besides, using a PaaS is not free and I don't mean in terms of money.

------
abrongersma
That's a shame. I've always had great interactions with the ClusterHQ team.
Michael, Mohit and Carissa have always been incredibly friendly when I've run
into them at Dockercon. Unfortunately my engineering team was never able to
fully integrate flocker into our production environment as we relied heavily
on custom storage driver actions. Wish you folks all the best in your next
projects.

------
Animats
Maybe "stateful containers" aren't a good idea. The whole point of containers
is supposed to be that they can be duplicated and loaded into many machine
instances. "Stateful containers" with changing databases inside can't be
treated that way.

------
FrankenPC
I should make a startup called Trampoline. Other startups pay me insurance
premiums, and when one crashes I hop in with a team and salaries for the
ejected employees, keeping the doors open for however long they paid for.
As part of the customer SLA, they cite Trampoline and the duration of
post-mortem life being paid for.

~~~
doublerebel
I like this idea, but I wonder how it would affect the business decisions of
founders once they know they have a safety net.

------
matt_wulfeck
> _Mark Davis (CEO) explains this opportunity as, “Imagine if you were the
> 10th engineer at VMware. That’s the kind of experience you’re going to have
> with us at ClusterHQ.”_

That was from a clusterHQ recruiter's email that I received just a week ago. I
thought it was weird to sell a position in that way.

I don't find it unreasonable that a recruiter was hiring people while the
company was closing down (what do they know?). I'm reminded that it's always
important to ask for specific financial information when hopping onto a
startup. What's your revenue? Expenses? How long is your runway?

My condolences to anyone who had hope, time, and effort invested in clusterHQ
stock options.

------
activatedgeek
I remember trying to set up a Flocker cluster and got brainf__*ed. Cert-based
auth in local development clusters was probably overkill.

~~~
moondev
Same thing here. The barrier to entry was too frustrating for a casual
evaluation.

------
moondev
Wow, this is surprising. I wonder what the reason for the shutdown is? Flocker
looked like a really cool product, but it was pretty involved setup-wise when
I was evaluating it.

What are the best options now for bare-metal? Ceph? NFS?

~~~
Goopplesoft
Maybe the docker infinit acquisition[1] caused it? Given Kubernetes' plug-and-
play storage classes (and gluster's maturity), plus docker planning to add
infinit natively, there might not have been much space for them.

[http://venturebeat.com/2016/12/06/docker-acquires-file-syncing-and-sharing-app-infinit-will-open-source-the-software/](http://venturebeat.com/2016/12/06/docker-acquires-file-syncing-and-sharing-app-infinit-will-open-source-the-software/)

------
finid
A little bit more info on why the outfit failed would have been nice.

~~~
dexterdog
They probably just ran out of money or a key relationship dried up causing
their runway to become impossibly short.

------
afulay
Its not like good container storage solutions don't exist for databases and
other stateful applications. The problem is in expecting it to be free and
open source. Building orchestration or simple file or object storage is easy,
but building high performance, resilient, scale out storage that can run on
cheap commodity boxes is a difficult task. Once you get over the "free"
requirement, there are some good options like ScaleIO and Robin Systems.
[https://robinsystems.com/containerization-platform-enterprise-applications/](https://robinsystems.com/containerization-platform-enterprise-applications/)

------
sidi
It would be interesting to hear what are the alternatives now to what they
were trying to do with
[flocker]([https://clusterhq.com/flocker/introduction/](https://clusterhq.com/flocker/introduction/)).

The post seems to make the point that other alternatives came up and removed
their competitive advantage. Is anyone here using either flocker or the
alternatives?

------
aorth
I'm sad to hear this. I loved reading Richard Yao's blog posts about ZFS on
Linux.

[https://clusterhq.com/2014/09/11/state-zfs-on-linux/](https://clusterhq.com/2014/09/11/state-zfs-on-linux/)

------
plandis
They are ceasing all operations. Someone might want to update their careers
page.

~~~
dexterdog
Or their home page.

------
EDreicer
How is this company shut down?

Almost the same day, my US-based company officially announced it was closing
down, but we haven't been paid since June 2016. I really want to know if this
happens in the US.

See also
[https://news.ycombinator.com/item?id=13242516](https://news.ycombinator.com/item?id=13242516)

Thanks

------
gtirloni
Oh the immediate shutdowns! After going through the immediate Nebula shutdown,
I'm glad we weren't depending on ClusterHQ.

Same question I had for Nebula: you had no idea a month ago that you'd have to
shut down, right?

I've started to follow these ex-CEOs so we avoid their next companies. This
kind of shutdown is just terrible.

~~~
tlb
Many -- probably most -- ultimately very successful companies had near-death
experiences. They aren't usually written about. Apple's near-death in 1997 is
well documented. Tesla's is described in Ashlee Vance's biography of Musk.
[http://foundersatwork.com/](http://foundersatwork.com/) has firsthand stories
about some others.

They would have become full-death experiences if the CEO had said, "Hey
everybody, we're near death, just FYI". So in the alternative world where
company deaths are always announced well in advance, far more companies would
die. Probably not a better world.

I don't know the story here, but in most cases there was some deal on the
table that would have saved the company but fell through in the couple of days
before the announcement.

Regardless, the right thing is to have enough payroll in reserve for an
orderly shutdown and transition plan for customers. It's not clear whether
that's happening here -- I hope so.

~~~
martincmartin
The founder of FedEx once saved the company by taking its last $5,000 and
turning it into $32,000 by gambling in Las Vegas.

~~~
caminante
Careful. There's so much BS in that guy's myth story. [0]

[0]
[http://www.snopes.com/business/origins/fedex.asp](http://www.snopes.com/business/origins/fedex.asp)

~~~
tlb
Yes, but the essence is true: at some point they had to scrape together enough
cash to buy the day's jet fuel or it would have been all over.

------
jrochkind1
> please accept our enduring, deeply felt gratitude.

Sure, but would 'apologies' have been out of order too?

------
zitterbewegung
So whats next for the software projects by ClusterHQ? Making them a part of an
Apache Incubator?

------
schmichael
If we're going to celebrate failure can we at least fail with respect,
humility, and maybe even a tiny bit of class?

The word "sorry" does not appear in this post. Instead of apologizing to
investors, users, and employees for letting all of them down the CEO writes a
contentless self-aggrandizing post.

The CEO also doesn't bother to thank anyone despite being literally and
metaphorically indebted to investors, users, and employees for getting as far
as they did. [Update: there was "gratitude" - my mistake; sorry]

Besides the self-aggrandizing "we did it first" tone of the whole post, here
are a few more parts I'd love to see future farewell posts skip:

> it’s often the pioneers who end up with arrows in their backs

Unless your point is that you were a company who tried to take what wasn't
yours and was punished for it... this phrase is awkward-at-best.

> I called these “Friends of ClusterHQ” by the sobriquet “FoCkers”

The use of "sobriquet" doesn't make your adolescent play on words classy.

> The big successes are literally impossible without the many failures. Take a
> moment to think about that.

What a ridiculous thing to tell your audience that includes employees looking
for jobs, investors out of money, and users without a service they may have
depended on. Out of those 3 groups only investors care about such things. The
other 2 groups are collateral damage to your hubris.

~~~
hoodoof
Unless an investor _additionally_ gave a loan, there is no debt owed in any
way to investors - that's the point, they made a decision to invest at risk
and presumably did due diligence based on accurate information.

~~~
Benjamin_Dobell
Errr, wait what?

Firstly, to say there's no debt, monetary or otherwise, to someone who
believed in your company enough to hand over their own money (or money in
their control) to back you is just pure arrogance.

Secondly, there absolutely _is_ debt. When a company goes into administration
the assets are sold to pay out those who hold _equity_ in the company. They're
literally owed debt.

 _EDIT_ : To clarify for those responding saying that equity does not
constitute debt...

Creditors and investors are different, absolutely. It is also correct that
investors are typically paid last during administration (although investors
may form agreements to come before _other_ investors). Nonetheless, during
administration, the administrator determines how much money the investors are
"owed", which is literally the definition of debt.

~~~
jdoliner
No matter how many times you say that this situation is "literally the
definition of debt", you're still not going to change the legal definition of
debt so that it applies here. Equity is not debt. Both do entail one entity
eventually owing another money, but in a legal sense that is not literally the
definition of debt, because it equally describes equity, which again isn't
debt.

From a practical perspective the difference is that with equity you accept a
lower floor (you might get nothing) in exchange for a higher ceiling (your
investment might 100x if the company goes public). That's the deal these
investors signed up for, and unfortunately for them they got option number 1.
They're not owed anything because that's the deal they agreed to.

~~~
Benjamin_Dobell
I have zero idea what the legal definition of debt is in the
country/jurisdiction you live in. I also didn't specify what country I live
in.

Additionally, my first point was one of metaphorical (social) debt.

Given the context and the fact I'm typing in English it ought to have been
clear I was referring to the _definition of debt_ in English. If that wasn't
clear, I apologise for the confusion.

~~~
praneshp
> I have zero idea what the legal definition of debt is in the
> country/jurisdiction you live in. I also didn't specify what country I live
> in.

Only definition that matters is the one in the jurisdiction this startup was
in.

~~~
Benjamin_Dobell
Last I checked this was a tech website not a legal one...

~~~
praneshp
So?

------
stevelandiss
You can't easily make a company out of selling free stuff?

------
alex_hitchins
Could this be in any way akin to the shutdown of Lavabit? I know it's not the
same type of company, but if there was pressure to put back doors in or any
sort of compromise, I would support the action.

If not, then it's a really bad way to shut up shop. OK the source is out there
but given the holiday season people might have appreciated a little warning.

~~~
robhaswell
Hi, founder of ClusterHQ here. I was just reminiscing over the demise of my
company, and saw this comment had not been replied to.

I can categorically state that there was no pressure to install back-doors or
any Lavabit-style problems.

As for your other comment, as a business you don't get to choose when you run
out of money. I believe there was a plan to secure more money, and when that
plan failed the employees were told immediately. The timing is irrelevant.

~~~
alex_hitchins
Thank you for the comment.

I appreciate that as a business you don't have complete control of your
destiny. There is a fine line between keeping employees informed and scaring
them witless, and I appreciate you were doing the right thing for your
company. I, or anyone else, would likely have done the same in your situation.

