Hacker News
RIP Flynn.io (github.com/flynn)
285 points by corobo on Feb 28, 2021 | 136 comments



Cofounder here.

We’re surprised and flattered at the interest and discussion.

For those wondering why we shut down here’s a brief summary:

We created Flynn to be a Heroku that you could run on your own infrastructure (cloud or otherwise). We started Flynn as a crowdfunded project with support mainly from other startups; we imagined it would be non-commercial and community-driven.

After the prototype it became clear that we would need a full-time team to develop and maintain the project. When we couldn’t find anyone who wanted to fund the project long term, we applied to YC and then raised a seed round, which let us build a 1.0.

Unfortunately we were never able to raise more VC funding, so we spent several years trying to build a business that would allow us to keep developing the project.

We spent the last four years with a skeleton team, and while we were able to build a business doing “ops as a service” for other startups, we ultimately couldn’t generate enough revenue to cover both development/features and the 24/7 on-call team we needed. While we were able to support our paying customers, we couldn’t deliver the features we wanted to build or support unpaid open source users.

We also take stability and support really seriously. We were extremely uncomfortable with the fact that people were running Flynn but didn’t know enough to run it safely. So we’d get panicked emails and GitHub issues from users who couldn’t pay us but needed help. That’s a terrible situation for everyone involved.

The ecosystem has come a long way since we started (we were one of the first Docker-powered projects and started about a year before Kubernetes was announced). But we still feel like there’s a need for something like Flynn :(

Unfortunately it’s expensive to develop full-featured platforms and we could never get VCs interested in a Series A.

The 24/7 on-call lifestyle got to be too much for our team, especially during COVID, so with no prospects of things changing we decided to call it quits.


Just a hypothetical question from a FOSS dev who's never used Flynn before:

If someone came to you right now and offered an amount of money to keep going, no strings attached, enough to support the applications that remain hosted on your platform after those who have already lost confidence leave, would you consider keeping it running?

What amount, in whatever currency you prefer, would make you consider it?

I've no horse in this race, but I'd like to get at least one datapoint.

This is not the first time I've seen a post like this, and it always makes me wonder if you could've gotten enough funds if you'd gone out and said, "Look, Flynn is going to shut down if we don't get X amount minimum to support us. If this platform means anything to you, now is your chance to save it."


Interesting question. I think there are two angles:

1. Having already shut down, with the team gone separate ways and onto new projects, how much would you need to get back on the horse, so to speak?

2. Are there still things you want to accomplish, and what would it take to make them succeed?

Personally I’d happily take a check to do more development. There are a lot of Flynn 2.0 features that were in various stages of completion; if someone was willing to fund development to complete them up front, that’s great.

However I wouldn’t want to do sales for this/run a b2b company around this again.

So if one of the companies that was using Flynn called and said they wanted to pay our development team to keep building, that’s great. But if a VC called and said “wait! Let’s finally make this company take off” there would have to be a decent-sized bonus check attached. Mostly that’s just because after years at a ramen-profitable startup as a founder your finances aren’t awesome, so saying “yes” means turning down higher-paying offers.

The biggest problem we had financially was being able to develop features before our customers needed them. We knew what to build but didn’t have enough budget to ship them. Any offer would have to come with the guarantee that we could focus on development for, say, 6 months before starting sales back up.

At the end of the day there’s work we left unfinished and if someone was willing to fund that in advance I’d generally be up for it. But if someone just wanted to keep the business wheels turning that’s not super attractive at this point, so it would need to be a https://levels.fyi salary rather than a startup founder salary.


Thank you for providing such a detailed answer.

You've gifted a lot of insight.

If you don't mind me asking, what are some features you think are most desirable for you to develop, and what would be your time budget for developing them, if it is more than 6 months?

And which title from levels.fyi do you think is most applicable to the person or people who would be working on that feature?


Happy to — it’s a great question! I want all software to be open source but we really haven’t figured out how to fund many types of open source projects yet.

- Database appliances (RDS) for major DBs

- Turnkey security, especially for compliance (SOC2, HIPAA, ISO 27001)

- Close the loop on development environments (VS Code)

- CI/CD

That’s pretty close to our Series A roadmap. Mythical man-months aside, I’d say all of that could be production-ready in around 18 months with a few engineers per bullet point.

In terms of salaries it’s a little tricky for a number of reasons (people are willing to work for less for a startup and/or on open source plus we hired around the world), but for many of the developers I’d say equivalent to L4-5.

The important thing to remember about PaaS is that multi-node and single-node are different worlds. There are great tools like Dokku that were designed only for single-node operation, then there are things like Flynn or k8s for many nodes. As soon as you’re doing many nodes you’re in distributed systems territory, where engineers have to manage much more complex systems design and as a result are more expensive and in more demand.

That’s not saying anything bad about simpler tools, they’re great and important, it’s just a lot harder to design around distributed systems, where failure modes, consensus, partitions, etc. are a lot more complicated. A lot of the challenges when we started were figuring out what the problems were and how to solve them rather than implementation time. In the last 5 years there has been a lot more written, so less research would be needed to implement some of these today.

Our overall goal was to automate anything in the devops lifecycle that could be automated, so that’s a never-ending process. Unfortunately it means that the more you scale the more technology you have to make, so Flynn for smaller companies with fewer users is less expensive to design and build than Flynn for huge companies with lots of traffic. (Again, distributed systems are hard.) So knowing the scale of your prospective users is a big part of the cost equation.


For me, as someone totally uninitiated, the database appliances sound like the lowest-hanging fruit with the fewest unknown unknowns and the biggest payoff, is that correct?

At the rates you described, for 6 months, that's about 200K USD per dev, so it would cost about 600K USD to develop the database appliances "superfeature"?

Do you think this is a realistic figure? And how realistic do you think it is to get that much money together from all the users?


It’s not too far off. Here’s how I’d look at it:

A $180k base salary is appropriate, but most of those people are expecting stock options as well (compare to Levels), so the fully loaded price* is closer to 300-500k/year x 3 people, so $1-1.5mm.
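The back-of-envelope math above works out as follows (a rough sketch using the commenter's own estimates, nothing more):

```python
# Rough sketch of the fully loaded cost estimate above: base salary is
# only part of what an engineer costs once options, benefits, and
# overhead are included. All figures are the commenter's estimates.

base_salary = 180_000              # USD/year, before extras
fully_loaded = (300_000, 500_000)  # USD/year per engineer, low/high
engineers = 3

low = fully_loaded[0] * engineers   # 900,000
high = fully_loaded[1] * engineers  # 1,500,000
print(f"${low / 1e6:.1f}-{high / 1e6:.1f}mm per year")
```

which prints the $0.9-1.5mm/year range quoted above.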

We’ve already done most of the conceptual and architectural work for Postgres, MySQL, and MongoDB (and Redis, but not highly available). Much of that was inspired by the Manatee project at Joyent. So (though I haven’t looked at this in a while and the DBs may have changed in ways that make it more challenging) this is probably a case of put money in, get value out.

However I can assure you you aren’t going to get even 600k out of the community.

We worked really hard to get the first 120k and twisted a lot of arms to get there. (Of course, if you know someone who wants to write 600k checks for open source stuff, please have them call me.)

The 1-1.5mm number is about what a standard YC company raises at demo day. So that’s the path I’d recommend. Add in another 500k for overhead and biz dev people and that’s a pretty solid startup. Honestly, if we had just focused on that we would probably have gotten a Series A.

One of the great things about open source is that you create 100x+ the amount of value that you capture. The dark side is that no one wants to contribute cash as users to make that happen (I’m thinking more of companies than individuals - lots of individuals will give a few dollars)

So, unfortunately, and as always, it’s easier to raise VC than sponsorships for open source.

That being said I strongly encourage anyone who’s interested to pick up where we left off. Either as a community project or a funded startup this would be a huge benefit to the ecosystem.

* Of course you can find people who are cheaper and passionate about the project to do it for less, maybe much less. But if someone called me and asked at what price I could guarantee this gets done, I’d say 1-1.5mm USD.


I was basing my estimate on a 6 month run.

Thank you very much once again for responding with this level of detail.

When you got the first 120k, who paid that? One entity or multiple?


Cool. We got the first 120k from, I think, around 12 companies. It followed something close to a power-law distribution, with a very long tail.

Our strategy was to encourage developers to talk to their bosses rather than contribute individually, which we think (but can’t prove) worked well. Basically we tried to get on calls with CTOs and CEOs and explain that this would help the company.


Is there anything you would've done differently during that campaign?

Did the money come from the companies' tech budgets?

Was it a one lump sum pre-payment or periodic payments? Were there any strings like deliverables attached?

Hope this is not too much to ask, I'm really interested in the process.

I am developing something almost completely the opposite -- a website platform designed only for small-scale deployments by individuals and small communities, with purposeful mechanisms to limit/control growth, and for now deliberately without any finances: fregan.

Recently I have been coming to terms with the idea that I may get a lot further if I attract outside help, but currently I don't have much to pay with, and I've not made it easy for someone to just jump in and start contributing.

I am learning a lot from your answers.


Honestly there aren’t a lot of products that were funded this way so I can’t point you to specific resources.

In our case the funding came prepaid with no strings attached. Can’t speak to where in their budgets the funds were allocated from.

I think we should build an ecosystem where companies support open source projects that benefit their companies with great ROI. Unfortunately I don’t know how we get there yet.

There are so many different funding models now for different things (https://humanipo.app/ for example) that there must be a good answer. We should all work together and try harder to find what the best option is.


Thank you. I agree.

The numbers I'm seeing on HumanIPO are not encouraging :)


Other cofounder here. I just wanted to add that we still have some t-shirts and stickers left. So if you want some defunct project/startup swag, we'd love to send it to you! https://shop.flynn.io


Still wear my flynn shirt from 2015 when you kindly did a YC practice interview with us! Thanks for that BTW, y'all captured the intensity of it very well :)


You’re welcome! Hope you’re doing well!

(For those who don’t know, all the partners in a YC interview ask questions of everyone simultaneously— this is complicated at the best of times and totally nuts when you’re trying to answer technical questions about distributed systems and questions about how you’ll build the business, but it’s a great mental workout!)


Thanks, we are doing well - never did get into YC, but it was a valuable process every time we applied - it gave us time, when we weren't occupied with day-to-day operations, to think about our strategy.

We're back in NZ and focusing solely on the education market, which has really accelerated our growth. Not sure when we'll be able to travel to the US again, but it would be great to catch up when that happens!


Out of curiosity, why do that? To see how people function under pressure?


I think that’s part of it, but it’s also the natural consequence of having several people on one side of the table trying to find out everything they can from the founders in a very short period of time while being genuinely interested in you and what you’re building.

Think speed dating plus lots of coffee.


Nice! Picked up a shirt and some stickers :) Sorry to see you guys go. Flynn was one of the first things that made me legitimately excited about the future of Docker. Thanks for all your great work.


Sorry to hear the bad news, John and Daniel. Thanks for all your great work, especially in promoting TUF. Hope you guys have found greener pastures now.


I posted a question for the co-founders, it's next to your comment. Would you mind taking a look, please?

https://news.ycombinator.com/item?id=26298288


24x7x365 is an expensive support option, and not all customers need/want it. I found that if I budget staffing for a manager and 7 employees, create "slots" for access at different tiers (segmented by business hours in different time zone ranges, response time, and support contract duration), and price access to 24x7x365 slots accordingly, the price-first customers will sort themselves out. Good support that performs properly as a function of the continuous development cycle is staffed by highly technical, high-soft-touch skill sets, and it is expensive to find and retain those people. Don't shortchange them or yourself, and charge the market accordingly.

3-7 years is about the maximum I would expect an unsustainably-structured support organization to last before people burn out, so you had a good run.


Just wanted to say thank you for answering the questions here! You gave a lot of insights! You could be a great mentor!


Sorry to hear about the lack of VC interest. I'm the founder at Render (https://render.com) and would love to chat and compare notes. Email in profile.


Technically illiterate here, pardon the stupid question. I hear so much buzz around k8s/docker/AWS/GCP/Azure and all that, but I simply don't understand what they actually do. If you go to the extra effort of running your product on your own infrastructure, what do you need Heroku/Flynn et al. for? Isn't the idea of PaaS to take care of server/database/hardware management etc. for you, so you can focus on developing your product?


Do you or anyone you know still provide "ops as a service"? Would love to talk to someone who does this.


Sad news, but I'm not surprised by this. The whole ecosystem was "killed" (if that can be said) by the K8s buzz and hipsterism (sorry guys, but I see K8s as the Hadoop/BigData of modern days - a solution from a huge company that has no place in 90% of setups). Alternatives like Deis [2] moved to K8s a long time ago. My favorite tool for some time, Rancher [3], did that as well.

I've been using Dokku [1] for a few years on a small setup, surprisingly without a single problem, taking into account it was written in "not-so-cool" bash. And I was considering Flynn as the next step if I needed to scale, because Dokku doesn't have clustering support (added: looks like clustering support for Dokku is in the works [4]).

After many checks, I got the impression Flynn simply wasn't there yet. Either because of the low development pace, the low number of supported appliances, or something else, I'm not sure. In the end, I picked Ansible for more distributed installations.

[1] https://dokku.com/

[2] https://deislabs.io/

[3] https://rancher.com/

[4] https://www.reddit.com/r/devops/comments/bgpw5w/flynn_vs_dok...


Dokku has been a workhorse for me. For all the fears of “but what about redundancy?”, I have run a fairly successful service on it for five or so years on a single Vultr VPS and made more money from that than from any other side project, or all of them combined. Glad I directed my attention to the product and not the devops like I had in the past. 10/10 would recommend.


Absolutely this. While I'm mildly curious what will come of dokku + kubernetes (and maybe a little excited, though with tempered expectations), setting up some simple scripting that points to dokku means I can be fairly sure that when I've set up a dokku box, the databases are backed up, and that should the box go down and everything but the backups get fried, I can have a new box set up and be back up and running in much less than an hour from the point that I notice.
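For anyone wondering what that kind of scripting can look like, here is a minimal sketch using the dokku CLI and the dokku-postgres plugin (the app, database, host, and path names are invented for illustration, and exact commands may vary by plugin version):

```shell
# Back up a dokku-managed Postgres database to an off-box location
# (hypothetical names; assumes the dokku-postgres plugin is installed).
dokku postgres:export my-app-db > /backups/my-app-db-$(date +%F).dump

# Rebuild on a fresh box: recreate the app and database, restore the
# dump, link the database to the app, then push the code again.
dokku apps:create my-app
dokku postgres:create my-app-db
dokku postgres:import my-app-db < /backups/my-app-db-2021-02-28.dump
dokku postgres:link my-app-db my-app
git push dokku@new-box.example.com:my-app main
```

With the dump shipped somewhere off the box, a script like this is roughly what "back up and running in under an hour" amounts to.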

So the only reason I might want something like Flynn is the case of an unexpected uptick in load. And unfortunately, in my first experience with Flynn, things went bad and I wasn't able to restore from backups, which scared me off more than load balancing worried me (read: I ran out of time for the experiments and vowed "one day I'll try again. Maybe."). That said, overall Flynn did seem reasonably polished, so I'm sad about the announcement.

Maybe, some day, someone will take on this middle ground between Dokku and Kubernetes. But until then, Dokku has definitely proved itself.


CapRover is also nice: you use Dockerfiles instead of Heroku buildpacks, and it has a nice GUI.


I recently transitioned from Capistrano deployments to CapRover and it's mostly working fine. The variety of ways to deploy apps was especially relevant to my situation.

My biggest complaint would be the non-zero-downtime deployments.


Slightly off topic, but when is 'big enough' for k8s to make sense? Do dozens of eng teams plus 50+ microservices qualify?

For a company that is a single eng team, it's obvious. But for progressively larger organizations, it seems like a gray zone.


I moved us from Heroku to GKE a few years ago when we had ~4 engineers - Took a bit to get things figured out and get CI deploys etc working well, but it really wasn't all that complicated.

Honestly baffled at how many people are convinced you need hundreds of engineers before k8s can work well - they must be doing something very different from what we are. We did keep databases out of k8s (until recently when we added CockroachDB running inside the cluster) and only had ~6 services to move which may have kept things simpler. We're now scaling to run thousands of vcpu at peak times and a dozen different services, and it's still not all that hard to manage.

Can somebody who has had the opposite experience comment on what actually made it so difficult to implement? I imagine that if you are managing the cluster control plane yourself that will make things much more difficult - but unless you have some very specific requirement you can use a hosted k8s to reduce that.


If you use GKE, you have been spoiled. Other Kubernetes platforms - in particular EKS, which is what most people are going to use since most people use AWS - are way more work to set up and maintain. (Emphasis on the "maintain".)

Also, I think a lot of companies have really terrible devops practices that don't work in Kubernetes. So their move to Kubernetes includes a lot of extra work that isn't really caused by the move to Kubernetes.


Yeah, that's probably a major factor here - we moved to google cloud specifically for GKE and it's been pretty great. Have had a couple of issues (mostly with kube-dns autoscaling not keeping up, node-local dns helped a lot), but ultimately it's saved far more work than it's created.


Not everyone can run in the cloud, and of the ones who can there are a number of organizations who can't put their data in clouds that have to comply with US court orders.

Now if you are in Europe, Scaleway got a K8s offering not that long ago, but the rest of the world has to run that stuff locally, and running K8s securely on a locked-down corporate network can be very complicated.


Sure, but the advice people generally give is that small teams and startups shouldn't use k8s - and those are the organizations that can most likely use one of the hosted k8s implementations (which are available anywhere AWS/GCP/Azure have datacenters). Larger corporates will have more complexity to work with, but also larger teams which can handle that.


Roughly how many person hours were needed to implement review apps on GKE? Do your GKE review apps auto-hibernate like Heroku review apps do when using AutoIdle (https://autoidle.com)?


We don't have review apps yet; we just have a few testing environments that devs can push changes to when they want them available. It is a really nice feature of Heroku though, so something it'd be nice to replicate. I think it'd probably take a week or so of dev time; the hardest part would be automating spinning up copies of all the dependencies we need.


I do this for a living (help companies migrate to k8s).

The advice I give everyone is: Stay off k8s until you care about binpacking. That is, making sure you're fully utilizing the instances you pay for. When the cost of your architecture is taking up some brain cycles, start digging in.

If that's low down on your priority list, it's not worth the investment. If you're reasonably considered "a startup", invest your time/money elsewhere. PMF and getting to default-alive are far more important.


How do you suggest companies which don't need binpacking run their workloads, and is it really that much simpler than using k8s?

If you need automated deployments, centralized logging, autoscaling, etc which many teams do, then you're going to be dealing with a bunch of complexity anyway.


Honestly, package a container and run it Serverless. AWS Fargate, GCP Cloud Run, and similar are better fits.
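As a concrete sketch of that path on Cloud Run (the project, service, and region names here are invented; assumes an authenticated gcloud CLI):

```shell
# Build the container image with Cloud Build, then deploy it to
# Cloud Run - no cluster to manage, and it scales to zero when idle.
gcloud builds submit --tag gcr.io/my-project/my-api
gcloud run deploy my-api \
  --image gcr.io/my-project/my-api \
  --region us-central1 \
  --allow-unauthenticated
```

The Fargate equivalent is wordier (task definitions, services) but follows the same shape: hand over a container, let the platform run it.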

There will come a time when the cost of paying the overhead for a devops person (and eventually a team) is worth it. At that point, k8s can be a great fit.

In my experience, that tends to be when you're at enough scale to care about costs a lot. Total spend and/or reducing COGS makes it worthwhile. But when you look at it in terms of what an engineer's time costs, it's easier to see.

Are you gonna save 200k/year (minimum) in costs moving to k8s? Then do it. If you don't have line of sight to that, pay AWS/GCP to manage that for you, and focus on your business.

Also note, there are stages even with running k8s. Don't go all in on running it all yourself.

Start with a container, run it serverless. When k8s becomes a better fit (to reduce costs, or with other small exceptions), use EKS or GKE. Don't run your own control plane.

If you really have a need for a lot of custom stuff, then start to run your own control plane. But by this time, you probably have a team managing all of this. If that cost (remembering how expensive engineers are) is shocking, you should be running a different solution.


I'm not certain if this is what you meant, but optimizing cloud spending is one of our organizationally agreed upon goals.


It’s not dependent on size but on what you do. If you run a mono stack, you initially spend less time on devops than when you run microservices. Anyone who tries to tell you otherwise hasn’t done a Heroku deploy lately. When your engineers spend enough time on devops to impact productivity is when you reach for a different solution. I can only see one other reason to start with microservices: you are developing something with much higher than usual security requirements and need an “air gap” between pieces of your system. Like if you store raw credit card numbers and want to separate their storage mechanism from the rest of the system with a really tightly controlled API.

Even so, K8s is a solution for when running containers on a container-as-a-service system is prohibitively expensive. The newfangled systems aren’t free in terms of operating costs, and would you rather pay for extra hardware or for engineering time, knowing that hardware doesn’t take vacations or leave your company for a better job?


> It’s not dependent on size but what you do

It is dependent on size. You could, for example, start with an engineering team working on a mono stack and outsource data and analytics operations. But at some point, you will decide that data and analytics have to be in-house. You start with a mono stack for each of them. But then you suddenly start splitting each of these three into sub-teams. Suddenly a mono stack doesn't make all that much sense.

K8s solves the problem of horizontal scaling in both teams and infrastructure. It's inevitable when (and only if) you plan on scaling up. Not that all teams/companies will/want to follow that path.


Is K8s the only way to deploy two different services that talk to each other? Why not something like AWS’s or Google’s or Microsoft’s container service? Why not a second app on Heroku or Google App Engine, etc?

If you have a team of 50 engineers but 40 of them work on the front end of your SPA, and the backend is simple CRUD, why do you need your own container infrastructure?

I stand by that the decision should be primarily based on how much of your team’s effort is diverted to devops. K8s is a devops solution, not a way to organize code or a development framework.


I'd take the alternate view. K8s is a huge relief for both development and operations, but it absolutely requires an entire team of dedicated folks along with re-tooling everything to fit its paradigm. This is true of all orchestration systems, but especially true for k8s. If you have a dozen eng teams and 50 microservices, and 2 or 3 devops/SRE people because it's all in AWS/GCP/Azure, k8s is going to crash and burn. If you have a dozen eng teams with 2+ devops/SRE folks in each of them, and a handful of extra folks to form a whole new team for k8s, you're in great shape.


We have multiple data centers and a similarly sized devops team (separate from SRE). They're exceptionally skilled and lord I pray they'll make it, because I know it won't be an easy journey.

None of our teams have dedicated devops engineers. Teams just have engineers who do everything: back end, front end, deployments, monitoring dashboards. Teams write runbooks for SRE. DevOps supplies eng teams with tooling for deployments.

I've argued many, many times we spread our engineers too thin and we need greater specialization for better outcomes. We have enough engineers, but the problem is everyone owns their own thin but very tall vertical slice of the total system.


If everyone truly understands their thin and tall slice, please keep those people; they are worth their weight in gold. If anything, it sounds like you have an SRE organization right now, and could use some extra developers and operators. :)


My experience doesn't match yours.

I've seen k8s managed by a couple of DevOps engineers (alongside the rest of the infrastructure) and used in a company with 3-5 teams (and 200 microservices, but not that much code; they just drank the microservices Kool-Aid). That was a migration from running containers on EC2 (semi-automated orchestration) to running and maintaining k8s on EC2.

Best infra experience I've seen in any company I worked at (and the company wasn't that great overall).

I also ran some smaller-scale k8s by myself while doing eng management work, so I disagree that k8s is as hard as people make it out to be.


We have 3 infra engineers + a manager for over a dozen devs and a bunch of services deployed on k8s (including all DBs). K8s itself is self-managed (nothing is hosted) and has been the least of our concerns operations-wise.


Jealous of your 4:1 ratio.


We’ll probably grow that ratio quite a bit in the coming year or two, but ownership is set up such that it shouldn’t be a proportional increase in our workload. K8s and some other tooling we use make that considerably easier if you thread that needle right.


> For a company that is a single eng team, it's obvious.

I'm not sure I get this. Are you saying that it obviously does or doesn't make sense?

My team is three people and k8s makes our lives easier than any alternatives I'm aware of. We used to be on Heroku, which is cool, until you need to run anything other than a monolith, or more secure than all-publicly-accessible services.


Maybe it's less about Kubernetes than about the market itself? If I don't want to do any ops, I also don't want to maintain a server and keep it safe, so I would go for Heroku instead of running that myself. And if you need more, there are also managed k8s offerings, which seem to be working fine for many small teams as well. For us it's basically a cheaper version of Heroku with a little bit more effort. We have actually been using small managed k8s clusters from DigitalOcean for nearly a year now and have had zero issues so far. I really like not having to take care of the servers.


Dokku, Deis, Rancher, and finally Flynn. Another one falls. I've been around all these projects and small-scale Docker PaaS for almost a decade, and k8s has just killed them off, which is so sad. As you say, you don't always want to use k8s for a small three-machine setup. I guess it was always going to happen: after containers stopped feeling cool, people stopped working on projects for them.


Re: Dokku, reports of our demise are greatly exaggerated.

We're still chugging along, and have recently added Cloud Native Buildpacks support[1], as well as integrations with Kubernetes[2] and Nomad[3]. Happy to hear where you found out that development on the project was halted, given that I made a release yesterday[4]...

    [1]: https://dokku.com/docs/deployment/methods/cloud-native-buildpacks/
    [2]: https://github.com/dokku/dokku-scheduler-kubernetes
    [3]: https://github.com/dokku/dokku-scheduler-nomad
    [4]: https://github.com/dokku/dokku/releases/tag/v0.23.9


Thank you so much for your work. Seems like you give attention to every issue filed.


> Dokku, Deis, Rancher, and finally Flynn. Another one falls.

Isn't Dokku alive and well? I don't use it but I read about it in some related research recently, and some people on this thread report to be happy users.


It works perfectly fine, chugging along nicely. This whole thread confuses me; the whole point of Dokku is that it isn't k8s.

It's not like you were gonna deploy Dokku at $BigCorp; no Kubernetes is a huge selling point for anything I'm gonna use in my free time.


> Dokku, Deis, Rancher, and finally Flynn. Another one falls.

AWS ECS is alive and kicking!


Sure K8s is kinda heavyweight but it's simple to use and complete.

My minimal self hates k8s but my get-stuff-done self uses k8s.


Aside from an easily-missed sidebar, there's nothing that indicates what Flynn was. This happens a lot, it seems, with "XXX is dead" posts on HN. The link is to something that says that XXX is dead, but there's often little or no indication of what XXX was in the first place.


In this case there is a very recent version of their landing page on archive.org: https://web.archive.org/web/20210203121152/https://flynn.io/

(Not that the page had changed in any meaningful way in pretty much forever)


Their tagline was: “Flynn is an open source platform (PaaS) for running applications in production.”

In some way an alternative for using Kubernetes and managed services of cloud providers.

I think there is a need for something that is easier to use than Kubernetes. I’ve started the 5 minute production app for this. The idea is to use managed services and Terraform to make stateful services easy. This way you’re not locked in but have flexibility going forward.


Go with Nomad then


That's fair, but I guess the people who will feel sad about this are the ones who know what it is/was. I remember back when it was being developed, along with Deis and later, Rancher. But the shadow of Kubernetes has caused them all to wither. :(


Seriously, the lack of context surrounding these sorts of posts is lazy and awful.


You can submit text or a URL, mate. I would love to have given you context. I'm not going to cram a company's history into the title; it wouldn't have got any upvotes :)

In simple terms it was a self hosted Heroku. It got massive attention here and raised a load of cash before kubernetes etc were a thing


Sometimes people will post a quick comment after submitting a link indicating why they submitted it.


I got distracted and forgot about it till I saw it on the homepage


I didn't mean to pick on you specifically, it's just a trend symptomatic of posts on HN. People tend to assume we know what a thing is just because it's been posted here.


I wonder about that... why can't you submit a couple of sentences of context with your URL? Even a single sentence would be better than the nothing we have now, where you need to guess what the URL is about.


I had no idea it wasn't possible either until submitting this, I just thought everyone was being lazy and assuming knowledge of a product as the person I responded to did with me haha



Cofounder here again.

I just wanted to say that while we’re sad to shut down and definitely didn’t accomplish everything we wanted to, we are deeply grateful for having had the chance to build something significant and get past a 1.0.

Our original crowdfunding supporters (almost all of whom found us on HN), Y Combinator, and our seed investors helped make that possible (plus of course all our open source contributors, alpha and beta users, early customers, and even the people who filed panicked GitHub issues because their sites had just gone down).

Most of our team worked together on Tent (a protocol for decentralized social networking) before this, which no VC would get behind. So we know how much it sucks to have a great idea that no one will pay you to build.

We’re very grateful to HN for the opportunity to build something so cool.

Neither of those projects would have been possible without HN. I know for a fact that we only got meetings, term sheets, into YC, and more on the strength of the HN comments and later the GitHub stars.

Before we had a working product we could at least say, “hey, these people all say they want what we’re trying to build” and that got us in the door.

We had lots of different proof points to show investors and even customers, but the most important thing was the number of upvotes and enthusiasm of the comments.

Thank you all for being so open minded and supporting our work. We literally wouldn’t have been allowed to do it without you.

Hope to have something even better to share with you all one day.


I was one of the maintainers of Flynn for a period of time and I still use it on a personal cluster that has been humming along problem free for several years.

I'm very sad to see it go of course but the writing was on the wall for a long time after the demise of all non-k8s container management systems.

This is the ultimate fate of tools like Nomad and services like ECS. k8s changed the game by providing a target API/platform that gained enough traction to force each of the large providers to support a managed implementation of it.

I miss my time working on Flynn, we were a very small team but we wrote some good software and I learnt a ton in the process.


Disagree with your assessment of the other container orchestrators. Agreed that k8s has changed the game, but ECS has the support of the behemoth that is AWS, and huge existing applications rely on it.

ECS Fargate makes deploying containers easy.

Note that I don't work for AWS, though I do use AWS, Azure, and GCP cloud services, and I don't have any vested interest in ECS. I do believe that ECS has a long life in its future.


Long life isn't the same as meaningful market share. I find it much more likely that ECS fades as the orchestration layer while the stuff built on it (Fargate) lives on as a backend to the k8s API.


I don’t see ECS going anywhere. AWS still maintains old services like SimpleDB and ECS development seems to be speeding up if anything.


We will see. My prediction is that ECS becomes pretty much a zombie service. It's neither easier to use nor better than managed k8s in any dimension that matters, so over time it will bleed even more market share.

What AWS has to be careful with here is that when people make the move to k8s they might depart AWS altogether and go to GCP/Azure as you are throwing away a lot of their lock-in in the process.

This would be less of a problem if EKS were better, so they probably need to be spending more time there than on ECS; that ship has sailed.


I used Flynn for a side project. When it worked it was awesome and simple, but it was a bit too unpredictable and flakey for me to feel comfortable using it in my company.

Dokku has been rock solid for almost 5 years now for my business. Would love to have multiple nodes but otherwise fantastic and easy. I had to move off Heroku as my customers require an Australian based host and heroku doesn’t support the region. Migrating was a breeze.

I wanted to love Flynn and checked back in on it every year or so, but the blog posts stopped a long while back and the brief documentation never really got fleshed out.


How do you deal with maintaining and securing the servers? The big advantage of Heroku is that for small projects you don't have to deal with server maintenance: you just upload and forget. Doesn't using Dokku kind of miss the whole point?


Maintainer of Dokku here.

If you can use Heroku, I would encourage you to do so. While the price seems high, the value add you get from just its feature-set and the ecosystem is tremendous. Seriously, even if you just pay 7 bucks for an always-on dyno, you can get pretty damn far without needing to worry about a server. I've been on a few platform teams, and we've always been _miles_ away from the functionality Heroku provides.

That aside, there are a number of reasons something like Dokku (or insert your favorite self-managed platform here) may be a good idea for your organization:

    - You need to own your availability: With Heroku, you're at the mercy of Heroku, Salesforce, and Amazon (roughly in that order). Sometimes there is a mandate that your org cannot use services provided by certain companies, or you need an SLA that you can control for a third party. While I don't necessarily agree with those reasons, there are plenty of cases where owning your availability is more important than saving on the cost of building and running the platform.
    - You cannot use an external vendor: Sometimes you want something to run in your closet for the 40 self-hosted apps you run, and you cannot install Heroku in your closet. Maybe you work at a non-profit and you cannot easily justify the extra expense (either because of paperwork or because money is tight). Running your own platform is quite enticing in these sorts of cases.
    - Running "at scale" is expensive: If you work at a consulting firm and are working on a couple dozen apps for customers, something like Heroku can start adding up, especially if you want staging environments for functionality. We have a ton of users that switch to Dokku for staging, and then end up hosting in other places for HA functionality they don't want to maintain.
        - While we might support Kubernetes and Nomad, the complexity of maintaining these systems just for staging can seem silly.
    - The platform does not support the functionality you need: While Heroku is _excellent_, it doesn' support everything, and it doesn't need to in order to service it's customers. When you find yourself reaching outside the box, that is when owning your platform becomes important.
Yeah, Dokku might seem like it's missing the point, and yet there are thousands of companies - small and large - building and maintaining their own version of Heroku (with varying degrees of success). I think in that sense, Dokku does quite well since it is flexible enough that it's "batteries included but removable".


Jeff Lindsay who was heavily involved in developing Flynn (and Dokku) is working on a pretty cool new project called Tractor, it's basically a modern Smalltalk environment: https://github.com/sponsors/progrium


I’m sad/surprised that a project with almost 8K stars on GitHub is dead. Does anyone know why they decided to not build/maintain it anymore?


Software is hard [0], especially in the face of a rapidly changing tech industry. Incumbents are upended on a regular basis and even the mightiest FOSS projects falter, some less violently than others.

Besides, stars may not be the right metric; community participation [1] matters more. Could Flynn have picked up community participation had it been donated to a foundation like, say, the Cloud Native Computing Foundation?

Another reason could be that, because momentum is everything in the face of stiff competition, things may simply have slowed down for Flynn.

Some of the things that slow projects down:

1. Overwhelming feature gap.

2. Spiralling technical debt.

3. Rise in dramatic hard-to-fix bugs.

4. Inadequate staffing / inability to keep up with the changing ecosystem.

Also, looks like the lead developer (@titanous) has deleted their blog and tweets too, and so, it is likely we will never know what went on. Hopefully, they consider publishing their thoughts so that we may all learn and improve.

I wonder if Pieter Hintjens (of ZeroMQ fame) had the right idea that a software project should always attempt to reach "completion" and stay "completed". That is, you complete one key part after another and build on top of that solid foundation that doesn't change? This approach may help keep things under control for a small team of core developers tackling issues arising from as complex a task as Flynn's.

[0] https://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/E...

[1] https://medium.com/runacapital/open-source-growth-benchmarks...


If a project is primarily developed by one company, contributions will drop to approximately zero once funding runs out.

Even if outside people are willing to contribute, there is usually no one with maintainer / merge powers who is willing or permitted to step up to handle contributions.


With enough alternative commercial supporters, someone will fork and keep going. But in this case it looks like there was no ecosystem of alternative providers who could pick up the mantle.


I am also extremely interested as to why this stopped being maintained.


Macro: Kubernetes took all the air out of the room. Micro: Lack of funding.

i.e. pretty much the same as everything else that has gone the way of the dodo in this space.


UPDATE: The README.md has now been updated with a link to the older version that describes what the project is about. Thanks to the maintainer for doing this. The remainder of this comment is obsolete (except for the last paragraph).

As others have mentioned, the current site (README.md) says very little about what Flynn is.

You can see a version of the repo and README.md, just before it became unmaintained, here:

https://github.com/flynn/flynn/tree/2c20757de8b32a40ba06f7e5...

IMHO it would have been much better if the maintainers had added the "Flynn is Unmaintained" information to the top of the README.md rather than removing all the existing information. I've submitted a GitHub issue suggesting it: https://github.com/flynn/flynn/issues/4623

A plea to project maintainers in general: Please include enough information in your top-level README (or README.md, or README.whatever) so that someone who has never heard of your project can get a good overview. I've also seen READMEs that only discuss the most recent changes. That's fine for readers who have already been following the project, but not for a more general audience.


What ever happened to Docker swarm? Considering how popular Docker (still) is, I'm surprised it's not solidly #2 behind K8s.


The Docker company gave up on all that and switched to selling "developer tools".


I was definitely a _very_ late adopter of the project, but because it was so badly documented and seemingly dead even a few months ago, I dropped it as well.

I use https://fly.io now for this purpose. It's not self-hosted, but it does the job for me :)


> I use https://fly.io now for this purpose.

That looks pretty cool, thanks.


Consider checking https://appfleet.com as well; we worked a lot to simplify the process with a nice UI and a constantly improving UX.

Disclosure: I work for appfleet


Our staging environment is down because something went wrong with Flynn a week or two ago and there is absolutely no information on how to debug it.

I'm happy they marked it as unmaintained, because I got the impression it wasn't all that well maintained for the last few years anyway. I can't remember the details anymore, but it was something to do with having to use the master branch because the package in the main Ubuntu repository was 3 years old...

I guess we'll be moving staging to k8s without hesitation now.

It was decent and served the purpose for our staging environment. Rip.


I'm disgusted that all competition to k8s is dying a slow death. I'm counting the days for Nomad with trepidation and wishing something better will arise.


Curious what your pain points with Nomad are? We are using it and it's pretty good. The biggest pain with it is the lack of versioned documentation, IMHO. But other than that it does a wonderful job of scheduling workloads.


The amount of tooling you have to write yourself to keep a sane workflow is excruciating.


indeed, can relate!


I have the feeling that nobody is making (or trying to make) any money on Nomad? Their cash cow must be terraform.


Hashicorp sells enterprise versions of Nomad which are really expensive, and the same goes for Consul Enterprise and Terraform Enterprise. They have high prices so they don't need thousands of customers.


Is there a good maintained alternative?


I'm happily using Dokku[1], and another one in this host-your-own-PaaS space is CapRover[2].

[1] https://dokku.com/

[2] https://caprover.com


I can second CapRover. Personally I find it easier to use than Dokku. The admin UI is quite nice.


Aside from having an admin UI, would you be willing to talk a bit about how caprover is easier to use than dokku?

Feel free to hit up the `@dokku` twitter account, catch us on the gliderlabs slack (https://glider-slackin.herokuapp.com), or even provide feedback in the Discussions section of the primary Github repository (https://github.com/dokku/dokku/discussions).

(I am the dokku maintainer).


I can second Dokku, though haven't used CapRover. Dokku has its quirks but it has served me well in the past two years.


Maintainer of Dokku here.

Would love to pick your brain as to what our quirks are and how we might avoid them in the future. Feel free to hit up the `@dokku` twitter account, catch us on the gliderlabs slack (https://glider-slackin.herokuapp.com), or even provide feedback in the Discussions section of the primary Github repository (https://github.com/dokku/dokku/discussions).


@josegonzalez, thanks so much for all of your work on Dokku and for being so responsive in the community for the last couple of years. We tried Flynn at one point and had enough issues that we moved back to Dokku within 2 months, and we have been happily living on it for years now. You helped me directly with a DB naming issue at one point; much appreciated!


Going to take you up on this offer. We've been running Dokku in production for years and have hit a few quirks, which have resulted in us evaluating a move to k8s.


@josegonzalez Thanks for the reply. :) I’ll try to write something down when I have some time.


Dokku is FANTASTIC, thank you so much for your work on it.


Kind of related: is there something like Dokku but for FaaS?


There's faasd [https://github.com/openfaas/faasd] which is OpenFaaS without the Kubernetes overhead.


And alexellis is a pretty active maintainer.


I'm working on a self-hosted, open source PaaS called Swarmlet. I really like Dokku, but I needed something that's a bit more scalable to my needs.

The installer is currently broken, I simply don't have enough time / bandwidth right now to work on it unfortunately.

That said, if you want to contribute, please let me know! I hope to get things running again soon.

https://swarmlet.dev or https://github.com/swarmlet/swarmlet


This looks amazing. Thanks


Assuming you’re a company (as opposed to an individual), if you want a PaaS, there are a few different options that in my view are sustainable, which I think is the key criterion for adopting something for your platform.

- Heroku. Sure it’s a bit expensive but it’s still super easy if you’re a dev.

- OpenShift. If you’re a really big enterprise, OpenShift is a reasonable choice for a PaaS. But only if you’re huge.

- Kubernetes. Yes, it’s complicated. Yes, it has a steep learning curve. But it’s open source, has a huge and growing ecosystem, and it has less lock-in than any other PaaS-like thing that I can think of.

The main downside of Kubernetes beyond its complexity is that you still have to build abstractions on top of it for your developers. But that world is improving regularly.


I went through this evaluation process again recently for an open source project for a client and came to the conclusion that, for small projects, Heroku provides immense value. Given the features and free-tier add-ons, it’s definitely worth the $7/month, and I don’t know if I would class it as expensive anymore when taking all its features into consideration. I’d like to see HTTP/2 support though.

Watching and waiting for https://render.com to mature a little as it seems slightly better value.


(Render founder) We're getting there. What are the biggest blockers for you right now?


Yes, I'm a huge fan of Convox [1]. One thing they don't make very clear is that the paid "Convox Pro" hosted console is optional, and convox/rack [2] is completely free and open source. You can set everything up with a single command, and then interact with your rack and app directly through the CLI. I really like how simple and opinionated it is, even though their new v3 version uses Kubernetes behind the scenes.

[1] https://convox.com

[2] https://github.com/convox/rack


CapRover is a nice Docker-based PaaS you can run yourself: https://caprover.com/


+1 on CapRover. Completely provider-independent deployments. Just write a Dockerfile and you are ready to deploy.


The problem with all these self-hosted PaaS is that they have to reinvent the wheel. Every PaaS needs some database to store information.

So what's really missing is for someone to build a self-hosted RDS: a complete solution with a good backup story (e.g. WAL-G for PostgreSQL), the ability to restore from backups, and in later versions maybe the ability to set up read replicas and do manual failovers for high availability (because automatic failover is still not easy).

If there were such a solution, every PaaS could just say "use awesome-free-rds" for database storage and would have a lot less functionality to implement. And even if you don't want to use a PaaS and just install everything directly on a server, a DBaaS solution is still needed by a lot of folks.
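For the PostgreSQL side specifically, the continuous-backup half of such a "self-hosted RDS" is already fairly approachable with WAL-G. A hedged sketch (the bucket name, data directory, and where you keep the environment variables are assumptions; check the WAL-G docs for your version):

```ini
# postgresql.conf — ship WAL segments to object storage continuously
wal_level = replica
archive_mode = on
archive_command = 'wal-g wal-push %p'

# environment for wal-g (e.g. exported from a root-only env file):
#   WALG_S3_PREFIX=s3://my-bucket/my-db
#   AWS_REGION=us-east-1
```

A nightly `wal-g backup-push /var/lib/postgresql/13/main` from cron provides base backups, and `wal-g backup-fetch <datadir> LATEST` restores the most recent one. The hard remaining work is exactly what's described above: read replicas and failover.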


Wonder if anyone will take this up now, since it's BSD licensed and all.

Guess it’d have to have a different name though.


Sad to hear about Flynn - it inspired us to start building a modern open source PaaS last year. We're currently running a closed beta with a bunch of companies and still have a few spots left. Feel free to reach out at info@runx.dev if anyone is interested!


Worked for a company a couple of years ago that used Flynn (with one of the maintainers) and it was pretty easy to use and understand. I guess it's Dokku or Nomad for my next adventure then!


This is a bit sad to read. I mainly use Dokku, but having tried Flynn I think they had something great going on. Ultimately I reverted to Dokku because I had to modify my application to fit Flynn's deployment style (no automatic redirection from HTTP to HTTPS, for example).

Of all the open source PaaS options that mimic Heroku to some degree, I stick with Dokku because I can always fall back to Heroku with minimal app rewrites, since I use buildpacks. This might not be an issue if you deploy with Dockerfiles though.


Qovery is partially open source and has a 100% free plan for individual developers. https://www.qovery.com


Just reading about this, and looking at the Flynn website from the archives, it looks to me like Flynn does/did more or less the same as Cloud Foundry?

If Flynn were still active, what would be the main reasons you'd choose it over the "cf push" experience?

Disclaimer: I had not used Flynn myself before stumbling on this thread, but I'm very interested in understanding the above, given the ongoing discussions about the future of CF.


CLU: "Kevin Flynn! Where are you now?!?"


Oh I totally forgot about Flynn and I even had a t-shirt for fixing a typo.


So sorry to hear this. A nasty strike on the self-hosting folks out there. All the tips of "partially open-source" solutions in this thread make me want to go back to bed.


Hi everyone, I'm one of the maintainers of Porter (https://github.com/porter-dev/porter). We are building a Kubernetes-powered PaaS that runs in your own cloud. We consider Porter to be the next-gen successor to Flynn in the era of k8s.

We had the pleasure of getting on a call with the Flynn founders last week and learned so much from their experience. It was incredible to hear the first-hand account of the past 7 years building Flynn (seriously, what a ride). We took their lessons and advice to heart and hope to fill the void Flynn is leaving behind.

To the founders of Flynn and all contributors, thanks so much for building such an awesome project and paving the way. Porter is still in its early stage, and we're super excited to start sharing our progress with the community. Exciting stuff coming your way soon - stay tuned!



