This seems like an eminently reasonable thing to do, given the difficulty of supporting self-hosted for a product like this across the infinite variety of customer setups. I can't imagine the time sink it must have been trying to support them all; it can't have been profitable.
It sounds like the dedicated version essentially lets enterprises run it on their own accounts while letting Gitpod manage everything, which seems like a reasonable middle ground to me. It also sounds like you can still self-host, they just won't support it directly as a customer, which again seems reasonable.
CEO of Gitpod here. Some background on why we moved to a managed enterprise cloud product: there are parts of Gitpod itself that are closer to a kubelet than to a Kubernetes application. We use much of the Kubernetes surface, interact with containerd, and rely on bleeding-edge Linux features.

The only way to make Cloud and Self-Hosted coexist is to have one codebase: what we deployed in SaaS we wanted to deploy in Self-Hosted. But not all Kubernetes offerings are created equal (think GKE node labels, or custom AMIs on EKS to get the Linux kernel we need, which isn't possible everywhere else). Today there are features in SaaS that are not available in Self-Hosted; it is simply not possible given the limitations of the managed Kubernetes services. In SaaS we can vertically integrate the stack down to the exact version of the Linux kernel. In Self-Hosted we're limited to the lowest common denominator across the various services.

Cloud development environments are going to change the way developers work. To do that they have to be fast, opening-a-new-terminal fast. We can only deliver that if we have full control over the infrastructure.
They really shouldn't have to do much support. You can do self-hosted a number of ways: ship a VM, ship a container, ship a static binary, ship some automation (Ansible/Terraform/etc). It doesn't matter where or how it's run after that point, because it's a commodity building block. It's up to the customer to figure out how to set it up.
But once it's running, troubleshooting it is the same everywhere. Can you perform network operation X? Y? Z? Is the process doing A? Is the VM in state B? Is the VM's OS up to date and on version C/D/E? It's all just basic networking and basic Linux systems administration.
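To make that concrete, here's a rough sketch (in Python) of the kind of checks I mean. The hostnames, ports, and process names are placeholders I made up, not anything vendor-specific; the point is that it all boils down to plain sockets, the process table, and OS versions:

    #!/usr/bin/env python3
    """Minimal self-hosted health probe. Endpoints and names below are placeholders."""
    import platform
    import shutil
    import socket
    import subprocess

    def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
        """Basic network check: can we open a TCP connection to host:port?"""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def process_running(name: str) -> bool:
        """Basic process check via pgrep (Linux)."""
        return subprocess.run(["pgrep", "-x", name], capture_output=True).returncode == 0

    if __name__ == "__main__":
        # Hypothetical endpoints and process names; swap in whatever the product actually needs.
        print("kernel version:       ", platform.release())
        print("registry reachable:   ", can_reach("registry.example.com", 443))
        print("license svc reachable:", can_reach("license.example.com", 443))
        print("app process running:  ", process_running("appserver"))
        print("docker on PATH:       ", shutil.which("docker") is not None)

None of that is specific to any one product, which is the point: once the building block is a VM or a container, the debugging surface is the same commodity networking and Linux administration everywhere.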
Other vendors do the same thing. If you know what you're doing, self-hosted should be a small, well-understood feature of your overall offering. The fact that they're going full-cloud is more likely down to not having a strong engineering organization, and to seeing more money coming into their cloud offering from small players that don't do self-hosted. Later on they will want the whale money and go back to self-hosted, or they'll get their lunch eaten by the other players in this market.
> It's all just basic networking and basic Linux systems administration.
This might be possible in the fast and loose world of scrappy startups, but the bigger (and more lucrative) customers will have a lot more homework to do.
It requires a team to manage databases and a team (or several) to administer on-prem or cloud ops. They're going to have to learn this new thing themselves or meet with whatever team owns this third-party thing. And have vendor meetings.
It's work and upkeep. At enterprise scale, this is at least one FTE (probably spread out across multiple people), plus an on-call rotation (this is a pretty important business function), and it definitely involves maintenance and upgrades that need to be coordinated.
Plugins, peering with external IPs, vendor security evaluation, BeyondCorp, cert provisioning, service discovery, traffic routing...
This is all more work than "just", which is to say, I understand why the vendor is throwing up their hands and going after easier money. It's hard to support this, as each company does all of these things differently [1]. It just doesn't scale as well as SaaS does.
My day job is setting up self-hosted solutions like this for big enterprises. Yes, it's work and upkeep, and they do need in-house specialists, but you already know that if you're self-hosting... that's kind of the point. If they didn't have in-house specialists they would use the dedicated managed cloud hosting.
The only reason to self-host is that you can't use the dedicated managed cloud, whether because of regulations, contractual stipulations, or networking limitations. If you're already in that situation and you don't have the headcount to manage this technology, you're screwed.
So, again, for the vendor, providing self-hosted is as easy as "here is a binary, here are some docs, good luck to you". If the client can't hack it, they shouldn't be doing self-hosted. It's self-hosted, not we'll-help-you-host-it.
I can confirm it's a much more complicated problem than it appears on the surface (or at least than other commenters' perceptions of how easy it should be suggest). Where I worked, we even used Replicated to try to reduce the hassle, but even that is just a different set of trade-offs.
So while our cloud/SaaS offering had a shared codebase, there was still a considerable engineering team responsible for the "on prem" packaging and support, alongside a pretty extensive CI pipeline to try to catch all the nuanced customer setups that had bitten us in the backside in the past.
I think you may have missed that they do indeed let you sort of self-host, in that you can run it on your own AWS resources. They just need access to manage it for you. I'd think that would cover the requirements of most orgs out there that would actually use Gitpod, but I could be wrong.
I think in this particular case it's maybe not quite as easy as just shipping a container or whatever, because a huge amount of the value and effort is actually in the orchestration of VMs/containers/however they do it under the hood. Again, I might just not know enough here, but it seems more like they've constrained their self-hosted option a bit to keep it manageable rather than gotten rid of it in practice. It also seems like you could just use the AGPL-licensed open source code if you want?
Finally, it seems unfair to characterize them as having a weak engineering culture when they've managed to create something rather impressive with, I think(?), quite a small team.
I don't know much about Gitpod, but from reading the GitHub repo, it appears they were deploying a fairly complicated application into k8s clusters.
I suspect that is the issue.
At $curjob, we offer self-hosted versions, but our product is a monolith (in Java, if that matters) that needs an RDBMS and, optionally, a proxy and Elasticsearch. That architectural simplicity lets us offer self-hosted solutions in a variety of formats (deb, rpm, zip, Docker image, Docker Compose, Kubernetes).
But we're also an application component (an auth server), not a full-featured application. A full-fledged application would be tougher to support, but if I were to try, I'd definitely start with a monolith.