They really shouldn't have to do much support. You can do self-hosted a number of ways: ship a VM, ship a container, ship a static binary, ship some automation (ansible/terraform/etc). It doesn't matter where it's run or how after that point, because it's a commodity building block. It's up to the customer to figure out how to set it up.
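For instance, the container path can be as simple as handing the customer one command to run on whatever host they like. This is just a sketch of the shape of it; the image name, port, and paths are made up:

    # Illustrative only -- image name, port, and paths are hypothetical.
    # The customer picks the host; persistent data and the license live on their disk.
    docker run -d --name selfhosted-app \
      --restart unless-stopped \
      -p 8080:8080 \
      -v "$PWD/data:/var/lib/app" \
      -v "$PWD/license.key:/etc/app/license.key:ro" \
      examplevendor/selfhosted-app:1.2.3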
But once it's running, troubleshooting it is the same everywhere. Can you perform network operation X? Y? Z? Is the process doing A? Is the VM in state B? Is the VM's OS up to date and on version C/D/E? It's all just basic networking and basic Linux systems administration.
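Concretely, the first-pass triage is the same handful of commands everywhere. The service name, port, and health endpoint below are placeholders, not any particular vendor's:

    # Can we reach it, and is anything listening on the expected port?
    curl -fsS http://localhost:8080/healthz || echo "endpoint not answering"
    ss -tlnp | grep 8080

    # Is the process up, and what has it been logging?
    systemctl status selfhosted-app
    journalctl -u selfhosted-app --since "1 hour ago"

    # What state is the host itself in?
    cat /etc/os-release
    uname -r
    df -h; free -m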
Other vendors do the same thing. If you know what you're doing, self-hosted should be a small, well-understood feature of your overall offering. The fact that they're going full-cloud more likely comes down to not having a strong engineering organization, and to seeing more money come from their cloud offering, from small players that don't self-host. Later on they'll want the whale money and go back to self-hosted, or they'll get their lunch eaten by the other players in this market.
> It's all just basic networking and basic Linux systems administration.
This might be possible in the fast and loose world of scrappy startups, but the bigger (and more lucrative) customers will have a lot more homework to do.
Self-hosting requires a team to manage databases and a team (or several) to administer on-prem or cloud ops. Those teams are going to have to learn this new thing themselves or coordinate with whatever team ends up owning this third-party thing. And have vendor meetings.
It's work and upkeep. At enterprise scale, this is one FTE bare minimum (probably spread across multiple people), plus an on-call rotation (this is a pretty important business function), and it definitely involves maintenance and upgrades that need to be coordinated.
Plugins, peering with external IPs, vendor security evaluations, BeyondCorp, cert provisioning, service discovery, traffic routing...
This is all more work than "just", which is to say, I understand why the vendor is throwing up their hands and going after easier money. It's hard to support this, as each company does all of these things differently [1]. It just doesn't scale as well as SaaS does.
My day job is setting up self-hosted solutions like this for big enterprises. Yes, it's work and upkeep, and they do need in-house specialists, but you already know that if you're self-hosting... that's kind of the point. If they didn't have in-house specialists they would use the dedicated managed cloud hosting.
The only reason to self-host is that you can't use the dedicated managed cloud: regulations, contractual stipulations, networking limitations. If you're already in that situation and you don't have the headcount to manage this technology, you're screwed.
So, again, for the vendor, providing self-hosted is as easy as "here is a binary, here's some docs, good luck to you". If the client can't hack it... they shouldn't be doing self-hosted. It's self-hosted, not we'll-help-you-host-it.
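In practice, "here is a binary" boils down to something the customer's ops team wraps themselves. A rough sketch, with all names hypothetical:

    # Hypothetical vendor binary; everything past this point is the customer's problem
    install -m 0755 ./selfhosted-app /usr/local/bin/selfhosted-app
    mkdir -p /etc/selfhosted-app
    cp config.example.yaml /etc/selfhosted-app/config.yaml
    # From here it's their ops playbook: a systemd unit, their TLS certs,
    # their backups, their monitoring.
    selfhosted-app --config /etc/selfhosted-app/config.yaml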
I can confirm, it’s a much more complicated problem than it appears on the surface (or at least than other commenters’ perceptions of how easy it should be suggest). Where I was, we even used Replicated to try to reduce the hassle, but even that is just a series of different trade-offs.
So while our cloud/SaaS offering shared a codebase with the on-prem one, there was still a considerable engineering team responsible for the “on prem” packaging and support, alongside a pretty extensive CI pipeline to try to catch all the nuanced customer setups that had bitten us in the backside in the past.
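Roughly the shape that kind of pipeline takes (not our actual setup; the images and script path here are made up): smoke-test the install across the OS images customers actually run.

    # Run the same install + smoke test across the distros customers actually use
    for image in ubuntu:20.04 ubuntu:22.04 debian:12 rockylinux:9; do
      docker run --rm -v "$PWD:/src" "$image" bash /src/ci/install-and-smoke-test.sh \
        || echo "FAILED on $image"
    done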
I think you may have missed that they do indeed let you sort of self-host, in that you can run it on your own AWS resources. They just need access to manage it for you. Which I'd think would cover the requirements of most orgs out there that would actually use gitpod, but I could be wrong.
I think in this particular case it's maybe not quite as easy as just shipping a container or whatever, because a huge amount of the value and effort is actually the orchestration of VMs/containers/however they do it under the hood. Again, I might just not know enough here, but it seems more like they've constrained their self-hosted option a bit to keep it manageable rather than getting rid of it in practice. It also seems like you could just use the AGPL-licensed open source code if you want?
Finally, it seems unfair to characterize them as having a weak engineering culture when they've managed to create something rather impressive with, I think(?), quite a small team.