> I understand why customers want Outposts. Despite the grousing in this post, I’m an AWS fan and have tremendous respect for their rate of innovation. Customers want API-driven infrastructure, the flexibility and speed of development that it provides. Traditional on-premises vendors, most concisely represented by the EMC/Dell/VMware conglomerate, have failed to evolve their way into this experience which is why the hardware and software from the hyperscalers (including Outposts) looks quite different. Customers are faced with a literal dilemma: on one hand they can continue buying the moderately priced, expensive to operate, inefficient to use, legacy gear from Dell/HPE/etc or they can pay exorbitantly for something like AWS Outposts. The former is throwing good money after bad, investing in an ecosystem that has continued to underdeliver; the latter cedes more and more control to AWS and locks them into an unaffordable future.
Adam’s conclusion above is a big reason why I cared about making GKE on-prem / bare metal a thing: I don’t believe (most) customers on-prem want to buy new hardware from a cloud provider. They mostly want consistent API-driven infrastructure across their hybrid cloud setup, and don’t want to burn their millions of dollars of equipment to the ground to do so.
I recognize that Oxide’s bet is that customers will prefer to stay on-premises if they can get cheaper / “better” hardware. That’s an interesting thesis! But many folks really want nothing to do with owning and managing infrastructure; they just feel forced to do so (and I agree with Adam here: it’s non-trivially about the economics).
Edit to add: I commonly troll people with dell.com list prices, combined with colo provider cost of power, percentage spare, and redundant networking. A pile of boxes != Cloud, but that is what companies want to compare to (the crossing point is not the same cost, but it’s also not some huge integer multiple if your cost of infrastructure matters to you; for some businesses infrastructure costs do not matter, any more than their power bill does).
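That back-of-envelope comparison can be sketched in a few lines. Every number below is a hypothetical placeholder (not a real Dell, colo, or cloud quote); the point is the shape of the calculation, not the figures:

```python
# Illustrative colo-vs-cloud comparison. All inputs are made-up placeholders,
# not quotes from any vendor; swap in your own list prices and rates.

def colo_monthly_cost(server_list_price, n_servers, amortization_months,
                      power_kw_per_server, colo_cost_per_kw_month,
                      spare_fraction, network_overhead_fraction):
    """Amortized monthly cost of a pile of boxes in a colo."""
    n_effective = n_servers * (1 + spare_fraction)   # keep spares on hand
    hardware = server_list_price * n_effective / amortization_months
    power = power_kw_per_server * n_effective * colo_cost_per_kw_month
    network = hardware * network_overhead_fraction   # redundant switching, etc.
    return hardware + power + network

colo = colo_monthly_cost(
    server_list_price=15_000,      # hypothetical list price per box
    n_servers=20,
    amortization_months=48,        # 4-year depreciation
    power_kw_per_server=0.5,
    colo_cost_per_kw_month=200,
    spare_fraction=0.10,           # 10% spare capacity
    network_overhead_fraction=0.15,
)

cloud = 20 * 1_500                 # hypothetical per-server-equivalent cloud rate

print(f"colo:  ${colo:,.0f}/month")
print(f"cloud: ${cloud:,.0f}/month")
print(f"cloud/colo multiple: {cloud / colo:.1f}x")
```

With these placeholder inputs the multiple lands around 3x: noticeable if infrastructure is a major line item, and noise if it isn't.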
Anthos certainly has a different (one might say opposite) approach to Outposts. I was surprised that Anthos apparently doesn't use the GCE hypervisor, control plane, or Container-Optimized OS. Is the customer fully responsible for hardware management, OS/hypervisor installation/management, patching, etc? Does all that work end up negating the benefits of consistent API-driven infrastructure?
Right, I should have been more clear: Anthos on-prem "just" gives you the Kubernetes part of that API consistency story.
On machine management, it's intentionally the case that administrators can control the base machine part via "Bring Your Own Node", because of all the processes and compliance things that customers have in place for their other boxes. They don't want their GKE cluster to be some rogue actor in their datacenter (more than it will be already).
Some customers are happy to let a provider own the end-to-end story of the base OS, including patching/upgrades, though we're not (currently) ready to support them running Container-Optimized OS (aka COS) on bare metal. We do have in-place upgrading of the K8s parts via `bmctl upgrade cluster` or `kubectl apply` [1], but expect that enterprises will prefer to use their existing patch management procedures. I don't expect most enterprises to jump on "single cloud provider" patch management (e.g., our own patch management [2]) anytime soon.
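For a rough idea of what that in-place K8s upgrade looks like: you bump the version field in the cluster resource and then run `bmctl`. Treat this as a sketch, not a runbook; the cluster name and version number below are placeholders:

```yaml
# Sketch of an in-place Anthos bare metal cluster upgrade.
# Cluster name, namespace, and target version are hypothetical.
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: my-cluster
  namespace: cluster-my-cluster
spec:
  # Bump this field to the desired release, then run:
  #   bmctl upgrade cluster -c my-cluster --kubeconfig <admin-kubeconfig>
  anthosBareMetalVersion: 1.8.0   # placeholder version
```

Note that this upgrades the K8s layer only; the base OS underneath stays under whatever patch management the enterprise already runs.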
Think of the progression this way:
- many large enterprises wanted to add in K8s on-prem into their existing vSphere universes
- others wanted out of vSphere but still had their own machines and own ticketing/OS/patching systems
- others still want to "bring their own metal" with the rest managed by a provider.
- Outposts sort of jumps to "you want our boxes but in your datacenter"
The Outposts angle is definitely a great fit for customers who want a more "appliance-like" behavior. And I think that works great for new workloads, or customers who don't mind replacing their existing hardware. My take is that many folks want to modernize in place though, and setting up and maintaining K8s was the hard part for them (they kind of still like their patch management and machine ticketing systems). Getting their developers to just do kubectl apply all day instead of requesting boxes is already a big leg up.
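The "kubectl apply all day" workflow amounts to shipping small manifests like the one below instead of filing a ticket for a box. All names and the image here are placeholders:

```yaml
# Minimal Deployment of the kind a developer would `kubectl apply`
# instead of requesting hardware. App name and image are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.25        # placeholder image
        ports:
        - containerPort: 80
```

Applied with `kubectl apply -f hello-web.yaml`, this gets three replicas scheduled onto whatever metal the cluster already owns, which is the leg up over the box-request workflow.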