Ubicloud is such a good idea. There's no reason why the major cloud providers need be considered more than data center providers. AWS 'bare metal' instances should be priced as a commodity, just as the data centers it used to rent space from are. OSS software can and should do pretty much everything above that layer, with room for commercially licensed software as well, of course.
Not exactly. The use-cases suitable for public IaaS are:
0. Temporary prototypes
1. Overflow capacity
2. One-time jobs
It's also often used to evade local IT restrictions. In general, public IaaS is damn expensive and a waste of money for continuous, predictable workloads regardless of compute, storage, and/or networking usage.
> There's no reason why the major cloud providers need be considered more than data center providers
They have no incentive to do this or to offer low-cost bare metal instances. If anything, the opposite incentive exists in my experience: make these instances extremely expensive and push everyone to cheaper, locked-in managed services. Because surely anyone who needs bare metal has fat stacks of cash?!
Making public cloud a commodity kills the major public cloud providers' valuations, IMO.
That's what we should hope projects like Ubicloud can provide: a way for the mid-tier providers that would've been doing PHP hosting 10-15 years ago to have a competitive and _portable_ service offering.
I always wonder how the big cloud providers manage to scale their IAM services (from a distributed systems perspective) given they presumably need both low latency and some reasonable level of consistency. Anyone have any pointers to architectural descriptions/publications?
Wouldn't a geo-replicated, single-primary DB suffice? Reads would have low latency, and only writes would pay the latency of reaching the primary far away.
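A minimal sketch of that read/write split, assuming Postgres-style replicas (the hostnames, the `grants` table, and the helper functions are all made up for illustration, not how any provider actually builds IAM):

```python
# Hypothetical read/write split against a geo-replicated, single-primary DB:
# authorization checks (reads) hit the nearest replica, policy changes
# (writes) pay the round trip to the single primary.
import psycopg2

PRIMARY_DSN = "host=primary.us-east.example dbname=iam"        # single writer, possibly far away
LOCAL_REPLICA_DSN = "host=replica.eu-west.example dbname=iam"  # nearest read-only copy

def check_permission(principal, action, resource):
    # Fast path: reads are served from the local replica. The trade-off is
    # replication lag: a just-revoked grant may still look valid here until
    # the replica catches up.
    with psycopg2.connect(LOCAL_REPLICA_DSN) as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT 1 FROM grants WHERE principal = %s AND action = %s AND resource = %s",
            (principal, action, resource),
        )
        return cur.fetchone() is not None

def grant_permission(principal, action, resource):
    # Slow path: writes go to the primary, but policy changes are rare
    # compared to permission checks, so the extra latency is tolerable.
    with psycopg2.connect(PRIMARY_DSN) as conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO grants (principal, action, resource) VALUES (%s, %s, %s)",
            (principal, action, resource),
        )
```

The open question is the consistency window on the read path; whether that's acceptable for IAM is exactly what I'd hope the architectural write-ups would address.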
The article mentions Linux as the underlying OS. I wonder what approach Ubicloud takes (if any) to have actual diversity in the software stack for the purposes of reliability and security. My assumption being that different OSes, while increasing the attack surface, also make it less likely that the whole fleet is susceptible to the same software problem or vulnerability at roughly the same time. Just something I started pondering after seeing Hetzner, which is quite popular in BSD land.
There aren't many people/projects/companies that take this approach, so if they're not telling you about doing it, you can safely assume they aren't.
IMHO, it's a nice idea, but it at least doubles your system integration work, and the benefits are mostly hypothetical, unless you're willing and able to dynamically shift your infrastructure between OSes if one of them usually performs better but is susceptible to some DoS that's inbound today.