
I've found that a lot of the complexity of the cloud is self-imposed; the rest is because they're running a huge range of mixed-quality software that needs a lot of scaffolding to even run properly.

For example, if the clouds used IPv6, then 90% of the networking complexity would simply evaporate. But they can't, because despite two decades of warnings, nobody has working IPv6 LAN networks. Similarly, IPv4-only server software is still being written, today.

In comparison, IPv4 needs a ton of infrastructure to function: stateful subnet and IP assignments, methods for dealing with overlapping private networks, split DNS, private endpoints, NAT, and on and on...
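To make the IPv4-only habit concrete, here's a minimal Python sketch of the difference (the port and the Linux dual-stack behaviour are my assumptions, not anything cloud-specific):

    import socket

    # The v4-only habit: bind a single address family, and the cloud
    # has to build private ranges, NAT, split DNS, etc. around you.
    v4only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    v4only.bind(("0.0.0.0", 8080))
    v4only.close()

    # Dual-stack alternative: one AF_INET6 socket serves both v6 and
    # v4-mapped v4 clients once IPV6_V6ONLY is cleared (works on Linux).
    srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    srv.bind(("::", 8080))
    srv.listen()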

I could list other examples, but you get the idea. The big clouds are pandering to their customers, and the customers can't modernise their end fast enough, so you end up with complex features to account for all the legacy software and networks.




> I've found that a lot of the complexity of the cloud is self imposed

Indeed. I prefer the terms circumstantial and inherent complexity. Inherent is the easiest to define: it’s the bare minimum to do what needs to be done, analogous to Kolmogorov complexity and in the spirit of “if I had time, I would have written you a shorter letter” or “make it as simple as possible, but not simpler”. In the real world, your software interacts with imperfect and legacy systems with their own issues, but dealing with those is also part of the inherent complexity (because if you removed it, the system wouldn’t satisfy the requirements).

Circumstantial complexity otoh comes in many forms: initial poor design, tech debt from scope creep and requirement changes. In adversarial environments it can even be deliberate, such as DRM, inter-team politicking, and “job security through obscurity”, or “if nobody else understands this doc, it’s less likely to be changed and my coworkers will think I’m smarter”.

In the case of AWS, I suspect there’s all kinds of circumstantial complexity, but in particular there are “green field accidents”. As an early mover (all cloud tech is extremely new), you don’t get the simplest possible design on the first try. Instead, you get something that works but is full of redundancy. For one, the best way to layer your systems isn’t clear, so you end up with individual products having to reinvent things like consensus, caching, durability, replication, data integrity, yadda yadda. But at the same time, there’s immense pressure to get stuff out the door, so you make do with what you have and keep cargo-culting until it’s cost-efficient to replace a lot of the garbage with something better. That can take 10 or 20 years.


I've used both AWS and Azure extensively. Everyone seems to think Azure is "weird and difficult", most likely because they have internalised the circumstantial complexities of AWS and can't wrap their heads around a cleaner but unfamiliar model.

In AWS, everything has a non-human-readable identifier shown in flat lists in some arbitrary order. This adds a lot of unnecessary complexity. Almost the entire circus around having to create dozens of AWS accounts simply evaporates in Azure's model, which has folders called Resource Groups containing resources with names.
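To illustrate with made-up identifiers: an AWS resource is an opaque ID you have to tag and search for, while an Azure resource ID is a readable path that ends in the name you chose:

    arn:aws:ec2:eu-west-1:123456789012:instance/i-0a1b2c3d4e5f67890
    /subscriptions/<sub-id>/resourceGroups/prod-web/providers/Microsoft.Compute/virtualMachines/web-01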

Yeah, that's right: folders and object names. The magic anti-complexity technology that harks back to the 1960s UNIX era that AWS still hasn't been able to replicate in the 2020s despite a decade of trying.

A lot of other incidental complexity stems from old issues that have since been resolved but still linger due to backwards compatibility. For example, not being able to change the IP address of an EC2 VM resulted in all sorts of craziness. Similarly, both Azure and AWS have unexpected naming restrictions on things like KMS / Key Vault secret names. E.g., Key Vault secrets can't have names that match typical "web.config" parameter names or environment variable names on Linux... "for reasons". Stupid reasons. Hence, you need a back-and-forth encoding or escape/unescape mechanism between two things that should be identical.
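A minimal Python sketch of the kind of shim this forces on you (the escape tokens are a made-up convention; the real constraint is that Key Vault secret names only allow alphanumerics and dashes):

    # Env vars and web.config keys use '_', '.' and ':', which Key Vault
    # secret names disallow, so every app grows a naive shim like this.
    ESCAPES = {"_": "-u-", ".": "-d-", ":": "-c-"}

    def to_vault_name(key: str) -> str:
        for ch, esc in ESCAPES.items():
            key = key.replace(ch, esc)
        return key

    def from_vault_name(name: str) -> str:
        for ch, esc in ESCAPES.items():
            name = name.replace(esc, ch)
        return name

    assert from_vault_name(to_vault_name("ConnectionStrings:Db_Main")) == "ConnectionStrings:Db_Main"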

And on and on...


> Yeah, that's right: folders and object names. The magic anti-complexity technology that harks back to the 1960s UNIX era that AWS still hasn't been able to replicate

Right on. These choices are often frontloaded to the greenfield stage, where you have to make some decision well before you can tell which data model will make the most sense. Even the wisest of architects cannot predict everything, so it’s not for lack of competence. The people I knew at @faang-gig were incredibly bright, but technical design as an early mover is still an incredibly delicate art form.


Eventually the “happy path” will simplify. AWS is spending all its time going upmarket wooing enterprises, but if there is money to be had at the low end, they’ll go after it.

Beyond AI, I don’t see any upcoming paradigm shifts, so what’s out there will just continue to get incrementally better, rather than being displaced by something fundamentally new.



