(1) You save a ton of money on bandwidth when you move data from AWS to AWS
(2) The components of your stack, in most cases, need to be near each other to minimize latency. Databases in particular get wrecked by cross-cloud round trips.
This is why cloud database providers often have to transparently show you which cloud you're launching on, which effectively means AWS is going to get a good share of the revenue anyway. My uninformed guess is that EC2 and S3 are by far their biggest money makers, which is what unbundlers will target.
I'm all for the unbundling and will probably take part in some of it, but I don't think it will be that easy.
This is only because AWS grossly overcharges for bandwidth. If you move all services that have high bandwidth requirements to providers with reasonable prices you'll save a significant amount of money.
Further, Snowflake is a good example of an unbundled service that has more capabilities than AWS Redshift (for instance, zero-copy clones). They use AWS for infrastructure - the entire warehouse is stored on S3 - and add value on top of it.
So why choose differently?
AWS's killer feature has nothing to do with tech, it's the smooth billing process (techies choose, management pays and supervises). If you can put together a smooth process for paying for 3rd party software in your organization, you can unlock massive improvements in quality for a pittance.
I almost mentioned IAM in my post as a co-killer-feature, but I've only ever witnessed its power in AWS learning material / conferences, not being leveraged IRL, so I decided not to speculate. I'm more dev than ops, so even though I've seen a decent number of AWS environments the fact that they have all used IAM in a clumsy, coarse manner doesn't really mean much.
Do you have a feel for how frequently ops manages to actually leverage the fine-grained power features in IAM?
The issue I have with IAM is that it is not possible to be sufficiently fine-grained - for example, I cannot grant an instance permission to read its own tag values but not those of other instances, since the EC2 IAM API is stuck at the 85%-done state at which most AWS services eventually seem to plateau.
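To make the granularity gap concrete, here's a sketch of the policy statement the parent wishes they could attach, written as a plain Python dict. The condition key is imaginary - as far as I know, ec2:DescribeTags does not support resource-level permissions or per-instance conditions, so the closest real policy is an all-or-nothing grant - which is exactly the complaint:

```python
# Hypothetical policy: "an instance may read its OWN tags only".
# The Condition uses an IMAGINARY key, shown only to illustrate the
# granularity that EC2's IAM API lacks; the real ec2:DescribeTags
# action accepts nothing more specific than Resource: "*".
wished_for_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeTags",
            "Resource": "*",  # the only value actually accepted here
            # Imaginary condition key, for illustration only:
            "Condition": {
                "StringEquals": {"ec2:InstanceId": "${aws:SourceInstanceId}"}
            },
        }
    ],
}
```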
From my experience, Snowflake is both more performant AND cheaper than Redshift or other RDS options from AWS.
Would add that edge compute, by running cloud paradigms (code instead of config, automation, management abstractions), partially addresses these limitations for many use cases.
Costly to move the data off, but longer-term ROI for those orgs that are willing to make long-term decisions.
Meanwhile, as edge matures, greenfield apps should be edge-centric, rather than cloud-centric (doesn't mean they won't have cloud components...they will do the processing and storage where it best makes sense).
Couldn't you build your software stack in a cloud-portable way but, at any given time, still have 100% of it on whatever your current cloud provider is? And then switch to another cloud provider from time to time if/when the costs make it worthwhile.
The first problem is that you wind up using the least common denominator. You’re paying public cloud providers a premium, so this essentially equates to throwing away money.
The second is that there are some cases where what seems like the same thing isn’t, or best practice is wildly different between providers. Get in a room with an AWS expert and an Azure expert and talk about what an account is.
In other words, it's not about moving your entire stack off Amazon's cloud, it's about moving parts of your stack off Amazon's software even if all of it may still run inside Amazon's cloud.
You're simply moving from being locked in by AWS to being locked in by a bunch of much smaller cloud vendors.
2) Contemporary site/app development renders the latency of individual requests irrelevant. The whole page is going to boop and bounce around for 20 seconds anyway, why make a big deal about it?
The web is a measurably worse experience now than it was 10 years ago (maybe even 5), so it's ironic that Goodhart's Law has led so many astray toward measuring smaller and smaller trees within a growing forest. "Well but we have to get the request for number of friends below 10ms" while the other 400 requests on the page are dilly-dallying and experiencing their own latencies. Then the CSS gets applied.
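A toy calculation (all numbers made up) of why micro-optimizing one request is Goodhart bait when the page issues hundreds:

```python
# Hypothetical page: one "hot" endpoint we heroically tune from 10ms
# down to 1ms, plus 400 other requests averaging 25ms each. Requests
# are treated as serialized to keep the arithmetic simple.
friends_ms = 10.0
other_requests_ms = [25.0] * 400

total_before = friends_ms + sum(other_requests_ms)
total_after = 1.0 + sum(other_requests_ms)
saving = (total_before - total_after) / total_before

print(f"page total before: {total_before:.0f} ms")
print(f"saving from the 9ms win: {saving:.2%}")  # well under 0.1%
```

The 9ms victory on the friends endpoint moves the (serialized) page total by less than a tenth of a percent.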
I've managed racks for customers too, but managed hosting at Hetzner is now usually cost-effective vs. colo-hosting in London where I am. Since they also offer cloud services (though pretty basic) now, there's the option of mixing and matching.
Paying for use isn't awful, as long as you're not paying AWS list prices which are rather high.
AWS is, let's say, $0.08 per GB - or $30K/month for the same bandwidth?
And if you believe you can run your business on their $15/month network then they would own the market - but oddly they peer with basically NO ONE of any quality - because THEY don't actually pay for the bandwidth either and just totally oversaturate their peering links.
for some of the common whining at least in the past.
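For what it's worth, the parent's numbers roughly check out. A back-of-envelope sketch (rates are illustrative; $0.08/GB is in the neighborhood of AWS's volume egress tier, and the flat-rate figure is a hypothetical unmetered-port price):

```python
# What a single saturated 1 Gbps link moves in a month, and what that
# costs at metered egress rates vs. a flat-rate provider.
SECONDS_PER_MONTH = 30 * 24 * 3600
gb_per_month = 1e9 / 8 * SECONDS_PER_MONTH / 1e9  # decimal GB at 1 Gbps

aws_rate_per_gb = 0.08      # illustrative metered egress rate
aws_monthly = gb_per_month * aws_rate_per_gb
flat_rate_monthly = 100.0   # hypothetical unmetered 1 Gbps port

print(f"{gb_per_month:,.0f} GB/month")
print(f"metered egress: ${aws_monthly:,.0f}/month")
print(f"flat-rate port: ${flat_rate_monthly:,.0f}/month")
```

That's roughly 324 TB and ~$26K/month metered, in the ballpark of the $30K figure above.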
Also, AWS is slow. If I have a filer with 500TB of disk I should be getting 5-10GiB/s reads, and burst writes should be that fast too. With EFS I get 200MiB/s MAX. Likewise EBS and ephemeral SSDs top out at 250MiB/s, which is just abysmal.
AWS security is so complicated now, at the control plane layer, that hardly anyone understands it.
So if there was a competitor that gave real hardware performance and a simpler security validation at a reasonable price, they could win business. But I don't see Oracle/Azure/GCP etc doing that.
> If I have a filer with 500TB of disk I should be getting 5-10GiB/s reads
That kind of performance requires a 50-100 GbE SAN fabric. I'm unaware of any cloud provider that offers this yet, let alone an on-prem fabric that isn't extremely expensive. (Our customers can get close with the u-24tb1.metal instance type, which offers 28 Gbps dedicated EBS throughput.)
> Likewise EBS and ephemeral SSDs top out at 250MiB/s, which is just abysmal.
There are plenty of EC2 instance types that offer 1750MiB/s of EBS throughput - see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptim... for a list. You'll need to stripe some 1.3TB EBS volumes together to achieve it, but it's absolutely attainable. (I did just that when I worked at a well-known YC company to build their high-throughput Kafka clusters.)
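The striping arithmetic is simple enough to sketch (caps are illustrative; actual per-volume throughput depends on volume type and size):

```python
import math

# Illustrative: reaching an instance-level EBS throughput ceiling by
# striping volumes that are each capped at a lower per-volume rate.
instance_ebs_cap_mib_s = 1750   # instance EBS throughput ceiling
per_volume_cap_mib_s = 250      # rough gp2-era per-volume ceiling

volumes_needed = math.ceil(instance_ebs_cap_mib_s / per_volume_cap_mib_s)
print(f"stripe {volumes_needed} volumes to saturate the instance cap")
```

Seven ~250MiB/s volumes in a RAID-0 stripe is one way to hit the 1750MiB/s instance ceiling mentioned above.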
As for instance storage, I just ran a fio test on an i3.large's NVMe SSD and it performed at 80k read IOPs, 311MiB/s sustained sequential read. These too can be striped for higher performance -- I was able to achieve nearly 24GiB/sec (at 644k IOPs) on an i3.16xlarge instance.
It's trivially easy to spin up an instance or three and run FIO.
Since 40-100GbE is extremely cheap now, I don't really see how the bandwidth can be so low. When we're building compute clusters it is multiple rails of 25/50/100GbE (or another fabric like IB). Is Amazon using anemic 2x10GbE LOM or something?
Relatedly, it's always mysterious debugging on AWS because you have to theorise about the underlying hardware and topology. Maybe there is a VIP level of CloudWatch where you get told how it works underneath...
YES! Here is an example: I wanted to use Elastic Container Registry and have Fargate pull from it. I had to figure out that ECR is actually implemented like a private S3 bucket, so my app needed read access to S3. Why should I be exposed to the implementation details as someone granting permissions to ECR?! Figuring this out is not trivial, either: the only way to get a clear error message is to run CloudTrail and wait 10-15 minutes for the error to be reported there, as there is a delay. There was also another instance where something needed permission to a weird internal S3 endpoint for gateway access on a VPC....
When you say "get a filer with 500TB that can do 10GiB/s", as a developer I might hear this as the reality:
Talk to the infra group -> who will sit with a bunch of sales people and go round and round, then get some quotes, then go back and forth with the dev group trying to really spec out what is needed, then get finance/accounting approvals, more questions and answers, then put the order in, then have sales people apologize for this and that delay, then get the filer onsite, then a big to-do to install, then find the gigabit Ethernet can't actually push 10GiB/s, then have to find add-in cards for all the servers and do a big network redo to get some kind of speed to the actual servers. Then after developers play with this a bit they decide to go a different route and the dance starts again.
Contrast this with AWS: I can try out my flow with a few clicks in the console.
Security is painful though - no question.
So what you are arguing is that many places have limited competency, and you can outsource the need for that to AWS -- at significant cost. I dealt with a fair number of turf guarding drongos so I'm sympathetic to this idea, but it remains true that you can get much more peak performance, and perf/$ from your own hardware.
On-prem cloud is still mayhem: VMware is very, very bad, and OpenStack is baroque. Moving anything from AWS or Azure to an on-prem cloud would be very frustrating.
IAM and security management are the worst parts of AWS. It's complex and unintuitive. I always second-guess my choices, wondering if I've left a gaping security hole somewhere.
Yep, and in the context of OP asking to pull out authN and ID, it's a lot clearer how to pop in an alternate DB than an alternate IAM. Even pulling out Cognito makes some sense, but not IAM. Notice OP didn't even mention authZ: because IAM roles and policies are so intertwined with every AWS service, there's another major disincentive to pulling anything out. It seems any pull-out solution would need to wrap IAM, incurring treble complexity and performance hits.
And AWS has other complexities, so the devops costs with AWS are different, but rarely lower than managing physical hardware.
I'd never pick AWS based on total cost - it's an expensive convenience that first starts to pay off in terms of cost once you're big enough to negotiate huge discounts.
If you can afford it, it's great, but a lot of people who use AWS have no idea how much of a premium they pay for it.
I.e. if developer A uses AWS, this does not affect developer B.
What AWS enjoys is economies of scale (huge capex), which should help reduce prices (though I'm not sure that's happening), and being first to market (nobody got fired for choosing IBM).
Basically the main value prop today, as I see it, is saving on operations costs (human cost) by offloading them to Amazon.
This will be solved by Kubernetes operators.
Moreover, most of the new computationally intensive workload (e.g. IoT/AI) is better done on the edge.
As more developers start using AWS, there forms a large developer ecosystem around AWS tools and services, and therefore "developer B" will find it much easier and cheaper to use AWS, indirectly.
There are similar ecosystems around Azure and Google Cloud, and there are plenty of tools that support all 3 clouds equally.
I really don't think AWS is getting any kind of meaningful competitive advantage here these days. And if there's any at all, it's dwarfed by technical, financial, and strategic factors.
I'm increasingly convinced all this means we are entering a telco-like monopoly era for Big Cloud software, starting at the infra layers and steadily moving up. I'd love to see Elizabeth Warren take a savvier approach: less 'break them up,' more 'these are the anti-competitive bundling violations.'
there are plenty of tools that support all 3 clouds, but most of those suffer from some sort of impedance mismatch and/or do not fully support the features which those clouds supply.
“Equally badly”, you forgot ;-)
Cloud agnostic tools really mean “lowest common denominator” which means you end up paying premiums but not using the features. Just say no to Terraform!
They might enjoy knowledge sharing, but at the speed that AWS evolves, the half-life of such knowledge is very short.
If I would bet on anything it would be Kubernetes. At least my knowledge is the same across clouds/on-prem. And there is no gatekeeper.
It damn well does. If everybody else is using AWS it pays to use it too, because:
* The IBM/CYA effect - i.e. "nobody got fired for picking AWS".
* The supply of experienced users - it's easier to hire AWS skills than GCP skills.
> There are obvious differences between Craigslist and AWS. The most important is that Craigslist (and each of the category spawn) is a marketplace, and so has the powerful advantage of network effects. Another distinction is that AWS has relative cost advantages over its unbundlers when it comes to the fundamental components of infrastructure (compute, bandwidth, storage, etc.), and I can’t see a parallel to Craigslist. So it’s not a perfect analogy, but the premise of unbundling certain categories still holds.
Scale effects are obviously at work in AWS. My premise is basically that higher-level things may be advantaged over AWS offerings because their scale advantages dissipate the farther away you are from their 'primitives.' I don't deny that there are some holes in the logic and comparison.
I won't quibble over the definition of "marketplace", but it most certainly does enjoy network effects. AWS is the LAST place I ever look to host apps and software, and yet I am almost always forced to use it because everyone I work with knows it. The result is that I use AWS even when I don't want to use AWS, because everyone else does.
I'm quite sure I am not alone in this.
Craigslist is a consumer facing app, which is much easier to "unbundle" than something like AWS which faces enterprises. Even if some "unbundle wannabe" starts getting traction, I think Amazon will simply catch up by lowering the price as much as possible and putting more resources into improving the developer experience for the corresponding service.
But if anyone has some great insight, please share. I would love to see this "great unbundling of the AWS" happen.
In enterprise, Joe developer can often add / use more AWS services and it just goes on the giant enterprise bill, no questions asked.
If you want to use a third party service, waay more work. Supplier assessment needs to be done (by the department that does that) looking at security, company stability, data sovereignty etc. Procurement get involved to negotiate supplier agreement with the vendor and your project needs a specific budget line item which might require a trip to accounting. To get signoff and justify your vendor selection you might also have to do a stupid internal evaluation thing / bakeoff (even though of course you know the thing you really want) where you build feature matrix and carefully adjust the rows so your preferred vendor gets the most ticks.
All this paperwork can take weeks or months.
While a lot of the AWS service offerings are sub-par compared to alternatives, the technical work of papering over this is usually less risky than dealing with all the internal departments which will delay your project if you try to use another vendor.
I think this is a big driver towards bundling being a win for them.
Actionable advice on this would be: If you're B2B, have an option for yearly billing via invoice.
Billing yearly via invoice vs a monthly direct debit means our finance people deal with payment, and just ask Engineering once a year if they should pay this weird bill they got. Billing via direct debit means I have to do paperwork every month. I don't enjoy things like paperwork. I do enjoy things like building a single customer MVP version of your service on AWS in a couple hours. You can guess which I'm going to choose if you don't offer yearly billing.
Of course, if your small company would like to pre-pay towards an approximation of your yearly invoice, I'm pretty sure some companies would be accommodating.
If you try to get some services in an enterprise - say a physical server or a database - you may easily find that it takes months of paperwork and waiting. Then it's poorly operated and broken half the time (why are they provisioning a hundred databases on a single shared host? or using a shared NAS as storage?)
AWS services may not be perfect, but they're miles ahead of the shitty services you get in an enterprise.
They haven't lowered prices on RDS, Elasticsearch, etc. There are far superior options available. The lock-in on billing explains the adoption of these services, but there are a tremendous number of accounts of folks struggling to make any of the secondary services scale worth a damn.
I agree that the future is distributed operating systems. I think we desperately need to load in the core features of a D-OS into Linux for it to stay relevant, or a competitive startup will create a tailored environment for programmers that will not even be recognizable as Linux, yet it will be used everywhere because of its ubiquity. (Docker, anyone?)
For example, Terraform is kind of garbage usability-wise, but it just works with every cloud provider feature you can think of; it's a no-brainer. If a D-OS also naturally bundled hooks for every single feature of every managed cloud provider, along with a simplified way to hook custom applications and bundle the whole thing, literally everyone would use it for all their apps and run it everywhere.
As will that Epyc XXXX server, absent a well-peered and hardened services mesh for it to plug into.
Cloud shouldn’t be used for just someone else’s computer. It’s not the compute or the storage, it’s the network.
We've shown a remarkable ability to keep increasing the computational requirements of the shit we run on computers to keep up with the computers themselves.
You’d have to look at DB providers like Elastic to see how to do this, I think. But it’s going to involve a very complex and high touch deployment & support environment. This effectively “prices out” certain classes of applications as too expensive for value returned. Again I could be looking at this very wrong, and figuring out how to build profitable unbundled AWS services would be a great business, but it’s hard to see how to successfully execute.
We forgot what it’s like to have dialup, and the innovative generation of decentralized software protocols (IRC, Usenet etc) faded into obscurity. Even Email and the Web are now centralized!
Well the companies won and now we have a feudal kingdom again where we are the data-serfs. We have been well-trained like pavlov’s dogs using notifications and rewards (likes, comments) to till the datafields and plant fresh crops, use our social capital to reshare articles etc. And you know that you’re a serf when that notification lights up on your master’s device and you pause talking to your girlfriend, wife or child to look at it and maybe act on it.
And of course governments love having a central place they can implement censorship and spy on everyone.
This has got to stop. It’s not just a technological topology problem, it’s become a sociological problem and even a problem of democracy and society. THE SOLUTION IS OPEN SOURCE SOFTWARE that will disrupt the landlords of the Web (Facebook, Google, Amazon etc.) as much as the Web once disrupted the landlords of the Internet (AOL, MSN, Compuserve).
The first HTTP browsers weren't better than AOL at anything except one thing... ANYONE could permissionlessly host a web server.
What realistic alternatives do we have to Facebook today? Google Docs? That anyone can easily host and share stuff with friends to easily collaborate? It would have to have people’s identity and contacts stay consistent across domains, without owning them and storing them on centralized servers like Facebook.
I know a few attempts. Diaspora, Solid, Mastodon and Matrix. I also liked Sandstorm a lot, and OwnCloud was ok. They have not reached mass adoption. Scuttlebutt is slightly better in that it can interoperate with everything. The biggest winner in the last few years has been the DID standard, although it solves one problem only.
There is also an innovative project called MaidSAFE which raised around $10M and spent 12 years so far building this back-end. I have high hopes for it once it is released and plan to make Qbix Platform interoperate with it. Check it out: https://maidsafe.net/
We will eventually replace the idea of a startup running a database that has to scale, with an open source startup that releases software that anyone can pay any hosting company to run and makes money via tokens. That was the dream of cryptocurrency. The first kind of startup extracts rents via a SAAS model and forcibly bundles infrastructure management (Slack, Salesforce, Twitter, Etsy, you name it). The other doesn’t care about cheaper “cloud” costs so much, because anyone can host it (Wordpress, Magento, Drupal etc.) and the hosting costs are miniscule for each individual client business. Wordpress and Magento and Drupal are all valued in the billions, as open source companies. Maybe people on HN should strive for this rather than “Zero to One” “competition is for losers” “build a monopoly and extract rents” Peter Thiel style facebook startups.
Do we need something like this? Or am I just ranting?
EDIT: why whenever I post this subject, it is heavily downvoted with no actual replies or explanations showing me why my position is wrong? Is there a coordinated effort to downvote this position without further engaging with it?
A big fear of mine is that many of the younger hackers have lost or were never trained in that cypherpunk-phreak-hacktivist attitude that many of us had and learned on/with in our and the internets younger years. I know they still exist, but the sheer number of users drowns that type out as a much smaller percentage of the net... but it was largely that group that fought for the rights of users and major holdouts that influenced the patterns that made the internet what it is today. I suspect the increase in types who downvote topics like you bring up are just that majority, middle of the road, corporate placation user type, and it has gotten noticeably worse over the last few years on hn.
What about the Web? Why did people install web servers instead of hosting on AOL? Because web Browsers existed, and let you consume that data. Couldn’t you likewise argue that no business would want to mess around with hosting their own website and a bevy of competing hosting providers would never displace AOL?
This is a very bad example since most of them are insecure, out of date, and not actually run by the non-techies. Most people just contract it out, once to someone who said he could setup a Wordpress site for cheap, and then they just pay them or someone else to make any major changes. If it isn't that, it's another company, or Wordpress itself hosting their instance, again, the software isn't being run by the "non-techie".
> Why did people install web servers instead of hosting on AOL?
I don't think a lot of people did, again, they paid someone else to host it. Shared web hosting was, and still is, a very large market.
People pay them to take care of hosting. But they have a choice.
Plenty of people installed the facebook, twitter, and instagram apps on their phone - and in many cases don't even use the web interfaces. Local software installation is not an issue when it's as streamlined as the current mobile experience.
> THE SOLUTION IS OPEN SOURCE SOFTWARE that will disrupt the landlords of the Web (Facebook, Google, Amazon etc.) as much as the Web once disrupted the landlords of the Internet (AOL, MSN, Compuserve).
Using capitals like this implies to me that you are not making a reasoned argument, otherwise your points would stand on their own rather than BEING BLASTED INTO MY FACE.
Capitals were wonderfully used by Pratchett as the voice of one of the characters in written text. And the late Christopher Lee did a wonderful interpretation of that character.
Cyrillic letters for example resemble capitals in that they have a constant height and are generally more square and if anything that doesn't make them more difficult to read, just different.
Actual shouting hurts my ears. It is about a thousand times more annoying than reading capital letters.
I think the real equivalent of shouting in a text media is the hated blink tag. With some bright neon colours. On black background. In a dark room.
On HN all caps is uncommon and, I would venture, considered somewhat aggressive/rude. IMO HN is somewhat of a bastion of courtesy and reasoned debate, a rarity on the internet today. Long may it remain so.
By all means use all bold, comic sans font, and flashing red text. But don't be surprised when people react negatively to it.
Gross exaggeration. Most startups should worry less about 100% uptime and scaling than about sales. A dedicated server or two goes a long way, and it isn't/wasn't really that much harder.
I said "most startups", not critical medical care. I've dealt with startups for nearly two decades and most startups can begin operations without the cloud, and with the possibility of downtime.
Even in healthcare there are plenty of systems where varying levels of downtime is acceptable.
Anyway, I don't think uptime is the main reason to use a cloud provider. For me, the main reason is the ease to expand. You can add and remove infrastructure quite easily. If course, you can do the same in a datacenter, but in my experience it takes planning ahead a bit more than you need with a cloud provider.
The myopia comes from experience. Startups (which aren't solely B2B Healthcare) are more often doomed by lack of customers than by other factors.
To me, that sounds like a death wish, but it worked for all the new high-tech consumer facing companies we read about every day!
My first company hosted the product on a shared hosting server, this was back in 1999. Availability was immediate once you paid. Since the early 2000s, dedicated servers were often available within an hour of payment. VM-based hosting took only minutes to get started with. Credit cards were the usual mode of payment, but PayPal was also quite popular.
What I'm saying is that cloud hasn't dramatically changed life for startups in their earliest phase of existence.
I would go further and say that some of their experiences are buggy and often non-functional. Ever tried to search for a service in ECS? The results load from the backend, but the search itself runs in the frontend over only the currently loaded set - meaning it will say your service does not exist when you're really only searching within that page * offset.
Then there's the Elasticsearch fiasco... https://spun.io/2019/10/10/aws-elasticsearch-a-fundamentally...
There are many examples where basic AWS functionality is broken.
Are all hosted infrastructure prices this inflated?
With that said, Kubernetes might fix this because they have access to more parts of the system in a generic API. With tech like KubeDB and other operators we might see these prices coming closer to bare-metal.
Of course, if you're on AWS then you'd be paying the EBS tax either way.
For a lot of companies, an extra $35 per month to save hours of work is a no-brainer.
Many people still prefer hosted services because (a) total cost of ownership can be much lower once you factor in the cost of a qualified engineer installing it, keeping it online 24/7/365, and keeping it up to date; and (b) comparisons tend to be apples-to-oranges, since many AWS services are charged on a per-transaction basis. A VPS often appears a lot cheaper by comparison, but you're also paying for a lot of capacity (RAM, CPU, storage) that you're not using, and generally can't use most of the time, because you need to reserve enough for peak periods.
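A toy attempt at an apples-to-apples comparison (all prices hypothetical) showing how per-transaction billing flips the answer at different volumes:

```python
# Hypothetical pricing: a managed per-request service vs. a fixed-price
# VPS that must be sized for peak load (engineer time not included).
price_per_million_requests = 1.25   # managed service, hypothetical
vps_monthly = 40.0                  # VPS sized for peak, hypothetical

def managed_cost(requests_per_month: int) -> float:
    """Monthly cost of the managed service at a given request volume."""
    return requests_per_month / 1_000_000 * price_per_million_requests

for volume in (1_000_000, 10_000_000, 100_000_000):
    cheaper = "managed" if managed_cost(volume) < vps_monthly else "vps"
    print(f"{volume:>11,} req/mo -> managed ${managed_cost(volume):8.2f} ({cheaper})")

breakeven = vps_monthly / price_per_million_requests * 1_000_000
print(f"break-even ≈ {breakeven:,.0f} requests/month")
```

At low volumes the per-transaction service wins handily; the fixed-price box only pays for itself past the break-even volume, and that's before counting the engineer who keeps it running.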
If I were to compete with aws, I think the area I'd really go after them with is better kubernetes support. I've yet to see any cloud provider really do it well, to be honest (which isn't surprising, development on kubernetes moves so fast and it's so advanced, it's kind of amazing, but that does mean it's really hard to make it nice outside of a "works for a demo" kind of deal). Azure and GCP do kubernetes a little bit better, but I think if someone were to come in and say, like, I don't know, we can do kubernetes on bare metal so it's much faster than through a VM, and all our services are natively integrated, that would be a cool story.
The other area I might compete with AWS is in finding niches where organizations might be hesitant to build an operations department, but they really need cloud computing. So, for instance, maybe scientific computing or something like that. If you could make a really useful cloud that can be administered by someone who barely knows linux, that could be a thing.
For the same reason it’s difficult to argue there’s opportunity for “unbundling Facebook” by launching a new image sharing service, building a news aggregator to compete with newsfeed, building a new messaging app to compete with Messenger, etc.
The main reason I disagree with the premise is security / compliance requirements of large customers (a large percentage of the market). The largest purchasers of AWS-like services prefer to work with as few vendors as necessary to minimize the number of vendors they need to worry about during security audits, contracting, etc.
The other obvious problem is, similar to Facebook, AWS has a “network effect”, in that all AWS services live in the same physical data centers, which results in potentially competing services suffering from higher latency than “native AWS” services which might make it harder to compete.
Consumers did not have the same level of loyalty to Craigslist as business have to AWS.
"selling AWS at a loss" is crisp shorthand for a lot of startups' business models!
I don't understand how Craigslist and AWS relate. Craigslist... is a... list, or a marketplace. AWS is literally like a Fortune 2000 business that is in itself fully vertically integrated from hardware to software and network.
And then you have economies of scale, and good enough is the enemy of best. The barrier to entry to recreate Craigslist is less than a rounding error compared to recreating AWS or even any of its sub-components.
I don't have any numbers to back me up on this, so take it with the biggest grain of salt. I do think there is a market for a provider that sits underneath AWS or GCP - something like DigitalOcean. AWS and GCP are like enterprise products with millions of features offered to all, while DO is a simple-to-use product that is scaling up its feature offering.
And it offers the essentials of cloud hosting with a better experience (comparatively speaking).
Or Heroku with their own infrastructure (or cheaper pricing).
If you just Allow: *, you might not care. If you practice least privilege, this is hard to do another way.
Isn't AWS also a marketplace?
The AMI marketplace is probably the biggest and longest-running example of a marketplace where others are selling services (software licenses/support), but PrivateLink allows people to build a SaaS and not only sell it on a marketplace on AWS, but also do so with an endpoint in your VPC so you don't have to go out over the public internet.
I have no idea how well utilized that sort of thing is, so in reality it might be similar to there not being an option in the first place, and I'm sure in general Amazon sells far far far more services themselves, but it is possible for people besides Amazon to sell services on AWS.
That’s why open source unleashes an explosion in innovation.
in particular, declarative dashboards (from code) and declarative alerts (from code) would make my life a lot easier
feedback-based / ML alerting thresholds might also hit the spot -- this is an area where black box isn't safe enough and some innovation is needed
getting any piece of information from datadog or amzn / goog's in-house dashboards is like pulling teeth -- they're so slow and clunky
There should be an open platform that would eliminate that advantage.
2. EC2 and S3, when used properly, would be harder to beat. You can get reserved discounts, spot, elasticity... Moving off them would require paying for bandwidth, which easily makes the playing field uneven.
3. The higher layer of services - CloudWatch, Elasticsearch, Cognito - has much higher margins yet lacks functionality and quality. Much easier targets for disruption.