Not as bad as snap but what could they possibly be spending $100 million a year on?
~400PB of data in S3.
~2600 bare metal "x1 type" ec2 instances running 24/7, 3 year upfront reservation.
~60M Write IOPS in dynamodb
~300M Read IOPS in dynamodb
~3500 16xl RDS aurora instances
Again, each of those alone would consume the entire budget on a single service, which seems like a nonsense level of spending.
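As a sanity check on the first item: at a hypothetical "two cents per GB-month" storage price (a placeholder, not quoted AWS pricing), 400PB alone does land in the ~$100M/yr range:

```python
# Placeholder price, NOT quoted AWS pricing: roughly "two cents per GB-month" storage
PRICE_PER_GB_MONTH = 0.021  # assumed

petabytes = 400
gigabytes = petabytes * 1_000_000      # decimal units, good enough for a ballpark
monthly = gigabytes * PRICE_PER_GB_MONTH
print(f"~${monthly / 1e6:.1f}M/month, ~${monthly * 12 / 1e6:.0f}M/year")
```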
Maybe they really have that much data. Maybe they have 100PB of data in S3. Assuming 1B rides since day 1, that's 100MB per ride, which seems high. If the average ride is 20 minutes, that's 80KB per second. That would be 25% of the budget.
But assuming they generate 80KB/s per ride, that's roughly 1GB/second in aggregate (assuming 1M rides/day at 100MB per ride). So maybe all of that hits DynamoDB, and between duplicate data, secondary indexes, and the size of the dataset we have 6 million write IOPS. Then we run big data processing jobs with 5x the read load. That's 20% of the budget.
And to process all these events there is a massive EMR cluster of bare metal instances. About 1750 of them. That's 50% of the budget.
Leaving 5% (a measly $400k/month) for load balancers and the like.
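The back-of-envelope above can be sketched in a few lines (every input is a guess from the preceding paragraphs: an $8M/month budget, 100MB per ride, 20-minute rides, and the eyeballed budget shares):

```python
MONTHLY_BUDGET = 8_000_000  # dollars/month, i.e. ~$100M/year

# Assumed figures from the estimate above
bytes_per_ride = 100 * 1024 * 1024      # 100 MB of data per ride (a guess)
ride_seconds = 20 * 60                  # 20-minute average ride
rate = bytes_per_ride / ride_seconds    # data rate while a ride is active
print(f"~{rate / 1024:.0f} KB/s per ride")  # ballpark of the 80KB/s figure

# The guessed budget shares, converted to dollars
shares = {"S3": 0.25, "DynamoDB": 0.20, "EMR bare metal": 0.50, "everything else": 0.05}
assert abs(sum(shares.values()) - 1.0) < 1e-9
for name, share in shares.items():
    print(f"{name}: ${share * MONTHLY_BUDGET:,.0f}/month")
```

The 5% remainder works out to the $400k/month mentioned above.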
Those numbers are all a little outrageous to me, but I can see how they might be using that much.
Just trying to get a rough ballpark for infra at that level of spend.
~3500 16xl RDS aurora instances? I worked at one of the 100 biggest websites on the internet (a search engine), and we only had 3. Why/how would Lyft need 1000x that!?
It highlighted pretty well, evidenced by your comment, that RDS was not likely a significant portion of their spend.
Category three was relational, IIRC ~2TB. We originally stored it all on a custom MySQL cluster, and we started running into pretty bad replication lag. Amazon came around with Aurora and promised we'd never have replication lag again. We switched over. Still had replication lag.
Source: I work at Lyft.
Worked for us at Twitch.
Their surges are nothing like yours.
If you have engineers making less than $250k annually, then you have a lot more staff. $8mm monthly is a LOT...
Amazon's clearly making a profit after $8mm monthly.
Lyft could definitely build and maintain their own infrastructure for this kind of money... probably do it better (customized to their needs) and cheaper.
Businesses don't flagrantly throw around money just to upset people. There are huge advantages to offloading non-primary business costs to other businesses.
Netflix is doing this too. I think we can assume not all of them are just idiots that haven't figured out they could build this themselves.
But businesses do throw around money for the wrong reasons, and keep on doing so if that's the status quo. No one gets fired for buying IBM.
> Netflix is doing this too.
IBM stuff was bought by a lot of people.
> I think we can assume not all of them are just idiots that haven't figured out they could build this themselves.
That statement is very misguided and misses the problem. For example if you built your infrastructure around a specific solution then you also end up building a team of professionals whose livelihood is tied to a specific supplier of said infrastructure.
Businesses are wasteful because that's the natural status of a bureaucracy. They aren't throwing away money on infrastructure because they are unaware, they are spending more than they potentially have to because infrastructure isn't their core business.
> IBM stuff was bought by a lot of people.
That's such a tired argument. Just because they could save money doesn't mean it's a good idea, and with Enterprise pricing from Amazon combined with tax advantages, you honestly have no idea how much "cheaper" it really is.
> That statement is very misguided and misses the problem. For example if you built your infrastructure around a specific solution then you also end up building a team of professionals whose livelihood is tied to a specific supplier of said infrastructure.
No, the fact that you think this is a "problem" is the problem. Do you honestly think dev ops guys couldn't figure out how to use a different tool? By your own logic, you also shouldn't build data centers because you end up building a team of professionals whose livelihood is tied to managing your own infrastructure.
That's not true at all. The "isn't their core business" argument is meaningless and absurd. Any company, big or small, does not want to waste 300M dollars on something they don't need, whether it's their core business or not, particularly when said company is still far from turning a profit.
> That's such a tired argument. Just because they could save money doesn't mean it's a good idea
You are aware that you're stating that baseless assertion in a discussion about how a company which is burning through cash and looking for investors is needlessly wasting $300M on infrastructure costs.
> No, the fact that you think this is a "problem" is the problem.
Needlessly spending $300M is a problem in every single business in any corner of the world. I have a hard time understanding how someone can throw around the baseless assertion that this sort of inefficiency, at this particular scale, is not a problem, and that pointing out this problem... is the problem? That's crazy.
It feels like you could fit half a Lyft into live low-latency transcoding and redistribution of just the top 10 streamers feeds on Twitch.
Source: also work at Twitch.
Oh, or are you making fun of using Bare metal?
I think the parent was pretty clearly sarcastic and suggesting that this was a bad idea.
The problem is that provisioning, reliability, and security are by themselves really tough problems. If those issues aren't in your company's core competencies, it's not necessarily efficient to invest in building out all of that.
I look at it as the question: can you get the same set of agility/reliability/security guarantees for your narrower set of use cases by paying for your own hardware and engineering? I won't even begin to pretend I have any answers there, but I think that's the calculus.
Maybe that's just the story cloud providers tell you.
Until you try, do you really know if it's all that complicated? People have been running datacenters for a long time, and not all of them work for Amazon.
But there may also be a beneficial side effect of having gearheads around, and maybe losing that is the real cost of going cloud.
Operating bare metal at scale requires talent that doesn't exist, not necessarily at an engineering level, but at all levels.
As an example, I worked at a place that had a large bare metal deployment, i.e. >1MW worth of compute. It was woefully inefficient and costly to operate. The product that they offered required network QOS and compute with real time capabilities, neither of which was available from any cloud provider at the time.
One of our executives (formerly a leader in the DC ops org at AWS) left the company and was replaced by another executive from another well-known Silicon Valley org, who then insisted we should migrate everything to the cloud.
I showed him the relatively easy math that efficiently utilized bare metal was way less costly, and that the aforementioned QOS and RT requirements would be a deal breaker anyway. He failed to fully grok this and remained insistent. When I quit, he seemed surprised. After the fact, I discovered that they'd made a deal with IBM to move everything into their cloud. A year later it was an utter failure and they abandoned the project.
There are lots of folks in the valley whose resumes suggest they should be capable of understanding these kinds of things, yet who simply don't. Lacking that understanding leads to poor decision-making, which leads to failure, which leads to risk-aversion, which leads to everyone believing that it must be cheaper in the cloud.
Or so goes the old adage, "nobody ever got fired for buying IBM."
EDIT: To whoever downvoted this, the commenter hasn't listed an email address, or I would have reached out directly. This is an honest attempt at communication that doesn't require someone to break anonymity.
Though the question I received was somewhat nonsensical, which was to be expected.
Perfectly willing to admit I'm wrong if and when that time comes. At this point, that's my theory.
Bare metal works when your workload is well-defined and understood. Then you can actually put reasonable estimates for what you need and hire/purchase infra accordingly.
The balance here is tricky. Based on public data, it seems that Netflix has ~$16B in revenue against $300m/yr cloud spend. 2% seems much more reasonable to me.
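The comparison is easy to check. Using the revenue figure from this comment, plus the ~$100M/yr spend and ~$2B revenue for Lyft mentioned elsewhere in the thread:

```python
netflix = 300e6 / 16e9   # cloud spend over revenue, figures from the comment
lyft = 100e6 / 2e9       # ~$100M/yr spend, ~$2B revenue (figures from this thread)
print(f"Netflix: {netflix:.1%} of revenue")
print(f"Lyft:    {lyft:.1%} of revenue")
```

So Lyft is spending roughly 2.5x the share of revenue that Netflix is.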
I feel like a drive toward efficiency is a worthwhile endeavor for a startup in terms of establishing a competitive advantage.
RDS doesn't really scale without costing a fortune. It buys you HA and backups. Great, but what if you need performance?
DynamoDB? It scales in terms of IOPS, but again, it's unaffordable.
SNS exists and isn't terrible, but why wouldn't I just run Kafka?
But if you need bleeding-edge Postgres performance, you hire a DBA, and they probably build something on EC2 or bare metal.
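To give "unaffordable" some shape: with hypothetical per-capacity-unit hourly prices (the numbers below are placeholders, not quoted AWS pricing), sustained provisioned throughput adds up quickly because it's billed every hour whether you use it or not:

```python
# Placeholder unit prices, NOT quoted AWS pricing; they only illustrate the shape
PRICE_PER_WCU_HOUR = 0.00065   # assumed $ per write-capacity-unit-hour
PRICE_PER_RCU_HOUR = 0.00013   # assumed $ per read-capacity-unit-hour
HOURS_PER_MONTH = 730

def monthly_cost(write_units, read_units):
    """Steady provisioned capacity, billed for every hour of the month."""
    return HOURS_PER_MONTH * (write_units * PRICE_PER_WCU_HOUR +
                              read_units * PRICE_PER_RCU_HOUR)

# e.g. a sustained 50k writes/s and 250k reads/s
print(f"${monthly_cost(50_000, 250_000):,.0f}/month")
```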
As I understand it, RabbitMQ is probably a better point of comparison for SNS/SQS, and Kinesis is the Kafka peer.
Regardless, the reason you don’t “just” run Kafka is that you don’t have a team that knows how to tune, deploy, and operate a production Kafka cluster. I learned enough about SNS and SQS to get them running in an afternoon, and I really haven’t needed to think about them since. Kafka (or RabbitMQ, or ActiveMQ, etc.) needs instrumentation and monitoring and patching and quorums and capacity planning. At some scale those are worthwhile, but that scale is MUCH larger than what most Kafka clusters are actually serving.
The theme here is: if you have a business requirement for 90th percentile specialized performance, great! Hire domain specialists who can make your systems run at that tier! But for everyone else in the world, when you can get usage-based pricing, elastic resources, and automatic durability and patching... why would you go to the trouble of learning how to deploy and manage a service?
I remember how hard it was to hire senior operations people. There are not many of them, and there are not many of them at the level of being able to deliver something amazing. The ubiquity of the cloud has only made these kind of experts less common.
Every place I've worked that did bare metal was always drowning in maintenance instead of working on the next big thing. And no big surprise, our internal infrastructure was nowhere near as high quality or capable as AWS. And most of our developers had experience working directly with cloud providers, without ops people, so we were delivering them a worse experience and slowing them down, and we required more ops people to help them and maintain it and keep everything online.
Also, a move to IBM's cloud isn't the greatest example. I had hundreds of bare metal servers in an IBM-owned datacenter and their cloud offering was consistently behind AWS/GCP; if anyone recommended IBM cloud to me I would have laughed at them. It seemed to me that IBM was trying to up-sell on the "cloud" buzz word without actually delivering anything except higher prices, just like how they're now trying to ride the buzz of the blockchain.
Dropbox is a good example of a company that took quite a while to move to their own platform, away from AWS (and they still have 10% of their stuff in AWS to this day). Dropbox is basically a storage infrastructure company, unlike Lyft, but it still took them years to invest in the development (and migration) of that custom platform to replace AWS, an investment that not many companies are going to want to gamble on, especially if their primary business is not storage:
And I think it's telling that Dropbox started on AWS, grew the business on AWS, and moved to a custom platform once their business model was perfected and they wanted to cut costs prior to going public. If Dropbox had started on bare metal from day one, would they have been able to pull it off?
There's nothing you've written that I disagree with. It's easy to do the math showing where bare metal saves money, inclusive of labor costs. For some reason almost everyone seems to fail at it. I could expound on why, but this:
>I remember how hard it was to hire senior operations people. There are not many of them, and there are not many of them at the level of being able to deliver something amazing. The ubiquity of the cloud has only made these kind of experts less common.
Those folks just don't exist. Building infra is more than just buying infra. It takes actual development, which is why I think so many fail at it.
Your anecdote about Dropbox is telling. They adopted cloud, and more importantly cloud methodologies and then went back to bare metal. There are others that have done the same. I recall a talk at an Openstack conference given by Verizon in which they described their approach. Developers begin in AWS, utilize a cloud-based approach, and then when cost concerns become an issue, they aim to offer similar services in-house on bare-metal.
This is true, but it's never really hit me before, even though I've already been operating on the assumption that trusting the cloud is less risky than trusting my own skills.
They want to make money brokering rides.
Taking on their own cloud infrastructure -- in theory -- could economically make sense. But that's just an extra layer of risk and complexity they'd rather forego to focus on their core business.
After all, their core business is already losing $930M on $2B in revenue. Their cash flow doesn't put them in a good position to make large up-front investments in data centers.
So, yeah, like a broke renter in an expensive city. In theory, it might be better to buy a house, but you don't have the down payment, and maybe you should be focused on increasing your earning power rather than saving money anyway...
- Recently had to purchase new servers; because of signed contracts, the only servers we were allowed to purchase and put in the datacenter were four years old and technically EOL.
- Firewall changes, AD changes, provisioning a VM, etc. are 48 hour turnaround. Purchasing new hardware requires 4-6 weeks.
- Had an intermittent issue with their edge firewall, it'd slow certain connections to a crawl and eventually they'd timeout. Took six months to fix it, for the first three months they told us it wasn't their fault (turning off their deep packet inspection ended up fixing it).
I still remember when we opened the first ticket about it, and the reply was "no other customers are experiencing problems" and it was closed.
That's just a few examples of how painful it can be. To give you the other side of the coin, having worked with an enterprise contract in AWS, we were having an intermittent issue with DNS resolving failing for a few seconds every few days. They put an engineer on it full time till they found the problem (we misconfigured it), and it didn't cost us anything more than the enterprise support. I was actually shocked they'd invest that much on such a vague issue.
Yes, AWS is expensive, but you're getting world-class engineering proven at scale, and access to some very smart/motivated people to support it (and they have access to the teams who built it when they can't solve it). I don't think I'd ever choose managed datacenter over AWS/GCP/Azure/etc. Either do it in-house, where there's accountability, or use cloud providers who have proven their competency.
To be clear, I'm talking about VPC/EC2/etc. I can't really discuss a lot of their higher level and newer managed services; they either weren't as good, or I haven't tried them. But the bedrock these clouds are built on is solid, and that's worth paying good money for.
> I don't think I'd ever choose managed datacenter over AWS/GCP/Azure/etc.
Who mentioned managed datacenters? I'm pretty sure people are talking about leasing space and doing everything else in-house.
- storage clusters
- database clusters
- compute clusters
They are often very easy to set up, but when things go wrong, they go very wrong. And welcome to a stressful environment, because if you can't figure it out and your people can't, well, your business just sits and burns while you do.
Even when AWS has a system-wide outage, it's nice to know that I don't have to be dealing with those underlying problems anymore and I know they have the best people working on them.
I cannot put into words, after operating MySQL clusters on my own and playing back transactions after failures, how nice it is to use AWS RDS and how it's just been zero problems. Zero. I sleep through automatic updates of our database system with RDS. I would have never done that on our own system.
And in most places, even "managed" leased hardware, you still will need to purchase/lease and run your own hardware firewalls and ddos mitigation. The datacenter might offer that protection "built-in" but you'll soon find the limitations of that offering when you face a substantial attack.
Having spent my entire working life automating infrastructure of all kinds, I know you can achieve an enormous increase in efficiency rather easily with a few well-placed automated processes.
I’ve always been baffled by the fact that at any given larger company there are hundreds of employees trying to supply the business with tools to automate business processes: the IT department.
Yet, they are completely incapable of using these very same tools to automate their own ”business”.
And the resistance I’ve been met with at different places through the years when trying to implement the simplest of automation is massive.
I used to laugh at the “cloud” because, back then, at 25 years of age, sitting at a medium-size company with boatloads of cash, I assumed everyone was doing it the way we were: automating all the things.
Now, many years later, I’ve obviously realized that many places simply do not have the right culture and mindset, as it’s not “core business”.
I believe however that this is changing, and changing quickly. In many ways thanks to the “cloud”.
Since we're internal and we manage a lot of capacity, we do often provision and roll our own equivalents of things that cloud providers will sell you, rather than just buying a cloud solution. It's often ambiguous whether it was a good use of time/money. If it weren't for the economies of scale that kick in at the sheer size of this operation, it would definitely not be worth it.
It slowed them both down.
Remember that staff cost money too!
Conversely, with GCP about 4 years ago I had some support issues and didn't come away impressed. I'm convinced that even internally GCP isn't well documented, or something.
But what I paid for and got on AWS support is so far out of whack there is NO way they made money on my account for that whole year. And the person was actually competent, which was a shock. So many "technical support" folks seem like idiots.
Comcast for example, I'd purchased my modem, they started charging a rental fee - I had to call these bozos every month to reverse the charge - a total waste of time. I cancelled finally - I just couldn't take it, and each one lied to me or didn't have a clue. Things like condescendingly saying - you have to pay for the modem.
That's not a judgement on whether it's worth it for Lyft or not, but especially for a growing company with spiky load the decision is not just a dollars to dollars comparison.
If you gave me $300m to spend, largely up-front, for significant capex purchases? Sure. We could do it. The team I would build would also probably still make mistakes that AWS et al have already largely learned how to avoid, but we could do it. But capex and opex are very different beasts. By the end of that three years I'm already looking at spending way more to refresh what I bought at the start of that three year period because I'm starting to near the end of early contracts and I'm figuring out how best to wrangle, in a way that makes the rest of the business succeed most optimally, a now-heterogeneous environment, etcetera etcetera and etcetera. It's all solvable. But whether it's cheaper, at scale, and more reliable, and presents a unified tool for use by the business...that's a harder question.
Understanding how capex and opex work and how they differ is pretty critical to successfully running an engineering organization, to say nothing of a company.
The reason that AWS, Google, Azure, et al. do so well is that they don't just buy some servers. They do actual capacity management (and not a very good job of it, I might add). They also manage the lifecycle of every component in the infrastructure such that the next iteration of that component is understood and interchangeable.
Network architecture, for example, should suit the needs of the application, but should also be decoupled from the underlying hardware as that hardware is going to evolve.
Compute is fairly straightforward as well. At the data center level, one makes a bunch of 400W holes. What you fill those 400W holes with is relatively irrelevant.
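The "400W holes" framing reduces capacity planning to division. A sketch, where the facility and per-rack power budgets are assumptions for illustration:

```python
FACILITY_POWER_W = 1_000_000   # 1 MW of critical load (assumption)
SLOT_POWER_W = 400             # one "hole": the power budgeted per server
RACK_POWER_W = 10_000          # assumed per-rack power budget

slots = FACILITY_POWER_W // SLOT_POWER_W        # servers the facility can feed
slots_per_rack = RACK_POWER_W // SLOT_POWER_W
racks = -(-slots // slots_per_rack)             # ceiling division

print(slots, "server slots across", racks, "racks")
```

Whatever hardware generation fills those slots, the holes themselves don't change.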
The care and feeding of fleets of (physical) machines is really, really hard and not to be underestimated.
It's all about leadership. The dearth of skilled leadership is the issue. I'd wager this is how some FAANG companies are managing this. They're hiring people that know what they're doing. One doesn't need to design and build their own servers and network hardware to do well at the scale of folks like Dropbox or Lyft.
Cloud adoption is all about making the issue someone else's problem, which is only kicking the can down the road.
Eventually, every company that does a thing will realize that their survival is contingent upon becoming a software company that does that thing.
If you have $100m OpEx per annum, it'll cost you maybe a point or two to convert that to $300m CapEx.
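Whether that conversion is attractive depends on your cost of capital. A sketch comparing the up-front $300M against the discounted value of three years of $100M payments (the discount rates below are illustrative, not a recommendation):

```python
def present_value(annual_payment, years, rate):
    """Value today of a stream of end-of-year payments, discounted at `rate`."""
    return sum(annual_payment / (1 + rate) ** y for y in range(1, years + 1))

CAPEX = 300e6   # the up-front spend under discussion
OPEX = 100e6    # the per-annum alternative

for rate in (0.02, 0.05, 0.08):  # illustrative discount rates
    pv = present_value(OPEX, years=3, rate=rate)
    print(f"at {rate:.0%}: three years of OpEx is worth ${pv / 1e6:.0f}M today "
          f"vs ${CAPEX / 1e6:.0f}M CapEx")
```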
If you do that, though, my answer will be "right, so we're done here."
Considering how most laymen are completely wrong in their understanding of finance, I'd say that isn't a good endorsement...
- person who knows how hard it is to run your own infrastructure
> fierro Profile: SWE @ Google Resource & Capacity Planning
I think you mean "person whose job it is to convince others it's really hard and they should just buy your product"...?
- person who knows how hard it is to run your own infrastructure.
There is way, way, way, way, way more to running a successful business than "saving money".
So basically, I disagree. These aren't estimates.
I was a cloud skeptic and ran Tech Ops (including our DCs) for years. About 5 years ago, it dawned on me that even though I owned the whole budget for Tech Ops, I wasn't capturing the full costs of trapping my org on our in-house solutions.
At tiny, small, and medium scale, cloud is obviously the way to go, IMO. At large and huge scale, I think letting some hybrid leak in where systems change rarely and cloud costs are WAY out of line (DropBox storage, Netflix CDN, etc) makes sense.
Most are running a few small internal-facing servers hosting some internally developed apps, and need very little resources.
Just run ESXi, XenServer, Xen, or something; spin up a few VMs on a few thousand dollars of hardware; get a couple of people to maintain it; and be done.
Even at large scales, like Lyft, having your own internal team and hardware is going to save money. Amazon is profiting off your instances... which leaves room for you to do it for less. Maybe not $7mm less monthly, but even a $1mm savings is significant... but likely a lot more.
Fast forward to today, and now it would be a serious undertaking with serious risks to move off AWS, not to mention the costs of building up the staff and assets to reimplement their requirements in parallel of AWS until reasonably confident they can flip the switch and still have an operating company afterward.
So, they're probably stuck: beholden to Amazon's whims and pricing mood of the day. They've bought convenience from Amazon in trade for massive technical debt, which may be even more costly to get out of... or impossible.
AWS isn't going to get any cheaper in the future..
Those free AWS credits Amazon gives students really pay dividends.
The “whims” of Amazon’s pricing are no more unpredictable than the pricing “mood” of your colo or your hardware vendor.
I agree that it would be a serious undertaking to move off AWS today. But it would probably also provide marginal benefit. No one on the finance side of their business is losing sleep over it. If/once it makes sense to move off, the finance dept will tell the eng dept they need to rein in infrastructure cost... and eng will do that.
Arguments like yours are why business people tend to roll their eyes and ignore engineers when it comes to anything outside of engineering.
Not trying to be dismissive, but you are so far from the mark I don’t know where to start...
And management of such organizations might also have ideological tendencies that further skew the calculation.
There are certainly examples for big companies that benefit from having their own infrastructure (i.e. Dropbox since they have relatively specialized hardware needs compared to what cloud providers set prices around), but the number of people you need to hire to build and maintain datacenters is very high.
Sure. If they are paying $100M/y on hammers, it's at least worth running the numbers and investigate alternatives.
Strategically, you probably want to focus on what your core competencies are, even if you could in theory do something for cheaper. It's easy to ignore the foregone best alternative of iterating on your own product instead.
* If the hammer manufacturer decides not to sell you any, you'll still have hammers.
* If the hammer manufacturer gains enough power to fix prices, you won't be paying them exorbitant prices.
* If the hammer manufacturer or their country gets embargoed and you're unable to legally purchase their hammers, you'll still have hammers.
All the above grant you a strategic advantage since you'll still have the necessary tools to continue your business while your competitors won't (or will have to pay much higher prices for their supply of hammers).
This is expensive and risky, and also difficult to do piecemeal.
Disclaimer: former AWS + Amazon employee
> Adobe, Airbnb, Alcatel-Lucent, AOL, Acquia, AdRoll, AEG, Alert Logic, Autodesk, Bitdefender, BMW, British Gas, Canon, Capital One, Channel 4, Chef, Citrix, Coinbase, Comcast, Coursera, Docker, Dow Jones, European Space Agency, Financial Times, FINRA, General Electric, GoSquared, Guardian News & Media, Harvard Medical School, Hearst Corporation, Hitachi, HTC, IMDb, International Centre for Radio Astronomy Research, International Civil Aviation Organization, ITV, iZettle, Johnson & Johnson, JustGiving, JWT, Kaplan, Kellogg’s, Lamborghini, Lonely Planet, Lyft, Made.com, McDonalds, NASA, NASDAQ OMX, National Rail Enquiries, National Trust, Netflix, News International, News UK, Nokia, Nordstrom, Novartis, Pfizer, Philips, Pinterest, Quantas, Sage, Samsung, SAP, Schneider Electric, Scribd, Securitas Direct, Siemens, Slack, Sony, SoundCloud, Spotify, Square Enix, Tata Motors, The Weather Company, Ticketmaster, Time Inc., Trainline, Ubisoft, UCAS, Unilever, US Department of State, USDA Food and Nutrition Service, UK Ministry of Justice, Vodafone Italy, WeTransfer, WIX, Xiaomi, Yelp, Zynga .
I take it you don't use the bank listed. That's fine. Does your bank do transactions with them? Other banks? Other institutions/stores? Do you use NASDAQ? Do others? Since everything is so interconnected, it doesn't take much for one of those services to immediately or eventually affect a bunch of others. It might be relatively trivial if AWS goes down for a few hours, but what about a longer duration and the avalanche effect? Is that impossible?
You also changed the goalpost a bit from "worry me more than being unable to get a Lyft," which is the comment I responded to, to "safety-critical infrastructure." I can't give examples of that because no one in their right mind would list that anywhere.
Giant EMR clusters to develop fraud models
Running a giant dynamic marketplace
Running giant EMR jobs for pricing/demand
They are working on self-driving cars — which likely comes with massive storage requirements for recorded sensor data, and the compute to crunch it.
EDIT: now I see, page 3: "Simultaneously, we are building our own world-class autonomous vehicle system at our Level 5 Engineering Center, with the goal of ensuring access to affordable and reliable autonomous technology"
I strongly dislike the notion that on-prem hosting is somehow a bad thing, or too cumbersome, or otherwise totally solved by cloud providers. AWS specifically is hugely convenient in a number of ways, but it doesn't come close to the cost savings from running your own infrastructure. You need a pretty large amount of capital and engineering talent, but it really is worth it even in the short term (~3-5 years).
I think people would be shocked at what the money comes out to be if they saw costs from companies doing their own physical infrastructure. AWS makes you pay through the nose, seeing the difference would change a lot of minds I'm sure.
Facebook is a similar story: being the biggest website in the world is a core competency for them, and one of the ways they outcompeted rivals early on was by scaling their website better.
Uber has yet to turn a profit.
If datacenters are a part of your business proposition - not necessarily "we're selling datacenters to other people" but rather "we will be able to outcompete our rivals because our datacenter strategy will be better" - then self-hosting makes sense. But if the datacenter is a commodity from the point of view of your business - and I would assume that would be the case for Lyft - then it makes sense to buy off the shelf.
You have to rewrite application to use something else that's open source and self-hostable.
For new startups, I honestly recommend DigitalOcean or Vultr. You don't get all the AWS components, but you can build for flexibility. If you have to move, you can take all your Terraform and Ansible scripts and port them to a new provider (and yes, you do have to rewrite your Terraform config; every provider is insanely different and the magic of multi-cloud is a myth, but it's still easier than trying to move off of AWS-specific services).
I remember back in the day, Stackoverflow ran everything off of a single, very expensive, dedicated server. I've worked at other shops where we've migrated stuff from AWS to self hosted solutions to reduce our $200k/month AWS bill.
The trouble is that when people build things that have nothing to do with their core value propositions, they get locked into those services too. It is very easy for companies to get locked into their own homebrew garbage frameworks, clustering solutions, reporting and data analysis apps, or whatever else people hacked up because "omg vendor lockin!!".
I agree with you on Uber and Facebook though.
Given that cloud easily costs 6-7x as much as a well-priced dedicated server provider for the equivalent hardware resources, you can just buy 2-3x the resources you need for extra scalability and still not have to share those resources with anyone. Or, if you are in the tiny minority of companies that really does have extremely erratic load requirements, you can put your base load on bare metal and your excess load on cloud.
I don't understand why people on HN always put forth a false dichotomy between cloud and running your own data centre when there's a plethora of different mixes of infrastructure and managed services that falls in between.
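The arithmetic behind the parent's claim, sketched (the 6-7x premium and the 2-3x overprovisioning factor are the commenter's figures, not measurements):

```python
CLOUD_PREMIUM = 6.5    # claimed cloud cost multiple vs a well-priced dedicated host
OVERPROVISION = 3.0    # buy 3x the resources you actually need, for headroom

# Normalize the cloud bill to 1.0 and compare
dedicated_cost = OVERPROVISION / CLOUD_PREMIUM
print(f"dedicated with 3x headroom is ~{dedicated_cost:.0%} of the equivalent cloud bill")
```

Even with triple the hardware, the dedicated bill comes out at under half the cloud bill under these assumptions.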
This is a little tongue in cheek, but: Lyft is an abstraction. It doesn't own anything or have any customers because it's a market maker.
Lyft is an efficiency mechanism for maximizing liquidity and minimizing bid-ask spreads in hyperlocal ride trading :)
In other words, holding real estate in a Corp that does other stuff isn’t efficient.
If you've got billions, then you can create your own limited liability company, poach a bit of top talent to fill it (overpay a bit if you must), and get a decent operation going: one that will jump when you say jump, no matter what.
You can't replace AWS's global scale, but for your rental example it's definitely possible. Companies rent mostly for tax and liability reasons, from what I can tell.
That didn't turn out too badly for Amazon.
And frankly I'd rather invest in a cloud company than a money-losing taxi company.
That assertion makes no sense at all, particularly if we acknowledge the fact that they are in the business of providing a web service. IT infrastructure is critical to Lyft's core business.
Would it make any more sense to criticise Lyft for hiring developers because that would mean they would slowly turn into a software development company?
The article says it takes 20 people to run. GM hasn't turned into a datacenter company...
Is a century-old car manufacturer in Detroit able to do what a startup in Silicon Valley can't?
Interestingly enough, GM owns 7.8% of Lyft.
While in theory you'd "just need a database and some REST API," it is never as simple as that. Say you have one set of systems for production; you may want one or more duplicates for engineering purposes. Then you'll want tools to manage those systems, and tools to manage those tools. Then there is AAA, versioning and storage, and you'll need some sort of forensic/auditing log.
Up to some point, what makes a system expensive isn't the one set of parts that makes up production; that's just the tip of the iceberg. It's that you need everything else as well.
So regardless of whether you are doing a relatively simple service (getting people from A to B) or buying, sales, and logistics for retail (which isn't rocket science either), you get the same initial cost and overhead.
Maybe they're harvesting more than ride information. Perhaps they're aggregating behavioral data on customers to sell.