Needing this amount of resources implies that Snap is expecting huge growth. This sounds like a really bad move on their part and they should have committed to building out their own infrastructure on 'bare metal' over the next 5 years instead.
If you read their S-1, they list dependence on Google Cloud as one of their big risk factors. Yet they then go ahead and make this commitment instead of working towards eliminating it.
There are so many advantages to owning your stack, and if Snap thinks it's going to need 2 billion dollars to pay for cloud infra, they're at the scale where it makes sense to build their own. Just look at Facebook: they're able to create tailor-made data centers that fit precisely what they need. Snap's success relies on huge scale on the consumer side. If they want to scale their infra to support that 5 years down the line, this sounds like a poor move, since they will either need to play catch-up later on or prepare to pay serious dough to Google.
Paying for cloud services seems like a great idea when you can't predict your needs in the coming years, but given a deal like this, I don't think that's the case.
Facebook reported $27 billion in revenue in 2016. Snap reported $400 million. You're talking two orders of magnitude lower than Facebook and almost three lower than Google. Snap simply does not have the resources to pour into custom data centers, even if they can raise $2 billion for infra over 5 years. Unless they have serious talent already, they're not going to match Google's massive 15-year investments by a long shot, even if you only consider the services they actually need to use.
You don't need $400 million in revenue and custom data centers to pay less than the published rates of any of the public cloud providers. Heck, you don't need a million in revenue to save by going with managed hosting or colocated bare metal.
The key here is: Compared to published rates. They'll not be paying published rates, the same as there's no way Netflix is paying published rates at AWS.
They'll be paying deeply discounted rates, assuming their negotiating team isn't staffed with a bunch of people who failed their business degrees.
Anything more than a few tens of thousands per year, I'd say. I'm not sure where the lower threshold is for when they'd offer it, but I know that once you get into a couple hundred k a year, getting massive discounts is at least possible (I know of specific cases). Do your homework on what renting managed hosting would cost and what using reserved instances would cost, then talk to your account manager and use the managed hosting prices as your argument (they will be far lower).
A lot of my consulting is on cutting hosting costs, and I've yet to have a client where we couldn't come up with substantially cheaper alternatives to AWS. But sometimes showing your account manager that you know how insanely high their margins actually are, and that you have a credible alternative, makes enough of a difference for the client to end up sticking with AWS.
It also depends on how easy the cost is to cut, I'd say. E.g. if 90% of your cost is bandwidth, and it's mostly serving static assets, it's trivial to cut the cost dramatically by rolling your own mini-CDN outside of AWS. Bandwidth prices at AWS are between 10x and 50x higher than the cheapest managed-hosting or cloud competitors, depending on region, and the gap is even larger if you're big enough to look at peering options.
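To make that gap concrete, here's a back-of-the-envelope sketch. Both per-GB rates and the traffic volume are ballpark figures I'm assuming purely for illustration, not actual quotes from any provider:

```python
# Rough sketch of the bandwidth-cost gap described above.
# Both per-GB rates below are assumed ballpark figures, not real quotes.

def monthly_egress_cost(tb_per_month, price_per_gb):
    """Cost of serving tb_per_month terabytes at a flat per-GB rate."""
    return tb_per_month * 1000 * price_per_gb

traffic_tb = 500          # assumed monthly egress for a static-asset-heavy app
cloud_rate = 0.09         # $/GB, roughly a published cloud egress tier
cheap_rate = 0.005        # $/GB, cheap transit / budget CDN ballpark

cloud_cost = monthly_egress_cost(traffic_tb, cloud_rate)   # $45,000/month
cheap_cost = monthly_egress_cost(traffic_tb, cheap_rate)   # $2,500/month
print(f"{cloud_cost / cheap_cost:.0f}x")                   # prints "18x"
```

Even with these made-up numbers squarely inside the 10x-50x range, the absolute dollar difference is what makes the mini-CDN worth the ops effort.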
Is the comparison to Google's investments a good benchmark? Sometimes it's better to make something custom than to buy from a big corporation doing a more general thing at 100x your scale.
Sometimes. But look at Netflix. $8.8 billion revenue in 2016 and a good third of Internet traffic and nearly every single server they have is in AWS. They had their own custom data centers and made the choice to migrate to somebody else's infrastructure. I'm pretty sure they're doing just fine with that decision.
That's a really good example. They talk a lot about AWS, and nearly every service they run is in AWS. If you don't pay close attention, you might think they're all-AWS. I expect that Netflix' AWS pricing reflects that perception.
The exception is serving films. If you watch a film, you receive bytes served by hardware built to Netflix' specification, running in leased space at a mixture of colos and ISPs. That's a really big exception. The third of internet traffic is that exception.
>"Snap simply does not have the resources to pour into custom data centers, even if they can raise $2 billion for infra over 5 years."
They have 1,800+ employees, as disclosed in their filing. So clearly people resources are not a problem for Snap.
Do you believe it's not possible to build four datacenters for 2 billion dollars: two in the US, one in the EU, and one in APAC? These are tangible assets that you can depreciate as well. The lifetime TCO of a datacenter rack is $120K.
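Taking the comment's own numbers at face value (the $120K rack TCO is the commenter's claim, and the even four-way split is just an assumption for illustration), the arithmetic works out like this:

```python
# Back-of-the-envelope: how many racks does $2B buy at the claimed TCO?
budget = 2_000_000_000       # the five-year cloud commitment, repurposed
rack_tco = 120_000           # claimed lifetime TCO per rack
num_dcs = 4                  # hypothetical split: 2 US, 1 EU, 1 APAC

total_racks = budget // rack_tco        # 16,666 racks total
racks_per_dc = total_racks // num_dcs   # 4,166 racks per datacenter
```

Of course this ignores land, power contracts, networking, and staffing lead times, which is exactly what the replies below push back on.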
Disclosure: I work for Google, but nowhere near Cloud, and I have no knowledge of this deal.
Quite apart from the dollar costs and human capital required to build and maintain a DC, there's the lead time required to build the thing, and I'd speculate that that's potentially a significant factor in Snap's decision. Perhaps Snap are looking at e.g. how quickly Pokemon GO scaled their operation, and they're thinking for whatever reason that they might need to do something similar?
It's not just about hardware and data centers. Google Cloud offers a global, secure, high-performance SDN, scalability, manageability, things like versioning and deployment tools, code debugging, world-class security, and a high speed of innovation, all out of the box, or even as intrinsic qualities transparent to the customer.
By the time you build your infra (5 years was mentioned), Google would have iterated on their offering and their own network and data centers for 5 years.
There are datacenters for sale, every day of the year, in the USA, that are already operational, lit with fiber, already staffed with the needed people, etc.
Thinking about it more, there must be some other strategic reason behind their decision to go with Google.
I totally agree, you can't discount the lead times of things like zoning, permitting, and local bureaucracy. But on the other hand the time frame mentioned is a 5 year period.
I'd imagine not if that figure includes sales team for ads, content team for discover, and so on. Especially sales (I don't have any inside information, so this is purely speculation) can be pretty personnel intensive if you're trying to court ad buyers.
> Needing this amount of resources implies that Snap is expecting huge growth.
It's a hedge. If they don't grow as rapidly as expected then they're betting that someone else will buy the excess reserved capacity from them for close to market rates. So in the case that they only use 1.5 billion dollars worth of hosting, they're betting that they'll be out, say, $25M rather than $500M. On the other hand they want to make sure that if they do grow rapidly then Google has the capacity to meet their needs.
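The numbers in that bet can be sketched directly. The 95% resale recovery rate below is my own assumption, picked to reproduce the $25M figure above:

```python
# The hedge, in numbers: reserve big, resell the unused portion.
commitment = 2_000_000_000        # reserved capacity over five years
actual_usage = 1_500_000_000      # hypothetical: growth undershoots

excess = commitment - actual_usage                  # $500M unused
resale_recovery = 0.95                              # assumed: resell near market
loss_with_resale = excess * (1 - resale_recovery)   # ~$25M write-off
loss_without_resale = excess                        # $500M, the worst case
```

The asymmetry is the point: a small guaranteed loss buys insurance against both a capacity shortfall and a half-billion-dollar stranded commitment.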
I've never heard of anyone buying excess cloud capacity from a private party. Is this a thing? Is there a market for this? Sounds like a security nightmare.
Security wise it's no different than any other instance; specific machines aren't associated with the party who originally reserved the capacity in any way.
Cycle Computing does this. ATLAS, an LHC experiment, recently demonstrated this. I even built a product at Google that made this possible. We used Native Client (users compiled their binaries using the NaCl toolchain) as one part of the security container.
> This sounds like a really bad move on their part and they should have committed to building out their own infrastructure on 'bare metal' over the next 5 years instead.
It doesn't mean they won't. Apple is reportedly spending a similar amount on Google cloud.[1] They're also investing billions in building their own data centers.[2] Those things go together.
Do you really think a company facing literally hundreds of millions of dollars in cloud service bills is not going to run the numbers on plugging in their own servers?
There is nothing about using cloud services that makes this a unique risk factor. For self-hosting, replace "Google" with the company's own name and list all the things that can go wrong; it's pretty standard for a tech-company S-1.
For ex. this is Twitter's risk item for running hosted services:
> Our business and operating results may be harmed by a disruption in our service, or by our failure to timely and effectively scale and adapt our existing technology and infrastructure
Outsource the risk to someone who has an entire product devoted to that one area, and has many years of experience and is probably the largest provider on the planet for that service...
On the one hand - you're right. Google is good, I use GCP, they are competent and good to work with.
On the other hand "someone who has an entire product devoted to that one area, and has many years of experience and is probably the largest provider on the planet for that service" also describes Enron at the time.
Google's no Enron, but size isn't the metric one should use.
But even taking Enron into account, a taxi company would be stupid to try to drill and refine its own gas, even right before Enron went down.
Sometimes it's just better to pay someone else to do a job.
I agree that too often "put it in the cloud" is abused, but there are areas where it really shines, and looking at their filings it looks to me like Snap has done their homework and that it's going to be a good fit for them.
They are signing on with one of the biggest providers, they are building out their own infrastructure over time, they are signing up with a secondary provider just in case, and they lay out their reasoning pretty well for why they want this handled by someone with magnitudes more experience at it than they have. The big one is that their demographic is VERY fickle: any performance issues, outages, or major problems could easily be the thing that switches someone to an alternate service forever. And since they don't quite have a grasp on their expected growth, going somewhere you can scale at a moment's notice is a great benefit.
Facebook is not a good example here. Back in the early 2000s there was no cloud infra, so they had to start in-house. Snap is a different story: they not only use Google Cloud but also have Google engineers supporting them and on call for them. Snap can focus on product and growth, which creates much more value.
The other thing you get with Google Cloud is multiple datacenters around the planet. Assuming Snap is going after the 2 billion+ people not in America, this seems like an advantage of using someone else's gear. $2 billion seems high, though; maybe it's the maximum negotiated on paper for the deal rather than the guaranteed minimum?
When you have AdWords dollars pouring in, it makes sense to send out a few people to quietly buy up fiber and land/power/water for datacenters.
This seems like the key to me. Yes, if I'm a SaaS company with low bandwidth requirements but a ton of users domestically, maybe I'll build my own data center (think IBM, or maybe Salesforce). But if I am raising money in the hopes of scaling my user base into the billions, and I need hundreds of millions of people to be able to upload and download hundreds of megabytes of data to and from my servers daily, and have those images be instantly available to every user's contacts worldwide, I'll build on top of Google's infrastructure, which was literally designed for these kinds of tasks. Buying the metal is just the tip of the iceberg: Snap would have to design a high-throughput, globally consistent network of data centers, likely lay their own undersea fiber (as all the cloud providers have done), and assume all the technical risk that comes with building and operating that kind of infrastructure. I think they made the right call on this one.
To be fair, they did address this concern under the 'Operating Leverage in Our Business' section where they said that they may look for another third party to rely on for cloud computing or they build their own infrastructure.
"We have committed to spend $2 billion with Google Cloud over the next five years and have built our software and computer systems to use computing, storage capabilities, bandwidth, and other services provided by Google Cloud, some of which do not have an alternative in the market. We are currently negotiating an agreement with another cloud provider for redundant infrastructure support of our business operations. In the future, we may invest in building our own infrastructure to better serve our customers."
We run our infrastructure on public cloud but almost our entire stack is cloud agnostic (except for a particular big database service we use). So our serving nodes are distributed across many public cloud providers and we can make decisions based on location/cost etc. in different regions of the world.
The rule of thumb I've heard is that you should start looking beyond public cloud once your cloud spend hits ~200K/month. Since at this level, engineering and ops investments you need to make to maintain your own infrastructure start making more sense. I think it's safe to say Snap is beyond that threshold right now.
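That rule of thumb is easy to sanity-check. The ops team size and fully loaded cost per head below are purely illustrative assumptions, not figures from the thread:

```python
# Sanity check on the ~$200K/month "look beyond public cloud" threshold.
monthly_cloud_spend = 200_000                    # the threshold mentioned above
annual_cloud_spend = monthly_cloud_spend * 12    # $2.4M/year

ops_headcount = 4                # hypothetical self-hosting ops team
fully_loaded_cost = 250_000      # assumed salary + overhead per head
annual_payroll = ops_headcount * fully_loaded_cost   # $1M/year

hardware_budget = annual_cloud_spend - annual_payroll  # $1.4M/year left over
# Self-hosting wins if colo + hardware amortization fit inside that margin.
```

Below the threshold, the payroll line eats most of the cloud bill and the math stops working, which is roughly why the rule of thumb sits where it does.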
Having a basic scaffolding in place on a hosted cloud and making sure your devops scripts are up to snuff is a good idea when you don't know how much infrastructure you need, because then when the situation calls for it you can fire up a new node on-demand.
But unless you're still "in the garage" and a couple of DigitalOcean droplets are good enough, it's going to be much, much cheaper and usually much wiser to run your core infrastructure on your own colocated bare metal.
I've seen companies increase their server expenses by ~$1M/yr by moving everything to EC2, and they sit around congratulating themselves for it because now "they're in the cloud". There's no reason to do that!
Little humorous tangent: an AWS rep told someone I've worked with that Amazon really wanted to help them secure better pricing, because as new CFOs come from self-hosted companies and into AWS-dependent companies, the CFO's eyes bug out when they see the Amazon bills and EC2 becomes the first thing on the chopping block.
Script your stuff out in Ansible or something similar, run it on your own hardware, and use GCloud/EC2 as secondary data centers for failover/backup/support/emergency bursts/whatever. You can have the flexibility without paying through the nose.
> Script your stuff out in Ansible or something similar, run it on your own hardware, and use GCloud/EC2 as secondary data centers for failover/backup/support/emergency bursts/whatever. You can have the flexibility without paying through the nose.
Except then you have to run your own networking, and when shit fails (as disks, links, and switches are wont to do), it's now "your problem". Hybrid clouds and not being a tenant are nice, but not without time and monetary costs: by the time you have geographically distinct failover, you've also spent a non-trivial amount of opportunity cost making phone calls, flying around, and writing lines of code and config for things customers don't even know exist.
And when EC2 falls over, like it tends to do a few times a year? Hosts fall over; stuff dies. At something the scale of Snap, you're going to be doing setups that look a lot like cloud anyway: bringing new systems up either by cloning a disk or through PXE, setting up clustering, possibly using the stuff they're already using, etc. You're going to be writing a lot of the same failover code if you're running on someone else's hardware, so why rent?
> And when EC2 falls over, like it tends to do a few times a year?
Multi-AZ, multi-region complete failures are very, very rare. How often do you get a failure in your data center per year (that you notice)?
> You're going to be writing a lot of the same failover code if you're running on someone else's hardware, so why rent?
The answer is in the question -- when rented things fall down and go boom™, your code runs and someone gets a text message with the receipt.
When a handful of the "wrong" disks decide to revert to air-blocking bricks or your upstream network provider has an outage, you're lucky if it's something you can fix by heading to the data center. I promise that AWS or Google is better at running a DC, and unless you're trying to enter the hosting business, I wouldn't advise spending the time and money to meet their uptime and features.
I've only managed data storage on the scale of many petabytes (and this was a handful of years ago), and honestly, I think it required at least 20 hours a week of babysitting by various staff. At Snap's scale and traffic patterns (viral content, lots of writes, and so on), I imagine this is a very non-trivial spend on scaling, staffing, and tech implementation.
At $2 billion over 5 years, maybe Snap would benefit from rolling their own: hiring 50 great hackers at a mildly conservative $250k/head (say $200k average plus benefits, taxes, and employee support costs like HR, payroll, recruiting, and legal), eating a year or two of transition costs off their cloud hosting providers, then probably saving a bit of money even after hardware, bandwidth, facility, and insurance costs. Hell, maybe they'd even open-source some software, and recruiting would get easier after conference talks about how they did it. Or maybe they get bought by Google or Facebook in a year. Snap is in the business of selling ads and getting more eyes on those ads. Whatever enables growth and doesn't serve as a distraction or speed bump is a "fine" decision.
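For scale, the staffing cost in that scenario is a small slice of the commitment. The headcount and cost per head are the hypothetical figures from above, not real data:

```python
# What does the 50-hacker team cost relative to the cloud commitment?
headcount = 50
cost_per_head = 250_000          # fully loaded, per the estimate above
commitment = 2_000_000_000       # the five-year Google Cloud deal

annual_payroll = headcount * cost_per_head    # $12.5M/year
five_year_payroll = annual_payroll * 5        # $62.5M over the deal's life
share = five_year_payroll / commitment        # 0.03125, i.e. ~3%
```

So the payroll side is only a few percent of the deal; the real uncertainty lives in the hardware, facility, and transition-cost terms.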
>Multi-AZ, multi-region complete failures are very, very rare. How often do you get a failure in your data center per year (that you notice)?
First, if you don't notice some random/unexpected EC2 instance failures, you don't have a big EC2 deployment. Even though there is a lot of pomp and circumstance around the cloud, when it comes down to it, your instances are still on a physical server in a datacenter somewhere and they can, and sometimes do, fail. In that case, as in every other robust production deployment, your application (hopefully) performs an automatic and graceful failover to its standbys. The location of the standbys is usually an configuration value. Not seeing any unique value proposition here for "the cloud".
The point is that even when you're using EC2, you still have to set all of that up. Contrary to popular belief, EC2 is not a panacea that can magically make your software reliable and redundant. It's just a nice interface that makes it easy to rent servers from Amazon.
The only benefit you get from EC2 is that someone paid by Amazon has to go pull the box, but your company could hire such a guy in-house for _much_ less than it's paying Amazon.
The onus is still on the developers to figure out all of the application stuff that's necessary to accommodate failover and make sure that everything plays nice with each other, and getting that working right is by far the most time-consuming part of deploying a high-availability application.
So EC2 doesn't add any extra resilience; it's just outsourcing the job of pulling a server to an Amazon employee/contractor instead of YourEmployer employee/contractor. If your company is big enough (and at Amazon's prices, you don't have to be very big at all to be "big enough"), that doesn't make sense.
I know EC2 et al are popular because people like buzzwords, but that doesn't make it good business (or does it? Investors love cloud because it keeps capex low, and because investors are buzzword-driven like everyone else; saying "cloud" will make them like you more and want to give you more money).
For companies that are still in the garage (literally in the garage), shelling out $20/mo for a couple of cheap VPSes from something like DigitalOcean is going to be just fine. But once you get bigger than that, there's no way to avoid paying attention to this stuff, even if paying Amazon tons of money creates a false psychological connection that makes you think they're doing the work for you.
>The answer is in the question -- when rented things fall down and go boom™, your code runs and someone gets a text message with the receipt.
Let me fix that for you: when things fall down and go boom, if your code is written and your deployment is configured to support it, your product continues to work, and someone, somewhere, has to get a broom and sweep up some ashes.
Whether or not cloud is a reasonable proposition is primarily a question of whether it makes more sense for that someone who sweeps up the ashes to be on the corporate payroll of YourEmployer or YourCloudProvider.
>I've only managed data storage in the scale of many petabytes (and this was a handful of years ago) and honestly, I think it required at least 20 hours a week of babysitting by various staff. At Snap's scale and traffic patterns (viral content, lots of writes, so on), I imagine this is a very non-trivial spend on scaling, staffing, tech implementation.
EC2 is not a silver bullet. It's just an interface to allow you to rent servers from Amazon. EC2 users still have to babysit stuff, just not the hardware (though they still have to monitor resource usage, clean up disk space, and be prepared for things to blink offline with 0 notice -- again, all the normal things; only difference is that your hardware jockey is accessed through EC2's web support interface instead of Slack/cell).
>At $2 billion over 5 years, maybe Snap would benefit from rolling their own -- hiring 50 great hackers at a mildly conservative $250k/head (say $200k average + benefits + taxes + employee support costs (HR, payroll, recruiting, legal, etc))
Vastly overallocating here.
>Hell, maybe they'd even open source some software and recruiting would get easier after conference talks of how they did it.
Unnecessary, there's already tons of great open-source software to handle HA deployments (usually, this is the software underneath the commercial UI that makes everything work; it's surprising how much "revolutionary" commercial software is just glue code and a point-and-click wrapping around an OSS workhorse).
Of course, once you get unicorn-scale, everything has to go custom and/or highly modified because no out of the box solutions can handle the load, and that will be the case whether their hardware is hosted by Google or not. Again, "cloud" does very little to relieve workload for all non-hardware employees.
And the added benefit of being a trendy tech company is that after your company creates some extremely specialized solution, you can open-source it and watch with an uncomfortable mix of amusement and horror as 90%+ of other companies' tech departments contort themselves into pathetic, desperate architecture pretzels so that they can become cool by abandoning a stable, proven, mature stack for your company's experimental, sputtering, duct-taped abomination that requires a PhD to even get to compile.
This pattern has become so commonplace that reciting any specific example feels trite. You can probably name 12 off the top of your head. Hadoop in particular is a victim of many gross offenses of this type.
>Snap's in the business of selling ads and getting more eyes on those ads. Whatever enables growth and doesn't serve as a distraction or speedbump is a "fine" decision.
Sure, but they don't have to set massive gobs of money on fire for no reason along the way. But then, I guess they wouldn't be part of the Silicon Valley family if they didn't.
Snap is using App Engine, which transparently manages scale, availability, resiliency, deployment, and so forth. It's a higher level of service than EC2. Thus many of the valid concerns you describe do not apply to Snap, or are at least minimized.
> First, if you don't notice some random/unexpected EC2 instance failures, you don't have a big EC2 deployment.
The parent didn't claim they don't happen, just that (1) they were rare (a point you agree with, given the minimum usage needed to notice them) and (2) multi-AZ, multi-region failures nearly non-existent.
> The point is that even when you're using EC2, you still have to set all of that up.
It takes literally minutes to set up an ELB and Autoscaling group across five availability zones. How long does the non-cloud version of that take?
> First, if you don't notice some random/unexpected EC2 instance failures, you don't have a big EC2 deployment. ...Not seeing any unique value proposition here for "the cloud".
Because when something fails, you don't have to care about the "why" as long as you can replace it. I see about 4 instances per 1,000 needing maintenance per month. That's reasonable enough not to demand someone be focused full-time on making sure that only the good lights blink on the hardware.
> The point is that even when you're using EC2, you still have to set all of that up. Contrary to popular belief, EC2 is not a panacea that can magically make your software reliable and redundant.
You're making a strawman by suggesting people think it's a panacea. The advantage is that a lot of the work, maintenance, and feature improvements for 'infrastructure as code' is handled for you. Cloud hosting means writing the software layer and being done, no managing the infrastructure services, facilities, hardware, business relationships involved with rack/stack.
> It's just a nice interface that makes it easy to rent servers from Amazon.
To be fair, it's a _very_ nice interface.
> I know EC2 et al are popular because people like buzzwords, but that doesn't make it good business (or does it? Investors love cloud because it keeps capex low, and because investors are buzzword-driven like everyone else; saying "cloud" will make them like you more and want to give you more money).
If you think cloud hosting is popular because of opex or buzzwords, I think you're out of touch. EC2 and Google Cloud are popular because they let you focus on getting shit done, even when you have variable workloads that are uptime-dependent.
> For companies that are still in the garage (literally in the garage), shelling out $20/mo for a couple of cheap VPSes from something like DigitalOcean is going to be just fine. But once you get bigger than that, there's no way to avoid paying attention to this stuff, even if paying Amazon tons of money creates a false psychological connection that makes you think they're doing the work for you.
They _are_ doing a lot of work for you. You say $20 is the point that it makes more sense to self-host. I'll be charitable and round that up to $100, but even at that price, there is _no way_ you'll be able to get something as fault tolerant or low-cost as a cloud hosted solution. Do you really think that for $100 a month you can self-host geo-close servers with redundancy to the point that you don't have to think about it? Keep in mind that "two is one and one is none" when planning your hardware purchase.
> Vastly overallocating here.
No, that's conservative for a major US city (e.g. where Snap would be doing the hiring). Have you tried to pull a handful of really good system hackers out of thin air recently? Even if you can get them, they're not cheap, and you'd need a sizable team to pull off the highly-redundant world-wide install that Snap needs for its growth projections. It starts off expensive to hire good tech and gets more spendy the longer you're fishing.
And that's even ignoring the costs on productivity (for that and other employees) when an employee isn't happy or decides it's time to leave -- staffing also takes money and attention to maintain.
> And the added benefit of being a trendy tech company is that after your company creates some extremely specialized solution, you can open-source it and watch with an uncomfortable mix of amusement and horror as 90%+ of other companies's tech departments contort themselves into pathetic, desperate architecture pretzels so that they can become cool by abandoning a stable, proven, mature stack for your company's experimental, sputtering, duct-taped abomination that requires a PhD to even get to compile.
You seem like you're speaking from personal experience. Having a working infrastructure that isn't a barrier to growth isn't trendy or sexy, it's a base competency for any internet-reliant business model.
> Sure, but they don't have to set massive gobs of money on fire for no reason along the way. But then, I guess they wouldn't be part of the Silicon Valley family if they didn't.
This isn't setting "massive gobs of money on fire for no reason"; this is going with a high-performance datacenter that someone else maintains. They clearly have something very big in mind, and I doubt they made a multi-billion-dollar commitment without asking themselves "are we lighting this money on fire?"
> Of course, maybe they got some killer promotional deal with Google
For sure. How much is this free marketing that Google cloud service is getting worth? I'm pretty sure whatever discounted deal Google gave Snap is more than made up by this free marketing blitz they're getting.
Snap’s been a happy and public customer for some time, so any “free marketing blitz” would (a) essentially have been used up already, and (b) truly have to be remarkable to outweigh some form of discount where the non-discounted remainder /still/ represents $2,000,000,000 over five years.
First of all, the extent of Snap's dependence on Google Cloud and this extreme volume of spend were never public.
Also, there's a difference between something being public (like press releases) and actively generating buzz where lots of (relevant) people are actively talking about this.
I think if you were in a GCE sales meeting yesterday you'd have noticed a lot of people jumping up and down in joy. They've been playing second fiddle to AWS and in desperate catchup mode. Their next cold call got so much easier. Their next close got so much easier. Screw all that, their inbounds suddenly went through the roof. Lots of smaller startups etc. who would have never thought of Google cloud as an option are now seriously considering it. A lot of people who are already on AWS just signed up for GCE out of curiosity "just to see what the big deal is about". I don't think there's any way to overstate the impact of this news on Google Cloud's future.
> If you read their S-1, they list a dependence on Google cloud as one of their big risk factors. Yet they then go ahead and make this commitment instead of working towards eliminating it.
If they ran on bare metal then they'd list that as one of their big risk factors too.
True, but what's the opportunity cost of transitioning to bare metal? Is it worth slowing down development / feature releases / possible service outages? In the time it takes to transition, is it possible that cloud actually becomes cheaper than running your own infrastructure?
> There's so many advantages to owning your stack and if Snap thinks that it's going to need 2 billion dollars to pay for cloud infra, they're at the scale where it makes sense to build your own infra
Just to be clear, how many billion dollar infrastructures have you built up? What about played a significant role in, witnessing the various trade-offs that have been made? Taken part of, in any shape or form?
Of course, if you have experience that's all well and good. But I would expect someone with experience to lead with it, not omit it, precisely because they know how important it is with all the trade-offs involved.
I think they're paying for technical runway. They're not a big company yet, but expect substantial user growth. Right now, they probably believe that growth is more important than profit. So they're going to spend their engineering resources on getting more users to use and keep using their stuff.
GCP isn't sticky. They can leave, or renegotiate, or even try to get Google to do some of their engineering work for them ("We'd really love if this API did this as well..."). Or some combination thereof to minimize risk vs max profits.
If you watch Jeff Bezos' TED talk from years ago, he made the comparison that back in the day, beer brewers used to have to generate their own electricity.
Similarly, services like AWS provide companies with that type of infrastructure (data or internet in this case) so they can focus on building out features.
But to your point, it gets expensive at some point and I am surprised that Snap is still outsourcing this instead of having their own data centers. I guess it's because the CEO is not strong Technology-wise so the company is focused on Product?
But ensuring 5 years of guaranteed service at a fixed price is reducing their risk. It prevents Google from pulling the rug out from under them, or from raising the price when they have no option but to accept it.
I imagine part of their plans are to move away from the dependency, but while it exists - this is exactly how you reduce your exposure to that risk.
All in all, I would say that if needing a new cloud provider in case Google defaults is a big risk factor for your company, your company is in pretty good shape.
Difference is that a comment that simply states "bad move" is voted down in short order. This comment makes a cogent point about scales and infrastructure. This might be valuable. I love Hacker News too, just without the snark.
How about a HN reader who is taking part in a discussion? They offered their view and why they believe that. Nowhere did they claim "authority." Unlike your snide commentary, they are actually contributing to the conversation.
Yes, the OP does claim authority, by stating his opinion as plain fact, without knowing anything about it other than the publicly available information, and at the same time labeling a company that has raised $2.6 billion and has 100+ employees as complete idiots.
Good discussion would have ensued after a statement like "I wonder why they don't build out their own infrastructure because at this scale it is usually cheaper to...".
I could imagine that with this big a deal, they could get rate reductions that get the cost down to a comparable level to building out your own datacenters (at least 2) and hiring qualified ops people.
It might very well be a bad move, but you really can't say unless you know more.
Would it be better for everyone to preface their comments with "This is only my opinion but..."? Making a statement in a comment implies speaking your opinion, unless you start making strong claims to fact or citing sources. We don't need the extra linguistic clutter of framing every statement as a matter of opinion.
>"This sounds like a really bad move on their part and they ..."
That's hardly a "factual" statement.
>"Good discussion would have ensued after a statement like "I wonder why they don't build"
Do we really need to preface our comments with personal opinion disclaimers? When someone says "it sounds like", it seems pretty clear that they are admitting they don't know all the details, so how can what follows that phrase imply authority? That's absurd.
Especially as a fellow iOS dev, there's a few UI things I'd like to reverse engineer. Seems like every year they introduce something useful or cool-looking and I think, "Wait, how did they pull that off?"
That doesn't mean anything, though. "Some people who do X also do Y" does not imply that "people who do Y are qualified to comment on X."
For example: Elon Musk is a Twitter user. I too am a Twitter user. So's Kylie Jenner, Donald Trump, and random spambots. Using the same service does not mean they're equally qualified to speak with authority about the same things.
Correct. That's why I said "to be fair". I'm saying the playing field is quite level, so we shouldn't judge a comment on whether we recognize their username, but rather on quality of content.
> we shouldn't judge a comment on whether we recognize their username, but rather on quality of content
Ah right, I think I see the nature of our disagreement / misunderstanding. I totally agree with you on the general principle that quality of content should be allowed to stand on its own.
However, I believe that there are context-specific things that the men and women in the arena will face. And these are things that those of us in the stands, however thoughtful and discerning, will never be able to appreciate, because we simply do not know. (For a great read about this, check out Daniel Ellsberg's message to Henry Kissinger, on the reality of having access to top secret information: http://www.motherjones.com/kevin-drum/2010/02/daniel-ellsber...)
So for example. It seems obvious to me that the top comment is sensible and correct. Snap's CTO or whoever else made that decision is surely very familiar with the costs of being dependent on something like Google. So if they decide to do it anyway, I'm of the opinion that they're quite likely to have done it because of concerns that I am not able to appreciate, because I am not in their context.
Of course, there's a non-zero chance that Snap is making stupid decisions. But I think it's far likelier that they're making decisions that SEEM stupid to a 3rd party, but make perfect sense once you appreciate their context.
I don't think you're wrong, but I'd like to point out something. Armchairing decisions like this is a wonderful learning tool. Not only does it give people the opportunity to mentally work through issues that most of us will never face in our careers, but it's a wonderful opportunity to practice diplomatic, yet persuasive writing.
You're right, we don't know everything that has gone into a decision, but that's part of the value of an exercise like this. Being able to debate about something, remain civil and deal with specific arguments is an incredibly valuable skill that only gets more valuable as you age.
That random person is right. One can run a DC at the scale of 50 racks at least 20% cheaper than GCP. That's $80M a year, enough to hire 20 smart people at $2M a year and 200 reasonably smart people at $200k per year. Snap will vanish like their pictures.
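A quick sanity check of that arithmetic, using the commenter's own figures (which are assumptions about Snap's economics, not reported numbers):

```python
# Back-of-envelope check of the staffing math above. All inputs are the
# commenter's hypothetical figures, not Snap's actual costs.
annual_cloud_commit = 400_000_000      # Snap's minimum yearly GCP spend
claimed_savings_rate = 0.20            # "at least 20% cheaper" running your own DC

savings = annual_cloud_commit * claimed_savings_rate

senior_staff = 20 * 2_000_000          # 20 smart people at $2M/year
regular_staff = 200 * 200_000          # 200 reasonably smart people at $200k/year
payroll = senior_staff + regular_staff

print(f"savings: ${savings:,.0f}/yr, payroll: ${payroll:,.0f}/yr")
```

The two sides do balance at $80M/year, though note the sketch only weighs payroll against the claimed discount; hardware, colo space, power, and network spend would all sit on top of that.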
I think you're very conservative on that estimate. We run an infrastructure on dedicated leased hardware (Rackspace). Our infrastructure costs are a fraction of what the equivalent public cloud footprint costs. With technologies like Kubernetes and CoreOS, our private cloud practically runs itself. We focus on apps and the developer pipeline, much like we would do if we were on GCP/GKE. We have approximately 60 dedicated servers. We're almost at the scale where it makes sense to leave leased baremetal for colocation. For a company like Snap, it's hard to believe that they couldn't save a few hundred million by building their own footprint in leased datacenter space.
The days of needing massive ops teams to run on owned and colocated hardware are long gone.
For a company that has always been handed (basically free) money at obscene valuations, why would you assume they care about two billion? Maybe it's just as simple as that?
It's an excellent argument as to why I will not be purchasing this stock.
Bingo. They aren't paying advertised rates. Google will have margin but not as much as usual. Snap also has google by the balls here. What if snap comes out with a statement saying Gcp sucks and they're moving to aws?
Yes, I do think Google's biggest customer leaving their platform vocally would be more impactful than Google promoting/defending itself.
If Snap threatened to leave I'd wager that even the CEO of Google would get involved to keep their biggest cloud customer. Snap's revenue and brand name have enormous value to Google. There simply aren't that many $2 billion dollar cloud customers right now.
Without knowing what's in the contract, there is at least one way I can think of: make a statement saying all new workloads are being deployed elsewhere. Lots of ways to play the game.
Large companies aren't always right, but they're not always wrong either.
Snap has competent engineering execs that have built a very strong company. I would have a hard time believing that they haven't put way more thought into this than OP.
I don't think anyone is assuming they can't be wrong. I think it's more that the post said this was a bad move "with authority" when you don't have all the information unless you actually were involved with that decision.
It might be a bad move still but we don't have all the information necessary so it's strange to just assume we know better.
Completely wrong way to think about it. In fact, those companies are cases of fraud and lying, and we generally do not assume companies lie. When Google releases their quarterly earnings, we assume the numbers are correct, unless there is evidence to believe it's fraudulent.
Bear Stearns didn't fail due to fraud, it failed due to not adequately tracking and assessing the risk of its assets. It is worth noting that there have been serious allegations made a few weeks ago against Snap that they are lying on the S-1.
Your comment would be stronger if you removed the snark and actually refuted the parent's points. If you want to refute the points, refute them; this kind of comment adds nothing to the discussion.
The parent made some good points about scale and infrastructure. I've worked with infrastructure on many projects and I can't come up with a coherent argument why any of the parent's points are wrong. Can you?
I think asking him to refute points gets to the heart of the issue -- in an environment where nobody has enough information to speak authoritatively on a subject, the people who are willing to do so anyways are advantaged. (Think about the qualifications or knowledge that one would have to have to make an informed assessment of this.) This is bad for productive discussion.
I agree that empty snark is usually unproductive, but in this case, I think there's a useful point being made. Still, always best to rephrase into non-snark. :)
Counterpoint: Maybe the agreement is breakable or they know they can resell the resources. This metric could be a good way to trick "tech savvy" analysts who don't care about total users.
I assume spending this much would give them a below market discount and they could recoup some of their losses if nec.
All this assumes it is a positioning tactic with a hedge. Maybe they will just build a developer ecosystem and don't want to bifurcate their engineering output. If Twitter fails, it would validate this choice, since they can pay a slight premium now and, if their metrics increase, build infrastructure in 2.5 years with more info to spec it.
Snap needs to have consistent booked revenue in order to justify this sort of outlay.
This is akin to dot-com era companies signing long leases on buildings or small business owners buying lots of inventory. Plus, 2 gigadollars could easily buy 4-10 soup-to-nuts datacenters that have tangible (although less) resale value.
Overexpansion is easy to do and super risky... these sort of moves increase expectations and scare off wise investors.
If I could do it all over again I would probably opt for Google. The Kubernetes support is wonderful and the overall user experience blows AWS out of the water.
Glad to hear GKE's working well for you -- thanks hosh and whalesalad!
We do invest a lot in making Google Container Engine a great experience, including integrating it with other parts of GCP (e.g., IAM [1]), but at the same time, the core is plain vanilla Kubernetes.
Why opt for Google if you're going to use containers in Kubernetes? You then become cloud agnostic. You can even move to your own datacenter at some point (relatively) easily.
Dropbox built out their own environment (and did it migrating 500PB out of S3) [1] [1a]. As did Twitter [2]. And Facebook [3]. And GitLab [4] (too soon?) As well as Mixpanel [5]. Even Twilio is multi-cloud (last time I checked it was split between AWS and Rackspace; this was several years ago during an interview, so maybe it's changed). Sure, start in Google, or AWS, but at some point you will either need to use multiple compute/storage providers (redundancy) or go to your own gear (redundancy and cost).
Example: "In 2014, Moz CEO Sarah Bird said that it was spending “$6.2 million at Amazon Web Services, and a mere $2.8 million on [its] own data centers.” Simply put, the cloud killed its margins." [6]
EDIT:
simonebrunozzi: Forgive me, but when you're talking about hundreds of millions of dollars in spend, "easy" is relative. It is much easier when you're not relying on underlying primitives that are difficult to reproduce on your own at another provider (witness how terrible Open Stack is; no one wants to do that if they don't have to).
Am I minimizing the effort involved for this discussion? For sure. But the money involved...it solves most problems you would have migrating between providers.
> It seems to me that you have no serious experience in the real world.
You are entitled to your opinion. I have seen the pain, and it is relative. It's easier when someone says, "Here is the budget, just fix the problem," and your vendor's (AWS/Google) margins are 20-40% (these are real margins pulled from earnings reports); that's a lot of money you can put back in your own (or your shareholders') pockets.
If you were spending $2 billion, and I told you I could save you $400 million by spending $100 million, wouldn't you take that deal? Even at $200 million, it's a bargain!
For less than what Snap is spending on Google cloud infrastructure, SpaceX built a rocket that can take a payload to orbit and return the first stage successfully (SpaceX has taken on ~$1.2 billion in funding over the last 14 years). Moving out of a cloud provider is comparatively hard?
EDIT: Maybe this is a roundabout way to kick back to Google in order to get preferential treatment on the Ad network. It sure isn't a logical decision.
EDIT 2: @ashayh: I'm not saying go back to good ol' bare metal. For $2 billion, you could build your own cloud provider out as an internal operation. The amount that's being spent on Google Cloud is egregious, and worse yet, common shares have no voting rights to push back against poor decisions like this.
EDIT 3: @hueving: HN throttles my posting; editing this comment is my only way to respond. Sorry about that!
Ffs, reply to the comments replying to you, that's what the feature is there for. Don't preempt them by editing your comment so people see your response first.
There is a penalty box for accounts that are determined unsavory, like mine, where they have a finely-controlled quota of commentary beyond which they are "submitting too fast" and told to slow down. Everyone has this limit, it's just for some accounts it has been set significantly lower. I suspect with that system you'll see this more, rather than less.
Easily move? It seems to me that you have no serious experience in the real world. There's something called "data gravity", and the non-secondary issue of how to migrate a "live" system (in production) from one cloud to another over the course of typically several weeks.
Moving from one cloud to another, even with containers, is never easy at large scale.
(source: I have worked at AWS for 6 years, at VMware for 2, and I've seen hundreds of clients go through this exercise)
I consider 'toomuchtodo a voice of authority on operations based on much reading and discussion (particularly on moving to physical, which we've discussed before), have performed the very exercise being discussed four times in my own career ranging from a couple cabinets to a couple hundred million in capital, completely agree with the entire comment to which you are replying, and feel that your jab about "serious experience in the real world" was totally unnecessary.
If moving operations around is insurmountably difficult, you built operations incorrectly. Put another, even broader, way with a few more implications: if you are totally reliant on one vendor for continuity of any part of your operations, you built operations incorrectly and are introducing unnecessary risk. If us-east-1 goes down and you cease generating revenue as a result, you have built operations incorrectly. That's really all there is to it. And yes, I realize this means 80%, maybe more, of the operations in in the world is built incorrectly. We just learned Snap's is[0]. Maybe even yours! And that's fine as long as you're working on it. Good news: said exercise is a good chance to fix it!
Now that half the crowd is inhaling to bombastically retort that undoubtedly controversial, yet completely true, paragraph, allow me to quickly redirect:
What's "the real world," anyway? Most of HN forgets a Windows/.NET ecosystem exists, not to mention extra-valley gigs in, say, Nebraska. Would you say the lone sysadmin holding together a hospital in Des Moines is gaining "real world" experience and able to meet you in discussion? Seriously, I hate "the real world" and the people who fire it as a volley during an argument. Even your career is not indicative of "the real world." (Nor is mine.)
[0]: Flagrantly so. I've spent the better part of an hour trying to concoct a scenario where that deal is even remotely in the win column for Snap. Still trying. You enshrined a business disincentive (nay, prohibition!) toward optimizing your opex into a five year contract and $400mm a year operating Snap was some kind of win? ... How on Earth? Even with a quarter billion DAUs...
I really appreciate the kind words, and wish I had more than one upvote for your comment. I'm not here for profit or ego, just to share what I have learned and experienced for the benefit of others. "I am old, here are my mistakes, do not make them" sort of thing. Whether anyone believes it, there's a serious Bay Area echo chamber (which HN overlaps with); there is a whole tech world out there that isn't startup culture/tools/methodologies.
If you're ever in Tampa or Chicago, let's grab a beer or dinner my treat. Would love to share war stories.
While I agree with your position in principle, I also come from a "legacy IT" world, the kind where systems are moved to public cloud because developers are several steps ahead of the DC admins, or because the CIO wants all net new dev to be on infrastructure he or she doesn't own. These are not "digital natives" (who have no excuse, imho). These are the thick, long tail or large businesses with aging infrastructure and a lack of willingness -- or ability -- to pay for top talent.
An additional point is that even the second tier PaaS/IaaS providers (like GE/Predix, for example) are trying to get out of the DC ownership business. There's no compelling reason for them to keep their own DCs when a) it's expensive to run & keep fresh, b) it's CapEx, and c) it costs them expensive heads to organize and manage everything.
"Real world" and "real life" dismissals of said ideal are stupid, lame excuses for people who want to make a stupid, lame argument to cover up stupid, lame technical debt in operations by either (a) assailing the credentials of the speaker, as this entire thread has spent much time doing, and/or (b) providing a "well, everybody else isn't investing in this, neither should we" cop-out. And yes, you are doing it wrong. Rather than getting defensive, pulling the knives out, downvoting on sight, and trying to wipe away or justify doing it wrong, why don't you instead realize that it's motivation to do it better? We should all do things better, and every time a single AWS region goes down and takes out half the Internet I sigh because it's this exact, stupid, lame justification fest that results in that situation.
Work toward correct operations. You will never reach it. The cognitive dissonance of these two statements is totally acceptable. If you build a new service today and don't account for DR and security up front it just goes to show me that you haven't learned from the very public failures of those who have come before you, and that to me shows a lack of real world experience.
And yes, I'm aware of several shops that can literally flip a switch, and we're talking cage and ASN, not hobby-scale iOS backends. It's not like finding the holy grail; it just requires developers who "get" operations. It does exist. And it exists in real life! 😲
Actually it is. I just moved about 35k instances and 6PB in S3, plus multiple many-TB Postgres instances, from AWS to Google. It is not that bad anymore. Containers and fast internet make it easy-ish.
Alex, I'm curious to hear more about this experience. Did you use any physical device (Snowball, etc), or simply a very fast internet connection between the two? What made you move to Google?
Whether you want to run fully in the cloud, or homegrown DCs or a combination, you need talented people who bring the required savings about.
When it comes to in house DCs, very few companies seem to have that top talent.
For example, most companies are simply buying regular Dell/HP servers and Cisco switches/routers, slapping them in cabinets, and calling it a day. They simply do not know how to take advantage of high-density platforms like Open Compute, or of SDN. They also throw millions at software vendors like VMware/CA, instead of building their own provisioning or CPU/RAM/disk aggregation solutions with open source or custom tools.
If they don't know how to do it, then the theoretical savings of bringing things in-house simply won't materialize for those companies. And then it's better to throw money at AWS, especially if your in-house "engineers" are also breaking things 10x more often.
In a company with a badly run DC, developers very quickly latch on to cloud benefits like not having to wait 3 days for a DNS change and 2 weeks for a VM.
Yeah, some of the stuff in this thread is crazy. Are all these people new or what? Not too long ago, running on bare metal was the only serious option (shared hosting is non-serious). Cloud might offer some conveniences (just because AMZ and friends have made it so easy to give them more money), but the alternative is not that hard!
I agree the alternative isn't hard. I'm saying even very large companies just can't seem to do a great job of it. Which is why they run to AWS/etc in droves.
I've seen innards of very large DCs for more than a decade. At one of my first jobs at a fortune 500, I was responsible for everything from rack and stack to the command prompt. The expected turnaround time for a single physical server was 4-6 weeks until the application could be installed on it! One of the reasons was that they did not have automated DHCP/PXE provisioning. I started the process to enable it.. going through all the political, security mess, it was 9 months until it was enabled. I was gone by then.
An extreme example for sure, but if AWS revenues are growing like they are, then surely such issues are everywhere to some extent.
Can anyone here provide some info or make a comparison with the Azure Container Service here, or any AWS option?
I'm going to go through all of the service offerings this weekend - from Docker Inc's Docker for Azure and Docker for AWS to the native container services on each.
Azure Container Service is simply a PaaS-ish offering of Swam, DC/OS, or Kubernetes. It still spins up VMs that you can log into, but handles deployment/provisioning of the product, and makes some assumptions about your use case. It's a great way to push a button and have a "real" deployment of those services to evaluate, especially if your goal is a platform-agnostic target. I work for an Azure-focused cloud consultancy and for any serious production environment we still build out a more custom deployment using a combination of Terraform, Chef, Cloud-Init, CoreOS, etc.
Thanks for the comment - I'm going to try it out. Want to email me your consultancy (email in profile)? I work with a bunch of different companies who use Azure.
> Also them starting the project along with the knowledge they have internally scaling containers helps.
FWIW, most of this is advertising gimmickry, though, and Google has a pretty different internal infrastructure for orchestrating containers that has little to do with K8s.
Google Kubernetes Engine still runs Kubernetes. I have never looked at the Borg or Omega source code and have never worked on a Google team. It is my understanding that some key insights developed from Borg and Omega became part of the core concepts of Kubernetes, and they give it an edge over other open-source orchestration systems. These include grouping containers into pods and using label selectors.
Yes, many of Google's technical leads working on Kubernetes and Container Engine are former members of the Borg and Omega teams, so Kubernetes and our hosted version, Container Engine, both benefit from what we learned building those other systems. (I think our 5 most-senior engineers have ~40 years of container management systems experience between them now?)
And it's not just the rather-large core team directly on GKE and k8s, nor the related products like Container Registry [1], Container Builder [2], and Container-Optimized OS [3]. GKE and k8s benefit in other ways too: Google's internal kernel team helps debug customer issues when we trace them to the kernel, and people like Kees Cook are helping with the upstream Kernel Self-Protection Project [4] that make container technology more secure. In addition to that kernel work, Google also has rather-decent security teams and they work with us to improve security in other ways too.
Finally, re: toomuchtodo's question, "Why opt for Google if you're going to use containers in Kubernetes?" Because we hope you find that Container Engine is the best place to run Kubernetes -- and benefit from the other parts of Google Cloud Platform. If you ever find GKE is not that place, and you don't derive value from the rest of GCP, then exactly as toomuchtodo puts it: "You can even move to your own datacenter at some point (relatively) easily."
This is not correct. I've worked with both borg and k8s and k8s is effectively a rewrite of borg using the same container infrastructure. There are differences, but they aren't meaningful.
I can think of a couple that seem meaningful, like cluster state management architecture (borgmasters/checkpointing vs. everything lives under consensus in etcd -- ish), which seem to have introduced real difficulty in bringing Kubernetes to parity with Borg, particularly in the scale department. Then I see a comment like yours and realize again that thought was put into it by much smarter people than me, but that one remains a perceived change that confuses me as an outsider. I'm familiar with the flaws of the borgmaster architecture, but the etcd architecture seems like an oddly drastic rejiggering to address them; I say that with a surface-level understanding of both systems based on a very short exposure to Borg several years ago, so I'm probably completely wrong or out-of-date here.
Am I totally off-base, if you're able to speak to this? (Maybe it exists and I've missed it, but I'd love to see a blow-by-blow of the differences and their rationale, too, because that'd be valuable insight on how Google learns.)
I'm not sure what you're talking about with consensus and etcd. That doesn't have anything to do with the end-user experience using k8s on a product like Google Container Engine.
When I say k8s is like borg I mean: it has the same concepts of tasks, jobs, and allocs. The scheduling of those is handled by a k8s scheduler which resembles the borgmaster scheduler (a lot of hand waving here), and the containers themselves execute in an environment much like the borglet provides for containers.
Many of the valuable features provided by borgmaster and borglet are provided in k8s and you configure them through similar mechanisms.
Beyond that, how they are implemented specifically, there are a ton of differences but for an end user who is just using k8s, not setting up and managing k8s infrastructure, it's conceptually isomorphic.
"For less than what Snap is spending on Google cloud infrastructure, SpaceX built a rocket that can take a payload to orbit and return the first stage successfully (SpaceX has taken on ~$1.2 billion in funding over the last 14 years). Moving out of a cloud provider is comparatively hard?"
>primitives that are difficult to reproduce on your own at another provider (witness how terrible Open Stack is; no one wants to do that if they don't have to).
I don't understand this statement. I have built systems that work with openstack and then scale up to public cloud. The primitive is the same as EC2, which is a virtual machine. What did you have difficulty with?
But that is one of the outs that Kube gives you. Start on Kube/GCE and then if needing, you can migrate to colo boxes, and dynamically even. The beauty of a Kube based cloud platform is you can literally start anywhere, go anywhere.
There are some companies using k8s not just for the ability to move, but as a hybrid starting ground.
One bandwidth-intensive workload is running with GKE for all state/databases, but uses a bare-metal k8s cluster for the compute- and bandwidth-heavy stuff. They actually use federation: the bare-metal cluster basically looks for a local database, doesn't find one, and gets routed to the next available global cluster, which is GKE.
Due to the cache- and read-dependent nature of the database queries, the latency impact is worth it.
Gitlab reversed their decision to go with bare-metal. They still plan to ditch Azure and the most likely candidate (already testing to move their CI infra) for the move is GCP/GCE and GKE from Google.
> You can even move to your own datacenter at some point (relatively) easily.
> Dropbox built out their own environment (and did it migrating 500PB out of S3)
Like most discussions in this thread, this statement is way too general. Dropbox moved their storage from S3. What about EC2 or other AWS services they were using? Did they abandon all of those too?
So they hit $400m of revenue in 2016 and have committed to spend at least that much on infrastructure each year for the next 5 years? After all the costs for staffing and everything else, they had better achieve amazing growth if they ever intend to profit.
> On January 30, 2017, we entered into the Google Cloud Platform License Agreement. Under the agreement, we were granted a license to access and use certain cloud services. The agreement has an initial term of five years and we are required to purchase at least $400.0 million of cloud services in each year of the agreement, though for each of the first four years, up to 15% of this amount may be moved to a subsequent year. If we fail to meet the minimum purchase commitment during any year, we are required to pay the difference.
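Taken literally, the rollover clause in that quote changes when cash is due but not the five-year total. A sketch of one reading of those terms (the interpretation is mine; the real contract surely has more nuance):

```python
# Illustration of the quoted commitment terms: $400M/year minimum, with up
# to 15% deferrable to a subsequent year during the first four years.
# This models the extreme case of deferring the maximum every year.
MIN_YEARLY = 400_000_000
DEFER_CAP = 0.15  # applies in years 1-4 only

schedule = []
total_paid = 0
carried = 0  # spend deferred in from the prior year
for year in range(1, 6):
    owed = MIN_YEARLY + carried                 # this year's floor plus carry-over
    deferrable = int(MIN_YEARLY * DEFER_CAP) if year < 5 else 0
    due = owed - deferrable                     # push the maximum into next year
    schedule.append(due)
    total_paid += due
    carried = deferrable

print(schedule)    # deferral shifts cash between years...
print(total_paid)  # ...but the 5-year total is still $2B
```

Under this reading Snap can shave year one down to $340M at the cost of a $460M year five, so the rollover buys a little early-stage breathing room without reducing the commitment.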
I would invest in Snapchat at any valuation that is below Facebook's. Sounds crazy but the first time I opened the app (on a friend's phone) and it landed me on a camera screen (almost forcing to contribute content to use the product) I felt like the guys were going to win whatever space they were in.
My understanding is that Snapchat is cool among teenagers and tweens, whereas Facebook is old news. I've watched people "use" Facebook and it seems to provide negative value. If you are interested in diverse things, your feed will be a total freak show: images of deformed penises (medicine groups have those) mixed with 5-step cooking GIFs. Snapchat seems to have a much better focus on stuff people actually want to do, whereas the core Facebook experience seems to be just awful and outdated.
They'll exist, doesn't mean they'll be relevant, or profitable. If I were asked to bet against the stock, I probably would. Everything about their valuation and IPO smells to high heavens.
It's a shame that what started as one of the most private messaging apps around still doesn't use end-to-end encryption and has since been surpassed by many on that front, though.
I would bet that they don't end up completing the terms of the Google contract as signed; and that it would be heavily modified by year 3 of the contract.
One supposes that might seem "obnoxious", to those who were somehow psychologically committed to what... the perpetual existence of America? That ain't me. It seems unlikely that the continent will sink into the ocean anytime in the next five years, however.
Perhaps I could have been clearer, but I didn't say that Alphabet would fail anytime soon anyway. I referred rather to G's habit of discontinuing popular services.
Normal expenses are things you pay, and then you declare how much you just paid.
This is not a normal expense, it's a long-term contract. It states IN ADVANCE how much they'll pay over a long period of time. That can be used for all sorts of accounting magic, adjusted per year over multiple years however you like.
Ha! Assuming we're discussing the US, I'm consistently amazed at how complicated it is. The number of tax accountants and wide success of TurboTax lend credence to the idea that it's more complicated than a lot of people want to deal with.
Even deductions can get pretty hairy, in my experience. And frankly, if you have to say "just the basics", you're talking about a system that has more than just the basics. :)
I don't know what you mean by "accounting debt" but yes companies only pay taxes on profits. In addition, present day losses can cancel out future profits. There is nothing nefarious about this.
For those (like me) wondering what Snap is: it is the company behind Snapchat; they apparently changed their name a few months ago (source: https://en.wikipedia.org/wiki/Snap_Inc.)
Many here are talking about software, but at this scale I think we should be looking at the cost of energy. Suppose Google has a true edge on the rest of the market in what a watt costs them. Take that outlook over a horizon of 5 years and all software arguments are moot. If in 5 years Google can generate a watt at 10% of the cost that AWS can, that drastically changes the equation.
I don't follow these things closely, but what's the reason to expect that Google will be able to generate a watt at 10% of the cost that AWS can? 10x seems like a massively impressive edge to have over another huge player in the market for a commodity good like power.
Good question. There's no reason to expect Google to beat AWS, but Google absolutely beats Snap on price per watt if Snap builds this infra out itself. I'm no expert, but I'd say that when the outlook is five years, and you're Google, and you have your own internal energy hedge fund, a big margin (perhaps not 10x) is within the realm of possibility.
https://en.wikipedia.org/wiki/Google_Energy
The timing is curious: they signed this deal and added $2B in fixed liabilities to their balance sheet days or weeks before the S-1 filing.
That they didn't wait until after the IPO suggests Snap may see the partnership as a positive. Could also have been pushed from Google's end as it is a nice way of bragging about a big get but without having to formally announce it, and being associated with a hot IPO that will get a lot of coverage.
This makes no sense. One cannot spend $33M a month on Google Cloud. (Remember that it's roughly half the price of AWS, and given a contract of that magnitude it's possible they negotiated yet another half off.)
The amount of hardware and services one would get for that bill is insane. Snapchat doesn't need that much computing power and storage.
I'm not so sure. $33M would buy 412PB of egress alone. At 160M daily active users, that's roughly 2GB per user. In just bandwidth. That's high, and they've probably negotiated some deals to lower their bills, but also consider instances, storage (photos and videos), 10% of their bill is easily support...
$33M a month is the right magnitude. The more I think about it, the more I wonder if it's actually way too low, and they're getting really deep discounts.
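To put numbers on that back-of-the-envelope (a sketch assuming GCP's published $0.08/GB egress rate and the 160M DAU figure from above; actual negotiated rates are unknown):

```python
# Sanity check: how much egress would $33M/month buy at list price?
# Assumes GCP's published $0.08/GB egress rate; real negotiated rates unknown.
MONTHLY_SPEND_USD = 33_000_000
EGRESS_USD_PER_GB = 0.08
DAILY_ACTIVE_USERS = 160_000_000

egress_gb = MONTHLY_SPEND_USD / EGRESS_USD_PER_GB   # GB/month if it were all egress
egress_pb = egress_gb / 1_000_000                   # decimal petabytes
gb_per_user = egress_gb / DAILY_ACTIVE_USERS        # GB per DAU per month

print(f"~{egress_pb:.0f} PB/month, ~{gb_per_user:.1f} GB per user")
```

That comes out to roughly 412 PB/month and about 2.6 GB per daily user, consistent with the figures quoted above.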
I was just offered 1 gigabit unmetered for $500/month in the Denver colo I run my servers in. I have to assume that Google gets its bandwidth for far, far less!
That works out to 50 cents per Mbps per month. A 1 Mbps link usually means about 190 GB of transfer over the course of a month, I think. So 1000 * 190 GB = 0.190 PB per month.
412 PB thus means ~2,200 Gbit/s. Under my pricing, that comes out to a whole lot less than $33M/month.
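Running the comparison end to end, using the 412 PB figure from upthread and my quoted colo pricing (the 190 GB per Mbps-month figure implies roughly 60% sustained utilization, since a fully saturated 1 Mbps link moves ~324 GB in a 30-day month; these are one colo's prices, not a market rate):

```python
# Compare cloud egress spend to commodity colo transit, using the
# comment's assumptions: $500/month per unmetered Gbps, and ~190 GB
# actually moved per Mbps per month.
PB_PER_MONTH = 412
GB_PER_MBPS_MONTH = 190
USD_PER_GBPS_MONTH = 500
CLOUD_BILL_USD = 33_000_000

mbps_needed = PB_PER_MONTH * 1_000_000 / GB_PER_MBPS_MONTH  # PB -> GB, then per-Mbps
gbps_needed = mbps_needed / 1_000
colo_cost = gbps_needed * USD_PER_GBPS_MONTH
ratio = CLOUD_BILL_USD / colo_cost

print(f"~{gbps_needed:,.0f} Gbps, ~${colo_cost/1e6:.1f}M/month, ~{ratio:.0f}x")
```

That's on the order of $1.1M/month at colo rates, i.e. roughly a 30x gap against the $33M cloud figure.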
So, does your calculation involve storing that same 412PB?
No. In what world would Snapchat be paying the rates that you pay at your Denver colo center? Why is that even relevant? They pay what Google charges them, which is $0.08/GB egress. Probably less due to negotiated bulk discounts.
Bandwidth is bandwidth these days. Plenty of companies like Internap are out there that can help you with low latency bandwidth, especially when you are spending a lot.
But given that Snap could be anywhere in the USA, they could locate their servers anywhere. I wasn't pushing Denver, or the colo I am in, as the solution.
It's just kind of shocking that people are willing to pay 20 times or more the going rate for bandwidth...
I guess that speaks to the scale at which they are operating on Google Cloud. Diane Greene mentioned at a recent conference that one of their healthcare customers collects about 2 PB per user. Lots of companies struggle with managing and extracting value from data; that's usually where the bottleneck is. If they have the capability to handle more data, over time their services evolve to collect, store, and process more data. Once Big Data became a reality, many companies started collecting orders of magnitude more data. With Google Cloud it's easy to handle petabytes of data, which enables large-scale computing companies on Google Cloud (think driverless cars, genomics, large-scale machine learning, social networks, ...).
That seems like too much storage, considering that Seagate only shipped around ~250 exabytes worth of HDDs in 2015 [1]. Being extremely generous about the world supply of hard drives, we might have had only 1,000 exabytes of storage shipped in a year.
Assuming 2 petabytes per user, a mere 500,000 users would consume the entire storage produced every year. Maybe they do, but 2 PB/user seems improbable. That's also $14k per person of data (assuming no discount from Google to this company).
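Checking both numbers. The $14k figure happens to match GCS Coldline's ~$0.007/GB/month list price; that rate and the monthly billing period are my assumptions, since the parent doesn't say which storage class or term it meant:

```python
# Two checks on the 2 PB/user claim. The Coldline rate below is an
# assumed list price (~$0.007/GB/month), not stated in the parent.
PB_PER_USER = 2
WORLD_HDD_SHIPPED_EB = 1_000          # generous annual estimate from above
COLDLINE_USD_PER_GB_MONTH = 0.007

# Users needed to consume a whole year of drive production:
users_to_exhaust = WORLD_HDD_SHIPPED_EB * 1_000 / PB_PER_USER   # EB -> PB

# Monthly bill for one user's 2 PB at the assumed rate:
monthly_cost_per_user = PB_PER_USER * 1_000_000 * COLDLINE_USD_PER_GB_MONTH

print(f"{users_to_exhaust:,.0f} users, ${monthly_cost_per_user:,.0f}/month each")
```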
Let's not assume that Google buys storage from Seagate. Google makes its own hardware for many things (networking, the custom TensorFlow ASIC). Also, I don't think they have that many customers.
Terrance from Google Cloud Platform Support here. If you are having any issues interacting with our support team please drop me a line with some case numbers (tsg@google.com) and I will take a look and try and resolve them.
Yeah, that seems really high. $400m a year? They're 10% of Google's yearly cloud revenue on their own? That said, it's definitely what the S-1 filing claims:
"Any transition of the cloud services currently provided by Google Cloud to another cloud provider would be difficult to implement and will cause us to incur significant time and expense. We have committed to spend $2 billion with Google Cloud over the next five years and have built our software and computer systems to use computing, storage capabilities, bandwidth, and other services provided by Google, some of which do not have an alternative in the market."
"Google doesn’t break out revenues from its cloud infrastructure, choosing to lump it in with other non-advertising businesses like hardware and Google Play sales. But that segment totaled $3.4 billion in sales in the most recent quarter."
Even assuming all $3.4 billion of that was cloud revenue last quarter, $400m/4 = $100m per quarter works out to under 3% of it, not 10%.
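Taking the comment's own numbers at face value, the share is small:

```python
# Snap's committed spend as a share of Google's "other revenues" segment,
# using the figures quoted above ($400M/year commitment, $3.4B last quarter).
annual_commitment_usd = 400e6
quarterly_commitment_usd = annual_commitment_usd / 4
google_other_rev_quarter_usd = 3.4e9

share = quarterly_commitment_usd / google_other_rev_quarter_usd
print(f"{share:.1%}")
```

And since that $3.4B segment also includes hardware and Google Play, cloud's true share of it is smaller, so Snap's real slice of cloud revenue would be somewhat larger than this.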
Yes, I wonder if they're over-engineering things. I've seen MongoDB, Cassandra, and MySQL, all 3 used simultaneously, for example (until a more sensible CTO came along and consolidated everything into one database).
They released Memories last year which lets you back up your sent Snapchats with them. So I'd expect some non trivial permanent storage costs, at least at their scale.
> 2. Like Whatsapp, server has to store the message until the receiver is online again (until they open the app).
The server actually has to store it for a maximum of 30 days [0]. Snaps get deleted after 30 days (even if they're unopened). They're deleted immediately once the Snapchat servers get confirmation they've been viewed.
Sure, why not? They might work with the FBI to store data for specific targets, and the NSA probably intercepts and copies a lot of data onto their own servers, but I see no reason why the company itself would lie about this. They probably do delete the data immediately for the vast majority of their users.
Last time I checked, Google and AWS both had very, very high egress prices. At these levels, it's far cheaper to connect to a Tier 1 provider or three.
This could be a major issue with the IPO. Wall Street investors don't take kindly to long-term commitments such as this. It looks especially imprudent because it is on the order of their current revenue ($400m per year owed to Google versus $404m revenue in 2016). The similarity of the numbers just begs comparison.
Snap Inc. can negotiate this number down if their revenue targets aren't met, so it's really just a pro-forma agreement that can be changed. Source: the S-1; the article fails to mention that part.
It is so concerning to me that the company has committed to spending an average of $400M per year on cloud infra when their annual revenue is only around $400M, and they've revealed that user growth slowed from 17% to 3% in the quarter Insta released Stories.
It'd be one thing if they were going to use some of the IPO money to cost-optimize revenue, but I get the feeling that they need to focus on growing revenue due to how Insta Stories gutted them in 2016. That means hiring more people and writing bigger checks to Google.
And they're branding themselves as a "camera company." Their hardware division does not contribute materially to revenue (not profit: Revenue), and practically every other consumer camera company, from Kodak to GoPro, is dying.
Hah yeah, me too. It wasn't until halfway through the comments that I found out who Snap was. The article could at least have used the former name somewhere, or their logo.
Google also owns a piece of Snap through their venture arm. Plus, I would think Snap will want to stay close to Google for access to their advertising exchange. We are quickly getting down to just two advertising exchanges: Google and Facebook.
What do you guys think Google Cloud's margins are like? Presumably, Google is able to run a data center more efficiently than Snap could, meaning that the savings of running their own data centers will be strictly less than GC's margins... thoughts?
> Access to Google, which currently powers our infrastructure, is restricted in China.
What does this mean exactly? I could read the sentence above in two ways:
A- if a website is on Google infrastructure, then it will not be accessible in China
B- if a website is on Google infrastructure, then the cloud control panel will not be accessible to the IT operations personnel based in China
I think that it's the latter, I find it hard to believe that the scenario described at point A corresponds to reality. If that was the case it would have huge implications in terms of competition between Google and other providers.
I think it may be possible that, because of shared Google infrastructure, Google App Engine apps (which is what Snapchat runs on) are automatically blocked in China. This trick allowed WhatsApp to dodge censorship recently by feigning a connection to Google.com while actually connecting to an App Engine-hosted proxy.
What's the back-end equivalent of "free scaling?" For example, with the front-end, if you have a SPA or a website that's completely static, you can serve static files and JavaScript in a way that scales horizontally for free.
Is there a back-end designed (with compromises and all) in a way where scaling horizontally is free, at the expense of compute power or some other sacrifice? I want to say Erlang/Elixir, but I haven't played around with it enough to say for sure.
Most server backends can easily scale horizontally, including Ruby on Rails, Node.js, and Elixir. The only requirement is that you don't keep any "state" in your server code. A novice programmer can make this mistake in any language, including Elixir.
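As a toy illustration of that rule, here are two replicas of the same handler, in plain Python (a dict stands in for an external store like Redis or a database; this isn't any particular framework's API):

```python
# "No state in the server" demo: two copies of a handler stand in for two
# horizontally scaled instances behind a round-robin load balancer.

class StatefulServer:
    """Anti-pattern: keeps a per-process counter, so each replica diverges."""
    def __init__(self):
        self.count = 0
    def handle(self):
        self.count += 1
        return self.count

class StatelessServer:
    """State lives outside the process, so any replica gives the same answer."""
    def __init__(self, store):
        self.store = store  # shared external store (dict as a stand-in)
    def handle(self):
        self.store["count"] = self.store.get("count", 0) + 1
        return self.store["count"]

# Round-robin four requests across two replicas of each kind.
a, b = StatefulServer(), StatefulServer()
stateful_results = [s.handle() for s in (a, b, a, b)]    # [1, 1, 2, 2] -- totals diverge

store = {}
c, d = StatelessServer(store), StatelessServer(store)
stateless_results = [s.handle() for s in (c, d, c, d)]   # [1, 2, 3, 4] -- consistent

print(stateful_results, stateless_results)
```

The stateful pair answers as if there were two separate counters; the stateless pair behaves identically no matter which replica serves a request, which is what makes adding replicas "free."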
Makes me wonder if Google will buy a substantial amount of Snap stock, and whether there were takeover discussions before the IPO. I'm not sure I like this from Snap's POV, but it seems like they expect massive growth and possibly want to focus more on the core business than on infrastructure (which could come back to bite them strategically, but I'm sure they got a very sweet deal).
Snapchat could afford to build its own infrastructure if it wanted to. A similarly sized unicorn, DocuSign, has about 2,000 employees and probably several hundred million in revenue per year (its valuation is around a billion or two, depending on who you ask). They built their own data centers around the world.
But DocuSign derives a lot of money from B2B and uptime and location of the servers is important to other Enterprises. DocuSign also started building out its services more than a decade ago before a lot of the AWS or Google Cloud infra got built. So, the decision to build your own infra is as much a decision based on alternatives available. Few alternatives? Build your own infra.
By staying on GCP, Snapchat also guarantees its service and uptime will not change appreciably over the next several years. They built on GCP and migrating the whole service off would probably be a gargantuan task (how do you flip a switch and move all your compute overnight without hurting customer experience?). Staying with GCP allows Snap to maintain consistency of service while also buying time to build a transferable version of Snapchat that they could move to other infrastructure after the Google contract is over.
Investors on Wall Street don't like seeing huge changes to company strategy too close to an IPO. If GCP has worked for Snap thus far, it is far less risky to investors for Snap to keep going "business as usual." It's better to overspend to guarantee certainty of service and business health over the next few years than to make a massive capital investment. Once Snap gets off the ground post-IPO, they can make longer-term decisions about their infra.
Can I get someone's opinion on Snap? Is it worth paying attention to?
My understanding is that it's a package manager that installs applications in their own isolated Linux sandbox, meaning you can install/distribute them on any distribution.. right?
Does that mean software like node.js or nginx/apache will be available via Snap?
I think you're confusing the company formerly known as Snapchat, now called 'Snap Inc.' with the 'Snappy' package manager (hosted out of 'snapcraft.io') which makes linuxy packages called 'snaps'. The two are unrelated.
I also got confused (I was wondering "who is Snap, and why is he investing in Google"). Snap is not that popular on the news sites that I read (which is mostly Hacker News :), at least since their name change.
It seems dangerous to commit to spending so much money for so long into the future, with a particular vendor. Who knows whether Google Cloud will be best for your needs four years from now? Price, performance, reliability, support, whether it will have best-in-class abstractions, and so on...
I wonder if they're still using Google AppEngine, or have moved to something lower level. GAE resolved a lot of its scalability and isolation issues thanks to Snapchat.
In a way, Snapchat was to GAE what Hotmail was to Windows NT back in the day — trial by fire.