Introducing Cloud Spanner, a Global Database Service (googleblog.com)



Congratulations to the Spanner team for becoming part of the Google public cloud!

And for those wondering, this is why Oracle wants billions of dollars from Google for "Java copyright infringement": the only growth market for Oracle right now is their hosted database service, and whoops, Google has a better one now.

It will be interesting if Amazon and Microsoft choose to compete with Google on this service. If we get to the point where you have databases, compute, storage, and connectivity services from those three at equal scale, well that would be a lot of choice for the developers!


> It will be interesting if Amazon and Microsoft choose to compete with Google on this service. If we get to the point where you have databases, compute, storage, and connectivity services from those three at equal scale, well that would be a lot of choice for the developers!

There are also plenty of choices evolving for developers who aren't looking for hosted solutions (which can sometimes be a showstopper for enterprise on-prem deployments). There's a growing ecosystem of distributed open-source databases to look out for too.

Take Citus, for instance – a Postgres-compatible distributed store which automatically parallelizes normal SQL queries across machines. It's as easy to set up as adding an extension, and people are doing some staggering things in prod with it.
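
To give a flavor of how little setup that is, here's a minimal sketch from Python via psycopg2 (the connection string and table are made up; loading the extension and calling its create_distributed_table function is how Citus documents sharding a table):

    import psycopg2

    # Connect to the Citus coordinator node (connection details are placeholders).
    conn = psycopg2.connect("host=coordinator dbname=app user=app")
    conn.autocommit = True
    cur = conn.cursor()

    # Citus ships as a Postgres extension; loading it is one statement.
    cur.execute("CREATE EXTENSION IF NOT EXISTS citus;")

    # Shard an ordinary table across worker machines by a distribution column.
    cur.execute("CREATE TABLE IF NOT EXISTS events (tenant_id bigint, payload jsonb);")
    cur.execute("SELECT create_distributed_table('events', 'tenant_id');")

    # Normal SQL from here on; Citus parallelizes it across the workers.
    cur.execute("SELECT tenant_id, count(*) FROM events GROUP BY tenant_id;")
    print(cur.fetchall())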

Different audience from BigQuery and Spanner, but no less exciting.

Disclaimer: no professional association, but love their product and the team.


Craig from Citus here. Thanks for the kind words. We've seen a lot of people scale-out transactional workloads with Citus as well. In particular, we've seen a lot of multi-tenant apps that need to keep scaling beyond a single node when they're running into memory or compute issues.

If you are looking for something that is more Postgres-flavored (meaning we're just an extension to it, so you get all the good stuff of Postgres such as JSONB, PostGIS, etc.), then we hope we'd be a good fit. And we run a managed service on top of AWS as well (https://www.citusdata.com/product/cloud), built by the team that built Heroku Postgres. If you're curious about pricing, you can find it at https://www.citusdata.com/pricing/


It would be very interesting to have your product at the $200-300 price point. Currently, the lowest tier starts at almost $2,000 per month for the high-availability version.

I'm not trying to compare on a per-MB level, but it would be nice for smaller-scale workloads.


Helpful feedback - we do have a development plan at $99, but it's not really intended for production workloads. If you only have 10 GB of data, we'd heavily recommend going with something like RDS or Heroku Postgres. At that amount of data, single-node Postgres works great.


I really like your attitude here: when your product would be overkill for a use case, you just recommend a different product. I also really like your blog posts about Postgres; we use them a lot with developers to explain a bunch of internals, like the one on how to paginate in Postgres.


Link to blog post?


I believe this is the one they're referring to on pagination: https://www.citusdata.com/blog/2016/03/30/five-ways-to-pagin...
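
For anyone who doesn't want to click through: one of the techniques covered there is keyset pagination, which seeks past the last row seen instead of paying for a growing OFFSET. A rough sketch (the articles table is hypothetical):

    import psycopg2

    conn = psycopg2.connect("dbname=app")
    cur = conn.cursor()

    # OFFSET pagination scans and discards every skipped row; keyset
    # pagination seeks straight to where the previous page ended.
    last_seen_id = 12345  # id of the final row on the previous page
    cur.execute(
        "SELECT id, title FROM articles WHERE id > %s ORDER BY id LIMIT 20;",
        (last_seen_id,),
    )
    for row in cur.fetchall():
        print(row)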

Though hopefully you'll find many more useful ones about Citus and Postgres broadly on our blog: https://www.citusdata.com/blog/


RDS is not single-node - it's multi-AZ replicated. And that's what we are paying $300 instead of $99 for.

Imagine: RDS is literally the ONLY place where you can buy 10 GB of multi-AZ replicated, snapshotted, and managed PostgreSQL.

It's pretty much a monopoly, now that Google seems to have officially closed the book on ever supporting PostgreSQL.


> Imagine: RDS is literally the ONLY place where you can buy 10 GB of multi-AZ replicated, snapshotted, and managed PostgreSQL.

Not true. I was looking for a hosted Postgres provider and discovered these two:

https://aiven.io/postgresql (tried it, worked excellently)

https://www.elephantsql.com


> now that Google seems to have officially closed the book on ever supporting PostgreSQL.

I would be hesitant to say that this is a fact.


> It's pretty much a monopoly, now that Google seems to have officially closed the book on ever supporting PostgreSQL.

Uh, how? I wouldn't be surprised to see a Cloud SQL-like managed Postgres service from Google.

While there's obviously some overlap in the potential market for any relational datastore service, Spanner doesn't really overlap with a cloud Postgres service as much as Cloud SQL does.


I would be. It's been years, and Google has been building out various pieces of infrastructure around MySQL - including Cloud Spanner.

The issue is that the migration path of self-hosted MySQL to Cloud SQL to Spanner is pretty well defined. I don't see PostgreSQL being strategically important or relevant to Google for anything.

If I were a startup deciding on my database, there would be a lot fewer compelling reasons to choose PostgreSQL from the point of view of long-term viability. Hell, I can pretty much do a back-of-the-envelope calculation of how much it will cost me to support 100 million users on MySQL.

Is it safe to think that Evernote and Snapchat - startups that are giant success stories - are Google MySQL hosted? (In some form... maybe even Spanner.)

So Uber, Snapchat, Google, Evernote, and a clear-cut path for upward scale.

I have very little hope for PostgreSQL on Google Cloud.


> It's been years, and Google has been building out various pieces of infrastructure around MySQL - including Cloud Spanner.

What does Cloud Spanner have to do with MySQL? It's neither API nor SQL-dialect compatible with MySQL. If there are MySQL bits used somewhere in the implementation, they are well hidden, and irrelevant to users.

> The issue is that the migration path of self-hosted MySQL to Cloud SQL to Spanner is pretty well defined.

So what? Were there a Cloud SQL-like Postgres offering, the same would be true; Spanner is no closer to MySQL than to Postgres. (If anything, its SQL dialect is a little closer to Postgres's dialect than MySQL's, though not so much that you'll get away without doing substantial conversion going from either.)


You keep saying this - have you actually talked to them? Find some PMs and send a few emails. Postgres is definitely in progress.


>Google seems to have officially closed the book on ever supporting PostgreSQL.

Er, that's a bet I'd strongly suggest you not make.


Right? Apps written to use Postgres aren't just going to be re-written to use Spanner.

If anything, hosted Postgres from Google Cloud will be priced in a way that makes Spanner somewhat more attractive, as a way to get conversions to Spanner in the long run.


There are several managed hosting companies that will run Postgres (and other databases) for you on public clouds. Compose, Aiven, ElephantSQL, Database Labs, Heroku, etc. There are all kinds of price points and GCP is working on supporting postgres internally.

How many nodes are you looking to run for $300/month? Unless you have more than 150 GB/node of data, you don't really need a distributed database, which is what Citus is for.


Not at the price point that RDS is at; the starting cost of a multi-AZ deployment is lower. For a startup that's just starting out, it is the best and safest choice. Maybe Heroku too, but I'm not very sure about its reliability versus RDS.

Please note that what I'm paying for is availability and reliability, not a database per se.

And I'm not even talking about Aurora. That stuff is going to blow every other price point out of the water, at probably higher reliability metrics.


RDS is only multi-AZ if you want it to be (and are willing to pay twice as much).


As someone who works (in part) in the MS SQL field, is it irrational to be a bit worried about the effects some of these platform advances might have on one's career?

For example, being an MSSQL performance-tuning expert requires years of experience and probably pays very well, but just the other day I read an anecdotal story where someone switched a large BI database to columnar indexes, allowing them to replace very complex queries (extreme manual tuning to achieve acceptable performance) with just standard SQL at comparable performance.
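
For reference, the kind of switch described there looks roughly like this (the table and connection string are placeholders; clustered columnstore indexes are standard T-SQL as of SQL Server 2014):

    import pyodbc

    # Placeholder connection string; point it at your own SQL Server instance.
    conn = pyodbc.connect("DSN=mssql;UID=bi_user;PWD=secret")
    cur = conn.cursor()

    # A clustered columnstore index stores the table column-wise, so scans
    # touch only the columns a query needs and compress far better.
    cur.execute("CREATE CLUSTERED COLUMNSTORE INDEX cci_fact ON dbo.FactSales;")
    conn.commit()

    # After that, a plain aggregate can approach hand-tuned performance.
    cur.execute("SELECT region, SUM(amount) FROM dbo.FactSales GROUP BY region;")
    print(cur.fetchall())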

How long until the scale, pricing, and now transparent and full(?) SQL compliance offered by these cloud platforms start to make traditional RDBMS platforms a niche?


Microsoft has a history of sales and support that will give them a certain longevity. They also have less "brand hate" than Oracle. I don't think MSSQL is going to end up like Sybase any time soon, but I probably wouldn't focus on that stack starting now if you are into the startup or California scene. For many places in the USA, MS is the way to go.

EDIT: Also, most DB users don't need global-scale databases.


Not sure if this is what you intended, but you are aware that SQL Server was developed in partnership with Sybase until the mid-90s (when they were substantially the same product), right?


Neat history! I was not aware of that! I meant to imply that Sybase's RDBMS offering is no longer a big player, and I would not want to bet the future of my career on being an expert in it.


> ... but I probably wouldn't focus on that stack starting now if you are into the startup or california scene.

That is not necessarily a foregone conclusion. In tandem with their marketing of Azure, MS is pushing SQL Server on Linux heavily [0].

"SQL Server is Windows-only" is no longer a valid argument to choose another RDBMS if a startup uses lots of MS tooling but deploy on Linux servers.

[0] https://www.microsoft.com/en-us/sql-server/sql-server-vnext-...


I'm not saying MS is going to abandon the platform, but to me it seems entirely possible that "very soon" these cloud platforms using cheap, shared, commodity hardware might be so affordable and technically capable that they become a no-brainer choice unless you have a very good reason to use MSSQL (kind of the opposite of today: on-prem by default, cloud if necessary).


What it will do is encourage MSFT to drop the cost of MSSQL licenses in Azure.


> As someone who works (in part) in the MS SQL field, is it irrational to be a bit worried about the effects some of these platform advances might have one one's career?

There's a reason why regulated (including self-regulated) professions have continuing education requirements; progress happens and you become obsolete if you don't keep up with it.

Just because tech isn't regulated doesn't mean it's any more sensible to expect to remain valuable without keeping up with progress in the field.

That being said, MSSQL experts will likely have good-paying opportunities for quite a while, for the same reason that's the case for any well-established enterprise technology: lots of systems are going to be around using it long after it has become distressingly uncool to spend time learning.


I write software as a developer. This is how I earn my livelihood.

Four years ago, I determined that while development work might seem to be near the top of the food chain, there will come a point where my work will be replaced by AIs.

This is not so different from how word processors replaced the specialist job of typesetters. Word processors make "good enough" typesetting. You can still find typesetters practicing their craft; the rest of us use word processors and don't even think about it.

At the time, I was learning to put the Buddhist ideals of emptiness and impermanence into practice, and to become more emotionally aware: the _main_ reason I had thought I would never be replaced by an AI writing software had more to do with wishful thinking and attachment than with any clear-sighted look at it.

I also made a decision to work on the technologies to accelerate this. Rather than becoming intoxicated by the worry, anxiety, and existential anguish, I decided to try to face it. Fears are inherently irrational, but just because they are irrational does not mean they are not what you are experiencing. Fears are not so easily banished by labeling them as irrational. Denial is a form of willful ignorance.

Now, having said all that, whether our tech base will come to that, who can say?

Since then, I have been tracking things like:

Viv - a chat assistant that can write its own queries.

DeepMind's demonstration of creating a Turing-complete machine with deep learning using a memory module.

I watched a tech enthusiast write a chat bot. He does not write software professionally. Talking with him over the months as he tinkered with it in his spare time, I realized that in the future you won't have as many software engineers writing code; you would learn how to _train_ AIs once they become sufficiently accessible to the masses. Skills in coaching, negotiation, and management become more important than some of the fundamental skills supporting software engineering. And like typesetting, I can see development work being pushed down the eco-ladder.

It's not surprising to me to see that Wired article about coding becoming blue-collar work. And even that will eventually be pushed down further.

I'm also not surprised by Google's site-reliability engineering book, branding, and approach. I have done sysadmin work in the past, and I can already see traditional, manual sysadmin work being replaced.

It's easy to get nihilistic about this, but that isn't my point here either. I know the human potential is incredible, but I think we have to let go of our self-serving narratives first.


I find this fascinating. There are a few ideas that are at play. One is the march of progress seeking to automate everything. The rationale of automation is to improve productivity. But what happens when everything is automatic? I don't see a corollary being played out at the moment. There are a small number of people reaping the benefits, and huge swathes of the population being marginalised and disenfranchised as a result.

The second idea that interests me is this idea of very high technology. It is built upon layer after layer of very clever tech, year after year, and I wonder how long it would take to start again from scratch if some disaster rendered a large part of one of these layers unusable.

For instance, if you were on a desert island, could you (would you want to?) build some piece of tech? An electric generator would be useful, perhaps. How long would it take to build? You'd need knowledge, raw materials, plant, fuel, etc. It's not an easy solve. And that's way down the tech stack, before you start talking about AIs. I suppose what I'm saying is that the AI layer is based upon such high tech that it is inherently fragile, because it is so hard to do.


> There are a few ideas that are at play. One is the march of progress seeking to automate everything. The rationale of automation is to improve productivity. But what happens when everything is automatic? I don't see a corollary being played out at the moment.

I don't know! :-D

I don't know what society would look like from a purely technological point of view. From a spiritualist point of view, though, it could either go very well or very badly. When everything is automated, would people have enough time and space to really start asking the really big questions? Or would it accelerate and intensify existential anguish?

> There are a small number of people reaping the benefits, and huge swathes of the population being marginalised and disenfranchised as a result.

Yeah. Arguably, this has already happened.

> The second idea that interests me is this idea of very high technology. It is built upon layer after layer of very clever tech year after year that I wonder how long it would take to start again from scratch if some disaster rendered a large part of one of these layers unusable.

The stuff of sci-fi :-D Among them, alt-history novels (what happens when someone drops into a lower-tech era; you'd have to start from 0 ... literally, 0, as in Arabic numerals).

Open Source Ecology is trying to preserve some of this tech base. I find their aims awesome, though I am not sure how effective it is.

The flip side is things being said from well outside the techno-sphere (for example, by shamans and mystics). It is the perspective that the further evolution of human consciousness will, at some point, no longer require technology or artifacts. Technology is seen as the last crutch. The collapse of a high-technic civilization then sets the stage for the removal of that crutch, and humans learn to stand on their own two feet (so to speak).


Anything that you can easily specify and describe in detail can be automated. In practice, the world is filled with computers that need programming to cope with the ever-changing, chaotic actions of our users. Personally I'm long on software developers, despite being very excited by how AI has blown up lately.


Agree. I remember someone advising me against getting a degree in computer science, back in 2000. The argument was: look at MS Word - what else would you want to add to it? It has more features than you need.

Not a fair argument against the point made above; however, I believe we will find the next big challenge for software to solve as soon as traditional problems are commoditized/automated and considered solved. Also, just knowing how to code is not going to be enough. You must complement it with domain expertise to solve challenging, unsolved real-world problems.


I would say: quite a long time, but the scale/importance of traditional deployments will be going down through that time. If you're looking around the market regularly, you should have ample time to notice that gigs are not what they used to be, so maybe it's time to change areas.


People are abusing databases like MSSQL to do things they may not be good at. Large-scale analytics is an example where databases like Infobright give amazing performance.


It'll be interesting to see how well customers adopt this. When I was at one of the two companies you mentioned above, we tried adding global snapshots (a la TrueTime, which is the real innovation in Spanner, not the clocks) and demoed it to our DBAs/MVPs. They didn't understand what on earth was going on. They wanted something that worked with existing clients. We just gave them classic 2PC and they went home happy. I think that's the reason why Oracle will keep on chugging. There just aren't that many workloads that need this sort of scale. It is real cool technology though, and we always used to wonder why Google wasn't offering Spanner as a service.


As a bit of a veteran in the database industry, I concur (at least about the impact on Oracle's database business). There is a lot of pent-up demand for anything that offers distributed consistency.

We've been seeing this demand at Fauna. FaunaDB offers distributed consistency, based on Raft and the Calvin protocol instead of depending on specific networking and clock hardware. We've seen a big part of our appeal is the ability to run FaunaDB across multiple cloud services.


Wait, Fauna uses the Calvin protocol? Would you mind linking a white paper? I didn't realize it was in use outside Calvin/CalvinFS.


We have kept it under wraps. The whitepaper will be ready next month most likely.


Awesome! I'll keep an eye out.


What is the monetisation plan? Purely SaaS with on-premises, or an open-source version with support, like Postgres/MySQL?


The serverless cloud is pay-as-you-go. There is no minimum spend, unlike Spanner's $1000 per month (apparently). And it's cheaper than operating any open source on cloud hardware.

On-premises is licensed by core.

We have a developer edition you can use on your local machine, but we don't currently have plans to open source FaunaDB itself.


Where did you read that Cloud Spanner has a $1,000 per month minimum spend? I can't seem to find any mention of this.


Minimum 3 nodes * node pricing of $0.90/hour = $2.70/hour, which is $64.80/day, or about $1,944/month.


Where is this 3-node minimum mentioned? URL please?

Edit: Looks like maybe you're referring to the recommended 3 node minimum in production mentioned at https://cloud.google.com/spanner/docs/instance-configuration


While I'm not familiar with Spanner's inner workings, I would guess that they recommend 3 instances for quorum establishment in case a region becomes unreachable. If that's the case, using fewer than 3 instances could cause major problems.
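
The arithmetic behind that kind of majority quorum is easy to sketch:

    def majority(n):
        """Smallest group size that any two quorums must overlap in."""
        return n // 2 + 1

    # With 3 replicas the quorum is 2, so one failure is survivable.
    # With 2 replicas the quorum is also 2, so one failure blocks writes.
    for n in (2, 3, 5):
        print(n, "replicas: quorum =", majority(n),
              "-> tolerates", n - majority(n), "failure(s)")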


Since Spanner currently only supports single-region deployments, it clearly isn't recommended as protection against a region becoming unavailable.

It may be recommended as protection against an availability issue on an instance, though, which is, after all, a big reason why you'd want a distributed DB in production.


I suppose the loss of a region doesn't apply (yet), but yes, the quorum requirement would still apply even if you only had instances in a single region.


Even within a single region you can put three nodes in different zones.


>well that would be a lot of choice for the developers!

A sad choice though. The centralization of computation is likely not a good thing in the long run.


The movement from ownership to renting on the web is absolutely terrifying to me. Within the span of a few years we've gone from owning our technology to renting it from a few big players, for monthly fees that we cannot completely predict or control.

The advantages of owning your own hardware will never go away, but soon this will be made quite intentionally impossible as the big players coalesce and continue building their walled gardens.

This is already happening. All the big players own their hardware and rent it out to everyone else, while trying to convince everyone it's not worth owning your own hardware at the same time.

These companies have already begun closing off server platforms by developing custom hardware and software systems that cannot be bought for any price, only rented. These systems represent a new breed of technology with unbreakable vendor lock in.

These same companies compete with each other and countless other companies across the space. Take for example a start-up that wants to run their own app store. Google, Amazon, and Microsoft all run app stores. Where will this company go for cloud services? Their only big-name options are to host their software on the hardware of a direct competitor. Their host has full visibility into how their system works, and control over the pricing and reliability of their machines.

It's laughable to think their "cloud partner" will give them any chance to compete if they enter the same market.

We've seen UEFI BIOS and un-unlockable mobiles enter the market in droves over the last few years. A lot of new PCs can't run anything except Windows. A lot of new phones can only run the carrier's version of Android. We have all these general-purpose CPUs that can no longer run general-purpose programs because "security", and a lot of lobbyists pushing to make it actually illegal to run your own software on them with "anti-tampering" laws, again for "security". Soon the big guys (the same companies, MS and Google) will make it impossible to run your own software on any reasonably inexpensive device, and the walled market will be complete.

Mark my words, I've never seen an industry with a couple of big players where growth and innovation doesn't eventually turn into collusion, higher prices, and market stagnation. Once MS, Google, and Amazon have their slice of the pie and they've killed off everyone else, we will see the death of general-purpose computers and mobile devices. Everything you buy will be an "Android computer", "Windows computer", or "Apple computer". Anything general-purpose will be massively more expensive, because individual companies can't get the kind of volume discount available to the giant behemoths that increasingly control large swaths of the world's computing power. We've already seen the endgame, with Amazon trialing an "on-premises" version of their compute platform, which is basically a super locked-down server that you can't buy, only rent endlessly. The future of on-premises will be a cloud in a black box if these companies have anything to do with it. Why? Because once they've got you locked in, it makes no sense to sell you anything for keeps. Why keep improving their product so you buy the new version, when they can just make it incompatible with everything else and force you to rent it forever, for whatever price they feel like charging?

One day running your own servers will be like running your own ISP: massively impractical, because the free market has been manipulated to the point that it effectively no longer exists.


> One day running your own servers will be like running your own ISP: massively impractical, because the free market has been manipulated to the point that it effectively no longer exists.

What? People use cloud computing because it already is massively impractical to run your own servers. Hardware is hard to run and scale on your own, and it benefits from economies of scale. This principle is seen everywhere and can hardly be viewed as controversial. Walmart, for instance, can sell things at a really low price because of the sheer volume of their sales. Similarly, data centers experience economies of scale.

As someone who cares about offering the best possible, reliable user experience, I see cloud computing as absolutely the next logical step from bare-metal on-prem servers. When your system experiences load outside the constraints of what it can handle, a properly designed app with independently scaling microservices scales horizontally.

Even if you had the state of the art microservice architecture running on a kubernetes cluster on your own hardware, you still wouldn't be able to source disk/CPU fast enough if your service happens to experience loads beyond what you provisioned.

And there is the rub: buying your own hardware costs money, and no one wants to buy hardware they may never use. Another advantage of cloud computing.

You are seeing the peak of the free market right now, because of cloud computing, which enables people with little upfront cash to form real internet businesses and scale massively.

Do you think a game like Pokemon Go could exist and do the release they did without cloud computing?


"Even if you had the state of the art microservice architecture running on a kubernetes cluster on your own hardware, you still wouldn't be able to source disk/CPU fast enough if your service happens to experience loads beyond what you provisioned." That basically means you never planned. As everyone moves to cloud what makes you think AWS, Azure wont have same issue. If entire region is down do you think other regions can handle the load. If you think so you're kidding yourself. Unless you have business where you dont know your peak number then cloud does not matter.


You can plan all you'd like; failures happen not necessarily due to poor planning, but because in real life, shit happens. Pokemon Go, for instance, experienced something like 50x the traffic they planned for.

Secondly, software companies like Microsoft, Google and IBM might know a thing or two about running data centers. Due to economies of scale, these companies are inherently in a better position to supply hardware at scale.

> If entire region is down do you think other regions can handle the load. If you think so you're kidding yourself

Netflix routinely does just this to test the resilience of their systems. They pick a random AWS region, and they evacuate it. All the traffic is proxied to the other regions and eventually via DNS the traffic is routed entirely to the surviving regions. No interruption of service is experienced by the users.

Here's a visualization of Netflix simulating a failure of the us-east-1 region and failing over to us-west-1/us-west-2:

https://www.youtube.com/watch?v=KVbTjlZ0sfE

The top right node is the one that fails. As the error rate climbs, traffic starts getting proxied over to the surviving nodes, until a DNS switch redirects all traffic to the surviving nodes. Netflix does this monthly, in production. They also run https://github.com/Netflix/SimianArmy on production.

The cloud enables fault tolerance, resiliency and graceful degradation.


I think you missed the point, Netflix evacuating a region is not the same thing as that region failing. If the whole region goes down, their (AWS's) total capacity just took a major hit and unless they have obscenely over-provisioned (they haven't), shit is going to hit the fan when people start spinning up stuff in the remaining regions to make up for the loss.


>The cloud enables fault tolerance, resiliency and graceful degradation

No, tooling to failover and spin up new instances does that. An enterprise with 3 data centers can do that.

"the cloud" is just doing it on someone else's hardware.


Have you run your own servers in a colo? I've done it myself.

One person, with maybe 3 hours a week of time investment after a few weeks of setup and hardware purchase. Using containers I can move between the cloud and my own servers seamlessly, as long as I never bite the golden apple and use any of the cloud's walled-garden "services" like S3. If I need more power, I can spin up some temporary servers at any cloud provider in a few hours. For me the cloud is a nice thing because I don't use too much of it. If AWS disappeared tomorrow it would be a mild inconvenience, not devastating like it would be for many newer unicorns.

Go ahead and try to use the cloud you're paying for as a CDN or DDoS shield, or anything amounting to a bastion of free speech. You'll quickly find out that your cloud provider doesn't like you to use all the bandwidth and CPU you pay for, and they don't like running your servers when they disagree with your views. They quietly oversubscribe everything, pulling the same crap as consumer ISPs where they sell you a 100 Mbps line and punish you if you use more than 10 of that on average. That's the main reason the cloud is so cheap.

Hardware is cheap, colos are cheap, software is largely easy to manage. The economy of scale they enjoy comes from vendor lock-in and oversubscription more than anything else.

Is it really that hard to double the number of servers you own every few weeks? No! If you're using containers or managed KVM, you can mirror nodes basically for free over the network as soon as the Ethernet is plugged in. Your time amounts to what it takes to put the thing in a rack, plug in the Ethernet, and hit the "on" button. Everybody in SV land thinks you have to use the cloud to "scale massively", but they forget that all of today's technology behemoths were built years ago, when the cloud didn't exist. Oh yeah, they all still run all of their own hardware too, and have from the early days. Using their model as a template, you should own every single server you use and start selling your excess capacity once you get big enough.

Did you ever read about how Netflix tried to run their own hardware but can't, because they have so much data in AWS that it would basically bankrupt them to extract it? Look at how these cost models work: usually inbound bandwidth is extremely cheap or free, but outbound is massively more expensive than a dedicated line at a datacenter - 50-100 times the cost if you're saturating that line 24/7. The removal fees from a managed store like S3 or Glacier are even more ludicrous. The cloud is like crack, and as soon as you start using it more than a few times a year, you will get locked in and be unable to leave without spending massive $$$. Usually companies figure out this shell game once they're large enough, but by then it's far too late to do anything about it.

Why are they marketing these things so heavily to startups? Because lock-in is how they make their money. They make little or nothing on pure compute power, but since you don't have low-level hardware access, they can charge whatever the hell they want for things like extra IPs, DDoS protection, DC-to-DC peering, load balancing, and auto-scaling. You give massive discounts to new players using these systems, and inevitably some of them will become the next Uber or Netflix. Then you are free to charge whatever exorbitant rates you please, once it's so impractical to move that doing so would require a major redesign of the business.

I see it a lot like franchising. By building on Amazon's cloud services you become "Uber company brought to you by Amazon". Like franchising, your upside is limited because any owner with a significant share of total franchises will begin to put pressure on the service owner itself.


To be honest, you sound like a conspiracy nut hell-bent on hating the Cloud. Maybe you should take a deep breath and try to open up to the possibility that the Cloud is actually a good thing, and that Cloud providers aren't the Illuminati trying to "lock you in". Well, maybe they are. Of course every cloud provider wants you to use their services.

But any "lock in" is totally up to you. Take a look at this: https://kubernetes.io/

You can architect your system in a way that it'll run on any cloud provider. All the major Cloud Providers support kube for orchestration.

To be honest I don't think you know what you're talking about. You should refrain from making uninformed opinions on hacker news, especially on a throwaway.


> Did you ever read about how Netflix tried to run their own hardware but can't because they have so much data in AWS that it would basically bankrupt them to extract it?

Where did you read this? You can have Amazon send you a truck full of hard drives. I doubt it costs more than Netflix can afford.


Never mind, I misremembered the story I read about them. They moved the main site to AWS, with the huge omission of their movie-streaming system. Their own Open Connect servers are far cheaper to use for this because of massive AWS outbound data costs.

Also, the truck is for data in, not data out. Getting data out of AWS is far more expensive than putting it in. That's the lock in.


The 'huge omission' is by design.

> Also, the truck is for data in, not data out. Getting data out of AWS is far more expensive than putting it in. That's the lock in.

This is also not true. The bulk transfer service is bi-directional.


The Open Connect servers are for the edges, not the core.

They cache popular content close to the users, they don't manage their catalog at the edges.


You did not ever own your own globally consistent, massively scalable, replicated database. The fact that you can now rent one by the hour is strictly an improvement for you, if you need that kind of thing.


Cassandra also does that, without requiring the "magic" of a system you can only get from a single vendor and never buy. In the same time that these walled gardens have come up, free software has grown to fill the gaps.


Cassandra is sort of a Bigtable without transactions. It is not comparable to Spanner at all.


Spanner is unique in a lot of ways, but it still trades off consistency for speed.

The most unique thing about spanner is the use of globally synchronized clock timestamps to guarantee "comes before" consistency without the need to actually synchronize everything.

There is nothing stopping startups and open-source developers from building the same thing in a few years. The missing ingredient is highly stable GPS and local time sources, which will hopefully be available on cloud instances sometime soon. This is a new piece of hardware, so it will be interesting to see whether cloud providers make one available or use the opportunity to sell their own branded "service" version you can't buy. Unfortunately I think we'll see the latter far before the former, if it ever exists at all. Without a highly stable time source, doing what Spanner does is completely impossible.
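
For context, the trick the Spanner paper describes is "commit wait": the clock API exposes an uncertainty interval rather than a single timestamp, and a transaction holds its commit until that interval has definitely passed. A toy sketch of the idea (the 7 ms bound is an assumption, not Google's number):

    import time

    EPSILON = 0.007  # assumed worst-case clock uncertainty, in seconds

    def tt_now():
        """TrueTime-style API: an (earliest, latest) interval, not one instant."""
        t = time.time()
        return (t - EPSILON, t + EPSILON)

    def commit(apply_txn):
        # Choose the commit timestamp at the top of the uncertainty window...
        commit_ts = tt_now()[1]
        apply_txn(commit_ts)
        # ...then wait until the timestamp is provably in the past everywhere,
        # so any transaction that starts later observes a strictly later time.
        while tt_now()[0] <= commit_ts:
            time.sleep(EPSILON / 2)
        return commit_ts

    print(commit(lambda ts: None))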

Yes, Spanner is special right now, but that's even more reason not to go near it. Google has a complete monopoly on it - the strongest vendor lock-in you can possibly have.


> This is a new piece of hardware

Only "new" in the sense that it is currently not commonly offered, the devices themselves have been available for ages. (If you are a large enough customer you apparently can get at least some colo-facilities to provide you with the roof-access and cabling needed for the antennas). If cloud providers make precise time available I don't see much potential for locking you in with their specific way of providing it, as long as it ends up as precise system time in some way.


I'm saying I doubt they will ever offer it, precisely because it would conflict with their paid offerings. The fact that it takes dedicated hardware is a great excuse not to give your customers the option.

I know GPS time sources have been available forever, but a fault-tolerant database needs a backup. The US GPS is incredibly reliable, but there have been multiple issues with both GLONASS and Galileo.

It sounds like Google has an additional time source making this possible - probably a highly miniaturized atomic clock, possibly on a single chip. There's no way they're running on GPS alone.


Yes, they clearly say that they use atomic clocks in addition, but those are commercially available as well: an atomic clock for short- to mid-term frequency stability, and GPS to keep it synced to global time. E.g. in many cases, mobile-phone base stations contain just such a setup, and the data-center versions should fit in a few rack units.


And then all you need is a team of 12 full time SREs to manage it.


A system built on top of it? Possibly, but that's the trade-off if you don't want to pay for, or be locked in to, somebody else running it. For just the timing stuff: not really. Of course it adds complexity, but these things are established and should be quite stable.


The absolute level of computation available isn't changing at the consumer level. What's happening for the next decade is the destruction of businesses hosting their own IT infrastructure and moving it to a couple of core centers.

So, the computational "Gini index" is increasing, but no one is being thrown into computational poverty.


>What's happening for the next decade is the destruction of businesses hosting their own IT infrastructure and moving it to a couple of core centers.

Yes, and this will be disadvantageous over the long run for people that want to run things themselves. Ultimately companies like AMD/Intel go where the big money is at. As things centralize further and further, there will only be 3 customers they care about in the server market.


This isn't true. They desperately want to enable users outside of the big cloud vendors, as they have very little price leverage over the big vendors.


But that won't matter if users keep moving to the cloud. They will be forced to just work on whatever Amazon/Microsoft/Google want.


> The absolute level of computation available isn't changing at the consumer level.

Maybe not, but consumers increasingly use centralized computation resources. I would guess that by now most applications used by consumers run in their web browser, such as Facebook.


The parent comment doesn't seem to specify "consumer level" and the loss of businesses having their own infrastructure is equally troubling. Everyone is putting a lot of eggs in a very small number of baskets.


I would disagree about the character of the situation. This isn't about people putting eggs in a few baskets, it's that it's more efficient to have centralized chicken coops instead of every family in the world owning their own chickens.

Now, you could play with that analogy further and see some issues as well, but I don't think the issue here is centralized failure; all these data centers/"clouds" are at least good. The Cloud is about businesses focusing on core business and not supporting functions.

[Disclosure, I work on the Google Cloud team, I'm biased]


>focusing on core business and not supporting functions.

Having a devops team with the necessary expertise in Google Cloud or AWS is still a supporting function. You've just traded one skill (managing physical servers) for another (managing proprietary virtual resources).


But hopefully a smaller team, and one that keeps diminishing in size over the years if the trend continues. At least for the same level of service (in availability, security, etc.).


Monocultures are efficient, but not healthy ecosystems in the long term.


Let's look at your metaphor. It's more efficient for the raising of a large overall number of chickens. It's less efficient when I need fast access to a single egg.

Hence we get caching. There's the farms, then the inbound warehouses, then the distribution centers, then the grocer, then our refrigerators by the dozen or dozen and a half. When your local cache is empty of eggs, though, it requires a trip back out to the grocer to get an egg even if you need nothing else that trip. Then you generally have to buy at least half a dozen if not a dozen or more eggs just to get the one you wanted.

If I have my own couple of hens, I can go out into the yard and get an egg. If that's the whole of my fetch list, it's much more efficient for this single egg to have the hens laying right out back.

This whole few baskets metaphor breaks down from another point of view, though, when we consider that by the very nature of using a globally distributed hosted service we're actually eliminating a single basket problem. Yes, there's not much choice among just Google, Amazon, and Microsoft. (That they are the only options is a bit of a strawman, but lets grant this one legs.) However, putting just your own employees in charge of all your infrastructure in just your own datacenter(s) in just PostgreSQL or just MySQL is another single-basket problem. Spreading it out so that someone else gets to manage the hardware and the service and replicating your data widely within that service is from that point of view more baskets. You get more datacenter baskets, more employee baskets, and more software baskets. Using standard SQL means you can move among compliant software later, too, so you're not as tied to those baskets.

Now, back to your coop analogy. What's stopping me from having my application talk to Cloud Spanner and a local database proxy (or a work queue that sits between the app and the DB, or whichever), so I can use Google's reliability for transactions and my local cached replica for query speed when I'm querying older data? Why can't I keep a few eggs around?

Also, why would I be scared of Google or Amazon "having my data"? Why would I put sensitive data into my own database in plaintext and then replicate it among multiple datacenters that way?


> it's that it's more efficient to have centralized chicken coops instead of every family in the world owning their own chickens.

Only if the owner of the chicken-coop has everyone else's best interests in mind. Protip: They don't.

The Cloud isn't about efficiency, it's about data control. Getting people's systems and data into Google/AWS/etc. helps with data mining, vendor lock-in, etc. Oftentimes that can be efficient, but often it isn't.


That's like being sad about the emergence of banks, because everybody's money is being kept in a small number of vaults instead of under each one's mattress.


A good point, but there is an up and downside to everything. The centralization of IT does impact civil liberties and possibly innovation - unlike FOSS and other local systems, aspiring hackers can't tinker with Facebook code and see how it works.


I don't understand how Facebook relates to this, as they don't rent their cloud. Aspiring hackers couldn't tinker with MS Word 2000 code either.


> Aspiring hackers couldn't tinker with MS Word 2000 code either.

They could tinker with the binaries, something many did with game binaries. But your point is well taken; open source is also very valuable to innovation.


Web apps were also very useful for learning JS and browser APIs, before everybody started minifying and obfuscating them. I learned how to write a rich-text editor just by looking at the code of Hotmail's email editor.


Fair enough, but think of that free and open stack: (layer 1), Ethernet, IP, TCP/UDP, HTTP/SMTP/DNS/etc, HTML/JavaScript. How many cut their teeth on those?

The apps on top, Facebook, Snapchat, etc., are not so open and much of what they do is out of reach from the user.

Also, I meant to add above: People could tinker with data files (e.g., Word docs), configurations, etc. The whole system was local and accessible. You could write local code, such as VB or for Windows, that integrated with those systems.


Creating 3 massive banks that the entire world gets to choose from would be terrible.


Not sure the banks:mattresses and cloud-companies:IT-companies ratios are that different.


That strategy resulted in the Great Depression and, later, the 2008 crisis. The damage was so high that the country had to be rescued by the federal government. So banking is a decent example of how such consolidation into private hands can go wrong. Now we just apply that to IT services and data.


That's a ridiculous argument: Banks started being a thing at the end of the Middle Ages. The Great Depression and the Great Recession were not caused by banks emerging, nor by people putting their savings in them.


Not emerging - just being themselves, with all their schemes and an economy dependent on them. Distrust of banks and their schemes at a national level might have reduced their ability to cause those problems. On top of the smaller stuff, such as delaying deposits or withholding funds for bogus reasons.


Putting your savings under the mattress instead of in a bank account wouldn't have prevented the Great Recession. It was caused by risky mortgages (debts, not savings) being sold as low-risk from bank to bank, and then defaulting.

Putting your savings under the mattress instead of in a bank account wouldn't have prevented the Great Depression either.

The only thing it would have accomplished is making your savings easier to steal.


Storing gold or other valuables instead of Federal Reserve notes for sale or bartering wouldn't have helped during the Great Depression? I haven't heard the angle that there was nothing to barter with on top of worthless dollars.


We've already been through it. People eventually abandoned mainframes for everything they could. Many of the current customers are interested in better solutions but just stuck due to lock-in of piles of COBOL, etc.


> The centralization of computation is likely not a good thing in the long run.

I agree. It only makes sense if you need special data for statistics, AI training, etc.

In all other cases the classic way of programming on a PC or notebook is smarter. If you do everything in the cloud, what happens if you lose your Internet connection? I've had that experience several times over the last few years.


My internet connection is more stable than my computer.


I'm not sure that's widely true. Consider:

* Most Internet usage is via smartphone

* Computers are much more stable than they used to be

* Much of the world lives in places with less stable connections

* The most expensive spec in an Internet connection is availability. You can get a low-end 15 Mbps connection with no availability guarantee for $40/month; a T1 is one-tenth the speed and costs 10 times as much (all numbers are rough estimates).


You mentioned smartphones. Once your computer dies you can use your smartphone, or your neighbour's or colleague's computer.

There are only so many heavy industries that definitely need some sort of infrastructure to be local (probably not the master, though).


>the only growth market for Oracle right now is their hosted database service, and whoops Google has a better one now

It could end up that way, but lacking INSERT and UPDATE will likely limit this to a niche market for now.


I thought SQL-compatible means INSERT and UPDATE are available. Why aren't these statements available?


Technically those are DML statements; at launch, Spanner's SQL interface covers queries, while writes go through its mutation API.
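
Roughly what that split looks like with the Python client (the instance, database, and table names here are placeholders):

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-db")

    # Reads: plain SQL through the query interface.
    with database.snapshot() as snapshot:
        for row in snapshot.execute_sql("SELECT id, name FROM users"):
            print(row)

    # Writes: no INSERT/UPDATE statements at launch; mutations instead.
    with database.batch() as batch:
        batch.insert(
            table="users",
            columns=("id", "name"),
            values=[(1, "alice")],
        )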


Amazon's Aurora databases seem to be solving the same problem, and are MySQL or Postgres compatible to boot.


Aurora is very cool but won't help you much after you vertically scale your master and still need more write capacity. With Cloud Spanner you get horizontal write scalability out of the box. Critical difference.


So if I'm understanding you, with Aurora all writes go to one master and you're constrained by the biggest instance AWS offers. Is that right?

Do you have a sense of what that limit is?

There's a pretty big price difference between Spanner and Aurora at the entry level so it's useful to explore this.


> Do you have a sense of what that limit is?

Per their pricing page[1] it looks like the largest instance available is a "db.r3.8xlarge", which is a special naming of the "r3.8xlarge" instance type[2] which is 32 cpus and 244gb of memory.

That's a hell of a lot of capacity to exhaust, especially if you're using read replicas to reduce it to only/mostly write workloads. Obviously it's possible to use more than this, but the "sheer scale" argument is a bit of a flat one.

[1] https://aws.amazon.com/rds/aurora/pricing/ [2] https://aws.amazon.com/ec2/instance-types/#r3


Wouldn't the write master be I/O-bound, rather than CPU- or memory-bound?


I disagree; the "sheer scale" argument is not flat. The fact that one can scale horizontally and the other can't is significant.

Let me present a quote to you: "640K ought to be enough for anybody."


You can disagree on that if you'd like, but note that I explicitly acknowledged the possibility of exceeding these limits. In my opinion, for most cases/workloads, it's highly unlikely that you will and designing for that from the outset is a waste of time and resources.


Yes, Aurora has a single write master, though it does have automatic write failover -- i.e. if the Aurora primary dies, one of your read replicas is promoted to the primary and reads/writes are directed to the new instance. That does constrain your primary's capabilities to the largest instance size (currently a db.r3.8xlarge).

I don't have a good idea what the upper limit is for an Aurora database setup.


How does Aurora know that the primary is dead? Automatic failover is problematic in a distributed system.


AWS uses heartbeats to detect liveness. If x heartbeats fail, the failover procedure is started - generally 10 seconds to 5 minutes. In practice (for me) the failover has been less than 15 seconds.
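
A failure detector of that flavor is easy to sketch; the thresholds below are made up, not AWS's:

    import time

    HEARTBEAT_INTERVAL = 1.0  # seconds between expected heartbeats
    MISSED_LIMIT = 10         # consecutive misses before declaring failure

    def primary_failed(last_heartbeat_at, now=None):
        """Return True once enough heartbeats have been missed to fail over."""
        now = time.time() if now is None else now
        missed = (now - last_heartbeat_at) / HEARTBEAT_INTERVAL
        return missed >= MISSED_LIMIT

    # A primary last heard from 15s ago has missed ~15 beats: start failover.
    print(primary_failed(time.time() - 15))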


My concern was more around split brain. If you fail over while the write master is simply unreachable, pain results.


Aurora's read replicas share the underlying storage that the primary uses, so AWS claims that there's no data loss on failover. They also claim -- and I've never heard anyone say they were wrong -- that Aurora failovers take less than a minute. So the pain should be limited to under a minute of lost writes, which most applications can handle (with an error). It can still be painful depending on the application.

See here for more info: https://aws.amazon.com/rds/aurora/faqs/#high-availability-an...


Yeah, the latency on that failover isn't specified.


Do you mean the amount of time it takes to initiate a failover or the amount of time for a failover to complete?

For the former, I don't think they specify beyond "automatic".

For the latter, "service is typically restored in less than 120 seconds, and often less than 60 seconds": http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora...


That's a pretty good cutover, but as you say, they should also include the time needed to detect a failure and initiate the transition.


Amazon provides a testing methodology here: https://d0.awsstatic.com/product-marketing/Aurora/RDS_Aurora... which might be useful to explore when benchmarking the two services against each other.


Aurora is a 'better MySQL mousetrap', IMO.

This is a globally-available, nearly-CAP-beating datastore that powers one of the biggest websites on the internet.

It's not quite apples and oranges, but this is definitely a different problem they are solving.


That's vague. AWS also powers huge websites and Amazon is recommending Aurora as the "default choice" for most workloads.[1] There are certainly significant architectural differences but I would say we can definitely make a direct practical comparison.

[1] http://www.computerworld.com/article/2953299/cloud-computing...


If Aurora powers huge websites, Spanner is for ginormous websites. Think a multiplier on Netflix's database needs.


Curious to know what Netflix's needs are for a relational database?

It doesn't strike me as a business with complex logic.


Netflix mainly uses Cassandra as their database.

And their needs are reasonably complex. They use machine learning and big data analytics to generate the list of videos that you should be watching. In order for those to work they need to capture a whole raft of end user metrics e.g. at what point you paused video X.


I'd assume they keep track of who watches what for their 'continue watching...' pane.

Netflix was given as an example of scale. I guess, for another example, Spanner could be used to store every Visa transaction.


While Aurora doesn't provide true horizontal scalability, its single-node scalability seems so strong that it might allow many companies to stay on one node for quite a while.

For example, see this benchmark:

http://2ndwatch.com/wp-content/uploads/2016/09/Graph-3.jpg

from this article:

http://2ndwatch.com/blog/benchmarking-amazon-aurora/

Thoughts?


Aurora's other zone replicas are read-only. Probably no atomic clocks and GPS for time synchronization.

To be fair, Spanner's cross-region service is coming "later 2017".


It is not close to equivalent. But I do want to get a better feel for whether Google really has figured out how to do basically the impossible. I want to see if this truly scales horizontally, but if it does, then competitors had better hope for a much more detailed paper :)


> It is not close to equivalent.

It's equivalent, with different (unknown) constraints. Aurora is specifically for scaling workloads in the same way. You can say it's horizontal (machine) over vertical (resource) but it's all a matter of accounting.

The big no-no is the Spanner price point. I will stick with Aurora, scaling based on the traffic I use, over pricey timeslices.

You would have to have quite a load to justify the switch from the cheaper du jour solutions right now (AWS). Relying on the few that do is a risk.


It seems like Oracle could have a play here, working on adapting cloud infrastructure tools for managing on-premises data centers. That keeps them in play for customers who can't put their data in the cloud and those that haven't because they're already Oracle customers.

This pendulum swings. We're pretty near the apex now. A little work on ergonomics and these tools could be turn-key, and back we go to decentralized hardware.


> It seems like Oracle could have a play here, working on adapting cloud infrastructure tools for managing on-premises data centers.

That's what the ExaDatas are supposed to be.


> the only growth market for Oracle right now is their hosted database service

This is not true. Oracle is far more than a database company nowadays in the same way that Microsoft is more than Windows. Oracle has been acquiring high-growth startups at a significant rate.


I was going by this document: http://s1.q4cdn.com/289076952/files/doc_financials/quarterly...

Which has their 'cloud services' doubling their contribution to revenue year over year and licenses losing 50% of their contribution to revenue year over year.

Their 'cloud' collateral is pretty opaque, though.


Yes, the "cloud" includes all of their non-database offerings too which have been the focus of recent growth/acquisitions.


A lot of Oracle's "cloud" products are just their traditional single-tenant/on-prem software that they offer to run for you in their own environments.

You're still buying the same stuff, but you're outsourcing your dev ops to them on top of it (which may not be a bad thing).


Well the product manager is from Oracle.


How is Google's cloud MySQL better than Oracle?


I didn't downvote you. It is important to note, though, that the Spanner project isn't related to MySQL, and there is some discussion of that in the stories around Spanner. It would nominally compete directly with Oracle's flagship database product.


Spanner is related to MySQL at Google: a product with Spanner's semantics was required to replace MySQL for some important business operations (https://landing.google.com/sre/book/chapters/communication-a...)

It's actually hard to beat MySQL for a lot of things. I was skeptical about this when I joined Google, but as an SRE on the MySQL team around this time, I gained a lot of respect for it.


That is an interesting way to look at it. I wrestled with MDB[1] while working at Google; it was a ginormous MySQL database (possibly one of the world's largest). And I would characterize Spanner's relationship this way: "If you think you are actually going to build an ACID database that scales, then make sure you can support the MySQL API that MDB uses, and we'll see just how well it scales."

I don't know if anyone put it to them that way, but as Spanner was just getting started when I left, I know that one of its success criteria was to be a scalable replacement for MDB. Given the white paper and other papers on their results, I'm sure it managed that requirement.

[1] MDB, Machine Data Base, used throughout the org but especially in Platforms and SRE to keep track of machines and their parts.


I don't know why you think MDB is ginormous by MySQL standards; it was actually quite modest.

Everything you need to know is here: https://research.google.com/pubs/pub38125.html

It walks through the architecture of the Ads DB, the issues with replacing MySQL, and some of the heroic efforts to implement it via Spanner.

At the time, I asked the team why they used MySQL instead of Postgres (which I prefer) and the short answer was: MySQL replication worked at the time.


Google Cloud SQL (hosted MySQL) is a completely separate product.


[flagged]


Please don't do this here.


Really a CP system, but with availability at five 9s or better (less than one failure in 10^5).

How: 1) Hardware - gobs and gobs of hardware and SRE experience

"Spanner is not running over the public Internet — in fact, every Spanner packet flows only over Google-controlled routers and links (excluding any edge links to remote clients). Furthermore, each data center typically has at least three independent fibers connecting it to the private global network, thus ensuring path diversity for every pair of data centers. Similarly, there is redundancy of equipment and paths within a datacenter. Thus normally catastrophic events, such as cut fiber lines, do not lead to partitions or to outages."

2) Ninja 2PC

"Spanner uses two-phase commit (2PC) and strict two-phase locking to ensure isolation and strong consistency. 2PC has been called the “anti-availability” protocol [Hel16] because all members must be up for it to work. Spanner mitigates this by having each member be a Paxos group, thus ensuring each 2PC “member” is highly available even if some of its Paxos participants are down."


> with the Availability being five 9s or better (less than one failure in 10^6)

Anyone know how exactly this is defined for them? (Time? Queries? Results?)


In general, availability is defined in terms of time.

https://en.wikipedia.org/wiki/High_availability

Five nines means about five minutes of downtime per year.
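
Back-of-the-envelope, assuming the SLA is measured over wall-clock time:

    # Five nines = 99.999% availability, i.e. a 0.001% downtime budget.
    minutes_per_year = 365.25 * 24 * 60        # ~525,960 minutes
    downtime_budget = minutes_per_year * 1e-5  # ~5.26 minutes per year
    print(round(downtime_budget, 2))           # 5.26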


Looks like they define it as downtime where a downtime instance is > 30s?


Well, this is what the Spanner website says if you read the fine print:

>> This feature is not covered by any SLA

So I would guess that you don't get _any_ guarantees. Not five nines and not even one nine.


Currently yes, but that's because it's in Beta.


The MTBF of 2PC strapped to quorums is no different from that of 2PC strapped to single-point-of-failure replicas.

MTTR is bounded by re-election latency rather than replica recovery, although you may still eat a write-amplification cost for re-replication.

Write amplification is 3-5x that of a non-quorum-backed 2PC system, depending on replication ensemble size.

Google further multiplies write amplification with geo-redundancy, so bump that WA by another 3x+.

It's an insanely high cost to pay for availability, but for an advertising company it's important to count the beans accurately.
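
To spell out the arithmetic behind those multipliers (the replica and region counts here are assumptions, not published figures):

    # Toy write-amplification arithmetic under the comment's assumptions.
    replicas_per_group = 5       # assumed quorum ensemble size (the "3-5x")
    regions = 3                  # assumed geo-redundancy factor (the "3x+")
    copies_per_logical_write = replicas_per_group * regions
    print(copies_per_logical_write)  # 15 copies under these assumptions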


You're making some assumptions. For example, if you use EPaxos for the quorums, there is no unavailability due to a leader failure and re-election. Even then, re-election is likely going to be a lot faster than any sort of fault-tolerant 2PC coordinator recovery protocol. You're exaggerating the write amplification. The above offering is single-region. Google says they do 5-way geo-replication in their various papers, and there's no way they're paying some cumulative 15-copy write amplification as you imply.


I've worked on a storage system that used 9x. 15x doesn't seem impossible to me.


EPaxos is not general-purpose replication. It leaves the problem of reconciling multiple replication streams to another layer. I've heard that they do 20x+ WA (distributed, before you consider on-disk WA) for much of it. shrug


9x write amplification isn't that high. If you are set up with a master-slave configuration, that creates write amplification, and for reliability you probably have some sort of RAID, which increases the number of places writes go as well.


Not necessarily RAID; it might be erasure coding.


The team here at Quizlet did a lot of performance testing on Spanner with one of our MySQL workloads to see if it's an option for us. Here are the test results: https://quizlet.com/blog/quizlet-cloud-spanner


What's the SQL and wire compatibility level? MySQL?

EDIT: Found quite a bit of my answers in your linked article:

> Cloud Spanner uses a SQL dialect which matches the ANSI SQL:2011 standard with some extensions for Spanner-specific features. This is a SQL standard simpler than that used in non-distributed databases such as vanilla MySQL, but still supports the relational model (e.g. JOINs). It includes data-definition language statements like CREATE TABLE. Spanner supports 7 data types: bool, int64, float64, string, bytes, date, timestamp[20].

> Cloud Spanner doesn't, however, support data manipulation language (DML) statements. DML includes SQL queries like INSERT and UPDATE. Instead, Spanner's interface definition includes RPCs for mutating rows given their primary key[21]. This is a bit annoying. You would expect a fully-featured SQL database to include DML statements. Even if you don't use DML in your application you'll almost certainly want them for one-off queries you run in a query console.

> Though Cloud Spanner supports a smaller set of SQL than many other relational databases, its dialect is well-documented and fits our use case well. Our requirements for a MySQL replacement are that it supports secondary indices and common SQL aggregations, such as the GROUP BY clause. We've eliminated most of the joins we do, so we haven't tested Cloud Spanner's join performance.

This seems like it'd prevent any kind of easy switch over to Spanner.
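
For a feel of what "mutating rows given their primary key" looks like in practice, here's a minimal sketch with the google-cloud-spanner Python client; the instance, database, table, and column names are all made up:

    # Sketch: writing rows through Spanner's mutation API instead of DML.
    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance('my-instance').database('my-db')

    # Mutations are batched and keyed by primary key, not expressed as SQL.
    with database.batch() as batch:
        batch.insert(
            table='users',
            columns=('user_id', 'name'),
            values=[(1, 'Ada')],
        )
        batch.update(
            table='users',
            columns=('user_id', 'name'),
            values=[(1, 'Ada Lovelace')],
        )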


Just to be clear, the JOINs were removed for the vertical sharding prior to looking at Cloud Spanner. Cloud Spanner fully supports complex JOINs of many types (e.g. INNER, OUTER)

Details - https://cloud.google.com/spanner/docs/query-syntax#join-type...

Disclaimer: I work on Cloud Spanner


How did you go from 10s of ms for an update for 2012 Spanner, to sub-ms update (best case) for Cloud Spanner according to Quizlet? Did TrueTime get an order of magnitude better? Or is Quizlet measuring the wrong thing?


Sorry for the confusion but I meant the DML portion.


It sounded like you can only modify by primary key? Can you make a transaction that contains a query and a bunch of updates by PK?

And yeah, it makes it sound like writing an ORM adapter will be much more difficult.


From reading the docs, the answer seems to clearly be yes, but I'm open to being corrected.
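
For what it's worth, the Python client's read-write transactions do appear to allow exactly that mix; a sketch under the same made-up schema as above:

    # Sketch: one transaction mixing a SQL read with updates keyed by PK.
    from google.cloud import spanner

    database = spanner.Client().instance('my-instance').database('my-db')

    def uppercase_names(transaction):
        # SQL is allowed for the read...
        rows = transaction.execute_sql(
            "SELECT user_id, name FROM users WHERE user_id = 1")
        # ...while the writes go through keyed mutations.
        for user_id, name in rows:
            transaction.update(
                table='users',
                columns=('user_id', 'name'),
                values=[(user_id, name.upper())],
            )

    database.run_in_transaction(uppercase_names)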


It has the same basis as standard SQL in BigQuery: https://cloud.google.com/bigquery/docs/reference/standard-sq...


Seems more like they want all the data that ever existed for the database.


I'm reading the test results and had a question.

>> Cloud Spanner doesn't, however, support data manipulation language (DML) statements. DML includes SQL queries like INSERT and UPDATE. Instead, Spanner's interface definition includes RPCs for mutating rows given their primary key[21].

Does this mean I need to rewrite my application?

My application uses an ORM and it typically converts my logic to SQL statements and fires them off to Postgres. Would I need to change it such that it doesn't issue INSERT / UPDATE statements?


No, though it does imply you would need to write adapter code to have your ORM issue the equivalent mutation RPCs for inserts or deletes. Most ORMs can do this to varying degrees.


yes.


> We've eliminated most of the joins we do, so we haven't tested Cloud Spanner's join performance.

The join performance is by far the most interesting part of this to me. A more traditional NoSQL solution sounds like it would have worked just as well for you, sans all the atomic clock fanciness. Joining across geographically disparate data is a real trick, and it seems like there would be some physical performance limits?


> Not every application can handle Spanner's ~5ms minimum query time, but if you can, then you can have that latency for a very high-throughput workload


This is the tradeoff we've all been looking for. Cool product, anyway!


Just wanted to say thanks for this writeup. This is really excellent, to the point I was passing around your blog post in lieu of the GCP announcement.


I'm curious about how you have MySQL configured. If the query cache is enabled you will see MySQL plateau. There's also the all-important innodb_buffer_pool_size and innodb_flush_log_at_trx_commit. See this for more info https://www.percona.com/blog/2013/09/20/innodb-performance-o...


We typically do yearly performance audits with Percona to ensure our databases are optimally configured. We disable the query cache. We set innodb_buffer_pool_size based on a % of total memory (as MySQL will use more than that in total, much of it for allocating connection structs and things like that). We set innodb_flush_log_at_trx_commit to 0, which is not ideal for data integrity but gives us more performance. In practice, because of Google's Live Migration technology we have never experienced a crash due to hardware on our master nodes (LM will run an emergency migration for any server with hardware that is detected to be going bad before it becomes a problem), and disks are abstracted away even further. Our main risk is the kernel crashing or MySQL crashing, which we've been fortunate enough not to have happen on our masters. Spanner provides ACID compliance by default with all the scalability and performance we get out of it.
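
For readers following along, those settings translate to roughly this my.cnf fragment (the buffer pool number is a placeholder; size it as a percentage of your machine's RAM):

    # Illustrative my.cnf fragment matching the settings described above.
    [mysqld]
    query_cache_type = 0                # query cache disabled
    query_cache_size = 0
    innodb_buffer_pool_size = 96G       # placeholder: a % of total memory
    innodb_flush_log_at_trx_commit = 0  # faster, weaker crash durability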


> So a query that accesses 10 rows in disparate parts of the primary key space will take longer than one where the keys reside on the same splits. This is expected with a distributed system.

No, why? The query can be executed in parallel.

BTW, isn't 20k/sec very low performance for a 30-node installation? Cassandra can handle 50k+ (both writes and reads) on a single node. When most queries collect data from many nodes, it will scale almost linearly.


I don't think that comparison holds. It's easy to push 50k+ on a single node; you're basically only resource-bound on that machine. Pushing 20k+ on something globally consistent and spread out over so many instances is a different exercise entirely. It also depends on the level of consistency you're asking of Cassandra. You'd probably need to set this to EACH_QUORUM or ALL to mimic the behaviour Spanner gives you.

And yes Cassandra will scale linearly-ish as long as you're in the same datacenter. Try running a geo-distributed 30-node Cassandra ring and it's a whole different story at that level of consistency and availability.
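
For concreteness, pinning a statement to a stricter consistency level with the Python driver looks something like this (cluster address, keyspace, and table are invented):

    # Sketch: requesting a strict consistency level from Cassandra.
    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    session = Cluster(['127.0.0.1']).connect('my_keyspace')
    stmt = SimpleStatement(
        "SELECT * FROM events WHERE id = %s",
        consistency_level=ConsistencyLevel.ALL,  # EACH_QUORUM is writes-only
    )
    row = session.execute(stmt, (42,)).one()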


Sure, but you can set up geo-replication and even implement a proxy to provide consistency across DCs. Sure, it won't be that fast in terms of latency, but throughput will be the same.


How does this proxy that provides cross-DC consistency work? And how does it guarantee that you'll get globally consistent reads?

In most cases I've seen, latency can affect throughput plenty, so I doubt your assertion that it won't affect throughput quite a bit. Even more so for anything that relies on TCP/IP.

I'd highly recommend reading the Bigtable[0] and Spanner[1] papers first and maybe then we can have a sensible and fruitful argument.

[0]: https://research.google.com/archive/bigtable.html

[1]: https://research.google.com/archive/spanner.html


Quickly skimmed this... how is a comparison between Cloud Spanner and a VM running MySQL meaningful?


it isn't


This release shows the different philosophies of Google vs Amazon in an interesting way.

Google prefers building advanced systems that let you do things "the old way" but making them horizontally scalable.

Amazon prefers to acknowledge that network partitions exist and try to get you to do things "the new way" that deals with that failure case in the software instead of trying to hide it.

I'm not saying either system is better than the other, but doing it Google's way is certainly easier for enterprises that want to make the move, and it's why Amazon is starting to break with tradition and release products that let you do things "the old way" while hiding the details in an abstraction.

I've always said that Google is technically better than AWS, but no one will ever know because they don't have a strong sales team to go and show people.

This release only solidifies that point.


This isn't entirely accurate. BigTable was Google's earlier cloud database; it's certainly non-traditional, and you have to build your application without traditional consistency guarantees, the way you describe.

Spanner doesn't exactly hide the details, but it lets you make transactions that span multiple shards. You still eat the cost of the transaction, you're just free from having to implement it at the application level, which is a more difficult and error-prone way of doing things. The bottom line is that if you need consistency, it needs to be implemented somewhere in your stack. If you don't need consistency (analytics workloads come to mind) then you have more flexibility with your database.

Disclosure: Google employee, reconstructing what I know from published information.


Same for GAE, or now GKE (basically microservices before microservices were a thing) vs. EC2. GAE was pushing a fairly non-traditional architectural model while others were just trying to provide cloud-based VMs.

Disclosure: Also a Google employee, also reconstructing.


Amazon: Create usual services and sell them.

Google: Make unique products that push the boundaries of what was previously thought possible.

Amazon: Don't care about inefficiencies and usage. Inefficiencies can be handled by charging more to the clients; usage doesn't matter because the users are mostly the clients and they don't feel their pain.

Google: Had to make all their core technologies efficient, performant, scalable and maintainable or they couldn't sustain their business.


Not fair.

Amazon: IaaS

Google: PaaS

Amazon's philosophy is being 'close to the metal' to allow Enterprise customers to migrate 'regular apps' into a 'regular environment' in the cloud.

Most of Google's offerings are (at least were) novel, but proprietary ways of doing specific things.

Amazon is not a laggard: they have provided a number of interesting and useful 'helper' things to facilitate IaaS - as well as a number of 'pure cloud' type things.

Amazon is very, very customer focused. Their products come from customer demands.

Google often takes 'cool things they've done internally' and exposes them, hoping that they might have some use-case in the rest of the world.

Google and Amazon are equally interested in profit.


Google is IaaS, their PaaS offering (App Engine) never gained much traction AFAIK. I also find the comparison fair, Google is a software engineering company, Amazon is a sales/marketing company.


At what point do infrastructure services become the platform? IMO, between GKE, Spanner, BigQuery, etc., it's basically a PaaS for non-trivial applications.



> Google: Had to make all their core technologies efficient, performant, scalable and maintainable or they couldn't sustain their business.

Which Amazon totally didn't have to do with their firehose of cash?


Google has to support Google, YouTube, and many of the most resource-intensive services in existence on Earth. They needed to be "efficient enough" to operate those, meaning incredibly efficient.

Amazon runs nothing, it's an outsourcing firm. They needed to make services "good enough" to be sold. If a service is somewhat inefficient, it just charges the clients more to cover the costs.

Technologies reflect the business they were created in.


> Amazon runs nothing, it's an outsourcing firm.

What the fuck are you talking about. It's one thing to say AWS services are "good enough" but "Amazon runs nothing" is a ridiculous statement.


Agreed, and I work on Google Cloud. We may have different styles and core businesses, but I wouldn't say "eBay ran nothing" either. Logistics alone is a super fascinating space!


Interesting. But does Amazon actually do much logistics? I thought they were much more about warehouses, with delivery outsourced to DHL and UPS.


You do realize the way most services get into AWS is that they're first built on the retail side of Amazon (without any thought towards AWS) and then, once people realize it's effectively solving an actual problem, rebuilt for AWS. Having to support Amazon retail is a pretty demanding stress test -- I'm not sure why you're getting this notion that Amazon doesn't run anything. I should think handling Black Friday alone would count for something.


That is something of a myth. AWS was created and evolves completely separately from retail, which didn't really use it in anger until 2010ish. Retail is effectively a large customer to AWS. They're very good at watching what customers are doing in general.


The 'rebuilt for AWS' phrase is key.


No one is saying Amazon doesn't test their stuff. The argument here is that Google is inherently a more technical company. Their products are more technical: AdSense, Gmail, and YouTube are incredibly technical products due to their scale, and nothing of similar technicality exists in Amazon's core business, which I think is a fair comparison.


> The argument here is that Google is inherently a more technical company, which is a fair comparison.

I suspect that Google knows this, and their reputation for having poor customer support and sales comes from that knowledge.


Yeah that's all pretty accurate too. :)


I would phrase it as:

AWS prioritizes building blocks that support very high throughput and avoid leaky abstractions at all costs, and they're happy to push forward as long as these criteria are met. IMO they really succeed at this goal. Minus specific bugs that they're generally good about acknowledging, their services reliably do what they say they're going to. And they definitely solve a lot of problems for you, even if sometimes you're still required to get further into the weeds than you might want.

I'll buy that Google Cloud is better at questioning underlying assumptions and sometimes succeeds in releasing higher-level abstractions than AWS without any leakiness (a great example of this now being Spanner vs Aurora). It also feels to me that with releases like this Google is leveraging the full value of its own experience running its services, and seems to be more advanced than Amazon in some areas, so this has a lot of value; whereas AWS seems to build a broader range of products with a specific customer in mind which is not necessarily themselves (e.g. all of their move-your-on-prem-stuff-to-the-cloud helpers).

If you consider Spanner vs Dynamo, it definitely matches up as Google wrapping the old way and Amazon forcing a new way (though to be fair, Dynamo was released 5 years earlier). But on the other hand, considering Spanner vs Aurora, Amazon is the one embracing the old way with full MySQL & Postgres compatibility, whereas Spanner sounds like a pretty dramatically different subset of SQL in not supporting INSERT and UPDATE statements. It's a very reasonable compromise for basically getting to ignore the CAP theorem, but it is a significant difference that every developer will have to learn.


I would probably compare Dynamo to Google Datastore (released... 8 years earlier?), or even to Bigtable, which went GA last summer. I'm not sure Spanner matches up with anything AWS offers at this point.

(work on google Cloud)


I think Aurora is the closest.


The "old way" was sacrificing functionality such as transactions and joins to get scalability (BigTable, DynamoDB).

Google tried that a decade ago and found it lacking, this is why Spanner exists in the first place.


Well, I'd say the "old way" is SQL with joins and schemas and transactions, and the "new way" is KV with eventual consistency.


Chronologically, we have: SQL -> NoSQL -> NewSQL

You're both right.


I'm not sure what you said applies; they have severe restrictions, and Spanner offers a subset of MySQL functionality, which is already bare compared to other databases. Changes can be done by primary key only, so it almost feels like a KV store that can do joins...

I don't think it's easy to port existing applications to use it, and in the end you will still need to accommodate its shortcomings in your application.


That's a fair assessment. But I'm assuming they will make it do more "SQL" things in the future. I could be wrong though.

Either way, they are trying to abstract away having to think about eventual consistency with this offering.


Spanner doesn't use SQL for writes, it seems, so it's still a significant rewrite for legacy applications (especially ones that don't use an ORM).

The thing that's really different here is Google basically saying: here's this awesome system, yes it has obvious risks from partitioning, and we are going to stake our reputation on those partitions not happening.

In contrast, AWS is saying: this is DynamoDB, it's really limiting, but because of those limitations it should be pretty reliable as long as you write your application correctly.

It will be interesting to see if Microsoft and Amazon have to follow Google's lead here.


"I'm not saying either system is better than the other, but doing it Google's way is certainly easier for Enterprises that want to make the move, and why Amazon is starting to break with tradition and release products that let you do things "the old way" while hiding the details in an abstraction."

No, they said this in the F1 RDBMS or Spanner papers. They originally did the NoSQL, eventual-consistency type of stuff. This required app developers to do a lot of work to avoid the problems that model can create. Apparently, even their bright people had enough problems with it that they decided to eliminate or simplify that situation with stronger consistency. It took some brilliant engineering, but now they have a database as easy to use as the old model with the advantages of the newer ones.

If anything, they learned some hard lessons and found a good solution to them. Now they're offering it to others. I was hoping they'd do this instead of keeping it internal only. F1 and Spanner are amazing tech that could benefit many companies.


People used to make similar comparisons between the Russian and American space programs.


Oh yeah? Which was which in that comparison? I'm not familiar with that.


Presumably a reference to the classic "pencil myth": Americans, when faced with a need to write things down in space, spent millions of dollars inventing a low-gravity ballpoint pen, while the Russians just used a pencil.

As cutesy a sentiment as it is, it's also full of misconceptions. The pens were invented by an American corporation that wanted better pens to sell in general (a smoother flow in a pen, regardless of gravity/orientation, makes a better pen), and they saw a good opportunity to market the pen to NASA for use in space. Both NASA and the Russians used pencils in space, but the problem with pencils is that the flakes can pollute an environment pretty quickly in low gravity, and the pens turned out to be a much better solution. (So far as I've heard, every space agency these days buys similar pens.)


You presume wrong. Pens are not a space program.

I meant the differences in design philosophy that permeate aerospace engineering on both sides. Russian, built ugly but for strength and longevity. American, built for high capability with finesse and finer tolerances. The emergent properties of these different principles explain why Soyuz is still a preferred launch vehicle, but it was the Americans who got to the moon and operated the STS.

Amazon is more like the Russians: built in the knowledge that things fail, but less magical as a result. Google is more like the Americans: remarkable technology, you just need a herd of geniuses to run it.


Sorry if you think I made the wrong presumption, but I've seen the "pencil myth" trotted out many times as a "definitive" example of precisely the dichotomy that you have spelled out. (A low gravity ballpoint pen is high capability that requires finesse and fine tolerances; a pencil is ugly but strong and typically exhibits longevity... and so forth.)

It is an interesting analogy this dichotomy you see in the design philosophies (both between the space programs and the mega-corporations), but perhaps my point, if I were attempting a point, is to beware of false dichotomies.


You can see the same in comparing the military aircraft design. Both schools are ingenious, but it is applied in different ways. There are some excellent comparisons connecting principles to outcomes on Quora.


>and release products that let you do things "the old way" while hiding the details in an abstraction.

However, by 'abstracting' this away, you're not being forced to think about failure domains. If there is ever a massive country-wide connectivity break to the wider Internet (feasible for lots of people inside censored countries), you'll be pretty pissed when you can't use the DB services for your servers in the Google-local datacenter that you still have connectivity to because it can't get quorum.


Cloud Spanner is currently a regional service, not a global service. So you would only lose availability for failures within the region.


Exactly my point. I would say I personally prefer the Amazon way of forcing you to think about these things.


I'd encourage you to read the F1 paper and the Spanner paper if you haven't already. The big thing that stood out to me is that Google started that way (with BigTable) where you'd roll your own availability and transactions. It didn't go well.

That's the motivation behind both Spanner and F1: take the experience of how painful it is to do transactions on a Regional or Global level, and never make individual teams do it again.

I see it a bit like "Don't roll your own crypto". Clearly some people are exempted from it, but you better be able to tell me why you get an exception.

Disclosure: I work on Google Cloud and want you to pay for Spanner :).


Some interesting stuff in https://cloud.google.com/spanner/docs/whitepapers/SpannerAnd... about the social aspects of high availability.

1. Defining high availability in terms of how a system is used: "In turn, the real litmus test is whether or not users (that want their own service to be highly available) write the code to handle outage exceptions: if they haven’t written that code, then they are assuming high availability. Based on a large number of internal users of Spanner, we know that they assume Spanner is highly available."

2. Ensuring that people don't become too dependent on high availability: "Starting in 2009, due to “excess” availability, Chubby’s Site Reliability Engineers (SREs) started forcing periodic outages to ensure we continue to understand dependencies and the impact of Chubby failures."

I think 2 is really interesting. Netflix has Chaos Monkey to help address this (https://github.com/Netflix/SimianArmy/wiki/Chaos-Monkey). There's also a book called Foolproof (https://www.theguardian.com/books/2015/oct/12/foolproof-greg...) which talks about how perceived safety can lead to bigger disasters in lots of different areas: finance, driving, natural disasters, etc.


> perceived safety ... driving

I became a way better winter driver when I started intentionally fishtailing in snow and ice (in low risk situations).


There's also research showing that removing safety features, e.g. white lines between opposing lanes, increases safety.

http://www.bbc.com/news/uk-35480736


I experienced this in Drivers' Ed with special tires that were deflatable and inflatable under instructor control! Until I moved for grad school, I hadn't realized this wasn't more common.


I wonder how this will affect adoption of CockroachDB [1], which was inspired by Spanner and is supposedly an open-source equivalent. I'd imagine that Spanner is a rather compelling choice, since users don't have to host it themselves. As far as I know, Cockroach Labs does not currently offer CockroachDB as a service (but it is on their roadmap) [2].

[1] https://www.cockroachlabs.com/docs/frequently-asked-question...

[2] https://www.cockroachlabs.com/docs/frequently-asked-question...


(Cockroach Labs CTO here)

Google launching Spanner is generally a positive thing for our industry and our product. It's more proof that what we're aiming for is possible and that there's demand for it. We expect that in five years, all tech companies will be deploying technology like ours.

One of the big differences is that Spanner only uses SQL for read-only operations, with a custom API for writes. We use standard SQL for both reads and writes, which means we also work with major ORMs like GORM, SQLAlchemy, and Hibernate (docs should be live today or tomorrow). Spanner's custom write API will make it difficult to work with existing frameworks, or to convert an existing application to Spanner.

Cloud Spanner only works on Google Cloud and is a black-box managed service. CockroachDB is open source and can be run on-prem or in any cloud on commodity hardware. (We don't offer CockroachDB as a service yet, but may in the future)

At this point, both products are still in beta and are still missing features like back-up and restore (according to the Quizlet blog post). We plan to launch CockroachDB 1.0 with back-up / restore enabled.

* For anyone wanting to know more about how we make CockroachDB work without TrueTime, see our blog post: https://www.cockroachlabs.com/blog/living-without-atomic-clo...


> Google launching Spanner is generally a positive thing for our industry and our product. It's more proof that what we're aiming for is possible and that there's demand for it. We expect that in five years, all tech companies will be deploying technology like ours.

Echoing this! It's a truly exciting moment for everyone in the field.


Exciting times on the horizon for Cloud technologies. Godspeed.


I for one would love to see a hosted offering of cockroachdb!


Would Cockroach 1.0 comply with SQL:2011?


(CockroachDB CTO here) We haven't implemented everything in the standard yet (Nor will we by 1.0 - there's a lot of stuff there!), but we are aiming to ultimately be compliant with the SQL standard. For example, when we introduced "time travel queries" (https://www.cockroachlabs.com/blog/time-travel-queries-selec...) we adopted the SQL-standard syntax "AS OF SYSTEM TIME" (as opposed to the non-standard out-of-band parameter used in Cloud Spanner)
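
Since CockroachDB speaks the Postgres wire protocol, a time-travel query can be issued from any Postgres client; a sketch with psycopg2 (connection details and table are invented):

    # Sketch: a CockroachDB time-travel read over the Postgres protocol.
    import psycopg2

    conn = psycopg2.connect(
        host='localhost', port=26257, user='root', dbname='bank')
    with conn.cursor() as cur:
        # Read the table as of a past timestamp (SQL:2011-style syntax).
        cur.execute(
            "SELECT id, balance FROM accounts "
            "AS OF SYSTEM TIME '2017-02-13 20:00:00'")
        for row in cur.fetchall():
            print(row)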


The main sales pitch of Cloud Spanner is Google's network infrastructure.

No startup will be able to replicate that anytime soon; a lot of time (and money) has been put into it by a lot of people over a long period.


Curious: is there any company in the world that could replicate its breadth, performance, and reliability in the next decade?

Could any government? Has any government?

My impression is that, infrastructure wise, Google is genuinely in a class of size one.


It's probably a class of size 2, with Amazon. Beyond those two, though, no one else is close.


I honestly don't think Amazon is even close to Google.

How much more infrastructure do they have besides AWS? How much does Google have besides GCP?


That's ignoring just how much larger AWS is than GCP.


Sure, but those are the public offerings - the largest part of Google's infrastructure is not public.


Capital expenditures might shed some light on this. I don't think there's enough public data to be clear but in 2015 Amazon (4.8B), Google (9.9B) and Microsoft (5.9B) were at least on the same order of magnitude in terms of CapEx, whereas other major "datacenter" companies like Rackspace (475M) are much smaller.

I don't think you can draw any definitive conclusions from this, but calling it a class of size 1 or 2 is probably an overstatement of Google (+/- Amazon)'s advantage over Microsoft at least.


What? Close to what? There are many, many companies with lots of network infrastructure. Google and Amazon are not alone.


Ever heard of Facebook? :)


> Could any government?

NSA's annual budget is $50bn. U.S. military budget is about $600bn.

Google's revenue is $90bn and they don't spend all of it.


AWS, they just don't talk publicly about it so much.


https://www.cockroachlabs.com/blog/living-without-atomic-clo...

> A simple statement of the contrast between Spanner and CockroachDB would be: Spanner always waits on writes for a short interval, whereas CockroachDB sometimes waits on reads for a longer interval. How long is that interval? Well it depends on how clocks on CockroachDB nodes are being synchronized. Using NTP, it’s likely to be up to 250ms. Not great, but the kind of transaction that would restart for the full interval would have to read constantly updated values across many nodes. In practice, these kinds of use cases exist but are the exception.

CockroachDB is waiting for timekeeping hardware to improve.
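
A toy illustration of the read-side rule described in that post (grossly simplified; real CockroachDB uses hybrid logical clocks and per-node offsets):

    # Toy sketch of uncertainty-interval handling on reads.
    # A value timestamped inside the reader's clock-uncertainty window is
    # ambiguous (the writer's clock may just be ahead), forcing a restart.
    MAX_CLOCK_OFFSET = 0.250  # seconds; the NTP-grade bound from the post

    def read(value_ts, read_ts):
        if value_ts <= read_ts:
            return "visible"
        if value_ts <= read_ts + MAX_CLOCK_OFFSET:
            return "ambiguous: restart the read above value_ts"
        return "definitely in the future: ignore"

    print(read(value_ts=10.1, read_ts=10.0))  # inside the window -> restart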


Eric Brewer's post on Cloud Spanner mentioned that Google intends to expose TrueTime to customers at some point. If/when that happens, it would be very interesting to see CockroachDB's performance on Google Cloud. (They might have to do some engineering work to accommodate whatever TrueTime API is exposed, but when timekeeping is fundamental to your product, that seems worthwhile.)


If the clock offset is too high (more than 250ms), another transaction model should be used; Google's Percolator is a good fit until the unforeseeable hardware improvements arrive. Based on monitoring of clock offsets in the cloud, TiDB chose to use a timestamp oracle to allocate timestamps, which is much faster.
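
For context, a Percolator-style timestamp oracle is conceptually tiny; a toy sketch (real systems persist and batch allocations):

    # Toy timestamp oracle: one sequencer hands out strictly increasing
    # timestamps, removing the need for synchronized physical clocks.
    import itertools
    import threading

    class TimestampOracle:
        def __init__(self):
            self._counter = itertools.count(1)
            self._lock = threading.Lock()

        def get_timestamp(self):
            with self._lock:
                return next(self._counter)

    tso = TimestampOracle()
    print(tso.get_timestamp(), tso.get_timestamp())  # 1 2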


Maybe you could help fund Eric S. Raymond to improve NTPD; he might have some good ideas about improving normal PC-class hardware cheaply too.

https://www.ntpsec.org/


CockroachDB can be hosted on any cloud for a fraction of the cost; I'd think that's a huge advantage for small/solo startups.


Given that Spanner starts at $650/mo/node + storage costs, I think Cockroach could still see huge usage as a self-hosted alternative.


That's not very much given the capabilities and managed service. Anything cheaper probably means the single-node managed SQL offerings are more than enough.


But single node with unlimited room to grow is always a good value prop. Cockroach can market that, but Spanner can't.


I imagine the globally distributed database market is big enough for more than one winner. The presence of competitors can sometimes even be a boon, increasing the visibility of a market's goods relative to other similar goods.


And assuming some reasonable compatibility/portability/migration story it can help in reducing the rational fear of proprietary lock-in.


I am also interested in how it compares to NewSQL databases like NuoDB. NuoDB has been positioning itself as a very similar type of solution to Cloud Spanner (a no-compromise relational distributed database) for a while, minus the cloud hardware provided for you.
