What is Eventual Consistency? (concurrencyfreaks.blogspot.com)
105 points by ingve on July 17, 2017 | 14 comments



Perhaps due to the title (which is evocative in Seattle) I often think of "Starbucks Does Not Use Two-Phase Commit" [0]

[0] http://www.enterpriseintegrationpatterns.com/ramblings/18_st...

P.S.: I feel like I may have thrown this one out without enough context, so I'm just gonna bloviate a bit in this edit.

The "object" referred to in the OP link -- or what I like to vaguely call the "unit of consistency" -- is a single Starbucks employee. They hopefully have an internally-consistent history of what they believe to be true about the universe. (And stay sane during coffee-rush times.)

The phrase "eventual consistency" describes the relationship between multiple employees. Individuals can drastically disagree with one another, but there's a framework for detecting disagreements and resolving the discrepancy in some way, even if that just means agreeing to ignore it and logging it for management to fix.

A lot of eventually-consistent systems involve allowing certain kinds of discrepancies to occur, while simultaneously promoting those errors into real business concepts in the domain. Banking and accounting systems are particularly great demonstrations of this, because they started doing it centuries ago, when nodes were in cities connected by ink-and-parchment packets over horse-ridden routes.
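To make that last idea concrete, here's a toy Python sketch (all names hypothetical): reconciliation doesn't abort on a mismatch, it promotes the discrepancy into a first-class record for someone to act on later.

    from dataclasses import dataclass

    @dataclass
    class Ledger:
        name: str
        balance: int = 0  # cents, to sidestep float rounding

    @dataclass
    class Discrepancy:
        # The "error", promoted to a real business concept.
        source: str
        expected: int
        actual: int

    def reconcile(a: Ledger, b: Ledger, audit_log: list) -> None:
        # Detect disagreement between replicas; don't fail, log it
        # for management to fix (or to deliberately ignore).
        if a.balance != b.balance:
            audit_log.append(Discrepancy(source=f"{a.name}/{b.name}",
                                         expected=a.balance,
                                         actual=b.balance))

    branch, head_office = Ledger("branch", 500), Ledger("head_office")
    audit_log: list = []
    reconcile(branch, head_office, audit_log)
    print(audit_log)  # one Discrepancy: the branch saw a deposit head office didn't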


Jim Starkey (founder of NuoDB) is rumoured to have said that any adjective before the word "consistency" is equivalent to the prefix "in" as in "eventual consistency is inconsistency". A bit harsh, and not completely true, but a good warning to a world that increasingly believes eventual consistency is the best way to handle the CAP theorem.


I would assume that most people are making an informed decision that eventual consistency is the most palatable compromise for their domain. I don't think people just look at the CAP theorem and think that eventual consistency is a magic bullet that gets around one of the necessary compromises; it literally is one of the proposed compromises.


A lot of the time it's not even really a compromise. This is why it makes no sense to debate about eventual consistency - the implications are entirely situational.

This kind of buzzword is harmful; it leads to overthinking, over-engineering, misunderstanding and a lack of focus on what matters, which is implementing solutions that are appropriate for the current context.

It's also a fallacy (or an artefact of enterprise thinking) that you need to either adopt eventual consistency or not. Modern systems are increasingly webs of loosely connected components and subsystems where some processes may be immediately consistent and others not. This is something that we take for granted most of the time (e.g. a search index is almost always "eventually consistent" and nobody would ever think to question that) until we start talking about the concept in an abstract/contrived fashion.
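The search-index case boils down to something like this sketch (Python, names contrived): the primary write is immediate, the index catches up from a queue, and there's a window where the two disagree.

    from queue import Queue

    primary: dict = {}       # immediately consistent source of truth
    search_index: dict = {}  # eventually consistent derived view
    pending: Queue = Queue()

    def write(doc_id: int, text: str) -> None:
        primary[doc_id] = text       # readers see this right away
        pending.put((doc_id, text))  # index update is deferred

    def index_worker_step() -> None:
        # In a real system this runs as a background consumer.
        doc_id, text = pending.get()
        search_index[doc_id] = text

    write(1, "eventual consistency")
    assert 1 in primary and 1 not in search_index  # the stale window
    index_worker_step()
    assert search_index[1] == "eventual consistency"  # converged

Nobody debates whether such an app "is" eventually consistent; one path is, one isn't.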


I agree. Eventual consistency is not consistency. In fact, many NewSQL databases, such as Spanner, CockroachDB, and TiDB, have achieved strong consistency.


It's a great way to achieve liveness and/or latency guarantees when globally observable consistency isn't necessary for a given problem.

The problem with many eventually consistent systems is that they're not eventually consistent. They're lossy. They often suffer from a variety of problems that are "lost update" shaped, which keeps them from ever converging on consistency no matter how long you wait.
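A tiny Python illustration of the difference (replica names made up): naive last-writer-wins drops a concurrent update permanently, while a convergent merge (here a G-counter) keeps both.

    # Lost update: both replicas read 0, both write 1, one write "wins".
    # No amount of waiting recovers the lost increment; the true count is 2.
    read_a = read_b = 0
    write_a, write_b = read_a + 1, read_b + 1
    lww_value = write_b            # last writer wins; A's increment is gone
    assert lww_value == 1          # should be 2 -- lossy, not eventual

    # G-counter: each replica increments only its own slot, and merge takes
    # the per-slot max, so concurrent increments survive and replicas converge.
    a = {"A": 0, "B": 0}
    b = {"A": 0, "B": 0}
    a["A"] += 1
    b["B"] += 1
    merged = {k: max(a[k], b[k]) for k in a}
    assert sum(merged.values()) == 2  # both updates preserved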


It's how banks work.

You write a check and buy something. The banks reconcile and determine you don't have enough money, and your account goes negative.

If it were atomic, your account could never go negative.
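In toy Python form (numbers invented): the check is accepted without checking funds, and the shortfall only surfaces at reconciliation time.

    balance = 50
    pending_checks: list = []

    def write_check(amount: int) -> None:
        pending_checks.append(amount)  # accepted optimistically, no funds check

    def reconcile() -> int:
        # Settlement happens later; an atomic system would have rejected
        # the check up front instead of letting the balance go negative.
        global balance
        while pending_checks:
            balance -= pending_checks.pop()
        return balance

    write_check(80)
    print(reconcile())  # -30: the disagreement is discovered after the fact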


I knew I'd see this in the comments. Yes it is how banks work, but I hate how this comment usually plays (and I'm not accusing you of this) into "It's good enough for banks therefore it's good enough for us" arguments.

Banks have a really special set of historical and regulatory availability constraints that most systems do not deal with, not to mention time constraints across the globe.

If your business process can handle lag via auditing and reconciliation processes then by all means adopt EC, but I'm not sure that analysis plays out as often as engineers reach for the EC toolbox.


> If your business process can handle lag via auditing and reconciliation processes then by all means adopt EC, but I'm not sure that analysis plays out as often as engineers reach for the EC toolbox.

Not sure I agree with that. I'd argue that there are very few (if any) businesses that are strictly consistent. Sure, there may be pieces where consistency is important, and eventual consistency isn't a silver bullet you can fire and forget, but it's uncommon to see entire systems modeled in a strictly consistent manner.

This past year I've built a number of apps to support both core products as well as internal initiatives, and I don't think a single one was strictly consistent. There were parts of the apps that needed consistency, but as a whole they were pretty much all eventually consistent.

What's important is that you understand when strict consistency is necessary, and when it can be relaxed, and that you understand when your operations are eventually consistent and when they aren't.


Your comment would only need minor rephrasing to be a defense of C++/C code for greenfield projects in 2017.

"You can be smart about when to use it, if you're good you'll know when you need it"- I personally feel that this is a dangerous default.


It's how banks in the 20th century worked with external systems (people, in this case); I bet they were transactional internally once they were computerized.

In the 21st century, if I buy something at a shop and there are not sufficient funds in my account, the transaction is canceled and I walk out of the shop empty-handed. The non-transactional system is still there as a failover.


This is not true either. All modern POS systems have an offline mode. Have you ever tried to use your card and the employee says 'the machine is really slow today'? That's because the machine was offline, but your purchase still worked. In offline mode the POS will still authorize the purchase up to a certain amount, typically $75 (up to the store). Those charges are pushed through after the machine goes back online. Yes, it's a failover, but the point is that even the modern online system is still not atomic and is eventually consistent.

Legitimate purchases outnumber illegitimate purchases by a large factor, so it makes sense to err on the side of legitimate purchases. If the system were atomic, you wouldn't have been able to make that offline purchase, even though you had the money. Since the purchase is legitimate in the majority of cases, they let those go through, up to a certain risk factor (the $75 limit).
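Roughly, in Python (the $75 figure and all names are illustrative): while offline the terminal approves small purchases locally and queues them, and they settle against the account later.

    FLOOR_LIMIT = 75_00  # cents; the actual offline limit varies by store
    offline_queue: list = []

    def authorize(amount: int, online: bool, balance: int) -> bool:
        if online:
            return amount <= balance  # real-time check against the account
        if amount <= FLOOR_LIMIT:
            offline_queue.append(amount)  # optimistic approval, settled later
            return True
        return False

    def settle(balance: int) -> int:
        # Push queued offline charges through once connectivity returns;
        # the balance can go negative -- that's the accepted risk.
        for amount in offline_queue:
            balance -= amount
        offline_queue.clear()
        return balance

    assert authorize(50_00, online=False, balance=0)  # approved with no check
    print(settle(20_00))  # -3000: reconciled after the fact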


Except no. They are transactional in the small, but the system itself is still eventually consistent. There are just too many transactions that end up as false positives or false negatives.

Plus, every time you make a transaction, you have up to 60 days to cancel it. You'll get your money back before then.


I'll tell you later :)




