
There are a ton of real-world systems that actually do deferred settlement and reconciliation at their distributed edge - ATMs really do work this way (and credit cards, in a way). These systems should be thought of as sources feeding a distributed log of transactions, which are then atomically transacted in a more traditional data store after reconciliation. In systems like this you must have a well-defined scheme for handling late arrivals and dealing with conflicts, and you often need some kind of anti-entropy mechanism for correctness station-keeping (which people should think about anyway and most people ignore). These systems are hard to reason about and have many, many challenges, and several that I have personally seen are genuinely impossible to make robustly secure (a byproduct of having been implemented before security was a strong consideration).

In these applications the deferred abort problem is dealt with in the settlement and reconciliation phase, where aborts are themselves just events, handled like any other.
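
To make the shape concrete, here is a minimal sketch of that architecture (all names and the overdraft rule here are mine, purely illustrative): edge nodes append to a log cheaply, the system of record only changes during settlement, and late arrivals and deferred aborts surface as ordinary events:

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Event:
        timestamp: float
        account: str = field(compare=False)
        amount: int = field(compare=False)   # signed cents

    class ReconciliationLog:
        def __init__(self):
            self.pending = []    # heap of unsettled events from the edge
            self.balances = {}   # system of record, touched only in settle()

        def append(self, event):
            # Enqueue always succeeds; a late arrival just lands in the
            # heap and gets ordered by timestamp during settlement.
            heapq.heappush(self.pending, event)

        def settle(self):
            rejected = []
            while self.pending:
                ev = heapq.heappop(self.pending)
                bal = self.balances.get(ev.account, 0)
                if bal + ev.amount < 0:
                    # Deferred abort: applying this event would violate
                    # the invariant, so the conflict is handled here,
                    # after the fact, as just another event.
                    rejected.append(ev)
                    continue
                self.balances[ev.account] = bal + ev.amount
            return rejected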

But this article is blurring the line between underlying transactional data stores where invariants can be guaranteed and the kinds of front-ends that loosen up on 2PC as a requirement.

As an observation, 2PC is not the problem; the problem is data scope. If the scope of the data necessary for the transaction to operate correctly is narrow enough, there is no problem scaling transactional back ends. This gets back to traditional sharding, though, which people don’t like.




ATMs are a great example of a system that really doesn't need a two-phase commit protocol. You have an account, it has a balance, and the transaction is a subtraction from the balance.

A dumb ATM would read the balance value, subtract, and issue a "set balance to $$$" transaction. A good ATM would check the balance, ensure there's enough money (or overdraft headroom) for the request, and record a "subtract $40" transaction.
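
In code, the difference is roughly this (a toy single-process sketch with made-up names, just to contrast the two recorded operations):

    balances = {"acct": 100}
    event_log = []   # what actually gets shipped to the bank

    def dumb_atm_withdraw(acct, amount):
        # Read-modify-write: two concurrent withdrawals can both read
        # 100, both record "set balance to 60", and one update is lost.
        current = balances[acct]
        if current >= amount:
            event_log.append(("SET_BALANCE", acct, current - amount))

    def good_atm_withdraw(acct, amount):
        # The check is only advisory; the recorded operation is a delta,
        # so delayed or reordered events still converge to the right sum.
        if balances[acct] >= amount:
            event_log.append(("SUBTRACT", acct, amount))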

If this message gets delayed, oh well, the customer might end up with a negative balance - sucks for the bank if the customer decides to go into hiding - but as the customer typically can't control the delay - it's hard for them to abuse this feature.

(I only consider delay here, as I'm sure ATMs make multiple durable copies of their transaction log, so nothing short of the biggest disaster would prevent the TXs from eventually being retrieved.)

On the other hand, most systems are nowhere near this "simple". What happens when 3 GitHub users all simultaneously edit the same GitHub organisation's description? You can't just add/subtract each of the changes. One change has to win; in other words, two changes need to be rejected.

I feel like the author's text really only covers the ATM-style use case - a valid use case, but one that's already reasonable to solve without two-phase commits. Once you are willing to accept and able to handle check+set race conditions, things get much easier :)


Two-phase commit deals with network failure, not change semantics. The message itself (whether "BALANCE=40" or "SUBTRACT 40") isn't something that matters to 2PC.


You're absolutely correct, but the change semantics affect how you deal with network failure; they're not entirely independent. And I'm aware my example is likely too simple, but typing a better one on a phone is hard :)


You're actually both wrong. Two-phase commit is used for all kinds of things; it doesn't always require a network to be involved.

And you were wrong because your example is not a good example of something that doesn't need two-phase commit - it was in fact an example of a two-phase commit. That overdrafting you mentioned: that is deferred reconciliation.


That is not the 2PC I know. From Wikipedia: "It is a distributed algorithm that coordinates all the processes that participate in a distributed atomic transaction on whether to commit or abort (roll back) the transaction". ATMs don't do 2PC. They can't abort a money transaction which has already happened.

Also from the wiki: "two-phase commit protocol (2PC) is a type of atomic commitment protocol". That's different from deferred conflict resolution from an event log.


The two phases in the ATM scenario would be,

Phase 1: the ATM's withdrawal request secures a lock.

Phase 2: it gets the all clear, writes the new balance to the ledger, and dispenses funds.

Anything trying to modify the ledger at that same time should be blocked for the short time it takes to process the transaction.

This is how these systems work.
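
In toy form (in-process, no timeouts or crash recovery, and all names are mine), that flow looks something like:

    class LedgerParticipant:
        def __init__(self):
            self.ledger = []
            self.locked = False
            self.staged = None

        def prepare(self, entry):
            # Phase 1: secure the lock and stage the change.
            if self.locked:
                return False                  # ledger busy: vote "no"
            self.locked, self.staged = True, entry
            return True

        def commit(self):
            # Phase 2: all clear -- write the entry, release the lock.
            self.ledger.append(self.staged)
            self.locked, self.staged = False, None

        def abort(self):
            self.locked, self.staged = False, None

    def withdraw(participants, entry):
        if all(p.prepare(entry) for p in participants):
            for p in participants:
                p.commit()                    # ...and the ATM dispenses
            return True
        for p in participants:
            p.abort()                         # any "no" vote aborts all
        return False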

The reconciliation is what happens when you can't do 2PC, or when it fails.

Re it being a network protocol: no, it's been around since the dark ages.

https://link.springer.com/content/pdf/10.1007/s10619-008-702...


> The two phases in the ATM scenario would be,

> Phase 1: the ATM's withdrawal request secures a lock.

> Phase 2: it gets the all clear, writes the new balance to the ledger, and dispenses funds.

> Anything trying to modify the ledger at that same time should be blocked for the short time it takes to process the transaction.

So you're saying 2PC is basically just "acquire a lock... do two things... release lock"?


In essence yes.

Once you get into implementation detail you have to deal with a heap of failure modes, which is what the article is complaining about in its premise.

What I don't really understand is how their solution isn't basically just sharding (in some form).


I don't quite understand the issue with the GitHub use case. If the operation is just "set name to X", then multiple such operations are trivially serializable and the latest one will win. All prior changes are accepted, performed and immediately overwritten, there's no need for any coordination at all. Or am I missing something?


Yes - in my opinion, you're missing the user experience aspect. Two of the three should be rejected, not overridden, so the users get the feedback they expect, rather than reloading the page and seeing something totally different.

And yes, it's a trivial example, one defined by UX - but there are many examples of needing to reject a transaction that can't simply be overridden (or replayed later, like simple addition/subtraction) - and I don't see how the author's proposal replaces two-phase commit in something like that.


From the UX aspect, I actually would not expect it to be rejected. I would indeed expect to see something totally different than what I typed with a label nearby saying "Edited by $NAME 5 seconds ago"


I'd much rather see my edit form shown again, with my content still in it, and a conflict warning... Nothing worse than losing that 20 minutes of typing to a conflict!


Wow, 20 minutes is a long time to spend on a "GitHub organisation's description". But yes, I'm absolutely with you on that. Whenever I need to submit a long text field in an online form, I've gotten into the habit of first copying the contents to my clipboard, to prevent myself from causing irreparable harm to things around me if the request fails.

But going back to the UX, I would prefer that the site accept my change and store it in a journal (a la git or Wikipedia) before overwriting it with the next change, so that I could easily revert to it or merge it with the newer change.


But this is easy to do. Just embed a version number (or hash) in the form. If, after submitting, the version to be changed is not the same as the one in the form, then you know there is a conflict.
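
A sketch of that check on the server side (hypothetical handler, not tied to any particular framework):

    state = {"value": "foo", "version": 1}

    def submit_edit(new_value, form_version):
        # The form carries the version it was rendered from; a stale
        # version means someone else saved in the meantime.
        if form_version != state["version"]:
            return "conflict"   # re-show the form with the user's text
        state["value"] = new_value
        state["version"] += 1
        return "saved"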


I'd rather see the edit warning when multiple people are editing the same info before the 20 minutes of typing even starts, plus an interface for accessing history for the edge cases.


But assume the system applies changes infinitely fast. Then assume user A makes a change, and only 30 microseconds later user B makes a change. Will user A now be confused because their change is overwritten by user B? If so, then the problem has nothing to do with the aforementioned situation, and the UX should probably show "user B is also editing this field" or something like that.

The point is: it does not matter if the system is slow and rejects changes, because the effect on the user will be the same as in the "infinitely fast" case.


How fast the system makes updates has nothing to do with it.

Premise: The current state is "foo". Alice would like to change the state to "bar". Bob would like to change the state to "baz". Alice and Bob are friendly (non-antagonistic) coworkers.

Naive sequence:

1. Alice: READ STATE => foo

2. Bob: READ STATE => foo

3. Alice: SET STATE=bar

4. Bob: SET STATE=baz <-- this is where the "confusing"/"wrong" thing happened. Bob did not expect to overwrite his coworker's work.

The solution is, instead of a naive "set", to use a "test-and-set" operation.
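
Concretely (a toy single-process sketch; in a real store this would be an atomic primitive, e.g. a conditional SQL UPDATE or an etcd transaction):

    state = "foo"

    def test_and_set(expected, new):
        global state
        if state != expected:
            return False        # the caller's read is stale
        state = new
        return True

    test_and_set("foo", "bar")  # Alice: True, state is now "bar"
    test_and_set("foo", "baz")  # Bob: False, nothing silently overwritten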


Bob may or may not be concerned with what the state is/was, and more concerned with what state the system needs to converge on.

I take your point, although the assumption is that Bob wants to set state=baz IFF (if and only if) state==foo. However, he may simply need the state to be baz, regardless of what the previous state was.


Many systems (especially caching systems) make a point of differentiating the `set` operation from a `check and set` operation (usually known as `cas`); in systems where both of these operations are available, you are quite able to intelligently differentiate those two resolution states.


I think it's fair to assume no collaborative text editing would actually occur on that field at all, so last-write-wins is a perfectly acceptable strategy here: no coordination is actually necessary, and neither is informing the user about conflicting edits. For UX it might be useful to observe your own edit, but that can be done completely locally, with the data only eventually propagating farther. Incidentally, this is also pretty much the only way to make the edit reliably synchronized with the remote system, because users don't have perfectly reliable computers or perfectly reliable internet connections, and they won't wait long for anything.


You could record a character-wise diff of each name update and apply all the diffs in succession. GP is saying not to do that, and instead to apply only one of the name changes and ignore the rest - just as you've described.


>If this message gets delayed, oh well, the customer might end up with a negative balance - sucks for the bank if the customer decides to go into hiding - but as the customer typically can't control the delay - it's hard for them to abuse this feature.

Nope! When I'm at just +40€ on my account, I can withdraw three or four times that amount (in separate transactions) in a short window of time. I have no authorized overdraft, but I end up with -80€.


The bank still controls this delay and deems it acceptable - you can't extend the delay and use the extension to withdraw thousands and thousands of extra euros! Chances are, the cost of a real-time system vs. the cost of unauthorised overdrafts isn't a tradeoff worth fixing for them.

(That and, at least here in Ireland, the banks can and do charge for unauthorised overdrafts - which is ridiculous IMHO, but that's a separate thing)


If I were implementing the ATM example, I'd do something similar to Auth/Capture, where the balance check places a hold on the funds until an eventual capture / reversal.

Of course, that is basically 2PC in spirit.
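
Something like this shape (a toy sketch; real auth/capture flows add expiry, idempotency keys, and persistence):

    import uuid

    balances = {"acct": 100}
    holds = {}   # hold_id -> (account, amount)

    def authorize(acct, amount):
        # Reserve the funds without moving them (the "prepare" in spirit).
        held = sum(a for ac, a in holds.values() if ac == acct)
        if balances[acct] - held < amount:
            return None
        hold_id = str(uuid.uuid4())
        holds[hold_id] = (acct, amount)
        return hold_id

    def capture(hold_id):
        # Cash was dispensed: debit for real (the "commit" in spirit).
        acct, amount = holds.pop(hold_id)
        balances[acct] -= amount

    def reverse(hold_id):
        # Dispense failed or timed out: just drop the hold.
        holds.pop(hold_id, None)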


Actually... no it's not.

The underlying technology is a medieval single-error correction/detection process called double-entry bookkeeping. So the operation is some variant of [credit cash, debit account] or [debit deposit account1, credit deposit account2].
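
A toy version of such a posting (account names are illustrative): every transaction writes two equal and opposite entries, so the books always balance and a one-sided error is detectable:

    ledger = []   # (account, side, amount); side is "D"ebit or "C"redit

    def post(debit_acct, credit_acct, amount):
        ledger.append((debit_acct, "D", amount))
        ledger.append((credit_acct, "C", amount))

    post("customer_deposit_account", "cash_in_atm", 40)

    def books_balance():
        debits = sum(a for _, side, a in ledger if side == "D")
        credits = sum(a for _, side, a in ledger if side == "C")
        return debits == credits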

tldr: banking is more interesting than generally realised.


> deferred settlement and reconciliation

We call this eventual consistency nowadays.


Settlement is somewhat different from the EC models you're used to. For one thing, a settlement may never occur, and an unwind will be required. At least the EC systems I've dealt with, where the application is responsible for resolving the conflict, don't really have the concept of doing a chain-of-events unwind.

My complaint is that there actually are two data layers here with completely different semantics, and the difficulties of 2PC are meaningfully relevant to the system-of-record layer, not so much the log layer (since enqueueing into the log is not really problematic in the real world).


This thread shows more about what people know (or don't know) about the CAP theorem than about two-phase commits.


In the context of distributed monetary transactions I think it's worth mentioning CRDTs - for readers of this thread who have not yet heard about them, and who might want to look into the topic if they have a similar problem to solve.
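
For readers who want a feel for them: the classic counter-style CRDT (a PN-counter) maps naturally onto balance-like values. A minimal sketch:

    class PNCounter:
        """Per-replica increment/decrement totals; merge takes the
        elementwise max, so replicas converge regardless of delivery
        order or duplicated messages."""
        def __init__(self, replica_id):
            self.replica_id = replica_id
            self.incs = {}   # replica_id -> total increments
            self.decs = {}   # replica_id -> total decrements

        def add(self, n):
            self.incs[self.replica_id] = self.incs.get(self.replica_id, 0) + n

        def subtract(self, n):
            self.decs[self.replica_id] = self.decs.get(self.replica_id, 0) + n

        def value(self):
            return sum(self.incs.values()) - sum(self.decs.values())

        def merge(self, other):
            for rid, v in other.incs.items():
                self.incs[rid] = max(self.incs.get(rid, 0), v)
            for rid, v in other.decs.items():
                self.decs[rid] = max(self.decs.get(rid, 0), v)

The usual caveat applies: CRDTs guarantee convergence, not global invariants, so a PN-counter alone can't enforce "the balance never goes negative".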


Had heard about them, but indirectly. Thanks for putting a name on them!

https://en.wikipedia.org/wiki/Conflict-free_replicated_data_...



