What scares me about microservices is the case where some operations must be transactional.
What if, in a given use case, multiple microservices are involved and the operations must be transactional: if one of the services fails, all previous operations must be rolled back. What are the recommended ways of implementing this kind of transactional behavior in a modern HTTP/REST microservices architecture?
I know the pattern is called "distributed transactions" and is often associated with the two-phase commit protocol. But there doesn't seem to be much practical information available on the topic!
I found this recent presentation that talks about it, but I'd like to learn more on the subject. Also, I'm looking for practical tutorials, not highly academic ones! I'd really love to see code samples, for instance.
We've moved down this path from a massively complicated distributed transaction environment on top of MSMQ, SQL Server, etc., and you know what? With some careful design and thought about ordering operations and atomic service endpoints, we didn't need them at all.
Transactions can be cleanly replaced with reservations in most cases, i.e. "I'll reserve this stock for 10 minutes", after which point the reservation is invalid. So a typical flow for an order pipeline with a payment failure would be:
1. Client places order to order service.
2. Order service calls ERP service and places a reservation on the stock for 10 minutes.
3. Order service calls payment service (which is sloooow and takes 2-3 mins for a callback) and issues payment.
4. Payment service fails or payment fails.
5. Order service correlation times out.
6. Order service calls notification service and tells buyer that their transaction timed out and cancels the order.
7. ERP service doesn't hear back from the order service and kills reservation.
At step (4) you have the option to just chuck the message back on the bus to try again after, say, 2 minutes. If everything times out, meh.
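The steps above can be sketched as a toy reservation store with a TTL and an idempotent confirm step. All the names here are hypothetical, not any real framework:

```typescript
// Minimal sketch of the reservation flow: a reservation holds stock for
// a TTL; if it is never confirmed (e.g. payment times out), it simply
// expires and a sweep cleans it up. No distributed transaction needed.
type Reservation = { sku: string; qty: number; expiresAt: number };

class ErpService {
  private reservations = new Map<string, Reservation>();
  private nextId = 0;

  // Step 2: place a reservation that is only valid for ttlMs.
  reserve(sku: string, qty: number, ttlMs: number): string {
    const id = `res-${this.nextId++}`;
    this.reservations.set(id, { sku, qty, expiresAt: Date.now() + ttlMs });
    return id;
  }

  // Confirmation: make the reservation permanent once payment clears.
  // Idempotent, so the order service can safely retry it.
  confirm(id: string): boolean {
    const r = this.reservations.get(id);
    if (!r || Date.now() > r.expiresAt) return false; // expired: caller must re-reserve
    r.expiresAt = Infinity;
    return true;
  }

  // Step 7: expired, unconfirmed reservations are garbage-collected.
  sweep(): void {
    const now = Date.now();
    for (const [id, r] of this.reservations) {
      if (r.expiresAt < now) this.reservations.delete(id);
    }
  }
}
```

The important property is that doing nothing is safe: if the order service crashes or the payment never correlates, the reservation just lapses.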
Thanks for this! It seems like a very interesting pattern and I was completely unfamiliar with it prior to reading your comment. Looks like a search for "reservation pattern" gives lots of good places to start digging, but I'm wondering if you have any favorite resources on the subject. Is there a good treatment of it in some particular book? Or maybe presentations you've found particularly enlightening?
I was interpreting the parent poster's question to mean:
1. reservation is placed.
2. Payment succeeds, but either the success is never reported, or the process requesting payment crashes before receiving the response, etc.
Since we never got to telling ERP "hey, that reservation will be permanent because the payment succeeded", but the payment succeeded… what do you do? Does the reservation expire (but my potatoes!)? How do you even know that the payment succeeded, if perhaps a network connection goes dark and requires 2h to fix?
In this case it's not really any different from other distributed transaction systems... another process (potentially a manual one) has to review and correct things...
What happens when your payment processor succeeds in processing the transaction, but you don't get the success code? You retry/confirm/correct. One would assume that, upon not getting confirmation that your reservation was made permanent, you would retry the commitment; if it was already committed, the ERP service can return the appropriate response.
I missed the confirmation step above. That would happen once the payment has been correlated.
At step 3 in your list above the payment would time out and a refund would be issued. Payments usually time out as well, so you can reserve cash too (a pre-auth, in banking terms). So we end up with stacks of reservations.
If something breaks you can retry within a reasonable limit or wait for everything to drop all the reservations.
The concept of Aggregates in Domain-Driven Design is based around the need for business invariants that must be maintained with transactional consistency in a system that is generally eventually consistent.
Overall, you have to learn to love eventual consistency, but small portions of the domain should absolutely be clustered together around transactional consistency needs that are absolutely necessary.
Check out "Implementing Domain Driven Design" by Vaughn Vernon; chapter 10 in particular talks about this.
A concrete example we've faced. A certain operation requires writing data to N flaky services. You successfully write to N-1 of them, but the Nth fails. Now what do you do?
If these N things were just database writes to the same DB, transactions would save you, as you could just roll back. Without that, the answer has to be handled in code: do you reverse the previous changes (if possible) by sending delete events, or leave the system in some sort of half-baked state and rectify things later via some other process? (I'm interested in hearing of other options...)
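The first option (reversing previous changes) is usually done with compensating actions: pair each write with an undo, and on failure run the undos for the writes that already succeeded, in reverse order. A minimal sketch, with hypothetical interfaces rather than the poster's actual system:

```typescript
// Each step bundles a write against one flaky service with a
// compensating action that undoes it (e.g. a delete event).
interface Step {
  do: () => void;   // the write; may throw on failure
  undo: () => void; // compensation; must be safe to call after do() succeeded
}

// Run the writes in order. If step k fails, compensate steps
// k-1 .. 0 in reverse, then report failure to the caller.
function runWithCompensation(steps: Step[]): boolean {
  const done: Step[] = [];
  for (const step of steps) {
    try {
      step.do();
      done.push(step);
    } catch {
      for (const s of done.reverse()) s.undo();
      return false;
    }
  }
  return true;
}
```

Note this only shrinks the window of inconsistency; if the process crashes mid-compensation, you are back to the "rectify later via some other process" answer, which is why the undos should be idempotent and recorded durably.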
The answers I got were:
1) apologetic computing (Amazon)
2) consensus algorithms / Paxos
The problem I see is that these may be non-trivial to implement and/or not fully understood or standardized.
We have one instance where updating an account fans out to potentially 11 service calls, none of which are transactional. We are having to maintain state in our app because the microservices have split this up so much.
You can interact with JS libraries, but exposing some API to JS-land is a completely different matter.
There are some vague plans to improve this, but there hasn't been much interest, because if you use Dart anyway, you of course want to use it as the "host" language. It has the tooling and all that jazz. Of course you'd want to use it for as much as you can.
Dart can be run in both modes. Chrome (dunno if enabled by default and/or in stable) contains a Dart VM, so you "only" need to transpile into JS for non-Chrome.
The main problem I had with Dart [a year ago] was that using existing JS libraries like jQuery wasn't as seamless as I would have liked. So that would be a reason against Dart, if Angular wants to make that aspect easier.
Haven't looked at TypeScript's interop with JS. I got frightened by the name Microsoft and the fact that they only had a huge language spec PDF and not a single tutorial or something easier to digest.
You can optionally also provide a binding file that tells TypeScript about the JS code so that it can perform compile-time checking on calls. There is a repository of these bindings for common libraries called DefinitelyTyped.
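For illustration, a binding file is just a .d.ts declaration describing the shape of a JS library; this hypothetical example (not a real package) is the kind of thing you'd find on DefinitelyTyped:

```typescript
// greeter.d.ts -- ambient declarations for a plain-JS "greeter" library.
// The library ships no types; this file tells the compiler what exists.
declare module "greeter" {
  export function greet(name: string): string;
  export const version: string;
}

// With this in place, `import { greet } from "greeter"` type-checks,
// and a call like greet(42) becomes a compile-time error even though
// the underlying library is untyped JavaScript.
```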
> ... They only had a huge language spec PDF and not a single tutorial or something easier to digest.
Not sure when you originally looked, but there is now a tutorial, a handbook, and samples in addition to the language spec.
I imagine they've improved the runtime size since that Stack Overflow post was written a few years ago. However, I've never used Dart before; I just tried it out using a basic HTML sample from the tutorial page and I see:
264 Mar 5 12:21 a.dart
237K Mar 5 12:21 a.js
101K Mar 5 12:22 a.min.js
If you add gzip compression, it's still smaller than doing the same thing with jQuery. "dart:html" provides an idiomatic DOM API where all list-like things are actual Lists, events are streams, and you get Futures instead of callbacks, and so forth. It's pretty nice to use, but it's also one of the bulkier libraries.
Also note that you could now add a thousand lines of code and the file size wouldn't increase much. You already paid that one-time cost.
You can further improve the size by using a better minifier on top of --minify.
Ah, that makes sense re: being the same size as if you'd added jQuery, and the API abstractions do sound better.
Though as a counterpoint (and relevant to Angular 2 :) I read they're going to just use the raw DOM APIs instead of a jQuery/jqLite abstraction layer, since the DOM APIs don't need the smoothing over in modern browsers the way they used to.
The argument I recall for Chrome not having an optional master password was that it was often less secure than using the system's encrypted data store for the user's account, when available.
Requiring a master password to decrypt the network passwords is a perfectly fine idea if you want to maintain portability and reduce the chance that your network passwords are accidentally exposed. An attacker has to both have the password file and either figure out the master password or have code execution privileges on the user's account to gain the network passwords. This is more secure than trying to ensure the password file doesn't get "misplaced" (e.g. on an unencrypted drive, in unencrypted backups, unintentionally through a fileserver, etc).
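A minimal sketch of that scheme in Node.js, assuming PBKDF2 for key derivation and AES-256-GCM for the password file (the parameters here are illustrative; real password managers make more careful choices):

```typescript
import { pbkdf2Sync, createCipheriv, createDecipheriv, randomBytes } from "crypto";

// Derive an encryption key from the master password plus a random salt,
// so the stored file alone is useless without the master password.
function deriveKey(master: string, salt: Buffer): Buffer {
  return pbkdf2Sync(master, salt, 100_000, 32, "sha256");
}

// Encrypt the serialized password store under the derived key.
function encrypt(plaintext: string, master: string) {
  const salt = randomBytes(16);
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", deriveKey(master, salt), iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { salt, iv, ct, tag: cipher.getAuthTag() };
}

// Decryption fails (GCM authentication error) with the wrong master
// password, so a stolen file can't be read or silently tampered with.
function decrypt(blob: ReturnType<typeof encrypt>, master: string): string {
  const d = createDecipheriv("aes-256-gcm", deriveKey(master, blob.salt), blob.iv);
  d.setAuthTag(blob.tag);
  return Buffer.concat([d.update(blob.ct), d.final()]).toString("utf8");
}
```

This illustrates the tradeoff in the comment: the attacker needs both the file and the master password (or code execution as the user), rather than just the file.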
Had to re-watch a couple of times: on that move, white had already moved the pawn from g2 to g3; the knight move from d6 to e8 is a premove that happens very quickly (you can see the red square). While white was setting up that premove, black moved the queen.
Does that clarify or did I miss what you are asking?
That 2.99 euro deal was a disaster for them. Instead of new customers flocking to them, a large number of their existing customers cancelled their more expensive servers and ordered the cheaper ones. And if I remember correctly, the order backlog for that deal was in the tens of thousands.