Adding Optimistic Locking to an API (moderntreasury.com)
74 points by qin 5 months ago | hide | past | favorite | 37 comments



> Looking at our request traffic made choosing optimistic locking fairly easy. We expect the majority of ledger operations to be reads, and we didn't want reads to block writes (and vice versa).

I don't get it. If they use an SQL database that supports ACID already, why not just lock all the ledger rows necessary with an exclusive row access when writing and otherwise just with a shared access so that the write waits until the read finishes?


I think the missing context is that the db reads go all the way to an HTTP client, get modified on the client, and are sent back hoping nobody else has edited the row in the meantime. Pessimistic = check row is not locked on READ; Optimistic = check lock_version matches on WRITE.

For a table where single-row ops are all you can do, this is basically enough to let API users read and update rows concurrently. SQL transactions don't survive longer than a single batch of statements sent in one request, so they're inadequate here.

Edit: Turns out ActiveRecord’s pessimistic locks don’t use a column at all, they use built-in row locking from MySQL/Postgres (i.e. ‘inadequate’). So you can’t use them for multi-HTTP-request transactions at all.

Final edit, if you read about how databases implement ACID in the first place, optimistic locking is one of the building blocks. You can use it to reproduce the various isolation levels of SQL transactions in HTTP APIs, manually, in SQL. (Also look at MVCC.) This does not sound fun. Distributed locking is kinda like doing a group project at university, in that you’re going to have to do it all yourself, but in another sense if one writer goes MIA you all fail.
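To make the lock_version scheme described above concrete, here's a minimal Ruby sketch. It assumes a lock_version column that every successful write increments; the Row struct, names, and error class are illustrative, not Modern Treasury's actual schema.

```ruby
# A stand-in for a ledger row with an optimistic-locking version column.
Row = Struct.new(:balance, :lock_version)

class StaleObjectError < StandardError; end

# Mirrors the usual SQL pattern:
#   UPDATE ledgers SET balance = ?, lock_version = lock_version + 1
#   WHERE id = ? AND lock_version = ?
# If zero rows match, another writer got there first.
def optimistic_update(row, expected_version, new_balance)
  raise StaleObjectError, "stale lock_version" unless row.lock_version == expected_version
  row.balance = new_balance
  row.lock_version += 1
  row
end

row = Row.new(100, 0)
optimistic_update(row, 0, 150)   # first writer succeeds; version is now 1

begin
  optimistic_update(row, 0, 125) # second writer also read version 0; now stale
rescue StaleObjectError
  # the losing client must re-read the row and retry
end
```

The key point for an HTTP API: the expected version travels with the request, so no server-side state has to survive between the read and the write.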


Ahh, so they sent the lock_version with the read. Now it all makes sense. Thank you!

And as you say, in this case you couldn't even reliably lock on read because you don't know whether or when a client sends a POST anyways.


They wanted to add locking to their API, not internally.


> Since the protocol doesn't specify how the ETag should be generated, we could have passed in our "lock_version" version numbers. But because it seemed strange to only honor the ETag headers for a single resource in our API, we decided against it.

Odd choice. There's a standard, but the developers still chose to re-implement w/ specific semantics. There's nothing in the standard saying you have to support ETags for all the resources.
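For reference, the standard flow would look like this. A hedged Ruby sketch using stdlib Net::HTTP; the endpoint, resource, and ETag value are made up, and the request is only constructed, not sent.

```ruby
require 'net/http'
require 'uri'

# Hypothetical endpoint. In the ETag scheme, a prior GET would have
# returned something like: ETag: "4"  (e.g. carrying the lock_version).
uri = URI('https://api.example.com/ledger_accounts/123')

put = Net::HTTP::Put.new(uri)
put['If-Match'] = '"4"'   # conditional write: apply only if the version is still 4
put.body = '{"description":"updated"}'

# A server honoring RFC 7232 answers 412 Precondition Failed when the
# resource's current ETag no longer matches, i.e. the row changed underneath you.
```

Functionally this is the same compare-on-write as a lock_version parameter, just expressed in standard headers.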


Yep.

The HTTP standard is rich with caching, idempotence, etc.

You really have to craft a set of requirements to not find what you need there.


Optimistic locking in a stateless environment is actually not difficult to implement. The hardest part is making sure #devops are aware of the concept!

Tom Kyte, he of Oracle fame, had a particularly good discussion of the concept in one of his books (Effective Oracle by Design ... IIRC).

IIRC, the Oracle way is to enable rowdependencies for the table(s) in question and then use ora_rowscn.

But in reality, you can use almost anything that changes in a defined fashion (timestamps, version numbers etc.). Then all you need to do is test for it in your database stored procedures (or elsewhere in your middleware if you are not using sprocs).


We had a similar problem (banking service). Instead of "moving" directly to the destination, we include a safe account in the middle.

A > safe > B

The safe account is called that because we don't risk the misuse of money in case of rollback.


Generally in banking you either separate preauthorization (or reservation of funds) from authorization, or you separate transaction from settlement and you compute an acceptable rate of loss.

Why did you decide to go with an escrow model in your use case?


optimistic locking is a useful technique that seems unfamiliar to many - it's worth looking into


I thought I didn't know what it was when I first heard of it, then realized that it's not a lock at all, it's just a row version column, with the basic rule: don't overwrite without having seen what you're about to overwrite.

Same goes for `git push --force`, always use `git push --force-with-lease` instead.


Wait, how do you ensure that the lock version itself is safely written by two or more writers? Not to mention what happens if you read the version and in the time you decide to do a write someone else incremented the version.


Why do you think it is not a lock?


It is a locking convention, achieving something locks are used for, but you can't point to one thing and say, there, that's the lock. Contrast with a database row lock where no other transaction can change that row or get a locked access to it until released.

e.g. if the version column is an incrementing number, then it relies on no client unilaterally incrementing the value on failure and retrying--not much of a 'lock'.


I agree. Optimistic locking is more akin to an atomic CAS operation than a lock. The usage of the API is the same; if the compare fails, fetch again and retry in a loop. (Incidentally, it doesn’t experience the ABA problem because nobody writes a lower lock_version unless they wait until the field overflows and wraps around… !).

Nobody would call CAS a lock. If your system only uses CAS, it should rightly be called “lock-free”.
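The retry-loop shape the CAS comparison describes, sketched in Ruby on a plain cell. A real CAS is an atomic hardware instruction; this single-threaded model only illustrates the compare step and the retry loop, and the Cell class is made up for illustration.

```ruby
# A cell supporting compare-and-set semantics (not actually atomic here;
# in a real lock-free setting this would be a single atomic instruction).
class Cell
  attr_reader :value

  def initialize(value)
    @value = value
  end

  # Installs new_value only if the current value still equals expected --
  # the "compare" in compare-and-set. Returns whether it succeeded.
  def compare_and_set(expected, new_value)
    return false unless @value == expected
    @value = new_value
    true
  end
end

cell = Cell.new(10)
loop do
  seen = cell.value               # read the current value
  desired = seen + 5              # compute the update off what we saw
  break if cell.compare_and_set(seen, desired)
  # on failure, another writer interfered: loop and try again
end
```

Swap `value` for `lock_version` and `compare_and_set` for a conditional UPDATE and you have exactly the optimistic-locking client loop.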


The point about calling Optimistic Locking by its other name, Optimistic Concurrency Control (OCC), is a great one, and I'll be doing that from now on.

In the spirit of trying to keep this complex subject free of misrepresentation... Compare-and-set (CAS, or alternatively compare-and-swap) is commonly used to implement lock-free algorithms.

A 'spin lock', where a thread uses only CAS in a tight loop and does not carry on until the lock is acquired, is indeed a lock and not a kind of non-blocking or lock-free algorithm.


You can actually think of the version number in optimistic locking as a shared key combination. First party to use the key gets to pick the next combination, invalidating any other concurrent copies.

Performance and implementation characteristics differ but the end result is functionally the same as a pessimistic lock, which is why they're both called "locking" mechanisms.


The end result is functionally very different. If you were given a shared updatable value, and were told that the only API was non-blocking and you would have to retry in a loop until it succeeded, would you call it a lock? It’s straight up not a lock.

The functional difference is crucial here; the non-lock characteristics are what allow them to offer atomic updates via HTTP. They couldn’t have used pessimistic locking to do this, not only because it is limited to a single db transaction, but because they cannot trust API users or the network to ensure the rows ever get unlocked.


A lock does not offer a guarantee of completion but one of integrity. Any serious pessimistic lock granted will have a timeout associated with it (in case the owner dies, etc.), which might cause your operation to fail if it takes too long. Which would require a retry in the same way that you would retry an optimistic lock. Then again, it just depends on timing.


Do you define locks as things that guarantee data integrity? Is not offering concurrency at all locking? Is your point that optimistic is locking or that it bears some surface similarities to locking? I accept that surface similarities exist. But that doesn’t really help anyone understand the difference. Nobody is struggling to see that these two solutions are applicable to some of the same problems.

Locks don’t directly guarantee data integrity. They guarantee mutual exclusion around a critical section or mutable access to data they logically contain. Mutual exclusion is a coarse-grained way to get actually serialized execution of concurrent accesses. That gives you the data integrity. Mutual exclusion also gives you all the problems when you send an acquired lock over a network. It also explains why parallelism on the guarded section/data is 1. This can be blocky in a read heavy environment, because each read blocks all the others. Two transactions can get into deadlock by acquiring in a particular order. These are characteristics of all locks.

Optimistic locking (also known as optimistic concurrency control, because it’s not locking) does not provide mutual exclusion, in any way, shape, or form. You cannot acquire any kind of lock with it. There aren’t any. A lock has a pair of operations, one to begin mutual exclusion and one to end it, but OCC doesn’t have either. Because it’s not a lock. It therefore does not suffer from network partitions between the lock and the client, does not shove all readers through a parallelism=1 bottleneck, you do not have to worry about deadlock prevention or avoidance. Those problems simply do not exist for OCC. It is literally lock-free. It almost couldn’t be ANY further from being a lock. You don’t have to accept that retries being mandatory in the API makes it not a lock. You just have to observe that it isn’t a lock.

Some things are locks, some things aren’t. I think we should avoid calling things that aren’t locks locks, because it’s confusing enough as it is. Frankly, I think if people hadn’t been calling OCC “optimistic locking” then this blog post would have been easier to write. Similarly, as I illustrated above, calling it “lock-free” helps people understand OCC’s advantages and challenges in the same terms we use to talk about lock-free algorithms on a multi-core CPU. It gives you the intuition that every client side call should be a loop. It gives you the intuition that overflow ≈ the ABA problem. All of these bits of understanding flow from calling things what they are.


It feels like you have a firm lock on what locking is and how it is qualified. I will thus exclude myself from this conversation.


It's not a lock in the sense that it succeeds or fails immediately.

It is a lock in the sense that it allows success only.

Pessimistic lock usually means "maybe wait, then success". In complex locking spaghetti it may mean deadlock. It may also mean wait then timeout. It may also mean wait, then timeout, then I don't know what actually happened, maybe success, maybe not.


How could it be familiar to people when most of them hopped on the GraphQL hype train and treat HTTP as a dumb pipe?


I understand each solution has trade-offs, but I've traditionally done this with an in-db proc (we used stored procedures extensively on Wall Street for txns). How does this compare against having the entire transaction functionality inside the DB?


Using stored procedures is old fashioned and won't earn you any points on your resume if you're being interviewed by 20 year olds. It's a solid solution though unless you are at FAANG volumes.


The tradeoff is that it has very poor performance for frequently changed data; you can starve clients, etc.

It's much better to create a transfer API which atomically debits one account and credits the other, instead of low-level individual ops.
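A minimal Ruby sketch of that transfer-level API: one operation that debits and credits together, so callers never race on the intermediate state. The Account struct and error class are illustrative; in a real system the body would run inside a single DB transaction (BEGIN/COMMIT).

```ruby
Account = Struct.new(:balance)

class InsufficientFunds < StandardError; end

# One atomic unit of work from the caller's point of view: either both
# mutations happen or neither does (trivially here, since the raise
# happens before any mutation; a database would use a transaction).
def transfer(from, to, amount)
  raise InsufficientFunds, "balance too low" if from.balance < amount
  from.balance -= amount
  to.balance += amount
end

a = Account.new(100)
b = Account.new(0)
transfer(a, b, 30)
```

Because the API surface is "transfer", not "set balance", clients never need to read-modify-write a balance themselves, which sidesteps the contention the parent comment describes.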


if you're solely relying on "lock_version" numbers, you may wanna watch out for integer overflows (not sure how ruby handles it). I had faced a similar problem in the past, it can have massive ripple effects with downstream services.

Also, if you're using Postgres, it's worth looking into advisory locks [1] for similar use cases. They are pretty lightweight compared to other locking mechanisms.

[1] https://www.postgresql.org/docs/9.4/explicit-locking.html


Modern Treasury is a Ruby on Rails shop

I really wouldn't want to write a service that deals with money in an unsafe language. It's so easy to make a mistake in Ruby, and there's no compiler to help you.


It's definitely a scary proposition. But it's worth mentioning that two of the world's biggest money moving platforms (Stripe and Shopify) are written in Ruby.


Stripe decided to write a type checker for it though, probably for good reasons: https://sorbet.org/


It should be mentioned that their reasons were mainly developer convenience/productivity, and not “correctness”.

I think people severely overrate the value of the language when it comes to avoiding bugs. We already know how to minimize bugs: Extensive testing regimes (automated, manual, or both) and a general focus on correctness over “shipping on an artificial deadline”.


You can have something that aids convenience/productivity and not correctness. You can also have something that aids correctness that increases developer convenience/productivity. I treat static typing as the latter.

Edit: Btw, the github.com/sorbet/sorbet project is really well run and surprisingly easy to get into. I've been drafting PRs. C++ is not nearly as rough as I remember from decades ago.


Ironically, advocates of dynamic typing often claim that it improves developer convenience and productivity.


Yup, also being used at Shopify. Types are awesome, but these companies got very far by betting big on the productivity of ruby before this became a concern.


Not claiming that ruby isn't productive, but I could as well rephrase that into "these companies got very far despite the lack of productivity of ruby before this became a concern". It's not really giving any new insight.


There's lots of Java development that deals with money movement that is suspect. All languages and frameworks are just tools; it depends on how you use them. Compilers are no guarantee of correctness if your type modeling is improper.


Well to me it's defence in depth.

I've used Ruby in a cryptocurrency project and I've used Rust. I know there are no guarantees but I wouldn't go back to ruby. There are just too many times when the compiler catches something I missed.



