I think the point of the article is that they need a lot of persistent storage. You'd have to constantly write that RAM back to disk, and then you've either hit the same bottleneck again or you have to come up with some queue system where you end up dropping changes because of the disk bottleneck.
This is a clean solution that really does change how you think you can interact with persistent storage: all those little reads and writes without the performance hit.
I guess I say that because they talk about game-based workloads... I'd assume only a limited number of events need to be persisted indefinitely (and/or need to be atomic).
You can keep all your data in RAM, replicate it for fault tolerance (still in memory, but on another machine, another rack, another datacenter), and just dump the entire thing to disk once in a while, which would be fast.
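The RAM-as-source-of-truth pattern above can be sketched roughly like this. This is just an illustrative toy (the class and method names are made up, and cross-machine replication is left out): an in-memory key/value store that takes a full snapshot to disk periodically, copying under a lock so writers aren't blocked during the disk write.

```python
import json
import os
import tempfile
import threading
import time

# Hypothetical sketch of "keep everything in RAM, dump to disk once in a
# while". Replication to another machine/rack/datacenter is out of scope.
class InMemoryStore:
    def __init__(self, snapshot_path, interval_s=60.0):
        self._data = {}
        self._lock = threading.Lock()
        self._path = snapshot_path
        self._interval = interval_s

    def set(self, key, value):
        # Writes only touch RAM, so they stay fast regardless of disk speed.
        with self._lock:
            self._data[key] = value

    def get(self, key):
        with self._lock:
            return self._data.get(key)

    def snapshot(self):
        # Copy under the lock, then do the slow disk write outside it.
        with self._lock:
            copy = dict(self._data)
        # Write to a temp file and rename, so a crash mid-write never
        # leaves a corrupt snapshot behind.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self._path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump(copy, f)
        os.replace(tmp, self._path)

    def start_snapshots(self):
        # Background thread dumps the whole store every interval_s seconds.
        def loop():
            while True:
                time.sleep(self._interval)
                self.snapshot()
        threading.Thread(target=loop, daemon=True).start()
```

This is basically what Redis does with its RDB snapshots (it forks instead of copying under a lock, letting copy-on-write handle the isolation), and you'd bolt an append-only log on top for the subset of events that really must survive a crash between snapshots.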