
It seems like several of the issues aphyr found were due to optimizations the product was making. Fixing them in a point release suggests they weren't huge changes with massive ramifications. I don't know what the key benchmarks for software like VoltDB are. Are these operations that are (relatively) rare in normal VoltDB deployments? Did these optimizations make more of a difference in previous versions of VoltDB?



The stale and uncommitted reads were the result of an optimization. It seemed like a harmless optimization to make at the time, and we were wrong. v6.4 lets you pick at startup between strong serializability (the default) and the old behavior.
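
If I remember the 6.4 config right, the knob is a consistency setting in deployment.xml, roughly like this (double-check the docs for the exact attribute name):

  <!-- "safe" is the new default; "fast" restores the old read path -->
  <consistency readlevel="fast" />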

For 100% read workloads, the impact on maximum throughput can be significant. That's a pretty uncommon workload for us though. It's likely a single-digit percentage problem at 50% reads.
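
To make that concrete with made-up numbers: say a fast read costs 1 µs of server work, a safe read 1.3 µs, and a write 4 µs either way. At 100% reads, throughput drops about 23% (1/1.3). At a 50/50 mix, the average op cost goes from 2.5 µs to 2.65 µs, about a 6% drop. The write cost dominates the blend, so the read-path penalty mostly washes out.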

Two nice things about VoltDB users: 1) They're often very write-heavy; 99% read-write transactions isn't uncommon. 2) Few run anywhere near the maximum throughput of their machines; most size their clusters for appropriate storage and redundancy, not throughput.
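
For context on what "read-write transaction" means here: VoltDB transactions are typically Java stored procedures that batch reads and writes and run serializably. A minimal sketch, with table and column names invented for illustration:

  import org.voltdb.*;

  // Sketch only: moves funds between two rows in one serializable
  // transaction. The "accounts" schema is hypothetical.
  public class TransferFunds extends VoltProcedure {
      public final SQLStmt check =
          new SQLStmt("SELECT balance FROM accounts WHERE id = ?;");
      public final SQLStmt debit =
          new SQLStmt("UPDATE accounts SET balance = balance - ? WHERE id = ?;");
      public final SQLStmt credit =
          new SQLStmt("UPDATE accounts SET balance = balance + ? WHERE id = ?;");

      public VoltTable[] run(long from, long to, long amount) {
          voltQueueSQL(check, from);
          long balance = voltExecuteSQL()[0].asScalarLong();
          if (balance < amount) {
              throw new VoltAbortException("insufficient funds"); // aborts and rolls back
          }
          voltQueueSQL(debit, amount, from);
          voltQueueSQL(credit, amount, to);
          return voltExecuteSQL(true); // final batch for this transaction
      }
  }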

The lost write issues weren't optimizations, just implementation bugs. Sigh.



