Hacker News
slashdave | 2 days ago | on: 100k TPS over a billion rows: the unreasonable eff...
Sure. Now keep everything in memory and use redis or memcache. Easy to get performance if you change the rules.
koakuma-chan | 2 days ago
You can use SQLite for persistence and a hash map as cache. Or just go for Mongo since it's web scale.
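A minimal sketch of that setup, assuming a local SQLite file for persistence and a plain in-process dict as a read-through cache (the file name `kv.db` and the `get`/`put` helpers are illustrative, not from any particular library):

```python
import sqlite3

# SQLite file provides durability; the dict serves repeated reads from memory.
db = sqlite3.connect("kv.db")
db.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
cache = {}

def put(k, v):
    # Write through: update the durable store first, then the cache.
    db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (k, v))
    db.commit()
    cache[k] = v

def get(k):
    if k in cache:            # cache hit: no disk access
        return cache[k]
    row = db.execute("SELECT v FROM kv WHERE k = ?", (k,)).fetchone()
    if row:
        cache[k] = row[0]     # fill the cache on a miss
        return row[0]
    return None
```

Reads that hit the dict never touch disk, which is where the benchmark-friendly numbers come from; the SQLite file is what survives a restart.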
Yodan2025 | 2 days ago
yep, then add an AWS worker in-between
SJC_Hacker | 2 days ago
SQLite can also do in memory
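For reference, SQLite's in-memory mode is just a special database path, `:memory:` — the same SQL engine with no disk I/O:

```python
import sqlite3

# ":memory:" creates a private in-memory database for this connection;
# it vanishes when the connection closes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
conn.execute("INSERT INTO kv VALUES ('a', '1')")
row = conn.execute("SELECT v FROM kv WHERE k = 'a'").fetchone()
# row == ('1',)
```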
slashdave | 2 days ago
Yeah, very good point. It all comes down to requirements. If you require persistence, then we can start talking about redundancy and backup, and then suddenly this performance metric becomes far less relevant.
andersmurphy | 2 days ago
Backups are to the second with Litestream.
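A minimal `litestream.yml` sketch of what that looks like — Litestream tails the SQLite WAL and continuously replicates it off-box (the database path and bucket name here are placeholders):

```yaml
# Replicate a SQLite database to S3; Litestream syncs WAL pages
# continuously (sync-interval defaults to 1s).
dbs:
  - path: /var/lib/app/app.db
    replicas:
      - url: s3://my-backup-bucket/app.db
        sync-interval: 1s
```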