What kind of read/write ratio are you using? And would your solution work for a write-heavy workload?

Kafka has limits on the message size, and I need a solution for storing large blobs (up to 10MB) at data ingestion, for a very short time, until the job has been processed. So the read/write ratio will be exactly 50%, and there will be a high write load. Is FoundationDB capable of this specific task? Are there knobs to tune for ACID? Perfect ACID guarantees aren't really needed; if everything is fsynced every second, that would be completely fine.

From my own experience, I'd say FDB can handle it all; we've got 20% read / 80% write during peak hours, and the reverse, 80% read / 20% write, the rest of the time.

Without doing your own tests, here are per-core numbers from which you can extrapolate (e.g. via CPU MIPS) to your own hardware: https://apple.github.io/foundationdb/performance.html#throug...

FDB is ACID as shipped; you don't need to turn knobs to make it so. The toughest part is figuring out the classes/roles of the system. Here are a couple of good starting points: https://nikita.melkozerov.dev/posts/2019/06/building-a-found... https://forums.foundationdb.org/t/roles-classes-matrix/1340/...
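One caveat for your 10MB blobs: FDB caps individual values at roughly 100KB and a whole transaction at about 10MB, so each blob has to be chunked across multiple keys. A rough sketch with the Python bindings (the 'blobs' subspace and chunk size are made up, not anything FDB prescribes):

    import fdb

    fdb.api_version(710)
    db = fdb.open()

    blobs = fdb.Subspace(('blobs',))  # hypothetical key prefix
    CHUNK = 90_000  # keep each value under FDB's ~100KB limit

    @fdb.transactional
    def write_blob(tr, blob_id, data):
        # One key per chunk; note a single transaction is capped
        # at ~10MB of affected data, so blobs right at that size
        # may need to be split across several transactions.
        for i in range(0, len(data), CHUNK):
            tr[blobs.pack((blob_id, i))] = data[i:i + CHUNK]

    @fdb.transactional
    def read_blob(tr, blob_id):
        # Range-read the chunks back in key order and reassemble.
        return b''.join(v for _, v in tr[blobs.range((blob_id,))])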

Kafka limits are customizable; 10MB is quite small, and Kafka easily handles 100MB messages. If you need temporary storage that works like a log, there's nothing better than Kafka in terms of speed and scalability, but it's geared toward relatively long-term storage. If you only need to persist data for a very short amount of time, why not use an in-memory store? Do you have strict requirements around data loss? Something like Redis might just do the trick for you, and with Redis Cluster it even scales.
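To make the Redis suggestion concrete, a minimal sketch with redis-py, assuming a made-up key scheme and a 60-second TTL matching the "very short time" requirement:

    import redis

    r = redis.Redis(host='localhost', port=6379)

    # Write the blob with a TTL so Redis drops it automatically
    # once the short processing window has passed.
    r.set('ingest:blob:123', b'...payload up to 10MB...', ex=60)

    # The worker reads it back (None if it already expired),
    # then deletes it once the job is done.
    data = r.get('ingest:blob:123')
    if data is not None:
        r.delete('ingest:blob:123')

The TTL doubles as cleanup: even if a worker crashes, the blob doesn't linger.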