
I cannot give you exact numbers, but here is some information that might be useful:

- LogDevice ingests over 1 TB/s of uncompressed data at Facebook. This was highlighted in last year's talk at the @Scale conference.

- The default maximum number of storage nodes in a cluster, as defined in the code, is 512, but you can change that with --max-nodes; there is no theoretical limit. Each LogDevice storage daemon can handle multiple physical disks (we call them shards). So, if you have 15 disks per box and 512 servers, that's 7,680 total disks in a single cluster.

- The maximum record size is 32 MB, though in practice payloads are usually much smaller.

- ZooKeeper is not (currently) a scaling limitation, because clients don't connect to ZooKeeper (as long as you source the config file from the filesystem rather than from ZooKeeper as well).
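The sizing arithmetic above can be sketched as a quick back-of-the-envelope calculation. This is purely illustrative; the function names are my own, not part of LogDevice:

```python
# Back-of-the-envelope cluster sizing using the figures quoted above.
# Names here are illustrative, not LogDevice APIs.

def total_shards(nodes: int, disks_per_node: int) -> int:
    """Each storage daemon manages one shard per physical disk."""
    return nodes * disks_per_node

def per_shard_ingest_mb_s(cluster_ingest_tb_s: float, shards: int) -> float:
    """Average uncompressed ingest per shard, in MB/s (1 TB = 1e6 MB)."""
    return cluster_ingest_tb_s * 1_000_000 / shards

shards = total_shards(nodes=512, disks_per_node=15)
print(shards)  # 7680 disks in the cluster
print(round(per_shard_ingest_mb_s(1.0, shards), 1))  # ~130.2 MB/s average per shard at 1 TB/s
```

At 1 TB/s spread over 7,680 disks, the average load per disk stays well within what a single spindle or SSD can absorb, which is consistent with the claim that the node limit, not raw throughput, is the configured ceiling.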

Hope that helps.




