
I once conceived of a plan to store a lot of data per user and thought of the same idea. How do you store the DBs? Keeping them on, say, S3 means there's a multi-second (or longer) load time when a user logs in and you need to load the DB to the hard drive, right? (I'm thinking a GB or more of data per user.) I considered an EC2 instance with a multi-terabyte EBS volume attached, which could then effectively store a thousand users' data. Are there any other possibilities?
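A minimal sketch of that load-on-login pattern, with a plain dict standing in for the object store (in production these would be S3 get/put calls); the key scheme and function names here are made up for illustration:

```python
import sqlite3
import tempfile
import os

# Stand-in for an object store like S3: maps keys to raw bytes.
object_store = {}

def save_user_db(user_id, db_path):
    # Upload the whole SQLite file as one blob, keyed by user.
    with open(db_path, "rb") as f:
        object_store[f"dbs/{user_id}.sqlite"] = f.read()

def load_user_db(user_id, db_path):
    # Download the blob to local disk, then open it like any SQLite file.
    with open(db_path, "wb") as f:
        f.write(object_store[f"dbs/{user_id}.sqlite"])
    return sqlite3.connect(db_path)

# Build a tiny per-user DB, "upload" it, then "log in" and reload it.
workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "alice.sqlite")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE notes (body TEXT)")
conn.execute("INSERT INTO notes VALUES ('hello')")
conn.commit()
conn.close()
save_user_db("alice", path)

os.remove(path)  # simulate a fresh app server with no local copy
conn2 = load_user_db("alice", path)
rows = conn2.execute("SELECT body FROM notes").fetchall()
print(rows)  # [('hello',)]
```

The download step is where the multi-second latency lives for a GB-sized file; everything after it is ordinary local SQLite.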



Our DBs are tiny for most users. We run S3 (MinIO on Rook Ceph on k8s) locally, so the network latency is just cluster latency.

I figure you could just throw hardware at it like you mentioned. Move them to NVMe-backed S3 if needed.

And our use case is only ever: load, do a read or write, then save. So the DBs aren't open for very long.

And by compressing the DBs before uploading to S3, you could save on download time but pay a decompress cost.

This approach has its downsides, don't get me wrong, but it scales nicely. Just forget about running aggregates across the databases, at least not for a real-time result.


Put sqlite in redis.


Got a link or more details?



I've never done it, but you can store binary blobs in redis, and a sqlite db is a binary blob.



