Hacker News

Out of curiosity, what database are you using to store the data?



By default it writes metadata about the stream (title, description, etc.) using a file-based DB called nedb, and it appends the actual logged data to CSV files that are split into 500k chunks. When the user requests their logged data, all of the files are stitched back together, converted into the requested format (JSON, CSV, etc.), and streamed to the user’s web client.

For the production server, we are currently using MongoDB for metadata storage and the same CSV module for logged data storage.
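The chunk-and-stitch scheme described above could be sketched roughly as follows. This is a hypothetical illustration, not the project's actual code: it assumes chunks roll over every 500k rows, that each chunk file repeats the CSV header, and that "stitching" means concatenating chunks while dropping repeated headers. All function names here are made up.

```typescript
// Rows per chunk file (assuming "500k chunks" means 500k rows each).
const CHUNK_SIZE = 500_000;

// Which chunk file a given absolute row index lands in.
function chunkIndexFor(rowIndex: number): number {
  return Math.floor(rowIndex / CHUNK_SIZE);
}

// Stitch chunk contents back into one CSV, keeping only the first header.
function stitchChunks(chunks: string[]): string {
  if (chunks.length === 0) return "";
  const [first, ...rest] = chunks;
  const bodies = rest.map(c => c.split("\n").slice(1).join("\n"));
  return [first, ...bodies].join("\n");
}

// Convert the stitched CSV into JSON rows, as the export path would.
function csvToJson(csv: string): Record<string, string>[] {
  const [header, ...rows] = csv.split("\n").filter(l => l.length > 0);
  const cols = header.split(",");
  return rows.map(r => {
    const vals = r.split(",");
    return Object.fromEntries(cols.map((c, i) => [c, vals[i]]));
  });
}
```

In a real server these functions would operate on file streams rather than in-memory strings, so an export can be piped to the client without loading every chunk at once.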


That's a pretty unusual setup :)

I'd be interested in a blog post about how you chose this architecture.


Sounds good. I'll work on one once the traffic stabilizes.


Is this the same Nedb? https://github.com/louischatriot/nedb/

Looks like a pretty nice project.


