
We don't know the size of the DB involved. Dumping a multi-gigabyte DB every hour may be production-limiting.



Well, I've worked on a 50 TB DB doing 800 GB of changes per hour, which was recoverable to any point in time. That's not even particularly big by today's standards.


And you dumped the entire 50 terabytes every hour? The point above was that they should have been doing a DB dump; that's not always the best way (or even possible) to deal with large data sets.


No, and why would you? It makes no sense when there are better backup strategies available (archived redo logs, a hot standby, filesystem snapshots, etc.).
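
To make the archived-redo-log idea concrete, here's a toy Python sketch. Everything in it is invented for illustration (a dict as the "database", JSON files for the backup and log); no real DBMS works this way on disk, but the recovery logic is the same shape: take one full base backup, archive each change as it happens, and recover to any point in time by replaying the log up to a target timestamp.

    import json, time
    from pathlib import Path

    # Toy "database": a dict persisted as JSON, plus an append-only change log.
    # File names and record format are made up for this sketch.
    BASE_BACKUP = Path("base_backup.json")   # full copy, taken once
    CHANGE_LOG = Path("changes.log")         # the "redo log": one JSON record per line

    def take_base_backup(db: dict) -> None:
        # One full copy; after this, only the small change log needs archiving.
        BASE_BACKUP.write_text(json.dumps(db))

    def log_change(key: str, value: str) -> None:
        # Append each write with a timestamp -- cost proportional to the
        # changes, not to the size of the whole database.
        with CHANGE_LOG.open("a") as f:
            f.write(json.dumps({"ts": time.time(), "key": key, "value": value}) + "\n")

    def recover(target_time: float) -> dict:
        # Point-in-time recovery: restore the base backup, then replay
        # logged changes up to (and including) target_time.
        db = json.loads(BASE_BACKUP.read_text())
        for line in CHANGE_LOG.read_text().splitlines():
            rec = json.loads(line)
            if rec["ts"] > target_time:
                break                        # stop at the chosen recovery point
            db[rec["key"]] = rec["value"]
        return db

    # Usage: recover to the state as of `checkpoint`.
    db = {"a": "1"}
    take_base_backup(db)
    db["a"] = "2"; log_change("a", "2")
    checkpoint = time.time()
    db["a"] = "3"; log_change("a", "3")
    assert recover(checkpoint) == {"a": "2"}

Real systems (Oracle's archived redo logs, PostgreSQL's WAL archiving) do this at the block/record level rather than key/value, but the upshot is the same: the hourly backup cost tracks the 800 GB of changes, not the full 50 TB.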



