
> potential data loss should be written more carefully.

Doesn't this apply to any code that touches data intended to eventually be persisted? If so, IMO this applies to a huge portion of all software, I would guess more than half, because writes tend to be much more complex than reads IME.


Data loss usually occurs when you "migrate", "backup/restore", or "upgrade" data. A stupid internal tool can wreak havoc precisely because it's non-customer facing and gets less stringent testing.

Bugs in CRUD operations are usually ironed out early, and the blast radius is limited to a small subset of data. A mangled migration script copying from one table to another is really dangerous stuff, yet it's frequently treated as just an "internal tool".
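
A cheap guard rail for that case is to do the copy in one transaction and refuse to commit unless the result checks out. A minimal sketch, assuming a hypothetical old_users -> new_users copy in SQLite where new_users starts empty:

    import sqlite3

    def migrate_users(db_path):
        # Copy rows from old_users to new_users inside a single transaction,
        # and only commit if the row counts line up afterwards.
        conn = sqlite3.connect(db_path, isolation_level=None)  # manage the transaction manually
        cur = conn.cursor()
        try:
            cur.execute("BEGIN")
            cur.execute("INSERT INTO new_users (id, email) SELECT id, email FROM old_users")
            src = cur.execute("SELECT COUNT(*) FROM old_users").fetchone()[0]
            dst = cur.execute("SELECT COUNT(*) FROM new_users").fetchone()[0]
            if src != dst:
                raise RuntimeError(f"row count mismatch: {src} source rows, {dst} migrated")
            cur.execute("COMMIT")
        except Exception:
            cur.execute("ROLLBACK")  # nothing is persisted if anything went wrong
            raise
        finally:
            conn.close()

Verify-then-commit like this is cheap insurance even for a throwaway internal script.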

Floating point calculation and storage are also tricky and should be written/tested with greater care.
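
A quick generic illustration of the problem (not tied to any particular codebase), plus the decimal alternative usually used for money:

    from decimal import Decimal

    # Binary floats can't represent most decimal fractions exactly,
    # so repeated arithmetic drifts away from the "obvious" answer:
    total = sum(0.1 for _ in range(10))
    print(total)         # 0.9999999999999999
    print(total == 1.0)  # False

    # For money, or anything persisted and later compared for equality,
    # a decimal type (or integer cents) avoids the drift:
    total_dec = sum(Decimal("0.1") for _ in range(10))
    print(total_dec == Decimal("1.0"))  # True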

