1) How are writes/updates propagated to all the other containers/hosts? Is this based on the assumption that the containers will only read from the database, never write to it? And that any writes made externally only need to be deployed to new containers, with it being fine for existing containers to read stale values?
2) Is there anything special about SQLite that enables this design? Or could someone achieve the same thing with a config.json file that each instance of the application reads and parses?
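To make the comparison concrete, here is a minimal sketch of the "ship a read-only snapshot with each container" pattern in both forms. All names (`settings.db`, the `settings` table, the keys) are illustrative, not from the article; the point is that a bundled SQLite file gives you indexed point lookups without parsing the whole file, whereas `json.load` pulls everything into memory up front.

```python
import json
import os
import sqlite3
import tempfile

# Hypothetical settings payload; keys/values are made up for illustration.
settings = {"feature_x": "on", "max_retries": "5"}

tmpdir = tempfile.mkdtemp()
db_path = os.path.join(tmpdir, "settings.db")
json_path = os.path.join(tmpdir, "config.json")

# Build both snapshots once, as a deploy-time step would.
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT)")
con.executemany("INSERT INTO settings VALUES (?, ?)", settings.items())
con.commit()
con.close()

with open(json_path, "w") as f:
    json.dump(settings, f)

# JSON approach: parse the whole file, then look up in the dict.
with open(json_path) as f:
    json_value = json.load(f)["feature_x"]

# SQLite approach: open the local copy read-only (URI mode), and let an
# indexed query fetch just the row you need.
ro = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
row = ro.execute(
    "SELECT value FROM settings WHERE key = ?", ("feature_x",)
).fetchone()
ro.close()

print(json_value, row[0])  # on on
```

For a small config either works; SQLite starts to matter when the dataset is large enough that parsing the entire file per process (or holding it all in memory) becomes a cost.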
3) I'm presuming that latency is vastly improved because real-time network requests are avoided. Can the exact same result be achieved by eagerly prefetching the config data at startup? Deploying a complete copy of the SQLite database to all containers is, in essence, doing the same thing.
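The prefetch-at-startup alternative mentioned above could be sketched like this. `fetch_all_settings` is a hypothetical stand-in for one bulk call to a central settings service; after that call, every read is a local in-memory lookup, and the data is exactly as stale as a shipped snapshot would be until the process restarts.

```python
# Hypothetical stand-in for a single bulk request to a central settings
# service at boot; in production this would be a network call.
def fetch_all_settings():
    return {"feature_x": "on", "max_retries": "5"}

class ConfigCache:
    """Eagerly prefetch every setting once at startup and serve all
    subsequent reads from memory, keeping the hot path network-free."""

    def __init__(self):
        # One network round-trip at boot; nothing after that.
        self._settings = fetch_all_settings()

    def get(self, key, default=None):
        # Pure dict lookup: same latency profile as a local SQLite copy,
        # and the same staleness until the process is restarted.
        return self._settings.get(key, default)

cache = ConfigCache()
print(cache.get("feature_x"))  # on
```

The trade-off versus shipping the database is mainly operational: the prefetch still depends on the central service being up at container start, whereas a bundled file has no startup-time network dependency at all.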
4) How does this solve the problem of "major incidents were happening all the time"? Is reading from an RDS (or similar) database really that fragile?
I'm also left with some questions, however. If the issue is high read latency against a centralised settings database, why not deploy read replicas?
Sharing an SQLite database -> Sharing a SQLite database
In all my years in the industry working with SQL databases (since the early '90s), virtually no one familiar with these products pronounced the name by spelling out S-Q-L.
The rule is about the sound of what follows, not whether the next letter is a consonant or a vowel:
- an SSD if you pronounce it ess-ess-dee
- a universal constraint - starts with a vowel, but "a" is correct here because of the initial "yoo" sound
- an SQL database - again, pronunciation dependent: "an" if you say ess-cue-ell, "a" if you say sequel