Well then maybe consider making a SQL replication client that would ingest the changes (like most databases, MySQL writes to a streaming append-only log before it is compacted). Just parse the log and act on the changes.
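A minimal sketch of what that could look like in Python, assuming the python-mysql-replication package, row-based binlogs (binlog_format=ROW), an "id" primary-key column, and a hypothetical invalidate() helper:

    # Tail the MySQL binlog and invalidate cache entries for rows that changed.
    from pymysqlreplication import BinLogStreamReader
    from pymysqlreplication.row_event import (
        DeleteRowsEvent,
        UpdateRowsEvent,
        WriteRowsEvent,
    )

    def invalidate(table, pk):
        """Hypothetical helper: drop the cache entry for this table/primary key."""
        print(f"invalidate {table}:{pk}")

    stream = BinLogStreamReader(
        connection_settings={"host": "127.0.0.1", "port": 3306,
                             "user": "repl", "passwd": "secret"},
        server_id=100,                 # must be unique among replication clients
        only_events=[WriteRowsEvent, UpdateRowsEvent, DeleteRowsEvent],
        blocking=True,                 # keep tailing the log as new events arrive
        resume_stream=True,
    )

    for event in stream:
        for row in event.rows:
            # Updates carry before/after images; inserts and deletes carry "values".
            values = row.get("after_values") or row.get("values")
            invalidate(event.table, values.get("id"))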
Well, good. Your problem is artificial, choosing tools and scenarios to avoid "push". Your problem isn't with invalidating caches being hard in 2025. I've explained how to do it in both pull and push scenarios. Certainly not "the only hard problems in computer science." And neither is naming!
I’m also not making my own programming language, I’m using PHP. And I’m using MySQL. What is your point?
Are you saying that “invalidating caches is hard… if you insist on using only systems with no way to push updates, and oh if they have a way to push updates then you’re using someone’s library to consume updates so it doesn’t count?”
I agree with you, but you are omitting a key aspect of the problem which makes this hard. If you have a single server serving from a local SQLite database, caching and invalidating the cache is trivially easy.
It becomes way more difficult when you have N servers, all of which could potentially serve the same data. Local caches could then, yes, easily become stale, and even if you have a propagation mechanism, you can't guarantee against network failures or latency issues.
But suppose you have an HA Redis instance that you store your cached data in. Even with a write-through cache, you basically need to implement a two-phase commit to ensure consistency.
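To make the gap concrete, here is a rough sketch of a naive write-through path, assuming redis-py and a db_write() placeholder standing in for the authoritative database call; the comment marks the window a two-phase (or delete-then-write, or lease-based) protocol tries to close:

    import redis

    r = redis.Redis(host="cache.internal", port=6379)

    def db_write(key, value):
        """Placeholder for the authoritative database write."""
        ...

    def write_through(key, value):
        db_write(key, value)       # step 1: commit to the source of truth
        # If the process dies or the network partitions here, every server keeps
        # serving the stale cached value until the TTL expires.
        r.set(key, value, ex=300)  # step 2: update the shared cache, TTL as a safety net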
See the thread below. Caches only become stale if you refuse to implement any sort of push architecture. And even then, frequent batch-polling with ETags will quickly resolve it (e.g. right after you use N entries, poll to see which of the N have changed).
Bloom filters, Prolly trees, etc. can speed the batch polling up. But a push architecture obviates the need for it and prevents caches from being stale; it's called eventual consistency.
(Well, of course, unless there is a network partition and you can't communicate with the machine holding the actual source of truth, then yeah, your local cache will be stale. And to catch up on missed updates you will need to request all updates since a certain marker you saved. If the upstream has had a ton of updates since your marker and can't send the entire history, then you might have to purge your cache for that specific table / collection, yeah.)
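For the pull side, a rough sketch of the batch-poll-with-ETags idea, assuming an HTTP upstream that honours If-None-Match and an illustrative in-process dict cache keyed by item id:

    import requests

    cache = {}  # key -> (etag, value)

    def revalidate(keys, base_url="https://upstream.example/items/"):
        """After serving N cached entries, check which of them changed upstream."""
        for key in keys:
            etag, value = cache[key]
            resp = requests.get(base_url + key, headers={"If-None-Match": etag})
            if resp.status_code == 304:
                continue                  # unchanged, keep the cached value
            resp.raise_for_status()
            cache[key] = (resp.headers.get("ETag"), resp.json())  # refresh stale entry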
I would love it if I could over-withdraw money from my bank account by having someone on the other side of the country take money out of my account at an ATM at the exact same time as me, because the bank's storage system was eventually consistent, but that's not the case.
Pretty sure that when it comes to write transactions, such as bank transactions, you want them to be executed on a source of truth that enforces consistency, not eventual consistency. Any client software should not be trusted -- especially one written by a silly programmer who uses a cache as the source of truth.
n64js is an N64 emulator written in (mostly) pure ES6 JavaScript. It runs many ROMs at full framerate.
Why?
Mostly for the challenge. [hulkholden] has spent ~25 years (on and off) working on N64 emulators, and writing one in JavaScript gives [him] the opportunity to expand [his] comfort zone and learn something new. It's a good showcase for how powerful modern browsers have become.
Try Core Data with Swift and you will see that happening. Lazy objects (faults) are mapped from Objective-C into Swift and will happily crash on something like a = b where neither is optional.
Is this happening in the Objective-C code or in the Swift part?
I'm not terribly surprised though; my one experience with Core Data was miserable once we strayed even a little from the happy path, and I ended up rolling my own since we didn't need full functionality anyway. And this was for an internal app, at Apple ┐( ∵ )┌
Right, because the language has the "billion dollar mistake" of nullable references by default, which you cannot change without breaking code. And the original comment was bemoaning that Go chose to have nullable references by default too.
Unless something drastically changed in Scala 3, there is nothing to protect you from null in Scala.
In fact, even Java is effectively safer thanks to all the null checking done by IntelliJ.
Null is basically non-existent in idiomatic Scala. So technically you're right, but besides calling Java libs there is only an infinitesimally small chance to get NPEs from Scala code. (Scala's NPE is the MatchError ;-)).
For Scala 3 there are improvements. It's "null safe" as long as you opt in (modulo Java libs, and of course doing stupid things like casting a null to some other type).
So you're actually complaining about people who don't know what they're doing? How is this related to Scala?
When you have clueless people on the team, no language will save you. You can also crash Haskell programs by throwing exceptions or just using List.head…
I had researched buying a CNG car in the LA area and bringing it back to San Jose. Reliable miles-per-tank data was tough to come by, but it seemed like if you couldn't use PG&E stations to fill up (because they require pre-registration and a certified inspection), you wouldn't be able to take the most direct route.
That was for a Crown Victoria/Lincoln Town Car though. I don't know if the Civics get more miles per tank.