Our experience with this at Buildless is that remote caching stops paying off past 100 MB or so.
As a result, we impose reasonable caps that force a recompilation for the largest outputs, and it ends up being faster overall.
Just our experience, and of course mileage varies by project. Redis can store values up to 512 MB, but the cap needed for efficiency's sake is much smaller than that.
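For illustration, here's a minimal sketch of what such a size cap could look like. The function names, the threshold, and the in-memory dict standing in for the remote cache are all mine, not Buildless's actual implementation:

```python
# Hypothetical sketch: gate remote-cache writes on artifact size, so the
# largest outputs are recompiled locally instead of being uploaded/fetched.
MAX_CACHE_BYTES = 100 * 1024 * 1024  # ~100 MB, per the rough break-even above

def store_output(cache: dict, key: str, artifact: bytes,
                 max_size: int = MAX_CACHE_BYTES) -> bool:
    """Upload to the remote cache only if the artifact is under the cap.

    Returns True if cached; False means the caller should fall back to
    local recompilation rather than pay the transfer cost.
    """
    if len(artifact) > max_size:
        return False
    cache[key] = artifact
    return True

remote_cache: dict = {}
store_output(remote_cache, "small-output", b"x" * 1024, max_size=4096)   # cached
store_output(remote_cache, "huge-output", b"x" * 8192, max_size=4096)    # skipped
```

The interesting part is tuning `max_size`: it depends on your network, your build times, and your cache backend, so treat the 100 MB figure as a starting point, not a rule.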
Am I the only one who finds that the author gives very few details about what "working asynchronously" is and how it actually functions, focusing mostly on benefits it ought to provide, or on behaviours that would be beneficial in traditional office life as well?
I find this article to be absolutely pointless and clickbaity. It can be boiled down to: "you can use polygon.io for market data, but algotrading is difficult anyway, so there is not much to share yet".
Hey, I wrote it, and I'm sorry you feel that way. Personally, I wanted to share the high-level structure of how you'd build your own system. I would have loved to have seen something like this when I went down this rabbit hole, since I had to figure it all out myself. What do you think would have made it better? I'd be happy to roll that back in.