The .xlsx experience is even messier in a business context. I work daily across Windows and Mac, and my two worst offenders are Excel and QuickBooks.
Excel on Mac has been closing the gap, but VBA macros from Windows colleagues still break regularly — sometimes silently, sometimes loudly. A workbook that runs perfectly on Windows will open on Mac and just do nothing, with no error. I've ended up keeping a Windows machine running purely because of macro-dependent spreadsheets.
QuickBooks is the more frustrating one. Intuit has historically treated the Mac version as a feature-lagged port — payroll features, industry-specific reports, and bank reconciliation behaviour all differ between platforms. The company essentially wrote the same product twice and kept them deliberately out of sync. Muscle memory from one doesn't transfer to the other.
The only partial escape I've found is pushing financial calculations out to platform-neutral layers entirely — whether that's a shared Google Sheet, a web-based tool, or an API. The moment the logic lives in a .xlsm file, you're platform-dependent again.
The "one of the greatest inventions in computing" framing holds up when you look at what it actually did to financial math accessibility.
I've been implementing the functions Lotus 1-2-3 made mainstream as a REST API — amortization, NPV, IRR, compound interest — and the formulas are completely unchanged from 1983. Forty years of software evolution and the computation at the bottom has been stable the entire time.
What changed is only the interface layer: mainframe COBOL → Lotus cells → Excel formulas → Python libraries → REST endpoints. The spreadsheet era was the step that made financial math legible to non-programmers. Everything since has just been a different packaging of the same numbers.
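As a concrete illustration of that stability, the standard amortization payment formula carried across all those layers fits in a few lines (a Python sketch; the function name and example figures are mine, not from any of those systems):

```python
def monthly_payment(principal, annual_rate, months):
    """Standard amortization formula: P * r / (1 - (1+r)^-n).

    Identical math whether it runs in a Lotus cell, an Excel
    PMT() call, a Python library, or behind a REST endpoint.
    """
    r = annual_rate / 12          # periodic (monthly) rate
    if r == 0:
        return principal / months # zero-rate edge case
    return principal * r / (1 - (1 + r) ** -months)

# $200,000 at 6% over 30 years: the textbook answer
print(f"{monthly_payment(200_000, 0.06, 360):.2f}")  # → 1199.10
```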
The "non-programmers" part was the earth-shattering change. Before that, managers had to ask analysts to do the work or set up a program for each particular question; 1-2-3 let them build the model themselves and play with it before showing it to anyone.
One approach that sidesteps the whole problem: design for fully synchronous, stateless requests from the start so there's nothing to queue.
I did this for a financial calculator API — every request is pure computation, inputs in, result out, nothing persisted. No Redis, no workers, no task table, no locking. The response is ready before a user would notice a queue anyway (sub-50ms).
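The stateless shape is roughly this (a minimal sketch with a hypothetical NPV handler, not the actual API's code):

```python
def handle_npv(request: dict) -> dict:
    """Pure computation: inputs in, result out, nothing persisted.

    No queue, no worker, no task table, no lock. The response
    *is* the work, so there is nothing to schedule or retry.
    """
    rate = request["rate"]
    cashflows = request["cashflows"]  # cashflow at period 0, 1, 2, ...
    npv = sum(cf / (1 + rate) ** i for i, cf in enumerate(cashflows))
    return {"npv": round(npv, 2)}

print(handle_npv({"rate": 0.10, "cashflows": [-1000, 400, 400, 400]}))
# → {'npv': -5.26}
```

Because the handler touches no shared state, horizontal scaling is just running more copies behind the load balancer.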
Obviously this only works when tasks complete in milliseconds. But figassis's pattern of "starts simple, then incrementally grows into a small job system anyway" often happens because the initial scope could have been fully synchronous; the async complexity creeps in before it's actually needed.
Worth asking first: does this task genuinely have to be async, or is it just easier to model it that way?
Yeah, that makes sense too. I also try to keep things synchronous as long as possible.
In practice async usually shows up once there are external APIs, retries, scheduling, or anything that shouldn't block the request, and that's where I end up building some kind of job system again.
I'm trying to figure out if that point happens often enough to justify moving this outside the app entirely.
AstroBen's framing matches my experience — "well-specified + verification harness" really is the sweet spot. Financial math is about as well-specified as a domain gets: the amortization formula is unambiguous, expected outputs are known to the cent, and you can write tests that catch rounding errors in the third decimal place.
I shipped a REST API (8 financial calculator endpoints) from idea to live in about a week of evenings with Claude Code. The 33-test suite was essential — EdNutting's "convincing-but-wrong code" problem is acute in this domain specifically because an IRR calculation that's off by 0.01% looks completely plausible until a test catches it.
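The kind of check that catches this is a to-the-basis-point assertion against a known answer (a sketch, not the actual test suite; the bisection IRR here is my own illustrative implementation):

```python
def irr(cashflows, lo=-0.99, hi=10.0):
    """Internal rate of return by bisection (single sign change assumed)."""
    def npv(r):
        return sum(cf / (1 + r) ** i for i, cf in enumerate(cashflows))
    for _ in range(200):                    # bisect to well below 1e-9
        mid = (lo + hi) / 2
        if npv(lo) * npv(mid) <= 0:
            hi = mid                        # root lies in [lo, mid]
        else:
            lo = mid                        # root lies in [mid, hi]
    return (lo + hi) / 2

# IRR of [-1000, 500, 500, 500] is ~23.38%. A subtly broken
# implementation returning, say, 23.5% looks entirely plausible
# to a human reader; a tight tolerance against the known value
# is what actually catches it.
assert abs(irr([-1000, 500, 500, 500]) - 0.23375) < 0.0005
```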
But the thing that surprised me wasn't the speed — it was the shift in which projects are worth building at all. Infrastructure that used to take days (Dockerfile, Nginx config, deployment scripts) now takes a few hours. That changes the viability calculation for small projects. Things I'd have considered too small to bother productionising before now cross the threshold. That feels like a different kind of career change than "writes code faster" — more like a change in what's worth attempting.
Working as an accountant in a services industry, inventory has been the thorn in my side. The ERP package I manage and work in daily has some limitations on inventory control (possibly a signal to change packages), but other than the inventory issue it serves 100% of my needs. I'm currently working with various AI platforms to see what functionality can be built to integrate with it, or at least to make the month-end figures worthwhile. It's been challenging, but it's doable.