Having worked on many monoliths, from Fortune 500 systems down to start-up scale, I feel confident saying that monoliths simply fail, hands down, at every one of these scales.
Monoliths only fail when architects don't have a clue about modular development and writing libraries.
The same architects will just design distributed spaghetti code instead, with increased complexity and maintenance costs.
They just have to learn how to actually use and create libraries in their language of choice.
Each microservice is a plain dll/so/lib/jar/... maintained by a separate team.
No access to code from other teams, other than the produced library.
It isn't that hard to achieve.
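A minimal sketch of that boundary in Python, with a hypothetical `billing` library and function names invented for illustration: the team ships one module, the exported names are the contract, and everything else stays private to the owning team.

```python
# Hypothetical "billing" library owned by one team. Other teams import
# only the exported names; the underscored helpers are internal and
# free to change without coordination.
__all__ = ["charge"]

def _authorize(account_id):
    # Internal detail: pretend we checked the account's standing.
    return bool(account_id)

def _capture(account_id, amount_cents):
    # Internal detail: pretend we moved the money.
    return amount_cents > 0

def charge(account_id, amount_cents):
    """The library's public contract; the only thing other teams see."""
    return _authorize(account_id) and _capture(account_id, amount_cents)

# Another team consumes only the public surface:
assert charge("acct-1", 500) is True
assert charge("", 500) is False  # internal rejection, same contract
```

The same discipline works with a JAR, a .so, or any other artifact: the published interface is the boundary, regardless of deployment topology.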
The challenge is that in reality you will almost always need distinct build tooling, distinct CI logic, distinct deployment tooling, and distinct runtime environments and resources for distinct services, plus easy support for adding new services that rely on previously unused resources, languages, or runtimes. This need arises whether you choose a monolith or a microservice approach, but only the microservice approach can cope with it efficiently.
The monorepo/monolith approach can go one of two ways, both untenable in the average case: (a) an extreme dictatorship mandate that enforces the same tooling, languages, and runtimes for all services, or (b) an inordinate amount of tooling, overhead, and support staff to make the monorepo/monolith flexible.
(a) fails immediately because you can't innovate: you end up with a horrible legacy system that can't update to modern tooling or accommodate experimental, isolated new services for discovering how to shift to new tooling or new capabilities. This does not happen with microservices, not even when they are implemented poorly.
(b) only works if you're prepared to throw huge resources and headcount at the problem, which usually fails in most big orgs like banks, telcos, etc., and has only succeeded in super rare outlier cases like Google.
So I think I do have some experience regarding distributed computing.
And the best lesson is that I don't want to debug a production problem in such systems, full of spaghetti network calls, with possible network splits, network outages, and so on.
With microservices I need one debugger instance per microservice taking part in the request chain, or else the vain hope that the developers actually remembered to log the information that matters.
In the monolith case, your debugger is likely to step into very low-level procedures defined far away in the source code, with no surrounding context to explain why you're there, and no easy way to tell which sections of code can be excluded from the session outright, the way separate sub-components can be logically ruled out.
Instead you'll have to set a watchpoint, run the whole system very verbosely, trip the condition, and then set a new watchpoint accordingly. Essentially, you are doing serially what you could do in O(log n) steps with a moderately well-decoupled set of microservices.
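The O(log n) claim can be made concrete. A sketch, assuming each service boundary exposes output you can check independently; the `check_output_ok` callback and service names are hypothetical:

```python
# Binary-search a chain of services for the first one that corrupts a
# request, assuming check_output_ok(service) reports whether the data
# leaving that service is still correct.
def find_faulty(services, check_output_ok):
    lo, hi = 0, len(services)  # invariant: fault lies in services[lo:hi]
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if check_output_ok(services[mid - 1]):
            lo = mid   # everything through services[mid-1] is fine
        else:
            hi = mid   # fault is at or before services[mid-1]
    return services[lo]

# Toy run: four services, the third one ("transform") is broken.
chain = ["ingest", "validate", "transform", "persist"]
ok_after = lambda s: chain.index(s) < 2  # output is good only before it
assert find_faulty(chain, ok_after) == "transform"
```

In a monolith the equivalent bisection means repeatedly repositioning watchpoints and re-running, because there are no boundaries at which intermediate output can be inspected in isolation.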
You'd also have the added benefit that sub-components you can logically rule out can be mocked away during debugging: inject specific test cases, skip slow-running processes, whatever, and the only mocking infrastructure needed is a simple mock of an HTTP (or whatever) request library. One simple type of mock works for every service boundary.
To do the same in a monolith, you have to write custom mocking components and custom logic to apply them in the right places, nearly doubling the amount of test and debugging tooling you must write and maintain to achieve what you get essentially for free with microservices (see e.g. requests-mock in Python).
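A stdlib-only sketch of that idea (requests-mock does the same for real `requests` calls); the `fetch_stock` client and service URL are hypothetical, invented for illustration:

```python
from unittest import mock

# Hypothetical client function that talks to another team's service.
def fetch_stock(sku, http_get):
    payload = http_get(f"http://inventory-service/stock/{sku}")
    return payload["qty"]

# One generic fake of the HTTP call stands in for the entire service:
# no custom mocking components, no bespoke injection logic.
fake_get = mock.Mock(return_value={"sku": 42, "qty": 0})
assert fetch_stock(42, fake_get) == 0
fake_get.assert_called_once_with("http://inventory-service/stock/42")
```

Because every boundary is an HTTP call, this one pattern covers every service in the chain; the monolith equivalent needs a purpose-built seam for each internal call site.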
And all this has nothing to do with whether the monolith is well-written or spaghetti code compared to the microservice implementation.