Wikipedia is one of those services whose reliability I've never thought about, simply because it always seems to work. Very impressive, especially considering that their tech stack doesn't appear to be the most modern, nor is it (in the case of PHP) especially known for its robustness.
They do have a pretty complex infra with redundancy, extensive caching, the whole works. But I think they use a relatively boring, reliable stack throughout. It doesn't look like they run a lot of GARTNER MAGIC QUADRANT Cloud Solutions to Supercharge™ their web-scale data blazingly.
On the plus side, solid, stable software can run for decades without incident, as long as the hardware is willing. The main negative is that it's not as Supercharged™ as the latest cloud product with a 99.9% SLA, which itself relies on 5 other internal microservices with a 99.9% SLA each.
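To put a number on that: if you assume failures are independent (a big if, but fine for a napkin sketch) and the product is down whenever any dependency is down, availabilities multiply, so a product sitting on five 99.9% dependencies can't actually deliver 99.9%. Rough math:

```python
# Napkin SLA math: availability of a service that needs every one of
# its dependencies up, assuming independent failures.
def combined_availability(slas):
    result = 1.0
    for sla in slas:
        result *= sla
    return result

deps = [0.999] * 5  # five internal microservices at 99.9% each
a = combined_availability(deps)

print(f"effective availability: {a:.4%}")                # ~99.50%
print(f"downtime per year: {(1 - a) * 8766:.1f} hours")  # ~43.7h, vs ~8.8h for a single 99.9% service
```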
I doubt anyone on HN takes Gartner seriously. As James Plamondon (Microsoft) wrote:
> Analysts sell out - that's their business model. But they are very concerned that they never look like they are selling out, so that makes them very prickly to work with.
The question is, who is gullible enough to still believe Gartner has any value? You'd think by now even the most clueless of MBAs would have caught on.
Thought the same thing. Finding the root cause is easier when you don't have to debug through dozens of abstraction layers (KVM, containerd/Docker, Kubernetes, etc.).
> Seriously though, not a fan of php at all but the js tooling is rocket science in comparison.
laughs in left-pad, Webpack, Grunt, ...
JavaScript tooling absolutely sucks: even for a moderately sized project, `npm install` can take many minutes, often because some native code has to be compiled. Webpack builds can take even longer.
In contrast, with PHP you run `composer install` and it works, no bundling or whatever required.
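For anyone who hasn't used it: dependencies are declared in a composer.json and `composer install` resolves and fetches them into vendor/, with no build or bundling step. A minimal example (the package and version constraints here are just for illustration):

```json
{
    "require": {
        "php": ">=8.1",
        "guzzlehttp/guzzle": "^7.0"
    }
}
```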
The Wikimedia Foundation has invested a lot in reliability over the last decade. Nowadays it's in a great place, but about 10 years ago it was up and down all the time.