
Wikipedia is one of those services whose reliability I have never thought about, simply because it always seems to work. Very impressive, especially considering that their tech stack doesn't appear to be particularly modern or, in the case of PHP, especially known for its robustness.



They do have a pretty complex infra with redundancy, extensive caching, the whole works. But I think they use a relatively boring, reliable stack throughout. It doesn't look like they run a lot of GARTNER MAGIC QUADRANT Cloud Solutions to Supercharge™ their web-scale data blazingly.

On the plus side solid stable software can run for decades without incidents, as long as the hardware is willing. The main negative is that it's not as Supercharged™ as the latest cloud product with a 99.9% SLA, which itself relies on 5 other internal microservices with 99.9% SLA each.


I doubt anyone at HN takes Gartner seriously, as James Plamondon (Microsoft) wrote:

> Analysts sell out - that's their business model. But they are very concerned that they never look like they are selling out, so that makes them very prickly to work with.

The question is, who is gullible enough to still believe Gartner has any value? You'd think by now even the most clueless of MBAs would have caught on.


5 internal microservices? I thought the Supercharged™ had deployed 256 microservices last week alone.


Lots of valid criticism exists for PHP, but "not robust" isn't something I would say.

(Lots of the criticism is exaggerated as well.)


> or is (in the case of php) especially known for its robustness.

It's 2023, can this pointless PHP bashing please stop once and for all if you're not talking about WordPress?

In some cases, "keep it simple, stupid" is the way to go - MediaWiki has mastered this.


I'm a big fan of "boring" technology for this reason. Huge knowledge base behind the existing stuff. No real need to change.


Remember that they mostly serve static content to non-logged-in users.

As long as the cache layer is working, most people wouldn't notice an outage.


Maybe that’s why it’s so reliable.


Thought the same thing. Finding the root cause is easier when you don't have to debug dozens of abstraction layers (KVM, Containerd/Docker, Kubernetes, etc).


Operating at this scale you can assume that all these things you mentioned do exist :)


It's all public: https://wikitech.wikimedia.org/wiki/Wikimedia_infrastructure

You can even get access to devops/SRE tickets (and IRC messages).


Docker? You’re probably right. K8s? Wouldn’t be so sure. Would like to hear from the horse’s mouth though.

Edit: looks like it’s used. Wonder if it’s more tooling than production traffic. No time to dig further.



You’re implying serious and useful work can be done in a stable tech stack? Preposterous.

Seriously though, not a fan of php at all but the js tooling is rocket science in comparison.


> Seriously though, not a fan of php at all but the js tooling is rocket science in comparison.

laughs in left-pad, Webpack, Grunt, ...

JavaScript tooling absolutely sucks - even for a moderate-sized project, `npm install` can take many minutes, often enough because some native code has to be compiled. Webpack builds can take even longer.

In contrast, with PHP you run `composer install` and it works, no bundling or whatever required.


I think that’s what they meant - “rocket science” as in “way too complicated for 99% of down-to-earth jobs”


Back when I was doing serious work in php it was ftp (yes, no ‘s’) the php files onto an Apache box and that was it! Go went the same way, at least.


The Foundation have invested a lot over the last decade in reliability. Nowadays it is in a great place, but about 10 years ago it was up and down all the time.


Uh? If anything wikipedia is robust because their tech stack isn't modern.

Also php is robust as fuck



