I don't use Facebook any longer, but I believe the web app's home page is still addressed as /home.php. It's been 10 years since I worked with a LAMP stack where the P stood for PHP, but even then, using .htaccess to rewrite to pretty URLs was commonplace enough to be an explicit requirement from clients. One might wonder if there is an ocean of tech debt underlying that ".php".
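For anyone who never had to do it, the kind of rule clients asked for looked roughly like this (a minimal .htaccess sketch using mod_rewrite; the path is made up for illustration, not Facebook's actual config):

```
# Serve /home via home.php without exposing the .php extension
RewriteEngine On
RewriteRule ^home$ home.php [L]
```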
This doesn't even come close to making a case for a code quality problem at Facebook.
> That’s 429 people working, in some way, on the Facebook iOS app. Rather than take the obvious lesson that there are too many people working on this application, the presentation goes on to blame everything from git to Xcode for those 18,000 classes.
So your solution is to just have fewer people working on the app? Those people are writing code that presumably does things Facebook needs. They then quote:
> It becomes harder and takes longer to add new features versus a system where architecture is golden.
Except they do have an architecture that works; they made it work for what they wanted to do. Facebook engineering has time and time again decided to use engineering, rather than process, to overcome the limitations of development tooling at scale.
PHP became a limitation. Nearly everyone said "well, obviously just rewrite it in another language". Instead, they wrote the original HipHop compiler to automatically convert PHP into performant C++, without the PHP engineers having to miss a beat.
Now HHVM is the fastest PHP VM, and they've iterated on the language itself with Hack; they got faster and better without redoing anything. Contrast that with the number of times Twitter has had to completely rewrite its stack. Which company is better off today?
Git became a limitation: they had too much code in their centralized repository. The monolithic code model was working for them; Git as-is wasn't. Everyone said "well, just change how you do things and break up your code base into smaller repositories, OBVIOUSLY."
Instead, they built a Mercurial backend on top of a distributed filesystem and made the solution scale to their needs; they didn't need to change how they worked. It's way easier to get an infra team to write better source control than to tell thousands of engineers to each change how they do things and hope that works out.
This model of bending their systems to their will with sheer engineering force, rather than process change, is probably the single greatest decision FB engineering has ever made.
For the second scenario, I don't see how that nitpick is relevant to the rest of the article. For the third, how many end users notice any reliability issues with Facebook's services?
The answer is almost none, because instead of having to be extra careful when shipping new features and having to trust every line of code every single engineer writes, they wrote an orchestration system that automatically tests new changes and rolls back anything that goes wrong. Visible outages are rare; we wouldn't even be privy to the scale of their incident count if that slide didn't exist.
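The general shape of that kind of rollout automation is simple, even if the real thing is far more sophisticated: push to progressively wider audiences, watch health metrics, and revert automatically on regression. A minimal sketch of the pattern in Python (a generic illustration with hypothetical deploy/check_health/rollback helpers and a made-up release name, not Facebook's actual tooling):

```python
import time

# Progressively wider rollout stages; real systems gate on many more signals.
STAGES = ["canary", "2%", "25%", "100%"]

def deploy(release: str, stage: str) -> None:
    print(f"pushing {release} to {stage}")       # stand-in for real push logic

def rollback(release: str) -> None:
    print(f"rolling back {release} everywhere")  # stand-in for real revert logic

def check_health(stage: str) -> bool:
    # In practice: compare error rates, latency, crash counts, etc.
    # against the previous release's baseline for this stage.
    return True

def ship(release: str) -> bool:
    for stage in STAGES:
        deploy(release, stage)
        time.sleep(1)  # let metrics accumulate (minutes or hours in reality)
        if not check_health(stage):
            rollback(release)
            return False
    return True

ship("app-v2.341")
```

The point is that the machine, not individual reviewers, is the last line of defense against a bad change reaching everyone.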
So how is that at all a negative sign of "code quality"?