I'm surprised this article didn't mention test driven development for building large pieces of software!
I thought test-driven development was impractical and pedantic when I first heard about it. I gave it a shot, though, and what it did for me was keep massive systems from becoming impossible to modify. When a system gets too big to fit in one person's head, most developers get scared to cut new releases, because there is probably some critical, obscure part of the system whose requirements and QA test cases everyone has forgotten. Even when all the QA test cases are documented, the release process becomes increasingly difficult and time consuming.
With test-driven development, you can just run all the tests, and if they pass, ship it. The key is not to accumulate too many tests, especially integration tests, which take a long time to run and tend to break whenever designers make UI changes. Usually I start with one happy-path integration test and then write tests as I develop for the things that don't work right. About 40% of the code works the first time, and I never have to write a test for it. For another 45%, I write a test, fix the code, and once it passes I can forget about it. The remaining 15% is usually some tricky algorithm with many corner cases, and that's where most of the testing effort goes. When the happy path works, most of the system is working; beyond that, spending more time on integration tests is usually a massive waste. Either the flow works the first time or something lower down broke, and it's better to write a test for that lower-level part, which runs far faster than the integration test.
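To make that workflow concrete, here's a minimal sketch in pytest (the module and functions like create_order and calculate_discount are made up for illustration, not taken from any real system): one broad happy-path test, plus small, fast unit tests that pin down the corner cases of the tricky 15%.

    # test_orders.py -- illustrative sketch only; names are hypothetical.
    import pytest
    from myapp import create_order, calculate_discount  # hypothetical module

    def test_order_happy_path():
        # One broad end-to-end check: if this passes, most of the flow works.
        order = create_order(customer_id=42, items=[("widget", 3)])
        assert order.total > 0
        assert order.status == "confirmed"

    @pytest.mark.parametrize("quantity,expected", [
        (0, 0.0),      # corner case: empty order gets no discount
        (10, 0.05),    # threshold boundary
        (1000, 0.20),  # discount is capped
    ])
    def test_discount_corner_cases(quantity, expected):
        # The "tricky 15%": a cheap, fast test per corner case instead of
        # another slow end-to-end test.
        assert calculate_discount(quantity) == pytest.approx(expected)

The happy-path test stays deliberately shallow; every corner case found during development gets its own parametrized unit test, which keeps the suite fast enough that "run all the tests, then ship" stays practical.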
I was able to port a large enterprise app from an Oracle backend to Postgres because there were tests for everything. The port mostly amounted to getting all the tests to pass on the new database, plus the necessary data migration. It was by no means trivial, but it was at least possible, it worked, and it saved the company millions in license fees.
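As a rough sketch of how a test suite can drive that kind of port, assuming a SQLAlchemy-style stack and pytest (the connection strings and fixture below are hypothetical, not the actual setup from that project): if every test gets its database connection from one fixture, pointing the whole suite at the new backend is a one-line switch.

    # conftest.py -- hypothetical sketch; real connection details will differ.
    import os
    import pytest
    import sqlalchemy

    DB_URLS = {
        "oracle": "oracle+cx_oracle://user:pass@legacy-host/XE",
        "postgres": "postgresql+psycopg2://user:pass@new-host/app",
    }

    @pytest.fixture
    def db_engine():
        # Select the backend via an environment variable, so the identical
        # test suite can be run against the old and the new database.
        backend = os.environ.get("APP_DB", "postgres")
        engine = sqlalchemy.create_engine(DB_URLS[backend])
        yield engine
        engine.dispose()

Running the full suite with APP_DB=postgres then becomes the acceptance check for the migration: when everything passes on the new backend, the port is done.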
The point being: a system with millions of lines of code is approachable if it has good tests. A new developer can work on it, and if all the tests pass, that developer didn't break anything. I can go back to code I wrote years ago, run the tests, and if they pass, I can still use it: I can see how it's supposed to work, add features, and so forth. Without tests this gets very hard, because the bigger a system grows, the more things break when it changes.