When we build software we are building a machine with components. In every other industry where a machine with components is built, they apply test-driven development: a component is rigorously tested over and over again until it meets its specifications.
I would not trust a car whose components weren't rigorously tested in that way. Otherwise I can't credibly claim that a bug that never appeared in development won't show up when I'm driving the car on a highway. TDD doesn't eliminate this risk, but it gets us a lot closer.
TDD is not the same as testing. At the risk of oversimplifying, TDD is writing your tests first. AND they need to be automated tests. I bet that never in the history of the world has a car factory been built by starting with the QA line, making sure the QA line is properly robotic, and only after that starting to figure out how to design the parts.
I too suspect that automated testing, and possibly TDD, would raise the quality of FLOSS. But I'm not a committer on any of those projects; maybe those guys know something I don't. Or maybe you and I are right.
But despite that, TFA has a good point about TDD. He says that "TDD" is listed by Canonical as one of the technical virtues of Mir (I don't know if he's correct about this part). And he says that he does not see that as a technical virtue, in part because lots of the systems that he prefers (for technical reasons) are not using TDD, including his own project. (Also, btw, including the Linux kernel, which I believe has a "think about it really hard, then make sure it compiles, then ship it!" approach to quality control, at least if I read some of Linus's public emails correctly.) I agree that if someone wants to list TDD as a "technical feature", they're reaching.
I think that you can look at part specifications as writing a test. The component we're aiming for has to meet x, y, and z standards. Then you design the component, then you make sure it meets those standards. If not, iterate. Pretty analogous, I'd say.
You think car manufacturers practice TDD with their prototypes? Ha! People who are building physical prototypes...
1) don't test first (they test AFTER building the prototype), and
2) don't use automated tests to avoid regressions, at least not typically
Bottom line, there's nothing even approaching TDD in physical-goods manufacturing.
Maybe you don't know what TDD is, and have assumed that "tdd" means "good testing"?
I'm not sure what automated testing would look like in automotive part testing. This might be a situation where the technology for completely automated testing doesn't yet exist, but that doesn't mean it's not the right thing to do; software can lead in this regard.
I don't think the automation part of TDD is essential or definitive, just very nice to have, but the test-first philosophy is essential.
I think you can look at the specifications process as writing a test that you assume will fail, insomuch as you are defining the exact conditions of success and don't yet have a component to meet those conditions. Then you are creating the prototype, then running the test, and iterating the prototype until it passes.
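The spec-then-iterate cycle described above is the same shape as TDD's red-green loop. A minimal sketch (the `encode`/`decode` names are hypothetical, invented purely for illustration):

```python
# Step 1: write the test first. It encodes the "part specification":
# the component must round-trip a value through encode/decode.
# (encode/decode are hypothetical names, used only for illustration.)
def test_round_trip():
    assert decode(encode(42)) == 42

# Step 2: running the test now fails (NameError: no component exists
# yet that meets the conditions). That's the "red" phase.

# Step 3: implement just enough to pass, then iterate ("green").
def encode(n):
    return str(n)

def decode(s):
    return int(s)

test_round_trip()  # the prototype now meets its specification
```

The test defines the exact conditions of success before any implementation exists, which is the analogy to a part specification.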
I'm going to go out on a limb and say that if you write KDE UI code, you should be able to answer the question of how to test it, preferably in the automated sense.
Notwithstanding all arguments for and against TDD, the best proof of correctness is a long (manual) testing cycle.
Of course more automation is better, so I'd probably want to write a selenium-equivalent for asserting things about a component tree. Again, the KDE components are very regular so it's easy to generically traverse/inspect a GUI.
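A Selenium-equivalent for a component tree might look roughly like this. The `Widget` class below is a plain stand-in for real toolkit widgets (not any actual KDE/Qt API); the point is only the generic traverse-and-assert pattern that a regular component tree makes possible:

```python
from dataclasses import dataclass, field

@dataclass
class Widget:
    # Stand-in for a real toolkit widget (hypothetical, for illustration).
    kind: str
    text: str = ""
    children: list = field(default_factory=list)

def walk(widget):
    # Generic depth-first traversal: works on any regular component tree
    # without knowing anything about the specific widget types.
    yield widget
    for child in widget.children:
        yield from walk(child)

def find(root, kind):
    # Selenium-style selector: collect all widgets of a given kind.
    return [w for w in walk(root) if w.kind == kind]

# Assert things about the component tree.
ui = Widget("window", children=[
    Widget("toolbar", children=[Widget("button", text="Save")]),
    Widget("editor"),
])
assert len(find(ui, "button")) == 1
assert find(ui, "button")[0].text == "Save"
```

Because the traversal is generic, one small harness can inspect any GUI built from the same regular components.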
It's really not hard if you actually try.
I work on Fortune 500 consulting projects, and I don't believe in TDD for UI code, given the myriad of technologies we use across work projects and the need to scale across multi-site development.
TDD only seems to work on the business logic, and only in code written with testing in mind.
I don't think that's true.
There may be a requirement to do unsexy tasks that volunteers aren't interested in, but the actual overhead of creating software that works as intended is lower with TDD than without it; that is the whole point of TDD.
In reality, though, the tests often bitrot faster than the actual code does. The cost of the very large set of unit tests generated by TDD only grows over time without contributing to the feature set of the software being developed. For this reason TDD does in fact introduce a large overhead, in exchange for a potential gain in safety.
The controversy starts when people realize that the most bang for the buck comes from tests that exercise the public interfaces of the various parts of the software, and that less and less benefit comes from tests covering the internals. Often the fine-grained tests covering internal details cause the internals to ossify and increase the cost of change until they are finally abandoned.
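One way to see the point about public interfaces: a test pinned to the exposed behavior survives an internal rewrite, while a test pinned to a private detail would ossify it. The `Stack` below is an illustrative toy, not from any project discussed here:

```python
class Stack:
    # Public interface: push and pop. Everything else is internal.
    def __init__(self):
        self._items = []          # internal detail: a Python list

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# Interface-level test: it exercises only push/pop, so it keeps passing
# even if _items is later replaced by, say, a linked list. A test that
# asserted on _items directly would break on that refactor.
s = Stack()
s.push(1)
s.push(2)
assert s.pop() == 2
assert s.pop() == 1
```

Testing all and only the exposed behavior is what leaves the internals free to change.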
TDD has its place, but taken too far it can do just as much harm as a lack of tests.
That's not TDD being controversial; at most it's the details of how to apply TDD that are controversial. Really, it seems to be a "controversy" that amounts to saying "doing things that TDD never calls for is excessive," with no other side. That isn't a controversy, it's a non sequitur. If you are creating unit tests for internal details that aren't exposed, you are preventing refactoring rather than enabling it; testing all and only the required exposed behavior, so that you can refactor the implementation of that behavior without changing the tests, is a central part of TDD. I suspect the real problem is bad metrics used by some people who say they are doing TDD but have adopted measurements that forget its entire point, such as insisting that every method be directly tested, which is clearly inappropriate.
There are some projects where TDD is downright vital, but most of the time it is not.
I find it hard to believe that the license that Xorg is using has somehow placed it at a disadvantage. If anything, by choice of license, Canonical has ensured that many commercial organizations that contribute to Xorg today may not contribute to Mir.
I was hoping there would be a real comparison of Wayland and Mir development. It seems like they may be identical in effect, and both will enable using Android HAL implementations in mobile devices.
I had an occasion to ask a Linux kernel developer about this and got an answer along the lines of "Why ask? Only Ubuntu will use Mir."