Mir in Kubuntu (martin-graesslin.com)
71 points by Tsiolkovsky on May 12, 2013 | 33 comments



There's a mirrored version of the article on Planet KDE: http://planetkde.org/#item-f97a50b7


There's a lot of truth here, but the article seems to overreach at times. The benefits of TDD are many, and I don't understand the logic of refusing to use a GPLv3 component because you're worried about GPLv4.


TDD is controversial. Mir is GPLv3; lots of free software, including KDE, is GPLv2+. Incorporating Mir into KDE would move everything to GPLv3; if there is ever a GPLv4, no software under that license could ever be integrated into KDE (since Mir does not have the usual "or a later version" phrase).


I don't really understand why TDD is controversial.

When we build software we are building a machine with components. In every other industry where a machine with components is being created, they apply test-driven development. A component is rigorously tested over and over and over again until it meets the specifications.

I would not trust a car where every component inside of it wasn't rigorously tested in such a way. Otherwise I cannot credibly claim that a bug which never appeared in development won't show up while I'm driving the car on a highway. TDD doesn't eliminate this risk, but it gets us a lot closer.


This analogy (machines, cars, etc) is absolutely terrible. No physical-goods manufacturer uses a process comparable to TDD.

TDD is not the same as testing. At the risk of oversimplifying, TDD is writing your tests first. AND they need to be automated tests. I bet that never in the history of the world has a car factory been built by starting with the QA line, making sure the QA line is properly robotic, and only after that starting to figure out how to design the parts.

----

I too suspect that automated testing, and possibly TDD, would raise the quality of FLOSS. But I'm not a committer on any of those projects; maybe those guys know something I don't. Or maybe you and I are right.

But despite that, TFA has a good point about TDD. He says that "TDD" is listed by Canonical as one of the technical virtues of Mir (I don't know if he's correct about this part). And he says that he does not see that as a technical virtue, in part because lots of the systems that he prefers (for technical reasons) are not using TDD, including his own project. (Also, btw, including the Linux kernel, which I believe has a "think about it really hard, then make sure it compiles, then ship it!" approach to quality control, at least if I read some of Linus's public emails correctly.) I agree that if someone wants to list TDD as a "technical feature", they're reaching.


I need to clarify. I'm not talking about manufacturing, I'm talking about prototyping and design. Software development is analogous to making the prototype, not the copies. A replica of the prototype should be able to pass the same tests the prototype can - that's why manufacturers can get away with only testing a sample of them. You can be sure that the prototype has gone through rigorous specifications testing.

I think that you can look at part specifications as writing a test. The component we're aiming for has to meet x, y, and z standards. Then you design the component, then you make sure it meets those standards. If not, iterate. Pretty analogous, I'd say.


I agree with what you said in THIS comment, but it makes it even more obvious that your previous comment contains untrue statements.

You think car manufacturers practice TDD with their prototypes? Ha! People who are building physical prototypes...

1) don't test first (they test AFTER building the prototype), and

2) don't use automated tests to avoid regressions, at least not typically

Bottom line, there's nothing even approaching TDD in physical-goods manufacturing.

Maybe you don't know what TDD is, and have assumed that "tdd" means "good testing"?


Uhm, yes. TDD, as you've already said, means write tests first, and then the function they test. The point of the test-first is to declare and define explicitly what the specifications for the function are. So writing your tests first is pretty analogous to specifying exactly what you want your function to output given a specific input.

I'm not sure what automated testing would look like in automotive part testing. This might be a situation where just because the technology doesn't yet exist for completely automated testing doesn't mean it's not the right thing to do, and software can therefore lead in this regard.

I don't think the automation part of TDD is essential or definitive, just very nice to have, but the test-first philosophy is essential.

I think you can look at the specifications process as writing a test that you assume will fail, insomuch as you are defining the exact conditions of success and don't yet have a component to meet those conditions. Then you are creating the prototype, then running the test, and iterating the prototype until it passes.
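To make that concrete, here is a minimal red/green sketch in Python. Everything in it (the braking-distance function, its numbers, the tolerance) is made up purely for illustration: the point is that the test encodes the "specification" first, and the implementation is iterated until the test passes.

    import unittest

    def braking_distance(speed_kmh, friction=0.7):
        # Written only after the test below existed and failed.
        # d = v^2 / (2 * g * mu), with v in m/s and g = 9.81 m/s^2.
        v = speed_kmh / 3.6
        return v * v / (2 * 9.81 * friction)

    class BrakingDistanceSpec(unittest.TestCase):
        # This test is the "specification": written first, watched fail,
        # then the function above was iterated until it passed.
        def test_100_kmh_on_dry_road(self):
            self.assertAlmostEqual(braking_distance(100), 56.2, delta=0.5)

    if __name__ == "__main__":
        unittest.main()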


It's free software. There is a large overhead when doing TDD, and such overhead is not practical given the limited, volunteer resources such projects get. Free software project velocity is based on contributions. If you require every contribution to also include tests before the code is committed, you kill the project in a few months. This is why TDD-based projects are so few and controversial in the FLOSS world.


I think a common misconception about TDD is that it requires a lot of overhead. It actually reduces overhead because you front-load the quality assurance and save a lot of time going back to fix bugs down the road. Some of the most popular open source projects have a lot of test coverage. Some very important projects don't, but I think the only reason Linux, for instance, has gotten away with this is that it has such a gigantic community that bugs are found and fixed almost immediately.


How do you TDD UI code?


So hopefully you have a way of generating HTML/CSS that will throw errors if you have incorrect or missing data. Beyond that, rendering a lot of things on one page is a good strategy. Among other benefits, it forces you to modularize your code in order to be able to do that. It's also a hell of a lot more efficient to look at one page with 50 items to check that they all rendered correctly, than it is to check 50 pages.


Great! Now how do you TDD KDE UI code?


I am not entirely sure, but I think you GDB the STDOUT, grep the bin, and if that doesn't work, reroute the auxiliary phase array to the main deflectors. It works every time.

I'm going to go out on a limb and say that if you write KDE UI code, you should be able to answer the question of how to test it, preferably in the automated sense.

Notwithstanding all arguments for and against TDD, the best proof of correctness is a long (manual) testing cycle.


The thing is, there is yet to exist any tool that allows TDD of desktop UIs beyond the toy examples people come up with for web applications at conferences.


The same way. KDE components are very modular, so it's trivial to write a UI form that displays a bunch of components from various parts of your code. You could write a second executable that just displays this test form with test data (most of your code should be written as a library anyway, so having two executables that call your components shouldn't be an issue).

Of course more automation is better, so I'd probably want to write a selenium-equivalent for asserting things about a component tree. Again, the KDE components are very regular so it's easy to generically traverse/inspect a GUI.

It's really not hard if you actually try.
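Very roughly, something like the following sketch. Python/PyQt5 is standing in for the C++ KDE/Qt APIs here, and the widgets and data are made up: the idea is just to build the "test form" from known data, traverse the component tree generically, and drive it with synthetic input events so it can run unattended.

    import sys
    from PyQt5.QtCore import Qt
    from PyQt5.QtTest import QTest
    from PyQt5.QtWidgets import (QApplication, QLabel, QPushButton,
                                 QVBoxLayout, QWidget)

    def build_test_form(items):
        # Stand-in for the second executable that displays your
        # components populated with known test data.
        form = QWidget()
        layout = QVBoxLayout(form)
        for text in items:
            layout.addWidget(QLabel(text))
        layout.addWidget(QPushButton("Refresh"))
        return form

    app = QApplication(sys.argv)
    form = build_test_form(["row 1", "row 2", "row 3"])

    # Generic traversal/inspection of the widget tree, Selenium-style.
    labels = form.findChildren(QLabel)
    assert [l.text() for l in labels] == ["row 1", "row 2", "row 3"]

    # Synthetic input events, so the test form can be driven without a human.
    QTest.mouseClick(form.findChild(QPushButton), Qt.LeftButton)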


On the web, you can use Selenium and pair it with a headless browser like PhantomJS and/or a headless virtual machine running IE.
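A minimal sketch of what that can look like in Python; the URL and selectors are placeholders, and headless Chrome stands in here for PhantomJS:

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.common.by import By

    options = Options()
    options.add_argument("--headless")      # no visible browser window
    driver = webdriver.Chrome(options=options)

    try:
        driver.get("http://localhost:8000/orders")  # placeholder URL
        # Assert that the expected DOM structure is there, not how it looks.
        rows = driver.find_elements(By.CSS_SELECTOR, "table.orders tr.order")
        assert len(rows) == 50, "expected 50 rendered orders"
        assert driver.find_element(By.ID, "total").text != ""
    finally:
        driver.quit()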


How do you validate that the CSS or HTML is rendered the way it's supposed to be?


You look at it :-) (Edit: That's not the whole answer. Usually when you test you assert that certain elements of the DOM are where they should be. At the very least you have verified the structure of the HTML document, if not how it looks. For actual aesthetics I can't see any way around human testing.)


I am just playing devil's advocate.

I work on Fortune 500 consulting projects and don't believe in TDD for UI code that has to work across the myriad technologies we use in those projects, while also scaling across multi-site development.

TDD only seems to work on the business logic, and only in code written with testing in mind.


You have the computer look at it. Brett Slatkin of Google Consumer Surveys has a great talk on using perceptual diffs for this:

https://air.mozilla.org/continuous-delivery-at-google/
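The core of a perceptual diff is small enough to sketch. A rough version with Pillow (the file paths are placeholders): capture a screenshot of the page and compare it against a known-good "golden" image, flagging any pixel-level change for a human to approve.

    from PIL import Image, ImageChops

    golden = Image.open("golden/login_page.png").convert("RGB")
    current = Image.open("screenshots/login_page.png").convert("RGB")

    diff = ImageChops.difference(golden, current)
    bbox = diff.getbbox()  # None means the images are pixel-identical

    if bbox is None:
        print("no visual change")
    else:
        # Save the changed region so a human can approve or reject it.
        diff.crop(bbox).save("diffs/login_page.png")
        raise SystemExit("visual diff detected in region %s" % (bbox,))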


> There is a large overhead when doing TDD.

I don't think that's true.

There may be a requirement to do unsexy tasks that volunteers aren't interested in, but the actual overhead to create software that works as intended is lower with TDD than without it, which is the whole point of TDD.


TDD creates a large maintenance overhead. It more than doubles the code that has to be maintained for the lifetime of the software. This is by design since any modification to the codebase should start with a modification to the tests.

In reality, though, the tests often bit-rot faster than the actual code does. The cost of the very large set of unit tests generated by TDD only grows over time without contributing to the actual feature set of the software being developed. For this reason TDD does in fact introduce a large overhead, in exchange for a potential gain in safety.

The controversy is introduced when people start to realize that the most bang for the buck comes from tests that exercise the public interfaces of the various parts of the software and that less and less benefit comes from the tests covering the internal parts of the software. Often the fine-grained tests covering the internal details cause the internals to ossify and increase the cost of changes, until finally they get abandoned.

TDD has its place, but taken too far it can do just as much harm as the lack of tests.


> The controversy is introduced when people start to realize that the most bang for the buck comes from tests that exercise the public interfaces of the various parts of the software and that less and less benefit comes from the tests covering the internal parts of the software.

That's not TDD being controversial; at most it's the details of how to apply TDD being controversial. (And really, it seems to be a "controversy" that amounts to saying "doing things that TDD never calls for is excessive", with no other side, which isn't a controversy, it's a non sequitur. If you are writing unit tests for internal details that aren't exposed, you are preventing refactoring rather than enabling it; testing all, and only, the required exposed behavior, so that you can refactor the implementation of that behavior without changing tests, is a central part of TDD. I suspect the problem here is bad metrics: some people who say they are doing TDD have adopted measurements that forget the entire point of TDD, such as insisting that every method be directly tested, which is clearly inappropriate.)


That's the controversy. I do TDD at work and I don't for my contributions. I know both quite well. Most projects have few regressions, and it is hard to quantify up front the time you will hypothetically save later on, especially if all you do is contribute and then move on to something else.

There are some projects where TDD is downright vital, but most of the time it is not.


You might be surprised to learn then that you shouldn't trust modern cars. Car manufacturers don't test every part or component that goes into a car. Instead they rely on process control to ensure that said process delivers parts within the specs. Of course this means they'll take regular samples, but the idea behind it is fundamentally different. They're really measuring whether the process is stable and not testing the individual parts.


You're talking about the manufacturing of cars, not design and prototyping. The reason that they can take only a representative sample of each component that rolls off the assembly line is that they should be nearly perfect copies of the prototype which most likely did go through rigorous testing. In my analogy, building software is like building the prototype, not building the copies.


That's a fair point, but it's also exactly why these comparisons of software engineering to other disciplines never quite work.


There are also a lot of desktop apps that are GPLv2 that don't have the "or a later version" phrase, which makes them incompatible with GPLv3.


I don't understand why Canonical felt the need to use the GPL for such an intimate part of the display stack.

I find it hard to believe that the license that Xorg is using has somehow placed it at a disadvantage. If anything, by choice of license, Canonical has ensured that many commercial organizations that contribute to Xorg today may not contribute to Mir.


I'm also confused, because if you now integrate your application with Mir/Unity Next, won't you be required to release it under the GPLv3?


No more than Linux apps have to be GPL.


From all that verbiage I was able to extract that Mir might screw up community supported Ubuntu-derived distributions. I can see how some people would like their road map explained.

I was hoping there would be a real comparison of Wayland and Mir development. It seems like they may be identical in effect, and both will enable using Android HAL implementations in mobile devices.

I had an occasion to ask a Linux kernel developer about this and got an answer along the lines of "Why ask? Only Ubuntu will use Mir."



