Hacker News

This isn't surprising.

Microsoft switched to this model a few months after Satya took over.

For the majority of Microsoft teams it worked really well and showed the kinds of results mentioned in this Yahoo article. Look at many of our iOS apps as an example.

But for some parts of the Windows OS team it apparently didn't work well (according to anonymous reports leaked to major news outlets by some Windows team folks), and they say it caused bugs.

First of all, I think that argument is semi-BS and a cover-up for those complainers' lack of competence in testing their own code, which makes them bad engineers; a good engineer knows how to design, implement, and test their product, imo. But I digress.

I in no way want to sound like a dk, but as an engineer it is your responsibility to practice test-driven development. That alone is not enough, though.

Like reading an essay, you usually can't catch all of your own bugs, so peer editing (or in this case, cross-testing) is very useful.

You should write the unit tests and integration tests for your feature,

BUT

there should always be an additional level of end-to-end tests for your feature, written by someone other than you.

Everyone should have a feature to design and implement well, including its unit tests and integration tests, BUT they should also be responsible for E2E tests for someone else's feature.

That way everyone has feature tasks and test tasks, and no one feels like they're only doing one thing or stuck in a dead-end career.
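A minimal sketch of the split described above: the feature's author writes the unit tests, and a teammate who hasn't seen the internals writes an end-to-end test against the public entry point. Every name here (slugify, publish_post, etc.) is hypothetical, invented purely for illustration.

```python
import re

def slugify(title: str) -> str:
    """Feature code: turn a post title into a URL slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return slug or "untitled"

# --- unit tests, written by the feature's author ---
def test_slugify_unit():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("   ") == "untitled"

# --- E2E test, written by a teammate who didn't see the implementation ---
def test_publish_end_to_end():
    posts = {}
    def publish_post(title):           # stand-in for the real publish pipeline
        url = slugify(title)
        posts[url] = title
        return url
    # Drives the feature the way a user would, via the public entry point,
    # so it isn't biased by knowledge of the internals.
    url = publish_post("Café du Monde!?")
    assert url in posts                # the post is reachable by its slug
    assert url == "caf-du-monde"       # non-ASCII gets dropped: a case the
                                       # author's own tests never considered
```

The point of the second test isn't redundancy; it's that a fresh pair of eyes probes inputs (here, non-ASCII titles) that the author's mental model already ruled out.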




> I in no way want to sound like a dk but as an engineer it is your responsibility to practice test driven development but that's not enough.

One of the things that puts me off about TDD is that it has always been treated a bit like a religion.

What matters is whether the tests exist, not when they were written. I'd even argue that writing a test first and then being constrained by that box is a bad idea. Write the most elegant code first, then write tests to cover all paths. You're likely to understand the problem better once the code is written.
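The test-after approach described above can be sketched like this: write the code first, then enumerate its paths and cover each one. The clamp function is hypothetical, chosen only because it has branches worth covering.

```python
def clamp(value: float, low: float, high: float) -> float:
    """Constrain value to the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

# Tests written after the fact, one per path through the code:
def test_below_range():
    assert clamp(-5, 0, 10) == 0       # value below the range

def test_within_range():
    assert clamp(5, 0, 10) == 5        # value passes through untouched

def test_above_range():
    assert clamp(50, 0, 10) == 10      # value above the range

def test_bad_range():
    try:
        clamp(1, 10, 0)                # inverted bounds
    except ValueError:
        pass
    else:
        assert False, "expected ValueError"
```

Whether these four tests exist matters far more than whether they were written before or after the four branches they cover.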


> What matters is whether the tests exist, not when they were written

Technically true, but for me at least: when I do TDD, I tend to write more, and better, tests. When I write tests after the code, especially on tight deadlines, substantially fewer tests get written, just lots of TODOs that never get done.


I don't believe it's 100% on the dev to find all of the problems. Once you've seen how the code works, you're less likely to find the unhappy cases, simply because you know in advance that doing X is stupid, so you don't think of it. But of course a user wouldn't know it's stupid, or they may be lazy or not as careful as they could be.

That's why you need a second person (ideally QA) to look at the result and test it. Cognitive bias 101.


Also, making sure you have a good automated build, test, and deploy structure is important.
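A toy sketch of that build/test/deploy structure: each stage is a gate, and a failure in any stage stops everything after it. The stage names and bodies are hypothetical placeholders for real build, test, and deploy steps.

```python
def run_pipeline(stages):
    """Run stages in order; stop and report the first failure."""
    for name, step in stages:
        try:
            step()
        except Exception as exc:
            return f"FAILED at {name}: {exc}"
    return "pipeline ok"

stages = [
    ("build",  lambda: None),     # compile / package
    ("test",   lambda: None),     # unit + integration + E2E suites
    ("deploy", lambda: None),     # ship only if everything before passed
]
print(run_pipeline(stages))       # -> pipeline ok
```

The little detail that matters here is fail-fast ordering: deploy is unreachable unless build and test both succeed, which is what keeps a no-dedicated-QA model honest.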

It's all the little details that will determine whether this system succeeds, not the overall "big idea".

The implementation of this system, and the team members' competence and willingness to adapt, is key imo. At least for this issue.


Maybe a big company like Microsoft can afford to do fewer QA tests, since they have people lining up to be beta testers for them (for free, too!).



