I think this hits the most interesting point. Personally I agree with the article that the best way to develop efficiently is to continually take small steps in the right direction and reevaluate where you are after each one - you save a lot of time by not planning things you don't need that way and it's much easier to maintain focus. But against that, people need estimates for how long projects will take and how much they will cost, and that demands a plan.
"you save a lot of time by not planning things you don't need that way"
Yes, but on the other hand you will lose time if you do not plan the things that you _can_ plan ahead of time. It's much cheaper to catch ill-defined requirements while planning on paper than after the constraints reach production and people have started implementing features and tests based on them. Although you cannot plan _everything_, there is always _something_ you can plan, and should plan. Unless you have a spec or a plan at day 0, you have no documentation platform available when the shit invariably does hit the fan and you need to do the learning-organization disco and facilitate a change to your product because some real-world constraints were not known or acknowledged beforehand.
I am not sure I agree with that first point, having been involved in a short project that became one monstrous exercise in feature creep. It was essentially taking small steps in the right direction, but it never finished.
I think it is beneficial to have a plan in certain situations.
There's clearly a middle position to be had here. If you say NO whenever you're not completely confident you can do something, you miss out on the best learning opportunities. That said, if you suspect something cannot work, or isn't the right way of doing it for some other, possibly non-technical reason, being prepared to say NO is critical for delivering.
So possibly: If you always say NO, you never learn anything. If you always say YES, you never deliver anything.
It's also important to put either answer in the context of how it will affect the time, cost, and quality triangle of program management. The customer loves to hear 'yes', but sometimes doesn't realize that that 'yes' typically means more time and/or money.
The animation here irritates me. If you change the date by five years, the graph points should scroll vertically (and adjust horizontally as necessary); otherwise it just makes it harder to see how the pyramid changes over time.
It's indeed not perfect on this front. Furthermore, a real statistician pointed out that a continuous line for the pyramid was not the right choice, but I was looking more for interesting visuals and ways to spark people's interest. That's why I used these animations; they give people the feeling of seeing what's happening.
Unfortunately, the way I animate this graph is very basic and I did not take the time to make it perfect (the data crunching, SEO optimization, and UI fine-tuning already took way more time than I wanted to put into this). This is just a hobby project for me.
That's useful to know. Is that daily data? Have you found it to be reliable? When I started testing moving-average based strategies, my blog post about it ended up being largely about data issues, from both Google and Yahoo finance: http://grahamstratton.org/straightornamental/entries/movinga...
An extreme example was an opening value off by a factor of 100 on one day.
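A quick sanity pass over downloaded quotes can catch that kind of glitch before it poisons a backtest. A minimal sketch, assuming daily data as (date, open, close) tuples (the function name, data, and threshold here are hypothetical, not from any particular data source):

```python
# Flag days whose open differs from the previous close by an implausible
# ratio -- e.g. a factor-of-100 error from a misplaced decimal point.
def flag_suspicious_opens(days, max_ratio=5.0):
    """days: list of (date, open, close) tuples, oldest first.
    Returns a list of (date, open, prev_close) for suspect days."""
    flagged = []
    prev_close = None
    for date, open_, close in days:
        if prev_close is not None:
            ratio = open_ / prev_close
            if ratio > max_ratio or ratio < 1.0 / max_ratio:
                flagged.append((date, open_, prev_close))
        prev_close = close
    return flagged

# Example with a deliberately corrupted row:
days = [
    ("2023-01-02", 100.0, 101.0),
    ("2023-01-03", 101.5, 102.0),
    ("2023-01-04", 10200.0, 103.0),  # open off by a factor of 100
    ("2023-01-05", 103.5, 104.0),
]
print(flag_suspicious_opens(days))  # -> [('2023-01-04', 10200.0, 102.0)]
```

A gap-up of 5x in one day is almost always bad data rather than a real move, so a loose threshold like this catches decimal-point errors without flagging ordinary volatility.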
I think this might be an advantage of pair programming that isn't generally talked about.
The time to find a bug with a debugger has a much tighter distribution than the time to find one by thinking about the code. There are some problems that you just can't get to the bottom of by thinking.
By having one person take each path, you get the advantages of both. It might seem that a single programmer should be able to achieve the same by switching between the two approaches, but the cost of context switching is so great that it will be more like starting again each time. So it's really hard to know when to stop thinking about it and to get the debugger out.
The advice that once you find the bug you should work out what the problem really is definitely still holds, though.