Many of us do believe this. Absolutely.
>It is highly discrete when it comes to output, because it takes time and experimentation to come up with the correct way to approach a problem, but if you do it right, you save yourself an incredible amount of time.
But on the flip side, if you get it wrong you waste an incredible amount of time.
The smaller the steps you take, the smaller the risk you step in the wrong direction.
I can't really comment on the rest of your comment because, as far as I can tell, Scrum means something different everywhere. I have no idea, for example, why a Scrum team would not use formal estimation methods and would rely on intuition instead.
Agile project management should mean you can examine your process and improve it. If your intuitive estimates are not accurate, you should be able to suggest a different method and see if you get better estimates.
I think that's empirically not true. Look at your GitHub history, for example: sometimes you produce little change and sometimes a lot more. That matches the fact (or at least the feeling) that some things are clear and easily done, while others require thinking or experimentation before they can be done.
And it's even less true if you count only functional changes to the code (as opposed to refactorings, productivity improvements, or code reduction that preserves functionality). Those have to be mentally planned even more carefully, which means even longer periods of apparent inactivity.
> The smaller the steps you take, the smaller the risk you step in the wrong direction.
I don't think that's how it works. Risks are risks, whether you're taking small steps or not. If you don't know whether your DB will break at 1,000 users, that's a risk no matter how small the steps you take to introduce the DB into your code.
Personally, I prefer to attack the biggest known risks first (for example, build a prototype and measure it). But that flies in the face of other Scrum requirements, like getting a feature completely done in small chunks.
If nobody can clearly tell you the end goal, your project management process is not the problem.
This is the route to local maxima.
>Many of us do believe this. Absolutely.
Isn't Jeffries' attempt to solve Sudoku with TDD pretty much the prime example that it's not always so?
Jeffries tried to write a Sudoku solver by starting from tests and incremental improvement, and got nowhere. Norvig knew about constraint propagation and did it in one go.
But Jeffries didn't know how to get to the solution, i.e. which mechanism would let the computer solve Sudokus automatically. So he wrote a few tests for the obvious things (like I/O and the data structure) and then got stuck.
In contrast, Norvig knew considerably more CS and so knew that the right way to implement a solver is to use constraint propagation. So he did so in about a screen of Python code, all at once, job done.
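Norvig's actual program isn't reproduced here, but the idea can be sketched. A minimal, hypothetical constraint-propagation solver (not Norvig's code): each cell holds a set of candidate digits, assigning a digit eliminates it from the cell's peers, eliminations that leave a single candidate cascade, and a small backtracking search handles whatever propagation alone can't finish.

```python
from itertools import product

ROWS, COLS = range(9), range(9)

def peers(r, c):
    """Cells sharing a row, column, or 3x3 box with (r, c)."""
    box = {(br, bc)
           for br in range(r // 3 * 3, r // 3 * 3 + 3)
           for bc in range(c // 3 * 3, c // 3 * 3 + 3)}
    line = {(r, j) for j in COLS} | {(i, c) for i in ROWS}
    return (box | line) - {(r, c)}

def assign(grid, r, c, d):
    """Fix cell (r, c) to digit d and propagate; False on contradiction."""
    if d not in grid[r, c]:
        return False
    grid[r, c] = {d}
    for pr, pc in peers(r, c):
        if d in grid[pr, pc]:
            grid[pr, pc] = grid[pr, pc] - {d}
            if not grid[pr, pc]:
                return False              # a peer ran out of candidates
            if len(grid[pr, pc]) == 1:    # naked single: cascade further
                if not assign(grid, pr, pc, next(iter(grid[pr, pc]))):
                    return False
    return True

def parse(s):
    """81-char puzzle string ('.' or '0' for blanks) -> candidate grid."""
    grid = {rc: set("123456789") for rc in product(ROWS, COLS)}
    for rc, ch in zip(product(ROWS, COLS), s):
        if ch in "123456789" and not assign(grid, *rc, ch):
            raise ValueError("contradictory givens")
    return grid

def solve(grid):
    """Search: branch on the unsolved cell with the fewest candidates."""
    unsolved = [(len(v), rc) for rc, v in grid.items() if len(v) > 1]
    if not unsolved:
        return grid
    _, (r, c) = min(unsolved)
    for d in sorted(grid[r, c]):
        trial = {rc: set(v) for rc, v in grid.items()}
        if assign(trial, r, c, d):
            solution = solve(trial)
            if solution:
                return solution
    return None
```

The point of the example is that the structure of the solution (candidate sets, elimination, minimum-remaining-values search) has to be known up front; no sequence of small test-driven steps over I/O code leads you to it.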
The lesson of the story, as I read it, is that you can't iterate yourself out of a problem if you don't know how to attack it. If you're making a symbolic calculus program to solve integrals, it's unlikely that you can start off by going "okay, the integral of x is 1/2 x^2..." and somehow, through a series of small steps, end up at the Risch algorithm.
Similar arguments could probably be made about design. The less CRUD-y your application is, the more careful thinking is required. You have to know enough about the domain to know where the pitfalls are, and what the proper tools are.