> idea that ideal software development is just a continuous production of small improvements to the code.

Many of us do believe this. Absolutely.

> It is highly discrete when it comes to output, because it takes time and experimentation to come up with the correct way to approach a problem, but if you do it right, you save yourself an incredible amount of time.

But on the flip side, if you get it wrong, you waste an incredible amount of time.

The smaller the steps you take, the smaller the risk you step in the wrong direction.

I can't really address the rest of your comment, because as far as I'm aware Scrum means something different everywhere. I have no idea, for example, why a Scrum team would not use formal estimation methods and would rely on intuition instead.

Agile project management should mean you can examine your process and improve it. If your intuitive estimates are not accurate, you should be able to suggest a different method and see if you get better estimates.




> Many of us do believe this. Absolutely.

I think it is empirically not true. If you look at your GitHub history, for example, sometimes you produce little change and sometimes a lot more. That corresponds to the fact (or at least a feeling) that some things are clear and can be done easily, while others require some thinking or experimentation before they can be done.

And it's even less true if you count only functional changes to the code (as opposed to refactorings, productivity improvements, or code reduction that preserves functionality). Those have to be mentally planned even more, and that requires even longer periods of apparent inactivity.

> The smaller the steps you take, the smaller the risk you step in the wrong direction.

I don't think that's how it works. Risks are risks, whether you're taking small steps or not. If you don't know whether your DB will break at 1000 users, it's a risk no matter how small the steps you take to introduce the DB into your code.

Personally, I prefer to attack the biggest known risks first (for example, build a prototype and measure it). But that flies in the face of other Scrum requirements, like getting some feature completely done in smaller chunks.


I am fine with the small improvements to code part, but I at least want an idea of where I am going. Small improvements to solve the problem? Sure. But I at least want an idea of what the entire problem is, not just the two weeks of problem shards I am given.


Absolutely agree: if you don't know where you are going, how can you possibly move towards it? At the end of every milestone it's super important to take a step back and ask whether you are on the right track.

If nobody can clearly tell you the end goal, your project management process is not the problem.


I have seen a project make three turns based on user feedback until it found its niche. Scrum did not deny our vision; it made it possible to roll out continuously and tighten the feedback loop. Something else is wrong in your case. In Scrum you should be able to voice your concerns and work with the Product Owner.


If the Product Owner is not providing you with the bigger picture, he is not doing his job. The Product Owner should make sure the end result is crystal clear at all times.


> The smaller the steps you take, the smaller the risk you step in the wrong direction.

This is the route to local maxima.


>> idea that ideal software development is just a continuous production of small improvements to the code.

>Many of us do believe this. Absolutely.

Isn't Jeffries' attempt to do Sudoku with TDD pretty much the prime example that it's not always so?

Jeffries tried to write a Sudoku solver by starting from tests and incremental improvement, and got nowhere. Norvig knew about constraint propagation and did it in one go.


I'm not familiar with the story, but it sounds like he started with a solution (the tests) and tried to find some code that would satisfy them. Perhaps Norvig started with the problem, then tried to find a solution.


As I understand it, TDD works like this: you state what your problem is, find the smallest thing that will get you closer, and then implement a test for it. The test will naturally fail, so you write code to make it pass, and finally you refactor. It's fundamentally iterative.
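
For concreteness, a single red-green-refactor cycle might look something like this (the function and test names are invented for illustration; this is not from Jeffries' actual attempt):

  # Red: write a failing test for the smallest next bit of behavior.
  def test_parse_grid_yields_81_cells():
      cells = parse_grid("." * 81)
      assert len(cells) == 81

  # Green: write just enough code to make the test pass.
  def parse_grid(text):
      cells = [c for c in text if c in "123456789."]
      if len(cells) != 81:
          raise ValueError("expected an 81-cell grid")
      return cells

  # Refactor: tidy up while the test stays green, then pick the next
  # smallest step (a new test) and repeat the cycle.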

But Jeffries didn't know how to get to the solution, i.e. what mechanism would get the computer to solve Sudokus automatically. So he wrote a few tests for the obvious things (like I/O and the data structure) and then got stuck.

In contrast, Norvig knew considerably more CS and so knew that the right way to implement a solver is to use constraint propagation. So he did so in about a screen of Python code, all at once, job done.
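
For reference, the core of that approach looks roughly like the following. This is a compressed sketch patterned on Norvig's essay, not his actual code: each square maps to a string of remaining candidate digits, and assigning a digit means eliminating all the others and propagating the consequences to the square's peers.

  # Constraint propagation plus search for Sudoku (sketch in the spirit
  # of Norvig's essay, not his exact code). A grid is a dict mapping
  # square names like 'A1' to strings of remaining candidate digits.

  def cross(rs, cs):
      return [r + c for r in rs for c in cs]

  digits  = '123456789'
  rows    = 'ABCDEFGHI'
  squares = cross(rows, digits)
  unitlist = ([cross(rows, c) for c in digits] +               # columns
              [cross(r, digits) for r in rows] +               # rows
              [cross(rs, cs) for rs in ('ABC', 'DEF', 'GHI')
                             for cs in ('123', '456', '789')]) # boxes
  units = {s: [u for u in unitlist if s in u] for s in squares}
  peers = {s: set(sum(units[s], [])) - {s} for s in squares}

  def assign(values, s, d):
      # Fix square s to digit d by eliminating every other candidate.
      if all(eliminate(values, s, d2) for d2 in values[s] if d2 != d):
          return values
      return False

  def eliminate(values, s, d):
      # Remove candidate d from square s and propagate the consequences.
      if d not in values[s]:
          return values
      values[s] = values[s].replace(d, '')
      if not values[s]:
          return False                  # contradiction: no candidates left
      if len(values[s]) == 1:           # s is solved, so its digit
          last = values[s]              # cannot appear in any peer
          if not all(eliminate(values, p, last) for p in peers[s]):
              return False
      for u in units[s]:                # if d fits in only one square
          places = [s2 for s2 in u if d in values[s2]]
          if not places:
              return False
          if len(places) == 1 and not assign(values, places[0], d):
              return False
      return values

  def search(values):
      # Depth-first search, guessing in the square with the fewest candidates.
      if values is False:
          return False
      if all(len(values[s]) == 1 for s in squares):
          return values                 # solved
      _, s = min((len(values[s]), s) for s in squares if len(values[s]) > 1)
      for d in values[s]:
          result = search(assign(dict(values), s, d))
          if result:
              return result
      return False

  def solve(grid):
      # grid: an 81-character string, digits for givens and '.' for blanks.
      values = {s: digits for s in squares}
      for s, d in zip(squares, grid):
          if d in digits and not assign(values, s, d):
              return False
      return search(values)

The point this makes concrete is that the whole mechanism (candidate strings, peers, the two propagation rules) has to be conceived before any of it works; it's hard to see how a sequence of small test-driven steps would arrive at it.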

The lesson of the story, as I read it, is that you can't iterate yourself out of a problem if you don't know how to attack it. If you're making a symbolic calculus program to solve integrals, it's unlikely that you can start off by going "okay, the integral of x is 1/2 x^2..." and somehow, through a series of small steps, end up at the Risch algorithm.

Similar arguments could probably be made about design. The less CRUD-y your application is, the more careful thinking is required. You have to know enough about the domain to know where the pitfalls are, and what the proper tools are.



