Not sure why you were down-voted, but this is a good point. In fact, it is the fundamental problem of all software methodologies. Planning depends on knowing how long an activity will take, and estimating is most reliable when you are estimating something you've already done before. Unless you're working in just such a low-novelty environment, your results will not be predictable, and you should treat the backlog as a priority-ordered list and use velocity to give a "forecast" rather than a commitment.
Kanban doesn't "fix" this -- it just changes the frame through which deadline-buzzards get to look at it.
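To make the "velocity gives you a forecast, not a promise" point concrete, here's a toy sketch (all numbers made up) of the naive velocity-based forecast teams actually compute. Note how much the answer swings with the noise in the history:

```python
# Hypothetical numbers: a naive velocity-based forecast.
# Velocity is just the mean of recent sprint throughputs,
# so the "forecast" inherits all the noise in that history.
recent_velocities = [21, 13, 34, 18]  # points completed per sprint (made up)
backlog_points = 120                  # priority-ordered backlog total

velocity = sum(recent_velocities) / len(recent_velocities)  # 21.5
sprints_remaining = backlog_points / velocity               # ~5.6 sprints

# A range is more honest than a point estimate:
optimistic = backlog_points / max(recent_velocities)   # ~3.5 sprints
pessimistic = backlog_points / min(recent_velocities)  # ~9.2 sprints
print(f"forecast: {optimistic:.1f}-{pessimistic:.1f} sprints (mean {sprints_remaining:.1f})")
```

A 3.5-to-9.2-sprint spread from four data points is exactly why "forecast" belongs in quotes.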
I would say, if you absolutely need an estimate on how long a project will take (and often you don't), do a proper study on it, don't just pull a number out of your a*. Software project estimation is doable, but it takes some effort. It (IMHO) cannot be replaced by a clairvoyance seance, AKA planning poker.
I’ve said this previously but I don’t agree that software estimating is possible in theory. It’s a form of the halting problem. You can’t, even in theory, know how long something will take until you do it.
It’s certainly possible to estimate to a rough-order-of-magnitude but the problem is that the ROM quickly becomes a target, and then a commitment.
I have a couple of positions that I take.
First, the old adage still applies: “better, faster, cheaper - choose two”. If you set a deadline then you have to be willing to drop features or quality, because as Brooks demonstrated, adding people to a late project only makes it later.
Second, if you do need to set a deadline then prioritise it, stick to it and celebrate it. Too many times deadlines are used as a whip, and my team and I have wiped ourselves out to meet one, only to be told it wasn’t really needed. That is insanely demoralising and strongly affects productivity in the long term.
My personal (and unpopular) preference is to prioritise completion of stand-alone features rather than building to deadlines, which means you can release a product at almost any time with fewer, but better-quality, features. This approach has a demonstrable positive impact on ROI, but because it’s more complex and provides less short-term certainty, it’s frowned upon.
I think in pure compsci theory you might be right, but in practice (industrial software production) we usually limit ourselves to algorithmic solutions that don't run into the halting problem, so estimation is possible. For example, an upper limit for a rewrite of a system is how long it took to create the original system (provided you understand the tradeoffs and issues of the original architecture).
I don’t understand. The halting problem says there is no general procedure for deciding whether an arbitrary program halts - you can’t be sure a program will run to completion at all, let alone in a specific time. It applies to any non-trivial program.
And it’s well established that rewrites are dangerous. The second-system effect suggests that a rewrite will tend to take much longer than planned.
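For anyone who hasn't seen it, the halting-problem argument is a short diagonalization, sketched here in Python. The `halts` oracle is hypothetical; the whole point is that no such function can exist:

```python
# Classic diagonalization sketch: suppose halts(f, x) could decide
# whether f(x) terminates. Then paradox() contradicts the oracle.
def halts(func, arg):
    """Hypothetical oracle -- cannot actually be implemented."""
    raise NotImplementedError

def paradox(func):
    if halts(func, func):   # if the oracle says "halts"...
        while True:         # ...loop forever,
            pass
    return "halted"         # otherwise halt immediately.

# halts(paradox, paradox) can be neither True nor False:
# either answer makes paradox(paradox) do the opposite.
```

Of course, as the parent says, most industrial code deliberately stays in territory where termination is obvious; the undecidability bites on arbitrary programs, not on every program.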
My point was that OS dispatchers don't spend time working out how long things will take to compute in order to allocate resources (such as CPU time) into the future; instead, they just put the work into a dispatch queue and go from there. Whether it's a novel thing or a million-times-repeated thing doesn't really matter.
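That dispatcher analogy can be sketched as a toy round-robin loop (task names and work units here are made up; real schedulers are far more elaborate, but the point stands: no per-task estimate is ever computed):

```python
# Toy round-robin dispatcher: no per-task estimates, just a FIFO
# queue and a fixed time slice. `tasks` maps a (hypothetical) task
# name to its remaining work units.
from collections import deque

def round_robin(tasks, quantum=2):
    """Run tasks from a queue; return completion order."""
    queue = deque(tasks.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= quantum                 # give it one time slice
        if remaining > 0:
            queue.append((name, remaining))  # not done: back of the queue
        else:
            finished.append(name)
    return finished

print(round_robin({"novel": 5, "routine": 1, "big": 7}))
# -> ['routine', 'novel', 'big']
```

The scheduler never asks "how long will `novel` take?"; short work simply falls out of the queue sooner.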
BTW, one indication that Scrum is bad is that nobody is doing its analogue (syncing everybody once a day?) in computing. Sane project management practices should be universal, and should also work for operating system schedulers.
Kanban was originally designed for production lines in car factories, so it works. If you go to any McDonalds and look at the screens in the back you'll see orders represented a lot like tickets.
In Agile, "points" are just an abstraction over work that tries to normalize it, in the same way that every burger-and-fries combo should be roughly equivalent.
CPUs can only execute what the instructions tell them to do (like a production line); they can't solve or create something new.