
Managers, not understanding the difference between latency (how long each task takes) and throughput (how much work is getting done in total), always try to optimize for latency. The predictable result: throughput goes to hell, and then latency goes with it.
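For what it's worth, the two quantities aren't independent: Little's Law (a standard queueing result, not something stated in the thread) says average work-in-progress equals throughput times average latency, L = λ·W. A minimal sketch, with made-up numbers:

```python
# Little's Law relates the two metrics the comment defines: the average
# number of tasks in flight (WIP) equals throughput times average latency.
# At fixed WIP, lower latency and higher throughput are the same thing;
# the disagreement is over which one you can act on directly.

def littles_law_wip(throughput_per_day: float, avg_latency_days: float) -> float:
    """Average work-in-progress implied by Little's Law: L = lambda * W."""
    return throughput_per_day * avg_latency_days

# A team finishing 2 tasks/day with a 5-day average cycle time carries
# an average of 10 tasks in flight at any moment.
print(littles_law_wip(2, 5))  # -> 10
```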

People who actually write software understand that you have to optimize for throughput first. Not to worry: latency won't be forgotten! A primary focus on throughput results in a clean codebase, which maximizes throughput and minimizes latency.

The rationale is that if you minimize latency, throughput must be maximal too, so optimizing for latency is enough.

In practice, latency is the go-to target when optimizing conventional processes: it's the metric most closely tied to risk. But software development is not that kind of process.

> The rationale is that if you minimize latency, throughput has to be maximum too

Exactly. And it's quite wrong even without taking the growth of complexity into account — as every engineer knows, or should know. Getting every task done as quickly as possible requires a lot of context switching, which is murder on throughput. When you add in the effects of complexity growth (aka technical debt, though I think "complexity growth" is clearer) the disadvantages of optimizing for latency become that much more serious. And the worst part is, as the disease progresses and latency deteriorates, managers try to cure it by applying even larger doses of the poison.
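The context-switching point can be made concrete with a toy model (my own illustration, with invented numbers, not something from the thread): run three equal tasks either one at a time or interleaved in small slices, paying a fixed cost on every switch.

```python
# Toy model: three tasks of 10 work units each, with a fixed cost paid
# every time we switch from one task to another. Compare running tasks
# to completion vs. interleaving them to keep every task "moving".

SWITCH_COST = 2

def sequential(tasks):
    """Run each task to completion; return each task's finish time."""
    clock, finishes = 0, []
    for work in tasks:
        clock += work
        finishes.append(clock)
    return finishes

def round_robin(tasks, slice_=1):
    """Interleave tasks in small slices, paying SWITCH_COST per switch."""
    remaining = list(tasks)
    finishes = [None] * len(tasks)
    clock, last = 0, None
    while any(r > 0 for r in remaining):
        for i in range(len(remaining)):
            if remaining[i] <= 0:
                continue
            if last is not None and last != i:
                clock += SWITCH_COST      # the cost of changing context
            step = min(slice_, remaining[i])
            clock += step
            remaining[i] -= step
            last = i
            if remaining[i] == 0:
                finishes[i] = clock
    return finishes

seq = sequential([10, 10, 10])
rr = round_robin([10, 10, 10])
print(seq, max(seq))  # sequential: all work done at t=30
print(rr, max(rr))    # interleaved: switching pushes completion well past 30
```

In this sketch the interleaved schedule finishes every task later than the sequential one does, so both throughput and average latency are worse, which is the mechanism the comment describes.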

This idea that managers optimize for latency, while I optimize for throughput, occurred to me only recently. But as I look back over the disagreements I've had with managers through the years (including disagreements over the usefulness of Scrum processes!), it's quite remarkable how many of them seem to come down to this.

There's pretty good theory behind the idea of minimizing latency to improve throughput. One of the better books on this is "Managing the Design Factory", and a follow-up called "The Principles of Product Development Flow".

These books are not about software development specifically, but about product development in general. The first one actually predates agile, having been published in 1997.

The core idea is that minimizing the size of tasks is the best way to improve productivity: not getting each task done as quickly as possible, but making each task smaller.
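One reason small batches help, in a toy model of my own (the numbers are invented, not from the books): if work arrives at a steady rate but is only released downstream in batches of size B, each item waits on average (B - 1)/2 arrival intervals just for its batch to fill, even though long-run throughput is identical for every batch size.

```python
# Sketch: work items arrive one per day and are released to development
# in batches of size B. Item k in a batch (k = 0..B-1) waits
# (B - 1 - k) arrival intervals for the batch to fill, so the mean
# queueing delay grows linearly with batch size while throughput
# (one item per day) stays the same.

def avg_batch_wait(batch_size: int, arrival_interval: float = 1.0) -> float:
    """Mean time an item waits for its batch to fill before release."""
    waits = [(batch_size - 1 - k) * arrival_interval
             for k in range(batch_size)]
    return sum(waits) / batch_size

for b in (1, 5, 20):
    print(b, avg_batch_wait(b))  # batch of 1 waits 0; waits grow with B
```

Shrinking the batch (task) size cuts this waiting directly, which is the mechanism behind "decrease the size" rather than "hurry each task".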

Remember that product development (new, innovative, full of uncertainty) versus product manufacturing (repeatable) is not a new problem, and not unique to the software industry. There's a lot to learn from product development in other industries.

Another interesting read is "The Toyota Product Development System: Integrating People, Process And Technology" which talks about ways of making product development predictive, and less risky.

It's interesting to see how little software is used to improve the process of software development itself. Other industries use a lot of software (CAD/CAM, visual modelling, testing, impact analysis) to improve the efficiency and quality of product development. Software for product development is a huge market.

It is basically correct, as the math does add up. As you yourself pointed out, if throughput isn't optimized, latency goes to hell. Thus minimized latency implies optimized throughput.

The problem is that, except at the bare minimum, latency is a bad proxy for development projects. So the idea is perfectly correct, yet useless.

There is no manager role in Scrum, so I'm not sure this has anything to do with Scrum.

