
> The Empire State Building was never planned and estimated in the traditional sense. They picked a deadline and used a flow-based approach to make it happen.

Industrial projects tend to have operability thresholds; that is, they're not composed of relatively homogeneous outputs. Each floor of a skyscraper is similar to the other floors. But a petrochemical process plant can't be built in slices; you have to do a minimum amount of design, planning and construction simply to get to the point of turning it on.

> There is no central control, no central design, no plan.

You may have heard of the Internet Engineering Task Force.

I agree that many systems have useful emergent properties born of simple rules; and where possible this should be used. Or rather, where such systems are found to already exist, they should be left largely alone.

However, sometimes solving a problem with interacting agents is more costly than simply picking the straight solution. If you can get an answer with a simple differential equation, then it's a waste of time to set up a particle swarm optimiser.
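To make the contrast concrete, here's a hypothetical toy sketch (my own example, not from the thread): finding the half-life of exponential decay. The ODE has a one-line closed-form answer, while a minimal particle swarm optimiser needs a pile of machinery to converge on the same number.

```python
import math
import random

# Problem: given exponential decay dy/dt = -k*y, find the half-life,
# i.e. the time t at which y(t) = y0 / 2.

K = 0.1

# Direct route: the ODE has the closed-form solution y(t) = y0*exp(-k*t),
# so the half-life is just ln(2)/k. One line.
t_analytic = math.log(2) / K

# Swarm route: a minimal particle swarm optimiser (toy implementation,
# not a library) minimising the squared error (exp(-k*t) - 0.5)^2 over t.
def pso(objective, lo, hi, n=30, iters=200, w=0.5, c1=1.5, c2=1.5, seed=42):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]  # particle positions
    vs = [0.0] * n                                # particle velocities
    pbest = xs[:]                                 # per-particle best position
    gbest = min(xs, key=objective)                # swarm-wide best position
    for _ in range(iters):
        for i in range(n):
            # Standard PSO velocity update: inertia + pull toward personal
            # best + pull toward global best, with random weighting.
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)
            if objective(xs[i]) < objective(pbest[i]):
                pbest[i] = xs[i]
                if objective(xs[i]) < objective(gbest):
                    gbest = xs[i]
    return gbest

t_swarm = pso(lambda t: (math.exp(-K * t) - 0.5) ** 2, 0.0, 50.0)

print(t_analytic)  # ~6.93
print(t_swarm)     # converges to the same value, with far more machinery
```

Both routes agree, but one is a line of algebra and the other is an iterative stochastic search with tuning parameters.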




> But a petrochemical process plant can't be built in slices; you have to do a minimum amount of design, planning and construction simply to get to the point of turning it on.

Poppendieck also gives examples of using similar processes for 3M's manufacturing plants. Regardless, I think drawing physical analogies for software is risky; software is infinitely soft.

> You may have heard of the Internet Engineering Task Force.

Yes, I have. And either you don't know what they do or you're drawing a false analogy between traditional planning processes and what the IETF does.


> Regardless, I think drawing physical analogies for software is risky; software is infinitely soft.

Sure, but I also think that this doesn't imply infinite intractability for actual problems. That a problem is NP-hard, for example, doesn't mean we can't find quite-good solutions to it that have business value.
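As a concrete illustration of that point (my own hypothetical example, not from the thread): 0/1 knapsack is NP-hard, yet the simple greedy-by-density heuristic, taking the better of the greedy fill and the single best item, is guaranteed to achieve at least half the optimal value.

```python
from itertools import combinations

def greedy_knapsack(items, capacity):
    """Greedy 1/2-approximation: fill by value density, then compare
    against the single most valuable item that fits on its own."""
    by_density = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    total_v = total_w = 0
    for v, w in by_density:
        if total_w + w <= capacity:
            total_v += v
            total_w += w
    best_single = max((v for v, w in items if w <= capacity), default=0)
    return max(total_v, best_single)

def optimal_knapsack(items, capacity):
    """Exponential brute force -- only viable for tiny instances."""
    best = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(w for _, w in combo) <= capacity:
                best = max(best, sum(v for v, _ in combo))
    return best

# items are (value, weight) pairs
items = [(60, 10), (100, 20), (120, 30), (80, 25), (30, 5)]
cap = 50
g = greedy_knapsack(items, cap)
opt = optimal_knapsack(items, cap)
print(g, opt)  # greedy is suboptimal here, but within a factor of 2
```

The greedy answer isn't optimal on this instance, but it's cheap, provably within a factor of two, and often more than good enough to have business value.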

> And either you don't know what they do or you're drawing a false analogy between traditional planning processes and what the IETF does.

You said that the Internet was not designed. The protocols didn't evolve without supervision. Every part of them was designed for a purpose.

The question here is whether you think I'm saying "complex systems with emergent properties can be estimated or planned". That's not what I'm saying. I'm saying that not all problems are complex and not all problem systems have emergent properties. Many problems are eminently suitable for estimation.

It does not follow, from the fact that estimation provides very little net business value in some cases, that we ought to do away with it in all cases.

Could you elaborate on the 3M example, or provide a link? I'd like to read more.


I did not say that the Internet was not designed. But I'm glad to say that now. Specifically, it was not designed in the sense that megaprojects requiring planning and estimation are designed. My point there is that people just assume that big achievements require big plans, but that is demonstrably false.

I do agree, as a long-time reader of RFCs, that some of the protocols were designed, sort of. But if you read more deeply, you'll see that they were just as much evolved. And they were never imposed through a central control structure. It's no accident that the foundational documents of the Internet are all Requests For Comments. An instructive contrast is the OSI protocol suite, a top-down alternative to the Internet. Now dead, of course.

I agree that some projects can be estimated. I'm saying a lot of them shouldn't be, because there are more effective ways to get results.

You can read more about that, and about the 3M example, in Mary Poppendieck's books. I think the specific one I have in mind is Leading Lean Software Development.


> Specifically, it was not designed in the sense that megaprojects requiring planning and estimation are designed.

Right. Some systems can't be designed that way. But that doesn't mean that no system can be designed. And it doesn't mean that all systems are better off being designed or not-designed.

The reason megaprojects tend to be planned, controlled and so on is that they haven't spontaneously emerged on their own. Somebody somewhere wishes to make a positive effort over and above the current baseline.

> I agree that some projects can be estimated. I'm saying a lot of them shouldn't be, because there are more effective ways to get results.

Agreed. A lot of the time a formal estimate isn't necessary. But a "lot of the time" is not the same as "always".

> An instructive contrast is the OSI protocol suite, a top-down alternative to the Internet. Now dead, of course.

There was a good history article on OSI in a recent IEEE or ACM magazine. The two major problems were an irreconcilable fight between circuit-oriented and packet-oriented designers (so they did both) and then lashings and lashings of vendor politics. The author of the piece argued that TCP/IP worked because it was driven by a small group of designers who just went ahead and cut code.

Generally the IETF model has worked because it's done by small groups focusing on a narrow problem in an environment of independent, interacting agents. Some systems work really well that way. Some don't.

> You can read more about that, and about the 3M example, in Mary Poppendieck's books. I think the specific one I have in mind is Leading Lean Software Development.

I'll pick it up, thanks for the reference.



