
Frequently Forgotten Fundamental Facts about Software Engineering (2001) [pdf] - fauria
http://www.kictanet.or.ke/wp-content/uploads/2012/08/Forgotten-Fundamentals-IEEE-Software-May2001.pdf
======
serve_yay
When you say things like "I consider these to be facts" and "you may not agree
with these facts", it's a clue that we may not in fact be dealing with facts.

~~~
bhntr3
The lack of citations is concerning. I could read his book but it worries me
when numbers like these are thrown around without some way to verify the
analysis.

The "28x productivity" (or 10x or Xx ...) claim always triggers warning bells
for me.
[http://morendil.github.io/folklore.html](http://morendil.github.io/folklore.html)
does a pretty good job explaining how the research backing that widely
accepted "fact" may be questionable.

I also really like Dan Luu's review of the research behind static typing at
[http://danluu.com/empirical-pl/](http://danluu.com/empirical-pl/) as an
example of how a "well studied" claim can still be questionable due to the
massive difficulty in evaluating software engineering empirically.

Software engineering is a very human activity and that makes it very hard to
measure and quantify.

A book that does a better job is Making Software
([http://www.amazon.com/Making-Software-ebook/dp/B004D4YI6G/ref=la_B001H6PNGM_1_3?ie=UTF8&qid=1370219697&sr=1-3](http://www.amazon.com/Making-Software-ebook/dp/B004D4YI6G/ref=la_B001H6PNGM_1_3?ie=UTF8&qid=1370219697&sr=1-3)) but
as my first link points out, it still has some issues. At least its goal is to
get more rigorous in our scientific analysis of software engineering.

~~~
jobu
"The Leprechauns of Software Engineering" is a fun read that takes a look into
the origins of some of the software engineering folklore:
[https://leanpub.com/leprechauns/read](https://leanpub.com/leprechauns/read)

~~~
bhntr3
Thanks for the reference! It IS a fun read and informative.

------
rednab
The article author, Robert Glass, took these from his book, "Facts and
Fallacies of Software Engineering" (ISBN 978-0321117427), which is definitely
worth picking up, or at the very least worth skimming the table of contents¹.

¹ For example, here: [http://blog.codinghorror.com/revisiting-the-facts-and-fallacies-of-software-engineering/](http://blog.codinghorror.com/revisiting-the-facts-and-fallacies-of-software-engineering/)

~~~
amelius
> One of the two most common causes of runaway projects is unstable
> requirements. Requirements errors are the most expensive to fix during
> production. Missing requirements are the hardest requirements errors to
> correct.

Hmm, doesn't this conflict with the idea of "rapid prototyping", where new
requirements are thrown in whenever necessary?

~~~
ssmoot
The way I've understood it: Prototyping may be an effort to develop
Requirements. It should not be used to develop a Product.

~~~
amelius
How can I explain this to my boss? :)

~~~
dakotasmith
Instead of explaining it, there can be steps taken to ensure a transition
between prototype and product, such as prototyping in a language or for a
stack that your company does not use in production.

~~~
willismichael
Alternatively, if the company still forces you to push the prototype to
production, this can be a great way of sneaking in the technology stack that
you wanted to use in the first place.

------
phkahler
I always say "people don't have requirements, they have problems they want
solved". IMO too many people in software get hung up thinking "requirements"
means some kind of detailed design document. You're gonna wait a long time for
someone with a problem to tell you how to solve it...

~~~
mathattack
This is a prime example of where the writer is stuck in a pre-Agile world.
(This was 15 years ago, and most of his hands-on work was well before then.)
If one follows and believes in the Waterfall method, tightening requirements
is the most important thing for ensuring that the original goals are met.
software engineering has realized that changing requirements are a reality,
and forcing sign-offs doesn't help as much as a flexible process.

~~~
StillBored
This was the agile party line, but nothing in agile development helps with the
core problem of changing requirements. If you design/code a solution to one
set of problems using an underlying set of assumptions, and someone shows up
and changes those assumptions, it doesn't matter whether you're using an agile
process or a heavyweight waterfall process. The only difference is going to be
in the amount of time it takes to iterate the design and tear the existing
product apart and rebuild it with the new set of assumptions.

The real savings with "agile" methodologies is the understanding that the code
is the documentation. This doesn't free you from having requirements or design
documents, it just allows you to spend less time on that part of the process.
For any sufficiently complex project not having block diagrams of how
everything fits together, and basic documentation of subsystem interfaces just
means you waste a ton of time reading the detailed implementation before you
can understand how the system works.

In other words, it's the same problem you have with heavyweight processes. If
you have to read 500 pages of design documents to understand how to integrate
your routine, that is the same as having to read 50,000 lines of code to
understand how to integrate a piece of code.

~~~
mathattack
I think we're going in the same direction. The investment is less before you
have to change. If you spend 6 months gathering requirements that are obsolete
(or wrong for unanticipated reasons) before they're finished, you've lost 6
months. Working in an iterative process ("Is this what you want?", "No, how
about this?", "Or this?") reflects the reality that requirements change
sometimes for external reasons, and sometimes because people don't know what
they want.

Agile also shouldn't be an excuse not to document. My (perhaps not fully
informed) view is that it's more about iteration.

------
aidos
A lot of this mirrors what I've seen in the real world.

 _"...except for the additional maintenance task of “understanding the
existing product.” This task is the dominant maintenance activity, consuming
roughly 30 percent of maintenance time."_

I've definitely lost a large chunk of my programming life trying to understand
unclear code in large systems!

Edit: removed ambiguous quantification

~~~
kps
_Always code as if the guy who ends up maintaining your code will be a violent
psychopath who knows where you live._ — John F Woods

~~~
redwards510
The T-shirt to wear on code review days.

[http://www.redbubble.com/people/ramiro/works/13306366-im-a-violent-psychopath-white-shirt-with-jules?grid_pos=1&p=t-shirt](http://www.redbubble.com/people/ramiro/works/13306366-im-a-violent-psychopath-white-shirt-with-jules?grid_pos=1&p=t-shirt)

~~~
thyrsus
Someone seems to have told my colleagues that the violent psychopath only
shows up during code reviews - so we never do them.

)-;

------
struppi
Worth reading. I especially love the section about estimates. I have seen all
those problems at my clients in real projects. This is what got me interested
in #NoEstimates in the first place
([http://devteams.at/how_i_got_interested_in_noestimates](http://devteams.at/how_i_got_interested_in_noestimates))...

All the other sections of the essay are interesting too, and most are still
very relevant - 14 years later!

~~~
McUsr
I agree, totally worth reading, clear, concise, and probably very accurate. I
liked the section about efficiency best. "Efficiency is more often a matter of
design than of good coding".

Thanks for posting this.

------
mathattack
I like this one - and remember it every time I try to pay up for a good
programmer. One also has to wonder how much it's that poor programmers are
overpaid.

 _P2. Good programmers are up to 30 times better than mediocre programmers,
according to “individual differences” research. Given that their pay is never
commensurate, they are the biggest bargains in the software field._

Much of the list stands the test of time, though it's very clear that his
thinking predates Agile. Most old school software engineering experts talk of
the need to tighten up unstable design. Modern thinking is to create processes
that better react to the instability.

~~~
collyw
Can you judge how good a programmer is from an hour or two's worth of
interview?

~~~
mathattack
It's hard to differentiate Just Good Enough from Almost Good Enough in a 10
week internship. :-)

The 30x ones usually have a professional reputation that precedes them - you
know before the interview starts, and spend the hour or two selling them. They
don't send resumes out. You either have to build people into them, or go
hunting once you hear that their companies are struggling. (Or if you hear
they are being mistreated)

------
mabbo
> "REU3. Disagreement exists about why reuse-in-the-large is unsolved,
> although most agree that it is a management, not technology, problem (will,
> not skill). (Others say that finding sufficiently common subproblems across
> programming tasks is difficult. This would make reuse-in-the-large a problem
> inherent in the nature of software and the problems it solves, and thus
> relatively unsolvable)."

Of all the topics he talks about, I think this is the one that has changed
the most in 14 years. Open source libraries, GitHub, etc., have made
reuse-in-the-large so much easier, and it's so much more common now.

~~~
jayvanguard
I interpret it differently. Layering and libraries were solved ages ago
(although we have many, many more layers now). This isn't what people were
striving for when they talked about reuse in-the-large. It was more about
domain models and objects like having a common Customer entity or reusing an
insurance domain object model across different customers.

Horizontal re-use has always been possible but vertical reuse is a pipe dream.

------
sebcat
Why is it that these lessons remain forgotten? I find them to be accurate. I
can't help but think that we are all doomed to repeat the same mistakes over
and over again.

~~~
davidgerard
Because with our new Silver-Bullet Development Methodology(tm), it's a
completely new world where none of your decades of experience apply! This
_completely changes_ the nature of software development _forever_!

p.s.: we sell certifications!

(repeat for a new Silver Bullet roughly each decade)

------
jroseattle
Those aren't forgotten facts so much as they're often "not-learned-yet" facts.
Many points of varying validity, but nearly all can only be learned from
experience.

One item on the list I believe would change: REU2, reuse-in-the-large. With so
many new services available, the number of addressable "common" use cases has
become very granular. So, reuse-in-the-large takes on a new definition for me.

~~~
thyrsus
Agreed. It seems to me "frameworks" usually qualify as "in the large".

------
jayvanguard
What a stellar list. The reuse part in particular needs to be read by many
people. As he points out, reuse in-the-large will never happen for very good
people reasons, not technology reasons.

> REU5. Pattern reuse is one solution to the problems inherent in code reuse.

Patterns are just Reuse in-the-small anyways.

------
MikhailEdoshin
I like the title. I think it should be an abbreviation of its own, like "FAQ":
FFFF, fee-four. Like "Let's write a FFFF on memory management" or "Where can I
read a FFFF on web app security?"

------
geoelectric
_Q1. Quality is a collection of attributes. Various people define those
attributes differently, but a commonly accepted collection is portability,
reliability, efficiency, human engineering, testability, understandability,
and modifiability.

Q2. Quality is not the same as satisfying users, meeting requirements, or
meeting cost and schedule targets. However, all these things have an
interesting relationship: User satisfaction = quality product + meets
requirements + delivered when needed + appropriate cost._

As someone who specializes in quality, I'd say either he was wrong, or the
practical definition has shifted.

What he calls "user satisfaction" is what I would equate with "product
quality," at least in terms of what we mean when we aim for a particular
quality bar before releasing. The various factors he lists as comprising
quality are aspects of satisfaction that may or may not apply to a given
audience of users, but the absence of one or more does not necessarily
indicate low quality unless it's relevant to the user.

Compare Twitter when it started with Twitter now, for example. When it
started, it was a relatively unreliable product, but still high quality for
its set of users--at the very least, it was high enough quality that spending
time and money to raise it may not have been a good idea. It was successful as
it stood. Now the set of users and their expectations have shifted, both
because the field's bar has risen in general and because it has enterprise
use, so reliability is a much bigger deal. It's still high quality, but for
different reasons.

Some of those things (portability, testability, modifiability) are simply
conflating code quality with product quality, unless what you're developing is
a code component. Even efficiency is meaningless to quality unless
inefficiency causes the -customer- to bear more load. Maybe he meant to
conflate those things, but I don't think they belong together. They're rather
orthogonal: vim is a high product-quality app with (supposedly, haven't read
it) pretty awful code quality. Conversely, I've seen plenty of pretty
codebases that produced crappy apps.

And meeting requirements, delivering at an appropriate time (which is another
form of meeting a requirement), and at an appropriate cost (yet another form
of meeting a requirement) - these are all absolutely important aspects of
quality. The scale of quality a free-as-in-beer product is measured on will
always be different from the one a paid product is measured on.

Basically, he's taking a very absolute approach to quality, rather than
considering context. There's really no such thing as a "high quality product"
in the absolute. It's all relative to the intended audience and use.

I will agree, though, that quality goes way beyond sheer absence of objective
software defects. It's a shame that most of the industry tries to define it
that way.

