
A New Software Engineering - henrik_w
http://queue.acm.org/detail.cfm?id=2693160
======
thibauts
Another silver bullet in the making.

I don't mean to be rude, but to me it's pure BS. A grand plan, a conclusion
talking about paradigm shifts, and no substance at all. Seeing the method co-
signed by Robert Martin even made me chuckle a little. I guess we're in for a
second round.

This is clearly meant for management (again) and doesn't give a hint of a clue
as to what the _practice_ of software engineering actually is.

Won't they ever understand (I use _they_ on purpose, as they're clearly not
_us_ ) that _software engineering is the process of discovering and
documenting in a formal language the methods to be applied to solve a problem
defined in an ambiguous natural language with the use of imprecise concepts_?
What we really do is assess the validity and feasibility of what a user
wants, and along the way refit concepts and articulate them in a way that can
actually work or formally make sense. Building software is the act of
discovering that the implicit things we take for granted actually are not.
This is a discovery process. This precludes precise estimates or estimates at
all, which is very difficult to finally accept and live with, I concur. In
most cases it has to be treated, at least in large part, as a _research
endeavour_.

Some day we'll have to take these things at face value and learn to live with
them.

~~~
greenyoda
Your comment is actually one of the best formulations of software engineering
I've ever read. However, I'd disagree, at least to some extent, with your
conclusion:

 _" This precludes precise estimates or estimates at all..."_

While we software developers do occasionally solve novel problems that can't
be estimated since the methods for implementing them are not currently known
(i.e., research problems), most of the problems we work on are variants of
problems we've already solved, and our experience - especially if we record it
and make an effort to learn from it - can be used to estimate the scope and
complexity of a project (including the "discovery process").

Sure, sometimes our estimates will be way off, but an estimate that's within a
factor of two or five of the actual cost of a project is more useful than no
estimate at all.

The people who are paying us (management, customers, investors, etc.) will
eventually want to know what they're going to get for their money and when
they're going to get it - and sometimes those questions need to be answered
before they provide any funding at all, since a solution might be completely
useless to a customer if it's not available by a certain date.

~~~
thibauts
I should have worded it _"This precludes precise estimates and sometimes
estimates at all"_, as of course, depending on your knowledge of the problem
domain and your field experience, you can gain some degree of confidence.

------
Animats
Software engineering was taken more seriously 20 years ago than it is now.
There have been some notable successes of rigorous development, but they're
not well known. Here are two in wide use.

The first is the operating system kernel in the air link processor of mobile
phones. In most current phones, that's an L4 kernel with a full proof of
correctness. Since any mobile phone can potentially knock out all phones for
some distance around if it doesn't follow the sharing rules for the air link,
this is important to carriers. They got it right. Nobody talks about this
much, but if that layer had problems, there would be regular cellular
blackouts.

The second is the Windows Static Driver Verifier. This has been used since
Windows 7. It verifies that kernel drivers don't crash, clobber memory or call
the driver APIs incorrectly. Before the Static Driver Verifier, drivers
accounted for more than half of Windows crashes. Now, crashes from signed
drivers are very rare, and usually involve getting the driven device itself to
do bad DMA operations. (IOMMUs are coming along to stop that.)

This shows the right direction for software engineering. Some software really
matters, and has to be engineered properly. Most software doesn't matter all
that much. Engineered systems should separate the two, develop them in
different ways, assume the low-grade stuff will crash, and architect systems
so the low-grade stuff can only do limited damage. We're seeing architectures
like that in the mobile world and in server-side systems. In the mobile world,
"apps" run in relatively contained environments. In the server world, things
seem to be moving towards containerized "apps" in systems like Docker, running
on some minimal glue layer inside a container running on a secure microkernel
such as Xen and talking via message passing.

That's software engineering.

~~~
antimagic
"The first is the operating system kernel in the air link processor of mobile
phones. In most current phones, that's an L4 kernel with a full proof of
correctness. Since any mobile phone can potentially knock out all phones for
some distance around if it doesn't follow the sharing rules for the air link,
this is important to carriers. They got it right. Nobody talks about this
much, but if that layer had problems, there would be regular cellular
blackouts."

That's... an interesting claim. You're basically saying that a cellular
network is wide open to DoS attacks. I would need to see some serious proof
before accepting such a claim.

~~~
thibauts
Any wireless link is open to DoS attacks: jamming.

~~~
antimagic
Jamming is not a DoS attack. The essential feature of a DoS attack is that it
forces the receiver to expend resources, which it then cannot devote to
legitimate users of the service. Jamming does not force a cell tower to expend
resources, hence it's not a DoS attack.

To put it another way, if jamming is a DoS attack, then so is hacking a server
and reconfiguring its DNS. But we don't call that type of attack a DoS attack,
we call it a penetration, or simply a hack.

And yes, wireless communications are susceptible to more physical attacks,
such as jamming, but jamming is a) easy to track down, and hence dangerous to
execute; b) expensive, since you need to invest in special hardware to do it;
and c) surprisingly difficult to execute effectively in a radio environment
like a cell network, which is generally quite adept at routing around things
like jamming (if you put your jammer between me and the antenna, my phone will
in all likelihood simply find another antenna to connect to on the other side
of me - with signal strength dropping as the square of the distance, your
jammer is going to need to be BIG to jam me effectively).
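
For reference, the inverse-square claim is just free-space propagation (the
Friis transmission equation - a sketch assuming ideal free space; real
cellular paths typically fall off even faster):

    P_r = P_t G_t G_r \left( \frac{\lambda}{4 \pi d} \right)^2
    \quad \Longrightarrow \quad P_r \propto \frac{1}{d^2}

where P_t and P_r are the transmitted and received powers, G_t and G_r the
antenna gains, \lambda the wavelength, and d the distance.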

------
wrs
This starts out talking about engineering having a "theory" to work with,
meaning physics, materials science, etc., that is the source of its ability to
construct new reliable systems.

Then it proposes a mechanism to develop a "theory" of software engineering
where the generated theory seems to consist entirely of methods of project
management.

Am I completely missing the point of the article? Because that doesn't seem at
all parallel. Engineering has methods of project management too, but that's
not what makes bridges not fall down.

~~~
seanmcdirmid
What is the equivalent in other fields that build things? I mean, project
management must be studied in other disciplines, and what have they come up
with?

~~~
scribu
The article points out at the beginning that borrowing project management
practices from other disciplines is what we already tried (which gave us the
Waterfall method), and it doesn't work.

That said, I agree with the GP in that I was expecting more focus on actual
software matters, like how to test the reliability of a system before it's
built etc.

~~~
seanmcdirmid
I don't think waterfall is actually used in other disciplines; perhaps we were
just cargo-culting back then?

But my question is: if SE is supposed to be based on a theory of project
management, then what is the theory used in other disciplines? Or does project
management not see that as its underlying theory, but rather something harder,
like physics or chemistry?

It sounds like there must be a general field of project management out
there... getting people to apply technical skills to get things done. It is
probably a soft science, but definitely necessary. So why not talk about this
aspect of SE as a sub-field of that field, rather than as an aspect of a
harder science (computer science)?

~~~
zb
> I don't think waterfall is actually used in other disciplines; perhaps we
> were just cargo-culting back then?

I think this is half right. There _was_ quite a bit of cargo-culting going on.
But the fundamental error was to try to apply the project-management process
for the _construction_ of other kinds of artefacts to the _design_ of
software. I wrote about this very topic here:

[http://www.zerobanana.com/essays/reclaiming-software-engineering/](http://www.zerobanana.com/essays/reclaiming-software-engineering/)

You're correct that no other engineering discipline attempts to use waterfall-
style project management of its design process. They don't _really_ use it in
the construction process either (in practice, design tends to continue
alongside construction), though it probably has at least been attempted.

It was no surprise to see that the authors of this piece are in fact among the
originators of the Rational Unified Process. They claim to have corrected
their mistake; in fact they are still pushing the same wrongheaded ideas as
they were in the 1980s.

~~~
greenyoda
_" They don't really use it in the construction process either (in practice,
design tends to continue alongside construction)..."_

For something like a high-rise office tower, the design decisions you can
change after construction is in progress are quite limited. For example, you
can't add ten more floors to the building as an afterthought, since that would
involve adding extra elevator shafts, emergency stairways, water and sewage
lines, etc. that would cause significant disruption to the floors of the
building that have already been built. Similarly, redesigning the layout of a
floor to accommodate twice as many people would require similar changes to
stay compliant with building and safety codes.

I suppose if you're constructing a single-family house, there's much more
leeway to change the design during construction - but it would still be
costly.

------
warcher
It's not like we _can't_ design very elegant, robust, reliable software, you
know.

We just can't find anybody to pay for us to retool the whole stack (and I do
mean the _whole_ stack, since we're only as strong as our weakest link) while
the current ad hoc solution operates within acceptable parameters.

The guy who wrote this paper, in my opinion, is missing two bedrock principles
of "pure" engineering: manufacturing tolerances and _cost_. If it works as
well as it needs to and comes in under budget, it's Miller time. There's a
reason every stereo ever made goes -clunk- when you push the 'on' button.
There's a reason none of the walls in your house are exactly plumb.

~~~
warcher
Addendum:

The other real serious mischaracterization here is likening software to a
physical product, even one as complicated as a skyscraper.

Software is a factory that makes products, whether they're HTML pages, or
graphics on a screen, or inputs to an industrial controller. Once you start
looking at how to engineer, design, and manage factories, a lot of the chaos
of computing looks very familiar. E.g. "367 days since someone lost a limb in
a major industrial accident."

~~~
calinet6
The factory analogy is extremely apt.

As in a factory, quality in software results from a complex myriad of factors,
but it reduces to some simple concepts: an understanding of psychology, a deep
understanding of statistics, knowledge and application of systems theory, and
the simple idea of epistemology and the scientific method applied to
management and production. Finally, leadership and universal application of
these concepts throughout an organization, using a PDCA cycle of continuous
improvement (Agile is a rough one).

W. Edwards Deming had this all _exactly correct_ way back in the 1940s, after
WWII, when he taught these concepts to Japanese companies, transforming them
from makers of cheap crap into the quality powerhouse economy we know and
love.

We should listen to him again. That's the "new software development" we need.

[http://en.wikipedia.org/wiki/W._Edwards_Deming](http://en.wikipedia.org/wiki/W._Edwards_Deming)

------
kabdib
Recipe for making great software:

\- Build a good team. Keep management overhead and bullshit to a minimum.
Motivate people by giving them really good people to work with and not pulling
crap compensation games.

\- Do your product three or four times. Rule of thumb: When you're absolutely
sick of re-writing the thing and you're ready to work on something else
(probably at about this many repetitions) you're reaching the point where you
have a decent solution.

\- Work on something hard and fun that people actually want to buy.

\- If someone in management tries to cram a methodology down your throat, or a
salesman knocks on your door with a rad new way to do scrum, kill them and
stuff the body with all the other miserable wastes of oxygen who tried to
bamboozle you with snake oil.

I dunno. This stuff is just bloody hard, and every silver bullet I've ever
seen -- _especially_ the ones that management loves or that use the word
"paradigm" in their literature -- has been a dud.

~~~
thibauts
This.

Things go wrong in a team (with or without a particular silver bullet) when
there are bad or unmotivated programmers, or when there are no experienced
leads to pass on the wisdom.

Recruiters more often than not don't realize this is a sure recipe for
failure. As degree programs don't produce software engineers but merely (in
the best case) algorithm engineers, and as software engineering is not a thing
yet, the only thing we can rely on is the past successes of experienced
individuals.

Methodologies won't remove complexity but only add another (time-consuming)
component to your complexity average, one that will make things look a bit
better at a cursory glance but _won't reduce at all_ your codebase complexity
and maintenance costs.

Try instead to make your team spend half of its time doing simple things
totally unrelated to your project or product (like piling cubes all day long)
and you'll feel the exact same relief (half better) and have the exact same
productivity gain (zero).

------
discreteevent
Software is more a design activity than an engineering activity. In the
construction world it would be more like architecture than civil engineering.
Architecture in the sense that you do need technical knowledge, but it's much
more about thinking about how we design a building for people who are engaged
in certain activities. What are their needs? How does it fit in with its
environment? Etc. _Soft_ things. But difficult things nonetheless. In a way,
the work of the civil engineer is done for software by the language, library,
OS and browser developers (and the hardware people, obviously).

~~~
brudgers
In the US and many other parts of the world architects are professionally
liable for the public welfare in general and life safety, regulatory
compliance and system performance specifically of the buildings they design.

The practice is regulated in these places because there is very little soft
about people dying. It is the absence of a culture that comes from individuals
accepting such responsibility that concerns people like the author and Uncle
Bob.

~~~
discreteevent
I don't mean that all of what we do is soft. I mean about 50% of it (in my
case anyway; the rest I spend on data structure design, performance, testing,
maintainability, etc.). I'm saying that it is more like architecture than
civil engineering. Because it has such a large soft design component, it will
always elude a hard engineering approach.

------
jacques_chester
Computer science isn't a science and it's not about actual computers. Software
engineering isn't engineering and is really about the limits of people, not
software in and of itself.

Nevertheless, my job title is "Software Engineer". Where it makes sense, I
model myself after our elder cousin professions. When it doesn't, I don't.

~~~
foobarian
I have that title too; I feel that basically "coder" + "process" = "software
engineer."

------
cageface
Only when we can specify the design of a software product with the same
precision that we can specify the design of a skyscraper will we ever see the
failure rates of software "engineering" efforts approach those of other
engineering fields. As any practicing programmer can tell you, even the best
upfront design specs are imprecise and incomplete. And most of the time those
specs are changed over and over again during the development effort so
radically that the final product is often barely recognizable.

I put the blame for this firmly on the shoulders of the "clients". We can and
do achieve high success rates with low defect counts in special cases where
the goal is precisely and clearly defined and the development budget is
sufficient. When we start caring about the quality of a website or mobile app
as much as we care about the quality of the software that runs airline flight
control systems or medical devices then we'll see real software "engineering"
emerge.

------
robert_tweed
There are many hard problems in computer science and software engineering. As
it turns out, the solution to these problems is spider charts.

Finally, we'll be able to write distributed low-latency software that
interacts with both legacy systems and browsers with complete end-to-end type
safety, provable correctness and fault tolerance! In half the time, at half
the cost!

With spider charts.

~~~
tk42
But don't forget the cards.

------
peterwwillis
So the author is basically saying 'we need to improve software development',
and trying to use history as an example of 'what we need' in order to improve
it.

Back in the day, craftsmen were just people who were so specialized in a trade
that they could build amazingly complex and difficult things through example
and practice. But it took a long time to learn this craft from another
craftsperson, and knowing this one trade so well left them at a loss for other
aspects of the thing they built (resulting in things like building collapses).

Modern software developers are the same. Indeed, we're even going through the
cultural shift that happens when different civilizations revisit the same
things without really looking at how their forebears did it. We're still
reinventing the wheel instead of creating a better one, and we're far from
creating any new transportation mechanisms.

In order to achieve the kind of evolution from 'craftsman' to 'engineer' that
existed for physical architecture, software developers need to learn about
things other than software. It's not enough to simply learn how the kernel
works, or the hardware works. You have to learn how all those pieces work with
other pieces, and the resulting interactions between different natural and
non-natural processes with computer systems.

The author gets to those points with his 'essence' of new software
engineering. But it gets a bit bogged down by being neither generic enough nor
specific enough. It's simple to see the difference between a craftsman's
output and an engineer's: the application of scientific principles to achieve
an output we can be much more confident in. And that's what the goal of 'new
software engineering' should really be: producing a more reliable,
reproducible, safer product.

Are the proposed methodologies going to get us there? I don't think so. I
think we need less process, and more science, and to create software which has
science as an inherent requirement of its design and implementation, and not
merely an afterthought for performance reasons.

------
lucasnemeth
I HATE WHEN PEOPLE MISUSE THOMAS KUHN. Read the goddam book, or actually read
and try to understand it. A scientific revolution occurs in opposition to a
period of normal science, and we never had a normal-science period in software
development (or engineering, if you like this awful term), so it makes no
sense to invoke Thomas Kuhn here. He is a very profound and complex author
discussing an epistemological kind of paradigm shift, really different from
this nonsense here.

------
wmt
Automated tests are the closest thing to actual engineering in software.
While communication practices like agile methods are important, to me they
still fall outside the scope of actually engineering software. I've found that
one of the most significant factors in the quality of the software I've
produced is the test coverage of the code - and not just lines of code but the
different branches of a function, which honestly feels quite impossible to
cover fully, especially once exceptions or threads are involved.
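
As a minimal sketch of the line-versus-branch distinction (a hypothetical
function and tests, assuming Python with pytest and coverage.py):

    def clamp(n, lo, hi):
        # Each "if" shares a line with its body, so a single mid-range call
        # executes every line of this function.
        if n < lo: n = lo
        if n > hi: n = hi
        return n

    def test_midrange():
        assert clamp(5, 0, 10) == 5    # 100% line coverage already

    # Branch coverage ("coverage run --branch -m pytest") still flags the
    # two untaken branches until the edge cases are exercised too:
    def test_below_range():
        assert clamp(-3, 0, 10) == 0

    def test_above_range():
        assert clamp(42, 0, 10) == 10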

Some celebrate TDD, but to me it just appears to help people not drop the
tests.

Automatic static analysis with compiler warnings or dedicated static analyzers
is a really good and quick solution for part of this, but what I'm really
hoping for is a fully automated dynamic analysis solution. What's really
exciting is that there already are some early attempts at that approach, like
American Fuzzy Lop
([http://lcamtuf.coredump.cx/afl/](http://lcamtuf.coredump.cx/afl/)), which
actively tries to find new branches in the application being tested.
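
A toy sketch of the coverage-guided idea behind afl (an illustration in
Python, not afl itself - real afl instruments compiled binaries and mutates
far more cleverly):

    import random
    import sys

    def target(data: bytes):
        # Hypothetical target with a "deep" bug that blind random input
        # would almost never reach.
        if len(data) > 0 and data[0] == ord("F"):
            if len(data) > 1 and data[1] == ord("U"):
                if len(data) > 2 and data[2] == ord("Z"):
                    raise RuntimeError("crash!")

    def trace(func, data):
        # Record which lines execute, as a crude stand-in for branch coverage.
        hit = set()
        def tracer(frame, event, arg):
            if event == "line":
                hit.add((frame.f_code.co_name, frame.f_lineno))
            return tracer
        sys.settrace(tracer)
        try:
            func(data)
        except RuntimeError:
            hit.add("crash")
        finally:
            sys.settrace(None)
        return hit

    def mutate(data):
        # Flip one random byte, occasionally growing the input.
        buf = bytearray(data or b"\x00")
        buf[random.randrange(len(buf))] = random.randrange(256)
        if random.random() < 0.3:
            buf.append(random.randrange(256))
        return bytes(buf)

    seen, corpus = set(), [b"seed"]
    for _ in range(100000):
        candidate = mutate(random.choice(corpus))
        hit = trace(target, candidate)
        if not hit <= seen:        # new coverage: keep this input as a seed
            seen |= hit
            corpus.append(candidate)

Inputs that reach new code are kept as seeds for further mutation, which is
what lets this kind of search walk through the nested conditions one byte at
a time instead of guessing all three at once.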

------
mrottenkolber
I agree with the first half of the article but find the "kernel" concept
lacking. Here's why: It doesn't mention actual programming theory.

Scrum, Extreme Programming, Waterfall, etc.: these are all about management
and business practices, not about what we do. Extreme Programming has some
craft-related "practices" (TDD, CI), but even these are mostly in the QA
corner.

I think we need a theory about classes of programming problems and their
solutions. We need to analyze our code bases, and how we grew those, in order
to understand the trade-offs and nature of our approaches. And by this I
really mean analyze the code base, the "project idioms", the abstractions,
their cooperation.

Examples I recall are from the Lisp world, where the bottom-up onion is a
well-liked strategy (layering toolboxes on toolboxes, each made up of many
small pieces which can be combined effectively). Another such observation is
the debate between many types with few methods each versus many functions on
few types. This is what I want to see researched (and research myself).
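
A minimal sketch of that bottom-up onion style (in Python rather than Lisp;
the layers and names here are hypothetical):

    from collections import Counter

    # Layer 1: tiny general-purpose pieces, useful far beyond this program.
    def words(text):
        return text.lower().split()

    def frequencies(items):
        return Counter(items)

    # Layer 2: a small toolbox combining layer-1 pieces.
    def word_frequencies(text):
        return frequencies(words(text))

    # Layer 3: the application's own vocabulary, one thin layer up again.
    def top_words(text, n=3):
        return word_frequencies(text).most_common(n)

    print(top_words("the cat sat on the mat with the cat"))
    # [('the', 3), ('cat', 2), ('sat', 1)]

Each layer is a small language for the layer above it, which is the point of
the onion: nothing in the top layer knows about splitting or counting.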

------
ChicagoDave
I smell someone trying to make money off of seminars and corporate "re-
education". There's no doubt that many of our corporate overlords need re-
education on agile software development, but this is not the way to do it.
They simply need to engage their IT people openly and collaboratively. Nothing
else will work.

------
tk42
My software engineering process works as follows:

I try to code as much as possible as early as possible. I throw away lots of
stuff and recode it. Besides that, I have an eye for stuff that is "similar"
and can be abstracted. If someone wants an estimate, I guess as well as
possible.

Big code is ideally split into one-person chunks, each with a documented API,
but sometimes many people have to work on the same "files". Then big code is
split between multiple people who sit nearby and communicate personally while
discussing implementations based on technical arguments.

How to make a product of software is a different story. But I guess it works
when you design your product in estimable pieces and adapt fast to changing
requirements.

Also I am pretty sure I forgot one or two things...

------
omouse
What I like about SEMAT is that it's straightforward and it _can_ encompass
waterfall, agile, scrum or whatever. It quantifies what's happening on your
project without relying on traditional MBA-type project management processes.

The barrier to SEMAT, as with Agile, is training and leveraging the power that
programmers/software engineers have in the economy to get it adopted. We're
going to have to wait 5-10 years to see any significant adoption. What would
really help is guides and documents on how to present this to your team or
organization, to make it easier to get it adopted or at least to trial-run it.

------
contingencies
_the ethos of software engineering has tended to devalue coders (if not
explicitly, then implicitly through controlling practices)_

All aboard the software engineering boat: see you downriver for delivery,
right past the waterfall!

------
analog31
Truth be told, there's a huge amount of craft in mainstream product design.
Only a certain subset of the things that we use are engineered at the level of
discipline reserved for things like airplanes and skyscrapers. And hardware
designers are comfortable with some things that would horrify software
designers, such as relying on closed source tools for critical tasks like
structural analysis.

When we discuss engineering discipline, somebody will invariably remind us:
"Look, we're not designing airplanes here."

------
lucozade
TL;DR some perennial software methodology consultants have come up with a base
class for software methodologies.

It seems suspiciously like it has the usual properties of post-facto defined
base classes i.e. arbitrary and fragile. But I could be wrong.

BTW if you found equating physics to marking stakeholder-ness out of 6 hard to
swallow, you might struggle with "Major-league SEMAT". It reads a bit like an
outline for the next Scott Adams book.

------
Spearchucker
I see a lot of detractors here. One of the authors, Ivar Jacobson, is behind
UML (together with Grady Booch & James Rumbaugh), which explains where SEMAT
fits: the enterprise. Specifically, those who live in a world of class
diagrams, sequence diagrams and activity diagrams. I doubt there's much
benefit to the typical HN audience (for reasons the comments here point out),
but SEMAT has a place.

------
terrasect
For more technical methods and theory, I would check out The Five Orders of
Ignorance
([http://dl.acm.org/authorize?9919](http://dl.acm.org/authorize?9919)) and the
book The Pragmatic Programmer.

~~~
contingencies
Five Orders of Ignorance working link:
[http://www-plan.cs.colorado.edu/diwan/3308-07/p17-armour.pdf](http://www-plan.cs.colorado.edu/diwan/3308-07/p17-armour.pdf)
... I didn't find it to be particularly valuable.

------
golemotron
I lost it when I saw requirements as a cornerstone. There are no requirements
- it's all design. The word 'requirement' is an artifact from processes where
you have handoffs from one group of people to another rather than integrated
discovery and coding. When you look at things as requirements you lose the
fact that most things are negotiable and should be negotiated in service of
the ultimate design.

Beyond that, it's worth noting that when the same people go from idea to A/B
testing to full production the idea of a 'requirement' seems quaint.

------
toolslive
<sarcasm> didn't he already solve this problem in the past? (RUP) </sarcasm>

------
didgeoridoo
"Engineering" is only possible in the physical world because the laws of
physics don't change every thirty years or so. In software, we aren't so
lucky. Bridge-building best practices from the 1950s would build a fine bridge
today. Software development practices from the 80s would get you... basically
nowhere.

~~~
taeric
I think you are both a) greatly unfamiliar with bridge-building practices and
b) underestimating the ability of 1980s software design.

Seriously, were there shortcomings in the software design of yesteryear?
Almost certainly. Are they blown vastly out of proportion in most discussions?
My assertion is that they are.

Sadly, I do not know that I have anything coherent to offer on how to fix
things. What I can offer is that I think we'd do better by not always looking
across the horizon for a silver-bullet language/framework/whatever and keeping
focused on what the job at hand requires. The number of failures I have
witnessed due to a desire to over-generalize is staggering.

~~~
sgrove
But the assertion is that bridges could be built today using techniques from
the 1950s, and they would still be good bridges, because the principles and
physical laws they were built on haven't changed.

Software design had a lot of cool things happening in the 1980s (possibly
more in the 1970s, though), but the mindset has shifted since then. How many
people are implementing their own VMs? How many are running on anything other
than x86 Intel/AMD chipsets? Things that needed to be heavily optimized in the
early '80s might actually run faster now with JIT and whole-program
optimizations. And so on.

It's hard to build an engineering discipline out of such a shifting substrate.

~~~
taeric
And I disagree with this assertion. Heavily. Simply look at the state of most
bridges in the US (and elsewhere?) to see that they are not holding up nearly
as well as is implied by this assertion.

I don't know what to say regarding your examples. Don't put too much credit in
JIT and whole-program optimizations. One could just as easily point to the
fact that a program which used to be too slow is now fast enough thanks to the
advance of computing speed. And memory. Do not overlook the importance of
memory.

More, I doubt most programs of yesteryear failed due to lack of optimization.
It seems likely to me that the main causes of software failure have not
changed that much over the years.

~~~
coldtea
> _And I disagree with this assertion. Heavily. Simply look at the state of
> most bridges in the US (and elsewhere?) to see that they are not holding up
> nearly as well as is implied by this assertion._

I find that they (from the '50s and even older) hold up just fine.

If software worked initially, and survived 60 years as well as bridges from
the '50s do today, that would be a miracle.

~~~
taeric
This leads to a few questions, though. First, if bridges were built just fine
in the '50s, why do they build them differently nowadays? Second, why is the
maintenance cost of bridges decidedly non-trivial? Third, is there any
software that old that still works fine?

For the first question, I would only be offering speculation. Google and
friends can give pretty good references.

For the second, a quick google gives "The annual direct cost of corrosion for
highway bridges is estimated to be $6.43 billion to $10.15 billion ..."[1]

Third, I offer TeX as a good example of old software that has managed to
survive for quite a long time. I am always pleasantly surprised when I go to
typeset something from 20+ years ago and things "just work."

I think the theme here is that maintenance costs for bridges more often than
not entail just keeping them working. It seems that far too often in software,
maintenance costs come to include complete rewrites into new technologies.

[1]
[http://www.dnvusa.com/Binaries/highway_tcm153-378806.pdf](http://www.dnvusa.com/Binaries/highway_tcm153-378806.pdf)

~~~
hueving
TeX is not 50 years old. Also, it's borderline unusable to anyone who hasn't
mastered what "underfull hbox" means and the like. Internally it's so
unsustainable that people are trying to start from scratch with full rewrites.
However, because there actually isn't a standard for LaTeX beyond "how TeX
renders it", it's a horrific undertaking. TeX is far from "just working".

Onto bridges: just because they build them differently now does not negate the
fact that 50-year-old bridges are still reliable and safe to drive over,
because they were engineered well. Their shortcomings and failure modes are
well known, so maintenance can be performed to prevent collapses.

50-year-old bridges may be expensive to maintain at this point, but the fact
that it's still possible shows that they were engineered well. They may build
them differently today, but that's more likely related to cost constraints
changing (e.g. you can't afford an army of riveters now) rather than the
engineering being unsound.

~~~
taeric
This is pure goalpost shifting. Without massive and expensive maintenance,
many of these bridges would be unusable, having crumbled to the point of
destruction. Pretty much period.

TeX is not 50, I had not meant to imply it was. Just that it is a relatively
old piece of software that has had virtually no maintenance compared to many
other pieces of software.

Could it have used an overhaul? It is certainly debatable. However, Knuth has
had massive success in keeping it working without worrying about use cases
that just were not the aim of the software.

To continue the comparison with bridges, it is not uncommon for them to have
restrictions saying that large trucks cannot cross them. This is not a
recommendation; it is a requirement. In the software world, many would have
modified TeX to do such things as typeset documents miles wide. Because,
generality! Or some such.

And the concern you have about people not knowing what an "underfull hbox"
means is simply a lack of training. I would be surprised if anyone I work with
below the age of 25 has even heard of TeX, much less read the documentation
for it. Heck, I could probably raise that to 40.

So, my assertion is that the fact that TeX is still very much usable shows
that it was "well engineered." Would it require somewhat expensive training?
Sure, but how is that any different than the maintenance of bridges?

------
andyl
For web and mobile apps, you need programming ability and tooling (bug
tracker, source code control, editor/IDE, test/release system, etc.).

With that, IMHO your job looks more like a writer's than an engineer's, and
your product feels more like a serial TV show than a skyscraper. Key skills:
identifying a compelling story line, building awareness and engagement with a
loyal audience.

