Niklaus Wirth: A Plea for Lean Software (1995) [pdf] (cr.yp.to)
38 points by pmarin on Sept 11, 2014 | 12 comments

I think one factor that leads to bloated, ruined software was missed... I don't know how common it is overall, but I have personally seen it ruin several very good products.

And that is the simple fact that employers want their employees to stay busy. If a piece of software reaches a point of exceptional quality, the developers working on it still have to fill 40 (likely more) hours a week to appease their bosses. And so they do the only thing available to them: they ruin the product. This is one of the reasons I think software engineers should, in certain cases, work on retainer. Domain knowledge of a specific product is very valuable, and companies routinely ignore this. They will shuffle an engineer off to some newer project and leave a more junior person to handle maintenance of the original project, and that junior person will have to fill their time somehow. This serves no one except the shortsighted manager who measures their self-worth by how many employees they control, and how tightly.

Our software needs to scale and to have fallbacks when "infinite compute and memory" is not available. I used to detest being forced to make my code run on a machine with 512 MB; I'd argue that we were already charging the customer tens of thousands of dollars, so why not just ship them a couple of sticks? 2 GB would have made the code 10x cleaner. But I was wrong.

Our code needs to scale down as well as up, but it should do so in the libraries and the abstractions. That is why "big data" was such an issue: we weren't working on abstractions that blurred the line between in-core and out-of-core computation, and now we are.

If we turn from imperative to declarative, and from any-typed to algebraic, the semantics remain consistent and it is up to the runtime to fit the code to the available compute and memory. Cache-oblivious algorithms were a good start; memory- and compute-oblivious is the next level. A Lisp with an RDD [0] feels about right.

[0] http://www.cs.berkeley.edu/~pwendell/strataconf/api/core/spa...
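To make the "blurring in-core and out-of-core" idea concrete, here is a minimal sketch (my own toy, not Spark's actual RDD API) of a lazily evaluated, chunked collection: transformations are recorded, not executed, and the runtime applies them chunk by chunk, so the caller's code looks the same whether the data fits in memory or not:

```python
from itertools import islice

class LazyChunks:
    """Toy RDD-like collection: map/filter are deferred and applied
    chunk-by-chunk, so the full dataset never has to be in core."""

    def __init__(self, source, chunk_size=1024):
        self.source = source          # any iterable: a range, a file, a socket...
        self.chunk_size = chunk_size  # how much we hold in memory at once
        self.ops = []                 # deferred transformations

    def map(self, f):
        self.ops.append(lambda chunk: [f(x) for x in chunk])
        return self

    def filter(self, pred):
        self.ops.append(lambda chunk: [x for x in chunk if pred(x)])
        return self

    def _chunks(self):
        it = iter(self.source)
        while True:
            chunk = list(islice(it, self.chunk_size))
            if not chunk:
                return
            for op in self.ops:       # replay the deferred pipeline
                chunk = op(chunk)
            yield chunk

    def reduce(self, f, init):
        acc = init
        for chunk in self._chunks():
            for x in chunk:
                acc = f(acc, x)
        return acc

# The caller never says how much memory is available; only chunk_size knows.
total = (LazyChunks(range(1_000_000), chunk_size=4096)
         .map(lambda x: x * 2)
         .filter(lambda x: x % 3 == 0)
         .reduce(lambda a, b: a + b, 0))
```

The point is that the same `map`/`filter`/`reduce` program runs unchanged at any scale; only the runtime's chunking policy changes, which is exactly the "memory-oblivious" property the comment is asking for.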

Counterpoint: http://www.joelonsoftware.com/articles/fog0000000020.html

Or this soundbite from a later blog entry of his: "No matter how much it bothers you neat freaks, the market always votes for bloatware."

I wish he were wrong; it bothers me that 512 MB is now considered low-end for RAM in a smartphone. But I guess we just have to accept this reality of our industry.

Much to disagree with here.

> [Hardware arguments]

The problem is not really RAM or disk consumption. The problem is that starting up my OS, or my Web browser, or my word processor, or even Emacs for that matter… is not instantaneous. Our computers are a million times faster than 30 years ago, and they still lag. Bloat certainly bears some responsibility: if we had less, cleaner code, we could have more efficient programs:

> Efficiency comes from elegant solutions, not optimized programs. Optimization is just a few correctness-preserving transformations away.

Jonathan Sobel, http://www.cs.indiana.edu/~dfried/mex.pdf


> If your software vendor stops, before shipping, and spends two months squeezing the code down to make it 50% smaller, the net benefit to you is going to be imperceptible

The short-term benefit will certainly be imperceptible. But in the longer term, halving the size of the code base will mean significantly easier maintenance, including easier addition of features. Losing 2 months now may very well gain you 2 years down the road.


> [Features]

The problem with too many features is not that software is capable of too much. The problem is the lack of orthogonality. The same level of capability could be achieved with fewer, simpler features. But I guess that would imply trusting the user to an unprecedented level. Like, letting her program her damned computer, like any Excel idiot is perfectly capable of…

> Like, letting her program her damned computer, like any Excel idiot is perfectly capable of…

Excel is a far more advanced programming environment than any real IDE. There's less room for syntax errors, there's immediate feedback on changes, and you can display and examine all intermediate computation steps…

Most importantly, the computational semantics of Excel spreadsheets do not include "accidentally delete all the user's files" or "accidental remote code execution" the way most other programming languages do.

Agreed. But we now have an existence proof. There's no reason to believe it can't be reproduced.

From the article:

"It led to Oberon, a language derived from Modula-2 by eliminating less essential features (like subrange and enumeration types) in addition to features known to be unsafe (like type transfer functions and variant records)."

Does anybody know what the argument about the safety of variant records is?

Wirth is a true believer in strong typing: marrying strong types to inflexible compilers

If I understand that quote, he means that variant records are dangerous because they make it possible for programmers to interpret the same data in different ways, thus bypassing the compiler's type checks and invalidating the whole concept of strong typing. He says a little more about this in his paper on the history of Modula-2 and Oberon: http://www.inf.ethz.ch/personal/wirth/Articles/Modula-Oberon....

I think he might be talking about C-style unions instead of ML-style algebraic data types.
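The danger being described can be made concrete. Python has no untagged unions, so as an illustration (my own, not Wirth's example) we can simulate the overlay with the standard `struct` module: a Pascal variant record without a tag check, like a C union, lets the same bytes be written through one "variant" and read through another, and no error is ever raised:

```python
import struct

# A variant record overlaying a REAL and an INTEGER on the same storage.
# Write the REAL variant:
bits = struct.pack('<f', 1.0)           # the 4 raw bytes of float 1.0

# ...then read the INTEGER variant from the very same bytes:
as_int = struct.unpack('<i', bits)[0]   # reinterpretation, not conversion

print(as_int)   # 1065353216 (the IEEE 754 bit pattern of 1.0), not 1
```

An ML-style algebraic data type avoids this because every value carries a constructor tag and the language forces you to match on it before touching the payload; Oberon's answer was instead to drop untagged variant records and rely on type extension, which the runtime can check.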

Are there any Oberon systems around to try/buy?

The download page only has links to a broken FTP server; try the SourceForge repository at http://sourceforge.net/projects/a2oberon/files/

