In the late 80s and early 90s, a lot of software development was done with various RAD languages (Magic, Business Basic -- my first programming job used that -- and later VB). We fully expected the then-nascent Smalltalk, or something like it, to become dominant soon; does anyone claim that JS or Python is a significant improvement over it? Our current state is a big disappointment compared to where most programmers in the 80s believed we would be by now, and very much in line with Brooks's pouring of cold water (perhaps with the single exception of the internet).
My perception is that the total productivity boost over the past three decades is less than one order of magnitude (Brooks was careful to predict only that no single improvement in language design or programming methodology would yield a 10x boost within a decade), and almost all of it comes from improvements in hardware and from the online availability of free libraries (Brooks's "Buy vs Build", which he considered promising) and information -- not from changes in programming methodology or language design (although garbage collection and automated unit tests have certainly helped, too). The article also mixes in hardware improvements and their relationship to languages, but we knew back then that those were coming, and I think they were factored well into Brooks's prediction. Moreover, my perception is that we're in a period of diminishing returns from languages: the productivity gains Fortran and C delivered over assembly were significantly greater than all the gains since.
The best way to judge Brooks's prediction, I think, is against the opposite predictions made at the time -- those claiming Brooks was being too pessimistic -- and those were even more wrong in retrospect.
I would also add that if you want to compare the ratio of essential and accidental complexity in line with Brooks's prescient analysis, you should compare the difficulty of designing a system in an accidental-complexity-free specification language like TLA+ to the difficulty of implementing it in a programming language from scratch. I find the claim that this ratio has improved by even one order of magnitude, let alone several, to be dubious.
> Brooks states a bound on how much programmer productivity can improve. But, in practice, to state this bound correctly, one would have to be able to conceive of problems that no one would reasonably attempt to solve due to the amount of friction involved in solving the problem with current technologies.
I don't think so. Although he stated it in practical terms, Brooks was careful to make a rather theoretical claim -- one that's supported by computational complexity results obtained in the 80s, 90s and 00s, on the hardness of program analysis -- about the ability to express what it is that a program is supposed to do.
What was really surprising to me is the lack of GUI-building IDEs for Python, etc. Now I know why they are scripting languages, and not application languages.
I got around this with wxBuilder by building my own interface layer to decouple everything, but then I needed to change a list to a combo box, and everything broke.
Things that take HOURS this way take a few seconds in Lazarus.
On the other hand, Qt3-era Qt Designer was great at everything Delphi was good at, and much better at layouts.
But these days the standard seems to be web-based interfaces, and honestly, whatever scaling problems these "old" RAD tools have, the web has them twenty times over.
I was late to the webdev party -- I only reluctantly started to write JS a couple of years ago -- and to this day I'm still baffled by how barebones it is compared to the GUI toolkits I was used to. You have to import a trillion dependencies because the browser, despite being mostly a glorified layout engine, doesn't really provide much beyond bare primitives.
Things like date pickers or color pickers are a recent development and are not supported everywhere.
Ditto for treeviews.
Styling and customizing combo boxes is so limited and browser-dependent that there are dozens of libraries reinventing the wheel with completely custom controls, each with its own quirks and feature set.
There's no built-in support for translations and localization (unless you count the accept-language HTTP headers I suppose). On something called "the world wide web", that's pretty embarrassing and short-sighted IMO. But you do have a Bluetooth stack now, so that's nice.
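To be fair, the closest thing to built-in i18n today is the `Intl` API, which handles locale-aware *formatting* but not message translation -- a quick sketch of the distinction (the message catalog here is a hypothetical hand-rolled one, exactly the part the platform doesn't supply):

```javascript
// Locale-aware formatting is built in via Intl...
const de = new Intl.NumberFormat('de-DE').format(1234567); // '1.234.567'

// ...and plural rules tell you which message variant to pick,
// but supplying the translated variants themselves is still on you:
const plural = new Intl.PluralRules('en-US');
const msgs = { one: '1 item', other: 'many items' }; // hypothetical hand-rolled catalog
console.log(msgs[plural.select(1)]); // '1 item'
console.log(msgs[plural.select(5)]); // 'many items'
```

So number, date, and plural formatting is covered, but string catalogs, extraction, and fallback chains -- the actual "translations" part -- still require a library.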
On the Delphi side, there was a significant difference between code produced by someone just starting out -- which tended to be an unholy mess of generated code mixing UI and non-UI operations -- and code from a more experienced developer who put some architectural thinking into the project (for example, separating business logic from the UI code that called into it); the latter was quite workable even for bigger projects.
Therefore, in the 90s, people became tired of this and looked for ways out of the vendor lock-in. That's why OSS started to get traction, and also Java. It turned out to be cheaper to develop your business app in Java from scratch than to build it in a commercial RAD tool and then pay through the nose to an indifferent vendor to maintain the base on which it stands.
So I think OSS is actually less productive and less polished than it could be (many of the RAD tools are really cool, but insanely expensive), but it is still a case of worse-is-better.
I think it's possible that some business DSLs (which are at the core of the RAD tools) will win mindshare again, but it is going to be quite difficult.
(I work on mainframes, and there are quite a few RAD tools that were pretty good in the 80s, when the mainframe was the go-to choice for large business apps.)
And you and Brooks have been absolutely right: there was no one improvement in language design that gave a 10x boost in productivity.
Brooks claimed in No Silver Bullet that a 2x programming-productivity improvement is unlikely because essential complexity now dominates accidental complexity. Dan is trying to demonstrate that this is completely untrue.
When distilled, Brooks's claim is that the relative size of accidental complexity in programming overall is (or was) "low", i.e. less than 90%, while Dan wants to claim it is (or was) "high", i.e. more than 90%. His experiment does not establish that.
People who weren't exposed to this era have a hard time believing it. My litmus test is this: how easy is it for any person to make a button in a given computing system? On a Mac in the late 90s this was so easy that non-programmers were doing it (HyperCard). Today, where would one even begin?