
Lessons for software developers from 1970s mainframe programming - rbanffy
https://www.hpe.com/us/en/insights/articles/4-lessons-for-modern-software-developers-from-1970s-mainframe-programming-1709.html?jumpid=_TWITTER_
======
yodon
Interesting article but the author seems trapped in an antiquated “compute
time is more expensive than developer time” mindset. For most startups today
the EC2 budget is negligible compared to the dev team budget. If you’re
successful enough that the compute cost matters, that’s a good thing, and you
can deal with it then.

Optimizing for problems you don’t yet have just keeps you from launching and
getting successful enough to actually care about your compute costs.

~~~
TickleSteve
software development != web software development.

Processing efficiency is still extremely important in the embedded world.
Don't think that the embedded market is small, practically every product you
buy has s/w in it.

When you make, for example, a million units of a product, every single byte
and every cycle counts, because there is a large multiplier to take into account.

Efficiency counts, just not in the web world.

~~~
dualogy
> _Efficiency counts, just not in the web world._

Which regrettably by now even end-users are painfully aware of!

~~~
lykr0n
_cough_ javascript _cough_

~~~
rf15
You can program efficiently in JS. Just maybe drop the fat framework that
dynamically checks each and every one of your assigned variables to update
other hidden functions (and maybe the DOM), and avoid dragging things through
N functions for "encapsulation" reasons and other inefficiently applied OOP
principles.

what I want to say is: _cough_ javascript developers _cough_

(for reference: I am a js "fullstack" dev)
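To make that overhead concrete, here is a minimal sketch (all names hypothetical, not modeled on any particular framework) of a reactivity layer that traps every property write, compared to a plain assignment:

```javascript
// Hypothetical sketch: a reactivity layer that intercepts every
// property write and runs change-detection work, versus a plain
// object that is just a property store.
const subscribers = [];

function reactive(obj) {
  return new Proxy(obj, {
    set(target, key, value) {
      target[key] = value;
      // Every single write pays for notifying subscribers; in a real
      // framework this would be dirty-checking and possibly a DOM update.
      subscribers.forEach((fn) => fn(key, value));
      return true;
    },
  });
}

const plain = { count: 0 };
const tracked = reactive({ count: 0 });
subscribers.push(() => {}); // stand-in for re-render work

// Same loop, but the tracked object pays the trap + notification
// cost on every one of the million iterations.
for (let i = 0; i < 1e6; i++) plain.count = i;
for (let i = 0; i < 1e6; i++) tracked.count = i;

console.log(plain.count, tracked.count); // 999999 999999
```

Both objects end up in the same state; the difference is purely how much machinery runs per assignment.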

~~~
klibertp
It's not that simple. My impression - I'm not very knowledgeable about this -
is that JS and others have started getting around as fast (under the best
conditions and with a lot of voodoo) as unmanaged C code, but they are still
behind in terms of memory footprint. For embedded applications, where you may
be forced to count every KB of memory and every CPU cycle, JITted systems are
simply not a good fit right now.

You certainly can write a JavaScript implementation that would work in an
embedded environment (just found one:
[http://www.espruino.com/](http://www.espruino.com/) \- it actually looks
pretty nice! I wonder how it compares with Arduino?), but when people cough
"JavaScript", they most often mean "JavaScript as currently implemented in the
four most popular implementations" or similar.

So, while it's true that you can write JS code that's thousands of times more
efficient than some _bad_ JavaScript, it's also true that even the _good_ JS
is not going to be fast enough for some domains.

This (among other places) is where AOT-compiled, GC-less languages (or ones
with special implementations of those features) come in. And even then, there
are applications where even the cheap, mostly-compile-time abstractions of
such languages prove too clunky and you need to drop down to assembly
(bootloaders, demos, parts of OSes or language implementations).

So, while you can write efficient JS code, it's not going to be efficient
enough for many cases.

~~~
rf15
Your assessment is fair enough, although I would argue that, given how V8 gets
execution time down to roughly twice that of a comparable C program (as
opposed to an order of magnitude or two), it is worth the convenience it
brings to the table for devs (though C++11 and later close the gap quite a
bit).

------
bandrami
Unfortunately, the "best practice" of today says it's better to import a 2 Gb
library than write a function, given the choice.

~~~
yodon
Yes, because the primary cost to the company is the cost of developer time,
not the download bandwidth cost or the power cost for increased utilization
level of the user’s CPU.

The craftsman cares that they can accomplish the task without needing a large
library. The business cares that it can get to market quickly and profitably
at minimal cost. “I prefer your competitor because their code was lovingly
hand-crafted instead of being shipped quickly with the features I need,” said
no customer, ever. (Witness that we all use compilers rather than hand-coding
the machine code in an assembler... all the same arguments were made against
compiled code back in the day, and compiled code was the right answer, then
and now.)

~~~
bandrami
> The business cares that it can get to market quickly and profitably at
> minimal cost.

And then the successful business, in contrast, actually looks at total cost of
ownership.

~~~
549362-30499
The point is that cloud computing costs are small compared to labor costs, so
it's a waste of time to make a 0.005% cost optimization that takes a week of
work.

I would also love to see some examples of a 2G (!) library that people are
casually importing. Where have you had this problem?
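To put rough numbers on that trade-off (all figures here are hypothetical, just to show the shape of the arithmetic):

```javascript
// Entirely hypothetical back-of-the-envelope numbers for spending a
// week of developer time on a 0.005% reduction of the cloud bill.
const devWeekCost = 3000;        // assumed fully loaded dev-week cost, USD
const monthlyCloudBill = 20000;  // assumed monthly compute spend, USD
const savingsFraction = 0.00005; // the 0.005% optimization

const monthlySavings = monthlyCloudBill * savingsFraction;
const monthsToBreakEven = devWeekCost / monthlySavings;

console.log(monthlySavings);    // ~1 dollar per month
console.log(monthsToBreakEven); // ~3000 months, i.e. about 250 years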

~~~
bandrami
Labor costs are capital expenses; computing cost is an ongoing operational
expense.

------
549362-30499
I really like the point about considering cloud computation costs. It would be
great if individual developers could get feedback like, "This patch caused X%
CPU cost per request, resulting in Y% monthly cost increase to keep projected
usage below alert trigger levels." I also like the note about the headless
abstraction, although I think it could gain some strength by talking about AWS
Lambda or the actor model.

But, I think the dedication to writing perfect code without executing it is
misguided. It's 2018 - we have interactive debuggers, excellent profiling
tools, and unit tests. Most developers have a computer with 4+ cores and 8G+
of memory. It would be foolish not to take advantage of that.

~~~
DrScump
One thing I hate about the modern disregard for resource consumption is that I
can't tell whether a given app or program is merely sloppily inefficient, or
if it's malware-laden. (Too often, it's probably _both_.)

------
DrScump

      Unlike the stand-alone, isolated mainframe era, our applications today are interconnected. 
    

In an _academic_ environment, perhaps the author didn't experience intersystem
dependencies. In _production_ environments, however, there was arguably _more_
interdependency with (and, therefore, risk to) other systems and users.

Take a manufacturing environment. Shop orders take in inventory information,
labor detail, assembly progress, facilities and supplies usage, etc., any or
all of which can involve independent systems. In turn, each production step
can create information that needs to go back to each system.

Any error or change in those inputs and outputs could force a rerun of all
systems downstream of the first error. This is especially noteworthy to the
guy/gal being called in at 3AM to unravel such hairballs.

I'd daresay that most interconnection with external systems in the modern
mobile environment is primarily used for social media, tracking and other
privacy suckage. ( _Ghostery_ output can be quite surprising, for example.)

~~~
jacquesm
> I'd daresay that most interconnection with external systems in the modern
> mobile environment is primarily used for social media, tracking and other
> privacy suckage. (Ghostery output can be quite surprising, for example.)

The web has become a very frequent path for machine-to-machine communications
through APIs, besides other, more direct pathways for internet traffic on
ports other than 80/443.

------
scraft
Absolutely, 100%, you should optimise for your resources. In a lot of
scenarios those resources aren't CPU, HD, memory and bandwidth; instead they
are developer time. But certainly, if computer hardware becomes more of a
bottleneck (cost, time, etc.) than manpower, then optimising for hardware
makes sense. Like most things in life, it isn't binary, and I believe most
developers already embrace this: making compromises in order to meet
deadlines, writing maintainable and debuggable code, writing code to help
with tomorrow's tasks, learning new techniques and attempting to apply coding
ideas that may help in future tasks, making code run optimally enough for the
given scenario, etc.

~~~
JustSomeNobody
I don't see user time being mentioned. If your app is user facing then
optimize for user time.

------
photon-torpedo
This website fully pegs one CPU core on my laptop (Firefox 57.0.4 on Ubuntu
17.10, with uBlock and Ghostery enabled). Apparently these lessons haven't
reached their web design team.

Edit: investigating more closely, this even happens with JS disabled, and also
in Chrome (with less load, though).

~~~
shakna
I'm not seeing anything like that, but you got my interest.

The page is only using 14 MB of memory, which is a bit higher than some pages,
but it's about 4 MB of JS source and another 4 MB of objects held in memory.
It isn't desirable, but it shouldn't be causing any issues.

And though the page does load fairly quickly for me, a glance over what it
does during that load makes me suspect I know your issue: Styles were
recalculated more than 75 times, with the repaint happening more than 35
times.

There's also a huge peak in the middle of the JS being executed. The culprit
[0] has a tightly packed for-each loop: three nested for-eaches, each
containing a lambda, which contains two or three more lambdas. It's not
performant code. That particular script is also 500 KB more of the same kind
of code. (I might also point out that the form-handler on that page is an
even bigger script.)

So: I'd think it's the repaint from too many styles coming in overriding each
other, but it might just be a reliance on a large library, which doesn't seem
to be well thought out.

[0]
[https://www.hpe.com/etc/clientlibs/hpeweb/js/hpe.dll.libs.js](https://www.hpe.com/etc/clientlibs/hpeweb/js/hpe.dll.libs.js)
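A hypothetical reconstruction of the pattern described (not the actual HPE code): nested forEach calls whose inner lambdas are re-created on every outer iteration, versus plain indexed loops doing the same work.

```javascript
// Three nested forEach calls with per-iteration lambda allocation,
// versus flat indexed loops computing the identical result.
const data = Array.from({ length: 50 }, (_, i) =>
  Array.from({ length: 50 }, (_, j) =>
    Array.from({ length: 50 }, (_, k) => i + j + k)));

function nestedForEach() {
  let sum = 0;
  data.forEach((plane) => {
    plane.forEach((row) => {             // this lambda: allocated once per plane
      row.forEach((v) => { sum += v; }); // this lambda: allocated once per row
    });
  });
  return sum;
}

function flatLoops() {
  let sum = 0;
  for (let i = 0; i < data.length; i++)
    for (let j = 0; j < data[i].length; j++)
      for (let k = 0; k < data[i][j].length; k++)
        sum += data[i][j][k];
  return sum;
}

console.log(nestedForEach() === flatLoops()); // true - same result, different cost
```

The results are identical; the difference is allocation pressure and call overhead inside the hot loop, which is the kind of thing that shows up as a CPU spike during page load.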

------
ggm
Experientially, learning to code on constrained systems and with constrained
tools does enforce a lot more pre-checking before execution, because the cost
of failure is higher for your process.

A lot of the language around "why functional programming" goes to the same
place. Strongly typed, functional solutions expose problems during
development, avoiding runtime problems caused by sloppy thinking.

I also think simpler is good. So coding disciplines which favour simple
techniques (within limits; this is not one-dimensional) are good. If you had
to exploit a very complex mechanism, the old-world, old-school method was to
look in the NAG library for a well-written solution (presuming numeric
problems, which predominated then). And now we do much the same: look for a
solution in your language-compatible runtime library space, which is a
well-trodden path.

------
walshemj
Not sure that basing this on experience of "hem hem" university usage of
mainframes (and presumably the previous generation at that) is actually that
relevant to real-world usage at the time.

Even back then it would be impossible for a serious mainframe program to be
written and expected to work first time.

------
MaggieL
There actually are people who worked on 1970's mainframes who understand
modern software tech. This author isn't one of them.

------
pfarnsworth
No.

I disagree with most of this article, and I have ~25 years experience. No, I
didn't program mainframes, but I'm tired, as are younger programmers, of even
older programmers trying to say that the way they did programming is still
"better".

The writer isn't entirely wrong, but the simple fact is that software isn't
written the same way anymore. Stop trying to force antiquated methods down
younger people's throats. The way I wrote programs 20 years ago is inherently
different from the way that I wrote programs 10 years ago is inherently
different from the way I write programs today.

15-20 years ago, we didn't write tests. We had QA that wrote our tests for us.
We tested the code as well as we could (I became pretty damn good at testing
my own code) and then we threw it over a wall to QA. Today, we have zero QA
and I write tests for my own code.

10-15 years ago, you aimed for 0 defects, especially for enterprise code,
because your enterprise customers couldn't afford downtime. Today, in a SaaS
environment, you care about defects, but you have a global set of customers,
and you roll your code out slowly and watch metrics.

I have a friend in growth at Facebook and his manager got mad at him because
he was focusing too much time on testing his own code. Apparently he's
supposed to leave that to external QA, and you can always fix the code later.
On some growth teams, code quality and maintainability don't matter, all that
matters is getting customer growth with new features as quickly as possible.
Is that inherently wrong? No, it's a different way of doing business. 10 years
ago there was no such thing as a growth team.

The way software is used is different, and the way software is developed is
different. Mainframe methodologies, while interesting to read about, are not
relevant. Things like "optimize up front" are nonsense to me, especially in a
global context. You iterate on your features quickly, including optimization.
You couldn't do that in mainframe computing, but these days I deploy to
production 10 times a day, and depending on how I deploy, I can see problems
fairly quickly and iterate without affecting most of my users. That's
definitely not a paradigm you would see back then, when you would have to
schedule time, etc.

~~~
kristopolous
I _did_ do mainframe programming and I do webapps and phone apps these days,
so I've kept up.

It's gotta be both approaches (yours and the article's), but the real problem
is the demand for programmers is so high and the barrier to entry is so low
that the quality has suffered: the quality of the libraries, build systems,
documentation, designs, interfaces, all of it.

You can't magically drag all modern programmers through the mud using line
editors and 16 bit processors for 2 years to learn everything the painful way.

I honestly don't see a way out. We're in the eternal September of software
development, and it's all downhill from here unless we make some commitment
to raising the barrier to entry and decreasing the incentives, making it hard
again.

~~~
collyw
"the real problem is the demand for programmers is so high and the barrier to
entry is so low that the quality has suffered"

Interestingly I see a whole world of difference in quality between Python and
Perl libraries compared to JavaScript libraries. OK not all Python libraries
are perfect and not all JS libraries are shit, but in _general_ the backend
stuff seems of a far higher quality.

~~~
kristopolous
There's always a "first" language: the first programming language that's
learned and taught. It was QBasic in the 90s, for instance, then Visual
Basic, Java, PHP for a bit, then Ruby. It's certainly JavaScript now.

The first languages, during their reign as first languages, are always
derided. When something snatches the crown from JavaScript (it'll happen,
however inconceivable it seems), I'm sure things will settle and it won't be
so bad any more.

It's like how at the end of its life, most of the people still using AOL
instant messenger were respectable computer experts. Same idea.

