
The Eternal Mainframe (2013) - ptx
http://www.winestockwebdesign.com/Essays/Eternal_Mainframe.html
======
Animats
"The Return of Time-Sharing" \- now that's very much real. Amazon AWS is time-
sharing. "Pay for what you use". By changing servers from a fixed cost to a
variable one, more people have to worry about resource consumption in fine
detail. Through most of the personal computing and leased server era, that was
not the case.

~~~
markus_zhang
I have a feeling that a lot of corporations don't care too much about the
quality of their employees' cloud queries. I wonder if they get some sort of
unlimited subscription or something.

~~~
madhadron
At a certain point they tend to start taking it seriously and cracking down on
the bills. Disney had to ask Amazon to find all accounts linked to a Disney
corporate credit card in order to track them all down, as I recall.

~~~
nighthawk648
Oh wow, so you mean an end-to-end monitoring / analytical system of Baap/s can
generate a company millions upon millions in upstream revenue?

I can offer you all the stats you need.

------
DannyB2
It is amusing to note that on minicomputers of the 1970s, what we would now
call a "floating point co-processor" was a hardware option. Even with the
Intel 8086, a "math" coprocessor was a separate 8087 chip.

On minicomputers you used a different version of the language compiler
depending on whether your hardware had this coprocessor or not. Typically you
didn't make this choice yourself; the system administration people made sure
the correct compilers were installed to generate hardware-supported floating
point or software floating point. That tradition seemed to make its way into
PCs. It was a while before you could just assume that everyone had
hardware-supported floating point. Oh, and every language's software floating
point was annoyingly different, so there was no portable way of writing binary
floating point -- you had to convert to/from ASCII.

GPUs are like newfangled hardware floating point external to the CPU, now
gradually becoming part of the main system again.

~~~
reaperducer
_Even with the Intel 8086, a "math" coprocessor was a separate 8087 chip._

Even later than that. The 80386 had the 80387 in 1987, and the 68020 had the
68881 in 1984. The '881 was the first square chip I ever used.

~~~
twoodfin
The Intel ‘486 had the FP coprocessor integrated, but they split the line into
the DX and SX models. The only difference was that the SX had the FP unit
physically disabled.

~~~
kijiki
They still sold an 80487, of course. It was a '486 without the FPU fused off,
and when you plugged it into the "math co-processor" socket, it disabled the
original CPU and took over both duties.

------
pjmorris
FTA:

> Janie Crane: “An off switch?”

> Metrocop: “She'll get years for that. Off switches are illegal!”

> -Max Headroom, season 1, episode 6, “The Blanks”

As a callow 18-year-old in 1980, I suggested to one of my parents' friends
that the computer would be as important an invention as fire. He'd been a
hippie 10 years earlier, and thought I was being childish, possibly because
fire is so vital, more likely because he had a hippie's distrust of the
computer's relationship to authority. In retrospect, he was probably right for
either reason, though it took me multiple decades to catch on. This article
does a nice job of laying out the case for being cautious.

~~~
jacobush
Important, yes. Friendly, maybe no.

------
DannyB2
Back in the early days of BYTE magazine, early microcomputers, etc. I seem to
recall that Steve Jobs coined the term "Personal Computer". Apple didn't
market it to death. But at that time the way these computers were described
was as "micro" or "home" computers. Or even "home brew", because before the
"holy trinity" (Apple II, TRS-80 and Commodore PET), no two computers'
hardware was the same. A real hindrance to mass software distribution.

IBM named its computer the PC, thus co-opting the term and forever being
associated with it. But you have to remember back further.

~~~
kens
Although the article claims IBM invented the term "Personal Computer" and
Steve Jobs also claimed to have invented the personal computer, Alan Kay wrote
at Xerox PARC about "A personal computer for children of all ages" in 1972
[1]. Many of his ideas went into the Xerox Alto (1973) personal computer [2].
The Apple I was introduced in 1976 and the IBM PC in 1981.

[1]
[http://www.vpri.org/pdf/hc_pers_comp_for_children.pdf](http://www.vpri.org/pdf/hc_pers_comp_for_children.pdf)
[2]
[http://bitsavers.org/pdf/xerox/parc/techReports/CSL-79-11_Al...](http://bitsavers.org/pdf/xerox/parc/techReports/CSL-79-11_Alto_A_Personal_Computer.pdf)

~~~
ChuckMcM
A Pinterest page of interest:
[https://www.pinterest.com/pin/239887117628640892](https://www.pinterest.com/pin/239887117628640892)

At one time in the early 70's DEC was calling the PDP-8 "your own personal
computer", stressing that its cost was such that you could dedicate the entire
computer to a single individual, so why not buy several. This as opposed to
the PDP-11 or VAX, which were 'department' and 'organization' level computers.

------
rossdavidh
If you marketed it as a "server farm in a box", you could probably get people
to start buying literal mainframes today.

~~~
nordsieck
> If you marketed it as a "server farm in a box", you could probably get
> people to start buying literal mainframes today.

Probably not. The path of computing has been directed by economics: cost and
its counterpart, volume. There's no reason why any greenfield project would
choose to host on a mainframe.

The administrators are rare and expensive, the hardware is expensive and
single source, and the performance is mediocre.

I can kind of understand why core business processes don't get ported off of
mainframes: the business risk is too high. I just don't understand why anyone
would choose to start a new project on one.

~~~
scottlocklin
> There's no reason why any greenfield project would choose to host on a
> mainframe.

What if it was a lot cheaper and didn't require as many people to manage it?
Certainly the markups on EC2 (and related) services are insane.

~~~
nordsieck
It could be the case that that is true.

If I were a business owner, that is not a bet I'd make.

1\. z-series experts are rare, expensive and typically old. Old does not mean
unemployable, but it does mean a limited future career. Who's going to replace
the current crop of experts? Internal education is a possibility that could
work, but any way you cut the knot, it'll be more difficult than going with
x86 and linux.

2\. z-series hardware sucks. They are low-volume parts whose main value is
their backwards compatibility. If you think the hardware performance is
acceptable today, will it be so in 10 years? Computer performance improvements
have certainly slowed down, but peripheral vendors like nVidia continue to
make solid progress. Even if IBM gets around to supporting them, you'll always
be a 2nd class citizen compared to the PC platform: drivers, bus availability,
etc. will be half baked or late.

3\. z-series is off the main stream of computing. Huge numbers of talented
software developers are building great open source software for linux (and
other OS's) on x86. I don't know if administration on z-series can be done
better or less expensively than on x86 today. I feel confident in saying that
there will be no contest in 10-15 years. The pace of innovation is enormous,
and IBM is not a company that can credibly make a commitment to bring that
innovation to the mainframe.

4\. Yes, mainframes can run linux and do various sorts of emulation. I don't
think that that is cost effective. I'm not alone in that assessment: if
running mainframes was a cost effective way to run linux, I'd expect that to
be the way big internet businesses run their internal data centers. They
don't.

5\. You'd have to deal with IBM sales. At least with x86, you have multiple
sources for parts, and if you're big enough you can do contract manufacturing.

6\. The markups for EC2 (especially the network transit) are really high.
There is a continuum from Lambda and App Engine, through EC2, to dedicated hardware
rental like OVH and finally running your own. That is way more options than
you have running on a mainframe: certainly way more vendors.

P.S. Huge fan of your writing.

~~~
jl6
Why do we think the markups on EC2 are insane? Aren’t they comparable to Azure
and Google?

~~~
scottlocklin
If you've ever worked at a firm with serious calculation and data loads (aka
data science at scale), they almost always end up with big hardware in a data
center. Because markups on EC2 (and google/azure) are insane.

~~~
jl6
If the cost of all cloud services is similar, then maybe that’s just what it
costs to run a cloud service? That’s not high markup, that’s just cloud being
more expensive than on-prem. Or are you suggesting price collusion to maintain
artificially high profits?

~~~
scottlocklin
The latter is obviously the case. Look at their profit margins! You don't get
those kinds of margins on commodities unless you're price colluding.

Cloud trivially can't be more expensive than on-prem. Cloud is just on-prem at
scale where they sell you the leftovers.

------
api
The wheel of reincarnation is so called because it turns, and it will probably
turn again. The new buzzword "edge computing" refers to the use of computers
close to the user to do what was once done in "the cloud." Peer to peer is on
the rise too, linking "edge" devices directly together. Voila, PC revolution
2.0 but with new terms. Soon we will reinvent the BBS, shareware, etc.

~~~
ptx
With edge computing, though, the computing is done under the control of
whoever controls the mainframe even though it's physically close to the user,
so it's quite different from personal computing.

~~~
api
For now, but edge computing diminishes the role of the mainframe. This makes
it easier to replace the mainframe. Eventually someone will do that.

Previous PCs came from chips and designs, like the Intel 4004/8008, the 6502,
and the Z80, that were originally built to power terminals for mainframes.
Today's handheld dumb terminals for mainframes are built with ARM64 chips that
are approaching "desktop" performance levels, just like those old
dumb-terminal chips started to approach useful performance levels before
someone realized you didn't need the mainframe anymore.

... and so the wheel turns.

I think the biggest current barrier to de-cloudification of mobile devices is
the heavily and cryptographically locked down OS. That's a software thing, not
a hardware thing.

------
bulka
I get the overall sentiment; it is thought-provoking. At the same time, one of
our clients has an on-prem AS/400 with only half of its CPUs enabled by IBM.
The same client moved payloads between two cloud providers and one on-prem
option in the last 18 months. Same same, but different.

------
gumby
At an architectural level a number of mainframe elements have returned, such
as I/O and microcode AKA writable control store.

One of the defining features of the minicomputer, and later microprocessor,
was that the CPU did all the I/O. In mainframes of the 60s and beyond, the I/O
devices had their own small processing units (to be distinguished from the
Central Processing Unit), so that if you wanted to write a group of blocks to
a tape drive you could set them up and then send the tape drive controller a
block of commands to make it happen and DMA the data.

Meanwhile, early machines (canonically the Alto in this case) bit-banged
Ethernet interfaces directly. The early UARTs were cool in that you could
actually get them to do something with the bitstream to/from the terminal
(i.e. buffer up to 8 bits at a time: wow!).

Half duplex in your terminal came from blocks of terminals attached to a
terminal controller (HDX existed in the teletype system before computers, but
this channel controller model is why we still have it in our modern terminal
drivers).

------
jgalt212
The cynical side of me thinks the worst side of the cloud represents another
front in the Civil War over General Purpose Computing.

[https://boingboing.net/2012/08/23/civilwar.html](https://boingboing.net/2012/08/23/civilwar.html)

------
markus_zhang
I'm wondering whether there is a way to connect to a real mainframe, not
through Hercules, just to play with it and maybe do some good programming on
it.

~~~
lboc
IBM runs an annual 'Master the Mainframe' competition for schools, but also
gives away year-long free access via the 'Learning System':

[https://www-01.ibm.com/events/wwe/ast/mtm/audit.nsf/enrollal...](https://www-01.ibm.com/events/wwe/ast/mtm/audit.nsf/enrollall)?

~~~
markus_zhang
Thanks man I'll take a look, sounds very interesting!

------
aarestad
Nitpick: (2013)

The quote by Solzhenitsyn really cuts deep.

~~~
onemoresoop
I'll add the whole thing here:

 _If only there were evil people somewhere insidiously committing evil deeds
and it were necessary only to separate them from the rest of us and destroy
them. But the line dividing good and evil cuts through the heart of every
human being. And who is willing to destroy a piece of his own heart?

-Alexander Solzhenitsyn_

------
Causality1
This user has spammed this same page five times including twice in the last
eighteen hours.

~~~
ptx
Who, me? I can assure you that I didn't, but I saw it on Reddit, so others
might have submitted it as well.

