
Western civilization runs on the mainframe - a2tech
http://fosspatents.blogspot.com/2010/08/western-civilization-runs-on-mainframe.html
======
masterponomo
As a 25-year professional z/os programmer working primarily in COBOL and CICS,
I think the dearth of new COBOL programmers is not the big problem it is made
out to be. Having learned C, Java, SQL, Common Lisp, and Scheme on my own, I
have no doubt that programmers will jump to COBOL when the marketplace starts
offering rewards (higher salaries) for doing so. The outsourcing trend is
holding this back as companies continue to pursue the holy grail of cheap
programmers, but all such efforts eventually run afoul of the need for local
talent with a deep understanding of the application. A good programmer can get
up to speed on COBOL very quickly. Getting up to speed on a large application?
That'll take a few years.

The problem with legacy apps on any platform is not the language (unless
you're talking about some very obscure unsupported language). The problem in
my experience is the tangled mess of new code piled on top of poorly designed
old code. In such systems, a small change can have unintended side effects.
The solution is exhaustive testing (expensive and time-consuming) or a Hail
Mary installation (leading to "testing" in production and yet another reactive
fix). COBOL didn't create this problem and changing to another language won't
fix it. In my shop, we practice test-driven development and deliver high
quality releases. Success on the receiving end of these releases varies along
with the established coding/testing/integration practices at each customer
site.

~~~
thaumaturgy
True, except that a lot of programmers want to follow the "sexy" technology,
and COBOL just ain't that.

~~~
masterponomo
Working on big iron, I get to work with assembler, CICS, MQ Series, and DB2 in
addition to COBOL. No, it's not flashy like web apps, but there's a certain
appeal to bit-diddlers like myself. Perhaps it's the same as the difference
between driving a PT boat and working in the engine room of an aircraft
carrier. I would choose the carrier.

Also, Greenspun's Tenth Rule is in effect in the COBOL world as much as in any
other language.

<http://en.wikipedia.org/wiki/Greenspuns_Tenth_Rule>

The difference in the COBOL world is that you are more likely to work with
people who would not know what you are talking about if you mentioned Lisp or
Greenspun's Rule, or even the word "predicate" just to pick an unrelated
example. So if you are into programming more than you are into business, you
can function in the mainframe world as a sort of playground for exercising
your programming muscle in ways that suit you, even if your colleagues aren't
aware that something different is going on.

For instance, I generate a lot of code using a Common Lisp system I wrote for
my own use. No one knows or cares--all they know is they seem to get reliable
code from me very quickly. If I suggested that more people use the Lisp code
generator, it would not fly because it would require training and might cost
maintenance dollars. So it remains a personal tool.

More: My job involves a weekly task to analyze a release while it is under
development. My predecessor did this manually and took all week. I wrote a
Java/MySQL system to do it, and it takes me all of 10 minutes a week. Another
win for the intrepid programmer.

More out in the open, I recently led development of a rules-based system. The
team did not speak rules, evaluation, predicates, or the general use of
collections. The challenge for me was to design the system using these
concepts, then present it to developers who will never want to learn the
general concepts. Done in record time, by some mysterious process (I broke the
design into small pieces w/o reference to the comp sci terms, gave each
developer a focused task, and tied it all together myself).

Definitely more Sears foundation garment than Frederick's of Hollywood, but I
like it.

~~~
avar
You wrote a compiler in Common Lisp that auto-generates COBOL and your co-
workers haven't noticed this?

~~~
masterponomo
Nay, not a compiler. I wrote a set of libraries in Common Lisp that auto-
generate the text of COBOL programs. It uses a combination of template
selection and textual substitution, all based on a spec written in a
spreadsheet. I work from home, where I am free to Get Things Done with no need
to be seen poking away on the standard 3270 terminal to be considered working.
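A minimal sketch of that template-selection-plus-textual-substitution approach, in Python rather than Common Lisp (all template and field names here are invented): each spec row, as if read from a spreadsheet, names a template and supplies the values to splice into it, and the generator emits COBOL source text.

```python
# Toy code generator: pick a template by name, substitute spec values.
from string import Template

TEMPLATES = {
    "read-loop": Template(
        "       PERFORM UNTIL EOF-$file\n"
        "           READ $file-FILE\n"
        "               AT END SET EOF-$file TO TRUE\n"
        "               NOT AT END PERFORM $para\n"
        "           END-READ\n"
        "       END-PERFORM"
    ),
}

def generate(spec_rows):
    """Each spec row: (template name, substitution dict), e.g. one spreadsheet row."""
    return "\n".join(TEMPLATES[name].substitute(subs) for name, subs in spec_rows)

spec = [("read-loop", {"file": "CUST", "para": "2100-PROCESS-CUST"})]
print(generate(spec))
```

The real tool is surely richer, but the core trick is just this: the spreadsheet drives template selection, and substitution fills in the file and paragraph names.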

------
jakevoytko
There's no pressing reason for a company to toss its legacy mainframes. What
VP would sign off on replacing core systems that run an international
corporation? The downsides are infinite! I mean, are you insane? Just throw
money at an "expensive" COBOL programmer and get out of my office!

I grilled an employee at a Major Package Shipping Company a few years ago
about their infrastructure, and it's pretty stereotypical: a few mainframes
that were coded decades ago in COBOL and an expensive team of legacy
programmers performing surgical tweaks. They're porting unimportant side
systems to modern hardware + languages (ostensibly so the mainframe
programmers can focus on the major systems), but there are no plans to replace
the core. Ever. Replacement systems just aren't good enough to consider
switching the heart of their company. And so it will continue until there's a
compelling storyline for dropping mainframes.

~~~
mey
These corporations may not see it, because they do not understand it, but
having such legacy systems is a risk to their operations. The longer they wait
to migrate, the more expensive it will be, and the harder it will be to
attract talent to the project. There may come a point where they can no longer
find anyone to support their hardware or software, forcing them to stagnate
while competitors who are not tied to a system they can't modify provide
better or different offerings.

My understanding of business is that you should never stagnate, and always
keep advancing.

People I've talked to who have worked with such systems indicated that they
built giant shims around the systems in other languages to allow them to be
adaptable. In theory this gives them a clear interface definition (it may be
huge, with thousands of method signatures and possibly direct schema access,
but it can still be a clear interface) from which to bootstrap a core system
replacement in another language and hardware system.

~~~
rbranson
He's talking about FedEx. They understand that it's a risk to their business
continuity. The guy who hijacked a FedEx plane planned to fly it into their
datacenter near the airport, which houses the primary mainframe that runs
FedEx's most critical business software. He wasn't an IT employee either, so
it's fairly common knowledge. AFAIK, they are now running in a hybrid
mainframe / client+server model, with most package transactions processed on
both systems simultaneously. Apparently all of the tracking on FedEx.com uses
the
client+server system to back it. Obviously they are taking it slowly and
carefully (it's like a decade-long project) to avoid any screwups.

~~~
Hoff
When you're working at this scale, geographic redundancy of DCs is
commonplace.

I know of several cases where entire DCs were destroyed, and production
continued unabated.

There are commercial platforms explicitly designed for this sort of DC
redundancy over 800+ km spans, where you can have 400 km between various
volumes in your RAIDset.

~~~
mkr-hn
Do you have links? That's a scenario I've always been curious about, and would
like to read about situations where it actually happened.

~~~
Hoff
[http://h71000.www7.hp.com/openvms/brochures/commerzbank/comm...](http://h71000.www7.hp.com/openvms/brochures/commerzbank/commerzbank.pdf)

A moderately technical article on some of the issues that arise, and (for your
question) see the end of the following:

<http://h71000.www7.hp.com/openvms/journal/v1/disastertol.pdf>

~~~
wallflower
Fascinating.

"At 2 a.m. on Dec. 29, 1999, an active stock market trading day, the audio
alert on a UPS system alarmed a security guard on his first day on the job. He
pressed the emergency power-off switch, taking down the entire datacenter.
Because a disaster-tolerant OpenVMS cluster was in place, the cluster
continued to run at the opposite site, with no disruption. The brokerage ran
through that stock-trading day on the one site alone, and performed a shadow
full-copy operation to restore redundancy in the evening, after trading hours.
They procured a replacement for the failed security guard by the next day."

The last sentence.

~~~
mkr-hn
It's a little disturbing that a security guard had the ability to shut the
whole thing down. Is that normal?

I would think they'd keep the power systems under lock and key, or at least
under the control of someone who knew what to do when a UPS starts making
noise.

~~~
throw_away
The big red buttons are often there as a safety measure in case someone starts
getting electrocuted. At very least, though, these very tempting buttons
should be labeled in very dire language that they're not to be pressed unless
either a) someone is about to die or b) you actually know what you're doing.

------
jacquesm
> The mainframe software market is twice as big as the Linux market

That's one _hell_ of a compliment for Linux then.

~~~
njharman
I came here to mention that was the most stunning bit of the article. Not the
fact. But that someone is using the size of the Linux market to demonstrate
how large mainframe's share is.

I'm old and have lived through the growth of Linux: starting as nothing,
through all the FUD, the naysaying pundits, the claims it was communist, MS's
many monopolistic attempts to squash it, etc. Comments like that still make my
jaw drop.

~~~
hernan7
... and we are talking about "market", that is, usage of Linux that involves
an exchange of money, no? So that's even more impressive.

------
dan_sim
In fact, western civilization runs on the mainframe AND excel. As a
consultant, I see a lot of both...

~~~
mkr-hn
More broadly, western civilization runs on entrenched systems which no one has
the will to replace.

Sometimes it's easier to stick with what you know works than to try the newer
thing, and that's probably the reasoning used by the people making the buying
decisions.

~~~
ramchip
Why would Excel need to be replaced? It's good software.

~~~
cadr
It isn't so much Excel as it is spreadsheets in general. The cycle I've seen
is that when a group starts out, they use Excel because it is a rapid tool
that they understand. But then they keep building on that until they have a
very complicated mess on which they are dependent. Spreadsheets don't tend to
have good tools for testing and can be very brittle, so they end up slowing
things down and causing problems.

There are some interesting case studies here: <http://www.eusprig.org>

------
rbranson
Anyone know why legacy mainframe software couldn't be emulated/compiled to run
on a cluster of Linux boxes? Licensing, patents, legal concerns, etc?
Obviously this is not a trivial issue, but it seems as if the amount of data
we're talking about is not immense. Banks exchange and process large amounts
of information, but how does it really compare to what people are doing with
Hadoop or what Google does with MapReduce? I would imagine Google processes
orders of magnitude more information than banks.

~~~
gaius
The simple answer is that you have to have a consistent view of the world.
Let's say that 100 Linux boxes in a cluster == 1 mainframe. Let's also say
you're a bank. Someone connects to node 1 and makes a withdrawal from a joint
account, and someone simultaneously makes a withdrawal from the same account
on node 100. How do you check they haven't gone over their overdraft unless
you can serialize those transactions? And if you can, and make it scale,
congratulations: you're now where IBM was in the '70s.

It's pointless to invoke the almighty Google; they simply don't deal with
problems in this class.
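A toy illustration of that serialization problem (Python threads standing in for cluster nodes; not anyone's real banking code): the overdraft check is a read-check-write sequence, and without serializing it, two nodes can both read the same balance and both approve a withdrawal the account can't cover.

```python
# Two concurrent withdrawals against one account. The lock serializes
# the read-check-write; remove it and the overdraft check can race.
import threading

balance = 100
lock = threading.Lock()

def withdraw(amount):
    global balance
    with lock:
        if balance - amount >= 0:  # overdraft check
            balance -= amount
            return True
        return False

threads = [threading.Thread(target=withdraw, args=(60,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # 40: exactly one withdrawal of 60 succeeds
```

With one lock in one process this is trivial; doing the same across 100 machines, at scale, without it becoming the bottleneck, is the hard part being described above.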

~~~
jtbigwoo
> It's pointless to invoke the almighty Google; they simply don't deal with
> problems in this class

FWIW, I worked at a major bank (top 5 in the U.S.) and listened in on several
discussions between their top tech guys. As mentioned in other comments, their
number one worry was the supply of COBOL coders.

They were terrified of Google and/or Paypal and what would happen if one of
those companies got serious about competing in the banking arena. Most of the
top architects _knew_ that it was possible to build a bank on a huge cluster
of servers. They also knew that there was no way an existing bank could build
the infrastructure and software while also maintaining their existing
infrastructure.

~~~
rbranson
Apparently PayPal isn't in technologically fantastic shape either. Word on the
street is that they still have their core offering written in C using a CGI
interface.

~~~
Shorel
With good unit testing C is very robust and also much cheaper in hardware
footprint.

~~~
rbranson
I wasn't as much bashing C as I was the use of CGI. Most of the time when C is
paired with CGI it's a pretty good indication that the code is chock full of
memory leaks.

~~~
a-priori
If you are using CGI, then those memory leaks will be short-lived, limited to
one request.

~~~
rubashov
Fork is still expensive. It is not sane to fork per request.

------
ssp
Fake Steve on mainframes:

[http://www.fakesteve.net/2009/10/why-ibm-is-in-trouble-
with-...](http://www.fakesteve.net/2009/10/why-ibm-is-in-trouble-with-
antitrust.html)

[http://www.fakesteve.net/2009/10/case-against-ibm-
continued....](http://www.fakesteve.net/2009/10/case-against-ibm-
continued.html)

~~~
vdm
Priceless.

------
mseebach
So that's a $6 billion profit for keeping western civilisation open for
business? I'm not sure I'm buying the premise that that is somehow
unreasonable.

~~~
jacquesm
In fact, at that price it's cheap.

------
wallflower
"According to Maclean, comparing mainframe performance against, say, Oracle
running on an x86 server platform may yield similar results in a low usage
scenario. However, a mainframe will prove its value as that application set
scales up in volume. The difference isn’t in the CPU, which (in non-IBM cases)
is the same on both sides of the comparison. Rather, the difference is in the
architecture of the operating environment."

[http://www.processor.com/editorial/article.asp?article=artic...](http://www.processor.com/editorial/article.asp?article=articles/P3130/32p30/32p30.asp&guid=)

------
yummyfajitas
Obviously there is a business use case for mainframes, these can't all be
legacy systems. But what is it? Can someone who has worked with such a system
shed some light?

~~~
Kadin
They're powerful, reliable, and compared to writing software for an equally-
sized cluster or distributed system, (once you're in the right mindset) easy
to write for. Or alternately, _not_ to write for -- you can just keep
migrating the same ancient COBOL up to newer systems, without ever changing
it, and the stuff just keeps ticking.

The business case is that they work, and have a great track record.

Overall the biggest selling point that I've seen is probably reliability.
There's a perception that mainframes are more reliable than commodity-parts
systems. In truth I think this is confirmation and sample bias; if you run a
business that has a mainframe (very expensive, built for reliability, software
has had 20+ years of debugging) and a bunch of servers (inexpensive, cost-
optimized, software is constant work-in-progress), it's always going to seem
as though the mainframe is unbreakable and everything else is dangerously
unreliable. It's as much a mindset issue as an architectural one.

So many businesses will pay the premium for mainframes as their "core"
systems, and then hang a lot of commodity systems off of it to provide access
or interaction with the mainframe data.

~~~
thaumaturgy
You're exactly right.

I worked on a mainframe for a major Bay Area school district during and
shortly after high school, as a computer operator and COBOL programmer.

I was there for a couple of years; rebooting the mainframe was unheard-of. It
never seemed to develop memory leaks, it didn't get more quirky after it had
been running for a long time. Since it wasn't accessible to the internet (or
even most of the network), it didn't need to have any time spent on it trying
to secure it against the latest Windows bug. It never required software
updates, other than the ones that we coded ourselves and deployed without
having to reboot anything. The database didn't fall over when we ran a payroll
job or report cards.

But, best of all -- and I love telling this story, because I think it really
illustrates something that the techs using all the modern stuff just don't
"get" -- it had the best recovery systems I have ever seen, hands down.

So, one day there was a power outage. We had backup power systems, but like a
lot of data processing departments we were just a couple of overworked people
and they hadn't been checked in a while. They didn't last as long as we were
expecting, and the mainframe kicked off. Around that time, I'd been playing
around with Linux and such, and I had some idea of just how ugly things got
when a complex operating system powered off in the middle of doing things.

I was a little bit stressed out. The other guy, a clean-shaven greybeard,
wasn't.

When the power came back on, the mainframe booted up -- quickly -- and resumed
its operations from pretty much the exact point at which the power had gone
out. It didn't miss a beat, none of the data it produced was erroneous.

That was in '98 or thereabouts.

To say that mainframes are more reliable than the popular alternatives is an
understatement. :-)
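The mechanism behind that kind of recovery is essentially journaling: record each intended change durably before applying it, then replay the journal on restart. A minimal write-ahead-log sketch in Python (invented names, nothing like the actual mainframe implementation):

```python
# Write-ahead logging in miniature: journal first, apply second,
# replay on restart to land exactly where the power died.
import json
import os
import tempfile

LOG = os.path.join(tempfile.gettempdir(), "wal_demo.log")
if os.path.exists(LOG):
    os.remove(LOG)  # start the demo with a clean journal

def apply_update(state, key, value):
    # Journal the change durably *before* touching in-memory state.
    with open(LOG, "a") as f:
        f.write(json.dumps({"key": key, "value": value}) + "\n")
        f.flush()
        os.fsync(f.fileno())
    state[key] = value

def recover():
    # Replay the journal from the top; a crash at any point after the
    # fsync is repaired here, because the intent was already on disk.
    state = {}
    if os.path.exists(LOG):
        with open(LOG) as f:
            for line in f:
                rec = json.loads(line)
                state[rec["key"]] = rec["value"]
    return state

state = recover()
apply_update(state, "payroll-batch", "complete")
print(recover() == state)  # a simulated power cycle reconstructs the same state
```

Real systems layer checkpoints, transaction managers, and hardware support on top of this, but the "resumed from the exact point" behavior falls out of the same journal-then-apply discipline.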

~~~
JunkDNA
This is interesting. Clearly it's not just the hardware that makes mainframes
this fault-tolerant. I'm curious why other platforms haven't appropriated some
of the core features and functionality of the mainframe machines? Is it lack
of familiarity? Patents? I'd love for one of my database machines to be this
reliable.

~~~
jff
I think there's probably a few factors involved:

1. Writing completely solid, fully debugged software is Not Sexy, which is
why Linux is getting shinier and shinier but the software still has bugs.

2. You and I have probably been spoiled by working with Unix. Now, I've never
used an IBM mainframe, but I have used VMS, which is definitely more
businesslike than Unix. Working with VMS feels like going to the DMV. Imagine
how it would feel to be living in a mainframe world, right out of IBM's most
conservative, straight-laced era--because I'm gonna bet you wouldn't be able
to play as fast and loose if you had mainframe-style fault tolerance.

------
rbranson
This only works if you exclude the entirety of commodity and stock exchange
business, which runs almost completely on off-the-shelf hardware. I'd say this
is a pretty big part of western civilization. Of course, the I/O requirements
of these systems massively eclipse those of "puny" banks,
logistics/distribution, manufacturing, and insurance companies.

------
GavinB
It's hard to think of it as a monopoly when the same services can be performed
by a rack of servers running other software. For any type of data processing
that mainframes perform, I'm sure you can find examples of similar jobs
running in server farms.

You can't define mainframes as a separate market just because they're
physically large.

~~~
gaius
Well, this is kind of a "the web is the whole of computing" mentality.

You pick up the phone, you get a dial tone, not a fail whale. You put in a
plug, you get electricity. When a bank's core systems go down, it makes the
evening news. It's a whole 'nother world from OMG sharding RoR lol that web
kids think is "scalability".

~~~
rbranson
Web companies operate in a different world entirely. If Twitter goes down,
it's sad, but nobody really cares. They also don't have very much money, so
running lean on development time and hardware is critical to survival. If
Twitter had the IT budgets like banks and insurance companies, things would be
a little different. Of course scaling isn't that hard when you have billion
dollar budgets. Per byte of content processed, Twitter probably operates at
orders of magnitude more efficiency than banks or insurance companies. Each
byte of data HAS to cost almost nothing, or the business can't exist. Nobody
would pay for social networking, so it has to be dirt ass cheap.

~~~
gaius
The fascinating thing is that, including VC, Twitter really did - does - have
a comparable IT budget. They could have just licensed Tibco Rendezvous, which
is how banks and exchanges distribute thousands of price updates/sec, to the
level of this desk at this bank is allowed to see that, but that desk isn't,
and the other desk over there gets it 20 seconds later. Instead they went off
and tilted at windmills, all the while telling themselves that they were
breaking new ground.

~~~
jacquesm
That's very aptly put.

Unless you need many thousands of servers, the majority of funded web
application companies could very well afford the best of what the OTP world
has to offer them. But since that isn't 'sexy tech' they want to re-invent
those wheels.

Twitter has since switched to Erlang, I think they've realised that customer
satisfaction beats new technology any day.

~~~
kristianp
Twitter actually uses Scala, I believe.

~~~
jacquesm
Aye, you're right:

<http://www.theregister.co.uk/2009/04/01/twitter_on_scala/>

Memory error, I should have my dimms checked ;)

------
rythie
Essentially people are locked into those mainframes. It doesn't make sense to
pick them if you are starting from scratch.

I wonder if in 20 years people will be locked into the cloud systems they are
investing in now in the same way.

------
gritzko
Well, a friend of mine writes software for mainframes. He says the platform
has two advantages: customers are totally locked in, and there are enormous
possibilities for kickbacks (no market => no fair price => case-by-case
pricing => guess what). Otherwise, the technology is totally inferior compared
to anything else.

------
zandorg
IBM also makes a lot from their consulting (human mainframes).

------
herdrick
_There are estimates that 80% of the world's data are processed by
mainframes._

Pretty clueless estimates I'd have to think.

------
stralep
Should I learn COBOL 2002, or some earlier version?

Are mainframes COBOL standard compliant?

Is there a free standard document?

------
unohoo
One of my friends who works on a Z mainframe told me that the last time their
mainframe was 'rebooted' was about 4 years ago. How's that for uptime?

------
ronnier
What does Eastern civilization run on?

~~~
maukdaddy
Redframes.

~~~
chunkbot
This belongs on reddit.

------
SwellJoe
This article makes me want to buy some IBM stock. But, it's never performed
very well...

~~~
Kadin
Well, it's mostly factored into the price; IBM's mainframe division isn't that
big a secret.

But like most "blue chips", IBM is probably a reasonably solid income stock.
They have an obvious income stream (software licensing) with a locked-in
market, in addition to their consulting/services divisions.

Whether or not it's too much to pay for the income (compared to similar
companies with license-based revenue models, i.e. Microsoft) is the question
for a potential investor. I'm not sure anyone would regard it as a hot
speculative play...

------
jeberle
Mainframes are very good at their jobs. Where do you think virtual machines,
NoSQL databases, SQL databases, etc., got their start -- about 40 years ago?

<http://en.wikipedia.org/wiki/VM_(operating_system)>
<http://en.wikipedia.org/wiki/VSAM>
<http://en.wikipedia.org/wiki/IBM_System_R>

Page at a time remote interfaces anyone?

