
IBM introduces z15 mainframe - rbanffy
https://fuse.wikichip.org/news/2659/ibm-introduces-next-gen-z-mainframe-the-z15-wider-cores-more-cores-more-cache-still-5-2-ghz/
======
myrandomcomment
In reading the comments here it is clear that there are some that do not
understand the true difference between running a workload on a rack of servers
vs a mainframe. In the case of something like a x86 cluster you need something
you can chuck. That is not necessarily true in with a mainframe. You run a
"job" and it sorts it out for you. You can even walk up to the mainframe and
pull out the cpu the job is running on and nothing happens. It keeps working.

Before anyone starts a huge debate about everything I said being wrong, please
understand I know it is not exact. This is just a baseline sketch of one of the
main structural differences in the HW/OS that everything else is derived from.

~~~
bogomipz
>"In the case of something like a x86 cluster you need something you can
chuck. That is not necessarily true in with a mainframe."

What is meant by "chuck" here? To throw? If so, what is the thing that needs
to be chucked exactly in the case of x86 vs mainframe? Thanks.

~~~
oojuliuso
Likely a typo. Probably meant chunk as in batch processing.

~~~
sigzero
Maybe not. Mainframes have hot-swappable motherboards.

------
khanguy
FWIW, IBM hosts a coding competition called Master the Mainframe to help
students learn mainframe programming on the latest systems. (Note: I'm not
affiliated with IBM.)

[https://www.ibm.com/it-infrastructure/z/education/master-
the...](https://www.ibm.com/it-infrastructure/z/education/master-the-
mainframe-contest)

~~~
eatonphil
That looks awesome. Passed it along to some folks at NYU. I wish _I_ could
join. Surely they don't have so much demand they'd need to restrict this to
high school and university students... In general if they're willing to do
these programs why not provide training/re-training for adults in need of
work?

~~~
detaro
To take part in the competition parts you need to be a student, but everyone
can use the "learning system", and from what I've heard that includes all the
same exercises and tasks.

~~~
rbanffy
And it's a lot of fun to learn a technology that shares almost no common
ancestry with the Unix boxes most of us use today.

------
ljoshua
What does one use a mainframe for these days? I'm assuming projects that
require massive parallelism, like research projects or engineering modeling
and computation?

For those of us with only exposure to PCs and commodity servers, I'd be
interested to know who has encountered one in their day to day, and how that
experience differed from the norm.

~~~
wolf550e
A mainframe is not a supercomputer. Science is done on clusters of x86 chips
and NVIDIA GPUs with custom interconnects and Linux.

A system that runs a bank or an insurance company and wants good availability
can be built in one of two ways: you either spend money on software that deals
with the hardware being unreliable and save money on hardware (the approach
pioneered by Google), or you spend money on hardware that promises to be
highly reliable and save on software.

No new player believes mainframe (extremely expensive hardware) is cost
effective, they all use commodity hardware.

A bank that needs to run binaries from the 70s for which they don't have the
source code can keep paying IBM and not investing in reverse engineering the
binary and implementing it in Java.

A bank that has a billion lines of COBOL can compile it to run on the JVM on
commodity hardware, run it in parallel with the mainframe for a year to
validate, and then switch over to the new system and stop overpaying for
hardware. But that sounds risky, so they keep paying a million dollars for a
system with the same performance as a fifty-thousand-dollar server.

~~~
rjeli
Is it really common for banks to run legacy applications without sources? How
would they know or trust the behavior?

~~~
tyingq
I can't speak for banks specifically, but for sure lots of old stodgy Fortune
500 companies have critical software running for which they have lost the
source code. Similarly, critical closed source software for which the vendor
no longer exists. And things like servers running that they are afraid to shut
down...they have no idea if the output is used by anyone. It's a mess out
there :)

~~~
mdorazio
Can confirm. The last two clients I worked with (both Fortune 500) used
mainframes for business critical operations. The most recent one didn't
actually have source for some components of their business ops software stack
- they've just been working as-is for about 30 years and no one will touch
them.

I was told they did an assessment a year or two ago to price out what it would
take to move the business completely off mainframe and onto an x86 stack. It
was in the hundreds of millions of dollars to do so because so much other
software has been built to interact with and rely on the mainframe over the
years that switching off it would be a multi-year effort across every
department in the company. So of course that ROI calculation was pretty damn
easy and the mainframe isn't going anywhere.

~~~
benj111
If you were dependent, to the tune of hundreds of millions of dollars, on
certain hardware, wouldn't it make sense to start the process anyway?

You don't necessarily need to move the new project off the legacy hardware,
just write it in such a way that you can do so easily later.

I'm eliding many details here, but the principle stands.

~~~
reaperducer
Choose one:

- Spend $300,000,000 over 10 years moving to a new system, and hope that by
the time you've done that it's not obsolete.

- Spend $1,000,000 a year on a system that still works.

You could run the system for 300 years for what it would cost to replace the
system.
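
The comparison above reduces to a one-line payback calculation (a sketch using the figures as quoted, ignoring discounting, migration risk, and any residual value of the new system):

```python
# Payback comparison using the figures quoted above (no discounting applied).
migration_cost = 300_000_000         # one-off rewrite, spread over ~10 years
mainframe_cost_per_year = 1_000_000  # keep paying for the system that works

years_equivalent = migration_cost / mainframe_cost_per_year
print(years_equivalent)  # 300.0 -> three centuries of status quo per migration
```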

In spite of the collective wisdom on HN, there aren't a lot of companies in
the world with hundreds of millions of dollars sitting around doing nothing.
Even ones that work on mainframes.

~~~
benj111
I would characterise it as option 3:

- Spend $1.5 million maintaining the current system, but as bits get updated,
keep in mind the system that you'd like to have in 10-15 years' time.

You're going to have to replace the system within 300 years anyway, so you
aren't saving that money. Every feature you add that relies on the old system
is literally technical debt, because eventually you'll have to rewrite it for
the new system.

I'm not even saying you need to have a new system in mind, just keep in mind
that you will be moving to a new system, so code appropriately.

~~~
rbanffy
Since a mainframe can usually be updated to the newest model (and you often
run more than one for redundancy, as some people may need more than 5 nines),
in 300 years the company will have a z150 or so, with millions of quantum
entangled cores made of folded spacetime mesh around a pair of rotating
singularities running at a couple terahertz.

And it will still be able to run all your programs that were written and
tested since the late 20th century, by the kind of organic entity we used to
call "human".

~~~
benj111
Lighthearted retort: And web browsers will still be slow.

Slightly more serious retort: I'm not aware of any brands from 300 years ago.
I'd be surprised if any of IBM's customers survive, let alone enough to keep
IBM a going concern.

Btw when you start folding the space time mesh, terahertz figures just become
marketing numbers, what you really want to know is how many parsecs it can do
the Kessel run in.

~~~
reaperducer
_I'm not aware of any brands from 300 years ago_

You might be, you just don't realize that they're hundreds of years old.

For example, the insurance company Lloyds of London is fairly well-known
around the globe. It's 333 years old.

~~~
benj111
I suppose the brand, yes, but it's not really a company. If you want to go
down that route, the Church of England, and indeed England, are 'brands'.

------
tyingq
The death of the UNIX RISC vendors and also minicomputers creates an
interesting situation where the server "middle class" is gone. We now have
commodity servers where you hardly care about the vendor on one end, and hyper
engineered mainframes on the other. Not much between.

~~~
rudolfwinestock
That's because, ultimately, mainframes and embedded computers are the only
natural computer platforms. I expounded on this at length (yes, this is a
self-plug).

[http://www.winestockwebdesign.com/Essays/Eternal_Mainframe.h...](http://www.winestockwebdesign.com/Essays/Eternal_Mainframe.html)

Long story, short: We will really have to watch out for our privacy because
those two remaining platforms give The Management(tm) an irresistible
temptation to stomp on us.

~~~
tyingq
Interesting essay. Personally, I think AWS might be the new mainframe. You're
guided to a specific set of proprietary services to build solutions, not
terribly unlike the heyday of CICS, VSAM, JCL, MQ, RACF, etc. The playbook of
Lambda, Fargate, AWS Batch, DynamoDB, IAM/Cognito, SQS, CloudFormation, etc,
looks pretty similar.

~~~
VectorLock
AWS is absolutely the new mainframe. The big difference is that when you rent
from Amazon instead of IBM they provide the building the mainframe lives in.

~~~
goatinaboat
In the 1970s there were companies called “mainframe bureaus” that did exactly
that.

------
JohnJamesRambo
I find the mainframe era romantic. I remember in college when I first got
there people sending statistics jobs over to the mainframe.

I wanted a future where people had big mainframes in their basement (like a
furnace) as the sole computing source for their house and terminals in every
room. Instead we got no one using computers and everyone having a dumbed down
phone. :(

~~~
zozbot234
Except that the "dumbed down" phone most likely has a whole lot more computing
power than any of the big mainframes you might have used in college.

~~~
mruts
So I can submit jobs to them and have them do arbitrary computations for me? I
can’t even get easy unfettered access to the filesystem for christ’s sake.

~~~
freehunter
As it turns out, the vast majority of people don't need to do arbitrary
computations on a regular basis.

For those who do need it, you don't need a mainframe sitting in your basement
like a furnace when a Raspberry Pi has 500x the processing power of an IBM
4300 from the 1980s for a fraction of the price, size, and power consumption.

There is absolutely no need to shit on the capabilities of modern smartphones
when "arbitrary computation" devices are so cheap, small, and ubiquitous.

~~~
nineteen999
Comparing apples and oranges. The smartphone or Raspberry Pi may have more CPU
power, but it has far less redundancy.

------
Merrill
IBM z15 (8561) Technical Guide -
[https://www.redbooks.ibm.com/redpieces/pdfs/sg248851.pdf](https://www.redbooks.ibm.com/redpieces/pdfs/sg248851.pdf)

IBM Newsroom - [https://newsroom.ibm.com/2019-09-12-IBM-Unveils-z15-With-
Ind...](https://newsroom.ibm.com/2019-09-12-IBM-Unveils-z15-With-Industry-
First-Data-Privacy-Capabilities)

------
sedachv
A lot of people here keep mentioning mainframes and reliability/uptime.
Mainframes were not particularly great at uptime (it was not a prime concern
for batch processing), until Tandem, which was founded specifically to make
fault-tolerant computers for on-line processing, started shipping systems in
the late 1970s and taking market share away from IBM and the other
manufacturers, forcing them to add fault tolerance.

------
gtirloni
I wonder what kind of machines Amazon is running in AWS to offer huge EC2
instance types. Anyone got insight into that? I feel like they could either go
the Google way (cheaper off-the-shelf parts and a lot of replication) or the
IBM way (scale up).

~~~
orf
I've also wondered this, specifically around the number of vCPUs they offer.
I'm sure there is a percentage of over-commitment here, but how do you offer a
96-vCPU instance?

Someone suggested that they use InfiniBand or something similar to make a
single instance span multiple machines, but I don't buy this. There would be
performance characteristics that would show this, and it would be documented.

~~~
detaro
96 vCPUs = 48-core machine with hyperthreading?

I think all the sizes they offer are available as single machines, although
the really large ones may be somewhat exotic (NUMALink or other interconnects
to get a machine with 8 sockets? Not sure if the top Intel platforms do 8
sockets natively).
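
The arithmetic behind such a vCPU count is easy to sketch (the socket and core counts here are illustrative, not AWS's actual SKUs):

```python
# vCPUs as a hypervisor counts them: sockets x cores/socket x SMT threads/core.
def vcpus(sockets: int, cores_per_socket: int, threads_per_core: int = 2) -> int:
    return sockets * cores_per_socket * threads_per_core

print(vcpus(1, 48))  # 96: a single 48-core chip with hyperthreading
print(vcpus(2, 24))  # 96: the same count from a common 2-socket box
```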

------
jcims
I wonder how many security vulnerabilities persist in the mainframe ecosystem
simply because the skillset and platform accessibility are relatively rare.

------
waterheater
A few years ago, I had the good fortune of working at IBM's Poughkeepsie
location on a mainframe subsystem team. What everyone is saying is technically
correct, but it's not completely accurate either. Note that I do not work
there anymore; these are thoughts I had when I was there and since then.

Consider a large entity that purchases 12 fridge-sized mainframes from IBM for
over $100 million. Who might do that? Airlines, banks, governments, logistics
firms, and others needing high levels of reliability.

To understand why this clientele would use a Z-series mainframe, first
consider what the "z" in the name stands for: "zero," as in zero downtime.
Typical compute providers express their downtime as "#-nines". For example,
5-nines reliability means you're down for around five minutes per year, on
average. The Z-series mainframes are sold as having zero downtime, period. A
remarkable amount of research, development, and engineering effort goes into
achieving this level of reliability. Now, these clients usually perform jobs
which are not computationally difficult (validating a credit card transaction,
for example) but must work, since the economy depends on the availability of
these services. The Z-series mainframe shines in processing these loads of
many short jobs.
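
For intuition, the "#-nines" arithmetic is simple to work out (a back-of-the-envelope script, not anything IBM publishes):

```python
# Back-of-the-envelope: yearly downtime permitted by "N nines" of availability.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.56 million seconds

def downtime_seconds_per_year(nines: int) -> float:
    """Allowed downtime per year at an availability of (1 - 10**-nines)."""
    unavailability = 10.0 ** -nines
    return SECONDS_PER_YEAR * unavailability

for n in (3, 4, 5, 6):
    mins = downtime_seconds_per_year(n) / 60
    print(f"{n} nines -> {mins:.2f} minutes/year")
```

So 5 nines allows roughly five minutes of downtime per year; "30 seconds" would be closer to 6 nines.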

There's a security angle to mainframes as well. Commodity hardware allows for
fast scaling and redundancy. However, commodity hardware also allows for
exploits to be shared easily. Once those exploits are discovered, companies
need to patch, and there's no guarantee the patch will happen. Now, imagine
trying to develop exploits for a system which is not commercially available
(governments could still presumably acquire one), is a completely custom
computer architecture (Z/Architecture, custom compiler, Z/OS, pretty much
every layer below JVM), and has very few design documents available online.
Oh, and consider that, from z14 onwards, any data in the mainframe is
encrypted when at rest. (Decryption/encryption is handled beneath the ISA;
once an instruction is run, the mainframe uses the central key management chip
(tamper-resistance, designed to handle natural disasters, etc.) to decrypt
necessary information. The information is processed, then encrypted again
before the instruction is completed.) The likelihood of a script-kiddie
getting into and exfiltrating data from one of these things is very low.
Hacking one of these mainframes would take an intense, coordinated effort.

Another important component is backward-compatibility. Take IBM's two main in-
house storage protocols, FICON and FCP (FCP is FICON, minus most support for
old systems to get higher throughput). FICON connects mainframes with giant
storage arrays from EMC, Teradata, and others. FICON replaced ESCON, which
replaced the parallel data communication system from the System/360 era. When
a company upgrades their mainframe, knowing that your 20-year-old storage unit
can still talk to your new machines relieves stress. Companies WILL pay for
this level of backwards compatibility, and there's no reason to hate them for
it.

Supporting backwards compatibility has historically not been too much of a
problem for IBM. I worked with a person who took a class in IBM Poughkeepsie's
now-abandoned Education Building on this hot new programming language called C
(this was sometime in the 80's). Multiple people in my department were around
for the development of not just the current generation of IBM tech but those
before as well. The levels of technical depth they had were immense. I've
heard people say, "oh, but that depth is narrow and won't get them jobs
outside IBM mainframes." Perhaps, but in my experience, they don't care. They
build systems the world depends on, whether the users of those systems realize
it or not. I'll also add that in the days of Big Blue, your job was basically
secured. Even after the layoffs of the 90's, IBM still needed to retain the
old talent. (Imagine a company with lots of employees who've worked there less
than 10 years and lots who've worked there more than 30 years. You'd describe
IBM's mainframe division well.) Makes me sad to hear that IBM is
discriminating against their older employees to push them out.

One commenter asks why IBM doesn't have "micro-mainframes" for smaller
companies. For all I know, they could be moving in this direction. At the same
time, it seems like it wouldn't make much sense for IBM to do this. Why deal
in thousands of dollars when you can deal in millions? Why put engineering
effort into building computers for non-critical companies when, as long as you
keep advancing performance and capabilities, your mainframes will provide you
one of the best long-term cash flows possible?

Another commenter said new companies do not consider mainframes because they
aren't cost-effective. I think it's for a different reason: new companies come
and go. Their services aren't that important to the world, but they're trying
to show the world their importance. Because of that, startups whip up an
infrastructure concoction which is inefficient, but that's ok because 1) they
aren't encountering the issues of scale and 2) their workload and information
can run anywhere. They just don't need a mainframe because they don't need
that level of reliability.

Happy to answer other relevant questions you might have.

~~~
cpitman
There is no such thing as a zero-downtime system, and it is a fiction that
makes it difficult to have real conversations about trade-offs with a
business. Moving towards zero downtime requires rapidly escalating costs to
incrementally move the needle.

At a minimum, supporting infrastructure (power/networking/internet/etc) will
eventually fail even with backups. On top of that, no mainframe is going to
work when under water (flooding) or on fire (remember the delta outage?).

~~~
waterheater
[https://news.ycombinator.com/item?id=20977332](https://news.ycombinator.com/item?id=20977332)

> The last time he did shutdown the Mainframe was 15 years ago

And that was a controlled shutdown, not uncontrolled.

If more than fifteen years is your time horizon to validate the zero downtime
claim, that's cool. By that point, though, the system will have proven its
worth.

Right; unreliable supporting infrastructure is the inherent trickiness with
saying the mainframe has zero downtime. The manufacturer isn't being
deceptive, though, if, given reliable supporting infrastructure, the system
will stay online for as long as is stated. It's just not their problem.

Can't imagine a company would blame the manufacturer for downtime when
equipment is shut down to protect it during a flood or earthquake. If a piece
of equipment catches fire, that's another story, but I suppose a zero-downtime
system makes the assumption that it won't catch fire.

------
ThinkBeat
This one was already posted on HN a little while ago.

[http://www.winestockwebdesign.com/Essays/Eternal_Mainframe.h...](http://www.winestockwebdesign.com/Essays/Eternal_Mainframe.html)

The new "cloud," running on racks and racks of CPUs with memory and storage on
nodes, is really close to how mainframes work.

With Google and other cloud vendors making more specialized hardware to deal
with certain processes, we are edging even closer to mainframes. They have a
lot of supporting processing power that offloads the CPU.

Mainframes now can also run Linux :)

------
ksec
Excuse my utter ignorance.

Isn't a mainframe just a powerful server? At least I thought the term meant
(or used to mean) mission-critical and massively powerful. Nowadays I could
get a 2S EPYC 2 with 128 cores and 4TB of memory. What makes an IBM server
with POWER10 any more reliable than a powerful x86 server? After all, many
supercomputers are now running on x86 as well.

Or are we now using the word "mainframe" specifically for IBM products, rather
than for a category of its own?

~~~
abtinf
Mainframes are a category of their own and IBM is the last company in the
category.

Mainframes have spectacular capabilities, like running a computation on
multiple CPUs in multiple data centers, with integrity and survivability built
in so that any software can take advantage of it. They have the lowest
transaction-processing costs of any machine. They detect hardware issues and
phone home to order repairs without user intervention. And on and on and on.

Quite frankly, a lot more companies should be using IBM mainframes than trying
to build a reliable infrastructure on the cloud.

~~~
lanstin
Few orgs want to pay for reliability/quality, whether in hardware or software
costs. They are OK with a bunch of 99.5 or 99.9 availability systems, with no
thought for high-availability deployment.

------
myself248
I'm really curious about this eDRAM structure they're using for cache. If it's
that good, why aren't they licensing it to everyone else? If it works at
smaller process nodes, doubling cache density would be huge.

~~~
gpderetta
IIRC the L4 "cache" in some (IrisPro?) Intel CPUs is made of eDRAM.

------
walshemj
Interesting that IBM is still on 12 nm

~~~
dralley
Basically everyone but AMD and Apple is still on 12nm (or a similar density
process) for scale production.

~~~
mrweasel
This is still a little weird to me, given that AMD uses TSMC to make their
chips. So is it TSMC who is capable of making sub-12nm processors, or does
TSMC depend on AMD's designs being a certain way?

Failing everything else, Intel could pay TSMC to make a 7nm processor for
them, no?

~~~
my123
Intel does use TSMC for some analog chip manufacturing and such. They even
used their services for manufacturing SoFIA Intel Atom x86 CPUs previously.

TSMC is a pure-play foundry which sells wafers to the whole industry.

On 7nm, there's Samsung too making 7nm chips that are actually shipping (in
the Note10(+)).

------
svennek
OMG, Linus has just plugged IBM-z (mainframes)..

[https://youtu.be/xT9F39Pd3og?t=311](https://youtu.be/xT9F39Pd3og?t=311)

~~~
gtirloni
This is surprising. They hope CTOs are watching Linus' videos? Or maybe to
excite the young generation to learn COBOL?

------
anthony_barker
I did some benchmarks a long time ago on Linux on zSeries. It is against the
agreements to publish such benchmarks, but it got me thinking about what they
have done well.

Amazing:

- Longevity - you can run code from 30-40 years ago easily. The only real
operating system at this point to do this. Maybe Linux will be able to say
this in 1-2 decades.

Pretty good:

- I/O is pretty good

- Security - all the datasets were encrypted

Not great:

- Costs

- Open standards

------
mamcx
I'll ask in the opposite direction.

Why does IBM have no "micro-mainframes" or workstations so that small
companies and developers can get in the game?

I see the problem with legacy code, but then the big issue is: how do you
build new code on a platform where the hardware is so expensive?

~~~
avar
There are emulators for z/OS and friends, you can use those on a PC.

~~~
mamcx
Yeah, but emulators are always subpar. And the question is: why not mini-
mainframes?

------
unixhero
Yeah, but what's up with those YouTube ads. They are really something else,
kind of 2001-ish.

------
jdkee
"Nobody gets fired for buying IBM."

------
ngcc_hk
IBM was CTR in 1911, making it a 108-year-old company. That may make it a
different firm than most others, especially computer ones.

------
eclipseo76
What OS typically runs on these mainframes?

~~~
zengid
z/OS

~~~
snuxoll
Or z/TPF or even Linux.

------
xvilka
Will it support Rust? Would be nice to have it working on this platform for
both Linux and z/OS.

~~~
nabla9
Yes. You can run Linux on them. They have really robust and secure (EAL5
certified) virtualization system.

------
poseid
What is the price of a single chip?

------
mruts
I’m surprised that no work has been put into Linux such that one instance of
the OS can work across a whole cluster of machines. This seems like the main
selling point of mainframes, and you would think cloud providers would be
interested in developing a replacement. Or maybe they have, but just on top of
the OS? Maybe forcing everyone to use their proprietary crap is a better idea?

But from a technical perspective, how hard would it be to turn Linux into
something that could seamlessly use resources from many different machines? We
already have NUMA machines in which some cores are “different” from others,
and Linux seems OK at managing that. Would it be such a large step for it to
deal with networked CPUs? And the networked-filesystem part seems to be
already solved.

IIRC Plan9 supported all this for both compute and disk a long-time ago.

~~~
zozbot234
Plan9 is _not_ a true single-system-image OS (w/ process migration and the
whole approach you're describing here). There were many attempts to achieve
this with Linux - OpenMOSIX, OpenSSI among others - but they've all been
abandoned and bitrotted.

------
SKILNER
Most of this discussion comes down to "it's harder to read code than write
it."

