
IBM Z mainframes revived by Red Hat, AI and security - rbanffy
https://searchdatacenter.techtarget.com/news/252487632/IBM-Z-mainframes-revived-by-Red-Hat-AI-and-security
======
reacharavindh
Sure, IBM System Z is a marvel to work with, and it tickles an engineer's
fascination to be able to hot-swap CPU banks and memory banks on a live
system without downtime! I had my mind blown doing exactly that with a test
z10 in 2008.

However, the big System Z is surrounded by a lot of marketing and sales people
who only know how to hype the shit out of everything. You talk to them for 2
minutes and they bring up Watson, just like in the linked article, for no
logical reason. A big percentage of your job is cutting through the bullshit if
you are dealing with IBM, particularly the System Z crowd. Unfortunately.

As expensive as it is, it might still be a crown jewel of enterprise computing
even in the modern era, if the ecosystem around it evolves toward modern
practices and cuts out the bullshitters.

I have run a few thousand Linux VMs under z/VM (the hypervisor) on that z10
hardware, using the specialty processors made for running Linux (IFLs). It was
lovely, albeit inaccessible to the masses.

~~~
neverartful
I completely agree with you. I have been fascinated and astounded by the
capabilities of System Z (and also System P, in my case), yet utterly
flabbergasted and disgusted by the incompetence of IBM in business dealings.

For example, the hypervisor on the P series has been around since the
introduction of POWER5, and it allows you to create micro-LPARs with a small
fraction of a CPU's resources. Even back in the POWER5 days, there was
something called entitled capacity that let you specify the minimum amount of
CPU resources your LPAR would receive even if the entire server was being
hammered simultaneously. It's an engineering marvel. Back then it was called
APV; I believe it's called PowerVM these days. I'm sure that most people in our
industry have no idea this capability exists, let alone that it's been around
since POWER5. Sad, really.
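
To make the idea concrete, here's a toy Java sketch of how entitled capacity
behaves in a shared processor pool. This is only an illustration of the
concept, not PowerVM's actual scheduling algorithm; the partition names and
numbers are made up. Each LPAR is guaranteed up to its entitlement, and
uncapped LPARs can soak up whatever capacity is left over.

    // Toy illustration of "entitled capacity" in a shared processor pool.
    // Not PowerVM's real algorithm; names and numbers are hypothetical.
    import java.util.List;

    public class EntitledCapacityDemo {
        record Lpar(String name, double entitledUnits, boolean uncapped, double demandUnits) {}

        public static void main(String[] args) {
            double poolUnits = 4.0; // four physical processors in the shared pool
            List<Lpar> lpars = List.of(
                new Lpar("billing",   0.5, true,  3.0),  // wants far more than its entitlement
                new Lpar("reporting", 0.2, true,  0.1),  // idle, uses less than entitled
                new Lpar("batch",     1.0, false, 2.0)); // capped at its entitlement

            // Every LPAR first receives min(demand, entitlement): that's the guarantee.
            double used = 0;
            double[] grant = new double[lpars.size()];
            for (int i = 0; i < lpars.size(); i++) {
                grant[i] = Math.min(lpars.get(i).demandUnits(), lpars.get(i).entitledUnits());
                used += grant[i];
            }
            // Leftover capacity goes to uncapped LPARs that still want more.
            double spare = poolUnits - used;
            for (int i = 0; i < lpars.size() && spare > 0; i++) {
                Lpar l = lpars.get(i);
                if (l.uncapped() && l.demandUnits() > grant[i]) {
                    double extra = Math.min(spare, l.demandUnits() - grant[i]);
                    grant[i] += extra;
                    spare -= extra;
                }
            }
            for (int i = 0; i < lpars.size(); i++)
                System.out.printf("%s gets %.2f processor units%n", lpars.get(i).name(), grant[i]);
        }
    }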

------
zcdziura
My first job out of college was working at a large financial services company
in one of their departments that, among other things, handles corporate
actions on behalf of its customers. Much of that corporate actions processing
is done as part of a nightly batch cycle that runs on an IBM mainframe. The
system itself was first deployed before I was born and has been continuously
maintained ever since.

Long before I joined the company, that department tried to rewrite the system
to be "modern", porting everything over to Java and having it run across a
couple of servers. This was in the late 00's if I remember correctly.
Apparently this rewritten system couldn't handle the volume of data that the
mainframe can easily process during its batch cycle. However, if they were to
rewrite the system now (with tools available that better facilitate real-time
processing, along with flexible resource scalability), I bet that a new system
would be able to keep up with the demand and cost about the same to run, if not
be a little more cost effective.

Does anyone else have any experience updating and rewriting old mainframe
batch processes to newer systems and architectures?

~~~
harrygeez
I work in a bank and we have a team of people writing microservices on top of
mainframe "programs". I'm not sure how it works on the mainframe because I
work mostly on the front end, but apparently what we think of as modern REST
APIs are just called programs on the mainframe. Anyway, I digress; what we are
experiencing is quite the opposite: operations written in the microservices run
much faster, and we are somewhat bottlenecked by the mainframe.

I also recently came to learn that the mainframe runs jobs in batches, so I'm
not sure how that affects performance, if it even does.

~~~
InfiniteRand
I think this hints at the best practice: establish APIs around the edges of the
legacy system, then section off chunks of legacy functionality, build internal
APIs inside the legacy code around those chunks, and then replace those chunks
with modern systems, making those APIs external (external as in legacy ->
modern instead of legacy -> legacy). Some legacy core pieces might never be
replaced, but those can be contained so the bottleneck is limited (a good
example is genuinely batch operations like nightly email alert processes; it
might be nice to modernize those, but they can be left contained).
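
To sketch what I mean in Java (the names are hypothetical, borrowing the
corporate-actions example from upthread, and this isn't tied to any particular
framework): callers see one facade interface, and each chunk of functionality
is routed either to the legacy adapter or to its modern replacement, so callers
never know which side served them.

    // Hypothetical sketch of the "strangler" approach described above.
    // Callers only see CorporateActionsService; routing is a config detail.
    import java.util.Set;

    interface CorporateActionsService {
        String processAction(String actionId);
    }

    // Wraps the legacy system (e.g., a mainframe transaction) behind the same API.
    class LegacyMainframeAdapter implements CorporateActionsService {
        public String processAction(String actionId) {
            // In reality this would call the mainframe via MQ, a gateway, etc.
            return "legacy:" + actionId;
        }
    }

    // A chunk of functionality that has already been rewritten on the modern side.
    class ModernCorporateActionsService implements CorporateActionsService {
        public String processAction(String actionId) {
            return "modern:" + actionId;
        }
    }

    // The facade: migrated action types go to the new system, the rest stays legacy.
    class StranglerFacade implements CorporateActionsService {
        private final Set<String> migrated = Set.of("DIVIDEND", "STOCK_SPLIT");
        private final CorporateActionsService legacy = new LegacyMainframeAdapter();
        private final CorporateActionsService modern = new ModernCorporateActionsService();

        public String processAction(String actionId) {
            String type = actionId.split("-")[0]; // e.g., "DIVIDEND-12345"
            return (migrated.contains(type) ? modern : legacy).processAction(actionId);
        }
    }

    public class StranglerDemo {
        public static void main(String[] args) {
            CorporateActionsService svc = new StranglerFacade();
            System.out.println(svc.processAction("DIVIDEND-12345"));   // served by the new code
            System.out.println(svc.processAction("TENDER_OFFER-987")); // still served by legacy
        }
    }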

My limited experience with this strategy is that it works, although it is not
foolproof and can be messy. You also end up discovering that assumptions you
made about the legacy software are not correct, and then you need to backpedal
and potentially throw away previous work (ironically, your own assumptions
about how to modernize the legacy code become their own legacy that you need to
work around as the project ages).

It's good if you have some of the original developers still around and
probably worthwhile to give them consulting $$ in order to bring them out of
retirement.

Perhaps the best thing about this strategy is that if you conclude in the end
that it isn't worthwhile, you will still have modernized some sections of the
codebase. Of course, because of this you need to make sure that the individual
section-by-section modernization efforts are themselves a net gain, instead of
just hoping that the final end of the modernization process will be a benefit,
because you might never get to that end point.

Also, you might be able to enhance the product early on by being able to
iterate and improve the sections that have been modernized, instead of waiting
till everything is done to flip a switch and get new features.

The downside is this is a slow and gradual process. In the meantime your
developers are filling their heads with details of a unique system that is
hopefully going to go away never to be seen again.

If you have specs and documentation and tests and all of that good stuff, it
might be better to just start from scratch, and build things up according to
requirements.

~~~
jacquesm
That's the way to go about it. It won't be cheap or fast, but you'll get the
job done eventually.

------
throwawayhacka
I think a lot of people on HN just hate IBM for no real reason. An economy
doesn't have to be composed entirely of scrappy startups and bleeding-edge tech
(and IBM does do a lot of great systems research). It's super odd to me that in
such a 'rational' community there's a ton of brand affiliation - something
like nationalism reduced to the tech industry.

Ultimately, you actually often need something like an IBM to be around to
acquire your small company so you can run away with the money.

~~~
Lramseyer
> I think a lot of people at HN just hate IBM for no real reason

I disagree. From a marketing perspective, they are a company that is known to
over-promise and under-deliver. Their marketing is vague, pie-in-the-sky stuff
without much to show for it in the market. Watch this IBM ad and tell me what
it is that this company actually _does_ :

[https://www.youtube.com/watch?v=gsEwmumEsKU](https://www.youtube.com/watch?v=gsEwmumEsKU)

They are a company that fails to innovate, and fails to be relevant, yet they
put a ridiculous amount of effort into trying to convince people otherwise.

From a business strategy perspective, IBM has spun off or sold off most of
their great product lines and business units, some of which were failing under
IBM leadership and became profitable once divested. Every company they acquire,
they seem to dissolve after a few years.

From an internal culture perspective, they are known for laying off talented,
experienced employees who have been with the company for over 20 years in favor
of younger, cheaper employees. In some departments, they have resorted to
simply not promoting people and letting them leave of their own accord.

That being said, most of my experience working with IBM has been while they
were under CEO Ginni Rometty, who led from 2012 to 2020. That period covered
one of the biggest bull runs in history, yet the stock went from about $205 per
share down to about $120 per share. They have a new CEO now, so things might
change.

~~~
dralley
disclaimer: Red Hat employee, which technically means IBM employee

They do innovate, it's just that the places where they have been putting all
their flashy marketing over the past couple of years have not been the places
where they've been innovating.

They have not one but two custom CPU architectures designed entirely in-house
(POWER and Z), which they've kept competitive over the years (in terms of
hardware, if not price). They're one of the very few entities (corporate or
academic) at the leading edge of quantum computing, and one of the few builders
of high-end supercomputers (the #2 and #3 most powerful computers in the world
at the moment are POWER systems).

The problem is that while they are legitimately extremely good at many things,
those things make up a small fraction of the "surface area" of IBM, and it's
not what they have historically been strategically emphasizing for the past
couple of decades. The average engineer's experience with IBM is more likely to
involve their software products and consulting services and/or sales team than
any of those things I mentioned.

------
Teknoman117
Could someone who has experience with mainframes explain to me: in 2020, if I
were developing a brand new application with no backwards-compatibility
baggage, in what scenarios would I want to choose a mainframe rather than a
collection of off-the-shelf servers? Let's assume I have the financial capacity
to run my own data center.

I'm legitimately curious. I already know how stupid reliable they are in terms
of correctness guarantees and staying up when hardware failure and replacement
occurs, but what truly sets them apart, considering their cost?

At least for us 20-something engineers, I'd wager that nearly all of us have
never had an opportunity to be exposed to mainframes.

~~~
rbanffy
IBM can sell you a LinuxONE machine, which is a mainframe that runs Linux under
the z/VM hypervisor. It's probably the biggest Linux server money can buy, and
it can be carved up into thousands of really nice average servers. You won't
have to worry about network components or latency or unreliable servers or
anything like that - you can treat it as a data center in a couple of racks.

Another interesting feature is that they are designed to be monitored, because
a traditional business model was to lease a mainframe and rent out capacity to
smaller companies that'd pay for things like CPU seconds or bytes
read/written.

~~~
aww_dang
Are there any hosting providers offering VMs?

~~~
lboc
IBM certainly does:

[http://dtsc.dfw.ibm.com/](http://dtsc.dfw.ibm.com/)

They might also know of others who do it too. I must admit I've never heard of
such a thing from non-IBM sources.

~~~
rbanffy
Marist College offers a time-limited z15 Linux VM on z/VM for free through
their LinuxONE Community Cloud project.

------
AlbertoGP
“IBM mainframe sales grew some 69% during the second quarter of this year,
achieving the highest year-over-year percentage increase of any other business
unit. Some industry observers attribute the unexpected performance to the fact
the z15, introduced a year ago, is still in its anticipated upcycle.
Typically, mainframe sales level off and dip after 12 to 18 months until the
release of a new system. But that might not be the case this time around.”

They don’t give any more context to know how unusual that 69% is. Maybe there
is a big jump every year in the second quarter, and this one was just a bit
bigger. I don’t know.

One reason they cite is the increase in online shopping that prompted
customers to pay for activation of the extra hardware that is included,
disabled until you pay, in each mainframe: “Some industries are in slumps, but
online sales are up and that means credit card and banking systems are more
active than normal. They liked the idea of being able to turn on ‘dark’
processors remotely.”

------
alfalfasprout
The demise of mainframes is entirely IBM's own doing. Mainframes are capable
of incredible throughput, compute power, and reliability. They allow you to
design applications with extremely tight SLA's without the complex and bloated
distributed architectures you see spammed everywhere nowadays.

However... then you have to:

1) Deal with IBM's ridiculous sales staff. They're rarely technical and hype
their random software offerings like there's no tomorrow.

2) Straight up purchase these mainframes and put them in a datacenter
somewhere. There's no easy way to just run one of these things on, e.g., AWS,
even if it's dedicated just to you.

3) IBM licensing is beyond stupid. The typical workloads people run today
wouldn't be cost effective purely due to licensing.

4) The tooling is ancient and highly proprietary. People want to be able to
write modern C++/Go/Rust/Java, and if you want to use specialized libraries to
interact with the hardware, that's fine. But the current ecosystem is BAD.

5) How do you even learn to use z/OS? If nobody can just play with it, even in
a free emulator, then why would anyone even consider it?

If IBM bothered to address 1-5 maybe we'd actually see more people considering
mainframes at larger companies. But they're digging their own grave.

~~~
egiboy
It looks like they are working on #4: [https://groups.google.com/g/golang-
nuts/c/lq3CH2Qcqc8](https://groups.google.com/g/golang-nuts/c/lq3CH2Qcqc8)

All points listed are very valid BTW.

------
dcolkitt
Despite decades in software and IT, I know embarrassingly little about
mainframes. Simple question for anyone knowledgeable in the field:

Is there any justifiable reason for a greenfield project in 2020 to choose
mainframes as a platform?

~~~
jacquesm
I/O and reliability. That and only that. Until recently you could have added
'large amounts of memory' to that too, but you can now get TB+ machines for
very little money elsewhere, and most problems an ordinary business will run
into will fit in that footprint.

~~~
neverartful
In some cases, it can be desirable to have a single vendor so that there is '1
throat to choke' (i.e., no one can pass the buck).

~~~
jacquesm
Good point, yes, it can be a way to get out of party A pointing to party B and
party B pointing to party A. That said, I would hate to have IBM be the
highest level contractor for anything that really mattered to me.

Their reputation is no longer deserved imnsho.

~~~
yourapostasy
> That said, I would hate to have IBM be the highest level contractor for
> anything that really mattered to me.

My experience is that with highly skilled staff who are very comfortable using
the IBM troubleshooting facilities of the product, you very quickly (a few days
on normal support cases, 2-6 hours on Sev 1 cases, about an hour on Sev 1 where
you're paying for enhanced support) get to real developers with real source
code access.

The problems come when you aren't using skilled staff. There has been a huge
upturn in support cases I've seen in clients' IBM Support case histories that
are really "professional services/training-through-support" incidents, where it
is painfully obvious the requesting parties don't know the most basic aspects
of the products they're opening support cases on. Both customers and IBM are
shooting themselves in the foot here: customers for leaning on IBM with
unskilled staff, IBM for accepting it. There are no good solutions without
spending more money (on both sides), unfortunately.

Everyone who snarks about enterprise software: the road to enterprise scale
dominance is paved with a maze of twisty little requirements, all alike (on
the surface, then you peel it back and the cockroaches of edge cases come
swarming out). There are more things in IT, Horatio, / Than are dreamt of in
your philosophy.

~~~
jacquesm
> There are more things in IT, Horatio, / Than are dreamt of in your
> philosophy.

That was a fine comment right up to the snark. I've been in IT for 35 years
now, a portion of that on mainframes.

Yes, customers are often also to blame. But IBM is no longer equivalent to
competent. They have plenty of incompetents on staff and do not hesitate to
let those fend for themselves where customer data is on the line. A recent
case involved fairly massive data loss for a customer, and I'm still surprised
that never made the news.

IBM still makes quality hardware, with very good documentation on that
hardware, but the professional services organization is a mixed bag.

~~~
yourapostasy
It wasn't meant as snark; it's a constant reminder I keep telling myself every
day that there is way more in IT than I'll ever have time to learn. All I have
to do, if I ever think I'm beginning to get my arms around it all, is look at
someone like Fabrice Bellard or innumerable others. We only win as a team;
there aren't many such Free Electrons in the world to pass around, and I'm
certainly not such a talent - I just heave against the entropy with the gang
and share like everyone else.

For IBM professional services, you'll have no quibble from me. Very mixed bag
from what customers tell me, and customers nearly have to be domain experts to
make smart purchase decisions on contractors. A non-trivial chunk of big name
professional services placements (not just IBM) are third parties, most of
those third parties get run through mandatory sub-sub-contracting
relationships and there is even a thriving industry feeding off of sub-sub-
sub-contracting relationships.

IBM _Support_ however, IME while it has declined markedly in L1 and marginally
in L2 (you need to get to the L2 leads quickly if you know what you're doing
and want to save time), is still quite good in L3, and if you pay for the
enhanced support, exceptionally good (but it is so expensive that few people
outside the IBM ecosystem have even heard it is an option).

IBM's documentation has always been very thorough, but I've seen a decline in
accuracy and editing (to organize the typically voluminous material) that
corresponds with their gutting of the US technical writing teams, who, back
when I worked with Redbooks, were typically very technically proficient in
their own right. I try to help out these days by insisting on a documentation
APAR when I run across documentation defects in my accounts, but I'm likely in
the minority of users. Ever since they took away the ability to comment on
individual pages in Knowledge Center, they've considerably increased the
friction for users trying to help improve the quality of the documentation.

YMMV of course, just sharing my perspective.

------
krakatau1
I work for a large bank; we are rewriting business logic from CICS to Java or
SQL PL, but Db2 for z/OS stays. It has phenomenal performance and HA.
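
For what it's worth, the rewritten Java side mostly just talks to the same Db2
for z/OS tables over JDBC. Here's a minimal sketch; the host, port, location,
table name and credentials are placeholders, and it assumes IBM's Data Server
Driver for JDBC (db2jcc4.jar) is on the classpath.

    // Minimal sketch: Java business logic reading existing Db2 for z/OS tables
    // over JDBC. All connection details and the table name are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class Db2OnZExample {
        public static void main(String[] args) throws Exception {
            // Type-4 JDBC URL to a Db2 for z/OS location (hypothetical values).
            String url = "jdbc:db2://zhost.example.com:446/DB2LOC1";
            try (Connection conn = DriverManager.getConnection(url, "appuser", "secret");
                 PreparedStatement ps = conn.prepareStatement(
                     "SELECT ACCOUNT_ID, BALANCE FROM BANK.ACCOUNTS WHERE BALANCE > ?")) {
                ps.setBigDecimal(1, new java.math.BigDecimal("10000"));
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.printf("%s %s%n", rs.getString(1), rs.getBigDecimal(2));
                    }
                }
            }
        }
    }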

It also says a lot that relational database models outlive business-logic APIs.

------
pabs3
I note there is a project for open source on mainframes:

[https://www.openmainframeproject.org/](https://www.openmainframeproject.org/)

Also, Debian runs on IBM Z mainframes, for folks who prefer that to Red Hat:

[https://www.debian.org/ports/s390/](https://www.debian.org/ports/s390/)

~~~
zxspectrum1982
openSUSE Tumbleweed and SUSE Linux Enterprise Server also run on IBM Z
mainframes. I don't see how "Red Hat" has revived mainframes; they are not
doing anything that SUSE, openSUSE, Debian and many others aren't already
doing.

------
bobbydreamer
The mainframe stack is very simple: COBOL and Db2. Online work is
CICS-COBOL-Db2; batch is JCL and COBOL with sorts (DFSORT or SYNCSORT) plus
SAS; more modern setups are COBOL/Java with Db2 Q Replication. The scripting
language is mostly REXX.

Over the last 3 years, lots of things have been happening: more offload to
zIIPs than to general CPs, and on the application side more and more Db2 Q
Replication for real-time monitoring, CDC with Kafka as the target, and IDAA
for analytics. After the Red Hat acquisition I now see admins and sysprogs
installing Spark, Anaconda and Jupyter on the mainframe.

Fraud detection on the mainframe used to be rule-based; now some companies are
planning to do machine learning on the mainframe, staying close to the source
data to detect fraud.

The only thing the mainframe cannot support is deep learning, as that requires
GPUs, and there are no GPUs here.

Has anyone's company already started doing ML and Spark on the mainframe, and
what's your experience so far?

~~~
rbanffy
> The only thing the mainframe cannot support is deep learning, as that
> requires GPUs, and there are no GPUs here.

You can have racks full of PCIe slots if you want. I don't think there's
anything that prevents you from adding a dozen GPUs to every I/O drawer if
that's what you want. And, unlike PC-based servers, there's a ton of silicon
dedicated to keeping all the I/O channels fed, so you'd probably have very low
CPU usage.

------
Misteur-Z
IBM missed the 21st century turn, and buying Red Hat will probably not save
everything [1].

Mainframes are just too expensive, even with open-source software from the
Canonical or Red Hat ecosystems. You just can't keep billing for CPU
consumption and selling memory sticks like they're made of diamonds. Running
heavy "modern" workloads will give you crazy high R4HA MSU bills ... the clash
between the relatively cheap commodity-hardware philosophy of the last 10 years
and the IBM way of billing for high-end things is too big.

I would say mainframes are still really good at providing an ultra-wide bus
with sky-high bandwidth, but that's pretty much it. The POWER platform is a lot
cheaper and also good at that.

[1] Earlier this year: IBM’s Lost Decade
[https://news.ycombinator.com/item?id=22224782](https://news.ycombinator.com/item?id=22224782)

~~~
lsllc
If IBM stopped trying to copy AWS/Azure/GCP with their cloud offerings and
instead offered a "mainframe"-based cloud, they might get more traction.
Customers could submit their COBOL/CICS jobs and just pay for the CPU
milliseconds or I/O blocks used, etc., getting cloud scale and elasticity for a
mainframe without having to actually "own" one. I guess that's where AWS is
going with some of their ML & analytics offerings (batch oriented, but not
mainframe based).

~~~
lboc
The 'service bureau' does what you're talking about, and has done for a very
long time.

[https://en.wikipedia.org/wiki/Service_bureau#Histories](https://en.wikipedia.org/wiki/Service_bureau#Histories)

I have no idea about the current level of activity in this sector (mainframe
processing as a service), but I would not be at all surprised to find it
greatly diminished from its heyday, when dinosaurs roamed the raised floor.

I don't think it takes a huge leap to draw a connection between these
businesses and the cloud providers of today.

------
ralphc
Every time one of these articles comes along we hear about fast I/O and how
other hardware can't handle the volume. Why not? What makes mainframe hardware
so special that it can't be duplicated in other architectures?

~~~
temac
The chips are huge and numerous, and you would have to bootstrap a market to
compete; that's hard. Technically it can be "duplicated", but in the same sense
that Apple can make a state-of-the-art processor better than Intel's: you have
to assemble a big first-class team and a shitload of money, and persist for 10
years. If you want to compete on a similar field as mainframes you also need
extreme reliability, and that's even more expensive to design. If you relax the
requirements and make some fairly special-purpose hardware, it should arguably
be "easy" enough to make some fast I/O that can handle the volume; but then
your market will be small - a niche, most of the time.

------
heelix
My first RHEL laptop was a T40 ThinkPad with a 31-bit (not a typo) zSeries
emulator, complete with dongle. I think they spent 20k on the crazy thing. Such
a weird market to sell into.

------
sjg007
Sounds like a great idea. You can have a big iron backend for all your k8s.

I wonder if the math works out to buy or lease one and rent out cloud compute
instances.

------
emptysongglass
Would a Z mainframe be effective for data science and chemistry modeling? I
wasn't able to get a straight answer out of IBM directly.

~~~
galoisgirl
Mainframes are built for counting money; what you want is a supercomputer:
[https://en.wikipedia.org/wiki/Mainframe_computer#Differences...](https://en.wikipedia.org/wiki/Mainframe_computer#Differences_from_supercomputers)

------
2OEH8eoCRo0
Where does this leave AIX Unix?

