
IBM doubles its 14nm EDRAM density, adds hundreds of megabytes of cache - insulanian
https://fuse.wikichip.org/news/3383/ibm-doubles-its-14nm-edram-density-adds-hundreds-of-megabytes-of-cache/
======
baybal2
I wonder if it will ever get a chance to go mainstream.

The memory bottleneck is pretty much the only thing in CPU design that hasn't
seen dramatic improvement over the years. Eliminating it is the only obvious
improvement pathway left that still promises double-digit performance gains.

So we need either very big and very fast caches, or extremely wide and low
latency memory. Both options are quite costly.

Adding on-die DRAM that can run at 500MHz or faster will surely require some
specialty process with a lot of compromises, like the one in the article.

Gluing something like HBM2 to the die, as a second option, moves the cost from
the specialty process to specialty packaging. Not much better.

~~~
jiggawatts
> Gluing something like HBM2 to the die for a second

This is something AMD is already doing for EPYC, and they've already used HBM2
in their GPUs.

So I'm surprised they haven't released any CPU models with crazy huge L4
caches using a few GB of HBM2.

Then again, Intel made a laptop CPU with a huge 128MB cache and their comment
was that it didn't make that big of a difference. I believe the performance
boost was less than 5% for going from 64MB to 128MB.
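One rough way to see why doubling an already-large cache helps so little: empirical cache studies suggest miss rate often follows a power law in cache size (roughly miss rate ∝ size^-0.5, the so-called "√2 rule"). A minimal sketch of that rule of thumb, with purely illustrative constants and no claim to match Intel's actual measurements:

```python
# Rule-of-thumb power law: miss_rate ~ k * size**(-0.5).
# The exponent is an empirical assumption for illustration only.

def relative_miss_rate(size_mb: float, ref_mb: float = 64.0) -> float:
    """Miss rate relative to a reference cache size, under the power law."""
    return (size_mb / ref_mb) ** -0.5

# Doubling from 64MB to 128MB removes only ~29% of the remaining misses,
# and only the fraction of runtime spent on those misses can improve.
print(round(relative_miss_rate(128.0), 2))  # 0.71
```

Under that model, going from 64MB to 128MB eliminates less than a third of the remaining misses, which is consistent with a single-digit overall speedup.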

~~~
baybal2
It's all about how fast the memory is.

960MiB is prodigiously large for such a microscopic chip, but if it "only"
gains a 3-5x latency reduction over external DRAM, it's still very far from a
proper L3 implementation, and even further behind L2.

Make DRAM work at 1GHz+, and then you will see miracles. Imagine a fully
synchronous on-die DRAM that can sit just behind L1, or even be connected to
load registers directly.

The problem is that effective frequencies for a memory round trip haven't gone
up much since the nineties. If you work with 100% cache misses, your memory
will still be working at an effective frequency of around 100 to 200MHz.

~~~
Dylan16807
> If you work with 100% cache misses, your mem will still be working at
> effective frequency of around 100 to 200Mhz

I think you mean 20MHz. Current DRAM is abysmal at random access.
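The arithmetic behind that correction: with fully serialized random accesses, one request completes per round trip, so the effective frequency is just the reciprocal of the round-trip latency. A minimal sketch, where the latency values are assumed ballparks for illustration rather than measurements of any particular part:

```python
# Effective request rate for fully serialized, random memory accesses.
# Latency figures below are illustrative assumptions, not measured values.

def effective_mhz(round_trip_seconds: float) -> float:
    """One access per round trip -> effective frequency in MHz."""
    return 1.0 / round_trip_seconds / 1e6

print(effective_mhz(50e-9))  # ~50 ns random-access round trip -> 20.0 MHz
print(effective_mhz(5e-9))   # ~5 ns (an idealized near cache) -> 200.0 MHz
```

A ~50 ns random-access round trip works out to roughly 20 MHz of effective request rate, which is where the correction comes from; the 100-200 MHz figure would require single-digit-nanosecond round trips.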

~~~
baybal2
Yes, it's more like that if you set aside how it looks from the electrical
side.

Even after the bytes arrive at the DRAM controller's analog side, a lot of
things have to happen before the data gets to a register. This accounts for a
further five- to tenfold increase in round-trip latency.

------
RantyDave
Twelve point two billion transistors. That's absolutely nuts. Does anyone have
a ballpark figure for how much a 'drawer' of four of these things costs?
What's it supposed to run, is this an Oracle/DB2 beast?

~~~
unixhero
Highly dense Docker hypervisor hypervisor hypervisor.

~~~
RantyDave
Nobody's even mentioned Crysis.

------
magicalhippo
Impressive tech. How big is the market for these machines these days? Like,
how many Z15 CPs would they expect to sell (given that each Z15 install can
vary a lot in size)?

~~~
pm90
This is likely catering specifically to IBM's customers who have been with
them since the mainframe days and continue to rely on IBM products (airlines,
banking, etc.). The systems used by these orgs are massive in complexity and
I'm not sure how much they want to invest in refactoring them to run on COTS
hardware... it probably doesn't make sense for them financially.

~~~
nabla9
There is a market for reliability, security, and scale in a compact size, so
they can also get new customers.

Companies like Robinhood may discover that it's actually cheaper to buy
reliable and secure hardware and write software for it than to try to write
fault-tolerant and secure software on top of COTS hardware.

~~~
tinco
Is it really that easy to write reliable software for mainframes? What sort of
abstraction do you get from them that you don't get in commodity hardware?

Nice easy shot at Robinhood of course, but they're a very young company. Do
the big banks really have a fundamental edge that's not just more invested
hours?

~~~
msandford
I think the idea is that your locks and threads are so close latency-wise that
you only need a few to get the job done. Versus trying to figure out how to
parallelize things, some of which are inherently non-parallelizable. But
sometimes you don't know until you try! Ain't life grand?

------
smartstakestime
This is my type of tech. Not glamorous but highly functional.

------
insulanian
Are there any modern initiatives around mainframe? What is IBM doing to
promote it more to new generations?

Are there any startups doing anything related to the mainframe?

I was always fascinated by the tech around mainframe and am even thinking
about moving into that space. I can imagine that a barrier to entry is high...
or?

~~~
kanzenryu2
The barrier is not always high:
[https://www.youtube.com/watch?v=45X4VP8CGtk](https://www.youtube.com/watch?v=45X4VP8CGtk)

~~~
insulanian
Are you trying to say that it is wide and heavy as well :)

Joke aside, how are people actually getting started in this space? And more
importantly, is it "worth it" financially? Is the mainframe skill shortage
("dreaded COBOL"?) a real thing?

~~~
waterheater
> How are people actually getting started in this space?

Depends on your needs. Do you need near-perfect uptime? Most don't; their
products aren't so critical, so they settle for a cheaper option. Do you want
to run a data center? Most don't; it's so much easier to push code to AWS and
let them handle the infrastructure demands.

Startups historically survive because of adaptation and speed. The mainframe
may not fit those operational principles, though that's up to a given startup
(see below for industries which might benefit the most from a mainframe).

> Is it "worth it" financially?

The System Z series excels at transactions and updating records. Industries
which deal extensively with these types of computations are banks, airlines,
credit card companies, stock brokers, insurance companies, and certainly
others. If you're in one of those lines of business, you probably should look
into mainframes. More generally, if you need to maintain system state at all
costs, mainframes are probably a good option.

I would not recommend mainframes for heavy, laborious computing loads
(scientific computing, rendering boxes, etc.).

> Is the mainframe skill shortage ("dreaded COBOL"?) a real thing?

If a company wants to add a new feature to, or fix a bug in, a 40-year-old
COBOL program, they'll likely have a hard time finding a young programmer,
sure. Some older COBOL coders are helping fill the gap while they can. Don't
forget that the System Z mainframes have a level of backwards compatibility
that makes x86 blush; your COBOL program will certainly still run.

I wager most new programs (<15 years old) have been written in Java or C++,
given that z/OS supports many languages besides COBOL:
[https://www.ibm.com/support/knowledgecenter/zosbasics/com.ib...](https://www.ibm.com/support/knowledgecenter/zosbasics/com.ibm.zos.zappldev/zappldev_22.htm)

COBOL is dying, as it would be ridiculous to start a new project in COBOL. But
many legacy systems still work, so why change them if they aren't broken?

I've made a comment in the past about IBM mainframes which you also might find
informative:
[https://news.ycombinator.com/item?id=20978305](https://news.ycombinator.com/item?id=20978305)

~~~
insulanian
Thank you for the answer and sorry for not being clearer, but I was asking
from the perspective of an engineer that wants to enter the mainframe world.

Let's say I want to focus on developing, maintaining and running the software
on mainframes. How do I get "in" and are the skills paid well in comparison to
a typical Java/.NET/C++ developer position nowadays?

You touched a bit on the dying workforce when you mentioned COBOL. Is that
dying workforce a real problem, making it financially lucrative for people
willing to learn that stuff? Or is it just a myth?

~~~
pickle-wizard
IBM has an internal program for training people in Mainframe skills, but for
the life of me I can't remember what it is called. I was going to be part of
it when I graduated college, but then 08 happened and I was moved into Open
Systems support after a RIF.

A Google search showed me this page:
[https://www.ibm.com/case-studies/ibm-academic-initiative-sys...](https://www.ibm.com/case-studies/ibm-academic-initiative-systems-hardware-mainframe-skills)

