
IBM Launches Z13 Mainframe - zmonkeyz
http://www-03.ibm.com/press/us/en/pressrelease/45808.wss
======
jacquesm
I've found some specs:

\- up to 141 configurable processors

\- new 22nm 8-core processor chip @ 5 GHz

\- 110 GIPS

\- Single Instruction Multiple Data (SIMD) support

\- On chip cryptographic and compression coprocessors

\- up to 10 TB of memory (configured as 'memory RAID', RAIM)

PDF is here:
[http://public.dhe.ibm.com/common/ssi/ecm/zs/en/zsd03035usen/...](http://public.dhe.ibm.com/common/ssi/ecm/zs/en/zsd03035usen/ZSD03035USEN.PDF)

~~~
jlarocco
I always feel like mainframes are off in some weird parallel universe.

> Single Instruction Multiple Data (SIMD), a vector processing model providing
> instruction level parallelism, to speed workloads such as analytics and
> mathematical modeling. For example, COBOL 5.2 and PL/I 4.5 exploit SIMD and
> improved floating point enhancements to deliver improved performance over
> and above that provided by the faster processor.

Most people would consider COBOL and PL/I to be laughably out of date
languages from the 60s and 70s. But here's a new machine with 10 TB of RAM,
and the release notes are quick to point out new features for COBOL and PL/I.
Go figure.

~~~
m_mueller
> I always feel like mainframes are off in some weird parallel universe.

Well, they are. These things are built for a specific audience, and it so
happens that audience is still willing to spend money on this; otherwise it
wouldn't be made, especially not by IBM, which is otherwise quick to disband
business units that don't make heaps of money.

In my research I'm using an even older language: Fortran. Believe it or not,
in a field that is less narrow and specific than you would think[1], there's
still no real replacement for (modern) Fortran.

[1] numeric applications that need to get close to the hardware's peak
performance, written in a higher level language than C that is usable by non-
CS-graduates.

~~~
rl9
It's not just learnability. In some cases, Fortran can be faster than C,
since its compilers may assume procedure arguments never alias by default
(roughly what restrict buys you in C).

Not to mention that most compilers have OpenMP support built-in. Also, there
are tons of heavily-optimized Fortran math libraries that make it easy to
take advantage of big machines and big clusters.

Those scientists that are still using Fortran after all these years know what
they're doing. It's not that they're ignorant of new technology. A
well-informed and open-minded CS person would probably end up using Fortran
for these use cases, too.
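To illustrate the aliasing point, here is a minimal C sketch (function name and shapes invented for this example): a Fortran compiler may assume dummy array arguments don't overlap, while in C you have to grant that freedom explicitly with `restrict` before the compiler can vectorize as aggressively.

```c
#include <stddef.h>

/* Fortran may assume x, y and out don't overlap; in C the compiler
   must be told so explicitly via restrict before it can vectorize
   without re-checking for aliasing on every iteration. */
void axpy(size_t n, double a,
          const double *restrict x,
          const double *restrict y,
          double *restrict out)
{
    for (size_t i = 0; i < n; i++)
        out[i] = a * x[i] + y[i];
}
```

Without `restrict`, a C compiler must consider that writing `out[i]` could change `x` or `y`, which blocks some optimizations that Fortran gets for free.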

~~~
m_mueller
> It's not just learnability. In some cases, Fortran can be faster than C,
> since its compilers may assume procedure arguments never alias by default
> (roughly what restrict buys you in C).

Yes. In my experience, you _can_ make C as fast as Fortran, but it's a lot
more work (and you need more knowledge about the machine). Fortran's defaults
are very strong when it comes to crunching numbers as quickly as possible.
Multidimensional arrays with Matlab-like slicing and operations, implemented
as performantly as possible, are enough reason alone to stay with Fortran.

> Those scientists that are still using Fortran after all these years know
> what they're doing. It's not that they're ignorant of new technology. A
> well-informed and open-minded CS person would probably end up using Fortran
> for these use cases, too.

Well, I'm sort of coming out of CS (computer engineering with focus on
software) and I'm using it, so there ;)

~~~
Normati
I always wonder why you can't just add a library to another language to get
most of the features of somebody else's favorite language. If not a library,
couldn't any other language add matlab style matrices and end up being as good
as it, fortran, etc? Do we really need a whole different language with a whole
different set of arbitrary rules just so a few features are easier or faster?

~~~
m_mueller
For Fortran style multidimensional arrays, libraries fly right out the window.
First of all, you need slicing syntax and operators baked into the language.
The compiler needs to be aware of the array's storage order, such that it can
implement operations like A + B optimally for A, B as n-dimensional arrays.
Then it needs to be aware of their boundaries, such that it can align the
memory optimally.

So that leaves language-level support. So far the only contender that I can
see coming up is Julia - and it will take _a lot_ of effort until it can reach
performance levels on par with Fortran. This basically would require a big
push by one of the big software companies - who all aren't making a lot of
money in HPC software anymore. I could see Nvidia picking up the ball at some
point, it would suit them well - but then we're getting into vendor lock-in
again.
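As a rough sketch of the point (names and layout here are illustrative only): what Fortran writes in one line as `C = A + B` on multidimensional arrays has to be spelled out by hand in C, with the programmer, not the compiler, fixing the storage order and bounds.

```c
#include <stddef.h>

/* Column-major 2-D addition: roughly what a Fortran compiler generates
   for "C = A + B", because it knows the arrays' storage order and
   bounds at the language level. In C, the indexing is manual. */
void add2d(size_t rows, size_t cols,
           const double *restrict a,
           const double *restrict b,
           double *restrict c)
{
    for (size_t j = 0; j < cols; j++)       /* outer loop over columns */
        for (size_t i = 0; i < rows; i++)   /* inner loop is contiguous */
            c[j * rows + i] = a[j * rows + i] + b[j * rows + i];
}
```

A library can hide this behind a function call, but it can't give you the syntax, the slicing, or the compiler's whole-expression view of the operation; that's the part that needs language support.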

------
drKarl
While it's an undeniable technological achievement, I don't think it's a good
strategic one.

Nowadays you don't build a single, very expensive monolithic system capable
of processing all of your transactions (as the release puts it, 100 Cyber
Mondays every day).

Instead, you build a distributed system: many cheap, geographically
distributed servers, each capable of far fewer transactions (though still
quite a lot by today's standards). You can spin up new servers as needed and
destroy them when they're no longer needed, so you control your costs when
demand is low, instead of having a very expensive mainframe sitting 90%
underutilized most of the time.

~~~
pbourke
Computers, even these, are cheaper than development and opportunity costs for
a complex system. Especially if that system has been around for 30+ years and
lies at the core of a large business.

~~~
jbergens
But development for mainframes seems to be much more costly than development
on PCs. If you need this type of machine you need it, but it will probably
cost you a lot to work with it over the years.

------
jacquesm
That's about as content-free as these releases come. Nothing about the actual
architecture; clearly aimed at managers rather than techies. IBM does
enterprise sales very well. But of course I'm more interested in the
nitty-gritty details (though I'll likely never be in a position to pull the
trigger on a purchase like this). Funny how Linux is mentioned in the press
release, while other platforms don't even rate a mention.

~~~
spydum
Very true... though I recall the POWER7 stuff being released years ago as a
very tech-heavy event, and it seemed to go by without much of an impact
either (though the AIX guys I worked with were convinced it was
game-changing). I think people who are invested (culturally/mentally) in IBM
will continue to buy IBM (and no doubt be successful). I just don't see them
growing any new markets.

~~~
erglkjahlkh
I have participated in sales, and I did not want to get into hours-long
discussions (boring, and not really important) about what Power/AIX offers,
so I boiled it down for potential customers like this:

If you want the best at an insane price level, buy into Power/AIX. If you are
not really sure, please don't. You need to train administrators (with larger
setups), or prepare to shell out a lot of money; the hardware contains nasty
licensing surprises, the hardware is rather expensive, and so on. When you
get it running, it's insanely powerful indeed, the platform itself almost
never screws up (if you tested properly for Monday parts), and the
virtualization is insanely good in practice.

Some potential customers bought Power/AIX, some didn't. The ones that did are
still customers years later, and to my knowledge pretty happy. I believe IBM
is actually gaining customers. Slowly, but they are. Those that made the
choice knew what they were getting into, and are not going to change their
views.

~~~
pdardeau
> the [Power/AIX] virtualization is insanely good in practice

This is my experience as well. IMO, Power/AIX has the most sophisticated
virtualization available. IBM just has no idea how to market it.

------
ask123
I have been working in the mainframe industry for over 5 years now, and I
know tons of companies (government, banks, insurance, and others) are still
running mainframes. I picked this field because I saw the opportunity: a lot
of the old folks are retiring, and companies are willing to pay a lot of
money to get young folks to join and learn the mainframe. Honestly, it's not
even that hard as long as you are willing to learn. What are your thoughts
about it? If someone was willing to train people, would people be interested?

------
ivan_ah
The part about the in-memory analytics reminds me a lot of the SAP HANA spiel:

    
    
        Transaction Database --> copy --> analytics DB    = slow
        In-memory transactions with built-in analytics    = fast
    

I totally understand the value of analytics and BI applications, but does it
all have to be real-time? And what "mobile analytics" are they going to
compute exactly? Forget analytics, I can _tell_ you what mobile users are
doing right now---they're all playing Candy Crush.

~~~
chime
Their customers are not King.com or most techie startups. Their customers are
insurance companies, banks, and other financial behemoths.

My bank recently alerted me because I swiped my credit card at one gas station
away from home but did not buy gas (the pump was out of gas) and then swiped
again at another gas station in a 3mi radius within 30 minutes (obviously
since I needed gas).

Their system picked up these two transactions, figured out they were both gas
stations, geo-located them to be out of my area, calculated the distance to be
close enough to reason it was just one person (and not my wife using her card
elsewhere), and triggered an alert because the first transaction was really
just a $1 pre-auth with no actual charge.

While none of this is really that complex if you built a fraud-detection
system from ground up with these requirements, imagine running this rule on a
few hundred million transactions per day. Now add in a thousand more fraud-
detection rules, scoring algorithms, and pattern recognition for uncommon
usage. Then hook it up to the mobile app to further reduce chances of
fraudulent usage by tracking user's location, businesses silently checked-in,
and alert preferences. IBM is targeting companies that need this.
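As a hedged sketch of one such rule (all thresholds, field names, and the merchant code are invented for illustration, not taken from any real fraud system):

```c
#include <stdbool.h>

/* Hypothetical transaction record; fields invented for illustration. */
struct txn {
    int    merchant_category;   /* e.g. GAS_STATION */
    double amount;              /* charged amount, in dollars */
    double miles_from_home;     /* distance from cardholder's home area */
    long   timestamp;           /* seconds since epoch */
};

#define GAS_STATION 5541        /* illustrative merchant category code */

/* One rule out of a potential thousand: two gas-station swipes, both
   away from home, close together in time, where the first was only a
   small pre-authorization with no real charge. */
bool gas_preauth_rule(const struct txn *first, const struct txn *second)
{
    return first->merchant_category == GAS_STATION
        && second->merchant_category == GAS_STATION
        && first->miles_from_home > 50.0
        && second->miles_from_home > 50.0
        && second->timestamp - first->timestamp < 30 * 60
        && first->amount <= 1.0;
}
```

The hard part isn't any one rule; it's evaluating hundreds of rules like this, per transaction, against hundreds of millions of transactions a day, with the account's recent history in reach. That's the workload the in-memory analytics pitch is aimed at.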

~~~
DonHopkins
I thought the customer was always king, and nobody ever got fired for choosing
IBM.

------
lancewiggs
It's fascinating to see the rhetoric focussing on mobile transactions being
the main driver of demand. While it's certainly huge, there are still a lot of
other uses.

~~~
pantulis
I was surprised that the word "omnichannel" is missing.

------
easytiger
> z13 is the first system to make practical real-time encryption of all mobile
> transactions at any scale

Or you know, any kind of transaction because there is no difference.

Fuck I hate marketing.

------
smegel
The biggest, baddest Linux hypervisor known to mankind.

------
anentropic
YES BUT WHY IS THE FONT SO SMALL?

~~~
twic
That's IBM's new 9-pixel type process.

------
halayli
what's a transaction in this context?

~~~
jacquesm
[http://en.wikipedia.org/wiki/Transaction_processing_system](http://en.wikipedia.org/wiki/Transaction_processing_system)

~~~
halayli
I know that, but this is a very general definition of a TPS. How many CPU
cycles does a transaction require? It varies by transaction, of course.

So I am wondering why they use TPS as a unit when it can be anything.

~~~
jacquesm
They use TPS as a unit because that's how large banks and other transaction
oriented entities do their provisioning and capacity planning. That's their
'unit', not CPU cycles.

~~~
halayli
But how can they claim they can do X transactions when a transaction's
processing requirement is undefined?

~~~
Someone
There are industry-standard benchmarks that measure TPS. See for example (or
maybe not even for example; it may be effectively the only one in town)
[http://www.tpc.org/information/benchmarks.asp](http://www.tpc.org/information/benchmarks.asp).

And of course, anybody considering buying such a machine will spend quite a
bit on evaluating how it performs under their own load; benchmarks like SPEC
([https://www.spec.org/](https://www.spec.org/)) say something, but not everything.

~~~
halayli
I looked at tpc.org maybe 13 years ago. It never made sense to me as a
measurement unit back then, and it still doesn't make sense to me now. :)

------
_RPM
Literally every device used by consumers today can be traced back to IBM.

~~~
jacquesm
How so? How does an ARM powered smartphone or tablet trace back to IBM?

~~~
_RPM
When you press the main button. BOOM! Interrupt! That is one example. They
even had virtualization in the 80's on System/370, long before VMware came
out.

~~~
graycat
There were the operating systems CMS and CP67, done by the IBM Cambridge
Scientific Center for the IBM 360/67 in, right, about 1967. CP67 was
_Control Program_ 67 and provided the virtual machines. CMS was Cambridge
Monitor System (later Conversational Monitor System), the command-line
time-sharing operating system users saw.

CP67 was intended as a means for interactive, time-shared operating system
development. So, right, you could run CP67 on CP67 -- at one point that was
done 7 levels deep.

The combination CP67/CMS was, for the time, a total dream for a time sharing
system.

Stop _malicious_ code? Sure: on CP67 you could write and run any code, any
instructions doing anything you want with any data you want, and you just
could not bother any other users. So, you could run _malicious_ code safely.

I used CP67/CMS and PL/I from National CSS Time Sharing in Stamford, CT to
schedule the fleet at FedEx. Founder, COB, CEO Fred Smith's remark about the
output was "Amazing document" "Solved the most important problem facing
FedEx". The Board was pleased and a nice chunk of funding was enabled. It
literally saved the company. No, Fred never gave me my promised stock, that
once he said would be worth $500,000. Add a few zeros for now.

Since then, CP67 has been called VM, and for years IBM's internal computing
was done on about 3600 mainframes around the world, all running VM/CMS and
connected by VNET, which was a lot like the Internet except the
communications were over bisync lines and the routing was done by the
mainframes themselves. No, that setup didn't depend on Systems Network
Architecture (SNA). Yes, there were a lot of fora!

The advantages of running on VM were too good to pass up, so eventually
essentially all production IBM mainframes were running their operating systems
as _guests_ on VM on the _bare metal_.

In virtual machines, IBM was way out in front.

~~~
pjmlp
They still are.

When I started learning how OS/400, now IBM i, works, I was quite interested
to see the execution model: everything stored as bytecode, with the kernel
doing AOT compilation at install time.

Something that is kind of being done in the Android and Windows worlds, and
was attempted in Oberon and Inferno, but not to the extent that OS/400 does
it.

------
feld
The "encryption" buzzword is worthless without details. And I wonder if
they'll stand behind their product if their crypto flavor of choice is broken
and needs to be changed. Otherwise that's an expensive brick of swiss cheese.

~~~
wmf
[http://www-03.ibm.com/systems/z/advantages/security/zec12cry...](http://www-03.ibm.com/systems/z/advantages/security/zec12cryptography.html)
(previous generation)
[http://www-03.ibm.com/security/cryptocards/](http://www-03.ibm.com/security/cryptocards/)

One reason for industry standards is that they distribute risk across the
whole industry. So if NIST crypto is broken the entire industry goes down
together; one company wouldn't have a competitive disadvantage.

