IBM z14 mainframe (ibm.com)

Can recommend this link; it's a one-click victory in buzzword bingo:

> As businesses adapt to capitalize on digital, trust will be the currency that drives this new economy.

> With z14 you can apply machine learning to your most valuable data to create deeper insights.

> In the digital economy data is the differentiator. IBM z14 can rapidly derive actionable insights and enable progressively smarter decisions for better customer experiences and new revenue streams.

> IBM z14 can help you accelerate development and delivery of secure, scalable services with new economic models.

With such key features as:

"100% encryption is 100% mainframe"

"Trust – the foundation of digital relationships"

"Facilitate new secure services with Java"

I actually like IBM mainframe tech, but this page is hilarious.


The one important part in all the buzzwords:

> New software pricing tied to business value

So it's like a tax on your business: if you succeed, so do they!

All joking aside, has anyone used mainframes for a greenfield project in the last 20 years? I'm genuinely curious. I've been under the impression that the only reason they're still around is the massive 20-million-LOC COBOL apps hanging around.

I've often thought I'd much rather have a cheap, featureful "walled garden" PaaS for side and small projects, as opposed to wrangling VMs and containers all the time. A mainframe could fit this use case.


>> New software pricing tied to business value

> So it's like a tax on your business: if you succeed, so do they!

You may jest, but the flip side of this seemingly buzzword-laden line in the feature set, from a potential buyer's perspective, is that our company may not have to pay if IBM fails to demonstrate incremental business value at the end of an engagement (aka conditional fees [1]).

Buzzwords can work both ways, thankfully.

[1] https://en.wikipedia.org/wiki/Contingent_fee


Mainframes are all about VMs and containers. They just use different terminology and tech.

I've heard of a few new deployments (more accurately, expansions) where it's cheaper to buy more mainframe than Oracle licensing.


> Mainframes are all about VMs and containers. They just use different terminology and tech.

Exactly. The tech is called LPAR: https://en.wikipedia.org/wiki/Logical_partition



Didn't realize Linux supported LPAR?

LPARs are implemented at the hardware/microcode level (some older implementations even involved actual physical switches in the CPU and I/O buses for partitioning the system), so the OS does not even have to know about it.

The main feature of virtualization on z/Architecture is not LPARs, but the fact that it meets the Popek and Goldberg virtualization requirements perfectly, so there are no artificial limits on how deeply you can nest hypervisors. In fact, even a process in z/OS can almost behave as if it were running directly on hardware.
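
You can actually see that nesting from inside a guest: on Linux on Z, /proc/sysinfo reports the LPAR plus one "VMxx" block per hypervisor level above it. A rough Python sketch of walking it (the field names are assumptions based on the s390x kernel's sysinfo output):

  # Sketch: list the virtualization levels beneath a Linux on Z guest.
  # Assumes an s390x kernel exposing /proc/sysinfo with "LPAR Name"
  # and "VMxx Control Program" fields.
  def virtualization_stack(path="/proc/sysinfo"):
      levels = []
      with open(path) as f:
          for line in f:
              key, _, value = line.partition(":")
              key, value = key.strip(), value.strip()
              if key == "LPAR Name":
                  levels.append(("LPAR", value))
              elif key.startswith("VM") and key.endswith("Control Program"):
                  # One entry per nested hypervisor, e.g. ("VM00", "z/VM")
                  levels.append((key.split()[0], value))
      return levels

  for level, name in virtualization_stack():
      print(level, name)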


You can also do virtualization using the z/VM operating system; each user gets their own VM.

It still stores all the users' passwords in plain text inside a DIRECTORY file that resides on the maintenance userids. I've seen that 100% of the time I did support on one of those systems.


Not sure if you would count these as greenfield projects (I would not), but I am told they make very good database and ERP servers. Our local utility runs the DB2 backend for their SAP installation on a (small-ish) mainframe.

I remember hearing a while back (~10 years) that IBM and SAP were giving fairly generous discounts for customers to move their SAP installations to GNU/Linux running on zSeries mainframes.

None of that is very buzzwordy, but putting such mission-critical software on a mainframe can make sense at a certain scale.


> So it's like a tax on your business: if you succeed, so do they!

So... do you realize this is basically the core of the SaaS/IaaS/anything-aaS business model?

It is sad to see people mocking IBM just because it's IBM.


Probably the only truly new deployment that comes to mind is this: https://en.wikipedia.org/wiki/Gameframe

Whether it actually made sense to implement it that way (it very well might have) is another question.


They make some capacity and performance claims when it comes to modern software:

>> it can run the world's largest MongoDB instance with 2.5x faster NodeJS performance compared to paltry x86-based platforms. It supports 2,000,000 Docker containers and 1,000 concurrent NoSQL databases.


Don't you use mainframes to have one very powerful machine? Like the central system of a large bank that needs to ensure that transactions for millions of accounts book correctly.

If you run VMs anyway, aren't x86 servers cheaper? (Honest question, I don't know much about that.)


Mainframes used to be a lot about concurrent I/O for transaction processing - e.g. booking/check-in/handling of airline passengers or yes, batch transactions in the millions (e.g. end of day calculation within banks or financial information providers).

TCO might be lower with x86 these days, it all depends on your workload.


One possible advantage is faster communication between nodes. If you want to run 2 million Docker containers, you'll probably run a few hundred different servers.

If you choose x86 servers, then you need to set up all the VM cluster management, backup, recovery, migration...

I wonder who would actually use a demonstrably inferior, troubled document store (MongoDB) on a high-end, state-of-the-art, really expensive mainframe.

Yes. There is plenty of new development, in many places (banks, insurance) where there are old dogs that haven't learned any new tricks.

I'd avoid mainframe development as much as possible, and there is plenty of wrangling on mainframes too...


I have a better link. Scroll down to the end, and click on "Redbooks"; there you'll find the technical information we HN readers tend to be more interested in.

In particular this one: "IBM z14 Technical Introduction" https://www.redbooks.ibm.com/redbooks/pdfs/sg248450.pdf (I'm still reading it, but page 5 already has a short list with "the main technical enhancements in the z14 over its predecessor platforms").


> Up to 32 TB of addressable real memory per system.

Imagine how many tabs you can keep open with this ;-)

Also, each processor has 128 MB of L3 cache, compared to 38 MB on the priciest Xeon, plus an L4 cache on top of that.


20-30 I guess...

No matter how much memory you have, Chrome will find a way to consume it.

Here's a diff of the z13 and z14 Technical Introductions:

https://draftable.com/compare/quHdhGnzauCm


You forgot that they mention blockchain technology.

And how!

"Blockchain with IBM Z creates unprecedented levels of trust and transparency, allows alignment with cognitive capabilities and enables deployment in the cloud."


BINGO!

Classy!

Spoke my mind.

> one-click victory in buzzword bingo:

https://vimeo.com/12112636


Machine learning... OK, what ML platform would one run on it? Doesn't it make more sense to use a cloud ML provider and scale infinitely?

Or does it only apply to companies that don't want their data to get out to the cloud?


Yes, this is what genuinely puzzles me the most. I get that mainframes have their place for highly stable IO-heavy processing, but I've never heard anyone claim that they are particularly effective for compute-bound workloads. Especially when machine learning does not require much robustness - you just reboot the cluster nodes when they crash (and they will, because GPU drivers are crap).

Yes, I thought it was commodity systems with dual Xeons, an MFT of memory, and 24-36 TB of DASD per node, with your choice of networking (InfiniBand / 10G / 40G) for Hadoop-style HPC.

You can of course add GPUs; there are some workstation boards specifically designed for this.


It's more about the fact that tons of customer data and transaction history are already on the platform. Why move the data when you can do the science locally?

> ok what ML platform would one run on it?

Not sure, but it will definitely be called "Watson"


Blockchain! Machine learning! Cloud! This is the most Buzzword Compliant mainframe we've ever made!

IBM invented Buzzword Compliance. It's probably patented. There is likely an IBM Buzzword Compliance team. I bet they leverage Watson to ensure compliance.

  I bet they leverage Watson to ensure cognitive compliance for strategic solutionings.
Fixed it for you.

This just in, after the Buzzword Quality Assurance process:

   I bet they leverage Watson to ensure world-class, mission-critical cognitive compliance 
   for strategic cloud-aligned, value-added, blockchain-enabled solutionings.

That is horrible and beautiful, well done.

I'm going to go listen to Mission Statement now: https://www.youtube.com/watch?v=GyV_UG60dD4


I wish they had a patent; then they could stop everyone else from doing it.

It's part of the Department of Redundancy Department.

It doesn't mention XML; is it no longer a thing? :)

Maybe that's an indicator of a technology being at the tail end of the hype curve - IBM marketing no longer mentions it.

One of the cool things about machines like these is that they have a "presence".

Your generic server sits somewhere on a rack. Your supercomputer is a seemingly endless field of racks.

This one is a large black box that exudes computing power.

I only wish they made the front panels more distinctive between iterations.


On a funny note, a city government built a new building, and the architect must have badly misunderstood the server requirements for the city's AS/400 (smaller sibling to the mainframe). The AS/400 sat alone in a corner office with a nice view, given the copious amounts of windows. I think IT finally got it moved, but it certainly had a "presence".

[edit] The architect did get the power and cooling requirements correct, although the cooling did have to work a little harder given the direct sunlight.


I liked the "computer rooms" of the past, where you could see the computer from the offices above it, as if it were a sacred relic.

https://pbs.twimg.com/media/DDgqA5IW0AIeOw4.jpg:large

https://ist.uwaterloo.ca/cs/redroom/morehistoric/MCRMay1977....


Strangely, after I found out how the fire suppression systems in those rooms work [1], I always felt safer in the two-story server rooms. It felt like you had more time to make it to the door.

[1] Look up Halon (old) or FM-200 (environmentally OK).


I wish they sold the case separately ;-)

A couple of years ago I saw a gutted ES/9000 case for sale. I imagined suspending a Raspberry Pi with metal wires in its middle and letting it run Hercules.

It would make an incredibly sexy rack, but knowing IBM they'd charge a fortune for the enclosure alone even if they did.

Fair warning: I am going to ramble a bit.

I kind of wonder. From what I know (which is fairly little, I guess, but bear with me), IBM's mainframe business has profit margins that would make even drug dealers drool. The corollary is, of course, that these machines are very, very expensive.

I wonder what the financial pros and cons would be if IBM tried to loosen up a little on the margin and increase their potential market in return; in other words, sell cheaper mainframes.

I vaguely remember they tried something along those lines during the S/390 era, which culminated in a 4U rack-mountable machine. (I assume, naively, that the smaller physical size and capacity corresponded to a smaller price).

From what I've heard and read over the years, IBM invests a lot of money in its mainframe line, and at least on the hardware side, a lot of really cool technology and research goes into these machines. It's kind of a shame that all this cool technology gets cooped up in a relatively small market niche.


The simple fact of the matter is that hardly anyone needs a mainframe. Most computing tasks these days can be accomplished better/cheaper with big racks of commodity hardware that you can source from anywhere. In addition, the huge price tag (starting at $500k, I believe) serves to reinforce the marketing message, to the corporations that purchase this stuff, that the product is much more advanced, reliable, and feature-packed than the alternative, and that you want it if you can afford it. Why else would it be so much more expensive, right? IBM has likely done these calculations and figured out that their pricing is optimal for extracting the maximum profit from their customer base.

The marketing message is all about 'trust'. Then you see the product images! They clearly forgot to tell their industrial designer what they were going for. It looks like a Terminator's shower stall.

I see a lot of people knocking these things in the comments... but if you want guaranteed employment, you should be trying to get experience with COBOL. I'm telling you.

Truth is, these things are so much more stable and secure that most projects running on large hardware (think Oracle anything) would end up saving money by running on a mainframe.


> Truth is, these things are so much more stable and secure that most projects running on large hardware (think Oracle anything) would end up saving money by running on a mainframe.

Stable, maybe, but probably not secure. The software stack on these was designed well before anyone had any real interest in network security. I saw a YouTube video by a security researcher who took a look at one and found all kinds of inadvisable stuff. He also said there were very few people looking at these from a security perspective.


I saw the same presentation. You're right; they could use a penetration team in the release development cycle.

I was referring to the access control model that's built into mainframes. They were pioneers of mandatory access controls and fine-grained security.

Having fewer people working on them is a definite issue, though, when it comes to the kinds of exposures that mainframe integration into a corporate environment entails.


I'll keep my "non-guaranteed" employment any day over COBOL.

Those might be reliable but you could achieve the same reliability on commodity hardware for a lower price if you write distributed software and plan for failure.

My SIP server software (for a startup I hope to launch soon) can have its power cable pulled and nobody will realise, because the server beside it just takes the process over. For the price of a single mainframe I can put hundreds of servers all around the world and still come out cheaper.


Right. There are some things that are so important that you only want a single machine running them, and you want that machine to have all of the redundancy built in.

I'm not saying that distributed systems aren't great... I make my living off of them... I'm just saying that there are classes of business problems where the risk calculation still comes down on the side of mainframes... and probably always will.

The COBOL comment... whatever. To each their own. I think it's cool. IBM embraced web services in a huge way, so basically any COBOL function can be turned into a web service automatically. The number of people who know how to work on these things is pretty small and getting smaller. You sound like you are comfortable with the risk involved in a startup. I'm not. Lots of people aren't.

I sincerely hope it works out for you though. Good luck. It sounds like you are working on something really cool.


Since you seem to have more knowledge than I do about this, would you have an idea why COBOL is still around and not being phased out? Is there any particular reason they can't make a mainframe that you can program in Python (my poison of choice)? Or does this built-in fault tolerance require some specific language, such that most commodity languages aren't suited for it?

By the way, thanks; I'll definitely need the luck. It will be cool if it succeeds, yes.


The fault-tolerance stuff does not depend on any language. If a fault happens in a CPU (for instance) while running your application, the checkpoint (pre-failure) of that CPU will be copied to a spare CPU and your application will continue as if nothing happened. The application won't even be aware of the sparing.

All mainframes can run Linux. A single mainframe can run thousands of Linux servers. Except for the difference in arch (s390x vs x86_64) you probably wouldn't know you are on a mainframe. So of course any Python code will run just fine.
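
For example, nothing in an ordinary script needs to change; the only tell is the reported architecture:

  import platform

  # Under Linux in an LPAR or z/VM guest this prints "s390x";
  # the same script on a PC prints "x86_64". Nothing else differs.
  print(platform.machine())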

COBOL is used with a different OS, z/OS. The combination of hardware, z/OS, and COBOL can make MUCH more efficient use of the hardware than you could ever get with Python. If you would like an explanation I will give you one.

The current language of choice on the mainframe, for new development, is Java.


Great reply, cheers.

EDIT: I would like to hear more about COBOL and z/OS. Can you explain your efficiency statement?


One of the common things that mainframes and COBOL are used for is financial systems, and that means a lot of fixed-point decimal data. So suppose you have a file containing an account number and a balance, like '9876543210 000000123.45', and you want to add to that balance.

In Python, once you have that record read in, you have to split it apart by some means. That takes CPU work. Python will then store the pieces as new objects, and that takes CPU work. Then you have two strings, and one of them (the balance) must be converted to a number that can be operated on, more CPU work. Then you can do the math, using the Decimal module, which is more work. The result then has to be converted back to a string and written back to the file.
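
To make that concrete, here is roughly the Python version of the task, using the record layout from the example above (a sketch, not production code; the 10.00 deposit is made up):

  from decimal import Decimal

  record = "9876543210 000000123.45"

  # Each step below costs general-purpose CPU work and allocations:
  account, balance_text = record.split()      # split into new string objects
  balance = Decimal(balance_text)             # parse text into a Decimal
  balance += Decimal("10.00")                 # the actual addition
  new_record = f"{account} {balance:012.2f}"  # format back into record form
  print(new_record)                           # 9876543210 000000133.45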

COBOL, on the other hand, understands record formats, understands decimal data, and knows that the hardware supports decimal data. So the same task (add to a decimal number in a record) that in Python may take many thousands of CPU instructions can be done in three: PACK (convert the printable (zoned) decimal number to packed decimal), AP (add packed: perform the addition), and UNPACK (convert back to a printable number). And those three operations are performed in dedicated decimal hardware in the CPU.

The important thing to remember is that in the mainframe world, not only is the hardware priced according to the performance of the machine, the software is also. This is why there are so many performance levels (over 100) to choose from. Users naturally want to buy the smallest machine they can get that will handle their workload, and they aren't interested in wasting any processor power doing things not absolutely required by their business needs. For this reason, you won't find any mainframe shops running traditional mainframe workload in a language like Python.


Who's buying these things?

Compuware has 2,200 customers worldwide. I'd guess most of them have at least one IBM mainframe. They're based in Detroit so I'd say their largest customers are the big three automakers.

eBay was a big IBM customer at one point, weren't they? I'm sure there are a lot of banks and financial institutions, and big manufacturing companies. Most of the Fortune 1000, then?


> Who's buying these things?

Probably most of the companies that were doing anything requiring scale and reliability before the 1990s. These were something like the big data tech of that era.


Government agencies; think tax and social security infrastructure.

Banks, government, reservation systems (like airlines), and the odd company that had tech in the seventies.

BT in the UK used to be one of the biggest IBM users.

Disclosure: I work at IBM.

Mainframes are not my area, but what I've learned makes me salivate. The next time I launch a startup, one of these Z systems is going to be on the CapEx plan.

The lowest-end tier of these systems is something that could be easily afforded post-Series A, and the stunning computational power is almost the least important benefit. These systems provide incredible redundancy, reliability, fault tolerance, and transaction processing speed. Decades of engineering experience have gone into building these.

You could hire engineers to figure out how to make systems run reliably on a cloud provider, but unless you are one of a tiny handful of unicorns that specialize in this, you are going to do a second-rate job of it. And hiring the staff with the skill set to do that is incredibly expensive and doesn't create differentiating value.


Frankly, when was the _last_ time you launched a startup? Why would you waste money on a technology stack which is essentially the physical manifestation of vendor lock-in and rent seeking, all but guaranteed to grow in cost in lock step with the size of your wallet? It's hard enough to find superlative software engineers and operations staff to work on more mainstream technologies; where are you going to recruit staff proficient in a technology stack alien to most everybody, who also want to shoulder the risk of joining your startup venture?

No matter how much IBM wishes it were not so, we've reached a point where you absolutely _can_ (and should) do your computing on a substantially more open platform -- where more than one vendor can provide each layer of the stack that you build on!


> when was the _last_ time you launched a startup?

6 years ago. After that, I was an early (< 10 headcount) engineering hire at a couple companies.

> all but guaranteed to grow in cost in lock step with the size of your wallet?

Check your premises. My general understanding - again, mainframes are not my area - is that IBM almost guarantees falling per-transaction costs over time with Z.

> a technology stack alien to most everybody

Let IBM maintain the mainframe and let your engineers build on top with whatever tech stack or platform they are comfortable with. IBM invented virtualization in the 60s. And there are quite a few more mainframe engineers out there than you might think.


If you're running Linux on System z there is no lock-in; and in raw performance, even low-end System z hardware will (generally) outperform x86-class hardware.

You know how Microsoft is playing nice with Linux and open source these days? IBM did it first, because they knew vendor lock-in was a loser. It may continue to help them squeeze money out of existing customers, but it won't get them many new ones. So they had to pivot on their mainframe strategy.

These days, IBM mainframes can not only run Linux and an open source stack on an LPAR, there's even a model of mainframe that is configured to only run Linux. You get the throughput benefits of the mainframe without having to commit to something that's not movable off the platform.


I always hear people say how reliable mainframes are. But if I have a startup that needs to distribute its application among different regions of the world, how does a mainframe help me with that?

It is still a single point of failure.

Has there been any sort of study, comparing commodity hardware and mainframes in this setting?

I'll not argue a single mainframe is more reliable than a commodity server. It should be, given how much they cost.


I am not sure if this is true for current hardware, but once upon a time the mainframe CPUs contained duplicates of each functional unit that ran in lockstep; after every computational instruction, the results of the units are compared; if they differ, the CPU repeats the instruction once. If the results still differ, it "calls in sick" to the operating system.

The OS then brings a spare CPU online and transfers the program that was running on the failing CPU to the new CPU, takes the failing one offline, and, depending on your service contract with IBM, calls home for a replacement CPU. The program does not even notice something went wrong. The next day an IBM service technician rings your data center's doorbell and replaces the faulty CPU, all without taking the machine offline.

That kind of resiliency and redundancy runs throughout every aspect of the system's design. If you can afford it, having a mainframe be your single point of failure is not too bad.


This is definitely impressive, but most of it could be done with commodity hardware and open source software. Let's say I have a task queue with objects like (1, 2) and my task is to add them together and push the result to a new task queue: what's preventing me from doing that twice to make sure the end result is correct? (Rough sketch below.)

Sure, I have to implement it manually, but at least my screen will no longer be covered in vomit because of all the buzzwords.
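
For what it's worth, a minimal sketch of that do-it-twice idea (the queue setup and retry policy are made up for illustration):

  from queue import Queue

  def add_task(a, b):
      return a + b

  # Software "lockstep": run the task twice and only publish the result
  # if both runs agree, a crude analogue of duplicated functional units.
  def run_redundantly(args, out_queue, retries=1):
      for _ in range(retries + 1):
          r1, r2 = add_task(*args), add_task(*args)
          if r1 == r2:
              out_queue.put(r1)
              return
      raise RuntimeError("persistent mismatch; take this worker offline")

  results = Queue()
  run_redundantly((1, 2), results)
  print(results.get())  # prints 3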


My point was that a mainframe could be a single point of failure, but it is engineered to such a degree of reliability and availability that this is unlikely to be much of a problem. Plus, you can cluster them; you can even build clusters spanning several data centers.

From a technology standpoint, I think they are amazing machines.

The problem, of course, is that they are ridiculously expensive, which is why "open systems" (often meaning PCs running Windows or Linux, but also proprietary Unix systems) have replaced mainframes in many places. In a way, I think it is fair to say that Google did something akin to what you propose. (I am told that proprietary software for mainframes is actually so ridiculously expensive that the already-ridiculous cost of the hardware is not that much of a concern.)

As for the buzzwords, if you find a place where you are safe from them, send me a postcard; I might move there, too. ;-)


Thank you, this is the kind of information I was looking for! I still would like to see some real world comparisons, though.

It sure sounds pretty impressive from your description!


Also factor in the operational costs of running your own data center and the people to keep things running 24x7. Floor space, storage, power, disaster recovery, people = $$$. Companies have moved away from mainframes and their long-term costs by migrating to high-powered blade servers, VMware, and COBOL emulation software.

I'm sure these servers still find homes, but the computing choices available today make these boxes hard to sell, IMO.


You can deploy systems to multiple locations in a variety of configurations to handle this need.

Mainframes are not my area of specialty, but it looks like this PDF, "IBM z/OS Multi-Site Business Continuity," has a lot of details:

https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebInde...


They used to have these easy-to-understand metrics, like "1 mainframe can run 1,000 Linux VMs at such-and-such hardware spec". And that almost made it worth the price. Now I see only blah blah.

From a physical design perspective, it really looks like they glued a few supersized Nvidia Shield TVs onto the front of it: https://www.nvidia.com/en-us/shield/shield-tv/

Maybe they did. I remember reading that one particular Cray supercomputer had a MacBook embedded in it just to display a shiny 3D animation.

How much does one of these things cost?

Bare bones? Probably $200-300,000. Most companies will spend a couple million. And I'm sure if you loaded it to the gills with RAM and disk you could spend $25,000,000 USD.

Sounds like a lot, but it's cheaper than rewriting all the software you're running on your current Z10/Z11/Z12.


Depends on the model honestly. You purchase the maximum capacity frame you would ever want to run on the z14, and then you purchase the capacity you actually want to run right now. IBM has 4 models listed in the specs, so you purchase one of those models, then you configure the z14 to run at a certain capacity. You are about dead on with it being $300k for a basic frame, but then you have to add the "maintenance" to run your stated capacity. And then you pay a monthly fee for whatever capacity your company actually runs.

That's what we do, anyway. Companies can run a full-capacity frame and not do the sub-capacity pricing, but that's a lot more up front. Better, IMO, to pay the monthly rate and spread the cash payments out over 5 years.


Do they come with an engineer?

That's when it starts to get expensive.

Yeah, you actually can't "purchase" one without IBM services.

I understand that in big deployments (e.g. Verizon cell phone billing) IBM puts a few extra machines on site at no cost, so that if one of your primaries goes down you can start one of the backups, and then they start charging you for that one. I understand they do this with disks as well.

It almost certainly is not "no cost": typically that kind of stuff is rolled into a line item under "service agreement". And if the line item cost does say "zero", that's because of negotiation and sales engineer magic (possibly after a few $20 martinis). The cost is folded in SOMEWHERE. IBM ain't no non-profit!

Of course. :) I figured it was "no cost" because the servers that are in operation are insanely overpriced to begin with.

IBM does suggest purchasing one production mainframe and one at CBU (Capacity Back Up) pricing if the company runs their own warm/hot/cold Business Continuity location. As great as these mainframes are at staying online, we have to account for storms and other threats to the data center, and need that redundancy somewhere. The CBU box will usually be much cheaper to run than the production box, but they are not free.

They do this with RAM and processors as well. If the mainframe sees a hardware fault, it'll email IBM to send a service staff member to come fix it.

It's a hard question to answer. With this kind of gear IBM sells both the machine itself and compute capacity. You have a long term lease on the baseline workload, then lease some MIPS when you need it.

It's further clouded by virtual x86 and Linux workloads which have a different billing scale.

They also do stuff like provide "free" DR hardware that can only be activated in the event of an emergency.


Depends on how much money you make with it. (No, seriously.)

"Contact us for a quote"

The first question becomes, "So, you're a 2-3 billion dollar company?"



I wonder about the timing of the release with Q2 earnings report tomorrow...

I predict 'twill be 21 straight quarters of declining revenues.

Every time I see something on these mainframes, I'm filled with idle curiosity about what the minimum quantum of a Z system might really be.

The Hercules emulator on a Raspberry Pi running a bootleg copy of z/OS is about the minimum. Maybe not what you had in mind, though.

I was thinking more of a minimum reasonable (or perhaps plausible) configuration of actual Z system hardware.

How powerful is a Z-series mainframe?

You can find out in this impenetrable, 500-page technical guide: http://www.redbooks.ibm.com/redpieces/pdfs/sg248451.pdf

It contains crystalline prose like this, something all technical manuals should aspire to:

"Digital technologies are driving major industry shifts, ushering in a new age. In the Cognitive EraTM1, it is either disrupting or being disrupted. Tomorrow’s disrupters will be organizations that can converge digital business with a new level of digital intelligence. A key in digital transformation is the ability to accelerate the innovation of new business (Internet of Things, Cognitive, and Virtual Reality), while keeping complexity in check."

The answer to your question, by the way, appears to be 'very'. You can have lots of processors and 32TB of RAM.


170 different cores to choose from. Not sure how they go about determining capacity for a first-time shop, but ongoing installations looking to upgrade base the capacity of the new box against their growth rate and plan accordingly for however long they expect the new mainframe to be on the production floor.

170 different types of core? Sounds like a nightmare for a new customer, conversely a bit of a dream for the mainframe sales team.

The Register has a few more details - https://www.theregister.co.uk/2017/07/17/ibm_latest_mainfram...

It features the next generation of IBM's CMOS mainframe technology, with a 10-core CPU chip using 14 nm silicon-on-insulator technology, and running at 5.2GHz, claimed to be the fastest processor in the industry. Each core has hardware accelerated encryption implementing a CP Assist for Cryptographic Function (CPACF). The CPU also has 1.5 times more on-chip cache per core compared to the z13. There can be up to 32TB of memory, three times the z13 maximum, and its IO is three times faster as well.


Technically there can be up to 16 PB of addressable memory in all their 64-bit systems; I just don't think IBM is going to sell you more than 32 TB. The z13 definitely sold up to 20 TB, though half of that was usually for redundancy.

> claimed to be the fastest processor in the industry.

Defined how?


Very.

They are also built like a tank, and have so many levels of redundancy and reporting that if any component breaks in your machine, they guarantee there won't be any downtime (and a technician is dispatched automatically to fix it). That includes things like CPU cache line bridges, complete cooling failures, storage failures, etc. They also guarantee that every new version of the mainframe will not increase power consumption (which is absolutely insane).

Oh, and the bill will make you dizzy.


If you get dizzy from the bill you can't afford one. Personally I think they come with technicians living inside.

They also ship with a small wind turbine, and the case is made of photovoltaic cells, all in case of a power shortage. And of course the case contains a few pigeons, for IP over avian carriers in case of a network outage.

I would not be surprised if that were the case. They might even start doing IP-over-technician while they repair the avian flu problem that spiked the packet loss in the IP-over-carrier-pigeon channel.

Just in case no one else mentions it...how about transparent geo-clustering? How about duplicated components and self-healing for critical subsystems?

Really, it's the way non-computer people think all computers are, but which they actually aren't, because they aren't mainframes.


I expect a nice boost in revenues and stock coming up; it happens with each z Series launch.

Do you still have to interact with it with the worst UI in the history of the world though?

As a sysprog/admin? Yes.

I can understand perfectly well why they kept JCL around, what I do not understand is why they have not come up with some successor.



