> As businesses adapt to capitalize on digital, trust will be the currency that drives this new economy.
> With z14 you can apply machine learning to your most valuable data to create deeper insights.
> In the digital economy data is the differentiator. IBM z14 can rapidly derive actionable insights and enable progressively smarter decisions for better customer experiences and new revenue streams.
> IBM z14 can help you accelerate development and delivery of secure, scalable services with new economic models.
With such key features as:
"100% encryption is 100% mainframe"
"Trust – the foundation of digital relationships"
"Facilitate new secure services with Java"
I actually like IBM mainframe tech, but this page is hilarious.
> New software pricing tied to business value
So, it's like a tax on your business, if you succeed, so do they!
All joking aside, has anyone used mainframes for a greenfield project in the last 20 years? I'm genuinely curious. I've been under the impression that the only reason they're still around is the massive 20-million-LOC COBOL apps hanging around.
I've often thought I'd much rather have a cheap, featureful "walled garden" PAAS for side and small projects, as opposed to wrangling vms and containers all the time. Mainframe could fit this use case.
> So, it's like a tax on your business, if you succeed, so do they!
You may jest, but as a potential buyer, the flip side of this seemingly buzzword-laden item in the feature set is that our company may not have to pay if IBM fails to demonstrate incremental business value at the end of an engagement (a.k.a. conditional fees).
Buzzwords can work both ways, thankfully.
I've heard of a few new deployments (more accurately expansions) where it's cheaper to buy more mainframe than Oracle licensing.
Exactly. The tech is called LPAR: https://en.wikipedia.org/wiki/Logical_partition
The main feature of virtualization on z/Architecture is not LPARs, but the fact that it meets the Popek and Goldberg virtualization requirements perfectly, so there are no artificial limits on how deeply you can nest hypervisors. In fact, even a process in z/OS can almost behave as if it were running directly on hardware.
It still stores all users' passwords in plain text inside a DIRECTORY file that resides under the maintenance userids. I've seen it 100% of the time I did support on one of those systems.
I remember hearing a while back (~10 years ago) that IBM and SAP were giving fairly generous discounts for customers to move their SAP installations to GNU/Linux running on zSeries mainframes.
None of that is very buzzwordy, but putting such mission-critical software on a mainframe can make sense at a certain scale.
So... do you realize this is basically the core of the SaaS/IaaS/anything-aaS business model?
It is sad to see people joking about IBM just because it's IBM.
Whether it actually made sense to implement it that way (it very well might) is another question.
>> it can run the world's largest MongoDB instance with 2.5x faster NodeJS performance compared to paltry x86-based platforms. It supports 2,000,000 Docker containers and 1,000 concurrent NoSQL databases.
If you run VMs anyway, aren't x86 servers cheaper? (Honest question, I don't know much about that.)
TCO might be lower with x86 these days, it all depends on your workload.
I'd avoid mainframe development as much as possible, and there is plenty of wrangling on the mainframe too...
In particular this one: "IBM z14 Technical Introduction" https://www.redbooks.ibm.com/redbooks/pdfs/sg248450.pdf (I'm still reading it, but page 5 already has a short list with "the main technical enhancements in the z14 over its predecessor platforms").
Imagine how many tabs you can keep open with this ;-)
Also, each processor has 128 MB of L3 cache, compared to 38 MB on the priciest Xeon, plus an L4 cache on top of that.
"Blockchain with IBM Z creates unprecedented levels of trust and transparency, allows alignment with cognitive capabilities and enables deployment in the cloud."
Or does it only apply to companies that don't want their data to go out to the cloud?
You can of course add GPUs; there are some workstation boards specifically designed for this.
Not sure, but it will definitely be called "Watson"
I bet they leverage Watson to ensure cognitive compliance for strategic solutionings.
I bet they leverage Watson to ensure world-class, mission-critical cognitive compliance for strategic cloud-aligned, value-added, blockchain-enabled solutionings.
I'm going to go listen to Mission Statement now: https://www.youtube.com/watch?v=GyV_UG60dD4
Your generic server sits somewhere on a rack. Your supercomputer is a seemingly endless field of racks.
This one is a large black box that exudes computing power.
I only wish they made the front panels more distinctive between iterations.
The architect did get the power and cooling requirements correct, although the cooling did have to work a little harder given the direct sunlight.
1) look up Halon (old) or FM-200 (environmentally ok)
I kind of wonder. From what I know (which is fairly little, I guess, but bear with me), IBM's mainframe business has profit margins that would even make drug dealers drool. The corollary is, of course, that these machines are very, very expensive.
I wonder what the financial pros and cons would be if IBM tried to loosen up a little on the margin and increase their potential market in return; in other words, sell cheaper mainframes.
I vaguely remember they tried something along those lines during the S/390 era, which culminated in a 4U rack-mountable machine. (I assume, naively, that the smaller physical size and capacity corresponded to a smaller price).
From what I've heard and read over the years, IBM invests a lot of money in its mainframe line, and at least on the hardware side, a lot of really cool technology and research goes into these machines. It's kind of a shame that all this cool technology gets cooped up in a relatively small market niche.
Truth is, these things are so much more stable and secure that most projects running on large hardware (think Oracle anything) would end up saving money by running on a mainframe.
Stable, maybe, but probably not secure. The software stack on these was designed well before anyone had any real interest in network security. I saw a YouTube video by a security researcher who took a look at one and found all kinds of inadvisable stuff. He also said there were very few people looking at these from a security perspective.
I was referring to the access control model that's built into mainframes. They were pioneers of mandatory access controls and fine-grained security.
Having fewer people working on them is a definite issue, though, when it comes to the kinds of exposures that Mainframe integration to a corporate environment entail.
Those might be reliable but you could achieve the same reliability on commodity hardware for a lower price if you write distributed software and plan for failure.
My SIP server software (for a startup I hope to launch soon) can have its power cable pulled and nobody will realise, because the server beside it just takes over the process. For the price of a single mainframe I can put hundreds of servers all around the world and still come out cheaper.
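The takeover mechanism is nothing exotic; a stripped-down sketch of the heartbeat pattern I mean is below (the addresses, timeout, and start_service hook are hypothetical placeholders, and the real thing also has to move a virtual IP or re-register with the SIP registrar, which is omitted here):

    import socket
    import time

    HEARTBEAT_PORT = 5005                        # hypothetical heartbeat port
    PEER_ADDR = ("10.0.0.2", HEARTBEAT_PORT)     # hypothetical standby address
    TIMEOUT = 3.0                                # seconds of silence before takeover

    def run_active():
        """Active node: serve traffic and send a heartbeat once a second."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            sock.sendto(b"alive", PEER_ADDR)
            time.sleep(1.0)

    def run_standby(start_service):
        """Standby node: wait for heartbeats; take over when they stop."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", HEARTBEAT_PORT))
        sock.settimeout(TIMEOUT)
        while True:
            try:
                sock.recvfrom(64)          # active node is still alive
            except socket.timeout:
                start_service()            # e.g. bind the SIP port, claim the VIP
                return

Of course, once you do this for real you also have to deal with split-brain and state replication, which is exactly the engineering effort the mainframe crowd argues they're paying IBM to avoid.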
I'm not saying that distributed systems are great...I make my living off of them...I'm just saying that there are classes of business problems where the risk calculation still comes down on the side of mainframes...and probably always will.
The COBOL comment... whatever. To each their own. I think it's cool. IBM embraced web services in a huge way, so basically any COBOL function can be turned into a web service automatically. The number of people who know how to work on these things is pretty small and getting smaller. You sound like you are comfortable with the risk involved in a startup. I'm not. Lots of people aren't.
I sincerely hope it works out for you though. Good luck. It sounds like you are working on something really cool.
By the way, thanks; I'll definitely need the luck. It will be cool if it succeeds, yes.
All mainframes can run Linux. A single mainframe can run thousands of Linux servers. Except for the difference in arch (s390x vs x86_64) you probably wouldn't know you are on a mainframe. So of course any Python code will run just fine.
COBOL is used with a different OS, z/OS. The combination of hardware, z/OS, and COBOL can make MUCH more efficient use of the hardware than you could ever get with Python. If you would like an explanation I will give you one.
The current language of choice on the mainframe, for new development, is Java.
EDIT: I would like to hear more about COBOL and z/OS. Can you explain your efficiency statement?
In Python, once you have that record read in, you have to split it apart by some means. That takes CPU work. Python will then store the pieces as new objects, and that takes CPU work. Then you have two strings, and one of them (the balance) must be converted to a number that can be operated on, more CPU work. Then you can do the math, using the Decimal module, which is more work. The result then has to be converted back to a string and written back to the file.
COBOL, on the other hand, understands record formats, understands decimal data, and knows that the hardware supports decimal data. So the same task (add to a decimal number in a record) that in Python may take many thousands of CPU instructions can be done in three: PACK (convert the printable, zoned-decimal number to packed decimal), AP (add packed: perform the addition), and UNPK (back to a printable number). And those three operations are performed in dedicated decimal hardware in the CPU.
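To make the Python side of that concrete, here is a rough sketch of the per-record work being described, assuming a hypothetical fixed-width layout (a 10-character account number followed by a 13-digit balance with two implied decimal places); COBOL's compiler turns the equivalent ADD into the packed-decimal sequence above, while in Python every line below is interpreter work and object churn:

    from decimal import Decimal

    # Hypothetical layout: columns 0-9 account number,
    # columns 10-22 balance in cents (two implied decimal places).
    def credit_account(record: str, amount: Decimal) -> str:
        account = record[0:10]                   # slicing allocates a new string
        balance = Decimal(record[10:23]) / 100   # another string, then a Decimal object
        balance += amount                        # the actual arithmetic
        cents = int(balance * 100)               # back to an integer number of cents
        return account + str(cents).rjust(13, "0")   # rebuild the record text

    # Add $12.34 to a record whose balance is $5,678.90
    print(credit_account("00001234560000000567890", Decimal("12.34")))

Every one of those steps allocates and converts objects, which is where the "many thousands of CPU instructions" go; the COBOL program just operates on the packed-decimal field in place.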
The important thing to remember is that in the mainframe world, not only is the hardware priced according to the performance of the machine, the software is also. This is why there are so many performance levels (over 100) to choose from. Users naturally want to buy the smallest machine they can get that will handle their workload, and they aren't interested in wasting any processor power doing things not absolutely required by their business needs. For this reason, you won't find any mainframe shops running traditional mainframe workload in a language like Python.
Compuware has 2,200 customers worldwide. I'd guess most of them have at least one IBM mainframe. They're based in Detroit so I'd say their largest customers are the big three automakers.
eBay was a big IBM customer at one point weren't they? I'm sure there a lot of banks and financial institutions, and big manufacturing companies. Most of the Fortune 1000 then?
Probably most of the companies that were doing anything requiring scale and reliability before the 1990s. These were something like the big data tech of that era.
Mainframes are not my area, but what I've learned makes me salivate. The next time I launch a startup, one of these Z systems is going to be on the CapEx plan.
The lowest end tier of these systems is something that could be easily afforded post series-A, and the stunning computational power is almost the least important benefit. These systems provide incredible redundancy, reliability, fault tolerance, and transaction processing speed. Decades of engineering experience have gone into building these.
You could hire engineers to figure out how to make systems run reliably on a cloud provider, but unless you are one of a tiny handful of unicorns that specialize in this, you are going to do a second-rate job of it. And hiring the staff with the skill set to do that is incredibly expensive and doesn't create differentiating value.
No matter how much IBM wishes it were not so, we've reached a point where you absolutely _can_ (and should) do your computing on a substantially more open platform -- where more than one vendor can provide each layer of the stack that you build on!
6 years ago. After that, I was an early (< 10 headcount) engineering hire at a couple companies.
> all but guaranteed to grow in cost in lock step with the size of your wallet?
Check your premises. My general understanding - again, mainframes are not my area - is that IBM almost guarantees falling per-transaction costs over time with Z.
> a technology stack alien to most everybody
Let IBM maintain the mainframe and let your engineers build on top with whatever tech stack or platform they are comfortable with. IBM invented virtualization in the 60s. And there are quite a few more mainframe engineers out there than you might think.
These days, IBM mainframes can not only run Linux and an open source stack on an LPAR, there's even a model of mainframe that is configured to only run Linux. You get the throughput benefits of the mainframe without having to commit to something that's not movable off the platform.
It is still a single point of failure.
Has there been any sort of study, comparing commodity hardware and mainframes in this setting?
I'll not argue a single mainframe is more reliable than a commodity server. It should be, given how much they cost.
The OS then brings a spare CPU online and transfers the program that was running on the failing CPU to the new CPU, takes the failing one offline, and, depending on your service contract with IBM, calls home for a replacement CPU. The program does not even notice something went wrong. The next day an IBM service technician rings your data center's doorbell and replaces the faulty CPU, all without taking the machine offline.
That kind of resiliency and redundancy runs throughout every aspect of the system's design.
If you can afford it, having a mainframe be your single point of failure is not too bad.
Sure, I have to implement it manually, but at least my screen will no longer be covered in vomit because of all the buzzwords.
From a technology standpoint, I think they are amazing machines.
The problem, of course, is that they are ridiculously expensive, which is why "open systems" (often meaning PCs running Windows or Linux, but also proprietary Unix systems) have replaced mainframes in many places. In a way, I think it is fair to say that Google did something akin to what you propose. (I am told that proprietary software for mainframes is actually so ridiculously expensive that the already-ridiculous cost of the hardware is not that much of a concern, actually.)
As for the buzzwords, if you find a place where you are safe from them, send me post card, I might move there, too. ;-)
It sure sounds pretty impressive from your description!
I'm sure these servers still find homes, but the computing choices available today make these boxes a hard sell, IMO.
Mainframes are not my area of specialty, but it looks like this PDF, "IBM z/OS Multi-Site Business Continuity," has a lot of details:
Sounds like a lot, but it's cheaper than re-writing all the software you're running on your current Z10/Z11/Z12.
That's what we do anyway. Companies can run full capacity frame and not do the sub-capacity pricing, but that's a lot more up front. Better, IMO, to pay the monthly rate and spread the cash payments out over 5 years.
It's further clouded by virtual x86 and Linux workloads which have a different billing scale.
They also do stuff like provide "free" DR hardware that can only be activated in the event of an emergency.
First question becomes, "So, you're a 2, 3 billion dollar company?"
It contains crystalline prose like this, something all technical manuals should aspire to:
"Digital technologies are driving major industry shifts, ushering in a new age. In the Cognitive EraTM1, it is either disrupting or being disrupted. Tomorrow’s disrupters will be organizations that can converge digital business with a new level of digital intelligence. A key in digital transformation is the ability to accelerate the innovation of new business (Internet of Things, Cognitive, and Virtual Reality), while keeping complexity in check."
The answer to your question, by the way, appears to be 'very'. You can have lots of processors and 32TB of RAM.
It features the next generation of IBM's CMOS mainframe technology, with a 10-core CPU chip using 14 nm silicon-on-insulator technology and running at 5.2 GHz, claimed to be the fastest processor in the industry. Each core has hardware-accelerated encryption implementing the CP Assist for Cryptographic Function (CPACF). The CPU also has 1.5 times more on-chip cache per core compared to the z13. There can be up to 32 TB of memory, three times the z13 maximum, and its I/O is three times faster as well.
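One nice consequence of CPACF, if I understand it right, is that application code doesn't have to change to use it: common crypto libraries (e.g. OpenSSL builds on s390x) can route standard cipher calls through the hardware. Purely as an illustration, and assuming such a build underneath, ordinary Python AES-GCM looks the same as anywhere else:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Ordinary application-level AES-GCM. Whether this ends up on CPACF
    # depends on the underlying crypto library build on s390x (an
    # assumption about the deployment, not something this code controls).
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    aesgcm = AESGCM(key)

    ciphertext = aesgcm.encrypt(nonce, b"account ledger line", b"header")
    assert aesgcm.decrypt(nonce, ciphertext, b"header") == b"account ledger line"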
They are also built like a tank, and have so many levels of redundancy and reporting that if any component breaks in your machine, they guarantee there won't be any downtime (and a technician is dispatched to fix it automatically). That includes things like CPU cache line bridges, complete cooling failures, storage failures, etc. They also guarantee that every new version of the mainframe will not increase power consumption (which is absolutely insane).
Oh, and the bill will make you dizzy.
Really, it's the way non-computer people think all computers are, but actually aren't, because they aren't mainframes.
I can understand perfectly well why they kept JCL around; what I do not understand is why they have not come up with some successor.