
This is a great missing manual for z/OS tinkering (in the past I have also given up at the login prompt as the author said). Can someone sort of "steel man" this z/OS workflow? e.g. correct noob problems and explain why creating a file takes a page of options? As it is, if this is actually the way to get started, I'm seeing no compelling reason for a developer to bother to learn or use z/OS.

And I know you could say "well Linux is just as complicated." And maybe that's true, but I can also freely download it and learn it, whereas IBM seems to be making no effort to bring z/OS to the public, so why bother chasing it?






> explain why creating a file takes a page of options?

z/OS dates to an era of bewildering diversity in storage hardware - from punch cards to mag tape to disks with variable-length sectors and built-in hardware search mechanisms. All sorts of weirdness by modern standards. Also common at the time were record-oriented files, which are accessed not byte by byte but in a more structured manner - for example, records might be 80 characters each, corresponding to a punch card image. Record-oriented files naturally lend themselves to primitive but fast databases (evolved out of records on punch cards), and that's why mainframe OSes supported them. Similarly, file systems in those days were considered significant overhead, so there's provision for raw access by sector so apps can roll their own file systems if necessary. z/OS still handles all these odd cases. And all this needs to be specified in a command language that could be parsed in like 24 KB or however much RAM it was that OS/360 was originally designed to run in...
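To make that concrete, here's roughly what creating a classic card-image data set looks like in JCL - a sketch only, with every name (job, data set, unit) a placeholder, and accounting info depending on your site:

    //MAKEDS   JOB (ACCT),'ALLOCATE',CLASS=A,MSGCLASS=X
    //* IEFBR14 does nothing; the DD statement does all the allocation work
    //STEP1    EXEC PGM=IEFBR14
    //CARDS    DD DSN=MYUSER.CARD.IMAGES,DISP=(NEW,CATLG,DELETE),
    //            UNIT=SYSDA,SPACE=(TRK,(5,5)),
    //            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)

RECFM=FB means fixed-length blocked records, LRECL=80 is one punch-card image per record, and BLKSIZE controls how many records get packed into each physical block on disk (these days you'd usually let the system pick it). Those are exactly the storage details a modern OS hides from you - hence the page of options.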

As the article says "file" is technically not the z/OS term - they're data sets. That might give a hint as to just how divergent z/OS is from just about everything else.

> well Linux is just as complicated.

Linux has less baggage, probably. Also the "worse is better" philosophy of UNIX kept things like quasi-database services out of the OS, for the most part. For comparative purposes check out VMS; it has some of these same features like record-oriented files in a (relatively more) modern design.

> I'm seeing no compelling reason for a developer to bother to learn or use z/OS.

Don't worry your vision is just fine. ;)


> why creating a file takes a page of options?

You're really creating a member of a data set. What you get is far more flexible than a file but equally more complicated. The options come in handy when you're sequencing job steps together, as they let you specify sharing, cataloging, and automatic deletion depending on how each job step ends.
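A rough sketch of what that looks like in practice (program and data set names are made up): the three-part DISP=(status,normal-end,abnormal-end) is where the sharing/cataloging/deletion behaviour lives.

    //SEQ      JOB (ACCT),'TWO STEPS',CLASS=A,MSGCLASS=X
    //* STEP1 creates the data set; PASS hands it to later steps on success,
    //* DELETE throws it away if the step abends
    //STEP1    EXEC PGM=PROGA
    //OUT      DD DSN=MYUSER.WORK.FILE,DISP=(NEW,PASS,DELETE),
    //            UNIT=SYSDA,SPACE=(CYL,(1,1))
    //* STEP2 takes exclusive use (OLD); CATLG keeps it if the step works,
    //* DELETE cleans it up if it doesn't
    //STEP2    EXEC PGM=PROGB
    //IN       DD DSN=MYUSER.WORK.FILE,DISP=(OLD,CATLG,DELETE)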

> whereas IBM seems to be making no effort to bring z/OS to the public, so why bother chasing it?

It's a valuable and interesting part of computing history, and understanding its workflows gives good insight into why many organizations continue to use them. They fill a unique use case, and exploring that is interesting and enlightening.

The core concepts haven't changed in 70 years. You can even play with them today since a previous OS, MVS, was released publicly, and has been maintained as an open source project in recent years. If you get the Hercules emulator you can run a fully legal mainframe OS distribution right out of the box.

It's a great gas. Highly recommend.


Speaking as someone who had to do a lot of work with these machines in college (and one job after grad school until I found something else), I think the author’s main problem is that they are insistent on using ISPF. Nobody I knew in either place would touch the thing, it was universally hated. We all did our editing locally, used FTP to transfer files to the mainframe, and created datasets and compiled our code by submitting pre-written JCL that older and wiser sages had composed for us.
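For flavour, the pre-written JCL from those older and wiser sages usually looked something like this - a sketch only, assuming the IBM-supplied IGYWCL compile-and-link procedure for Enterprise COBOL is installed, with every data set and member name a placeholder:

    //BUILD    JOB (ACCT),'BUILD',CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
    //* IGYWCL is the stock compile-and-link-edit proc for Enterprise COBOL
    //CL       EXEC IGYWCL
    //* Source member to compile
    //COBOL.SYSIN   DD DSN=MYUSER.SOURCE.COBOL(HELLO),DISP=SHR
    //* Where the linked load module lands
    //LKED.SYSLMOD  DD DSN=MYUSER.LOADLIB(HELLO),DISP=SHR

You keep a copy, change two or three names, submit it, and read the output - no ISPF panels required.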

I haven't explored these myself, but they might be a good start:

1. https://www.ibm.com/z/education

2. https://community.ibm.com/community/user/ibmz-and-linuxone/b... (Circa 2020 but a quick spot check seems to show the links are still valid) * Disclosure: I’m an IBM employee


I wonder what happens if you set the "Expiration Date" on a data set... Does it delete itself? Require special flags to access? Throw an error every time the data is accessed? Prevent writing? Can you imagine someone putting in an expiration date a decade in the future and it goes unnoticed until it hits? And presumably, like the other options, it can't be modified after you create the data set.

What a truly frightening option to have on a server file system.


It is rarely used nowadays in my experience, and I have never used a flat file that is over a decade old. Storage used to be expensive, so this would ensure that old data got purged. Most flat files are useless after a month or less. Even if a mistake did happen, most MVS shops have pretty good backups.

Also, nowadays, data files that haven't been accessed for some period of time usually get sent to tape/cartridge by some automated system like this: https://en.wikipedia.org/wiki/Tape_library#/media/File:Stora...


> I wonder what happens if you set the "Expiration Date" on a data set... Does it delete itself?

On a vanilla z/OS install, setting "Expiration Date" on a disk (DASD) data set does... nothing. It just acts as documentation. It is normally an optional field which, for disk data sets, usually doesn't get set – although a minority of sites configure it to be mandatory.

Now, if you enable DFSMShsm – an extra-cost add-on that provides hierarchical storage management (silently moving rarely used files from disk to tape or cloud storage, and then moving them back on demand when an application attempts to read them) – by default it still doesn't do anything differently, but if you set "SETSYS EXPIREDDATASETS(SCRATCH)", it will delete ("scratch") expired disk datasets automatically in the background.
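For context on how the date gets there in the first place: on the JCL side it's just the EXPDT or RETPD keyword on the DD statement that creates the data set. A sketch (data set names are made up; EXPDT takes a Julian yyyy/ddd date, RETPD a retention period in days):

    //* Keep this one for 370 days from creation
    //BACKUP   DD DSN=MYUSER.MONTHLY.BACKUP,DISP=(NEW,CATLG,DELETE),
    //            UNIT=SYSDA,SPACE=(CYL,(50,10)),RETPD=370
    //* ...or give an explicit expiry date (1 Jan 2030)
    //ARCHIVE  DD DSN=MYUSER.YEAREND.ARCHIVE,DISP=(NEW,CATLG,DELETE),
    //            UNIT=SYSDA,SPACE=(CYL,(50,10)),EXPDT=2030/001

As above, on a plain system that date just sits there as metadata; it only gets acted on once something like SETSYS EXPIREDDATASETS(SCRATCH) is in effect.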

Also, there is a distinction between SMS-managed and non-SMS-managed datasets. SMS (aka DFSMS, or to be more specific DFSMSdfp, originally DFP) started life as an add-on product to simplify some of the complexities of mainframe storage management, and eventually went from being an add-on to being included in the base OS. When you create a dataset, you can choose whether to put it under SMS management or not; if it is under SMS management, you have less control over it, because some decisions are based on storage policies configured by the storage administrator. In particular, SMS has a config setting OVRD_EXPDT which controls whether you are allowed to delete a dataset with an expiry date prior to its expiry date being reached. If you have OVRD_EXPDT(NO), then if an SMS-managed dataset has a future expiry date, your request to delete it will be ignored.

Now, it is much more common to set the expiry date on tape datasets. For datasets on tape, there is both a volume expiry date set in the volume header, and dataset/file expiry dates set in the dataset/file headers. (People say that the correct mainframe term is "dataset" not "file"–that's true on disk, ignoring the Unix subsystem, but on tape, both "dataset" and "file" are actually correct.) There is another extra-cost add-on called DFSMSrmm. And if you use that, that will automatically set the tape volume expiry date based on the dataset expiry date (for multi-dataset tapes, DFSMSrmm can be configured to use either the latest expiry date of all the datasets/files, or else in "FIRSTFILE" mode, it just uses the expiry date of the first dataset/file on the tape.) As well as multi-dataset tape volumes, you can also have multi-volume tape datasets (one dataset spans multiple tape volumes), and DFSMSrmm has some more config options to control how volume expiry dates are handled in that scenario. And then there are third party packages you can use instead of DFSMSrmm, such as Broadcom CA-1 or Broadcom TLMS.

Now, once your tape data set expiry date has been translated into a tape volume expiry date, that will go into the tape catalogue used by your tape library. And when the tape volume reaches its expiration date, the tape library management software may (again, depending on its configuration) mark it as a free tape, and then it may be overwritten by new data at any time. Or, maybe you are using a virtual tape library, where the tapes are actually files on a disk array, but the virtual tape library pretends to the mainframe to be an actual physical tape library – in which case the virtual tape library software may (again, configuration dependent) delete the file backing your tape volume once its expiry date is reached.

> Can you imagine someone putting in an expiration date a decade in the future and it goes unnoticed until it hits?

Since the expiry date is optional, it is commonly not actually set, except on temporary files, backup files, archival files, log files, etc. You can configure it to be mandatory or auto-populated, but you'd generally only do that for files/datasets whose names match certain patterns.

At most sites, for source code files, application binaries, configuration files, operating system files, etc., the expiry date is blank.

> And presumably, like the other options, it can't be modified after you create the data set.

Not true, it can be. Although there are configuration options which control whether you are allowed to do that or not. A common configuration is that you can increase the expiration date of one of your own datasets, but only a privileged user can reduce it (bringing the expiry date forward).

> What a truly frightening option to have on a server file system.

HSM systems exist on Linux/Unix/Windows too, and most HSM systems have a file expiration date feature. And you'll find similar features in tape management systems, record/content/document management systems, email archiving systems, e-discovery systems, etc, all of which exist on those platforms too. The difference is that on mainframes and minicomputers (not just IBM), "expiry date" has commonly been a standard filesystem attribute – most commonly the system by default doesn't do anything with it, and you need add-on software to actually delete the expired files (if you really want to), but it is part of the core OS filesystem metadata. By contrast, on Linux/Unix/Windows, it isn't part of the core OS filesystem metadata, so even if you are using an HSM system and your file has an expiry date set, you'll need to use some proprietary API or non-standard extended attribute to get at it. That's the only real difference.


This was fascinating. Thank you for taking the time to write it.

That said:

> DFSMSrmm ... DFSMShsm ... DFSMSdfp

If rmm, hsm, and dfp mean something, IBM's documentation didn't explain the names to me. They are some of the most extraordinary acronyms, _mixed-case_ no less, I've ever seen.


DFSMS: Data Facility Storage Management Subsystem

DFSMSdfp: Data Facility Product

DFSMSdss: Data Set Services

DFSMSrmm: Removable Media Manager

DFSMShsm: Hierarchical Storage Manager

DFSMStvs: Transactional VSAM Services

DFSMSopt: Optimizer

See ABCs of z/OS System Programming Volume 3 (SG24-6983), https://www.redbooks.ibm.com/redbooks/pdfs/sg246983.pdf.


Welcome to IBM. Each new acronym is an extra fee, annually.

I can imagine where it would be really nice to have an expiration date on a filesystem. The behavior I would want to see for expired files would be: to show up on a report and maybe display in different color in directory listing. It could be a useful reminder to update an expiring certificate, expiring customer contract, etc.

https://www.ibm.com/z/resources/zxplore has been a very useful resource for me.

Learning how to edit in ISPF was the hardest part for me.

There are now multiple options for GUI editors. When I was using the mainframe from 2016 through 2020, I used multiple Eclipse-based IDEs that provided the ability to edit datasets on the mainframe, submit and view jobs, etc. They couldn't do everything you could do in the ISPF interface, but they made the mainframe much easier to use.

I was actually rather fond of the editor in ISPF by the time I was done with the Master the Mainframe course.

> I'm seeing no compelling reason for a developer to bother to learn or use z/OS.

I agree.

> And I know you could say "well Linux is just as complicated."

I've written COBOL on z/OS in the past (nineties). There's still COBOL used today. But there's a reason none of Google, Amazon, NVIDIA, Tesla, Meta, Netflix, etc. were built on mainframes / z/OS / COBOL / JCL / etc.

Yet billions (tens of billions?) of devices are running on Linux today. So saying "well Linux is just as complicated" would be actually quite stupid.

Something could be said too about the virtualization/containerization of all the things: just ask how many VMs, containers, and hypervisors are running on Linux.

So, complicated or not, it actually makes sense to learn how to use Linux.


> But there's a reason none of Google, Amazon, NVidia, Tesla, Meta, Netflix, etc. were built on mainframes zOS / COBOL / JCL / etc.

The main reason is that Linux is free of charge, and that Unix happened to be more used in academia. It has little to do with underlying technology.

> So saying "well Linux is just as complicated" would be actually quite stupid.

It is just as complicated, if you are looking at feature parity. Maybe there is less historical baggage but that comes with complications as well (think of the grumbles about systemd).


> The main reason is that Linux is free of charge, and that Unix happened to be more used in academia. It has little to do with underlying technology.

That's not true, or at least there certainly isn't a consensus about it. One of the narratives that is associated with the rise of Google is the use of commodity hardware and high levels of redundancy. Perhaps this attitude originated from some cultural background like linux in academia, but their rejection of mainframes and the reasoning surrounding it are extremely well documented[1]: "To handle this workload, Google's architecture features clusters of more than 15,000 commodity-class PCs with fault tolerant software. This architecture achieves superior performance at a fraction of the cost of a system built from fewer, but more expensive, high-end servers."

[1]: https://ieeexplore.ieee.org/document/1196112


Name one business started this century that uses mainframes. If there were any compelling reasons to use them in the modern times, there would certainly be some companies using it. Mainframes are legacy cruft used by businesses that had them decades ago and are too cheap or entrenched to modernize their systems.

"Legacy cruft" can be code that's been providing business value for 50 or more years. The mainframe may be expensive, and IBM may love to nickel-and-dime you for every available feature, but it might still make business sense to keep using it. What's the point in rewriting all your code to move it off the mainframe if that will cost twenty times as much as maintaining the existing code while vastly increasing risk? While you may achieve cost savings by moving off the mainframe, they might take so long to accrue that it doesn't make business sense.

If there are any, you can be sure that they're not using z/OS. More likely would be one of those rack-mounted models that only run z/Linux (and possibly z/VM).

z/OS systems are rackmount nowadays too; they just take up the full 42U.

At least back in the POWER8 days those z/Linux systems were the fastest you could buy, and IBM was super happy to let you overclock them: their reps told me that was just more CPU sales for them.


Why would they do that instead of standard PC hardware?

IBM has many decades of innovations (and patents) on making their mainframe hardware super resilient.

My previous company had a large estate of Linux and mainframe applications. Ensuring that disaster recovery was implemented for the Linux applications was a nightmare, with different standards and different ways of doing things; on the mainframe it was built in.

While the mainframe may be old and out of fashion, it did have the capabilities that we are rediscovering in the cloud, containers, VMs and all...


I've wondered what might happen if IBM lowered costs on this hardware... If they offered a compelling deal, it's conceivable a startup might choose them over Linux. As it stands now, I find it near impossible to imagine any organization starting with a clean slate choosing a mainframe for anything. Cost combined with the work required to make the thing do the things is just way too much of an investment.

Many comments tout the uptime and reliability of the "mainframe", I'd argue we have many counterexamples built on Linux. Building a product with this level of reliability on Linux is expensive but still cheaper than a mainframe and the various support contracts, IMHO.

I started out working with an IBM AS/400 in college and eventually worked for a company that ran their business on one. Eventually market pressure forced that company to move to Windows servers accessed through Citrix from thin clients. In my opinion, this didn't make much material difference: it was still complicated, people on the floor complained about Citrix+Windows just as much as they did the old IBM terminals. Hardware costs and support contracts were less expensive but the cost of the software was much, much more expensive and the organization no longer had the ability to change any substantive functions. Just sayin', moving away from a mainframe isn't necessarily a clear win either.


Heh, I wouldn’t call companies that use mainframes “too cheap.”


