Return and Enter are two different keys, but, on most modern systems, they perform a similar function. On z/OS, Return moves down a line (similar to Tab, but skipping all fields on the current line) and Enter actually sends the data off.
Once you get used to it, it's really no different to the Linux or Windows command lines. It's certainly dated, but that's what you get from running a system designed to be fully backwards compatible (with 24-bit, 31-bit and 64-bit addressing modes) that can continue to run software that's over 40 years old.
[For reference, the mainframe originally had 24-bit addressing. When IBM wanted to widen addresses to 32 bits, they found that people had been using the unused top byte of each address word to store other data, such as flags. So, to avoid breaking customer applications, the high (32nd) bit is used to identify whether the address is 24-bit or 31-bit]
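A rough sketch of that convention, with a hypothetical helper name (this is an illustration of the idea, not actual z/Architecture code): the high bit of a 32-bit word selects the addressing mode, leaving 31 bits at most for the address itself.

```python
# Hypothetical decoder for a 32-bit address word, assuming the convention
# described above: high bit set means 31-bit mode; high bit clear means
# 24-bit mode, where the top byte may hold application flags.
def decode_address(word: int) -> tuple:
    if word & 0x80000000:                 # high bit set: 31-bit mode
        return ("31-bit", word & 0x7FFFFFFF)
    return ("24-bit", word & 0x00FFFFFF)  # 24-bit mode: mask off top byte

mode, addr = decode_address(0x80012345)
print(mode, hex(addr))   # 31-bit 0x12345
```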
((And yes, for the record, I am an IBMer, working in a z/OS product that's over 40 years old))
Interestingly, Ctrl+Enter is conventionally different to Enter.
But I agree, ISPF editor is rather good.
In fact, I'm almost willing to bet that the foreignness of mainframes to the average developer is due to mini/microcomputers having become dominant and taken a very different evolutionary path; Linux and DOS/Windows have far more in common with each other than mainframe OSes, despite their huge differences, because they evolved from mini/microcomputers and UNIX.
Surely Linux and the Windows NT line (don't confuse the Windows 9x line, which evolved indirectly from DOS with the Windows NT line) have much more in common with each other than mainframe OSes, but Windows did not evolve from UNIX:
The Windows 9x line evolved indirectly from DOS. DOS itself was heavily inspired by CP/M.
The Windows NT line is a spiritual successor to VMS. The kernels of both operating systems were designed by Dave Cutler, who did not like UNIX. Indeed, Windows NT has I/O Completion Ports (IOCP), which are, in my opinion, a better I/O system than the UNIX process input/output model.
Not sure what you believe to be 'indirect' about that relationship. That Windows codebase started out running on DOS and ultimately wound up so closely coupled that they shipped together in the same box as the same product.
> Windows did not evolve from UNIX:
Windows, through its DOS lineage, does indeed have some roots in the Unix tradition. (This should be unsurprising, since Microsoft was one of the more successful Unix licensees in the early 80's.)
To put Cutler's dislike of Unix into perspective, DOS (and Windows) had close to a decade of development prior to his involvement. (DOS shipped in '81, Cutler got involved in ~'88, his first product shipped in ~'93, and the old DOS-based Windows wasn't fully deprecated until 2001.)
DOS was at best a mongrel mash-up. It no more evolved from Unix than AmigaOS did.
I'm sure he was, but DOS got pathnames in March of 83 and Cutler didn't join until 6 years later.
I know, but IOCP badly fits into the "UNIX philosophy", i.e. in this case how input/output is "typically" organized under UNIX.
It's usable, but a little different. I mean the guys at Rocket Software have ports of some of the basic unix packages for z/OS including the Bash Shell, just adding that to USS improves quality of life tenfold. Many features are doable through USS but ISPF will always be the main interface.
There are ways to develop in Eclipse that can target z/OS.
However, the people who develop for z/OS are often people who are familiar with Interactive System Productivity Facility (ISPF). It's a bit like whether people who are proficient in Vim or Emacs really benefit that much from an IDE.
There is also a unix shell with a z/OS interface:
But again, the people who program mainframe learn it through that environment. It would be interesting to hear from people who have used these sorts of set ups.
Hehe, I once ran tail(1) and forgot to enter the file name, so tail read from stdin, and Ctrl+D did not work, so my shell was stuck waiting for input until the system was IPL'ed. Fun times... ;-)
You're not alone my friend, those IBM docs certainly are not easy to read/understand.
Just as for Unix, Linux, Windows, and so on, there's an IBM "culture" (terminology, conventions, etc.) that you have to get used to. (It often helps to know some of its history, too.) But since I grew up professionally with one foot in the IBM world I usually don't have much trouble at all reading their docs.
I once spent three days figuring out how to deal with an invocation that did not fit in an 80-character line. sigh
(Rexx has since developed an object oriented dialect, that might be more convenient to use.)
For whom? They shouldn't necessarily spend a bunch of effort adapting their product towards the Unix world, particularly to the extent it compromises their product for their primary customer base.
I'll try to memorize this feeling, and remember it every time I try to explain something technical to people outside that field. Maybe it will help me explain better.
Then I found Master The Mainframe and got to play with a properly maintained LPAR with some (admittedly handhold-y guides). Joy! Non-crusty versions of z/OS. Did you know you can generate JSON with Cobol and I've managed to bolt this on to a webservice that interfaces with DB2? I sure as hell didn't! (z/OS Connect is a better way to do this though).
It's only been a month or two, but the amount of time I spend going against my intuition is beautiful. It's really made me reconsider the way I use/design computing facilities in other avenues. I'm not a professional or employed programmer, but this is the most fun I've had since playing with distributed computing, and in Cobol nonetheless. I even set up a 3270 styled blog due to it.
They obviously never used MVS (z/OS's ancestor), OS/400 (now I/OS or something like that) or Burroughs' MCP (when an OS lends its name to a movie super villain, you have to respect it).
That would be UHOS, not UNIX.
As I recall, it's called "Bluespeak", and that sort of thing is pretty common actually. I was educated in networking at a Cisco netacad, so I use Cisco terminology that is apparently not universal.
Programming languages do this too for some reason: Sum type, tagged union, discriminated union, variant...
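One concept, several names. A hedged sketch of how that single concept might look in Python, where the variant class itself acts as the tag (the "discriminant"):

```python
# A "sum type" / "tagged union" / "discriminated union" / "variant":
# a value that is exactly one of a fixed set of alternatives.
from dataclasses import dataclass
from typing import Union

@dataclass
class Ok:
    value: int

@dataclass
class Err:
    message: str

Result = Union[Ok, Err]   # the sum type: one variant at a time

def describe(r: Result) -> str:
    # isinstance checks play the role of pattern matching on the tag
    if isinstance(r, Ok):
        return f"ok: {r.value}"
    return f"error: {r.message}"

print(describe(Ok(42)))       # ok: 42
print(describe(Err("boom")))  # error: boom
```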
Once upon a time I worked as a (young) field engineer looking after mostly Intel based kit, peripherals and Novell/Windows/Unix OS support. Our company was subcontracted to look after a bunch of Perle controllers for another maintenance company that didn't have engineering staff locally (these were on two and four hour onsite must-fix contracts). Perle manufactured a range of clones of IBM's 5294/5394/5494 Twinax remote access controllers that you plugged stuff like 5250 series terminals and printers into.
Anyway, I had to go on a training course to learn about the gear, usual faults etc. But I also had to learn the IBM lingo such as asking the remote ops folks to "VARY ON" and "VARY OFF" (i.e. enable/disable) controllers when working on them. There were other, now long forgotten, incantations you needed to utter over the phone to IBM ops folks when on site, but the VARY ON/OFF one stuck with me.
As an aside, I also ended up looking after and field repairing a bunch of System/36's, in particular replacing hard disks which looked like:
Amazingly a sole engineer could carry out this task in about 30-40 minutes with no need for extra hands. These were well thought out and designed workhorses.
The simple truth is that there are many people to whom jargon such as "WIMP", "ISOs", "flat UI", "IIFE", "DOM", "pull request", and "UX" is equally as opaque and foreign as "DASD", "APAR", "SRC", "PMR", and "PTF" (http://jdebp.info./FGA/fix-terminology.html). All are in fact niche terminology.
BTW, I've been around long enough to have had conversations like the following:
Them: "I want to run Lotus 1-2-3." (This was the near-universal spreadsheet standard before Microsoft Excel came along.)
Me: "OK, then first we're going to have to get you a PC."
Them: "What's a PC?"
Then I would have to explain that this was a "Personal Computer". And not just any personal computer, either, but rather an "IBM" PC, in order to distinguish it from an Apple or Commodore or TRS-80 or TI-99/4A or Atari or whatever. And they might respond that they didn't even know that IBM made personal computers. (Which they don't any longer, of course.)
Often programming features end up with a math name (matching the element they are based on) and a developer friendly name which in some ways makes life easier (by separating high/low level discussions) and more confusing (everything now has different names which get used based on author/speaker preference).
I did not mind the names, at the time I had plenty of old mainframe hands around, who were actually happy I showed such an interest in their work, so they gladly took their time to answer any and all questions I had. Fun times... :-)
Yes. The villain was named after the operating system. My feeling was that it really hated users.
> As I recall, it's called "Bluespeak", and that sort of thing is pretty common actually.
How much of that is because they invented terms for these concepts before our current terms were coined or became ubiquitous? It might be easy to forget, but IBM was once at the cutting edge of computing. They coined terms, others coined rival terms, and it wasn't at all clear whose terms would be ubiquitous in 2018. IBM changing to adopt another computing culture's terms would be akin to metrification: expensive, short term pain to abandon a good-enough system to achieve distant long-term benefits.
z/OS, TSO, JCL and the rest are different to what most people are used to but this is where modern IT started, where virtualisation, high availability and serious backward compatibility were invented.
Peel away the layers of technology in a large company and you will often find a mainframe managing the core data of the business.
It's now my most popular GitHub project and is included in the Debian repos:
For Z machines, you can use Hercules as an acceptable approximation. A couple of questions I posted as examples are about Hercules.
Also, a surprising number of concepts of MVS 3.8j are present in z/OS, so what you learn there is not wasted. As the article points out, a lot still spins around 80-column punched cards. If you go for the less bare distributions of MVS, you'll see lots of software that tries to replicate the functionality of newer releases.
As an example, I pinged Cincom Systems to see if they had an old version of Mantis that could run on MVS 3.8 that I could obtain for free (Mantis was the language I used on mainframes). To my surprise, that version is still a commercial product and is fully supported. Things do change very slowly in mainframe land.
IBM, on the other hand, has always competently (and sometimes quite viciously) protected their products legally. They also have a massive patent portfolio, or at least they used to.
Competently... well, the Wintel PC market (known long ago as "the IBM-compatible PC market") might be an indicator that they can fall down hard on that.
As for their other hardware, these days they're pretty much the only game in town for mainframe and midrange systems. And they jealously guard those markets, as lethargic as these may currently be.
Here's What Happens When an 18 Year Old Buys a Mainframe
Does that mean z/OS supports the Julian calendar?
* The Julian calendar, which actually has nothing to do with this field on the panel.
* The Julian day number, which is actually a day count since a point in the 48th century BCE, and again nothing actually to do with the panel.
* The day number of the year, which is sometimes colloquially, but erroneously, referred to as the "Julian day".
The panel is making that very error. It used to be a fairly common error, and you can see its echoes in the likes of "j" being a format specifier in various systems that expands to the day of the year. It is not so common, now. But it still happens occasionally.
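That "j" format specifier survives in strftime-style systems today; for example, in Python:

```python
# "%j" expands to the day of the year, zero-padded -- the value that
# panels like the one discussed here colloquially call the "Julian day".
from datetime import date

d = date(2018, 7, 31)
print(d.strftime("%j"))       # "212" -- the 212th day of 2018
print(d.timetuple().tm_yday)  # 212, the same value as an int
```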
Today is July 31, 2018 in the United States, which adopted the Gregorian calendar per the act of British parliament entitled "An Act for Regulating the Commencement of the Year; and for Correcting the Calendar now in Use" (24 Geo. 2 c. 23) by skipping September 3-13, 1752 (obviously this was prior to the American revolution).
In the Julian calendar, there are leap days every four years. In the Gregorian calendar, there are leap years every four years except every 100 years except every 400 years. That is, 2000 and 2004 are leap years, but 1900 is not a leap year.
The two calendars agreed during the 3rd century; since then, 300, 500, 600, 700, 900, 1000, 1100, 1300, 1400, 1500, 1700, 1800, and 1900 were leap years in the Julian calendar but not the Gregorian calendar, so one must subtract 13 days from the Gregorian date to get the Julian date.
Today is July 18, 2018 in the Julian calendar.
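The arithmetic above can be sketched directly, assuming the modern fixed 13-day offset between the calendars (which holds from March 1900 through February 2100):

```python
# Leap-year rules for both calendars, plus the constant-offset conversion
# valid in the current era (March 1900 .. February 2100).
from datetime import date, timedelta

def is_leap_julian(year: int) -> bool:
    return year % 4 == 0

def is_leap_gregorian(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def gregorian_to_julian(d: date) -> date:
    # The result is the same day expressed in the Julian calendar.
    return d - timedelta(days=13)

print(is_leap_gregorian(1900), is_leap_gregorian(2000))  # False True
print(gregorian_to_julian(date(2018, 7, 31)))            # 2018-07-18
```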
The Soviet Union adopted the Gregorian calendar in 1918 by skipping February 1-13. They were the last. This is why the famous February revolution took place in March and why the October revolution took place in November. (Though I noticed Reuters announced it had been 100 years last year on the wrong day.)
Mainframes are weird, but I really doubt they don't use the Gregorian calendar whatever the text on the screen says.
My startup was acquired by BMC software, one of the largest third party providers of mainframe software, and I also worked for EMC for a time early in my career, so I spent a lot of time hanging around the mainframe space. Two quick anecdotes:
- working on a giant data center move for a big bank, there was a break room filled on a Saturday with operations people from the “open systems” (non-mainframe) and mainframe teams. I was struck by the contrast: the Unix/windows operations folks were tattooed, pierced, jeans and t-shirts, late 20s. The mainframe folks were short hair, polos and jeans, mid-50s. The two groups self segregated into clusters on opposite ends of the room, not speaking to each other.
- at a BMC leadership offsite, I found myself at the dinner table with a couple of mainframe engineering and product directors. After a few drinks, we got on the subject of why more tech companies weren't using mainframes. "They're so reliable! Why wouldn't you just go out and rent one from IBM, and then you don't have to worry about uptime or stability?". I explained that the fashion had become to build the reliability into the application and assume that the hardware was not reliable. This confused them. "That sounds like a huge pain in the ass, why would you bother dealing with that?" So I explained that the cost per compute cycle was so much cheaper that it made it worth it, plus open source, etc etc
One of them kept pressing: “but Facebook could just take all the engineers they’re devoting to building reliable infrastructure and shift those people to writing customer facing code!”
The other director stopped him and said, “don’t you see? They’ve already done it. There’s no reason for them to go back now. People have figured out that they don’t need mainframes to get mainframe reliability”. The other director just kept shaking his head and we moved to other subjects.
This conversation happened in 2014.
Having never worked directly with a mainframe, I can't really speak to the reliability question. What I do know, though, is that I've spent my entire life living around PC hardware, working on PC hardware, etc, because that's the hardware that's designed to fit into my day-to-day life. So, if I have a new idea, and want to try it out, I'm going to reach for the hardware I already have, because it's basically free. Just like Mark Zuckerberg did.
And at that point, I'm already on a path that leads inexorably toward building the reliability into the software. There's simply never going to be any point at which it makes sense for me to scrap everything I have so far and do a complete rewrite for a different platform.
Absolutely agreed, "path dependency" is a big issue that comes up in a lot of different contexts both in the natural world and human endeavors. Just because from a high level or after the fact someone can identify a much more optimal final result doesn't mean that result was actually realistic to get to, or would be worth moving to from whatever local minima a project ended up in. I think path dependency plays a major part in what makes "disruption" possible, and in how businesses can sometimes be eaten from the bottom. A lot of companies end up in a state where they have a bunch of end point products but they haven't considered the paths necessary for users to reach that end point. Without a ramp their pipe can slowly empty.
Yet people still often claim that it makes sense to go to all the time and trouble and expense of moving off of reliable legacy platforms onto other platforms which aren't naturally anywhere near as reliable. Imagine that!
But that was only introduced in 2010, so either the timeframe I was guessing at is wrong, or I'm misremembering what component it was. Could have been disk drives maybe.
Yes, I've seen this too, and on lowish-end servers.
Doesn't help when your mainframe's infiniband controllers shit the bed and take the whole box down (a thing which has happened to me).
In any case, it helps explain why a purse-string-holding exec would be convinced to shell out money to IBM.
I wasn't really meaning to say that a Spark cluster is as robust as a mainframe (though maybe someone's figured out some tricks), more that I could totally see some sales folks conducting the same demo using something like Spark.
You're still boned if the driver dies. I am pretty sure that the driver keeps some important state in RAM so if the node hosting it goes down you have to restart from the beginning, even if the cluster manager restarts the driver.
Meanwhile, these days you have folks running "state of the art" systems which may just roll over and die for no discernible reason. These may not easily come back up, either, if at all. That's why they have to have so many of them!
I was in college at the time and looked at job postings for mainframe programmers because I wanted to work on that. Never found one that required less than 5-10 years experience, not then and not when I've checked every couple years after that. Too bad; I quite liked the idea of working in an ecosystem that starts from "let's make this work every single time" instead of "let's make this work well enough to keep customer complaints to a dull roar".
I've been part of a team that ran Linux on s390x, and it was a great exercise in demonstrating that however reliable your mainframe, it's still a SPOF. Even if you don't have my bad luck of four catastrophic hardware failures in as many years, your one reliable box still depends on the power, network fabric, storage fabric, etc at a single site. If you want to avoid losing your business when a sparky screws up, you need... several mainframes. At which point you're spending an absolutely astronomical amount to either sysplex, or you're building resilience into your applications, just like a bitty box.
And yes, so many mainframe folks are so very out of touch with the broader world. In around 2016 I had the supposed regional tech expert on z-Series systems lecture me (rather snidely) on how:
1. s390 processors were 20 - 40 times more performant than Intel processors (demonstrably not true for any workload I cared about).
2. You could not do virtualisation on x86_64. It was impossible. With a straight face, in the year 2016, this guy told me, and clearly believed, that it was not possible to run heterogenous virtualised workloads on an Intel processor. Apparently tens of billions of Jeff Bezos' net worth literally did not exist for him.
There was more, but it was like talking to someone who had been frozen in a block of ice since 1996.
Always remember (and don't ever forget) that the only reason most of us have even heard of Intel and Microsoft is because of their relationship to IBM back in the day. They weren't chosen, nor did they ascend to great heights, based on the quality of their products. Rather, IBM chose them mostly for the sake of its own convenience, and then they leveraged that relationship to the hilt.
BTW, just because a speculative execution attack or whatever is theoretically possible on an IBM system doesn't necessarily mean that it can be carried out in any practical sense, given the overall design of those systems and their general level of built-in security. But IBM can't just sit there and ignore the possibility, either.
Another problem they have these days relates to open source software and such, which they've been porting to their platforms much more lately. If a security patch for that software comes out then they still have to apply it, even if there may be no practical way to exploit it on their systems. And it can be quite unnerving to see long lists of such patches show up on a regular basis for systems which are otherwise generally considered to be rock solid.
As for Facebook and Google and Netflix and so on, you have to remember that for all their claims of reliability and such, their stuff really only needs to be "good enough". But good enough generally just doesn't cut it when dealing with things like financials and such.
BTW, I have a colleague who works for a massive corporation - one that is still recovering from a ransomware attack which has so far cost them at least $300 million. They have thousands of servers, and she says that it has now become IT's full-time job just to keep those updated and patched. (There's apparently little or no time these days for silly little things like development and testing and code loads and such.) I didn't ask her if they are still planning to replace their few remaining mainframes (I'm guessing they're in no big hurry now), but she did tell me that upper management currently thinks that "The Cloud" may yet be their road to salvation.
Had a similar experience with the Bloomberg terminal, which is a similar evolutionary offshoot. (Think "what if the GUI had never happened, but the 3270 form-style CLI had gained a mouse and graphical representations?")
BBG's terminal is fundamentally still keyboard-driven: you can use the mouse, but in the same way you can use a mouse with emacs: you're going to be driven back to the keyboard sooner or later, so you might as well stay there.
There are some introductions on YouTube, but they're all horrible. I'll see if I can find a good one.
Putting all the eggs in the same basket to justify the basket. That's classical pointy-haired thinking.
It's been a long time now since I've managed hardware so I don't really know what the current situation is. But I happen to have ready access to a hardware document from around 2008 (the last time that I worked with such large system), and it lists a whole slew of RAS features that were copied from the mainframe. I expect that if I tracked down the corresponding document for a current high-end system that it would list even more.
That said, it was generally my experience that the whole platform family (low end to high end, new systems and old) was just rock-solid reliable - not at all like "The server crashed again - time to reboot it!" situation I usually found on the Wintel side of things. Plus stuff like malware was practically unheard of. And I've worked on systems that at any given moment might have thousands of users on them, and maybe tens of thousands to hundreds of thousands of jobs. From an operational perspective this might not have necessarily been the best idea, for a variety of reasons, but at least the system could handle it easily. And much like mainframes, unexpected outages were basically unheard of - if they ever happened they were extraordinary, jaw-dropping events. I'm not claiming the platform was/is perfect, though.
* There's still "object" code, and a "link (edit)" step.
* The function key mappings such as F3 for Exit are Common User Access standards, and were to be found in some Microsoft and IBM softwares for PC operating systems in the 1980s; most prominently perhaps in the various "E" editors and clones thereof available on PC-DOS and OS/2.
* The underlines for the menu item hotkeys are also Common User Access things, as is F10 for bringing focus to the menu.
M. Bellotti would gain by learning about the "TE" line command. Xe would also gain by not abusing the word "legacy", especially since the so-called "legacy paradigm" of IBM's "panels" on block-oriented terminals is pretty much the same paradigm as forms on a WWW browser. (-:
And the doco, IBM's and others', is not really wildly different, in both range and content, to other platforms'.
* Gary DeWard Brown (2002). zOS JCL. John Wiley & Sons. ISBN 9780471426738.
* Mike Ebbers, John Kettner, Wayne O'Brien, and Bill Ogden (2012). Introduction to the New Mainframe: z/OS Basics. IBM Redbooks. ISBN 9780738435343.
The best part is where they didn't know about the INSERT key. I can understand z and ISPF being a hassle if you're used to modern computing but the mainframe logic is actually logical, and probably predates whatever modern stuff you're on to... like "files". :)
Useful for typing words like pâte, infâme, or grâce. Except not at all.
Give us the perspective from the other end of the learning curve: If you were the author, how would you start learning these things?
Possibly the worst big co in these factors?
There's a whole world of solid, reliable, but unbelievably boring "institutional computing" out there that hackers usually don't touch because it's... well... boring. Java is the closest most hackers ever get, and that has more in common with Python or Ruby than COBOL or z/OS. Java is kind of a modernized mini/micro computer business language.
You can read about amazing hardware and IBM's ability to fuse hardware, microcode and software capabilities that keep pushing the ability to solve business problems. Yes, most of this is needed to extract value out of incredibly expensive technology.
If the salaries were sufficiently (i.e. very) high and the overall culture were not openly hostile towards the values of hacker culture, I can easily imagine that hackers would be willing to touch it.
(2) Mainframes are usually buried deep within an organisation behind many firewalls.
I would love to see a z/OS system at a Defcon conference. After all Windows and MacOS seem too easy these days.
Having said that, I found this... "Follow me on a journey where we p0wn one of the most secure platforms on earth." Quite a cool presentation (https://media.defcon.org/DEF%20CON%2025/DEF%20CON%2025%20pre...) with a good mainframe intro.
IBM zEC13 technical specs:
• 10 TB of RAM
• 141 processors, 5 GHz
• Dedicated processors for JAVA, XML and UNIX
• Cryptographic chips...