Obviously, at this point, almost three decades into the internet era, no one really cares. We all know that "somewhere" old COBOL code is running, but in practice the companies managing the bulk of data for society didn't even exist before Y2K. Everyone with unmaintainable COBOL has long since migrated to really bad Java code instead.
But... yeah. Linux is now an official heir to the S/360. We've made it. Finally.
I can assure you that’s not correct, as I work in an enterprise that has plenty of unmaintainable COBOL still running, though they’ve supplemented it with lots of really bad .NET code more recently.
However, as with most languages, that is more a measure of the quality of the developer, than the quality of the language.
As a C++ developer now writing C# on .NET, I find the new language and framework a pleasure to work with.
No, it’s more a measure of the quality of the development organization and process. The quality of individual developers is a factor here, but not the only one, and often not the limiting one on software quality (and particularly often not on, e.g., whether or not business requirements and system design are documented in a useful way to support maintenance and change analysis).
There are lots of fairly common institutional factors at the kinds of shops that tend to have lots of legacy COBOL still running, factors which also tend to produce instant-legacy code regardless of platform. These shops also have in common institutional patterns that tend to favor platforms like Java and .NET.
This is not an indictment of Java or .NET as platforms, or of the individual developers in those environments.
Sadly this is entirely untrue.
What's happening in practice is they're still running COBOL and they're paying some woefully underpaid and underappreciated developers to write PHP ("Zend Core") for System-i.
This allows companies to run web applications on these aging systems.
Pay for PHP devs, even on these systems, is generally lower, and a lot of the work is in Europe, where developer pay is half.
This same work has proved useful in running chromium on ppc64le/Linux, even if I think running node.js on IBM i sounds ridiculous ;)
Not quite. I'd suggest that there are still many trillions of dollars worth of people's money still managed in COBOL on mainframe core banking systems. Certainly the "big 4" banks in Australia all still have such systems in production (though they also have Java systems).
Disclosure: I have worked for a few banks.
I worked at IBM at the time and we had internal-use-only versions of a lot of major IBM AIX/Unix software running on Linux by 1998. Our experiments with running it directly on big iron started around then too. At least in my area most of the engineers were moving from X terminals to Linux desktops at that point so porting the software they were working on was inevitable.
Unfortunately, not true. Most of the companies decided it was too much time and/or money to do that.
This machine processed the payroll for a bit over 5,000 employees on a "clustered" setup that had a combined total of 512MB of RAM and 4GB of disk. I can assure you that QuickBooks Payroll would not be able to handle this on a similarly resourced x86 machine. Not even close.
Unicode support is nice, and the fact that once you've compiled your application to binary it doesn't have to go over the Internet to load key parts is super nice.
And yes, there are people that still use COBOL, still think it is the best choice for the task that it is doing, and keep it going with a modest amount of maintenance.
Why? I've been into x86 databases since the Windows NT days and 5000 doesn't seem an impressive number to me.
Windows, let alone QuickBooks, is not going to run very well on a machine with just 512MB of RAM.
Also, I can imagine using an Excel spreadsheet with several sheets, thousands of records on each one (Excel 97's max was 65,536 rows), plus formulae and macros, even 20 years ago, so again, 5,000 doesn't seem impressive.
I can't remember how much SQL Server with a database would take in those days, though, so perhaps a 4GB HDD might indeed be insufficient.
Doesn't macro32 on alpha or itanium run old VAX binaries transparently? Why keep the antiques? Even SimH seems a better option (assuming the licenses work).
Since I've got actual Vaxen it doesn't help.
Where you have the source, you can build a native binary. Where you lack it, perhaps the emulation is sufficiently robust.
I have keys for the VAX assembler (and Fortran, BASIC, and Ada) so running source, and building applications that DECUS distributed back in the day, isn't a problem for me.
What annoyed me was that VMS/VAX support was discontinued by the hobbyist program. When it was announced and I asked for details, they suggested I contact licensing. Which I did, and they suggested I could buy new licenses for VAX machines for $5,000, good for up to 25 users. For a machine that was headed for the skip. So someone, somewhere, and I'm thinking US GOV here, "can't" upgrade their VAX hardware, and HP has them over a barrel, so they continue to hold those licenses as saleable products. But that is purely speculation on my part.
 In fact, for the earlier models (anything before the VAX 4000 series), they run faster on a modern PC under simh than they do on the original hardware.
MACRO32 is a VAX assembly compiler - given a VAX assembly source listing, it can turn it into object code for Alpha, IA64 or x86. In some cases, the code will need special annotations.
There's also VEST on Alpha which does translation of user-mode VAX binaries into Alpha binaries. I think that was an optional product. Many products (including things unrelated to the operating system) don't fall into the user-mode category due to the use of the "Privileged Image" mechanism of VMS.
Nitpick: Neither are emulators in the strictest sense of the term.
Which is what I, or anybody else who wants it, have now anyway. I do not recall my applications "loading their key parts" from various places. Everything is local and native, yet connected when needed.
COBOL is really easy to write spaghetti code with - perhaps even the default. But some of these programmers (not including myself back then admittedly) just wrote code that came together beautifully like a jigsaw puzzle. I’m sure if given a C++ or other language with modern features, they could do amazing things. This was before a time when Clean Code (capitalized on purpose) and such were in the zeitgeist. I didn’t appreciate the full beauty of these modular, generalized, programs until a decade or more later.
I also didn’t realize that Z systems natively ran Linux. Must have missed that news. I suspect they’ll open source this soon - makes a lot of sense, if only for keeping consistency with their C/C++ work on Z systems. They are a big contributor to LLVM in that area.
Does it have anything in common with XL C? Does it use W code as an intermediate language?
And what language is it written in? (PL/X? PL.8? C/C++?)
XL C is a separate codebase, with very little shared code IIRC.
Yes, some people like this are absolute geniuses... and they also produce code that's incredibly difficult to maintain, especially when it wasn't well documented, or the documentation binders have been sitting in boxes at some off-site cold storage for 25 years.
For anything that isn't pushing the boundaries of computing, I'd much prefer a few experts to a single genius.
If you find yourself with a true genius, you need to realize that they are not replaceable and you cannot rely on them alone, doing their work as usual. To harness genius like that, you need to build a structured team around them, not try to fit them into an existing structure.
1) I prefer geniuses to be working in areas where they are pushing boundaries further. I don't think such talent is used wisely in the service of finding novel solutions to mundane problems much faster than the average expert. Better to throw three experts at the problem instead.
2) However, sometimes a large organization finds itself with a true genius in the ranks. (It might just be the sort of work they enjoy, or the best job in their region and they don't want to relocate, or any other factor.) The problem is that even well-organized large organizations are highly structured and set up to organize the vast majority of people, who are not geniuses.
Yet a true genius is inherently an agent of chaos. They are a genius precisely (in part) because they see things beyond the current structures & paradigms, and realizing their potential requires them to either break or circumvent the current status quo. If you put such a person in an agile scrum and assign them stories, they will either
1) Quit out of boredom
2) Fuck off on their responsibilities out of boredom.
3) Complete their assigned tasks by finding novel and interesting ways to solve (relative to their capacity) stupidly simple problems, which will make future code maintenance a nightmare when a mere expert or newbie has to decipher them.
4) If #3, they may have their productivity recognized and be promoted into a PM/Manager type of role they will hate, at which point see #1 & #2.
Under #2 & #3, if you're fortunate, they will spend their copious free time finding more interesting things to do that benefit the organization. If not, they'll be off pursuing their own curiosity.
So, any organization should have some mechanism for recognizing these individuals. (This is not something I have an easy solution for because, given #1 & #2 above, they can sometimes be indistinguishable from the lazy or incompetent.) However, once recognized, you need to set them to very challenging tasks suited to the direction of their genius:
Build a team around them. At least one of those team members needs to be an expert. This is the person that will act as a buffer between the genius and everyone else. The expert will also coordinate with the rest of the team to determine the practical/logistics involved in making use of the output of the genius. They will systematize the chaotic creative forces at work. They will document, they will disseminate to the rest of the organization, they will ensure that the incredible value of the genius's work isn't lost if/when the genius moves on, retires, or whatever.
How do I know this? I've seen one or two geniuses myself. I have even experienced this dynamic first hand. Let me be clear though: I am absolutely not a genius on that level. I am very much a generalist with a few areas that extend close to expertise. I have, however, found myself working in organizations that are so far behind what is possible that even very basic things have had people label me a "genius", which is very embarrassing because I'm not. And what I've done-- to use an artistic metaphor-- is kindergarten finger painting. Yet presented to a crowd that's never seen art, I am praised as a Picasso. Seriously: pulling down 5 years of data, pre-processing it in Python (my preference), and running some basic regressions in R to show that a current, very time-consuming process was useless was thought to be revolutionary.
As a result, I've been through #1, #2, #3, and I'm currently resisting #4. When in #3, which is most of the time, I try to document as much as possible and make sure anyone who would be responsible for my work if I left is fully in the loop on what I'm doing. And I get to spend about 10% of my time on things I find truly interesting that push my own limits. But I like working for organizations that are behind the times, if they are flexible to accept change. I like it precisely because while I am not a genius, I am pretty good, and working with such organizations still allows me to make an outsized impact on them for the good.
It's not so much that languages are good or bad, it's developers that are. COBOL can be made elegant, Swift can be made ugly. Even driftwood can be made beautiful.
I believe you could get the IBM COBOL compiler to generate 600 lines of error messages if you put a single period in column 6.
Of course times have changed and I don't know how to get a punched card into a linux machine. :)
When I started being exposed to and interested in computers around 1982, I inherited a stack of Creative Computing magazines dating back to maybe 1978-ish.
I remember reading about these error message contests in Creative Computing.
Today, I understand why that was. In the olden days, it was important to get as much information as possible from a single compiler run, because interactive computing wasn't universal. Programmers had to share a machine and submit programs in the form of decks of punched cards to a job submission window. You would not want to fix one error per round trip.
This means compilers had to be smart about error recovery: to try to repair the program after an error, and then keep going to get more information out of it. Whenever error recovery inserts a token into the program (like a suspected missing closing parenthesis), it is making the program longer, and risks confusing itself and creating a runaway loop that has to be curbed somehow.
I suspect the best solutions to these contests must have been exploiting features of error recovery; tricking the compiler into creating a cascade of errors out of something small.
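The token-inserting recovery described above can be illustrated with a toy checker (a hypothetical sketch, not any real compiler's algorithm): on each error it "inserts" the token it expected, reports a diagnostic, and keeps going, which is exactly how one small mistake can cascade into many messages.

```python
def parse_parens(src):
    """Check balanced parentheses, reporting every error in one pass
    by recovering (pretending the missing token existed) and continuing."""
    errors = []
    depth = 0
    for i, ch in enumerate(src):
        if ch == "(":
            depth += 1
        elif ch == ")":
            if depth == 0:
                # Recovery: act as if a matching "(" existed, then continue.
                errors.append(f"pos {i}: unmatched ')' (inserted '(')")
            else:
                depth -= 1
    # Recovery at end of input: insert the missing ")" tokens.
    errors.extend("EOF: inserted ')' for unclosed '('" for _ in range(depth))
    return errors

# A four-character input produces four diagnostics -- a miniature cascade.
print(parse_parens("))(("))
```

The contest entries presumably did the same thing at scale: find the one construct where the compiler's inserted repair token itself triggered the next error, and let the recovery loop do the rest.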
That's a good point. I remember debugging a core dump... in hex... and going and circling memory addresses with a red pen and chaining them together until I found the problem.
> Minimum 250 MB for product packages
> Minimum 2 GB of hard drive space for paging
> Minimum 512 MB for temporary files
> Minimum 2 GB RAM, with 4 GB more optimal
Ouch; this ain't your grandpa's lean and mean Cobol.
- "Communication between COBOL and C/C++"
- "Compatibility with Enterprise COBOL for z/OS and COBOL for AIX"
Integrated with IBM CICS TX on Cloud 11.1
- "Certified Red Hat operators that can simplify deployment and management of CICS TX applications on Kubernetes"
I had a few jobs where I programmed in COBOL when I was in college. It's probably just as well that it's not Open Source, because I was tempted to install it just for fun.
Remember that Skynet was written in COBOL:
Redhat and Ubuntu.
Is that a common thing for software to be marked as "available for Linux" but only supported for 2 flavours?
I guess it's fair, a debian and rhel base would cover "most" use cases.
Yes, because it keeps the support matrix smaller. As much as people like to think “Linux is Linux is Linux”, there’s all sorts of subtle and not-so-subtle system and library-level differences between distributions that can cause software to not work properly. IBM can’t and won’t test against every single distribution nor will they support their software running on whatever obscure distribution someone decides to use.
I'm actually surprised to see Ubuntu. It's typically just RHEL and SLES with Oracle Linux occasionally thrown in.
Very common. For example Steam for Linux only really supports Ubuntu.
In my experience, extracting stuff from debs and rpms to run on, say, Arch Linux was very easy, using debtap, rpmextract, etc.
(yes, I know it's a RHEL derivative, but I also know some large fraction of 'developers' are oblivious to these matters, which is why no one hesitated to list 'CentOS' when that still made sense.)
"Offers an extended source format that lets source text vary in length up to 252 bytes per line. COBOL for Linux on x86 supports fixed source format and extended source format. Fixed source format consists of text that varies in length up to 72 bytes per line."
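The 72-byte limit in fixed format comes from the classic punched-card layout: columns 1-6 are the sequence area, column 7 is the indicator area, and the source text lives in columns 8-72. A toy sketch of how a fixed-format line divides up (illustrative only, not tied to any particular compiler's parsing):

```python
def split_fixed_format(line):
    """Split a COBOL fixed-source-format line into its classic areas."""
    line = line.ljust(72)[:72]      # fixed format ignores text past column 72
    return {
        "sequence":  line[0:6],     # columns 1-6: sequence numbers
        "indicator": line[6],       # column 7: '*' comment, '-' continuation
        "code":      line[7:72],    # columns 8-72: the actual source text
    }

fields = split_fixed_format("000100*    THIS IS A COMMENT LINE")
# fields["indicator"] is "*", marking the line as a comment
```

The extended format simply relaxes the per-line limit (up to 252 bytes here), which matters once you stop editing on 80-column punched-card images.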
Yeah I'm never going to be in the same room as COBOL code
IIRC both of them were featured on the HN front page once.
Sir, the politically correct term today is..vintage.
>Planned availability date: April 16, 2021