The problem with those old codebases that governments, hospitals, and big businesses are struggling with is not really the language; it's the engineering practices of the time, shaped by the constraints of old technology. The language is not the problem - the lack of comments, bad variable naming, bad structure (few or no procedures, little readability), and the sheer volume of code are.
It would be very interesting to see the old systems rewritten in a modern language, with modern engineering practices, but keeping the old UI and UX (which often is incredibly ergonomic) - so as to limit scope and not mess it all up by trying to introduce mouse navigation and windowing mess.
You might be surprised about the comments though - depending on the age of the codebase. Mainframes were rented back in the day, you paid by resources consumed, terminal time was precious, and mainframes were often turned off outside business hours.
Because of this a lot of the development actually happened between terminal sessions in flowcharts, pseudo code, documentation, and peer review before the programs were ever modified and run.
If you ever run across really old comp-sci books you'll typically see them divided into three sections: the first was usually a guide to the author's terminology and symbology, the second a guide to flowcharting and documentation (IBM had standardized forms for developers to use), and the remainder of the book was the content, with lots of explanation of how to work with datasets hugely larger than the memory available to you.
But as time passed and computer time became cheaper many of those formal development practices started to get lax.
To expand on that, IBM used to rent mainframes based on a 40-hour week. The computer had a usage meter (like a car odometer) that would keep track of how much time the computer was running. If the meter ran over, you would be billed for the excess charge.
The computer actually had two usage meters, with a key to select between them. When an IBM service engineer maintained the system, they used their key to switch from the customer meter to the maintenance meter. Thus, customers weren't charged for the computer time during maintenance.
One interesting thing I noticed about the IBM 1401 is that it's built from small circuit boards (SMS cards) that are easily pulled out of the backplane for replacement. Except the cards driving the usage meter. Those cards are riveted into place so they can't be removed. Apparently some customers discovered that they could save money by pulling out the right cards and disabling the meter.
Didn't matter if the terminal was turned off or not either. The UI was burned into the phosphors.
Oh wow, so true - brought back many memories. It's also one of those overlooked aspects when you change a system: the previously burned-in fields would, with the right lighting, create a whole avenue of data-input errors. It truly was a case of giving the user a new monitor, which can be fun if you're treating it as a software issue in some new rollout. Yeah, that can be a fun one, and sometimes you can't beat site visits, as the local environment will never be replicated in any user-transition training setup, however well it is done.
I actually quit the interview process with an insurance company a year or two ago after they wanted me to take a test involving reading flow charts, but now I'm in the position of having to pass something similar if I want to get promoted.
However, I don't think people use flow charts on the job anymore, even in state government.
I feel like once you need a general graph sort of structure, you're too far down the road to spaghetti and/or excessive detail.
Only for our service desk. Much easier to read than walls of text.
I don't know, at this point seeing a GOTO in my « native language » (the C language) is so incredibly weird that my first thought is « this guy is trying to do something weird and interesting ». It just wouldn't cross my mind somebody would be using a GOTO as a result of ignorance or laziness.
At night, nearly the whole machine was consumed by batch processing. If there was some problem that required a late-night fix, it was worked out first on green-bar print of the program, using hex to determine what was in the registers. Then the code could be submitted via ICCF, but the wait for a compile could take literally hours.
If you mis-typed something (say a forgotten period), the compile would fail and you would have to resubmit the job. Waiting hours again!
~15-20 years ago we used to have some teams which were specialized in creating flowcharts for anything that had to be implemented.
Nowadays I do a bit of everything (project mgmt, development, support, analysis, etc.), and in my area I'm the only one drawing logical overviews (very primitive stuff - I use MS Visio and I like it a lot) whenever we have to implement something that has the potential to become a bit challenging. So far I've always been very happy to have done that: all conflicts, complications, and flaws in the proposed logic are caught already at that stage, so there are no problems later during the core dev phase, and we have fewer problems with the resulting implementations as well.
Then you hit reality and all the baggage legacy code has, as well as standards. JSP never gained much traction, and when it did - well, maintenance of code... there's lots of legacy spaghetti out there.
Why is COBOL still in use? It's a robust data-processing language, and that is the bulk of the work: batches of data that need processing, mailing lists for the post, bills.
COBOL does handle data well if you want to know when and how it rounds, how truncation happens, and how output is formatted. This was at a time when no other language fit the job, and it all runs on robust hardware designed not to fail - unlike the consumer gear that was still a glint in many eyes.
So the legacy grew and bloated. I've worked on a fair few migration projects for a software house, and migrating your large blob of legacy code on legacy hardware to run on something modern is not a quick process and not cheap - the planning, due diligence, data-integrity work, and testing involved are alone a huge cost.
So you end up with legacy code hanging in there, as no management team can justify a 6-10 year budget in a mindset that works on a 5-year plan and budget.
Which ends up with many systems being literally too big to fail and too costly to ever be fully migrated, as the risk and costs just grow. Managers with the guile and drive to push against the status quo are often on a path to career suicide, so they carry on with the herd mentality that prevails in management. Those who do stick their necks out are of two types: those who care and know what's needed, and those who just want to be seen to be doing something big. The latter rush into it, the hasty decisions unfold, and before you know it they have already flown off to another company boasting about the project they initiated - one that will die a horrible death not long after they left, as people realise what a mess it is and what the true costs are.
Hence the many reasons why COBOL is still around today: it just works, and in some ways you can't knock legacy. Nokia phones work for days; they just work and do the job of being a phone. For that task they do the job much better than anything modern. Modern Android phones and iPhones do much more - bells and whistles of all flavours - and yet if you just want, or need, to make a call, overall they are not as robust as an old Nokia, which just works, and works well, for the task at hand.
This, and the mentality of "if it works, don't change it", does have merit and is something you learn over time.
But there is always hope. Bits can be pulled away from the legacy, and if it's planned and managed right by people who know what is needed - the business needs and requirements - and who are mindful of minimising risk and interruption, there will always be a way.
Though I've seen many a project with the best will in the world be doomed from the start, as the bites at the cake made it an all-or-nothing approach, and nobody wants to wait or budget/plan for something that takes longer than 5 years in the business software world. There are always exceptions, but many projects are planned for 5 years when everyone knows they will take longer, on the basis that 3 years in you push out a new 5-year plan and bolt on a few trinket features as justification, hiding the fact that it was never going to go smoothly and end up on target.
Best approach: bit by bit. Batch processing and the like can be migrated more easily, though the data - and interacting with it - will always be as big a part of any migration as the code.
But yeah, GOTO: when you're working on code that needs performance, and the platform is more costly to upgrade than most, you will see much use of GOTO in the code.
Then you get wonderful things like variable-length records, which many won't even know about. Few are aware that in COBOL you can define, say, a top-level record as 80 characters and then redefine it with a PIC X field using OCCURS ... DEPENDING ON some length variable. Write that field out and you get variable-length records stored, saving data storage and other expensive resources that we take for granted today.
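As a hedged sketch of that technique (the names here are invented for illustration, in IBM-style fixed-format COBOL):

```cobol
       WORKING-STORAGE SECTION.
       01  WS-REC-LEN           PIC 9(4) COMP.
      *    Variable-length record: 1 to 80 bytes, with the actual
      *    length controlled by WS-REC-LEN at WRITE time.
       01  OUT-RECORD.
           05  OUT-CHAR         PIC X
               OCCURS 1 TO 80 TIMES
               DEPENDING ON WS-REC-LEN.
```

Placed under an FD declared with RECORD IS VARYING ... DEPENDING ON, writing OUT-RECORD stores only WS-REC-LEN bytes (plus the record descriptor word) rather than a full fixed 80 - exactly the storage saving described above.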
So yes, many gotchas, and much creativity to eke out performance and reduce storage costs.
With all that, GOTO is not your biggest problem with legacy code of the COBOL flavour - let alone the linked-in machine code specially crafted to sort the data because it was faster, where now nobody knows what that blob actually does or how to change it. So yeah, lots of traps in legacy code of any flavour.
Sounds very cloud
I have the feeling that with the success of the iPhone many people forgot that a thing like a UI can have a target audience as well. If you make a tool that is being used twice a week for a minute at a time, it has to look fundamentally different from a tool that is used 50 times every day.
With the former, being intuitive is more valuable; with the latter, reducing friction is more valuable. This is a choice which has to be made - and sadly I often don't see it being made. People just make a UI akin to the ones Google or Apple make and call it a day.
It's worse than that. Lots of people involved in the creation of software don't just follow the trends, they have internalized the idea a UI exposing any complexity is inherently bad. That if something can't be easily expressed in the interaction language currently fashionable in mobile, then it must be a misfeature.
A distant but perhaps illustratively analogous example can be seen in non-nerdy teens and young adults. Take one that does class writing assignments in a google doc on their phone (they're not hard to find, you can even find some that try to do CAD on mobile devices). Try suggesting that if they learned to properly touch type on a real keyboard they'd find the whole process easier and faster. Then tell them apple's bluetooth keyboard can pair to iPhones. Compare the reactions.
tl;dr: In the TV show Metalocalypse the characters derisively called acoustic guitars "grandpa's guitars." That's the UX world in a nutshell.
Management prefers not to think about long term, because management obviously does not think long term.
They'll not comment, help, document or whatever, and once they're doing that, they are uncontrollable. They can't be sacked because they're the only ones keeping the system running, and they won't help others train up. That seems a very difficult situation for the suits to deal with, even if they want to.
Then the multinational decided on a change in direction and fired the entire office, relocating the work for regional diversification (this was not offshoring - spreading the work out across multiple teams). This was pretty niche stuff, and he became unemployable until moving across the country.
And I know of a company that was similarly outsourced to - fully outsourced, with seemingly no internal expertise left at the client - bleeding their client with support prices that increased year by year, imagining they could do this forever. I was on the team at that client that spent two years re-implementing the functionality from scratch. No cheap proposition, but within 3 years of in-sourcing the savings were already there.
Not all wizards know magic.
> They'll not comment, help, document or whatever, and once they're doing that, they are uncontrollable. They can't be sacked because they're the only ones keeping the system running, and they won't help others train up. That seems a very difficult situation for the suits to deal with, even if they want to.
This sounds _exactly_ like a failing of management to me. If developers are expected or allowed to "just code" without documenting anything or training anyone, management is absolutely to blame for allowing it to go on.
Not to mention the years where they did not comment, help, document or whatever, and became uncontrollable sounds like a management problem over those years.
Per your 2nd point about long-term management problems, no doubt of it at all, but sometimes the new (and sometimes good) management simply inherits what previous mismanagement left behind.
Also perhaps you underestimate the power that programmers in well-bedded-in positions have. They can outright ignore management orders - experience speaking.
But a good post nonetheless, thanks.
In reality they cannot do anything, so they scheme to get rid of people who can. During the battle no useful work is being done.
The people that 'snitch' and feel 'under appreciated', and so on... my experience is they have trouble keeping work, are afraid of being found out, and do what they can to get rid of others who can recognize their lack of skills. I just don't think you'd find as many of those getting hired in the context of the needs of this thread.
Don't forget to also sack the management that looked the other way for years while this situation got where it is.
The zero asshole rule is non-negotiable.
The problem is that management doesn't care about the human cost of their decisions and it causes technical problems.
Not kidding, try to build a 2 year old React web application, you'll see what I mean.
There are plenty of options that have a reasonable probability of being stable.
(Also, consider committing the docs right in to the repo. In the 1970s such an idea would have been absurd. Today, a lot of my projects technically already have this, thanks to vendoring and docs embedded into the programs themselves.)
I will say though, on every migration project I worked on, the documentation was carefully worded in the contract as being the customer's liability. With that, code gets migrated logic for logic, bug for bug, with testing so anal it shows that what goes in is the same as what comes out of the migrated code. The business documentation will still be good, as will much of the code documentation if it's high-level enough, but the code itself will be the best documentation.
Until we get a standard in which the documentation produces the code, and all changes are made in the documentation rather than by quickly hacking the code, documentation and code will always be adrift from each other.
So you see many bespoke solutions that go through the code and produce documentation from it, with varying levels of success - and what succeeds with one site's quirks in code may not work as well with another's.
Hence even the best documentation will wisely be treated with a pinch of salt, and in many instances it's like comparing a book to the movie it spawned: some close to the original, many not even close. That is documentation and code in a nutshell.
Always best to map the data first - that can be done more easily and with more automation, especially with databases, by generating a schema and then mapping what code talks to what, gradually getting to see what is happening.
I was the tech lead on one of these projects. Personally I was sad and frustrated we had to keep the old UX/UI. I would much rather have made something more ergonomic for our users. Alas, retraining would have been too expensive, even though the result would probably have been more intuitive.
I do agree with you that there is some benefit to being able to do everything on a keyboard without having to deal with the baggage of what we consider modern.
30 years ago, those of us on green screens at Big Org knew more shortcuts than any emacs user. Now whenever I have to use a “modern” CRM it’s the most anti-productive aspect of my job.
Enterprise UI is optimized for the speed of trained personnel. They can't close the page if they don't like it; they are paid to work.
They are completely different situations, and when someone mixes them up, it leads to bad UI.
Now is definitely not the time for a rewrite. As much of a trap as this seems to be, it's actually the best move given the circumstances.
I may actually take IBM up on this since my father was a COBOL programmer, but I wouldn't plan to make a career of it.
Is it a trap? Well, if you want a secure position maintaining a code base, and maybe eventually working with others to move it to a new platform, I do not see how. I have been around enough new languages to know we are always going to run into code bases we want nothing to do with, but here we are.
The problem to me is you may land in a development shop that is not well maintained. The code has worked for so long that management outside the department just assumed that everyone knew everything.
Just like in math!
The legacy stuff is usually fine, it’s the layers of middleware scaffolds around the mainframe.
Mainframe jobs are 90% batch, so even under stress the mainframe can handle the load. Your circa-2002 scaffolds are the problem.
If you don't get the right people, the rewrite would be an overengineered object oriented nightmare with an endless stream of bugs.
Heck, I’d be willing to settle for seeing modern software languages used with modern engineering practices.
I wonder what basis, evidence or data you might be using to make this assertion. You are assuming quite a bit there as well as generalizing all problems across all affected systems to have your list of issues as the root cause.
Could it be that nobody bothered to maintain and modernize these systems because spending more money on software that "works" isn't going to earn anyone in government points? Government and politics have metrics and fitness functions that do not align very well with the real world (anything outside of government or large stagnant companies).
And yet, at the same time, have you looked at open source libraries lately? The phrase that comes to mind is: rotten smelly stinking mess.
I just had to deal with one of those a few weeks ago. No comments, horrible code structure, massive class hierarchies, just awful stuff. The complexity and thickness of the interface they created was astounding. We re-wrote the entire thing in about thirty lines of code. Yeah. A massive multi-source-file library got boiled down to just a handful of clean code, no classes, just clean, simple and easy to understand code with comments anyone could understand.
I know it might be difficult for modern programmers to understand the kinds of constraints software developers had to work with in the '80s or before. A simple example of this would be single-character variable names. When you only have a few thousand bytes of memory and you are working with an interpreted language, variable names consume memory you desperately need, not to mention CPU cycles. So, yes, people resorted to using single-character names to conserve memory and improve execution time. Context is important.
I took one semester of COBOL back in the dark ages, FORTRAN also. Thankfully I never had to use them professionally. I started professional life using APL, C and FORTH. I realized, years later, how lucky I was to have been shoved into that path by a physics professor who insisted I veer away from COBOL/FORTRAN and take his APL class.
This has nothing to do with technology and everything to do with people. For all the progress we've made with technology in the past 50 years, we've made little or no progress with any of the items in this list.
The second biggest problem building software has always been programmers too small for the task at hand.
The biggest problem has been managers who are even smaller.
So don't blame COBOL or any other technology. Fix the people and you can build anything excellent from almost any tech.
Oh, it has everything to do with technology. Wouldn't want to waste that perfectly serviceable punch-card by punching a * in column 7.
What do you think it would take to get the proposal off the ground? "Just reimplement granddad's system" is not it, I am afraid.
Saw that so many times ...
By contrast state, county, and even city government are paying $65K - $85K starting going all the way up to $100K at the top end. And these are jobs where you're doing modern development, not COBOL.
People will start learning COBOL as soon as it makes rational sense. As it stands a lot of organizations aren't looking for any COBOL developer, they're looking for CHEAP COBOL developers. It doesn't make sense to learn when you'd lose money for taking those jobs.
I decided to move to NYC and was making 100k immediately as a Jr RoR developer at a lean startup that was paying probably 10-15% below market.
It makes little to no sense for anyone with other options to take these incredibly low paying jobs relative to what they could be earning, doing what I would argue is far easier development work. It continues to baffle me that these major organizations claim a shortage of mainframe developers, especially as all the senior ones are retiring, yet their pay scale looks comparable to what department managers made when I worked at Circuit City.
Well, supposedly there could be one in the Midwest and one in NJ, but theoretically those who do not have a job in the Midwest would be motivated to move, since their only options are:
A) Remain jobless in Midwest.
B) Switch careers in Midwest.
C) Move to NJ for immediately available position.
Structural unemployment due to ageism?
This is one of my pet peeves. I am also thinking that it is outrageous that I am facing such lack of supply of Porsches. What? What do you mean, I should be paying more than a thousand bucks for a brand new Porsche? What sense would that make?
“Open Source COBOL Training – a brand new open source course designed to teach COBOL to beginners and refresh experienced professionals. IBM worked with clients and an institute of higher education to develop an in-depth COBOL Programming with VSCode course that will be available next week on the public domain at no charge to anyone. This curriculum will be made into a self-service video course with hands-on labs and tutorials available via Coursera and other learning platforms next month. The course will be available on IBM’s own training platform free of charge.”
SOURCE: The full IBM press release is here:
A few other bits to clarify things ( coming from me being Director of the Open Mainframe Project )...
- The coursework itself is being contributed to a new open source project being hosted by Open Mainframe Project ( CC-BY-4.0 license ).
- We would have liked it to be ready at the time of announcement, but it literally got approved by the Open Mainframe Project TAC as a new project about an hour before the blog post went live ;-). Have no fear, it should be landing next week ( there will be a bit of work to come on translating docx files to markdown, in case anyone wants to help ).
- Right now the course work focused on VS Code as an editor, but the project is very open to contributions that leverage other IDEs ( such as Eclipse Che, Atom, etc )
- Open Mainframe Project is part of the Linux Foundation, with IBM being one of the 30+ sponsoring organizations.
- On the notes I've seen around "hey let's rewrite all that COBOL code in some modern language", I won't add more fuel to that fire ;-). I will however say there is some interesting work in a project hosted by Open Mainframe Project called Zowe ( https://zowe.org ), which basically makes connecting to mainframe apps and data on z/OS much easier ( think REST APIs, CLI interface you can use on your laptop, App framework for creating browser based apps, etc ).
Anyways - hope this helps! Feel free to ping me if you want more details or help getting engaged ( @jmertic on Twitter ).
I last used it in 1997, then moved on to Oracle PL/SQL, Java, Oracle's Java software stacks and now iOS ObjC/Swift.
Now I'm 52 years old. I have my own apps now on the store, don't need the money but I am looking for a new challenge - something a bit more social than working for myself.
I think I'll do this COBOL refresher. Only issue is I'm in Australia but I see there may be some demand here too. Nothing to lose - plenty of time at the moment to do the courses. Good for a laugh anyway.
I like pure solar powered calculators for the same reason. If you were to send one back in time it would continue to work and be useful without requiring intervention or dying.
I assume users can turn notifications off though? Almost nothing on my iPhone is allowed to send me notifications.
I had a project that gave all sorts of hints of stink, and it was Java based. It turned out the project involved integrating a system into a preexisting code base with tens of millions of lines of code. Documentation was incredibly limited given the scale of everything, and there was so much abstraction that Java was just the syntax; the real language was understanding the myriad of interdependent abstractions needed to accomplish the integration task. Obviously, progress was painfully slow.
Java itself was not the problem, everything else was.
And there’s the problem summarized in less than a single sentence. Yes, they need help, but it’s the same kind of “help” as in “I want a BMW but I only have a dollar, please help”
There would be a ton more COBOL coders if companies paid the equivalent of FANG companies - I say this as a former COBOL coder myself
I honestly can't wait until all the old non tech people retire.
It's a bad word though, because it creates these confusions. However, they might be at a max of existing COBOL 'full timers'. Who would 'volunteer' to leave their current dev roles to jump in to the NJ COBOL life? Might mean moving to NJ as well...
Why aren’t we asking other government employees to volunteer for free?
- Forum for COBOL programmers to express interest in volunteering or hiring
- Forum monitored by experienced COBOL programmers to help developers
- Open source COBOL training materials from IBM (Available “in the coming days”)
I find the main issue when you want to learn COBOL is access to a machine as close to a real one as possible, not a simulator. Maybe the Open Mainframe Project is the missing link between learning and practice.
The first is GNU Cobol. It's quite functional for its intended purpose: To port mainframe applications to Linux. However, when writing a new application on Linux you usually want to do things like access arbitrary files (without hardcoding the filename in the source) or make network connections. It does have nice interactive screen support though.
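For anyone wanting to try that route, getting started is just a few lines; here's a minimal sketch of my own (fixed-format source, which GnuCOBOL accepts by default):

```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. HELLO.
       PROCEDURE DIVISION.
      *    DISPLAY writes to standard output on Linux.
           DISPLAY "HELLO FROM GNUCOBOL".
           STOP RUN.
```

Compile and run with `cobc -x hello.cob && ./hello`; the `-x` flag tells the compiler to build a standalone executable rather than a loadable module.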
The other option is to run MVS 3.8j in Hercules. This is a predecessor to the current z/OS which runs on mainframes today. The problem with this is that it's stuck in the 70's (which is when the last free version of MVS was released). A lot of work has been done by the community to keep it up to date, but the Cobol compiler is the language of 40 years ago, not modern Cobol.
None of the above options are really appealing unless you're like me and have a thing for messing around with stuff in their non-native environment.
The third option is: http://mtm2019.mybluemix.net/
When signing up, it gives you access to a z/OS account where you have a surprising level of access. It's probably running on an emulator (it's quite slow) but it does give you access to modern software, including compilers for Cobol, C, Java etc. It also has DB2 installed.
However, the purpose of this option is to learn z/OS, not necessarily to learn Cobol. And developing using the ISPF editor isn't particularly nice. They do give you ssh access to the Unix-compatibility environment though, so maybe it's possible to edit files locally using Tramp in Emacs. I haven't tried that.
In any case, what is needed is a proper Cobol development environment that you can run locally on your workstation. As far as I understand, that's how Cobol developers normally work. IBM would do well to release such a product for free. However, I don't have high hopes, given that the mainframe division seems to be actively hostile to any free software (look into the difficulty the community has getting even the smallest community-made improvements accepted into z/OS, or releasing some small tool for the MVS community).
It's a UNIX shell. You can always use ed :)
You need to first edit the file; fine, you can use vi. Then you need to go into ISPF (or TSO) on a 3270 terminal to submit the batch job that compiles the code (and possibly runs it). Then you need to go into SDSF to view the results of the compilation.
Back in the 70's this was an acceptable way of working, but not what a modern programmer would expect.
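For the curious, the submitted compile job is itself just a text deck. A minimal sketch (the job-card account fields are site-specific assumptions; IGYWCLG is IBM's sample compile-link-go cataloged procedure for Enterprise COBOL, though installations often rename it):

```jcl
//HELLOJ   JOB (ACCT),'COBOL BUILD',CLASS=A,MSGCLASS=X
//* One step: compile, link-edit, and run via the IGYWCLG
//* procedure; the listing and output land in SDSF.
//BUILD    EXEC IGYWCLG
//COBOL.SYSIN DD *
       IDENTIFICATION DIVISION.
       PROGRAM-ID. HELLO.
       PROCEDURE DIVISION.
           DISPLAY 'HELLO FROM BATCH'.
           STOP RUN.
/*
```

The inline source between `DD *` and `/*` is exactly what a card deck would have been, which is why the workflow still feels like submitting a deck and walking over to the printer.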
On the MVS 3.8j side, I can submit a job directly via the virtual card reader and then read the results directly from the printer and feed it into Emacs. That's the most efficient way to work with Hercules but you'll be stuck with a very old version of Cobol.
Well, I don't know how you can avoid that part, i.e. submitting a batch job and looking at the results separately. SuperPaintMan describes an alternative, but I'm not sure how it works. When I was working on a mainframe it was like you say, except I couldn't edit files remotely with vi - because big-financial-corporation security :)
To be fair, I didn't try. There probably was a way. I didn't mind the editor I had in ISPF, the EZY editor. The only annoying thing was that, if I understand this correctly, EBCDIC doesn't have an end-of-line character, so I couldn't just hit Ctrl-End to go to the end of a line; I had to use the arrow keys or touch the mouse (yuck!).
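Part of what's going on here is that mainframe datasets are record-oriented rather than byte-stream-oriented: a "line" is typically a fixed-width record with no terminator byte at all. A small Python sketch (the 80-column size and the cp037 EBCDIC code page are chosen for illustration) of what a card-image dataset looks like as raw bytes:

```python
# Mainframe card-image datasets store fixed-length 80-byte records:
# no newline bytes, just space-padded fields encoded in EBCDIC.
def to_card_images(lines, lrecl=80, codepage="cp037"):
    """Pack text lines into fixed-width EBCDIC records."""
    records = []
    for line in lines:
        padded = line[:lrecl].ljust(lrecl)   # truncate/pad to LRECL
        records.append(padded.encode(codepage))
    return b"".join(records)

data = to_card_images(["HELLO", "WORLD"])
print(len(data))          # 160: two 80-byte records, no separators
print(b"\n" in data)      # False: no byte marks end-of-line
```

So an editor has nothing like a newline to jump to; "end of line" only exists as "last non-blank column of the record", which is presumably why the key bindings feel so different.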
It's a struggle to hire these people because there are so few of them, and many companies only want to hire local talent (within 20-50 miles) and don't want to sponsor visas.
There's money to be made on supporting tech this old, likely more on the consulting side than becoming an internal employee.
Which is to say that these are organizations that have been sitting on this problem for thirty or more years, doing nothing about it, and suddenly they cannot handle the load due to THEIR greed, apathy, and incompetence. But programmers are meant to queue up and work for free in order to fix it because it is now a "crisis" of their intentional making.
This is an argument for privatizing the profits and subsidizing the losses. How about no? How about we eat into these organizations' balance sheets to fix the massive financial and technical debt they let build up - frankly, they deserve it, and it is the only way they'll learn. Any other solution will just excuse them to do this again.
Why should programmers bear the cost of these organizations' mistakes while the organizations seemingly get a free ride and keep the "good year" windfall? It is immoral for programmers to profit, but not for the organizations that created these problems out of their greed? Nope.
Yes, it might be nice to have voters with more foresight and long term horizons who don't mind paying higher taxes. But lets not blame some nameless profit centered corporation.
The ONLY reason that social distancing is an effective option is because so much is possible online and that all that infrastructure is good enough to handle the surge in traffic that they are currently experiencing. Imagine an alternate world in which these services were not as competent.
I also think that, precisely because of that, social distancing will only slow the rate of exponential growth, not prevent it.
There's also the context of more money at a time when the Fed prints trillions of dollars every week. This will be a drop in the bucket, in an environment where governments have decided to spend their way out of the problem.
Because it's actually a hell of a lot of fun. I did a year in a Cobol shop as a graduate developer at a large financial corporation. Working on a mainframe was the most fun I had on that job. And why not? You're logging on to a gigantic computer with millions of users and billions of transactions daily, with a text-based user interface that looks like it was designed by Tarn Adams. And I say that 100% as a compliment.
I mean, seriously, when I first got all the permissions and so on that I needed to work on a mainframe, I was giggling to myself like a little girl. "Really? They gave me access to all this?". It was like someone had given me the keys to the playground.
The job did get a bit boring after a while, which is why I left, but for a few months it was just sheer tomfoolery, poking at things and finding out how things worked.
>> The real solution is to think long term and rewrite all of these antiquated systems so that the next time there is an emergency it will be much simpler to find qualified developers.
That is not a sustainable solution. Fifty years from now people will be making jokes about "that antiquated Python stack" and state agencies will be ringing alarm bells for the lack of experienced young Python programmers. You can't just keep throwing out all the old code and replacing it with whatever new language is cool right now.
I wonder how many people realise Python is more than 30 years old already.
Assuming the 're-written' version isn't similarly out of date by that point in time, the real solution is for business/government to finally understand that if your org depends on software then it's infrastructure that requires constant maintenance and scheduled replacement.
Getting them to understand that is an exercise for the reader.
Someone needs to decide whether Cobol is going to live or die. If it is going to live then people have to stop pretending that it is not there, create proper training courses for it and recruit people to maintain and develop the systems properly. And of course the requirements should be reviewed and updated to include the capacity to cope with the volume of work that is being experienced by these systems.
Of course this won't happen, because as soon as the crisis is past, all those USD 1000 per hour people will be laid off, the employers will breathe a sigh of relief that it is all over, and go back to their old ways.
Five months to code a percentage, or five months to add the calculation, integrate that data point with other systems (does this need to be shown as its own field on any unemployment forms or reports? etc.), test it to the point where it has been demonstrated stable enough to be included in a critical system, and then deliver this change?
I don't know the first thing about COBOL but the fact that government is (for good reasons) slow to move, the stability needed in critical things like unemployment processing, and all the other things that go into "coding a percentage", explain that timeline a hell of a lot more than what language was used.
Assuming the demand for people still exists by then.
In the same way they ask for 10+ years of programming experience in Swift or Rust.
I see Fiserv firing all their 3-6 year COBOL juniors (last in, first out). They retain only the most expensive 15-25 year pros.
If these orgs desperate for devs want to put their money where their mouth is and start a program like that, hit me up. Sure.
I used to do COBOL, but the problem is that possibilities for advancement are very limited - there's far more jobs available - and promotion pathways - for a Python or Java coder than there is for a COBOL coder.
In exchange for a refresher in COBOL and a guaranteed job, yes that is a much better deal.
Well, not so fast. If you PERFORM a paragraph, then it's like a BASIC GOSUB statement, or "jump and store" in assembly language. Sort of like a function call, but with no local variables.
Or you can do PERFORM A THRU B to jump to paragraph A, continuing sequentially until the end of paragraph B.
Or you can do PERFORM A THRU B VARYING X FROM 1 BY 1 UNTIL C, which is sort of like a for loop.
The nasty thing about all this is that looking at a paragraph (block of code), you can't tell shit about how it gets executed. Does it fall through to the next paragraph? You can't tell: that's determined dynamically at runtime. Is it a loop? Can't tell. That's one of the things that makes large COBOL programs really hard to understand.
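To make the PERFORM mechanics above concrete, here is a rough Python sketch (paragraph names and the tiny dispatcher are hypothetical, not real COBOL semantics in full): paragraphs are a flat, ordered list with no nesting, PERFORM A THRU B runs the contiguous range, and VARYING wraps that in a counted loop.

```python
# Hypothetical paragraph bodies -- all state is global, as in COBOL's
# WORKING-STORAGE; there are no local variables.
def para_init(state):
    state["total"] = 0

def para_add(state):
    state["total"] += state["x"]

def para_report(state):
    state["log"].append(state["total"])

# The program is a flat, ordered list of named paragraphs.
PARAGRAPHS = [("INIT", para_init), ("ADD", para_add), ("REPORT", para_report)]

def perform_thru(start, end, state):
    """PERFORM start THRU end: run paragraphs sequentially from start to end."""
    names = [name for name, _ in PARAGRAPHS]
    for _, body in PARAGRAPHS[names.index(start):names.index(end) + 1]:
        body(state)

def perform_varying(start, end, var, frm, by, until, state):
    """PERFORM start THRU end VARYING var FROM frm BY by UNTIL until(state)."""
    state[var] = frm
    while not until(state):       # condition is tested before each pass
        perform_thru(start, end, state)
        state[var] += by

state = {"log": []}
perform_varying("INIT", "REPORT", "x", 1, 1, lambda s: s["x"] > 3, state)
print(state["log"])  # each pass re-runs INIT..REPORT, so -> [1, 2, 3]
```

Note that nothing in a paragraph body says whether it is called once, looped over, or simply fallen into from the previous paragraph; that is exactly the readability problem described above.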
Another fun COBOL statement is ALTER:
If you think GOTO is bad, it's a walk in the park compared to the lethal combination of ALTER + GOTO. Basically, when you say GO TO A, it can go to some other place based on a previous ALTER statement. Now we're having fun!
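A hedged sketch of why ALTER + GO TO is so nasty: the destination of a GO TO is mutable at runtime. The labels and the tiny interpreter loop below are illustrative stand-ins, not real COBOL.

```python
# The GO TO in paragraph DISPATCH currently proceeds to PATH-ONE;
# an ALTER statement can silently repoint it.
goto_target = {"DISPATCH": "PATH-ONE"}

trace = []

def run(label):
    while label is not None:
        if label == "DISPATCH":
            # GO TO -- the destination is looked up at runtime
            label = goto_target["DISPATCH"]
        elif label == "PATH-ONE":
            trace.append("one")
            # ALTER DISPATCH TO PROCEED TO PATH-TWO
            goto_target["DISPATCH"] = "PATH-TWO"
            label = None
        elif label == "PATH-TWO":
            trace.append("two")
            label = None

run("DISPATCH")  # takes PATH-ONE, which ALTERs the jump...
run("DISPATCH")  # ...so the identical call now lands in PATH-TWO
print(trace)     # -> ['one', 'two']
```

Reading the source alone, `GO TO DISPATCH` looks like it always goes to the same place; you can only know where it actually goes by tracing every ALTER that may have executed first.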
As some have noted, management failed to look to the future, and now their business has a major problem it is struggling to deal with. Just as we debate not financially bailing out companies that failed to plan for future disasters or catastrophes, I would argue we shouldn't bail out companies that failed to maintain their own internal technical architectures by planning for upgrades and future maintenance.
It's akin to the BS ISPs argue about not being able to afford to maintain and upkeep their infrastructure. It's largely BS. Put less in upper management's pockets and more into the business, and all of a sudden the business works and has resiliency against unexpected events. If you don't think these "vital" banks and hospitals can afford it, then you haven't been paying attention.
Not sure if there's real demand for paid Fortran programmers, but it's certainly not a dead language.
So the Fortran equivalent of the current situation was NASA being desperate for Fortran programmers to patch Voyager a few years ago (which may have been overblown - my understanding is that they had the programmers, someone just took the "what's the problem?" and ran with it). Orbital models, weather models, stuff that's deep science put into code. COBOL is stuff where business logic is put into code.
They're both titans in their fields to this day - just different fields. At a very, very high level it’s essentially matlab vs excel.
Every modern Linux distribution has a current gfortran bundled.
These stored procedures will be triggered by REST APIs and Endevor (version control). What makes me happy is that these will work for another 10+ years without any upgrades or my tinkering with the code again for a new version.
If it were any web technology or cloud application, I would be getting mail from them saying they are going to decommission a version of the language, so I'd have to rebuild my code only to find it fails in the new version (every two years). That doesn't happen on the mainframe.
I'd rather work on a huge legacy codebase than rewrite the same CRUD app in the JS flavor of the week.
Have the reports of high pay been confirmed? AFAIK the push now is for volunteers.
Maintaining it as it is, without the authority to document and refactor it might not be as much fun though.
A question I'll ask: someone like me is retired from a long development career, with experience in many languages but not COBOL. Could I take this course, or a similar one, and be useful propping up a rickety code base on a volunteer basis? If yes, are any of these states allowing remote work on it?
edit to add https://www.openmainframeproject.org/blog/2020/04/09/open-ma... is the actual announcement.
This is a great non-monetary example of one of the biggest issues we are seeing in our time. That is, as a civilization, we have become terrible at investing in future contingencies. In the last few months, we have witnessed how our biases prevented most of us (and businesses) from building up meaningful savings for stormy weather, and the COBOL situation is hardly different. Banks could have invested in gradually switching from older mainframes to modern ones, with software built in any modern language, and they were (or should have been) in one of the best positions to do this. Instead, they said "whatever" or were simply complacent about where technology was headed, and did little or nothing.
As a society in its current state, we need to seriously look at ourselves in the mirror.
(gonna keep posting this on all the COBOL threads haha)
Edit: and when do you upgrade? If they had upgraded in ‘85, it would have been in C++; in ‘95, Java 1.0; in ‘05, VB6. None of those would have been substantially better, save that they would have been easier to hire maintenance programmers for.
Maybe write the tests first so it can be validated easier?
With so many legacy systems that are often in very important places, I wonder whether it wouldn't be smarter to spend money on systems converting a modern language to cobol, e.g. python2cobol. Is that impossible?
The closest that modern design patterns come to these systems, is using them as the nightmarish example that justifies why modern practices exist.
It's rarely actually a technological problem. It's scale, scope, documentation, budget and motivation. You have to take Mount Everest, and carve it into 2 million separate boulders. Document every single one of them. Paint some of them. Replace some of them with stronger materials. If any of them move, you failed. If any snow is disturbed, you failed. If the climbers even notice this is happening, or has happened, you've failed. And on top of this herculean feat, the person paying for it needs to understand that despite the insane cost of this endeavour, he's probably not going to see a single benefit - but his successor in 10-20 years will. But if you fail, he's going to feel that hard and fast.
Cobol was a modern language 50 years ago. Python will be an ancient language 50 years from now. What do we do? Keep rewriting everything every 50 years?
And those ancient Cobol codebases actually have a big advantage: they've been maintained for so long that all the major bugs have been virtually eliminated. Creating a new system from scratch means another 10 to 20 years of maintenance until the new system reaches the stability of the old one.
This is no joke. Given that most of the Cobol running today is running on mainframes at banks and card networks and the like, "a new bug" may translate to a few hundred thousand dollars of losses.
There's no point in building a new house of cards every few decades.
Uh… sure? What would you consider an acceptable minimum amount of time to be after which rewriting a large but fairly critical codebase to modern standards becomes acceptable?
> And those ancient Cobol codebases actually have a big advantage: they've been maintained for so long that all the major bugs have been virtually eliminated. Creating a new system from scratch means another 10 to 20 years of maintenance until the new system reaches the stability of the old one.
That's not quite fair. Having a known-good code base while porting means that you can just rewrite the existing algorithms in the new language and then run automated tests to make sure you get the same output from the same input across both systems. Unless you're doing a black-box rewrite for some reason, you're not really throwing away all of the maintenance done on the existing system.
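The known-good system can serve as a test oracle. A minimal Python sketch of this "golden output" comparison (both functions here are hypothetical stand-ins for the legacy job and its port):

```python
def legacy_interest(balance_cents: int) -> int:
    # stand-in for a call into the old, trusted implementation
    return balance_cents * 5 // 100

def ported_interest(balance_cents: int) -> int:
    # the new implementation under test
    return balance_cents * 5 // 100

# Feed identical inputs to both systems and collect any divergence;
# edge cases (zero, rounding boundaries, large values) matter most here.
cases = [0, 1, 99, 100, 12345, 10**9]
mismatches = [
    (amount, legacy_interest(amount), ported_interest(amount))
    for amount in cases
    if legacy_interest(amount) != ported_interest(amount)
]
assert not mismatches, f"port diverges from legacy output: {mismatches}"
print("all cases match")
```

In practice the "legacy" side would be captured batch-job output rather than a callable function, and the case list would come from production data, but the diff-against-the-oracle structure is the same.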
Re-writing an existing algorithm is one thing, but even that is likely to be a big source of new bugs, given that Cobol is actually a quite low-level language and much code will rely on its specific view of a mainframe's architecture.
The bigger problem is that any implementation of complex business logic is going to depend very heavily on the facilities provided by whatever language it's originally implemented in (Cobol, in this case, obviously). A direct translation to a new language is likely to be completely impossible. And the bugs will grow in all the semantic gaps between the old language, and the new.
And that's before considering that, for Cobol in particular, the Cobol code itself is only half the story. Cobol programs run as batch jobs controlled by JCL ("Job Control Language"), which often means that crucial aspects of business logic are spread over multiple files in _two_ languages. And the JCL part is a mess. I didn't mind Cobol when I was working with it; I even came to like it a bit, actually. JCL is really, really awful.
But, aesthetics aside, where does all the JCL-encoded logic go? Is that translated to the new language, also? That's going to be really hard, given that JCL is operating-system specific. Is it going to be translated into scripts in a new shell language? The difference between concepts in JCL and, say, bash or PowerShell, is going to be impossible to bridge without making drastic changes, and cultivating new bugs.
In general, translation of a large codebase between two very different languages is going to cause lots and lots of new bugs. So, if you rewrite everything every 50 years, in 200 years you'll spend a total of 40-80 years fixing bugs. If you write it once and let it be, you'll spend at most 20. I don't see a good reason to do it.
And what's wrong with an "antiquated language" anyway? I mean is it just aesthetics we're talking about here? Is it the lack of programmers that's the problem? The latter is sure to make translation even harder and more bug-prone. What is the real reason to change a working codebase every n years?
Yes, or sooner.
And you keep the 'institutional knowledge' externalized in documents, and you use testing tools and virtualized systems and whatnot (virtual systems were available to consumers even 20 years ago; this is not that new).
"Upgrade" or "rewrite" every 10-15 years. This should just be a cost of maintenance. I'm at the point where I've had PHP code running on systems for 15+ years (had a call from someone in 2017 about software started in 2002 and last touched in 2004). There's a 'on the public internet' distinction with web apps vs internal bank systems, for example, agreed, but it doesn't remove the need for upgrading old systems. Doing it on your own schedule, on your own terms, vs having to deal with systems in crisis, is where the benefit is.
"what not" has been running in production since 1972: https://www.ibm.com/it-infrastructure/z/zvm
Somehow I think C or C++ wouldn't run into the same problem, while I can see this happening to Python.
01 Char PIC X.
88 Vowel VALUE "a", "e", "i", "o", "u".
88 Consonant VALUE "b", "c", "d", "f", "g", "h",
"j" THRU "n", "p" THRU "t", "v" THRU "z".
88 Digit VALUE "0" THRU "9".
88 ValidCharacter VALUE "a" THRU "z", "0" THRU "9".
display "Enter lower-case character or digit. No data ends.".
accept Char
perform until not ValidCharacter
    evaluate true
        when Vowel display "The letter " Char " is a vowel."
        when Consonant display "The letter " Char " is a consonant."
        when Digit display Char " is a digit."
        when other display "problems found"
    end-evaluate
    accept Char
end-perform
So... where is it actually testing what kind of character you input? Where is the code for that? You input a specification for what a Vowel is, for example, and you write explicit code for what to do when a Vowel is input, but where is the code which goes through the specification and decides, yep, that's a Vowel?
COBOL is kind of an odd language. It's verbose in some respects and quite concise in others. Rewriting COBOL into something else would take actual human effort if you wanted the "something else" to look like code a human wrote, as opposed to the intermediate pass of an optimizing compiler, which is what the C that GnuCOBOL outputs looks like. Rewriting the COBOL might be the best move in some cases, or replacing it with entirely new code, but it isn't something you'd be able to do "for free" in any sense, especially with regard to time.
Cobol is really not such a bad language. It's just got a lot of ...ceremony. All those forced divisions and sections. But that's a feature: in the olden days, structured programming was a big thing. And an experienced Cobol programmer can take a quick look at a big Cobol file and find where everything is in a blink.
It is, but the "magic" is that the pattern-matching is part of the variable declaration, so you can reuse those patterns wherever you can use the variable.
dim char as string
char = inputbox("Enter lower-case character or digit. No data ends.")
select case char
case "a", "e", "i", "o", "u"
msgbox "The letter " & char & " is a vowel."
case "b", "c", "d", "f", "g", "h", _
"j" to "n", "p" to "t", "v" to "z"
msgbox "The letter " & char & " is a consonant."
case "0" to "9"
msgbox char & " is a digit."
end select
More to the point, it allows you to keep the patterns near the variable declaration, so you can reuse them.
The pattern matching is part of how the data is declared.
An aside, re: testing, a wild guess: could the "accept Char" be tested with emulated keyboard input?
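The "matching rules live with the declaration" idea behind 88-level condition names can be sketched in Python. This is only an analogy, with hypothetical names: the classification rules sit next to the data definition, and the dispatch code merely names the conditions.

```python
import string

class Char:
    """A value plus declaration-level conditions, like COBOL 88 levels."""

    def __init__(self, value: str):
        self.value = value

    # These properties play the role of the 88-level condition names:
    # reusable tests defined once, right beside the data item.
    @property
    def is_vowel(self):
        return self.value in "aeiou"

    @property
    def is_consonant(self):
        return self.value in string.ascii_lowercase and not self.is_vowel

    @property
    def is_digit(self):
        return self.value in string.digits

def describe(c: Char) -> str:
    # The "evaluate" side only names the conditions; the matching
    # logic stays with the declaration above.
    if c.is_vowel:
        return f"The letter {c.value} is a vowel."
    if c.is_consonant:
        return f"The letter {c.value} is a consonant."
    if c.is_digit:
        return f"{c.value} is a digit."
    return "problems found"

print(describe(Char("e")))  # -> The letter e is a vowel.
print(describe(Char("7")))  # -> 7 is a digit.
```

As with the COBOL version, any other code that handles a `Char` can reuse the same conditions without restating the character ranges.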
Govt IT systems and procurement are such a mess, putting it all out there for people to review and complain about is the only way it'll ever get better.