For instance, it allows programmers to quickly create user interfaces by declaratively describing them, in a so-called SCREEN SECTION.
The resulting interfaces are very efficient for text entry, and allow users to quickly get a lot of work done. The interaction is so much more efficient than what we are used to with current technologies that it can barely be described or thought about in today's terms.
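To give a flavor of the idea in modern terms, here is a minimal sketch in Python (not COBOL) of the same declarative approach: the form is plain data, and a generic routine renders it and reads the fields back. The layout and field names are invented for illustration.

  import curses

  # The "screen section": the whole form described as data.
  FORM = [
      (2, 4, 'CUSTOMER NAME:', 30),   # (row, column, label, input width)
      (4, 4, 'PHONE NUMBER: ', 15),
  ]

  def run_form(stdscr):
      curses.echo()                   # echo typed characters
      values = {}
      for row, col, label, width in FORM:
          stdscr.addstr(row, col, label)
          values[label] = stdscr.getstr(row, col + len(label) + 1, width)
      return values

  print(curses.wrapper(run_form))     # render, collect, dump the fields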
Regarding the underlying theme of legacy systems, here is an interesting presentation by Jonathan Blow, citing various examples of earlier civilizations losing access to technologies they once had:
We blew both objectives out of the water. Training time went from 3-6 months to about 2 weeks. We had a variety of modern UI concepts like modals, drop-downs, and well-behaved tables that dramatically simplified the workflow. We also had a full suite of user-defined hotkeys, as well as a smart templating system that allowed users to redefine the screen layout on a per-use-case basis (the screen would be reconfigured based on which customer the user was entering data for, for example).
For performance, COBOL typically requires a server round trip every time the screen changes. We simply cached the data client-side and could paginate so quickly that it actually caused a usage problem, because users were not mentally registering the page change. The initial screen load took slightly longer compared to the COBOL system, but the COBOL system required 40 screens for its workflow, whereas our system could do everything on a single screen.
I guess my point is that modern systems are capable of much better performance and user ergonomics than COBOL systems. We have a lot more flexibility in how UIs are presented, which lets us design really intuitive workflows. The flexibility also lets us tune our data flow patterns to maximize performance. But most modern development processes do not have this kind of maniacal focus. Systems don't perform because most product owners don't really care that much at the end of the day. Once you care enough, anything's possible.
The resulting webapps were made automatically responsive for the web and could be run anywhere - desktops, tablets, phones - and apps could open tabs that invoked other apps for multitasking. We also added some custom commands to the COBOL so that they could invoke web-based graphs, reporting, printing, PDF generation, etc. The client supported native typeahead, so the web apps behaved very much like desktop apps, whereby pressing a number of shortcuts in rapid succession resulted in them being played back in order, overcoming any latency. This made the apps completely superior to normal web apps for their application (i.e. POS, ERP systems).
The utility that COBOL provided, coupled with a modern web based runtime written in React, was remarkable. Truly a hybrid of both the best parts. When I left, they were working on wrapping the React webapp into a React Native app.
I have no idea about COBOL at all, but I've done something like this before with a client's mainframe scripting/macro language, and it was not fun. Basically, I had to hard-code a bunch of key inputs to get to the information screen I needed, then finally read that screen back out in plain text and parse it into some sort of structure. It was a mess, but it worked for what was required at the time.
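For anyone who hasn't done it: the scraping part can be as blunt as slicing fixed regions out of the captured screen text. A tiny Python sketch, with all coordinates and field names invented:

  # Each field lives at a fixed spot on the captured 24x80 screen.
  SCREEN_FIELDS = {
      'account': (3, 10, 20),   # (row, first column, last column)
      'balance': (5, 10, 24),
  }

  def parse_screen(screen_text):
      rows = screen_text.splitlines()
      return {name: rows[r][c1:c2].strip()
              for name, (r, c1, c2) in SCREEN_FIELDS.items()}

  # parse_screen(captured) -> e.g. {'account': 'ACCT-42', 'balance': '100.25'}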
Round trips aren't the problem. Waiting for IO is. CICS solves this by not waiting for IO. It dumps the screen and moves on. Then it becomes the terminal's responsibility to wake CICS up with an attention key. If a front-end application is stuck waiting for terminal input, it's written wrong.
People should bring back sigs, so that can become a meme.
By the way, if it seems odd to have I/O like this in your language:* this reflects the architecture of mainframes. Because the CPU was so important, I/O was managed by external devices (typically themselves a cabinet of electronics, at a time when the CPU took up two or three cabinets itself). You'd essentially write a small program describing what I/O you wanted, and then let the I/O channel controller handle the minutiae, like the tape drive motor or input from terminals.
So the language would set up data to be slapped onto a terminal (this would have been a decade or more after COBOL was created, once terminals with screens were available), which the channel controller would send to the appropriate terminal. It would then deal with the input and, once all was ready, tell the CPU that it had a bunch of input ready.
Terminals like the 3270 were even half duplex, so they would process the input and then send it off as a block, to make the channel controllers more efficient!
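As a loose modern analogy, the division of labor looks something like this Python sketch: the CPU side only builds a small description of the I/O it wants, and a separate worker (the "channel") executes it while the CPU goes back to computing. The command names are made up.

  import threading, queue

  channel = queue.Queue()

  def channel_controller():
      # The cabinet of electronics: runs channel programs on its own.
      while True:
          program = channel.get()            # a list of (command, operand)
          for command, operand in program:
              print('channel:', command, operand)
          channel.task_done()

  threading.Thread(target=channel_controller, daemon=True).start()

  # CPU side: describe the I/O, hand it off, keep working.
  channel.put([('SEEK', 42), ('READ', 4096), ('SIGNAL-CPU', None)])
  channel.join()                             # the "interrupt" when it's done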
In the ARPAnet of the 1970s, the ITS PDP-10s and -20s took this further with a protocol called SUPDUP (super duper) which allowed this kind of processing to be done on a remote machine when logging in over the net (as we would do today with ssh). So you could log into a remote machine and run EMACS, and while you were editing a remote file with a remote editor all the screen updating would be done by your local computer! Even the CADR lisp machines supported this protocol!
* At the time, not doing I/O through the language itself was considered oddball. I seem to recall a line in the original K&R where they described C's I/O and made an aside: "What, I have to call a function to do I/O?"
In fact, our PCs are like that because they had to be cheap, and the cheapest way is to burden the CPU with all the work.
PCs of course had the same issue - down to the CPU controlling the speed of disk rotation! But a modern PC has an I/O system more complex than the mainframes of old, with network interfaces that do checksum processing, handle retransmission and packet reassembly, etc., just DMAing the result. Disk drives, whether spinning or SSD, have a simple block interface unrelated to what the storage device is actually doing. I think this is all as it should be, though I personally consider the Unix I/O model grossly antiquated.
We should propose a kernel patch for this next April.
Extra credit if we get it rolled into POSIX.
Imagine if they’d run out of beeps during filming!
As for BOM... a few years ago we designed a big serial board (industrial control) and found it was much cheaper to buy AVR CPUs and use just the onboard UARTs than to buy UART chips.
However, the rest of the students were mainframe guys, so it was an interesting few days. My (Solaris, Linux) world seemed incomprehensible to them, and vice versa, but it was nice to get a glimpse into how the other half lived. Finding experienced computer people who’d never used ftp was quite surprising.
- UIs in React try to be just rendering functions with one input and one output. Even more so with hooks.
- in Python, there is a new trend of providing "Sans I/O" libraries (https://sans-io.readthedocs.io/), and async/await are not just keywords, but a generic protocol created to delegate I/O to something outside your code.
It's interesting to see that on very old systems, even the hardware was organized that way.
It's funny how it was the systems in the middle (minicomputers, like the PDP-11 where C originated) that did everything on the CPU, whereas the systems on the high end (mainframes) and low end (microcomputers) both split the work out, for different reasons: mainframes pushed IO out to independent coprocessors for multitenant IOPS parallelism (can't make any progress if the CPU gets an IO interrupt every cycle); while early microcomputers pushed IO out to independent coprocessors to retain the "feeling" of real-time responsivity in the face of an extremely weak CPU!
Most things we call "coprocessors", on the other hand, were a bit different: they had their own on-board or isolated-bus memory, which only they could read/write to, and so the CPU would interact with them with "commands" put on a dedicated command bus for that coprocessor. Most sound chips (down to the simplest Programmable Interval Timer, but up to fancy chips like the SNES's SPC700) were like this; as were storage controllers like memory cards and the PSX's CD drive.
I was thinking of the GPUs we had in those days which were often a couple of VME cards or even a small cardcage, but you’re right: that physical distinction isn’t really relevant.
Here's my Apple ][ FORTH implementation of SUPDUP with line saving (%TDSAV, %TDRES), which saved lines in the expansion ram card.
There are thousands of template sites being used today with JS plugins the developer probably wouldn’t know how to write. But the minimal interface layers are sufficient to get them to work.
Stuff like Select2 and Bootstrap's collection of plugins covers a broad range of the interactivity most people need on the internet.
HTML forms are more like the half-duplex terminals of the '70s/'80s.
"It generates an interrupt every time you press a key!"
But the first version didn't have any de-bouncing, and would process each and every ^T it got immediately (each of which had a lot of overhead), so on a hardwired terminal you could hold the keys down and it would autorepeat really fast, bringing the entire system to its knees, while you could even watch the load go up and up and up!
And of course whenever the system got slow, everybody would naturally start typing ^T at once to see what was going on, making it even worse.
That was a Heisenbug, where the act of measuring affects what's being measured, with an exacerbating positive feedback loop. He fixed the problem by rate-limiting it to once a second.
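That fix is the classic rate-limiting pattern; a minimal sketch of the "once a second" idea in Python:

  import time

  last_serviced = 0.0

  def handle_status_key():
      global last_serviced
      now = time.monotonic()
      if now - last_serviced < 1.0:
          return                # swallow the repeat ^T: nearly free
      last_serviced = now
      print('...expensive per-process status report here...')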
I suspect it was a remnant of hardcopy terminals.
The block mode terminals (like the 3270) were/are kinda like HTML forms: The mainframe sends a form to the terminal, and the terminal has enough local smarts to know how forms work, that only some regions of the form are writable, and how to send a response back one form at a time, as opposed to the character-at-a-time terminals which Unix and VMS and ITS were built around. There's a lack of flexibility, but it allows mainframes to service tons of interactive users, for a certain definition of interactive.
The Blit terminal was the next step beyond block mode terminals, in some sense: Blits could be character-cell terminals with fundamentally the same model as the VT100, but they could also accept software in binary form and run interactive graphical programs locally. Think WASM, only with machine code instead of architecture-independent bytecode.
> When initially switched on, the Blit looked like an ordinary textual "dumb" terminal, although taller than usual. However, after logging into a Unix host (connected to the terminal through a serial port), the host could (via special escape sequences) load software to be executed by the processor of the terminal. This software could make use of the terminal's full graphics capabilities and attached peripherals such as a computer mouse. Normally, users would load the window systems mpx (or its successor mux), which replaced the terminal's user interface by a mouse-driven windowing interface, with multiple terminal windows all multiplexed over the single available serial-line connection to the host.
> Each window initially ran a simple terminal emulator, which could be replaced by a downloaded interactive graphical application, for example a more advanced terminal emulator, an editor, or a clock application. The resulting properties were similar to those of a modern Unix windowing system; however, to avoid having user interaction slowed by the serial connection, the interactive interface and the host application ran on separate systems—an early implementation of distributed computing.
That was 8th and 9th Edition Research Unix; it was an influence on Plan 9, which took the distributed GUI computer system concept and ran with it.
> So you could log into a remote machine and run EMACS, and while you were editing a remote file with a remote editor all the screen updating would be done by your local computer!
Also, ITS had the neat feature of detaching job trees: You could login, get your own HACTRN (the hacked-up debugger ITS used as a shell), run a few other programs which would then be children of that HACTRN job, and detach the whole tree and logout. When you logged back in, you could re-attach the tree and carry on like nothing happened. It's kinda like screen or tmux.
You could also detach any particular job sub-tree, and other users could reattach it. Useful for passing a live ZORK or LISP or EMACS back and forth between different logged-in users. "Here, can you fix this please?"
There was also a :SNARF command for picking a sub-job out of a detached tree (good for snarfing just your EMACS from your old HACTRN/DDT tree left after you disconnected, and attaching it to your current DDT).
It helped that ITS had no security whatsoever! But it had some very obscure commands, like $$^R (literally: two escapes followed by a control-R).
There was an obscure symbol that went with it called "DPSTOK" ("DePoSiT OK", presumably) that, if you set it to -1, allowed you to type $$^R to mess with other people's jobs, dynamically patch their code, etc. (The DDT top level shell had a built-in assembler/debugger, and anyone could read anybody else's job's memory, but you needed to use $$^R to enable writing).
Since ITS epitomized "security through obscurity", the magic symbol DPSTOK was never supposed to be spoken of or written down, except in the source code. But if you found and read the source code, then you passed the test, and deserved to know!
There was a trick if you wanted to set DPSTOK in your login script (which everyone could read), or if somebody was OS output spying on you (which people did all the time), and you wanted to change their prompt or patch some PDP-10 instructions into their job without them learning how to do it back to you.
The trick was to take advantage of the fact that DPSTOK happened to come right after BYERUN. So you could set BYERUN/-1, which everybody did (to run "BYE" to show a joke or funny quote when you log out), then type a line feed to go to the next address without mentioning its name, and set that to -1 anonymously.
So knowing the name and incantation of the secret symbol implied you'd actually read the DDT source code, which meant you had high moral principles, and were qualified to hate unix, which didn't let you do cool stuff like that. ;)
From: "Stephen E. Robbins" <firstname.lastname@example.org>
Date: Thu, 21 Dec 89 13:25:56 EST
Subject: Where unix-haters-request is
Date: Wed, 20 Dec 89 22:13:42 EST
From: "Pandora B. Berman" <CENT%AI.AI.MIT.EDU@mintaka.lcs.mit.edu>
....Candidates for entry into this august body must either prove their
worth by a short rant on their pet piece of unix brain death, or produce
witnesses of known authority to attest to their adherence to our high moral principles.
Does knowing about :DDTSYM DPSTOK/-1 followed by $$^R qualify as attesting
to adherence of high moral principles?
BYERUN: 0 ;-1 => RUN :BYE AT LOGOUT TIME.
DPSTOK: 0 ;-1 => $$^R OK on non-SYS jobs
N2ACR: SKIPN SYSSW ;$$^R
jrst n2acr0 ; Not the system?
n2acr0: skipn dpstok ;Feature enabled?
jrst n2acr9 ; nope
skipe intbit(U) ;Is it foreign?
jrst n2acr9 ; no, either SYS (special) or our own (OK anyway!)
iorm d,urandm(u) ;turn on winnage
n2acr9: 7NRTYP [ASCIZ/ OP? /]
The same was true of the standard indexed storage: a different runtime could use a SQL database for storage without recompiling the program, which was key to some gradual migrations to Java. A similar feature allowed remapping invoke calls to run on a remote server.
At one point we produced a NPAPI plugin version: install it in Netscape and your client ran the UI while the data access and RPC calls happened on a server. All I can say is that it seemed like a good idea at the time.
Modern web apps, each unique and confusing, with no reliable way to move focus around and with bizarre text entry "improvements", would have made the Admiral weep.
If accessibility is set up correctly, you can still move focus with TAB and activate widgets/buttons with SPACE or ENTER. No different from any terminal app. If client-side JS is being used, the web page could even display additional forms in response to a keyboard shortcut, with no network-introduced latency.
No different from a terminal app that pretends to be a GUI app :-P. At a minimum, many terminal/DOS apps used arrow keys to move between fields, and often just pressing Enter in a field (often after it was validated) would move to the next relevant field. Shortcut keys to move around and/or perform context-sensitive functions (e.g. search for product IDs in a field where you are supposed to enter a product ID) would also be available at any time.
I remember my father writing a program for his job (for his own use) ~20 years ago in Delphi; he was manually wiring all the events to move around with the Enter key, like his previous program (written for DOS in Turbo Pascal) did, and he was annoyed that he had to do this manually.
TBH I do not believe it is impossible to do this on the web nowadays, but just as it wasn't done in the majority of the GUI desktop apps that followed the DOS days, I also do not believe it is or will be done in the web apps being made nowadays. The reasons for this vary, but I think a large part of it is that people simply haven't experienced the other methods enough to even think about them.
FWIW I agree that for data entry apps in particular, Enter is just too convenient - and I had to write code to manually implement it in GUI apps, as well.
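For the curious, here is roughly what that manual wiring looks like in Python/Tkinter: bind Enter so it behaves like Tab and advances focus to the next field. The field labels are arbitrary.

  import tkinter as tk

  def enter_as_tab(event):
      event.widget.tk_focusNext().focus_set()
      return 'break'            # suppress the default Enter behavior

  root = tk.Tk()
  for label in ('Name', 'Phone', 'Product ID'):
      tk.Label(root, text=label).pack()
      entry = tk.Entry(root)
      entry.pack()
      entry.bind('<Return>', enter_as_tab)
  root.mainloop()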
And even if it works, why should I need to enable special accessibility features to access a UI mode that isn't utter sh_t?
I'd not vote to go back to the '90s (we've fixed way too many security problems since then), but it would be great if we could all agree that we lost something along the way, and try to get it back somehow while keeping the things that work today.
This is why printed physical books are still an important thing in the age of eReaders and the like. Also, the Library of Congress archives audio on vinyl. The ancient analog formats will still be understood in 100 years. Today's latest tech won't be, though. I was just provided a CD-R/DVD-R (didn't look that closely) of X-rays of my cat. I honestly have no way of reading that data, as none of my devices have an x-ROM drive.
Similarly, you may not have a CD-R or DVD-R reader, but it takes about 5 euros to buy one and access your CD. In 100 years it might cost more, but I'm willing to bet that CD-R and DVD-R readers will still be available in much greater numbers than cylinder phonograph players are. Making new ones, for really important data, will certainly be more expensive than making a new cylinder player, but it won't be impossible - if the data is important enough and for some reason all the millions of CD-R players in existence today have vanished, a new one will be made.
And analog formats aren't that great either - most of them tend to wear over time, so they have their own drawbacks.
> everything can be accessed using only the keyboard
Sounds like plain old html forms.
For one thing, and this is important, with every browser I tried, I get periodically interrupted by messages that the browser displays and that are not part of the form. Just recently, I was again asked for security updates by the browser. Sometimes when opening a form it gets prefilled by the browser, sometimes not, and sometimes only some of the fields. Some other times I get asked whether the browser should translate the form. Sometimes it asks me whether some fields should be prefilled. Sometimes when submitting a form the browser asks me whether I would like to store the form contents for later use. Sometimes the browser asks me at unpredictable times whether I would now like to restart it.
An important prerequisite for getting a lot of work done is for the system to behave completely consistently and predictably, and not to interrupt with unrelated messages, questions, alarms, etc. This is also very important for training new users, writing teaching material, etc.
Another important point, and that is very critical as well, is latency: Just recently, I typed text into my browser, and it stalled. Only after a few moments, the text I entered appeared.
I never had such issues with COBOL applications. Today, we can barely even imagine applications that are usable in this sense, because we have gradually accepted deterioration in user interfaces that would have been completely unacceptable, even unthinkable, just a few decades ago.
In the browser, given the steaming pile of code that some pages have become, I can understand some lag, some quirks. They broke the web and Chrome is helping, so... yeah. But in a text editor, of all applications? No. I mean, just... no. (Looking at you, Atom, VSCode... WHY?)
Whatever the hell you're trying to search / display / message (I assume, trying to help me...): please begin by not interrupting my input feedback. That should be, I don't know, the only sane priority for UX: to actually respond to the user? Otherwise the computer feels... broken, subpar, unfit for the task. Am I alone in having these feelings when using devices?
/rant over. I'm not that old for god's sake, 37, and they've got me jaded about UX quality already. All it took was two short decades to flush down the drain the precious good that didn't need fixing.
Ah. I'm sure we'll get back there. Eventually. Even if I have to write it myself (surely, we'll be legions). Some days I wonder what we're waiting for. And then I remember that this is the year of the Linux desktop... It's not that easy, is it?
And everything on the web front is so bloated. Yes, I can make a pretty UX with that, but at what cost? I think that the faster and cheaper computers that we have today have made it easier to overlook how bloated everything has become.
Upside is, it does make a lot of things easier. Downside is, everything is slower and fatter. Mostly because of new frameworks, languages, etc. that all aim to accomplish something, just more easily.
And I get it. I understand why we want to make it easier. But again, what are the costs?
Um, you're posting this on a site that's basically the polar opposite to "bloat" on the web. Badly-engineered systems have always existed somewhere; even the COBOL-based proprietary solutions referenced in the OP are a fairly obvious example of that.
HN, in my opinion, is an island of simplicity in a sea of complexity.
1 minute per year!
So you get stuff like Wayland that introduces latency issues, and when you try to explain the issue you feel like you're trying to describe the difference between magenta and fuchsia to a blind person.
Can't agree more. That might very well be it.
Could it be trained through exposure, e.g. with video games, as I implied?
Or is it maybe related to the "snappiness" of the brain? I think it's been shown that we clearly have different response times, some people being several times faster or slower than others (with no clear correlation to a "level" of intelligence - more like the characteristics of different engines in terms of latency / acceleration / max speed / torque / etc.).
So perhaps it can be learned?
As always, there are sacrifices, but in general I think we have made rational trade-offs in this realm.
I think despite all of these, most of us stick with modern browsers because the tradeoffs are worth it. Removing complexity and unpredictability at all costs usually has worse impacts than being annoyed and surprised - except if your software drives a cockpit or a nuclear plant.
Pressing the ESC key should dismiss these messages. It's an annoyance, to be sure, but not a deal breaker. Similarly for pre-filled content: you can just select and erase it.
At least, such behavior should be a user preference - "get out of my way" is like #1 on most users' lists for a useful toggle.
As for notifications specifically, it's not like we haven't developed 101 notification centers to postpone user response.
The problem, imho, with qualifying this as universally "not a deal breaker" is that it's a slippery slope, and all too subjective to boot - e.g. is Windows auto-rebooting not a deal breaker either? I'm sure to some people it's not, really...
A showstopper has very different thresholds depending on what you do. Live tasks notably (recording, streaming, etc.) should be treated as sacrosanct (down to a "real-time" setting at the thread level, when that isn't actually unstable for some ungodly reason). Chrome is just another OS nowadays, videoconferencing being a prime example of such an app. I wouldn't like Chrome to nag me during an interview, for instance.
If I have to make even just 1 extra move, gesture, click, whatever - even a look away from my work - because the machine demands that I do (screen or app locked 'behind' otherwise), in effect creating unsolicited gates between me and my workflow... yeah, there's a big problem. 5 minutes saved a day per user for a big corp is millions saved before a month has passed.
Now think of us computer users, collectively, as one big human corporation. Think of the time we're losing due to bad design. Let that sink in for a minute... It's a huge and stupid cost we impose on ourselves. Must be funny to some, idk...
There is a way to a better UX. We shouldn't need ESC to get there. ;-)
Until they wanted to stop paying for terminal emulator licenses and replaced it with a web gateway that translated the 3270 pages into HTML forms. That was pretty horrible to use and of course people began “forgetting” to record inventory moves and process steps. This was for six figure aerospace components so at the end of the day someone knew where they all were and what had been done, it just made it a lot harder to bill for milestones and wasted tons of time. Of course the software was like $15 a seat.
> The resulting interfaces are very efficient for text entry, and allow users to quickly get a lot of work done. The interaction is so much more efficient than what we are used to with current technologies that it can barely be described or thought about in today's terms.
It's not common anymore, but Unix clones still have dialog(1):
One important point that makes them so efficient to use is that everything can be accessed using only the keyboard.
For my personal use, I have simulated such forms using Emacs, and especially its Widget Library:
This is a bit similar to what you get with a SCREEN SECTION in COBOL.
Here's an example screenshot: http://www.buildeasypc.com/wp-content/uploads/2011/11/step12...
What's the reason that COBOL faded over the years?
Well, you should question your assumption that the new languages are better. They aren’t, they are just more fashionable. You can see this in every aspect of life, not just programming. Previous generations liked high-quality products that would last. Modern day people like cheap objects that soon break and are thrown away and replaced with something equally cheap and flimsy.
Availability of programmers is partly driven by the market, but largely by programmers themselves, who try to avoid mature and established technologies and chase the hottest new trend. So it's true what you say, but it doesn't happen because it has to - it happens because people want it to.
If you want something naive that I think blows both out of the water: FreePascal/Lazarus.
I recall it won an award for the most error messages from the fewest lines of code.
I think a program with a period in column 6 run through the IBM Cobol compiler would generate either 200 or 600 lines of error messages.
> COBOL “remains one of the fastest programming languages,” said John Zeanchock, associate professor of computer and information systems at Robert Morris University. “We get calls from big companies in Pittsburgh, like Bank of New York Mellon and PNC Bank, looking for COBOL programmers. The limitations of COBOL itself are probably not the problem with New Jersey’s system,” Zeanchock said, adding that a single mainframe computer could probably handle the processing needed for an individual state’s UI (unemployment insurance) system, even if the number of applications increased tenfold.
The article goes on to note:
> When it comes to our broken social safety net, COBOL isn’t really the bad guy: blame Congress for underfunding the programs, and states for failing to make up the gaps.
> “We’ve known for a long time that even a much smaller crisis would lead to daunting challenges,” said Dutta-Gupta. He points out that during the Great Recession, state unemployment agencies were so understaffed that their agency heads spent shifts answering applicant phone calls. Even if states weren’t getting federal funding, they still could have paid for better technology with their own tax revenue. According to reporting by the Tampa Bay Times, former Florida Gov. Rick Scott and current Gov. Ron DeSantis ignored repeated warnings that the state’s website barely functioned.
This looks a lot less like a computer problem and more of a process and institution problem made much worse by chronic underfunding and understaffing.
It’s a political issue. Some states are broke (e.g. NJ), and some are hell bent on not helping others in society so they handicap social welfare systems (e.g. FL). Real solution won’t come from a few COBOL programmers, it will have to be voters voting in people who will address the issues.
Other than defaulting on their debt, they can't do anything about it, as the money was spent decades ago in the form of defined-benefit pension payments due in the future. If all the recipients of those future defined-benefit pension payments died, then that would also lower their debt.
They are a relatively prosperous state. (Good GDP per capita, good income after taxes - https://upload.wikimedia.org/wikipedia/commons/9/9f/Median_h... - though life is accordingly expensive: 34th overall in affordability - https://www.usnews.com/news/best-states/rankings/opportunity... - better than Florida, Washington, NY, CA, etc. 2017 "PPP" is 112% of the national average, close to NY's 115% and CA's 114%.)
Though they should just ship pensioners to a cheap-to-live state, e.g. Florida or Pennsylvania.
Okay, okay, armchair policy recommendations are always very nuanced and great (not to mention useful), especially on random internet forums! But the level of dysfunction due to braindead leadership is always surprising.
On the other hand... in the 2009 recession, I had to call about some issues with unemployment benefits. It was a harrowing process. Like many, I practically memorized the key sequence so I didn't have to listen to all the voice prompts.
This assessment came after both of us signed up to volunteer to do the work and saw it was a human process failing and not a software issue. People tend to point to the parts most mysterious to them when things start to fail - a zSeries frame is, in actual reality, a very important big black mystery box that officials with access to microphones are not allowed to touch - perfect.
Don't let this discourage you though, their problems are still very real.
The big bummer is that the crisis almost requires enabling behavior here, in the sense that if there are things wrong with said COBOL system, it's the fault of the NJ gov't for not fixing it a long time ago. People should rescue this for the sake of the unemployed, but can we please fire people for this?
This isn't a red state: they collect plenty of revenue. So IF IT IS THE COBOL SYSTEM, and not just bureaucrats, it's still NJ's fault.
As Bezos said recently regarding the Seattle city gov't:
They don't have a revenue problem. They have a spending efficiency problem.
They spend even more. NJ is in the worst spot out of all the states:
Part of this revenue collection (tax policy) is why corporations are moving out en masse; the largest recent example is Honeywell's relocation to Charlotte. NJ's latest tax hike has actually caused a net loss in revenue.
The insane part to me here is that NJ politicians keep clamoring to raise taxes even more. It's almost as if they heard De Blasio's campaign slogan and thought it was a great idea.
If they consolidated and streamlined, it would be much more efficient, but there are numerous barriers to this (not least of which how eliminating a bunch of those jobs would go over with the public). They're likely going to have to do something eventually, they just haven't figured out how yet.
In my observation, after 5-10 layers of indirection, it eventually ends up overseas with a young foreign national who makes $3 an hour and absolutely does not have the required security clearance.
Yeah, sounds like New Jersey. I'm surprised that they sacked the team, it would be a very NJ thing to keep them on and just handwave the costs away.
Is that statement directed at programmers who want to help NJ? Because you made a pretty good argument earlier in your post that this isn't a software development problem.
I'd love for them to be wrong, that'd be great.
I vaguely remember they had different namespaces using common blocks.
 https://bugs.gentoo.org/641888 (reported 3 years ago!)
 https://bugs.gentoo.org/685960 (reported 1 year ago!)
It's great to see that they are alive and kicking.
The ASPX taxing/assessment system is the slow one in comparison when it comes to calculating. The ColdFusion system is not as hindered, but it's used for making letters and reports, exporting layers to GIS, and generally replacing COBOL screens for information retrieval.
Are there better solutions? Yes, but I was giving an example rather than critiquing the county's choice of software, which happened well before I came along.
Not sure if the COBOL bridge for Node is allowed. :-)
I have been working on a project and have some experience with DB2/AS400; it is not as old as COBOL. It is not very fun or exciting, and Stack Overflow lacks much information, so it's a lot of trial and error.
I can't believe how widespread these old systems are. I am surprised nobody has decided to update them; I mean, when these old developers die or retire... they will have a hard time migrating to newer systems.
Bonus: I HATE IBM. A .NET Core provider to access the database? That'll be a few $$. Crappy documentation? Check. Deleted code examples I had bookmarked? Check. IBM forum questions answered in private messages, so nobody else can use the information and the question has to be asked hundreds of times? Check.
IBM is so upside down.
I mostly work in .NET MVC/Core. However, connecting to the DB, obtaining licenses, etc. have been a real pain.
They even invented the PC.
But they fumbled and lost the PC and then were pretty much eclipsed by it.
If we make these (very questionable) assumptions:
- Most qualified mainframe COBOL programmers are in an age bracket for which COVID-19 is very dangerous,
- and most COBOL shops are reluctant to allow all-remote teams,
then it might be harder to fill those jobs than it is to hire risk-tolerant / risk-ignorant young adults to flip burgers.
  program PhoneBook;
  type EntryRec = record Name: string[32]; Phone: string[30] end;
  var Entries: file of EntryRec; Entry: EntryRec; C: Char; I: Integer;
  begin
    Assign(Entries, 'phone.dat'); {$I-} Reset(Entries); {$I+} if IOResult <> 0 then Rewrite(Entries);
    repeat
      Write('Phone book, what to do? (Enter, List, Modify, Quit) ');
      Readln(C); C := UpCase(C);
      if C='E' then begin
        Write('Name: '); Readln(Entry.Name);
        Write('Phone: '); Readln(Entry.Phone);
        Seek(Entries, FileSize(Entries)); Write(Entries, Entry)
      end else if C='L' then begin
        Seek(Entries, 0);
        while not Eof(Entries) do begin
          Read(Entries, Entry); Writeln('Name:', Entry.Name:32, ' Phone:', Entry.Phone:30)
        end
      end else if C='M' then begin
        Write('Entry number (0..', FileSize(Entries) - 1, '): '); Readln(I);
        Seek(Entries, I); Read(Entries, Entry);
        Write('Name (', Entry.Name, '): '); Readln(Entry.Name);
        Write('Phone (', Entry.Phone, '): '); Readln(Entry.Phone);
        Seek(Entries, I); Write(Entries, Entry)
      end
    until C='Q'; Close(Entries)
  end.
And of course there were the more high level "4GL" languages that combined a full database and a (mostly) general purpose programming language, like dBase/FoxPro/etc. These also had features like a visual form editor that allowed building data entry and manipulation applications quickly.
COBOL has built-in support for accessing flat file databases with either sequential or indexed organisation. On many mainframe and minicomputer operating systems, the filesystem provides native support for record-oriented files (with fixed or variable width records) and indexed-key files, and COBOL directly integrates with that.
COBOL has syntax for defining nested record structures which was often used to define the data schema for these files. You could store the definitions in a "copybook" (conceptually equivalent to an include file in C/C++) and reuse it in many programs which would manipulate the same files.
COBOL itself doesn't directly support "querying", except for the strategies of (1) manually iterating through every record in a sequential file to find matching records or (2) looking up a key-indexed file by its key field.
There are commonly implemented extensions to COBOL to access relational databases (SQL), hierarchical databases (e.g. IMS), etc. However, those are not part of the core COBOL language, and comparable facilities are defined for many other languages, so COBOL isn't really unique on that point.
It doesn't seem to me that having built-in syntax for this is necessarily a big advantage, since Python can easily and naturally work with such databases using the built-in dbm module in the standard library (and has no need for defining the schema up front): https://docs.python.org/3/library/dbm.html
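For instance, the whole "indexed file" pattern fits in a few lines (the file name and keys here are arbitrary):

  import dbm

  with dbm.open('customers', 'c') as db:  # 'c' = create if missing
      db['CUST0001'] = 'ACME CORP      555-0100'
      print(db['CUST0001'])               # b'ACME CORP      555-0100'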
If you do need to define the database schema in code, there are many third-party libraries that allow you to do this cleanly using classes and fields, e.g. Django. Even Java can do this with Hibernate.
Python doesn't support accessing mainframe key-indexed files. There is a port of Python to z/OS , but the dbm module can't read/write VSAM KSDS files, and I don't believe there exists any other Python module that can either. (Of course, someone could write a Python C extension using the VSAM C interface, or create a module which calls it using ctypes–z/OS adds some functions to stdio like flocate,fupdate,fdelrec for VSAM access–but I don't think anyone has done that so far.) Now of course, if you are porting your app to another platform, this will not be an issue, but if your data is staying on the mainframe, Python doesn't know how to handle it, COBOL does.
> If you do need to define the database schema in code, there are many third-party libraries that allow you to do this cleanly using classes and fields, e.g. Django.
Mainframe systems tend to use file formats based on plain text records where fields are assigned to fixed ranges of columns. Is there a Python module to do that? Especially in a declarative way, which is what COBOL provides.
My question was about the advantages of Cobol's built-in syntax for database access and it seems like, as you said, you could write a Python module for the mainframe database format if desired, and there would then be no particular advantage to Cobol's special syntax.
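For what it's worth, the declarative part is small to emulate; a sketch of a column-range record layout in Python, with field names and offsets invented:

  LAYOUT = [('cust_id', 0, 8), ('name', 8, 38), ('balance', 38, 47)]

  def parse_record(line):
      return {field: line[start:end].rstrip()
              for field, start, end in LAYOUT}

  rec = parse_record('CUST0001' + 'ACME CORP'.ljust(30) + '000012345')
  # {'cust_id': 'CUST0001', 'name': 'ACME CORP', 'balance': '000012345'}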
Edit: The J2ME environments used on phones 15 years ago also provided a database (Record Management System) which would be difficult to access from a language other than Java, but that's unrelated to the question of whether the "programming language itself has no clue about database layout".
In terms of data structures, imagine you are working in an environment in which you have hundreds of pre-existing data files defined by COBOL copybooks, which look like the example on this page: https://www.ibm.com/support/knowledgecenter/en/SSMQ4D_3.0.0/...
Now, in COBOL, you just include that in your program and then you can parse the file. In Java, well Java has no built-in support for that, but IBM has a product called IBM Record Generator for Java which uses COBOL copybooks to generate Java classes. I'm sure someone could write an equivalent tool for Python. But as far as I am aware, no one has, and it would be a non-trivial amount of work to do.
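The trivial core of such a tool is easy to sketch; the non-trivial 90% is everything the toy below ignores (level hierarchy, REDEFINES, OCCURS, COMP-3 packed decimals, EBCDIC). A Python toy that turns flat PIC X(n)/9(n) lines into column offsets:

  import re

  def layout_from_copybook(text):
      # Toy: flat alphanumeric/numeric display fields only.
      offset, layout = 0, []
      for m in re.finditer(r'\d+\s+([\w-]+)\s+PIC\s+[X9]\((\d+)\)', text):
          name, width = m.group(1), int(m.group(2))
          layout.append((name, offset, offset + width))
          offset += width
      return layout

  print(layout_from_copybook('''
      05  CUST-ID    PIC X(8).
      05  CUST-NAME  PIC X(30).
      05  BALANCE    PIC 9(9).
  '''))
  # [('CUST-ID', 0, 8), ('CUST-NAME', 8, 38), ('BALANCE', 38, 47)]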
> but that's unrelated to the question of whether the "programming language itself has no clue about database layout".
Well, that wasn't my statement, that was Animats'. Possibly what Animats was trying to say, is that in a classic COBOL environment, the language you use to define your database schema is built-in to your programming language, and uses the same syntax as you use to define in-memory data structures. That usually isn't true in other programming languages, including Python and Java, in many of their common usage patterns. In Java for example, you could store all your data using serialization, and then it would be much closer to how COBOL does it, but you normally don't do that. Or you use some kind of ORM framework like Hibernate, but then you end up with all these annotations which don't mean anything for in-memory data – or, in older versions of Hibernate, an XML file – which is further away from the classic COBOL model that describing in-memory data and on-disk/tape data is done identically.
However, not all COBOL code is "classic COBOL code" in that sense. Rather than storing data in files, many COBOL programs use a relational database, most commonly via the SQL precompiler (EXEC SQL). In that case, COBOL is basically no different than any of the other languages which support embedded SQL – the SQL standard defines embedded SQL support for Ada, C, COBOL, FORTRAN, MUMPS, Pascal, and PL/I (I know in the past Oracle RDBMS supported every one of those languages except for MUMPS, although in newer versions many of the precompilers have been discontinued due to lack of use.) Embedded SQL also exists for Java (SQLJ), although I've never seen anyone use that technology, almost everybody uses JDBC instead. I don't believe Embedded SQL is available for Python, but again, there is no reason why someone couldn't implement that if they wanted.
That said, if it's paid with public funds, it should be open source.
I have a full set of Donald Knuth's "The Art of Computer Programming" sitting on my shelf, unopened after two years. Maybe it is time to rifle through it and learn how to program like the pioneers did in the old days.
I suspect that programmers who have grown up in the era of the PC would be surprised at how much the vendors supplied. The IBM mainframes came with huge libraries of software and documentation. I remember being given a UNIX box for a project at the telco where I worked, and being a little surprised at how little was included compared to the IBM or DEC machines of the day.
There certainly were some good programmers around, but a lot of "development" consisted of taking a copy of some existing COBOL program and tweaking a few things to create a new report. Out of two or three years of working in COBOL, I don't think I ever wrote a program completely from scratch. As a creative outlet, it was much more interesting to work in C and try to figure out how the guys at Bell Labs were doing things. Those are the shoulders that many of us stand on today.
Mainframes are not big Unix boxes. Their OSs were evolving for decades before Unix booted for the first time and are very different from anything most people have direct experience with. You can boot up a legal copy of MVS 3.8j (or an illegal one of its modern descendant z/OS) but expect a learning curve. You can also get something similar from Unisys for their Clearpath machines.
Actually, these days, they can be:
> However, z/OS also supports 64-bit Java, C, C++, and UNIX (Single UNIX Specification) APIs and applications through UNIX System Services – The Open Group certifies z/OS as a compliant UNIX operating system – with UNIX/Linux-style hierarchical HFS and zFS file systems. As a result, z/OS hosts a broad range of commercial and open source software. z/OS can communicate directly via TCP/IP, including IPv6, and includes standard HTTP servers (one from Lotus, the other Apache-derived) along with other common services such as FTP, NFS, and CIFS/SMB.
Still, unless you are running a Unix like OS directly on an LPAR, you can't completely ignore the z/OS personality.
Unix was born after magnetic storage, both disk and tape, were common. A file looks a lot like a tape.
All System/360 descendants support ASCII; the issue they ran into early in the System/360 program was accessories: IBM had lots of EBCDIC I/O devices, and developing a line of ASCII I/O devices would have delayed the introduction of the line.
Also, I'm sure if you search "COBOL tutorials" on the engine of choice, you'll find tons of stuff.
I don't know that anyone would ever want to do things the same way today. Today we expect interfaces to adjust to the available screen and font size, which to the best of my recollection was not something that the old COBOL screens could do. There are still 'text-based user interface' tools available, although the modern approach is to link them into the program as a library rather than having them built into the language runtime. The curses (or ncurses) and s-lang libraries are what comes to mind.
The second rule of COBOL programming is...
Think about it: a 40-year-old system, still running. Amazing, right? Pick any language today, write a program that's a little bit enterprise-level - will it run for another 40 years without any major changes?
And they wonder how it got this bad.
Years of "it already works, don't fuck with it" policy, and tech-illiterate politicians.
Any developer saying "Hey, go and rewrite the code" underestimates all the edge cases that end up all over the place in the application.
Nobody understands why the rules are the way they are - but if someone gets paid $1 less they'll be screaming at you.
Search for "jane street ocaml" and "standard chartered haskell" for examples of companies using languages with good type systems in financial code.
The mainframes aren’t the problem, and doing a code conversion of COBOL is not trivial. Their data types are completely at odds with the data types in modern languages (for example, you can specify a string that is always exactly 30 ASCII characters long, where the 4th character must be numeric, the 18th must be ASCII but not numeric, and the 22nd through 30th must be alphanumeric - all with a native data type. And that’s just the scalar types; COBOL data types natively support recurrences, redefines, and hierarchies. And the language has something like 400 keywords.) You end up having to wrap every single line of code in compatibility layers upon compatibility layers, and then it doesn’t even look like Python, it just looks like a crappier version of COBOL, to make it function identically.
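To make that scalar example concrete: reimplemented by hand, that single native constraint already turns into an eyeful, and this is the easy case. A Python approximation of the rule described above:

  import re

  # 30 chars: 4th numeric, 18th not numeric, 22nd-30th alphanumeric,
  # everything else unconstrained (an approximation of the rule above).
  FIELD = re.compile(r'^.{3}\d.{13}[^\d].{3}[A-Za-z0-9]{9}$', re.ASCII)

  assert FIELD.match('ABC1DEFGHIJKLMNOPxQRSABCDEF123')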
It’s much easier (though more time consuming) to read the code and determine the business rules being enforced and reimplement them in a modern language.
I have worked with a lot of COBOL translators, and their output is worse than the original code by far - with a lot less confidence and experience in maintaining it.
More constructively, having worked on a (close to decade-long, by the time I joined) project to migrate a financial institution's business logic off an IBM mainframe onto more modern architecture, my main takeaway was that these systems have worked pretty darn well for decades with relatively little maintenance. The reluctance of companies to migrate god-knows-how-much data, business rules, and institutional knowledge (e.g. of data entry folks, business-level administrators who have to navigate these systems to do their job, etc) off a time-tested system they trust at massive expense, is completely understandable.