COBOL programmers answer call (ieee.org)
320 points by furcyd on April 10, 2020 | 238 comments



COBOL can be a real pleasure to use.

For instance, it allows programmers to quickly create user interfaces by declaratively describing them, in a so-called SCREEN SECTION.

The resulting interfaces are very efficient for text entry, and allow users to quickly get a lot of work done. The interaction is so much more efficient than what we are used to with current technologies that it can barely be described or thought about in today's terms.
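To make that concrete, here is a minimal sketch of what a SCREEN SECTION can look like (GnuCOBOL / Micro Focus style; the program and field names are invented for illustration):

               IDENTIFICATION DIVISION.
               PROGRAM-ID. CUSTOMER-ENTRY.

               DATA DIVISION.
               WORKING-STORAGE SECTION.
               01  WS-CUSTOMER.
                   05  WS-NAME  PIC X(30).
                   05  WS-CITY  PIC X(20).

               SCREEN SECTION.
              * The whole form is described declaratively: positions,
              * labels, and the data items each field reads into.
               01  CUSTOMER-FORM.
                   05  BLANK SCREEN.
                   05  LINE 2 COLUMN 5   VALUE "Name: ".
                   05  LINE 2 COLUMN 12  PIC X(30) USING WS-NAME.
                   05  LINE 4 COLUMN 5   VALUE "City: ".
                   05  LINE 4 COLUMN 12  PIC X(20) USING WS-CITY.

               PROCEDURE DIVISION.
                   DISPLAY CUSTOMER-FORM.
                   ACCEPT CUSTOMER-FORM.
                   STOP RUN.

DISPLAY paints the whole form at once, and ACCEPT lets the operator move through the fields locally, handing control back to the program only when the screen is complete - which is essentially the interaction model described above.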

Regarding the underlying theme of legacy systems, here is an interesting presentation by Jonathan Blow, citing various examples of earlier civilizations losing access to technologies they once had:

https://www.youtube.com/watch?v=pW-SOdj4Kkk

I think in terms of user interfaces, we are on a downward path in several important ways, and I highly recommend checking out earlier technologies that were in many ways an improvement over what we are working with today.


At one of my previous jobs, I was responsible for designing and building a webapp to replace a high-speed data entry system, written in COBOL, that was used to input large legal documents (several thousand lines of data in some cases). We had two objectives: data entry had to be as fast as or faster than on the old system, and training time on the new system had to be shorter.

We blew both objectives out of the water. Training time went from 3-6 months to about 2 weeks. We had a variety of modern UI concepts like modals, drop-downs and well-behaved tables that dramatically simplified the workflow. We also had a full suite of user-defined hotkeys, as well as a smart templating system that allowed users to redefine the screen layout on a per-use-case basis (for example, the screen would be reconfigured based on which customer the user was entering data for).

For performance, COBOL typically requires a server round trip every time the screen changes. We simply cached the data client side and could paginate so quickly that it actually caused a usage problem, because users were not mentally registering the page change. The initial screen load took slightly longer compared to the COBOL system, but the COBOL system required 40 screens for its workflow, whereas our system could do everything on a single screen.

I guess my point is that modern systems are capable of much better performance and user ergonomics than COBOL systems. We have a lot more flexibility in how UIs are presented, which lets us design really intuitive workflows. The flexibility also lets us tune our data flow patterns to maximize performance. But most modern development processes do not have this kind of maniacal focus. Systems don't perform because most product owners don't really care that much at the end of the day. Once you care enough, anything's possible.


In my last job we wrote a front-end that ran the COBOL and streamed the presentation information to the client over a WebSocket. It allowed them to keep using 30+ million LOC of existing apps.

The resulting webapps were made automatically responsive for the web and could be run anywhere - desktop, tablets, phones - and apps could open tabs that invoked other apps for multitasking. We also added some custom commands to the COBOL so that they could invoke web-based graphs, reporting, printing, PDF generation, etc. The client supported native typeahead, so the web apps behaved very much like desktop apps: pressing a series of shortcuts in quick succession resulted in them being replayed in order, overcoming any latency. This made the apps completely superior to normal web apps for their purpose (i.e. POS, ERP systems).

The utility that COBOL provided, coupled with a modern web based runtime written in React, was remarkable. Truly a hybrid of both the best parts. When I left, they were working on wrapping the React webapp into a React Native app.


For "presentation information", are you saying you just stream back the normal text on the screen (for that piece of information in the COBOL app) and then parse it into some sort of API?

I have no idea about COBOL at all, but I've done something like this before with a client mainframe scripting/macro language, and it was not fun. Basically I had to hard-code a bunch of key inputs to get to the information screen I needed, finally read that screen back out in plain text, and parse that into some sort of structure. It was a mess but worked for what was required at the time.


Things like buttons and windows are streamed across.


> For performance, cobol typically requires a server round trip every time the screen changes.

Round trips aren't the problem. Waiting for IO is. CICS solves this by not waiting for IO. It dumps the screen and moves on. Then it becomes the terminal's responsibility to wake CICS up with an attention key. If a front-end application is stuck waiting for terminal input, it's written wrong.
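For readers who haven't seen CICS, that pattern is usually called pseudo-conversational programming. Roughly the shape of it, as a fragment of a COBOL program (the map, mapset, transaction and data names here are invented):

              * Paint the screen, then give control back to CICS
              * instead of blocking the task on terminal input.
                   EXEC CICS SEND MAP('ENTRYM') MAPSET('ENTRYS') ERASE
                   END-EXEC

              * End the task right away. CICS will start transaction
              * 'ACC1' again when the user presses an attention key
              * (Enter, a PF key, ...), handing back our saved state.
                   EXEC CICS RETURN TRANSID('ACC1')
                             COMMAREA(WS-STATE)
                             LENGTH(LENGTH OF WS-STATE)
                   END-EXEC

              * On re-entry, the program picks the input up with:
              *     EXEC CICS RECEIVE MAP('ENTRYM') MAPSET('ENTRYS')
              *     END-EXEC

Between the SEND and the next attention key there is no task sitting in the region waiting on terminal I/O, which is part of what lets a single region serve a very large number of terminals.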


> Once you care enough, anything's possible.

People should bring back sigs, so that can become a meme.


Hi, I'm curious how you designed the app. Did you use the existing application as a base and try to duplicate it in the web, or was it a completely different beast? Do you have any book recommendations or even blog posts on how to upgrade legacy UI? It's a topic I'd love to dive more into, and I don't hear as many success stories as I would like.


How long did it take? What frameworks/libraries did you use?


We used Angular 1. It took about 2 years, but it was just me for about half that time and we were all learning JS and the web ecosystem. The new system had been prototyped a few years earlier using Java Swing. That really helped nail down the design requirements.


Did this meet the expected budget, compared to continuing to maintain the existing COBOL system?


> For instance, it allows programmers to quickly create user interfaces by declaratively describing them, in a so-called SCREEN SECTION.

By the way, if it seems odd to have I/O like this in your language,* this reflects the architecture of mainframes; because the CPU was so important, I/O was managed by external devices (typically themselves a cabinet of electronics, at a time when the CPU took up two or three cabinets itself). You'd write essentially a small program describing what I/O you wanted and then let the I/O channel controller deal with minutiae like the tape drive motor or input from terminals.

So the language would set up data to be slapped onto a terminal (this would have been a decade or more after COBOL was written, once terminals with screens were available), which the channel controller would send to the appropriate terminal. It would then deal with input and, once all was ready, tell the CPU that it had a bunch of input ready.

Terminals like the 3270 were even half-duplex, so they would process the input locally and then send it off as a block, to make the channel controllers more efficient!

In the ARPAnet of the 1970s, the ITS PDP-10s and -20s took this further with a protocol called SUPDUP (super duper) which allowed this kind of processing to be done on a remote machine when logging in over the net (as we would do today with ssh). So you could log into a remote machine and run EMACS, and while you were editing a remote file with a remote editor all the screen updating would be done by your local computer! Even the CADR lisp machines supported this protocol!

* At the time, not doing I/O through the language itself was considered oddball. I seem to recall a line in the original K&R where they described C's I/O and made an aside along the lines of "What, I have to call a function to do I/O?"


It's quite clever, actually. Offloading specialized work to autonomous subsystems allows the CPU to run your code better than it would if the CPU also had to deal with reading from disk or assembling packets for the network. A modern mainframe has tons of such subsystems, and that's what allows it to process volumes of transactions that PC-based servers in the same price range can't.

In fact, our PCs are like that because they had to be cheap and the cheapest way is to burden the CPU with all work.


Actually I'd disagree a bit with your last sentence. Handling I/O interrupts in the kernel was a property of cheap machines like minicomputers that you could buy for a few hundred $k or less. You typically couldn't afford extra channel controllers on cheap machines like that. Hence Unix's I/O architecture was driven by the constraints of the PDP-7. Multics had a standard I/O controller architecture.

PCs of course had the same issue — down to the CPU controlling the speed of disk rotation! But a modern PC has an I/O system more complex than the mainframes of old, with network interfaces that do checksum processing, handle retransmission and packet reassembly, etc., and just DMA the result. Disk drives, whether spinning or SSD, have a simple block interface unrelated to what the storage device is doing, etc. I think this is all as it should be, though I personally consider the Unix I/O model grossly antiquated.


If /dev/zero is an infinite source of zeros, then why doesn't the minor device number specify which byte to use? Then you could make character special file /dev/seven that was an infinite source of beeps! There have been so many times I needed that.


Great idea! Also, for when you need some paper, you can pipe /dev/twelve to your printer.

We should propose a kernel patch for this next April.

Extra credit if we get it rolled into POSIX.


Star Wars is more advanced and had this. How do you think R2D2 wasn’t cut from that film? He had an advanced case of Tourette’s so everything had to be beeped out.

Imagine if they’d run out of beeps during filming!


These days, the "autonomous subsystem" is a microcontroller at the other end of a USB, SATA or PCI bus. Modern network cards also need to perform most of their packet-assembly operations on device and not in the kernel, or they would never reach their advertised max bandwidth.


Those controllers are still designed prioritizing cost over performance most of the time.


Well it's system cost — the controllers take the high-frequency interrupts, not the CPU.

As for BOM... a few years ago we designed a big serial board (industrial control) and found it was much cheaper to buy AVR CPUs and use just the onboard UARTs than to buy UART chips.


Almost 25 years ago I took a TCP/IP class in Chicago that turned out to be a waste of my time, because it primarily dealt with clients like ftp, telnet, etc that I was already quite familiar with.

However, the rest of the students were mainframe guys, so it was an interesting few days. My (Solaris, Linux) world seemed incomprehensible to them, and vice versa, but it was nice to get a glimpse into how the other half lived. Finding experienced computer people who’d never used ftp was quite surprising.


Separating the I/O from the rest of the program is a good practice that we keep rediscovering.

E.g.:

- UIs in React try to be just rendering functions with one input and one output. Even more so with hooks.

- In Python, there is a new trend of providing "Sans I/O" libraries (https://sans-io.readthedocs.io/), and async/await are not just keywords, but a generic protocol created to delegate I/O to something outside your code.

It's interesting to see that on very old systems, even the hardware was organized that way.


Oddly, that reminds me a lot of early game consoles! You'd just set up some sprite registers, or send some commands to music sequencer hardware, and then those would go on and persistently do things on their own while the CPU got on with the "business logic" of running a game.

It's funny how it was the systems in the middle (minicomputers, like the PDP-11 where C originated) that did everything on the CPU, whereas the systems on the high end (mainframes) and low end (microcomputers) both split the work out, for different reasons: mainframes pushed IO out to independent coprocessors for multitenant IOPS parallelism (can't make any progress if the CPU gets an IO interrupt every cycle); while early microcomputers pushed IO out to independent coprocessors to retain the "feeling" of real-time responsivity in the face of an extremely weak CPU!


I never thought of a sprite chip this way but your message makes me think I should have. A sprite chip was a kind of coprocessor like an external floating point unit or a GPU.


A sprite chip literally is a GPU [Graphics Processing Unit], just for 2D graphics (sprites, tiles, color palettes, etc.) rather than 3D (vertices, triangles, fragment shading, etc.).


Mind you, a PPU is specifically, in most cases, more like an iGPU (integrated GPU, like Intel's on-CPU graphics.) Most of the PPU chip designs tended to share a memory (both physically and in address-space terms) with the CPU, such that the CPU would write directly to that memory, and then the PPU would read back from that memory. (This was when the PPU had memory at all; often they were pixel-at-a-time affairs, doing just-in-time sprite-tile lookups from the ROM pointed to by their sprite-attribute registers.)

Most things we call "coprocessors", on the other hand, were a bit different: they had their own on-board or isolated-bus memory, which only they could read/write to, and so the CPU would interact with them with "commands" put on a dedicated command bus for that coprocessor. Most sound chips (down to the simplest Programmable Interval Timer, but up to fancy chips like the SNES's SPC700) were like this; as were storage controllers like memory cards and the PSX's CD drive.


Literally true.

I was thinking of the GPUs we had in those days which were often a couple of VME cards or even a small cardcage, but you’re right: that physical distinction isn’t really relevant.


In the Amiga it was called the Copper, which was an abbreviation of co-processor.


SUPDUP and EMACS supported "line saving", so Emacs could send a control code to tell a SUPDUP terminal to save lines of text in off-screen buffers, and restore them to the screen, so EMACS only had to send each line one time, and the SUPDUP terminal could quickly repaint the screen as you scrolled up and down through a buffer.

Here's my Apple ][ FORTH implementation of SUPDUP with line saving (%TDSAV, %TDRES), which saved lines in the expansion ram card.

https://donhopkins.com/home/archive/forth/supdup.f


Now in HTML5 you have to learn 3 languages to do that.


HTML and CSS are more markup languages than actual languages. You can work at varying degrees of abstraction.

There are thousands of template sites being used today with JS plugins the developer probably wouldn’t know how to write. But the minimal interface layers are sufficient to get them to work.

Stuff like Select2 and Bootstrap's collection of plugins cover a broad range of the interactivity most people need on the internet.


What you are describing sounds like the inverse of TRAMP in modern GNU emacs.


In a way. We had that too, with a networked filesystem in the 1970s so you could simply open a remote file in ITS EMACS (or any other program). It was handled by the O/S, or rather a user space program like today’s FUSE.


But the bulk of the remote file system code was in user space.


Or an HTML form.


SUPDUP was far more dynamic than an HTML form (read RFCs 734 and 749).

HTML forms are more like the half-duplex terminals of the '70s/'80s.


I remember someone complaining about unix machine in comparison to mainframes:

"It generates an interrupt every time you press a key!"


At UMD, Chris Torek hacked ^T support to the 4.2 BSD tty driver, inspired by TOPS-10/TWENEX's interrupt character that displayed the system load, current running job, etc.

But the first version didn't have any de-bouncing, and would process each and every ^T it got immediately (each of which had a lot of overhead), so on a hardwired terminal you could hold the keys down and it would autorepeat really fast, bringing the entire system to its knees, while you could even watch the load go up and up and up!

And of course whenever the system got slow, everybody would naturally start typing ^T at once to see what was going on, making it even worse.

That was a Heisenbug, where the act of measuring affects what's being measured, with an exacerbating positive feedback loop. He fixed the problem by rate-limiting it to once a second.

https://en.wikipedia.org/wiki/Heisenbug

https://en.wikipedia.org/wiki/Positive_feedback


VMS systems on LAT networks only received a packet/interrupt per line from the terminal; that's how they were able to support many times more users than their Unix contemporaries.


Unix can be configured this way — it used to be the default, with # as the rubout character and @ as the line-delete character. It's still in the tty driver and can be useful when programming from a teletype (tty).


I learnt something today - thanks. I’ve never seen a Unix system configured like that, in over 25 years of doing this!


That system was inherited from Multics, but the limited number of interactive computers in those days were primarily used from printing terminals, which couldn't actually erase a character, so all of them had some such facility.


Thank goodness. I remember it was common that the login screen on a terminal would be configured for # as backspace, while backspace would actually DO a backspace but be entered into the buffer.

I suspect it was a remnant of hardcopy terminals.


> Terminals like the 3270 were even half duplex so would process the input and then send it off as a block, to make the channel controllers more efficient!

The block mode terminals (like the 3270) were/are kinda like HTML forms: The mainframe sends a form to the terminal, and the terminal has enough local smarts to know how forms work, that only some regions of the form are writable, and how to send a response back one form at a time, as opposed to the character-at-a-time terminals which Unix and VMS and ITS were built around. There's a lack of flexibility, but it allows mainframes to service tons of interactive users, for a certain definition of interactive.

The Blit terminal was the next step beyond block mode terminals, in some sense: Blits could be character-cell terminals with fundamentally the same model as the VT100, but they could also accept software in binary form and run interactive graphical programs locally. Think WASM, only with machine code instead of architecture-independent bytecode.

https://en.wikipedia.org/wiki/Blit_(computer_terminal)

> When initially switched on, the Blit looked like an ordinary textual "dumb" terminal, although taller than usual. However, after logging into a Unix host (connected to the terminal through a serial port), the host could (via special escape sequences) load software to be executed by the processor of the terminal. This software could make use of the terminal's full graphics capabilities and attached peripherals such as a computer mouse. Normally, users would load the window systems mpx (or its successor mux), which replaced the terminal's user interface by a mouse-driven windowing interface, with multiple terminal windows all multiplexed over the single available serial-line connection to the host.

> Each window initially ran a simple terminal emulator, which could be replaced by a downloaded interactive graphical application, for example a more advanced terminal emulator, an editor, or a clock application. The resulting properties were similar to those of a modern Unix windowing system; however, to avoid having user interaction slowed by the serial connection, the interactive interface and the host application ran on separate systems—an early implementation of distributed computing.

That was 8th and 9th Edition Research Unix; it was an influence on Plan 9, which took the distributed GUI computer system concept and ran with it.

> So you could log into a remote machine and run EMACS, and while you were editing a remote file with a remote editor all the screen updating would be done by your local computer!

Also, ITS had the neat feature of detaching job trees: You could login, get your own HACTRN (the hacked-up debugger ITS used as a shell), run a few other programs which would then be children of that HACTRN job, and detach the whole tree and logout. When you logged back in, you could re-attach the tree and carry on like nothing happened. It's kinda like screen or tmux.


>Also, ITS had the neat feature of detaching job trees: You could login, get your own HACTRN (the hacked-up debugger ITS used as a shell), run a few other programs which would then be children of that HACTRN job, and detach the whole tree and logout. When you logged back in, you could re-attach the tree and carry on like nothing happened. It's kinda like screen or tmux.

You could also detach any particular job sub-tree, and other users could reattach it. Useful for passing a live ZORK or LISP or EMACS back and forth between different logged-in users. "Here, can you fix this please?"

There was also a :SNARF command for picking a sub-job out of a detached tree (good for snarfing just your EMACS from your old HACTRN/DDT tree left after you disconnected, and attaching it to your current DDT).

https://github.com/PDP-10/its/blob/master/src/sysen1/ddt.154...

It helped that ITS had no security whatsoever! But it had some very obscure commands, like $$^R (literally: two escapes followed by a control-R).

There was an obscure symbol that went with it called "DPSTOK" ("DePoSiT OK", presumably) that, if you set it to -1, allowed you to type $$^R to mess with other people's jobs, dynamically patch their code, etc. (The DDT top level shell had a built-in assembler/debugger, and anyone could read anybody else's job's memory, but you needed to use $$^R to enable writing).

Since ITS epitomized "security through obscurity", the magic symbol DPSTOK was never supposed to be spoken of or written down, except in the source code. But if you found and read the source code, then you passed the test, and deserved to know!

There was a trick if you wanted to set DPSTOK in your login script (which everyone could read), or if somebody was OS output spying on you (which people did all the time), and you wanted to change their prompt or patch some PDP-10 instructions into their job without them learning how to do it back to you.

The trick was to take advantage of the fact that DPSTOK happened to come right after BYERUN. So you could set BYERUN/-1, which everybody does (to run "BYE" to show a joke or funny quote when you log out), then type a line feed to go to the next address without mentioning its name, then set that to -1 anonymously.

So knowing the name and incantation of the secret symbol implied you'd actually read the DDT source code, which meant you had high moral principles, and were qualified to hate unix, which didn't let you do cool stuff like that. ;)

https://github.com/larsbrinkhoff/its-archives/blob/master/em...

    From: "Stephen E. Robbins" <stever@ai.mit.edu>
    Date: Thu, 21 Dec 89 13:25:56 EST
    To: CENT%AI.AI.MIT.EDU@mintaka.lcs.mit.edu
    Subject: Where unix-haters-request is

       Date: Wed, 20 Dec 89 22:13:42 EST
       From: "Pandora B. Berman" <CENT%AI.AI.MIT.EDU@mintaka.lcs.mit.edu>

       ....Candidates for entry into this august body must either prove their
       worth by a short rant on their pet piece of unix brain death, or produce
       witnesses of known authority to attest to their adherence to our high
       moral principles..

    Does knowing about :DDTSYM DPSTOK/-1 followed by $$^R qualify as attesting
    to adherence of high moral principles?

    - Stephen
Here are the symbols in the source:

https://github.com/PDP-10/its/blob/master/src/sysen1/ddt.154...

    BYERUN: 0 ;-1 => RUN :BYE AT LOGOUT TIME.

    DPSTOK: 0 ;-1 => $$^R OK on non-SYS jobs
Here's the $$^R handling code that slyly prints out " OP? " to pretend it didn't understand you. (In case anybody's watching!)

https://github.com/PDP-10/its/blob/master/src/sysen1/ddt.154...

    N2ACR: SKIPN SYSSW ;$$^R
          jrst n2acr0 ;  Not the system?
        SETOM SYSDPS
        jrst n2acr9

    n2acr0: skipn dpstok ;Feature enabled?
          jrst n2acr9 ;  nope
        skipe intbit(U) ;Is it foreign?
          jrst n2acr9 ;  no, either SYS (special) or our own (OK anyway!)
        movei d,%URDPS
        iorm d,urandm(u)  ;turn on winnage
    n2acr9: 7NRTYP [ASCIZ/ OP? /]


This had some neat benefits: in the 90s I worked for a COBOL vendor (Acucorp) that had a bytecoded portable runtime which allowed you to run a binary on systems ranging from 16-bit DOS to Windows NT, most Unix variants, VMS, etc. (our QA matrix had ~600 platforms & versions). The display section meant it could adjust to the platform: on DOS and other consoles you had text controls, but on X11, Win16/32, OS/2, and Mac it had native GUI widgets, with native validation UI. It wasn't beautiful out of the box, but it was familiar, consistent, and accessible.

The same was true of the standard indexed storage: a different runtime could use a SQL database for storage without recompiling the program, which was key to some gradual migrations to Java. A similar feature allowed remapping invoke calls to run on a remote server.
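For readers who haven't touched COBOL files, "standard indexed storage" here means COBOL's built-in indexed (ISAM-style) files. A minimal sketch of what the program declares (names invented); the point is that the physical backing - e.g. a Vision/ISAM file or, with a different runtime, a SQL table - is chosen outside the program:

               IDENTIFICATION DIVISION.
               PROGRAM-ID. CUSTOMER-IO.

               ENVIRONMENT DIVISION.
               INPUT-OUTPUT SECTION.
               FILE-CONTROL.
              * Only a logical name and a key are declared here; the
              * runtime decides what "CUSTFILE" is physically backed by.
                   SELECT CUSTOMER-FILE ASSIGN TO "CUSTFILE"
                       ORGANIZATION IS INDEXED
                       ACCESS MODE IS DYNAMIC
                       RECORD KEY IS CUST-ID
                       FILE STATUS IS WS-FILE-STATUS.

               DATA DIVISION.
               FILE SECTION.
               FD  CUSTOMER-FILE.
               01  CUSTOMER-RECORD.
                   05  CUST-ID    PIC 9(8).
                   05  CUST-NAME  PIC X(30).

               WORKING-STORAGE SECTION.
               01  WS-FILE-STATUS  PIC XX.

               PROCEDURE DIVISION.
                   OPEN I-O CUSTOMER-FILE
                   MOVE 12345678 TO CUST-ID
                   READ CUSTOMER-FILE
                       INVALID KEY DISPLAY "Not found: " CUST-ID
                   END-READ
                   CLOSE CUSTOMER-FILE
                   STOP RUN.

The READ/WRITE verbs stay the same either way, which is what can make swapping the storage backend transparent to the program.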

At one point we produced a NPAPI plugin version: install it in Netscape and your client ran the UI while the data access and RPC calls happened on a server. All I can say is that it seemed like a good idea at the time.


I've not used this, but it reminds me some of the UI of early Lotus 1-2-3 (in text, no GUI). It was amazingly intuitive and easy to navigate, even without a mouse.

Modern web apps, each unique and confusing, with no reliable way to move focus around and with bizarre text entry "improvements", would have made the Admiral weep.


> Modern web apps, each unique and confusing, with no reliable way to move focus around

If accessibility is set up correctly, you can still move focus with TAB and activate widgets/buttons with SPACE or ENTER. No different from any terminal app. If client-side JS is being used, the web page could even display additional forms in response to a keyboard shortcut, with no network-introduced latency.


Accessibility is only part of it. You should know where your keys are going to take you, and tabs often become unpredictable and confusing


> move focus with TAB and activate widgets/buttons with SPACE or ENTER. No different from any terminal app

No different from a terminal app that pretends to be a GUI app :-P. At a minimum, many terminal/DOS apps used arrow keys to move between fields, and often just pressing Enter in a field (often after it was validated) moved you to the next relevant field. Shortcut keys to move around and/or perform context-sensitive functions (e.g. search for product IDs in a field where you are supposed to enter a product ID) would also be available at any time.

I remember my father making a program for his job (for his own use) ~20 years ago in Delphi, and he was manually wiring all the events to move around with the Enter key, like his previous program (written in DOS and Turbo Pascal) did, and he was annoyed that he had to do this manually.

TBH I do not believe it is impossible to do this nowadays on the web, but just like it wasn't done in the majority of GUI desktop apps that followed the DOS days, I also do not believe it is or will be done in the web apps being made nowadays. The reasons for this vary, but I think a large part of it is that people simply haven't experienced the other methods enough to even think about them.


It wasn't done by the apps, because the standard UX on all those platforms did not include move-focus-on-Enter - and GUI apps are generally supposed to follow the standard.

FWIW I agree that for data entry apps in particular, Enter is just too convenient - and I had to write code to manually implement it in GUI apps, as well.


Per sibling, even at best, this just isn't in the same league. And I'm doubtful about "best".

And even if it works, why should I need to enable special accessibility features to access a UI mode that isn't utter sh_t?


> Modern web apps, each unique and confusing, with no reliable way to move focus around and with bizarre text entry "improvements"

I'd not vote to go back to the '90s (we've fixed way too many security problems since then), but it would be great if we could all agree that we lost something along the way and try to get it back somehow, while keeping the things that work today.


That is a great video! It also reminds me of Dan Wang's essay on How Technology Grows: https://danwang.co/how-technology-grows/. It introduced me to the Ise Grand Shrine, which is a wooden temple that's rebuilt and torn down periodically so that the knowledge of how to maintain it doesn't get lost.


>Regarding the underlying theme of legacy systems, here is an interesting presentation by Jonathan Blow, citing various examples of earlier civilizations losing access to technologies they once had

This is why printed physical books are still an important thing in the age of eReaders and the like. Also, the Library of Congress archives audio on vinyl. The ancient analog formats will still be understandable in 100 years. Today's latest tech won't be, though. I was just provided a CD-R/DVD-R (didn't look that closely) of X-rays of my cat. I honestly have no way of reading that data, as none of my devices has an x-ROM drive.


Eh, not really a problem. Your CD-R issue is pretty much the same as not having a cylinder phonograph player - a device that is already 100 years old. Sure, if you do not have such a player you cannot play the cylinder, but the audio stored on those cylinders isn't lost; players exist for those who want to listen to the cylinders and, given enough money, new ones can be made.

Similarly, you may not have a CD-R or DVD-R reader, but it takes about 5 euros to buy one and access your CD. In 100 years it might cost more, but I'm willing to bet that CD-R and DVD-R readers will still be available in much greater numbers than cylinder phonograph players are. Making new ones, for really important data, will certainly be more expensive than making a new cylinder player, but it won't be impossible - if the data is important enough and for some reason all the millions of CD-R players in existence nowadays have vanished, a new one will be made.

And analog formats aren't that great either - most of them tend to wear over time, so they have their own drawbacks.


Mr Wizard taught me how to improvise a record player [1]. I doubt our great-grandchildren will be able to improvise a CD reader.

[1] https://www.youtube.com/watch?v=HJa6Ik6xmiU


External DVD writer (so, also CDs) drives are about $30 now for a decent brand, powered entirely over their USB cable so no power brick to mess with, and are about half an inch thick and barely bigger than the discs that go in them. One can easily live in a junk drawer, unnoticed until needed. I was surprised to discover how tiny and low-power they are now, and how cheap.


That's great for today. In 10 years? 20 years? Definitely worthless in 100. Even the $30 today is wasted money for me. I have friends who have antiquated tech lying around in a junk drawer that I could easily borrow if I needed it. Instead, I just asked the vet to attach the data on the disc to an email. Even cheaper, and less tech waste involved.


That’s funny. I have a Blu-ray Drive only so I can make physical backups of some important things to store.


> quickly create user interfaces by declaratively describing them

> everything can be accessed using only the keyboard

Sounds like plain old html forms.


Yes, at least in some respects, HTML forms could have been that way, but in practice, they are not:

For one thing, and this is important, with every browser I tried, I get periodically interrupted by messages that the browser displays and that are not part of the form. Just recently, I was again asked for security updates by the browser. Sometimes when opening a form it gets prefilled by the browser, sometimes not, and sometimes only some of the fields. Some other times I get asked whether the browser should translate the form. Sometimes it asks me whether some fields should be prefilled. Sometimes when submitting a form the browser asks me whether I would like to store the form contents for later use. Sometimes the browser asks me at unpredictable times whether I would now like to restart it.

An important prerequisite for getting a lot of work done is for the system to behave completely consistently and predictably, and not to interfere with unrelated messages, questions, alarms etc. This is also very important for training new users, writing teaching material etc.

Another important point, and that is very critical as well, is latency: Just recently, I typed text into my browser, and it stalled. Only after a few moments, the text I entered appeared.

I never had such issues with COBOL applications. Today, we can barely even imagine applications that are usable in this sense, because we have gradually accepted deterioration in user interfaces that would have been completely unacceptable, even unthinkable, just a few decades ago.


Of all the woes of modern UI, latency on text input is what I find most irritating. It's physical, visceral, mental, I don't know; when you've been raised by incredibly snappy video games (1980s-1990s, 2D glory), it's asinine to suffer input lag in 2020 on some x86 platform. It's just an alien experience, like time suddenly reversed for userland or something.

In the browser, given the steaming pile of code that some pages have become, I may understand some lag, some quirks. They broke the web and Chrome is helping, so... yeah. But in a text editor, of all applications? No. I mean, just... no. (Looking at you, Atom, VS Code... WHY?)

Whatever the hell you're trying to search / display / message (trying to help me, I assume): please begin by not interrupting my input feedback. That should be, I don't know, the only sane priority for UX: to actually respond to the user. Otherwise the computer feels... broken, subpar, unfit for the task. Am I alone in having these feelings when using devices?

/rant over. I'm not that old, for god's sake, 37, and they've got me jaded about UX quality already. All it took was two short decades to flush down the drain the precious good that didn't need fixing.

Ah. I'm sure we'll get back there. Eventually. Even if I have to write it myself (surely, we'll be legions). Someday I wonder what we're waiting for. And then I remember that this is the year of the Linux desktop... It's not that easy, is it?


I gotta agree here. UX seemed so simple just ten, twenty years ago... and I still wonder how the hell people programmed 16-bit video games on such tiny systems.

And everything on the web front is so bloated. Yes, I can make a pretty UX with that, but at what cost? I think the faster and cheaper computers we have today have made it easier to overlook how bloated everything has become.

Upside is, it does make a lot of things easier. Downside is, everything is slower and fatter. Mostly because of new frameworks, languages, etc that all aim to accomplish something, just easier.

And I get it. I understand why we want to make it easier. But again, what are the costs?


> And everything on the web front is so bloated.

Um, you're posting this on a site that's basically the polar opposite to "bloat" on the web. Badly-engineered systems have always existed somewhere; even the COBOL-based proprietary solutions referenced in the OP are a fairly obvious example of that.


Ok, then the _majority_ seems to be bloated. And I feel like this is the new normal.

HN, in my opinion, is an island of simplicity in a sea of complexity.


This is pure speculation on my part, but I like to think that there's literally billions to save. It's about the order of magnitude: across billions of computer users, even 1 minute lost per year per person adds up to tens of millions of person-hours.

1 minute per year!


I have a feeling that many programmers and engineers are blind to latency. Not in a sense that they are bad programmers/engineers, but in a sense that they simply cannot feel the difference, so they make stuff that introduce latency they themselves cannot perceive.

So you get stuff like Wayland that introduces latency issues and when you try to explain the issue you feel like trying to describe the difference between magenta and fuchsia to a blind person.


Oh my, I've never thought of it this way. I really get what you mean.

Can't agree more. That might very well be it.

Could it be trained, through exposure, e.g. with video games as I implied?

Or is it maybe related to the "snappiness" of the brain? I think it's been shown that we clearly have different response times, some people being several times faster/slower than others (with no explicit correlation to a "level" of intelligence; more like the characteristics of different engines in terms of latency / acceleration / max speed / torque / etc.).


TBH I don't know. I was playing video and computer games from a very young age, so it might have to do with this, but at the same time I know I wasn't paying much attention to latency until my late 20s. Even at 25 I'd be running Beryl (one of the first compositors) with its massive latency, and I'd be writing games with software-rendered mouse cursors, both being incredibly poor in terms of latency, and I'd just not notice it.

So perhaps it can be learned?



The other thing about too many modern user interfaces is that they often get subjected to whatever fad is in vogue this week. Eye candy takes precedence over consistent user interfaces, and gratuitous changes are routine because a new fad has become the Next Big Thing.


The trade-off, of course, is access. Now, computers are cheap and available everywhere. In the COBOL era, what percentage of a family's income would it take to purchase such a machine? How widespread were these machines in non-English-speaking areas? How easily could a program from one of those machines be used on a machine built by a different vendor? Were said programs resilient to malicious actors?

As always, there are sacrifices, but in general I think we have made rational trade-offs in this realm.


Not to mention executing the program across different kinds of wired and wireless networks, and handling a huge variety of client OS, user agent, and input modalities.


You seem to be complaining more about the browser than HTML per se. Why not use another browser, or even a lynx-like browser, then?

I think despite all of these issues, most of us stick with modern browsers because the tradeoffs are worth it. Removing complexity and unpredictability at all costs usually has worse impacts than being occasionally annoyed and surprised. Except if your software drives a cockpit or a nuclear plant.


> For one thing, and this is important, with every browser I tried, I get periodically interrupted by messages that the browser displays and that are not part of the form. ...

Pressing the ESC key should dismiss these messages. It's an annoyance to be sure, but not a deal breaker. Similar for pre-filled content, you can just select and erase it.


Sadly, <ESC>it doesn'<ESC>t really solve t<ESC>he problem of interrupting one's fl<ESC>ow...

At least, such behavior should be a user preference — "get out of my way" is like #1 on most users' lists for a useful toggle.

As for notifications specifically, it's not like we haven't developed 101 notification centers to postpone user response.

The problem, imho, with qualifying this as universally "not a deal breaker" is that it's a slippery slope, and all too subjective to boot — e.g. is Windows auto-rebooting not a deal breaker either? I'm sure to some people, not really...

A showstopper has very different thresholds depending on what you do. Live tasks notably (recording, streaming, etc.) should be sanctuarized (down to a "real-time" setting at the thread level, when that isn't actually unstable for some ungodly reason). Chrome is just another OS nowadays, videoconferencing being a prime example of such an app. I wouldn't like Chrome to nag me during an interview, for instance.

If I even have to make just one extra move, gesture, click, whatever, even a look away from my work, because the machine demands that I do (screen or app locked 'behind' otherwise), in effect creating unsolicited gates between me and my workflow... yeah, there's a big problem. Five minutes saved a day per user for a big corp is millions saved before a month has passed.

Now think of us computer users, collectively, as one big human corporation. Think of the time we're losing due to bad design. Let that sink in for a minute... It's a huge and stupid cost we impose on ourselves. Must be funny to some, idk...

There is a way to a better UX. We shouldn't need ESC to get there. ;-)


Indeed, and in fact there are systems that adapt between the two paradigms. For example, taking a system based on IBM 3270 screens and turning it into a series of web pages to enable a web portal for an existing mainframe system.


I worked for a company that had one of these ancient (probably mainframe) systems for inventory, business analytics etc. Users at the factory would access it via 3270 terminal emulators. It was clunky but usable.

Until they wanted to stop paying for terminal emulator licenses and replaced it with a web gateway that translated the 3270 screens into HTML forms. That was pretty horrible to use, and of course people began "forgetting" to record inventory moves and process steps. These were six-figure aerospace components, so at the end of the day someone knew where they all were and what had been done; it just made it a lot harder to bill for milestones and wasted tons of time. Of course the software was like $15 a seat.


Plain HTML5 forms: they had complex field structures and validation which required JavaScript until we got things like the extra input types and regex patterns.


Kinda, but not entirely, off-topic: Low-tech Magazine is all about solar power these days, but previous issues brought back to light technologies we once used that would probably still be useful these days; plus, you get to save power, lower your carbon footprint, and so on.

https://www.lowtechmagazine.com/


>For instance, it allows programmers to quickly create user interfaces by declaratively describing them, in a so-called SCREEN SECTION.

>The resulting interfaces are very efficient for text entry, and allow users to quickly get a lot of work done. The interaction is so much more efficient than what we are used to with current technologies that it can barely be described or thought about in today's terms.

It's not common anymore, but Unix clones still have dialog(1):

https://www.freebsd.org/cgi/man.cgi?query=dialog


Do you have a video that shows an example of this text entry system? I'm interested in seeing this.


Sometimes I see it on terminals, for instance when checking into a hotel, opening a bank account, booking a flight, interacting with the tax administration, etc.

One important point that makes them so efficient to use is that everything can be accessed using only the keyboard.

For my personal use, I have simulated such forms using Emacs, and especially its Widget Library:

https://www.gnu.org/software/emacs/manual/html_mono/widget.h...

This is a bit similar to what you get with a SCREEN SECTION in COBOL.


Anyone who has used the BIOS setup function of a PC before the EFI bloat took over will find that interface familiar; it's certainly easy to use and quite efficient.

Here's an example screenshot: http://www.buildeasypc.com/wp-content/uploads/2011/11/step12...



In the early '80s we had terminals (Newberry being one brand, IIRC) that would talk to the mainframe - in this instance a Honeywell Bull DPS8-range machine - and these terminals had block mode, so you could just type away, using the cursor keys to navigate, and key in all your code or field attributes as you would for a transactional screen-layout input interface. Then, having effectively edited and dealt with everything locally, you could hit send into the edit buffer, which covered much of a WYSIWYG form of input and layout for screens. So many ways of doing screen editing and text entry in the really early pre-PC days came down to the terminal and its local-cursor, send-the-whole-screen block mode of editing. This was unlike character mode (the very early systems' way of polling terminals), in which every keystroke effectively had to be handshaked. Though even some of these terminals could be configured to allow a full-screen editing mode and send via character-mode batch polling. Terminals were not cheap back then either, and when the PC came out, terminal emulation software was one of the big sellers in some markets, where people would spend lots on early PCs and more on this software, as it was cheaper than the dedicated terminal offerings.


I was blown away by some reporting systems in COBOL that were surprisingly understandable and simple.


The only way I found COBOL palatable was to spend the better part of a year programming in RPG III first.


If only it were so easy to do that with modern web technologies. Instead, you write dozens (if not hundreds) of lines of HTML/CSS/JS just to do a simple form post. And it feels like it's gotten worse over the years.


Thanks for the Jonathan Blow talk. Very informative and good to reflect on. E.g. in software, younger devs don't generally accept the advice of the older generation, hence we keep reinventing tech instead of iterating. Examples being serverless vs. PHP deployment, K8s, React, etc.


Wow, I was sure that COBOL was not being used any more, since the new programming languages are just better in every aspect.

What's the reason that COBOL faded over the years?


> the new programming languages are just better in every aspect

Well, you should question your assumption that the new languages are better. They aren’t, they are just more fashionable. You can see this in every aspect of life, not just programming. Previous generations liked high-quality products that would last. Modern day people like cheap objects that soon break and are thrown away and replaced with something equally cheap and flimsy.

There’s a reason that COBOL (and FORTRAN) code is still running 50 years later, and that last year’s JavaScript already needs to be rewritten.


It depends how you quantify better. If you mean easier to hire developers in, provides higher levels of abstraction, etc., then modern programming languages are generally better. If you value stability and cost, as those who own these COBOL systems do, then you don't care about what it's running on, and what you have is good enough until it suddenly isn't. 50 years is a long time to find bugs, and modern applications don't have anything close to that level of hardening.


> you mean easier to hire developers in, provides higher levels of abstraction, etc. then modern programming languages are generally better

That's a subtle distinction: in JavaScript, a high level of abstraction means hiding the details of the DOM and how the interface works, whereas in COBOL or FORTRAN what you're really abstracting is the problem domain itself.
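A small, hypothetical illustration of that difference (all of these data and paragraph names are invented; assume they are declared elsewhere) - a COBOL business rule tends to read in the vocabulary of the business itself:

              * Plumbing (screens, storage, transport) lives elsewhere;
              * the rule itself is written in domain terms.
                   COMPUTE INVOICE-TOTAL ROUNDED =
                       ORDER-SUBTOTAL + SHIPPING-CHARGE - CUSTOMER-DISCOUNT
                   IF INVOICE-TOTAL IS GREATER THAN CREDIT-LIMIT
                       PERFORM REJECT-ORDER
                   ELSE
                       PERFORM POST-INVOICE
                   END-IF

Nothing about the screen, the database, or the runtime shows up in the rule, which is the sense in which the abstraction sits at the domain level rather than at the plumbing level.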

Availability of programmers is partly driven by the market, but largely by programmers themselves, who try to avoid mature and established technologies and chase the hottest new trend. So what you say is true, but it doesn't happen because it has to; it happens because people want it to.


Have you used imgui? It's easily the fastest way to create a GUI since the dawn of computing; COBOL's SCREEN SECTION really has nothing on it.


Compared to Tkinter (or even straight Tk)?

If you want something naive that I think blows both out of the water: FreePascal/Lazarus.


Everything Old is New Again


Little-known fact - COBOL is very efficient - in terms of errors.

I recall it won an award for the most error messages from the fewest lines of code.

I think a program with a period in column 6 run through the IBM COBOL compiler would generate either 200 or 600 lines of error messages.


It's not so clear that COBOL is to blame:

> COBOL “remains one of the fastest programming languages,” said John Zeanchock, associate professor of computer and information systems at Robert Morris University. “We get calls from big companies in Pittsburgh, like Bank of New York Mellon and PNC Bank, looking for COBOL programmers. The limitations of COBOL itself are probably not the problem with New Jersey’s system,” Zeanchock said, adding that a single mainframe computer could probably handle the processing needed for an individual state’s UI unemployment insurance system, even if the number of applications increased tenfold.

https://slate.com/technology/2020/04/new-jersey-unemployment...

The article goes on to note:

> When it comes to our broken social safety net, COBOL isn’t really the bad guy: blame Congress for underfunding the programs, and states for failing to make up the gaps.

And later:

> “We’ve known for a long time that even a much smaller crisis would lead to daunting challenges,” said Dutta-Gupta. He points out that during the Great Recession, state unemployment agencies were so understaffed that their agency heads spent shifts answering applicant phone calls. Even if states weren’t getting federal funding, they still could have paid for better technology with their own tax revenue. According to reporting by the Tampa Bay Times, former Florida Gov. Rick Scott and current Gov. Ron DeSantis ignored repeated warnings that the state’s website barely functioned.

This looks a lot less like a computer problem and more of a process and institution problem made much worse by chronic underfunding and understaffing.


> This looks a lot less like a computer problem and more of a process and institution problem made much worse by chronic underfunding and understaffing.

It's a political issue. Some states are broke (e.g. NJ), and some are hell-bent on not helping others in society, so they handicap social welfare systems (e.g. FL). The real solution won't come from a few COBOL programmers; it will have to be voters voting in people who will address the issues.


How broke is NJ? What could they do? Why aren't they?


Extremely broke.

https://www.truthinaccounting.org/news/detail/financial-stat...

Other than defaulting on their debt, they can't do anything about it, as the money was committed decades ago in the form of defined-benefit pension payments due in the future. If all the recipients of those future defined-benefit pension payments died, then that would also lower their debt.


Hm, so currently they just "service the debt"; they can issue bonds and just pay out over 50+ years. I don't see why they don't plan for the long term with this.

They are a relatively prosperous state. (Good GDP per capita, good income after taxes - https://upload.wikimedia.org/wikipedia/commons/9/9f/Median_h... - though life is accordingly expensive: 34th overall in affordability - https://www.usnews.com/news/best-states/rankings/opportunity... - better than Florida, Washington, NY, CA, etc. Their 2017 "PPP" is 112% of the national average, close to NY's 115% and CA's 114%.)

Though they should just ship pensioners to a cheap-to-live state, e.g. Florida or Pennsylvania.

Okay, okay, armchair policy recommendations are always very nuanced and great (not to mention useful), especially on random internet forums! But the level of dysfunction due to braindead leadership is always surprising.


California, for example, is handling the increase in applications without many problems.

On the other hand... in the 2009 recession, I had to call about some issues with unemployment benefits. It was a harrowing process. Like many, I practically memorized the key sequence so I didn't have to listen to all the voice prompts.


I asked a few friends about the New Jersey call. They (2 of them, both retired and over 70) claimed there's no work to be done and it's actually an incompetently administered administration with human problems that is scapegoating the technology. Also, supposedly the New Jersey govt sacked their team and then was trying to contract out the work at $50/hr. Now they are offering $0/hr. And the solution isn't in software, or so they claimed.

This assessment came after both signed up to volunteer to do the work and saw it was a human process failing and not a software issue. People tend to point to the parts most mysterious to them when things start to fail - a zSeries frame is, in actual reality, a very important big black mystery box that officials with access to microphones are not allowed to touch - perfect.

Don't let this discourage you though, their problems are still very real.


How many folks on HN have witnessed this very same thing within large mega-corps and government agencies? I've seen it a LOT.

The big bummer is that the crisis almost requires enabling behavior here, in the sense that if there are things wrong with said COBOL system, it's the fault of the NJ gov't for not fixing it a long time ago. People should rescue this for the sake of the unemployed, but can we please fire people for this?

This isn't a red state: They collect plenty of revenue. So IF IT IS THE COBOL SYSTEM, and not just bureaucrats, it's still NJ's fault.

As Bezos said recently regarding the Seattle city gov't:

They don't have a revenue problem. They have a spending efficiency problem.


> This isn't a red state: They collect plenty of revenue.

They spend even more. NJ is in the worst spot out of all the states:

https://www.truthinaccounting.org/news/detail/financial-stat...


> This isn't a red state: They collect plenty of revenue.

Part of this revenue collection (tax policy) is why corps are moving out en masse; the largest recent example is Honeywell's relocation to Charlotte. NJ's latest tax hike has actually caused a net loss in revenue.


NJ seems to be on the bad side of the Laffer curve.

https://en.wikipedia.org/wiki/Laffer_curve


How has it affected their expenses? A big corp leaving lessens service utilization, which might help if NJ was paying a hefty premium for over-using its infrastructure.


A good theory, but it doesn't apply here, and I did leave out a key piece of info: the largest drop has actually been in income tax, a la the large hedge fund managers who have fled the state. Carolina Panthers owner and billionaire David Tepper is one recent-ish high-profile example, but there are many others. Corps like Honeywell aren't an infrequent sight either. New Jersey's revenue has fallen YoY since the loss of Tepper et al.

The insane part to me here is that NJ politicians keep clamoring to raise taxes even more. It's almost as if they heard De Blasio's campaign slogan and thought it was a great idea.


I was surprised to find it's not California or New York that has the highest taxes; New Jersey has the highest taxes in the nation.


I was reading more in depth about this NJ tax situation recently. Apparently a lot of it has to do with how they divided up counties into tiny municipalities, and each one gets its own public services such as firefighters/EMTs, police, etc. - which of course all have solid pension plans.

If they consolidated and streamlined, it would be much more efficient, but there are numerous barriers to this (not least of which is how eliminating a bunch of those jobs would go over with the public). They're likely going to have to do something eventually; they just haven't figured out how yet.


Yeah, I feel like, as much as we programmers like to make believe that we can re-architect our programs and it'll solve all of the problems, oftentimes the problems are up the chain, with the design of the product.


Interesting. Recently I've been getting emails for $55/hr work requiring "security clearance". I see how this kind of contracting operates; many body shops have become "minority-owned" corporations. Minority = either women or people of color.


It’s a scamola. They’re required to have X% of contracts fall into that minority / women-owned category so there’s a series of shell companies that offer a conduit service to subcontract the work.


The prime contractors are required to give out a certain amount of work to women/minority owned companies. There is a big number of these companies around DC. It’s one of the best ways to get into government contracting.


Everyone needs their cut. The original contracting company's probably charging $100+ an hour. They sub it out to another company for $75/hr. Eventually it gets to someone who actually does the work for $55/hr or less. They think they're getting a good deal, meanwhile everyone's getting screwed, from the employee to the government agency, due to all these middlemen.


The problem is the procurement rules. You want to make procurement as 'fair' as possible, but this makes the process lengthy, slow, and costly. The only way to deal with this, from the administration's perspective, is to sign occasional big support contracts with generic terms, which then call in specific expertise when it's needed. The administration knows it's overpaying and getting second-rate service, but the alternative is to wait two years to launch a procedure and get a contract with, hopefully, the right guys. But the right guys rarely apply, as the procurement rules are too complex to handle for any organisation that isn't specialised in handling procurement.


> Eventually it gets to someone who actually does the work for $55/hr or less.

In my observation, after 5-10 layers of indirection, it eventually ends up overseas with a young foreign national who makes $3 an hour and absolutely does not have the required security clearance.


The full requirements (security clearance, bonding, etc.) will probably make that $55/hr an even bigger joke.


>it's actually an incompetently administered administration with human problems

Yeah, sounds like New Jersey. I'm surprised that they sacked the team; it would be a very NJ thing to keep them on and just handwave the costs away.


Jerry Weinberg, in his classic "The Secrets of Consulting": no matter what it looks like, it's always a people problem.


"Human problems" is even more vague than "software issues". Any details as to what in the process is actually the bottleneck or failing?


No piece of software survives contact with humans.


As standard, most problems end up being ultimately human, not technical.


> Don't let this discourage you though, their problems are still very real.

Is that statement directed at programmers who want to help NJ? Because you made a pretty good argument earlier in your post that this isn't a software development problem.


The opinions of two people can hardly be considered as constituting any true reality.

I'd love for them to be wrong, that'd be great.


Places other than NJ have problems.


[flagged]


It literally can't be worse than private insurance.


Where is there a system based on private insurance?


The US. Who doesn't know this?


Deregulate healthcare pls. The cartels are straight evil.


yeah sure I'd loooove to have preexisting conditions back


Well it is in most of the world. Compare the US to Canada, similar patient outcomes at double the cost.


I worked in COBOL at an internship and this was around 2012. This system runs over 1000 court systems in the U.S. It's graphical too and sort of looks like vb6 on the frontend. It was a nightmare to work in even though the team was disciplined and took care of the codebase. You can't write 10k line programs with all global variables and have something that's nice to work on.


My first gig was an HLASM/COBOL internship at an insurance company... so true, 10k-line programs with all globals were the entire codebase. Navigating it with only a 3270 emulator gave me headaches, but in a way it was fun. Coding it felt like doing a sudoku.


Did the software happen to be used by courts in NY? On a job, I worked with fixed-width ASCII NY court data that was processed on some sort of mainframe that was set up in the late 70s, maybe early 80s. It still gets updated many times a day with every single civil case in NY state; I always wondered if it was COBOL-based.


I remember working on a super-old-legacy Fortran program that had 6-character significance for the variable names.

I vaguely remember they had different namespaces using common blocks.


This might not be all that old! Early ISO C standard (1990) had the same 6-significant-character limitation for identifiers exported across translation units. So far as I know, this quirk is there because early C implementations reused pre-existing Fortran 77 linkers, many of which still had that limit at the time.


AcuCOBOL? I used AcuCOBOL at my internship back in 2009-2010 to build VB6-style forms with COBOL.


For me it was micro focus Cobol.


MicroFocus bought AcuCorp, so that might have been the same product.


Ah, MicroFocus, where enterprise products go to die a very slow, drawn-out death.


Ah, so that's why MicroFocus stock suddenly started surging a few days ago. I got stuck with some shares when they spun off from HP. It's my worst performing stock and I couldn't figure out what the heck they did. But now COBOL oriented companies are booming.


And at the same time, Gentoo maintainers are reluctant[1][2][3] to update the shipped version of GnuCOBOL. Similar to Fedora/RHEL[4]. And it's far from dead - GnuCOBOL is actively developed[5] and preparing[6] an upcoming release this year.

[1] https://bugs.gentoo.org/641888 (reported 3 years ago!)

[2] https://bugs.gentoo.org/685960 (reported 1 year ago!)

[3] https://github.com/gentoo/gentoo/pull/12067

[4] https://bugzilla.redhat.com/show_bug.cgi?id=1714241

[5] https://sourceforge.net/p/open-cobol/code/commit_browser

[6] https://sourceforge.net/p/open-cobol/discussion/cobol/thread...


Ah. Gentoo. That brings up memories - I've used it as my main OS for ~3yrs. Updating OpenOffice with 4h compilation times used to be fun :).

It's great to see that they are alive and kicking.


Wow, those Gentoo bugs are very disappointing. Are they typical of Gentoo?


Robert Glass wrote a series of articles in the 80s and 90s defending COBOL. I once wrote him (the old fashioned way, with an enclosed check!) to get back issues of his old printed newsletter because I wanted to hear what he had to say about it. Those don't appear to be online, but here's an article he wrote in 1997 for CACM:

https://www.thefreelibrary.com/Cobol+-+a+contradiction+and+a...


That was honestly a great read. Would love to read the same article but rewritten for today's context. Thanks for sharing it.


Glass is probably best known now for his book "Facts and Fallacies of Software Engineering", which is a pretty good entry in a pretty lame genre, which is the attempt to draw meaningful conclusions from the research literature on software engineering. The main lesson of the book is how poor the studies are, but it's still worth reading for his own wisdom.


I work in County IT in a medium-large county in my state, about 110K residents. We just finished a migration from COBOL on an HP3000 to SQL+ColdFusion and some ASPX. COBOL on PA-RISC is vastly faster at this computation than SQL on Intel; our HP3000 is a 120 MHz machine with 500 MB of RAM, and our Intel stuff is a VM on ESXi. Running the tax roll for 45K parcels in COBOL takes a fifth of the time on the HP despite the massive hardware difference!


Why was "SQL+ColdFusion and some ASPX" selected as the replacement stack?


Two systems, one for Tax/Assessment which was provided by the state is SQL and ASPX and the other which is used for historical inquiry and document generation which is in-house code in SQL and Coldfusion.

The ASPX taxing/assessment system is the slower of the two when it comes to calculating. The ColdFusion system is not as hindered, but it's used for making letters and reports, exporting layers to GIS, and generally replacing COBOL screens for information retrieval.

Are there better solutions? Yes, but I was giving an example rather than critiquing the County's choice of software, which happened well before I came along.


yeah, wtf...


wow, this feels like a case of really poorly optimized software.


We should have a celebratory COBOL-thon this weekend. Everyone write a minimal COBOL app. Just for fun.

Not sure if the COBOL bridge for Node is allowed. :-)

Resources:

https://opensource.com/life/15/10/open-source-cobol-developm...

https://archive.org/search.php?query=COBOL%20programming


And what about https://github.com/azac/cobol-on-wheelchair ?

       display
           "content-type: text/html"
           newline
       end-display.
Most of your webapp is already done :D


Step 1: Write a JetBrains Cobol plugin


I am wondering if my old professor answered the call. He used to work in the banking industry, is very outspoken against Capitalism and got fired for wearing jeans.

I have been working on a project and have some experience with DB2/AS400; it is not as old as COBOL. It is not very fun or exciting, and Stack Overflow lacks much information, so it's a lot of trial and error.

I can't believe how widespread these old systems are. I am surprised nobody has decided to update them; I mean, when these old developers die or retire... they will have a hard time migrating to newer systems.

Bonus: I HATE IBM. A .Net Core provider to access the database? That's a few $$. Crappy documentation? Check. Deleted code examples I had bookmarked? Check. IBM forum questions answered in private messages, so nobody else can use the information and the question needs to be asked hundreds of times? Check.

IBM is so upside down.


I thought IBM handles this by making VMs so old things can run on the new(er) things? Isn't it turtles all the way down? I worked for a company that specialized in distribution and circulation software for all the big papers at one point in time or another, and it was all COBOL, with some of the code bases as old as I was (a handful of the engineers who laid them down were still on staff; pretty cool resources, since they'd solved pretty much everything). I bagged a bunch of AS400 operating manuals which were on their way to the trash. I've got one on my shelf I wish I could share with you. Anyway, to complete the anecdote, one of the younger engineers found their way to greener pastures by creating a screen scraper of sorts for the AS400 to expose it to another API. My mind was blown at the time, but it makes sense, since these were the webforms of their day.


In our case we have an IBM iSeries box; AS400 uses DB2 as its database. I am writing an application that communicates with DB2, and I perform a lot of queries directly on the database. We also have a book on the AS400, but most development is done by our old-school dev. There are plans to migrate to a newer system, but it is quite expensive.

I mostly work in .Net MVC/Core. However, connecting to the DB, obtaining licenses, etc. have been a real pain.


I'm not the poster you intended, but I would definitely like some AS400 manuals. I played with one a little in high school and think we're missing out on some good ideas there. I'd like to learn a little more, and perhaps take some inspiration. Email is in my profile.


Back in the early 80s, IBM was at the top. They had solid systems and excellent documentation.

They even invented the PC.

But they fumbled and lost the PC and then were pretty much eclipsed by it.


I coded COBOL and RPGII programs back in the late 70's, and I'd flip burgers before writing another line.


Oddly enough I suspect you’d have a much harder time getting hired for a burger flipping gig at the moment. What a strange time.


Maybe, maybe not.

If we make these (very questionable) assumptions:

- Most qualified mainframe COBOL programmers are in an age bracket for which COVID-19 is very dangerous,

- and most COBOL shops are reluctant to allow all-remote teams,

then it might be harder to fill those jobs than it is to hire risk-tolerant / risk-ignorant young adults to flip burgers.


For what it's worth, my dad was doing remote COBOL in the early 1980s, so the technology is available.


And I wonder how the pay compares.


I wouldn't want to spend more time doing it, but I do think today's newbies have missed something in not seeing them and playing with them for a bit.


Thanks for the laugh.


COBOL was the first language to take data format seriously, with "records" and a "data division". To some extent, it still is. Look at the gyrations people go through to get Java or Python to talk to an SQL database. The programming language itself has no clue about database layout.
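
For anyone who hasn't seen one, a DATA DIVISION record looks roughly like this (a minimal sketch with invented field names); the key point is that the same declaration describes both the in-memory layout and the bytes of the record on disk or tape:

      *> Hypothetical customer record; PIC clauses fix the exact
      *> size and type of every field, so the layout is the format.
       01  CUSTOMER-RECORD.
           05  CUST-ID          PIC 9(6).
           05  CUST-NAME        PIC X(30).
           05  CUST-ADDRESS.
               10  STREET       PIC X(25).
               10  CITY         PIC X(20).
               10  POSTAL-CODE  PIC X(10).
           05  ACCOUNT-BALANCE  PIC S9(7)V99.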


There were many languages which had support for records. Here is an example in Turbo Pascal:

    program PBook;
    type
      EntryRec = record
        Name: string[32];
        Phone: string[30];
      end;
    var
      Entries: File of EntryRec;
      Entry: EntryRec;
      C: Char;
      I: Longint;
    begin
      Assign(Entries, 'entries.dat');Reset(Entries);
      repeat
        Write('Phone book, what to do? (Enter, List, Modify, Quit) ');
        Readln(C); C:=UpCase(C);
        if C='E' then begin
          Write('Name: '); Readln(Entry.Name);
          Write('Phone: '); Readln(Entry.Phone);
          Seek(Entries, FileSize(Entries));
          Write(Entries, Entry);
        end else if C='L' then begin
          Seek(Entries, 0);
          while not Eof(Entries) do begin
            Read(Entries, Entry);
            Writeln('Name:', Entry.Name:32, ' Phone:', Entry.Phone:30);
          end;
        end else if C='M' then begin
          Write('Entry number (0..', FileSize(Entries) - 1,'): ');Readln(I);
          Seek(Entries, I);
          Read(Entries, Entry);
          Write('Name (', Entry.Name, '): ');Readln(Entry.Name);
          Write('Phone (', Entry.Phone, '): ');Readln(Entry.Phone);
          Seek(Entries, i);
          Write(Entries, Entry);
        end;
      until C='Q';
      Close(Entries);
    end.
Many BASIC dialects, like GW-Basic, also had support for record-based files. Though since they often didn't have explicit type support these records were defined in terms of how many fields and characters per field would fit in a record, but still you worked with what was essentially a flat-file database.

And of course there were the more high level "4GL" languages that combined a full database and a (mostly) general purpose programming language, like dBase/FoxPro/etc. These also had features like a visual form editor that allowed building data entry and manipulation applications quickly.


Python takes care of the records pretty well with named tuples (and since it's dynamically typed, not knowing the fields until runtime is expected). It doesn't have anything for the query part though, like C# does with LINQ - are you saying that COBOL does? What does this knowledge of database layout look like in COBOL?


> It doesn't have anything for the query part though, like C# does with LINQ - are you saying that COBOL does? What does this knowledge of database layout look like in COBOL?

COBOL has built-in support for accessing flat file databases with either sequential or indexed organisation. On many mainframe and minicomputer operating systems, the filesystem provides native support for record-oriented files (with fixed or variable width records) and indexed-key files, and COBOL directly integrates with that.

COBOL has syntax for defining nested record structures which was often used to define the data schema for these files. You could store the definitions in a "copybook" (conceptually equivalent to an include file in C/C++) and reuse it in many programs which would manipulate the same files.

COBOL itself doesn't directly support "querying", except for the strategies of (1) manually iterating through every record in a sequential file to find matching records or (2) looking up a key-indexed file by its key field.

There are commonly implemented extensions to COBOL to access relational databases (SQL), hierarchical databases (e.g. IMS), etc. However, those are not part of the core COBOL language, and comparable facilities are defined for many other languages, so COBOL isn't really unique on that point.
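
To make the indexed case concrete, a random read against a key-indexed file looks roughly like this; this is a GnuCOBOL-flavoured sketch with invented file and field names, not code from any real system:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. CUSTLOOK.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
      *> The file organization and its key are declared up front.
           SELECT CUSTOMER-FILE ASSIGN TO "customer.dat"
               ORGANIZATION IS INDEXED
               ACCESS MODE IS RANDOM
               RECORD KEY IS CUST-ID.
       DATA DIVISION.
       FILE SECTION.
       FD  CUSTOMER-FILE.
       01  CUSTOMER-RECORD.
           05  CUST-ID    PIC 9(6).
           05  CUST-NAME  PIC X(30).
       PROCEDURE DIVISION.
           OPEN INPUT CUSTOMER-FILE
      *> Set the key field, then READ performs the indexed lookup.
           MOVE 123456 TO CUST-ID
           READ CUSTOMER-FILE
               INVALID KEY DISPLAY "NO SUCH CUSTOMER"
               NOT INVALID KEY DISPLAY CUST-NAME
           END-READ
           CLOSE CUSTOMER-FILE
           STOP RUN.

On z/OS the SELECT would typically be assigned to a DD name pointing at a VSAM dataset rather than a literal file name, but the COBOL itself looks much the same.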


Thanks.

It doesn't seem to me that having built-in syntax for this is necessarily a big advantage, since Python can easily and naturally work with such databases using the built-in dbm module in the standard library (and has no need for defining the schema up front): https://docs.python.org/3/library/dbm.html

If you do need to define the database schema in code, there are many third-party libraries that allow you to do this cleanly using classes and fields, e.g. Django. Even Java can do this with Hibernate.


> since Python can easily and naturally work with such databases using the built-in dbm module in the standard library (and has no need for defining the schema up front): https://docs.python.org/3/library/dbm.html

Python doesn't support accessing mainframe key-indexed files. There is a port of Python to z/OS [1], but the dbm module can't read/write VSAM KSDS files, and I don't believe there exists any other Python module that can either. (Of course, someone could write a Python C extension using the VSAM C interface, or create a module which calls it using ctypes–z/OS adds some functions to stdio like flocate,fupdate,fdelrec for VSAM access–but I don't think anyone has done that so far.) Now of course, if you are porting your app to another platform, this will not be an issue, but if your data is staying on the mainframe, Python doesn't know how to handle it, COBOL does.

> If you do need to define the database schema in code, there are many third-party libraries that allow you to do this cleanly using classes and fields, e.g. Django.

Mainframe systems tend to use file formats based on plain text records where fields are assigned to fixed ranges of columns. Is there a Python module to do that? Especially in a declarative way, which is what COBOL provides.

[1] https://www.rocketsoftware.com/product-categories/mainframe/...


Sorry, I think I was a bit unclear. By "such databases" I meant to refer to your "flat file databases with either sequential or indexed organisation" in general, not the specific format used on mainframes.

My question was about the advantages of Cobol's built-in syntax for database access and it seems like, as you said, you could write a Python module for the mainframe database format if desired, and there would then be no particular advantage to Cobol's special syntax.

Edit: The J2ME environments used on phones 15 years ago also provided a database (Record Management System) which would be difficult to access from a language other than Java, but that's unrelated to the question of whether the "programming language itself has no clue about database layout".


I don't believe COBOL has any absolute advantage over Python. It can have a relative advantage in certain environments, especially in mainframe environments. In mainframe environments, COBOL has pre-existing interfaces to various operating system and middleware facilities used on mainframes, whereas for Python you'd have to build those interfaces yourself.

In terms of data structures, imagine you are working in an environment in which you have hundreds of pre-existing data files defined by COBOL copybooks, which look like the example on this page: https://www.ibm.com/support/knowledgecenter/en/SSMQ4D_3.0.0/...

Now, in COBOL, you just include that in your program and then you can parse the file. In Java, well Java has no built-in support for that, but IBM has a product called IBM Record Generator for Java which uses COBOL copybooks to generate Java classes. I'm sure someone could write an equivalent tool for Python. But as far as I am aware, no one has, and it would be a non-trivial amount of work to do.

> but that's unrelated to the question of whether the "programming language itself has no clue about database layout".

Well, that wasn't my statement, that was Animats'. Possibly what Animats was trying to say, is that in a classic COBOL environment, the language you use to define your database schema is built-in to your programming language, and uses the same syntax as you use to define in-memory data structures. That usually isn't true in other programming languages, including Python and Java, in many of their common usage patterns. In Java for example, you could store all your data using serialization, and then it would be much closer to how COBOL does it, but you normally don't do that. Or you use some kind of ORM framework like Hibernate, but then you end up with all these annotations which don't mean anything for in-memory data – or, in older versions of Hibernate, an XML file – which is further away from the classic COBOL model that describing in-memory data and on-disk/tape data is done identically.

However, not all COBOL code is "classic COBOL code" in that sense. Rather than storing data in files, many COBOL programs use a relational database, most commonly via the SQL precompiler (EXEC SQL). In that case, COBOL is basically no different than any of the other languages which support embedded SQL – the SQL standard defines embedded SQL support for Ada, C, COBOL, FORTRAN, MUMPS, Pascal, and PL/I (I know in the past Oracle RDBMS supported every one of those languages except for MUMPS, although in newer versions many of the precompilers have been discontinued due to lack of use.) Embedded SQL also exists for Java (SQLJ), although I've never seen anyone use that technology, almost everybody uses JDBC instead. I don't believe Embedded SQL is available for Python, but again, there is no reason why someone couldn't implement that if they wanted.
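
For the embedded SQL route, the precompiler style looks roughly like this (an illustrative fragment with invented table and host-variable names; it needs a DB2-style precompiler pass before the COBOL compiler ever sees it):

       WORKING-STORAGE SECTION.
      *> Host variables are ordinary COBOL data items.
       01  WS-CUST-ID    PIC S9(9) COMP.
       01  WS-CUST-NAME  PIC X(30).
       PROCEDURE DIVISION.
      *> Inside EXEC SQL, a leading colon marks a host variable.
           EXEC SQL
               SELECT CUST_NAME
                 INTO :WS-CUST-NAME
                 FROM CUSTOMERS
                WHERE CUST_ID = :WS-CUST-ID
           END-EXEC.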


Since Python 3.7, dataclass (which is typed) is probably preferred over namedtuple: https://realpython.com/python-data-classes/


Dataclass is not immutable; in some cases typing.NamedTuple should be preferred over dataclass if typing is needed. Deep dive here: https://stackoverflow.com/questions/51671699/data-classes-vs...


That's not really true - you can set frozen=True and you will get an immutable version of the class.


We've had great results with Pydantic models. https://pydantic-docs.helpmanual.io/


If you want me to work on it for free, a mechanism already exists for this. It's called open source. Put the codebase on Github or wherever and I'll contribute.


The odds that their actual problem is COBOL code and not mismanagement are vanishingly small.

That said, if it's paid with public funds, it should be open source.


Or, if you're a resident of NJ, then how about a no-money-exchange deal. I'll do the work, but I won't owe state taxes for the 2020 year. Win-win


What's the equivalent technology of today to get in on, in order to get some of this kind of money for old rope action in our retirement?


C and Java are going to be around forever.


I wonder if I'm personally addicted to easiness. I stand on the shoulders of giants and I forget just how nice it is to dig my toes into the soil.

I have a full set of Donald Knuth's "The Art of Computer Programming" sitting on my shelf, unopened after two years. Maybe it is time to rifle through it and learn how to program like the pioneers did in the old days.


No criticism of TACP intended, but I don't think you would find it very helpful in learning "how to program like the pioneers did in the old days." I don't remember too many of my older colleagues in the 1980s having copies of Knuth. It was something you might encounter among computer scientists, but not day-to-day working programmers. We rarely thought about algorithms in that way when we were building business systems. If we did need an algorithm (maybe to sort some customer records that were stored on tape) we would dig out the IBM or DEC binders and look up the system call that we were supposed to use.

I suspect that programmers who have grown up in the era of the PC would be surprised at how much the vendors supplied. The IBM mainframes came with huge libraries of software and documentation. I remember being given a UNIX box for a project at the telco where I worked, and being a little surprised at how little was included compared to the IBM or DEC machines of the day.

There certainly were some good programmers around, but a lot of "development" consisted of taking a copy of some existing COBOL program and tweaking a few things to create a new report. Out of two or three years of working in COBOL, I don't think I ever wrote a program completely from scratch. As a creative outlet, it was much more interesting to work in C and try to figure out how the guys at Bell Labs were doing things. Those are the shoulders that many of us stand on today.


Just remember, it was uphill both ways when those guys walked to work. They also could only write code by candle light.


Can any of you seasoned COBOL programmers in here please point to the best learning resources for this? I would love to learn it but I don’t have any old IBM mainframes lying around.


GNU COBOL runs on pretty much anything. You can play with it right now. No need for a mainframe. In fact, a lot of COBOL code running these days is not running on mainframes.

Mainframes are not big Unix boxes. Their OSes had already been evolving for years before Unix booted for the first time, and they are very different from anything most people have direct experience with. You can boot up a legal copy of MVS 3.8j (or an illegal one of its modern descendant, z/OS), but expect a learning curve. You can also get something similar from Unisys for their ClearPath machines.
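
If you want to try it right now, a minimal program is enough to get going; this assumes the GnuCOBOL package provides the cobc compiler, which it does on most distros:

      *> Save as hello.cob, then: cobc -x hello.cob && ./hello
       IDENTIFICATION DIVISION.
       PROGRAM-ID. HELLO.
       PROCEDURE DIVISION.
           DISPLAY "Hello from GnuCOBOL".
           STOP RUN.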


> Mainframes are not big Unix boxes.

Actually, these days, they can be:

> However, z/OS also supports 64-bit Java, C, C++, and UNIX (Single UNIX Specification) APIs and applications through UNIX System Services – The Open Group certifies z/OS as a compliant UNIX operating system – with UNIX/Linux-style hierarchical HFS and zFS file systems. As a result, z/OS hosts a broad range of commercial and open source software. z/OS can communicate directly via TCP/IP, including IPv6, and includes standard HTTP servers (one from Lotus, the other Apache-derived) along with other common services such as FTP, NFS, and CIFS/SMB.

https://en.wikipedia.org/wiki/Z/OS#Major_characteristics


Oh yes. And, thanks to that wonderful hardware, they are really fast ones.

Still, unless you are running a Unix like OS directly on an LPAR, you can't completely ignore the z/OS personality.


The thing that struck me, coming from a Unix background, was that the entire OS was record-aware: all the native tools, everything. Stuff that is just hard in Unix is trivial in z/OS.


Having grown up on the older 'record aware' systems, I found Unix byte streams a wonderful breath of fresh air (this was back in v6/v7). I guess perspective is everything.


Being record aware comes from the punched cards that were the most popular way to keep data when they started evolving. They were born before ASCII (they use EBCDIC and, while ASCII was supported in one early IBM 360 model, it was dropped almost immediately)

Unix was born after magnetic storage, both disk and tape, were common. A file looks a lot like a tape.


System/360 was supposed to be ASCII from the get go, the architecture supports it (the RCA Spectra 70 series which was a System/360 architecture system was ASCII).

All System/360 descendants support ASCII, the issue they ran into early in the System/360 program was accessories, IBM had lots of EBCDIC I/O devices, and developing a line of ASCII I/O devices would have delayed introduction of the line by a period of time.


The USASCII-8 mode bit in the program status word existed in all 360's (sorry about that) but was removed in the 370.


Ideally I want both things.


Pretty sure you can practice locally on any machine. The few people I know working with COBOL use virtual machines on modern computers before pushing to production.

Also, I'm sure if you search "COBOL tutorials" on the engine of choice, you'll find tons of stuff.


My father programmed in COBOL, but he never talked to me about it. Would anyone mind sharing their experience? What would it be like to do this today? Is it hard to learn?


I used COBOL on PDP 11/7X (RSTS/E) and VAXen (VMS) back in the 80s, and on IBM mainframes (I graduated in '85). I can't say that I was a big fan of the language, or the SCREEN SECTION specifically. The SCREEN SECTION is basically a description of labels, input fields, and display areas and where they appear on the screen (line, column). The COBOL program can reference the input fields and display areas as a variable. Typically you would use the 'move' statement to copy a value from a variable or a literal to the display variable.

I don't know that anyone would ever want to do things the same way today. Today we expect interfaces to adjust to the available screen and font size, which to the best of my recollection was not something that the old COBOL screens could do. There are still 'text-based user interface' tools available, although the modern approach is to link them into the program as a library rather than having them built into the language runtime. The curses (or ncurses) and s-lang libraries are what comes to mind.
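
From (fuzzy) memory, a simple screen definition looked something like the sketch below. The names are invented and the exact dialect varied by vendor, but GnuCOBOL still accepts roughly this form:

       WORKING-STORAGE SECTION.
       01  WS-CUST-NAME  PIC X(30).
       01  WS-BALANCE    PIC S9(6)V99.
       SCREEN SECTION.
       01  CUSTOMER-SCREEN.
      *> Literal labels and data fields are pinned to line/column.
           05  LINE 2 COLUMN 5   VALUE "Customer name:".
           05  LINE 2 COLUMN 21  PIC X(30) USING WS-CUST-NAME.
           05  LINE 4 COLUMN 5   VALUE "Balance:".
           05  LINE 4 COLUMN 21  PIC Z(6)9.99 FROM WS-BALANCE.
       PROCEDURE DIVISION.
      *> DISPLAY paints the whole screen; ACCEPT reads the fields.
           DISPLAY CUSTOMER-SCREEN
           ACCEPT CUSTOMER-SCREEN
           STOP RUN.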


First you need to order your COBOL coding pads:

https://www.ebay.com/c/911841529


Evidently it's a faux pas to ask about :P akin to asking veterans about war stories. You may get a kick out of this: http://www.coboloncogs.org/HOME.HTM


The first rule of COBOL programming is you don't talk about COBOL programming.

The second rule of COBOL programming is...


“Cobalt” skills. Really. Learning just COBOL is not enough; you will have to learn ISPF, SDSF, JMR, JCL, Db2, CICS, CA7, ENDEVOR.

Think about it: a 40-year-old system still running. Amazing, right? Pick any language today, write a somewhat enterprise-level program, and ask whether it will run for another 40 years without any major changes.



I would answer the call for 250K, up front.


In NJ, they are asking for volunteers.

And they wonder how it got this bad.


It's like an unpaid internship, smh

Years of "it already works, don't fuck with it" policy and tech-illiterate politicians.


This seems like the makings of a movie. Along the lines of "Space Cowboys" or maybe another "Grumpy Old Men".


What exactly are these Cobol programmers supposed to do? Process punch cards manually?


Was there a COBOL search light bat signal? Because there should be.


COBOL programmers rise!


Back from the dead!


I saw the best minds of my generation destroyed by COBOL copybooks...


[flagged]


Legal click-through requires proper signoff


Just put the entire COBOL source code on GitHub, let us rework it in Java or C#, and be done with this abomination.


Verifying the equivalence of two large programs can be very difficult. Developers often underestimate the effort needed to achieve a correct rewrite.


Exactly, and also figuring out all the logic in code that's been refined and built up over 20-30 years.

Any developer saying, "Hey, go and rewrite the code" underestimates all the edge cases that end up all over the place in the application.


These red-tape based systems are 80% corner cases, exceptions and special cases. The business “logic” resembles geological strata.

Nobody understands why the rules are the way they are - but if someone gets paid $1 less they'll be screaming at you.


Surely, you'd need to understand COBOL to translate it first?


Not a bad idea for the long term. It should all be Open Source if it's running a public service. Right?



If I were doing financial stuff, I'd probably use a language with a better type system than Java or C#


Like what?


Anything to the right of Rust on the type safety spectrum... F#, OCaml, Haskell, etc. Compile-time guarantees of runtime type safety are :100: when dealing with money and other critical things.

Search for "jane street ocaml" and "standard chartered haskell" for examples of companies using languages with good type systems in financial code.


You might want to read https://news.ycombinator.com/item?id=17636029 if you haven't seen it before...


Or just put it on github... might find people interested in helping without a full rewrite.


There’s a staggering amount of hubris in this notion...


Someone should invent a COBOL-to-Python converter to get rid of these old dinosaur IBM mainframes.


Mainframes aren’t some old crusty thing that’s about to die from all the bugs stuck in the fans or something. We have a cobol system and it runs on modern IBM hardware. Besides, almost everything is virtualized now anyway.

The mainframes aren't the problem, and doing a code conversion of COBOL is not trivial. Their data types are completely at odds with the normal data types in modern languages (for example, you can specify a string that is always exactly 30 ASCII characters long, where the 4th character must be numeric, the 18th must be ASCII but not numeric, and the 22nd through 30th must be alphanumeric, all with a native data type. And that's just the scalar types; COBOL data types natively support recurrences and redefines and hierarchies. And the language has something like 400 keywords.) You end up having to wrap every single line of code in compatibility layers upon compatibility layers, and then it doesn't even look like Python, it just looks like a crappier version of COBOL, to make it function identically.
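
To give a flavour of that, here is a made-up declaration: REDEFINES views the same 30 bytes through two different layouts, and OCCURS gives a fixed-size packed-decimal table inside the record, neither of which maps cleanly onto a typical modern type system:

      *> The same 30 bytes, viewed as one field or as three.
       01  POLICY-KEY              PIC X(30).
       01  POLICY-KEY-PARTS REDEFINES POLICY-KEY.
           05  REGION-CODE         PIC X(3).
           05  POLICY-NUMBER       PIC 9(9).
           05  FILLER              PIC X(18).
      *> Twelve packed-decimal amounts stored inline in the record.
       01  PREMIUM-TABLE.
           05  MONTHLY-PREMIUM     PIC S9(7)V99 COMP-3
                                   OCCURS 12 TIMES.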

It’s much easier (though more time consuming) to read the code and determine the business rules being enforced and reimplement them in a modern language.

I have worked with a lot of Cobol translators and their output is worse than the original code by far. With a lot less confidence and experience with maintaining it.


Given that the Python Foundation has been trying to get folks to run a Python-to-Python converter for more than a decade and has not completely succeeded, I'm skeptical that an AST transformer is really sufficient for migrating off those legacy systems.

More constructively, having worked on a (close to decade-long, by the time I joined) project to migrate a financial institution's business logic off an IBM mainframe onto more modern architecture, my main takeaway was that these systems have worked pretty darn well for decades with relatively little maintenance. The reluctance of companies to migrate god-knows-how-much data, business rules, and institutional knowledge (e.g. of data entry folks, business-level administrators who have to navigate these systems to do their job, etc) off a time-tested system they trust at massive expense, is completely understandable.


This is a misunderstanding of the problem. It’s not the fact that it’s COBOL, per se, that’s the problem. The issue is with making changes to an entire system. A program written in COBOL may be at the center of that system, but that program is only one part of it.


I don't know if it's still true, but COBOL systems were really well optimized for I/O, for performance, etc. Python won't stand a chance...



