It was a lot of fun, but terrible for programmer productivity. I would not want to go back :o) Dereferencing registers prepared me for C pointers later.
"OVL" popped up in my head for some reason, and https://www.google.com/search?q=dos+ovl seems to return interesting results (showing a few real-world examples).
Here is the correct one:
When the call was completed, the data from disk would be assembled into a full ECG record and written to tape, and simultaneously passed to the diagnostic program written in Fortran. The system would then initiate a phone call to the hospital's printer and print out an English-language diagnostic. The result was then available to the staff at the hospital within ten minutes.
The front end and back end were all Sigma 5 (a midrange real-time computer, first from SDS, then Xerox) assembler in an interrupt-rich process--one interrupt every 2ms for the analog sampling, one when a disk write completed, interrupts for the tape record writing, interrupts for the outgoing phone call's progress. This included a cost-optimization process that would choose which line to use (this was in the days of WATS lines) based on the desired response time. The middle was the Fortran program that would analyze the waveforms, identifying all the ECG waveforms--P-wave, QRS, and T-wave--and their height, duration, and sometimes slope.
This all took place in a machine with 32k words (four bytes per word). There were two computers, one nominally used for development, though it could be hot-switched in if the other failed. I think downtime was on the order of an hour per year. This would have been called an Expert System, but I don't think the term was in common use as yet.
So the answer to your question is: "A considerable amount". Today we are all spoiled by environments with more memory on one machine than existed in the entire world at that time.
By the way, this was significantly easier than what folks have to go through with C or with that dratted async/await pattern.
This particular code that used the coroutine was the outbound call processing low-level stuff. I was second fiddle on that one, and the lead was a fellow who is quoted in TAOCP. We had zero single-thread errors and one multi-thread error when we stood it up. Keep in mind that this was in the days of no debuggers other than console switches.
It was quite good.
I would imagine that they are--I haven't kept track.
The landscape is vastly different these days. You can't even stand up something in a medical environment that measures heart rate, much less waveforms, without significant clinical trials. Apparently the FDA is all over this one.
It's also worth remembering that, at the time, a machine with 32k of RAM was one of the most powerful on the market and still considerably expensive, and the alternative was paying (a team of) humans to do the work by hand. For all their shortcomings and the insane complexity required to get them to work properly, the machines were generally much faster than humans performing the same task and (assuming they were programmed correctly) could generally be relied on to make fewer mistakes. Their utility was remarkable, especially their ability to perform arithmetic very quickly, which was (and still is) quite tedious to do by hand.
The source is on Github, too, for example:
And a team of scientists who had done all the difficult calculations beforehand...
People do amazing things with primitive tools.
You had to connect via an arcane telnet client (tn3270 protocol perhaps?) and input the change details. No fancy web forms. Perhaps it was a limitation of the application, but you couldn't mix uppercase and lowercase in the one form.
Btw, a great book imho is "Assembly Language Step By Step - Programming with Linux - 3rd ed" (https://musho.tk/l/d2d56a34).
The great thing is that it is an easy read that really starts from the basics: it explains how the i386 architecture works, and then how to program it in assembly.
The sad thing is that, afaik, the author is quite old and probably not going to release a 4th edition, meaning the book will stay on Intel i386.
It must be difficult to write a good assembly book. On one hand there's a lot of basics to cover, like memory addressing, segmentation registers, etc. But on the other hand, the main use case for it today is hand optimized functions for when the compiler can't optimize enough, which is inherently an advanced topic.
I've skimmed the document and it seems rather thorough; it doesn't shy away from all the nasty details (which is a good thing for an ASM tutorial). The only criticism I have so far is that the assembly listings are a bit hard to read; additional whitespace and basic syntax highlighting (at least to isolate the comments) would make them a bit easier on the eyes, I think. For instance: https://svkt.org/~simias/up/20180717-151848_asm-listing.png
But as far as I can tell there's no branch with a different name - maybe this was just a working title for the English version at some point?
Anyway, this new submission with a new title made me take a look, so I'm happy :)
Now, I just hope someone takes a crack at forcing an epub build for better reflow/resize on small screens...
There's a (dormant?) issue:
Fast forward a couple of decades, and I found myself reverse engineering CP/M for the Z80 processor in order to create a virtual Z80-based system that ran inside Unreal Engine. I started with Udo Munk's wonderful Z80pack system, adapted a public domain Z80 CPU emulator from C to C++, and did a minimal reimplementation of the Z80pack terminal and disk I/O devices. Since the systems are implemented as "actors" in UE4, it's possible to spawn and run quite a few concurrently as long as you limit the CPU speed of each instance somewhat.
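(Not the actual project code, just a sketch of the pattern I'd expect for the speed-limiting part: give each instance a fixed budget of emulated cycles per host tick. z80_step() here is a hypothetical stand-in for whatever the real emulator core exposes.)

    /* Run at most a fixed budget of emulated Z80 cycles per host tick
       (e.g. per UE4 actor update), so many instances can coexist. */
    #include <stdint.h>

    #define CYCLES_PER_TICK 40000        /* ~2.4 MHz at 60 ticks/sec */

    extern int z80_step(void);           /* execute one instruction,
                                            return its cycle cost */

    void cpu_tick(void)
    {
        int32_t budget = CYCLES_PER_TICK;
        while (budget > 0)
            budget -= z80_step();        /* slight overshoot per tick
                                            is harmless here */
    }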
The resulting virtual system in UE4 is able to run original CP/M ports of Rogue and Zork (https://i.imgur.com/gnOCp3e.png), various Z80 instruction exercisers (https://i.imgur.com/kwNuq5X.png), a Z80 C compiler, and even WordStar 4 (https://i.imgur.com/Q6307w3.jpg) and Microsoft BASIC.
Learning assembly can be a lot of fun - it can teach you quite a bit about systems architecture that you might not otherwise pick up if you only ever program in high-level languages.
1. Assembly language, by its very nature, needs more lines of code than higher-level languages to achieve the same task (see the sketch after this list).
2. What I call Pascal's Amendment :) - very loosely like claiming the Fifth Amendment (to the US Constitution):
"I have made this letter longer than usual, only because I have not had the time to make it shorter."
- Blaise Pascal
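To make point 1 concrete, here's a tiny comparison (the assembly is roughly what gcc -O2 produces on x86-64; exact output varies by compiler and version):

    /* One line of C... */
    int add3(int a, int b, int c)
    {
        return a + b + c;
    }

    /* ...versus its x86-64 disassembly (System V ABI: the three
       arguments arrive in edi, esi, and edx):

           lea  eax, [rdi+rsi]
           add  eax, edx
           ret

       Three instructions for one expression, and the gap widens fast
       once calls, loops, and stack bookkeeping enter the picture. */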
As a writer, I can corroborate that. In fact, had he "had the time to make it shorter", he would have spent even more time on those 1000+ pages than it seems at first glance. And even more again than for the same 1000+ pages in a higher-level language, since assembly is a lot more error-prone.
Then, they'll understand at least one architecture inside and out. Plus be able to customize it to their liking. :)
Since both sites appear to be owned by the book's author, this is most likely just a change that has not yet been pushed to GitHub (or mentioned on the author's sites), but it would be better if the author clarified it (would that be you, dennis714?)
My recommendation for beginner's assembly on Linux is to write toy code in C and then view the disassembly in gdb or objdump. Both have options to switch to Intel syntax from the GAS/AT&T default if you want.
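A minimal sketch of that workflow (these are the common flags; adjust to taste):

    /* toy.c -- keep it small so the disassembly stays readable */
    int square(int x)
    {
        return x * x;
    }

    int main(void)
    {
        return square(7);
    }

    /* Build with symbols, then disassemble in Intel syntax:
     *
     *     gcc -g -O1 -o toy toy.c
     *     objdump -d -M intel toy
     *
     * or inside gdb:
     *
     *     gdb ./toy
     *     (gdb) set disassembly-flavor intel
     *     (gdb) disassemble square
     */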
I'm generally against using Windows for anything, but Visual Studio has decent disassembly debug options where you can step through the native assembly code. You could also look at IL code (which is conceptually similar to native assembly) and learn assembly concepts that way. ildasm and ilasm are great tools for that.
Assembly is so low level that it can be intimidating to write from scratch at first. It's better for beginners to write code in a higher-level language like C and then read the compiler-generated assembly. Once you're comfortable with the disassembly of a "hello world" program, write more complicated code and study its disassembly. Then try to edit and rebuild the disassembled code. Once you're comfortable with that, write your own assembly from scratch.
Edit: Also, if you have the time and the will, watch the nand2tetris lectures and try their projects. They'll give you a hands-on general overview of hardware to assembly to VM to OO: how native assembly works with the hardware, how the VM interacts with native assembly, and how you get from OO code down to the VM. It's a very limited but educational overview of code flow, from object-oriented code all the way down to circuitry (software circuitry).
I think the best reason to learn assembly is not to write it, but rather to be able to read compiler output.
These days I know that older versions are still (partly?) included in x86_64 and that they're often mostly the same, but that was not clear to me when I saw tutorials for ancient architectures of which I didn't see the point.
But then, I've never taken well to the school system where you get taught something stupid first only to see practical applications later. It's why I dropped out of high school and went to do something of which I did see the use (a sysadmin 'study', because I knew I enjoyed working with computers, and that was indeed a good fit for me).
You could deep dive into x86_64 or ARM, but in the general case you would never actually code in those (i.e., most folks trust the compiler) unless you were writing a driver or writing something with crazy performance like MenuetOS.
It must be both useful and a job skill for some people:
I wouldn't study it to get a job. There's apparently still utility in it, though, with WDC's versions of it.