As EMS was a kludge that sucked up high memory for its mappings, I eventually had a fancy CONFIG.SYS with different menu options for different memory setups, depending on which game I wanted to play.
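By the end mine looked roughly like this DOS 6 multi-config setup (menu names and paths here are just illustrative, not my actual file):

    [menu]
    menuitem=NOEMS, Games that want XMS only
    menuitem=EMS,   Games that want expanded memory
    menudefault=NOEMS, 10

    [common]
    DEVICE=C:\DOS\HIMEM.SYS
    DOS=HIGH,UMB
    FILES=30
    BUFFERS=20

    [NOEMS]
    DEVICE=C:\DOS\EMM386.EXE NOEMS

    [EMS]
    DEVICE=C:\DOS\EMM386.EXE RAM

AUTOEXEC.BAT could then GOTO %CONFIG% to load the matching mouse and sound drivers for whichever entry you picked.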
And of course you needed to know your exact sound card configuration (IRQ number and COM port) and your video driver capabilities and memory size (I started with a Hercules and moved over to a Trident before getting a 3dfx Voodoo).
Yes, running a game then was much more difficult than double clicking an icon, but it was worth it. The games of that era were great, and you could acquire skills (persistence, attention to detail, thinking your way to solutions) useful for the rest of your life - if you were a kid at least!
And I remember being a little disappointed when a game only supported the PC Speaker or Adlib, but shrugging, and deciding to be grateful that I didn't have to worry about getting digital audio working too.
I had memory/sound trouble with that, X-Wing, and World of Xeen.
The IRQs were handled by the PIC controllers, usually two, with the secondary piggybacking on the main controller's IRQ #2. This was how the hardware could signal to the software that it wanted attention. The sound driver needed to know which interrupt to listen for, or it would never respond to the sound card's requests.

It was set up this way because these were physical pins! If you look at the ISA slot pinout there are physical traces for IRQ 3-7, and 16-bit ISA slots added pins for IRQ 10-14. A bunch of IRQs were permanently assigned from the old IBM PC days, which is why turning off COM ports in the BIOS would free up the IRQ for use by other peripherals. I seem to remember having a card at one point that was 8-bit but had a little extension off the end with a couple of traces for 16-bit slot support, only so it could support higher IRQs; the manual told you not to use IRQs > 9 if it was plugged into an 8-bit slot.
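For anyone who never wrote one, the driver side of that was a small bit of vector plumbing; a rough sketch in Borland-style DOS C (IRQ 5 and the port numbers are the usual defaults, so treat it as illustrative):

    #include <dos.h>

    #define SB_IRQ    5          /* whatever the jumpers on the card say        */
    #define PIC1_CMD  0x20       /* primary 8259 PIC command port               */
    #define PIC1_DATA 0x21       /* primary 8259 PIC interrupt mask register    */

    void interrupt (*old_handler)();

    /* IRQ 0-7 are wired to interrupt vectors 8-15 via the primary PIC */
    void interrupt sb_handler()
    {
        /* ...acknowledge the card here (read its status port)... */
        outportb(PIC1_CMD, 0x20);    /* send EOI, or the PIC never signals again */
    }

    void hook_irq(void)
    {
        old_handler = getvect(8 + SB_IRQ);
        setvect(8 + SB_IRQ, sb_handler);
        /* clear the mask bit so the PIC actually delivers this IRQ line */
        outportb(PIC1_DATA, inportb(PIC1_DATA) & ~(1 << SB_IRQ));
    }

Get the IRQ number wrong and everything else still works, except the handler is hooked to a vector the card never raises - which tended to show up as one buffer of sound playing and then the game hanging.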
The IO port base address is directly related to the article: it selected where in the address space the device listened. Writes to that address would cause the device to latch onto the bus and start receiving data; anything not in the appropriate range was ignored, even though all devices technically received the signal electrically. If the driver had the wrong address, it would either write to an address with no device behind it (there was no MMU, so the data words just went by on the bus without any listeners), or worse, the write went to a different device that might ignore what it considered malformed data, but in many cases would do random things, scribble on memory, or just blast the bus and cause a lockup. IO base addresses were pre-assigned to peripherals like COM and LPT ports too, but those were not in such high demand compared to IRQs or DMA channels; still, in some BIOSes you could change a COM port's base address, which would make it show up as COM3 instead of COM1, for example.
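That's also why autodetect routines just poked the handful of candidate base addresses and looked for a known response. A sketch of the classic Sound Blaster DSP reset check (Borland-style port I/O, delays hand-waved):

    #include <dos.h>

    /* Returns 1 if a Sound Blaster DSP answers at 'base' (0x220, 0x240, ...) */
    int sb_detect(unsigned base)
    {
        int i;
        outportb(base + 0x6, 1);                 /* pulse the DSP reset port high...  */
        for (i = 0; i < 100; i++) ;              /* ...wait a few microseconds        */
        outportb(base + 0x6, 0);                 /* ...then drive it low again        */
        for (i = 0; i < 1000; i++)               /* poll the read-buffer status port  */
            if (inportb(base + 0xE) & 0x80)
                return inportb(base + 0xA) == 0xAA;   /* the DSP says hello with 0xAA */
        return 0;                                /* nobody home at this address       */
    }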
The DMA channel was exactly what it sounds like: the system had a limited number of direct channels to the memory controller. Often the protocol for a sound driver was to write a "read audio from this address, this many bytes" request to the IO port. The sound card would use the DMA channel to read the audio from that address, then signal an interrupt when it finished so the driver could submit the next batch of samples.
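Stripped down, the whole dance for one 8-bit single-cycle Sound Blaster transfer on DMA channel 1 looked something like this (register values are from memory, so take it as a sketch; the buffer also must not cross a 64K physical page boundary):

    #include <dos.h>

    #define SB_BASE 0x220

    void dsp_write(unsigned char v)
    {
        while (inportb(SB_BASE + 0xC) & 0x80) ;   /* wait until the DSP will take a byte */
        outportb(SB_BASE + 0xC, v);
    }

    /* Play 'len' bytes of 8-bit audio from physical address 'phys'.
       The card raises its IRQ when the transfer completes.          */
    void sb_play(unsigned long phys, unsigned len)
    {
        unsigned count = len - 1;

        outportb(0x0A, 0x05);                /* mask DMA channel 1                    */
        outportb(0x0C, 0x00);                /* clear the address/count flip-flop     */
        outportb(0x0B, 0x49);                /* single-cycle, memory-to-device, ch. 1 */
        outportb(0x02, phys & 0xFF);         /* buffer address, low byte              */
        outportb(0x02, (phys >> 8) & 0xFF);  /* buffer address, high byte             */
        outportb(0x83, (phys >> 16) & 0xFF); /* page register (which 64K page)        */
        outportb(0x03, count & 0xFF);        /* transfer length - 1, low byte         */
        outportb(0x03, (count >> 8) & 0xFF); /* transfer length - 1, high byte        */
        outportb(0x0A, 0x01);                /* unmask channel 1                      */

        dsp_write(0x14);                     /* DSP: 8-bit single-cycle DMA output    */
        dsp_write(count & 0xFF);
        dsp_write((count >> 8) & 0xFF);
    }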
You usually configured ISA cards by setting jumpers on the card. The available jumper settings were usually much narrower than the hardware supported so conflicts were more common.
PCI still used the physical IRQ system but had a protocol for devices to negotiate available settings with BIOS on startup.
PCI express uses a higher-level protocol abstraction. An interrupt is just a special message on the bus and doesn't have any dedicated pins.
Right now I am trying to figure out what it is that makes a Windows 10 tablet I own lose the content of dropdown menus. All I get is the "drop shadow" frame around them, if that.
Funny thing is that if I fiddle with the screen resolution it sorts itself out, for a while.
F-ing black box computers...
DMA channels were even more limited but thankfully fewer devices needed them and programs tended to be less picky about which ones you chose.
DMA channel number more likely ;-)
I remember getting a borrowed 19" Sony CRT running at some insane resolution for the time, probably 1280x1024, and hearing that worrying thunk when changing modes, then reading this book and reconsidering the wisdom of my custom modelines.
> One night at 3 a.m. Pekka caused this to happen, and
> immediately after the screen went black and made that
> clunking noise, it exploded in his face. The front of the
> picture tube was made of heavy glass (it had to be, to
> withstand the internal vacuum) which fragmented and spread
> into Pekka's face, neck and upper body. The very same
> phosphors that had been glowing beneath the sweeping
> electron beam, moments before, were now physically
> embedded in his flesh.
Got QBasic, some assembler, and a list of interrupt vectors from somewhere. Messing with int 10h, activating each individual video mode to see what happens. Somewhere halfway down the list I find 'character generator video ram access' or something. I remember thinking it sounded really cool. Anyway, activate!
Today I suppose what happened was that the horizontal and vertical retrace got switched off, and the electron beam concentrated on the center of the screen instead of going everywhere. Or maybe not, you tell me.
I do know what happened next: the screen starts to make actual noise! Something wobbling, starting silent and getting louder, straight from some third-rate sci-fi movie: wowowowowowoewoeoeWOEWOE. I can only guess it took a few seconds, but I had heard all kinds of exploding-monitor horror stories, so I am scared as hell. A few seconds later, I unstiffen and pull the power plug; not the best course of action, but at least the fastest.
Repower, and there is a huge purple blotch on screen. Oh dear. After a few seconds it shrinks away to the center and disappears. For the next month, this keeps happening every time the screen gets powered on.
I think it was 2 months later when my dad got sick of the display vagueness that had suddenly started and finally replaced it. ('Did you do anything?' - 'Who, me?' + put on the 'I know nothing, I'm from Barcelona' face.) No idea if my parents ever knew what happened (it's about 25 years ago at this point).
A friend of mine says every competent engineer needs to destroy at least one piece of expensive hardware. So I hope I'm competent, but at least I learned a healthy respect for hardware that day.
Your story reminded me of this fun CRT classic, which I kind of hope is real in spite of the replies: http://atariage.com/forums/topic/176969-writing-a-program-fo...
(but only because systemd prevents me from configuring X11 by hand (to support nvidia optimus (damn you optimus!)))
So we can look at it one of two ways: those kids today, all excited about VMs and stuff we had in the 60s. Or, holy crap, I can do stuff on my phone that used to take a multi-million dollar mainframe.
I wonder if it has to do with how the microcomputer world had to almost bootstrap itself without any input from the mainframe people, because the latter considered the micros little more than toys.
And this continues into the present, as a large segment of the business is self-taught. And by now even going to university will not expose you to mainframes, as they have largely been abandoned in favor of clusters.
The tech industry has no mechanisms for developing institutional memory, so we are constantly reinventing old wheels instead of developing new ones.
Presumably the people who developed NoSQL databases would also have looked at things like IMS when writing their new systems.
There's just mostly a disconnect between people who develop applications and do other work on PCs, and those who work on mainframes.
At places where I've worked that had both PCs and mainframes there was default segregation between mainframe developers and maintainers and PC developers and users. Of a mainframe team of 20-30 people there seemed to be about 5 who were PC devs as well.
PCs have always been about shrinking down mainframe technology into a much cheaper package.
Mobile (since the iPhone) has been about shrinking down PC technology into a much smaller and lower power package.
I had a bunch of games I wanted to play in the early/mid 1990s, including Wing Commander. Each one seemed to require its own config.sys/autoexec.bat, so I wrote a menu type thing to choose the game and do the appropriate setup.
The experience made me think about what else I could do by joining a few commands together. I wish the internet had been available to me then, because I would have made much faster progress if something like stackoverflow had existed.
I built my first computer about 17 years ago, but dial-up was still a thing and I was discouraged from spending time on it, so I just did HTML in the 30 minutes per day I was allowed. If I'd found a REPL back then, I probably wouldn't feel so inadequate when discussing concepts with my friends at the big 4! It'll work out fine, but later than planned; the spectre of unemployability will start looming in a decade or so, which certainly motivates me!
Kids these days have it easy! /s
It's bizarre that the computer magazines I read (and obsessively reread) would not have run an article like this.
EMS was pageable. You could have up to four 16 KB pages below 1MB that were fully accessible, but you had to select which address above 1MB each of those four pages pointed to.
So in short, XMS was inefficient, because you had to copy, and EMS was more efficient because you simply decided which part of the extended memory you wanted to map to lower memory.
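For anyone who never touched it, the application side of EMS was a handful of INT 67h calls; roughly like this with Borland's int86 (error checking on AH omitted):

    #include <dos.h>

    int main(void)
    {
        union REGS r;
        unsigned frame_seg, handle;
        char far *window;

        r.h.ah = 0x41;  int86(0x67, &r, &r);   /* get the page frame segment         */
        frame_seg = r.x.bx;                    /* the 64K window below 1MB           */

        r.h.ah = 0x43;  r.x.bx = 4;            /* allocate four 16K logical pages    */
        int86(0x67, &r, &r);
        handle = r.x.dx;

        r.h.ah = 0x44;                         /* map logical page 2 of our handle   */
        r.h.al = 0;                            /* into physical page 0 of the frame  */
        r.x.bx = 2;  r.x.dx = handle;
        int86(0x67, &r, &r);

        window = (char far *)MK_FP(frame_seg, 0);
        window[0] = 42;                        /* ordinary real-mode access, no copy */

        r.h.ah = 0x45;  r.x.dx = handle;       /* release the pages when done        */
        int86(0x67, &r, &r);
        return 0;
    }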
I think EMS was emulated in real mode on the 80386 chip by using its virtual 8086 mode. This was not possible on the 80286, so the 80286 could only use XMS in real mode.
I briefly developed using both EMS and XMS, and as I recall, XMS was somehow less painful to get up and running and generally work with, but the performance-conscious part of me obviously favored EMS's paging mechanism.
Fortunately, 32-bit memory came along quickly both in the form of Windows and DOS protected mode.
When the 386 came out, it was common to have more than 1MB on the motherboard, but the motherboard (typically?) didn't natively emulate the EMS bank switching scheme, probably because at about that time, everyone figured that people were going to stop using DOS and switch to OS/2, and be running 32 bit code that could natively address higher than 1MB.
When that didn't happen right away, Quarterdeck came out with QEMM/386, which used the Virtual 8086 features of the 386 to emulate EMS in software. Microsoft then shipped a not quite as good clone of QEMM as EMM386 with DOS 5.0.
The "Megamemory Explained" graphic used in this post comes from the Jan. 14, 1986 issue of PC Magazine, part of an article called "Enlarging the Dimensions of Memory." It doesn't cover XMS as the 386 was so new as to be effectively nonexistent at the time, but otherwise seems to be a pretty thorough explanation.
I was an Apple guy and a latecomer to the PC world, and PC Magazine was pretty much a must-read back in the day. Lots of good technical content.
The hardware bus was async (!!!), meaning you had to arrange correct clock synchronization for every one of your peripherals independently (cf. "DTACK Grounded", the classic 'zine of the 68k hobbyist).
The exception handling was insanely complicated, and even then they got it wrong in the first version: faulting instructions couldn't be restarted, making the use of an MMU impossible even in principle. They fixed this (via dumping even more undocumented complexity onto the stack) in the 68010, which was essentially a "patch release" to the part that was needed by the Unix workstation community.
It's true that starting the PC with a 32 bit flat address space would have saved headaches in the late 80's, and it's likely that the feature set in Win95 would have arrived a few years earlier. But on balance the Mac and Unix worlds had no less grief moving off of 68k.
The problems mentioned in that article result from software that thinks this limit is set in stone. Such software encodes data in these upper bits, and then later masks it away before accessing the pointer.
Memory addressing on 8086 inherently adds two registers every time: the pointer and a segment register (which points to a 64k region of the 1M address space with a granularity of 16 bytes). The problem is what to do when the segment plus the address overflows. On the 20 bit original implementation, you just got the rolled-over address. (What you should have gotten was a fault of some kind, of course.) On the 286, the same addition would give a valid address in the second megabyte of memory, which isn't the same memory. So the PC/AT (not the CPU) had a compatibility switch wired that would force the 21st memory address line low in real mode for compatibility with apps that actually relied on the early behavior.
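A tiny worked example of the difference (not real code from anything, just the arithmetic):

    /* 8086/8088: only 20 address lines, so FFFF:0010 wraps around to 00000 */
    unsigned long phys_8086(unsigned seg, unsigned off)
    {
        return ((unsigned long)seg * 16 + off) & 0xFFFFFUL;
    }

    /* 286 and later, with the A20 line enabled: FFFF:0010 really is 0x100000,
       the first byte of the second megabyte (the start of what became the HMA) */
    unsigned long phys_286(unsigned seg, unsigned off)
    {
        return (unsigned long)seg * 16 + off;
    }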
The point was just to have a compatibility hack for old apps, not to pollute the architecture going forward. The reason that pollution happened was actually that the PC clone makers incompletely and incorrectly copied this feature, so later OSes never knew a priori whether or not it would be enabled by the BIOS, and they all had to have the same 10-20 instruction sequence to turn it off. And that meant that future PCs needed the same 8042-looking device on the bus to handle that request.
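That 10-20 instruction sequence was the infamous A20 dance through the keyboard controller; from memory it went more or less like this (0xD1/0xDF are the conventional values, so treat this as a sketch):

    #include <dos.h>

    static void kbc_wait(void)
    {
        while (inportb(0x64) & 0x02) ;   /* wait until the 8042 input buffer is empty */
    }

    void enable_a20(void)
    {
        kbc_wait();
        outportb(0x64, 0xD1);            /* 8042 command: write the output port         */
        kbc_wait();
        outportb(0x60, 0xDF);            /* output port value with the A20 gate bit set */
        kbc_wait();
    }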
I suspect (but cannot prove) that both of those could still have happened if they'd gone with the MC68000.
I was impressed by the performance of the PowerPC 970 but much sooner than expected, Intel's (and AMD's) lineup made it irrelevant.
the 24 bit addressing thing, yeah, that was a hazard as well.
IBM had a second-source requirement on procurement; Intel was willing to license, or had already licensed, the 8088 to be manufactured by someone else, and Motorola was not.
This brings back memories, but not particularly good ones.
So, with the physical CPU hardware already requiring a small reserved space at the top of the (at the time) maximum addressable range, resulting in a non-linear memory map anyway, putting other reserved stuff (ROM, I/O memory, etc.) up there was not such a bad idea at the time. They clustered the "ROM stuff" up there along with the required reset vector.
Also, remember, hindsight is 20/20. The IBM engineers in Boca Raton, designing what became the IBM PC in 1979-1981, likely had no idea just how big an industry-wide impact their choices were eventually going to have.
https://en.wikipedia.org/wiki/IBM_Personal_Computer (released Aug. 12, 1981, so the design was created some time before that date; 1979-1981 is a guesstimate)
I think the real problem was that people didn't want to let go of DOS. OS/2 1.x was a fine OS, especially from 1.2 onward and it ran great on a 286.
That is incorrect. If your card has a block of 2^n bytes of memory, you need to check the (in this case) 20-n uppermost bits of an address to see whether it is in 'your' address range, whatever block your card is assigned to. Checking whether those are all zeroes isn't any easier than checking whether they are, say, 42.
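In code terms the decode is just a mask and compare on the top bits, something like:

    /* A card with a 2^n byte window at BASE decides "is this ours?" with one compare.
       BASE = 0 is no cheaper to check than BASE = 0xD0000.                            */
    #define WINDOW_BITS 14                /* e.g. a 16K window           */
    #define BASE        0xD0000UL         /* wherever the jumpers put it */

    int is_ours(unsigned long addr)
    {
        return (addr >> WINDOW_BITS) == (BASE >> WINDOW_BITS);
    }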
And I don't think people were attached to DOS; they were attached to their (expensive!) programs and those required DOS.
OS/2 tried to support those programs, but wasn't perfect, and even if it did, it didn't give those programs more memory.
If they made a few tweaks that allowed peripherals to slot into different spots using some kind of negotiation over the ISA bus this could have been avoided, but that was also way too sophisticated for what was a quick prototype.
Combine that with the $3K SDK cost and the deliberately incompatible PM, and there were good reasons it was a hard sell.
I usually launch MemMaker, and then fine-tune the config.sys & autoexec.bat. country.sys? doskey.sys? ansi.sys? Remove it! I need moar free conventional RAM!
Now I do back-of-envelope calculations that tell me a single data structure is 100MB, and I'm like, "Yeah that's fine."
Once the 386 came around it was so much easier, Watcom C++ was my favorite way to take advantage of my luxurious 16MB I could afford at the time.
To go back to the subject, I remember using a "driver" to free up some additional memory, but I can't remember the name or how it worked. Maybe I found it reading Imphobia? The name may start with an R.
There was the unreal mode / flat mode too : https://en.wikipedia.org/wiki/Unreal_mode
I remember coding a memory allocator for Turbo Pascal. Under pure DOS (not Windows), it was easy to use after the setup.
Are there technical reasons why Microsoft didn't release a Protected Mode version of MS-DOS in the 80s, or was it just a case of trying to push users and developers to OS/2 and Windows?
It seems like such a no-brainer, especially since MS-DOS provided so few services to applications anyway. You have MS-DOS 8 and MS-DOS 16, and you let the user choose which version to boot into, or you automatically kick over to MS-DOS 16 when a Protected Mode app is launched. It seems a lot saner than forcing users to muck around in config.sys.
That people were forced to deal with the memory limitations of the 8088 PC well into the 90s is just bananas.
It wasn't just a question of rewriting the OS, it was also a question of rewriting all the application software. Which eventually happened.
By the way, MS/PC-DOS provided quite a lengthy list of services to applications. Have a look at DOS parts of Ralf Brown's Interrupt List some time.
Also it's great to finally learn what "UMB" stands for in "DOS=HIGH,UMB". Never felt right to add that option without knowing its meaning :-).
Bill Gates never actually said this. There are those who might attribute this to the Mandela Effect.
It's duct tape all the way down.
Had a VP with a Wang PC that used Micro Channel; he needed NetWare, AS/400, and the Wal-Mart client going at the same time without cutting into his 640K of DOS memory.
There was a trick to it. First, using the card IDs, I found the option floppies for his expansion cards on CompuServe (the Internet/WWW was in its infancy and our employer paid for CompuServe to research things), and I rearranged the expansion card memory to leave an unused 64K area that DOS could load drivers into, freeing up the main 640K of memory. At the time NetWare and AS/400 used DOS drivers, while OS/2 had OS/2 drivers that could access the memory beyond it. This was before Windows 95 (it was in beta test at the time).
I had upgraded the VP to MS-DOS 6.22 and used LOADHIGH, MemMaker, and other things, and managed to get all three networks working with most of the 640K area free. No other tech had done things like that before.
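The 64K-hole trick maps onto config lines pretty directly; something like this (addresses and paths are just an example of the idea, not the actual setup):

    REM CONFIG.SYS - tell EMM386 the freed range is safe to use for UMBs
    DEVICE=C:\DOS\HIMEM.SYS
    DEVICE=C:\DOS\EMM386.EXE NOEMS I=D000-DFFF
    DOS=HIGH,UMB

    REM AUTOEXEC.BAT - push the network TSRs up into that window
    LH C:\NWCLIENT\LSL.COM
    LH C:\NWCLIENT\IPXODI.COM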
I studied programming in college, but this gconf tool I wrote helped me out a lot. I wrote it in college and then modified it and renamed it whichnet, so that it could detect Arcnet or Starnet network cards and use the correct NetWare driver for DOS on a boot floppy disk, either by using the expansion card ID to detect the right card or by reading ROM memory and looking for patterns or a signature. It worked 99% of the time; in the 1% where it failed I never learned why, but it worked after a reboot or a shutdown and power up.
I wrote this HexStrip or HStrip to extract ASCII data out of WordPerfect or other files that got corrupted and could no longer be read. It is also good for scanning EXE and COM files in DOS to see if there is a virus that stores ASCII text when it infects, without running the EXE or COM. I used it on the trader.dat in Tradewars 2002 to pull all of the advice for playing the game out of it. https://sourceforge.net/projects/hstrip/
I had bought QEMM which helped as well, until the Windows 9X and Windows NT OS models put them out of business and Symantec bought out Quarterdeck and other companies.
Also, I think B000:0000 to B7FF:000F was the monochrome video memory, which could be used to load things into as well if you never used it. I used to write DOS memory maps, until DOS came with MSD.EXE, which could show used and unused memory. The Intel 8086/8088 had 640K of RAM and 384K of reserved address space, some of which could be used by EMS for paging, or to load things into to free up the 640K.
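And if nothing on the machine ever used mono text mode, EMM386 could be told to claim that region for UMBs too, along the lines of:

    DEVICE=C:\DOS\EMM386.EXE NOEMS I=B000-B7FF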