DOS/4GW and Protected Mode (2021) (pikuma.com)
126 points by atan2 on Nov 20, 2022 | 50 comments



"... a terminate-and-stay-resident program (or TSR) was a computer program running under DOS that uses a system call to return control to DOS as though it has finished, but remains in computer memory so it can be reactivated later. Needless to say, this was extremely unreliable."

There were very likely some hacky TSRs that caused problems, but in my experience most were extremely reliable. We used an off-the-shelf TSR to enhance a motion control system that laser-scribed ceramic vacuum chucks for silicon wafer fabrication. Those things cost $5k in 1990, and took ~20h of processing, increasing their value to $15k; we wouldn't have screwed around with something that was inherently "extremely unreliable".


I loved TSRs - I wrote two of special note:

stop_clock: All it did was stop the system's real-time clock from counting up when you pressed the Alt key, and start it again on a second press. However, this was enough to stop the timer of a typing-speed program we used in high school. Magically, I was a VERY fast typist. :-)

stay_on: When you pressed a certain key sequence it would start the floppy drive motor, and a second press would turn it off. The goal was to speed up floppy accesses by not needing to spin up the motor every time. Unfortunately, I got up one day to find my floppy drive motor dead. I suspect I forgot to turn off the motor (there was no idle timeout... I was a kid, it never even crossed my mind!)
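
For anyone curious what that kind of hardware poking looked like: the floppy controller's Digital Output Register at port 0x3F2 controls the drive-select and motor-enable bits. A minimal sketch along those lines, assuming a Borland-style compiler with outportb() from dos.h; the exact values are the usual "drive A selected, motor A on/off" combinations, but treat them as illustrative:

    /* Sketch: toggling the floppy motor via the FDC Digital Output
       Register (port 0x3F2). Illustrative values for drive A only.
       As the story above shows, leaving the motor on with no idle
       timeout is hard on the hardware. */
    #include <dos.h>

    void motor_a_on(void)
    {
        /* bit 4 = motor A enable, bit 3 = DMA/IRQ enable,
           bit 2 = not-reset, bits 0-1 = drive select (00 = A:) */
        outportb(0x3F2, 0x1C);
    }

    void motor_a_off(void)
    {
        outportb(0x3F2, 0x0C);
    }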


I had one TSR that let you allocate RAM and switch between up to 3 programs. Of course, they had to be small programs due to the 640KB limit. But it was still quite useful in the days before hard drives, to be able to have a couple of utilities (like a text editor) loaded without having to keep swapping floppy disks.

Also wrote a couple of TSRs of my own as a kid learning to program. Sure you could crash the system, as you could with any program, but they were as reliable as anything else.

The only thing special was that you generally didn't want to run two TSRs that naively hooked the same interrupt without chaining properly. But back then we didn't have thousands of mysterious processes always running in the background. You knew what few programs you had running, so it wasn't really a problem.
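
For reference, the chaining pattern looked roughly like this: a minimal sketch in Borland Turbo C style (getvect/setvect/keep from dos.h), with the hooked interrupt and the resident size purely illustrative:

    /* Minimal well-behaved TSR sketch: hook the timer interrupt,
       do a little work, then chain to the previous handler. */
    #include <dos.h>

    static void interrupt (*old_timer)(void);   /* previous INT 08h handler */
    static volatile unsigned long ticks = 0;

    static void interrupt new_timer(void)
    {
        ticks++;          /* our small amount of work ... */
        old_timer();      /* ... then chain to the old handler */
    }

    int main(void)
    {
        old_timer = getvect(0x08);    /* save the current vector */
        setvect(0x08, new_timer);     /* install ours */
        /* terminate-and-stay-resident; a real TSR computes the resident
           size from its actual program image instead of a fixed guess */
        keep(0, 0x1000);
        return 0;                     /* never reached */
    }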


Yes. DOS itself came with several TSRs, for example keyboard drivers. They were absolutely commonplace and, if written properly, nowhere near as unreliable as claimed.


Interestingly, the author updated the post, which now says "this was not 100% reliable".

There isn't anything inherently unreliable about TSRs; DOS even provided interrupts for this specific operation (although some malware would not use them).

I think (although not entirely sure) that some mouse drivers were, for example, TSRs.

The problem is that there was a wide range of purposes and implementations, including malware, so the argument is similar to "BTC is mostly used for dirty money, so BTC is inherently criminal".


All DOS mouse drivers were TSRs. The driver would hook INT 33h (which is the mouse driver "API" entry point) and whatever IRQ (i.e. 4 or 12) the actual hardware used. The IRQ handler would then update the driver's internal state according to data received from the mouse and, if enabled, draw the mouse cursor into the frame buffer and/or call a registered user event function (which runs in the interrupt context).
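
From the application side, talking to that resident driver is just an INT 33h call away. A small sketch, assuming Borland-style int86() from dos.h; function 0 is the standard reset/installation check and function 3 returns buttons and position:

    /* Sketch: querying the resident mouse TSR through INT 33h. */
    #include <dos.h>
    #include <stdio.h>

    int main(void)
    {
        union REGS r;

        r.x.ax = 0x0000;            /* reset driver / installation check */
        int86(0x33, &r, &r);
        if (r.x.ax == 0) {
            printf("no mouse driver installed\n");
            return 1;
        }

        r.x.ax = 0x0003;            /* get position and button status */
        int86(0x33, &r, &r);
        printf("buttons=%u x=%u y=%u\n", r.x.bx, r.x.cx, r.x.dx);
        return 0;
    }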


TSRs were unreliable when they interacted with games and other graphical applications.

Eg, things like a calendar tool trying to pop up a reminder mid-game would often not end well.


My 486 had boot sector protection in the BIOS. It would pop up a Y/N confirmation in text mode whenever something tried to overwrite the boot sector.

This froze the Windows 95 setup in graphical mode: although the text prompt appeared, for whatever reason the Y/N keys didn't work and we could never continue.

So was never able to upgrade that machine.


Yes, I remember that! But at least on my board it could be disabled.


> There were very likely some hacky TSRs that caused problems, but in my experience most were extremely reliable.

Programming TSRs as a teen was where I learned to hit Ctrl+S after every line, at most!

Was kinda hard to debug given the "terminate" part and the tools at the time.


I used a TSR to run protected mode software under early Microsoft Windows back in the day. Effectively it was letting me have the big allocations I needed while using Windows as a portable display driver.


Just because the one you used worked doesn't mean the concept was safe. Viruses also loved the concept.


The concept was safe enough that DOS officially supported it, and development tools like Borland Pascal and C++ had special support to implement TSRs.


No more than third party drivers.


So the "4G" stands for 4 gigabytes... that's funny, as at that time (early nineties) I bet no one could even imagine a humble PC having 4 GB of memory. That was the domain of supercomputers. And nowadays, even smartphones have more RAM than that...

Also, "Fun fact: The original Wolfenstein 3D engine, created by id Software, was developed using pure real mode". Well, that figures... if you still wanted your code to run on 16 bit CPUs (286 and below), you had to use real mode. That's also why, for several years, only the then-"AAA" games like Doom, Duke Nukem 3D or Tomb Raider used DOS extenders. Titles that were less demanding on the hardware (platformers and other 2D games) kept using real mode for quite some time longer.

Actually, I can add a fun fact of my own: when I started in software development in 2000, it was at a company developing Windows applications using Delphi - and at that time they were still keeping up compatibility with Windows 3.1, i.e. compiling in 16 bit mode. They finally switched to 32 bit soon after I joined, and I can't tell you what a relief that was. Although getting rid of the weird hacks in the source code that were made necessary by the constrained memory space of 16 bit applications took some more time...


Note that extenders did a lot of emulation stuff for you, which was not really needed if all one wanted was flat 32bit mode.

We did it for 4k intros back in the day: https://www.pouet.net/prod.php?which=289.

One fun optimization trick was the 0x66 prefix: switching from 16-bit to 32-bit mode also switched the trade-off in opcode sizes. So in that intro most of the audio code runs in 16-bit mode, while the graphics (which actually uses full 32-bit color, not a palette) runs in 32-bit mode.


I'd be interested in reading the source, but links to the source code seem to be dead (domains bought up by dodgy parking page types)


Yeah, I noticed. Tried to find it - I don't have it archived anywhere myself. But last time I looked at it I cringed pretty hard. It's not great code :) But here you go: https://web.archive.org/web/20120710162700/http://www.active...


Neat, thanks!


I first learned C in 1991 on a VAX running VMS and also on an IBM AIX machine. If I went outside the array bounds I might get a crash or segfault and could use a debugger to load the crash dump and see where it died.

When I tried using my roommate's DOS PC and C compiler I crashed the entire machine so many times. There was no memory protection of any kind. Write something outside the array bounds and you could overwrite critical DOS data structures and lock up the whole machine. Hard reboot so many times.

Then my roommate tried to explain near and far pointers. I never understood it until I looked it up a few years ago. It was all related to the 16-bit segmented vs. 32-bit flat memory models. Everything just seemed so much easier and faster on the VMS and Unix systems. But they also cost 10 to 100 times as much.
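
For anyone who never suffered through it: a near pointer was just a 16-bit offset within the current 64 KB segment, while a far pointer carried an explicit segment:offset pair. The classic example, sketched in Borland style with MK_FP from dos.h, writing straight to color text video memory at segment B800h:

    /* Sketch of why "far" mattered in 16-bit real mode: reaching
       memory outside your own data segment needed segment:offset. */
    #include <dos.h>

    int main(void)
    {
        char far *video = (char far *)MK_FP(0xB800, 0x0000);

        video[0] = 'A';     /* character in the top-left corner */
        video[1] = 0x1F;    /* attribute: white on blue */
        return 0;
    }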

I also thought it was really pathetic that I could only run one program at a time. On the VAX and Unix systems 10 to 200 people could be logged on at the same time, all doing their own thing, and it was very difficult to accidentally bring down the whole machine.

It all made me NOT want my own PC because DOS/Win 3.1 was so limited. It wasn't until Linux in 1993 that I wanted my own PC.


I agree DOS/Windows wasn't anything to look forward to running. It was only good for FreeCell, or for selling software. I did have a Windows laptop that dial-up connected to the Internet via Winsock.dll though, so that was cool.

I was running OS/2 1.2 and 1.3 in 1988-89 and beta versions before that. It only let you run a single DOS box but you could run any number of protected mode OS/2 text and Presentation Manager (PM) GUI programs. These were of the 16-bit (near/far) segmented memory model. In 1992, OS/2 2.0 ran the 32-bit flat mode and multiple DOS boxes in windowed areas of the desktop, as well as Win16 programs.

The company also had developed their software for IBM mainframes, Wang minis, VAX, AIX, HP-UX, and DEC Alpha. Of all of these OS/2 and Windows NT were the most interesting to me as they seemed close to consumer platforms. The experimental NeXT target we dabbled with but never ported to. That was the future that we had to wait many years to be popularized by Apple. Interface Builder seemed so much better than XCode though.

DOS/4GW was the flavour that came free with the Watcom compilers. It was very popular with DOS games, as it let you use the available hardware to its maximum capabilities.


I learned C on an Amiga around 1989 or so. K&R second edition had just come out. The Amiga had multitasking and a flat address space, but no memory protection. If you slipped up, you got a visit from the old "Guru Meditation" error, followed by a reboot.


> I first learned C in 1991 on a VAX running VMS and also on an IBM AIX machine. If I went outside the array bounds I might get a crash or segfault and could use a debugger to load the crash dump and see where it died.

I started on DOS... but one day I bought an AT&T 3B2/400 and two vertical-format Televideo terminals at a university salvage sale for $25 (it didn't do Lotus 1-2-3, so the business school didn't want it). That machine was a true SVR3.2 Unix machine, complete with all the development tools and incredibly good documentation. The 3B2 was a world unto itself, and opened my eyes to how limiting DOS was.


Wow, that’s a steal! They did eventually release 1-2-3 for Unix in 1990, in fact someone recently ported it to Linux:

https://lock.cmpxchg8b.com/linux123.html


I didn't fully appreciate what a deal I got until the SCSI controller failed. The part cost more than the 386 PC that replaced the 3B2.


Ditto for "C in 1991 on a VAX". But at least my college had the courtesy to teach us asm on a PC before that so that we could appreciate how addressing worked. Plus it helped to demystify pointers in C.


The Watcom compiler shipped with the DOS/4GW extender, and the audacious simplicity of being able to malloc one whole megabyte in one go felt like real magic. Good times.

* Not 100% sure if it was 1 MB, but DOS/4GW ultimately removed the run-time allocation cap. If you had X megs, you could allocate X megs with one call. That was truly revolutionary.
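
A rough sketch of what that looked like, built for the DOS/4GW target (e.g. something like Watcom's wcl386 -l=dos4g, depending on the compiler version); the 4 MB figure is arbitrary, and the real limit depended on physical memory plus whatever virtual memory the extender provided:

    /* Under a 32-bit extender, malloc() works on a flat address
       space, so multi-megabyte allocations are a single call. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        unsigned long size = 4UL * 1024UL * 1024UL;   /* 4 MB in one go */
        unsigned char *buf = malloc(size);

        if (buf == NULL) {
            printf("allocation failed\n");
            return 1;
        }
        printf("got %lu bytes at %p\n", size, (void *)buf);
        free(buf);
        return 0;
    }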


So cool to finally read about this.. I remember wondering about this when I was a kid, starting games..


And nowhere to search for info on it!


This was the most frustrating part of learning this stuff while growing up; I quickly got into more esoteric issues and complicated questions than I could find answers to from the limited circle a ten-year-old has. The big breakthrough was a teacher who let me use his Internet connection to post on newsgroups. I was always amazed at how incredibly helpful people were on there, even people at big-name companies at the time. This was in the early-to-mid 90s. I even had someone from Microsoft explain and correct code I had been sharing to make a silly TSR that would bounce a smiley face around the prompt (it was an existing .com executable I was trying to reproduce).


Depends, I was an avid collector of magazines like PC Techniques, The C Users Journal (later The C/C++ Users Journal), Dr. Dobb's, Crash, Amiga Format, Computer Shopper UK, Portuguese and Spanish versions of them like Spooler, Microhobby, Micromania, Programacion, and the local library's computing section.


> The 286 was a 16-bit processor, which means that the 286 could now address up to 16 MB of RAM!

The 80286 was a 16-bit processor but had a 24-bit address bus which is what allowed it to access 16MB of RAM. The 8088/8086 were also 16-bit processors but had a 20-bit address bus which limited them to 1MB. The Z80 and 6502 were 8-bit CPUs with 16-bit address buses which limited them to 64KB.
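
The arithmetic, for completeness: an n-bit address bus reaches 2^n bytes, which is exactly where those limits come from. A trivial check:

    /* Address-space sizes for the bus widths mentioned above. */
    #include <stdio.h>

    int main(void)
    {
        printf("16-bit bus: %lu KB\n", (1UL << 16) / 1024UL);            /* 64 KB */
        printf("20-bit bus: %lu MB\n", (1UL << 20) / (1024UL * 1024UL)); /* 1 MB  */
        printf("24-bit bus: %lu MB\n", (1UL << 24) / (1024UL * 1024UL)); /* 16 MB */
        return 0;
    }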


Anyone too young to remember or non-DOS gamers, you can see it in action here on Duke Nukem 3D running in browser on the Internet Archive: https://archive.org/details/DUKE3D_DOS :)


Thank you for this! It's also super helpful that it runs directly in the web browser. :)


I always wondered why DOS extenders were separate .exes. It looks like you were supposed to be able to replace them, or start one in advance and share it between multiple programs. But that wasn't the case: when you launched it on its own, you just got a message telling you to launch the main executable. Why did they bother to make it a proper executable? Wouldn't it have made more sense to give it a .sys or .dat extension and just load it yourself (in DOS, a matter of reading it into memory and jumping to that address)? Or, since it is proprietary and often delivered with the compiler, just link it in statically?

Random anecdote: as a kid, DOS/4GW stood for "DOS for great win(dows)" (which I know makes absolutely no sense).


You can replace DOS extenders to a certain extent. DPMI is standardized, so if an application is directly developed against the DPMI interface, it doesn't matter which DPMI server is present. I remember that some programs would run without any additional executable on Windows 98 (which provides its own DPMI server) but would require CWSDPMI.EXE when run from DOS.

I guess that other programs would directly communicate with DOS/4GW through a different interface and that's when you can't just replace it with a different extender, but there are other extenders that can fully replace DOS/4GW such as DOS/32.
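
The standardized part is discoverable from real mode with the DPMI installation check (INT 2Fh, AX=1687h). A sketch using Borland-style int86x(); any DPMI host - DOS/4GW, CWSDPMI, Windows' own server - answers the same call:

    /* DPMI installation check. The mode-switch entry point comes back
       in ES:DI (unused in this sketch). */
    #include <dos.h>
    #include <stdio.h>

    int main(void)
    {
        union REGS r;
        struct SREGS s;

        r.x.ax = 0x1687;                /* DPMI installation check */
        int86x(0x2F, &r, &r, &s);

        if (r.x.ax != 0) {
            printf("no DPMI host present\n");
            return 1;
        }
        printf("DPMI %u.%u, 32-bit support: %s\n",
               r.h.dh, r.h.dl, (r.x.bx & 1) ? "yes" : "no");
        return 0;
    }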


You can replace them. I forget the details of how I did it, but I recall doing this for a popular first-person Star Wars game at the time. I replaced the DPMI runtime with a different one and the level load times sped up dramatically. My suspicion at the time was that the included runtime switched to real mode to do disk transfers, while the other one perhaps talked directly to the IDE hard disk, staying in protected mode? Not sure, but it was very, very fast with I/O. I was pretty happy with my hack at the time.


My suspicion is you're right. Mostly based on Raymond's blog on the role of DOS in Windows 95 [1], where people using weird drivers and other things that got in the way of 32-bit disk access could massively slow down the system. It's plausible that one extender was being "nice" and using your native drivers, or at least trying to be compatible, whereas another just went "Nah, that crap is slow", went for it with a native direct driver, and was thus massively faster.

[1] https://devblogs.microsoft.com/oldnewthing/20071224-00/?p=24...


There was always a delay of a few seconds before the main program started. I found out why when I tried to write protected mode code myself: you could only ask for 4 KB of memory at a time from EMM386 (or HIMEM, I forget). There was no bulk access. So you had to loop through the complete allocation in 4 KB steps. I think EMM386 switched to protected mode, changed a page table, then went back to real/V86 mode for each block.

When my code was written, it had exactly the same delay as other dos extenders.


Actually, Gates really did say "640k will be enough...", but as I remember, the context was "to run (that year's) MS office software comfortably", with typical business tasks of that year :)

So I think it would be better for him to admit this and give the exact source, so there would be no Streisand effect issues.


He said this in 1981; there was no Windows at the time, let alone MS Office. The average simple text-mode DOS application was barely 50-100 KB in total, so 640k was quite a comfortable limit for several more years.


Some late DOS extenders provided Win32 APIs for stuff like the console, threads, memory etc. HX-DOS (https://www.japheth.de/HX.html) was famously capable of running Quake 2 on its own.


I loved Watcom C/C++ for what I could do with it. Unreal mode became easy with the appropriate magic.


EDIT: Why do I post sometimes...


Changed because parent changed. Don't sweat it, fren. We all have done it!


EDIT... :)


Very interesting article. My projects were always written for real mode on DOS; by the time I would have wanted to use these extenders, Coherent and then Linux came out.



Love pikuma! Great learning resource


I actually remember this shit. Man I'm old.



