Can someone who upvoted this explain why? I'm physically having difficulty reading the blurry Google Books text (and zooming isn't doing enough to mitigate that) and would love to know why this made it to the front page.
Is there maybe a copy of this text on a plain webpage anywhere that I can read instead?
> Is there maybe a copy of this text on a plain webpage anywhere that I can read instead?
Unfortunately not; this seems to be one of those obscure and unusual facts that may or may not have been well-known 26 years ago, and something I couldn't find a reference to anywhere else on the Internet (but that may be due to other things[1].) I've reproduced the interesting part here:
> The size of an SFT entry is fixed for every DOS version up to 5.0, but Windows attempts to determine the SFT size by observation. It scans conventional memory seeking three evenly spaced occurrences of the string "CON" and takes the distance between them to be the size of one SFT entry. Since DOS itself opens three consecutive SFT entries for the CON device, this technique usually works. But if a utility program moves that portion of the SFT to upper memory (memory above 640K and below 1MB), Windows won't find it.
> HIFILES.BAT overcomes this problem by placing a trio of ersatz SFT entries in conventional memory, where Windows can find them. All that's important is that the characters C, O, N, and space appear three times, starting 59 bytes apart.
DOS uses a System File Table (SFT) to track open files (file handles are just pointers into the SFT).
Despite the size of each entry of the SFT being the same for all versions of DOS up to 5.0, Windows attempts to determine the size by observation.
Because DOS itself opens three handles to the CON device, it can determine the size of an SFT entry by finding three evenly spaced occurrences of the string 'CON ' in memory.
This usually works, but if a utility program (not sure what that refers to) moves that portion of the SFT into upper memory (640K-1MB), then Windows won't be able to find it.
The magazine has the solution in the form of a .BAT file that will place three properly spaced instances of 'CON ' in conventional memory so that Windows can find it.
Welp, this wins the award for the ugliest hack of the week, and I'm not sure which part of the post to award it to: Windows scanning memory for strings, or the part where you make a batch file just to make that check pass.
Back in the day, this was not even considered odd, in the Microsoft ecosystem. People collected lore like this because you had to know a great deal of it to get anything to work. Writing the code to do the actual work was 10% of the job. Compensating for stuff like this was the rest.
Right! This sort of "ugly hack lore" more generally is why it's extremely difficult for third-party word processing applications to handle old Microsoft Word .doc files, and in fact is why Microsoft put so much effort into backward compatibility for modern versions of Windows: Microsoft Office actually contains the document-rendering engines of old versions of Microsoft Word, in order to handle documents which are in the old formats.
And I understand that a significant proportion of Azure is running VMs with headless copies of Word hooked up to Office 365 so that those same old documents can straightforwardly be opened online, too.
Apologies I can't find the reference (of course I read this on here).
I'm very curious if a) the VM stays open the whole time, b) your document gets exported to some kind of perfect serialization format that can then be re-saved when you're done, c) if Word is using something like operational transformations to store keystrokes and edits and then ticker-taping that into a save routine, or d) something else.
Even something like loading a file containing the appropriate pattern into a text editor would be sufficient; but what's more interesting to consider is what happens if you happen to have that pattern in memory before starting Windows, but spaced to imply the wrong SFT entry size for the version of DOS --- I could imagine some pretty odd things happening... as well as a KB article getting published if some important customer ran into it:
"Windows becomes unstable when started after you work with files containing 3 or more occurrences of 'CON' spaced between 16 and 256 bytes apart."
Windows 3.1 needs to manipulate the MS-DOS System File Table (the table that keeps track of open file handles) in order to facilitate multitasking applications. This is fine.
However, Windows 3.1 relies on an utterly absurd hack to determine the size of entries in the MS-DOS System File Table, despite the fact that MS-DOS and Windows are both under the control of the same company.
It scans raw memory looking for three instances of the string "CON", assumes this is the location of the SFT, and uses the distance between the first two CONs as the size of an SFT entry.
The hack breaks if the SFT has been moved to high memory, which prevents Windows from running. The workaround is to put three CON instances in conventional memory, correctly spaced, before starting Windows.
> despite the fact that MS-DOS and Windows are both under control of the same company.
But Windows 3.xx ran on non-Microsoft versions of DOS. They tried to stop this with the release of 3.1 but ultimately failed. DR-DOS was a popular way to run Windows (although piracy was rampant, and many simply pirated MS-DOS instead of buying the cheaper DR-DOS).
No. There was never any intention to keep non-MS-DOS systems working. Quite the contrary. But many, many users had add-on memory extenders to move things out of the memory MS-DOS knew about, to leave more room for programs, or to speed up switching between them.
The CON hack was just business as usual, and hacks to keep the hacks working, like this one, were stock-in-trade for people obliged to work in the environment. Microsoft was directly responsible for a two-decade regression of progress in software system design. They did not just displace better systems; they got everyone convinced that their stuff was normal.
>No. There was never any intention to keep non-msdos systems working.
$155 million and Bill Gates: "You never sent me a response on the question of what things an app would do that would make it run with MSDOS and not run with DR-DOS. Is there feature [sic] they have that might get in our way?"
I think that was a hidden cost in the decrease of computer hardware prices. It's hard to know what would've happened without Microsoft, though: whether we would all have used expensive hardware like Macs and Amigas (or IBM), or whether something like Linux or a different IBM OS would have become the huge thing. It was great how we could build our own inexpensive system, or buy an inexpensive IBM-compatible someone else made. I have felt grateful to Microsoft for that, but maybe I'm wrong and it would just have been another company that IBM chose.
OS/2, for example, ran on the same machines as W95, and was fully modern. It was easy to keep an OS/2 system running for months without a crash. But few would develop for it, because few had it.
People paid absolutely through the nose for all the effort needed to keep anything working at all. Disk de-fragmenting, for example, was a Windows-only phenomenon.
Consider all the Free Software we use today, that just works without attention, year after year. People had to pay high prices for Windows software because it took so much effort to get it to work at all, and to keep working when the tide changed.
I used OS/2 a lot, both version 2.1 and later Warp 3, back then. While it was really powerful (not to mention a lot more stable) compared to Windows 95, it was also heavier and slower. IBM advertised Warp 3 as something that could run in 4 MB of RAM; that was no different from Microsoft stating 20 years later that Vista could run in half a GB of RAM. Yes, the system loads, but you sadly can't do much with it save for looking at the shiny icons and enjoying the sound of the hard drive constantly spinning.
Unfortunately, back then RAM modules were so expensive that being forced to upgrade to 8 MB to have a usable system made a real difference, so although once the RAM was upgraded OS/2 could be a huge step forward from Windows, most people simply wouldn't spend that money.
4MB was ~$150 in 1994, less than the cheapest 486 processor, the cheapest CD-ROM drive, or a 14.4 modem or 16-bit sound card. Whole 486 systems started around $1.2K, Pentium ones at $2K, and 8MB was the standard lowest RAM offered. $150 wasn't the end of the world.
OS/2 had consistently higher hardware requirements than MS-DOS and Windows. Mainly, the memory usage was way higher. This was the price you had to pay for the better system design.
The MS system software of that era is a prime example of how worse is better for software products.
Sorry, but from my experience, this is not true. I did have several machines running W95 and I did try to run OS/2 on them. It would boot, but it would not run fast enough to work with. W95, on the same hardware, worked quite well and was responsive.
Cheaper was better... even with Microsoft murdering the competition, the old school companies never saw the error of their ways and kept selling stuff at a premium.
We’d be discussing this on a Prodigy forum if the incumbents had their way.
> Microsoft was directly responsible for a two-decade regression of progress in software system design.
People did buy the products rather than better stuff. However, there were monopolist practices that kept the better stuff down, and monopolist practices are hard to spot when there is hype and a new industry people know nothing about. If you are trying to play catch-up to get on board this tech train, you don't know that it's a con.
I believe Microsoft's 'regression of progress' goes further back. BASIC is the real culprit. Nobody needed line numbers and goto statements, yet hobby and home computers from the 8-bit era were invariably burdened with lousy variants of Microsoft BASIC. That includes the Commodore machines: even if they did not say Microsoft on boot-up, they were.
Fundamentally, there was an open source ethos at the computer club of lore, where it all started. Bill Gates cloned BASIC from the PDP-11 he had access to, then made people pay for it rather than just copy it. He kind of invented the notion of software piracy being evil and morally wrong.
The UK home computing scene had BBC BASIC and Sinclair BASIC, which were not directly derived from Microsoft's effort. The Sinclair contrivance was a super cheap BASIC that needed the funny keyboard mappings; BBC BASIC was actually a decent, real programming language rather than just the lame Microsoft idea of what BASIC should be.
The only BASIC most users knew was 'LOAD""' followed by a press of the cassette play button to load a game.
MS-DOS was acquired and repurposed, sold to IBM with BASIC doing the groundwork - Bill Gates did not have an OS before then.
So computing was halfway towards being open source before Bill Gates ruined it by making people pay for his clone of PDP-11 BASIC.
Computers were also fundamentally networked before MS-DOS came along. Bill Gates kept it as personal computing, fleecing everyone rather than sharing resources.
>I believe Microsoft's 'regression of progress' goes further back. BASIC is the real culprit. Nobody needed line numbers and goto statements, yet hobby and home computers from the 8 bit era were invariably burdened with lousy variants of MicroSoft BASIC. Including the Commodore things, even if they did not say Microsoft on boot up, they were.
The Atari 800 series had a unique BASIC with unusual string-handling conventions and graphic and sound commands miles ahead of the vanilla Commodore (pre-7.0) dialects.
I don't blame them for line numbers. Remember that the earliest hobbyist machines were going to be connected to who-knows-what display devices, even literal teletypes, where you might not have been able to support a full-screen editor or a convenient way to differentiate immediate versus program commands. Line numbers give you workarounds to those problems.
It could be argued they languished too long -- perhaps BASICA/GWBASIC, built for a machine with standard robust display controllers and ample ROM and address space, should have been more like QuickBasic -- but if you wanted a box-checking 8K BASIC for your new microcomputer, it's not an awful choice.
It seems doubtful that Microsoft would jump through such ridiculous hoops to ensure Windows ran on DR-DOS. They did the exact opposite, and added code to detect DR-DOS and disable Windows.
And in any case, it would be up to DR-DOS to emulate whatever MS-DOS was doing.
> despite the fact that MS-DOS and Windows are both under control of the same company.
I've never worked at Microsoft, but I have read that teams at Microsoft have been (historically, at least) very disjointed and uncooperative towards each other (No idea whether that's true, but the existence of Powershell in spite of cmd was given as an example)
Sadly, I don't have a reference handy, but I recall reading a while back that there was a toxic culture at Microsoft in the past where one of the easiest or lowest-friction ways to climb the ladder professionally was to undermine others, due to the way performance was evaluated. Basically, something of a "cull the lowest-performing members" model.
Not sure if this is related or even reality, my memory wasn't so great even when I was young.
Windows and MSDOS teams were very close, no issue there.
The real reason someone might need to do this hack is that there were in fact many versions of "DOS" not made by Microsoft. These included Zenith, Compaq, Digital Research, and IBM, and each of these had its own subversions.
IBM DOS was basically MS-DOS (except IBM did do some of their own work of course). Pretty sure Zenith and Compaq DOSes were OEM MS-DOS (Zenith did have a "Z-DOS" but it was just another OEM version of MS-DOS for a non-IBM-compatible system). DR-DOS was of course a true third party DOS (one of the very few contemporary ones IIRC)
Reading the source code (which is available on archive.org) you can see exactly this. Lots of IFDEF statements for compiling MSDOS and PCDOS differences. When I first saw that, I was really surprised.
You are being reminded that Microsoft code was always batshit insane, and only ever just barely worked.
In the W95/98 era, they succeeded in redefining "crash" to mean, not that the program or OS stopped, requiring a restart, but rather that restarting wouldn't work, and you would have to re-install the OS.
I still recall my shock at learning of this re-definition. I had said that Linux (of the day) would hardly ever crash, and the person said he often went weeks without needing to re-install his OS. Crashing, in the old sense, was a several- or many-times-a-day event, and hardly deserved mention. People berated themselves for not having saved their work recently enough. Sometimes they complained of bluescreens while saving, and would alternate saving to two different files, so one might survive.
I disagree with your redefinition of "crash". I remember the 90's quite well, and have spent lots of time supporting everything from DOS 3.3 on up to W10. That's not to say that there weren't those people for whom "crash" meant "OS reload". Usually when somebody said their computer "crashed" and they had to reinstall their OS I discerned a few possibilities:
1) the user broke their system because they were installing anything they could and are blaming MS for their troubles
2) the user has no idea how to fix anything so they default to a reinstall because they think it's easier than finding the fix. In a pre-Google/StackOverflow world, this may have been true.
3) The user was on a quest for absolute stability, and was superstitious about uninstalling/reinstalling drivers etc.
As for the "only ever just barely worked" statement, I strongly disagree. Yes, a lot of maintenance and good habits were required, but it wasn't outside of the realm of possibility to have a stable Windows ME. I know, because I'm one of about 5 people who ever liked Windows ME. I thought it was a decent OS, as long as you figured out that the Device Manager had 10 copies of every driver, unhid them, and removed them all and rebooted.
I guess you don't remember when it came out that a Windows 95/98/Me machine, running nothing at all, would crash spontaneously after 49.7 days, when a millisecond tick counter rolled over. No one had noticed in the years prior, because no one had ever got one to run that long.
Boeing 787s need to be fully power-cycled every few months, for identically the same reason, although its time counter ticks less frequently. They would fall out of the sky, otherwise. It is just lucky somebody noticed before it happened, because there has otherwise been no need to reboot it.
Wow I had managed to completely forget all that. Yet I still spontaneously save documents way too often as a trigger reaction, even though I haven't used Windows in any meaningful capacity in nearly 2 decades.
Another weird habit, this one caused by learning Unix through QNX flakiness when dealing with "quad-density diskettes": using the 'sync' command, repeatedly.
It is very rarely necessary to sync nowadays, but back then we used to "sync thrice like you mean it" because you'd lose everything on the disk if you didn't.
I'd be interested in a reference for this "crash" definition you're describing. It was common parlance in the late 80's in the DOS-era PC circles I ran in to use the word "crash" to mean a physical fault with a hard disk drive (probably in reference to a head crash) and I've seen it picked-up by the novice community over time. I've never seen any Microsoft-provided reference that forwarded such a definition.
None of this is to say that MS-DOS and Win3.x isn't a bizarre maze of assembly-language hacks. It was unreliable as hell, though the entire PC software ecosystem and playing fast-and-loose with tricks like directly accessing hardware was as much to blame as anything.
I've had things happen to old versions of Windows that have completely blown the system installed to the C: drive.
This was definitely a thing. You can see an artifact of this folk knowledge in how many of the more technical users and professional IT/desktop support staff in computer labs used C: as the system drive and D: as the work area containing user files, even on single-user machines. Losing the C: partition was a bad burn, bad enough that you didn't let it happen twice.
Years later, even in the Vista days, I had a partition-destroying error from a truly innocuous action (something like renaming a file; and that NOT being just a coincidence, but a repeatable error with Google results), and had a flabbergasted flashback to a prior C: drive implosion. It reminded me why you always split the bootable system install off onto its own untouched C: partition, only put the things you care about on the D: drive, and create lots of backups.
This was real. Never touch C: and don't trust it. Never put anything that matters in the C: partition.
One of the ways Windows would corrupt itself was an optimization it used to reduce the size of the VM swap file. On many OSes, when a program is loaded into memory, the executable's code pages are treated like any other memory page: the VM might write a page to the swap file, and later reload it from the swap file to restore that memory.
On Windows, code pages were not sent to the swap file, because a copy of that data already existed in the program's original .exe file! Instead of writing the memory page to the swap file, simply freeing the memory was much faster and saved a lot of space on the disk reserved for the swap file. The code pages were supposed to be read-only, but some kind of bug(s) would cause these pages to be treated as normal memory, where writes would mark the page as "dirty" and eventually the VM would write those changes back. Unfortunately, for code pages, the "swap file" was the original .exe, so the changes to the memory page would corrupt the .exe, probably requiring it to be reinstalled.
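A toy model of that failure mode, as I understand it from the description above (the names, tiny page size, and structure are mine, not Windows internals):

```python
PAGE = 16  # tiny pages to keep the demo readable

class MappedPage:
    """A page whose backing store is the original executable image."""
    def __init__(self, exe: bytearray, offset: int):
        self.exe, self.offset = exe, offset
        self.data = bytearray(exe[offset:offset + PAGE])
        self.dirty = False

    def write(self, i: int, value: int):
        # Should be refused for a read-only code page; the bug lets it through.
        self.data[i] = value
        self.dirty = True

    def evict(self):
        # Dirty pages are flushed to their backing store -- which for a
        # code page is the .exe itself, corrupting the on-disk file.
        if self.dirty:
            self.exe[self.offset:self.offset + PAGE] = self.data

exe = bytearray(b"MZ" + bytes(range(14)))  # pretend executable image
page = MappedPage(exe, 0)
page.write(2, 0xFF)                        # wild pointer hits a code page
page.evict()                               # page-out writes it back
print(exe[2])                              # prints 255: image corrupted
```

The fix on a modern OS is simply to back code pages read-only and copy-on-write, so a stray write can never reach the original file.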
> always divide the bootable system install off to its own untouched C: partition
They might have been using 'ghost' to restore the C: partition over the LAN at boot time. This was very common in professionally-managed settings.
> Unfortunately, for code pages, the "swap file" was the original .exe, so the changes to the memory page would corrupt the .exe, probably requiring it to be reinstalled.
Do you mean Windows modified user programs without warning? It seems like it was a serious bug.
Yes, it could overwrite any program, but (if I recall correctly) it hit OS files most of the time, simply because they were always open. It was a very rare bug, because it had to combine both corruption of the VM to allow the write to happen and a bug (e.g. a pointer error) that would write to the program's code page. The actual modification would usually be very small, and wouldn't change the file size.
And yes, it was a serious bug that might have been responsible for why Windows seemed to have a "half-life". Over a given period of time, each system had a chance of corrupting itself. Some systems would be fine for years, but many needed to be reinstalled after a year or so.
Ran a fair bit of Windows 95 back in the day, and a bit of 98 before ditching and going all-out Linux. Did weird shit to those systems, had my liberal share of blue screens and reboots and facepalms and shouting at my CRT. Also frequent reinstalls because things inevitably grew crufty and polluted beyond repair. But these were planned and deliberate. I never once had a system die on me.
You probably, by luck or design, bought decent gear.
The biggest killer of windows systems was drivers and bad hardware.
The average punter bought some garbage device at retail and plugged in his software modem, software printer, and god knows what kind of software. Those boxes died.
Enterprise software was usually even worse, as companies were uniquely unskilled to produce it.
That's simply not true. Many, many people got their start with Microsoft Basic, Microsoft Visual Basic, etc. People still make a living knowing the very reliable Microsoft Excel.
Windows OS quality has varied quite a bit, and I think it is unfair to call the majority of Windows "batshit insane." It isn't a knowledgeable opinion and it isn't necessarily fair, either.
There ARE plenty of mistakes in the history of Windows, don't get me wrong.
Microsoft has gone well out of their way to keep backwards compatibility from release to release, even if it means other features won't be added. Every few years they break this trend and they give everyone multiple years of notice.
There are often very good reasons for some of the things that appear to be insane, as any regular reader of Raymond Chen's "The Old New Thing" blog will testify.
Arguably, if anyone I know of would have a valid opinion of Windows' level of insanity, it would be Raymond Chen, as he's had his hands in the Windows source code for 2-3 decades at this point. He continues to work as a developer at Microsoft, so I'm quite confident that the level of "batshit insanity" in Windows is quite low and getting lower by the day.
Getting an 80286 or 80386 out of 8086/8088 mode to address more memory required access to the A20 address line, which IBM routed through the keyboard controller on the motherboard. That led to the A20 hacks necessary to access memory above 1MB (on top of the hacks already in place to access over 640K). Note that this hack required CPU support up to Intel Haswell, which is only 5-6 years ago.
Actually, the hack is there to fix the incorrect behavior of the newer processors in real mode (not forcing address line 20 to 0, as the 8086 effectively did).
The fact that it was put into the keyboard controller was probably because it is a simple SoC[1] with (in the case of a keyboard controller) about 256 bytes of RAM and about 1-4K of ROM, and those resources were probably still available to the motherboard designers without complicated logic or additional components. It is not the weirdest fix to imagine. Note: modern variants of the keyboard controller can sometimes be flashed, which is fun to play around with.
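The addressing effect being gated here is easy to model (my illustration; real systems toggle the gate through the keyboard controller as described above):

```python
# With the A20 gate closed, bit 20 of every physical address is forced
# to 0, so real-mode addresses just past 1 MB wrap back into low memory,
# matching the 8086's 20-bit address bus.

def physical_address(segment: int, offset: int, a20_enabled: bool) -> int:
    linear = (segment << 4) + offset      # real-mode address formation
    return linear if a20_enabled else linear & ~(1 << 20)

# FFFF:0010 is exactly 1 MB; with A20 masked it wraps to address 0,
# which some 8086-era software relied on.
print(hex(physical_address(0xFFFF, 0x0010, a20_enabled=False)))  # prints 0x0
print(hex(physical_address(0xFFFF, 0x0010, a20_enabled=True)))   # prints 0x100000
```

The ~64K of memory reachable only with A20 open (the "high memory area") is exactly what DOS memory managers fought over.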
People who lived it could be forgiven for wanting to forget.
I never had any direct contact with Microsoft code, but was surrounded by people who did.
It should be noted that Windows NT/XP/Vista/7/8/10 are a different lineage, and carry very little code from the 3.1/3.11/95/98/Me codebase, aside from some UI subsystems. There was not much contact between the two groups, and (I was told) a fair bit of bad blood.
If you find somebody claiming that crashes were not, at minimum, a daily event on W95, you may commiserate over their amnesia. Many people ran Windows programs under OS/2, where only the programs would crash, not the OS, and so less often, and they did not need to reinstall the OS.
OS/2 had a big advantage over Windows in that it was an OS designed from the ground up.
Windows started out life as nothing more than a GUI designed to make MS-DOS look more like the Apple Macintosh.
One of Windows 3.x's big strengths was its ability to run a large number of MS-DOS programs.
But that was also its weakness, because it also meant it had to run MS-DOS, and that OS was not much more than a boot loader -- nothing like a modern operating system such as OS/2.
However a big reason why OS/2 failed was because OS/2 1.x could only run a handful of MS-DOS programs and it ran them very badly.
People would use Windows 3.x ahead of OS/2 because Windows could run their MS-DOS programs.
No, generally OS/2 ran Windows and DOS programs much more reliably than Windows could. But you couldn't buy a machine with OS/2 on it (except from IBM) because Microsoft had the market locked down, so it (1) cost money -- $100 was a lot, back then -- and (2) had to be installed over top of the Windows already there. So, only a few geeks had it.
Rumors about incompatibilities spread because people preferred to rationalize what they were already doing, and there was a lot of money in tuning and fixing messed-up Windows systems that would have dried up.
OS/2 1.x was the competition for Windows 3.x and it ran DOS programs in a thing colloquially called a dog box so you can imagine how well that worked.
And that dog box could only run MS-DOS and no Windows programs.
Now, it is true OS/2 1.x only ran on an IBM PS/2, but when it came to running Windows and DOS it was no match for Windows 3.x itself.
Now, OS/2 2.0 did run Windows and DOS programs much better than Windows 3.x, and it ran on any machine that could run MS-DOS, but it came out after Windows 95.
OS/2 2.0 did this by using the hardware support built-in to the 80386 chip and Windows 95 used that exact same hardware support to greatly improve on Windows 3.x.
Now while OS/2 2.0 was still much better than Windows 95, the race was lost only because by the time it came out Windows 95 was a smash hit.
Also, OS/2 2.0 needed a high-end 80386 machine with lots of (very expensive) RAM, whereas Windows 95 ran just fine on low-end 80386DX chips using much less RAM.
Unfortunately, OS/2 had a crucial flaw in its design: a Synchronous Input Queue (SIQ). What this meant was that all messages to the GUI window server went through a single tollbooth. If any OS/2 native GUI app ever stopped servicing its window messages, the entire GUI would get stuck and the system froze.
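A tiny sketch of why a synchronous queue is so fragile (a model of the concept, not OS/2's actual Presentation Manager code; all names are mine):

```python
def pump_synchronous(events, handlers):
    """Deliver input events one at a time, waiting for each window's
    handler to finish before moving on -- one hung handler stalls all."""
    delivered = []
    for target, payload in events:
        if not handlers[target](payload):  # False = app stopped servicing
            break                          # nothing behind it is delivered
        delivered.append(target)
    return delivered

handlers = {
    "editor":  lambda e: True,   # well-behaved app
    "badapp":  lambda e: False,  # stops servicing its messages
    "browser": lambda e: True,   # never even sees its events
}
events = [("editor", "key"), ("badapp", "click"), ("browser", "key")]
print(pump_synchronous(events, handlers))  # prints ['editor']
```

An asynchronous design instead gives each window its own queue, so a stuck app only starves itself, which is roughly what the Warp 4 fix and modern GUIs do.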
In 1996, IBM released OS/2 Warp 4, which included a revamped Workplace Shell, bundled Java and development tools, and a long-awaited fix for the Synchronous Input Queue. It wasn’t nearly enough. Sales of OS/2 dwindled while sales of Windows 95 continued to rise.
Windows 95 was a smash success, breaking all previous records for sales of operating systems.
So, to clarify: it was OS/2 Warp 4 that finally delivered on the promise of "a better Windows than Windows", but unfortunately it just came too late.
I should add that many people ran stock systems, without add-ons, and only one or two programs provided directly by Microsoft, and did not experience as many crashes.
Microsoft was better able to reproduce failures when only their code ran, and took some care to make that work.
> I never had any direct contact with Microsoft code
So your claims rather reflect your memory of your interactions with the people who did have direct contact, not the actual state of things.
In reality, Windows 3.0 and 3.1 are hugely different from 3.11, and from 95/98/Me. Starting with 3.11, there was actually code that avoided the lower-level calls to BIOS and DOS where possible (if the hardware allowed), using new 32-bit code instead.
The stability of the OS was definitely not as you describe; one could surely use the system for days without it crashing.
The system's design was the result of the engineering goal of having a system that could work with very small amounts of memory, compared to the competing, safer systems. With more memory available on the computer to implement hardware memory protection, one was able to use the "safer" systems like NT.
The wide use of that lineage vs. NT was a strong reflection of market needs: people had computers with too little RAM for the "safer" systems but still wanted to run the programs that had the functionality they needed.
Additionally, the "safer" systems not only demanded more RAM, but were often less convenient for some tasks.
A lot of users who "had to reinstall the system often" were actively doing "unhealthy" things themselves: "trying" software they didn't need, which made interventions on the system that damaged it.
The programs running on the OS (even on NT) typically did tend to crash regularly, however -- especially the biggest, like Microsoft Word. Fascinatingly, that was by design: it was the "knowledge" of the managers that quickly delivering new versions of programs with more features was more important than code correctness. Just like in the web world more recently...
Wow, two, three whole days without a crash!? Imagine!
At the time, people running Linux on the same hardware resented being obliged to shut down after six months to upgrade their OS or disk storage. Back then, Linux was much less professionally maintained, but if it ever crashed at all it was a big deal. OS/2 likewise.
Windows crashes were not just for tinkerers. In any office, secretaries and receptionists were very used to re-booting in midday, despite shutting the machines off every night. They considered it completely normal.
That's not what I said. My claim was more in the direction that OS crashes happened, for a properly maintained system on reliable hardware, starting with 3.11, more like... never. The user-space programs did crash, but that was often practically a "feature" of the design goals.
Linux? In 1993 the kernel was not even v1, and the distributions didn't exist. Linux was a text-based toy back then, with nothing for those who needed the user-space GUI programs running on 3.11 or 95, and who couldn't afford a computer to run NT, which was released in 1993. Since 1993 there was a safe Windows system; anybody using a less safe one, like 3.11, 95, 98, or Me, did so as a trade-off in order to pay less for hardware, or to use a program which relied on something NT didn't have (and Linux surely even less). But even 3.11 already had a stable 32-bit core. Hardware was expensive then. Your memory is definitely not reflecting the actual state of that time.
Linux, in fact, had its X Window System practically from the beginning, so your recollection is 100% false.
Most people could not use Linux because it did not run Windows programs, or anyway because they did not feel competent to install it, but it needed no more machine than W95. OS/2 needed no more machine either, and would run Windows programs more reliably than Windows could, but cost real money and also had to be installed.
NT was pretty crashy back then, too, and a huge resource hog besides, and many Windows programs didn't run on it either. Vista was an even bigger hog. 10 is hundreds of times bigger, but RAM and disk got cheap, so we don't often notice.
Your mention of Vista, from 2007, shows the amount of confusion: the relevant years for comparison in this thread are those in which Linux had no distribution (none existed before around 1995 at minimum) and no serious software with the features of what was written for Windows, while a stable Windows NT had existed since 1993. From that same year on, users chose non-NT Windows systems because they needed cheaper hardware. That's it. And even the non-NT systems were more stable than the confused comments claim. The instabilities were typically caused by bad hardware or bad user software.
I was also around and never heard anyone use "crash" to mean "need to reinstall the OS". Always in the same sense as on Linux, which absolutely was a breath of fresh air by comparison.
The fin de siècle HN is currently in is apparently some GenX midlife crisis phase where we're so mad about ecmascript that we vote up any rando retrocomputing article which gets posted. Like we have to remind ourselves that we loved this shit back in the day.
But I got all this knowledge from reading PC Magazine, Mac World, etc., though mostly from the tabloid trade weeklies PC Week, Computer World, and Mac Week. This old PC/Mac stuff was never really properly put on the web. Yes, it's hard to read nowadays in some viewer. But the ads are much more attractive.
(Just for the record, I like making stuff with ecmascript.)
Some context from the beginning of the article for why this data would be moved in the first place:
> With the advent of upper memory managers, several utilities that save conventional memory by moving DOS system data into upper memory have become popular. Many of these utilities extend the System File Table (SFT), creating space for additional file handles in upper memory and linking it to the SFT chain. This technique can save as much as 14K of conventional memory, depending on the number of SFT entries moved to upper memory.
Suppose you know that three consecutive entries have "CON" for the name. If you can find them in memory, then the distance between the "CON" strings is the same as the value of sizeof (struct file_handle). Unless, of course, there happen to be false-positive "CON" bytes somewhere in between.
I suspect many only read the title and went straight to the comments to throw around their own personal anecdotes about Windows 3.x, and very few actually tried to read the linked content.
Thanks! Though unfortunately I still can't see more than a few letters at a time due to the letters being blurry; the brain still goes "these are a bit smudgy, let me turn the entire graphic into a giant smudge for you".
You seem to be implying that because you're having trouble reading it, nobody could possibly be interested in it or have any reason for upvoting it. As you can see from the rest of the discussion, in fact people can read it just fine. The upvotes don't require defending or explaining.
If that's how you read it, I encourage you to reread it when you're not in a bad mood: I have physical difficulty reading the article; others upvoted it, so they clearly did not have that difficulty, and might be able to explain what it's about for those of us who can't read text unless the letters are clean enough not to trick the brain into turning them into one giant smudge at any zoom level.
Make of that what you will, but turning it into "berating people for upvoting" projects a frankly bizarre kind of negative narrative onto what is a request for help.
When you said "Can someone who upvoted this explain why?", I read it as "Can someone who upvoted this explain why they upvoted it?". I now gather you meant "Can someone who upvoted this explain why Windows 3.x does that?"
It's an interesting ambiguity that I failed to spot, rather than some kind of bad mood. Thank you for answering earnestly.