If you are implementing an emulator, you must insert some jitter into the emulated floppy drive timing.
Because if there is no jitter, the ROM's calibration code does a division by zero and crashes.
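The failure mode is easy to reproduce in miniature. Here's a hedged sketch (all names are mine, not the actual ROM code) of how a speed-calibration routine can divide by zero when the measured interval is exactly zero:

```python
# Minimal sketch of a delay/speed calibration: time a fixed amount of work,
# then divide to get work-per-tick.
def calibrate(read_tick, spin):
    """Return loop iterations per clock tick."""
    start = read_tick()
    spin(1000)               # run a fixed number of loop iterations
    elapsed = read_tick() - start
    return 1000 // elapsed   # ZeroDivisionError if both reads hit the same tick

# On real hardware, mechanical jitter in the drive guarantees elapsed > 0.
# In a perfectly deterministic emulator, both tick reads can land on the
# same value and the division blows up.
```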
Modern Mac OS also has all sorts of "bugs" that Hackintosh users need to patch or otherwise work around. Since we're doing something that was never intended, I don't really see these as flaws in the OS.
EDIT: I should add, this applies to applications, not just OSs. If you're an Android dev - will your app run on Android 15? Will it work on ChromeOS? Will it run on Fuchsia? If you're writing Windows software - will it run on ARM? If you're making webapps - do they work in Firefox? And maybe it's not worth the effort, especially if you don't plan to be selling the same software in 5 years, or maybe you think you can just deal with those things when you get there, but if you plan to still be in business in a decade then you should plan accordingly.
For my sanity would you mind calling it Mac OS X/OS X/macOS? I’m not too picky about you matching them all up to the right release but the moment I see Mac OS my mind jumps to the old one without memory protection ;)
2 built-in USB C ports on the top, the I/O card has 2x USB-C and 2x USB 3, and each GPU has 4x USB-C.
So with dual GPUs that's 14 USB ports. Maybe some are implemented via internal hubs?
If not for that, it would in fact go way over, because USB3 ports count twice.
That said, the limit is more problematic than it initially appears, because USB 3 ports count twice—once for USB 2 devices, and once for USB 3 devices. Some motherboards also use USB under the hood for things like Bluetooth (as do real Macs, btw), and even USB headers which aren't connected to anything will take up space if you don't explicitly exclude them.
Indeed, there's a pre-made VirtualBox image pinned to the top of Reddit's /r/windows95 if you are lazy.
If only I had known about AMDK6UPD.EXE back then and been able to understand the reasons behind the crash and why the patch fixed things.
Back in the early 1990s I was in college and working in the computer lab, so I wrote various little DOS utilities to help us better manage the computers and their interaction with Novell NetWare.
Due to this reminiscing I have even purchased a few tech books from that time: the MS-DOS Encyclopedia, Peter Norton's Programmer's Guide to the IBM PC, and some others.
I only wish I still had a copy of SpontaneousAssembly 3.0 as it would be fun to recompile some of my old code!
I am not familiar with how libraries work in the US. Can anyone get a library card with the Library of Congress? They have the floppy images:
Has a link for purchase which 404s, but it points to a site that still exists. Maybe Kevin is the friendly type?
I don't live anywhere close to the Library of Congress, so it's not easy for me to get a copy there :(
I go to the Library of Congress every once in a while and could ask for this. In my experience, to get items like this you use the Ask a Librarian link on the right and they'll work with you from there. Send me an email at the address here and I'll try this the next time I go: http://trettel.org/contact.html
(Unfortunately my next Library of Congress trip might not be for a year or more at this point due to COVID-19 and life.)
I could make disk images with GNU ddrescue, or another software if you prefer. Note that I don't have a floppy drive at the moment but will ask them if they have an external USB drive.
I'll bet the AMD name was suggested by producers and/or management over the protest of engineering, with the argument that the public knows this as an AMD problem and so it's better to call it that regardless of the technical reality. I've seen this logic many times in my career and do understand there's some rationale to it.
Faced with the Great Satan of Software's apparent refusal to admit its mistake and eliminate the charge, AMD has made the fix available from its Web site free of charge.
Later 386 and 486 systems implemented turbo logic in different ways: some reduced the bus speed, some disabled the CPU caches, some inserted wait states for memory access.
It is a common misconception that the MHz displays from this era had any kind of communication with the CPU or motherboard. They don't -- they are dumb devices that can switch between showing two arbitrary patterns on the LED display and are "programmed" by painstakingly setting jumpers on the back. Often, when using 2-digit displays for computers with 3-digit clock speeds, they would be programmed to display "HI" and "LO" instead of a number.
So when your display showed "20" that doesn't mean the CPU was running at 20 MHz. It might have been, because 386 CPUs always ran at the bus speed and 20 was a common 386 speed, but things get a lot more complicated when you move to the 486 platform with internal clock multipliers.
My 486 DX4/100 (33 MHz bus speed, 3x multiplier) has a turbo button that when disengaged lowers the effective speed of the system to something roughly like a 486 DX50. But this is not an exact science and does not in fact mean that the CPU is running at 50 MHz.
Turbo buttons were always a shaky proposition. They might have worked okay-ish with the original AT to slow the machine down into a somewhat fitting range to play older games, but probably quickly devolved into some show-offy marketing ploy ("look how fast it goes if I press this!").
Quite a project indeed, but possible with the right motherboard -- as an amusing side note, there is a very strange sub-sub-sub-genre of computer enthusiasts who enjoy the challenge of installing various Windows versions on the slowest possible systems that will run them:
They've managed feats like running Windows XP on a 4 MHz Pentium Overdrive and Windows ME on a 3 MHz 486SL (that one takes 1 hour and 10 minutes to even boot)
The biggest bang-for-the-buck contracting job I ever did was to turn that BIOS setting on and press the button to make it run faster.
This makes me wonder about three-instruction sequences of increment / decrement / jump-if-nonzero, and one-instruction sequences of jump-if-nonzero. What's the point of having the unconditional LOOP instruction in the first place?
Note: here, the PAUSE instruction is not the problem at all, but the "code that makes assumptions."
Because the "seen" code is not named, I assume it's something internal for some company?
Note that it's not exactly the same thing, because an interrupt can happen between the decrement and jump for the two-instruction case, but not for the LOOP case.
the road to "run DOS stuff [without being DOS]" was very long, and paved with many gravestones... I think OS/2 comes in in part 5 or 6, but I really recommend reading the whole thing.
tl;dr A graphical OS developed by IBM that succeeded DOS and competed with Windows. Notably, it featured pre-emptive multitasking before Windows did. It was not a success in the home market but was reasonably successful in big business, especially finance, for a short amount of time.
They sent me the binaries for OS/2 for every release until 2016
Apparently modern C++ and Qt run there without issues
On a related note, "Showstopper!: The Breakneck Race to Create Windows NT and the Next Generation at Microsoft" is a surprisingly entertaining story, and reads more like a novel than a documentary/memoir.
As a result of a feud between the two companies over how to position OS/2 relative to Microsoft's new Windows 3.1 operating environment, the two companies severed the relationship in 1992 and OS/2 development fell to IBM exclusively.
Windows 3.0 was eventually so successful that Microsoft decided to change the primary application programming interface for the still unreleased NT OS/2 (as it was then known) from an extended OS/2 API to an extended Windows API. This decision caused tension between Microsoft and IBM and the collaboration ultimately fell apart.
OS/2 was an alternative operating system oriented towards businesses that could run apps from different operating systems under one unified framework.
The article says it would have been picked up in code review, and I agree. But it just seems odd that it wasn't changed right there. Why not just write the loop so that it keeps looping as long as the divisor is below some threshold like 10 ms? You also want to minimise the estimation error, which is easier to do if you divide by a slightly larger number: consider a loop that takes between 1 and 2 ms to finish; your estimate will be either x or 2x.
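A sketch of that fix (names and the iteration-doubling detail are mine): keep timing longer and longer runs until the measured interval clears the threshold, so the divisor can never be zero and the relative error is bounded by 1/min_ticks.

```python
# Sketch of the suggested fix: time increasingly long runs until the
# measured interval is at least min_ticks ticks, so the divisor is never
# zero and the relative estimation error is at most 1/min_ticks.
def calibrate_robust(read_tick, spin, min_ticks=10):
    count = 1000
    while True:
        start = read_tick()
        spin(count)
        elapsed = read_tick() - start
        if elapsed >= min_ticks:
            return count // elapsed   # loop iterations per tick
        count *= 2                    # too fast to measure: run longer
```

On a faster CPU this just runs a few more (still short) measurement passes instead of dividing by a tiny or zero elapsed time.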
e.g. for linux: https://www.kernel.org/doc/Documentation/timers/timers-howto...
ATOMIC CONTEXT: You must use the delay family of functions. These functions use the jiffie estimation of clock speed and will busy wait for enough loop cycles to achieve the desired delay
It finds things like "The daily check for updates leaves a few logfiles, and after 30 years there are enough logfiles that the disk is full".
Normally you need to fake or mock all time-related APIs.
Yes, I'm imagining you'd need to be in a virtualized/emulated environment of some sort.
We once got a couple of high end motherboards for AMD processors, back in the Win98 era, and I tried to install Win95 on one of them. The box said that Win98 was required, but it couldn't hurt to try, right? Maybe there would not be drivers for some of the peripherals, but all I needed was the CD-ROM and hard disk to work.
Install is going fine until it is time for it to reboot. That failed. It didn't even get to the BIOS initialization screen. It appeared to just be dead.
Figuring we just got unlucky and got a defective motherboard or processor, I tried installing on the other one so I could get on with my work.
That one died too.
Eventually I found something about this on the motherboard maker's support site. The problem was with the device probing during install.
What I'm about to say is not made up. As unbelievable as this might sound to people who grew up with more modern PCs, at one time they really did work as I'm about to describe.
The early PC buses had no built-in way for the host to identify what cards were plugged into the expansion slots. Typically, an expansion card would have a set of jumpers or DIP switches that could choose between several sets of possible addresses at which the card's registers would appear in I/O space.
The user was expected to keep track of the settings of all cards they installed, and adjust the jumpers appropriately to avoid conflicts, and record the settings in a config file that the drivers would read to find out where their card was.
Later buses, such as EISA and later PCI, provided ways for the host to find out what is there and how it is configured. But operating systems still needed to support the old bus, and they wanted to make this as user friendly as possible.
So systems like Win95 would have a device probe during install that would try to identify what is on your old bus. They would do this by very carefully probing the I/O address space.
For example, suppose you know that a particular network card if present has to be at one of 8 addresses, and you know that after power on or reset that certain bits will be set in its status register and certain bits will be clear. You can read those 8 possible addresses, looking for the right bit pattern. If you don't find it, that particular network card is not present. If you do find it, you can do more tests to confirm it.
Some of those other tests might involve writing to the device registers, and seeing if it responds the way that network card should.
This is obviously risky. What if it is not that network card, but rather a disk controller card that just happens to have a register that after reset has the same bit pattern you expect in the network card status register? The thing you write then to verify it is the network card might be the "FORMAT DISK" command to that disk controller.
And so you had to be very careful with these probes. They had to be done in a safe order. You'd need to probe for that disk controller before you probed for that network card.
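In miniature, the scheme looks something like this (every device name, port number, and bit pattern here is invented for illustration; real probe tables were far larger and the confirmation steps far more involved):

```python
# Toy sketch of ISA device probing: check each candidate I/O address for the
# reset-time bit pattern a card is known to have, probing the devices that
# are dangerous to confuse (like disk controllers) before the ones whose
# write-based tests could do damage if misdirected.
RESET_SIGNATURES = {
    "disk_controller": (0x1F7, 0b0101_0000, 0b1111_0000),  # (port, expected, mask)
    "network_card":    (0x300, 0b0000_0001, 0b0000_0011),
}
PROBE_ORDER = ["disk_controller", "network_card"]  # disk first, per the comment

def probe(read_port):
    found = []
    for name in PROBE_ORDER:
        port, expected, mask = RESET_SIGNATURES[name]
        if read_port(port) & mask == expected:
            found.append(name)
    return found
```

The point of the ordering is exactly the scenario described above: once you know a disk controller is at 0x1F7, you never run the network card's write-based confirmation test against that address.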
Those new motherboards contained peripherals that Win95 did not know about, and so the Win95 probe procedure did not know how to avoid doing bad things to them.
One of those peripherals was the built in interface for flashing the BIOS EEPROM. The Win95 device probe ended up overwriting the BIOS.
While attempting to upgrade the BIOS, something went wrong. Most likely there was a bad sector in the boot floppy I used. The result was an unbootable machine. Solution? Swap in a working EEPROM chip from a compatible motherboard. Boot to a floppy disk that has a BIOS imaging utility and image file. Hot-swap the bad EEPROM chip back in. Re-flash the BIOS. Or, if you had money, you could sometimes purchase a pre-flashed replacement EEPROM chip.
I don't miss those days, but am fortunate to have experienced them. It forced us to learn more about how a computer really works.
Disagree. Where I've worked (Oculus/Facebook and EA) we would never allow such assumptions in code reviews, regardless of how unlikely the failure may be. You never allow div/0 unless it's mathematically provable to be impossible. I'm sure other orgs have the same code review policy, and static analysis these days would also catch it.
Computers became more powerful and more diverse, we added abstractions, we abolished assumptions.
And still I'm pretty sure that even in Oculus (to pick up your example, I know nothing about that), there are bound to be a great deal of assumptions in the code that cease to be valid with later versions of the products.
Today we have the benefit of hindsight, we know how fast processors have become. In the Win3.1 era, no one sane would have predicted this. Even Moore's Law applied to transistor counts, not processor speeds.
What you should ask is: what other assumptions are you implicitly making that you are not currently aware of?
That's a bold claim!
We went from 4-8MHz 286 chips to 20-50MHz 486 chips in the decade leading up win3.1's first release. By the time we were approaching windows 95, pentiums were up to 133MHz.
Those chips already had really fast branch instructions.
So you're already staring down the barrel of calibration taking 15 milliseconds. It's a reasonably obvious step to consider LOOP being a cycle faster than adding and branching, which takes you all the way down to 7 milliseconds.
So taking that all together, x86 clock speeds have doubled 3-4 times in the last dozen years. A chip could come out tomorrow that takes 15 or even 7 milliseconds on the calibration loop. Your code breaks if it hits 2.
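The arithmetic behind that claim, spelled out: start from the 15 ms figure and halve it once per clock-speed doubling.

```python
# Halve a 15 ms calibration time once per clock-speed doubling until it
# crosses the 2 ms breaking point.
ms = 15.0
doublings = 0
while ms >= 2.0:
    ms /= 2
    doublings += 1
print(doublings, ms)   # three doublings lands at 1.875 ms, past the limit
```

So the margin is only about three doublings, i.e. well within the trend the commenter describes.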
I think someone sane could have predicted the problem.
Computers today are literally 1000x better than PCs 30 years ago. 1000x (even more) faster, 1000x more ram, not to mention storage and other capabilities
I was only forced to switch to Windows XP when I upgraded to a Pentium M (Dothan) - besides Safe Mode, I could find no way to run Windows 98 on it.
I would gladly return to Windows 98 now if my hardware and software supported it.
Not if you would like to browse the web. HTTPS - sorry. But maybe you should give ReactOS a shot with the classic theme: https://reactos.org/ (WinNT era I think)
It also took so little RAM and HDD space that the whole thing would fit in a humble corner of my RAM today. I don't really get why modern software needs so much more.
Decided to try XP on it, and I was blown away at just how much better it performed. Explorer windows opened up instantly and the whole system just ran smoother.
That's certainly a take.
If you wondered about this, 10000h appears to mean "100,000 hexadecimal". I assume it was intended to say 100000h.