Related but completely different: All MS-DOS programs built with Turbo/Borland Pascal stopped working on ~200+ MHz Pentium CPUs because of a sleep calibration loop that ended up going too fast and causing a divide by zero in the runtime library:
One early videogame that got this right was Alley Cat... in '83!
The x86 version, coded in assembly language, works at the right speed regardless of CPU speed. MobyGames has this to say [1]:
> "Alley Cat was one of the few games of the 1980s that was programmed with full attention to different PC speeds. It's an early, old game--yet it runs perfectly on any machine. The reason it runs on any computer today is, upon loading, the first thing it performs is a mathematical routine to determine the speed of your processor, and as of 2003 we've yet to build an Intel PC too fast to play it."
I had a lot of fun with this game back in the day, when I got my first PC, an XT clone (with Hercules graphics card and "amber" monochrome CRT!).
I had a lot of fun with Alley Cat, Sopwith Camel, Digger (https://digger.org/), GW-BASIC and Turbo Pascal 3.0 or so when my computer finally got upgraded from a ZX81 (which I got in 1984 as a hand-me-down from a relative) to an Amstrad PC1512 in 1988 (thanks, supportive parents).

The British truly ruled the low-end computer market in Europe back then.
Lots of love for GW-BASIC here! It was my first "true" programming language. I started with C64 BASIC but it was too limited to do actually interesting things, at least without resorting to the cheat of PEEK and POKE.
None of that comes into play in MS-DOS, which never supported multiple processors. CPU 0 on Alder Lake is also a P-core, so you're not going to be throttled by MS-DOS not knowing how to thread-schedule itself. I'm also not certain but would not be surprised if power saving isn't implemented or functional in any of the firmware-provided compatibility SMMs that are running when you boot into such an old OS.
Modern CPUs absolutely do dynamically adjust their speed based on power availability and cooling effectiveness. This is active even before you boot any OS. Now whether or not those CPUs support DOS at all I'm not sure.
Another 68000 system time related bug but completely different problem (former Apollo/Domain sysadmin here, which I became at one company because I was already doing their Unix sysadmin job. We had to periodically reset the system clocks on those):
> "The bug apparently results from the high 32 bits of the clock data type being declared as a 31 bit value instead of 32 bit in the pascal include files. The reason for this is lost to history, but early pascal compilers may have had problems with 32 bit unsigned numbers, possibly because the early Motorola 68000 processor chips didn't have 32 bit unsigned multiply operations."
> possibly because the early Motorola 68000 processor chips didn't have 32 bit unsigned multiply operations.
This doesn't really make sense as an explanation. For both signed and unsigned multiply, the 68000 has a 16x16 -> 32 multiply. This is indeed kind of inconvenient if you need to multiply 32-bit numbers, but 31-bit numbers are not any easier. If anything, the unsigned case is easier to reason about than the signed one.
It looks like maybe Apollo/Domain systems use an epoch of 1929-10-14 instead of 1970-01-01. Maybe that's the birthday of one of the early developers (or their spouse).
There's nothing particularly magical about the Unix epoch.
> There's nothing particularly magical about the Unix epoch.
When you have to debug the same event reported in five different timezones, and then see some ugly 13-digit numbers and immediately know that's milliseconds since a certain globally-coherent moment… That moment definitely starts to feel magical.
I was too young to know/understand the hows and whys, but I do remember having a couple of programs that required disabling the turbo button on the front of the computer for them to run. I always thought it silly to have a button to intentionally slow down the computer, and yet it absolutely was required.
To this day I find it fascinating that the runtime error number (200) so perfectly coincides with the approximate CPU speed, in MHz, at which the error starts to happen.
A lot of their game logic is timed in "game cycles" instead of something proper like counting seconds. ScummVM runs a lot of old Sierra games; it applies some patches to game scripts to fix these issues and also throttles some stuff to make it run at a proper speed on modern hardware.
Because it was expensive on old hardware. Companies tried to squeeze a lot into a game, especially when there was little space to work with. Some games even reused fonts, images and sounds that were already part of the operating system.
Computer software was incapable of the complexity of modern languages and practices because Stack Overflow didn't exist. Until Windows came along with a time API based on datetime, most games just used CPU clock cycles like you identified. CPU speed is the ultimate time API.
The only use I found for the "turbo" button on some PCs in the 90s was to slow the machine down for the few older games that didn't have code to deal with the various speeds.
That's its intended use. It's more of an anti-turbo, but you can run in compat or fast modes. And some of them had a sweet MHz display that was just set with jumpers, so you could put in some bogus number and wow your friends.
I knew a computer reseller through my father, and on their machines the turbo button was just a glowing button on the front that a user could turn on or off without affecting the CPU speed. In other words, it became a marketing gimmick, and I suspect more often than not it was not functional in any way except to give the user the idea that their 386SX was in TURBO mode.
More accurately, it was real on earlier hardware, when people had things like games which were unusable on processors faster than IBM's original 4.77 MHz 8088. As a kid I had a hand-me-down system where that was definitely necessary for a handful of older games.
That meant almost every case had that button and people got used to it. That didn’t mean it was always connected - we’re talking a simple cable connecting to a couple of pins on the motherboard so this was trivial - and I knew a couple of people who disconnected it to keep kids or certain hopeless adults from clicking it, forgetting, and then complaining that the computer was slow.
By the late 386/486 era that’d become pretty common and you stopped seeing it as much.
I remember using the turbo switch when playing Tetris. At startup the game would try to adjust its speed to the speed of the hardware it was running on, so turning off turbo (basically lowering the 8086's frequency from 10 to 4.77 MHz) resulted in the game being so slow that it was possible to easily reach scores over 20k.
The "real" bug here is Motorola's. Having instructions that fail silently (vs. trapping, as DIVU actually does if the divisor is zero!) is just outrageous.
For clarity, because the article takes too long to get there: DIVU has a 32 bit dividend, but a 16 bit divisor and a 16 bit result. So if you try to divide e.g. 0x20000 by 2, the result should be 0x10000, which doesn't fit in the output register. So... the CPU sets the overflow flag but otherwise does nothing!
I'm not quite old enough to have been writing assembly for the 68k, but I've heard of this issue before. This was surely not the only DIVU footgun that made it into shipping software.
The rationale probably is that you can easily check for division by zero before doing the actual division, whereas checking for an overflowing division requires actually performing the division. The DIVU instruction thus lets you check for overflow without the overhead of raising an exception.
It certainly is a bit of a footgun, because one normally doesn’t expect overflow on integer division. On the other hand, the other basic arithmetic operations all require overflow checks as well, or equivalent operand value analysis. And this is assembly we’re talking about, where you’re supposed to know what you’re doing.
The developer should check for overflow before using the value. If this assembly is compiler generated then it's probably classifiable as a compiler bug.
I would disagree. At the assembly level, as long as the instruction documents what it does in the oddball cases, it's up to you the programmer to use it correctly. Some CISC CPUs trap on errors like divide-by-zero; RISC instruction sets usually make the choice to define some specific behaviour. (For example on Arm, division by zero will always return a 0 result, and will not trap.) Taking an exception is an expensive business, and if the programmer wasn't expecting it then the result will be no better (program crashes, instead of program hangs).
It's not really failing silently, it's telling you through the overflow flag. To me this seems logically consistent with how one would expect overflow to behave, as it does with other instructions like ADD.
That said, I think this instruction would be safer and more useful if it still set at least the remainder result bits (which would always be valid). Then this case would not have required checking, nor would some other common cases, like "execute every odd iteration" kinds of code.
Does ADD on the 68k leave the output unchanged or truncate? If division truncated the result then the modulus would still be usable and this bug would not have happened. Not performing the operation at all on overflow, yet not raising a fault, is odd behavior to say the least.
That may be a better choice, but it wouldn't have prevented the bug from the article, would it? Because at the time of release, the date didn't overflow. And after their goofy "subtract 5,000 days" fix, it didn't overflow either. The only change would be the user experiencing a crash vs a hang.
SCI was Sierra On-Line's third language for writing adventure games. According to the ScummVM wiki, the name stood for both "Script Code Interpreter" and "Sierra's Creative Interpreter".
It was preceded by ADL (Adventure Development Language, which did simple text adventures with graphics) and AGI (Adventure Game Interpreter, which did the same sort of thing as SCI, but lower resolution.)
http://www.pcmicro.com/elebbs/faq/rte200.html