Why does the Commodore C128 perform poorly when running CP/M? (retrocomputing.stackexchange.com)
100 points by xeeeeeeeeeeenu on Dec 9, 2022 | 85 comments



Didn't it use the 80 column screen? It was my experience that the C128's 80 column screen was an absolute dog. I was pretty handy with 6502 assembly language but trying to get anything onto that screen was just disappointingly slow. Maybe I was doing it wrong, but it seemed like you had to put every last screen address into that display controller, and that controller itself required you to select register numbers before putting the values in.

So just to write one character onto the actual screen required six writes into the controller chip. Address-low-register-number into D600, screen address LSB into D601. Then address-high-register-number into D600, then screen address MSB into D601. Now you're ready to write 'A' into the actual screen buffer! So, write-data-register-number into D600 and 0x41 into D601!

Do you want to put a 'B' on the screen right after that 'A'? Repeat the above - six more writes to get the 'B' showing! Why they didn't embed an auto-incrementing screen address counter in the controller is beyond reckoning. At least that way you'd only have needed to set the address once, and could then leave the select register (D600) parked on the write-and-increment register and merrily stuff your display string into D601 with a relatively tight loop.

Presumably the Z80 had to put up with the same annoying 80 column controller. That can't have helped.
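
For concreteness, here's roughly what that per-character sequence looks like as a C sketch rather than 6502 assembly (illustrative only, not anyone's actual code from the time; it assumes something like a cc65-targeted C128 with the I/O bank visible, and uses the 8563 register numbers from the Programmer's Reference Guide: R18/R19 for the update address, R31 for data):

    /* Sketch of the 8563 access pattern described above.
       VDC_ADDR ($D600) selects a register and exposes the status bit;
       VDC_DATA ($D601) reads or writes the selected register. */
    #include <stdint.h>

    #define VDC_ADDR (*(volatile uint8_t *)0xD600u)
    #define VDC_DATA (*(volatile uint8_t *)0xD601u)

    static void vdc_write_reg(uint8_t reg, uint8_t val)
    {
        VDC_ADDR = reg;                 /* select the register...            */
        while (!(VDC_ADDR & 0x80)) { }  /* ...wait for the ready bit (bit 7) */
        VDC_DATA = val;                 /* ...then write its new value       */
    }

    /* One character at an arbitrary spot in 8563 display RAM: two register
       writes for the address, one for the data -- six I/O writes in total. */
    static void vdc_put_char(uint16_t dest, uint8_t screen_code)
    {
        vdc_write_reg(18, dest >> 8);    /* update address, high byte */
        vdc_write_reg(19, dest & 0xFF);  /* update address, low byte  */
        vdc_write_reg(31, screen_code);  /* data register             */
    }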


The hardware designer of the C128 (Bil Herd) had lots of choice things to say about the designer of that 80 column (MOS 8563) chip. The chip and machine were designed in parallel but the 8563 was originally intended for a different application and its designer had little consideration for making it work well in the system it ended up in. Though to be honest the weaknesses of the 8563 seem like they would make it pretty poor in any design.


> Why they didn't embed an auto-incrementing screen address counter in the controller is beyond reckoning.

They did.

Page 361 of the Commodore 128 Programmer's Reference Guide (halfway down the page):

> At this point, the 8563 hardware takes over. It automatically increments the address contained in the update register pair (R18, R19). You as the programmer do not increment the update register pair's contents; the 8563 chip does this for you automatically. You can say that the update register pair is auto-incrementing.

You can get the ProgRef here: https://www.pagetable.com/docs/Commodore%20128%20Programmer'...


In the PDF at the link you provided, I found what you're referring to at page 311 (in-document numbering), which is page 321 per my PDF viewer.

So, yes, it did auto increment. I think what happened is that after 35 years, I just recalled my initial aggravation with addressing the 8563's RAM through that keyhole of R18, R19, R31.

But... looking around in the ol' PRG, and finding some of Mr Herd's "choice comments" out on the web, I'd still say the 8563 was a dog. Case in point: having to poll D600 for bit 7 to become set just to make sure the 8563 had cottoned on to your request to set a register.

That bottleneck to get to the 8563's RAM was not fun. The bottleneck to get into the 8563's own registers was even goofier. Why not just map them to D600-D61F?
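
To close the loop on the auto-increment: once R18/R19 are set, you can park the select register on R31 and keep feeding D601, which is exactly what was wished for above. Continuing the hedged C sketch from a few comments up (same assumptions, reusing the hypothetical vdc_write_reg/VDC_ADDR/VDC_DATA definitions):

    /* String write using the 8563's auto-incrementing update address:
       set the address once, select R31 once, then stream the bytes. */
    static void vdc_put_string(uint16_t dest, const uint8_t *s, uint8_t len)
    {
        uint8_t i;

        vdc_write_reg(18, dest >> 8);       /* update address, high byte */
        vdc_write_reg(19, dest & 0xFF);     /* update address, low byte  */
        VDC_ADDR = 31;                      /* park the select on the data register */
        for (i = 0; i < len; ++i) {
            while (!(VDC_ADDR & 0x80)) { }  /* ready? */
            VDC_DATA = s[i];                /* address bumps itself after each write */
        }
    }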


I have no idea how that ended up as 361.


A good reply to the "we should write more efficient code today, like we used to do in the Good Ol' Days" sentiment. Inefficient code is not a new phenomenon!


No, but it's also possible to make things harder on yourself than you need to. CP/M had a "terminal" abstraction for some good reason, but it also means that every byte of output needs to go through functions to "handle" it[1], and then Commodore had to shoehorn that onto a video controller chip designed for a very different bus[2], making everything slower still.

The C128 was... it was just a bad computer, I'm sorry. This was the swan song for the 8 bit machine; the market was moving very rapidly onward in 1985 when it was released (within 3 years or so it would be routine to see new personal devices quoted with multiple megabytes of fully addressable RAM and 256 KB+ color framebuffers).

Commodore threw together what they could to keep the C64 band together for one more tour, but... let's be honest, their last album sucked.

[1] vs. software for contemporary PC's that had largely abandoned the equivalent MS-DOS abstractions and was blasting bytes straight into the MDA or CGA text buffers. You could technically do that on the 128 also, but that would mean losing compatibility with a decade of CP/M software designed for other terminals, which was the only reason for having the Z80 there in the first place.

[2] The 6502 had an essentially synchronous bus where the data resulting from the address lines latched at the rising edge of the clock was present when the clock fell. The Z80 had a 3-cycle sequence that needed to be mediated by glue logic to fit what the VIC-II was expecting, and it didn't fit within a single cycle. That's the reason for the Z80 having to run at "half speed", essentially.


> Commodore threw together what they could to keep the C64 band together for one more tour, but... let's be honest, their last album sucked.

Exactly! Bil Herd said it was just supposed to get them through a year until the Amiga picked up. Nobody expected the 8-bit line to last so long with a multitasking multimedia powerhouse out there for under $1000.


Amstrad managed to keep selling 8-bit CP/M machines as (semi) dedicated word processors well into the 90s: https://en.wikipedia.org/wiki/Amstrad_PCW


Commodore kept selling the 64C until it entered Chapter 11 in 1994, actually. It was the 128 that was an embarrassing dead end, but the 64 had a huge base of software and there was still a market for a dirt cheap TV-connected home device for all the purposes for which it was ever useful.

It's hard to remember how fast things changed in the early era of computing. The killer app that finally put the knife in the 8 bit home computer was... the web browser.


Indeed, we went from ZX81 in 1981 to Amiga in 1985.

Every new project was obsolete or nearly so at launch.


I had a PCW for ages. My dad got it so I could write up school reports. The screen had pretty crisp text. Main drawback was that it used a 3" floppy, which was hard to find. Also, only after purchase did I realize it was not IBM compatible.


"Commodore threw together what they could to keep the C64 band together for one more tour, but... let's be honest, their last album sucked."

I've sometimes seen people say "Oh, modern computing sucks so much, if only the Commodore line had continued on and been the basis for computing instead of the IBM line"... and I have no idea what they mean. Even the Amiga may have been neat for the time, but trying to project that line forward all the way into 2022... well, whatever would have gotten us here probably wouldn't be all that different from what we have now.

Commodore did some interesting things. They lost for reasons, though. Even the Amiga.


> Even the Amiga.

To me, the biggest tragedy was when Commodore refused to allow Sun to package the A3000 as a low-end Unix workstation.

It could have changed the world.

I regard the C128 as a very flawed project - even more of a kludge than the Apple IIgs (a 16-bit computer built around an 8-bit one from the previous decade, kind of a 16-bit Apple /// with a //c inside and two completely different ways to output pixels, plus a full professional synthesizer attached somewhere in there - fun machine, but don't look too close).

They could have used the effort that went into the 128 to beef up the VIC-II with a TED-like palette and have a brilliant last C-64 computer to show.


    Apple IIgs (a 16-bit computer built around an 
    8-bit one from the previous decade, kind of a 
    16-bit Apple /// with a //c inside and two 
    completely different ways to output pixels, 
    plus a full professional synthesizer attached 
    somewhere in there - fun machine, but don't 
    look too close).
I feel like, despite its weird nature (which you accurately describe), it still would have been a really legit competitor had they simply given it a faster 65C816.

It simply didn't have the CPU horsepower to really do the things it was otherwise capable of... the Mac-like GUIs, the new graphical modes in games, etc.

When you use a IIGS with one of the accelerator cards (Transwarp, etc.) that gives it a faster 65C816 (7 MHz, 10 MHz, etc.), you really get a sense of what the IIGS could have been, because at that point it was a solid Atari ST / Amiga peer (very roughly speaking, lots of feature differences).

It's not entirely clear to me why it didn't have a faster 65C816 to begin with. I know there was a lot of competition inside Apple between the Apple II and Mac divisions, and I suspect the IIGS was hobbled so that it didn't eclipse the entry-level Macs. However, I'm also not sure if higher-clocked 65C816 chips were even available at the time the IIgs was designed and released, so maybe it wasn't an option.


> I know there was a lot of competition inside Apple between the Apple II and Mac divisions, and I suspect the IIGS was hobbled so that it didn't eclipse the entry-level Macs.

That's exactly why. Apple didn't want the GS cutting into Mac sales.


That's certainly been the suspicion over the years, and IMO it seems extremely likely.

But... have you ever seen direct confirmation of that from folks who worked inside Apple at the time? Curiously, I've never seen much inside info materialize from the Apple II side of the business.


> They could have used the effort that went into the 128 to beef up the VIC-II with a TED-like palette and have a brilliant last C-64 computer to show.

The Commodore 65 was going to be that, kind of. It really is a shame that it never saw the light of day. At least now we have the MEGA65 for some alternate history retrocomputing.


I think the C65 is another overrated story. It wouldn't have hit the market before 1992, which was by then i486/25 MHz territory, the same year that Windows 3.1 and Photoshop 2.5 were introduced.

(On the other side of the spectrum, it was also the year that the cheap famiclone home computers came about, which totally failed to make an impression on US or western European markets, despite their ability to run NES games out of the box. Well, there were probably legal issues involved as well, but there wasn't even an interested gray market for them.)


Well yes, I'm not arguing that it would have been a commercial success (nor did Commodore think so, apparently), just that it looked like it would have been a fantastic computer in and of itself and a worthy swansong for the Commodore 8-bit line.


How much of the story about Sun wanting to use the A3000 is exaggeration? I've read this before and never understood it. Sun was more than capable of producing their own 68K workstations. They had done so since the early '80s.

680x0 Unix machines were pretty long-in-the-tooth by the time the Amiga 3000 was available. Sun themselves had moved on to Sparc. NeXT was about to give up on hardware. SGI would move on to MIPS CPUs... etc


I'm guessing it could have been a risk-management strategy.

If the 680x0 product is a failure, they've saved the cost of designing a custom machine and stocking inventory.

If it's a wild success, they have a partner which can scale up on demand. I suspect even in the Amiga's mid-life, Commodore was shipping an order of magnitude more machines than Sun was.


I suppose it could've been a lower-cost alternative to the Sun 3/80 ("Sun 3x", their 68030 arch), launched in 1989?

Also, Sun had launched a 386 system (Sun 386i) a year or two before that, so they were already on Intel. I think if they wanted to go the lower cost "consumer" route, they could've found a random PC clone vendor to help them scale up faster than Commodore.


> To me, the biggest tragedy was when Commodore refused to allow Sun to package the A3000 as a low-end Unix workstation.

This is another bit of Amiga lore that is a little spun. There was no space in the market for a 68030 Unix machine in 1992 though. Even SPARC's days were numbered. Not two years later we were all running Linux on Pentiums, and we never looked back.


I wasn't aware of this at the time, but they had AT&T Unix on them:

https://en.wikipedia.org/wiki/Amiga_3000UX

$3400 in 1990 is pretty high, but I guess cheap for a "workstation." The average business would probably just suffer on a PC with DOS and Lotus 123 or something.


> The average business would probably just suffer on a PC with DOS and Lotus 123 or something.

386 PCs were going for $3500+ in 1990.


The 386 was five years old by that point, though you could spend as much as you wanted, of course.

But a 386 and graphics are not needed to do what I mentioned.


Prices didn't fall that quickly. 386s were premium until the 486 came along, and $3500 was still a drop from the introductory pricing, where they went for $5000. The 386 was an export-controlled supercomputer and the market reflected that early on. 286s were the mid-grade option, with the 8086 still available for budget PCs.


I basically said this exact same thing in a previous post. Why would Sun need Commodore? They had their own 68K systems and had moved on to Sparc anyway.


> even more of a kludge than the Apple IIgs (a 16-bit computer built around an 8-bit one from the previous decade)

Well, on the other hand, it did give us the 65816, which the SNES was built on top of.

On the other other hand, though, it probably would have been better if the SNES had gone with m68k.


The 65C816 is another device that I think gets too much play. It's a relatively straightforward extension of an 8 bit idea to 16 bits, with a few (but just a few) more registers and some address stretching trickery to get more than 64k of (fairly) easily addressable RAM.

Which is to say, the 65C816 did pretty much exactly what the 8086 had done, including a bunch of the same mistakes[1]. Which would have been fine if these were head-to-head competitors. But the 8086 shipped six years earlier!

[1] Though not quite 1:1. Intel had a 16 bit data bus from the start, but the WDC segmentation model was cleaner, etc...


Actually, the '816 segmentation model is worse than the 8086. For one thing, code can't overflow from one 64k bank to another. Once the PC goes past $FFFF, it rolls over to $0000 without changing the bank.

The only new registers are the code bank and data bank registers, and I think the code bank register can't even be directly set. You have to do a 24-bit JMP or JSR, and that automatically changes the bank.

The A/X/Y registers all get extended to 16 bits, but A can be addressed as 'A' and 'B', two 8-bit registers. A mode change is necessary for this.

It's a particularly hard CPU to generate code for. There's a commercial C compiler that's free for private use, but most people use assembler.

Pointers are a pain in the butt... You have to use a direct page memory location to fake a 24-bit pointer. On the 8086, you could at least use the ES register to point anywhere without a DP access.


> Once the PC goes past $FFFF, it rolls over to $0000 without changing the bank.

I had forgotten about that! Yeah, that's horrendous.

> The only new registers are the code bank and data bank registers

There's the relocatable DP register, which you could use as a frame pointer (I've written code that does this). It's still quite awkward though.

> I think the code bank register can't even be directly set. You have to do a 24-bit JMP or JSR, and that automatically changes the bank.

To be fair that's how x86 works too, right? You have to do a far jump or far call to switch CS. (Also you could push addresses on the stack and do a far return on both 65816 and x86.)

> It's a particularly hard CPU to generate code for. There's a commercial, free to use for private use C compiler, but most people use assembler.

This is absolutely true. 6502 derivatives are almost uniquely hostile to C-like languages—witness all the failed attempts at porting LLVM to them—and the 65816 is no exception. I'm pretty sure that in an alternate world where 6502-based microcomputers had become dominant, the classic 6502 ISA would have quickly become relegated to some sort of legacy emulation mode, and the "normal" ISA would be nothing like it, unlike the relatively modest extensions that the 286 and 386 made to the 8086 ISA.


> but the WDC segmentation model was cleaner, etc...

Eh, I don't think this is generally true. Intel's decision to have the segments overlap so much was definitely very short-sighted, but the 8086 has a big advantage in having multiple segment registers with the ability to override which one is used via an instruction prefix. This makes copying data between two banks straightforward on the 8086, whereas you need to constantly reload the data bank register on the 65C816.

I think as a pure 16-bit design the 8086 is actually pretty good. It's not at all a forward-looking design though so it wasn't well suited to being the basis of the default PC architecture for decades to come.


> This makes copying data between two banks straightforward on the 8086, whereas you need to constantly reload the data bank register on the 65C816.

Don't the MVN and MVP block transfer instructions let you specify separate data banks for the source and the destination?

(I agree that the 8086 was a superior design to the 65816 in most other ways.)


You're right. If all you want to do is literally copy bytes without any other processing then those instructions will handle it.


The 65816 was as much a dead end as the 8086. Unfortunately, Apple saw that and discontinued the GS. IBM, unfortunately, doubled down with the 286, then 386 and here we are.

Notably, IBM had a 68000 desktop computer that ran Xenix, the IBM CS 9000.

https://archive.org/details/byte-magazine-1984-02/page/n279/...


> The 65816 was as much a dead end as the 8086.

The 8086 was a... what now?


It's a horrible architecture. Every iteration on top of it made it even more hideous. It's an unnatural thing that should never have been. Every time an x86 powers up, it cries in despair and begs to be killed.


Your complaint is the designer’s lament, regrets because fast iterations will usually beat a cohesive development, the struggle to accept that the imperfect delivered today wins versus the perfect delivered tomorrow. A good engineer chooses their battles, fights for the right compromises, and grudgingly accepts that the evolutionary remnants in all our engineering artefacts are the price we pay for the results of iterative path dependency. For example, Intel didn’t design the x64 ISA.

  Practice beats perfection
  The engineer’s game we play
  Make do with what we've got
  And make it work today
If you watch Jim Keller talk about microprocessor design, he seems to say that designs converge due to real world constraints and factors. Human designers are imperfect, which Jim seems to be very good at acknowledging. Every now and then we get to refactor, and remove some limiting kludge. But the number of kludges increases with complexity, and kludges are the result of compromises forced by externalities, so the nirvana of a kludge-free world can only be reached in engineer’s fairy tales. Disclaimer: I was an engineer type, but turned dark by the desire for the fruits of capitalism. (Edited)


> unfortunately, doubled down with the 286, then 386 and here we are.

It may be a horrible architecture but the fact that we are here is a pretty good testament to it not being a dead end.


> Every time an x86 powers up, it cries in despair and begs to be killed.

Intel/AMD could make their new CPUs start in protected mode, even long mode - and nowadays, likely nobody other than BIOS developers would ever notice. Why don’t they? I guess it would be a fair amount of work with no clear benefit.

One thing they could do: make legacy features (real mode, 16-bit protected mode, V86 mode, etc.) an optional extra for which you have to pay a premium. They could just have a mode which disables them, and fuse that mode on in the factory. With Microsoft's cooperation, Windows 11 could be made to run on such a CPU without much work. Likely few would demand the optional extra, and it would soon disappear.


> One thing they could do: make legacy features (real mode, 16-bit protected mode, V86 mode, etc.) an optional extra for which you have to pay a premium. They could just have a mode which disables them, and fuse that mode on in the factory. With Microsoft's cooperation, Windows 11 could be made to run on such a CPU without much work.

Customers who need them could just emulate them. Almost everyone already does anyway (DOSBox, etc.)

Though, honestly, I highly suspect the die space spent to support those features is pretty small, and power dissipation concerns mean that it's quite possible there aren't much better uses for that silicon area.


It inevitably has a cost though, even if at design time rather than runtime.

Consider a code base filled with numerous legacy features which almost nobody ever uses. Their presence in the code and documentation means it takes longer for people to understand. Adding new features takes more time, because you have to consider how they interact with the legacy ones, and the risk of introducing regressions or security vulnerabilities through those obscure interactions. You need tests for all these legacy features, which makes testing take more time and be more expensive, and people working on newer features break the legacy tests and have to deal with that. Technical debt has a real cost - and why would that be any less true for a CPU defined in Verilog/VHDL than for a programming language?

And a feature flag is a good way to get rid of legacy features. First add the flag but leave it on. Then start turning it off by default but let people turn it back on for free. Then start charging people money to have it on. Then remove it entirely, along with all the legacy features it gates. It works for software; I can't see why it couldn't work for hardware too.


Why charge money for leaving the legacy features enabled (other than "because they can" / cyberpunk dystopia)?

If they really are such a security risk, surely enterprise customers would rather pay for the ability to turn them off.


Realistically, they probably won’t charge more money just for this feature. Just limit it to higher priced SKUs.

There must be dies being thrown away right now because they work perfectly except for one of these legacy features. If they had SKUs with legacy features fused off, suddenly those dies might become usable.

Having lower-end SKUs with some features fused off allows you to use dies in which those features are broken. But, it means you have to fuse them off even on the dies in which they work. Buying a low end CPU and finding that some of them randomly have higher-end features would just be too confusing for everyone.


Most of a modern x86 die is cache, same as for every other architecture. It's unlikely that there are many dies that have some defect in the legacy microcode and nowhere else.

Fundamentally changing the instruction format might save a little bit of space, but then it wouldn't be compatible with x86-64 anymore either.


Are all these legacy features pure microcode? I'm sure they mostly are exactly that, but I would not be surprised if some of the hardwired circuitry's behaviour is influenced by mode bits for legacy features. It seems conceivable that a hardwired circuit could have a flaw which caused it to malfunction when it gets some "legacy mode enabled" signal but otherwise work correctly.

You may well be right that a die damaged in exactly that way is rare enough that the additional yield from being able to use such dies is rather marginal. But I think it would be hard for anyone to know for sure without non-public info.

> Fundamentally changing the instruction format might save a little bit of space, but then it wouldn't be compatible with x86-64 anymore either.

I wasn't talking about that. A CPU which didn't support real mode, 16-bit segments, etc, would still have the same instruction format, so instruction decode would be largely unchanged.

I wonder if we'll ever see a CPU which supports executing both x86 and ARM code, through separate instruction decoders but with a shared execution engine. Of course, there are lots of differences between the two architectures beyond just the instruction decoding, yet in some ways those differences are shrinking (when you look at features like Apple Silicon TSO mode, and Windows 11 ARM64EC architecture).


Meh. Just hold your nose when the processor boots until it gets into protected or long mode and you're good to go. If ARM or RISC-V eventually take over the x86 market, it will be due to business model reasons and not the superiority of the ISA.


Reasoning for the take-over aside, they're still superior ISAs.

If it is RISC-V, I will celebrate it.


It's funny to see this sentiment wax and wane with Intel's market fortunes. In the early 2010s on this site, with Intel riding high, you'd see lots of comments talking about how x86 was just months away from killing ARM for good on account of a better technical design. Now it's the reverse: people are treating x86 as a horrible dinosaur of an architecture that's on its last legs.

Personally, I think x86 is pretty ugly, but both compilers and chip designers have done an admirable job taming it, so it's not that bad. Process/fabrication concerns tend to dominate nowadays.


This is why I say the PC was doomed to win. IBM established the PC and then immediately lost control of it because all its components except the BIOS were COTS, so the PC ecosystem became this mishmash of companies, each contributing to and extending the platform as a whole. Any one of them can rise and fall and the ecosystem as a whole would keep on going.

Commodore was mismanaged and died. Apple was mismanaged and nearly died. A proprietary computer ecosystem controlled by a single company is at risk of being run into the ground by incompetent management or sheer bloody economics. That's why the PC and not the Amiga, Old World Mac, or Atari ST became the basis of modern computing.


"immediately lost control of it because all its components except the BIOS were COTS" is also a good description of the Apple II, but Apple was able to mostly keep clones at bay in their key markets. I believe IBM could have as well, but didn't initially care enough (the PC was a small part of its business, but micros were 100% of Apple).

Note that they took out 7 patents on the PC AT (286). When their attempt to take back control via the PS/2 in 1987 failed, they spent the 1990s getting chipset makers and clone makers to license their patents. Had they tried this in the 1980s the industry might have gone in a different direction.


I suspect the real answer is not commodity hardware, but commodity software. As in, it wasn't the PC that "won", it was MS-DOS on PC-compatible hardware.

You can't clone a Mac without their ROMs, and a clean-room implementation would fall quickly behind without a sustained effort. But Microsoft can sell anyone the disks, and your clone can respond to the right interrupt calls, and IBM can't do a thing to stop either of you.


Yes, multiple redundancies prevented complete failure unlike other platforms.

And the IBM stamp of approval legitimized the PC, starting the unstoppable tsunami.


While the Amiga was impressive in the 80's, they failed to continue on that path of technical innovation. The technical lead they had simply vanished as everyone else moved forward just a little bit faster than they were. Thus the inevitable erosion of their space. By the mid 90's there was little compelling about the Amiga to give it an advantage, and thus they had their lunch eaten. I suspect that if we were to project the Amiga forward to today, it would have actually meant us being in a worse place than we are now.

In the same way, if we had waited for Silicon Graphics to provide desktop 3D graphics, we would have been waiting for a LONG time and at a high price.


> While the Amiga was impressive in the 80's, they failed to continue on that path of technical innovation.

The US market was the largest computer market at the time, and Amiga sales in the US were awful. This left Commodore financially incapable of continuing on the path of technical innovation. Even in the few countries where the Amiga could have been considered a success (the UK, Germany and Italy), the vast majority of sales were of the low-end, low-margin A500.


They trashed so many of the opportunities they had, too.


> waited for Silicon Graphics to provide desktop 3D graphics

OTOH, we wouldn't have had to put up with Windows for so long ;-)


Sir is aware that SGI made Windows NT workstations, right?

They were lovely things, too. I wish I'd bought one when PIIIs were being flogged off.

https://wiki.preterhuman.net/SGI_Visual_Workstation


I think people would have sat up and noticed if they took away the real-time aspects of the platform. On paper, there wasn't much about the Amiga that was real-time. Culturally, though, so many programs were real-time. I still miss that instant responsiveness. iOS, Android, macOS, etc.: nothing holds a candle to how it felt.


This. There were plenty of flaws in the Amiga experience (like "please insert disk..." dialogs that simply won't take no for an answer because the programmer didn't check for errors) but that instant feedback is what's missing from so many modern systems. I call it "having the computer's full attention" - and it comes from the fact that (using the stock GUI system) the gadgets which make up the UI are drawn and redrawn in response to input events on the "intuition" task rather than whenever the application's main loop next gets around to it.

That's why Magic User Interface felt "wrong" on the Amiga, despite being a superb toolkit in lots of ways* - drawing was relegated to the application's task, making it feel "Windowsey".

(*I would genuinely like to see a port of MUI to X / Linux.)


Because iOS, Android and MacOS are all based on a 1970s minicomputer operating system designed to communicate with ASR33 teletypes, with layers and layers of bloat added on to it over the years.

That it can support a graphical user interface at all is only due to having a lot of CPU time to waste!


I mean... You're not objectively wrong when looking at it overall. I pretty much never used CP/M mode besides a few times for WordStar.

But 14-year-old me loved his C128 tremendously. 80-column mode felt so futuristic! BBSes looked amazing! And BASIC 7 was a lot more fun than BASIC 2. I felt like I was getting away with something when I learned I could enable 2 MHz mode in subroutines, and if they were fast enough, the screen wouldn't even flicker before I reverted to VIC-capable speeds. Good times.
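
(For anyone curious, the trick being described is bit 0 of the VIC-IIe register at $D030, which BASIC 7 exposes as FAST/SLOW. A rough C-style sketch of the idea, not the poster's code, assuming a C128 target with the I/O area visible and a hypothetical crunch_numbers() workload:)

    /* 2 MHz ("FAST") mode on the C128 is bit 0 of $D030.  While it is set
       the 40-column VIC output is disturbed, so switch back before the
       screen needs to look right again. */
    #define VIC_CLOCK (*(volatile unsigned char *)0xD030u)

    extern void crunch_numbers(void);   /* hypothetical workload */

    void do_work_at_2mhz(void)
    {
        VIC_CLOCK = 0x01;   /* CPU to 2 MHz */
        crunch_numbers();
        VIC_CLOCK = 0x00;   /* back to 1 MHz for clean 40-column output */
    }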


I'll concede ill-positioned, but the 128 was the logical upgrade for 64 owners, not the Amiga. CP/M was always a tack-on to deal with the incompatibility of the CP/M 2.2 cartridge, which is why the Z80 was on the logic board in the first place (Bil Herd has explained this in detail). Your post basically says the 128 sucks because its CP/M implementation does (which I won't dispute), but the 128's native mode is a lot stronger than CP/M and you got nearly perfect 64 compatibility to boot. That wasn't "bad" as a follow-on to a stratospherically popular home computer.


As much as I loved my C128D, I only used C128 mode to enter "GO64".


I used 128 mode to learn BASIC. But everything else was in 64 mode.

7 cities of gold and Archon especially.


Almost all 8-bit micros were hacks made to be as cheap as possible. Commodores were no exception.


In 1985, sure. But even then the 64C was the cost-reduced variant, the 128 was the "fancy" 8 bit and almost 2x as expensive.

But note that the 128's Z-80 CP/M processor was effectively a recapitulation of the Microsoft (yes, that Microsoft) Softcard product that they shipped in 1980 for the Apple II. The Softcard was a huge hit and an absolute revelation to the small handful of users[1] in its world. In point of fact it was the single most popular CP/M computer sold anywhere.

And yet it was afflicted by most of the same problems the 128 was. Everything is about timing. Commodore happened to be five (yikes) years late to market with the idea.

[1] This was just a tiny bit before my time. And in any case I grew up in an Atari 800 family and wouldn't have been caught dead with a II plus.


> This was the swan song for the 8 bit machine

This is a common statement. It's not true. As pointed out down the comments, it was pretty much only the last 8-bit in the US market.

https://news.ycombinator.com/item?id=33925871

The rest of the world carried on making 8-bits for another decade or more.

New 8-bit machines and ranges of machines launched in or after 1985:

• Sinclair Spectrum +2

• Sinclair Spectrum +3

• Acorn BBC Master range

• MGT SAM Coupé

• Amstrad CPC Plus range

• Amstrad PCW series

• MSX 2

• MSX 2+

• MSX Turbo-R

That's not counting 21st century reboots, of which there are hundreds.

Notably, after the collapse of Communism in Europe, the West found out about legions of enhanced ZX Spectrum clones and the like from the Warsaw Pact countries. Amazing machines with built-in floppy drives, hard disk controllers, stereo sound, improved graphics, lots more RAM (megabytes of it) and so on.

http://rk.nvg.ntnu.no/sinclair/computers/clones/russian.htm

<- pay attention to the dates column.

Some are still being made.

https://www.hackster.io/news/css-electronics-zx-nucleon-is-a...

Note this is real hardware, not FPGA emulation or anything, although some of those are amazing too.

https://github.com/UzixLS/zx-sizif-512

I know that the USA thinks that the C128 was the last new 8-bit machine, but in fact it was only about the halfway point of the evolution of 8-bit home computers, and some of the more interesting machines were yet to come. Entire families of native CP/M computers that sold in the millions of units in multiple countries. Capable home games computers with amazing graphics. Powerful educational/laboratory machines that gave rise to the ARM chip.

But they weren't American, and so everyone in the USA doesn't even know that most of them existed.


You forgot the Amstrad GX4000 ;)

That and the SAM Coupé were kind of ridiculous, looking back, being so massively outdated at the time. In the UK, both were obviously worse for games than even the Atari ST, which was already starting to get shown up in that respect by the Amiga. (And the writing was on the wall for Amiga games too, thanks to increasingly impressive bitplane-unfriendly ALU-heavy PC games, and the SNES. And Windows 3.1 - which was actually turning out to be useful, no matter how janky - was attacking every other platform on the productivity side.)

The Amstrad and Sinclair markets kept going in the UK until 1991 or so, likewise the C64, as the machines were everywhere and so there was still a market for the games.

8-bit Acorn games dried up in 1990 (after probably last being a proper going concern in 1988 at best), but they kept producing the hardware until the early 90s because of demand for it from schools. (I have a BBC Master as part of my collection of crappy old UK 8-bit computers from the period. Manufacture date: week 47, 1989!)


:-D Yeah, OK, but that was part of the CPC+ series. It was just a CPC+ without a keyboard, IIRC.

The SAM Coupé was a great little machine -- I deeply regret selling mine -- but it took way too long to get to market.

I think in the context of the Communist Bloc's improved Spectrum clones, it makes more sense. It was of course easier for companies and people over there (well, over here, for me now) because they didn't need to worry about copyright law and could just copy the ROM.

I think there is an argument for high-end 8-bitters, and it's why the Spectrum Next exists. Because they are much easier to understand, and build, and modify, and copy, and program.

The industry rushed to 16-bit, because of the new features, but it took programmability out of owners' and users' hands.

Modern OSes are vastly, impossibly complex and people accustomed to them do not understand the virtue, indeed vital importance, of simplicity and comprehensibility.

This is why, for me, Python fails as a teaching tool for beginners. It embeds all the complexity of file handling, editors, OOP, C output formats and so on. BASIC eliminated that. Its conceptual model was not of files and things but a much simpler and more accessible one:

You type some stuff.

No number in front? Computer does it now. Number in front? Keep it for later.

That is inspired brilliance, but the Unix weenies want pro tools for all, even for kids.


Or perhaps an indication that the C128 was ahead of its time! ;)

Interesting that some of the slowdowns are directly attributable, even back then, to greater abstraction in the newer CP/M BDOS.


Making an efficient program these days means using an efficient stack, working with threads to avoid blocked code, fighting the other 100+ processes on the CPU for resources, and possibly doing the same on a database server, possibly on the other side of the world. In many ways it's a lot harder than the early 80's, when you had no networks, didn't have to worry about security, and generally had total control of the local OS/CPU.


You forgot the low level parts - properly using the CPU caches. Cache thrashing can make even the fastest most amazingest CPU look like a Celeron from the 90's.


So I had a C128D (replacement for a C64, which was a replacement for a VIC-20). I had also used a few CP/M machines at that point. I know why they put the Z80 in the 128, but I always felt it was just a stupid thing and that the PM/marketing team should have been told NO. Dropping it would have reduced cost, shipping date, etc. This is a prime example of marketing making the decision without having a really compelling business case for doing so. They could have said 100% C64 compatible with an * that said "sorry, no CP/M cartridge support" and it would have been just fine.


The whole slow I/O library was a problem in a lot of places, and spawned the nansi.sys etc. utilities on DOS too. That is because the DOS console output services were generally a lot slower than what could be accomplished with a bit of hand-spun assembly running out of RAM. So instead of going through ansi.sys + BIOS int 10, it was possible to hook the output int (21,2 IIRC) and dump right to the text framebuffer, tracking little more than the current color and cursor position.

It was so easy I wrote my own at one point (before I discovered/got nansi.sys? I can't remember the sequence). It was, IIRC, faster than the later nansi.sys replacements but didn't work 100% of the time either, probably because I only implemented the common subset needed to run a couple of programs I wanted faster text output from. I remember for a while switching back and forth between mine and a replacement driver depending on the application, before I got a 486+VLB which was so fast that I could no longer tell the difference between the two.
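
The direct-to-framebuffer part really is only a few lines. A minimal sketch of the idea (not the poster's driver, and assuming a 16-bit real-mode DOS compiler with Borland-style far pointers and color text mode):

    /* Color text mode lives at B800:0000 (mono adapters use B000:0000):
       80x25 cells, each a 16-bit word of character + attribute.  Writing
       here directly is what made these drivers so much faster than going
       through ansi.sys and the BIOS. */
    static unsigned short far *textbuf = (unsigned short far *)0xB8000000L;

    static void put_at(int row, int col, char ch, unsigned char attr)
    {
        textbuf[row * 80 + col] =
            ((unsigned short)attr << 8) | (unsigned char)ch;
    }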


Most full-screen programs wrote directly to video memory anyway. It's mainly the ones ported from UNIX that needed an ANSI driver.

Now in the glorious future of 2022, every console app runs on an emulated DEC VTxxx, and if you press "Esc" it can't be sure if that's a real key or part of some control sequence unless it waits a second to find out if more bytes arrive.


Ah, the interrupt handler doing a bunch of unnecessary things... The Z80 has IM 2: you set up a page-aligned 256-byte vector table, point the I register at it, enter Interrupt Mode 2, and then you have your own custom interrupt handler instead of the one provided by the ROM.


From top answer:

> each character is compared to every possible control character before it is decided that it isn't one

That's crazy; ASCII was carefully designed to avoid this kind of shit.
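
To spell out the point: ASCII packs the control characters into one contiguous block (0x00-0x1F, plus DEL at 0x7F), so classifying a byte is a single range check rather than a chain of comparisons. A quick illustrative contrast in C (my example, not code from the CP/M BIOS):

    /* What ASCII's layout makes possible: one range check. */
    int is_ascii_control(unsigned char c)
    {
        return c < 0x20 || c == 0x7F;   /* 0x00-0x1F, plus DEL */
    }

    /* Conceptually what the answer describes instead: test each control
       code one at a time before concluding the byte is printable. */
    int is_control_one_by_one(unsigned char c)
    {
        return c == 0x07 || c == 0x08 || c == 0x09 || c == 0x0A ||
               c == 0x0C || c == 0x0D || c == 0x1B /* ...and so on */;
    }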


Was CP/M actually widely used on the C64? I always thought it was marketing to sell machines but never was actually delivered.


People don't believe me when I tell them Amiga ended up in the worst possible hands.

This is something to point them to.


If I remember right, the team that made the Amiga was the Atari 400/800 team and the team that made the Atari ST was the Commodore 64/128 team.


Sort of. Jay Miner (Amiga chipset) did work on the chipset in the Atari 8-bits. And Shiraz Shivji (Atari ST designer) did work on the C64. But beyond that, the two teams only partially overlapped; it was some of the same people, not all of them.

There's not a lot in common between the ST and the C64 other than "rock bottom price" and "multiplex the memory bus between the CPU and the VDP" (VIC-II/Shifter). The C64 leaned heavily on accelerating video via a custom chip (a bit like the Amiga and Atari 8-bits), but the ST's video was really just "spew a bitmap out of RAM."

Don't get me wrong, I loved my ST.


My response: the C128 ran CP/M? Really?



