
> Even the Amiga.

To me, the biggest tragedy was when Commodore refused to allow Sun to package the A3000 as a low-end Unix workstation.

It could have changed the world.

I regard the C128 as a very flawed project - even more of a kludge than the Apple IIgs (a 16-bit computer built around an 8-bit one from the previous decade, kind of a 16-bit Apple /// with a //c inside and two completely different ways to output pixels, plus a full professional synthesizer attached somewhere in there - fun machine, but don't look too close).

They could have used the effort that went into the 128 to beef up the VIC-II with a TED-like palette and have a brilliant last C-64 computer to show.




    Apple IIgs (a 16-bit computer built around an 
    8-bit one from the previous decade, kind of a 
    16-bit Apple /// with a //c inside and two 
    completely different ways to output pixels, 
    plus a full professional synthesizer attached 
    somewhere in there - fun machine, but don't 
    look too close).
I feel like, despite its weird nature (which you accurately describe), it still would have been a really legit competitor had they simply given it a faster 65C816.

It simply didn't have the CPU horsepower to really do the things it was otherwise capable of... the Mac-like GUIs, the new graphical modes in games, etc.

When you use a IIGS with one of the accelerator cards (Transwarp, etc.) that gives it a faster 65C816 (7 MHz, 10 MHz, etc.), you really get a sense of what the IIGS could have been. Because at that point it was a solid Atari ST / Amiga peer (very roughly speaking, lots of feature differences).

It's not entirely clear to me why it didn't have a faster 65C816 to begin with. I know there was a lot of competition inside Apple between the Apple II and Mac divisions, and I suspect the IIGS was hobbled so that it didn't eclipse the entry-level Macs. However, I'm also not sure if higher-clocked 65C816 chips were even available at the time the IIgs was designed and released, so maybe it wasn't an option.


> I know there was a lot of competition inside Apple between the Apple II and Mac divisions, and I suspect the IIGS was hobbled so that it didn't eclipse the entry-level Macs.

That's exactly why. Apple didn't want the GS cutting into Mac sales.


That's certainly been the suspicion over the years, and IMO it seems extremely likely.

But... have you ever seen direct confirmation of that from folks who worked inside Apple at the time? Curiously, I've never seen much inside info materialize from the Apple II side of the business.


> They could have used the effort that went into the 128 to beef up the VIC-II with a TED-like palette and have a brilliant last C-64 computer to show.

The Commodore 65 was going to be that, kind of. It really is a shame that it never saw the light of day. At least now we have the MEGA65 for some alternate history retrocomputing.


I think the C65 is another overrated story. It wouldn't have hit the market before 1992, which was then i486/25MHz territory, the same year that Windows 3.1 and Photoshop 2.5 were introduced.

(On the other side of the spectrum, it was also the year that the cheap famiclone home computers came about, which totally failed to make an impression on US or western European markets, despite their capability to run NES games out of the box. Well, there were probably legal issues involved as well, but there wasn't even an interested gray market for them.)


Well yes, I'm not arguing that it would have been a commercial success (nor did Commodore think so, apparently), just that it looked like it would have been a fantastic computer in and of itself and a worthy swansong for the Commodore 8-bit line.


How much of Sun wanting to use the A3000 is an exaggeration? I've read this before and never understood it. Sun was more than capable of producing their own 68K workstations. They had done so since the early '80s.

680x0 Unix machines were pretty long in the tooth by the time the Amiga 3000 was available. Sun themselves had moved on to SPARC. NeXT was about to give up on hardware. SGI would move on to MIPS CPUs... etc.


I'm guessing it could have been a risk-management strategy.

If the 680x0 product is a failure, they've saved the cost of designing a custom machine and stocking inventory.

If it's a wild success, they have a partner which can scale up on demand. I suspect even in the Amiga's mid-life, Commodore was shipping an order of magnitude more machines than Sun was.


I suppose it could've been a lower-cost alternative to the Sun 3/80 ("Sun 3x", their 68030 arch), launched in 1989?

Also, Sun had launched a 386 system (Sun 386i) a year or two before that, so they were already on Intel. I think if they wanted to go the lower cost "consumer" route, they could've found a random PC clone vendor to help them scale up faster than Commodore.


> To me, the biggest tragedy was when Commodore refused to allow Sun to package the A3000 as a low-end Unix workstation.

This is another bit of Amiga lore that is a little spun. There was no space in the market for a 68030 Unix machine in 1992 though. Even SPARC's days were numbered. Not two years later we were all running Linux on Pentiums, and we never looked back.


I wasn't aware of this at the time, but they had AT&T Unix on them:

https://en.wikipedia.org/wiki/Amiga_3000UX

$3400 in 1990 is pretty high, but I guess cheap for a "workstation." The average business would probably just suffer on a PC with DOS and Lotus 123 or something.


> The average business would probably just suffer on a PC with DOS and Lotus 123 or something.

386 PCs were going for $3500+ in 1990.


The 386 was five years old by that point, though you could spend as much as you wanted, of course.

But a 386 and graphics are not needed to do what I mentioned.


Prices didn't fall that quickly. 386s were premium until the 486 came along, and $3500 is still a drop from the introductory pricing, where they went for $5000. The 386 was an export-controlled supercomputer and the market reflected that early on. 286s were the mid-grade, with the 8086 still available for budget PCs.


I basically said this exact same thing in a previous post. Why would Sun need Commodore? They had their own 68K systems and had moved on to SPARC anyway.


> even more of a kludge than the Apple IIgs (a 16-bit computer built around an 8-bit one from the previous decade)

Well, on the other hand, it did give us the 65816, which the SNES was built on top of.

On the other other hand, though, it probably would have been better if the SNES had gone with m68k.


The 65C816 is another device that I think gets too much play. It's a relatively straightforward extension of an 8-bit idea to 16 bits, with a few (but just a few) more registers and some address-stretching trickery to get more than 64k of (fairly) easily addressable RAM.

Which is to say, the 65C816 did pretty much exactly what the 8086 had done, including a bunch of the same mistakes[1]. Which would have been fine if these were head-to-head competitors. But the 8086 shipped six years earlier!

[1] Though not quite 1:1. Intel had a 16-bit data bus from the start, but the WDC segmentation model was cleaner, etc...
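
To make the comparison concrete, here's a rough sketch in C (my own illustration, not taken from either datasheet) of how the two schemes form a linear address:

    #include <stdint.h>
    /* 8086 real mode: 16-byte segment granularity, so segments overlap
       heavily and the usable result is only 20 bits. */
    static uint32_t x86_linear(uint16_t seg, uint16_t off) {
        return (((uint32_t)seg << 4) + off) & 0xFFFFF;
    }
    /* 65C816: an 8-bit bank byte simply supplies address bits 16-23,
       giving flat, non-overlapping 64k banks in a 24-bit space. */
    static uint32_t w65816_linear(uint8_t bank, uint16_t addr) {
        return ((uint32_t)bank << 16) | addr;
    }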


Actually, the '816 segmentation model is worse than the 8086. For one thing, code can't overflow from one 64k bank to another. Once the PC goes past $FFFF, it rolls over to $0000 without changing the bank.
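
In emulator terms, sequential fetch does roughly this (a sketch, names made up):

    #include <stdint.h>
    /* The program counter is only 16 bits wide; the program bank register
       (PBR) is untouched by sequential execution, so fetch wraps from
       $FFFF back to $0000 within the current 64k bank. */
    static uint8_t fetch(uint8_t pbr, uint16_t *pc, const uint8_t *mem) {
        uint8_t op = mem[((uint32_t)pbr << 16) | *pc];
        *pc = (uint16_t)(*pc + 1);   /* wraps; the bank byte never changes */
        return op;
    }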

The only new registers are the code bank and data bank registers, and I think the code bank register can't even be directly set. You have to do a 24-bit JMP or JSR, and that automatically changes the bank.

The A/X/Y registers all get extended to 16 bits, but A can be addressed as 'A' and 'B', two 8-bit registers. A mode change is necessary for this.

It's a particularly hard CPU to generate code for. There's a commercial C compiler that's free for private use, but most people use assembler.

Pointers are a pain in the butt... You have to use a direct page memory location to fake a 24-bit pointer. On the 8086, you could at least use the ES register to point anywhere without a DP access.
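
Roughly what the '816's "long indirect" addressing does under the hood (my simplified sketch; it ignores direct page wrapping details):

    #include <stdint.h>
    /* LDA [$dp],Y style access: the full 24-bit pointer has to sit in
       three bytes of direct page RAM; no register pair can hold it.
       dp_addr is the resolved direct page location (D + offset, bank 0). */
    static uint32_t long_indirect_y(uint32_t dp_addr, uint16_t y,
                                    const uint8_t *mem) {
        uint32_t ptr = mem[dp_addr]
                     | ((uint32_t)mem[dp_addr + 1] << 8)
                     | ((uint32_t)mem[dp_addr + 2] << 16);
        return (ptr + y) & 0xFFFFFF;   /* 24-bit effective address */
    }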


> Once the PC goes past $FFFF, it rolls over to $0000 without changing the bank.

I had forgotten about that! Yeah, that's horrendous.

> The only new registers are the code bank and data bank registers

There's the relocatable DP register, which you could use as a frame pointer (I've written code that does this). It's still quite awkward though.

> I think the code bank register can't even be directly set. You have to do a 24-bit JMP or JSR, and that automatically changes the bank.

To be fair that's how x86 works too, right? You have to do a far jump or far call to switch CS. (Also you could push addresses on the stack and do a far return on both 65816 and x86.)

> It's a particularly hard CPU to generate code for. There's a commercial, free to use for private use C compiler, but most people use assembler.

This is absolutely true. 6502 derivatives are almost uniquely hostile to C-like languages—witness all the failed attempts at porting LLVM to them—and the 65816 is no exception. I'm pretty sure that in an alternate world where 6502-based microcomputers had become dominant, the classic 6502 ISA would have quickly become relegated to some sort of legacy emulation mode, and the "normal" ISA would be nothing like it, unlike the relatively modest extensions that the 286 and 386 made to the 8086 ISA.


> but the WDC segmentation model was cleaner, etc...

Eh, I don't think this is generally true. Intel's decision to have the segments overlap so much was definitely very short-sighted, but the 8086 has a big advantage in having multiple segment registers with the ability to override which is used via an instruction prefix. This makes copying data between two pages straightforward on the 8086 whereas you need to constantly reload the databank register on the 65C816.
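
For what it's worth, a rough C model of why the 8086 side is painless: a copy loop keeps its source in DS and its destination in ES and never reloads a segment register (sketch of real-mode behavior only):

    #include <stdint.h>
    /* Equivalent of a REP MOVSB-style loop: read DS:SI, write ES:DI.
       Neither segment register needs reloading mid-copy; a plain
       load/store loop on the '816 would be juggling the single data
       bank register instead. */
    static void copy_between_segments(uint16_t ds, uint16_t si,
                                      uint16_t es, uint16_t di,
                                      uint16_t count, uint8_t *mem) {
        while (count--) {
            mem[(((uint32_t)es << 4) + di++) & 0xFFFFF] =
                mem[(((uint32_t)ds << 4) + si++) & 0xFFFFF];
        }
    }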

I think as a pure 16-bit design the 8086 is actually pretty good. It's not at all a forward-looking design though so it wasn't well suited to being the basis of the default PC architecture for decades to come.


> This makes copying data between two pages straightforward on the 8086 whereas you need to constantly reload the databank register on the 65C816.

Don't the MVN and MVP block transfer instructions let you specify separate data banks for the source and the destination?

(I agree that the 8086 was a superior design to the 65816 in most other ways.)


You're right. If all you want to do is literally copy bytes without any other processing then those instructions will handle it.
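
For reference, the per-iteration behavior of MVN is roughly this (my paraphrase from memory of the WDC docs, so treat it as a sketch):

    #include <stdint.h>
    /* One MVN iteration: source index in X, destination index in Y,
       remaining byte count minus one in the 16-bit accumulator C; the
       source and destination banks come from the instruction's operand
       bytes. The copy repeats until C underflows to $FFFF (MVP is the
       descending variant). */
    static void mvn_step(uint8_t src_bank, uint8_t dst_bank,
                         uint16_t *x, uint16_t *y, uint16_t *c,
                         uint8_t *mem) {
        mem[((uint32_t)dst_bank << 16) | *y] =
            mem[((uint32_t)src_bank << 16) | *x];
        (*x)++; (*y)++; (*c)--;
    }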


The 65816 was as much a dead end as the 8086. Unfortunately, Apple saw that and discontinued the GS. IBM, unfortunately, doubled down with the 286, then 386 and here we are.

Notably, IBM had a 68000 desktop computer that ran Xenix, the IBM CS 9000.

https://archive.org/details/byte-magazine-1984-02/page/n279/...


> The 65816 was as much a dead end as the 8086.

The 8086 was a... what now?


It's a horrible architecture. Every iteration on top of it made it even more hideous. It's an unnatural thing that should have never been. Every time an x86 powers up, it cries in despair and begs to be killed.


Your complaint is the designer’s lament: regret that fast iterations will usually beat cohesive development, the struggle to accept that the imperfect delivered today wins over the perfect delivered tomorrow. A good engineer chooses their battles, fights for the right compromises, and grudgingly accepts that the evolutionary remnants in all our engineering artefacts are the price we pay for iterative path dependency. For example, Intel didn’t design the x64 ISA.

  Practice beats perfection
  The engineer’s game we play
  Make do with what we've got
  And make it work today
If you watch Jim Keller talk about microprocessor design, he seems to say that designs converge due to real world constraints and factors. Human designers are imperfect, which Jim seems to be very good at acknowledging. Every now and then we get to refactor, and remove some limiting kludge. But the number of kludges increases with complexity, and kludges are the result of compromises forced by externalities, so the nirvana of a kludge-free world can only be reached in engineer’s fairy tales. Disclaimer: I was an engineer type, but turned dark by the desire for the fruits of capitalism. (Edited)


> unfortunately, doubled down with the 286, then 386 and here we are.

It may be a horrible architecture but the fact that we are here is a pretty good testament to it not being a dead end.


> Every time an x86 powers up, it cries in despair and begs to be killed.

Intel/AMD could make their new CPUs start in protected mode, even long mode - and nowadays, likely nobody other than BIOS developers would ever notice. Why don’t they? I guess it would be a fair amount of work with no clear benefit.

One thing they could do: make legacy features (real mode, 16-bit protected mode, V86 mode, etc.) an optional feature for which you have to pay a premium. They could just have a mode which disables them, and fuse that mode on in the factory. With Microsoft’s cooperation, Windows 11 could be made to run on such a CPU without much work. Likely few would demand the optional feature, and it would soon disappear.


> One thing they could do: make legacy features (real mode, 16-bit protected mode, V86 mode, etc.) an optional feature for which you have to pay a premium. They could just have a mode which disables them, and fuse that mode on in the factory. With Microsoft’s cooperation, Windows 11 could be made to run on such a CPU without much work.

Customers who need them could just emulate them. Almost everyone already does anyway (DOSBox, etc.)

Though, honestly, I highly suspect the die space spent to support those features is pretty small, and power dissipation concerns mean that it's quite possible there aren't much better uses for that silicon area.


It inevitably has a cost though, even if at design time rather than at runtime.

Consider a code base filled with numerous legacy features which almost nobody ever uses. Their presence in the code and documentation means it takes longer for people to understand. Adding new features takes more time, because you have to consider how they interact with the legacy ones, and the risk of introducing regressions or security vulnerabilities through those obscure interactions. You need tests for all these legacy features, which makes testing take more time and be more expensive, and people working on newer features break the legacy tests and have to deal with that. Technical debt has a real cost - and why would that be any less true for a CPU defined in Verilog/VHDL than for a programming language?

And a feature flag is a good way to get rid of legacy features. First add the flag but leave it on. Then start turning it off by default but let people turn it back on for free. Then start charging people money to have it on. Then remove it entirely, and all the legacy features it gates. It works for software; I can’t see why it couldn’t work for hardware too.


Why charge money for leaving the legacy features enabled (other than "because they can" / cyberpunk dystopia)?

If they really are such a security risk, surely enterprise customers would rather pay for the ability to turn them off.


Realistically, they probably won’t charge more money just for this feature. Just limit it to higher priced SKUs.

There must be dies being thrown away right now because they work perfectly except for one of these legacy features. If they had SKUs with legacy features fused off, suddenly those dies might become usable.

Having lower-end SKUs with some features fused off allows you to use dies in which those features are broken. But it means you have to fuse them off even on the dies in which they work. Buying a low-end CPU and finding that some of them randomly have higher-end features would just be too confusing for everyone.


Most of a modern x86 die is cache, same as for every other architecture. It's unlikely that there are many dies that have some defect in the legacy microcode and nowhere else.

Fundamentally changing the instruction format might save a little bit of space, but then it wouldn't be compatible with x86-64 anymore either.


Are all these legacy features pure microcode? I'm sure they mostly are exactly that, but I would not be surprised if some of the hardwired circuitry's behaviour is influenced by mode bits for legacy features. It seems conceivable that a hardwired circuit could have a flaw which caused it to malfunction when it gets some "legacy mode enabled" signal but otherwise work correctly.

You may well be right that a die damaged in exactly that way is rare enough that the additional yield from being able to use such dies is rather marginal. But I think it would be hard for anyone to know for sure without non-public info.

> Fundamentally changing the instruction format might save a little bit of space, but then it wouldn't be compatible with x86-64 anymore either.

I wasn't talking about that. A CPU which didn't support real mode, 16-bit segments, etc, would still have the same instruction format, so instruction decode would be largely unchanged.

I wonder if we'll ever see a CPU which supports executing both x86 and ARM code, through separate instruction decoders but with a shared execution engine. Of course, there are lots of differences between the two architectures beyond just the instruction decoding, yet in some ways those differences are shrinking (when you look at features like Apple Silicon TSO mode, and Windows 11 ARM64EC architecture).


Meh. Just hold your nose when the processor boots until it gets into protected or long mode and you're good to go. If ARM or RISC-V eventually take over the x86 market, it will be due to business model reasons and not the superiority of the ISA.


Reasoning for the take-over aside, they're still superior ISAs.

If it is RISC-V, I will celebrate it.


It's funny to see this sentiment wax and wane with Intel's market fortunes. In the early 2010s on this site, with Intel riding high, you'd see lots of comments talking about how x86 was just months away from killing ARM for good on account of a better technical design. Now it's the reverse: people are treating x86 as a horrible dinosaur of an architecture that's on its last legs.

Personally, I think x86 is pretty ugly, but both compilers and chip designers have done an admirable job taming it, so it's not that bad. Process/fabrication concerns tend to dominate nowadays.



