I started working professionally in the late 1980s and it seemed like the big competition at the time was between x86 and 680x0 variants.
From the Mac right up to multiprocessing minis (the Bull DPX/2, for example, could be had with four 68030s), plus a variety of high-end workstation vendors (Sun, Apollo), the 680x0 seemed like a much higher-end processor than anything Intel-based.
Then Sun dropped it in favour of Sparc.
It took a while, but the 386 and its successors gradually ate everything in that space, thanks to Compaq leading the way. Even Sun had Intel-based machines.
Sure, the x86 family took a commanding lead over that time... But over in the corner where nobody was looking, in 1987 Acorn shipped the Archimedes with this RISC-based processor they called the 'Acorn RISC Machine', or ARM... and it's looking increasingly like that whole x86/680x0 fight was a sideshow.
That is a very weird interpretation of what happened.
Acorn had horrible software, the hardware did not evolve as fast as x86, and after a few years of hype it was pretty much dead.
What eventually saved it was basically an accident: due to lack of manpower the design was extremely simple, which turned out to translate to power efficiency.
Edit: the original designer put a very positive spin on things by saying that leadership gave them two things they needed to succeed: no money and no people.
I always enjoyed assembly programming the best on Motorola CPUs. Spent a lot of my youth on the 6809 and later 68020. Still have assembly programming books for both on the bookshelf behind me. Never really did enjoy assembly on the 8086 or 286, very ugly. Did some SPARC assembly programming later and it was nicer than x86 but by then assembly was fading from my life.
I mean, this is how it might have appeared, but internally everybody knew it was over. By about 1985 Sun already knew that Motorola wasn't pushing ahead fast enough; that's why they developed SPARC. They did try to get Motorola to be more aggressive and so on, but Motorola wasn't interested.
Even Acorn knew it in 1984 when they evaluated processors.
Motorola was going nowhere quickly in the mid to late 80s.
Apple as usual was late to move and their internal RISC project went nowhere fast.
The surprising part of the story is Intel managed to keep up. They were able to do so because they had absurd volume and could pay far larger teams.
I remember the Apollo reps moaning that the speed of Motorola's 68k chips wasn't keeping up with the x86. I still have an Apollo motherboard with two 68k chips on it -- a workaround to enable virtual memory.
Yet Domain OS was a distributed, multiuser, multitasking system with BSD and SysV compatibility that ran decently on 1989 hardware, when x86 was running little more than DOS and Windows 3.
But did it do the things people wanted? Why have the complexity of a multiuser system on a single user PC? Why have multitasking when the system can barely run a single task competently?
I think that's mostly wrong though, because as the P6 demonstrated complicated CISC addressing modes can be trivially decomposed and issued to a superscalar RISC core.
What really killed 68k was the thing no one here is qualified to talk about: Motorola simply fell off the cutting edge as a semiconductor manufacturer. The 68k was groundbreaking and way ahead of its time (introduced in 1979!), the 68020 was market leading, the '030 was still very competitive but starting to fall behind the newer RISC designs, leading its target market to switch. The 68040 was late and slow. The 68060 pretty much never shipped at all (it eventually had some success as an embedded device).
It's just that posters here are software people and so we want to talk about ISA all the time as if that's the most important thing. But it's not and never has been. Apple is winning now not because of "ARMness" but because TSMC pulled ahead of Intel on density and power/performance.
ISA certainly isn't the most important factor, but your ISA has to be a good enough baseline. History is littered with ISAs that made choices bad enough to be limiting at the time (VLIW, Itanium) or to handicap future generations (MIPS delay slots).
Arguably x86 and ARM are the "RISCiest CISC" and "CISCiest RISC" architectures, and have succeeded due to ISA pragmatism (and having the flexibility to be pragmatic without breaking compatibility) as much as anything else.
People get caught up in intuitive notions of “complex” or “reduced” when talking about RISC and CISC, so it’s very helpful to have specific ideas about what kinds of problems you’re creating for yourself years down the line when you’re designing an architecture in the 1970s or 1980s. The M68K has instructions that can do these complicated copies from memory to memory, through layers of indirection, and that creates a lot of implementation complexity even though the instruction encoding itself is orthogonal. Meanwhile, the x86 instructions are less orthogonal, but the instructions that operate on memory only operate on one memory location, and the other operands must be in registers. That turned out to be the better tradeoff, long-term, IMO.
Yes, and that was exactly my point (yes, I wrote that SO answer). The memory-indirect addressing modes that were introduced with the 68020 are insane. Most people don't know them, as they didn't exist on the 68000/68010/68008, and most compilers did not implement them because they were slower than composing the same access out of simpler addressing modes.
It is interesting to see what Motorola cut from the instruction set when they defined the ColdFire subset (for those who don't know, the ColdFire family of CPUs used the same instruction set as the 68000 but radically simplified: the indirect addressing modes, the BCD instructions, a lot of RMW instructions, etc. were removed). The first ColdFire after the 68060 (which was limited to 75 MHz) ran at up to 300 MHz.
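To make the point about the 68020's memory-indirect modes concrete, here is a rough C analogue (the struct layout and names are made up for illustration) of what a single memory-indirect access such as MOVE.L ([4,A0],D1.L*4,12),D2 performs: two dependent memory accesses, either of which can fault, folded into one instruction, whereas x86 always splits the same work across at least two instructions.

    #include <stdio.h>

    /* Illustrative only: the work done by one 68020 memory-indirect
       (post-indexed) access, roughly MOVE.L ([4,A0],D1.L*4,12),D2 */
    struct obj {
        int  tag;      /* offset 0 */
        int *table;    /* offset 4 on a 32-bit machine: the "bd = 4" part */
    };

    static int load_indirect(const struct obj *a0, long d1) {
        int *table = a0->table;    /* first memory access: fetch the pointer      */
        return table[d1 + 3];      /* second access: index*4 plus displacement 12 */
    }

    int main(void) {
        int data[8] = {0, 1, 2, 3, 4, 5, 6, 7};
        struct obj o = {0, data};
        printf("%d\n", load_indirect(&o, 1));   /* prints 4 (element 1 + 3) */
        return 0;
    }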
BCD modes only take a few gates to implement; their biggest cost is probably the instruction encoding space they occupy. On a chip as small as the 6502 it makes sense that some users would want to avoid the expensive decimal/binary conversions and do arithmetic in decimal.
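For a rough sense of why it's cheap: decimal mode is essentially nibble-wise binary addition plus a small fix-up. A C sketch (my own illustration, not vendor pseudocode) of what a 6502's SED + ADC effectively computes per packed-BCD byte:

    #include <stdio.h>

    /* Add two packed-BCD bytes: add the nibbles, then correct any nibble
       that exceeds 9. The correction is just a compare and an adjust,
       hence the tiny gate count. */
    static unsigned bcd_add(unsigned a, unsigned b, unsigned *carry_out) {
        unsigned lo = (a & 0x0F) + (b & 0x0F);
        unsigned hi = (a >> 4)   + (b >> 4);
        if (lo > 9) { lo -= 10; hi += 1; }         /* half-carry correction */
        *carry_out = 0;
        if (hi > 9) { hi -= 10; *carry_out = 1; }  /* decimal carry out     */
        return (hi << 4) | lo;
    }

    int main(void) {
        unsigned carry, sum = bcd_add(0x49, 0x38, &carry);
        printf("%02X carry=%u\n", sum, carry);     /* prints 87 carry=0 */
        return 0;
    }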
In modern x86 we have things like the AVX scatter/gather instructions, which implement indirect addressing as well as touching potentially lots of pages in a single instruction.
Of course nowadays we have the transistor budgets to make even complicated instructions work.
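A quick illustration of the gather side (gather is in AVX2; true scatter arrived later with AVX-512): one operation performs eight independent indexed loads, each of which may hit a different page and fault on its own. A small sketch, compiled with -mavx2; the table and indices are made up.

    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        int table[16] = {0, 10, 20, 30, 40, 50, 60, 70,
                         80, 90, 100, 110, 120, 130, 140, 150};

        /* eight independent indices; on real data each could land in a
           different page, and any of the eight loads can fault separately */
        __m256i idx = _mm256_setr_epi32(0, 2, 4, 6, 8, 10, 12, 14);

        __m256i v = _mm256_i32gather_epi32(table, idx, 4);  /* scale 4 = sizeof(int) */

        int out[8];
        _mm256_storeu_si256((__m256i *)out, v);
        for (int i = 0; i < 8; i++) printf("%d ", out[i]);  /* 0 20 40 ... 140 */
        printf("\n");
        return 0;
    }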
Itanium and MIPS were... just fine though. Both architectures have parts that were very competitive along essentially all metrics the market cares about. ia64 failed for compatibility reasons, and because it wasn't "faster enough" than x86_64. No one saw much of a reason to run it, but Intel made them well and they made them fast.
And MIPS failed for the same reason ARM pulled ahead: in the late '90s Intel took a huge (really, huge) lead over the rest of the industry in process, and MIPS failed along with basically every other CPU architecture of the era.
Amusingly, the reason ARM survived this bottleneck is that it was an "embedded" architecture in a market Intel wasn't targeting. But there's absolutely nothing technical that would prevent us from running very performant MIPS Macs or whatever in the modern world.
Itanium failed in the market because it wasn't fast enough running general purpose code (HPC workloads were a different story). As a consequence, its ability to run existing x86 code was poor (both in hardware and software emulation). At the root, Itanium's poor performance was a direct consequence of the design assumption that a sufficiently smart compiler could extract sufficient instruction-level parallelism. While the ISA didn't prohibit later improvements like OoO execution, it always required a better compiler (and more specific workloads) than real users would have.
The performance (or complete lack thereof) of x86 compatibility mode was one problem. But VLIW's reliance on software (i.e. compilers) to do the right thing was another big issue, even for native IA64 code. And a decent-performing Itanium was delayed long enough to arrive during the dot-bomb bust rather than the dot-com boom.
If Itanium had really been the only 64-bit chip available for use in systems from multiple vendors maybe it would have succeeded by dint of an Intel monopoly position. But once x86_64 arrived from AMD and Intel ended up following suit, it was pretty much game over for Itanium.
That's kinda revisionist. In fact ia64 owned the FP benchmark space for almost a decade, taking over from Alpha and only falling behind P6 derivatives once those started getting all the high end process slots (itself because Intel was demolishing other manufacturers in the high-margin datacenter market).
The point was that ia64 was certainly not an "inferior" ISA. It did just fine by any circuit-design measure you want. Like every other ISA, it failed in the market for reasons other than logic design.
No, that's pretty much exactly what happened. Were you there?
Beating everyone else at synthetic benchmarks is about as impressive as blowing away paper targets at the firing range. Real applications (and their operating systems) shoot back.
Itanium had some specific performance strong points. FP was one. Certain types of security code were another. In fact (and I may have the details somewhat wrong) an HP CTO co-founded a security startup, Secure64?, on the back of Itanium's security-code performance.
But logic design is irrelevant beyond a certain point. Intel certainly had the process chops at the time and knew how to design microprocessors given a certain set of parameters. It's just that the parameters were wrong (see, above all, Intel's late step-down from the frequency race on x86 -- driven, I was told at a very high level, by Microsoft's nervousness around multicore).
FP benchmarks are just about the most synthetic benchmarks there are though. They mostly matter to people doing HPC, and those kinds of workloads are often in the realm of supercomputers.
Which I think means everybody is saying basically the same thing, ia64 was overly specialized and had far too small of a market niche to survive.
Besides the points already mentioned concerning the Itanium, there is another one that is important to consider: the Itanium was simply too expensive for many customers/applications (think: "typical" companies who buy PCs so that their employees can use them for office work; private PC users who might be enthusiasts but are not that deep-pocketed; ...).
The '020 was earlier than the 80386, and the '030 was better. But yes, by the time of the 80486 and 68040, Motorola's part was better but quite late, and Motorola never had the impetus to push it to higher clocks, whereas the 80486 got to 100 MHz.
It's certainly not correct to say that the m68060 "pretty much never shipped at all". I have several m68060 systems that would disagree with you. The chip even went through six revisions, with the last two often overclocked to 200% of its official speed. It actually competed well with the Pentium on integer and mixed code, although the Pentium's FPU was faster. Considering the popularity of m68k and x86 at the time, that was pretty darned impressive.
The m68060 is in countless Amiga accelerators, including ones made in the last few years such as Terrible Fire accelerators. Personally, I have a phase5 Cyberstorm MK III and Blizzard 1260. There were Atari '060 accelerators, a Sinclair QL motherboard that has the option of taking an '060, they were a native option in Amiga 4000 Towers, in DraCo computers / video processors, and in VME boards.
I'd love to get one of these to help do more m68k NetBSD pkgsrc package building:
> as the P6 demonstrated complicated CISC addressing modes can be trivially decomposed and issued to a superscalar RISC core.
Doing that required a very large amount of area and transistors in its early days. So much that very smart people thought that the extra area requirements would kill that approach. It still does take a large amount of area, but less and less relative to the available die. Moore's law basically blew past any concerns there.
But it wasn't always obvious that that would be the case.
max instruction size: 486, 12 bytes; 040, 22 bytes. (x86 has since grown to 15)
number of addressing modes: 486, 15; 040, 44
indirect addressing? 486, no; 040, yes
max number of MMU lookups: 486, 4 (but usually 1); 040, 8 (but frequently 2)
So it wasn’t just manufacturing, it was also the difficulty of the task. 680x0 was second only to the VAX in terms of the complexity of its instruction set.
> The x86 addressing modes are a lot simpler than the 680x0.
Anything but simpler, and completely irrational to someone coming from an orthogonal ISA design perspective.
For a start, on x86, the loading of an address is a separate instruction with its own, dedicated opcode.
On m68k (and its spiritual predecessor, the PDP-11), it is «mov» (00ssssss) – the same instruction is used to move data around and to load addresses. Logically, there is no distinction between the two, as an address and a numeric constant are the same thing to the CPU (the execution context defines the semantics of the number loaded into a register), so why bother making an explicit distinction?
Having two separate instructions for loading addresses and moving data around would have made more sense if data and address registers were two distinct register files, which m68k had and x86 did not. And speaking of registers, x86 was completely starved of general-purpose registers anyway, effectively having five of them (the index registers are only semi-general-purpose, so they do not count). Even x86-64 today has only 16 more or less general-purpose registers, which is very poor. The AMD 29k, at the other extreme, could have 256 general-purpose registers, and 32 has been the sweet spot for many ISAs for a long time.
Secondly, there was the explicitly segmented memory model with near and far addresses. Intel unceremoniously threw the programmer under the bus by making them explicitly manage segments and offsets within each segment to compute the actual address, and an address could not cross a 64 kB segment boundary. Memory segments were commonplace and predate x86, yet the complexity of handling them is typically hidden at the supervisor (kernel) level. m68k, on the other hand, had a flat memory space from day one, something that for all practical purposes only took off on x86 with Windows 2000 – almost two decades after m68k got it.
Lastly, comparing max instruction sizes for m68k and x86 is a bit cheeky. m68k instructions are built from fixed-size 16-bit words, which allows the CPU to use a simple lookup table to route the processing flow, and extracting the addressing mode(s) from the opcode word immediately indicates the total instruction length.
x86, by contrast, has variable-length encodings that require a state machine within the CPU to decode them, especially as the x86 ISA grew in size, often stalling the opcode decoder pipeline because the instruction length cannot be determined up front.
Addresses are normally loaded with MOV on x86 too. LEA is intended to compute an address from base/index registers, using the same addressing modes that can be used to access that memory location.
For example, if you had an array of 32 bit integers on the stack, an instruction like "MOV EAX,[ESP+offset+ESI*4]" would load the value of the element indexed by ESI. Change that MOV to LEA, and it would instead give you a pointer to that element that can be passed around to another function. Without LEA, this operation would require two extra additions and one shift instruction.
Microsoft Assembler syntax confused this issue by being designed to make some operations more convenient, "type safe", and familiar to high-level programmers, while clumsier at getting to the raw memory addresses.
That led to people using "LEA reg,MyVar" when it wasn't necessary, simply because it was shorter to type than "MOV reg,OFFSET MyVar" :)
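In C terms the distinction is just loading an element versus taking its address: with the same base+index*4 addressing mode, the first compiles to a MOV that touches memory, the second to an LEA that only does the address arithmetic (and, unlike an ADD/SHL sequence, changes no flags). A small sketch with made-up names:

    #include <stdio.h>

    static int load_elem(const int *arr, long i) {
        return arr[i];          /* a MOV through [base + index*4]: reads memory  */
    }

    static const int *elem_ptr(const int *arr, long i) {
        return &arr[i];         /* an LEA of [base + index*4]: address math only */
    }

    int main(void) {
        int a[4] = {10, 20, 30, 40};
        printf("%d %d\n", load_elem(a, 2), *elem_ptr(a, 2));   /* 30 30 */
        return 0;
    }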
This is it. Nobody could compete with Intel's R&D budget. Selling low volume high margin chips to low volume high margin Unix workstation companies ended up being a far worse profit proposition than flooding the market with crappy but cheap chips in PCs that anybody can afford. It wasn't even close.
This is what Andy Bechtolsheim essentially said when he talked about why Sun had to develop the SPARC chip. Motorola was just too slow. Great initial architecture not iterated on fast enough.
And when Apple started the PowerPC partnership, they deliberately made sure they had two sources (Motorola and IBM) competing on the same architecture so they wouldn't be beholden to Motorola again (otherwise they were also considering the Motorola 88k)
> I think that's mostly wrong though, because as the P6 demonstrated complicated CISC addressing modes can be trivially decomposed and issued to a superscalar RISC core.
It's not the sequencing that's the issue, it's the exception correctness. Rolling back correctly and describing to the OS exactly which of the many memory accesses actually faulted in an indirect access is very complex. X86 doesn't have indirect addressing modes and never had to deal with that.
> What really killed 68k was the thing no one here is qualified to talk about: Motorola simply fell off the cutting edge as a semiconductor manufacturer.
Great answer but the root cause is Motorola failed to find enough customers to drive volume. Increased volume => increased profitability => money to invest in solving scaling problems. Intel otoh gained the customers and was able to scale.
Part of why they failed to get volume is that every single workstation manufacturer dropped them, because they were clearly doing very badly.
Sun was a big customer pushing them to make aggressive new chips, and they were so disappointed with Motorola that they developed SPARC and released their first SPARC machine at the same time as the 68030, and most customers preferred the SPARC unless they had software compatibility issues.
The Apple Mac did get a fair amount of volume too. And there were many other uses as well.
Sure it wasn't the wealth that Intel had, but they were hardly struggling. They clearly sold enough chips to pay a design team.
As @philwelch suggests, it wasn't really a foundry problem.
The Tier 1 Unix system companies (many of whom had been Moto 68K customers) already had their own RISC designs and a lot of the second and certainly third tier companies were getting acquired or going out of business. So by the time there was really a solid 88K product--at least for the server market--almost no one was lined up to design systems around the chip.
Data General did for a while. Forget who else did. But it just never got critical mass.
I mean, at some point Apple made it work with their in-house ARM-based CPUs - first for phones, then for laptops and desktops. But that was many years later, with a truly massive budget commitment to help the foundries, with enough volume that in-house designs made sense, and with control of the software environment on top. Not impossible, but not exactly circumstances available to many companies.
Sun and DEC and IBM made CPUs for their own computers too - but not to compete for basic PCs. Motorola made a lot of phones at one point but not to the degree that they could lock in top of the line fabs.
It's not that the 68000 family was necessarily impossible to use in lower priced PCs by the way. Philips built the 68070 for use in CD-I and other consumer machines. And Apple and Amiga made it work for a while with more mainstream parts.
During this entire period there was an explosion in microprocessor designs. Let's say most of them were better than the contemporary x86 generation. That would still not be enough to compete with the IBM PC compatible in rough price/performance and in the amount and ease of procuring "personal computer" types of software. A superior technical solution is NOT sufficient to win.
I remember a technical presentation by Intel on the microarchitecture of their latest generation. And thinking "Wow, they threw everything and the kitchen sink in there - each piece to gain ~1% performance." And the thing is, it was enough. That chip was arguably a mess - and it was sufficient to keep their market going.
This is true only because today the borders between RISC and CISC do not exist anymore; modern designs decode everything into uops anyway. But in the 1980s and early '90s this was not true. CISC was indeed more difficult to scale.
Yes, now you can do that. But that 'trivial' thing you talk about isn't that trivial, and it also requires quite a few gates. And this was an era when you didn't just have lots of gates left over.
> '030 was still very competitive
Questionable. A simple ARM low power chip beat it. And the RISC designs destroyed it.
The ARM2, even in 1986, basically doubled the 68020's performance, not to mention the bigger RISC designs.
Lots of companies were taping out RISC designs by 1985, and basically all of them beat it.
Aren't they? I recently read some comparison between the latest Ryzen 7840U and Z1 vs the Apple M2, and the Ryzens were significantly faster at the same TDP.
And that is without getting into the high-end segment, where Apple doesn't have anything that can compete with the Threadrippers and Epycs.
Is this true in anything besides cinebench? It’s always cinebench I see cited and that’s possibly the most useless efficiency figure available (bulk FPU tasks with no front end load is stuff that should be done on gpu, largely, CFD and other HPC workloads aside).
How does it do in JVM efficiency (IDE) or GCC (chrome compile) efficiency? How does it do in openFOAM efficiency?
Not on the same processes they aren't, no. Zen 3 is on 7nm. Apple is shipping chips on 5nm, and has reportedly bought up the entire fab production schedule of 3nm for the next year for a to-be-announced product.
Again, everything comes down to process. ISA isn't important.
This talking point is outdated. AMD has been shipping 5nm CPUs for a year now.
The 3nm parts have just barely dropped in phones due to process delays and poor yields (N3B will miss targets and only N3E will meet the original N3 targets a year or 18 months late). So they are a non-factor in the pc/laptop market.
Happy to see efficiency numbers (something other than cinebench please) but there is no node deficit anymore. Apple laptops and zen laptops are on even footing now.
If there is still an efficiency deficit then the goalposts will have to be moved to something like “os advantage” (but of course Asahi exists) or “designed for different goals!” (yeah no shit, doesn’t mean it’s not more efficient).
Again, I am fairly sure that apple creams x86 efficiency in stuff like gcc (chrome compiles) or JVM / JavaScript interpreting (IntelliJ/VS Code) even without “the apple software”, but, everyone treats cinebench like it’s the end-all of efficiency benchmarking.
I absolutely know the Apple stuff creams x86 by multiples of efficiency (possibly 10x or more) in openFOAM - a 28-core Xeon W (or even an Epyc) pulls something like 10x the power for half the performance of an M1 Ultra Mac Studio.
(And yes, “but that’s bandwidth-limited!” and yes, so are workloads like gcc too! Cinebench doesn’t work like 90% of the core, it doesn’t work cache at all, it doesn’t care about latency. That’s my whole point, treating this one microbenchmark as the sole metric of efficiency is misleading when other workloads don’t work the processor in the same way.)
Node process. Apple can outspend AMD 100 to 1; they use their deep pockets to buy all the fabrication capacity of TSMC's latest process for a while, meaning AMD only has access to the previous one. This is where Apple's perf/watt advantage comes from.
Though I don't think it was the root cause of 68k's demise, offering poor backwards compatibility between generations was an unforced error by Motorola. Everything was changing so fast in computing in the 80s-90s, and Wintel's soft guarantee that you could just write/buy software and it'd work in the future was a big selling point.
ColdFire made the compromises necessary to make it scale, and the author points that out. But it was too late.
ColdFire can be made entirely 68k compatible with simple low-overhead emulation software. It could have been a viable path forward for 68k, but they rolled it out almost 10 years too late, after they'd totally given up on 68k.
Ha, the Motorola Museum tour should have been a mandatory annual event for all executives and high-level managers.
“We build new things, we create entirely new product segments, but when markets start to mature we sell the division to someone who knows how to run that kind of business.”
They failed to do that with CPUs and then they failed to do that with mobile phones. And it pretty much killed the company.
That museum. So much stuff from one single company. It was incredible - for the first 80 years.
The 68k was good, but it had some extreme flaws and it was late. That was enough for Intel to end up first.
The most annoying flaws: a complicated memory interface, very limited old companion chips (from the 6800 family), and the fact that the original 68k needed a second CPU to implement virtual memory.
So what happened? When the first 8086/88 and 68k appeared, RAM was small in all machines, but the 8086 was simpler, an 8086 system was cheaper, and it was nearly the same speed.
Then the 286 appeared, an improved 8086 with much better speed and support for more RAM, while Motorola answered with the 68010, which just fixed flaws; that was good, but not enough to stay in the race.
Fortunately for Motorola, the 286 was not fully compatible with the 8086 and some programs needed to be modified; but many old 68k-family programs also did not work on the 68020, so this was a good time to ask why bother so much about compatibility if it is not achievable anyway.
To be honest, the PC was where I first saw real compatibility: most programs ran without any modification on the 8086/8088/80286/80386/80486, just faster with each iteration.
The first really significant compatibility issues on x86 appeared with the Pentium; as I remember, the Pentium Pro was the first Intel microarchitecture on which 16-bit programs ran slower than on previous Intel CPUs.
But all of that was still in the future during the first round of competition between the 8086 and the 68k, and nobody could be sure about Intel's future before the 80386; many people even considered the 80286 an unfortunate CPU.
So I think that if Motorola had made a simplified 68k with the flaws fixed (to be cheaper, and maybe to integrate more onto the CPU chip), it really could have gained momentum and overcome Intel in the PC market.
Unfortunately, Motorola decided to make the 68020/68030, semi-compatible with the 68k but not cheap enough, and stayed second in the race. Maybe this was because Motorola at the time had good sales in the military market, so they didn't worry about the future.
And after the 80386, the opportunity was lost to Motorola forever. PowerPC was another story entirely.
It was only used by closed, vertically integrated brands like Apple and Commodore (Amiga) at a time when open, component-based PCs were all the rage. I don't think M68K motherboards you could drop into a case and build with were even available. If they were, it was never a big thing.
The entire market dominance of the x86/x64 architecture came out of that era and came about because you could build PCs with it and run a variety of software on it including DOS, Windows, FreeBSD, commercial Unix, and later Linux.
Also a factor was that while clone machines existed of the IBM/Intel systems and earlier Apple and other 6502 computers, often out of Taiwan, the other vertical vendors managed to keep tighter control. 68k equipment from 3rd parties for Macs and such was much harder to come by. While that temporarily increased their margins, long term it meant that PC component costs kept dropping until you could get a 586 desktop that did 75% of a SPARCstation or NeXT for 30% of the cost.
For businesses, the hardware cost of technology is much smaller than the investment they make in their workers or software systems. Having multiple interoperable hardware vendors to choose from in addition to an OS obsessed with binary backwards compatibility made Microsoft/IBM PC clones the obvious and safe choice. This created economies of scale that inevitably shrunk the market share of the other systems (Commodore, Atari, etc). Apple only managed to hold on by dominating the education and creative markets.
Oh definitely. The Mac lost because it was a closed platform in an era of open platforms where that open ecosystem was constantly driving up performance and capability while crushing prices.
The modern Mac is actually more open than the classic Mac because it's a BSD system. The UI is proprietary but the underlying OS is pretty commodity and loads of software that runs on things like Linux runs on it with little or no modification.
It was really a pretty magical time. I was a kid and a teen then and it seemed like the PC was this wide open field of limitless permission-free innovation. New software, new hardware, new capabilities were constantly being introduced and you didn't need an "entitlement" in an App Store or any of that nonsense. Nothing is really like that today except maybe the open source world, and even that kind of feels like a tar pit. There's still an open PC ecosystem but it's smaller and less dynamic.
Of course I also understand what killed it. It wasn't just cloud/SaaS or mobile. It was also the fact that we now operate in a "dark forest" war zone environment where we all have to navigate a sea of malware and exploitive surveillance-driven borderline-malware. It's hard enough for knowledgeable people but for end users downloading software is terrifying. It's like going to the worst neighborhood in the city in the middle of the night and walking around among living-dead drug addicts asking people if they know where to find something.
You can run RISC OS on a modern ARM like the Raspberry Pi - it gets weirder the longer you work with it - menu items with input boxes and other GUI widgets, a strange DOS/VMS/UNIX hybrid CLI that appears at the bottom of the framebuffer, scrolling your graphics screen up as you type, and full of terminology that's incredibly, excessively British.
The Acorn-type CLI is interesting (in my view), and I never realised why until I started using a PC rather than the BBC Micro: there's no shell. The idea doesn't even really exist. The fundamental system call (the OS_CLI SWI on RISC OS, the OSCLI entry point on the BBC Micro) takes an entire command line string and does whatever it says to do. It's like the shell is permanently there. You never need to parse the string yourself.
This is actually pretty useful, because it means that for a lot of stuff you can simply defer to the OS. No need for interfaces to select filing system, choose floppy disk/hard disk, change directory, tweak key repeat rates, print disk catalogue, etc. - there are CLI commands for all of these, and more, so a program need only provide a way to enter a command to be subsequently executed by the permanently resident CLI. And as the program calls it, any commands so entered affect its state rather than being discarded on exit.
The average file load/save UI on the BBC Micro was that you entered a file name. All the other stuff you might want to do first (select filing system, choose disk type, select drive, change current directory, check file exists or not, etc.) would be done by executing CLI commands via the program's interface for submitting such things. Feels like RISC-OS's unusual drag'n'drop file save mechanic stems from a similar mindset.
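A minimal C sketch of that structure, with oscli() as a made-up stand-in for the single OS entry point (OSCLI on the BBC Micro, OS_CLI on RISC OS) rather than the real call: the application forwards any "*" command string straight to the resident CLI and never parses it itself.

    #include <stdio.h>

    /* Stub standing in for the OS's "execute this command line" entry point. */
    static void oscli(const char *line) {
        printf("[OS executes] *%s\n", line);
    }

    /* The program's only job: hand "*" commands through, and treat anything
       else as e.g. a filename for its own load/save logic. */
    static void handle_user_input(const char *input) {
        if (input[0] == '*') {
            oscli(input + 1);                       /* e.g. "*CAT", "*DIR :0.$" */
        } else {
            printf("load/save file \"%s\"\n", input);
        }
    }

    int main(void) {
        handle_user_input("*CAT");
        handle_user_input("MYPROG");
        return 0;
    }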
I don't feel as if there was so much of a dominant paradigm back in 1987, when RISC OS first came out, at least not outside of the US?
I recall that period more as a melting pot of ideas and approaches from lots of manufacturers independently trying to figure out what paradigms might stick.
There were many disparate approaches to text-based OSs: MS-DOS, Acorn MOS on the BBC which is the predecessor of the shell found in RISC OS, Sinclair, Atari, etc. Likewise, separate approaches to GUIs: Mac, Gem, RISC OS / Arthur, the Amiga, etc. Windows 2. Teams from Acorn, Research Machines, Sinclair etc all basically did things in their own way.
While the Macintosh UI paradigm was considered dominant in some market segments at the time (eg DTP), there wasn't yet a universal expectation around how GUIs would work. That started happening more after Windows 3 came out, in 1990 iirc.
Certainly there must have been cross-pollination of ideas between these different groups. Pretty sure these Docks and Task Bars with icons, that we all have at the bottom of our screens now, were an idea first seen in RISC OS.
The GUI unification theory is fine, but we're talking about RISC OS in 2023 still being like this. All the weird quirks are still there.
Other systems have converged. Even ArcaOS, the successor to OS/2, has dropped all those strange tab systems that OS/2 Warp 3 took from PenPoint OS. It's fairly hard to find something that's notably different and ostensibly still being worked on.
You can call it whatever you want, but it's an inviolable horizontal container at the bottom of the screen, reserved for iconified applications and storage controls, which applications cannot occlude.
Windows 1.0 doesn't really have a desktop metaphor in the modern sense, as in some rectangle with free-floating icons which represent arbitrary files or folders. Windows 2.0 made the icon space the entire background, which can be occluded by applications, and 3.0 introduced the merging of the metaphors into a "desktop" that we're all familiar with. Stating they're all equivalent is, I think, a bit of a stretch.
> You can call it whatever you want, but it's an inviolable horizontal container at the bottom of the screen, reserved for iconified applications and storage controls, which applications cannot occlude.
But it wasn't a concept that Microsoft was particularly wedded to. It disappeared in Windows 3.x.
Windows itself wasn't a concept Microsoft was wedded to until Windows 3.0... after 1.0 it was mostly axed. It took a giant internal effort to get Windows 2.0 resourced for development and release, and then there was a bunch of politics that started with TopView (which had many clones, such as Quarterdeck's DESQview) and ended with OS/2 1.3, that got 3.0 out the door. MS was probably fully committed to OS/2 up until about 1989/90.*
IBM was still pushing its MCA based PS/2 machines at the time and the writing was on the wall when the gang of nine did EISA as a response instead of forking over a license fee.
I don't know what this has to do with anything. Claiming that RISC OS invented the dock or was the first to do it is bunk. I'm a huge fan of innovation and like RISC OS but I'm a bigger fan of accurate history.
---
* Accounts differ on this. Some, like Ferguson in Computer Wars, depict it as bad-faith sabotage; others, such as Hard Drive, brush that off as a conspiracy theory; still others, such as Barbarians Led by Bill Gates, say there were fundamental conflicts of opinion among important prima donna programmers. I think it's likely all of the above; there were enough people involved, and the theories don't necessarily conflict.
There was more diversity in paradigms and technologies before Wintel stamped it out. In particular, non-American companies were more prominent. I can't speak for Intel but I feel that Microsoft held the industry back with their remarkable commitment to shoddiness and tastelessness.
I feel a tinge of sadness reflecting on this fact when I walk through the Computer History Museum in Silicon Valley.
Go to Living Computers in Seattle when they figure out how to reopen. CHM is a shadow of Living Computers.
There are things like Xerox Altos and Apple 1s there that you can use -- sit at and actually do things with.
They sadly do not have a Pixar Image Computer, which is a remarkably obscure little thing I've wanted to see working for a while. I've seen two of them, but they were just in display cases. There are a few rips online with demos (https://m.youtube.com/watch?v=PhhGfdkK9Ek) but nothing really going over the machine.
I don’t think there’s been any indication that anyone is trying to figure out how to reopen Living Computers.
Likely some other philanthropist would need to buy it and fund it in order for it to reopen.
Paul Allen’s sister seems to have little interest in continuing to fund his smaller philanthropies / interests after his death (Flying Heritage & Combat Armor Museum: closed 2020, sold 2022, reopened by new owner; Cinerama: closed 2020, sold 2023, reopening planned by new owner; Living Computers: closed 2020, no public updates)
Fun fact: the ARM 1 was designed using some custom software on a BBC Micro and tested in simulation (sub 1KLOC) before they submitted the design for tape-out and production. First silicon actually worked!
Probably more like 64 KB, assuming 6502 second processor...
(Actual usable space is less! The second processor's onboard OS uses 2 KB. The BASIC interpreter uses 16 KB for its code plus another approx 1.1 KB for workspace.)
IBM 'legitimized' the personal computer for business use. Their PC was built around Intel's processor. They accidentally made the design clonable, and the OS was not exclusively licensed.
Compaq was the first to jump on this, but soon the opportunity created a massive ecosystem of competing clones, all able to run the same binary-distributed software packages, which fueled both a software industry and a commoditization of the platform that made it much more accessible and affordable.
This enormous ecosystem success forced Intel to remain backward compatible with old instruction sets to run older binaries, and Microsoft was forced the same way on OS APIs and services. Even IBM itself tried, but failed, to counter this runaway train on both the hardware and the software front.
At the processor level I loved the much more sane instruction set of the 68000 family (much like I preferred the Z80 over the 6502 before), but the dynamics of the whole PC ecosystem just steamrolled over the alternative 68000 platforms, even though Atari, Commodore and Apple produced compelling designs.
Yeah, it was the vertically unbundled platform that made the decision. Relative technical qualities of x86 vs 68k had very little to do with the outcome I think.
There are many contributing factors, as are being mentioned. I would say the most dominant one is the success of the x86 PC. That success depended on continued cost/performance value. M68k systems on the whole were considered workstation-class machines whose vendors looked down on less capable hardware. Exceptions to this were the Amiga and Atari ST, which unfortunately competed themselves out of existence rather than against the PC market that was squeezing them out. The Macintosh was more capable due to its software rather than its CPU.
Once you have the success of DOS PCs and add growing exposure of Windows 2.x (e.g. Windows/386) filling in capabilities it's hard to compete with technically better but also much more expensive systems except in smaller or niche markets over time. Even the server market switched from the likes of Sun SPARC to x86 systems.
The theme is that cheaper and minimally viable has larger market potential, and wins if it can find a way to survive. An earlier/smaller example of this is how the 6502 ate the lunches of other/better 6800- or Z80-based systems. That success later failed due to stagnation of the hardware and operating systems, and, again, competition among themselves. The x86 PC-compatible market allowed competition between vendors while still using the same ISA and the DOS & Windows ecosystems.
I grew up in this era having multiple Atari 8-bit & ST systems, using Apple ][, Macintosh, and rare access to Amigas. I was extremely disappointed with DOS+Windows prevailing over the more exciting systems from a graphics/sound gaming perspective. Market size won.
> In the end I think specialized hardware always loses out to software - that's why those things failed.
In my opinion this statement only holds in times of massive performance growth in hardware. This held true at that time, but does not necessarily hold anymore.
There is a reason why modern SoCs contain more and more specialized blocks, say, for video decoding/encoding, NPU, graphics processing (GPU), raytracing (modern GPUs), perhaps DSP, ...
That ignores that the Unix and Mac ecosystems weren't so small. So I wouldn't call the RISC processors back then 'specialized'. We tend to look at the present and say 'this is the only way it could have been' but that just isn't necessarily the case. Companies have been in good positions before and fumbled and other companies could have competed differently.
> An earlier/smaller example of this is how the 6502 ate the lunches of other/better 6800 or Z80 based systems. That success later failed due to stagnation of the hardware and operating systems, and again in-competition.
6800 and Z80 weren't much if at all better than a 6502 in terms of performance. They're all 8-bit CPUs and limited in various ways. They all also had 16-bit iterations in the 68000, 65816, and Z800.
Intel x86 and the PC and the software being written for the platform is what killed off demand for those other chips. If there were a 68090 (or whatever 68k variant it would evolve into) today inside a modern Amiga platform with a modern GPU, that people still wrote software for, I'd be using it as my daily driver.
Part of the reason that few 8-bit platforms successfully went 16-bit is because they filled up the address space too early.
If you were writing code for a Commodore 64 or Atari 800XL, you largely didn't have to think about designing your software to probe the memory map or deal with larger and smaller versions of the system.
This had two toxic outcomes:
1. The platform itself had no good way to expand upwards. If you built "C64, but with a 65C816 instead", you still had loads of software that expected to hit very specific ROM and memory-mapped hardware spots, so you couldn't remap them without breaking the world.
You could potentially work around this by building the new machine with enough memory that you can just start the "enhanced-mode universe" after the hairball that exists in the bottom 64k. But if you wanted to offer a 128k or 256k machine, you can't afford to squander all that expensive RAM. And even still...
2. The software was largely incapable of exploiting extra resources if you had them. Some software learned to deal with Commodore REU devices or the bank-switched RAM in an Atari 130XE, but that requires a lot of heavy lifting to use efficiently.
In contrast, someone who took a 128k 8088 PC clone and upgraded to a 512k 286 usually got bigger spreadsheets and longer documents for free. There was no need to think about allocating from non-contiguous memory spaces-- just a larger version of the easy to manage space that starts at 1k and ends around 640k.
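For contrast, here is a generic sketch of the "heavy lifting" that bank-switched expansion RAM (point 2 above) demands; the window size, bank count, and functions are invented rather than any particular machine's. Every large object has to be addressed as (bank, offset) pairs instead of one flat array.

    #include <string.h>
    #include <stdio.h>

    enum { WINDOW_SIZE = 0x4000 };                       /* 16 KB window */

    static unsigned char banks[4][WINDOW_SIZE];          /* stands in for expansion RAM */
    static unsigned char *window = banks[0];

    static void select_bank(int bank) {
        window = banks[bank];   /* real hardware: a write to a latch/port register */
    }

    /* Copy 'len' bytes starting at flat offset 'src' in expansion RAM:
       the flat offset must be split into banks by hand, switching as we go. */
    static void read_expanded(unsigned long src, unsigned char *dst, unsigned long len) {
        while (len > 0) {
            unsigned long off = src % WINDOW_SIZE;
            unsigned long chunk = WINDOW_SIZE - off;
            if (chunk > len) chunk = len;
            select_bank((int)(src / WINDOW_SIZE));
            memcpy(dst, window + off, chunk);
            src += chunk; dst += chunk; len -= chunk;
        }
    }

    int main(void) {
        unsigned char buf[100];
        memset(banks, 0xAB, sizeof banks);
        read_expanded(WINDOW_SIZE - 50, buf, sizeof buf);   /* straddles a bank boundary */
        printf("%02X\n", buf[0]);
        return 0;
    }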
To build on your idea: the presence of PC clones meant that a business that invests large sums of money in off the shelf and custom software solutions could be retained if their hardware vendor went out of business or failed to keep up with other clones. This aspect is highly underrated by many retro computing enthusiasts. Some software systems are too large to ever rewrite. These days, system emulation has come a long way. But at the time, the value proposition of essentially guaranteed long term support for business software was the killer feature that propelled the x86+DOS/Windows to become the dominant system.
Agreed. And that "hairball" (great term BTW, I love it) involved custom silicon that delivered expensive capabilities. Revising that cost too much.
The PC was kind of simple in this way. So were some of the other systems. I look at the Apple, which was very PC-like, and the Atari ST, and compared to systems offering custom silicon they seemed like a no-brainer choice. Software ports were easier, CPU speeds could be increased easily, and the lack of custom silicon drove some longer-term usability.
Today, we may be moving back toward another custom silicon era. Expensive capabilities put into hardware will be compelling, but may also be something of a dead end. Maybe the scale and speed of what we have today will marginalize how this could work too.
> You could potentially work around this by building the new machine with enough memory that you can just start the "enhanced-mode universe" after the hairball that exists in the bottom 64k. But if you wanted to offer a 128k or 256k machine, you can't afford to squander all that expensive RAM. And even still...
That's exactly what PCs did with the 386. Even today, if you boot in legacy BIOS mode, you get the same lower 1 MiB address space and mappings.
Note that this was apparently a decision made in 2001. If you had tried this in 1988, when 2 MB was an enormous machine, and you said to users "Yeah, only half of that will be allocated to user programs", that was not gonna fly.
I feel the nostalgia, but what would you expect this platform to do that Windows or Linux on PC wouldn't? You can kind of have a very, very fast Amiga running on an emulator today.
Amiga OS was light-years ahead of MacOS and Windows in the '80s and even into the '90s. If the Amiga had continued to be developed and gained the adoption that Windows had, and it became a dominant platform, then computing would likely be far ahead of where it is today. The Amiga was revolutionary. Business mismanagement was Commodore's downfall, while the general stupidity of humanity kept pushing the PC platform and Windows forward, and Commodore went out of business. Sure, I can run an Amiga today in an emulator, but Amiga OS didn't really continue to be developed. My comment was more about what computing would be like if the Amiga were still a popular platform and had continued to be developed in a meaningful way.
I vaguely remember those days, late '80s and early '90s, and a lot of it probably had to do with IBM's reputation. Amiga had its pro uses - like smaller TV stations using it with VideoToaster or Scala for broadcast management. But PCs had this vibe of being "serious" or "professional" that Amiga, Atari, and Mac just didn't have.
On the technical side, Amiga had some downsides too. Its OS was in ROM, and while it was way ahead of DOS and early Windows back in '87, it got outdated by the early '90s. Smaller Amiga models didn't support hard drives without buying a pricey add-on, making them more like game machines where you were stuck swapping floppies.
But if you look at OS and CPU architecture, it was almost like comparing a well-designed system to a mess. DOS was clunky, and x86 had its weird quirks: limited registers, awkward 8-bit compatibility, segmentation over paging, unnecessary IO mode/addressing instead of MMIO, messy assembler (prefixes, segments, adhoc instructions) you name it.
The Motorola 68xxx was not just Amiga and fun and games. I used to own a Sun 3/60, which is a 68020, and it was most definitely very much business, much more so than an IBM PC.
I don't know how you can say that Sun machines were "[more] business" than IBM PCs when you look at the number of IBM PCs sold to businesses versus Sun.
IBM PCs and clones effectively OWNED the small and medium business market entirely, and a sizeable portion of large businesses, too.
Motorola was in everything. There must be two dozen different workstation-class computers from the 1980s and even early 1990s that were based on it, and a half dozen home computer platforms besides.
For professional PC users one additional thing was important. The display refresh rate.
In those days you put up with a lot of crap if you had to sit in front of the computer for the whole day. That's why black screens with green or amber text were popular: the monitors were cheaper, but you also did not notice the screen flicker so much. The popular Hercules card had a higher refresh rate than the usual colour graphics cards and was considered professional despite the simple look. For a while lower rates were accepted, but by the beginning of the '90s the display refresh rate was a thing.
The Amiga had great graphics, but especially in Europe with the 50 Hz PAL system it flickered too much. Even at that time 50 Hz was seen as unergonomic. The US, and the Atari ST with 60 Hz, were a bit better, but in general 70 Hz was seen as necessary.
The Atari ST mono had 70 Hz but a small monitor, and was still quite successful in professional settings. If you just got away with it at the end of the '80s, by the early '90s the display refresh rate had to be 70 Hz.
At that point the graphics of the Atari ST and Amiga were not that impressive anymore compared to the PC (the ET4000 graphics chip was out), and if the ergonomics didn't fit either, it was hard to argue that they were a good fit for professional use where you use the computer for long stretches of the day, like text processing etc.
Later Amigas supported VGA timings. The Amiga 3000 did so through de-interlacing "scandoubler" / "flicker fixer" hardware, which had been an add-on for the Amiga 2000. The ECS and AGA chipsets had a "Productivity Mode", but only desktop programs used it, and on ECS only with up to four colours on screen at once.
I used an Amiga 1200 with a VGA monitor for a few years. Because video modes were programmable, I traded a little refresh rate for higher screen resolution: 704×520 @ 50 Hz or thereabouts.
This does not mean it wasn't upgradable by using RAM, as both SetPatch and soft kickers (mkick, skick and so on) demonstrate.
Replacing the ROMs is also possible, although not ideal, and kits with Workbench disks and ROM chips were sold cheaply.
Later, it could have been replaced by an EEPROM, too, but Commodore died first.
>Amiga had some downsides too
The main issue was the way pointers were passed around as if handing out candy, without providing IPC mechanisms that did not rely on memory sharing. This hindered attempts to properly leverage memory protection in later models, as well as the ability to reasonably implement SMP later on.
In no small part, Commodore was to blame. The first Amiga was released in 1985; MMU-capable m68k CPUs had been available for years, yet they chose the plain 68000, which was not MMU-capable. It was a corner-cutting measure that would later bite them, e.g. the MOVE from SR/CCR discrepancy, the vector base forced into low chip RAM, and so on.
Motorola jerked around their customers by announcing the deprecation and death of the 68k not once but twice; first the 88k (total flop), then the PowerPC.
Intel played around with making fancy new alternative architectures, too (i960, i860). But it never gave any of its PC customers the impression that x86 was going to be murdered. (Well, until Itanium, but that's much later and was almost serious trouble for them.)
The switch to PowerPC almost killed Apple, in my opinion. They couldn't afford the chaos and instability. System 7.6 on PowerPC was terribly unstable, and offered little to no advantages.
Meanwhile Intel iterated on the (basically inferior) x86 architecture, and we got the Pentium and its successors, which proved you didn't have to go RISC.
A few years later Motorola/Freescale rolled out ColdFire, which did for 68k what Intel did for x86. But probably about 3-4 years too late, and targeted really only for embedded devices.
68k was great. But these days I don't think I'd want a big-endian machine.
Oh man. I had to study this instruction set for a systems programming course for my undergrad. The project was to make a linker/loader that could run assembly code written in this.
Boy does it have a complicated instruction set. Anyway, we early on negotiated with the instructor on what instructions we would support. Early education in defining scope for success I guess.
I am surprised that no one has commented on the difficulty of learning/using the 68000 Assembly language versus Intel.
IIRC, there were no books available to me for Motorola assembly language programming, nor do I remember having easy access to any environments for it.
I found 68k assembly language easier than that for the 8086. Just got down my copy of "MC68000 16/32-bit Microprocessor Programmer's Reference Manual" from the bookcase. Published by Motorola, don't remember it being particularly expensive.
Back in the 1980s and 1990s Motorola would send you all the manuals for free if you called them up and asked (as would Intel). I took advantage of that and still have them sitting on my bookshelf.
That's in part because the instruction set is so logically laid out that if you understand the matrix of registers, base mnemonics and addressing modes that you can pretty much work out the rest in your head. Lance Leventhal made a pretty good book as well.
I never wrote 68k assembly, but I did use the ColdFire subset, and it was downright delightful compared to x86 asm. x86 asm is by far the most confusing, hacked-together mess I've ever seen.
Intel won because of DOS. Once Compaq opened the market to clones, you could purchase an Intel PC from any number of manufacturers that could run DOS, and then later Windows. IBM tried to stop this with OS/2, but thanks to its DOS compatibility layer, Microsoft convinced everyone to keep developing for DOS, since it wasn't going away under OS/2.
Yep, the IBM PC clones are what opened things up for x86 - you could get PC clones for a lot cheaper than Macs or even IBM-branded PCs. I suspect that if IBM had decided not to allow clones (either by design or by lawsuits), the 68K family would have been viable for longer (and/or we would have seen Macs transition to the 88K). There would essentially have been two main choices for a while: IBM PC or Mac. Which would also have meant that Intel wouldn't have completely dominated the processor market - it would have been more of a two-player CPU market.
We also had the Amiga and the Atari ST machines on 68K then as well, so without PC clones they would likely have become the low-price choice for a while.
I mentioned this in another comment, but the presence of interchangeable clones created a form of distributed long term support for business software. This was the killer feature that propelled business investment. Companies invest a lot in tailoring off the shelf software and custom software. Knowing that you have the ability to switch to another vendor made the decision a no-brainer for most businesses.
I think that seven different answers there and at least that many, also all different, posted here indicates that there isn't in fact a demonstrably correct answer explaining why the global market did what it did.
Of course, any economist would say "Welcome to economics!" at this point. (-:
Yes. That's literally the explanation. Motorola wanted to move people to their 88k RISC CPU. Apple was planning to move the Mac to 88k; the 68k emulator on the first PowerMacs was originally for the 88k.
We had 88K NuBus cards for our Macs. I was digging into the 88K runtime architecture and toolchains to support it.
Without warning, one fine day we were instructed to immediately remove the cards and return them to managers. No explanations. It was the steepest edge to "this project is getting canceled" I've experienced.
I liked programming the m68k cpus. They were also the CPU used in my computer science department curricula for assembly language programming classes.
At school we had lots of Sun{2,3,4}, Apollo, HP, Mac, and NeXT computers which we could practice on. I kinda saw the writing on the wall when we got a 6-CPU i386 Sequent Symmetry system, and then SPARC, MIPS RISC, and PowerPC machines, while nothing really new came from Motorola. I never enjoyed programming x86 CPUs after being self-taught on 6502 and then m68k systems :-)
I still have an ATARI Mega ST and a Sun 2 at home for sentimental reasons only.
Something has to be said about the clock race too... Even the Sun workstations with the Motorola 68020 would only clock at 16 to 20 MHz, I think. Meanwhile the 386 came out in 1985 (announced in 1984 according to Wikipedia) and eventually went up to 40 MHz.
One answer on SO (with only six upvotes) says that Motorola couldn't keep up the clock race.
When technology is advancing at an insane pace and you see a CPU line that is already not keeping up with the clock race, it probably doesn't make much sense to bet on that CPU line.
I remember that going from my beloved Commodore Amiga (68000) to a PC felt like going back in time, but the PC's 386 would clock at 40 MHz while the Amiga clocked at... 7 MHz.
MIPS, which at about that time was derided as “Meaningless Integer Performance Statistic” by those who moved in the supercomputing (now ‘HPC’) circles.
That said, 68000 interrupt responsiveness wasn't great at all when compared to the 8-bit systems it was replacing. An Atari ST takes a positively insane number of cycles to respond to an interrupt vs e.g. the C64, which was designed by some of the same people. Wider bus, more registers, different design constraints.
The 68000 was great for doing what you mention: bringing VAX-like tech to the masses. Looking back I don't think it was a great "home computer" processor, and not great for games and the like, either.
Anyway, I'd say by the mid-90s the 68k architecture could not keep up with x86 because Motorola just stopped investing in it.
Lots of marketing relies on superficial understanding by the viewer, partial attention, and mistaking good looks for other qualities. In other words, yes, agreed: look at the results in the markets.
One part of this has a formal name!
--
The "anchoring effect" names a tendency to be influenced by irrelevant numbers. Shown greater/lesser numbers, experimental subjects gave greater/lesser responses.
The Amiga outsourced a lot of its work to custom peripheral chips, so doing something like a mass move of memory from one location to another, on a bit boundary, didn't rely on the CPU. Likewise, you could change the addresses of the video bitplanes, which could give the illusion of graphics animation without doing huge memory moves. All of that added up to mean that an Amiga had a better human interface than a PC, at least in that era, even if the PC could do more math.
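A rough way to picture that bitplane-pointer trick, as a sketch in C with made-up names (not real Amiga hardware registers): re-pointing the display at another buffer is constant-time, while pushing the whole frame through the CPU is proportional to the frame size.

    /* Illustrative sketch only: invented names, not actual Amiga registers.
     * "Animating" by re-pointing the display at a different buffer is O(1);
     * copying the frame through the CPU is O(frame size). */
    #include <stdint.h>
    #include <string.h>

    #define FRAME_BYTES (320 * 200 / 8)   /* one 320x200 monochrome bitplane */

    static uint8_t plane_a[FRAME_BYTES];
    static uint8_t plane_b[FRAME_BYTES];

    /* Stand-in for the hardware "bitplane pointer" register. */
    static uint8_t *display_plane = plane_a;

    /* Cheap "animation": flip which buffer the display fetches from. */
    void flip_display(void) {
        display_plane = (display_plane == plane_a) ? plane_b : plane_a;
    }

    /* The expensive alternative the custom chips let you avoid:
     * physically moving the whole frame with the CPU. */
    void copy_frame(void) {
        memcpy(plane_a, plane_b, FRAME_BYTES);
    }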
Yeah but it's an architecture that couldn't make it past the late 80s.
For one because it couldn't keep pace with Moore's law and economies of scale. Custom chips are expensive and the talent to design them hard to source.
But most importantly the bus multiplexing on systems like that -- sharing video memory between CPU and dedicated coprocessor -- makes no sense in the 90s and beyond when memory became many multiples slower than the CPU.
The whole assumption of the Jay Miner design is that the coprocessor could do things faster than the CPU.
... but then CPUs just became way faster than main memory, and those fancy coprocessors would have been sitting there waiting on the bus just like anything else.
When GPUs came on the scene, they took a very different path. They have their own memory, and do very specialist things, and for a long time were very limited in the kind of bandwidth they had back to the CPU, too.
Anyways, Amiga had a nice system for 1986. By the 90s it was pointless and mass produced ISA video cards outpaced it, and they did so with a design entirely different.
Amiga and systems like it are not "paths not taken", they're design dead-ends.
With later systems, the coprocessors (blitter, at least) were often slower than the CPU. I remember having an A3000 and installing "CpuBlit" (see http://aminet.net/package/util/boot/CpuBlit ). Text scrolling speeds were vastly improved!
Exactly; there was a similar situation with Atari Corp. The initial ST shipped without the planned, but not yet finished, "blitter" chip. With the STe they rolled that out, but a couple of years later, and it never got good market penetration.
But by the time they got to the late 80s with the TT030 and the Falcon 030 it became pointless, because the main CPU was in most cases faster than hardware blitting.
That's the problem with custom chips. An approach that was invaluable in the 70s and 80s for the 8-bit machines, but became a handicap by the 90s.
It is a pretty easy mistake to make if you are used to how fast new processors come out now, but you're comparing an i386 from 1991 to a 68020 from the mid-80s.
In 1985 when the 386 came out, I believe the fastest speed you could get was 16MHz. They added higher speed variants for years afterwards. Intel made a 40MHz 386 in 1991 that was strictly aimed at embedded users who wanted more performance but were not ready to move to 486-based designs (386CX40); I doubt anyone used one in a PC. AMD made a 40MHz Am386, which was a reverse-engineered clone of the 386, but again that came out in the 90s (the big selling point was that you could reuse your existing 386 motherboards instead of replacing them like you needed to for a 486).
I'm sure the fact that 68k evolved over its lifetime from an incredibly elaborate, interpretive microcoded design to something that supported pipelined, out-of-order execution doesn't help, either.
The 40MHz 386 was in 1992. The Amiga is another issue: Commodore was selling the same 1985 design with no improvements all the way to 1992, and with only very minor ones past that.
Because of the vast amounts of money being poured into wintel (dostel?) at the time, no one else could compete after a decade+ of being outspent 10x on R&D by Intel.
Not SPARC, not MIPS, and not Motorola either. Was just reading about DEC/Alpha in another thread.
Software lock-in, first from IBM and then MS, contributed to massive consolidation in the industry until the old players tapped out.
Presumably, with Intel's budget Motorola could have paved over the 68k's flaws just like was done with x86.
I too suspect they could have - but even if they could have, it wasn’t at all clear that it would’ve been the right thing to do. Even for Intel I don’t think it was clear for a while that their gambit was going to pay off and bury RISC (or Itanium), and there was much less incentive for Motorola to maintain backwards compatibility. A clean design that was faster - and easier to make faster - must’ve seemed like a sure bet at the time.
Intel had so much money to burn they tried at least three different strategies and the market chose the winner. Compatible with Pentium, RISC with i860, and clean-slate with Itanium are those that come to mind.
Each of those projects undoubtedly cost billions. An incredibly luxurious position to be in.
I assume that's one of the reasons why they teamed up with IBM and Apple for the PowerPC. More resources that can be used for CPU development compared to them going it alone.
Well, you had other issues going on within the group. IBM only seemed interested in server chips and later on their higher volume chips for the Nintendo Wii and Xbox 360. Apple alienated Motorola by pulling their license for MacOS right before they were going to release a series of laptops that would compete with the PowerBooks. Motorola was not going to prioritize chips for Apple from that point on. Apple finally gave up and moved to Intel after that.
That’s the same issue. Apple was selling less than 2 million Macs then. Why invest in a low volume business?
We are seeing the same issue now with Macs. Apple would never have the resources to invest in the M series if they weren’t also selling 150 million A series iPhones and iPads, a few million M series iPads, 20-30 million S series Watches and miscellaneous other A series devices like AppleTVs and monitors.
The issue now is that even Apple is not willing to invest the resources needed to make high end ARM chips to compete with Intel on the very high end.
Looks like the Sega Genesis used it because they got a 90% discount, and then paired it with a Z80 over fears it couldn't handle sound and video at the same time.
Coming back to this thread a day later. I think the real story is that: 68k could not keep up because Motorola just stopped investing in it. Dunno if this is a chicken vs egg thing due to its already declining popularity, or just bad leadership at Motorola, or both. But Intel shoveled money into the R&D furnace for x86 while Motorola spent their energies elsewhere. And so they stopped competing, and then stopped existing.
There were many microprocessor designs out or coming out from everybody. Plenty of them got design-ins (chosen to have a product built around them). For PC, nobody could compete with the wave of the PC-compatibles running Windows on x86.
Many of the new microprocessors did come out in non-PC products like workstations - where it didn't matter as much.
Failure to try hard enough seems like the likely answer. x86 sold more and thus got more money poured into making it faster. x86 also had more companies in the running for longer (three, then two, after VIA effectively vanished).
I think this was PowerPC rather than 68k, but one notably distinct characteristic of PPC was that it had inverted page tables. I would not expect that to be a major make-or-break architectural difference on its own, but while different ISAs seem not that consequential, having a totally different memory architecture makes the whole effort to optimize very, very different and deeply changes how all the caches work.
There were some very neat ways in which inverted page tables seemed sympathetic to object-oriented designs, and potentially worth expressing at the hardware level. But there's also a real possibility that various caches were harder to make run fast with inverted page tables. PPC's memory architecture was fairly unique.
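For readers who haven't met them, here is a very simplified sketch in C of how an inverted (hashed) page table lookup works. This is the general idea only, with invented names and sizes; it is not the actual PowerPC HTAB format.

    /* Simplified sketch of an inverted (hashed) page table lookup.
     * A conventional design walks a per-process tree of page tables indexed
     * by the virtual address; an inverted table has roughly one entry per
     * physical frame and is probed by hashing the (address space, page). */
    #include <stdint.h>
    #include <stdbool.h>

    #define IPT_ENTRIES 1024   /* roughly one entry per physical frame */

    typedef struct {
        bool     valid;
        uint32_t asid;   /* address-space (process) id */
        uint32_t vpn;    /* virtual page number */
        uint32_t pfn;    /* physical frame number */
    } ipt_entry_t;

    static ipt_entry_t ipt[IPT_ENTRIES];

    bool ipt_lookup(uint32_t asid, uint32_t vpn, uint32_t *pfn_out) {
        uint32_t h = (asid ^ vpn) % IPT_ENTRIES;
        for (int probe = 0; probe < IPT_ENTRIES; probe++) {
            ipt_entry_t *e = &ipt[(h + probe) % IPT_ENTRIES];
            if (!e->valid)
                return false;                  /* miss: fault to the OS */
            if (e->asid == asid && e->vpn == vpn) {
                *pfn_out = e->pfn;             /* hit */
                return true;
            }
        }
        return false;
    }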
From what I recall at the time, Apple (as well as others) tended to do things to keep M$ OSes off their hardware and differentiate themselves from IBM, so they would have more control of the platform and sell boxes. M$'s goal was to be the dominant OS then. IBM misguidedly chose to make money on hardware. They were too accustomed to getting exorbitant amounts for their software, i.e. leasing it.
It was not so easy to port an OS back then, and IBM PCs were the dominant species, so too was DOS and then Windows. Zilog was starting to fade also - the Z8000 never caught on. Apple became the only company keeping the 68000 alive. Either Motorola was not up to the challenge of competing with Intel, or Apple was not making enough volume to make it worth their effort. Apple - fighting the last war - thought IBM was their competitor, so it eschewed any compatibility with IBM PCs and chose to go it alone. It wasn't until later (after Jobs' return) that they realized they were wrong and things had moved on. Reasons for choosing a CPU were more about power and speed.
The original question was "in the 21st century?" which doesn't make much sense combined with "in personal computers". The fight was long over for the 68000, or for PCs, by the time of the 21st century.
Damn, I had a hunch when I learned it in my comp sci degree that it wouldn't be useful in the future. I think it was 2012 when I took the class >.>
id's Doom (and Quake) level editors were written for NeXTStep which ran on NeXT hardware which was based on the Motorola 68030 or 68040. OpenStep later made it to x86 (also PA-RISC, SPARC, and via OS X, PPC, x64, and ARM64) but the level editors used the original NeXT classes (NX* instead of NS*) which didn't make the transition to OpenStep.
Lots of excellent answers but this is the best response that gets to the core of the question (don’t know how to make a link to an answer, so pasting it here):
> Processor architectures come and go. The history of the 68k architecture is perfectly normal. It's the x86 that's anomalous. I believe that is the result of Microsoft's peculiar inability to switch architectures. No other software company seems so stuck: they move opportunistically to whatever architecture suits their immediate needs. Apple has evolved 6502->68k->PPC->x86->ARM. –
John Doty Sep 25 at 12:14
> Microsoft's peculiar inability to switch architectures.
More like third party publishers? Windows has been released for many architectures in the past but without bringing the software ecosystem along it never worked out. They presently seem to be trying to correct that, with an x86 emulator included in their ARM version of Windows 11, but don't have the hardware offerings to create demand.
Right, Windows NT was shipped on PowerPC, MIPS and Alpha up until 1999; then MS shipped Windows Server on IA64 Itanium from around that time right up to 2010, alongside the x64 version - including shipping IA64 versions of .NET, SQL Server, etc. And they began shipping Windows on ARM with Windows RT in 2011.
I actually don't think there has been any significant period where Microsoft was only shipping core Windows operating systems on only x86. It's possible that's actually a deliberate strategy.
But to your point, what they have never been able to do was persuade any significant part of the ecosystem to notice.
There was also a PowerPC build of Windows XP, which was preloaded on Xbox 360 dev kits (which were ironically PowerMac G5 towers configured to roughly match 360 hardware).
> They presently seem to be trying to correct that, with an x86 emulator
DEC created an x86 emulator for the DEC Alpha version of Windows NT, but it still didn't get any traction, despite the Alpha being the performance king back in the day.
Very wrong answer though. Microsoft's NT OS line runs on multiple architectures. Apple's 68k to PPC transition was a disaster. They fixed it later, but the first-gen PPC was near unworkable. I loved Apple's Mac line before that, but this made me switch.
The 68k to PPC transition really wasn't a disaster. Was there a year or two when certain apps (Photoshop was a big one) were not updated for PPC, and thus ran about the same or slightly slower on a PPC Mac as compared to a 2-year-old Quadra? Sure. But clock speeds advanced quickly enough, Adobe finally got their native apps working, and things were pretty good by year 2 of the transition.
I had one of those first-gen PPC Macs. Having spent big on a new machine and having everything run much slower than on your old machine, at a time when, unlike today, we did not have performance to spare, was a real turnoff.
It got better eventually, but by then the first gen was already replaced, and you had to buy the next version of the software as well if a PPC-native version eventually became available. Office software in those days was also much more expensive than today.
At the same time Microsoft released Windows 95, arguably the first Windows with a usable GUI. The PPC switch was a disaster for Apple, no doubt about it.
As I have pointed out elsewhere, that's a question comment, not an answer. And this very discussion here exemplifies the problem with people wrongly using comments for answers. Most of the objections raised here to that question comment were in fact already raised on StackExchange. Several are hidden because of the way that StackExchange decides what comments to show.
If it had been an actual answer, in contrast, you would have had a "Share" hyperlink to click on, bringing up a pop-up that allows you to copy a hyperlink for the answer, even on your device. The person mis-placing an answer in a question comment hasn't helped you, either.
Microsoft can switch architectures though. I have an ARM64 Windows VM on my M1 mac. It runs x86 executables seamlessly and fast enough that you don't even notice that there's an emulation/translation layer underneath.
The Windows 9x series (and earlier) were tightly coupled with x86 (and DOS).
The Windows NT family of operating systems has always had the "Hardware Abstraction Layer" (aka HAL) which helped with porting the operating system to other architectures.
It's only anomalous in that it's the one that happened to survive. As the industry matured, players consolidated, and there were more benefits to one dominant architecture with a few implementations than multiple architectures. At some point, CPUs got so complex the instruction set became more of an API than anything actually rooted in hardware, extending the lifespan of the instruction set even longer.
> "It's the x86 that's anomalous. I believe that is the result of Microsoft's peculiar inability to switch architectures."
Why do people keep forgetting that Intel was at the forefront of fab process improvements for 2-3 decades, thanks to the enormous revenue and economies of scale in manufacturing coming from the computerization of society from the '90s onward? None of the UNIX server / workstation OEMs with their own CPU architectures could keep up, not even the well-heeled Sun Microsystems. AMD couldn't keep up and had to spin off GlobalFoundries. ARM / MIPS and other minor architectures eked out meager existences as embedded processors only because Intel didn't deign to pursue a low margin market. It was a combination of Intel's fab prowess and the lack of motivation from software developers to build Windows software for non-x86 platforms that kept x86 chugging along.
> I believe that is the result of Microsoft's peculiar inability to switch architectures.
This is simply not true.
I'm running Windows right now--to post this response--on an ARM Windows laptop. Microsoft has had Windows on RISC machines: Alpha, MIPS, PowerPC and Itanium. They were all ready, should any of those architectures have taken off. (See: https://virtuallyfun.com/2023/08/05/come-meet-tenox-check-ou...)
The 6502 was the Apple II; the IIGS used a 16-bit 65C816. Both of those are separate from the Macintosh going through 68k->PPC->x86->ARM. And to be even more pedantic, 68K -> PPC was the old Mac System, then PPC -> x86 was OS X, and the current x86 -> ARM has been macOS.
I have the same experience: I first learned 6502 (Apple II), then 68000 (Amiga), and finally (x86). The 68000 was very straightforward in the number of registers and memory addressing. Mainly when you compare with an 8088.
I learned assembly for ARM2, loved it, moved from Archimedes to PC and pretty much immediately went "oh my word what the flying fornication NO" and have basically never written assembly since.
I loved the 68K. It had a lovely, regular architecture and assembly-language. I learned it with a Sinclair QL (actually 68008, but close enough). I'm sorry it died.
Because IBM picked the other chip for their flagship offering. It's as simple as that. If IBM had picked the 68K series, a lot more funding would have been available to keep the Motorola chips competitive. In this branch of the multiverse they had to compete with the IBM/Intel combination, which with their massive sales of PCs and compatibles had both better margins and substantially larger volume. Motorola didn't stand a chance, even if they may have had a bit of a head start technology-wise.
68000 assembler was a delight to use, completely regular with lots of powerful instructions. I was lucky enough to write 68k software professionally for a while (on the MVME147/167). However that was part of its downfall because it's hard to make a superscalar architecture when individual instructions have many complicated side effects.
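To make that "complicated side effects" point concrete, here's a rough C model (my own illustration with invented names, not an emulator or anything Motorola published) of the work hiding inside a single memory-to-memory 68k add with post-increment addressing. A superscalar core has to crack all of this into separate micro-operations and track every side effect for precise exceptions.

    /* Rough illustration of why one 68k instruction such as
     *   ADD.L (A0)+,(A1)+
     * is awkward for a superscalar core: a single opcode implies two loads,
     * an add, condition-code updates, a store, and two register updates. */
    #include <stdint.h>

    #define MEM_WORDS 1024
    static uint32_t mem[MEM_WORDS];     /* toy memory, word-indexed here   */
    static uint32_t a0, a1;             /* address registers (as indices)  */
    static int ccr_z, ccr_n;            /* two of the condition-code bits  */

    void add_l_postinc_postinc(void) {
        uint32_t src = mem[a0]; a0 += 1;   /* load source, bump A0         */
        uint32_t dst = mem[a1];            /* load destination             */
        uint32_t res = dst + src;          /* the actual ADD               */
        ccr_z = (res == 0);                /* condition-code updates       */
        ccr_n = ((int32_t)res < 0);
        mem[a1] = res; a1 += 1;            /* store result, bump A1        */
    }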
Also Intel just had way more money because of the mighty Wintel monopoly. They could scale up the manufacturing side.
In the era under discussion, a "PC" was _by definition_ a computer with an x86 processor, and Macs and Amigas and Ataris were Something Else for boutique users.
"by definition" is a little harsh, and "with an x86 processor" is a little too narrow:
"The designation "PC", as used in much of personal computer history, has not meant "personal computer" generally, but rather an x86 computer capable of running the same software that a contemporary IBM PC could.":
All that got thrown out the window when IBM named their computer the "IBM Personal Computer." It was very effective salesmanship. It was _wrong_ and I don't like it BUT it was very effective.