
Apple's long processor journey - AndrewDucker
https://liam-on-linux.livejournal.com/69433.html
======
perl4ever
BeOS seemed really nice at the time. I'm not sure if it would have worked out
better in the long run, but it sounded great.

I always had the sense that consumer-focused computers should be engineered
with real-time capabilities; instead, they ended up being based on time-
sharing systems, which seems to hamper UI and media processing. For decades,
and especially at present, I get so frustrated with lags and hiccups no
matter how fast the CPU and storage are.

~~~
pvg
_worked out better in the long run_

The bigger obstacle is that it would not have worked at all in the short run -
it was closer to a technology demonstrator than a complete OS. It's often
presented as some sort of key decision between similar options, perhaps
because Jean-Louis Gassée was also a former Apple exec and remains a well-
known commentator. But I don't think it was anything like that; Apple already
had plenty of its own half-finished OS tech.

~~~
chipotle_coyote
I occasionally hear this, but it's at least worth noting that there are people
-- like me -- who ran BeOS full-time. I did for over a year. Gobe Productive
was an AppleWorks-like office suite (by the original authors of AppleWorks, no
less), Pe was a good BBEdit-ish code editor, the image editor e-Picture
resembled Macromedia Fireworks... I could go on with many other apps. As
subjective as it is, I preferred BeOS to Linux as a desktop OS at that time
because the nascent app ecosystem was already nicer to use and, at least to
me, more complete. (It's important to remember that "at that time" was 1999,
which predates the first release of OpenOffice by over two years!)

In any case, NextStep didn't do anything in the short run for Apple -- it took
dozens of engineers about four years to turn NextStep into OS X, and it's hard
to believe that they couldn't have done the same thing with BeOS. Most of the
reports I've read suggest BeOS was actually Apple's first choice, but Be, Inc.
demanded what they considered an unreasonable price.

I think the biggest difference was vision: BeOS was shooting for the creative
media market, while NeXT's biggest successes had come in very enterprise-ish
verticals. BeOS was arguably a closer fit to the Mac's "prosumer and creative
professional" vision; my unconfirmed, probably off-the-wall suspicion has
always been that The Enterprise (tm) was closer to Gil Amelio's button-down,
old school semiconductor industry heart. Buying NeXT clearly turned out to be
great for Apple as a company -- just not for the reasons Amelio had in mind.

~~~
pvg
Sure, there were people who ran BeOS full-time. But 'preferable to Linux for
an enthusiast in 1997' is too low a bar for 'basis for a consumer OS'.

 _In any case, NextStep didn't do anything in the short run for Apple -- it
took dozens of engineers about four years to turn NextStep into OS X, and it's
hard to believe that they couldn't have done the same thing with BeOS._

I think it's very easy to believe if you compare them technically. I'm happy
to get into that discussion if you're interested, but it seems obvious to me
that a complete, mature OS - one considered fairly advanced at the time, with
a track record of being ported to every architecture available (and of
running inside or alongside other environments), plus the team that built and
maintained all of it - was a saner choice than an unfinished OS. Finishing an
OS was precisely the problem Apple had had for years and was trying to buy
its way out of. BeOS and NextStep were not anything close to the same
starting point.

 _I think the biggest difference was vision: BeOS was shooting for the
creative media market, while NeXT's biggest successes had come in very
enterprise-ish verticals._

I think there's some truth to this, in that Apple's computers had long been
pariahs in business settings and getting out of that rut was something Apple
wanted to do. I imagine it was a factor. At the same time, there was probably
far more actual prosumer and creative-professional software for NextStep than
there ever was for BeOS, and, much more importantly, Apple was looking to buy
technology, not vision.

------
gok
Perhaps better titled "the Mac's long journey", since it's only about one
Apple platform and not really about processors. And even then it seems to end
around 2006.

~~~
lproven
Hi, original author here.

Yes, I'm talking about Apple Macs, specifically. I am not aware of any other
Apple product line that has transitioned from one processor architecture to
another while maintaining any form of software compatibility; are you?

Non-computers don't matter. Nobody cares if a laser printer has the same CPU
as the previous generation of laser printer, so long as it still prints and
there's a driver. Nobody cares if a new iPod has a different CPU, because
iPods didn't run apps.

The iOS line has never made a CPU transition, so there's nothing to write
about. It's ARM as it's always been. 32-bit to 64-bit is no big deal; I did a
FOSDEM talk on this theme a fortnight ago:
[https://news.ycombinator.com/item?id=22265615](https://news.ycombinator.com/item?id=22265615)

Only Apple's Mac made a journey like this, so there's nothing else to talk
about. The Apple IIGS had a different CPU but was backwards-compatible; no
transition involved.

It ends around 200 _8_ because by then Apple was making 64-bit dual-core x86
machines with 64-bit UEFI, and they still are. That was their last transition
so far. There's nothing more to say.

~~~
shaabanban
You could make the argument that they've made at least a partial transition
since the 2008 64-bit UEFI machines, with the whole T2 ARM co-processor thing.

~~~
chipotle_coyote
Mac software, in the sense the original author is talking about, is still
basically 64-bit Intel Mac software, though, right? Catalina is a transition
of sorts in that it drops all 32-bit compatibility, so that could be a
footnote, but I think that's about it -- until-slash-if Macs make the
oft-rumored transition to Apple-designed ARM CPUs.

------
classichasclass
This is a little unfair. Yes, the notionally "normal" state of the system, at
least from the view of the nanokernel, is to be running 68K code. However, by
the days of 8.5, most of the OS _was_ native, with the remaining legacy being
all those obnoxious UPPs. In fact, porting the classic Mac OS to PowerPC by
then was actually the _low_-effort move while Apple management scrounged
around to figure out what to do after the demise of Copland.

~~~
lproven
Hi. Original blog post author here.

Not really, no.

I was there and used and supported these machines throughout.

There was a CDEV available at the time. I can't find it any more. It placed an
"indicator light" in your menu bar, which glowed red when the OS was executing
68K code and green for PowerPC code. I installed it on all the machines I
could.

It sat there red 99% of the time. Occasionally it flashed green briefly. Only
a very few very-CPU-intensive apps made it stay green: applying large
Photoshop filters, for example.

Even by Mac OS 9.2.2, in my normal usage - browsing the web, doing email,
spreadsheets, chat, and writing - it stayed red most of the time. PowerPC-
native browsers such as WAMCOM call OS code all the time, and the OS code
mostly stayed 68K.

As for it being the low-effort move, I specifically addressed this point.

~~~
classichasclass
I don't know what CDEV you're referring to, but what it's observing sounds
like a distortion. Every UPP call into the OS looks like 68K code. This
enables 68K apps to "just do it." There was always some sort of thunk to
handle registers and calling convention, but by 8.5 and certainly by 9 the
code on the other side was usually native.

Certainly some calls were still 68K; that's why almost every OS call still
needed to go through that song and dance. But if all the CDEV did was watch
the boundary at the Mixed Mode Manager switch, the machine would indeed
appear to be running 68K code most of the time except for those few generally
non-UI PPC-specific APIs, even though it isn't (and I challenge you to spend
a little time in MacsBug and see that this is true).
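
To make the UPP point concrete, here is roughly what the pattern looked like
under the classic Universal Interfaces. This is a minimal sketch from memory,
so treat the exact identifiers (the MixedMode.h names and procInfo macros) as
approximate rather than authoritative:

    #include <MixedMode.h>

    /* procInfo encodes the calling convention and argument sizes so the
       Mixed Mode Manager can build the right frame when it switches ISAs. */
    enum {
        kMyActionProcInfo = kPascalStackBased
            | STACK_ROUTINE_PARAMETER(1, SIZE_CODE(sizeof(long)))
    };

    /* A native (PowerPC) callback that 68K code may invoke. */
    static pascal void MyAction(long refCon)
    {
        /* ... native work here ... */
    }

    static UniversalProcPtr gMyActionUPP;

    void InstallAction(void)
    {
        /* The routine descriptor records which ISA MyAction was compiled
           for; a 68K caller jumping through the UPP lands in the Mixed
           Mode Manager, which performs the thunk. That switch point is
           the "border" an indicator CDEV would see as 68K code. */
        gMyActionUPP = NewRoutineDescriptor((ProcPtr)MyAction,
                                            kMyActionProcInfo,
                                            GetCurrentISA());
    }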

------
abelliqueux
Full story on the transition from the 68000 to PowerPC:

[https://www.filfre.net/2020/02/the-deal-of-the-century-or-th...](https://www.filfre.net/2020/02/the-deal-of-the-century-or-the-alliance-of-losers/)

~~~
tambourine_man
That was one fine article. Thanks for the link.

------
trimbo
I know this is about the Mac's transitions, and call me sentimental, but I'm
a little sad there was no mention of the 6502 and 65816.

~~~
jdswain
And quite a bit of the Mac OS was ported back to the IIgs, most notably
QuickDraw. The IIgs had ADB before the Mac, and colour QuickDraw before the
Mac had colour graphics.

In hindsight, it looks like the 65816 was deliberately speed-limited so as
not to overshadow the Mac. The Mac ran at 8 MHz and the IIgs at 2.8 MHz,
while the 65816 eventually reached 14 MHz+. If Apple had released an 8 MHz
IIgs, it would have compared very well against the Mac; Woz is even quoted as
saying that.

Also, the Mac IIfx had two 10 MHz 6502-based I/O processors to offload work
from the main CPU.

~~~
duskwuff
> The IIgs had ADB before the Mac

Marginally. The IIgs was released in September 1986; the Macintosh SE and II
(the first Macs to support ADB) came out in March 1987.

The 65C816 _eventually_ made it up to 14 MHz, but I'm not sure those speed
grades were available in 1986. Even once they were, integrating them into the
IIgs took significant effort by accelerator manufacturers.

> if Apple released an 8MHz IIgs it would have compared very well against the
> Mac. Woz is even quoted as saying that.

Perhaps against an 8 MHz 68000 -- even then, the 16-bit data bus, 32-bit
registers, and hardware multiply/divide on the 68000 would have made it stiff
competition for the 65C816, which still used an 8-bit data bus, 16-bit
registers, and supported no arithmetic operations more complex than addition,
subtraction, and comparison. Against the later 680x0 parts, there'd have been
no contest.
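
To make the arithmetic gap concrete: lacking a multiply instruction, 65C816
code had to do something like the following shift-and-add loop in software
(sketched here in C for readability), while the 68000 produces the same
result with a single MULU instruction.

    #include <stdint.h>

    /* Software 16x16 -> 32-bit multiply: the shift-and-add routine a
       65C816 program would implement by hand; one MULU on the 68000. */
    uint32_t soft_mul16(uint16_t a, uint16_t b)
    {
        uint32_t acc = 0;      /* running product */
        uint32_t addend = a;   /* multiplicand, shifted left each round */
        while (b != 0) {
            if (b & 1)
                acc += addend; /* add when the low multiplier bit is set */
            addend <<= 1;
            b >>= 1;           /* consume one multiplier bit */
        }
        return acc;
    }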

------
cable2600
Didn't IBM have OS/2 Warp for PowerPC CHRP systems in the works, which later
turned into vaporware? It was supposed to use x86 emulation to run DOS,
16-bit Windows, and OS/2 programs on the PowerPC Macs.

Don't forget Linux for PowerMacs: Apple once had MKLinux available.

~~~
kylek
I cut my Linux teeth on PowerMacs. OS X was going to be based on Unix, so I
thought I'd get a jump start on this newfangled stuff (I was 11 -- "It's a
UNIX system... I know this!"). I remember an issue of MacAddict arriving with
a copy of LinuxPPC and an app that, through something that felt like
witchcraft at the time, would let you boot into Linux directly from Mac OS.

(Also, some random OS/2-PowerPC history:
[https://www.os2museum.com/wp/os2-history/os2-warp-powerpc-ed...](https://www.os2museum.com/wp/os2-history/os2-warp-powerpc-edition/) )

------
throwaway3157
I was thinking today that we need more computer archeology, and then I read
this short post on macOS and processor history that I was unaware of. We need
more of this!

~~~
dcolkitt
David Friedman has an interesting book, _Legal Systems Very Different From
Our Own_. It's basically what it sounds like: a survey of legal systems from
the distant past and from foreign civilizations that evolved under very
different conditions from what we know.

I think it'd be interesting if a technically minded person wrote an
equivalent work, _Computer Systems Very Different From Our Own_. By necessity
most of it would probably be confined to retrocomputing, but I think it'd be
a pretty worthwhile endeavor.

Not because those computers are likely to be better designed than our current
systems; if anything, quite the opposite. Like Friedman's legal systems, most
of them probably have glaring pitfalls relative to the modern state of the
art.
But it's a useful exercise to think about very distant points in the design
space, and how and why those decisions were made.

Even if the comparison point is clearly inferior, by virtue of being very
exotic it helps us better understand the tradeoffs we make in our everyday
incremental designs.

~~~
senderista
Like ternary architectures?
[https://en.m.wikipedia.org/wiki/Setun](https://en.m.wikipedia.org/wiki/Setun)

------
kenips
Thanks for this. It's a great read. Looking forward to a part II on more of
OS X's evolution (64-bit, Intel, DriverKit, etc.).

------
therealmarv
I hope to see AMD in this list soon. Apple is not using the best desktop and
laptop processors nowadays.

~~~
scarface74
The best _laptop_ processor would be an Apple designed ARM chip.

~~~
arvinsim
You are basing that on what metrics? It's all just speculation for now.

AMD 4000 mobile processors, on the other hand, already exist.

~~~
scarface74
We know the performance/power trade-offs of Apple's ARM chips in iPad Pros.

~~~
duskwuff
And the odds are good that even _that_ could be scaled up further for a
laptop-class part. The iPad Pro is a passively cooled device which is expected
to always run on battery power, after all; a laptop would have larger
batteries and more thermal margin.

Not that it'd even have to be scaled up by much. The A13 (iPhone 11) already
performs comparably to many Intel mobile parts; an -X variant (future iPad
Pro) would surely improve upon that.

~~~
zrm
The scaling is really the question. ARM processors generally have better
performance per watt because they're designed to prioritize power efficiency
over absolute performance. The Intel parts that Apple's ARM processors match
on performance are the lowest-power ones, which isn't exactly the sweet spot
for Intel's microarchitecture.

If you gave the ARM designers the (higher) laptop power budget, they'd almost
certainly increase performance at the expense of performance per watt - in
other words, take more of the trade-offs that Intel and AMD do. But to
justify switching architectures it can't just be comparable; it would have to
be enough better to be worth the transition.

~~~
amalter
Amazon has borne this out exactly - [https://aws.amazon.com/about-aws/whats-new/2019/12/announcin...](https://aws.amazon.com/about-aws/whats-new/2019/12/announcing-new-amazon-ec2-m6g-c6g-and-r6g-instances-powered-by-next-generation-arm-based-aws-graviton2-processors/)

If Amazon can make a server-class ARM chip that is competitive with or better
than the custom Xeons in its fleet, I take it as given that Apple could do
the same for mobile.

~~~
wu_187
The problem is the use case. The software Mac laptops run would perform
horribly on ARM. A MacBook Pro is not going to have an ARM processor.

~~~
scarface74
Why do you think the software Macs run is so much different from the software
that iPad Pros run? They run some of the same frameworks. And since running
iOS apps in the Mac simulator means running x86 builds of those apps linked
against an x86 build of the iOS frameworks, it's not hard to compare
performance.

------
dehrmann
As Apple is rumored to be eyeing ARM for Macs, how impractical is it to make
a more power-efficient x86 CPU (like an Atom 2)? Or is x86 the 737 of CPU
architectures?

------
msoad
I simply can't scroll on this website (Safari on iOS).

~~~
gumby
I read it in reader mode, but I'm on an iPad, which perhaps has a different
Safari from the iPhone's.

------
Brave-Steak
What was the reasoning for transitioning from Pascal to C?

~~~
Dalrymple
Pure Pascal is not really suitable as an operating-system implementation
language. The obvious design choices are to:

1. Transition to C completely.

2. Extend Pascal (as HP did - HP called its resulting language MODCAL).

3. Support in-line C code in the Pascal compiler.

4. Support in-line assembly in the Pascal compiler.

For superior library support and other reasons, Apple made the right choice
in going to C.

~~~
eschaton
Pascal as used on the Apple II, III, Lisa, Mac, and IIgs had the minimal
extensions needed to be a systems language and was pretty much isomorphic to
C.

The “switch” from Pascal to C in the Mac market was less a “switch” and more a
change in preference by developers—you could use either language and line by
line your code would be equivalent. And the switch at Apple was likely just
following the market.
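
As a hedged illustration of that line-by-line equivalence (the names here are
made up for the example; the Pascal counterpart of each C line is shown in
the trailing comment):

    #include <stdio.h>

    /* C version                              (Pascal version in comments) */
    void Accumulate(long *total, long amount) /* procedure Accumulate(var total: LongInt; amount: LongInt); */
    {
        *total += amount;                     /*   total := total + amount; */
    }

    int main(void)
    {
        long sum = 0;                         /* var sum: LongInt; ... sum := 0; */
        Accumulate(&sum, 42);                 /* Accumulate(sum, 42); */
        printf("%ld\n", sum);                 /* WriteLn(sum); */
        return 0;
    }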

The APIs themselves used a language-independent calling convention: OS traps
passed arguments in CPU registers, Toolbox traps took them on the stack in
Pascal order, and in both cases the call was invoked by an unimplemented
opcode in the 0xA000–0xAFFF range rather than by a JSR to a function pointer.
So neither Pascal nor C really had an “advantage.”
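
For the curious, a rough sketch of that dispatch idea, simplified from how
the real 68K trap dispatcher worked; the table sizes and bit layout follow my
recollection of the A-line scheme, so check Inside Macintosh before relying
on them:

    #include <stdint.h>

    /* The 68000 raises a "line 1010" exception for any opcode of the form
       0xAxxx; the handler decodes the trap word and jumps through a
       dispatch table, so callers never JSR to a function pointer. */

    typedef void (*TrapHandler)(void);

    static TrapHandler gToolboxTraps[0x400]; /* 10-bit Toolbox index */
    static TrapHandler gOSTraps[0x100];      /* 8-bit OS-trap index  */

    static void DispatchALineTrap(uint16_t trapWord)
    {
        if (trapWord & 0x0800)                 /* bit 11 set: Toolbox trap */
            gToolboxTraps[trapWord & 0x03FF]();
        else                                   /* clear: OS trap */
            gOSTraps[trapWord & 0x00FF]();
    }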

