The Acorn Archimedes was the first desktop to use the ARM architecture (ieee.org)
169 points by lproven 18 days ago | 229 comments



My brother had an Archimedes. At the time they were introduced, they were amazing things. 4 million colours, blazingly fast. At the time I was convinced that this would quickly replace Intel-based PCs. That never happened, of course.

Acorn had a tendency to be ahead of its time. Our first home computer, the Acorn Electron, could be expanded with a 3.5" floppy drive. Later, when we got a PC, we moved from 3.5" to 5.25" floppies. That was my first disappointment in having to make a step backwards in tech.

The Archimedes' successor, the RISC PC, could even have an Intel processor as co-processor. Or, instead, a card with 4 or 5 more ARM processors. At the time it was probably the cheapest way for a consumer to get a multi-processor machine.


Acorn were indeed way ahead of their time. Especially when it came to flunking software projects. They had the hardware done but the OS was a quick hack job because their proper OS (ARX) was late. So they ported MOS from the BBC series and threw a GUI on top. And it initially sucked terribly. I was there for it.

I bought a fully stacked A440 when they were released, many thanks to the kindness of a deceased relative and much to the dismay of my mother, who said I should have bought history books and art materials instead.

Really, if they'd had something of RISC OS 2 quality on day one (which would have been a monumental feat, like the hardware was) they'd have done better. I enjoyed every moment of using the machine after RISC OS 2 came out. Before that I was slightly scared I'd burned a monumental amount of money.


One of the things I loved about RiscOS was a small UI feature. There were no open or save dialogs. Instead, when you saved a file, it minimised to its icon and then you dragged it to the relevant folder. To open a file, you located it and double-clicked.


For me drag-and-drop is not complete without also being able to save by drag-and-drop. On RISC OS you can save your files by dragging the file icon to the place you want to save your file. This can be a directory window, or it could also be another app icon on the taskbar, or an open app window.

This way it is possible to create workflows where you have small specialized apps that complement each other. You would open a file by dragging it to the first app, edit it, drag the result to a second app, do some more editing, then drag the result to the next app and so on, and save only the final result back to disk. This is basically supporting the Unix philosophy of having many small programs that do one thing well, and combining them to achieve the desired result.

Instead what we now mostly have on all platforms are huge monolithic programs that try to cram every possible feature into a single package. One reason for this, in my opinion, is that a workflow similar to the RISC OS way of dragging from app to app is just too cumbersome on systems without drag and drop between apps, as at every step one has to go through the filing system: saving a file by browsing through a directory tree, then browsing the directory tree again when loading the file in the next app.


iOS (particularly iPadOS) seems to get this about right. Copy / paste / sharing seems to work very well for various work pipelines I use. I rarely if ever touch the filesystem.


Good point.

I tend to use my iPad as my primary computer - and once you get your head around the fact that everything runs off the share menu it's basically the same concept, without the physical dragging.


Yeah same here on the primary computer front. It's certainly the best hassle vs results computing device I've ever owned.


One of my favourite things about RISC OS was the font manager: At the time, it was normal for every application to manage its own fonts (Macintosh may have been an exception), but on RISC OS, fonts were shared between applications. Not only that, but the same renderer was used for screen use and printing, so you really had WYSIWYG, unlike the Mac, which used different rendering on screen and paper (using PostScript for printing and something else for screen use), which resulted in subtle differences, including things that lined up perfectly on screen being not even closely lined up in print (that hit me several times when using the university Mac). Furthermore, the screen renderer used anti-aliasing, so edges would not appear as ragged (on the fairly limited screen resolution), and allowed characters to be positioned at quarter-pixel resolution. And you could scale fonts to any size, as they were rendered from cubic Bezier curves.

This was 1988. TrueType for Mac was not until 1991, and it used quadratic Bezier curves, which are inferior to cubic, and AFAIR did not use anti-aliasing, and the hinting system was inferior to that used by RISC OS.
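For anyone curious, here is a rough sketch of what "rendered from cubic Bezier curves" means in practice (plain Bernstein form in Python, nothing RISC OS-specific; an outline renderer samples segments like this before filling and anti-aliasing):

    # Toy sketch: evaluate one cubic Bezier outline segment at parameter t in [0, 1].
    # A cubic uses four control points; TrueType's quadratics use only three per
    # segment, which is why cubics can describe some shapes with fewer segments.
    def cubic_bezier(p0, p1, p2, p3, t):
        u = 1.0 - t
        x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
        y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
        return (x, y)

    # Sample a segment at 16 points, e.g. as a first step before rasterisation.
    points = [cubic_bezier((0, 0), (10, 40), (60, 40), (80, 0), i / 15) for i in range(16)]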

I really wish Acorn had licensed their font technology to other companies, but like many of their other technologies (such as Econet and the video codec used by RISC OS) Acorn developed these for their own use only. It is almost a miracle that the ARM processor escaped the Acorn-only fate of so many other technologies.


Yes! This sort of universal drag-and-drop is something I've been hoping other OSes would pick up on, but you only ever get part-way implementations.


I liked that but sometimes it was a pain in the butt. Particularly if you'd forgotten to open the destination directory window already and the save thing was down 3 levels of menus.


Indeed. RiscOS embraced the file/folder metaphor in a way that modern UIs could really stand to learn from.


I really wish someone would pick up and modernize the ROX Desktop -- which is the Acorn RISC OS UI, on top of Linux.

http://rox.sourceforge.net/desktop/

It would partner really well with a distro such as GoboLinux or possibly even sta.li.


I think it'd be a good fit for Hello [0], AppDirs and AppImages being very closely related, but I don't think probonopd agrees, sadly.

[0] https://github.com/probonopd/hello


Yes, I also thought this was great, and I still often think it would be nice to have it back. I already have my folder open in Finder or Explorer, so why not let me drag the file there to save?

I guess you kind of have the same thing in macOS with the proxy icon.


I couldn’t stretch to an A440, but did get an A310. What a great machine. It was blindingly fast for the time (1987). In about 1994 it still outperformed a new 486DX100 PC.


Yeah the A440 was not cheap. I didn't understand money as much as I do now back then. Equivalent inflation adjusted is about £6500 now (!). Which is why my parents had a heart attack when I just blew it instantly on that. Absolutely would not have bought it if I had some more clue. But to this day I have no regrets as it gave me an excellent foundation, understanding and career in computing.

But yes the things lasted a hell of a long time. Mine went until 1998 at which point it was replaced with a Pentium II box running Windows dual booting with Linux. Got 10 years out of that machine which is remarkable considering the progress and change in the 1990s.

I'm on a Mac mini at the moment with an iPad Pro as a secondary machine so the moment an ARM Mac comes out I'm back with an ARM desktop again. It's all a big circle. Perhaps ARX was going to be an early macOS X contender... who knows?


I facilitated a talk at the London RISC OS user group by the lead architect of RO, Paul Fellows: http://www.rougol.jellybaby.net/meetings/2012/PaulFellows/

From what he said about ARX, I suspect not...


Thanks for this. Will read later.

Wasn’t ARX basically Unix reimplemented in Modula2 ?


I used an A310 and later an A440 as my main machines right up until the late ‘90s when we got one of the original bondi-blue iMacs.

Loved RiscOS - I knew it inside out and backwards, and it was so ahead of its time that using one of the terrible RM Nimbus 286 PCs they had at college felt like being suddenly transported back to the Stone Age.


Idle thought: I imagine that as soon as the Apple ARM development boxes drop, someone is going to start evaluating how to port RiscOS to Mac (either on the bare metal or, more likely, via virtualization).


Never thought of that. I'm getting even more excited now, which is a rarity as I'm a miserable sod.


Yeah that would be amazing.

As an aside: I'd love to see the alternate timeline where RiscOS continued to be developed, instead of getting bogged down with pointless legal issues for 20 years.

What would it be like? Would it have gone through a Mac OS-like transition from RiscOS Classic to RiscOS X?

If it was open sourced in the early 2000s, could it have become a viable competitor to Linux on the desktop and Android on mobile? As a lightweight but powerful OS for low-powered ARM devices it should have been perfectly placed.


Not sure whether you're confusing some OS or application performance with the performance of the CPU or are misremembering. An 8MHz ARM2 performed at about 5MIPS. That was impressive in '87, but in '90 my 20MHz 80386 ran at about 6MIPS. The 486DX100 of the mid nineties was much faster.


Possibly in terms of feel. They really ran a pile of crap on a 486DX100 in comparison. The RISC OS machines of the early 1990s had a similar feel to a modern PC on the performance front with every-day tasks.


I'm not sure if hack-job OSs are 'ahead of their time'. These were also the days of MS-DOS.

But it's certainly good to remember that not everything Acorn did was shiny and golden. They deserve to be remembered more than they often are, but they've had their fair share of misses too.


Oh yeah the entire 1990s was a shit show where they burned the entire company to the ground through incompetence.


> the OS was a quick hack job

You mean Arthur? I have to agree - the only impressive thing about it was that it was written in BBC Basic. I also spent too much on an A440 (and a multisync monitor that literally made my hair stand on end when I turned it on). Still, I have no regrets. As you say, RISC OS 2 delivered the goods. By day, I was working in monochrome text mode on a clunky old 16-bit IBM XT. By night, I was in the glorious high-res colour 32-bit future. One of the finest computers ever made IMHO (except for the mouse!)


Yes Arthur was the hack job I was referring to :)

My father was using a 286 by then. It was vastly inferior; however, he built some software with MS PDS (QuickBasic, I think it was) and made a lot of cash. Software in the blood.


> 4 million colours

That would be 22 bit, which seems quite unlikely. Perhaps you're thinking of 4096 (= 2^12 = (16)^3)? It seems only 256 of them could be shown at once and only 240 could be modified [1][2][3]? If those are correct then it is more limited than VGA, which could show 256 colours chosen from 262144 (= 2^18 = (64)^3). Some Archimedes resolutions were more like low-end SVGA though [1].

[1] https://lowendmac.com/2018/acorn-archimedes-computers/

[2] https://en.wikipedia.org/wiki/Color_depth

Edit:

[3] Original brochures! http://chrisacorns.computinghistory.org.uk/docs/Acorn/Brochu... and http://chrisacorns.computinghistory.org.uk/docs/Acorn/Brochu...


You're probably right. I do remember hearing about 4 million colours at the time, and being wowed by the amazingly subtle colour, but 4 million does indeed sound very unlikely.


I was an A3000 owner; it had a 4096 color palette (12 bits). Note "palette": the best resolutions supported 1 byte per pixel, so 256 colors on screen, out of 4096.
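For illustration, a 12-bit palette entry is 4 bits per channel; here is a generic sketch (not the actual VIDC hardware behaviour, just the arithmetic) of expanding one to modern 24-bit RGB by bit replication:

    # Generic sketch: expand a 12-bit RGB entry (4 bits per channel) to 24-bit RGB
    # by bit replication (0xA -> 0xAA). Purely illustrative, not VIDC-specific.
    def rgb12_to_rgb24(entry):
        r4 = (entry >> 8) & 0xF
        g4 = (entry >> 4) & 0xF
        b4 = entry & 0xF
        return tuple(c * 17 for c in (r4, g4, b4))   # c * 17 == (c << 4) | c

    print(rgb12_to_rgb24(0xF80))   # -> (255, 136, 0)
    print(16 ** 3, 2 ** 18)        # 4096-entry colour space vs. VGA's 262144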


Nitpick: That would be a 256 colour palette, from a 4096 colour colour-space.


You're right, but the terms were never used correctly back in the day.


[OP here]

4096 colours. :-)

Otherwise, yes, pretty much. I bought a 2nd hand A310 in 1989 and it easily outperformed the fastest machines my employer sold (e.g. the IBM PS/2 Model 70-A21, a blazing 25MHz 80386DX with 64 KB of secondary cache) by a factor of about 8×.

Given that the state of the PC OS art at that time was MS-DOS 3.3, which very few people bothered to put Windows 2 on top of, Acorn's RISC OS 2 was astonishingly advanced, too.


The acronym "ARM" originally stood for "Acorn RISC Machine." https://en.wikipedia.org/wiki/ARM_architecture


And present-day ARM cores are documented in (among many other) ARM's Architecture Reference Manuals, i.e. the ARM ARMs. They're so funny.


And the architectures are Cortex-A, Cortex-R and Cortex-M!


Ah yes, that too. So much funs.

I remember attending a lecture given by Steve Furber in 1985 in Cambridge (UK) on the new chip that he and Sophie Wilson had developed. The emphasis then was on having an affordable 32 bit rather than low power. It seemed incredible that such a small team could have developed a CPU at their first attempt that was faster than offerings from their much bigger competitors.

Anyway much of the commercial success of ARM must be down to Robin Saxby who was ARM's first CEO. His interview with Charbax is fascinating [1]. He inherited a company with two customers, one of whom was failing, some interesting IP but not much else. He seems to have had a relentless focus on providing customers with what they needed - for example providing the Thumb 16 bit instruction set to reduce code size when potential customers such as Nokia were reluctant to move to 32 bit designs.

[1] https://www.youtube.com/watch?v=FO5PsAY5aaI


> He inherited a company with two customers, one of whom was failing, some interesting IP but not much else.

Among that "not much else" were those extraordinarily talented and dedicated employees, were they not?


You're absolutely right and no offense intended to a pretty remarkable team of 12 who moved from Acorn. For balance here is Mike Muller from that team who just retired as CTO of ARM last year.

https://www.youtube.com/watch?v=ljbdhICqETE

and also Mike Muller actually presenting to Apple Computer staff in 1992!

https://www.youtube.com/watch?v=ZV1NdS_w4As

Edit: added Apple presentation


This is great, thank you!


I fired up my Armbook to write this comment. It's just an original Pinebook with a special SD card to boot into RiscOS, but it's quite neat and very fast.

It's also the first true ARM/RiscOS laptop that has been made since the original Acorn A4. That machine was the first portable ARM device ever made.

Operating systems have been born and died between the arrival of these 2 machines.


I still use Samsung DeX despite it falling out of popularity after the initial bump. They even had virtualization for a Samsung- and Canonical-provided Ubuntu image that you could run in an app called Linux on DeX.

It has clear advantages for video editing, especially on export, given the ARM chips often just spend time decoding video and encoding is quick, cheap and streamlined.

Can't speak for other industries, but as a dev most stuff in apt is arm64-compatible and therefore runs fine inside a container on Android (this was disabled with Android 10, unfortunately).


I have been using a pinebook pro for web development. Some docker images do not work for arm64 and RAM is constrained (setting up zram helps), but otherwise it is a great device.


Title is factually correct, but is somewhat misleading - it implies ARM arch existed and then the Archimedes was designed around it (and was the first desktop to have that property);

In fact, the ARM was designed for the Acorn Archimedes, and that’s why no desktop, or server, or anything, used it before the Archimedes - ARM stands for “Acorn RISC Machine”.


VLSI made the ARM2 and chipset available in early 1987. I seriously considered using that in a machine I was designing instead of the 68020. But I had been burned by Motorola announcing chips and then cancelling them when almost nobody used them (an update of the TRS-80 Color Computer chipset that could be used with the 6809 or 68000) so I decided to continue with the 68020 until somebody else came out with an ARM product. Acorn launched the Archimedes at the end of 1987 and I bought the chips and the manuals and wrote an ARM assembler and simulator. But the project was frozen and I only actually built the machine in 1992 though the chips were obsolete by then.

http://www.smalltalk.org.br/fotos/casa3.jpg

While the chips were indeed created specifically for the Archimedes, it would have been possible for some other company to have made a machine with them before Acorn.


I remember you from the Tunes project. :)

http://tunes.org/


Your story sounds interesting - is there more we can read about it?


Some pictures from 1983 to 2008:

http://www.merlintec.com/swiki/hardware/28.html

A talk about Smalltalk computers includes my projects, but I didn't make it very clear in the slides which ones are mine (13-15, 22, 24, 28, 29, 35, 36, 40, 42-54):

http://www.merlintec.com/download/2019_slides_jecel_fast1v2....

https://www.youtube.com/watch?v=tATpzsyC6OA


Yeah, perhaps something like, “The Acorn Archimedes was the first desktop using (and original target of) the ARM architecture,” would be more fitting. Or even, “The ARM architecture was originally designed for a PC, the Acorn Archimedes”


OP here. For reasons I do not understand, HN seems to have re-titled my submission.

The original title is: « Why Wait For Apple? Try Out The Original ARM Desktop Experience Today With A Raspberry Pi

Future MacBooks and iMacs are going to be powered using a ARM-based chip, but the Acorn Archimedes was the first desktop to use this architecture » I very slightly edited the 1st line to fit, abbreviating "Raspberry Pi".


Calling the Archimedes a PC would be kind of misleading. It was a personal computer in the sense used before the term implied an IBM compatible.


Aren’t people calling RasPi a PC? I think it’s come back around.


Nowadays ARM is just "arm". The acronym has no specific meaning.


I'm very curious to see this transition to ARM play out. There is a lot of software that will now be compiled for ARM that can potentially be used in open-source projects on much more cost-effective boards. Music (VST) plugins come to mind.


The thing about arm that's always troubled me, which I only see talked about infrequently, is the way it's licensed vs x86_64 chips. Arm chips are licensed in such a way that allows for a multitude of different proprietary configurations that all are essentially arm chips, but locked down at the whims of the manufacturers.

I understand power usage is a big concern these days, but I can't help but feel this aspect of arm is greatly overlooked and that the push towards arm devices is also partially inspired by the ability to lock down chip designs to a greater capacity than x86_64 chips.

Just look at the existing fragmentation among arm powered devices as is. I don't really like the thought of what an arm powered future could look like.

I really wish I could remember what video it was, but I remember years ago watching the CEO of ARM present a new chip with built-in emulation technology that, theoretically, would have allowed multiple operating systems to run at once or something. I remember the person talking to him asked if he foresaw this as the future, and he just laughed and said oh no, manufacturers would never dream of allowing this, and furthermore they'll likely use this technology to further specialize and lock down their chips.

It may be personal bias, but since then and other things I've learned about ARM, I've just had this lingering unease as to what them gaining dominance would mean for general computing.


The thing with x86_64 is that Apple/Amazon/Google/Samsung/Fujitsu/etc cannot license it. There are only two corporations with x86 licenses granted by Intel: AMD and VIA. These licenses are not transferable, and even a badly managed acquisition could result in the license being revoked.

So moving out of the x86/x86_64 world is really a no-brainer: why would all the electronic giants keep paying for x86 when they can get out of the de-facto duopoly? It seems that they are settling on ARM (Amazon Graviton, Fujitsu A64FX, Apple A12Z, etc.). It could have been another architecture, but it couldn't be x86_64.

On the other hand, there are other architectures besides ARM and x86: RISC-V, OpenSPARC, etc. Maybe one of these would be better for developers who don't want to deal with the ARM landscape fragmentation.


Why does x86 need a license anymore though? Any patents on x86 and its extensions right through to the first-gen Pentium 4 chips released in 2000 will have expired by now. Creating a clean-room implementation of an existing ISA doesn't necessarily mean you need a license - it's basically the same thing as reimplementing the Java VM specification: no royalties to Oracle required! (...for now).

I do appreciate that the x64 is more recent and will still be patented to heck - but there's only 3 more years until the first of those expire, methinks.

PAE for x86 means that systems needing more than 4GiB of main memory can be supported without a true 64-bit ISA (though each individual process is still limited to a 32-bit address space) - and PAE won't be patented anymore, as it was introduced around 1995.


Yes 64 bit is probably impossible, as there will be new patented features in pretty much every model coming out, so most of your instruction set would be stuck 20 years out of date. I think there's not much point in new 32 bit x86 processors now, as there are low power SBCs that would cover most uses (unless you need old proprietary x86 software, but then just buy a PC second hand).


Don't forget OpenPOWER. The Power ISA is royalty-free.


>The thing with x86_64 is that Apple/Amazon/Google/Samsung/Fujitsu/etc cannot license it. There are only two corporations with x86 licenses granted by Intel: AMD and VIA. These licenses are not transferable, and a even a badly managed acquisition could result in the license being revoked.

I don't disagree with you, but is that really a world developers or even consumers want? A world where each device manufacturer creates their own chip based around arm's processor designs that's incompatible with the next device's chip despite being virtually identical?

You're right, as a corporation looking to make money off all aspects of everything, it's a no brainer. For everyone else, it's a worse situation than the processor fragmentation of the 80's and early 90's.

x86_64 (AMD and Intel) took advantage of the situation of the time and provided a somewhat standard, though controlled by them, processor design that allowed tons of individuals and companies to flourish. Moving to yet another highly controlled set of fragmented platforms is a step back to the 80's.

Ideally, we should be moving forward towards a system that provides a universal set of instructions licensable by all, without allowing licensees to restrict their implementations.

There's a lot of problems with our current paradigm but backpedaling to an even more controlled and fragmented platform is not the solution.

I really believe open hardware is a serious issue that's only starting to be addressed, meanwhile big established players are making their moves to stop it.

Arm is a big established player, one of the biggest; they may not be as forward-facing as Microsoft or Apple or Google, but damn does that company have a huge influence on everything.

Widely adopted open standards for hardware that do not encourage proprietary licences on basic chip design are a definite no-brainer on the way forward.

The hurdles that need to be crossed though are greater than with software. Real tangible goods that require manufacturing require tons of overhead and complexity. This challenge is something I feel is the open source ideology's next great obstacle, and overcoming it will open up computing so much more than even open source software has for the world. But only if people as a whole come together and work towards it by not accepting anything less.


People seem fine with not being able to install the OS of their choice on iPhones, televisions and robot vacuums. So it seems unlikely that software compatibility is that important in that space. The manufacturers will just compile their code for the chip in question, move on to the next device and do the same thing. Of course there will be the ARM SBC corner for devs to play in.

Your vision of an open ISA like RISC-V winning is something that I mainly see benefiting manufacturers making closed hardware, not people who want general purpose hardware for running general purpose operating systems. For example Western Digital and nvidia are going to be using RISC-V as part of every SSD/HDD and GPU, but we will not get to run our own code on them. There are also hardware vendors selling RISC-V microcontrollers, but nothing that could run Linux or BSD, even SiFive only has Arduino-level boards out (they had Linux-capable ones but don't seem to be making more of them for sale). Eventually there might be a RISC-V entry in the SBC market, but I think ARM is still going to own that market for a long time.


Saw this [1] a few days ago. Looks like Microchip are going to use RISC-V to produce a SoC+FPGA product, their competitors use ARM for their products.

[1] https://www.crowdsupply.com/microchip/polarfire-soc-icicle-k...


X86, as I understand it, has the advantage of a fairly widely implemented ISA that specifies all sorts of things about how other things on the motherboard communicate with each other and with the CPU.


> specifies all sorts of things about how other things on the motherboard communicate with each other and with the CPU

That used to be true, but I don't know how true it is now.

Once upon a time, ISA and ATA were both wired right to the bus.

By the P3/Athlon era everything was probably on a super-I/O controller and/or south bridge, but that communication towards the northbridge/CPU was over PCI or ever fancier links.

But the clear cutting point to me is EFI. Most of the standards in how things communicate on older PCs were based on the IBM BIOS. Early clones couldn't exist until that was nailed down. But... EFI replaces all of that with a new abstraction.


I’m not a hardware person, really, but—even with EFI—my understanding is that x86 motherboards all implement standards that specify how the motherboard’s features can be accessed from software in a way that is fairly uncommon in ARM/MIPS/etc.


Basically correct; the key bit is that x86 usually has a bus that can be enumerated by the OS. So you boot, and the OS can look around and see what hardware it has (and where it all is). On most ARM, the OS boots blind and you have to tell it (usually via device tree) what devices are attached and where.
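As a concrete, Linux-specific illustration of "the OS can look around and see what hardware it has" (purely a sketch; the sysfs paths below exist on typical Linux installs with a PCI bus):

    # Rough illustration on Linux: list what PCI enumeration found, via sysfs.
    from pathlib import Path

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        vendor = (dev / "vendor").read_text().strip()
        device = (dev / "device").read_text().strip()
        print(f"{dev.name}: vendor={vendor} device={device}")

    # On many ARM boards the equivalent information is whatever device tree the
    # bootloader passed in, visible (if present) under /proc/device-tree/.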


There's a spec for putting a device tree in an ARM chip's rom, the problem being that it's often wrong in some way, requiring device-specific corrections.

Note, however, that there's a lot of "quirks" and similar device-specific corrections for x86 as well. These get presented to Microsoft by the manufacturer, and Linux gets to reverse-engineer them.


> There's a spec for putting a device tree in an ARM chip's rom, the problem being that it's often wrong in some way, requiring device-specific corrections.

I was aware of being able to have e.g. u-boot pass a device tree file to the kernel at boot time; are you talking about that, or something more generic?


Architecture is one thing, availability another. ARM is a lot more available on the global market, from multiple foundries.

What I think is interesting is what this will do to the low-power x86 front, not just in terms of technical innovation but also availability. I'm perfectly content building for x86, if it becomes as available as the Raspberry Pi class of devices ...


If I were a user in the music/film industry I would be worried if I depended on Apple.

Apple has shown pretty clearly they see themselves as a consumer electronics and services company above all. If all the "professional" users who rely on Logic or FC or Apple in general jumped ship, Apple wouldn't likely notice it in their bottom line (loss of mindshare notwithstanding). They have no vested interest in catering to these users and have produced hardware intended for them as an afterthought. The high cost of hardware and absence of middle tier products that make sense for these users is just icing on the cake.

"Professional" users of Logic or FC have little leverage to affect the course of Apple one way or another (unlike a more focused company whose business model is catering to these groups).

On the other hand Apple pro users do have a long history of being abused by the company so they might be alright with it in the end.

Anyway, I'd be looking for an out if I were them.


I think this is completely backward.

To a first approximation, Apple only cares about video producers, software developers, publishers, and musicians, in their Mac strategy. In approximately that order.

The only reason developers are on that list, is because XCode is how software gets written for the rest of their platforms; the dominance of Macs as personal workstations for Valley-style development is only a side effect of that.

These users provide a halo for everyone else. A college student might spend an extra $1000 on their laptop because they want to moonlight on some beats, or have a YouTube channel, that sort of thing.

I remember seeing this disconnect when the new Mac Pro was released and many here dismissed it as too expensive. Someone who works in CGI came around to say that, no, $50,000 is a normal amount of money to spend on their workstations and that their company was ecstatic to be able to keep working with macOS.


Yeah, unless you’ve actually priced out business-class setups, the Mac Pro’s pricing seems crazy.

I’ve repeatedly found that the “Apple Tax” is at most something like $500: other companies tend to advertise configurations that I wouldn’t buy, and when I tweak them to match my requirements, their price is not notably different from a Mac.


If you price like for like sure. My issue is usually that I almost never do. I usually fall into needing something akin to a mac mini but slightly bigger. The last mac pro I bought was a dual-G4 model when the gap between the mac pro and lower end models didn't seem so severe. (Although I obviously do not represent a sizeable demographic of the market).

For Apple, not catering to every particular consumer whim/need out there is just smart business. But for the consumer, that is just another weakness of the mac ecosystem.


If you include the iMacs (e.g. the iMac Pro), I think there aren’t really all that many gaps in the product line, spec or price wise. It’s mostly that you can’t necessarily get the machine you want in the form factor you’d like.


That's sort of a deal-breaker for a lot of use cases. Think of all those small corporate desktops out there which are beefier than a Mac Mini and wired to cheapo VGA-only 19" LCD monitors because finance isn't going to pay to replace a working screen. The iMac form factor is never gonna compete there.

And of course the neglected "I just want internal drives or expansion cards" demographic. I'm not sure if the cheese grater design fixes it, but with the trashcan Mac Pro, there were plenty of configurations where the equivalent Dell/HP/Lenovo workstation was one clean self-contained box stuffed with PCI-e cards and SATA/SAS drives, and the Mac Pro was an angry squid of Thunderbolt, USB, and power cables to feed an array of external drives and devices.

I understand that their brand is based on seamless design and we-make-the-decisions-for-you presentation, but it feels like there'd be an opportunity for them to use a small-scale clone program as a market research tool.

Have it sell the form factors that Apple won't. The long-whined-for xMac. Units with serviceable/cleanable designs for embedded markets. A mini-ITX Mac mainboard you can fit into existing kiosk/appliance designs. A rebranded Toughbook running MacOS. Something in a huge rackmount/server cube case that you can fit with a dozen internal drives. Frankly, I'd envision it as a wholly-owned operation that charges over-the-odds prices. If people are still willing to put their money where their mouth is, they can claim epiphany and make an Apple version of the same design. If not, they can declare the business a failure, shutter it in a year, and start over next time they want to trial a product.


>Someone who works in CGI came around to say that, no, $50,000 is a normal amount of money to spend on their workstations

You'd still be crazy to spend it on a Mac Pro unless you had very specific MacOS needs. Building a threadripper multi-Nvidia GPU PC is going to outperform it in any meaningful way with a modern CGI workflow.

The OS starts to matter very little when it's a choice between seeing an almost finished image in almost real time, which is what you get with CUDA-backed rendering engines, vs still having to chug away on the CPU.


I submit that it's not just about performance.

Both Linux and Windows still take a _lot_ more maintenance than macOS. If you have a room full of creative video artists, who are not techies, then you want them to have maximum productive up-time. You do not want to have to employ a small army of system support techies to keep those boxes running.

I used to work on a PC magazine in the West End of London. The mag was about Windows PCs, and was written on Windows PCs, but it was laid out on Macs. I supported both, and the servers and the network.

The PC side of things needed more than 10x as much support.

That's not materially different even today.

Secondly, it's not just about the boxes and their OS. It's also about the apps. A lovely fast sleek Linux distro is no use at all if it doesn't run the apps you need... and if those apps only run on one vendor's kit, or even just run best on that vendor's kit, then that is the kit you buy.


I get that, but it is about performance when GPU rendering is orders of magnitude faster than CPU rendering.

This isn't 20% faster, it's the difference between seeing the image in almost finished form and interacting with it with real time responsiveness vs having to wait minutes for an image. [1]

[1]: https://www.youtube.com/watch?v=Gf_P1G_wbK8&t=0m33s


I can believe that. What sort of support did the non-mac systems mainly need compared to the mac systems?


I dunno, I guess I've been around a lot of people who fall in the middle; that's a place that Apple doesn't cater to. By the middle I mean small studios (one or a few people). Their high-end stuff certainly caters to higher-end professional markets.

What's more, the way they've handled their software (Logic, the transition to FC X) doesn't suggest that professional users are foremost on their mind.


Most of high-end VFX is on Linux these days anyway... And a lot of the GPU stuff is CUDA, so nvidia only...

I know for certain that many VFX software companies aren't too happy about OpenGL being deprecated and Vulkan not being supported on recent MacOS releases.

IMO it's going to depend quite heavily on the demand for high-end VFX software on MacOS as to whether they bother with ARM ports if the MacPro does move over...


Speaking of VFX, while everyone here is in love with Vulkan, big names like OTOY are already using CUDA (for OctaneRender), because Vulkan does not provide what they are looking for.


(From my perspective) Apple needs to keep up the appearance of being a "pro" brand - a large part of their marketing to non professional users relies on it.


It seems to be such short-term thinking to me. They became a desirable brand because creatives both used and recommended them.


I'm currently in the market for a pro NLE. In the mockups I've seen of actual workspaces on some NLE websites, not a single Apple logo was seen (but other logos were).


Isn’t the music industry still reeling from Catalina? Lots of 32-bit audio software out there not getting updated.


anyone making music with a 4GB RAM ceiling is asking for a headache


I don't think it's full DAWs that are kaput, it's plugins and VSTs that are no longer working.


Can confirm, this has been deadly for those of us who have carefully curated a set of VSTs in projects over the years .. there are songs I cannot open any more, alas .. until I find replacements for some of the synths. Well, it's a forced upgrade, but nevertheless .. sad ..


For what it's worth, I believe that the RPi4B now also has an option for 8GB RAM.


And the 32 bit os can access it all (just not per process).


The joys of PAE.

For what it's worth, there are 64-bit distros available for the Pi4. IIRC Raspberry Pi OS (formerly known as Raspbian) has been fully 64-bit since the release of the 8GB model, and I know Ubuntu Server offered a 64-bit image for some time before that.



Umm... So then making music on a 1MiB machine should be impossible ?


Here's the original Lemmings soundtrack from the Archimedes version (i.e. the machine under discussion). The game probably ran with 1MiB RAM, certainly it would have worked with 2MiB: https://www.youtube.com/watch?v=QwXthGJfHLc

That is the sound from the internal speaker. The computers were also popular for MIDI control, even being chosen by professionals for this, though I can't find a video.


What if that limit only applies to individual plugin processes?


VSTs are already really easy to compile for ARM and iPads have been ubiquitously supported for a long time now.

https://iplug2.github.io/


VST? Is that like an AU?

Kidding, but since you mentioned iPad...


Well the caveat is that the code for a VST could be easily compiled for an ARM target and the result would be an AU.


It would just be a VST built for ARM, no? They’re different plugin interfaces.


Depends really. Something like iPlug2 (https://github.com/iPlug2/iPlug2) would allow you to write code once and compile to all these targets.

> There is a lot of software that will now be compiled for ARM that can potentially be used in open-source projects on much more cost-effective boards

Mac software will continue to rely on proprietary APIs. Software that didn't is most likely already ported.


VST3 and AU plug-ins have been running on ARM for a decade.


Point me to the ARM Line 6 Helix Native then. Of course it is possible to compile for ARM, but mainstreaming the architecture at the level of Apple implies that all the big players will be building for ARM.


>Music (VST) plugins come to mind.

There have been multiple attempts to build an x86-based VST host, over and over.

It will be about 1000x easier to do, if everyone builds for ARM.

However! ARM DSP/synthesis is fraught with minefields. One man's ARM is not a SHARC, etc.

It should be noted that there are already mainstream synth manufacturers shipping ARM platforms .. and there is no virulent unruly lunatic fringe like the synth lunatic fringe, btw.


Don't audio applications tend to use a ton of floating point? I thought that was one of the few areas where x86 still has a significant advantage.


You can also buy an ARM-based six-core laptop for $200.

https://store.pine64.org/?product=14%e2%80%b3-pinebook-pro-l...


I love my Pinebook Pro and I would buy it again, but do be advised it's still kind of an early product. Very much worth it but you'll run into annoyances that are the results of not being totally polished yet. If you aren't willing to play with it, might want to look at a cheap Chromebook instead.

That reminds me I want to try the Manjaro build out. I've heard good things. I actually really liked the ChromiumOS build but it did take some getting used to (I'm a ChromeOS n00b)


Agreed. It's fun to tinker with but I would not want to use it all day every day. The touchpad is the achilles heel here, in my opinion. It's just not good.


Touchpad is terrible but I adore everything else about this laptop. The parts it's built from, the community around it, the anti-branding, the price, it does so many other things amazingly and fresh that makes it worth getting over the touchpad.


I do enjoy it, and I think if it were $100 more and included a better display (brightness is not great) and a much better trackpad, I would love it.


If only it had more RAM.


It would be a dream machine with 32 gigs of RAM and I would abandon my current MacBook subscription instantly.


Zram helps. It compresses RAM and makes it available as a swap device, at the expense of some extra compute time.


Six cores means nothing when the cores are slow.


What makes a core fast vs slow?

Isn't this just Verilog code? Isn't there only a couple of ways to implement a fast CPU?


This machine has a six-core SoC with two fast cores and four slow ones. It's more or less equivalent to what you'd expect from a low-end laptop, which is consistent with the price.

In general, faster cores will dedicate more silicon to out-of-order or speculative execution, branch prediction, internal caches of various sorts, etc. The slow cores, OTOH, are most likely simpler, in-order cores with fewer smart tricks, aiming to be simple and low-power. Vendors add/remove/tune all these tricks and more (memory channels, IO lines) to put processors in a given spot.


It means you get to work with issues that emerge from asymmetric multiple cores. It's more fun if you like to look to the metal from up close.


Really good talk about this computer from CCC last year for those who haven't seen it yet: https://media.ccc.de/v/36c3-10703-the_ultimate_acorn_archime....


I have an Acorn Archimedes in my basement. Even though I don't use it anymore, I won't throw it out. I did a lot of things on it back then. It was just miles ahead of anything Microsoft did at that time. Good times.


Might want to keep an eye on the battery if it's an A540, A3000, A5000, A3010, A3020 or A4000.


It's an A440, but actually I also have an A3000. Are there any specific problems with these batteries?


Most likely they leak and the contents corrode the traces on the PCB they're soldered on


They need to be removed. Cutting them out is alright.

If they haven't leaked already, they are about to.


Thanks. The A3000 might end up in a museum, but I'll look right into it.


> I did a lot of things on it back then

I recall playing E-Type[0] for hours on the school's Archimedes back in the day. Cracking game!

[0] https://en.wikipedia.org/wiki/E-Type_(video_game)


I made my first ever computer program on an Archimedes.

It was a ‘compatibility calculator’ where you put in the names of two classmates and it gives them a percentage of love between them. We used to do it on paper, taking all the instances of l, o, v and e in each name, then adding each number to the adjacent one until you just have one number.
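Something like this, if memory of the paper version serves (a toy sketch; the exact rules of the playground game varied):

    # Toy sketch of the playground "love calculator": count the letters of "love"
    # across both names, then repeatedly replace the row with the digits of the
    # adjacent sums until two digits (the percentage) remain. Rules varied.
    def love_percentage(name1, name2):
        combined = (name1 + name2).lower()
        row = [combined.count(c) for c in "love"]
        for _ in range(50):                      # safety cap: some inputs can oscillate
            if len(row) <= 2:
                break
            row = [int(d) for a, b in zip(row, row[1:]) for d in str(a + b)]
        return row[0] * 10 + row[1]

    print(love_percentage("Alice", "Bob"))   # e.g. -> 32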

It went pretty viral within our class of 20.


I wrote my first ever (and last) virus for the Archimedes...

Some history:

Waaay back in the mists of time (1988) I was a 1st-year undergrad in Physics. Together with a couple of friends, I wrote a virus, just to see if we could (having read through the Advanced User Guide and the Econet System User Guide), then let it loose on just one of the networked archimedes machines in the year-1 lab.

I guess I should say that the virus was completely harmless, it just prepended 'Copyright (c) 1988 The Virus' to the start of directory listings. It was written for Acorn Archimedes (the lab hadn't got onto PC's by this time, and the Acorn range had loads of ports, which physics labs like :-)

It spread like wildfire. People would come in, log into the network, and become infected because the last person to use their current computer was infected. It would then infect their account, so wherever they logged on in future would also infect the computer they were using then. A couple of hours later, and most of the lab was infected.

You have to remember that viruses in those days weren't really networked. They came on floppy disks for Atari ST's and Amiga's. I witnessed people logging onto the same computer "to see if they were infected too". Of course, the act of logging in would infect them...

Of course "authority" was not amused. Actually they were seriously unamused, not that they caught us. They shut down the year-1,2,3 network and disinfected all the accounts on the network server by hand. Ouch.

There were basically 3 ways the virus could be activated:

- Typing any 'star' command (eg: "* .", which gave you a directory listing). Sneaky, I thought, since the virus announced itself when you did a '* .' When you thought you'd beaten it, you'd do a '* .' to see if it was still there :-)

- The events (keypress, network, disk etc.) all activated the virus if inactive, and also re-enabled the interrupts, if they had been disabled

- The interrupts (NMI,VBI,..) all activated the virus if inactive, and also re-enabled the events, if they had been deactivated.

On activation, the virus would replicate itself to the current mass-storage media. This was to cause problems because we hadn't really counted on just how effective this would be. Within a few days of the virus being cleansed (and everyone settling back to normal), it suddenly made a re-appearance again, racing through the network once more within an hour or two. Someone had put the virus onto their floppy disk (by typing *. on the floppy when saving their work, rather than the network) and had then brought the disk back into college and re-infected the network.

If we thought authority was unamused last time, this time they held a meeting for the entire department, and calmly said the culprit when found would be expelled. Excrement and fans came to mind. Of course, they thought we'd just re-released it, but in fact it was just too successful for comfort...

Since we had "shot our bolt", owning up didn't seem like a good idea. The only solution we came up with was to write another (silent, this time :-) virus which would disable any copy of the old one, whilst hiding itself from the users. We built in a time-to-die of a couple of months, let it go, and prayed...

We had actually built in a kill-switch to the original virus, which would disable and remove it - we didn't want to be infected ourselves (at the start). Of course, it became a matter of self-preservation to be infected later on in the saga - 3 accounts unaccountably (pun intended :-) uninfected... It wasn't too hard to destroy the original by having the new virus "press" the key combination that deleted the old one.

So, everyone was happy. Infected with the counter-virus for a while, but happy. "Authority" thought they'd laid down the law, and been taken seriously (oh if they knew...) and we'd not been expelled. Everyone else lost their infections within a few months ...

Anyway. I've never written anything remotely like a virus since [grin]


Did a sorta similar thing on RiscOS, in BASIC, on our high school computers. It would hide in application directories and put itself in as !Boot (so it'd run when the icon was viewed) and then copy itself into any other application that was run/viewed (I forget which.)

Being the smartarse I was, I also unlocked the hard drive of some of the computers (hold some key on boot), put it into an installed application, and then relocked it behind me. This way anyone using that computer was guaranteed to get it on their disk.

My fall came when I left a disk with the raw source on it with my name on it in the lab and the teacher found it and figured out what it was.

Luckily the teacher was a good sort, I got a talking to from the principal, and banned from the lab for the rest of the school year (but it was September, so only a couple of months.) The next year they started giving me a lot more access to our various school machines so that I could do whatever projects I wanted with them, and once they start to do that you don't want to piss them off and have them take it away, so I behaved :)


Oh wow... very similar story from me that started from not being able to gain access to the "End of term only" games that the IT admin let us play.

This virus was able to hide in the `!Boot` and `!Run` directories and would share anything it found of interest.

Long story short, Pineapple Software came in - said it was pretty advanced and ended up patching their antivirus for it!

Pretty scary for a 12 year old!


Yay Acorn Archimedes. Programming it was such fun after a BBC Model B, like stepping from a cupboard into a cathedral.


It didn't hurt that the instruction set and register layout was an absolute joy and so easy to learn compared to x86.


I'm sad I missed out on it; I jumped from a BBC B to an "IBM compatible".


I moved on from a BBC B to an Archimedes. Still have both machines in my basement. As I had to use a PC for the IT studies back in those days, I used a PC emulator. This allowed me to use Pascal and other stuff when needed. So I didn't get an 'IBM compatible' until much later.


The Acorn Archimedes was where I learnt to code properly. Our teacher (Bruce Dickson, legend) said that we weren't allowed to play games on the Archimedes machines unless we coded them. A completely transparent ploy; we knew we were being played, but it worked. I made a game called Stick Fighter and a tanks game.

My favourite app was called Euclid, which was a 3D app like Blender. I built cities, an American semi truck and all manner of stuff. My love of gaming and 3D continues today.


RISCOS still lives! You can easily run it on a RPi (amongst other things):

https://www.riscosopen.org/


I've tried RISC OS on RPI years ago, but it didn't work that well. Sometimes it dropped mouse clicks for some reason, and there were also pauses of about a second at seemingly random times (though this might have been caused by a slow SD card).


First of all, the article is very Apple-centric, as if the author is completely unaware of anything outside his bubble.

Back to the article: what the author kind of missed is that ARM was Acorn's spinoff of the CPU business after their computer flopped.

As an owner of one, I found it interesting, but the OS was really odd (for example, "." was the folder separator, which caused "interesting" problems for C compilers). They also promised high-performance x86 emulation using their RISC magic but failed to deliver.
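To illustrate the oddness, here is a hypothetical toy conversion, relying on the convention that RISC OS uses "." as the directory separator and by convention keeps C sources under a "c" directory (the helper name is made up, purely illustrative):

    # Hypothetical illustration: map a Unix-style path to the RISC OS convention,
    # where "." separates directories and the "extension" becomes a directory
    # (so src/hello.c lives as src.c.hello). Toy code, not a real tool.
    def unix_to_riscos(path):
        directory, _, filename = path.rpartition("/")
        stem, _, ext = filename.rpartition(".")
        parts = directory.split("/") if directory else []
        if stem and ext:
            parts += [ext, stem]       # extension becomes a directory: c.hello
        else:
            parts.append(filename)
        return ".".join(parts)

    print(unix_to_riscos("src/hello.c"))   # -> src.c.hello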

Transmeta was eventually able to deliver what Acorn promised, but that came 5-10 years later.

Anyway, what finally saved Acorn was Nokia and Ericsson picking ARM up for phones due to low power usage.


The x86 daughter-boards in the RiscPC were a neat trick.


Yeah, but that's all it was, a neat trick. Not really well thought out from a business POV, and technically they had some issues, especially since Acorn hardware was so different from a standard PC.



It suddenly occurs to me that RISC OS will now be a viable alternate operating system for modern PCs.

And I'll be able to run BBC BASIC natively on a Mac.


Apple has said they’re not going to support booting non-Apple operating systems on the ARM Macs:

“We’re not direct booting an alternate operating system,” says Craig Federighi, Apple’s senior vice president of software engineering. https://www.theverge.com/2020/6/24/21302213/apple-silicon-ma...


It really frustrates me.

Apple put so much effort into very beautiful hardware only to completely lock in their users. I understand why, but they have one of the only market positions that could charge a premium for a computer ecosystem which is genuinely free from the cancer of the "free" business model of Google and Facebook, and they seem to genuinely care about avoiding that model while simultaneously locking people in to an almost Soviet degree.

I'd love to own a beautiful arm laptop, but I find apple software just dull - not as "big" as windows but not as fresh as Linux.


It will be a challenge to boot anything else than macOS on those machines. They won’t support Boot Camp, and require boot binaries to be cryptographically signed by Apple (https://developer.apple.com/videos/play/wwdc2020/10686/, starting around 15m30s)

Virtualization on top of macOS should work, though.


Hopefully someone might challenge this during the oncoming wave of antitrust investigations.


"Now it’s back to competing on performance, and its ability to succeed will likely depend on how well Apple has matched its surrounding silicon hardware accelerators to the needs of developers and consumers. "

One thing the author seems to have missed is that what sets Apple apart from most ARM-based SoC developers is that Apple has an architecture license. They don't use the cores provided by ARM. They implement the ISA by themselves. And they are really good at it. Look at the A12 chips for example.

The ARM A-7x cores are quite powerful. But we can be sure Apple will take full advantage of the higher TDP and increase performance beyond what ARM provides in their cores.

Also, expect more CPU cores/device than what Intel provides.

The thing about the cores is also true. Tighter integration with coprocessors for AI/ML and crypto (and an improved ability to shut them down, increase/decrease clock speed, etc.) will also be a boost compared to x86, where it is either done in SW or using an internal or external GPU.


Apple is far from the only company with that kind of license (Qualcomm, Nvidia, Samsung, even Intel).


I hope you noticed "most". I did not say "the only". There are many, many ARM SoC developers. Very few of them have the right to modify or develop their own ARM core.

It is also a question of what you do with your license. The cores in Qualcomm Snapdragon chipsets, for example, differ quite little from the cores from ARM.

Similarly with Samsung Exynos: they are basically A7x cores with some other big.LITTLE combination.

Apple, on the other hand, have repeatedly shown that they are both willing and capable of doing their own microarchitecture. I would like to credit the acquisition of P.A. Semi for starting this, but they of course are not the only people Apple have added to build competence and capability in this area.

https://en.wikipedia.org/wiki/P.A._Semi


Not trying to start an argument with you, but this is incorrect.

Essentially all major vendors have this license, and essentially all major vendors do their own heavily modified designs. They have different goals, priorities, budgets and abilities, but if you buy a COTS SoC from a major vendor you can bet your neck it's not an original ARM hard macro or even a lightly modified version of one.

In fact, the only major 64-bit pure-ARM SoC I can think of right now is the HiSilicon (Huawei) Kirin 960.


Not 100% sure that Intel has a 64 bit architectural license that is in the public domain (happy to be corrected). AMD certainly does.


Not sure what the status is today, but 2015 the answer was no on Intel:

https://www.electronicsweekly.com/news/business/finance/arm-...

"The company has seven publicly announced 64-bit architectural licensees: Applied Micro, Broadcom, Cavium, Apple, Huawei, Nvidia, AMD and Samsung. It also has another seven publicly announced 32-bit architectural licensees, of which five – Marvell, Microsoft, Qualcomm, Intel and Faraday – do not have a 64-bit licence."


Thanks, I did not know that. I wonder why AMD has a license. These licenses are _not cheap_ !!

Fun fact: AMD Zen processors contain at least one ARM cpu but as far as I know they are always 32bit and not modified from original hard macros.


They used to be quite keen on ARM in the datacentre - the last slide here [1] is interesting! Seems like the strategy changed.

[1] https://www.extremetech.com/computing/175583-amd-unveils-its...


Interesting history piece.

But to answer the question from the title, since people wonder about the macOS experience on ARM: as the article rightly says, phones (thus probably the most used end-user computers) are on ARM as well and are already being used for many things desktops were used for before ... :)


Also, even the humblest Raspberry Pi board is orders of magnitude faster than the fastest Acorn box.


Apple's investment in ARM when it was originally spun out from Acorn was a very far-sighted decision in retrospect.

Another curio, for a while Intel made the fastest ARM processors, after they bought StrongARM from DEC...


Raspberry Pis don't provide desktop-class performance. I've owned several and they could never be my daily drivers (they're just not very fast, even when running Raspbian).

These ARM servers however.... (32-core and 48-core ARM processors)

https://www.asacomputers.com/Cavium-ThunderX.html


I had naively thought the term “Apps” was coined by Apple during the initial iPhone release, as it had been called “Applications” on the Mac ever since the time I had switched (from XP to OSX Tiger).

Seeing the Archimedes screenshot with “Apps” in the taskbar made me realize I was sorely wrong. Does anyone have any interesting history on who was first to start using “Apps”?


GEM ( https://en.m.wikipedia.org/wiki/GEM_(desktop_environment) ) treated files with the .APP extension as programs.


“App” is the obvious abbreviation of application, I think the iPhone sort of publicized it, but I’ve always used phrases like “spinning up a new app” even with things like Delphi in the 90s


Calling computer programs "applications" is a neologism. Once that catches on, shortening to "apps" is automatic and happens everywhere at once, somewhat the way programs became "proggies" in places.

Some of us are still cheesed at having our old, comfortable programs re-branded to dodgy new applications:

"Programs run, applications crash."


“Applications” doesn’t seem to be particularly new to me: e.g., Rapid Application Development (RAD) tools like Delphi and Visual Basic were fairly popular in the 90s. And the jargon file attests to the use of the term to signify programs for non-developers going as far back as 1990: http://www.catb.org/jargon/oldversions/jarg211.txt


The suits tried to get people to call what they sold "applications" right from the beginning. "Nobody will pay for a program, but they will pay anything for an application." Back then, an application was, literally, what you were using the computer to get done: a spreadsheet would be a program, budgeting an application. The suits bought and sold "applications"; users ran programs.

Then Eternal September happened, there were too few to push back, and all the other stuff paying customers used to demand--training, printed manuals, not crashing all the damn time--fell away, and finally the program became, itself, the whole application.

Then the users needed an abbreviation for what it was that was crashing: the "app".


In the '80s, I wasted far too much time writing PROGRESS 4GL "Applications", which was 4GL code applied to a particular database...


This version of the jargon file from 1990 has an entry for “app”

http://www.catb.org/jargon/oldversions/jarg211.txt


Wiktionary has a cite for the phrase “killer app” in ‘99

https://en.m.wiktionary.org/wiki/app


iOS .app was inherited from OS X .app which was inherited from NeXTStep .app, circa 1988.


Fun thing: executable application folders like those in OS X were a RISC OS thing.


And a NeXT thing.


And the original Mac had a very similar concept with a single file that had an embedded resource fork. And even DOS applications were very often distributed as single folders with all the application files in them, you just needed to run the actual binary manually.

The idea that one-object = one-application is at least as old as desktop computing itself because it fits seamlessly into the file/folder metaphor. Sadly, modern desktops have largely degraded in this regard.
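To make that concrete, the two conventions look roughly like this from memory (names like !MyApp and MyApp.app are just illustrative; !Boot, !Run and !RunImage are the usual RISC OS pieces, and Contents/ is the NeXT/macOS bundle layout):

    !MyApp                 (RISC OS application directory)
      !Boot                run when the Filer first "sees" the app (sets icons, file types)
      !Run                 obey script executed when you double-click the app
      !RunImage            the actual executable
      !Sprites             icon sprites

    MyApp.app              (NeXT/macOS bundle)
      Contents/
        Info.plist         metadata
        MacOS/MyApp        the actual executable
        Resources/         icons and other assets

Either way, the Filer/Finder treats the whole directory as the thing you launch, which is what makes the one-object = one-application metaphor work.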


Well, on macOS they come straight from NeXT, so that's no big deal. What's interesting is that they were on other systems in the '80s too, whereas in the 2000s they were one of the things that made people say macOS was better than anything else, since only it had these special folders.


We used the term in Newton, circa 1991.


... and a peer here mentions that the term existed in GEM as well. I worked on that a bit, too. Guess I forgot :-)


Someone should port RISC iX to the Raspberry Pi.


RISC iX was just BSD ported to Acorn hardware, so there's very little to distinguish it from any ARM port of a BSD OS.


It'd still be cool.


I think you can emulate it, but it's a bit awkward.


The interesting aspect is that it had IXI's X.desktop. It ended up in SCO Unixware, but it's an interesting historical artifact.


Yeah, also an Econet implementation and some novel on-the-fly executable compression in the file system, I think? The whole thing was slightly hamstrung by the comparatively huge page sizes of MEMC...


My first paid gig was software for the Arc while I was still in school (BBCs and Arcs were the computers of choice of many U.K. schools at the time). My first HTML was written on an Arc.

Fond memories of the platform. Lander is still hard as fuck to this day...


I would really love to see the macOS 11 beta running on something other than DTK as a hackintosh.


Will Hackintosh be a thing once the ARM transition is complete? Surely Apple will bake some security into their silicon that will prevent OS11 from running on anything but Genuine™ Apple Bionic.


Doesn't need to be security-related at all: where will you get the drivers for your Hackintosh ARM GPU, network, Bluetooth, etc.? With Intel, Apple was using standard off-the-shelf components; with ARM it's all proprietary and baked into their SoC, so why would they write support for any other SoC? Not to mention they will probably include proprietary, chip-specific features and assume they're present, which they won't be on other devices.


> where will you get the drivers for your hackintosh ARM GPU, Network, Bluetooth, etc.

The community --- providing it still exists --- will write them... or at least some of them. When I was playing around with Hackintosh many years ago (when they were still 32-bit), I took the lack of drivers as a motivation to learn how to write some. Of course, the PC legacy really helped with things like BIOS and related standards, so ARM will definitely have a much steeper curve.

(The diversity of ARM platforms in general put me off writing code for them --- it feels like a losing battle when there are so many platforms, especially when most of them are unlikely to even exist for long. The fact that it is the same CPU core matters little when everything else is different. Maybe it is the same for many others.)


How long did it take to get open-source drivers for something like the Raspberry Pi GPU? On the flip side, is there any support for recent Nvidia cards in Hackintosh?

I seriously doubt the effort required to write a functional GPU driver is going to come from OSS for something as fringe as Hackintosh on a random ARM SoC; like you said, they get replaced constantly and the landscape is fragmented.


I think an important part of the Hackintosh ethos is running on commodity hardware. Even though I'm sure Hackintosh will be ported to ARM, I'm not sure it will be very popular because of a lack of commodity hardware. This of course may change in the coming years as ARM moves more into servers and desktops. Apple's move will surely accelerate this.

In the meantime, Intel Macs will be alive and well. There is a 100 million Intel installed base. I doubt that Apple will be treating those customers badly in terms of support and 3rd party software vendors certainly won't. At current sale rates it will take Apple 5 years to draw even with an ARM installed base. Also the question of how popular the move to ARM will be is still open, given the severe backwards step in terms of compatibility. Apple may continue to offer some Intel hardware a bit longer than they intend.

Finally, for me at least, the thought of being stuck at some final version of macOS that supports my Intel Hackintosh is kind of appealing. Each major update seems like a step backward. Hopefully developers will appreciate these Intel diehards and support them longer than Apple does.


Very certainly not.

Apple can add all sorts of logic to their chips that competitors won't be able to replicate.


Agreed. People need to stop thinking about this as a change in ISA. The important part comes when Apple starts loading their silicon with custom, specialized components aimed at improving OSX.


> when Apple starts loading their silicon with custom, specialized components aimed at improving OSX.

That's already the case, and was advertised. See the keynote picture that's used (for example) in this article:

https://appleinsider.com/articles/20/06/23/why-the-macs-migr...

"Secure Enclave", "Machine Learning Accelerators", "HDR Display support", "Neural Engine", it's all already in there.


Apple is unlikely to have any major SoC feature that is generally useful to end users and stays exclusive to their chips for more than a year or two. Even if Win10 ARM vendors, or whatever's coming for PC, are playing catch-up with Apple, we're unlikely to see major divergence. People still need to get roughly the same things done whether they prefer Apple or otherwise.


It's not enough to have an equivalent capability - if you want a Hackintosh, you'll need to present those capabilities in a way that fools macOS into believing it's running on supported hardware. For now, ARM has been limited to the low-power desktop/mobile and high-end server niches. Apple Silicon will go for the space between as well.

OTOH, Apple is smart enough not to tightly marry its software to a given hardware architecture - they were compiling Rhapsody for Intel from day one and a lot of iOS is directly lifted from macOS. Its lineage started running on 68K, then HP-PA, SPARC, x86, then mostly PPC, then mostly x86 again, and now it'll switch to mostly ARM.


Apple's custom silicon play is not about the users.

It's about the media vendors.

Apple can afford to outright buy most of them. What I think they are doing is preparing themselves for a time when they can own media, as well as the devices you use to view/experience it.

I don't think the piracy wars are over, guys. They're gunning for us, and unless we've got tunnelling electron microscopes and other tawdry machines, it's gonna be harder and harder to crack.


Hypervisor + emulation? Unless they did some very serious obfuscation of the binaries they distribute, it wouldn't be particularly difficult to guess what their new hardware did. Obviously it wouldn't be as fast, but it seems doable.


You'd need a computer that's substantially more powerful than a Mac in order to emulate one. I believe Apple's premium will not be high enough to make this economically viable.


Hackintosh will be a thing as long as the last non-T2 Intel Mac is supported. That could be anywhere from 3-6 more years.


As long as there are mutable software binaries, DRM can be subverted. If core OS11 software lived on a special R/W chip instead of a disk, that'd be quite a move. Apple hasn't even done that with the iPhone, though.


No one has managed to get iOS running on non-Apple hardware, so how will Apple-Silicon-only macOS be any different?

I don't see it happening myself.


Not true. Corellium has. And if they've virtualized it on non-Apple hardware, others can, too.

https://twitter.com/corelliumhq


Huh, I'd assumed they were doing some type of emulation thing (which is still impressive as heck). Are they actually executing native instructions on a non-Apple CPU?

Actually, I'd love to read more about Corellium in general—have they made any technical details public, even high level? They seem to be very secretive about how it all works—which, to be fair, I can totally understand.


They have: https://corellium.com

Though there is an ongoing lawsuit as a result.


Welp, here you go then.

https://www.reddit.com/r/hackintosh/comments/hfa7ys/big_sur_...

There are a few other examples on that subreddit.


That's the Intel version, and since OP mentioned the DTK, I'm pretty sure they meant to imply the ARM version.


They could've meant anything, but since macOS 11 is ARM and Intel and they mentioned a "hackintosh", I assume they meant Intel. No big deal. Maybe they will clarify.


The first desktop to use the ARM architecture was the BBC Micro: the ARM was used as a second processor attached to it while the chip was being developed.

The Archimedes was the first ARM desktop computer.

An important distinction I feel.


...which is specifically explained in the article! Did it get something wrong?


You missed my point ...

The (current?) title of the HN news item is:

The Acorn Archimedes was the first desktop to use the ARM architecture

Which is technically incorrect.

It doesn’t even vaguely match the article's title of:

“Why Wait For Apple? Try Out The Original ARM Desktop Experience Today With A Raspberry Pi”


OP here. As I commented elsewhere: this is not the title that I submitted to HN, and it's not the title on the target page. Someone edited it and I am not sure why.


Agreed, it would be nice if HN was less opaque about such things.

Thanks for the clarification though, it’s much appreciated! As was the article!


I would have started infant school in the UK in 1995 and we still had one of these in my classroom until I was in Year 2. Great educational computers.


These felt so futuristic at the time: anti-aliased fonts, nice smooth scrolling and window dragging, all while PCs were stuck on Windows 3.1.


This is kind of a dumb article. It's not like those of us in the computer industry don't already know about ARM. Before Apple's announcement you could already run RISC OS on a Raspberry Pi; it is one of the more prominent install images for the RPi. And it's not like RISC OS is what macOS on ARM is going to look and feel like.

Ugh. These kind of articles annoy me.


For what it's worth, I wasn't even aware RISC OS existed, and I'm in the industry. Just a little perspective.


I had heard of RISC OS but never used it. The screenshot looks pretty ahead of its time for 1989. Nicer looking than NeXT for example, though similar aesthetic.

[Edit: After some digging around it seems like perhaps the screenshot features the visual style introduced in the 1999 release or thereabouts?]

Seems like on HN of late there's some interest in 1990s UI revival. I'm thinking RISC OS looks like a good candidate for that.


I also didn't use RISC OS - I came close in school, but the ancient machines always got replaced the year before me.

However, I have used ROX for a long time - short for "RISC OS on X" - it is/was basically a file manager based around RISC OS, along with some associated desktop technologies (like app directories). I think it's fair to describe it as dead as a doornail (I just upgraded to Ubuntu 20.04, and since it no longer comes with pygtk2, various tools are dead). Some of its innovations found their way into other Linux desktop technologies, such as shared-mime-info.

But if you want to see what once was, it's a place to begin.

(ROX was actually a lot nicer than RISC OS, when I finally installed it on my RPi.)


ROX was so good. I had it on my first Linux machine - a Mandrake install that I only ended up with because it was the only distro I could find with an installer that would give me working video card settings. I lived in that UI.

Time for a resurrection, maybe?


In the days when computers were places you stored your data, ROX was great. It was the first GUI that made me manage my files using a mouse. Nowadays, computers are mostly glorified web browser containers. And the way Gnome is so integrated into itself now means you can kinda either use a good window/session manager and Nautilus (with a good file manager on the side), or a crappy window manager, no session manager, and a good file manager.

(In the olden days I had Sawfish set up so that it had a button that would look at the path an app displayed, or interact with the app via whatever scripting it provided, and show the active document in ROX. Ah, so great. Sawfish is, nowadays, too primitive for me - I want an Exposé-style feature to find my window.)

I still like to switch to ROX to rename or move files in my codebase rather than do it in IntelliJ. I had a plugin for that for a while but I never quite got around to setting it up again on some migration or upgrade or something. Nowadays it's only VIM/gVim that responds to F12 and shows the document location.

I have given some thought to porting it to Gtk3 and Gtk4. I guess getting an infrastructure around it isn't going to happen - but the Filer was the centre.


It's still part of the default desktop for the antiX distro (ROX-IceWM).


Only the file manager -- not the whole desktop environment.

The desktop manager is called ROX-Session and provides an icon bar, a pinboard and so on as well as the filer windows.


> I have use ROX for a long time - called RISC OS on X - it is/was basically a file manager based around RISC OS, as well as some associated desktop technologies (like app directories). I think it's fair to describe it as a dead as a doornail

Yep. That the project basically languished in obscurity and died out is emblematic of why I can't take the Linux desktop seriously.


I don't understand that sentiment. There are dozens of UIs on X. Some have had recent development and some haven't. A lot of the ones that haven't are still pretty usable, they're just kind of "done".


There are dozens of UIs on X and all of them behave pretty much the same way re: applications and file management. Hell, there isn't even a maintained spatial file manager anymore.


This page was on HN a couple of months ago. It shows the 1992 version (RISC OS 3.11); the 1991 version (RISC OS 3) is almost identical, and the 1989 version mostly differs in functionality rather than appearance.

The final section (Decoration) shows part of the re-skinning/theming possible in 1992. Different icons (e.g. 3D appearance) were also a standard part of a custom theme, but proportional text under icons, on menus and on titlebars was introduced in 1994, with RISC OS 3.5.

https://telcontar.net/Misc/GUI/RISCOS/


From memory, the default RiscOS was kinda flat looking, but there was some application you could run that added a theme to it that made it more like the screenshot. This would have been in the mid-90s ('96 or so) when I was using them.

I do remember if you ended up in a mode that wasn't running that theme and got the default, it felt pretty old and boring, but the theme jazzed it up quite a lot. Can't remember the specifics about it though.


> seems there’s some interest in 1990s UI revival

NeXT and BeOS being the ones that really stick in mind.


It was a magical machine.


Photoshop; Adobe Creative Suite. I also missed Source Tree, but that one didn’t change the kind of work I could take.

Also I think I’m waiting for apple to put a freaking touch screen and keyboard on the same product. (Touch Bar does NOT count)

Then again, I’m an ARM desktop user already!

Why wait for Apple? idk, let’s wait and see.



