Acorn had a tendency to be ahead of its time. Our first home computer, the Acorn Electron, could be expanded with a 3.5" floppy drive. Later, when we got a PC, we moved from 3.5" to 5.25" floppies. That was my first disappointment in having to make a step backwards in tech.
The Archimedes' successor, the RISC PC, could even have an Intel processor as a co-processor. Or, instead, a card with 4 or 5 more ARM processors. At the time it was probably the cheapest way for a consumer to get a multi-processor machine.
I bought a fully stacked A440 when they were released, thanks to the kindness of a deceased relative and much to the dismay of my mother, who said I should have bought history books and art materials instead.
Really, if they'd had something of RISC OS 2 quality on day one (which would have been a monumental feat, like the hardware was), they'd have done better. I enjoyed every moment of using the machine after RISC OS 2 came out. Before that, I was slightly scared I'd burned a monumental amount of money.
This way it is possible to create workflows where you have small specialized apps that complement each other. You would open a file by dragging it to the first app, edit it, drag the result to a second app, do some more editing, then drag the result to the next app, and so on, saving only the final result back to disk.
This is basically supporting the Unix philosophy of having many small programs that do one thing well, and combining them to achieve the desired result.
Instead, what we now mostly have on all platforms are huge monolithic programs that try to cram every possible feature into a single package. One reason for this, in my opinion, is that a workflow similar to the RISC OS way of dragging from app to app is just too cumbersome on systems without drag and drop between apps: at every step you have to go through the filing system, saving a file by browsing through a directory tree and then browsing the tree again to load the file into the next app.
I tend to use my iPad as my primary computer - and once you get your head around the fact that everything runs off the share menu it's basically the same concept, without the physical dragging.
This was 1988. TrueType for Mac did not arrive until 1991, and it used quadratic Bezier curves, which are inferior to cubic; AFAIR it did not use anti-aliasing, and its hinting system was inferior to that used by RISC OS.
I really wish Acorn had licensed their font technology to other companies, but like many other of their technologies (such as Econet and the video codec used by RISC OS) Acorn developed these for their own use only. It is almost a miracle that the ARM processor escaped the Acorn-only fate of so many other technologies.
It would partner really well with a distro such as GoboLinux or possibly even sta.li.
I guess you kind of have the same thing in macOS with the proxy icon.
But yes the things lasted a hell of a long time. Mine went until 1998 at which point it was replaced with a Pentium II box running Windows dual booting with Linux. Got 10 years out of that machine which is remarkable considering the progress and change in the 1990s.
I'm on a Mac mini at the moment with an iPad Pro as a secondary machine so the moment an ARM Mac comes out I'm back with an ARM desktop again. It's all a big circle. Perhaps ARX was going to be an early macOS X contender... who knows?
From what he said about ARX, I suspect not...
Wasn't ARX basically Unix reimplemented in Modula-2?
Loved RiscOS - I knew it inside out and backwards, and it was so ahead of its time that using one of the terrible RM Nimbus 286 PCs they had at college felt like being suddenly transported back to the Stone Age.
As an aside: I'd love to see the alternate timeline where RiscOS continued to be developed, instead of getting bogged down with pointless legal issues for 20 years.
What would it be like? Would it have gone through a Mac OS-like transition from RiscOS Classic to RiscOS X?
If it was open sourced in the early 2000s, could it have become a viable competitor to Linux on the desktop and Android on mobile?
As a lightweight but powerful OS for low-powered ARM devices it should have been perfectly placed.
But it's certainly good to remember that not everything Acorn did was shiny and golden. They deserve to be remembered more than they often are, but they've had their fair share of misses too.
You mean Arthur? I have to agree - the only impressive thing about it was that it was written in BBC Basic. I also spent too much on an A440 (and a multisync monitor that literally made my hair stand on end when I turned it on). Still, I have no regrets. As you say, RISC OS 2 delivered the goods. By day, I was working in monochrome text mode on a clunky old 16-bit IBM XT. By night, I was in the glorious high-res colour 32-bit future. One of the finest computers ever made IMHO (except for the mouse!)
My father was using a 286 by then. It was vastly inferior however he built some software with MS PDS (quickbasic) I think it was and made a lot of cash. Software in the blood.
That would be 22 bits, which seems quite unlikely. Perhaps you're thinking of 4096 (= 2^12 = 16^3)? It seems only 256 of them could be shown at once, and only 240 could be modified? If that's correct then it is more limited than VGA, which could show 256 colours chosen from 262144 (= 2^18 = 64^3). Some Archimedes resolutions were more like low-end SVGA, though.
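As a sanity check on the arithmetic above, the palette sizes fall straight out of the bits-per-channel counts. A quick sketch, not tied to any real hardware register layout:

```python
# Quick sketch (not any real hardware layout): palette sizes fall
# straight out of the bits allotted per colour channel.
def palette_size(bits_per_channel: int, channels: int = 3) -> int:
    return 2 ** (bits_per_channel * channels)

print(palette_size(4))  # 4096    (4 bits each for R, G, B)
print(palette_size(6))  # 262144  (VGA DAC: 6 bits per channel)
```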
Original brochures! http://chrisacorns.computinghistory.org.uk/docs/Acorn/Brochu... and http://chrisacorns.computinghistory.org.uk/docs/Acorn/Brochu...
4096 colours. :-)
Otherwise, yes, pretty much. I bought a 2nd hand A310 in 1989 and it easily outperformed the fastest machines my employer sold (e.g. the IBM PS/2 Model 70-A21, a blazing 25MHz 80386DX with 64 KB of secondary cache) by a factor of about 8×.
Given that the state of the PC OS art at that time was MS-DOS 3.3, which very few people bothered to put Windows 2 on top of, Acorn's RISC OS 2 was astonishingly advanced, too.
Anyway, much of the commercial success of ARM must be down to Robin Saxby, who was ARM's first CEO. His interview with Charbax is fascinating. He inherited a company with two customers, one of whom was failing, some interesting IP but not much else. He seems to have had a relentless focus on providing customers with what they needed - for example, providing the Thumb 16-bit instruction set to reduce code size when potential customers such as Nokia were reluctant to move to 32-bit designs.
and also Mike Muller actually presenting to Apple Computer staff in 1992!
Edit: added Apple presentation
It's also the first true ARM/RiscOS laptop that has been made since the original Acorn A4. That machine was the first portable ARM device ever made.
Operating systems have been born and died between the arrival of these 2 machines.
It has clear advantages for video editing, especially on export: ARM chips already spend much of their time decoding video, and hardware encoding is quick, cheap and streamlined.
can't speak for other industries, but as a dev most stuff in apt is arm64 compatible and therefore runs fine inside of a container on android (this was disabled with android 10 unfortunately)
In fact, the ARM was designed for the Acorn Archimedes, and that’s why no desktop, or server, or anything, used it before the Archimedes - ARM stands for “Acorn RISC Machine”.
While the chips were indeed created specifically for the Archimedes, it would have been possible for some other company to have made a machine with them before Acorn.
A talk about Smalltalk computers includes my projects, but I didn't make it very clear in the slides which ones are mine (13-15, 22, 24, 28, 29, 35, 36, 40, 42-54):
The original title is:
Why Wait For Apple? Try Out The Original ARM Desktop Experience Today With A Raspberry Pi
Future MacBooks and iMacs are going to be powered by an ARM-based chip, but the Acorn Archimedes was the first desktop to use this architecture.
I very slightly edited the 1st line to fit, abbreviating "Raspberry Pi".
I understand power usage is a big concern these days, but I can't help but feel this aspect of ARM is greatly overlooked, and that the push towards ARM devices is also partially inspired by the ability to lock down chip designs to a greater degree than x86_64 chips.
Just look at the existing fragmentation among arm powered devices as is. I don't really like the thought of what an arm powered future could look like.
I really wish I could remember what video it was, but I remember years ago watching the CEO of ARM present a new chip with built-in emulation technology that, theoretically, would have allowed multiple operating systems to run at once or something. I remember the person talking to him asked if he foresaw this as the future, and he just laughed and said oh no, manufacturers would never dream of allowing this, and furthermore they'd likely use this technology to further specialize and lock down their chips.
It may be personal bias, but since then, and with other things I've learned about ARM, I've just had this lingering unease as to what its gaining dominance would mean for general computing.
So moving out of the x86/x86_64 world is really a no-brainer: why would all the electronic giants keep paying for x86 when they can get out of the de-facto duopoly? It seems that they are settling on ARM (Amazon Graviton, Fujitsu A64FX, Apple A12Z, etc.). It could have been another architecture, but it couldn't be x86_64.
On the other hand, there are other architectures besides ARM and x86: RISC-V, OpenSPARC, etc. Maybe one of these would be better for developers who don't want to deal with the ARM landscape fragmentation.
I do appreciate that the x64 is more recent and will still be patented to heck - but there's only 3 more years until the first of those expire, methinks.
PAE for x86 lets the system address more than 4 GiB of physical memory without needing a true 64-bit ISA (though each 32-bit process still sees at most a 4 GiB virtual address space) - and PAE won't be patented anymore, as it was introduced around 1995.
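For the curious, a sketch of what PAE actually changes: the 32-bit linear address is split across three table levels instead of two, and the page tables use 64-bit entries, which is what makes wider (36-bit) physical addresses possible. The function below is purely illustrative:

```python
# Illustrative only: how a 32-bit linear address is decoded under x86
# PAE (2 + 9 + 9 + 12 bits) versus classic paging (10 + 10 + 12).
def pae_split(vaddr: int):
    offset = vaddr & 0xFFF          # bits 0-11: byte offset in a 4 KiB page
    pt     = (vaddr >> 12) & 0x1FF  # bits 12-20: page-table index (512 entries)
    pd     = (vaddr >> 21) & 0x1FF  # bits 21-29: page-directory index
    pdpt   = (vaddr >> 30) & 0x3    # bits 30-31: page-directory-pointer index
    return (pdpt, pd, pt, offset)

print(pae_split(0xFFFFFFFF))  # (3, 511, 511, 4095)
```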
I don't disagree with you, but is that really a world developers or even consumers want? A world where each device manufacturer creates their own chip based around arm's processor designs that's incompatible with the next device's chip despite being virtually identical?
You're right, as a corporation looking to make money off all aspects of everything, it's a no brainer. For everyone else, it's a worse situation than the processor fragmentation of the 80's and early 90's.
x86_64 (AMD and Intel) took advantage of the situation of the time and provided a somewhat standard, though vendor-controlled, processor design that allowed tons of individuals and companies to flourish. Moving to yet another highly controlled set of fragmented platforms is a step back to the 80's.
Ideally, we should be moving forward towards a system that provides a universal set of instructions licensable by all, without allowing licensees to restrict their implementations.
There's a lot of problems with our current paradigm but backpedaling to an even more controlled and fragmented platform is not the solution.
I really believe open hardware is a serious issue that's only starting to be addressed, meanwhile big established players are making their moves to stop it.
Arm is a big established player, one of the biggest, they may not be as forward facing as Microsoft or Apple or google, but damn does that company have a huge influence on everything.
Widely adopted open standards for hardware that do not encourage proprietary licences on basic chip design are a definite no-brainer as the way forward.
The hurdles that need to be crossed, though, are greater than with software. Real tangible goods that require manufacturing involve tons of overhead and complexity. This challenge is something I feel is the open source ideology's next great obstacle, and overcoming it will open computing up so much more than even open source software has for the world. But only if people as a whole come together and work towards it by not accepting anything less.
Your vision of an open ISA like RISC-V winning is something that I mainly see benefiting manufacturers making closed hardware, not people who want general purpose hardware for running general purpose operating systems. For example Western Digital and nvidia are going to be using RISC-V as part of every SSD/HDD and GPU, but we will not get to run our own code on them. There are also hardware vendors selling RISC-V microcontrollers, but nothing that could run Linux or BSD, even SiFive only has Arduino-level boards out (they had Linux-capable ones but don't seem to be making more of them for sale). Eventually there might be a RISC-V entry in the SBC market, but I think ARM is still going to own that market for a long time.
That used to be true, but I don't know how true it is now.
Once upon a time, ISA and ATA were both wired right to the bus.
By the P3/Athlon era everything was probably on a super-I/O controller and/or south bridge, but that communication towards the northbridge/CPU was over PCI or even fancier links.
But the clear cutting point to me is EFI. Most of the standards for how things communicate on older PCs were based on the IBM BIOS. Early clones couldn't exist until that was nailed down. But... EFI replaces all of that with a new abstraction.
Note, however, that there's a lot of "quirks" and similar device-specific corrections for x86 as well. These get presented to Microsoft by the manufacturer, and Linux gets to reverse-engineer them.
I was aware of being able to have e.g. u-boot pass a device tree file to the kernel at boot time; are you talking about that, or something more generic?
What I think is interesting is what this will do to the low-power x86 front, not just in terms of technical innovation but also availability. I'm perfectly content building for x86, if it becomes as available as the Raspberry Pi class of devices ...
Apple has shown pretty clearly they see themselves as a consumer electronics and services company above all. If all the "professional" users who rely on Logic or FC or Apple in general jumped ship, Apple wouldn't likely notice it in their bottom line (loss of mindshare notwithstanding). They have no vested interest in catering to these users and have produced hardware intended for them as an afterthought. The high cost of hardware and absence of middle-tier products that make sense for these users is just icing on the cake.
"Professional" users of Logic or FC have little leverage to affect the course of Apple one way or another (unlike a more focused company whose business model is catering to these groups).
On the other hand Apple pro users do have a long history of being abused by the company so they might be alright with it in the end.
Anyway, I'd be looking for an out if I were them.
To a first approximation, Apple only cares about video producers, software developers, publishers, and musicians, in their Mac strategy. In approximately that order.
The only reason developers are on that list, is because XCode is how software gets written for the rest of their platforms; the dominance of Macs as personal workstations for Valley-style development is only a side effect of that.
These users provide a halo for everyone else. A college student might spend an extra $1000 on their laptop because they want to moonlight on some beats, or have a YouTube channel, that sort of thing.
I remember seeing this disconnect when the new Mac Pro was released and many here dismissed it as too expensive. Someone who works in CGI came around to say that, no, $50,000 is a normal amount of money to spend on their workstations and that their company was ecstatic to be able to keep working with macOS.
I've repeatedly found that the "Apple Tax" is at most something like $500: other companies tend to advertise configurations that I wouldn't buy, and when I tweak them to match my requirements, their price is not notably different from a Mac.
For Apple, not catering to every particular consumer whim/need out there is just smart business. But for the consumer, that is just another weakness of the mac ecosystem.
And of course the neglected "I just want internal drives or expansion cards" demographic. I'm not sure if the cheese-grater design fixes it, but with the trashcan Mac Pro, there were plenty of configurations where the equivalent Dell/HP/Lenovo workstation was one clean self-contained box stuffed with PCIe cards and SATA/SAS drives, and the Mac Pro was an angry squid of Thunderbolt, USB, and power cables feeding an array of external drives and devices.
I understand that their brand is based on seamless design and we-make-the-decisions-for-you presentation, but it feels like there'd be an opportunity for them to use a small-scale clone program as a market research tool.
Have it sell the form factors that Apple won't. The long-whined-for xMac. Units with serviceable/cleanable designs for embedded markets. A mini-ITX Mac mainboard you can fit into existing kiosk/appliance designs. A rebranded Toughbook running MacOS. Something in a huge rackmount/server cube case that you can fit with a dozen internal drives. Frankly, I'd envision it as a wholly-owned operation that charges over-the-odds prices. If people are still willing to put their money where their mouth is, they can claim epiphany and make an Apple version of the same design. If not, they can declare the business a failure, shutter it in a year, and start over next time they want to trial a product.
You'd still be crazy to spend it on a Mac Pro unless you had very specific MacOS needs. Building a threadripper multi-Nvidia GPU PC is going to outperform it in any meaningful way with a modern CGI workflow.
OS starts to matter very little when it's a choice of seeing almost a finished image in almost real time which is what you get with CUDA backed rendering engines VS having to still chug away on CPU.
Both Linux and Windows still take a _lot_ more maintenance than macOS. If you have a room full of creative video artists, who are not techies, then you want them to have maximum productive up-time. You do not want to have to employ a small army of system support techies to keep those boxes running.
I used to work on a PC magazine in the West End of London. The mag was about Windows PCs, and was written on Windows PCs, but it was laid out on Macs. I supported both, and the servers and the network.
The PC side of things needed more than 10x as much support.
That's not materially different even today.
Secondly, it's not just about the boxes and their OS. It's also about the apps. A lovely fast sleek Linux distro is no use at all if it doesn't run the apps you need... and if those apps only run on one vendor's kit, or even just run best on that vendor's kit, then that is the kit you buy.
This isn't 20% faster, it's the difference between seeing the image in almost finished form and interacting with it with real time responsiveness vs having to wait minutes for an image. 
What's more the way they've handled their software (Logic, transition to FC X) doesn't seem like professional users are foremost on their mind.
I know for certain that many VFX software companies aren't too happy about OpenGL being deprecated and Vulkan not being supported on recent MacOS releases.
IMO it's going to depend quite heavily on the demand for high-end VFX software on MacOS as to whether they bother with ARM ports if the MacPro does move over...
For what it's worth, there are 64-bit distros available for the Pi4. IIRC Raspberry Pi OS (formerly known as Raspbian) has been fully 64-bit since the release of the 8GB model, and I know Ubuntu Server offered a 64-bit image for some time before that.
That is the sound from the internal speaker. The computers were also popular for MIDI control, even being chosen by professionals for this, though I can't find a video.
Kidding, but since you mentioned iPad...
Mac software will continue to rely on proprietary APIs. Software that didn't is most likely already ported.
There have been multiple attempts to build an x86-based VST host, over and over.
It will be about 1000x easier to do, if everyone builds for ARM.
However! ARM DSP/synthesis is fraught with minefields. One man's ARM is not a SHARC, etc.
It should be noted that there are already mainstream synth manufacturers shipping ARM platforms... and there is no virulent, unruly lunatic fringe like the synth lunatic fringe, btw.
That reminds me I want to try the Manjaro build out. I've heard good things. I actually really liked the ChromiumOS build but it did take some getting used to (I'm a ChromeOS n00b)
Isn't this just Verilog code? Isn't there only a couple of ways to implement a fast CPU?
In general, faster cores will dedicate more silicon to out-of-order or speculative execution, branch prediction, internal caches of various sorts, etc. The slow cores, OTOH, are most likely simpler, in-order cores with fewer smart tricks, aiming to be simple and low-power. Vendors add/remove/tune all these tricks and more (memory channels, IO lines) to put processors in a given spot.
If they haven't leaked already, they are about to.
I recall playing E-Type for hours on the school's Archimedes back in the day. Cracking game!
It was a 'compatibility calculator' where you put in the names of two classmates and it gives them a percentage of love between them. We used to do it on paper, taking all the instances of l, o, v, e in each name, then adding them to the adjacent number until you just have one number.
It went pretty viral within our class of 20.
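The paper game described above can be sketched in a few lines. The exact playground rules are fuzzy, so treat this as one plausible reading: count the letters l, o, v, e across both names, then repeatedly add each number to its neighbour until a single number remains.

```python
# One plausible reading of the paper game described above; the real
# playground rules varied, so this is a sketch rather than the method.
def love_score(name1: str, name2: str) -> int:
    both = (name1 + name2).lower()
    # Count occurrences of each letter of "love" across both names.
    row = [both.count(c) for c in "love"]
    # Add each number to its neighbour until one number remains.
    while len(row) > 1:
        row = [a + b for a, b in zip(row, row[1:])]
    return row[0]

print(love_score("lola", "steve"))  # 10
```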
Waaay back in the mists of time (1988) I was a 1st-year undergrad in Physics. Together with a couple of friends, I wrote a virus, just to see if we could (having read through the Advanced User Guide and the Econet System User Guide), then let it loose on just one of the networked archimedes machines in the year-1 lab.
I guess I should say that the virus was completely harmless; it just prepended 'Copyright (c) 1988 The Virus' to the start of directory listings. It was written for the Acorn Archimedes (the lab hadn't moved onto PCs by this time, and the Acorn range had loads of ports, which physics labs like :-)
It spread like wildfire. People would come in, log into the network, and become infected because the last person to use their current computer was infected. It would then infect their account, so wherever they logged on in future would also infect the computer they were using then. A couple of hours later, and most of the lab was infected.
You have to remember that viruses in those days weren't really networked. They came on floppy disks for Atari STs and Amigas. I witnessed people logging onto the same computer "to see if they were infected too". Of course, the act of logging in would infect them...
Of course "authority" was not amused. Actually they were seriously unamused, not that they caught us. They shut down the year-1,2,3 network and disinfected all the accounts on the network server by hand. Ouch.
There were basically 3 ways the virus could be activated:
- Typing any 'star' command (eg: "* .", which gave you a directory listing. Sneaky, I thought, since the virus announced itself when you did a '* .' When you thought you'd beaten it, you'd do a '* .' to see if it was still there :-)
- The events (keypress, network, disk etc.) all activated the virus if inactive, and also re-enabled the interrupts, if they had been disabled
- The interrupts (NMI,VBI,..) all activated the virus if inactive, and also re-enabled the events, if they had been deactivated.
On activation, the virus would replicate itself to the current mass-storage media. This was to cause problems because we hadn't really counted on just how effective this would be. Within a few days of the virus being cleansed (and everyone settling back to normal), it suddenly made a re-appearance again, racing through the network once more within an hour or two. Someone had put the virus onto their floppy disk (by typing *. on the floppy when saving their work, rather than the network) and had then brought the disk back into college and re-infected the network.
If we thought authority was unamused last time, this time they held a meeting for the entire department, and calmly said the culprit when found would be expelled. Excrement and fans came to mind. Of course, they thought we'd just re-released it, but in fact it was just too successful for comfort...
Since we had "shot our bolt", owning up didn't seem like a good idea. The only solution we came up with was to write another (silent, this time :-) virus which would disable any copy of the old one, whilst hiding itself from the users. We built in a time-to-die of a couple of months, let it go, and prayed...
We had actually built in a kill-switch to the original virus, which would disable and remove it - we didn't want to be infected ourselves (at the start). Of course, it became a matter of self-preservation to be infected later on in the saga - 3 accounts unaccountably (pun intended :-) uninfected... It wasn't too hard to destroy the original by having the new virus "press" the key combination that deleted the old one.
So, everyone was happy. Infected with the counter-virus for a while, but happy. "Authority" thought they'd laid down the law, and been taken seriously (oh if they knew...) and we'd not been expelled. Everyone else lost their infections within a few months ...
Anyway. I've never written anything remotely like a virus since [grin]
Being the smartarse I was, I also unlocked the hard drive of some of the computers (hold some key on boot), put it into an installed application, and then relocked it behind me. This way anyone using that computer was guaranteed to get it on their disk.
My fall came when I left a disk with the raw source on it with my name on it in the lab and the teacher found it and figured out what it was.
Luckily the teacher was a good sort, I got a talking to from the principal, and banned from the lab for the rest of the school year (but it was September, so only a couple of months.) The next year they started giving me a lot more access to our various school machines so that I could do whatever projects I wanted with them, and once they start to do that you don't want to piss them off and have them take it away, so I behaved :)
This virus was able to hide in directories `!Boot`, `!Run` and would share anything it found of interest.
Long story short, Pineapple Software came in - said it was pretty advanced and ended up patching their antivirus for it!
Pretty scary for a 12 year old!
My favourite app was called Euclid, which was a 3D app like Blender. I built cities, an American semi truck and all manner of stuff. My love of gaming and 3D continues today.
Back to the article: what the author kind of missed is that ARM was Acorn's spin-off of the CPU business after their computer flopped.
As an owner of one, I found it interesting, but the OS was really odd (for example, . was the folder separator, which caused "interesting" problems for C compilers). They also promised high-performance x86 emulation using their RISC magic but failed to deliver.
Transmeta was eventually able to deliver what Acorn promised, but that came 5-10 years later.
Anyway, what finally saved Acorn was Nokia and Ericsson picking ARM up for phones due to low power usage.
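The `.`-as-separator quirk mentioned above can be illustrated: on RISC OS the extension effectively becomes a directory, so a C file like `main.c` was conventionally stored as `main` inside a directory named `c`. The helper below is a hypothetical sketch, not any real tool:

```python
# Hypothetical helper (not a real tool): map a Unix-style C source path
# to the RISC OS convention, where '.' separates directories and the
# extension becomes a directory, so main.c is stored as 'main' in 'c'.
def unix_to_riscos(path: str) -> str:
    parts = path.strip("/").split("/")
    name = parts.pop()
    if "." in name:
        stem, ext = name.rsplit(".", 1)
        parts += [ext, stem]  # extension becomes the parent directory
    else:
        parts.append(name)
    return ".".join(parts)

print(unix_to_riscos("src/main.c"))  # src.c.main
```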
And I'll be able to run BBC BASIC natively on a Mac.
“We’re not direct booting an alternate operating system,” says Craig Federighi, Apple’s senior vice president of software engineering.
Apple put so much effort into very beautiful hardware only to completely lock in their users. I understand why, but they have one of the only market positions that could charge a premium for a computer ecosystem which is genuinely free from the cancer of the "free" business model like Google and Facebook, and they seem to genuinely care about avoiding that model while simultaneously locking people in to an almost Soviet degree.
I'd love to own a beautiful arm laptop, but I find apple software just dull - not as "big" as windows but not as fresh as Linux.
Virtualization on top of macOS should work, though.
One thing the author seems to have missed is that what sets Apple apart from most ARM-based SoC developers is that Apple has an architecture license. They don't use the cores provided by ARM. They implement the ISA by themselves. And they are really good at it. Look at the A12 chips for example.
The ARM A-7x cores are quite powerful. But we can be sure Apple will take full advantage of the higher TDP and increase performance in their cores beyond what ARM provides.
Also, expect more CPU cores/device than what Intel provides.
The thing about the cores is also true. Tighter integration with coprocessors for AI/ML and crypto (and an improved ability to shut them down, increase/decrease clock speed, etc.) will also be a boost compared to x86, where it is either done in software or using an internal or external GPU.
It is also a question of what you do with your license. The cores in Qualcomm Snapdragon chipsets, for example, differ quite little from the cores from ARM.
Similarly with Samsung Exynos. They are basically A7x cores in some other big.LITTLE combination.
Apple on the other hand has repeatedly shown that they are both willing and capable of doing their own microarchitecture. I would like to credit the acquisition of PA Semi for starting this, but they of course are not the only people Apple has added to build competence and capability in this area.
Essentially all major vendors have this license, and essentially all major vendors do their own heavily modified designs. They have different goals, priorities, budgets and abilities, but if you buy a COTS SoC from a major vendor you can bet your neck it's not an original ARM hard macro, or even a lightly modified custom version.
In fact, the only major 64-bit pure-ARM SoC I can think of right now is the HiSilicon (Huawei) Kirin 960.
"The company has seven publicly announced 64-bit architectural licensees: Applied Micro, Broadcom, Cavium, Apple, Huawei, Nvidia, AMD and Samsung. It also has another seven publicly announced 32-bit architectural licensees, of which five – Marvell, Microsoft, Qualcomm, Intel and Faraday – do not have a 64-bit licence."
Fun fact: AMD Zen processors contain at least one ARM cpu but as far as I know they are always 32bit and not modified from original hard macros.
But to answer the question from the title, since people wonder about the macOS experience on ARM: as the article rightfully says, phones (thus probably the most-used end-user computers) are on ARM as well, and are being used for many things desktops were used for before ... :)
Another curio, for a while Intel made the fastest ARM processors, after they bought StrongARM from DEC...
These ARM servers however.... (32-core and 48-core ARM processors)
Seeing the Archimedes screenshot with “Apps” in the taskbar made me realize I was sorely wrong. Does anyone have any interesting history on who was first to start using “Apps”?
Some of us are still cheesed at having our old, comfortable programs re-branded to dodgy new applications:
"Programs run, applications crash."
Then Eternal September happened, there were too few to push back, and all the other stuff paying customers used to demand--training, printed manuals, not crashing all the damn time--fell away, and finally the program became, itself, the whole application.
Then the users needed an abbreviation for what it was that was crashing: the "app".
The idea that one-object = one-application is at least as old as desktop computing itself because it fits seamlessly into the file/folder metaphor. Sadly, modern desktops have largely degraded in this regard.
Fond memories of the platform. Lander is still hard as fuck to this day...
The community --- providing it still exists --- will write them... or at least some of them. When I was playing around with Hackintosh many years ago (when they were still 32-bit), I took the lack of drivers as a motivation to learn how to write some. Of course, the PC legacy really helped with things like BIOS and related standards, so ARM will definitely have a much steeper curve.
(The diversity of ARM platforms in general put me off writing code for them --- it feels like a losing battle when there are so many platforms, especially when most of them are unlikely to even exist for much time. The fact that it is the same CPU core matters little when everything else is different. Maybe it is the same for many others.)
I seriously doubt the effort required to write a functional GPU driver is going to come from OSS for something as fringe as Hackintosh on a random ARM SoC, like you said they get replaced constantly and the landscape is fragmented.
In the meantime, Intel Macs will be alive and well. There is a 100 million Intel installed base. I doubt that Apple will be treating those customers badly in terms of support and 3rd party software vendors certainly won't. At current sale rates it will take Apple 5 years to draw even with an ARM installed base. Also the question of how popular the move to ARM will be is still open, given the severe backwards step in terms of compatibility. Apple may continue to offer some Intel hardware a bit longer than they intend.
Finally, for me at least, the thought of being stuck at some final version of macOS that supports my Intel Hackintosh is kind of appealing. Each major update seems like a step backward. Hopefully developers will appreciate these Intel diehards and support them longer than Apple does.
Apple can add all sorts of logic to their chips that competitors won't be able to replicate.
That's already the case, and was advertised. See the keynote picture that's used (for example) in this article:
"Secure Enclave", "Machine Learning Accelerators", "HDR Display support", "Neural Engine", it's all already in there.
OTOH, Apple is smart enough not to tightly marry its software to a given hardware architecture - they were compiling Rhapsody for Intel from day one and a lot of iOS is directly lifted from macOS. Its lineage started running on 68K, then HP-PA, SPARC, x86, then mostly PPC, then mostly x86 again, and now it'll switch to mostly ARM.
It's about the media vendors.
Apple can afford to outright buy most of them. What I think they are doing, is preparing themselves for a time when they can own media, as well as the devices you use to view/experience it.
I don't think the Piracy wars are over, guys. They're gunning for us, and... unless we've got tunnelling electron microscopes and other tawdry machines, it's gonna be harder and harder to crack.
I don't see it happening myself.
Actually, I'd love to read more about Corellium in general—have they made any technical details public, even high level? They seem to be very secretive about how it all works—which, to be fair, I can totally understand.
Though there is an ongoing lawsuit as a result.
There are a few other examples on that subreddit.
The Archimedes was the first ARM desktop computer.
An important distinction I feel.
The (current?) title of the HN news item is:
The Acorn Archimedes was the first desktop to use the ARM architecture
Which is technically incorrect.
It doesn’t even vaguely match the article’s title of:
“Why Wait For Apple? Try Out The Original ARM Desktop Experience Today With A Raspberry Pi”
Thanks for the clarification though, it’s much appreciated! As was the article!
Ugh. These kinds of articles annoy me.
[Edit: After some digging around it seems like perhaps the screenshot features the visual style introduced in the 1999 release or thereabouts?]
Seems like on HN of late there's some interest in 1990s UI revival. I'm thinking RISC OS looks like a good candidate for that.
However, I have used ROX - short for RISC OS on X - for a long time; it is/was basically a file manager based around RISC OS, as well as some associated desktop technologies (like app directories). I think it's fair to describe it as dead as a doornail (I just upgraded to Ubuntu 20.04, and since it no longer comes with pygtk2, various tools are dead). Some of its innovations found their way into other Linux desktop technologies, such as shared-mime-info.
But if you want to see what once was, it's a place to begin.
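For anyone curious what those "app directories" look like in practice: in the ROX (and RISC OS) convention, an application is just a directory containing an executable named AppRun, which the filer runs when you open the app; files dragged onto the app's icon arrive as arguments. A minimal sketch, where the MyApp name and the echo payload are purely illustrative placeholders:

```shell
# Create a minimal ROX-style application directory.
# Only the AppRun convention (an executable named AppRun inside
# the directory) comes from ROX; the rest is a toy example.
mkdir -p MyApp

cat > MyApp/AppRun <<'EOF'
#!/bin/sh
# The filer executes AppRun when the app directory is opened;
# files dropped on the app's icon are passed as arguments.
echo "MyApp launched with: $*"
EOF

chmod +x MyApp/AppRun
```

Dropping a file onto the directory's icon then runs AppRun with that file's path as an argument, which is what makes the drag-from-app-to-app workflow possible without any install step.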
(ROX was actually a lot nicer than RISC OS, when I finally installed it on my RPi.)
Time for a resurrection, maybe?
(In the olden days I had Sawfish set up so that it had a button that would take a look at the path a window displayed, or interact with the app via whatever scripting it provided, and show the active document in ROX. Ah, so great. Sawfish is, nowadays, too primitive for me - I want an Exposé-style feature to find my window.)
I still like to switch to ROX to rename or move files in my codebase rather than do it in IntelliJ. I had a plugin for that for a while but I never quite got around to setting it up again on some migration or upgrade or something. Nowadays it's only VIM/gVim that responds to F12 and shows the document location.
I have given some thought to porting it to Gtk3 and Gtk4. I guess getting an infrastructure around it isn't going to happen - but the Filer was the centre.
The desktop manager is called ROX-Session and provides an icon bar, a pinboard and so on as well as the filer windows.
Yep. That the project basically languished in obscurity and died out is emblematic of why I can't take Linux Desktop seriously.
The final section (Decoration) shows part of the re-skinning/theming possible in 1992. Different icons (e.g. 3D appearance) were also a standard part of a custom theme, but proportional text under icons, on menus and on titlebars was introduced in 1994, with RISC OS 3.5.
I do remember that if you ended up in a mode that wasn't running that theme and got the default, it felt pretty old and boring, but the theme jazzed it up quite a lot. Can't remember the specifics about it, though.
NeXT and BeOS being the ones that really stick in mind.
Also I think I’m waiting for apple to put a freaking touch screen and keyboard on the same product. (Touch Bar does NOT count)
Then again, I’m an ARM desktop user already!
Why wait for Apple? idk, let’s wait and see.