Got an old Raspberry Pi spare? Try RISC OS. It is, something else (theregister.com)
272 points by m_c 57 days ago | 200 comments



It was ahead of its time in UX, but rather behind in the foundations. It's a single user system with no real security, and there was no system of shared libraries - to share code between applications, it was usual to put the shared code in a kernel module and call the kernel. Even the standard C library worked this way.

Amusingly, when you invoked the system console - which sat at a lower level than the GUI, effectively pausing it - the command line appeared at the bottom of the screen and the frozen GUI scrolled up as you entered more commands, until you exited the system console. (It was also possible to get a command line in a window, which could do slightly less - I forget exactly what.)


> rather behind in the foundations. It's a single user system with no real security

I believe multi-user systems are actually an ancient, outdated concept rather than a "modern" one. It made sense when computers were huge, expensive and many users shared one even at work, let alone at home. Nowadays computers are almost never shared. Even when people had just one home PC per family (during pre-Win7 days), they mostly preferred to disable the sign-in screen and share the whole environment.

Nowadays multi-user OS facilities definitely help to build security but they were not designed just for this. Modern security can be done better without an OS-level concept of a user.


Having grown up with RISC OS, I never considered the lack of multi-user support an omission. But the lack of process isolation (or simply: lack of processes) was a real headache.

No memory protection or hardware access control, rudimentary virtual memory "page mapping". Any tardy call to Wimp_Poll() freezes the entire GUI. "Cooperative multitasking" also means it's still impossible to do something useful with more than 1 Raspberry Pi core.
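
To make the cooperative model concrete, here's a rough sketch of what every RISC OS desktop application's main loop looks like, written in C against the SharedCLibrary's _kernel_swi interface. The SWI numbers, the "TASK" magic word and the block offsets are quoted from memory of the PRM, so treat this as illustrative rather than copy-paste-ready; the point is that the Wimp only switches tasks when a program voluntarily calls Wimp_Poll.

    /* Illustrative cooperative Wimp task skeleton (details from memory). */
    #include "kernel.h"                /* _kernel_swi, _kernel_swi_regs */

    #define Wimp_Initialise 0x400C0
    #define Wimp_Poll       0x400C7
    #define Wimp_CloseDown  0x400DD

    int main(void)
    {
        _kernel_swi_regs r;
        int poll_block[64];            /* 256-byte event block the Wimp fills in */
        int messages = 0;              /* a list of just 0 = accept all messages */
        int task, quit = 0;

        r.r[0] = 310;                  /* minimum Wimp version we understand */
        r.r[1] = 0x4B534154;           /* the word "TASK" */
        r.r[2] = (int)"PollDemo";
        r.r[3] = (int)&messages;
        _kernel_swi(Wimp_Initialise, &r, &r);
        task = r.r[1];

        while (!quit) {
            r.r[0] = 0;                /* event mask: deliver everything */
            r.r[1] = (int)poll_block;
            /* Nothing else on the desktop runs while we're between polls:
               take too long before calling this again and the GUI freezes. */
            _kernel_swi(Wimp_Poll, &r, &r);

            switch (r.r[0]) {          /* reason code */
            case 0:                    /* null event: do a small slice of work */
                break;
            case 17: case 18:          /* user messages */
                if (poll_block[4] == 0) quit = 1;   /* Message_Quit */
                break;
            default:
                break;
            }
        }

        r.r[0] = task;
        r.r[1] = 0x4B534154;
        _kernel_swi(Wimp_CloseDown, &r, &r);
        return 0;
    }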

So: lacking in foundations, yes. But multi user, not so much.


Fair. On reflection I agree with all the people who have pointed out that multiuser isn't that important. It was just, historically, the feature that made operating systems get serious about security, which is now essential for other reasons.


After the PC revolution, "User" became synonymous with roles. root, bin, mail, daemon, sshd, www-data are probably some of the users on your system.


> Nowadays computers are almost never shared.

I find it ironic that Google TV still does not have this feature; it is the one "computer" that people probably still share regularly.

Sure, you can log in to multiple accounts and have different "profiles", but that just changes home screen recommendations; all the apps and their sessions are shared. So since I share it with a few roommates, we're constantly having to log out and in to our accounts in different apps to access our watch histories, our Plex servers, etc.

On the other hand, my android phone does multiple accounts perfectly, why can't it work the same way on the TV?


Just buy another Google TV and hook them up to a KVM switch.


Today, the same OS core is used in phones, laptops, desktops and servers. Explicitly making it single-user means giving up on servers and parts of the market for the former three.

You could make the case for removing the concept of users from the kernel and forcing the OS to write some security plugin. The OS could then have a tailored security mechanism for every usecase, but they'd probably end up including users anyway (for the people who do want them).


One such case is company-issued phones. They require the company-provided apps to be isolated from any user-specific configuration, creating an effective multi-user system where user apps (such as Apple Music) run with different privileges (and network access) than, say, Teams or Outlook.


I think this is a good idea anyway. I love how if I install some dodgy app on my phone, it can't access the private, stored data of other apps. It can't steal my google or facebook credentials. And it can't cryptolocker my filesystem.

My desktop computers are designed with this old "user security" model that I don't use at all - since I'm the only user anyway. User security protects ... uh, the operating system I suppose, which I could reinstall in 20 minutes anyway. But we're missing a much more important security boundary - the one between one bad program and all my other stuff. Every program you run today on a desktop is inexplicably executed with full permission over all of your private files and, worse, full network access. It's an insanely terrible design.

We /could/ retrofit the user security model to help us isolate applications. But personally I think it would be easier to just design and implement something good from scratch.

(For the security people in the room, the threat model is a bad program, or single bad npm package gets pulled into a program you run. How do we limit the blast radius?)


You might be interested in Qubes OS, which runs every application in a virtual machine.

https://en.wikipedia.org/wiki/Qubes_OS


Every app on your phone runs as a different user


> I believe multi-user systems are actually an ancient, outdated rather than a "modern" concept.

I thought that in the 80s, and even had the same experience with sharing the family PC. Then networking (and the internet) happened, and suddenly multiuser became quite useful again - even at home.

> Modern security can be done better without an OS-level concept of a user.

Perhaps, but I have yet to see a userless model that doesn't have issues around administrative access control. Even dumb terminals had issues with this, where students would change settings and then set the admin password on the terminal... As long as people use computers, users will be users...


OS security always requires a concept of a user, unless you want to run all processes under the same identity.


You could do it with some other concept than users, e.g. namespaces or capabilities.


This is what uid and gid are used for in Linux, respectively. There is only a concept of "user" and "group" when you add a Unix-like operating system on top, such as GNU or Alpine.


It is another way to refer to a "user", hence why I used the word identity instead.


No it's not. There's all sorts of stuff you can do with capabilities that is really awkward to even think about with users. Like, when a database starts, you can hand it the capabilities it needs to access a specific directory, listen on a specific port and append to (but not read) a specific log file.

User-based security is usually done by the folder being owned by the database user, or an ACL or something. But with capabilities, you don't need to make a database user or set any flags on the folder, or make sure the database's configuration matches the filesystem permissions. The capability is basically a pre-baked file handle that can be used directly. And capabilities can do lots more stuff! They can be more fine-grained - e.g., "WhatsApp can only access these specific photos in my camera roll". And a program can pass a subset of its capabilities to a child process.
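
For anyone who hasn't worked in this style, here's a minimal POSIX-flavoured sketch of the "pre-baked handle" idea (paths and names are invented for illustration): the caller opens a data directory and an append-only log once, and the "database" code only ever receives those descriptors, never a path or a user identity.

    /* Sketch: plain file descriptors used capability-style. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* The callee can only touch files under data_dir and append to log_fd;
       it has no way to name any other part of the filesystem. */
    static void run_database(int data_dir, int log_fd)
    {
        int fd = openat(data_dir, "table.db", O_RDWR | O_CREAT, 0600);
        if (fd >= 0) {
            /* ... read and write the table through fd ... */
            close(fd);
        }
        static const char msg[] = "db: started\n";
        write(log_fd, msg, sizeof msg - 1);    /* append-only, not readable */
    }

    int main(void)
    {
        /* "Pre-bake" the capabilities: paths are chosen here, once. */
        int data_dir = open("/var/db/example", O_RDONLY | O_DIRECTORY);
        int log_fd   = open("/var/log/example.log",
                            O_WRONLY | O_APPEND | O_CREAT, 0600);
        if (data_dir < 0 || log_fd < 0) {
            perror("open");
            return 1;
        }
        run_database(data_dir, log_fd);
        return 0;
    }

A capability OS (or something like FreeBSD's Capsicum) takes the same idea further by making handles like these the only way a process can reach the outside world at all.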


Capabilities have ownership, ownership requires belonging to something, all the way up to booting the OS.


Capabilities can just be variables. Variables all belong to something, all the way up to the OS. Would you say variables are a type of user permission? Obviously not. Variables are more versatile than that. So are capabilities.


But if you're sandboxing and running all applications in a separate space, is it really correct to refer to it as "users" anymore?


3rd party apps are written (controlled) by other people, and you are effectively granting that 3rd party permission to run their code on your device, so the user concept isn’t that far off the mark. Since ~nobody reads the source code of 3rd party apps, security vetting is usually a matter of deciding to trust the app’s author, same as how you would trust anyone else that you give a user account to.


You grant them capabilities, a handle which permits access to a particular directory, one which allows network access (or possibly even further limited than that) etc.

No user, just a series of object handles which permit them to perform the task and nothing more.


Sure, but capabilities and handles are technical terms of art below the level that regular users need to understand. The idea that an app is another person using your computer is not a terrible abstraction in terms of helping people make sense of what's happening.


Maybe. But they're a bad mental model for software developers because user based security is way more limited than what you can do with capabilities.

And they're confusing for users, too. Signal isn't another "user" on my phone. It's still me. I just decide what capabilities I grant it on the day. "Yes, you can use location tracking for now - but only until later in the day."


That sounds like just brutally quartering users into their granular permissions. ;-)


Maybe? I guess the difference is that with a capability object model (or similar arrangement) the _only_ way the application has to interact with the outside world is via those objects/handles it's been granted. There's no risk of escape because they only have access to the handles the process has been provided.

It's almost the opposite of a permission model in some ways: permission models restrict access to a global array of functionality, whereas capability models allow access only to what's been provided.


Sorry. I was joking. It feels like you are giving applications small fragments of the user.


Sorry, I appreciate the joke (really :) ) but I also worry that people keep trying to add the concept of "users" or an existing role or permissions model into capability-based systems when they're unnecessary.

You could maybe model or emulate a user in a capability system by providing the login's session manager an object with read/write access to the configured "user" directory, read and execute access to an applications collection, and full access to the root window. From there, when the user starts a new application it's given an object with access to a window created for it (by calling "createWindow" on the root window object, so it can't even do something like enumerate other windows or whatever), and whichever other requirements were configured as part of its install.

It's capabilities all the way down with no "user" involved.


Capabilities are just another way of doing role management in processes, and require identities for administration anyway.

It is yet another way of managing process identities.


A capability is not an identity. And the difference matters a great deal in how you build software that takes advantage of capability based security models.


Identity is reflected by who owns the capability.


Capabilities are often transient. Or in a capability based OS, capabilities are given to a process when it launches and the capabilities naturally go away when the program closes. "Who owns the capability" is the wrong way to think about it. But I'll lose this argument.

Because I suspect if you really want to, you really can think about all "permission systems" as identity systems, and shoehorn in users or something. This cognitive distortion is totally possible. My claim is that it's a bad mental model. It's like if you mentally translate all programming ideas into assembler, or Java or something: it would make it hard to properly understand and appreciate a lot of higher-level programming ideas. Haskell's beauty doesn't make any sense if you mentally translate everything into Java. There are programs you just can't write with this mindset, and you would be a terrible Haskell programmer.

It's the same with capabilities. They're not user accounts. They're not identities. They can be transient or persisted. Fine-grained or coarse-grained. A capability can be a function argument - arguably the C FILE struct is a capability object. Or they can be a permission box. It's just a bigger idea than identity.


You're so obviously correct in this. TBH I think the only reason the term "users" comes up when it comes to isolating programs from one another is because that was the only hammer Unix had, so it was co-opted. The original point stands: the actual multi-user aspect is completely secondary and unnecessary to 90% of people. I'm the only person who uses my computer, but still I want to be able to isolate application 1 from application 2.


This is a very Android centric viewpoint ;)


Unpacking that witty but cryptic comment:

Yes, you can have privilege-based security without user accounts, if you accept that you do not have control over your own hardware because only the OS vendor has administrative rights.

In other words: yes, you can have no-sign-in and no user accounts, but it's still there and you don't have admin access to your own computer.

Stepping back a level:

Smartphone OSes do not show accounts and permissions, but they are still there, just concealed. Same as they still have complex filesystems, but they are hidden.

Stepping back another level:

This is a bad way to design OSes: when you need to hide away major parts of the functionality, then you shouldn't have that functionality. It should not be in your design in the first place.


> Yes, you can have privilege-based security without user accounts, if you accept that you do not have control over your own hardware because only the OS vendor has administrative rights.

Or maybe just that the really in-depth administration and modification of your operating system happens prior to the OS running on your device, when it's being built — as a sort of configuration or specification step that happens prior to even installing the operating system or booting up your computer in the first place, in a continuous integration system in the cloud perhaps, or on another existing computer? That's kind of how Fedora Silverblue works — almost everything you do is completely in unprivileged space, in a container or with a flatpak sandbox, or through policykit; you basically never use the root account at all, because you can't really do a whole lot of really in-depth customization of your OS internals on the operating system image that's actually installed and running on your system. Instead, you specify the modifications you want to make to an upstream image using something like BlueBuild[1], and then those modifications are automated and happen prior to anything ever hitting your computer, in an automated CI/CD system (which could theoretically be self-hosted).

Like, I think there is a way to adapt the security and reliability benefits of the way e.g. macOS works that doesn't take control away from the user, just moves it somewhere else. And I think it's much safer for all of the really deep modification of your system, all of the system administration you do as the root user, to be essentially air gapped from the computer that you're actually running various applications and installing and building things and curling to bash on, on a system that's ostensibly clean.

[1]: https://blue-build.org/


Even iOS has different users for all those processes running on the device. :)


Not to any reasonable capacity


> Nowadays computers are almost never shared.

That doesn't seem very accurate, unless you're meaning strictly personal computers?


Modern multi-user paradigms also have very weird ideas of what’s shared. Like, installing or updating software is the same permission tier as accessing another user’s documents, wtf?


_Technically_ it doesn't have to be.

Super user is super user. It can always access anyone's files. Allowing unrestricted access to super user essentially destroys any sense of security.

You can very much allow access only to certain commands under super user, e.g. only allow users to run pacman. Of course, now you are trusting that said commands won't leak the permissions.

I agree that it's a mess.

My personal and biggest issue is not even across user boundaries, but inside a single user.

What do you mean my Firefox client can read my .ssh files???



I almost linked that in my ggp comment, but really I’m making the opposite argument as the comic.

Either way you slice it, though, it’s clearly a huge disconnect between what is important to the human using a system vs what is important to the system itself, and the relative lengths gone to to protect those two sets of things.


it does not make much sense as you typically protect important online activity with 2FA


To me, a "modern multi-user paradigm" is Nix with Home Manager. Where most of my software is installed in my user's environment and not on the system level. Thus, if there were another user on the same machine, we could each manage our own software and updates without affecting the other.


Don't mind me, just updating the software you use to access your documents, nothing to see here, move along.


> Like, installing or updating software is the same permission tier as accessing another user’s documents, wtf?

To some extent, yes. If I can install software of my own choosing on basically any normal desktop OS that will appear to other users of the system as "LibreOffice", "Firefox", etc. then I more or less have access to all their data.

MacOS is starting to sandbox applications but not by a lot, and of course Windows Store sandboxed apps are more or less dead in the water.


Don't mix up Windows Store (UWP) sandboxes with the Windows app sandbox, which not only is pretty much alive, but has been making its way across Windows 11 updates since last year.

Check "Windows 11 Security", "App Isolation", "Sandbox", "Standard User", "Pluton".

https://github.com/dwizzzle/Presentations/blob/master/David%...


I share my desktop PC with my kids and wife (although, TBH, I use the PC 90% of the time and the rest of the family uses the remaining 10%). Although we use the PC one at a time, being able to each have his own environment does help. So multiuser is cool for me.


Multi user is getting replaced by virtualization / containerization. The layers are simply getting shuffled


I think pervasive process/app sandboxing – or at the very least proactively and aggressively limiting process capabilities a la OpenBSD pledge and unveil – is a key development that's coming over the horizon as well.
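
For the curious, the pledge/unveil approach looks roughly like this on OpenBSD (the paths and promise strings here are just for illustration): the process voluntarily shrinks its own filesystem view and syscall surface before doing any real work, and anything outside that is fatal rather than merely denied.

    /* OpenBSD-only sketch: restrict this process to reading one directory
       plus stdio before it touches any untrusted input. */
    #include <err.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Filesystem view: only /var/www/data is visible, read-only. */
        if (unveil("/var/www/data", "r") == -1)
            err(1, "unveil");
        if (unveil(NULL, NULL) == -1)      /* lock the unveil list */
            err(1, "unveil lock");

        /* Syscall surface: stdio plus read-only path operations. */
        if (pledge("stdio rpath", NULL) == -1)
            err(1, "pledge");

        /* From here on, an attempt to write, connect or exec kills the
           process instead of succeeding. */
        FILE *f = fopen("/var/www/data/index.html", "r");
        if (f) {
            /* ... parse or serve the file ... */
            fclose(f);
        }
        return 0;
    }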

(What's old is new again with virtualization: IBM took that approach to make time-sharing happen with CP/CMS on System/360 – then VM/370, then z/VM...)


Multi-user support has other purposes. It means you can turn the OS into a server OS which can be shared across numerous accounts. Lacking that capability means you dissociate a desktop OS from a server OS for no good reason.


Single-user means I have to trust everything I run.

I mean, some of us here actually remember MS-DOS. It's not a huge secret to us.


Sharing code? I like my apps bundling a whole browser to render a couple buttons


The march toward systemd-electron continues.


This can actually make sense - why not make Electron a shared part of the OS instead of so many apps bundling it? As I understand it, Apple once built its entire GUI system around a flavour of PostScript which was designed for document typesetting. Now the world is just doing a similar thing with HTML.


Because I'm old and I'm sick to death of everything being a web app. Because I hate web app programming so it limits my employment options :-)


I'd be more mad about this if building native applications wasn't super shitty these days. Xcode crashes all the time. SwiftUI is terribly documented and buggy. And lots of standard things you see Apple do in their apps are basically impossible to do from 3rd party code. And it's impossible to debug anything because it's all closed source. And Windows has about 8 different native UI libraries that all look and feel different, and they're constantly making new ones instead of making one UI be good and well supported.

I hate electron with a burning passion. But at least the web has open standards, good debugging tools and modern, performant, well documented and pleasant to use UI libraries like React, SolidJS and so on.

Just don't ask about rich text editing on the web. Oh god. It's been decades and it's still so shit.


I feel the same.


> This can actually make sense - why not make Electron a shared part of the OS instead of so many apps bundling it.

Microsoft did this ages ago, they lost a big anti-trust trial over it.

Aside from that, people ship Electron because it works the same across different OSs. If you just want to target one OS, you are better off using that OS's native dev toolkit. Although good luck finding anyone who knows how to write native apps for desktops anymore, and if you are on Windows, good luck figuring out what toolkit you are supposed to use nowadays!

And Linux has had the decades-long problem of Qt vs GTK.

So really the only platform with a native toolkit is macOS, although when it first came out there were actually multiple toolkits to choose from there as well, and nowadays I think there is still some argument over whether to use Swift or not (not sure, don't keep up).

Or you can just use Electron and skip the above mess entirely.


> people ship electron because it works the same across different OSs

Why does Microsoft build the very Visual Studio installer with Electron then?

To me it seems companies ship electron because it's easy to hire a JavaScript developer.


> Why does Microsoft build the very VisualStudio installer with Electron then?

> To me it seems companies ship electron because it's easy to hire a JavaScript developer.

When I worked at Microsoft, one team I was on, very ironically, had a really hard time finding Windows developers.

We actually resorted to drawing straws to see who on the team would have to learn native Windows development!

IMHO a large part of the problem is that native Windows development is a career dead end, unless you work at Microsoft, there are relatively few well paying jobs for what is now a niche skillset.


Unless one works in the games industry, does IT at a big corp, or does embedded device programming.


Games pays badly, big corp IT pays badly, and unfortunately, despite the difficulty, embedded tends to pay poorly unless you are at one of the big tech companies.


And yet the reason why they pay badly is the endless queue of people trying to get one of those jobs.


None of which are particularly sexy, unfortunately.


Given the hordes of candidates for game development and IoT jobs, putting up with the pay and hours, at least two of those are quite sexy.


Game development, maybe. The IoT people just want something to pay their bills.


Visual Studio installer uses WPF.

They did indeed try to use Electron, and the backlash was big enough that they went back to WPF.


> Although good luck finding anyone who knows how to write native apps for desktops anymore

Hi :-)

TBH, in a room full of HN regulars, to find a developer to write a native GUI app, you can throw a brick and hire whoever says "Ow!"


> why not make Electron a shared part of the OS instead of so many apps bundling it

You're describing a system webview, which is a thing on Android, Windows, iOS, and macOS.


Isn't Chrome OS a better fit for his description?


> why not make Electron a shared part of the OS instead of so many apps bundling it.

Because it's a wrong approach to a problem that was solved decades ago by much smaller and faster system libraries for UI development.


Apparently these libraries, and the languages they are designed to be used with, failed to offer a sufficiently easy way to implement the UX people want.

I myself strongly prefer classic desktop GUIs adhering to the 90s Microsoft and Apple design guidelines, and well-designed (rather than chaotically evolved, like JavaScript) programming languages, yet the objective reality seems to be that that's not what the demand is for - real-life companies and people prefer fast-entry non-proprietary languages like JavaScript and virtually unlimited expressiveness like what CSS gives. The only libraries I know of that could technically be good alternatives to Electron are Qt Quick, WPF (and its spinoffs) and JavaFX, but they all have downsides which limit their adoption.


Microsoft already did that with Windows 98, it was called Active Desktop.

And you're mixing Apple with NeXT (NeXTSTEP) and Sun (NeWS).


One attempt, XULRunner, didn't work out.


Home computers didn't need multi-user capability or much in the way of security (other than anti-virus) back then. I'd argue they still don't. I don't think these two things were the problem.

I can take or leave shared libraries. They seem to cause a lot of trouble, but so do static libraries, so I'm on the fence there. But in the context of when this was released it's a non-issue.

I'll give you the CLI thing though. If the CLI couldn't be full-featured in a window that was an oversight.


The main limitations of the multitasking command window were that it couldn't be used to drop into a language like the inbuilt BASIC interpreter and then change screen modes in BASIC code, or draw graphics using the BASIC plotting keywords, and so on. The command-line UI foreground and background colours were configured in the host app (typically Edit) rather than being changeable dynamically from within the command-line environment. There may be other limitations that I am not aware of, but for text-based interaction with typical command-line tools or BASIC in text-only mode, it's fine.


I don't remember any limitation of the windowed CLI on RISC OS, except it was slower if there was a lot of output.

The OS generally had very little usage of the CLI though, since the GUI was present in ROM and booted to the desktop in about 3 seconds.


Unpopular take but here I go (bye bye karma): Code sharing between applications, beyond what is basic, common and stable enough that it could live in the kernel, is not a good idea in the long run. It served a purpose back when memory was scarce and libraries (and their versions) few.

Containerization, for all its isolation magic, has primarily been successful as a way to package "more or less the entire OS dependency", because sharing code is hard. How many containers ship with only one single binary (the active data set)? None; code sharing is the primary problem that containers solve.

Static linking solves it better. The RISC OS approach actually solves it better, too.


Code sharing should not be done, because it is a hard problem?

Stuffing whole OSes + apps + their dependencies in containers, and running a # of those, is not the solution. That works for single-purpose uses like servers. Not for user-facing OSes that run a # of apps side-by-side.

Solving that "how to share code reliably" problem is the solution. Being a hard problem means it's worthwhile to find a good solution for it.


Sharing code runtime is not a hard problem, it's the wrong problem. Code _sharing_ shouldn't be done. Code _reuse_ sure, absolutely, and it's done right by including that exact, specific code at compile time.

And yes, stuffing whole OSes + apps + deps into containers is indeed not the solution, it's the symptom.


It’s a bit like what happens when you hit Stop-A on a Sun workstation, but in that case you are dropped to the basement under the OS.


Back in my day we shared code with static libraries, it appears to be quite hip in some Linux distributions nowadays.


Nowadays, more and more, nothing is shared besides the kernel. Look at technologies like Electron, Docker, Flatpak, etc.

Similar to multiuser: Security is important, but as applications get more and more powerful it is not about what individual users can and can't do, but what each application can and can't do.


> the command line appeared at the bottom of the screen and the frozen gui scrolled up as you entered more commands

I don’t know why, but that’s fucking hilarious


I'm not even sure it was ahead in its UX; it had a three-button mouse, and daily operations needed all three mouse buttons. I had to use them for years at school and I feel you could never quite be sure what the third button would do. In some cases it was like what shift-click does today, in other cases it selected menu items without closing the menu, in other cases it moved windows without bringing them to the front, and in yet other cases it opened a _different_ menu to the one the middle-click did.

For menus, I feel it was the worst of all systems. The Mac and Amiga had menus consistently at the top of the screen, and the Mac was good for discoverability in that it showed you the menus were there without you having to click a button. Windows also did that, but menus were attached to windows (bleh). RISC OS was worst of all: _every_ menu is a context menu, including app-level menus - and you got different menus depending on whether you middle-clicked on the icon bar icon, or you right-clicked on the icon bar icon.

There was no standard file requester, _everything_ was drag and drop; to load a file, you had to drag it onto the application (although yes, default file associations allow you to double-click it). To save, first make sure you've got a filer window of the directory you want to save to open and visible on screen, then middle-click in the window of the file you're working on, navigate to File -> Save -> a tiny box with a file icon appears, you get to type the filename, then you have to _drag_ the file icon to the folder to save. And if you accidentally mouse-out of that box while typing the name, you lose the name.

The OS was also ridiculous in some of its APIs, particularly that there were a million and one things under the calls OS_Byte and OS_Word - yes, really, API calls all clustered together because they return a byte or return a word. It's a design holdover from the original BBC Micro's OSBYTE and OSWORD calls. There's also a pile of crap multiplexed behind "VDU" calls, and much like terminal emulators, there's a lot of behaviour you can invoke by printing specific control sequences to the text screen, including mode-switching.

It had a weird system where _all_ OS calls were either "I'll handle errors" (e.g. SWI XOS_WriteC) _or_ "let the system handle errors" (SWI OS_WriteC), which in most cases meant that if the OS call ever hit an error, it stopped and exited your entire program. The problem with this approach is that lots of programmers chose to write software that falls over at the slightest provocation, rather than think through every error and decide how far to go in recovering from it. So, for example, let's say you've been working in a paint package on your masterpiece, you save to disk, and there's a read/write error. Goodbye masterpiece.
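
To make the two conventions concrete, here's a tiny C sketch using the SharedCLibrary's _kernel_swi, which always issues the X ("I'll handle errors") form and hands any error block back to the caller; the plain non-X form instead raises the error through the OS error handler and, unhandled, terminates the whole program. The SWI number and register usage below are quoted from memory of the PRM, so treat this as illustrative rather than authoritative.

    /* Sketch of calling an OS routine via the error-returning X form. */
    #include <stdio.h>
    #include "kernel.h"

    #define OS_File 0x08            /* whole-file load/save operations */

    int main(void)
    {
        _kernel_swi_regs r;
        _kernel_oserror *e;
        char buffer[1024];          /* big enough for this sketch */

        r.r[0] = 255;               /* reason 255: load a named file */
        r.r[1] = (int)"NoSuchFile";
        r.r[2] = (int)buffer;       /* load address */
        r.r[3] = 0;                 /* 0 = use the address in R2 */

        e = _kernel_swi(OS_File, &r, &r);    /* X form: error comes back */
        if (e != NULL)
            printf("error %d: %s\n", e->errnum, e->errmess);
        /* The non-X form of the same call would instead have invoked the
           error handler and, by default, killed the program outright. */
        return 0;
    }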

You can get a flavour of its programming environment from http://www.riscos.com/support/developers/prm/

It also had its own filesystem metadata craziness; there were no file extensions, but rather file type metadata saved separately (the Mac also had this madness), and it also saved the "load address" along with the file.

Nonetheless, what I did like about it was:

1. That the whole OS was ROM-resident and you can boot a device with no media needed at all, within 3 seconds of turning it on. AmigaOS was _nearly_ all ROM-resident, but nonetheless required a boot disk to get to Workbench (all you need on that boot disk to get to Workbench is a ~200 byte program that launches it; it was clearly a deliberate choice to insist on a bootdisk, and I think it would've been better if it didn't)

2. That it pioneered "an app is a special kind of directory", so you can keep all your app's assets inside a folder. Mac OS at this time was using the awful resource-fork system to do this, but by Mac OS X it had seen the light and moved to app and resource bundles

3. That it had a built-in BASIC interpreter, and this was a very fine BASIC because it had a full assembler built into it and it had fantastic BASIC-to-machine-code interoperability. You could write all the bits that needed to be fast in assembler, while writing the rest in BASIC. There were even a few commercial games released written in BASIC+Assembler.

Overall, AmigaOS was a much better OS than RISC OS, but I do still have space in my heart for the plucky British operating system.


> you got different menus depending on whether you middle-clicked on the icon bar icon, or you right-clicked on the icon bar icon

Are you sure you're talking about RISC OS? Because you've got the overview right but your details are weird. Right-clicking never pops a menu in RISC OS. Only the middle button does that. That's why the buttons in RISC OS have names. They're not "left, middle, and right"; they are "select, menu, and adjust".

The "everything is an object" drag-and-drop was absurdly powerful. You're not limited to dragging into a directory, you can equally well "save" a file directly into another application, avoiding the middle step of dumping it onto the disk first.

Personally, I find the trend of making all UIs as easy as possible for the beginner to be a step backwards. Yes indeed, beginners can get going quicker, but then you've very quickly learned everything and there's nowhere to go. The pro user cannot work faster. The pro user cannot do more. We're all stuck behind Fisher-Price interfaces.

If you treat users as if they are children, they will always use your software like a child.


> The "everything is an object" drag-and-drop was absurdly powerful. You're not limited to dragging into a directory, you can equally well "save" a file directly into another application, avoiding the middle step of dumping it onto the disk first.

So true!

For sure I was biased when I left RISC OS in favour of Windows 3.1 early 90s, but it took me years to get used to the clumsy COPY-CUT-PASTE metaphors still dominant today.


> Because you're got the overview right but your details are weird.

Having just booted up RedSquirrel, yes, I accept I'm slightly misremembering. Right-clicking doesn't open a menu, it is consistently middle-click.

However, it is still somewhat inconsistent. In several applications (for example, !Maestro or a filer icon), left and right click on the icon bar do the same thing, while in others (for example !Edit), only left click opens a new window and right click does nothing.

Playing around with RISC OS 3.10 again, I'm also reminded of the nonsense whereby menu items with arrows on them (indicating sub-menus, or sub-windows like save boxes) require you to successfully slide the mouse through the arrow to open them. Almost all other menus I've seen will open sub-menus as soon as you land _anywhere_ on the menu item.

While drag-and-drop may be powerful, the ergonomics of the UI were atrocious. I don't think anyone at Acorn had heard of Fitts' Law. The tiny save box also required you had exactly the right filer window open, ready and waiting. You couldn't easily change your mind like you can with a file requester (and also with modern MacOS's spring-loaded folders).

I also think the OS was designed with the expectation that overlapping windows would be _normal_, and I don't think that's ever been the case, certainly not how I use computers. Most windows I have open are fullscreen, and I switch between them (most commonly with the alt-tab concept that Windows brought). I might have _internal_ windows inside one app's window (for example, multiple code editing windows and terminals in an IDE, or tools, palettes and layers in graphics editing), but only on special occasions do I have two separate _application_ windows visibly open on the same screen, and when I do, they're usually side by side, not overlapping.


I still have all my windows overlapping, and I suspect it's because my formative years were spent in RISC OS. Also I still find "File Open" dialogs weird - every app having a miniature and less-functional tiny 'Filer' built in...


If you press F3 (the standard save shortcut in RISC OS), you get a Save As dialogue box which will persist even if you mouse-out.


A previous bit of discussion: https://news.ycombinator.com/item?id=37796610


Acorn was a curious company. It managed to get incredible amounts of work done by assigning big projects to individuals instead of teams. My memory is not to be relied on after this lapse of time, but I seem to recall that in the final years there was a browser maintained by one guy, a port of Java by two, and an implementation of DirectX by another. Obviously all those projects were much smaller back then (around '98), but still, those devs were doing the work that another company would have had a team do. And in fact this does work, as communication overhead is reduced, but in many cases the increase in productivity loses strategically to the slower time to market.


From personal experience it's amazing how much more productive I am on solo projects vs working with other people. When you're solo you can just go, but in a team everything needs to be discussed or at least communicated.


Scaling software development has been THE vexing problem since day one. There's no doubt that the most efficient system fits in the head of a single person; the challenge is then what?


Either software development teams are a wonderful metaphor for multithreaded code, or multithreaded code is a wonderful metaphor for software development teams. I'm not sure which.


Plot twist: the dining philosophers problem was based on a real-life software development team.


Each wants to claim a keyboard and a mouse, but due to new pair programming requirements set by management there is one fewer set of peripherals than there are developers.


Related to this subject is Casey Muratori's video about Conway's law and a possible extension to it. The communication overhead of working in teams, and the fact that it's harder to address cross-cutting concerns in them, are key themes in it.

https://www.youtube.com/watch?v=5IUj1EZwpJY

There is also Descartes' quote about how a work produced by one master is often better than one in which many are involved, because of the unifying vision.


But one master cannot go to the Moon.

It takes coordinated effort of many, many people. The same with making a-bomb, the same with making anything bigger in software.

It’s nice that Linus started the kernel and Git, but nowadays he’s not writing much code and most likely would not be able to personally review each and every PR.


You're right. We also have proverbs like "two heads are better than one" and "standing on the shoulders of giants", so people recognise that both sides are important and have their value.

Right now, I'm working on my own on a personal project attempting to do something a little novel, and I appreciate being able to go back and refine my ideas/previous code based on things I learn and additional thinking (even rewriting from scratch), whereas I'd be more likely to face friction (like "stick to the suboptimal approach; it's not that bad") and cause trouble for teammates if I were working with someone else. So the value of working alone speaks more to me currently than the value of working in teams, but they both have their place.


I read The Mythical Man Month recently (first published in 1975), and while some of it is charmingly dated (have a secretary take your code and run it down for you!), it's astonishing how much of its discussion and advice for structuring a team of programmers remains relevant even today.


As a teenager, I misread the title and borrowed it from the library. Imagine my disappointment and embarrassment when I realized that there was no mention of a human-moth hybrid at all.


Efficient, but not necessarily better. When I'm developing solo, I go back later and find I made some questionable decisions that a team member would have identified.


My amusing if cynical take: it’s a function of the number of opinions about how to do it. A project can tolerate 2 easily, 3 in many cases; 4 and above is difficult terrain. To scale up team count, you need to increase the count of “unopinionated, doesn’t really care” devs to prevent too many opinionated devs landing on the same part(s) of the project and conflicting. Put one or 2 on each pillar of the project - 3 tops if they work together excellently. If a project needs more bodies, drop in unopinionated devs. There’s enough bus factor that they catch each other’s code, but not so much that it grinds to a halt in communication overhead.


I've pretty much found the most it scales linearly is 2, and only then in good conditions such as working well together, greenfielding, and something with clear enough boundaries.

After that, well, it basically flatlines and even seems to decrease at times.


Old joke but - Two engineers can do in two months what one engineer can do in one month.

But this sort of goes with the African proverb, "If you want to go fast, go alone. If you want to go far, go together."


> but in many cases the increase in productivity loses strategically to the slower time to market.

I disagree that it always means slower time to market - if the individual is empowered and there's minimal process (no PMs, "grooming", estimates, etc.), a sharp individual can run circles around a full team.


Well, "many cases" isn't "always". A bigger team will get there faster if the development is larger than a certain size, if it's smaller then an individual can win.


I recall seeing a Macromedia Director player but never heard of a DirectX port. In any case the lack of hardware floating point in most of their machines was looking like a big mistake by the mid 90s. Their compilers were also way off the pace and that was getting to be a problem.

Tbh I think I am slightly bitter about having stuck with Acorn a bit too long, and should have jumped away sooner. It is clear Acorn knew they were toast even before the Risc PC. A lot of these very impressive developments were consequently glorious wastes of time, which is kind of tragic too.


I'm not sure if the directX port ever saw the light of day. At that point the top brass were putting their hopes in the 'Network Computer' [1] and Set Top Boxes, so it may have shipped with one of those, or been intended to.

[1]https://en.wikipedia.org/wiki/Network_Computer_Reference_Pro...


Brian McCullough's book How the Internet Happened (https://www.amazon.com/How-Internet-Happened-Netscape-iPhone...) angles this in terms of the centralized digital superhighway versus the open distributed internet. (Where thin clients, set top boxes, etc. are in the first group and web browsers and WWW protocol are the second.)

Essentially, by 2005 the open internet had won, but the iPhone (or more precisely: the App Store) realised the dream of a thin client and became the platform NCs had initially targeted — turning the internet into a walled garden with a vengeance.


The NC is one of those things that feels like it was simply too far ahead of where the technology and the mindset of users were. Nowadays it would have much more traction thanks to SaaS.

In tech, one step ahead is an innovator. Two steps ahead is a martyr.


> The NC is one of those things that feels like it was simply too far ahead of where the technology and the mindset of users were.

They were just like the X dumb terminals that predated them by about a decade: far behind what the technology was offering, with storage and computing power becoming cheaper every year. I'm glad they never caught on, and I hope the same happens to Chromeboxes/books; I don't want prices of the common hardware I use to go up because of market shrinkage due to lots of people ditching real computers in favor of dumb terminals where even the simplest service is something they must access and run remotely, with no or reduced local storage/computing power. Sorry for having an unpopular opinion, but to me SaaS is like going back 40-50 years to the mainframe era, and essentially is a way to put everything behind a counter so that users can be charged tomorrow for what today is still free.


Trivia at this point. But the Oracle Network Computer was a low-end x86 box running FreeBSD and a full-screen Netscape Navigator. Very much a product of its time.

(WebTV, later purchased by Microsoft, was the more successful product in this space.)


Acorn did the reference profile NC for Oracle, and it was an ARM7-based machine with NCOS, a stripped down version of RISC OS. The Acorn-built NCs were then sold under a variety of brandings, including Acorn's own and Xemplar (the Acorn/Apple education collaboration in the UK). Was the Oracle Network Computer a later variant?


Possibly, this was after it was spun-off as a separate company.


Yeah, this was back in the days of dial-up.

Chromebooks are effectively the modern NC


A great example. Chromebooks also managed to take the better ideas of Netbooks and go with them.


Obligatory Fred Brooks quote: "The bearing of a child takes nine months, no matter how many women are assigned."


Hah, try getting them to sit through refinement meetings, and I bet it'll take a lot longer!


RISC OS has been limping on thanks to the efforts of some extremely hard-working volunteers, but a roadblock is coming. The Pi 5 drops support for 32-bit ARM code, in which RISC OS is written, and since enormous chunks of it are written in assembly, there is no trivial way to port it.

Even so, it's heartwarming that people continue to put efforts into operating systems that aren't related to Unix or Windows. I'm happy to see people use this, and AmigaOS, and BeOS, and others. Computing shouldn't be a monoculture.


BeOS/Haiku are surprisingly modern considering their age. They made a lot of technology choices at the time that were limiting in the short term (multithreading everywhere) but very nice in the long term. It was just that those short-term issues were one of many anchors on them when they needed everything to go their way.

Haiku is the closest of these to being daily-runner ready, but like a lot of systems, lack of driver support keeps it from that lofty goal. Driver support is pretty much the only reason that Linux has been able to go so far.


> The Pi 5 drops support for 32-bit ARM code

Note that EL0 (userspace) support is still present, but RISC OS cannot currently run entirely in userspace.


I kind of went down a rabbit hole last year after the Xerox Star emulator was posted here. It was really cool, but super slow, so I felt like emulating another old system. I ended up writing a GUI calculator for RISC OS in ARM assembly, neither of which I had exposure to before. It was a blast. Very interesting system, it's like visiting a country on the other side of the planet.

There's a pretty good emulator, they've got a bundle fully loaded with all kinds of RISC OS tools.

https://www.marutan.net/rpcemu/easystart.html


That's my article -- I posted it here too:

https://news.ycombinator.com/item?id=40234430


Link to an HN discussion of another of your articles, with more RISC OS history.

Modernising RISC OS in 2020: is there hope for the ancient ARM OS? 111 points by lproven on Oct 10, 2020 | 53 comments

https://news.ycombinator.com/item?id=24735766


I have written about RISC OS quite a few times over the years.

• RISC OS: 35-year-old original Arm operating system is alive and well

https://www.theregister.com/2022/06/21/risc_os_35/

• Original Acorn Arthur project lead explains RISC OS genesis – Paul Fellows describes how it beat the overambitious ARX to Acorn's Archimedes computer

https://www.theregister.com/2022/06/23/how_risc_os_happened/

• Bringing the first native OS for Arm back from the brink – Steve Revill of RISC OS Open chats to us about taking the project into the future

https://www.theregister.com/2023/01/17/retro_tech_week_rool/


RISC OS was the first graphical OS I ever used - my father ran Sibelius (the very first version I think) on an Acorn computer for engraving music. The three-button mouse approach is totally unique, I'm glad they explain it in this article!


Press F12 and dust off your Dabs Press BASIC WIMP programming for the Acorn…

A machine on which you could code up a full GUI application with the BASIC interpreter in ROM, enabling children everywhere for whom a C compiler was unobtainium.


I’ve got fond memories of RISC OS from my school’s room full of Archimedes machines. I spent a fair bit of time designing and failing to build a text adventure in BASIC, while my teacher tried to get us to learn how to use TechWriter. The best feature was being able to drag a slider to create a RAM disk.

Amazingly you can still buy a copy of TechWriter 9.1 for £85! http://www.mw-software.com/software/ewtw/ewtw.html


The ability to assign memory to various aspects of the system dynamically by dragging bar charts is something I wish existed in modern operating systems. Sheer UX genius. I wonder who came up with it?


Anybody know a quick workaround for the middle mouse button? Actually it appears I'm also having trouble with the keyboard too. My wireless keyboard+trackpad worked fine yesterday on Raspberry Pi OS. It is refreshing though to see an OS that is lightweight and quick to install, and doesn't give you any surprise errors after you've got it all updated, about stuff like the web browser not being supported on your Model B+ (even if I wasn't able to connect to the internet and actually test functionality). Life is tough! I'm spending too much time trying to figure out all this random half-baked tech.


I forgot this is an 8-day-old thread. There is actually a lot about RISC OS that I think seems favorable over running Raspberry Pi OS, just based on the tiny amount of testing I did on both, and getting neither to work for my use case: I like how small RISC OS is, and it seems to have a better method for adjusting the scan to my TV - Raspberry Pi OS doesn't seem to have a way to adjust that. It does seem like there could be some improvements at making it more plug and play; or maybe it is my wireless keyboard's fault. I'll have to report back after I get that figured out.


Ok, well, the final story is, I rummaged around and found some wired keyboards and mice. It was a little frustrating, but I finally found a pair that seemed to function. I'm still not sure what the point of 3 mouse buttons is; it seems like right and left click do the same thing (that is, with the mouse I was able to actually get working - maybe RISC OS doesn't like any of my mice). Also the [FN]+Q,W,E,arrows trick didn't work on my keyboard that actually had a Function key. I guess I could have rummaged more (and maybe I will), but my hardware generally works fine with the Linux distros I have used (mostly MX). It seems like some more work could be done here. Also, my wifi dongle seems to be recognized, but incorrectly as a USB-ethernet interface, so I have an IP but no internet, and I can't find a way to change that. riscosopen.org says sign-ups are disabled and I need to email the webmaster; maybe I will; I am interested in trying out RISC OS, but it doesn't give me much confidence with my experience so far. ...It looks like MX still publishes a 32-bit version, so I guess I can try and run that on my old Model 2 B+? ...But I bet it's going to have the same problem with no web browser. So I guess all my 32-bit stuff is bricks now? On a side note, I noticed some degradation in the desktop experience with the latest version of MX on my old laptop (it is a ThinkPad T510 with an Intel i5). It is still functional, but I can feel the walls closing in.


Well, I went back to Raspberry Pi OS and installed Midori, and it successfully ran and loaded a page, though it quickly became apparent that Raspberry Pi OS on an old Model 2 B+ was not going to work (CPU + mem overload = slow). I have a vague memory of connecting a screen+keyboard+mouse to my raspi some years ago, though I can't really remember what the UX was like. In any case, it is unusable now with the latest version of Raspberry Pi OS. Here's a pic if you don't believe me (not that it is very much proof, but notice the scan is messed up - there is no way to fix it apparently): https://i.postimg.cc/3NjBDR6f/KIMG0503.jpg

I'm going to have to figure out another solution for my idea of turning TVs into cheap workstations. Sadly it seems like nobody is interested in making retro-computing accessible, and my favorite brand of Linux seems to be slowly moving on to new hardware support (things seem degraded after updating). It's just that it makes it hard to participate (and contribute) when simple setup is so difficult and hardware support becomes degraded. Maybe there is some project out there I just haven't seen yet that will fix all my problems, but it just seems less likely the longer I hang around. Actually there are a few options that come to mind, but testing OSs takes time, and so I think I'll just leave it at that for now.

Take care all.


Still remember viewing the Acorn Archimedes with RISC OS for the first time in 1987, when it was launched. Someone wrote a 10-line BASIC demo that rolled down the current screen with a curl motion, using bitmap copy/transform. Breathtaking performance/speed.

Never knew why it bombed in the marketplace.


> Never knew why it bombed in the marketplace.

Bombed?

RISC OS machines sold for some 15 years in the face of growing competition from Windows and Linux.

Acorn chose to shut it down and spun off ARM.

Castle Technologies continued selling it and made a new 32-bit-clean version and new hardware for it.

RISC OS Developments bought Castle and made RISC OS Apache-licensed FOSS.

The OS is still alive and in development nearly 40 years after release. You can buy new hardware running it.

You can buy new releases of the OS for original Acorn hardware, and emulators to run its apps on Windows and Mac OS X.

The CPU family is the best-selling in the world and outsells all x86 chips put together by something like 100:1.

Arm sold 8 Billion CPUs last year: https://newsroom.arm.com/news/arm-announces-q3-fy22-results

Intel sold 50 Million: https://tweakreviews.com/processor---cpu/intel-sold-the-most...

Tell me again about how this platform "bombed"?


Yes, the Archimedes/RISC OS system itself bombed and got outsold by the PC, Mac, Atari ST and the Amiga.

But the ARM processor/ISA itself that came out of the original ARM2 on Archimedes is now world dominating.

Both are true.


In the long run, _everything_ got outsold by the PC, including the Mac.

But in the long run we're all dead (as J M Keynes put it). Step back too far and everything gets lost in the noise.

Archimedes was the reason RISC OS was created... and Acorn was the reason Arm was. But originally Acorn's ARM machines were to run ARX.

https://en.wikipedia.org/wiki/ARX_(operating_system)

Arthur outdid ARX, so ARX was cancelled and Arthur shipped. Arthur became RISC OS.

It closely parallels how CAOS was cancelled and AmigaDOS shipped, and later was renamed AmigaOS.

http://www.bambi-amiga.co.uk/amigahistory/caos.html

Oddly the bit of AmigaDOS 1.x that was ripped out of AmigaOS >= 2.x became its own OS, HeliOS:

https://www.theregister.com/2021/12/06/heliosng/

There are a whole bunch of questions on Quora asking why the Amiga flopped. It did not flop. It sold millions of units for years and derivatives of its hardware and software are still on sale today.

The ST didn't flop. It sold millions too, and both EmuTOS and AFROS/Aranym are still around and maintained.

The Archimedes didn't flop. It sold lots, it established a line of machines and OSes that are still on sale, and an offshoot of the company is still around and worth billions and totally dominates the computer industry. Today its CPUs power the Mac, and iPad/iPhone and Android AND WINDOWS -- and outsell PCs by about 10x over.

The latest version of the native PC OS runs on Arm chips as well and there are Arm-based PCs on sale now.

Do we say the PC bombed because DOS and Windows 3/9x are dead? Of course not!

Do we say the Mac bombed because it killed its OS, bought one in, and then moved to Intel for 15 years? Of course not!

Did the Mac bomb because all new Apple kit runs on that bought-in OS on a chip design that came out of Acorn? Of course not!

So Acorn's original OS largely died out and has little industry relevance now. So what? So did classic MacOS. So did MS-DOS. So did CP/M. So did original Windows.

Windows today is based on the result of a cancelled DEC OS, Mica, and a cancelled IBM OS, OS/2. The bit MS wrote, MT/DOS, is long gone and went FOSS last month.

Apple OSes today are all based on NeXTstep, and that was based on BSD tech.

But the chips they all run on -- not the only chips, but the best-selling ones -- are Acorn designs.

"Bombed," my ass.


The Acorn-designed ARMs were released in the late 1980s and very early 1990s. The best-selling designs are the recent ones which do share heritage but are far removed from the excellent work that Furber, Wilson and team did while at Acorn.

RISC OS is a plaything for enthusiasts, a relic of an age when computing was a hobby rather than the corporate monstrosity it has become.

As a hobbyist and RISC OS user I am fine with that. But Acorn itself got out of the game in 1998, after Phoebe took only 1,400 preorders. Had they continued, Galileo might well have succeeded RISC OS with a more modern foundation.


I thought the Amiga guys were the rabid fanbois...


Go on then. Falsify my numbers.


Best remember that, compared to today, there was hardly a marketplace until the late nineties. The 70s and early 80s were dominated by hardware enthusiasts, the 80s by games software and a nerdy minority, and business PCs had little momentum until the late 80s and 90s. All this time the market was flooded with different types of mutually incompatible microcomputers. After the DOS-compatible PC gained momentum, the only company still (barely) standing was Apple, and it ultimately adopted commodity PC hardware too.

So I think the reason Acorn (and Commodore, Sinclair, Amstrad and RadioShack, as well as Sun, DEC and SGI) went belly up is primarily that, even while the market was growing, there simply wasn't enough space in it to compete with Wintel dominance.


Using RISC OS on an RPI was as close to using a computer for the first time as I've ever gotten again. So many ways of doing things are not bad, but completely alien if you've never done it.

It actually gave me an appreciation for how computers must feel to those who aren't used to them.


RISC OS is an interesting beast, but I’d LOVE to see RISC/IX ported to the RPi and other small boards.

Has anyone saved the sources anywhere? Would whoever now owns IXI IP help with the desktop part?


You mean the Archimedes GUI on top of the BSD kernel?

- https://en.wikipedia.org/wiki/RISC_iX

I have memories of drooling over those R series machines, or anything MIPS/SPARC/Alpha-based for that matter, in the early 90s. I think ARM systems at the time were never competitive in terms of speed (compared to Sun, SGI, HP), but they were the Archimedes' sexy sisters.


Acorn bought-in a couple of GUIs for RISC iX (Motif mwm/twm in later versions, X.desktop from IXI Ltd in the earlier release), but I'm not sure if any were exclusive, and none looked like the RISC OS GUI - I wouldn't characterise anything about the RISC iX GUI as being particularly 'Archimedes'. There's a video of someone playing around with the GUIs here: https://www.youtube.com/watch?v=8r7vgQsuoT4

If only it had been given the RISC OS GUI with BSD underneath - that would have been way easier to port to modern Unix-like foundations and may have had a healthier-but-niche future as something 'modern' beyond Acorn's existence.


Ah yes, Motif! Now I remember, I stand corrected.

Before GNOME, Qt and KDE we of course had Motif, and X with its license issues. If I remember correctly, early Slackware CD bundles came with Motif as well.


IIRC, KDE was created because Motif/CDE were proprietary; Qt was free to use and had an open-source-style license, but one that wasn't compatible with the GPL. There was an agreement with Trolltech, the creators of Qt, that would make Qt available under a BSD-style license if they didn't release a free/open-source version within a year.

GNOME was created because of those license issues with Qt not being compatible with the GPL, so GNOME used GTK, which had been created by the GIMP project because Motif wasn't free.


There was a free implementation called “lesstif” IIRC. Motif was proprietary until much later, as was CDE.

IXI’s desktop runs on top of Motif.


[Article author here]

> You mean the Archimedes GUI on top of the BSD kernel?

No. (Although that sounds fun, I never heard of such a thing.)

RISC/ix was Acorn's ARM UNIX. It had nothing of the Acorn GUI -- it was fairly standard X11, I think with Motif or something Motif-like.

Acorn did add some Acornish tweaks to it, including its text editing and its memory allocation, but it looked and ran much like any other late-1980s UNIX, as far as I know. I never got to try it myself, sadly.


I still think it’d be fun to bring it back to life. I don’t think there are many copies still usable and it was picky about hardware - it wouldn’t run on more modern Acorn boxes.


RISC/ix?

It was based on BSD 4.3. Those efforts went into BSD 4.4, which went into NetBSD, and NetBSD 1.2 incorporated Acorn's ARM port.

https://groups.google.com/g/comp.sys.acorn/c/G19nI9eac-o/m/q...

https://www.netbsd.org/changes/changes-1.2.html#port-arm32

I'd say that the bits that matter mostly survived and still do.


For me, the desktop apps are vitally important to rebuild the experience.


SCO owned IXI at one point and sold the desktop to various vendors until CDE emerged.


Skimmed the article, didn't see this important bit mentioned: I'm pretty sure the OS is available through rpi-imager, meaning it's trivial to set up on an RPi and presumably has some notion of official endorsement from RPi org.


[Article author here]

Yes it is. I didn't think that was an important bit.

True story. ROOL went to show the RPi foundation their first version. It booted to the desktop, had an Apps folder with text editor, graphics, sound, command prompt, BASIC etc.

Eben Upton asked how big it was.

They told him it was 6MB.

Eben asked "no, not the kernel, the whole OS?"

They said "that is the whole OS. We haven't got SD card reading working yet. This is booted from the FAT partition kernel image."

Upton commented that if he had known the Archimedes OS was still around and it was FOSS, and it ran Python, he'd have made it the Pi official firmware.

Such a giant missed chance. Hundreds of millions of people would have had Pis that booted into RISC OS.

Acorn's own original Unix hardware booted RISC OS from ROM, then you clicked a desktop icon to load RISC/ix, Acorn's BSD.

The Pi was very nearly the same. A 10¢ flash ROM would have held the whole OS as it does on the Raspberry Pi Pico. That is the price Upton gave me personally in an interview.

https://www.theregister.com/2022/01/17/raspberries_pi_direct...

Pis would have booted direct into RISC OS without an SD card needed.


That would have been sweet.


Raspberry Pi is amazing. Everything from the size and simplicity of hardware to that of Raspbian. Super moddable, modular, surprisingly performant. As a lot of others do, I use one for most of my sites and apps as a home server. 15+ apps, sites, and APIs running on this thing 24/7/365 over WiFi.

What would be the benefit of using RISC OS over Raspbian, or even Ubuntu Server? Is it pure nostalgia like running Windows XP on a Pi?


> simplicity of ... Raspbian

You think Raspbian is simple?

https://www.raspberrypi.com/software/operating-systems/

« Raspberry Pi OS with desktop and recommended software

    Release date: March 15th 2024
    System: 32-bit
    Kernel version: 6.6
    Debian version: 12 (bookworm)
    Size: 2,678MB
»

https://www.riscosopen.org/content/downloads/raspberry-pi

« Complete SD card images

RISC OS Pi

2024-04-28 06:15:00

For Pi Zero & ZeroW & Zero2W, Pi 1 models A(+) & B(+), Pi 2 model B, Pi 3 models A+ & B(+), Pi 4 model B, Pi 400, Compute Module 1 & 3(+) & 4.

Version 5.30 Size 155.1 MB »

2.6GB of code, vs 155MB.

It's more than seventeen times bigger.

17.25x as big, to be precise. And yes, I chose the image with "recommended software" because that 155MB RISC OS image is packed with dozens of apps as well.


Apps? So you would use this OS for the amazing consumer experience? And in your mind is that comparable to a Linux distro like Raspbian or Ubuntu in terms of availability of apps?

Strange way to gauge simplicity, too: "this screen has 10x fewer pixels!"


What's better for a tiny underpowered computer with not much RAM...

* A tiny simple OS with a modest selection of really good apps?

* Or a huge slow complicated OS with lots and lots of indifferent-quality apps?


The way you're talking (Linux is a "slow complicated OS") means you're likely some dark wizard in a tower somewhere inventing either the apocalypse or the next great consumer experience.

For that reason I thank you for your likely many contributions to obscure open source projects and gosh speed on your endeavors.


No, not really.

I'm a techie turned journalist, who's been using computers for well over 40 years now, with a particular interest in obscure and niche OSes and programming languages.

I've used an exceptionally broad range of computers for someone still active in the industry. Counting the entire field of PC-compatible x86 machines as one, I'd guesstimate I've used and worked with 30 or 40 different architectures. Counting all forms of Unix-like OS, from SCO Xenix to Linux, as one OS in different implementations, then again I'd estimate 30 to 40 different OSes.

I remember how small and simple OSes used to be. I remember the era when a multitasking GUI OS fitted easily into a single megabyte of RAM. When a machine with 4 or 8MB of RAM was more than adequate for exploring the Internet.

I am especially interested in OSes that are still around today, still being maintained, that are small enough for a single person to read the entire codebase, all of it, every line, in a matter of weeks and understand the whole thing in months.

There are several such systems.

There seems to me to be a belief today that a serious useful system must inherently be gigabytes of code, tens of millions of lines, and nobody can understand the whole thing. That is simply NOT TRUE and it never was.

No, I am not writing such things. I am writing about them and trying to bring more peoples' attention to them.


> What would be the benefit of using RISC OS over Raspbian, or even Ubuntu Server? Is it pure nostalgia like running Windows XP on a Pi?

It's small - apps are hundreds of KB to a few MBs and it's fast/responsive. The question is what do you want to use it for? There are apps for most things, but as it's not Unix or Windows it doesn't have a lot of ports of bigger open source apps.


> The question is what do you want to use it for?

Exactly: people replied as if those apps and use cases were actually in use today.

So it's just nostalgia right?


It could also be that it meets its users' needs and gets out of the way. Commercial software is still sold for RISC OS. The market is small, I imagine even smaller than the Amiga's, but some people still buy it.

Is it nostalgia, or just "if it ain't broke, don't fix it"?


> Is it nostalgia, or just "if it ain't broke, don't fix it"?

I see - yeah I get that, like classic cars, sailboats, SQL, etc.


It's raw, as if you had a Forth system with a GUI that isn't aggressively obtuse. You can touch everything in it from a BASIC dialect, and at least in some variants drop into assembler and run the hardware somewhat directly.

This allows e.g. interesting graphics programming, and there's very little that gets in the way of immediately testing ideas.
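
To give a flavour of that immediacy, here is a minimal sketch of the sort of thing you can type straight into BBC BASIC on RISC OS and run. MODE, ORIGIN, GCOL and LINE are standard BBC BASIC keywords; the particular mode number, colours and coordinates are just illustrative assumptions, nothing canonical:

  REM Quick-and-dirty graphics doodle, typed straight into BASIC
  MODE 28                    : REM assumed: 640x480 in 256 colours
  ORIGIN 640, 480            : REM move the graphics origin to roughly screen centre (OS units)
  FOR angle% = 0 TO 355 STEP 5
    GCOL 0, angle% MOD 64    : REM cycle through part of the palette
    LINE 0, 0, 500*COS(RAD(angle%)), 500*SIN(RAD(angle%))
  NEXT

No compiler, no toolchain, no boilerplate just to put something on the screen, which is much of the appeal being described here.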



Excellent, I've been waiting for Wifi support, it's a game changer. Will be mucking around with this at the weekend.


Single user issues aside, this looks better than every version of Linux I’ve seen in the past decade.


The (abandoned) ROLF project was an attempt to get a RISC OS look-and-feel for Linux: https://web.archive.org/web/20070211082559/http://stoppers.d...

There is also ROX Desktop, which has had some recent commits: https://github.com/rox-desktop/


Wow, the article itself brings me down memory lane. The one thing that will not bring me into this platform: cooperative multitasking. (Shudder) I’m sure it feels fast though!


You should try it.

It is a good demo that some of the tech that Unix folks fetishize is actually an optional extra, and if you let it go, you can do more in 1% of the space.
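
For anyone who has never seen it, the cooperative model under discussion boils down to every desktop application sitting in a Wimp_Poll loop and voluntarily handing control back to the OS. A minimal sketch in BBC BASIC, using the real Wimp_Initialise / Wimp_Poll / Wimp_CloseDown SWIs, but with the task name and the quit-only event handling chosen purely for illustration:

  REM A bare-bones Wimp task: nothing but a poll loop that exits on Message_Quit.
  REM &4B534154 is the word "TASK"; 310 is the minimum Wimp version x100.
  DIM block% 256                          : REM poll block the Wimp fills in
  SYS "Wimp_Initialise", 310, &4B534154, "PollDemo" TO ,task%
  quit% = FALSE
  REPEAT
    SYS "Wimp_Poll", 0, block% TO reason% : REM yield; returns with the next event
    CASE reason% OF
      WHEN 17, 18 : IF block%!16 = 0 THEN quit% = TRUE : REM Message_Quit
    ENDCASE
  UNTIL quit%
  SYS "Wimp_CloseDown", task%, &4B534154
  END

Everything else on the desktop only gets a turn between those Wimp_Poll calls; that is the essence of the cooperative model.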


RISC architecture is gonna change everything


It did.

Arm sold 160 times as many CPUs as Intel did last year.

x86 is a rounding error. It's under half a percent of the CPU market.


It gives me strong Amiga-vibes and it seems it's open-source now... I like it and will definitely try it out!


The most impressive thing to me was the inbuilt BBC BASIC, and the availability of every system function to BASIC via SWI calls.

This contrasts with languages like GW-BASIC, or even the commercial QuickBasic (QBX) or Visual Basic: second-class citizens when it came to accessing system calls under Windows (or DOS).

In fact, I wrote simple BASIC code to access the undocumented random number generator within the BCM2835 chip - it worked perfectly under RISC OS and even on a dedicated BBC BASIC emulator for the Pico.

This is what the code looks like for those that are interested:

REM MAP MEMORY
SYS "OS_Memory",13,&20104000,32 TO ,,,RNG_CTRL%
SYS "OS_Memory",13,&20104004,32 TO ,,,RNG_STATUS%
SYS "OS_Memory",13,&20104008,32 TO ,,,RNG_DATA%
SYS "OS_Memory",13,&2010400C,32 TO ,,,RNG_FF_THRES%
SYS "OS_Memory",13,&20104010,32 TO ,,,RNG_INT_MASK%
REM CHECK WHERE THE REGISTERS ARE MAPPED
PRINT "RNG_CTRL MAPPED AT &";STR$~(RNG_CTRL%)
PRINT "RNG_STATUS MAPPED AT &";STR$~(RNG_STATUS%)
PRINT "RNG_DATA MAPPED AT &";STR$~(RNG_DATA%)
PRINT "RNG_FF_THRES MAPPED AT &";STR$~(RNG_FF_THRES%)
PRINT "RNG_INT_MASK MAPPED AT &";STR$~(RNG_INT_MASK%)
REM THESE INTS BECOME REGISTERS R0-R7 RESPECTIVELY
A%=1
B%=RNG_DATA%
C%=RNG_INT_MASK%
D%=RNG_STATUS%
E%=RNG_CTRL%
F%=&1
G%=&4000000
REM GET A RANDOM NUMBER FROM THE RNG
DIM RNG% 30
P% = RNG%
[ OPT 1
SWI "OS_EnterOS"
LDR R0,[R1]
SWI "OS_LeaveOS"
MOV PC,R14
ALIGN
]
REM INIT THE RNG
DIM INIT% 30
P% = INIT%
[ OPT 1
SWI "OS_EnterOS"
STR R5,[R4]
STR R6,[R3]
SWI "OS_LeaveOS"
MOV PC,R14
ALIGN
]
REM LETS INIT...
CALL INIT%
A%=0
X%=0
FIRST%=0
REM KEEP READING RANDOM NUMBERS UNTIL THEY ACTUALLY BECOME RANDOM
REPEAT
  X%=A%
  A%=USR(RNG%)
  PRINT "WARMING UP &";STR$~(A%)
  IF FIRST%<50 THEN X%=A%:FIRST%=FIRST%+1
UNTIL X%<>A%
REM DISPLAY RANDOM NUMBERS (RETURNED FROM R0)
REPEAT
  A%=USR(RNG%)
  PRINT "&";STR$~(A%);" ";
UNTIL 1=0


This was fixed around Visual Basic 5, when it gained AOT compilation via the VC++ backend and OCX tooling in VB; it hit a sweet spot in VB 6. Naturally this was partly triggered by competition from Delphi, and then .NET came along.


"significant chunks of it were hand-coded in Arm assembly code", as all modern OSes should be at the kernel level


It's giving them grief now, though, because the Pi 5's chipset drops support for 32-bit Arm.


Thank goodness they are not.


> RISC OS gives applications access to much of the memory map, and so if a program accidentally scribbles over the wrong parts of that address space, the whole computer can freeze up – which in testing our Pi 400 did several times.

Enough said.


Yep. There are good aspects and bad aspects to old-school 1980s OS design.

On the other hand, the entire PC industry was built on DOS and 16-bit Windows which were exactly like this.

Apple made enough money from selling classic 68K and PowerPC Macintoshes, which were exactly like this, to buy the dying NeXT.

Early Linux was exactly like this, too. I remember running Red Hat Linux 4.2 on my SPARCstation and having kernel panics right, left and centre. Several a day, every day.

Everything we use today, all the hardware and all the software, was built both on software like this and from software like this.


> Early Linux was exactly like this, too. I remember running Red Hat Linux 4.2 on my SPARCstation and having kernel panics right, left and centre. Several a day, every day.

No, not exactly like this. Even early versions of Linux used separate address spaces for each user process and for system memory (like all but the earliest Unix systems), preventing unprivileged user-space processes from clobbering system memory. A wayward pointer in an unprivileged application is strictly speaking undefined behavior, but on Unix systems it typically causes a segmentation fault signal (by default terminating the offending application), not a crash of the whole system.

That doesn't mean there were no bugs in the kernel, or crashes resulting from them, but even by the mid-nineties Linux, while perhaps not yet comparable with the likes of Solaris and Interactive Unix, wasn't any worse than SCO Unix and was much more stable than the 16-bit offerings from MS. Linux kernel crashes were rare (not as rare as today, thanks to the continuous effort of hundreds of contributors and the prioritising of regression fixes); more often, users experienced out-of-memory situations, which before the addition of the OOM killer could effectively freeze a Linux system for a long time, or even indefinitely.

Also, the X11 server, or rather some graphics card driver (then part of the X11 server), wasn't quite of the same quality, and when it crashed it terminated the user's session (all of the applications started during that session). In other words, used as a small server Linux was reasonably stable early on; used as a GUI desktop system, not so much.


Apple actually needed a little external help, as they were already in the red.


Selling its shares in Arm Ltd kept it afloat at one point, I believe.

Who or what else?



Oh not that tired old lie. :-(

It's bullsh1t. It's not true. It wasn't an "investment". Microsoft stole Apple code and used it in Video for Windows. It got caught. Apple took MS to court and was going to win, so it settled. The marketing-lizards spun this as "investing" but it's not true.

https://www.zdnet.com/article/stop-the-lies-the-day-that-mic...


Not only have I walked this planet since the 1970s, I lived through this "lie" in all the key publications of the time, and I used the Cult of Mac site on purpose, precisely because I was expecting that reply of yours.


Good for you. Since the 1960s here, myself. I just barely remember the first moon landing.

It's still a lie, no matter how many people believe it. Compare with the flat earthers, or people who use homeopathy, or all religions.

The code was stolen and used improperly by the San Francisco Canyon Company. It's all a matter of historical record. Read the references here:

https://en.wikipedia.org/wiki/San_Francisco_Canyon_Company


That UI looks like something out of the late 80s / early 90s. I can smell the VCR head cleaner and scratch-n-sniff stickers and hear the dial-up internet bee-bee-bee-bee-gurglegugrle-dong-ding-dong-ding-whooooooooosh-wheeeeeeeesh-bling! bling whooooo.

Can we at least upgrade the fonts, colors, and negative space to make it look more 2020s?


> Can we at least upgrade the fonts, colors, and negative space to make it look more 2020s?

Of course YOU can, it's open source, feel free to hack away at it.


There's been the Desktop Modernization Engine / Project

https://paolozaino.wordpress.com/portfolio/risc-os-desktop-m...

discussion here:

https://www.riscosopen.org/forum/forums/1/topics/17696

Development seems to have stalled / slowed down.


I know this is a joke but this OS was first released in 1987... I forgot all about it, but it's pretty cool that you can still run it on hardware.


> Can we at least upgrade the fonts, colors, and negative space to make it look more 2020s?

Ew. Please no.

I hate 2020 OS design. It's flat and ugly and confusing.

And this is very much not just me.

https://medium.com/@jared.cyr/is-flat-design-overrated-1b9d4...

https://www.spinxdigital.com/blog/downsides-flat-design-tren...

https://www.nngroup.com/articles/flat-design/

1990s design was clearer, cleaner and better, dammit.

https://twitter.com/RetroTechDreams/status/17860872619535321...



