
I think it's one of these things that's more approachable if you start by working on specific subsystems of an existing OS. This way you don't have to figure everything out at once. At least that's how I did it.

I think writing an OS from scratch is discouraging and not very accessible because... writing an OS is discouraging and not very accessible. It's a bit as if a painter were saying "I tried looking up guides on how to paint The Wedding Feast at Cana but they all make it look super difficult, I wish I could get an easy step-by-step tutorial". Even if you break it down into small, digestible parts I'd wager that you'll end up with a few hundred episodes before you even get a basic microkernel up and running on a modern system. It's truly a daunting task.

I learned that the hard way: I tried to write a guide on how to emulate a PlayStation from scratch. At first you focus on emulating the instructions in the CPU, that's relatively focused and straightforward. But then once you're done with that you need to explain the various peripherals, and the main bus, and the interrupts, and the cache, and the timers, and the pipelines, and the intricacies of the various CD formats, and the memory card filesystem, and video modes, and paging, and high impedance, and metastability, and everything kind of interacts with everything else so it's difficult to maintain a coherent narrative.

On top of that a PlayStation is a ridiculously simple system compared to a modern desktop computer. The task of writing a very accessible guide on how to write an OS that would take you from zero to a working userland is absolutely tremendous. I think you could probably write a thousand-page book on the virtual memory subsystem alone.




“I think writing an OS from scratch is discouraging and not very accessible”

A lot of work, yes, but, IMO, fairly accessible if you are willing to skip the steps that initialize the hardware (“it’s tedious, but luckily somebody did it for us”), don’t aim for replacing the state of the art, and support limited hardware (certainly no USB-C, for example).

For example, your first usable system doesn’t need paging, memory protection, or even a halfway decent memory allocator. You also need not support zillions of different devices (Linus didn't, either, with his first usable system)

Just start with an OS that runs a fixed max number of processes each in some fixed amount of memory in round-robin fashion. Let’s say you have a gig of RAM. Split it in 1024 1 megabyte blocks, reserve one for your OS, and require programs to be position independent.

Yes, that’s almost equivalent to a single process running multiple threads (in some sense even worse, as a program wanting over a MB of memory can’t run, even if the other processes use only a small fraction of their megabyte of memory), but that’s what makes it accessible. It also isn’t too dissimilar from what was done historically (it isn’t that many steps away from the original model of Windows CE, for example, and that isn’t even 25 years old).
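The fixed-partition scheme described above is small enough to sketch in C. Everything here is hypothetical (names like `spawn` and `next_proc` are made up for illustration); real code would also copy the program image into its block and save/restore register state on each switch:

```c
#include <stdint.h>
#include <stddef.h>

#define PARTITION_SIZE (1u << 20)  /* 1 MiB per process */
#define MAX_PROCS      1023        /* 1 GiB of RAM, one block reserved for the OS */

typedef struct {
    int       in_use;    /* slot occupied? */
    uintptr_t base;      /* start of this process's 1 MiB block */
} proc_t;

static proc_t procs[MAX_PROCS];
static size_t current;   /* index of the currently running process */

/* Round-robin: pick the next occupied slot after `current`. */
static size_t next_proc(void)
{
    for (size_t i = 1; i <= MAX_PROCS; i++) {
        size_t cand = (current + i) % MAX_PROCS;
        if (procs[cand].in_use)
            return cand;
    }
    return current;      /* nothing else runnable */
}

/* Claim the first free 1 MiB block for a new (position-independent) program.
 * Block 0 belongs to the OS, so user blocks start at 1. */
static int spawn(uintptr_t ram_base)
{
    for (size_t i = 0; i < MAX_PROCS; i++) {
        if (!procs[i].in_use) {
            procs[i].in_use = 1;
            procs[i].base = ram_base + (uintptr_t)(i + 1) * PARTITION_SIZE;
            return (int)i;
        }
    }
    return -1;           /* all partitions taken */
}
```

Note how the scheduler needs no knowledge of memory at all: the fixed partitioning means a process's address range is fully determined by its slot index.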

If you have that and a minimalistic file system, you probably can run quite a bit of Linux userland on it (slowly because of the round-robin scheduler, and crashing often when processes run out of memory), but then it’s just a matter of scratching itches as they come up. Memory protection probably would be one of the first features I would add, as it makes the system way more robust. An editor and a (simple, say a single-pass compiler of a pascal-like language) compiler also would be high up, to make the system self-compiling. VM paging probably would come late in the game now that small systems have at least a gig of RAM.


For what it's worth, this is exactly how most Operating Systems courses in undergraduate CS work, so it definitely seems like many different smart people have converged on this as the right way to go. At least in my course, we did build a toy OS by the end of the semester, but it was rigorously divided projects for the individual subsystems in isolation: write the pre-emptive scheduler, write the networking layer, write the filesystem, etc. A modern OS is an incredibly complex beast and trying to grasp it all at once is just going to lead to madness.


I fully agree, but I think it's good to have options on how to approach the issue. Generally, I've referred to OSDev when I need more real-world information on how things tend to work in practice and resources like Phil's when writing a toy operating system.

I wouldn't consider Phil's blog a good resource for, e.g., gaining particular insight into how something like the Linux kernel works, but I also don't think Phil intended it for that use.


> I need more real-world information on how things tend to work in practice.

Have you tried reading any books?


I think it's one of these things that's more approachable if you start by working on specific subsystems of an existing OS. This way you don't have to figure everything out at once.

The trouble is, you get yet another UNIX clone that way.


> I think writing an OS from scratch is discouraging and not very accessible because... writing an OS is discouraging and not very accessible

I actually learned C by writing an OS from ages 16 to 18. I don't know if I'd recommend it to everyone, but it is surprisingly accessible if you limit scope. The real hard parts are dealing with complicated hardware. If you stick to the basics like keyboard, mouse, display, flat memory, and no threads, then it's incredibly easy in 16-bit C for x86. 32-bit C (i386) is also easy, but does require a bit more setup.

It is incredibly educational learning how to write your own libc and having to manage toolchains etc. You very quickly learn the exact boundaries of what is computer controlled and what is toolchain controlled, and of course what is controlled by your own code.

And yes, I had a similar experience to you re: emulators. Tried making a simple 8086 computer emulator. CPU instructions are really easy and really the part I enjoyed. Emulating the hardware and all the machine protocols gets very difficult and complicated very quickly. I ended up, after a few months of that, rescoping the project to only be an 8086 CPU emulator, which was one of the few projects of my youth I actually completed.


I don't disagree with you, but there are a lot more basic resources available now than when I learned kernel programming in the early 90s.

also, being able to develop under qemu is a massive timesaver. not only can you reboot completely from the terminal, and have access to a gdb stub, but you can also get tracing information directly from the "hardware" itself.

using the virtio drivers gives you a pretty quick onramp to having network and storage support.

you could write a 1000 page book on vm... but writing a little interface that builds x86-64 page tables really is a couple hundred lines. interrupts another 1-2 hundred. threads another, syscall dispatch, etc.
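To give a feel for the scale being claimed, here is a minimal sketch of such a page-table builder: identity-mapping the first 1 GiB of an x86-64 address space with 2 MiB "huge" pages. It only fills in the tables (which is why it can run in userspace); on real hardware a kernel would allocate these from a physical frame allocator and load the PML4's physical address into CR3:

```c
#include <stdint.h>
#include <string.h>

/* x86-64 4-level paging: PML4 -> PDPT -> PD, each table being
 * 512 eight-byte entries (4 KiB) and 4 KiB aligned. */

#define PTE_PRESENT  (1ull << 0)
#define PTE_WRITABLE (1ull << 1)
#define PTE_HUGE     (1ull << 7)   /* at PD level: entry maps a 2 MiB page */

typedef uint64_t pte_t;

static _Alignas(4096) pte_t pml4[512];
static _Alignas(4096) pte_t pdpt[512];
static _Alignas(4096) pte_t pd[512];

static void build_identity_map(void)
{
    memset(pml4, 0, sizeof pml4);
    memset(pdpt, 0, sizeof pdpt);
    memset(pd,   0, sizeof pd);

    /* Upper levels point at the next table down; the low 12 bits hold flags. */
    pml4[0] = (uint64_t)(uintptr_t)pdpt | PTE_PRESENT | PTE_WRITABLE;
    pdpt[0] = (uint64_t)(uintptr_t)pd   | PTE_PRESENT | PTE_WRITABLE;

    /* 512 entries x 2 MiB = 1 GiB, identity-mapped. */
    for (uint64_t i = 0; i < 512; i++)
        pd[i] = (i << 21) | PTE_PRESENT | PTE_WRITABLE | PTE_HUGE;
}
```

A real kernel adds per-mapping flags (NX, user/supervisor, caching) and a walk-and-allocate routine for arbitrary virtual addresses, but the core really is this small.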

I'm not suggesting that everyone go off and build their own production-grade private OS... but it's a pretty fun exercise, and it gives you a really great perspective on what's going on down there.


That's why it's important to choose simpler systems. I've been having fun writing a Forth-based Z80 operating system for the TI-84+ calculator[1]. There aren't too many moving parts, and there are many "toy" systems to learn from, such as one that boots up to a smiley face[2]. It's all about choice of architecture.

[1] https://github.com/siraben/zkeme80 [2] https://www.ticalc.org/archives/files/fileinfo/442/44227.htm...


I agree.

By the way, you mention high impedance and metastabilty. Knowing next to nothing about the PlayStation, are those concepts relevant for emulation there for some reason? For other platforms I think that’s an abstraction layer that even accurate emulators rarely hit (though I might be wrong). Is there something special on the PlayStation?


It's been a while since I've looked into it so I hope I won't be too far off the mark:

- Regarding high impedance it might be relevant if you want to explain how the controller and memory card interface works, and why you read full 1s when nothing is connected (high impedance line with a pull-up resistor). More generally the issue will crop up every time you have an interface that might be floating, I think people who have no background in electronic engineering might assume that unconnected wire == 0 on the line.

- Regarding metastability it crops up in the timers. The PSX timers can either run from the CPU clock or from signals coming from the GPU (pixelclock and horizontal blanking). IIRC this is used in some lightgun games to be able to synchronize the signal coming from the gun ("I saw the lightbeam") with the current position of the pixel in the frame currently being sent to the TV set.

The problem with that is that the signals coming from the GPU are not synchronous with the CPU and apparently are not resynchronized. That creates metastability issues when the values from these timers are read by the CPU, and you may end up reading bogus values from time to time. The software workaround is to read the register in a loop until you get the same value twice in a row. Now, you probably don't need to emulate that, but if you want to be thorough it's probably worth pointing out.
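The read-until-stable workaround described above is a one-liner loop. A runnable sketch (the "hardware" here is simulated with a plain function standing in for a volatile MMIO read, and the injected glitch is purely hypothetical):

```c
#include <stdint.h>

static uint16_t stable_value = 0x1234;  /* what the counter "really" holds */
static int glitch_left = 1;             /* simulate one metastable read */

/* Stand-in for a volatile read of the timer's MMIO register. */
static uint16_t read_timer(void)
{
    if (glitch_left-- > 0)
        return stable_value ^ 0x0100;   /* one-off bogus value */
    return stable_value;
}

/* The software workaround: keep reading until two consecutive
 * reads agree, then trust that value. */
static uint16_t read_timer_stable(void)
{
    uint16_t prev = read_timer();
    for (;;) {
        uint16_t cur = read_timer();
        if (cur == prev)
            return cur;
        prev = cur;
    }
}
```

This only works because the counter ticks much more slowly than the read loop, so back-to-back reads normally agree unless one of them was corrupted.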

So in summary you're right, you don't really need to know that in order to write a proper PSX emulator, but if you really want to get into all the details you'll probably want to brush up on the subject. At least these are concepts that anybody is sure to encounter eventually if they spend time in bare-metal land...


Ah, thanks. Now that you mention it, high impedance data lines ("open bus") do play a role in other platforms as well. Even more so, on the C64 for example some undocumented instructions cause conflicting drivers to drive the same lines and produce randomish/unstable outcomes.

And nice to know that there are indeed metastability issues that crop up very visibly in the PlayStation!


It's conceivable that metastability issues could be used as a part of copy protection. Well, protection against emulation more like.

Of course it's not the smartest choice for that purpose, because there might be chip-to-chip differences. And environmental factors like temperature could affect it as well.

But who knows what people wrote after Bleem PS1 emulator was public?

So have metastability issues (or other HW bugs) ever been used for copy/emulation protection?


Most (all?) copy-protection schemes on the PSX had to do with the CD subsystem. I think they were mainly attempting to defeat hardware "modchips" used to play non-original copies of games. Obviously those are trivial to bypass in an emulator.

I'm not aware of any copy-protection scheme on the console that would target specifically emulators. I guess Bleem was not big enough a threat to warrant specific protections?

Besides I expect that Bleem, in order to run full speed on the underpowered hardware of the time, must have had terrible accuracy and therefore must have employed a wealth of game-specific hacks to get around issues. As such I expect that if they had decided to emulate your game they wouldn't have had too many issues reverse-engineering any trivial metastability-detecting code to implement an ad-hoc hack.


There are copy protection schemes on magnetic disks that mess with the clock bits on the disk to produce an unstable value:

https://retrocomputing.stackexchange.com/a/7853


This would make sense... if existing OSes were very good at maintaining non-leaky abstractions. As it stands, one would have to learn tons of stuff unrelated to the task at hand, on top of all the legacy cruft that usually comes with the hardware itself.


To your comment about a PlayStation being ridiculously simple compared to a desktop computer: I'd call that close to true for the now-20-year-old first generation. The most recent two have basically been custom extensions on top of an operating system (FreeBSD 9). Which is to say, they are desktop computers.


I generally assume if there's no numbers appended to the end of PlayStation it's referring to the PlayStation 1. Also, based on the context of what the parent poster was describing, I think it's fairly safe to assume they were in fact referring to the 'first generation' Playstation as you call it.



