Hacker News
I’m writing my own OS (gusc.lv)
299 points by maidenhead 1785 days ago | 200 comments

Sigh, folks give the guy a break.

Sure, he doesn't know what he doesn't know, but he has decided to fix that. Which, if you know your history, is not a whole lot different from Linus back when he was calling out Minix for being crap.

The challenge here is that the barrier to speaking on the interwebs is quite low so you can make a fool of yourself if you're not careful.

Jean Labrosse, who wrote uC/OS (which everyone called mucos), made many of these exact same arguments in his original thesis statement. And like our author he made some choices that he felt were reasonable, only to learn through experience that perhaps they weren't as well thought out as he had hoped.

I am a huge fan of people just saying "How hard can it be?" and jumping in. Enjoy the ride; they can surprise you if you underestimate them.

So assuming this person notes that they are getting a ton of traffic from HN, and comes here to investigate, my three suggested books are:

Operating System Concepts [1], Operating System Implementation [2], and The Design of UNIX [3]. Preferably in that order. Any decent college library should have all three in the stacks.

[1] www.amazon.com/Operating-System-Concepts-Seventh-Edition/dp/0471694665/

[2] www.amazon.com/Operating-Systems-Design-Implementation-3rd/dp/0131429388/

[3] www.amazon.com/Design-Operating-System-Prentice-Hall-Software/dp/0132017997/

Agreed; the criticism here is way too harsh. In particular, I think it's somewhat amusing that people are giving him grief for wanting to "jump to C as soon as possible." Any rational kernel developer wants to jump to C as soon as possible -- and wants as much of the system to be in C as possible. (Indeed, this is the ethos that drove Unix -- and separated it from the horrifically unportable systems that predated it.[1])

Further, there are some good ideas here -- in particular, jumping directly to long mode allows one to avoid much of the replaying of the history of the PC that one is historically required to do to boot an x86. Most civilians (and I dare say, most of the negative commenters here) have NFI how ugly this process is and how much of a drag it is on future development. With the decline of the PC, it's reasonable to believe that a future is coming in which x86 becomes primarily a server-side microprocessor -- and begins to shed much of the baggage from the misadventures of the 1980s and 1990s.

All that said: there is a certain arrogance of youth here, and one hopes that when reality has sandblasted it away, the resulting humility will find its way to a formal computer science education and ultimately into professional software engineering; our discipline needs more people who have dared to write an OS from scratch, not fewer.

[1] http://cm.bell-labs.com/who/dmr/cacm.html

Very well said, particularly "our discipline needs more people who have dared to write an OS from scratch, not fewer." Truth is that I've thought about doing what the OP is talking about, but quailed at the complexity of it.

That said, I'd be interested in making an OS that gets to JavaScript as fast as possible. Yes, the system would be a dog, but it sure would be fun to build.

At least when it comes to systems programming, but probably across our whole field, most seeming impossibility has more to do with fear than real difficulty. I've spent a lot of time clumsily trying to paint a target on exactly what that fear is, but I've learned by having a lot of crazy stuff shoved into my face as a security researcher that if you can just get yourself to focus on a task and start coding, a lot of contraptions that seem so hard to build as to be implausible are actually the work of just a day or two; at least, the kernel of those contraptions (emulators, hypervisors, compilers, debuggers, drivers, interpreters, routing protocols, static analyzers).

If you have a basic aptitude for coding, not knowing how hard a task is isn't a liability; it's a powerful asset. If you haven't actually done real work on an OS, you wish you thought it was as easy as this guy does.

When asked, "How could you possibly have done the first interactive graphics program, the first non-procedural programming language, the first object oriented software system, all in one year?" Ivan replied: "Well, I didn't know it was hard."


Worth noting that the person that asked him this was Alan Kay.

I have, however, balked at the enormity of going from the kernel of something to the whole completed thing suitable for commercial release.

There are two kinds of fear: fear of the unknown, where you may learn to wade in anyway, and the more awkward fear of the known. The latter, if you believe the project's basis to be questionable, can destroy motivation. It made me quit my last job.

Just start on what you'd like to do.

I didn't know a lot about math (I still don't) but I always wanted to write a 3D render engine. I was also afraid of the complexity. But then I figured I didn't know anything about its complexity because I had never tried.

So I started with Processing and wrote a photo-realistic light tracer. Very slow but very, very fun. Then I wrote a multi-core version, which was a little faster. Then a path tracer, which was faster still. Then an exporter for Blender. Then I ported the project to Java. Is it as good as current render engines? No! But I don't care, because I learned a lot and had a lot of fun.

Then I wanted a CNC router. But I could not afford one. So I just built one, and it worked!

Moral of the story: just do it. You will fail sometimes but who cares? You will always learn a lot.

Cool. Well I did do something recently that was a lot of fun, and not a little useful: I was annoyed with D3's opaque API and wondered if I could do better. So I wrote a particle system in mostly raw JavaScript - and actually, the code looked (and looks) a lot better to me. I even did some quirky things that ended up being useful like storing velocity vectors in normalized polar coordinates. It only took a few hours, and was quite fun to do. Not quite an operating system, but still. :)

Smalltalk used to be an OS, and there have been some things resembling ports of the Smalltalk image to JavaScript. Also, JIT compilation is good enough that such an OS need not be a dog, provided a clean architecture. BeOS showed that to be the case on machines orders of magnitude slower than today's.

BeOS was written in C++ and compiled with gcc 2. For all intents and purposes the whole operating system was native code, certainly not running over a VM.

Yes, but it could achieve feats of multitasking and multimedia display that mainstream OSes could only achieve on machines with much more power. It showed that a 10X increase in efficiency is possible through architecture.

JavaScript seems like throwing in the towel; let's go for CoffeeScript! Or Markdown!

That is an odd statement when you consider that CoffeeScript is transformed to JavaScript before it runs. It sounds like saying "Let's drop the cows and just get the milk straight from the store instead."

I think that was the point. The markdown comment is what gives me that idea.

Yet it is an equally odd statement to want to get to C as fast as possible, because that is transformed to assembly before it runs.

I like your sarcasm.

And amazingly fun to hack on.

Dead comment from iRobot:


The BIOS on the original IBM PC totally made it easy to code a homebrew bare metal OS which could access all the peripherals. These BIOS calls exist today on i7-based motherboards, allowing OSes coded in the '80s to still (mostly) function on a modern motherboard. Its biggest fault was being 16-bit, which meant all newer OSes needed to write their own routines once they switched to 32/64-bit mode.

The BIOS layer made the PC easier. I find it frustrating that almost every new SoC I get, even from the same manufacturer, requires me to re-code all my IO routines.

If a common BIOS existed across ARM/x86 which emulated the simplicity of the original BIOS concept and not the API hell you usually get now, there would be a lot more adventurers in homebrew OSes.

Note to iRobot: it looks like your comment 165 days ago about "Melissa[sic] Gates" got your account killed.

Totally agreed here. This is one of the reasons I really like the "build your own OS" course for the Raspberry Pi, as the ARM ecosystem doesn't suffer from this legacy of system design. That said, the current SoCs sort of assume you are building a phone, but perhaps that will change.

Give him a break? Why? That guy is awesome. Look at him go! "What does the IDT look like? Do I have to fake it up before I switch to protected mode? And are interrupt handlers just C callbacks?"

Here's the thing about all this: those are questions with answers that are straightforward to find, and he is probably going to find them, because he seems totally fearless.

I could never in a million years write a blog post that so forthrightly laid out the stuff I don't know and sort of plaintively said "I'm going to try to figure this stuff out, and in the meantime, I'm putting system configuration under /system/configuration and not /etc". I think I envy him intensely.

Note to people babbling about what a herculean task building an OS is: I started coding as a teenager in the early '90s and multiple friends of mine wrote plausible protected mode operating systems. A basic operating system is not that hard. Going head-to-head with OS X as a one-person project is insanely ambitious, but who cares? By the time that becomes totally apparent to him, he may well be an unstoppable systems programming juggernaut.

My way of seeing it is... the system is everything, so the system is /.

Everything is part of the system, so the system configuration should be either /configuration or /config. But then, you already have /etc... why bother?

Anyway I doubt he's aiming to be POSIX compliant so... he's free to do what he pleases. The journey is going to make him a much better programmer.


As one of those fools that wrote their own OS (a QnX clone, which I'm now wondering if I should port to the Raspberry Pi in my non-existent spare time) I can completely sympathize.

The good news is that the way clueless newbies learn is by doing, and there is nothing more helpful here than healthy self over-estimation. It stops you from being discouraged when you probably should be, and great things can come of that. Worst case, he will learn, and probably a lot more than from building yet-another-to-do-list-in-insert-fashionable-language-here.

And if you haven't written your own OS just yet, trust me, it is easier than it seems and harder than it seems at the same time. It's easier to get started and to get something working (especially with VMs nowadays; in my time we had to reboot the hardware 50 times per day, cue the 'hah! you had hardware' comments involving chisels and stone tablets) and harder because it is simply hard to get it perfect.

Oh, and Tanenbaum was right.

Seems fitting: the birth of QNX itself was Dan Dodge and Gord Bell collaborating on writing the operating system for the computers that they had respectively homebrewed. (I don't know if this is still true, but for years dtdodge had the computer he had built in his office -- it was an amazing piece of engineering, with the hand-made modem in particular inducing a kind of slack-jawed awe.) Motivated by Unix (and implementing it originally in B, if I recall correctly), they called their OS "Quick Unix" or "Qunix" -- which (after a brief infestation of AT&T lawyers) became QNX. Point is: great things spring from motivated amateurs.

Oh, and agreed that Tanenbaum was right. ;)

Tanenbaum was right about what? The infamous microkernel debate? Did the experience from writing your own OS solidify that belief?

I think Linus' argument was basically that microkernels require distributed algorithms, and distributed algorithms are more complex.

But maybe in a multicore world that argument is weakened. I like this paper: "Your computer is a distributed system already, why isn't your OS?"


> The infamous microkernel debate?


> Did the experience from writing your own OS solidify that belief?

Absolutely. Micro-kernels have many advantages, at the cost of only a slight overhead due to message passing (and a large chunk of that overhead can be overcome by using the paging mechanism in a clever way). They're easier to secure, much easier to stabilize, support such luxuries as graceful on-the-fly upgrades without powering down, and allow you to develop drivers in userland, greatly simplifying debugging and testing, as well as allowing you to do hard real-time (and by extension soft real-time) much more easily than you ever could using a macro kernel.

I've built some pretty large systems using QnX in the '80s and '90s that I would have a real problem re-implementing even today, on today's hardware, without the benefits brought by a micro kernel with network-transparent message passing. If you haven't used a setup like that then it is probably hard to see the advantages; it goes way beyond some theoretical debate.

In practice two systems side-by-side, one running QnX, one running Linux will have the QnX system come out way ahead in terms of responsiveness for interactive tasks and things like latency and real world throughput.

We'll never know what the world would have looked like if Linus hadn't been as pig-headed during that whole debate. Likely we wouldn't be stuck with a re-write of a 30-year-old kernel.

The bit where Linus got it right and Tanenbaum got it wrong was that GPL'ing an OS was a much better move than doing a deal with Prentice Hall (who published the minix source). And minix wasn't the most elegant micro kernel either, which may have skewed Linus' perception of what it was that Tanenbaum was getting at.

My guess is that if he had used QnX instead of looking at minix, he would have readily agreed with Tanenbaum, but we'll never know about that, and Linux is here to stay for a long time.

If you haven't used QnX give it a shot and see how it works for you, you might be pleasantly surprised.

Thanks for the pointers. I have heard of QnX through a book about message-based programming with SIMPL. SIMPL I think borrows the QnX APIs for making modular and networked systems.

I prefer open source so I've been taking a look at Minix 3. It seems really cool. And it's only 6 or so years old -- at the time of the argument Minix wasn't meant to be a production system, but now it is.

I feel like it must be easier to trace application performance with Minix since you have natural points to insert hooks. With monolithic kernels it's hard to understand what is really going on.

I see a lot of potential advantages of a microkernel in distributed systems. For example, Amazon EC2 has well known I/O sharing issues. With a microkernel, you could fairly easily reimplement the file server with your own custom disk scheduling logic based on identities (not Unix users) and priorities.

In Linux I know there is some work with containers, but I don't think it is as customizable as you would like.

I agree. There's nothing wrong with what the guy's doing. He'll learn a lot from it, and I wish him well. It's also good to be writing about the things you do, successful or not.

It's not even remotely newsworthy, though.

Agreed it isn't newsworthy, but it can be inspirational to a reader who hasn't had the 'guts' to take this leap. I'm sure a lot of people reading HN think about writing a new OS; it's one of the archetypes [1]. So seeing someone take the plunge can be liberating. Seeing him get slammed on the site can be de-motivating, so I was hoping to counter some of that. Back when I was doing a Java-based OS I spent a lot of time reading the basic kernel in eCos, which had a very nice packaging and build system. I would not have invested that time if I hadn't been aspiring to other things.

[1] Programmers seem to come in 3 archetypes: the systems guy who dreams of building a new OS, the language guy who dreams of building a new programming language, and the networking guy who dreams of building a new protocol. Some folks are parts of all three; I once thought that if you wrote an RPG where the characters were coders, these three areas would be where you would add skill points.

Also, text editor.

"Not newsworthy" is a valid critique of a submission from a news site, but you should avoid making it of blogs like this. The author didn't submit their own post. I know you're implicitly criticizing the submitter (and the voters), but that's a subtle and annoying distinction.

most other content on HN isn't newsworthy either

the leaping into the unknown and figuring it out is one of the essential qualities of a good hacker.

a massively minimal os is not that hard.

He might also want to take a look at Pure64 [1] to handle the bootup stuff in a super lightweight manner (and also see how it's done).

[1] http://www.returninfinity.com/pure64.html

As someone who once implemented a prototype distributed OS (about 10% of an inch-thick spec) that the designers said was a total impossibility given the 6 month time constraint, I say he's biting off a big chunk, but more power to him for trying.

Some of my favorite work has come from me saying "How hard can it be? Screw it… I'm building this myself." Even from the failures I learned a thing or two, and I had a ton of fun at the same time.

I for one love being in way over my head. Keeps things interesting!

I so agree with you. I love daring people who set out to attempt the impossible despite the odds and prevalent "common sense". They do not wallow in doubts and expectations; they live their dream.

When Musk decided to build his own rocket, people thought he was a nut. Now he's an icon of innovation. A qualitative leap in any area requires a bold outlook, which is bound to be unpopular with the establishment. As PG once noted, if you experience a lot of opposition, it may be a sign that you're on the right track :)

I think the guy could use our support. HN is not very different from him in spirit.

I was looking at the version of your first listed book, and noticed what appears to be a more recent revision, 2004 versus 2011:


Do you still recommend the Seventh Edition over the newer release?

Hmm, reading the description on Amazon it seems like they have added more 'implementation' examples. For me at least, I found better sources of implementation examples (the Bach book on UNIX, for example), so I suspect, but can't confirm, that the newest one is probably not a whole lot better, and if you can find the 7th edition at a discount (or the 6th or 5th) you might be able to add it to your library less expensively.

If any of you naysayer arsehats (you know who you are) bothered reading to the bottom of his article, you'd have seen that he has a section where you (if you do indeed know about OS development) could help him by answering his questions. I pasted it below for reference:


My research

What I’ve found out so far:

Boot sequence: BIOS;

Master Boot Record (MBR);

Bootloader – the program that takes over from the MBR and loads your kernel;

Kernel.

How to write your own MBR and write it to disk on Windows.

I’ve written a small utility in Visual C++ that allows you to read/write directly from disk (download here, source included for Visual Studio 2010 Express);

How to write a bare bones C kernel entry point.

How to write "naked" functions on Windows in Visual Studio.

Missing link - I still don’t know how to properly step from MBR to Bootloader to Kernel, that is, write your own MBR code that would load the bootloader, then pass execution to the bootloader, which would in turn load and pass execution to a bare bones C kernel:

What exactly are the Global Descriptor Table (GDT) and Interrupt Descriptor Table (IDT), and what do they look like in C and Assembly? How and when (and again how, if "when" is later, for example in Long Mode or Protected Mode) do you set up all this GDT and IDT stuff? They say you have to set it up before the kernel. Then they say you can set it up with dummy values and set it up properly later in the kernel. Then they say that to set it up you have to be in Real Mode, so your kernel (which might be way over 1Mb of real mode space) needs to switch between modes. And then if your kernel is over 1Mb, you can’t access memory locations after 1Mb, and so on… It’s confusing, but I’m going to find it out and post it here later on.

How to handle interrupts in C? Will they perform as callbacks that await some return values, or do I have to use inline assembly to process them correctly?

Is it possible to write the MBR in C? I do understand that you still have to set ORG to 7c00h and use some specific assembly instructions, but if they could be wrapped in C using inline assembly, and the C entry point can be glued on with a few lines of assembly code, why not?

I almost didn't post this comment because it makes me sound like such a killjoy, but:

"I spend most of my days in the world of PHP, JavaScript (I love jQuery) and a little bit of HTML, CSS and ActionScript 3.0. I've done some hobby software development and also wrapped my head around languages like C, C++ (except for templates), C# and Java."

It sounds like you don't have the experience required to make an OS. I certainly don't either (I'm no C-head) so I am in no position to snark, but you're going to fail in this endeavour.

That doesn't mean it's pointless, though- I think it'll be a tremendous learning experience in getting to grips with the core of how computers actually work. So, good luck. Just don't go thinking you're going to make the next Linux out of this.

EDIT: It's also important to note that the author didn't submit this to HN. He didn't say "take this, HN amateurs!", he just posted something on his blog that someone else picked up.

You know what, if you have hesitation about posting something, then you probably shouldn't. It's one thing to have a specific concern about his project, but it's quite another to call into question his skill as a programmer and indeed quite arrogant to assume that you know what it takes to build an OS.

There is a big difference between saying "You don't have what it takes" and saying, "I think the obstacles you face will be very large, good luck!"

He is right though - I was really into messing with kernel development a few years ago, and despite being an experienced C/C++ developer and working on it on and off for a couple of years, nothing useful came out of it.

You really need to be extremely familiar with computer internals, have a very good grasp of the instruction set architecture of the platform you're targeting, have expert C knowledge, and at least be very comfortable programming with assembly.

And even then you're going to fail unless you're working with a large team - there are just too many parts of a modern operating system for one person to ever tackle. Even if you only want to support a single set of hardware (one graphics card, one network card, etc.), you'd spend years just writing drivers for everything, and by then the hardware would be long obsolete.

I'm not trying to say that he shouldn't do it - it is great for learning how computers work on a low level - but he really does need lots of experience, and should go into it with the correct expectations.

You are overstating what an operating system is. The core of most modern operating systems was written by a single person, or two people - including Windows.

Operating systems become large because of hardware support. They are also large now because what is considered 'operating system' has changed - from including a desktop environment to including a web browser.

The reason why OS X and Windows have large teams and year-long development cycles is because they are a complete stack of applications, not just a kernel, fs etc.

To get an OS written from scratch to boot on specific hardware with some basic functionality should be an 8-12 week job for a competent C developer.

The best thing I did in my teens was to grab the x86 docs and attempt to write an OS. I thoroughly recommend it to everybody as a good project for learning systems development (along with writing a compiler and writing a simple database).

quite arrogant to assume that you know what it takes to build an OS

With all of the (largely web/app-centric) development work that I have done, there are numerous concepts in OS development that baffle me. I don't claim to know what it takes to build an OS, but I know that it is beyond me.

Also, I wasn't questioning his skill as a programmer, I was questioning the undertaking that he is discussing. Any solo developer would struggle to create an OS (especially when they have a full time job to contend with), let alone someone that is going to have to learn a lot along the way. So my suggestion was to go ahead, but treat it as a learning experience rather than a deliverable product.

Everyone that ever created an OS from scratch had to learn first. The only question is how long it will take, but there is no reason to say "you will fail".

treat it as a learning experience rather than a deliverable product.

But what is the purpose of mentioning this?

If you are recommending that he go ahead with the endeavour, then what does it matter if he ends up with a deliverable product or not.

Also, these kind of projects can have a timeframe of years, regardless of programming skill. If he finishes it, it will be a result of him sticking to it in the long run, not a result of how much C knowledge he had on the first day of the project.

As I thought more about this very, very important point, I thought I'd give a shout-out to Pure64: http://www.returninfinity.com/pure64.html

It _will_ be a tremendous learning experience for him to try his hand at writing an OS. But there's always the dark side, the discouragement, difficulty, and flames from the internet along the lines of "Lol, you'll fail." (And I don't mean to belittle the parent post – I'm referring to all the other responses this guy is going to get.)

So take a look at Pure64: it's a project from the University of Waterloo – unofficially – but for someone just getting their feet wet, it's a great way to get down to writing bare metal code while skipping a whole lot of pain!

> Just don't go thinking you're going to make the next Linux out of this.

I don't think many people that decide to build an OS, regardless of C/asm experience, have the goal of making the next Linux. Most (myself included) get into OS development because it's interesting and a great learning experience.

this is a terrible comment that runs completely against the spirit of what hacking is.

He didn't say anywhere that he expects to build the next Linux; it is all about learning. In that, he has already succeeded: you can see from his progress all that he has learned and all the little tips he has picked up from trying.

I'm just shocked that on a site about hackers somebody would call out the efforts of a hacker to learn something new as 'going to fail'.

Edit: and how frikkin' arrogant are you to assume that just because this guy does PHP and JavaScript work he isn't capable of learning about operating systems. Seriously, get your head out of your ass.

You have it backwards. Going from high level, dynamically typed languages into building an OS is a fantastic way to learn low level programming and how hardware works. Whether or not he's successful isn't really what matters.

Even Linus wasn't out to build the next Linux. He was just about as clueless as this guy when he started out, with two processes alternating in cooperative multi-tasking mode. Then pre-emptive mode (using interrupts and stack switching). Then some basic screen IO, iirc, and so on.

It all grew from there.

Indeed. I always feel it is better for people in his position (attempting something so incredibly complex with a very low level of related experience) to do the work, then talk about it - rather than the other way around. Talk is cheap when you don't know what you are doing.

Talk is also cheap when you're talking about somebody else not knowing what they're doing.

Then again, the flames may just make the guy a bit more determined – angry enough not to give up.

And I also don't have the skills to make an OS. Yet, in school I'm making a simple one.

That's how you learn. If I had the skills, what would be the point of the exercise?

I think people are being overly pessimistic. It's not that ambitious of a project (particularly a hobby project at that). Why? Well, he doesn't have to support thousands of different drivers and hardware configurations -- he only needs code that supports his own. That eliminates a good chunk of the OS code.

Second, a lot of grunt-code can be found in open source projects, so most of the tedious/time-consuming programming can be eliminated if he chooses to follow this option.

Drop preemptive multitasking, cache-optimization, modes, virtual memory, and networking and there's not too much left.

And finally, it doesn't take that long to understand the GDT. Mine became corrupted once so I took a day to learn how it worked -- fixed that crap in a hex editor.

So no, he's probably not going to invent the next highly polished OS that handles every edge case and has been rigorously tested against bugs, but then again I don't think it's unreasonable to see a simple little functional OS.

As someone who's just started on a less ambitious OS project, I couldn't agree more. I'm writing basically exclusively for legacy hardware because I'm not expecting to run it outside of QEMU, but one of the reasons Linux took so many man hours to write is because it runs on everything while maintaining stability. If you recall, Linus started writing specifically for his own machine and everybody hopped on the wagon for their own.

Sounds like you still don't understand the GDT :) [1]

I agree with you, though. As with any hobby project, picking the bits that are interesting or fun makes it easy to keep going.

[1] http://en.wikipedia.org/wiki/Global_Descriptor_Table (maybe you're thinking of the GPT?)

D'oh. Slaps forehead.

Most of the comments seem so negative... When I was 16 I was also interested in OS development and actually wrote a boot sector, a very small kernel, and support for FAT12 in assembler. Previous experience? Basic and QBasic. It's quite a good way to learn, and it is possible when you take one step at a time and are willing to spend some time reading... Your mind is a great tool; have some faith in it.

Oh dear... overly ambitious plans, changing everything for the sake of change, fearing assembly, using VS for OS development, and not even knowing how the GDT and IDT work. There's so much wrong with this I don't even know where to begin. I sense a doomed project.

I had the opposite reaction. Many of the things the OP wants to change are really bothering me about "modern" operating systems.

A recipe for coming up with good ideas is looking at something that has been around for a while and asking "What would it be like if it had been invented now?".

And if all else fails, this will be a wonderful learning experience.

I know this is a cliche, but before writing comments like this ask yourself if a) you are unnecessarily discouraging somebody for no reason and b) you would have said the same thing about successful projects when they were first started.

Finally, the one piece of advice I'd give on this project is that you shouldn't focus too much on the overly ambitious goals, and don't worry if you can't accommodate all of them just yet. The likelihood of achieving the next milestone is somewhat proportional to the number of milestones you have achieved. Get to v0.01 and take it from there.

"I know this is a cliche, but before writing comments like this ask yourself if a) you are unnecessarily discouraging somebody for no reason and b) you would have said the same thing about successful projects when they were first started."

a.) No, I don't think so. Considering the number of half-started kernels on OSDev.org, whose creators incredibly often were convinced they were going to create the newest, awesomest kernel everyone is going to use, trying to get the OP to take a good look at his skillset seems like a proper thing to do.

b.) I would have, if the creators seemed completely ignorant about the basics of what they were trying to do.

This is not to say that I would tell the OP to give up on OS development - I personally love it. But I think a serious reality check is in order, and because we get people similar to the OP on the OSdev forums an awful lot, sometimes this frustrates me.

I don't really expect OP to succeed, but I'm sure it'll be educational. And really, the more enthusiasm for a project the better.

A recipe for coming up with good ideas is looking at something that has been around for a while


and asking "What would it be like if it had been invented now?".

creat(2) would have the second 'e'.

I'd like to think the "everything is a file" mentality would've been taken a touch farther, and the command line tools would deal with networked resources better (eg cp ssh://example.com/~/somefile ~/localcopy)

Oh, you mean like Plan 9? :)

I would have mentioned it, but I was speaking as if Unix were invented today. Plan 9 is an almost textbook second-system effect; it seems like you have to have invented a Unix first before even starting on something like Plan 9.

It's basically impossible for this to fail if you view the whole thing as a learning experience for the author.

Nothing wrong with ambition! It will be a great learning experience if nothing else.

Doomed = not commercializable? That doesn't mean it's without value to the person behind it if they learn and derive satisfaction from the experience.

While I certainly agree that knowledge of how interrupts work is foundational to writing an O/S, the comment about using VS is just silly. Last time I checked MASM was included.

Please see my comment below: http://news.ycombinator.com/item?id=4833953

I've done quite a lot of (embedded) OS development in VS. It's a pretty nice experience, because you can generate a console app that you can use as an emulator that allows you to prototype really fast, and use your embedded toolchain when building for the device.

With the obvious caveats that you write as much of your code to be cross-platform as possible, and be careful with endianness.

> using VS for OS development

What does Microsoft use?

Microsoft has lots of great command line tools, I used MASM(Microsoft Macro Assembler) and the compiler and linker when I was learning (long long ago).

Nothing is more flexible than command line tools, as they let you do scripting (programming your own environment).

It does not matter if you only want to use windows, text, menus and buttons already made by someone else, but it is essential when you make everything yourself (or manage what other people make).

My point wasn't that OS development is impossible in VS. However, given how much the OP seems to know about the subject, and the fact that virtually all resources about it presume that one is using a Linuxesque environment, I wouldn't recommend VS.

The last I heard (3-4 years ago), the OS/kernel team used Emacs. The kernel group has its own build infrastructure that is partially or mostly exported in the DDK.

Untrue! I know "kernel people" who use Vim, too! :-)

However, my impression is most of them edit code in MSVC, build with the DDK tools, and debug with windbg/ntsd/kd like most other driver developers.

I don't think writing your own OS is as difficult as people are saying. You're not trying to compete against Windows or Linux, you just want to get something up and running. I'm sure bugs and crashes will be prevalent, but I'm sure it would be a great project.

A really, really good book for this that I've read is "Developing Your Own 32-Bit Operating System" by Richard Burgess. It starts you from the beginning, and walks you through all the steps in writing a really basic OS.

It's old and out of print, but it's definitely the best one I've seen.

Edit: I just found the website, they are offering the book free here:


I wrote my own OS back in the late '90s based heavily on this book. Probably one of the best learning experiences I've had. I'm not sure how relevant it is today; I'd love to see a second edition brought up to date.

Please read Linus Torvald's book "Just for Fun: The Story of an Accidental Revolution" http://www.amazon.com/Just-Fun-Story-Accidental-Revolutionar.... It mostly deals with his experience building Linux and the insane amount of work it took (he was a student at the time he wrote the first version, with the help of hundreds of other people via distributed development). It might give you some additional insight into the effort involved.

> How to handle Interrupts in C? Will they perform as callbacks that await some return values or do I have to use inline assembly to process them correctly;

Not to disappoint you, but you should try doing some more low-level programming or dabbling with some existing OS code to get an idea of what these kinds of programs look like. Maybe have a look at Minix for a reference of a simple OS?

Have you thought about targeting ARM? Its architecture may be far less tricky than most Intel CPUs.

Well, good luck with that. Worst case scenario, you'll end up reading lots of interesting resources.

> Have you thought about targeting ARM? Its architecture may be far less tricky than most Intel CPUs.

Can anyone comment on whether this is really true?

Comparing the Intel 64 [1] and ARMv6 [2] instruction sets makes that a fairly safe assumption. The difference in the number of instructions implemented by those architectures reflects the complexity gap between the two.

Also, there are many other details that make targeting ARM much easier; for instance, the bootloading process on ARM is more straightforward, with no BIOS or EFI involved [3].

1: http://download.intel.com/products/processor/manual/325462.p...

2: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc....

3: http://www.keil.com/download/docs/52.asp


Too ambitious. Doing that requires millions of dollars and tens of thousands of man-hours. How do I know? I do electronics and low-level programming work and I am really good at it. Just understanding the bugs that manufacturers put in hardware and then solve in software (because it is way cheaper) takes a ton of work.

As I suppose he is not super rich, he will have to convince people to join his project, a la Linus.

Good luck with that!! I really wish a clean, no-backwards-compatibility OS were real. I will add native OpenCL, OpenVG and OpenGL to the list, but my arachnid sense tells me a person who does not use Unix will have a hard time getting traction with geeks.

UPDATE: The original author has posted a Part 2. Looks like he's already started writing code and hitting milestones. http://gusc.lv/2012/11/im-writing-my-own-os-p2/

Anyone who has ever written a small RTOS on a small 8 bit embedded processor will only laugh at the OP. And, I hate to say, it would be justified. There are about twenty or thirty books between where he is now and where he'd have to be in order to even start talking about designing an OS for a desktop platform. Add to that 10,000 hours of coding low to high-level projects across embedded to desktop platforms.

A quick read of the "About" page is probably in order:


What to say?

"Someone holding a cat by the tail learns something he can learn in no other way" --Mark Twain.

Here's the tip of the tail:





Have fun.

Anyone who has ever built a high-speed rail network across the US will only laugh at you. And, I hate to say, it would be justified. There are about five to six hundred books between where you are now and where you'd have to be in order to even start talking about designing an electric railway system for transporting cargo at >200mph. Add to that 50,000 hours of working in engineering projects across the transportation and logistics industries.

In case you're lost: http://news.ycombinator.com/item?id=4815463

You could have just posted the book recommendations with a handwave. Dismissing other people's ideas because "they don't know better" (or you know better) doesn't add anything and is harmful to discussion.

And attacking the messenger does not make your argument at all.

There's no comparison whatsoever here. My high-speed cargo rail proposal/idea is actually DESIGNED for criticism out of the self realization that I am no expert in the field. I gathered as much data as I could. Did a bunch of math. Studied some of the issues involved and devoted a non-trivial amount of time to understanding the underlying issues.

Had you engaged me privately you would have also realized that I am very aware of the near-impossibility of the project as I proposed it due to a myriad of issues, not the least of which are political and environmental. Of course there's the simple practical fact that it is probably nearly impossible to trench new territory to build a new railroad system in the US today.

The more important point of raising the issue was to highlight just how badly ocean-based container shipping methods are polluting our planet, creating a situation that has escalated into the proverbial elephant in the room.

So, yes, I've done a bit more work than the OP has done in truly understanding --in his case-- what operating systems are about, how to write them, why things are done in certain ways, the history behind some of the approaches, what works, what definitely does not work, and more.

And, yes, I have written several real-time operating systems for embedded systems, some of them mission critical. And, no, in retrospect it would have been a far better idea to license existing technology, but as a programmer sometimes you don't have the option to make those decisions if your employer is dead set on a given approach.

No, I have never written a workstation-class OS. I know better than that. Today, it would be lunacy to even suggest it, particularly for a solo programmer, even with a ton of experience.

Anyhow, you succeeded at getting a rise out of me. Congratulations. I hope you are happy. It still doesn't change the fact that attacking the messenger does not invalidate anything I have said or prove whatever your fucking point might be.

At my university, students in groups of 1 or 2 write a fully preemptible, multiprocessor unixy kernel and userspace on x86. Essentially nothing is provided except a bootloader, a build system, a syscall spec, and some moderate protection from the details of some hardware beyond the CPU. This is considered doable in a semester by college students, some of whom have no preparation except a general "here's what a CPU register is, here's what caching is" course. No one reads 20-30 books; rather, most students work from first principles and wind up reinventing the usual algorithms on their own.

I thought I'd offer a more optimistic counterpoint. I think that the 10,000 hours figure is way, way overestimating the amount of time needed in order to create something usable enough to make you satisfied, and which could teach you enough to understand any real operating system at the level of source code.

Although, yeah, if you've got commercial aspirations like OP, then I think you're in for it.

> Although, yeah, if you've got commercial aspirations like OP, then I think you're in for it.

I think you nailed it right there. If the OP had said something akin to "I want to write a small OS with a command line interface to learn about the topic" it would have been an entirely different question. I would encourage anyone to do that. It would be an excellent learning experience and the foundation for a lot more interesting work.

If you go back and read the post, this is no small OS: any one of the many sub-projects he is proposing is a huge undertaking for a developer who self-describes this way: "I spend most of my days in the world of PHP, JavaScript (I love jQuery) and a little bit of HTML, CSS and ActionScript 3.0".

With regards to your comment about writing a "fully preemptible, multiprocessor unixy kernel and userspace on x86" in school. Sure. Of course. But, keep in mind that you are actually being TAUGHT how to do this and guided throughout the process. You also mentioned that "a bootloader, a build system, a syscall spec, and some moderate protection from the details of some hardware beyond the CPU" are provided. The OP is talking about writing everything!

For example, he seems to talk about writing PCIe, SATA and USB interfaces. That alone could take a newbie a whole year to figure out. Particularly if coming from being a web developer.

Insane? Yes. Impossible? Of course not. Probable? Nope.

Agreed. I just wanted to talk in general to make sure that people weren't too intimidated. I think a hobby OS is a great way to become a better programmer.

About the course: the extent to which we were guided was minimal by design. It wasn't a matter of "here's a skeleton, here are steps A, B, and C to get it working", but rather "here's an API, implement it; here are some general design ideas". I think that this is comparable to what you'd get if you sat down with a book on your own, and so I hope that it might give people a decent idea of the level of difficulty of such a project.

I emphatically agree with your point about the difficulty of a newbie + PCI situation, although a year still seems steep.

Bullshit. A simple operating system is not much work with respect to getting a kernel off the ground.

Back in the old days, before the PC took off and people started expecting abstractions for everything conceivable, this was a normal part of a project. At university, I was tasked with building an RTOS platform for the m68k. It took about a month from zero knowledge to a working multitasking OS with DMA, memory management and protection, and a virtual machine which ran PLC-style ladder logic.

The only problem is that if you start with x86, you're going to have to fight the layers of fluff that have built up since the 8086 (LDT, GDT, long mode, segments, PCI bus, shitty instruction set, etc.).

I'd go for ARM.

> Bullshit. A simple operating system is not much work with respect to getting a kernel off the ground.

Did you read the article? There's nothing there that hints at a simple operating system at all.

So, yeah: Bullshit. The operating system he is talking about is no hobby project.

Also, define "simple operating system". What is it? What can it do? What can't it do?

I wrote a micro kernel based OS. Then I wrote a graphics driver to get bit mapped graphics. Then a window manager using that bit mapped graphics driver. Then a bunch of applications using that window manager.

And I never read any one of those books you're listing there other than K&R and the 486 reference manuals. Sure enough I had a fair grasp of the x86 processor architecture before starting this and I'd done a lot of low level 8 bit work. But on the whole I spent more time waiting for it to reboot than I did writing code or reading books and I still managed to do all this in about two years.

This is doable. It's hard, but it is doable, and it is a lot easier now than when I did it. For one you have VMs now which make it a thousand times easier to debug a kernel. No more need to use a dos-extender to bootstrap your fledgling kernel code and so on.

This guy is way out of his depth, that's for sure. But what is also for sure is that he's going to learn quickly and seems on the right road for that (acknowledging what he doesn't know yet).

Don't tell other people what they can't do. Just wait and see, they just might surprise you. You'd have been talking to Linus like that just the same. And you would have been right about him being out of his depth, and you would have been wrong about him not being able to achieve his goal in the longer term.

Maybe this guy will get discouraged, maybe he won't. But no need to kill his enthusiasm with a negative attitude. If you're so smart, why not give him a hand, point him in the right direction on the concrete questions he's asking rather than to literally throw the book (or in this case a whole library) at him and tell him he's clueless.

He probably already knows that anyway, but at least he's willing to learn.

Successive abstractions. What he/she proposed was an end game, not the first steps.

From a previous comment of yours: "Every new programmer needs to start with C. In fact, I am convinced that every new programmer needs to start with C and be tasked with writing an RTOS on a small memory-limited 8 bit processor. And then write several applications that run within that RTOS."

He is not that far off from your suggestions. Why do you think 8-bit and realtime is better than AMD64? The architecture is a lot more complicated. On the other hand, it is probably much better documented as well.

I think that there's huge value in starting with something very simple and as "raw", if you will, as possible. Any one of the various small 8 bit processors out there are very easy to understand. You have a few registers, interrupts, a small amount of RAM, perhaps some Flash, serial ports, etc. Simple.

The idea here is to really get down to the basics and understand them with a series of incremental projects.

Of course, there are no universally true rules about this stuff. This happens to be my opinion based on quite a few years of developing and shipping products that entail both electronics and software.

As an example, I am teaching my own son how to program with C and Java almost simultaneously. Why? Well, he is not learning on his own; his dad happens to know this stuff pretty well and we are spending a lot of time on all of it. So, navigating two languages at the same time is working out OK. I've also had him sit with me while I work in Objective-C and ask questions as I go along.

In about three months we are going to build a real physical alarm clock using a small microprocessor and LED displays. I am going to bootstrap Forth on that processor. The job will also require writing a simple screen text editor in order to make the clock its own development system.

So, by the middle of next year he will have been exposed to raw C, Java and a threaded interpreted language like Forth. I want to expose him to Lisp as well but don't yet know when it will make sense to do that. Maybe in a year or so. With three programming paradigms on the table it will be far more important to explore algorithms and data structures/data representation and understand what they look like with each technology.

> Every new programmer needs to start with C.

Wow, what an idiotic thing to say. C is one of the worst languages for learning the actual fundamentals of programming, which are algorithms and data structures.

Which begs the question of what a freaking moron Knuth must be to have presented TAOCP in terms of assembly language.

Knuth is Knuth. You are not Knuth.

Absolutely not true. You can learn all of that and more with C. You might not like C and that is a different issue.

Let's take it down even further: Every language you care to suggest ultimately ends-up in machine language. You can implement ANY algorithm or data structure management you care to mention in assembler. So, assembler isn't any less capable in that regard than any language anyone might care to propose.

Now, of course there's the practical matter of the very real fact that doing object oriented programming --as an example-- in assembler would be extremely painful, so yeah, this would not be the first choice.

Nobody who has done a reasonable amount of programming across tools and platforms would, for a minute, suggest that C is the be-all and end-all of programming languages. That I have never said anywhere. In fact, in a recent post I believe I suggested a progression involving assembler, Forth, C, Lisp, C++, Java (or other OO options). Even less popular languages like APL have huge lessons to teach.

As the level of abstraction increases one can focus on more complex problems, algorithms and data structures. That is true.

One of the problems with a lot of programmers I run into these days is that a lot of what happens behind the code they write is absolute magic to them. They have almost zero clue as to what happens behind the scenes. That's why I tend to like the idea of starting out with something like C. It is very raw and it can get as complex as you care to make it.

One can use C to write everything from device drivers, operating systems, mission critical embedded systems, database managers, boot loaders, image processors, file managers, genetic solvers, complex state machines and more. There's virtually nothing that cannot be done with C.

Is it ideal? No such language exists. However, I'll go out on a limb and say that if I have two programmers in front of me and one only learned, say, Objective-C and nothing more while the other started out with C and then moved to Objective-C, the second programmer will be far better and write better code than the first.

All of that said, there is no magic bullet here. Start with whatever you want. No two paths are the same. Just different opinions.

> So, assembler isn't any less capable in that regard than any language anyone might care to propose.

You're arguing against a strawman here.

> Nobody who has done a reasonable amount of programming across tools and platforms would, for a minute, suggest that C is the be-all and end-all of programming languages.

And I never used that strawman.

> As the level of abstraction increases one can focus on more complex problems, algorithms and data structures.

And this is my point. You can focus on what you're learning without having to waste time on anything else.

Why don't you advocate a return to punch cards?

> One of the problems with a lot of programmers I run into these days is that a lot of what happens behind the code they write is absolute magic to them.

And most programmers don't know enough physics to understand how a transistor works, either. You can learn stuff like that as and when you need it. The actual core needs to come first.

> There's virtually nothing that cannot be done with C.

Ditto machine language, as you just said. So why didn't you say everyone needs to start with machine language?

I merely used the fact that all languages ultimately compile to (or are interpreted by) machine code to illustrate the fact that you calling me an idiot and asserting that "C is one of the worst languages for learning the actual fundamentals of programming, which are algorithms and data structures." is, well, misplaced.

You can learn just as much with assembler. It would be a huge pain in the ass. And, just in case there's any doubt, I am not proposing that anyone use assembler to learn complex algorithms, patterns or data structures.

Your original comment "what an idiotic thing to say" is just false. You can learn ALL fundamentals of programming with C. And, yes, you can learn ALL fundamentals of data structures with C.

Classes and OO are not "fundamentals". That's the next level. And there's a whole movement proposing that there are huge issues with OO to boot.

I have a question. You were quick to call me an idiot for suggesting that newbies need to start with C. OK. I have a thick skin. Thanks.

Now, let's move on. I noticed that you did not offer a solution. What would you suggest someone should start with? Why? How is it better than starting with C?

Now, keep in mind that we are talking about STARTING here. We are not talking about --and I have never suggested that-- C is the ONLY language someone should learn. Quite the contrary.

Your ball.

> you calling me an idiot

I NEVER DID THAT. I merely said an idea was idiotic.

I will not proceed until you acknowledge that. It's a question of honesty.

Hmm, semantics? Don't know:


    1. of, pertaining to, or characteristic of an idiot.
    2. senselessly foolish or stupid: an idiotic remark.
Either way, not a wonderful statement to make. But, that's OK. I can take the criticism, even if misplaced. I am far more interested in how you would answer my questions. Reminding you that we are talking about what might constitute a reasonable choice for someone to learn as their very first programming language, the questions were:

    What would you suggest someone should start with? 
    How is it better than starting with C?
And I'll add:

    What will they learn that they cannot learn with C?
    How would learning C as their first language hinder them?
    Why is C an idiotic first choice?

> Hmm, semantics? Dont' know:

Newton held idiotic ideas. Was Newton an idiot? No. Did I just call Newton an idiot? No.

> What would you suggest someone should start with?

It depends on the person and why they want to program.

> Why?

Because I don't think C is the best choice for all tasks. In fact, I think C is a poor choice for most of the reasons people start programming.

> How is it better than starting with C?

Because C forces the programmer to prioritize machine efficiency above everything else. Algorithms get contorted to account for the fact the programmer must explicitly allocate and release all resources. Data structures get hammered down into whatever form will fit C's simplistic (and not very machine efficient) memory model.

In short, everything is simplified and contorted to fit the C worldview. The programmer is forced to act as their own compiler, turning whatever program they want to write into something the C compiler will accept.

> What will they learn that they cannot learn with C?

A clearer understanding of things like recursive data structures, which are complicated with excess allocation, deallocation, and error-checking noise code in C.

Compare a parser written in Haskell to one written in C: The string-handling code is reduced to a minimum, whereas in C it must be performed with obscene verbosity.

> How would learning C as their first language hinder them?

> Why is C an idiotic first choice?

It is purely wasteful to have new programmers worry about arbitrary complexities in addition to essential complexities. It is wasteful to have new programmers writing the verbose nonsense C imposes on them every time they want to do anything with a block of text. That time should be spent learning more about the theory behind programming, the stuff that won't change in a few years because it is built on sound logic, not accidents of the current generation of hardware design.

> Because C forces the programmer to prioritize machine efficiency above everything else. Algorithms get contorted to account for the fact the programmer must explicitly allocate and release all resources. Data structures get hammered down into whatever form will fit C's simplistic (and not very machine efficient) memory model.

Well. We couldn't disagree more.

I love APL because it absolutely removes you from nearly everything low-level and allows you to focus on the problem at hand with an incredible ability to express ideas. I did about ten years of serious work with APL. I would not suggest that a new programmer start with APL. You really need to know the low level stuff. Particularly if we are talking about writing an operating system and drivers.

Nobody is suggesting that a programmer must never stray outside of C. That would be, to echo your sentiment, idiotic. A good foundation in C makes all else non-magical, which is important.

Coding algs and data structures in C lets you see how those things _actually work_ in the computer. A lot is hidden by the abstractions of higher-level languages.

In particular, I am thinking about pointers and memory management, but there are other things.

Yup. Exactly why I think C is a great starting point.

This is also why I think we have so much bloated code these days. Everything has to be an object with a pile of methods and properties, whether you need them or not. Meanwhile nobody seems to be able to figure out that you might be able to solve the problem with a simple lookup table and clean, fast C code. There was a blog post somewhere about exactly that example recently but I can't remember where I saw it.

I wrote a GA in Objective-C because, well, I got lazy. Then, after seeing the dismal performance I got I re-coded it in C. It's been a couple of years but I think that the performance was hundreds of times faster than anything the optimized Objective-C code could achieve. The heavy bloated NS data types just don't cut it when it comes to raw performance.

Someone who has only been exposed to OO languages simply has no clue as to what is happening when they are filling out the objects they are creating with all of those methods and properties or instantiating a pile of them.

> This is also why I think we have so much bloated code these days.

'Bloat' is a snarl term. It's meaningless. It literally means nothing, except to express negative emotion.

> I wrote a GA in Objective-C because, well, I got lazy. Then, after seeing the dismal performance I got I re-coded it in C.

Did you try any other algorithms? Any other data structures? Simply picking a new language is laziness.

No, it's not a snarl term. It's very real.

When dealing with an array is 400 times slower in a "modern OO language" than in raw C, well, the code is fucking bloated.

When you can use a simple data structure and some code to solve a problem and, instead, write an object with a pile of properties and methods because, well, that's all you know, that's bloated code.

Of course there are lots of places where OO makes absolute sense. And the fat and slow code is the compromise you might have to make. That's the way it goes.

With regards to my GA example. No, I had to implement a GA. That's what was required to even attempt to solve the problem at hand. Later on we used it to train an NN, which made the ultimate solution faster. But the GA was required. There was no way around it, and Objective-C was such an absolute pig at it that it made it unusable.

> Simply picking a new language is laziness

See, there's the difference. I started programming at a very low level and have experienced programming languages and approaches above that, from C, to C++, Forth, Lisp, APL, Python, Java, etc.

I have even done extensive hardware design with reconfigurable hardware like PLDs, PLAs and FPGAs using Verilog/VHDL. I have designed my own DDR memory controllers as well as raw-mode driver controllers and written all of the driver software for the required embedded system. My last design was a combination embedded DSP and FPGA that processed high-resolution image data in real time at a rate of approximately SIX BILLION bytes per second.

So, yes, I am an idiot and make really fucking dumb suggestions.

Because of that I would like to think that, if the choice exists --and very often it does not-- I do my best to pick the best tool for the job.

More often than not, when it's pedal-to-the-metal time C is the most sensible choice. It used to be that you had to get down to assembler to really optimize things, but these days you can get away with a lot if C is used smartly.

> When dealing win an array is 400 times slower in a "modern OO language" then in raw C, well, the code id fucking bloated.

Social science numbers do not impress me. Besides, what is a "modern OO language"? Haskell? How can you give any numbers without even specifying that detail?

> Of course there are lots of places where OO makes absolute sense. And the fat and slow code is the compromise you might have to make.

Your idea that "OO = fat and slow" is blown away by actual benchmarks.


(And, yes, unless and until you define what "OO" is to you, I'll pick Haskell as a perfectly reasonable OO language. Given that I've seen C called OO by people with better writing skills than you, this is hardly a strange choice in this context.)

> So, yes, I am an idiot

Again, I did not call you an idiot. The only one calling you an idiot here is you.

> More often than not, when it's pedal-to-the-metal time C is the most sensible choice.

I agree fully with this. However, I disagree that "pedal-to-the-metal time" is all of the time, or even most of the time. Especially when you're trying to teach programming.

Do you teach new drivers in an F1 racecar? Why or why not?

> Coding algs and data structures in C lets you see how those things _actually work_ in the computer.

No. Not really. C doesn't show you any of the essential parts of cache, opcode reordering, how multicore interacts with your code, or much of anything else that actually makes hardware fast.

C makes you act as if your computer was a VAX.

I am holding my breath to learn what language you are going to propose, as a first language, that teaches all of those things.

Robomartin: I learned in C. I also find C to be superior to e.g. Java for learning data structures and algorithms. On the other hand you are losing this argument:

Below let X represent roughly the sentiment "C is a good learning language, since it teaches you what happens at a low level"

darleth: C sucks as an intro language
robomartin: No it doesn't because X
darleth: X was true 30 years ago but isn't anymore
robomartin: well C is still better because there is no language that does X

A better refutation is that I cannot predict the order of complexity for an algorithm written in Haskell that I could trivially do in C. Haskell presents immutable semantics, but underneath it all, the compiler will do fancy tricks to reuse storage in a way that is not trivially predictable for a beginner.

Similarly with Java, you end up having to explain pointers and memory and all that nastiness the first time the GC freezes for 1-2 seconds when they are testing the scaling of an algorithm they implemented in it.

Yes there is a "learn that when you need it" for a lot of stuff, but for someone actually learning fundamentals like data-structures and algorithms, we are talking about a professional or at least a serious student of CS. Someone in that boat will need to be exposed to these low-level concepts early and often because it is a major stumbling block for a lot of people.

If you just want to write a webapp, use PHP. If you want to learn these fundamentals you will also need to be exposed to the mess underneath, and it needs to happen sooner than most people think.

> Robomartin: I learned in C. I also find C to be superior to e.g. Java for learning data structures and algorithms. On the other hand you are losing this argument

I appreciate your sentiment. However, I think you made the mistake of assuming that there is an argument here. :)

I find that most software engineers who, if I may use the phrase, "know their shit", understand the value of coming-up from low level code very well. I have long given-up on the idea of making everyone understand this. Some get it, some don't. Some are receptive to reason, others are not.

I am working on what I think is an interesting project. Next summer I hope to launch a local effort to start a tech summer camp for teenagers. Of course, we will, among other things, teach programming.

They are going to start with C in the context of robotics. I have been teaching my kid using the excellent RobotC from CMU. This package hides some of the robotics sausage-making but it is still low-level enough to be very useful. After that we might move them to real C with a small embedded project on something like a Microchip PIC or an 8051 derivative.

In fact, I am actually thinking really hard about the idea of teaching them microcode. The raw concept would be to actually design a very simple 4 bit microprocessor with an equally simple ALU and sequencer. The kids could then set the bit patterns in the instruction sequencer to create a set of simple machine language instructions. This is very do-able if you keep it super-simple. It is also really satisfying to see something like that actually execute code and work. From that to understanding low-level constructs in C is a very easy step.

After C we would move to Java using the excellent GreenFoot framework.

So, the idea at this point would be Microcode -> RobotC -> full C -> Java.

Anyone interested in this please contact me privately.

> A better refutation is that I cannot predict the order of complexity for an algorithm written in Haskell that I could trivially do in C. Haskell presents immutable semantics, but underneath it all, the compiler will do fancy tricks to reuse storage in a way that is not trivially predictable for a beginner.

Except this is also true for C at this point. Maybe the order won't change, but maybe it will at that, if the compiler finds a way to parallelize the right loops.

C compilers have to translate C code, which implicitly assumes a computer with a very simplistic memory model (no registers, no cache), into performant machine code. This means C compilers have to deal with the register scheduling and the cache all by themselves, leading to code beginners have a hard time predicting, let alone understanding.

Add to that little tricks like using MMX registers for string handling and complex loop manipulation and you have straightforward C being transformed into, at best, with a good compiler, machine code that you need to be fairly well-versed in a specific platform to understand.

This is why I get so annoyed when people say C is closer to the machine. No. The last machine C was especially close to was the VAX. C has gotten a lot further away from the machine in the last few decades.

The implication here is that you should teach C as an end in itself, not as an entry point into machine language. If you want to teach machine language, do it in its own course that has a strong focus on the underlying hardware. And don't claim C is 'just like' assembly.

1) Nowhere in my comment did I say C is closer to the machine.

2) Despite #1 C is still closer to the machine than Haskell, and I'm not sure how you could maintain otherwise

3) Nearly all of the C optimizations will, at best, make a speedup by a constant factor. Things that add (or remove) an O(n) factor in Haskell can and do happen.

> I am holding my breath to learn what language you are going to propose, as a first language, that teaches all of those things.

You're not reading my other posts, then. I explicitly said programmers can learn those things as and when they need to.

Load of crap. You can write a small OS in a semester using Tanenbaum's book. It's trivial.

There's nothing trivial about it. Following a book while being taught at a University over a semester is vastly different from what the OP is talking about. For example, he is talking about writing his own PCIe, SATA and USB drivers as well as, well, everything. He is not talking about starting with MINIX and a well-digested book that guides you through the process.

In fact, your suggestion is exactly on point: The OP should pick up Tanenbaum's book and take a year to implement everything in the book. Why a year? He is a web developer and, I presume, working. It will take time just to learn what he does not know in order to even do the work. So, let's say a year.

I would suspect that after doing that his view of what he proposed might just be radically different.

For example, wait until he figures out that he has to write drivers for chip-to-chip interfaces such as SPI and I2C. Or that implementing a full USB stack will also require supporting the very legacy interfaces he wants to avoid. Or that writing low-level code to configure and boot devices such as DRAM memory controllers and graphics chips might just be a little tougher than he thought.

There's a reason why Linux has had tens of thousands of contributors and millions of lines of code:


...and that's just the kernel.

I didn't say what the OP wanted to do was trivial, but rather that bringing up some sort of OS on a desktop platform is trivial.

I would urge you to really think that statement through. Are you proposing that writing all the drivers and low-level code that already exists is trivial? I don't think you are. What, then, is bringing up an OS on a desktop platform?

Writing some kind of a minimalist hobby OS on top of the huge body of work that is represented by the drivers and code that serve to wake up the machine is very different from having to start from scratch.

My original comment has nothing whatsoever to do with anything other than the originally linked blog post, which describes almost literally starting from scratch, ignoring decades of wisdom and re-writing everything. That is simply not reasonable for someone whose experience is limited to doing web coding and dabbling with C for hobby projects. In that context, just writing the PCI driver code is an almost insurmountable task.

If I were advising this fellow I'd suggest that he study and try to implement the simplest of OS's on a small embedded development board. This cuts through all the crud. Then, if he survives that, I might suggest that he moves on to Tanenbaum's book and take the time to implement all of that. Again, in the context of a working web professional, that's easily a year or more of work.

After that --with far more knowledge at hand-- I might suggest that he start to now ask the right questions and create a list of modifications for the product that came out of the book.

Far, very, very far from the above is the idea of starting with a completely blank slate and rolling a new OS that takes advantage of nearly nothing from prior generations of OS's. And to do that all by himself.

That's a bummer. Reading this article was inspirational.

Don't let this squash your inspiration. It won't stop this guy from learning what he needs to learn.

I've written a small RTOS for an 8-bit embedded processor, and ported it to run on a 32-bit embedded processor too. I don't laugh at the OP; I think he's got the true hacker spirit and I wish him well. It sounds like a fun project.

What's the harm here? He's going to dive in and learn something, and he's probably going to get further along than you expect, because this stuff just isn't as complicated as people like to think it is.

I'll bet you didn't write drivers for USB, hard disk, DDR3 controller, wear-leveling Flash storage, Ethernet/networking, graphics, etc.

You have to keep in mind that the OP is talking about such things as writing his own PCIe and USB stacks as well as everything else. He is leaving all history and prior work on the floor and re-inventing the wheel.

That's very far from writing a small RTOS for an 8-bit processor. In fact, my suggestion is that he should do just what you did in order to understand the subject a lot better. There's a lot of good Computer Science that can be learned with a small 8-bit processor.

I remember my 1st OS - CTOS running on Convergent Technologies hardware. After a couple weeks on the job, I had read all the code and changed many modules, and I remember thinking "Wait, I never saw any magic. It's all just code!"

What makes you think he's incapable of getting through those twenty or thirty books? Did you have the experience to write an OS when you started your first OS? No, that's why you write your first OS. I think he knows what he's doing when he grabs the cat's tail.

Writing your own OS is not the problem here. My first one was actually bootstrapping Forth on an embedded system. Let's call that an OS if we are going to stretch things. It did involve writing a pile of drivers for I/O --parallel, serial, i2c, etc. It also involved designing and building my own floppy disk controller as well as programming the driver to run it. Then I had to write my own screen editor to be able to edit code on the little machine itself.

Did I know all of that stuff? Of course not. Did it compare in complexity to what this article is proposing? Nope. The OS described in the article is far, far more complex than what I just described.

Is he incapable of doing it? Nope. I did not say that. I think I said that anyone who has written a non-trivial RTOS would laugh at the idea of what he described. Why? Because it is a monumental job for one person, particularly if they've done almost zero real development at the embedded level and they also have to work for a living.

I got started designing my own microprocessor boards, bootstrapping them and writing assembly, Forth and C programs when I was about 14 years old. By the time I got to college I knew low-level programming pretty well. As the challenge to start diving into writing real RTOS's presented itself I could devote every waking hour to the task. Someone starting as a web developer --who presumably still needs to keep working-- and wanting to develop such an extensive OS is just, well, let's just say it's hard.

It's like a pilot writing about building his or her own plane.

Compared to building an OS and support libraries, building your own plane is trivial. Flying it may take balls, though. For the best example I know of: short video at http://www.joostconijn.org/film/vliegtuig/index.php.

The guy built three planes. One didn't take off, one crashed, one brought him across Africa in short hops with landings at rebel-occupied airports where he didn't always manage to announce his arrival.

Short thread at http://www.homebuiltairplanes.com/forums/hangar-flying/12196....

(Apologies for deviating from the subject, but I figure hackers might find this interesting)

Whoah. That's some serious balls - def someone Colbert needs to have on the show.

What about writing a small RTOS? How hard do you think that is?

Not hard at all if you take the time to study the subject.

If you really have a good grasp of the concepts sometimes the hard part is the drudgery of possibly having to write all the device drivers for the various devices and peripherals that the RTOS has to service.

In the case of the OP, he seems to be talking about rewriting everything from the most basic device drivers on up to bootloaders and even the GUI. That's a ton of work and it requires knowledge across a bunch of areas he is not yet well-versed in.

Also, when it comes to the idea of writing an RTOS, there's a huge difference between an RTOS for, say, a non-mission-critical device of some sort and something that could kill somebody (plane, powerful machinery, etc.). That is hard not because the concepts require superior intellect but rather because you really have to understand the code and potential issues behind how you wrote it very well and test like a maniac.

I have written RTOS's for embedded control of devices that could take someone's arm off in a fraction of a second. Hard? Extremely, when you consider what the stakes are and particularly so if it is your own business, your own money and your own reputation on the line. There's a lot more to programming than bits and bytes.

Judging by his arrogance complex, I'm sure he thinks it's pretty tough!

Looks like you haven't even started thinking about the problems you'll run into architecting the actual mechanisms and policies of the operating system; process scheduling, virtual memory, etc. That's probably for the better—one thing at a time. For when you do get to that stage, I recommend Operating Systems: Three Easy Pieces. http://pages.cs.wisc.edu/~remzi/OSTEP/

Holy jumping Jesus … I just got my 15 minutes of shame/fame over the internet. I linked my “just write down what you’re thinking” blog post on dzone.com and somehow it got posted on Hacker News and from there … Shit just hit the fan. It seems that writing “I’m writing my own OS” as a blog title can be translated from “I have spare time, I want to try out new things” into “Fuck this shit, I’m going for a revolution!!!” It’s time consuming to answer all the comments I’ve gained, so I’m writing this post as an answer to all of you.


And thank you again for inspiration (even the cynicism is inspirational ;)

Nope. If you want to know how to make a small OS that can run on a PC, take a look at xv6 which is a modern re-implementation of the sixth edition of Unix and is on Github. You can compile it and then run it in qemu. Fork it, read and understand the source and then expand upon it to your hearts content. That would at least give a very good starting point.

There is a surprising amount of negativity coming from what I expect to be a 'hacker' crowd. This kid (he's 30 can I call him a kid?) has ambition and what appears to be the drive to create his own OS - I mean, if you read the bottom of his page, he did some research (sure, wiki-ing them is not equal to actually understanding them), but it at least shows willingness to learn, and we should not be putting him down at all.

Sometimes the HN crowd surprises me. We pride ourselves in being hackers, most often idealistic (bitcoins and patent law change anyone?) but when a singular person shows idealistic ambition, we immediately engage in poppy cutting.


I think his enthusiasm and attempts are good things.

But. His post is just that - his expression of enthusiasm. There's not much of anything else here yet. We have nothing to discuss but his enthusiasm.

Having seen these kinds of ideas consistently end up as "well... it got hard" a week later, it triggers my grumpy "sure whatever let me know how it goes in a month" reflex.

I'm surprised by the opposite: I have come to expect this kind of negativity here at HN, but about half the comments are like yours.

I'll be following this. Don't listen to the hate -- dive in, I'm sure you'll learn a ton, and maybe build something useful for yourself and others.

This is the exact definition of hacking, if you ask me.

Implementing an OS is hard work, even if you build on the very hard work of those who have tried before you in terms of APIs, abstraction layers, etc. It doesn't sound like you have a lot of familiarity with low-level software development, or software development in general, so you just took a "hard mode" project and cranked it up to "nightmare".

Generally this is a bad idea because without any external motivation, you lose interest and stop working. With external motivation is worse, because you can burn out and become a catatonic shell of a person, staring absently into space for the rest of your life.

Just some FYIs:

> On the side note - It’s 21st century, but our PCs are still booting up as old-fart Intel 8086.

You should read about EFI (http://www.intel.com/content/www/us/en/architecture-and-tech...)

You should also read all of the lecture materials from good universities OS classes. In those classes, you basically do this. Some classes are more guided than others. Some places to start:

- CMU: http://www.cs.cmu.edu/~410/

- UMD: https://www.cs.umd.edu/~shankar/412-F12/

UMD uses a toy operating system called GeekOS that the students extend. You might find browsing its source code useful (http://code.google.com/p/geekos/)

Good luck!

There are a bunch of teaching OSes which are a great place to start for this stuff.

The actively developed ones we chose between when teaching the OS class at UW this fall were:

JOS (mit) https://github.com/guanqun/mit-jos

Pintos (stanford) http://www.stanford.edu/class/cs140/projects/

OS161 (harvard) http://www.eecs.harvard.edu/~syrah/os161/

This is pretty interesting. I find this type of behavior to be everywhere in the software world. If I cannot figure something out with a platform, let me re-write it. Interesting observation is that the number of these "re-writes" is inversely proportional to the experience of the person proposing these ideas.

Good luck to the author, nonetheless it will be a good learning experience for him.

Also interesting is that once you start re-writing and bump into hard issues, you decide to take a peek at how others have solved them, and you eventually find yourself understanding the platform you're trying to re-write and realizing that maybe that platform is not as bad as you thought it was.

"Those who don't understand Unix are condemned to reinvent it, poorly."

On threads such as this one it is easy --and even popular-- to dump on those, such as myself, who come down on the "nay" side, even when this is firmly based on experience and having made plenty of mistakes in the field.

The linked article does NOT talk about a one-semester school project or a quick-and-simple learning OS.

No, the article talks about a web developer with no real experience writing low-level code not only wanting to bootstrap every single device driver but also ignoring years of accumulated knowledge and code libraries to write an OS that boots directly into graphical mode, does not take advantage of POSIX and more.

There's nothing wrong with the "How hard can it be?" approach to learning. I've done this many times. And almost every single time I came away with "I sure learned a lot, but what the fuck was I thinking?". The last time I pulled one of those was about fifteen years ago and the "three month project" took nearly two years.

What he is talking about is more complex than writing the Linux kernel from scratch because he wants to re-invent everything. Here are some stats on the Linux kernel:


Even if his project was 10% of this it would still be a grotesque miscalculation for a single developer, otherwise employed and without the experience to back up some of what he is proposing.

If, on the other hand, the post had suggested something like this it would have been far more reasonable an idea:

"Hey, I just spent a year implementing everything in the Tanenbaum book. Now I would like to start from that base and enhance the OS to make it do this...".

Let's compare notes in a year and see how far he got.

I could have guessed this would turn into a massive flamefest, but c'mon people. So what if he's green? This will be a learning experience for him. And I'll hazard that in the end, even if he never ends up writing a whole OS from scratch, that he will have gained more valuable low level experience than the vast majority of those here.

I agree that this project is hugely ambitious for one man. So was Linux. Linus recruited a bunch of other hobby programmers to help him make it. Although he may be jumping the gun on announcing it, he's got a lot of good ideas about stripping back all the obfuscation that's resulted from 30 years of stacking things onto the same old conventions and wiggling them around until they fit.

I'm sure the idea of building a modern OS that is straightforward and written in a simple, popular language like C (and possibly Python later for higher-level stuff) will appeal to a wide range of people who will all want to help. I'd love to see this project happen, and if the day comes where Gusts is calling for help, I'll be right there in line to help him make this.

The Unix/Linux systems were designed by a lot of very smart people and have gone through many iterations. No doubt they have some historical baggage, but there are good reasons why the current design is as-is. Those who don't learn the history are doomed to repeat it.

"I still don’t understand the meaning and differences between /bin, /usr/bin and /usr/local/bin"

Oh. OK, then.

I hope you know plan9 or other OSes beside *nix/POSIX so you have a larger field of view.

If I had crossed the desire threshold to start that project (#1 project in my mind since I left college) I'd leave the C ecosystem altogether and design a typed, functional, binary-friendly, modular subset of C (and probably be forever alone). Something in the groove of http://en.wikipedia.org/wiki/BitC, even though its talented author concluded it wasn't a successful path.

I think his critique of the Linux file layout and the other points made are completely spot on. I, and I am sure others, have often thought about how great it would be if things were different. I happen to know why those folders are named as they are, and the names are completely arbitrary. He also started on it and made some initial progress. So he did PHP before, boo-hoo.

I would prefer he decided to fork Linux and change the things he didn't like rather than start from scratch. However, there is great value in starting from scratch. I wish I had a life :) to join him and figure out things together; it would be a blast. How many times in your life do you have a chance to work on an actual modern OS?

I believe it is totally possible for him to accomplish what he started, if knowledgeable people would join him and work with project together. Today with amazing tools, it is good time to create a new OS that would have modern tooling.

I wrote recently on my blog about a need for developer distribution of linux. Strangely this is still missing. http://softwaredevelopmentinchicago.com/2012/10/17/ubuntu-al...

It is great that we are discussing this. That is how things start.

Short advice: find a compiler that supports decent modern C (C99), NOT VS (http://www.infoq.com/news/2012/05/vs_c99_support/) - it may not be that relevant for your kernel code, but at least at the end of your adventure you will have learned how to write good modern C (and no, C is not dead and replaced by C++; they are languages with different philosophies, used by different kinds of programmers, and they are both evolving on their own routes, despite FUD originating from Microsoft and other sources)

...and when you reach the GUI part, do the same for C++, using the latest version and language features: I've heard that VS2012's latest upgrade got closer to it, but google around before settling on it

...or to keep it simpler: better use GCC compilers (since the Linux kernel is built with it, you should find enough compiler specific docs and related tools too)

Take care when writing your own bootloader, or you may find yourself essentially maintaining two separate kernel trees.

I love little thought experiments like this, so here's my 2 cents:

1. Targeting a modern architecture is good, but if I were being this ambitious, rather than such a backwards-compatibility-burdened architecture as x86_64 (even when it is highly performant just through raw funding dollars) I would still rather start at square 1 on some risc 64 bit virtual 48 bit physical word system. Go even further, and design such a hardware ecosystem with heterogeneous computing built into the foundations - have arbitrary numbers of ALUs and FPUs and different pipeline structures allowing for various degrees of SIMD parallelism across some tightly integrated weak cores and more heavily pipelined and bulkier serial cores, and have an intelligent enough instruction set to allow the scheduler (or even better, the hardware itself) to recognize parallel tasks and execute them with varying degrees of parallelism. Take AMD Fusion or Tegra to the next level and instead of having a discrete GPU and CPU on one die, mash them together and share all the resources.

2. I'd kick C out. If I'm going with a new architecture, I need to write the compiler from scratch anyway. I might consider LLVM for such a system, just because the intermediary assembly layer is intentionally lossless and allows for backwards language compatibility with everything under the sun right now. But ditch C, take modern language concepts from C++, Python etc, cut down on the glyphic syntax and try rethinking the distribution of special characters (I think pointer<int> c makes more sense than int *c, for example - go even further, and provide 3 levels of verbosity for each concept, like pointer<int32> c, ptr<int32> c, and &:i32 c). I would definitely want to fix standard type sizes at the least, having things like i32 integers instead of the int type being 16 or 32 bit, etc, with some more modern niceties like the D style real float that uses the architecture's maximum FPU register size.

3. Screw UEFI, mainly because it is a design-by-consortium concept - it is inherently cumbersome because it was a committee project between industry giants rather than a revolution in booting. I do like cutting down on legacy interfaces; I'd go even further and try to minimize my system to (in theory) one serial transport and one digital, maybe 4, with unidirectional and bidirectional versions of both, and maybe support for some classic analog busses (like audio, which doesn't make much sense to transport in digital format, although I haven't looked into it much). Everything plug and play, everything attempting to provide power over a channel so you don't need additional power connectivity if you can avoid it. For the BIOS, I would replace it with some metric of scan busses for profiles -> initiate some kind of device-wise self test -> provide device information in memory to the payload binary, to allow memory mapping and all the other goodness. Maybe even have the BIOS itself act as a sub-kernel and provide the mapping itself. Maybe even fork the kernel, and treat it like some kind of paravirtualized device environment where the BIOS never overrides itself with the payload but instead stays active as a device interface. Saves a lot of code redundancy between the two then. It would of course have an integrated bootloader and the ability to parse storage device trees for some bootable material. Maybe have file system standards where each partition has a table of pointers to loadable binaries somewhere, or maybe stick them in some partition table entry (obviously not a FS expert here).

4. Screw URIs, go straight for a kernelwise VFS that can reference everything. I'd love to see /net/<IP address>/ referencing the top level of some remote server's public resources. You could have a universal network protocol where each connection is treated as a virtual mount, and individual files (and everything is a file, of course) can dictate if they use streamed or packet based data off some network transaction about the base protocol. So instead of having http://google.com, you could use /net/google.com/ which when opened does DNS resolution in the VFS to (well, ipv6, obviously - we are talking about a new OS here, so 2001:4860:8006::62 - and as a side note, I would never try to get rid of IP as the underlying transport protocol - as insane as I might be about redesigning hardware and rethinking stuff people much smarter than myself came up with, I know you will never usurp IP as the network transport everyone uses to connect the world). And then when you open google.com/search, you open a remote file that interprets the "arguments" of ?q=baconatorextreme on the extension into the returned page file that you access.

I agree with getting rid of Unix directories; they are outdated and crappy, and all their names make no sense. However, /bin is meant to be system-required binaries to boot, where /sbin is root utility binaries, /usr/bin is general purpose executables that might not be local to the machine and might be a remote mount, and /usr/local/bin is the local machine's installed binaries. Of course these policies are never abided by, and they still have /etc, /usr/games, and a bunch of other folders to make life a hassle.

That's enough rants for an HN comment thread though, I'll stop and spare y'all :P

> I would still rather start at square 1 on some risc 64 bit virtual 48 bit physical word system.

A friend of mine wrote a little OS (similar to vxWorks) for 64-bit MIPS in the 90s; I helped him debug a couple of problems. This was actual production code to go in a product (our employer was too cheap to license vxWorks for the secondary processor in the product).

It's incredibly easy to get something up and running on an architecture like this. (Admittedly he cheated by virtue of not needing to interface with hw to get loaded -- everything gets much easier when your EEPROM is wired to the boot vector.) Get your kernel data structures set up, handle interrupts, and BOOM! you've got multitasking. (I think it took him about a week.)

^This ... is exactly what I'm thinking.

But I have no access to such an industry like hardware production. Software is cheap, hardware is not :(

As for URIs - this is where they fit in well. Remember that // stands for domain root, so instead of /net/google.com it's fine to use //google.com. The scheme, on the other hand, is a hint for which port to use, so http://google.com/ is the same as //google.com:80 - plus it can be used for URI driver negotiation. Simply call fopen("ftp://domain.com/file.txt") and the kernel uses the FTP driver; call fopen("imap://me:pass@domain.com/inbox/") and you'll receive your inbox folder on a mail server, etc. Omitting the scheme falls back to file://, and omitting the domain falls back to localhost, so /documents/hello.txt is really file://localhost/documents/hello.txt - and localhost is managed with a VFS where you can create even funnier stuff, like linking other types of URLs into your file system. For example, file://localhost/some_site.html -> http://www.google.com/ or file://localhost/documents/work/ -> ftp://me:pass@work.tld/home/
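Those two fallback rules (missing scheme -> file://, missing domain -> localhost) are mechanical enough to sketch. This is just an illustration of the rules described above, assuming absolute paths; the function name is invented:

```c
#include <stdio.h>
#include <string.h>

/* Sketch of the proposed fallback rules: a bare absolute path becomes
 * file://localhost/..., a //host/... path gets the file scheme, and a
 * fully qualified URI passes through untouched. */
static void uri_normalize(const char *in, char *out, size_t outlen)
{
    if (strstr(in, "://") != NULL)
        snprintf(out, outlen, "%s", in);            /* scheme given */
    else if (strncmp(in, "//", 2) == 0)
        snprintf(out, outlen, "file:%s", in);       /* //host/... */
    else
        snprintf(out, outlen, "file://localhost%s", in);  /* bare path */
}
```

Port selection (http -> 80 and so on) would be a second lookup table keyed on the scheme, left out here.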

Basically, that is my idea of what I'd love to do + yes, a complete hardware overhaul, but that's not for me (at least not for now) as I'm living in a banana republic and we don't have 6-layer PCB printing/soldering facilities here :D

awesome comment ! gj

I am reminded of http://www.sparrowos.com/ (aka losethos).

I'd like to see someone try re-inventing a minimalist userspace. Create an OS running on the Linux kernel without coreutils, binutils etc., and see how far you can go.

If you strip out loadable module support and such, is it possible to boot without the usual POSIX support structure? Without filesystems?

I ported Linux to an embedded system that booted straight into ash (minimal shell). Other than ash, it only had custom applications. We didn't bother with a separate init app even - ash served that role sufficiently well.

Linux has very minimal requirements of userspace - pretty much all you need is a process to act as your init process that knows to wait() now and again to clean up any zombie processes. The rest is pretty much up to how you want to organize things.

It is possible to have just the kernel and just what you want to run hardwired as the init process, say some minimal web server. A slightly more sane solution is to have a minimal init process that can respawn your server if it crashes.
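The zombie-reaping duty mentioned above is only a few lines of C. Here's an illustrative userspace sketch (not code from any real init - a real one would also respawn its one service in the same loop):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Core duty of a minimal init: collect any children that have exited
 * so they don't linger as zombies. WNOHANG means "reap whatever is
 * already dead, but don't block waiting for more". */
static int reap_zombies(void)
{
    int reaped = 0;
    pid_t pid;
    while ((pid = waitpid(-1, NULL, WNOHANG)) > 0)
        reaped++;
    return reaped;
}
```

PID 1 would call this from its main loop (or from a SIGCHLD handler), because orphaned processes get reparented to init and someone has to wait() for them.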

As for filesystems, there needs to be something, I don't think it can work at all without a root file system, but a ram disk will do fine. The FHS directory structure isn't needed at all of course.

I've seen some ridiculously stripped down embedded Linux systems. Most tend to have something like Busybox though, with a fairly conventional look, but some prefer to just use that while developing and rip it out in the deployed version.

Something like Android?

Writing a new OS entirely from scratch, without any intention to reuse the useful parts of previous projects? This is literally what we call "reinventing the wheel", and if you go through human history you will find that this is absolutely NOT the best way to develop anything.

If that were true then there would only be one wheel design in all of human history.

As many people here said, this guy may not know what he doesn't know. But I admire his bravery, and it is a good thing for someone to shoulder off all the legacy burdens and start a fresh OS.

As a developer, I have similar feelings about software, including OSes, and I started a fresh vector editor project (Torapp guilloche online designer, http://www.torapp.info). I know a vector editor is much simpler than an OS, but it is also pretty complicated. When designing the editor, I learned a lot and changed the design multiple times. I am sure that guy will learn a lot, and even if he cannot complete an OS, he may leave a well-designed code base for other people to start from.

Read books, Guncha! Good luck. If Stallman did that, so can you.

Good luck, I certainly hope you don't end up like losethos.

> No Unix directory tree. I hate it – it does not say anything to a user

Actually it has a lot to say, but in this case it just appealed to the fifth amendment.

This is one of those times that the phrase "RTFM" actually makes sense. If the author had read what each one was rather than guessing, he'd have enjoyed it all a lot more. Nobody likes typing out "/system/configurations/networking". That's why they wound up with things like "/sys/conf/net". But hey, if he succeeds, maybe he'll come around.

Do you have any reason to assume he won't implement tab completion? It sure doesn't take long to write out /System/Library/[..] on OS X.

Good luck, have fun.

Won't be big and professional like gnu.

Seriously though, good luck.

This guy is slightly clueless but he has the spirit. I've written my own hobby operating system skeleton and it was a very good learning experience.

Here's a few notes about his plans:

  > Target modern architecture
  > Avoid legacy, drop it as fast as you can. You can even skip the Protected mode and jump directly to Long mode
I went on and wrote my hobby OS on x86_64 too. Unfortunately, working in x86_64 long mode is a little bit more difficult than using 32 bit protected mode. You can go direct to long mode, but you'll have to write that from scratch. GRUB and other multiboot protocol capable bootloaders set up 32-bit protected mode for you but not long mode. You cannot be in long mode without paging enabled (unlike in protected mode).

So if you want to "skip" protected mode, you'll have to write a pile of assembly code to get there. x86_64 is a lot more work than 32bit x86.

  > Jump to C as soon as possible
This is most definitely the right thing to do. Jump into C code as soon as possible. Getting shit done in Assembly is so much slower.

You only need a few pieces of assembly code to get an operating system running: the boot code and the interrupt handler code. The boot code and the interrupt handler are just small trampolines that go to C code as soon as possible.

In addition to the boot and interrupt handler code, you occasionally need to use some privileged mode CPU instructions (disable interrupts or change page table, etc). Use inline assembler for that.
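The C side of that trampoline pattern is essentially a dispatch table. Here's a hypothetical userspace sketch (all names invented) of what the assembly interrupt stubs would jump into after saving registers and pushing the vector number:

```c
#include <stddef.h>

#define NUM_VECTORS 256

typedef void (*irq_handler_t)(int vector);

/* Table the assembly stubs dispatch through, one slot per vector. */
static irq_handler_t handlers[NUM_VECTORS];
static int spurious;        /* interrupts nobody claimed */
static int ticks;           /* bumped by the example handler below */

/* Example handler: a timer tick counter. */
static void timer_handler(int vector)
{
    (void)vector;
    ticks++;
}

static void irq_register(int vector, irq_handler_t h)
{
    if (vector >= 0 && vector < NUM_VECTORS)
        handlers[vector] = h;
}

/* Common C entry point reached from every assembly stub. */
static void irq_dispatch(int vector)
{
    if (vector >= 0 && vector < NUM_VECTORS && handlers[vector])
        handlers[vector](vector);
    else
        spurious++;
}
```

The per-vector assembly stubs stay tiny: save state, push the vector number, call irq_dispatch, restore state, iret.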

Anyone in this thread who suggested using something other than C seemed fairly clueless about it. Of the choices you have available, C is the simplest way to go. Everything else is either more work or more difficult.

  > Forget old interfaces like PCI, IDE, PS/2, Serial/Parallel ports.
Not so fast. You most likely want to implement a serial console for your operating system. Maybe even add a serial port debugging interface (GDB stubs).

You're most likely going to have to deal with the PCI bus at some point too: even though many devices no longer sit on a physical PCI bus on the motherboard, they still hook up to the PCI bus logically. Look at the output of "lspci" on Linux - all of those devices are accessed through PCI. This includes USB, PCIe, SATA, IDE, network interfaces, etc.
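Talking to PCI config space via the legacy mechanism #1 boils down to building an address word for I/O port 0xCF8 and then reading the data from 0xCFC. Only the (pure, testable) address computation is sketched here; the actual port I/O is privileged:

```c
#include <stdint.h>

/* PCI configuration mechanism #1: the dword written to port 0xCF8
 * encodes enable bit, bus, device, function and register offset;
 * the config data is then read from port 0xCFC. */
static uint32_t pci_config_addr(uint8_t bus, uint8_t dev,
                                uint8_t func, uint8_t reg)
{
    return 0x80000000u                      /* enable bit */
         | ((uint32_t)bus          << 16)   /* bus: 8 bits */
         | ((uint32_t)(dev & 0x1f) << 11)   /* device: 5 bits */
         | ((uint32_t)(func & 0x07) << 8)   /* function: 3 bits */
         | (reg & 0xfc);                    /* dword-aligned offset */
}
```

Enumerating the bus is then a loop over bus/device/function reading register 0 (vendor/device ID) and skipping slots that return 0xFFFFFFFF.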

Again, using the modern buses is a lot more work than using the old ones and it partially builds upon the old things.

  > Why does every tutorial still use such an ancient device as Floppy?
Because when doing a bootloader from scratch for a tutorial, it's a lot easier to use the floppy disk than it is to use a real hard disk or any other media.

  > Avoid the use of GRUB or any other multiboot bootloader – make my own and allow only my own OS on the system
No no no. If you want to build an operating system, do not build a bootloader. Use the multiboot protocol and things will be a lot easier. You'll get started so much faster and get to the real stuff sooner. (NOTE: I don't know how UEFI devices boot, it might contain something like multiboot).

Most hobby operating systems are just half-assed stage 1 bootloaders. Just get over the fact that you'll have to use code written by others and get booted.

Popular emulators (bochs, qemu) can boot multiboot kernels directly so you'll save a lot of time there too.

You need to get booted in an emulator and running under a debugger as quickly as possible. Operating system development is so much easier to do with a debugger at hand. Failures generally cause a boot loop or hang the device so there won't be a lot of diagnostics to help with issues.

So my advice is: set up Qemu + GDB + multiboot, and get your kernel booted in a debugger as early as you can.

I won't go into commenting his wacky ideas about VFS structure or APIs. It's nice to make great plans up front but by the time you're booted to your own kernel, a lot of the naïve ideas you started with will be "corrected".

Happy hacking and do not listen to the naysayers.

PS. here's my hobby OS: http://github.com/rikusalminen/danjeros

OK: if you don't have any real experience in low-level embedded coding (relevant to device drivers), RTOS or OS design in general, file systems, data structures, algorithms, interfaces, etc., and if you have only "hobby level" experience with Assembler, C and C++, and if your intent is to write a desktop OS from the ground up without making use of existing technologies, drivers, file systems, memory management, POSIX, etc., then here's a list of books that could be considered required reading before you can really start to write specifications and code. Pick twenty of these and that might be a good start.

In no particular order:

1- http://www.amazon.com/C-Programming-Language-2nd-Edition/dp/...

2- http://www.amazon.com/The-Answer-Book-Solutions-Programming/...

3- http://www.amazon.com/The-Standard-Library-P-J-Plauger/dp/01...

4- http://www.amazon.com/C-Traps-Pitfalls-Andrew-Koenig/dp/0201...

5- http://www.amazon.com/Expert-Programming-Peter-van-Linden/dp...

6- http://www.amazon.com/Data-Structures-In-Noel-Kalicharan/dp/...

7- http://www.amazon.com/Data-Structures-Using-Aaron-Tenenbaum/...

8- http://www.amazon.com/Mastering-Algorithms-C-Kyle-Loudon/dp/...

9- http://www.amazon.com/Code-Complete-Practical-Handbook-Const...

10- http://www.amazon.com/Design-Patterns-Elements-Reusable-Obje...

11- http://www.amazon.com/The-Mythical-Man-Month-Engineering-Ann...

12- http://www.amazon.com/The-Programming-Language-4th-Edition/d...

13- http://www.amazon.com/The-Standard-Library-Tutorial-Referenc...

14- http://www.amazon.com/API-Design-C-Martin-Reddy/dp/012385003...

15- http://www.amazon.com/The-Linux-Programming-Interface-Handbo...

16- http://www.amazon.com/Computer-Systems-Programmers-Perspecti...

17- http://www.amazon.com/System-Programming-Unix-Adam-Hoover/dp...

18- http://www.amazon.com/Memory-Programming-Concept-Frantisek-F...

19- http://www.amazon.com/Memory-Management-Implementations-Prog...

20- http://www.amazon.com/UNIX-Filesystems-Evolution-Design-Impl...

21- http://www.amazon.com/PCI-System-Architecture-4th-Edition/dp...

22- http://www.amazon.com/Universal-Serial-System-Architecture-E...

23- http://www.amazon.com/Introduction-PCI-Express-Hardware-Deve...

24- http://www.amazon.com/Serial-Storage-Architecture-Applicatio...

25- http://www.amazon.com/SATA-Storage-Technology-Serial-ATA/dp/...

26- http://www.amazon.com/Beyond-BIOS-Developing-Extensible-Inte...

27- http://www.amazon.com/Professional-Assembly-Language-Program...

28- http://www.amazon.com/Linux-Kernel-Development-3rd-Edition/d...

29- http://www.amazon.com/Version-Control-Git-collaborative-deve...

30- http://www.amazon.com/Embedded-Software-Primer-David-Simon/d...

31- http://www.amazon.com/Programming-Embedded-Systems-C/dp/1565...

32- http://www.amazon.com/Making-Embedded-Systems-Patterns-Softw...

33- http://www.amazon.com/Operating-System-Concepts-Abraham-Silb...

34- http://www.amazon.com/Performance-Preemptive-Multitasking-Mi...

35- http://www.amazon.com/Design-Operating-System-Prentice-Hall-...

36- http://www.amazon.com/Unix-Network-Programming-Sockets-Netwo...

37- http://www.amazon.com/TCP-Illustrated-Volume-Addison-Wesley-...

38- http://www.amazon.com/TCP-IP-Illustrated-Vol-Implementation/...

39- http://www.amazon.com/TCP-Illustrated-Vol-Transactions-Proto...

40- http://www.amazon.com/User-Interface-Design-Programmers-Spol...

41- http://www.amazon.com/Designing-Interfaces-Jenifer-Tidwell/d...

42- http://www.amazon.com/Designing-Interfaces-Jenifer-Tidwell/d...

43- http://www.amazon.com/Programming-POSIX-Threads-David-Butenh...

44- http://www.intel.com/p/en_US/embedded/hwsw/software/hd-gma#d...

45- http://www.intel.com/content/www/us/en/processors/architectu...

46- http://www.intel.com/p/en_US/embedded/hwsw/hardware/core-b75...

47- http://www.hdmi.org/index.aspx

48- http://en.wikipedia.org/wiki/Digital_Visual_Interface

49- http://www.amazon.com/Essential-Device-Drivers-Sreekrishnan-...

50- http://www.amazon.com/Making-Embedded-Systems-Patterns-Softw...

51- http://www.amazon.com/Python-Programming-Introduction-Comput...

52- http://www.amazon.com/Practical-System-Design-Dominic-Giampa...

53- http://www.amazon.com/File-Systems-Structures-Thomas-Harbron...

54- ...well, I'll stop here.

Of course, the equivalent knowledge can be obtained by trial and error, which would take longer and might result in costly errors and imperfect design. The greater danger here is that a sole developer, without the feedback and interaction of even a small group of capable and experienced programmers, could simply burn a lot of time repeating the mistakes made by those who have already trodden that territory.

If the goal is to write a small RTOS on a small but nicely-featured microcontroller, then the C books and the uC/OS book might be a good shove in the right direction. Things start getting complicated if you need to write such things as a full USB stack, PCIe subsystem, graphics drivers, etc.

> If the goal is to write a small RTOS on a small but nicely-featured microcontroller, then the C books and the uC/OS book might be a good shove in the right direction. Things start getting complicated if you need to write such things as a full USB stack, PCIe subsystem, graphics drivers, etc.

I've always wondered whether some way could be created to skip this step in [research] OS prototyping: a shared library (exokernel?) of just drivers, leaving the "design decisions" of the OS (system calls, memory management, scheduling, filesystems, &c. - you know, the things people get into OS development to play with) to the developer.

People already sort of do this by targeting an emulator like VirtualBox to begin with--by doing so, you only (initially) need one driver for each feature you want to add, and the emulator takes care of portability. But this approach can't be scaled up to a hypervisor (Xen) or KVM, because those expect their guest operating systems to also have relevant drivers for (at least some of) the hardware.

I'm wondering at this point if you could, say, fork Linux to strip it down to "just the drivers" to start such a project (possibly even continuing to merge in driver-related commits from upstream) or if this would be a meaningless proposition--how reliant are various drivers of an OS on OS kernel-level daemons that themselves rely on the particular implementation of OS process management, OS IPC, etc.? Could you code for the Linux driver-base without your project growing strongly isomorphic structures such as init, acpid, etc.?

Because, if you could--if the driver-base could just rely on a clean, standardized, exported C API from the rest of the kernel, then perhaps (and this is the starry-eyed dream of mine) we could move "hardware support development" to a separate project from "kernel development", and projects like HURD and Plan9 could "get off the ground" in terms of driver support.

A lot depends on the platform. If the OS is for a WinTel motherboard it is one thing. If, however, the idea is to bypass driver development for a wide range of platforms it gets complicated.

In my experience one of the most painful aspects of bringing up an OS on a new platform is exactly this issue of drivers as well as file systems. A little google-ing quickly reveals that these are some of the areas where one might have to spend big bucks in the embedded world in order to license such modules as FFS (Flash File System) with wear leveling and other features as well as USB and networking stacks. Rolling your own as a solo developer or even a small team could very well fit into the definition of insanity. I have done a good chunk of a special purpose high-performance FFS. It was an all-absorbing project for months and, realistically, in the end, it did not match all of the capabilities of what could be had commercially.

This is where it is easy to justify moving into a more advanced platform in order to be able to leverage Embedded Linux. Here you get to benefit and leverage the work of tens of thousands of developers devoted to scratching very specific itches.

The downside, of course, is that if what you need isn't implemented in the board support package for the processor you happen to be working with, well, you are screwed. The idea that you can just write it yourself because it's Linux is only applicable if you or your team are well-versed in Linux development at a low enough level. If that is not the case you are back to square one. If you have to go that route you have to hire an additional developer who knows this stuff inside out. That could mean $100K per year. So now you are, once again, back at square one: hiring a dev might actually be more expensive than licensing a commercial OS with support, drivers, etc.

I was faced with exactly that conundrum a few years ago. We ended-up going with Windows CE (as ugly as that may sound). There are many reasons for that but the most compelling one may have been that we could identify an OEM board with the right I/O, features, form factor, price and full support for all of the drivers and subsystems we needed. In other words, we could focus on developing the actual product rather than having to dig deeper and deeper into low-level issues.

It'd be great if low level drivers could be universal and platform independent to the degree that they could be used as you suggest. Obviously VM-based platforms like Java can offer something like that so long as someone has done the low-level work for you. All that means is that you don't have to deal with the drivers.

To go a little further, part of the problem is that no standard interface exists to talk to chips. In other words, configuring and running a DVI transmitter, a serial port and a Bluetooth I/F are vastly different even when you might be doing some of the same things. Setting up data rates, clocks, etc. can be day and night from chip to chip.

I haven't really given it much thought. My knee-jerk reaction is that it would be very hard to create a unified, discoverable, platform-independent mechanism to program chips. The closest one could possibly approach this idea would be if chip makers were expected to provide drivers written to a common interface. Well, not likely or practical.

Not an easy problem.

> It'd be great if low level drivers could be universal and platform independent to the degree that they could be used as you suggest. Obviously VM-based platforms like Java can offer something like that so long as someone has done the low-level work for you. All that means is that you don't have to deal with the drivers.

Another thought: if not just a package of drivers, then how stripped down (for the purpose of raw efficiency) could you make an operating system intended only to run an emulator (IA32, JVM, BEAM, whatever) for "your" operating system? Presumably you could strip away scheduling, memory security, etc. since the application VM could be handling those if it wanted to. Is there already a major project to do this for Linux?

OS development can be fun. I'm writing a hobby OS myself, but currently using GRUB. GDT and interrupts are working, so I can already e.g. get keyboard input. But there's nothing like usermode programs yet. I'm 16, so what bcantrill said might be true ("there is a certain arrogance of youth here").

I would avoid the project unless you truly understand the scope of what you're doing and are dying to get your hands dirty with vm and filesys implementation despite little reward (other than the satisfaction of learning). OS dev is quite low level and infamously hard to debug

Good luck and thumbs up for the idea of writing a new OS. I think an inspiration would be to look at BareMetal OS, perhaps you can find some ideas there.


Sigh. I don't know what to say. I admire OP's desire to dive into OS development but I hope he has the perseverance to carry it through, because he has a long way to fill in the huge gaps in his knowledge to build an OS.

I like the idea of the directory tree, good luck! I have been interested in OS Development lately, I have read some of the James tutorial, the OsDev Wiki, and the Tanenbaum's book... but still have no idea what I'm doing.

I would look at the Design of Unix. That's what Linus used to make Linux. Individually study everything you don't understand...

Also, prepare for about 6 months of hard yet rewarding, given that you put in about 50 hours a week ;)

Today's modern interfaces are tomorrow's obsolete ones. It's not like PCIe will last any longer than PCI, PCI-X, ISA, Vesa Local Bus, EISA, etc.

I don't see why we aren't nearing a threshold where we can do all system interconnects over some specified serial digital interface and a parallel one. Past a point, it becomes more logical than trying to optimize edge cases (like how DVI is unidirectional raw video): you could just have some USB-like (but not USB - USB is ugly and tacky and a mess) digital connector for every peripheral, from the monitor to the network adapter to the speakers, maybe even off one controller, with software handling the handshakes to determine what is connected where.

And then get rid of sata/pci/sas etc internal connectors and just use the same interconnect hub as the external devices. Again, handshakes to determine device connectivity.

Wouldn't that be so easy? One connection to rule them all! I'm not trying to say it would be easy to get there - we are mired in a world where we look at a concept like this and say "how dumb, you aren't utilizing a bidirectional link for video feeds", or "you aren't using power delivery to a network router", or "stream-based vs packet-based transport layers have speed/bandwidth advantages over one another". But wouldn't it be great to plug a new GPU into a real universal serial bus?

I remember a guy saying the same a few years ago ... what's his name? Linus?

This is cool. Big project for one person, but cool none the less

Write your own OS: great!

Say you're writing your own OS: ok, sure...

3rd year ECE + hope. How did this make HN?

Good lord ...

[deleted]

ChuckMcM is right, let him find out what he doesn't know. Props for the ambition.



yeah, free speech sucks when you don't agree with it. :((



