
After 5 days, my OS doesn't crash when I press a key - jvns
http://jvns.ca/blog/2013/12/04/day-37-how-a-keyboard-works/
======
PhasmaFelis
I've never done OS programming, but something about this just tickles me. I
think it's the way she cheerfully acknowledges screwing up constantly but
still gamely keeps tackling the problem. To admit your mistakes without shame,
and try again without bitterness--to lose your ego, in other words--is
something I certainly aspire to, as a programmer and a human being.

~~~
joshguthrie
In a context where everything "low-level" is handed to us without a "how-to",
looking under the hood to build your own thing is always a pleasure, be it a
new programming language or a whole OS.

To attain this state of zen-itude, just look up something that you'd like to
do where documentation is hard to come by ("how to write your own shell from
scratch" is a fun one if you want to learn a LOT about UNIX) and start your
own project.

Related: とあるOS (ToaruOS) is a nice hobby OS too:
[https://github.com/klange/toaruos](https://github.com/klange/toaruos)

------
exDM69
Another good read, and it really gives insight into how much time and effort
it takes to make anything happen on bare metal.

@jvns: __SPOILERS AHEAD - DO NOT READ FURTHER IF YOU WANT TO GET TO THE HEART
OF THIS PROBLEM YOURSELF__

As for your problem with the keyboard interrupt firing only once, I think
(after a quick glance over the code) that you do not send an "end of
interrupt" (EOI) message to the 8259 PIC chip after you have received an
interrupt. I faintly recall hitting the same issue in my ordeals with the PIC
chip: if you don't signal EOI, the PIC won't fire the next keyboard interrupt.

Although I think I was working with the 8253/8254 PIT (Programmable Interval
Timer) chip at the time, the same should apply to the keyboard interrupt,
since it's handled by the PIC too.

Here's the relevant part from my project:
[https://github.com/rikusalminen/danjeros/blob/master/src/arc...](https://github.com/rikusalminen/danjeros/blob/master/src/arch/x86_64/interrupt.c#L68)
[https://github.com/rikusalminen/danjeros/blob/master/src/arc...](https://github.com/rikusalminen/danjeros/blob/master/src/arch/x86/pic.c#L41)
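
In outline, the handler needs to do something like this - a minimal sketch of
the standard 8259 dance, assuming the usual port numbers (not her code, just
the general shape):

    #include <stdint.h>

    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    static inline uint8_t inb(uint16_t port)
    {
        uint8_t val;
        __asm__ volatile ("inb %1, %0" : "=a"(val) : "Nd"(port));
        return val;
    }

    /* Called from the IRQ1 (keyboard) interrupt stub. */
    void keyboard_handler(void)
    {
        uint8_t scancode = inb(0x60);  /* read the scancode from the controller */
        (void)scancode;                /* ...translate/buffer it here... */
        outb(0x20, 0x20);              /* EOI to the master PIC; skip this and IRQ1 never fires again */
    }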

I'm sorry to spoil your "fun" debugging it.

Dealing with any of this is a fun exercise in being close to the metal, but
unfortunately it has little application in modern computers, because these
days the built-in IO-APIC/LAPIC has replaced the legacy PIC chip.

~~~
kanamekun
Spoilers for a bug report... that's a new one! Very thoughtful. :)

------
vertis
Nothing quite like trying to write a complex system from scratch to realise
just how many giant shoulders you stand on every day.

~~~
Trufa
I love this article:
[https://plus.google.com/112218872649456413744/posts/dfydM2Cn...](https://plus.google.com/112218872649456413744/posts/dfydM2Cnepe)

~~~
ColinWright
Now (re-)submitted as an item in its own right:

[https://news.ycombinator.com/item?id=6853813](https://news.ycombinator.com/item?id=6853813)

~~~
Trufa
It has been submitted several times before, if I'm not mistaken; in fact, I
know it from HN.

~~~
ColinWright
Indeed - I also knew it from HN. But

* it's not been submitted for a long time,

* discussion there is closed,

* it's _really_ good,

* it's technical, and

* it's relevant.

I commented about why I've resubmitted it here:

[https://news.ycombinator.com/item?id=6853824](https://news.ycombinator.com/item?id=6853824)

------
mason55
When I was in undergrad I wanted to take Operating Systems 2, but it was only
offered once a year and didn't fit my schedule. So I did an independent study
where my plan was to write a basic Ethernet driver for Linux.

I have never been as humbled by any sort of programming in my life. This was
back in 2004, so the resources available online weren't nearly as good. I
basically figured everything out by reading Intel's x86 documentation from
start to finish. It took me a solid 3 weeks just to figure out how to start
talking to the Ethernet card. I spent days reading the same few pages of the
manual over and over, trying to understand how to get everything to fit
together in assembly & C.

After 9 weeks I was able to address & initialize the Ethernet card after the
OS booted. I was never able to send or receive any data, even after a solid
50 hours of work.

Oh yeah, and I got a solid A for my work.

~~~
dclusin
With regards to kernel programming, I'm reminded of the old quote from Star
Trek, "It's life, Jim, but not as we know it."

------
Toenex
Working at the layer below is a very useful exercise. As part of my undergrad
course in EE in the late 1980s, we had to design and simulate a 4-bit
microprocessor using logic components (essentially NAND gates). I got
completely immersed in that project. Not only did it force me to understand a
complete, albeit simple, processor, but actually building it gave me a lot
more confidence in writing code. Things have moved on a lot since then, and
there are many layers of abstraction, meaning that most developers don't need
to understand the hardware - but having a decent 'programmer's model' of the
OS can only be a good thing.

~~~
chinpokomon
I built a 4-bit microcontroller the same way around 2000. We flashed the
gates to an Altera EPROM, and other than the external memory chips, the state
machine, ALU, and other subsystems were assembled from the logic gates we
burned. We supported 16 instructions and could access 16 words of memory.
That was enough to code a "password"-based security system that would either
grant access by raising an external line or output an alarm state if the code
was invalid. As I recall, it could only handle about 25 Hz max. Still one of
the most influential projects I worked on in college.

------
ChuckMcM
I enjoyed this. It brought back memories of bringing up BSD 2.9 on the
PDP-11/55 we had in the lab. That particular PDP-11 was pretty rare (it had
some memory-split features to improve performance for computational tasks),
and I remember poking around in the kernel to get to the point where init was
started. Nothing quite like that thrill: a mixture of power and 'oh crap,
this is going to be a lot of work'.

~~~
Locke1689
What she's doing is, surprisingly, a lot harder. Getting a modern IA-32(e)
architecture into the 64-bit mode we all know and love is INSANE.

~~~
ChuckMcM
I'm not going to argue on 'harder' or 'not as hard' as such things are often
personal measures that are difficult to quantify. I will however point out the
meta fallacy of even thinking about these things in terms of 'not hard.'

The truth is, computers have become exceptionally complex. That is one of the
reasons I've been building a medium-complexity standalone system (ARM
Cortex-M based) to give folks something between 8-bit Arduino-type
experiences and 64-bit IA-64 or even ARM Cortex-A9 level complexity. When I
started finding ways to teach my kids about computation, I realized I was
very lucky to have things like PDP-11s, VAXen, 68000s, and DEC-10s to play
with, which did not present this huge wall of complexity that needed to be
scaled to get to the fundamentals. My target is a self-hosted 'DOS'-style
monitor/OS for the Cortex-M4 series: complex enough to host its own
development environment and tools, but simple enough that you can keep it all
in your head at the same time.

~~~
Locke1689
I was actually just pointing out the irony that 30 years of processor
innovation have led to making the task of booting even more complicated than
it used to be ;)

(Note: I understand why; that doesn't make it less ironic.)

------
raverbashing
For some reason HN ate my previous comment.

Doing OS things is hard. On x86 it's _extra hard_.

x86 is one kludge on top of another. Like interrupt chaining.

These issues remind me of making a Sound Blaster card work in DOS (I was
writing a program to play a simple sound file, in Pascal/C).

It was very complicated, and it took several tries to make it work (and there
was no Stack Overflow, no wikis, etc.).

~~~
dmytrish
That's very true from both the compiler and the OS point of view. Intel
machine code is so kludgy: variable-length encodings, subtle differences in
prefixes between modes, exceptions in register handling all over the place,
plenty of addressing modes for memory access, special machine codes for one
privileged register (the `mov %ax` encoding differs from
`mov %cx/%dx/%bx/...`), lengthier encodings for some of the most common uses
(`mov (%esp)` requires a SIB byte, unlike the other registers), and so on.
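
For a taste, compare a few of those encodings side by side (bytes per the
IA-32 manuals, AT&T syntax):

    a1 44 33 22 11       mov 0x11223344, %eax   # special short opcode just for the accumulator
    8b 0d 44 33 22 11    mov 0x11223344, %ecx   # any other register needs a ModRM byte
    89 03                mov %eax, (%ebx)       # plain ModRM addressing
    89 04 24             mov %eax, (%esp)       # (%esp) alone requires an extra SIB byte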

From the OS perspective, it's also overcomplicated. Switching from real to
protected mode is an exercise in doing every action in a long chain properly
and in the right order, or the whole undertaking fails. Then there's
remapping IRQs to interrupts that do not collide with the CPU's internal
exceptions, the weird legacy memory layout with the BIOS, a slew of CPU modes
(real/protected/PAE/long/etc.), the crazy format of entries in the Interrupt
Descriptor Table (where the upper bits of a handler pointer are stored
separately from the lower bits, and where bits are often used differently in
different contexts), redundant built-in features like the poor hardware
multitasking support (Linux and other sane kernels avoid it as much as
possible, but it's still not possible to enter kernel mode from userspace
without filling in the kernel stack pointer in a TSS entry), memory segments
(in protected mode, where a pointer can already address the whole virtual
address space, unlike in real mode), and four privilege rings (whereas all
modern OSes use only two: 0 for kernel space and 3 for userspace).
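
To illustrate the IDT weirdness, here is the shape of a 32-bit interrupt gate
(a sketch of the standard layout - note the handler address split across the
first and last fields):

    #include <stdint.h>

    struct idt_entry {
        uint16_t offset_low;   /* handler address, bits 0..15 */
        uint16_t selector;     /* code segment selector in the GDT */
        uint8_t  zero;         /* unused, must be 0 */
        uint8_t  type_attr;    /* e.g. 0x8E = present, ring 0, 32-bit interrupt gate */
        uint16_t offset_high;  /* handler address, bits 16..31 */
    } __attribute__((packed));

    static void idt_set_gate(struct idt_entry *e, uint32_t handler)
    {
        e->offset_low  = handler & 0xFFFF;
        e->selector    = 0x08;   /* assuming the kernel code segment is GDT entry 1 */
        e->zero        = 0;
        e->type_attr   = 0x8E;
        e->offset_high = (handler >> 16) & 0xFFFF;
    }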

None of it feels like it should be this way, and it often only gets worse.

~~~
hansjorg
Going from the Motorola 68k to x86 with few preconceptions in the nineties
was quite a shock, after slowly having been convinced that the grass actually
was greener on the other side.

On the x86 side I was surprised to find an unnecessarily cryptic assembly
language, little-endian words and a byzantine memory model, for starters.

Hacks upon layers of hacks, rather than intuitive and aesthetically pleasing
solutions, seem to win out in most cases.

~~~
marvin
It's a wonder any of this stuff works at all. I find myself saying this almost
every time I learn something new about how computer systems are designed,
built and used.

------
tehwalrus
> _I’m seriously amazed that operating systems exist and are available for
> free._

Made me smile :)

------
asperous
She'd better not keep this up. First it's just tinkering around, and then
boom... it's a slippery slope to creating a whole kernel.

------
voltagex_
I really like the step-by-step list - nothing's been removed, and I can see
exactly how frustrating/enlightening the process was!

~~~
lonewolf3
Agreed - the frustration was more than enough!

------
unfamiliar
>Press keys. Nothing happens. Hours pass. Realize interrupts are turned off
and I need to turn them on.

>THE OS IS STILL CRASHING WHEN I PRESS A KEY. This continues for 2 days.
Remember that now that I have remapped interrupt 1 to interrupt 33, I need to
update my IDT.

I have issues like this a lot (i.e. forgetting I've tweaked something and
wondering what is causing weird effects). Any advice on how to avoid it?

~~~
zenojevski
Use a rubber-ducking text file.

Every time you stop to think, write down a line or two capturing your current
reasoning and your debugging state. When you make a change, highlight it to
mark it as pending ("need to update IDT"). You can also categorize, tag
lines, and so on.

When you're stuck, just backtrack through this file, whether it's an hour or
two days later, and you'll avoid all of this.
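
For example, a few (made-up) lines from such a file:

    14:02  keyboard IRQ fires once, then nothing - missing EOI?
    14:20  [PENDING] remapped IRQ1 -> int 33, still need to update the IDT
    16:45  still crashes on keypress; backtracked to 14:20 - never updated the IDT!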

------
forktheif
I'm guessing that a lot of the difficulty comes from the fact that he's
creating an OS for a PC, which is a pretty quirky and inconsistent
architecture.

~~~
wikwocket
The blog author is not a 'he' but a 'she.' Her name is mentioned throughout
the article.

And _I'm_ guessing that the difficulty comes from the fact that she's
building a freaking operating system from scratch... ;)

~~~
revasm
Is the gender of said author supposed to be curious or worthy of attention?
I don't intend to be impolite, but this is the third time in several days
that HN posters have brusquely corrected a gender pronoun on a submission
where it is of no particular importance.

~~~
steveklabnik
It's quite disrespectful to call someone a 'he' when they are obviously a
'she.' This is only vaguely related to the recent discussion about 'they' for
people of indeterminate gender.

It's also quite jarring to read; I was confused about whether the OP was
speaking of someone else. People often point out grammar or factual mistakes;
I see your parent as doing no different.

~~~
revasm
I was under the impression that banal grammar corrections are frowned upon
here on HN, because they do nothing to further the topic or encourage
interesting discussion. It's one of the reasons why Reddit is so tedious.

~~~
phaer
The point is not so much a correction of grammar as a correction of the wrong
implicit assumption that a person who writes about osdev has to be male. It's
no drama if it happens once, but as noted in this thread, it has happened
several times in the last few days. And if it happens that often, it
effectively makes women in this field invisible, especially as role models
for the next generation(s).

EDIT: And it goes a bit further. The sole reason we are having this
discussion is that the author explicitly used a female name and her writing
still got perceived as that of a male author. If there were no name, or just
a generic nick, most people here - myself probably included - would have
assumed a male author by default.

------
mwill
Reading this gives me a yearning in my belly to drop everything and hack away
at a toy OS for a while.

~~~
Jemaclus
I did this last week in the three days before Thanksgiving. I got stuck at the
bootloader. Oops? :)

------
jlawer
This brings back memories...

I remember, fresh out of high school, following a few tutorials and writing a
basic kernel for my old Pentium MMX, reading through the Intel developer
manuals for the 386 processor, and seeing all the complexity inherited from
the 8086 & 286 processors. It was a fantastic learning exercise... though I
would advise against using Pascal as the implementation language.

------
moron4hire
I think, after I clear out my current project log, OS hacking is the next
thing on my plate. I'll probably start with drivers, but I eventually want to
get into the core concepts. OS kernels and compiler design are kind of the
last "things" I've not yet done in programming, and probably couldn't just
jump into feet-first and be productive. I've done AI, graphics, real-time
embedded programming, high-performance algorithm stuff, large-scale data
crunching, etc., and they all pretty much come down to the same thing: know
your math and don't waste cycles. 'Spose that's kernels and compilers, too.

------
shubhamjain
Would echoing the character on screen require a separate video driver to mark
the pixels that make up the character? Considering how hard it is to make the
keyboard driver, wouldn't the video driver be many times harder?

~~~
stephen_g
Text mode is super easy - you just set bytes at the correct memory location
(0xb8000000 if I remember correctly).

Writing an actual graphics driver is tricky - although there are some ways to
get decent framebuffer modes (like the VESA BIOS Extensions), doing things
really well requires thousands of lines of code before you can even get a
pixel on the screen, and it's different for every card.

It may be a bit easier now - you could look at the kernel mode-setting code
that now exists in the Linux kernel for Intel, nVidia (through Nouveau) and
AMD cards, which didn't exist a few years ago when I was dabbling in OS dev.
Still, it would be a massively difficult task. You'd probably want to use the
full Mesa stack and the open source drivers built on it if possible, but
writing the runtime to support it would be a big task.

~~~
pbsd
0xB8000, to be precise. This is 16-bit 8086 legacy, where addresses were
20-bit and segmented.
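
Writing to it takes only a few lines (a minimal sketch, assuming 80x25 color
text mode):

    #include <stdint.h>

    /* Each cell is two bytes: the ASCII character, then an attribute byte. */
    static volatile uint16_t *const VGA_TEXT = (uint16_t *)0xB8000;

    void putchar_at(char c, uint8_t attr, int row, int col)
    {
        VGA_TEXT[row * 80 + col] = ((uint16_t)attr << 8) | (uint8_t)c;
    }

    /* e.g. putchar_at('A', 0x07, 0, 0) draws a grey-on-black 'A' in the corner */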

The most fun mode was mode 0x13 from INT 0x10, with its fantastic 320x200
resolution and 256 colors. That one had the buffer at 0xA0000.

~~~
Sharlin
Ah, mode 0x13 - warm memories. Super easy to get working, as well: just one
interrupt and then start writing pixels to memory. It would probably be one
of the first things I'd do if I were to write a kernel.
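
The whole "driver" really is about this big (a minimal sketch, assuming the
mode has already been set via INT 0x10 with AX=0x0013):

    #include <stdint.h>

    /* Mode 0x13: 320x200, one palette-indexed byte per pixel. */
    static volatile uint8_t *const FRAMEBUFFER = (uint8_t *)0xA0000;

    void put_pixel(int x, int y, uint8_t color)
    {
        FRAMEBUFFER[y * 320 + x] = color;
    }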

------
zwieback
Cool, sounds like so much fun. I remember writing OS/2 drivers, and getting
interrupt handling wrong would destabilize the system in strange ways. Once
you get it sort of going, you're afraid every time you press a key or do
anything with the system, because you know your brand-new code is down there
in the bowels waiting to cause a triple fault.

------
topbanana
How do OS developers typically debug this sort of thing? Do they attach
external hardware to step through, or is it serial logging?

~~~
dmytrish
From my small OSdev experience: write printf before anything else (it's not
that hard, just write bytes to *0xB8000). Fortunately, this can be done even
before getting any interrupts working (without interrupts, any input is
almost impossible; I don't even want to talk about polling).
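
Serial logging (the other option mentioned above) is almost as easy, and QEMU
can redirect it to stdout with -serial stdio. A minimal sketch, assuming COM1
at the standard port 0x3F8 and skipping the UART init:

    #include <stdint.h>

    #define COM1 0x3F8

    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    static inline uint8_t inb(uint16_t port)
    {
        uint8_t val;
        __asm__ volatile ("inb %1, %0" : "=a"(val) : "Nd"(port));
        return val;
    }

    void serial_putc(char c)
    {
        while ((inb(COM1 + 5) & 0x20) == 0)
            ;  /* spin until the transmit holding register is empty */
        outb(COM1, (uint8_t)c);
    }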

------
mediumdave
Back in the day when I was playing around with osdev, I ran my code in the
Bochs emulator. WAY easier than running on actual hardware - in particular,
starting the emulator was 10x faster than booting an actual PC.

I do remember the excitement of getting interrupts to work - good times.
Getting task switching to work was magic.

~~~
cgh
She mentions in the article that she's using Qemu.

~~~
mediumdave
Ah - I missed that.

True emulators like Bochs are still potentially a win if they provide fine-
grained debugging features. Bochs was nice because you could single-step
through assembly and view the contents of registers/memory quite easily. You
could also attach gdb to it (although this was somewhat flaky).

------
phearme
Congrats! It took GNU/Hurd years to achieve this.

------
bedspax
And with two keys pressed at the same time?

------
vacri
To be honest, my first thought on seeing the headline was of a particularly
bad Linux horror story...

------
aortega
I really didn't want to sound like an asshole and start criticizing this,
because I like low-level programming articles, but this article is confusing:

1) You don't need an OS to press keys and make them come out on the screen.
An OS is kind of a library that an application uses to do things.

2) I believe what the article refers to as an "OS" is a small process
manager. I don't know if it's preemptive; there's no mention of a timer of
any kind.

3) As far as I can see, this is not an OS but a ring-0 application that reads
the keyboard using IRQs. IMHO an Operative System should at least supply
filesystem, memory-manager, or process-manager services.

Hacker School alumni: these articles are awesome, but please get your
concepts right and at least learn the correct terminology, or you will only
confuse people. But I suspect these articles are vague on purpose to incite
controversy.

~~~
exDM69
You can call it a ring-0 application if you wish to argue about semantics or
the definition of an OS, but you really do end up sounding like an asshole.

Every OS project starts out as a "ring-0 application" running on bare metal:
an application which does nothing except handle interrupts and spew out
numbers on the screen to show that it is working. And most OS projects never
really evolve much past that, because they are primarily intended as
educational tools for the author and nothing else.

If you had a point, you could have made it in a way that makes you seem like a
decent person, not an angry antisocial geek.

~~~
beering
Also, aortega wrote "Operative System" instead of "operating system", so
because of that incorrect terminology, we can ignore everything in the post.
That's how this works, right?

------
optymizer

      >  This continues for 2 days
    

This is the part that made me scoff at the article the most. I wrote a
Unix-like OS for ARM two years ago as a final project for an OS class I took
on campus. There were 20 other people in the class, and all of us forgot to
update the IDT at some point. It was a common mistake, so for this article to
emphasize it like it's some bug that takes 2 days is just lame.

Imagine if my blog post talked about my experiences building an HTML page,
and how it took me 2 days to change a <p> element to a <div>. Let's get
serious here.

How long will it take the author to write a circular buffer to read and write
characters to the UART? A month? That says nothing about the difficulty of the
actual task.
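
For reference, such a buffer is just a handful of lines (a minimal sketch:
power-of-two size, single producer and single consumer):

    #include <stdint.h>

    #define BUF_SIZE 256  /* must be a power of two */

    struct ring {
        uint8_t  data[BUF_SIZE];
        uint32_t head;  /* next write position */
        uint32_t tail;  /* next read position */
    };

    static int ring_put(struct ring *r, uint8_t b)  /* e.g. called from the UART interrupt */
    {
        if (r->head - r->tail == BUF_SIZE)
            return -1;  /* full */
        r->data[r->head++ & (BUF_SIZE - 1)] = b;
        return 0;
    }

    static int ring_get(struct ring *r, uint8_t *b)  /* e.g. called from the read() path */
    {
        if (r->head == r->tail)
            return -1;  /* empty */
        *b = r->data[r->tail++ & (BUF_SIZE - 1)];
        return 0;
    }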

You know, when it comes to writing operating systems, I like to think of this
quote: "it's hard, it's harder than it looks, but it ain't THAT hard" (I
first heard it when the commentators were laughing at Chris Andersen
struggling to dunk in an NBA dunk contest).

If you're interested in writing an OS, make sure your primary source isn't
some blog post. My textbook was Tanenbaum's operating systems book, and it's
a very solid book that I can recommend. Tanenbaum knows operating systems.
He's not going to tell you how hard it is. He's going to explain all the
details to you, and when you understand them, you'll see how easy kernels
really are.

Good luck!

~~~
vidarh
> like it's some bug that takes 2 days is just lame.

Unless she was spending full time on it, it was not some bug that took 2
days.

At the same time: if you've never come across some trivial, stupid little bug
that left you stumped for far longer than it should have, you're either
superhuman (or have a selective memory), or you're a total beginner.

Your dismissive tone is really off-putting.

~~~
abshack
When I did Operating Systems at university (in a group), I distinctly
remember only 3 bugs that we had difficulty debugging:

* re-using a loop variable from the outer loop in an inner loop (damn you `int i`!) - sketched below

* putting a char[256] on the stack when the stack size was only 256 bytes and getting stack corruption (this one wasn't me, thankfully)

* signed vs unsigned values being passed between user mode and kernel mode causing problems (no idea why it was causing issues; I just made it all unsigned and the problem went away)
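
Here's that first one, since it's such a classic:

    #include <stdio.h>

    int main(void)
    {
        int i;
        for (i = 0; i < 3; i++) {        /* outer loop: meant to run 3 times... */
            for (i = 0; i < 5; i++)      /* ...but the inner loop reuses i */
                printf("cell %d\n", i);
            /* here i == 5, so the outer loop exits after a single pass */
        }
        return 0;
    }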

In hindsight, -Wall would probably have caught everything and saved a couple
of hours of "wtf" at 2-3 in the morning.

Often the simplest things are the hardest to discover.

