Hacker News
How to run a program without an operating system? (stackoverflow.com)
394 points by Cieplak 58 days ago | 99 comments

>Technically, a program that runs without an OS, is an OS.

It is not. A program that runs without an OS just doesn't need the OS provided services but instead manages everything itself.

A perfectly valid program might be:

    #include <avr/io.h>
    int main(void) {
        for (;;) {}  /* no OS services needed: the program owns the chip */
    }

An OS is so much more than an application, amen.

Just a simple Google search would help: "the low-level software that supports a computer's basic functions, such as scheduling tasks and controlling peripherals."

This poor author of the post. >.> And then to think criticism gets a downvote :'D.. must be some angry kid mob who just found out from some YouTube 1337hack0r where the MBR goes in memory..

The accepted answer was correct: "You place your binary code to a place where processor looks for after rebooting." That was a good answer to the question. The rest is just filler nonsense.

Why does that distinction matter here? Besides, many OSes in the past were more like libraries than active resource managers.

I don't think they have to be active, but an OS should in principle own hardware resources and merely grant access to other programs by some means. Just importing a library that directly accesses hardware wouldn't qualify as an OS.

There has to be some distinction between the subset of all possible programs that qualify as "operating systems", otherwise why have two terms?

Not sure if you still can, but you used to be able to find 4 hooks for primitives in early Smalltalk images. 1 for lifting the hard drive head. 1 for setting down the hard drive head. 1 for moving it out one cylinder. 1 for moving it in one cylinder.

On my dad's old Apple ][+ 48k, I used the same REPL to get the CATALOG of what was on the floppy disk and to program in BASIC. What's the line between OS and a REPL?

Your valid program is still a system that operates your hardware directly. So, still an OS?

An OS, by general definition, usually contains the ability to launch applications developed for it, using the high-level APIs the OS provides to greatly simplify low-level hardware functionality, such as playing sound or displaying images.

A program, by definition, is software executing a specific sequence of instructions, whether in a low-level, no-OS setup, such as an NES, or as a macOS program which makes extensive use of Core Image, Core Animation, Core Audio, et cetera.

These are both programs. But only one of them actually runs in an OS.

This is a "bare metal" program, not an operating system.

If it's managing resources itself, how is it not an OS?

What is art?

Programs manage resources. Operating systems explicitly manage resources for the benefit of other programs.

I think that is the key notion.


Handling resources isn't the definition of an OS.

Also asking "what is art?" is not the same as asking "what is an operating system?"

Art covers just about all human activity, the definition is extremely wide. I wouldn't say an OS has a definition that wide.

Providing services to other programs is indeed the key notion.

I learned to program on an SDS Sigma 5. [1] The machine booted by reading the first card in the reader and executing the instructions punched on that card.

The card could be a boot loader to start the BTM (Batch Timesharing Monitor) operating system from disk. Or the card could be a little program all of its own.

A particularly useful one-card program was a card deck printer. You would put it at the front of the deck and boot the machine. This card had a simple loop to read the next card into memory, then print that line on the printer, and repeat until the card reader was empty. (One of my first optimization adventures was modifying the code to be double-buffered: read the next card into memory while the previous card is printing. Twice as fast!)

That card certainly handled resources, but no one called it an OS. It didn't manage and provide services to other programs, it was just a one card program.

Another popular one card program was the bird chirp card. It pulsed the front panel speaker to make it sound like birds were chirping. We didn't think of that as an operating system either. :-)

[1] https://en.wikipedia.org/wiki/SDS_Sigma_series

Early operating systems only had one program run at a time.

What if that one program is a Virtual Machine interpreting bytecodes for a language like Lisp or Smalltalk? From one point of view, only one program is running on the hardware, however many Smalltalk 'Processes' (really, cooperative green threads) could be running simultaneously. This was once the case for Smalltalk. It may have been the case for Lisp machines.

>> Handling resources isn't the definition of an OS.

Well, it certainly is part of the definition of a useful OS. A car doesn't have to be able to turn right; it is still a car, just not a useful one.

Operating system program starts user program.

User program does not start operating system program.

Job done.

VirtualBox is a user program that starts operating system programs.

oooo. Good point. My definition has been corrupted :)

Running programs without an OS used to be the norm not the exception. An OS has at minimum the ability to load and run other programs, additionally it may provide many other functions such as process separation, filesystem access, multi-tasking, device drivers, APIs, and so forth. If you look at the progression of operating systems over time you can see them becoming more and more sophisticated. Back in the early '90s, for example, if you were playing a DOS based video game each game would have to have its own low level drivers for things like video and audio cards. Today the OS handles that and the games just use an API (like DirectX or OpenGL).

I think APIs, and the ability to run different applications, usually within the OS, are the major distinction here.

I think it boils down to the concept that an OS should serve as a middle layer of abstraction between a program and hardware, and provide some sort of meaningful way to make that stream of communication simpler and more usable for humans.

If you ever programmed a ROM for a Gameboy or something, this should look familiar. The BIOS of those old systems just jumps right into a certain address, and all you have to do is make sure your linker is set up properly to put crt0.s there.

The Gameboy Advance in particular is pretty fun and simple, I recommend it.

Something neat about the original Gameboy is how its firmware actually works. As a developer, the entire region from $0000-$3FFF is mapped to the cartridge, and the entrypoint is at $0100.

When the CPU boots up, there is a special temporary mapping for $0000-$0100 to the firmware. The firmware largely just validates the ROM header and displays the Nintendo logo. But my favorite part is this: it executes all the way to $0100. There is no jump to $0100 - the last two bytes are an instruction that simply removes the firmware mapping. Now the IP lands on $0100 and everything is mapped as expected.

This simple mechanism made it notoriously difficult to extract the Gameboy BIOS. I recall reading that one way it was eventually extracted was via fault injection: by messing with the clock very precisely, you could get the processor to 'skip' the instruction that unmapped the firmware, thus allowing you to dump it.

Of course, another way you could read the firmware is by "simply" decapping the processor, taking shots of it with a powerful microscope, and then optically extracting the 'bits.' I think this was also done, but as a person with little understanding of how decapping is actually done, it seems like an incredibly expensive, error-prone operation.

Where should one go to get started with GBA dev work these days?

I've found this to be a good resource: https://www.coranac.com/tonc/text/

For any "How do I...X?" question involving connecting two pieces of software together in a different way, the first thing I try to do is look at the API(s) that connects them. In the case of a program and the OS, it's the system calls, device drivers, and standard libraries.

There have been efforts to provide the capability of running a program without an OS before, but any such effort is going to need to provide the system calls and standard libraries used by the program, and the infrastructure to support it (device drivers, management, etc.). At that point it becomes a mini-OS.

An example is Erlang on Xen. Xen is more often run with a guest OS running inside it, and then the program runs within that guest OS. The http://erlangonxen.org/ folks made Ling (https://github.com/cloudozer/ling), software that enables an Erlang BEAM VM to be run directly on Xen, and thereby run a single Erlang program on Ling.

Saw a talk on IncludeOS, a C++ library that allows one to do this: http://www.includeos.org

A similar effect can be achieved with Newlib, which is mentioned on the post, and exemplified at: https://github.com/cirosantilli/linux-kernel-module-cheat/tr... You might need to implement some syscalls though. Or similarly you can also use an open source embedded system like FreeRTOS or Zephyr which essentially implement the syscalls for you.

IncludeOS is modeled much like FreeRTOS or Zephyr. The difference is mainly that it addresses CPUs and not microcontrollers as the primary platform. With paging, you have better security, typically.

Along those lines, a bit, I have always wanted to play with targeting Linux but making one's program the init process. You get all the resources, plus the benefit of Linux's ability to run on a lot of platforms. Not microcontrollers, though ARM, and that's a pretty large space.

That was a spectacular SO answer btw. Thanks for writing it!

People forget that an operating system is, itself, just a program.

It is usually more than one program.

* https://superuser.com/a/329479/38062

For another take on the subject, see "This OS is a bootloader" in PoC||GTFO 0x4:


There's a sequel in 0x5 that explains how to add a basic form of multiprocessing support.

I have also covered bare-metal multicore at: https://stackoverflow.com/questions/980999/what-does-multico... The example is in the same repo: https://github.com/cirosantilli/x86-bare-metal-examples/blob...

The Osdev wiki is a great resource on this type of thing https://wiki.osdev.org/Expanded_Main_Page

I only wish they would create a repo with all their examples like I did :-)

Not directly related to the topic, but I wrote a simple puzzle game that emulates a computer which executes programs directly in the memory and you can modify memory to rewrite running programs. Here's a web-based demo if anyone wants to check it out:


A lot of people here seem to be questioning the usefulness of this, why someone would want to do this, et cetera.

Embedded systems are one of the most extremely relevant uses of this today.

However, historically, I think my first case of programming without an OS would be ASM development for the Sega Genesis?

Another addition: I saw a comment saying 'a program that runs without an OS, is an OS.'

This is absolutely not true. An operating system is a multi-faceted, complex piece of software, usually with the capacity to run other pieces of software within it, as well as usually offering high-level APIs to improve ease of use for developers accessing common functionality, such as playing sound or displaying images.

A ROM for the NES, while a program, is certainly not an OS, and to be very clear, the NES does not have an OS, like I believe the PS4 and XB1 have.

It contains a certain number of hardware features that can be accessed and manipulated by low-level software functions.

Now try to do it on a smartphone.

I really wish it were possible to do this kind of thing on smartphones. Unfortunately, odds are that anything you try will quickly brick it, and the best-case scenario is that you can restore it using something like the low-level Broadcom interface.

Would be interesting to hear discussion on why one might consider doing this.

My take is that it's most likely just being asked for the theoretical details, and few people would practically benefit, but does anyone think this would be practically useful?

Presumably you'd potentially cut overhead from anything you didn't explicitly want happening on the host, but as soon as I start to think about most of the uses I'd typically have, the advantages quickly sink below the disadvantages of having to actually manage everything. It's reminiscent of that person who set out to build a toaster from scratch.

>does anyone think this would be practically useful?

Most embedded software projects I worked on did not have an operating system. You'd just have C or C++ program and some ASM to initialize everything.

I would expect really small IoT devices would be the same. With your average single chip device I wouldn't bother trying to get an operating system on there.

I have only just started to use a Raspberry Pi and it does have an operating system - the advantages there are you can pull in IP networking and run services and do much more complex things than on the bare metal.

One use case I can think of is heavy real time audio processing.

In such an arrangement every millisecond of latency counts.

If you want to understand and improve the Linux / some other OSs boot process.

It’s good for learning, and firmware/embedded systems engineering exists too

Because it's fun?

> printf '\364%509s\125\252' > main.img

It appears that bash `printf` wraps coreutils printf "with ARGUMENTs converted to proper type first," according to the manual. I guess that's how the "%509s" specifier is able to work without any args.

So how does bash `printf` actually determine the proper arg types? Is coreutils printf implemented with hooks that let wrappers fetch the proper arg types without generating an output string? Or does bash `printf` implement its own separate parser for the format specifier?

> It appears that bash `printf` wraps coreutils printf "with ARGUMENTs converted to proper type first," according to the manual. I guess that's how the "%509s" specifier is able to work without any args.

Actually, coreutils printf follows POSIX/SUS which states that "Any extra b, c, or s conversion specifiers shall be evaluated as if a null string argument were supplied".[1] So that trick will work for many printf(1) implementations (including both bash and coreutils).

Bash printf(1) doesn't have any hooks or wrappers, but it does parse the format string to determine how to convert arguments before calling libc vsnprintf(3). If you are interested in the details, check out how %d/%i is handled in bash[2] and coreutils[3].

1. http://pubs.opengroup.org/onlinepubs/9699919799/utilities/pr... (item 9 under Extended Description)

2. https://git.savannah.gnu.org/cgit/bash.git/tree/builtins/pri...

3. https://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=blob;...
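
That POSIX rule is easy to verify: with no argument, `%509s` is evaluated as an empty string padded to width 509, so the one-liner emits exactly 512 bytes (1 + 509 + 2), a sector-sized image with `hlt` (0xf4) first and the 0x55 0xaa boot signature last:

```shell
# 1 byte (\364) + 509 padding spaces (%509s, no argument)
# + 2 bytes (\125\252) = one 512-byte sector
printf '\364%509s\125\252' | wc -c   # prints 512
```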

Thanks for the links, those are really helpful!

Why didn't libc include a printf function that expands a format specifier with a pointer to a byte array (and/or possibly an array of c strings)? That way there would at least be a standard way of handling arbitrary input and erroring out with less risk of buffer overflows and stuff.

I don’t think bash’s `printf` “wraps” coreutils; pretty sure it just implements the entire call itself.

Very nice post. However, I thought it would be about writing programs that could be executed from the OS, but where the OS would be completely unaware that such a program is running. Not sure if that is even possible.

I know it’s possible to not use libc or similar, but the OS is still aware of such a program.

A bit of thought about what it means to "run" gives you the answer here: in order for the code to run, it has to be scheduled at some point. That means it has to be in the operating system's task list somewhere.

Having said that, there are places you can put running code that get scheduled but don't appear in an obvious way: inside other processes, as device drivers, as code on peripherals (GPU etc), or on the notorious management engine. Variations of these methods are used by viruses and exploits: the core of any exploit is getting code to run inside some other program that did not originally contain it.

The final place is to put it "outside" the operating system as a hypervisor, then run the "normal" operating system as a seamless virtual machine inside the hypervisor.

Kind of. Check out IBM's IsoStack. It takes CPU cores which are not used by the main OS (you explicitly disable them) and essentially runs a parallel operating system which only handles the network cards / network stack. The main OS cooperates with it, but otherwise it's not really aware of it.

I remember hearing that the PlayStation emulator bleem! did something like this for threading/timing purposes, essentially running its own microkernel and hooking interrupts to bypass the Windows scheduler. However, this supposedly worked by basically exploiting a bug in the way Windows 9x set up task switching and memory protection, and so didn't work on any NT-based Windows (or Wine, for that matter). Also, it obviously didn't try to hide the main program that managed the window and other I/O operations.

Malware often has something akin to this as a goal, but as far as I know it usually works by hiding itself from the user rather than the kernel.

It is not possible from userland, except for extremely serious zero days, due to hardware protections which OSes use, see: https://stackoverflow.com/questions/18717016/what-are-ring-0...

In some way, this is what TSR (terminate stay resident) programs did under DOS. They were started by the OS, exited, but some of the code was still around as part of IRQ handlers and such. DOS didn't really know about that.

Yes, I had exactly this on my mind. DOS era.

Stack Overflow is for very specific questions with very specific answers.

You'll get better results for more general questions from a reddit forum.

There are some fairly decent subs on Reddit but, generally, I find it mostly populated by 15-year-old kids and others who have never had a job in the industry, so, no, I would not refer anyone to Reddit for information.

How can you tell their age?

I've always thought if a forum like this had a little age indicator (or experience indicator, or something like 'this person is smart') it would be a lot more useful.

It's strange that I can tell by the way they write. I'm not always correct but many times I'd find someone writing as if they were an expert but you could just tell they were only regurgitating something they read. They couldn't make a proper connection between things they were saying. I'd trick them in a way to reveal their age and way too often it was 15 to 17 years old. Sometimes 18 or 19.

You can also read them saying they wanted to take CS when they graduated from school so you can glean from that they are probably in high school.

Or I just flat out ask them (or accuse them).

> many times I'd find someone writing as if they were an expert but you could just tell they were only regurgitating something they read.

that's 90% of HN too though.

What annoys me is that Reddit threads get locked after 6 months. Then people ask another one. And another one. I want to fix that with decent algorithms one day: https://github.com/cirosantilli/write-free-science-books-to-...

I think a better place was the WikiWikiWeb (aka C2 Wiki), the original wiki created by Ward Cunningham. Unfortunately it kinda died and was then converted into a read-only site to avoid abuse. Maybe we could create a successor?

I always feel these questions are answered to the best effort of a person just starting out in the field of OS / bare-metal programming. There is a whole plethora of information missing in this post, crucial to x86/64 early development, which will get new programmers of this type into trouble if they don't become aware of it.

I'd be careful following such advice. For example, this person isn't aware that you really need to build a cross toolchain for your OS to compile for your target. On 64-bit there are some more things to take into account about compilers (red zone?). Even if you go all assembly, your assembler will probably be an optimising one and shoot you in the back on more than one occasion! That's just the toolchain surface issues (deeper ones ignored... like file formats..? Flat binary is fun, but good luck scaling that up to a complex project... the BIOS can't load you up that high in memory, and only in limited amounts...)

Next you get to things omitted which on bare metal would cause problems, like no mention of power management and such topics. Or even the assumption that your code is loaded at 0x0000:0x7c00, while it could be loaded at that linear address but via a different CS value, completely screwing you over if you don't fix it up immediately in your MBR...

Take the example of OpenGL tutorials. People tell you to draw your first polygon because they just managed to do so. They forget to mention throttling the loop, so you burn out your video card rendering one triangle at 10000000000 fps... good job!

These tutorials are kind of like that. Fun to put into QEMU, which is sort of 'safe' learning, and to get acquainted, but it really doesn't do what it says on the box.

I'm not against these types of tutorials, and I like that more people become interested in this, but really take some time and state the scope of the offered information properly. As it is, this is a bit of clickbait / misinformation, which is a shame.

Prompted a fascinating answer. 'Closed as too broad' -- modern Stack Overflow in a nutshell.

Yeah, I hate that. I've asked a few questions on those sites and came back later, excited to read what answers I got and...it had been closed immediately, before anyone could answer it. GRRRR!

Maybe there could/should be a 'broad' SE site alongside every 'normal' one? The 'too broad' questions include most of the interesting ones to read. The ones that are 'intellectually interesting'.[0] It's a shame. I love how you can ask anything on HN without being shut down like that. It's so unfriendly, hurtful even. (At worst on here, no-one answers - no harm done.) Why they can't just let 'too broad' questions go for a day or two to see what responses come in, and delete the page if it sux, I don't know. The mods get points for closing things, I guess. Often they don't understand the question, it seems.

[0] e.g. https://mathoverflow.net/questions/43690/whats-a-mathematici...

Well, bizarrely the close notice on that says "This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet." Which seems absurd. (Although sure, the question doesn't help people, the answers do though.)

Some neat questions are too broad.

But the too-broad questions aren't usually very interesting. Most of them are just pure crap. If you have enough SO karma, try https://stackoverflow.com/review/close/?filter-closereason=t... — what I see just now is someone who wants help with his java/spring thing and isn't very detailed about it, someone who's "been trying to learn how delegation" works, someone who "would like to know how can i mock diffrent user request behavior pattern" and those aren't unusual. The intellectually interesting questions are unusual in the "too broad" morass.

If people are to moderate SO, the rules have to be simple and quick to apply. Of course people don't have to do that, but I fear that without moderation, the interesting questions would get no answers because they would be completely flooded by questions like those three. So the open-ended, intellectually interesting questions lose either way.

Well, yeah, I assume most non-broad questions are pure crap too. But they don't close all of the non-broad questions. (Or even a lot of the awful ones - I have tried to answer a few horrifyingly badly written questions on Math SE, that no one else dared to go near, which weren't closed for months, for some reason.) So I can't see how that's so relevant.

So the answer is, There Is No Alternative? There's no possible way things could be different in that respect?

The close reasons ("too broad" and the others) are good descriptions of the typical bad questions. Anything which doesn't fit into any of the dozen-odd close reasons is not typically a bad question.

This summer I waited on a lot of six- and even twelve-minute batch jobs, and I spent some of those on the SO moderation queue (I'm one of the 3000-karma users, not an elected moderator). It didn't take much more than six minutes to learn that picking the interesting questions out of the flood of manure is a terrible chore.

It would be nice if the intellectually interesting questions could be left alive. But I don't think it's humanly achievable. People will post their job interview quizzes and first-year CS exercises and say, oh, mentally stimulating and please do leave open.

Also, what made this interesting wasn't the question but that the question won the lottery of someone contributing substantial time to craft a great answer.

I don't think you could've done a better job than linking that moderation queue of "toobroad" to make your point. It was a never-ending channel of straight shit. Ranging from "here's a copy and paste of all my unformatted code that won't work, whatever that means" to "how do i use python on windows".

Everything looks easy to someone that has no stake in the solution. We get so used to being end-users that we take everything for granted, as if our experience is the natural ordering of the universe, rather than the result of all the work that goes on behind the scenes.

Read enough posts from HNers condemning Stack Overflow and you'd think they believe Reddit/HN's "new" queue is the ultimate browsing experience.

> The mods get points for closing things, I guess. Often they don't understand the question, it seems

FWIW, it's not moderators closing a question (I rarely see that). You "only" need 3000 points to vote to close a question. Users with a gold badge on a tag the question has can single-handedly close a question.

Users get no incentive for closing questions.

Also, those 3000 points also give you the possibility to vote for reopening a question. And there's a queue where users can see questions that are being voted on for reopening.

One quasi-workaround for broader questions is to use the Software Engineering SE site. It's not a panacea (and in my opinion shouldn't be necessary), but questions like the OP should theoretically be better-received there.


Thanks, will try that.

I think maybe there is a common misunderstanding that Stack Overflow is a forum for discussing software dev. It's not, it's a programming Q and A site and it specifically spells out what that means. Questions have to be fairly specific either to a programming language, tool (make/git), library, or help figuring out a bug. Open discussion of broad CS topics is not what it's trying to be.

I agree I'd like such a site. I don't think any Stack Exchange-based site will ever be such a site. I think TPTB believe it's nearly impossible to run such a site and not have it turn into crap, because of the large number of people asking bad questions of the "How do I make a website like Facebook" or "How do I make a game like GTA5" sort.

Sure, thanks. I don't only mean 'open discussion of broad topics'; some of the most valuable pages to me have been 'What are the best books / favourite books in field X', and the SE format is great for that, with the best books upvoted to the top of the page... but even those are closed (as being opinion-based). Again, are there really no questions that are non-broad yet total crap? All the answers here have been like "but broad questions are mostly crap, so that would be impossible." Ah yes, you said "turn to crap" - often I feel the problem is not with a particular question, but with what is predicted to happen if questions of that type were allowed, a kind of 'slippery slope' argument. Has that actually ever been tried, or is it an untested suspicion that it will turn to crap?

I agree with you that "best books" can be good questions. They could also lead to point farming: you want your book or library to appear at the top of the list, so you hire a mechanical turk to upvote your project to the top. I can certainly see "best of ABC" turning into a shouting match if people passionately disagree about why one sucks and the other doesn't, so I can accept the idea that "best books on X" questions are often more problematic than they seem at first glance. "Best stack to clone FB functionality", "best game engine", etc. both seem like they'd turn to crap. In fact you can already see this on stackshare.io, where for most products it's basically a popularity contest, not a merit contest. "Best text editor."

Because Stack Overflow is for asking a specific question that can generate a specific solution or answer. This is spelled out in "The tour" and the Help Center which no one ever reads. Then they complain when their question gets closed as if they were never told--even those who are told, don't read the link they're given to the above topics, and continue to complain anyway.

I think Stack Overflow has reached a point where most of its regular power users would be happier if there were no more new questions. Then they could dedicate their time to putting the correct tags on things, closing and deleting the remaining backlog until there's nothing left. :-)

I have over 10k points there, so this is said (mostly) in jest. There is a lot of crap being pumped into SO all the time. This kind of defensive closing of an interesting but off-topicy question is the only real tool people have over there. If they had let this question go unclosed, it could encourage other less-interesting open-ended questions and before you know it we'd be seeing "best kind of Pizza topping for programmers?" questions.

I wish there was a guide for "proper" SO questions; my impression is, the only ones are super-detailed questions about some super-particular APIs.

Also interesting how this disconnect is still going on after so many years, mostly unaddressed - despite the many essays Jeff Atwood writes about true purpose of SO, people still want to use it differently.

There is a guide here:


> I wish there was a guide for "proper" SO questions; my impression is, the only ones are super-detailed questions about some super-particular APIs.

And those questions can in most cases be answered by the docs anyway. The times I've used SO is when the problem can't be answered by the docs, my own experience or that of the team, but then I've received either no answer or a series of non-answers (that should have been comments instead, but a lot of people can't comment).

I also see quite a bit of overt reputation farming there. Like vague questions that receive an improbably precise answer that immediately gets selected. Like "how do u process payments?" (the orthography is usually bad) immediately followed by very specific instructions from a very specific payment processor not mentioned in the question, probably taken straight from its documentation. And I rarely see "closed as too broad" on those, probably because of the quick turn-around.

One day, I will create a website with decent ranking algorithms to destroy stack overflow: https://github.com/cirosantilli/write-free-science-books-to-...

>One day

so never then?

* maybe never :-)

But I said the same thing about creating a bare metal example one day hehe

SO doesn't use a ranking algorithm.

It always reminds me of that really weird period of wikipedia in which four out of five articles you looked up had been shut down as not notable enough (fortunately they have moved past it).

It's probably not wrong of the stackoverflow users to have closed the question according to established rules, I just don't think this is necessarily the right way to run the site.

This is wrong, you'd still have an operating system: the BIOS.

The author does cover this: "It can be argued that firmwares are indistinguishable from OSes, and that firmware is the only "true" bare metal programming one can do."

Maybe it's just my limited knowledge on the subject, but it seems to me it's a bit too much to consider the BIOS an operating system. At that point, what isn't an operating system? You're still writing code against a microcode interpreter even once you remove the BIOS. Is that an OS too?

I would agree. The BIOS/UEFI POSTs the system, and then passes execution to another program (usually a bootloader). At that point, the BIOS is no longer performing the tasks that an OS does. It's not scheduling tasks for the processor or handling I/O at all. Instead, it switches to what's basically an ACPI interface with functions that can be called by the OS. It's like an old TSR program.

The BIOS sets up SMM handlers which do run. Also there is ACPI, which runs partially outside of the OS's control...

BIOS is an abbreviation for "Basic Input/Output System". "System" is in the name.

BIOSes offer system calls that tend to be backward compatible, so it's an OS of sorts.

UEFI is the successor to BIOS on modern machines, and it stands for "Unified Extensible Firmware Interface". Is it also an operating system (it doesn't have "system" in the name)?

(I don't disagree with you -- UEFI provides BootServices which arguably are more numerous than the number of syscalls on most operating systems. Just that "system" being in the name isn't really the best argument. :P)

It was thought to be like that - back in the day when everything used interrupts for syscalls, hence 'system' was mentioned. (Apple II had fixed address calls, though - no interrupts)

It's quite the same as DOS being 'disk operating system' - it was able to drive the floppy and provide a file system.

Let's not talk about ACPI :D ... or System Management Mode, which I think you are referring to as BIOS, since the BIOS really hands off control but has set up SMM to do stuff outside of the OS.
