"If a programmer writes software for a living, he should better be specialized in one or two problem domains outside of software if he does not want his job taken by domain experts who learn programming in their spare time."
It may seem a bizarre sentiment, but after reading this sentence I really want to donate some money to the guy. If he provides a way, I surely will.
I think what he's saying is that for most people (maybe all? I have no empirical evidence to back this up), programming, and computer software in general, is a means of solving a problem.
Operating systems solve the problem of needing a foundation on which to develop higher-level tools that enable computing. Whether the higher-level problems are complex astrophysics calculations or hailing a ride, as you travel down the stack, each piece of software is solving some problem domain.
The point of this sentence is that, as you move forward in your career, you (usually?) develop knowledge of an industry, a customer base, a constituency, or some other problem domain, and you combine your engineering skill with your knowledge of that domain.
If you don't do this, and you say "I can do anything, but I have no special knowledge of any problem," then it's more likely that a person with both problem-specific knowledge and engineering knowledge will be given the job.
However, there are so many problem domains in the world of computing, it's very easy to pick one or two that interest you personally.
Those best suited to compete in that scenario are going to have deep experience in some area where programming is an applied skill, not an end unto itself.
I see lots of jobs being done by non-programmers that would be done much, much better had that person been a programmer. The future will undoubtedly be full of people in traditionally non-programming jobs with programming skills. But we should not trivialize the systems that allow this to happen. While most programmers work very high in the stack, there are many deep layers below that will require an army of programmers to keep going and keep innovating so that more productivity can be had at the higher layers.
As an example, I feel people often trivialize one- or two-line programs, or short examples that seem to do so much in so few terms. But even those simple examples are sometimes backed by tens of thousands of lines of code. A wave of a hand and a barked spoken command to a computer will -- at least with today's software and hardware -- never end up producing something new and innovative that will be widely used.
To get where you are suggesting, many things have to change, and it's not all software. If you take even a cursory glance at what it takes today to bootstrap an operating system, you will see how complicated things get. For the future you are envisioning, I feel hardware will have to evolve along with software to make things much, much more modular than they are. More standardised terms to describe things, and more finite building blocks, will need to be made. In fact, the closest thing to your future vision I can find in today's technology is FPGAs, and it is no trivial task to do anything useful with a hardware description language.
tl;dr We have not solved computers or software; therefore, an army of programmers will be required for the foreseeable future.
If you've ever read Vinge's _A Deepness in the Sky_, recall that the protagonist's occupation for a period of time was that of a code archaeologist, attempting to collect, categorize, and understand code that was centuries (or was it millennia?) old... We may be a ways off from that, but I don't think it's far from the truth of the future.
But with that being said, people from other professions could still learn how to program and do it well.
Maybe you think of a programmer as a person who just types in a program designed by someone else, a developer, in a specific language with a specific syntax?
It would be foolish to think that the same will never happen to the vast majority of our jobs.
The number of people who can put together effective, performant and reasonably correct systems is not that large.
As for the sentence: sadly, that is the trend.
I would also like to recommend another free resource that might be a good complement (theory vs. implementation) to this:
"Operating Systems: Three Easy Pieces"
available online at:
The book teaches not only x86, but also how to use the official resources from the hardware manufacturer to write the OS. In short, when a reader reaches part 3, on writing the OS, he will need to use the official documentation, in this case Intel's "System Programming Guide" manual, to write C code that complies with it. Once he has learned how to do so, learning other platforms will be much easier, given how complex x86 is.
You still have to open the A20 gate in the bootloader if you want to access a memory address with bit 20 (counting from 0) set to 1 (you probably do want that), even if you switch to protected mode. The only exception is if you boot from UEFI instead of BIOS; in that case the A20 gate is already enabled. But the book uses BIOS, as far as I can see.
"In protected mode, the IA-32 architecture provides a normal physical address space of 4 GBytes (2 32 bytes). This
is the address space that the processor can address on its address bus. This address space is flat (unsegmented),
with addresses ranging continuously from 0 to FFFFFFFFH. This physical address space can be mapped to read-
write memory, read-only memory, and memory mapped I/O. The memory mapping facilities described in this
chapter can be used to divide this physical memory up into segments and/or pages."
This correlates with my experience of developing in protected mode in QEMU. Once I entered protected mode, I could access any address above 0x10000 without it being wrapped around. When I was writing my first kernel (https://github.com/tuhdo/os-study) in real mode, A20 indeed had to be enabled.
To nitpick on this a little bit (consider it a polite supplement):
In the Intel® 64 and IA-32 Architectures Software Developer's Manual, Volume 3 (3A, 3B, 3C & 3D): System Programming Guide, in the section "External Signal Compatibility", one can read (emphasis mine):
"A20M# pin — On an IA-32 processor, the A20M# pin is typically provided for compatibility with the Intel 286
processor. Asserting this pin causes bit 20 of the physical address to be masked (forced to zero) for all external
bus memory accesses. Processors supporting Intel Hyper-Threading Technology provide one A20M# pin, which
affects the operation of both logical processors within the physical processor.
The functionality of A20M# is used primarily by older operating systems and not used by modern operating
systems. On newer Intel 64 processors, A20M# may be absent."
TLDR: The accepted spec is that A20M# might not exist.
From the title, I had mistakenly assumed it was about the first OS ever.
The first OS was OS/360 at IBM in 1964.
Examples of earlier operating systems include the SHARE Operating System (SOS), first version in 1959, for the IBM 709 mainframe, which in turn was based on an earlier operating system, GM-NAA I/O, first version in 1956.
A very notable pre-OS/360 operating system was MIT's CTSS operating system, the first timesharing OS, first version in 1961 for the IBM 7094. The first versions of OS/360, by contrast, notably lacked timesharing – that was initially provided by alternative IBM operating systems (TSS/360 was the official answer and CP/CMS, which later became VM/CMS, the principal unofficial one); OS/360 only gained timesharing support itself when TSO was released in 1971.
The problem is that the guide is out of date in terms of toolchain, and you need to figure out many things by yourself, especially if you want to develop on Linux. My book helps you understand how to learn and write x86 code with the Intel manuals (this is really important!), and how to craft a custom ELF binary that is debuggable on bare metal, which in turn requires you to understand a bit of how a debugger works.
Once you get gdb working, it is much easier to learn how to write an operating system.
There are many topics that other OSes like Windows and Solaris handle differently, and discussing them even a little would be beneficial, but I haven't seen any trace of that.
Searching for Windows or Solaris turns up nothing, and searching for Unix shows a single page about Unix signals.
It does not focus on the concrete implementation of an OS though.
No, he (Robert Szeleney, http://www.skyos.org/?q=node/411) doesn't. He (together with Heiko Hufnagl) founded a studio for creating software and games for mobile devices:
Surely: Field Programmable Gate Array (FPGA) ?
You don't want people judging this excellent book based on a few language quirks here and there.
I used LyX because it enabled me to focus on the content without all the markup. Writing LaTeX in Emacs can reduce the distraction, but not enough; I just wanted to focus on the content at the time. Learning LaTeX is difficult enough; learning how to use the major mode at the same time doubles the difficulty.
Obviously, I still use Emacs daily for writing code and other things. Just not for writing a book.
If this was intentional - well played. If not - thanks for the unintentional demonstration of Muphry's Law :)
[ten seconds later] Now scanning my own post really carefully for misspellings and grammar errors ;)
From just skimming through a few chapters I can completely understand what the author is saying but the grammar is jarring and pulls me out of the book.
Hopefully the author uploads the LaTeX source files to the repo soon. Grammar fixes are pretty trivial.