Another great article along these lines, for those interested in assembly, execution environments, and so on, is of course the famous muppetlabs exploration of building the smallest Linux executable:
The reality is that one great way to get to novel value is to apply your own take on something that people have done before; you may find a way to teachably express a concept better than others have before you.
But interestingly enough, the smallest (known) Win32 executable is actually still more than double the size of that Linux one:
...and neither of them does much more than exit with a defined value. I think for simplicity, nothing beats the DOS COM format: pure code, up to 64k, loaded in a single segment at offset 100h, entry point right at the beginning. A "Hello world" is on the order of 20 bytes. (Of which 7 bytes are actual code and the rest is the message.)
The sub-1k categories at Pouet (http://www.pouet.net/prodlist.php?type%5B0%5D=32b&type%5B1%5... ) are some of the more amusing things possible in a tiny executable.
It's also an odd ordering for that book to defer binary/hex and data representation until far beyond other more complex, higher-level material. The fact that the code listings have a "compiler-generated" feel to them, complete with redundant instructions, suggests that this isn't a great way to learn Asm; see https://news.ycombinator.com/item?id=8248172 for a more detailed explanation.
"Creating Really Teensy ELF Executables for Linux"
I applied to Hacker School solely on the basis of reading her blogs and being impressed with the work she was doing.
 Favourite: http://jvns.ca/blog/2013/10/16/day-11-how-does-gzip-work/
I used both when implementing ELF for my hobby kernel 
Wow. Seriously? I'm kinda surprised to hear it from you.
I'm not saying that article is bad; I actually believe it can be useful for somebody, because there are people who don't know what a computer program is even to this extent. I'm merely surprised that somebody presumably competent would call such a superficial article "deep".
No, they don't. Thinking that they do suggests to me that you're still in school (at least emotionally), because technology is vast, everybody specializes somehow, and not everything you crammed for in CS 100 remains memory resident and actionable once you're actually working.
I work in software security, where file formats are especially relevant; I write a new debugger an average of once a year. This was a good post. And file formats are in fact arcana to most working programmers, as I've learned by actually having conversations with working programmers that touched on how executable loaders work.
I guess the second through fourth years are spent sleeping, to make up for the sleep debt from that first year?
Anyway, CS students may very well be polymaths who shoot lightning from their fingertips, but those of us who spent our wayward college years solving the Schrödinger equation and eating pizza are happy to read a fun article about the executable formats that we've never really taken the time to look into.
It's perfectly natural that not everyone knows exactly how executable loaders work. The same goes for what you called "file formats". But that article isn't about that, because, as I said, all the interesting parts are skipped. All it reveals is that the computer executes certain files, and that they aren't arcane magic inside but can be read and decompiled (which isn't even always true, but the author doesn't mention that either, which is natural). Static and dynamic linking are terms everyone should know as well: even if you use only interpreted languages like Python, once in a while you find some library that requires manual compilation and run into all those nasty problems like mismatched library versions. Or you might notice that the same Qt app takes a lot more disk space compiled for Windows than it does compiled for Linux and ask yourself why. So this isn't something only specialists know; it's a basic fact about how computers work.
I don't know about today, but 15 years ago you couldn't be considered a programmer without knowing at least that much. It's surprising to hear it called deep.
60 years ago, you couldn't be a programmer without knowing the exact binary code of every opcode your machine executed, and how all of the peripherals worked at the lowest possible level.
Back then, of course, all programs were trivially small to fit in hilarious amounts of memory and mass storage, and graphical programming was a specialized topic, to put it mildly. Networking in the form we know it now flatly didn't exist.
I'm not convinced that the amount programmers know has changed, but the kind of knowledge surely has.
I don't know as much about the nitty gritty details as I'd like, but it's damn cool that I can now write 1 line of Ruby that fires off a query at a web server somewhere, gets a JSON reply, parses it into a Hash, and delivers it back to me, doing roughly a kazillion hugely complex things along the way. It lets us all spend a lot more time building things that are useful to customers instead of scrounging around with bits, fun as that can be sometimes.
1. basic data structures (variables, arrays, simple binary trees)
2. static control flow (sequential execution and control structures)
3. dynamic control flow (the call stack and how exception handling works (in C: setjmp()/longjmp()))
4. basic program structuring and hygiene (functions, named constants, picking good names)
Focusing on that instead of the details of machine-level knowledge is what separates CS from IT; we need both, so we should not try to make our CS programs bad copies of our IT programs.
It's nice when people come in to a class warm, as opposed to completely cold, but it's bad pedagogy to assume specialized knowledge beyond what's explicitly listed in the course prerequisites.
Based on the hundreds of programmers I've met with CS degrees, I would say that 90% of them have probably never even written C; most write Java, to which this still applies, I suppose. And of the remaining 10%, I doubt any had the curiosity to use a tool like readelf to understand symbols and static linking. Given that, I highly doubt most first-year CS students have any idea what this article is even about, much less care.
If everyone in the world shared your attitude about sharing knowledge, the world would have one smart person and a bunch of morons. Get off your pedestal.
As somebody who works on operating systems, I share your disappointment that it's not common knowledge, but it really is trivia relative to the sorts of things most working programmers do these days.
However, I do not share your disappointment that somebody had the nerve to write a perfectly fine article about something you already knew about. How dare they! How will you ever deal with the shame you feel on their behalf? These are truly tragic days we live in. :)