Nice. I did an EE degree many years ago even though I wanted to work in software. The single best course I did was to design, simulate and have fabricated a 4 bit microprocessor. It completely solidified my understanding of how computations took place. Hopefully, texts like this will do the same for others.
In a similar vein, I highly recommend the MOOC "nand2tetris" which progresses smoothly from simple logic circuits right up to a high level Java-like programming language.
I did an EE/CS double major a LONG time ago. Senior year included a real sonuvabitch VLSI course, where in Q1 we had to design a 16- or 32-bit CPU. Over Thanksgiving break, TI manufactured them for us. In Q2, we built a computer. In Q3, we made it bootable (OS) and wrote a C compiler. I was interviewing for jobs in Japan (school was in Terre Haute, so travel overhead!) and I thought I was going to die. But it was the best thing that ever happened to me...
I love this kind of book! In a much more general vein, there is "The Knowledge: How To Rebuild Our World After An Apocalypse" by Lewis Dartnell. It explains the first principles of a lot of things we take for granted in the modern world, from agriculture to food and clothing, medicine, chemistry and more.
This book about CS principles is a great complement to that!
As I posted the last time this book came up (https://news.ycombinator.com/item?id=22294655), that book would be more accurately called The Misinformation. If you follow the instructions in it, you will die. Better alternatives are listed in my linked comment.
Agreed. I read through this on a recent vacation, and even though I have a computer engineering degree, it was nice to go through the basics again, and it has some fun exercises.
It's wonderfully relevant because it takes you through the basics and those are all still the same. Their simulator software was already old when I went through the book a few years ago, but I'm sure it still works fine.
I recommend Code: The Hidden Language of Computer Hardware and Software by Charles Petzold [1]. It is far more comprehensive than the OP, going from pre-computer code, to electrical circuits, to an overview of assembly. No prior knowledge is needed except how to read.
The amazing thing about Code is how it traces the connection of formal logic (in the Aristotelian sense) to the, as you say, pre-computer code of braille and even flag signals to form the foundations of modern computing.
I am a self-taught developer and probably had 10 years' experience in web development when I first read Code. I would have these little moments of revelation where my mind would get ahead of the narrative of the text, because I was working backwards from my higher-level understanding to Petzold's lower-level descriptions. I think of this book fairly often when reading technical documentation or articles.
I recently listened to Jim Keller relate engineering and design to following recipes in cooking [1]. Most people just execute stacks of recipes in their day-to-day life and they can be very good at that and the results of what they make can be very good. But to be an expert at cooking you need to achieve a deeper understanding of what is food and how food works (say, on a physics or thermodynamic level). I am very much a programming recipe executor but reading Code I got to touch some elements of expertise, which was rewarding.
Code's good, but it doesn't cover Kleisli categories and Kleisli composition, Peano arithmetic, parametric polymorphism, sum types, pattern matching, or any of numerous other things covered in Maguire's How These Things Work. So it's not accurate to say Code is "far more comprehensive"; Code mentions FORTRAN, ALGOL, COBOL, PL/1, and BASIC, but the example programs it contains are written in assembly, ALGOL, and BASIC. It doesn't contain any programs you can actually run except for a three-line BASIC program and some even simpler assembly programs.
You are correct, Code does not contain all of the knowledge relevant to computer science. In fact, no book does, as far as I'm aware. But it is far more comprehensive than the OP because it covers a greater breadth of subjects and in greater depth with more accessibility. You're comparing 50 pages of blog posts to 300 pages of book.
It's very depressing that I've been a developer for 8 years and I've never heard of any of the terms you mentioned. I'm self-taught, but I've always felt like I should go back and really learn the fundamentals.
I'm not saying you shouldn't always strive to learn new things (for your own personal growth and curiosity), but I think it's important to point out that the link between being a developer and knowing about these things -- esoteric topics of applied mathematics -- is pretty weak.
Imagine a carpenter spending their time getting a chemistry degree in order to better understand how wood glue works.
I don't think so. Understanding what goes on underneath the hood is really what differentiates decent coders from great engineers. Compare the data structures of Subversion to those of git. Or look at some of the work John Carmack did in video games. That requires depth.
If your goal is to be a carpenter who puts together housing frames, you absolutely don't need depth. You're also interchangeable and get paid union blue collar wages. On the other hand, if you want to be a craftsman who invents new wooden things, you need depth in some direction, be that structural engineering, artistic, or otherwise.
There's a ceiling you hit unless you learn much more of this stuff. The direction is your choice (but new APIs ain't it -- we're talking depth).
What I actually want to say is that the OP shouldn't feel guilty about not knowing those things. It's fine to want to master them, if that's what you want, but it's pointless to feel bad about not knowing them.
Of course there is no necessity for excellence. The only necessary thing about human life is death; everything else is optional. Before your death, you can cultivate excellence in yourself, or not — many people instead cultivate hatred, addiction, or greed. There are many ways to cultivate excellence; learning is only one of them, and there are many things to learn. Mathematics, and in particular logic (which is what we are talking about here) are the foundation of all objective knowledge, but objective knowledge is not the only kind that has value.
The true philosopher, motivated by love for the truth rather than pride, is so noble in spirit that when she sees evidence that she may be in error, she immediately investigates it rather than turning away; and if she discovers that the evidence is valid, she immediately changes her position. I see such nobility so routinely among mathematicians and logicians that it is noteworthy in the rare cases where it is absent. I see it rarely outside of that field; in some fields, like psychology and theology, I do not see it at all. So I conclude — tentatively — that excellence in mathematics and logic promotes humility and nobility of spirit, which is the highest and most praiseworthy kind of excellence.
So, while I do not think the OP should feel guilty about not knowing those things, I also do not agree with the implication that there is nothing praiseworthy about knowing them.
Well, I agree with you. I think that pursuing our interests in mathematics, music, literature or whatever strikes our fancy is admirable. And I think it makes us happier, wiser and more humble as you say.
At the same time, I maintain that we shouldn't feel guilty if we aren't doing that, for whatever reason. Sure, sometimes we actually want to pursue some of these things but don't, maybe because we have a messy schedule and can't organize ourselves to prioritize our passions.
Feeling guilty does little to actually make you pursue your passions. You're better off learning about habits and how to pick ones that serve you.
Those aren't esoteric topics of applied mathematics if you're programming in Haskell or using formal methods. Moreover, some of them will improve your ability to write working Python. (The others I don't understand. Maybe they will too once I understand them.)
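To make the "working Python" point concrete, here's a minimal sketch of my own (not from either book) of a sum type handled with pattern matching, using dataclasses and Python 3.10's match statement:

    # A sketch (mine, not from either book): a "sum type" is a value that is
    # exactly one of a fixed set of alternatives; pattern matching handles
    # each alternative explicitly. Requires Python 3.10+.
    from dataclasses import dataclass

    @dataclass
    class Ok:
        value: float

    @dataclass
    class Err:
        message: str

    Result = Ok | Err  # the sum type: a Result is either an Ok or an Err

    def safe_div(a: float, b: float) -> Result:
        return Err("division by zero") if b == 0 else Ok(a / b)

    def describe(r: Result) -> str:
        # Handling every case explicitly is where much of the
        # "write working Python" benefit comes from.
        match r:
            case Ok(value=v):
                return f"result: {v}"
            case Err(message=m):
                return f"error: {m}"

    print(describe(safe_div(10, 4)))  # result: 2.5
    print(describe(safe_div(10, 0)))  # error: division by zero

Nothing here needs Haskell; the discipline of naming the cases and handling them all carries over directly.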
I don't think you can learn this stuff much more efficiently than over a period of 8 years if you're an autodidact maintaining so-called work-life balance. Remember that they have 4-year degrees for this.
I'm reading Code right now and it's fantastic. I'm a bit more than halfway through, and so far it's only been about how computers work and not really about computer science.
I heard an expression this weekend that I think is apt - a computer is to computer science as a telescope is to astronomy.
"Calling it 'computer science' is like calling surgery 'knife science'."
(Also, "CS could be called the post-Turing decline in the study of formal systems." But I don't know for sure if that was Dijkstra. It's one of my favorite jokes.)
A few years ago, I asked my engineer friend about how much of civilization he could rebuild singlehandedly, should he survive some hypothetical apocalyptic event. “All of it,” he replied. “Not all at once, but I know enough to be able to puzzle together the pieces I don’t know right this second.”
While I admire the Connecticut-Yankee optimism of the engineer, as a non-engineer I am seriously skeptical that a single engineer could know enough about the chemistry, materials, physics, CS, etc. I can explain what a battery or a transistor is supposed to do, but I wouldn't have the foggiest idea how to actually make one. In this scenario are we leaving the bunker to break into Bell Labs (or at least some research university library)?
Somewhat of a tangent, but related: there is an anime called Dr. Stone where a brilliant genius scientist kid gets transported 3700 years into the future, where people have reverted back to the Stone Age. He teaches them how to build everything from scratch and makes some crazy stuff, e.g. antibiotics. Highly recommend.
I share your skepticism. Seems to me the engineer is falling prey to the Dunning-Kruger effect[1]. Rather than knowing enough to be able to puzzle together all the pieces they don’t know, I’d wager they don’t know enough to be able to discern what they won’t be able to figure out.
We studied a lot of detailed semiconductor physics in my engineering degree. Certainly loads of details were left out, but knowing that something is possible, and roughly along which lines it could be achieved, clears a huge hurdle in the innovation process.
Halfway in, it's introducing monads and Maybe. It feels like teaching a stack machine after talking about the visitor pattern. There's good information here, but I'm not sure it covers the important fundamentals (such that I could give it to a beginner).
Another book I particularly like in the same style is Feynman's (lesser-known) Lectures on Computation: https://amzn.to/2SSoJaR where he takes you from single instructions all the way to quantum computing.
It doesn't seem to yet cover circuitry; the hardware it discusses seems to be a two-tape Turing machine, much like BF. The author seems to have been simulating the machine by hand to generate the included execution traces.
I don't disagree with him on that, but there's really quite a bit of stuff in there about quasi-circuitry-like things: "machines" and "latches" and things like
> In the next chapter, we'll investigate how to make machines that change over time, which will be the basis for us to store information. From there it's just a short hop to actual computers!
so I think that grounding abstract computation in something that can clearly be constructed in real life is actually very much a concern of the book, even if he's not planning to cover IEEE-754 denormals, the cost of TLB flushes, or strategies for reducing the bit error rate due to metastability when crossing clock domains.
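For what it's worth, here's the kind of state-holding "machine" I take that quoted passage to be pointing toward: a toy sketch of my own (not the author's) simulating an SR latch built from two cross-coupled NOR gates, which is a machine that changes over time and can therefore store one bit:

    # A toy sketch (mine, not the book's): an SR latch made of two
    # cross-coupled NOR gates, iterated until the feedback loop settles.
    def nor(a: int, b: int) -> int:
        return 0 if (a or b) else 1

    def sr_latch(set_: int, reset: int, q: int, q_bar: int) -> tuple[int, int]:
        for _ in range(10):  # propagate the feedback until outputs stop changing
            new_q, new_q_bar = nor(reset, q_bar), nor(set_, q)
            if (new_q, new_q_bar) == (q, q_bar):
                break
            q, q_bar = new_q, new_q_bar
        return q, q_bar

    q, q_bar = 0, 1                          # start out storing a 0
    q, q_bar = sr_latch(1, 0, q, q_bar)      # pulse Set
    print(q)                                 # 1
    q, q_bar = sr_latch(0, 0, q, q_bar)      # inputs released: the bit is remembered
    print(q)                                 # still 1
    q, q_bar = sr_latch(0, 1, q, q_bar)      # pulse Reset
    print(q)                                 # 0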
First-principles is a physics/math way of thinking, and is common parlance in the mathematical modeling world.
When we say a model is a first-principles model, it means it is derived through fundamental equations like conservation of mass/energy, and other known relationships. This is in contrast to a data-driven model, where the underlying phenomena are not explicitly modeled -- instead the model is created by fitting to data.
Elon Musk became associated with it because he applied this form of thinking to business problems, i.e. by establishing the "fundamental equations" (as it were), questioning some basic assumptions and coming up with conclusions that are necessarily true but that no one else has arrived at.
Data-driven models (or the human equivalent: reasoning by analogy) are convenient to build and work well in the space the data has been collected in (~interpolation). However, they do not extrapolate well -- you cannot be sure they will work outside of the space of training data that the model has seen.
First-principles models (or the human equivalent: reasoning by principles) are generally more difficult to build and test (I worked on first-principles physics models for a decade -- they are a pain), but because they are built on a structure of chained principles/truths, they often extrapolate well even to areas where data has not been collected.
This is why if you want to improve efficiency and operations in known spaces, you use data-driven models (fast to build and deploy, accurately captures known behavior).
But for doing design and discovery (doing new things that have never been done before), first-principles models/thinking will carry you much farther.
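A toy illustration of that extrapolation point (my own sketch with made-up numbers, not from anyone's actual models): model free-fall distance both ways and predict outside the range the data was collected in.

    # Toy sketch of the extrapolation point above (assumptions mine):
    # free-fall distance d = 0.5 * g * t^2, modeled two ways.
    import numpy as np

    g = 9.81
    t_train = np.linspace(0, 2, 20)          # "data" collected for t in 0..2 s
    d_train = 0.5 * g * t_train**2           # noise-free for simplicity

    # Data-driven model: a straight-line fit to the observed points.
    slope, intercept = np.polyfit(t_train, d_train, 1)

    # First-principles model: derived from the physics, no fitting at all.
    def first_principles(t):
        return 0.5 * g * t**2

    t_new = 5.0                              # well outside the training range
    print(f"data-driven at t=5s:      {slope * t_new + intercept:6.1f} m")
    print(f"first-principles at t=5s: {first_principles(t_new):6.1f} m")
    # The linear fit does fine inside 0..2 s (interpolation) but is badly
    # wrong at 5 s; the first-principles model extrapolates because its
    # structure is right, not because it saw more data.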
It's a relatively common idiom here in Australia. It gets used quite a bit in STEM education, e.g. "prove such-and-such from first principles", but it's also pretty common more colloquially.
According to your link, it is also used a bit in South Africa (where Elon grew up), but less common in the US. Rather than being a new and overhyped term, perhaps it is a case of Elon using a term that is quite everyday to him, without realizing it is less familiar to the audience.
It often occurs that technical jargon is swiped to be used as business speak by business leaders with technical backgrounds. For example I often hear “orthogonal” when “independent” is usually more appropriate.
It's all over the place in physics and chemistry, and as a consequence in the engineering areas that are based on those (i.e., nearly all of them).
It is rarer to see it in CS, but that's more because CS dealt with very simple theories until recently than because of some fashion. As CS theories start to build on the earlier ones, it's appearing more.
UK here - I remember it back in college thirty-plus years ago, as in "build X from first principles", which could mean (e.g.) implementing an algorithm without using library calls.