I can strongly recommend it - really great material!
I've looked over it; on the surface it's not too dissimilar to what I used to teach at the university. That might not be an accident: I remember at some point we switched from our own compiler skeleton to COOL, which is apparently what Stanford uses, so maybe I based my syllabus on Stanford's? I forget :) (for sure I did check the syllabi of several big US universities when building mine).
Interesting that he recommends "Engineering a Compiler"; I don't doubt that it's a good book but I don't know it. The third on my list (though admittedly a bit advanced) was Muchnick's "Advanced Compiler Design and Implementation".
I wrote a Scheme interpreter a few years back, but never a compiler, so this should be a pretty fun time sink for the weekends.
I mainly did the labs. In 6.828, you program a full OS kernel (JOS, based on provided source code). In 6.824, one project was a user-space distributed file system in C++, another was a distributed reliable key-value store in Go.
I agree that the course material is hard to follow without external help but it's definitely doable. They have automatic grading scripts and the labs are progressive.
Week | Videos                                    | Quiz / Exam | PA assigned | PA due
1    | Course Overview; Cool: The Course Project | -           | -           | -
2    | Lexical Analysis; Finite Automata         | Quiz #1     | PA1         | -
3    | Parsing; Top-Down Parsing                 | Quiz #2     | -           | -
4    | Bottom-Up Parsing I; Bottom-Up Parsing II | Quiz #3     | PA2         | PA1
5    | Semantic Analysis and Type Checking       | Midterm     | -           | -
6    | Cool Type Checking; Runtime Organization  | Quiz #4     | PA3         | PA2
7    | Code Generation; Operational Semantics    | Quiz #5     | -           | -
8    | Local Optimization; Global Optimization   | Quiz #6     | -           | -
9    | Register Allocation; Garbage Collection   | -           | PA4         | PA3
10   | Java                                      | Final Exam  | -           | -
Why? How many copies of these textbooks do you think they sell? There are about 50,000 "computer and information science" bachelor's degrees awarded each year (which is broader than what you'd think of as CS). Let's say, generously, 5% take a compilers course. That's 2,500 books per year in the entire subject. Let's say any one book gets about 1,000 of those sales. At $190 per book, that's $190,000 gross. Over an entire edition, maybe you sell 2-3 times that much (considering campus book rentals, people buying used books, etc.). That's maybe $500,000 to write, edit, publish, and distribute an entire textbook. That doesn't seem unreasonable to me at all.
It's a requirement at some universities, so I wouldn't be surprised if it's higher. Then again, I don't know what % of universities require it either.
The hardcover version of the first edition is $2.50 on Alibris. New, this book shouldn't cost more than $90.
tl;dr version: you're ignoring all the other stuff that goes into selling a textbook and grossly overestimating how much margin publishers have. Professors don't submit PDFs ready to hit "print." The books must be edited, typeset, etc. This can't be done by minimum wage workers because the editors have to actually understand what they're editing. The hypothetical $230 book nets about $40 in profit.
It's about $20 that way.
Note that it is legal in the US for domestic sellers to import international editions to resell domestically, so for US people, buying international editions does not necessarily mean actually buying directly from someone outside the US.
There is, of course, nothing wrong with doing business with someone outside the US. It is just that if something goes wrong (book lost in shipment, for example), it might be more difficult to resolve the issue satisfactorily if the seller is halfway around the world. The time zone difference could make talking to them difficult, as could language differences, and consumer protection provided by your credit card might only apply to domestic purchases.
A textbook represents a distillation of the author's knowledge and experience in some specific field.
You may need that knowledge now, because at some time in the future you may be earning a living from what you know in that field.
Usually, a business with a knowledge gap in its practices hires a consultant for their knowledge in that specific field. By the same token, when you buy a textbook, you are "hiring" the author(s) of the textbook as consultants to fill in some gap or need in your knowledge. How much do you think you should pay your consultants (the authors) for their knowledge and experience?
And that is not such a high price - I have read engineering contract law and maritime law textbooks that would have been on the order of $3,500 each if I'd had to pay for them. I have seen specialist medical surgery textbooks in the same ballpark price.
You're going to make plenty of money - I've been working for a while, and I don't care what things below $1,000 cost - I have more left over than I spend regardless, and the mental bandwidth that's been freed up by not having to look at prices is very rewarding.
It's scarcity vs abundance mentality.
Another way to look at it: when you're out there working, and you find out that some phone app cost a million dollars to build and is a bug-ridden piece of shit, you'll look back on this textbook - written by very smart people, proofread, etc. - and remember it only cost $190.
(but yes, I hear you: education & healthcare shouldn't be "industries").
Eventually, they do.
You don't know that this isn't going to happen here. In fact, I'd argue that it already started (but yes, it's a slow process).
There are a lot of things I'd like to look into these days, but only so much time. And I do realize that some courses I took in university a decade ago were far more valuable than others (e.g. learning to program the m68K was extremely useful just to understand how stacks, interrupts, etc. work. OTOH, I haven't used anything from my programming languages course since then).
Niklaus Wirth puts it like this:
"It is the essence of any academic education that not only knowledge, and, in the case of an engineering education, know-how is transmitted, but also understanding and insight. In particular, knowledge about system surfaces alone is insufficient in computer science; what is needed is an understanding of contents. Every academically educated computer scientist must know how a computer functions, and must understand the ways and methods in which programs are represented and interpreted. Compilers convert program texts into internal code. Hence they constitute the bridge between software and hardware.
Now, one may interject that knowledge about the method of translation is unnecessary for an understanding of the relationship between source program and object code, and even much less relevant is knowing how to actually construct a compiler. However, from my experience as a teacher, genuine understanding of a subject is best acquired from an in-depth involvement with both concepts and details. In this case, this involvement is nothing less than the construction of an actual compiler."
I had an AHA moment along the lines of "Wow, the compiler writer is basically God." I had taken courses in software development and hardware architecture, but after writing a compiler I saw first-hand how a program--a description of a solution to a problem in a human readable language--is converted to the same description in a hardware readable language. The meaning of the program must be preserved by the compiler, but otherwise it has the freedom to generate any one of potentially many outputs that preserve the meaning.
There is a beauty I realized in the relationship between the programmer, the programming language, the compiler, and the hardware architecture. They are all in cahoots, but the compiler is the cornerstone. That relationship doesn't directly affect my decisions as a professional software engineer, but it forms the backdrop upon which I think about the act of programming and computing in general.
I haven't reviewed his new syllabus in depth, but even 20 years ago, Alex had reduced the focus on parsing compared to more traditional syllabi based on the Dragon Book. You were expected to do some homework and self-study to learn the basic parsing and lexing concepts. The reason for COOL was to get you into the other compiler-construction topics sooner to get some breadth of exposure to concepts like abstract syntax trees, semantic checking, type systems, intermediate languages, code-generation, and the rough idea of runtime systems and ABIs.
Taken together with an operating systems course, these are really two different examples of systems engineering concepts. You should be able to take away not just some vertical silo of knowledge only applicable to "building a compiler" or "writing an OS". You should be able to fuse these concepts into a richer understanding of how to decompose and engineer complex systems given a range of specialized and more general-purpose requirements.
Another facet of this can be found in courses focused more on database construction and storage, but this starts to veer out of the introductory space. Unfortunately, most introductory database courses focus too much on using a database and writing applications instead of on understanding how they work and what the engineering tradeoffs actually are.
For me, compilers, operating systems, databases, networks, processors are foundations upon which everything else is built. I'd like to know a little bit about the foundation upon which I'm making a living.
Some folks just want to get things done. I think it's just a personality thing.
If you've been programming for a few years and take this course, you're going to say "OHHH that explains <some weird thing that happened in your career>" a bunch.
In my mind, both of these examples showed a lack of understanding in how the compiler works. I would say learning about compilers should help with being able to make good assumptions of how things work. Whether it's worth the rare times it comes in handy is probably dependent on what else you could learn.
Note that I actually didn't take a compilers course myself and said co-workers did, so I can't say for sure even taking a course is sufficient (but I am currently reading a compilers book, though out of enjoyment and not for career skills). I should also say I believe my co-workers are significantly better developers/employees than I am.
The fact that you think she's wrong, and that you're right, on this issue, suggests you don't understand how compilers work either. 99% of printf fails are catchable through static analysis, and should be caught that way.
I'm happy to be proven wrong and certainly know I have much more to learn but I suspect in this case I was not clear in my comment and consequently you made assumptions about what I think is right or not. Feel free to correct me =] Either way I appreciate the response
> 99% of printf fails are catchable through static analysis, and should be caught that way.
I never said that it wasn't possible! But that would require the compiler to have special logic just for printf - or, in this case, the fmt and log packages. Maybe you would think that's the right thing to do, and I can see that point of view, but I suspect you would at least understand why it is a controversial decision and thus not necessarily assume it's there.
TL;DR: the compiler DOES have that special logic.
> the compiler DOES have that special logic.
Indeed it being a controversial decision does not imply that no compilers would do it; if anything it implies that there should be some compilers which make that choice.
More all-nighters during that class than any other. It occurs to me that I've spoken to several junior devs over the last few years with CS degrees who never had to take a compilers course. That just isn't right.
*Full disclosure that the following is mine
Short answer, yes.
Longer answer, compilers of any sort are hard, because there's both a lot of special cases and also pressure to be algorithmically efficient.
Writing any kind of compiler is going to force you to do stuff with the parse tree. It's up to you to decide how cases like these are handled:

    float a = 1

(is the integer literal implicitly converted to float?)

    int a = 1 + 2 + 3
    int a = 6

(is the constant expression folded at compile time?)
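A constant-folding pass over a parse tree can be quite small. Here's a minimal sketch in Go (the `Lit`/`Add` node types are invented for illustration):

```go
package main

import "fmt"

// Expr is a toy expression AST: either an integer literal or a binary add.
type Expr interface{}

type Lit struct{ Val int }
type Add struct{ L, R Expr }

// fold rewrites Add nodes whose operands are both literals,
// turning 1 + 2 + 3 into 6 at compile time.
func fold(e Expr) Expr {
	switch n := e.(type) {
	case Add:
		l, r := fold(n.L), fold(n.R)
		if ll, ok := l.(Lit); ok {
			if rr, ok := r.(Lit); ok {
				return Lit{ll.Val + rr.Val}
			}
		}
		return Add{l, r}
	default:
		return e
	}
}

func main() {
	// int a = 1 + 2 + 3
	e := Add{Add{Lit{1}, Lit{2}}, Lit{3}}
	fmt.Println(fold(e)) // {6}
}
```

A real pass would also have to decide what to do about overflow, division by zero, and floating-point rounding, which is where the "lots of special cases" pressure shows up.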
Making your own language is hard. If your goal is a working language, I'd suggest building an interpreter first, so you can really nail down the semantics. At the end of that you're going to have a call that looks like eval(ast, context). You can then build a simple compiler by reimplementing eval: rather than doing the work, spit out the code that would do the work.
Also, if you have an adequate interpreter, you can write your compiler in your own language.
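The eval-vs-compile parallel can be sketched in a few lines of Go (names like `eval` and `compile` are illustrative, not from any real project): the interpreter computes the value, while the compiler walks the same tree and emits source text that would compute it.

```go
package main

import "fmt"

type Expr interface{}
type Num struct{ V int }
type Sum struct{ L, R Expr }

// eval(ast) does the work directly.
func eval(e Expr) int {
	switch n := e.(type) {
	case Num:
		return n.V
	case Sum:
		return eval(n.L) + eval(n.R)
	}
	panic("unknown node")
}

// compile(ast) mirrors eval case for case, but instead of doing
// the work it spits out code that would do the work.
func compile(e Expr) string {
	switch n := e.(type) {
	case Num:
		return fmt.Sprintf("%d", n.V)
	case Sum:
		return "(" + compile(n.L) + " + " + compile(n.R) + ")"
	}
	panic("unknown node")
}

func main() {
	ast := Sum{Num{1}, Sum{Num{2}, Num{3}}}
	fmt.Println(eval(ast))    // 6
	fmt.Println(compile(ast)) // (1 + (2 + 3))
}
```

Emitting Go source here stands in for emitting assembly or bytecode; the structure of the two walks is the point.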
I didn't take this course, but from the syllabus posted above it looks like garbage collection is not covered at all. That's pretty much a requirement for FP languages, and it makes OOP languages easier too. So you'd have to look at other resources for that. A simple semispace collector should be relatively easy to write even if you don't want to get too deep into the matter.
For a very first version of your language, you can of course just allocate memory without ever freeing it. That's good enough to experiment with simple programs.
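The "never free anything" strategy is essentially a bump allocator. A sketch in Go (illustrative only - a real language runtime would hand out raw memory, not slice views):

```go
package main

import "fmt"

// Arena models allocate-and-never-free: a bump pointer into one
// big block. Program exit is the only "free".
type Arena struct {
	buf  []byte
	next int
}

func NewArena(size int) *Arena {
	return &Arena{buf: make([]byte, size)}
}

// Alloc returns a view of n fresh bytes, or nil when the arena is
// exhausted - with no collector, running out is fatal.
func (a *Arena) Alloc(n int) []byte {
	if a.next+n > len(a.buf) {
		return nil
	}
	p := a.buf[a.next : a.next+n]
	a.next += n
	return p
}

func main() {
	a := NewArena(64)
	fmt.Println(len(a.Alloc(16)), a.next) // 16 16
	fmt.Println(len(a.Alloc(16)), a.next) // 16 32
}
```

A semispace collector is the natural next step: two such arenas, copying the live objects from one to the other when Alloc fails.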
Another point that I think isn't covered in this course, and that is relevant for (many) functional languages, is pattern matching and the representation of algebraic datatypes.
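For a sense of what "representation of algebraic datatypes" means in practice, here's one common encoding sketched in Go (type names invented): each constructor becomes a struct carrying its payload, and a type switch plays the role of pattern matching.

```go
package main

import "fmt"

// Shape plays the role of an algebraic datatype, roughly:
//   type shape = Circle of radius | Rect of width * height
type Shape interface{ isShape() }

type Circle struct{ R float64 }
type Rect struct{ W, H float64 }

func (Circle) isShape() {}
func (Rect) isShape()   {}

// area "pattern matches" on the constructor via a type switch.
// Unlike real ML-style matching, the compiler won't check
// exhaustiveness for us - hence the panic.
func area(s Shape) float64 {
	switch v := s.(type) {
	case Circle:
		return 3.14159 * v.R * v.R
	case Rect:
		return v.W * v.H
	}
	panic("non-exhaustive match")
}

func main() {
	fmt.Println(area(Rect{3, 4})) // 12
}
```

In a compiler for an FP language, the analogous representation is usually a tag word plus payload fields, and the pattern-match compiler lowers nested patterns into decision trees over those tags.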
Their scaffolding code is only available for C++ and Java, but (at least when I took it) many of the tests could be downloaded and reverse-engineered, so I was still able to adequately test my Go implementation. See https://github.com/zellyn/gocool for my working but probably quite nasty code :-)
Compilers was one of my favorite courses. Definitely humbling and enlightening. Would recommend :D