Here's how it goes — you are given just a NAND gate, and from that NAND gate you construct other gates and more complex logic. Then you build the computer's basic processing and storage devices (the ALU and RAM, respectively). In the next stage you build an assembler and a compiler for your own defined language. ;)
Finally a high-level language (Jack) is implemented to run on your machine architecture. Then you build an OS for your machine: the Jack OS.
And in the last step you build your first application, a Tetris game. ;)
Remember, it is running on your own self-built computer. ;)
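For a taste of that very first stage, here is a rough sketch, in Python rather than the course's HDL and purely for illustration, of how the basic gates can all be derived from NAND alone:

    # The single primitive everything else is built from.
    def nand(a, b):
        return 0 if (a and b) else 1

    def not_(a):
        return nand(a, a)                  # tie both inputs together

    def and_(a, b):
        return not_(nand(a, b))            # invert NAND to get AND

    def or_(a, b):
        return nand(not_(a), not_(b))      # De Morgan: a OR b = NAND(NOT a, NOT b)

    def xor(a, b):
        return and_(or_(a, b), nand(a, b))

    # sanity check: XOR truth table
    assert [xor(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 0]

The actual projects do the same thing in a simple hardware description language with a supplied simulator, and then keep stacking: multi-bit versions of these gates, adders, the ALU, and so on.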
This is one of the most well-thought-out self-learning projects out there for building a computer from first principles. Kudos to the creators. Pure bliss.
My experience — you get a feel for the project and appreciate it more as you move up, and also later in your life. It's a life-long experience.
If you are a college student, having a mentor helps a lot in understanding and appreciating concepts faster.
Worth mentioning: this is one of the best gifts you can give to a curious soul who has just stepped into computers.
I rate this project very highly, and consider it the best self-learning project of all time.
For students and hobbyists alike, the task of understanding what a computer fundamentally does can seem like a truly uphill battle. Approaching this battle from the top down can seem never-ending. The number and complexity of layers between application code and executable binaries is daunting to the newcomer, to say the least. Approaching it from the bottom up is still difficult, but it allows you to see the need for each abstraction layer as the shortcomings of a lower layer present themselves.
This book takes this bottom-up approach to take you from digital logic to high-level software, quite literally from NAND to Tetris. And while each layer in between is highly simplified, it allows you to understand a system as a whole rather than concentrating on the specific layers. Really, a great read. And the projects are priceless. If you make it through this book, you will understand how computers _fundamentally_ work.
A friend at Caltech took this a step further and came up with his own crude SoC that took input from basic switches, did calculations based on code taken from a small off-the-shelf EEPROM, and displayed the output on segment LEDs. Took him like three years, but he was eventually able to make a chip with a 20 micron process using a microscope and a UV DMD development board. He did have access to wire bonders, IC debugging equipment, professors, etc. though.
For additional opinions also see:
Edit: But slides are available at http://www.nand2tetris.org/course.php
A video lecture based course would have been great, but this is splendid too!
I found it a great read, and it covers some of the ground of this course.
And a giant honking breadboard.
Doing it on an FPGA, without extensive handholding and in real-world languages, would be about a year of work by my estimate. This is assuming you build your own CPU (in procedural VHDL, not at a gate level) to implement an existing instruction set, and use the manufacturer's provided memory blocks, video blocks, etc. For reference, an experienced FPGA programmer would take about 2-4 months full-time to emulate something like an NES.
It would be a really good experience, and it's the kind of thing a comp. eng. degree prepares you for (we do a capstone project at my school which is like this). As a bonus, you'll also cover analog electronics (which are infuriating) and as much comp sci. and math as you're willing to take on.
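To give a flavor of what implementing an instruction set behaviorally (rather than at the gate level) involves, here is a toy fetch-decode-execute loop in Python; the three-instruction machine and its encoding are invented here for illustration and don't correspond to any real ISA:

    # Toy machine: 16-bit words, opcode in the top 4 bits, operand address in the low 12.
    # Opcodes (made up for this sketch): 0 = LOAD, 1 = ADD, 2 = STORE, 15 = HALT.
    def run(mem):
        acc, pc = 0, 0
        while True:
            word = mem[pc]; pc += 1                  # fetch
            op, addr = word >> 12, word & 0xFFF      # decode
            if op == 0:    acc = mem[addr]                      # LOAD addr
            elif op == 1:  acc = (acc + mem[addr]) & 0xFFFF     # ADD addr
            elif op == 2:  mem[addr] = acc                      # STORE addr
            elif op == 15: return mem                           # HALT
            else: raise ValueError("bad opcode")

    # program: load mem[10], add mem[11], store the result to mem[12], halt
    mem = [0x000A, 0x100B, 0x200C, 0xF000] + [0] * 6 + [3, 4, 0]
    assert run(mem)[12] == 7

In VHDL the same structure becomes a clocked process with a state machine for fetch/decode/execute, plus all the timing and synthesis concerns this sketch happily ignores.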
Here's a (video) presentation: http://www.youtube.com/watch?v=UHty1KKjaZw.
E.g. if you press a key, hold it, and then release it, it is possible that this event gets missed because it was too short and the key-press method did not check the memory region at the right point in time. For me, their implementation reinforces this bogus "I am in total control of everything that happens" feeling, which often leads to bad design because it ignores the messy real world. I would have appreciated something more sophisticated and generic there which would work for different kinds of hardware. Maybe I am missing some simple bus system, one might say ...
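A toy illustration of that race, with a made-up variable standing in for the memory-mapped keyboard word; a press that begins and ends between two polls is simply never observed:

    KBD = 0                         # stand-in for the memory-mapped keyboard word

    events = []
    def poll():                     # roughly what a polling key-press check amounts to
        if KBD != 0:
            events.append(KBD)

    poll()          # frame 1: no key down yet
    KBD = 65        # key 'A' goes down...
    KBD = 0         # ...and comes back up before the next poll
    poll()          # frame 2: nothing to see; the press is lost
    assert events == []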
Nevertheless it is a wonderful book and I had lots of fun with it! :)
That routine should then mask lower-priority interrupts, poll the appropriate region of memory (assuming memory-mapped IO) for the byte or bytes held down, push those onto the buffer for key inputs or into the STDIN equivalent, unmask lower-priority interrupts, and return.
It is then up to the user program to read in from the buffer and do the needful.
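A minimal sketch of that flow in Python, purely illustrative; the memory-mapped address and the mask/unmask hooks are invented names, and on real hardware this would live in the kernel's interrupt service routine rather than a Python function:

    from collections import deque

    KBD_ADDR = 0x6000               # hypothetical memory-mapped keyboard register
    key_buffer = deque()            # the STDIN-equivalent buffer user programs read from

    def keyboard_isr(mem, mask_interrupts, unmask_interrupts):
        """Runs when the keyboard raises an interrupt, not on the user program's schedule."""
        mask_interrupts()                   # keep lower-priority interrupts out
        scancode = mem[KBD_ADDR]            # poll the memory-mapped IO region
        if scancode != 0:
            key_buffer.append(scancode)     # push onto the key-input buffer
        unmask_interrupts()                 # then return to whatever was interrupted

    def read_key():
        """What the user program calls; even a very brief press is still in the buffer."""
        return key_buffer.popleft() if key_buffer else None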
One interesting exception is the Unix VM, whose interrupts are called "signals".
Also, the source (thousands of lines of decent Java) is available at the bottom of the download page.
I read this book as a teenager and I remember it giving me my first "aha!" moment of understanding how computers really work. In my experience, the prerequisites for understanding the book are pretty low, but the knowledge within is sophisticated.
My question for people who have done the course is: does it cover even simple design theory like K-maps? Does it make you account for propagation delay? Does it explain caching schemes and TLBs? I feel like it probably has to gloss over a lot of the 'hard stuff' to remain so dense.
Likewise, it sounds like it's all done in custom languages. Half of my first year was spent struggling with industry-standard software like Altera's, which is super powerful but terribly designed. The other half was spent actually breadboarding circuits and having them fail because of problems you never see in simulations (or which the simulations solve for you).
I'm not saying it's not an interesting project, but it really is a nice, abstract diversion for people who work on software all day. People calling for it to be included in comp. eng. programs probably don't realize the depth of what actually gets covered in comp eng.
Edit: to sound a bit less whiny, if anyone is doing this course and they want to dig deeper into a particular area, I'd be happy to point them to the books/course materials we used.
For example, one of the assignments is to design a 16-bit adder in their toy HDL, but they never cover carry lookahead adders. The only thing that matters is that your circuit passes the tests, so ripple carry is considered okay.
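For anyone curious what the omission amounts to (this is context, not material from the book): a ripple-carry adder just chains full adders, so each bit has to wait for the previous carry, while a carry-lookahead adder computes the carries directly from generate/propagate terms. A rough Python model of the ripple-carry version:

    def full_adder(a, b, c):
        """One bit of addition: returns (sum, carry_out)."""
        s = a ^ b ^ c
        carry = (a & b) | (c & (a ^ b))
        return s, carry

    def ripple_add16(x, y):
        """Chain 16 full adders; in hardware every stage waits on the previous carry."""
        carry, result = 0, 0
        for i in range(16):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= s << i
        return result                       # the final carry out is simply dropped

    assert ripple_add16(1234, 4321) == 5555
    assert ripple_add16(0xFFFF, 1) == 0     # 16-bit overflow wraps around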
Similar efficiency/performance issues are glossed over throughout. Propagation delay is never covered, and the sequential circuits use idealized clocks (instant transitions between low and high). They also don't describe how to build flip-flops from latches: the D flip-flop is given as a primitive, and you build up other elements from there.
K-maps are not covered either. Caches are ignored as well.
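To illustrate the DFF-as-primitive approach mentioned above, here is a rough Python model (not the book's HDL) of the next step up: a loadable 1-bit register built from a given flip-flop and a mux:

    class DFF:
        """Primitive: the output is whatever the input was at the previous clock tick."""
        def __init__(self):
            self.state = 0
        def tick(self, d):
            out, self.state = self.state, d
            return out

    class Bit:
        """1-bit register: a mux feeds the old value back into the DFF unless load is set."""
        def __init__(self):
            self.dff = DFF()
        def tick(self, in_bit, load):
            new = in_bit if load else self.dff.state    # select new input or held value
            return self.dff.tick(new)

    bit = Bit()
    bit.tick(1, 1)                  # load a 1
    assert bit.tick(0, 0) == 1      # the value is held while load stays 0

Registers, RAM, and the program counter are all built by repeating this pattern, which is where time as a design dimension first sneaks in.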
Still, the book is amazing for its intended purpose. If you don't already know this stuff, this is an easy way to get a somewhat detailed (though abstract) view of how computers work without getting mired in all the concerns that accompany the engineering of actual computers.
Also, I seriously doubt that it covers the entire breadth of information required to create, from scratch, the entire video subsystem required for displaying graphics. Or anything like that.
This book aims to change that in a simplified environment. I don't fault it for skipping over Karnaugh maps, just like I don't fault it for skipping the physics of cosmic background radiation, or the techniques used to compensate for failures in multi-level flash memory cells. These details are not on the most direct route from (simulated) NAND gates to Tetris.
This class equips you with the tools and knowledge to do one thing: finish the class. It guides you along, handing you simplified abstractions that allow you to progress without getting frustrated. At the end, however, you'll only really know how to take the class. If you wanted to drill deeper, you've already done the introductory week where they do a high-level overview of the course material.
In my opinion, the hard part of hardware description is understanding the concepts, not the language used. If you've got a basic understanding of hardware description, the barriers to moving to VHDL or Verilog will be much lower. Coming from the other direction, I can see large similarities between the concepts in their HDL and VHDL - the syntax may be slightly different, but the concepts are the same.
I studied computer engineering and computer science, and I haven't used Karnaugh maps or VHDL since college because I write software now. I'm still glad I studied computer engineering though, because it gave me a deep understanding of how computers work.
I appreciate how my education ran from sand to Skyrim (so to speak), and I find it hard to see how anyone can really function in computing without such a vertical understanding.
I'd be curious to know a) how deep your education actually went (since you've implied it was broad, from logic gates up to OSes and high-level programming) and b) what you actually do day-to-day for which you think this high-level overview is indispensable.
What materials and courses structured like this excel at doing is very rapid demystification. They quickly allow the student to remove the "and this layer is black magic" notion of things and give them structure on which they can realize the limits of their own knowledge, or learn to know what they don't know. With this sort of foundation they are better equipped to teach themselves.
Materials and courses like this are not vocational, and don't pretend to be. They are very much the opposite.
* Magic is supposed to work. So you see people calling for functionality to be moved from whatever they're doing (their user-level code, say) into the magic: build something into the language, compile it to machine code instead of interpreting, do it in hardware, etc. Because of course if it's done by magic, it doesn't cost anything and it works perfectly!
* Magic is out of your control. So if it breaks, there's nothing you can do. If your operating system is crashing your program, or downloading updates you don't want, you're out of luck.
* Magic is easy. So the people who make the magic happen don't get the credit.
* Magic is memorized, not understood. So you need to memorize the incantations needed to squeeze performance from your database/OS/CPU/whatever instead of doing them yourself.
You don't need to understand how to use Karnaugh maps to understand that putting more multipliers on your chip is going to cost you real estate. You don't need to understand the different possible scheduling policies to understand that making your program multithreaded will slow it down, not speed it up, unless you have more than one core. Even a shallow understanding is sufficient to be very useful, and to enable you to question things.
There's an old joke that the difference between computer science and computer engineering is that in the former one assumes infinite speed and infinite storage. Understanding that there are limitations, and why they exist and to what degree, is important.
As already noted, it demystifies the surrounding "magic". There's a confidence and freedom which comes from knowing that nothing in the system is beyond you.
My education indeed went from "sand to Skyrim", from basic physics & chemistry to electrochemistry to discrete electronics to quantum mechanics to semiconductor doping to hand-layout of integrated circuits to automated layout of ICs (writing the automators, that is) to hardware languages (acronym escapes me) to logic to gate theory to basic CPU design to machine language to assembler to compiler design to C/APL/Pascal/Prolog/Lisp/C++ to OS design to discrete math to graph theory to raster graphics to 3D graphics, and a bunch of other stuff throughout. It's indispensable because I can look at any problem and grok what's happening all the way down to silicon: able to work with someone writing Windows printer drivers one day and proving a linked crossover bug in the USB driver IC the next while discussing circuit design in between, or to explain why an elegant recursive solution causes a "drive full" error under certain conditions, or why error handling in a certain protocol is pointless (it's already handled six layers down the network stack) - to name just a few real cases.
Knowing propagation delay in the gates can explain/reveal the limits of scheduling in the OS. Understanding drive rotation speeds provided the breakthrough of on-the-fly compression as an OS-level storage acceleration technique.
Take anything away? Just a sensible understanding of how everything works, and ability to drill into detail where and when needed. All learned in about 6 years, and even came out understanding why Aristophanes' plays survived for several millennia (to wit: dirty jokes endure).
What I do day to day (now)? Writing an iPad app for mobile enterprise data. Working under a genius crafting the many layers of abstraction that make it fast & flexible, he can (and has) describe a new way to represent very high-level data, hand me a rough description of a virtual machine to process it efficiently, and I'll instantly see how it runs on server hardware. I can't imagine not having this view. As a part-time teacher, I'm trying to get students from zero to binary to writing object-oriented games in 12 weeks flat; to do less is to deprive them of the joy and rewards of knowing how things work - at every level.
"A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects."
— Robert Heinlein, Time Enough for Love
To go with your specific example of compilers: why would people need to learn enough to be able to contribute to GCC to get anything out of this course? GCC is the leading open-source compiler; knowing enough to contribute is overkill.
To give an example from personal experience: I, over the course of a weekend, wrote my own operating system from scratch, without having had any formal training in systems design. I have no doubt that my system would collapse under the weight of doing anything remotely close to what we would expect a modern system to be able to do. The only reason I was willing to risk running it on bare metal was that I had an old, semi-broken laptop that I didn't care about. But I still learned a lot from the process.
In terms of including it in a comp. eng. program, it seems like it would fit in well as a 101 course. It provides a big picture of how everything fits together, and a first look at all of the topics will help enough when you go to learn them in depth that it seems like it could be worth the time investment.
As some of the responses have stated, the book appears (I haven't read it yet) to motivate topics the reader may want to study more deeply - motivating them the way professors do as part of the introduction to a course: why are we learning this? how may we apply this? what should we learn next?
Coursework is the minimum, not the maximum.
I don't know if there is any pedagogical benefit from being able to say "I built this whole thing from scratch". But it sure is cool.
I think I may have benefited from going to a smaller school, in this respect: I've had the same professor for every core 'Computer Architecture' class I've taken across 3 years (apparently there's one other guy, but they teach the same content). The courses are numbered, and they pick up exactly where the last one left off. I think the problem with trying to fit this stuff into a CS program is that it's so broad and deep, and it's not your primary focus. For me all of those classes were tightly scheduled requirements, so I took them at the right time, back to back.
It's definitely cool, and I encourage anyone who works primarily in software to check it out and get a better understanding of hardware design in an abstract way.
Flip-flops and sequential logic made my brain flip-flop itself for a while when I first ran into them, though. :) That really does require a different sort of thinking, since you're introducing time as a factor.
I also studied computer engineering (and computer science), so I got all of this information over the course of 4 years, but I think it would have been valuable to take this course up front in order to immediately understand how all the pieces fit together.
It would also be valuable for computer science students who don't get most of the computer engineering material.
The title is very ambitious. This is not really building a computer from first principles; there are some steps skipped. This is a high-level overview of modern computers, and it's worth noting there's a lot of depth left to be explored.
Everyone agrees the custom languages are not great. They don't really give you a lot of transferable skills; it would be cool if you really implemented C or Lisp, and did it in Verilog or VHDL.
This style of course may suit a particular type of student, one who enjoys a broad overview or wants to specialize in only one area. Personally, my preferred way to learn is in depth, serially, so this doesn't really apply to me. My degree also covered most of these topics anyway, so picking wasn't really a problem. I realize this doesn't apply to everyone.
A lot of comments say 'a motivated student will just learn that on their own'. This material is a good jumping off point, but (once again, in my experience) the theory is the hardest stuff to learn on your own. I would rather do the 'dull' stuff in class, then teach myself how to make games out of it (as opposed to being taught how to make games, and having to learn best practices, design techniques, theory).
Some commenters were also saying that this is unique, or it should be taught everywhere. It is unique in that it's a single, very dense class, but the material is definitely available elsewhere, in a format that I find easier to learn from. I wanted to make it clear that, if this is interesting, I think a computer engineering degree will let you learn the same stuff, but in much greater detail. Taking this class first might motivate some people, but I would find it redundant.
In conclusion, this is great, but it's not for everyone. If you like all the content but you're disappointed by how brief it seems, try computer engineering.
edit: I forgot, a lot of comments implied that understanding this material helped them do higher level programming. It's certainly cool to have a soup-to-nuts knowledge, but I still don't really understand how it could help without the topics that actually impact performance like caching, pipelining, I/O, etc.
Many of the points you make in your critique (lack of depth, etc.) are obvious to anyone who decides to read the book. As an example, the book Learn Modern 3D Graphics Programming has been posted and praised on HN in the past, but it should be obvious to anyone that there's a lot more to computer graphics than that book alone.
I think your comments would be more valuable if you had something more positive to add, perhaps in addition to criticism. If this book glosses over some topics, perhaps you could suggest some learning resources for those topics.
This is like a professional fabricator complaining that the 10-hour welding course at night school doesn't cover welding aluminium. I don't think anyone was under the impression that this was a replacement for an engineering degree. People will do this course because it's cool to build stuff you thought was beyond you.
Also, enroll in a welding course, it's cool to be able to build big stuff out of metal too.
Or you're being sarcastic. I'd hate to assume that, but someone did go through and downvote all of my posts.
I don't think the problem is that the end result isn't a computer (it certainly sounds like it is), but that the computer only runs in the provided simulator, and is written in a custom HDL designed to make this project relatively simple. The simulator itself ignores a bunch of complexities around timing that a commercial one (like ModelSim) would consider.
Personally I haven't done this class, but I'd be curious to know whether the students design the control unit and data path themselves. I know that was a giant pain in the ass when I did it for a gimped RISC processor (as you described).
edited to add: Dug up an actual comment from SPJ on this; I hadn't realized he uses Comic Sans for all his talks:
I suppose type designers or whoever might be particularly sensitive to whatever transgressions it commits (I dunno), but almost everybody I've seen indulge in a bit of C.S.-bashing seems to otherwise not care very much about typography at all.
As far as I can figure, it's just because people love a bandwagon, especially one that's really easy to hop onto and entails few risks....