Hacker News
Building a Modern Computer from First Principles (nand2tetris.org)
238 points by xal 1316 days ago | 83 comments

You start with a simple NAND gate, then build up the hardware (ALU, RAM), an assembler, a compiler, an OS, and finally the Tetris game itself. Remember, all of this runs on your own self-designed computer — hardware and software, all your work! An awesome, once-in-a-lifetime project.

Here's how it goes — you are given just a NAND gate, and from it you construct the other gates and more complex logic. Then you build the computer's basic processing and storage devices (the ALU and RAM, respectively). In the next stage you write an assembler and a compiler for your own language. Then a high-level language (Jack) is implemented to run on your machine architecture, and finally you build an OS for your machine: the Jack OS.
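
That first step — deriving everything from NAND — is easy to sketch outside the book's own HDL. Here's a rough Python illustration (my own sketch, not the book's code; the function names are mine):

```python
# Every basic gate derived purely from NAND, the course's one primitive.

def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):          # NOT(a) = NAND(a, a)
    return nand(a, a)

def and_(a, b):       # AND is just an inverted NAND
    return not_(nand(a, b))

def or_(a, b):        # De Morgan: a OR b = NAND(NOT a, NOT b)
    return nand(not_(a), not_(b))

def xor(a, b):        # XOR built from four NANDs
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))
```

The book has you wire these up as chips in a hardware simulator rather than as functions, but the composition idea is the same.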

And in the last step you build your first application: the Tetris game. ;) Remember, it's running on your own self-built computer.

This is one of the most well-thought-out self-learning projects out there for building a computer from first principles. Kudos to the creators. Pure bliss.

My experience — you come to feel and appreciate the project more as you move up through it, and again later in your life. It's a lifelong experience.

If you are a college student, having a mentor helps a lot in understanding and appreciating the concepts faster. Worth mentioning: this is one of the best gifts you can give to a curious soul who has just stepped into computing.

I rate this project very highly; it's the best self-learning project of all time.

Seriously, this book should be in all computer engineering curriculums. Hands down, there's nothing like it.

I've been sick in bed for the last couple days and yesterday read the first half of this book (_Elements_). It is I think the best, tightest description of how one gets from primitive gates to Adders to an ALU to memory. Extremely well written.

This book is amazing, simply for the fact that it really allows you to understand the major layers of a computer system in a way that can fit in your mind all at once. This is not an easy task, with each layer having the capability of being a highly specialized area of study. You can spend your life working in operating systems without ever really understanding what a compiler does, or work in compilers without ever actually understanding the digital logic that underlies computer architecture.

For students and hobbyists alike, the task of understanding what a computer fundamentally does can seem like a truly uphill battle. Approaching this battle from the top down can seem never-ending. The number and complexity of layers between application code and executable binaries is daunting to the newcomer, to say the least. Approaching it from the bottom up is still difficult, but it allows you to see the need for each abstraction layer as the shortcomings of the lower layer present themselves.

This book takes that bottom-up approach to take you from digital logic to high-level software — literally from NAND to Tetris. And while each layer in between is highly simplified, it allows you to understand the system as a whole rather than concentrating on specific layers. Really, a great read. And the projects are priceless. If you make it through this book, you will understand how computers _fundamentally_ work.

An excellent book, one that I can't recommend highly enough.

A friend at Caltech took this a step further and came up with his own crude SoC that took input from basic switches, did calculations based on code taken from a small off the shelf EEPROM, and displayed the output to segment LEDs. Took him like three years but he was eventually able to make a chip with a 20 micron process using a microscope and a UV DMD development board [1]. He did have access to wire bonders, IC debugging equipment, professors, etc though.

[1] http://ajp.aapt.org/resource/1/ajpias/v73/i10/p980_s1

This is awesome. I've been thinking of doing the same with an electron gun (hijacked from an old CRT) and a vacuum chamber.

See if you can get a plasma window for that vacuum chamber; it will allow you to do e-beam etching down to about 100 nm with the wafer outside of the chamber. Combine it with a DIY 80/20 class-100 cleanroom (if you can make it small enough to manipulate with isolated gloves, it will be relatively cheap), some precise X-Y linear stages (you can probably get away with +-10 nm precision at 100 nm), an ultrasonic bath with etching fluid, and a high-quality blender for applying the resist, and voila — you have a simple little fab. You can probably adapt some parts from an SEM to scan the electron beam across the resist, but long term you're probably better off trying to come up with a way of etching through a die (much more difficult and slow).

Wonderful book.

For additional opinions also see:


Some of the (somewhat critical) opinions in that thread are Alan Kay's - worth a read.

http://www.idc.ac.il/tecs is down?

Edit: But slides are available on http://www.nand2tetris.org/course.php

A video lecture based course would have been great, but this is splendid too!

If you don't need to go as deep or as far, you might want to consider 'Code' by Charles Petzold: http://www.charlespetzold.com/code/

I found it a great read, and it covers some of the same ground as this course.

I second this recommendation. Code is a really great tour through the creation of a digital computer, and helps build a fertile ground for later questioning and research into the field.

Re the complaints about how idealized the hardware chapters are: http://www.amazon.com/Computation-Structures-Electrical-Engi... also covers the full stack from transistors to operating systems, but with much more depth on the lower levels. It's pretty old, though.

MIT offers a similar undergraduate class called 6.004 which most undergraduates take. The course materials are available on OCW:


Yes, good approach. It would be very cool to build this from real hardware instead of a VM, now that we have Arduinos and RPis.

If you built it from real hardware, you'd be going to the store to buy a couple thousand NAND chips, like this: http://www.digikey.com/product-detail/en/SN7400N/296-14641-5...

And a giant honking breadboard.

It might be beyond the scope of a normal hobbyist project, but writing Verilog or VHDL to drive an FPGA might bring interested people half of the way there. You mentioned Altera design software in another post; I don't know if Xilinx's is any better (I'm guessing not really), but Digilent sells FPGA boards intended for the educational market at a pretty reasonable cost. Powerful enough, I'd reckon, to allow for something of this scope to be built.

I actually considered something like this; there are good dev boards for $150 that can drive a VGA monitor and use USB peripherals. The manufacturer provides stock blocks for you to integrate, so you can inspect them but you don't have to build them from scratch (which is super hard).

Doing it on an FPGA, without extensive handholding and in real-world languages, would be about a year of work in my estimate. This is assuming you build your own CPU (in procedural VHDL, not at a gate level) to implement an existing instruction set, and use the manufacturer's provided memory blocks, video blocks, etc. For reference, an experienced FPGA programmer would take about 2-4 months full-time to emulate something like an NES.

It would be a really good experience, and it's the kind of thing a comp. eng. degree prepares you for (we do a capstone project at my school which is like this). As a bonus, you'll also cover analog electronics (which are infuriating) and as much comp sci. and math as you're willing to take on.

There's a link (on the "cool stuff" page here: http://www.nand2tetris.org/coolstuff.php) to an implementation of their machine in FPGA.

Here's a (video) presentation: http://www.youtube.com/watch?v=UHty1KKjaZw.

Note that this is exactly how they made the Apollo Guidance Computer, except that it was 2,800 chips, each with two NOR gates. And a giant honking breadboard.


You could convert their custom HDL to Verilog, for programming an FPGA. I've thought about doing it before, but I've never seen enough support to do it myself.

It's an amazing book, and doing all the projects is really fun. However, keep in mind that the computer you build is really simple — in my opinion, maybe a little too simple. In particular, there is no decent I/O model: reading from the keyboard is done by busy-waiting, and writing to the screen is done by writing to a special region of memory. This is OK, as the book covers many topics, but I would enjoy finding a book that covers this area in detail. E.g. the famous Patterson-Hennessy book on MIPS covers the implementation of a RISC processor in great detail but does not go into the I/O side, which in my opinion is the really hard part.

Historically, writing to the screen was done by writing either ASCII or pixel values to a special region in memory (e.g. Apple II, IBM PC). I'm not sure what you're suggesting the book should do instead? Use a serial port and terminal? Implement a GPU? (I'm not being sarcastic, just looking for clarification.)
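
For anyone who hasn't seen the book's model, the memory-mapped scheme it uses can be sketched in a few lines of Python (my own simulation; the addresses are the Hack platform's memory map, if I remember it correctly — screen words starting at 16384, keyboard at 24576):

```python
# Memory-mapped I/O in miniature: the "screen" and "keyboard" are just
# regions of the same RAM that the CPU reads and writes.

SCREEN_BASE = 16384   # start of the screen's pixel words (assumed address)
KBD = 24576           # single word holding the current key's code (assumed)

ram = [0] * 32768

def draw_word(offset, bits):
    ram[SCREEN_BASE + offset] = bits   # "drawing" is just a store

def read_key():
    return ram[KBD]                    # busy-wait loops poll this word

draw_word(0, 0b1010101010101010)       # set alternate pixels in word 0
ram[KBD] = 75                          # pretend the hardware latched a key
```

In the book's scheme there's nothing more to it — no interrupts, no controller — which is exactly the simplification the parent comment is complaining about.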

My point is that after reading the book I kind of knew how I could implement a simple CPU in real HW (with ALU, memory, etc.) but it was not clear to me how the IO part (kbd, screen) would work. Is the kbd connected directly to a certain place of memory? How is this implemented? Would there be some screen/gpu HW that is directly connected to the memory region? How is the CPU clock involved?

E.g. if you press a key, hold it and then release it, it is possible that this event gets missed as it was too short and the key press method did not check the memory region at the right point of time. For me their implementation enforces this bogus "I am in total control of everything that happens feeling" which often leads to bad design as it ignores the messy real world. I would have appreciated something more sophisticated and generic there which would work for different kinds of HW. Maybe I am missing some simple bus-system one might say ...

Nevertheless it is a wonderful book and I had lots of fun with it! :)

If I'm not mistaken, the keyboard should raise high an interrupt pin on the CPU, which should cause an interrupt service routine to be called.

That routine should then mask lower-priority interrupts, poll the appropriate region of memory (assuming memory-mapped IO) for the byte or bytes held down, push those onto the buffer for key inputs or into the STDIN equivalent, unmask lower-priority interrupts, and return.

It is then up to the user program to read in from the buffer and do the needful.
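
That flow is easy to model in software. Here's a toy Python simulation of the steps just described (my own sketch, not real hardware; the keyboard address and key codes are illustrative):

```python
# Toy model of an interrupt service routine for a memory-mapped keyboard:
# mask lower-priority interrupts, drain the keyboard word into a buffer,
# unmask, return. A user program later consumes key_buffer.

ram = {24576: 0}          # hypothetical memory-mapped keyboard word
key_buffer = []
interrupts_masked = False

def keyboard_isr():
    global interrupts_masked
    interrupts_masked = True          # mask lower-priority interrupts
    code = ram[24576]                 # poll the memory-mapped region
    if code:
        key_buffer.append(code)       # push onto the key-input buffer
        ram[24576] = 0                # acknowledge / clear the word
    interrupts_masked = False         # unmask and return

ram[24576] = 65                        # a key press raises the interrupt line
keyboard_isr()                         # CPU vectors to the service routine
```

The point of the interrupt is that the ISR runs when the hardware says so, so a short key press can't be missed the way it can with pure polling.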

The Hack CPU doesn't have interrupts, IIRC.

I was actually working on implementing a simple little VM library in C as a fun exercise, and deciding how to handle interrupts was where I got caught up.

Most VMs don't have interrupts. As a less-complex alternative, you might consider implementing an I/O thread, something rarely done in hardware because an extra CPU is a lot more hardware than an interrupt mechanism.

One interesting exception is the Unix VM, whose interrupts are called "signals".

I would love to see a book or interactive course that started fairly simple like this, but also had progressively more advanced "second passes" that would go through most or all the abstraction layers of the first project, but with the goal of extending the entire system for some specific advanced feature (like a nice IO model).

I actually started writing my own hardware test runner[0] to better imitate a spec/unit test runner, and because I really wanted just a folder, a NAND gate, and a DFF. I found the built-in components confusing, because I could leave parts unimplemented and the tests would still pass.

Also, the source (thousands of lines of decent Java) is available at the bottom of the download page[1].

[0] https://github.com/Jared314/n2trunner

[1] http://www.nand2tetris.org/software.php

Another book in the same space is "Understanding Digital Computers" by Forrest Mims, originally sold by Radio Shack. It's dated, written in 1978, but the basics are still relevant, taking the reader from the definition of a bit, through binary numbers and Boolean logic, and eventually to a complete microcomputer and its programming.

I read this book as a teenager and I remember it giving me my first "aha!" moment of understanding how computers really work. In my experience, the prerequisites for understanding the book are pretty low, but the knowledge within is sophisticated.

There is a somewhat similar book by none other than Niklaus Wirth:


Question to those that already have this book - is the book full of diagrams? I ask because I could get this on kindle instantly (but diagrams suffer) or in paperback in a week or so. Is it worth getting the physical book over the e-book for this?

Several of the chapters are available as PDFs from the site.

Thanks, I see that now. Paperback version it is.

So I just buy the book and use software from the site? That's all that you need?


Well, that's what I call good education. Start from scratch. Understand how it's made. Then your intuition will have something to grab onto when you do more complex stuff. It doesn't feel like voodoo any more.

I don't really have the time or inclination to do all the hardware stuff with real hardware, can anybody suggest software I might use to virtually build the hardware and get the general gist?

Read the link again! The computer is implemented in an HDL and simulated. All the software needed is supplied.

I own it, I love it. It's great and should be read and worked through by everyone who is interested in computers.

Total Cost?

$0, the hardware/each chip is implemented in software. The book is about $30, but most chapters are free on the website.

I'm going to come across as defensive here, but I'm actually in a Computer Engineering program (not Computer Science). This book purports to cover as much material as 8 undergrad courses; I feel like it must skimp on depth to (for example) condense all of 'compilers' into two weeks. Compilers are a very large topic; a single undergrad course isn't even sufficient to really understand a real-world project like GCC. Likewise, 'computer architecture' makes up three classes in my curriculum: one all about building RISC processors, another about CISC, then a third about modern architectures. Only in the last one do you approach an understanding of a contemporary CPU architecture.

My question for people who have done the course is: does it cover even simple design theory like K-maps? Does it make you account for propagation delay? Does it explain caching schemes and TLBs? I feel like it probably has to gloss over a lot of the 'hard stuff' to remain so dense.

Likewise, it sounds like it's all done in custom languages. Half of my first year was spent struggling with industry standard, terrible software like Altera which is super powerful but terribly designed. The other half was spent actually breadboarding circuits and having them fail because of problems you never see in simulations (or which they solve for you).

I'm not saying it's not an interesting project, but it really is a nice, abstract diversion for people who work on software all day. People calling for it to be included in comp. eng. programs probably don't realize the depth of what actually gets covered in comp eng.

Edit: to sound a bit less whiny, if anyone is doing this course and they want to dig deeper into a particular area, I'd be happy to point them to the books/course materials we used.

The hardware content in this book is not sufficiently detailed for computer engineering. It's really for CS students who want to understand roughly how computers are made. (This is evident by counting pages in the book: the first 5 chapters are the hardware chapters and they span only 100 pages. The remaining 200 pages cover the software stack from assembly up.)

For example, one of the assignments is to design a 16-bit adder in their toy HDL, but they never cover carry lookahead adders. The only thing that matters is that your circuit passes the tests, so ripple carry is considered okay.
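
For contrast, here's what that "good enough to pass the tests" ripple-carry design looks like, sketched in Python rather than the book's HDL (my own sketch; the book's version is a chip definition, not code):

```python
# A 16-bit ripple-carry adder: each full adder waits on the previous
# stage's carry, which is exactly why it's slower in real silicon than
# a carry-lookahead design — a distinction the book never raises.

def full_adder(a, b, c):
    s = a ^ b ^ c
    carry = (a & b) | (c & (a ^ b))
    return s, carry

def add16(x, y):
    carry, out = 0, 0
    for i in range(16):                # the carry ripples bit by bit
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        out |= s << i
    return out                         # the final carry-out is dropped
```

Functionally it's correct — it just has O(n) gate delay along the carry chain, which the course's idealized timing model never penalizes.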

Similar efficiency/performance issues are glossed over throughout. Propagation delay is never covered, and the sequential circuits use idealized clocks (instant transition between low and high). They also don't describe how to build up flip flops from latches: the D-Flip Flop is given as a primitive and you build up other elements from there.

K-maps are not covered either. Caches are ignored as well.

Still, the book is amazing for its intended purpose. If you don't already know this stuff, this is an easy way to get a somewhat detailed (though abstract) view of how computers work without getting mired in all the concerns that accompany the engineering of actual computers.

The things you list are pretty much some of the base fundamentals of circuit design. I can't imagine that this curriculum is of much use without them.

Also, I seriously doubt that it covers the entire breadth of information required to create, from scratch, the entire video subsystem required for displaying graphics. Or anything like that.

I don't understand your critique. A book manages to condense the design of a computer -- from gate-level to operating system -- in a 300 page tutorial and you do not like the fact that it glosses over some details? I doubt any single person has the knowledge to build a modern computer from scratch, and few people have/need the knowledge to go from gate level to OS-level in the real world.

This book aims to change that in a simplified environment. I don't fault it for skipping over Karnaugh maps, just like I don't fault it for skipping the physics of cosmic background radiation, or the techniques used to compensate for failures in multi-level flash memory cells. These details are not on the most direct route from (simulated) NAND gates to Tetris.

My biggest problem with the book is that it gives you an adequate, hand-wavey idea of how a computer works, but it doesn't impart any hard skills. Karnaugh maps are a reusable skill in digital design. I've done hundreds of them for simple problems, it's like breathing now. I suspect using multiplexers to simplify a design is also left out. VHDL is a real language people actually use, and you can put it on your CV or do something besides this class with it. Breadboarding a real circuit and troubleshooting it is useful and opens up a whole world of projects.

This class equips you with the tools and knowledge to do one thing: finish the class. It guides you along, handing you simplified abstractions that allow you to progress without getting frustrated. At the end, however, you'll only really know how to take the class. If you wanted to drill deeper, you've already done the introductory week where they do a high-level overview of the course material.

You're not going to learn enough of any HDL in a 100 page tutorial to be particularly proficient in it anyway. Trying to explain the quirks of VHDL in a short tutorial would be a nightmare. The result would inevitably be skipping over stuff rather than explaining it. Coming from a software-development background, once you get into the higher-level aspects of writing VHDL (and I'd assume Verilog, although I have no experience with it) there's a lot of fundamental mistakes you can make in your understanding of how the language is interpreted into hardware. Using their own HDL which doesn't have much more than basic primitives stops people falling into those holes.

In my opinion, the hard part of hardware description is understanding the concepts, not the language used. If you've got a basic understanding of hardware description, the barriers to moving to VHDL or verilog will be much lower. Coming from the other direction, I can see large similarities between the concepts in their HDL and VHDL - the syntax may be slightly different, but the concepts are the same.

This is like complaining that learning algebra doesn't equip you to be a mathematician. That's not the purpose.

I studied computer engineering and computer science, and I haven't used Karnaugh maps or VHDL since college because I write software now. I'm still glad I studied computer engineering though, because it gave me a deep understanding of how computers work.

Just as meaningful as a full course. This is a "depth-first search" through the content, getting from top to bottom in one pass; you're complaining it's not a "breadth-first search" covering everything on one level. Done this way, you get the gist of how it all does, in fact, go from NAND gate to games. Yes, a lot is glossed over or missed, but once the student sees the vertical structure he can see how each layer connects, and how expanded knowledge of each serves not just one layer but those above and below (and how to leverage each in sync).

I appreciate how my education ran from sand to Skyrim (so to speak), and I find it hard to see how anyone can really function in computing without such a vertical understanding.

This is actually closer to breadth-first (since the alternative is an 'in-depth' course) in my mind, but I get your meaning. The thing is, do you actually take anything away? If you don't talk about caching in the CPU, scheduling in the OS, or propagation delay in the gates, how does that help your understanding of how to write software?

I'd be curious to know a) how deep your education actually went (since you've implied it was broad, from logic gates up to OSes and high-level programming), and b) what you actually do day-to-day for which you think this high-level overview is indispensable.

> The thing is, do you actually take anything away? If you don't talk about caching in the CPU, scheduling in the OS, or propagation delay in the gates, how does that help your understanding of how to write software?

What materials and courses structured like this excel at doing is very rapid demystification. They quickly allow the student to remove the "and this layer is black magic" notion of things and give them structure on which they can realize the limits of their own knowledge, or learn to know what they don't know. With this sort of foundation they are better equipped to teach themselves.

Materials and courses like this are not vocational, and don't pretend to be. They are very much the opposite.

We all start out not understanding how it's possible to make a CPU, write a compiler, communicate over wires, draw text into a framebuffer, and so on. These things seem like magic. This is a problem: as long as they seem like magic, we're deprived of engagement with them. You get a family of systematic errors:

* Magic is supposed to work. So you see people calling for functionality to be moved from whatever they're doing (their user-level code, say) into the magic: build something into the language, compile it to machine code instead of interpreting, do it in hardware, etc. Because of course if it's done by magic, it doesn't cost anything and it works perfectly!

* Magic is out of your control. So if it breaks, there's nothing you can do. If your operating system is crashing your program, or downloading updates you don't want, you're out of luck.

* Magic is easy. So the people who make the magic happen don't get the credit.

* Magic is memorized, not understood. So you need to memorize the incantations needed to squeeze performance from your database/OS/CPU/whatever instead of doing them yourself.

You don't need to understand how to use Karnaugh maps to understand that putting more multipliers on your chip is going to cost you real estate. You don't need to understand the different possible scheduling policies to understand that making your program multithreaded will slow it down, not speed it up, unless you have more than one core. Even a shallow understanding is sufficient to be very useful, and to enable you to question things.

It helps immensely in knowing WHY such things are useful, how they can improve (or screw up!) another layer, and where the correct solution should be implemented.

There's an old joke that the difference between computer science and computer engineering is that in the former one assumes infinite speed and infinite storage. Understanding that there are limitations, and why they exist and to what degree, is important.

As already noted, it demystifies the surrounding "magic". There's a confidence and freedom which comes from knowing that nothing in the system is beyond you.

My education indeed went from "sand to Skyrim": from basic physics & chemistry to electrochemistry to discrete electronics to quantum mechanics to semiconductor doping to hand layout of integrated circuits to automated layout of ICs (writing the automators, that is) to hardware description languages (the acronym escapes me) to logic to gate theory to basic CPU design to machine language to assembler to compiler design to C/APL/Pascal/Prolog/Lisp/C++ to OS design to discrete math to graph theory to raster graphics to 3D graphics, and a bunch of other stuff throughout. It's indispensable because I can look at any problem and grok what's happening all the way down to silicon: able to work with someone writing Windows printer drivers one day and proving a linked crossover bug in a USB driver IC the next, while discussing circuit design in between — why an elegant recursive solution causes a "drive full" error under certain conditions, why error handling in a certain protocol is pointless (already handled six layers down the network stack) — to name just a few real cases.

Knowing propagation delay in the gates can explain/reveal the limits of scheduling in the OS. Understanding drive rotation speeds provided the breakthrough of on the fly compression as an OS-level storage acceleration technique.

Take anything away? Just a sensible understanding of how everything works, and ability to drill into detail where and when needed. All learned in about 6 years, and even came out understanding why Aristophanes' plays survived for several millennia (to wit: dirty jokes endure).

What I do day to day (now)? Writing an iPad app for mobile enterprise data. Working under a genius crafting the many layers of abstraction making it fast & flexible, he can (has) describe a new way to represent very high level data, hand me a rough description of a virtual machine to process it efficiently, and I'll instantly see how it runs on server hardware. I can't imagine not having this view. As a part time teacher, I'm trying to get students from zero to binary to writing object oriented games in 12 weeks flat; to do less is to deprive them of the joy and rewards of knowing how things work - at every level.

"A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects." — Robert Heinlein, Time Enough for Love

I think you are significantly overestimating how much knowledge is necessary to do these things. This course does not claim to prepare you to work anywhere near the current level of technology; it claims to teach you enough to construct a Tetris game starting from NAND gates. I can assure you that Atanasoff (inventor of the electronic digital computer) would have killed to have a resource like this.

To go with your specific example of compilers: why would people need to learn enough to contribute to GCC to get anything out of this course? GCC is the leading open-source compiler; knowing enough to contribute is overkill.

To give an example from personal experience: I, over the course of a weekend, wrote my own operating system from scratch, without having had any formal training in systems design. I have no doubt that my system would collapse under the weight of doing anything remotely close to what we would expect a modern system to do. The only reason I was willing to risk running it on bare metal was that I had an old, semi-broken laptop that I didn't care about. But I still learned a lot from the process.

In terms of including it in a comp. eng. program, it seems like it would fit in well as a 101 course. It provides a big picture of how everything fits together, and a first look at all of the topics will help enough when you go to learn them in depth that it seems like it could be worth the time investment.

You sound a bit defensive. The title of the post is "Building a Modern Computer from First Principles". It's a book; the author doesn't claim equivalence to a computer engineering degree. The areas of study you've listed are gaps a motivated student with appropriate background can fill through self-study or other means. Depending on your chosen career after graduation, you're likely to have some gaps of your own. I don't mean to compare your curriculum to the book, but to make the point that there is always more to learn.

As some of the responses have stated the book appears to (because I haven't read it, yet) motivate (i.e. motivate as professors do as part of the introduction to a course: why are we learning this? how may we apply this? what should we learn next?) topics the reader may want to study deeper.

There's nothing wrong with taking a course that covers a very wide area, even if you're intending to specialize more deeply in all the topics later. It's actually a very effective learning strategy, because it motivates all the subsequent deeper dives.

I will definitely second that. It's simply not possible to cover every aspect of a topic within any course. Survey courses are a great solution to a very real problem.

I guess personally I would feel like I wasted a semester when I had no problem being motivated to deep dive into the other topics already. I suppose if someone was undirected and needed to pick a specialization this might help.

Ah, but if you're actually motivated there's nothing stopping you from going as deep as you want into any of the topics. There's never an excuse for "wasting" a semester.

Coursework is the minimum, not the maximum.

What I meant was that I would want to deep-dive into each topic, but we'd be busy moving on, and I'd cover all that material again next year anyways. I don't think survey courses fit my way of looking at topics, I'm very single-minded. That doesn't mean they aren't valuable or they can't work for other people. Just that I wouldn't want them to be mandatory.

I think the advantage this has is you have one continuous path from NAND gate to Tetris. Whereas in my experience attempting CS at Cal Poly, we did all these steps but they were disconnected. The output of one course was not used as input to the next.

I don't know if there is any pedagogical benefit from being able to say "I built this whole thing from scratch". But it sure is cool.

It's really hard to go from NAND to Tetris, because there's a logical jump when you get to VLSI. This is the whole notion of quantity being a qualitative property: when you get enough gates together, you really start abstracting them and thinking about higher-order components. There's no smooth zoom out, there's just a sudden discontinuity when you stop using individual gates and start using MUXes, flip-flops, etc as your primitives instead. I suspect there's another layer, but I'm not there yet ;)

I think I may have benefited from going to a smaller school, in this respect: I've had the same professor for every core 'Computer Architecture' class I've taken across 3 years (apparently there's one other guy, but they teach the same content). The courses are numbered, and they pick up exactly where the last one left off. I think the problem with trying to fit this stuff into a CS program is that it's so broad and deep, and it's not your primary focus. For me all of those classes were tightly scheduled requirements, so I took them at the right time, back to back.

It's definitely cool, and I encourage anyone who works primarily in software to check it out and get a better understanding of hardware design in an abstract way.

I don't know what you mean about MUXes. It's just simple abstraction to say "yeah, take this pattern of gates and give it a symbol so we don't have to draw it over and over again in detail". You're still just wiring up combinational logic...
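To make that "pattern of gates with a symbol" concrete, here's a minimal sketch (in Python rather than the course's HDL, purely for illustration) of a 2-to-1 MUX wired from nothing but NAND gates:

```python
# A 2-to-1 multiplexer built only from NAND gates -- the classic example
# of wrapping a pattern of combinational logic in a single symbol.
# (Illustrative sketch; nand2tetris uses its own HDL, not Python.)

def nand(a, b):
    return 0 if (a and b) else 1

def mux(a, b, sel):
    """Output a when sel == 0, b when sel == 1."""
    not_sel = nand(sel, sel)       # NOT built from NAND
    a_side = nand(a, not_sel)      # passes a through when sel == 0
    b_side = nand(b, sel)          # passes b through when sel == 1
    return nand(a_side, b_side)    # combines the two sides (an OR)

# Exhaustive check against the spec:
for a in (0, 1):
    for b in (0, 1):
        for sel in (0, 1):
            assert mux(a, b, sel) == (b if sel else a)
```

Once that truth table checks out, you never have to think about the four NANDs again; `mux` is your new primitive.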

Flip-flops and sequential logic made my brain flip-flop itself for a while though when I first ran into it. :) That really does require a different sort of thinking, since you're introducing time as a factor.
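The "time as a factor" point can be shown in a few lines: unlike a MUX, a flip-flop's output depends on stored state, not just its current inputs. A toy model (hypothetical sketch; a real flip-flop is cross-coupled gates, not a Python class):

```python
# A clocked D flip-flop modeled as state plus a clock tick. The output
# is the value latched on the *previous* edge, which is exactly the
# "time as a factor" that combinational logic doesn't have.

class DFlipFlop:
    def __init__(self):
        self.state = 0  # value held from the previous clock edge

    def tick(self, d):
        """On a clock edge, latch input d and return the old output."""
        out = self.state
        self.state = d
        return out

ff = DFlipFlop()
outputs = [ff.tick(d) for d in (1, 1, 0, 1)]
# The output lags the input by one clock cycle: [0, 1, 1, 0]
```

Feed it the same input at two different times and you can get two different outputs, which is precisely what makes sequential logic a different sort of thinking.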

I know a lot of people who didn't really move beyond thinking about boolean logic. It's easy to reason about the larger-scale components if you're reading a diagram, but it gets harder to design with them at scale.

As he mentioned, at least one school calls it computer science 101. It's not a replacement for an entire computer engineering degree.

I also studied computer engineering (and computer science), so I got all of this information over the course of 4 years, but I think it would have been valuable to take this course up front in order to immediately understand how all the pieces fit together.

It would also be valuable for computer science students who don't get most of the computer engineering material.

OK, to clarify, this is a cool book/course. I don't mean to disparage the author, they've done an excellent job condensing a large body of material. However:

The title is very ambitious. This is not really building a computer from first principles; there are some steps skipped. This is a high-level overview of modern computers, and it's worth noting there's a lot of depth to be explored.

Everyone agrees custom languages are not great. They don't really give you a lot of transferable skills; it would be cool if you really implemented C or Lisp, and did it in Verilog or VHDL.

This style of course may suit a particular type of student, who enjoys a broad overview or wants to specialize in only one area. Personally my preferred way to learn is in depth, serially, so this doesn't really apply to me. My degree also covered most of these topics anyways, so picking wasn't really a problem. I realize this doesn't apply to everyone.

A lot of comments say 'a motivated student will just learn that on their own'. This material is a good jumping off point, but (once again, in my experience) the theory is the hardest stuff to learn on your own. I would rather do the 'dull' stuff in class, then teach myself how to make games out of it (as opposed to being taught how to make games, and having to learn best practices, design techniques, theory).

Some commenters were also saying that this is unique, or it should be taught everywhere. It is unique in that it's a single, very dense class, but the material is definitely available elsewhere, in a format that I find easier to learn from. I wanted to make it clear that, if this is interesting, I think a computer engineering degree will let you learn the same stuff, but in much greater detail. Taking this class first might motivate some people, but I would find it redundant.

In conclusion, this is great, but it's not for everyone. If you like all the content but you're disappointed by how brief it seems, try computer engineering.

edit: I forgot, a lot of comments implied that understanding this material helped them do higher level programming. It's certainly cool to have a soup-to-nuts knowledge, but I still don't really understand how it could help without the topics that actually impact performance like caching, pipelining, I/O, etc.

For whatever reason, you feel the need to defend the value of your degree. You forget that people have various reasons (some personal) for seeking knowledge. It's not always about gaining marketable skills or about learning all there is to know about a subject.

Many of the points you make in your critique (lack of depth, etc) are obvious to anyone that decides to read the book. As an example, the book Learn Modern 3D Graphics Programming [1] has been posted and praised on HN in the past, but it should be obvious to anyone that there's a lot more to Computer Graphics than that book alone.

I think your comments would be more valuable if you had something more positive to add, perhaps in addition to criticism. If this book glosses over some topics, perhaps you could suggest some learning resources for those topics.

1. http://www.arcsynthesis.org/gltut/

I have to agree. The GP is getting bent out of shape. I think him describing the limitations of the course would be useful if not worded so defensively.

This is like a professional fabricator complaining that the 10 hour welding course at night-school doesn't cover welding aluminium. I don't think anyone was under the impression that this was a replacement for an engineering degree. People will do this course because it's cool to build stuff you thought was beyond you.

Also, enroll in a welding course, it's cool to be able to build big stuff out of metal too.

I pointed this out above, but this is really a course for CS people that glosses over a lot of computer engineering topics that you need to, say, design your own processor. It's definitely a tremendous accomplishment, but let's keep our pants on here.

Such as? Does it skip over K-maps or something?

If you control-f, there's squabbling at length below about the fact that they don't cover k-maps, k-maps aren't important, etc.
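For anyone outside the squabble: a K-map is just a visual tool for minimizing Boolean functions. A hypothetical example of the kind of simplification it finds: f(a,b,c) with minterms {1, 3, 5, 7} collapses from four product terms down to simply "c", which brute force confirms:

```python
# The unsimplified sum-of-products over minterms 1, 3, 5, 7 of f(a,b,c).
# A K-map groups these four cells into one block and reads off "f = c".

def f_sum_of_minterms(a, b, c):
    return int((not a and not b and c) or (not a and b and c)
               or (a and not b and c) or (a and b and c))

# Verify the simplification over the whole truth table:
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert f_sum_of_minterms(a, b, c) == c
```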

Or you're being sarcastic. I'd hate to assume that, but someone did go through and downvote all of my posts.

I was looking for more of a list of topics it omits that are absolutely required to implement a functioning processor. A sufficiently simple little register-based RISC CPU with memory-mapped IO, no interrupts, no caches or TLBs, and so on is a functioning (if gimped) computer.
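That kind of gimped-but-functioning machine really is small; the whole thing reduces to a fetch-decode-execute loop. A sketch with a made-up four-instruction set (everything here is invented for illustration, not taken from the course):

```python
# A minimal fetch-decode-execute loop for a toy register machine:
# a few registers, a program counter, no interrupts, caches, or TLBs.

def run(program, max_steps=1000):
    regs = [0, 0, 0, 0]   # four general-purpose registers
    pc = 0                # program counter
    for _ in range(max_steps):
        if pc >= len(program):
            break
        op, a, b, c = program[pc]   # fetch
        pc += 1
        if op == "li":              # load immediate: regs[a] = b
            regs[a] = b
        elif op == "add":           # regs[a] = regs[b] + regs[c]
            regs[a] = regs[b] + regs[c]
        elif op == "jnz":           # jump to address b if regs[a] != 0
            if regs[a] != 0:
                pc = b
        elif op == "halt":
            break
    return regs

# Sum 1..5: r0 accumulates, r1 counts down, r2 holds the step of -1.
prog = [
    ("li", 0, 0, 0),    # 0: r0 = 0
    ("li", 1, 5, 0),    # 1: r1 = 5
    ("li", 2, -1, 0),   # 2: r2 = -1
    ("add", 0, 0, 1),   # 3: r0 += r1
    ("add", 1, 1, 2),   # 4: r1 -= 1
    ("jnz", 1, 3, 0),   # 5: loop back to 3 while r1 != 0
    ("halt", 0, 0, 0),  # 6: done; r0 == 15
]
```

The hard part, as you say, isn't this loop; it's designing the control unit and data path that implement it in gates.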

Sorry about that, you can see I'm kind of getting clobbered below.

I don't think the problem is that the end result isn't a computer (it certainly sounds like it is), but that the computer only runs in the provided simulator, and is written in a custom HDL designed to make this project relatively simple. The simulator itself ignores a bunch of complexities around timing that a commercial one (like ModelSim) would consider.

Personally I haven't done this class, but I'd be curious to know whether the students design the control unit and data path themselves. I know that was a giant pain in the ass when I did it for a gimped RISC processor (as you described).

Probably you're getting clobbered because you made lots of negative comments about something that's really cool, and because your criticism (that it falls short of a complete degree program in computer engineering) misses the point of the thing.

The lecture slides are in Comic Sans, ugh.

I used to feel like this, until I saw Simon Peyton Jones use Comic Sans for a slide deck on Haskell and decided this was such a petty, knee-jerk reaction to something as superficial as font choice. At this point, I'd almost choose Comic Sans on purpose for my own presentations, just to weed out and troll the people who aren't paying attention to what matters.


edited to add: Dug up an actual comment from SPJ on this, I hadn't realized he uses Comic Sans for all his talks:


I've never understood the enormous enthusiasm people seem to have for badmouthing Comic Sans. It's not particularly elegant, but it's quite readable and not particularly ugly. It more or less looks like hand-lettered dialogue in comics, and that seems quite accepted, and even respected.

I suppose type designers or whoever might be particularly sensitive to whatever transgressions it commits (I dunno), but almost everybody I've seen indulge in a bit of C.S.-bashing seems to otherwise not care very much about typography at all.

As far as I can figure, it's just because people love a bandwagon, especially one that's really easy to hop onto and entails few risks....

Give it a chance. The value of the content highly outweighs the presentation. I promise you that.

Among the widely available fonts, Comic Sans is apparently the easiest for dyslexics to read.

Maybe I have too much faith in other people, but my personal weighting of social proof is far higher than most aesthetic details in a document.

This is Hacker News, not Hipster News.
