Basically it has you work hands-on through the basics of every concept active in a modern computer, save for networks and the web.
You use software tools provided with the book to design memory, ALUs, interpreters, VMs, compilers, operating systems, and applications.
Available as a free online class at http://www.nand2tetris.org
I wound up spending the most time playing around with chapter 9 (the Tetris part of Nand to Tetris) and made a very basic ray-casting game a la Wolfenstein 3D: https://youtu.be/c-J7lwKWDN8
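The spirit of the early chapters, where everything is bootstrapped from a single primitive, can be sketched in a few lines. This is my own toy illustration in Python, not the book's HDL: deriving the basic gates from NAND alone.

```python
# Toy sketch of Nand to Tetris chapter 1: every gate built from NAND.
# The book uses its own hardware description language; this just shows the logic.

def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    # De Morgan: a OR b == NAND(NOT a, NOT b)
    return nand(not_(a), not_(b))

def xor_(a, b):
    # True when at least one input is set (OR) but not both (NAND).
    return and_(or_(a, b), nand(a, b))
```

From here the book's progression to adders, an ALU, and registers is just more of the same composition.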
OK, this is an oddball one, in that a book on Evo Devo is pretty far from CS, but reading it completely changed how I look at software architecture, protocols, formats, programming languages, and a whole range of other CS-related topics.
Superficially, it is what it says on the tin: an examination of the mechanisms by which life manages to evolve at all, that keep most mutations from being immediately lethal, and instead have a decent chance of producing (potentially) useful variations.
But under the hood, I've found it extremely interesting how and why so many aspects of the living world are resilient and even antifragile, and there are a lot of useful ideas and concepts to mine beyond relatively superficial biomimicry like genetic algorithms, neural networks, or even "design for failure" approaches for software and services.
Read it, and you won't think about "Worse is Better", "Bus Numbers", or Chaos-Monkeys the same way ever again.
One of the things that I gained from the book was a bit better "feel" for getting the right granularity/modularity in order to facilitate evolvability (as in the chunks I might want to swap out for another implementation, while leaving others in place), as a separate concern from maintainability.
BTW, I've noticed that the "order blindly emerging from chaos" theme keeps popping up in fiction by writers like Bruce Sterling, Cory Doctorow, and Charles Stross (all favorites of mine).
For the viewers following along at home, this passage summarizes why I found the book interesting in the first place:
"Organisms are not analogous to human-engineered machines [...] rather, they are characterized by developmental systems that are capable of accommodating quite a bit of disruption—be that from changes in the external environment (phenotypic plasticity) or from mutations in their genetic makeup (genetic homeostasis). This ability to accommodate is in turn made possible by the modular structure of the genetic-developmental system itself, which allows organisms to evolve new phenotypes by rearranging existing components. "
To which my emotional response at the time was, "Hey, human-engineered machines are becoming less like human-engineered machines in many of the same ways!"
And this bit was the part I found most interesting about this review:
" Inherited epigenetic variants can interact with their genetic counterparts to multiply by orders of magnitude the phenotypic variation available to natural selection, thereby expanding the mechanistic bases of evolutionary theoretical explanations and greatly increasing their plausibility as an account of life's diversity."
Huh. The analogous mechanisms for software seem like they may be those that don't get propagated by code per se (but certainly affect it), like the surrounding community. I'll have to think about that some more (and probably eventually read additional Evo Devo books).
Each section contains a story of some situation he was in where he faced a problem which he solved by applying one of various algo techniques (DP, divide and conquer, etc.). After reading CLRS for a class, it was nice to see how some of the most common textbook algorithms have been applied by a notable computer scientist.
Here you will also find an important chapter that unfortunately didn't make it into the book:
The second half of the book is "the catalog" of algorithms, and I guess maybe that's what people like, but I had so few "a ha" moments reading the book that I got to page 130 before I threw in the towel.
It has the same material as Sedgewick, doesn't it?
Basically, beginner programmers can acquire a broad understanding of the foundation the programs they're building are built on by reading this book. It reads more like a non-fiction exposé than a programming-language tutorial, which is to say, given its subject, it's an easy read you can do on the couch. Depending on your skill and knowledge level, there may be a few sections you have to re-read several times until you understand them, but you won't feel as though you need to go over to your computer chair and try something to fully grasp it.
On a side note: it's my opinion that theory-first is the wrong way. Application first, theory as needed, is the right approach. Otherwise it's like learning music theory before you know you even like to play music. You might not even like being a programmer or be a natural at it. And if you spend 4 years studying theory first, you will have spent a lot of time to discover what you could have in about a month. In addition, it can suck the joy and fun out of the exploration of programming and computer science. It's natural and fun to learn as you dive into real problems. Everything you can learn is on the internet. It's very rewarding and often faster to learn things when you are learning them to attain a specific goal. The theory you do learn seems to make much more sense in the face of some goal you are trying to apply it to. In short, over your computing career you can learn the same stuff far faster and far more enjoyably if you do so paired with actual problems.
But that said, sometimes you do have to step back and allocate time for fundamentals, even if you have no specific problem they relate to. However, you will know when it's time to brush up on algorithms, or finally learn how the computer works below your day-to-day level of abstraction. Just know that a larger and larger percentage of us programmers went the applied route, rather than the computer-science-theory-first + formal-education route. It's probably the majority of programmers at this point in time. In short, you are not alone learning this as you go. Learn to enjoy that early on and save yourself from the pain of insecurity over not knowing everything. This is an exploration and investigation, and perhaps you will make some discoveries nobody else has been able to make, far before you have mastered and understood everything there is to know about the computer. Perhaps that's its biggest selling point: you don't have to know everything before you can contribute to the computer world! So enjoy your pursuits in programming, knowing that in your unique exploration you may at any time come up with something highly novel and valuable.
I think that everyone will get new information better when it is something that fixes an immediate issue or clarifies an immediate doubt.
But this is not a contradiction. Theory can be presented in such a way that you want and need to know the next piece of information, the way mystery novels work.
It can be easier for the writer to create this need with examples instead of narrative, no doubt about it.
But let's not fall in the opposite direction of having only examples and no theory, so common with blogs now. I feel empty when I read such materials.
A brilliant MIT professor's reflections on the intersection between computer science, physics and philosophy.
Here are the lecture notes and book: http://www.scottaaronson.com/democritus/
I also very much recommend the author's blog: http://www.scottaaronson.com/blog/
"It's a shame that, after proving his Completeness Theorem, Gödel never really did anything else of note. [Pause for comic effect] "
Similarly to books like GEB, it really exposed me to a whole range of fascinating ideas and topics that I am now interested in.
It was written 40 years ago, but it is still relevant today. It's so important to think about how to build effective engineering teams.
"It is the one thing every software manager needs to read... not just once, but once a year." (Joel Spolsky)
The insights from reading this book years ago certainly helped me navigate project management as a rescue guy. By the time I was hired, the failing project was already part way through the experience that Brooks' book is based on, and he was writing the OS!!!
He writes about the origin of the svc (supervisor call) in OS360. They needed a way to keep track of them as they were popping up from all over, so they made a list, and you added yours to it, then other groups could check the list before they wrote another version of the same function.
Note, you have to be willing to put the time in, especially if your linear algebra is rusty or (like me) you have only a passing familiarity with complex numbers.
With that in mind, it's almost entirely self-contained and you can immediately start to make connections with classical computing if you're familiar with automata.
I've been interested in learning about quantum computing for a few years now and this book finally got me going.
As an aside it's a really great excuse to try out one of the many computer algebra systems out there. I gave Mathematica a trial for fun since I'd already used SageMath in the past.
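For anyone similarly rusty on the linear algebra, the core objects are small enough to poke at directly. This is a minimal sketch of my own (not from the book): a qubit as a vector in C^2 and the Hadamard gate as a 2x2 matrix, with no CAS or numpy required.

```python
import math

# A qubit state is a unit vector [alpha, beta] in C^2;
# a gate is a unitary 2x2 matrix applied by matrix-vector product.

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]   # Hadamard gate

def apply(gate, state):
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

zero = [1, 0]              # |0>
plus = apply(H, zero)      # equal superposition (|0> + |1>) / sqrt(2)

# Measurement probabilities are the squared amplitude magnitudes.
probs = [abs(a) ** 2 for a in plus]

# H is its own inverse, so applying it twice recovers |0>.
back = apply(H, plus)
```

A computer algebra system earns its keep once you move to multi-qubit tensor products, but the single-qubit case fits in your head.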
This is cheating a bit, because the book is history of computer science rather than computer science, but I think anyone interested in programming should read it. We often think about history of computing (and what it teaches us) in a retrospective way (we see all the amazing totally revolutionary things that happened), but it turns out that if you look deeper, there is often a lot more continuity behind key ideas (and we just think it was a revolution, because we only know a little about the actual history). This book goes into detail for a number of major developments in (early) programming and it makes you think about possible alternatives. It was one of the books that inspired me to write this essay: http://tomasp.net/blog/2016/thinking-unthinkable/
If you work in bioinformatics, it is weird how little history the average bioinformatician knows (the field only really dates back to the late 1960s). People may know things like PAM matrices, but not Margaret Dayhoff, who created them (not to mention the standard amino acid codes used today).
I used it to supplement my prescribed compiler construction textbook and it's incredibly useful! Also surprisingly easy to read compared to some texts with a very math-heavy approach.
For anyone interested in other books on compiler construction, I would recommend these:
- "The Basics of Compiler Design" by Torben Mogensen
- "Modern Compiler Implementation in C" by Andrew Appel (ML and Java versions available too)
- "Modern Compiler Design" by Grune, Bal et al.
The Dragon Book is obviously infamous for this topic, but I would recommend covering at least two slightly more basic texts before taking it on.
Came out in November 2016. Split into 3 parts:
Part I: Applied Math and Machine Learning Basics (Linear Algebra, Probability and Information Theory, Numerical computation)
Part II: Deep Networks: Modern Practices (Deep Feedforward Networks, Regularization, CNNs, RNNs, Practical Methodology & Applications)
Part III: Deep Learning Research (Linear Factor Models, Autoencoders, Representation Learning, Structured Probabilistic Models, Monte Carlo Methods, Inference, Partition Function, Deep Generative Models)
The real soul of SICP is in its exercises, however. It asks you to build on your own prior work in inventive ways, challenging you to solve the exercises correctly, but also to do so in a maintainable way.
I wrote up some supplementary material to make the experience smoother for a practicing programmer: High-level requirements of the chapter subprojects, Pitfalls and paradigms embedded in some of the footnotes, Answers to (nearly) all exercises, a testing framework for Chapter 4 interpreter (including the JIT compiler), a GUI for running Chapter 5 virtual machine here: https://github.com/zv/SICP-guile
There's also the MIT online tutor for exercises http://icampustutor.csail.mit.edu/6.001-public/
This breaks down a bit around chapters four and five where you are plugging in parts of a larger code base, and the accidental complexity starts to dominate.
Of course someone could use the test suite to just make changes semi-randomly until it works, but there's no reason it has to be used that way.
I like the amount of projects SICP inspires. That's one of the best things about it. I wouldn't want to discourage writing a test suite, but I wouldn't suggest studying it by using one.
SICP questions are designed to have elegant solutions, which may be missed; this is one reason why a study group is better than a test suite.
Also, I found this blog post good: "Tech Book Face Off: Gödel, Escher, Bach Vs. The Annotated Turing" http://sam-koblenski.blogspot.se/2016/01/tech-book-face-off-...
Writing Efficient Programs
Also has a great bunch of "war stories" about performance tuning in real-life, including one in which people, IIRC, improve the performance of quicksort on a supercomputer by 1 million times or some such, by working on tuning as well as architecture and algorithms at several levels of the stack, from the hardware on upwards.
Edited to change Wikipedia URL to more specific one.
Interesting to know, and agreed.
The book even has a list of rules of thumb for the various kinds of optimizations it describes, with guidelines on when each one is or isn't appropriate to use (as you would know, having read it). Great writing throughout, too. Loop unrolling (for both fixed- and variable-length loops) was one of the many cool ones.
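For readers who haven't run into it, loop unrolling just trades iteration overhead for straight-line code. Here's a toy sketch of the transformation (my own, in Python purely to show the shape; the book's examples work at a lower level, where the payoff is real):

```python
def sum_simple(xs):
    # One loop-bookkeeping step (test + increment) per element.
    total = 0
    for x in xs:
        total += x
    return total

def sum_unrolled(xs):
    # Unrolled by 4: one bookkeeping step per four elements,
    # plus a cleanup loop for the leftover 0-3 elements.
    total = 0
    n = len(xs)
    i = 0
    while i + 4 <= n:
        total += xs[i] + xs[i + 1] + xs[i + 2] + xs[i + 3]
        i += 4
    while i < n:
        total += xs[i]
        i += 1
    return total
```

The variable-length case above needs the cleanup loop; when the trip count is a known constant, the compiler (or you) can drop it entirely.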
A great review of college math and CS. The book goes over FOPL, Set theory, Turing machines and lambda calculus among other things. A fun read!
The dozen or so principles Varghese lays out for writing fast networking code is reason enough to pick it up. The writing is very approachable, too. Reminds me more of the early network operator books than a CS text.
It was a real eye-opener of what types can do.
The book is divided into two sections: Principles and Practice. The former introduces basic terminology, concepts and common mistakes. The latter is where it gets interesting and introduces real-world problems like cache coherence traffic.
The language used in the book is Java but the concepts can be applied to practically any language that supports concurrency. Highly recommended!
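Since the concepts do port to any language with threads, here is a minimal sketch of my own (not taken from the book) of the shared-counter pattern in Python; the lock is what makes the read-modify-write safe:

```python
import threading

class Counter:
    # Read-modify-write on shared state needs mutual exclusion,
    # whatever the language; this is the Python shape of it.
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:   # without this, concurrent increments can be lost
            self.value += 1

def hammer(counter, n):
    for _ in range(n):
        counter.increment()

counter = Counter()
threads = [threading.Thread(target=hammer, args=(counter, 10_000))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock, counter.value is 40_000 on every run.
```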
An interesting mix of computer science and psychology. Just started on this one recently. Highly recommended by a colleague. Seems great so far.
The reasons and explanations given seem to aim for a technical, but not too technical, approach to algorithms, getting stuck in a place that probably just leaves both audiences a bit unhappy. For instance, there is a chapter that mentions that the optimal stopping point is ~37%, but there is never any mention of how the 37% number is found. Of course, I could look it up, but I could just as well look up the optimal stopping problem.
Aside from that, the examples come across as contrived and inapplicable. Sure merge sorting your socks sounds great, but I still will never do it!
I still think it is better suited for those with little to no CS knowledge.
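On that 37% figure: it's 1/e, which falls out of the standard secretary-problem analysis. If you'd rather see it than derive it, a quick simulation (my own sketch, not from the book) makes the number visible:

```python
import random

def secretary_trial(n, skip_frac, rng):
    # Secretary problem: observe the first skip_frac*n candidates without
    # committing, then take the first one better than everything seen so far.
    ranks = list(range(n))            # rank 0 is the best candidate
    rng.shuffle(ranks)
    cutoff = int(skip_frac * n)
    best_seen = min(ranks[:cutoff], default=n)
    for r in ranks[cutoff:]:
        if r < best_seen:
            return r == 0             # did we pick the overall best?
    return ranks[-1] == 0             # forced to settle for the last one

def success_rate(n=100, skip_frac=0.37, trials=20_000, seed=1):
    rng = random.Random(seed)
    wins = sum(secretary_trial(n, skip_frac, rng) for _ in range(trials))
    return wins / trials

# With the ~37% cutoff, the success rate hovers near 1/e ~ 0.368,
# and drifts down if you move the cutoff in either direction.
```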
Many of the algorithms and ideas are familiar, but the novelty here is how one can map these CS lessons to improve performance on the human OS and human network.
Besides being a good introduction to Scala, that book also taught me principles of computer programming. It's worth reading again several times.
It's the easiest book I've found to get into machine learning without any formal statistics training.
The story they've wrapped around the shift to Agile for this company was extremely entertaining and kept me coming back to read more! Plus it has some very good thought exercises to take with you to your own job :)
Grokking functional programming and deep learning seem awesome as well, but I haven't finished either so take my recommendation with a pinch of salt.
The Design and Implementation of the FreeBSD Operating System
The gist is a model of computation based on knotting the worldlines of a certain kind of particle (well, particle/anti-particle pairs) and measuring properties of the knots/links. It's also the theory behind Microsoft's effort to build a quantum computer.
Highly recommend at least reading the non-technical sections (i.e., everything but section 3 and appendix A).
Copy of paper: https://arxiv.org/abs/0707.1889
It introduces some important parallel patterns, explains what makes them tick, and shows how to make them efficient. The book makes a strong case for using parallel frameworks, namely TBB and Cilk+, to create general and portable solutions.
There's also a set of slides available.
Edit: I'm not connected to the book.
(The alternative is learning PDP-11 assembler and reading the original Unix v6 sources with Lion's commentary.)
A very clear book that teaches you to think about integrating data semantically. Make sure to buy the second (or later) edition.
Had reddit :)
That reminds me about the book:
Open Sources: Voices from the Open Source Revolution:
It is free to read at the above link. I had read many of the chapters some years ago. Pretty interesting stuff.
It consists of multiple chapters, each written by different well-known people associated with prominent projects from the open source movement, at a time when it was relatively new (1999).
The paragraphs that I remember the most from it are from this chapter:
Future of Cygnus Solutions
An Entrepreneur's Account
(Cygnus Solutions was a company which initially ported and improved GCC and its toolchain to multiple Unix platforms, and made a business out of it, before open source was a gleam in many people's eyes. They did well and were acquired by Red Hat some years later.)
Here are the paragraphs:
Again, a quote from the GNU Manifesto:
There is nothing wrong with wanting pay for work, or seeking to maximize one's income, as long as one does not use means that are destructive. But the means customary in the field of software today are based on destruction.
Extracting money from users of a program by restricting their use of it is destructive because the restrictions reduce the amount and the ways that the program can be used. This reduces the amount of wealth that humanity derives from the program. When there is a deliberate choice to restrict, the harmful consequences are deliberate destruction.
The reason a good citizen does not use such destructive means to become wealthier is that, if everyone did so, we would all become poorer from the mutual destructiveness.
Heavy stuff, but the GNU Manifesto is ultimately a rational document. It dissects the nature of software, the nature of programming, the great tradition of academic learning, and concludes that regardless of the monetary consequences, there are ethical and moral imperatives to freely share information that was freely shared with you. I reached a different conclusion, one which Stallman and I have often argued, which was that the freedom to use, distribute, and modify software will prevail against any model that attempts to limit that freedom. It will prevail not for ethical reasons, but for competitive, market-driven reasons.
At first I tried to make my argument the way that Stallman made his: on the merits. I would explain how freedom to share would lead to greater innovation at lower cost, greater economies of scale through more open standards, etc., and people would universally respond "It's a great idea, but it will never work, because nobody is going to pay money for free software." After two years of polishing my rhetoric, refining my arguments, and delivering my messages to people who paid for me to fly all over the world, I never got farther than "It's a great idea, but . . .," when I had my second insight: if everybody thinks it's a great idea, it probably is, and if nobody thinks it will work, I'll have no competition!
-F = -ma
You'll never see a physics textbook introduce Newton's law in this way, but mathematically speaking, it is just as valid as "F = ma". The point of this observation is that if you are careful about what assumptions you turn upside down, you can maintain the validity of your equations, though your result may look surprising. I believed that the model of providing commercial support for open-source software was something that looked impossible because people were so excited about the minus signs that they forgot to count and cancel them.
An invasion of armies can be resisted, but not an idea whose time has come.
There was one final (and deeply hypothetical) question I had to answer before I was ready to drop out of the Ph.D. program at Stanford and start a company. Suppose that instead of being nearly broke, I had enough money to buy out any proprietary technology for the purposes of creating a business around that technology. I thought about Sun's technology. I thought about Digital's technology. I thought about other technology that I knew about. How long did I think I could make that business successful before somebody else who built their business around GNU would wipe me out? Would I even be able to recover my initial investment? When I realized how unattractive the position to compete with open-source software was, I knew it was an idea whose time had come.
The difference between theory and practice tends to be very small in theory, but in practice it is very large indeed.
In this section, I will detail the theory behind the Open Source business model, and ways in which we attempted to make this theory practical.
We begin with a few famous observations:
And here is the full chapter by Tiemann:
Not finished it yet, but it's a joy to read and explains fundamental concepts of things like parsers, interpreters and Lambda Calculus using minimal Ruby syntax.
Butcher: Seven Concurrency Models in Seven Weeks
Code by Charles Petzold.
Of the ones that I haven't finished, but have at least looked at, I think I'd say:
Machine Learning for Hackers by Drew Conway and John Myles White
The Master Algorithm by Pedro Domingos
# Elements of Programming
This book proposes how to write C++-ish code in a mathematical way that makes all your code terse. In this talk, Sean Parent, at that time working on Adobe Photoshop, estimated that the PS codebase could be reduced from 3,000,000 LOC to 30,000 LOC (a 100x reduction!) if they followed the ideas from the book: https://www.youtube.com/watch?v=4moyKUHApq4&t=39m30s
Another point of his is that the explosion of written code we are seeing isn't sustainable, and that so much of this code consists of algorithms or data structures with overlapping functionality. As codebases grow, and these functionalities diverge even further, reining in the chaos becomes gradually impossible.
Bjarne Stroustrup (aka the C++ OG) gave this book five stars on Amazon (in what is his one and only Amazon product review lol).
This style might become dominant because it's only really possible in modern successors of C++ such as Swift or Rust, not so much in C++ itself.
# Grammar of graphics
This book changed my perception of creativity, aesthetics and mathematics and their relationships. Fundamentally, the book provides all the diverse tools to give you confidence that your graphics are mathematically sound and visually pleasing. After reading this, Tufte just doesn't cut it anymore. It's such a weird book because it talks about topics as disparate as Bayes' rule, OOP, color theory, SQL, chaotic models of time (lolwut), style-sheet language design and a bjillion other topics, but somehow all of them are always very relevant. It's like if Bret Victor were a book: a tour de force of polymathic insanity.
The book is in full color and it has some of the nicest looking and most instructive graphics I've ever seen, even for things that I understand, such as the Central Limit Theorem. It makes sense that the best graphics would be in a book written by the guy who wrote the book on how to do visualizations mathematically.
The book is also interesting if you are doing any sort of UI work, because user interfaces are definitely just a subset of graphical visualizations.
# Scala for Machine Learning
This book almost never gets mentioned but it's a superb intro to machine learning if you dig types, scalable back-ends or JVM.
It’s the only ML book that I’ve seen that contains the word monad so if you sometimes get a hankering for some monading (esp. in the context of ML pipelines), look no further.
Discusses setup of actual large scale ML pipelines using modern concurrency primitives such as actors using the Akka framework.
# Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques for Building Intelligent Systems
Not released yet but I've been reading the drafts and it's a nice intro to machine learning using modern ML frameworks, TensorFlow and Scikit-Learn.
# Basic Category Theory for Computer Scientists
Not done with the book, but despite its age it's hands down the best intro to category theory if you care about it only for CS purposes, as it tries to show how to apply the concepts. Very concise (~70 pages).
# Markov Logic: An Interface Layer for Artificial Intelligence
Have you ever wondered what's the relationship between machine learning and logic? If so look no further.
# Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning series)
Exhaustive overview of the entire field of machine learning. It's engaging and full of graphics.
# Deep Learning
You probably have heard about this whole "deep learning" meme. This book is a pretty self-contained intro into the state of the art of deep learning.
# Designing for Scalability with Erlang/OTP: Implement Robust, Fault-Tolerant Systems
Even though this is an Erlang book (I don't really know Erlang), a third of it is devoted to designing scalable and robust distributed systems in a general setting, which I found worth it on its own.
# Practical Foundations for Programming Languages
Not much to say, probably THE book on programming language theory.
# A First Course in Network Theory
Up until recently I didn't know the difference between graphs and networks. But look at me now, I still don't but at least I have a book on it.
I always go to the book author's page first, not only to get the errata but also to discover things such as free lectures, as in the case of Skiena's Algorithm Design Manual.
They are referring customers to Amazon, and customers don't pay extra.
Do you enjoy viewing commercials and product placements without the proper disclaimer? Because this is exactly what this is. I surely don't appreciate hidden advertising, not because of the quality of the advertised products, but because I cannot trust such recommendations, as a salesman can say anything in order to sell his shit.
Notice how this is the biggest list of recommendations in this thread. Do you think that's because the author is very knowledgeable or is it because he has an incentive to post links?
Please don't project your behavior onto others. I take book recommendations seriously. I actually really enjoy it, people have told me IRL that my recommendations helped them a lot.
> Notice how this is the biggest list of recommendations in this thread.
They are all books that I've read in the last ~4 months (not all in their entirety). Just FYI, I'm not sure how much money you think I'm making off this, but for me it's mostly about the stats; I'm curious what people are interested in.
> Do you think that's because the author is very knowledgeable
I'm more than willing to discuss my knowledgeability.
> or is it because he has an incentive to post links?
It's the biggest list because due to circumstances I have the luxury of being able to read a ton. I own all the books on the list, I've read all of them and I stand by all of them and some of these are really hidden gems that more people need to know about. I've written some of the reviews before. Just FYI I've posted extensive non-affiliate amazon links before and I started doing affiliate only very recently.
Furthermore, HN repeatedly upvotes blog posts that contain affiliate links. Why is that any different?
I learnt a lot about what I don't know: being able to do rough math off the top of my head, thinking critically and outside the box.
Also the book smells good. Most of the time I read certain books because they smell good.
Available also online for free at https://hpbn.co/
Lots of great insights about how TCP/IP, 3G, etc. work and how they affect the performance of websites.
Constructing Turing Machines is real fun.
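It really is, and a workable simulator is only a screenful of code. Here's a minimal sketch of my own in Python, with a tiny machine that inverts every bit of its input:

```python
def run_tm(tape, rules, state="start", blank="_", max_steps=10_000):
    # rules maps (state, symbol) -> (new_state, write_symbol, move),
    # where move is -1 (left), +1 (right), or 0. Halts in state "halt".
    cells = dict(enumerate(tape))     # sparse tape, defaulting to blank
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A one-state machine that flips 0<->1 until it hits the blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
```

From here, adding states for something like unary addition or a busy beaver is an afternoon of fun.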
Maybe stretching outside the typical, but I have a weakness for cellular automata.
Also, I've vowed never to buy another O'Reilly book or similar ever again. Online docs are free and stay up to date.
At least he doesn't have his head up his ass the way Mandelbrot did.
And on top of it all he generated a great deal of original work.
I see you've never met Stephen.
The only new thing I found was a Turing Machine with (IIRC) one less tape; the whole book could have been reduced to a four-page paper, as far as new work is concerned.
I read it twice and it is still a gem
Thinking Clearly about Performance, Cary Millsap (Method R Corporation)
Awesome even if you don't want to do any C++
Copyright © 2009 Pearson Education, Inc.
All rights reserved. Printed in the United States of America. This publication is protected by copyright, and permission must be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or likewise.
I see no indication that the site is affiliated with Pearson or Bob Martin or that they have permission to redistribute the PDF. It's much more likely they're hosting it without permission, and therefore illegally.