1. Understanding what a proof is, what is required to prove some proposition, how to figure out and write a proof, how to cope with struggling to prove something, and how to carefully read definitions.
2. Just knowing the "basic" things. In the case of SICP these include single-variable calculus, plenty of elementary algebra, an understanding of polynomials and exponentials, and vectors and linear transformations (roughly the level of geometry required for basic classical mechanics; cf. SICM, though that is a slightly more advanced way of doing mechanics).
It is easy to see this requirement by looking at the exercises in the book. They start with calculus-themed exercises (e.g. Newton's method), move on to more algebra-flavoured problems, and then to vectors/geometry. Later exercises are not about mathematics as such, but still require a reasonable amount of mathematical sophistication to answer. Throughout, there are exercises which rely on the reader's careful reading of carefully written definitions for things like models of evaluation.
I have another issue which is with the following:
> CS is basically a runaway branch of applied math, so learning math will give you a competitive advantage.
I disagree with this almost entirely. I would say that computer science is basically a branch of pure mathematics, with applications to engineering practice on real-life computers. I'm not sure I would describe it as runaway, and some branches of computer science (e.g. operating systems) are, I think, not really branches of mathematics at all.
So when I came across all the math-related stuff in it, I'd have to stop reading and go learn about things that were basically orthogonal to the subject matter I was actually interested in the book for (e.g. principles for effectively managing complexity with functional constructs).
It's one thing to say CS and math have a similar intrinsic structure, but that doesn't mean the content of the two fields needs to overlap, and we shouldn't assume the content of one in the other. Imagine reading a book on real analysis that kept bringing in examples requiring you to know CS.
Sadly, I think it is hard to find good exercises for this sort of early computer science outside of mathematics. It seems to me there is less variety in the computational constructs one might use when solving problems about producing or processing text (at least ones that are concisely stated and have small solutions), but I do think the book emphasises mathematical examples more than it needs to. For example, there are other possible first examples of higher-order functions than Newton-Raphson (see the sketch below), and proving things on the way to a logarithmic Fibonacci algorithm is largely irrelevant to the kind of computer science that people looking at Teach Yourself Computer Science are interested in.
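To sketch what I mean (a made-up example, nothing from the book): a first higher-order function can be text-flavoured just as easily as numerical, e.g. in Python:

```python
# A text-flavoured first example of a higher-order function (illustrative
# names, not from SICP): apply_to_words takes a function as an argument
# and maps it over the words of a line.
def apply_to_words(f, line):
    return " ".join(f(word) for word in line.split())

def shout(word):
    return word.upper()

def censor(word):
    return "*" * len(word) if word.lower() == "secret" else word

print(apply_to_words(shout, "hello there"))           # HELLO THERE
print(apply_to_words(censor, "the secret password"))  # the ****** password
```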
I’m not really convinced that Scheme is the be-all and end-all of teaching languages. It has some nice qualities and some that can make programming more difficult. For example, I think the kind of interaction it encourages is good, but in that interaction one is often essentially repeatedly refactoring a small part of the program, and the language does not offer much to ensure that these refactorings are correct, for any definition of correct. I think an ML-style language could be good for teaching too, although the errors produced can often be quite unhelpful.
The book seems like it covers some nice introductory things (like splitting up programs into small functions) and some harder things to get to grips with like quoting. But I haven’t had a very thorough look.
The thing I really like about SICP is the idea of building up a model of how programs are interpreted and evaluated. The book proposes plausible models of evaluation and demonstrates how one can test between them and find where they break down. I like this for two reasons:
1. It really feels like science: coming up with ideas, testing them, seeing where you are wrong. This feels especially useful when one considers modern systems built out of so many individually large pieces that it is hard to keep track of everything perfectly. It is useful in real life to build up models of what's going on and to think about how you might test them and where they might be wrong. I like that the book reminds you that computer systems may be probed in the same scientific way as the real world.
2. Because these evaluation models are implemented in code, the process of compilation and evaluation comes to seem much more understandable and less like magic. One can get an OK understanding of machine code and how a machine roughly works without ever having to write any.
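To give a flavour of what "an evaluation model implemented in code" means (a toy Python sketch, far smaller than SICP's metacircular evaluator in Scheme):

```python
# A toy evaluator for nested prefix expressions such as ("+", 1, ("*", 2, 3)).
# It mirrors the recursive shape of an evaluation model: evaluate the
# operands, then apply the operator. (Illustrative only.)
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr):
    if isinstance(expr, (int, float)):  # a number evaluates to itself
        return expr
    op, left, right = expr              # otherwise: (operator, operand, operand)
    return OPS[op](evaluate(left), evaluate(right))

print(evaluate(("+", 1, ("*", 2, 3))))  # 7
```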
On the other hand, maybe these reasons are wishy-washy and no one really needs such concepts for computer science.
I had a much better time with "What is Mathematics?" (aka "Courant and Robbins"), though it still suffers from some similar problems (I remember being baffled by all the attention paid to these properties like commutativity, associativity, etc.—"why should I care about this!?").
Really, I still can't think of a single book that was great for overcoming my own "math illiteracy"; it was mostly just a haphazard process of trying enough different things that eventually various parts started falling into place.
But I guess part of what's going on is that what works for one person is gonna be awful for another. So I just want to say, maybe "How to Prove It" will be great for you (other readers)—but don't feel bad if it isn't.
I remember reading a few sections from "What is Mathematics?" too. Some chapters on topology, and the preface.
So, yeah, I also succeeded because I drowned myself in everything possible until I got used to it. It's just that I never felt any frustration with Velleman's book, so I knew that every time I opened it I would learn something new, wouldn't get confused, and the level of difficulty would be just right. That's why I kept coming back to this book and completed most of it, while I didn't make much progress in others.
Also, it's interesting how similarly we seem to have gone about learning math. I kept it as my main focus for a similar amount of time, then as a secondary focus while working on a big software side project for another 7 months or so (I was working at a grocery store for money the whole time), then got my first software engineering job and largely dropped any focused study of mathematics. But now I'm in a position where I'll learn bits of math as I need them for projects or whatever, so it definitely wasn't a waste (and, like you mentioned about being able to read stuff off arXiv, it expanded the range of ideas I can understand).
I've actually started getting interested in continuing study again, getting more into applied math this time since I really focused on pure last time. First goal is to get a clear understanding of Maxwell's Equations :)
It took me a long (and often frustrating) time to figure out that the perception of difficulty is largely due to how implicit almost everything is in mathematics. It's not like programming, where eventually there is a compiler with a definite structure that's going to make your program behave in some exact, specific way.
Even worse is the way we're typically taught mathematics: it always kind of starts in the middle (as opposed to the beginning). We aren't taught what it is or how it works, so it's really hard to situate the particular mathematical concepts we're taught within a larger coherent framework.
The way I interpret mathematical works changed so much (and for the better) once I realized that any significant bit of math you're using is the result of something a human was trying to accomplish. In the same way you can point to a piece of code and ask its developer what the motivation was for some abstraction or algorithm, there are always reasons behind the development of mathematical systems. Once I understood that, rather than mathematical systems being disconnected things that materialized out of nowhere, I started seeing commonalities, considering the reasoning behind them, and seeing more of the network that relates them. It became a lot more enjoyable. I'd recommend checking out some history of mathematics if you'd like to get a better sense of that (my personal intro recommendations: "Men of Mathematics" by E.T. Bell, or maybe "Mathematics and the Imagination," which isn't history so much but serves a similar goal).
If you're looking for something on algorithms specifically, the book recommended by the author ("The Algorithm Design Manual") is a pretty friendly read and largely non-mathematical, while still containing all the useful bits (unless you're looking for a reference work on computational complexity or something).
Which is a completely wrong impression due to how it is taught in school and to undergrads in the US.
Speaking as a mathematician: since one is reasoning about abstract objects, everything is necessarily pedantically explicit (otherwise proofs could not possibly work). Hence, in US graduate courses or EU undergrad courses you start from scratch, rigorously define every symbol you ever write, and justify every step you take (e.g. given a field, why is "1" distinct from "0"? What is a derivative? Prove that it is actually well-defined, that it exists for such-and-such functions, etc.).
What many people mistake for implicitness is the heavy polymorphism and terseness of mathematical notation. For instance, one often identifies a function with its graph or its image, depending on which makes sense in context. Rigorous courses spend much time proving that any possible ambiguity in the notation is actually no ambiguity at all; one is allowed to use the shorthand because all interpretations are equivalent.
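To spell out one instance of that polymorphism (standard definitions, added here for illustration), the same letter f routinely denotes three distinct objects:

```latex
f \colon A \to B                                                % the function
\qquad
\Gamma_f = \{\, (a, f(a)) : a \in A \,\} \subseteq A \times B   % its graph
\qquad
f(A) = \{\, f(a) : a \in A \,\} \subseteq B                     % its image
```

A function and its graph determine each other, which is why passing silently between them is harmless once that equivalence has been checked.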
My biggest frustration learning math when I was younger was not knowing -why- I was doing something a certain way or how it worked. It was incredibly discouraging because it didn’t just “click” like it seemed to with my peers.
It's a very healthy mindset to have, but it's a blessing and a curse. Some people are just very good at abstracting away the details and getting on with things; what seems like "clicking" is not always the same as intuition. It's probably good to operate in both modes: it's a really useful skill to know when you need to care about the internals and when you can safely ignore them.
One teacher kept saying "f(x)" but couldn't explain what a function is. He just said "it's anything", then "don't worry about it". If he had even said "a function is like a machine that takes number(s) as input, changes them with a formula and outputs the new number(s)", I think it would have helped us grok.
I think he understood math(s) so well that he couldn't relate to someone who didn't know what a function was.
Helpful, but not quite right, and one of the common misconceptions students seem to carry over from high school.
A function is something that takes inputs from a set (its domain, e.g. the people in this class) and gives you exactly one output of a prescribed type (e.g. a date, the function then being, for instance, person -> birthdate). Neither numbers nor a formula are needed.
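In code, the same point (hypothetical data, just to show that neither numbers nor a formula appear):

```python
# A function in the mathematical sense: each input in the domain (a person)
# is assigned exactly one output (a birthdate). No numbers, no formula --
# just an assignment of outputs to inputs. (Made-up data.)
BIRTHDATE = {
    "Ada":   "1815-12-10",
    "Alan":  "1912-06-23",
    "Grace": "1906-12-09",
}

def birthdate(person):
    return BIRTHDATE[person]  # exactly one output for each valid input

print(birthdate("Ada"))  # 1815-12-10
```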
> I think he understood math(s) so well that he couldn't relate to someone who didn't know what a function was.
I think he probably could, but the syllabus explicitly said not to discuss this ("too abstract"), and there is time pressure. High school maths is mostly somewhat handwavy, stringing "definitions" together from examples. So you would mostly see lots of examples of functions, and the "definition" of 'function' is then simply "things like that".
I was the same as you describe yourself, and I always just assumed the kids who were best in the class were doing what I was trying to do, only much better. Then you see later how fragile and limited one's understanding is when taking the memorization approach, but as a kid you don't have many options.
Schools tend to not teach mathematics but memorization, unmotivated formulas and computation. Mathematics as its own discipline is basically only "why and how", i.e. proofs.
Advanced math is linked with computing mostly because the roots of computing lie in mathematical logic.
So you can become a master programmer without advanced math knowledge.
In short, most algorithms 101 books should be accessible to someone who considers themselves math illiterate.
Statistics, for example, appears in almost any non-trivial computer science problem: network congestion, choosing proper parameters for algorithms, the probability of lock contention, the distribution of dispatched instructions, audio psychoacoustics, etc.
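For instance, here's the kind of back-of-the-envelope lock-contention estimate I mean (a birthday-problem-style sketch with made-up numbers):

```python
# If n threads each grab one of k locks uniformly at random, the chance
# that at least two contend on the same lock is a birthday-problem
# calculation. (Toy model with made-up numbers, purely illustrative.)
def contention_probability(n_threads, n_locks):
    p_no_collision = 1.0
    for i in range(n_threads):
        p_no_collision *= (n_locks - i) / n_locks
    return 1.0 - p_no_collision

# Even with 8x more locks than threads, contention happens ~35% of the time.
print(contention_probability(8, 64))  # ~0.35
```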
Computer vision and graphics programming require a solid foundation in linear algebra and geometry.
In short, those tools are pretty necessary if you want to approach computer science as a science and as a tool for engineering beyond the very basics.
A lot of real-world problems can be solved without the programmer needing advanced math knowledge.
- Thinking about and fixing bugs related to OS scheduling, like "why doesn't my time.Sleep work as expected?" (Intro to Systems, Operating Systems; see the sketch after this list).
- Understanding the protocol layers involved in a network performance problem; writing network services to RFC specs (Networks and Distributed Systems).
- Selecting, working with, and understanding the limitations of distributed databases (Advanced Distributed Systems).
- Shifting the bulk of tricky computing into pure functions; borrowing design patterns like the state monad (Functional Programming).
- Parsing serialized data (Formal Languages for basic theoretical grounding, Programming Languages for compiler and interpreter implementation).
- Dealing with concurrency and using synchronization primitives (Intro to Systems).
- Awareness of security concerns and techniques, respect for the subtlety of crypto implementation, literacy when reading HN on subjects like ASLR or the Juniper RNG compromise (Intro to Security).
- Basic comfort with C, Make, vim, bash, awk, etc. (Intro to Programming).
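On the OS-scheduling bullet above: a sleep call is a request to the scheduler for *at least* that long, not an exact timer. A minimal sketch of how to see this (Python's time.sleep rather than Go's time.Sleep, but the behaviour is the same):

```python
# sleep(t) asks the OS to wake us after *at least* t seconds; the actual
# delay depends on timer resolution and scheduling load, so it's almost
# never exactly t. (Minimal illustration, not a benchmark.)
import time

for _ in range(3):
    start = time.perf_counter()
    time.sleep(0.010)  # request: 10 ms
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"requested 10.000 ms, got {elapsed_ms:.3f} ms")
```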
The vast majority of my education was spent on programming projects to implement key components up and down the stack. Having a decent understanding of what they’re doing and how to approach writing them has been invaluable. The only thing I truly haven’t touched since college is discrete probability.
- People without a CS background who don't see where it would be useful
- People with a CS background who don't know how to use it
- People without a CS background who realize where the science can be useful
- People with a CS background who know where and how to apply it to everyday problems
Unless someone is building forms all day long (and even then), it's going to be useful. Sure, you can build apps without it, but they'll be mediocre instead of good.
I learned almost all the basis for what I do in my day-to-day practice as a programmer from:
- my high school programming classes
- my grad school practice of learning how to research and read
- a bunch of middle school classes in formal logic
- playing with a ton of programming tasks
The CS stuff that I've read in the decade since quitting the pursuit of a PhD in literature has been way less helpful than simply trying to fix bugs in my code.
I don't know if my career is representative of other folks' work, but at my "level" I consider programming to be a lucrative trade, and much of the CS stuff is almost totally irrelevant to that, compared to having awareness of the specifics of whatever larger system I am trying to diagnose or modify.
I agree that the basis for this profession can be taught in a couple of classes; I feel that almost all of what I do comes down to playing with the actual technology and trying to solve problems with it; theory is only useful after you have a fundamental feel for the nature of the problems at hand.
I was working on an iOS app that connects to a GraphQL API. All GraphQL requests are POSTs, and POSTs aren't cacheable, so I had to implement a client-side cache. Implementing a cache, and understanding the pros and cons of an implementation, is an exercise in CS.
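For a flavour of what that exercise involves (a minimal sketch, not the cache from that app; real ones also need TTLs and invalidation), an LRU cache keyed on the query is a typical starting point:

```python
# A minimal LRU cache, the kind of thing a client-side GraphQL cache
# might be built around. Keyed by the query text plus its variables.
# (Sketch only; hypothetical usage below.)
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity=128):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)  # mark as most recently used
        return self._store[key]

    def put(self, key, value):
        self._store[key] = value
        self._store.move_to_end(key)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

cache = LRUCache(capacity=2)
cache.put(("query { me }", ()), {"me": {"id": 1}})
print(cache.get(("query { me }", ())))  # {'me': {'id': 1}}
```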
I was implementing an animation. I drew the animation out on graph paper and worked out the transformations using stuff I learned in my linear algebra and computer graphics classes.
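As a generic illustration of the kind of linear algebra involved (not the parent's actual animation), rotating a point with a 2x2 matrix is exactly the sort of transformation you'd work out on graph paper:

```python
# Rotating a 2D point about the origin with a rotation matrix --
# the bread-and-butter linear algebra behind many animations.
import math

def rotate(point, degrees):
    x, y = point
    t = math.radians(degrees)
    # | cos t  -sin t | | x |
    # | sin t   cos t | | y |
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

print(rotate((1.0, 0.0), 90))  # approximately (0.0, 1.0)
```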
And in general, thinking through the tradeoffs of Swift vs Objective-C or REST vs GraphQL or Ruby vs Elixir, etc, etc is an exercise in CS.
Unfortunately, we seem to live in a cargo cult world. Good enough for Facebook/Google/Etc? Good enough for us. ¯\_(ツ)_/¯
I was lucky that by the time I got to college, I had already been hacking around with Applesoft BASIC and 65C02 assembly language for six years. I picked up C on my own, and that carried me through the first 12 years of my career. Knowledge of algorithms helped me a little since we had to do everything from scratch, but most modern developers wouldn't even need that.
If you have many, I'm going to assume that's at least three. Surely that's a PhD then? Otherwise you've been standing still, doing multiple bachelor's or master's degrees, which wouldn't make any sense.
If you've done a PhD and then go into a field which didn't need that research, then that's unusual. Why did you bother with the PhD?
But the actual DevOps work I do, while I use a bit of distributed systems theory to understand things, is mostly orthogonal to my education. The theory I do use I can easily explain to non-CS-background employees in about 10-20 minutes. The practical knowledge I use (Linux, coding, s/w eng.) I picked up in my own time and in one or two classes.
Why did I get the PhD? Primarily for pie in the sky reasons.
The knowledge and research. Not in order to get a job or teach in academia.
Built-ins are made to work well in the most common case. Often, you can use domain knowledge to come up with better solutions, but you need to know how common sorting algorithms work to understand if and when that domain knowledge can speed up your program.
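A classic instance of that trade-off (generic example, not from the parent): if you happen to know all your values are small non-negative integers, counting sort can beat the general-purpose comparison sort:

```python
# Domain knowledge beating the general-purpose built-in: when all values
# are ints in a known small range, counting sort runs in O(n + k) time
# instead of a comparison sort's O(n log n). (Illustrative sketch.)
def counting_sort(values, max_value):
    counts = [0] * (max_value + 1)
    for v in values:
        counts[v] += 1
    out = []
    for v, c in enumerate(counts):
        out.extend([v] * c)
    return out

data = [3, 1, 4, 1, 5, 2, 0, 5]
assert counting_sort(data, max_value=5) == sorted(data)
```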
Also, CS is different from programming, just as math is different from building a house.
Received a comp-sci degree in 2008 from a small liberal arts college. I was one of five grads in my year, and most of the last two years were spent not on "practical" tech but on theory, algorithms, etc. My liberal arts education taught me how to write, how to discuss solutions, how to explore tough concepts, how to break problems down into smaller subtasks, and more.
Skills are important. But theory matters too. Just because one has value does not rule out the other.
I am not so much arguing against CS theory, but CS education.
The majority of software engineers are application developers making CRUD websites or mobile apps, so that perspective is the one to come up most often. I also happen to be one of those developers.
The challenges of application development are related to transforming data, handling asynchronous operations, managing state, and picking elegant abstractions that solve your problems. The intuition for these things is mostly picked up through hours of professional development, seeing good code, and shooting yourself in the foot a couple of times.
While there are some harder problems in app dev which do require deeper computer science understanding, they're extremely rare. I suspect this is different for people doing things like video game development, although I don't have any experience there so I can't speak to that.
My experience is that teams that act as a platform (and I’m using this term very loosely) tend to have lower level problems to solve. Think of AWS teams vs. large companies. A large company might be dealing with high scale; something like 100k+ transactions per second. An AWS team can have many large companies as their customers, so their scale gets ridiculous; much higher than any single company. This can require more traditional CS knowledge.
Some individual engineers love shipping products though — they like writing a LOT of code and getting things out the door. Some engineers like very carefully working on MASSIVE systems, but they end up releasing way less code. Other engineers like working on very low level embedded systems or whatever.
There are a lot of problems to solve, and none of them are necessarily strictly harder than each other. Some people who can support systems at incredible scale simply cannot cope with the speed of back to back product launches, and vice versa. There are tons of types of talent.
The way I think about it is that what you learn in class is valuable for understanding the abstractions you will use in industry. If you're working in C# or Java, most of the time you don't care about the cost of memory allocation, method calls, reflection voodoo, file system access, etc. But in those rare situations where the abstraction causes a performance or correctness issue, all that academic knowledge becomes valuable. I find that instances of these problems are very rare, but when they occur you have an opportunity to deliver a lot of value.
For example, at my job we have a rather slow build. Looking at the logs, it’s because we spend a lot of time doing I/O. Someone had the idea to use symlinks instead of file copies. Badabing, badaboom, we got something like a 3x speed up from doing that.
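In Python terms, the change amounts to something like the sketch below (hypothetical paths; the real build presumably wasn't Python, but the idea is language-independent):

```python
# Replacing a per-build file copy with a symlink: the build only needs the
# file to be visible at the destination, so skipping the byte copy avoids
# the I/O entirely. (Hypothetical paths; symlinks need OS support.)
import os

os.makedirs("artifacts", exist_ok=True)
os.makedirs("build", exist_ok=True)
open("artifacts/lib.bin", "wb").close()  # stand-in for a build artifact

src, dst = "artifacts/lib.bin", "build/lib.bin"

# before: shutil.copy(src, dst) copied every byte on every build
# after: one cheap directory entry, no data copied
if os.path.lexists(dst):
    os.remove(dst)
os.symlink(os.path.abspath(src), dst)
```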
I can’t speak for yters’ experience in school or work, not least since they didn’t name names.
Tech is a huge and diverse industry, but it seems to be treated homogeneously when stuff like this comes up. Writing web apps in React is very different from game programming, mission-critical embedded, chip architecture, etc. I see much more demand for frontend devs than for more specialized roles, and I think that should lead to lower enrollments in CS programs to match. Today it is way oversold (as I believe a lot of university is in America), but still necessary in some circumstances. For instance, someone teaching a bootcamp should probably have undergraduate-level training in CS and/or teaching.
University and even bootcamps also serve as validating authorities that vouch for the abilities of the people they graduate. Maybe not perfect, but I don't profess to know whether licensure or some other method is better or worse. Programming jobs aren't just about code, and uni/bootcamp isn't just about learning code: you need discipline and self-motivation, executive functioning, and the ability to research and navigate systems.
Is having a github repo with 1,000 stars now a necessary and/or sufficient condition to be talented?
The field of computer science can be rather punishing if you don't grow up with a STEM mentality. I don't think there's a need to put any more pressure on developers and tell them they are not on a path to make enough money, that they will not work on interesting problems and that they have to spend another 900 to 1800 hours before they are worthy.
We should instead foster a mentality of inclusion and promote learning by getting people excited about spending 100 hours on a particular subject and not persuade people to learn based on the fear of missing out. And we're not addressing the really big elephant in the room. So many bright people spend 900 to 1800 hours developing their CS skills and end up working on trivial things that could just as easily be tackled by somebody with a bootcamp background.
I do maybe 30% front end, then a whole bunch of everything else, and I often wonder what entire categories I'm blind to.
I'm always more interested in reading quality material that teaches things quickly and efficiently, with practice and not just long theory, without going into obscure details. Such material is rarer. It's also often more interesting to learn about subjects you're interested in, instead of just "learning CS".
I have an allergy to academic materials and to books that teach theory without explaining its application. Math is important, but it's a means to an end, and to me it's better to teach CS with code. You cannot teach the proper math to everybody. The whole point of computers is to use them, by writing code and algorithms that perform well or better. Analyzing computability and other theory feels like applied math, so in my mind it's not really applied computer science.
Being able to pick up machine learning is great, but there won't be many individuals using that sweet math to do CS research. I used to believe programming was always a field of research, in a way. It's not. You don't see random developers building groundbreaking algorithms that change the world; that doesn't happen. Programming is about engineering and tinkering. You don't need to teach yourself computer science; you need to teach yourself more math, or learn programming, or learn electronics.
It goes back to the 1970s. C was created with all these cute little tricks you could do to manipulate your data in memory. This led to pointers and all the fancy pointer arithmetic that allowed your program to run a few clock cycles more efficiently. But the cost is that you shoot yourself in the foot once in a while.
Fast forward to now, and squeezing out a few extra clock cycles here and there, or conserving memory, is largely irrelevant except for a few edge cases.
C is just a bad programming language, and C++ inherited all its defects just to bring in objects. But objects there were a poorly designed idea, terribly executed.
In C and C++, instead of just focusing on your problem, you have to manage the language and its built-in defects.
Then came Java, the opposite reaction to C++. No more pointer arithmetic; everything is now a reference. OK, good. But it created another problem: the framework monstrosity. Learning the language itself is simple enough, but having to work with someone else's poorly designed framework that makes no logical sense is just unbearable. In Java, you now have to manage the framework.
I like Python for its brevity and conciseness, and especially for its flexibility with functions: it allows you to program in a pure functional style. It's a breath of fresh air. I can build out my code like lego blocks, stress-test each function, and then connect it all together, and the result works flawlessly. Except for one catch: it runs a little slower.
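That lego-blocks workflow looks roughly like this (a generic sketch):

```python
# Small pure functions, each stress-tested in isolation, then connected.
def normalize(text):
    return text.strip().lower()

def tokenize(text):
    return text.split()

def count_words(tokens):
    counts = {}
    for t in tokens:
        counts[t] = counts.get(t, 0) + 1
    return counts

# test each block on its own...
assert normalize("  Hello World  ") == "hello world"
assert tokenize("hello world") == ["hello", "world"]

# ...then connect them all together
print(count_words(tokenize(normalize("  To be or not To Be  "))))
# {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```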
I'm hoping this move into Functional Programming will be the next true wave to come into the computer industry.
How does this compare to Open Source Society or freeCodeCamp curricula?
The OSS guide has too many subjects, suggests inferior resources for many of them, and provides no rationale or guidance around why or what aspects of particular courses are valuable. We strove to limit our list of courses to those which you really should know as a software engineer, irrespective of your specialty, and to help you understand why each course is included.
freeCodeCamp is focused mostly on programming, not computer science. For why you might want to learn computer science, see above.
You can learn things from a book, sure. But if you take a Udemy or Coursera or whatever course, there are always Q&A comments from other people working on the same course, and that insight is extremely valuable, which is why I prefer OSSU over teachyourselfcs.com. I prefer MOOCs because there's also some level of accountability; you can go for the certificates too, but I don't really care much about them.
OSSU does have way too many topics; just pick 80% of them and you should be okay, depending on what your weaknesses are.
Some courses overlap in both, so you should focus on those first and foremost. These include things like nand2tetris, 3blue1brown's linear algebra, and data structures & algorithms (various choices: Sedgewick, Skiena, etc.).
Do your homework on reddit threads as well before picking one or the other.
I like OSSU because it also tells you what the prereqs are clearly as well.
Currently doing nand2tetris because I lack hardware knowledge, and the course is fantastic. Also doing a lot of math courses so I can do mathematical proofs more easily, to understand why one algorithm is better than another, or how to derive it the way whoever invented it did. I prefer a traditional book for math, though; for everything else I prefer MOOCs.