Teach Yourself Computer Science (teachyourselfcs.com)
334 points by rspivak on July 21, 2018 | 81 comments

I don’t really like the recommendation for SICP. The book expects a certain level of mathematical maturity from the reader and I think this level is significantly greater than the level that might be expected from the targets of this website. Mathematical maturity in this case can be broken into two parts:

1. Understanding what a proof is, what is required to prove some proposition, how to figure out and write a proof, how to cope with struggling to prove something, and how to carefully read definitions.

2. Just knowing what “basic” things are. In the case of SICP these include calculus in one variable, lots of elementary algebra, an understanding of polynomials and exponentials, and vectors and linear transformations (roughly the level of geometry required for basic classical mechanics; cf. SICM, although that is a slightly more advanced way of doing mechanics).

It is easy to see this requirement by looking at the exercises from the book. They start with calculus-themed exercises (e.g. Newton’s method), move on to more algebra-flavoured problems, and then to vectors/geometry. Later exercises are not about mathematics but require a reasonable amount of mathematical sophistication to get the answer. Throughout are exercises that rely on a careful reading of precisely written definitions for things like models of evaluation.

I have another issue which is with the following:

CS is basically a runaway branch of applied math, so learning math will give you a competitive advantage.

I basically disagree with this entirely. I would say that computer science is basically a branch of pure mathematics with applications to engineering practice on real-life computers. I’m not sure I would describe it as runaway, but some branches of computer science (e.g. operating systems) are, I think, not really branches of mathematics.

That kind of annoyed me when I delved into it a while ago, too. I have decent mathematical sophistication (I do a lot of linear algebra and computational geometry for work, and read graphics papers and math history), though I'm largely self-taught, and had bad gaps in very basic things at the time I was attempting to read SICP.

So when I came across all the math-related stuff in it, I'd have to stop reading and go learn about some things that were basically orthogonal to the subject matter I was actually interested in from the book (e.g. principles for effectively managing complexity with functional constructs).

It's one thing to say CS and math have similar intrinsic structure, but that doesn't mean the content of the two fields needs to overlap, and we shouldn't assume the content of one in the other. Imagine reading a book on real analysis that kept bringing in examples that required you to know CS stuff.

I basically agree. SICP makes a lot more sense when considered in the context of a course at MIT, taken after a first course in calculus. The exercises are easy to motivate if one can motivate the study of calculus (e.g. calculus is useful because X, therefore doing calculus by computer is useful because X but faster). The examples therefore feel relevant to what one knows and what one knows is useful (or at least considered useful by lecturers).

I think it is sadly hard to find good exercises for this sort of early computer science outside of mathematics. It seems to me that there is less variety in the computational constructs one might use in solving problems about producing or processing text (which are concisely written and have small solutions), but I think the book does emphasise mathematical examples more than needed. For example, there are other possible first examples of higher-order functions than Newton-Raphson, and proving things on the way to a logarithmic Fibonacci algorithm is largely irrelevant to the kind of computer science that people looking at Teach Yourself Computer Science are interested in.
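For readers who haven't seen it, the higher-order-function flavour of those early SICP exercises looks roughly like this; a minimal Python sketch of the book's fixed-point/average-damping route to square roots (function names and tolerance are my own, not the book's):

```python
def fixed_point(f, guess, tol=1e-8):
    """Iterate f until successive values differ by less than tol."""
    nxt = f(guess)
    while abs(nxt - guess) > tol:
        guess, nxt = nxt, f(nxt)
    return nxt

def average_damp(f):
    """Higher-order function: turn f into x -> (x + f(x)) / 2."""
    return lambda x: (x + f(x)) / 2

def sqrt(x):
    # sqrt(x) is a fixed point of y -> x / y; damping makes it converge
    return fixed_point(average_damp(lambda y: x / y), 1.0)
```

The point is less the numerics than the shape: `average_damp` takes a function and returns a new function, which is the concept the exercise is really teaching.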

Do you think How to Design Programs (HtDP) is a good alternative to SICP? It seems gentler, but if one were to go the HtDP route, what would come after? SICP or something else?


Well, I don’t really know anything about it, but it seems OK. The programming environment is easier to set up than SICP’s, which removes a barrier to entry.

I’m not really convinced that Scheme is the be-all and end-all of teaching languages. It has some nice qualities and some that can make programming more difficult. For example, I think the kind of interaction it encourages is good, but in that interaction one is essentially repeatedly refactoring a small part of the program, and the language does not offer much to ensure that these refactorings are correct, for any definition of correct. I think an ML-style language could be good for teaching too, although often the errors that are produced can be quite unhelpful.

The book seems like it covers some nice introductory things (like splitting up programs into small functions) and some harder things to get to grips with like quoting. But I haven’t had a very thorough look.

The thing I really like about SICP is the idea of building up a model of how programs are interpreted and evaluated. The book produces plausible models for evaluation and demonstrates how one can test between the two and where they are wrong. I like this for two reasons:

1. It really feels like science: coming up with ideas, testing them, seeing when you are wrong. This feels especially useful when one considers modern systems, built out of so many individually large pieces that it is hard to keep track of everything perfectly. It is useful in real life to build up models of what’s going on and think about how you might test them and where you might be wrong. I like that the book reminds you that computer systems may be tested in the same scientific way that the real world might.

2. Because these evaluation models are implemented in code, the process of compilation and evaluation becomes much more understandable and less like magic. One can have an OK understanding of machine code and how a machine roughly works without ever having to write any.
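To give a taste of what "evaluation models implemented in code" means, here is a toy evaluator for a tiny Scheme-like language, loosely in the spirit of SICP's metacircular evaluator (my own heavily simplified sketch, not the book's code):

```python
import operator

# Primitive operations available to the toy language.
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def evaluate(expr, env):
    """Evaluate a tiny Scheme-like expression in an environment (a dict)."""
    if isinstance(expr, (int, float)):   # numbers are self-evaluating
        return expr
    if isinstance(expr, str):            # symbols are variable lookups
        return env[expr]
    op, *args = expr                     # otherwise: a combination
    if op == 'lambda':                   # ('lambda', params, body)
        params, body = args
        # A closure: evaluate the body in an extended environment.
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    fn = OPS.get(op) or evaluate(op, env)
    return fn(*(evaluate(a, env) for a in args))
```

With twenty-ish lines you can already ask the kind of "how does my language actually behave?" questions the parent describes, e.g. `evaluate(('*', 'x', 'x'), {'x': 3})` applies the environment model of evaluation directly.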

On the other hand, maybe these reasons are wishy-washy and no one really needs such concepts for computer science.

I can't say for sure, but the TYCS site says "For those who find SICP too challenging, we recommend How to Design Programs."

My theoretical knowledge of computer science can be approximated to zero, as can my knowledge of math, but for some time I worked as a programmer with good results; in my best days I could write about two hundred lines of C or Pascal in a few hours, go home without testing them because the machine wasn't available, and be complimented the following day because they compiled and worked without any errors. I still felt I was lacking something, so I tried to improve by reading some books, but all of them required high math skills I never had, so I quickly gave up. (Niklaus Wirth's algorithms book is probably the only exception here.)

I can perfectly picture in my head registers loading data and shifting it around, logic gates changing and combining values, stacks pushing and popping words, electrons flowing through components, etc., and can for example diagnose problems in analog circuits due to, say, bad decoupling or ground loops. Figuring these things out is easy from experience, but the math behind them is way harder; anything beyond simple equations is like an alien language to me, and the very few times I grasped something beyond that I quickly forgot it for lack of continuous exercise.

Did anyone else encounter the same discouraging level of difficulty? So my question would rather be: are there any good algorithms books for nearly complete math illiterates like me that would help me learn both in the process (and possibly not hate them :)?

I overcame my math illiteracy by reading How To Prove It. The book is simple and narrow in scope, and yet it gave me the feeling that ultimately no math is beyond my grasp, even the papers on arXiv. Now I would say my math skills are still rather low, because I know how much is out there, but at least I am competent. At this point I wouldn't want to read books that avoid math for mathematical topics. Algorithms are mathematical, so it's silly to seek out books for the math illiterate. It's like looking to copy-paste a large chunk of code without reading and understanding it while caring about the code quality of your system; it just doesn't make sense.

I wonder if there is just a big split in what folks are looking for in mathematics. I cannot understand how "How to Prove It" is so frequently praised. I tried reading it after finishing my CS degree, while I was searching for books that would do math 'right' (being especially inspired by Paul Lockhart and others), and it was one of the first books I picked up. I found it to be so much more of the same: focused on all these small things I didn't care about, pedantic, written in a cold detached tone, with still no suggestion of why I should care about the things being talked about.

I had a much better time with "What is Mathematics?" (aka "Courant and Robbins"), though it still suffers from some similar problems (I remember being baffled by all the attention paid to these properties like commutativity, associativity, etc.—"why should I care about this!?").

Really, I still can't think of a single book that was great for overcoming my own "math illiteracy"; it was mostly just a haphazard process of trying enough different things that eventually various parts started falling into place.

But I guess part of what's going on is that what works for one person is gonna be awful for another. So I just want to say: maybe "How to Prove It" will be great for you (other readers), but don't feel bad if it isn't.

Truth be told, at the time I was also reading around 20 other books, it's just that I completed 90% of How To Prove It and around 10-15% of the other books. I was so captivated by math after discovering blogs of rather opinionated math professors and students that I downloaded literally over a hundred books (for free) and started designing the perfect curriculum. I would start over many times, over and over, study most of the free time I had between college and programming, watch dozens of lectures, seek lecture notes and additional exercise sheets, and try to solve everything. This particular period lasted about 8 months, after which I stopped studying math and took a long break to resolve some life issues and prepare myself for my first job as a software engineer.

I remember reading a few sections from What is Mathematics too. Some chapters on topology and the preface.

So, yeah, I also succeeded because I drowned myself in everything possible until I got used to it. It's just that I never felt any frustration with Velleman's book, so I knew that every time I opened it I would learn something new, wouldn't get confused, and the level of difficulty would be just right. That's why I kept coming back to it and completed most of it, while I didn't make much progress in others.

Oops! My bad! I just realized that you're talking about a different book. I thought you were referring to the book's namesake, Polya's "How to Solve It." I have not had any contact with "How to Prove It"!

Also, it's interesting how similarly we seem to have gone about learning math. I kept it as my main focus for a similar amount of time, then as a secondary focus while working on a big software side project for another 7 months or so (I was working at a grocery store for money the whole time), then got my first software engineering job and largely dropped any focused study of mathematics. But now I'm in a position where I'll learn bits of math as I need them for projects or whatever, so it definitely wasn't a waste (and, like you mentioned about being able to read stuff off arXiv, it expanded the range of ideas I can understand).

I've actually started getting interested in continuing study again, getting more into applied math this time since I really focused on pure last time. First goal is to get a clear understanding of Maxwell's Equations :)

> Figuring them is easy from experience but the math behind that is way harder; anything beyond simple equations is like alien language to me ...

It took me a long (and often frustrating) time to figure out that the perception of difficulty is largely because of how implicit most everything is in mathematics. It's not like programming where eventually there is a compiler with a definite structure that's going to make your program behave in some exact specific way.

Even worse is the way we're typically taught mathematics: it's always kinda starting in the middle (as opposed to the beginning). We aren't taught about what it is our how it works, so it's really hard to situate particular mathematical concepts we're taught into a larger coherent framework.

The way I interpret mathematical works changed so much (and for the better) once I realized that any significant bit of math you're using is the result of something a human was trying to accomplish. In the same way you can point to a piece of code and ask its developer what the motivation was for some abstraction or algorithm, there are always reasons behind the development of mathematical systems. Once I understood that, rather than mathematical systems being these disconnected things that materialized out of nowhere, I started seeing commonalities, considering the reasoning behind them, and seeing more of the network that relates them. It became a lot more enjoyable. I'd recommend checking out some history of mathematics if you'd like to get a better sense of that (my personal intro recommendation: "Men of Mathematics" by E.T. Bell, or maybe "Mathematics and the Imagination," which isn't history so much but serves a similar goal).

If you're looking for something on algorithms specifically, the book recommended by the author ("The Algorithm Design Manual") is a pretty friendly read and largely non-mathematical, while still containing all the useful bits (unless you're looking for a reference work on computational complexity or something).

> because of how implicit most everything is in mathematics.

Which is a completely wrong impression due to how it is taught in school and to undergrads in the US.

Speaking as a mathematician, since one is reasoning about abstract objects everything necessarily is pedantically explicit (otherwise proofs could not possibly work). Hence, in US graduate courses or EU undergrad courses you start from scratch and rigorously define every symbol you ever write and justify every step you take (e.g. given a field, why is "1" distinct from "0"? What is a derivative? Prove that it is actually well-defined, exists for such functions etc.).

What many people mistake for implicitness is the heavy polymorphism and terseness of mathematical notation. For instance, one often identifies a function with its graph or its image, depending on which type makes sense in context. Here, rigorous courses spend much time proving that any possible ambiguity in notation is actually no ambiguity at all; one is allowed to use shorthand notation because all interpretations are equivalent.

> We aren't taught about what it is our (sic) how it works

My biggest frustration learning math when I was younger was not knowing -why- I was doing something a certain way or how it worked. It was incredibly discouraging because it didn’t just “click” like it seemed to with my peers.

There is a certain danger in trying to learn maths with this mindset though. A lot of topics seem intuitively easy, but the proofs only come somewhere near the end of a degree level course.

It's a very healthy mindset to have, but it's a blessing and a curse. Some people are just very good at just abstracting away the details and getting on with things; what seems like "clicking" is not always the same as intuition. Probably it's good to operate in both modes. It's a really really useful skill to know when you need to care about the internals and when you can safely ignore them.

I’ve definitely learned that now after 10 years of development experience. But as a curious kid who didn’t just want to memorize and regurgitate it was quite a challenge. I’ve debated restarting mathematics all the way from basic algebra and geometry to see if I’d do any better these days.

I noticed my high school math(s) teachers were very bad at explaining why and what we were doing.

Arbitrary example:

One teacher kept saying "f(x)" but couldn't explain what a function is. He just said "it's anything", then "don't worry about it". If he had even said "a function is like a machine that takes number(s) as input, changes them with a formula and outputs the new number(s)", I think it would have helped us grok.

I think he understood math(s) so well that he couldn't relate to someone who didn't know what a function was.

> If he had even said "a function is like a machine that takes number(s) as input, changes them with a formula and outputs the new number(s)", I think it would have helped us grok.

Helpful but not quite right and one of the common misconceptions students seem to carry over from high school.

A function is something that takes inputs from a set (its domain, e.g. the people in this class) and gives you exactly one output of a prescribed type (e.g. a date; the function then, for instance, being person -> birthdate). Neither numbers nor a formula are needed.
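That definition is easy to make concrete; a tiny hypothetical sketch where neither the inputs nor a formula are numeric:

```python
# A function in the mathematical sense: each input in the domain maps to
# exactly one output. No numbers, no formula -- just an association.
birthdate = {
    'Ada Lovelace': '1815-12-10',
    'Alan Turing': '1912-06-23',
}

def f(person):
    """The function person -> birthdate over the domain above."""
    return birthdate[person]
```

The dict makes the "exactly one output per input" property visible: a key can't map to two values.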

> I think he understood math(s) so well that he couldn't relate to someone who didn't know what a function was.

I think he probably could, but the syllabus explicitly said not to discuss this ("too abstract"), and there is time pressure. High school maths is mostly somewhat handwavy, stringing "definitions" together by examples. So you would mostly see lots of examples of functions, and the "definition" of 'function' is then simply "things like that".

I was the same way. I think the way we typically teach math (in the U.S. at least) to young people is particularly hard for those who insist on understanding rather than memorizing things.

I was the same as how you describe yourself, and I always just assumed the kids who were best in the class were doing what I was trying to do, only much better. And then you see later how fragile and limited one's understanding is when taking the memorization approach, but as a kid you don't have many options.


Schools tend not to teach mathematics but memorization, unmotivated formulas, and computation. Mathematics as its own discipline is basically only "why and how", i.e. proofs.

Algorithms (sorting, for instance) don't require knowledge of advanced math. Analysing the computational cost requires some math or math-like thinking.

Advanced math is linked with computing mostly because the roots of computing lie in mathematical logic.

So you can become a master programmer without advanced math knowledge.

In short, most algorithms 101 books should be accessible to someone who considers themselves math illiterate.

Yeah, but a substantial number of the problems that are dealt with through computers _do_ require mathematical tools beyond that.

Statistics, for example, appears in nearly any non-trivial computer science problem: network congestion, choosing proper parameters for algorithms, probabilities of lock contention, distribution of dispatched instructions, audio psychoacoustics, etc.
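As a small illustration of the kind of statistics that shows up in such problems, here's the classic birthday-style collision estimate (my own sketch; think of it as roughly estimating how often at least two of n requests land on the same one of k shards or lock stripes):

```python
def collision_prob(n, k):
    """Probability that at least two of n independent, uniform picks
    from k slots collide (the 'birthday problem' calculation)."""
    p_no_collision = 1.0
    for i in range(n):
        p_no_collision *= (k - i) / k   # i-th pick avoids the first i slots
    return 1 - p_no_collision
```

The counterintuitive result, e.g. `collision_prob(23, 365)` being over 50%, is exactly the kind of thing that bites people sizing hash tables or lock pools by gut feel.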

Computer vision and graphics programming requires a solid foundation of linear algebra and geometry.

In short, those tools are pretty necessary if you want to approach computer science as a science and as a tool for engineering beyond the very basics.

I was answering specifically about algorithms.

A lot of real world problems can be solved without a programmer requiring advanced math knowledge.

I have found that reading (auto)biographies of persons in the field I am interested in has helped me. These alerted me to the original problems that the mathematicians were trying to solve and the struggles they faced.

I have many degrees in computer science, and have used almost none of the knowledge in programming jobs. So, the content is fairly useless, practically speaking. I've also found the more I focus on theoretical elements while at work, the more useless I become. The actually useful elements are big picture, and can probably be taught in one or two classes. Much more useful than technical knowledge are the abilities to reason about systems and solve problems. In my opinion, the CS classes seem to be almost purposefully obtuse.

I use my CS education every day. The trick was to take the bare minimum of theory and stuff my schedule with Systems:

- Thinking about and fixing bugs related to OS scheduling, like, why doesn’t my time.Sleep work as expected? (Intro to Systems, Operating Systems).

- Understanding the protocol layers involved in a network performance problem; writing network services to RFC specs (Networks and Distributed Systems).

- Selecting, working with, and understanding the limitations of distributed databases (Advanced Distributed Systems).

- Shifting the bulk of tricky computing into pure functions; borrowing design patterns like the state monad (Functional Programming).

- Parsing serialized data (Formal Languages for basic theoretical grounding, Programming Languages for compiler and interpreter implementation).

- Dealing with concurrency and using synchronization primitives (Intro to Systems).

- Awareness of security concerns and techniques, respect for the subtlety of crypto implementation, literacy when reading HN on subjects like ASLR or the Juniper RNG compromise (Intro to Security).

- Basic comfort with C, Make, vim, bash, awk, etc. (Intro to Programming).

The vast majority of my education was spent on programming projects to implement key components up and down the stack. Having a decent understanding of what they’re doing and how to approach writing them has been invaluable. The only thing I truly haven’t touched since college is discrete probability.
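The "synchronization primitives" bullet above, in miniature (a Python sketch for brevity; the actual coursework would more likely use C and pthreads):

```python
import threading

count = 0
lock = threading.Lock()

def worker(n):
    """Increment the shared counter n times."""
    global count
    for _ in range(n):
        with lock:          # serialize the read-modify-write on count
            count += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock, count is exactly 4 * 10_000; without it, the
# unsynchronized read-modify-write can interleave and lose updates.
```

Knowing *why* the lock is needed (and when it isn't) is the part the systems courses teach.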

There's IMO a kind of quadrant going on:

- People without a CS background who don't see where it would be useful.

- People with a CS background who don't know how to use it.

- People without a CS background who realize where the science can be useful.

- People with a CS background who know where and how to apply it to everyday problems.

Unless someone is building forms all day long (and even then), it's going to be useful. Sure, you can build apps without it, but they'll be mediocre instead of good.

I have very little training in formal CS (like, I dropped out of a CS program as a sophomore to get a BA in Philosophy and an MA in Lit).

I learned almost all the basis for what I do on a day-to-day practice of being a programmer from:

- my high school programming classes

- my grad school practice of learning how to research and read

- a bunch of middle school classes in formal logic

- playing with a ton of programming tasks

The CS stuff that I've read in the decade since quitting the pursuit of a PhD in literature has been way less helpful than simply trying to fix bugs in my code.

I don't know if my career is representative of other folks' work, but at my "level" I consider programming to be a lucrative trade, and much of the CS stuff is almost totally irrelevant to that, compared to having awareness of the specifics of whatever larger system I am trying to diagnose or modify.

I agree that the basis for this profession can be taught in a couple of classes; I feel that almost all of what I do comes down to playing with the actual technology and trying to solve problems with it; theory is only useful after you have a fundamental feel for the nature of the problems at hand.

Agree with the last sentence. Take all the different data structures, for example. Where do they come from? Presumably from doing some practical task and realizing it would be a lot more efficient if we organized data that way. Acquiring proficiency with those concepts by seeing a genuine need for them in the task at hand follows, IMO, a much more logical order of introduction.

I don't concur. A few examples:

I was working on an iOS app that connects to a GraphQL API. All GraphQL requests are POSTs, and POSTs aren't cacheable, so I had to implement a client-side cache. Implementing a cache, and understanding the pros/cons of an implementation, is an exercise in CS.
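A minimal sketch of the kind of client-side cache described (Python rather than Swift for brevity; `QueryCache` and `fetch` are my own stand-ins, not any real GraphQL client API):

```python
import json

class QueryCache:
    """Memoize GraphQL responses client-side, since HTTP caching
    doesn't apply to POST requests. Keyed by query + variables."""

    def __init__(self, fetch):
        self._fetch = fetch      # the real network call, injected
        self._store = {}

    def query(self, query, variables=None):
        # Canonical key: sort_keys makes equal variable dicts hash alike.
        key = (query, json.dumps(variables, sort_keys=True))
        if key not in self._store:            # miss: hit the network
            self._store[key] = self._fetch(query, variables)
        return self._store[key]               # hit: no network call
```

The real CS questions start where this sketch stops: eviction policy, staleness/invalidation, and whether the key really captures everything that affects the response.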

I was implementing an animation. I drew the animation out on graph paper and worked out the transformations using stuff I learned in my linear algebra and computer graphics classes.
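The graph-paper linear algebra in question mostly boils down to applying a 2x2 rotation matrix; a small sketch:

```python
import math

def rotate(point, degrees):
    """Rotate a 2D point about the origin by the given angle,
    applying the standard 2x2 rotation matrix [[c, -s], [s, c]]."""
    x, y = point
    c = math.cos(math.radians(degrees))
    s = math.sin(math.radians(degrees))
    return (c * x - s * y, s * x + c * y)
```

Compose this with translations and scales and you have the affine transforms that animation frameworks expose.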

And in general, thinking through the tradeoffs of Swift vs Objective-C or REST vs GraphQL or Ruby vs Elixir, etc, etc is an exercise in CS.

Unfortunately, we seem to live in a cargo cult world. Good enough for Facebook/Google/Etc? Good enough for us. ¯\_(ツ)_/¯

They’re called heuristics, and everybody would be paralyzed without them. Good enough for Facebook? Probably will work for my use case, and now I can focus on things that actually add value to my application.

How do you know you used nothing? Wouldn't that be hard to tell? Perhaps you should work with someone who got a degree in something unrelated vs. a recent CS grad; I find the difference can be massive. People often don't realize what counts as basic knowledge.

Seeing that I got my degree over 20 years ago, and even then it was from a state school with a horrible comp. sci. department, I am sure my CS degree did nothing for me, even for my first job.

I was lucky that by the time I got to college, I had already been hacking around with AppleSoft BASIC and 65C02 assembly language for six years. I picked up C on my own, and that carried me through the first 12 years of my career. Knowledge of algorithms helped me a little bit since we had to do everything from scratch, but most modern developers wouldn’t even need that.

Is that surprising? Computer Science isn't meant to be a Practical Programming degree.

I doubt that parents or students who are spending tens of thousands of dollars on college, and more than likely leaving with tons of debt, are expecting anything other than practical, marketable skills.

If they expect to learn practical programming skills then they've completely misunderstood what comp sci is. They want to be spending their money on vocational education instead.

And be at a disadvantage in the market....

As a self-taught developer, I used to think that some of the theoretical elements were overhyped. I can build iOS apps that work, and I did just that for the last 2-3 years. However, many of the programs that I wrote have not been as easy to maintain as I would like, and some difficult-to-fix bugs have popped up over time, both of which are due to a lack of deeper understanding of CS fundamentals.

Last year I started interviewing and was ridiculed at one company in particular for a lack of CS knowledge. Afterwards I started exploring a lot of the CS concepts listed in this link, and I have since found numerous ways to improve my code quality and gained a better understanding of how CS best practices came to be.

I also used to think that algorithms and data structures were relatively useless for an iOS developer, and I was able to do the job without them, thus proving my point. However, after gaining a better understanding, it quickly becomes clear that things like view hierarchies are simply trees, and understanding ways to traverse these hierarchies can lead to much cleaner code. With the open-sourcing of Swift, I also became more interested in understanding the language, but a lot of the language design decisions didn't make sense to me until I gained a better understanding of CS fundamentals.

I have found the programming languages course on Coursera [1] to be particularly useful, and have also greatly enjoyed the book Designing Data-Intensive Applications [2]. There's also a great video from this year's WWDC that really inspires algorithm study and use in everyday applications [3].
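The "view hierarchies are simply trees" observation, as code; `View` here is a hypothetical stand-in for UIView (Python rather than Swift for brevity), and finding every view of a given kind is just a depth-first traversal:

```python
class View:
    """Minimal stand-in for a UI view: a kind plus child views."""
    def __init__(self, kind, subviews=()):
        self.kind = kind
        self.subviews = list(subviews)

def find_all(view, kind):
    """Depth-first search: collect every view of the given kind."""
    found = [view] if view.kind == kind else []
    for sub in view.subviews:
        found.extend(find_all(sub, kind))
    return found
```

Once you see the hierarchy as a tree, standard questions (depth, search order, cheapest place to insert) come with standard answers from any data structures course.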

[1] https://www.coursera.org/learn/programming-languages

[2] https://www.amazon.com/Designing-Data-Intensive-Applications...

[3] https://developer.apple.com/videos/play/wwdc2018/223/

> I have many degrees in computer science

If you have many, I'm going to assume that's at least three. Surely that's a PhD then? Otherwise you've been standing still, doing multiple bachelor's or master's degrees, which wouldn't make any sense.

If you've done a PhD and then go into a field which didn't need that research, then that's unusual. Why did you bother with the PhD?

I have a BS, MS, and PhD in CS and Comp. Eng. I needed a job. Turns out my PhD wasn't such a hot commodity, at least in the DC area. For a GS-type job it can translate to an automatic pay grade, and a contracting company may want a PhD on paper for their bids. Coding interviews might be a bit easier.

But the actual DevOps work I do, while I use a bit of distributed systems theory to understand things, is mostly orthogonal to my education. The theory I do use I can easily explain to non-CS-background employees in about 10-20 minutes. The practical knowledge I use (Linux, coding, s/w eng.) I picked up on my own time and in one or two classes.

Why did I get the PhD? Primarily for pie-in-the-sky reasons: the knowledge and the research. Not in order to get a job or teach in academia.

Would you do it again? Get the PhD, I mean.

I don't think it is that unusual for grad students to give research a try during a PhD, decide it is not for them, and then move on to other things afterwards. This is from a physics grad school perspective, though.

"I know sorting algorithms so intimately that I know to always reach for the built in .sort()" - hypothetical person in my head.

"I know sorting algorithms so intimately that I know to not reach for the built in .sort() when the array is already 99% sorted"

Built-ins are made to work well in the most common case. Often you can use domain knowledge to come up with better solutions, but you need to know how common sorting algorithms work to understand if and when that domain knowledge can be used to speed up your program.
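A concrete version of the nearly-sorted case: insertion sort does O(n) work when every element is close to its final position. (Worth hedging, though: some built-ins, like Python's Timsort, already exploit existing sorted runs, which is itself something you'd only know from studying the algorithms.) A sketch:

```python
def insertion_sort(a):
    """Sort a sequence; O(n) when the input is already nearly sorted,
    because the inner while loop barely runs."""
    a = list(a)
    for i in range(1, len(a)):
        x, j = a[i], i - 1
        while j >= 0 and a[j] > x:   # shift larger neighbors right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a
```

The decision of whether this beats the built-in on your data is exactly the "if and when" judgment the parent comment is describing.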

You're right. I was being more tongue-in-cheek about how, like 95% of the time, you should just use the default. Your education is really there so you know when it's important to do it differently.

Do you think Software Engineering is a more relevant degree than Computer Science? I would think the former would focus on the practical rather than the pie-in-the-sky stuff you will rarely, if ever, use.

I read this a lot. I am currently studying CS, and I kind of regret it for that reason. Sure, the topics tend to be very interesting, and I gained some real insights from some classes, such as OS, but I probably won't ever use this stuff once I start working. I kind of wish I had done EE/ME or Math instead, especially because I actually quite enjoyed my math classes.

Having worked as an SWE while going to school, I can say there are times when I wished I didn't have this mentality. I don't know how many times I thought "I'll never use this" and blew it off, only to run into an edge case a couple of months later where I had to go back and relearn the material. OS was specifically one of the classes I've been going back over recently and frequently.

I've kind of realized as I've gotten older that I'll spend a decent amount of time reading material that doesn't immediately click as useful, or isn't terribly interesting to read, but when it comes up I get a lot of satisfaction out of being able to say "OH, I READ ABOUT THIS", even if it was drudgerous. Plus I'd have to agree that in-depth knowledge opens doors to more interesting work, not just CRUD/web apps. Also, if you're not challenging yourself to do difficult work, you're not going to have to dive that deep into the knowledge they teach you.

But do you like what you're studying? I think most people end up not using what they learn in university, whether it's CS or one of the other hundreds of degrees people get....

I used what I learned in my linear programming class (i.e. optimizations of a linear function under a system of linear constraints) not too long ago to solve a problem involving multiple non-trivial constraints.
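The kind of problem described above can be sketched in a few lines. This is a toy example with made-up numbers, and it uses the textbook fact that a bounded LP's optimum sits at a vertex of the feasible region, so for two variables you can just enumerate pairwise constraint intersections:

```python
from itertools import combinations

# Toy LP (numbers invented for illustration): minimize 2x + 3y
# subject to x + y >= 10, x <= 8, x >= 0, y >= 0.
# Every constraint is written in the form a*x + b*y <= c.
cons = [(-1, -1, -10), (1, 0, 8), (-1, 0, 0), (0, -1, 0)]
obj = lambda x, y: 2 * x + 3 * y

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in cons)

# The optimum of a bounded LP lies at a vertex: the intersection of
# two constraint boundaries. Enumerate all such intersections.
best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue  # parallel boundaries never intersect
    x = (c1 * b2 - c2 * b1) / det   # Cramer's rule
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y) and (best is None or obj(x, y) < obj(*best)):
        best = (x, y)

print(best, obj(*best))  # (8.0, 2.0) 22.0
```

Real solvers use the simplex method or interior-point methods instead of brute-force vertex enumeration, which blows up combinatorially, but the geometry is the same.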

Also, CS is different from programming, just like math is different from building a house.

I think you may not realize how much your education has shaped the way you think and tackle problems. Understanding goes far deeper than the direct application of knowledge.

Anyone else concur with yters' experience?

I don’t concur.

Received a comp-sci degree in 2008 from a small liberal arts college. I was one of five grads in my year, and most of the last two years were spent not on "practical" tech but on theory, algorithms, etc. My liberal arts education taught me how to write, how to discuss solutions, explore tough concepts, break problems down into smaller subtasks, and more.

Skills are important. But theory matters too. Just because one has value does not rule out the other.

I agree there is value in the theory. But the valuable parts are fairly easy to explain when not convoluted in academese, and many classes could probably be consolidated into one or two. Plus, most of the useful theory is encapsulated in someone else's well designed library. So it's helpful to know, but the hard implementation part is unnecessary.

I am not so much arguing against CS theory, but CS education.

It really depends on what kind of development you're doing.

The majority of software engineers are application developers making CRUD websites or mobile apps, so that perspective is the one to come up most often. I also happen to be one of those developers.

The challenges of application development are related to transforming data, handling asynchronous operations, managing state, and picking elegant abstractions that solve your problems. The intuition for these things is mostly picked up through hours of professional development, seeing good code, and shooting yourself in the foot a couple of times.

While there are some harder problems in app dev which do require deeper computer science understanding, they're extremely rare. I suspect this is different for people doing things like video game development, although I don't have any experience there so I can't speak to that.

Even most of video game development is simply using features of the existing engine. Some basic 3D math is required, but nothing crazy. You do find some gnarly problems in engine development though. (And some of the sub-disciplines like the network and graphics programmers.)
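A hypothetical illustration of the "basic 3D math" gameplay code leans on: a dot product between a character's facing direction and the vector to a target tells you whether the target is in front of the character (function names here are invented for the example):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    """Scale a vector to unit length."""
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def in_front(facing, to_target):
    # The dot product of two unit vectors is cos(angle between them),
    # so a positive value means the target is within 90 degrees of facing.
    return dot(norm(facing), norm(to_target)) > 0.0

print(in_front((0, 0, 1), (1, 0, 5)))   # target ahead -> True
print(in_front((0, 0, 1), (0, 0, -3)))  # target behind -> False
```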

So CS grads should get jobs at tooling companies, because that's where the deep work is done?

I don’t think it’s nearly that simple. Some teams are product teams, and they operate higher in the stack. Some teams are straight engineering teams, and they operate much lower in the stack. Lower does not always equal more technical though. A product team dealing with incredible scale can still require deep CS knowledge.

My experience is that teams that act as a platform (and I’m using this term very loosely) tend to have lower level problems to solve. Think of AWS teams vs. large companies. A large company might be dealing with high scale; something like 100k+ transactions per second. An AWS team can have many large companies as their customers, so their scale gets ridiculous; much higher than any single company. This can require more traditional CS knowledge.

Some individual engineers love shipping products though — they like writing a LOT of code and getting things out the door. Some engineers like very carefully working on MASSIVE systems, but they end up releasing way less code. Other engineers like working on very low level embedded systems or whatever.

There are a lot of problems to solve, and none of them are necessarily strictly harder than each other. Some people who can support systems at incredible scale simply cannot cope with the speed of back to back product launches, and vice versa. There are tons of types of talent.

Thanks, that's what I was curious about

I concur.

The way I think about it is that what you learn in class is valuable for understanding the abstractions that you will use in industry. If you’re working in C# or Java, most of the time you don’t care about the cost of memory allocation, method calls, reflection voodoo, file system access, etc... BUT, in those rare situations where the abstraction causes a performance or correctness issue, then all that academic knowledge becomes valuable. I find that the instances of these problems are very rare but when they occur you have an opportunity to deliver a lot of value.

For example, at my job we have a rather slow build. Looking at the logs, it’s because we spend a lot of time doing I/O. Someone had the idea to use symlinks instead of file copies. Badabing, badaboom, we got something like a 3x speed up from doing that.
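The idea can be sketched in a few lines (paths and sizes here are invented): copying an artifact into place costs I/O proportional to its size, while a symlink is just a new directory entry pointing at the same bytes.

```python
import os
import shutil
import tempfile

src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()

# Pretend this is a 1 MiB build artifact.
artifact = os.path.join(src_dir, "lib.bin")
with open(artifact, "wb") as f:
    f.write(os.urandom(1 << 20))

copy_path = os.path.join(dst_dir, "lib-copy.bin")
link_path = os.path.join(dst_dir, "lib-link.bin")

shutil.copy(artifact, copy_path)  # I/O proportional to file size
os.symlink(artifact, link_path)   # O(1): just a directory entry

# Consumers of either path read back identical content.
with open(link_path, "rb") as a, open(copy_path, "rb") as b:
    print(a.read() == b.read())  # True
```

The usual caveats apply: symlinks behave differently if a tool modifies the file in place, and they need extra care on Windows.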

Surprisingly, one of the areas where I use my comp sci knowledge the most is one of the most widely applied areas: relational databases. Every programmer spends lots of time reading and writing data, and understanding the low-level operations, plus the relational algebra underneath, can really help with performance and with assessing alternatives.
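One concrete place that knowledge shows up is reading query plans. A toy illustration with an invented schema, using SQLite's `EXPLAIN QUERY PLAN` (the exact wording of the plan text varies between SQLite versions):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail);
    # the human-readable strategy is in the last column.
    return [row[3] for row in db.execute("EXPLAIN QUERY PLAN " + sql)]

q = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(q)
print(before)  # a full table scan, e.g. "SCAN orders"

db.execute("CREATE INDEX idx_customer ON orders (customer_id)")
after = plan(q)
print(after)   # now a B-tree lookup, e.g. "SEARCH orders USING INDEX idx_customer"
```

Knowing why the second plan is O(log n) per lookup instead of O(n) is exactly the kind of data-structures background the comment is talking about.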

Not exactly. I really enjoyed my undergrad and grad CS programs, but I agree I don’t get to use most of it in everyday work. I also haven’t exactly sought out places where I would yet.

I can’t speak for yters’ experience in school or work, not least since they didn’t name names.

Tech is a huge and diverse industry, but it seems to be treated homogeneously when stuff like this comes up. Writing web apps in React is very different from game programming, mission-critical embedded, chip architecture, etc. I see much more demand for frontend devs than for more specialized roles, and I think enrollments in CS programs should fall to match. Today it is way oversold (as I believe a lot of university is in America), but still necessary in some circumstances. For instance, someone teaching a bootcamp should probably have undergraduate-level training in CS and/or teaching.

Universities and even bootcamps also serve as validating authorities that vouch for the abilities of the people they graduate. Maybe not perfect, but I don't profess to know whether licensure or some other method is better or worse. Programming jobs aren't just about code, and uni/bootcamp isn't just about learning code: you need discipline and self-motivation, executive functioning, and the ability to research and navigate systems.

Is having a github repo with 1,000 stars now a necessary and/or sufficient condition to be talented?

I concur, with the caveat that whether you need to have a formal education in computer science depends on the types of problems you are trying to solve and the types of systems you're trying to build and maintain.

This seems like a thought out list of recommendations. I have enjoyed going through some of those books myself. However, the persuasive argument seems rather biased and somewhat toxic. There are many software developers out there, without a strong CS background, who write useful tools used by other developers. Does anybody here use Homebrew? The author admits that Homebrew is not the perfect software, but I think we can all agree that it's highly successful and helps many developers every day.

The field of computer science can be rather punishing if you don't grow up with a STEM mentality. I don't think there's a need to put any more pressure on developers and tell them they are not on a path to make enough money, that they will not work on interesting problems and that they have to spend another 900 to 1800 hours before they are worthy.

We should instead foster a mentality of inclusion and promote learning by getting people excited about spending 100 hours on a particular subject and not persuade people to learn based on the fear of missing out. And we're not addressing the really big elephant in the room. So many bright people spend 900 to 1800 hours developing their CS skills and end up working on trivial things that could just as easily be tackled by somebody with a bootcamp background.


There aren't two types of software engineers. It's a spectrum, possibly multidimensional. The attraction to declaring two types of any kind of person is that you can bin all the desirable and undesirable traits together.

My personal experience is that I got bored with frontend and pushing pixels, and when I tried to switch into infra work I found my incomplete picture of CS made it hard to grasp concepts others seem to take for granted. Maybe you don't have to have a CS degree, but even a survey knowledge of major CS topics will at least give you more confidence.

I'd love to know what kinds of concepts you struggled with.

I do maybe 30% front end, then a whole bunch of everything else, and often wonder what entire categories I'm blind to.

OS concepts such as sockets, files, and pipes, and how clang/ld/dyld actually work.
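As a small taste of one of those concepts: a pipe is just a pair of file descriptors the kernel connects, which is all the shell's `|` is built on. A minimal sketch:

```python
import os

# os.pipe() returns (read_fd, write_fd): whatever is written to one
# end can be read from the other, buffered by the kernel.
read_fd, write_fd = os.pipe()
os.write(write_fd, b"hello through the kernel")
os.close(write_fd)              # closing the write end signals EOF to the reader

data = os.read(read_fd, 1024)
os.close(read_fd)
print(data.decode())  # hello through the kernel
```

In `cmd1 | cmd2`, the shell does exactly this, plus `fork`/`exec` and wiring the descriptors to the children's stdin and stdout.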

previous discussion, 237 comments:


Of course, if you work through every available online course and buy the books, you can teach yourself CS.

I'm always more interested in finding quality material that teaches things quickly and efficiently, with practice rather than just long theory, and without going into obscure details. That material is rarer. It's also often more interesting to learn about subjects you care about, instead of just "learning CS".

I have an allergy to academic materials and to books that teach theory without explaining its application. Math is important, but it's a means to an end, and to me it's better to teach CS with code. You cannot teach the proper math to everybody. The point of computers is to use them, by writing code and algorithms that perform well or better. Analyzing computability and other theory seems like applied math, so it's not really applied computer science in my mind.

Being able to pick up machine learning is great, but there won't be many individuals using that sweet math to do CS research. I used to believe programming was always a field of research in a way. It's not. You don't see random developers building ground breaking algorithms that change the world. That doesn't happen. Programming is about engineering and tinkering. You don't need to teach yourself computer science, you need to teach yourself more math, or learn programming, or learn electronics.

I didn't really get algorithm analysis, beyond the standard sequential analysis ("this simple operation maps to X complexity"), until I did some complexity theory; then it all sort of came together: https://functionalcs.github.io/curriculum/#sec-12-1


I think the problem with Computer Science is that the focus has been on the C and C++ programming languages.

It goes back to the 1970s. C was created with all these cute little tricks you could use to manipulate your data in memory. This led to pointers and all the fancy pointer arithmetic that let your program run a few clock cycles more efficiently. But the cost is that you shoot yourself in the foot once in a while.

Fast forward to now, and squeezing out a few extra clock cycles here and there, or conserving memory, is largely irrelevant except for a few edge cases.

C is just a bad programming language, and C++ inherited all its defects just to bring in objects. And the objects were a poorly designed idea, terribly executed.

In C and C++, instead of just focusing on your problem, you have to manage the language, and its built in defects.

Then came Java, the opposite reaction to C++. No more pointer arithmetic; everything is now a reference. OK, good. But it created another problem: the framework monstrosity. Learning the language itself is simple enough, but having to work with someone else's poorly designed framework that makes no logical sense is just unbearable. In Java, you now have to manage the framework.

I like Python for its brevity and conciseness, and especially for its flexibility with functions. It lets you program in a pure functional style. It's a breath of fresh air: I can build my code out of lego blocks, stress test each function, and then connect it all together, and the result works flawlessly. Except for one catch: it runs a little slower.
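A sketch of that "lego blocks" style: small pure functions, tested in isolation, then snapped together with a composition helper (the `compose` name here is my own, not from any library):

```python
from functools import reduce

def compose(*fns):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

# Each block is a tiny pure function...
strip = str.strip
lower = str.lower
words = str.split

# ...each can be stress-tested on its own...
assert lower("ABC") == "abc"
assert words("a b") == ["a", "b"]

# ...and then connected together.
normalize = compose(words, lower, strip)
print(normalize("  Hello Functional WORLD  "))  # ['hello', 'functional', 'world']
```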

I'm hoping this move into Functional Programming will be the next true wave to come into the computer industry.

At the bottom of the website, the author compares it to Open Source Society and freeCodeCamp.

How does this compare to Open Source Society or freeCodeCamp curricula?

The OSS guide has too many subjects, suggests inferior resources for many of them, and provides no rationale or guidance around why or what aspects of particular courses are valuable. We strove to limit our list of courses to those which you really should know as a software engineer, irrespective of your specialty, and to help you understand why each course is included.

freeCodeCamp is focused mostly on programming, not computer science. For why you might want to learn computer science, see above.

I was on mobile so didn't get to scroll to the bottom of it. Thanks!

I quickly audited all the courses a few months ago. I find OSSU much easier to grasp than teachyourselfcs.com, because the latter doesn't always use modern-day MOOCs, which make learning things way quicker.

You can learn things from a book, sure. But if you take a Udemy or Coursera course, there are always Q&A comments from other people working through the same material, and that insight is invaluable, which is why I prefer OSSU over teachyourselfcs.com. I also prefer MOOCs because there's some level of accountability; you can go for the certificates too, but I don't care much about them.

OSSU does have way too many topics; just pick 80% of them and you should be okay, depending on what your weaknesses are.

Some courses overlap in both, so you should focus on those first and foremost. These include things like nand2tetris, 3blue1brown's linear algebra, and data structures & algorithms (various choices: Sedgewick, Skiena, etc.).

Do your homework on reddit threads as well before picking one or the other.

I also like OSSU because it clearly tells you what the prereqs are.

I'm currently doing nand2tetris because I lack hardware knowledge, and the course is fantastic. I'm also doing a lot of math courses so I can handle mathematical proofs: to more easily understand why one algorithm is better than another, or how to derive it the way whoever invented it did. I prefer a traditional book for math, though; for everything else I prefer MOOCs.

I think adding a concrete language (most probably C) and a course on OOP to the list might prove useful.

When you say "concrete language (most probably C)" are you referring to using one language throughout each subject?

Not one language per subject; rather, a separate subject to build strong basics. I feel learning C as a subject and getting concepts like pointers right might prove useful while learning data structures and algorithms. The article assumes learning a language is very easy, but for a self-taught programmer it's easy to fall into the trap of going straight to a high-level, dynamically typed language. It would be more appropriate to learn C, then data structures, then compiler design, in that order.

Late reply, but looks like C is covered in the Computer Architecture section. How deep does it go into C? That I don't know.
