Let's Not Call It "Computer Science" If We Really Mean "Computer Programming" (codemanship.co.uk)
205 points by vetler 1820 days ago | 196 comments



So, I agree with this article in spirit. Lots of programmers could've benefited from a more programming-centric approach rather than a CS approach. However, this bit gave me pause.

" I cannot tell the difference by watching them develop software."

I couldn't disagree with this more. While it may be true of middle-of-the-road students from CS programs, I can definitely tell the difference between people with a formal CS background and those without in my daily work. I work with a handful of extremely talented self-taught programmers, but there is definitely a difference between the way they attack problems and think about coding versus the CS folks. Not to mention the programmers I've worked with who have EE degrees; those guys think about things and code in an entirely different, third way.

All of these people / approaches are necessary... and none of them are better than the others... but pretending they don't do things differently because you want to feel like you don't need a CS degree is disingenuous.


Add me to the list of people who are curious and don't agree at all with your claim that people with a CS background approach regular problems in a fundamentally different way. The only times the CS folks' approach differs is when they can reduce the unknown to a known: finding the least wasteful way to cut raw materials (knapsack), modelling problems as graphs/trees and then applying well-known, efficient algorithms, etc.

From what I have seen, the ratio of non-clever to clever code in typical software is 80:20 (I am being conservative; it's more like 95:5). The 80% doesn't need cleverness; it needs a lot of discipline and proper abstractions. CS doesn't teach you discipline or proper abstractions. Most of the projects in a CS curriculum are small, done in small teams or solo, and are thrown away as soon as the course requirements are met. The higher-level CS projects exist just to prove a point (research), or don't exist at all except in theory.


Something that no one is acknowledging here is that a "CS" program isn't a uniform thing, and that people who come out of a "CS" program have not uniformly learned what it did have to teach.

I never took the full CS program, but I hit most of the core classes while studying Cognitive Science. By far the most valuable class sequence I took was compiler design; learning HOW a compiler takes code and turns it into assembly language was a revelation to me, and I was able to "see" the relative optimization of algorithms much better after taking that course.

Will I ever need to write a compiler? Probably not, though I have written several small interpreted scripting languages/DSLs. But taking the class expanded my mind in a way that grants me insights a LOT of self-taught programmers I've worked with never seem to have.

Does everyone who takes a compiler design class gain those insights? No, certainly not. It's possible to muddle through just about any class without REALLY understanding it and pass -- some more easily than others.

The weeder classes in CS at my college were tougher than many, though, including one that actually required each student to write non-trivial programs in assembly language.

But back to your comment: It's not that all code needs to be "clever." A friend of mine once CRITICIZED a piece of code for being "too clever," and he was right. "Clever" isn't a goal. But sometimes the straightforward approach made by someone who is good, but self-taught, isn't as good as the equally straightforward approach by the self-taught AND CS-educated programmer.

>The higher-level CS projects exist just to prove a point (research), or don't exist at all except in theory.

Real higher-level CS projects frequently involve interesting graphics research, which (also frequently) ends up working its way into commercial game development.

I don't have any clue what you mean by "doesn't exist at all except in theory," since even if you're building advanced data structures that you could otherwise pull from a library, you're building something "real" that could actually be used.


I think you are arguing against a claim I didn't make. The first half of your post, about compiler classes and other tougher courses, gives me the impression that I somewhere claimed CS education isn't useful. I didn't make that claim. I was responding to the claim about CS and non-CS people taking different approaches to problem solving.

> But back to your comment: It's not that all code needs to be "clever."

By clever, I meant code that isn't routine. My definition of clever includes dynamic programming, reducing NP-complete problems to known algorithms, and writing a parser which takes XML as input and produces Python dicts ("writing a parser is simple," you say, but compared to the rest of the code in a typical project, it does count as clever).

> But sometimes the straightforward approach made by someone who is good, but self-taught, isn't as good as the equally straightforward approach by the self-taught AND CS-educated programmer.

It wasn't me making the general claims. In my post, I did point out that about 20% of the time, a CS background is very useful. The OP made the claim that CS and non-CS approaches differ in general, which I don't think holds true.

> Real higher-level CS projects frequently involve interesting graphics research, which (also frequently) ends up working its way into commercial game development.

"Real higher level CS" -> http://en.wikipedia.org/wiki/No_true_Scotsman

You don't get to choose what real CS research is.

CS researchers have a lot of commendable qualities, but a regular software development project requires a different set of qualities, and CS research doesn't hone those skills as far as regular software dev is concerned.

Look at the code from the academia. More often than not, either there is no code(hence the comment "doesn't exist") or it's spaghetti. I don't see how that helps you write code for the regular projects where you work with teams, code isn't thrown away and you are expected to maintain it. Now, I am not saying CS researchers can't work that way - I am saying what they do for research doesn't help.


To extend on your points, I believe the difference is smaller in practice than in theory.

The biggest difference is that a rigorous CS program will drill this proof-first, implement-later habit into you through repeated exercise. This is very valuable when solving something difficult, because you will end up with many trials and errors, and it's much faster to iterate in your head than in an IDE. This habit is also difficult to acquire without deliberate and repeated exercise.

Most other differences are just differences in experience. It'll take a smart programmer+ no time to learn enough CS theory to adequately solve that most difficult 5% of the job.

+ By smart programmer, I mean someone who has enough intelligence/discipline to graduate from a rigorous CS course, but did not.


This could not be more true.

A formal CS background gives people a very valuable toolset with which to program. It's something you can't fake, and if you know what to look for, it's instantly recognizable.

But if you don't have a CS background, yeah, I could see why it would all look the same to you.

A CS person can easily learn programming; it's second nature to them. But a programmer does not learn deeper theory as easily. Of course it can be learned, but that's why they teach CS at universities and not just programming—it's a much more difficult and more fulfilling subject, in my humble opinion.


A CS person can easily learn programming

This is not true, if you've ever seen academic code...


To be fair to academic code, it's typically used once to prove a point and then thrown away. It's a lot like an R&D project any of us might do that will either be rewritten later or just used to test a theory and then forgotten.


Academics are notorious for their code, but I suspect the parent was talking about people in industry with a CS background.


I think some more fairness is needed here. Academic code is often bad in absolute terms, but excellent if you understand the requirements and process that produce it.

Code developed by actual academics is often terse, elegant and small; produced either due to a sudden flash of enthusiasm or because of a deeply held and significant urge to demonstrate a point. This kind of code is often at the core of what is known as "academic code".

The bulk of "academic code" is developed by a sequence of postgrads pursuing individual goals; code is handed around with a mix of suspicion and over-enthusiasm, and dropped and adopted according to the whims and short-term needs of semi-engaged investigators who are making do and mending with budgets and partners. So, by any sane standard, it's bad.

On the other hand, it isn't meant to be adopted and used in the long term, and if you are looking at it at all it's because it does things that will be very expensive to replicate, and no one can afford to do a clean rebuild on. So - don't dismiss it if you can't afford to bin it.


It would be like saying an Art History major could "easily" learn to paint. The skills just aren't transferable.


> The skills just aren't transferable.

Not all comparisons are apt. I agree that painting requires skills that you probably don't have if you only studied art history, but this is absolutely not true when it comes to CS and SE. CS and SE both have to do with abstraction, languages, logic, models ...


The skills of "programming in the large" are hard-won through experience. There is no good "theory" that will help you structure a large program (despite many languages from academia making the claim of good structure). That's just one example.

In my experience, it's easier to teach a programmer CS theory than the other way round. At least the programmer knows what he doesn't know.


I think you misunderstand what I mean by "theory" and "theoretical background." Knowing how a computer works from the silicon level up to the operating system helps you design systems, and starting with that foundation helps you build better systems. You can get the same experience after a theoretical education, but you can't fake a theoretical basis from simple experience. You're right that you can learn it, but in my opinion, having the theoretical foundation is key to getting the right type of experience, and also helps you immensely along the way.

All I know is that I look for it when hiring. A Berkeley, Stanford or MIT CS student has a very high weight. They still have to prove themselves, but they definitely have a head start in my book.


I'm not sure I agree. For practical programming, design is one of the most important skills that you need. Code that is ugly is automatically unmaintainable, no matter how theoretically sound. The text editor is your canvas, it is your job to turn out a work of art.

I believe this is why so many without strong CS backgrounds are successful as programmers: They bring the design skills often lacking in CS graduates.


At least at my university, you can't get a CS degree without a significant amount of programming. Everybody--even the theory people--gets a thorough background going all the way from high-level programming through assembly and even a bunch of EE (it's technically an "EECS" program). Then everybody who isn't specializing in EE or very interested in CS theory does a lot of programming in more advanced courses. On top of this, most people work on some research which also tends to involve a significant amount of programming.

Really, it's not like saying an Art History major could easily learn to paint as much as saying an "Art Practice" major could.

Also, I've found that more CS-oriented people often have a good grasp of software engineering--designing programs, keeping them maintainable, ensuring correctness and so on. On the other hand, non-CS people tend to lack knowledge of theory unless they go out of their way to learn it (and most, unfortunately, don't seem to). It's much easier to get by programming at some company without knowing any theory than it is to learn CS without knowing how to program.

Now, there are obviously CS people who spend very little time programming. But, in my experience, they're relatively rare and tend to stick to academia. Really, they're more like math majors who happen to like CS than anything else. The ones like that I've met here also happen to be some of the smartest people I've ever talked to, but that could just be a coincidence.


EE programmers are fun... just model it as a state machine! The best part about writing your code as a state machine is that you can then easily translate it into VHDL for implementation on an FPGA, or even into silicon.
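The EE style described above can be sketched (in Python, for illustration; the names and the traffic-light example are hypothetical, not from the comment) as an explicit table-driven state machine, the kind of structure that maps almost mechanically onto a VHDL case statement:

```python
# A hypothetical traffic-light controller written EE-style: all behavior
# lives in one explicit transition table, trivially auditable and
# translatable to hardware description languages.
TRANSITIONS = {
    ("green", "timer"): "yellow",
    ("yellow", "timer"): "red",
    ("red", "timer"): "green",
}

def step(state, event):
    """Return the next state; unrecognized events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "red"
state = step(state, "timer")   # green
state = step(state, "timer")   # yellow
```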


I'm curious about what you think those differences are, and how you think it hurts or helps.


It is hard to answer this because a lot of the trite answers are indeed false. A computer science student may be more able to tell you whether an algorithm is O(n^3) or O(2^n), but the normal (and experienced) programmer will be able to tell you either is "slow" and fix it just as quickly.
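The point about complexity classes can be made concrete with a toy example (a generic illustration, not from the post): the CS student can name the classes, but an experienced programmer spots that the fix is caching either way.

```python
from functools import lru_cache

# Naive Fibonacci: an exponential call tree, "slow" by any vocabulary.
def fib_slow(n):
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

# Memoized version: linear in n. Same fix whether you call it
# "memoization and O(n)" or just "stop recomputing the same thing".
@lru_cache(maxsize=None)
def fib_fast(n):
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)
```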

But there is a style of thinking that can come out of a study of computer science that can be very difficult to obtain on your own, and enable you to build larger and better systems than all but the very, very, very best of self-taught programmers, and while it's hard to put that difference into words, it's mostly the study of maintaining invariants in the code from both a theoretical and a practical standpoint. Or, if you prefer, a way of thinking that helps create and enforce a mathematically-strong form of conceptual purity in your code.

Those who don't have this background, and I very much include people who took the courses but just sort of skated through without absorbing anything, will often have problems with any API I create that requires certain constraints, such as being careful at what point they access local information vs. remote information. Their use of the APIs will be sloppy and hacky, because they don't really understand how they work or what they are for. The constraints I am thinking of are purely technical, a particular server/client split, so when I say you can not do X it is not merely me being an academic prick, it is because it is actually impossible to do X because at the time your code runs you are literally in the wrong place. Explaining this fact is easy, but explaining how the system conceptually wraps around this constraint and works the way it does is a challenge.

And APIs designed by such people, while they may get the job done, tend to be very brute force (for lack of a better term) and to lack any sort of firm foundation, such that the moment a requirement even slightly changes we have to make massive (and often hacky) changes to them.

Unfortunately, it is impossible to provide examples in the scope of a single post, because all small examples will not show the issue. It's a larger pattern of issues that tend to start interacting with each other, which is where the real problems emerge.

It is also the case that a proper path chosen through a computer science program will take you through some eminently practical theory that will make you a vastly more powerful programmer, and is very hard to pick up on your own. By far the biggest example of this is compiler theory. If you are currently in college and still have the opportunity to take your local compiler course, take it. I don't write very many "compilers", but I have now written quite a few "interpreters" and it has enabled me to complete projects that could not have been completed any other way. Getting a formal grounding in signal processing is also a good idea, that one can be hard to pick up later and has surprisingly useful intuitions for a lot of high-speed networking tasks. A formal grounding in networking can be good, though a lot of courses unfortunately seem to just march through TCP and the OSI model, which you could relatively easily just read about.

The fastest way to obtain this basic understanding if you are a good programmer but lack the formal education is one of really, honestly working through SICP (and not just reading it), or becoming fluent in idiomatic Haskell, with special concentration any time someone talks about "enforcing invariants via types". (Personally, I believe Haskell has completely supplanted Lisp as "the language to learn to expand your mind even if you have no intention of using it", but be sure you stick with it for a bit. Learning how to string together a couple maps and a fold is not where the interesting stuff is, it's what the interesting stuff is made out of.) Also, break yourself of the subconscious idea that academic === useless. As someone who tends to straddle the border I will completely agree it isn't all useful, but it isn't by any means all useless either.
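The "enforcing invariants via types" idea can be sketched outside Haskell too. Here is a minimal illustration in Python (the class and its API are invented for this example): a non-empty list whose constructor makes the empty case unrepresentable, so `head()` can never fail.

```python
# Invariant enforced at construction: the list always has >= 1 element.
# Callers of head() never need an "empty" error path, because that state
# cannot be expressed with this type.
class NonEmptyList:
    def __init__(self, first, *rest):
        self._items = [first, *rest]

    def head(self):
        return self._items[0]  # total: no empty-list case exists

    def append(self, item):
        return NonEmptyList(self._items[0], *self._items[1:], item)

xs = NonEmptyList(1).append(2).append(3)
```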

And just let me reiterate as my closing point that it is absolutely possible to go through even a very good Computer Science program and fail to absorb the useful lessons it has. Presumably these are the people claiming it's "useless". I have to admit I tend to not have a high opinion of people who managed to go through a solid program and come out with nothing. (There are also some complete wastes of programs, so YMMV.)


Very, very insightful post. Thank you! To add a little personal bit, I use Haskell daily in my research. I also do web programming on the side in Rails. After figuring out Haskell, and then learning idiomatic Haskell (along with monads, monoids, functors and friends) my way of writing Ruby became much different. I'm more cognizant of patterns in Ruby as they relate to Haskell (as they relate to formalisms in computation). For example, handling nils in Ruby can follow patterns of the Maybe monad in Haskell.

This isn't to say that Haskell is the only mind-expanding language out there, but it sure is good and it worked for me. I think it has an edge over ML or Lisp in the "mind-expanding" game because it has so many formalized computational concepts that are first-class and up-front. That's not to say that you can't expand your mind in other languages, but Haskell can really help you out if you're willing to roll with the learning curve.
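The Maybe-style nil handling mentioned above can be sketched in Python for concreteness (the parent's example is Ruby/Haskell; `bind` and the data here are illustrative):

```python
# A minimal Maybe-style chain: bind() mirrors Haskell's >>= by
# short-circuiting on None instead of raising.
def bind(value, fn):
    """Apply fn only when value is not None."""
    return None if value is None else fn(value)

users = {"alice": {"address": {"city": "Springfield"}}}

def get_city(name):
    u = users.get(name)
    addr = bind(u, lambda u: u.get("address"))
    return bind(addr, lambda a: a.get("city"))
```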


I would add that part of the reason I think Haskell has supplanted Lisp is that so many of the concepts of Lisp have been absorbed into mainstream languages that the distinctiveness is greatly reduced. Macros are still mindblowing, but much less so if you've used Ruby or Python or Perl than if you're a pure C programmer. In 20 years, I suspect modern Haskell will be "less mindblowing" for the same reason. It's not that Lisp has gotten less good or anything, it is that it has mostly won.


Macros are still mindblowing, but much less so if you've used Ruby or Python or Perl than if you're a pure C programmer.

I disagree with this statement. Python's introspection, first-class functions, magic methods, and duck-typing add a lot of the dynamism of Lisp, but every time you run up against something that needs to be a macro, it's a dead end. C has a rudimentary macro system that can get you a little past that point; for example, it's not too difficult to add a foreach construct to C that looks like this:

    queue *q = queue_new();
    queue_add(q, 1);
    /* ... */
    foreach (int i, q) {
        printf("%d\n", i);
    }
On a sidenote, it's kind of sad that a programming layer just above assembly language created 4 decades ago has better macro support than some of the most popular modern languages.


On a sidenote, it's kind of sad that a programming layer just above assembly language created 4 decades ago has better macro support than some of the most popular modern languages.

I disagree. In Python, you have much more advanced metaprogramming facilities, depending on what kind of effort you wish to go to. You have the normal introspection, magic methods and metaclasses as you mentioned (which, IMHO, are in themselves already much more powerful than what you can do with the C preprocessor). But you also have access to exec and eval which lets you do all kinds of crazy stuff that isn't possible with C macros. Using operator overloading and classes, you can even create a lot of new syntax which isn't possible in C[1].

Finally, if you are really determined, you can even hack semantics of existing Python constructs by instrumenting the bytecode. For example, I saw a hack which adds tail-call optimization to Python functions this way (iirc as a decorator).

[1] One example is python-pipeline: http://code.google.com/p/python-pipeline/
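The tail-call hack recalled above can be approximated without touching bytecode; here is a sketch of the decorator idea via a trampoline (names are illustrative, and the actual hack mentioned worked by instrumenting bytecode instead):

```python
# A trampoline: the decorated function returns a _Call marker instead of
# recursing, and the wrapper loops, so the Python stack never grows.
class _Call:
    def __init__(self, args):
        self.args = args

def trampoline(fn):
    def wrapper(*args):
        result = fn(*args)
        while isinstance(result, _Call):  # keep bouncing instead of recursing
            result = fn(*result.args)
        return result
    return wrapper

@trampoline
def countdown(n, acc=0):
    # Sums 1..n without hitting Python's recursion limit.
    return acc if n == 0 else _Call((n - 1, acc + n))
```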


You are falling for the fallacy of believing that because you are talented, only people with the same experience and knowledge can be talented. I see this mistake in many of the recruiting articles that make it to HN, which boil down to "how to hire someone who is exactly like myself".

I have worked with extremely intelligent CS graduates. When it comes to the really involved algorithmic work they leave me in the dust. However every programming job I've ever had has been focused on the other stuff like writing tests and building clean abstractions and maintainable code 99% of the time.

When it comes to this stuff CS grads are no better than any other programmer. In fact, new graduates are often worse because they've never had to build and maintain massive systems over any significant period of time.


Try the following thought experiment:

Person A spends four years getting a BS in CS at a top-tier school, learning about programming at least for a few hours a day on average, with the benefit of a well-considered curriculum and instruction by wizards.

Person B spends four years working full-time on something interesting at Google, which probably gives her more total hands-on-keyboard time, and also access to some minor wizards, although they may not care so much about teaching her.

Let's say that both of them also spend a bit of time on the side learning programming things not school- or work-related.

If both of them have an equal thirst for knowledge, I just don't see why person A is likely to become a better programmer, or why B would fail to pick up the style of thinking you described (which is common, in my experience, among good programmers.) I honestly think that you're mixing up the consequences of {smart, curious, motivated, spends a lot of time programming} and {took a CS degree}.


> Person A spends four years getting a BS in CS at a top-tier school, learning about programming at least for a few hours a day on average,

Real world might be different, but I don't think a CS degree should be teaching programming for a few hours a day. IMO that would be a total waste. There are a bazillion things to learn: database concepts, discrete maths, networks, AI/ML, digital electronics, some basic circuit theory, operating systems... There is programming involved in almost all of the courses, but the purpose isn't to learn programming. When I am learning about MVCC, I am least concerned with learning programming; I care about the MVCC concept itself.


Sorry, I was just using that as a catch-all to describe learning "things which will probably help you be a better programmer in some way."


Often the best way to prove you know these things is to program them.


What are the chances of actually getting a job at Google straight out of high school?


It would be pretty tough, although you can replace "Google" with "any roughly Google-quality set of coworkers and projects." But the original question was whether it's really "very difficult to obtain on your own" the sort of perspective and knowledge you get from taking a CS degree. I doubt that, and I think that most people who are similarly smart and spend a few years working hard with other smart people will pick up many of the same skills.


I work with two people who did.


I would love to read an expanded version of this, how may I interest you to write it? :)


I've stabbed at it, but what came out had a circular dependency in it; the only way to understand the text that resulted was if you already understood the text. About half the stab attempt is at http://www.jerf.org/iri/blogbook/programming_wisdom , with another half just sitting on my hard drive (which is what has the stab at a discussion on exactly what I personally meant by conceptual purity), but until I figure out how to get past that circular dependency it's probably not going anywhere. And I suspect what I'm trying to do there is simply not possible.

Oh, and this expands my Haskell point: http://www.jerf.org/iri/post/2908


Same here. I think the idea of a BSc in Software Development is excellent, and I think some colleges are starting to think along those lines. For instance, the SaaS class from Berkeley on Coursera would have been far more useful to me than the circuit design and formal theory classes I had to take.

However. There is definitely something to be said for SOME kind of structured education for developers. The author's anecdotal evidence notwithstanding, I have met many aspiring self-taught programmers who just don't have it. There seems to me to be a strong correlation between a person's drive to learn something and their willingness to pursue a formal education... not surprising if you think about it.

I also know an absolute genius programmer who is 99% self-taught. But just because people like Wozniak exist doesn't mean they're the norm.


For what it's worth, Wozniak actually did EECS at Berkeley ;).

In regards to the SaaS class: I know a bunch of people who took it before it was offered online. Perhaps surprisingly, most of them did not find it useful. The issue is that they picked up everything taught there either on their own or in internships; they really didn't need to waste a whole course on it. That said, the particular people I talked to are some of the better EECS majors who tend to have significant projects of their own and good internships, so there is certainly some bias.


No need to limit it to self-taught/trade-taught programmers, CS people, and EE people.

I once worked with a brilliant programmer who was studying Classics and English Literature and he had an entirely novel approach too, and brought real value to the team.

That's the cool thing about programming -- it's an abstract thinking skill, so anybody with any sort of formal training will bring their specific ways of working and thinking into the game.


I'm curious. In what ways do they attack problems differently?


I can't talk for the parent, but one significant way I expect people with formal training and people without it to differ is in how the knowledge of data structures and algorithms affects their work.

If you stumble upon a problem that cries for a trie or a splay tree and you don't know they exist, you'll have a very difficult time solving the problem with adequate performance.

If you are lucky and dedicated and talented, you might end up with something similar, but it'll take you a lot longer than someone who already knew those structures were there and how and why to use them. Likewise, if you have been trained in developing algorithms, you know which things are important and which aren't, and have a toolkit to select ideas from.
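For the curious, here is roughly what a trie buys you (a minimal sketch, not production code): prefix queries walk one node per character instead of scanning every stored word.

```python
# Minimal trie: each node maps one character to a child node.
# has_prefix() costs O(len(prefix)) regardless of how many words are stored.
class Trie:
    def __init__(self):
        self.children = {}
        self.is_word = False

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

    def has_prefix(self, prefix):
        node = self
        for ch in prefix:
            node = node.children.get(ch)
            if node is None:
                return False
        return True

t = Trie()
t.insert("splay")
t.insert("spline")
```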

Of course, outliers do not invalidate the thesis.


Studying a good book on data structures would probably cover about 80% of what a practicing programmer really "needs" from a CS degree.


More than studying it: internalizing a good book on data structures. That, plus the hard-to-pin-down skill of learning to design systems and how invariants are involved in design, which jerf talked about up there.


> I can't talk for the parent, but one significant way I expect people with formal training and people without it to differ is in how the knowledge of data structures and algorithms affects their work.

And how often, in your experience, do you stumble upon these kinds of problems? In my experience, the clever parts are about 5% to 20%. Of course, there can be projects where the clever parts outnumber the non-clever parts, but we are talking about norms, and as you said, exceptions don't prove rules.


The question posed was "how do they differ?" not "how important is that difference?". If you ask me the latter, I'd say that for many domains it's not that important (see the caveat below.) On the other hand, for Google and Facebook and similar companies it's certainly very important, which is why all their interviews are mainly about how well you can reason about algorithms and data structures.

Caveat: These problems might not happen often in less data intensive domains but, when they do happen, they are certainly very important in my experience.


> The question posed was "how do they differ?" not "how important is that difference?"

And the question arose from the claim that formally taught and self-taught programmers take different approaches in general. It goes without saying that a problem that can be solved effectively using a trie will be approached in two different ways by someone who knows tries and someone who does not. The claim made in the OP was broad, and I am saying that about 80% of the time, an autodidact and a CS person will solve it the same way. The fact that a CS person can sometimes use insights the non-CS guy doesn't have doesn't equate to different approaches in general.


The fact that 80% of the time the approach is not important to get the desired result because the task only operates over small n or similar does not mean that the approaches are equal.


It might not be needed, but it's about the approach. You don't need all the tools all the time, but knowing what patterns are available and having a good understanding of why you would or wouldn't use it means you approach a problem starting from a different place.

It's possible to get the same end result most of the time (especially if the end result doesn't require the "clever" solution), but I would also agree that there definitely is a difference in approach that is discernible by watching someone work.


Ah yes. This is right on.


People talk about the types of programmers your company needs, things like:

starter, finisher, bug fixer, architect.

In many ways I think these are the sorts of things that are influenced by your background. People with a CS background tend to be better at the architecting side of things in my experience. The self taught are better starters because they are super comfortable learning new languages and enjoy the thrill of the new stuff. Some of the best finishers I've ever met tend to be EE folks.

I think this is possibly as much about their background as it is about the personality types that choose these different paths. That said, let's look at coding style. The EEs I've worked with have all written extremely "simple" (this is not derogatory) code that was easy to verify and tended not to use as many "high-level" features. The self-taught programmers I've worked with write amazing code using some cutting-edge shit that looks great and usually works great... but often don't think about optimization until the very end, sometimes to the detriment of architecture and design. The CS folks tend toward over-optimizing up front and getting caught in the premature-optimization trap. They may over-design it up front and tend toward some middle ground between "simple" and "cutting edge" that half the time ends up being worse than either of the other approaches.

This is of course an oversimplification and doesn't fully capture all of the differences I might've noticed... it's simply an example, and obviously I understand these are generalizations that won't always be true, but it's clear there are differences.

Additional example: if you want to know what a bunch of different grad students specialize in, stick them together on a moderately complex project and just watch which items each person obsesses about. Some will obsess about network latency, some may obsess about cache performance, others about security. We need all of these people... but it's clear they will home in on different items.


> Some of the best finishers I've ever met tend to be EE folks.

That's been my experience as well, but I think it's mostly because, of the three types, EEs are the only ones used to making schedules and sticking to them.


#1 (My Ad-Hominem Attack) Who is this guy? Why does he go on to trash comp sci if he never studied it?

#2 "Of all the mathematical sciences, computer science is unquestionably the dullest. If I had my time again, despite discovering just how much I love writing software, I still wouldn't study computer science."

I stopped reading after this point. Why does he state as a matter-of-fact that computer science is unquestionably the dullest? It's actually quite captivating and quite profound. Cryptography, machine learning, computability theory - all leading ultimately to the question of what, exactly, is knowledge and what can we know.

I can't stand the broader attitude of this, which essentially boils down to "I'm a hotshot programmer, therefore anything academic or computer-sciencey is stupid". A lot of people in general could be a little more humble and recognize that there are a lot of things they don't know they don't know.


Starting with your ad hominem, I think you took exactly the wrong things away from this article. Regardless of his writing style, his points are dead-on:

1. Computer science doesn't teach programming (with the corollary that computer scientists specifically don't want to teach programming).

2. Most people going into computer science want to learn programming.

3. Many people get fed up with the rigors of computer science (because they recognize it isn't teaching them programming) and move on to other pastures instead.

I've always believed that there should be a "Software Engineering" curriculum in the engineering department and a "Computer Science" curriculum in the math department. That we blur the lines hurts both fields, as programming students demand more programming in CS, at the expense of the underlying theory; and math students demand more theory, at the expense of being able to actually code up a solution.

At the end of it all, though, there is a strong value for some computer science in all programmers. It is amazing how often solutions get re-invented, though it happens far less now with the combination of open source and Google.


I agree with the sentiment that software engineering isn't computer science, but I disagree with the sentiment that software engineering is a discipline worthy of note on the same scale that we afford to, say, electrical engineering, civil engineering, or mechanical engineering. To date, software engineering is still largely a collection of (fairly subjective and contextually-sensitive) best practices.

Software Engineering is better off if it's treated as a trade like being an electrician or a carpenter (which it largely is through on the job, experiential training being as emphasized as it is). Take what you absolutely need from the theory, and learn from those in the know to become a respected practitioner.

Software engineering is a trade without a guild.


> software engineering is still largely a collection of (fairly subjective and contextually-sensitive) best practices.

I agree

I'm glad I did computer science. Even though my title now is Software Engineer, I learned most software engineering best practices on the job. What's very difficult to learn on the job is computational complexity theory, advanced data structures and algorithms, or simply a solid background in discrete math. Basically, you probably won't find yourself working on hard, cognitively challenging problems in computation with a "software engineering" degree. In hindsight, if I had done some kind of "software engineering" program, I really would have sold myself short.


> but I disagree with the sentiment that software engineering is a discipline worthy of note on the same scale that we afford to, say, electrical engineering, civil engineering, or mechanical engineering.

Exactly. Programming is not harder to learn or master than something like carpentry (i.e. something easily picked up by an interested person on their own, but with lots of potential for mastery).

What on earth would you be doing for four years of college studying programming sans compsci? What a waste.


Most professions are a collection of best practice rules, accountancy for one. Yet it is not a trade.


2 years after I graduated with a BS in Computer Science from the LAS College, the Engineering College introduced the Software Engineering program. It was a joint effort between the ComS & CompE departments. This is becoming increasingly common.


My institution has introduced similar programs. I question their utility in the face of the cost of college and the shelf-life of what you can learn in a 4-year classroom setting (not to mention general ed requirements).


> To date, software engineering is still largely a collection of (fairly subjective and contextually-sensitive) best practices.

Any engineering field is a collection of best practices, so that works out well.


At the university I attend (University of Waterloo, Canada), we actually do have a Software Engineering program which is part of the Engineering department, and a Computer Science curriculum which is part of the Mathematics department. SoftEng graduates can even qualify to become professional engineers.

I'm also 99% sure that the two degrees are common at quite a number, if not the majority, of big technical universities in Canada.


It's the same here in Argentina. You may have a Computer Science program, which belongs to the "Exact Sciences" department (along with Maths, Physics, Chemistry, Geology, etc...), and then a Software Engineering curriculum in the Engineering department (along with Electrical, Mechanical, Nautical, etc.)...

Back in the day I chose Software Engineering because the other (CS) seemed extremely theoretical to me. But I think that having that "blurred line" of yours (ref to SoftwareMaven) lets you mix both worlds a little more, so I would prefer that. But maybe it's just because we always want what we don't have ;)


Yeah, in terms of curriculum differences, CS is definitely a lot more theoretical, with heavier math and pure CS courses, while SE has project management components, more programming, some computer science, some Computer Engineering courses and even some Electrical Courses.

In Waterloo's case, the whole "knowing CS, but not how to program" falls flat, since we have an amazing co-op program where you can easily get two years or so of experience at different companies by the time you graduate. Most CS students come out of CS with co-op experience and, as a result, are great programmers.


That's the way it's set up at Waterloo, with soft-eng in Engineering, and CompSci in the Math Faculty. That being said, in Computer Science we have some classes on bash scripting and C++, which I think are valuable, but not as fun as functional programming or compilers. Still more fun than our very dry formal logic class, which is sad, because I like formal logic.


In my experience as a CS student, the first few classes you take teach you the basics of "computer programming"...aka you just write a lot of object oriented programs. Then as you move on to parallel programming, computer architecture, and operating systems, it really becomes more of a science.


> it really becomes more of a science.

This bugs me. There really should be another word for this, since, as far as I am concerned, something in which the scientific method plays such a secondary role should not be primarily classified as a science. I guess you could call it "a math", but that is not terribly satisfying.

This complaint is really just a rephrasing of the old "is mathematics science" debate (http://en.wikipedia.org/wiki/Mathematics#Mathematics_as_scie...), but as long as we are talking about if things that are "too engineery" are computer science, we might as well talk about if computer science is science.


I agree totally. I do research work in crypto, and I find the theoretical aspects incredibly interesting and engaging. That said, I don't think he is incorrect in his overall point (despite the arrogance with which he makes it); most programmers only need enough CS knowledge to get by.


Here in the UK, when I did my Computer Science degree, about a third to a half was programming. The rest of it was basically hardware and mathematical type theory. There was more, but the point is, it covered computing as a whole: how CPUs actually work, how data is organised on a disk, what is actually going on in the CAT5 between cards, etc. AI, databases, servers, etc.

There was another course, CS Software Engineering, or "programming" for the rest of us. They did our programming stuff, and then some. The two courses sort of forked. Programmers did more programming; we did electronics, hardware, etc.

Funny thing was, the CS students became better programmers than the CS-SE people did. I think it was because they understood computers, and not just programming. Next odd thing was that the CS guys often became programmers, and the CS-SE guys ended up in support roles. Even when the CS-SE guys were programmers, they worked in herd-like environments, whereas the CS programmers ended up on some very interesting projects, like avionics.

Later on, when I did some support roles, I found that the programmers knew less about computers than secretaries. Web designers/coders were worse.

Not saying this is any sort of trend, and I did do my degree 15 odd years ago. But I found it interesting.


Would be interested to know the course requirements for the CS vs. CS-SE

If they had more relaxed A-level grades for the CS-SE students (or it was under-subscribed and filled through clearing) then the students on the course were just a lower standard of student rather than the course itself being deficient.


Computer science isn't about hardware, either.

I built a basic CPU, but I'd class that as digital electronics. Building a very simple CPU isn't really that hard either.


I follow the author's point, but I feel as though he's as guilty of dismissing valuable knowledge as the academics he condemns. I recently completed a graduate degree at a university that offered both a "Computer Science" program as well as a "Software Engineering" program. This is only my opinion, of course, but the difference in depth of understanding between the students in each field was very stark. The SoftE students dedicated several semesters to development best-practice-type classes: generating JavaDocs, curating JUnit tests, writing and revising good requirements, and UML modeling. They would essentially walk away from their program as expert Java users with little to no idea how Java itself (let alone the machine underneath) functioned. The CS students knew the inner workings of a computer program from code to compiler to stack, but would have to learn the bookkeeping side of development on the job. Both paths have their strengths and weaknesses and you've got to learn some degree of each to be a meaningful contributor.


fwiw, an opposite take, from a decade ago.

at my cs pgm, the assistant professors were supposed to make a presentation on day 1 so the grad students could make an informed decision which prof to work with, which subjects to sign up for.

the software engg prof was a glib, entrepreneurial hotshot who said "my students have been placed at netscape, sun, microsoft, oracle". he talked about industry partnerships, internships, 1000s of lines of production code, maintenance, unit tests, refactoring...

the db prof said "i have personally placed all my students in oracle". he talked about rdbms, schemas, superkeys, boyce codd normal forms, how "everything was ultimately data, so you'll never be jobless if you became a dba".

the algorithms prof was a shy lanky dude who went straight to the blackboard and wrote "computers are to computing what telescopes are to astronomy". at that point none of us knew who dijkstra was, so we just looked at each other like "huh?". he then turned to us and said "99% of cs is about searching and sorting. sort algorithms. search algorithms." then he drew a table which listed performance of quicksort, shellsort, heapsort, insertion sort, and two algorithms of his own invention. he talked about Big O notation, theorems, discrete math, taocp. our heads were spinning, and when he left there was a huge collective sigh of relief.

at the end, the student breakup was like 49-49-2. So only 2% of the class signed up for algos. Like most Indians, I come from a poor household & my main concern was coin. So I signed up for Software engg. After 1 month, I dropped the course and went crawling back on my knees to the Algo prof, and begged him to take me on. That single decision changed my whole life. In that 1 month, I had found out something about myself - that I was a royal prick. I was personally not cut out to do scut work. I had zero interest and respect for maintenance, unit tests, requirements & specs, UML modelling, refactoring, waterfall method, agile, kanban... I found that whole discipline filled with unproven subjective airheaded garbage, essentially a fad. To this day when a recruiter mentions the word "unit tests" on the phone, I just hang up. Just pure instinctual reflex.

It takes all kinds...


I've tended to explain it to people this way -- In the related computer science fields there are only three kinds of problems:

1) The problems math makes for you. These are the kind of problems lots of people think they work on, but don't. Compared to other problems, there aren't a lot of them, but solving one has a huge impact on the field as one solution can be shared across the industry.

2) The problems physics makes for you. These are the problems you encounter when you take the math and start applying it to actual machines. This is hardware design or software to directly make that hardware do what you want. This is 'applied' computer science and again, many can leverage the work of a few.

3) The problems other software engineers make for you. This is what most computer scientists/engineers/programmers actually have to do. This is dealing with making someone else's code do what you want. It's dealing with APIs and layers and layers of other, primarily human-created, problems.


Am I reading it correctly that people in your CS program didn't know about basic algorithms until the beginning of graduate school?


Degrees don't matter so much. Can you learn? Are you smart? Can you ship good code (given enough training)? Real programming skill comes through experience and domain knowledge. I am less interested in the educational background of the developers I've hired than in their future growth potential, which has more to do with personality than IQ.


sadly not every employer is like you


This is not an opposite take: by your own account, you found yourself to be a royal prick.

This confirms the 0.1% rule of the OP.



> I was personally not cut out to do scut work. I had zero interest and respect for maintenance, unit tests, requirements & specs, UML modelling, refactoring, waterfall method, agile, kanban...

The world in which this is "scut work" is not the world I want to live in.

Let's be honest. You don't want to do practical, you'd rather do theoretical, and that's fine. However, don't disrespect practical just because it isn't your bag of chips, particularly to an audience that is full of practical engineers. I could spend all day disrespecting theoretical -- mainly because most of those passionate about theoretical at the expense of practical make comments like these -- but I do not, because I see theoretical as necessary for our craft.


Are you saying people who do theoretical work don't do it in practical ways?

As an example, take Google's self-driving cars. Would you say the meat of their body of work is theoretical? And would you say the code they (Sebastian Thrun himself, and others involved in the project) write is not practical, readable, maintainable, or tested? I would say what they do is theoretical work, and they write practical code for their theoretical work.

If you look at Udacity classes - I've watched Peter Norvig's, not Thrun's, so I'll use that as a reference - Norvig places a lot of importance on tests, maintainability, readability and other practical things throughout his class. I would assume the code he writes (however little now, however much in the past) is very practical. Yet what he writes code for has a huge theoretical aspect to it. I'm saying one's body of work can be theoretical by nature, but that doesn't mean they don't write practical code.


Everything's practical. The difference between theory and practice is time and distance.

"Nothing I have ever done is of the slightest practical use." - G. H. Hardy


Thrun's code in his class is nothing like Norvig's, FWIW. It's practical in its way, but in a project of mine I'd want it tightened up.


> As an example, take Google's self-driving cars. Would you say the meat of their body of work theoretical?

No.


Would you care to elaborate? Surely most of the lines of code are not directly implementing theoretical things, but I would say the meat of the work is without a doubt theoretical.


I don't know about self driving cars, but I have known people who did PhD level research in control engineering and then went into industry (real, heavy metal industry, like offshore oil installations in the North Sea) implementing advanced control systems.

The estimate they gave me of the contribution of their control algorithms (the "theoretical" part of the project) to the overall effort of getting the thing working was less than 1%.


Theory is thinking about how to build a self-driving car, and the algorithms and considerations that would be required to do so, and perhaps writing a paper about how best to go about doing it. Practical is writing code, running wire, and testing the car. Surely they've done both, but now that the car exists they're into the practical territory. As they adventure through the practical part, surely new theoreticals are discovered and published.

Just because nobody has built a self-driving car before does not automatically make actually building one a theoretical exercise. The process is applying the sciences to making a car do something, which is very practical. Your awesome sort algorithm and the paper explaining it is theoretical. My implementation of it is practical.

That is the differentiation that most people can't see.


I have to disagree with you.

Software engineering 'ideas' regarding modeling and testing have gone way overboard.

I have a feeling that all those who peddle fancy buzzwords such as agile and kanban are either MBA graduates with no real engineering background or failed engineers trying to reinvent their careers.


I'm a preacher of Agile. I talk about it in order to get things done. Theoretical work is important, but let's face it, most customers want products, not research. Your average person declaring himself 'theoretically oriented' is no good at delivering products. He hasn't understood the bigger goal and will just waste the client's time and money training himself to write better algos. Most likely he will be rather inexperienced, and produce pale replicas of existing library algos and data structures.

In my experience the ones who know the most theory are also often the most pragmatic and professional ones. They don't boast about their technical knowledge, but they know it and know how to apply it. These will declare themselves 'problem solvers'.

One that declares himself mostly theoretical most likely just hasn't gotten that far yet. Then we have real researchers, but that's another story entirely.


> most customers want products, not research

"I think that it's extraordinarily important that we in computer science keep fun in computing. When it started out, it was an awful lot of fun. Of course, the paying customers got shafted every now and then, and after a while we began to take their complaints seriously. We began to feel as if we really were responsible for the successful, error-free perfect use of these machines. I don't think we are. I think we're responsible for stretching them, setting them off in new directions, and keeping fun in the house. I hope the field of computer science never loses its sense of fun. Above all, I hope we don't become missionaries. Don't feel as if you're Bible salesmen. The world has too many of those already."

So said Alan Perlis, the first recipient of the ACM Turing award. I'll listen to Perlis all day rather than pay any attention to an Agile blowhard who calls himself a "thought leader" on his own bio page. customers can go fuck themselves. cs is what matters.


Well, I for one want to make great products that make me feel like we surpassed ourselves and make the customers happy. Most likely this involves technical innovation. I'm sick and tired of the ones writing "better" hash tables all day, blowing the budget and pissing off everyone who actually wants to accomplish something.

Fun for me means keeping the project on track, delivering so that we get an income and can spend real money on R&D and events. I don't want to sit in a project which hasn't delivered in months, is in overtime (with all the stress that that means) and has no working program whatsoever, just a bunch of Impls, Contexts, BidiMaps and XxxUtils. That's not CS, and it sure as hell ain't fun!


> customers can go fuck themselves.

Oh, dear Lord. This comment makes me hurt.

Your salary from Bank of America comes from customers -- even if you're paid from investment, the investment is only there because Bank of America has customers. Your research in academia is paid for by customers of the institution, both former and present, and possibly governments (who are also interested in the product of the research). Your shortsighted world view is tragically common in those who favor academia, and without paying customers driving research into new areas, your precious academia wouldn't exist and you'd do well to understand that. Money is everything.

What the hell do you think the point of academia is? Throughout history, the sciences have almost always been advanced by a pressing need or a mistake in practical execution.

> cs is what matters.

Execution is the only thing in the world that matters. You are basically admitting that you're the idea guy, searching for a "technical co-founder".

Had Mark Zuckerberg spent a lot of time worried about the runtime efficiency of parts of his code or whether there was a more efficient algorithm for sorting friends, Facebook would be nothing today. He executed and didn't give a shit, because he wasn't exercising computer science, he was exercising building a product.

Surely people went back and made things more efficient as Facebook scaled, but I've been at a rapidly-growing startup a while, and I haven't made my product more efficient through many computer science advances. Most have been using better software, better network topology, better configuration, less dumb code, and so on.

Your attitude completely misses what the article is explaining, and, frankly, your ad hominem on the guy talking about Agile is completely out of place when practically your entire LinkedIn profile is buzzwords, just your field instead of his.

(Aside: I love that HN makes it really easy to build a list of no-hires, fairly easily, just by observing how people think and communicate.)


"I haven't made my product more efficient through many computer science advances. Most have been using better software, better network topology, better configuration, less dumb code, and so on."

Please stop. You're getting irony all over my desk here.


> Please stop.

No. I will not silence my opinion because you disagree. Although your comment is almost devoid of insight, I would infer that you're implying I'm being obtuse regarding the role of computer science in making my stuff better. Example of a performance improvement: choosing a frobnicator that uses multiple cores to frobnicate instead of a single core to frobnicate. Do I have computer science to thank for that? In a way, the same way I have medical research to thank for Advil. However, it's disingenuous to say medical research got rid of my headache. The Advil did.

Computer science has its place, but it needs to be more aware of that place. This thread is a very poignant demonstration of that.


Well, here are a few of the people "peddling fancy buzzwords". I think their bios speak for themselves.

* Martin Fowler - http://martinfowler.com/aboutMe.html
* David Anderson - http://agilemanagement.net/index.php/bio_david/
* Steve McConnell - http://www.stevemcconnell.com/bio.htm


> Software engineering 'ideas' regarding modeling and testing have gone way overboard.

Maybe, but he also said specs and maintenance.


I agree that a lot of people only need to learn how to program, but I find computer science genuinely fascinating and mind-expanding, so it's a shame that more programmers don't feel like they're able to access it, either because of unfamiliarity with formal mathematics or a justifiable aversion to the "category theory or GTFO" attitude.

<plug> Which is why I'm writing http://experthuman.com/computation-book. </plug>


I have always considered computer science to be information science, a part of mathematics and not a separate science. But then again, as a physicist I might be wrong. It's not meant to be dismissive at all (I am, after all, a programmer), but I consider programming to be a tool, like maths (and consider the two to be inseparable).


I really wish we in the English-speaking world would stop calling it computer science and call it informatics, as is done in some parts of the world. Linear algebra, when applied to any practical problem, is almost always done by computer, yet linear algebra isn't called computer matrix equation science. The terms linear algebra and informatics make it clear that these are mathematical fields that can be studied for their own sake or as tools for practical applications in other fields.

People who majored in informatics would study such things as information theory, approaches to AI/machine learning, models of human cognition, communications (as in signal/noise, data compression, etc.), and so on, and would undoubtedly be required to take programming classes, where programming would be considered a tool to help people learn informatics.

People who majored in programming (wherever that was done, including industrial training and apprenticeships), would be required to take some informatics classes, where informatics was treated as a tool to help them learn to be better programmers.

Just changing the name to informatics would go a long way, in my opinion, toward clearing up the mess. Employers who demand CS degrees as if the degree meant "better-trained programmer" might think differently if they were called informatics degrees and stand in contrast to programming degrees (or certifications) that emphasized practical industrial software dev. And informatics departments would be freed up to be a more general resource to more than just programmers.


Mathematics ("the systematic study of quantity, structure, space, and change") and computer science are both formal sciences (http://en.wikipedia.org/wiki/Formal_science). You can always broaden the definition of mathematics to encompass all formal sciences but it's useful in practice to be able to differentiate them: a mathematician is a different thing from a computer scientist, even if they both spend all day pushing symbols around inside imaginary formal systems instead of predicting and measuring the physical universe.


I have always considered mathematics to be a part of philosophy and not a separate science.


It's all down to definitions.

You could argue that mathematics is philosophy done right: With rigour.


Well, if you take that attitude, then you'd have to reject all science, because it relies on induction.


Oh, nothing wrong with induction. Just with waffling around.


I think Hume would disagree.


I wasn't permitted to study CS formally (no school would accept me), but I've never found the study itself to be particularly inaccessible. There is a lot of great information out there that anyone can access.

I think people are just apt to become discouraged. There is the prevailing idea that only CS students can learn CS, especially on places like HN, which causes people, who would otherwise be more than capable, to shy away from studying the topic.


This is practically a high brow version of "math is hard, let's go shopping": hash tables are hard, let's use java.utils. Yes, you can write a lot of programs with a few primitives you've learnt in high school and some random bits of practical wisdom you've picked from co-workers. But you're horribly limited as well. (To pick a simple example: if you don't understand how hash tables work, then you probably also use naive O(n^2) algorithms where O(n log n) is possible with little extra effort.) It's a personal decision to stay this way, but it's quite awful, for other people who may look up to you, to quash their interest to become more educated by proclaiming higher knowledge is "dull" and useless.
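To make the parent's point concrete, here's a minimal sketch (Python, made-up data) of the win that understanding hash tables buys: replacing a naive O(n^2) membership check with hashing, which does even better than the O(n log n) a sort-and-binary-search would give.

```python
import random

data = [random.randrange(10_000) for _ in range(2_000)]
queries = [random.randrange(10_000) for _ in range(2_000)]

# Naive: `q in data` scans the list, O(n) per query, O(n^2) overall.
hits_naive = sum(1 for q in queries if q in data)

# Knowing how hash tables work, build a set once: expected O(1) per
# lookup, O(n) overall.
lookup = set(data)
hits_hashed = sum(1 for q in queries if q in lookup)

assert hits_naive == hits_hashed  # same answers, very different cost
```

Both versions are a one-liner to write; the difference is knowing which one you just wrote.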


I was with him until he started complaining about being asked how to implement a hashmap and how this implied that the interviewer had reimplemented Java.utils.

How would you know when to use a hashmap and when to use a list if you don't know anything about big-O notation?


I think he addresses this point pretty well -- you can learn good programming practices through apprenticeship/experience without understanding the deeper fundamentals. You won't be creating the next MapReduce, but you'll be able to remix and hack up existing functionality in new and creative ways.


From the other end though, this can result in a lot of cargo-cult programming. "this is how to do it" can create a ton of redundancy (the bad kind) if the underlying mechanisms are not understood sufficiently.


Even having the underlying ability to understand something sufficiently doesn't imply you'll automatically do so. There are plenty of times even a properly educated person on the subject matter at hand will miss opportunities to apply their knowledge. Recognition of a given scenario is a test of your analytic ability, not sheer know-how.


You can have an intuitive sense of how many operations you're performing without knowing big-O notation.


Of course, but part of the benefit from studying CS is that you'll be able to recognize intuitively when there is or there should be a better solution.

Let's say you want a data structure that performs three operations. Insert, Delete, and Find (as in, 'is this in the database?'). The intuitive sense may come from saying, "Linked-Lists would be horrible for this! Each operation would be slow." (O(n)) The practiced programmer may say, "I can keep the data sorted and use an Array. Those would probably be faster." (O(lg n)) However, if you learned a little more CS and how hashes work, you would know that they are constant time for all three operations. You never had to waste time thinking about which choice to make because you know how hashes are implemented and that they specialize on those operations running in constant time.
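A rough sketch of that Insert/Delete/Find comparison in Python (illustrative only; the plain list below stands in for a linked list, and real costs depend on constant factors):

```python
import bisect

# Unsorted list (stand-in for a linked list): Find and Delete scan, O(n).
items = []
items.append(42)                 # insert
found_list = 42 in items         # O(n) linear scan
items.remove(42)                 # O(n) scan

# Sorted array: Find is O(lg n) via binary search (inserts and deletes
# still have to shift elements, though).
sorted_items = [1, 5, 9]
i = bisect.bisect_left(sorted_items, 5)
found_sorted = i < len(sorted_items) and sorted_items[i] == 5

# Hash set: expected O(1) for Insert, Delete, and Find.
s = set()
s.add(42)                        # insert
found_hashed = 42 in s           # hash lookup
s.discard(42)                    # delete
```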

Besides, big-O notation takes no time at all to learn. I learned its theory as a freshman in high school during Algebra II, when we wanted to know which of two polynomials grew faster. Take the most significant part, rip off the constants, and that's its growth rate.


I got into trouble once at an interview, when I answered a question about joining two data sets by writing a SQL join. The interviewer didn't know how to think of the performance of SQL - he was assuming I'd write something iterative so we could talk Big-O. Which is to say - a lot of tools used in the real world don't easily submit to Big-O, and some academic types resist learning them for that reason.


I find it interesting that you think a programmer needs to know big-O to know when to use a hash-table vs a list; I don't find that to be the case at all.


"Take the most significant part, rip off the constants, and that's its growth rate."

Well, it's an upper bound on its growth rate. And only after some possibly-gigantic n.


Yeah, I understand. I was describing how I did it in Algebra II. They would give us f(x) = 5x^3 + 4x + 8 and we would know that it was O(x^3).

We also understood that it was as x -> infinity, hence why f(x) = 2x^4 has a larger growth rate than g(x) = x^2 + 5000. When you're talking about scalability when programming, you're going to have to understand big-O, how constant factors come into play (why Floyd-Warshall at O(n^3) is often better in practice), and how big-\Theta works. If not, you're eventually going to make a mistake and slow everything down.
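The "as the input grows" part can be made concrete in a couple of lines (a toy check in Python, unrelated to any particular codebase):

```python
f = lambda x: 2 * x**4        # O(x^4)
g = lambda x: x**2 + 5000     # O(x^2), but with a big constant term

# For small inputs the constant term dominates...
print(f(5), g(5))    # 1250 vs 5025 -- g is "bigger" here

# ...but asymptotically the higher-order term always wins.
print(f(10), g(10))  # 20000 vs 5100
```

This is exactly why constants get ripped off: they only decide who wins for small inputs.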


I like the question. I would never expect a candidate to know a perfect implementation but they better understand the general concepts. It's hard to tell the tone from his writing, but if a candidate was snarky about "why not java utils" I'd be less inclined to want to work with them. The question can be answered many ways without being hostile to the interviewer.


Agreed. The Data Structures courses we had were among the best courses we had. I think it's great to learn how to implement them once.


"Of all the mathematical sciences, computer science is unquestionably the dullest."

Really? I find it the most interesting. Which is, I think, what led me here.


Sigh.

I keep seeing all these cs vs programming articles. I'm really waiting for a "fuck just programming, cs is fucking awesome" one, because all the ones I've read so far sound like "haha, stupid math nerds and their cs. just learn to program" (a bit of a mis-characterization of this article, maybe, but I get annoyed when people say something I like is "unquestionably" dull)


Knowing HN, someone will surely deliver, and the front page will have 4-6 variants of "Why you should/shouldn't learn computer science"


After I wrote out "fuck just programming, cs is fucking awesome", I immediately thought it sounded like a good name for a blog post :)

Is my hipster showing?


I've been mentally composing a blog post entitled, "Programming is hard, which is why I don't want to do it" about the differences between "real programming" and computer science. The thesis is that "real programming" involves solving problems that are inherently simple but have loads of incidental state and details to mentally keep track of, whereas computer science involves solving streamlined, semi-pure instances of inherently hard problems.

I prefer computer-science because I don't like loading five windows and 18 tabs worth of incidental state into my working mind just to get any work done at all, but I greatly enjoy cutting away every impurity and irrelevant detail to forge a creative solution from the ore of a truly difficult problem.


Please write this blog post out in full because it sounds exactly like what I've always felt but haven't been able to express properly.


Done and submitted to HN.


Thanks. I really think the difference you've pointed out is something more people should discuss, instead of implying that CS is just like programming (coding) but with more math.

Instead most people focus on how programming isn't CS enough, or how CS isn't real-world enough and they miss or gloss over the fundamental difference between the two in regard to the type of cognitive abilities required.


Yes. Unfortunately, I have only but one fuck to give, so I find your title a bit overwhelming ;)


Maybe it's my physics background, but I've always found the problem to be a lack of analogues.

We have physics, but we also have mechanical and civil engineering. Likewise we should have a computer science major and a developer major. Just as how mechanical and civil engineers still take some physics and math courses, the developers will take some of the CS courses in addition to specialized courses for their major.


Perhaps this is the thinking behind many of the Software Engineering degrees? They typically focus on requirements gathering, architecture, process, and business integration with less focus on the nitty gritty of sorting algorithms or how compilers work.


We used to get those, or at least articles that were like LOOK WHAT YOU CAN DO WITH THIS THEOREM.


I was tempted to stop reading when I got here. Computer Science is fascinating... and an incredibly broad field. The notion that the author could make the sweeping accusation that it is "unquestionably the dullest" of the mathematical sciences leads me to disregard his opinions in general.


I interview a lot of people who want programming jobs.

What I find is: almost nobody can program, and almost nobody knows basic CS 101 data structures.

It does not matter if you are a self-taught programmer with ten years of experience in the best companies; odds are that you cannot solve a trivial coding exercise.

It does not matter if you have a master's degree in Computer Science; odds are that you cannot successfully build a tree structure.

I have no idea how these people keep their jobs or how they graduated, but this is the norm. People who can do CS at all, and people who can code at all, are both rare. Or at least, are rare in the pool of people applying for work.
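For reference, the kind of tree exercise in question really is small. Here is a sketch of a plain (unbalanced) binary search tree in Python; the names and structure are my own, not anything specific from an interview:

```python
class Node:
    """A node in a plain, unbalanced binary search tree."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert key, returning the (possibly new) root; duplicates are ignored."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def contains(root, key):
    """Walk down the tree, going left or right by comparison."""
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

root = None
for k in [5, 3, 8, 1]:
    root = insert(root, k)
print(contains(root, 8), contains(root, 7))  # True False
```

Roughly twenty lines, and it exercises recursion, pointers/references, and ordering invariants all at once, which is presumably why it keeps showing up in interviews.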


If a person is a self-taught programmer with ten years of experience working in a good company, then they are obviously programming for that company in a sufficient capacity, even if they are unable to solve puzzles offered up by human resource managers who found them on Google by typing in "Questions for Programming Interviews." Programming is not about being able to solve brain teasers. Programming is about being able to deliver quality code. If the language handles most of these data structures for you, or if you have used them before without consciously considering it, then you obviously know their importance even if you cannot put it into words under stress. Do not degrade people who cannot repeat verbatim some useless information they read in a book, or solve brain teasers in minutes (unless they have already seen those puzzles). Being able to work hard, write clean code, collaborate in a team, and supply a functional product on time according to specifications are the most important traits.


Sure - so I prefer to ask people to write a small program that solves a small, practical task. And I try to make it feel like we're pair programming, instead of just me watching them code, so we can see what it's like to collaborate.


You are the only one who said anything about brain teasers and puzzles. The parent lamented the fact that so many programmers cannot solve basic programming problems. FizzBuzz, for example. If you can't write FizzBuzz, you are not qualified for a programming job. It has nothing to do with computer science, or brain teasers people googled, and everything to do with writing actual working code to accomplish a trivially simple goal.
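For anyone unfamiliar, FizzBuzz really is this small, which is the whole point of using it as a screen (a minimal Python version):

```python
def fizzbuzz(n):
    """Classic screening exercise: for 1..n, print "Fizz" for multiples
    of 3, "Buzz" for multiples of 5, "FizzBuzz" for multiples of both,
    and the number itself otherwise."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15))  # ends with 'FizzBuzz' at 15
```

The point of the test is not the modulo arithmetic; it's whether a candidate can turn a three-rule spec into a working loop at all.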


I had to look up "FizzBuzz" to know what you mean, but you are correct. (I have actually been asked to write programs like this, and this is great even if it is minimal. I think the questions may need to be a bit more complex than this, but at least it is possible to solve in one sitting.) Asking an applicant to write simple programs like this would be good. Or, if the person is a front-end JavaScript developer, you can ask them how they would handle thumbnail mouseovers to display larger versions of the images in a separate div. Things like this are fine and have purpose. Asking a database administrator about the difference between joins makes sense. Another question might be how would you write a class to represent a deck of cards including methods for shuffling the deck. In comparison, sometimes you will have technical questions thrown at you that have no real world applications or literal brain teasers, and I think those types of questions are pointless. There is a difference between asking programming questions and asking programming puzzles, and I wanted to voice the opinion that puzzles should not be misconstrued to the point of suggesting that people cannot program because they cannot solve them.

Real programming projects are best. I think one of the best methods would be to give a person a reasonable assignment that should only take a couple hours to complete (if that long) and ask for the results within a few days. (This will give them plenty of time to clean it up, add surprising features, and create a nice design and front-end after they are finished with it. A day can be used for the promise of completion, and the remaining time can be used to overdeliver.)
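One way the deck-of-cards question above might be answered (a Python sketch; the class and method names are my own, and `random.shuffle` is a stand-in for writing Fisher-Yates by hand):

```python
import random

RANKS = ["A"] + [str(n) for n in range(2, 11)] + ["J", "Q", "K"]
SUITS = ["clubs", "diamonds", "hearts", "spades"]

class Deck:
    """A standard 52-card deck with shuffle and deal operations."""

    def __init__(self):
        self.cards = [(rank, suit) for suit in SUITS for rank in RANKS]

    def shuffle(self):
        # random.shuffle performs an in-place Fisher-Yates shuffle
        random.shuffle(self.cards)

    def deal(self):
        """Remove and return the top card."""
        return self.cards.pop()

deck = Deck()
deck.shuffle()
card = deck.deal()
print(card, "dealt;", len(deck.cards), "cards remain")
```

A question like this works well precisely because it tests modeling and API design rather than trivia: does the candidate keep the deck's state encapsulated, and do they know (or ask) what shuffling actually involves?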


>In comparison, sometimes you will have technical questions thrown at you that have no real world applications or literal brain teasers, and I think those types of questions are pointless

My point was that nobody else is talking about that. You just brought that up completely out of the blue, but in response to someone. When you reply to a post, those of us reading the thread tend to assume your post will in fact be in reply to the parent post, not a completely unrelated post just stating your opinion on something else entirely.


Probably because the vast majority of the "programming" field as it stands today barely exercises computer science knowledge. Linked list optimizations, heaps, that stuff isn't even on the radar of most devs hacking on a startup, frankly, because the language hides it away. And they are not inferior programmers if they can execute on their knowledge, are they?

I hate that, to most folks who are into CS, CS is a yardstick for how much value a hacker will develop. "Can't optimize this from O(n^2) to O(n)? I have no idea how you keep your job." There is far more to the value produced by a hacker than the typical interview questions, and this elitist attitude out of most compsci folks is obnoxious.

This is why, even though I understand CS, grilling me on theoretical CS in your interview is automatic points off and I'll accept the position that actually quizzes on practical. I'm definitely of the mind now, being a self-taught programmer with a working knowledge of compsci, that I will learn things when they're needed to execute (not so they'll sit around in my brain waiting for the manager that Googled hard interview questions to ask).


I avoid quizzing people on CS, but I do have a practical problem that I pair program with candidates on where most (but not all) say "hmm, I guess I want to store this in a tree structure", and then start implementing one - so they quiz themselves.


Did you really not notice that your reply is to a strawman you made up? The person you are replying to did not say anything about CS vs programming. He stated that most people who claim to be programmers can't do basic programming, and most CS grads can't do basic computer science. Incompetence is widespread on both sides. There was no claim that one side is more important than the other, yet your reply is almost entirely "you are elitist for thinking computer science matters".

Every programmer should be able to fizzbuzz. 95% can't. This is a problem. Every CS grad should be able to build a tree. 95% can't. These are serious problems.


“Computer science is no more about computers than astronomy is about telescopes.” - Edsger Dijkstra


It's worth noting that a sharp high schooler can learn to program - many do.

It's a rare, rare high schooler that can learn how to use computer science. Most CS grads don't.

And we wonder sometimes why our software is so cruddy.


Well, a sharp primary schooler can too... Bah, my first CS teacher claims now that it's crucial to start in primary school if you care about getting anywhere in IOI.


I don't think it's crucial to start early. Rather, there's a correlation--the sort of person to become a really good programmer or really good at CS is also very likely the sort of person to start programming at an early age.


You make it sound like one needs to be Michelangelo before they can even attempt to.


The reality is that there are many different responsibilities that, confusingly, fall under the singular title of "programmer".

If you're the tech cofounder at a startup, you're probably focused on building a new product that customers love from scratch. You're going to need a very different set of skills and abilities than, say, Microsoft engineer #10000 who is maintaining some old codebase. Or even engineer #100 at Google who's trying to scale to hundreds of millions of people. Etc.

What's weird is that people hardly ever mention these differences. They just say things like, "All programmers should know advanced algorithms"... or data structures, or compilers, or UX design, etc. Even if people/companies don't say this explicitly, they say it implicitly when they quiz for specific material in interviews for jobs that don't rely on that type of material.

If you're exceptionally good at what you do, but you constantly hear that you're inadequate because you don't have this skill or that knowledge, it's easy to doubt yourself. But you shouldn't. The fact is there's nobody who knows all of this stuff, or even most of it. And there's no job that's going to ask you to do most of it (I say this as the sole tech person at a startup where I have to do sysadmin, back-end coding, front-end coding, and design single-handedly).

Just find what you love and get good at it.


The rift between CS and SE is unfortunate. To use Greek terms for different kinds of knowledge, computer science is logos (reasoning from first principles). Software engineering is metis (practical, local knowledge). They complement each other.

I suspect that the only reason there is such a rift in the first place is because the sources of funding for people employed in CS and SE are different. CS is funded via (predominantly) government grants. SE, clearly, is typically funded privately.

Like China commenting on diplomatic rows between North and South Korea, I'll say: both sides need to learn how to work with each other.


I like to think of it like this:

Scientists: Focus on developing new theories, solutions to abstract problems, etc.

Engineers: Build tools based on the theories created by the scientists.

Developers: Build products based on the tools the engineers made.


So where do mathematicians fall in that spectrum?


Well it's not a spectrum really, you need a mix to do anything. Scientists need developers (or need to be developers too) if they want to do experiments related to their theories, or if they want to push them to the world. Developers need to understand what scientists say, or they need to be partly scientists, to be able to build up their own small theories whenever a problem is difficult enough. And so on.

Back to your original question, maths is an extremely useful tool both for CS research and development. A pure mathematician is providing material for all of CS to work. But CS people, programmers, and so on need to know some math too.


(Natural) Scientists: study nature.

Engineers: study how to build. This may include original theorizing pertaining to methods of building -- they don't just ape scientists' theories, they create their own. Engineering is itself the science of building.

Developers: Engineer vs. developer is an arbitrary division. In truth there is a continuum from the most scientific of engineers, who use abstract theorizing to create new technologies, down to someone comparable to the person who installs your water heater.


When I was in university majoring in Computer Science, I had about 5 programming classes in all 4 years of school, while the rest were theory, applied statistics, and mathematics. So calling it "computer programming" is a major overstatement to me.


This definitely varies per school. At my university, almost all of the CS classes involve a significant amount of programming. (For reference, all the EE and CS people have one major.)

We have a slightly weird structure: everybody takes the same intro courses but then you can do whichever advanced courses you like. So this means that everybody (even pure EE people) get programming courses going from Scheme (SICP) to assembly. (And the CS people like me also have to do a bunch of EE.) Then, most of the advanced CS courses all involve a healthy amount of programming. The only exception is the algorithms/theory sequence, but most people don't do all of it and everybody (except EEs) do some other programming courses as well.

I think the main difference is that the program I'm in is part of the engineering college of a school that takes engineering rather seriously.


This depends heavily on the school. When I got my CS degree from a liberal arts school in 2002, my experience was similar. I had lots of discrete math, linear algebra, data structures, algorithms, and language theory courses.


Upvoted just for the title. C'mon, value judgements of CS vs programming aside, there needs to be a broader awareness of the difference between the two.


I agree with his main point, that we should have an entirely separate "software engineering" major which would stress the process of software development and prepare students for being professional software developers. Then, computer science programs would mainly be for students who wanted to do research in computer science - much like the difference between various engineering disciplines and physics.

But he comes dangerously close to the "computer science is math" notion that assumes CS is just theory. It is not. I wrote an entire blog post in response to this (surprisingly common) sentiment: http://www.scott-a-s.com/cs-is-not-math/

HN discussion: http://news.ycombinator.com/item?id=3928276


My university had software engineering and computer science majors. The practical difference was that the SE majors took the standard 1st & 2nd year core engineering classes, which meant more physics and DSP classes. After that, both majors took mostly the same classes. CS majors took a bunch of SE classes and vice versa. I could theoretically go back to school, take about 8 classes, and get an SE major.


That's why schools like RIT have a Software Engineering program. Learn the basics of CS, then go build real software with it.


Great article. Most students would probably prefer practical programming experience, but they get CS degrees since that is all that is offered at their college. The schools, meanwhile, couldn't care less about what's practical, and they refuse to become a "trade school", so they instead require the students to learn difficult and often unnecessary material. Besides making it harder for the students who do succeed, it ends up scaring many people away from a career in software development altogether. I think it's time for some disruption in the education system...


I can't tell you how much I learned on the job vs. what I learned at school.

I don't think the difficulty barrier is the issue.

The issue is the code you write out of school. Your sense of style is driven by the fact that the TA would mark you off if you didn't write a comment before your function. Your experience working with other developers is confined to that one terrible semester long project with the one idiot and the other guy who wouldn't do anything until the week it was due.

I don't mind so much that I had to learn QuickSort every year for 6 years or so. I do mind that I left school not really grokking a thing about object-oriented design. I left school thinking objects were nice and they could have methods, and you could create other objects that got those methods for free - maybe some sort of polygon class with subclasses "square" and "triangle".

But in practical terms of course it was one huge main() function for most of the "hard" problems they had us solving.

I understand not wanting to be a "trade school". Computer science is harder than that. You want people who can understand big-O notation. But teach them how to code, even just a little bit.


IMO CS is necessary to advance the state of the art. And while I completely agree that many computing/programming tasks currently do not necessarily require a CS degree, there is no guarantee that will always be the case.

As for myself, I studied roughly 3 years of computer science before I dropped out to work full-time, so I only have a half-finished bachelor's degree. I won't say I regret it, because the last 5-6 years of being a partner in a startup and earning a good pay-check have been fun. But I very much do regret not being able to completely grasp all the new interesting research papers that are coming out. Nothing is more frustrating than reading a research paper and knowing enough to grasp that this algorithm could improve some part of your system, but then being unable to decipher/implement it, because one has forgotten (or never learned) parts of the mathematical notation/background. In other words: being able to see the solution, but the solution being juuust out of your grasp. I hope down the road to be able to compensate for this problem by hiring smarter guys than myself that DID finish their CS degree :)

Also, another benefit of studying CS is that around year 2-3 you will be introduced to some subject matter that I, at least, am pretty sure I would never have heard of otherwise. I found stuff like operations research and integer programming very interesting. I haven't yet had the opportunity to use any of it in the "real world", but it's nice knowing what's out there; who knows, it might come in handy a few years down the road in some other venture.


I've studied theoretical math and computer programming.

Coming from theoretical math, I feel like the reason computer science seems dull is that, as a part of math, it is mind-bendingly difficult.

There are almost no "substantial" theorems of computer science, because they would be extraordinarily encompassing abstract statements about what is possible to compute, essentially to even think about. P =? NP is a good example. Unlike the other Millennium Prize Problems, there has essentially been no positive progress on the question. The only theorems are about how this or that tool won't help us. And P =? NP is a simple, even "obvious" statement from the right standpoint.

Essentially, most of the generic theories of CS are constructions to show something either possible or impossible.

I wonder if you could teach CS as something like the ragged edge of mathematical logic. That might make it authentically interesting... for a few people, but it would probably wind up being less practical. Sigh...


> the UML meta-meta-model

that's not computer science; that's software engineering "theory".

Algorithms; data structures; type systems; however ...


Great article. This expresses better than I ever could have at age 20 why I quit my CS degree since I already had a job. At the time, the university was experimenting with a "software engineering" degree, but it was new and not yet accredited so it was too early to know if it was worth it.


"Developers will need some theory, and I'm painfully aware, too, of the degree snobbery that most employers harbour. So I propose that the right course would be a 5+ year apprenticeship with part-time degree study - CS in the classroom 1 day a week, software development in the office the other 4."

I disagree with this statement. My program at school supplements 6 months of formal learning with 6-month-long internships. I don't feel like 4 days a week is enough to get the benefits of working to supplement formal learning. I definitely need 6 months in a job to learn something valuable, and it's towards the end of my internships that I feel like I've learned enough to contribute just as much as any of my teammates and coworkers.


Basically, we should only learn the 'real' programming skills and forget that the whole state of computing technology rests on the shoulder of giants who created these algorithms and did all this research for 'real programmers'. Oh, we certainly don't need CS, because it gets in the way of writing code.

Seriously, I understand what the author is trying to convey, but IMHO he goes too far in his bias against CS. It is a serious handicap to try to program anything worthwhile using APIs and frameworks without understanding (at least to some degree) what they do... my 2 cents


To a computer scientist, programming is merely a tool to help bring their computer science ideas to life. Writing a bunch of APIs and calling them will only take you so far. After a certain point, theory will prevail, because math (computer science is applied math) is the truth. Also, the "computer programming" major is called Infoscience at the top engineering schools; those students are laughed at. You don't have to be smart to grind through those classes, i.e., you don't build mental toughness toward theory.


I agree that modern "computer science" educations often don't teach enough to allow you to make great software, nor enough to open your eyes to the grander concepts. But you can't take that to mean that there is some subset of concepts that are useful to you and that this is all you need to know to hone your craft.

I do not like these strange lines in the sand that are drawn between engineers and scientists. These distinctions are artificial and a curious mind shouldn't be trapped on one side or the other.


>Time spent learning the UML meta-meta-model and Object Z >is, for 99.9% of developers, time completely wasted

I thought it was interesting that the poster used these examples for CS.

I didn't take a single course at university that dealt with either of these topics. Everything I dealt with in comp sci was algorithms & complexity theory, coupled with a smattering of "this is how computers actually work". As far as I can tell, the OP is talking about computer programming subjects, not computer science subjects.


Indeed, don't call it computer science unless you're making computer hypotheses that you then test via computer experiments, using the computer scientific method.


While we're at it, why is it called Computer Science? I think "Computational Mathematics" or something would make more sense.


In Germany it is "Informatik", a portmanteau of "information" and "automatic", since the field is about automatic information processing in general.


Computational maths (research area of mine) is its own field, and is very different to computer science.

An introductory (undergrad) computational mathematics course at my university covers numerical solutions to PDEs, inverse problems, regularisation problems, numerical optimisation, etc.


I should have looked for a name conflict first, I suppose, but my point is that "science" doesn't seem like the right word to describe most of CS.

I'm not saying it's incorrect, but it just doesn't seem to fit well.


Writing applications: go ahead, program by the seat of your pants.

Architect a solution to a big problem: theory comes in very handy.


The theories behind computer science have very little application in large software engineering.

software engineering != computer science


That's preposterous. Computer science is directly applicable to understanding issues of scalability, searching, hashing, database properties, network routes and looping, the list goes on and on.

Computer science is a lot more than proofs.


None of those things deal with maintainability, design by contract, interface design, etc.

When you're writing a multi-million dollar piece of software (20+ man-years), the technical problems are the easy ones to solve. Actually putting the thing together is the hard part.


Well said. Computer Engineering is a lot more than programming languages. But does it take a college education to get there? Or a seminar on the latest tool chain.


Hell, software engineering isn't even engineering.

'you will discover that software engineering has accepted as its charter "How to program if you cannot.".'

http://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD1036...

"Software engineering" has little to do with engineering and its principles and everything with applying buzzwords to squeeze the most money out of whomever was suckered into paying for the project.


The theories behind computer science are used all the time in large software engineering. They're just not the interesting part.


Contrarian take on the issue:

I wrote a Quake 3 mod in C++ (directional damage modification, server/client magazine+reload, and a few other not-so-difficult things) in high school around age 17. I then did a CS undergrad at a top-6 university.

I learned more about programming writing that mod than my combined experiences at school.


Yep, because programming and computer science don't have a lot of overlap. But I bet you could have written a mean linked list after the CS degree.


What's wrong with "Software Engineering"?


CS to a programmer is no more "theoretical" than anatomy is to a surgeon.



I agree it's more of a mathematical discipline than a science.

But it isn't just "studying man-made creations". It's about studying computation in general. Not just man-made machines that compute.


Complexity theory is based on the idea of a Turing machine, a man-made thing.

Mind you, information theory, which is often considered computer science, certainly applies to nature itself. Of course, it would be easy to claim that information theory isn't actually computer science. Personally, I consider most of computer science more math than science. In fact, I sometimes tell people I do math! (This is a great way to avoid further questions at borders or awkward parties.)


Not really. Complexity theory deals with many different forms of computation, from Turing Machines to RAM models to circuits to quantum computers. These are all particular concrete artifacts that we use to study the fundamental notion of computation itself.

Saying that complexity theory studies "man-made things" is like saying that chemistry studies man-made molecules. It's technically true, but it confuses the methodology with the object of study: studying particular chemicals vs the underlying laws governing them or particular models vs the underlying laws of computation.


The beauty of computability and complexity theory is that it's actually robust across models of computation. That is, while we analyze a Turing machine, the actual machine is rather arbitrary.

Paraphrasing from Sipser[1], the class of decidable and undecidable languages is the same for any "reasonable" model of computation, where "reasonable" involves some basic things like not being able to do an infinite amount of work in a single step. So while any particular model (e.g. a Turing machine) is arbitrary, the class of languages is actually natural.

[1]: http://www-math.mit.edu/~sipser/book.html

This makes sense--any model of computation containing a countable number of instances will not be able to decide or even recognize every language. It also makes sense that these models would be able to recognize the same set of languages.

I think this also holds for complexity classes. That is, the complexity of a language in a class like P or NP is the same across different models as long as those models are all deterministic or non-deterministic.
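It does, at least up to polynomial slowdown. The standard result (covered in Sipser) is that a multi-tape Turing machine running in time $t(n)$ can be simulated by a single-tape machine with at most quadratic overhead, so the class P is the same for both:

```latex
\[
  \mathrm{TIME}_{k\text{-tape}}\bigl(t(n)\bigr)
  \;\subseteq\;
  \mathrm{TIME}_{1\text{-tape}}\bigl(O(t(n)^2)\bigr)
  \quad\Longrightarrow\quad
  \mathrm{P}_{k\text{-tape}} = \mathrm{P}_{1\text{-tape}}.
\]
```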

So really, complexity theory is not about a man-made thing: it's about a deeper, fundamental truth. It's really awesome.


A Turing machine is not a man-made thing! Turing-complete systems exist in natural reality, and the Turing machine is just a way to describe them.


"It helps to know some music theory if you write and perform music, but a lot of very successful songwriters and performers get by very happily with just enough music theory."

This is an absolutely bogus comparison. Music does not need to be maintained, music does not need to be troubleshot or debugged (and thus, reasoned about), and music does not solve a problem(1).

Music is an artistic expression, while computer software isn't.

(1) Writing soundtracks for movies, for example, can be considered solving a problem with music. It also requires music theory knowledge because soundtracks have to convey particular emotions at particular moments.


I found the comparison pretty useful.

I used to compose music solely by playing around on an instrument until I hit upon something that sounded good, but I had no idea why it sounded good.

I can still compose that way, but after studying music theory, I can compose directly from my head to a sheet of paper without any instrument, as (1) I can now envision in my mind what the music would sound like, and (2) I understand and can employ reusable patterns and concepts of composition that I know will sound good and work together.

Similarly, when I started programming, I had little overall vision for what I was doing. I just started writing code, and stopped when I had built up a pile of spaghetti statements that did what I wanted them to do. Now, with better understanding of design concepts and reusable patterns, the development process is much more clean and structured rather than poking around with guesswork.


Making a noise doesn't need to be troubleshot or debugged. Making music does, unless you're trained in music theory well enough not to make the mistake in the first place. Many compositions come across my desk for advice because the author knows there's something missing, something wrong, something out of place, but they can't tell what.

You can have clashing chord progressions which are "good enough to ship", but which might make a keen listener dismiss the whole performance. It's not a perfect analogy (none of them ever are), but it might be more apt than you think.


"It helps to know some music theory if you write and perform music, but a lot of very successful songwriters and performers get by very happily with just enough music theory."

This is an absolutely bogus comparison

I disagree. There are very successful songwriters and performers who have no training, but almost all of the good songwriters and performers did have (usually classical) musical training.

The same is absolutely true in software: There's a lot of very popular crap out there, but the software which is universally recognized as good -- code like TeX -- almost always comes from authors with solid computer science training.


There's some good stuff in TeX but it's been buried in an avalanche of shit from people who aren't Knuth. You can get TeX in a couple megabytes (http://www.kergis.com/en/kertex.html), but if you install something like pdfTeX or XeTeX or TeXlive, well I hope you've got a big hard drive.


> Music is an artistic expression, while computer software isn't.

Unless it is? The sentiment that the pursuit has to be entirely about doing a job or providing utility is frighteningly inhuman. I need to go for a walk after reading that. Or maybe watch some demoscene.


The demoscene is artistic expression, but only the output, not the software itself (it can be, but it isn't a precondition).


Would you not agree that trying to rein in the universal definition of art is completely futile?

We could dubiously cast an eye to instrumental music vs. singing. After all, the former can only be expressed indirectly via a contraption.

