OOP no longer mandatory in CMU Computer Science Curriculum (infoq.com)
113 points by geophile on Mar 21, 2011 | 37 comments



My rule of thumb with OOP (of which I am very proud): if a bunch of code needs to retain state, make it an object, but if not, make it a function. Also, in a class, you should either have glorified getters/setters (to set the state, possibly in complicated ways) or stateless functions (to execute algorithms). This approach makes it far easier to test, since you have less setup and teardown.

To teach object-oriented programming as if it is theoretically foundational, or should precede simpler concepts, is a nice bit of foolishness that lasted from the excitement over Smalltalk to, say, last week.

I think the blossoming of GUIs -- which have A LOT of state -- is the historic reason for the excitement over OOP, not any theoretical advantage to OOP.


I would say that the naturalistic fallacy is at play as well. If you have a business domain, and you want to model it as an algorithm, your first inclination—not knowing any computer science—will likely be to model each business object as, well, an object. A Store with Cashiers and a CashRegister and Products and UPCCodes, et al. And, surprisingly, using OOP, this works—although certainly not optimally. Until, of course, you try to build something like a compiler, which doesn't have real-world objects to map to—and you find yourself screwed.


nice note correlating GUIs with historic excitement over OOP.

"if a bunch of code needs to retain state, make it an object"

well, an FP wizard might argue that most problems don't need stateful code to solve -- our decision to use stateful patterns is an implementation detail.


I'm all about FP, and find it lovely, but I've never seen a coherent explanation of an FP solution to GUIs. What is the FP answer to MVC? What does a modern GUI application design look like, if not OOP?


I'll hazard a possibly bad answer.

MVC is not inherently OO - it's more about sending messages, which, although a feature of modern OO (Alan Kay even said that he should have called OO Message-Oriented programming), is not its defining feature (as we see it used today).

Since the Model, the View and the Controller are just continually processing and sending messages, you could easily implement them as processes (say, as in Erlang). A process's state is then retained in immutable structures passed to recursive invocations of its main message loop (sorry if this sounds opaque - I'd recommend seeing how it's done in Erlang).
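
As a rough sketch of that pattern in Haskell rather than Erlang (a toy, with made-up names, not a real MVC): the "model" is just a thread whose state is an argument to its recursive message loop.

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)

    -- Messages a toy "model" process understands.
    data Msg = Increment | Render

    -- The counter is never mutated in place; each recursive call of the
    -- loop receives the next state as an argument.
    model :: Chan Msg -> Int -> IO ()
    model inbox count = do
      msg <- readChan inbox
      case msg of
        Increment -> model inbox (count + 1)
        Render    -> print count >> model inbox count  -- stands in for "notify the view"

    main :: IO ()
    main = do
      inbox <- newChan
      _ <- forkIO (model inbox 0)
      writeChan inbox Increment
      writeChan inbox Render
      threadDelay 100000  -- crude: give the model time to print before main exits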

Another FP approach to GUIs that is being researched is functional reactive programming. There is a neat JavaScript implementation called Flapjax. You could probably argue that FRP is a generalization of MVC (though I'll hedge my statement since I'm not 100% sure of this).


> I think the blossoming of GUIs -- which have A LOT of state

I don't think GUIs have to have a lot of state.

I wrote a Haskell library (https://github.com/Peaker/vtywidgets) which can serve as a proof-of-concept of a "stateless GUI".

The idea is that a widget is a pure function like (in pseudo-Haskell syntax):

    input_from_program -> 
      (output_to_user, 
       Map input_from_user
           (documentation, output_to_program))

A "Map" is a dictionary. It's like a function, but its domain can also be enumerated so we can generate useful documentation about how a widget responds. So a Widget takes program and user inputs, and generates program and user outputs.

Any state (e.g. the cursor) will be saved elsewhere, and I've come to the conclusion that separating "GUI state" from "model state" is not a good idea. This allows me to use the "cursor" of a box of widgets as an active selection, and not just a way to navigate, for example. It also means that I can use the same persistency model/etc for the GUI as I do for the model.

I wrote a simple (again, still at the POC stage) transactional, structural, revision-controlled editor based on vtywidgets (https://github.com/Peaker/treeedit), and it persists the model, including the GUI state, in the transactional database. This means that I get undo features for GUI state (I can store it outside the revisioned store, but it's quite nice to have), and if I kill the editor and restart it later, it is at exactly the same state it was before.


Several remarks are worth mentioning, as the InfoQ page seems to be editorialized link bait begging to foment a flamewar.

Bob Harper and (his now-graduated PhD student) Dan Licata are (world-class) experts in programming language semantics. What does this mean? They understand, in a very precise sense, how difficult it is to correctly understand the exact semantics of various linguistic constructs.

Why is this important? Because some of these freshmen will not have had a solid grounding in understanding the most basic of the tools they use, the programming language. These students should come away from their intro class not burdened by confusion over syntax and the boilerplate needed to use the built-in language constructs, but rather with a mental model of the semantics of a basic programming language precise enough that they can accurately simulate the execution of their programs with pen and paper, or just in their heads!

Yes, objects do provide some occasionally fantastic abstraction facilities in certain use cases in practice, but every semantic model for real OO languages is INCREDIBLY complex -- let alone what is needed for a correct cost model of operations in an OO language, once you consider how many OO abstractions are represented by nontrivial lookup operations / data structures.

What I think is more interesting is the following: 1) they've come up with low-overhead ways of teaching parallelism constructs directly to intro students;

and 2) this course is part of a larger system of classes that includes their intro imperative programming class, taught by Frank Pfenning (an expert in mechanized theorem proving, such as might be used to verify program correctness), which teaches a lot of the basic algorithms / data structures that are easiest to first see framed imperatively;

and 3) they're having the intro algorithms & data structures course taught by Guy E. Blelloch, who's done a great amount of good research in a huge number of algorithm topics.


Agree. Running into OOP as the foundation of the AP Comp Sci curriculum in high school almost made me hate programming ... and that takes some effort, because leading up to the class I was programming out of joy every minute that I could.

If it was that bad for me, I can only imagine what it was like for the rest of the class (granted, there were only 5 of us in total).


that's a bit strong. even if the industry is shifting paradigms (maybe), OOP is the basis of many production systems and certainly today not an, um, incorrect choice for many problems.


No one is suggesting OOP isn't useful. It's just not being used for the introductory curriculum... and I'm pretty much certain they're right.

The HN title is inflammatory and misleading, in that it omits the word "introductory", which is a pretty important word in this context.


For those of us at CMU, this is old news.

The article is very misleading. The old curriculum never had a focus on OOP. The major difference is that the new curriculum will have a very heavy focus on the functional paradigm, at the cost of teaching systems skills. For example, students will no longer learn C at CMU, yet they will be expected to know it well for OS. Either some higher-level classes must get a lot easier or a lot more students will start failing.

The general consensus among students was that this curriculum change wasn't very well thought out.


> For example, students will no longer learn C at CMU, yet they will be expected to know it well for OS.

This just isn't true. C-not (a typesafe C variant put together by the language department) will be taught in an intro class, and the end of that class will involve a transition to C. Then, the students will take 213 Introduction to Computer Systems, which will be ALL C and Assembly (as it has been since its inception). Then, after writing malloc and a web proxy in C, they will go on to OS.

Additionally, Java will continue to be taught in 214, a class that will be mandatory for all Software Engineering minors (granted, a small number of these exist) but that will invariably be taken by many students concerned about internships and that kind of stuff.

The reality of the matter is that Language Theory is the biggest player in CMU CS right now, and so they took that as a mandate and ran with it all the way to this new curriculum, which is why it's functional-language heavy. That said, our students' brains will not rot for having learned ML more heavily than Java (I imagine it will be advantageous), and all previous options will still be available, albeit in slightly different forms (I actually imagine 214 will be much improved over previous attempts at teaching Java).

The reality is that CMU never gave a shit about students who barely make it by, and scraping off a few extra in the Systems class transition isn't really a big concern for anyone (it's a CMU cultural thing). Most of the complaints are just fear of change and "It wasn't like this when I took it!" type complaining.


The problem with C0 (it's "C-naught", not "C-not") is that it is a toy language used for the compilers class which has somehow made it down into the intro curriculum. It is not useful in the real world; absolutely no one outside of CMU uses it. It is useful only as a stepping stone to C -- at which point, why not just teach C, and keep to a "safer" subset for a while? A couple of weeks' "transition" to get used to malloc and gdb and NULL pointers and segmentation faults etc. is not nearly enough.


You haven't refuted any of my points.

Alex, you really need to speak to any professor outside of the PL department to get an idea of just how political this change actually is.

C-naught, as jwatzman said, is not C. Students will not learn pointers, they will not learn arrays. In fact, they'll not learn any of the difficulties of C. We both know the purpose of 213 isn't to teach you C. It's coded in C, but I think I used pointers in exactly 1 lab. The course won't teach you C and you can get by with C0 knowledge. Then when you get to OS, Distributed, etc, you will need to know the intricate details of a language you have no real experience in. At this point, why teach a useless language that gives no benefits to students? It is not any easier to learn than Java, yet much less useful.

Additionally, I never stated that it's necessary for Java to be taught. But what I do feel is that some language that a student can use in an interview should be covered in the curriculum.

>The reality is that CMU never gave a shit about students who barely make it by

The reality actually is, it's impossible to make it through some of our systems courses without fulfilling the proper prerequisites. And currently what's happening is that the prerequisites are being removed from the curriculum. Thus, unless people have knowledge of C, pointers, and asm, they will be focusing on learning C during OS and not on OS. We've both taken the course. We both know that a student who doesn't know C has a high probability of failure.

The point is, a student coming out of CMU will now only be required to know 3 languages: SML, OCaml, and C-naught. One of them is utterly useless. I do believe that a focus on functional programming is a good thing. What I don't support is pushing programming languages at the cost of everything else. A systems person or even an algorithms person can't thrive in the new curriculum. Almost every required course is a PL course, or fundamentally tied to PL in some way.


I don't hire people with CS degrees unless they can demonstrate some level of technical competency before enrollment (open source commits, weekend projects, etc). Congrats new CMU grads, you get a free pass.

OOP might have its place, but teaching it to first year CS students is one of the things I blame for the valley being full of mediocre programmers.


Interesting that http://www.joelonsoftware.com/articles/ThePerilsofJavaSchool... does not come up more often in these discussions. Seems relevant to me.


Well, here it is pretty irrelevant. CMU wasn't a Java school. It was mainly a C school.


Note that this has been coming for a while -- the "announcement" cited is from Harper's blog, as is the bit about "anti-modular and anti-parallel". The quote cited is Harper's opinion, and was not necessarily the opinion of everyone involved in the curriculum changes (though Harper himself was very instrumental in said changes).

Harper is well known among his former students for being a brilliant guy who makes many interesting arguments... and who then takes those arguments so far to the extreme that it's laughably silly.


> and who then takes those arguments so far to the extreme that it's laughably silly.

That sounds like another opinion. I don't find what he said to be laughably silly. Do you have examples of such things he said?


He once discussed NULL pointers. He made a well-reasoned argument about how a proper language should use some other method to properly distinguish between NULL and "I really do have a pointer here", such as ML's option types or Haskell's Maybe type or something. Basically the "billion dollar mistake" argument.
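
(For the unfamiliar, a minimal Haskell illustration of that alternative; the names here are made up. The type forces callers to handle the missing case, unlike a NULL pointer.)

    lookupAge :: String -> Maybe Int
    lookupAge "alice" = Just 30
    lookupAge _       = Nothing

    -- The case analysis on Maybe is mandatory; there is no way to
    -- "forget" the Nothing branch and dereference garbage.
    describe :: String -> String
    describe name = case lookupAge name of
      Nothing  -> name ++ ": unknown"
      Just age -> name ++ ": " ++ show age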

He then went on to say something basically like, "Thus, there is no such thing as a NULL pointer. It is a myth. It simply doesn't exist." Which is patently false, and I find rather funny. Languages do have NULL pointers whether they should or not. I can assign a value of NULL to my pointer in C -- they exist. They are not a myth.

A friend of mine, who is a TA for our operating systems class, whispered to me, "Hm, NULL pointers don't exist?! I guess I'll go tell my students that."


Can you link to the context of that statement?

I think maybe he was trying to say something else, but said it in a misleading/funny way?


He said it in a lecture, but I'm pretty sure he was dead serious.


I just watched the video by Barbara Liskov. He references it to show how even a Turing Award winner does not like FP.

The only thing that I would directly relate to FP was that she said: "I think programs are fundamentally about manipulating state". Therefore she did not like FP back in the 70s.

So I will try to show that that's a very bad reference.

1. Today FP languages are mostly about only using state if there really is state, unlike in OO where you just use state everywhere.

2. Try to be clearer about where the side effects happen --> don't interleave the side effects with the computation. In Haskell you even have to be explicit about where you use side effects.

in imperative style (in VB):

    for i = 1 to 100
        print computeSomething(i)
    next

while in FP you would write:

    (print (map computeSomething (range 1 101)))

(I know that's not 100% the same output, but I hope you get the point)
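
In Haskell the side effect is explicit in the type, so the same thing might look like this (computeSomething is a stand-in definition for illustration):

    computeSomething :: Int -> Int
    computeSomething i = i * i  -- stand-in for the real computation

    -- The IO type marks exactly where the side effect (printing) happens;
    -- the mapping over the numbers itself stays pure.
    main :: IO ()
    main = mapM_ (print . computeSomething) [1..100]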

3. She was doing systems programming for the last 30 years, and yes, at that level there is lots of state, but at a high level there does not need to be that much state INSIDE the program.

More generally speaking:

FP languages can provide all the typical OO features. In OO, for example, hierarchies are totally interleaved with the rest of the program. In Clojure you can use a hierarchy if you really need a kind of animal -> dog hierarchy system. Even better, because it's not interleaved, you can have two hierarchy systems on the same "object" without them disturbing each other. The same goes for polymorphism, as sketched below the link.

http://clojure.org/multimethods
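
A rough Haskell analogue of the same idea, for illustration only (Clojure multimethods are more dynamic than this): the polymorphic behaviour lives in a class, separate from any data hierarchy, and a second, independent dimension can be attached to the same types without disturbing the first.

    data Dog = Dog
    data Cat = Cat

    -- One polymorphic behaviour, defined apart from the data.
    class Speaks a where speak :: a -> String
    instance Speaks Dog where speak _ = "woof"
    instance Speaks Cat where speak _ = "meow"

    -- An unrelated second dimension on the same "objects".
    class Sized a where size :: a -> Int
    instance Sized Dog where size _ = 30
    instance Sized Cat where size _ = 10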

I don't want to say OO is very bad; I just like the decision CMU is taking here. Teach these concepts separately and not in a language like Java.


I think saying that FP is bad for "manipulating state" is misleading.

FP is great at manipulating state. A pure Haskell function (State -> State) manipulates state.

What's being discussed is whether or not the state is manipulated destructively. And personally I don't see any real benefits to the destructive method, except performance issues in some cases, and arguably stylistic ones.

Returning new values instead of in-place updates may seem cumbersome before you get to know the abstractions that make it easy to use.

As an example, in Haskell, if you want to update a value deep inside multiple nested records, you do something like:

    movePlayer :: (Pos -> Pos) -> GameState -> GameState
    movePlayer f gs = gs { player = oldPlayer { position = f oldPos } }
      where
        oldPlayer = player gs
        oldPos = position oldPlayer

That looks horrible, compared to something like:

    gs.player.position = f(gs.player.position);

But then, if you discover the Semantic Editor Combinators abstraction (http://conal.net/blog/posts/semantic-editor-combinators/), you can write that as:

    movePlayer :: (Pos -> Pos) -> GameState -> GameState
    movePlayer f = (atPlayer . atPosition) f

or just:

    movePlayer = atPlayer . atPosition

Then you can use:

    movePlayer (+delta)

to add delta to the position.

Now one nice thing about the return-new-value approach is that it is more composable than the destructive update approach. When you compose state manipulation functions, you get transactional state changes.
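
A minimal sketch of what that composition buys (hypothetical names, not from any real game):

    data GameState = GameState { score :: Int, mana :: Int } deriving Show

    type Transaction = GameState -> GameState

    addScore :: Int -> Transaction
    addScore n gs = gs { score = score gs + n }

    spendMana :: Int -> Transaction
    spendMana n gs = gs { mana = mana gs - n }

    -- Composition is an all-or-nothing update: no caller can ever
    -- observe the state between the two steps.
    castSpell :: Transaction
    castSpell = addScore 10 . spendMana 3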

So paradoxically, I believe functional languages to be better at manipulating state!


Saying OOP is anti-modular is like saying operands are anti-functional.


Class based programming traditionally relies heavily on mutable state. Remove or severely limit mutable state, and most of us will say it's not OOP any more.

And pervasive mutable state is anti-modular. http://www.loup-vaillant.fr/articles/assignment


Shared state is anti-concurrency, but objects don't share state, they hide state behind well defined interfaces which is inherently modular. Actors and objects work together beautifully. Objects handle the behaviour and the Actors separate out threads of control. It all goes to hell when state is shared, but that's a completely different thing.


> they hide state behind well defined interfaces which is inherently modular

They expose mutable state through a well defined interface. Unless you're talking about purely functional interfaces, of course. And purely functional interfaces are rare in typical class-based code, especially C++ or Java code.

Most programmers don't understand that when a class exposes mutable state, it automatically becomes more difficult to use because you can make fewer assumptions about it. That's why it's anti-modular. And that's even before you try concurrency.


Concurrency is dependent on how threads run your code. Accessing an object's mutable state within a defined thread context has no issues at all. Accessing shared state across thread boundaries is the problem with concurrency. My guess is people are coming from a POV where concurrency is automatically handled by the system, in which case you would want certain properties, but when coding in a typical environment where care is taken it's not an issue.


Nothing in OOP says that object state must be mutable. It's just as simple to have objects that always maintain their instance pointers and internal state separate from one another and use message passing for transforming that state. Such an architecture is completely capable of running on multiple cores without too much context switching or other synchronization overhead issues, especially if the object message queues are lock-free. However, one of the keys to such an architecture is making sure that there are no shared sub-allocators being used by the RTL for memory allocation.
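
A sketch of such an "object" in Haskell (an illustration of the style, not a production design): a thread owns its state, never shares it, and all interaction goes through its message queue.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
    import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, takeMVar)

    data Msg = Deposit Int | Balance (MVar Int)  -- queries carry a reply slot

    -- The balance is immutable; the "next" state is just the argument to
    -- the next recursive call, so nothing is ever shared across threads.
    account :: Chan Msg -> Int -> IO ()
    account inbox bal = do
      msg <- readChan inbox
      case msg of
        Deposit n   -> account inbox (bal + n)
        Balance out -> putMVar out bal >> account inbox bal

    main :: IO ()
    main = do
      inbox <- newChan
      _ <- forkIO (account inbox 0)
      writeChan inbox (Deposit 100)
      reply <- newEmptyMVar
      writeChan inbox (Balance reply)
      takeMVar reply >>= print  -- prints 100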


I'm a CMU CS minor who has gone through most of CMU's "base" CS courses (211, 212, 213, 251). The majority of these courses were taught with either C or Java, so I'd rate my abilities in C and Java as probably a 7/10.

When taking 212, which is an intro functional programming course taught in SML, I had some trouble that I never encountered in other courses. I attributed this trouble to the fact that I learned programming with a Java and C mindset, such that I feel like I can only think efficiently in C or Java (or procedural and object-oriented styles). Thus, when programming in SML, there were many times where I knew what to do in an abstract sense (and could easily code it in a functional-like manner in either C or Java), but had trouble expressing my thoughts in SML. Nevertheless, as I learned more and more SML in class, its elegance and grace drew me in.

Although I feel like functional languages are more powerful than OOP or procedural languages, I am hesitant to use them on side projects because I'm not accustomed to thinking and writing code functionally. I feel like I will spend more time figuring out the syntax of the language than actually coding. Furthermore, I have no idea about how design patterns work and have never coded large projects in functional languages, so even if I knew a specific functional programming language well, I would still have trouble using it for a non-toy project.

I feel like my mind thinks in OOP, and as much as I want to change that, it has already set; I still stick with OOP and procedural languages because that is what I am comfortable writing in, even if I know I'd be better off writing things in functional languages.

From what I understand from conversations floating around campus, 150 is similar to 212, except it teaches more CS fundamentals that 1st years are less likely to know (212 is typically a 2nd year+ class). 210 will follow 150, which is a data structures course that will be taught in a functional language. By starting students out with a functional language and then continuing to teach them other aspects of CS using functional languages, these students will likely start to think functionally - even when coding in C or Java (as I kind of do the reverse when coding in functional languages). This could be a huge advantage, as I believe functional languages are more powerful and allow you to express more ideas with less thought (or "brain memory"). I think PG's essay "Beating the Averages" explains this better.

Thus these students that are taking this new curriculum may be able to code or think an order of magnitude "better" or faster than the people who learned programming using lower-level languages such as Java/C. Nevertheless, I'm envious of the people who get to learn this new curriculum.

I apologize if this post seems like a huge braindump. This is my first "serious" post on HN and I had a lot of difficulty expressing my thoughts clearly and organizing them in a logical manner. When I read others' posts here or PG's essays, everybody seems to be able to express themselves extremely clearly and concisely. Any advice on developing functional language or writing skills will be appreciated.


1. That was interesting, thanks

2. If you feel like functional languages are more powerful than OOP or procedural languages and you're much less comfortable in them than in C or Java, you should do a major project in one, or switch your scripting language. After all, you're still in college; you have fucking-up time and capability. Haskell for the Summer of Code, or Jane Street OCaml?

3. Never, ever pre-emptively apologise. It is a signal of weakness and lack of confidence, and it will cause people who would not otherwise have done so to abuse and disrespect you.


The problems you had or still have with ML and Haskell are probably mostly about pure syntax. For beginners, syntax matters a lot. Over time, we get over it, and learn to see the underlying structure of the code.

About the deeper problem of "thinking in OOP", I wrote a small tutorial[1] about the first step: avoiding mutable state where you can help it. The second step is to remove loops (even recursive ones). Instead, use `map` and `filter` (the bulk of list comprehensions) where you can. The final step is to generalize this approach by writing your own "higher order" functions to capture common patterns (see the sketch below the footnote).

[1]: http://www.loup-vaillant.fr/tutorials/avoid-assignment
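
A tiny made-up illustration of steps two and three in Haskell:

    -- Step two: the explicit loop becomes map and filter.
    sumOfEvenSquares :: [Int] -> Int
    sumOfEvenSquares xs = sum (map (^ 2) (filter even xs))

    -- Step three: capture the "filter, transform, combine" pattern in a
    -- higher-order function of your own, then reuse it.
    combineWith :: (b -> b -> b) -> b -> (a -> b) -> (a -> Bool) -> [a] -> b
    combineWith op z f p = foldr (op . f) z . filter p

    sumOfEvenSquares' :: [Int] -> Int
    sumOfEvenSquares' = combineWith (+) 0 (^ 2) even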


It takes weeks to retrain one's mind to think in a new paradigm. It took me 8 weeks (years ago) to 'get' OOP, coding every day. Maybe I'm slow, but something on that order is expected.

To change means painful, unproductive days. I went home sweating every day(!)

So some of the emotion regarding choice of paradigm is of course personal - we all want to stay where we are comfortable. Then we rationalize why that is best.

I believe the CMU profs have taken a different tack - how to teach folks relatively new (< 20 years?) to computers. They may choose any paradigm that works, and they seem to be perfectly trained to make that decision well.

As for job opportunities for their graduates, CMU has traditionally had zero problem there. So we shouldn't cry any crocodile tears for them. They will learn 'practical' OOP etc. as needed at their first job.


If you are comfortable with FP at a low level, then jump in and give it a try. At a high level the two approaches tend not to look that different. Why? Because the end goal is the same: clean interfaces to logically organized components. It shouldn't be too surprising that a logical and consistent interface to a balanced binary tree or a bank account looks very similar once you get past the implementation details.


> I'd rate my abilities in C and Java as probably a 7/10.

To go off on a tangent here, this is something that's been bothering me for a while. These one-to-ten scales are seductive, but they do a very poor job of representing people's actual skills.

The problem, for me, is the 10. Where is 10? If you're at 7/10, you're 70% of the way to whatever 10 is. But how do you know what 10 is? If you're a novice C programmer, you really have no idea. You know about some things you can't do, but you don't really know what you don't know.

Skill differences are really hard to measure. If you compare me with, I don't know, Rob Pike, Fabrice Bellard, Ian Lance Taylor, Landon Curt Noll, P.J. Plauger, Peter Deutsch, or Dan J. Bernstein, or somebody like that — well, if we're working on a project that I can reasonably do myself, they might get it done in ⅔ of the time it would take me. Or maybe ½. So maybe I'm a 7/10? Or 5/10?

But suppose the project at hand was something of the difficulty of otcc or gold or mawk? Those are probably things I could write, given enough time. But those guys could probably do it ten times as fast as I could. Does that mean I'm at 1/10? (Also, their version would work better than mine.)

The trouble with rating my skills at 1/10 is that it suggests that my C skills are close to the minimum. But I'm probably as much better than your average C-speaking CS undergrad as Peter Deutsch is better than me.

We really want some kind of exponential scale, like the Richter scale for earthquakes (or its modern replacement, the moment magnitude scale), or dBa for sound pressure levels. There's no actual ceiling on dBa or earthquake magnitude, but in practice, the numbers stay within a usable range, because of physical limits.

So maybe on this exponential scale, the CS undergrad guy is a 10, and I'm a 12, and Peter Deutsch is a 14. Or maybe a 16. And this comes to the other problem: how do you judge people above you? You can't, really. If they complain about my variable name choice when I send them a patch, and I can't tell that their suggestion is a real improvement, well, is that because something that's obvious to them is something I'm missing? Or is it just an idiosyncrasy? I can't tell until I'm at their level.

To some extent, though, I can judge objective achievements. GhostScript has held its own against a much better-funded competitor, managed by adept programmers, for decades. Yahoo Groups still runs on qmail, as do thousands of other mail sites, and still nobody's found a practically exploitable bug in qmail. ffmpeg and tcc are impressive achievements. And so on.

But how can I quantify whether the skill required to create GhostScript or to create ffmpeg was greater? Both are probably beyond my own ability. So I can't even tell whether Deutsch beats Bellard or vice versa. There's no batting average to put on our Genius Hacker Trading Cards. So I can't tell whether Deutsch is at 14, or 16, or 18 — whether he's a hundred, or ten thousand, or a million times better than I am.

But if he's at 10/10, I'm sure as fuck not 70% of the way there.


>And this comes to the other problem: how do you judge people above you? You can't, really.

Relevant Wikipedia article: The Dunning-Kruger Effect http://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect



