Stop writing classes... (pyvideo.org)
369 points by andybak | 197 comments



I almost never write classes in Python. That may be a result of the fact that I never use Python for anything other than system automation and data post-processing; I don't use Python to build complicated systems. What I have made use of are named tuples:

  import collections
  import math

  Point3D = collections.namedtuple('Point3D', ['x', 'y', 'z'])
  p1 = Point3D(3, 2, 4)
  p2 = Point3D(10, 1, 10)

  dist = math.sqrt((p2.x - p1.x)**2 + 
                   (p2.y - p1.y)**2 + 
                   (p2.z - p1.z)**2)
I don't mean to say that classes aren't useful in Python, just that I've gotten a lot done in the language just depending on hashes, lists, tuples, named tuples, functions, comprehensions and generators. I've never felt the need to define a class - the closest I came, I realized I wanted a named tuple. It certainly would make sense in the above example to define "dist" as a member function of a Point3D class, but it's just not how I write code in Python.
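For contrast, a minimal sketch of the class-based version alluded to above might look like this (the Point3D name and dist method are just the namedtuple example recast, not code from the talk):

  import math

  class Point3D(object):
      # hypothetical class-based alternative to the namedtuple above
      def __init__(self, x, y, z):
          self.x, self.y, self.z = x, y, z

      def dist(self, other):
          # Euclidean distance to another point
          return math.sqrt((other.x - self.x)**2 +
                           (other.y - self.y)**2 +
                           (other.z - self.z)**2)

  dist = Point3D(3, 2, 4).dist(Point3D(10, 1, 10))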

Contrast this with C++, where I define classes all the time. I think the reason for this discrepancy is that if you want to do anything interesting in C++, you almost have to define a struct or a class; that's where most of the abstractions happen. In fact, if I want functional style code in C++, I have to define my functions as classes. I think that Python's native tuple types, first-class functions and comprehensions make the difference.


A few years ago, I quit organizing my code in terms of classes altogether. Everything got so much simpler that, for me, that whole way of looking at programming now seems like a gigantic mistake. It's true that you have to have a powerful enough language to get away with writing only functions, or at least one that doesn't impose classes on you, but that's true of lots of languages nowadays.

Freud said sometimes a cigar is only a cigar. I say sometimes a function is only a function.

Classes get in the way between the programmer and the problem. They force you to reify your thinking into "things" (classes) that demand names and citizenship rights, so to speak, in the system. These "things" don't really exist, but we act like they do, and so increasingly see the problem in terms of the classes we've defined. This is a distorting filter that we can ill afford when the problem is complex (if it's not complex, who cares - anything will work ok). The solution is not to define such pseudo-things in the first place, but rather to focus at all times on the problem at hand. Functions, for me, are the right level of abstraction to do this. They don't get in the way.

Once a bunch of classes have taken residence in your system and your brain, it's hard to remember that they aren't real, and thus the pain threshold is quite high before you realize that they're causing problems. At that point they're hard to change: you have to replace the old class hierarchy with a new class hierarchy. The difference between that and reworking a few functions is the difference between working in concrete and clay.

There's also the cognitive tax of having to think about what classes to create and what to name them. This too is overhead we can ill afford given our limited mental resources. If you've ever found yourself putting some code in a new class (perhaps because the old one got too big, or because you need to use it from two places) and gotten stopped in your tracks by the question, "What should I name it? What is this thing?" you are experiencing the overhead I'm talking about. It's the programming equivalent of Mr. Heavyfoot: http://www.youtube.com/watch?v=R9d2Y1We-d0. There is no such thing; you're just working in a way that forces you to pretend that there is and fulfill a bunch of duties towards it.

These two issues together are a double-whammy: you pay up front, and then you pay again later.

It is shocking to see how much simpler systems become when we stop doing the things that make them complicated.


I couldn't disagree more.

Classes are nothing but:

  1. a group of data attributes, and
  2. a defined set of operations that can happen on instances
     of those data attributes.
If you are opposed to classes, which of these characteristics are you opposed to? Since I think it would be pretty much impossible to be a programmer and be opposed to structs or tuples, it must be the latter that you don't like: giving structs an associated set of methods that operate on them.

But having a set of methods that define the state transitions that are possible on an object is incredibly useful, for so many reasons. It helps organize your code. It provides nicer, more intuitive syntax. But most importantly, it reduces program complexity by narrowing down the set of code that can directly modify the attributes of an object.

Without this encapsulation, any piece of code anywhere in the program could be modifying data attributes in any way, and it is extremely difficult to reason about the state transitions that any particular struct could be taking.

There is no rule that says classes have to be in any kind of hierarchy at all, and in fact "prefer composition over inheritance" has been common wisdom for years now.

I recently rewrote a very useful but hard to follow JavaScript library in a more object-oriented style. The old code has a bunch of maps everywhere that can be read or written by any part of the code, and it's very difficult to follow. In a more object-oriented style the structure of the code is far more obvious, and the state transitions that each object can take are far more clear.


Upvoted, but I disagree with every paragraph :)

I don't accept your breakdown of what classes are "nothing but". You've omitted at least one critical thing: the name. But even if the list were complete, it's a fallacy to say "if you dislike the combination, you must be opposed to one of its components".

(The name is critical because the human mind leaps irresistibly from names to things, something you appear not to be considering. But I already wrote about that.)

Organizing code, making it nicer, reducing complexity? All I can say is that my own programs got far better when I stopped organizing them into classes. They became shorter, easier to write, easier to test, easier to change, and more fun to produce. But we're in YMMV territory here. I do feel like stating, though, that I worked for years in the style you describe. I even taught it. And I said and believed many of the same things. Should that count for something? Maybe not. Maybe I just got bored and went off.

Encapsulation? The most grossly overrated allegedly simplifying mechanism ever. But "any code could be modifying blah"? Any code can do a lot of horrible things. Trying to rely on technical constructs to prevent it just adds weight and bloat and impediments. Let's have flexible languages and be good programmers.

Difficult to reason about? The hardest code I've ever tried to understand has been complex object models where Thingy depends on Fooey which needs a Bingy and a Batty, and to construct a Batty you need a... Compared to this sort of conceptual glob, bad procedural code has always, in my experience, been easier to understand.

Inheritance vs. composition? Not relevant here. Maybe I should have said "graph" instead of "hierarchy". Whether A "is" or "has" a B, that's still an edge in a graph, and it's those object graphs I'm talking about. They are much harder to rework than functions. The trouble is that when we believe that programming is making object graphs, we assume that difficulty to be part of the problem and don't notice it.

Rewriting? You rewrote something to be clearer and more intuitive to you and maybe to others who share your beliefs. That it happened to come out in objects is a consequence, not a cause, of what you find clear. It's a lot harder to judge clarity across assumption sets. For example, I work a lot in JS and defining JS objects is the one thing I never do. Maybe if each of us put our code in front of the other we'd recoil to exactly the same distance :)


I'm glad to see this discussion on HN, because I've always wondered if there was something I was really missing regarding OO programming and whether or not coding in that style exclusively would make me a better programmer. Obviously it's all subjective and somewhat dependent on the task at hand, but my concern is probably more with whether that "style" of thinking is beneficial.

Then, fairly recently, I dabbled in some OO programming when I was making a mod for Unreal Tournament (I've also coded in C++ in the past, but not extensively); and I realized that I had already been using the same line of thinking with my functional programming in the way I was using structs, tuples, etc. When you really get down to it, it's really all just syntax and semantics.

For whatever reason, my brain has always naturally leaned towards the functional style; it's just my opinion (one gruseom seems to share), and probably only the result of how my brain works, but something about OO programming always seemed unnecessary for me. It seems to me like it's just another way for people with different thought processes than my own to represent their program logic. To me it seems a little too verbose and entirely too repetitive, and I always felt like the overhead required to "classify" everything wasn't worth it; but it's all in how you look at it, how your brain works or what you're used to. So what seems unnecessary to me (and makes it harder for me personally to trace the program's logic) might be perfectly comprehensible code for someone else; and some of the relatively minimalistic code I attempt to write might seem perfect to me, while an OO-minded programmer might feel like it's scattered about and hard to understand.

Good code is good code, regardless of the paradigm.


> I dabbled in some OO programming when I was making a mod for Unreal Tournament

> but something about OO programming always seemed unnecessary for me.

I actually learned to program making mods for Unreal Tournament (1999). I'd love to hear more about your feelings about how OO seems unnecessary.

For me, UT was a prime example of OOP (both the general bundling of data and functions together, and inheritance) being used to simplify things. Key is that any instance of 'actor' is a single physical object rendered in the world. A subclass of 'actor' is "inventory" - items that can be picked up and held by players. Below that is the "weapon" class: weapons a player can switch between and fire, and which are rendered in 1st and 3rd person views.

Obviously you don't need to be OO to pull off a game; many FPS games are not. But it makes things so easy to handle. If you build a new weapon, all of the functions to handle picking it up, rendering it, etc. are written in superclasses. But if you want, you can override those. Furthermore, the weapon interface is well-defined to external entities; e.g. the 'playerpawn' knows his weapon has a 'fire' method.

I'm not sure how you could build an equally powerful and customizable engine without using OOP techniques at some level. Sure, a weapon could be a struct, but how would you know what associated fire() method to call? Unless you want a bunch of non-extensible case statements, you'd need a function pointer in the struct itself... and once you couple the data and methods, you are getting close to OOP.
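A rough sketch of what that coupling looks like in Python terms (the weapon and fire names here are invented for illustration):

    def rocket_fire(weapon):
        # behaviour attached to the data, not a global case statement
        print "firing", weapon['name']

    rocket = {'name': 'rocket launcher', 'ammo': 5, 'fire': rocket_fire}

    # the caller dispatches through whatever is attached to the struct
    rocket['fire'](rocket)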

One thing I loved about UT compared to Quake-engine games was its extensibility. OOP allowed reflection; just 'spawn weaponpackage.weapon' and you can use a custom weapon. This also allowed gameplay "mutators" to blend together. Quake games in the day required a custom -game argument at startup; and consequently custom content could not easily be merged together.

Of course, this is how I learned to program so my brain may be overly-wired that way.


As someone who works in games, I should point out that the industry has rejected the hierarchical OO model in new engines because it doesn't remain maintainable at scale.

The influence that is driving the new stuff - component systems and composable entities - can be seen as derivative of the actor model. Actor models go beyond what is present in most game engines, which tend to stay well on the imperative side of the line. It takes a heavy dose of purity to grok actors in full, but it dovetails well with functional style as an alternative view of computation. (I also recommend the Tim Sweeney POPL06 "Next Mainstream Language" talk to see exactly what problems are motivating the changes)

First, aim to never store values on objects - prefer instead listeners that return values independent of the objects themselves. Suddenly you start to enjoy "code=data" just as in Lisp, because the semantics of getting data and processing data are identical and interchangeable - you can take a getter on one object and copy it to a different one, and they'll return the same data, even if their other methods are completely unrelated!

Decoupling occurs naturally in such a system - instances can be as similar or different as desired. The usage of classes shackles these abilities into monolithic, heavily-coupled data structures again, which is the core point of difference from "OO" as we know it in industry.

If you also stay purely asynchronous at all times, there is no longer a difference between "now" and "later" to the callee, so order-dependent operations can naturally transition from one-to-one mappings into arbitrary ordering and queuing. Pure asynchronicity was lost when actor-style systems first tried to enter industry (e.g. Smalltalk-80 opted for a synchronous system - a concession to single-threaded performance), but it's come back into vogue in this heavily-parallelized era.

Fortuitously, it's straightforward to achieve these properties in languages that support closures and reflection. And you don't have to write entirely in this style; raw data structures can be written imperatively, and then tied together using actors. The listeners can access a common database to look up values. You can mix and match.
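To make the "copy a getter from one object to another" idea concrete, here is a minimal sketch under my own naming assumptions, with plain dicts of callables standing in for actors:

    # each "actor" is just a bag of listeners (callables)
    player = {}
    monster = {}

    player['health'] = lambda: 100        # a getter is itself data...
    monster['health'] = player['health']  # ...so it can be copied wholesale

    print player['health'](), monster['health']()   # 100 100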

State-of-the-art game entity systems aim to cut down this level of power into a palette of performance/power options for composition, but if you can live with high overhead, taking all the power is pretty simple to do. I wrote a prototype for it yesterday - 169 lines of Haxe.


That is fascinating, and I for one would get a lot out of seeing some code. Any chance of posting your prototype in a gist please? :)


You (and the other commenters) prompted me to do something I've been meaning to do for a while: get my code on GitHub.

The actor system (Haxe highlighting is broken): https://github.com/triplefox/triad/blob/master/dev/com/ludam... And example: https://github.com/triplefox/triad/blob/master/examples/Sour...


I was thinking the same thing while reading the post :) Code or more fleshed out critique (even pointers to others' critiques) will be great.


Agreed--I'd love to see this as well.


As a person who has written games, I think games are one of those weird cases where OO really shines, but I don't think it's representative of most general programming problems. In my experience OO makes a ton of sense when you have a lot of mutable state that actually represents a tangible real world object or metaphor. So that works great for games, which are all about simulation, and where you actually have a lot of things that really do map to an "object".

But, outside of simulations, a lot of programming problems come down to things that don't have a convenient real world mapping. A lot of it is just data transformations, and data transformations are awkward in an OO setting, because the focus is on the object rather than the process. This is where more functional styles really shine (for instance, in compilers and such), where the existence of an object would be incredibly transient and short-lived, because the goal is to transform one set of data into another representation. OOP basically sucks at this.

Objects are certainly useful, but I think they've been fetishized to a weird degree.


OOP works very well for UI code. I got introduced to it in that way, back with Turbo Pascal. When you bought a compiler you got this wall-filling poster with the class hierarchy. Great fun. Nowadays, this is still reflected in toolkits such as Qt.

It also tends to work for things that can be represented as opaque handles (such as files, sockets, ...). Polymorphism helps there, to be able to treat different but similar objects in the same way. No need for deep, nested hierarchies there, though.

But I agree that outside the domain of UIs it quickly diminishes in value.


I agree; I think where we went wrong is in these one-size-fits-all designs. I like OO and events for UI; I feel that it is more powerful than other paradigms like, say, controller/template. I think black-box widgets that respond to and emit events, and that can be inherited, work in a manner that reflects my thought process. For example, I always use a select box, where you have the standard select box and, via extension or mixing in, you can create a filtering select box that adds the behavior of filtering type-ahead. To me this works very well and promotes loose coupling of the interface as well as reusability of components.

Once I move past the UI, though, I really start to feel the burden of OO drag me down as far as productivity goes. It requires a lot of organizational overhead to force overtly service-based functionality into OO structures; something like getUserList is functional at its simplest form and should remain so. When working with Java I tend to discard OO by creating static functional classes that act as a service and util layer. I find that doing so simplifies the architecture of the system and eliminates a lot of spaghetti code that has to be written to deal with making functional logic OO.

For that reason I tend to prefer languages that can do both, or that don't force a pattern on you at all, and that leave it up to the library developers to work out patterns. JavaScript, despite its warts, is a good example of the library developers building out that layer of the language. I also find myself writing more and more back end code in Clojure for that reason; while it is a functional language, it does not preclude the ability to write OO code if it is the right pattern for a particular problem set. I like not having to fight the language to break free of the constraint.


> the focus is on the object rather than the process

From a business apps point of view, this is exactly it. You might find that methods are subordinate to objects, but in the business domain itself, the objects themselves are manipulated by processes. It's a whole other layer above the OO layer, and failure to realise this means you either end up with lots of FooManager classes, or you push overarching responsibilities back down into low-level objects that don't really want them, and find you have too much coupling and not enough Demeter.

(Of course, you might be writing FooManager classes with this upper layer explicitly in mind - in which case fair enough and you'll probably like DCI. But I think there's still a lot of mileage in plain ordinary functions that don't have to belong to anything)


I learned a lot from Quake 1. QuakeC [1] was not object oriented, although it had one struct type: an entity. The entity struct had function pointers that were used to attach handlers for various events (touch, think, etc.), but there weren't methods per say, or a class hierarchy. The code was very procedural. You could not define new types, and could only make limited modifications to the entity struct's definition.

This restriction was often frustrating, but limited types meant the language was very approachable for beginners. Once a programmer understood the entity and its attributes, they knew what they had to work with. They only needed to find interesting ways to transform data, not interesting ways to organize data.

While sometimes frustrating, the simple type system rarely limited the creativity of developers. This is evident in the wealth of mods that were created for the game. People built Diablo style RPGs [2] with it, racing games [3], and much more.

In Quake's sequels, the modding system grew more flexible (and also more complex). The number of mods seemed to decrease. The amount of mod content for Quake 2 was less than Quake 1's, and Quake 3's was far less than Quake 2's. The mods that were released were often far more polished, but I think this is representative of the skill level and determination that was required to actually build mods for those games.

Garry's Mod [4] is the only modern game I've seen with a similar level of successful modding activity. It also uses a scripting language (Lua). Garry's Mod defines a strict set of structs that you use to interact with the host game engine (effects, entities, tools, and weapons). It feels very Quake-like.

I personally feel like these kinds of restrictions create flatter code, and structure that is easier to hold in my head. I find myself concentrating more on features, rather than on code organization.

[1] http://www.inside3d.com/qcspecs/qc-menu.htm

[2] http://www.inside3d.com/prydongate/

[3] http://www.quakewiki.net/archives/qca/reviews/patch70.htm

[4] http://garrysmod.com/


Nitpick: "per se".


To follow up on gruseom's comment that YMMV, I would point you to Joe Armstrong's (Erlang creator) "Why OO Sucks"[1]. The title isn't great, and you may hate the content, but the interesting thing (to me) is that he makes a lot of the same basic points as you and then he draws all the opposite conclusions.

Maybe a reasonable conclusion is that there is no silver bullet for managing complexity and making things easier to reason about.

[1]: http://www.sics.se/~joe/bluetail/vol1/v1_oo.html


No silver bullet? Check this out: http://vpri.org/html/work/ifnct.htm

They already have some results: http://www.vpri.org/pdf/tr2011004_steps11.pdf

To sum it up, they do "personal computing" in about 4 orders of magnitude less code than current mainstream systems (from more than 200 million lines to about 20,000). That includes the self-implementing compilers. The only thing they left out was the device drivers (which, by the way, take less than 0.5% of the size of current systems).

Now the Complexity Werewolf just went from "Unspeakably Awful Indescribable Horror" to "Cute puppy".


That's a really interesting article. Too bad the <title> of that document is 'Test page', making it practically unfindable with Google, SEO-wise.


> Classes are nothing but:

> 1. a group of data attributes, and

> 2. a defined set of operations that can happen on instances of those data attributes.

It must be nice to live in a world where a given piece of data belongs in only one group.


The point you're implying here is unclear. Can you expand on it?


I still think classes have value. For example, Python's standard library provides csv.writer and csv.reader objects for reading and writing csv files. Making them objects, rather than just functions, allows one to use them like so (assuming the csv file contains "value,average,stddevs" on each line):

   import csv

   vals = []
   avgs = []
   stds = []
   reader = csv.reader(open('data.csv', 'r'))
   for row in reader:
      vals.append(row[0])
      avgs.append(row[1])
      stds.append(row[2])
Providing this abstraction without using objects would be difficult, I think. The key differentiator: is what you want to represent with a class a concrete thing with a clear interface? If those two things don't hold, then it may be better not to represent the concept with a class.

Again, I have to break this reasoning all the time in C++. I try to use free functions as much as possible, but when I want behavior based on runtime values, I often have to represent abstract concepts using classes because those are the tools that C++ gives me.


In Python, you can get that exact same behavior by writing a generator function. I'm sure this code has shortcomings -- it doesn't handle comment lines, for example -- but something like this will parse basic CSV:

    def readcsv(file):
        for line in file:
            if line.strip():
                yield tuple(field.strip() for field in line.split(','))
No explicit class definitions necessary! And you can write the same kind of code. Or, if you want to be a bit too clever for your own good, you can shorten it to this:

    rows = readcsv(open('data.csv', 'r'))
    vals, avgs, stds = zip(*[map(int, row) for row in rows])
At no point here have I used the "class" keyword!


You and jules provided excellent examples for how to do my example without classes - but you do assume that the csv parser never needs to peek across lines. That's probably fine for csv files, but in general, lexers/parsers may need to. My point is, you're taking advantage of the fact that files are already objects which can be processed line by line. You can't always do this. I think that things like files and sockets are very nice to have as objects.


In a way, you could think of the generator function as an object with a specific interface.

In fact, I think that's how generators are implemented in Python; the language just hides this fact away from the programmer.


Everything in Python is an object behind the scenes. If you look at the CPython extension API, it turns out that pretty much everything is a PyObject.


Why would it be difficult? That's just a lazy sequence or iterator in Python terms. In fact the standard way of writing that in Python is with a function and `yield` or a generator expression, not with a class.

    def read_csv(file):
      for line in file:
        yield parse_csv_line(line)
-or-

    def read_csv(file):
       return (parse_csv_line(line) for line in file)
-or-

    def read_csv(file):
       return map(parse_csv_line, file)
-or-

    from functools import partial
    read_csv = partial(map,parse_csv_line)
None of which involve defining a class.


I notice that your first and second versions return an iterator, while your third and fourth versions return a list. So, you know, be aware of that. (If you want a version of map that doesn't force its arguments and produce a list, you can use itertools.imap. I wish it were a standard function so I didn't have to import it.)


> you can use itertools.imap. I wish it were a standard function so I didn't have to import it.)

In Python 3 it is (the `map` builtin is lazy, it's essentially `itertools.imap` and the old eager `map` has disappeared)


Couldn't you just do something like this?

  (let ((reader (csv-reader "data.csv")))
    (loop :for row = (reader) :while row
      :collect row[0] :into vals
      :collect row[1] :into avgs
      :collect row[2] :into stds
      :finally (list vals avgs stds)))
That's in the bastardized Lisp that I favour, but any language with closures should allow it. The point is, no classes.

I don't want to seem fanatical about this - your example is a fine use of objects. But unless I'm missing something, one can do just as fine without them, and with a little less code (since you don't have to define a class).

I (maybe!) have no problem with classes for things that, as you say, are concrete and have clear interfaces. But then, when things are concrete and have clear interfaces, there are a lot of nice ways to do things. Making things well-defined and clear in the first place is the hard part.


Objects are collections of closures that share state. As soon as your interface has more than one closure, you've more or less reinvented objects to solve that problem.
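For instance, a minimal sketch of two closures sharing state (names mine):

    def make_counter():
        state = {'n': 0}            # captured by both closures below
        def increment():
            state['n'] += 1
        def value():
            return state['n']
        return increment, value

    inc, val = make_counter()
    inc(); inc()
    print val()    # 2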


But (a) classes demand names, and that's critical, and (b) I almost never need more than one closure. Oh, and (c) this is far from the organizing principle of the system.


Dictionaries, lists, tuples and other structures also demand names. Getting away from classes does not free you from the need for names; and in a dynamically typed language, it requires even more attention to names - see my other comment http://news.ycombinator.com/item?id=3719056

I almost invariably find I need more than one closure down the road, outside of simple predicates handed off to algorithms. Even for such simple things as specifying a sink for output (out :: string -> ()), later on I find myself wanting a flush() routine. That's just my experience: problems grow thornier over time, and starting out with something more object-like adapts to that growth more gracefully than a closure does.

I think a lot of the problem comes from the syntactic weight of classes. I've implemented closures in a commercial compiler; the actual implementation, when it uses captured state, is very similar to a class, and in fact the implementation method I chose behind the scenes was a class. So I tend to view closures (rather than simple function literals that don't capture state) as just a concise way of declaring and instantiating an object.

I've had some success with blending the two. For example, a read-only collection facade is often implemented in .NET with ReadOnlyCollection<T>. But the normal way of implementing that thing is to pass off an IList<T>, which you may need to implement wholly if the logical read-only collection is not already a list. I think that's a complete waste of time; instead, I wrote a function that takes two closure arguments, getCount :: () -> int and getItem :: int -> T, and henceforth I can create a new collection with new behaviour without worrying about the "name" of the descendant, but with the benefits of a class for flexibility down the road.
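A rough Python analogue of that two-closure facade (sequence_view, get_count and get_item are my names, not a real library API):

    def sequence_view(get_count, get_item):
        # builds a read-only, sequence-like facade from two closures;
        # internally it is still a class, which is exactly the point above
        class _View(object):
            def __len__(self):
                return get_count()
            def __getitem__(self, i):
                return get_item(i)
        return _View()

    evens = sequence_view(lambda: 5, lambda i: 2 * i)
    print len(evens), evens[3]    # 5 6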


> Dictionaries, lists, tuples and other structures also demand names.

Those are just names for values though, we don't have to create a name for a new general principle just to use one.

Not to mention values often do not need names at all, in an expression oriented style where function calls are composed.


Any time you have a dictionary or list of something, there must be some regularity in the entries, otherwise you can't do anything meaningful with them; call that regularity a type, and you can give it a name (or choose not to). If you don't give the type a name, you'll probably give it a name via the variable (or else have very cryptic code). In any case, it has a name, irrespective of whether it's a type or a value.


> you'll probably give it a name via the variable

Variables are easier to name than the more general notion of class or type (being specific to what is immediately at hand and thus not a distraction), which I think was the original point.

And even assuming that my structures will be named is assuming too much. Consider common uses of the fold function, which act on the pair (accumulated-value, item). It's common that the pair isn't named. Its parts are usually named, but that's beside the point: I wasn't forced to name the structure itself, much less create an "AccumulatedSumWithNextItem" class.


If your problems are trivial enough that they reduce to folds, sure, these concerns go away; but AccumulatedSumWithNextItem would be a terrible name in any case, because it only describes the type, not its intent.

For example, List<Customer> describes both the type and its semantic content. List<Tuple<string,DateTime,int>> just describes a type; it's a lot less self-documenting, and code using it will be cryptic. In a dynamic language it will probably be customer_list or somesuch. Point being, "customer" is a name and it crops up somewhere, whether it's in the variable or the type.


This is horribly off-topic, but perhaps relevant still: Can you share the details of this bastardized Lisp that you favour? Is it a lisp that is generally available, or one that you made up on the spot? I'm curious.

I don't know the ins and outs of the loop macro, only that it has a lot of them, so perhaps this is simply Common Lisp and I need to keep reading my CL books, but the "row[0]" sugar makes me think that there is something else going on.


I made that example up (it's CL, except for row[0]), but it's the Lisp I want. I'd love to share the details because they're very specific and come directly from the application I'm writing. But we probably shouldn't go so far off-topic here. You're always welcome to email.


Honestly, I'm not sure if you can do that in Python - one of the side-effects of not writing classes in Python is that I'm also ignorant of the behind-the-scenes stuff that goes on with them.

You may be able to get something close to what I wrote with a function that returns a closure, but I don't think you can get exactly the for x in y syntax. As explained in another post on HN's front page, "A Guide to Python's Magic Methods": http://www.rafekettler.com/magicmethods.html#sequence, you need to define the __iter__(self) method on an object to support that style of iteration. (And that method needs to return an iterator object.)
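(For what it's worth - and as the replies below note - a generator function does get you the for x in y syntax without writing __iter__ by hand, because the generator object it returns already implements it. A quick sketch:)

    def countdown(n):        # any generator function will do
        while n > 0:
            yield n
            n -= 1

    gen = countdown(3)
    print hasattr(gen, '__iter__')   # True
    for x in gen:
        print x                      # 3 2 1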

And that brings us back to the same point I already made, but I think it's so important that I'll say it again: the abstractions you choose to represent concepts in your program will be determined by what is easy in the language you're using.


Ah I didn't realize Python's limitations (if it has them here) were part of the mix. I strongly agree with your general point and will go one further: this has a huge impact on the kind of system that over time gets developed in that language. It conditions what gets written, which conditions what's easy to do next and eventually what's possible to do next. By making different things easy, languages bias the programmer toward thinking different thoughts. I wish more people would figure this out: it would make discussion of these things much more interesting. (Instead, all we really do is comparison of code snippets in different languages, which kind of misses the whole point.)


Something that the objects I had in mind have in common: they're resources. They fundamentally require a handle, and associated functions that accept that handle as a parameter. There already is a tight coupling between the resource handle and those functions. Such resources, off the top of my head: files, sockets, locks, condition variables.


Check it out...

     def fibonacci_sequence():
         a, b, c = 0, 0, 1
         while True:
             a = b; b = c; c = a + b
             yield b

     >>> fibonacci_sequence()
     <generator object fibonacci_sequence at 0x108055550>
     >>> list(itertools.islice(fibonacci_sequence(), 0, 10))
     [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
     >>>

     for n in fibonacci_sequence():
         print n
         if n > 13:
             break

     1
     1
     2
     3
     5
     8
     13
     21
     >>>


Yes, I had forgotten about generators, pjscott and jules provide great examples for the csv case above.


A function that yields instead of returning values results in an iterator. One does not need to define a class at all.


A closure is essentially just a class by another name and with slightly different syntax. Yeah, you can replace classes with them, but to what purpose except to be able to say "no classes"?


"Classes get in the way between the programmer and the problem. ... Functions, for me, are the right level of abstraction to do this. They don't get in the way."

Couldn't agree more. I've long suspected that OO is a diversion or misdirection from actually solving the problem, rather than an inherent part of it. Discovering functional programming recently with Scheme and then Haskell has only reinforced that.

OO may be the appropriate abstraction for some things like GUI programming, which seems to be what it was originally created for, but not everything.


I have a similar approach. I only use classes when I find myself passing around an object that is starting to look suspiciously class-like. While this adds a refactoring burden, it turns out that most of the time OO is a waste of time, and a few arguments is all you need. Some things are objects, in and of their nature. So it turns out that they transform into classes very nicely. But most of the time, you have data, and you have things that work on that data, and it really does you no good to nounize the world too far. You wind up going off into this ontological modelling foolishness, and as a rule of thumb, you wasted your time.


In what language do you find it easiest to make classes only when you feel like it?

most of the time, OO is a waste of time, and a few arguments is all you need

That reminds me of another key point. I try not to let function signatures have more than a few arguments, and I try to keep those arguments primitive. (A good litmus test is how easy it is to call from the REPL.) When my code starts to break these guidelines, that's a sign of design weakness, and I do what it takes to break up the complexity. It still amazes me how far you can get in terms of simple, decoupled design just by doing this.

The widespread OO practice of factoring some of those arguments into a new class, so that now you need pass only one thing (the new composite object) instead of several old ones, does nothing to solve the problem, but instead makes it worse: you've both added complexity and lost transparency. It's like a kid saying yes when mother asks "did you clean your room", having shoved all the mess under the bed.


I feel all this talk against OO is missing the fundamental idea behind objects: you are in fact strictly reducing the solution complexity by the proper use of classes. They are a mechanism of data-hiding. I know data within a class cannot be accessed outside of my defined interface, thus my "interaction space" has been reduced. That is, there is data I can't access and code that cannot access my data. This is a marked reduction in solution complexity.

The same can be said about functional programming. But once you degenerate to passing a ton of parameters around in each function call, you're probably better off defining an explicit object anyway.

Measuring complexity by number of lines and saying defining a class is adding complexity is off the mark.


My intuition tells me that hiding data in objects does not really help as long as the data is still mutable. Immutability is the only way to guarantee true encapsulation for objects that don't actually represent something inherently stateful (e.g. a file on disk).

Coupling data and operations is also a problem. It's non-extensible, both in that you can't add more data and use the same operations, and you can't add more operations that work on the added data. Inheritance works around some of this, but it doesn't solve the general problem. What if it makes sense to add the same piece of data to objects of classes from different parts of the hierarchy?

The thing about functions is that you can always write a new one that works with anything you want. You don't need to add it to a class or an interface (you can, if the language allows that), and if possible, with immutable data that essentially makes the function a unit of perfect encapsulation.
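A small sketch of that combination - an immutable record plus a free function - with names invented for illustration:

    from collections import namedtuple

    Account = namedtuple('Account', 'owner balance')

    def deposit(account, amount):
        # returns a new value rather than mutating shared state
        return account._replace(balance=account.balance + amount)

    a = Account('alice', 100)
    b = deposit(a, 50)    # a is unchanged; b is Account('alice', 150)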


You're going to have to offer some evidence for this, because (as many threads on HN have discussed) all the evidence I've seen points to precisely the opposite of what you say: the best measure of complexity is simply the overall size of the program. If that's true, defining a class certainly does add complexity, and your argument is plausible but wrong.

Not content with repeating myself in this thread, I will repeat myself from threads of the past as well: it's astonishing that the evidence is so consistent on this, yet almost nobody takes it seriously. But I do! And I have a really interesting question, too: what would it mean, not just for a programmer to organize their work around this principle, but for an entire company to do so?


You're probably referring to the study that showed lines of code correlated with error rate. I completely believe it. But I think there's more to it than that.

We all know that lines of code is a poor measure of code complexity, but it does correlate with it. It is far more reasonable that error rate is proportional to code complexity, which is weakly measured by LOC.

Unfortunately I don't have any studies to back up my intuition here. But I would bet that my definition of complexity, aka "interaction space", would correlate much stronger to error rate than LOC does. Also, that decently good usage of classes would in fact reduce the complexity, and thus error rate.


Unfortunately I don't have any studies to back up my intuition here. But I would bet that my definition of complexity, aka "interaction space", would correlate much stronger to error rate than LOC does.

Intuition is often surprisingly wrong - lesswrong.com shows a lot of examples. That doesn't mean it's wrong in this case. But once you have code which is hidden, so that the interaction space is limited, several problems can arise:

1) Now you have to add complexity to get information into and out of this interaction space. Similar to how it's easier to walk onto a field than it is to walk into a castle - if the walls protect the contents, they also contort your path around them.

2) Leaky abstractions. It might be the case that your protected code is needed elsewhere. Now you have to duplicate it, or break around the protection/expose a new way through, all of which would not be necessary for a free function. At the very least you have overhead considering this. All of a sudden your neatly wrapped class is two classes, one in use in two places where you could again change code and have an effect far away if you're not careful.

3) In a language such as Python, you don't have your code inside a class protected by anything except convention. So it seems that you either must: a) accept that what protects groups of code is programmer attentive care, not the system of classes itself, and thus that any other programmer paying the right kind of attention could have the same benefits and less code, or b) declare languages like Python to be excluded from the benefits of classes by not implementing them properly. Do you agree?


First: not badgering you :)

We all know that lines of code is a poor measure of code complexity

We certainly don't! Program length is the only good measure we have. The best measure of program length is up for grabs, but LOC is probably as good as any (there was a big thread about this a few months ago).

But I would bet that my definition of complexity, aka "interaction space", would correlate much stronger to error rate than LOC does

But the best work on this points to exactly the opposite of what you say; I would really recommend you take a look at it: http://www.neverworkintheory.org/?p=58 - and tell us what you think.

I'm going away now.


Good link. The thing that stands out to me about the standard code metrics used in studies like this is that they attempt to measure actual complexity, rather than perceived complexity. LOC is perhaps the only measure that directly measures perceived complexity. If I add 100 lines of code to my project, it's possible that cyclomatic complexity stays constant (I didn't add any branches, just a straight fall-through), but the perception of complexity has no doubt increased.

If we can come up with more metrics that are based on our perception of a codebase's complexity, I think we would see interesting results.


> In what language do you find it easiest to make classes only when you feel like it?

Any language that isn't a fairly close C descendant. Haskell, Lisp, Python, Perl are languages I've done some work in with minimal OOness.

When I used Java a few years ago, it required classes. C++ winds up being very class-ish. C of course is limited.

I have come to be very pleased with not using classes unless the solution really demands it. It's sort of a "grow the solution" idea, instead of "waterfall the solution".


>Any language that isn't a fairly close C descendant. Haskell, Lisp, Python, Perl are languages I've done some work in with minimal OOness.

At least in Python - which is the only one of those four I can speak for - all you are doing is basically OO (in fact, everything in Python is an object). The beautiful thing about Python is how it abstracts a lot of this away by providing functions which implicitly call "magic methods" that are either generated by the interpreter or can be defined by the programmer (i.e., __iter__(), __next__(), __init__()), and by virtue of duck typing, which means an object defines itself by what it can do, not its identity or origin.

You'd never even notice a lot of this unless you dip into the internals of Python a bit. And ultimately, I don't think this is actually possible without using OO or some close equivalent. So in the end, a lot of the "solutions" people have proposed in this thread ultimately depend on the very thing they want to "solve": Object Orientation.
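A tiny sketch of that duck typing in action (the Pair class is hypothetical): any object whose __iter__ returns an iterator works in a for loop, no matter where it came from.

    class Pair(object):
        def __init__(self, a, b):
            self.a, self.b = a, b
        def __iter__(self):
            # a generator method: calling it returns an iterator
            yield self.a
            yield self.b

    for item in Pair('spam', 'eggs'):
        print item    # spam, eggs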


I'm reminded of 100 horrible C APIs that take a "struct foo_opts" instead of just conceding that they need 7 or 8 silly arguments.

Or of "sockaddr_in".


The sockets API would be much more future-proof if C better supported polymorphism. That's usually the reason for using structs instead of arguments; in Windows, most API structs start with the size of the struct, effectively indicating the version. That makes binary compatibility easier when you add arguments in version n+1.

sockaddr_in is the way it is because of sockaddr, sockaddr_in6 etc. Judicious polymorphism could have led to fewer headaches converting IPv4 apps to IPv6.


I program 3D geometry and topology professionally. Let me tell you, the idea that my code would be simpler and clearer if I didn't have a Point3D class is purest insanity.

That doesn't mean everything needs to be a class -- that's crazy too. And except for special cases, my inheritance hierarchies never have more than two levels. (And I probably wouldn't allow Point3D, for instance, to be derived from.)

But classes are an incredibly powerful tool when they are needed.


What advantage would a Point3D class have over, say, a Point3D data structure here? (earnest question)

Is it the interface it provides, the types, the coupling of methods to data, or something else?


I'd say the advantages are:

1) Naming it vastly clarifies further data structures and interfaces. (This is shared with having a Point3D data-structure, obviously.)

2) Providing functions and operators specific to the type.

At its simplest (and possibly most compelling) it's the difference between needing to say

    p.x = p0.x + t * (p1.x - p0.x);
    p.y = p0.y + t * (p1.y - p0.y);
    p.z = p0.z + t * (p1.z - p0.z);
and

    p = p0 + t * (p1 - p0);
The latter is easier to type, easier to understand, and less error prone. It's a huge win, especially in more complicated expressions.

Now of course, your language may allow you to define data structure-specific functions and operators. But at that point, what you've got is essentially a class, even if you call it something else.

(PS Note that data hiding is completely unimportant here, and inheritance would be nothing but trouble.)


I agree with you that Point3D-type-thingies are a sweet spot, though your operator-overloaded line is not doable in most class-based languages, and indeed is generally controversial in OO, so I don't think it gets to count here.

That aside, you're quite right that when one has a common construct that permeates the entire program, one badly wants to give it first-class status in the language. What I've noticed, though, is that these constructs don't tend to get involved in complex object graphs (or if they do, they're leaf nodes). Rather, they tend to be atomic and universal (i.e. interoperable with anything) and simple. So I wonder if what one really wants here is the ability to add new primitives to the language. That's potentially quite different from classical classes. It's more related to the "build up a language that fits the problem, then code the solution in that language" idea.


You might guess from my example how little respect I have for the "don't overload operators" argument. All I can say is, all the OO languages I've spent any time programming do indeed let you do that, and I'm sorry for you if your experience is otherwise.

So clearly I agree that adding primitives is an extremely important ability. But it's already nearly if not completely there in all the OO languages I use, using classes.

I don't think primitives are enough, though. The interface / implementations of interfaces combination is also essential to my programming. Being able to say "Curve" and have it mean Straight or NURBS curve or Circle or Offset curve or Composite curve or... is another huge simplifying device. In fact, I've just looked at my code, and I have 15 distinct curve types, at least 5 of which I'd have missed if you asked me to list them, despite having written all of them over the years. That is exactly the power of simplifying abstraction.

Let me point out that given a sufficiently capable OO language, this doesn't need inheritance to implement. This is just the implementation of a role or interface. But it does need classes.


I think there is no question that being able to define new types is important. I don't, however, think it's clear that having to pack operations that operate on the types into the types themselves is a good idea in general. Your example is easily handled with a simple type class in Haskell, e.g., and no, it's not "essentially a class" in the OO sense, as new operations can be freely added without modifying the type itself.


Er, I've never used an OO language that didn't allow you to add new operations freely without modifying a class. Not all of them allow you to add new methods, sure, but new functions and operators are par for the course.

And surely you're not arguing the defining feature of a class is that it's closed?


Hear, hear. I use classes when I actually need the functionality offered by classes, or when it's a good cognitive fit. Often, simple functions and data structures do the job.


I'm a huge fan of a namedtuple; I'm always surprised not to see it used more.

And in cases where you really do need class-like behaviour, it's easy to extend it with additional methods or operators, while still getting all the other convenience stuff for free:

    from collections import namedtuple

    class Point3D(namedtuple('Point3D', "x y z")):
      def __add__(self, other):
        if other.__class__ != self.__class__:
          raise TypeError
        return self.__class__(*[a+b for a,b in zip(self, other)])

    print Point3D(1, 2, 3) + Point3D(5, 4, 2)


Beware that once you subclass namedtuple you are losing out on one of its beneficial characteristics: memory efficiency. Instances of subclasses have an instance dict which has a non-trivial overhead. You can avoid this by defining __slots__ again in the subclass.
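Concretely, the fix is a one-liner in the subclass (a variation on the Point3D example above):

    from collections import namedtuple

    class Point3D(namedtuple('Point3D', 'x y z')):
        __slots__ = ()    # no per-instance __dict__, so no extra overhead
        def __add__(self, other):
            return self.__class__(*[a + b for a, b in zip(self, other)])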


> if I want functional style code in C++, I have to define my functions as classes

Much of the benefit of C++11 and Boost is in not having to write functor classes.


Lambdas buy us this in C++11 (which is awesome), but not so much with Boost alone. Boost can help if you are, say, applying the member function of a class to a bunch of objects; you don't have to write a speciality function object to express that. But if you're writing logic from scratch, and you want it to be a closure, it needs to be a function object.


If you're writing the logic from scratch and want a function object, make a function whose arguments are the variables you need to close over and then use boost::bind. That's the most convenient way.


Yes, but many function objects can be constructed easily enough with Boost.Lambda and related libraries.


Oy, Boost.Lambda. I tried, I really tried. I wrote this comment here on HN a year and a half ago:

I played with boost::lambda, and I found the resulting code less clear than a simple for loop. I also had difficulty gaining an intuition for what I could and could not do with it. Rather than constantly asking myself "Can I use boost::lambda here...? Will it look better or worse than the alternative?" I just decided to never use it.

While I prefer a functional style of coding, I recognized that it was silly to adhere to it if doing so made my code worse - C++'s support for what I wanted to do just wasn't there. If what I wanted to accomplish was several lines of code, I wrote a separate function or function object. If it was brief and I could not express it with combinations of existing functions, then I just wrote a for loop.


The PyCAM codebase has this exact change made to it, and it decreased the run time significantly. That's another thing that I didn't see mentioned in the lecture: Python classes are not free; they have a definite impact on performance.


I really like this approach. Plus, it's easy to parse, particularly if you use class name capitalization for the named tuple (like you did). I've seen this done before, but after your reminder, I resolve to start using named tuples where before I sometimes gratuitously used classes. They probably perform much better than classes, too (although I'll probably benchmark just to be sure).
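If you do benchmark, something along these lines is a starting point (results will vary by interpreter and version; no numbers are claimed here):

    import timeit

    nt_setup = ("import collections; "
                "P = collections.namedtuple('P', 'x y z'); "
                "p = P(1, 2, 3)")
    cls_setup = ("class C(object):\n"
                 "    def __init__(self, x, y, z):\n"
                 "        self.x, self.y, self.z = x, y, z\n"
                 "c = C(1, 2, 3)")

    # compare attribute access on a namedtuple vs. a plain class
    print timeit.timeit('p.x', setup=nt_setup)
    print timeit.timeit('c.x', setup=cls_setup)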


I would love to use other languages (static ones), but Python is so pragmatic AND still on an upward trend right now that it would be stupid for me not to use it.


Are java programmers the most egregious abusers of the Gang of Four?

Twice now, I've been faced with a metric crap-ton of Java source files for reading / writing to a device.

Anything you could possibly imagine gets defined as a class. There are dozens of classes, and more SLOC than I can count... Meanwhile, an equivalent C program comes in at 2k SLOC, and 4 files.

You().Can('not').Do(any(thing)).Concise().In(java())!

When I was primarily doing C++, I gradually switched to generic programming once Visual Studio supported the STL... And then when I started using Perl I found I didn't need to write classes at all.

Reminds me of this Paul Graham quote: "This practice is not only common, but institutionalized. For example, in the OO world you hear a good deal about "patterns". I wonder if these patterns are not sometimes evidence of case (c), the human compiler, at work. When I see patterns in my programs, I consider it a sign of trouble. The shape of a program should reflect only the problem it needs to solve. Any other regularity in the code is a sign, to me at least, that I'm using abstractions that aren't powerful enough-- often that I'm generating by hand the expansions of some macro that I need to write."

http://www.paulgraham.com/icad.html


> The shape of a program should reflect only the problem it needs to solve. Any other regularity in the code is a sign, to me at least

This is just so wrong. The shape of your program should reflect your understanding of the problem, and the clearest way to communicate that to other humans. Patterns will be a part of this, as it reduces the amount of mental effort required to understand the solution.

It is true that writing too much boilerplate to create a pattern is a sign that your language is lacking necessary abstractions. But patterns themselves will always be a part of good software.

I swear this entire thread is just trying to rationalize bad programming practices.


You might enjoy Steve Yegge making fun of this aspect of Java culture, if you haven't seen this particular post before: http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom...


I had not seen that! Explains so much...

In the neighboring programming-language kingdoms, taking out the trash is a straightforward affair, very similar to the way we described it in English up above. As is the case in Java, data objects are nouns, and functions are verbs. But unlike in Javaland, citizens of other kingdoms may mix and match nouns and verbs however they please, in whatever way makes sense for conducting their business.

For instance, in the neighboring realms of C-land, JavaScript-land, Perl-land and Ruby-land, someone might model taking out the garbage as a series of actions — that is to say, verbs, or functions. Then if they apply the actions to the appropriate objects, in the appropriate order (get the trash, carry it outside, dump it in the can, etc.), the garbage-disposal task will complete successfully, with no superfluous escorts or chaperones required for any of the steps.

There's rarely any need in these kingdoms to create wrapper nouns to swaddle the verbs. They don't have GarbageDisposalStrategy nouns, nor GarbageDisposalDestinationLocator nouns for finding your way to the garage, nor PostGarbageActionCallback nouns for putting you back on your couch. They just write the verbs to operate on the nouns lying around, and then have a master verb, take_out_garbage(), that springs the subtasks to action in just the right order.


My biggest problem with Perl is trying to track and remember what dictionaries contain what. I end up with massive comments and ludicrous variable names just so it's clear when I come back to the code 5 years later.

Dictionaries and lists of things don't relieve you from the job of naming and encapsulating the data you're dealing with. But full on classes are usually only needed when behaviour needs dispatching from polymorphic locations, and that isn't too often.


I disagree with this. It's possible to write concise code in Java, although probably not as concise as in Python, since everything still needs to be a class (you can emulate modules with functions-only with all public static methods, but it seems like a bad idea).

It's the OO and design pattern culture that results in much of the Java code you describe as over-engineered. But you can write bad code in any language.


"I find OOP technically unsound. It attempts to decompose the world in terms of interfaces that vary on a single type. To deal with the real problems you need multisorted algebras - families of interfaces that span multiple types. I find OOP philosophically unsound. It claims that everything is an object. Even if it is true it is not very interesting - saying that everything is an object is saying nothing at all. I find OOP methodologically wrong. It starts with classes. It is as if mathematicians would start with axioms. You do not start with axioms - you start with proofs. Only when you have found a bunch of related proofs, can you come up with axioms. You end with axioms. The same thing is true in programming: you have to start with interesting algorithms. Only when you understand them well, can you come up with an interface that will let them work." - Alexander Stepanov.


The same thing is true in programming: you have to start with interesting algorithms. Only when you understand them well, can you come up with an interface that will let them work.

Taking software engineering advice from algorithm geeks is a mistake we've been making for decades over and over again.

In my professional life, I've not had a need to engineer a single algorithm. I have had to build plenty of fault-tolerant, maintainable and understandable control software, though.

The same holds for most, if not all, of my colleagues. Common professional software engineering has nothing to do with sitting in a room on your own figuring out how to best store a dictionary in memory such that it is fast under insertions, removals and lookups.


In case anybody was wondering: other things that Alexander Stepanov wrote, aside from that paragraph, include the C++ STL.


The part about proofs and axioms is completely illogical. For the past several centuries, proofs have relied on axioms. Euclid introduced axioms (postulates) in order to provide proofs (and precise formulations) for informal statements. So the math part is really more an argument in favor of classes as a sort of axiomatization of computing.


The majority of the proofs Euclid produced existed, both as theorem and proof, before his time. Similarly, few recent mathematical advances have started with formalizations and axioms. Calculus wasn't put on a rigorous basis until Lebesgue's time, hundreds of years after Newton and Leibniz. Category theory needed quite some work before it was formalizable. Differential geometry wasn't that rigorous until the twenties or so. Complex analysis was developed using many, many unnecessary assumptions because the formalities weren't there. I'm not picking and choosing math fields here; these are just the ones I'm familiar with.

It turns out, axiomatization and theory-building are two separate jobs rarely performed by the same mathematician. Usually, the theory-building makes the mathematician famous, and the axiomatization does not, but this is in many ways a tragedy. In any case, to the outsider of mathematics, it seems that axioms come first. This is false -- axioms come much later, because the work of axiomatizing isn't worth it before the theory proves itself useful.

Axioms come first in a formal way, in the same way classes come first in a formal way. But the interesting thing isn't the class -- it's the instance. And in the same way, the axioms aren't interesting -- it's the resulting theories. I found the analogy very insightful.


Lakatos's [Proofs and Refutations](http://www.worldcat.org/oclc/258084194) is good background on the mathematical ideas that Stepanov is alluding to. The short version is that you don't start with axioms, because they are generally uninteresting. You start with interesting theorems and if necessary revise the set of axioms needed to prove those theorems.


I agree with the statement about OOP, but fail to understand what one can prove when one has no axioms.


I notice no mentions of Ruby yet, and I wonder if this discussion is pretty much impossible to have in the Ruby world, given Ruby is solely an object-oriented language, even if you try to use it procedurally. Not writing classes would be against so many Ruby conventions and accepted style that it's a non-topic?

Or is it that classes in Python specifically aren't that much of a win over the language's implementation of other paradigms? I sure haven't run into any problems using classes or objects in Ruby, even when sometimes the use feels a little contrived to begin with, but.. I also have no choice :-) (maybe Rubyists have a Stockholm Syndrome with OO ;-))


I think part of the reason this doesn't come up in Ruby is that Ruby doesn't have Python's powerful namespacing system.

In Python, it's reasonable to have a package with just functions in it, whereas in Ruby, writing a top-level method means polluting every object in the system. You can write modules that have singleton methods on them, but you still don't have anything as flexible as Python's "from pkg import foo, bar"—the caller needs to either write "ModuleName.my_function" at every call site, or use "include" and end up with the module's methods as part of the consuming class's interface.
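
For illustration (module and function names hypothetical), here is the Python side of that contrast - a module of plain functions, imported selectively:

    # geometry.py - a module of plain functions, no class needed
    def area(w, h):
        return w * h

    def perimeter(w, h):
        return 2 * (w + h)

    # caller.py - import exactly the names you need; call sites stay short
    from geometry import area, perimeter
    print(area(3, 4))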


Implicit self and late binding in Ruby makes it hard to implement a proper namespacing system without breaking other semantics :(

After working with Perl (which has a similar namespacing), I must say it's something I really miss when I come back to Ruby.


I think the appropriate translation would be:

Python - take your __init__+1 method class and just make it a function

Ruby - take your initialize+1 method class and just make it a static method (and maybe make that class a module instead)
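
A quick sketch of the Python half of that translation (names made up):

    # before: __init__ plus one method
    class Greeter(object):
        def __init__(self, name):
            self.name = name

        def greet(self):
            return 'Hello, %s' % self.name

    # after: just a function
    def greet(name):
        return 'Hello, %s' % name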


Everything in Ruby that is an object, is an object in Python as well.

It's more that Python programmers prefer not to proliferate excess complexity or layering. They hate ravioli code.


One concern I have with Jack Diederich's video is that he never addressed one benefit of classes: better testability. In his Muffin example, he took out the instance variable that stored the API URL and created a constant out of it. If I had one URL for my staging environment and another for my production environment, I'd like to set that programmatically instead of going back into the code, modifying the constant value, running the test, and then changing the value back to the original. That's just a bug waiting to happen.

This isn't to say his underlying message is wrong (I 100% agree with Jack's message: too many classes create unnecessary complexity in your library/app), but I don't want people destroying their classes without fully understanding the consequences.


Just use a settings file. I use settings overrides by putting local_settings.py in my .gitignore and then adding this to the bottom of my settings.py: https://github.com/j2labs/microarmy/blob/master/settings.py#...


Why are you masking errors from that import?


I assume it is so that the default values are used if no local_settings.py file exists, instead of aborting the program with an uncaught ImportError.


If so, then for the benefit of any Python newbies here: "except ImportError" is the idiomatic way to handle this situation; it clearly communicates intent and prevents masking legitimate errors. One fairly common pattern is to try to import one module, and if it's not installed, to go for a backup. For example, if I need to parse JSON, the simplejson library is significantly faster than the (completely API-compatible) standard library "json" module:

    try:
        import simplejson as json
    except ImportError:
        import json
I'm pretty sure I've seen those exact four lines of code in the internals of Tornado and Flask, among others.


explicit is better than implicit

He should explicitly catch the ImportError


You can create a variable, set that, and pass that variable as the first parameter to greeting. I don't see why having a class makes testing any easier.


I agree that in Jack's specific examples you don't need the constructor: the constructor trivially saves the parameters into instance variables. However, one thing that hasn't been mentioned is when the setup of the class is non-trivial, which most likely requires breaking it down into helper methods specific to that class. Exposing those helper methods outside of the class can add just as much unnecessary complexity to the code base (i.e. is this a helper for another method, or is it a standalone method? How does it fit into what I'm trying to achieve?). Yes, this could be mitigated with good documentation, but classes exist for this specific reason.


Huh? Modularity has nothing to do with classes.

In plain C, your main "method" can be a simple external function, and the "helpers" can be static functions. Python offers similar functionality.

Making a class is a step backward because the internal helpers have to be exposed to users, even if they're not intended to be used.
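
For example (module and names hypothetical), a leading underscore is Python's convention for module-internal helpers, much like static functions in C:

    # mymodule.py
    def _validate(record):
        # leading underscore: internal by convention, skipped by "import *"
        return 'id' in record

    def process(record):
        if not _validate(record):
            raise ValueError('bad record')
        return record['id']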


Exposed how? Methods can be private.


Decoupling. For example if your code requires access to an outside resource (database, message queue, email service) you can mock said resource in order to test the code in isolation.


In Python you can mock pretty much anything you want. You can mock module-level variables by simply setting them to the value you want (sys.modules['mymodule'].MY_GLOBAL_SWITCH = True). In the same way you can even mock standard library functions and built-ins (which live in their own __builtin__ module). With such capabilities you can test pretty much any code. (There are some limitations, of course - you can't change types that are defined in C code.)
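
A sketch of that kind of monkey-patching in a test (module and names hypothetical):

    import mymodule                        # hypothetical module under test

    saved = mymodule.MY_GLOBAL_SWITCH
    mymodule.MY_GLOBAL_SWITCH = True       # patch the module-level variable
    try:
        result = mymodule.do_work()        # code under test sees the patched value
    finally:
        mymodule.MY_GLOBAL_SWITCH = saved  # always restore the original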


Hey! Ho! Stop that. Nobody uses the word "Decoupling", I just learned that in the video. You must be pulling a fast one on me.


Let me clarify what I think you're saying. It's not about testing class X, it's about testing code that uses class X. As long as X is a class, then we can create a mock version and pass around instances of it. If our "X functionality" is stored somewhere else (ie a bare function), then that gets harder.


Down that road lies the hell of dependency injection frameworks. A simpler approach is to use something like detours[1] to mock bare functions directly.

[1] https://research.microsoft.com/en-us/projects/detours/


Google Guice is not hell.


Why not just pass the url into each function where it needs to be used?


Uhh.. that's ugly redundant code. Classes make code neater and easier to read..


Not really redundant when you can curry the function when needed. This is how libraries in functional languages are written. Classes make code harder to reason about given that you have hidden state everywhere.
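
For instance, with functools.partial (URLs and function names hypothetical):

    import functools

    def fetch(base_url, path):
        return '%s/%s' % (base_url, path)   # stand-in for a real HTTP call

    # bind the environment once instead of hiding it in an object
    fetch_staging = functools.partial(fetch, 'https://staging.example.com')
    fetch_prod = functools.partial(fetch, 'https://api.example.com')

    fetch_staging('muffins')   # 'https://staging.example.com/muffins'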


Yeah but the debate topic is "don't use classes", not "use Haskell" or "write in a unary lambda style"


Do you think a closure isn't state?


He does, but he feels smarter if he emulates something everyone already does in a more obscure manner. Yesterday, I derived an n-algebra for feeding my cat. See how smart I am!?

(The reality is simple. Classes are Python's state-capture abstraction. Using a bunch of lambdas to do the same thing means you are writing Haskell in Python, which is a dumb thing to do, because while the computer knows what you want, other programmers don't. If you want to use Haskell, use Haskell.)


Ouch. No need to be condescending, Jon. When I write Python, I use classes and magic methods and all that jazz. Programming in a functional style in Python is often verbose and brittle due to the duck-typing. I was under the impression we were talking about programming in general.


Ha, precisely.


I don't think the point was that this wasn't state, but that it's not hidden state if you explicitly pass needed data as parameters.


Disagree with "Decoupling" never used or "if used, they are pulling a fast one on you". It not only will help make his point of decreasing line-count, but more importantly, it is one of the main principles in achieving DRY-ness (both code-level and more importantly, architectural level DRY-ness). I'll even go further and say that if you're not thinking about it all the time, you're most likely doing it wrong.


I think he didn't mean that it's never used, just that it's never actually said in the real world.

My experience confirms this.


Yup, my bad, he actually said "people never say". But the way he presented it, it seemed like he's opposed to it. I still think it should be mentioned more in the industry so more developers are aware of it and think about it more often than not.

EDIT: Can't reply to Tommabeeng's comment (probably too deep). Yup, and I agree with the presenter's main point - that classes are not necessary in many situations, even for decoupling.


He does say about coupling & cohesion that "if used, they are pulling a fast one on you".

That's wrong. I like the core of his talk, but the plague of large complicated systems is exactly coupling and incoherence, and classes are NOT necessarily the way to achieve decoupling and cohesion.


As others have implied already, he could just as easily have been referring to the words.

I think his intention was to explain that these are the same words (mostly from academia and taken up with reckless abandon via language architectures like Java) that may be thrown around by some to create the code mess (anti-pythonic) in the first place.


Yeah I disagreed with that as well. Though I think in classrooms it's "Decoupling," but in the business world they usually say "Model View Controller."


I dislike classes but I like bundling data along with methods which operate on that data. I often find that JavaScript's classless objects are a great solution. It's just so natural to be able to create an object with object literal syntax without first creating an ad hoc class and then instantiating it:

  var o = {
      name: 'David',
      greet: function () {
          console.log('Hi, I am ' + this.name);
      }
  };


Wouldn't this be better?

    function greet(name) {
        console.log('Hi, I am ' + name);
    }

    var o = function() { greet('David'); };


> bundling data along with methods which operate on that data

What can you possibly believe a class is other than exactly that?


I realize that is one thing that classes offer. My point is that you can achieve exactly the same thing with objects rather than classes.

Very often when writing Python (or whatever) you'll create a class which is only ever instantiated once and the instantiation is done simply for the benefit of calling one method.

In such cases I find classes to be overkill. What I really want is the object. I don't need to factor the object's behavior out into a class.

Many of my sympathies are expressed much more thoroughly and eloquently in the many papers you can find online by Walter Smith outlining the motivations behind NewtonScript.


Polymorphism and type hierarchies.


It may interest people to know that Alan Kay has said that they got granularity wrong with classes. Functions are a good granularity, and so are systems, but classes don't work. To get true encapsulation as he originally envisioned it, you need to think in terms of interoperating systems. The example he gave of getting it right is the internet (but not the web). This was all from Kay's talk last year in Germany.


>> true encapsulation as he originally envisioned it

I suppose that "true" objects are Erlang processes. (I'm only partly joking)

OOP went wrong when it became structs with methods.


Do you have a source (video, transcript) to that talk of Alan Kay in Germany?


I believe he is referring to this talk: http://tele-task.de/archive/video/flash/14029/


Yes thanks - and if anyone watches it, be sure also to check out the stuff on Bob Barton. I learned about a magnificent part of computing history thanks to that talk.

Edit: here you go:

http://news.ycombinator.com/item?id=2855500

http://news.ycombinator.com/item?id=2928672

http://news.ycombinator.com/item?id=2856567

http://news.ycombinator.com/item?id=2855508

The first two are teaser comments. The second two are the important pieces. Also the paper Kay praises at the start of the talk is short and very worth reading.


I think he missed one of my favorite uses of classes: a collection of implementations of defined methods, especially if you can't pass around function pointers. For example, you're working with a Reader and the Reader is something that needs to be able to read() and flush() and open() and close(). You use that Reader everywhere with those abilities. You don't care how they work, you just need for it to be able to do those four things. If you're expecting different Readers, it's useful to have a Reader interface and then classes that implement it.

Unfortunately, this is the only way to perform this activity in Java. Also, I adore Go's style of pseudo-OO programming.

.. or, from reading the HN threads, is that decoupling?

If so, I'm using decoupling right now!


I think he addressed that (or a very similar circumstance) while talking about the stdlib heap, and again in response to the question on "bag of related functions." I don't think he'd take issue with your position.

Alternatively, if you are working with things that you expect to follow a certain interface, Python does not really require a formal definition. Duck typing should suffice.
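
A minimal sketch of what that looks like - any object with the right methods works, with no interface declared anywhere:

    def copy_all(reader, writer, chunk_size=4096):
        # duck typing: reader needs read() and close(), writer needs write();
        # no formal Reader interface is required
        try:
            while True:
                chunk = reader.read(chunk_size)
                if not chunk:
                    break
                writer.write(chunk)
        finally:
            reader.close()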


Over-engineered code is frustrating to read and work with. If your project doesn't require classes then it's fine to not create a bunch of complexity with OO style and design patterns. But if you just dismiss the techniques because they're difficult or you don't want to spend time learning, then you're likely to re-invent the wheel over and over.


That's not what the video was about at all. The speaker never suggested an approach that would lead to reinventing the wheel over and over ... quite the contrary, in fact.


Well, his message I think is easily translated into "classes suck - never use them" by those who may choose to read into it that way. He also dismisses a few rather basic design patterns as basically jokes (literally, with laughs from the audience, as if these ideas are idiotic rather than foundations of software design) without making the distinction that these concepts are actually really important.

It's easy for people who are very smart and experienced to dismiss the rules - that applies to all things, not just programming. But if you never learn the basics then you'll stumble over silly things that have been solved by much smarter people.


Standard design patterns are important ... if you program in Java and similar languages. But many of these are completely unnecessary when using a dynamic language like Python - and many others.


To me it's simply a way to organize code; there's nothing inherently good or bad about classes. You can use or abuse them like anything else. It never hurts to know the basic GoF patterns and when to use them to simplify your code or solve a common problem. You can use them with or without classes. Most JavaScript libraries that I look at today use closures, prototypes and anonymous objects that might as well be classes. Patterns seem to be making a huge appearance in client-side code in the last year. I'll admit that I don't know anything about Python, though.


I'm not much of a python programmer, but in Java I use classes all the time. The main reason is that small classes make writing unit tests easy. I can use a mocking library like http://www.easymock.org/ to mock out whatever dependencies the code I'm trying to test depends on.

I agree with some of the presenter's philosophical ideas, mainly that adding complexity into the codebase before it's needed is almost always a terrible idea, but saying that classes lead to complex code is simply not true, at least in my experience.


Of course you use classes all the time in Java: it forces you to do so by design. I have significantly more experience with Python than with Java and I find that writing unit tests on functions is as easy if not easier than writing tests for (small) classes. And Diederich's point about small classes being a code smell (my words - not his) is hard to refute.


What surprises me is that no-one mentioned inheritance when speaking about classes. I am not a fan of classes and I prefer the functional approach, whenever possible. In my experience, I found that classes make sense as part of a hierarchy, that is, when subclassed to override/specialize methods or to describe interfaces, like type-classes in Haskell.


As an 'end-user' of Python, I rarely find myself writing my own classes, but I am really pleased that the authors of many of the libraries that I use decided to write classes.

I think the distinction between 'end-users' and people who make libraries is an important one and one that Jack seemed to overlook.


Look, there is nothing wrong with classes and OO in general. What is wrong is using OO when it is unnecessary and only adds layers of complexity.


What is wrong is using OO when it is unnecessary

That's a platitude which everybody can and does say about everything.

But I think one can make a case that there is something wrong with OO in general. The original arguments for OO were: it's better for managing complexity, and it creates programs that are easier to change. Both of these turned out not to be true in general. So we need to ask, when is OO a win? But for simpler problems it doesn't matter, because anything would work. So we need to ask: what are the hard problems that OO makes significantly easier? I don't think anyone has answered that.

I suspect it's that OO is good when the problem involves some well-defined system that exists objectively outside the program - for example, a physical system. One can look at that outside thing and ask, "what are its components?" and represent each with a class. The objective reality takes care of the hardest part of OO, which is knowing what the classes should be. (Whereas most of the time that just takes our first hard problem - what should the system do? - and makes it even harder.) As you make mistakes and have to change your classes, you can interrogate the outside system to find out what the mistakes are, and the new classes are likely to be refinements of the old ones rather than wholly incompatible.

This answer boils down to saying that OO's sweet spot is right where it originated: simulation. But that's something of a niche, not the general-purpose complexity-tackling paradigm it was sold as. (There's an interview on Youtube of Steve Jobs in 1995 or so saying that OO means programmers can build software out of pre-existing components and that this makes for at least one order of magnitude more productivity - that "at least" being a marvelous Jobsian touch.)

The reason OO endures as a general-purpose paradigm is inertia. Several generations of programmers have been taught it as the way to program -- which it is not, except that thinking makes it so. How did it get to become so standard? Story of the software industry to date: software is hard, so we come up with a theory of how we would like it to work, do that, and filter out the conflicting evidence.


> The original arguments for OO were: it's better for managing complexity, and it creates programs that are easier to change. Both of these turned out not to be true in general.

Wrong. Even the worst Enterprisey mess of Java classes and interfaces that you can find today is probably better than most of the spaghetti, global-state-ridden wild west that existed in the golden days of "procedural" programming.

If you consider that software is composed of Code and Data, then OOP was the first programming model that offered a solid, practical and efficient approach to the organization of data, code and the relationship between the two. That resulted in programs that, given their size and amount of features, were generally easier to understand and change.

That doesn't mean OOP was perfect, or that it couldn't be misused; it was never a silver bullet. With the latest generation of software developers trained from the ground up with at least some idea that code and data need to be organized and structured properly, it's time to leave many of the practices and patterns of "pure" OOP and evolve into something better. In particular, functional programming has finally become practical in the mainstream, with most languages offering efficient methods for developing with functional patterns.


You believe this, but you've given no reason to believe it other than the dogma you favor. The old-fashioned procedural systems I've seen with global state and the like were actually easier to understand than the convoluted object systems with which they were often replaced. Your comment is exactly the kind of thing that people who are enthralled with a paradigm say. But the "worst Enterprisey mess of Java" that you blithely invoke is... really bad, actually, as bad as anything out there. You're assuming that paradigm shifts constitute progress. I offer an alternate explanation for why paradigms may shift: because the new generation wants to feel smarter than the old one.


OO enterprisey mess is strictly better than global state ridden spaghetti code. The hard part with enterprisey code is that the code performing the action is hidden under layers of abstraction. The bootstrapping cost is much higher here because you have to put more abstractions in your head before you can understand a particular bit of functionality. This is a different sort of complexity than the global state spaghetti.

With the global state code you have to understand the entire codebase to be sure who exactly is modifying a particular bit of state. This is far more fragile because the "interaction space" of a bit of code is much greater. The dangerous part is that while you must understand the whole codebase, the code itself doesn't enforce this. You're free to cowboy-edit a particular function and feel smug that this was much easier than the enterprisey code. But you can't be sure you didn't introduce a subtle bug in doing so.

The enterprisey code is better because it forces you to understand exactly what you need to before you can modify the program. Plus the layered abstractions provide strong type safety to help against introducing bugs. Enterprisey code has its flaws, but I think it's a flaw of the organization of the code rather than the abstractions themselves. It should be clear how to get to the code that is actually performing the action. The language itself or the IDE should provide a mechanism for this. Barring that, you need strong conventions in your project to make implementations clear from the naming scheme.


This sounds like ideology to me. You can never be sure you didn't introduce a subtle bug, you can never force someone to understand what they need to, and interactions between objects can be just as hard to understand as any other interactions.

I agree that a project needs strong conventions, consistently practiced. See Eric Evans on ubiquitous language for the logical extension of that thinking. But this is as true of OO as anywhere else.


First off, apologies for seemingly badgering you in various threads in this post. I don't usually look at usernames when replying so it was completely by accident.

>interactions between objects can be just as hard to understand as any other interactions.

While this is true, there are strictly fewer possible interactions compared to the same functionality written only using primitives. To put it simply, one must understand all code that has a bit of data in its scope to fully understand how that bit of data changes. The smaller the scopes your program operates in, the smaller the "interaction space", and the easier it is to reason about. Class hierarchies do add complexity, but it's usually well contained. It adds to the startup cost of comprehending a codebase, which is why people tend to hate it.


That's hilarious. Well, you're welcome to "badger" (i.e., discuss) anytime. It's I who should apologize for producing every other damn comment in this thread.

In my experience, classes don't give the kind of scope protection you're talking about. They pretend to, but then you end up having to understand them anyway. True scope protection exists as (1) local variables inside functions, and (2) API calls between truly independent systems. (If the systems aren't truly independent, but just pretending to be, then you need to understand them anyway.)


You're right that the scope protection I'm talking about isn't as clear cut when it comes to classes. Design by contract is an attempt to address this. Client code can only refer to another object through a well-defined interface. Thus your limited scope is enforced by the type system itself. Furthermore, much of this discussion is about perception of complexity rather than actual complexity.

In python you can access any class's internals if you're persistent enough. However, the difficulty in doing so, and the convention that says it's bad practice, gives the perception of a separation, so one does not need to be concerned about its implementation. It lowers the cognitive burden in using a piece of functionality.

Haven't you noticed this yourself? A simpler interface is much easier to use than a more complicated one. If you're familiar with python, perhaps you've used the library called requests, a Pythonic API for HTTP requests. Compare this to the core library API using urllib2. The mental load to perform the same action is about an order of magnitude smaller with requests than with urllib2, and it's because there are far more moving parts with the latter.
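
Roughly (URL hypothetical; urllib2 being the Python 2 module):

    # urllib2: build a Request, set headers, open it, read it
    import urllib2
    req = urllib2.Request('http://example.com/api')
    req.add_header('Accept', 'application/json')
    body = urllib2.urlopen(req).read()

    # requests: one call
    import requests
    body = requests.get('http://example.com/api',
                        headers={'Accept': 'application/json'}).text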

My contention is that if bits of data and code are exposed to you, it is essentially a part of the API, even if you never actually use it. You still must comprehend it on some level to ensure you're using data it can access correctly, aka interaction space (I feel like I'm selling a book here).


When I compare crappy codebases in procedural C vs OO java, all I can say is that it took fewer lines of code to bring a project to the region of paralysis in C. Does that mean the C codebases accomplished less? I don't think so.

Part of the allure of Java: it gives the illusion that more progress was made before you entered that region.

We've all heard that you can shoot yourself in the foot with any language, but I think the implications haven't fully sunk in: There are no bad paradigms, only bad codebases.


>The original arguments for OO were: it's better for managing complexity, and it creates programs that are easier to change. Both of these turned out not to be true in general.

How so? I find OOP code to be much easier to understand and change.


I've tried to explain that in my other comments. But we may not be able to say much more to each other than that our experiences differ. How much of our experience is real and how much is owed to our assumptions (like "OO is good" or "not") is impossible to disentangle in a general discussion. You know how this stuff really gets hammered out? When people are working together on the same system. Then we can point to specific examples that we both understand, and try to come to an arrangement we both like. Barring that, it's pretty much YMMV.


The thing that is wrong with OO in general is that it is not a silver bullet. I disagree that OO is useful only for simulation - I've done virtually no simulation work and I've found OO useful in many contexts. Maybe if I used functional style or hypertyped dynamic agile mumbojumbo style or whatever is the fashion du jour I would save some time, but OO worked fine for me and allowed me to do what I needed efficiently. Would I use it everywhere? Definitely no. Does it have a prominent place in my tool belt? Definitely yes. I reject the false dichotomy of "OO is Da Thing, should be used for everything" and "OO is useless except in a very narrow niche". My experience is that OO is very useful for reducing the complexity of projects of considerable size with high reuse potential and defined, though not completely static, interfaces.


Of course you find OO useful in many contexts, since you know it and like it. But the argument is not that OO is useless. The argument is that it provides no compelling advantage and that it comes with costs that people don't count because -- owing to familiarity -- they don't see them. It would be interesting to know how much of the "considerable size" of the systems you refer to is owed to the paradigm to begin with. The cure is often the disease.


No compelling advantage over what? Over random bag of spaghetti code? You bet it does. Over some other methodology? Bring up the example and we'd see. I am sure you can replace OO with something else - so what? I explicitly said OO is not the panacea - you like something else, you do it. So what's your proposal?

The considerable size of the system is because of the considerable complexity of the task at hand. I don't see how you can turn a 1000-page book of requirements into 2 screens of code. And if you need a system that can be customized to satisfy different books with minimal change, you need more complex code.


Another reason it endures is the large collection of mature and easy to use libraries available. Of course this is something of a chicken and egg problem, but a very present one.


Yes, but that's just the same inertia. If programmers no longer believed in those libraries, they'd soon write new ones. Otherwise our systems would all still be integrating with Fortran and COBOL. Some still do, of course - and the extent to which they still do is probably the true measure of the library argument.


I'm not so convinced. There was a time when most people had grown up with procedural code. OO offered enough promise that people moved over to it and developed things like Cocoa or Java.

If people aren't moving on, it's not simply because of 'intertia'. It's because the alternatives aren't offering enough benefit yet.


I think that's nearly entirely untrue. Emotional attachment to what one already knows is the dominant factor - all the more dominant because, in our zeal to appear rational, we deny it.

What will change the paradigm is the desire of a future generation to make its mark by throwing out how the previous generation did things. One can see this trying to happen with FP, though I doubt FP will reach mass appeal.

When you do anything as intensely as writing software demands, your identity gets involved. When identity is involved, emotions are strong. The image of being a rational engineer is mostly a veneer on top of this process.

Edit: 'intertia' - your making fun of an obvious typo rather illustrates the point.


You haven't explained how the inertia was overcome before. Nor have you explained what you think would replace OO if not for the inertia.

So this is basically a contentless argument that says people are stuck on OO because they are irrational.

As for my 'making fun' of a typo - I don't know what you're projecting onto me. I simply mistyped the word inertia. I put it in quotes because I don't think it's a cause of anything.


My apologies! I had typed intertia at one point and thought your quotation marks were quoting that. It seems our fingers work similarly even if our minds do not :)

As for OO, I think our disagreement has probably reached a fixed point.

Edit: nah, I can't resist one more. I believe I explained how the inertia was overcome before: people who were not identified with the dominant theory at the time (structured programming) came up with a new one (object-orientation) and convinced themselves and others that it would solve their problems. Why did they do that? Because the old theory sucked and they wanted to do better. Their error was in identifying with the new theory and failing to see that it also sucks. The only reason they could see that the old theory sucked was that it was someone else's theory.

As for "what would replace OO if not for inertia", I know what my replacement is: nothing. I try to just let the problem tell me what to do and change what doesn't work. Turns out you can do that quite easily in an old-fashioned style. But if you mean what paradigm will replace OO (keeping in mind that we ought to give up our addiction to paradigms), who knows? FP is a candidate. One thing we can say for sure is that something new and shiny-looking will come along, until we eventually figure out that the software problem doesn't exist on that level and that all of these paradigms are more or less cults.

Perhaps I should add that I don't claim to know all these things. I'm just extrapolating from experience and observation of others.


PG covered the inertia problem pretty well with The Blub Paradox [1]. OO is now the blub language somewhere in the language power continuum, and those who don't know any other paradigm equally well are more likely to become and stay invested in it, hence the inertia.

1. http://www.paulgraham.com/avg.html


I've been as influenced by that essay as anybody here, but I'm not sure I believe in a power continuum anymore. How powerful a language is depends on who is using it. You can't abstract that away, but if you include it the feedback loops make your head explode.

The trouble I have with what you're saying is that it suggests a better paradigm (e.g. FP), higher-level and more powerful, will improve upon and succeed OO. But the greatest breakthroughs in my own programming practice have come from not thinking paradigmatically - to be precise, from seeing what I was assuming and then not assuming it.

Edit: My own experience has been this weird thing of moving back down the power continuum into an old-fashioned imperative style of programming, but still very much in Lisp-land. For me this has been a huge breakthrough. Yet my code isn't FP and it certainly isn't OO, so I guess it must be procedural. How much of this is dependent on the language flexibility that comes with Lisp? vs. just that Lisp happens to be what I like? Hard to say, but I suspect it's not irrelevant. If you can craft the language to fit your problem, you can throw out an awful lot. Like, it's shocking how much you can throw out.


Maybe the reason programmers moved from COBOL to OO is because they found OO better? Staying with COBOL was an option?


It would have been if pre-existing software (libraries) were the dominant factor.


There is that other sweet spot: UI programming. I don't think I have ever seen a better setup for UI code.


I just read through the ClojureScript One source and read through their wiki and tutorial. The mini-framework they've developed borrows ideas from (at least as I gather from their docs) functional reactive programming and dataflow programming. You register handlers who respond to events, and it's mostly just functions calling each other, although obviously there's a huge pile of Objects with a capital O in the browser's DOM. But I think it's a pretty compelling alternative, and certainly interesting, although I'm not sure if it's actually better or not.


State machines! But that is another discussion. Let's have it some time.


Defining abstract datatypes and legal operations on those datatypes is useful whether you do it with objects or Haskell type classes or C++ generics. It's easy to go overboard with objects, but explicit interface definitions are a lot easier to reason about than big grab bags of functions.


Wish someone had asked his opinion on "singletons". I was a fan of them in the past, and now I totally despise them... But then it came to me that I don't despise the semantics of the singleton - I just write that as a bunch of functions, rather than a class with methods that is "singleton-ized".

There were also some revelations - like printer().getSingleton() (or instead of printer, put keyboard, mouse, memory, etc.) - at some point some of these "singletons" might become "multi-tons" - for example gamepad().getSingleton() is already trouble...


The best "singleton" for me is python module. Import is the constructor of singleton. Python itself controls single initialization of object.


Tao of Leo #25: http://justinmchase.com/2012/02/22/tao-of-leo-25/

"The singleton pattern combines all the perf benefits of a global variable with all of the code maintenance benefits of a global variable."


Some arguments from this guy are:

1) Do not use a class if it doesn't have state

Agreed

2) Keep it simple and do not think about the future, fix it later if needed

He probably never wrote API code in libraries needed by other people. Once you choose something, you are bound to it. In this case classes and interfaces are very helpful. The API class example at the beginning (no state) is a good illustration: in the future the API might be extended with more functionality, and the last thing you want is to have third parties rewrite their code because you changed the API. So a class is good for providing a Facade (yes, the Facade Pattern - see the sketch at the end of this comment).

3) Package names are not for taxonomy but for preventing name clashes

True. However, in Java the editor is so powerful that you never have to look up package names; this happens automatically (the IDE parses the code on the fly). Java package names also use reverse-domain patterns like com.google.android.hardware.vibrator, which can only be used by somebody who owns the google.com domain. This prevents name clashes ALL the time. In Python you sometimes have to rename a whole project. And the taxonomy gives you an indication of what the code actually does.

4) Don't overuse exceptions, and keep names short

Agreed. However, I am not sure what the best granularity is. In a public library API, I would be very careful about this.

Otherwise I agree with him.
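
To illustrate point 2, a minimal sketch of such a facade (all names hypothetical):

    class MailAPI(object):
        """The stable entry point third parties code against."""
        def send(self, to, subject, body):
            # internals are free to change without breaking callers
            return _transport_v2(to, subject, body)

    def _transport_v2(to, subject, body):
        return ('sent', to, subject)   # stand-in for the real delivery code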


This discussion would be much more constructive if we have an actual problem to code a solution for... then we could see the merits of each approach...


Stop writing class hierarchies.

Classes are fine. Over-engineering classes, like over-engineering anything, is bad, m'kay?


For a while I did not really know how to take this whole thing. Upon reflection, I realise I probably do lean away from classes in general... it was a strange realisation to come to, as I come from a strong Java/OOP background.

The benefits of classes are still related to data abstraction. But it is pointless if you are still accessing the data directly anyway (getters and setters be damned). So, I would legitimately prescribe a class only when you are building a data structure.

Nowadays, my classes are either compositions of primitive data (when I don't have tuples (Python), structs, or dynamic objects (JavaScript)), or function groups (like a business logic layer of some sort, which could just as easily have been bags of functions in modules).

I'm going to be teaching my students Classes in a couple of weeks... food for thought before then.


I don't understand. I started to program computers when I was 12. I used BASIC. And now they have conferences because they don't know how to write data structures and subroutines? No wonder software gets bloated...


And just last week I read a post where some guys were amazed that one could write loops without for (;;), while () or do { } while.

Facepalm.


One thing that I still wonder about is, in cases where I need some object to hold some state, whether it is better to use a class or a closure. Both approaches are similar in many ways but still subtly different. Is there a good rule of thumb, kind of like the one for classes vs tagged unions? (Classes let you add new classes but make it painful to add a new method; tagged unions let you add new methods but make it hard to add a new type.)
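
For concreteness, here's the same bit of state both ways (a minimal sketch):

    # as a class: state is visible, and more methods can be added later
    class Counter(object):
        def __init__(self):
            self.n = 0
        def bump(self):
            self.n += 1
            return self.n

    # as a closure: state is truly private, but the "interface" is one call
    def make_counter():
        state = {'n': 0}          # mutable cell; Python 3 could use nonlocal
        def bump():
            state['n'] += 1
            return state['n']
        return bump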


I think most of the comments here are taking it too far. Not using classes in some cases is not equivalent to stopping using classes altogether. Classes are yet another tool in the programmer's toolbox. Sometimes the tool makes sense, sometimes it doesn't. The trick is to know when. But jumping to absolutes is surely not the right way to go about it.


I agree with most of these issues, but it sounds like his editor doesn't have auto-import and jump-to-definition like Eclipse.

e.g. the complaint about nested packages (not that I agree this class is necessary): instead of typing MuffinMail.MuffinHash.MuffinHash, type MuffinHash and have your editor add: from MuffinMail.MuffinHash import MuffinHash.


Tools that reduce the burden of tautology are workarounds. You can rely on tools to make the worst design/language less painful, but that's not an argument against avoiding tautology in the first place.


"Stop writing classes... "

No

Ooooh, you wanted to make a case for not using classes in certain situations? Then enough with the linkbait title.


The best use of Python Classes is as a stand-in for Javascript-style object syntax, ala:

  class foo:
    pass

  f = foo()
    f.bar = "yes"
which is easier to type and more readable than:

  f = {}
    f['bar'] = "yes"
(credit where due: learned this trick from TB)


You can simplify it a bit by building the container class on the fly, if desired.

  >>> x = type('Foo', (object,), {})
  >>> x.bar = 1
  >>> x.bar
  1


Who or what is TB?


OOP obfuscates our programs by blurring the boundary between data and processing. I couldn't agree more, having made that mistake quite a few times myself.


I wish they had told me that when I took online lessons at class-class.org



