From C# on Mono, to Clojure on the JVM (agilehead.com)
43 points by mariorz on June 13, 2009 | hide | past | favorite | 32 comments


His opinions about the merits of python vs lisp vs clojure seem based more on rumour and the current lisp/clojure kool-aid than on hard-earned hands-on experience. I love Lisp when coding by myself, and Clojure sounds really interesting, but Python is currently rock solid and a better fit for real-world team-based development. And it's fine for functional programming, contrary to the strange claim in the article that Guido is well-known to be 'against' functional programming!?


Is there a significant reason, other than preference, that Python is a "better fit for real-world team based development" than Clojure or any Lisp dialect?

Python supports functions as objects that can be passed around, but that does not make it an FP language. Functional programming is about constraining mutability and declaring what you want, not how. Python does not lend itself to that style of programming.

Guido has spoken out against map/reduce/filter, lambda, and TCO in Python, so many people interpret that as him disliking FP. I think it has more to do with Guido's aesthetic and Python's design principles than anything else.


I'll give one reason why Python may be better for team development than clojure/lisp: Lisp's powerful macros. They do make it harder to reason about code you didn't write. In Python, you can look at the inside of a function and have some idea what it is doing.

"It's doing kwyjibo(i,v) to all the i in zqmfgb. Now, let's see what v and zqmfgb are."

In clojure, that isn't always the case. An example: recently, I was trying to write an OpenGL app in Clojure. I found the following code on the net (pretend it was written by a team member):

    (doto obj
        (.glBegin ...)
        (.methods ...)
        (.glEnd))
I decided I wanted to factor out the (.glBegin ...) and (.glEnd). Aha, this is a job for macros!

    (gl-begin-end body) ; this should macro expand to:

    (do
        (.glBegin ...)
        body
        (.glEnd))
This would just fit inside the doto, and be exactly what we need. The problem is that doto significantly modifies its contents:

    (doto obj
        (.method x)
        (.method y))

    (clojure.core/let [G__1206 obj] 
        (.method G__1206 x)
        (.method G__1206 y) G__1206)
So I got this: (doto obj (gl-begin-end body))

    ;macroexpand once
    (clojure.core/let [G_1725 obj] 
        (gl-begin-end G_1725 body))

    ;macroexpand twice
    (clojure.core/let [G_1725 obj] 
        (do
             (.glBegin ...)
             G_1725
             body
             (.glEnd)))
Obviously this didn't work.

In this case, the problem is me: `(doto obj body)` is idiomatic clojure, and I should have known about it. On the other hand, `(kwyjibo obj obj2 body)` could be very hard to figure out and modify.

I should, however, raise the caveat that I'm not a good enough lisp programmer to know if this is really a problem in practice.


[Macros] make it harder to reason about code you didn't write.

No, actually, they don't. Given the usual high quality of your comments, I'm surprised you would repeat this canard.

you can look at the inside of a function and have some idea what it is doing.

This is just as true of a macro. In fact it's more true: you can expand a macro in place on your code and see what it is doing, a maneuver with no exact analog in functionland. The macroexpansions in your example make it immediately apparent what the problem was.
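
For example, a quick REPL check (a sketch using clojure.core/macroexpand-1; the gensym name will differ per run):

```clojure
;; Expanding a macro call in place shows the exact code it will run.
(macroexpand-1 '(doto obj (.method x)))
;; produces something like (the gensym varies):
;; (clojure.core/let [G__1 obj] (.method G__1 x) G__1)
```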

Macros are different from functions. That's why they're useful. I'd encourage you to stick with the learning process. The key difference with what you were expecting is that macros control the evaluation of their arguments.


Macros do more than just control evaluation of their arguments (although that is often their primary use). If you simply want to control evaluation, you can do that with controlled laziness and functions. Macros can rewrite your code.

To understand a line of code, you must first understand its enclosing sexps. In a macro-less language, that is not necessary. It is something extra you need to think about (and I agree that C-x C-m helps), and that makes it harder to reason about.

I never said they aren't useful (and I'd strongly disagree with someone who did). It doesn't change the fact that they can make understanding code harder.


Macros do more than just control evaluation of their arguments

Yes, but I was commenting on your example: it was what the macro was doing to control evaluation that caused it to depart from your function-centric intuition.

As for the general point - that macros "can make understanding code harder" - I haven't heard any argument that doesn't apply just as much to functions mutatis mutandis. A function adds complexity too. You have to understand its name. You have to know what each of its arguments is there for. To truly understand a function, you may have to look at its source code. Therefore functions can make understanding code harder!

Except, of course, that they make it easier, assuming they're used properly to organize a program, and people who have experience with them consider them indispensable. Same for macros. Most of the objections people bring up are exactly this sort of pseudo-argument that, if it were true, would in fact be a reason to avoid higher-level languages.

If you're really making a general claim to the contrary, please come up with more than a single trivial example that is confusing only at the beginning of the learning curve.


My specific argument is about locality, not abstraction.

A function call (in a lexically scoped language) without side effects can be completely understood by looking only at the function definition.

A function call with side effects now requires extra knowledge: you need to know the state of the world (or some subset) before it is called. Side effects make a program harder to reason about, and can pose a problem in a team.

Similarly, a function or macro call (ignore side effects) also requires extra knowledge: you need to know not only the definition of your function/macro, but also the definition of all macros enclosing the current one. This also makes it harder to reason about.

So my argument "favors" Haskell over Python and Python over Lisp. It also favors Haskell over Liskell, and Clojure over Elisp (due to lexical vs dynamic scoping).

Regardless, I'm not trying to advocate against lisp use. I'm just making a point that the non-locality of macros can make it harder to reason about, and that could be a problem in a team environment.


I have to correct something I said earlier:

it was what the macro was doing to control evaluation that caused it to depart from your function-centric intuition

I took a closer look at the example and now see that it had nothing to do with evaluation. Sorry about that. Clearly, the problem was that doto was transforming (gl-begin-end body) into (gl-begin-end obj body) because of an assumption that the forms passed to doto are always OO method calls.

Yuck! Now I understand your complaint about the enclosing form transforming the enclosed form in an unexpected way. What doto is doing there strikes me as just wack from a Lisp point of view. I can't think offhand of many (any) macros that modify the forms they enclose while still evaluating them. Obviously this is completely non-composable. No kidding it leads to unexpected behavior.

What's going on here doesn't have so much to do with macros in general, but rather an impedance mismatch in Clojure between the Lisp side and the Java side (and perhaps ultimately between FP and OO). It's a non-issue in CL - so much so that I completely missed the point until the third time I read your example. It makes me wonder how many other such mismatches Clojure must inevitably be afflicted with.

I'd still argue, though, that you're drawing a false conclusion about macros in general. The problem with (doto obj (gl-begin-end body)) is very much with the definition of DOTO. It has a weird side-effect, true, but one which is still local: it affects the form it encloses, not one which encloses it. The cognitive burden here is analogous to functionland: in order to understand f(g(x)) you have to understand both f and g. f might do something weird that violates the intention of the author of g, but understanding this is incumbent upon the author of f(g(x)), not the author of g. Of course the real culprit may well be the author of f for writing something prone to unwelcome surprises.


You say that

    (doto obj
        (.glBegin ...)
        (.methods ...)
        (.glEnd))
> I decided I wanted to factor out the (.glBegin ...) and (.glEnd). Aha, this is a job for macros!

    (gl-begin-end body) ; this should macro expand to:

    (do
        (.glBegin ...)
        body
        (.glEnd))
But why did you change `doto` to `do`?

Here's how you can do this:

    (defmacro gl-begin-end [obj & body]
      `(doto ~obj
          (.glBegin)
          ~@body
          (.glEnd)))
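
For what it's worth, a sketch of how that version behaves (assuming a JOGL-style gl object; the method name is illustrative):

```clojure
;; The body forms are spliced into the doto, which threads gl into each:
(macroexpand-1 '(gl-begin-end gl (.glVertex3f 0.0 1.0 0.0)))
;; expands to:
;; (clojure.core/doto gl (.glBegin) (.glVertex3f 0.0 1.0 0.0) (.glEnd))
```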


As I said, I know the programming mistake I made here (and that it reflects lack of knowledge of clojure on my part). But the problem I had with the doto macro could easily translate to a problem you have with my kwyjibo macro.

In python, I know what this code does, regardless of what happens at higher levels of indentation (ignoring some rarely used class-level hacking):

    f(a,b,c)
    g.h(i)
In lisp, I can't necessarily assume that

    (f a b c)
    (.h g i)
will actually call f on a, b, c and then call the h method of g on i. The ability to modify syntax programmatically means that syntax (i.e. the s-exp structure) may not mean what you think it means.
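
Even a built-in like the -> threading macro illustrates this (a tame, well-known example rather than anything pathological):

```clojure
;; (-> g (.h i)) does not call (.h i); the macro rewrites the form
;; by threading g in as the first argument:
(macroexpand-1 '(-> g (.h i)))
;; => (.h g i)
```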

I'm speculating that this may make collaboration more difficult.


Powerful macros and functions make it harder to reason about code.

In Lisp and Smalltalk, you can look at the inside of a function and have some idea what it is doing.

With Lisp you can use #'DISASSEMBLE, and with any Smalltalk worth its salt, you can simply click around and find the method that you want to look at.


I agree completely; the notion of the "inscrutable macro" is often brought up when arguing against a Lisp. Whatever one could say is dangerous about macros, I would argue, could be said equally of functions.

But I think something else is going on here: the Inscrutable Macro idea serves as a kind of "intuition pump" (of the negative kind) http://en.wikipedia.org/wiki/Intuition_pump to try and demonstrate macros' badness. If someone isn't very familiar with lisp, they'll offer code that's merely not idiomatic; when someone is familiar with lisp, they'll offer up something that's just downright pathological, and I'll be the first to admit that a pathological macro can be really pathological. At this point, equipped with their executable straw man, they claim "this macro is insane, and so macros are unmanageable."

If the measure for unmanageability is just that impossible-to-understand code can be written in some language then the IOCCC means C shouldn't be used.

In practice the Inscrutable Macro just isn't written.


Python is better than the lisps for team-based development, IMO, primarily because you can't write your own language in python. In a team you're normally working on long-lived code of medium-to-large projects (e.g. over 100k lines, over a period of 1-5 years). You need to be able to quickly grok how to maintain older code written by someone else. Python's mantra "there's only one way to do it" means that is more likely to be possible than in one of the lisps. However, I have to admit that that is conjecture on my part, not having worked on a large team with a lisp project.


Right, because programs with millions of lines of code written by many different people are so easy to "quickly grok". As long as they stick to lower-level constructs.

The problem with this fallacy is that it assumes that what is true of a single self-contained function is true of thousands of inter-depending functions heaped on top of each other.


We, for instance, structure all our code very carefully with clearly defined and documented interfaces and modularised functional areas. I doubt that any function is really interdependent with more than 20 other functions, never mind thousands. If someone needs to understand thousands of functions to maintain any piece of code, then that code has a serious problem and probably a low 'bus factor' (how many developers being hit by a bus it would take to derail your project). I suspect that the power of the lisps tempts developers into writing code that does effectively have many interdependent functions, though they may not fully realise it until it's too late.


I really don't see any difference there. You can write bad code in any language. If you have the organizational discipline to code the way you've described in Python, why wouldn't that restraint carry over into other languages? Why are macros different?

I find typical Python programming to be not particularly disciplined, so if you've overcome the problem in that language you should be fine in a generally more disciplined language like Clojure.


"thousands of inter-depending functions heaped on top of each other."

If you are in this situation, I would suggest a full rewrite with policies that prevent your team from building such a mess.


I'm not in that situation, virtually the whole software industry is in it. If you think "policies" can address that, we have a very different worldview.

The real point though (which remains unaddressed) was that the argument against higher-level abstractions is wrong because it assumes that what is true of a single function written at a certain level of abstraction will remain true if you try to build an entire complex system out of them. It won't. Therefore, as complexity grows, we need the ability to create new abstractions. There is something perverse about the argument that we should eschew powerful techniques for doing this because they'll make large systems hard to understand. It's like saying we should avoid water because dehydration is bad.


I only jokingly implied you could be in such a mess. I also can assure you it's not the _whole_ industry.

Some of us employ techniques that make it easier to maintain large chunks of code. I refrain from describing them as complex because complexity is something to be avoided like the plague.


Isn't it nice to be able to disagree by modding down instead of engaging in an active argument?


It's a legitimate way to disagree, as has been made clear here on many occasions. Besides, sometimes it's better not to engage in an active argument. For example, my contributions to this thread were borderline over-participation.


There should be no such thing as "over-participation", as long as you have something to say. As I implied, at least one group I participate in has managed to have a very large and long-lived codebase that has little to no sign of the cruft described at the head of the thread. Careful choice of languages, paranoid test-coverage evaluation, commit policies (like "you only merge into head if all the tests pass and no function is left untested"), extensive code reviews (one could go as far as doing pair-programming all the time) and relentless refactoring and feature removal can get you very far.

You can think of the current *BSD codebases as direct descendants of the original AT&T UNIX codebase that have survived by employing one or more of the techniques I mentioned in the previous paragraph.


Well.

The whole "people don't write applications in Lisp, they use Lisp to write a new language and then write applications in that" point has been brought up already, so I won't try to argue its pros or cons (thankfully somebody else can get flamed from both sides over it).

I will say that I think Python does something interesting, in that it removes some of the stupidest problems teams usually run into. Consider projects written in C or C-like languages: hugely disproportionate amounts of time are dedicated not to actual issues of how to design applications, but to things like where to put braces and how to use whitespace. Python sidesteps a lot of that petty bickering because the requirements of syntactically valid Python impose consistent solutions to those debates from the start.

Also, culturally, Python programming style tends to go quite a bit beyond that. For example, most large projects simply use PEP 8 as their style guide, doing away with an even larger class of stupid infighting.

As to FP and whether Python hates or supports it, unfortunately it is the case that if you put five FP proponents in a room you'll get at least twelve mutually-exclusive definitions of FP. So I think that tends to be a rather silly debate to have :)


I must have missed something about this platform transition.

First and foremost, where are his customers in all of this? Languages are tools to reach goals for humans. What goals did he not accomplish because of using Mono and C#? What specific business objectives were impaired? How would these goals be more easily achieved now?

Secondly, I find it odd that a position on patents would work its way into a tools choice without there being some kind of firm evidence of trouble. Did Microsoft's decision change the technical environment? Were there problems with building or maintaining code? Or did Microsoft just tick off the open source community and that made him decide to change? Perhaps I missed something, but it sounded like he was making platform decisions based on some kind of popularity contest or imagined threat. Like I said, I'm ignorant here. Perhaps Microsoft's move has real-world ramifications? If so, I didn't see them in the article.

Finally -- what about F# on mono? You've got all the functional programming you can shake a stick at. And you've got objects as well if you want them. I didn't see that come into consideration at any point, although it seemed relevant -- if his interest was functional languages.

I also found the comment about the .exe and .dll file extensions weird. So people think they're strange? So what? Once again, what does this practically mean to developers or maintainers? Did it cause real problems? Or does it just look bad?

Language and platform discussions always come down to defining criteria: you determine what criteria are important to you and make your decision around that. Sometimes it's speed or conciseness. Sometimes it's availability of programmers. Sometimes it's ability to grow and maintain large code bases.

Whatever the criteria, you have to establish them and then stick to them. I prefer "ability to provide value to customers as quickly as possible" over things like "ability to run in parallel" or "ability to easily find programmers", but to each his own. There are great arguments to be made for all kinds of criteria.

What I didn't see was which exact criteria were being used. It sounded more like somebody just chasing whatever was trendy at the time.


"How would these goals be more easily achieved now?"

I feel the guy's pain. Of all technologies we use at our company, the Java folks are the ones that take the longest to deliver a functioning product. I wouldn't be surprised if productivity under Mono/.NET was similarly problematic.

"Or did Microsoft just tick off the open source community and that made him decide to change?"

It's very unlikely Microsoft holds patents that threaten MySQL, PHP, Django, Rails or Clojure users. I would guess their patent-space overlaps quite nicely with Mono.

"F# on mono"

If Microsoft's empty threats scare you or your customers, F# is not an option.

"I also found the comment about the .exe and .dll file extensions weird."

It _is_ weird. Only Windows uses such an odd convention. I think VMS used something like it. OS/2 too, but it is every bit as dead as we joke BSD is (it is not - I use it - calm down, *BSD people).

"Whatever the criteria, you have to establish them and the stick to them", "Sounded more like somebody just chasing whatever was trendy at the time"

Criteria can change. Rails and Django may be "trendy", but that "trendiness" happened because those who use them can deliver applications very quickly, faster than, say, JSF or ASP.NET MVC - it's derived from real-world results. "Ability to easily find programmers" is nice, but if you can't train programmers in a language to make them functional in a week, you are probably training the wrong ones.


Thanks. Having actually used Mono on Linux, he shared some interesting observations. Clojure surely looks promising, but Python or Scala can be equally fine choices as of the moment.


If I need my programming language to run on top of another programming language's infrastructure in order to use its services, the designers did something wrong. Today larger applications are built from layers communicating with each other in all kinds of ways. Deep embedding is just one of them.

I also think that developers should do more than read books to evaluate a technology. Reading Lisp books is surely not sufficient - at least the author was now able to use a Lisp dialect and not only read about it. The author is also mostly guessing that 'Clojure has all the benefits of LISP' without having used other Lisps.


I've used other Lisps to write production code, and I'll say that Clojure has all the benefits of the Lisp family. Certainly, specific other Lisps have their own advantages. I miss Common Lisp's method combination whenever I use any other language.


can Clojure now save images?

does it have CLOS? I thought a huge difference from many Lisp dialects is that Clojure is not especially object-oriented, in contrast to most CL implementations (and other Lisp dialects like ISLisp, many Scheme implementations, ...), which are very object-oriented - using many CLOS-based libraries.

can it compile to C?

can I use it without the JVM?

does it start up in less than ten ms?

programmable reader?

more powerful exception handling for interactive software?

tail calls?

etc. etc.

I'm not saying that you need all that, but saying that Clojure has all that a typical CL implementation or some of the more advanced Scheme implementations offer would surprise me...


None of the features you cite are general features of the Lisp family of languages; they are features of specific Lisps. Clojure has its own unique features that are absent from other languages, including Common Lisp. To be specific:

Clojure has multimethods with arbitrary dispatch functions as well as user-defined hierarchies that provide more flexibility than CLOS in many situations. There's no inheritance, but you can copy a structmap.
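
A minimal sketch of what that looks like (the names here are illustrative, not anything from the thread):

```clojure
;; Dispatch on an arbitrary function of the arguments - here a keyword
;; lookup on a map - rather than only on the class of the first argument.
(defmulti area :shape)
(defmethod area :circle [{:keys [r]}] (* Math/PI r r))
(defmethod area :square [{:keys [side]}] (* side side))

(area {:shape :square :side 3}) ; => 9
```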

I do miss method combination and the programmable reader from CL, but I love that the standard library is largely defined in terms of generic functions, that lazy sequences are assumed by default, that every data structure is immutable, but still reasonably fast.

The perfect Lisp has not been written and probably never will be, but I certainly feel like I'm writing Lisp, and not some other language with Clojure.


if you say:

   Clojure is a member of the family of Lisp languages

   x is the set of features that all the members of the Lisp family share

   Clojure has all features in x
then this is trivially true and not surprising.

But if you compare the specific language and implementation called Clojure with, say, the specific implementation Clozure CL (CCL), then I would say that there are quite a few features (and their benefits) of CCL that are both available in a lot of other Lisps (for example direct support for object-oriented programming or the ability to save and restart the state) that Clojure lacks. Also true, Clojure has a few features (especially features that are not widely shared among Lisps like STM) that CCL lacks.


  "Although I was unable to use LISP in any production code"
Does he refer to LISP 1.5?



