Clojure or: How I Learned to Stop Worrying and Love the Parentheses (nathanmarz.com)
121 points by mrduncan on Aug 31, 2010 | 84 comments



Lispers rightly tout the malleability of the syntax as a killer feature. The big problem is that any language you build in lisp is still pretty much lisp. If you're not already sold on s-expr syntax, and a lot of programmers, rightly or wrongly, don't like it, the fact that you can replace one s-expr syntax with another just isn't that convincing.

The irony here is that lisp's syntax will probably keep it from ever falling into obsolescence, because it can adapt to any new general task, but it will also prevent it from ever really breaking into the mainstream, because for any particular task it's a non-optimal syntax and a steady procession of more specialized alternatives suit most people's needs better.


I'll admit that learning to be effective with Lisp's syntax took me a few months. It's a matter of getting used to that way of thinking about programming.

Now that I'm fluent, I strongly prefer the s-expression style for most problems. The uniformity of Clojure means that I can use the sequence library for an unbelievable variety of tasks. And I love that operations like "+" are just like any other function and can be passed around.
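A tiny illustration of that last point - since + is an ordinary function value (plain Clojure, nothing article-specific), it goes anywhere a function goes:

```clojure
;; + is a plain function, not special syntax, so it can be passed
;; to higher-order functions like any other value.
(reduce + [1 2 3 4])        ;; => 10
(map + [1 2 3] [10 20 30])  ;; => (11 22 33)
(apply + (range 5))         ;; => 10
```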

Another important point is that when you make a DSL in Lisp, that DSL can interoperate with all your other DSLs.


I programmed pretty heavily in Common Lisp from about 2000-2003 and I got over the parentheses too after a few months. I can't say I ever really came to love the syntax though. I always thought of it as the price I had to pay for macros. I did get pretty tired of the religious zeal of a lot of lispers though. Too many insist that lisp is the "right" way to design a language and refuse to see it as the collection of design compromises it is, like every other language.


For the particular task of list processing its syntax is pretty optimal ;-)


Ironically I think Haskell is actually much better for this.


Point-free ftw.


What tool do you use to build an optimal syntax for all your needs?


I'm increasingly suspicious of DSLs. Mostly I work in languages like Scala or Ruby, whose syntax is expressive enough for almost anything as it is, and avoid building DSLs on top of them. You can cover a lot of ground in any language that allows higher-order functions and function, sequence, and hash literals.

Syntax matters but it seems to have become a bit of an obsession lately.


DSLs aren't just about the curly braces and lambdas. They're also about reducing the amount of boilerplate code required to achieve some task. You can do this with higher-order functions and objects, but only to a certain extent.

If you find that your language still requires boilerplate code to implement features in whatever domain you're working in, then perhaps it's time to get a new language - or write a DSL.


Agreed, but I just don't find myself writing much boilerplate in modern languages. The other abstraction mechanisms take care of most of it.


That's the problem with boilerplate. You don't see that it's boilerplate until you see a language where it's unnecessary.


If that's true (and it seems likely) how can you avoid writing blub in lisp? How can you ever know to what extent you're writing blub in lisp?


You mean writing blub-like code in Lisp? You probably cannot avoid that on your own unless your name is Guy Steele or someone like him, but by reading non-blub Lisp code you can.


I think the spirit of the post is right on, but I'm not a fan of the example given. That's the kind of code that scares people away from lisp. Which isn't to say it's bad code, but it's not nearly intuitive enough to use as an example.


I wanted to show an example of an embedded language people could relate to. Querying is something most everyone has done and has felt the pain of doing from other languages. Any other non-trivial example of an embedded language would be even harder to understand b/c the problem domain wouldn't be familiar to the reader. It's a tough concept to communicate; I did my best.


Your best...was good enough. I don't follow Clojure much but I understood what you were trying to get at through the article. Nice job.


I can relate to the feeling of angst one gets when encountering an unfamiliar DSL in lisp code (or any language that uses them a lot), but it's the same kind of angst I feel when I encounter anything unfamiliar; it's perfectly normal. But such irrational fears are, well, irrational. I need better reasoning against lisp (or any other powerful non-mainstream language) than "it looks scary" or "it isn't intuitive".


I think he started out well, pointing out that SQL is a DSL for database queries. But the mapping between Cascalog and SQL wasn't clear, without further explanation it seems to be different for the sake of being different.

I would have appreciated a DSL which could obviously be used as a way to query an SQL database more than the examples in the article.


Great article. Probably no coincidence that Nathan uses SQL as a straw-man since he is interested in data oriented languages and wrote the nifty Cascalog DSL (which, BTW, I am evaluating for use at work).

I share his interest in Clojure, but some little things like poor stack traces still keep the experience of using Clojure from being totally fun. I use Clojure for work, but I am still mostly using Ruby for my own projects (with some Clojure).


clj-stacktrace improves things quite a bit.

http://github.com/mmcgrana/clj-stacktrace


I do use it on one project and am semi-satisfied with it.


Not sure if you realized it, but saying someone used a strawman is generally an insult to their argumentative abilities :)


"SELECT Person FROM Age ..." .. Let's just say that it's not SQL's fault if you run into problems here.


:-) right you are - I had not thought of that.


I think most of what is good/unique about clojure doesn't come from being a lisp, though. For example - its concurrency constructs, seq/lazy-seq abstraction, persistent data structures that work with seq, java interop, etc.

Being a lisp doesn't hurt it, especially in the context of DSLs mentioned, but I don't think it defines it either.
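For the curious, a quick sketch of the seq abstraction and persistent data structures the parent mentions (standard Clojure, nothing exotic):

```clojure
;; The same sequence functions work over vectors, lists, strings, lazy seqs...
(map inc [1 2 3])          ;; => (2 3 4)
(take 3 (iterate inc 0))   ;; lazy seq => (0 1 2)
(first "abc")              ;; => \a

;; "Updating" a persistent vector yields a new vector;
;; the original is structurally shared and untouched.
(def v [1 2 3])
(conj v 4)  ;; => [1 2 3 4]
v           ;; still [1 2 3]
```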


Nit: the technical term for "integrated language" mentioned here is domain-specific language. People spin off DSLs from Ruby all the time.

Incidentally, one could argue that SQL is the most widespread DSL there is.

Edit: The author mentions this is basically a DSL in his post, which I missed the first time around. Not sure why he didn't just stay with that nomenclature though.


The distinction being drawn here is between a DSL which is defined in terms of pre-existing syntax, and one which defines its own syntax.

The "integrated languages" that are discussed do not need to be stored in strings, or have some sort of special pre-processing step to coexist with the primary language. In Clojure, a Cascalog query is a first-class structure simply by virtue of being defined in terms of Clojure syntax. Integrating LINQ into C#, on the other hand, required modifying both the compiler and IDE. And as pointed out in the OP, SQL is a second-class citizen pretty much everywhere you look, which leads to issues like injection.
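To make "first-class structure" concrete, here's a minimal sketch (the predicate names are made up for illustration; this is not actual Cascalog syntax):

```clojure
;; A query held as ordinary Clojure data: just a vector of symbols and lists.
(def query '[?person (age ?person ?age) (< ?age 30)])

;; Because the query is data, composing it is plain collection manipulation --
;; no string splicing, and hence nothing analogous to SQL injection.
(defn add-clause [q clause]
  (conj q clause))

(add-clause query '(city ?person "NYC"))
;; => [?person (age ?person ?age) (< ?age 30) (city ?person "NYC")]
```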


Metalua, Nemerle, Boo (and even Perl to some extent) ... among other languages have support for compile time macros / interacting with the compiler's pipeline.

Here's the LINQ syntax built as a Nemerle macro ... http://code.google.com/p/nemerle/source/browse/#svn/nemerle/...

Where a query looks like this ...

     def res = linq <# from c in customers where c.City == "London" #>;
Of course, it requires a prefix "linq" + the actual code to be delimited ... but that's not a requirement, as that macro could've been built to look exactly as the C# equivalent (the current form being chosen to prevent ambiguities both for the compiler and the programmer).

Here's MooseX-Declare, which introduces completely new syntax in Perl ... http://search.cpan.org/dist/MooseX-Declare/lib/MooseX/Declar...

So the argument that you need Lisp-like syntax for adding syntax to a language without storing it in strings or through pre-processing ... is bullshit.

The only difference between a language like Nemerle and Lisp is that in Lisp a macro is easier to write, while in Nemerle you need some knowledge about the compiler's parser / AST and pipeline.

But having easy-to-write macros ... I'm not sure it's such a good thing ... and in Nemerle you're not limited in the kind of syntax you can add to the language. In fact Nemerle is a simple core language where many of the useful things in it are built as macros.


This deficiency of not being "integrated" is also true to a lesser degree in Ruby.

Ultimately, you get the thing to behave how you want with the syntax you want, but under the hood you're not getting code as data that you can manipulate as freely and naturally as you can in lisp. Instead you hack on the language's object models and method dispatch to get the syntax to line up with the one you have in your head.

In this sense, beauty is only skin deep in Ruby when it comes to DSLs. (It's nice and all, but it's apples to oranges vs lisps.)


Actually, Lisps conveniently sidestep the whole issue by refusing to have ANY syntax. Which is fine if you like it. But you could just as easily argue lisp is bad for DSLs because you can't, for example, define an infix operator.


I think "Lisp has no syntax" is an unfortunate meme. Lisp has a syntax. It's a very simple syntax for defining trees, and you're basically using it to write the abstract syntax tree for your program, but it still has a syntax.


Yes of course, but in the context of a DSL, you're not really defining a new syntax. You've still got an AST at the end of the day.

And of course, this isn't an attack on lisp. If it works for you, cool. But I just don't see how that distinguishes itself from a ruby DSL like say rspec.

I think the nicest feature of lisp DSLs is that lisp doesn't have the concept of an operator, and most languages make it difficult to define a function called <- or something like that. But you can do that in Haskell or ML, it's not really exclusive to S-exps.


I'm unfamiliar with rspec, but the major difference between designing and implementing DSLs in Lisps versus other languages is ease of code generation. That is, actually generating new code based on what the user of the DSL supplies. In Lisp, this is easy because it's trivial to receive the AST that defines what the user wants to do. Then you can generate code directly based on that AST. No need for reflection or any such trickery, because it's all just lists of lists.

In most other languages, if that sort of transformation is even possible, it is much less direct.
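A minimal example of what "receiving the AST" means in Clojure (the classic `unless` macro, not something from the thread):

```clojure
;; A macro's arguments arrive unevaluated, as plain lists and symbols.
;; We build new code from them: `unless` rewrites itself into an `if`
;; with the branches swapped.
(defmacro unless [test then else]
  (list 'if test else then))

;; The expansion is visible as data:
(macroexpand '(unless ready? :wait :go))
;; => (if ready? :go :wait)

(unless false :yes :no)
;; => :yes
```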


I think it's more accurate to say that Lisp has a metasyntax rather than a syntax. Each special form and macro, including user-written macros, has its own syntax; but all of these syntaxes are layered on top of the standard metasyntax.


This is a good point, and it's one of the things I don't like about Lisp. You can't know the semantics of evaluation without memorizing each form or having a doc handy. The code at the call site itself is not enough.


Interestingly, that's similar to the criticism often levied at operator overloading in C++.


Similar, but different. Operator overloading doesn't affect evaluation. You know immediately what is being evaluated now and what isn't; not so in Lisp, and this matters a great deal.


You can change prefix expressions to infix with a macro, if you like; didn't you read SICP? Your DSL can look however you want it to, as long as there are some parentheses around the ends of it.


Except you still need to give the macro a name, I don't think you can just write a macro that figures out:

(1 + 2)

(x = 1 + 2)

It needs to be something like

(!! 1 + 2)

And you need to have spaces.

(1+4/5)

Won't work.

And operator precedence is do-able, but a bitch.

Etc... Etc...


Actually, in CL, you can write a "reader" macro that interprets your first infix examples. You probably do not want to because overriding ()s (as opposed to overriding maybe []s or {}s) is complicated. But you can.

As for the issue of spaces, 1+4/5 is just a symbol, which you can interpret as a string, which you can then parse back to symbols.

All that said, if I really wanted infix math, I would probably just make something that interprets this:

    {1 + {2 / 3}{4+5}}
Or if I really love my parentheses...

    (!! (1 + (2 / 3)(4+5)))


> As for the issue of spaces, 1+4/5 is just a symbol, which you can interpret as a string, which you can then parse back to symbols.

You can write a parser in any language; nothing magical about Lisp there.


> You can write a parser in any language; nothing magical about Lisp there.

Your statement is correct; it is just misleading. First, it is irrelevant in that we are talking about parsing the syntax of your language and producing code; not "any language" can do that. You would also have to write an eval for the many languages that do not have an eval. See SICP for an introduction to eval-apply logic.

Second, it disregards ease of parsing. It is easier to write a parser in Lisp to translate Lisp-like syntax into Lisp code than to write a parser in another language to translate that language's syntax into that language's code (and exceptions to this are because the language was based on Lisp). This has to do with basic Lisp syntax being a syntax tree and is aided by the CL standard providing many tools for parsing.

So yes, you can complicate things a bit by mashing your symbols together, but that's just one extra parsing pass (insert spaces around operators). One READ-FROM-STRING later, and you have a symbol tree. In another language, it might be a series of complicated lex statements and functions where the language relearns how to do simple addition.
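The Clojure analog of that extra pass is nearly a one-liner with read-string (a sketch of the idea, not robust infix parsing):

```clojure
(require '[clojure.string :as str])

;; "1+4/5" reads as a single token; spacing out the operators first
;; lets the reader hand back a tree of numbers and symbols.
(defn respace [s]
  (str/replace s #"([+\-*/])" " $1 "))

(read-string (str "(" (respace "1+4/5") ")"))
;; => (1 + 4 / 5), a five-element list of data, ready for any infix macro
```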


> It is easier to write a parser in Lisp to translate Lisp-like syntax into Lisp code than to write a parser in another language to translate that language's syntax into that language's code

Yeah, that's a neat thing about lisp, but it wasn't what the OP really said - he was talking about parsing a random string.

Incidentally, I would be curious to get a design guy - one who really knows little about code - to look at blocks of code in different languages and give us his opinion. I have suspicions about what he might say, but it'd be a fun experiment.


If you don't want the weight of pulling in Incanter libs, here's a pair of functions for infix ops* in Clojure:

    (def && #(and % %2))
    (def || #(or  % %2))
    
    (def *rank* (zipmap [- + * / < > && || =] 
                        (iterate inc 1)))
        
    (defn- infix* 
      [[a b & [c d e & m]]]
      (cond
       (vector? a) (recur (list* (infix* a) b c d e m))
       (vector? c) (recur (list* a b (infix* c) d e m))
       (ifn? b) (if (and d (< (*rank* b 0) (*rank* d 0)))
                 (recur (list a b (infix* (list* c d e m))))
                 (recur (list* (b a c) d e m)))
       :else a))
        
    (defn infix [& args]
      (infix* args))

* from my unfix lib http://fogus.me/fun/unfix


Here's infix math from the Incanter library for Clojure:

http://data-sorcery.org/2010/05/14/infix-math/


> Lisps conveniently sidestep the whole issue by refusing to have ANY syntax.

I'm not sure why people repeat that, but Lisp has syntax ...

In the same way Xml, Yaml, Json have syntax ... plus it has operators that do stuff, like: quote / eval / for defining methods / for defining macros.

Sure, a serialization format for what are basically syntax-trees is lighter than for a language that requires a LALR parser ... but you have to limit yourself to that serialization format in your DSLs.


There have been many attempts to articulate the benefits of Lisp-based languages before, but most of these attempts seem to end in futility.

Really? I think someone even wrote a whole book that's mostly about macros, embedded languages and so on.

http://www.paulgraham.com/onlisp.html


Excellent book, but no one is going to read an entire book on Lisp unless they're already interested in the language.

What I mean by "articulate the benefits" is a short explanation for the unique benefits of Lisp that a non-Lisper can digest. That's what I tried to do in this article by trying to show a tangible example of an embedded DSL.

PG's post "Beating the Averages" is a better example, but its argument is more based on authority than clearly communicating the tangible reason why macros are so useful. That said, PG's posts are what got me interested in Lisp in the first place, but I didn't fully understand the benefits of Lisp until I started using it heavily.


There are many many many blog posts, short essays, long essays, old usenet posts, mailing list rants, you name it that are about 'the benefits of Lisp'. Now, you may not have found one that was very clear to you and that's fair enough. But to say they 'end in futility' is really pretty presumptuous.


I'm very much looking forward to seeing Nathan talk about Clojure and Cascalog at the Strange Loop conference in October! http://strangeloop2010.com/talks/14487


I don't know why there is so much Clojure junk on HN, but if you're thinking about picking up Clojure you'll be better off learning Erlang instead.


I'm all for a contrary point of view, but it would be nice to see you give some reasoning. I have heard some good things about Erlang, but the syntax isn't exactly luring me in. Why would I look at Erlang when I could get good concurrency and bajillions of Java libraries with Clojure?


Erlang, scala/akka, haskell/ghc and clojure all have compelling takes on concurrency (C#/F# too). In erlang's case, it's the lightest of lightweight processes. I've seen blogs describing erlang apps spinning up millions of processes. Those are the kinds of numbers that akka and ghc are shooting for (I don't have a great akka reference):

http://www.serpentine.com/blog/2009/12/17/making-ghcs-io-man...

http://www.slideshare.net/jboner/akka-scala-days-2010


Any parentheses other than () are evil. ^_^


And rightly so, because they wouldn't be parentheses, rather braces, brackets, angle-brackets, and guillemets.


OK. Let's state it differently. Clojure is a functional language with a Lisp-looking syntax, inspired by Lisp. But it isn't a Lisp.

One of the fundamental parts of the Lisp philosophy is to keep syntactic sugar away, while Clojure is a collection of various syntactic sugar on top of something which looks like Lisp's syntax.

The statement that "Clojure is a dynamic programming language that targets the Java Virtual Machine" is true, while "Clojure is a dialect of Lisp" is just a very-very clever marketing statement.

^&~@%#{}[] - what is all this shit supposed to be? One must learn it. With all that crap Clojure is just another language which looks like Lisp, especially for those who never saw emacs-lisp or Arc before.

http://groups.csail.mit.edu/mac/classes/6.001/abelson-sussma... - Lecture 1b: Procedures and Processes; Substitution Model.


http://en.wikipedia.org/wiki/M-expression

By far, the cleanest Lisp dialect out there is Dylan, and it uses Algol notation, not s-expressions:

http://en.wikipedia.org/wiki/Dylan_%28programming_language%2...


http://mumble.net/~campbell/scheme/style.txt

The reader of the code should not be forced to stumble over a semantic identity because it is expressed by a syntactic distinction. The reader's focus should not be directed toward the lexical tokens; it should be directed toward the structure, but using square brackets draws the reader's attention unnecessarily to the lexical tokens.

But it is not only about style. When [x y] means a different thing than (x y), it is a different language construct, with different behavior.


"There's one huge difference between Cascalog and Linq: Linq is part of C#. You can't define Linq in terms of regular C#"

So, the code snippets in this article aren't written in Clojure, but instead in some other custom language? I don't know Clojure, but those examples look very much like that crazy foreign Lisp 'nested parenthesis talk' to me.


The code snippets are Clojure, but that's the point. Nothing was added to the Clojure language itself in order for this to be possible. Linq would not have been possible (in its current form anyway) without adding new features to the C# language itself.


Flippant response: you're saying that nothing was added to Clojure in order for some Clojure code to get written? This is not unprecedented.

Seriously - I don't see what's unusual about the syntax of these examples that makes them a DSL that doesn't just look like clojure code.

If my Linq implementation is allowed to resemble the language for which it is implemented, then I can easily write a 'Linq' implementation in plain C# that will look just like plain C#. (Although I grant that LINQ refers to the syntax additions to C# and not just the library)

Not hugely worked up about this, just feeling a little 'Emperors New Clothes' about it.


The point wasn't to make it not look like Clojure; the point was to express querying constructs in a way that's natural to the language. This isn't impossible in any language and is actually fairly common within the domain (e.g. SQLAlchemy for Python works similarly through operator overloading).

The point he's trying to make is that the default way of doing things in a lisp is to adjust the language constructs to fit the domain of the problem. This (generally) results in a simpler mental model of the problem domain and less code. It's not about being ABLE to do it. You can do something similar in most languages, but there it's just not as easy, not the default, and not as flexible.


You might be right, but if so it sounds like the talk of Linq, and syntax, and 'DSL's served to obscure his actual point, which is a shame.

Guys, honestly, downvote if it makes you feel better.


Here's the "LINQ" of Common Lisp, which may look "different" enough to convince you:

http://www.unixuser.org/~euske/doc/cl/loop.html

(It could be implemented identically in Clojure, if you cared to.)


Thanks. To my untrained eye that looks more like it could be called a DSL than the article, where mostly it's just creative whitespace formatting, as far as I can tell. What's going on with the underlining?


Re: the underlining -- no idea, just an instructional aid, I guess.

Here's another example that might look "DSLy": http://www.brool.com/index.php/pattern-matching-in-clojure

The pattern matching macro "match" is written entirely in Clojure, without any compiler tricks. I wish I could do the same in C#! (I'd need to have a bunch of cruft defining a predicate object with options for each match criterion.)

All of these techniques boil down to how easy it is to do two things in a language:

- To write DSL code in your structure of choice without having to cover it in a huge amount of syntax goop, e.g. the loop macro or the magic LINQ syntax.

- To analyze and transform that DSL code programmatically, in a way that's more structured than simple string manipulation, so that you can do the right thing with it.

C# 2.0 was very bad at both of these. C# 3.0 improved a lot, by adding lambda expressions for the first, and allowing you to analyze C# expressions as a syntax tree. But Clojure (& Lisp in general) is really good at both.


The snippets are Clojure code that use the Cascalog library. Cascalog has the look and feel of an embedded language.


I feel like I'm nitpicking, but I couldn't let it go, and yes, it's already been mentioned once, but I feel the argument offered by confuzatron was lacking. I'm referring to your characterization of Linq in C# as something inextricably linked to the language, how you can't even begin to separate it from the language.

Straight off the bat the most glaring problem with that statement is that Linq is not part of C# in any way shape or form since all C# code is compiled to bytecode, it would be literally impossible for C# to have Linq and for it to not be available for the rest of the languages supported by dotNet. The funny part is the expressive form of Linq that reads like a sentence is not even fully supported by C#! Only VB.Net has fully implemented Linq expressiveness.

There are many more reasons that argument is wrong, but I feel just pointing out that one above shows how far off it is.

In my experience it's almost always best to shy away from hyping your idea of better by knocking the competition; your article stands well on its own, and C# has so many obvious flaws that there's no need to add to them. Stick to what's good about X, not what's crappy about alternative Y.

Again, all in all an interesting read since I'm not familiar with Clojure, but I just had to nitpick.


Straight off the bat the most glaring problem with that statement is that Linq is not part of C# in any way shape or form since all C# code is compiled to bytecode, it would be literally impossible for C# to have Linq and for it to not be available for the rest of the languages supported by dotNet.

There's no such thing as "LINQ bytecode". LINQ is just syntactic sugar that can coexist with vanilla C# (or VB.NET, or whatever) because Microsoft decided to modify its compiler and IDE to allow it. It is fundamentally impossible for you or me to make a similar change, unless we're willing to eschew the Microsoft toolchain.

In Clojure, you do not have the same limitation. That's the only point the article was making, and it's completely correct in that respect.


see my reply to nathanmarz


Straight off the bat the most glaring problem with that statement is that Linq is not part of C#

Parts of it are. Query expression keywords[1] are defined in the C# 3.0 Language Specification (see 7.15 Query expressions)[2].

[1] http://msdn.microsoft.com/en-us/library/bb310804.aspx

[2] http://www.microsoft.com/downloads/details.aspx?FamilyID=dfb...


I'm not bashing C#. I'm just saying that you couldn't define Linq in vanilla C#. It had to be implemented as part of the compiler.

In Clojure, you can create embedded languages without modifying the compiler. That's all I'm saying.


I can appreciate and understand that you didn't mean to come off as combative, but to me it did come off that way - and to others, since I'm not the only one pointing this out.

Can you name the compiler change that enabled Linq without looking it up? Can you define Linq, what it basically is? It feels like if you could, you wouldn't make such a statement, since all Linq is, at its core, is an iteration engine. It's literally just methods you dump your collection into, plus an anonymous method on top, and vroom goes the engine applying the method to each item. The pretty syntax form you usually see is not actually Linq the framework, and it leads to significantly more problems than it solves; it's basically just for PR purposes.

Edit: This feels like it might get out of hand and spin into a good old nitpicking programmers' war. Reading my post again, it came out way more confrontational than I meant, and I didn't mean any insult by it. My main contention is that the Linq syntax is often confused for the Linq framework - they are not in the same state, let alone ballpark. Plus the argument can be made (and I would agree to a large extent) that it wasn't the compiler that was modified to allow Linq; it was Linq that was waiting in the wings for the compiler team to implement features they had planned quite some time before. However, I am also being a stickler and stubborn; I nitpicked when even from my point of view it wasn't such a large error - it was the way it disrupted the flow of a pretty good article I was getting into, that's what made it stick out for me.


I'm not the OP, but here are some compiler features missing from C# 2.0 that would have prevented a LINQ-type library: lambda expressions, implicit types, and anonymous types.

None of these required modification to the runtime, they were purely compiler features. And, as pointed out, they were features only Microsoft had the ability to implement.


Strictly speaking, none of those features were necessary to implement LINQ. Anonymous functions were already possible in C# 2.0 with the ugly delegate syntax. Type inference saves a lot of keystrokes, and anonymous types save big families of generic tuple types, but you could do without -- hell, people write functional Java.

(A better comparison would be to look at LINQ-to-SQL specifically; that would not be possible in any sense without the expression tree libraries and compiler support introduced in C# 3.0, since there was no way to "quote" a C# expression and look into it. That's much closer to the mark here.)

However, I agree with you in spirit. A big enough quantitative difference becomes a qualitative one; nobody would actually want to use LINQ+C#2. And likewise, few people want to write small-scale DSLs in C#.


Well, your argument is tiptoeing down a very narrow path based on the definition of "necessary". Here's what Eric Lippert has to say about which features were necessary for LINQ: http://blogs.msdn.com/b/ericlippert/archive/2009/10/05/why-n...


Yes, you're right. Expression trees are especially interesting from an integrated DSL perspective, since you can do ridiculous things like turn lambda expressions into a syntax for hashes and so on. I omitted them because I believe that really did require a runtime modification to support.

But I would argue that since DSLs are all about affordances, it's not sufficient to say that similar functionality would be "possible". If the new approach doesn't represent significant semantic compression (which your hypothetical LINQ+C#2 would not), no one will use it. By that measure, the syntactic sugar added to C# 3.0 was absolutely a necessary precondition for LINQ.


LinqBridge allows support for Linq to run on .NET 2.0. The description of how it works might provide some insight:

http://www.albahari.com/nutshell/linqbridge.aspx


That requires the .NET 3.0 compiler to work, which was my point: LINQ is largely a compile-time feature.


No, the features that were used to build LINQ were added to C#. LINQ was then built using them, and those same features are available to you to build anything similar.


I'm obviously just referring to the syntax added to C# to support Linq.

I'm not even criticizing Linq/C#, I'm just using it as a point of comparison to help the reader understand Clojure concepts. How that comes across as combative I don't understand.


I'm afraid we will have to agree to disagree; I think we've both stated our cases to the extent they can be clearly stated. I wanted to edit my previous post to take out the combativeness, but responses had been put up and it would look like a cop-out if I did that. I banged it out without double-checking it for overall tone, but again, I do appreciate the article and overall found it a good read.


I think what Nathan is referring to is the SQL-like syntax for LINQ, which really couldn't be implemented in C#, as C# doesn't have any syntax extension features.

However, expression-based LINQ is implemented in C#. That's possible because of several language features added to C# 3.0, notably expression trees (code as data) and anonymous functions (lambda). Those features make C# expressive enough to do things like LINQ in C#, though in a somewhat clumsier way than Clojure does it.

For example, if you write cities.Where(s => s.StartsWith("L")), that "s => " is a lambda expression, but because the Where method takes an expression tree, the expression is turned into a data structure rather than executable code. This is similar (not identical!) to Clojure recognizing that you're calling a macro rather than a function and letting you see code as data.


That's why it felt like a cheap detour to me. To me it read like Linq is completely unchangeable and you're stuck with what the MS/C# language dictates it is, but almost every single component of it can be swapped out with your own implementation and overloads, so that in practice you can do much of what is implied as unachievable. It's been so long I've lost some of my grasp on it, but from what I recall you can swap out even the core methods that the C# sentence-type syntax ends up being converted to; now is that not really close to what was highlighted in the article as impossible? With expression trees added on top to build out queries with decision trees during runtime, you can make it work against whatever kind of datastore you're interacting with. In the end I felt it's not nearly as immovable as portrayed in the article, and at the time I felt like the article was past its prime since it had been an hour with few comments - though I'm kinda wishing I kept my mouth shut now that there's been an invasion of comments.

To start splitting hairs about the exact definition of what was written in the article seems to miss the point to me; the general flow and feel was dismissive of its ability. Is it clumsier? From what I can tell so far, yes, it's clumsier, but it's not powerless and immovable, which is how it came off. Maybe I'm just sensitive, though, or maybe I'm insensitive in how I portrayed my argument, but I really did just mean my original comment as constructive criticism.



