What's nice is that the new tools are not just syntactic sugar, as in so many other languages. They either address specific performance pain points (non-rebindable functions and numerics) or introduce new abstractions and tools (reduce-kv is a nice small example).
I love the fact that each time I read about a new thing coming to Clojure, I immediately think "well this will fit right into what I'm building, great!".
Clojure strikes a good balance between nice ideas and practicality.
“To arrive at the simplest truth, as Newton knew and practiced, requires years of contemplation. Not activity. Not reasoning. Not calculating. Not busy behaviour of any kind. Not reading. Not talking. Not making an effort. Not thinking. Simply bearing in mind what it is one needs to know. And yet those with the courage to tread this path to real discovery are not only offered practically no guidance on how to do so, they are actively discouraged and have to set about it in secret, pretending meanwhile to be diligently engaged in the frantic diversions and to conform with the deadening personal opinions which are continually being thrust upon them.” –George Spencer Brown in Laws of Form, 1969
For example, if we went back in time, would all of the core functions have been implemented this way? Would this be a possible drop-in replacement in the future? Could future versions of Clojure integrate these ideas more deeply? If so, what are the backwards-compatibility concerns?
I don't think so - the existing implementations work on the higher-level abstraction of sequences. Reducers are optimized parallel versions that work on collections. While parallelism is extremely useful in some parts of your code, there is overhead and I don't think you would want either the overhead or the restriction of working below the sequence abstraction in the general case.
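To make the tradeoff concrete, here's a rough illustration of the two styles (illustrative only, assuming clojure.core.reducers is required as r):

(require '[clojure.core.reducers :as r])

;; seq version: lazy, composes with the whole seq library
(->> (range 10) (map inc) (filter even?) (take 2))   ;=> (2 4)

;; reducer version: eager, no intermediate seqs, foldable in parallel
(into [] (r/filter even? (r/map inc (range 10))))    ;=> [2 4 6 8 10]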
I seem to see some of the same choices being made available in the new Java 8 collections and parallel operations work. That is, it is up to the developer when to "go parallel".
For an entirely different approach, check out Guy Steele's Fortress language which eagerly executes most things in parallel by default (like all iterations of a loop, arguments to a function, etc) and you have to tell it not to do that.
Guy's Strange Loop 2010 talk is an interesting complement to this work: http://www.infoq.com/presentations/Thinking-Parallel-Program...
2) If seq or first is called on a reducible, wouldn't it be easy to just implicitly realize the reducible into a sequence first?
2) That's possible, but it makes it too easy to write code with abysmal performance because of (1). The common case is that you call both first and rest on the reducible. If both turn the reducible into a seq first, then both will take O(n) time in the best case (might be much worse depending on how the reducible was built up). Combine that with the fact that most times, you're going to recurse on the rest, and you've got an O(n^2) algorithm where you expected O(n), if everything is based on reducibles. Additionally, it's impossible to take the first or rest of an infinite reducible (well, perhaps you could do it with exceptions -- in general you can turn a reducible into a seq with continuations).
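To illustrate the cost, here's a minimal sketch of what implicit realization would look like (realize, first*, and rest* are hypothetical helpers, not part of the reducers API):

;; hypothetical helpers -- NOT part of clojure.core.reducers
(defn realize [reducible]
  (reduce conj [] reducible))                           ; O(n): walks everything

(defn first* [reducible] (first (realize reducible)))   ; O(n)
(defn rest*  [reducible] (rest  (realize reducible)))   ; O(n)

;; a typical seq-style recursion then re-realizes ever-shorter tails:
;; O(n) calls x O(n) work each = O(n^2), where seqs give O(n)
(defn sum [reducible]
  (if-let [x (first* reducible)]
    (+ x (sum (rest* reducible)))
    0))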
Looking forward to trying this out. I've been implementing another language in Clojure (though just experimenting for the moment). It's a non-lazy language (Groovy), so I have reduce all over the place, e.g.
(defn multiply [a b]
  (cond (and (number? a) (number? b)) (* a b)
        ;; note the [] init: without it, reduce seeds conj with the first
        ;; element, which throws for non-collection elements
        (and (vector? a) (integer? b)) (reduce conj [] (flatten (repeat b a)))
        (and (string? a) (integer? b)) (reduce str (repeat b a))))
(defn findAll [coll clos]
  ;; keep the elements of coll matching the predicate clos
  (reduce conj [] (filter clos coll)))  ; [] init, as above
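For example, with the [] inits these behave like Groovy's operators:

(multiply [1 2] 3)         ;=> [1 2 1 2 1 2]
(multiply "ab" 3)          ;=> "ababab"
(findAll [1 2 3 4] even?)  ;=> [2 4]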
If you haven't seen this, just take the time to watch the first 15 minutes. It's really worth it.
He says: "The only thing that knows how to apply a function to a collection is the collection itself." Which is like a monad in the sense that the insides of a monad are opaque; you can only interact with a monad through the functions it gives you.
The "map" function from his "reducers" library has type:
fn * reducible -> reducible
(i.e., it takes a function and a reducible and gives you back a reducible)
while monadic "fmap" is a little higher-order and has type parameters, but it does something analogous:
(t -> u) -> (M t -> M u)
(i.e., take a function from type "t" to type "u", and return a function from "monad of t" to "monad of u"). It's a little different in that Hickey's "reducers/map" applies the argument function itself, while monadic fmap gives you a function that will do that.
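For instance, a small illustration using the actual clojure.core.reducers namespace:

(require '[clojure.core.reducers :as r])

(def xs (r/map inc [1 2 3]))   ; a reducible; nothing computed yet

(reduce + 0 xs)                ;=> 9
(into [] xs)                   ;=> [2 3 4]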
Of course, his "reducers" library addresses a bunch of other stuff like parallelism, which isn't something that monads themselves are concerned with. I'm just saying that part of the interface to his new collection abstraction is monad-like.
John McCarthy was the first to recommend using logic in programs, with his Advice Taker proposal. This inspired the logic programming paradigm:
Paul Graham (creator of the Hacker News site) has an entire chapter in his book On Lisp on embedding Prolog:
Peter Norvig's Paradigms of Artificial Intelligence Programming includes an implementation of Prolog in Lisp:
"Common Lisp relations: an extension of Lisp for logic programming" proposed a mechanism for logic programming back in 1988:
Franz Inc. is developing Allegro Prolog as an extension to Common Lisp:
The Clojure programming language includes relational programming functionality through the core.logic library:
Racklog is an embedding of Prolog-style programming in Racket.
The Shen programming language includes a fully functional embedded Prolog system:
CycL is a declarative ontology language based upon classical first-order logic, with support for ontology components including parts and relations:
What is it that we don't get again?
I wonder if there would be any benefit to that?
Of course, this is a ROR crowd, software invented by a business school grad/game review writer, so it can be an uphill battle explaining this stuff.
* argumentum ad verecundiam
* ad hominem
Clojure tends to emphasise simple components that do specific tasks; so core.logic focuses on logic programming and not, for instance, persistence or atomicity. Because it's so focused, it's more capable in its particular area of expertise than a more general tool like SQL.
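For a taste of how small and focused it is, here's a minimal core.logic query (run* and membero are part of clojure.core.logic):

(require '[clojure.core.logic :as l])

;; which values of q are members of [1 2 3]?
(l/run* [q]
  (l/membero q [1 2 3]))
;=> (1 2 3)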
So basically the traditional niche of lisp, complex apps, has been replaced by sql.
There are apps written in lisp, like this site for example, but that's mainly just for vanity (pg is famous for his lisp books) - it would have been easier to write it in sql.
Almost every website or software running a business I can think of runs sql. The very few that don't, google for example, run on c++ and assembly.
So, in 2012, with gigs of ram and postgres, why would I use lisp?
Why not just do relational programming - prolog or sql? The lisp weenies still don't get it.
And now you've peppered the entire thread with non sequiturs and off-the-wall droppings. Please take a break.
I'm simply trying to tell the kids on here that SQL is great and ROR/LISP/NoSQL worship around these parts is misguided. No 'droppings' in that.
Yes, I write Lisp on my resume to make me look like a stronger candidate. I learnt it a few years ago for fun, that's all (I'm not a lisp flip-flopper). Just want to put the advice out there to be a relational weenie (instead of a lisp weenie).
If you have something to say about the actual post, then please say it. Otherwise, get out.
Lisp is useful to me (more useful than ruby or C was), predominantly because I can work with emacs/swank/slime and change and query my program while I'm observing its output.
But you made your claim too big.
SQL (even with recursive CTEs) is still just a DSL for relational data. Even more: a DSL for relational data only.
I'm happy to see a language putting this approach to collections into its core libraries and even combining it with ideas about parallel processing of data structures.
On the other hand, the whole thing is written as if Rich Hickey had an awesome idea, wrote some awesome code, and is now sharing his awesomeness with us. It's kind of a lost opportunity to give credit to the people who gave him the ideas (and maybe the people who helped him write the code, if there were any) and it's kind of a turn-off.
One good, prior write-up about reducing as a collections interface is:
http://research.microsoft.com/~simonpj/papers/deforestation-... The first paper (yes, that's a .ps.Z - check the date)
http://www.scs.stanford.edu/11au-cs240h/notes/omgwtfbbq.html some recent slides, which also include a bit about "stream fusion", which isn't yet in the standard library
http://darcs.haskell.org/cgi-bin/gitweb.cgi?p=packages/base.... The details from GHC's libraries - it's all in the RULES.
When working up from the bottom it might seem that this is just manual stream/list fusion. But the parallelism is the key prize. We need to stop defining map/filter etc in terms of lists/sequences, and, once we do, there is nothing to fuse, no streams/thunks to avoid allocating etc, because they were never essential in the first place, just an artifact of the history of FP and its early emphasis on lists and recursion.
Iterators do nothing for parallelism either.
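For example (an illustration with the reducers library itself), the same pipeline that reduces serially can be handed to fold, which uses fork/join to reduce segments of a vector in parallel:

(require '[clojure.core.reducers :as r])

;; no seqs, no thunks; a vector folds its segments in parallel
(r/fold + (r/filter even? (r/map inc (vec (range 100000)))))
;=> 2500050000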
I did see that link, because I liked the concept enough to read it all the way through without knowing Clojure, but that's not quite what I had in mind. You're right that most people have never heard of that library, which is why the way you presented it will leave most people with no idea that it was an influence (even if they read that far). That's something you could have just said, in one sentence, without getting into detail about how it was different.
I'm not really trying to criticize you for saying or not saying certain things, and I don't think you did anything wrong. Not really acknowledging influences is just a symptom of what turned me off. I feel like this post was written from a sort of aggressive fighting-for-popularity mindset that I'm uncomfortable with in a language.
EDIT: missing word
Also, most of what's special about enumerators is the gymnastics needed for Haskell's type system, which isn't relevant to Clojure at all...
One of the nice things about Clojure is that it's a lot of great ideas from a lot of great people, packaged up with Rich Hickey's particular brand of digestible design.
Having watched most of Rich's published talks, it's apparent to me that he is quite humble about the fact that none of his ideas are that novel. He simply takes credit for this particular arrangement and takes pride in his design approach. His design sensibilities really resonate with a lot of people, hence Clojure's popularity.
On a personal note, I wish I could give credit to all the giants whose shoulders I have stood on. However, it's simply not possible to remember all those details. Sometimes, ideas take days, weeks, months, YEARS to bake. Sometimes, someone tells you the solution to the problem, but you don't get it. Then 5 years later, it comes to you in a dream.
Rich Hickey might be very humble in person, and I don't think this blog post is necessarily arrogant (but you're also not going to convince me it's humble).
I'm not really complaining about a lack of humility or even about Rich Hickey. I just feel like Clojure is one of those languages caught up in a sort of Cult of Awesomeness, where people feel obligated to lionize the pros and downplay or omit the cons and the contributions of others.
I don't really want to use a language that's being driven by that. Maybe I'm mistaken about Clojure fitting that description, and maybe meeting Rich Hickey would set me straight, but I still think it's a reasonable perception to take away from this blog post.
Do you expect Rich or anybody else to write a blog post that goes 'I wrote this library and here is why it sucks'? EVERYBODY writes blog posts on why what they themselves build is good. I don't see any difference between Clojure and other languages (or communities).