Here we've defined a protocol called "Sliceable" and added support for it to two built-in classes. We've done so in a safe way -- the "slice" function isn't visible anywhere outside this namespace unless you import it.
Now say you come along and want to use my little "slicing" library to carve up someone else's data structures. You can add support without touching my files or their files:
(ns my-slicing-support  ; hypothetical namespace name
  (:use [fancylists :only (FancyList)]
        [stevelosh.slicing :only (Sliceable)]))

(extend-type FancyList
  Sliceable
  (slice [this start end]
    (... slice up the fancylist here ...)))
And now in another file you want to slice up a FancyList:
(ns my-app  ; hypothetical namespace name
  (:use [fancylists :only (FancyList)]
        [stevelosh.slicing :only (slice)]))

(def my-fancy-list (FancyList 1 2 "cat" "dog"))

(slice my-fancy-list 2 4)
; => (a FancyList containing "cat" and "dog")
Now you've added support for some random person's FancyList library to my Slicing library, without touching the code in either of them. This is the really cool part about Protocols.
This concept actually does appear in David's post when he adds support for his Slice type to the IFn protocol, but it's a bit obscured by the fact that function calling has some syntactic sugar so you don't have to write -invoke.
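For readers who haven't seen that sugar: in ClojureScript, implementing `IFn`'s `-invoke` is what lets an instance sit in call position. A minimal sketch (this `Slice` type and its fields are my illustration, not David's actual code):

```clojure
;; Hypothetical sketch: a type implementing IFn can be called like a function.
(deftype Slice [start end]
  IFn
  (-invoke [_ coll]
    (subvec (vec coll) start end)))

;; ((Slice. 1 3) [:a :b :c :d]) is sugar for
;; (-invoke (Slice. 1 3) [:a :b :c :d])  ; => [:b :c]
```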
In this context this looks somewhat similar to Haskell's type classes, and maybe, to a lesser (or greater?) extent, to C#'s extension methods. The latter, I think, lacks the concept of the "protocol", as extension methods live at the namespace level in a way (they're wrapped in a static class, but that's somewhat meaningless). Then again, the protocol concept here seems to be used mainly for the same purpose: providing a namespace for storing implementations and importing them?
What seems really interesting to me here is the ability to extend an existing class by implementing an interface (instead of a set of extension methods), and therefore becoming able to cast any instance of this type as an instance of this interface. I'd love to have that in C#, as it would simplify (if not eliminate) the creation of wrappers around types you don't own.
An important difference between Clojure protocols and C# extension methods is that protocols are one of the tools for polymorphism in Clojure, the (only?) other being multi-methods, whereas extension methods are just syntactic sugar for static methods and therefore aren't polymorphic.
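To make the contrast concrete, here's a minimal Clojure sketch (the protocol and implementations are mine, purely illustrative): the protocol function dispatches on the runtime type of its first argument, which is exactly what a statically resolved extension method can't do.

```clojure
(defprotocol Speak
  (speak [x]))

(extend-type String
  Speak
  (speak [s] (str "a string: " s)))

(extend-type Number
  Speak
  (speak [n] (str "a number: " n)))

;; Dispatch happens at runtime, on the actual type of the argument:
(speak "hi")  ; => "a string: hi"
(speak 42)    ; => "a number: 42"
```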
Clojure also supports prototype-based polymorphism via Associative objects, inheritance polymorphism via Java interop/proxy, interface-based polymorphism with deftype/defrecord, polymorphism via abstraction (see the Clojure sequence operations for example, which use Java interfaces under the hood but you'd never know....). And probably a few more I can't remember right now..... there's a reason Rich Hickey calls it "polymorphism a la carte".
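Multimethods are the most flexible of these, since the dispatch function is arbitrary rather than tied to the type of the first argument. A minimal sketch (names are illustrative):

```clojure
;; Dispatch on any function of the arguments -- here, a keyword lookup:
(defmulti area :shape)

(defmethod area :circle [{:keys [r]}]
  (* Math/PI r r))

(defmethod area :rect [{:keys [w h]}]
  (* w h))

(area {:shape :rect :w 3 :h 4})  ; => 12
```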
Except isn't that missing the actual point? The example was contrived to drive home that we can use objects as functions in ClojureScript - I respect that the engaged reader understands that these tools must be applied tastefully. I could have implemented it just like you've shown but would we have learned anything? :)
Also, my example is one step from being open -- extensible to any type instead of hard-coding string/array, which you copied. You can't do this in CoffeeScript in a safe way.
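A sketch of what that open version might look like in ClojureScript (the protocol name is illustrative, not the article's actual code): a `default` implementation covers everything, and more specific types can opt in later, from any namespace.

```clojure
(defprotocol IBar
  (bar [x]))

;; A catch-all implementation for anything not covered more specifically:
(extend-type default
  IBar
  (bar [x] (str "something: " x)))

;; Specific implementations can be added at any time, by anyone:
(extend-type string
  IBar
  (bar [s] (str "a string: " s)))
```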
"But when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn't realize he's looking up. What he sees are merely weird languages. He probably considers them about equivalent in power to Blub, but with all this other hairy stuff thrown in as well"
Not trying to refute anything. Just a passage that comes to mind when reading that opening sentence, especially with the specific reference to how that seems to be a common occurrence with Lisp.
In fact, you might argue that OP and PG are saying the same thing. Programming languages are habits of mind, and until you have real understanding of an abstraction, it's a lot easier to think "that looks weird, why don't I just use this other thing I already know".
I appreciated the look into ClojureScript as someone who's never touched it (and whose interest is definitely piqued now), but I'm baffled by the article's initial hook of 'let's compare how many characters it takes to accomplish the same task'. Is this really a meaningful metric? There's value in comparing the relative clarity and expressibility of each language, but that's not necessarily synonymous with brevity. I get that it's not the heart of the piece, but starting with such pedantry seems to detract from the point.
Brevity is a feature of clarity, and clarity usually attends very powerful notions like reuse and application to a new domain.
What's truly orthogonal is some notion of "expressiveness" or "teachability" or otherwise being-able-to-read-and-understand.
The example that you should think of here is mathematics papers. The ideas are stated in their clearest form, which usually is absurdly brief: lemma, theorem, proof. You can then spend about an hour poring over each line to figure out what the heck it's expressing, before suddenly a moment of clarity dawns on you and you see all of the connections together.
Why would they write incomprehensibly? It's not because they don't want to be understood; these are peer-reviewed journals we're talking about here, and someone else must read and OK your work. But it's because when they make the fewest assumptions and the most broad argument, their work maximizes power and usability and robustness.
ClojureScript doesn't have this problem. It's normal Clojure code that (I assume) can be debugged like normal Clojure.
EDIT: I haven't debugged ClojureScript yet, but I assume you can debug it like normal Clojure code.
I have yet to find myself in a situation after 15 years of programming when the primary problem was "redundant key-tapping".
Ah, I see. I'd probably take the route of allowing ClojureScript to be run as normal Clojure, but I guess the problem is that you don't have access to the browser environment. Debugging browser-based apps can be tricky because of that.
We've written a Parenscript/JS debugger that runs in Emacs and interacts with V8 via Node.js as an Emacs subprocess. We started off using the JSON debugging protocol you mention on that linked page, but had to abandon it because it insisted on printing large arrays in their entirety, which hangs V8 if you ever encounter a large array as a local variable. In the end, we found V8's in-process debugging API much easier to work with.

The JSON protocol is just a (bloated) wrapper around that, which gives less control and doesn't work particularly well. (I'm not aware of any systems out there that actually use it.)

Of course, if you drop the JSON wrapper, you have to write your own server-side code and pass your own messages to the client. But in return, you have fewer hoops to jump through, and it felt to me like less overall work to make a good debugging client this way. The in-process API isn't documented but it isn't hard to figure out from the V8 source.
The other reason is to avoid code bloat. Adding your own runtime library adds a lot of (generated) JS that has to be pushed down to the client.
This is something we're constantly contending with in Dart. Dart does have a runtime library (including a different DOM API!) and managing that without generating enormous amounts of JS is tricky. We're getting pretty good at dead code stripping, but doing that isn't easy. Without type annotations, it would be even harder.
> We've extended all objects including numbers to respond to the bar function. We can provide more specific implementations at any time, i.e. by using extend-type on string, array, Vector, even your custom types
>This is totally safe (except for oldIE)
No, not in his definition of safe it isn't. That's just monkeypatching -- the new functions are visible everywhere in the program, and it's entirely possible to be stung badly by name collisions. The ClojureScript version doesn't have these problems -- the protocol only exists in the namespaces where it is defined or imported.
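The reason it's safe is that a protocol function is just an ordinary namespaced var. A rough sketch (library and protocol names are made up):

```clojure
;; Two libraries can each define a `slice` without ever colliding,
;; because each function lives in its own namespace:
(ns lib-a.core)
(defprotocol ASliceable
  (slice [x start end]))

(ns lib-b.core)
(defprotocol BSliceable
  (slice [x start end]))

;; A consumer chooses explicitly which one to call:
;; (lib-a.core/slice coll 0 2)
;; (lib-b.core/slice coll 0 2)
```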