Transducers are coming to Clojure (cognitect.com)
319 points by siavosh on Aug 6, 2014 | 100 comments



This sort of reminds me of the Church-encoded form of a list.

    newtype Fold a = Fold (forall r . (a -> r -> r) -> r -> r)

    fold :: [a] -> Fold a
    fold xs = Fold (spin xs) where
      spin []     cons nil = nil
      spin (a:as) cons nil = cons a (spin as cons nil)

    refold :: Fold a -> [a]
    refold (Fold f) = f (:) []
Notably, since `fold` and `refold` form an isomorphism, we can do everything to `Fold a` that we can do to `[a]`

    map :: (a -> b) -> (Fold a -> Fold b)
    map x (Fold f) = Fold $ \cons nil -> f (cons . x) nil

    filter :: (a -> Bool) -> Fold a -> Fold a
    filter p (Fold f) =
      Fold $ \cons nil -> f (\a r -> if p a then cons a r else r) nil
but all of this work is done without concrete reference to `(:)` and `[]`... you instead just use stand-ins I've been calling cons and nil. What's nice about this is that `Fold` can be used to build anything which can be "constructed from the left"

    -- assuming: import Data.Set (Set); import qualified Data.Set as Set
    foldSet :: Ord a => Fold a -> Set a
    foldSet (Fold f) = f Set.insert Set.empty
It's sort of dual to the stuff I was exploring in Swift here [0]. It also creates laziness for free because you can't really execute the chain until the end—Church-encoding is really a form of continuation passing.

The downside of this idea is that each time you "consume" a Fold you redo work—there's no place to put caching necessarily.

Maybe that's what they're solving with the Fold transformers representation.

[0] http://tel.github.io/2014/07/30/immutable_enumeration_in_swi...


Kind of. The idea is to get out of the context of the 'whole job' (the ->r->r bit above) and focus on transformations of the step function (a->r->r) -> (b->r->r) {using your arg order above}. Not talking about the whole job (i.e. the source and result) makes for much more highly reusable components, especially when the jobs don't produce concrete results but, e.g., run indefinitely, like channel transformations.
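(To make "transformation of the step function" concrete, here is a minimal Clojure sketch; it assumes the transducer-returning arity of map described in the post, everything else is ordinary reduce:)

    ;; + is a step function of shape (result, input) -> result
    ;; (map inc) transforms it into another step function of the same shape
    (def inc-then-add ((map inc) +))

    (reduce inc-then-add 0 [1 2 3])   ;;=> 9, i.e. (+ (+ (+ 0 2) 3) 4)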


Yeah, I'm less sure about the properties as you go this way. You ought to be able to get an Arrow out of it and it's a pretty natural idea.


For those curious about this, look up Haskell's `build` and `destroy` functions. Those functions are for Church-encoding lists and doing optimizations that way.



So is this actually just the Free monad?


No. It is a kind of universal construction, though not in any way more special than regular lists are.


I'm not quite sure what this means, so here's my attempt to translate this into Python. A reducer is a function such as `add`:

    def add(sum, num):
      return sum + num
  
Of course you can plug `add` directly into `reduce(add, [1, 2, 3], 0)`, which gives `6`.

A transducer is an object returned by a call such as `map(lambda x: x + 1)`.

You can now apply the transducer to a reducer and get another reducer.

    map_inc = map(lambda x: x + 1)
    add_inc = map_inc(add)
    
Our first reducer simply added, but the next one increments and then adds. We can use it as `reduce(add_inc, [1, 2, 3], 0)` which gives, I'm guessing, `9`.

Since the transducer returns a reducer as well, we can compose transducers:

     r1 = filter(is_even)(map(increment)(add))
     # use r1 in reduce()
     
It seems that in Clojure, reduce() isn't the only useful function that works with reducers; there are others, which is what makes this all worthwhile.

Is my translation accurate?


If I understand right (I may not):

In Python, the (complex, generic) iter() protocol is how every container provides a method to iterate itself, and reduce is a trivial application of a reducer (like your add function) to a container by using the iter protocol.

In Clojure, it's the opposite: there is a reduce() protocol and every reducible container knows how to reduce itself. The traditional reduce function just takes a reducer and a reducible and performs the reduce protocol on it.

As there's a bunch of different kinds of containers with different rules, the best way to implement the reduce protocol on them is different for each, and there might be weird specialized reducers that are parallel or asynchronous or lazy or whatever.

Transducers allow you to composably transform reducers into different reducers, which can then be handed to the reduce protocol. As it turns out, most (all?) normal container->container operations, like map and filter, have corresponding transducers.

Here's the good part: if you have a reduce protocol and a complete set of transducers, containers don't need to be mappable. The map(fn, iterable) function needs to know how to iterate; mapping(fn) doesn't care about containers at all, and works on any reducible for free. So you can write a transducer to produce the effect of any map- or filter-like operation without touching iter().

As an added bonus, the code is more efficient:

  reduce( add, map( lambda x: x + 1, xrange(10**9) ), 0 )
eagerly builds a gigantic mapped list (barring sophisticated laziness or loop-fusion optimizations), but

  reduce( mapping( lambda x: x + 1 )(add), xrange(10**9), 0 )
is equivalent and trivially runs in O(1) space on one element of xrange at a time.

PS: python translation of mapping from http://clojure.com/blog/2012/05/15/anatomy-of-reducer.html converted to named functions to be more pythonic:

  def mapping( transformation ):
    def transducer( reducer ):
      def new_reducer( accum, next ):
        return reducer(accum, transformation(next) )
      return new_reducer
    return transducer
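For comparison, roughly the same pipelines with the Clojure entry points described in the post (a sketch; transduce here is the new seeded reduce that accepts a transducer, per the blog post and gist):

    ;; (map inc) is the transducer, + the bottom reducer
    (transduce (map inc) + 0 [1 2 3])                             ;;=> 9

    ;; the r1 composition above: filter first, then increment, then add
    (transduce (comp (filter even?) (map inc)) + 0 (range 10))    ;;=> 25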


my heart somehow leaps when I see beautiful code like this.

Although I have tried to like Clojure in all earnestness, I am somehow put off by the noise from Clojure (PS: it's not only about the brackets).

Somehow the Python code and also the Haskell version look so succinct yet sparse enough to be read.

maybe I am hardwired that way ...


Excellent answer.


I'm also hoping for an informative answer to this question. Anyone?


I think it's just reduce, but in general it's possible to write many implementations of reduce. It could be applied to lazy sequences, observables (to get a future or a new observable), and many other reducible things.


Yes, it's not just the 'reduce' function. You can think of many kinds of jobs in terms of seeded left reductions. Here's an example of some of the functions that can apply a transducer to their internal 'step' operation:

https://gist.github.com/richhickey/b5aefa622180681e1c81

Note how one transducer stack is created and can be reused in many different contexts.
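(Roughly what the gist shows, as a sketch: one xform, several consumers. This assumes the transduce/into/sequence entry points that ship alongside transducers:)

    (def xform (comp (map inc) (filter even?)))

    (transduce xform + 0 (range 10))   ;;=> 30   eager reduction
    (into [] xform (range 10))         ;;=> [2 4 6 8 10]
    (sequence xform (range 10))        ;;=> (2 4 6 8 10)  lazy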


I think that a good way to understand transducers is to look at their implementation (shortened a bit). Here it is for map:

    ([f]
    (fn [f1]
      (fn
        ([result input]
           (f1 result (f input)))
        ([result input & inputs]
           (f1 result (apply f input inputs))))))
filter:

    ([pred]
    (fn [f1]
      (fn
        ([result input]
           (if (pred input)
             (f1 result input)
             result)))))
And it gets more interesting with take:

    ([n]
     (fn [f1]
       (let [na (atom n)]
         (fn
           ([result input]
              (let [n @na
                    nn (swap! na dec)
                    result (if (pos? n)
                             (f1 result input)
                             result)]
                (if (not (pos? nn))
                  (reduced result) ; a terminal value indicating "don't reduce further"
                  result)))))))
The transducer is supplied with the reducer next in the chain (f1) and returns a reducer function that gets fed with the reduced value by the preceding reduction (result) and the next element (input). Note how the take transducer maintains internal state with an atom. This could get a little tricky for more elaborate reductions, as how the internal state is maintained might have a significant effect on performance, depending on exactly how the reduction is performed. For example, if the reduction is done in parallel (say, with fork-join), then an internal state that's updated with locks (like refs) might significantly slow down -- or even deadlock -- the reduction.
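(A small usage sketch of the reduced early termination in the take code above: once take's quota is spent, the reduction stops and the rest of the input is never touched. This assumes the transduce/sequence entry points:)

    ;; only the first two elements are ever consumed, even from a huge range
    (transduce (take 2) + 0 (range 1000000))                 ;;=> 1

    ;; take composes with other transducers like anything else
    (sequence (comp (filter even?) (take 3)) (range 100))    ;;=> (0 2 4)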

AFAICT mapcat still only returns lazy-seqs.


Because mapcat's signature was not amenable to the additional arity, there's now also flatmap (note you can write the lazy collection version of any transducer fn using sequence as below):

    (defn flatmap
      "maps f over coll and concatenates the results.  Thus function f
      should return a collection.  Returns a transducer when no collection
      is provided."
      ([f]
       (fn [f1]
         (fn
           ([] (f1))
           ([result] (f1 result))
           ([result input]
              (reduce (preserving-reduced f1) result (f input))))))
    
      ([f coll] (sequence (flatmap f) coll)))
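(A quick usage sketch, assuming the definition above and the transducer-aware sequence:)

    (sequence (flatmap range) [1 2 3])   ;;=> (0 0 1 0 1 2)

    ;; the 2-arity is the familiar lazy-collection version
    (flatmap range [1 2 3])              ;;=> (0 0 1 0 1 2)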


Interesting. Continuing to try to analyze these in Haskell, I think this is a direct translation:

    -- z is just there to not clobber some standard prelude names
    type Red r a = r -> a -> r

    zmap :: (b -> a) -> Red r a -> Red r b
    zmap f f1 result input = f1 result (f input)

    zfilt :: (a -> Bool) -> Red r a -> Red r a
    zfilt p f1 result input = if p input then f1 result input else result

    ztake :: Int -> Red r a -> (r -> a -> Maybe r)
    ztake n f1 = run n where
      run n result input =
        let n'     = n - 1
            result = if n >= 0 then f1 result input else result
        in if n' == 0 then Nothing else Just result
I wanted to post this mostly to note that `map` is mapping contravariantly here. Is there something I'm missing? I had that occurring when I was playing around with this idea before looking at the source as well... to fix it I had to consider the type `Red r a -> r` so that the `a` was covariant again.


Also, you can get rid of the sentinel value by packing along a termination continuation value as well:

    type Red r a = (r -> a -> r, r)

    zmap :: (b -> a) -> Red r a -> Red r b
    zmap f (f1, z1) = (\result input -> f1 result (f input), z1)

    zfilt :: (a -> Bool) -> Red r a -> Red r a
    zfilt p (f1, z1) = (\result input -> if p input then f1 result input else result, z1)

    ztake :: Int -> Red r a -> Red r a
    ztake n (f1, z1) = (run n, z1) where
      run n result input =
        let n'     = n - 1
            result = if n >= 0 then f1 result input else result
        in if n' == 0 then z1 else result
This has the theoretical niceness of having `Red r a` just be the signature functor for linked lists.


I'm also assuming transduce and sequence are something like this

    type RT r a b = Red r a -> Red r b

    sequence :: RT [a] a b -> [b] -> [a]
    sequence xform bs = transduce xform (flip (:)) [] bs

    transduce :: RT r a b -> (r -> a -> r) -> r -> [b] -> r
    transduce xform cons nil =
      let (cons', nil') = xform (cons, nil)
      in foldr (flip cons') nil'
but I can't find the actual source yet. Again, there's a weird flipping going on induced by the contravariance. It makes me want to define RT-composition backwards.


Doesn't ztake need to live in State Int (or something similar)? As written, I don't see how it passes the 'n' onto the next call. It seems that the transducer returned by ztake n for any n > 1 will always pass on (f1 result, z1).

Thank you for posting this though, helpful to see someone work through it.


Here's how to work it without even using the state monad: https://news.ycombinator.com/item?id=8149200

It's quite a bit different from this formulation.


Yeah, I'm working through it in more depth now (I just fired this off last night quickly without a lot of thought). There's more to be said about take. My implementation here totally does not work.


May be worth linking back to this here: http://www.reddit.com/r/haskell/comments/2cv6l4/clojures_tra...


As someone who tried Clojure and failed, serious question: Does anyone actually use all these crazy features/patterns that keep getting added/discovered and talked about?

I ask because even though I can imagine someone smart mastering these things and programming faster, I can't imagine a second person being able to understand his code, maintain it, and generally be productive. I imagine the second person losing a lot of time trying to understand what is going on, or even thinking he understood but in reality he didn't and messing things up.

So how do you even form a Clojure team?


It's a mindset change.

Other than macros, which fundamentally alter the flow of code in a very non-standard way, the rest of these "crazy patterns and features" in Clojure are just like the crazy libraries used by typical Java/Ruby/C# developers, only thousands of times simpler. If I came to you and said, "does anyone even use these tens of thousands of libraries in Maven? How do other developers work on this afterwards, they'd have to learn all these new APIs!" I'd likely get a response like, "they'd just be expected to", with a chuckle. The mindset I've seen a lot is that language features are "too hard" to learn and should be avoided, but library complexity is beyond reproach and is rarely questioned.

Clojure the language takes the approach that it's better to provide a dozen simple but incredibly composable tools that can easily replace a lot of duplicated boilerplate in other languages. Take these transducers: in Java/C# they'd likely be one of the design patterns that needs a whole set of classes to do what he shows in a few characters. Would you rather learn to write, read, and maintain a handful of classes, or just learn what those few characters mean? I don't get paid by the line, and any abstraction built into the language like that is a few more classes I don't have to maintain and possibly implement incorrectly in some subtle way.

Like I said, it's just a mindset change. I know a company that only uses Clojure, and they hire a lot of entry-level developers who haven't yet gotten stuck in a mindset that "languages are too hard; libraries are easy". They have great success with that, and their entry-level guys are up to speed much faster than those in my shop, using .NET, where they have to learn a new language AND a mountain of boilerplate code using dozens of libraries and frameworks.


> Would you rather learn to write and read maintain a handful of classes

I appreciate your answer, but that's not what I do with libraries. I don't have to read their internals, let alone write them.


I may have communicated that badly. A pattern like transducers is one that everyone needs. Everyone either uses it built into the language or writes it out long form. Design patterns are a way of giving you instructions for writing it out long form. So, you either write it yourself or use it built in.

Some people do not want to learn language tools that will reduce such boilerplate, but those people often are (ironically) happy to learn lots of libraries. Libraries by definition can't remove any more boilerplate than you can; they are just as limited by the language, so oftentimes they just save the user some writing, rather than reducing (haha) it across a whole codebase like a built-in feature can.


I work in a Clojure shop with about 10 other developers. Until very recently, we hired folks with zero exposure to Clojure or even functional programming. Do we use all the new bells and whistles that Rich and the folks at Cognitect develop for Clojure? Yes, but we're not jumping into the existing code base and refactoring everything to use core.async or reducers. Usually, one or two guys will use it in a less public, less critical piece of the infrastructure as a real-world application and then we do a code review with the whole team.

So, how do you build a Clojure team, exactly? Don't try to exclusively hire developers who have already been exposed to it or have master's degrees from elite universities; focus on finding people who love to program, who are genuinely interested in improving their own abilities, but can clearly hold their own in the language they currently use. You will soon have a team of developers who love the challenge of keeping up with what guys like Rich Hickey (and Cognitect) are doing. We have been very successful with this.


Sounds like the job I'm looking for :)


Something I've noticed about Clojure code is that people tend to have taken the effort to express a problem in a concise and logical way, even for very complex problems. I'm not sure if it's just because Clojure attracts people who value good code, or because Clojure provides the tools to do it, but it sure is nice. Better to spend an hour on a 5-line function than a 50-line one, if they accomplish the same thing in the end.


I don't think that 5-line function will be understandable by the next programmer who has to deal with it though. That's my point. It probably involves a Fibonacci or other math trickery that not everyone might have fresh in their minds.


You may be mistaking patterns for 'clever hacks'. If the 5-liner is about writing the shortest possible code, then it certainly isn't beneficial to anyone (except for the original author, who has all the fun of coming up with a 'clever' solution). But useful abstractions are a different matter altogether.

Just look at Go channels or at reactive programming patterns. If applied to the right problem (i.e. a problem they help solve), they allow you to solve that problem in a very concise and expressive way.

Having such patterns as a part of the language just makes them more popular and reusable.

And as for your point, don't you think it's easier to spot a 5-line pattern than a pattern that is spread out over several classes and 200 or so lines of code? It all boils down to being able to see patterns or abstractions once you know them, and having a common language across the team. Functional 'patterns' are just (arguably) more succinct than object-oriented ones (and this comes from someone who has been programming in C++ and Ruby for nearly 20 years).


In my experience the answer is yes, but there isn't necessarily a hurry. I work in a small shop with 3 other Clojure developers who vary in experience from pretty green (I'd dabbled a bunch but have only been writing it professionally for about 4 months) to wizard (people who are going on two years and have written the majority of the libraries we use, who are fluent in pretty much everything all the way up the stack). The learning curve for me has been like this:

Stage 1: Deep end. Given a task, I crack open an existing application and spend half a day to a day just trying to read it and figure it out. I struggle with my notions of state and immutability, make and test some changes until I think I have it figured. It's slow and arduous, at this point I'm reading Chas Emerick's book and mostly writing very standard clojure, dipping into and experimenting with things like multimethods and atoms.

Stage 2: Local maximum. I'm comfortable with some of the better patterns to pass around and manage state, make a lot of use of anonymous functions, and a lot of my code looks like a let to establish useful local variables followed by a return. I get comfortable with writing programs either from the top down or bottom up, slowly building to a point where everything ties together. I start dipping my toes into core.async.

Stage 3: I get very comfortable with core.async and appreciate channels as a really nice way to separate concerns, you can build a whole bunch of neat little machines with channels. Sometimes I go a little overboard and roll something back to just using functions.

Stage 4: Start writing code where you realize you could use a macro. Macros feel about as hard as stage 1, with careful repl development and scrabbling to build some kind of conceptual framework where the whole thing hangs together. Transducers come out, I read about them and understand them conceptually, and get pretty excited about the use of pipelines, which present a much nicer way to chain channels together (where before you might use a map or write custom functions to do channel transformations, but they don't prove to be very composable).

That's pretty much where I'm at right now. I'm comfortable, but there's still a lot of stuff I haven't really jumped into. One nice thing about most of these constructs is that if my conceptual grasp is wrong, things usually don't work at all, and that's a good time to ask questions to others.

As far as building a team? If progressing through those stages and having days that are frustrating followed by some big breakthroughs seems appealing, that's probably a good indicator that you'd fit into this sort of environment. When I was dabbling I made some progress and started to understand some of these mechanisms, but sometimes you need a problem staring you in the face that requires some fancy moves to solve to motivate you to push past what you know.

It seems daunting at first, but I sort of think that all of programming is. Practical knowledge can be acquired through experience, but it's expanding how you work theoretically that is the hardest and most rewarding.

Just a datapoint, anyway.


This sentence is important: "There will be more explanations, tutorials and derivations to follow, here and elsewhere." If a team ever adopts transducers, it'll generally be after a lot of explanations exist and the feature has been made easy to understand.

Clojure teams can defend themselves against exotic features like any other team: code review, conventions, etc. If I pull in some weird new feature, others had better be ok with it.

(Obviously, what I'm saying doesn't apply to every team in the world; presumably there's a spectrum of ways to deal, or fail to deal, with this problem.)

In my view, Clojure teams are formed via a political process, like any other programming language adoption.


I just saw this morning that many functions in core.async are marked "Deprecated - this function will be removed. Use transformer instead." I guess transducers will provide a generic replacement for those. Looking forward to seeing some examples.


This looks exciting, but I'm confused about the decision to add extra arity to collection-manipulating functions. "filter" that returns a collection or a transducer depending only on arity seems a little counter-intuitive.


It's a pretty well considered tradeoff in my opinion - no existing code breaks while at the same time all the transformation functions now have the same semantic interpretation when used in different contexts. The alternative would be to replicate the notion of `map`, `filter`, etc. again and again as occurred with reducers and higher level core.async operations on channels.


We just did exactly the same thing in the Wolfram Language, for similar reasons (we called these things "operator forms" rather than "transducers") [0]

One major side effect has been to mitigate the kinds of heavy nesting you see in functional languages like WL and Clojure. Personally I think the resulting code resembles the phrase structure of English much more closely. It's a huge readability win.

The original motivations for operator forms were in fact writing Queries [1] against Datasets [2], for which you want to represent complex operations independent of their execution.

[0] http://reference.wolfram.com/language/guide/FunctionComposit...

[1] http://reference.wolfram.com/language/ref/Query.html

[2] http://reference.wolfram.com/language/ref/Dataset.html


I'm not seeing anything that looks like a reducing function transformer there. That all looks like variants of ordinary function composition, currying and partial application. Is there someplace that shows 'operator forms' acting as functions with this signature: (x->a->x)->(x->a->x)?


WL doesn't yet have a laziness/streaming/reducing framework, but the prototype we're working on uses 'operator forms' like Select, Map, GroupBy and so on in the way you describe.

I don't think the exact details are the same, because our operators don't actually evaluate to transformers (they remain totally symbolic). Rather, the conversion of composed operators to an actual reducer pipeline happens lazily 'at the right time', which I think will make optimization a bit easier to express.


So if I understand correctly, you would arrive at a symbolic expression like (->> (map f) (map g) x) and directly rewrite it to (->> (map (f . g)) x), rather than having to manipulate the resulting data-structures?


Yeah, it's pretty easy to turn

    Map[#^2&] /* Select[PrimeQ] /* Select[OddQ] 
into

    Select[OddQ[#] && PrimeQ[#]&] /* Map[#^2&] 
and so on via rules that look like:

    Select[s1_] /* Select[s2_] :> Select[s1[#] && s2[#]&] 
    Map[f_ ? SideEffectFreeQ] /* s_Select :> s /* Map[f]
    Map[Extract[p_Integer | p_Key]] :> Extract[{All, p}]
I'm already doing a bunch of that for Dataset.

But in reality you need to move to a DSL for complex enough rules, and then Clojure and WL will be on the same footing (Clojure even a bit stronger, maybe, WL doesn't really have a proper macro system).

Does Clojure core do or allow for any of that kind of optimization already?


I don't know about Clojure, but Common Lisp provides something called Compiler Macros[0]. They basically allow the programmer to define two versions of a procedure. One that is a macro, and one that is an actual procedure. The macro will expand only at compile time (it can also specify to call the procedure instead of expanding), and the procedure will be called only at run time. I suggest you look at [0] for some examples of how it is possible to optimize something as simple as squaring a number.

[0] http://clhs.lisp.se/Body/m_define.htm


It appears that the operator forms he's talking about are things like Select[EvenQ], which would become "transducers" if I could do something like:

    even_selector = Select[EvenQ]
    incrementor = Map[Inc]
    decrementor = Map[Dec]
    even_incremented_selector = incrementor * even_selector (????)
    odd_selector = even_incremented_selector * decrementor

    even_selector[{1,2,3,4,5,6,7}]             => {2,4,6}
    even_incremented_selector[{1,2,3,4,5,6,7}] => {2,4,6,8}
    odd_selector[{1,2,3,4,5,6,7}]              => {1,3,5,7}

However I'm not entirely sure I've understood things correctly, I just stared at it for a while and need to get back to work...

(edit: HN does _not_ like formatting)


No, because all that is required for operator forms to do what you write is just ordinary function application. Which they already do just fine.

Rich is talking about something deeper, in which operators like filter become transformers that themselves operate on stateful processors (reducers) to produce new stateful processors that have incorporated the action described by the original operator.


It seems very weird to me; it looks like currying (or partial application) but actually you're getting two fundamentally different things. Partially applying `map` to one argument with `partial` (or just doing `#(map f %)`) gets you one thing; non-partially applying map to one argument gets you something totally different.


My argument is: why not make it look like this:

    (comp (transducer map inc) (transducer filter even?))

Of course, it's more typing and doesn't look as nice as single-arity map, but it goes along quite well with (partial map inc).

I may be wrong, but to date Clojure didn't have a function that produces completely different things based on arity, did it?


Not sure how what you're suggesting could possibly work unless transducer has a table of fn to transducer - this doesn't sound like a good idea. I recommend looking at the implementations which have landed in Clojure master to see what I'm talking about.


You could just make them transducer-map, transducer-filter, etc.

There is no reason the exact same name should be used for two different things. It is a lot harder to tell the difference between (map inc) and (map inc xs) than it is to tell the difference between (transducer-map inc) and (map inc xs).


I'm sorry, but from the OP I can't be at all sure I can understand notation such as:

     ;;reducing function signature
     whatever, input -> whatever
or

     ;;transducer signature
     (whatever, input -> whatever) -> (whatever, input ->   whatever)
Or, in mathematics there is some notation

     f: A --> B
where A and B are sets and f is a function. This notation means that for each element x in set A, function f returns value f(x) in set B. Seems clear enough. Maybe the notation in the OP is related? But I can't be sure I'm guessing correctly.


It's not a formal notation. It's talking about a pattern in function signature. The function takes in some parameters (whatever, input) and spits out an output ( -> whatever ).

The "reducing function signature" basically is just the function signature of a "reducer" (or "fold") function in the map/reduce (or map/fold) pattern.

The "whatever" is kind of sloppy and confusing. It's the accumulating memo parameter of a reducer function. To be precise, the reduce(list, reducer, init_value)->list function takes in another function - the reducer, which has the function signature of reducer(current_element, accumulating_memo)->new_memo to run against each element of the list.

The "transducer" is basically a function to take in a reducer and spit out another reducer.

E.g. you have a reducer that sums up all the elements of a list. A doubling transducer would take that reducer and produce another one that doubles each element before summing them up.
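(That doubling example written out as a minimal Clojure sketch, ignoring the extra arities real transducers also carry:)

    ;; a reducer that sums is just +: (fn [acc x] (+ acc x))
    ;; a doubling transducer: reducer in, new reducer out
    (defn doubling [rf]
      (fn [result input]
        (rf result (* 2 input))))

    (reduce (doubling +) 0 [1 2 3 4])   ;;=> 20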


Thanks. I just looked up "map/reduce" on Wikipedia. Gee, I've been speaking 'prose' (an old joke) all along!

So, I was procrastinating from working on my code, and there I have some data base data split, for some positive integer n, into n 'partitions'. The intention, later, is to use n servers, one server for one partition.

Then I have some data X to be 'applied' to all the data in all the partitions, and from each partition I get back some data, say, for partition i = 1, 2, ..., n, from partition i I get back data Y(i). Then I combine all the data Y(i) to get my final results Z that I really want.

I've programmed the communications, etc., but it looks like I've just reinvented the wheel, i.e., map/reduce. Looks like a library of good map/reduce software could save me from programming the low level details (serialize to an array of byte an instance of a class and send the array via TCP/IP sockets) and also get some fault tolerance. Sounds good. So, I reinvented the 'pattern'.

For the OP, I'm not so clear on how in most compiled languages the 'transducer' could work; that is, it sounds like the programming language would need some means of 'reflection', 'entry variables', 'delegates', dynamically written, compiled, and linked code, interpretative code, or some such. There, 'static analysis' of 'dynamic code' seems a bit clumsy!


Actually "transducer" can be done with straight function composition. It would work in any language supporting high order function, a fancy way of saying passing function around as argument or return value.

e.g. in Javascript (I'll be overly verbose for illustration)

    function mySumReducer(sum, value1) { 
        return sum + value1;
    }
    function myTimesReducer(product, value1) { 
        return product * value1;
    }

    [1, 2, 3, 4].reduce(mySumReducer, 0) gives 10
    [1, 2, 3, 4].reduce(myTimesReducer, 1) gives 24


    function myDoubler(x) {
        return x * 2;
    }

    function valueTransducer(originalReducer, valueEnhancer) {
        var newReducer = function(memo, value) {
            var newMemo = originalReducer(memo, valueEnhancer(value));
            return newMemo;
        }
        return newReducer;
    }

    var myDoubleSumReducer = valueTransducer(mySumReducer, myDoubler);
    var myDoubleTimesReducer = valueTransducer(myTimesReducer, myDoubler);

    [1, 2, 3, 4].reduce(myDoubleSumReducer, 0) gives 20
    [1, 2, 3, 4].reduce(myDoubleTimesReducer, 1) gives 384
valueTransducer is a generic transducer that can be used to apply an extra function to the value during the reduction process. Voila, you got a transducer in Javascript!

To make it more useful,

    function fancyTransducer(originalReducer, valueEnhancer, memoEnhancer) {
        return function(memo, value) {
            return memoEnhancer(originalReducer(memo, valueEnhancer(value)));
        }
    }
This generic transducer can transform the value and memo of the original reducer. Also since the transducer returns another reducer, you can chain it up by calling transducer again with it using different enhancers. The wonder of functional composition.

It's nothing fancy once it's laid out. It's just a useful programming pattern.


Actually this doesn't really capture the idea properly - you need a "reducerComposer" function that can compose any functions that look like mySumReducer / myTimesReducer (take a memo and a value). You've just manually composed them in myDoublingTransducer, which indeed is just straight function composition. Your reducerComposer needs to function if the first reducer returns nothing (e.g. filter) or multiple values (e.g. flattening a list of lists). One way to think of it is that you are "pushing" values through a chain of reducers, and at any step you might produce 0, 1, or multiple values for the next step.


Sorry I was editing before I saw your comment. Note: There was a myDoublingTransducer before.

On a separate note, a reducer returning nothing doesn't make sense. A reducer might not add the current value to the memo (filter) but it always returns some memo for the next step. Also a reducer would not return multiple values; it returns just one value, the memo. If there are multiple values produced at a step, they would have been added to the memo. The function signature of a reducer is (s,v)->s, which means it always returns one thing.


> A reducer might not add the current value to the memo (filter) but it always returns some memo for the next step.

I think the idea is that your reducer function shouldn't have to care about steps or memos. e.g. I should be able to do

    filter(function(x) { return x < 10; });
and get back some thing that I can then pass in to map, or to reduce, or compose with other things.


Hmm, isn't that just the filter predicate function? Not the reducer itself? Looks like the filter() function constructs a reducer using the predicate. The new reducer adds the filtering functionality with the predicate.

Looking at the code for filter (copied from above)

filter:

    ([pred]
    (fn [f1]               <-- builds the new reducer; f1 is the inner reducer
      (fn                  <-- this is the new outer reducer
        ([result input]    <-- outer's params: result is memo; input is value
           (if (pred input)    <-- pred is user provided, your function
             (f1 result input) <-- calls the inner reducer; returns its result.
             result)))))       <-- call filtered, returns result unchanged.
The reducer still behaves as a reducer, returning one result.

A Javascript version would be:

    function filterTransducer(innerReducer, pred) {
        return function(memo, value) {
            if (pred(value))  
                return innerReducer(memo, value)
            return memo;
        }
    }
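(And the real Clojure filter transducer applied straight to a reducer does the same job; a sketch, assuming the 1-arity filter from the post:)

    ;; ((filter even?) conj) is a reducer that conj'es only even inputs
    (reduce ((filter even?) conj) [] (range 10))   ;;=> [0 2 4 6 8]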


a la Haskell:

    ;;reducing fn
    x->a->x

    ;;transducer fn
    (x->a->x)->(x->b->x)


or, to steal ashley yakeley's marvellous quip, "every sufficiently well-documented lisp program contains an ML program in its comments" (:


lol


So there's some common ground between map, filter, flatmap, etc and these transducers allow you to compose those common-ground representations? I mean, you can define `map` as a callable chain of `(key-by #(true)) -> (map-apply fn) -> (filter-trues)` and `filter` as `(key-by predicate) -> (map-apply identity) -> (filter-trues)` where key-by, map-apply and filter-trues are some low-level primitives? Is that correct?


Precisely the opposite - these functions compose because they are representation-free.


More precisely:

  -- Left reduce
  type Reducer a r = r -> a -> r

  -- Here's where the rank-2 type is needed
  type Transducer a b = forall r . Reducer a r -> Reducer b r


Clojure transducers are exactly signal functions from Haskell FRP literature, for those interested in such a connection.


I'm not yet seeing that, given:

    Signal a :: Time -> a
    SF a b   :: Signal a -> Signal b
    thus (Time -> a) -> (Time -> b)

not exactly:

(x->a->x) -> (x->b->x)

Can you point me to a paper that makes the connection clear?


In the non-continuous FRP literature[1], i.e. the kind you actually implement, SF a b = [a] -> [b], which is isomorphic to Fold a -> Fold b, where Fold a = (exists s. (s, s -> (a, s))), which is isomorphic to the type the author writes for transducers:

  ;;transducer signature
  (whatever, input -> whatever) -> (whatever, input -> whatever)
Signal functions are easy to program in Haskell using arrow syntax, and many libraries already exist for dealing with signal functions.

[1]: Nilsson, Courtney and Peterson. Functional Reactive Programming, Continued. Haskell '02. http://haskell.cs.yale.edu/wp-content/uploads/2011/02/worksh... (see section 4)

edit: formatting


Tentative benchmark results have surfaced: https://github.com/thheller/transduce-bench

Add salt according to taste.


The benchmark is wrong atm, the compared functions do not yield the same result.

(comp (map inc) (filter even?)) means filtering first, then incrementing.

The opposite for (->> data (map inc) (filter even?) ...

- which is not the same. And of course, it also means that the latter has to increment the double amount of numbers.

EDIT: It was me who was wrong, thanks for the corrections. Pitfall successfully identified :)


It seems counterintuitive, but composing transducers yields a reducing function that runs the transformation steps left->right. What you are composing is the reducing function transformations (right to left, ordinary composition), but their result, having been built inside-out, runs the steps outside-in. So (comp tx1 tx2 ...) runs in the same order as (->> xs tx1 tx2 ...).


Nope, that's the funny feature of transducers. Like lenses in Haskell, transducers compose in reversed order.

    (sequence (comp (map inc) (filter even?)) (range 10))
    ;;=> (2 4 6 8 10)
    (->> (range 10) (map inc) (filter even?))
    ;;=> (2 4 6 8 10)
Both test functions in transduce-bench return 250000500000.


This is about as close as I could get in Haskell so far. It uses a slight twist on (x -> a -> x) called a Fold (which has a lot of great properties—it's a profunctor, an applicative, and a comonad).

Nicely, this construction lets us write `take` purely!

    {-# LANGUAGE GADTs         #-}
    {-# LANGUAGE RankNTypes    #-}
    {-# LANGUAGE TypeOperators #-}

    import           Control.Arrow
    import           Control.Category
    import qualified Prelude
    import           Prelude hiding (id, (.))

    data Fold a r where
      Fold :: (a -> x -> x) -> x -> (x -> r) -> Fold a r

    data Pair a b = Pair !a !b

    pfst :: Pair a b -> a
    pfst (Pair a b) = a

    psnd :: Pair a b -> b
    psnd (Pair a b) = b

    newtype (~>) a b = Arr (forall r . Fold b r -> Fold a r)

    instance Category (~>) where
      id = Arr id
      Arr f . Arr g = Arr (g . f)

    amap :: (a -> b) -> (a ~> b)
    amap f = Arr (\(Fold cons nil fin) -> Fold (cons . f) nil fin)

    afilter :: (a -> Bool) -> (a ~> a)
    afilter p = Arr $ \(Fold cons nil fin) ->
      let cons' = \a x -> if p a then cons a x else x
      in Fold cons' nil fin

    fold :: Fold a r -> [a] -> r
    fold (Fold cons nil fin) = fin . spin where
      spin []     = nil
      spin (a:as) = cons a (spin as)

    asequence :: (a ~> b) -> ([a] -> [b])
    asequence (Arr f) = fold (f (Fold (:) [] id))
    
    aflatmap :: (a -> [b]) -> (a ~> b)
    aflatmap f = Arr $ \(Fold cons nil fin) ->
      Fold (\a x -> foldr cons x (f a)) nil fin
    
    atake :: Int -> (a ~> a)
    atake n = Arr $ \(Fold cons nil fin) ->
      let cons' = \a x n -> if n > 0 then cons a (x (n-1)) else x n
      in Fold cons' (const nil) (\x -> fin (x n))


Brilliant, this clears up the usage a lot more than the rest of the prose in the comment thread.


Not sure I understand - so Clojure is getting first class support for lazy collections and curried combinators? Or am I missing the important part?


Many Clojure abstractions need things like map, filter, concat, etc. Reducers are eager, lazy-seqs are lazy, and core.async channels are lazy and push-based. This is a way to unify all these abstractions so that you can write a single map/filter/concat transform once, and use it in many different ways.
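(A sketch of that reuse, assuming the transducer-accepting arities that into and core.async's chan gained alongside this work:)

    (def xf (comp (map inc) (filter even?)))

    ;; over a collection, eagerly:
    (into [] xf (range 10))                 ;;=> [2 4 6 8 10]

    ;; the very same xf over a core.async channel:
    (require '[clojure.core.async :as a])
    (def c (a/chan 1 xf))
    (a/onto-chan c (range 10))              ;; feed and close the channel
    (a/<!! (a/into [] c))                   ;;=> [2 4 6 8 10]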


Yeah, when looking at core.async transformers and the reducer library, it seemed clear that there was some generality in there that could be factored out. Glad someone smarter than me figured out what it was.


> This is a way to unify all these abstractions so that you can write a single map/filter/concat transform once, and use it in many different ways.

I don't understand why these things aren't all unified in the first place, it's just function composition we're talking about here right?

Edit: I mean it's clearly not just simple function composition, but I don't understand why not

    (def xform (comp (map inc) (filter even?))) ; xform is a transducer
    (defn xform [aseq] (->> aseq (map inc) (filter even?))) ; xform is a fn
Why do these have to be different things? Why is there a need for machinery to do what fn composition is supposed to do?


Your typical definition of map, filter etc includes concrete usage of e.g. lists. These don't. Transducers are not just currying or partial application of the map function over lists, they isolate the logic of map, filter etc from lists or any particular context, allowing them to be used in very different (e.g. non-collection) contexts.
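(The difference in a sketch: partial application still bakes in the lazy-seq context, while the transducer form names only the transformation and lets the context be chosen later:)

    ;; partial application: always a lazy seq over a collection
    (def inc-all (partial map inc))
    (inc-all [1 2 3])                  ;;=> (2 3 4)

    ;; transducer: the context comes later
    (def inc-xf (map inc))
    (into [] inc-xf [1 2 3])           ;;=> [2 3 4]
    (transduce inc-xf + 0 [1 2 3])     ;;=> 9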


> they isolate the logic of map, filter etc from lists or any particular context

Is it correct to say that the relationship between the new forms of map/filter/etc, reduction functions, and transducers is something like this:

"traditional" (map f l) is equivalent to (foldl (mapR f) l []), where (mapR f) is the "reduction function" corresponding to map. (mapR would still be list specific)

the new (map f) is a transducer that takes a reduction function and returns another reduction function; given idR, the "identity reduction function" for foldl such that (foldl idR l []) = l, ((map f) idR) = (mapR f).

Furthermore, given reduction functions mapRR such that (foldr (mapRR f) l []) == (map f l) and idRR such that (foldr idRR l []) = l, then ((map f) idRR) = (mapRR f). (Because all the list-specific things in the output reduction function come from the input reduction function, the transducer (map f) doesn't need to know anything about lists, so it can be used in other contexts as well -- one minor-but-perhaps-easier-to-illustrate aspect of that is that, even when used with lists, (map f) is independent of the folding direction, unlike the reduction functions mapR and mapRR.)

(I know this is barely even scratching the surface of the applications, but I'm trying to confirm that I've got the concept right.)
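(In Clojure terms, with conj playing the role of the identity reduction function, a quick check of that equivalence; a sketch:)

    ;; idR for vectors is conj: reducing with it rebuilds the input
    (reduce conj [] [1 2 3])                ;;=> [1 2 3]

    ;; ((map f) idR) is then the reduction-function form of map
    (reduce ((map inc) conj) [] [1 2 3])    ;;=> [2 3 4]
    ;; same result as (map inc [1 2 3]), and nothing above except conj
    ;; and [] knows anything about vectors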


> isolate the logic of map, filter etc from lists or any particular context

so Functors (fmap)? fmap still uses regular old fns, no machinery.

http://clojuredocs.org/clojure_contrib/clojure.contrib.gener... http://learnyouahaskell.com/making-our-own-types-and-typecla...


I feel the same confusion as you. In Haskell terms, what is it that Rich has implemented, or perhaps even invented?


I'm not sure I grok this, but I think that the main points are:

- reducers can work on tree structures, and thus can exploit parallelism. This would be like using a map that requires only the Foldable typeclass

- In Haskell you have stream/vector fusion, but it's not obvious when GHC will actually exploit it, so you might want to use something like Control.Monad.Stream or Data.Vector. In theory it might be generalized to all Foldables, but in practice, for now, it might be a good enough compromise to stick to a limited number of types (the ones that support such transducers)

So: nothing terribly new/groundbreaking, but it might bring something like stream fusion to the masses (of clojure developers :P )


Comparing transducers to stream fusion is far narrower than the scope of their applicability.


They are more comparable to Oleg's Enumerators (http://okmij.org/ftp/Haskell/Iteratee/describe.pdf), in that you compose a series of computations and then push data through them. The type signature is similar:

    type Iteratee el m a -- a processor of 'els' in the monad 'm' returning a type 'a'

    type Enumerator el m a = Iteratee el m a -> m (Iteratee el m a)
The Enumerators library is complicated by the presence of monads and by trying to automatically close handles when the stream is processed. In some ways it seems that the goal of solving the lazy IO problem led to missing a much simpler abstraction. Transducers seem to be simpler and more focused on just abstracting stream/map-reduce computation.


While not mentioned in the blog post, the transducers implementation supports both early termination and result completion/flushing/cleanup.
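(The shape being referred to: a minimal sketch of the three arities a transducer supplies on the reducing function it returns:)

    (defn identity-xf [rf]
      (fn
        ([] (rf))                             ;; init
        ([result] (rf result))                ;; completion: flush/cleanup
        ([result input] (rf result input))))  ;; step; may return (reduced result)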


there does seem to be some overlap with stream fusion -- which was all about exploiting the optimization opportunities when separating the collection operation from the kernel performed on each element.

We called the bits in the middle "step functions" , which could be combined with "consumers", "producers" and "transformers".

And the algebra generalizes hugely (not just collections) but to things like concurrent processes, data flow programs etc.

http://metagraph.org/papers/stream_fusion.pdf

Things to think about in a non-Haskell setting: how do you prevent reordering of side effects? Can exceptions/non-termination being reordered be observed?


By "map in a non-collection context", what would be an example? Say, mapping a boolean negation function over the bits of an integer, to produce its complement? How does that look with transducers?


Well, the most obvious value to me is that you can create algorithms that defer their action. So instead of having a function that operates on a collection, you can have a function that will operate on a collection at a future date, even if the collection does not yet exist. (That's what the Clojure reducers library does.)

(And if you say "deferring an action is just another term for 'functions'" then that's exactly the point, lifting more algorithmic logic into composable functions)


Alan Perlis Epigram #2.


Isn't there some overlap between this and extend-protocol? To my vague understanding this is from the functional composition angle instead of a type, but otherwise don't they accomplish the same thing?


I think this is just announcing that the Clojure core libraries are gaining another arity for many functions specifically for this usage so that you don't have to use (partial) or the short-hand #() syntax to use this pattern. I assume the goal is more community awareness and cleaner syntax for the re-use of this pattern.


No, (map f) is not curried map. It returns an entirely different thing - a function of reducing function to reducing function, aka a reducing function transformer, aka a transducer.


Providing the signature of this new `map` function (e.g. as with Haskell's fmap: http://www.haskell.org/hoogle/?hoogle=fmap) would certainly go a long way towards helping people understand what this `map` does.


These sigs are for the arities below only.

map f: (a->b)->(x->b->x)->(x->a->x)

filter pred: (a->bool)->(x->a->x)->(x->a->x)

flatmap f: (a->[b])->(x->b->x)->(x->a->x)

etc.


OK, maybe we're getting somewhere. Let me try to write this `map`, just to see. I'm using Scala, so I can put some types, and have the compiler yell at me if I'm doing something overtly wrong (I need all the help I can get!).

For clarity, let's define a type alias for reducers:

    type Reducer[X, A] = (X, A) ⇒ X
Let's define `map` to match the type definition you provided. And with that type definition, I only see one way in which the function can be implemented. So it must be:

    def map[X, A, B](f: A ⇒ B): (Reducer[X, B] ⇒ Reducer[X, A]) =
        (redB: Reducer[X, B]) ⇒ (x: X, a: A) ⇒ redB(x, f(a))
How can I use this? Let's try the following:

    def addup(zero: Int, a: List[Int]) = a.foldLeft(zero)(_ + _)
    def parseList(a: List[String]) = a.map(_.toInt)

    map(parseList)(addup)(1, List("7", "8"))
This returns 16. OK, parsing the list, and adding up starting from 1. But it doesn't look to me like `map` implements anything like the usual semantic of map. It just converts the data structure, and applies the reducer. What am I missing here?


So, a transduceMap implementation would be this I guess?

  transduceMap :: (b -> a) -> (acc -> a -> acc) -> (acc -> b -> acc)
  transduceMap f = \reduceFn -> \acc el -> reduceFn acc (f el)
lambda added for clarity (no pun intended), however types are easier to match when using this syntax:

  transduceMap :: (b -> a) -> (acc -> a -> acc) -> acc -> b -> acc
  transduceMap f reduceFn acc el = reduceFn acc (f el)


> "these transformers were never exposed a la carte, instead being encapsulated by the macrology of reducers."

What does 'macrology' mean in this context? Is this a common usage? Or a novel application of a word that ordinarily means "long and tedious talk without much substance"


It means that reducers were some macro sugar atop the underlying mechanism described in the post. See https://github.com/clojure/clojure/blob/master/src/clj/cloju...


... so 'macrology' means "use of macros" rather than "long and tedious talk without much substance"? Is this usage specific to Clojure, or to all languages that have macros? It seems like a pretty bizarre repurposing of a word that already has a very different meaning, and I wonder how widespread it is. Are we too late to stop it?


It applies universally and you're probably decades too late to stop it. http://www.catb.org/jargon/html/M/macrology.html


Well, it's the use of macros to avoid having to write code that has the characteristics of macrology. If nothing else, it's sort of strangely recursive, like perhaps many "complex macros" are...



