
Transducers are coming to Clojure - siavosh
http://blog.cognitect.com/blog/2014/8/6/transducers-are-coming
======
tel
This sort of reminds me of the Church-encoded form of a list.

    
    
        {-# LANGUAGE RankNTypes #-}
    
        newtype Fold a = Fold (forall r . (a -> r -> r) -> r -> r)
    
        fold :: [a] -> Fold a
        fold xs = Fold (spin xs) where
          spin []     cons nil = nil
          spin (a:as) cons nil = cons a (spin as cons nil)
    
        refold :: Fold a -> [a]
        refold (Fold f) = f (:) []
    

Notably, since `fold` and `refold` form an isomorphism, everything we can do
to `[a]` we can also do to `Fold a`

    
    
        map :: (a -> b) -> (Fold a -> Fold b)
        map x (Fold f) = Fold $ \cons nil -> f (cons . x) nil
    
        filter :: (a -> Bool) -> Fold a -> Fold a
        filter p (Fold f) =
          Fold $ \cons nil -> f (\a r -> if p a then cons a r else r) nil
    

but all of this work is done without concrete reference to `(:)` and `[]`...
you instead just use stand-ins I've been calling cons and nil. What's nice
about this is that `Fold` can be used to build anything which can be
"constructed from the left"

    
    
        import qualified Data.Set as Set
    
        foldSet :: Ord a => Fold a -> Set.Set a
        foldSet (Fold f) = f Set.insert Set.empty
    

It's sort of dual to the stuff I was exploring in Swift here [0]. It also
creates laziness for free because you can't really execute the chain until the
end—Church-encoding is really a form of continuation passing.

The downside of this idea is that each time you "consume" a Fold you redo
work—there's no place to put caching necessarily.

Maybe that's what they're solving with the Fold transformers representation.

[0]
[http://tel.github.io/2014/07/30/immutable_enumeration_in_swi...](http://tel.github.io/2014/07/30/immutable_enumeration_in_swift/)

~~~
richhickey
Kind of. The idea is to get out of the context of the 'whole job' (the ->r->r
bit above) and focus on transformations of the step function (a->r->r) ->
(b->r->r) {using your arg order above}. Not talking about the whole job (i.e.
the source and result) makes for much more highly reusable components,
especially when the jobs don't produce concrete results but, e.g., run
indefinitely, like channel transformations.
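
Concretely, back in Clojure's own argument order, the shape is roughly this
(a sketch; the real implementation also handles 0- and 1-arity cases, and the
name `mapping` is just to avoid clobbering core's map):

    
    
        ;; a map-style transducer: takes a step fn, returns a new step fn,
        ;; with no mention of any source or result collection
        (defn mapping [f]
          (fn [step]
            (fn [result input]
              (step result (f input)))))
    
        ;; the same transformer works wherever there's a step to wrap:
        (reduce ((mapping inc) +) 0 [1 2 3])  ;=> 9
    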

~~~
tel
Yeah, I'm less sure about the properties as you go this way. You ought to be
able to get an Arrow out of it and it's a pretty natural idea.

------
vbit
I'm not quite sure what this means, so here's my attempt to translate this
into Python. A reducer is a function such as `add`:

    
    
        def add(sum, num):
          return sum + num
      

Of course you can plug `add` directly into `reduce(add, [1, 2, 3], 0)`, which
gives `6`.

A transducer is an object returned by a call such as `map(lambda x: x + 1)`.

You can now apply the transducer to a reducer and get another reducer.

    
    
        map_inc = map(lambda x: x + 1)
        add_inc = map_inc(add)
        

Our first reducer simply adds; the next one increments and then adds. We can
use it as `reduce(add_inc, [1, 2, 3], 0)`, which gives, I'm guessing, `9`.

Since the transducer returns a reducer as well, we can compose transducers:

    
    
         r1 = filter(is_even)(map(increment)(add))
         # use r1 in reduce()
         

It seems that in Clojure, reduce() isn't the only useful function that works
with reducers; there are others, which makes this all worthwhile.
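
(For reference, the Clojure entry points from the post look roughly like this;
`transduce` is the reduce analogue, and `into` also accepts a transducer:)

    
    
        (transduce (map inc) + 0 [1 2 3])                  ;=> 9
        (into [] (comp (map inc) (filter even?)) [1 2 3])  ;=> [2 4]
    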

Is my translation accurate?

~~~
bcoates
If I understand right (I may not):

In Python, the (complex, generic) iter() protocol is how every container
provides a method to iterate itself, and reduce is a trivial application of a
reducer (like your add function) to a container by using the iter protocol.

In Clojure, it's the opposite: there is a reduce() protocol and every
reducible container knows how to reduce itself. The traditional reduce
function just takes a reducer and a reducible and performs the reduce protocol
on it.
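
In Clojure that protocol is clojure.core.protocols/CollReduce. A minimal
sketch of a source that knows how to reduce itself (eliding the no-init
arity):

    
    
        (deftype Countdown [n]
          clojure.core.protocols/CollReduce
          (coll-reduce [_ f init]
            (loop [i n, acc init]
              (if (zero? i)
                acc
                (recur (dec i) (f acc i))))))
    
        (reduce + 0 (->Countdown 3))  ;=> 6 (reduces over 3, 2, 1)
    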

As there's a bunch of different kinds of containers with different rules, the
best way to implement the reduce protocol on them is different for each, and
there might be weird specialized reducers that are parallel or asynchronous or
lazy or whatever.

Transducers allow you to composably transform reducers into different
reducers, which can then be handed to the reduce protocol. As it turns out,
most (all?) normal container->container operations, like map and filter, have
corresponding transducers.

Here's the good part: if you have a reduce protocol and a complete set of
transducers, _containers don't need to be mappable_. The map(fn, iterable)
function needs to know how to iterate; mapping(fn) doesn't care about
containers at all, and is fully general to mapping any reducible _for free_.
So you can write a transducer to produce the effect of any map or filter-like
operation without touching iter().

As an added bonus, the code is more efficient:

    
    
      reduce( add, map( lambda x: x + 1, xrange(10**9) ), 0 )
    

eagerly builds a gigantic mapped list (barring sophisticated laziness or loop-
fusion optimizations), but

    
    
      reduce( mapping( lambda x: x + 1 )(add), xrange(10**9), 0 )
    

is equivalent and trivially runs in O(1) space on one element of xrange at a
time.

PS: python translation of mapping from
[http://clojure.com/blog/2012/05/15/anatomy-of-reducer.html](http://clojure.com/blog/2012/05/15/anatomy-of-reducer.html),
converted to named functions to be more pythonic:

    
    
      def mapping(transformation):
        def transducer(reducer):
          def new_reducer(accum, item):
            return reducer(accum, transformation(item))
          return new_reducer
        return transducer

~~~
neotrinity
my heart somehow leaps when I see beautiful code like this.

Although I have tried to like clojure in all earnestness, I am somehow put off
by the noise from clojure (PS: it's not only about the brackets).

Somehow the python code and also the haskell version look so succinct yet
sparse enough to be read.

maybe I am hardwired that way ...

------
pron
I think that a good way to understand transducers is to look at their
implementation (shortened a bit). Here it is for map:

    
    
        ([f]
        (fn [f1]
          (fn
            ([result input]
               (f1 result (f input)))
            ([result input & inputs]
               (f1 result (apply f input inputs))))))
    

filter:

    
    
        ([pred]
        (fn [f1]
          (fn
            ([result input]
               (if (pred input)
                 (f1 result input)
                 result)))))
    

And it gets more interesting with take:

    
    
        ([n]
         (fn [f1]
           (let [na (atom n)]
             (fn
               ([result input]
                  (let [n @na
                        nn (swap! na dec)
                        result (if (pos? n)
                                 (f1 result input)
                                 result)]
                    (if (not (pos? nn))
                      (reduced result) ; a terminal value indicating "don't reduce further"
                      result)))))))
    

The transducer is supplied with the reducer next in the chain (f1) and returns
a reducer function that gets fed with the reduced value by the preceding
reduction (result) and the next element (input). Note how the take transducer
maintains internal state with an atom. This could get a little tricky for more
elaborate reductions, as _how_ the internal state is maintained might have a
significant effect on performance, depending on exactly how the reduction is
performed. For example, if the reduction is done in parallel (say, with fork-
join), then an internal state that's updated with locks (like refs) might
significantly slow down -- or even deadlock -- the reduction.
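
`reduced` is just a wrapper that any reducing process, including plain
`reduce`, checks for; a small sketch:

    
    
        ;; short-circuits at the current value, despite the infinite source
        (reduce (fn [acc x] (if (>= acc 6) (reduced acc) (+ acc x)))
                0
                (range))   ;=> 6
    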

AFAICT mapcat still only returns lazy-seqs.

~~~
tel
Interesting. Continuing to try to analyze these in Haskell, I think this is a
direct translation:

    
    
        -- z is just there to not clobber some standard prelude names
        type Red r a = r -> a -> r
    
        zmap :: (b -> a) -> Red r a -> Red r b
        zmap f f1 result input = f1 result (f input)
    
        zfilt :: (a -> Bool) -> Red r a -> Red r a
        zfilt p f1 result input = if p input then f1 result input else result
    
        ztake :: Int -> Red r a -> (r -> a -> Maybe r)
        ztake n f1 = run n where
          run n result input =
            let n'      = n - 1
                result' = if n >= 0 then f1 result input else result
            in if n' == 0 then Nothing else Just result'
    

I wanted to post this mostly to note that `map` is mapping contravariantly
here. Is there something I'm missing? I had that occurring when I was playing
around with this idea before looking at the source as well... to fix it I had
to consider the type `Red r a -> r` so that the `a` was covariant again.

~~~
tel
Also, you can get rid of the sentinel value by packing along a termination
continuation value as well:

    
    
        type Red r a = (r -> a -> r, r)
    
        zmap :: (b -> a) -> Red r a -> Red r b
        zmap f (f1, z1) = (\result input -> f1 result (f input), z1)
    
        zfilt :: (a -> Bool) -> Red r a -> Red r a
        zfilt p (f1, z1) = (\result input -> if p input then f1 result input else result, z1)
    
        ztake :: Int -> Red r a -> Red r a
        ztake n (f1, z1) = (run n, z1) where
          run n result input =
            let n'      = n - 1
                result' = if n >= 0 then f1 result input else result
            in if n' == 0 then z1 else result'
    

This has the theoretical niceness of having `Red r a` just be the signature
functor for linked lists.

~~~
riwsky
Doesn't ztake need to live in State Int (or something similar)? As written, I
don't see how it passes the 'n' onto the next call. It seems that the
transducer returned by ztake n for any n > 1 will always pass on (f1 result,
z1).

Thank you for posting this though, helpful to see someone work through it.

~~~
tel
Here's how to work it without even using the state monad:
[https://news.ycombinator.com/item?id=8149200](https://news.ycombinator.com/item?id=8149200)

It's quite a bit different from this formulation.

------
oafitupa
As someone who tried Clojure and failed, serious question: Does anyone
actually use all these crazy features/patterns that keep getting
added/discovered and talked about?

I ask because even though I can imagine someone smart mastering these things
and programming faster, I can't imagine a second person being able to
understand his code, maintain it, and generally be productive. I imagine the
second person losing a lot of time trying to understand what is going on, or
even thinking he understood but in reality he didn't and messing things up.

So how do you even form a Clojure team?

~~~
JackMorgan
It's a mindset change.

Other than macros, which fundamentally alter the flow of code in a very non-
standard way, the rest of these "crazy patterns and features" in Clojure are
just like the crazy libraries used by typical Java/Ruby/C# developers, only
thousands of times simpler. If I came to you and said, "does anyone even use
these tens of thousands of libraries in Maven? How do other developers work on
this afterwards, they'd have to learn all these new APIs!" I'd likely get a
response like, "they'd just be expected to", with a chuckle. The mindset I've
seen a lot is that language features are "too hard" to learn and should be
avoided, but library complexity is beyond reproach and is rarely questioned.

Clojure the language takes the approach that it's better to provide a dozen
simple but incredibly composable tools that can easily replace a lot of
duplicated boilerplate in other languages. Take these transducers: in Java/C#
they'd likely be one of the design patterns that needs a whole set of classes
to do what he shows in a few characters. Would you rather learn to write,
read, and maintain a handful of classes, or just learn what those few characters
mean? I don't get paid by the line, and any abstraction built into the
language like that is a few more classes I don't have to maintain and possibly
implement incorrectly in some subtle way.

Like I said, it's just a mindset change. I know a company that only uses
Clojure, and they hire a lot of entry level developers who haven't yet gotten
stuck in a mindset that "languages are too hard; libraries are easy". They
have great success with that, and their entry level guys are up to speed much
faster than those in my shop, using .NET, where they have to learn a new
language AND a mountain of boilerplate code using dozens of libraries and
frameworks.

~~~
oafitupa
> Would you rather learn to write and read maintain a handful of classes

I appreciate your answer, but that's not what I do with libraries. I don't
have to read their internals, let alone write them.

~~~
JackMorgan
I may have communicated that badly. A pattern like transducers is one that
everyone needs. Everyone either uses it built into the language or writes it
out long form. Design patterns are a way of giving you instructions for
writing it out long form. So, you either write it yourself or use it built in.

Some people do not want to learn language tools that will reduce such
boilerplate, but those people often are (ironically) happy to learn lots of
libraries. Libraries by definition can't remove any more boilerplate than you
can; they are just as limited by the language, so oftentimes they just save the
user some writing, rather than reducing (haha) it across a whole codebase like
a built-in feature can.

------
undershirt
I just saw this morning that many functions in core.async are marked
"Deprecated - this function will be removed. Use transformer instead." I guess
transducers will provide a generic replacement for those. Looking forward to
seeing some examples.

------
unlogic
This looks exciting, but I'm confused about the decision to add extra arity to
collection-manipulating functions. "filter" that returns a collection or a
transducer depending only on arity seems a little counter-intuitive.

~~~
swannodette
It's a pretty well considered tradeoff in my opinion - no existing code breaks
while at the same time all the transformation functions now have the same
semantic interpretation when used in different contexts. The alternative would
be to replicate the notion of `map`, `filter`, etc. again and again as
occurred with reducers and higher level core.async operations on channels.
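
So, roughly:

    
    
        (map inc [1 2 3])  ;=> (2 3 4), a lazy seq, exactly as before
        (map inc)          ;=> a transducer; no collection in sight
    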

~~~
taliesinb
We just did exactly the same thing in the Wolfram Language, for similar
reasons (we called these things "operator forms" rather than "transducers")
[0]

One major side effect has been to mitigate the kinds of heavy nesting you see
in functional languages like WL and Clojure. Personally I think the resulting
code resembles the phrase structure of English much more closely. It's a huge
readability win.

The original motivations for operator forms were in fact writing Queries [1]
against Datasets [2], for which you want to represent complex operations
independent of their execution.

[0]
[http://reference.wolfram.com/language/guide/FunctionComposit...](http://reference.wolfram.com/language/guide/FunctionCompositionAndOperatorForms.html)

[1]
[http://reference.wolfram.com/language/ref/Query.html](http://reference.wolfram.com/language/ref/Query.html)

[2]
[http://reference.wolfram.com/language/ref/Dataset.html](http://reference.wolfram.com/language/ref/Dataset.html)

~~~
richhickey
I'm not seeing anything that looks like a reducing function transformer there.
That all looks like variants of ordinary function composition, currying and
partial application. Is there someplace that shows 'operator forms' acting as
functions with this signature: (x->a->x)->(x->a->x)?

~~~
taliesinb
WL doesn't yet have a laziness/streaming/reducing framework, but the prototype
we're working on uses 'operator forms' like Select, Map, GroupBy and so on in
the way you describe.

I don't think the exact details are the same, because our operators don't
actually evaluate to transformers (they remain totally symbolic). Rather, the
conversion of composed operators to an actual reducer pipeline happens lazily
'at the right time', which I think will make optimization a bit easier to
express.

~~~
jamii
So if I understand correctly, you would arrive at a symbolic expression like
(->> (map f) (map g) x) and directly rewrite it to (->> (map (f . g)) x),
rather than having to manipulate the resulting data-structures?

~~~
taliesinb
Yeah, it's pretty easy to turn

    
    
        Map[#^2&] /* Select[PrimeQ] /* Select[OddQ] 
    

into

    
    
        Select[OddQ[#] && PrimeQ[#]&] /* Map[#^2&] 
    

and so on via rules that look like:

    
    
        Select[s1_] /* Select[s2_] :> Select[s1[#] && s2[#]&] 
        Map[f_ ? SideEffectFreeQ] /* s_Select :> s /* Map[f]
        Map[Extract[p_Integer | p_Key]] :> Extract[{All, p}]
    

I'm already doing a bunch of that for Dataset.

But in reality you need to move to a DSL for complex enough rules, and then
Clojure and WL will be on the same footing (Clojure even a bit stronger,
maybe, since WL doesn't really have a proper macro system).

Does Clojure core do or allow for any of that kind of optimization already?

~~~
malisper
I don't know about Clojure, but Common Lisp provides something called
_Compiler Macros_ [0]. They basically allow the programmer to define two
versions of a procedure: one that is a macro, and one that is an actual
procedure. The macro will expand only at compile time (it can also specify to
call the procedure instead of expanding), and the procedure will be called
only at run time. I suggest you look at [0] for some examples of how it is
possible to optimize something as simple as squaring a number.

[0]
[http://clhs.lisp.se/Body/m_define.htm](http://clhs.lisp.se/Body/m_define.htm)

------
graycat
I'm sorry, but from the OP I can't be at all sure I can understand notation
such as:

    
    
         ;;reducing function signature
         whatever, input -> whatever
    

or

    
    
         ;;transducer signature
         (whatever, input -> whatever) -> (whatever, input -> whatever)
    

Or, in mathematics there is some notation

    
    
         f: A --> B
    

where A and B are sets and f is a function. This notation means that for each
element x in set A, function f returns value f(x) in set B. Seems clear
enough. Maybe the notation in the OP is related? As it stands, I can't be sure
I'm guessing correctly.

~~~
richhickey
a la Haskell:

    
    
        ;;reducing fn
        x->a->x
    
        ;;transducer fn
        (x->a->x)->(x->b->x)

~~~
zem
or, to steal ashley yakeley's marvellous quip, "every sufficiently well-
documented lisp program contains an ML program in its comments" (:

~~~
xmonkee
lol

------
davdar
Clojure transducers are _exactly_ signal functions from Haskell FRP
literature, for those interested in such a connection.

~~~
richhickey
I'm not yet seeing that, given:

    
    
        Signal a :: Time -> a
        SF a b   :: Signal a -> Signal b
        thus (Time -> a) -> (Time -> b)
    

not exactly:

    
    
        (x->a->x) -> (x->b->x)
    

Can you point me to a paper that makes the connection clear?

~~~
davdar
In the non-continuous FRP literature[1], i.e. the kind you actually implement,
SF a b = [a] -> [b], which is isomorphic to: Fold a -> Fold b, where Fold a =
(exists s. (s, s -> (a, s))), which is isomorphic to the type the author
writes for transducer:

    
    
      ;;transducer signature
      (whatever, input -> whatever) -> (whatever, input -> whatever)
    

Signal functions are easy to program in Haskell using arrow syntax, and many
libraries already exist for dealing with signal functions.

[1]: Nilsson, Courtney and Peterson. Functional Reactive Programming,
Continued. Haskell '02.
[http://haskell.cs.yale.edu/wp-content/uploads/2011/02/worksh...](http://haskell.cs.yale.edu/wp-content/uploads/2011/02/workshop-02.pdf)
(see section 4)

edit: formatting

------
fnordsensei
Tentative benchmark results have surfaced:
[https://github.com/thheller/transduce-bench](https://github.com/thheller/transduce-bench)

Add salt according to taste.

~~~
mduerksen
The benchmark is wrong atm; the compared functions do not yield the same
result.

(comp (map inc) (filter even?)) means filtering first, then incrementing.

The opposite for (->> data (map inc) (filter even?) ...

\- which is not the same. And of course, it also means that the latter has to
increment twice as many numbers.

EDIT: It was me who was wrong, thanks for the corrections. Pitfall
successfully identified :)

~~~
richhickey
It seems counterintuitive, but composing transducers yields a reducing
function that runs the transformation steps left->right. What you are
composing is the reducing function transformations (right to left, ordinary
composition), but their result, having been built inside-out, runs the steps
outside-in. So (comp tx1 tx2...) runs in same order as (->> xs tx1 tx2...)
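
A quick check (assuming the new transducer-aware `sequence` arity):

    
    
        (sequence (comp (map inc) (filter even?)) [1 2 3 4])  ;=> (2 4)
        (->> [1 2 3 4] (map inc) (filter even?))              ;=> (2 4)
    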

------
tel
This is about as close as I could get in Haskell so far. It uses a slight
twist on (x -> a -> x) called a Fold (which has a lot of great properties—it's
a profunctor, an applicative, and a comonad).

Nicely, this construction lets us write `take` purely!

    
    
        {-# LANGUAGE GADTs         #-}
        {-# LANGUAGE RankNTypes    #-}
        {-# LANGUAGE TypeOperators #-}
    
        import           Control.Arrow
        import           Control.Category
        import qualified Prelude
        import           Prelude hiding (id, (.))
    
        data Fold a r where
          Fold :: (a -> x -> x) -> x -> (x -> r) -> Fold a r
    
        data Pair a b = Pair !a !b
    
        pfst :: Pair a b -> a
        pfst (Pair a b) = a
    
        psnd :: Pair a b -> b
        psnd (Pair a b) = b
    
        newtype (~>) a b = Arr (forall r . Fold b r -> Fold a r)
    
        instance Category (~>) where
          id = Arr id
          Arr f . Arr g = Arr (g . f)
    
        amap :: (a -> b) -> (a ~> b)
        amap f = Arr (\(Fold cons nil fin) -> Fold (cons . f) nil fin)
    
        afilter :: (a -> Bool) -> (a ~> a)
        afilter p = Arr $ \(Fold cons nil fin) ->
          let cons' = \a x -> if p a then cons a x else x
          in Fold cons' nil fin
    
        fold :: Fold a r -> [a] -> r
        fold (Fold cons nil fin) = fin . spin where
          spin []     = nil
          spin (a:as) = cons a (spin as)
    
        asequence :: (a ~> b) -> ([a] -> [b])
        asequence (Arr f) = fold (f (Fold (:) [] id))
        
        aflatmap :: (a -> [b]) -> (a ~> b)
        aflatmap f = Arr $ \(Fold cons nil fin) ->
          Fold (\a x -> foldr cons x (f a)) nil fin
        
        atake :: Int -> (a ~> a)
        atake n = Arr $ \(Fold cons nil fin) ->
          let cons' = \a x n -> if n > 0 then cons a (x (n-1)) else x n
          in Fold cons' (const nil) (\x -> fin (x n))

~~~
phobbs
Brilliant, this clears up the usage a lot more than the rest of the prose in
the comment thread.

------
dustingetz
Not sure I understand - so Clojure is getting first class support for lazy
collections and curried combinators? Or am I missing the important part?

~~~
_halgari
Many Clojure abstractions need things like map, filter, concat, etc. Reducers
are eager, lazy-seqs are lazy, and core.async channels are lazy and push-
based. This is a way to unify all these abstractions so that you can write a
single map/filter/concat transform once, and use it in many different ways.
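
For example (a sketch using the arities the post proposes; the last line
assumes core.async's transducer-accepting `chan`):

    
    
        (def xf (comp (map inc) (filter even?)))
    
        (into [] xf (range 10))        ; eager collection => [2 4 6 8 10]
        (sequence xf (range 10))       ; lazy seq
        (transduce xf + 0 (range 10))  ; plain reduction => 30
        ;; (chan 1 xf)                 ; core.async channel transform
    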

~~~
dustingetz
> This is a way to unify all these abstractions so that you can write a single
> map/filter/concat transform once, and use it in many different ways.

I don't understand why these things aren't all unified in the first place,
it's just function composition we're talking about here right?

Edit: I mean it's clearly not just simple function composition, but I don't
understand why not

    
    
        (def xform (comp (map inc) (filter even?))) ; xform is a transducer
        (defn xform [aseq] (->> aseq (map inc) (filter even?))) ; xform is a fn
    

Why do these have to be different things? Why is there a need for machinery to
do what fn composition is supposed to do?

~~~
richhickey
Your typical definition of map, filter etc includes concrete usage of e.g.
lists. These don't. Transducers are not just currying or partial application
of the map function over lists, they isolate the logic of map, filter etc from
lists or any particular context, allowing them to be used in very different
(e.g. non-collection) contexts.
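
Compare a typical seq-based definition, which is entangled with one concrete
representation (a rough sketch):

    
    
        ;; the mapping logic is wrapped up with lazy-seq/cons/first/rest
        (defn map-seq [f coll]
          (lazy-seq
            (when-let [s (seq coll)]
              (cons (f (first s)) (map-seq f (rest s))))))
    
        ;; whereas the transducer returned by (map f) keeps only the logic:
        ;; (fn [step] (fn [result input] (step result (f input))))
    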

~~~
dustingetz
> isolate the logic of map, filter etc from lists or any particular context

so Functors (fmap)? fmap still uses regular old fns, no machinery.

[http://clojuredocs.org/clojure_contrib/clojure.contrib.gener...](http://clojuredocs.org/clojure_contrib/clojure.contrib.generic.functor/fmap)
[http://learnyouahaskell.com/making-our-own-types-and-typecla...](http://learnyouahaskell.com/making-our-own-types-and-typeclasses#the-functor-typeclass)

~~~
Flow
I feel the same confusion as you. In Haskell terms, what is it Rich has
implemented or perhaps even invented?

~~~
berdario
I'm not sure I grok this, but I think that the main points are:

\- reducers can work on tree structures, and thus can exploit parallelism.
This would be like using a map that requires only the Foldable typeclass

\- In Haskell you have stream/vector fusion, but it's not obvious to know when
GHC will actually exploit it, so you might want to use something like
Control.Monad.Stream or Data.Vector. In theory it might be generalized to all
Foldables, but in practice for now it might be a good enough compromise to
stick to a limited number of types (the ones that support such transducers)

So: nothing terribly new/groundbreaking, but it might bring something like
stream fusion to the masses (of clojure developers :P )

~~~
swannodette
Comparing transducers to stream fusion undersells them; the scope of their
applicability is far broader.

~~~
jamii
They are more comparable to Oleg's Enumerators
([http://okmij.org/ftp/Haskell/Iteratee/describe.pdf](http://okmij.org/ftp/Haskell/Iteratee/describe.pdf)),
in that you compose a series of computations and then push data through them.
The type signature is similar:

    
    
        type Iteratee el m a -- a processor of 'els' in the monad 'm' returning a type 'a'
    
        type Enumerator el m a = Iteratee el m a -> m (Iteratee el m a)
    

The Enumerators library is complicated by the presence of monads and by trying
to automatically close handles when the stream is processed. In some ways it
seems that the goal of solving the lazy IO problem led to missing a much
simpler abstraction. Transducers seem to be simpler and more focused on just
abstracting stream/map-reduce computation.

~~~
richhickey
While not mentioned in the blog post, the transducers implementation supports
both early termination and result completion/flushing/cleanup.

~~~
dons
there does seem to be some overlap with stream fusion -- which was all about
exploiting the optimization opportunities when separating the collection
operation from the kernel performed on each element.

We called the bits in the middle "step functions", which could be combined
with "consumers", "producers" and "transformers".

And the algebra generalizes hugely, not just to collections but to things like
concurrent processes, data flow programs etc.

[http://metagraph.org/papers/stream_fusion.pdf](http://metagraph.org/papers/stream_fusion.pdf)

Things to think about in a non-Haskell setting: how do you prevent reordering
of side effects? Can exceptions/non-termination being reordered be observed?

------
nohat00
> "these transformers were never exposed a la carte, instead being
> encapsulated by the macrology of reducers."

What does 'macrology' mean in this context? Is this a common usage? Or a novel
application of a word that ordinarily means "long and tedious talk without
much substance"

~~~
prospero
It means that reducers were some macro sugar atop the underlying mechanism
described in the post. See
[https://github.com/clojure/clojure/blob/master/src/clj/cloju...](https://github.com/clojure/clojure/blob/master/src/clj/clojure/core/reducers.clj#L139)

~~~
nohat00
... so 'macrology' means "use of macros" rather than "long and tedious talk
without much substance"? Is this usage specific to Clojure, or to all
languages that have macros? It seems like a pretty bizarre repurposing of a
word that already has a very different meaning, and I wonder how widespread it
is. Are we too late to stop it?

~~~
e12e
Well, it's the use of macros to avoid having to write code that has the
characteristics of macrology. If nothing else, it's sort of strangely
recursive, like perhaps many "complex macros" are...

