
APL matrix product operator - panic
https://blog.plover.com/prog/apl-matrix-product.html
======
dnautics
Fairly easy in Julia:

    
    
        julia> function arbitrary_matrix_reop(bin1::Function, bin2::Function)
                 (m::Matrix, v::Vector) -> [
                   foldr(bin2, [
                     bin1(m[row_idx, col_idx], v[col_idx])
                       for col_idx in 1:size(m,2) 
                   ]) 
                   for row_idx in 1:size(m,1)
                 ]
               end
    
        arbitrary_matrix_reop (generic function with 1 method)
    
        julia> M = [1 2 3
                    4 5 6]
        julia> v = [5,6,7]
    
        julia> ⊚ = arbitrary_matrix_reop(*, +)
    
        julia> M ⊚ v
        2-element Array{Int64,1}:
          38
          92
    
        julia> ⊜ = arbitrary_matrix_reop(min, +)
    
        julia> M ⊜ v
        2-element Array{Int64,1}:
          6
         15
    

Just a first pass. Function composition (∘) is a predefined operation in
Julia, so that should make what the author already wants pretty trivial.
There are probably smart ways of doing multiple dispatch so that you can do
more than just matrix-vector but also matrix-matrix, etc.
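
For instance, ∘ really is predefined in Base (a quick check, nothing specific
to this problem):

        julia> f = exp ∘ sin;

        julia> f(1.0) == exp(sin(1.0))
        true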

Edit: The author actually might want function application (|>), not function
composition (I'm not sure). In that case it's doable too:

    
    
        julia> M = [π    0
                    π/2 π ]
    
        julia> v = [cos, sin]
    
        julia> ⦷ = arbitrary_matrix_reop(|>, +)
    
        julia> M ⦷ v
        2-element Array{Float64,1}:
         -1.0                   
          1.8369701987210297e-16

~~~
semi-extrinsic
Having never written any Julia, I have to ask: is this snippet hard to follow
because it's doing a strange thing, or is this just "normal" Julia?

~~~
dnautics
It's doing a very strange thing. Did you read the OP? arbitrary_matrix_reop
takes two binary functions and substitutes them for the * and + operations in
matrix multiplication. Was it just the results you couldn't follow, or were
other things confusing? Typically I find Julia to be one of the easiest PLs
to read.... Another confusing aspect is that my function returns a lambda,
which I then assign to a Unicode symbol (a valid identifier name in Julia).
That's a bit bad in terms of code clarity, but it aligns with the intent of
the author.

The only other confusing thing could be that comma-separated arrays are
column vectors, which isn't obvious here because I deleted some REPL
responses for brevity.... If you actually use the Julia REPL, this would be
clear.
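
For illustration (outputs as the REPL of that era prints them):

        julia> [5, 6, 7]   # commas build a column vector
        3-element Array{Int64,1}:
         5
         6
         7

        julia> [5 6 7]     # spaces build a 1×3 row matrix
        1×3 Array{Int64,2}:
         5  6  7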

~~~
semi-extrinsic
The (abstract) operator itself and the results were understandable. It was
more stuff like: how do I see that the function returns a lambda? What is
foldr? Grokking the typing, which is somehow the reverse of what I expect
( _m::int_ instead of _int::m_ ). The list-comprehension-like indexing, and
why one _for_ is outside the foldr, not zero or both -- and why you can't
write this as implicit vectorization like on a NumPy array or in Fortran,
instead of explicitly iterating.

~~~
dnautics
Gotcha. I was writing for terseness and under time pressure in the small HN
box, so I would say that this is not representative of how one should write
it, or even how I generally would have written it.

1) I probably should have put an explicit "return" on the lambda statement,
since it's unclear with the strange tabbing.

2) foldr is reduce, with an ordering guarantee (I think reduce might do a
binary or parallel reduction if given the chance). I probably should have
used reduce here (see the quick illustration after this list).

3) the type-first notation is a C-and-friends thing, found in C, C++, and
Java. TypeScript, F#, and Go do type-second, as does Pascal if we reach far
back enough (albeit they do it with a single colon operator). I think
type-first contributes to the difficulty of reading C types, but I don't have
evidence ([http://unixwiz.net/techtips/reading-cdecl.html](http://unixwiz.net/techtips/reading-cdecl.html))

4) now that I think about it, Julia has a dot broadcast notation which I
probably should have used (illustrated below).
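
A quick REPL illustration of points 2 and 4, using a non-associative
operation so the fold order is visible:

        julia> foldr(-, [1, 2, 3])   # right fold: 1 - (2 - 3)
        2

        julia> foldl(-, [1, 2, 3])   # left fold: (1 - 2) - 3
        -4

        julia> (*).([1, 2, 3], [5, 6, 7])   # dot call broadcasts * elementwise
        3-element Array{Int64,1}:
          5
         12
         21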

This might have been better (but possibly too terse? I don't know)

    
    
        function matmul_reop(mulop::Function, addop::Function)::Function
          return (m::Matrix, v::Vector) ->
            [reduce(addop, mulop.(m[row, :], v)) for row in 1:size(m, 1)]
        end
    

I wish Julia had a row iterator (I once wrote one myself), because that would
make it even more terse... That is on the issues tracker, so it will probably
make the standard library.
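
With an eachrow-style iterator (the kind proposed on the tracker), the sketch
might shrink to something like this; eachrow here is assumed, not yet
standard:

        function matmul_reop(mulop::Function, addop::Function)::Function
          return (m::Matrix, v::Vector) ->
            [reduce(addop, mulop.(row, v)) for row in eachrow(m)]
        end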

------
evincarofautumn
Worth noting that the basic functions like “sum”, “transpose”, and “foldr1”
are already available in the standard library in Haskell. Dunno why they
weren’t used—just a learning exercise?

It’s often a good strategy to arrive at a simple definition by starting with a
list comprehension or do-notation, then desugaring & simplifying it to use
Functor or Applicative operators if it makes things nicer. That’s a good way
to build experience with using those functions and develop a sense of when to
use sugary notation and when not to.

For example, if I were coming from APL and in a pointfreeish mood, I might’ve
rewritten the list comprehension version toward the end like this:

    
    
        generalMatrixProduct add mul a b
          = (<$> transpose b) . innerProduct <$> a
          where innerProduct = fmap (foldr1 add) . zipWith mul
    

Which reads as “The inner product of each row of ‘a’ with every column (a.k.a.
row of the transpose) of ‘b’.”

(In reality I’d probably just stick to do-notation, though, unless I found a
particularly nice fully pointfree version, and also avoid foldr1 since it’s
partial.)

------
contravariant
That's one weird operator. It treats polynomials as vectors, but includes the
powers of x? I could understand treating polynomials as vectors of
coefficients, but to treat them as vectors of monomials is just plain weird.

Ignoring the question of _why_ on earth you'd want to do something like this,
it would be far more natural to point out that a matrix can be considered a
vector of linear operators by writing it as a tensor product of basis vectors
and co-vectors (so a column-vector of row-vectors, basically), where for a
matrix M with rows r_i we have:

    
    
        (M x)_i = r_i * x = r_i(x)
    

and then take the smallest logical step to make it nonlinear which would be:

    
    
        (M x)_i = f_i(x)
    

for general functions f_i. Of course this is just the definition of a
multivariate function, so perhaps it would be more interesting to look at some
intermediate version where f_i is some linear combination of functions f_ij
such that:

    
    
        (M x)_i = Sum_j f_ij(x_j)
    

again analogous to how matrices work.
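
A minimal Julia sketch of this intermediate version (the helper name
apply_funmat is made up here), reusing the function-application idea from the
Julia subthread above:

        # (F x)_i = Sum_j f_ij(x_j), for a matrix F of unary functions f_ij
        apply_funmat(F::Matrix{<:Function}, x::Vector) =
            [sum(F[i, j](x[j]) for j in 1:size(F, 2)) for i in 1:size(F, 1)]

        F = [cos sin
             sin cos]
        apply_funmat(F, [π, π/2])   # [cos(π) + sin(π/2), sin(π) + cos(π/2)] ≈ [0.0, 0.0]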

------
jph00
OP makes surprising claims about J. Here is some reading which hopefully will
help you make up your own mind about whether the claims are accurate.

J has a rich range of composition operations (far more than mentioned in OP)
which work well together. Many are discussed in this paper:
[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.98.3105&rep=rep1&type=pdf](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.98.3105&rep=rep1&type=pdf)

Matrix convolution is handled with semidot3:
[https://code.jsoftware.com/wiki/Vocabulary/semidot3](https://code.jsoftware.com/wiki/Vocabulary/semidot3)

FFT convolution examples are here:
[https://code.jsoftware.com/wiki/Essays/FFT](https://code.jsoftware.com/wiki/Essays/FFT)

Iverson wrote a chapter on polynomials in J in his book Exploring Math:
[http://www.jsoftware.com/books/pdf/expmath.pdf](http://www.jsoftware.com/books/pdf/expmath.pdf)

~~~
jonahx
Indeed. J is a wonderful language.

From the article:

> I don't know why Iverson thought the hook was the thing to embed in the
> language.

There are deep reasons for the choice:
[http://www.jsoftware.com/papers/fork1.htm](http://www.jsoftware.com/papers/fork1.htm)

In addition, viewing J as "a worse Haskell" is to misunderstand it. J is a
function-level, not a purely functional, language:

[https://en.wikipedia.org/wiki/Function-level_programming#Contrast_to_functional_programming](https://en.wikipedia.org/wiki/Function-level_programming#Contrast_to_functional_programming)

From that link:

> When Backus studied and publicized his function-level style of programming,
> his message was mostly misunderstood,[2] as supporting the traditional
> functional programming style languages instead of his own FP and its
> successor FL.

> ...

> This restriction means that functions in FP [Backus's function-level
> language] are a module (generated by the built-in functions) over the
> algebra of functional forms, and are thus algebraically tractable. For
> instance, the general question of equality of two functions is equivalent to
> the halting problem, and is undecidable, but equality of two functions in FP
> is just equality in the algebra, and thus (Backus imagines) easier.

------
leafario2
Any time I see APL code, I feel slightly stupid.

~~~
FreeFull
For the APL-family of languages, it's generally easier to write code than it
is to read it.

~~~
rpz
Agreed. Interesting thought: would you or anyone else here also agree that it
is generally easier to write mathematical notation than to read it?

In my experience it's much easier to write almost anything, whether it be
code, poetry, English, etc., than to read it. I'm interested to hear your
thoughts.

Edit: Maybe I shouldn't say much easier. One could argue it's much easier to
read a novel than it is to write one, and I'd agree :)

~~~
akira2501
> Would you or anyone else here also agree that it is generally easier to
> write mathematical notation than to read it?

Yes. In my estimation, it's due to the much higher density of conceptual
ideas being expressed. I honestly prefer imperative code for stepping through
a new idea, because it simply can't express things in such a dense and
compressed form, and probably because it's what I'm most familiar with.

> One could argue its much easier to read a novel than it is to write one, and
> i'd agree :)

I think it's because novels assume you're starting from a position of
ignorance. Probably why I could never read Pynchon.

------
JosephHatfield
Some implementations of APL allow for the definition of arbitrary dyadic
functions, which can be composed using the inner product operator. For
example, if a and b are conformable, you can write a f.g b (the ordinary
matrix product being a +.× b).

