
PEP 465 – Dedicated infix operators for matrix multiplication and matrix power
http://www.python.org/dev/peps/pep-0465/
======
jey
Huh? Isn't it a question of whether you are doing matrix or array ops, so the
type system is the issue? I have rarely needed to do an elementwise multiply on
matrices, and if I do, in Eigen I can just call mat.array() to get an
ArrayView.

Really, the problem is that NumPy is a crappily designed library primarily
intended for array ops, and is just not well suited for linear algebra.

~~~
rspeer
> NumPy is a crappily designed library

I don't think you know what you're talking about.

~~~
jey
Try Eigen; you'll agree that NumPy is an array library that had linear algebra
support thrown in as an afterthought.

~~~
rspeer
Of course it's an array library. The fact that it's not designed for your
problem doesn't mean it's designed _crappily_.

Suppose you ported Eigen to Python. You'd still need an array library, so you
could represent the inputs and outputs in a way that's usable in Python. You'd
probably use NumPy.

------
shoyer
Guido has indicated that he is ready to accept the PEP after a few details are
worked out: [https://groups.google.com/forum/#!msg/python-ideas/aHVlL6BADLY/GBCTSbY40QQJ](https://groups.google.com/forum/#!msg/python-ideas/aHVlL6BADLY/GBCTSbY40QQJ)

So it appears that we will indeed have @ for matrix multiplication in Python
3.5! This is a feature the numeric Python community has been hoping for for a
long, long time.

------
stormbrew
It kind of seems to me that it's actually vectorized multiplication that's the
odd one out and should maybe have a special syntax, maybe even one that
applies to many operations and not just multiplication.

A matrix is a _Thing_ that happens to also be a collection. A vector you want
to multiply is just a collection. Really vector multiplication is just a map
operation, so make some syntactic sugar to generate an operator comprehension:

    
    
        a = [1,2,3]
        b = [4,5,6]
        c = a [*] b # or something
        # becomes
        c = [x * y for (x,y) in zip(a,b)]
        # or
        c = a.__vecmul__(b)
    

This would make more sense to me.
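The proposed sugar can be approximated today with an ordinary helper function (a sketch; `vecop` is a hypothetical name, not an existing API):

```python
from operator import mul

def vecop(op, a, b):
    """Apply a binary operator elementwise across two sequences,
    i.e. what a hypothetical `a [op] b` would desugar to."""
    return [op(x, y) for x, y in zip(a, b)]

a = [1, 2, 3]
b = [4, 5, 6]
c = vecop(mul, a, b)
assert c == [4, 10, 18]
```

The same helper generalizes to any binary operation, which is the point of the comment: elementwise combination is a map over pairs, not something specific to multiplication.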

------
krallja
Great! Minor quibble: the underlying method names should not mention `mat`
(e.g. `__matmul__`); they should instead mention the shape of the operator
(e.g. `__atmul__`).

~~~
shoo
that might be a good idea, or a little strange.

many existing python infix binary operators resolve to method names based on
the meaning, rather than the symbol. e.g. `a + b` resolves to `a.__add__(b)`
rather than `a.__plus__(b)`, and `x ** y` resolves to `x.__pow__(y)` rather
than `x.__asteriskasterisk__(y)`, say. so arguably it would be consistent to
name @ after its common meaning also.

[http://docs.python.org/2/reference/datamodel.html#emulating-numeric-types](http://docs.python.org/2/reference/datamodel.html#emulating-numeric-types)

on the other hand, "consistency is not necessarily a virtue: one can be
consistently obnoxious" \- C.A.B. Smith

that said, i like the idea that @ should be more general than just for
matrices. an arbitrary infix binary operation, neither necessarily
commutative nor invertible.

even when talking about matrix multiplication, generalising slightly from
arrays to abstract elements of vector spaces, and from matrices to linear
transformations between vector spaces, leaves you writing the same kinds of
expressions to compose linear transformations without anything necessarily
being represented as a matrix.
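for illustration, the operator-to-dunder dispatch described above is easy to observe with a toy class (a sketch; `Sym` is made up here, and `__matmul__` only exists on Python 3.5+):

```python
class Sym:
    """Toy object that records how it was combined, to show which
    dunder method each operator resolves to."""
    def __init__(self, name):
        self.name = name
    def __add__(self, other):
        # invoked by the + operator
        return Sym(f"add({self.name},{other.name})")
    def __matmul__(self, other):
        # invoked by the @ operator (Python 3.5+)
        return Sym(f"matmul({self.name},{other.name})")

a, b = Sym("a"), Sym("b")
assert (a + b).name == "add(a,b)"
assert (a @ b).name == "matmul(a,b)"
```

note that nothing here is a matrix: @ dispatches to whatever `__matmul__` does, which is what lets it stand for composition of abstract linear maps as well.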

------
Malarkey73
That's funny. If there is one thing that Python users do more often than diss
R, it's steal from R.

~~~
GFK_of_xmaspast
Matlab is a horror but one thing they do do well is .* and * for the two
multiplies.

~~~
synparb
Julia also adopted this syntax, and I'm not sure if I like it. In numpy at
least, I find myself doing element-wise multiplication much more frequently
than matrix multiplication, so '.*' always feels clumsy. I'd rather have
special syntax for the matrix multiply (and I was a long time matlab user
before moving to numpy).
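The distinction between the two multiplies can be sketched in plain Python, without numpy (toy 2x2 helpers for illustration, not a real array library):

```python
def elementwise(a, b):
    """Hadamard (elementwise) product of two same-shaped nested lists:
    what numpy's * and Matlab's .* do."""
    return [[x * y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def matmul(a, b):
    """Ordinary matrix product, row-by-column dot products:
    what @ (or Matlab's *) does."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert elementwise(A, B) == [[5, 12], [21, 32]]
assert matmul(A, B) == [[19, 22], [43, 50]]
```

The same pair of inputs gives completely different results, which is why conflating the two operators on one symbol causes grief in both camps.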

------
ryansouza
Seems to me that Haskell's ability to use named functions infix is a much
nicer solution overall.

~~~
pcarbonn
I agree. I wish Python had the capability to infix functions. Maybe a topic
for a new PEP?

~~~
akehrer
There is this[1] workaround.

I wonder if a whole set of infix matrix operations could be added to reduce
the need to load numpy for simple tasks.

    
    
        A @* B
        A @. B
        A @+ B
    

[1]
[http://code.activestate.com/recipes/384122/](http://code.activestate.com/recipes/384122/)

~~~
pcarbonn
Yes, but making it part of the language would bring clear error messages in
case of misuse, and would allow the use of a different token, e.g. ":", as in
Smalltalk.

------
RhysU
The bloody associativity issue is unresolved...? Of course it should be right
associative.

Otherwise A*B*x means (A*B)*x, which is idiotic for numerical work. One
would have to write A*(B*x) to get reasonable performance, which isn't far
enough away from A.dot(B.dot(x)) to justify the implementation overhead.

Lastly, as awful as it sounds, it is nice when awfully expensive things like
10K by 10K matmats have some visual weight. It makes people think about how
they're written down.
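The performance point can be made concrete by counting scalar multiplications under the naive algorithms (a rough sketch; real BLAS implementations differ by constant factors):

```python
def matmat_mults(n):
    # naive n x n times n x n product: n*n output entries, n multiplies each
    return n ** 3

def matvec_mults(n):
    # n x n matrix times length-n vector: n output entries, n multiplies each
    return n ** 2

n = 10_000
left_assoc = matmat_mults(n) + matvec_mults(n)    # (A @ B) @ x
right_assoc = matvec_mults(n) + matvec_mults(n)   # A @ (B @ x)
assert left_assoc > 5_000 * right_assoc           # ~5000x more work at n = 10K
```

At 10K by 10K, grouping left forces a full matrix-matrix product, which is why the associativity choice matters for numerical code.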

------
valtron
If you're really determined, you can use `.` as an operator via __getattr__
and inspect :p

~~~
shoo
ugh.

very loosely related, this reminds me of calling R functions from python that
take keyword arguments with dots in their names.

    
    
        >>> f(hello.world=123)
          File "<stdin>", line 1
        SyntaxError: keyword can't be an expression
    

so instead:

    
    
        >>> f(**{'hello.world':123})
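For what it's worth, the `**`-unpacking workaround really does smuggle non-identifier names through, at least in CPython (sketch with a stand-in function, since the actual R-calling wrapper isn't shown):

```python
def r_call(**kwargs):
    """Stand-in for an rpy2-style wrapper that forwards keyword
    arguments to R; the names are opaque strings to Python."""
    return kwargs

# 'hello.world' is not a valid Python identifier, so it cannot be written
# as a bare keyword -- but ** unpacking passes it through to **kwargs.
args = r_call(**{'hello.world': 123})
assert args == {'hello.world': 123}
```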

------
captaincrowbar
I'm kind of baffled by this. Python has operator overloading, so what's wrong
with using * for matrix multiplication? I know there's a bit in the PEP that
claims to answer this, but I can't understand their argument. Can someone
explain?

~~~
gradys
It looks like it says there are enough cases where libraries crave two
multiplication operators, elementwise multiplication and matrix
multiplication, that it makes sense to add an operator to the language.

~~~
bsaul
Would matrix multiplication be useful for anything other than matrices? I can
understand providing an operator in the language when it's usable by many
types, but I don't understand what sense it would make to create an operator
just for one type... Even more so when that type isn't part of the language.

Or does it mean that @ would be used for arrays in general, and that it would
be useful in cases where arrays aren't matrices of numbers?

------
pekk
Who else uses @ this way? It seems really ad hoc and ugly as a core feature of
Python syntax

------
scotty79
This looks bad:

S = (H.dot(beta) - r).T.dot(inv(H.dot(V).dot(H.T))).dot(H.dot(beta) - r)

but this:

S = (( (H) .dot (beta) - (r) ).T) .dot ( inv ( (H) .dot (V) .dot ((H).T) )) .dot ( (H) .dot (beta) - (r) )

is close enough to:

S = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)

although a bit less readable due to being a bit longer.

------
gamegoblin
I'd really like to see user-defined infix operators such as those in Haskell.
There are some baked into the standard library (*+-/ etc), but you can also
roll your own, so people could make do until this gets accepted.

~~~
maxerickson
You can get cute about it:

[http://code.activestate.com/recipes/384122-infix-operators/](http://code.activestate.com/recipes/384122-infix-operators/)

The code there overrides | to construct arbitrary operators that are used by
wrapping an object in |, like:

    
    
        a |x| b
    

where x defines the custom behavior.
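A minimal version of that trick, for reference (a sketch in the spirit of the recipe, not its exact code):

```python
class Infix:
    """Wrap a two-argument function so it can be used as a |f| pseudo-infix
    operator, by abusing the | operator's dunder methods."""
    def __init__(self, function):
        self.function = function
    def __ror__(self, left):
        # handles `left |f`: capture the left operand in a closure
        return Infix(lambda right: self.function(left, right))
    def __or__(self, right):
        # handles `...| right`: apply the captured function
        return self.function(right)

x = Infix(lambda a, b: [i * j for i, j in zip(a, b)])
assert ([1, 2, 3] |x| [4, 5, 6]) == [4, 10, 18]
```

Since `|` binds tighter than comparison operators, `a |x| b` evaluates left to right as `(a | x) | b`, which is what makes the two-step dunder dance work.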

------
caretcaret
Why not overload the already-defined ** ? Matrix multiplication is used much
more than matrix exponentiation (which can be delegated to methods), and ** is
already right associative.
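That right associativity is easy to check at the prompt:

```python
# ** groups right-to-left, unlike most other Python binary operators
assert 2 ** 3 ** 2 == 2 ** (3 ** 2) == 512
assert (2 ** 3) ** 2 == 64
```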

~~~
shoyer
Because ** already means element-wise power. The reason we need a new operator
(@) for matrix multiplication is precisely so we can use all the same
operators for arrays that we already use for scalars.

------
zmmmmm
Coincidentally I just implemented this the other day for a quick Matrix class
in Groovy. Was about 4 lines of code.

