
Whence function notation? - djmylt
http://blogs.law.harvard.edu/pamphlet/2015/09/28/whence-function-notation/
======
javajosh
There is a certain pleasing _physicality_ to function notation, in the sense
that empty parens make a circle into which you put things. () -> (1). Look, I
put a 1 in there! f(x) is a notation that is strictly speaking only good for
the author of f because they know how to access their independent data; any
utility to the caller, by inferring what "x" should be, is purely
conventional/accidental.

Function notation in this sense mirrors our counting system, which begins with
an empty circle, 0, which we replace with a succession of symbols, until we
hit 9, and then 10. It's interesting to imagine what "10" means if you're
treating each symbol as a function. But this requires defining some sort of
analog to "counting" with functions, which I'm not sure makes much sense in
any general way. What is the "successor" to a function? It seems that we have
a lot of options. One simple example would be to let f return a function based
on an integer input, with the understanding that the digits 0-9 name ten simple
functions, and then at 10 we return the composition of f_1 and f_0. In this way
we describe a composition of those ten functions, in any order, using only a
natural number.
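To make the idea concrete, here is a toy sketch (my own, not from any real
system; the choice f_k(x) = x + k is arbitrary) of reading a natural number's
digits as a composition:

```python
# Toy sketch: interpret the decimal digits of a natural number as a
# composition of ten basis functions f_0 .. f_9.
# The basis f_k(x) = x + k is an arbitrary illustrative choice.
basis = [lambda x, k=k: x + k for k in range(10)]

def from_digits(n):
    """Read n's digits left to right as "apply f_d1, then f_d2, ..."."""
    def composed(x):
        for d in str(n):
            x = basis[int(d)](x)
        return x
    return composed

g = from_digits(10)  # the number 10 names "f_1, then f_0"
print(g(5))          # f_0(f_1(5)) = (5 + 1) + 0 = 6
```

Whether digits compose left-to-right or right-to-left is itself a convention,
much like the notation question in the article.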

Anyway, back to writing CRUD webapps.

~~~
thaumasiotes
> There is a certain pleasing physicality to function notion, in the sense
> that empty parens make a circle into which you put things. () -> (1). Look,
> I put a 1 in there!

This gets dropped as you go further in math, though. Take the function that
reflects across the xy-plane, f(x,y,z) = (x,y,-z). Interpreting the notation
your way, you'd want to write f((x,y,z)) = (x,y,-z), but that isn't done.
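For what it's worth, most programming languages do distinguish the two
readings: a function of three arguments and a function of one triple are
different things (a Python sketch; the names are mine):

```python
# Three separate arguments: the f(x, y, z) convention.
def reflect3(x, y, z):
    return (x, y, -z)

# One tuple argument: the f((x, y, z)) reading, "put one thing in the circle".
def reflect1(p):
    x, y, z = p
    return (x, y, -z)

print(reflect3(1, 2, 3))    # (1, 2, -3)
print(reflect1((1, 2, 3)))  # (1, 2, -3)
```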

~~~
javajosh
I didn't talk about how to implement (or represent) the body of the function!
And besides, Church seemed to like single arguments, so actually he _would_
write something like that. :)

Physicists wouldn't (my degree is in physics).

Programmers would write f(x,y,z){ return [x,y,-z]; }. But actually, I would
say that this is nasty because of asymmetry - it can't consume its own output.
Better: f(p){ return {x: p.x, y: p.y, z: -p.z}; } where p = {x,y,z}

------
derefr
Reminds me: is there a name for the _particular lexical operator_ "f(x)" of
(what is apparently) Euler's notation, in the way that the syntactic operator
"(f x)" is usually referred to as a reduction, and "f x" as a juxtaposition?

I'm aware that the _semantic meaning_ of "f(x)" in most languages is function
application—but I want to know what term I should use when, say, talking about
how I wish in Ruby you could do "f(x)" with Procs, but are stuck instead with
"f[x]".

~~~
jordigh
I don't think so. Mathematicians don't think of "f(x)" as evaluation or
application, but as the value of the function f at x. The "at" is very
important, because we generally deal with numerical functions that have well-
defined geometric graphs, and even for other functions, we retain the
graphical metaphors even if they may not apply anymore. We don't generally
think of "f(x)" as a _computation_ is what I mean. We discuss functions all
the time whose values we agree will exist but cannot be computed (for example,
a choice function that we obtained via the axiom of choice).

We also don't generally pass functions around to other functions, and we don't
care about syntax that much. After all, we are not computers, so we are quite
comfortable changing syntax around according to habit and custom, even happily
using ambiguous syntax when we think that our readers can figure out what we
mean.

So, no. We don't have a common name for it because it's not an important
distinction for us.

------
tel
I always feel like `f(a)` invokes the idea that `f` is a value dependent on
some `a` where notations like `(f a)` and `f a` are more indicative of the
idea that `f` is a thing which has the operation of application to `a`.

This makes sense mathematically as it's taken a long time for people to come
around to thinking of functions as things in their own right. Dependent
quantities are a more historical notion to my understanding.

------
mkehrt
As someone who has done a lot of work with ML and the lambda calculus, having
to put parentheses around my arguments in most programming languages drives me
crazy. It feels so heavyweight.

Function argument parens are particularly weird when dealing with curried
arguments. `f x y z` is so much nicer than `f(x)(y)(z)`.
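In a language without native currying, the heavyweight nested-paren version is
the only option; a Python sketch (the body x + y + z is arbitrary) shows what
`f(x)(y)(z)` actually means:

```python
# A manually curried function: each call consumes one argument and
# returns a function awaiting the next.
def f(x):
    return lambda y: lambda z: x + y + z  # arbitrary illustrative body

print(f(1)(2)(3))  # 6 -- each pair of parens is one application
```

In ML or Haskell the same call is just `f 1 2 3`, with application implicit in
juxtaposition.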

~~~
cousin_it
Naw. I hate using whitespace for function application. Having a binary
operation that's neither associative nor parenthesized is super confusing to
newbies.

    
    
       a+b+c = (a+b)+c = a+(b+c)  // Good!
       f a b = (f a) b != f (a b)  // WTF?
    

BTW, that's also a good rule of thumb for defining new operators. If your new
op ~-^-~ is not associative, don't make it an op!! And even if it is, consider
that users won't magically know the relative precedence of your op versus
someone else's =^%^=. Looking at you, Haskell.

~~~
dragonwriter
You provide:

    
    
       a+b+c = (a+b)+c = a+(b+c)  // Good!
       f a b = (f a) b != f (a b)  // WTF?
    

So, I suppose, also:

    
    
       a/b/c = (a/b)/c != a/(b/c) // WTF?
    

> BTW, that's also a good rule of thumb for defining new operators. If your
> new op ~-^-~ is not associative, don't make it an op!!

Why should associativity be a requirement for a new op? It's not universal
among common standard binary operators, after all (particularly division and
exponentiation are not associative.)

~~~
cousin_it
I said either associative or parenthesized. People don't write a/b/c because
they had 10 years of learning to use parens in arithmetic, but they write f a
b all the time. For the same reason, my rule of thumb applies only to
operators that you define.

The non-associativity of exponentiation does throw people off sometimes. Not
all college students will correctly parse 2^2^2. IMO it's a wart in math
notation. It could've been easily avoided by e.g. drawing a box around the
exponent, so stacked exponentials would have boxes nesting upward.
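Python's `**` happens to follow the math convention and associate to the
right, which makes the trap easy to demonstrate (2^2^2 hides it, since both
groupings give 16, so a different example is needed):

```python
# ** is right-associative in Python, matching stacked exponents in math.
print(2 ** 3 ** 2)    # 512, parsed as 2 ** (3 ** 2)
print((2 ** 3) ** 2)  # 64 -- the left grouping gives a different answer
```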

------
jordigh
In group theory exponential notation such as x^f (picture a superscript, like
x<sup>f</sup>) or composing on the right (x)f are not altogether uncommon.
It's particularly convenient for homomorphisms or the action of a group on a
set. Some category theorists of the mathematical kind, not the computer
science kind (i.e. the kind that cares most deeply about homological algebra),
also like to compose on the right, so that you can write function composition
in the order that you diagram-chase.
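A small sketch of what "composing on the right" means operationally (Python,
with illustrative names): the diagram-order composite applies its left factor
first, unlike the usual g ∘ f.

```python
# Diagram-order ("compose on the right") composition: f first, then g.
def compose_right(f, g):
    return lambda x: g(f(x))

inc = lambda x: x + 1
dbl = lambda x: x * 2

h = compose_right(inc, dbl)  # read left to right: inc, then dbl
print(h(3))  # (3 + 1) * 2 = 8
```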

------
mtdewcmu
I found some more background here[1]. It's not clear (to me, at least) whether
Euler would have employed parentheses in that case if they hadn't been
necessary to avoid ambiguity, i.e. if the argument had been x, he might have
just written fx.

[1] [http://math.stackexchange.com/questions/636332/the-origin-of...](http://math.stackexchange.com/questions/636332/the-origin-of-the-function-fx-notation)

