
Decomposing a function into its even and odd parts - ingve
http://blog.plover.com/math/even-odd.html
======
pash
It's a bit odd (pardon the pun) that the words _Taylor series_ [0] appear
neither in the blog post nor so far in the comments, since that's the origin
of the terms _odd_ and _even_ in this context [1]: the power series of odd
functions have non-zero coefficients only on the odd powers of the variable,
and those of even functions only on the even powers.

And thus the Taylor-series representation of a function makes it immediately
obvious that the function can be decomposed into odd and even sub-functions.
But Taylor-series representations exist only for smooth functions, and
generally converge only in parts of their domains, while the even-odd
decomposition applies to all real-valued functions defined on all real inputs.
It's this algebraic generality of the even-odd decomposition that's most
remarkable.
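The decomposition itself is a two-liner. A minimal sketch in Python (my own illustration, not from the post), using exp, whose even and odd parts are exactly cosh and sinh — the even- and odd-power halves of its Taylor series:

```python
import math

# Any real-valued function on a symmetric domain splits as f = f_e + f_o:
#   f_e(x) = (f(x) + f(-x)) / 2   is even:  f_e(-x) =  f_e(x)
#   f_o(x) = (f(x) - f(-x)) / 2   is odd:   f_o(-x) = -f_o(x)

def even_part(f):
    return lambda x: (f(x) + f(-x)) / 2

def odd_part(f):
    return lambda x: (f(x) - f(-x)) / 2

# exp decomposes into cosh (all even Taylor terms) + sinh (all odd terms).
fe, fo = even_part(math.exp), odd_part(math.exp)
for x in (0.0, 0.5, -1.3):
    assert abs(fe(x) + fo(x) - math.exp(x)) < 1e-12
    assert abs(fe(x) - math.cosh(x)) < 1e-12
    assert abs(fo(x) - math.sinh(x)) < 1e-12
```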

0\.
[https://en.wikipedia.org/wiki/Taylor_series](https://en.wikipedia.org/wiki/Taylor_series)

1\. The Wikipedia article on odd and even functions points out the
decomposition described in the submitted post:
[https://en.wikipedia.org/wiki/Even_and_odd_functions](https://en.wikipedia.org/wiki/Even_and_odd_functions)

~~~
davidtgoldblatt
I got curious about the etymology of this, and it's not clear to me that
wikipedia is right on this one. According to [1], the earliest use of "even"
and "odd" for functions goes back to Euler [2]. It's been a long time since
high school Latin, but it doesn't look like he has the Taylor series in mind
here. He certainly calls out the functions f(x) = x^n for some n as even or
odd, and notes that the sums work in the ways you'd expect, but he also talks
about ratios of those functions, which he wouldn't need to do if he were
assuming smoothness and could just expand out the Taylor series of the ratio.
It's unclear, but it looks to me like he's using "even" and "odd" to draw an
algebraic analogy.

[1] [http://jeff560.tripod.com/e.html](http://jeff560.tripod.com/e.html) [2]
[http://eulerarchive.maa.org/docs/originals/E005.pdf](http://eulerarchive.maa.org/docs/originals/E005.pdf)
, section XVII.

~~~
pash
I did not mean to imply that Wikipedia was my source for the etymology of
_even_ and _odd_ as classifications of functions. (I hadn't really meant to
make an etymological point at all, and I added the link only so that readers
unfamiliar with the concept of Taylor series might be enticed to click through
and learn something.)

I don't have any references for you right now, but I would be surprised if the
usage of the terms doesn't pre-date your citation of Euler by at least several
decades. The basic technique for deriving Taylor series is known to date to at
least 1671, to the correspondence between James Gregory and John Collins; it
also shows up in letters from de Moivre to Johann Bernoulli in 1694 and from
Leibniz to Bernoulli in 1708. Newton had a geometric means of deriving the
coefficients of power series when he was writing the _Principia_ (published in
1687), as demonstrated by the proof of his tenth proposition, and he included
a description of the algebraic technique in an early draft of his _Quadrature
of Curves_ (but removed it before publication in 1706). So Taylor series were
broadly known to Europe's prominent corresponding mathematicians in the first
decades of the eighteenth century, and to some of them decades earlier.

And the basic concept of power series, as well as the power-series
representations of many common functions, were already widely known in the
latter decades of the seventeenth century. Like I said, I don't have a
reference for you at the moment, but I have always heard that the terms _odd_
and _even_ derive from the power-series representations of functions, and it's
difficult to imagine that no one before Euler had noted that the power series
of some functions involve only odd or only even powers—although one may more
readily imagine that someone noted it while neglecting to name it.

Anyway, Taylor series and the more basic concept of power series certainly
were known to Euler when he wrote the paper you cited. Since the power series
of polynomial functions are the polynomials themselves, you're right that
Euler would not have had power series in mind in the context of this paper.
(But, no, Euler would not have been thinking in terms of
smoothness—mathematicians up until the analytic enlightenment of the
nineteenth century played fairly fast and loose with derivatives. The point
you made about ratios of functions is off-base, I think: the paper is about
reciprocal solutions to polynomial equations.)

The question is simply whether your citation is evidence that Euler first
coined the terms for the concept in the context of this paper (i.e., at that
time and referring specifically to polynomials), or whether the concept and
terminology already existed. If not, when was it generalized to refer to
functions other than polynomials of finite order? I don't know. Perhaps nobody
bothered to give the concept a name until Euler in 1727, and perhaps nobody
even considered the concept with regard to a more general class of functions
until even later. (I have a source book around here somewhere that might say
something about the matter. ...)

In any event, if you asked a sample of mathematicians today why functions are
called odd or even, I'd bet that many of them would point to power-series
representations. That's what I meant to point out in my earlier comment.

~~~
thaumasiotes
> I would be surprised if the usage of the terms doesn't pre-date your
> citation of Euler by at least several decades

The citation is pretty suggestive, in that the text goes "functions, which I
call even, which have this property...". That doesn't mean the terminology is
original to Euler, but it does mean it can't have been established at the time
he wrote.

> if you asked a sample of mathematicians today why functions are called odd
> or even, I'd bet that many of them would point to power-series
> representations

Eh. The terms apply just as much to nondifferentiable functions. I would have
just said that it comes from the properties of polynomials with terms of all
even or all odd degree. I've never thought of _infinite_ polynomials as having
any special place in the categorization of functions as even or odd; the
concept is generally introduced with finite ones like f(x) = x^2. Years
later, when you learn about Taylor series, it gets pointed out that the series
for sine and cosine have terms of only odd or only even degree, and -- look at
that -- they conform to the definition of odd and even functions. (I'm
not making a claim about the origin of the concept, but I am making a claim
about how it's viewed today.)

~~~
pash
Yes, that's more or less what I meant, though from what I understand of pre-
modern mathematics, I still think it's likely that power-series
representations were the inspiration for applying the terms _odd_ and _even_
to functions. ...

But is it even clear that Euler's usage of "functiones pares" etymologically
corresponds with our "even functions"? In studying classical Latin for four
years, I never encountered a usage of "par" that would be translated as "even"
in the numerical sense, but admittedly I know next to nothing about
mathematical Latin of the eighteenth century. Reading Euler's words, if I were
not aware of the modern English usage of _even_ to describe the functions he
named, I imagine I might translate his phrase as "equal functions". That
translation seems to capture the idea that such functions have the property
that f(x) equals f(-x), or that the functions' values are the same on both
sides of the y-axis.

Particularly since Euler did not name what we now call _odd_ functions, it
does not seem clear to me that his Latin usage is etymologically related to
our modern terms at all. ... Do you know whether _par_ and _impar_ were the
Latin terms used in Euler's era to refer to even and odd integers?

(To counter my own objection, yes, the English _even_ , of Germanic origin,
does have several meanings that are close to the main meaning of Latin _par_
as _equal_ , e.g., in "even odds" or "an even split". Presumably that's the
origin of the mathematical meaning of _even_ in English: an even number is one
that can be split into two equal numbers whose sum is the original.)

~~~
thaumasiotes
> But is it even clear that Euler's usage of "functiones pares" etymologically
> corresponds with our "even functions"?

Well, the etymology of "even" doesn't trace to the Latin word "par", or any
cognate or ancestor of it. Is it clear that Euler's usage of "pares" is the
same sense as our "even functions"? Pretty clear.

The "words" Latin dictionary (
[http://archives.nd.edu/whitaker/words.htm](http://archives.nd.edu/whitaker/words.htm)
) includes the following gloss of 'par': "s:even, divisible by two". Perseus (
[http://www.perseus.tufts.edu/hopper/morph?l=par&la=la#lexico...](http://www.perseus.tufts.edu/hopper/morph?l=par&la=la#lexicon)
) seems to cite this sense back to Horace, glossing the phrase "ludere par
impar" as "play 'even and odd'".

I'll also note that par and impar are the Spanish equivalents today of even
and odd.

All that, together with the fact that it was translated into English as
"even", seems like a pretty strong case that that's what Euler meant.

~~~
pash
Yep, I looked it up too, and indeed _par_ and _impar_ seem to have been the
standard Latin terms for _even_ and _odd_ in the mathematics of the era. See,
e.g., Clavius's 1574 translation of Euclid's _Elements_ [0].

0\.
[https://books.google.com/books?id=4Ks7AAAAcAAJ&pg=PA8&lpg=PA...](https://books.google.com/books?id=4Ks7AAAAcAAJ&pg=PA8&lpg=PA8&dq=Par+numerus&source=bl&ots=JQZ7Q37ezp&sig=DP_HvkXUeNZ0E5454eGdS8Gfpss&hl=la&sa=X&ei=6QG3VLaLJIG_uATjpYLwCg&redir_esc=y#v=onepage&q=Par%20numerus&f=false)

------
qmalzp
You can think of the even and odd parts of a function as decomposing the
function into its +/-1 eigencomponents with respect to the operator f(x) ->
f(-x).

You can think of the exponential Fourier series of a function as a way to
decompose the function into its {..., -2, -1, 0, 1, 2, ...} eigencomponents
with respect to the operator f(x) -> f'(x).
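The first observation is easy to check numerically. A sketch with NumPy (my own illustration): sample f on a grid symmetric about 0, so that the operator f(x) -> f(-x) acts on the sample vector by reversal, and verify that the even and odd parts land in its +1 and -1 eigenspaces:

```python
import numpy as np

# Sample a function on a grid symmetric about 0; the reflection operator
# f(x) -> f(-x) then acts on the sample vector by reversing it.
x = np.linspace(-1.0, 1.0, 201)
f = np.exp(x)
Rf = f[::-1]                 # samples of f(-x)

even = (f + Rf) / 2          # lands in the +1 eigenspace of the reflection
odd  = (f - Rf) / 2          # lands in the -1 eigenspace of the reflection

assert np.allclose(even[::-1], even)    # R(even) = +even
assert np.allclose(odd[::-1], -odd)     # R(odd)  = -odd
assert np.allclose(even + odd, f)       # the parts sum back to f
```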

~~~
kmill
I like this from a representation theory perspective. The cyclic group of
order two, C_2, is the set {1,x} with x^2=1 (it's reasonable to just use
{1,-1} with the group operation being real-number multiplication). Let V be
the set of continuous function R->R, and then let's define a linear
representation where phi_1 is the identity operator on V and phi_x is the
operator you describe, phi_x(f)(c)=f(-c). (The only two symmetries of a line
which fix the origin are the identity and flipping, which is what this
representation is representing.)

The group C_2 is known to have exactly two irreducible representations (the
positive and the negative representations), so V decomposes into (at most) two
subrepresentations (that is, there are two subspaces of V which are closed
under the group action). Using the characters of C_2, we get two projection
operators: (\phi_1+\phi_x)/2 and (\phi_1-\phi_x)/2. Examining what these do,
they decompose a function into the even and odd parts, respectively!

This idea can be extended to the circle group for the Fourier transform.

Representations are able to capture a bit more than eigencomponents. Where an
eigencomponent requires that the action be strictly scaling, components from a
representation can have more complicated actions. For instance, if you have a
representation of the dihedral group of symmetries of a triangle, then there
will be projections which will give you 1) the +1 eigencomponent, 2) the -1
eigencomponent associated with flipping the triangle over, and 3) the
2-dimensional component which faithfully represents the symmetries of the
triangle (i.e., the one already mentioned when describing the dihedral group).

~~~
Xcelerate
It's funny that you mention this; I was about to type a similar comment!
Thinking of the Fourier transform in terms of group theory seems like it would
make it more complicated, but it actually makes the fundamental underlying
concept simpler to understand.

One can perform the Fourier transform over an arbitrary, compact, non-
commutative group G via f(g) = ∑ dᵏ⋅tr(f̂ᵏ⋅ρᵏ(g)), where the sum is from k = 0
to k = ∞, and k indexes the unitary, irreducible representations ρᵏ of G. dᵏ
is the dimension of the kth representation. f̂ᵏ is the (matrix) Fourier
coefficient of the kth irreducible representation and is computed as f̂ᵏ = ∫
f(g)⋅ρᵏ(g⁻¹) dμ(g), where μ is a Haar measure on G such that ∫ dμ(g) = 1. Note
that since the group representations are unitary, ρᵏ(g⁻¹) = ρᵏ(g)ᴴ.

For commutative groups, all of the ρᵏ are one-dimensional, and so the sums and
integrals are over scalar values. As you mention, for the circle group the
above expression reduces to the "conventional" equation for the Fourier
transform.

One can think of the group Fourier transform as decomposing a nonlinear
function over a group into a linear combination of orthonormal functions such
that cutting off the sum at the kth term provides the best MSE approximation
to the function, i.e., f̂ᵏ = argmin ∫ [tr(ĉᵏ⋅ρᵏ(g)) - f(g)]² dμ(g), where the
minimization is over ĉᵏ.
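As a sanity check of the commutative case (my own sketch, not from the comment): over the cyclic group Z_n the ρᵏ are the 1-dimensional characters e^(2πikg/n), the normalized Haar measure is (1/n)·∑, and the two formulas above reduce to the ordinary DFT and its inversion:

```python
import numpy as np

# Fourier analysis over the cyclic group Z_n: the irreducible representations
# are the characters rho_k(g) = exp(2*pi*1j*k*g/n), all of dimension d_k = 1,
# and the normalized Haar "integral" is (1/n) * sum over g.
n = 8
g = np.arange(n)
f = np.cos(2 * np.pi * g / n) + 0.5 * np.sin(4 * np.pi * g / n)  # a function on Z_n

# Fourier coefficients: fhat_k = integral of f(g) * rho_k(g^{-1}) d mu(g)
fhat = np.array([(f * np.exp(-2j * np.pi * k * g / n)).mean() for k in range(n)])

# Inversion: f(g) = sum_k d_k * tr(fhat_k * rho_k(g)); here d_k = 1, tr is trivial.
f_rec = np.array([np.sum(fhat * np.exp(2j * np.pi * np.arange(n) * gi / n)) for gi in g])

assert np.allclose(f_rec, f)
```

(The coefficient convention here matches `np.fft.fft(f) / n`.)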

------
kmm
Neat :) I had almost the same idea when I was trying to figure out whether
every matrix is the sum of a symmetric and an asymmetric matrix.

The decomposition being very similar: A = (A + A')/2 + (A - A')/2, with '
denoting the transpose.
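In NumPy terms, a quick check of that decomposition (my own sketch):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

sym  = (A + A.T) / 2   # symmetric part:        sym.T ==  sym
skew = (A - A.T) / 2   # antisymmetric part:   skew.T == -skew

assert np.allclose(sym.T, sym)
assert np.allclose(skew.T, -skew)
assert np.allclose(sym + skew, A)   # they sum back to A
```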

~~~
tamana
*antisymmetric

The matrix problem is of course a special case of the same problem, where the
domain is the matrix indices, labelled with (0,0) as the middle of the matrix.

------
hellabites
This can be rather difficult for general data where an explicit equation isn't
obvious (even though it'll work quite often as pointed out by johncolanduoni
below).

[I think it's neat that] for sufficiently smooth and periodic data, a Fourier
transform will do exactly this (decompose a function into its even and odd
parts)!

~~~
johncolanduoni
I'm confused. If you have a table of data where one of the columns varies from
-A to A, what is the difficulty in calculating the odd and even parts by just
adding (resp. subtracting) the values at x and -x and dividing by two? Even if
you don't have a precisely symmetric span of x values you can use simple
interpolation as long as your data points are reasonably dense.

Fourier analysis seems to be overkill in this case, unless I'm missing
something.

~~~
hellabites
Your algorithm had a bit of a typo--you want to subtract (resp. add) to
calculate the odd and even parts of a function.

If you don't have a nearly symmetric span of x values, you may need to do
extrapolation to obtain one, which may be difficult.

I brought up Fourier analysis not as a means to replace the decomposition
described in the blog, but to connect it. I think it's neat that Fourier
transformations can be viewed as a parity decomposition.

~~~
S4M
If your data are over [a,b], where a < 0 and b > 0, you can do the
decomposition mentioned in the article over [-c,c], where c = min(|a|,|b|), so
you don't need to extrapolate.
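Combining that with the interpolation idea above, a NumPy sketch (my own code; the sample data and tolerances are only illustrative):

```python
import numpy as np

# Tabulated samples of f(x) = exp(x) at irregular points over [a, b] = [-2.0, 2.1].
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-2.0, 2.1, 2000))
y = np.exp(x)

# Restrict to the symmetric span [-c, c] actually covered by the data,
# then evaluate f(-x) by linear interpolation -- no extrapolation needed.
c = min(-x[0], x[-1])
xs = np.linspace(-c, c, 101)
f_pos = np.interp(xs, x, y)     # f(x)
f_neg = np.interp(-xs, x, y)    # f(-x)

even = (f_pos + f_neg) / 2
odd  = (f_pos - f_neg) / 2

# Compare with the exact even/odd parts of exp, namely cosh and sinh.
assert np.max(np.abs(even - np.cosh(xs))) < 1e-2
assert np.max(np.abs(odd - np.sinh(xs))) < 1e-2
```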

------
apricot
The odd and even functions summing to 1/(x+1) given in the article have an
asymptote at x = 1.

Is there a way to avoid this, and represent 1/(x+1) as the sum of an odd
function and an even function which have no asymptote except for x = -1?

~~~
panic
The odd/even parts of a function are unique, so no.

To see why they're unique, say you have two pairs of odd/even functions
fo1/fe1 and fo2/fe2 that each sum to the same function. Subtract fo1 from both
sides:

    
    
        fo1 + fe1 = fo2 + fe2
              fe1 = fo2 - fo1 + fe2
    

Since fe1 is even, fo2 - fo1 + fe2 must also be even. The function fo2 - fo1
is the difference of two odd functions, so it is itself an odd function. And
the only way the sum of an odd function (fo2 - fo1) and an even function (fe2)
can be even is if the odd function is everywhere zero [1]. In other words, fo2
- fo1 = 0, which means fo2 = fo1. Substituting this into the overall equation,
that means fe1 = fe2 as well.

[1] the sum is even when

    
    
        odd(x) + even(x) = odd(-x) + even(-x)
                         = -odd(x) + even(x)
                  odd(x) = -odd(x)
    

and the only number which is its own negative is zero.

~~~
mjd
I think it's a little simpler to observe that

    
    
        fo1 - fo2 = fe1 - fe2
    

The left side is an odd function and the right side is an even function, so
the common value must be both odd and even. But only the zero function is both
odd and even, QED.

~~~
tamana
You don't even need to posit fe2 and fo2:

f(X) = e(X) + o(X)

f(-X) = e(X) - o(X)

From there you can explicitly solve the system of linear equations to get e(X)
and o(X)
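Spelled out, adding and subtracting the two equations recovers the usual formulas:

```latex
\begin{aligned}
f(x)  &= e(x) + o(x) \\
f(-x) &= e(x) - o(x) \\[2pt]
\text{sum:}\qquad        e(x) &= \tfrac12\bigl(f(x) + f(-x)\bigr) \\
\text{difference:}\qquad o(x) &= \tfrac12\bigl(f(x) - f(-x)\bigr)
\end{aligned}
```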

------
tamana
To visualize this, take the graph of f and mirror it across the y axis.

The even function is the average of the two traces f(x) and f(-x), and the odd
function is the signed distance from that average to the actual values of f.

