
An easier approach to partial fractions decomposition - luu
https://jaydaigle.net/blog/calculus/easier-partial-fractions/
======
kwantam
I learned this in undergrad calc 1 as the Heaviside cover-up method [1], named
after Oliver Heaviside.

Heaviside was a remarkably productive scientist [2], contributing to a range
of topics from vector calculus to electromagnetic theory.

[1] https://en.wikipedia.org/wiki/Heaviside_cover-up_method

[2] https://en.wikipedia.org/wiki/Heaviside
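
For anyone who hasn't seen it: for a simple (non-repeated) root, the rule is
just "cover up that factor in the denominator and evaluate what's left at the
root". A minimal sympy sketch of the rule (`coverup_coeff` is just an
illustrative name, not from the article):

    # Cover-up rule for a simple root a: the coefficient of 1/(x - a)
    # is the original fraction with the (x - a) factor "covered up",
    # evaluated at x = a.
    from sympy import symbols, cancel

    x = symbols("x")

    def coverup_coeff(num, den, a):
        covered = cancel(den / (x - a))          # den with (x - a) covered up
        return cancel(num / covered).subs(x, a)

    # The article's example (7x + 2) / ((x + 2)^2 (x - 1)):
    # its simple root x = 1 gives the coefficient of 1/(x - 1).
    print(coverup_coeff(7*x + 2, (x + 2)**2 * (x - 1), 1))  # -> 1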

~~~
tigerlily
Didn't Heaviside condense Maxwell's original twenty equations (in twenty
variables) down to the four we know and use today?

~~~
antidesitter
And those equations can be condensed into a single manifestly covariant
equation:

□A = J

where □ is the d'Alembertian, A is the 4-potential, and J is the 4-current.
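
Spelled out in LaTeX (a sketch; note that the one-equation form assumes the
Lorenz gauge condition and units where μ₀ = 1):

    % Covariant potential form of Maxwell's equations. Writing F in
    % terms of A makes the homogeneous pair automatic; imposing the
    % Lorenz gauge collapses the field equation to the wave equation.
    F^{\mu\nu} = \partial^\mu A^\nu - \partial^\nu A^\mu, \qquad
    \partial_\mu F^{\mu\nu} = J^\nu
    \quad\Longrightarrow\quad
    \Box A^\nu = J^\nu
    \quad (\text{Lorenz gauge: } \partial_\mu A^\mu = 0).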

------
JadeNB
As the author presents it, it seems to me difficult to discover the idea of
"partially clearing denominators"; but notice that we can still clear all the
denominators, as long as we are lazy about what we do next. (This is one of
the things I have trouble convincing my calculus students to do; I'll
frequently see them encounter a product of two binomials, reflexively
multiply it out, and then spend time factoring it again to find the roots of
the product.)

Anyway, if we clear denominators in the author's example, then we get:

    7x + 2 = A(x - 1)(x + 2) + B(x - 1) + C(x + 2)^2.

As long as we _don't_ multiply and re-group as instinct may suggest, we can
plug in `x = 1` just as easily as the author describes:

    9 = 0 + 0 + 9C.

As he suggests, we can also plug in `x = -2` (and now it's maybe less
mysterious why this is still OK) to get

    -12 = -3B

; and then we can proceed as the author does, subtracting off the B term and
factoring out `(x + 2)` from what's left, _or_ (which is substantially the
same procedure) we can _differentiate_:

    7 = A(x - 1) + A(x + 2) + B + 2C(x + 2) => (at x = -2) 7 = -3A + B.

(The differentiation would look more complicated if we had more terms, but
notice that we don't actually need to know that the derivative of `C(x + 2)^2`
is `2C(x + 2)`, only that the derivative of _any_ polynomial divisible by `(x
+ 2)^2` is divisible by `(x + 2)`, and so vanishes at `x = -2`.)
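
For anyone who wants to check this mechanically, here is a small sympy sketch
of exactly the procedure above (clear denominators, evaluate at the two
roots, then differentiate once for the repeated factor); the variable names
are mine:

    # Check the "clear denominators, evaluate, differentiate" steps.
    from sympy import symbols, Eq, diff, solve

    x, A, B, C = symbols("x A B C")

    lhs = 7*x + 2
    rhs = A*(x - 1)*(x + 2) + B*(x - 1) + C*(x + 2)**2

    C_val = solve(Eq(lhs.subs(x, 1), rhs.subs(x, 1)), C)[0]    # 9 = 9C
    B_val = solve(Eq(lhs.subs(x, -2), rhs.subs(x, -2)), B)[0]  # -12 = -3B
    # Differentiate once; at x = -2 the C term vanishes, leaving 7 = -3A + B.
    dEq = Eq(diff(lhs, x), diff(rhs, x)).subs(x, -2)
    A_val = solve(dEq.subs(B, B_val), A)[0]

    print(A_val, B_val, C_val)  # -> -1 4 1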

~~~
edflsafoiewq
Another way to find A (which is only another form of differentiation) is not
to evaluate (that is, to reduce mod (x - a)) as we did for B and C, but to
reduce mod the repeated factor, (x + 2)^2. The C term drops out immediately,
and (x - 1)(x + 2) = (x + 2 - 3)(x + 2) ≡ -3(x + 2) mod (x + 2)^2, so we get

    7x + 2 ≡ -3A(x + 2) + B(x - 1) => (evaluate at x = 1) => 9 = -9A
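
The same reduction is easy to check with sympy's polynomial remainder (a
sketch; symbol names are mine):

    # Reduce the cleared equation mod (x + 2)^2; the C term drops out,
    # and evaluating the remainder at x = 1 kills the B term as well.
    from sympy import symbols, rem, Eq, solve

    x, A, B, C = symbols("x A B C")

    rhs = A*(x - 1)*(x + 2) + B*(x - 1) + C*(x + 2)**2
    r = rem(rhs, (x + 2)**2, x)              # == -3A(x + 2) + B(x - 1)
    print(solve(Eq(9, r.subs(x, 1)), A))     # 9 = -9A  ->  [-1]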

------
monochromatic
This method may work, but the explanation is nonsense.

We start out with an equation that’s undefined at x = 1. First we multiply
both sides by (x - 1), then we evaluate at x = 1. None of that is valid.

~~~
learnstats2
This type of method for partial fraction decomposition is commonly taught in
A-level Maths (the typical end-of-high-school exams in England, Wales &
Northern Ireland).

Can anyone explain why it works to use x=1, even though the fraction is
undefined at x=1?

Presumably an explanation would involve continuity and limits.

~~~
edflsafoiewq
You don't need limits. It works perfectly fine as an equation in the ring of
rational fractions on R[x]. That is, an equation between rational functions
shouldn't be read as saying "if you evaluate these for a specific value of x
and the denominators do not vanish the results are equal" but as saying "these
are equal as polynomials after formally clearing the denominators".

Concretely, the equations he computes with,

    (7x+2) / (x+2)^2 (x-1) = A/(x+2) + B/(x+2)^2 + C/(x-1)

    (7x+2) / (x+2)^2 = A(x-1)/(x+2) + B(x-1)/(x+2)^2 + C

    (7x+2) / (x-1) = A(x+2) + B + C(x+2)^2/(x-1)

    (7x+2) / (x+2) (x-1) = A + B/(x+2) + C(x+2)/(x-1)

are all equivalent by definition (because f/g = h/k means fk = gh) to

    7x+2 = A(x+2)(x-1) + B(x-1) + C(x+2)^2

Evaluate at x=1 to get C=1. Evaluate at x=-2 to get B=4. IMO it's easier to
get A by the system-of-equations approach rather than the evaluation method
at this point, since the coefficient of x^2 on the RHS is evidently A+C and
must vanish, so A=-1.
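
That coefficient-comparison route is also the easiest to mechanize; here is a
sketch with sympy's Poly, solving the whole linear system at once:

    # Compare coefficients of 7x + 2 and A(x+2)(x-1) + B(x-1) + C(x+2)^2.
    from sympy import symbols, Poly, solve

    x, A, B, C = symbols("x A B C")

    d = Poly(A*(x + 2)*(x - 1) + B*(x - 1) + C*(x + 2)**2 - (7*x + 2), x)
    # All coefficients of the difference must vanish; the x^2 coefficient
    # alone already forces A + C = 0.
    print(solve(d.all_coeffs(), [A, B, C]))  # -> {A: -1, B: 4, C: 1}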

~~~
JadeNB
The problem with viewing this as _only_ a computation in the ring `R(x)` (I
think that's what you mean, not "the ring of rational fractions on R[x]";
maybe you meant the fraction field of `R[x]`?) is that, while all the
equalities _of rational functions_ make perfectly good sense in `R(x)`, there
is no "plug in `x = 1`" function on `R(x)`: for instance, `1/(x - 1)` has no
value at `x = 1`. One must 'promote' the equality in `R(x)` to one in `R[x]`
by clearing denominators, and only _then_ plug in `x = 1`.

(Of course this is what you _do_, and I agree that it's the right, or at
least a good, way to explain it; but I think it's important to emphasise
explicitly _how_ the algebraic point of view avoids any concern with taking
limits.)

~~~
gizmo686
You could view R[x] as a subring of R(x). At that point, once you have
transformed the problem into one contained entirely within R[x], evaluation
at a point becomes a well-defined function.

I suppose you could view evaluation as a non-total function on R(x) × R (or
rather, a function on the annoying-to-write domain where it is defined), but
I have never seen anyone do that explicitly.

~~~
JadeNB
> You could view R[x] as a subring of R(x). At that point, once you have
> transformed the problem into one contained entirely within R[x], evaluation
> at a point becomes a well-defined function.

Indeed, this is exactly the process that I meant to describe by:

> One must 'promote' the equality in `R(x)` to one in `R[x]` by clearing
> denominators, and only then plug in `x = 1`.

> I suppose you could view evaluation as a non-total function on R(x) × R (or
> rather, a function on the annoying-to-write domain where it is defined),
> but I have never seen anyone do that explicitly.

I agree that it would be strange, for many reasons. Mathematicians in the
main tend to avoid the language of partial functions. An individual
evaluation function, say at `x = c`, is naturally viewed as a function on the
localisation R[x]_{(x - c)}; but, if one wants to view the evaluation as
(informally) a "two-place" function, then I would say that it lives naturally
on R[x] × R.

