
Special Linear Systems and Cholesky Factorization for Programmers - disaster01
http://dragan.rocks/articles/17/Clojure-Numerics-3-Special-Linear-Systems-and-Cholesky-Factorization
======
dragandj
Part 1: [http://dragan.rocks/articles/17/Clojure-Numerics-1-Use-
Matri...](http://dragan.rocks/articles/17/Clojure-Numerics-1-Use-Matrices-
Efficiently)

Part 2: [http://dragan.rocks/articles/17/Clojure-
Numerics-2-General-L...](http://dragan.rocks/articles/17/Clojure-
Numerics-2-General-Linear-Systems-and-LU-Factorization)

Linear Algebra Refresher:

[http://dragan.rocks/articles/17/Clojure-Linear-Algebra-
Refre...](http://dragan.rocks/articles/17/Clojure-Linear-Algebra-Refresher-
Vector-Spaces)

[http://dragan.rocks/articles/17/Clojure-Linear-Algebra-
Refre...](http://dragan.rocks/articles/17/Clojure-Linear-Algebra-Refresher-
Eigenvalues-and-Eigenvectors)

[http://dragan.rocks/articles/17/Clojure-Linear-Algebra-
Refre...](http://dragan.rocks/articles/17/Clojure-Linear-Algebra-
Refresher-3-Matrix-Transformations)

[http://dragan.rocks/articles/17/Clojure-Linear-Algebra-
Refre...](http://dragan.rocks/articles/17/Clojure-Linear-Algebra-Refresher-
Linear-Transformations)

~~~
makmanalp
For linear algebra basics, I also especially recommend 3Blue1Brown's YouTube
channel and his "Essence of" series:

[https://www.youtube.com/watch?v=kjBOesZCoqc&list=PLZHQObOWTQ...](https://www.youtube.com/watch?v=kjBOesZCoqc&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab)

I've never seen the material presented more clearly - it's completely visual
and focused on building intuition first.

------
tw1010
This is awesome. I can't wait for the future in which we get more informal
blog posts from programmers about even more abstract subjects, whose
exposition has so far been dominated by academics (who very rarely innovate in
how they present the material). The day I see a good engineer-style post on HN
about sheaf theory or algebraic geometry will be amazing.

~~~
dragandj
Author here. I hope I won't spoil the future, since I work at a university,
but I am also a programmer, and these libraries and tutorials were made to be
100% practical, with working programmers as the target audience.

------
santaclaus
Now the really tough question: how do you pronounce Cholesky? I had a Polish
math prof who was mad about the American pronunciation, but Cholesky was
actually French, so should we pronounce it in the francophone style?

~~~
mturmon
The sophisticated linear algebra practitioners I learned from all said "ko-
LESS-key". People who ask me naive questions about the decomposition ("wait,
is it upper triangular or lower triangular?") say "choe-LESS-key", or "CHOLES-
key", or mutter something even farther off the mark.

Thus, I'm sticking with ko-LESS-key.
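
(For anyone genuinely unsure about that "naive" question: both conventions
exist, and the upper factor is just the transpose of the lower one, since A =
L Lᵀ = Uᵀ U. A minimal pure-Python sketch of the lower-triangular variant,
purely for illustration:)

```python
# Lower-triangular Cholesky factor: returns L with A = L * L^T.
# The "upper" convention is simply U = L^T; LAPACK's ?potrf lets
# you request either via its UPLO argument.
def cholesky_lower(a):
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                l[i][j] = (a[i][i] - s) ** 0.5   # diagonal: square root
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return l

# Classic symmetric positive-definite example:
a = [[4.0, 12.0, -16.0],
     [12.0, 37.0, -43.0],
     [-16.0, -43.0, 98.0]]
print(cholesky_lower(a))
# -> [[2.0, 0.0, 0.0], [6.0, 1.0, 0.0], [-8.0, 5.0, 3.0]]
```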

Also seeking opinions on: Wiener (as in the process), Jensen (as in the
inequality), and Fourier (as in the series).

~~~
mindcrime
_Fourier_

I've always heard this pronounced like "for-e-ay" (or "for-e-eh" where the
last part is the Canadian "eh?").

~~~
mturmon
I'm with you. You also hear "FOUR-e-err", and my favorite EE teacher said
"FUR-ee-err", indistinguishable from the word for a fur coat-maker, which
always produced a smile. (But, he's a fellow of the IEEE and I'm not, so who's
laughing now?)

I say "WEE-ner" (since he was American, but, open to correction on that one)
and "YEN-sen" (that's how I was taught).

~~~
santaclaus
> FOUR-e-err

My favorites are 'you-ler' for Euler and 'goose-e-in' for Gaussian.

------
CoreXtreme
Font size at 23px. Seriously?

Just skimmed it and now my eyes hurt.

The computational cost of calculating the Cholesky factorization of the
original second-derivative matrix is comparable to the cost of calculating the
inverse factorization.
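
(That cost comparison is exactly why one factors instead of inverting: once L
is in hand, each new right-hand side costs only two O(n²) triangular
substitutions, versus O(n³) to form a fresh inverse. A pure-Python sketch,
with the factor of the classic 3×3 example hard-coded and a helper name of my
own choosing:)

```python
# Solve A x = b given the Cholesky factor L of A (A = L * L^T):
# forward substitution for L y = b, then back substitution for L^T x = y.
def solve_cholesky(l, b):
    n = len(l)
    y = [0.0] * n
    for i in range(n):                      # forward: L y = b
        y[i] = (b[i] - sum(l[i][k] * y[k] for k in range(i))) / l[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):            # backward: L^T x = y
        x[i] = (y[i] - sum(l[k][i] * x[k] for k in range(i + 1, n))) / l[i][i]
    return x

# Factor of A = [[4,12,-16],[12,37,-43],[-16,-43,98]]; b = A @ [1,1,1]
L = [[2.0, 0.0, 0.0],
     [6.0, 1.0, 0.0],
     [-8.0, 5.0, 3.0]]
print(solve_cholesky(L, [0.0, 6.0, 39.0]))  # -> [1.0, 1.0, 1.0]
```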

------
amelius
Since LAPACK is an excellent library for these matrix functions, why not use
it, and translate it to Clojure instead of reinventing the wheel? (There's
also a translator from Fortran to C).

~~~
dragandj
Because the JVM does not have access to the hardware features needed to
execute this efficiently. Also, the power of LAPACK comes from the existing
highly tuned _implementations_, not the interface itself. The reference BLAS
and LAPACK are actually quite slow.

~~~
TurpIF
In fact, you can access the hardware through native calls via JNI (or JNA).
Of course, you then have to embed multi-platform libraries and manage the
associated issues. Also, the OpenBLAS implementation is very well optimized
for several Intel and AMD processors (you can compile it so that it
autodetects which one you're running on). It can even reach the efficiency of
Intel's MKL implementation in single-threaded mode.

~~~
dragandj
We don't even have to guess, since that's exactly what Neanderthal does.
Also, I micro-benchmarked lots of options and have yet to find one that fills
a similar use case and is faster than Neanderthal+MKL on the CPU, regardless
of the JNI overhead (minus the obvious direct use of MKL, but that is much
more low-level code). Also, most higher-level libraries have considerable
overhead. Neanderthal's overhead is tiny.

OpenBLAS's huge drawback is that it only supports BLAS, without LAPACK,
sparse matrices, tensors, FFT, etc.

Anyway, regarding the OP's comment, I guess they meant to suggest
implementing all of that in pure Java, not Java + FFI, since with FFI the
native code still has to be written in something other than Java.

