
Russian peasant multiplication - starpilot
http://everything2.com/title/Russian+peasant+multiplication
======
LoonyPandora
Schemes like this multiplication technique are fascinating; the patterns one
can see are beautiful. The fact that they are not taught in schools, and that
their use is actively discouraged, is scandalous.
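For reference, the technique under discussion can be sketched in a few lines (function name is mine, purely for illustration): repeatedly halve one factor and double the other, summing the doubled values on the rows where the halved side is odd.

```python
def peasant_multiply(a, b):
    """Russian peasant multiplication: halve a, double b,
    and add b to the total whenever a is odd."""
    total = 0
    while a > 0:
        if a % 2 == 1:   # odd row: keep it
            total += b
        a //= 2          # halve, discarding the remainder
        b *= 2           # double
    return total
```

This is just binary multiplication in disguise: the odd rows pick out the set bits of `a`.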

My Dad bought me a book about the Trachtenberg System of Basic Mathematics[1]
when I was a kid; it had all sorts of funky new ways to solve maths problems,
just like this one. My primary school hated it: they marked me down for not
showing my working and for doing it the wrong way. I was brilliant at maths
mainly because of this book, but didn't get the extra marks for doing it the
"right" way.

<http://en.wikipedia.org/wiki/Trachtenberg_system>

~~~
tomjen3
Thank you. I too read that book as a child but I couldn't remember what it was
called.

The only beef I had with the ideas in it was that it was difficult to
understand why they worked, compared to standard multiplication.

------
jkap
I used this a lot while writing a 16-bit math system for the NES. A similar
trick to this is Egyptian division
(<http://everything2.com/title/Egyptian+division>)
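Egyptian division is the same doubling idea run in reverse; a rough sketch (function name is mine): build doublings of the divisor, then subtract the largest ones that still fit, accumulating their powers of two as the quotient.

```python
def egyptian_divide(dividend, divisor):
    """Egyptian division: tabulate (power-of-two, doubled-divisor)
    pairs, then greedily subtract from the largest downward."""
    table = []
    power, multiple = 1, divisor
    while multiple <= dividend:
        table.append((power, multiple))
        power, multiple = power * 2, multiple * 2
    quotient = 0
    for power, multiple in reversed(table):
        if multiple <= dividend:
            dividend -= multiple
            quotient += power
    return quotient, dividend  # quotient and remainder
```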

------
RiderOfGiraffes
This can also be used for powers. Square instead of doubling, multiply instead
of adding, and it works for matrices as well as numbers.
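A minimal sketch of that square-instead-of-double variant, i.e. binary exponentiation (function name is mine):

```python
def fast_power(x, n):
    """Binary exponentiation: square x instead of doubling,
    multiply into the result instead of adding."""
    result = 1
    while n > 0:
        if n % 2 == 1:
            result *= x
        x *= x
        n //= 2
    return result
```

Starting `result` at an identity matrix instead of 1, with `*` as matrix multiplication, gives fast matrix powers the same way.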

~~~
eru
Of course this also works with bases other than two for your underlying number
system. Or with mixed approaches.

In general, there's no known algorithm that finds how to compute x^n with the
fewest number of multiplications in time polynomial in log n, i.e. polynomial
in the size of the representation of n.

(I can't remember whether the problem is (co-)NP-complete, or even how to
construct a certificate to prove that the problem is in NP or perhaps co-NP,
though I'd bet on it being in NP.)

And then you can also think about balancing the number of multiplications and
the number of intermediate results you have to store.

~~~
xtacy
Is this what you meant: addition-chain exponentiation? In general it performs
better than binary exponentiation, but it can be very hard to find a good
addition chain for a given exponent. :-)

<http://en.wikipedia.org/wiki/Addition-chain_exponentiation>
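As a toy illustration of why chains can beat binary exponentiation (function name and chain are my own, not from the article): repeated squaring computes x^15 in six multiplications, but the chain 1, 2, 3, 6, 12, 15 needs only five, since each chain element is the sum of two earlier ones and so costs one multiply.

```python
def power_via_chain(x, chain):
    """Evaluate x**chain[-1] along an addition chain, where every
    element after the first is the sum of two earlier elements.
    Returns the result and the number of multiplications used."""
    assert chain[0] == 1
    powers = {1: x}
    mults = 0
    for n in chain[1:]:
        # Find an earlier, already-computed pair summing to n.
        a = next(a for a in chain if a in powers and (n - a) in powers)
        powers[n] = powers[a] * powers[n - a]
        mults += 1
    return powers[chain[-1]], mults
```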

~~~
eru
Yes, thanks!

------
suprgeek
Take a look at Vedic Mathematics
<http://www.vedicmaths.org/introduction/tutorial/tutorial.asp>. There are
sixteen Sutras or principles that provide interesting hacks for quick "in your
head" Arithmetic.

~~~
jinfiesto
As an interesting aside, the algorithm for multiplication is algebraically
identical in both methods.

------
waterhouse
Somewhat related: An elementary method of fast multiplication of large
integers:

<http://en.wikipedia.org/wiki/Karatsuba_multiplication>
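A compact decimal-digit sketch of Karatsuba (function name is mine; real implementations split on machine words rather than decimal digits): three recursive multiplications on the halves instead of four.

```python
def karatsuba(x, y):
    """Karatsuba multiplication: split each number in half and
    recover the cross term from one extra multiply."""
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    # Cross term via (a+b)(c+d) - ac - bd: the Karatsuba trick.
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0
```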

------
Andrew-Dufresne
Polish Hand Magic is also an interesting trick to do multiplications:
<http://www.scientificamerican.com/article.cfm?id=parents-corner-polish-han>

A comic illustrating the trick:
<http://www.smbc-comics.com/index.php?db=comics&id=1914#comic>

------
joe24pack
Looks similar to the method used by the ancient Romans:

<http://www.phy6.org/outreach/edu/roman.htm>

------
fakelvis
I've always enjoyed the "Mental Math" section of the "Mentat Wiki":
<http://www.ludism.org/mentat/>

It covers 'tricks' for quickly and accurately calculating squares, cubes, logs
and roots (up to fifth root). Worth a look.

------
signa11
iirc in fourier transforms, convolution of signals in time-domain is equivalent
of signals in frequency-domain. thus 12 * 23 can be thought of as convolution
of two signals (1,2) and (2,3), which gives (2,7,6), i.e. 276 after carrying...
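A sketch of that digit-convolution view (function name is mine; digits are given most significant first). Evaluating the convolution as a base-10 polynomial with Horner's rule handles the carries automatically, even when a convolution coefficient exceeds 9:

```python
def multiply_by_convolution(a_digits, b_digits):
    """Multiply two numbers given as digit lists by convolving
    the digit sequences, then evaluating in base 10."""
    n, m = len(a_digits), len(b_digits)
    conv = [0] * (n + m - 1)
    for i, a in enumerate(a_digits):
        for j, b in enumerate(b_digits):
            conv[i + j] += a * b
    # Horner evaluation in base 10 propagates the carries.
    result = 0
    for c in conv:
        result = result * 10 + c
    return result
```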

~~~
gjm11
Yup. And you can then use the FFT to do super-fast multiplication. (I think
you omitted something like "pointwise multiplication" after "equivalent of".)
This is called "Strassen multiplication"; you can actually do better by doing
your Fourier transforms not with complex numbers but with a different
algebraic structure; that's called Schönhage-Strassen multiplication. I've
seen it claimed that Strassen is actually faster in practice than Schönhage-
Strassen.

Your numbers need to be pretty big before the benefits of this sort of
algorithm outweigh the overheads, though. For instance, GMP uses naive
multiplication up to about 600 bits, various Karatsuba/Toom-Cook schemes
(divide-and-conquer) up to about 120,000 bits, and FFT after that.

