I am not especially interested in the digits of Pi themselves, but in the various algorithms involved in doing arbitrary-precision arithmetic. Optimizing these algorithms for good performance is a difficult programming challenge.
Arbitrary-precision arithmetic with huge numbers has little practical use, but some of the algorithms involved are interesting for other purposes. In particular:
- The Discrete Fourier Transform (DFT). This transform is widely used in many algorithms, and most modern electronic appliances (such as digital televisions, cell phones, and music players) include at least one implementation of it.
- The reliable management of a very large amount of disk storage, at least for a single computer. Specific methods were developed to ensure high reliability and high disk I/O bandwidth. The same ideas can be applied to other fields such as video streaming or database access.
- The whole computation is an extensive stress test for a computer, covering its CPU, RAM, disk storage, and cooling system. A single bit error during the computation yields a wrong result, and inadequate cooling can cause a hardware failure.
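To illustrate why the DFT shows up in arbitrary-precision arithmetic at all: multiplying two large integers reduces to a convolution of their digit arrays, and the FFT computes that convolution quickly. Below is a minimal sketch of this idea (the function names are my own, and a real big-number library would use a carefully implemented number-theoretic or floating-point FFT with error analysis, not this toy version):

```python
import cmath

def fft(a, invert=False):
    """Recursive Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    sign = 1 if invert else -1
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def bigmul(x, y):
    """Multiply non-negative ints via FFT convolution of base-10 digits."""
    a = [int(d) for d in str(x)][::-1]  # least significant digit first
    b = [int(d) for d in str(y)][::-1]
    n = 1
    while n < len(a) + len(b):  # pad to a power of two
        n *= 2
    fa = fft([complex(v) for v in a] + [0j] * (n - len(a)))
    fb = fft([complex(v) for v in b] + [0j] * (n - len(b)))
    # Pointwise product in the frequency domain = convolution of digits.
    fc = fft([u * v for u, v in zip(fa, fb)], invert=True)
    # Inverse FFT needs a 1/n scale; round away floating-point noise.
    return sum(round(c.real / n) * 10 ** i for i, c in enumerate(fc))
```

The naive schoolbook method costs O(n²) digit operations; the FFT approach costs O(n log n) transforms plus a pointwise product, which is what makes multi-billion-digit multiplications feasible.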
Your use of the word "must" is the key. The digits of pi are supposed to be the same, but if a cosmic ray flips a bit, or the machine becomes unstable because it runs hot, or there's a bug in the FPU (as happened with Intel's Pentium FDIV bug), then results can differ, even when they shouldn't. Computing pi to very large numbers of digits is a standard warmup to make sure that new hardware is working correctly. It's only one of many tests, but it has found hardware bugs.
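A minimal sketch of such a sanity check, using Python's `decimal` module and Machin's formula (this is just one simple way to get many digits; record-setting computations use much faster series such as Chudnovsky's, and would compare the result against independently verified digits):

```python
from decimal import Decimal, getcontext

def machin_pi(digits):
    """pi = 16*arctan(1/5) - 4*arctan(1/239) (Machin's formula)."""
    getcontext().prec = digits + 10  # extra guard digits

    def arctan_inv(x):
        # Taylor series for arctan(1/x), x an integer > 1.
        term = Decimal(1) / x
        total, last = term, Decimal(0)
        n, sign, xsq = 1, 1, x * x
        while total != last:  # stop when adding terms no longer changes the sum
            last = total
            term /= xsq
            n += 2
            sign = -sign
            total += sign * term / n
        return total

    pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    getcontext().prec = digits
    return +pi  # unary plus rounds to the final precision
```

On healthy hardware `machin_pi(n)` must agree with any other correct method to n digits; a flipped bit anywhere in the long dependent chain of operations shows up as a divergence.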
I remember using a machine that started to give incorrect answers when it ran too hot, and it ran hot when it was working at full speed. We had to tweak the compiler to insert extra NOPs into the instruction stream, because the NOP's microcode used less power and thus generated less heat.