I remember playing a computer game in math class where you were presented with an XY plane and had to "hit" points (really circles, so you didn't need to be exact) by plotting equations through them. With some trial and error you could make a polynomial go through basically as many as you wanted; it's neat to see this applied to a real problem!
Doing this kind of thing was my first introduction to Lagrange interpolation, too, but it turns out that you can do Lagrange interpolation without trial and error; you can just use linear algebra, since the points are a linear function of the coefficients, regardless of the degree of the polynomial.
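For example, here's a little self-contained sketch of that (the helper name and sample points are made up, and it uses plain rational arithmetic rather than a finite field): interpolation is just solving the Vandermonde system V·c = y by Gaussian elimination.

```python
# Fit an exact polynomial through n points by solving the Vandermonde
# system V @ coeffs = ys, where V[i][j] = x_i**j.  The points below are
# invented just to illustrate the idea.
from fractions import Fraction

def interpolate(points):
    """Return coefficients c0..c_{n-1} of the unique degree < n polynomial
    through the given n points, via Gaussian elimination over the rationals."""
    n = len(points)
    # Augmented Vandermonde matrix [V | y].
    rows = [[Fraction(x) ** j for j in range(n)] + [Fraction(y)]
            for x, y in points]
    # Forward elimination with pivot selection.
    for col in range(n):
        pivot = next(r for r in range(col, n) if rows[r][col] != 0)
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(col + 1, n):
            factor = rows[r][col] / rows[col][col]
            rows[r] = [a - factor * b for a, b in zip(rows[r], rows[col])]
    # Back substitution.
    coeffs = [Fraction(0)] * n
    for col in reversed(range(n)):
        s = rows[col][n] - sum(rows[col][j] * coeffs[j] for j in range(col + 1, n))
        coeffs[col] = s / rows[col][col]
    return coeffs

print(interpolate([(0, 1), (1, 3), (2, 11), (3, 31)]))
# -> [Fraction(1, 1), Fraction(1, 1), Fraction(0, 1), Fraction(1, 1)],
#    i.e. the polynomial x^3 + x + 1
```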
This was in fact the first decoding algorithm for Reed–Solomon codes, but it's not very efficient when you don't know which of your points are the ones with the corrupted data; you kind of have to guess, which gets expensive fast when you're trying to tolerate more than one or two errors.
Although it turns out that there are better algorithms for that, the most commonly used RS codes don't actually encode a series of points on a polynomial, as subjoriented's comment at the root of this thread suggests; instead they are so-called "BCH codes", where you consider the data you transmit (both the original data and the "parity" symbols) to be a sequence of coefficients. So where does the redundancy come from? After all, any set of numbers is valid as the coefficients of a polynomial.
BCH codes require the polynomial to be divisible by a known generator polynomial, and that's where you get the redundancy you need to correct errors. Gorenstein and Zierler published an efficient error-correction ("decoding") algorithm for BCH codes in 1960; the first efficient decoding algorithm for original-view RS codes (where you transmit a series of points) wasn't found until 1986, and even today, BCH-code decoding algorithms are more efficient than original-view decoding algorithms.
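Here's a toy sketch of what I mean, over the small prime field GF(13) so the arithmetic stays readable (real RS/BCH implementations work in GF(2^8) with log/antilog tables; the helper names and message symbols below are made up):

```python
# Toy BCH-view encoder over the prime field GF(13): the codeword is the
# message polynomial times a generator polynomial, so every valid codeword
# is divisible by the generator, and the receiver can check that.
P = 13          # field modulus
ALPHA = 2       # a primitive element of GF(13)

def poly_mul(a, b):
    """Multiply two polynomials (lowest-degree-first coefficients) mod P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def poly_mod(a, m):
    """Remainder of polynomial a divided by m, coefficients mod P."""
    a = a[:]
    for i in reversed(range(len(m) - 1, len(a))):
        if a[i]:
            factor = a[i] * pow(m[-1], -1, P) % P
            for j, mj in enumerate(m):
                a[i - len(m) + 1 + j] = (a[i - len(m) + 1 + j] - factor * mj) % P
    return a[:len(m) - 1]

# Generator with roots ALPHA and ALPHA^2: enough redundancy for one error.
g = poly_mul([-ALPHA % P, 1], [-ALPHA**2 % P, 1])

message = [5, 11, 3]                 # made-up data symbols
codeword = poly_mul(message, g)      # every codeword is a multiple of g

print(poly_mod(codeword, g))         # [0, 0]: divisible, as required
corrupted = codeword[:]
corrupted[2] = (corrupted[2] + 9) % P
print(poly_mod(corrupted, g))        # nonzero remainder betrays the error
```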
The Gorenstein–Zierler algorithm works by evaluating the received polynomial at the roots of the generator polynomial. Since it's supposed to be a multiple of the generator, it should be zero at those roots, as the generator is; if it's nonzero, the values at those roots are due to some "error polynomial" that's been conceptually added to your message in transit. If you suppose that it has only a few nonzero coefficients, there's a clever way to compute the error polynomial from those values that should have been zero. This allows you to subtract it from the received message to correct the errors.
At least, I think that's how it works. I haven't implemented an RS decoder yet, so I might be getting some of this wrong.
But something like that was what I was actually hoping to read at the above link.
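To make my guess concrete, here's a toy single-error version of that idea over GF(13), not a real decoder (the helper names and data are made up): for one error of magnitude e at position j, the syndromes come out as S1 = e·α^j and S2 = e·α^(2j), so two of them are enough to pin down both unknowns.

```python
# Toy syndrome decoding over GF(13): the "values that should have been zero"
# are the received polynomial evaluated at the generator's roots.  For a
# single error e*x^j, the syndromes are S1 = e*ALPHA**j and S2 = e*ALPHA**(2*j),
# which is enough to solve for both the position j and the magnitude e.
P, ALPHA = 13, 2

def poly_mul(a, b):
    """Multiply two polynomials (lowest-degree-first coefficients) mod P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def poly_eval(poly, x):
    """Evaluate a polynomial (lowest-degree-first coefficients) at x, mod P."""
    return sum(c * pow(x, i, P) for i, c in enumerate(poly)) % P

def correct_single_error(received):
    s1 = poly_eval(received, ALPHA)
    s2 = poly_eval(received, ALPHA * ALPHA % P)
    if s1 == 0 and s2 == 0:
        return received                       # syndromes clean: assume no error
    alpha_j = s2 * pow(s1, -1, P) % P         # S2/S1 = ALPHA**j
    j = next(k for k in range(len(received)) if pow(ALPHA, k, P) == alpha_j)
    e = s1 * pow(alpha_j, -1, P) % P          # error magnitude
    fixed = received[:]
    fixed[j] = (fixed[j] - e) % P             # subtract the error polynomial
    return fixed

# Generator with roots ALPHA and ALPHA^2; codeword = message * generator.
g = poly_mul([-ALPHA % P, 1], [-ALPHA * ALPHA % P, 1])
codeword = poly_mul([5, 11, 3], g)            # made-up message symbols

received = codeword[:]
received[3] = (received[3] + 7) % P           # corrupt one symbol in transit
print(correct_single_error(received) == codeword)   # True
```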
The Gorenstein–Zierler algorithm runs in O(n^3) time, and unfortunately that's not the state of the art. What actual Reed–Solomon implementations use are the Berlekamp–Massey and Forney algorithms, which run in O(n^2) time.
That’s an interesting duality between the coefficient and pointwise representation. I wonder whether this connects to the discrete Fourier transform, which can be viewed as the evaluation of a polynomial at the complex n-th roots of unity. (and the inverse DFT must be equivalent to Lagrange interpolation I guess - insert some handwaving here)
Also, the BCH encoder multiplies the input sequence by the generator polynomial, which is a convolution of their coefficients. Fourier-type transforms (e.g. the number-theoretic transform) relate convolution and pointwise multiplication, so I feel there’s an underlying connection here, but I don’t have enough experience with finite fields to connect the dots...
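Here's a quick numerical check of the connection I'm gesturing at, using numpy's ordinary complex DFT rather than a number-theoretic transform (the coefficient values are arbitrary):

```python
# The DFT of a coefficient vector is the polynomial evaluated at the n-th
# roots of unity, and multiplying polynomials (convolving coefficients)
# becomes pointwise multiplication of those evaluations.
import numpy as np

coeffs = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])  # arbitrary polynomial
n = len(coeffs)
roots = np.exp(-2j * np.pi * np.arange(n) / n)                # n-th roots of unity

# DFT vs. direct evaluation of the polynomial at each root.
evaluated = np.array([np.polyval(coeffs[::-1], w) for w in roots])
print(np.allclose(np.fft.fft(coeffs), evaluated))             # True

# Convolution of coefficients <-> pointwise product of evaluations.
a = np.array([1.0, 2.0, 3.0, 0.0, 0.0])                       # zero-padded so the
b = np.array([4.0, 5.0, 6.0, 0.0, 0.0])                       # product fits in 5 slots
conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real
print(np.allclose(conv, np.convolve([1, 2, 3], [4, 5, 6])))   # True
```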
You seem to have answered your own question! For any set of `n + 1` sample values for degree-at-most-`n` polynomials, there's a map from the coefficient representation to the pointwise representation (obvious), and one going the other way (Lagrange interpolation). The DFT is just one instance of this, but a particularly nice one, because the map is essentially involutive, and subject to considerable computational speed-up.
What dots do you want to connect? (Anyway you can connect them with a high-degree polynomial curve. :-) )
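The "essentially involutive" part is easy to check numerically, too: applying the DFT twice just gives n times an index-reversed copy of the input (a quick numpy sketch with arbitrary data):

```python
# F^2 = n * (index reversal), so F^4 = n^2 * identity: the DFT is an
# involution up to scaling and reordering.
import numpy as np

x = np.random.rand(8)
twice = np.fft.fft(np.fft.fft(x))
print(np.allclose(twice, len(x) * np.roll(x[::-1], 1)))  # True
```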
Indeed, the Wikipedia article on RS codes [1] mentions this equivalence between the DFT and RS encoding/decoding.
It notes that both procedures are basically multiplication by a Vandermonde matrix, with the Fourier transform having the additional requirement of using the n-th roots of unity as the sample points.
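That's easy to verify directly for the complex case, at least; here's a small numpy sketch (arbitrary test vector) comparing the Vandermonde matrix built from the roots of unity against the FFT:

```python
# The DFT matrix is the Vandermonde matrix built from the n-th roots of
# unity: V[k][j] = w**(k*j) with w = exp(-2*pi*i/n), so applying the DFT is
# the same "evaluate at n sample points" map, just with those special points.
import numpy as np

n = 6
w = np.exp(-2j * np.pi / n)
vandermonde = np.vander(w ** np.arange(n), n, increasing=True)  # V[k][j] = w**(k*j)

x = np.random.rand(n)
print(np.allclose(vandermonde @ x, np.fft.fft(x)))              # True
```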
What a blast from the past - I remember this game, and it blew my mind when I first realized I had the power to plot a polynomial through all those points with little effort required.