Polynomial regression tends to be numerically unstable, e.g., fitting with the monomial basis on [0, 1] gives, under reasonable assumptions, a normal-equations matrix close to the notorious Hilbert matrix.
But can take an approach via a set of polynomials orthogonal over the data points on the X axis, and then for the coveted coefficients of the fitted polynomial just do orthogonal projections. This approach is much more stable numerically.
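A minimal sketch of the orthogonal-polynomial idea, assuming NumPy and made-up data: a QR factorization of the Vandermonde matrix produces columns orthonormal over the data points, so the projections of y onto those columns are just dot products, and the monomial coefficients come back out via the triangular factor.

```python
import numpy as np

# Hypothetical data: noisy samples of a cubic on [0, 1].
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.5 * x**3 + 0.01 * rng.standard_normal(50)

degree = 3
V = np.vander(x, degree + 1, increasing=True)  # monomial basis, ill-conditioned

# QR orthogonalizes the monomials over the data points: the columns of Q
# are orthonormal on the sample, so projecting y onto each column is a
# single dot product.
Q, R = np.linalg.qr(V)
proj = Q.T @ y                     # orthogonal projections of y
coeffs = np.linalg.solve(R, proj)  # recover monomial coefficients

fitted = V @ coeffs
```

Never forming the normal-equations matrix V.T @ V is exactly what avoids squaring the condition number.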
If for some reason want to insist on taking the normal-equations approach, then maybe do a numerically exact solution of the system of equations and/or matrix inversion: For this, of course, start with rational numbers. Multiply through by powers of 10 until have all whole numbers. Then for sufficiently many single precision prime numbers, solve the equations in the field of integers modulo each of those primes. For the multiplicative inverses needed there, use a special case of the extended Euclidean greatest common divisor algorithm. Then for the final answers, multiple precision, use the Chinese remainder theorem.
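A sketch of the modular pipeline just described, with a made-up 2x2 integer system whose solution is known to be integral: solve by Gaussian elimination modulo each prime, then recombine with the Chinese remainder theorem, mapping into the symmetric range so negative answers come out right. (The `pow(a, -1, p)` modular inverse requires Python 3.8+.)

```python
from math import prod

def solve_mod_p(A, b, p):
    """Gaussian elimination over the field of integers mod p.
    A is an n x n integer matrix, b an integer vector of length n."""
    n = len(b)
    M = [[A[i][j] % p for j in range(n)] + [b[i] % p] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, p)  # modular multiplicative inverse
        M[col] = [v * inv % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(M[r][j] - f * M[col][j]) % p for j in range(n + 1)]
    return [M[i][n] for i in range(n)]

def crt_combine(residue_vectors, primes):
    """Chinese remainder theorem: combine the per-prime solutions into
    one solution modulo the product of the primes, then shift into the
    symmetric range to recover negative integers."""
    P = prod(primes)
    n = len(residue_vectors[0])
    out = []
    for i in range(n):
        acc = 0
        for res, p in zip(residue_vectors, primes):
            Pi = P // p
            acc = (acc + res[i] * Pi * pow(Pi, -1, p)) % P
        out.append(acc - P if acc > P // 2 else acc)
    return out

# Hypothetical system: 2x + y = 4, x + 3y = -3, exact solution (3, -2).
A = [[2, 1], [1, 3]]
b = [4, -3]
primes = [10007, 10009, 10037]  # a few "single precision" primes
per_prime = [solve_mod_p(A, b, p) for p in primes]
x = crt_combine(per_prime, primes)
```

Each per-prime solve uses only machine-sized integer arithmetic; multiple precision enters only at the final CRT recombination.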
For full details, see a paper by M. Newman, about 1966 or so, in a journal of the US National Bureau of Standards, titled maybe Solving Equations Exactly.
The main point of Newman's approach is that get exact multiple precision answers where nearly all the arithmetic is standard machine integer arithmetic. For the rest might need a few lines of assembler.
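The modular inverse step mentioned above can itself stay in plain machine-integer arithmetic. A sketch of that special case of the extended Euclidean algorithm:

```python
def inverse_mod(a, p):
    """Multiplicative inverse of a modulo p via the extended Euclidean
    greatest common divisor algorithm; nothing but machine-sized
    integer divisions, multiplications, and subtractions."""
    old_r, r = a % p, p   # remainders
    old_s, s = 1, 0       # Bezout coefficients for a
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("a is not invertible modulo p")
    return old_s % p
```

When p is prime every nonzero residue is invertible, which is why the per-prime systems live in a field and Gaussian elimination goes through.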
There is a sometimes useful related point: Even if the normal-equations matrix has no inverse, don't give up yet! The normal equations still have a solution, actually infinitely many solutions, and any of those solutions minimizes the sum of the squared errors and gives the same fitted values, although with different coefficients. If want the coefficients themselves, then can be disappointed. But if really want just the fitted values, are still okay!
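A small demonstration of that point, assuming NumPy and a made-up design matrix whose third column duplicates the second, so the normal-equations matrix is singular: two different solutions of the normal equations disagree on coefficients but agree exactly on the fitted values.

```python
import numpy as np

# Hypothetical singular design: third column duplicates the second,
# so X^T X has no inverse.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
X = np.column_stack([np.ones_like(x), x, x])  # three columns, rank 2
y = np.array([1.0, 3.0, 5.0, 7.0, 9.0])      # exactly y = 1 + 2x

# Two different solutions of the normal equations X^T X c = X^T y:
c1 = np.linalg.pinv(X.T @ X) @ (X.T @ y)  # minimum-norm solution
c2 = c1 + np.array([0.0, 1.0, -1.0])      # plus a null-space vector of X

# Coefficients differ, fitted values agree.
fit1 = X @ c1
fit2 = X @ c2
```

The vector (0, 1, -1) lies in the null space of X, so adding it changes the coefficients without moving the fitted values at all.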