With either of those, you're still representing your polynomial as a combination of powers: 1, x, x^2, x^3, x^4, etc.
For many purposes it's much better to represent a polynomial as a combination of Chebyshev polynomials: 1, x, 2x^2-1, 4x^3-3x, etc.
(Supposing you are primarily interested in values of x between -1 and +1. For other finite intervals, use Chebyshev polynomials but rescale x. If x can get unboundedly large, consider whether polynomials are really the best representation for the functions you're approximating.)
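In case it's useful, here's what those look like in code -- a minimal sketch in Python/NumPy (my choice of language; the function names are mine too): the standard three-term recurrence that generates the Chebyshev polynomials T_n, plus the rescaling just mentioned.

    import numpy as np

    def cheb_T(n, x):
        # Evaluate T_n(x) via T_0 = 1, T_1 = x, T_{k+1} = 2x*T_k - T_{k-1}.
        x = np.asarray(x, dtype=float)
        t_prev, t_curr = np.ones_like(x), x
        if n == 0:
            return t_prev
        for _ in range(n - 1):
            t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
        return t_curr

    def to_unit_interval(x, a, b):
        # Affinely map [a, b] onto [-1, 1] so the Chebyshev basis applies.
        return (2 * x - (a + b)) / (b - a)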
Handwavy account of why: those powers of x are uncomfortably similar to one another. If you look at, say, x^4 and x^6, both are rather close to 0 for smallish x and shoot up towards 1 as x approaches +-1. So if you have a function whose behaviour is substantially unlike these and represent it as a polynomial, you're relying on having those powers largely "cancel one another out". That means, e.g., that when you evaluate your function you'll often be computing a smallish number as a combination of much larger numbers, and that costs you a lot of precision.
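One way to make "uncomfortably similar" concrete (my illustration, nothing above depends on it): sample both bases at lots of points and compare the condition numbers of the resulting matrices. Nearly-dependent columns mean a huge condition number, which is exactly the "large numbers cancelling" problem.

    import numpy as np

    x = np.linspace(-1, 1, 1000)
    deg = 12
    powers = np.vander(x, deg + 1, increasing=True)     # columns 1, x, ..., x^12
    chebs = np.polynomial.chebyshev.chebvander(x, deg)  # columns T_0, ..., T_12

    print(np.linalg.cond(powers))  # large: the columns are nearly dependent
    print(np.linalg.cond(chebs))   # modest: the columns are well separated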
For instance, the function cos(10x) has 7 extrema between x=-1 and x=+1, and a polynomial of degree n has at most n-1 turning points, so you should expect it to be reasonably well approximated by a polynomial of degree not too much bigger than 8. In fact you get a kinda-tolerable approximation with degree 12, and the coefficients of the best-fitting polynomial, when represented as a combination of Chebyshev polynomials, are all between -1 and +1. So far, so good.
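Here's one cheap way to get such an approximation in NumPy: interpolation at Chebyshev points, which is near-minimax (and may well be the sort of cheaty shortcut I confess to below).

    import numpy as np

    f = lambda x: np.cos(10 * x)
    cheb_fit = np.polynomial.Chebyshev.interpolate(f, 12)

    print(cheb_fit.coef)  # Chebyshev coefficients, all in [-1, 1]
    xs = np.linspace(-1, 1, 10001)
    print(np.abs(cheb_fit(xs) - f(xs)).max())  # worst error, on the order of 0.03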
If we represent the same function as a combination of powers, the odd-numbered coefficients are zero (as they are in the Chebyshev basis; in both cases this is because our function is an even function -- i.e., f(-x) = f(x)), but the even-numbered ones -- for degrees 0, 2, 4, ..., 12 -- are now approximately 0.975, -47.33, 370.605, -1085.399, 1494.822, -994.178, 259.653. So we're representing this function that takes values between -1 and +1 as a sum of terms that take values in the thousands!
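You can reproduce the blow-up by converting the Chebyshev coefficients to the power basis (the exact values depend on which fit you use, so these come out close to, but not exactly equal to, the figures above):

    import numpy as np

    f = lambda x: np.cos(10 * x)
    cheb_fit = np.polynomial.Chebyshev.interpolate(f, 12)
    power_coefs = np.polynomial.chebyshev.cheb2poly(cheb_fit.coef)
    print(power_coefs[::2])  # even-degree coefficients, in the hundreds and thousands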
(Note: this isn't actually exactly the best-fitting function; I took a cheaty shortcut to produce something similar to, but not quite equal to, the minimax fit. Also, I make a lot of mistakes and maybe there are some above. But the overall shape of the thing is definitely as I have described.)
Since our coefficients are stored only to some finite precision, computing the result this way loses several digits of accuracy: rounding errors proportional to those thousands-sized intermediate terms survive into a final result of size at most 1.
(In this particular case that's fairly meaningless, because when I said "kinda-tolerable" I meant it; the worst-case errors are on the order of 0.03, so losing a few places of accuracy in the calculation won't make much difference. But if we use higher-degree polynomials for better accuracy and work in single-precision floating point -- as e.g. we might do if we were doing our calculations on a GPU for speed -- then the difference may really bite us.)
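Here's a rough way to see the effect (a sketch of mine; the exact numbers will vary): evaluate the same degree-12 approximation in single precision via both representations, and compare against a double-precision reference.

    import numpy as np

    f = lambda x: np.cos(10 * x)
    cheb_c = np.polynomial.Chebyshev.interpolate(f, 12).coef
    power_c = np.polynomial.chebyshev.cheb2poly(cheb_c)

    xs = np.linspace(-1, 1, 10001)
    ref = np.polynomial.chebyshev.chebval(xs, cheb_c)  # float64 reference

    x32 = xs.astype(np.float32)
    cheb32 = np.polynomial.chebyshev.chebval(x32, cheb_c.astype(np.float32))
    poly32 = np.polynomial.polynomial.polyval(x32, power_c.astype(np.float32))

    print(np.abs(cheb32 - ref).max())  # close to float32 roundoff for a size-1 result
    print(np.abs(poly32 - ref).max())  # several digits worse, thanks to cancellation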
It also means that if we want a lower-degree approximation we have to compute it from scratch; whereas if we take a high-degree Chebyshev-polynomial approximation and simply truncate it, throwing away the highest-order terms, we usually get something very similar to what the lower-degree calculation from scratch would have produced.
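For example (same setup as before; the numbers are ballpark):

    import numpy as np

    f = lambda x: np.cos(10 * x)
    high = np.polynomial.Chebyshev.interpolate(f, 20)   # degree-20 fit
    truncated = high.truncate(13)                       # keep T_0 .. T_12 only
    refit = np.polynomial.Chebyshev.interpolate(f, 12)  # degree-12 fit from scratch

    xs = np.linspace(-1, 1, 10001)
    print(np.abs(truncated(xs) - f(xs)).max())  # error of the truncated fit
    print(np.abs(refit(xs) - f(xs)).max())      # error of the fresh fit: same ballpark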
In real use, of course, you wouldn't approximate cos over so wide an interval in the first place: you'd exploit its periodicity and symmetry to reduce the argument to a small range, and on a small interval the advantage of Chebyshev polynomials over ordinary power-basis polynomials is much less. But sometimes you're approximating something that doesn't have convenient domain-reduction relations available.