Quite a few languages (like Lisp) have a rational data type, so that 1/3 + 1/4 is computed exactly. Others have a decimal data type, whose main purpose is calculations involving money; with such a type, 0.1 + 0.2 = 0.3 exactly.
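Python's standard library happens to ship both kinds, if you want to try this at home (Python standing in here for Lisp et al.):

    from fractions import Fraction
    from decimal import Decimal

    # Rational type: 1/3 + 1/4 is computed exactly
    print(Fraction(1, 3) + Fraction(1, 4))   # 7/12

    # Decimal type: exact base-10 arithmetic, the usual choice for money
    print(Decimal("0.1") + Decimal("0.2"))   # 0.3

    # Binary doubles, for contrast
    print(0.1 + 0.2)                         # 0.30000000000000004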
And finally, I'm no expert on Mathematica, but IIRC it computes error bounds on your results and only displays the decimal digits that fall within those bounds. It's the proper way of doing things, and again, 0.1 + 0.2 = 0.3.
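You can get a feel for bound-tracking arithmetic with interval arithmetic in Python's mpmath package, a sketch of the general idea rather than of Mathematica's actual significance arithmetic, and it assumes you have mpmath installed:

    from mpmath import iv

    # Each iv.mpf value is an interval guaranteed to contain the true
    # number, and arithmetic propagates the bounds.
    x = iv.mpf("0.1") + iv.mpf("0.2")
    print(x)          # prints an enclosure such as [0.29999..., 0.30000...]
    print(x.delta)    # the width of the interval, i.e. the error bound itself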
The only way around that is to use a library that supports higher-precision floating point through emulation. Then again, that still doesn't change the underlying theory in any way.
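In Python, for instance, the stdlib decimal module is exactly that kind of software emulation, with precision you can dial up as far as you like:

    from decimal import Decimal, getcontext

    # Software-emulated decimal floating point; precision is configurable,
    # unlike the fixed 53-bit significand of a hardware double.
    getcontext().prec = 50
    print(Decimal(1) / Decimal(3))   # 0.333... to 50 significant digits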
Computers are so darn fast today that there is no excuse for not using an arbitrary precision library. Floating point should be thought of as an optimization, not as a default.
Unless you are writing DSP code (in which case this article should be nothing new), don't use floating point!
I still remember when every Rails-based example with an invoice in it stored dollar amounts in the DB as floats. Ahh bless.
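For anyone who never hit that bug, it takes three lines to reproduce (Python here, but any language using binary doubles behaves the same):

    from decimal import Decimal

    # Ten dimes as binary floats don't add up to a dollar...
    print(sum([0.10] * 10) == 1.00)   # False
    print(sum([0.10] * 10))           # 0.9999999999999999

    # ...but as decimals they do, which is why money columns belong in
    # DECIMAL/NUMERIC, not FLOAT.
    print(sum(Decimal("0.10") for _ in range(10)) == Decimal("1.00"))  # True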