Yet Python (and most of her programmers, including data scientists, of whom I am one) stumbles with typing.

    if 0.1 + 0.2 == 0.3:
        print('Data is handled as expected.')
    else:
        print('Ruh roh.')
This fails on Python 3.10 because floats are not decimals, even if we really want them to be. So most folks ignore the complexity (due to naivety or convenience) or architect appropriately after seeing weird bugs. But the "Python is easiest and gets it right" notion that I'm often guilty of has some clear edge cases.



Why would you want decimals for numeric computations though? Rationals might be useful for algebraic computations, but that’d be pretty niche. I’d think decimals would only be useful for presentation and maybe accountancy.


Well, for starters, folks tend to write code expecting 0.1 + 0.2 == 0.3, rather than abs(0.3 - 0.2 - 0.1) < tolerance_value.

Raw floats don't get you there unfortunately.
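A minimal sketch of that tolerance-based comparison, using the stdlib math.isclose (the tolerance value here is illustrative):

    import math
    a = 0.1 + 0.2
    print(a == 0.3)              # False: 0.1 and 0.2 have no exact binary representation
    print(abs(a - 0.3) < 1e-9)   # True: manual absolute-tolerance check
    print(math.isclose(a, 0.3))  # True: stdlib relative-tolerance check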


If you want that, you should use integers. This seems to be a misalignment of expectations rather than a fault in the language.
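Roughly what "use integers" looks like in practice (a sketch with illustrative money values): keep quantities as scaled integers and only convert for display.

    # Sketch: represent money as integer cents instead of float dollars.
    price_cents = 10 + 20               # 0.10 + 0.20 dollars, stored exactly as 30 cents
    print(price_cents == 30)            # True: integer arithmetic is exact
    print(f"${price_cents / 100:.2f}")  # $0.30 -- convert to float only for display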

Other people have posted other examples, but it's not possible to represent arbitrary real numbers losslessly in finite space. Mathematicians use symbolic computation, but that probably is not what you would want for numerics. I could see a language interpreting decimal input as a decimal value and forcing you to convert it to floating point explicitly, just to be true to the textual representation of the number, but it would be annoying to anyone who wants to use the language for real computation, and people who don't understand floating point would probably still complain.

Edit: I’ll admit I have a pet peeve that people aren’t taught in school that decimal notation is a syntactic convenience and not an inherent property of numbers.


They also expect 1/3 + 1/3 + 1/3 == 1. Decimals won't help with that.
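A quick sketch of that difference: the stdlib fractions module does make the expectation hold, while decimal does not.

    from decimal import Decimal
    from fractions import Fraction
    third = Decimal(1) / Decimal(3)
    print(third + third + third == 1)   # False: 0.9999999999999999999999999999
    third = Fraction(1, 3)
    print(third + third + third == 1)   # True: rational arithmetic is exact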


That's slightly different: most programmers won't read 1/3 as "one third" but as "one divided by three", interpret the expression as three divisions added together, and adjust their expectations accordingly. Seeing a constant written as a decimal invites people to think of it as a decimal, rather than as the actual internal representation, which is often "the float that most closely represents or approximates that decimal".



Correct! Many Python users don't know about this and similar libraries that assist with data types. NumPy has several as well.


It is not a Python thing; it is a floating-point thing. You need it if you want hardware support (CPU/GPU) for non-integer arithmetic in any language. Otherwise, you have the decimal, fractions, sympy, etc. modules, depending on your needs.

https://docs.python.org/3/tutorial/floatingpoint.html
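For instance, applying the decimal module to the example upthread (note the string constructors; Decimal(0.1) would inherit the float's rounding error):

    from decimal import Decimal
    # Construct from strings: Decimal(0.1) would carry over the binary float error.
    if Decimal('0.1') + Decimal('0.2') == Decimal('0.3'):
        print('Data is handled as expected.')
    else:
        print('Ruh roh.')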


This is an issue for accountancy. Many numerical fields have data coming from noisy instruments, so being lossy doesn't matter, in the same vein as why GPUs offer f16-typed values.
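A rough illustration of that precision-vs-noise trade-off, assuming NumPy is available (the values are only there to show float16's roughly three decimal digits of precision):

    import numpy as np  # assumption: NumPy is installed
    a, b = np.float16(0.1), np.float16(0.2)
    print(float(a), float(b))  # ~0.09998 and ~0.19995: only ~3 decimal digits each
    print(float(a + b))        # ~0.2998: far coarser than float64, fine if the data is noisier than that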



