Hacker News

Python is slow because it does a lot of dynamic dispatch, which dwarfs the cost of the actual operations such as addition. So it's the other way around: Python, of all things, could probably switch to decimal by default without a significant slowdown.
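To put a rough number on that claim (timings will vary by machine), here's a sketch comparing a pure-Python float sum to a Decimal sum. In CPython, decimal is backed by the C _decimal module, and the interpreter's per-element dispatch overhead keeps the two in the same ballpark:

```python
import timeit
from decimal import Decimal

# Sum a million values both ways. sum() still does a dynamic-dispatch
# PyNumber_Add per element, which dominates the cost of the add itself.
floats = [0.1] * 1_000_000
decimals = [Decimal("0.1")] * 1_000_000

t_float = timeit.timeit(lambda: sum(floats, 0.0), number=1)
t_dec = timeit.timeit(lambda: sum(decimals, Decimal(0)), number=1)

print(f"float sum:   {t_float:.3f}s")
print(f"Decimal sum: {t_dec:.3f}s")

# And decimal gives exact base-10 results where binary floats can't:
print(0.1 + 0.2 == 0.3)                                   # False
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

The gap is real but it's a small constant factor on top of an already interpreter-dominated loop, which is the point being made above.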

What would be much slower is all the native code that Python apps use for bulk math, such as numpy. And that is because we don't have decimal floating point implemented in mainstream hardware (IEEE 754-2008 defines decimal formats, and IBM's POWER and z/Architecture chips implement them, but x86 and ARM don't), not because of some fundamental limitation. If decimals were more popular in programming, I'm sure CPUs would quickly start providing optimized instructions for them, much like they provided BCD instructions back when BCD was popular.




Eh, maybe. I don’t think many NumPy people would reach for a decimal dtype even if it were available and implemented in hardware. It just isn’t useful in the primary domains where NumPy is used.

That leaves vanilla Python. You're right that compared to dynamic dispatch, a decimal op implemented in hardware is nothing, but the issue here (IMO) is death by a thousand cuts. Python is already slow as hell, and making decimal the default today would require a software implementation in many if not all places, which would be slow. The trade-off doesn't seem worth it to me.
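Part of why a software decimal carries extra weight per operation is the bookkeeping it does that a hardware float add simply skips. A small sketch of that machinery in the stdlib decimal module: every operation consults a thread-local context for precision, rounding mode, and traps:

```python
from decimal import Decimal, localcontext, ROUND_HALF_UP

# Each Decimal operation is governed by a context: the exact sum 2.3456
# gets rounded to the context's 4 significant digits, half-up.
with localcontext() as ctx:
    ctx.prec = 4
    ctx.rounding = ROUND_HALF_UP
    print(Decimal("1.2345") + Decimal("1.1111"))  # 2.346
```

That configurability is a feature for financial code, but it's exactly the kind of per-op overhead that adds up if it becomes the default everywhere.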



