I am wondering: why is integer division returning an integer such an awful thing? https://www.python.org/dev/peps/pep-0238/ describes the old Python 2 behavior:

-------- Quote: -------
The classic division operator makes it hard to write numerical expressions that are supposed to give correct results from arbitrary numerical inputs. For all other operators, one can write down a formula such as x*y**2 + z, and the calculated result will be close to the mathematical result (within the limits of numerical accuracy, of course) for any numerical input type (int, long, float, or complex). But division poses a problem: if the expressions for both arguments happen to have an integral type, it implements floor division rather than true division.
-----------------------

To guarantee the correct mathematical behavior in Python 2, one would probably have to write:

    def true_math_div(a, b):
        # Promote ints (and Python 2 longs) to float; leave float,
        # complex, and objects defining __div__ untouched.
        if type(a) is int or type(a) is long:
            a = float(a)
        if type(b) is int or type(b) is long:
            b = float(b)
        return a / b

as a and b could be int, float, complex, or some object defining the method __div__.

What's wrong with just `float(a)/float(b)`?

Not everything that can be divided can reasonably be cast to a float. Also, it's ugly as sin.

I wouldn't call it awful, but it is slightly annoying. In a dynamically typed language it's hard to know a priori whether a variable is an int or a float with a whole-number value, and you end up writing x/float(y) all over the place just to make sure your code does what you want it to do. The new scheme, where / is always true division and // is always floor division, just makes everything more explicit.

This particular language quirk doesn't have much to do with dynamic typing, though: it's equally annoying in C-style languages, where 3/2 and 3/2.0 mean different things.

Sure, but in C, if you have a line that looks like z = 3/y, you know that y is either always a float or always an int depending on its declared type, and thus you "know" what z is.
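The two points above, that Python 3's / and // make intent explicit, and that a blanket float() cast breaks for types like complex, can be sketched in Python 3 (a minimal illustration, not code from the thread):

```python
# Python 3: / is always true division, // is always floor division.
print(3 / 2)     # 1.5
print(3 // 2)    # 1
print(3.0 // 2)  # 1.0 (floor division also works on floats)

# Casting both operands to float fails for types that divide fine on their own:
a, b = complex(3, 1), complex(1, 1)
print(a / b)  # (2-1j) -- complex supports true division directly
try:
    float(a) / float(b)
except TypeError as exc:
    print("float() cast fails:", exc)
```

The same objection applies to any user-defined class that implements division (`__truediv__` in Python 3) but has no sensible conversion to float.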
Because it's a duck that doesn't quack.