The Python docs¹ include an implementation of Allen Downey's algorithm:
from random import Random
from math import ldexp

class FullRandom(Random):

    def random(self):
        mantissa = 0x10_0000_0000_0000 | self.getrandbits(52)
        exponent = -53
        x = 0
        while not x:
            x = self.getrandbits(32)
            exponent += x.bit_length() - 32
        return ldexp(mantissa, exponent)
That's almost correct, but not quite (depending on your definition of correct). It fails to account for the problem that Allen Downey's paper so carefully took care of. I think it's best shown with an image (exaggerated to have a 2-bit mantissa and 2-bit exponent):
Top is Python, bottom is correct rounding to the nearest float. Python's implementation does not round to the nearest floating-point value; it floors. This means it oversamples the floats whose mantissa bits are all zero, and it cannot sample an inclusive range at all (1.0 is never returned).
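For reference, here is a minimal sketch of how the recipe could be adjusted to round to nearest instead of flooring, following my reading of Downey's correction: when the 52 random mantissa bits all come out zero, one extra random bit decides whether the result is bumped up into the next binade. The class name FullRandomNearest is mine, not part of the documented recipe.

from random import Random
from math import ldexp

class FullRandomNearest(Random):
    # A sketch, not the documented recipe: same geometric exponent and
    # 52 random mantissa bits as above, plus Downey's boundary correction.

    def random(self):
        mantissa = self.getrandbits(52)
        exponent = -53
        x = 0
        while not x:
            x = self.getrandbits(32)
            exponent += x.bit_length() - 32
        # A result whose mantissa bits are all zero sits on the boundary
        # between two binades; under round-to-nearest it should receive
        # probability mass from both sides, so one extra bit decides
        # whether to round up into the binade above. This also makes 1.0
        # reachable, giving an inclusive range.
        if mantissa == 0 and self.getrandbits(1):
            exponent += 1
        return ldexp(0x10_0000_0000_0000 | mantissa, exponent)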