> Then the GCD operation could be done in hardware and everything would be fast (at least as fast as floats).

Could it? Is there a fast way to do GCD in hardware?

Googling gave me https://en.wikipedia.org/wiki/Binary_GCD_algorithm, which needs O(log₂(max(u, v))) iterations to compute gcd(u, v), but I wouldn’t know whether that’s the best we can do in hardware, or whether that’s ‘as fast as floats’.
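
For context, what makes the binary algorithm plausible for hardware is that it uses only shifts, subtractions, and comparisons, never division. A rough C sketch of it (assuming GCC/Clang’s __builtin_ctzll; a hardware version would use a priority encoder and barrel shifter instead):

    #include <stdint.h>

    /* Binary GCD (Stein's algorithm): shifts, subtraction, and
     * comparisons only -- no division. */
    uint64_t binary_gcd(uint64_t u, uint64_t v)
    {
        if (u == 0) return v;
        if (v == 0) return u;

        /* gcd(2u', 2v') = 2 * gcd(u', v'): pull out the shared power of two. */
        int shift = __builtin_ctzll(u | v);
        u >>= __builtin_ctzll(u);        /* make u odd */

        do {
            v >>= __builtin_ctzll(v);    /* make v odd; gcd(u, 2v') = gcd(u, v') for odd u */
            if (u > v) {                 /* keep u <= v */
                uint64_t t = u; u = v; v = t;
            }
            v -= u;                      /* gcd(u, v) = gcd(u, v - u); v - u is even */
        } while (v != 0);

        return u << shift;
    }

Each iteration strips at least one bit off v, which is where the O(log₂(max(u, v))) bound comes from; the catch is that the iterations are serial and data-dependent, so whether that’s ‘as fast as floats’ is still the open question.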

Also, I don’t see how your format describes a rational type. Rationals store numbers as p/q for integer p and q, not with a decimal point.




Sorry, abuse of notation. 30.30 would be 30 bits numerator, 30 bits denominator.
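
Something like this, I guess (a rough C sketch; the field names are made up, and it assumes GCC/Clang-style bit-fields on 64-bit types):

    #include <stdint.h>

    /* One reading of the "30.30" layout: a signed 30-bit numerator and
     * an unsigned 30-bit denominator packed into one 64-bit word, with
     * 4 bits to spare. */
    typedef struct {
        int64_t  num : 30;   /* signed numerator */
        uint64_t den : 30;   /* denominator, kept > 0 */
    } rat30;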

I would imagine it would not need to fully reduce after every operation; there are probably tricks where, if you know the factors going in, the common factors to cancel after a multiplication or division are obvious.
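
One well-known trick along those lines (a C sketch with full-width fields for simplicity and made-up names): if both inputs are already in lowest terms, cross-cancelling before a multiply leaves the product already reduced, so no separate reduction pass is needed, and the intermediate products stay small.

    #include <stdint.h>

    typedef struct { int64_t num; int64_t den; } rational;   /* den > 0 */

    /* Plain Euclid, just for the sketch. */
    static int64_t gcd64(int64_t a, int64_t b)
    {
        while (b != 0) { int64_t t = a % b; a = b; b = t; }
        return a < 0 ? -a : a;
    }

    /* Multiply two already-reduced rationals: cancel gcd(x.num, y.den)
     * and gcd(y.num, x.den) up front, and the result comes out already
     * in lowest terms. */
    rational rat_mul(rational x, rational y)
    {
        int64_t g1 = gcd64(x.num, y.den);
        int64_t g2 = gcd64(y.num, x.den);
        rational r = { (x.num / g1) * (y.num / g2),
                       (x.den / g2) * (y.den / g1) };
        return r;
    }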

It's not my best idea :p



