Why? If I found out that the library that I was using didn't have good fixed-point support (i.e., "doesn't do what I expect"), and if I didn't need the values to be infinitely precise, I'd just use a multiplier. It's always worked well for me in the past.
Yep, that's called "fixed point arithmetic". The multiplier is called a "scaling factor". You typically have to "rescale" after each multiply operation by dividing the result by the scaling factor, but you've done that implicitly by not scaling integer values like 7.
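That implicit rescaling is easier to see in code. Here's a minimal sketch in Python (the names `to_fixed`/`fixed_mul` and the scale of 100 are just illustrative, not from any particular library):

```python
SCALE = 100  # scaling factor: two decimal digits of precision

def to_fixed(x: float) -> int:
    """Represent a value as an integer scaled by SCALE."""
    return round(x * SCALE)

def fixed_mul(a: int, b: int) -> int:
    """Multiply two scaled values, then rescale: the raw product
    carries a factor of SCALE**2, so divide one SCALE back out."""
    return a * b // SCALE

price = to_fixed(1.50)                    # 150
total = fixed_mul(price, to_fixed(3.00))  # 450, i.e. 4.50

# Multiplying by an *unscaled* integer needs no rescale step at all,
# which is the implicit rescaling described above:
also_total = price * 3                    # 450 as well
```

Addition and subtraction of two scaled values need no rescaling either; it's only multiplication (and division, in the other direction) that picks up an extra factor of the scale.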
Luckily, I've never had to write any database code that deals with money. If I had, though, I would have insisted on integers where one penny is stored as 1: it just doesn't make any sense to store an exponent for a bunch of values that are all the same order of magnitude anyway.
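A quick sketch of that penny-is-1 scheme in Python (the amounts and the 8.25% tax rate here are made up purely for illustration):

```python
# Every amount is a whole number of cents, so addition is exact.
subtotal_cents = 1999 + 350                 # $19.99 + $3.50

# The one place rounding enters is applying a rate;
# round once, to the nearest cent, and go back to integers.
tax_cents = round(subtotal_cents * 0.0825)  # 8.25% tax

total_cents = subtotal_cents + tax_cents

def fmt(cents: int) -> str:
    """Format a non-negative cent count as dollars for display."""
    return f"${cents // 100}.{cents % 100:02d}"
```

Here `fmt(total_cents)` gives `"$25.43"`; formatting happens only at display time, and the stored values stay integers end to end.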
(I assume that this is how "fixed point" algebra works, although what I assume computers do with floats and what they actually do ain't exactly ever been similar.)
Ah, but it would work fine on a website where you're selling things to humans (whose bank accounts can't store ha-pennies). I don't think anyone would be foolish enough to let the physicists near financial markets - there would be heavy losses on both sides.
Don't do that. All of the major database players have built-in fixed-point decimal types (DECIMAL/NUMERIC) with user-specified precision. You'd be reinventing the wheel.
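Those types are declared in the schema, e.g. `DECIMAL(10, 2)` in standard SQL. For seeing the behavior outside a database, Python's `decimal` module is a reasonable stand-in (an analogy only, not database code; the price and rate are made up):

```python
from decimal import Decimal, ROUND_HALF_UP

# Decimal holds exact decimal digits, unlike binary floats,
# which is the same guarantee a DECIMAL column gives you.
price = Decimal("19.99")
rate = Decimal("0.0825")

# Quantize to two places, much as a DECIMAL(10, 2) column would store it.
tax = (price * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```

Note the values are constructed from strings, not floats; `Decimal(19.99)` would faithfully preserve the binary float's representation error, which defeats the purpose.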