Others (like me) will cry because we know of multibillion-dollar fintechs that still struggle with this.
The one I'm thinking of didn't even have cents for a long time. After a pretty heroic migration effort they added cents. And they did it properly. But within weeks folks were using floats all over the place for money, leading to flaky tests and all kinds of other errors.
I have worked at banks and fintechs for the past 30 years and honestly have never used anything but doubles for money, with no issues (and a simpler code base).
I understand the sentiment and the potential issues, but it's really kind of domain dependent.
If you store things in a hypothetical subsidiary unit, you end up with plenty of corner cases:
1. You might need a different number of "implied decimals". Two decimals is enough for many currencies, but some use three, quite a few use zero, and a few use a janky one-decimal model. And that doesn't even go near, say, pre-1971 GBP structures.
You'll have to put scaling logic on every interface with the outside world (there's a sketch of what that looks like after this list).
2. Even if you do it perfectly, it's going to change. For example, the subsidiary unit on the Icelandic Krona is being removed in a lot of financial APIs right now. You can either change your records or change your scaling logic, but either way you've now got an inflection point where the current code can no longer reason about behaviour from before the changeover.
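To make that concrete, here's a rough Java sketch of the kind of per-currency scaling table and boundary conversions you end up carrying around; the exponents are an illustrative ISO 4217 subset and the class/method names are just invented for this comment:

    import java.math.BigDecimal;
    import java.util.Map;

    final class MinorUnits {
        // Illustrative subset of ISO 4217 minor-unit exponents; the real table is
        // bigger and, as the ISK example shows, not actually static over time.
        static final Map<String, Integer> EXPONENT =
                Map.of("USD", 2, "EUR", 2, "BHD", 3, "JPY", 0, "ISK", 0);

        // Scale an external decimal amount into integer minor units for storage.
        static long toMinorUnits(BigDecimal amount, String currency) {
            return amount.movePointRight(EXPONENT.get(currency))
                         .longValueExact(); // throws if there's stray extra precision
        }

        // Scale stored minor units back out for display or an API payload.
        static BigDecimal fromMinorUnits(long units, String currency) {
            return BigDecimal.valueOf(units)
                             .movePointLeft(EXPONENT.get(currency));
        }
    }

Every system you talk to needs to agree on that table, and item 2 above is exactly what happens when one of its entries moves under you.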
What you need is a decimal type, where you can have 6.33 dollars, 2.167 dinars, or 500 won, and have them all retain fidelity. Sadly, few popular languages provide it.
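For what it's worth, a small Java sketch of what that looks like with BigDecimal, each amount keeping its own scale, versus the drift you get from binary doubles (nothing here is meant as a production pattern):

    import java.math.BigDecimal;

    class DecimalFidelity {
        public static void main(String[] args) {
            // Each amount carries its own scale; nothing is implied elsewhere.
            BigDecimal dollars = new BigDecimal("6.33");  // scale 2
            BigDecimal dinars  = new BigDecimal("2.167"); // scale 3
            BigDecimal won     = new BigDecimal("500");   // scale 0

            System.out.println(dollars); // 6.33
            System.out.println(dinars);  // 2.167
            System.out.println(won);     // 500

            // Exact arithmetic: string construction keeps the value you wrote down.
            System.out.println(dollars.add(new BigDecimal("0.01"))); // 6.34

            // The classic binary-double drift, for contrast.
            System.out.println(0.1 + 0.2); // 0.30000000000000004
        }
    }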
Still, in a long you can represent micro-"cents" (e.g. 1,000,000 = 1 unit of the currency's smallest unit; for USD that's cents). It's just a matter of scaling things up or down to the level of granularity you require.
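Here's a hedged sketch of that micro-unit approach with a Java long, using a made-up 0.125-cent fee just to show fractional quantities accumulating exactly and rounding happening only at the boundary:

    class MicroCents {
        // 1,000,000 micro-units per cent buys six extra digits of resolution.
        static final long MICROS_PER_CENT = 1_000_000L;

        public static void main(String[] args) {
            // A hypothetical per-transaction fee of 0.125 cents, exact in micro-cents.
            long feeMicros = 125_000L;
            long totalMicros = feeMicros * 1_000; // 1,000 transactions

            // Round to whole cents only at the boundary (half-up here; the
            // rounding rule is a business decision, not a maths one).
            long cents = (totalMicros + MICROS_PER_CENT / 2) / MICROS_PER_CENT;
            System.out.println(cents + " cents"); // 125 cents, i.e. USD 1.25
        }
    }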