I built a fintech startup here in Germany ten years ago. The article mentions lots of important things. Here's another one:
Time synchronization. It is incredibly important in fintech applications for a number of reasons:
1. *Transaction Ordering:* Financial transactions often need to be processed in the order they were initiated. This is especially crucial in high-frequency trading where trades are often made in milliseconds or microseconds. A small difference in timing could potentially lead to substantial financial gains or losses. Therefore, accurate time synchronization ensures fairness and order in the execution of these transactions.
2. *Security:* Accurate timekeeping helps in maintaining security. For instance, time-based one-time passwords (TOTPs) are widely used in two-factor authentication systems. These passwords are valid only for a short period of time and rely on synchronized clocks on the server and client side.
3. *Audit Trails and Dispute Resolution:* Timestamping transactions can help create a precise audit trail, which is critical for detecting and investigating fraudulent activities. In case of any dispute, a detailed and accurate transaction history backed by synchronized time can help resolve the issue.
4. *Distributed Systems:* In distributed systems, time synchronization is important to ensure data consistency. Many financial systems are distributed over different geographical locations, and transactions need to be coordinated between these systems in an orderly fashion. This requires all servers to have their clocks synchronized.
I am sure there are even more fields where this is relevant.
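Point 2 is easy to make concrete: TOTP codes are derived from a shared secret plus the current 30-second time window, so both sides must agree on the time. A minimal RFC 6238 sketch in plain Python (standard library only), checked against the RFC's published test vectors:

```python
import hmac
import struct
import time

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: the code is an HMAC of the current time window,
    so client and server clocks must be synchronized to within ~one step."""
    counter = int(at) // step                      # which 30-second window we're in
    mac = hmac.new(secret, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 Appendix B test vector: T=59s with this secret yields "94287082".
assert totp(b"12345678901234567890", 59, digits=8) == "94287082"
```

If the client's clock drifts by more than one window, its codes stop validating, which is exactly why time synchronization matters here.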
This was also a surprise to me when we started TigerBeetle: clock synchronization protocols can experience unaligned network partitions, where the ledger database cluster is able to continue running, but now with the risk of unsynchronized clocks and far-future timestamps, which can in turn lead to money being locked up in 2PC payment protocols.
We therefore spent considerable effort [0] on clock synchronization in TigerBeetle, not for the consensus protocol—we never risk stale reads or take a chance with clock error bounds—but rather simply for accurate audit trails and to keep inflight liquidity from being locked up if transactions take too long to get rolled back.
Don't just represent money as (<currency>, <decimal amount>); you may want to store it as (<currency>, <decimal amount>, <timestamp>) to be able to apply the correct exchange rate post facto, rather than having to deduce it from the transaction history.
It also helps with cross-checks/consistency verification.
Also, preserve the original timezone in the timestamp. It can save many headaches down the road.
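One possible shape for the richer representation the parent comments suggest, as a small value object. The field names here are hypothetical, and the timestamp is kept as an ISO 8601 string with its original offset:

```python
from dataclasses import dataclass
from decimal import Decimal

@dataclass(frozen=True)
class MoneyAt:
    """A monetary value plus the moment it was recorded, so the correct
    historical exchange rate can be applied later without replaying history."""
    currency: str        # ISO 4217 code, e.g. "EUR"
    amount: Decimal      # exact decimal, never a float
    recorded_at: str     # ISO 8601 with offset, preserving the original timezone info

m = MoneyAt("EUR", Decimal("19.99"), "2023-06-28T15:55:22+01:00")
```

Being frozen, the value can't be mutated after the fact, which also helps with the audit-trail concerns above.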
> Financial transactions often need to be processed in the order they were initiated.
And then the banks send you info in batches and out of order :) This happened to us more than once. So the team responsible wouldn't settle/cancel a payment for X even if the bank said so. They would read a few other sync batches to make sure that nothing else had changed for X.
Because "Financial transactions need to be processed in the order they were initiated" is a must :)
— If you’re using JSON to pass around monetary quantities (eg. from the frontend to the backend), put them in strings as opposed to the native number type. You never know what the serializers and deserializers across languages will do to your numbers (round them, truncate them etc.).
— Start recording the currency of the transactions as early as possible. It can be a separate column in your table.
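The strings-over-numbers point is easy to demonstrate. Here's a quick sketch using an amount too precise for a 64-bit float; the string round-trips exactly, the native number does not:

```python
import json
from decimal import Decimal

# A large amount that a 64-bit float cannot represent exactly:
amount = Decimal("92233720368547758.08")

# Sent as a native JSON number, it silently loses precision on the float round-trip:
lossy = json.loads(json.dumps(float(amount)))
assert Decimal(repr(lossy)) != amount

# Sent as a string, the digits survive any serializer untouched:
exact = json.loads(json.dumps({"amount": str(amount)}))
assert Decimal(exact["amount"]) == amount
```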
> Use ISO8601 date and time with offset info, always.
I think that's sufficient for just about all use cases everywhere. I've been a software engineer for almost 25 years and, of all the universal truths I've encountered, bugs from implicit and unintended changes to datetime offsets are the one thing I've seen at every single job.
> I think [ISO8601 date and time with offset info is] sufficient for all use cases everywhere.
TLDR: Also, which timezone is used (not quite the same as offset) really does matter--UTC is great but you can't use it everywhere.
________
One of my favorite simple examples of this "here be dragons" for the new developer: Any system that schedules a future calendar-event.
Such events are typically pegged, implicitly if not explicitly, to a particular timezone or geographic context. For example: "The company's Virtual Summit will occur on November 2nd at 1PM Elbonian Xtremesunshine Time, hosted out of our central Elbonian headquarters."
In that scenario, it is impossible to know for sure how many seconds-from-now it will happen until the moment actually happens! "2023-11-02 13:00:00 EXT" is actually a contract or spec for recognizing a future condition, one that will shift if/when the relevant nation/province/city simply declares their clocks shall be set differently.
So if the Elbonian government alters their daylight-savings switchover to occur earlier on 11-01 instead of 11-06, then the summit just moved. Even if you scheduled everything UTC all along... Well, now the summit is overlapping lunchtime for everyone in Elbonia, so it moved from their perspective.
Absolutely. An offset of +02:00 in, e.g., 2023-06-28T18:00+02:00 could mean Berlin in the summer (Central European Summer Time, clocks will change) or Johannesburg (South African Standard Time, where it isn't summer and clocks don't change). Same offset, different time zone, different clock-change rules.
As you note, for some uses this _does_ make a difference and tracking which one you have can in these cases be important.
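This is directly observable with the standard library's zoneinfo module: the two zones share an offset in June, then diverge in December when only Berlin falls back:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Same instant, same +02:00 offset, two different zones:
berlin = datetime(2023, 6, 28, 18, 0, tzinfo=ZoneInfo("Europe/Berlin"))
joburg = berlin.astimezone(ZoneInfo("Africa/Johannesburg"))
assert berlin.utcoffset() == joburg.utcoffset()       # both +02:00 in June

# Six months later Berlin is on CET (+01:00); Johannesburg never changes:
berlin_winter = datetime(2023, 12, 28, 18, 0, tzinfo=ZoneInfo("Europe/Berlin"))
joburg_winter = datetime(2023, 12, 28, 18, 0, tzinfo=ZoneInfo("Africa/Johannesburg"))
assert berlin_winter.utcoffset() != joburg_winter.utcoffset()
```

So an offset alone can never tell you which rules the value will follow next; only the zone identifier can.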
Ha, good one! One can only hope the Elbonian Parliament will have the sense to abolish this stupid Xtremesunshine time and observe one time zone year-round.
As I put in another comment, your class library might have a type that is equivalent to ISO 8601 data, indeed is convertible to and from it, but is a binary representation at runtime compatible with other types in the language.
So this technically isn't ISO8601, and certainly isn't "ISO8601 in a string", which is an _interchange format_ between application with potentially very different runtimes. I don't really recommend treating ISO8601 dates as mere strings, unless you intend to pass them through without even looking at the contents.
"— If you’re using JSON to pass around monetary quantities (eg. from the frontend to the backend), put them in strings as opposed to the native number type. You never know what the serializers and deserializers across languages will do to your numbers (round them, truncate them etc.)."
I'd go a step further and prefix the strings with an ISO currency code ... to stop someone from just feeding it into their language's int-to-float converter and assuming that's OK. Only custom-built (hopefully safe) converters will work.
Ergh, I get what you're trying to prevent, but this actively breaks those custom safe parsers we build, and now you have to do some additional active parsing. Please don't do this, just set contractual expectations in your API.
Politely, I think this comment may be incorrect advice.
I think the correctness and precision requirements of financial transactions outweigh any devex concerns. Lots of banking and trading APIs do exactly this: pass your currency fields as strings.
I'm generally not a fan of overloading fields, but we took the opportunity to add the currency code when we decided to encode monetary quantities in JSON as strings, e.g. "0.02USD". This has worked well, particularly because we use a money handling library that parses it unchanged.
It’s probably fine, but this sort of thing is why transactionAmount and transactionCurrency are better when separated. Consider the case when you’re doing some reporting from your DB based on currency; do you really want to have to deal with one string that contains both or just do a simple WHERE clause?
In our case those are indeed separate in the database, so that we can type the value as a numeric, which is very necessary for many operations. Those fields only get combined in the serialization.
You also included localized formatting (the '.' and ',' ) in your example that'll likely break on parsing without special handling. For interchange, you'll want to avoid using localized formats, e.g. stick to the "C" locale or something specific or agreed to between the involved parties.
Don't be afraid to use formal methods. Queues, retries, event sourcing, payment state handling: there are "global" properties we desire from these systems, certain things we want to make sure are always true and others that we want to make sure eventually happen. For the single-process case, which is what nearly every developer thinks about, this can seem like a solved problem. But we live in a concurrent world with network failures and vendors with errors in their own systems: it's nearly impossible to think really hard in your head and be sure that your queue retry strategy will maintain the properties that are important to your business. It's equally impossible to do this with the lightweight, informal testing strategies employed by busy software teams.
By modelling your systems you will learn what the important failure modes are and you will get better at designing systems that are resilient and efficient.
Card payment systems are fairly unreliable peer-to-peer messaging systems. Be prepared for a lot of complexity. Using an event-sourcing architecture is really useful here for that "auditing" requirement and for debugging transaction state when the network sends you messages in error, out of order, or they forget to retry themselves when they promised to, when merchants send bad data, when POS systems do weird things, etc.
Various platforms use: UTC, the user’s timezone as set in their dashboard, the user’s detected timezone, the timezone of the server that is generating the reports.
Aggregating or reconciling data from different platforms can be a pain if the timezones aren’t clearly indicated.
I’ve had bank statements that didn’t agree to CSV exports of the same data because the servers generating the two reports were in different timezones.
Haha, yeah, for the past tax year I looked at my transaction history and it was missing some transactions that I expected to be there. Turns out the exchange (Gemini, for crypto) uses UTC, but the IRS docs always tell you that a cutoff is relative to your own time zone.
The transactions were in the late evening on New Year's Eve, central US time, but that was already the next year by UTC, so any readout of "transactions for 2022" would not include them.
That’d be a great subplot for a movie. The antagonist sets up an elaborate money laundering operation that leverages conflicting definitions of the taxable year and moves offshore profits that are “lost” between the systems. Could call it “Black Ink to the Future.”
Haha, you're not wrong. I think it was actually more like 8pm, shortly before I left to live it up. But either way, transacting near the new year shouldn't break financial systems.
Furthermore, even for “date fields”, consider using a datetime. Every date implicitly exists in a timezone, and if you ignore that ambiguity you’ll get bitten later.
For example, an invoice due on Friday is probably actually due by close of business (5pm say) in the timezone your business operates, and if created at 11pm it would be processed the next day (or even on Monday, don’t get me started about business day calculations).
> Every date implicitly exists in a timezone, and if you ignore that ambiguity you’ll get bitten later.
Solid advice, and must come from painful burns. I've been preaching from the same book for a few years now: a timestamp without timezone offset is worse than useless.
Or as the DB expert at a previous job so eloquently put it: a timestamp with zone tells you when an event actually happened. A timestamp without zone or offset is equal to wall-clock time inside a windowless room, which itself is in an unspecified location somewhere on the planet.
We stored and transmitted exclusively UTC, which put the burden on the UI at display time (trivial) and backend business logic (complex, but simplified with robust utility functions that everyone understood well) in accounting for timezones when material.
We only had a dozen issues due to it so I believe it worked pretty well compared to the war stories I heard from comparable companies in the fintech space.
That's why datetime data should be handled in a type that includes this information (e.g. DateTimeOffset (1)) and exchanged as ISO 8601 with offset, e.g. "2023-06-28T15:55:22+01:00".
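In Python the equivalent is a timezone-aware datetime; a quick sketch showing that the offset travels with the value and the ISO 8601 string round-trips losslessly:

```python
from datetime import datetime

# Parse an ISO 8601 timestamp with offset into an aware datetime:
ts = datetime.fromisoformat("2023-06-28T15:55:22+01:00")

assert ts.utcoffset() is not None                       # offset is part of the value
assert ts.isoformat() == "2023-06-28T15:55:22+01:00"    # round-trips exactly
```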
#2 and #3 are pretty solid advice. But the rest of them are nothing specific to fintech. I will also offer a few additional points.
1. Never record an amount without its currency.
2. Reconcile all your data all the time. Never let any data go unaccounted for.
3. Maker-checker is a powerful concept. Embrace it to the fullest across your system.
4. You will be dealing with all sorts of non-standardized financial integrations. A lot. Think adapter pattern as early as possible.
5. You will be answering to multiple regulatory agencies. Create boundaries between them within your system and reduce the surface of compliance as much as possible.
I think this article covers many of the main points, but could be worded/structured better. There are terms of art that more precisely refer to the concepts you need.
For “using floating point data types”, it’s even worse; you often need to use strings, for example if you need to store a bank account number “01234567789” will have the leading zero stripped if you use a numeric type.
“Updating transactions” would be better phrased as “use an append-only log / evented architecture”. (Also, “use a double-entry ledger” is probably the most valuable advice I could have sent myself prior to getting into FinTech.)
“Be careful with retry” should be more strictly “use idempotent operations” and link to the canonical Stripe article on idempotency keys (https://stripe.com/blog/idempotency).
Another important one to think about is bitemporality. “Created at” vs “effective at”. Not obvious at first and you’ll have some painful migrations if you don’t build it in. Fowler has a good overview here: https://martinfowler.com/eaaDev/timeNarrative.html.
Edit to add - the advice that maybe using a NoSQL database is pretty bad IMO. I’d advise in the opposite direction - use SERIALIZABLE isolation in a SQL database. Read up on your Aphyr blog posts before trying to do anything distributed. Be paranoid about race conditions / serialization anomalies. If you eventually hit performance issues you need to think hard about what anomalies your access patterns might be subject to. (Obviously HFT won’t use serializable SQL).
> Over the last decade, fintechs using RDBMS databases to record transactions have built features onto their databases like a journal of all changes to all data and clever ways of making sure the data hasn’t been modified by storing checksums of data as transactions are added. (under “Updating transactions”)
It is worth noting that many of these methods don't prevent tampering; they only make it visible when you look for it, at which point the fact that you are explicitly looking for it (rather than having been alerted to a potential issue) might imply it is too late.
The recommendation that "developers should use integers to represent money" (with an implicit 2 decimal places, which will work for many but not all currencies) is not a great one.
If your language has a dedicated type for monetary amounts, use that. (1)
If it does not, but you can make a value object to represent, e.g. amount and currency code, then do that.
If however, your language does not have a dedicated type for monetary amounts, or one cannot be trivially built or retrieved as a package (2), then you should ask yourself if it is really a suitable language for financial tasks.
a) On a web API, so this is by definition data interchange, not "in a language".
b) with a currency code to cross check it.
c) implicitly "in cents", which needs further documentation. Does this mean "pence" when the currency is GBP? How does this work for JPY? BHD?
d) cannot represent fractions of a cent or penny.
So: this int plus currency code plus docs is OK but not great. It's a lowest-common-denominator format for interchange, it requires further documentation and cross-checking (an integer 100 with currency code USD does not mean 100 bucks, and the conversion is currency-specific), and it cannot represent all values.
I wouldn't refuse to convert _to and from_ this format at the edges of my application, for data interchange, but the conversion to something clearer and richer IMHO should remain there.
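For illustration, a hedged sketch of that edge-of-application conversion, with a hand-rolled (and deliberately incomplete) exponent table; real code should source exponents from ISO 4217 data and validate exactness before truncating:

```python
from decimal import Decimal

# Illustrative sample only; ISO 4217 defines the full table.
EXPONENT = {"USD": 2, "JPY": 0, "BHD": 3}

def to_minor(amount: Decimal, currency: str) -> int:
    """Convert a decimal amount to integer minor units at the interchange edge."""
    return int(amount.scaleb(EXPONENT[currency]))

def from_minor(minor: int, currency: str) -> Decimal:
    """Convert integer minor units back to a decimal amount."""
    return Decimal(minor).scaleb(-EXPONENT[currency])

assert to_minor(Decimal("100"), "USD") == 10000       # 100 dollars is 10000 cents...
assert to_minor(Decimal("100"), "JPY") == 100         # ...but 100 minor units of JPY
assert from_minor(10000, "BHD") == Decimal("10.000")  # BHD uses 3 decimal places
```

The richer Decimal representation stays inside the application; the int only exists at the boundary.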
Payment APIs are far from the only use case in fintech, for interest calculations you do have to care about fractions of a cent or penny.
In fact, if your case is "I call the stripe api" ... are you sure you're a fintech and not an online store? Get back to me when you have to interop with FiServ, MasterCard or SAP.
Yet with all those a) b) c) d) caveats, they still use that format. The point is not "I call the Stripe API". The point is: those companies, which process vast amounts of transactions, use integers. Probably for a reason.
And it's not "in cents" but rather in _minor units_, a difference which you, working in fintech, ought to understand.
> Point is - those companies, that process vast amount of transactions, use integers.
Point missed, they use that format _on their client apis_, which tells you nothing about what they do in the code that handles it.
Now it could be the same internally, in which case the reason is "there's crap code everywhere"
It's nice if your language has support for monetary amounts, but usually you end up using multiple languages that interact with the same database model, and you still end up using a built in numeric datatype in your RDBMS of choice.
Three additional 'mistakes' to prevent when dealing with money representations:
1. The definition of a currency might change. For example, some years ago Iceland decided to change the exponent of ISK from 2 to 0. Currencies have different versions.
2. As a FinTech you probably have integrations with many third parties, they don't change their exponents for a currency at the same time. Keep track of what third parties think the correct exponent is at any point in time, and convert between your representation and their representation. Otherwise, you'll have interesting incidents (e.g. transferring 100x the intended amount of ISK).
3. At first you think that counting minor units as an integer is enough, and then you need to start accounting for fractions of cents because sales people sold something for a fee of $0.0042 per transaction. If your code rounds all these fees to $0.00 you don't make any money.
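Point 3 in numbers: rounding each sub-cent fee individually destroys the revenue, whereas accumulating at full precision and rounding once preserves it. A small Decimal sketch:

```python
from decimal import Decimal, ROUND_HALF_EVEN

fee_rate = Decimal("0.0042")          # $0.0042 per transaction
transactions = 1_000_000

# Rounding each individual fee to whole cents yields zero revenue...
per_txn_rounded = fee_rate.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
assert per_txn_rounded == Decimal("0.00")

# ...whereas accumulating at full precision and rounding once keeps the $4,200:
total = (fee_rate * transactions).quantize(Decimal("0.01"))
assert total == Decimal("4200.00")
```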
1, 2 - Yeah, there's always a sanity checking and conversion layer around the 3rd party. Currencies do indeed sometimes have 2 versions in play there.
3 - Indeed, the currency type that I referenced "is appropriate for financial calculations that require large numbers of significant integral and fractional digits and no round-off errors"
I really miss the mistake of NOT using double-entry bookkeeping in the way you store transactions.
Next to that, ensure transactions are immutable, and ensure reporting is idempotent: Close your periods and generate reports for them, when you regenerate the report it should be identical.
These are more domain tips. A lot of the technical tips hold for non-fintech too.
RDBMS databases are not a weak point. A bad model of your transaction is a weak point. Depending on the application, and not all applications require this, you may need a consistent read state for the data. Eventually consistent is like saying periodically wrong. Let's say a trader enters a trade that exceeds their allowed VaR, because they looked at their screen and thought they had more dry powder. (Cocaine jokes aside.) That becomes a risk management problem. (Also the risk that traders can just use the 'It said I was below my limit' excuse.)
If you want an example where consistency is not important, you might be able to overdraw from your bank with your ATM card. The bank is happy for this to be inconsistent, since they can charge for the overdraft.
I'm surprised the article doesn't explain how floating-point numbers are stored in memory according to IEEE 754; that's the main reason you don't use them to represent money.
Others (like me) will cry because we know of multibillion-dollar fintechs that still struggle with this.
The one I'm thinking of didn't even have cents for a long time. After a pretty heroic migration effort they added cents. And they did it properly. But within weeks folks were using floats all over the place for money, leading to flaky tests and all kinds of other errors.
I have worked at banks and fintechs for the past 30 years and honestly have never used anything but doubles for money, with no issues (and a simpler code base).
I understand the sentiment and the potential issues, but it's really kind of domain dependent.
If you store things in a hypothetical subsidiary unit, you end up with plenty of corner cases.
1. You might need a different number of "implied decimals". Two decimals is enough for many currencies, but some are three, quite a few are zero, and a few are a janky 1-decimal model. And that doesn't even go near, say, pre-1971 GBP structures.
You'll have to put scaling logic on every interface with the outside world.
2. Even if you do it perfectly, it's going to change. For example, the subsidiary unit on the Icelandic Krona is being removed in a lot of financial APIs right now. You can either change your records, or change your scaling logic, but you've now got an inflection point where you can't reason about the behaviour before the changeover from the current code.
What you need is a decimal type, where you can have 6.33 dollars, 2.167 dinars, or 500 won, and have them all retain fidelity. Sadly, few popular languages provide it.
Still, with a long you can represent micro-"cents" (e.g. 1,000,000 = 1 unit of the currency's smallest unit; for USD that's cents). It's just a matter of scaling things up or down to the level of granularity that you require.
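A sketch of that scaling scheme (the scale factor and names are chosen for illustration), which makes sub-cent fees exact while keeping everything in integers:

```python
MICRO_PER_CENT = 1_000_000   # 1,000,000 micro-units per smallest currency unit

fee = 420_000                # $0.0042 = 0.42 cents, exactly, in micro-units
total = fee * 1_000_000      # a million such transactions

# 420,000 cents = $4,200.00, with no fractional cents lost along the way:
assert total // MICRO_PER_CENT == 420_000

# Headroom check: a signed 64-bit int at this scale still covers
# more than 92 billion dollars before overflow.
assert (2**63 - 1) // (MICRO_PER_CENT * 100) > 92_000_000_000
```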
This is a nit but is there a better way to say “RDBMS databases?” It feels like saying “ATM Machine”. Could we just say “relational database” or “state of the art database management system?”
The difference is small, but mostly serves as probing questions for interviewers. To my mind, the more you can speak the language, the more comfortable people become with you.
Here's something small: I was doing a SQL interview that involved a card_tx table with the amounts stored as integer values in cents. I immediately noticed this and chatted with the interviewer about it. My assumptions were wrong, and floating-point arithmetic was a simpler reason than what I was thinking.
Anyway, if you can show you have thought about the relevant business problems in the area where you are interviewing, engineers tend to have a strong reflex to engage with you more.
Why should anyone have a look at TigerBeetle, a bleeding edge accounting system built with a language no one understands? Since no one understands Zig, no one will be able to maintain TigerBeetle. The company is exposing itself to a great deal of risk with this decision. Adopting TigerBeetle is a colossal business mistake, not an engineering mistake.
Coincidentally, one of the reasons we picked Zig was for how readable it was, and strikingly so, even for high level programmers who might not understand systems programming or C. Because Zig reads like TypeScript, and we were working in payment switches where the majority of programmers could read that. This particular switch, in fact, had this same business requirement, that programmers should be able to at least read the systems language.
But generally, our experience has been that people who understand C will understand how to maintain Zig [0]. Zig's toolchain is also more accessible, and across all platforms. Zig's compiler is already being used by Uber for hermetic builds.
It's also easy to learn. You can pick up Zig in a week and be comfortable in a month. Zig has a simple grammar. I love how, when we have someone join the team, we never have a discussion about how to learn Zig, as if it's a difficult language to master (like C++!). Rather, there's excitement around learning the language, even ahead of starting at work, and within a day or two they're committing.
We made this decision for TigerBeetle in July 2020, and didn't take it lightly. We had already followed Zig's progress for 2 years by that point, and many factors were considered [1][2][3]. C was the other contender, given that we had to handle memory allocation failure.
The crux of the decision, then, was whether to invest in a systems language of the last 30 years, or in a systems language of the next 30 years. A distributed database is a big investment. It made sense to invest for the future. If anything, it would have been a colossal business mistake to have picked C or C++, which would have crippled our development velocity.
Furthermore, for TigerBeetle's design goals, especially w.r.t. our adoption of NASA's Power of Ten Rules for Safety-Critical Code and thus static memory allocation, Zig made (and continues to make) the most sense.
We also liked the efficient performance culture surrounding Zig, with talented game developers like Michal Ziulek and Stephen Gutekanst, and embedded programmers like xq, Matt Knight, Jens Goldberg and others moving to it. These industries (gaming, embedded) are often a good litmus test of where systems programming is at.
More details (our thinking on Zig through the lens of safety/performance/tooling/ecosystem/hiring/marketing) here:
Fascinating info, thanks for sharing this with all the details! Didn't watch your youtube video but do you use any formal systems like TLA+ internally for validating your designs?
A little off-topic, but I would have loved to see mifos / fineract[0] referenced in the article. Great open source banking core having many of the strengths listed
You can absolutely use doubles for money. Excel does it and so do many other financial tools. There are pros and cons but as long as you do rounding and comparisons correctly it works perfectly fine.
For example, 16.10 is not exactly 16.10 in IEEE floating point. When you do enough operations, and depending on the order of operations, you can wind up several cents off. That sounds small, but it can be enough to give your auditors heartburn. COBOL does BCD arithmetic (not really, but at least conceptually), and it's penny-accurate to 31 digits (as per the standard; implementations may have greater accuracy). Frankly, it's stupid that 63 years after COBOL we're still treating money and currency as an afterthought in languages that are supposed to be business-oriented. Proper currency handling should be part of the language.
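The inexactness is trivially observable; a minimal demonstration in plain Python contrasting binary floats with a decimal type:

```python
from decimal import Decimal

# Binary floats cannot represent most decimal fractions exactly:
assert 0.1 + 0.2 != 0.3
assert sum(0.1 for _ in range(10)) != 1.0            # accumulates to 0.9999999999999999

# Decimal keeps cents exact through the same arithmetic:
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
assert sum(Decimal("16.10") for _ in range(10)) == Decimal("161.00")
```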
> Can you confidently say that $100>¥10 in an offline environment?
Money is never converted. It is exchanged. Trying to solve this is like trying to solve the question "Is "100 USD" > "1 LAPTOP".
When you turn 100 USD into 90 EUR, you didn't convert it. You exchanged it. You bought EUR, at a price given to you by someone or something exchanging it. This could be a bank, a well-established currency office, or some dude on the street. There is no real difference between all three of those: The third party gave you a price, and now has more USD and less EUR, whereas you have more EUR and less USD.
There are various entities publishing standardized average rates which are calculated after the day closes, based on a variety of datapoints they have access to. Those are often used in eg. accounting, to establish the "real" value of something you bought in a currency you don't often use, but it's not true conversion.
If you have, as a datatype, a currency becoming another, there is ALWAYS a "rate" attached to this. So the question "$100>¥10" you asked above requires more data, it should be "$100>¥10 @ 144.28". ANYTHING else is a terrible leaky abstraction. Don't do it. Source your rates automatically from a single source if you like, but make it explicit.
Anyway, a "Money" object really is just this: A precise decimal object, with an ISO currency code. The latter simply being a short string among an included, limited set.
Currency conversions are a transaction not an operation, the conversion rate fluctuates constantly and typically involves fees and involve tax liabilities. For a money type I’d go so far as to want it to either disallow or throw an exception if attempting an operation on two monies in different currencies.
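A sketch of such a Money type that throws on mixed-currency arithmetic; the names and exact behavior are one possible design, not a standard:

```python
from dataclasses import dataclass
from decimal import Decimal

@dataclass(frozen=True)
class Money:
    """An exact amount tied to one currency; cross-currency math is refused,
    forcing conversions to be explicit transactions with an explicit rate."""
    amount: Decimal
    currency: str   # ISO 4217 code

    def __add__(self, other: "Money") -> "Money":
        if self.currency != other.currency:
            raise ValueError(f"cannot add {other.currency} to {self.currency}")
        return Money(self.amount + other.amount, self.currency)

usd = Money(Decimal("100.00"), "USD")
jpy = Money(Decimal("10"), "JPY")

assert usd + usd == Money(Decimal("200.00"), "USD")
try:
    usd + jpy                     # $100 + ¥10 has no meaning without a rate
    raise AssertionError("should have raised")
except ValueError:
    pass
```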
I don't think it's as big a lift. First, there are standards bodies that list currencies as a "default" set, much like we have ISO standard country codes. No one really complains that Narnia isn't a country, that Disneyland isn't a country, or that the Austro-Hungarian Empire is missing from ISO locales.
At a bare minimum, it should be a reasonable fixed point type that correctly handles rounding and intermediate values. So a dollar amount like 123.45 times a rate like 0.3450 doesn't exceed 4 decimal places but intermediate values are extended so we get correct rounding. The destination should probably determine the number of places. That bare minimum wouldn't stop you from comparing yen to dollars, any more than a floating point representing mph stops you from comparing it to a value representing kph.
But there are times where we need to track prices to the nearest tenth or hundredth of a cent. So it should be extensible so that 123.456 dollars * 0.3450 winds up at a correct round/decimal places.
You also don't need always-on, real time currency conversion. You could have a conversion type, operator, or method that does safe conversion based on the value I give it. So if I estimate that Yen are about 130 to the dollar, I can just use that. If I happen to write an application that queries a data provider and can populate that in 'real time,' that's up to me.
If you really wanted, you could find a way to create new types that represent currencies that aren't part of the basic implementation. That might mean you need to specify some things like the representation for different locales, or the default number of digits.
"Can you confidently say that $100>¥10 in an offline environment?"
A major problem money has is that it isn't a unit in the sense we usually take the term to mean. We expect, for instance, that translating one unit to another with a suitable level of precision should be translatable without loss back to the original unit, but that's not true for money, even ignoring transaction costs. If "US dollar" is a unit, it is a unit that technically stands alone in its own universe, not truly convertible to anything else, not even other currencies. All conversions are transient events with no repeatability. But that is very inconvenient to deal with, and with sufficient value stability of all the relevant values, often it's a sufficient approximation to just pretend it's a unit. But if you zoom in enough, the approximation breaks down.
For that and similar reasons, while you could theoretically write that line of code, it would be implicitly depending on a huge pile of what would in most languages be global state. It would be a dubious line of code.
> There are pros and cons but as long as you do rounding and comparisons correctly it works perfectly fine.
This is exactly the issue with using floats where an arbitrary precision decimal with proper rounding is really needed. Easily solved with a good library and if your languages supports it, type, but it's really easy for a dev in a hurry to not use the library and roll some a=b+b*c_rate code that forces some type conversion. The rounding rules often are tied to contracts, and a subtle bug that's off a few mills here (total problem created $2.33) and there can lead to audits (total cost of audit $14,800) that cost a lot.
I'm tired of having to drag in another dependency and lose operators if I'm doing money math. I can create an experience in C++ that's almost rational, with operator overloading, but most other languages were designed well after we knew that doubles are not sufficient. And there's more than just arbitrary precision. For example, some currencies use three decimal places, or none at all; two just happens to be convenient for the euro and the dollar. In addition, sometimes you carry prices to 3 or 4 places. But you still want banker's rounding. And I shouldn't be able to add Turkish lira to US dollars, any more than the language allows adding floats and integers, without conversion. Then there's locale-correct display for currencies (e.g. $ vs USD, and before or after the amount).
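A rough sketch of that kind of type safety in Python. The Money class, the minor-unit table, and its contents are all illustrative, not any particular library's API:

```python
from dataclasses import dataclass
from decimal import Decimal, ROUND_HALF_EVEN

# Minor-unit digits differ by currency: JPY has 0, USD has 2, TND has 3.
EXPONENT = {"USD": 2, "JPY": 0, "TND": 3}

@dataclass(frozen=True)
class Money:
    amount: Decimal
    currency: str

    def __add__(self, other: "Money") -> "Money":
        # Mixing currencies is a type error, like adding floats to ints.
        if self.currency != other.currency:
            raise TypeError(f"cannot add {other.currency} to {self.currency}")
        return Money(self.amount + other.amount, self.currency)

    def rounded(self) -> "Money":
        # Banker's rounding to the currency's own minor unit.
        q = Decimal(1).scaleb(-EXPONENT[self.currency])
        return Money(self.amount.quantize(q, rounding=ROUND_HALF_EVEN),
                     self.currency)

print(Money(Decimal("1.005"), "USD").rounded())   # 1.00 (ties go to even)

try:
    Money(Decimal("5"), "USD") + Money(Decimal("100"), "TRY")
except TypeError as e:
    print(e)   # cannot add TRY to USD
```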
you can also easily rush out subtly wrong money math with ints (overflows, wrong rounding method, etc.), so you should not use the default math operators in any case
The question is why? What does the double give you over a 64-bit integer? Sure, when you divide and it leaves a fractional part you lose it and need to think explicitly about what happens there, but you need to do the same with doubles to avoid pennies going missing and snowballing into larger errors.
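On the explicit-handling point: with ints the usual trick is to allocate the remainder deliberately rather than let it vanish. A sketch (the function name is invented):

```python
def split_cents(total_cents: int, parts: int) -> list[int]:
    """Split an integer amount into `parts` shares that sum exactly
    back to the original: the remainder pennies go to the first shares."""
    base, remainder = divmod(total_cents, parts)
    return [base + (1 if i < remainder else 0) for i in range(parts)]

print(split_cents(100, 3))   # [34, 33, 33] -- sums to exactly 100
```

With doubles the same split gives three values of 0.3333..., and where the lost third of a cent goes is no longer even visible in the code.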
Using integer representations for currencies becomes very messy when dealing with more than one currency at a time.
The United States dollar is famously subdivided into cents, but is also subdivided into 'mills' (one thousand to the dollar).[0]
The Mauritanian ouguiya is divided into five khoums.
The Madagascan ariary is divided into five iraimbilanja.
The Maltese scudo is divided into twelve tarì, which are divided into twenty grani, which are divided into six piccioli.
Historically, such currencies were ubiquitous. For example, prior to 15 February 1971, the pound sterling was divided into twenty shillings, each of which was divided into twelve pence, a system that originated with Roman currency and was used throughout the British Empire.
Exchange rates are typically quoted in terms of the largest unit, whereas integer representations of currency would need to be done in terms of the smallest unit, so extensive information about currency structures would need to be used to correctly represent exchange rates. Floating point or binary-coded decimal representations are consequently much better.
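To make the "extensive information about currency structures" point concrete, here is a sketch of what a pure integer-minor-unit representation has to carry around for a single conversion (the table, the rate, and the function name are all illustrative):

```python
from decimal import Decimal

# Minor units per major unit differ by currency (illustrative subset).
MINOR_PER_MAJOR = {"USD": 100, "JPY": 1, "TND": 1000}

def convert_minor(amount: int, frm: str, to: str, rate: Decimal) -> int:
    """Convert an integer minor-unit amount using a rate quoted in
    major units (e.g. 1 USD = 145 JPY), then round back down to the
    target currency's minor unit."""
    major = Decimal(amount) / MINOR_PER_MAJOR[frm]
    return int((major * rate * MINOR_PER_MAJOR[to]).to_integral_value())

# 10000 US cents ($100) at an assumed rate of 145 JPY/USD -> 14500 yen
print(convert_minor(10_000, "USD", "JPY", Decimal("145")))
```

The structure table is exactly the extra baggage the comment describes; a decimal representation quoted in major units needs none of it.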
Doubles can exactly represent every integer up to 2^53 (just shy of 16 digits), which is good enough for most use cases; you can catch the out-of-range cases (as you should with ints as well)
If you use long ints you must track the decimal precision along with the value, which is not always trivial if you use mixed currencies
Long ints are not guaranteed to round-trip correctly through JSON serialisation/deserialisation (JavaScript parsers read all numbers as doubles)
Doubles are easier to handle in the frontend
Currency math is different enough from regular math that you need special operator functions anyway so it’s not like ints are easier to handle either
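The exactness claim above is easy to check: an IEEE double represents every integer up to 2**53 exactly (so all 15-digit values, and most but not all 16-digit ones), and silently snaps to a neighbour beyond that:

```python
# Every integer up to 2**53 has an exact double representation...
assert float(2**53) == 2**53
assert float(2**53 - 1) == 2**53 - 1

# ...but one past the edge, the nearest representable double is 2**53 itself.
assert float(2**53 + 1) == float(2**53)

print(2**53)   # 9007199254740992 -- the edge of exactness
```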
If Excel uses doubles for money, it should be a warning sign. It uses a simple numeric data type (plain serial numbers counting days) for freaking dates... I can trace at least a couple of bugs in my career to just that fact.
Apart from its quirks Excel is fine, if you know its limitations. I think the real warning sign would be an analyst/programmer working with Excel and expecting high precision results.
Everything is fine apart from its quirks, if you know its limitations. The problem is when a tool's quirks and limitations are neither exhaustively documented nor can they be inferred from having reasonable knowledge about the base principles of the tool, but are rather learnt by experience (to be read as "through bad experiences").
If you want a simple real-life example (unconnected to the way numbers are stored), here it is: accounting has quirks that arrive unexpectedly.
In the EU, prices quoted to customers are required to include VAT, so a price is €60.00 with 10% VAT included.
But on an invoice/receipt you have to state explicitly how much is net and how much is tax, so 60 / 1.10 = 54.55 and VAT is 54.55 × 0.10 = 5.46, which makes a nice 60.01.
You may be tempted to round down to 60 / 1.10 = 54.54 and have VAT 54.54 × 0.10 = 5.45, but this makes 59.99.
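The same example worked with Python's decimal module (half-up rounding assumed here; actual VAT rounding rules vary by jurisdiction and contract). Rounding one component and deriving the other by subtraction is one way to guarantee the invoice adds back up to the gross:

```python
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")
gross = Decimal("60.00")

# Naive: round the net, then compute VAT from the rounded net.
net = (gross / Decimal("1.10")).quantize(CENT, rounding=ROUND_HALF_UP)   # 54.55
vat = (net * Decimal("0.10")).quantize(CENT, rounding=ROUND_HALF_UP)     # 5.46
print(net + vat)    # 60.01 -- a cent too much

# Safer: round the VAT portion of the gross, derive the net by subtraction.
vat2 = (gross * 10 / 110).quantize(CENT, rounding=ROUND_HALF_UP)         # 5.45
net2 = gross - vat2                                                      # 54.55
print(net2 + vat2)  # 60.00 -- adds up exactly
```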
It's sufficient for quantitative finance and trading but not for accounting. For financial engineering and modeling it doesn't matter if results are a few fractions of a cent off as long as your profit margins/model tolerances are greater than the error, because your broker/bank/exchange will keep track of the exact values in your account. But if you are building a bank/broker/exchange, then tracking it precisely enough for GAAP is now your problem.
You can, but because the full set of timezones is only known at runtime, it is not terribly useful. You can statically distinguish a canonical static timezone (say UTC) vs a dynamic one, though.
I've gone as far as encoding bid and ask prices with different types, but tz really has never occurred to me. Closing times are also exchange/contract specific, not just location.
And that's why I shall be sticking to cash for the next 10 years till all the beta testing is complete.
By which time hopefully interest-bearing CBDCs will show up and make all these mindless intermediaries sitting between my wallet and someone else's wallet obsolete.