Again, no thanks. I want mathematical notation and I simply won't use any language without operator overloading. Free functions for common mathematical operations are an abomination.
Then you should probably use a language that lets you write DSLs for any given domain, rather than abusing operator overloading which just happens to work for a few subdomains of mathematics (e.g., you can't use mathematical conventions for dot product multiplication in C++). Anyway, I've never seen any bugs because someone misunderstood what a `mul()` function does, but I've definitely seen bugs because they didn't know that an operator was overloaded (spooky action at a distance vibes).
Actually, I'm quite happy with what C++ has to offer :)
Yes, the * operator can be ambiguous in the context of classic vector math (although that is just a matter of documentation), but not so much with SIMD vectors, audio vectors, etc.
Again:
a) vec4 = (vec1 - vec2) * 0.5 + vec3 * 0.3;
or
b) vec4 = plus(mul(minus(vec1, vec2), 0.5), mul(vec3, 0.3));
Which one is more readable? That's pretty much the perfect use case for operator overloading.
Regarding the * operator, I think glm got it right: * is element-wise multiplication, making it consistent with the +,-,/ operators; dot-product and cross-product are done with dedicated free functions (glm::dot and glm::cross).
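To make that concrete, usage looks roughly like this (a quick sketch, assuming the glm headers are on your include path):
#include <glm/glm.hpp>
glm::vec3 a(1.0f, 2.0f, 3.0f);
glm::vec3 b(4.0f, 5.0f, 6.0f);
glm::vec3 c = a * b;            // element-wise, consistent with +, -, /
float d = glm::dot(a, b);       // dot product via a named free function
glm::vec3 e = glm::cross(a, b); // cross product likewise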
One never writes such an expression in serious code. Even with move semantics and lazy evaluation proxies it is hard to avoid unnecessary copies. Explicit temporaries make code more readable and performant:
auto t = minus(vec1, vec2);
mul_by(t, 0.5/0.3); // pre-scale so the final mul_by(t, 0.3) yields the 0.5 factor
add(t, vec3);
mul_by(t, 0.3); // distributes the 0.3 over both terms
vec4 = std::move(t);
I think there may be a misunderstanding here regarding the use case. If the vectors are large and allocated on the heap/on an accelerator, then yes, writing out explicit temporaries may be faster. Of course, this does not preclude operator overloading at all: You could write the same code as auto t = vec1 - vec2; t *= 0.5/0.3; t += vec3; t *= 0.3;
However, if the operands are small (e.g. 2/3/4 element vectors are very common), then "unnecessary copies" or move semantics don't come into play at all. These are value types and the compiler would boil them down to the same assembly as the code you posted above. Many modern C++ codebases in scientific computing, rendering, or the game industry make use of vector classes with operator overloading, with no performance drawbacks whatsoever; the code is also much more readable, as it matches actual mathematical notation.
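To illustrate what I mean by a small value type, here is a minimal sketch (not any particular library's implementation, just an illustration):
struct Vec3 { float x, y, z; };
inline Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
inline Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
inline Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
// same expression as above; everything is passed by value, so there are no
// heap allocations or moves for the optimizer to worry about
Vec3 blend(Vec3 vec1, Vec3 vec2, Vec3 vec3) {
    return (vec1 - vec2) * 0.5f + vec3 * 0.3f;
}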
> Many modern C++ codebases in scientific computing, rendering, or the game industry make use of vector classes with operator overloading, with no performance drawbacks whatsoever
I guess these people are all not writing "serious code" :-p
TIL Box2D must not be serious code because it doesn't use copious amounts of explicit temporaries[0].
And just for the record, I'm very glad Erin Catto decided to use operator overloading in his code. It made it much easier for me to read and understand what the code was doing as opposed to it being overly verbose and noisy.
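For anyone curious, the flavour is something like this (my own toy example against Box2D's b2Vec2, not a quote from the library):
// assumes Box2D's math header is included (the exact path differs between versions)
b2Vec2 a(1.0f, 2.0f);
b2Vec2 b(3.0f, 4.0f);
b2Vec2 mid = 0.5f * (a + b); // + and scalar * are overloaded for b2Vec2
float d = b2Dot(a, b);       // the dot product stays a named free function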
> One never writes such expression in a serious code.
Oh please, because you know exactly which kind of code I write? I'm pretty sure that with glm::vec3 the compiler can optimize this just fine. Also, "vec" could really be anything; it is just a placeholder.
That being said, if you need to break up your statements, you can do so with operators:
auto t = vec1 - vec2;
t *= 0.5/0.3;
t += vec3;
t *= 0.3;
Personally, I find this much more readable. But hey, apparently there are people who really prefer free functions. I accept that.
Of course, the compiler or an advanced IDE can know what your code means. If all your identifiers were random permutations of l, I and 1, like lIllI1lI, your IDE would not mind either, but the code would be horrific, don't you agree? The point of the OP is that overloaded operators (and functions) make it harder for a human reading the code to reason about it. At least for some people. In the end, everything is "just" syntactic sugar, but it makes a significant difference.
Exactly. If you don't care that the code is unreadable and you can rely on every human viewing the code through an IDE with symbol resolution (and not, say, online code review platforms) and remembering to use said symbol resolution to check every operator, then operator overloading is great!
If editors were to implement it, you could navigate to the corresponding overload implementation, or they could even provide some hint text. Just like they do for other functions.
Yeah, we would need editors and code review tools to not only follow overloads to their functions but also highlight that the operator is overloaded in the first place. Of course, this is quite a lot more work than just not overloading things in the first place (particularly since the benefit of operator overloading is negligible).
Dealing with money is important, even if it's only a small part of mathematics. I'll focus on that.
Python's 'decimal' module uses overloaded operators so you can do things like:
from decimal import Decimal as D
tax_rate = D('0.0765')
subtotal = 0
for item in purchase:
    subtotal += item.price * item.count  # assume price is a Decimal
taxes = (subtotal * tax_rate).quantize(D('0.00'))
total = subtotal + taxes
Plus, there's support for different rounding modes and precision. In Python's case, something like "a / b" will look to a thread-specific context which specifies the appropriate settings:
>>> import decimal
>>> from decimal import localcontext, Decimal as D
>>> D(1) / D(8)
Decimal('0.125')
>>> with localcontext(prec=2):
... D(1) / D(8)
...
Decimal('0.12')
>>> with localcontext(prec=2, rounding=decimal.ROUND_CEILING):
... D(1) / D(8)
...
Decimal('0.13')
Laws can specify which settings to use. For example, https://www.law.cornell.edu/cfr/text/40/1065.20 includes "Use the following rounding convention, which is consistent with ASTM E29 and NIST SP 811":
(1) If the first (left-most) digit to be removed is less than five, remove all the appropriate digits without changing the digits that remain. For example, 3.141593 rounded to the second decimal place is 3.14.
(2) If the first digit to be removed is greater than five, remove all the appropriate digits and increase the lowest-value remaining digit by one. For example, 3.141593 rounded to the fourth decimal place is 3.1416.
... (I've left out some lines)
(3) Divide the result in paragraph (a)(2) of this section by 5.5, and round down to three decimal places to compute the fuel cost adjustment factor;
(4) Add the result in paragraph (a)(3) of this section to $1.91;
(5) Divide the result in paragraph (a)(4) of this section by 480;
(6) Round the result in paragraph (a)(5) of this section down to five decimal places to compute the mileage rate.
There are probably laws which require multiple, different rounding modes within a single calculation.
This means simply doing all of the calculations in scaled bigints or as fractions won't really work.
Now of course, you could indeed handle all of this with prefix functions and with explicit context in the function call, but it's going to be more verbose, and obscure the calculation you want to do. I mean, it's not seriously worse. Compare:
But it is worse. I also originally made a typo in the function-based API for line 5, where I used "decimal_add" instead of "decimal_div" - the symbols "/" and "+" stand out more, and are less likely to be copy&pasted or auto-completed incorrectly.
If overloaded parameters - "spooky action at a distance vibes" - also aren't allowed, then this becomes rather more complicated.