Not Growing a Language (kevingal.com)
77 points by todsacerdoti on April 3, 2021 | 106 comments



The article suggests a use case for operator overloading in a Complex or BigInteger class.

My programming experience is that these "mathy" objects are about the only unambiguous use case for operator overloading. Given the limited use case, I'm not sure whether it's worth the added complexity for language designers.

I wrote the simulator engine code for CircuitLab (https://www.circuitlab.com/). We have complex numbers, sparse matrices, and extended-precision (more than a 64-bit double) floating point classes, all of which we use extensively within the simulation engine. Yes, it can be a bit annoying and verbose to do X.add(Y) instead of X+Y, but I tend to just leave an end-of-line comment to aid readability later. And this mathematical core code tends to be infrequently changed and well covered by tests.


(Author here).

Here's a not-so-mathy application that I had recently. I was implementing a units system in Python. Operator overloading allowed me to define joules like so: `J = KG * M**2 * S**-2`. Then I could define grays (a radiation-related unit) like `Gy = J/KG`. Repeat for hundreds of units. If I had to do it in Java, I'd be frustrated by the verbosity and it would be easier to make mistakes.
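
A rough sketch of the mechanism (a hypothetical `Unit` class that just tracks dimension exponents; not the project's actual code):

    class Unit:
        """A vector of exponents over the base dimensions (kg, m, s)."""
        def __init__(self, kg=0, m=0, s=0):
            self.dims = (kg, m, s)

        def __mul__(self, other):
            return Unit(*(a + b for a, b in zip(self.dims, other.dims)))

        def __truediv__(self, other):
            return Unit(*(a - b for a, b in zip(self.dims, other.dims)))

        def __pow__(self, n):
            return Unit(*(a * n for a in self.dims))

        def __eq__(self, other):
            return self.dims == other.dims

    KG, M, S = Unit(kg=1), Unit(m=1), Unit(s=1)
    J = KG * M**2 * S**-2  # joule
    Gy = J / KG            # gray
    assert Gy == M**2 * S**-2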

My point -- Guy's point, actually -- is that if you don't give people the ability to grow a language, then their expressive power will be limited. And you can't anticipate ahead of time all the ways they'll want to express themselves.

Admittedly, my application is kinda mathy under the hood, because the units exist in a vector space. I guess that's to be expected when the operators come from maths.


> Operator overloading allowed me to define joules like so: `J = KG * M*-1 * S*-2`. Then I could define grays (a radiation-related unit) like `Gy = J/KG`. Repeat for hundreds of units.

Presumably exponentiation rather than multiplication by a constant, right?

This is cool and reminds me of how the Python Z3 binding uses operator overloading to let you express arithmetic and logical constraints:

  >>> import z3
  >>> s = z3.Solver()
  >>> a, b, c, d = (z3.Int(x) for x in "abcd")
  >>> s.add(a*b == c+d)
  >>> s.add(a+b == c+d)
  >>> s.add(a!=0,b!=0,c!=0,d!=0)
  >>> s.check()
  sat
  >>> s.model()
  [d = 6, c = -2, b = 2, a = 2]
  >>> s.reset()
  >>> s.add(a>1, b>1, c>1)
  >>> s.add(a*a + b*b == c*c)
  >>> s.check()
  sat
  >>> s.model()
  [c = 17, b = 8, a = 15]
(It's limited in some ways, though. For some types and relations you have to use a Z3 function to express a relation even though Python has syntax for it -- the example that I know of is that you have to use z3.Or(), z3.And(), etc., with boolean variables instead of Python's builtin boolean operators. Not sure why.)
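
(The likely reason: Python's `and`/`or`/`not` aren't overloadable operators at all; they short-circuit by forcing their operands through `__bool__`, so Z3 never gets a chance to build a constraint from them. A small sketch of the contrast:)

  >>> p, q = z3.Bools("p q")
  >>> s.add(z3.Or(p, q))  # builds a symbolic constraint
  >>> # s.add(p or q) would be evaluated by Python itself, not turned into a constraint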


Oh wait, Hacker News seems to have removed one of the asterisk symbols. It should be Python's exponentiation operator (2 asterisk symbols), ya. (edit: fixed now).

That's cool too! How do you define the variables?


Oh wait, I left that part out! I'll edit my post.

... there we go.

You use Z3 objects that represent unknowns of particular types, like z3.Int, z3.Bool, etc. Each one returns an object representing a variable of that type, which can then be used in (Python-formatted!) expressions that you give to a solver.


I agree that operator overloading should be restricted to math objects. D tries to make it unpleasant to do for non-math objects. For example, < <= > >= are not overloadable separately; just one function, `opCmp`, does all of them. Non-math operators are not overloadable.

The D community has generally agreed, and we haven't had much trouble with people overloading operators to do a regular expression DSL, for example.


Fair enough, but mainstream languages used in broad commercial settings need moron-immunity more than niche languages attracting a select crowd.


D does not have a huge market presence, but it is not a niche language. It is very capable across a wide variety of projects. Whenever you get tired of buffer overflows and preprocessor abuse, come check us out :-)


We still need @safe as default though.


The buffer overflow protection is so important it is there whether the code is marked @safe or not.


With @system by default it is harder to track down uses of .ptr


These "mathy" objects are about the only unambiguous use case for operator overloading.

Yes. Most other uses for operator overloading are really chains of functions. There's better syntax for that.

I once wrote some overloads for C++ so that you could write

    result = fileobject | item | item | item | item;
and then apply either "read" or "write" to "result". This allowed marshalling without writing the item list more than once. Bad idea.

Python is notorious for problems that stem from combining operator overloading and implicit conversions. "+" is arithmetic for some types and concatenation for others.

    [1,2,3] + [4,5,6] 
is

    [1,2,3,4,5,6]

which you might not expect. If you add arrays from numpy, those add arithmetically. You can add a numpy array to a built-in list. Trouble comes when you pass a numpy array to a function that expects a built-in list, and the wrong semantics of "+" are applied.
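
Concretely, the three behaviors side by side:

    import numpy as np

    [1, 2, 3] + [4, 5, 6]                      # [1, 2, 3, 4, 5, 6] -- concatenation
    np.array([1, 2, 3]) + np.array([4, 5, 6])  # array([5, 7, 9])   -- elementwise
    np.array([1, 2, 3]) + [4, 5, 6]            # array([5, 7, 9])   -- list coerced to array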

Rust has operator overloading without implicit conversions. That avoids such ambiguities, at the cost of requiring many explicit conversions.


Conflating concatenation and addition is just a terrible mistake; this is one illustration among many of that.

In Lua, `+` is always addition, and can be overloaded with an `__add` metamethod, while `..` is concatenation, overloaded with a `__concat` metamethod. `..` is also right associative, which means that `a .. b .. c .. d`, a normal enough string-building pattern, can be optimized into a single allocation for the new string, without having to create `a .. b` and then `ab .. c` and so on.

So I don't see this as a problem of operator overloading, I see it as a problem of a missing operator. Implicit conversion makes that problem worse, but `"12" .. 13` in Lua will give you "1213", as you would expect, and `"2" + 3` gives you 5, again, as you would expect.


Concatenation is just addition, though: addition is the monoid operation on numbers with the identity of 0, concatenation is the monoid operation on lists with identity of [].

The problem is almost always not the overloading of + but the implicit type coercion rules.
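
In Python terms, both obey the same laws (associativity plus an identity):

    assert (1 + 2) + 3 == 1 + (2 + 3) and 0 + 7 == 7
    assert ([1] + [2]) + [3] == [1] + ([2] + [3]) and [] + [7] == [7]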


List[Int] has more than one monoid


But, when I'm thinking in List[Any], I expect to be working on the Monoid for List[Any], not the Monoid for List[Int]: things like special-casing List[Int] are exactly what make programming languages difficult to learn and use.


What to use for concatenation is a problem. PL/I used "||", which was originally drawn with a break in the middle. But C took that over as "or", and that's now the accepted standard. ".." is accepted as a range operator now.

Moving deeper into Unicode is probably not the answer. Although, if a language needed a concatenate symbol, ⊞ (Squared Plus) might be a good choice. It's not used for much else.


Perhaps ++, and just use += 1 for incrementing. Rust already doesn't have an increment operator, and my JavaScript linter forbids it.


If it didn’t cause problems (ones I can think of in seconds: it complicates parsing, mathematicians will think it signals multiplication, and it makes it hard to insert spaces), using a single space for concatenation would make sense to me.

(Aside: C (or the C preprocessor?) is the only piece of software that I know of that supports that, but only for string literals)


As a binary operator `~` seems pretty unambiguous.


Yes, my vote for languages which are running out of glyphs would be `~~` actually.

Rather than using context to differentiate bitwise NOT from concatenation, just rule that ~~ is always concatenation. What you lose is just "NOT NOT", which (without this operator) is legal but pointless.


And to make things weirder, Python can concat literal strings together with the "space operator" (j/k):

    >>> 'a' 'b' 'c'
    'abc'


they got rid of that in Python 3, I think


> Python is notorious for problems that stem from combining operator overloading and implicit conversions. "+" is arithmetic for some types and concatenation for others. `[1,2,3] + [4,5,6]` is `[1,2,3,4,5,6]`, which you might not expect.

It seemed obvious to me as soon as I saw it in Perl that the Perl approach ("+" for adding numbers, "." for concatenating strings) is a much better idea than using "+" for both operations.

I'm not sure why people keep wanting to use + for concatenation.

(My other language pet peeve is using backslash as the escape character for your mini-DSL that gets parsed from strings, when backslash is also the escape character for strings. You want to use separate escape characters for each context; stacking the same one in several contexts is the worst possible approach.

C was aware of the problems: it uses `\` as the escape for strings and `%` as the escape for printf commands. But somehow by the time we were writing regular expression libraries, this simple, obvious technology had been lost.)
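
Python inherits exactly this stacking problem. A quick illustration:

    import re

    re.search("\\\\d", "\\d")  # matching a literal backslash-d takes four backslashes
    re.search(r"\d", "7")      # raw strings dodge one level of escaping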


I know of at least one other use of operator overloading which is quite pleasant to use. In Lua's PEG engine, Lpeg, operator overloading is used to express combinators, so `P"a" * P"b"` matches "ab" and `P"a" + P"b"` tries to match "a", then tries to match "b".

It's a little bit of a kludge! But a clever one, which uses the precedence of the existing operators to match the expected precedence of concatenation and the `/` ordered choice operator from PEG grammars.
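
The same trick is easy to sketch in Python (a toy `P` combinator class, nothing like Lpeg's actual implementation):

    class P:
        """Toy PEG pattern: self.fn(s, i) returns the index after a match, or None."""
        def __init__(self, lit=None, fn=None):
            self.fn = fn or (lambda s, i: i + len(lit) if s.startswith(lit, i) else None)

        def __mul__(self, other):  # sequence, like Lpeg's *
            def seq(s, i):
                j = self.fn(s, i)
                return other.fn(s, j) if j is not None else None
            return P(fn=seq)

        def __add__(self, other):  # ordered choice, like Lpeg's +
            def alt(s, i):
                j = self.fn(s, i)
                return j if j is not None else other.fn(s, i)
            return P(fn=alt)

    assert (P("a") * P("b")).fn("ab", 0) == 2  # matched "ab"
    assert (P("a") + P("b")).fn("b", 0) == 1   # fell through to "b"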

In general I'm suspicious of arguments that boil down to "people might use this language feature badly". I'm not sure I want the full ability to add my own infix and postcircumfix operators, the way e.g. Raku allows, but there's no denying that being able to write `if foo ∈ test_set` is expressive and cool.

Maybe expressive and cool isn't a terminal value for language design; maybe `if test_set.element(foo)` is good enough. I kinda like it though.


The Raku idea is that an operator only does one thing. That way you can tell at a glance what the operator is doing. If you want to do something else, create a new operator. (One of the design mottos was “Similar things should look similar, and different things should look different”.)

In languages which only let you overload existing operators, those operators often get used for a bunch of different things.

I mean, how many languages use `+` for both addition and string concatenation? … and maybe also set union?


> The Raku idea is that an operator only does one thing.

It's kind of interesting that Raku's own docs think of '=' being used for both item assignment and list assignment as an exception to this, because it illustrates how narrowly Raku defines "one thing". (As does the use of different postcircumfix operators for positional and associative access, which most languages call indexing, and use one construct for, which may not even technically be an operator.)

But it's not just freedom to create new operators that allows this; for it to work, Raku has a huge number of built-in basic operators, plus the hyper- and meta-operators.


Actually `=` sort-of only does one thing. It asks the left argument what it wants to do with the values.

    my @a = 1,2,3;
is almost the same as

    my @a;
    @a.VAR.STORE((1,2,3));
---

As for separate positional and associative access:

Raku has separate positional and named arguments to functions.

An argument list is actually a singular thing called a Capture. Even if it usually pretends to not be.

    my \capture = \( 'a', b => 10 ); # Capture literal

    say capture[0];   # a
    say capture{'b'}; # 10


    sub foo ( |capture ) {
      say capture[0];   # a
      say capture{'b'}; # 10
    }

    foo |capture; # give it the above capture literal
    # foo 'a', b => 10

    sub bar ( $a, :$b ){
      say $a; # a  (positional)
      say $b; # 10 (named)
    }

    bar |capture; # give it the above capture literal
    # bar 'a', b => 10
To be able to easily extract out those parts, it helps if there is a different way to get at each.

The same thing also applies to extracting information out of regexes

    'a10' ~~ / (.) # $0
       $<b> = (..) # $<b>
    /;

    say $0;   # a
    say $<b>; # 10

    say $/[0];   # a  (positional index into $/)
    say $/{'b'}; # 10 (named index into $/)


In most of my code I don't really care about operator overloading (i.e. not having it would not make my life any worse), but when I write shader code (or a C++ library like GLM) to do mathy stuff involving vectors and matrices, I am very glad to have overloaded operators and dread the idea of having to work around the language to express what I want. I'm not thrilled about having to duplicate the code (once using X.add(Y) and a comment to show a clearer version) here, but it's better than nothing.

Of course, if we just use s-expressions, then there's no difference between operators and functions and this whole thing becomes moot. Although, with the caveat that your nice infix math expressions are no longer nice infix math expressions. Then again, if we are using sexps, then we're likely using a language with macros, and writing an infix macro for those mathy parts wouldn't be such a bad idea... ;-)


Isn't operator overloading usually a bad idea for high-performance maths? It limits you to binary operations (unless you go down an even deeper rabbit hole of template meta-programming), when many optimized algorithms are ternary or even higher arity (e.g. optimized matrix multiply-and-add).


I mean, I haven't had to write one myself, but as a user of GLM, I've been very happy and personally couldn't care less if they're template meta-programming heavy to achieve it. As a user of the library, it doesn't bother me. I use plenty of template-heavy libraries anyway.

Maybe it's hard to achieve absolute best performance, but I'm not arguing for that, I'm only arguing that operator overloading leads to better programmer ergonomics. I can benchmark and convert to optimised functions after I have it working nicely. Ergonomics first, optimisations after.


I'm not familiar with GLM, but it seems to be the perfect sweet spot for operator overloading - it works with small, fixed-size matrices (up to 4x4), which probably don't require and may not even benefit from more advanced algorithms.

The problems I had read about applied to operations on larger matrices, where the difference between n^3 and n^2.5 or whatever becomes extremely noticeable, and makes it worth writing mulAndAdd(a, b, c) rather than a*b+c.
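
For example, a sketch with SciPy's BLAS wrappers (assuming scipy is available; `dgemm` computes alpha*a@b + beta*c in one fused call):

    import numpy as np
    from scipy.linalg.blas import dgemm

    a, b, c = (np.random.rand(512, 512) for _ in range(3))

    naive = a @ b + c                        # materializes a temporary for a @ b
    fused = dgemm(1.0, a, b, beta=1.0, c=c)  # no intermediate allocation

    assert np.allclose(naive, fused)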


I think many somewhat mathy DSLs can benefit from it.

For example, there's Boost Spirit, a library for writing parsers. You describe a grammar in something that looks a lot like an EBNF grammar (https://www.boost.org/doc/libs/1_67_0/libs/spirit/doc/html/s...)


> My programming experience is that these "mathy" objects are about the only unambiguous use case for operator overloading.

Rust's ability for smart pointers to overload the dereference operator works pretty well, though there is still some potential for abuse.

Part of what makes it reasonable is the required function signature: It can only ever produce a reference to some other type, which rules out some of the crazier possibilities.


Well of course, if you look at "mathy" operators like +. If you look at operator[], the use cases are all "collectiony" objects instead.


And note that these overlap: think matrices where a + b adds, and a[1:,:,0] may have meaning, requiring the index operator to support tuples and range objects.


I could even see + on collections: "Add all the elements of this collection to that collection."


Scala used + for an element and ++ for another collection:

  scala> 1 +: List(2, 3) ++: List(4, 5) :+ 6
  res0: List[Int] = List(1, 2, 3, 4, 5, 6)
where +: is right-associative and implemented by the list rather than the element.


At first "mathy" objects seem not so important, but it also involves:

- Vectors, matrices

- Immutable containers (lists, sets, maps, etc.)

- Domain specific languages (regular expressions, constraint languages, etc.)

And without good support for matrices, the data science and ML people will just use Python instead of Java to build their libraries. And then everyone will use the language with the best libraries, even if they don't use matrix math in their code.

For regular expressions, you have to use a DSL embedded in Strings in Java instead of using operators like | on regular expression objects directly.


I actually like how conservative Java has been with making language changes.

People go nuts with language features because they can, not necessarily because they have a good reason. This is the big problem I have with Scala, for example.

Sure, arithmetic operator overloading with numeric types makes sense. Even the parenthesis operator overloading in C++ for functors makes sense.

Then you have the Scala SBT build system and percent operator overloading.

I actually like how Rust does this. You don't overload the operators per se. You implement traits with named methods like Add that will allow you to use the operators. This, to me, is the best of both worlds.

I personally think Java has been just fine without operator overloading so there's no pressing need to add it now but, if they do, please follow the Rust model.


> Then you have the Scala SBT build system and percent operator overloading.

SBT is a fractal of bad design, the funny function names are just the most superficial of its problems.

> I actually like how Rust does this. You don't overload the operators per se. You implement traits with named methods like Add that will allow you to use the operators. This, to me, is the best of both worlds.

It's still just an extra unnecessary indirection where you have to remember which is which. What Scala does is much simpler - the + method is called +, the / method is called /, and so on. Yes, defining a function called %++< is probably a bad idea, but defining a function called YqqZ is probably a bad idea too, and few programming languages feel a need to ban that.


> I actually like how Rust does this. You don't overload the operators per se. You implement traits with named methods like Add that will allow you to use the operators. This, to me, is the best of both worlds.

What's the difference supposed to be? Why do I care whether my "add two abstract data types" method is named "__add__" or "operator+"?


I don't Rust, but if I had to guess, it helps keep you from defining operators that don't make sense. Like a + that actually means concat. You could still do that, but now you'd be implementing it as something you're calling "add" and that might feel dirty.

String concatenation probably isn't the best example, but it's the first one that came to mind.


That's a funny example, because Rust's `String` does implement `Add` as concatenation! Many think this was a mistake though.

https://doc.rust-lang.org/std/string/struct.String.html#impl...


Agreed, the Haskell/Rust overloading, where there's a loosely enforced[^1] but implied contract, is probably the best of both worlds.

But even without strong constraints, the language's culture is a huge aspect of whether it becomes a problem or not. For example, in Python, the language has almost[^2] no constraints on overloading, but people generally use it in very reasonable ways.

[^1] the typeclass/trait has an interface that is enforced, but there are usually also some implied laws that the type checker can't verify for you

[^2] the big exception being `__bool__`, which must return an actual boolean at runtime (comparisons like `a < b` may return anything, but `and`, `or`, and chained comparisons force the result through `__bool__`), much to the consternation of DSL writers


Haskell has the worst operator overloading of any language I've used. There are tons of niche operators, only some of which have names, and it's hard to Google them.

Haskell should have enforced that every operator must have a `name`.


> There's tons of niche operators, only some of which have names, it's so hard to Google them.

Firstly, Hoogle is your friend

https://www.stackage.org/lts-17.8/hoogle?q=%3E%3D%3E

Secondly, you can Google them these days

https://www.google.com/search?hl=en&q=%3E%3D%3E


SymbolHound was invaluable, once upon a time.


I believe that once value types are adopted by the JVM, an operator overloading proposal for Java will be imminent. Meanwhile, there's actually a way to introduce operator overloading into your Java codebase today: the compiler plugin is called Manifold [1]

[1] https://github.com/manifold-systems/manifold


I mostly agree, but I think value types are likely to put pressure on this area. I think C++ style overloading would be a mistake, but maybe a monoid interface and some language changes to turn operators into method calls could work. The thing Java does well is that it is easy to read, and you wouldn’t want to break that property. Understanding the types of expressions can be hard enough without operators doing weird things, but having a system that doesn’t allow adding two different numerical types would also feel poor.


I wish it weren't referred to as 'operator overloading'. I don't want to think of them as 'operators' and I don't want to 'overload' anything.

Is putting a method on a class called 'method overloading'?

I want to be able to define `plus` on my datatypes and I want to be able to define `+` on my datatypes.

`+` on my datatype overloads `+` on `Integer` no more than `get()` on my datatype overloads `get()` on someone else's datatype.

I am not taking an existing Integer type and giving it a new definition of `+`, which is what 'operator overloading' would be.


> Is putting a method on a class called 'method overloading'?

It most certainly is... _if_ the method you are adding has the name of a functionality that is either inherently part of its spec or effectively so by community convention.

e.g. when you make a method with signature `boolean equals(Object other)` in Java, it is literally (the very spec itself calls it this) 'overloading'.

The term is appropriate. `*` is commonly understood to be some sort of numeric operation with the properties that it is associative, commutative, etc. If you decide to add a definition for a class you're writing, the term 'overloading' is entirely appropriate.


Julia is like this. a+b is just syntactic sugar for (:call :+ a b). There’s a function + for various type pairs. The fact that the compiler can optimize it to 1 instruction is an implementation detail.


But when you overload '+', it overloads it as an operator, keeping things like "operator precedence". Overloading functions doesn't do that.


> I want to be able to define `plus` on my datatypes and I want to be able to define `+` on my datatypes.

The problem with this approach is that, just as defining `plus(int)` on a class `C` allows you to call `c.plus(1)`, defining `+(int)` allows `c + 1` but not `1 + c`.

Whereas the approach of adding overloads to a global `+` operator can support both `c + 1` and `1 + c`, by allowing you to define `+(C, int)` and `+(int, C)`.
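
Python splits the difference: the method lives on the class, but a reflected variant handles the swapped operand order. A sketch:

    class C:
        def __init__(self, v):
            self.v = v

        def __add__(self, other):   # handles c + 1
            return C(self.v + other)

        def __radd__(self, other):  # handles 1 + c, tried after int's __add__ gives up
            return C(other + self.v)

    c = C(41)
    assert (c + 1).v == 42
    assert (1 + c).v == 42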


> I want to be able to define `plus` on my datatypes and I want to be able to define `+` on my datatypes.

So, I take it we agree that expressing the quadratic formula for BigDecimals as "x.pow(2).multiply(2).plus(x.multiply(3)).minus(5)" is awkward.

Do you think that "x.^(2).*(2).+(x.*(3)).-(5)" is any better? Because if you were hoping to support "x^2 * 2 + (x * 3) - 5" then you'll need more than just allowing non-alphabetic characters in names... you will ALSO need special support for infix notation (which Java lacks today).


According to Wikipedia it's also sometimes called operator ad hoc polymorphism. Better?


It's not any better. Ad hoc polymorphism is a defined thing just as overloading is a defined thing (in fact, it's a superset of OO-style "overloading"). When you allow operators to be defined for user-authored datatypes, these are features that you might choose to support for operators, but that doesn't mean that all user-defined operators are inherently using these features.

Concretely, in defining a + operator on your BigInt class that can only accept another BigInt, you haven't overloaded the + operator. It would only be overloading if you defined a second form that accepted some other type of object.


So if I want 3 + myBigInt to work?

(Which I'll note most languages that support operator overloading support)


Yes, that would be actual operator overloading.

Personally, I've always wanted languages to be stricter about that kind of thing. It's not that hard to explicitly express the desired conversions to make the types on both sides of your binary math operators the same and it feels a lot more "controlled" to me to do so.


> A syllable is a sound that makes up part of a word. In the year two nought nought nought (2000), Guy Steele gave a talk called "Growing a Language". He used a schtick in his talk, where he did not speak a word of more than one syllable if he did not first say what it meant.

> In this post, I will use the same schtick.

This is a great and apparently unintentional illustration of why the belief in short words being simple is misguided. In its own introductory paragraph, it messes up in multiple ways by multiple standards.

Most obvious: nobody can understand "two nought nought nought". We'd have a much easier time with "two zero zero zero", even though the word "zero" is notionally 100% more complicated than the word "nought".

That was so obvious that the author helpfully glossed his own simplified language, knowing that we couldn't understand it. "two nought nought nought" means "2000". Good thing someone was there to dumb that ...up... for us.

Except... "2000" is "two thousand", and "thousand" is a word of more than one syllable. The author forgot to tell us what it means!

It turns out absolutely everything is very simple by this ludicrous standard. "Curl" is a non-evil one-syllable word, and it can be defined, also very simply, as ∇ × F (F: ℝ×ℝ×ℝ ⟶ ℝ×ℝ×ℝ). Look how easy that is to understand!

What does curl mean? Well, that must be a totally different question from "how simple is the concept?" I didn't understand what curl meant even when I was taking a class in it.


Guy Steele used the restriction to one-syllable words to make a point. It's supposed to sound a bit off and it pushes him to define a bunch of words as he goes along. If he had started with "simple" words instead, the talk would have flowed too well for the gimmick to work! He actually mentions that in his talk:

> In truth, the words of one syllable form quite a rich vocabulary, with which you can say many things. You may note that this vocabulary is much larger than that of the language called Basic English, defined by Charles Kay Ogden in the year one nine three nought[10]. I chose not to use Basic English for this talk, for the cause that Basic English has many words of two or more syllables, some of them quite long, handed down to us from the dim past of Rome. While Basic English has fewer words, it does not give the feel, to one who speaks full English, of being a small language.

Using one-syllable words is a fun way to make the structure of the talk reflect its thesis, with a rule that's simple enough to explain in one sentence. It's not meant to be a serious example of a simple language! If you did it seriously you'd get something like Simple Wikipedia, which reads pretty well[1]:

> A programming language is a type of written language that tells computers what to do. Examples are: python, ruby, jupiter, html, css, java, javascript, C, C++, and C#. Programming languages are used to write all computer programs and computer software. A programming language is like a set of instructions that the computer follows to do something.

That reads well and it's a good example of defining specialized words as you need them, but it doesn't feel all that different from "normal" English and it's much harder to explain what "simple" constitutes in that context. It works better in reality, but worse for a talk.

[1]: https://simple.wikipedia.org/wiki/Programming_language


I recall an early conversation on operator overloading in Java and the other people were almost but not quite putting a thought together:

Limit operator overloading to classes that extend Number and 90% of the excesses of operator overloading are impossible.

You probably shouldn’t be using += for your concatenation code anyway (interpolation is often more powerful). The only other place you will “miss” it is in data types that are vector values, and you could probably figure out a rule for that in a later iteration.


Where possible, I prefer making the syntax opt-in, and explicitly so. For example, stealing OCaml's syntax:

    let check_exp (g : Elliptic_curve_pt.t) (x : int) (y : int) =
      let xy = x * y in
      let open Elliptic_curve_pt.Num_syntax in
      (g ^ x) ^ y = g ^ xy

This lets you avoid making hard decisions about what can be added/multiplied/etc. at the language level -- can a list of numbers representing the coefficients of a polynomial be added? -- without jumping to the extreme of allowing anything implicitly. This also lets the typechecker catch more of your mistakes by having stricter types, and means you know exactly where to go to find out what the syntax is encoding.


I think you could also write

    Elliptic_curve_pt.Num_syntax.( (g ^ x) ^ y = g ^ xy )
instead of let open Elliptic_curve_pt.Num_syntax in ...


> You probably shouldn’t be using += for your concatenation code

I agree. That's why D uses a separate operator for concatenation, ~=

That removes the temptation to overload +=.


What would be the definition of such a "Number" class? How would it prevent any abuse?


A ring [0] might be too general, but I can't think of anything that isn't a ring that I'd want to call a "number" class. A field [1] is less general and might be a better fit. It probably depends on whether you want a matrix to be able to be a "number," which I'd be inclined to do.

[0] https://en.wikipedia.org/wiki/Ring_(mathematics)

[1] https://en.wikipedia.org/wiki/Field_(mathematics)


The type systems underlying computer algebra systems like Sage, GAP, etc. are immensely more complex than this. Read the source of Sage (lots of Python/Cython) and the source of GAP (pure C; everything is basically a void *). Operator overloading is immensely helpful for entire classes of objects -- groups, rings, fields, categories, combinatorial objects... some of them are kinda number-like, but for example groups can support multiplication and division but not addition/subtraction (unless it's an abelian group and you'd rather represent it that way). I've been writing C for nearly 30 years, and GAP is a slog to read. Sage makes this stuff way easier.

In short... math isn't just arithmetic, isn't even arithmetic of complex tensors. Math is way bigger than most programmers seem to comprehend. I'm on the other side of the DK curve; I know that I've only scratched the surface despite a decade of dedicated study.


Natural numbers don't form a ring, and integers don't form a field.


Good callout. What would you use if you had to restrict operator overloading to an appropriate “number” abstraction?


If we're actually requiring things be a ring, rather than approximating something that is a ring, that rules out floating point numbers (lack of associativity, at least).


As for definition, it already exists: https://docs.oracle.com/javase/8/docs/api/java/lang/Number.h...

As for abuse prevention: it would be impossible to use the common operators like +, -, +=, <<, >> and so on for anything that is not numbers, and therefore prevent C++ levels of overloading abuse.


But any class can extend Number; there is nothing enforcing that the subclass is particularly number-like. You can just do stuff like

    class MyString extends Number
and you get access to operator overloads. The only thing that makes this annoying is Java's lack of multiple inheritance, but that is a pretty minor thing.


There is a level of stupid that is deniable, and a level that is not. MyString extends Number is going to lead more swiftly to words with your coworkers.


When looking at C#, which has operator overloading, did things really go that badly? What makes Java people think it would be worse than that?


C++ overdid operator overloading in its libraries, etc. This is what Java's designers saw at the time, and why they decided not to include it.

But I agree that C# proves that we all learned our lesson and operator overloading is now just mostly used for numbers.


Implicit casts can be a bit dangerous in C#, but the bigger problem is that you cannot generically use operators like you can with type classes in Haskell.


I think we get into fuzzy territory when we use one term to refer to a complex feature, without having good terms for constituent parts of that feature. If Java offered the ability to call methods in infix position, and give methods non-alphanumeric names, and maybe express something about precedence order, that would support a good share of uses for operator overloading without actually overloading anything. `x :+: y` would be easy enough to read as "an addition-like operation which is not the inbuilt addition" and would still closely resemble the math, and might be fine in a context where x and y are complex numbers or whatever. It would come at the obvious cost of requiring changes to a bunch of the supportive tooling surrounding the language.

I think the messy part added by overloading specifically is that if `Complex`, `RealVector`, `List` etc all override addition, then there's some common class or interface which provides the declaration which they all override, and same for subtraction, multiplication, etc, and soon you have a large pile of interacting abstract definitions, which some language users will find daunting.


Sounds like re-inventing Haskell.


Or maybe this is just Scala. I guess the issue is: is it possible to add "operator overloading" and get something sensible without adding a whole pile of other definitions?


That still sounds like overloading, unless you'd require the infix function names to be globally unique. And if you require global uniqueness, then that gets pretty nasty pretty quickly with libraries that might use the same names.


The most "complicated" part about operator overloading (in my opinion) is dealing with precedence rules. Some languages, instead of dealing with this complexity, treat all operators the same. Lisp-like languages, for instance, by way of how the programs are structured, doesn't need rules for precedence. APL/J just does away with any idea of precedence and instead each operator acts on the result of the expression to its right. This breaks common arithmetic rules (2 * 2 + 2 evaluates to 8, for example), but I wonder if a language otherwise like C/C#/Java ditched precedence and allowed custom and overridable operators, if the result would be a lot less complex overall than some of the messes that can be found in C++.


> Take a look and see. What do you think this BigInteger math does?

The problem for me is it's not clear if a.add(b) is "a+b" or "a+=b". With the regular operators it's always obvious which mutate the value and which return a new value, whereas the methods can do whatever they want. Yes, the operator is a method call and can do anything, but in practice everyone makes them do the obvious thing and follows the convention. There is no convention that "add" should return a new value - it's a verb, and verbs usually mutate the object.
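
Python's built-ins show the convention the operators carry:

    a = [1]; alias = a
    a = a + [2]    # + returns a new list; the alias is untouched
    assert alias == [1]

    a = [1]; alias = a
    a += [2]       # += calls __iadd__, which mutates the list in place
    assert alias == [1, 2]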


Operator-overloading has seemed like a weird debate to me ever since I realized operators are only different from functions because:

1) They are infix (often in languages where functions cannot be)

2) They can use one of a handful of special characters in their name

3) In languages with methods - which you could argue are called infix - they lack the extra ceremony of .()

These are all pretty much syntax concerns, and it seems like they could be solved at a fairly superficial level. I don't see much reason why operators need special treatment under the hood.

Edit: Now that I think about it, they're also polymorphic in some languages that lack polymorphic functions. But that's somewhat of an edge-case if you count methods.


I wonder if it makes sense to add operator overloading to a language, but in a way that requires closure under each overloaded operator and the existence of identities/annihilators. The operating principle here is to ensure the principle of least surprise for library consumers (sometimes at the cost of making life harder for library authors).


That's not a bad idea, but I think the problem is essentially solved.

C++ introduced operator overloading to the wide world, and it was promptly abused (with the stream operators, among other things).

But I believe the lesson was learned.

Few libraries hoping to achieve general usage will abuse operator overloading too badly, and if they do, potential users will resist adopting it widely.

There's an interesting class of language features, of which operator overloading is one, that are tempting to abuse and should really only be used for what are essentially language extensions provided by a library.

I think what is more useful than even more language features to restrict the usage of these is strong documentation and evangelism/marketing around the feature, to help ensure it will be used appropriately.


> Few libraries hoping to achieve general usage will abuse operator overloading too badly, and if they do, potential users will resist adopting it widely.

Perhaps this is true for some language ecosystems and less true for others. But for the sake of argument, I’ll assume that this is generally true. Even in that case, if each library “abuses” operator overloading in a new and interesting way, the problem still exists for the library ecosystem as a whole.

> I think what is more useful than even more language features to restrict the usage of these is strong documentation and evangelism/marketing around the feature, to help ensure it will be used appropriately.

I can’t imagine this scaling as well as building (or specifically choosing not to build) something into a language.


CLU, ML, Lisp, Ada, Modula, Eiffel, Smalltalk were the ones that introduced it to the world.


If you want this, just use Scala.

Full interoperability with Java, all the flexibility you could ever want.


I disagree with all of the preconditions this article suggests (and don't see why they are needed in the article), but I do agree that it is better to be able to say:

   a = b + c
rather than

   a.assign( b.add(c))


As a Lisp programmer I never felt the need for syntactic operators. The case you cite doesn't look any worse to me.

Metasyntactic operators (like quotes) are a different matter, but you need so few of them that it's no big difference.

I do write `c = a + b` in C++, but simply because it's the local convention. `=` is a non-intuitive assignment operator and no more natural than `assign`.


So you don't do:

   (+ 1 2)
Also, I don't think that

    a.assign( something ) 
is particularly natural or understandable. Am I assigning something to a, or assigning a to something?


I usually use a dialect that allows me to use `plus` but the symbols `plus` and `+` are just names to me — just stuff you learn. As I mentioned the use of `=` for assignment is counterintuitive, but if you simply learn it as yet another name it’s no big deal.

> Am I assigning something to a, or assigning a to something?

Seems to me that it would be causing a side effect on a, but as I said these are just arbitrary labels that you have to learn.

I find operator precedence weird and in 40+ years of continuous programming and doing math I have never been able to develop an intuition about them. But it's a thing you learn.


I'm surprised that every comment so far is about operator overloading. All I thought after reading this is: saying what words mean up front sure is tedious and unnecessary.


Especially to get to the conclusion, which is basically “I hope Java gets operator overloading” without much actual support for that position. The one interesting bit of support they gave was that it’s hard to identify the quadratic formula on BigIntegers without operator overloading and that this unfamiliarity might yield bugs, but I think the inverse is true. Math notation tends to be (IMO, obviously) horrible: it’s optimized for quickly scrawling on a blackboard and for elitist gatekeeping, not for understanding or maintenance. At least it violates many of the software industry’s rules about what constitutes readability, approachability, and maintainability.


Is there anything in the article's arguments for operator overloading that would not also apply equally to allowing arbitrary user-defined operators (i.e. infix functions) instead of the small limited set that is built into the language?

Personally I'm starting to lean more towards the opinion that infix operators/functions are not worth their complexity even for primitives.


The simplest, most elegant and effective solution I've seen is in the Red language, though it's too advanced to apply in others :)


The problem with operator overloading is that it introduces a degree of ambiguity into the language. You can say what you will about the dot notation for math, and I agree it’s unwieldy, but at least it’s consistent. And at any rate, the mess can be solved with a comment. If every library I use has its own definition of what + is in different contexts, now I have to consult the library for all the ways they use + and keep that straight in my head. This complexity is a nontrivial constant cost for developers.

The real reason people seem to like operator overloading is that it’s a feature that scratches a certain itch. We all love our code to look elegant and simple, and oftentimes this need inspires developers to set down the path of language design. This is a long dark path that’s not for the faint of heart. Many who start down this road dabble for a bit and then turn back quickly when things get rough.

Operator overloading is a low barrier of entry to language design, so developers looking to make simple elegant code reach for it in the name of aesthetics. They get to feel like language designers by playing with syntax, while staying within the safety of a language they know and love. But in my opinion, the aesthetic gains are not worth the increase in complexity for the language. Both for writers and readers of code.


> The problem with operator overloading is that it introduces a degree of ambiguity into the language.

How is it any different in any way from any other overloading (i.e. generic functions or allowing two classes to have methods with the same name?)


It’s much easier to grep for function names


These days even emacs users can use LSP.

Nevertheless I agree with your point.

But shouldn’t overloading be used where its results are intuitive, so that lookup should rarely be needed? /s


That's a question of interface design. You can also create ugly, ambiguous interfaces with functions (10 arguments, it mutates some of them, etc), or any other language feature. If a library designer uses operator overloading well, then it should be obvious what "a+b" is doing if you know the types involved.

On one hand it's a question of philosophy -- do you provide the user with tools to better express themselves, if they might possibly shoot themselves in the foot and create a crappy interface?

On the other hand, it could be an engineering issue. Maybe it's not worth the technical challenge or the increase in complexity for the language designers. I'd be more sympathetic to this reason. Then again, Python and C++ manage it.


Really consistent? How can you ensure that sum(a, b) actually does what it states without looking at the implementation?


It's not that complicated--other languages typically define precedence levels and grouping (left/right) for operators. Any symbols not in the predefined set can all have the same precedence level and group left-to-right.


I think you have a good point when it comes to the public API exposed to users of a module or library; it's one thing if you are just using operator overloading in your internal implementations, but I'd imagine it'd be pretty confusing not to see docstrings or be able to search for specific methods. Not sure how you would easily find that in the docs without peering into the object whose methods are overloaded.





