
Why Operators Are Useful - s16h
https://neopythonic.blogspot.com/2019/03/why-operators-are-useful.html?m=1
======
rossdavidh
Wow, he's still got it. I read the whole thing, without realizing it was Guido
van Rossum, thinking "hey, that's a really good point". Then got to the end and
realized it was the inventor of the Python language. So, I wasn't impressed as
I was reading it because of who it was; I was impressed because he was making
a really good point.

Obviously, the fact that he made my favorite programming language was no
fluke.

~~~
maskedinvader
Ramblings through technology, politics, culture and philosophy by the creator
of the Python programming language.

This was literally the first thing I read when I hit the link. Don't mean to
sound rude, but how did you miss that? Does it render differently on the web? I
saw this on the mobile version of the site on my Android.

~~~
ChrisSD
For me that was covered up by some popover I didn't read. Besides, the tagline
of a blog is the sort of thing I, for one, gloss over. I've had decades of
training in how to skip straight to the content.

~~~
maskedinvader
That's what I thought; usually even I am trained to skip those few lines under
titles (like the blurb about the author that opinion articles have).

------
FPGAhacker
Seems like it's more an argument about prefix notation vs. infix.

z = Add(x,y) is just a form of prefix notation in my opinion. I would say that
z = x + y feels better because that's how we are taught in school. But in
English at least, it's reasonable to say the following:

Z is X plus Y.

To get Z, Add X and Y.

---

It's also an argument about inconsistent syntax. For example, there is not a
big gap in Lisp between (+ a b) and (add a b). The difference is bigger in
Python.

---

I find infix for math easier to read because it's how I learned it. But I also
use lisp, and appreciate being able to say (+ x y) and (+ x y z a b c) where
plus might be more accurately read as sum.

That and in college I used an RPN calculator, and going from postfix to prefix
isn't as odd as infix to prefix.

~~~
hanniabu
It'd come down to context for me. If it's 2 or maybe 3 items then I'd prefer x
+ y, but if it's any more than that I'd definitely use Add().

------
mpweiher
Smalltalk also doesn't have this problem. Since all message sends are
essentially infix, operators ( _binary message sends_ ) are simply a special
kind of message.

    
    
       1 add: 2.
       1 + 2.
    

This is helpful when you have "operators" that aren't quite as obvious, for
example raising to a power.

    
    
       2 raisedTo: 3.
    

Since there are no operators, there is no operator precedence. However, there
is precedence between different message types: unary binds tightest, then
binary, keyword last. Otherwise evaluation is uniformly left to right.

While this is a question on Smalltalk job interviews ( What is 2 + 3 * 5. ?),
that's only useful for filtering out people who simply have never seen
Smalltalk before (in case they claim knowledge). It doesn't seem to be an
issue in practice, because the evaluation rules are otherwise so simple and
uniform.
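
For contrast, here is the same expression in Python, where conventional arithmetic precedence applies (Smalltalk's uniform left-to-right rule is what gives 25 instead):

```python
# Python keeps conventional arithmetic precedence: * binds tighter than +
print(2 + 3 * 5)    # 17

# Smalltalk's left-to-right evaluation of binary messages corresponds to:
print((2 + 3) * 5)  # 25
```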

The other surprising effect is that binary message sends don't seem to have
the same tendency to be confusing that operator overloading does. I don't
really understand this effect, because the mechanism has effectively the same
power.

------
0815test
The mnemonic value of "operators" as reminders of properties such as
associativity or commutativity interestingly extends to diagrammatic
reasoning. A diagram is really just a generalized expression, and this becomes
quite useful when one has to deal with more than one "type" or "domain" of
operation, but in a consistent way that preserves the compositionality-like
properties OP talks about. A recent book exploring this topic is "Seven
Sketches in Compositionality: An Invitation to Applied Category Theory"
[https://arxiv.org/abs/1803.05316](https://arxiv.org/abs/1803.05316) .
(Despite the obvious reference to CT in the title, the work is quite
accessible and the math involved is not much more complicated than that found
in the linked blogpost. Importantly, and perhaps unlike some people in _other_
programming-language communities, the author does _not_ assume any pre-
existing knowledge; the work is rather about using concrete, real-world
examples to gently guide the reader's intuition.)

------
makecheck
A “dict” operator would be vague because there is _more than one way_ to
combine dictionaries. Why should I have to guess, from a symbol, whether
duplicate keys are being skipped or replaced, when two different well-named
functions will always make it clear?

~~~
sametmax
Same with lists and strings, and it doesn't bother anybody: even for dicts,
it's just a matter of reading from left to right.

~~~
makecheck
Of course it’s not the same. There is _one_ sensible result expected when
adding two lists or strings together (since they’re linear, it would not make
sense to encourage inefficient search/replace operations with an operator; and
also, the containers do not require unique keys, you can simply extend them).
A dict _must_ deal with conflicting keys, and arguably each situation requires
different treatment of those keys.

~~~
pacala
The sensible thing is to throw on conflicting keys.

~~~
JoelMcCracken
What? Have you never had two dicts and wanted vals from one dict override the
other?

~~~
vivekseth
If a + b != b + a, that breaks the commutativity property of addition.

~~~
frabert
Commutativity does not always need to hold. Multiplication is not
commutative on matrices, but we still use it often and write it as a product
nonetheless.

~~~
Koshkin
In mathematics, the ‘multiplication’ operation is not assumed to be
commutative, whereas the ‘addition’ operator is used specifically to indicate
commutativity. So, for example, using the plus sign for string concatenation
goes against this tradition.

------
nicoburns
JavaScript deals with this well. The equivalent of the proposed

    
    
        d3 = d1 + d2
    

is

    
    
        let d3 = {...d1, ...d2};
    

In fact, it seems like you can already do this in python:

    
    
        d3 = {**d1, **d2}
    

This seems to have most of the benefits of being a dedicated syntax (rather
than just a function call), without the downsides of breaking the
commutativity of the + operator.
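
As a quick sanity check of the Python spelling, and of what happens to duplicate keys (later entries win):

```python
d1 = {"a": 1, "b": 2}
d2 = {"b": 3, "c": 4}

d3 = {**d1, **d2}
print(d3)            # {'a': 1, 'b': 3, 'c': 4}: d2's value wins for 'b'
print({**d2, **d1})  # {'b': 2, 'c': 4, 'a': 1}: order matters, so this
                     # merge is not commutative either
```

(For what it's worth, Python 3.9 later added exactly the proposed operator, spelled `d1 | d2`, via PEP 584, with the same later-entries-win semantics.)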

~~~
emmelaich
Requires Python 3 (3.5+; this is PEP 448 syntax)

~~~
nicoburns
Surely any new syntax (like +) would also require python3, so that's a bit of
a moot point...

~~~
emmelaich
Yeah, but I didn't know it was new syntax. And presumably some others didn't
either.

------
etaioinshrdlu
See also: Notation as a tool of thought, the paper introducing APL:
[http://www.eecg.toronto.edu/~jzhu/csc326/readings/iverson.pd...](http://www.eecg.toronto.edu/~jzhu/csc326/readings/iverson.pdf)

------
Symmetry
Operators can certainly be nice. And I do like allowing the programmer to
create new ones too, though I'd tend to prefer the Haskell approach of
creating new ones out of existing symbols like >< or such rather than the C++
approach of letting the programmer redefine an existing operator for a new
use, << in the streams library being one of the worst common examples.

~~~
duckerude
Python discourages using operators for things that don't have anything to do
with their original use. If you try to be too clever with it, you run into
issues. For instance, comparison operators are chained, so that `x > y > z` is
equivalent to `x > y and y > z`, not `(x > y) > z`.
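
A quick illustration of the chaining rule:

```python
x, y, z = 5, 3, 1

# Chained: equivalent to (x > y) and (y > z)
print(x > y > z)     # True

# NOT equivalent to comparing left to right:
# (x > y) is True, and True > 1 compares a bool with an int
print((x > y) > z)   # False
```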

The most radical use of operators in the standard library that I know of is in
pathlib, where `Path('/usr') / 'lib' == Path('/usr/lib')`, and I think that
got a lot of pushback. It's certainly an outlier.
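
The pathlib behavior, runnable as written:

```python
from pathlib import Path

# Path overloads "/" to mean "join path components"
p = Path('/usr') / 'lib'
assert p == Path('/usr/lib')
print(p)
```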

It fits well with Python's overall approach to readability. If the meaning of
your operator isn't immediately apparent from its existing meanings, then it
should probably just be a regular old named function or method instead. Python
doesn't like DSLs very much.

Programmer-defined operators are certainly useful, but Python went in the
other direction, which has its own advantages. C++'s choice to have a fixed
set of operators but overload them in a lot of arbitrary ways is probably the
worst of both worlds.

~~~
zwkrt
You should look at the Construct library, which overloads the '/' operator as
syntactic sugar to make the s-expressions in its DSL less paren-heavy. I'm not
saying I agree, but it was wild when I first saw it.

    
    
        Struct(
            "foo" / byte,
            "bar" / Struct(
                "spam" / int16ul,
                "bacon" / int64sb,
            ),
            "viking" / int32sl,
        )

------
platz
> formulas written using operators are more easily processed _visually_ has
> something to do with it: they engage the brain's visual processing
> machinery, which operates largely subconsciously, and tells the conscious
> part what it sees (e.g. "chair" rather than "pieces of wood joined
> together").

This is why I prefer ML-style syntax over Algol-style syntax.

ML-style syntax (with sugar for pattern matching) engages my visual
processing machinery better, so that I can scan over function definitions
more at a glance than with the "literary" style of Algol syntax.

------
rzwitserloot
But, this argument defeats itself. At least, in practice.

If I look at how 'operator overloading' is used in practice, _rarely_ do you
get commutativity, or even anticommutativity, associativity, or
distributivity.

Take list addition, where operator overloading often shows up:

    
    
        someList += someElement
    

is shorthand for:

    
    
        someList.add(someElement)
    

add is not commutative; the elements used in the operation aren't even the
same type.
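
In Python specifically, the shorthand is an in-place extend rather than an element append, which makes the asymmetry of the operands even more visible:

```python
xs = [1, 2]
xs += [3]          # list.__iadd__ is extend(), so the right side must be iterable
print(xs)          # [1, 2, 3]

xs += "ab"         # a str is an iterable of 1-char strings...
print(xs)          # [1, 2, 3, 'a', 'b'] -- rarely what was meant
```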

The point being, the manipulation and simplification that is, according to
GvR, much easier to spot if operators are used aren't even relevant here.

If we're talking about, say, introducing a class 'Complex' representing
complex numbers, and wishing the language made it possible to use
mathematical operators with it, then I'd agree: it is both [A] common in the
domain of complex numbers to use the symbol `+` to indicate addition, and [B]
true that properties such as commutativity apply to many mathematical
operations one might want to perform on complex numbers. Yeah, the language
kinda sucks if you can't write `Complex(a, b) + Complex(c, d)`.

But how often does that actually occur?

Separately, if you go down this route, the abstraction should be as complete
as it can be. Therefore, this:

    
    
        Complex(2, 3) + 5
    

Should work, and should evaluate to `Complex(7, 3)`. But.. this:

    
    
        5 + Complex(2, 3)
    

should also work. Which requires either Scala's `implicit` system or Python's
take on this, involving `__radd__`. These operations introduce their own
complications.
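
A minimal Python sketch of what solving that entails (a toy `Complex` class for illustration, not a real library):

```python
class Complex:
    """Toy complex number, showing why 5 + Complex(2, 3) needs __radd__."""

    def __init__(self, re, im):
        self.re, self.im = re, im

    def __add__(self, other):
        # Handles Complex + Complex and Complex + int/float
        if isinstance(other, (int, float)):
            other = Complex(other, 0)
        return Complex(self.re + other.re, self.im + other.im)

    # Without this, 5 + Complex(2, 3) raises TypeError: int.__add__ returns
    # NotImplemented, so Python falls back to the right operand's __radd__.
    __radd__ = __add__

    def __repr__(self):
        return f"Complex({self.re}, {self.im})"


print(Complex(2, 3) + 5)   # Complex(7, 3)
print(5 + Complex(2, 3))   # Complex(7, 3)
```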

Thus, the debate on operator overloading boils down, as language feature
debates usually do, to a cost vs. benefit analysis. The costs are heavy: there
is plenty of evidence out there that even experienced coders insist on
abusing operator overloading (or, to be a bit less judgemental: opinions are
rather divided on how operators ought to be used, given the amount of
complaints about `cout << somestring` and the like). Also, the language
complexity is considerable, given that you need to solve the `5 + Complex(2,
3)` problem.

Are the benefits worth it? Possibly.

But I don't think GvR's article takes the cost side seriously, and the
benefits stated, at least in my experience, tend not to apply at all there
where op overloading tends to end up.

~~~
wozer
In my opinion, requiring commutativity from a +-operator is maybe too much.
Associativity and a neutral element seem to be enough (i.e. forming a monoid).

~~~
naniwaduni
It seems like + normally connotes a commutative operator; for a general
monoid, the operation is normally notated as if it were multiplication.

------
karmakaze
The dict example given is exactly why operators wouldn't be useful to me.

Presumably d1 + d2 != d2 + d1 for some d1, d2.

If you're going to use the + operator then it should behave like it does in
other contexts.

~~~
hprotagonist
this is equally untrue for strings, of course.

    
    
      "foo" + "bar" != "bar" + "foo"
    

there are valid non-commutative additions.

~~~
edflsafoiewq
An interesting example is C's pointer-integer addition. p + n == n + p, true,
but this is a purely syntactic fact. The actual semantic question of
commutativity, whether switching the order of the arguments leaves the value
unchanged, cannot even be asked of pointer-integer addition since the
arguments, having differing types, cannot be switched.

~~~
hcs
Indeed in C even array[index] is equivalent to index[array], just in case you
want to be confusing.

------
phoe-krk
_This is much less confusing than (2), and leads to the observation that the
parentheses are redundant, so now we can write "x + y + z"_

This is a non-problem if you're using Lisp. (+ x y z) accepts an arbitrary
number of arguments, and the operator precedence problem does not exist since
there is no operator precedence.

~~~
mannykannot
If this is so obviously preferable, why have mathematicians so obdurately not
adopted this style?

Mathematical notation is not an archaic practice that is followed out of a
respect for tradition, or a doctrine that has been developed from first
principles, it is something that has evolved (and continues to do so) because
it has been useful.

~~~
ken
1. Mathematicians have different priorities than programmers, and they use
different tools. Working with an equation on a whiteboard, it's easier to
write "a+b+c" and then cancel terms as needed. When writing a formula on my
computer, cancelling terms is something I almost never do, so it would be
silly to use a notation that's been optimized for that.

When I _am_ doing algebra on my computer, I hope I have a tool like Graphing
Calculator (not "Grapher"!) that lets me simply drag a term from here to
there, and automatically figures out what needs to happen to keep the equation
balanced.

2. They have, except they use Σ for the prefix version. When it's more than a
couple terms, and there's a pattern to it, Σ (prefix notation) is _far_ more
convenient than + (infix notation).

If programming languages look like they do because they're taking the useful
notations from mathematics, why doesn't your favorite programming language
have a Σ function? Who's being stubborn here?

~~~
joshuamorton
Most programming languages do have some variant of `sum(sequence)`. Python
certainly does. Or, like, loops, which do the same thing.

But they're optimized for different things. Using the same tool for infinite
length sequences and fixed length sequences doesn't make a whole lot of sense.
We often have different types for them (tuple/record vs. list/array) too.

~~~
ken
Having done addition in both infix and prefix varieties on my computer, over
the past few decades, I don't understand why prefix notation is considered
'optimized' for indefinite (not 'infinite') sequences and infix notation is
considered optimized for definite length sequences.

What exactly "doesn't make a whole lot of sense" about (+ a b)? (It doesn't
look the same as you wrote it in school? Neither does "3+4j", or "math.sqrt".)

Being able to use the same tool for different types of data is precisely what
makes high-level programming so powerful. As Alan Perlis said, "It is better
to have 100 functions operate on one data structure than 10 functions on 10
data structures." Having only one way to add numbers (in languages that do
that) is a great feature.

Python's insistence on these pre-selected groupings of functionality has
always made it frustrating for me to work with. The two ways to add numbers
look and work nothing alike. Does "TOOWTDI" not apply to math?

(Yes, I'm also frustrated and confused that Python has built-in complex
numbers, and a standard 'math' module with trigonometric functions, but
math.cos(3+4j) is a TypeError. What possible benefit is there of having a full
set of complex-compatible functionality, but hiding it in a different math
module, with all the same method names?)

~~~
joshuamorton
The Zen never says TOOWTDI, it says TO(APOO) _O_ WTDI. (That's "there's one,
and preferably only one, _obvious_ way to do it.")

`reduce(op.add, [x, y])` works. Python could remove its infix ops and use
only prefix ones. But prefix ones aren't obvious. And as Guido says,
readability matters.
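
Spelled out, all three forms compute the same thing:

```python
from functools import reduce
import operator as op

print(reduce(op.add, [1, 2, 3]))  # 6: explicit prefix/fold form
print(sum([1, 2, 3]))             # 6: the idiomatic built-in
print(1 + 2 + 3)                  # 6: infix
```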

------
j88439h84
Python already has an 'operator' for combining dicts.

d = { __d1, __d2}

~~~
ploxiln
just have to avoid HN italic syntax thing, but yes:

    
    
        d = {**d1, **d2}

~~~
j88439h84
Heh, thanks :)

------
nikofeyn
he doesn’t make any coherent argument as to why one is more clear or preferred
than the other. he simply states it and moves on with this bias. for example,
when he compares 2 to 2a, i feel he doesn’t really address anything and just
states his preference as the more clear one.

plus, he of course seems to know nothing about lisp (or just ignores it) where
you might have:

    
    
      (+ a (+ b c)) = (+ (+ a b) c) = (+ a b c)
    

all three are valid lisp syntax and represent associativity of addition well,
including “dropping the parentheses”. the thing is, procedure notation as
opposed to infix operators actually makes it more explicit what is meant by
associativity. this is because it makes explicit what comes first, because of
how procedures are evaluated. this is much more explicit than conventions
about what parentheses mean in infix operator expressions. even his python
procedure example in (2) demonstrates this property.

the distributive law is:

(* a (+ b c)) = (+ (* a b) (* a c))

that seems pretty clear to me. and with this, you can of course extend the
language such that the +, *, etc. procedures operate on new data types or
heterogeneous data types like (* c v) where c is a constant and v is a vector
and you get scalar multiplication.

it is also funny to me that he considers readability over performance. well, i
do too, but you can have both. see lisps, schemes, and sml dialects, all
languages he seems to ignore the existence of.

another thought is that i recall gerald sussman saying in a talk that
mathematical notation is impressionistic, and in general, i think he is right.
his point was that procedure notation is much more explicit. he also mentions
that prefix notation is inconvenient for small expressions versus infix
notation but is much preferred when you have very large expressions with
many, many terms.

~~~
redleggedfrog
Isn't "...once you've learned this simple notation, equations written using
them are easier to _manipulate_ than equations written using functional
notation..." his argument why operators are more clear?

I don't necessarily agree with that, but I haven't made a world class
programming language, either. I just assume he knows something I don't.

~~~
nikofeyn
but programming isn’t about manipulating equations. if you want programs to
manipulate equations, then again, the procedure notation, in particular lisp
notation, wins out.

------
tracker1
I have to admit... I half expected an article on why Phone Operators are
useful, say as opposed to annoying IVR systems.

------
Sniffnoy
Non-mobile link: [https://neopythonic.blogspot.com/2019/03/why-operators-
are-u...](https://neopythonic.blogspot.com/2019/03/why-operators-are-
useful.html)

------
tolmasky
I've grown to dislike _operator overloading_ quite a bit, and thus developed a
general skepticism toward operators in general, and I think this article
actually partially reveals why.

First off, I'm surprised this article doesn't touch at all on the fact that
operators are usually the only "blessed" functions in a language that can be
written in infix form, as I think that's actually where many of the
"benefits" being described here come from. For example, I believe the first
example about how operators highlight the lack of ambiguity in the expression
"x + y + z" has little to do with operators and more to do with prefix vs.
infix notation. Notice that this is not apparent if we use operators but also
require prefix notation instead: "+ + x y z" vs "+ x + y z". Here too you
may never realize how the binding order doesn't matter, because binding order
is non-existent and unnecessary. The fact that in most languages non-operator
function calls must be in prefix form shows the true concern here. If you for
example imagine working in Haskell, where you can apply the opposite
experiment, non-operator functions in infix form, this discovery could also
arguably have been highlighted: "x `add` y `add` z". All this to say, I do not
see _this_ particular point as a great defense of operators.

But the real problem in my mind comes from the eventual overloaded meaning.
This is actually displayed in this very blog post. Near the end, the author
mentions how * is not always commutative, but _in math_ , + is. And yet, in
the simplest non-math example use of operators, string concatenation, we
already find that + is not commutative. So what we're really doing is creating
very terse function names that "often" imply a set of rules, but also often
don't. In other words, in one specific domain they have a specific meaning,
but when put into a general purpose programming language, they actually are
just... "un-user-friendly one letter names".

And this gets to the heart of the issue. I agree that operators probably
trigger a different part of the brain, but I think they do so simply because
we use a reserved part of the character set to express them that we don't use
for anything else. So they _stand out_. If "+" was replaced with "a", I don't
think that "b a b" would really be that visually more helpful than "a(b,b)"
-- so it's not the magic "operator-ness" of the function, it's not even the
infixness, it's the fact that we're basically putting a weird character in
there, the same as how a smiley face emoji would stick out, or perhaps how
"bolding" text would stand out (if that were possible in your programming
language).

The problem is that these magic special limited-set characters are very
attractive, and once you have a concept you firmly understand, you want to map
them onto the closest version of the standard math forms. Case in point,
string concatenation. But these new applications very rarely map exactly, and
soon we end up with incredibly terse names that trick you into thinking they
behave like something familiar. Most people would agree that "add" isn't the
best name for a string concatenation function, and would carry a lot of weird
meaning baggage, but the abstractness of the "+" operator (and the fact that
we read it aloud as "plus") fools us into using it.

~~~
naniwaduni
The best part is that string concatenation is a perfectly cromulent
"multiplication": it forms a free monoid, and it is in line with some
notations for concatenation of tuples.

Funny the associations we make...

~~~
dnautics
Well that's exactly why string concat in Julia is `*`.

~~~
naniwaduni
Indeed, Julia has the most well-thought-out commentary on strings in their
docs I've seen from a non-toy language:

[https://docs.julialang.org/en/v1/manual/strings/#man-
concate...](https://docs.julialang.org/en/v1/manual/strings/#man-
concatenation-1)

~~~
dnautics
Elixir's string module documentation isn't half bad, but there is more
practical Unicode thinking and less mathematical thinking (which makes sense,
given their domains).

[https://hexdocs.pm/elixir/String.html](https://hexdocs.pm/elixir/String.html)

~~~
naniwaduni
I happen to think baking unicode into your concept of a string is
fundamentally misguided, so that all string operations following from that
premise are inherently wrong. The very first example, contrasting encoded
byte length with String.length("é") == 1 and calling the latter the "proper
length", walks into a shibboleth which puts Elixir on the side of
String.length("ﷺ") == 1,
even with the grapheme clusters concept, for which the only salvation is
integrated font rendering.

It's practical and informative, but I can't consider it well-thought-out.

ed: to clarify, ﷺ is an Arabic ligature which represents many more than one
(linguistic) characters. A more accessible example might be "ﬃ".

~~~
dnautics
I could be wrong, but I think the reason why String.length is one is to have a
consistent idea of what happens when you have monospaced console output.
Things in the elixir standard library exist "when you need them for elixir
itself", and monospaced console output formatting working is needed in a few
parts of elixir. If you care about bytes only, you can use byte_size, as
indicated in the docs.

~~~
naniwaduni
No, codepoint length is totally useless for monospaced console output, see the
third example. Grapheme clusters are closer, but still wrong in the presence
of wide characters.

~~~
dnautics
I've written a fuzzing library that tests random Unicode inputs and the width
of the output was sensible on three platforms (Linux, Mac, and powershell).

------
wiremine
> Of course, it's definitely possible to overdo this -- then you get Perl.

100% agree. I actually really like Perl, but in practice this makes Perl
programs more difficult to maintain than they should be.

Also: seems like this XKCD is appropriate here:
[https://xkcd.com/224/](https://xkcd.com/224/)

~~~
raiph
Forgive me as I use similarly extreme language. I 100% disagree. Imo Guido's
throwaway comment uses the same rhetorical form as hate speech. Your use of
"100% agree" and "I actually really like Perl" are mutually contradictory.

Now let me unpack that. I'm curious to see if you reply and whether we arrive
at middle ground.

=================================================

> in practice [Perl's use of operators] makes Perl programs more difficult to
> maintain than they should be

This is an old saw but do you agree that it is a misstatement of what's
actually going on:

* Some Perl programmers, especially experienced ones, have made and continue to make Perl programs, including large ones, EASIER to maintain than they could be and would be if written by a similarly strong programmer using the deliberately dumbed down language Python.

* Some Perl programmers, especially beginners, have made and/or continue to make Perl programs, mostly small ones, more difficult to maintain than they should be and would be if written by a similarly weak programmer using a deliberately dumbed down language like Python.

Perhaps your response to that is something like "Oh sure, I just meant most
programs I see because they're mostly small and written by inexperienced
programmers."

If not, how modern are the Perl programs of which you speak? Much has changed
since 2000 and especially in the 2010s: Perl 5 has stabilized around a wise
view of the language, interpreter, and how to be respectful of each others'
failings; Perl 6 has radically dialed back the complicated built in
availability of terribly cryptic symbol aliases as part of a complete rethink
of what it means to be a Perl language.

================================================

As for the "hate speech" aspect:

>> Of course, it's definitely possible to overdo this -- then you get Perl.

> That's the Guido we know and love.

The above was the entirety of by far the highest voted comment in the python
reddit post about this article.

I posted the following in sardonic response, which also got upvoted:

Guido:

> Once you've internalized the simple properties which operators tend to have,
> using + for string or list concatenation becomes more readable than a pure
> OO notation, and (2) and (3) above explain (in part) why that is.

Larry:

> But it contradicts property (1). That's (in part) why I did not use + for
> string and list concatenation in Perls. Then again I used it for adding
> floats, even though that contradicts property (1) too. And changed my mind
> about what symbol to use between Perl 5 and Perl 6. Heck, who cares about
> consistency?

Guido:

> I care. I consistently hate on Perl and Python users consistently love me
> doing so.

I generally find Guido's and the Python community's intolerant attitude
consistent with the above and with this exchange about the same article on
twitter:

> Random nit: you've got a "font-size: 85%" in the CSS for your posts. Mind
> removing that, so those of us with our default font-size set deliberately
> don't have to squint?

Guido:

> I'm sorry, I have no interest in CSS hacking. The template is what it is.
> Deal with it.

Guido's riposte also got lots of hearts (presumably to soothe or applaud their
beloved ex-BDFL).

I note the follow up got nothing from Guido:

> It's a single-line deletion, and would improve the accessibility of your
> blog, but sure ok.

There's zero chance Larry would have thought and spoken as Guido did and he
absolutely would have made that single line deletion out of respect for the
person whose eyesight wasn't as good as others'. The same attitude applies to
improving the language and culture.

==================================================

How familiar are you with current Perl activity?

I see a new Perl emerging, one that will unfold throughout the 2020s. I see a
well entrenched self-reinforcing tolerant attitude in Perl culture, even of
those who like to use too many operators for my liking.

~~~
wiremine
I appreciate the lengths you went to attempt to articulate your position, but
I don't think we'll arrive at a good spot in comments on HN. The "hate speech"
analogy is very incendiary.

The only comment I would add is clarification on your comment about 'Your use
of "100% agree" and "I actually really like Perl" are mutually contradictory.'

Someone can really like a language, but not agree with all the language
designer's choices. A language designer needs to make tradeoffs, and I hope we
can agree that not every choice needs to be universally accepted. Tolerance !=
uniformity and blind acceptance.

Beyond that: I'm very familiar with Perl 5, Perl 6, and the direction the
community is taking the languages, and the Perl culture. I just disagree with
some of the language choices.

~~~
raiph
Thanks for replying.

> Someone can really like a language, but not agree with all the language
> designer's choices.

Yes, of course.

When writing about `+`, Guido might have chosen to gently chide himself for
deciding to break the very first principle he wrote about in his article.

Instead he arbitrarily focused on a couple later principles he didn't break --
and then rudely dissed Perl which deliberately did _not_ break any of the
principles he listed in his article.

I really like Python but I do not agree with all Guido's choices in human
language and that inappropriate cheap shot at Perl was an example.

> A language designer needs to make tradeoffs

Yes, of course.

As I wrote, Guido chose to write "The template is what it is. Deal with it."
in response to someone with poor eyesight requesting he make a one line
change.

Twitter is a constrained medium, so he presumably felt he needed to trade off
civility for making his point clear. But:

> I hope we can agree that not every choice needs to be universally accepted.
> Tolerance != uniformity and bind acceptance.

That was the point of my first reply to you.

I was curious to see if you would agree that Guido's decision to be rude
toward Perl in the very context in which he'd just described a flaw in Python
according to his own stated principles did not warrant the essentially
universal acceptance it got in the reddit python thread.

And, likewise, his rudely dismissive twitter response to a random innocent
person reading his article which received far more hearts than those
sympathizing with the tweeter with less than perfect eyesight.

> Beyond that: I'm very familiar with Perl 5, Perl 6, and the direction the
> community is taking the languages, and the Perl culture. I just disagree
> with some of the language choices.

To recap, I wasn't speaking about disagreement. And not really about Perl
either. I was speaking of agreement with Guido's rudeness.

------
enriquto
I'm amazed (and horrified) that such an intelligent mathematician and
designer of a beautiful programming language made such an obvious fuckup as
using a commutative operator for string concatenation. Really, I hate Python
just for this single idiotic notation.

I mean, it's right there in front of your eyes. He talks about the convenience
of using a visually commutative operator like "+" for commutative operations,
and then a few lines later he says that it is a convenient notation for string
concatenation. What. The. Fuck.

~~~
_cs2017_
I understand the confusion it may cause to a new Python developer who comes
from a mathematical background.

Does it cause any problems beyond that? For example, does it interfere with
some elegant patterns, or result in some bug prone code, etc?

FWIW, the single main annoyance I've had with python was its treatment of
strings as iterables of single character strings. While not wrong in any
obvious theoretical sense, it conflicts with the mental model of many
developers (both new and experienced) and has probably caused more bugs than
any other feature of Python (and even more ugly type checks to avoid such
bugs). When people realized it was a problem and discussed removing this
feature, Guido said he had tried, but (a) it was too much work and (b) he felt
the benefits were too great to give up
([https://mail.python.org/pipermail/python-3000/2006-April/000...](https://mail.python.org/pipermail/python-3000/2006-April/000824.html)).
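
A short illustration of the gotcha, with the kind of defensive type check the parent alludes to (the `as_list` helper is hypothetical, for illustration):

```python
from collections.abc import Iterable

print(list("abc"))                   # ['a', 'b', 'c']: a str iterates as 1-char strs
print(isinstance("abc", Iterable))   # True, so generic iterable-handling code
                                     # silently explodes strings

# The ubiquitous guard this behavior forces:
def as_list(x):
    if isinstance(x, str):           # special-case str to keep it whole
        return [x]
    return list(x)

print(as_list("abc"))    # ['abc']
print(as_list([1, 2]))   # [1, 2]
```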

~~~
enriquto
> I understand the confusion it may cause to a new Python developer who comes
> from a mathematical background.

I'm confused mainly because a person with a mathematical background created a
language with such an obvious blunder. When I have to use the language (and I
do often) I am not confused about string concatenation, I am ashamed of having
to use this ridiculous notation.

