I certainly didn't read this as saying "3 - 4 = 5" was cool. I would hope any reasonable Haskeller would agree that doing something like this in real code is grounds for revoking commit access.
If you're not comfortable with (otherwise sane) overloading in C++, you probably aren't going to like it in Haskell, either.
Realistically? Because if you're programming in Haskell, you're probably programming by yourself or with a few other smart people, so either you understand the idioms you encounter or at least you can trust them to be applied in a consistent and effective manner.
C++ is used more often as a common-denominator language for many programmers of varying skill who are collaborating on a project, and as you move toward that domain, transparency and safety start to become more important than power, since a big element of getting anything done is just not screwing each other over with unreadable code.
And type classes force you to override related operators together. (With dependent types, which Haskell doesn't have, the relationships between them could even be forced to adhere to mathematical axioms.)
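To make the point concrete, here's a minimal sketch of what "together" means: a `Num` instance for a hypothetical modular-arithmetic type (`Mod7` is my own invented name, not from the article) obliges you to supply `(+)`, `(-)`, `(*)` and the rest in one place, rather than overloading them piecemeal as in C++.

```haskell
-- Hypothetical modular-arithmetic type: the Num instance forces all
-- related operators to be defined together, in one declaration.
newtype Mod7 = Mod7 Int deriving (Show, Eq)

instance Num Mod7 where
  Mod7 a + Mod7 b = Mod7 ((a + b) `mod` 7)
  Mod7 a - Mod7 b = Mod7 ((a - b) `mod` 7)  -- Haskell's `mod` keeps the divisor's sign
  Mod7 a * Mod7 b = Mod7 ((a * b) `mod` 7)
  abs             = id                       -- residues are already non-negative
  signum (Mod7 0) = Mod7 0
  signum _        = Mod7 1
  fromInteger n   = Mod7 (fromInteger n `mod` 7)
```

Note that nothing checks the ring axioms for you; the class only groups the operators, which is the weaker guarantee the parenthetical above is contrasting with dependent types.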
If you need the code watered down into a deluge of little baby steps to satisfy your own definition of "readability", that's fine. I personally don't find useless dissipation of program logic to be readable.
And have you ever read back your own program after a few months of doing something completely different?
'accidental complexity' = 'essential complexity' / 'your intelligence'
You are forgetting accidental circumstances, like your current involvement with this particular code and the particular problems it solves. Code should read like a mathematical proof, exactly because it allows you to follow the steps in between.
The requisite granularity of the intermediate steps is completely subjective. If you're catering for the lowest common denominator, then you give up some succinctness, which may be more readable to someone unfamiliar with the concepts, but may just act as a time-wasting loss of abstraction that actually obscures readability to those who are familiar. You wouldn't want me to step through the proofs of associativity and commutativity of addition every time the rule is used, would you? That's why we just prove it once, give it a name, and assume a basic standard of intelligence/familiarity for those who have commit access.
Let bindings create a new local scope. It's only in the GHCi prompt that they apply "from this point on".
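A small sketch of that scoping (the function name `example` is mine, for illustration): shadowing `(-)` inside a `let` affects only the body of that `let`, and the Prelude definition is untouched outside it.

```haskell
-- The shadowed (-) applies only inside its own let expression;
-- the second component still uses the Prelude's subtraction.
example :: (Int, Int)
example =
  ( let (-) = (+) in 3 - 4   -- shadowed: behaves as addition here
  , 3 - 4 )                  -- Prelude (-): ordinary subtraction
```

In GHCi, by contrast, a top-level `let (-) = (+)` rebinds the operator for every subsequent prompt line, which is why the trick looks more alarming there than it is in a module.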
It is possible to do a more global redefinition, but as far as I know you'd have to suppress the Standard Prelude, which would mean you wouldn't really be in Haskell 98 anymore. It's certainly not something that would happen in a normal software project.
So to answer your question: in Haskell the effect is contained to a lexical scope (a few lines, usually), whereas in C++ it infects absolutely everything.
This is one of the things I like about Haskell. Operators like (-) are just like any other function, so you're free to redefine them for new inputs. If you want to define a world where 3 - 4 = 5, you're free to do so. It makes it easy to explore some more complicated mathematical constructs.
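For the curious, the trick in question is just an infix function binding with literal patterns, which shadows the Prelude's `(-)` inside the `let` (the name `trick` is mine; GHC will warn that the pattern match is incomplete, which is exactly why this belongs in a parlor trick and not in real code):

```haskell
-- A local (-) defined by pattern matching on the literals 3 and 4.
-- It only knows how to "subtract" that one pair; any other arguments
-- would be a runtime pattern-match failure.
trick :: Int
trick = let 3 - 4 = 5 in 3 - 4
```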
Do you care to elaborate on how '3 - 4 = 5' allows one to explore some more complicated mathematical constructs? Unless 'complicated' means 'illogical', I cannot understand it.
I think Haskell is a very nice language, and its flexibility with definitions is very cool, but I don't think this is a good example of it.
For much the same reason, one would probably never want to write "let 1 = 1", but it's still a valid statement. It may not be interesting, but it's not nonsense.
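As a sketch of why it's valid (the name `harmless` is mine): `let 1 = 1` is a pattern binding, and pattern bindings in Haskell are lazy, so the match is never actually performed unless a variable it binds is demanded, and this one binds no variables at all.

```haskell
-- "let 1 = 1" is a legal, if useless, lazy pattern binding.
-- It binds no variables, so the pattern is never forced.
harmless :: String
harmless = let 1 = 1 in "still fine"
```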
This title reminds me of an ancient Fortran compiler bug: constants were passed to subroutines by reference, so a subroutine that assigned to its parameter could alter the value of a numeric constant everywhere it appeared in the program.