

Bending a language to your will - vladev
http://bolddream.com/2009/05/25/bending-a-language-to-your-will/

======
mahmud
The article is hardly about "bending a language to your will", and more about
"why doesn't the sole vendor of my language implement features suggested by
the greatest living language designer".

Alright, now that we have paid lip service to GLS, let's ask a question:

The author speaks of the lack of operator overloading in Java, and my question
is: why not use a language that allows you to add your own operators, better
yet, one with _no_ concept of officially sanctioned operators? Remember,
operators are just functions with syntactic sugar (i.e. they're usually allowed
to be infix instead of prefix). Why not work with a rich, expressive language
that treats your "extensions" as first-class citizens?
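
The "operators are just sugared functions" point is easy to demonstrate; in Python, for instance, every infix operator desugars to an ordinary method call (a minimal illustration, not tied to any particular language in the thread):

```python
# In Python, every infix operator is sugar for a method call:
# a + b is evaluated (roughly) as a.__add__(b).
class Vec:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):            # invoked for: self + other
        return Vec(self.x + other.x, self.y + other.y)

    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)

v = Vec(1, 2) + Vec(3, 4)            # infix, sugared form
w = Vec(1, 2).__add__(Vec(3, 4))     # the same call, written out as prefix
assert v == w and v == Vec(4, 6)
```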

It seems like the gripes of mainstream programmers about their tools boil down
to two things: 1) lack of vendor responsiveness in implementing much-needed
features (which vary from one programmer to another, whence the lack of
vendor response ;-) and 2) demand for solutions that augment, or make palatable,
a brain-dead design.

Don't worry about making the language "succeed" or "win" against the
competition. If you're struggling with your tools, don't hesitate to look for
better options.

Your tools should be so efficient that you no longer think of them
consciously, much less feel moved to write a long gripe about them in prose.

~~~
jd
The problem with operators is that they have a binding strength. If you want
the expression 7 + x * 4 to evaluate correctly, operator * must bind more
strongly than operator +. If programmers can define their own operators,
they'll have to supply binding strength either as a numeric parameter (ugly)
or relative to other operators. For instance, you could have a list of operators per class
type, where the first operator in the list is the strongest, and the last
operator the weakest. You can then insert your operator in the "right" place
in the list to specify the strength. This is a reasonable solution, but you'll
run into different problems when two libraries each define operators on the
same types. Either way it's not going to be an elegant solution.
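
One design that sidesteps the binding-strength question entirely is to keep the operator set and its precedence fixed in the grammar and let overloading reuse only the existing symbols. A sketch of that design in Python, which takes this approach (the `Sym` class is a made-up illustration that records how the parser grouped an expression):

```python
# Python's grammar fixes both the operator set and the precedence table;
# overloading lets new types reuse the symbols, but cannot alter how
# tightly they bind. So 7 + x * 4 always groups as 7 + (x * 4).
class Sym:
    def __init__(self, name):
        self.name = name

    def _wrap(self, other):
        return other if isinstance(other, Sym) else Sym(str(other))

    def __add__(self, other):
        return Sym(f"({self.name} + {self._wrap(other).name})")

    def __radd__(self, other):     # handles 7 + x, with the int on the left
        return Sym(f"({self._wrap(other).name} + {self.name})")

    def __mul__(self, other):
        return Sym(f"({self.name} * {self._wrap(other).name})")

    def __rmul__(self, other):
        return Sym(f"({self._wrap(other).name} * {self.name})")

x = Sym("x")
print((7 + x * 4).name)    # -> (7 + (x * 4)): * bound tighter, per the grammar
```

The trade-off is exactly the one described above in reverse: no two libraries can ever disagree about precedence, because no library gets to define it.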

It is also a problem at the grammar level. The expression 5--x is ordinarily
parsed as 5 - (-x). But if you were to introduce an operator --, the meaning of
that expression would change to 5 -- x. So the introduction of user-defined
operators can change the meaning of an existing expression.
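
The token-level shift is easy to see by comparison. Python has no `--` token, so its lexer emits two separate minus signs and `5--x` keeps its `5 - (-x)` meaning; in C, where maximal-munch lexing grabs `--` as a single decrement token, the same characters fail to parse:

```python
# Python has no "--" token: the lexer produces a binary minus followed
# by a unary minus, so 5--x means 5 - (-x).
x = 3
assert 5--x == 5 - (-x) == 8

# In a language where "--" is one token (C's decrement operator, or a
# user-defined -- operator), the same source text would parse entirely
# differently, or not at all.
print(5--x)    # -> 8
```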

Overloading and user-defined operators are a hairy, hairy topic, and the
"anything goes" approach is not necessarily the right solution.

~~~
silentbicycle
Well, the main case in which people _really_ care about operator precedence is
arithmetic. I'm not saying it's the only case it matters, but it may be worth
having conventional infix arithmetic as a wrapped sub-language, or otherwise
making an exception to accommodate it.

You can argue all you want about how order-of-operations makes things
unnecessarily complicated in a parsing / syntax context, but if doing infix
arithmetic in your language doesn't _just work_ (see: Lisps, or OCaml's float
operators), many people's first impression will be that the language makes
doing simple stuff hard.
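
The "wrapped sub-language" idea can be sketched concretely: a host language with only prefix syntax could still expose conventional infix arithmetic through a small precedence-climbing evaluator along these lines (a toy sketch; unary minus, error handling, etc. are omitted):

```python
import re

# Toy "infix arithmetic as a sub-language": a precedence-climbing
# evaluator with a fixed, conventional precedence table.
PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def tokenize(src):
    return re.findall(r"\d+\.?\d*|[-+*/()]", src)

def evaluate(src):
    toks = tokenize(src)
    pos = 0

    def peek():
        return toks[pos] if pos < len(toks) else None

    def next_tok():
        nonlocal pos
        t = toks[pos]
        pos += 1
        return t

    def atom():
        t = next_tok()
        if t == "(":
            v = expr(1)
            next_tok()           # consume the closing ")"
            return v
        return float(t)

    def expr(min_prec):
        left = atom()
        while peek() in PREC and PREC[peek()] >= min_prec:
            op = next_tok()
            right = expr(PREC[op] + 1)   # +1 makes the operators left-associative
            if op == "+":   left += right
            elif op == "-": left -= right
            elif op == "*": left *= right
            else:           left /= right
        return left

    return expr(1)

print(evaluate("7 + 3 * 4"))     # -> 19.0: * binds tighter than +
```

Embedding something like this behind one entry point gives users the arithmetic they expect without touching the host grammar at all.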

~~~
jd
> You can argue all you want about how order-of-operations makes things
> unnecessarily complicated

That's not what I'm arguing. It's okay for that complexity to exist, I just
don't think that exposing it to the end programmer is by definition a good
thing. If you have a fixed set of infix operators then you can have a well
defined grammar. If you have an extensible set of infix operators, then you
need to make major changes to your grammar to prevent ambiguity.

The way infix operators work is a decision the language designer has to make
early on. Bolting on a way to create new infix operators in Java (not that
you're suggesting that) would be a disaster. I agree with you that sacrificing
the "just work" aspect in the name of strong typing (OCaml) or in the name of
uniformity (Lisp) is a bad trade-off.

~~~
silentbicycle
I agree with you -- I wasn't refuting your point, but rather noting that it's
worth considering as a major usability issue for the language, rather than
strictly a technical decision. (I meant to say, "One can argue ...", but
hadn't had my tea yet. I can see why you read it that way.)

I had Hexstream's glib "The solution is to use prefix notation. Problem
solved!" reply in mind, actually: in that case, the language trades a
syntactic ambiguity for _pissing off everyone who expects to do basic
arithmetic without major problems_, which will disproportionately be people
new to the language.

Cavalierly dismissing this is hardly a solution. If I were new to Lisp, and
trying to use it for something based on relatively simple arithmetic (say,
totaling a categorized spending log), and a question about why "total + val"
didn't work was answered with a digression about parsing, I would be pretty
annoyed. If the person talking up Lisp were to then go into a spiel about how
Lisp is great because you can extend the language to express things in their
own terms _when I couldn't even use arithmetic_, I would probably assume they
were full of it and write off Lisp entirely.

