
C# Brainteasers - edw519
http://www.yoda.arachsys.com/csharp/teasers.html?0
======
TeHCrAzY
Jon Skeet (the author) has an impressive array of knowledge. Anyone working
with C# could benefit from reading through his site.

(I just learnt something from #1. I assumed inherited members were treated as
first-class members of the class. Apparently not!)

~~~
fauigerzigerk
The first one surprised me as well. It's not like that in Java or in C++. The
others are harmless because people either know them or at least know that they
don't know and will look them up if necessary.

------
javery
How about this one:

int i = 1; Object.ReferenceEquals(i, i);

What does it return?

~~~
TeHCrAzY
It's going to box i twice, into two separate objects, which will cause it to return false.

    object i = 0; // the int is boxed once, at this assignment
    Object.ReferenceEquals(i, i); // now comparing the same boxed object, returns true

------
barrkel
Two of the six relate to overloading. The detailed mechanics of overloading in
any given language are often overlooked, even by people with many years of
experience, because overloading involves the interplay of several different
features and often ends up underspecified, and therefore reliant on
implementation quirks.

Some things that affect overload resolution:

* Scoping rules, which provide the set of methods to be chosen from. C# generally only chooses from a set of methods all declared at a particular level of derivation, to mitigate the fragile base class problem. The principle is that if an ancestor introduces a method, it shouldn't change the overload resolution that was already in effect in descendants. There are other scoping niggles to consider, such as the interactions with visibility rules. For example, can you see private members of x in C::SomeMethod(C x), even though the runtime type of x might actually be derived? See also Koenig lookup:

<http://en.wikipedia.org/wiki/Argument_dependent_name_lookup>

* Conversion rules. Most languages have built-in implicit conversions, and C# also includes user-defined conversions. Obviously overloads that don't involve a conversion should be preferred to those that do, but how do you measure the "weight" of a conversion? Then there are things like Variants in languages that support them.

* Inference rules. Languages with generics can often infer type arguments from argument types, by matching them up with the parameter types to fill in the missing type variables. But in situations where more than one parameter is governed by the same type parameter, the compiler has to make hard choices, make arbitrary choices, or give up.
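Java's own inference illustrates that last point: when two arguments are bound to the same type variable and their types disagree, the compiler neither gives up nor arbitrarily picks one argument's type; it infers their least upper bound. A minimal sketch (the `pick` method is hypothetical):

```java
import java.io.Serializable;

public class Inference {
    // Hypothetical generic method: both parameters share the type variable T.
    static <T> T pick(T a, T b) { return a; }

    public static void main(String[] args) {
        // Mixed argument types: T is inferred as the least upper bound of
        // String and Integer, an intersection type that includes Serializable,
        // so this assignment compiles.
        Serializable s = pick("x", 1);
        System.out.println(s); // prints "x"
    }
}
```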

The Java language uses a simple concept which is worth keeping in mind when
thinking about resolution: a method M1 is more specific than method M2 if
values of the parameter types of M1 could be passed to M2 without compile-time
error. The idea is that the most specific method of all applicable methods is
chosen, but if there isn't any single most specific method, then there's a
compile-time error.
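That rule can be seen in a small Java example (the overloads here are hypothetical): an Integer value could be passed where Number or Object is expected without error, but not vice versa, so m(Integer) is the most specific applicable method for an int argument.

```java
public class Specificity {
    static String m(Object o)  { return "Object"; }
    static String m(Number n)  { return "Number"; }
    static String m(Integer i) { return "Integer"; }

    public static void main(String[] args) {
        // An Integer could be passed to m(Number) or m(Object) without error,
        // so m(Integer) is more specific than both and is chosen.
        System.out.println(m(42));      // prints "Integer"
        // A boxed double is a Number but not an Integer.
        System.out.println(m(3.14));    // prints "Number"
        System.out.println(m("hello")); // prints "Object"
    }
}
```

The failure case is equally easy to hit: given overloads m(Integer, Object) and m(Object, Integer), the call m(1, 1) has no single most specific method, and the compiler reports an ambiguity error.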

The language whose overloading I've delved into most deeply is Delphi. In the
course of maintaining it I've had to fiddle with many of its semantics to iron
out bugs and enable specific scenarios. The backward compatibility constraints
prevent using a (relatively) simple approach like Java's. I've come instead to
think of overload resolution in a different manner:

* Every kind of argument type should be defined so that it can form a partial order over potential parameter types. For any given pair of parameter types, the argument type should be able to indicate that neither is compatible, that one is preferred to the other, or that both are equally compatible.

* For a given argument list, the parameter list that best matches it is the one that is better than or equal to every other list at every argument/parameter position, and is not incompatible in any position. If there is no single best match, then the resolution is ambiguous.
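Those two rules can be sketched mechanically. The sketch below (in Java, with all names hypothetical) uses reference assignability as a stand-in for a real language's conversion rules: compare() is the per-position partial order, and beats() applies the better-or-equal-everywhere criterion.

```java
public class BestMatch {
    // Compare how well two parameter types fit one argument type.
    // Returns -1 if left is preferred, 1 if right is, 0 if they are
    // equally compatible, and null if neither is compatible.
    static Integer compare(Class<?> arg, Class<?> left, Class<?> right) {
        boolean l = left.isAssignableFrom(arg);
        boolean r = right.isAssignableFrom(arg);
        if (!l && !r) return null;
        if (l != r) return l ? -1 : 1;
        if (left.equals(right)) return 0;
        // Both fit: the more derived (more specific) parameter type wins.
        if (right.isAssignableFrom(left)) return -1;
        if (left.isAssignableFrom(right)) return 1;
        return 0;
    }

    // Parameter list a beats b if it is at least as good in every position,
    // compatible everywhere, and strictly better somewhere.
    static boolean beats(Class<?>[] args, Class<?>[] a, Class<?>[] b) {
        boolean strictlyBetter = false;
        for (int i = 0; i < args.length; i++) {
            Integer c = compare(args[i], a[i], b[i]);
            if (c == null || c > 0) return false;
            if (c < 0) strictlyBetter = true;
        }
        return strictlyBetter;
    }

    public static void main(String[] args) {
        Class<?>[] call = { Integer.class };
        Class<?>[] m1 = { Number.class };
        Class<?>[] m2 = { Object.class };
        // Number is more specific than Object for an Integer argument.
        System.out.println(beats(call, m1, m2)); // prints "true"
        System.out.println(beats(call, m2, m1)); // prints "false"
    }
}
```

A real resolver would then pick the candidate that beats (or ties with) every other; if no such candidate exists, the call is ambiguous, matching the second rule above.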

With this kind of approach, combined with much testing and experimentation,
it's possible to elucidate the de facto overload resolution rules that a given
compiler implements, and rewrite them in the form of both a specification and
a reimplementation. It's not as nice as the principles-based approach that
Java uses, but it ends up being more practical for older languages with a
single dominant implementation (cf. Delphi), and its exhaustive approach should
also uncover any counter-intuitive niggles in the resolution mechanism.

~~~
pvg
There's another design approach, taken by Eiffel: not allowing overloading at
all. Bertrand Meyer's arguments are that it's of relatively little use
compared to the cost (i.e. the kind of complexity you're describing), that it
can get in the way of multiple inheritance, and that genericity is a better
tool for the sort of thing overloading is often used for. There's a section in
OOSC about it (then again, there's a section in OOSC about more or less
everything).

