What sensible thing could you do with a list containing a ThingValidator and an OtherThingValidator? Assuming there's no relationship between Thing and OtherThing, you're forced to use runtime checking to find the validator you need for something typed as object, so at that point you have to use casting or reflection, which you'd also need to do when implementing a non-generic interface.
You are thinking that using reflection and casting at runtime is not a sensible thing because it is not optimal, but you must consider that constraints in existing code bases will often make writing optimal solutions too costly.
An example is:
For some reason or another you've written a thread dispatcher that will dynamically spin up threads and route the outputs of these threads into new threads. (e.g. https://github.com/bilus/pipes )
Obviously this library will need to do run-time casting of types already, as C# makes it too difficult to orchestrate these threads without some shared type. (Here is an example of Microsoft following this pattern. https://msdn.microsoft.com/en-us/library/dd321424(v=vs.110).... )
Now you need to hook validation into this existing dispatching library by associating validators with the outputs of particular threads. Without a shared type you cannot invoke the validator, but it sure is convenient to write the validator using generics.
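Roughly, the hookup ends up looking like this. (A sketch only: IValidator<T>, the registry and the method names are my own stand-ins, not the real library.)
// Sketch: the dispatcher only sees object, so the validator is located by the
// output's runtime type and invoked through reflection.
using System;
using System.Collections.Generic;

public interface IValidator<T>
{
    bool Validate(T item);
}

public static class ValidatorRegistry
{
    // Maps the runtime type of a thread's output to its (generic) validator.
    private static readonly Dictionary<Type, object> validators = new Dictionary<Type, object>();

    public static void Register<T>(IValidator<T> validator) => validators[typeof(T)] = validator;

    public static bool Validate(object threadOutput)
    {
        if (!validators.TryGetValue(threadOutput.GetType(), out var validator))
            return true; // no validator registered for this output type

        // Close the generic interface over the runtime type and call it reflectively.
        var method = typeof(IValidator<>).MakeGenericType(threadOutput.GetType()).GetMethod("Validate");
        return (bool)method.Invoke(validator, new[] { threadOutput });
    }
}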
Another example is:
You have two interfaces, "ISerializable" and "IEquatable", and two validators, "Validator<ISerializable>" and "Validator<IEquatable>", and these validators apply to multiple classes.
To run all of these validators against a given class you have three (un)reasonable choices:
* Create a boilerplate wrapper class for each of these validators for each class that they apply to, and store them against that class somehow. The disadvantage of this is that you lose track of which validators are shared between classes, which may be important for some optimisations.
* Create a non-generic type "IValidator" or "Validator" that all validators share and put these in a collection (i.e. as mentioned by the grandparent of this post); see the sketch after this list.
* Bite the bullet: the code that calls these validators has to be duplicated. (This is the wrong choice, though, because some day you may need to add spin-up/teardown before running the validators, and then you don't have a centralized location to do it.)
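A rough sketch of the second option, with all names made up, just to show where the shared non-generic type and the centralized call site end up:
// Sketch of choice 2 (hypothetical names): one non-generic IValidator so that
// validators written against ISerializable, IEquatable<T>, etc. can share a list.
using System.Collections.Generic;
using System.Runtime.Serialization;

public interface IValidator
{
    bool Validate(object item);
}

public class SerializableValidator : IValidator
{
    // Each validator casts from object down to the interface it actually cares about.
    public bool Validate(object item) => item is ISerializable; // real checks go here
}

public static class ValidationRunner
{
    // The centralized call site where spin-up/teardown could later be added once.
    public static bool ValidateAll(object instance, IEnumerable<IValidator> validators)
    {
        foreach (var v in validators)
            if (!v.Validate(instance))
                return false;
        return true;
    }
}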
Another example is:
You are writing validators for types from a 3rd-party library whose source you may not modify, because it's WinForms. You've been handed a set of arbitrary validation requirements for your multi-million-line codebase. Maybe you need to add a spell checker for all user-editable text? Maybe any control that is bound to a particular property needs an orange background.
Suddenly you have a case where you really do need casting and reflection to figure out which validators to run, as you perform the unfortunate evil of a recursive search through the entire control tree to hook up your validation, rather than rewriting a few thousand files.
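The hookup code then tends to look roughly like this. (A sketch: SpellChecker and the property-binding check are placeholders; Control, TextBox, Controls and BackColor are the real WinForms members.)
// Sketch: walk an existing WinForms control tree and attach validation based on
// each control's runtime type, since the 3rd-party types can't be modified.
using System.Drawing;
using System.Windows.Forms;

public static class ValidationHookup
{
    public static void Attach(Control root)
    {
        // Runtime type checks decide which validator applies to which control.
        if (root is TextBox textBox)
            textBox.TextChanged += (s, e) => SpellChecker.Check(textBox.Text);

        if (IsBoundToInterestingProperty(root))
            root.BackColor = Color.Orange;

        foreach (Control child in root.Controls)
            Attach(child); // recurse through the entire control tree
    }

    // Placeholder for whatever "bound to a particular property" means in practice.
    private static bool IsBoundToInterestingProperty(Control control) => false;
}

// Hypothetical spell checker hooked up above.
public static class SpellChecker
{
    public static void Check(string text) { /* ... */ }
}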
> You are thinking that using reflection and casting at runtime is not a sensible thing because it is not optimal
No, I'm not thinking that; I'm saying that implementing a non-generic base interface like
public interface IValidator {
bool Validate(object obj);
}
is pointless, because you'll have to do the same type checking/casting in Validate that you would have to do in a wrapper which implemented IValidator<object>. Requiring a base interface is worse, because you'll have to write the same boilerplate Validate(object) in every implementing class - see IEnumerable/IEnumerable<T> as an example.
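i.e. every concrete validator ends up repeating something like this on top of its real logic (a sketch; Thing and ThingValidator are made up, IValidator is the interface above):
// Sketch: the per-class boilerplate a required non-generic base interface incurs.
public class Thing { }

public interface IValidator<T>
{
    bool Validate(T item);
}

public class ThingValidator : IValidator<Thing>, IValidator
{
    // The implementation you actually wanted to write.
    public bool Validate(Thing thing) => thing != null; // real checks go here

    // The same cast-and-check you'd have written in a wrapper anyway, repeated per class.
    bool IValidator.Validate(object obj) => obj is Thing thing && Validate(thing);
}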
> Obviously this library will need to do run-time casting of types
You've linked to a Clojure library, which is dynamically typed, so I don't see how this is relevant. It's not obvious to me why a C# version would require casting.
Task/Task<T> does follow the pattern but they could have simply introduced a Unit type in the BCL and replaced the non-generic Task type with Task<Unit> instead.
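Something along these lines (Unit is not actually in the BCL; this is just the shape of the idea):
// Sketch: a hypothetical Unit type, so "a Task with no result" is just Task<Unit>
// instead of a separate non-generic Task class.
using System.Threading.Tasks;

public struct Unit
{
    public static readonly Unit Value = default(Unit);
}

public static class Example
{
    // Instead of a method returning the non-generic Task:
    public static Task<Unit> DoWorkAsync() => Task.FromResult(Unit.Value);
}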
> To run all of these validators against a given class you have three (un)reasonable choices:
You only need to write one wrapper class which encapsulates the cast you'd have to do anyway in a non-generic IValidator interface:
public class Wrapper<T, TBase> : IValidator<TBase>
{
private readonly IValidator<T> inner;
public Wrapper(IValidator<T> v)
{
this.inner = v;
}
public bool Validate(TBase b)
{
        // The same runtime type check a non-generic IValidator would force on you,
        // written once here instead of in every validator.
        if (b is T)
        {
            return this.inner.Validate((T)(object)b);
        }
        return false;
}
}
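For example (ThingValidator, Thing and someThing are made up; Wrapper is the class above):
// Usage sketch: one wrapper adapts any specific validator to a base-typed collection.
IValidator<object> adapted = new Wrapper<Thing, object>(new ThingValidator());
var validators = new List<IValidator<object>> { adapted /*, other wrapped validators */ };
foreach (var v in validators)
    v.Validate(someThing); // items that aren't a Thing simply return false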
> Task/Task<T> does follow the pattern but they could have simply introduced a Unit type in the BCL and replaced the non-generic Task type with Task<Unit> instead.
Microsoft could have added some unified Unit type but they did not, which makes this irrelevant.
Just because there is an obvious negative consequence to choosing a particular type scheme (e.g. IEnumerable<T> extending IEnumerable) does not mean that the pattern is pointless. Sometimes biting the bullet is necessary, since C# has a bullshit type system that makes dynamic orchestration a pain in the arse.
> Microsoft could have added some unified Unit type but they did not, which makes this irrelevant.
It's not irrelevant - you're using Task/Task<T> as an example of the non-generic base type/generic subtype pattern being necessary, but it isn't.
IEnumerable<T> extending IEnumerable is a consequence of C#1 not supporting generics at all, and the non-generic version would not be necessary if it had. I was just using it as a common example of the boilerplate the pattern incurs.
No, I was using IValidator/IValidator<T> as an example of a non-generic base type being necessary when interfacing with existing libraries...
Honestly, if you can implement a C# version of that Clojure library I linked a few posts ago without using this technique, I will be extremely impressed; otherwise I disbelieve you. (Note: I did this myself a few years ago.)
Not to mention that in the original example IValidator<T> is invariant... it can be far more flexible by making T contravariant (IValidator<in T>, which is the variance the compiler will actually accept, since Validate only consumes T), so that validators written for a base type can also be used on more derived types.
And then you can do, for example:
var validators = GetAllValidators<DerivedEntity>();
Where validators might be IEnumerable<IValidator<DerivedEntity>> and can include validators that were declared as IValidator<EntityBase>. Compile-time type safety is preserved, and a single validator covers every type derived from the one it was written for.
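A sketch of what I mean (GetAllValidators, EntityBase and DerivedEntity are placeholders):
// Sketch: with T contravariant, a validator declared for the base type satisfies
// IValidator<DerivedEntity>, so one registry call covers the whole hierarchy.
using System.Collections.Generic;
using System.Linq;

public interface IValidator<in T>
{
    bool Validate(T item);
}

public class EntityBase { }
public class DerivedEntity : EntityBase { }

public class EntityBaseValidator : IValidator<EntityBase>
{
    public bool Validate(EntityBase item) => item != null; // real checks go here
}

public static class Registry
{
    private static readonly List<object> all = new List<object> { new EntityBaseValidator() };

    // Placeholder: returns every registered validator compatible with T.
    public static IEnumerable<IValidator<T>> GetAllValidators<T>() => all.OfType<IValidator<T>>();
}

public static class Demo
{
    public static void Run()
    {
        // The base-type validator is included thanks to contravariance.
        IEnumerable<IValidator<DerivedEntity>> validators = Registry.GetAllValidators<DerivedEntity>();
        bool ok = validators.All(v => v.Validate(new DerivedEntity()));
    }
}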
The question here is, what good is a collection of all validators? If you want to perform validation, you're starting with an item (to validate) and you know its type, so it shouldn't be a problem to get the correct IValidator<T> for it. If you really need to have a common base for all validators, explicit interface implementation is a better choice than hiding methods with new:
interface IValidator
{
bool Validate(object item);
}
interface IValidator<T> : IValidator
{
bool Validate(T item);
}
abstract class Validator<T>
: IValidator<T> where T : class
{
public abstract bool Validate(T item);
    // Non-generic callers come through this explicit implementation, which does the
    // cast once in the base class and fails loudly on a type mismatch.
    bool IValidator.Validate(object item)
{
var typedItem = item as T;
if (typedItem == null)
throw new InvalidOperationException();
return Validate(typedItem);
}
}
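A concrete validator then only has to supply the typed method; the object-typed path is inherited from the base class (Person and PersonValidator are made up):
// Sketch: concrete validator on top of the abstract Validator<T> above.
public class Person
{
    public string Name { get; set; }
}

public class PersonValidator : Validator<Person>
{
    public override bool Validate(Person item) => !string.IsNullOrEmpty(item.Name);
}

// Usage: the same instance works through either interface.
// IValidator untyped = new PersonValidator();
// untyped.Validate(new Person { Name = "Ada" });  // true
// untyped.Validate("not a person");               // throws InvalidOperationException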
But it could be `IValidator<object>`, in the worst case. That being said, why not make a `class ThingValidator<T> : IValidator<T> {}`, given how little detail we have?
There aren't enough details here to demonstrate the weaknesses of generics.