
I was wrong, reflecting on the .NET design choices - redknight666
https://ayende.com/blog/177921/i-was-wrong-reflecting-on-the-net-design-choices
======
jasode
If I attempt to generalize, I think that in most places where C#'s language
design deliberately differs from Java's, C# made the better choice. Examples include:

\+ not virtual by default (so base classes can be changed more easily without
breaking/recompiling downstream clients that the base class writer doesn't
know about; specifying something as "virtual" should be a deliberate, conscious
decision by the class author; see the sketch after this list)

\+ value types (for speed, because J Gosling's idea that "_everything_ is an
object is 'simpler' for programmers" has a cost -- the boxing & unboxing, and
inefficient cache-unfriendly pointer-chasing containers)

\+ no checked exceptions (checked exceptions have benefits in theory but real-
world practice shows that it forces programmers to copy-paste mindless
boilerplate to satisfy the checked-constraint)

\+ unsigned types (very handy for P/Invoke API boundaries because legacy Win32
has unsigned params everywhere; yes, yes, Gosling said that unsigned types are
confusing and dangerous but nevertheless, they are still very useful)

\+ many other examples
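
To make the first point concrete, here's a minimal C# sketch (hypothetical
classes, not from any real codebase): the base-class author has to opt a member
into overriding with "virtual", and the compiler rejects an override of anything else.

    public class ReportBase
    {
        public virtual string Title() => "Report";    // deliberately opened up for overriding
        public string Footer() => "Confidential";     // non-virtual: stays under the author's control
    }

    public class SalesReport : ReportBase
    {
        public override string Title() => "Sales";    // OK: the base member is virtual
        // public override string Footer() => "X";    // error CS0506: cannot override a non-virtual member
    }

In Java, by contrast, both methods would be overridable unless the author remembered to mark them final.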

This doesn't mean that Anders Hejlsberg's C# language team was smarter than J
Gosling's Java team. They simply had 7 years to observe how Java programmers
(mis)used the language and therefore, could correct some design mistakes.

Nevertheless, C# still made some dubious decisions such as renaming
"finalizers" to "destructors". Java had the better name and C# should have
kept the original terminology.

~~~
namelezz
What do you think of partial classes and methods in terms of code quality?

~~~
jasode
I've always thought of "partial classes" as a language feature motivated by
code generators not stomping on programmers' manually entered code. E.g.
Winforms generates some _declarations_ in one partial class while the
programmer codes UI _handlers_ in the other partial class.
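
Roughly, a stripped-down sketch of that split (file names and members are
illustrative; the real designer file carries more boilerplate):

    // MainForm.Designer.cs -- regenerated by the Winforms designer, not edited by hand
    public partial class MainForm : System.Windows.Forms.Form
    {
        private System.Windows.Forms.Button saveButton;

        private void InitializeComponent()
        {
            saveButton = new System.Windows.Forms.Button();
            saveButton.Text = "Save";
            saveButton.Click += OnSaveClicked;
            Controls.Add(saveButton);
        }
    }

    // MainForm.cs -- the programmer's half of the same class
    public partial class MainForm
    {
        public MainForm()
        {
            InitializeComponent();
        }

        private void OnSaveClicked(object sender, System.EventArgs e)
        {
            // hand-written handler code lives here, safely out of the generated file's way
        }
    }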

~~~
namelezz
When I first learned C#, I had a hard time navigating the code due to partial
methods. So I have always wondered what C# developers think of having their
methods spread out across different files.

~~~
dragonwriter
Partial classes and partial methods exist specifically to support having code
that is automatically generated paired with a human-managed source file.

While you _could_ use the support for them to split human-managed code across
separate source files, that would be a horrible practice that I've never
encountered in the wild, even in .NET shops with otherwise-atrocious practices
and code quality.

~~~
jasode
_> While you could use the support for them to split human-managed code across
separate source files, that would be a horrible practice _

To continue that type of advice, some say "#region/#endregion" is another
language feature that's intended for code generators, so that the IDE can
collapse specific lines of code and hide them from view. Programmers should not
be hand-coding "#region" themselves. That said, there is debate on that:
[https://softwareengineering.stackexchange.com/questions/5308...](https://softwareengineering.stackexchange.com/questions/53086/are-regions-an-antipattern-or-code-smell)
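
For anyone who hasn't seen it, this is all the directive does (names made up);
the IDE folds each region down to a single collapsible line:

    public class OrderService
    {
        #region Public API
        public void Place(string orderId) { /* ... */ }
        public void Cancel(string orderId) { /* ... */ }
        #endregion

        #region Private helpers
        private static bool IsValid(string orderId) => !string.IsNullOrEmpty(orderId);
        #endregion
    }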

~~~
partisan
We use regions to standardize the layouts of our classes. Except for trivial
classes, you will find the same regions in the same positions making it
eas(ier) to find, for example, all methods implementing an interface, or all
of the private helper methods.

~~~
tigershark
Regions are just useless visual clutter most of the time. You can put the
methods/fields in the same order without using regions. In my experience
regions are a very bad practice, used only to mask the bad design that produced
gigantic classes. The only place where I think they may be helpful is when you
are writing a library and your class _must_ be huge because you are
implementing, for example, a Trie or some other collection or some other fairly
complicated object that doesn't make sense to divide into smaller classes. And
even in that case I would first try really, really hard to split it into
smaller entities rather than just having a thousand-line class with some
regions around.

~~~
partisan
Not sure where to go from there. You've precluded the possibility that regions
and good design can exist at the same time in the same file.

Where does the absolutism in the tech industry come from? We are a bunch of
individuals who have individual experience and then try to form a view of the
world that satisfies our experiences. What about the experiences you haven't
had or conceived of? We are constantly rewriting the rules in our head to fit
the new experiences we have every day to make sure we are right all of the
time. Surely, our current world views are not complete or we would have no
room to grow.

Still, I'll take your comment under advisement in case my classes are big,
poorly designed non-Tries.

------
klodolph
Java and C# are seen as "old news" by some here on HN but there's a trove of
software engineering wisdom in there. It was a huge boon for Microsoft to be
able to learn from Sun's mistakes when they were designing a language that, on
paper, is basically the same thing. C#, Java, and Go are all "wonderfully
boring" languages which is a divisive topic, but they're all very good at
being boring languages.

It's not just language features that made C# an improvement over Java, either.
CIL is a fair bit more elegant than JVM bytecode. JVM bytecode has type-
specific operations to speed up bytecode interpreters, which turns out to be
irrelevant since nobody cares about bytecode interpreter performance these days.

~~~
psyc
As a low-level game dev, I consider C# a nearly perfect, wonderful language,
tragically self-defeated by garbage collection.

~~~
noxToken
Could you explain why garbage collection is a bad thing in your use case? The
GC vs. non-GC debate always fascinates me, and I have no strong opinion either way.

Also, doesn't C# have the ability to limit (or pretty much disable) GC?
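
(For reference, these are the knobs I have in mind; a minimal sketch of real
.NET APIs, though I haven't pushed them hard myself:)

    using System;
    using System.Runtime;

    class GcControlSketch
    {
        static void Main()
        {
            // Ask the runtime to minimize pause times over an extended period;
            // full blocking collections should only happen under memory pressure.
            GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;

            // For a critical section, pre-reserve a budget and forbid collections
            // until the section ends or the budget is exhausted.
            if (GC.TryStartNoGCRegion(16 * 1024 * 1024))   // 16 MB budget
            {
                try
                {
                    // ... latency-sensitive work that allocates within the budget ...
                }
                finally
                {
                    // Note: throws if the runtime already left the no-GC region
                    // (e.g. the budget was blown), so production code checks for that.
                    GC.EndNoGCRegion();
                }
            }
        }
    }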

~~~
sqeaky
Garbage collection is the bane of smooth framerates.

Players notice when the framerate drops. Presuming a pretty typical 60 Frames
Per Second you have a tight 16.6ms time budget to do all of the work for the
entire frame. All of the physics, all of the sound processing, all of the AI
and everything else needs to be sliced up into little bits that can be
distributed across the time the game is played.

There are many good ways to achieve this, and many ways that just appear good
until someone else plays your game. If you allocate dynamically even
occasionally, you need a strategy to allocate about as much as you deallocate
each frame or otherwise mitigate the costs.

C++ has this problem completely solved with strong deterministic semantics
between destructors, smart pointers and allocators. This can be handled in C#
a few ways as well, but sometimes a bunch of uncollected garbage builds up and
only gets cleaned up between level loads or during downtime. When the frame
rate drops because the garbage collector consumes 1 of the 2 hardware threads
in the middle of a firefight, players get mad. If you only ever tested on your
nice shiny i7 with 8 hardware threads, you might never notice until a bug
report lands in your inbox. And that presumes it wasn't one of the
stop-the-world collections and that you couldn't have used that last hardware
thread better than the GC, both of which would negate the GC's benefit altogether.

Done right deterministic resource allocation costs almost nothing. You can get
to zero runtime cost and nearly zero extra dev time. In practice a little
runtime cost is fine, and a little time spent learning is OK, but a bug report
in the final hour before shipping that the frame rate drops on some supported
hardware setups but not others is really scary.
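
(To be concrete, the usual mitigation in GC'd game code is pooling: allocate
up front, recycle every frame, and the steady state produces no garbage at
all. A rough sketch, not production code:)

    using System.Collections.Generic;

    public sealed class Pool<T> where T : class, new()
    {
        private readonly Stack<T> _free = new Stack<T>();

        public Pool(int capacity)
        {
            // One burst of allocation at load time, instead of a trickle during play.
            for (int i = 0; i < capacity; i++) _free.Push(new T());
        }

        public T Rent() => _free.Count > 0 ? _free.Pop() : new T();  // growth is the fallback, not the norm
        public void Return(T item) => _free.Push(item);
    }

    public sealed class Bullet { public float X, Y, VelX, VelY; }

    // Per frame: rent on spawn, return on despawn -- nothing for the collector to chase.
    //   var bullets = new Pool<Bullet>(1024);
    //   Bullet b = bullets.Rent();
    //   ...simulate...
    //   bullets.Return(b);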

~~~
skybrian
I wonder if the very low-latency GC in Go would be good enough, though? The
occasional dropped frame doesn't seem like the end of the world, so long as it
remains rare.

In practice, most games don't have entirely reliable performance, particularly
on low-end hardware.

~~~
sqeaky
Depends on the game.

Drop a frame in a competitive First Person Shooter and be ready for death
threats.

Drop a frame in an angry birds clone and be ready for 5 stars in a review just
because you made your first game.

I suspect you could get away with quite a bit of GC in most games. But by the
time you learn whether or not you can get away with it, you have fully
committed to a language for several months. Unless you fully committed to D,
you are stuck with your memory management strategy. To stay risk averse, game
devs dodge GC languages entirely, because the benefit is small compared to the
potential risk. Combine this with how everyone wants to make the next
super-great <insert genre here> MMO that will blow everyone away: they think
they must squeeze every drop of perf out of the machine, and sometimes they
are right.

Lua is hugely popular for scripting in games. World of Warcraft used it to
script the UI. Its garbage collector can be invoked in steps. You can tell it
to get all the garbage or just to get N units of garbage. If you tell it to
get 1 unit of garbage each frame while frugally allocating I expect you could
easily meet the demands of many casual games.

Then there are games like Kerbal Space Program. All C# and all crazy
inconsistent with performance. It will pause for no apparent reason right as
you try to extend your lander legs and cause you to wreck your only engine on
a faraway planet. I cannot say with certainty it is GC, but that cannot be
helping.

------
dahart
> However, given that I’m working on a database engine now, not on business
> software, I can see a whole different world of constraints.

This might be my own confirmation bias, but this is my takeaway: point of
view, and the constraints you see or believe are there, are the main
determinants of choices, not whether some particular pattern or feature of a
language is intrinsically good.

The older I get and the more code I write, the more I find this to be true. I
change my own mind about things I used to argue fiercely over and things I
thought were tautologically true.

I think (hope) this is making me more open minded as I go, more willing to
listen to opposing points of view, and more able to ask questions about what
someone else's constraints are rather than debating about the results. But,
who knows, I might be wrong.

~~~
nickbauman
I have many years of programming in Java under my belt. Until I started using
dynamic languages I thought static typing was really important. It's not.

~~~
sqeaky
It rules out certain categories of bugs, makes it hard to assign a string to
an int, etc...

If you are writing a small, one-time-use script to accomplish a task, clearly
that kind of protection is of low value.

If you are trying to write or maintain a system intended to last 20 years and
keep bugs out of 100 million lines of code, every kind of check that can be
automated has extremely high value.

Most projects are somewhere between these two extremes. The nature of the
cutoff point where strong static typing helps or does not is what we should be
debating, not its inherent value, as dahart suggested.

~~~
Jach
Simple designations like "static typing" and "dynamic typing", even when you
bring in the concept of strong vs. weak (Java allows concatenating an int to a
string, Python throws an error), aren't very helpful when languages like
Common Lisp exist. (Edit: nor are "compiled" vs. "interpreted", for the same
reason, but especially in current_year when just about everything compiles to
some form of bytecode; whether that is then run by assembly or by microcode
running assembly is a small distinction.) Specific languages matter more, and
specific workflows within languages matter more too. And as you say, what
you're trying to build also matters, but not all that much.

~~~
sqeaky
You are right that the type system waters are muddied by a variety of
technologies and perhaps that isn't the best line to draw. I think your focus
on the semantics of static vs dynamic dodges much of my point.

The crux of my argument was that the larger and more complex the work, the
more important it is to find errors early. It seems obvious to me that
languages like Java, C++ and Rust do much more to catch errors early than
languages like Ruby, Python and JavaScript, which are easier to get started
with and to build a minimum viable product in. Put those two things together
and it seems like a strong heuristic to use when starting a project.

~~~
Jach
This is why I think workflows matter too, at least as much as the language
itself. If you write Python like you write Java, of course you're not going to
catch some errors that Java would have caught before you ship, and you're
probably going to be frustrated when you're at a company where everyone writes
Python like Java. But if you write Python like Python (you can't write Java
like Python), you'll find many of your errors almost immediately after you
write the code, because you're trying it out in the REPL right away and
writing chunks in a way that makes that easier to do in Python.

Maybe a few type errors will still slip by, but you'll have found and fixed so
many other kinds of errors much earlier. Kinds of errors that benefit by being
caught immediately instead of festering because they passed a type checker.
(I've never really found a type error to be a catastrophic-oh-I-wished-we-
found-this-sooner type of bug. You fix it and move on. It's not dissimilar to
fixing various null pointer exceptions that plague lots of corporate Java
code.)

To me your obvious claim is not obvious at all, because the tradeoff space is
so much richer than what mere type systems allow. We're not even touching on
what you can do with specs and other processes that happen before you code in
any language, nor other language tradeoffs like immutability, everything-is-a-
value, various language features (recalling Java only recently got streams and
lambdas), expressiveness (when your code is basically pseudocode without much
ceremony (or even better when you can make a DSL) there's a lot fewer places
for bugs to hide)... Typing just doesn't tell that much of a story.

~~~
tigershark
The type system is your friend, not your enemy. You are comparing type errors
to null pointer exceptions, a.k.a. the billion-dollar mistake. You can have an
extremely powerful type system, with very little ceremony, that continuously
checks that you are not shooting yourself in the foot, AND still have a REPL;
for example, F#. Your code will be extremely expressive and creating DSLs can
be a breeze, with the huge benefit that even your DSL will be type checked at
compile time.

~~~
Jach
That's my point, all that is in favor of F#, not static typing in general. I'm
not opposed to type systems -- Lisp's is particularly nice, I like Nim's --
but having static types or not isn't enough of a clue that such a language
really is suitable for large systems or can catch/prevent worse errors
quicker.

------
hdhzy
I think it's valuable to read interviews with Anders Hejlsberg [0] about the
design process for both .NET and C#. They are old but clearly communicate why
certain decisions have been made (spoilers: compatibility).

[0]:
[http://www.artima.com/intv/anders.html](http://www.artima.com/intv/anders.html)

------
andrewvc
You never really learn a language, and you never really are an expert in using
it.

While you may know a lot about what the language is, how it works, and
accepted ways of using it, your opinions on how to do things will always be
evolving (hopefully).

Sometimes all the experts who use a language will be behind the times. For a
long time experts championed strong OO design. Now all the experts champion
hybrid OO/FP style things (witness Java 8!).

This too shall pass, and we should have the humility to realize that no-one
knows for certain what will be the next evolution of software development.

------
oop17
One of the greatest lingering flaws in both C# and Java is the lack of
metaclasses.

Because classes aren't real objects and therefore not necessarily also
instances of other classes (their metaclasses) as they would be in Smalltalk,
there is no class-side equivalent of "self/this," nor of "super." In effect,
you cannot write static (class) methods that call other static methods without
explicitly referencing the classes on which those other methods are defined,
completely breaking class-side inheritance and rendering class behavior (and
instance creation in particular) needlessly brittle.

I believe the explosion of factories, abstract factories, and just generally
over-engineered object construction and initialization schemes in Java and C#
would have been side-stepped if both languages had always had a proper
metaclass hierarchy paralleling the regular class hierarchy, as well as some
form of local type inference.
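
A small C# illustration of the gap (hypothetical classes): static members
resolve against a class name at compile time, so there is nothing like an
overridable class-side method.

    public class Animal
    {
        public static Animal Create() => new Animal();

        public static Animal CreateAndTag()
        {
            // Always builds an Animal, even when invoked as Dog.CreateAndTag():
            // there is no class-side "this" that could re-dispatch to Dog.Create().
            return Create();
        }
    }

    public class Dog : Animal
    {
        public static new Animal Create() => new Dog();   // hides Animal.Create, does not override it
    }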

~~~
klodolph
> I believe the explosion of factories, abstract factories, and just generally
> over-engineered object construction and initialization schemes in Java and
> C# would have been side-stepped if both languages had always had a proper
> metaclass hierarchy paralleling the regular class hierarchy, as well as some
> form of local type inference.

That's a bit harsh. "Factory" is a term that became prominent in Java as a
result of the language design decision not to include first-class functions,
so any time you see "Factory" just think "function that returns an object",
and any time you see "AbstractFactory", think, "type of function that returns
an object". In C# you can just use delegates and the explosion of factories
isn't really there.
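
(A toy sketch of what I mean, with made-up names: where Java would grow an
IWidgetFactory interface plus an implementation, C# can just take a delegate.)

    using System;

    public interface IWidget { void Draw(); }
    public sealed class Button : IWidget { public void Draw() { /* ... */ } }

    public sealed class Toolbar
    {
        private readonly Func<IWidget> _makeWidget;   // "a function that returns an object"

        public Toolbar(Func<IWidget> makeWidget) => _makeWidget = makeWidget;

        public void AddWidget() => _makeWidget().Draw();
    }

    // Wiring it up:
    //   var toolbar = new Toolbar(() => new Button());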

I'd say your opinion of this explosion might change if you work in a good
codebase which makes sensible use of techniques like IoC. Yes, it feels a bit
silly to have a component in your project which does nothing more than
instantiate objects, but you end up with classes that are much more cleanly
defined in terms of the interfaces they expose and consume, and you can write
unit tests that don't make you feel like you're damaging your code base to get
the unit test to work.

At least, when it goes well.

My experience with metaclass programming (a fair bit of Python metaclass
programming) is that it can often be replaced by generics, reflection, or
various code generation tricks in C#, and I don't end up missing metaclass
programming that much. Metaclass programming isn't a silver bullet, it's a
tool that complements other tools in the right toolbox (Python, Smalltalk) but
would just get in the way in other toolboxes (C#, Go).

There's a narrative here that we're somehow "neglecting" the lessons we
learned with old systems like Smalltalk, Lisp, etc. when we make languages.
It's a seductive narrative but I think it's mostly papering over the sentiment
that language X isn't like my favorite language, Y, and therefore it's bad. I
welcome the proliferation of different programming paradigms, and besides a
few obvious features (control structures, algebraic notation for math) there
are few features that make sense in every language. That especially includes
metaprogramming, generics, reflection, macros, and templates.

~~~
kazinator
First class functions aren't replacements for factories. An abstract factory
provides several methods for constructing related objects of different types.
The objects come from different type hierarchies, whose inheritance structures
mimic each other. For instance, you might have a hierarchy of EncryptionStream
and EncryptionKey objects. Both derive in parallel into AESEncryptionStream
and AESEncryptionKey. Then you have an EncryptionFactory base class/interface
which has MakeStream and MakeKey methods. This is derived into
AESEncryptionFactory, whose MakeStream makes an AESEncryptionStream and whose
MakeKey makes an AESEncryptionKey.

The client just knows that it has an EncryptionFactory which makes some kind
of stream and some kind of key, which are compatible.
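
In C#, the shape being described is roughly this (a sketch with the details elided):

    public abstract class EncryptionStream { /* ... */ }
    public abstract class EncryptionKey { /* ... */ }

    public sealed class AESEncryptionStream : EncryptionStream { /* ... */ }
    public sealed class AESEncryptionKey : EncryptionKey { /* ... */ }

    public abstract class EncryptionFactory
    {
        public abstract EncryptionStream MakeStream();
        public abstract EncryptionKey MakeKey();
    }

    public sealed class AESEncryptionFactory : EncryptionFactory
    {
        public override EncryptionStream MakeStream() => new AESEncryptionStream();
        public override EncryptionKey MakeKey() => new AESEncryptionKey();
    }

    // The client works only against EncryptionFactory and gets a stream/key pair that match:
    //   void Setup(EncryptionFactory f) { var s = f.MakeStream(); var k = f.MakeKey(); /* ... */ }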

AbstractFactory doesn't specifically address indirect construction or indirect
use of a class, but it does solve a problem that can also be addressed with
metaclasses. If we can just hold a tuple of classes, and ask each one to make
an instance, then that kind of makes AbstractFactory go away.

The thing is that in a language like Java, these factories have rigid methods
with rigid type signatures. The MakeKey of an EncryptionFactory will typically
take the same parameters for all key types. The client doesn't know which kind
of stream and key it is using, and uses the factory to make them all in the
same way, using the same constructor parameters (which are tailored to the
domain through the EncryptionFactory base/interface).

If we have a class as a first class object (such as an instance of a
metaclass), that usually goes hand in hand with having a generic construction
mechanism. For instance, in Common Lisp, constructor parameters are
represented as keyword arguments (a _de facto_ property list). That bootstraps
from dynamic typing. All object construction is done with the same generic
function in Common Lisp, the generic function _make-instance_. Thus all
constructors effectively have the same type signature.

Without solving the problem of how to nicely have generic constructors, simply
adding metaclasses to Java would be pointless. This is possibly a big part of
the reason why the feature is absent.

~~~
klodolph
Yes, you're absolutely right that functions don't cover all use cases of
factories. I was mostly thinking about the "why are there factories
_everywhere_ " complaint, which is mostly about factories that just produce
one object.

> If we can just hold a tuple of classes, and ask each one to make an
> instance, then that kind of makes AbstractFactory go away.

That seems like just one particular way to solve things. I guess I don't see
what the fuss is about, if we are talking about metaclasses in particular,
because we could also solve this problem with generics, and the factory
solution doesn't seem that bad to begin with.

> Thus all constructors effectively have the same type signature.

Or turned around, the type system is not expressive enough to assign different
types to different constructors, and is incapable of distinguishing them. This
matches with my general experience, that metaclasses are useful on the dynamic
typing side (Python, Lisp, Smalltalk, JavaScript) but annoying on the static
typing side (C++, Haskell, C#).

But of course that makes sense. In a system without static types, the only way
to pass a class to a function is through its parameters, so you have to pass
the class by value. In systems with static typing, you have the additional
option of passing a class through a type parameter, which has the advantage of
giving you access to compile-time type checking. Furthermore, there are real
theoretical problems with constructing type systems that allow metaclasses,
involving whether the type checker is sound and whether it will terminate.

------
hyperpape
While I'm inclined to suspect that non-virtual by default is better from a
design perspective‡, don't assume the point about performance is overwhelming.
HotSpot has done devirtualization for a long time. It can detect not only when
a method is never overridden, but also when a particular call site only ever
sees a single implementation. A virtual method that's never overridden can
sometimes have no extra overhead, while a virtual method that is overridden
may have sufficiently small overhead that it rarely matters.

[http://insightfullogic.com/2014/May/12/fast-and-megamorphic-...](http://insightfullogic.com/2014/May/12/fast-and-megamorphic-what-influences-method-invoca/)

‡ I've used non-OO languages, but never an OO language without virtual by
default.

~~~
kjksf
Since we're talking about performance: the time it takes HotSpot to perform
this optimization is also a perf hit for your program.

At the end of the day, the fastest code is one that doesn't have to run.

HotSpot is an impressive technology, but the optimizations it has to do to
overcome Java's design really only pay for themselves in the most frequently
executed code paths, and only after some time spent gathering the info
necessary to perform them.

It's OK for long-running server code but not good for, say, a short-lived
command-line program.

Or to put it differently: a language with a perf-friendly design, like Go,
matches Java's speed with 10% of the engineering time and resources spent on
the compiler and optimizations. A perf-friendly design means the compiler has
to do only 10% of the work to achieve the same end result.

~~~
hyperpape
This may be true in general, but the CLR uses bytecode and a JIT compiler, so
that point may be a lot less relevant to it. In addition, devirtualization is
apparently valuable enough that they're going to add it to the CLR, per the
article.

~~~
MichaelGG
Java compiles to bytecode and most implementations JIT, just like .NET. JVMs
are more advanced than the CLR at optimization.

~~~
hyperpape
Yes, my point is that once you're comparing two environments that use bytecode
and a JIT, you can't necessarily cite the cost of startup time and the cost of
JIT compilation as a reason to avoid possibly-virtual calls.

------
matchagaucho
_" Another issue is that my approach to software design has significantly
changed. Where I would previously do a lot of inheritance and explicit design
patterns, I’m far more motivated toward using composition, instead."_

This, more than anything, has dramatically improved the quality of my
designs... and made coding fun again.

Immutability and Lambda functions have also had a tremendous impact on my
designs.

Is the term _Object-Oriented-Programming_ relevant anymore?
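
(A toy contrast, with made-up names: instead of subclassing to get behavior,
the object is handed the behavior it needs.)

    public interface IExporter { void Export(string content); }
    public sealed class PdfExporter : IExporter { public void Export(string content) { /* ... */ } }
    public sealed class CsvExporter : IExporter { public void Export(string content) { /* ... */ } }

    // Composition: Report *has* an exporter rather than *being* a PdfReport subclass.
    public sealed class Report
    {
        private readonly IExporter _exporter;
        public Report(IExporter exporter) => _exporter = exporter;
        public void Save() => _exporter.Export("...");
    }

    // var report = new Report(new CsvExporter());   // swap behavior without a new subclass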

~~~
jameslk
I don't see how preferring composition over inheritance makes OOP less
relevant. In fact, GoF even _suggests_ using composition over inheritance in
OOP. Nor do I see how immutability and lambda functions are mutually exclusive
with OOP either. You can have all of these things and still reap plenty of
benefits from OOP. The benefits of OO polymorphism and several decades worth
of architectural design patterns are not irrelevant just because functional
programming concepts exist. Both should be used advantageously and when
appropriate.

~~~
matchagaucho
It's possible to do _Functional Programming_ with an OO language these days
using anonymous methods/Lambdas.

The GoF patterns either need updating or perhaps we're on the verge of calling
this hybrid environment something completely different (?)

------
marsrover
I'm surprised the creator of RavenDB took this long to come around on
composition vs inheritance. Good on him for admitting his transgressions,
however.

~~~
sevensor
> Another issue is that my approach to software design has significantly
> changed. Where I would previously do a lot of inheritance and explicit
> design patterns, I’m far more motivated toward using composition, instead.

I picked up on this too; object-oriented design patterns have lost a lot of
mindshare over the last ten to fifteen years. There was a time when it seemed
like design patterns were taking over the world. We're still living with some
of the monstrosities spawned during that era. I wonder if design patterns can
ever be rehabilitated.

------
PaulHoule
Java is not "virtual by default", it is virtual-only, except for private
methods, which don't participate in inheritance.

I like the Java convention, for one thing, because it is one less decision for
programmers to make. I've seen many C# programmers who are oblivious to what
virtual means.

~~~
0x0
What about "protected final"?

------
amag
_" Someone was wrong on the Internet, it was me."_

------
martamoreno
The bigger question is why the hell a post like this makes it to the top of
Hacker News. This was by far the most pointless read ever.

On top of that, arguing that non-virtual by default is worse than virtual by
default is completely superfluous. Just add the damn keyword everywhere and
you have virtual everywhere. Same for final.

But Java has everything non-final and virtual by default, which sucks badly
because both require great care when implementing the method.

Extending code that was not designed to be extended is very common in Java,
because you can. Adding final can easily be forgotten. Removing the implicit
final, which is required in C# (via the virtual keyword), will only be done IF
you intended to make that method extendable.

Yes, a great gain. Now I need to argue for each final I add to Java classes
and methods because, you know, it seems wasteful to add it, while in fact it
is crucial, since maybe just 1% of any code I write was meant to be
replaceable by a third party. Mostly, you want to use other mechanisms for
extension, like decoration & composition.

If it took ten years to learn that falsehood (non-virtual is worse than
virtual by default), then we're talking about one hell of a regression, huh.

~~~
mihular
Virtual methods are also significantly slower to invoke.

~~~
sqeaky
Significantly?

Have any benchmarks to back that up?

Last time I benchmarked in the language I use most (C++) I couldn't get a
difference distinguishable from my margin of error.
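
(For anyone who wants to try the same in C#, a rough Stopwatch sketch is
below. Take it with salt: the JIT may inline or devirtualize either loop, and
a serious measurement would use something like BenchmarkDotNet.)

    using System;
    using System.Diagnostics;

    class Base { public virtual long Step(long x) => x + 1; }
    class Derived : Base { public override long Step(long x) => x + 1; }
    sealed class Plain { public long Step(long x) => x + 1; }

    static class VirtualCallBench
    {
        const long N = 100_000_000;

        static long VirtualLoop(Base b) { long acc = 0; for (long i = 0; i < N; i++) acc = b.Step(acc); return acc; }
        static long DirectLoop(Plain p) { long acc = 0; for (long i = 0; i < N; i++) acc = p.Step(acc); return acc; }

        static void Main()
        {
            var b = (Base)new Derived();   // virtual dispatch through a base reference
            var p = new Plain();           // non-virtual, likely inlined by the JIT

            VirtualLoop(b); DirectLoop(p); // warm-up so JIT compilation isn't timed

            var sw = Stopwatch.StartNew();
            long r1 = VirtualLoop(b);
            TimeSpan tVirtual = sw.Elapsed;

            sw.Restart();
            long r2 = DirectLoop(p);
            TimeSpan tDirect = sw.Elapsed;

            Console.WriteLine($"virtual: {tVirtual.TotalMilliseconds} ms, direct: {tDirect.TotalMilliseconds} ms ({r1}, {r2})");
        }
    }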

