I was wrong, reflecting on the .NET design choices (ayende.com)
165 points by redknight666 on April 21, 2017 | 137 comments



If I attempt to generalize, I think that where C#'s language design differs from Java's, C# usually made the better choice. Examples include:

+ not virtual by default (so base classes can be changed more easily without breaking/recompiling downstream clients that the base class writer doesn't know about; specifying something as "virtual" should be a deliberate, conscious decision by the class author; see the sketch after this list)

+ value types (for speed, because J Gosling's idea that "_everything_ is an object is 'simpler' for programmers" has a cost -- the boxing & unboxing, and inefficient cache-unfriendly pointer-chasing containers)

+ no checked exceptions (checked exceptions have benefits in theory but real-world practice shows that it forces programmers to copy-paste mindless boilerplate to satisfy the checked-constraint)

+ unsigned types (very handy for P/Invoke API boundaries because legacy Win32 has unsigned params everywhere; yes, yes, Gosling said that unsigned types are confusing and dangerous but nevertheless, they are still very useful)

+ many other examples
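On the first point, a minimal C# sketch (type and member names are made up for illustration):

    class Repository
    {
        // Non-virtual: callers can call it, but nobody can override it,
        // so the author is free to change it later without breaking subclasses.
        public void Save() { /* ... */ }

        // Overridability is an explicit, deliberate opt-in.
        public virtual void Validate() { /* ... */ }
    }

    class AuditedRepository : Repository
    {
        public override void Validate() { /* extra checks */ }
        // "public override void Save()" would not compile; in Java the
        // equivalent method would be overridable unless marked final.
    }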

This doesn't mean that Anders Hejlsberg's C# language team was smarter than J Gosling's Java team. They simply had 7 years to observe how Java programmers (mis)used the language and therefore, could correct some design mistakes.

Nevertheless, C# still made some dubious decisions such as renaming "finalizers" to "destructors". Java had the better name and C# should have kept the original terminology.


> + no checked exceptions (checked exceptions have benefits in theory but real-world practice shows that it forces programmers to copy-paste mindless boilerplate to satisfy the checked-constraint)

As languages like Rust or Swift demonstrate, the issue is less the checked exceptions and more the abject lack of support for them in the language and type system, which ends up making them essentially unusable.

> Gosling said that unsigned types are confusing and dangerous

He is right of course, but he forgets that so are signed types if they're bounded.

If Java had unbounded signed integers (à la Python or Erlang) that'd be one thing, but Java does not, and neither does it have Pascal-style restrictions/user-defined integral bounds, which means it's no less confusing or dangerous; it's just more limited.


Neither Swift nor Rust have exceptions, checked or otherwise.

The kind of exceptions that unwind the stack until some part of the code up the stack catches the exception.

They both handle errors by returning error values, kind of like Go.

Rust has a try! macro, which might make you think it's a try/catch equivalent, but it's not. It's just syntactic sugar over error values.

Similarly in Swift try/catch/throw is just syntactic sugar for handling error values.

https://doc.rust-lang.org/book/error-handling.html

https://developer.apple.com/library/content/documentation/Sw...


While it's true that Swift and Rust don't unwind the stack, this is just an implementation detail.

The comparison to Go is very misleading. Neither Swift nor Rust require you to return a value on error like Go does, nor do they allow you to simply ignore errors. The semantics, not the implementation, is what matters.

Swift's error handling is not "syntax sugar." try/catch in Swift are not macros that desugar into normal Swift. Errors are not returned via the normal return path, but via a dedicated register. Just like stack-unwinding exceptions, Swift errors are part of the core ABI.

https://github.com/apple/swift/blob/master/docs/ABIStability...


I have absolutely no experience with Rust, but I know it normally depends on libunwind; isn't that for unwinding the stack?


Perhaps to implement https://doc.rust-lang.org/std/panic/fn.catch_unwind.html

But also just to display traces on asserts/panics, even if not "unwound" per se.


By default, rust's panics _do_ unwind the stack. However, you can also set a flag to compile them as an abort instead. Stack traces are still useful in that case.


> Neither Swift nor Rust have exceptions, checked or otherwise.

The point is that both of them have generic error types which "infect" every caller (transitively) in much the way checked exceptions do.

That Rust and Swift require explicitly bubbling error values should be a point in favour of checked exceptions.


> That Rust and Swift require explicitly bubbling error values should be a point in favour of checked exceptions

I thought the explicit bubbling was the nicest thing about error handling in Rust. It's usually just a single character ('?') that the editor can easily highlight, and nicely indicates where the operations are that might fail.

I'd add that checked exceptions don't play nicely with general functional/stream operations like "map" which is why Java went with unchecked exceptions for their streams api in Java 8. Rust on the other hand can handle interior failures in such functions nicely, promoting them to a single overall failure easily via collect(), using the blanket FromIterator<Result<A, E>> implementation for Result<V, E>.


> I'd add that checked exceptions don't play nicely with general functional/stream operations like "map" which is why Java went with unchecked exceptions for their streams api in Java 8. Rust on the other hand can handle interior failures in such functions nicely, promoting them to a single overall failure easily via collect(), using the blanket FromIterator<Result<A, E>> implementation for Result<V, E>.

And Swift has a `rethrows` marker to transitively do whatever a callback does, it's equivalent to "throws" if the callback throws, and to nothing if it does not. So e.g. `map(_ fn: (A) throws -> B) rethrows` will throw if the callback it is provided throws, and not throw if the callback doesn't throw.


As I see it, the Rust (etc) approach avoids two problems:

1. The C problem of forgetting to check error returns. Yes, exceptions (checked or not) also avoid this.

2. The C++/Java/C# problem of exceptions being more expensive than you'd like for common error situations. .NET has sprouted alternatives like `bool tryParse(String, out int)` as a workaround, but on balance I prefer the unified mechanism.

What I don't like is how innocuous `unwrap` looks, or how often it appears in example code.


> The C++/Java/C# problem of exceptions being more expensive than you'd like for common error situations.

Exceptions should not be used for common error situations! This is the prime mistake of Java, which pretty much requires exceptions where it shouldn't (and checked exceptions to boot).

> .NET has sprouted alternatives like `bool tryParse(String, out int)` as a workaround

I don't consider that a workaround. There is a deep semantic difference between TryParse and Parse. If you see TryParse then you know the data is expected to be invalid. If you see Parse then you know it's expected to be always valid. A good C# program should have very very few try/catch blocks (ideally just one).
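Concretely (int.TryParse and int.Parse are the real BCL methods; the surrounding names are made up):

    // TryParse: the input is expected to sometimes be invalid,
    // so the failure is handled locally, no exception involved.
    if (int.TryParse(userInput, out int quantity))
        AddToCart(quantity);
    else
        ShowValidationError();

    // Parse: the input is expected to always be valid; if it isn't,
    // that's a bug, and the FormatException goes to the top-level handler.
    int port = int.Parse(configuredPort);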


> There is a deep semantic difference between TryParse and Parse

Sure, but I don't think that splitting every possibly-failing API call into throwing and non-throwing forms is the right way to express that difference, and a lot of the time it'll be a matter of context (meaning that the implementation can't magically do the right thing).

It's fairly easy to layer throwing behaviour on top of nonthrowing in a generic and efficient way (Rust's Option, Java's Optional etc), but the reverse is not true.

I must admit I'm losing track of what if anything we're in disagreement about, though...


I agree it's easy to layer throwing behavior on top of non-throwing behavior -- Java easily chose the worst possible way to do it.

But having both a Parse and TryParse means that I can ignore the result of the Parse call entirely and let it fall through to the exception handler. It is by-definition always expected to succeed so when it doesn't then that's a bug. If you only have one of TryParse or Parse you cannot judge the intention.


> If you only have one of TryParse or Parse you cannot judge the intention.

Sure you can. To take Java as an example, if you only have one parse method and it returns an `Optional`, you can indicate the intention by whether or not you call `get` directly or call something like `isPresent` or `orElse` first/instead. Yes, you can get that wrong, but you can get the choice between `TryParse` and `Parse` just as wrong.


That's a very good point. I do this all the time with nullable types; I'm not sure why I didn't consider it that way.


It's not really that innocuous. It'll cause the program to crash as soon as it's discovered that there wasn't a value where you were expecting one.

In languages with exceptions, the program will crash as soon as you try to use that value (rather than when you try to unwrap it), e.g. the infamous null pointer exception. Copying bad sample code in this case might result in code that is difficult to debug because a null value might be handed off several times before something tries to dereference it.

In languages that expect but do not enforce that you check the validity of the value (like C) you'll just get undefined behaviour that will hopefully cause your program to segfault when you try to use the value, but who knows what will actually happen? Copying bad sample code in this case will cause a security vulnerability.

Copying "bad" sample rust code (using unwrap) will cause a safe crash with maximum locality, for simpler debugging.


> The kind of exceptions that unwind the stack until some part of the code up the stack catches the exception.

Rust panics result in unwinding the stack:

https://doc.rust-lang.org/nomicon/unwinding.html

try!/? is preferred for handling errors, but if you "know" that a Result or an Option has a value, you can unwrap() or expect() it, and you'll get a panic if you're wrong.


so does panic() in Go. But no-one claims that Go uses exceptions for error handling, because it doesn't and neither does Rust.

In both languages panic() is for "this shouldn't happen" fatal errors, not for signaling errors to the caller, the way Java or C# use exceptions.

My comment was in response to a person basically saying "checked exceptions in Java would be good if only they implemented it the way Swift/Rust did".


With Swift, that assumes that the function being called reports the error in the first place. Very few functions do.

Cast a large double to an int -- crash. How do you catch that error? You can't. You have to make sure it never happens yourself. UserDefaults is another great one. All sorts of ways that it can crash your app, none of which can be caught or handled. Your app just crashes. My advice: convert all your objects to a text string (like JSON) and store that. Do NOT store a dictionary representation.


FWIW the try!() macro has been replaced with the ? operator.

Having spent extensive time with checked and unchecked exceptions I find Rust's error model to be very robust.

There's extensive tooling to handle and transform them; most of the pain comes from developers who are used to sweeping them under the rug (which totally makes sense in prototype land, but when I'm shipping something I want guarantees).


I find unsigned types more dangerous than signed bounded types. For unsigned types the edge-case is at 0, a frequently used number in eg. arrays. For signed types the edge-cases are at (random) positive and negative numbers.

Having to think more about edge-cases makes the code more dangerous:

    // Bug: i is unsigned, so "i >= 0" is always true; when i reaches 0 the
    // decrement wraps around to uint.MaxValue and the loop never terminates.
    for (uint i = (uint)arr.Length - 1; i >= 0; i--) {...}


> I find unsigned types more dangerous than signed bounded types.

So you do agree that signed bounded types are dangerous, and it's only a matter of degrees between them and unsigned. Thank you.


> + unsigned types (very handy for P/Invoke API boundaries because legacy Win32 has unsigned params everywhere; yes, yes, Gosling said that unsigned types are confusing and dangerous but nevertheless, they are still very useful)

The lack of unsigned types in Java is a constant pain when doing cross platform development, or when trying to consume (or write out!) a binary wire format.

It can be worked around, but it is just needlessly stupid.


Not having type erasure is pretty nice (except that reflection code with generics is really hairy).


I like type erasure and depend on it a lot for Scala and Clojure interop. Any JVM language works with the same objects and methods, and can implement its own type system/syntax on top.


Type erasure is one of the ugliest things in Java, on par with its LINQ-like streams implementation. It's only painless if you never write generic code and are happy with a lot of duplication. The C# implementation is superior by far: you can easily return newly created objects from a generic method, or use an object properly, without having to pass the object's type as an extra method parameter on top of specifying the generic type.


That's true in the CLR too, as long as everything supports generics.


Things that have CLR-like generics work on the CLR. Anything else doesn't. E.g. Scala.


I don't really get the complaint. Even if they had type erasure you couldn't use Scala in the CLR because nobody has written a CLR target. And on the other hand, I don't know why there couldn't be a Scala implementation with true generics. A lot of crap like DummyImplicit and ClassTag is there for no other reason than to work around those limitations anyway.


I think it's largely an urban myth.

All expressive languages whose type systems don't have a 1-to-1 equivalence in the runtime's type system need to employ some degree of erasure.

The distinction between "erasure" and "no erasure" doesn't make much sense. It's always just about more (CLR 1, JVM < 10) or less (CLR >= 2, JVM >= 10) erasure.


I don't see how you figure. In C# List<string> and List<int> are two different types at runtime.


And ...?


If you're implementing a CLR language and want flawless interop with everything then you just have the constraint that you need to support generics in the same way that other CLR languages do. I'm not seeing why this model is so much worse.


Scala had a CLR target for a number of years, but getting the type system to work was so problematic it was eventually abandoned.


That's what I meant by: "That's largely an urban myth."


I'm curious in what sense it was difficult to get it to work.


Perhaps not the best explanation, but this is the gist, as given by Scala's creator in an interview: https://youtu.be/p_GdT_tTV4M

> It's harder than it looks, because .NET has a very rich type system which happens to be not the Scala type system. In fact, it's very hard to map some of the parts of the Scala types like higher-kinded types...so it's a difficult task, much more difficult than JS or LLVM. LLVM is likely going to be the third target [after JVM and JS] because there we don't need to fight a type system which is foreign to us, which is what the .NET one is.


F* has an advanced type system with support for dependent types and refinement types. Yet, it runs on the CLR.

Source: https://www.fstar-lang.org


Hairy yes, but really, really powerful.


Java's virtual-by-default methods have never been an issue for me. On the other hand, some clown making their classes and methods final has been a recurring thorn.

David Bacon's Kava's approach to lightweight objects remains the correct answer and is sorely missed.

The only case for unchecked exceptions is mitigating terrible design choices, like using Spring or Hibernate/JPA. When the client code can't do anything with a caught exception, it shouldn't even be thrown. Meaning, they're doing it wrong.

The omission of unsigned integers has caused me unnecessary pain a few times. Original bindings for OpenGL, a DICOM parser, etc.

--

Properties are the one C# feature sorely missed in Java. JavaBeans is silly.

Syntactic sugar for reflection would be nice, maybe something like 'Method m = myClass#myMethod( String )'.

Java has done plenty of terrible things, crimes against productivity and design rationality. Chiefly annotations, lambdas, and the silly Optional. C#'s equivalent quixotic fever dream is probably LINQ.

I have a laundry list for Java, JDK (especially), and JVM. Who doesn't? Mostly undoing bad ideas and culling deprecated stuff, plus some sugar to reduce the verbosity.

--

Unfortunately, Java's language is hard to separate from its culture. The taint of all the enterprise minded stuff (XML XSD etc, J2EE, Spring, excess indirection caused by design pattern newbies) has been hard to shake off.


> C#'s equivalent quixotic fever dream is probably LINQ.

What? LINQ is terrific.


> C#'s equivalent quixotic fever dream is probably LINQ.

Maybe the weird query-style syntax.

If you use it properly, it's no different from map, fold, filter, and the other ubiquitous functional higher-order functions, just with T-SQL-ly names.

Linq is so much better than doing all of what it does by hand, with for loops and temporaries galore, the way we once did.
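For instance (Where/Select/ToList are the real LINQ operators; the Person data is made up):

    // LINQ
    var adultNames = people.Where(p => p.Age >= 18)
                           .Select(p => p.Name)
                           .ToList();

    // The hand-rolled version it replaces
    var adultNames2 = new List<string>();
    foreach (var p in people)
    {
        if (p.Age >= 18)
            adultNames2.Add(p.Name);
    }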


And amazingly, despite all these apparently horribly complex choices that programmers won't understand, it's a great language to work in.

Maybe guys like Gosling and Pike should take note. A touch more complexity in the right places can make a huge difference.


What do you think of partial classes and methods in terms of code quality?


I've always thought of "partial classes" as a language feature motivated by code generators not stomping on programmers' manually entered code. E.g. Winforms generates some declarations in one partial class while the programmer codes UI handlers in the other partial class.


When I first learned C#, I had a hard time navigating the code due to partial methods. So I have always wondered what C# developers think of having their methods spread out in different files.


Partial classes and partial methods exist specifically to support having code that is automatically generated paired with a human-managed source file.

While you could use the support for them to split human-managed code across separate source files, that would be a horrible practice that I've never encountered in the wild, even in .NET shops with otherwise-atrocious practices and code quality.
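A rough sketch of that pairing (file and member names are illustrative):

    // MainForm.Designer.cs -- regenerated by the designer; never edited by hand
    public partial class MainForm : Form
    {
        private Button saveButton;
        private void InitializeComponent() { /* generated layout code */ }
    }

    // MainForm.cs -- the human-managed half
    public partial class MainForm : Form
    {
        public MainForm() { InitializeComponent(); }
        private void saveButton_Click(object sender, EventArgs e) { /* handler code */ }
    }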


>While you could use the support for them to split human-managed code across separate source files, that would be a horrible practice

To continue that type of advice, some say "#region/#endregion" is another language feature that's intended for code generators so that the IDE can collapse specific lines of code and hide it from view. Programmers should not be hand-coding "#region" themselves. That said, there is debate on that: https://softwareengineering.stackexchange.com/questions/5308...


#region / #endregion, IME, largely serves to mitigate the visual offensiveness of poor (usually, completely neglected) design and (despite the potential utility in the case of code generation, and the utility it might have in the rare case where there is a good reason for a monster source file) I'd prefer it not to exist.


We use regions to standardize the layouts of our classes. Except for trivial classes, you will find the same regions in the same positions making it eas(ier) to find, for example, all methods implementing an interface, or all of the private helper methods.
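Something along these lines (a hedged sketch of that convention; the names are made up):

    public class OrderService : IOrderService
    {
        #region Fields
        private readonly IOrderRepository _orders;
        #endregion

        #region IOrderService implementation
        public void Place(Order order) { /* ... */ }
        #endregion

        #region Private helpers
        private void Validate(Order order) { /* ... */ }
        #endregion
    }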


Regions are just useless visual clutter most of the time. You can put the methods/fields in the same order without using the regions. In my experience regions are a very bad practice used only to mask the bad design that produced gigantic classes. The only place where I think they may be helpful is when you are writing a library and your class must be huge because you are implementing, for example, a Trie or some other collection or some other fairly complicated object that doesn't make sense to divide into smaller classes. And even in that case I would first try really, really hard to split it into smaller entities rather than just having a thousand-line class with some regions around.


Not sure where to go from there. You've precluded the possibility that regions and good design should exist at the same time in the same file.

Where does the absolutism in the tech industry come from? We are a bunch of individuals who have individual experience and then try to form a view of the world that satisfies our experiences. What about the experiences you haven't had or conceived of? We are constantly rewriting the rules in our head to fit the new experiences we have every day to make sure we are right all of the time. Surely, our current world views are not complete or we would have no room to grow.

Still, I'll take your comment under advisement in case my classes are big, poorly designed non-Tries.


Since it's buried at the bottom of the stackexchange: #region/endregion is useful for visually grouping Unit Tests per method under test. They're also occasionally useful to group interface implementations.


You could just use nested classes for that, with the added benefit that the test framework will reflect that organization in its UX.


The suggestion is appreciated. Thank you.

I have a minor doubt that I won't like the additional level of nesting incurred, but I'll attempt it regardless.


Partial classes are also useful in the case that you want a nested class to have its own file.


I think I may have manually created partial classes for huge classes to make them more manageable. Think for example expression tree visitors or collections of extension methods. Sure, I could just have created several classes like FooExtensions and BarExtension, but having a partial class Extensions seems ever so slightly better to me. But I generally agree, there are not many good use cases besides code generation and if you are tempted to do it, then you probably have a problem that has a different and better solution.


I think just about every time I've seen it, it's either been exactly what the OP described or an attempt at refactoring a class whose scope had grown far too large.


It's a tool to use.

Like others have said, it makes working with generated code easier.


That's where every Visual Studio dev will see them used, because it's been a VS pattern since the beginning. But I create them manually often enough. Overall I use them sparingly, but there are cases where you definitely want a set of things in one class for code-organization, but still want to think of them as different modules for human-organization.


I did that once where I had a complicated class with a docile public API that controlled a not so docile long running threaded 'machine' of sorts. Having them in two files made thinking about stuff easier.


If you want to think of them as different modules then most likely they should be completely separate entities. Why on earth would you want something completely different to live in the same class? It is just screaming that it wants to be a separate class. Namespaces are the correct tool for code organisation, certainly not partial classes.


It's very interesting that you were able to see my use cases and come to this conclusion...


Regardless of the use case this is what the official documentation says about namespaces:

The namespace keyword is used to declare a scope that contains a set of related objects. You can use a namespace to organize code elements and to create globally unique types.

And about partial classes:

There are several situations when splitting a class definition is desirable: When working on large projects, spreading a class over separate files enables multiple programmers to work on it at the same time. When working with automatically generated source, code can be added to the class without having to recreate the source file. Visual Studio uses this approach when it creates Windows Forms, Web service wrapper code, and so on. You can create code that uses these classes without having to modify the file created by Visual Studio.

As you can see partial classes are not the right tool to organise code.


How would you propose to split a struct that has a single data member? Say, a struct where the only data is one unsigned short?

Also, "regardless of the use case" ? I barely know how to respond to that. Have a little imagination.


Thank you. It has just occurred to me I can use this feature at work!

lol


Not the OP, but here are my 2 cents: partial classes should only be used when you have a mix of generated and hand-written code. Any other usage should be forbidden. It requires some education, but it is useful for this use case.


If, like me, you're old enough to have used MFC (where the code generators generated source for classes that you had to augment, and randomly picked lines to replace or overwrite in those classes whenever it felt like doing so (1)), you think they are a godsend.

(1) that's an exaggeration; MFC used comments to identify sections that it owned in the source, but in my (limited) experience, there typically were zillions of ways to make changes to your code, but if you didn't use the one Microsoft picked as _the_ way (and which they didn't push into your face in the IDE), you were in for heaps of problems.


Java and C# are seen as "old news" by some here on HN but there's a trove of software engineering wisdom in there. It was a huge boon for Microsoft to be able to learn from Sun's mistakes when they were designing a language that, on paper, is basically the same thing. C#, Java, and Go are all "wonderfully boring" languages which is a divisive topic, but they're all very good at being boring languages.

It's not just language features that made C# an improvement over Java, either. CIL is a fair bit more elegant than JVM bytecode. JVM bytecode has type-specific operations to speed up bytecode interpreters, which turns out to be irrelevant since nobody cares about bytecode performance these days.


I'm not sure C# is even really in the "boring" category; they were way ahead of Java 8 with the functional collection stuff in Linq and they're borrowing lots of concepts from Scala for the latest versions.


In addition to the functional stuff, C# has a lot of syntactic sugar that I wouldn't associate with a "boring" language.

If you aggressively use all the syntactic sugar from the latest version of C# and compare that to similarly up-to-date Java code, they'll be worlds apart. The C# will look terse, and to more conservative programmers, rather weird.


Not to mention they still have/invent multiple ways to do the same. The old Tuple<>, anonymous types and new ValueTuples are pretty much different attempts/iterations to achieve a similar goal. I wonder if/how they will unify them or phase some of them out.
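For example (all three are real C#/BCL features; the values are made up):

    // Old reference-type tuple: heap-allocated, members are Item1/Item2
    Tuple<int, string> a = Tuple.Create(1, "one");

    // Anonymous type: named members, but only really usable locally via 'var'
    var b = new { Id = 1, Name = "one" };

    // C# 7 value tuple: named members, a struct, and usable in signatures
    (int Id, string Name) c = (Id: 1, Name: "one");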

Btw, i kinda like java's Anonymous Classes. Do we know any reason for C# not to adopt this as well?


>Do we know any reason for C# not to adopt this as well?

Because delegates are a much easier way of dealing with things? Especially with lambda functions. Unless you're talking about the java pattern of passing an entire anonymous class for, let's say, a formatter or something. The reason for C# not to adopt that seems to be that:

- It's so terribly awful; who in their right mind is happy to implement yet another anonymous class?

- API design is different, and .NET manages to avoid the need for those quite gracefully.


There is some crap like that still kicking around. It's silly that you need to create a comparer class to provide a single function for sorting or for filtering out duplicate items, when a Func<T, bool> lambda would work. It's because it's the old way of doing things, back in the .NET 1.x/2.0 era, when delegates didn't have all the syntax sugar to make them painless.
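For example (List<T>.Sort and Distinct are the real APIs; Person and the comparer are made up for illustration):

    // Sorting: a lambda via the Comparison<T> delegate is all you need
    people.Sort((x, y) => x.Age.CompareTo(y.Age));

    // De-duplicating: Distinct only takes an IEqualityComparer<T>,
    // so you still end up writing a little comparer class
    class NameComparer : IEqualityComparer<Person>
    {
        public bool Equals(Person x, Person y) => x.Name == y.Name;
        public int GetHashCode(Person p) => p.Name.GetHashCode();
    }

    var uniquePeople = people.Distinct(new NameComparer()).ToList();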


> I wonder if/how they will unify them or phase some of them out.

They pretty much never phase anything out because they're serious about backwards compatibility (which frankly I consider refreshing in this world). I mean they've even said at some point like "yeah, the delegates are unfortunate because they're just a more awkward syntax for what the lambda functions do but we're not going to get rid of them because people have used them."


As a low-level game dev, I consider C# a nearly perfect, wonderful language, tragically self-defeated by garbage collection.


It's getting easier with every release to not allocate, with things like ref returns and locals[1] and new types like Span<T>[2]. I think there are more things like that coming in the pipeline[3].

[1] https://github.com/dotnet/roslyn/issues/118

[2] https://github.com/dotnet/corefxlab/blob/master/docs/specs/s...

[3] https://github.com/dotnet/roslyn/issues/10378
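Roughly the kind of thing those enable (ref returns are C# 7 syntax; Span<T> was still in corefxlab at the time, so treat this as a sketch):

    // ref return: hand back a reference into the array itself -- no copy, no box
    static ref int Largest(int[] values)
    {
        int maxIndex = 0;
        for (int i = 1; i < values.Length; i++)
            if (values[i] > values[maxIndex]) maxIndex = i;
        return ref values[maxIndex];
    }

    // Span<T>: a typed window over existing memory, sliced without allocating
    int[] data = new int[256];
    Span<int> window = new Span<int>(data, 16, 64);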


Here's a comment that mirrors my own thoughts, from the comments on that third link[1]:

"C# heap allocations aren't expensive and are faster than C/C++. They are even faster than stack allocations (if the stack alloc is too large to be a register) as the stack needs to zero out the memory first; whereas the heap has done this ahead of time. However, they do come at a cost, which is deallocation.

C/C++ have both an allocation and deallocation cost. The main advantage is you know when you are paying that deallocation cost; the main disadvantage is you often do it at the wrong time or don't do it at all which leads to memory leaks, using unallocated memory or worse trashing something else's memory (as it has been reallocated).

The GC isn't the bad guy; its the liver and blaming it is like blaming a hangover on the liver.

However; the free drinks the clr allocation bar gives you are very tempting, and does mean there is a tendency to over indulge so the hangovers can be quite bad... (GC pauses)."

[1] https://github.com/dotnet/roslyn/issues/10378#issuecomment-2...


Yes, someone always brings up the fact that alloc/dealloc isn't free in any language. But that doesn't change the particular animosity between real-time and GC. Manual is like a flexible payment plan. GC is a debt-collector appearing at your bathroom window while you're on the can.


Could you explain why garbage collection in your use case is a bad thing? The GC vs non-GC always fascinates me, and I have no strong opinion either way.

Also, doesn't C# have the ability to limit (or pretty much disable) GC?


Garbage collection is the bane of smooth framerates.

Players notice when the framerate drops. Presuming a pretty typical 60 Frames Per Second you have a tight 16.6ms time budget to do all of the work for the entire frame. All of the physics, all of the sound processing, all of the AI and everything else needs to be sliced up into little bits that can be distributed across the time the game is played.

There are many good ways to achieve this, and many ways that just appear good until someone else plays your game. If you allocate dynamically even occasionally, you need a strategy to allocate about as much as you deallocate each frame, or otherwise mitigate the costs.

C++ has this problem completely solved with strong deterministic semantics between destructors, smart pointers and allocators. This can be handled in C# a few ways as well, but sometimes a bunch of garbage builds up and is cleaned between level loads or during downtime. When the frame rate drops because the garbage collector consumes 1 of the 2 hardware threads in the middle of a firefight, players get mad. If you only ever tested on your nice shiny i7 with 8 hardware threads you might never notice until a bug report lands in your inbox. That presumes it wasn't one of the stop-the-world collections and that you couldn't use that last hardware thread better than the GC, both of which negate GC altogether.

Done right deterministic resource allocation costs almost nothing. You can get to zero runtime cost and nearly zero extra dev time. In practice a little runtime cost is fine, and a little time spent learning is OK, but a bug report in the final hour before shipping that the frame rate drops on some supported hardware setups but not others is really scary.
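In C# the usual mitigation is to pre-allocate and reuse instead of allocating per frame, e.g. a trivial object pool (a sketch only; Bullet and its Reset method are made up):

    class BulletPool
    {
        private readonly Stack<Bullet> _free = new Stack<Bullet>();

        public Bullet Rent() => _free.Count > 0 ? _free.Pop() : new Bullet();

        public void Return(Bullet b) { b.Reset(); _free.Push(b); }
    }

    // Per frame: nothing new is allocated, so there is nothing for the GC to chase
    var bullet = pool.Rent();
    // ... simulate, render ...
    pool.Return(bullet);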


I wonder if the very low-latency GC in Go would be good enough, though? The occasional dropped frame doesn't seem like the end of the world, so long as it remains rare.

In practice, most games don't have entirely reliable performance, particularly on low-end hardware.


Depends on the game.

Drop a frame in a competitive First Person Shooter and be ready for death threats.

Drop a frame in an angry birds clone and be ready for 5 stars in a review just because you made your first game.

I suspect you could get away with quite a bit of GC in most games. But by the time you learn whether or not you can get away with it, you have fully committed to a language for several months. Unless you fully committed to D, you are stuck with your memory management strategy. In order to be risk averse, game devs dodge GC languages entirely, because the benefit is small compared to the potential cost. Combine this with how everyone wants to make the next super great <insert genre here> MMO that will blow everyone away: they think that they must squeeze every drop of perf out of the machine, and sometimes they are right.

Lua is hugely popular for scripting in games. World of Warcraft used it to script the UI. Its garbage collector can be invoked in steps. You can tell it to get all the garbage or just to get N units of garbage. If you tell it to get 1 unit of garbage each frame while frugally allocating I expect you could easily meet the demands of many casual games.

Then there are games like Kerbal Space Program. All C# and all crazy inconsistent with performance. It will pause for no apparent reason right as you try to extend your lander legs and cause you to wreck your only engine on a faraway planet. I cannot say with certainty it is GC, but that cannot be helping.


Gamers hate dropped frames. They can also easily ruin multiplayer games.


Garbage collection is troublesome for real-time processing (including game engines) because it doesn't allow you to plan around hitting a latency deadline. When the collector triggers, it consumes a lot of time, which can add up to missing a deadline even in very relaxed situations.

Historically, many game engines have a "never allocate" policy: everything is done with static buffers and arenas of temporary data. The broad strokes of this policy are mostly achievable in a GC language that allows value type collections (fewer managed references = less to trace = cheaper garbage). The problem typically comes in little bits of algorithmic code that need their own allocation: because the language is garbage collected, all your algorithms are using the GC by default. And if you want to reclaim it, you have to fight a very uphill battle.

IME, though, most of the problem is resolvable if you have enough introspection into GC configuration. If a game can tell the collector that it needs a cheap scratch buffer to process an update loop and then get thrown out, that covers a lot of the problem. The last bit of it is fine detail over memory size and alignment, which some GC languages do give you introspection into already.

Edit: Also, I should note that the relative value of GC changes a lot when your process is long-lived and unbounded in scale (servers) or involves a lot of "compiler-like" behaviors (transformations over a large, complex data graph). The advantage of doing without it in a game engine has a lot to do with the game being able to be tuned around simple processing of previously authored data, with limited bounds in all directions.


Most problems with GC come from implementations that introduce non-deterministic latencies, but they can also come from GCs demanding a large heap size to work well. Even if you can swap the GC implementation or turn it off, can you guarantee that when you turn it on, it will only run for < X milliseconds and then stop and allow you to control it again? If you can't, turning it off only buys you a little, unless your language also supports direct [de]allocation. Nim's strategy of a GC with plain access to alloc() is pretty nice.


He's a game dev, says it all. The problem with GC is that often it imposes a performance overhead and, more importantly, it results in non-deterministic performance characteristics. In game dev it's typical to want to be able to complete an entire cycle of computations for calculating game mechanics within a single frame, which could be just 17 or even 7 milliseconds in entirety (60/144 fps). If you get even a fraction of a millisecond delay in computing a chunk of work that can screw up your frame rate, produce noticeable visual effects, etc.


GC is a no-go for a lot of games in general, just because the periodic stop-the-world cleanup causes unpredictable frame-rate-destroying performance hits. That doesn't matter a lot for some games, but it tends to stutter things like FPSs unacceptably.


Non-deterministic performance. It can be fast, even faster than unmanaged code in some cases. But it's not predictable.


OP probably wants a concurrent mark-and-sweep GC rather than a generational GC, because 16 ms is too much for game dev.


As a game developer having used C# for two large scale projects, I can tell you that the garbage collector was the least of our worries.

Just be a good citizen and don't trash like crazy and the GC will never block in the action phase.

Also, don't use the default mono GC, that one is terrible.


Ah, but everybody's data is different, and that's the problem with a prescribed, one-size-fits-all, too-clever mem manager. If you need a complex graph with a million nodes, then any heap walk during a frame is a catastrophe, and your only out is to write a custom mem manager within the confines of C#, or put your game's core data structure in a C++ lib.


You can use things like System.GC.TryStartNoGCRegion and GCLatencyMode.SustainedLowLatency to help mitigate the GC pauses though. There is a lot of work that has been going on with the CLR GC code somewhat recently.
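Usage looks roughly like this (both APIs exist since .NET 4.6; RunLatencyCriticalSection and the 16 MB budget are illustrative):

    // Ask the runtime to defer collections while a latency-critical section runs;
    // the call fails if it cannot reserve the requested budget up front.
    if (GC.TryStartNoGCRegion(16 * 1024 * 1024))
    {
        try { RunLatencyCriticalSection(); }   // must allocate less than the budget
        finally { GC.EndNoGCRegion(); }
    }

    // Or just bias the collector toward shorter pauses
    GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;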


What I want is no GC. I might settle for a way to tell it "Do not walk this graph over here, ever, ever." But really, I want to be nowhere near a GC at all, unless it gives me control of pretty much everything it does. I do develop in C#, and I have to work around the GC every step of the way. I really want not to have to work around it, not better work-arounds. For the most part, I simply don't allocate, but that is ugly and non-idiomatic.


> However, given that I’m working on a database engine now, not on business software, I can see a whole different world of constraints.

This might be my own confirmation bias, but this is my takeaway: point of view and the constraints that you see or believe are there are the main determinant of choices, and not whether some particular pattern or feature of a language is intrinsically good.

The older I get and the more code I write, the more I find this to be true. I change my own mind about things I used to argue fiercely over and things I thought were tautologically true.

I think (hope) this is making me more open minded as I go, more willing to listen to opposing points of view, and more able to ask questions about what someone else's constraints are rather than debating about the results. But, who knows, I might be wrong.


I think it is hard to separate this from developing more maturity in the field also. For example, people learning OO seem to almost have to go through a phase of over reliance on inheritance.

Pointing back at yourself over 10 years is pointing to a different place, sure, but it is also a different person.


I have many years of programming in Java under my belt. Until I started using dynamic languages I thought static typing was really important. It's not.


> I have many years of programming in Java under my belt. Until I started using dynamic languages I thought static typing was really important. It's not.

Static typing is useful, especially in large projects, if it provides the right guarantees.

OTOH, whether it does that depends on the type system. Go, Java, Pony, Rust, and Haskell are all static, but their type systems offer very different capacities. If you have a type system that has a lot of ceremony, and fails to provide the guarantees that are actually needed in your project, it's a pure burden. If it's low-ceremony and provides guarantees important to your project, it's a clear, undiluted benefit. Reality often falls somewhere in between.


It rules out certain categories of bugs, makes it hard to assign a string to an int, etc...

If you are writing a small, one-time-use script to accomplish a task, clearly that kind of protection is of low value.

If you are trying to write or maintain a system intended to last 20 years and keep bugs out of 100 million lines of code, every kind of check that can be automated has extremely high value.

Most projects are somewhere between these two extremes. The nature of the cutoff point where strong static typing helps or does not is what we should be debating, not its inherent value as Dahart suggested.


Simple designations like "static typing" and "dynamic typing", even when you bring in the concept of strong vs. weak (Java allows concatenating an int to a string, Python throws an error), aren't very helpful when languages like Common Lisp exist. (Edit: nor are "compiled" vs. "interpreted", for the same reason, but especially in current_year when just about everything compiles to some form of bytecode; whether that is then run by assembly or by microcode running assembly is a small distinction.) Specific languages matter more, and specific workflows within languages matter more too. And as you say, what you're trying to build also matters, but not all that much.


You are right that the type system waters are muddied by a variety of technologies and perhaps that isn't the best line to draw. I think your focus on the semantics of static vs dynamic dodges much of my point.

The crux of my argument was that the larger and more complex the work, the more important it is to find errors early. It seems obvious to me that languages like Java, C++ and Rust do much more to catch errors early than languages like Ruby, Python and Javascript, which are easier to get started with and to build a minimum viable product in. Put those two things together and it seems like a strong heuristic to use when starting a project.


This is why I think workflows matter too, at least as much as the language itself. If you write Python like you write Java, of course you're going to not catch some errors that Java would have caught before you ship, and you're probably going to be frustrated when you're at a company where everyone writes Python like Java. But if you write Python like Python (you can't write Java like Python), you'll find many of your errors almost immediately after you write it because you're trying out the code in the REPL right away, and writing chunks in a way where that's easier to do in Python.

Maybe a few type errors will still slip by, but you'll have found and fixed so many other kinds of errors much earlier. Kinds of errors that benefit by being caught immediately instead of festering because they passed a type checker. (I've never really found a type error to be a catastrophic-oh-I-wished-we-found-this-sooner type of bug. You fix it and move on. It's not dissimilar to fixing various null pointer exceptions that plague lots of corporate Java code.)

To me your obvious claim is not obvious at all, because the tradeoff space is so much richer than what mere type systems allow. We're not even touching on what you can do with specs and other processes that happen before you code in any language, nor other language tradeoffs like immutability, everything-is-a-value, various language features (recalling Java only recently got streams and lambdas), expressiveness (when your code is basically pseudocode without much ceremony (or even better when you can make a DSL) there's a lot fewer places for bugs to hide)... Typing just doesn't tell that much of a story.


The type system is your friend, not your enemy. You are comparing type errors to null pointer exceptions, aka the billion dollar mistake. You can have an extremely powerful type system, with very low ceremony, that checks continuously that you are not shooting yourself in the foot, AND still have a REPL, for example using F#. Your code will be extremely expressive and creating a DSL can be a breeze, with the huge benefit that even your DSL will be type checked at compile time.


That's my point, all that is in favor of F#, not static typing in general. I'm not opposed to type systems -- Lisp's is particularly nice, I like Nim's -- but having static types or not isn't enough of a clue that such a language really is suitable for large systems or can catch/prevent worse errors quicker.


I think it's valuable to read interviews with Anders Hejlsberg [0] about the design process for both .NET and C#. They are old but clearly communicate why certain decisions have been made (spoilers: compatibility).

[0]: http://www.artima.com/intv/anders.html


You never really learn a language, and you never really are an expert in using it.

While you may know a lot about what the language is, how it works, and accepted ways of using it, your opinions on how to do things will always be evolving (hopefully).

Sometimes all the experts who use a language will be behind the times. For a long time experts championed strong OO design. Now all the experts champion hybrid OO/FP style things (witness Java 8!).

This too shall pass, and we should have the humility to realize that no-one knows for certain what will be the next evolution of software development.


One of the greatest lingering flaws in both C# and Java is the lack of metaclasses.

Because classes aren't real objects and therefore not necessarily also instances of other classes (their metaclasses) as they would be in Smalltalk, there is no class-side equivalent of "self/this," nor of "super." In effect, you cannot write static (class) methods that call other static methods without explicitly referencing the classes on which those other methods are defined, completely breaking class-side inheritance and rendering class behavior (and instance creation in particular) needlessly brittle.

I believe the explosion of factories, abstract factories, and just generally over-engineered object construction and initialization schemes in Java and C# would have been side-stepped if both languages had always had a proper metaclass hierarchy paralleling the regular class hierarchy, as well as some form of local type inference.


> I believe the explosion of factories, abstract factories, and just generally over-engineered object construction and initialization schemes in Java and C# would have been side-stepped if both languages had always had a proper metaclass hierarchy paralleling the regular class hierarchy, as well as some form of local type inference.

That's a bit harsh. "Factory" is a term that became prominent in Java as a result of the language design decision not to include first-class functions, so any time you see "Factory" just think "function that returns an object", and any time you see "AbstractFactory", think, "type of function that returns an object". In C# you can just use delegates and the explosion of factories isn't really there.

I'd say your opinion of this explosion might change if you work in a good codebase which makes sensible use of techniques like IoC. Yes, it feels a bit silly to have a component in your project which does nothing more than instantiate objects, but you end up with classes that are much more cleanly defined in terms of the interfaces they expose and consume, and you can write unit tests that don't make you feel like you're damaging your code base to get the unit test to work.

At least, when it goes well.

My experience with metaclass programming (a fair bit of Python metaclass programming) is that it can often be replaced by generics, reflection, or various code generation tricks in C#, and I don't end up missing metaclass programming that much. Metaclass programming isn't a silver bullet, it's a tool that complements other tools in the right toolbox (Python, Smalltalk) but would just get in the way in other toolboxes (C#, Go).

There's a narrative here that we're somehow "neglecting" the lessons we learned with old systems like Smalltalk, Lisp, etc. when we make languages. It's a seductive narrative but I think it's mostly papering over the sentiment that language X isn't like my favorite language, Y, and therefore it's bad. I welcome the proliferation of different programming paradigms, and besides a few obvious features (control structures, algebraic notation for math) there are few features that make sense in every language. That especially includes metaprogramming, generics, reflection, macros, and templates.


First class functions aren't replacements for factories. An abstract factory provides several methods for constructing related objects of different types. The objects come from different type hierarchies, whose inheritance structures mimic each other. For instance, you might have a hierarchy of EncryptionStream and EncryptionKey objects. Both derive in parallel into AESEncryptionStream and AESEncryptionKey. Then you have an EncryptionFactory base class/interface which has MakeStream and MakeKey methods. This is derived into AESEncryptionFactory, whose MakeStream makes an AESEncryptionStream and whose MakeKey makes an AESEncryptionKey.

The client just knows that it has an EncryptionFactory which makes some kind of stream and some kind of key, which are compatible.
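In C#, the shape being described is roughly this (type names taken from the example above; bodies omitted, and the interface spelling is mine):

    interface IEncryptionFactory
    {
        EncryptionStream MakeStream(Stream underlying);
        EncryptionKey MakeKey();
    }

    class AESEncryptionFactory : IEncryptionFactory
    {
        public EncryptionStream MakeStream(Stream underlying) => new AESEncryptionStream(underlying);
        public EncryptionKey MakeKey() => new AESEncryptionKey();
    }

    // The client never names AES; it just knows the stream and key match.
    void Encrypt(IEncryptionFactory factory, Stream data)
    {
        var key = factory.MakeKey();
        var stream = factory.MakeStream(data);
        // ...
    }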

AbstractFactory doesn't specifically address indirect construction or indirect use of a class, but it does solve a problem that can also be addressed with metaclasses. If we can just hold a tuple of classes, and ask each one to make an instance, then that kind of makes AbstractFactory go away.

The thing is that in a language like Java, these factories have rigid methods with rigid type signatures. The MakeKey of an EncryptionFactory will typically take the same parameters for all key types. The client doesn't know which kind of stream and key it is using, and uses the factory to make them all in the same way, using the same constructor parameters (which are tailored to the domain through the EncryptionFactory base/interface).

If we have a class as a first class object (such as an instance of a metaclass), that usually goes hand in hand with having a generic construction mechanism. For instance, in Common Lisp, constructor parameters are represented as keyword arguments (a de facto property list). That bootstraps from dynamic typing. All object construction is done with the same generic function in Common Lisp, the generic function make-instance. Thus all constructors effectively have the same type signature.

Without solving the problem of how to nicely have generic constructors, simply adding metaclasses to Java would be pointless. This is possibly a big part of the reason why the feature is absent.


Yes, you're absolutely right that functions don't cover all use cases of factories. I was mostly thinking about the "why are there factories everywhere" complaint, which is mostly about factories that just produce one object.

> If we can just hold a tuple of classes, and ask each one to make an instance, then that kind of makes AbstractFactory go away.

That seems like just one particular way to solve things. I guess I don't see what the fuss is about, if we are talking about metaclasses in particular, because we could also solve this problem with generics, and the factory solution doesn't seem that bad to begin with.

> Thus all constructors effectively have the same type signature.

Or turned around, the type system is not expressive enough to assign different types to different constructors, and is incapable of distinguishing them. This matches with my general experience, that metaclasses are useful on the dynamic typing side (Python, Lisp, Smalltalk, JavaScript) but annoying on the static typing side (C++, Haskell, C#).

But of course that makes sense. In a system without static types, the only way to pass a class to a function is through its parameters, so you have to pass the class by value. In systems with static typing, you have the additional option of passing a class through a type parameter, which has the advantage of giving you access to compile-time type checking. Furthermore, there are real theoretical problems with constructing type systems which allow you to use metaclasses involving whether the type checker is sound and whether it will terminate.


Whatever the reasons behind it, it is clear that a lot of the effective and high-impact industrial languages at various levels of the stack (e.g. C++, Ada, Java/C#, Python, Perl, JavaScript, etc.) have not managed to incorporate some of the real wisdom learned in earlier systems (e.g. Smalltalk and some Lisps for OO, etc.).

If this was in fact avoidable, it is a sad fact.


This type of stupid bloviating is why programming will never be a proper engineering discipline. There are two ways to implement OOP: classes, aka "object templates", or prototypes. Educate yourselves, people!


The tone makes it difficult to tell, but I think you are supporting my thesis - mistakes of the past are, in fact, being repeated. Engineering disciplines succeed by a) learning the science and b) applying it properly. We do not do that well in programming. So yes, educate yourself, and then take the lessons to heart.


This is a valid observation; why have no downvoters commented? OOP implementation was studied extensively in the '80s; CLOS was very contentious, in fact. The lack of training and perspective is stunning.


C# has reified generics, which solve a lot of problems that would otherwise be addressed with metaclasses. For example, factories can be set up quickly by reflection over a generic type variable.
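For instance, a minimal sketch (Activator.CreateInstance is the real reflection API; the Create helpers are illustrative):

    // With a constraint, no reflection is needed at all
    static T Create<T>() where T : new() => new T();

    // With reified generics, typeof(T) is the actual closed type at runtime,
    // so constructor arguments can be supplied via reflection
    static T Create<T>(params object[] args) =>
        (T)Activator.CreateInstance(typeof(T), args);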


Over-engineered construction schemes are a result of multiple factors. One is simply the static type system which gives rise to overly rigid type signatures.

Class constructors tend to exhibit a variety of type signatures. Even when objects are derived from the same base type and substituted for each other, the way they are constructed can be quite different. In a dynamic language, we can handle all construction with the same kind of function: something that takes a "property list". Because of that, we can have a "virtual constructor". That can be a method on a metaclass to create an instance, or just something built in: some make-new-object function which takes a type and a list of generic constructor arguments that any type can handle. This is very easy to indirect upon.

Adding meta-classes in Java wouldn't solve the problem of how to make construction generic.


While I'm inclined to suspect that non-virtual by default is better from a design perspective‡, don't assume the point about performance is overwhelming. HotSpot has done devirtualization for a long time. You can detect not only when a method is never overridden, but also when it's never overridden at a particular call site. A virtual method that's never overridden can sometimes have no extra overhead, while a virtual method that is overridden may have sufficiently small overhead that it rarely matters.

http://insightfullogic.com/2014/May/12/fast-and-megamorphic-...

‡ I've used non-OO languages, but never an OO language without virtual by default.


Since we're talking about performance: the time it takes HotSpot to perform this optimization is also a perf hit for your program.

At the end of the day, the fastest code is one that doesn't have to run.

HotSpot is an impressive technology, but the optimizations it has to do to overcome Java's design really only pay for themselves in the most frequently executed code paths, and only after some time spent gathering the necessary info to perform the optimizations.

It's ok for long-running server code but not good for, say, a short-lived command-line program.

Or to put it differently: a language that has perf-friendly design, like Go, matches Java's speed with 10% of engineering time and resources spent on the compiler and optimizations. Perf friendly design means it has to do 10% of the work to achieve the same end result.


This may be true in general, but the CLR uses bytecode and a JIT compiler, so that point may be a lot less relevant to it. In addition, devirtualization is apparently valuable enough that they're going to add it to the CLR, per the article.


Java compiles to bytecode and most implementations JIT, just like .NET. JVMs are more advanced than the CLR at optimization.


Yes, my point is that once you're comparing two environments that use bytecode and a JIT, you can't necessarily cite the cost of startup time and the cost of JIT compilation as a reason to avoid possibly-virtual calls.


Depends. Recently I spent a day measuring every permutation of calculation-related optimizations I could think of for a critical inner code path. Then I noticed I'd lazily used one virtual call for convenience, which had been there since the prototype stage. The three minutes spent removing it was by far the biggest win that day.


It also just happens that the MS C# compiler always emits the `callvirt` instruction for instance methods, because the language spec requires that a NullReferenceException be thrown any time a method is called on a null instance, even if none of the instance fields are used in the method.

Source: https://blogs.msdn.microsoft.com/ericgu/2008/07/02/why-does-...


"Another issue is that my approach to software design has significantly changed. Where I would previously do a lot of inheritance and explicit design patterns, I’m far more motivated toward using composition, instead."

This, more than anything, has dramatically improved the quality of my designs... and made coding fun again.

Immutability and Lambda functions have also had a tremendous impact on my designs.

Is the term Object-Oriented-Programming relevant anymore?


I don't see how preferring composition over inheritance makes OOP less relevant. In fact, GoF even suggests using composition over inheritance in OOP. Nor do I see how immutability and lambda functions are mutually exclusive to OOP either. You can have all of these things and still reap plenty of benefits from OOP. The benefits of OO polymorphism and several decades worth of architectural design patterns are not irrelevant just because functional programming concepts exist. Both should be used advantageously and when appropriate.


It's possible to do Functional Programming with an OO language these days using anonymous methods/Lambdas.

The GoF patterns either need updating or perhaps we're on the verge of calling this hybrid environment something completely different (?)


I'm surprised the creator of RavenDB took this long to come around on composition vs inheritance. Good on him for admitting his transgressions, however.


> Another issue is that my approach to software design has significantly changed. Where I would previously do a lot of inheritance and explicit design patterns, I’m far more motivated toward using composition, instead.

I picked up on this too; object-oriented design patterns have lost a lot of mindshare over the last ten to fifteen years. There was a time when it seemed like design patterns were taking over the world. We're still living with some of the monstrosities spawned during that era. I wonder if design patterns can ever be rehabilitated.


Java is not "virtual by default", it is virtual-only, except for private methods, which don't participate in inheritence.

I like the Java convention, for one thing, because it is one less decision for programmers to make. I've seen many C# programmers who are oblivious to what virtual means.


What about "protected final"?


"Someone was wrong on the Internet, it was me."


The bigger question is why the hell a post like this makes it to the top of Hacker News. This was by far the most pointless read ever.

On top of that, arguing that non-virtual by default is worse than virtual by default is completely superfluous. Just add the damn keyword everywhere and you have virtual everywhere. Same for final.

But Java has everything non-final and virtual by default, which sucks badly, because both require great care when implementing the method.

Extending code that was not designed to be extended is very common in Java, because you can. Adding final can easily be forgotten. Opting in, which is what C# requires (adding virtual, or unsealing), will only be done IF you intended to make that method extendable.

Yes, a great gain. Now I need to argue for each final I add to Java classes and methods, because, you know, it seems wasteful to add it, while in fact it is crucial, since maybe just 1% of any code I write was meant to be replaceable by a third party. Mostly, you want to use other mechanisms for extension, like decoration & composition.

If it took ten years to unlearn that falsehood (that non-virtual by default is worse than virtual by default), then we're talking about one hell of a regression, huh.


Virtual methods are also significantly slower to invoke.


Significantly?

Have any benchmarks to back that up?

Last time I benchmarked in the language I use most (C++) I couldn't get a difference distinguishable from my margin of error.


What this guy said! thank you


The short answer is a) it's interesting because the author is very notable in the .NET space and b) he's not exactly known for admitting to being wrong about anything.



