
If I attempt to generalize, I think that in most places where C#'s language design is the opposite of Java's, C# made the superior choice. Examples include:

+ not virtual by default (so base classes can be changed more easily without breaking/recompiling downstream clients that the base class writer doesn't know about; specifying something as "virtual" should be a deliberate, conscious decision by the class author -- see the sketch after this list)

+ value types (for speed, because J Gosling's idea that "_everything_ is an object is 'simpler' for programmers" has a cost -- the boxing & unboxing, and inefficient cache-unfriendly pointer-chasing containers)

+ no checked exceptions (checked exceptions have benefits in theory, but real-world practice shows that they force programmers to copy-paste mindless boilerplate just to satisfy the checker)

+ unsigned types (very handy for P/Invoke API boundaries because legacy Win32 has unsigned params everywhere; yes, yes, Gosling said that unsigned types are confusing and dangerous but nevertheless, they are still very useful)

+ many other examples
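
A minimal C# sketch of the non-virtual-by-default point (class and member names are illustrative):

    // C#: members cannot be overridden unless the author opts in
    class Base
    {
        public void Fixed() { }          // non-virtual: cannot be overridden
        public virtual void Hook() { }   // "virtual" is a deliberate opt-in
    }

    class Derived : Base
    {
        public override void Hook() { }  // OK: Base opted in

        // public override void Fixed() {}  // compile error: nothing to override

        public new void Fixed() { }      // hiding must be spelled out with "new"
    }

    class Program
    {
        static void Main() => new Derived().Hook();
    }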

This doesn't mean that Anders Hejlsberg's C# language team was smarter than J Gosling's Java team. They simply had 7 years to observe how Java programmers (mis)used the language, and could therefore correct some design mistakes.

Nevertheless, C# still made some dubious decisions such as renaming "finalizers" to "destructors". Java had the better name and C# should have kept the original terminology.




> + no checked exceptions (checked exceptions have benefits in theory, but real-world practice shows that they force programmers to copy-paste mindless boilerplate just to satisfy the checker)

As languages like Rust or Swift demonstrate, the issue is less the checked exceptions and more the abject lack of support for them in the language and type system, which ends up making them essentially unusable.

> Gosling said that unsigned types are confusing and dangerous

He is right of course, but he forgets that so are signed types if they're bounded.

If Java had unbounded signed integers (à la Python or Erlang) that'd be one thing, but Java does not, and neither does it have Pascal-style restrictions/user-defined integral bounds, which means it's no less confusing or dangerous, it's just more limited.


Neither Swift nor Rust has exceptions, checked or otherwise.

The kind of exceptions that unwind the stack until some part of the code up the stack catches the exception.

They both handle errors by returning error values, kind of like Go.

Rust has a try! macro, which might make you think it's a try/catch equivalent, but it's not. It's just syntactic sugar over error values.

Similarly in Swift try/catch/throw is just syntactic sugar for handling error values.

https://doc.rust-lang.org/book/error-handling.html

https://developer.apple.com/library/content/documentation/Sw...


While it's true that Swift and Rust don't unwind the stack, this is just an implementation detail.

The comparison to Go is very misleading. Neither Swift nor Rust require you to return a value on error like Go does, nor do they allow you to simply ignore errors. The semantics, not the implementation, is what matters.

Swift's error handling is not "syntax sugar." try/catch in Swift are not macros that desugar into normal Swift. Errors are not returned via the normal return path, but via a dedicated register. Just like stack-unwinding exceptions, Swift errors are part of the core ABI.

https://github.com/apple/swift/blob/master/docs/ABIStability...


I have absolutely no experience with Rust, but I know it normally depends on libunwind. Isn't that for unwinding the stack?


Perhaps to implement https://doc.rust-lang.org/std/panic/fn.catch_unwind.html

But also just to display traces on asserts/panics, even if not "unwound" per se.


By default, Rust's panics _do_ unwind the stack. However, you can also set a flag to compile them as an abort instead. Stack traces are still useful in that case.


> Neither Swift nor Rust has exceptions, checked or otherwise.

The point is that both of them have generic error types which "infect" every caller (transitively) in much the way checked exceptions do.

That Rust and Swift require explicitly bubbling error values should be a point in favour of checked exceptions.


> That Rust and Swift require explicitly bubbling error values should be a point in favour of checked exceptions

I thought the explicit bubbling was the nicest thing about error handling in Rust. It's usually just a single character ('?') that the editor can easily highlight, and nicely indicates where the operations are that might fail.

I'd add that checked exceptions don't play nicely with general functional/stream operations like "map" which is why Java went with unchecked exceptions for their streams api in Java 8. Rust on the other hand can handle interior failures in such functions nicely, promoting them to a single overall failure easily via collect(), using the blanket FromIterator<Result<A, E>> implementation for Result<V, E>.


> I'd add that checked exceptions don't play nicely with general functional/stream operations like "map" which is why Java went with unchecked exceptions for their streams api in Java 8. Rust on the other hand can handle interior failures in such functions nicely, promoting them to a single overall failure easily via collect(), using the blanket FromIterator<Result<A, E>> implementation for Result<V, E>.

And Swift has a `rethrows` marker to transitively do whatever a callback does, it's equivalent to "throws" if the callback throws, and to nothing if it does not. So e.g. `map(_ fn: (A) throws -> B) rethrows` will throw if the callback it is provided throws, and not throw if the callback doesn't throw.


As I see it, the Rust (etc) approach avoids two problems:

1. The C problem of forgetting to check error returns. Yes, exceptions (checked or not) also avoid this.

2. The C++/Java/C# problem of exceptions being more expensive than you'd like for common error situations. .NET has sprouted alternatives like `bool TryParse(string, out int)` as a workaround, but on balance I prefer the unified mechanism.

What I don't like is how innocuous `unwrap` looks, or how often it appears in example code.


> The C++/Java/C# problem of exceptions being more expensive than you'd like for common error situations.

Exceptions should not be used for common error situations! This is the prime mistake of Java, which pretty much requires exceptions for ordinary error handling (and checked exceptions to boot).

> .NET has sprouted alternatives like `bool TryParse(string, out int)` as a workaround

I don't consider that a workaround. There is a deep semantic difference between TryParse and Parse. If you see TryParse then you know the data is expected to be invalid. If you see Parse then you know it's expected to be always valid. A good C# program should have very very few try/catch blocks (ideally just one).
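
A small sketch of the Parse/TryParse split (the input strings are hypothetical):

    using System;

    class ParseDemo
    {
        static void Main()
        {
            string configuredPort = "8080";  // expected to always be valid
            string userInput = "abc";        // expected to sometimes be invalid

            // Parse: failure here is a bug, so let the exception
            // propagate to the one top-level handler.
            int port = int.Parse(configuredPort);

            // TryParse: invalid input is a normal case; handle it
            // locally, with no exception involved.
            if (int.TryParse(userInput, out int age))
                Console.WriteLine($"port={port}, age={age}");
            else
                Console.WriteLine($"port={port}; please enter a number");
        }
    }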


> There is a deep semantic difference between TryParse and Parse

Sure, but I don't think that splitting every possibly-failing API call into throwing and non-throwing forms is the right way to express that difference, and a lot of the time it'll be a matter of context (meaning that the implementation can't magically do the right thing).

It's fairly easy to layer throwing behaviour on top of nonthrowing in a generic and efficient way (Rust's Option, Java's Optional etc), but the reverse is not true.

I must admit I'm losing track of what if anything we're in disagreement about, though...


I agree it's easy to layer throwing behavior on top of non-throwing behavior -- Java easily chose the worst possible way to do it.

But having both a Parse and a TryParse means that I can ignore the result of the Parse call entirely and let it fall through to the exception handler. It is by definition always expected to succeed, so when it doesn't, that's a bug. If you only have one of TryParse or Parse you cannot judge the intention.


> If you only have one of TryParse or Parse you cannot judge the intention.

Sure you can. To take Java as an example, if you only have one parse method and it returns an `Optional`, you can indicate the intention by whether or not you call `get` directly or call something like `isPresent` or `orElse` first/instead. Yes, you can get that wrong, but you can get the choice between `TryParse` and `Parse` just as wrong.


That's a very good point. I do this all the time with nullable types; I'm not sure why I didn't consider it that way.
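
A sketch of that idea with C# nullable types; ParseOrNull is a hypothetical helper standing in for a single parse API:

    using System;

    static class Demo
    {
        // hypothetical single parse API returning a nullable int
        static int? ParseOrNull(string s) =>
            int.TryParse(s, out int v) ? v : (int?)null;

        static void Main()
        {
            // "expected to succeed": use .Value directly; a missing value
            // surfaces as InvalidOperationException, i.e. a bug
            int port = ParseOrNull("8080").Value;

            // "expected to sometimes fail": check first or supply a default
            int age = ParseOrNull("not a number") ?? -1;

            Console.WriteLine($"{port} {age}");
        }
    }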


It's not really that innocuous. It'll cause the program to crash as soon as it's discovered that there wasn't a value where you were expecting one.

In languages with exceptions, the program will crash as soon as you try to use that value (rather than when you try to unwrap it), e.g. the infamous null pointer exception. Copying bad sample code in this case might result in code that is difficult to debug because a null value might be handed off several times before something tries to dereference it.

In languages that expect but do not enforce that you check the validity of the value (like C) you'll just get undefined behaviour that will hopefully cause your program to segfault when you try to use the value, but who knows what will actually happen? Copying bad sample code in this case will cause a security vulnerability.

Copying "bad" sample rust code (using unwrap) will cause a safe crash with maximum locality, for simpler debugging.


> The kind of exceptions that unwind the stack until some part of the code up the stack catches the exception.

Rust panics result in unwinding the stack:

https://doc.rust-lang.org/nomicon/unwinding.html

try!/? is preferred for handling errors, but if you "know" that a Result or an Option has a value, you can unwrap() or expect() it, and you'll get a panic if you're wrong.


So does panic() in Go. But no one claims that Go uses exceptions for error handling, because it doesn't, and neither does Rust.

In both languages panic() is for "this shouldn't happen" fatal errors, not for signaling errors to the caller, the way Java or C# use exceptions.

My comment was in response to a person basically saying "checked exceptions in Java would be good if only they were implemented the way Swift/Rust did it".


With Swift, that assumes that the function being called reports the error in the first place. Very few functions do.

Cast a large double to an int -- crash. How do you catch that error? You can't. You have to make sure it never happens yourself. UserDefaults is another great one: all sorts of ways it can crash your app, none of which can be caught or handled. Your app just crashes. My advice: convert all your objects to a text string (like JSON) and store that. Do NOT store a dictionary representation.


FWIW the try!() macro has been replaced by the ? operator.

Having spent extensive time with checked and unchecked exceptions I find Rust's error model to be very robust.

There's extensive tooling to handle and transform them; most of the pain comes from developers who are used to sweeping them under the rug (which totally makes sense in prototype land, but when I'm shipping something I want guarantees).


I find unsigned types more dangerous than signed bounded types. For unsigned types the edge case is at 0, a frequently used number in, e.g., array indexing. For signed types the edge cases are at (seemingly random) large positive and negative numbers.

Having to think more about edge cases makes the code more dangerous:

    // bug: i is unsigned, so "i >= 0" is always true; once i reaches 0,
    // i-- wraps around to the maximum value and the loop never terminates
    for(uint i = arr.len()-1; i >= 0; i--) {...}
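
For what it's worth, two conventional ways to avoid that trap in C# (arr is assumed to be an ordinary array):

    int[] arr = { 1, 2, 3 };

    // signed index: the "i >= 0" test now behaves as intended
    for (int i = arr.Length - 1; i >= 0; i--) { /* ... */ }

    // unsigned index: decrement inside the condition, after the comparison,
    // so the loop exits cleanly when i reaches 0 instead of wrapping
    for (uint i = (uint)arr.Length; i-- > 0; ) { /* body sees arr[i] */ }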


> I find unsigned types more dangerous than signed bounded types.

So you do agree that signed bounded types are dangerous, and it's only a matter of degree between them and unsigned. Thank you.


> + unsigned types (very handy for P/Invoke API boundaries because legacy Win32 has unsigned params everywhere; yes, yes, Gosling said that unsigned types are confusing and dangerous but nevertheless, they are still very useful)

The lack of unsigned types in Java is a constant pain when doing cross platform development, or when trying to consume (or write out!) a binary wire format.

It can be worked around, but it is just needlessly stupid.


Not having type erasure is pretty nice (except that reflection code with generics is really hairy).


I like type erasure and depend on it a lot for Scala and Clojure interop. Any JVM language works with the same objects and methods, and can implement its own type system/syntax on top.


Type erasure is one of the ugliest things in Java, on par with its LINQ-equivalent streams implementation. It's only painless if you never write generic code and are happy with a lot of duplication. C#'s implementation is superior by far: you can easily return newly created objects from a generic method, or use an object properly, without having to pass the type of the object as a method parameter in addition to specifying the generic type.


That's true on the CLR too, as long as everything supports generics.


Things that have CLR-like generics work on the CLR. Anything else doesn't. E.g. Scala.


I don't really get the complaint. Even if they had type erasure you couldn't use Scala in the CLR because nobody has written a CLR target. And on the other hand, I don't know why there couldn't be a Scala implementation with true generics. A lot of crap like DummyImplicit and ClassTag is there for no other reason than to work around those limitations anyway.


I think it's largely an urban myth.

All expressive languages whose type systems don't have a 1-to-1 equivalence in the runtime's type system need to employ some degree of erasure.

The distinction between "erasure" and "no erasure" doesn't make much sense. It's always just about more (CLR 1, JVM < 10) or less (CLR >= 2, JVM >= 10) erasure.


I don't see how you figure. In C# List<string> and List<int> are two different types at runtime.


And ...?


If you're implementing a CLR language and want flawless interop with everything then you just have the constraint that you need to support generics in the same way that other CLR languages do. I'm not seeing why this model is so much worse.


Scala had a CLR target for a number of years, but getting the type system to work was so problematic it was eventually abandoned.


That's what I meant by: "it's largely an urban myth."


I'm curious in what sense it was difficult to get it to work.


Perhaps not the best explanation, but this is the gist, as given by Scala's creator in an interview: https://youtu.be/p_GdT_tTV4M

> It's harder than it looks, because .NET has a very rich type system which happens to be not the Scala type system. In fact, it's very hard to map some of the parts of the Scala types like higher-kinded types...so it's a difficult task, much more difficult than JS or LLVM. LLVM is likely going to be the third target [after JVM and JS] because there we don't need to fight a type system which is foreign to us, which is what the .NET one is.


F* has an advanced type system with support for dependent types and refinement types. Yet, it runs on the CLR.

Source: https://www.fstar-lang.org


Hairy yes, but really, really powerful.


Java's virtual-by-default methods have never been an issue for me. On the other hand, some clown making their classes and methods final has been a recurring thorn.

The approach to lightweight objects in David Bacon's Kava remains the correct answer, and it is sorely missed.

The only case for unchecked exceptions is mitigating terrible design choices, like using Spring or Hibernate/JPA. When the client code can't do anything with a caught exception, it shouldn't even be thrown. Meaning: they're doing it wrong.

The omission of unsigned integers has caused me unnecessary pain a few times: original bindings for OpenGL, a DICOM parser, etc.

--

Properties are the one C# feature sorely missed in Java. JavaBeans is silly.
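
For comparison, a minimal sketch of C# properties versus the getter/setter pairs the JavaBeans convention requires (names are illustrative):

    using System;

    class Person
    {
        // one line replaces the private field plus getName()/setName()
        // that the JavaBeans convention would require
        public string Name { get; set; }

        // validation still fits naturally when a property needs it
        private int age;
        public int Age
        {
            get => age;
            set => age = value >= 0
                ? value
                : throw new ArgumentOutOfRangeException(nameof(value));
        }
    }

    class Program
    {
        static void Main()
        {
            var p = new Person { Name = "Ada", Age = 36 };
            Console.WriteLine($"{p.Name}, {p.Age}");
        }
    }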

Syntactic sugar for reflection would be nice, maybe something like 'Method m = myClass#myMethod( String )'.

Java has done plenty of terrible things, crimes against productivity and design rationality. Chiefly annotations, lambdas, and the silly Optional. C#'s equivalent quixotic fever dream is probably LINQ.

I have a laundry list for Java, JDK (especially), and JVM. Who doesn't? Mostly undoing bad ideas and culling deprecated stuff, plus some sugar to reduce the verbosity.

--

Unfortunately, Java's language is hard to separate from its culture. The taint of all the enterprise minded stuff (XML XSD etc, J2EE, Spring, excess indirection caused by design pattern newbies) has been hard to shake off.


> C#'s equivalent quixotic fever dream is probably LINQ.

What? LINQ is terrific.


> C#'s equivalent quixotic fever dream is probably LINQ.

Maybe the weird query-style syntax.

If you use it properly, it's no different from the ubiquitous functional higher-order functions (map, fold, filter, etc.), just with T-SQL-ly names.

Linq is so much better than doing all of what it does by hand, with for loops and temporaries galore, the way we once did.
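
A rough illustration of that contrast (hypothetical data):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class LinqDemo
    {
        static void Main()
        {
            int[] nums = { 5, 3, 8, 1, 9, 2 };

            // by hand: a loop, a temporary list, manual bookkeeping
            var byHand = new List<int>();
            foreach (var n in nums)
                if (n % 2 == 0)
                    byHand.Add(n * n);

            // LINQ: the same computation as a filter/map pipeline
            var withLinq = nums.Where(n => n % 2 == 0)
                               .Select(n => n * n)
                               .ToList();

            Console.WriteLine(string.Join(", ", withLinq));  // 64, 4
        }
    }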


And amazingly, despite all these supposedly horribly complex choices that programmers won't understand, it's a great language to work in.

Maybe guys like Gosling and Pike should take note. A touch more complexity in the right places can make a huge difference.


What do you think of partial classes and methods in terms of code quality?


I've always thought of "partial classes" as a language feature motivated by code generators not stomping on programmers' manually entered code. E.g. Winforms generates some declarations in one partial class while the programmer codes UI handlers in the other partial class.


When I first learned C#, I had a hard time navigating the code due to partial methods. So I have always wondered what C# developers think of having their methods spread out across different files.


Partial classes and partial methods exist specifically to support having code that is automatically generated paired with a human-managed source file.

While you could use the support for them to split human-managed code across separate source files, that would be a horrible practice that I've never encountered in the wild, even in .NET shops with otherwise-atrocious practices and code quality.


>While you could use the support for them to split human-managed code across separate source files, that would be a horrible practice

To continue that type of advice, some say "#region/#endregion" is another language feature intended for code generators, so that the IDE can collapse specific lines of code and hide them from view. Programmers should not be hand-coding "#region" themselves. That said, there is debate on that: https://softwareengineering.stackexchange.com/questions/5308...


#region / #endregion, IME, largely serves to mitigate the visual offensiveness of poor (usually completely neglected) design. Despite its potential utility for code generation, and in the rare case where there is a good reason for a monster source file, I'd prefer it not to exist.


We use regions to standardize the layouts of our classes. Except for trivial classes, you will find the same regions in the same positions making it eas(ier) to find, for example, all methods implementing an interface, or all of the private helper methods.
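
A sketch of what such a standardized layout might look like (region names are illustrative):

    using System;
    using System.Collections.Generic;

    class Cache<T> : IDisposable
    {
        #region Fields
        private readonly Dictionary<string, T> items =
            new Dictionary<string, T>();
        #endregion

        #region IDisposable implementation
        public void Dispose() => items.Clear();
        #endregion

        #region Private helpers
        private bool Has(string key) => items.ContainsKey(key);
        #endregion
    }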


Regions are just useless visual clutter most of the time. You can put the methods/fields in the same order without using regions. In my experience, regions are a very bad practice used only to mask the bad design that produced gigantic classes. The only place where I think they may be helpful is when you are writing a library and your class must be huge because you are implementing, for example, a trie or some other complicated collection or object that doesn't make sense to divide into smaller classes. And even in that case I would first try really, really hard to split it into smaller entities rather than just having a thousands-of-lines class with some regions around.


Not sure where to go from there. You've precluded the possibility that regions and good design should exist at the same time in the same file.

Where does the absolutism in the tech industry come from? We are a bunch of individuals who have individual experience and then try to form a view of the world that satisfies our experiences. What about the experiences you haven't had or conceived of? We are constantly rewriting the rules in our head to fit the new experiences we have every day to make sure we are right all of the time. Surely, our current world views are not complete or we would have no room to grow.

Still, I'll take your comment under advisement in case my classes are big, poorly designed non-Tries.


Since it's buried at the bottom of the stackexchange: #region/endregion is useful for visually grouping Unit Tests per method under test. They're also occasionally useful to group interface implementations.


You could just use nested classes for that, with the added benefit that the test framework will reflect that organization in its UX.


The suggestion is appreciated. Thank you.

I have a minor doubt that I won't like the additional level of nesting incurred, but I'll attempt it regardless.


Partial classes are also useful in the case that you want a nested class to have its own file.


I think I may have manually created partial classes for huge classes to make them more manageable. Think for example expression tree visitors or collections of extension methods. Sure, I could just have created several classes like FooExtensions and BarExtension, but having a partial class Extensions seems ever so slightly better to me. But I generally agree, there are not many good use cases besides code generation and if you are tempted to do it, then you probably have a problem that has a different and better solution.


I think just about every time I've seen it it's either been exactly what the OP described or an attempt at refactoring a class whose scope had grown far too large.


It's a tool to use.

Like others have said, it makes working with generated code easier.


That's where every Visual Studio dev will see them used, because it's been a VS pattern since the beginning. But I create them manually often enough. Overall I use them sparingly, but there are cases where you definitely want a set of things in one class for code-organization, but still want to think of them as different modules for human-organization.


I did that once where I had a complicated class with a docile public API that controlled a not so docile long running threaded 'machine' of sorts. Having them in two files made thinking about stuff easier.


If you want to think of them as different modules, then most likely they should be completely separate entities. Why on earth would you want something completely different to live in the same class? It is just screaming that it wants to be a separate class. Namespaces are the correct tool for code organisation, certainly not partial classes.


It's very interesting that you were able to see my use cases and come to this conclusion...


Regardless of the use case, this is what the official documentation says about namespaces:

The namespace keyword is used to declare a scope that contains a set of related objects. You can use a namespace to organize code elements and to create globally unique types.

And about partial classes:

There are several situations when splitting a class definition is desirable:

+ When working on large projects, spreading a class over separate files enables multiple programmers to work on it at the same time.

+ When working with automatically generated source, code can be added to the class without having to recreate the source file. Visual Studio uses this approach when it creates Windows Forms, Web service wrapper code, and so on. You can create code that uses these classes without having to modify the file created by Visual Studio.

As you can see partial classes are not the right tool to organise code.


How would you propose to split a struct that has a single data member? Say, a struct where the only data is one unsigned short?

Also, "regardless of the use case" ? I barely know how to respond to that. Have a little imagination.


Thank you. It has just occurred to me I can use this feature at work!

lol


Not the OP, but here are my 2 cents: partial classes should only be used when you have a mix of generated and hand-written code. Any other usage should be forbidden. It requires some education, but it is useful for this use case.


If, like me, you're old enough to have used MFC (where the code generators generated source for classes that you had to augment, and randomly picked lines to replace or overwrite in those classes whenever it felt like doing so (1)), you think they are a godsend.

(1) That's an exaggeration; MFC used comments to identify sections that it owned in the source. But in my (limited) experience, there typically were zillions of ways to make changes to your code, and if you didn't use the one Microsoft picked as _the_ way (and which they didn't put in your face in the IDE), you were in for heaps of problems.



