Mono's New .NET Interpreter (mono-project.com)
229 points by benaadams on Nov 13, 2017 | 115 comments



It's interesting how .NET is converging on the same runtime techniques used in Java, and yet how drastically far behind it is. The Java guys realised decades ago that an interpreter was still useful. Tiered compilation shipped in Java 8, and that was more complex to implement because there were two JITs, so JIT'd code still had to do profiling.

It's also quite interesting that the interpreter was dropped due to the complexity introduced by having generics in the runtime. A long time ago I thought it was self-evident that type erasure was a poor way to do generics, but Brian Goetz has been arguing persistently for years now that type erasure isn't necessarily a bug, just a different tradeoff, with runtime complexity being one of the factors that weighed into the decision ... something that's easier to understand now that I've read this blog post.

There are some things I like about .NET, though it's been a long time since I did any, but HotSpot seems so dramatically far ahead, and the distance is increasing. The major selling point of .NET over Java originally was supposed to be great multi-language support (well, and better Windows support), but there are way more languages targeting the JVM than the CLR these days. HotSpot does tiered compilation, it now does AOT compilation too, it has a new JIT called Graal that either matches or completely smokes the best available runtimes for almost every language, and IntelliJ is every bit the match for Visual Studio. And it's all cross-platform.

So what's .NET's major selling point these days?


It is worth pointing out that when we dropped the interpreter, we only had two or three engineers working on the VM and they had to both develop the JIT and maintain the interpreter, plus work on the GC, io-layer and other VM features.

Without a reason to keep the interpreter (the world was a JIT-friendly place back then), it made no sense to maintain it.

But times change: statically compiled environments are more common nowadays (iOS, PlayStation, Xbox, tvOS, watchOS), and with them comes the need for dynamic capabilities.

To put things in perspective, adding generics to the revived interpreter probably took an engineer who was not familiar with .NET about 4-6 weeks of work.


I'm very confused as to whether you believe that to be fast or slow. That seems incredibly fast for "an engineer"...


.NET is the perfect stack for the generalist.

- As a consultant, if I am handed a .NET project, even if it's an iOS, Android, web, or WPF application, I'm usually familiar with 100% of the ORMs/PDF libraries/GUIs/build systems/source control/IDEs/web frameworks they use (excluding client-side web). I can come up to speed and get running quickly whether it was built last year or 10 years ago. It's the exact opposite of the JavaScript environment, where two different JavaScript apps have less in common than a Ruby and a Python app. It also allows me to meet all my clients' needs well enough on one platform.

- Because of the previous item, it's perfect for enterprise which needs to be able to scale up and down teams internally.

- Tooling is built for ease of use and is amazing. In my experience, a lot of tooling in the Java world is more powerful but has much, much steeper learning curves, leading to far more specialization.

- Visual Studio is a best-in-class IDE.

- It's not Java. From a language perspective, Java is still playing catch-up.

Java is great: there are a lot of amazing open source projects on it, the big data story on Java is far more compelling, you can write much better Android apps, and it suits large multi-department transactional financial/enterprise software. But it is in no way, shape or form superior to .NET in all ways.


.NET does have two major downsides that kill it for me entirely (as someone who has used it for 15 years now): a schizophrenic roadmap with deprecations, and immature libraries. The deprecation means you can't rely on the vendor's whims for the longevity of whatever you build your product on. The library issue is a biggie: there are just so many more high-quality libraries in Java. The language matters considerably less than the libraries do when you want to get from A to B.


It also doesn't help that almost 100% of the time, when I'm Googling for docs, I get sent to a page for something deprecated that may or may not still exist in the version of .NET I'm using. It seems like more often than not, I have to really hunt for up-to-date information, and even then I'm never really sure. MS direly needs to do some SEO work on MSDN.

That and the alarming amount of mixed information you get on whether or not using X is a better idea than Y, for ill-defined (from all angles, including MS), deprecation-related reasons. At least that was my experience a couple weeks ago when I got placed on a one-off .NET project.

I'll admit that I'm not terribly experienced with .NET's standard library. But I am very familiar with C# as a language and I'm comfortable using it. It was rather frustrating for the stumbling block to be what it was.


On the library front, we've used IKVM a few times when there was a Java lib that did what we wanted with no equal in .NET land; Saxon's XPath 2 support would be a primary example. However, with the developer of IKVM stepping down, this creates a possible future risk.


As a generalist, I find being able to do Java, .NET and the occasional C++ dive a nice way to have the best parts of all the cakes. :)


Excellent GUI frameworks on most platforms.

Much better unmanaged interop story: [DllImport] (btw it works on all platforms, e.g. on Linux it imports from .so libraries), structures, unsigned integers.
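A minimal P/Invoke sketch of that point, assuming a Linux host where "libc" resolves to the system C library (getpid is the real libc entry point; the C# method name is my own):

```csharp
using System.Runtime.InteropServices;

static class Native
{
    // The same attribute resolves a .dll on Windows, .so on Linux,
    // and .dylib on macOS, given an appropriate library name.
    [DllImport("libc", EntryPoint = "getpid")]
    public static extern int GetPid();
}
```

Usage is just `int pid = Native.GetPid();`; no wrapper library or JNI-style glue code is needed.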

Better multithreading: thread pool, synchronization contexts, async-await.
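A minimal async-await sketch (the method and values are illustrative):

```csharp
using System.Threading.Tasks;

static class AsyncDemo
{
    // await yields the calling thread while the timer-based delay is
    // pending, then resumes the method; no thread blocks in between.
    public static async Task<int> ComputeAsync(int x)
    {
        await Task.Delay(10);
        return x * 2;
    }
}
```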

Unsafe code, pointer arithmetic.

LINQ
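For example, a trivial filter-and-project pipeline (names are illustrative):

```csharp
using System.Linq;

static class LinqDemo
{
    // Declarative filtering and projection over any IEnumerable<T>.
    public static int[] SquaresOfEvens(int[] xs) =>
        xs.Where(x => x % 2 == 0).Select(x => x * x).ToArray();
}
```

`SquaresOfEvens(new[] { 1, 2, 3, 4 })` yields `{ 4, 16 }`.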


Speaking of interop: the nice thing about the .NET type system is that it has the facilities to do a complete mapping to C, with the sole exception of setjmp/longjmp (which is generally a sore point with everything; even C++ doesn't handle it well). You already pointed out unsigned integers and structs; it's also worth pointing out raw (non-GC-aware) pointers and explicit-layout structs; the latter, in particular, can handle C unions seamlessly (you just say that all fields have offset 0).
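A minimal sketch of the explicit-layout point, mirroring a C union { int i; float f; } (the type and member names are my own):

```csharp
using System.Runtime.InteropServices;

// Both fields are declared at offset 0, so they overlap like a C union.
[StructLayout(LayoutKind.Explicit)]
struct IntOrFloat
{
    [FieldOffset(0)] public int I;
    [FieldOffset(0)] public float F;
}

static class UnionDemo
{
    // Reinterprets a float's IEEE-754 bits through the overlapping int field.
    public static int FloatBits(float f) => new IntOrFloat { F = f }.I;
}
```

`FloatBits(1.0f)` returns 0x3F800000, the bit pattern of 1.0f.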


Variable-length C structures don't map. An example of such a structure:

https://msdn.microsoft.com/en-us/library/windows/desktop/aa3...

But they're rarely found in APIs, and there's a Marshal class in the framework to implement manual marshaling by reading/writing stuff at unmanaged memory addresses, even without the /unsafe compiler switch.


With a little syntactic sugar in C#, they do map, in the same exact way you do this in C: you just use a "fixed" array of one as the last element (the struct must be declared unsafe, since fixed-size buffers require an unsafe context):

    unsafe struct VariableLength {
        public int fixedSize;
        public fixed int varSize[1];
    }

Of course, this also means that you have to use stackalloc or other manual allocation to actually get a properly sized block of memory, computing said size yourself; but you also have to do that in C with malloc:

    byte* p = stackalloc byte[sizeof(VariableLength) + 10 * sizeof(int)];
    var vl = (VariableLength*) p;

Once you have a pointer, though, you can do vl->varSize[i] etc.; this works because the type of vl->varSize when you access it is int*.


Came here to say this — one of my first projects in the industry as an intern, way back with .NET 1.1, was to create a C#-based application to parse through effectively binary-serialized, variable-sized C structs.


"fixed" was a C# 2.0 addition, though. Back in 1.x days, you basically had to declare that field as a simple scalar instead, and then use & to manually obtain the pointer (but then you could index it with [] etc, same as in C, so it doesn't really make that much of a difference in this case; "fixed" sure did come in handy for other stuff though).


Yeah, Mark Reinhold has stated multiple times that JNI was designed on purpose to be hard, because Sun wanted to drive people away from writing unmanaged code.

Thanks to the JRuby guys and Project Panama, Java will eventually get a .NET-like interop story, but it is still in the future. :(


Searched a bit on Project Panama and found an example:

    @Header(path="unistd.h")

While indeed very easy to use, that API is extremely hard to implement in the runtime in a way that works beyond toy examples. Authors of real-world libraries do all kinds of weird stuff in their C headers: they abuse the C preprocessor, implement compiler-specific hacks, and even include pieces generated by external tools.


Wow, when did he say that?


At the JavaOne 2013 Technical Keynote.

https://blogs.oracle.com/java/the-javaone-2013-technical-key...

<quote> He mentioned JNI 2.0 and said, “It just shouldn’t be so hard to integrate Java with native code after all these years.” </quote>

For the full context, you need to watch the keynote.


> Excellent GUI frameworks on most platforms.

Are the platforms here Windows 10, Windows 8 and other MS platforms? Last time I looked, the GUI story on Mac and Linux was far from excellent: something on par with the Python bindings to Qt/GTK. It works, but it's not enjoyable.


> but there are way more languages targeting the JVM than the CLR these days

This is often repeated, but it turns out that the JVM and CLR have roughly the same number of high-profile languages:

Specifically, the CLR enjoys the following mature languages:

    C#
    F#
    VB (which has some distinct advantages over C#)
    C++/CLI
    PowerShell
    IronPython
    Nemerle

It's also worth noting that Clojure absolutely works on the CLR, and F* (F# with a more powerful type system) just hit version 0.9.

C# (and VB) get a bit of love on HN, but they're even more powerful than they get credit for. C# has monad comprehensions, a metaobject protocol[1], and can straightforwardly simulate type classes (in the form of implicit conversions to constructed classes). So, as a language, C# can get close to both Scala and Clojure even without having higher-kinded types, multimethods, or macros.
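To make the "monad comprehensions" point concrete: C#'s query syntax desugars into Select/SelectMany calls, so defining those for your own type is enough to use LINQ syntax over it. A sketch with a hypothetical Maybe<T> (not a framework type):

```csharp
using System;

struct Maybe<T>
{
    public readonly bool HasValue;
    public readonly T Value;
    public Maybe(T value) { HasValue = true; Value = value; }
}

static class Maybe
{
    public static Maybe<T> Just<T>(T v) => new Maybe<T>(v);

    // These two extension methods are all the compiler needs
    // to enable "from ... select" query syntax over Maybe<T>.
    public static Maybe<U> Select<T, U>(this Maybe<T> m, Func<T, U> f) =>
        m.HasValue ? Just(f(m.Value)) : default(Maybe<U>);

    public static Maybe<V> SelectMany<T, U, V>(
        this Maybe<T> m, Func<T, Maybe<U>> bind, Func<T, U, V> project) =>
        m.HasValue ? bind(m.Value).Select(u => project(m.Value, u)) : default(Maybe<V>);
}
```

With that in place, `from x in Maybe.Just(2) from y in Maybe.Just(20) select x * y + 2` composes the two values, and short-circuits to an empty Maybe if either side is empty.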

[1] https://books.google.com/books?id=QGciuyBRyU0C&pg=PA119&lpg=...


> "IntelliJ is every bit the match for Visual Studio"

What debugging features does IntelliJ have that are of equal or superior utility compared to IntelliTrace?

https://blogs.msdn.microsoft.com/zainnab/2013/02/12/understa...


It has the same feature, called Chronon.


> but HotSpot seems so dramatically far ahead and the distance is increasing.

The Benchmarks Game scores reached rough parity between Java and C# once .NET Core 2.0 dropped.

.NET selling points:

* you can do some SIMD explicitly, with more coming

* you have control over when things are stack allocated or not. structs, stackalloc, slices

* you have unsigned ints
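A sketch of the explicit SIMD point, using System.Numerics.Vector<T> (built into .NET Core; a NuGet package on the full framework); the helper method is illustrative:

```csharp
using System.Numerics;

static class SimdDemo
{
    // Vector<T> operations lower to SSE/AVX instructions where the JIT supports them.
    public static float[] Add(float[] a, float[] b)
    {
        var result = new float[a.Length];
        int i = 0, width = Vector<float>.Count;  // e.g. 8 floats with AVX
        for (; i <= a.Length - width; i += width)
            (new Vector<float>(a, i) + new Vector<float>(b, i)).CopyTo(result, i);
        for (; i < a.Length; i++)                // scalar tail
            result[i] = a[i] + b[i];
        return result;
    }
}
```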


Hm, it seems like the big win for C# was actually someone going in and optimising the benchmark programs themselves:

http://anthonylloyd.github.io/blog/2017/08/15/dotnetcore-per...

The point of explicit control is usually performance though. So if Java can give you near equal or better performance but without the developer having to explicitly specify as much, isn't that a win?

The Benchmarks Game also lacks the benchmark the Mono blog post is all about: developer iteration time.


In many cases, Java actually can't give you near equal or better perf. "Our optimizer is smart enough, you don't need to care about stack-allocating things" has been the promise for as long as Java existed, but it doesn't really deliver that outside of fairly simple cases (which are also the ones that are typically benchmarked).


Java's new partial escape analysis should help with that.


> So if Java can give you near equal or better performance but without the developer having to explicitly specify as much, isn't that a win?

It would be, but oftentimes the lack of explicit control means you need more verbosity in Java to get around it, because the JIT isn't all-knowing. For instance, say you want an array of 2D vectors (x, y) and you want them inline in the array for data locality. In C# you just make Vector a struct and put the structs in the array. In Java you would have to put float primitives in the array, alternate the x and y, and keep track of it by hand. Minecraft suffers greatly from this.
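The C# side of that example looks like this (Vec2 is illustrative); every element of a Vec2[] is stored inline, so iteration walks contiguous memory:

```csharp
// A value type: Vec2[] lays the x,y pairs out contiguously,
// with no per-element object headers or pointer chasing.
struct Vec2
{
    public float X, Y;
    public Vec2(float x, float y) { X = x; Y = y; }
}

static class LocalityDemo
{
    public static float SumX(Vec2[] vs)
    {
        float sum = 0;
        for (int i = 0; i < vs.Length; i++)
            sum += vs[i].X;  // sequential 8-byte strides through the array
        return sum;
    }
}
```

In Java, a `Vec2[]` would be an array of references to heap objects; the struct version avoids that indirection entirely.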

As well, Java won't do anything but the most trivial automatic vectorization, and only on integers, and SIMD can be a huge win at times. I'm looking forward to more complete support dropping for .NET.

> The Benchmarks Game also lacks the benchmark the Mono blog post is all about: developer iteration time.


F#

Single-file static AOT compilation. Does the JDK support this yet?


It has been supported since the early days for anyone willing to pay for commercial JDKs; only Sun was against AOT compilation.

For those who would rather use only OpenJDK, there is Linux x64 AOT support in Java 9, with improved support for other platforms on the Java 10 roadmap, some of it already in master.

As for F#, it is currently lacking some love from the .NET Framework, UWP and Visual Studio teams, which always think in C# and VB.NET terms. Even C++ seems to be better supported.


I would say Scala is on par with F#. And there's also Kotlin if you're looking to transition a Java team to a static FP language more gently. And there's Clojure if you prefer a dynamic Lisp.


Curious to see how this plays out with Microsoft investing in cross-platform development with .NET Core and .NET Standard.

The consequences of having multiple competing cross-platform options are already evident when using tools like intellisense in VS Code where you can (or used to be able to) switch between Mono and .NET Core runtimes and certain libraries were only compatible with one. And then there are build tools like Paket which currently only support Mono even though Paket itself can provision .NET Core runtimes.


We now share a lot of code between the runtimes; it is an effort we have taken seriously, to improve compatibility and capabilities.

Today, when you check out the Mono source code, it brings in CoreRT and CoreFX.

From a user perspective, we have worked to make a universal API surface that goes everywhere Mono, .NET or Xamarin go. It is called .NET Standard, and last month we released a major upgrade to .NET Standard that grew the API surface across the board.

As for the runtimes, they have different pros and cons, so they currently cannot be unified into a single one; we need to maintain different runtimes, but we are working to make the tooling work better across the board.


Until .NET Core starts supporting GUI frameworks there will always be a use case for Mono. I can get up and running with GTK# relatively quickly for a cross-platform desktop application using Mono when that isn't the case for .NET Core. That being said, if I were to build a web app I would definitely go with Core over Mono.


There is nothing preventing these GUI frameworks from working on .NET Core. They might need some work to support it, but it's entirely possible.

https://github.com/AvaloniaUI/Avalonia


Avalonia is in alpha right now, and I would not trust a production desktop application to an alpha GUI toolkit. It is entirely possible to get frameworks like GTK# working on .NET Core, but the problem is: who is going to do that?


The last time I looked, GTK# wasn't a good experience for Windows users as it has to be installed separately - has this changed?


As long as we have things like this: https://github.com/dotnet/corefx/issues/19773

And this: https://stackoverflow.com/questions/27266907/no-appdomains-i...

And did I mention this? https://github.com/dotnet/coreclr/pull/8677

As long as we have things like that, and as long as .NET Core continues to pander only to the lowest common denominator of developers, then the Mono project is still providing value. It is not a matter of "competing".


I can't believe that people want to bring their old anti-patterns to CoreCLR. Running processes is work for the operating system, not the language runtime. Maybe ASP.NET Core should run Kestrel as a kernel module too?!


I guess if those people have to rewrite their code from scratch to port to .NET Core, porting to another programming language becomes an option as well.


I think you're right to call out that it's not competing in every way, but there are some cases in which it is competing directly—and it's in those cases where it can be a frustrating developer experience.


I've been a (mostly) C# developer for 10 years, and I am terribly disappointed with the way .NET Core is being managed. Something is definitely wrong when promises are made to assign a developer to the issue full-time, and then no communication happens after that for months. All they needed to do was test and merge the hard work done by a volunteer!

These problems are eclipsed by the marginal success of the 95-99% of use cases that are covered by .NET Core.

One of the lesser-known areas where C# and .NET have shined involves plugin and modular development. AppDomains and secure code regions are features of the runtime. For more esoteric use cases, the classic .NET runtime itself is customizable, with replaceable COM interfaces via the C++ hosting API. This is how SQL Server implemented stored procedures that could be written in C#.

The way .NET Core has treated the issue is tantamount to a betrayal of trust. I understand that they are trying to take things in a different direction, but doing so undermines the other features that .NET provides so well. Not to mention they did promise to work on and merge the feature before leaving people hanging with no further communication. A cross-platform .NET Core that doesn't have this kind of feature is not going to do much that Python or Java cannot. We should be far more worried about .NET Core competing with the .NET Framework than about .NET Core competing with Mono.

Is Microsoft trying to provide a good tool, to empower developers to build things that could not easily be built before? Or is the primary goal to be seen as friendly to the open source community, by providing yet another dumbed-down, cookie-cutter cross-platform SDK that deserves no relevance? Are they intentionally omitting or delaying features that one would use in more advanced scenarios, forcing developers to choose the classic .NET Framework (Windows/Mono)?

It is starting to seem like the same old tricks by Microsoft, just more cleverly hidden.


I think it's pretty weak to assume that because MS isn't supporting your extremely rare and complicated scenario, they are obviously malicious.

.NET Core has been a huge rewrite of the old framework, focusing first on common scenarios which are easily made cross-platform, then on the harder ones. E.g. System.Drawing, which they eventually included (in 2.0, I believe) by incorporating several implementations for the different target architectures, including one from Mono.

They'll probably get to AppDomains eventually, assuming it isn't outright impossible on Linux/macOS or doesn't introduce a massive security hole.

But not doing it right now because you want it isn't evil. Which is more important: full support for System.Cryptography, or say Oracle ADO.NET, or runtime loading/unloading of DLLs? Hint: the answer isn't DLL unloading.


I would be careful with the term "rewritten". CoreCLR is really old (Silverlight), from before it was bound to UWP (x86/ARM) and before it was ported cross-platform to other operating systems. There was heavy code borrowing from the .NET Framework, but the split happened a long time ago (around 2005).


There are elements of truth in this, but it is not the whole story.

While Silverlight did produce the first portable version of .NET, and produced a lot of the code that allowed .NET to be portable, this code has now been retrofitted into the main CoreCLR and the VM source code has been upgraded.

What you see in the public CoreCLR is now the state of the art, and while I do not know how the exact mechanics work, my understanding is that the source code that ships for the CoreCLR and the .NET desktop VM are very close to each other, modulo branches, deliverables, freeze dates and other loose ends.

On the library front, there has been a lot of rewriting for the sake of portability, cleaning up the code, making it more maintainable, and bringing it up to 2017 coding standards.


I stand corrected. Thanks Miguel for your work.


Perhaps "remixed", "trimmed" and "partially augmented" are better terms.


Yeah. I just commented to avoid confusion. .NET's history is more complicated, but also more rational, than it often seems.


Extremely rare? You did check the links I posted, right? There are many comments, and that is just the outspoken minority. It's one of the most requested outstanding items in their entire repository.

And it's not a matter of priority or security; it's a matter of transparency. The community was promised a review and a stress test, and that someone from MS would be working on it full-time. They followed up with more promises multiple times. And then silence. You're completely missing the point.


I enjoy .NET a lot, but as polyglot consultants we are currently only focused on .NET for straight Windows development.

For us to consider using .NET Core instead of Java or C++ for UNIX-like OSes, it still needs to catch up a bit with its bigger brother.


.NET Core and .NET Standard

I don't understand why they need so many different "cross-platform" standards. Why is the platform so fragmented when MS controls the whole thing?


.NET Standard is the standard. There is only one standard.

.NET Core is one implementation of the standard. .NET Framework 4.6 is another implementation. Mono, Xamarin and the Universal Windows Platform are others.

The idea is that any code written for .NET Standard will work across the different implementations.
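For concreteness, a library opts into the standard with a single TargetFramework property (an illustrative SDK-style project file):

```xml
<!-- A library targeting .NET Standard 2.0 can be consumed by
     .NET Framework 4.6.1+, .NET Core 2.0+, Mono 5.4+, and Xamarin. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```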


As I said, I don't really understand how it's all supposed to fit together, but what I've found in practice is that completely random and innocuous methods are missing from the System frameworks depending on which "standard" you target in Visual Studio.


The .NET Standard spec has multiple _versions_, each of them adding more APIs: https://github.com/dotnet/standard/blob/master/docs/versions...

It's true that .NET Standard 1.0-1.6 had a lot of gaps, but these should be rectified with 2.0 now, so you shouldn't run into these problems anymore (otherwise there's usually a good reason why a method/API is missing from the standard).

Disclaimer: I'm working on Mono for Microsoft/Xamarin, primarily class libraries and tools.


Thanks for your work!


It's been covered by others here already, but my understanding, as an outsider reading all this, is that .NET Standard is like C99 or C++14, a standard or specification that defines how the language is to function, and .NET Core is an implementation, like GCC or Clang are compilers that implement the C/C++ standards. In the prior example given, Java has a standard for each version that defines how the language functions, and there are virtual machines that implement that standard, such as Oracle's (originally Sun's) JVM implementation, OpenJDK, or, in the distant past, the Blackdown Java port[1].

Contrast this with languages where there isn't a spec, or the spec is the interpreter, such as Perl 5 (Perl 6 actually has a specification and multiple implementations at various levels of conformance), or things in between, such as Python with the Python Language Reference. There are benefits and drawbacks to a standard/specification. One benefit is that it's easy for someone to start their own implementation, test it for conformance, and have a high degree of surety that it will work with existing programs. One downside is that when a problem is found in the spec, it often requires more work to fix, as the changes need to propagate out to the implementations, each of which has its own timeline for implementing them.

1: https://en.wikipedia.org/wiki/Blackdown_Java


From what I've heard, .NET Standard 2 should be stable for a while and should fix the random missing stuff that was present in the various versions of .NET Standard 1.x.


Naming is still one of the most difficult things. Then again, we get used to bad names and keep on... (For example, why does the JSON standard persist in calling things objects rather than, say, maps or dictionaries?)


.NET Standard is a standard, like a JDK version, which can be JDK 8, JDK 9 and so on. .NET Core is an implementation, like OpenJDK or the Oracle JDK. If you write something against the JDK 8 standard, like the Java Time API, you expect it to work with either Oracle JDK 9 or OpenJDK 9. In .NET, the standard version tells you which versions of the .NET Framework, .NET Core, or any other implementation like Mono can execute the software.

I don't see any real fragmentation here: one implementation is solely for people who are OK with Windows, and the other is meant for people who are not (of course .NET Core is meant for more than just this). The same goes for Java: the Oracle JDK is meant for people who are OK with Oracle, and OpenJDK for people who are not. What fragmentation? There are but a few implementations, and each of them has clear use cases. It's not like choosing a JavaScript framework or tool; that ecosystem is true fragmentation... And who knows, maybe with time .NET Core will evolve to the point where it is the only .NET implementation.


Because they have different runtimes, and Microsoft has been trying to clean up the mess created by Sinofsky regarding .NET, UWP and C++/CX.


Historically speaking, it was the WinXP security mess that forced the .NET team to take over WPF and then, as a consequence, build the separate CoreCLR for Silverlight. That, however, was killed by the same guy who killed Flash: Mr. Jobs from Apple. The Win8/Mobile team then harvested the most modern technology available at the time (CoreCLR).

There is a nice talk about it, somewhere on YouTube.


Win 8 sucked for .NET developers regarding API compatibility, because it required lots of incompatible changes, which is the reason the Windows 10 FCU is the first UWP version to finally be compatible with .NET Standard 2.0 (4.6.1).

Then there was the whole 8.0, 8.1 with UAP, and finally UWP transitions.

The only two good things from UWP for .NET devs were finally getting .NET Native, which should have been there since 1.0 given the Delphi roots of Anders, and the COM runtime model that was in the genesis of .NET (formerly known as Ext-VOS).

I think the talk you refer to is one from the OS research group at MSR.


No, the talk is from the .NET Rocks guys.

https://youtu.be/IqWar6cEWsA

UWP/UAP is a new UI stack from the Windows team, bound to a CoreCLR. That is different from the existing .NET UI technology stacks.

Additionally, the CoreCLR and its more modern assembly packaging (System.Runtime vs mscorlib) made the code difficult to share (recompile or PCL). This is solved now by .NET Standard.


> UWP/UAP is a new UI stack from the Windows team. That is different from the existing .NET technology stacks.

Which is what I meant by "mess created by Sinofsky regarding .NET, UWP and C++/CX". I do have UWP experience since WP 8.

Thanks for the link about the talk.


Agreed. History has not played nice with .NET, but neither has it with Java or Python.


Two other favourite ecosystems of mine. :)


I think the only popular ecosystem with a positively trending history is JavaScript. But... nah, I won't start on that here :)


That sounds like the right answer. (I don't imagine Mono/Xamarin are to blame, as they were independent until recently. This is a big self-inflicted mess by MS.)


.NET Core and Mono are two different implementations of .NET Standard. It's a standard as described in the name.


.Net isn't the standard, it's Microsoft's implementation. (Or rather, .Net Core and .Net Standard are each Microsoft implementations.)

The standard is either CLR (Common Language Runtime) or CLI (Common Language Infrastructure), I'm not entirely sure.

Edit As far as I can tell, yes, even Miguel de Icaza is consistently misusing these terms.

Edit 2 - Nope, I'm wrong!


"The .NET Standard is a formal specification of .NET APIs that are intended to be available on all .NET implementations."

https://docs.microsoft.com/en-us/dotnet/standard/net-standar...


I see I'm mistaken -- thanks.


Jesus, it's like they went out of their way to make it as word-salad as possible.


There is no .NET Standard "implementation", you may be confusing .NET Standard with .NET Framework, which is the legacy Windows-entangled implementation.

(Yes, there are also ECMA standards for the CLR/CLI, C# language, and a somewhat small subset of the BCL [Base Class Library]. .NET Standard subsumes those underlying standards and then specs out a much larger set of the BCL and even some libraries that aren't strictly BCL but may as well have been in the way they've become standards such as ADO.NET.)


.NET Standard is an interface/spec and .NET Core and Mono are implementations of said interface/spec. .NET Core and Mono are currently focused on different needs/niches (as pointed out somewhat in the article here), partly born of their very different histories. .NET Core and Mono also share a bunch of code as it makes sense to do so with respect to their histories/needs/niches.


Right now it is kind of a cleanup phase, ending with massive code reuse. But ultimately, I believe we will continue to see both .NET Core and Mono flourish. Why? Because .NET Core and Mono have very different targets. Mono has nearly a decade of being factored into all kinds of platforms, while .NET Core and the CoreCLR will stay a workhorse for dedicated use cases. Same for the .NET Framework, which has a full dedication to Windows with its huge legacy and compatibility burden.


> Mono has nearly a decade of being factored into all kinds of platforms while .NET Core and the CoreCLR will stay a workhorse for dedicated use cases.

I thought the "dedicated use case" of .NET Core was to be THE cross-platform .NET implementation.


Yeah. Cross platform server side.

By "factored" I meant more: natively compiled, tree-shaken, compiled to WebAssembly, allowed on iOS, ARM, PowerPC, ...


That's the status quo for historical reasons, but I don't think many see this as a desirable state of affairs.


Agreed. But reverting this is really hard, because it is crucial for .NET to play both fields for the next ten years :). Reworking Xamarin is not an option at a time when the race against Cordova, React Native, Node, Swift, Go and Python is fully ongoing. They are pragmatic here, as they should be.


.NET Core is the gateway to Azure; I doubt we will see it on anything other than platforms developers might use.


The source code for .NET Core, as it is on GitHub now, powers .NET Core and UWP. It is a toolbox. We might not see .NET Core with a UI stack directly, but other factorings might happen. E.g. Samsung is using it for their Tizen platform in their own CoreCLR-based factoring, including a UI adapter between Tizen and the Xamarin.Forms abstraction. That is pretty much toolbox thinking. Now they can offer a .NET Standard 2.0 / Xamarin.Forms-based programming environment with "little" effort.


To be honest, I would refrain from using Tizen as a positive example of anything.

Samsung has already rebooted the SDK so many times, and the platform's lack of security is so well known, that I doubt it will ever get serious beyond a few watches and TV sets from Samsung.


:) You are right. But at least it shows the toolbox character. Whether something good comes out of it... let us see :)


I agree that having mono and .NET Core being developed at the same time is not a good thing. How do you pick one over the other? There is way too much overlap.

With .NET Core being open source, they should find a way to work with the Mono team.


> How do you pick one over the other?

I believe you do not have a choice. iOS, Android, macOS, and Linux desktop go to Mono. No choice there. Application servers go to .NET Core. No choice there, considering performance and cloud deployability. Windows legacy goes to the .NET Framework. No choice there. Modern Windows apps go to UWP (another .NET Core).

I do not see a choice except when you want to run a service on Mono. Which everyone will tell you not to do.


> Which everyone will tell you not to do.

Why?


The performance is dramatically different (see the TechEmpower benchmarks) and overall stability also seems to be a concern (the CoreCLR is pretty battle tested). Mono, from its origin as GNOME technology to its current peak in Xamarin, has somehow always played in the UI field (which has consequences for e.g. garbage collection or JITing).


I think a lot of the Mono team worked for Xamarin. Microsoft acquired Xamarin and, by that act, opened up most of the Xamarin/Mono codebase that had previously been licensed so that a commercial fee could be charged.

Miguel posted the article and he's a Microsoft Employee for sure.

I think the runtimes are slowly converging. The main issue at the moment is that Core is more focused on Web and back-end targets. To replace Mono, it needs to support UI targets such as desktop Windows, Mac and Linux, as well as mobile targets such as Android and iOS. It'll happen. Most of the .Net platforms seem to use the Roslyn compilers these days. Next step is to converge at a library level, then at a VM level I think.


> There is way too much overlap

It is, at least, good to see Microsoft attempting to prevent the opposite problem (fragmentation) by keeping the .NET Standard up-to-date.

https://docs.microsoft.com/en-us/dotnet/standard/net-standar...


DotNet Standard seems, at least in my limited experience so far (<2.0), to be a bit of a clusterfuck. I was trying to write a quick library to do some things, and continually ran into issues where things would compile and then fail with runtime errors upon loading DLLs, or namespaces and types would be mysteriously missing when switching from a lower version of the standard to a higher, ostensibly more complete, version.


Yup. Those days were painful. From 2.0 on it has been a lot better. .NET 4.6.2-compatible libraries will be loaded by the runtime now, so you don’t have to repackage and redistribute every library that has not already added a DotNet Core target. I’ve only had one library I’ve tried fail, due to it relying on an API that had not yet been ported to DotNet Core.
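To make that compatibility shim concrete, here's a sketch of an SDK-style project file; the package name is hypothetical, but the shape is standard. A netstandard2.0 library can reference an old net46-era package, and NuGet downgrades the incompatibility to a warning (NU1701) instead of failing:

```xml
<!-- MyLib.csproj: a hypothetical .NET Standard 2.0 library -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- A .NET Framework-only package: restored via the 2.0
         compatibility shim with warning NU1701 rather than an error. -->
    <PackageReference Include="Some.Legacy.Net46.Package" Version="1.0.0" />
  </ItemGroup>
</Project>
```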


Less than 2.0 is more or less an alpha imo. I use 2.0 in production and love it. But not sure how they’re going to get over all the bad PR 1.0 and 1.1 generated.


That's like over a decade ago right?


I would recommend a bit of reading to differentiate .NET Standard (more a spec; currently at version 2.0), .NET Framework (the big fat runtime that is part of Windows; currently at 4.7.1) and .NET Core (the slender cross-platform one; currently at 2.0.1).

They are very different things and, due to universal misunderstanding and terminology misuse, easy to mix up.


I just misread it. The new naming is fubar beyond all belief and it's easy to make mistakes like this.


DotNet Standard 2.0 was just announced in August

https://blogs.msdn.microsoft.com/dotnet/2017/08/14/announcin...


Isn't .NET Core a subset of Mono? I thought Mono was like a cross-platform version of the .NET Framework, of which .NET Core is also a subset.


No, Core happened before Mono integrated into the Standard platform. I believe Mono started to use the open-sourced components of Core/.NET Framework to replace some of their components. That might be where the confusion lies. Then at some point after that, Microsoft acquired Xamarin and opened the Mono/Xamarin platforms fully (the Mono runtime used to have tricky licensing that meant that embedding it required a commercial license).


.NET Core is incompatible with the .NET Framework, which Mono is the OSS counterpart to.


That's the official line, but I honestly don't know anymore. We were trying to write a few cross-platform libraries that we could also consume in Framework. We could not get .NET Standard libraries to import into Framework apps without entering a dependency hell that made me long for the DLL Hell of the 90s. We discovered that re-targeting .NET Standard projects to .NET Core would allow them to be imported into .NET Framework projects without issue.

Of course you can't import .NET Core libraries into .NET Standard so in the end we actually made projects for each of them with links back to the .NET Standard libraries' files.
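For what it's worth, SDK-style projects can sidestep the file-linking approach by multi-targeting from a single project; a sketch (the exact target monikers depend on what you need to support):

```xml
<!-- One project, compiled once per target framework, instead of
     parallel projects linking back to shared source files. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>netstandard2.0;netcoreapp2.0;net462</TargetFrameworks>
  </PropertyGroup>
</Project>
```

Each target gets its own output assembly, and NuGet picks the best match for the consuming project.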


I think it depends on what you are trying to achieve. They are all slightly incompatible to some degree. I've written quite a few libs in .Net Standard and had them work on all three platforms, as well as writing Core command line apps and porting them to .Net 4.5. The issue is more with stuff like ASP.Net Core etc, where the libraries are slightly different conceptually.


It’s just mentioned as an aside in this article, but the work the Mono team is doing on WebAssembly is super exciting. Can’t wait to start messing with it.


Great to hear that you're excited by it - we are too :)

This blog post has a few more details and examples of the WebAssembly work: http://www.mono-project.com/news/2017/08/09/hello-webassembl...

Disclaimer: I'm working on Mono for Microsoft/Xamarin (primarily class libraries and tools).



And WebAssembly support merged into Mono master branch last week https://github.com/mono/mono/pull/5924 (not available as a release build, but soon)


For example, some game developers like to adjust and tweak their game code, without having to trigger a full recompilation. The static compilation makes this scenario impractical, so they resort to embedding a scripting language into their game code to quickly iterate and tune their projects.

There is a programming language where every new expression causes incremental compilation and linking inherently, natively: it’s called Lisp.


What is the main attraction of a proper interpreter vs. the compiler built into the platform (so static, but at runtime) that we have with Roslyn? I do some in-app scripting etc. and basically just compile/run code at runtime using Roslyn. Performance is probably a benefit of doing that, but what is the drawback? Is it more overhead, e.g. in an interactive setting it might not be responsive enough for a REPL?


Roslyn is entirely unrelated. That's the compiler from C# to CIL. We're talking about the runtime interpreter and compiler to native code.


Yes? My question was "what's the benefit of an interpreter for runtime code exec, over using a compiler-as-a-service like Roslyn?"


You've asked what is the benefit of using apples over oranges. The two things are unrelated.

When you use Roslyn you still need either a JIT or an interpreter to execute the output of Roslyn. You can't ask what is the benefit of using the interpreter over using Roslyn as you cannot do that. They don't fit into the same place in the system. They don't do the same job. They aren't alternatives.

It's a nonsense question.


This is a somewhat better answer :) The reason I was asking is that the interpreter was framed as a solution to in-app code execution (e.g. like Lua in many game engines) - but this type of scripting is exactly what I use Roslyn for. So if I can rephrase it: what is the benefit of interpreted scripts vs dynamic compilation of scripts in the context of, for example, app scripting? Specifically in .NET apps, not e.g. a C++ app (there it's obvious, as a replacement for e.g. Lua).

> When you use Roslyn you still need either a JIT or an interpreter

Yes: assume I'm already in a .NET app context, so yes, I have the JIT available. Does that make the interpreter less attractive, since I can just load compiled code there?

I'm guessing then the benefit might be less initial overhead, but at the expense of slower execution?


This still doesn't make any sense:

> but this type of scripting is exactly what I use Roslyn for

Because you aren't using Roslyn to execute anything. You're using it to compile C# to CIL. That CIL is then executed by a different part of the system - the JIT. It's that entirely unrelated part of the system that is being talked about being replaced with an optional interpreter instead.

Roslyn goes from C# to CIL.

The JIT and interpreter execute CIL. They don't know or care that it comes from Roslyn and the fact that you use Roslyn or the command line C# compiler to generate the CIL is irrelevant.

You're being tripped up by the fact that the system has multiple levels of compilers. You're thinking about one level but this discussion is about another level.

But anyway there is actually a helpful answer to this question:

> what is the benefit of interpreted scripts vs dynamic compilation of scripts in the context of, for example, app scripting?

The interpreter will probably use less memory and run your code faster the first time. You probably don't care about memory in a game unless you are running on a very constrained system, and you probably don't care about first-time execution either: these scripts either run every frame, in which case it's critical they're very fast, or they run once every few seconds or so, in which case who cares how fast they run.


> Because you aren't using Roslyn to execute anything. You're using it to compile C# to CIL. That CIL is then executed by a different part of the system - the JIT.

Right - with Roslyn, Run("somecode") means creating an assembly from the code, then loading it into the current AppContext and executing it there. If I were to use an interpreter, then Run("someCode") is interpreted. I get that.

My question was still only: would an application choose one or the other, when given both options? I get that they are completely separate ideas and Roslyn doesn't do anything an interpreter does (Roslyn plus the JIT does) - but "compile+load" and "interpret" both offer a way to solve the Run("somecode") problem, which is why they are still somewhat related in the specific context of a .NET app with scripting.
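To make the "compile + load" half concrete, here's a minimal sketch using the Roslyn scripting API (the Microsoft.CodeAnalysis.CSharp.Scripting package): the snippet is compiled to CIL in memory, and then whatever execution engine the runtime has - JIT or interpreter - runs it.

```csharp
// Requires the Microsoft.CodeAnalysis.CSharp.Scripting NuGet package.
using System;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis.CSharp.Scripting;

class RunDemo
{
    static async Task Main()
    {
        // Roslyn compiles the string to an in-memory assembly (CIL);
        // the runtime's execution engine then executes that CIL.
        int result = await CSharpScript.EvaluateAsync<int>("1 + 2 * 3");
        Console.WriteLine(result); // prints 7
    }
}
```

An interpreter-based Run("somecode") would skip the in-memory assembly step and walk the code directly, which is the trade-off being discussed.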


Ideally, an application doesn't have to choose; the runtime uses some good defaults. For example, let's have a look at the HotSpot JVM: a Java method first gets executed in the interpreter; then, if it's hot enough, the runtime compiles it with the C1 JIT, and later uses the C2 JIT to bring the method up to full peak performance. There is also some profiling going on that is fed into the JITs to guide optimizations. The transition between the execution tiers is based on some heuristics and is kinda complex.
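As a runtime-configuration fragment, those HotSpot tiers can be observed or constrained with standard `java` flags (exact output varies by JVM version):

```
# Watch methods move through the tiers as they warm up.
java -XX:+PrintCompilation MyApp

# Force interpreter-only execution (no JIT at all).
java -Xint MyApp

# Stop tiering at C1 (level 1), skipping the C2 compiler.
java -XX:TieredStopAtLevel=1 MyApp
```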

Do you need to care as an application? Ideally no :-) It should be transparent to the user and it's the runtime's job to do the best available thing for you. (Sure, the runtime can be tweaked for specific workloads etc.)

In Mono we aren't there yet: the interpreter is currently useful for restricted targets like iOS, where a JIT can't be used due to security concerns and AOT comes with certain limitations (namely, loading assemblies at runtime and executing them). If you want to use Mono's interpreter to run Roslyn, you can explicitly tell the runtime to do so, but it doesn't make much sense today, because it will give you less performance than the available JIT; we don't yet support switching the execution engine on the fly for single methods (also called tiered compilation).
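For the curious, opting into the interpreter is a runtime-configuration flag on the `mono` command line; sketched from memory here, so check `mono --help` on your build for the exact spelling:

```
# Run an assembly fully interpreted instead of JIT-compiled
# (some writeups spell this --interp).
mono --interpreter program.exe
```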


What is the rationale for continuing to work on Mono rather than just porting its unique libraries (Gtk# etc.) to CoreCLR?



