Except they open-sourced a new .NET stack, not really the entire .NET Framework. .NET Core and .NET Framework are similar but not fully compatible with each other. I've been porting .NET Framework code to .NET Core, and depending on how specialized your project is, you may not always find the same libraries supporting it. A project that compiles perfectly against .NET 2.0 will not just compile for .NET Core, at least not as easily as using a .NET Framework 2.0 project in .NET Framework 4.6. I love Microsoft and .NET, but I think Core is poorly executed. You also have Entity Framework vs. Entity Framework Core, which lacks features from EF that were deemed "unused", and that bites some people in the face.
.NET Standard is all about compatibility, so you can use it to determine which framework you can/need to use. However, .NET Core is distinctly designed for cross-platform reach, so it will break from .NET Framework in some ways. That's just a given; otherwise they would've just released another .NET Framework version instead of doing all this.
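To make the compatibility point concrete, here's a sketch of how ported code often guards framework-specific APIs with conditional compilation (the `PathHelper` class is my own illustration, not something from this thread; `NETFRAMEWORK` is the symbol SDK-style projects define when targeting the full Framework):

```csharp
using System;

// Hypothetical helper compiled for both .NET Framework and .NET Core,
// using conditional compilation to pick whichever API is available.
public static class PathHelper
{
    public static string AppBase()
    {
#if NETFRAMEWORK
        // AppDomain was missing from early .NET Core, so this branch
        // is only compiled for the full .NET Framework.
        return AppDomain.CurrentDomain.BaseDirectory;
#else
        // Available on .NET Core and .NET Standard 1.3+.
        return AppContext.BaseDirectory;
#endif
    }
}
```

The point is that the porting cost shows up exactly in places like this: every API that exists on one side but not the other needs a guard or a replacement.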
I guess I could've written 'Microsoft open-sourced the entire .NET stack'. I wanted to make it clear that they open-sourced everything, i.e. runtime, JIT, GC, base-class libraries, etc
Excellent, that's the effect I hope my posts have!
You still might quibble over "entire" .NET Framework, but a lot of the .NET Framework is open source in addition to .NET Core.
It'd be a shame if the progress they're making were stymied in order to stay compatible with the .NET of a decade ago.
> I've been porting .NET Framework code to .NET Core
Yeah, but you probably don't need to do that. .NET Framework is still supported, and will be for a long time.
Where did you get this "unused" claim? I've been following EF Core and EF6 for quite a while and I've never heard them say this about most of the major features missing from EF Core.
EF Core is a complete, ground-up rewrite of Entity Framework. They're attempting to fix some design issues that cropped up in the original Entity Framework, improve performance, and improve the underlying architecture (the APIs are similar, but the "under the hood" layout/design has shifted).
They've been extremely clear that the current release isn't at feature parity with Entity Framework 6, and they've told people to avoid it until it is. If you look at the EF Core TODO list, it will eventually reach EF6 feature parity, along with some new functionality enabled by the underlying changes they're making in EF Core.
Honestly, your whole post is full of misleading claims; this is just one example. You've started using brand-new technology (which is a rewrite) and are complaining because you ignored their very specific warnings about its state. Then you justify your own lack of research by making wild, outlandish claims about how it's everyone else's fault.
To quote EF Core Team's Roadmap:
> Because EF Core is a new code base, the presence of a feature in Entity Framework 6.x does not mean that the feature is implemented in EF Core.
> We have provided a list of features that we think are important but are not yet implemented. This is by no means an exhaustive list, but calls out some of the important features that are not yet implemented in EF Core.
> The things we think we need before we say EF Core is the recommended version of EF. Until we implement these features EF Core will be a valid option for many applications, especially on platforms such as UWP and .NET Core where EF6.x does not work, but for many applications the lack of these features will make EF6.x a better option. [Big List of missing features]
> There are many features on our backlog and this is by no means an exhaustive list. These features are high priority but we think EF Core would be a compelling release for the vast majority of applications without them [Big List of missing features]
> Our team is currently working on the EF Core 2.0 release. It's early days, so our plans will likely change as the release progresses, but here are the major features we are planning to address in this release. [Big List of missing features]
The truth is that YOU started using a piece of technology they expressly told you not to use, and they told you why you shouldn't. Then you created fictional justifications for why it's their fault you screwed up.
But I welcome you to cite your "[won't implement features] which were deemed 'unused'" claim. Somehow I don't think you will...
2) My Team Lead picked .NET Core; we are porting from Silverlight, not from ASP.NET. I have a feeling ASP.NET will eventually stop being included as a project template in Visual Studio as more people switch to ASP.NET Core, and if that happens I can see why I wouldn't be asked to develop a project in ASP.NET just to recode it later down the road.
3) I might have mistaken EF Core not implementing features with ASP.NET Core; they seem to be shifting a lot of things for ASP.NET Core, especially with Identity being a new way of doing things. ASP.NET has always "lost" features along the way; need we remember when it wasn't even called .NET and was just "ASP"? Not sure why I need citations for that; you can form your own conclusions. I'm just a "Junior" anyway, what do I know?
BTW if it's reading material you want, you should also check out my other post 'Research papers in the .NET source' http://mattwarren.org/2016/12/12/Research-papers-in-the-.NET...
I think the reason it's so large is that the GC is shared across all the .NET versions, so having it in one file has some advantages.
See https://github.com/dotnet/coreclr/issues/408 for a discussion on reorganizing it.
> In #401 @cnblogs-dudu referenced a post explaining that the GC was machine-generated from LISP. Since gc.cpp is not a real source code (just intermediate code), but the lisp code is, should it be more useful to publish the lisp source and the transpiler (Source-to-source compiler) for LISP -> CPP?
There's a reply from a Microsoft employee that seems to confirm this, see https://github.com/dotnet/coreclr/issues/408#issuecomment-78...
I thought I knew a thing or two about .NET, having used it since the beta days, but I had no idea LISP was used to bootstrap the original GC code. Fascinating stuff.
I've archived it here just in case:
Patrick is also interviewed on Channel 9, where he talks about the GC. I've not watched it yet:
There's absolutely no reason regions should be in C#; they encourage bad coding practices (like overly long .cs files). Even if you need all your code in a single class, you can split that class across several .cs files (with partial classes) to make organisation and maintainability easier.
Regions are just a crutch people use to let them create 10K+ line .cs files that eventually become a maintainability nightmare (as well as making merges/check-ins/finding things more annoying in general).
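As an illustration of the splitting approach (the class and file names here are made up, not from the thread), partial classes let the compiler merge one class spread across several files:

```csharp
// OrderService.Core.cs (hypothetical file 1)
public partial class OrderService
{
    public decimal Total(decimal price, int quantity) => price * quantity;
}

// OrderService.Validation.cs (hypothetical file 2)
// The compiler merges every "partial" declaration into a single class,
// so both files are shown together here for clarity.
public partial class OrderService
{
    public bool IsValid(int quantity) => quantity > 0;
}
```

Each file stays small and focused, and two people editing different concerns of the same class no longer collide in merges.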
The only region I'll use is the built-in DEBUG one, and I only use that to create:

    private bool IsDebug()
    {
    #if DEBUG
        return true;
    #else
        return false;
    #endif
    }
BTW if you want a bit more info on it, I wrote a whole blog post, see http://mattwarren.org/2016/02/04/learning-how-garbage-collec...
The second-biggest fear that made Sun sue Microsoft was that when MS licensed Java, their VM was faster than Sun's. Microsoft is (was?) the king of JIT. Technically it was the biggest fear, since if Sun's VM had been faster they wouldn't have cared about Microsoft's anyway. Even now the CLR is generally considered much faster than the JRE.
I don't know how Chakra compares against v8 but then it also depends on what makes money for MS.
That’s... not really true. JRE’s HotSpot is much faster than CLR, even today.
I’d be interested to see benchmarks proving your argument, because basically every benchmark out there proves you wrong.
Sun's HotSpot does recompilation and profiling to reoptimize depending on the use case, while the CLR only JITs once, so the base performance is very low. This is also why HotSpot can reach better-than-C performance in some cases (because it can optimize at runtime for the code paths actually taken, and fall back to interpreted code for rarely taken paths), while .NET (and native code) cannot. This is also what gives HotSpot its name: it recognizes hot spots and swaps them out for more and more optimized versions at runtime.
But I can say that the number of research papers around the JIT in the JVM is probably an order of magnitude more than around the CLR. Someone else linked to 'Research papers in the .NET source', and guess what? Several of them are really JVM papers, talking about how they've used the same technique.
Additionally there is PGO support between runs and explicit SIMD support.
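For reference, the "explicit SIMD" point is the `System.Numerics.Vector<T>` API, which RyuJIT maps onto hardware vector registers. This element-wise add is a sketch of my own, not code from the thread:

```csharp
using System.Numerics;

// Vector<float>.Count is the number of floats per SIMD register on the
// machine the JIT targets (e.g. 8 with AVX), so the same IL adapts to
// whatever hardware it runs on.
static float[] AddArrays(float[] a, float[] b)
{
    var result = new float[a.Length];
    int i = 0;
    for (; i <= a.Length - Vector<float>.Count; i += Vector<float>.Count)
    {
        // Load, add, and store one register's worth of floats at a time.
        (new Vector<float>(a, i) + new Vector<float>(b, i)).CopyTo(result, i);
    }
    for (; i < a.Length; i++)
        result[i] = a[i] + b[i]; // scalar remainder
    return result;
}
```

The nice part is that the vector width is resolved by the JIT, not baked into the binary, which is exactly the kind of thing a JIT can do that an ahead-of-time compiler can't without shipping multiple code paths.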
IBM J9 also supports caching JIT code, though.
Hmmm, really? I'm not sure I can think of many technological breakthroughs in the area of JITs that Microsoft have been responsible for. Google and Mozilla have made lots of advances, as have Sun and Oracle, and lots of academics, like the PyPy people and HP with Dynamo.
What have Microsoft done in the area of JITs that is so notable as to earn them the title 'king' of JITs? Can you say anything specific?
We would have been better off without JIT. Even now, with ARM, you compile to 3 architectures... so what?
Really? Since when? I can't say I've ever heard this claim before.
The only things the CoreCLR might be quicker at are starting up and maybe slightly lower memory usage, though I don't have any numbers; this is just a feeling from using both. Anyone know for sure?