It also doesn't help portability at all, since the C# libraries are still all Mono/Windows-dependent. The CLR itself was never OS-specific - it's the libraries.
Total non-story. People see 'native' and immediately think that making it 'native' somehow turns C# into C++ and enables Qt or something...
You can do all these things inside unsafe code in C# as well.
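For instance, here's a minimal sketch of C-style pointer arithmetic inside an `unsafe` block (compile with `/unsafe`); the method name and structure are just illustrative:

```csharp
// Compile with: csc /unsafe Program.cs
using System;

class Program
{
    // Sum an array via raw pointer arithmetic, C-style.
    static unsafe int Sum(int[] values)
    {
        int total = 0;
        fixed (int* p = values)   // pin the array so the GC can't move it
        {
            for (int* cur = p; cur < p + values.Length; cur++)
                total += *cur;
        }
        return total;
    }

    static void Main()
    {
        Console.WriteLine(Sum(new[] { 1, 2, 3, 4 }));  // prints 10
    }
}
```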
I think there's also a very plausible argument to be made that in C# garbage collection is often a performance bonus, not a performance hit. It's a rather different beast from what a lot of other GC languages use. For some specific kinds of applications it's definitely pure overhead - but for the business applications the language was designed for, the speedups you get from improved locality of reference and overall faster allocation and deallocation of small objects can more than outweigh the cost of the collection runs themselves.
Lower-level languages are great for when there's a lot to be gained by carefully managing CPU cycles. But considering how expensive (relatively speaking) a cache miss can be on modern hardware, there's a real danger of falling into the trap of being penny wise and pound foolish if you're not careful to take some time to diagnose your performance situation before writing prescriptions for it.
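To make the locality point concrete, here's a hypothetical sketch (type names are made up): an array of value types is one contiguous block, while an array of reference types is a block of pointers to objects scattered across the heap, which is where those cache misses come from:

```csharp
using System;

struct PointS { public int X, Y; }   // value type: stored inline, contiguous
class PointC { public int X, Y; }    // reference type: array holds references

class LocalityDemo
{
    static void Main()
    {
        var structs = new PointS[1000];   // one contiguous block of 1000 pairs
        var classes = new PointC[1000];   // 1000 references; objects live elsewhere
        for (int i = 0; i < classes.Length; i++) classes[i] = new PointC();
        for (int i = 0; i < structs.Length; i++) structs[i].X = i;

        long sum = 0;
        // Walking the struct array is cache-friendly: each element sits
        // directly after the previous one in memory.
        for (int i = 0; i < structs.Length; i++) sum += structs[i].X;
        Console.WriteLine(sum);  // prints 499500
    }
}
```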
> For some specific kinds of applications it's definitely pure overhead...
I started coding back in the early 90's, so I have a lot of experience with manual memory management. Nowadays I mostly use GC-enabled languages (native or managed).
There are some special cases, like real-time systems or very complex games, where GC might hurt, but I think the main issue is that many developers lack the knowledge to use GC-enabled languages properly.
You need to organize your data structures and algorithms in a GC-friendly way, which is different from doing manual memory management. Allocating memory faster than the GC can handle, or grabbing 1 GB just because you can, won't make your application behave nicely.
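As a rough illustration of what "GC-friendly" means in practice (names and sizes here are made up): the version that allocates a fresh buffer on every call keeps gen-0 churning, while the version that reuses one buffer barely touches the collector:

```csharp
using System;

class BufferDemo
{
    const int Size = 64 * 1024;

    // GC-unfriendly: a fresh 64 KB array per call keeps the collector busy.
    static int Churn()
    {
        var buffer = new byte[Size];
        return buffer.Length;
    }

    // GC-friendly: allocate once, reuse across calls.
    static readonly byte[] Shared = new byte[Size];
    static int Reuse()
    {
        Array.Clear(Shared, 0, Shared.Length);
        return Shared.Length;
    }

    static void Main()
    {
        int before = GC.CollectionCount(0);
        for (int i = 0; i < 100000; i++) Churn();
        Console.WriteLine($"Gen-0 collections (churn): {GC.CollectionCount(0) - before}");

        before = GC.CollectionCount(0);
        for (int i = 0; i < 100000; i++) Reuse();
        Console.WriteLine($"Gen-0 collections (reuse): {GC.CollectionCount(0) - before}");
    }
}
```

The exact collection counts depend on the runtime and heap settings, but the churn loop will trigger many more gen-0 collections than the reuse loop.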
> Lower-level languages are great for when there's a lot to be gained by carefully managing CPU cycles ...
On the other hand, the optimizer is sometimes limited in what it can do, because of those low level constructs. C pointer aliasing is a good example.
For me, the best solution would be a kind of hybrid approach, similar to what .NET already does, or what Lisp environments do.
You make use of a VM-like environment (with JIT) for development purposes, but additionally have the option to AOT-compile the code to native for redistribution.
So, say, something like NGen, but where all the unused stuff is removed.
But good developers should also be able to drop down to Assembly, C, C++, if the application really requires it.
The ability to do that easily is one reason why I used to be very fond of Objective-C.
With something like .NET, though, I submit that it shouldn't be too much easier than what we already get with P/Invoke. The unmanaged code really should be isolated in order to prevent it from corrupting the managed code's memory space, but that inevitably means that there's a hefty performance toll to calling into the unmanaged code. Having the ability to pepper the application with tiny little snippets of low-level code would probably do more harm than good, by enticing less-experienced developers to make a habit of burning 10X time in domain-crossing every time they want to save X time in CPU cycles.
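For reference, a typical P/Invoke declaration looks like the sketch below (using Win32's `GetTickCount` as the example, so it's Windows-only); every call crosses the managed/unmanaged boundary, and that transition is exactly the toll described above:

```csharp
using System;
using System.Runtime.InteropServices;

class PInvokeDemo
{
    // Marshal a call out to unmanaged code in kernel32.dll.
    [DllImport("kernel32.dll")]
    static extern uint GetTickCount();

    static void Main()
    {
        // Every call here pays the managed-to-native transition cost,
        // which is why it's best amortized over chunky operations
        // rather than sprinkled through a hot loop.
        Console.WriteLine(GetTickCount());
    }
}
```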
I think you mean that the CLR/JIT often removes array bounds checks when it can prove they're unnecessary (such as in certain loop constructs).
Please correct me if I'm wrong.
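That matches my understanding: when the loop bound is the array's own `Length`, the JIT can prove every index is in range and elide the per-access check. A sketch of the pattern (method names are just illustrative):

```csharp
using System;

class BoundsDemo
{
    static int SumElided(int[] a)
    {
        int total = 0;
        // Bound is a.Length, so the JIT can elide the bounds check on a[i].
        for (int i = 0; i < a.Length; i++)
            total += a[i];
        return total;
    }

    static int SumChecked(int[] a, int n)
    {
        int total = 0;
        // Bound is an arbitrary n; the JIT generally keeps the check.
        for (int i = 0; i < n; i++)
            total += a[i];
        return total;
    }

    static void Main()
    {
        var data = new[] { 1, 2, 3 };
        Console.WriteLine(SumElided(data));      // prints 6
        Console.WriteLine(SumChecked(data, 3));  // prints 6
    }
}
```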
CIL is always compiled to native code, either via JIT or AOT (ngen, mono -aot).
i.e. this story is terribly misguided, as it seems to imply that C# is changing into something else because of a few compiler changes...
It does improve performance, but it doesn't magically turn it into an ASM/C/C++ executable, for the reasons set out above.
Isn't that, however, exactly what the Mono folks are doing with MonoTouch to get apps written in C# running on iOS (and approved in the App Store)?
Plus it won't be as fast as writing it in C/Obj-C, as it will have garbage collection overhead, type checking, bounds checking, etc. (For iOS-style UI apps this isn't a concern, as they're mostly just thick clients to real calculations done elsewhere, so speed barely matters - you're relying on the speed of UIKit for graphics anyway.)
C# -> IL -> Mono AOT* -> ARM binary -> xcode/LLVM -> normal iOS binary
* AOT = ahead of time compiler - like JIT, but... before it's needed. There is a linker step in there to remove a load of stuff too.
Indeed, mono --aot and MonoTouch come to mind.
Actual link here: http://channel9.msdn.com/Forums/Coffeehouse/MS-working-on-a-...
Here's an overview: http://www.silverlightshow.net/items/Windows-Phone-8-Compile...
Some video sessions:
[Deep Dive into the Kernel of .NET on WP8 - (talk about compilation starts around the 22 minute mark)]
[Inside Compiler in the Cloud and MDIL]
NGEN precompiles the IL into native code if you use the normal framework.
Mono AOT compiles (and links) code into various architectures, but MonoTouch and Mono for Android (http://xamarin.com) would be the major public uses for it - native iOS / Android apps using C#.
If you wonder "can they be any good": go get the new Rdio app. It's done in MonoTouch (and their Android one, in beta, is done in Mono for Android)
The compilerjobs link (http://www.compilerjobs.com/db/jobs_view.php?editid1=648) has a bit more detail:
"Specifically this work will include:
• Engineering parts of the reader for MSIL
• Creating a native compiler internal representation that the existing compiler can optimize
• Designing and implementing new managed optimizations to augment the existing optimizer like range check elimination or speeding up C# constructs on vector machines
• Engineering and co-designing the ability to emit a new object file format that will support rapid linking
• Fixing all existing phases of the compiler so that managed code can be correctly and efficiently compiled with the new auto-vectorizing/auto-parallelizing Win 8 compiler."
So, this sounds to me like an element of high-performance computing (maybe with the recent work getting Hadoop running on Windows?), rather than a fully-native toolchain.
It'll be exciting to see how this affects the C# ecosystem.
Although it's even more interesting what this says about Microsoft's plans for the future of the CLR. It seems as though MS might be coming to the conclusion that it was a failed experiment, which is interesting given the success of other VM systems such as the JVM. It'll be interesting to see whether or not this move helps advance C# adoption outside of the windows platform.
Mostly it's just how difficult and annoying it is to fix up a failed build. If you're trying to build something and it's failing because of strange 32-bit vs 64-bit incompatible libraries or similar technical issues, you're left completely on your own with strange and vague build errors. On the python/java/ruby side this stuff is in fairly obvious config files and generally just seems to work. I can't be the only one who has run into this kind of stuff?
Also, the docs/code completion just doesn't seem to work as nicely as in Java where everything has nice big easy to read document blurbs on every function. For example, Bitmap.create<ctrl+shift> and eclipse gives me an awesome popup with all the different versions and a big documentation window from JavaDoc about how exactly to use the function and when/when not to use it.
With VS.net, I'm always having to go read up on MSDN pages in a browser which are poorly written. Just me?
Coming from python and php, the syntax highlighting is pretty sweet (but then, using the python REPL, I hardly needed it. It's sort of like "look at this problem you didn't have before that we solved!"), and I'm starting to understand why people like static typing, as it reduces certain classes of errors.
I think I want to go back to python and Vim or Notepad++, though.
Project files are like build scripts and changing them can cause breaks. Just like wanton changes in a build script can do the same. If you proceed working with VS, I'd suggest familiarizing yourself with project files and how they're set up. All automated processes make assumptions and rely on certain conventions. I've yet to meet an automated process that can adapt itself to any situation to suit its user's whims.
For new developers I generally recommend they review all changes they commit, including project files. Just because it's an abstraction or things happen automatically doesn't mean you can just push all changes into the repository. When you open a 2008 project in 2010 you are prompted that changes will take place, for example, so it's not like it was a stealth change.
Once they're properly set up project files are pretty maintenance free, but certain behaviors can make them unstable to use.
For example, if you need to support both VS2008 and VS2010, just create duplicates, it's no different than having build scripts that are not backwards compatible, you'll have to keep multiple versions around. Or just keep the VS2010 conversion locally and don't commit that back into the repository.
Anyways, you can go back to a simple text editor and just use msbuild directly, if you prefer keeping track of everything yourself. Best of luck!
That's pretty valid. We may be living in an edge case, and I trust that if everyone's experience with VS was as bad as mine, it just wouldn't get used.
Thanks for your thoughts.
I guess it depends on which group you're talking to too, I know more people who can't stand VS than who like it.
Microsoft has not, nor will they, come to the conclusion that the CLR was a failed experiment.
As for the actual job listing, it has precious little consequence for anything or anyone, save for possible application speedups for certain use cases on certain architectures a few years down the road.
It's easy to say that now, but time has a funny way of winkling out deeply seated truths that we try to shy away from. Trust me, I like the CLR, I used to work in DevDiv, it's close to my heart.
However, one has to wonder if something is missing or not quite right. The CLR was originally a grand strategic play. It was intended to be everywhere, in Windows, in browsers, in game consoles, in phones, everywhere. And not just present everywhere but also underpinning all of these important products. A lot of the big projects to convert native code to managed code, such as in Windows, foundered and failed. Other than Microsoft.com most of the rest of Microsoft compiles their products to native code. Meanwhile, Windows has released its own "runtime" (WinRT) and windows mobile development is migrating toward native development as well. Not to mention the failure of silverlight to gain traction. The niche the CLR serves today is vastly diminished from the vision even 5 years ago. Also, they've tried to break out of their niche by making sweeping improvements to the CLR such as adding dynamic language support and such-like but they still seem to be stuck in the same rut as ever.
Maybe the CLR will stick around forever, but given the trends it's looking increasingly likely that in, say, 10 years the CLR will be a bit like vb6. Still supported by default in every windows release but no longer fully embraced by the current tool chain.
Or a new C# compiler could be written in some kind of native C# variant that generates both managed and native code.
See https://tindie.com/shops/nwazet/netduino-go/ for an accessible example set up.
This approach has a couple of advantages and a couple of disadvantages. The advantages: debugging and reloading can be done extremely quickly, the bytecode is small, giving a lot of functionality per byte of storage, and it's significantly easier to port from microcontroller to microcontroller. The downsides: it's pretty slow - typically the heavy lifting happens in the HAL/MF - and you can't write interrupt handlers directly in it, although you can indirectly via the managed driver interface.