Hacker News
Microsoft Updates Visual Studio (techcrunch.com)
119 points by kanche on April 2, 2014 | 59 comments

.NET native code compilation, a.k.a. static linking for the .NET age. This feature has been a LOOOOONG time coming; they should have had it available a decade ago.

P.S. Mark my words, this is going to be a huge deal. Especially since MS is acquiring Xamarin, which basically does the same thing, but for mobile platforms. With the ability to develop in .NET languages and still deliver a plain-jane .exe that doesn't require JITting, it'll open up a lot of new opportunities. Especially if they start targeting non-Windows platforms like Linux, iOS, and Android with fully-baked tools. Imagine how much more popular C# would be if it weren't tied to the Windows platform. And if you could choose whether you wanted to ship a small managed exe or a fat native exe that launches quickly.

Doesn't .NET already have native code compilation using NGEN?


Technically, yes, but for most situations, no. The requirement that ngen'ed assemblies must be in the GAC makes it very far from the "compile what you want to native" tool I'd like to see.

ngen is also client-side, which makes for slow, complex installs, as opposed to simply running an executable.

Is it comparable to GCJ?

> Imagine how much more popular C# would be if it wasn't tied to the Windows platform?

It's not. Recent versions of Mono are quite good. You will have to pay Xamarin at least $299/year to deploy on iOS or Android, though.

> Especially since MS is acquiring Xamarin

Is this certain? Was it announced?

No, just widely speculated.

The inherent slowness of .NET is the ONLY reason I'm sticking with Delphi... It'll be a dream if we're able to compile .NET/C# programs to native code that runs FAST!

Well, by "It'll be a dream" I actually meant "I dream of being able to natively compile a C# app for the desktop". Sorry for the bad English.

Well, to those who downvoted: I wish you could convince me, since I really want to enjoy the power of Visual Studio and the richness of the .NET libraries.

There are rumors that Microsoft is considering the purchase of Xamarin [1]. And now Microsoft is previewing their .NET AOT compiler for x64 and ARM. I see great things in C#/.NET's future in mobile and cross-platform development.

On another note, I wonder whether Microsoft has addressed the inherent limitations of AOT compilation for C# [2]. Is it a compile-time error, or is that segment of code interpreted? I doubt it's interpreted, as that would be a giant perf loss.

[1] http://www.wpcentral.com/microsoft-reportedly-considering-ac...

[2] http://www.mono-project.com/AOT#Limitation:_Generic_Interfac...

If MS buys Xamarin, it'll be a fantastic validation for a team that was dumped by Novell not so long ago.

I'm also very excited about the idea of getting back into C# again. I always loved the language, but moved away from the .NET stack. Here's hoping.

Still a rumor, and I don't expect to see anything on that front for some time. Why buy the cow when the milk is free? It's not like Xamarin is flirting with enabling Java on iOS or something; they're all in on C#/F#, so Microsoft is reaping the benefit.

More evidence: Xamarin announced their Evolve conference dates, times, and prices, and when tickets go on sale. If Microsoft were buying them, they would hold off on letting several thousand people spend thousands of dollars on a dev conference that would later be tantamount to a bait and switch.

Are they even still rumors? I thought that had been announced.

This seems to be the sole source of those rumors:


"Microsoft is in the final stages of negotiations that could lead to either an acquisition or major investment in Xamarin, sources with knowledge of the discussions told CRN recently."

The article is from March 17th, and there doesn't seem to be anything newer.

To my knowledge it's a rumor.

Native compilation! This is great news.

I never got why Java and .NET adopted a VM approach, back in the days when we already had safe systems programming languages like Modula-2, Modula-3, Oberon, Ada, and Delphi, with AOT compilers in their canonical toolchains.

The purpose of using a VM is that the developer ships one version of their compiled software that can be run by anyone who has the VM on their system. Otherwise, the developer needs to anticipate every single architecture a user might want to run it on, including cases that might not even exist yet.

There is also the Go approach, where your code is platform-independent but you compile it for each platform. You get native code that is portable: the best of both worlds. (Some might argue "good C" can do this, but that is often quite hard, and C doesn't have the kind of stdlib that Go has.)

I'd call that the static compiling approach, not just the Go approach. It's possible to statically compile in a wide range of languages.

I was using Go as an example of this. The majority of statically compiled languages don't support compiling the same source code to a multitude of platforms.

The same Go code can be compiled for x86, amd64, and ARM, and runs on Linux, OS X, Windows, FreeBSD, OpenBSD, NetBSD, DragonFly, NaCl, Solaris, and Plan 9. Not many statically compiled languages pull that off.

You can only pull that off in Go as long as you only use the runtime library, which is no different from any other language.

As soon as you bring a third party dependency, game over.

>>As soon as you bring a third party dependency, game over.

There are tons of 3rd-party libraries written in pure, platform-portable Go that will cause you no issues. In fact, the vast majority of 3rd-party Go libraries are just as portable as the stdlib.

So they provide implementations, for the features required outside the stdlib, on all targets supported by the Go compilers?

The point is they don't have to: while compiled Go binaries are platform-specific, pure Go code is platform-independent. That means any platform that can run "go build" can run your Go program.

You're avoiding the question.

Code isn't pure Go if it needs to touch the file system or talk to the OS using APIs not available in the stdlib.

The moment you depend on libraries outside the stdlib, you're opening yourself up to dependencies on cgo and/or OS syscalls outside your control.

Even some stdlib packages are UNIX-specific, e.g. os/user and log/syslog.

>You are avoiding to answer the question.

I'm really not; I'm just missing your point.

The majority of 3rd party Go code simply uses the Go stdlib just like our hypothetical program.

Go is no less platform-independent than Java, for example. Yes, there are some 3rd-party Java libraries that use JNI, just like some Go libraries use cgo.

The fact that these native bindings to (often) platform-dependent code exist does not make pure Go/Java programs platform-dependent.

Maybe our misunderstanding is about what we mean by 3rd-party libraries. I'm thinking of the Go repos people put on GitHub, or the Gorilla project (http://www.gorillatoolkit.org/); I'm not talking about using cgo to call glibc or something similar.

> Maybe our misunderstanding is about what we mean by 3rd-party libraries. I'm thinking of the Go repos people put on GitHub, or the Gorilla project (http://www.gorillatoolkit.org/); I'm not talking about using cgo to call glibc or something similar.

The thing is, there aren't first and second class 3rd party libraries, all are 3rd party.

Back in the day all languages allowed this.

Go's design leaves a lot to be desired, but at least it's a way to show the younger generations, who never learned the languages listed above, how memory-safe languages can be compiled to native code without a VM in the middle.

You don't need a VM for that, just a kind of portable assembly.

OS/400 binaries use portable bytecodes which are compiled into native code at installation time.

Alphas had binary translation between binary formats.

From your description that sounds very similar to NGEN-ing your .NET apps at install time, which is a bit fiddly but is one .NET deployment option.

Yes, however ngen has the caveat that the required .NET version has to be present on the system.

A fat binary with both common native architectures and also vm code would seemingly take care of this edge case, no?

Also, what's wrong with emulation?

It's slow, complex, and unnecessary?

What about when it's a machine for which hardware doesn't actually exist... like a virtual one? How is that less unnecessary?

I wouldn't call a JIT compiler an emulator.

However, it's certainly unnecessary (why not just generate the machine code ahead of time?), and it generates slower code because JIT compilation is a harder task: the time limits on the optimizer are pretty harsh.

One good reason to use a VM: JITs can generate very, very fast code. HotSpot has gotten so good that there are several cases where numeric Java code I've written is faster than reasonably tuned C.

Can you show us some Java code that's faster than reasonably tuned C compiled with a good C compiler with all optimizations on? Performance issues in Java are often related to memory management and lack of control over the layout of your objects, and JITs also have real-time constraints: you can't sit all day while the JIT does its thing.

I do machine learning, I don't particularly care about object layout - almost everything I deal with is just arrays of floats, doubles, or longs. I'm also not bothered by GC issues as I simply iterate over the same arrays many times. Lastly, with the type of code I run, JITing of hotspots is done long before the first iteration over the data is complete.

A coworker bet that Java would be within 30% of similarly structured C code. I didn't believe him, so I rewrote two of my company's computationally expensive algorithms: a variant of matrix factorization and a neural network (an RBM).

The production version of the C code did some tricks that would not be possible in Java, so I rewrote the C versions to be a bit simpler. I then ported the simpler C programs to Java, a fairly straightforward and mostly mechanical job.

The MF code was 5-6% slower than the C, and the RBM code was about 1% faster than the C.

Granted, the tricks I was able to exploit in the production versions made those versions decisively faster than the simple Java versions, but Java was, and is, much better than I realized.

I'd be interested in seeing the code. I guess my perspective is that with C++ or C getting something to run fast is a process. The first naive implementation will run pretty fast with a good compiler but then you whip out the profiler and look at the generated assembly. It's not just the language it's also the tooling. As an example, you might find that to make good utilization of SSE you want to process multiple matrices concurrently. You may arrange your input data such that you can quickly load the corner element of 4 matrices into an SSE register. You may further rearrange things so that you hide the latency of certain instructions. A good compiler can do some of that for you but the biggest thing is having visibility and being able to exert control at this level. This is often an order of magnitude difference in performance.

Now with a VM you can't really do that. Even if you could for a given implementation of the VM you might get terrible performance somewhere else. The run anywhere VM approach means you're giving up the ability to fine tune things and there's really no way around that. All you can do is try to minimize the impact and I guess JIT is one way. It's certainly true it's a lot faster than it used to be but presumably there are some sweet spots, patterns the JIT is very good at, and some less than sweet spots, patterns where it's not, and your visibility and ability to engineer things is reduced...

So, you wrote the C code to be slower, and then Java seemed fast? :-) I do numeric software for a living, similar to you: lots of arrays allocated once and iterated over. It would be great if Java could cut it, but I'll never believe Java people's claims that it's nearly the speed of C until they show me a Java FFT that compares to FFTW. One simple benchmark, and I'll believe the Java believers.

I work on ML code as well.

One awesome trick the JIT does: in the case of child classes with virtual functions, if only one child class is used in a run, it can remove the virtualness and inline the function used.

So, for example, one place you would see this is if you have distributions, and, e.g., class Distribution has a virtual member function like deviance or some such. However, for a given run of your code, you only ever use one distribution. Also say that the loop that uses the deviance is hot enough that you can see the slowdown from the vtable lookup and the function call. The Java JIT will inline the deviance function for the distribution you use (assuming the method is small enough, which it often is). The only way to get this behavior in C++ is to lift the decision out of the hot loop and generate one function for each distribution. I've done this and seen noticeable performance gains (10-15% on hours- or days-long runtimes), but it does not make the code nice to work with.

Very much true, but Sun had to pump lots of research money into it to make it so.

Had it been an optimizing compiler since day one, it wouldn't have had to be like that.

Well... a lot of HotSpot's potential comes from runtime profiling; this is not necessarily the same as having an optimizing compiler generate code at compile time.

Modern C compilers have profile guided optimization for that. You run your code, profile it, and the optimizer uses that data in the next pass. So this is something you can get without a JIT compiler.

In my experience, PGO is something that relatively few C/C++ programmers know about, or at least use... roughly the same percentage as Java programmers who know how the JIT works. But all Java programmers get to benefit from the JIT.

Don't get me wrong, a lot of Java bugs the hell out of me, and I'm personally much more comfortable writing C++. I'm just pointing out that code running on a virtual machine isn't some horrible backwater.

Looks like I'm going to start reading up on C# again! It's been a good 4 years without touching .NET, but these developments make me want to try a hand at mobile Windows 8 development.

This confirms to me that as a software agency we were right to stick with C# even when it was "uncool" and there were plenty of new kids on the block (I'm looking at you RoR).

There were a number of times when I wondered if we had backed the wrong horse in Microsoft, but being able to gain traction in mobile development (assuming the Xamarin purchase goes ahead), along with other changes such as native compilation, makes me happy we stuck with them.

But does it support inline x64 assembly yet?

It never will; it's a design decision. Even Intel pushes for intrinsics.

Those don't do the same thing.

They don't, but they're a lot saner to deal with. I don't even use inline assembly where it's supported: all the magic of a compiler, all the foot-shooting of assembly, none of the transparency of either.

I'm happy either using intrinsics, or implementing assembly in separate .S files; I can't recall the last time my avoidance of inline assembly prevented me from implementing something I needed.

If only Visual Studio ran on other OSes... As a matter of fact, I know many developers who stick with Windows JUST because of VS...

I'm sticking with 2012 until 2013 stops crashing!

Why am I being downvoted? I can't use VS 2013 because it's unstable... I wonder if there are MS saboteurs among us.

And yet, we still can't get menu labels that don't make me wonder if it's 1986 again.

What, too soon?

Plugin developers are delighted: yet one more runtime version to support! Nice move, Microsoft!

