P.S. Mark my words, this is going to be a huge deal. Especially since MS is acquiring Xamarin, which basically does the same thing, but for mobile platforms. With the ability to develop in .NET languages and still deliver a plain-Jane .exe that doesn't require jitting, it'll open up a lot of new opportunities. Especially if they start targeting non-Windows platforms like Linux, iOS, and Android with fully-baked tools. Imagine how much more popular C# would be if it weren't tied to the Windows platform? And if you could choose whether you wanted to ship a small managed exe or a fat native exe that launched quickly?
It's not. Recent versions of Mono are quite good. You will have to pay Xamarin at least $299/year to deploy on iOS or Android, though.
Is this certain? Was it announced?
On another note, I wonder if Microsoft has addressed the inherent limitations of AOT in C#. I wonder whether code that can't be compiled ahead of time is a compile-time error, or whether that segment of code is interpreted. I doubt it's interpreted, as that would be a giant perf loss.
I'm also very excited about the idea of getting back into C# again. I always loved the language, but moved away from the .NET stack. Here's hoping.
More evidence: Xamarin announced their Evolve conference dates, times, and prices, and when tickets go on sale. If Microsoft were buying them, they would hold off on letting several thousand people spend thousands of dollars on a dev conference that would later amount to a bait and switch.
"Microsoft is in the final stages of negotiations that could lead to either an acquisition or major investment in Xamarin, sources with knowledge of the discussions told CRN recently."
The article is from March 17th and there doesn't seem to be anything newer.
I never understood why Java and .NET adopted a VM approach, back in the days when we already had safe systems programming languages like Modula-2, Modula-3, Oberon, Ada, and Delphi with AOT compilers in their canonical toolchains.
The same Go code can be compiled for x86, amd64, and ARM, and runs on Linux, OS X, Windows, FreeBSD, OpenBSD, NetBSD, Dragonfly, NaCl, Solaris, and Plan 9. Not many statically compiled languages pull that off.
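To make that concrete, here's a minimal sketch; the cross-compilation commands in the comment use the standard GOOS/GOARCH environment variables, and the target pairs shown are just examples:

```go
// A pure-Go program: the same source builds unchanged for any
// supported platform pair, e.g. (commands are illustrative):
//
//   GOOS=windows GOARCH=amd64 go build hello.go
//   GOOS=freebsd GOARCH=386   go build hello.go
package main

import (
	"fmt"
	"runtime"
)

// target reports the platform this binary was compiled for.
func target() string {
	return runtime.GOOS + "/" + runtime.GOARCH
}

func main() {
	fmt.Println("built for", target())
}
```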
As soon as you bring in a third-party dependency, game over.
There are tons of 3rd-party libraries written in pure, platform-portable Go that will cause you no issues. In fact, the vast majority of 3rd-party Go libraries are just as portable as the stdlib.
Go code isn't pure if it needs to touch the file system or talk to the OS using APIs not available in the stdlib.
The moment you depend on libraries outside the stdlib, you are opening yourself up to dependencies on cgo and/or OS syscalls outside your control.
Even some stdlib packages are UNIX-specific, e.g. os/user and log/syslog.
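Portable code typically avoids importing those UNIX-only packages and branches on the platform instead (or isolates OS-specific code behind build tags); a tiny illustrative sketch, where the destination names are placeholders rather than real APIs:

```go
package main

import (
	"fmt"
	"runtime"
)

// logDestination picks a log sink without importing the
// UNIX-only log/syslog package, so this file compiles on
// every platform. Real libraries often achieve the same
// thing with per-platform files and build tags instead.
func logDestination() string {
	if runtime.GOOS == "windows" {
		return "eventlog"
	}
	return "syslog"
}

func main() {
	fmt.Println("logging to", logDestination())
}
```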
I'm really not; I am missing your point.
The majority of 3rd party Go code simply uses the Go stdlib just like our hypothetical program.
Go is no less platform-independent than Java, for example. Yes, there are some 3rd-party Java libraries that use JNI, just as some Go libraries use cgo.
The fact that these native bindings to (often) platform-dependent code exist does not make pure Go/Java programs platform-dependent.
Maybe our misunderstanding is over what we mean by 3rd-party libraries. I'm thinking of the Go repos people put on GitHub, or the Gorilla project (http://www.gorillatoolkit.org/); I am not talking about using cgo to call glibc or something similar.
The thing is, there aren't first- and second-class 3rd-party libraries; all are 3rd party.
Go's design leaves a lot to be desired, but at least it is a way to show the younger generations that didn't learn the above-listed languages how memory-safe languages can be compiled to native code without a VM in the middle.
OS/400 binaries use portable bytecodes which are compiled into native code at installation time.
Alphas had binary translation between binary formats.
Also, what's wrong with emulation?
However, it's certainly unnecessary (why not just generate the machine code already?), and it generates slower code because JIT compilation has to run its optimizer under pretty harsh time limits.
A coworker bet that Java would be within 30% of similarly structured C code. I didn't believe him, so I rewrote two of my company's computationally expensive algorithms: a variant of matrix factorization and a neural network.
The production version of the C code did some tricks that would not be possible in Java, so I rewrote the C versions to be a bit simpler. I then ported the simpler C programs to Java, a fairly straightforward and mostly mechanical job.
The MF code was 5-6% slower than in C; the RBM code was about 1% faster than C.
Granted, the tricks I was able to exploit in the production versions made those versions decisively faster than the simple Java version, but Java was and is much better than I realized.
Now with a VM you can't really do that. Even if you could for a given implementation of the VM, you might get terrible performance somewhere else. The run-anywhere VM approach means you're giving up the ability to fine-tune things, and there's really no way around that. All you can do is try to minimize the impact, and I guess JIT is one way. It's certainly true it's a lot faster than it used to be, but presumably there are some sweet spots (patterns the JIT is very good at) and some less-than-sweet spots (patterns where it's not), and your visibility and ability to engineer things is reduced.
One awesome trick the JIT does is for child classes with virtual functions: if only one child class is used in a run, it can remove the virtualness and inline the function used.
So, for example, one place you would see this is if you have distributions, e.g. a class Distribution with a virtual member function like deviance or some such. Say that, for a given run of your code, you only ever use one distribution, and that the loop calling the deviance is hot enough that you can see the slowdown from the vtable lookup and the function call. The Java JIT will inline the deviance function for the distribution you use (assuming the method is small enough, which it often is). The only way to get this behavior in C++ is to lift the decision out of the hot loop and generate one function for each distribution. I've done this and seen noticeable performance gains (10-15% on hours/days-long runtimes), but it does not make the code nice to work with.
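The "one function per distribution" trick is language-agnostic; here is a sketch of its shape using Go generics (the type and method names are invented for illustration, and Go's compiler may still share code between instantiations, so this shows the transformation rather than a guaranteed speedup):

```go
package main

import "fmt"

// Distribution mirrors the virtual-function example above.
type Distribution interface {
	Deviance(x float64) float64
}

type Gaussian struct{}

func (Gaussian) Deviance(x float64) float64 { return x * x }

// sumDynamic dispatches through the interface on every
// iteration, analogous to a virtual call in the hot loop.
func sumDynamic(d Distribution, n int) float64 {
	total := 0.0
	for i := 0; i < n; i++ {
		total += d.Deviance(float64(i))
	}
	return total
}

// sumSpecialized is instantiated per concrete type: the
// "lift the decision out of the hot loop" trick, done by
// the compiler instead of by hand-written copies.
func sumSpecialized[D Distribution](d D, n int) float64 {
	total := 0.0
	for i := 0; i < n; i++ {
		total += d.Deviance(float64(i))
	}
	return total
}

func main() {
	fmt.Println(sumDynamic(Gaussian{}, 3), sumSpecialized(Gaussian{}, 3))
}
```

Both paths compute the same result; the difference is only in how the deviance call gets resolved.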
Had it been an optimizing compiler since day one, it wouldn't have had to be like that.
Don't get me wrong, a lot of Java bugs the hell out of me and I'm personally much more comfortable writing C++. I'm just pointing out that code running on a virtual machine isn't some horrible backwater.
There were a number of times when I wondered if we had backed the wrong horse in Microsoft, but being able to gain traction in mobile development (assuming the Xamarin purchase goes ahead), along with other changes such as native compilation, makes me happy we stuck with them.
I'm happy either using intrinsics, or implementing assembly in separate .S files; I can't recall the last time my avoidance of inline assembly prevented me from implementing something I needed.