
Windows 10 and .NET Native - WhitneyLand
http://www.anandtech.com/show/9661/windows-10-feature-focus-net-native
======
GermainSmith
I don't see why anybody should be surprised about the swing back to Ahead-of-
Time compiled native code. I'm actually surprised it has taken as long as it
has.

Although interpretation and Just-in-Time compilation were not new ideas at the
time, Java first brought them to a wide audience around 20 years ago with its
use of the JVM.

Anyone who remembers those times, and even many years beyond then, will
remember how slow JVM-based applications were compared to native apps.

Even today, there's still a noticeable difference between JVM-based software
and native software, although the massive increase in computational power has
rendered the differences easier to overlook in many cases.

Since then we've seen similar techniques used by a variety of other platforms.

Despite many years of effort, research, and investment, we've yet to see
Just-in-Time compilation rival Ahead-of-Time compilation in terms of
performance.

While Just-in-Time's supporters claim that it allows for better optimization
or better portability, or point to contrived benchmarks where it has a slight
edge, we haven't seen such techniques hold up in the real world.

Ahead-of-Time C and C++ compilers have consistently produced software that's
fast, lean, and the best performer in real-world scenarios.

For years, people have pointed out this obvious reality, that Ahead-of-Time
compilation proves superior to Just-in-Time compilation in realistic
scenarios. Yet we've seen their claims very forcefully denied by the
Just-in-Time advocates.

Maybe after 20 years, it has finally become clear that the Just-in-Time
supporters were wrong, and that Ahead-of-Time compilation is the better
approach.

With battery-powered devices more prevalent than ever, the need for high-
performing and efficient binaries is greater than it has ever been. Returning
to Ahead-of-Time compilation is just what we need given these circumstances.
It's just a real shame it has taken so long for this important reality to be
recognized.

~~~
pjmlp
There have been AOT compilers from third-party JVM vendors almost since the
beginning, used mostly by companies that didn't want to expose their bytecode.

However, Sun decided to bet the farm on JIT and was against having AOT
compilation in the reference JDK, meaning anyone who wanted an AOT toolchain
for Java had to go shopping.

Now, it appears Oracle is of a different opinion.

"Java Goes AOT" at JVM Language Summit 2015

[https://www.youtube.com/watch?v=Xybzyv8qbOc&list=PLX8CzqL3Ar...](https://www.youtube.com/watch?v=Xybzyv8qbOc&list=PLX8CzqL3ArzUo2dtMurvpUTAaujPMeuuU&index=16)

Being used to memory-safe languages with AOT compilation like Turbo Pascal,
Modula-2, Modula-3, Oberon, Delphi, and Ada, among many others, I also would
have preferred that Java and .NET had offered the same type of toolchain from
the beginning.

Now, almost 20 years later, we need to teach younger generations that their
default implementations were just that: implementations. They got used to the
(wrong) idea that memory-safe languages imply a VM.

~~~
lmm
> Being used to memory safe languages with AOT compilation like Turbo Pascal,
> Modula-2, Modula-3, Oberon, Delphi, Ada among many others, I also would have
> prefer that Java and .NET had offered the same type of toolchain from the
> beginning.

Do those languages offer consistent, defined behaviour on all platforms (e.g.
for integer overflow)? Particularly in the context of multithreading: do they
have a cross-platform memory model? That, to me, was the big advantage of the
JVM.
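To make that concrete, here's a minimal sketch of what defined behaviour buys
you on the JVM: the Java spec guarantees two's-complement wraparound for int
overflow on every platform, whereas the same operation on signed ints in C is
undefined behaviour.

```java
public class OverflowDemo {
    public static void main(String[] args) {
        // The Java Language Specification guarantees that int arithmetic
        // wraps around in two's complement, on every conforming JVM,
        // on every architecture.
        int x = Integer.MAX_VALUE;
        System.out.println(x + 1);   // -2147483648, everywhere

        // The equivalent signed overflow in C is undefined behaviour:
        // the compiler may assume it never happens and optimize based
        // on that assumption.
    }
}
```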

~~~
pjmlp
They had their own set of issues, but nothing close to the anything-goes
undefined behaviour found across C compilers.

That was actually a reason why many of us embraced Java.

C compilers were somewhere between K&R and C89.

C++ compilers were even worse, with the standard still in progress.

These other languages suffered from not being adopted by OS vendors and, as
you may remember, back then we used to pay for compilers, so not being among
the official OS compilers was a barrier to adoption.

Java, being C++-like, having more consistent behaviour, and being free (as in
beer), won the hearts of many.

------
mrec
> _they [desktop apps] really want to move to the new app platform anyway_

1\. Stop anthropomorphizing desktop apps, they don't like it.

2\. Can't tell if the author means "desktop devs want to" (false IMO),
"desktop devs _ought_ to" (also false IMO) or "MS wants desktop devs to"
(undeniably true).

3\. It depresses me that just as MS is genuinely 'getting' open source,
they're completely missing the appeal and value of an open platform. One step
forward, two steps back.

EDIT: typo

~~~
edgyswingset
> "MS wants desktop devs to" (undeniably true)

What makes you say this, the publishing of a third app model for Windows,
messaging, some combination of the two, something else?

~~~
mrec
Both, plus the fact that significant new tech infrastructure projects like the
Windows Runtime are (mostly) restricted to Store apps.

I'm not against app stores or declarative UI in general, and I think things
like standardized sandboxing would be great for many desktop apps too. But a
platform that places you at the whim of a single distribution path, and
cripples portability to boot, isn't worth it.

------
saosebastiao
It's frustrating that .NET Native compilation is so tied to the Roslyn
toolchain that it can't be used for other languages like F#, even though
they're using the same IR.

~~~
sixbrx
Yeah, this seems to me like a repudiation of the whole idea of .NET as a
multi-language system. They may add support for F# later, but by that time the
damage will have been done, because the message is clear: "use anything other
than C# or VB at your own peril".

Currently F# is completely shut out; it can't even be used to make libraries
to be included in these kinds of apps! That makes it more dangerous career-
wise to use F# going forward, lest your team get wedged into a position of
having to rewrite one of _your_ libraries because of your choice of a "pet"
language.

The whole _point_ of bytecode is to make things like this language
independent. Big failure of vision to let this group do such a hack, IMO.

~~~
kevingadd
This is most likely just an artifact of F# producing unusual IL that's harder
to handle. In my experience, it's less 'we only support C#' and more
'compilers other than C#/VB.NET produce very strange IL that is hard to handle
well', which is true. Given the small number of users that rely on those
unusual compilers, it's hard to justify supporting them first, especially if
they aren't performance-sensitive.

~~~
MichaelGG
I emailed the .NET Native team asking them about this. They confirmed that
they reverse out IL to higher-level constructs. So they aren't really working
at the IL level, they're working on C# represented as IL.

This isn't entirely uncommon. A lot of code that deals with Expression<T> does
so very poorly and won't work on many valid trees, because the authors weren't
able to get C# to emit certain expressions.

------
simfoo
> At this time, it is not available for desktop apps, although that is
> certainly something that could come with a future update. It’s not
> surprising though, since they really want to move to the new app platform
> anyway.

Thanks, but no thanks. I'm not investing time and effort into a toolchain that
ultimately targets only a single platform, namely Windows 10, because they
don't want me to be able to just xcopy a .exe without touching their store.

------
greggman
How do I verify that the app I downloaded was made by the developer? I suppose
on phones there's no way to do that now anyway, or at least I don't know how
on iOS.

My point being: if MS is compiling, then how do I sign the executable as proof
it's from me?

~~~
pcunite
Good question. I assume you upload a cert? My question: what happens when I
sideload on my phone using one compiler version, but ship to MS, who uses a
different version, and the app runs differently?

------
moomin
You know, this is all very well, but the Windows 10 app market is pretty much
non-existent. You know who really uses .NET on Windows? LOB applications,
front ends to complex systems. The kind of thing you don't see in the app
store.

And that market is better served by, you know, a compiler that runs on the
command line, on a machine under a developer's control.

~~~
MichaelGG
The Windows Store was traditionally just scams with blatant trademark
infringement, which has severely damaged its rep. A quick search now shows a
lot of that is cleared up. Unfortunately, that just leaves the shovelware
junk.

Here is an interesting point: Right now, the #3 top free app is "Freeflix/Free
Movies Unlimited"[1] and #4 is a "free Mp3 downloader".

The publisher for the free movies is "Wamba Dev" which results in no related
hits on Google outside of Windows Store. Contact is a hotmail account, and the
privacy policy is on "dopeware.com".

So Microsoft isn't even able to get real publishers in top positions on its
own Store. Pretty damn sad.

Also the Metro runtime is diseased and broken. Opening the Store app right now
to do these searches took about 15 seconds. On a decently high-end ThinkPad.
It's embarrassingly bad. Even the Metro calc takes 2-3 seconds to open, ffs.

1: [https://www.microsoft.com/en-nz/store/apps/freeflix-free-mov...](https://www.microsoft.com/en-nz/store/apps/freeflix-free-movies-unlimited/9nblggh1z0m4)

------
jameshart
One consequence of these 'app store compiled' models is that the compilation
units you upload will have to be licensed to the app store in such a way that
the derived native apps are legitimately licensed. That probably rules out,
for example, even including LGPL libraries in your app, since the resulting
application will statically compile that library code in with .NET components.

~~~
dchest
LGPL doesn't forbid static linking, you'll just have to provide linkable
object files (or source code) separately to enable re-linking.
([https://www.gnu.org/copyleft/lesser.html#section4](https://www.gnu.org/copyleft/lesser.html#section4))

------
stevoski
I'd love to have the JVM equivalent of this for my company's main desktop
product, written in Java. Because it's Java (with an embedded JRE), the app
has slow startup while the JVM spins up. The installer is 90 MB on OS X and
55 MB on Windows, much of which is the embedded JRE, so our fortnightly
updates are much bigger than they need to be.

We tried an AOT compiler called JET. It was fine, albeit expensive. But it has
an extremely long build time, which is not great for our continuous build
system. Its supported Java version also lags considerably behind the current
Oracle Java release.

~~~
jacques_chester
As I understand it, a big part of the slow load time for Java apps is sucking
everything on the classpath into RAM, even though most apps will use a
fraction of the classes that are loaded.

Java 9's module system should make that a lot less of a problem by giving the
JVM guidance on what it should load and skip.

I am not a Java expert, though, so I may have misunderstood what's going on.

~~~
fiatmoney
The JVM class loader by default reads a class on first reference to it in the
code it's executing. Never refer to it, never gets loaded.

Startup on the JVM is more affected by the JIT warming up, and the heap
getting into a "stable" configuration.
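A quick sketch of that laziness. Strictly speaking it's class
_initialization_ that's deferred (a JVM is allowed to read the class file
earlier), but the observable effect is the same: the static initializer only
runs on first use.

```java
public class LazyLoadDemo {
    static class Heavy {
        // Runs only when Heavy is first actively used, not at startup.
        static { System.out.println("Heavy initialized"); }
        static int answer() { return 42; }
    }

    public static void main(String[] args) {
        System.out.println("main started");
        // Nothing above referenced Heavy, so its initializer hasn't run.
        // This first use is what triggers loading and initialization:
        System.out.println(Heavy.answer());
    }
}
```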

~~~
jacques_chester
Thanks for the correction.

------
redcalx
Every performance problem I've ever encountered has been due to poor design
choices, a poor choice of algorithm or, most often, a badly implemented
algorithm.

Once past the JIT compilation delay, execution speed is not significantly
different between AOT and JIT code on mature platforms such as .NET and Java.

There _are_ issues in CPU- and RAM-limited systems such as mobiles and
embedded devices, and also on web servers, where small improvements in
scalability can dramatically affect business viability and profitability.

~~~
MichaelGG
IIRC the .NET Native literature says the main benefits are reducing RAM by a
fair amount. Which makes sense - no runtime to load type info and no JIT to
run. AFAIK, they aren't claiming the compiled code is _that_ much better than
what the JIT does.

Of course, reducing the RAM footprint significantly might also improve
runtimes if it makes better use of cache.

------
graycat
I just wrote 18,000 programming language statements in Visual Basic .NET, in
files with 80,000 lines including comments, blank lines, etc., compiled much
of it with a command line script with line

    
    
        programPath = dq || SystemRoot || ,
        '\Microsoft.NET\Framework\v4.0.30319\vbc.exe' || dq
    

and give the rest to the Microsoft Internet Information Server (IIS) to
compile and run as Web pages, and it all works, but, still, I can't make any
sense out of the OP at all.

E.g., my command line compiles result in a file with extension EXE. What's in
that? Sure, I have the Microsoft .NET Framework 4.0 installed, and the VBC.EXE
invoked as above is, of course, one of the files in the installation of that
.NET Framework.

When from a command line I just run that script that runs that VBC.EXE, I get
nice output showing the command line options for the program -- simple,
direct, explicit, terrific.

So, how do I connect what that .NET Framework 4.0 VBC.EXE does with the
discussion in the OP?

BTW: Why use Visual Basic .NET instead of C#? C# borrows too much of the old C
syntax, and Visual Basic has syntax more _traditional_ and more like that of
Pascal, PL/I, Fortran, etc., and I find that more traditional syntax easier to
write and read and less error prone. But, likely and apparently, the
difference between C# and Visual Basic .NET is mostly just _syntactic sugar_
anyway.

By the way, where does the famous CLR -- common language runtime -- enter this
picture?

Thanks.

~~~
kyberias
What?

~~~
graycat
What part of my question do you not understand?

~~~
marvy
What is the question?

~~~
graycat
The question was:

> So, how do I connect what that .NET Framework 4.0 VBC.EXE does with the
> discussion in the OP?

~~~
msbarnett
The VB compiler compiles your code, not into native machine code, but into an
intermediate representation targeting the virtual machine defined by the
Common Language Runtime. This intermediate representation is known as CIL
(Common Intermediate Language), and it is the bytecode that the .Net CLR then
JITs at runtime.

This article is about compiling .Net code direct to machine code and skipping
the whole CIL/CLR/JIT chain.
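The JVM has the same two-stage shape, which makes for an easy sketch of what
"bytecode, not machine code" means: the compiler's output begins with a fixed
magic number rather than any CPU's instructions. (Java is used here for the
illustration; the CIL case is analogous, with the bytecode wrapped in a PE
file.)

```java
import javax.tools.ToolProvider;
import java.nio.file.*;

public class BytecodeDemo {
    // Compiles a trivial class with the in-process compiler (requires a
    // JDK, not just a JRE) and returns the first four bytes of the
    // resulting .class file as hex. Every .class file starts with
    // 0xCAFEBABE, identical on x86, x86-64, and ARM.
    static String magic() throws Exception {
        Path dir = Files.createTempDirectory("demo");
        Path src = dir.resolve("Hello.java");
        Files.writeString(src, "class Hello { }");
        ToolProvider.getSystemJavaCompiler()
                    .run(null, null, null, src.toString());
        byte[] b = Files.readAllBytes(dir.resolve("Hello.class"));
        return String.format("%02X%02X%02X%02X",
                b[0] & 0xFF, b[1] & 0xFF, b[2] & 0xFF, b[3] & 0xFF);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(magic());   // CAFEBABE
    }
}
```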

~~~
graycat
Thanks for the progress.

I've used _virtual machines_ for decades, e.g., IBM's VM with the interactive
CMS (Conversational Monitor System). In what sense is the Microsoft CIL/CLR a
_virtual machine_? E.g., does it have any security properties, e.g.,
restrictions via some attribute control lists?

The CLI is a _virtual machine_ only in the sense that the _byte codes_ are not
actual machine instructions for any real processor core but are just some
_intermediate_ code _instructions_ for an _imaginary_ machine that does not
really exist and, in that sense, is _virtual_?

So, on Windows, say 7, 8.1, 10, Windows Server of some year, etc., suppose I
take a file A.VB I've typed in with Visual Basic .NET source code, give that
file A.VB to the .NET program VBC.EXE, which is in the collection of .NET
files, and get out from running VBC.EXE file A.EXE. Now A.EXE has _CIL byte
code_?

Why _byte_ code? Or, in what sense can each _code_ be only one _byte_ long?

Does this _byte code_ mean that it would be the same for running on 32 bit
Intel x86 processors, 64 bit Intel processors, ARM processors, etc.?

The CLR part is mostly _run time_, that is, code my program A.VB and A.EXE
needs to run, that is, in old terms, a _library_ to be _linked in_ via a
_linkage editor_? Then with .NET on Windows, the JIT work also plays the role
of a linkage editor?

One old trick was: don't even bring in the code to be _linked_ and, really,
don't even _link_ to that code. Instead, as the program runs and such code
gets called, that is, an attempt is made to use it (which maybe often won't
happen), there is a software _fault_, _interrupt_, or some such, and the JIT
code, still in Windows and still available to run, only then, after the
_interrupt_, actually gets and links the code that is needed but so far was
missing?

Another old trick was: really never bring that _library_ code into the user's
program, but do _link_ to it, and have the code be part of an address space
shared with all the user address spaces, or even in another security _ring_.
Maybe Microsoft is also doing some such trick? Did I just outline the Windows
_Global Cache_, e.g., mostly from DLLs instead of EXEs?

Okay, I can see the point, as in the OP, of the effort for _native_. Indeed,
I've suspected that on Windows frequently used programs are kept in a _cache_
somewhere and, likely, ready to read into an address space or part of a
_process_ as fast as possible, maybe even with the usual address _relocation_
work already done. So, this looks like an _under the covers_ version of
_native_ for Windows?

Ah, I just looked up Microsoft's _ngen_ , and maybe what I just described was
ngen?

I've seen no very clear documentation of these issues. I've been guessing at
what happens, and that's not so good.

Thanks for the tutorial.

~~~
wvenable
Some of your questions are because there is a lot of overloaded terminology.
Virtual Machine, for example, has two meanings:

1) A _system virtual machine_ provides a complete system platform which
supports the execution of a complete operating system (OS).[1] These usually
emulate an existing architecture, and are built with the purpose of either
providing a platform to run programs where the real hardware is not available
for use (for example, executing on otherwise obsolete platforms), or of having
multiple instances of virtual machines leading to more efficient use of
computing resources, both in terms of energy consumption and cost
effectiveness (known as hardware virtualization, the key to a cloud computing
environment), or both.

2) A _process virtual machine_ (also, language virtual machine) is designed to
run a single program, which means that it supports a single process. Such
virtual machines are usually closely suited to one or more programming
languages and built with the purpose of providing program portability and
flexibility (amongst other things). An essential characteristic of a virtual
machine is that the software running inside is limited to the resources and
abstractions provided by the virtual machine—it cannot break out of its
virtual environment.

The CLI is the second type of virtual machine, and something like VirtualBox,
VMware, and IBM mainframe VMs are the first kind.

> Why byte code? Or, in what sense can each code be only one byte long?

Again, it's not "byte code", it's "bytecode", which is defined here:
[https://en.wikipedia.org/wiki/Bytecode](https://en.wikipedia.org/wiki/Bytecode)

"Bytecode, also known as p-code (portable code), is a form of instruction set
designed for efficient execution by a software interpreter"

~~~
graycat
Thanks.

Okay, I know some about the details of IBM's VM and a little about VMware.

I have wondered: the way VM has worked on the IBM mainframe instruction set
has been partly due to an accident of the design of that instruction set,
plus, later, some extensions for VM. Done that way, it's tough for a running
program to know whether it is running in a virtual machine or not. So, in
particular, a program running on VM can be using privileged instructions and
not know that it doesn't really have access to the real hardware.

So I've wondered if the Intel x86 instruction set also has this accident and
thus can run operating systems that use privileged instructions without their
knowing they are running on a VM. Or maybe the ability to run on a VM came
from some extensions to the Intel instruction set. Do you know?

Does the Microsoft CLI/CLR software have some security features beyond just
any _native_ program in, say, Fortran or assembler running in an address
space, process, or whatever Microsoft calls where a program runs?

I'm beginning to get the prerequisites for reading the OP, that is,
understanding what the long standard alternative to native code has been.

Heck, I can understand native code -- at one time I entered some simple
programs via the computer console sense switches. And I printed out the
_object_ listing from a Fortran compiler and went over the machine language
instructions one by one. I discovered that even for some simple code and a
good Fortran compiler, assembler could be faster by a factor of several.

Just read the Wikipedia article. Nice and easy.

Thanks.

~~~
wvenable
> So I've wondered if the Intel x86 instruction set also has this accident
> and, thus, can run operating systems that use privileged instructions but
> not know they are running on a VM. Or maybe the ability to run on a VM was
> from some extensions to the Intel instruction set. Do you know?

No, originally x86 was very difficult to virtualize. User-mode code can be run
directly on the processor but, for kernel-level code, binary translation was
necessary to dynamically rewrite the code containing privileged instructions.
These days, however, nearly all modern x86 processors have extra instructions
specifically for virtualization.

> Does the Microsoft CLI/CLR software have some security features beyond just
> any native program

There are a bunch of sandboxing features available in the CLR but most
applications run with full trust and can do anything a native program can do.

~~~
graycat
Super! Thanks, I needed that!

