
Safe Native Code - panic
http://joeduffyblog.com/2015/12/19/safe-native-code/
======
pcwalton
> You need a great inliner. You want common subexpression elimination (CSE),
> constant propagation and folding, strength reduction, and an excellent loop
> optimizer. These days, you probably want to use static single assignment
> form (SSA), and some unique SSA optimizations like global value numbering
> (although you need to be careful about working set and compiler throughput
> when using SSA everywhere)…I hate to say it, but doing great at all of these
> things is "table stakes."

I'm not sure. It depends on the audience you're aiming for. Go has done very
well with a compiler that essentially performs no compiler optimizations (by
modern standards; certainly it doesn't do most of the above) by positioning
itself appropriately.

> For example, there are ways to write the earlier loop that can easily
> "trick" the more basic techniques discussed earlier:

Yeah, bounds check elimination ends up nightmarish quickly. I think I prefer
an approach that fixes the problem at its source, by just encouraging
programmers to stop using for loops (which are bad for non-performance-related
reasons as well). Note that all of the examples given use for loops; if they
were rewritten using iterators, there wouldn't be a problem. C# has iterators,
so the machinery is there; they just need to make it more painful to use a for
loop than the iterator alternative. :)

> Imagine you have a diamond. Library A exports a List<T> type, and libraries
> B and C both instantiate List<int>. A program D then consumes both B and C
> and maybe even passes List<T> objects returned from one to the other. How do
> we ensure that the versions of List<int> are compatible?

I don't understand why this is a problem. Both separately compiled
implementations of List<int> should monomorphize to bit-identical runtime
value representations. Does the problem have to do with quirks of the
compiler's RTTI implementation (e.g. making sure "foo instanceof List<int>" in
package A works even if "foo" came from package B)?

> First, Go generates the interface tables on the fly, because interfaces are
> duck typed.

Does it really? That's interesting and seems extremely inefficient. I would
have assumed that the Golang compiler would just statically determine the set
of vtables that need to be used by the program and write them up front into
rodata. Am I missing something?

~~~
cwzwarich
> I would have assumed that the Golang compiler would just statically
> determine the set of vtables that need to be used by the program and write
> them up front into rodata. Am I missing something?

IIRC the Go compiler will generate the itables that it can determine are
needed at compile time, but because you can dynamically request an interface
conformance for a type, there is also a runtime fallback.

Swift does something similar; the compiler tries to generate protocol
conformances as much as possible, but because there may be an unbounded number
of types at runtime, clearly it can't generate all of them.

~~~
pcwalton
> IIRC the Go compiler will generate the itables that it can determine are
> needed at compile time, but because you can dynamically request an interface
> conformance for a type there is also a runtime fallback.

That would make more sense. But
[http://research.swtch.com/interfaces](http://research.swtch.com/interfaces)
claims it's all runtime-based (though that document may well be out of date).

~~~
cwzwarich
It looks like the optimization I remember was implemented in gccgo, but maybe
they never added it to the main compiler:
[http://www.airs.com/blog/archives/277](http://www.airs.com/blog/archives/277)

------
vive-la-liberte
>Thankfully, these days you can get an awesome off-the-shell optimizing
compiler like LLVM that has most of these things already battle tested, ready
to go, and ready for you to help improve.

I wonder if "off-the-shell" is a typo or a pun.

------
MichaelGG
So what exactly was Midori? He says they benchmarked "booting all of Windows"
- what does that mean in context? And why are they back to pushing and
focusing on C++?

Really good series of posts. Always good to see sufficiently smart compilers
and such.

~~~
krylon
One of the earlier posts in the series explained Midori in more detail.
Apparently it was a research operating system built at Microsoft to explore
using safe/managed code for building the entire OS while also aiming for the
same kind of performance you get with raw C/C++.

~~~
MichaelGG
So what does "booting Windows" mean? Did they port some sort of Win32 compat
thing or was that supposed to be a general term for their UI?

~~~
ellism
In addition to backend compilation for managed code, Phoenix could continue to
be used to compile C/C++ code. One of the things the team did was continue to
compile the Windows codebase to compare Phoenix and UTC (for both functional
and performance reasons, IIRC).

Disclaimer: I was on the Midori team for a few years but did not work on
Phoenix itself.

------
nickpsecurity
How much of this tech has been published in academic papers? Microsoft's
normal MO seems to be to just patent it then publish and/or use it like they
did with a lot of other stuff (incl VerveOS). I'd love to read the technical
reports on how they handled each problem and what specific results came from
it. Meanwhile, Joe's write-ups are a great substitute.

~~~
pjmlp
Joe explains this in the first blog entry: almost nothing has been published,
which is why he decided to write these posts, so that the information doesn't
get lost and the world outside MSR gets to learn a bit about what Midori was
all about.

------
mwcampbell
Looking forward to CoreRT being usable for non-trivial applications. But I
wonder why CoreRT is still in early development, while .NET Native is ready
for universal apps on the Windows Store. Why wouldn't the runtime from .NET
Native be suitable for desktop and server apps today?

~~~
pjmlp
As I understand it, it doesn't cover all possible MSIL opcodes or .NET APIs.

Only libraries that can run on top of CoreCLR can be AOT compiled with .NET
Native. For example, the F# team is currently working to make it possible to
target CoreCLR.

Then there are possibly political issues around where Microsoft wants to
drive .NET Native.

------
lostmsu
Great article series. Very easy to understand, and it brings out a lot of the
inner workings of a managed runtime.

------
AlexCoventry
How does this compare to Native Client?

------
jstclair
Admins, how did this dupe
[https://news.ycombinator.com/item?id=10764870](https://news.ycombinator.com/item?id=10764870)
(11 hours ago)?

~~~
DanBC
(I'm not an admin. You probably want to email HN because they won't see the
question otherwise.)

In the past HN had a fairly strict dupe-detection filter.

That meant that a lot of good stories that didn't get attention on the first
posting didn't get reposted.

Currently the dupe-detection is much weaker than it used to be. A story that
didn't get much attention on the first post can be reposted easily now.

HN tried an experiment where they'd email people and ask them to repost
submissions, and give those reposts a small bump. That was a lot of work, so
they only do that for "Show HNs". Now they do something like an auto-repost
which resets the timestamp.

This means that sometimes you'll post something, and it won't get much
attention, and a few hours later someone else will post the same thing and
it'll get upvotes.

This isn't going to stay like it is. They're working on a better system.

[https://news.ycombinator.com/item?id=10754760](https://news.ycombinator.com/item?id=10754760)

>> We've recently started doing things to make the original submitter get the
front-page slot more often

[https://news.ycombinator.com/item?id=10753401](https://news.ycombinator.com/item?id=10753401)

>> Invited reposts are mostly deprecated now in favor of re-ups [1], but when
it looks like the submitter might also be the author (as e.g. with Show HNs),
we still send them. It's nice for an author to know that their post may still
get discussed, and it's good for HN when an author jumps into the thread.

[https://news.ycombinator.com/item?id=10705926](https://news.ycombinator.com/item?id=10705926)

(A meta post, with links to previous discussion).

~~~
jstclair
Thanks for that! I knew they'd loosened the dupe detection up a bit, but this
was ~10 hours.

------
melted
It's sad that such otherwise smart people choose to stay at Microsoft.
Microsoft doesn't yet realize it, but the world really would be a better place
if it were to die. They did a lot of interesting things over the years, but
their view of the world is so fundamentally out of touch with what the world
is today, that they can't help but lose mindshare. They're still quite strong,
as there's no alternative to them on the desktop (for most people), but they
never quite gained dominance on the server (and continue to further fuck it up
by introducing more and more bizarre licensing arrangements), their
development environments and APIs look like a bad joke, and most decent
engineers would rather have their balls/ovaries removed than code for the
Microsoft platform. Nearly a decade ago, when I actually did develop for
Windows, I asked a fellow engineer who coded for Linux why he refused to admit
how superior C# was to Java. He basically said that he was not interested in
chaining himself to any given platform. In the years that followed, I saw just
how right he was.

So my message to Joe and other solid folks like him: there are tons of
opportunities outside MS campus. I know large companies tend to make you
believe it's a barren desert out there, but it's simply not true, and it's
especially not true now, when the job market is starved for good talent. Go out
there, make the world a better place. Let Microsoft ride into the sunset.

~~~
pikzen
> their development environments and APIs look like a bad joke

I agree, cmd.exe is bad. Everything that doesn't require the command line is
better in Windows land though. Nothing comes even close to Visual Studio
(though CLion is getting there for C/C++), .NET has an amazing set of APIs
and I wouldn't trade them for any other language's, Direct3D is actually a
good API compared to the pile of crap that is OpenGL. Win32 is actually not
that bad compared to the crap you have to deal with on Linux.

For certain things, Windows is the superior alternative. However, if your job
requires you to do JS/ruby/python/etc. then by all means, go with Linux/OSX,
it is clearly better. But the integration of some tools and languages on
Windows beats any other OS.

~~~
melted
Huh? For the past several years I've been doing C++ development in Vim, with
YouCompleteMe providing excellent code completion. Eclipse works pretty great
for C++ too, especially for browsing code. Seems to me you haven't seriously
worked in Linux. The only thing that's lacking is the debugger. Everything
else is either the same (e.g. Java) or an order of magnitude better. All
non-ms programming languages are UNIX-first. They feel bolted-on when ported.
And let us also not forget the complete freedom you have on UNIX systems. Want
to spin up a VM or a dozen? Go right ahead, for free. Want your server to
serve five hundred people? Go ahead, no need for extra licensing. Want dev
setup for nearly any language under the sun? It's a one-liner. And so on and
so forth.

Another benefit is, the lower level APIs are the same they were two decades
ago, some are even older. And that API surface is much smaller. You basically
learn them once, and they're good for life. Same with much of the tooling such
as text editors, build systems, command line tools, and so on.

Once you grok all this, there's _really_ no going back to the ball and chain
that is Windows.

~~~
wolfgke
> All non-ms programming languages are UNIX-first. They feel bolted-on when
> ported.

This is not an argument against Windows, but against these programming
languages and their maintainers.

> And let us also not forget the complete freedom you have on UNIX systems.

There are lots of commercial UNIX systems that have much more restrictive
licensing terms than Windows.

> Another benefit is, the lower level APIs are the same they were two decades
> ago, some are even older. And that API surface is much smaller. You
> basically learn them once, and they're good for life.

The same holds for WinAPI etc.

~~~
melted
1. Be that as it may, no one is foolish enough to create their life's work on
a proprietary platform.

2. Which is why commercial Unixes are being supplanted by FOSS. And it's far
easier to do than, e.g., porting anything from Windows to anything else.

3. WinAPI is a verbose, poorly designed turd, so that doesn't really help your
argument any.

~~~
catnaroek
I'm primarily a Linux user who only uses Windows at work, and I don't consider
POSIX precisely the pinnacle of tasteful API design.

~~~
melted
It doesn't have to run faster than the bear. It only has to run faster than
the other guy.

~~~
catnaroek
It isn't clear to me that POSIX is strictly better than the Windows API in all
respects. For instance, synchronizing processes manipulating a common file is
_very_ awkward in POSIX.

