I'm not sure. It depends on the audience you're aiming for. Go has done very well with a compiler that essentially performs no compiler optimizations (by modern standards; certainly it doesn't do most of the above) by positioning itself appropriately.
> For example, there are ways to write the earlier loop that can easily "trick" the more basic techniques discussed earlier:
Yeah, bounds check elimination ends up nightmarish quickly. I think I prefer an approach that fixes the problem at its source, by just encouraging programmers to stop using for loops (which are bad for non-performance-related reasons as well). Note that all of the examples given use for loops; if they were rewritten using iterators, there wouldn't be a problem. C# has iterators, so the machinery is there; they just need to make it more painful to use a for loop than the iterator alternative. :)
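To make that concrete (sketched in Go rather than C#, since the same tension exists there; function names are mine): an explicit index loop leaves the compiler a proof obligation that the iterator form never creates in the first place.

```go
package main

import "fmt"

// sumIndexed iterates with a manual index against a separately stored
// length. Depending on how the bound is computed and what happens to n
// inside the loop, a compiler's bounds-check-elimination pass may fail
// to prove that s[i] is always in range, leaving the check in.
func sumIndexed(s []int) int {
	total := 0
	n := len(s)
	for i := 0; i < n; i++ {
		total += s[i] // check survives if the prover loses track of n
	}
	return total
}

// sumRange uses the language's iterator form. With no explicit
// indexing there is no bounds check to eliminate at all.
func sumRange(s []int) int {
	total := 0
	for _, v := range s {
		total += v
	}
	return total
}

func main() {
	s := []int{1, 2, 3, 4}
	fmt.Println(sumIndexed(s), sumRange(s))
}
```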
> Imagine you have a diamond. Library A exports a List<T> type, and libraries B and C both instantiate List<int>. A program D then consumes both B and C and maybe even passes List<T> objects returned from one to the other. How do we ensure that the versions of List<int> are compatible?
I don't understand why this is a problem. Both separately compiled implementations of List<int> should monomorphize to bit-identical runtime value representations. Does the problem have to do with quirks of the compiler's RTTI implementation (e.g. making sure "foo instanceof List<int>" in package A works even if "foo" came from package B)?
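Sketching the diamond in Go generics for concreteness (the A/B/C libraries are compressed into one file here, which is exactly what hides the hard part): within one compilation, the two instantiations trivially unify; the parent's question is how to guarantee the same when B and C are compiled separately.

```go
package main

import "fmt"

// List stands in for library A's exported generic type.
type List[T any] struct{ Items []T }

// fromB and fromC stand in for libraries B and C, each independently
// instantiating List[int].
func fromB() List[int] { return List[int]{Items: []int{1, 2}} }
func fromC() List[int] { return List[int]{Items: []int{3}} }

func main() {
	b, c := fromB(), fromC()
	// At the source level the two instantiations are the same type, so
	// values flow freely between them. The diamond problem is ensuring
	// that the separately compiled copies of List[int] agree on layout,
	// metadata, and identity at the binary level too.
	b.Items = append(b.Items, c.Items...)
	fmt.Println(b.Items)
}
```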
> First, Go generates the interface tables on the fly, because interfaces are duck typed.
Does it really? That's interesting and seems extremely inefficient. I would have assumed that the Golang compiler would just statically determine the set of vtables that need to be used by the program and write them up front into rodata. Am I missing something?
The ultimate answer to Singularity/Midori/etc is yes, you can have an OS (including drivers and interrupt handlers) that is 100% provably memory safe and thread safe and is performance-competitive with existing systems.
Unfortunately Microsoft basically shelved most of the work, and it doesn't look like anyone else is going to pick up the slack. I predict more heartbleeds and zero-day RCEs in our future. At least we'll all get 0wned with fast C code.
IIRC the Go compiler will generate the itables that it can determine are needed at compile time, but because you can dynamically request an interface conformance for a type, there is also a runtime fallback.

Swift does something similar; the compiler tries to generate protocol conformances as much as possible, but because there may be an unbounded number of types at runtime, clearly it can't generate all of them.
That would make more sense. But http://research.swtch.com/interfaces claims it's all runtime-based (though that document may well be out of date).
> Go's dynamic type conversions mean that it isn't reasonable for the compiler or linker to precompute all possible itables: there are too many (interface type, concrete type) pairs, and most won't be needed.
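The runtime fallback is easy to trigger; this minimal sketch (my own names) shows the case the quote describes, where nothing statically ties the concrete type to the interface:

```go
package main

import "fmt"

type Sizer interface{ Size() int }

type Box struct{ n int }

func (b Box) Size() int { return b.n }

func main() {
	var v interface{} = Box{n: 3}

	// No static code converts Box to Sizer; the assertion is checked
	// at runtime, and the runtime finds or lazily builds and caches
	// the (Sizer, Box) itable on first use.
	if s, ok := v.(Sizer); ok {
		fmt.Println(s.Size())
	}
}
```

Precomputing every (interface type, concrete type) pair up front would mean materializing the full cross product, which is why the runtime builds them on demand instead.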
I wonder if "off-the-shell" is a typo or a pun.
Really good series of posts. Always good to see sufficiently smart compilers and such.
Disclaimer: I was on the Midori team for a few years but did not work on Phoenix itself.
Only libraries that can run on top of CoreCLR can be AOT compiled with .NET Native. For example, the F# team is currently making it possible to target CoreCLR.
Then there are possibly the political issues where Microsoft wants to drive .NET Native.
In the past HN had a fairly strict dupe-detection filter.
That meant that a lot of good stories that didn't get attention on the first posting didn't get reposted.
Currently the dupe-detection is much weaker than it used to be. A story that didn't get much attention on the first post can be reposted easily now.
HN tried an experiment where they'd email people and ask them to repost submissions, and give those reposts a small bump. That was a lot of work, so they only do that for "Show HNs". Now they do something like an auto-repost which resets the timestamp.
This means that sometimes you'll post something, and it won't get much attention, and a few hours later someone else will post the same thing and it'll get upvotes.
This isn't going to stay like it is. They're working on a better system.
>> We've recently started doing things to make the original submitter get the front-page slot more often
>> Invited reposts are mostly deprecated now in favor of re-ups, but when it looks like the submitter might also be the author (as e.g. with Show HNs), we still send them. It's nice for an author to know that their post may still get discussed, and it's good for HN when an author jumps into the thread.
(A meta post, with links to previous discussion).
A dupe submission could count as a major upvote, weighted by proximity to the original: 0h-24h: +100, 1d-12d: +50, otherwise: +1.
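The proposed weighting, sketched as code (thresholds are the commenter's numbers, the function name is mine):

```go
package main

import "fmt"

// dupeBoost returns the score bump a duplicate submission would grant,
// weighted by how close it lands to the original: within 24 hours it
// counts as a big upvote, within 12 days a medium one, and afterwards
// just +1.
func dupeBoost(hoursSinceOriginal float64) int {
	switch {
	case hoursSinceOriginal < 24:
		return 100
	case hoursSinceOriginal < 12*24:
		return 50
	default:
		return 1
	}
}

func main() {
	fmt.Println(dupeBoost(3), dupeBoost(100), dupeBoost(1000))
}
```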
So my message to Joe and other solid folks like him: there are tons of opportunities outside the MS campus. I know large companies tend to make you believe it's a barren desert out there, but it's simply not true, and it's especially not true now, when the job market is starved for good talent. Go out there, make the world a better place. Let Microsoft ride into the sunset.
I agree, cmd.exe is bad. Everything that doesn't require the command line is better in Windows land, though. Nothing comes even close to Visual Studio (CLion is getting there for C/C++), .NET has an amazing set of APIs that I wouldn't trade for any other language's, and Direct3D is actually a good API compared to the pile of crap that is OpenGL. Win32 is actually not that bad compared to the crap you have to deal with on Linux.
For certain things, Windows is the superior alternative. However, if your job requires you to do JS/ruby/python/etc. then by all means, go with Linux/OSX, it is clearly better.
But the integration of some tools and languages on Windows beats any other OS.
Powershell is a huge improvement, though, and in many ways far surpasses the unix command line environments. You have real objects at the prompt, with fields and methods, you aren't just wrangling plain text all over the place. Naming and parameter sets are consistent and non-cryptic. Everything is tab-completable (command names, parameter names, [some] parameter values, types, registry keys, paths, history, etc, etc). Writing new shell commands is super simple. Parameter sets for scripts and shell commands are completely declarative, no more manual argument parsing.
Not to say there aren't weak spots: performance and the error-handling model come to mind. Disabling execution of scripts by default, in a scripting environment, also a massive facepalm. There's also the separate-but-related matter of the terminal host environment itself, which has been historically terrible on Windows. At least with Win10 that's finally moving in the right direction.
Anyways, it's a shame people still think "Windows CLI" == "cmd.exe" because most of us moved past that 6 years ago when Win7 included Powershell by default.
These are unrelated statements, not sure what you are getting at.
Powershell is more than adequate for building one-liners, scripts, tools, and system automation. The syntax is kind of like C# and Perl had a baby. I'd say it stands up quite well against Bash or Perl, and unquestionably against Batch. It's not without its warts, and I wouldn't want to build a large production app/service with it, but that's not what it was designed for nor what it claims to target.
As I mentioned in my earlier comment, the interactive terminal environment itself has long been quite poor on Windows, and Powershell inherited that. Maybe that's what you mean by "interactive command line"? It's miles better today, though, for two reasons: 1) the default Win10 terminal is much improved (resizable on-the-fly, copy/paste, line selection ... still a ways to go though), and 2) Powershell itself now integrates PSReadLine out of the box, which brings all manner of features that were badly needed (undo/redo stack, syntax highlighting, a solid multi-line editing story, optional emacs mode, smarter navigation, it-just-works copy/paste, easily remappable keybindings, more).
And what bothers you about the interactive command line?
Try zsh with a decent dotfile setup, and you'll understand. Until then it's like explaining colors to a color-blind person.
I'd have to guess what exactly that means and entails, but you're talking about customization to your favourite shell and comparing that to the stock shell without anything on another OS. Don't you think that's a bit unfair?
Another benefit is, the lower level APIs are the same they were two decades ago, some are even older. And that API surface is much smaller. You basically learn them once, and they're good for life. Same with much of the tooling such as text editors, build systems, command line tools, and so on.
Once you grok all this, there's _really_ no going back to the ball and chain that is Windows.
This is not an argument against Windows, but against these programming languages and their maintainers.
> And let us also not forget the complete freedom you have on UNIX systems.
There are lots of commercial UNIX systems that have much more restrictive licensing terms than Windows.
> Another benefit is, the lower level APIs are the same they were two decades ago, some are even older. And that API surface is much smaller. You basically learn them once, and they're good for life.
The same holds for WinAPI etc.
Disagree. It's the operating system's job to make it easy for programmers to write the programs they want. If language implementors consistently don't want to support a certain operating system, it's a sign that something might be wrong with that operating system. Not necessarily from a technical point of view - it could be, say, a marketing issue.
Having written code for Win32 for a while, I kind of have to disagree. Compared to Gtk+ (I admit, I only used it from Python/Perl/Ruby) or Qt, building GUIs in C/Win32 is a huge pain.
I only had very brief contact with Win32's threading API, but it looked like that was indeed more fun to use than pthreads.
On the other hand: WinAPI stays (even binary-)compatible with new versions of Windows. Gtk+, by contrast, is replaced by newer incompatible versions all the time.
(Although I feel for the poor programmers that have to make sure twenty year-old applications that grossly abuse the documented API keep running.)
Google could release Android 7 with another POSIX-like kernel, using the same official NDK APIs, and the only apps that would break are the ones using non-public APIs.
Microsoft is not only Windows.
I don't think it says anything about their stance on Linux that AWS and Google do a better job.
They did when they used Linux servers to centralize all the Skype traffic: http://arstechnica.com/business/2012/05/skype-replaces-p2p-s...
Microsoft and Apple are the only companies left that still care about OS research.
Another great example is their VerveOS work for highly-assured OS's:
Xax for legacy code protection was clever, too:
"Dead"? How many research OS's aren't "dead"? The project being completed just means the active research has stopped. The whole point of a research project like this is to find some interesting bits of knowledge that you can incorporate into other projects.
I'm sure this was already useful for .NET Native, and it will have bits (or at least lessons learned) for every MS language and OS down the road.
> What "operating systems research" does Windows 10 actually represent?
I'm sure there are some fancy bits buried in it. The new JIT in .NET 4.6 is pretty fancy, for example (even though it isn't strictly OS research, it's pretty tightly tied to the OS given how the universal app platform and store work). Win 10 is indeed mostly a polish release of Windows 8.1 (a few years ago it surely would have been called Windows 8.2 or Windows 8.1 SP1). I think the naming was just part of the strategy to stop bumping major versions, like OS X.
Where are opportunities outside the MS campus for people who love research about operating systems and/or programming languages (besides academia)?