Yeah, I prefer not to use the built-in Visual Studio project templates. Best to always use bin-deployed binaries explicitly installed from NuGet packages. In the future (later this year) we can also bin-deploy the runtime (.NET Core) and the base class library (CoreFX).
With STMIA it's an understandable optimization, but nobody would expect STMDA to work the way it does. That would require careful testing and a deliberate choice to use the weird one even though it feels counterintuitive.
The order is guaranteed by the architecture though (not merely by the implementation), provided the target is Device or Strongly Ordered memory. ("For a VLDM, VSTM, LDM and STM instruction with a register list that does not include the PC, all registers are accessed in ascending address order for Device accesses with the non-Reordering attribute." -- v8 ARM ARM.) So you don't need to test at all, you can just rely on the documentation to tell you it works.
Incidentally, the note "Since the write is done with one instruction, a DMA cannot preempt the CPU in the middle of the writes" from the article is likely not correct. The STM may be only one insn but it may generate multiple memory accesses to the bus, so it's quite plausible that a DMA device might get accesses in between words. (Of course RAM is usually mapped Normal in which case caches and store buffers will be heavily reordering it anyhow, so nobody relies on ldm/stm ordering here.)
The arguments used for constrained type parameters can be checked by the compiler, so no casting is necessary at runtime.
"... C# does strong type checking when you compile the generic type. For an unconstrained type parameter, like List<T>, the only methods available on values of type T are those that are found on type Object, because those are the only methods we can generally guarantee will exist. So in C# generics, we guarantee that any operation you do on a type parameter will succeed."
Edit (more explicit quote):
"When you say K must implement IComparable, a couple of things happen. On any value of type K, you can now directly access the interface methods without a cast, because semantically in the program it's guaranteed that it will implement that interface. Whenever you try and create an instantiation of that type, the compiler will check that any type you give as the K argument implements IComparable, or else you get a compile time error."
Furthermore (related to the CLR avoiding boxing costs):
"I'm just pointing out that we do fairly aggressive code sharing where it makes sense, but we are also very conscious about not sharing where you want the performance. Typically with value types, you really do care that List<int> is int. You don't want them to be boxed as Objects. Boxing value types is one way we could share, but boy it would be an expensive way."
What I wonder is: take "SortedList&lt;T&gt; where T : IComparable". Now there is a generic Add(T t) method, which needs to call t.CompareTo(x). Since Add does not know the dynamic type of t, we don't know the vtable offset of CompareTo at compile time. Thus the CompareTo call cannot be compiled to "load method pointer from vtable; call it". We need something more expensive, or JIT magic (traces, guards, etc.).
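The compile-time side of those quotes is easy to see in code. Here's a minimal sketch in Java, whose bounded type parameters give the same compile-time guarantee being described (the constraint is checked at each instantiation, and the interface method can be called without a cast) -- though note the runtime story differs, since Java erases the type parameter while the CLR does not. The class and method names are just illustrative:

```java
import java.util.*;

// The bound "K extends Comparable<K>" plays the role of
// "where K : IComparable<K>": the compiler verifies the constraint
// at every instantiation, so compareTo needs no cast inside add().
class SortedList<K extends Comparable<K>> {
    private final List<K> items = new ArrayList<>();

    void add(K item) {
        // No cast: the bound guarantees compareTo exists on K.
        int i = 0;
        while (i < items.size() && items.get(i).compareTo(item) < 0) {
            i++;
        }
        items.add(i, item);
    }

    List<K> asList() {
        return Collections.unmodifiableList(items);
    }
}
```

Instantiating `SortedList<Integer>` compiles because Integer satisfies the bound; `new SortedList<Object>()` is rejected at compile time, which is exactly the "compile time error" from the quote.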
I compared the installed Win32_Product entries before and after. Even with all checkboxes unchecked, the installer adds 100+ packages. Unfortunately, most of these are not listed in the Add/Remove Programs control panel.
Windows Management Framework 5.0 Preview contains early versions of Microsoft's OneGet and PowerShellGet.
OneGet can install applications from repositories, using any number of providers. The preview comes with a version of Chocolatey rebuilt in managed code (C#) instead of PowerShell, but it supports the same Chocolatey gallery (a repository of software packages) and protocol.
PowerShellGet can install PowerShell modules (e.g. to make new cmdlets available on the PowerShell command line). The modules can be delivered as scripts or as compiled .NET assemblies. By default PowerShellGet is configured to use a (closed preview) repository, which makes it not very usable yet, but it's interesting to see what direction it is headed in.
As a more practical alternative to PowerShellGet, have a look at PSGet, a PowerShell module for installing PowerShell modules, with its own dedicated repository. Hopefully Microsoft's PowerShellGet will support PSGet as a provider in the future as well; the names are certainly confusing.
Desired State Configuration (DSC) is a new (Windows 8.1) capability for configuring Windows using a declarative syntax extension of PowerShell v4. It can set registry keys, create files and directories, enable Windows features, and more. DSC 'resources' are PowerShell modules, so DSC's capabilities can be extended; see this GitHub repository for examples.
Alternatively, have a look at Boxstarter. It can do both installation and configuration, and you can host your 'starter script' online and launch it with a single command. Boxstarter will take care of any Windows restarts that might be necessary along the way.
Be aware that, although most modern package solutions for Windows use NuGet as a packaging format, NuGet itself (NuGet.exe, the application, together with nuget.org, the website) is meant for managing software-development dependencies, not for installing/updating end-user applications or command-line utilities.
Yes, which I think in simple cases is a benefit. For example if you are using an ORM, the massive queries in the OP are really a pain to deal with. With Sphinx, you perform the search using the search API and get back some ids, then just query those ids using your standard ORM constructs for getting the (small) set of whatever object/rows you are loading. Since you are loading small and fixed amounts of data by the primary keys, the performance shouldn't be an issue.
If you are looking for caching and other more complex features, I'd recommend elasticsearch (and I do highly recommend it). But sphinx is simpler, and I think it is a good alternative for the type of functionality talked about in the blog post. Granted, I really don't think elasticsearch is all that complicated either, and it is also really well documented. But sphinx is just painfully simple for the basic use cases (like anything postgres can do).
Additionally it helps to alleviate issues with stale data in the search index (which is often only updated periodically). If you have a list of ids to query for, there's no harm if one of them is no longer in the db; you just won't show it.
As ever, it's a tradeoff. I used to keep everything required for search results pages in the search index (Solr, at the time). Eventually I decided that the additional db lookup was well worth the extra few milliseconds to make sure I was working with reliable data.
You can actually store a bit more than that in sphinx. You can add attributes to your sphinx documents that can either be used for filtering or just as extra metadata when returning query results. The downside is they get added to the index which has to fit in memory (I think, it's been a while).