Sometimes they are. I really like it. I was selected to speak at a Ruby conference with a blind submission process. I remember feeling pretty good when a frequent speaker at Ruby conferences was whining on Twitter that their talk wasn't selected.
ILM and Pixar are routinely rejected from the SIGGRAPH Electronic Theater, while 30-second shorts from students often make it.
No one cares that your computer graphics made a billion dollars; people care whether they are cool.
SIGGRAPH also had, last I checked, the best gender balance of any tech community. One year our wiki was vandalized with a claim that the Archdiocese of Los Angeles was canceling a reception we were holding on their property because the organization supported the LGBT community and had strong LGBT membership.
So, yeah, Linus freakin' Torvalds can get rejected once in a while. I think his ego would survive. ;)
If it were a blind proposal, you wouldn't know it was from "yet another white male".
Individual speakers draw crowds, yes. But conferences are also a way for people to share their ideas and establish their names in the first place. A conference that optimizes for crowd draw is reinforcing a preexisting hierarchy to the detriment of the community.
A solution is to use the keynote address and invited speakers to draw crowds, while taking blind proposals for most of the actual content.
"What is the goal of X?" is a good question to ask in any social endeavor.
It gives women, along with other frequently disadvantaged groups, such as minorities, the obese, seniors, and physically challenged persons, a more equitable chance to be heard in the first place. It also means that the speakers who are chosen have cleared a minimum bar of professional relevance. This is a good short-term win.
Will it mean those people have an equitable chance to do keynotes today? No more so than under the current systems.
Will it mean they have a chance to build a professional reputation that may result in keynote invitations tomorrow? Yes. This is a good long-term win.
Exceptions actually bubble up to the active SynchronizationContext, and won't necessarily kill your process. In ASP.NET, aborting a request on an error is often the desired behavior.
Still generally good advice, though. Issues arising from accidental async void use are often very tricky to track down, so in my own code I only use it for fire-and-forget, best-effort event handling, and I run those handlers in a SynchronizationContext that captures and logs the exceptions.
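To make that last pattern concrete, here's a minimal sketch of what such a context could look like - the class and method names are mine, not from any real library. It relies on the fact that when an async void method throws, the runtime re-raises the exception by Post-ing it to the SynchronizationContext that was current when the method started:

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    // Sketch: a SynchronizationContext whose Post wraps every callback in a
    // try/catch, so exceptions escaping async void methods get logged instead
    // of tearing down the process.
    sealed class LoggingSynchronizationContext : SynchronizationContext
    {
        public override void Post(SendOrPostCallback d, object state)
        {
            // base.Post queues the callback to the thread pool; the wrapper
            // catches anything it throws, including re-raised async void
            // exceptions.
            base.Post(s =>
            {
                try { d(s); }
                catch (Exception ex)
                {
                    Console.Error.WriteLine($"async void handler failed: {ex}");
                }
            }, state);
        }
    }

    static class Demo
    {
        // Hypothetical fire-and-forget handler; its exception would normally
        // be fatal.
        static async void Handler()
        {
            await Task.Delay(100);
            throw new InvalidOperationException("boom");
        }

        static void Main()
        {
            // Install the context on the thread that kicks off the handlers,
            // before any of them start.
            SynchronizationContext.SetSynchronizationContext(
                new LoggingSynchronizationContext());
            Handler();          // exception gets logged, not rethrown
            Thread.Sleep(1000); // crude wait so the demo can observe it
        }
    }

A GUI or ASP.NET app already has a context in place; the point of rolling your own like this is for plain worker processes, where the default behavior is a crash.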
Back in 2005 when I started Loopt I went all in with Windows Server, .NET, and SQL Server. I already knew the tooling. No regrets. It never held us back - product issues did.
Now I'm using it again, for the same reasons. I already know it, I'm pretty quick with it, it's predictable, and it works. I'm also old school and will probably deploy physical servers when the time comes. Look at StackOverflow - their whole site runs on a handful of properly configured servers running properly written code.
That said, .NET is not sexy. The tooling is expensive (Pro, Ultimate) or crippled (Express). It runs on Windows, which has fallen out of favor.
For startups, technology choices are incidental when they're not the core of the business. If you're making a database, your tech choices matter. A consumer app? You're more likely to die of a bad product or team drama than fallout from rendering HTML with bash scripts via CGI.
So anyway, you don't see more .NET because it's expensive, it's not the best beginner language, and it doesn't run on OS X. Also, the alternatives are much improved: it's hard to argue for .NET over something else, because even where it's better, it's not that much better. And most of the time it doesn't matter.
I grew up with access to an MSDN subscription. I knew of and played with Linux, but in the pre-virtualization days I had to reboot to use it. Understandably, I preferred tools I could use on Windows, my primary OS. Also, while it's no longer the case, the MSDN documentation used to be fantastic. I basically taught myself how to program by reading the MSDN docs and tearing apart the sample code and programs.
For the same reasons I developed on Windows as a kid, younger developers will want something they can run on OS X directly, or for free in a VM.
> The tooling is expensive (Pro, Ultimate) or crippled (Express).
Visual Studio Community 2013 (and 2015) are completely free and nearly fully featured. Gone are the days of Express being a super lightweight VS IDE. I used to buy Pro for side work but now I can get by with Community.
> Visual Studio Community 2013 (and 2015) are completely free and nearly fully featured.
But have licensing terms which limit the contexts in which they can (legally) be used.
> Gone are the days of Express being a super lightweight VS IDE.
Express is still a super lightweight VS IDE. Community may eliminate most of the use cases for Express, since most of the cases where it makes sense to use Express may be places where Community is available, permitted under the licensing restrictions, and the better choice.
SQL Server Web Edition is not the same as Standard or Enterprise. The reason you use SQL Server instead of PostgreSQL is for Database Encryption, Analysis Services, AlwaysOn Availability Groups, etc. Web Edition won't even do mirroring.
The cost is worth it if it saves you multiple employees' worth of development work or administration overhead, and in my experience it does. The pricing is also (surprise!) negotiable. At Loopt, our annual software licensing cost was well under the cost of one developer, and this was before BizSpark.
I've shipped production code developed with Mono on OS X and deployed to both Windows and Linux. I don't think that should be a huge roadblock moving forward. MS itself is trying to foster this (presumably for the benefit of selling more Azure services) with the next version of ASP.NET, which will run on basically everything.
I've never used a curved 4k monitor (or even seen one), but I use a 32" ASUS PQ321Q at its native resolution for coding. I love it and can't go back.
It's right at the edge of what I can discern - any higher PPI and things might look sharper, but I couldn't get any more usable space out of it; I'd have to scale it. It fills my field of view, so I can use the entire screen without having to move my head much, if at all.
While I'd not recommend my specific monitor anymore due to non-competitive pricing, I wholeheartedly recommend DisplayPort, 32", 4k, IGZO or IPS monitors for coding. Get something cheaper at your peril, and definitely don't get a TV.
I think being I/O bound is orthogonal to whether you need kqueue or epoll.
You could be I/O bound on a small number of simultaneous sockets, in which case select() would work fine. Or you could be I/O bound on a lot of them, in which case select() would take too much CPU, because it rescans the whole descriptor set on every call.
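For the flavor of it, here's the select()-style loop sketched with .NET's Socket.Select, the nearest analogue to select() in this thread's stack; the socket list and handler are hypothetical:

    using System;
    using System.Collections.Generic;
    using System.Net.Sockets;

    static class SelectLoop
    {
        // Hypothetical per-socket handler: drain whatever arrived.
        static void HandleReadable(Socket s)
        {
            var buf = new byte[4096];
            int n = s.Receive(buf); // n == 0 means the peer closed; a real
                                    // loop would drop that socket
            Console.WriteLine($"read {n} bytes from {s.RemoteEndPoint}");
        }

        // The select()-style pattern: every pass hands the kernel the FULL
        // watch list and gets back the ready subset, so cost per call grows
        // with the number of sockets watched, not the number ready. Cheap
        // for a handful of connections; wasteful for thousands, which is
        // where epoll/kqueue's register-once, ready-events model wins.
        static void PollLoop(List<Socket> sockets)
        {
            while (true)
            {
                // Select trims its list down to just the ready sockets,
                // so pass a fresh copy each iteration.
                var readable = new List<Socket>(sockets);
                Socket.Select(readable, null, null, 1_000_000); // 1s, in µs
                foreach (var s in readable)
                    HandleReadable(s);
            }
        }
    }

With epoll or kqueue you'd register each descriptor once and the kernel hands you only the ready ones, so the per-wakeup cost stops depending on how many sockets you're watching.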