1. Submit bug report.
2. Wait six months.
3. An MS tech will post a comment: "this will be fixed in the next release".
4. Wait two more years.
5. Bug report will be closed as "won't fix".
1. Reported to Connect whilst it was in preview release status. Closed. Reported again. Closed. FULL test cases provided.
2. We're a gold partner with a £500k annual spend on licenses. Partner support. 19 hours on the phone over 6 months, blame-shifting between the IE and .NET teams, and a daily call pushing to get the case closed without resolution. After 4 months we got a registry patch from ass-end support that we have to ship to 2000 users at 200 different companies, rather than an upstream fix. All it does is tick a checkbox in the security settings.
They broke their own product and won't fix it. Basically, you can't use JS to redirect to a ClickOnce URL.
Now today, IIS just stopped serving shit with no errors, nothing. Can't get anything out of minidumps + WinDbg. It just stops. None of our code is running.
Who am I going to call?
Red Hat, that's who.
I deal with VSTO, WiX, ClickOnce, IIS, COM, MSMQ and the usual bits. Pays well but it made my hair fall out and has taken a couple of years off my life at least.
I long for the gong to ring so I can go home to my MacBook and OpenBSD (where I truly belong).
Yes, deployment is a broken pile of crap. We inevitably went NIH and wrote a massive push-deployment framework for it. Cost a fortune. And I look after our integration environment as well (TeamCity). TC is nice, but the .NET toolchain is horrific. It requires so much maintenance it's unbelievable. Also, everything is stateful, meaning repeatability is a PITA.
I'm the sweary guy too. Usually it's "we should have used Java; we wouldn't have to invent new wheels every two minutes".
The only bit of our infrastructure that is reliable is some memcached boxes on CentOS, which have been online without a reboot for over two years!!!
Give me a C compiler, preferably LLVM, and let me leave all this behind.
1. Submit bug report.
2. Get a response like "we're looking into it"
3. Wait 2 to 5 months
4a. An MS tech (or sometimes even Stephan T. Lavavej himself) posts a comment: "this will be fixed in the next release"
5. Bug report closed as fixed
6. Wait x amount of time, where x mainly depends on the point in the release cycle where step 1 occurred. The later step 1 happens, the smaller x. You've got to time those bugs to get them fixed quickly :P
4b. "won't fix" / "by design", i.e. I got an explanation of what I did wrong, or otherwise why it happens
4c. "deferred", i.e. they posted something that can be worked around and considered it too minor of an issue to fix soon
The history of ET compiler: it started with LINQ in .NET 3.5. Originally it was pretty simple and just handled expressions. In .NET 4.0 we merged the entire codebase with the IronPython/IronRuby compiler trees, expanding the "expression trees" to handle statements. IIRC, it can generate almost any IL construct that you might need, and is usually a lot easier to work with. But we found .NET's runtime compiler (DynamicMethod) was a bit too slow for a lot of use cases. It also wasn't supported on some CLR configurations. To address this we wrote an interpreter and some heuristics to switch from interpreted to compiled. But the actual System.Linq.Expressions.Interpreter must have happened after 4.0, because I don't remember that at all. Instead we just shipped it as a shared library used by IronPython and IronRuby.
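For a concrete sense of the two paths, a minimal sketch. Note the preferInterpretation overload shown here is the one that eventually surfaced publicly (.NET Core / .NET Framework 4.7.2+), not the 4.0-era internal switch:

    using System;
    using System.Linq.Expressions;

    class CompileVsInterpret
    {
        static void Main()
        {
            // Build the tree for: x => x * 2 + 1
            var x = Expression.Parameter(typeof(int), "x");
            var lambda = Expression.Lambda<Func<int, int>>(
                Expression.Add(
                    Expression.Multiply(x, Expression.Constant(2)),
                    Expression.Constant(1)),
                x);

            // JIT-compiled via DynamicMethod: fast to run, slow to prepare.
            Func<int, int> compiled = lambda.Compile();

            // Interpreted: cheap to prepare, slower per call.
            Func<int, int> interpreted = lambda.Compile(preferInterpretation: true);

            Console.WriteLine(compiled(20));    // 41
            Console.WriteLine(interpreted(20)); // 41
        }
    }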
Here's the normal ExpressionQuoter:
And here was the interpreter. I don't see the ExpressionQuoter, so either that's a newer fork of the code that was rolled into System.Core, or maybe a completely new implementation.
IIRC, ExpressionQuoter was mainly there to support the Quote expression, and it was always a bit buggy. The 3.5 version was seriously messed up, our prerelease versions of .NET 4.0 also had various bugs, and there were very few tests. I tried to fix it by having it use the same closure-reuse mechanism as the normal compiler. Funnily enough, that same feature caused issues later on.
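To make the Quote expression concrete, here's a minimal sketch: Quote lets a compiled lambda hand back an inner lambda as a tree rather than a delegate, and the quoter is the piece that rewrites the inner tree's references to outer variables into closure accesses:

    using System;
    using System.Linq.Expressions;

    class QuoteDemo
    {
        static void Main()
        {
            var x = Expression.Parameter(typeof(int), "x"); // outer parameter
            var y = Expression.Parameter(typeof(int), "y"); // inner parameter

            // The inner tree (y => x + y) has a free reference to x.
            var inner = Expression.Lambda<Func<int, int>>(Expression.Add(x, y), y);

            // Quote makes the compiled outer lambda return the inner lambda
            // as an expression tree; at runtime the quoter rewrites the
            // inner tree's use of x into an access to the outer closure.
            var outer = Expression.Lambda<Func<int, Expression<Func<int, int>>>>(
                Expression.Quote(inner), x);

            Expression<Func<int, int>> addFive = outer.Compile()(5);
            Console.WriteLine(addFive.Compile()(3)); // 8
        }
    }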
One might wonder: why use StrongBox<T>, essentially boxing every parameter, rather than just generating a type with only the right fields? The reason was that generating a type in the .NET 4.0 timeframe was absurdly slow. Like, a few hundred per second slow. I think this has largely been fixed now, but it was a huge performance problem for the Iron* language runtimes back in the day.
One thing I'll point out, though: it's a field on StrongBox<T> for correctness, not performance. The field needs to be capable of being passed by reference to get consistently correct semantics. That's simply not possible on .NET Native using the interpreter, so it ends up with copy-in/copy-out semantics (which could break people, but it's pretty unlikely). Also, StrongBox<T> pre-existed the DLR expression compiler; it was originally added with LINQ's expression trees in 3.5, so we were just re-using what they had already done. IronPython actually had Reference<T> early on, which grew into the DLR's version and then finally converged back on StrongBox<T>.
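A minimal sketch of why a public field (rather than a property) matters here; the "environment as an array of boxes" framing is illustrative:

    using System;
    using System.Runtime.CompilerServices;

    class StrongBoxDemo
    {
        // A ref parameter needs a field, local, or array slot behind it;
        // a property would force copy-in/copy-out semantics instead.
        static void Increment(ref int slot) => slot++;

        static void Main()
        {
            // Conceptually, the closure is one StrongBox per captured
            // variable instead of a generated class with typed fields.
            var box = new StrongBox<int>(41);
            Increment(ref box.Value); // legal precisely because Value is a field
            Console.WriteLine(box.Value); // 42
        }
    }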
AWESOME TIP, stoked to try it out, thanks!
Awesome read btw!
"But GCC's open source!" you say.
Well, GCC is next-to-impossible to compile for a target other than the host, especially if the target isn't x86 or ARM; and GCC maintainers insist on precise test cases to vet a bug, even if the issue is immediately obvious from reading the source code and the bug only occurs in certain very complex situations.
(/me looks forward to the day Clang/LLVM becomes the default on Linux…)
You're exaggerating. I had no problems compiling gcc 3.something targeting MIPS-I on an x86 Linux host.
Maybe with 3.x and MIPS it "just worked". My experience is with 4.x and Tilera. Another engineer and I dumped a week into that sinkhole before giving up.
Clang/LLVM on the other hand… ./configure && make && make install. No other wacky dependencies or build steps. And it generates better code in many cases (particularly when dealing with structures).
In some distributions, say Debian, there are readily available cross-compiler packages (like g++-4.4-arm-linux-gnueabi). I haven't rebuilt gcc, but the approach must be to pull the package's source, add a patch, and run the build; then debian/rules and a myriad of helper scripts will take care of the hassle of doing everything. It can't really be any other way, since package builds are automated.
And there are other tools, like buildcross, that also automate the job.
I'm not. The process is well described: it tells you exactly which prerequisites you need, in which order to build them, and how to configure the directory structure for building gcc itself.
It worked for me the very first time I tried it.
Though I needed only a freestanding implementation, so I didn't bother compiling libc. I believe cross-compiling libc can be painful.
Not exactly sure what you were expecting. GCC on Windows is already wonky enough (MinGW? MinGW-w64?); I have no idea how one can voluntarily try compiling GCC on Windows, let alone a cross-compiler.
I haven't really kept up with the latest stuff, but I've seen mentions of crosstool-ng, which may be a newer take on it:
So, no, it's not next-to-impossible. It's a little complicated, and that's why these projects exist, but my memory of crosstool was that it made it trivially easy: run the script, let it compile for a bit, and voila, you now have a brand-new cross-compiler toolchain.
Most commercial software dev stuff is like regularly salting and sandpapering your genitals. And I'm not talking about fine-grit paper either. Even relatively popular fields like .NET are painful on a daily basis.
I really like to see inside the black boxes when they inevitably go wrong. Hell, getting a backtrace out of a dump file on Windows from commercial software is an art in itself, for example.
I had the fortune of inheriting a Sun SPARCstation 20 in 1999 that was being thrown out. That and NetBSD literally drove a spike through my mind and entirely destroyed my conception of proprietary closed-source software. I really wouldn't touch it now, but the money sorting out all the shitty little problems is pretty good, even if it tries to stab you in the face 5 times a day. There's lots of work as well, unlike in my preferred field of Unix and C.
So I agree with you, but I'm a slut for cash, and unreliable black boxes of software are good money, even if it does feel like I'm the IT equivalent of an STD-ridden stripper.
Assuming they haven't been obfuscated, this is an extremely useful tool. I've used it to track down a number of issues within Visual Studio itself and within some of the non-open sourced components of Roslyn.
I think the latest build can also provide PDBs to Visual Studio on the fly by acting as a source server (insane).
After years of Reflector being a free tool, Redgate added a time-bomb to new downloads and released a statement 
By the time we've got the POs signed, if it were open source, I'd have already solved the problem at hand.
To get around this, defensive purchasing is required: at least one VS Ultimate license, at least one MSDN sub, ANTS Profiler, IDA Pro, an Azure VM allocation ready to roll for test machines, and two guys who actually know something about all this.
This is the safety-belt cost: about $30,000, plus $120k a person.
I would personally be very careful with the recompiling dance, at least in other languages, as alignments and such might come out wrong. A patch feels much less dangerous if this is something that is to be deployed.
(I always also try to rig such builds so that the build bombs if the dependencies change. That way it doesn't survive version changes without forcing someone to take a long hard look at it.)
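One way to rig such a bomb, as a rough sketch; the expected version and the guard itself are hypothetical, the point is just to fail loudly when the patched dependency drifts:

    using System;
    using System.Linq.Expressions;

    static class PatchGuard
    {
        // Hypothetical: the exact version the binary patch was verified against.
        static readonly Version Expected = new Version(4, 0, 0, 0);

        // Call this from a unit test or at startup.
        public static void AssertPatchedDependencyUnchanged()
        {
            var actual = typeof(Expression).Assembly.GetName().Version;
            if (actual != Expected)
                throw new InvalidOperationException(
                    "Patched dependency is now " + actual + "; re-review the patch.");
        }
    }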
Signing is almost useless in .Net. And it's certainly not in place for security purposes.
This code is getting to look butt-ugly. It's not a good thing if it continues like this. We already have C++; we don't need another one.
* Haskell's type classes are awesome, and neither F# nor C# has them. But you can realize them with a really simple pattern in C# (sketched after this list), whereas in F# you have to resort to reflection or generics hacks (though inline functions usually make such hacks nicer in F#).
* Sum types are awesome. F# realizes them in two ways: with discriminated unions and pattern matching (mainly), or with abstract classes and inheritance. Both are powerful approaches, but the latter is particularly useful when you need unbounded sum types (see the second sketch below). Unfortunately, inheritance is somewhat crippled in F# because you can't define protected members (though you can override them). I find myself needing this when I'm designing a solution rather than merely implementing one. For instance, I find that designing a language/compiler is much easier in C# than in idiomatic F#.
* This is kind of a summary of the first two points: idiomatic C# is a better dynamic language, and I'm not referring to the `dynamic` keyword, though it certainly helps. I've created some really nice and secure APIs by making use of user-defined conversions (see the third sketch below). System.Xml.Linq is a good example of what I'm talking about.
* Finally, idiomatic C# is a nicer classical OO language than idiomatic F#. This is kind of obvious, since F# is a "functional language" with OO features whereas C# is an OO language with functional features. But when I say classical OO, I mean the Smalltalk-style OO that came before C++. Alan Kay defined that kind of OOP as (1) encapsulation, (2) message passing, and (3) extreme late binding. Neither C# nor F# quite has (2) and (3), but C# gets me the closest without coloring too far outside of the lines. Of course, this presupposes that OO is better than functional. Having done both for a long time, I'm convinced that it is the case (though I do think learning functional programming is the fastest path to getting good at OO). Lamentably, I can't yet defend this point too vigorously because I've only just discovered the joy of Smalltalk. But here is a preview of sorts: imagine being able to update a database schema and having your statically-typed code continue to run without needing to recompile.
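A rough cut of the type-class pattern from the first point; IMonoid and the struct-instance trick are illustrative, not a library API:

    using System;

    // The type class as an interface: a dictionary of operations over T.
    interface IMonoid<T>
    {
        T Empty { get; }
        T Combine(T a, T b);
    }

    // An instance is just a type implementing the dictionary.
    struct IntSum : IMonoid<int>
    {
        public int Empty => 0;
        public int Combine(int a, int b) => a + b;
    }

    static class Monoid
    {
        // The instance travels as a generic parameter; the struct
        // constraint plus default(M) keeps it allocation-free.
        public static T Concat<M, T>(params T[] items) where M : struct, IMonoid<T>
        {
            var m = default(M);
            var acc = m.Empty;
            foreach (var item in items) acc = m.Combine(acc, item);
            return acc;
        }
    }

    class TypeClassDemo
    {
        static void Main() =>
            Console.WriteLine(Monoid.Concat<IntSum, int>(1, 2, 3)); // 6
    }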
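And a sketch of the second point: a sum type as an abstract class, where, unlike an F# union, the set of cases stays open for extension:

    using System;

    abstract class Shape
    {
        public abstract double Area();
    }

    // Each case is a subclass; third parties can add new cases,
    // which is what makes this an *unbounded* sum.
    sealed class Circle : Shape
    {
        public double Radius;
        public override double Area() => Math.PI * Radius * Radius;
    }

    sealed class Rect : Shape
    {
        public double W, H;
        public override double Area() => W * H;
    }

    class SumTypeDemo
    {
        static void Main()
        {
            Shape s = new Rect { W = 3, H = 4 };
            Console.WriteLine(s.Area()); // 12
        }
    }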
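Finally, a sketch of the user-defined-conversion point, modeled loosely on System.Xml.Linq's XName-from-string conversion; the Name type here is made up for illustration:

    using System;

    // An implicit conversion lets callers write plain strings while the
    // API still traffics in a rich, validated type (cf. XName).
    sealed class Name
    {
        public readonly string Value;
        Name(string value) { Value = value; }

        public static implicit operator Name(string s)
        {
            if (string.IsNullOrEmpty(s)) throw new ArgumentException("empty name");
            return new Name(s);
        }
    }

    class ConversionDemo
    {
        static void Element(Name name) => Console.WriteLine(name.Value);

        static void Main() => Element("customer"); // string converts implicitly
    }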
Shoot me an email. Would love to chat some time. I've been doing OO for 20+ years and have become a big FP fan in the last 5.
I might go OOP in a large commercial setting, but for small-team, startup-type work, pulling in all of that wiring and structure just to deliver an MVP looks crazy to me. Even for large greenfield projects, it makes more sense to me to start in the REPL, code the minimum amount necessary, and then "grow" your class structure as you start needing all that OOP goodness.
I was also about to tell him to skip all the ildasm stuff and just use the online reference source (referencesource.microsoft.com) ... but that assembly isn't in there. So I guess ildasm was the best option.
I should be able to change the software on my own machine, stored on and running on hardware I own, in whatever way I desire and have it do what I want. (And in practice I have - opening a binary in a hex editor and changing a few bytes is not at all beyond me.)
No, it should not. A DLL is basically a reusable software library.
What you are suggesting makes no sense. I will give you a B+ for confidence.
Second, if you're able to modify one DLL, you could overwrite whatever DLL was tasked with computing the checksum.
As an aside, the OP says he works with Bart De Smet, and they're messing with expression trees. I bet this is work related to Cortana, and that the expression trees are used for compiling standing Rx (Reactive Extensions) queries, which Cortana uses extensively on the phone and in the cloud.
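For readers who haven't met it: IQbservable (in the System.Reactive package) is Rx's expression-tree query surface, which is what would make shipping standing queries to a service plausible. A minimal sketch, purely to show where the tree lives:

    using System;
    using System.Reactive.Linq;

    class StandingQuerySketch
    {
        static void Main()
        {
            IObservable<int> source = Observable.Range(0, 10);

            // IQbservable is to IObservable what IQueryable is to
            // IEnumerable: the Where below is captured as an expression
            // tree, so a service could serialize and compile it elsewhere.
            IQbservable<int> standing = source.AsQbservable()
                                              .Where(n => n % 2 == 0);

            Console.WriteLine(standing.Expression); // the tree, not the results
        }
    }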
I wish I had known about it about a year ago, for a project I did.
What's maybe surprising is that the C# compiler-generated code 'adopts' the types in their hacked expressions library. But wherever the C# compiler depends on particular types that need to exist for it to do its thing, it has generally relied on finding those types by -name-, not by -name and assembly strong name-.
For example, the C# 3 compiler understands the calling and declaration syntax of extension methods, but to compile an extension method it requires an attribute type, System.Runtime.CompilerServices.ExtensionAttribute, which was only introduced into .NET in v3.5. But if you make an attribute class with that name available to the C# compiler, you can compile code using C# 3 extension method syntax against the .NET 2.0 framework libraries.
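A sketch of that trick; the extension method itself is just an example:

    // Compiled with the C# 3 compiler against the .NET 2.0 libraries.
    // The compiler binds this attribute purely by name.
    namespace System.Runtime.CompilerServices
    {
        public sealed class ExtensionAttribute : Attribute { }
    }

    static class StringExtensions
    {
        // Extension syntax now works despite targeting .NET 2.0.
        public static bool IsBlank(this string s)
        {
            return s == null || s.Trim().Length == 0;
        }
    }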
So if you can modify the files in the GAC, you're already compromised at that point.
In most other languages, they'd have forked the source, fixed it, recompiled it for their own uses and submitted a pull request or patch.
I'm a big .NET fan, but the fact that we have to jump through such hoops to find and fix a bug, and then still have near certainty that we're going to have to reapply the patch for many updates to come, well, that's just a bit sad. It feels rather last-century, to be honest. Microsoft could save a lot of double work if they'd just open source .NET and attach a decent process to it. They can still be the Linus. Just consider my patches.
I should disclose that I actually work for Microsoft, and still chose the hard way. Fuck the police!
That, and they're too poor to buy IDA Pro.
Chose? Unless you work on the team that made that DLL, it's likely you'd be jumping through hoops even internally.
You don't actually jump through hoops. All you get to do is submit the bug, which gets logged in the bucket with the thousands of other bugs.
Then the PMs gather for bug triage for the next release, where bugs have to compete with the next shiny project features. Nobody wants to fix bugs that only one developer hits. It's just not worth it. This bug has existed for years, and it's likely you could modify your code to generate different bytecode.
So the bug likely won't be fixed for a while.
I love .NET. I am just describing the reality of the process of bug fixing. Also, if you fix it, who knows what else will regress. That also imposes a bias towards not fixing these kinds of bugs.
This is an interesting cognitive hazard: it's relatively easy to measure the cost of paying a developer to fix a bug, and possible to estimate the cost of getting it wrong, but there's no way to measure the cost to everyone who encountered it but never did the work to reduce it to a test case and file a bug report.
Worse, it's very hard to measure the cost of people who choose to use something else. Repeat that cycle a few times and a fairly high percentage of the people you will hear from are in the “never change anything less than critical” camp.
The cognitive hazard is a good observation.
But I think the most important one is the imbalance of interest, which is where open source shines: MS gets zero value from fixing this bug and a lot of value from adding new features, but for this one developer the bug could have been the most important issue on the road to his company's "value" (their own product's ship date).
That's pretty much how it works at Google. The code owners have to review and approve your change (and can reject it), but it's common (even encouraged) to commit a bugfix or new feature to someone else's project.
The only difference is that you can see the source and test on your own machine first.
MS employees don't need commit access to any codebase they like, but they should be able to obtain the actual source, so that "given enough eyeballs, all bugs are shallow".
PS: While the linked article is very relevant here, I'm getting tired of people always linking to Joel's blog. He's not the definitive software spokesperson; he has strong opinions that were shaped by years at Microsoft, and a lot of his views are boxed in by that experience.
It wasn't that hard. It took us about 30 minutes (thanks largely to the fact that Bart is a genius). Besides, that's not actually the source of this DLL. We are on .NET Native, and for that matter, a bleeding-edge build that's only available here at MS.
Note: I'm also an MSFT employee; I don't work on .NET, though.
I work at MS, and we're using a slightly exotic version of this DLL. Please pardon the differences. :)
Yeah, I've never tried, but I have a copy of the actual Windows source tree, so I'm guessing the answer is yes.
> use tfs, some svn, some git, etc
You can go onto MS's job board and find Rails, iOS, and Node positions. :-D
2) The tools they used to find & fix the bug are all based on fully-specified, documented, and supported mechanisms: ildasm and ilasm are stock .NET tools, and ILSpy uses Mono's IL decompiler/compiler library.
3) You can compile an app against your own builds of any of the standard libraries if you want (including mscorlib, with caveats) and run it, so a fix like this can be deployed and maintained as much as you like, even if it's not Great Engineering.
4) This hand-patched DLL wouldn't corrupt any other app on your system, and you can ship and version it yourself, so it wouldn't break as a result of a system-wide .NET hotfix or anything like that. .NET's fully-specified ABI also means that it will work across all supported platforms.
5) Mono is open source, so you could always ship against mono and mono's mscorlib (both open source, naturally) if you absolutely can't wait for a .NET bugfix to ship AND you're unwilling to patch the dll yourself.
People love to bitch about .NET, but it actually provides a really good ecosystem here. Very little of it is truly a black box: the spec is extremely readable and precise about the things it covers, IL disassembly is extremely readable (doubly so when fed through a decompiler like ILSpy or dotPeek), debugging is straightforward even without symbols, and recompilation/patching is trivial because everything is well-specified and there's a robust open source library for it. (Even if there weren't a library, you could just use the standard ildasm/ilasm pair.)
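As a rough illustration of that last point, here's the shape of a patch done with Mono.Cecil, the library ILSpy builds on. The type, method, and opcode targeted are purely illustrative, not the actual fix from the article:

    using System.Linq;
    using Mono.Cecil;
    using Mono.Cecil.Cil;

    class Patcher
    {
        static void Main()
        {
            var assembly = AssemblyDefinition.ReadAssembly("Some.Library.dll");
            var type = assembly.MainModule.GetType("Some.Library.BuggyType");
            var method = type.Methods.Single(m => m.Name == "BuggyMethod");

            // e.g. neutralize a bad conditional branch; Pop keeps the
            // evaluation stack balanced where Brtrue.S consumed a value.
            var il = method.Body.GetILProcessor();
            var bad = method.Body.Instructions.First(i => i.OpCode == OpCodes.Brtrue_S);
            il.Replace(bad, il.Create(OpCodes.Pop));

            // A strong-named assembly must be re-signed (or have
            // verification skipped) after an edit like this.
            assembly.Write("Some.Library.patched.dll");
        }
    }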
Oh, and you can compile native no-kidding C/C++ to .NET targeting the same instruction set and get all the benefits of the above. If you want to.
My point is that MS should open source .NET, with a good governance process (super dictatorial? No problem! Just be open so we can chip in). It's good for them and for us .NET devs. There's nothing to lose. Like you write, there are a lot of great things about .NET. But being closed source is a disadvantage.
Ah yes, that's why you pay $$$ for software :).
Normally, if you find a bug in the core, you might try to monkeypatch it, but many times that won't work, so you have to suck it up and just work around it.
(Note that these were forks because they changed aspects of Ruby that were not in line with Ruby's general purposeness, so they were not allowed to merge into mainline Ruby)
The way they did this, only the program that needs this fix is actually using their monkeypatched Expressions library; other code on the same system - other libraries in the same application, even - will continue to use the standard .NET runtime classes. That's a pretty powerful capability, and not one that is necessarily made easier by an open language runtime.
Monkey patching refers to dynamic modification of code/behaviour at runtime.
This is simply editing a file on disk, which people cracking software have been doing for about 30 years (using exactly the same method).
We actually chose the most expedient route. The reason is that the source you link to is not actually the source of the DLL we were working with. That particular file is very similar, but the rest of the repo is very much different from the version we were working on. (I work at MS, and our project is kind of exotic in this case.)
I went to the weekly tech lead meeting, and when it was my turn to talk, I said: "I think we should increase our internal NuGet package release interval to every 18 hours. F--k the Police!"
People just looked at me weird.