I thought about something similar for binary parsers. Here, too, there are often many different ways to express a certain structure. Most libraries (such as Exiv2 for PNG/JPEG/EPS/etc. metadata) perform the following procedure: first, they parse out all the information they are interested in. Then the application modifies that structure. Then it has to be embedded back into the binary format (here, the image). The latter step is a nasty kind of merging: they go over the file, try to find places where they can "safely" embed or replace parts, then serialize their data and try to put it there.
However, if the binary parser were lossless, all changes to the parsed structure would preserve the "trivia", so serializing would be straightforward. Any potential issues could then be handled at the application level, in the parsed structure, rather than by guesswork and heuristics during the merge phase.
The only problem could be large BLOBs, but those could easily be represented as offset & length in the original file, rather than as the actual binary data in memory. (This assumes the file will not change in the meantime, but in that case the "merging" approach is very dangerous, too.)
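A minimal sketch of the idea, assuming a made-up PNG-like chunk format (the names `Chunk` and `BlobRef` are purely illustrative): headers are preserved verbatim, large payloads are kept as (offset, length) references into the original file, and serialization is then just concatenation, with no merge heuristics.

```python
from dataclasses import dataclass

@dataclass
class BlobRef:
    offset: int   # byte offset into the original file
    length: int   # number of bytes in the payload

@dataclass
class Chunk:
    header: bytes     # preserved verbatim, "trivia" included
    payload: BlobRef  # resolved lazily against the source file

def serialize(chunks: list[Chunk], source: bytes) -> bytes:
    # Nothing was thrown away during parsing, so writing the file back
    # out is plain concatenation in original order.
    out = bytearray()
    for c in chunks:
        out += c.header
        out += source[c.payload.offset : c.payload.offset + c.payload.length]
    return bytes(out)

# Round-trip check on a toy two-chunk file.
source = b"HDR1" + b"\x01\x02\x03" + b"HDR2" + b"\xff\xfe"
chunks = [Chunk(b"HDR1", BlobRef(4, 3)), Chunk(b"HDR2", BlobRef(11, 2))]
assert serialize(chunks, source) == source
```

The blob bytes never live in memory as copies; they are read back from the source file only at write time, which is exactly why the approach breaks if the file changes underneath you.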
I think this is one of the most common mistakes compiler writers make -- they don't spend enough time thinking about incorrect code. Let me be clear: most code is wrong. If your code is right, you compile it once. If your code is wrong, you compile it many times until you get it right. More importantly, if you run your IDE off your compiler, almost every character you type is a wrong program.
Having a full-fidelity syntax tree is essential for having great experiences with wrong code. In addition, it easily solves the problem of having to serialize your trees -- the source text is the serialization.
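To make "the source text is the serialization" concrete, here is a toy sketch (not Roslyn's actual API): each token carries its surrounding whitespace and comments as leading/trailing trivia, so concatenating every token's full text reproduces the file byte-for-byte.

```python
import re
from dataclasses import dataclass

@dataclass
class Token:
    leading: str   # whitespace/comments before the token
    text: str      # the token itself
    trailing: str  # whitespace/comments after the last token

    def full_text(self) -> str:
        return self.leading + self.text + self.trailing

def tokenize(source: str) -> list[Token]:
    # Toy lexer: word characters form tokens; everything else
    # (spaces, punctuation, comments) becomes trivia.
    tokens, pos = [], 0
    for m in re.finditer(r"\w+", source):
        tokens.append(Token(source[pos:m.start()], m.group(), ""))
        pos = m.end()
    if tokens:
        tokens[-1].trailing = source[pos:]
    return tokens

src = "  let x = 1  // a comment\n"
assert "".join(t.full_text() for t in tokenize(src)) == src  # lossless round-trip
```

Because the round-trip is exact, there is no separate "pretty-printer" to keep in sync with the parser, and wrong or half-typed code survives the trip just as well as correct code.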
This feeds into API considerations as well. A number of people have repeatedly talked about how cool it would be to have a lot of other tools understand your AST (like if Git could support checking in ASTs instead of source). This is the wrong way of looking at it. If you're dealing with a raw AST you often have to have domain-specific knowledge of the language itself. Instead, what you want is to take the thing with the most domain specific knowledge, the compiler, and allow it to answer questions, i.e. have an API. By having round-trippable source, all source is essentially given a transparent API that can be used just as if you were interacting with the source code itself.
Anyway, this is going off the rails, but it's one of the numerous things I'd point to for many production compilers and say, "you are the past, this is the future."
Instead, if you could somehow check in the refactoring action, all those problems would go away. You could rebase the code by undoing and redoing the rename, taking into account new usages of the renamed item, etc.
First, I think embedding language knowledge into the VCS is fraught with peril. For one, does that mean that you need to rev your VCS version every time your language changes? What about when your language revs its AST, but not the language itself? Is your VCS version now no longer backwards compatible with old versions?
Second, I think there's a significant amount of overhead and new technology here. Most DVCSs currently use hash-based filesystems for storing history. If you replace simple data diffs with semantic transformations, then you have to find some portable way of encoding that. If you don't want the implementation to be language-specific, then you have to find some language-agnostic encoding system that can also recognize that the textual diff and the alpha-rename are identical commits.
IMHO, I would rather have metadata on commits. That way you can always fall back to plain text and all the old tools (like an ancient vi) remain usable, but more advanced language-specific tooling could recognize these things and provide a simpler view to the user.
Because Wasabi doesn't implement syntax trivia, there's no way to losslessly round-trip Wasabi code. Whitespace is not preserved, and comments are always assumed to be on a line of their own.
Did your team invent the idea, or is there prior research I could read?
Microsoft's first product was a compiler for Altair BASIC, written by Gates himself
Not meant to be snippy, but ...
If you publish this under patent grant, why did you patent it in the first place?
Apart from that, I must say that I'm very glad to live in a jurisdiction where these types of patents are void. I could have written down this idea long before 2011, but I never thought it would be worth patenting. (Don't get me wrong, the concept is really great, but to me it doesn't make any sense to prevent others from implementing the same idea. It's not like Microsoft had to invest thousands of dollars of research into developing this idea and aims to refinance that development effort via patent licenses.)
It could be part of a defensive patent strategy. If they successfully received a patent for that, then it makes it that much harder for a malicious third party to troll them. And if you ask me, turning around and publishing the patent under the APL demonstrates good faith to the /libre/ software community (it's a GPLv3-compatible license, no less), so that's a strategic win, too.
1. They make use of the patent protection clauses via the APL (and also the GPLv3)
2. They use it to protect implementations from their own patent
What's still unclear to me: Does it affect other implementations? Assume that Clang/LLVM wants to implement "syntax trivia". Will they have to use exactly their implementation, and thus have to switch from their BSD-style license to APL or (A)GPL?