Our code often involves quantities that carry intrinsic error (readings from accelerometers or photo-sensors, or even plain data like averages). The probabilistic framework lets us compute with these kinds of quantities, with their uncertainty built in.
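For a concrete (if simplistic) picture of what that buys you, here is a small Python/NumPy sketch, not tied to any particular probabilistic language and with made-up sensor numbers: the uncertain reading is represented by samples, and ordinary arithmetic on it automatically carries the uncertainty along.

```python
# Minimal sketch (plain Python + NumPy, not the paper's language) of treating
# an uncertain sensor reading as a distribution and pushing it through
# ordinary arithmetic by Monte Carlo sampling. Numbers are made up.
import numpy as np

rng = np.random.default_rng(0)

# Accelerometer reading: nominally 9.7 m/s^2 with Gaussian noise.
accel = rng.normal(loc=9.7, scale=0.3, size=100_000)

# Derived quantity: speed after 2 seconds, computed sample-by-sample,
# so the uncertainty in the reading propagates to the result.
speed = accel * 2.0

print(f"speed ~ {speed.mean():.2f} +/- {speed.std():.2f} m/s")
```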
Every programmer should read this paper. It's one of those nicely written papers that pretty much anyone can understand.
Nothing wrong with the paper, but its relationship to the OP seems tenuous. The paper you link isn't doing inference; it looks like just sampling. The OP is doing inference using some non-obvious program transformations and MCMC.
They are related in that they are both probabilistic programming languages, but the similarities stop there.
First, infer.net is a DSL embedded in C# as a library, while this is a new language. More substantially, infer.net is designed around variational inference, which limits the class of graphical models it can support, while R2 appears to be based on sampling and hence has no such limit on the models it can represent. A closer comparison would be Church (http://projects.csail.mit.edu/church/wiki/Church), a probabilistic language based on Scheme.
Of course, the downside of sampling is that inference is orders of magnitude slower, and there's no principled way to know when it has converged; that's why many people prefer variational inference.
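To make that trade-off concrete, here is a toy rejection-sampling sketch in Python (my own illustration, not code from R2 or infer.net, with made-up coin-flip data): it handles any model you can simulate forward, but it throws away most of its draws and gives no built-in stopping criterion.

```python
# Toy sketch of sampling-based inference (not R2/infer.net code): rejection
# sampling for a coin's bias from observed flips. General, but slow, and
# nothing tells you when you've drawn "enough" samples.
import random

heads, total = 6, 8   # made-up data: 6 heads in 8 flips

accepted = []
while len(accepted) < 5_000:
    p = random.random()                                   # prior: bias ~ Uniform(0, 1)
    sim = sum(random.random() < p for _ in range(total))  # simulate 8 flips with bias p
    if sim == heads:                                      # keep p only if the simulation matches the data
        accepted.append(p)

print("posterior mean bias ~", sum(accepted) / len(accepted))
```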