The author seems to be totally clueless. Besides apparently crediting this guy with inventing subset and union notation ("U, or union, makes a new relation or set out of the relations or sets that are passed to it"), he/she completely misunderstands the halting problem.
"Computers occasionally hang on one line of code and fail to move on to the next, and no one can reliably predict when that will happen."
Keep in mind that Wired writes only slightly above normal tech journalist standards, which is to say, not very high. Combine a journalist who either has never been exposed to the halting problem or only vaguely remembers it from an algorithms course ten years ago, with an audience that, in general, hates and fears math but thinks gadgets like computers are magical and gee-whiz cool, and you get explanations like this. That is, not a map to reality.
It doesn't appear to be a general-case fix (which makes sense, because that would be impossible), but a method for proving, in finite time, that programs in some restricted subset halt, while saying nothing at all about programs outside that subset. We already have a number of those.
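For the curious, here is a minimal sketch of that "certify a subset, stay silent on the rest" shape in Python. The checker and the example functions are my own invention for illustration; real termination provers handle a far larger subset than "no loops, no calls":

    # Certify termination only for a trivial subset of Python functions
    # (straight-line code with no loops and no calls); answer "unknown"
    # for everything else instead of guessing.
    import ast
    import inspect

    def certify_halts(func):
        """Return 'halts' if termination is provable cheaply, else 'unknown'."""
        tree = ast.parse(inspect.getsource(func))
        for node in ast.walk(tree):
            # Loops and calls could hide non-termination, so refuse to decide.
            if isinstance(node, (ast.While, ast.For, ast.Call)):
                return "unknown"
        return "halts"  # straight-line code always terminates

    def straight_line(x):
        y = x * 2 + 1
        return y - 3

    def maybe_loops(x):
        while x != 1:
            x = x // 2 if x % 2 == 0 else 3 * x + 1
        return x

    print(certify_halts(straight_line))  # halts
    print(certify_halts(maybe_loops))    # unknown -- no claim either way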
More specifically, this appears to allow for more general model checking. Model checkers work on explicit transition systems (states and the transitions between them), and most programming languages do not express their semantics that way (some declarative languages are exceptions), so programs first have to be abstracted into such a system.
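To make "working on transitions" concrete, here is a toy explicit-state model check in Python, assuming the program has already been abstracted into a finite set of states; the state names and the safety property are made up:

    # Check a safety property ("no bad state is reachable") by exhaustively
    # exploring a finite transition system with a breadth-first search.
    from collections import deque

    transitions = {
        "idle":     ["running"],
        "running":  ["idle", "waiting"],
        "waiting":  ["running"],      # note: nothing transitions into "deadlock"
        "deadlock": [],
    }
    bad_states = {"deadlock"}

    def safety_holds(initial, transitions, bad_states):
        """Return True iff no bad state is reachable from `initial`."""
        seen, queue = {initial}, deque([initial])
        while queue:
            state = queue.popleft()
            if state in bad_states:
                return False
            for nxt in transitions.get(state, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return True

    print(safety_holds("idle", transitions, bad_states))  # True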
They are probably referring to one of those "probabilistic" algorithms: one that returns the right answer only with some (possibly high) probability.
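The Miller-Rabin primality test is the textbook example of that kind of algorithm (my example, not the article's): "composite" answers are always right, "probably prime" answers are wrong with probability at most 4**-rounds, and you just add rounds until the error is negligible:

    import random

    def is_probably_prime(n, rounds=20):
        """Return False if n is definitely composite, True if probably prime."""
        if n < 2:
            return False
        if n in (2, 3):
            return True
        if n % 2 == 0:
            return False
        # Write n - 1 as d * 2**r with d odd.
        d, r = n - 1, 0
        while d % 2 == 0:
            d //= 2
            r += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False       # witness found: definitely composite
        return True                # probably prime (error <= 4**-rounds)

    print(is_probably_prime(2**61 - 1))  # True (a Mersenne prime)
    print(is_probably_prime(2**61 + 1))  # False (divisible by 3)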
These kinds of algorithms can be used in various static analysis techniques to, say, approximate the value of a variable at a given point in the program (computing it exactly is undecidable in general; the halting problem reduces to it).
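Here is a rough sketch of that kind of approximation: a sign analysis in the style of abstract interpretation. Each variable is tracked only as "+", "-", "0", or "?" (unknown); the tiny instruction format and helper names are invented for illustration:

    def sign_of(n):
        return "+" if n > 0 else "-" if n < 0 else "0"

    def abstract_mul(a, b):
        if a == "0" or b == "0":
            return "0"
        if "?" in (a, b):
            return "?"
        return "+" if a == b else "-"

    def abstract_add(a, b):
        if a == b:
            return a
        if "0" in (a, b):
            return a if b == "0" else b
        return "?"                 # "+" plus "-" could be anything

    # A straight-line "program" as (target, op, arg1, arg2) tuples.
    program = [
        ("x", "const", 3, None),
        ("y", "const", -5, None),
        ("z", "mul", "x", "y"),    # (+) * (-) = (-)
        ("w", "add", "z", "x"),    # (-) + (+) = unknown
    ]

    env = {}
    for target, op, a, b in program:
        if op == "const":
            env[target] = sign_of(a)
        elif op == "mul":
            env[target] = abstract_mul(env[a], env[b])
        elif op == "add":
            env[target] = abstract_add(env[a], env[b])

    print(env)  # {'x': '+', 'y': '-', 'z': '-', 'w': '?'}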
Wikipedia:
"the halting problem is a decision problem which can be stated as follows: given a description of a program, decide whether the program finishes running or will run forever"
"Computers occasionally hang on one line of code and fail to move on to the next, and no one can reliably predict when that will happen."
Seriously, Wired?