> We adopt a test case fuzzing approach for generating equivalences, with 18,000 randomly generated inputs and combinatorially generated corner-case inputs, inspired by test case generation in [citation]. ... helped Revec to significantly reduce erroneous equivalences, and is consistent with the methodology in [citation].
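The quoted methodology (random inputs plus combinatorially generated corner-case inputs) can be sketched roughly as below. This is an illustrative reconstruction, not Revec's actual code: the function names, the corner-value set, and the input ranges are all assumptions.

```python
import itertools
import random

# Typical integer edge cases; the actual corner-case set used by Revec
# is not specified here, so this list is an assumption.
CORNER_VALUES = [0, 1, -1, 2**31 - 1, -2**31, 2**32 - 1]

def equivalent(f, g, arity, n_random=18_000, seed=0):
    """Hypothetical equivalence fuzzer: returns False at the first input
    where f and g disagree, else True after all inputs pass."""
    rng = random.Random(seed)
    # Combinatorially generated corner cases: every tuple of corner values.
    for args in itertools.product(CORNER_VALUES, repeat=arity):
        if f(*args) != g(*args):
            return False
    # Randomly generated inputs (18,000 per the quoted methodology).
    for _ in range(n_random):
        args = tuple(rng.randrange(-2**31, 2**32) for _ in range(arity))
        if f(*args) != g(*args):
            return False
    return True
```

Note that this only compares returned values over sampled inputs; it says nothing about flags, modes, or other side effects, which is exactly the concern raised in the reply below the quote.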
It sounds like an interesting approach. But how hard would it have been to have an expert encode this information instead of deriving it empirically? And what about instruction side effects, processor/coprocessor flags and modes, etc.? Does the equivalence check consider those?
The llvm-revec fork has the pass, but does it also have the code to reproduce the automated enumeration (and testing)?
The problem is that this is really fucking hard to compile, and that’s what Intel screwed up. Intel assumed that compilers in 2001 could extract the instruction-level parallelism necessary to make VLIW work, but in reality we’ve only very recently figured out how to reliably do that.
But from the point of view of a potential user, this kind of software is undeveloped, unsupported, and rotting away (e.g., about two Clang versions behind): probably useful as a prototype to plunder and reimplement, not as a tool for getting things done.