> We adopt a test case fuzzing approach for generating equivalences, with 18,000 randomly generated inputs and combinatorially generated corner-case inputs, inspired by test case generation in . ... helped Revec to significantly reduce erroneous equivalences, and is consistent with the methodology in .
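For context, the fuzzing-based equivalence check described in the quote could be sketched roughly as follows. This is a minimal illustration, not Revec's actual code: the `equivalent` helper, the corner-case set, and the input count are assumptions modeled on the quoted description (18,000 random inputs plus combinatorially generated corner cases).

```python
import itertools
import random

# Hypothetical corner-case values for 32-bit integer lanes;
# the paper's exact corner set is not specified in the quote.
CORNERS = [0, 1, -1, 2**31 - 1, -(2**31)]

def equivalent(f, g, arity, n_random=18_000, seed=0):
    """Fuzz-test whether f and g agree on corner-case and random inputs.

    Testing can only *refute* equivalence; agreement on every tested
    input is evidence of equivalence, not a proof.
    """
    rng = random.Random(seed)
    # Combinatorially generated corner-case inputs.
    for args in itertools.product(CORNERS, repeat=arity):
        if f(*args) != g(*args):
            return False
    # Randomly generated inputs.
    for _ in range(n_random):
        args = tuple(rng.randint(-(2**31), 2**31 - 1) for _ in range(arity))
        if f(*args) != g(*args):
            return False
    return True

# Example: commutativity of addition passes; off-by-two functions fail
# immediately on the corner cases.
print(equivalent(lambda x, y: x + y, lambda x, y: y + x, 2))  # True
print(equivalent(lambda x: x + 1, lambda x: x - 1, 1))        # False
```

Note that a check like this only observes the functions' return values, which is exactly why the side-effect question below matters: flags, modes, and memory effects are invisible to it unless explicitly modeled.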
This sounds like an interesting approach, but how hard would it have been to have an expert encode this information instead of deriving it empirically? And what about instruction side effects, processor/coprocessor flags and modes, etc.? Does the equivalence check account for those?
The llvm-revec fork has the pass, but does it also include the code to reproduce the automated enumeration (and the testing)?