
Ask HN: Is there any work on self-testing software? - dogweather
For clarification, I don't mean automated testing, test suites, or TDD. I mean something more like HAL: "Dave, I have detected an error in one of my circuits." I.e., software that, when run in normal production mode, updates its test fixtures with the latest environmental changes and then runs its test suite on itself. If the suite passes, the full production run continues.

This is an idea I've had which would be a really nice advance for my field: parsing and scraping public government legal texts. I'm wondering what others have done like this.

Unfortunately, when I search for "self-testing software" I get only _automated testing_ ... even Martin Fowler uses self-testing as a synonym for it: https://martinfowler.com/bliki/SelfTestingCode.html.

Thanks!

----

Details for the curious:

I have the problem of false positives when (say) the Nevada legislature changes the format of a statute. My program completes without error, but the output isn't correct. (My take-away: I've allowed an illegal state to become representable, and my code needs to change.)

But I noticed something interesting: when I update my test fixtures (they're full copies of a few of the Nevada HTML input files), I get test failures! This is great --- no false positives here. And it's nice confirmation for me that I've got a good test suite. (I did write this code using TDD and was careful not to code ahead of my tests.)

And so I had a thought: I could automate this. When I run the app ( https://github.com/public-law/nevada-revised-statutes-parser ) for real, it can update these fixture files and re-run the test suite to see whether it still holds up against any recent changes. This wouldn't be a 100% guarantee of correctness, because the source texts may have changed in some way that happens to evade my tests. But it definitely would have saved me in this situation --- catching the error immediately instead of way down the line, with the data already in use.
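A minimal sketch of that loop, in Python rather than the project's Haskell, with the refresh/test/parse steps passed in as stand-ins (none of these names come from the actual parser):

```python
# Hypothetical sketch: gate the production run behind a fixture
# refresh and a self-test. The three callables are placeholders for
# whatever the real app does.

def guarded_run(refresh_fixtures, run_tests, production_parse):
    """Refresh fixtures from the live source, self-test, then parse.

    refresh_fixtures: overwrite stored fixture files with fresh HTML
    run_tests:        run the existing suite; return True on success
    production_parse: the real work; only reached if the suite passes
    """
    refresh_fixtures()
    if not run_tests():
        raise RuntimeError(
            "self-test failed: the source format may have changed")
    return production_parse()
```

If the suite fails, the run aborts before any bad output is produced, which is exactly the save described above.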
======
gus_massa
Somewhat related:

A few years ago I used SpamBayes. The program calculates two values
independently:

* The "probability" that a message is spam

* The "probability" that a message is ham (not spam)

They aren't real probabilities and they don't add up to one. Usually you get
one value that is close to zero and one that is close to one.

Sometimes you get two values that are both close to zero, which means the
email is of an unknown type (and you should classify it so the program learns
to classify similar messages). Sometimes both values are close to one, which
means the email is of a weird type (again, classify it so the program learns).

So the program "knows" when it is confused.
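The decision logic described above can be sketched like this (a toy in Python; the real SpamBayes scoring is far more involved, and the 0.2 threshold here is invented):

```python
# Toy illustration of the two-score "knows when it is confused" idea.
# The scores themselves are assumed given; only the decision logic
# is the point.

def classify(spam_score: float, ham_score: float,
             threshold: float = 0.2) -> str:
    """Map two near-0/near-1 scores to a label, flagging confusion."""
    spam_high = spam_score > 1 - threshold
    ham_high = ham_score > 1 - threshold
    spam_low = spam_score < threshold
    ham_low = ham_score < threshold
    if spam_high and ham_low:
        return "spam"
    if ham_high and spam_low:
        return "ham"
    if spam_low and ham_low:
        return "unsure: unknown type"   # both near zero
    if spam_high and ham_high:
        return "unsure: weird type"     # both near one
    return "unsure"
```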

~~~
dogweather
That's pretty cool, like a confidence score.

------
greenyouse
I don't think there's really anything like what you want, where the system
changes its parsing logic on the fly. You could definitely set up acceptance tests
for the system to see whether your program can parse the Nevada legislature
data properly as it changes. The program could pull the data fresh from the
API, run through, and check that the end result is the same. Automation tests
can run on a scheduler so you could at least figure out when the data changes
(daily, weekly, etc.).
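A sketch of that acceptance check in Python, with the fetch and parse steps injected as stand-ins (hypothetical names; a cron entry or CI schedule would supply the "run daily/weekly" part):

```python
import hashlib

def source_fingerprint(html: str) -> str:
    """Hash the raw source so a scheduler can detect upstream changes."""
    return hashlib.sha256(html.encode("utf-8")).hexdigest()

def acceptance_check(fetch, parse, expected_output, last_fingerprint=None):
    """Pull the data fresh, note whether it changed, and verify the parse.

    fetch:            return the current raw HTML from the source
    parse:            the parser under test
    expected_output:  known-good result to compare against
    last_fingerprint: hash from the previous scheduled run, if any
    """
    fresh = fetch()
    fingerprint = source_fingerprint(fresh)
    return {
        "ok": parse(fresh) == expected_output,
        "source_changed": fingerprint != last_fingerprint,
        "fingerprint": fingerprint,
    }
```

Running this on a schedule and logging the fingerprint would at least tell you how often the source changes, and whether each change breaks the parse.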

Could you post back if you do get the dynamic parsing to work in Haskell?

------
dogweather
It looks like "self-healing" might be the thing I'm after.

~~~
_lol
Erlang has self-healing baked in as one of its core tenets.

