Interesting, but I see one major problem with AI decision provenance: cases that mirror "Who Sank the Boat?" and the straw that broke the camel's back. When a system integrates many sensory inputs and acts on a threshold, even the best provenance may not help, because the information contained within the system itself is insufficient to reconstruct the cause of a failure. No single input caused the outcome; they all did, collectively. These are exceptionally hard problems, and they often require accepting that we don't need to know the cause of something for certain. Instead, we need to identify the larger context that allowed that state to be reached in the first place, so that we can prevent it in the future.
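To make the "straw that broke the camel's back" point concrete, here is a minimal sketch (with made-up signal names and scores, not any real system's API) of why threshold decisions resist single-cause attribution: a simple counterfactual test finds that no individual input was necessary for the decision, even though the decision fired.

```python
THRESHOLD = 1.0

# Ten hypothetical evidence scores, each individually small (total 1.20).
signals = {f"sensor_{i}": 0.12 for i in range(10)}

def fires(inputs):
    """Decision rule: fire when the summed evidence crosses the threshold."""
    return sum(inputs.values()) >= THRESHOLD

assert fires(signals)  # the decision fires on the full input set

# Counterfactual attribution: drop each signal in turn and re-check.
# Removing any single signal still leaves 1.08 >= 1.0, so the decision
# still fires, and no one input can be named "the cause".
necessary = [name for name in signals
             if not fires({n: v for n, v in signals.items() if n != name})]
print(necessary)  # [] : no input is individually necessary
```

Even a perfect provenance log of these ten inputs reproduces exactly this situation: every contributor is recorded, yet the counterfactual test exonerates each one individually, which is why the useful question shifts from "which input caused it" to "what context let the total drift so close to the threshold".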