A recent work not mentioned in the last chapter is "Adversarial Training with Abstraction", which explains this issue using the notions of continuity and sensitivity of abstractions.
Like, would a network be able to overcome the error from switching between a 3D model and actual video?
I think the verification tools are finally getting good enough to be useful for this kind of thing.
Good job putting this together :)
I personally find the idea of reasoning on graphs of neurons to make verification tractable really beautiful. :)