As I commented elsewhere, GPT is such a target-rich security environment that it is hard to see why you would bother with this. On the other hand, advanced persistent attackers (e.g. the NSA) have a pretty good imagination. I could see them having both the motive and the means to go out of their way to achieve a particular result.
On human checks, the Underhanded C Contest (http://www.underhanded-c.org/) demonstrates that it is possible to write malicious code that passes careful human review.