Not to excuse security holes, but at least those are adversarial. I would expect this code to generate all kinds of error logs along the "happy path" if tested against any reasonable corpus; if you ran it against even a sample of representative input, you'd find this.
Security holes are more excusable because someone who didn't realize the above could happen may simply never have tested for it... given the use case, this is more like "did you even run your code?"