Is that sufficient? I'm not very adept at modern AI, but it feels to me like the only reliable solution is to not have the data in the model at all. Is that what the approach you're describing accomplishes?
Why wouldn't the human mind have the same problem? Hell, it's ironic, because one thing ML is pretty damn good at is getting humans to violate their prompting and, frankly, basic rational thought: