Can't "awareness" in both examples be approximated by a random seed generator? Both the human mind and autoregressive model just need any initial thought to iterate and improve off of, influenced by unique design + experienced priors.
Samsung were also the ones who demonstrated a fatal flaw in C2PA: device manufacturers are explicitly trusted in the implementation.
C2PA requires trusting that manufacturers are not materially modifying the scene, for example by using convolutional neural networks to detect objects and add or remove detail [1].
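A minimal sketch of that trust boundary (the key, the `cnn_enhance` step, and the HMAC are hypothetical stand-ins; real C2PA uses certificate-backed signatures over a manifest): if the "scene optimizer" runs inside the device before the signing step, the signed output still verifies cleanly.

```python
import hmac, hashlib

MANUFACTURER_KEY = b"device-keystore-secret"  # hypothetical stand-in for the maker's signing key

def cnn_enhance(raw_capture: bytes) -> bytes:
    """Stand-in for an in-camera model that detects objects and adds/removes
    detail before anything is signed."""
    return raw_capture.replace(b"blurry moon", b"sharp synthetic moon")

def sign_capture(image: bytes) -> bytes:
    """The device signs whatever the pipeline hands it (HMAC here as a toy
    stand-in for the manifest signature)."""
    return hmac.new(MANUFACTURER_KEY, image, hashlib.sha256).digest()

def verify(image: bytes, signature: bytes) -> bool:
    """Verification only proves the bytes are unchanged since signing --
    not that the scene was unmodified before signing."""
    return hmac.compare_digest(sign_capture(image), signature)

raw = b"sensor data: blurry moon"
enhanced = cnn_enhance(raw)   # material modification happens pre-signature
sig = sign_capture(enhanced)
print(verify(enhanced, sig))  # True: passes provenance checks despite the edit
```

The point of the sketch is that provenance attaches at the moment of signing, so anything the manufacturer's pipeline does upstream of that moment is invisible to verification.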