You keep replying to me with one-sentence posts that say little and make no point. If you have something to say or a point you're trying to make, I'd prefer you just do that rather than trying to Socratic-method everyone into enlightenment.
To (try to) answer your question, to the best of my ability given limited information about what your question even is: no, it is not about detecting when that update should be performed. Remember that we are talking about brain states, i.e. the-world-as-you-think-it-is, as it is represented in your mind, and perhaps, if you want to push it, as you are consciously aware of it. If you hold conflicting beliefs but are truly ignorant of the fact, then you can't really be said to have cognitive dissonance. In Bayesian terms, I guess this is analogous to having priors but not being aware of how to evaluate them against evidence, and maybe not even that you should do so in the first place. Cognitive dissonance, on the other hand, would be having priors, knowing how (and when) to update them, doing the update, and then ignoring the results. Alternatively, it would be performing faulty updates while being at least partially aware of the error in doing so.
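To make that analogy concrete, here's a minimal sketch (my own toy illustration; the update function and the numbers are invented, not anything from this discussion):

    def update(prior, likelihood_if_true, likelihood_if_false):
        """Posterior P(H|E) from a prior P(H) via Bayes' rule."""
        evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
        return prior * likelihood_if_true / evidence

    # Prior confidence 0.9, but the observed evidence is three times more
    # likely if the belief is false than if it is true.
    posterior = update(0.9, 0.1, 0.3)
    print(posterior)  # 0.75 -- the update says "lower your confidence"

    # Dissonance, on this analogy: computing that 0.75 and then continuing
    # to act as if the prior were still 0.9.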
You should take value from those options rather than letting any one of them define value. Then trust yourself to move forward. Software is writ in water.
I do often continue working, but with a nagging sensation of not executing a task in the most optimal way. That is what I meant by having a healthy approach to cognitive dissonance: performing a task in a way you know or feel is not optimal, but not being confused by the feelings of dissonance, instead just seeing them as part of the learning process. In the end, the dissonance disappears as soon as the learning process has finished, and I'm no longer anxious about that option.
Which in itself is not very interesting, but might be in the context of building an AI. Apart from an NN, what else do you need for an AI? Perhaps you need mechanisms such as "cognitive dissonance" in order to achieve "effective learning" through coping with that dissonance. What we have today are clever NNs. Nothing close to a talking bear (you know, the one from A.I. Artificial Intelligence).
If one belief negates the other for the majority of inputs, but you insist it does so for very few or no inputs, I would call that cognitive dissonance as well. In other words, if it's highly unlikely that two things are both true, but you consider it very likely that they are both true, that's probably cognitive dissonance even if the two propositions are not logical contradictions.
Do you disagree that, given a bag of all functions which accept N inputs, it is unlikely that two chosen at random would negate each other across all N inputs?
Obviously I don't disagree with that. Where are you going with this? People don't formulate their beliefs by picking randomly from a collection of all theoretically possible beliefs. To expand on your example: if you are selecting from a bag of beliefs according to some criteria (based on some combination of morality, desire, etc.) which are themselves in conflict (e.g. "I wish to be feared; I wish to be loved"), then I think you are more likely to pick beliefs which happen to contradict each other (again, if not logically, then for most inputs) than you would be by selecting randomly. Or, conversely, less likely, if your criteria are more in tune with one another. The random baseline itself is easy to pin down; see the sketch below.
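To put a rough number on that baseline: a quick sketch (my own toy model, assuming a "belief" is just a Boolean function represented by its outputs on N fixed inputs; names like negates_everywhere are mine):

    import random

    def random_function(n_inputs):
        """A Boolean function on n_inputs points, given by its output table."""
        return [random.randint(0, 1) for _ in range(n_inputs)]

    def negates_everywhere(f, g):
        """True iff g(x) != f(x) for every input x."""
        return all(a != b for a, b in zip(f, g))

    def estimate(n_inputs, trials=100_000):
        hits = sum(
            negates_everywhere(random_function(n_inputs), random_function(n_inputs))
            for _ in range(trials)
        )
        return hits / trials

    for n in (1, 2, 4, 8):
        # Under uniform random choice the exact probability is (1/2)**n,
        # so total negation gets exponentially unlikely as N grows.
        print(n, estimate(n), 0.5 ** n)

Biasing the draw, i.e. picking beliefs to satisfy conflicting criteria rather than uniformly, is exactly what would push the disagreement rate above that baseline, which is my point.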
I really appreciate you guys working on that. I have since moved to a Debian VM, but might eventually move back if I don't need to frequently restart docker-machine hosts.