Fundamental goals are neither rational nor irrational. A rational thinker with no non-rational goals would have no reason to ever do anything--including think at all.
Two perfectly rational agents with the same fundamental goals would, of course, inevitably select the same sub-goals and actions to reach those goals.
But at some point if you keep asking a rational agent "but why do you want to do that?" the only answer will be "because it is my nature."
>Fundamental goals are neither rational nor irrational.
That's a rather controversial statement. You have to bear in mind that the very idea that there's a clean split between issues of fact and issues of "values" or "goals" is a very controversial one. Hilary Putnam's book on this issue is a great read ("The Collapse of the Fact/Value Dichotomy").
Of course, there are also issues with the notion of "observation" used in Bayesian statistics. Since any observation which isn't actually just a direct report of a sense experience (e.g. "I see red in the top left corner of my visual field") involves the application of some theory, it's not clear what should count as an observation for a rational agent or group of rational agents. For example, did John observe that a car drove past? Or did he observe that something which looks like a car drove past? Can John observe that Bill was being petulant, or is that a value judgment? Did John observe that the current is 0.5A? Or did he observe that the ammeter reported that the current is 0.5A? Etc. etc.
The Bayesian framework is an idealization, resting on an ideal notion of "observation". I personally think that it's crazy to take this framework to be a model for rationality itself. We are not able to make observations in the Bayesian sense.
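The gap between the idealization and real observers can be made concrete. Standard Bayesian conditioning treats an observation as certain; Jeffrey conditioning is the standard generalization for the case where the "observation" itself is uncertain (you only shift your credence in the evidence, you don't acquire it with probability 1). A minimal sketch, with purely illustrative numbers:

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H|E) by Bayes' rule, treating evidence E as certain."""
    numerator = likelihood_h * prior
    evidence = numerator + likelihood_not_h * (1 - prior)
    return numerator / evidence

def jeffrey_update(prior, likelihood_h, likelihood_not_h, q):
    """Jeffrey conditioning: our credence in E merely moves to q,
    so we mix the posteriors given E and given not-E."""
    p_h_given_e = bayes_update(prior, likelihood_h, likelihood_not_h)
    p_h_given_not_e = bayes_update(prior, 1 - likelihood_h, 1 - likelihood_not_h)
    return q * p_h_given_e + (1 - q) * p_h_given_not_e

# Hypothesis H: "a car drove past"; evidence E: "a car-like visual impression".
prior = 0.3
p_certain = bayes_update(prior, 0.9, 0.2)          # E taken as given
p_uncertain = jeffrey_update(prior, 0.9, 0.2, 0.7) # only 70% sure of E
```

With q = 1, Jeffrey's rule collapses back to ordinary conditioning; the point of the example is just that the ideal framework presupposes you can locate a proposition E to be certain (or precisely uncertain) about, which is exactly what the car/ammeter examples call into question.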
Does this mean some form of mystery about oneself must always remain? Assume complete self-referentiality of a being. Then the ultimate source of one's goals is no longer the black box called "my nature". What is it then? And would it be the same for two completely self-referential beings?
(By self-referential I mean one who understands oneself absolutely; one who possesses a perfectly accurate map of oneself. There probably is a better word.)
If the human brain functions more or less according to classical physics, with causality running forwards in time, then "one's nature" is just a very complicated function of the local state of the universe - the previous state of one's brain (including the encoding of the consciousness function itself) plus incoming sensory data.
More than once you've probably made a decision, at least in part, by psychoanalyzing yourself (and maybe psychoanalyzing the way that you're psychoanalyzing yourself) - so imagine being able to psychoanalyze yourself in real time, with (near-)perfect accuracy. (There might be information-theoretic problems with trying to simulate your whole brain inside of itself at speed.)
If one is the kind of person who has given serious thought to one's motivations in life (which must include coming to the realization that they are, in a certain fundamental sense, arbitrary), then I don't know that one would necessarily change all that much under these conditions.