E.g. if the tracking is off (i.e. the predicted x,y coordinate differs from the target you had in mind), I guess one would try to auto-correct the error by gazing slightly off-target, e.g. looking slightly beneath it. I am not sure whether this is actually a problem.
We do not have this problem with the mouse, since we control it via relative speed and never set specific x,y coordinates.
Maybe one could solve this by also tracking the head? E.g. make an educated guess via gaze tracking and let the user refine it with her ... nose. ;)
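To make the idea concrete, here is a toy sketch of that two-stage scheme: the gaze estimate places the cursor coarsely, and relative head movements (like the mouse's relative-speed control mentioned above) nudge it the rest of the way. The function name, the additive model, and the `gain` parameter are all assumptions for illustration, not any real tracker's API.

```python
def refine_cursor(gaze_xy, head_deltas, gain=2.0):
    """Hypothetical sketch: start at the coarse gaze estimate (absolute x,y),
    then accumulate small relative head movements to fine-tune the position,
    the same way a mouse applies relative deltas instead of absolute coords."""
    x, y = gaze_xy
    for dx, dy in head_deltas:
        # Each head movement shifts the cursor relatively, scaled by a gain.
        x += gain * dx
        y += gain * dy
    return x, y

# E.g. gaze lands at (400, 300), then two small head nudges refine it:
print(refine_cursor((400, 300), [(1, 0), (0, -2)]))  # (402.0, 296.0)
```

The appeal would be that the absolute gaze error never needs to be corrected by the gaze itself; the relative head channel absorbs it, just as small hand corrections do with a mouse.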