I feel like I'm missing something from the examples. If the images are symmetrical, why can't you just mirror the coordinates of the missing pixels and grab them from the symmetrical side?
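Something like this minimal sketch, assuming the grid is a 2D list of ints, missing pixels are marked with a placeholder value, and the image is left-right symmetric (the names and placeholder convention here are hypothetical, not from the ARC repo):

    def fill_by_mirror(grid, missing=0):
        height, width = len(grid), len(grid[0])
        filled = [row[:] for row in grid]  # copy so the input stays intact
        for r in range(height):
            for c in range(width):
                if filled[r][c] == missing:
                    # take the value from the horizontally mirrored cell
                    filled[r][c] = grid[r][width - 1 - c]
        return filled

    example = [
        [1, 2, 3, 0, 2, 1],
        [4, 0, 6, 6, 5, 4],
    ]
    print(fill_by_mirror(example))
    # [[1, 2, 3, 3, 2, 1], [4, 5, 6, 6, 5, 4]]

Obviously this only handles one fixed axis of symmetry; the point is just that the per-task rule seems simple once you know it.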
EDIT: the GitHub page has an actual example. You see three input/output pairs to learn the rule, which you then apply to a fourth input. So the task is slightly different on each test case.
https://github.com/fchollet/ARC