Being tricked by optical illusions is more about the sensory apparatus and image processing faculties than reasoning, but detecting optical illusions is definitely a reasoning task. I doubt it's an important enough task to train into general models though.
At this point I think all "reasoning" really means is having seen enough of the right training data to make the correct inferences, and these models are just missing some of that data.
I just wish the people who believe LLMs can actually reason and generalize would see that they don't.