The extreme ironing image example has a bullshit explanation in the paper. Extreme ironing on the back of a taxi is a popular photo with lots of text associated with it: https://google.com/search?q=extreme+ironing+taxi&tbm=isch
Give the model new images that are not in the training set (e.g. photos that are not on the internet, or photos taken after the model was trained), ask the same question, and see how well it does!
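A minimal sketch of that test, assuming the OpenAI Python SDK and a vision-capable chat model (the model name and the helper function here are my assumptions, not anything from the paper):

    # Send a photo that cannot be in the training set (e.g. one taken
    # after the training cutoff) and ask the same question as the paper.
    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def describe_image(path, question):
        with open(path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode("utf-8")
        resp = client.chat.completions.create(
            model="gpt-4o",  # assumed vision-capable model name
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url",
                     "image_url": {"url": "data:image/jpeg;base64," + b64}},
                ],
            }],
        )
        return resp.choices[0].message.content

    # A photo you took yourself, never posted online:
    print(describe_image("my_new_photo.jpg",
                         "What is unusual about this image?"))

If it still nails the description on genuinely novel photos, the memorisation theory is wrong; if it only works on famous internet images, that tells you something too.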
The paper says: “Table 16. [snip] The prompt requires image understanding.”
I think OpenAI's explanations of the images in the paper are probably misinformation or misdirection. I would guess the model is recognising the images from its training set and associating them with nearby text.
However, I still think they should not have used images from the internet (and hence the training set) in their paper, and to be safe they should not have used “generated” images either.
I am looking forward to taking photos of some paintings by friends and seeing if ChatGPT can describe them!
https://cdn.openai.com/papers/gpt-4.pdf