> “The stereotypical image we might have of a meme is the image with captions at the top and bottom,” says Wark. “But memes have gotten a lot weirder over the last few years. Many don’t really have a punchline like a joke does [...] “A program that can classify these [memes] with a 92% accuracy rate could be extremely useful for meme consumers with visual impairment,” she says.
Many of the memes I see day to day have no text whatsoever; their humor comes from the context of the conversation they're applied to. Many are simple image edits whose meaning would, I think, be near impossible to convey through language.
And why memes? They might provide a good dataset for building algorithms that can truly grasp the context of content while parsing its meaning, but most memes require you to be part of an in-group and to know its history and values. Even then, a single meme can be a humorous homage for one group of people and a mocking joke for another.
They're talking about the memes from a few years ago, where the same image was always reused with different text at the top and bottom (Socially Awkward Penguin, Business Cat, etc.). They are exploiting the fact that the image is always the same to help OCR the text.
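A minimal sketch of that idea, with toy data and illustrative names (none of this is from the paper): because classic image macros reuse the exact same background artwork, you can fingerprint the middle band of the image, where captions rarely appear, to recognize the template, and then hand only the top and bottom strips to an OCR engine. The OCR step itself is stubbed out here.

```python
import hashlib

# Toy "image": a list of rows, each row a list of grayscale ints.
KNOWN_TEMPLATES = {}  # fingerprint -> template name

def fingerprint(pixels):
    """Hash only the middle band of the image, where the reusable
    template artwork lives; the top and bottom quarters hold captions."""
    h = len(pixels)
    middle = pixels[h // 4 : (3 * h) // 4]
    return hashlib.sha256(repr(middle).encode()).hexdigest()

def register_template(name, pixels):
    """Record a known (caption-free) template image."""
    KNOWN_TEMPLATES[fingerprint(pixels)] = name

def identify_template(pixels):
    """Return the template name if the image's middle band matches
    a registered template, else None."""
    return KNOWN_TEMPLATES.get(fingerprint(pixels))

def caption_strips(pixels):
    """Return the (top, bottom) strips that would be passed to an OCR
    engine such as Tesseract; running OCR is out of scope here."""
    h = len(pixels)
    return pixels[: h // 4], pixels[(3 * h) // 4 :]
```

Knowing the template up front is what makes the OCR tractable: the text regions and their locations are fixed per template, so the rest of the image never has to be searched for text.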
I find that, in many popular social contexts within my demographic, memes are essential to participating in basic conversation online.
Those are the best memes.
I end up not wanting to participate, and then feeling a bit guilty, because I'm handicapped, though I'm not blind.
There should be native support. This shouldn't be an issue. They offer native support for embedding the GIFs. They should offer native support for the visually impaired. It shouldn't require a workaround.
Note: Image descriptions cannot be added to GIFs or videos.
Yeah, memes are exactly what researchers should spend time and resources on, especially with regards to blind people.