Next AI challenge: try to infer the intended panel reading sequence, and the flow of speech bubbles/narrative.
Would potentially be a useful augmentation to a digital comic book reader, refocusing from panel to panel in sequence. Not to mention making comic book content more accessible.
Is that solvable? It's my personal experience that humans can't reliably do that; I could imagine a machine doing better, but I'm not sure what information it would be working with.
I feel the problem is that you often read some panels in the wrong order and only realize it after the fact, based on the text. The AI could probably use the text as additional information for determining the order.
Would probably never be perfect, but if it gets it right in all but a few outlying cases, that should be good enough.
Counterpoint: isn't reading and re-reading the page after figuring out the intended order part of the experience?
Take video games. From Software games tell stories through breadcrumbs: locations, speech lines, item descriptions, and only over time can you start to connect the dots.
Sure, you can watch a YouTuber connect the dots for you, show you what you've missed, and make you aware of your own mental limitations. But it's not the same.
These programs would probably be useful for speeding up the translation of comics; for example, there are hundreds of thousands of Japanese self-published works, and many of them have no translation.
Crunchyroll used to offer a guided reading experience for some of their manga.
You could probably build a tool that tags each panel and attempts to figure out the order, and then have a human editor do a validation pass. If you have enough people reading a series you can probably crowdsource the panel sequence.
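As a minimal sketch of the kind of ordering heuristic such a tool might start from (hypothetical function names, not an existing tool), you could group detected panel bounding boxes into rows by vertical position and then sort each row left-to-right, or right-to-left for manga. Real layouts break this constantly, which is exactly why the human validation pass matters:

```python
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

def order_panels(boxes: List[Box], rtl: bool = False, row_tol: int = 30) -> List[Box]:
    """Group panels into rows by top edge, then sort each row
    left-to-right (or right-to-left when rtl=True, for manga).
    row_tol is the pixel slack for treating panels as one row."""
    rows: List[List[Box]] = []
    for box in sorted(boxes, key=lambda b: b[1]):  # by top edge
        for row in rows:
            if abs(row[0][1] - box[1]) <= row_tol:
                row.append(box)
                break
        else:
            rows.append([box])
    ordered: List[Box] = []
    for row in rows:
        ordered.extend(sorted(row, key=lambda b: b[0], reverse=rtl))
    return ordered

# Example: a 2x2 grid read right-to-left (manga style),
# so the top-right panel comes first.
panels = [(0, 0, 100, 100), (110, 0, 100, 100),
          (0, 110, 100, 100), (110, 110, 100, 100)]
print(order_panels(panels, rtl=True))
```

A crowdsourced or editor-validated ordering would then override this heuristic wherever the layout plays tricks on it.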
That sounds similar to having an AI explain where you should be looking in a painting, or where to pay attention in a movie. This might be genuinely useful as an accessibility feature, but I'd also see strong sentiment against it, potentially from the creators of the art.
I'm more of a manga reader than a western comic one, but why would there be sentiment against this? In what way is the reading order of panels up to interpretation?
Focusing on manga, the shoujo space for instance has a reputation for out-of-frame character positioning and for composing pages with a visual flow that doesn't follow the actual character speech.
It would be hard to pin down a given panel order as canonical when the author is playing games with the reader. I don't think authors would oppose an accessibility feature, but I could see the debate if it were a more prominent, sanctioned way of reading.
You've now focused too much on manga/comics. Stepping back and looking at art more generally, using the earlier example of looking at a painting: who are you to tell me where I should be looking? Yeah, yeah, I see the obvious thing your AI is pointing me at, but I'm looking at this less obvious thing that really strikes my fancy. Maybe it's a blemish. Maybe it's a unique brush stroke/technique that others might not care about, but an aspiring artist might. (Even if that artist is another stupid AI.)
Because that was the idea proposed for expanding this concept to something else. That's how conversations evolve: someone says something that plants a seed for someone else, which gets someone else thinking, and so on.
Yeah, but isn't telling you where to look and what to think contrary to the idea of art? Sure, a guide explaining why Van Gogh's brushwork is interesting may give you a newfound appreciation of what looks from a distance like a simple painting, but treating looking at art as something to be min/maxed and optimized sounds depressing.
I think, aside from accessibility, the main use case is e-ink devices, where the resolution isn't high enough to comfortably display the entire page at once and the refresh rate isn't high enough to pan and zoom quickly. With those limitations, automatically "paging" through the panels probably makes for a much better reading experience (I haven't tried it myself).
A lot of YouTube AI manga recap channels have popped up in the last year.
I assumed they have some open source tool that does the panel segmentation for them.
I always thought a fun side gig (or even volunteer opportunity) would be defining panel areas for digital comics. Both because I'd get to read a lot of comics, and because a lot of comics I read are frustratingly bad at it, and it makes it much harder to enjoy them. Well, there goes AI taking our jerbs.
You could label a bunch for OP and contribute to your own demise :D (It sounds like there isn't a large existing dataset since the author had to label P&C for testing and generate data for training.)
Awesome stuff! We're also working on comic segmentation @ https://toona.io and other stuff for motion comic generation. The synthetic dataset approaches are really interesting, I'm curious if you could use an algorithm like https://en.wikipedia.org/wiki/Flood_fill to aid in segmentation (especially for manga).
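To illustrate the flood-fill idea (a toy sketch, not toona.io's actual pipeline): assuming a binarized page raster where white gutters separate panel content, a BFS flood fill can label each connected region, and each region's bounding box becomes a panel candidate. Manga's typically clean black-and-white gutters are why this might work better there:

```python
from collections import deque

def flood_fill_regions(grid):
    """Label 4-connected regions of 1s (panel content) in a binary
    page raster via BFS flood fill; 0s are gutter/background.
    Returns a dict mapping region label -> list of (row, col) cells."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    regions = {}
    next_label = 0
    for r in range(h):
        for c in range(w):
            if grid[r][c] == 1 and labels[r][c] == 0:
                next_label += 1
                labels[r][c] = next_label
                regions[next_label] = [(r, c)]
                queue = deque([(r, c)])
                while queue:
                    cr, cc = queue.popleft()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = cr + dr, cc + dc
                        if 0 <= nr < h and 0 <= nc < w \
                                and grid[nr][nc] == 1 and labels[nr][nc] == 0:
                            labels[nr][nc] = next_label
                            regions[next_label].append((nr, nc))
                            queue.append((nr, nc))
    return regions

# Toy page: two "panels" separated by a blank gutter column.
page = [[1, 1, 0, 1],
        [1, 1, 0, 1]]
print(len(flood_fill_regions(page)))  # 2 separate regions
```

Panels that bleed over the gutter or overlap would merge into one region, which is where a learned segmentation model earns its keep.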
Oh wow, I'll check that out! We're basically working on tools for turning normal comics into semi-animated versions using things like puppet animation (similar to Live2D style) and Stable Diffusion models. A lot of the work in traditional approaches is segmentation and inpainting, so that's what we're working on now, but we're also building tools for colorization, effects, and controllable animations!
Segmenting panels is a graphic design element of storytelling that's part of the artist's job. Doing it programmatically ignores its actual role in storytelling: establishing the amount and direction of the story beats, helping the reader understand the right reading order, and establishing the important elements on the page (which beat is the most important, etc.).
It's an interesting tech but giving such an important creative job to a computer instead of an artist is a bad idea for any comic artist who cares about their work.
I half agree with you. Yes, the panel segmentation can be an element of the storytelling, but the truth is that lots of comics out there are not exploiting this ability, sticking to a square grid instead.
Yes, Bill Watterson used his fame to get out of the standard grid [1], but in a world where people read comics on their phones, this technology is necessary. And if this stuff helps comic artists reach more readers, so be it. We can always hope that making simple tasks easy today will encourage artists to try harder things tomorrow.
> I half agree with you. Yes, the panel segmentation can be an element of the storytelling, but the truth is that lots of comics out there are not exploiting this ability, sticking to a square grid instead.
It is an element of storytelling. If the artist isn't doing it, it's because they're doing a bad job of storytelling, or publishing in a standard format where the beats are always the same (like Saturday-morning comics in the newspaper, if those still exist).
I hadn't heard of Smart Comic before. Thanks for the recommendation. Definitely going to give it a try. (I admit that panel handling is a nice-to-have feature, but what makes or breaks a comic reader for me is how easy it is to sync comics---but this is a long departure from the thread.)
YAC Reader [1] has great panel recognition. My other favorite comic reader has attempted it but never shipped something I could use.
> ... it is often easier to see how to improve the dataset than to design new heuristics. Once you do that, you almost have a guarantee that the Neural Network machinery will get you the results.
Money quote. Applicable to so many areas of ML/AI.
On the topic of AI and comic books: since ChatGPT was trained on Wikipedia and thousands of other sources with complete records of comic book lore, why does it get so many relatively basic comic book questions completely wrong? For example, I've asked ChatGPT several times how Psylocke temporarily gained the ability to move through shadows in the past. It was a side effect of drinking the Crimson Dawn elixir, which saved her life after she was nearly killed by Sabretooth. All of this information is readily available online, yet ChatGPT makes up hallucinated explanations every time I ask it. Why is that?
Although comic book lore has a canon defined by intellectual property, the topics it explores tend to be interchangeable fictions. That is: there are a lot of characters that have superpowers, characters who drink magic potions or elixirs, and characters who use these things to avoid death. Sometimes these things are defining backstories; other times they are alternate continuities, non-canon one-offs, or fanfic posted on Reddit. Because GPT is inferring a "plausible next guess" for any particular piece of knowledge, the likelihood that it captures the specific causal relationships and their weight is very low: Psylocke is a type of comic book character, therefore the ability is because of <a type of comic book event>, not <the specific event in issue such-and-such>.
GPT does similarly poorly if you ask it historical questions like "who were the most influential art educators of the 19th century?" It will respond with a jumble of people from different eras and books that those people did not write.
At any rate, if it keeps hallucinating answers then it means it simply doesn't know. Either it wasn't a part of the dataset or it wasn't mentioned often enough to be memorised.
I just tried it and it says she "...gained the ability to move through shadows due to her interaction with the Crimson Dawn. This storyline occurred when she was mortally wounded, and her allies sought the help of the Crimson Dawn to save her life."