The input you give it can be very long, which can qualitatively change the experience. Imagine, for example, copy-pasting the entire Lord of the Rings plus another 100 books you like and asking it to write a similar book...
I just googled it, and the LOTR trilogy apparently has a total of about 480,000 words, which brings home how huge a 1M-token context is! It'd be fascinating to see how well Gemini could summarize the plot or reason about it.
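A quick back-of-envelope check of that comparison, assuming a rough average of ~1.3 tokens per English word (a common ballpark for BPE-style tokenizers; the exact ratio depends on the tokenizer):

```python
# Does the LOTR trilogy fit in a 1M-token context window?
# TOKENS_PER_WORD is an assumed ballpark, not an exact figure.
TOKENS_PER_WORD = 1.3

lotr_words = 480_000
lotr_tokens = int(lotr_words * TOKENS_PER_WORD)   # roughly 624,000 tokens

context_window = 1_000_000
headroom = context_window - lotr_tokens

print(f"LOTR trilogy: ~{lotr_tokens:,} tokens")
print(f"Room left in a 1M-token window: ~{headroom:,} tokens")
```

So the whole trilogy would fit in a single 1M-token prompt with room to spare, though 100 additional novels clearly would not.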
One point I'm unclear on is how these huge context sizes are implemented by the various models. Do any of them actually propagate the full raw context through the model, or are these all hierarchical-summarization and chunk-embedding index-lookup tricks?
Reading Lord of the Rings, and writing a quality book in the same style, are almost wholly unrelated tasks. Over 150 million copies of Lord of the Rings have been sold, but few readers are capable of "writing a similar book" in terms of quality. There's no reason to think this would work well.
I doubt it’s smart enough to write another (coherent, good) book based on 103 books. But you could ask it questions about the books and it would search and synthesize good answers.