This sounds like the basis of hypertext fiction (https://en.wikipedia.org/wiki/Hypertext_fiction), which has of course existed far longer than big data as a concept.
As for what characterizes literature, I'm inclined to agree with you. Still, is there not inherent value in another reading, albeit a computational one, so to speak? If we take Barthes to be correct, what does it say when a well-trained NLP model draws conclusions similar to humans' regarding imagery, analogy, irony, metaphor, etc. when reading major literary works? Different conclusions? Or none at all? What if some works "compute" and some don't? What if some works' features are culture-independent, i.e., a model trained on an Eastern literary corpus computes similar features to a model trained on a Western literary corpus, while other features aren't?
Perhaps these questions are more superficial than I'm making them out to be, but it seems presumptuous to assume that methods approaching the problem from this angle won't get at _any_ literary features.