It's just a Markov chain built by tokenizing a corpus of English words into syllables and analyzing the probability of transitioning from one syllable to another.
I think the library I used to tokenize the words didn't do a perfect job, and the corpus was suboptimal too, but it works well enough for a bit of nonsense :)
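For anyone curious, here's a minimal sketch of the idea in Python. I don't know which syllabifier library or corpus the post actually used, so this substitutes a crude vowel-group splitter and a toy word list; the START/END markers and all names are just illustrative:

    import random
    import re
    from collections import defaultdict

    START, END = "^", "$"

    def syllabify(word):
        # Crude stand-in for a real syllabifier library:
        # split the word after each vowel group.
        return re.findall(r"[^aeiouy]*[aeiouy]+(?:[^aeiouy]*$)?", word) or [word]

    def build_chain(words):
        # Count syllable-to-syllable transitions across the corpus,
        # with START/END markers so word boundaries are part of the model.
        counts = defaultdict(lambda: defaultdict(int))
        for word in words:
            syllables = [START] + syllabify(word.lower()) + [END]
            for a, b in zip(syllables, syllables[1:]):
                counts[a][b] += 1
        return counts

    def generate(chain):
        # Walk the chain from START, sampling each next syllable
        # proportionally to how often it followed the current one.
        current, out = START, []
        while True:
            options = chain[current]
            nxt = random.choices(list(options), weights=list(options.values()))[0]
            if nxt == END:
                return "".join(out)
            out.append(nxt)
            current = nxt

    # Toy corpus; the real thing would use a large English word list.
    corpus = ["banana", "calculator", "syllable", "probability", "nonsense"]
    chain = build_chain(corpus)
    print(generate(chain))

Since transitions are sampled in proportion to observed counts, common syllable pairs dominate and the output tends to look plausibly English-ish even with a rough tokenizer.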