My understanding of state-of-the-art language models is limited, but it seems a great deal has been invested in models that generate their output one token at a time, with no ability to edit what they've already produced.
Have there been any experiments with models that can re-read and edit their work before spitting out an entire sentence, paragraph, or document?
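To make the question concrete, here's a rough sketch of the standard left-to-right decoding loop I'm describing, using GPT-2 via Hugging Face `transformers` purely as an example (the model choice and greedy argmax selection are my assumptions for illustration, not any specific system's method):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tokenizer.encode("The cat sat on the", return_tensors="pt")
for _ in range(10):
    with torch.no_grad():
        logits = model(ids).logits      # scores over the vocabulary
    next_id = logits[0, -1].argmax()    # greedily pick the most likely next token
    # The key point: next_id is appended and never revisited -- this loop
    # has no mechanism for going back and revising earlier tokens.
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

Every token is committed the moment it's chosen, which is exactly the "no edits" behavior I'm asking about.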