It's worth pointing out that even with the largest models available, coherence degrades quickly as output length grows. In a local home ML setup, until somebody radically improves long-term coherence, fitting a model into < x memory and having it still say the right thing after > y minutes of search may be diametrically opposed constraints.