I'm also unsure whether sitting one developer down in a corner for a segment of each sprint, dedicated exclusively to testing legacy code with no concrete goal, is valuable. You should be testing legacy code as you come across it: get it under a test harness, make your modifications, and move on to the next task. If you're spending time on something that doesn't close out a bug or a feature, you're spending valuable time testing code that may be completely removed in the future.
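To illustrate the "harness it as you come across it" approach, here's a minimal sketch of a characterization test. The function name and its logic are hypothetical stand-ins for untested legacy code; the pattern is what matters: record the code's current behavior before modifying it, so refactoring can't silently change it.

```python
# Hypothetical legacy function with no tests and unclear intent.
def legacy_discount(price, customer_type):
    if customer_type == "member":
        return round(price * 0.9, 2)
    return price

# Characterization tests: pin down what the code does *today*,
# even if the behavior seems odd, so any change during a later
# refactor is caught immediately.
def test_members_get_ten_percent():
    assert legacy_discount(100.0, "member") == 90.0

def test_unknown_type_pays_full_price():
    assert legacy_discount(50.0, "guest") == 50.0

if __name__ == "__main__":
    test_members_get_ten_percent()
    test_unknown_type_pays_full_price()
    print("characterization tests pass")
```

Once the harness exists, you make your change with the safety net in place and move on; no dedicated testing rotation required.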
I've only ever had to do a test rotation once or twice, and it was like pulling the rip cord on a lawnmower: it takes effort at first, then becomes self-sustaining. It establishes or affirms a culture of testing, and the rotation doesn't even need to last long.
You should know which portions of the code are here to stay and which are nearing their end of life. Naturally, you want to spend your time where it will have maximum payoff.
Also, on the latter point, I suppose it depends on context. If you work for a consulting company, you may not have full knowledge of the code base, or even permission to touch certain parts of it. If you're developing software for your own company, I agree you need to figure these things out, and maybe dedicating a developer to it each sprint isn't a bad idea. I overstepped my bounds with that comment: I've never worked for a company that sells its own software, only done consulting, and I sometimes forget about alternative perspectives, so sorry about that.
Of course you combine this with managerial support and coaching around task planning and messaging to other groups.
I've been a consultant, too, and I agree that it can sometimes (for some clients) be difficult to make the case for testing in that environment.