Large Scale Experience Replay - https://arxiv.org/abs/1909.11583
Am I missing a link between the two?
That approach is a promising way to make the latent space easier to navigate, since a change in one dimension has a reduced or no influence on the other aspects of the encoded data.
Here is a nice overview of disentanglement, with further references: https://paperswithcode.com/method/beta-vae
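To make the idea concrete, here is a minimal numpy sketch of the beta-VAE objective that overview describes. The function name, the MSE reconstruction term, and the default `beta` are my own illustrative choices; a real implementation would use an autodiff framework and a likelihood-based reconstruction loss.

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Beta-VAE objective: reconstruction error plus a beta-weighted
    KL divergence between the approximate posterior N(mu, sigma^2)
    and the standard normal prior. Setting beta > 1 pressures the
    encoder toward factorised (disentangled) latent dimensions."""
    # Mean squared reconstruction error, summed over features,
    # averaged over the batch.
    recon = np.mean(np.sum((x - x_recon) ** 2, axis=1))
    # Closed-form KL(N(mu, sigma^2) || N(0, I)), summed over
    # latent dimensions, averaged over the batch.
    kl = np.mean(0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var,
                              axis=1))
    return recon + beta * kl
```

Turning `beta` up trades reconstruction fidelity for better-separated latent factors, which is exactly what makes the latent space easier to navigate one dimension at a time.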
Not manual, and it sounds like a good idea.
The drawback is that you can only learn tasks which are relatively similar: any time you restrict the possible motions to improve learning, you obviously restrict the possible tasks. The benefit is that tasks which do fall within the learned motion ranges can be learned much more quickly.
The best analogy in 'classical' control is task-space control, where you control in Cartesian dimensions rather than joint positions. But this has its own drawbacks: you have to define these controllers manually, and Cartesian space is not sufficiently expressive or appropriate for many tasks.
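For a concrete picture of task-space control, here is a sketch of resolved-rate control for a hypothetical 2-link planar arm: a Cartesian error velocity is mapped to joint velocities through the Jacobian pseudoinverse. The link lengths, gain, and function names are illustrative assumptions, not taken from any particular library.

```python
import numpy as np

# Hypothetical 2-link planar arm with unit link lengths.
L1, L2 = 1.0, 1.0

def forward_kinematics(q):
    """End-effector (x, y) position for joint angles q = [q1, q2]."""
    q1, q2 = q
    return np.array([L1 * np.cos(q1) + L2 * np.cos(q1 + q2),
                     L1 * np.sin(q1) + L2 * np.sin(q1 + q2)])

def jacobian(q):
    """Analytic Jacobian d(x, y)/d(q1, q2) of the forward kinematics."""
    q1, q2 = q
    s1, s12 = np.sin(q1), np.sin(q1 + q2)
    c1, c12 = np.cos(q1), np.cos(q1 + q2)
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def task_space_step(q, x_target, gain=0.5):
    """One resolved-rate step: command a Cartesian error velocity and
    map it to joint velocities via the Jacobian pseudoinverse."""
    dx = gain * (x_target - forward_kinematics(q))
    dq = np.linalg.pinv(jacobian(q)) @ dx
    return q + dq

# Iterate the controller toward a reachable Cartesian target.
q = np.array([0.3, 0.6])
target = np.array([1.2, 0.8])
for _ in range(200):
    q = task_space_step(q, target)
```

The manual-definition drawback shows up directly here: the kinematics and Jacobian have to be derived per robot, and the pseudoinverse misbehaves near singular configurations, which is part of why Cartesian control is not appropriate for every task.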