Secondly, we do legitimately need to be concerned about what deepfakes will look like in five or ten years' time. We're seeing substantial algorithmic improvements in efficiency and quality, coupled with a multi-billion-dollar race to improve computational performance for deep learning tasks.
Deepfakes may never improve (unlikely), they may improve gradually with better algorithms and more compute, or a sudden breakthrough in either area may make fully convincing deepfakes commonplace. We could wait until that moment to start thinking about social and technological countermeasures, but I wouldn't recommend it. If there's anything to be learned from 2020, it's that we should be investing far more in preparing for low-probability, high-magnitude risks.