
They definitely don't all look crap, although speech style transfer is still pretty rudimentary. The first point I'd make is that you can mask most of the defects of current deepfakes simply by degrading the recording quality. A mediocre voice deepfake sounds pretty damned credible when played through a cellphone with poor signal and a bit of background noise. A mediocre video deepfake appears wholly believable if you post-process it to look like low-res CCTV. For adversarial applications, this would tend to increase rather than decrease credibility - we would expect a covert recording of someone doing something illegal or shameful to be of poor technical quality.
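
To make that concrete, here's a minimal sketch of the audio side of that trick, assuming a mono float waveform and nothing beyond numpy/scipy; the function name and parameter values are mine and purely illustrative:

    # Band-limit, downsample, and add noise so a synthetic voice
    # sounds like it came through a bad phone line.
    # Illustrative parameters, not tuned against any real detector.
    import numpy as np
    from scipy import signal

    def telephone_degrade(x, sr, target_sr=8000, snr_db=15):
        # Narrowband telephone band-pass (~300-3400 Hz).
        b, a = signal.butter(4, [300, 3400], btype="bandpass", fs=sr)
        x = signal.lfilter(b, a, x)
        # Resample to 8 kHz, discarding the high frequencies where
        # synthesis artifacts tend to be most audible.
        x = signal.resample_poly(x, target_sr, sr)
        # Mix in background noise at a fixed signal-to-noise ratio.
        noise = np.random.normal(0.0, 1.0, len(x))
        sig_pow = np.mean(x ** 2)
        noise_pow = sig_pow / (10 ** (snr_db / 10))
        x = x + noise * np.sqrt(noise_pow / np.mean(noise ** 2))
        return x, target_sr

The video equivalent is the same idea: downscale, recompress heavily, and drop the frame rate until it passes for CCTV footage.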

Secondly, we do legitimately need to be concerned about what deepfakes will look like in five or ten years' time. We're seeing substantial algorithmic improvements in efficiency and quality, coupled with a multi-billion-dollar race to improve computational performance for deep learning tasks.

Deepfakes may never improve (unlikely), they may gradually improve with better algorithms and more compute power, or there may be a sudden breakthrough in either area that makes 100% convincing deepfakes commonplace. We could wait until that moment to start thinking about social and technological countermeasures, but I wouldn't recommend it. If there's anything to be learned from 2020, it's that we should be investing a lot more in preparing for low-probability/high-magnitude risks.

https://www.youtube.com/watch?v=Ho9h0ouemWQ

https://intelligence.org/2017/10/13/fire-alarm/



