
Why would you be more concerned about false negatives? Wouldn't false positives erode trust and value in your product, and considering the applications you're targeting, possibly open you up to lawsuits if you start accusing innocent people of being deepfakes (which, IMO, currently seems unlikely)?



We provide a probabilistic percentage result that a trust and safety team uses to set limits (i.e., flag or block content), so it is not a binary yes/no. We search for specific deepfake signatures and explain what our results are identifying.
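For illustration, here is a minimal Python sketch of how a downstream trust and safety team could turn such a percentage into flag/block decisions. The score range, thresholds, and function names are made up for the example, not part of any actual API.

    # Hypothetical consumer of a probabilistic deepfake score (0.0 - 1.0).
    # Thresholds are illustrative; a real team would tune them to their risk tolerance.
    def moderate(deepfake_score: float,
                 flag_threshold: float = 0.6,
                 block_threshold: float = 0.9) -> str:
        """Map a deepfake likelihood to a moderation action."""
        if deepfake_score >= block_threshold:
            return "block"   # high confidence: remove the content outright
        if deepfake_score >= flag_threshold:
            return "flag"    # medium confidence: queue for human review
        return "allow"       # low confidence: take no action

    print(moderate(0.72))  # -> "flag"
    print(moderate(0.95))  # -> "block"

The point is that the service reports a likelihood and the customer chooses where to draw the lines, rather than the detector issuing a yes/no verdict.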


So... that percentage-style 'return type' lets the people using your service decide how aggressive they want to be? Smart. It could also turn your service into more of a tool and less of something that someone could blame for incorrect results.



