
I stumbled upon a talk from GRCon18 a few weeks ago where they did this [1], though on a much more restricted test case.

I find the idea interesting, but I'm curious how resistant it is to attackers who know you're using it, and how you'd design things so that inserting a backdoor without it being noticeable is hard.

[1] https://www.youtube.com/watch?v=gDGzc_4wcjM
