I respect your expertise and agree that good tools can help find potential vulnerabilities.

You aren't the first person to say that exploits created from stolen source code are rare and that such thefts are quickly leaked publicly when they happen. Why do you think that is? I would think that unethical players like NSO Group would have even more motivation to ensure the use of stolen source code is never revealed.




Because I've been doing this for years and I know how we find exploits; we don't need source. Why would NSO need it?

NSO isn't an "unethical" player; they are "ethical" within their own twisted ethics (which most of us don't agree with). They aren't a spy organization operating outside the law; they're a company building tools for (supposedly) law enforcement. Being caught doing something blatantly illegal like using stolen source code would be the end of them. They can't afford that risk, and they have absolutely no need to use source code anyway: there are zillions of binary-only techniques for finding exploitable bugs (e.g. fuzzing). Source code just isn't nearly as useful as you think it is.
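To make "binary-only" concrete, here is a toy mutation fuzzer in Python. Every detail (the ./target path, the seed bytes) is hypothetical, and real tooling like AFL++ or libFuzzer adds coverage feedback and far smarter mutation, but the core loop really is this simple: mutate an input, run the binary, and keep anything that makes it crash.

    import random
    import subprocess
    import sys

    SEED = b"GET / HTTP/1.0\r\n\r\n"   # hypothetical seed input for the target
    TARGET = "./target"                # hypothetical binary that reads stdin

    def mutate(data: bytes) -> bytes:
        """Flip a few random bytes of the seed."""
        buf = bytearray(data)
        for _ in range(random.randint(1, 8)):
            buf[random.randrange(len(buf))] = random.randrange(256)
        return bytes(buf)

    crashes = 0
    for i in range(100_000):
        sample = mutate(SEED)
        proc = subprocess.run([TARGET], input=sample,
                              stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL)
        # A negative return code means the process died on a signal
        # (e.g. -11 for SIGSEGV), which is what we're fishing for.
        if proc.returncode < 0:
            crashes += 1
            with open(f"crash_{crashes}.bin", "wb") as f:
                f.write(sample)
            print(f"crash #{crashes} (signal {-proc.returncode}) at iteration {i}",
                  file=sys.stderr)

Against a network service you would send the mutated input over a socket instead of stdin, but the idea is the same: no source code required, just a way to run the target and observe when it falls over.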

If you want a practical example: just a few weeks ago I got ahold of a peculiar, wholly undocumented embedded device (can't even find teardowns on the Internet, no public firmware downloads, etc.), and within one day I had a remote root exploit working. This wasn't using an existing CVE in a library; it was a bespoke bug in this device's firmware, and the exploitation involved reverse engineering two authentication token algorithms and a custom binary communications protocol. No source code. Obviously this isn't iOS, which is quite a bit more hardened, but it should give you an idea of just how easy it is to find exploitable bugs with just something like Ghidra, if you know what you're doing (I was: I looked specifically for a kind of bug likely to exist, narrowed down where it might be present, and eventually found a suspicious piece of attack surface that indeed turned out to be vulnerable; then it was just a matter of reverse engineering enough of the protocol and token requirements of that code to actually trigger it remotely).

I was actually kind of annoyed it took as long as a couple of hours to find it (once I had a decent understanding of the rest of the system); I was expecting even less, but it turned out they did a better job than I expected at avoiding some of the classic mistakes, though not a good enough one :).
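For a sense of what the end of that process looks like, here is a purely hypothetical sketch of the kind of client you end up writing once a token algorithm and the protocol framing have been recovered in Ghidra. The device in the story wasn't disclosed, so the serial-plus-magic MD5 "token", the header layout, and port 9000 are all invented for illustration; the point is only that once the protocol is understood, triggering a bug remotely is a few dozen lines of code, and the real work is the reverse engineering that gets you there.

    import hashlib
    import socket
    import struct

    HOST, PORT = "192.168.1.50", 9000          # hypothetical device address
    SERIAL = b"AB1234567890"                   # hypothetical device serial

    # Hypothetical "auth token" as recovered from firmware:
    # e.g. MD5(serial || hardcoded magic bytes).
    def make_token(serial: bytes) -> bytes:
        return hashlib.md5(serial + b"\x13\x37magic").digest()

    # Hypothetical framing recovered from the binary protocol:
    #   u16 msg_type | u16 payload_len | 16-byte token | payload
    def make_frame(msg_type: int, payload: bytes) -> bytes:
        header = struct.pack(">HH", msg_type, len(payload))
        return header + make_token(SERIAL) + payload

    # The actual bug trigger (e.g. an overlong field aimed at a fixed-size
    # buffer in the handler) would go in the payload; here it's just filler.
    with socket.create_connection((HOST, PORT), timeout=5) as s:
        s.sendall(make_frame(0x0004, b"A" * 512))
        print(s.recv(1024))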


Thanks for the insight. It is super informative!



