I remember reading somewhere that hardcore foreign teams stopped going to pwn2own once they realized that they were
1) effectively disclosing valuable zero days in a dark room
2) to the target manufacturer
3) and the pwn2own organizer who happens to be indirectly sponsored by the NSA
In exchange for what is pretty much just kudos and pocket change
I'm active in the CTF crowd, and I think the general feeling is that you only want to burn your lower tier exploits at pwn2own. Usually these will be the exploits for things that are harder to sell to an exploit broker. You probably won't see many top tier vulnerabilities at pwn2own.
Zerodium is definitely a bit too well known for its public payouts to reflect actual market pricing, but their tier graphic is pretty accurate (https://zerodium.com/program.html). Notice how most of the things hacked at pwn2own are at the bottom of this graphic.
Selling to an exploit broker is pretty much always more profitable for higher tier exploits, and usually if you're in this industry and want to make bank you'll just work for a company like dfsec, where you get bonuses for finding these kinds of exploits anyway.
Generally, the way you're paid if you sell to an exploit broker is in installments over time, to prevent this kind of business.
I'm not sure about whether it's your intellectual property or not. Can exploits be "intellectual property"? Like sure your specific PoC could be considered intellectual property, but if you were to rewrite it differently (and there is always another way to exploit a vulnerability) then I have no idea whether that would still count or not.
What do they do with the vulnerabilities instead of going to pwn2own? If they report vulnerabilities to the manufacturer directly, how does the pay compare to the pay from pwn2own?
I'm a professional hacker. Almost all my colleagues got started by asking a simple security question like, "How do I spy on what people are browsing on my home wifi?" and then googling the current tools for it. The next step is writing a Python script to automate the process, or contributing to the GitHub repo of the tool you used to improve it. Less than 10% of my general colleagues (and an even lower % of the best colleagues) actually went to school for security. It's similar to any other IT work, like getting printers connected to a network or deploying an Active Directory domain across your company: google it, then use the tools and tutorials afforded to you. Basic networking/linux/windows/programming knowledge is usually the prerequisite, but the keyword there is "basic".
So how do you spy on people on your home wifi? If I run a reverse proxy on my computer and then make the router route traffic through it, will Chrome detect that it's going through a MITM?
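(Short answer: yes. A MITM proxy has to terminate TLS and re-sign traffic with its own certificate, which won't chain to a CA the browser trusts, and for pinned sites even a trusted-but-wrong cert is rejected. A toy sketch of the pinning check, with made-up byte strings standing in for real DER certificates:)

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate, as browsers display it."""
    return hashlib.sha256(cert_der).hexdigest()

# Hypothetical stand-ins for real certificate bytes.
real_cert = b"leaf certificate signed by a trusted CA"
proxy_cert = b"certificate minted on the fly by the MITM proxy"

# A pinned fingerprint the client already knows ahead of time.
# The proxy cannot mint a cert with a matching fingerprint
# without the original key material, so the check fails.
pinned = fingerprint(real_cert)

print(fingerprint(real_cert) == pinned)   # True: the genuine cert matches
print(fingerprint(proxy_cert) == pinned)  # False: the proxy's cert does not
```

In practice (e.g. the standard mitmproxy setup) you install the proxy's root CA on your own devices, after which the browser accepts the forged chain for non-pinned sites; otherwise you get the full-page certificate warning.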
There must be more efficient ways to learn how to hack than hanging out at the local Starbucks waiting for the hackers to come rolling in to do some latte hacking.
It’s probably a lot less time consuming and risky than “going to market”. Reminds me of a panel at DEFCON this year that dealt with the fact that this stuff is increasingly covered under export restrictions and a lot of the ecosystem is trust based.
Some probably don’t have connections they trust to sell to, and buyers might not trust sellers not to re-sell their exploits
There has been an increase from ~50 code execution bugs per year (2016-18) to more than 100 per year (2019-2022) in Windows 10, which makes us think that software has gotten worse.
Presumably the vast majority of the vulnerabilities in Windows 10 have been in Windows 10 from its release date.
So it is reasonable to assume that we detect more bugs, not that the actual number increased.
I still notice a raft of bugs in iPadOS, including egregious UI errors in new features like multitasking, but the quality of iPadOS is generally improving (anecdotally).
It's far more likely that software is improving, but the tools to break it, and the number of people willing to invest the time in learning those tools, are increasing. The fact is there are absolutely untold amounts of bugs in every piece of software ever written. Finding those bugs, up until the past decade of automation tools to find/exploit them, has been extremely time consuming, with few people patient or curious enough to improve their skills in that arena.
Think about the number of programming frameworks that've come out in the past few years. Or GitHub Copilot and ChatGPT which literally write solid code for you. There's no way software has not improved a lot in the past decade but there are WAY more software devs than there are exploit devs or hackers. Similar to the ratio of lions:wildebeest. Way more prey than predators.
Except Microsoft should have world-class internal teams creating fuzzing tools and applying open source ones: you would hope that Microsoft had kept ahead of the hoi polloi.