The real problem here is that Apple is so ridiculously controlling about who is allowed to develop software (in Apple's perfect world, all software development would require an Apple license and all software would require Apple review)--in a legal area that isn't really conducive to that (see Sega v. Accolade, which was important enough to later ensure permanent exemptions for reverse engineering, and even jailbreaking, for software interoperability purposes under the DMCA's anti-circumvention provisions)--that they are, right now, suing Corellium, a company which makes an iPhone emulator (which, again, has strong legal precedent behind it), in order to prevent anyone but a handful of highly controlled people from being able to debug their platform.
Apple just has such a history of being hostile to security researchers--banning people like Charlie Miller from the App Store for demonstrating faults in their review process; pulling Stefan Esser's vulnerability-detection app; slandering Google Project Zero; denying the iPhone 11's location tracking until proven wrong; requiring people in their bug bounty program to be willing to irresponsibly hold bugs indefinitely so Apple can fix things at its leisure; and using DMCA takedowns to try to squelch research--that this ends up feeling like yet another flat gesture: they should have offered much more than this device at least a decade ago. I'd say Apple is in store for a pretty big fall if anyone ever manages to get a bankroll large enough to actually fight them in court for any protracted length of time :/.
Anyone can debug their platform, as they have been. You just need to be approved for this specific program.
Apple's case against Corellium is about intellectual property, and it's frankly going to be a slam dunk in court. There's already established precedent from Apple v. Psystar, which involved an almost identical set of facts.
Since not just anyone can be approved, I don't think I'd consider that "anyone".
I mean, I guess technically "anyone" can learn to become a security researcher and spend years building up a "proven track record of success in finding security issues on Apple platforms, or other modern operating systems and platforms", but that's not generally how I think of the term "anyone". ;)
I took the parent to mean you "just need to be approved" to do debugging via this program, but not to use other means of debugging, which don't require approval--hence anyone can do it.
I tend to like Apple in general, I think they do a lot of things right, but I feel there are a few things they do that are clearly about money and not the purported reasons. I guess no company is impervious to that.
That said, on the back end, outside of devices, they don't implement the same precautions--e.g., encrypting iCloud data/backups such that they themselves have only limited access. I would like consistency, is all.
One way or the other.
I would love it to be encrypted throughout, even if it means that if I lose access I may not be able to get it back. It's a tradeoff. The marketing position of being secure is factually true for the phone, but they imply the data is secure too, when in fact it really isn't.
Anyone can take advantage of the unpatchable boot ROM flaw (checkm8) on iDevices with the A11 SoC or earlier, which allows you to exert full control over the device and any current or future version of iOS that runs on it.
>For security researchers, this is a huge boon, which should help them analyze any version of iOS that will run on an iPhone X or older. Since iOS research really can’t be done on a device that hasn’t had security restrictions lifted somehow, this will likely become one of the most important tools in researchers’ toolkits. This can benefit iOS users, as it can enable researchers to locate issues and report them to Apple.
You don't need Apple's permission.
I do agree with saagarjha and jedieaston's sibling comments, however. Checkm8 is great, but it's temporary.
Apple would not like “anyone” debugging their platform. The fact that people have jumped through hoops to do so is considered a bug and something that they actively try to prevent.
Well, if you don’t like closed boxes, just don’t buy a closed box.
In light of that, your argument is "Just don't hold Apple accountable!", which Apple would love... but that would also be harmful to consumers.
Well, then by that statement alone security researchers aren’t just “anyone”.
I don’t see a problem in a company imposing restrictions that make finding vulnerabilities harder for everyone in general provided they are willing to allow security researchers to jump those restrictions.
In fact I actually see it as a big win, since now bad actors have significantly higher costs while good actors that are legally liable have lower barriers to entry.
(And no: I entirely disagree that "bad actors" have significantly higher costs because of this, as bad actors can do illegal stuff like buy internal developer prototypes off the black market from corrupt factory employees: there was a massive exposé about this in Forbes last year. Hell: Apple bugs are actually less valuable on the black market now than Android bugs because there are so many of them! Apple's attempts to hide their devices from public scrutiny are about PR, not part of some coherent security strategy.)
I may just not be understanding you; maybe we just agree that this program doesn't change a whole lot.
Oh come on... I am a security researcher, and I have definitely had multiple opportunities to buy a stolen prototype device (as I am sure you would have also... but I also assume you don't need it as your company is one of the only companies in this space I have seen actively consulting for Apple--which I frankly feel like maybe you should be disclosing here? I guess you might still not have access to dev-fused devices as I have some vague memory of figuring out that you worked on server security... still :/--so maybe you don't pay as much attention or are as tempted as those of us on the outside); like just about every legitimate person who stumbles upon this, I said no, as I don't want to do something actively illegal (such as trafficking in stolen goods). Are you seriously trying to argue that I should be doing actively illegal things to do research?
> I may just not be understanding you; maybe we just agree that this program doesn't change a whole lot.
I never claimed the program (which I will assume you mean the device program, though I think this also mostly applies to the bug bounty program itself) did? I said "anyone who wants to should be able to buy such a device" and "this ends up feeling like yet another flat gesture" (and then cited numerous specific ways in which Apple clearly works against security research on their devices).
You are then here responding to a comment where I am defending against someone who is claiming anyone can do security research (as Apple can't legally stop me... which is "technically true" but "useless" as they can still sue me--as they did my friends at Corellium--and I can't afford to defend myself) and this device is sufficient as anyone can get one (if only they are willing to get over a few "hurdles") by explaining why most security researchers would not take part in these programs (which is, in fact, an argument for why this program "doesn't change a whole lot"). The argument is that Apple needs to do _more_, to put good actors (who have nothing but these programs and bootrom exploits for older devices) on the same level as bad actors (who have comparatively little issue doing research).
(2) I'm a software developer at Fly.io.
(3) Latacora, my previous security company, did no work at all for Apple.
(4) I have no idea what you mean by "server security"; you are probably thinking of someone else.
(5) I'm not asking whether you think it's OK that Apple sued Corellium. Most people in software security are not happy that Apple sued Corellium; I'm not going to be the oddball pissing into the wind this time with a contrary take.
If we agree, we agree, and it sounds like we do: one might not believe that the SRDP meaningfully improves security research on the iPhone, but it's hard to make an argument that the SRDP _harms_ it.
(The more important disclosure is that I don't specialize in the kind of work that would likely benefit from an unlocked phone.)
I understand the subtext that Apple could more efficiently help software security researchers by freely unlocking phones, but I'm not here to litigate that.
To individual researchers, yes, this gives them a new option–I guess that is good? What I am concerned about is that it is an attractive option for them and they get locked into whatever disclosure timeline/research focus Apple wants them to have. You could of course say that they could leave the program at any point and go back to how it was before, but I think people are generally reluctant to lose access to things.
Arguments about the legitimacy of Apple's locked platform are among the most boring we can have on HN, and date all the way back to the origin of HN. But arguments about the specific terms in the SRDP, or even Apple's bug bounty, are super interesting.
I was already very specific: "holding bugs indefinitely without public disclosure no matter how long it takes Apple to fix the issue" is the exact quote that I used after "clauses most security researchers consider unethical" in the comment that you replied to and which we were arguing about ;P.
In said comment, I noted that I wasn't sure if that clause only affected the bug bounty program, or if it also applied to the Security Research Device program (which is crazy, as the terms are right there: I must have just let them all blur together in my head); of course, as this is Apple we are talking about, there was no real risk that they would have suddenly decided to be reasonable, and so they are even more explicit about this immoral clause in this new program.
> Researchers must: Not disclose the issue publicly before Apple releases the security advisory for the report. (Generally, the advisory is released along with the associated update to resolve the issue).
> If you report a vulnerability affecting Apple products, Apple will provide you with a publication date (usually the date on which Apple releases the update to resolve the issue). Apple will work in good faith to resolve each vulnerability as soon as practical. Until the publication date, you cannot discuss the vulnerability with others.
I have many friends who believe in simultaneous disclosure, and I know many people who believe in "responsible" disclosure (with its associated deadlines before public disclosure); I have met almost no one who believes that this "tell Apple and give them indefinitely long to fix the issue without telling anyone else about it" disclosure model is legitimate (I'm sure they exist, but they are certainly a small minority).
This has also been discussed in a different thread on this same post https://news.ycombinator.com/item?id=23920454 with a link to someone from Google Project Zero expressing their disappointment with these same clauses "which seem specifically designed to exclude Project Zero and other researchers who use a 90 day policy".
My hope is that they'll settle the case and Corellium's assets go to Apple and the founders become employees and continue to work on their product, because clearly a virtualized service is better than a physical device. But perhaps there are other legal reasons I'm not aware of why they'd still want to do this program with physical devices.
Apple's case directly cites their Security Research Device program in the first paragraph of its introduction to "Corellium's Infringing Product"... and, notably, also pushes the idea that one of the things Corellium supposedly infringed was Apple's GUI Elements, which feels a bit ridiculous to me... (I don't feel like Apple had their best lawyers work on this one ;P).
> Corellium is “a startup that sells a product that allows users to create virtual instances of almost any iOS device in the world.” Corellium’s product creates exact digital replicas of Apple’s iOS, iTunes, and GUI Elements (referred to here as the “Corellium Apple Product”), available via either Corellium’s web-based platform or a privately installed, Corellium-provided platform. Corellium admits that its product will compete with Apple’s iOS Security Research Device Program.
Their case also attempts to directly push at the problem using DMCA Section 1201 language, and notes that one of the things Corellium is used for is to jailbreak your device; the language used claims that these jailbreaks--which are the alternative constantly cited here for what security researchers can use to learn about and test on iOS (ironic, as they are themselves failures of security)--are "unlawful ends" (which isn't true, but the fact that Apple wants this to be true so badly demonstrates their distaste for people being able to access their own hardware).
> The Corellium Apple Product also provides users with the ability to “jailbreak” virtual iOS devices. Jailbreaking refers to the act of modifying iOS to circumvent the software restrictions that prevent unfettered access to the operating system. Corellium openly markets the ability of its technology to “jailbreak... any version” of iOS. Corellium provides its jailbreaking technology to all its customers, regardless of their purpose.
> On April 1, 2019, Corellium again highlighted the unlawful ends to which its product is aimed by publicly acknowledging that it had given access to its platform to the developers of code used to jailbreak iOS devices called “unc0ver,” so the developers could test the jailbreaking code “on any device running any firmware” and distribute that code to the public. Within weeks, those developers released a new version of unc0ver that allowed jailbreaking of iOS 12. In other words, Corellium has admitted not only that its product is designed to circumvent technological protection measures Apple puts in place to prevent access to and infringement of its copyrighted works in iOS, but that it has aided and abetted the creation and trafficking of other software that is also designed to circumvent those same technological measures.
In other cases, e.g. Android/Chromebooks, there's a common, immutable early chain of trust that stays the same between production and development devices (or in this case, between rooted and unrooted devices), which pops up a message during boot warning that the device is currently a development/rooted device and therefore should not be trusted for production use cases. It could just as well also say "DO NOT BUY THIS PHONE ON THE SECONDARY MARKET; IT HAS BEEN TAMPERED WITH, AND CANNOT BE TRUSTED WITHOUT FACTORY RE-VERIFICATION"--and users could be told repeatedly, in the company's messaging, to look for such warnings at boot before buying.
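To make that concrete, here is a toy sketch of the kind of boot-time check being described (purely illustrative, in Python; this is not Android's or ChromeOS's actual verified-boot code, and the message text is made up):

    import time

    def warn_if_development_device(unlocked: bool) -> None:
        # Hypothetical sketch for the discussion above, not real bootloader
        # code: a locked production device boots straight through, while an
        # unlocked/rooted one shows a warning the user cannot miss.
        if not unlocked:
            return
        print("WARNING: this device is in development/rooted mode.")
        print("DO NOT BUY THIS PHONE ON THE SECONDARY MARKET; IT HAS BEEN")
        print("TAMPERED WITH AND CANNOT BE TRUSTED WITHOUT RE-VERIFICATION.")
        time.sleep(5)  # keep the warning on screen before boot continues

    warn_if_development_device(unlocked=True)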
Think of it like a classic USB drop attack, but a bit more expensive: you install your remote management code on a phone, box it up like new, and drop it at the door of some wealthy person's house. I'd bet they would happily assume it's a wrong delivery and start using it if it's an upgrade over their current phone.
You swap it physically for the target's phone on the table, netting you the target device.
Moments later, when they pick up a phone that looks just like their own and enter a PIN several times, you now have both their phone (from when you swapped it) and the PIN to unlock it (from the broadcast), allowing you full use of the device, offline, at your leisure. The target is now confused as to why their phone isn't unlocking, and may not detect the attack for hours.
Apple really should put these audit devices in a big, boxy, couldn't-possibly-be-mistaken-for-an-iPhone case.
You might as well let the user in while you’re at it, so it’s truly undetectable.
> Apple really should put these audit devices in a big, boxy, couldn't-possibly-be-mistaken-for-an-iPhone case.
Someone in Shenzhen is spinning up their CNC machine as you speak to change that to “you could probably show it to a Genius and they wouldn’t be able to tell at a glance”.
I was thinking that the board might need to be larger, too, to make sure it couldn’t easily be transplanted.
Wouldn't that be costly from an assembly perspective? Economies of scale and all that.
Idk, this all seems much too spy-novel-esque for me. You could also install a hidden camera in the victim's room, or modify the phone to capture the video-out signal.
It sounds like a spy novel because spies spy on people who use regular, everyday hardware. A rooted iPhone is an extremely useful tool to that end.
Do you know of any instances where this happened with devices that can be rooted? (Computers, most Android phones, iPhones vulnerable to Checkm8)
The leveraging of Android malware for espionage (corporate and military both) is well-documented in the media.
"I'm handing out a bunch of water bottles; sign up here. The contents remain my property so when you're done please urinate back into them and return to me on demand."
I'd like to see Apple survive having to recreate half their software from scratch.
The license structure you suggest sounds kind of interesting.
Still, it seems better for overall security to have this program exist than to not have a program at all.
But the Android ecosystem as a whole is not 'open source' and not 'open' in the way people like you describe. Almost all access is due to hacks and bugs, not because the bootloader and the OS shipped with an option for user access on the inside.
But it's so, so far ahead of iOS in openness. You can sideload apps. You can spin up Android in a VM. You can buy Android devices with unlocked bootloaders and install your own ROMs or drivers or kernel modules or whatever else. The system can be made yours to command.
iOS doesn't even let me run my own code in userland.
This has been possible for a long time. Yes, there are some hoops to jump through (7 day time limit per signing unless you pay), but to say it doesn’t allow you to run your own code is just wrong.
Google though... I really hate their stance with SafetyNet. It's as if they're trying to say "your device can either be fully useful or fully under your control, but not both".
I still don't get why iPhones absolutely can't have unlockable bootloaders.
This particular case is also outrageous for other reasons:
1) They are only doing this now because Corellium has been selling virtually the same thing for a while already.
2) They are doing this to try and hurt Corellium financially, while they're already suing them in parallel.
3) Agreeing to their terms here, effectively makes you a glorified Apple QA engineer. Only you don't get a salary, but rather, a bounty for whenever you find a bug. For most people that would be way, way less money than just being employed wherever.
And, of course, that's the case with Corellium as well; it's not like Hopper or Binja, a tool that random people just buy to kick the tires on. The front page of Corellium's site is a "contact sales" mailto; the term of art we use for that pricing plan is "if you have to ask...".
Apple would almost certainly win the suit, but I think there's reasonable odds the suit would survive an early motion to dismiss before factual discovery.
 "If you use the SRD to find, test, validate, verify, or confirm a vulnerability, you must promptly report it to Apple and, if the bug is in third-party code, to the appropriate third party. If you didn’t use the SRD for any aspect of your work with a vulnerability, Apple strongly encourages (and rewards, through the Apple Security Bounty) that you report the vulnerability, but you are not required to do so."
To show that "you aren't eligible for bounties on any bugs you find while using it" is false, let's break Apple's quote into two separate statements, and only consider things that are explicitly stated in them.
> If you use the SRD to find, test, validate, verify, or confirm a vulnerability, you must promptly report it to Apple and, if the bug is in third-party code, to the appropriate third party.
If you use SRD in the process of discovering a vulnerability, you have to disclose it to the software authors. Got it.
> If you didn’t use the SRD for any aspect of your work with a vulnerability, Apple strongly encourages (and rewards, through the Apple Security Bounty) that you report the vulnerability, but you are not required to do so."
If you don't use SRD, Apple strongly encourages you to report the vulnerability. But they have no way to force it.
I understand how a casual interpretation of those quotes could be seen to imply that SRD excludes you from bounties, but that's not what Apple is saying.
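To make the two clauses concrete, here is the logic as I read it, written out as a rough sketch (my paraphrase in Python, not Apple's wording):

    def srd_reporting_obligation(used_srd_for_the_bug: bool) -> str:
        # My paraphrase of the two quoted clauses; not Apple's wording.
        if used_srd_for_the_bug:
            # Clause 1: reporting is mandatory and prompt (to Apple, and to
            # the third party if the bug is in third-party code).
            return "must promptly report it"
        # Clause 2: reporting is encouraged and bounty-rewarded, but optional.
        return "encouraged (and rewarded via the bounty) to report it, but not required"

    print(srd_reporting_obligation(True))   # bug found using the SRD
    print(srd_reporting_obligation(False))  # bug found without the SRD

Note that neither branch says anything about losing bounty eligibility.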
> Vulnerabilities found with an SRD are automatically considered for reward through the Apple Security Bounty.
I would say that pretty explicit about being eligible for bounties.
Full bullet: If you use the SRD to find, test, validate, verify, or confirm a vulnerability, you must promptly report it to Apple and, if the bug is in third-party code, to the appropriate third party. If you didn’t use the SRD for any aspect of your work with a vulnerability, Apple strongly encourages (and rewards, through the Apple Security Bounty) that you report the vulnerability, but you are not required to do so.
If you use the SRD, you are required to report any vulnerability. If you didn’t, you are not required but encouraged.
It doesn’t say that if you used it, you aren’t eligible for a reward.
Apple announced these devices last year at Black Hat.
In addition to the mandatory bug reporting, Apple reserves the right to dictate a mandatory publication date to researchers. No more 90/180-day responsible disclosure deadline policy. I highly doubt any serious researcher would agree to work under such conditions.
So vulnerabilities found through this program are not eligible for any reward. Then what would be the incentive to enroll (and to accept liabilities like losing the device, Apple suspecting you of breach of contract, etc.)? Just bragging rights?
This is huge. Not as a security device, but if this were the normal permission model on all iPhones (e.g. owners of devices get root on the devices they own... like a normal general purpose computing device) I could ditch my android and my mac and use an iPhone for everything.
I'm not saying this will ever happen, but in my mind this paints a bright picture of what the iPhone could be.
It's also a bit sobering as I'm quite concerned Apple is actually pushing the other direction in their shift from Intel to ARM.
Honestly the mac or desktop is where I enjoy the openness and do stuff I want to do. I would want to leave the phone untouched and as secure as possible.
I would like to hear your and others' take on it though.
One major issue is that Apple's security model is "we don't trust you". And by that I mean everything works from their root of trust, not yours. This isn't the usual "I think Apple is backdooring my iPhone, FWIW"; what I'm really saying is that I want the ability to elevate some of my software to the same permissions that Apple gives theirs. There is no reason that I should not be able to vet my own code and add it to the "trust cache". So this isn't just "every app should run without a sandbox"; it should be "the GDB that I personally vetted should be able to attach to other apps, but nothing else".
One of the obvious conclusions from modern security research is that the user has become the number one vulnerability in pretty much all systems.
The corollary that a lot of people miss is that developers and security researchers are users too. They get pwned too. They sometimes give the wrong permissions to an executable.
Let me ask you - do you have your elderly parents on an Android? Then you already know how totally owned those phones can become.
Half of these developer/root-mode-required secrets are going to be occasionally-working mods and tweaks, except with tons of baked-in spyware and ads that can no longer easily be removed.
Perhaps some sort of per-device profile which requires a paid developer account could work, but I’ve gotten a number of odd calls from family about YouTube videos involving Kodi before, so I’m not sure about trusting users on the “give users freedom” front.
However if someone wants to be an idiot, how far do you go to stop them? Apple's approach stops too many great possibilities for knowledgeable users. It should be in the same category as those "will it blend" types. Screw it up? No warranty.
For me, there are several things I need that are impossible because Apple won't allow them, so I have to use Android. But that comes infected with Google spyware out of the box :(
Great, which is why I think offering a separate SKU to people who want control over their devices would be a wonderful compromise. Your parents can buy the normal locked-down iPhone that's sold in the Apple Store, and I'll buy the special one from the hidden page on Apple's website.
Not in my experience and certainly not evidenced by all the YouTube videos advocating disabling this or that security feature for questionable gain, including ones with scary text and/or strange key combinations.
Personally, I find it a bit sad and disturbing that people will so willingly and eagerly help --- even for free --- these companies put tighter nooses around themselves and others' necks.
(I'm not "Stallmanist", in that I'm not necessarily advocating open-source; but I am a strong proponent of being able to control what one's computing devices runs, regardless of source availability or even legality. In that sense, what Apple does with its walled gardens is really a strong DRM.)
Apple doesn’t secure and lock down their devices by default to please/upset a bunch of nerds on HN; they do it because normal people don’t know a damn thing about technology. People who “willingly and eagerly help” to find bugs and patch them are heroes. Just like that.
Of course not, they lock these devices only because it makes them more money. They really want you to pay that 30% cut, "security" is just an excuse. There's plenty of more open and more secure platforms than the iPhone, just have a look at the web.
Compared with a computer where every program can steal all your data?
Incorrect; it never was true that that was the only way, or even the common way.
I was actually wondering what was taking so long.
I think there are other downsides to switching off of x86, but I think it strengthens the case for having one small portable computer to do everything. The question is if that device will allow real work like macOS, or if it’ll be stuck as a fancy consumer-only device..
Maybe just go for a PinePhone instead? I mean, Linux GUIs aren't fully mobile- and touchscreen-friendly yet, but they're getting there real quick--they only started in November 2019.
In my opinion the PinePhone is the most promising device, as all upstream projects use it as an official developer device and upstream Linux has integrated support.
Also, if you found an exploit on a research iPhone because you made use of entitlements that were Apple-only, I wonder if that'd be worth anything bounty-wise. Nobody can/should be able to write an app that'll get through App Store review if they asked for PLZ_NO_SANDBOX_ILL_BE_GOOD or something (at least, that's what I thought before the whole Snapchat system call thing happened). But hypothetically the App Store review process is vulnerable to a bad actor inside Apple pushing an update to a big app that included malware, so I'd think that private entitlements shouldn't be available at all to binaries that didn't ship with the device/in a system update (unless some kind of hobbyist flag was flipped by the consumer). So I'd say that would be worth something, even if less than a more interesting exploit.
> Nobody can/should be able to write an app that'll get through App Store review if they asked for PLZ_NO_SANDBOX_ILL_BE_GOOD or something (at least, that's what I thought before the whole Snapchat system call thing happened).
Snapchat (on iOS at least) is still subject to the app sandbox; no app on iOS has been granted an exception there to my knowledge. On macOS there are apps that are “grandfathered in” to not require the sandbox on the App Store, but new apps are supposed to have it. Due to the way the dynamic linker works, until recently it was possible to upload an app that could bypass the sandbox, but Apple has said they have fixed this. Some apps do have an exception to this as well, as the broad way they fixed one of the issues broke legitimate functionality in library loading. You can find those hardcoded in AMFI.kext; theoretically they could turn off the sandbox for themselves if they wanted.
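If anyone wants to check this for a particular Mac App Store app, one quick way is to dump its entitlements and look for the App Sandbox key. A rough sketch in Python (the app path is made up, and codesign's ":-" output format varies a bit across macOS versions, so treat this as illustrative):

    import subprocess

    # Hypothetical path; substitute any Mac App Store app you want to inspect.
    APP_PATH = "/Applications/SomeApp.app"

    # Ask codesign to dump the embedded entitlements blob to stdout.
    result = subprocess.run(
        ["codesign", "-d", "--entitlements", ":-", APP_PATH],
        capture_output=True,
    )

    # A sandboxed app carries the com.apple.security.app-sandbox entitlement.
    if b"com.apple.security.app-sandbox" in result.stdout:
        print("Declares the App Sandbox entitlement")
    else:
        print("No App Sandbox entitlement found (or codesign failed):")
        print(result.stderr.decode(errors="replace"))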
Booting custom kernels is not supported at the moment but as has been noted "the Mac remains the Mac" and booting a custom kernel is allowed on the Mac.
And of course you can disable SIP.
Developer and hobbyist scenarios are an explicitly supported workflow on the Mac. Default security policies need to be the right thing for the vast majority of users but that doesn't mean anyone wants to take away your ability to do all kinds of interesting things to the system.
However, I do still stand by my complaint; neither of us can go into too much detail of course but I think you understand that taking chips that were made to run iOS and with hardware-backed guarantees of certain properties for integrity on consumer systems makes for a poor experience when trying to do things like debug and patch the kernel. I mean, is it theoretically possible to debug the kernel? Yes, because they have been enabled superficially, but the experience of using them is much worse than you’d get on Intel (and not to mention developer-fused hardware). Personally I was only able to get it to work partially, and suspect it is even more broken/limited than how the KDK says it is; here is what I’m talking about: https://developer.apple.com/forums/thread/653319. If you aren’t aware, it took almost three weeks before someone could get a “hello world” up, so there is a real drag associated with this.
Again, I’m happy and pleasantly surprised to have these things, at least on macOS; it’s completely possible that these are just unintentional bugs or transitional issues or whatever, if they end up fixed I promise I will stop complaining about this particular thing. But I would like to emphasize that I do not consider the current state of affairs as laid out by the KDK to really count, regardless of the effort being put into this to make it work, which I fully understand helps back up the claim that “the Mac remains the Mac”.
Given that kext development is still supported (although highly discouraged), won’t they have to support the same level of kernel debugging as usual?
> On macOS there are apps that are “grandfathered in” to not require the sandbox on the App Store
Can you name any of these apps? Apple’s own apps don’t have to be sandboxed (like Xcode or macOS installers), but I don’t know of anything else that gets an exception. Some apps like Office get special “holes” out of the sandbox (in the form of additional SBPL), but fundamentally they’re still sandboxed.
They just need to support loading kernel extensions. As watchOS has shown, developers will figure out a way to get their thing working on your device even if you make debugging extremely painful. (Apple's current silicon prevents kernel debugging entirely because the kernel is prevented from being patched in hardware.)
> Can you name any of these apps?
Sure. If your app's bundle ID matches one of
My big worry is them dropping terminal access altogether like on iOS. That would really make the platform useless to me.
However, I don't think they would do this at this point. There are many user groups (like cloud developers) specifically favouring the Mac because of its strong terminal access.
I don’t think this is technically possible.
I mean, if I find a bug I might report it, but I know people who work on jailbreaks and stuff–if they tell me something, will I have to promptly report it? What if I find something on a non-SRD device? If I ever hypothetically "write a jailbreak", will Apple come after me even if I say I didn't use that device for it? I can get 90% of the benefit from using a device with a bootrom exploit, with none of the restrictions here…
According to the terms, no--unless you use the SRD to verify the information or vulnerability.
>If I ever hypothetically "write a jailbreak", will Apple come after me even if I say I didn't use that device for it
I imagine that if you sold a jailbreak for $$$$ that Apple would probably take a close look at the telemetry the device is sending. If you're confident in your ability to terminate all telemetry, and keep good opsec, and defend yourself in court, then maybe that avenue would be feasible. It certainly wouldn't be ethical.
If you want to interact with anyone making jailbreaks, don't do it. If you think you're going to develop a jailbreak, don't join the program.
They have better lawyers than you can afford. And if you're sued for breach of contract, can you afford it?
>even if I say I didn't use that device
is a huge difference from
>even if I didn't use that device
for us. So you can see why I'm a little touchy.
If your intentions are good, even if you're doing all the right things, you'd be playing with fire. To be honest, the people they hand out SRDs to probably have an excellent working relationship with Apple already, anyways - skirting the line would probably preclude you from having an SRD or getting second-year access.
> If you use the SRD to find, test, validate, verify, or confirm a vulnerability, you must promptly report it to Apple
But let's say you pass their review, get a device, find a vulnerability, and don't report it. Then what? You're breaching the contract, but they have no way to know that, so there's no consequence?
> The poor security on android side
For example, if the OS isn't quite at the same level as the release OS, it could be an issue.
That said, this is not my field, and I am not qualified to offer much more than the vague speculation above.
A few days later another researcher reported earning $75k for webcam access vulnerabilities: https://www.ryanpickren.com/webcam-hacking
These payments are not uncommon.
On the bright side, it will be very useful for jailbreak research and in a way, those bugs _do_ get disclosed to Apple for them to subsequently fix. Not necessarily the way Apple wants, but it does shine daylight on their code.
These guys keep working exploits close to their chests and don't release them, specifically so they can get a look at new hardware. That will no longer be necessary. You find an exploit, you can release it right away.
And on the gripping hand, it will also be used by malicious criminals and state actors to develop zero days for various evil purposes.
It’s useless for jailbreak research because Apple will force you to shut up about it at least until they patch it, so now you can’t jailbreak.
However, finding a bug, reporting it, and then having a jailbreak 'suddenly' appear that uses it would be highly suspicious indeed. So they'd probably have to give up the chance of getting the bug bounty.
PS: I'm certainly not signing that NDA myself :)