Apple Security Research Device Program (developer.apple.com)
330 points by dewey on July 22, 2020 | 186 comments



Anyone who wants to should be able to buy such a device, as it isn't like any of the machine code you are getting elevated access to is even secret (you can download, from Apple, unencrypted copies of the entire operating system). (You can try to make an argument that this is about keeping you from getting access to third-party encrypted assets to prevent some aspect of piracy in the App Store, but this doesn't accomplish that either as you need only have a single supported jailbroken device for that to be easy, and the world already has millions of those and you can't really prevent them as the act of fixing bugs discloses the bug for the older firmware.)

The real problem here is that Apple is so ridiculously controlling with respect to who is allowed to develop software (in Apple's perfect world, all software development would require an Apple license and all software would require Apple review)--in a legal area that isn't really conducive to that (see Sega v. Accolade, which was important enough to later ensure permanent exemptions on reverse engineering and even jailbreaking for software interoperability purposes in the original DMCA anti-tampering laws)--that they are even working right now on suing Corellium, a company which makes an iPhone emulator (which again, has strong legal precedent), in order to prevent anyone but a handful of highly controlled people from being able to debug their platform.

Apple just has such a history of being anti-security researcher--banning people like Charlie Miller from the App Store for showing faults in their review process, pulling the vulnerability detection app from Stefan Esser, slandering Google Project Zero, denying the iPhone 11 location tracking until proven wrong, requiring people in their bug bounty program to be willing to irresponsibly hold bugs indefinitely so Apple can fix things only at their leisure, and using the DMCA to try to squelch research via takedowns--that this ends up feeling like yet another flat gesture: they should have done much more than this device at least a decade ago. I'd say Apple is in store for a pretty big fall if anyone ever manages to get a bankroll large enough to actually fight them in court for any protracted length of time :/.


>that they are even working right now on suing Corellium, a company which makes an iPhone emulator (which again, has strong legal precedent), in order to prevent anyone but a handful of highly controlled people from being able to debug their platform.

Anyone can debug their platform, as they have been. You just need to be approved for this specific program.

Apple's case against Corellium is about intellectual property, and it's frankly going to be a slam dunk in court. There's already established precedent with Apple v. Psystar with an almost identical set of facts.


> You just need to be approved for this specific program.

Since not just anyone can be approved, I don't think I'd consider that "anyone".

I mean, I guess technically "anyone" can learn to become a security researcher and spend years building up a "proven track record of success in finding security issues on Apple platforms, or other modern operating systems and platforms", but that's not generally how I think of the term "anyone". ;)


> Since not just anyone can be approved, I don't think I'd consider that "anyone".

I took the parent to mean you "just need to be approved" to do debugging via this program, but not to use other means to do debugging, which you don't need approval for, hence anyone can do.


The deal is that anyone willing to cough up the $100 developer fee is going to be 'approved'. Apple's not in a position to DQ anyone who's even remotely interested in making iOS more secure.


Why not? Their bug bounty program that they had previously was quite selective, for reference.


Yep, definitely designed to be a barrier that's just enough to say "look, we're doing something", but restrictive enough that most won't bother. Sort of like their repair program.

I tend to like Apple in general, I think they do a lot of things right, but I feel there are a few things they do that are clearly about money and not the purported reasons. I guess no company is impervious to that.


"Look we're doing something"? Apple has a well-regarded security team that is among the largest in the industry. The locked platform is part of the premise of their security model. You can disagree with that; many smart people do. But you can't pretend that anything other than unlocking the platform constitutes a half-measure.


Apple's security model is not the only one that exists, and its model has the additional benefit of giving them the ability to control software distribution for the platform.


I'm actually not in disagreement with either point. I just think they need to stop with the whole "see, people can do stuff" act. Own it; I like the locked environment and the security.

That said, on the back end outside of devices, they don't implement the same precautions, i.e. encrypting iCloud data/backups so that they have limited access. I would just like consistency.

One way or the other.


In what way do they not "own it"? They've been owning the fact that it's a closed system for 40 years, when tech nerds complained about their products being "appliances".


I'm more on the side of their argument that they have to lock the phone down without their own ability to access the data (which I agree with), but they fall back on not encrypting the backend the same way (buckling to Federal pressure).

I would love it to be encrypted throughout, even if it means that if I lose access I may not be able to get it back. It's a tradeoff. The marketing position of being secure is factually true for the phone, but they imply the data is secure when in fact it really isn't.


>not just anyone can be approved

Anyone can take advantage of the unpatchable bootloader flaw on iDevices with the A11 SoC or earlier that allows you to exert full control over the device and any current or future version of iOS that runs on it.

>For security researchers, this is a huge boon, which should help them analyze any version of iOS that will run on an iPhone X or older. Since iOS research really can’t be done on a device that hasn’t had security restrictions lifted somehow, this will likely become one of the most important tools in researchers’ toolkits. This can benefit iOS users, as it can enable researchers to locate issues and report them to Apple.

https://blog.malwarebytes.com/mac/2019/09/new-ios-exploit-ch...

You don't need Apple's permission.


What happens when iOS 20 comes out and the A11 can’t run it? Is it suddenly okay then to ask Apple’s permission? I don’t think the court would hold up a bootrom exploit in an older chip as good enough for research purposes (what if an exploit only affected the A12 and A13 for some reason?)


How about some of the new features in Apple's recent chips? Is there a way to research those?


Oh, I understand where you were going now. Sorry, I should have read your post a bit more closely.

I do agree with saagarjha and jedieaston's sibling comments, however. Checkm8 is great, but it's temporary.


> Anyone can debug their platform, as they have been.

Apple would not like “anyone” debugging their platform. The fact that people have jumped through hoops to do so is considered a bug and something that they actively try to prevent.


That doesn't change the fact that you're able to do security research on Apple products without their permission.


Apple gives me a locked box, I complain that I can't open it. You're saying that I shouldn't because some people have figured out how to pick the specific lock they're using, even though Apple doesn't want people to do that and their next lock is not pickable using that technique anymore. Oh, and the pickable locks will be obsolete in a few years. Do you see the problem?


It's not illegal for you to do security research and Apple is not attempting to make it so. You'd like it to be more convenient to do security research and that is what this program is designed to do. I don't see why it's unreasonable for Apple to have terms you need to agree to to benefit from this program.


I think it is not unreasonable for Apple to make security research convenient without adding onerous restrictions on how it is done. Many other platforms do this already, too–actually it's the norm for most of them.


> Apple gives me a locked box, I complain that I can't open it.

Well, if you don’t like closed boxes, just don’t buy a closed box.


Part of the point of security research is that practically 0% of consumers are capable of doing this research themselves and making informed purchasing decisions. Security researchers hold businesses accountable when consumers can't.

In light of that, your argument is "Just don't hold Apple accountable!", which Apple would love... but that would also be harmful to consumers.


> Part of the point of security research is that practically 0% of consumers are capable of doing this research

Well, then by that statement alone security researchers aren’t just “anyone”.

I don’t see a problem in a company imposing restrictions that make finding vulnerabilities harder for everyone in general provided they are willing to allow security researchers to jump those restrictions.

In fact I actually see it as a big win, since now bad actors have significantly higher costs while good actors that are legally liable have lower barriers of entry.


...but these hurdles (at least the ones I know of from the bug bounty program; and which I see elsewhere in this thread do seem to apply to these devices also) contain things like clauses most security researchers consider unethical (holding bugs indefinitely without public disclosure no matter how long it takes Apple to fix the issue) and seems to exclude people who don't generally show Apple in a favorable light.

(And no: I entirely disagree that "bad actors" have significantly higher costs because of this, as bad actors can do illegal stuff like buy internal developer protocols off the black market from corrupt factory employees: there was a massive expose about this in Forbes last year. Hell: Apple bugs are actually less valuable on the black market now than Android bugs because there are so many of them! Apple's attempts to hide their devices from public scrutiny is about PR, not part of some coherent security strategy.)


If this doesn't impact costs for bad actors, it's hard to see how it impacts costs for good actors, since, in the status quo ante of this program, both good and bad actors shared the same vectors to get kernel access to devices. Apple is, on this page, explicit about the notion that this program doesn't impact vulnerability research done outside the program. In what way does this program do anything but add an option for good actors?

I may just not be understanding you; maybe we just agree that this program doesn't change a whole lot.


> If this doesn't impact costs for bad actors, it's hard to see how it impacts costs for good actors, since, in the status quo ante of this program, both good and bad actors shared the same vectors to get kernel access to devices.

Oh come on... I am a security researcher, and I have definitely had multiple opportunities to buy a stolen prototype device (as I am sure you would have also... but I also assume you don't need it as your company is one of the only companies in this space I have seen actively consulting for Apple--which I frankly feel like maybe you should be disclosing here? I guess you might still not have access to dev-fused devices as I have some vague memory of figuring out that you worked on server security... still :/--so maybe you don't pay as much attention or are as tempted as those of us on the outside); like just about every legitimate person who stumbles upon this, I said no, as I don't want to do something actively illegal (such as trafficking in stolen goods). Are you seriously trying to argue that I should be doing actively illegal things to do research?

> I may just not be understanding you; maybe we just agree that this program doesn't change a whole lot.

I never claimed the program (which I will assume you mean the device program, though I think this also mostly applies to the bug bounty program itself) did? I said "anyone who wants to should be able to buy such a device" and "this ends up feeling like yet another flat gesture" (and then cited numerous specific ways in which Apple clearly works against security research on their devices).

You are then here responding to a comment where I am defending against someone who is claiming anyone can do security research (as Apple can't legally stop me... which is "technically true" but "useless" as they can still sue me--as they did my friends at Corellium--and I can't afford to defend myself) and this device is sufficient as anyone can get one (if only they are willing to get over a few "hurdles") by explaining why most security researchers would not take part in these programs (which is, in fact, an argument for why this program "doesn't change a whole lot"). The argument is that Apple needs to do _more_, to put good actors (who have nothing but these programs and bootrom exploits for older devices) on the same level as bad actors (who have comparatively little issue doing research).


(1) I know you're a security researcher.

(2) I'm a software developer at Fly.io.

(3) Latacora, my previous security company, did no work at all for Apple.

(4) I have no idea what you mean by "server security"; you are probably thinking of someone else.

(5) I'm not asking whether you think it's OK that Apple sued Corellium. Most people in software security are not happy that Apple sued Corellium; I'm not going to be the oddball pissing into the wind this time with a contrary take.

If we agree, we agree, and it sounds like we do: one might not believe that the SRDP meaningfully improves security research on the iPhone, but it's hard to make an argument that the SRDP _harms_ it.


(Of course, I'm talking about Matasano Security / NCC Group ;P. I knew people there when you all worked out of the Dental Fabulous--no clue if you still do--and had some incredibly awkward run-ins involving Apple people, as everyone on all sides wanted to pretend that no one knew anyone else, due to what I'm sure was a ton of NDAs, explicit and implied... it was pretty epic, actually, as one of the people involved was essentially a "double-agent"! Regardless, I'm willing to believe that you had just left before all of these contracts with Apple had happened, and it certainly undermines the premise that you yourself don't need one of these devices to do research, so "point still taken".)


I left Matasano more than 6 years ago. Unfortunately, Matasano SFBA moved from the dentist's office (which I have fond memories of) to Sunnyvale. What I'll say right now is: I have the same disclosable interest in Apple's security as most veterans in software security: they're an elite employer and I have a bunch of friends there.

(The more important disclosure is that I don't specialize in the kind of work that would likely benefit from an unlocked phone.)


It gives researchers who don't want to do illegal things debugging access to the kernel, whereas previously this was not possible on newer devices because the only way to do that outside of Apple was to somehow (illegally) obtain access to a development-fused iPhone.


Yes; I'm asking, how does providing that new option harm software security researchers?

I understand the subtext that Apple could more efficiently help software security researchers by freely unlocking phones, but I'm not here to litigate that.


I was just responding to the part where you mentioned that good and bad actors had the same access before this program, which isn't true. (And it still probably isn't true, since I hear these devices are research fused and you can buy developer fused devices–or more recently, swap out your production-fused device's CPU–from the black market.)

To individual researchers, yes, this gives them a new option–I guess that is good? What I am concerned about is that it is an attractive option for them and they get locked into whatever disclosure timeline/research focus Apple wants them to have. You could of course say that they could leave the program at any point and go back to how it was before, but I think people are generally reluctant to lose access to things.


And on this note, people are also extremely reluctant to too horribly piss off the gorilla: I called Apple out on the morality of these clauses with a pretty harsh and personal speech during the initial bug bounty program meeting, and I had a bunch of people come up to me afterwards telling me they agreed strongly but were too afraid that Apple would lock them out if they were to say anything themselves (and of course, I was never invited to any subsequent meetings, not that any of us--even among the people at Apple who championed me being at the meeting in the first place--ever believed I would be: I sort of get the impression that some of them mostly wanted to demonstrate to their managers that what they were doing wasn't universally liked, but understood the fear).


The morality of which clauses? Can you be more specific?

Arguments about the legitimacy of Apple's locked platform are among the most boring we can have on HN, and date all the way back to the origin of HN. But arguments about the specific terms in the SRDP, or even Apple's bug bounty, are super interesting.


> The morality of which clauses? Can you be more specific?

I was already very specific: "holding bugs indefinitely without public disclosure no matter how long it takes Apple to fix the issue" is the exact quote that I used after "clauses most security researchers consider unethical" in the comment that you replied to and which we were arguing about ;P.

In said comment, I noted that I wasn't sure if that clause only affected the bug bounty program, or if it also applied to the security device research program (which is crazy as the terms are right there: I must have just let them all blur together in my head); of course, as this is Apple we are talking about, there was no real risk that they would have suddenly decided to be reasonable, and so they are even more explicit about this immoral clause in this new program.

https://developer.apple.com/security-bounty/

> Researchers must: Not disclose the issue publicly before Apple releases the security advisory for the report. (Generally, the advisory is released along with the associated update to resolve the issue).

https://developer.apple.com/programs/security-research-devic...

> If you report a vulnerability affecting Apple products, Apple will provide you with a publication date (usually the date on which Apple releases the update to resolve the issue). Apple will work in good faith to resolve each vulnerability as soon as practical. Until the publication date, you cannot discuss the vulnerability with others.

I have many friends who believe in simultaneous disclosure, and I know many people who believe in "responsible" disclosure (with its associated deadlines before public disclosure); I have met almost no one who believes that this "tell Apple and give them indefinitely long to fix the issue without telling anyone else about it" disclosure model is legitimate (I'm sure they exist, but they are certainly a small minority).

This has also been discussed in a different thread on this same post https://news.ycombinator.com/item?id=23920454 with a link to someone from Google Project Zero expressing their disappointment with these same clauses "which seem specifically designed to exclude Project Zero and other researchers who use a 90 day policy".

https://twitter.com/benhawkes/status/1286021329246801921?s=1...


How many consumers do in-depth security analysis on devices before buying them? You are in a bubble if you think that’s normal behavior. And to do such research on a device, wouldn’t you need to buy a device? You’re going to buy a device to research whether you should buy the device? And sales numbers would indicate that most people are confident enough in Apple’s security.


There's a locked box, and there's the box that spies on you and also looks uglier to some people. Not much of a choice, is there?


Are you talking about using or research? For research, Android is obviously much easier. As a user, sure, you may have your preference. Asking for Apple's cooperation in doing research on Apple devices seems quite antithetical to Apple's overall approach to business. I don't see why there should be any such expectation. I feel that once a company gets large enough (in stature, revenue, etc.), everyone bestows upon it certain qualities and expectations. It's not reasonable to do so. A large coal/oil company won't become environmentally friendly no matter how much money it makes.


Using.


Not only is this a flat gesture, I think by this they are actively gunning for companies like Corellium and will have a huge amount of control over security researchers who join the program. Disclose your bugs to us on our terms or have your access yanked? Pretty yikes. (And this is completely ignoring the rest of your comment, because it's pretty clear that they don't want consumers with debuggable iPhones.)


We don't have to speculate about this; Apple is actively gunning for Corellium. Your best tip-off to that was them suing Corellium.


Oh, I know that Apple is gunning for Corellium, my question was if this specific thing was meant to be another arm of that dispute (it probably is).


In the sense that Apple probably wouldn't have done it if Corellium had never existed, sure.


The reason I imagine this comes up as a thread of discussion is because Apple promised they would do this long before Corellium existed--at least four years ago: the meeting I was at--and I heard it had likely even been shelved!... but it has seemingly now become something to bother with doing, and as far as anyone can tell it is because Apple's lawsuit with Corellium non-trivially tries to claim that Corellium can't use certain "obvious" defenses because their emulator somehow competes with and carves into the market for Apple's promised security research device program, which is in some sense the Daikatana of this community. So to people who might otherwise think "oh, Apple loves security researchers: look at this program as proof!" the real thought should be "Apple is likely only doing this at all to help them win a lawsuit and injunction against a company which provides the tools used by a lot of security researchers" (a lawsuit which also wants to push through Corellium so they can attack their customers).


Apple's lawsuit is about infringement of their copyright. And that gives them exclusive rights over their intellectual property without any conditions. Legally, their case isn't better or worse because of this program. The likely reason they announced it just before suing Corellium is to avoid giving the impression that they were attacking the security research community in general, rather than Corellium specifically.

My hope is that they'll settle the case and Corellium's assets go to Apple and the founders become employees and continue to work on their product, because clearly a virtualized service is better than a physical device. But perhaps there are other legal reasons I'm not aware of why they'd still want to do this program with physical devices.


You have a misinterpretation of the strength of copyright: there are a lot of legally protected (both by statute and by case law) things you can do with someone else's copyrighted works as long as you don't copy them (and even if you do make some copies, it is sometimes OK; I've cited this case a couple times already on this thread, but see Sega v. Accolade: Accolade ended up winning this case despite the fact that they had to actually make "infringing" copies in order to accomplish their goal of software interoperability).

Apple's case directly cites their Security Research Device program in the first paragraph of introduction on "Corellium's Infringing Product"... and, notably, also pushes into the idea that one of the things Corellium supposedly infringed was Apple's GUI Elements, which feels a bit ridiculous to me... (I don't feel like Apple had their best lawyers work on this one ;P).

> Corellium is “a startup that sells a product that allows users to create virtual instances of almost any iOS device in the world.” Corellium’s product creates exact digital replicas of Apple’s iOS, iTunes, and GUI Elements (referred to here as the “Corellium Apple Product”), available via either Corellium’s web-based platform or a privately installed, Corellium-provided platform. Corellium admits that its product will compete with Apple’s iOS Security Research Device Program.

Their case also attempts to directly push at the problem using DMCA Section 1201 language, and notes that one of the things that Corellium is used for is to jailbreak your device; the language used claims that these jailbreaks--which are the alternative constantly cited here for what security researchers can use to learn about and test on iOS (ironic, as they are themselves failures of security)--are "unlawful ends" (which isn't true, but the fact that Apple wants this to be true so hard demonstrates their distaste for people being able to access their own hardware).

> The Apple Corellium Product also provides users with the ability to “jailbreak” virtual iOS devices. Jailbreaking refers to the act of modifying iOS to circumvent the software restrictions that prevent unfettered access to the operating system. Corellium openly markets the ability of its technology to “jailbreak... any version” of iOS. Corellium provides its jailbreaking technology to all its customers, regardless of their purpose.

> On April 1, 2019, Corellium again highlighted the unlawful ends to which its product is aimed by publicly acknowledging that it had given access to its platform to the developers of code used to jailbreak iOS devices called “unc0ver,” so the developers could test the jailbreaking code “on any device running any firmware” and distribute that code to the public. Within weeks, those developers released a new version of unc0ver that allowed jailbreaking of iOS 12.6. In other words, Corellium has admitted not only that its product is designed to circumvent technological protection measures Apple puts in place to prevent access to and infringement of its copyrighted works in iOS, but that it has aided and abetted the creation and trafficking of other software that is also designed to circumvent those same technological measures.


Sure. I think we're all on the same page about this.


If anyone could buy this device, then tons of scammers would buy them, install malware, and sell them to people as normal phones. They could then control banking apps and whatever else they wanted.


Most such devices (e.g. "development kit" devices for game consoles) look very different than the release product. Usually in such a way that it'd be impractical to use them casually for your personal needs.

In other cases, e.g. Android/Chromebooks, there's a common, immutable early chain-of-trust that stays the same between production and development devices (or in this case, between rooted and unrooted devices); which pops up a message during boot warning that a device is currently a development/rooted device, and therefore should not be trusted for production use-cases. It could just-as-well also say "DO NOT BUY THIS PHONE ON THE SECONDARY MARKET; IT HAS BEEN TAMPERED WITH, AND CANNOT BE TRUSTED WITHOUT FACTORY RE-VERIFICATION" — and then users told repeatedly in the company's messaging to look for messages at boot before buying.
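For what it's worth, Android already exposes that boot state to software, so a banking app (or a checker run by a wary second-hand buyer) can refuse to trust a tampered device. A minimal sketch in C against the Android NDK, assuming the standard verified-boot property names (illustrative only; SafetyNet/Play Integrity is the heavier-weight production route):

    #include <stdio.h>
    #include <string.h>
    #include <sys/system_properties.h>  /* Android NDK system property API */

    /* Returns 1 if the device reports a verified, locked boot chain ("green"),
       0 if it booted unlocked or with modified images, -1 if unknown. */
    static int boot_chain_trusted(void) {
        char state[PROP_VALUE_MAX] = {0};
        /* "green" = verified & locked; "orange" = unlocked bootloader;
           "yellow"/"red" = custom keys or failed verification. */
        if (__system_property_get("ro.boot.verifiedbootstate", state) <= 0)
            return -1;
        return strcmp(state, "green") == 0;
    }

    int main(void) {
        switch (boot_chain_trusted()) {
        case 1:  printf("Boot chain verified and locked; treat as production.\n"); break;
        case 0:  printf("Device booted unlocked/modified; warn before trusting it.\n"); break;
        default: printf("Boot state unavailable; assume the worst.\n"); break;
        }
        return 0;
    }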


Most, maybe, but look at all of the Developer Transition Kits Apple has been distributing - they’re Mac Mini bodies with Apple Silicon inside. There’s no reason to think this won’t be an iPhone chassis with modified guts.


This will likely be an iPhone that looks identical to a production one, both inside and out. Perhaps there will be a serial number on it or something.


I agree, but I think there are non-technical people who would happily buy a cheap iPhone on Craigslist or Facebook and enter in all their iCloud or banking info without rebooting or looking at warnings.


Who is honestly going to buy this highly sought-after iPhone, backdoor it and flip it for a discount on Craigslist in an attempt to what, hack a random person? And you can just prevent that by doing something simple like disabling the App Store…


First, my argument only applies to when these are common and not highly sought after (see the top level comment I replied to). Not necessarily a random person, but having an easily obtainable rooted iPhone would absolutely enable targeted attacks against wealthy/famous people.

Think of it like a classic USB drop attack but a bit more expensive: you install your remote management code on a phone, box it up like new, and drop it at the door of someone wealthy's house. I'd bet they would happily assume it's a wrong delivery and start using it if it's an upgrade over their current phone.


Again, if you make the research iPhones unmistakable people will figure it out. And yeah, people ignore strange markings or click through warnings, but if you make it impossible to do the thing they want to do then they literally cannot ignore it.


I agree with the first sentence, but disabling the app store would greatly hamper security research, since you can't debug third party apps.


You'd just download the IPA manually and self-sign it or whatever. Basically just make it unmistakable that this device is not normal and block normal people from being able to use it as normal without realizing it's a development device.


I doubt they would cost the same as regular iPhones.


Does it matter what they cost if scammers could net tens/hundreds of thousands for a single one? (Assuming they pick targets right)


People with tens/hundreds of thousands of dollars are buying iPhones from random third parties?


It's easier than that. You simply modify the special phone to broadcast the unlock PIN being entered in realtime. You set the background to the same wallpaper as the target's phone.

You swap it physically for the target's phone on the table, netting you the target device.

Moments later, when they pick up a phone that looks just like their own and enters a PIN several times, you now have both their phone (from when you swapped it) and the PIN to unlock it (from the broadcast), allowing you full use of the device, offline, at your leisure. The target is now confused why their phone isn't unlocking, and may not detect the attack for hours.

Apple really should put these audit devices in a big, boxy, couldn't-possibly-be-mistaken-for-an-iPhone case.


> The target is now confused why their phone isn't unlocking, and may not detect the attack for hours.

You might as well let the user in while you’re at it, so it’s truly undetectable.

> Apple really should put these audit devices in a big, boxy, couldn't possibly-be-mistaken-for-an-iPhone case.

Someone in Shenzhen is spinning up their CNC machine as you speak to change that to “you could probably show it to a Genius and they wouldn’t be able to tell at a glance”.


You couldn’t, without the data on the stolen target phone. The attack ends with the victim in physical possession of the security research device.

I was thinking that the board might need to be larger, too, to make sure it couldn’t easily be transplanted.


> I was thinking that the board might need to be larger, too, to make sure it couldn’t easily be transplanted.

Wouldn't that be costly from an assembly perspective? Economies of scale and all that.

Idk, this all seems much too spy-novel-esque for me. You could also install a hidden camera in the victim's room, or modify the phone to capture the video-out signal.


Apple is retaining ownership of the devices, as mentioned in the article. They are not for sale. The per-device cost is not hugely relevant.

It sounds like a spy novel because spies spy on people who use regular, everyday hardware. A rooted iPhone is an extremely useful tool to that end.


> It sounds like a spy novel because spies spy on people who use regular, everyday hardware. A rooted iPhone is an extremely useful tool to that end.

Do you know of any instances where this happened with devices that can be rooted? (Computers, most Android phones, iPhones vulnerable to Checkm8)


Barton Gellman wrote about this very thing happening to his iPad (remote jailbreak/root) when he was working with Snowden, in his book Dark Mirror.

The leveraging of Android malware for espionage (corporate and military both) is well-documented in the media.


So get a regular iPhone, disable the lock-screen timer, slap an app on it that mimics the unlock screen. No specialty hardware needed.


You could do that with jailbroken phones today.


A scam that requires an individually targeted bespoke device that nets tens or hundreds of thousands (how does that even work? how would the proceeds be exfiltrated untraceably?) is just a really expensive way to have a very short career as a scammer.


The move is in line with their reputation. Handing out a bunch of research devices which come with a catch is a great way to exert more control and influence over vulnerability reporting, and skew the bargaining power when it comes to disclosure. I expect the motivation is largely genuine to encourage security research that bolsters their platform, but also stems in part from an increasing fear of PR ramifications outside their control.

"I'm handing out a bunch of water bottles; sign up here. The contents remain my property so when you're done please urinate back into them and return to me on demand."


Honestly had to think about whether your offer was a good deal.


If only open source software licenses could have predicted the level of vertically integrated control their software would be used under. Apple continually violates the good will of developers and puts forth their own bad will. I'm tempted to make up an 'MIT minus non-free platforms' agreement: if the OS can't be completely emulated and freely installed without restriction, then you can't use the library.

I'd like to see Apple survive having to recreate half their software from scratch.


They made APFS in (probably?) less than ten years :(

The license structure you suggest sounds kind of interesting.


Sounds like you might want GPL?


Probably AGPL


Go ahead. See how nobody uses your library.


Consider that Android is open source and people have been able to get root access on devices since the beginning.

Still, it seems better for overall security to have this program exist than to have no program at all.


To be fair (as a problem we have is that there is a small oligopoly of players and they all mostly suck), the original Android G1 was a locked down device: to get root on it required you to buy the Android ADP1 (which cost a lot more and had serious limitations with respect to apps from the Market); we only got root on it due to a bug (so the same as with iOS... it was just a dumber bug ;P). But yeah: Android wouldn't be usefully open source to anyone if you couldn't at least build your own device to test and run it on, and they clearly support that and they even provide emulators (and, thankfully, more recent, first-party Google devices are nice and open: Google learned their lesson there pretty quickly).


The 'android' that you get from your phone vendor definitely isn't open source. The AOSP project is, and the distro from your vendor will have parts that can be traced back to that source.

But the Android ecosystem as a whole is not 'open source' and not 'open' in the way people like you describe. Almost all access is due to hacks and bugs, not because the bootloader and the OS came with the option for user access to the internals.


> The 'android' that you get from your phone vendor definitely isn't open source.

But it's so, so far ahead of iOS in openness. You can sideload apps. You can spin up Android in a VM. You can buy Android devices with unlocked bootloaders and install your own roms or drivers or kernel modules or whatever else. The system can be made yours to command.

iOS doesn't even let me run my own code in userland.


> iOS doesn't even let me run my own code in userland.

This has been possible for a long time. Yes, there are some hoops to jump through (7 day time limit per signing unless you pay), but to say it doesn’t allow you to run your own code is just wrong.


Limited userland code.


What code do you want to run that you can’t in any way? How is running that more important than the alternative?


Code that can generate new executable pages on the fly, code that can access the various databases on my phone, code that modifies how the system applications work…I think that's fairly important and useful, especially since the alternative is…you can't do that at all? What's the issue?
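To make the first of those concrete, "generating new executable pages on the fly" is just ordinary JIT. A minimal C sketch (arm64 instruction bytes assumed; on a stock iOS device the mprotect step is expected to fail for third-party apps, since JIT entitlements like dynamic-codesigning are reserved for Apple's own binaries such as WebKit):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* arm64 machine code for: mov w0, #42; ret */
        unsigned char code[] = {0x40, 0x05, 0x80, 0x52, 0xc0, 0x03, 0x5f, 0xd6};

        /* 1. Allocate a writable page and copy the instructions in. */
        void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANON, -1, 0);
        if (page == MAP_FAILED) { perror("mmap"); return 1; }
        memcpy(page, code, sizeof(code));

        /* 2. Flip the page to read+execute -- the step a locked-down OS
           refuses for code outside its chain of trust. (A real JIT would
           also flush the instruction cache here.) */
        if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0) {
            perror("mprotect");  /* expected failure on a stock iPhone */
            return 1;
        }

        /* 3. Call the freshly generated function. */
        int (*fn)(void) = (int (*)(void))page;
        printf("JIT says: %d\n", fn());
        return 0;
    }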


That's somewhat tangential to the item at hand isn't it? We are talking about open source, not about hackability or the rather vague term of 'openness'. One could argue that a device that is usable to a lot of people is 'open' and devices that all require their own teaching/learning are not.


And the method by which you get root, which involves unlocking the bootloader and using a custom recovery to flash the su binary and an app to manage it, is in itself enough of a deterrent that only people who know exactly what they're doing do it. It's impossible to root accidentally, you have to install adb/fastboot on your computer and follow instructions. And then every time it boots there'll be a warning that the system has been modified.

Google though... I really hate their stance with SafetyNet. It's as if they're trying to say "your device can either be fully useful or fully under your control, but not both".

I still don't get why iPhones absolutely can't have unlockable bootloaders.


In fairness, no one is really developing for the .NET ecosystem without VS licenses either. I’m sure it’s theoretically possible but MS de facto runs the same scam.


From experience, I'll suggest that all serious security researchers never, ever sign any agreement with the company whose products they are researching.

This particular case is also outrageous for other reasons:

1) They are only doing this now because Corellium has been selling virtually the same thing for a while already.

2) They are doing this to try and hurt Corellium financially, while they're already suing them in parallel.

3) Agreeing to their terms here, effectively makes you a glorified Apple QA engineer. Only you don't get a salary, but rather, a bounty for whenever you find a bug. For most people that would be way, way less money than just being employed wherever.


To whatever extent these devices are distributed, my guess is that they land predominantly in the hands of consultancies and security product firms, where the bulk of bread-and-butter security research is done. Those firms will all have their legal vet the actual contract (which this page is not).

And, of course, that's the case with Corellium as well; it's not like Hopper or Binja, a tool that random people just buy to kick the tires on. The front page of Corellium's site is a "contact sales" mailto; the term of art we use for that pricing plan is "if you have to ask...".


Kind of humorous to imagine a researcher suing apple under the anti-gig California law. Would be a factual question of whether the researcher has sufficient control over their work under the agreement.

Apple would almost certainly win the suit, but I think there's reasonable odds the suit would survive an early motion to dismiss before factual discovery.


I read the terms of the SRD [1] to suggest if you get one, and use it, you aren't eligible for bounties on any bugs you find while using it. So, you are an entirely unpaid Apple QA engineer. Knowledge is its own reward I guess.

[1] "If you use the SRD to find, test, validate, verify, or confirm a vulnerability, you must promptly report it to Apple and, if the bug is in third-party code, to the appropriate third party. If you didn’t use the SRD for any aspect of your work with a vulnerability, Apple strongly encourages (and rewards, through the Apple Security Bounty) that you report the vulnerability, but you are not required to do so."


The (and rewards) bit isn't saying that SRD users are ineligible for rewards! Rather, it's trying to encourage non-SRD users to report vulnerabilities they find. If Apple explicitly stated that SRD users are ineligible for bounties, I'd be pretty confident Apple has lost their minds, as the SRD devices would be completely worthless to vulnerability researchers - only serving to contractually restrict them, and offering no practical benefit.

To show that "you aren't eligible for bounties on any bugs you find while using it" is false, let's break Apple's quote into two separate statements, and only consider things that are explicitly stated in them.

First:

> If you use the SRD to find, test, validate, verify, or confirm a vulnerability, you must promptly report it to Apple and, if the bug is in third-party code, to the appropriate third party.

If you use SRD in the process of discovering a vulnerability, you have to disclose it to the software authors. Got it.

Second:

> If you didn’t use the SRD for any aspect of your work with a vulnerability, Apple strongly encourages (and rewards, through the Apple Security Bounty) that you report the vulnerability, but you are not required to do so."

If you don't use SRD, Apple strongly encourages you to report the vulnerability. But they have no way to force it.

I understand how a casual interpretation of those quotes could be seen to imply that SRD excludes you from bounties, but that's not what Apple is saying.


Read a little further, the last bullet says

> Vulnerabilities found with an SRD are automatically considered for reward through the Apple Security Bounty.

I would say that's pretty explicit about being eligible for bounties.


(That bullet point did not exist before.)


It doesn’t suggest that you aren’t eligible for bounties, but rather that you are not allowed to not disclose a vulnerability.


It certainly suggests this.

Full bullet: If you use the SRD to find, test, validate, verify, or confirm a vulnerability, you must promptly report it to Apple and, if the bug is in third-party code, to the appropriate third party. If you didn’t use the SRD for any aspect of your work with a vulnerability, Apple strongly encourages (and rewards, through the Apple Security Bounty) that you report the vulnerability, but you are not required to do so.


Maybe it’s me but what I read in that paragraph is:

If you use the SRD, you are required to report any vulnerability. If you didn’t, you are not required but encouraged.

It doesn’t say that if you used it you aren’t eligible for a reward.


It's probably hard to resist using the SRD while trying to find a vulnerability if you have such a device. And if you use it for any part of that work, you won't be eligible for the bounty.


> They are only doing this now because ...

Apple announced these devices last year at Black Hat.


Right before they promptly sued Corellium.


But after Corellium had shown their product to Apple and tried to get Apple to buy them. Apple did nothing until Corellium started selling access to the public with their product.


Apple did nothing for years after Corellium sold their product to the public, coincidentally they did file the lawsuit right after announcing that they would do “research-fused devices”. Surprise, surprise: guess what shows up at the top of the lawsuit as the “legal” way for researchers to do their work…


>If you report a vulnerability affecting Apple products, Apple will provide you with a publication date... Until the publication date, you cannot discuss the vulnerability with others.

In addition to the mandatory bug reporting, Apple reserves the right to dictate a mandatory publication date to researchers. No more 90/180-day responsible disclosure deadline policy. I highly doubt any serious researcher would agree to work under such conditions.


Would be interesting to see if Google Project Zero joins the program, given their inflexibility in disclosure.



That's what I've been thinking about as well. I'm leaning towards "no", but let's wait and see.


> If you use the SRD to find, test, validate, verify, or confirm a vulnerability, you must promptly report it to Apple and, if the bug is in third-party code, to the appropriate third party. If you didn’t use the SRD for any aspect of your work with a vulnerability, Apple strongly encourages (and rewards, through the Apple Security Bounty) that you report the vulnerability, but you are not required to do so.

So vulnerabilities found through this program are not eligible for any reward. Then what would be the incentive to enroll (and accept liabilities like losing the device, Apple suspecting you of breach of contract, etc.)? Just bragging rights?


I think that is supposed to be read as "you must report any vulnerabilities, which will be treated as any vulnerability you chose to voluntarily submit".


You are correct because they have now added a bullet point:

> Vulnerabilities found with an SRD are automatically considered for reward through the Apple Security Bounty.


:O

This is huge. Not as a security device, but if this were the normal permission model on all iPhones (e.g. owners of devices get root on the devices they own... like a normal general purpose computing device) I could ditch my android and my mac and use an iPhone for everything.

I'm not saying this will ever happen, but in my mind this paints a bright picture of what the iPhone could be.

It's also a bit sobering as I'm quite concerned Apple is actually pushing the other direction in their shift from Intel to ARM.


I don't get the allure of this. As someone working in security, the phone is an extremely leaky thing and very bad for privacy to begin with. On top of that you want to remove all restrictions and make it a security nightmare too? I get that you want to install what you like. Sure, but I don't think the convenience is worth the security trade-off.

Honestly the mac or desktop is where I enjoy the openness and do stuff I want to do. I would want to leave the phone untouched and as secure as possible.

I would like to hear your and others' take on it though.


> As someone working in security, the phone is an extremely leaky thing and very bad for privacy to begin with. On top of that you want to remove all restrictions and make it a security nightmare too?

One major issue is that Apple's security model is "we don't trust you". And by that I mean everything works from their root of trust; not yours. This isn't the usual "I think Apple is backdooring my iPhone, FWIW", what I'm really saying is that I want the ability to elevate some of my software to the same permissions that Apple gives theirs. There is no reason that I should not be able to vet my own code and add it to the "trust cache". So this isn't just "every app should run without a sandbox"; it's more like "a GDB build that I personally vetted should be able to attach to other apps, but nothing else".
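To make that concrete: even a debugger you build yourself can't get a task port for another process, because the kernel checks the caller's Apple-signed entitlements rather than the device owner's intent. A rough C sketch against the Mach APIs (exact entitlement requirements and failure codes vary by platform and release; this is a sketch, not a statement of Apple's policy):

    #include <stdio.h>
    #include <stdlib.h>
    #include <mach/mach.h>
    #include <mach/mach_error.h>

    /* Declared in <mach/mach_traps.h> on most SDKs; repeated here for clarity. */
    extern kern_return_t task_for_pid(mach_port_name_t target, int pid, mach_port_name_t *task);

    int main(int argc, char *argv[]) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }
        int pid = atoi(argv[1]);

        mach_port_name_t task = MACH_PORT_NULL;
        /* Without debugger entitlements in the caller's code signature (which,
           on iOS, Apple simply does not grant to third parties), this fails
           regardless of what the device's owner wants. */
        kern_return_t kr = task_for_pid(mach_task_self(), pid, &task);
        if (kr != KERN_SUCCESS) {
            fprintf(stderr, "task_for_pid(%d) failed: %s\n", pid, mach_error_string(kr));
            return 1;
        }
        printf("Got task port for pid %d; a debugger could now read and write its memory.\n", pid);
        return 0;
    }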


My security model actually is “I don’t trust myself”.

One of the obvious conclusions from modern security research is that the user has become the number one vulnerability in pretty much all systems.

The corollary that a lot of people miss is that developers and security researchers are users too. They get pwned too. They sometimes give the wrong permissions to an executable.


Then perhaps a device that prevents you from doing stupid things may be what you want. But I would prefer that your choice does not prevent me from being able to buy an iPhone that does let me do those things.


This is because Apple feels it's too easy to trick users into elevating software permissions, which in turn may cause risks and harms to their user base.

Let me ask you: do you have your elderly parents on an Android? Then you already know how totally owned those phones can become.


If you require the user to hook into iTunes/Xcode, flip the device into recovery mode, click a few buttons, and agree to a "You're hecked if you break it now" policy, it'll be enough to scare off 99.9% of people from getting owned. After that, just have it work like the current profiles/supervision system where Settings makes it clear that non-verified code is running and has a big "make it go away!" button (sideloaded IPAs show up in profiles with a delete app button, and that works well enough except for the time limit).


I don’t really agree with this; the end result is going to be a large number of YouTube tutorials instructing people on how to do this, with captions like: watch free movies on iPhone, “popular mobile game” money hack, and take Snapchat screenshots without notifying hack.

Half of these developer / root mode required secrets are going to be occasionally working mods and tweaks except with tons of baked in spyware and ads that can no longer easily be removed.

Perhaps some sort of per-device profile which requires a paid developer account could work, but I’ve gotten a number of odd calls from family about YouTube videos involving Kodi before, so I’m not sure about trusting users on the “give users freedom” front.


This proves exactly the point made above of Apple not trusting the user.

However if someone wants to be an idiot, how far do you go to stop them? Apple's approach stops too many great possibilities for knowledgeable users. It should be in the same category as those "will it blend" types. Screw it up? No warranty.

For me there are several things I need that are impossible because Apple won't allow them, so I have to use Android. But that comes infected with Google spyware out of the box :(


I think Apple’s point is that users who need to be protected from themselves without even realizing it far outnumber those who might benefit from root without getting burnt. Since the two things can’t exist at the same time, they’re going for the road that makes the majority happy.


And provides the least burden to their support service.


> This is because apple feels its too easy to trick users into elevating software permissions

Great, which is why I think offering a separate SKU to people who want control over their devices would be a wonderful compromise. Your parents can buy the normal locked-down iPhone that's sold in the Apple Store, and I'll buy the special one from the hidden page on Apple's website.


If you don’t mind, here’s a comment I wrote recently that I think is very relevant and I figured it just be easier to link to rather than retype: https://news.ycombinator.com/item?id=23784763


> Strange key combinations that lead to scary text and wiping the device seem to be fairly effective in keeping out people who cannot give informed consent.

Not in my experience and certainly not evidenced by all the YouTube videos advocating disabling this or that security feature for questionable gain, including ones with scary text and/or strange key combinations.


And wiping your device?


Yes


I'd rather live with the risk and enjoy the freedom than give in to the authoritarian corporations whose main goal is to achieve complete control over their users, and advertise it under the guise of security.

Personally, I find it a bit sad and disturbing that people will so willingly and eagerly help --- even for free --- these companies put tighter nooses around themselves and others' necks.

(I'm not "Stallmanist", in that I'm not necessarily advocating open-source; but I am a strong proponent of being able to control what one's computing devices runs, regardless of source availability or even legality. In that sense, what Apple does with its walled gardens is really a strong DRM.)


It’s not Stallmanism, it’s ignorance and thinking that you’re the only person on the planet. It’s a pretty common thing here.

Apple doesn’t secure and lock down their devices by default to please/upset a bunch of nerds on HN; they do it because normal people don’t know a damn thing about technology. People who “willingly and eagerly help” to find bugs and patch them are heroes. Just like that.


> Apple doesn’t secure and lock down their devices by default to please/upset a bunch of nerds on HN; they do it because normal people don’t know a damn thing about technology. People who “willingly and eagerly help” to find bugs and patch them are heroes. Just like that.

Of course not, they lock these devices only because it makes them more money. They really want you to pay that 30% cut, "security" is just an excuse. There's plenty of more open and more secure platforms than the iPhone, just have a look at the web.


It’s business, but you still didn’t get what I wrote. :)


> As someone working in security, the phone is an extremely leaky thing and very bad for privacy to begin with.

Compared with a computer where every program can steal all your data?


A computer doesn't follow you around. It also doesn't have a ton of built in sensors (accelerometer, dual cameras, microphones, gps).


Yet the biggest trade secrets will only be accessible via a computer when certain DLP policies are in place which lock down the machine to the same effect that iPhones are locked down, only with the company’s own trust store. The iPhone is Apple bringing this level of lockdown to consumers, but with safeguards that prevent the user from adding completely untrusted code (only when you use Apple Business Manager+MDM can you deploy in-house apps without review).


> only when you use Apple Business Manager+MDM can you deploy in-house apps without review

Incorrect, and it was never true that that was the only way or even the common way.


How is android any better? Anecdotally I’ve seen more compromised android devices than iPhones. It appears to me Apple has defaults, at least now, that protect the privacy of the user. Additionally the ecosystem is less littered with apps that can take over the whole OS. My mother’s phone is always getting compromised by malware or apps that inject ads.


I see it as the opposite: these iPhones are rented to you, and are clearly not what they want to "sell" to people. It's certainly a huge surprise that this exists at all, and I would certainly like more moves in the direction that you mentioned, but I am not sure that this is it.


Nitpick—I wouldn't say it's a huge surprise, since they said last year they planned to do this.

I was actually wondering what was taking so long.


Huge surprise for me, because I personally expected them to pull an AirPower with this and just forget about it (especially since the demand was gone with checkm8).


Curious what you mean by “pushing the other direction.” I would say the opposite — it seems like everything running on ARM is exactly what it would take for your phone to run desktop programs.

I think there are other downsides to switching off of x86, but I think it strengthens the case for having one small portable computer to do everything. The question is if that device will allow real work like macOS, or if it’ll be stuck as a fancy consumer-only device..


Don't get your hopes up. This is not what you think it is, sadly.


> like a normal general purpose computing device

Maybe just go for a PinePhone instead? [1] I mean, Linux GUIs aren't fully mobile and touchscreen friendly yet, but it's getting there real quick. I mean, they started in November 2019.

In my opinion the PinePhone is the most promising device, as all upstream projects use it as an official developer device and upstream linux has integrated support.

[1] https://wiki.pine64.org/index.php/PinePhone_Software_Release...


I wonder how much people are able to publish about the device. I'd expect not much, but it'd be nice to be able to compare an iPhone that was completely unlocked (at least, to whatever that means for Apple) with whatever security they put on the ARM Macs which are supposed to be "open for hobbyists". I'd expect that the ARM Macs have much of the same security stack (by default) that iOS devices have given what they said in the WWDC talks, but maybe that's not the case.

Also, if you found an exploit on a research iPhone because you made use of entitlements that were Apple-only, I wonder if that'd be worth anything bounty-wise. Nobody can/should be able to write an app that'll get through App Store checks if they asked for PLZ_NO_SANDBOX_ILL_BE_GOOD or something (at least, that's what I thought before the whole Snapchat system call thing happened). But hypothetically the App Store review process is vulnerable to a bad actor inside Apple pushing an update to a big app that included malware, so I'd think that private entitlements shouldn't be available at all to binaries that didn't ship with the device/in a system update (unless some kind of hobbyist flag was flipped by the consumer). So I'd say that would be worth something, even if smaller than a more interesting exploit.


We’ll see how the shipping ARM Macs are “fused” when they come out, but my guess is that they will be more locked down than these devices: their OS will be more permissive but you will not have meaningful kernel debugging.

> Nobody can/should be able to write an app that'll get through App Store checks if they asked for PLZ_NO_SANDBOX_ILL_BE_GOOD or something (at least, that's what I thought before the whole Snapchat system call thing happened).

Snapchat (on iOS at least) is still subject to the app sandbox; no app on iOS has been granted an exception there to my knowledge. On macOS there are apps that are “grandfathered in” to not require the sandbox on the App Store, but new apps are supposed to have it. Due to the way the dynamic linker works, until recently it was possible to upload an app that could bypass the sandbox, but Apple has said they have fixed this. Some apps do have an exception to this as well, as the broad way they fixed one of the issues broke legitimate functionality in library loading. You can find those hardcoded in AMFI.kext; theoretically they could turn off the sandbox for themselves if they wanted.
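
If it helps, you can sanity-check the sandbox question for your own process on macOS with the public SecTask API; a minimal sketch (my own illustration, not anything Apple documents for this purpose) that just asks whether the current task carries the com.apple.security.app-sandbox entitlement:

  // sandbox_check.c: minimal sketch - does the current process carry the
  // com.apple.security.app-sandbox entitlement? (macOS only.)
  // Build: clang sandbox_check.c -framework Security -framework CoreFoundation
  #include <CoreFoundation/CoreFoundation.h>
  #include <Security/Security.h>
  #include <stdio.h>

  int main(void) {
      // Get a reference to the current task and query the entitlement value.
      SecTaskRef task = SecTaskCreateFromSelf(kCFAllocatorDefault);
      CFTypeRef value = SecTaskCopyValueForEntitlement(
          task, CFSTR("com.apple.security.app-sandbox"), NULL);
      printf("sandboxed: %s\n", (value == kCFBooleanTrue) ? "yes" : "no");
      if (value) CFRelease(value);
      CFRelease(task);
      return 0;
  }

(That only tells you about the process it runs in, of course; for a third-party app bundle, dumping its entitlements with codesign is the usual route.)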


The KDK has instructions for loading your own kernel extensions on Apple Silicon. This includes making a new writable root snapshot, modifying it, then blessing it for boot. It also includes kernel debugging.

Booting custom kernels is not supported at the moment but as has been noted "the Mac remains the Mac" and booting a custom kernel is allowed on the Mac.

And of course you can disable SIP.

Developer and hobbyist scenarios are an explicitly supported workflow on the Mac. Default security policies need to be the right thing for the vast majority of users but that doesn't mean anyone wants to take away your ability to do all kinds of interesting things to the system.


Yeah, I know, I read those instructions in full ;) I do have to admit that I am pleasantly surprised at how much is made accessible; I was fully prepared for this to be an opportunity to enforce mandatory code signing, remove the ability to disable SIP or load code into the kernel, turn off secure boot, etc., but so far pretty much everything seems to be technically possible, which is nice.

However, I do still stand by my complaint; neither of us can go into too much detail of course, but I think you understand that taking chips that were made to run iOS, with hardware-backed guarantees of certain integrity properties on consumer systems, makes for a poor experience when trying to do things like debug and patch the kernel. I mean, is it theoretically possible to debug the kernel? Yes, because those facilities have been enabled superficially, but the experience of using them is much worse than you’d get on Intel (not to mention developer-fused hardware). Personally I was only able to get it to work partially, and suspect it is even more broken/limited than the KDK says it is; here is what I’m talking about: https://developer.apple.com/forums/thread/653319. If you aren’t aware, it took almost three weeks before someone could get a “hello world” up, so there is a real drag associated with this.

Again, I’m happy and pleasantly surprised to have these things, at least on macOS; it’s completely possible that these are just unintentional bugs or transitional issues or whatever, and if they end up fixed I promise I will stop complaining about this particular thing. But I would like to emphasize that I do not consider the current state of affairs as laid out by the KDK to really count, regardless of the effort being put into making this work, which I fully understand helps back up the claim that “the Mac remains the Mac”.


> you will not have meaningful kernel debugging

Given that kext development is still supported (although highly discouraged), won’t they have to support the same level of kernel debugging as usual?

> On macOS there are apps that are “grandfathered in” to not require the sandbox on the App Store

Can you name any of these apps? Apple’s own apps don’t have to be sandboxed (like Xcode or macOS installers), but I don’t know of anything else that gets an exception. Some apps like Office get special “holes” out of the sandbox (in the form of additional SBPL), but fundamentally they’re still sandboxed.


> Given that kext development is still supported (although highly discouraged), won’t they have to support the same level of kernel debugging as usual?

They just need to support loading kernel extensions. As watchOS has shown, developers will figure out a way to get their thing working on your device even if you make debugging extremely painful. (Apple's current silicon prevents debugging entirely because the kernel is prevented from being patched in hardware.)

> Can you name any of these apps?

Sure. If your app's bundle ID matches one of

  com.aspyr.civ6.appstore
  com.aspyr.civ6.appstore.Civ6MetalExe
  com.aspyr.civ6.appstore.Civ6Exe
  com.tencent.WeWorkMac
  com.tencent.WeWork-Helper
  com.igeekinc.DriveGenius3LEJ
  com.igeekinc.DriveGenius3LEJ.DriveGenius
  com.igeekinc.DriveGenius3LEJ.dgdefrag
  com.igeekinc.DriveGenius3LEJ.dgse
  com.igeekinc.DriveGenius3LEJ.dgprobe
  com.prosofteng.DGLEAgent
  com.prosofteng.DriveGeniusLE
  com.prosofteng.DriveGenius.Locum
  com.prosofteng.DriveGenius.Duplicate
  com.prosofteng.DriveGenius.Benchtest
  com.prosofteng.DriveGenius.FSTools
  com.prosofteng.DriveGenius.Scan
  com.prosofteng.DriveGenius.Probe
  com.prosofteng.DriveGenius.SecureErase
  com.prosofteng.DriveGenius.Defrag
then dyld interposing is enabled for your app even if it comes from the App Store, opening the door to subverting the mechanism that applies the sandbox.
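
(For context, and not specific to any of the apps above: the interposing mechanism itself is just a tuple dyld reads out of a dylib's __DATA,__interpose section. A minimal sketch of the generic technique, with my_open as a made-up replacement, would look something like this:)

  // interpose_open.c: generic dyld interposing sketch (illustration only,
  // nothing here is specific to the exception list above).
  // Build as a dylib that gets loaded into the target process:
  //   clang -dynamiclib interpose_open.c -o libinterpose.dylib
  #include <fcntl.h>
  #include <stdarg.h>
  #include <stdio.h>

  // Replacement: log the path, then call through to the real open(2).
  static int my_open(const char *path, int oflag, ...) {
      mode_t mode = 0;
      if (oflag & O_CREAT) {    // a mode argument is only passed with O_CREAT
          va_list ap;
          va_start(ap, oflag);
          mode = (mode_t)va_arg(ap, int);
          va_end(ap);
      }
      fprintf(stderr, "open(%s)\n", path);
      return open(path, oflag, mode);
  }

  // dyld scans __DATA,__interpose in loaded dylibs and rebinds references to
  // the second pointer (the original) so they land on the first (the replacement).
  __attribute__((used, section("__DATA,__interpose")))
  static struct { const void *replacement; const void *replacee; }
      interpose_open_tuple = { (const void *)&my_open, (const void *)&open };

Per the parent comment, whether dyld will honor that section for an App Store binary is exactly what that exception list controls.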


Huh, I wonder why those got exceptions. You said they were "Grandfathered in", but Civ 6 at least is recent.


They're two separate groups. Group one, the grandfathered one, is "legitimate" software that was simply published to the store prior to the mandatory sandboxing requirement–those can still get updates and remain unsandboxed. The second group is the list that I posted here, that have special status in the dynamic linker (can interpose functions) and through that can (probably don't, but "can" on a technical level by exploiting flaws in how Apple does sandboxing) bypass the sandbox.


> We’ll see how the shipping ARM Macs are “fused” when they come out, but my guess is that they will be more locked down than these devices: their OS will be more permissive but you will not have meaningful kernel debugging.

My big worry is them dropping terminal access altogether like on iOS. That would really make the platform useless to me.

However, I don't think they would do this at this point. There are many user groups (like cloud developers) specifically favouring the Mac because of its strong terminal access.


Craig specifically said that this wasn't going to happen; in one of the podcasts he said people came up to him internally and said "Wait. There's still Terminal, right?" and he said "Yeah, it's a Mac." The Platforms State of the Union host also said that they had reached out to a bunch of open-source projects with assistance (and in some cases, iirc OpenJDK and CPython, pull requests) on moving to ARM.


Thanks, I didn't know that. Good to hear!


Yep. And Craig also said the Mac is staying open. And that he was sick of trying to convince people that!


There will be a terminal, and virtual machines, and kernel extensions. It's pretty much a "full" macOS experience to all but the most serious users.


What’s missing?


Well, we don't know entirely yet. But based on the videos and what we know about the DTK, patching the kernel is no longer something you can do, for example. That's enforced in the silicon itself almost immediately after the computer comes out of reset, so even with arbitrary code injection into the kernel (extensions) you're not getting around it.


> But hypothetically the App Store review process is vulnerable to a bad actor inside Apple pushing an update to a big app that included malware.

I don’t think this is technically possible.


I want to apply (not that I am sure that Apple would consider me a security researcher) but am unsure to what extent they're going to go with

> If you use the SRD to find, test, validate, verify, or confirm a vulnerability, you must promptly report it to Apple and, if the bug is in third-party code, to the appropriate third party.

I mean, if I find a bug I might report it, but I know people who work on jailbreaks and stuff–if they tell me something, will I have to promptly report it? What if I find something on a non-SRD device? If I ever hypothetically "write a jailbreak", will Apple come after me even if I say I didn't use that device for it? I can get 90% of the benefit from using a device with a bootrom exploit, with none of the restrictions here…


I’m not a lawyer nor your lawyer, but I read that to mean any vulnerability you discover as a result of your research using the SRD, not any vulnerability you otherwise discover or of which you have knowledge.


Right, but is Apple going to believe me when I say that I didn't? They could just revoke my access anyways. (I'm being honest here, this isn't a question of "can I trick Apple into thinking I didn't do this on the SRD".)


I’d think it’d be difficult to run in both circles for too long.


>if they tell me something will I have to promptly report it

according to the terms no, unless you use the SRD to verify the information or vulnerability

>If I ever hypothetically "write a jailbreak", will Apple come after me even if I say I didn't use that device for it

I imagine that if you sold a jailbreak for $$$$, Apple would probably take a close look at the telemetry the device is sending. If you're confident in your ability to terminate all telemetry, and keep good opsec, and defend yourself in court, then maybe that avenue would be feasible. It certainly wouldn't be ethical.


You're taking this question the wrong way: my scenario isn't "I want to trick Apple", it's "will Apple believe me even if I am being honest" and "even if Apple thinks I am being honest will they hold it over my head anyways as a way to control what I disclose".


Don't do stupid things then.

If you want to interact with anyone making jailbreaks, don't do it. If you think you're going to develop a jailbreak, don't join the program.

They have better lawyers than you can afford. And if you're sued for breach of contract, can you afford it?


Sorry, I'm a security engineer by trade.

>even if I say I didn't use that device

is a huge difference from

>even if I didn't use that device

for us. So you can see why I'm a little touchy.

If your intentions are good, even if you're doing all the right things, you'd be playing with fire. To be honest, the people they hand out SRDs to probably have an excellent working relationship with Apple already, anyways; toeing that line would probably preclude you from getting an SRD or a second year of access.


I used those words to emphasize that there is really no way for Apple to know I was telling the truth, so I could say anything–fully truthfully–and they could just turn around and claim that they don't believe me. I guess I can see how you'd end up thinking that, but yeah having this kind of restriction that is hard to actually prove/depends on what Apple believes would generally preclude a lot of people from being in the program.


This involves an interesting set of assumptions about the plausibility of deep-cover hacking operations.

> If you use the SRD to find, test, validate, verify, or confirm a vulnerability, you must promptly report it to Apple

But let's say you pass their review, get a device, find a vulnerability, and don't report it. Then what? You're breaching the contract, but they have no way to know that, so there's no consequence?


I would expect that if you put up jailbreakmeios14.com and they find you have one of these devices, they will remove you from the program.


Yes, but most exploits are deployed in secret by malicious groups trying to hack your shit and steal your money/identity/whatever, not publicized on consumer-facing websites with your name attached.


It's like with Al Capone and the taxman: at least that way Apple has an angle of legal attack against shady companies (Hacking Team, FinFisher, ...).


As a long-time iOS user, this single aspect has made me look over the fence to the Android side the whole time. Not having full access to my own devices is insane. The poor security on the Android side has kept me away, but they’ve just recently been catching up enough that the scales are almost tilted.


> The poor security on the Android side
You almost got it… almost.


Looks fairly cool, but I'll bet it isn't that popular with security boffins. I would be cautious about something that might not actually reflect a current "in the wild" device.

For example, if the OS isn't quite at the same level as the release OS, it could be an issue.

That said, this is not my field, and I am not qualified to offer much more than the vague speculation above.


I would expect it to be exactly the same except that you can debug it, basically. iPhones have a special fuse in them that prevents that from being done on production hardware, and these will presumably have that "unblown". If you want to test on production hardware you always can; this just lets you do research. (A metaphor might be that this is a "debug build with symbols", while normal iPhones are a "release build".)


Has anyone here ever been paid a bounty for a vulnerability reported to Apple?


I don't know if the author is on HN (also not sure what difference this makes) but a payout of $100k made the news recently for https://bhavukjain.com/blog/2020/05/30/zeroday-signin-with-a...

A few days later another researcher reported earning $75k for webcam access vulnerabilities: https://www.ryanpickren.com/webcam-hacking

These payments are not uncommon.



Any for kernel bugs?


Enjoyed this


Why do experts make absurd comments like "Everybody thinks basically that the method you learn in school is the best one"?


Wrong thread?


I agree this is theater; no serious whitehat researcher would sign a deal forcing them to accept disclosure dates from the manufacturer. It won't be useful for its intended purpose.

On the bright side, it will be very useful for jailbreak research and in a way, those bugs _do_ get disclosed to Apple for them to subsequently fix. Not necessarily the way Apple wants, but it does shine daylight on their code.

These guys keep working exploits close to their chests and don't release them specifically so they can get a look at new hardware. That will no longer be necessary: you find an exploit, you can release it right away.

And on the gripping hand, it will also be used by malicious criminals and state actors to develop zero days for various evil purposes.


> On the bright side, it will be very useful for jailbreak research and in a way, those bugs _do_ get disclosed to Apple for them to subsequently fix.

It’s useless for jailbreak research because Apple will force you to shut up about it at least until they patch it, so now you can’t jailbreak.


That is if people obey the NDA of course. I'm sure not everyone will do so.

However, finding a bug, reporting it, and then 'suddenly' having a jailbreak appear that uses it would be highly suspicious indeed. So they'd probably have to give up the chance of getting the bug bounty.

PS: I'm certainly not signing that NDA myself :)


Yes. Once these devices exist they will be used by everybody interested in that sort of access. Ironically, pretty much everybody other than whitehats.



