Select "Custom Alphanumeric Code" in Passcode Options, but only enter digits using the keyboard. iOS will display a pin pad on the lock screen that will accept any number of digits.
I picked this up from the delicious iOS 11 security whitepaper.
But of course alphanumeric would be even safer.
It would be terrible if all airports had these and could crack your phone on demand. I can imagine them holding you until the device cracks it.
Similarly, authentication systems (including passcodes and biometrics) are purely about ensuring that only authorized people may gain access, with whatever permissions they're authorized for and no more. Their threat scenario and usage does not involve intent. Authentication "primitives" can be used as part of a counter-intent security measure, but by themselves they only determine a "who", not a "why". Strong authentication backed by strong encryption is a standalone good and a required foundation for more complex measures, but that's it. The "wrench" attack is an intent-based threat scenario: the person performing the access is in fact authorized, and there is nothing buggy, malfunctioning, misdesigned, weak, or wrong in any way with an authentication system allowing it. It'd take an intent-reactive security measure to deal with it.
Regrettably, there are still no mainstream smartphones that implement this as far as I know (though at times it's been possible to roll your own to some extent via jailbreaking on iDevices at least, and I assume on rooted Android as well). Which is really too bad, because Apple in particular now has a lot of very good tools for a really, really good implementation.

A classic measure is coercion/distress codes, i.e., alternate passcodes an operator can enter that cause the device to perform different actions than a straight authentication (these can range from a full deletion to more subtle actions like a silent alarm). Apple could use TouchID/FaceID to make this even more user-friendly: merely "use this finger vs. that finger" or "make this facial expression vs. that one" as triggers. They (and anyone else with a secure hardware stack) could also implement temporal and spatial trigger options, which would be very useful.

Finally, the iOS design is decently placed to allow relatively easy and more selective view filters on apps and data, because everything is so heavily siloed. The silos have been intensely frustrating a lot of the time versus standard computer access patterns, but in this instance they could be a real strength. Imagine being able to set a "travel view" before a trip so that sensitive apps simply vanish until GPS indicates we've arrived at our destination, or we've connected to a specific network, or whatever. Or there could be distress views that react to a facial expression or code or finger and then permanently hide everything but a cleaned, minimal data and app set, including your cloud content, until you get home and enter a special unlock code (or for a set amount of time, or whatever).
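To make the distress-code idea concrete, here's a minimal sketch of a duress dispatcher (entirely hypothetical; every code and action below is invented for illustration, and nothing like this exists in iOS today):

    # Hypothetical duress-code dispatcher: each secret triggers a
    # different action, and every entry looks like a normal unlock
    # to an observer. All codes and actions here are invented.
    from typing import Callable

    def normal_unlock():  print("unlocked with full access")
    def travel_view():    print("unlocked; sensitive apps hidden")
    def silent_alarm():   print("unlocked; alert sent in background")
    def wipe_device():    print("secure erase initiated")

    ACTIONS: dict[str, Callable[[], None]] = {
        "482913": normal_unlock,  # the real passcode
        "482914": travel_view,    # near-miss variant: filtered view
        "911911": silent_alarm,   # distress code that still "unlocks"
        "000000": wipe_device,    # last-resort coercion code
    }

    def enter_passcode(code: str) -> None:
        # Only the device knows which path was taken.
        ACTIONS.get(code, lambda: print("incorrect passcode"))()

The same dispatch pattern would apply to biometric triggers: the "which finger" or "which expression" signal would simply select an entry in the table instead of a typed code.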
I think intent systems/view triggers will (or at least should) be the next big leap forward for making our personal information devices not just more secure and better for privacy, but more productive. I'd like to see that start to show up in future versions of iOS and Android more than nearly anything else.
Mostly because trying to unlock it displayed the standard keyboard, but it simply wouldn't allow me to create a PIN with anything other than numeric characters. So every time I needed to unlock my phone I had to swipe the screen up, switch the keyboard over to number/symbol mode, and enter my PIN using the small row of numbers there.
In the end, all I had to do was change my PIN, and from that moment on it only ever displayed the standard number pad for unlocking.
Thanks for the down votes.
Naturally I use fingerprint/smartwatch authentication throughout my normal day, but once the phone is off, good luck getting access to its contents.
The SEP is supposed to enforce a time delay between passcode attempts to prevent this sort of brute forcing. The timer could be defeated in older models by cutting power at just the right time, but Apple's whitepaper says it's supposed to survive restarts now.
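For reference, the escalating delay schedule described in Apple's guide looks roughly like this (reproduced from memory of the iOS Security Guide; treat the exact numbers as an assumption):

    # Approximate SEP passcode-attempt delay schedule, per Apple's
    # iOS Security Guide (from memory; may not match current firmware).
    def delay_after(failed_attempts: int) -> int:
        """Seconds the SEP waits before allowing another attempt."""
        if failed_attempts <= 4:
            return 0
        if failed_attempts == 5:
            return 60          # 1 minute
        if failed_attempts == 6:
            return 5 * 60      # 5 minutes
        if failed_attempts <= 8:
            return 15 * 60     # 15 minutes
        return 60 * 60         # 1 hour from the 9th attempt on

For the delay to mean anything, the failed-attempt counter has to live in the SEP's persistent storage rather than RAM, which is exactly what the power-cut trick on older models exploited.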
Based on the screenshots it looks like it can load custom firmware on the iPhone. That's bad.
HN discussion: https://news.ycombinator.com/item?id=16597626
p15: https://images.apple.com/business/docs/iOS_Security_Guide.pd...
Apple's security paper says it would take more than five and a half years to brute-force a six-character alphanumeric passcode at 80ms per attempt. I suspect that still holds here (if “3 days or more” is for 6 numeric digits).
Of course that's non-trivial EE work, but the point is that it's possible for someone with enough money and the right equipment. What would make it intractable is to ditch the idea that a 4-digit PIN is protecting you from anything. There's simply not enough entropy in it.
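The arithmetic is easy to check. A quick sketch, assuming the 80ms-per-guess figure from Apple's guide and ignoring escalating delays:

    import math

    GUESS_SECONDS = 0.08  # 80 ms per attempt, per Apple's guide

    def brute_force(alphabet: int, length: int) -> None:
        keyspace = alphabet ** length
        days = keyspace * GUESS_SECONDS / 86400
        print(f"{length} chars from {alphabet}: "
              f"{math.log2(keyspace):4.1f} bits, {days:10.1f} days worst case")

    brute_force(10, 4)   # 4-digit PIN:          13.3 bits,   ~13 minutes
    brute_force(10, 6)   # 6-digit PIN:          19.9 bits,   ~0.9 days
    brute_force(36, 6)   # 6-char alphanumeric:  31.0 bits,   ~2016 days

At 13.3 bits, a 4-digit PIN falls in minutes once the rate limiting is out of the way; the six-character lowercase alphanumeric case works out to roughly the five and a half years Apple quotes.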
Time delays are useful protection over a network, but not when the attacker has physical console access, e.g. to a phone. At that point, proper cryptography and mathematics are the only good protection.
https://www.arm.com/products/security-on-arm/trustzone is the generic version of this, no idea what exactly Apple is doing.
Anyway, breaking the encryption means finding bugs in the software that runs in the secure zone (the same way you'd defeat a time delay in a networked application), or opening up the CPU die and figuring it out with an electron microscope (perhaps while the CPU is running).
Ultimately, software is going to be the productive attack vector. While the TrustZone docs emphasize "small" for the amount of code that runs in the secure zone, no programmer can ever do "small" on a deadline, so I wouldn't trust it to provide any actual security. But the hardware infrastructure is there to be pretty secure against any adversary that doesn't have a lot of time or money, if the programmers do their job correctly. (I don't trust it because of the time schedules involved for these SoCs: one or two a year, with a lifetime of maybe a couple of years. Nobody is going to spend the money to do the hard work of looking for bugs just so that someone can't rip Netflix video feeds. The engineering work would be more expensive than whatever their contract with the DRM vendors says they have to pay if they get hacked, which is the only incentive to be secure... and I doubt Apple would sign one of those.)
Perhaps a nation-state actor could shave down the processor and read that key with a SEM or some crazy thing, but that's literally how far the design is supposed to have pushed iOS security. Which is what makes this hack so embarrassingly bad (if confirmed).
A security company can easily afford either.
Shaving down the processor is hard but still possible. A government with a billion-dollar budget can probably do it if they want to, even if that means practicing on a million throwaway phones first. Apple's design is also a form of security through obscurity (https://en.wikipedia.org/wiki/Security_through_obscurity).
I much prefer an open design and security by high-entropy keys and well-established practices in the cryptography. I'd feel much safer using open cryptography libraries and strong passwords than arbitrarily and blindly trusting some secretive engineers at Apple.
Apple's design is super-hard to break but possible.
My Linux laptop, on the other hand -- the hard drive is standard SATA. Have at it. I don't care if you have a billion dollars, you'll have a hard time breaking into my data for probably at least a few decades.
The only reason your Linux laptop would be more secure than an iPhone is if you were using a high-entropy key to unlock it every time you wanted to use it -- and the iPhone wasn't. That's it. But remembering high-entropy keys without storing them in some less secure manner is so inconvenient and failure-prone for most people that it's actually a less secure design than Apple's SEP-assisted approach. And you could opt in to a high-entropy key on an iPhone if you wanted to, so even that is a false comparison.
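The piece that makes the low-entropy passcode survivable is that the unlock key is entangled with a secret fused into the hardware, so guessing can't be taken off the device. A rough sketch of the idea (PBKDF2 stands in here for Apple's actual UID-entangled derivation, which isn't public in detail):

    import hashlib, os

    # Stand-in for the device-unique UID key fused into the SEP at
    # manufacture; it never leaves the hardware, so this derivation
    # can only be performed on the device itself.
    HARDWARE_UID = os.urandom(32)

    def derive_unlock_key(passcode: str) -> bytes:
        # Illustrative only: Apple describes a UID-entangled KDF,
        # not this exact PBKDF2 construction.
        return hashlib.pbkdf2_hmac(
            "sha256", passcode.encode(), HARDWARE_UID, iterations=100_000
        )

Without the UID, an attacker can't test guesses offline against captured ciphertext, and on-device guesses are rate-limited by the SEP. That combination is what a bare disk with a weak passphrase doesn't give you.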
That has always been true and always will be. Even secure cryptoprocessors of the type used in smartcards and HSMs can be cracked with enough determination and time. There are companies in China that will read and clone them for surprisingly little money.
It has always amused me somewhat how scared of governments some people are (or at least the impression articles like this give), while at the same time completely accepting and trusting being herded and controlled by the companies they purchase these locked-down computers from. Anything you truly want to keep secret should be encrypted by systems you have knowledge of, with a key that only you know, or better yet, should not leave your brain at all.
Unfortunately, the IP-Box 2 became widely available and was almost exclusively used illegitimately, rather than by law enforcement.
If by "illegitimately" you mean third-party repair shops... I know Apple doesn't like that, but the whole *-box series are aimed at the mobile repair industry (a huge business in China), not law enforcement.
The faintest of ink will outlast the best of memory, or something like that.
That's the way I've always heard it.
The accused has a right to know exactly how evidence was obtained, and if the chain of custody was broken, just hiding behind an NDA isn't going to cut it.
It's certainly an interesting problem of profit maximization!
The photo stagings remind me of ones I’d use on a pre-release marketing site for a vaporware product to test demand and a price point.
Presumably even the cheaper model could be reverse-engineered to reveal the exploit used. But once the exploit became known, it would be patched.
Did no one think that when they take someone's phone for 5 minutes at the border, they could be doing this to it?
>>It can take up to three days or longer for six-digit passcodes, according to Grayshift documents, and the time needed for longer passphrases is not mentioned.
So yeah, up to three days for six-digit passcodes. If you have a longer passcode with letters and special characters, you could wait a long, long time.
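A rough sketch of "a long, long time", using the rate implied by GrayKey's own numbers (the ~77-character set and the three-day figure as a worst case are assumptions from this thread, not from Grayshift):

    # Guess rate implied by "up to three days" for a 6-digit passcode.
    GUESSES = 10 ** 6
    RATE = GUESSES / (3 * 86400)   # ~3.9 guesses/second

    # Assume lowercase + uppercase + digits + ~15 symbols: ~77 characters.
    ALPHABET = 77
    for length in (6, 8, 10):
        worst_seconds = ALPHABET ** length / RATE
        print(f"{length} chars: {worst_seconds / 31_557_600:.2e} years worst case")

At that rate, even a 6-character mixed passcode is on the order of a couple of thousand years worst case, and 8 characters is into the millions.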
On the other hand, perhaps Apple could secretly partner with a law enforcement team and purchase one for themselves. $15k and $30k are literally nothing to Apple, with a war chest in the tens of billions.
I imagine that telling someone, "Steal that TV and I'll give you $100 for it" would, additionally, make you a conspirator to the crime of theft.
All they have to do is discreetly obtain the device in question and have a few good engineers quietly pick it apart for a few weeks to figure out how it works. They then patch the vulnerability in a regular update, claiming they discovered it as part of normal procedure, and nobody takes notice.
Edit: the legality of the device itself is kinda interesting to me. Like, even if it is doing something illegal (like using stolen code or something), how would Apple prove it as long as it was only sold to law enforcement? If the police aren't asking too many questions and Apple can't legally acquire one, how do they prove it? I suppose they'd have to gather enough circumstantial evidence to get a judge to issue a subpoena, but things get a bit dark and fuzzy.
My phone has no banking information, credit card information, Social Security numbers, or email accounts that can be used to recover or reset access to any online service. Why? Because I don't trust my phone.
But aside from all that, all that information is already on the black market. There have been too many breaches, Equifax to name just one, to think otherwise.
Few people imagine themselves to ever be in a position where they would want to protect the info on their phones from LE.
If this gets stolen and put on the black market, that would be a good thing. Because then Apple can buy one, figure out what vulnerabilities it's using, and patch them.
This looks like a software flaw, not a hardware attack. It will be interesting to know how Apple screwed this up.
Sometimes I wonder whether real security is even theoretically possible, or whether engineers just never manage to achieve it because designers want things to be usable for consumers.
Whatever the case, truly secure, consumer-oriented devices don't seem to exist. I wonder if there are Android devices that do a good job at this, and what the status of Android device security is in general; I would guess it's no better.
Are you willing to pay $500k for a phone? Is there a vendor willing to put in a $20 million R&D investment so you can buy one? How many more phones would they sell? If you were Ed Snowden, would you even trust that company?
Are you going to buy a safe just to keep family photos in?
What does it even mean for you to have absolutely secure phone if you are going to be hit by a bus tomorrow?
This paper might prove to be an enlightening read: http://www.mdpi.com/2078-2489/7/2/23
I mean, if they truly broke an iPhone lock, then they had to be tampering with a real Apple device (not a dummy) to make their device work. Therefore they violated Apple's TOS, which I'm sure forbids any sort of backdooring. I doubt Apple will go after a rogue Chinese jailbreaker sitting in mom's basement trying to make a name for him/herself, but here we have an example of a for-profit incorporated business that makes 100% of its money by breaking Apple's devices.
On the other hand, if this is all just some sort of marketing gimmick, or the device has never truly been tested on an iPhone, then I'm sure Apple can go after them for trying to shame iOS/iPhone by making users think their devices are less secure than they actually are, which could hit Apple's bottom line.