It's a class which allows arbitrary types to be serialized on the wire.
I think it's time Apple adopted a new policy: no NSKeyedUnarchiver archive should ever travel from one iPhone to another. They could enforce that by putting a random 64-bit number in the archive header; if it doesn't match the local device's number, refuse to unarchive.
Then any software that breaks would have to find new ways to encode the specific data it needs to send over the wire. Specifically, require that senders use a specific serialized class structure (i.e. something with a schema) rather than something general purpose.
I've consulted for many companies that needed to expand beyond the Apple ecosystem, only to discover that their whole infrastructure was built on Apple's binary formats, which aren't well documented and aren't easily (or reliably) readable on other platforms.
The pragmatic move in these cases is to use formats that aren't platform dependent: things like JSON, Cap'n Proto, msgpack, and others.
I'm sure this isn't a particularly troubling issue for Apple.
Also interesting: Zerodium would pay up to $2 million for something like this, and even more now on Android.
warning, scrolljacking... Firefox should really fix that bug, like they did with the "new" popups.
Buying vulns/exploits is just a little government outsourcing.
Or possibly, "to spy on foreign nationals and to share with FVEY partners - who'll do the spying on US citizens, because doing it themselves would be illegal"... :sigh:
Yes, ASLR doesn't address the root bugs. Yes, ASLR can be bypassed. But it still lowers the overall risk and so is worth having around.
You're saying it protects against "inept" attackers; that lowers risk.
No need to nitpick a simple analogy.
Thank you, that is a very nice label for an annoying phenomenon.
ASLR changes the game: instead of just needing to find a control-flow vulnerability, you also need an info leak and a way to incorporate it into the exploit.
That’s a nontrivial improvement.
By volume, no. There are enormous numbers of low-level and mid-level hackers out there working for any number of interests. You have to be a pretty elite level of organised crime before your attacks reach the level achieved by nation states.
Security design is to some extent a cost tradeoff: the more you spend on security, the more attackers have to spend to attack you. If you're dealing with insensitive data and will only ever interest script kiddies, you can spend only a little. If you're a widely deployed device that people use to store sensitive data of interest to nation states, you have to spend enormously more to defend against them.
Layered security, or "defence in depth", is how anyone in the field will tell you to design secure devices these days. ASLR doesn't do anything on its own; it just amplifies the cost of every other attack, as do other mitigations like W^X. No layer is ever 100% secure in the real world, but five layers that are each 99.9% secure make for a very difficult-to-attack system.
No single measure will ever be sufficient, one should never become complacent like that...
This all comes down to C designers ignoring the security best practices of other systems programming languages, and UNIX clones having mainstream adoption.
The Morris worm (1988) wouldn't have been possible in regular Modula-2 (1978) code, nor in Ada (1983), Mesa (1976), ESPOL/NEWP (1961), PL/I (1964), PL/S (1974), or BLISS (1970), to cite just a few.
Yet C17 still doesn't provide language features to prevent another Morris worm, and the security Annex (Annex K) was made optional instead.
Can you please expand on this point?
For example, ESPOL/NEWP was the first systems programming language with explicit unsafe blocks, 8 years before C came into the world.
So hardware much weaker than a PDP-11 was nonetheless capable of supporting such security features.
The Burroughs B5000 OS is still being developed, nowadays known as Unisys ClearPath MCP, and its top-selling feature is being a mainframe system for customers for whom security is the top priority.
What's special about it?
> Bounds checking, explicit unsafe blocks, proper strings, explicit type conversions, proper enumerations, checked arithmetic.
Plus a capabilities based security access, and unsafe blocks taint binaries, requiring admin permission to be executed.
Recent hardware versions use Xeons.
Again, just features missing from C, and available in other languages.
Another notable feature: it made no use of assembly at all, in 1961, because all CPU features were exposed as compiler intrinsics.
There are other good security practices a slightly savvier user can follow, like running a soft firewall and installing fewer apps, but they wouldn't mitigate this vulnerability.
x.0 releases do a lot more than patching. Security updates are much smaller and more frequent than even x.x.1 updates.
I wait a couple of days to make sure updates aren't bricking devices. In one case, waiting just a week saw three updates, with a fourth within ten days. Some of those updates seriously crippled the device's capabilities.
Feel good in that you are probably not important enough for them to target you.
Or his Canadian friend Omar Abdulaziz?
> The vulnerability was first mitigated in iOS 12.4.1, released on August 26, by making the vulnerable code unreachable over iMessage, then fully fixed in iOS 13.2, released on October 28 2019.
(From the previous blog post in the series.)
Assume every electronically programmable device can be, has been, or will be compromised, either now or at some point in the future.