Breaking all macOS security layers with a single vulnerability (computest.nl)
606 points by afrcnc on Aug 14, 2022 | 151 comments



>Changing a security model that has been used for decades to a more restrictive model is difficult, especially in something as complicated as macOS. Attaching debuggers is just one example, there are many similar techniques that could be used to inject code into a different process. Apple has squashed many of these techniques, but many other ones are likely still undiscovered.

> Aside from Apple’s own code, these vulnerabilities could also occur in third-party software. It’s quite common to find a process injection vulnerability in a specific application, which means that the permissions (TCC permissions and entitlements) of that application are up for grabs for all other processes. Getting those fixed is a difficult process, because many third-party developers are not familiar with this new security model. Reporting these vulnerabilities often requires fully explaining this new model! Especially Electron applications are infamous for being easy to inject into, as it is possible to replace their JavaScript files without invalidating the code signature.

It makes me sad that we are likely not going to see any new fundamental design rethink for security's sake in mainstream operating systems. It is cost prohibitive at this point to do something that gets security right for the world of 2022 and yet not break all the apps which would never be rewritten!

Mobile OSes were a good break off point as far as security goes but that came with a lot of functionality sacrifice.

Although something like QubesOS can theoretically dream of being semi-mainstream with support from hardware and OSS OS vendors like RH/Suse/Canonical or even Microsoft.


It makes me sad that we are likely not going to see any new fundamental design rethink for security's sake in mainstream operating systems.

On the contrary, that makes me happy, because if that happens we are really going to lose what little computing freedom we have left: it will only make the walled-garden silos even stronger.


It’s hard to sign off SOC2 compliance when you know that npm and maven packages can be introduced by any mistake on your team and contain an “Upload all files from HDD to a rogue server” script. I’d need to operate administrative files (customer records, contracts, accounting) on a separate machine from dev…

App sandboxing is… the way it will go, for insurance reasons.


If this continues into the long-term without solution, I suspect paid package repos will start making an appearance, whereby only vetted versions of components are available.

At least one big company already maintains internal clones of repos with registration of every version of every component used (I know this through personal experience) and I have no doubts the others are doing the same, especially if they are into cloud or government contracts.


What about urgent security vulnerabilities in such components?

Do these companies have people on call who can review urgent security fixes in the components? Or how does it work?

(say, Log4j and the fixes need to get reviewed urgently, and let's pretend the language wasn't Java but a language that fewer people could understand?)


> I’d need to operate administrative files (customer records, contracts, accounting) on a separate machine from dev

You're saying that like it is a bad thing?

A developer probably shouldn't have access to all that live data anyways.


As a founder-developer of a company of 3, would you expect me to have 2 computers, for the security of customer data? It’s an interesting idea, I’m just asking.


Oh, I am well aware of the difference between "how it should be" and what is actually the case when you're a small developer.

For this kind of issue one could use VMs, where you encrypt the VM with the live customer data. Another option would be to use the cloud.

But yes... there's also nothing wrong with having more than one machine to physically separate the systems and thus the data. A bonus is that you can shut one down when it's not needed, and as such it won't be accessible over the network unless switched on.

Personally, as a small developer myself, I use all of the above.


For such a small company I would expect that data like that would just live in a cloud service like GSuite that's already SOC2 compliant; then, if you're running software integrations against that data, you're spinning up isolated SOC2-compliant cloud servers to do so.

Unfortunately, unless you get ridiculously creative and put in a bunch of extra time instead of cloud-service spend, SOC2 compliance is gonna be expensive.


QubesOS, then you can have many more than two. 32 GB of laptop memory is good to have.


Yeah. The problem with all these security innovations is they allow corporations to seize control. The ability to debug and intercept is also the ability to reverse engineer and override.

We need secure software that empowers us, not some secure walled garden.


As long as the companies making the walled gardens need engineers to build them, there will be a way to write general purpose software.


Fuchsia does a pure capability model: resources, even ones as basic as the file system, are provided through handles. Your file system handle is a handle to what would be a directory elsewhere, but to you that is the entire file system, so it's not a matter of finding a traversal exploit.

In principle you could do something similar with the Mac/iOS sandbox by starting a process with a compute-only sandbox and then providing it with specific sandbox entitlements (from the parent) that include specific file systems; however, in principle it can still see a full version of the fs.
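The directory-handle idea can be sketched in ordinary POSIX terms. A hedged illustration in Python (an analogy using directory file descriptors, not Fuchsia's actual API):

```python
import os
import tempfile

# A directory file descriptor stands in for Fuchsia's directory handle:
# to the code holding it, this one directory is "the file system".
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, "data"))
with open(os.path.join(root, "data", "greeting.txt"), "w") as f:
    f.write("hello")

# The parent grants the component only this handle.
handle = os.open(os.path.join(root, "data"), os.O_RDONLY)

# The component resolves paths relative to the handle, not to any
# ambient root it could traverse out of.
fd = os.open("greeting.txt", os.O_RDONLY, dir_fd=handle)
print(os.read(fd, 100))  # b'hello'
os.close(fd)
os.close(handle)
```

Unlike a real capability system, plain POSIX still lets the holder use absolute paths; Linux's openat2 with RESOLVE_BENEATH is the closer analogue of what Fuchsia enforces by construction.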

And yeah, any OS that isn't completely new is burdened with support for old apps, but then iOS, which used its newness to adopt a stronger base security model, is constantly beaten up for that model.


Ah, I forgot about Fuchsia - https://arxiv.org/pdf/2108.04183.pdf seems to do a good job of explaining the security architecture without complicating things. With Google's backing (if they don't lose interest that is) and the potential to take over Nest/Chromebook/Android devices it might go farther than most new OS experiments.


Qubes is a good product while simultaneously being both the best and an objectively poor solution to a problem that shouldn't exist. That's how much of a mess the situation is.

Qubes is sandboxing at the machine level, by putting every application set into its own OS. Is that in any way clean or ideal? Not at all. But it's necessary if you don't trust your own OS not to have been compromised by the software running inside it.

I get the arguments against centralised software distribution - it encourages monopolistic behaviour and removes user freedom - but it does at least make a problem of this nature fixable if you can enforce compliance to breaking changes at distribution time.

I'd like to think a new layer could be added to Linux or BSD that would marshal this kind of compliance centrally without deliberately conflating payment into it, but I've no idea how you'd get widespread adoption of something like that even if you could organize its implementation. You'd also likely need to implement code signing everywhere, which was such a challenge in the past that (at least in the Linux kernel) it was abandoned.

A similar model to how domain registration works for white-listing might make sense here, in that it's not centralised - but it's not fully decentralised either - and at some point there needs to be a process to onboard new authorities who everyone needs to trust, at which point it's a slippery slope to self-certification.


> It is cost prohibitive at this point to do something that gets security right for the world of 2022 and yet not break all the apps which would never be rewritten!

You could quarantine your old applications to their own individual VMs, while newer programs can make use of your shiny new security features?


Harder than it sounds, though. You still have to be able to communicate with other processes to present the user with a usable UI, so the isolation is never really complete. There are lots of variations on this theme and they all have to compromise in some way.


It's perfectly possible to have that with something like the X window system. The irony is that this protocol gives applications way too many capabilities. But Wayland is a chance to fix that for good and finally have proper compartmentalisation in the GUI.


True, but Apple still did it to allow Classic Mac apps to run on OS X when the initial version of OS X was released.


Oh, it's definitely no walk in the park. But more realistic than rewriting all old programs in one go.


Sort of like Fuchsia/QubesOS combo, interesting.


Maybe, I don't know these. I was just pointing out that we can have both progress and backwards compatibility.

Just like we can still play old games via dosbox, but that doesn't mean we have to run DOS as our main OS.


I think a lot of users would be fine with apps being fully siloed from each other, with the exceptions of being able to copy and paste between them and having 'File Open' dialog boxes able to access files from anywhere.

Every other kind of interaction between separate applications can be blocked without breaking much functionality. And you can limit breakage further by running all 'legacy' applications in the same silo, and only running new applications disconnected from each other.


Literally any other kind of interaction between applications feels like the apps are spying on me. I don't even want one to suggest which app I should open for a doc. Mac OS has taken security in a strange direction where you can't change the suffix of a file without exposing possible vulnerabilities by apps automatically opening it - and the backstop is supposed to be validating all applications through your "apple account" (whatever that is?). (although in some sense the suffix and permissions flaws have been around since System 7).


> Although something like QubesOS can theoretically dream of being semi-mainstream with support from hardware and OSS OS vendors like RH/Suse/Canonical or even Microsoft.

I'm trying to encourage as many people as I can to run it, or at least play with it for a while to gain familiarity with it. Properly applied, I think it does add quite a bit of useful practical security... though it's not going to automatically solve all problems. I like the silos of compromise, at least, and you can do high-risk things (like "anything web") in disposable VMs that reset on VM power cycle.

My only major concern with Qubes is that I've not decided if it's weird and niche enough to be mostly left alone by the 0day markets, or if it's a super high priority, high value target to attack because of the type of people who are likely to use it. I'd like to see an ARM port of it, because Xen on ARM is quite a bit simpler than Xen on x86, but the hardware to run that doesn't quite exist yet. Maybe with the RK3588...

The fundamental problem here is that software developers (and I'm as guilty here as anyone in the industry) tend to view complexity as a one-way ratchet: add features. Add features. Add features. Add knobs. And when it comes crashing down around your ears, "add security" (in the form of sandboxes, or process isolation, or... https://xkcd.com/2044/ applies here).

And then it turns out that "adding complexity to solve problems created by complexity" isn't a strategy with a great long term success rate.

I'm slightly encouraged by Apple admitting, as clearly as they ever admit anything, that this strategy isn't working: their Lockdown mode is "only for people with the most extreme threats, blah blah blah," yet I'd expect anyone in the security or software industry to turn it on basically as soon as they get iOS 16 and not look back. Or to install the beta to get that option early.


I have tried QubesOS, and boy does it bring back memories of Windows being called a "Pentium to 286 converter". It is slow as molasses on hardware where even Windows 10 and Gnome are both fast, and to make it usable you have to keep relaxing the security to the point where it is probably less secure than regular OS. And don't even bother if you have to use scaling other than 100%, sure you can scale the DOM0 but the rest of the VMs are not scaled and there is no documentation on how to do it. What we need are simple sandboxes that isolate GUI applications into chroot environment and keep them away from other applications and documents.


> It is slow as molasses on hardware where even Windows 10 and Gnome are both fast...

I haven't had that problem. There's no GPU acceleration, so anything heavy on the GPU is a problem, but in terms of general use, I don't find it slower than Linux on the iron.

> and to make it usable you have to keep relaxing the security to the point where it is probably less secure than regular OS.

How so? What settings? I don't run with a USB Qube all the time for just my HID devices, but I light it up when I'm doing anything else on USB. I haven't had issues with having to turn down a bunch of security settings either.

> And don't even bother if you have to use scaling other than 100%, sure you can scale the DOM0 but the rest of the VMs are not scaled and there is no documentation on how to do it.

Yes there is. https://github.com/Qubes-Community/Contents/blob/master/docs...

> What we need are simple sandboxes that isolate GUI applications into chroot environment and keep them away from other applications and documents.

The history of local root exploits ("Cheap and easy!") would argue that doing such a thing and relying on the kernel is just security theater.


>The history of local root exploits ("Cheap and easy!") would argue that doing such a thing and relying on the kernel is just security theater.

It may not be perfect, but surely it's better than nothing. It wouldn't protect you from a sophisticated nation-state attacker, but most people don't have that in their threat model. Surely it would be good enough to prevent Google Chrome from snooping through your home directory and other such things.


> surely it would be good enough to prevent google chrome from snooping through your home directory and other such things.

You want firejail, I think; this is one of its headline features. (Or possibly bubblewrap.)


Please do not use firejail. See this issue page: https://gitlab.alpinelinux.org/alpine/aports/-/issues/12635

Bubblejail is an acceptable alternative: https://github.com/igo95862/bubblejail


That is a somewhat controversial claim. See also this issue page: https://github.com/netblue30/firejail/issues/3046

Also, bubblejail ships all of 8 profiles; I'm skeptical of its claim to be a full replacement.


I have been using Lockdown mode for a few weeks on iOS 16 beta and iPadOS 16 beta. I really like it, it does not ruin the experience of using my devices and I feel like it makes my devices safer. I plan on always having Lockdown configured. What about the rare web sites that don’t work with Lockdown? I either ignore them or add the URIs to my todo list and visit them when using a laptop. BTW, I only use macOS and Linux laptops when I am developing software. Otherwise I use either my small or large iPad Pros.


> I'd like to see an ARM port of it, because Xen on ARM is quite a bit simpler than Xen on x86

AWS VMs use Xen on x86 mostly, right? So if someone has a Xen/x86 0day, they're going to use it to break out of EC2 guests and perform cross-tenant attacks, not use it on relatively low-value Qubes where you'd need to already have RCE anyway to make the attack work.


AWS VMs mostly use the AWS Nitro Hypervisor (on x86 and ARM), which has a smaller surface area than Xen and also further isolates itself from the guest using hyper-virtualization extensions. With the Nitro system, the network and storage virtualization are also implemented on dedicated Nitro cards - which have their own CPU and memory that is separate from the machine hosting the VM.


Thanks for the correction!


> because Xen on ARM is quite a bit simpler than Xen on x86

Why's that? If there's just less backwards compatibility, could we make a cut down version on x86?


x86 virtualization is pretty nasty because it was tacked on long after the architecture was developed. ARMv7, and especially ARMv8, virtualization is a lot cleaner, with fewer "nasty corners in the hypervisor."

https://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization...

There's stuff that can be cut out of x86, but it's still just a somewhat sharp bit of code, with lots of weird corners, compared to ARM virtualization.


> Mobile OSes were a good break off point as far as security goes but that came with a lot of functionality sacrifice.

I have a tiny bit of hope that they'll eventually be able to replace desktop OSes through virtualization. It's actually what I long thought Apple would do with iPad OS (why else put in an M1?).


> It makes me sad that we are likely not going to see any new fundamental design rethink for security's sake in mainstream operating systems. It is cost prohibitive at this point to do something that gets security right for the world of 2022 and yet not break all the apps which would never be rewritten!

I don't think that's the case, though. Microsoft overhauled the entire security posture of Windows with Vista. It wasn't a good OS when it came out, but Windows is much better for it.

But yes, if you're going to start from the ground up, you're going to lose a bunch of functionality. Not only due to the nature of rewrites, but also because of "what new limitations does the security model imply?"


Google’s upcoming Fuchsia OS has an entirely new security model that looks really solid. I imagine it’s still a few years away however from any kind of desktop usage.


Well, GNU Hurd 2 is going to be released real soon now. It will fix all the issues with current OS architectures.

And then, at some point, Google's Fuchsia will likely also surface in the mainstream.

Both systems will bring capability security. (You know, almost like seL4, only less secure.) A technology that wasn't used until now, even though it's been available for almost 50 years, and would solve almost all security problems of computers. We could in fact have had it supported directly by hardware almost four decades ago… But the market didn't like that.

The problem is that there isn't, and won't be, any progress in computer technology. It's been like that for at least a hundred years. We've been buried in the von Neumann local "optimum" since then… As long as this does not change, we're doomed to have crappy, inefficient, and insecure computers. The invisible hand will just prevent any progress until forever. (OK, our future AI overlords could possibly change that.)

But who cares about the status quo? The market insists on it. So any attempt at resistance is therefore futile.

___

Please excuse the slight amounts of sarcasm. I just couldn't hold back.

And seriously: There may be some jokes hidden in here. If you try you'll recognize one or two of them, I bet… :-D


> It is unclear what security the AES encryption here is meant to add, as the key is stored right next to it. There is no MAC, so no integrity check for the ciphertext

I imagine this is to prevent accidental disclosure of sensitive data through basic tools like grep and Spotlight (only).

Also to prevent layman attempts at tweaking the files in a text editor, like one used to do with save files as a kid. But not to protect against dedicated attackers.


This type of security is bad. Either something should be possible, and easy for anyone to do... Or it should not be possible, and protected by real cryptography.

Hiding something with ROT-13 just 'so it doesn't show up in grep' is a bad idea.


Security isn’t a binary ecosystem. There are varying degrees of criteria: performance, functionality, and acceptable level of risk.


True, but this is "we put a big padlock picture so everyone thought it was secure" levels of security. That is just deceptive.


Isn’t this normally stored on an encrypted volume anyway? If the user cares about security even in the slightest, they’d be using FileVault, of course

So really, this type of encryption is defeating scanning/probing, not a dedicated attacker who knows where to look and why. In such cases the attacker is worried about gaining access to the walled garden first — where the protections are.


I think this is more like putting a lock on your door. It won't deter a locksmith or a dedicated attacker, it just filters out the low effort attackers.


Putting a lock on your door but also putting the key under your door mat.


Yeah, that's a thing people do.


Filtering out amateurs seems to be a reasonable goal.


I'd compare it to unencrypted, unsigned page state in ASP.NET or JSP: it works, but it's completely untrustworthy. IMO this shouldn't be hard for Apple to solve either: they could just create a key in the keychain and use it to sign the file with the existing, standardized XML signing algorithm. That way any change to the file, even to the XML, will cause it to be discarded.
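The "no MAC, so no integrity check" gap the article points out is cheap to close in principle. A hedged sketch in Python using an HMAC rather than XML signing (illustrative only; the key here is hypothetical and would, per the suggestion above, live in a keychain that checks the caller's identity first):

```python
import hmac
import hashlib
import os

# Hypothetical key; in the scheme described above it would be held in
# the keychain and released only to suitably entitled callers.
key = os.urandom(32)

def seal(state: bytes) -> bytes:
    # Append a MAC so any tampering with the blob is detectable.
    return state + hmac.new(key, state, hashlib.sha256).digest()

def unseal(blob: bytes) -> bytes:
    state, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, state, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("state file tampered with -- discard it")
    return state

blob = seal(b"window-positions...")
assert unseal(blob) == b"window-positions..."

tampered = b"evil" + blob[4:]
# unseal(tampered) now raises ValueError instead of decoding attacker data
```

The design point is the same one the article makes: without the MAC, the ciphertext (or plaintext) is writable by anyone who can reach the file.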


No, not really - fundamentally broken things like this one are worse than useless, because they degrade security by making it harder to understand the situation.


Respectfully, why would there be any valid use case for understanding the private format stored by this DB? In ideal circumstances it is private, is it not?


I don't mean understanding the database format; I'm talking about understanding the security model.


I actually think something like ROT-13 is fine in applications where obscuring it from humans is all you care about. It's serving the same purpose as the "Staff only" sign on that door in the restaurant. Does it somehow prevent you entering without an employment agreement? Would it stop a robber or thief? Nope. But since there's a sign you know that's the wrong way and will stay out of where you aren't wanted.

AES looks like security, ROT-13 is clearly not security, so there's no illusion.

Suppose a maintenance programmer is looking at logs around a weird issue. Scrolling through hundreds of entries, they happen to notice that the phrase "FuckDonaldTrump" appears in the logs. Huh, what? Oh, it's the password for the administrator user. Well, the way human memory works, that password is stuck in their head now. They didn't try to learn the admin password but now they know it, whereas if the log said "ShpxQbanyqGehzc", well, even though that's the "same" information, your brain doesn't retain it automatically because it doesn't mean anything.

They're not trying to learn the admin password, and with ROT-13 they are less likely to accidentally do so, that's actually a benefit.
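For what it's worth, the grep-resistance property being described is a one-liner. A quick sketch with Python's built-in rot_13 codec:

```python
import codecs

# ROT-13: reversible, zero security, but keeps strings out of casual
# grep results and out of a scrolling reader's memory.
secret = "FuckDonaldTrump"
obscured = codecs.encode(secret, "rot_13")
print(obscured)  # ShpxQbanyqGehzc

# Trivially reversible -- the point is obscurity, not protection.
assert codecs.decode(obscured, "rot_13") == secret
```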


If data is to be used locally, it has to be encryptable and decryptable locally, with just the resources accessible locally, so it's pretty much not securable from local software.


You can still secure things with local keys managed in eg. a keychain that checks the identity/entitlements/permissions of the caller before giving out the keys.


That just translates to a slightly longer exploit chain in practice. It's not a fundamental obstacle.


A safe can be opened, so should safes not have doors?

It is often good to leave keys right next to a locked lock.


Surely there is a better fix to a deserialization exploit than just making every new app implement `bool dontBeHackable { return true; }`??

Even just, I don't know, forcing all builds to silently include this property in the compiled output - I mean, interacting with OS app state data files should be abstracted away from the programmer anyway, so it shouldn't matter if they're signed/encrypted behind the scenes, just handle it automatically, right?


Apple originally intended to require NSSecureCoding in some way after it was introduced in macOS 10.8 (2012) but constantly delayed those plans due to the compatibility issues sibling mentioned.


Did Apple lose their courage? Maybe they should throw up a dialog not unlike the 32-bit x86 deprecation one.


It could break a bunch of apps. You could argue that the user should get to decide, but just doing it across the board automatically would have consequences.


Could this simply be a 'quick fix'? Can't imagine this being the definitive solution to this.


Related to this, is there an easy way to run arbitrary programs in the macOS sandbox? AFAIK sandboxing is opt-in for app developers at the moment. Does manual invocation of sandbox-exec on the command line still work, and are there GUI helpers for running arbitrary apps with this tool?

Edit: Apparently sandbox-exec is still usable, just not (publicly) well-documented. Would be nice of Apple to make sandboxing easier for regular users running untrusted apps that don't opt-in to the sandbox. I’m thinking of Firefox and Firefox extensions in particular.

https://7402.org/blog/2020/macos-sandboxing-of-folder.html

>The only non-folkloric documentation is found in the man page for sandbox-exec [...]

>Sandbox documentation has been a moving target over the years. Because it is a private interface, Apple is under no obligation to maintain forward or backward compatibility. Take note of the publication date of any information found online.

>Just to be clear, the sandbox profile format is not documented for third party use. Feel free to experiment with this stuff, but please don’t try to ship a product based on it.
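To the parent's question, a minimal experiment might look like the following. Hedged heavily: sandbox-exec still ships with macOS but is deprecated, the SBPL profile language is undocumented and shifts between releases, and the path to the untrusted program is a placeholder.

```shell
# Start from "allow everything" and carve out what you distrust:
cat > no-net-no-home.sb <<'EOF'
(version 1)
(allow default)
(deny network*)                            ; no sockets at all
(deny file-read* (subpath (param "HOME"))) ; no reading the home dir
EOF

# Run an untrusted program under the profile:
sandbox-exec -f no-net-no-home.sb -D HOME="$HOME" /path/to/untrusted-app
```

Going the other direction, starting from `(deny default)` as the App Sandbox does, takes a much longer profile just to let a process launch at all.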


Question from reading the article, although I might have missed the response.

Is a user's latest version of macOS still vulnerable to this exploit if they're running any applications that do not return true for this boolean?

(i.e. does this mean older apps still make the entire machine vulnerable?)

If so, is there a means for the users to enforce this flag globally and just deal with the crashes if an app tries to do something that relies on this privilege?


Seems like old apps are vulnerable:

> This vulnerability will therefore be present for as long as there is backwards compatibility with older macOS applications!


Just what Apple needed: another casus belli to remove more backward compatibility from macOS. I am still on Mojave out of principle.


Oh that hurts to watch, brutal. The "Pwn" button helps keep it light, and I'm definitely stealing that, but ouch.

Can someone who knows mac OS development educate me about why it would be broken/expensive/inadequate to just page the application's mapped pages to disk? I gather at least on the lower memory Apple Silicon devices that swap is pretty aggressive even for running applications?


I worked on AppKit's persistent state feature. One of its primary uses is persisting UI state across app restarts, for example when performing a software update. Simply writing out memory to disk would "persist" data like Mach ports or file descriptors, which would no longer be valid when the app is re-launched.


Makes sense! Thanks for the explanation.


Would it be practical to prevent apps from writing to other apps’ persisted state?


Yes of course, by design apps write their state into their own sandbox in the filesystem, and trust the OS to enforce security boundaries.


Then how does the entitlement escalation part of the exploit work? The writeup sure makes it sound like any app can write to the persisted state of another app.


Sandboxed apps cannot write into the container of other apps unless the user explicitly allows it.

Here the bug is that OpenAndSavePanelService, which is privileged, was decoding state from the app it was acting on behalf of. An app can save malicious objects into its own serialized state, and then it will be injected into OpenAndSavePanelService, leading to a sandbox escape. The fix is for the service to stop decoding state.
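The shape of that bug is the classic one: deserializing data from a less-trusted party in a more-privileged context. A hedged sketch using Python's pickle as a stand-in for NSCoder (an analogy, not Apple's code):

```python
import pickle

# pickle here stands in for NSCoder: restoring serialized state can run
# code chosen by whoever produced the blob.
class Malicious:
    def __reduce__(self):
        # Tells the deserializer to call eval(...) at load time -- the
        # analogue of planting malicious objects in saved app state.
        return (eval, ("__import__('os').getpid()",))

blob = pickle.dumps(Malicious())  # the sandboxed app "saves" its state
result = pickle.loads(blob)       # the privileged service "restores" it
# By the time loads() returns, attacker-chosen code has already run in
# the restorer's context.
```

Which is why the fix described above, the privileged service simply refusing to decode that state, is the right one.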


Isn’t that what happens when it goes into sleep mode?


Yes, but that persists the entire system, including the kernel. AppKit's restorable state is meant to be consumed by brand new processes.


You most likely meant to ask any questions about deep-lore Apple internals to the sibling, who is a super well-known expert on the topic. I barely know my way around Xcode. :)


Just a note of appreciation to the author for the clear, accessible way this is written up. Understandable by semi-technical people like me.


Backwards compatibility over security, reminds me of a certain, popular OS.

They could enforce it for all applications signed after a certain date and/or mark the unsafe method as deprecated. This would still not prevent downgrade attacks for a while, but at least offer a path forward.


How much did Apple pay for the vuln report?


The lawsuit is being sketched as we speak.



I'm confused, could you clarify how these are related? These seemed like different vulnerabilities to me, but I don't know much about OS X.


Related because both allow code injection into a signed app, but mine uses the disable-library-validation entitlement method and this uses the SecureRestorableState method.


Small correction to the article: it is the Transparency, Consent, and Control (TCC) framework, rather than Trust, Transparency, and Control. More here: https://eclecticlight.co/2019/07/22/mojaves-privacy-consent-...


Presumably Apple has restricted the entitlement processing to disallow downgrades/SIP bypass of the boot OS? Personally I'm fine with requiring recovery mode for downgrades, and maybe for blacklisted upgrades as well.

IIRC they do something like revoking/disabling old iOS installers, but that seems a bit extreme for macOS.


Something something TLA backdoor much?


Just wrote an email to product-security@apple.com. Users should be able to disable "saved state" when an app does not implement `applicationSupportsSecureRestorableState`.


When "saved state" works it helps in certain situations, but the convenience it brings can't justify the security cost. Besides, this mechanism is problematic in certain situations (APFS disk full, power cut, OS crash, etc.): there will be corrupted state files, which may lead to data loss.

I still remember this happening to Atom; it caused me a lot of data loss. In the end, apps will have their own way of storing temporary state files without relying on macOS's built-in state, like Office and all other modern code editors.


The 'Reopen windows' feature is a complete crapshoot on what the app actually does when it comes back up, so I never use it. It has always been half-baked on OS X, and seems to also add weird complexity.

Honestly what is the benefit? A computer is off, on, or sleeping. All of that worked fine for years. I don't get it.


> The 'Reopen windows' feature is a complete crapshoot on what the app actually does when it comes back up, so I never use it.

I do use it, frequently. So frequently that I have designated Force Quit as my “turn it off and on again” mechanism for nearly everything for years. Granted, my daily-usage app catalogue is minimal, and I agree some aspects of normal behavior are less predictable than one might wish (the article covers those). So I respectfully defer to your experience and decision not to use it. In my experience it’s as reliable as Quit or Shutdown for every app I use, and sometimes more reliable for misbehaving apps which have custom state-saving functionality. VSCode in particular comes to mind: it restores undo/redo history when force quit and falling back on the built-in system behavior, but comes up with no history if I so much as agree to a software update.

> Honestly what is the benefit? A computer is off, on, or sleeping.

The benefit is that, excepting certain details most use cases rarely encounter, the computer and apps running on it have only one state instead of three: resume where you left off. For active multitasking, that means waiting for volatile storage. For “App Nap”, Force Quit, and crashes that means waiting for non-volatile storage. For reboots and logins that means waiting for a bunch of other apps and services to resume. But if the app uses the functionality even semi-faithfully, you’re only waiting for it to become responsive again, then it resumes as if it were in the same state you left it. Regardless of whether your computer or even your app was on, off, or sleeping.

Again, not saying it’ll work that way for you and I’ll defer to your experience. But if you want to know how other people benefit from it, my experience is I can kill almost any process on my computer or even force reboot, and the only consequence before I resume what I was working on is how long I might have to wait.


I force quit textedit as a way of saving. Then I don't have to go in and name my 20 open documents. Every few months I'll manually save or delete my files. But it's interesting that force quit is more convenient and less annoying than "save"


I never understood why TextEdit doesn’t just silently exit and reopen all the unsaved windows the next time. I thought that was the point of this whole feature? (That’s exactly how Sublime Text implements it and I love it. I use a lot of unsaved scratch buffers for temporary notes).

It feels like the worst of both worlds…


I’m not sure if I’m glad someone else has my exact organizing strategy, but I’m definitely glad it’s working for you!


Did you know that Command+Option+Q doesn't reopen windows?


I did! I use Force Quit because it mostly does reopen windows. Sometimes it’s more reliable, and it’s almost always faster, than waiting for an app to quit “normally”. I mostly trust the automated behavior more than I trust individual apps.


Ah, I misunderstood what you were saying


I'm with you. Besides the crapshoot, generally the only reason I restart my MacBook is because the particular combination of poor cooling, tabs leaking memory, video conferencing software somehow maxing out all cores, and Spotlight's indexer apparently rebuilding from scratch has made my experience so awful I've just held the power button for 5s to kill everything and restart. (Side note: why doesn't macOS have a "just shut down already" button like Windows?)

The last thing I want is for all that crap to load back up on restart all at once.


It always seems to work for me properly? People want this feature so they can restart their computer without losing their Safari tabs. I used it today when updating my computer. I didn't have to save my tabs because Safari just brought them all back automatically.


I don’t think tabs are lost when you restart safari; at least there’s a different setting to reopen tabs when safari starts. Different than reopen windows.


Browser tabs are in the browser settings on all platforms (much simpler because it's just a URL)


In Safari it's part of the window restoration setting. There are options to reopen those tabs if it fails (in the History menu) but getting your tabs back when you quit Safari without closing a window is part of the native state restoration.


It may be implemented that way, but it doesn't require that feature, because all browsers support tab restoration on all platforms


Sure...but that doesn't change that macOS state restoration is working well in this example, even if it's a trivial example


Side note, I really would love it if there was a way to stop OS X from reopening all windows on the next boot after it crashes. I hate that so much.


Hold shift while you log in


Remarkable - I remember that key preventing the loading of extensions on startup back on System 7. It's cool that the intent of it has lasted so long.


Fun history: Windows actually took inspiration and adopted that way back in the day.

I believe that it still works as well.


 > Restart > https://i.imgur.com/uPTJyyS.png

That's always done it for me. It's one of the first things I uncheck on a new Mac.


Presumably, they mean that they want windows to reopen after a controlled shutdown (when they likely quit and clean up applications before doing so); while they don't want windows to reopen after an uncontrolled shutdown (when that results in dozens of windows reopening from 12 different apps, all at once, bringing the system to a crawl for 10+ minutes.)


Yup, plus I always uncheck this which works for a controlled shutdown but doesn't seem to have any effect on uncontrolled shutdown


I had this exact problem with an old machine.

The default behavior was to triple everything, which resulted in 20-30 minute boot cycles. I’m not exaggerating, the boot loop would swap to disk and there she blows


Perhaps the "Close windows when quitting an app" setting in System Preferences -> General may do it?


Would be nice if it actually worked though...


That doesn't take effect for reboots, only when you manually quit an app


My father-in-law was complaining about his Mac being incredibly slow, even after a reboot.

Turns out, he had a lot of programs running despite not having a document open, and that "reopen window" function was causing them to all reload on startup. Browser, word processor, tax software, whatever else. All on his aging Mac that I'm fairly sure only has 4 GB of RAM.


The fact that applications close when you close the window is a huge usability win for Windows versus Macs for most users, I'd say.


Preview works this way on Macs for some reason. Close all my preview windows and the app itself disappears from the app switcher. I assume it self-quits.


> Honestly what is the benefit?

Clearly, you're not in the business of selling high performance, modern computers with 8GB of RAM...

Apple's base amounts of RAM are stingy at best.


I use it whenever I have to restart my work laptop for a system update, or because my company's sketchy VPN client stopped working again. Saves me quite a bit of time and effort


Power would be the big one, and given Apple’s primary devices are mobile anything that reduces the need for cpu to be doing anything is a win.


I am legit curious what the reason for the downvote is here. I don't think there's anything personal/shitty, and this is an objectively true statement answering the "what's the point?" comment.


For me 9 times out of 10 when I manually restart my device I want a fresh start anyway.


> Applications can now opt-in to requiring secure coding for their saved state by returning TRUE from this method. Unless an app opts in, it will keep allowing non-secure coding, which means process injection might remain possible.

> This vulnerability will therefore be present for as long as there is backwards compatibility with older macOS applications!

This sounds… really really bad?


NSSecureCoding is so old at this point (macOS 10.8, 2012) that anyone still writing Obj-C who hasn't adopted it is essentially committing malpractice.


I mean...people are still writing code with textbook SQL injection and XSS vulnerabilities.

A startling number of developers know nothing about security.


also the first footnote is very bizarre


>> Process injection is the ability for one process to execute code in a different process

This is the kind of absurd, foundation-shaking statement that, as someone who's been coding since I was 7 (in 1987..) feels purely nauseating. Although I sort of know the answer, my first question is how did we reach the point where something so fundamental could be so insecure and distributed to so many people?


This is what happens when a new security model is retrofitted onto an existing one.

In the original Unix security model, there was no security concern with this (except maybe for chroot environments): it didn't allow a process to do something it couldn't otherwise do, since all processes owned by a uid had exactly the same rights. Now that we've started sandboxing user processes in various ways on macOS and Linux, that's no longer the case, and we suddenly need to crack down on useful tools like strace and gdb.


Sorry... this is above my pay grade, but I still think of processes as running on a single thread, reserving memory and being mostly inviolable other than maybe sampling what they're holding at the moment. How does giving a tool the ability to analyze a thread allow it to inject code into the process as it's running? Forgive me if I'm just way behind but isn't the kernel of any modern OS supposed to prevent exactly that thing from happening?


A lot of legitimate debugging features involve actually modifying the code of the target. This is a common way of setting breakpoints: you replace the instruction at the given address with a trap instruction that will hand control back to the debugger. Then the debugger puts the original instruction back and resumes the target's execution.

And since the two processes already run as the same user, in the original model there's nothing the target can do that the debugger cannot also do, so this was not a privilege escalation path.


If you're debugging as the same user that makes sense because the debugger is supervising the code. (the debugger can't for example halt other processes besides the code it's supervising). But how can some other random process even with the same user just inject itself into running compiled code without somehow having the ability to rewrite memory? [edit: memory that has already been allocated by the kernel for the thread it's trying to interfere with]


The difference between debugger and non-debugger in 80's unix is... none, besides calling ptrace().

I called ptrace() on your pid, therefore I am your debugger now.


LD_PRELOAD, which is basically the same thing, has been with us since forever.

The Unix security model was never meant to protect user processes from each other; it was only meant to protect the system from user apps. That's a bad model these days, when basically every device has a single user and the real threat is user apps hacking each other and stealing each other's data.


OpenBSD has pledge(2) and unveil(2) today.


[flagged]


That's impressive. still not process injection unless it's still repeating itself every time I reload ;)

[edit] oops. I reloaded. What was that script kiddy running against the server that generated all that text? Couldn't have been rendered on the client


> In PHP, exploitability [of untrusted deserialization] for RCE is rare.

I think that is disputable.

Anyways, great write up


[flagged]


So you're experimenting with your bot on HN hey....

Well it isn't welcome.... And even if it were welcome, it's spewing random junk from the training set.


What an incompetent shitshow.

I don't believe this will impact AAPL's price though. Shareholders and ad sellers are the only "customers"/"users" Cook's Apple is interested in and works for and news of this vulnerability will be shushed or soothed outside of tech forums.

Whether this will be somehow fixed (Dear Cook's Apple, consider starting with not keeping decryption keys together with encrypted data, duh) or ABI/compatibility will get broken, Apple's marketing will sell the news to shareholders as an improvement and the magic of capitalism will boost the price.


>don't believe this will impact AAPL's price though. Shareholders and ad sellers are the only "customers"/"users"

This is completely wrong. Apple makes very little of its revenue from advertising. Their cash cow customers are iPhone users.


Like many/most public companies, Apple doesn't work for its cash cows (it's the other way around) but for its real customers: shareholders.


Shareholders are not customers FFS. Stop repeating that like it’s some grand insight.

Shareholders are owners and shareholders want money. The way to get money is from customers. You don’t make money from shareholders unless you’re running a Ponzi scheme.


> Shareholders are not customers FFS. Stop repeating that like it’s some grand insight.

A customer is a recipient of goods provided by a seller or vendor in exchange for money or other valuables. A user of a free service is not a customer by that definition. Unless you would openly treat privacy as currency.

An investor who gives a company money is - in my view - a paying customer. They give a business some cash, expecting more in return. It is not paying for a service, but it is still paying and expecting something in return. A publicly traded company is a product itself, and an investor is buying that product.


Hmm. What public companies do you know of that don't work for their shareholders?


Fresh IPOs still have the product quality that made them IPOs in the first place. It goes downhill around that time, and pro-user changes to pro-shareholder, which are very often in opposition.


I see so many ads for their own products inside their products. I hardly believe they don’t make any money from that.


They do make money from ads, but it's a rounding error compared to anything else they make.


idk why there's so much negativity in your comment, tbh. i don't think that's how apple is at all, and a lot of issues are simply a result of too many priorities and things falling through cracks until improvements are made on focus and development efforts. it's not as malicious as you think


> idk why there's so much negativity in your comment

I'm a MacBook Pro 2017 owner. It's the negativity towards Cook's Apple in general.

Currently my biggest issue with Apple is that their phone is still the lesser evil. Other than using iPhone, Jony Ive and Tim Cook efficiently moved me to other brands.


FileVault is the actual crypto protection. The inner encryption likely just defeats scanning for known secure strings.



