We did our best but the fact is that sandboxed apps run more slowly, have fewer features, are more isolated, and take longer to develop. Sometimes this cost is prohibitive (see Coda 2.5).
IMO the app sandbox was a grievous strategic mistake for the Mac. Cocoa-based Mac apps are rapidly being eaten by web apps and Electron pseudo-desktop apps. For Mac apps to survive, they must capitalize on their strengths: superior performance, better system integration, better dev experience, more features, and higher general quality.
But the app sandbox strikes at all of those. In return it offers security inferior to a web app, as this post illustrates. The price is far too high and the benefits too small.
IMO Apple should drop the Mac app sandbox altogether (though continue to sandbox system services, which is totally sensible, and maybe retain something geared towards browsers). The code signing requirements and dev cert revocation, which has been successfully used to remotely disable malware, will be sufficient security: the Mac community is good at sussing out bad actors. But force Mac devs to castrate their apps even more, and there won't be anything left to protect.
The software developers and software companies of today are on average not to be trusted: they will outright steal the customers' private information and sell it or give it away without a second thought.
Absent adequate laws, the success of the iOS permission model and the complete failure of the Android permission model in protecting customers is perhaps the clearest proof of what a sandbox can achieve.
I am reasonably sure that I can download an iOS app - any app - and my private information will remain secure.
The platforms of today must offer a secure, privacy-preserving running environment. The free-for-all is dead, and it has been killed by greedy devs and companies.
So users have been trained to not use the App Store, where apps have to go through an approval process, and instead to install software from the wild west of random websites.
The fact is that Macs are not phones, the iMac Pro costs six thousand dollars, and expecting users to use the crippled selection of software from the App Store is unworkable. Nobody pays six thousand dollars to play Candy Crush.
You are definitely not in that crowd, but most Mac users definitely are. Frankly, until the App Store came out, they were all terrified of installing anything on their computer without some well known brand attached to it (e.g. Microsoft, Google, Adobe).
The App Store has been a great way for Apple to turn a demographic that was traditionally scared of installing apps into a new market.
> So users have been trained to not use the App Store
Again, you and I have been trained. Most users crap their pants when they see anything happen contrary to the "blessed path."
I don't want to allow any app free access to the file system. And anyway, developers have found creative solutions to this problem too. E.g., by asking the user to open the target location from within the app, they can circumvent the protection.
The App Store has its negative sides too: apps can disappear and be replaced by v2, forcing one to pay again and again.
I'd LOVE to offer my app on the App Store, but it's too expensive and too restrictive.
A lot more awesome apps are available outside the App Store.
Nonsense. The vast majority of users don't need software that does wacky stuff with the file system. They need software that opens documents they made, which all software in the app store can do. I'm a pro (the kind that currently owns the Mac Pro and will likely own the next one or the iMac Pro next), and I basically avoid all software that's not from the App Store. Your ranting sounds delusional to me.
It's not about "wacky stuff", it's about basic things. Apps in the App Store are horribly crippled compared to their non-App-Store counterparts.
I'm a pro and, like many other pros, I basically avoid all software from the App Store.
Uhhhh, what? You're justifying a company you PAID thousands of dollars to remain in control of your device? We're fighting this sort of inane thinking in areas where these ideas have already been put into practice. Look up John Deere, and the Freedom to Fix.
> They need software that opens documents they made, which all software in the app store can do. I'm a pro (the kind that currently owns the Mac Pro and will likely own the next one or the iMac Pro next), and I basically avoid all software that's not from the App Store.
Ok. You can choose to or not to.
> Your ranting sounds delusional to me.
You're calling someone else delusional for calling bullshit on Apple's faux-security? And you're calling for the app-ification of all computers? The person calling bullshit on Apple-faux-security isn't the delusional one here.
This is the bit that you FOSS diehards don’t get. Nobody really cares. What they want is something that they can switch on and go to Facebook or edit their documents on. They don’t want to tinker with kernel extensions or run servers. For most, computing appliances are far more convenient.
You might want to pack up that opinion and take it to some place called "consumernews" or something (they might call it "productnews" these days). That is where nobody really cares.
While most of the "FOSS diehards" get it just fine. People want convenience. It's not that hard to understand and you're making yourself look like a fool if that's your claim. Doesn't mean convenience is the right way forward, even if people want it or don't care.
Since when is "people don't care" a viable argument for anything? The people who don't care don't understand this.
So climb down from your ivory tower and make them understand, without resorting to the usual rhetoric as you have done here.
You would be surprised.
Keep in mind that professional users that buy the iMac Pro use pro apps, like AutoCAD, Final Cut, Logic, ProTools, Creative Suite and the like; these apps are mission-critical to their workflows and are not crippled.
This is an 'unintended side effect' of the software-as-a-service model. When software was licensed the software itself was what made the money, and there was no way that someone would buy a piece of software if it did not at least try to keep the data safe, in one piece and on the users' computer. But with software-as-a-service the user expects to see their data moving elsewhere and once that has happened all bets are off on how it can - and will - be monetized.
The performance and limitations can definitely be improved, if not eliminated, if Apple focused on it. The question is: Will they?
I agree with you but think there should be more emphasis on connectivity.
In the 80's and 90's (less so of course in the 90's) if you got a computer virus it was probably from sharing software ... on a floppy disk. I used a Mac during those decades and, like other software, viruses were rare on that platform. :-/
The internet changed everything. Not only does the internet serve up compromised software, it provides the back-channel so the malware can phone home.
I don't have any suggestions or any sort of panacea. It's nice sometimes just to reflect on how computing used to be. Back before passwords, sandboxes....
If I'm using the App Store stuff, then my text editor can't edit all my files, my video player can't open videos when I double-click them, my Evernote can't print to PDF, my disk usage analyzer can't analyze my disk because it can't ask me for authorization to do so, and so on.
At the same time, though, in theory I do want sandboxing, at least by default. I do want to have to explicitly authorize some random app before it can install a printer system extension, or before it can scan my entire disk.
I feel like the problem is more political/cultural: Apple prioritizes their low-information, low-computer-literacy users (hi, Dad!) so dearly that they aren't willing to expend resources to find any compromise for their technically proficient ones. And through the sandboxing requirements, they force their third-party developers to either tell their power users to fuck off, or to give up on the App Store.
Or, I don't know, perhaps they just don't have enough engineers left for the Mac after moving so many of them to their higher-volume Candy Crush console business.
Regardless, though, if there were mechanisms for the user to control the sandboxing enforcement to a greater degree, I would be all for it.
If we could tell our Macs "yes, dude, I hereby authorize DaisyDisk to see all the files" and "Computer, hey bud do me a favor, overrule this restriction that says CotEditor can't ask me to authorize modifying root-owned files", then I would actually not only be happy with, but even prefer, the App Store versions.
But we can't, and it's sad.
The problem though: if you allow user overrides, then the next TotallyNotShadyVideoPlayer will kindly ask you to add the manual override for "can't read the screen" and, if you agree, it'll steal your credentials. That is, regular users will be easy to selfpwn.
I don't think this can be solved on the software side. It's "security vs. usefulness, pick one" kind of situation. Maybe we need to tackle the human factor - make it easier to prosecute malicious code authors - but not sure if that's any more possible to implement than software that's both useful and secure.
That's true, as far as it goes. My view is, maybe you make the dialog to do that scary enough; maybe you have to go enable some security setting to even make the dialog possible; maybe, even so, you have to accept that some users like my dad will get fooled and exploited.
OTOH, it's already a slippery slope; even more secure would be to prohibit all third-party software entirely.
I think they are drawing the lines wrong.
This sounds like you're on the right track. Make it a shell command you have to execute with SIP disabled.
I fail to see how the macOS sandbox prevents this. Disk usage analyser can ask you for disk access, sandboxed video players can surely open files double clicked in Finder, sandboxed apps can print to PDF.
A few examples: QuickTime Player is sandboxed, Pages is sandboxed.
I think you might be thinking of the reasonable sandbox we wish we had, not the one we have.
(Apple's own apps do not have to follow the rules that third-party apps do.) A sandboxed third-party app can only access files through:
* Open dialogs
* A limited number of files that have been opened in the past via open dialogs (intended for e.g. recent files)
* Files that live in the app's sandboxed region of the filesystem.
That's it. So your text editor can't open ~/.bash_profile unless it pops up an open dialog and asks you to manually click it.
This sandboxing would be a problem with editors that naturally work with multi-file projects (e.g. an IDE where you'd want to open a whole Java project of linked files), but in the text editor scenario this limitation seems to make sense.
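For the curious, the "recent files" carve-out is built on security-scoped bookmarks: the app persists a token for a user-chosen file and redeems it on a later launch. A rough Swift sketch of the dance (the defaults key is just illustrative, and error handling is minimal):

    import AppKit

    // A sandboxed app can only reach files the user picks; the open
    // panel runs out-of-process and whitelists the chosen URL.
    let panel = NSOpenPanel()
    if panel.runModal() == .OK, let url = panel.url {
        do {
            // Persist access across launches with a security-scoped bookmark.
            let bookmark = try url.bookmarkData(options: .withSecurityScope,
                                                includingResourceValuesForKeys: nil,
                                                relativeTo: nil)
            UserDefaults.standard.set(bookmark, forKey: "recentFile")

            // On a later launch: resolve the bookmark and re-enter the scope.
            var stale = false
            let restored = try URL(resolvingBookmarkData: bookmark,
                                   options: .withSecurityScope,
                                   relativeTo: nil,
                                   bookmarkDataIsStale: &stale)
            if restored.startAccessingSecurityScopedResource() {
                defer { restored.stopAccessingSecurityScopedResource() }
                print(try String(contentsOf: restored, encoding: .utf8).prefix(80))
            }
        } catch {
            print("bookmark dance failed: \(error)")
        }
    }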
Yes it can. The print dialog has a whole PDF menu.
That feature is very useful, though; it can save anything, from any app that can print, to Evernote.
To save a PDF into Evernote on the Mac:
- If it's a file: use the share option (right click > Share > Evernote).
- If it's in a browser: in Safari, click share (top right corner of the browser) > Evernote. If it's in Chrome, AFAIK you have to use the Evernote Web Clipper extension, which is also available for Safari.
- Of course, you can also copy/paste or drag and drop the PDF into Evernote.
On the other hand, I am not sure that code signing has been an all-around success. It seems to make life harder for the long tail of one-off apps and open source ports. And bad actors are only blacklisted after they've been caught. https://panic.com/blog/stolen-source-code/ - related: https://github.com/HandBrake/HandBrake/issues/619
I think I would prefer a security model where you could just download and start any .app, but upon first opening it, a dialog would inform you about its code-signing author and the permissions that the app has (with striking red/green color coding). Because right now, there is no way to tell which apps even use sandboxing without opening Activity Monitor, and there is no incentive for non-App-Store apps to use sandboxing. (There is a dialog when opening a downloaded app for the first time, but it's grey and boring and not helpful at all.)
I agree with the need for a better first-launch dialog.
But it's still better than nothing; I'm sure they would offer unsigned HTTP downloads if Apple didn't force them to get their shit together.
This would solve most of my complaints.
I'm not sure Apple should give up on it though, I don't want any old application I download to be able to read through the spreadsheets in my Accounting folder.
Perhaps the emergence of Electron is a wake-up call for Apple and Microsoft: there is clearly a demand for creating applications with web technologies, and OS developers need to respond to that rather than letting a third party eat their lunch.
But Apple doesn't enjoy the luxury of solving this problem in a nuanced way, because Mac apps are not acting from a position of strength. I suspect you aren't downloading lots of Mac apps today, and the reason is not insufficient sandboxing, but instead the limited selection, annoying install experience, etc. These are the problems that Apple must fix first.
> Perhaps the emergence of Electron is a wake-up call for Apple and Microsoft: there is clearly a demand for creating applications with web technologies
34 years ago, the Mac was new and generated developer excitement. But Apple was afraid the Mac would be dragged down via shitty DOS ports of apps like e.g. WordPerfect. Apple's response was to set a high bar via first-party "killer apps" like MacWrite, which would embarrass any such DOS ports. It worked: the Mac set the desktop publishing standard for years, decades even, and arguably still dominates.
Yes, there's a demand for developing apps with web technologies, but embracing that is a losing strategy for Apple. Why should I buy a Mac to run web apps that run equally well on a Chinese Chromebook? That's ceding any software advantage.
Instead Apple should leverage the Mac's unique software strengths. Aggressively evolve the Mac's unique "UI vocabulary" and application frameworks. Empower, not punish, the dedicated and passionate developer community. Ship love to the userbase (perhaps the only one in existence) that's willing to open their wallets for high-quality desktop software. And yes, tolerate web-tech apps too - but embarrass them!
Yes it's still much better than the typical Windows installer experience, but zero-install web apps are now setting the bar.
Web apps in the browser are OS-agnostic and Electron apps can be easily made cross-platform.
1. Superior performance. Native apps are just faster. They launch faster. They use an order of magnitude less memory. Multithreading via GCD is much much nicer than Web Workers. Large files are better supported. You can have very large tables. etc.
2. They properly implement Mac UI idioms. By comparison even the nicest Electron-like app (VSCode) violates many longstanding expectations: it doesn't properly highlight the menu bar when typing a key equivalent, menu items don't disable properly, the text highlight color is wrong, text selection anchors incorrectly, no column selection, text drag and drop is badly broken, undo doesn't know what it's undoing, undo coalesces incorrectly, hell even arrow keys sometimes go the wrong way. It's an island app doing its own thing.
The theory of the Mac is to establish a set of UI conventions. When you launched a new app, you would already know how to use most of it, because it was a Mac app. It looks and behaves like other apps, so you feel at home already. And as a developer, you get the right behavior now and in the future, for free.
But if every developer builds a cross-platform app with a custom framework and appearance and behavior and UI, then the OS loses its role in defining the platform conventions. In that event, what's the point in having more than one OS?
From a developer's point of view, it's just better to make a Web App or an Electron App that can be used by everyone regardless of their OS. Not only is it less of a hassle to develop, it also guarantees that more people are going to use it.
I'd also argue that VS Code isn't much worse off than most native code editors. Most code editors outside of Xcode don't use NSTextView and so have to implement all of the behavior themselves anyhow. For example, as far as I'm aware Sublime Text pretty much implements just as much of the core editing logic as VS Code does.
I'm not denying your complaints, of course--they're valid--but I don't think the gap between native and Web/Electron is as big as you're implying.
Whereas developing Mac apps seems to rely on a set of arcane knowledge and having been in the scene for years. The documentation for AppKit is vastly outdated and there is almost no blogging/tutorials scene so as a newbie you are basically going to be going through a lot of trial and error.
And then most of the documentation you will find is in ObjC while everyone around you is telling you to develop in Swift, but then Swift changes every few months. You open up a project from a few months ago, it doesn't compile anymore, and you basically have to work at Apple or be a dev god to fix it.
So.... native is easier to develop for? No way.
That’s an okay decision to make if it fits your product and market needs, but it’s important not to forget the cost.
Native apps perform better, integrate better with other system services, are more power efficient, use less storage space… usually technically superior in every way, with the exception that they are less portable and in some cases more time consuming to build.
Clearly the key reason Mac apps cost so much to develop is the platform-specific codebase they require. Mac apps have little in common even with iOS apps, so it's a huge cost. Against that baseline, adding sandbox entitlements is an extremely minor incremental cost.
It's not like eliminating the sandbox would drop the cost of Mac apps by some huge factor. Developers are writing web apps because they can use a cross-platform codebase, plain and simple. That's why I am much more optimistic about the rumor that Apple will lower the delta between Mac and iOS codebases, letting devs leverage their iOS investments. There will always be a cost to adapting a touch-based app to the desktop, but there's orders of magnitude more efficiency to be gained there than in blowing away a whole layer of the security model.
It's used by Homebrew during the build steps, I hear it's in progress for Nix, and I may make use of it in the future for archmac. Unfortunately it's been marked as deprecated in the man page:
    SANDBOX-EXEC(1)          BSD General Commands Manual         SANDBOX-EXEC(1)

    sandbox-exec -- execute within a sandbox (DEPRECATED)
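It still works for the moment, though. Profiles are little Scheme-ish (SBPL) policy files. A hedged sketch from memory - treat the details as approximate, since newer releases may need extra allowances even for trivial programs:

    # Deny everything, then allow just enough to run `ls` read-only.
    sandbox-exec -p '
      (version 1)
      (deny default)
      (allow process-fork)
      (allow process-exec (literal "/bin/ls"))
      (allow file-read*)
    ' /bin/ls /tmp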
I mean, you were on the AppKit team, so I don't think I need to list out the oddities/issues/etc it's currently got. Just curious if the general attitude in Apple was akin to "it's fine", or "we know it needs work, but we'll get there eventually".
That said it's easy to overextend yourself, as Microsoft demonstrated with their API treadmill. The art of engineering is to set the right evolutionary pace.
Your question is very good, and above my pay grade. In 2013 I had lots of opinions about how ObjC was evolving too slowly, and I was floored the next year when Swift (a well-kept secret within Apple) was unveiled. Apple is full of surprises, even to its own employees.
Anyway, not looking to derail the discussion, just wanted to say thanks for commenting - wasn't sure if anyone ex-Apple would be comfortable doing so.
> their strengths: superior performance, better system integration, better dev experience, more features, and higher general quality
These are important, but you generally do not want to sacrifice security to get them. The sandbox is opt-in on the Mac so where you have an app you believe cannot live in it, you have that choice. Saying Mac devs are forced to castrate their apps isn't accurate or fair.
Isn't that what apps are for? This just seems like expected behavior. If you don't want apps doing stuff don't run them on your desktop computer.
Every binary I run could delete all my data or steal my passwords or whatever else. That's not something to be afraid of --- it's something to cherish and be proud of, and it's why you don't run anything you don't trust. This "free-for-all" sharing and access encourages the sort of ad-hoc and unpremeditated interactions which are beneficial to the software ecosystem as a whole.
I usually have a magnifier, color identifier, and clipboard-listening translator running. None of those would be easily implementable - some might not even exist - had computers started out as locked-down systems from the beginning.
In the macOS sandbox, an app can access any old file, but only ones the user has implicitly told the app it can access - through the system "open file" dialog, double-clicking, drag & drop, etc.
It was meant as a souped up iPod platform, tethered to iTunes.
I just wish we didn't have to wait for Windows 8/10 to get "sane" file management on tablets, after Google mangled Android from 3.1 onwards.
Now you're completely screwed. All of your emails, pictures, medical information, other embarrassing personal information/media, bank logins, and so forth are compromised. Maybe you have two-factor authentication for all of your accounts (doubtful), but who cares? It can just read the memory of your browser process, quietly wait for you to login to the various services, and then hijack your sessions in the background.
So tell me: how in the world is this an acceptable situation?
I sure hope you never use any of these new-school package managers for various programming languages, text editors, and such. You know, the ones that grab the latest commit from GitHub repositories run by total strangers and then run them with full privileges.
It is benefit vs cost: for me it is acceptable because I want to be able to do whatever I want with my computer, using applications that can freely interact with each other without the OS getting in my way. I want the underlying system to provide the functionality for the applications to perform their tasks but not add unnecessary barriers that make features practically impossible.
Also FWIW, personally as a developer I highly dislike the idea of signed software. First of all I do not like the idea of having to ask (let alone pay) anyone to have my program be runnable by others, and second I do not like the idea of my name being carried alongside my programs unless I explicitly add it.
It is not the eighties anymore. Software is exploited through malicious files that exploit vulnerabilities in file readers (e.g. malicious PDFs). Software distribution sites get compromised (Handbrake, Transmission, Linux Mint), etc. Signatures and proper sandboxing reduce the attack surface considerably.
Also, to address your point: sandboxing does not have to be binary. For instance, on macOS, a sandboxed app can ask the user for access to the calendar or address book. That way some random app can't just steal your address book. By default an application cannot open any files from the user's home directory, unless the user e.g. chooses them using a file-opening dialog (which runs out-of-process). With Flatpak on Linux, you as a user can decide to run an application with more entitlements (AFAIK currently only through the command-line).
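For example, something along these lines - the app ID is made up, and flag details may vary by Flatpak version:

    # Grant a specific app access to the home directory, or revoke it:
    flatpak override --user --filesystem=home org.example.TextEditor
    flatpak override --user --nofilesystem=home org.example.TextEditor

    # Or widen the sandbox for a single run only:
    flatpak run --filesystem=~/Projects org.example.TextEditor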
We need to go to a world where the power lies with the user and not the developer to decide what an app can access, upload, etc. macOS sandboxing and Flatpaks on Linux are now providing part of that solution.
> Also FWIW, personally as a developer I highly dislike the idea of signed software. First of all I do not like the idea of having to ask (let alone pay) anyone to have my program be runnable by others
Flatpaks on Linux are normally signed, but they can be signed with a GnuPG key. No need to pay or ask permission. Of course, the downside of this model is that it puts the burden on the user to check that the key is owned by a reputable person.
> second I do not like the idea of my name being carried alongside my programs unless I explicitly add it
Most users should not want to run a program by a random person.
Don't say that free software is not vulnerable to such problems. A popular free software documentation browser embeds a Google ad, and some open source applications use Google Analytics. I do not want to upload any data to Google. Unfortunately, as a user, I can only avoid that by inspecting all my outgoing connections. Of course, this is out of the reach of a normal user. I would absolutely be in favor of sandboxes where by default an app could not make any network connection.
I do not see those as placing power in the hands of the user; I see it more as placing power in the hands of the platform holder that the developer develops for and the user uses. The user should be able to do whatever he pleases, even at the cost of the developer's wishes, and to do this the platform must allow the user to subvert both the platform's restrictions and the program's requests.
Anything that doesn't put the user in a position of utmost authority and trust is not something that places powers in the hands of the user.
This is demonstrably not true. Most users will go out of their way to run any software if they believe that it will do what they’re looking for.
It's very depressing to be transitioning into a world where you can't use a small trusted open-source mail client and IM client etc, where you have to install crazy huge untrusted piles of flaky and inefficient code in order to interact with real-world people and institutions, and you can't easily control which version you have installed, and mobile devices become obsolete after just 3 years. (Even iPhones - not because of Apple, but because Instagram and Snapchat got too heavy/shitty for the older models! This does not happen for communication over SMTP or XMPP!)
So I see the "we need super sandboxes around every little thing" as a sort of bandaid on a much bigger problem which is not going to be fixed in the foreseeable future.
The only reason Debian hasn't distributed some really bad malicious code (that we know of) yet is that they got lucky and no developer has tried to mess with them.
The sandbox has always been completely different from iOS; e.g. it doesn't ask permission to access photos, microphone, or webcam.
1) Your webcam turns a light on. Not permission, but at least you know what's going on.
2) "People must grant permission for an app to access personal information, including the current location, calendar, contact information, reminders, and photos."
But you can always go straight to the files themselves for cases like Photos and Calendars and also to the SQLite databases that back a lot of user data.
And current location is easy to get on OSX. You can simply look up the user's IP address and WiFi networks to triangulate the user's location with about the same accuracy as Apple can do.
With the webcam, you can easily switch the camera on, take the photo and switch it off. You could easily trick people into thinking it was a hardware fault rather than something nefarious with the app.
The fact is that none of what I listed above is possible on iOS.
Sure, if you download software from the web, it can do whatever it pleases.
But if you download apps only from the Mac App store, you should be safe.
On iOS, Apple checks all your software. On the Mac, they give you a choice: If you want "safe" software, go to the Mac App Store. If you want any software, go somewhere else.
The sandbox isn't as complete as on iOS, but the stuff you listed requires special permissions and confirmations when you get your software from the Mac App Store.
The thing that bugs me is there’s no indication an app is connected to my microphone.
Better safe than sorry!
Having said that, plenty of useful apps can't ship in sandboxed form because of limitations in the current sandboxing model. And of course the fact that sandboxing is required for store distribution makes the store less useful. The implementation has much room for improvement, as do the store distribution policies.
If an app has to ask me permission in order to access my keystrokes, I think it's fair to assume that the same would be true of apps that read (anything outside their window regions) from my screen. The fact that macOS doesn't require this is discoverable by sleuthing (browsing the Privacy pane, or remembering that screenshot apps don't ask for it), but there are reasonable, and probably common, mental models that don't make it obvious.
It would be like renting a house with four locks on the front door and none on the back. Yes, you could audit the house, but it's a weird asymmetry to have to look for.
(P.S. I'm older than you, FWIW :-)
Edit: It seems you and many of the others replying to the GP are talking about apps from the App Store, which I basically never use. And that's sort of the parent's point, I think. I get apps from the internet, install them, and use them, like an old-school desktop/PC computer user. I rarely need to use the App Store (except for updates, and it's weird UX that OS updates are in the App Store to begin with) and have no compelling reason to do so.
You're right that this isn't super fine-grained. This single checkbox seems to govern several unrelated things, such as keystroke logging and insertion (Dash and Keyboard Maestro), mouse recording and control (Keyboard Maestro), and whatever APIs are used to badge Finder icons (Google and Dropbox). My belief is that an app that I haven't checked off there doesn't have access to my keystrokes. (Maybe this is wrong?) In which case I'm surprised there's nothing like this for recording the screen.
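As far as I can tell, that checkbox is the Accessibility permission, and an app can at least query whether it has been granted. A small Swift sketch - my reading of the API, so treat it as a sketch:

    import ApplicationServices

    // Check whether this process has the Accessibility permission
    // (the checkbox gating global keystroke observation/injection).
    if AXIsProcessTrusted() {
        print("authorized for assistive access")
    } else {
        // AXIsProcessTrustedWithOptions(_:) can additionally pop the
        // system prompt that sends the user to the Privacy pane.
        print("not authorized; check the box in Security & Privacy")
    }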
A lot of users got tired of the on-screen badge being on all the time and probably set it to hidden (very easy to do) so not everyone is actually aware of when an HTML5 application is accessing a microphone/camera/screen.
How does he think a desktop app can do so many different things?
Now, the sandbox is a different issue. But of course, yes, it has access to the screen.
If apps on the desktop are going to be as restrictive as on mobile, why use the desktop at all?
Should anything at all run in the host OS?
The advantage of running every app in its own VM is that apps won't necessarily get tossed to the side in this constant arms race of upgrades, especially for apps that don't need the internet. Imagine still actively using several perfectly good apps that were last updated 10 or 20 years ago.
Isn't that equivalent to saying that the set of available kernel calls should be more restricted?
(User mode is a kind of sandbox, after all)
But as my other comment mentions, I wish I had the knowledge required to understand the meaning / implications of Flatpak apps. Please let me know if I'm misunderstanding :)
Happy Saturday night y'all!
At the end of the day people don't care about security that much. Wayland is still getting there. It's getting better, particularly in the last year, but it's not a huge priority for many people, so it has taken upwards of ten years to become usable.
I think many people are also not aware of how bad X11 security is. The other day I mentioned that X11 applications can read all keystrokes to someone who does system administration/security at an ISP. To my big surprise, he was not aware of this. (Didn't everyone play with xev at some point?)
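You don't even need to write code to demo it: the stock xinput tool will happily stream every keystroke on the system (device IDs vary per machine, so this is a sketch):

    $ xinput list              # find the keyboard's device id
    $ xinput test-xi2 --root   # stream raw events for all devices
    # Every key pressed in any window scrolls past as RawKeyPress /
    # RawKeyRelease events - no prompt, no permission, no indication.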
On Linux, I only use Wayland because of X11's broken security model. Fortunately, it works very well these days with the amdgpu driver.
And let's not forget that just using Wayland doesn't grant any improved security compared to X11. There are numerous other ways to get input data, display contents, etc., so you need further sandboxing anyway, at which point you might just as well sandbox X11 clients you don't trust to get the same security. The benefit here is that you can decide if you trust your color picker and let it do its work by not sandboxing it, while Wayland just decides for you that no color picker can be trusted.
Because you can't do that in Unix. You can't ensure that the process with PID 1234 is really executing /usr/bin/foo.
I read that GNOME just implements both in the same program (and calls it "shell").
> Or how can you ensure that the shell is not a rogue one?
I'm confused. Do you mean this kind of attack would become somewhat more probable or more dangerous if the original shell had some mechanism of allowing certain apps to pick colors?
https://access.redhat.com/solutions/3176 <--- here they show how to turn it off because it's in enforcing mode by default.
Briefly, SELinux adds some new permissions to files, new contexts to users and so on, so what happens is you specify what a given context is allowed to do, to a very granular level. For example, if you take postfix, it's split up into lots of programs, so you could say that the "init" context is allowed to run the postfix startup script but none of the rest of it. The postfix startup script will convert into some "postfix startup" context which is allowed to start the various programs but nothing else. So there will be a postfix process that accepts incoming IP connections and can send on a specific FIFO and nothing else. There is another program which can read from the FIFO, change certain files and nothing else. So, imagine if you break into the program that talks with the internet: all you can do at that point is talk with the internet and talk with the FIFO. You can't read any files, write anything, execute anything, etc.
So that's kind of the idea of it. The above assumes an extremely tight SELinux configuration and no bugs in SELinux that would allow you to break out of your context. I've never seen a configuration as tight as described above, but there used to be a guy who had a Debian Linux machine running on the internet where you could log in as root and try to break stuff. As far as I know, no one ever hacked that machine, starting out as root in a bash shell.
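To make the postfix example concrete, here's a toy sketch in SELinux's type-enforcement language. All the type names and permission sets are invented for illustration; real policies are far more involved:

    # Hypothetical: the network-facing process may only use the network
    # and write to its hand-off FIFO; everything else stays denied.
    type smtpd_net_t;
    type smtpd_queue_fifo_t;
    allow smtpd_net_t self:tcp_socket { create bind listen accept };
    allow smtpd_net_t smtpd_queue_fifo_t:fifo_file { open write };

    # The queue handler may read that FIFO and write the mail spool.
    type smtpd_local_t;
    type mail_spool_t;
    allow smtpd_local_t smtpd_queue_fifo_t:fifo_file { open read };
    allow smtpd_local_t mail_spool_t:file { create write };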
For example, in order for the Color Picker Tool to work, The Gimp should be marked with a "color picked allowed" capability, so when it asks the Wayland server for the color of pixels outside the surfaces it already owns, the server can check it and send the requested info. But a rogue program/process trying to scrap the screen content pixel by pixel shouldn't be able to do that. The inability to safely map processes to executables in Unix (and the possibility of manipulating their running code via exec(), library injection, ...) make it a very hard problem to solve without a paradigm shift that SELinux doesn't provide (as far as I can tell).
So the benefits of the easier sandboxing provided by Wayland are somewhat overstated, at least for typical Linux users.
Like, how often does software need to take screenshots? Not often. Same with most other app permissions. Most only need network access and a folder.
With a permission model for apps, we could start having system apps like iStat Menus sold on the App Store again too.
I think you haven't used iOS that much. In practice it's fairly rare for an app to shut down without a permission unless it's central to the app.
Also, Android takes a while to propagate new versions onto the majority of the install base :)
I regularly use Borland C++ 5 (because it has a nice enough customizable IDE and a very fast C compiler which is practically instant on modern PCs), which (IIRC) was released in 1996. And I also use a ton of other stuff from the 90s, including many games.
Things already work without the overhead and resource wastage of virtual machines; all it takes is for OSes and libraries to try not to break their ABIs.
Not "especially for apps that don't need the Internet", only for apps that don't need the Internet. Anything that connects to the Internet needs constant and rigorous update discipline to stay ahead of a constant stream of exploits.
Sandboxing applications within their own VMs leads to a false sense of security. Now whoever is responsible for distributing the app is responsible for distributing the OS that runs the app as well. If the underlying OS doesn't get updated, Murphy guarantees that there will be an OS-level exploit within that VM. And then, even if the exploit is sandboxed to only affect that app and its OS, the attacker still owns anything that the user passes through that app, including but not limited to usernames and passwords (which are often reused across websites and apps), any sensitive information that passes through the app (banking and financial accounts, anybody?), and any privileges existing within the VM (session tokens, client keys, privileged access / firewall passthrough to other machines in the local network, etc.).
It's one thing to sandbox each app within its own userland-level sandbox, but sandboxing each app within its own VM / OS only makes security even more difficult. That kind of deployment model only makes sense if a sysadmin is doing so to make deployments more predictable and has the full power to reinitiate deployments when OS updates are released. On an end-user level, you're either risking the end-user's security (if upstream controls the OS update) or wasting the end-user's resources.
I do. I don't know exactly what my oldest binary is, but it's probably from the 80s (some text processing utilities), and I still actively use apps I wrote myself over a decade ago. Of course, I use Windows which has historically been far better at backwards compatibility than Mac OS.
This was a program rather than another webpage, and I haven't bothered to read how it worked beyond the general idea of exploiting a cache side-channel. I expect the bit-rate is low compared to Spectre/Meltdown. But it made me leery of confidentiality in current systems.
Are windows store apps sandboxed? If so, is it trivially easy to do this with those? Or is it just win32?
Wayland on the other hand does not. How and if clients can communicate with each other is considered out of the scope of the Wayland protocol, except a few exceptions (e.g. copy/paste) so it's up to the Wayland compositor. That's why there are already multiple different "APIs" to do screenshots on Wayland compositors, some more powerful some less, and not a single one is implemented by all major compositors - basically fragmentation hell. That's also why stuff like color pickers still don't work without tedious hacks (e.g. misusing some of the screenshot APIs).
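For instance, the xdg-desktop-portal route goes over D-Bus, which at least lets a permission-aware middleman interpose a consent dialog. Roughly - interface details from memory, so they may differ by version:

    # Ask the portal (not the compositor directly) for a screenshot;
    # the portal can show a consent dialog before replying.
    gdbus call --session \
      --dest org.freedesktop.portal.Desktop \
      --object-path /org/freedesktop/portal/desktop \
      --method org.freedesktop.portal.Screenshot.Screenshot \
      "" "{}"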
Yes, that's a feature I have used many times in the past. If you want to sandbox an app, run it through a nested X server.
And for Windows, more than once I wrote stuff that captures or injects keystrokes globally. Being able to do this is a feature that lets you solve problems by making things more interoperable.
Flatpak does aggressively isolate apps from the OS though: http://docs.flatpak.org/en/latest/working-with-the-sandbox.h...
Flatpak has no X11 sandbox. Others, like Firejail, have attempted to create X11 sandboxes, but they've had holes (last I tested they had the X RECORD extension enabled, which has no use except being a keylogger), or broke valid applications / accessibility tools.
Wayland doesn't grant Wayland clients permission to those resources, but without further sandboxing and using technologies like Linux namespaces, AppArmor, SELinux, ... that's completely useless in itself. Wayland only makes it harder/impossible for trustworthy applications to do their job, while malicious applications just take one of the many other routes to get what they want.
What you described is exactly the purpose of Flatpak; I think it even used SELinux for it. AFAIK, Flatpak with Wayland is a good way to sandbox apps on Linux.
With X11 on the other hand I can sandbox X11 clients I don't trust, thereby limiting them in what they can do, but other clients I do trust have the ability to do all sorts of great things so I can do my work efficiently. While not perfect, not even close, it's certainly better than not being allowed to do anything.
Regular applications, yes, you can pretty much do whatever in userspace in Win/Mac/X11. If you're going to create a new sandbox environment for applications (Windows Store/MacOS store) in today's world, it really needs to be locked down with explicit permissions for each type of I/O access.
(On a system without Yama enabled, you can also ptrace any other process running as the same user and run code as it, e.g., using gdb, but lots of desktop-focused Linux distros enable Yama to close this particular approach.)
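You can check which mode a given box is in; a quick sketch:

    $ cat /proc/sys/kernel/yama/ptrace_scope
    1
    # 0 = classic: any process may ptrace any other with the same uid
    # 1 = restricted: only direct descendants (`gdb -p <pid>` fails)
    # 2 = admin-only (CAP_SYS_PTRACE); 3 = ptrace disabled entirely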
The sandbox does.
> You need a sandbox to prevent it from doing these things, which is certainly doable, but much harder than just opening a new X server.
Of course. I never said simply running a new X server is sufficient. All I asked is if the sandbox needed to run the program in a separate user account.
Hey kind stranger who is supposed to do the garden while I go shopping, I really don't trust you. So to be sure you only do the garden and nothing else, here are the keys to my house, please ensure that every door and window is locked. Thanks.
The only other entity who could set it up for you, so every application automatically launches in a sandboxed environment, is the distributor, but then again it's your responsibility to choose a distribution that does that.
If you want security you have to do something about it at one point or another.
Leaving this to the user leaves the vast majority of users unsafe. This is an unacceptable state.
There is no way around you either taking care of that yourself or you choosing an operating system that enforces it for you, like Qubes OS.
Because they are the ones who understand the necessary capabilities of their program and the ones who have access to the source code...
> That's a huge waste of time and it's much more efficient if the operating system or the user enforces it instead by using existing sandboxing technologies like firejail.
Actually it's a far better sandbox when built into the program. And it doesn't leave users relying on installing arcane operating systems or becoming technically savvy.
> It is also untrustworthy and insecure, since after all you don't trust the application.
No, trusting the application is implicit since it's installed by the user. The sandbox exists to protect against a compromised application.
That sounds terrible. This behavior is how desktop applications are meant to work. I’m surprised this surprised anyone.
Desktop applications are not "meant to" have access they don't need. They sometimes have that access as an accident of history, but they are not "meant to"; we've known about the principle of least privilege for a long time. And the MacOS sandbox (which, to be honest, doesn't work very well, but that's neither here nor there) is intended to enforce application privileges and reduce escalation.
If it doesn't need to read/write your screen in order to provide its features, and then does it, wouldn't you agree that something is fishy? Wouldn't you like to know when fishy things are going on?
What is the point of security if any app you download can see everything you do?
The most obvious answer would be to take screenshots, like GIMP's "Create from screenshot" command or a dedicated program like the snippet tool in Windows. Many graphics tools offer that functionality, even some that you can run from the command line.
Other, similarly widely available functionality is recording the desktop - a common capability needed for screencasting and video streaming (think Twitch) programs. This also needs to capture audio. A more niche tool is one that captures directly to GIF files (I have such a tool on both Windows and Linux).
Of course, less commonly implemented but still very useful functionality is remote access/remote desktop (in which case you need to capture input events and also create fake input events indistinguishable from the user's own).
Finally, several utilities benefit from being able to read the screen: utilities that magnify and perhaps enhance part of the screen (useful for people with sight issues, or for developers inspecting the output of a graphics program at the pixel level without flattening their face against the monitor), color pickers, or even just funny toys that manipulate the screen contents (I've seen a game in the past grab a screenshot of your desktop and zoom it out when you launch it).
> What is the point of security if any app you download can see everything you do?
I'd turn that around: what is the point of security if the apps you download cannot do their job because of it? At the end of the day computers need to be useful, not to be burned and buried in a waste disposal field (where they'd be in their most secure state).
This is just a simple strawman.
It's not that hard to have a middle ground: just disallow apps from using things like your webcam or screen without your explicit permission. Just because one program uses that functionality doesn't mean it has to be common to every single program you ever run.
iOS already manages this. Just have a notification pop up when you use the program to allow X access from system settings. Certain programs already do something similar by requesting access from Accessibility, like window managers (albeit that's to get around certain limitations).
This stuff is really barely a notch above expecting people to read EULAs.
As an iOS developer I run xScope to check details of designs, I don't know, maybe once every two weeks? And I guess I've probably used Acorn's color picker outside its own window during the last year, but I'm not sure about that.
I'd imagine that most users need apps reading arbitrary pixels off their screen less often than I do. I'd appreciate a warning from the operating system when an app tries to do that.
I do not think the frequency of needing that functionality is really relevant when you need to use it - you can either do it or you cannot.
But there are users who do need to use applications that capture the screen way more often than once every two weeks. One example would be Twitch and YouTube streamers (not necessarily about games - many stream other things, like creating art, programming, composing music, etc) who do that for several hours every day.
> do you think most users
There is no point talking about "most users" because really anything can be attributed to what "most users" do or do not - "most users" is not something that is well defined. We can only talk about specific use cases.
After all, all users (which by definition includes most users) start out not being able to do anything; that doesn't mean the OS should not be able to do anything.
> As an iOS developer I run xScope to check details of designs, I don't know, maybe once every two weeks? And I guess I've probably used Acorn's color picker outside its own window during the last year, but I'm not sure about that.
These are also useful examples of cases where you need screen reading, but they are really niche. The most common is probably streaming, as I mentioned above, but there are also other cases where you need frequent screen-reading capabilities - for example, documentation writers will often need to capture parts of the OS, either as screenshots or as short video segments (for interactive docs), to use as part of the documentation. It is also very helpful for customer support - on both sides of the equation (in a company I worked at many years ago, the employee responsible for tech support had SnagIt open all the time).
> I'd appreciate a warning from the operating system when an app tries to do that.
I suspect that if that warning comes as part of your screenshot or video capture you won't appreciate it that much :-P
That's why I think questions about frequency and "most users" (nebulous term, I admit) come into play: yes/no permission dialogs aren't the best and many users do just click through them without thinking, but there's a world of difference between asking several times a day and asking a couple of times a month.
If my usage is far below average, then I agree that it's probably better not to try to restrict it. If, as I would guess, I use it more than most, then it feels exceptional enough that letting the user know when it's happening would be fine.
If you could give a permanent permission to a given app, I'd imagine even the daily streamers wouldn't be too badly inconvenienced.
I'd like to phrase it as "security vs. usefulness, pick one".
Unfortunately, I don't see a good way out of this. The more secure apps and the OS gets, the less useful it is - it loses composability and interoperability, any remnants of them being mediated by third parties (basically, see how SaaS apps talk to each other, and imagine this is your desktop now). But the more useful an app or OS is, the easier it is to make users selfpwn themselves through a stupid or compromised download.
I refuse to accept that I'm not allowed to do whatever I want with my own device, including running code that does whatever it wants with other applications, and especially things the authors of those other applications didn't anticipate or want me to do. But then, I can't see how a regular person could use the same computer without fear of their passwords or data getting stolen.
Are there any smart people working on this? Do they have any suggestions?
And no, I won't get spyware; I never got spyware in the three decades I've used software, when desktop systems were at the apex of their popularity and I used the most popular of them - I doubt I'll get any sort of spyware today, especially when I'm not even using an OS that spyware authors would even bother with.
Note that I am not against such environments, quite the opposite, although I tend to prefer VMs like VirtualBox instead of OS sandboxes because - as the linked article shows - they are not as safe as VMs (yes, I know that VMs can also be compromised, but it is much harder and way more rare).
I believe that applications that run locally on my computer should be able to do whatever they please (if bugs wouldn't make it impractical, I'd also like it if they could read and write each other's memory), but a consequence of such a setup is that you need to be careful about what applications end up on the computer. So I am very thankful for any tool that does that - as long as it remains a tool and does not impose itself against my will.
Programs cannot decide by themselves to be installed on a computer without some prior action from the user; it is the user who has to do something for them to install themselves. And no two users perform the exact same sequence of actions, so you cannot claim that all users have the same likelihood of being affected by spyware.
Also FWIW, when I write "apps" I mean it as a shorthand for "applications", and for a few messages upwards in the thread the discussion hasn't been just about macOS but about any OS. So apps can be open source (and personally, unless it is some piece of software I trust - usually older, widely used programs - or some game I got from a place I trust, I stick with open source apps, with security being indeed a major reason).
+1, only you can decide to install a program. However its total functional scope, "what it does", is never fully known, unless it is open source. "Doing research" about the developer will not solve the lack of transparency in the tool you're installing. It is the consumer OS's responsibility to make sure third parties don't get access to data consumers would deem sensitive, without proper authorization.
It's been one of the most catastrophic decisions Apple has ever made, and IMO it is hindering actual progress for desktop apps.
AFAIK the logic is that any user initiated event (click, keystroke, touch, etc) allows the page to see (and modify) clipboard contents for that instance. So if you accidentally click in a rogue page, it's possible for your clipboard contents to be stolen.
I believe Apple is at a crossroads, something needs to be done with the Mac.
I used linux for a good long while. Started with Ubuntu, then on to Mint, then Arch as I became more of a "power user". But you know what? Sometimes I don't want to put in manual effort to get things to work, even if that means I lose some customization. Over my years with linux, I've had driver issues with displays, with the network manager, with mounting remote drives... Basically any interaction with the hardware was a solid kick in the groin. Since switching to a Macbook, everything has been smooth sailing.
And especially nowadays with homebrew and docker, all the tools I need are available on a Mac, and it's a more integrated experience, and IMHO it looks better. But if you like linux and it works for you, that's also fine by me. To each their own.
What annoys me about the last decade-ish of Mac developers is the terrible prevalence of things like "curl|bash", custom package managers, unversioned software, etc.
This could just be the company I work for, but as far as I'm concerned software is not released unless there is a collection of software and instructions on installing it. Deb, rpm, even a tarball is fine. Anything that downloads something from a random website is not.
Now obviously Mac developers don't enforce this attitude, but it does seem to correlate.
Installing stuff on Ubuntu I used a couple of PPAs. How is this any different than curl|bash? You just trust some repo blindly.
Also, please refer me to this magical distro that just works cause I have seen these before going back to Mac:
- HiDPI scaling does not just work. There are weird issues here and there.
- Closing the lid does not reliably put the laptop to sleep. Sometimes when it does, opening it does not just wake it up.
- Once in a while, the OS completely forgot that it had a Wi-Fi device. I just restarted and it returned.
I don’t have time to test workarounds, read logs, try different distros and shit like that. I never once had to think twice before tossing a MacBook into my pack because it might still be awake.
I love Linux on servers, and I used to love it on my desktop, but that was high school / college days. I had too much time to tweak things just perfect.
Edit: pls respond.
The only reason to use Linux is for extra op security, but that's not usually programmers' main concern.
Mint is over a decade old, hardly "modern".
Wasn't the whole point of Mac to "think different"?
That is the most FUD bit of this. There's a very easy way for a user to protect themselves: don't download any untrusted applications. Only run code you trust. Don't trust the sandbox that much.
Vulnerabilities like the android MMS one where anyone could send any number an image and pwn it were "as a user you can't protect yourself".
Local roots and things like this require a user to opt-in to being exploited by running a malicious app. That's significantly different and less scary.