Although it's not officially documented, Snow Leopard's sandbox is already quite capable and easier to use than the norm; it's nonsensical to list "sandboxing" and "mandatory access controls" as wins for other operating systems. Lion will make it mandatory for all App Store apps and add features like a secure open dialog (where the OS handles the open dialog and gives the app access to only user-selected files) and an easy-to-use privilege separation API (to make it easier to take advantage of the sandbox); the result is much more advanced than anything mainstream in Windows or Linux. Lion will also get rid of previous limitations on DEP and ASLR; in particular, it randomizes dyld.
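To give a sense of how approachable the Snow Leopard sandbox already is, here's a minimal sketch; the profile name and target binary are made up, but sandbox-exec and the Scheme-like SBPL profile language are the real (if undocumented) mechanism, and Apple ships its own profiles in /usr/share/sandbox:

;; example.sb -- allow everything by default, but keep the process out of
;; user home directories and off the network
(version 1)
(allow default)
(deny file-read* (subpath "/Users"))
(deny network-outbound)

sandbox-exec -f example.sb /usr/bin/some-tool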
The article also seriously underestimates the benefit of the centralized App Store model (which has an equivalent in Linux, but not Windows); despite all the horrible rejections and review issues, if it becomes the usual way to obtain Mac applications, it will greatly reduce the chance that users will come into contact with malware.
My mother, a sweet 75-year-old lady who loves to share PowerPoint presentations with her friends, has been using Ubuntu Linux for the past 5 years or so. She moved up from Mac OS 9, which she had used since she abandoned Windows (because I threatened that I would not remove malware from her computer again).
So far, she has been very happy with it. She is now using Natty and quickly transitioned to the Unity shell.
Every once in a while, I log in remotely and brush the machine's teeth. Never found anything remotely suspicious.
If they did this (even as just an option), then there'd be an uproar about how they're incrementally making os x a completely closed system (a walled garden).
You can turn on parental controls and select which apps a user can execute, either per checkbox or (if they are from the App Store) based on age ratings.
Parental controls will, however, disable installing apps for the user completely. They get a prompt asking for the admin password. As far as I can see there is no way to enable users to only install apps from the app store. By the way, there is such an option in iOS.
I don't think anyone ever complained about them adding options to the parental controls, so Apple could absolutely add an option to the parental controls to allow installing only App Store apps.
"The Unix Design is significantly less granular than Windows..."
That's why it's more secure. Complexity means you don't know what's going on. Complexity means you will forget something. Complexity means there's more likely to be a way to squeeze through, more likely to be a bug, more likely to be a little thing that is forgotten.
This is also a problem with complex cryptographic APIs, overly complicated things like PKCS11 and X.509, etc. It's curious that security-related systems are among the most complex, since complexity is inherently bad for security.
I'm sorry but this is a horrible argument. Granular security is critical to having a system that can actually be locked down. Which is why SELinux support is built into the kernel now.
And from my experience, most admins turn it off immediately rather than rewriting security policies so that Apache can access data outside of /var/www/, etc. Sure you could modify the policy, but it's enough of a hassle that no one I know has ever done it.
Restrictive security that just gets in people's way is terrible security. Just like forcing people to change their password every 14 days results in people using the same password repeatedly and incrementing a digit on the end (or writing the password down and sticking it on their monitor), creating overly complex rules means that people who absolutely must deal with these things (or who have the time) do so, and everyone else just turns it off and forgets it ever existed.
I'm glad I'm not the only one that hates SELinux. I find its setup to be super complicated yet all it ends up doing is stuff that I can do anyway with native unix permissions. As best I can tell it's a completely parallel world that is just there in case you mess up your normal permissions.
You admit that you don't understand SELinux so could it be that you hate it because you don't understand it?
The fact is there are a lot of things that SELinux makes easier. In SELinux you have your services run in contexts and you can say what they can do (e.g. can listen on port 80 but not make outgoing connections, etc.). You no longer have this ridiculous need to run as one user (root) and switch to another.
Unix security is so simple that, for my tastes, it's actually more complex to set up securely than SELinux. If you use a distro that supports it, SELinux is drop-dead simple anyway.
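To make that concrete, here's roughly what it looks like under a Fedora/RHEL-style targeted policy (a sketch; the booleans shown are standard ones, but check your distro):

ps -eZ | grep httpd                            # Apache runs confined in the httpd_t domain
getsebool -a | grep httpd                      # list the policy knobs that apply to it
setsebool -P httpd_can_network_connect off     # may serve clients, but no outgoing connections
setsebool -P httpd_enable_homedirs off         # and keep it out of users' home directories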
The granularity actually can make it dangerous if it's an impediment to correctly designing such rights. The Unix model has the advantage of being simple enough to accommodate most needs. A subtle misstep or two with ACLs can create a hard-to-notice escape path, whereas Unix rights are so coarse that the holes they leave tend to be coarse, and therefore noticeable, too.
I would love for SELinux to be easier, but as far as I can tell it's the only game in town. I would never put anything internet-facing on a box that wasn't running SELinux. Normal Unix rights are simply not sufficient: once you break into a program, you can do everything the user that program runs as can do. Sure, you can create a new user for every bloody program/service you have, but that means escalating privileges, etc.
With SELinux I no longer have to worry about any switching-user nonsense. I can just give that service those specific rights. In that sense it is a lot simpler than the overly simplistic approach we've been using.
Unix is far from secure. Access control via Unix permissions is a mess; that's why we have SELinux, AppArmor, SMACK... The whole 'complexity' argument is moot: today's Unix, with SELinux, chrooting, jails, and AppArmor, is much more complex than, say, capability-based security.
One example: given a file, you can create several different access levels. One group can be read-only, one group can have read and write but NOT delete, one group might only be able to modify permissions, and one group might have full access to the file, while "EVERYONE" has no access at all. Administrators, incidentally, need not have access beyond "take ownership" which is an obvious and easily-audited action.
These are all standard features in most ACL-based multi-user environments.
Unix file permissions don't use ACLs, so off the top of my head I'm not sure how you would set this up on Unix. For one thing, deletion in Unix is governed by write permission on the containing directory rather than on the file itself, so a "write but not delete" group can't really exist; and even if it could, there's no easy way to have that group be different from the read-only group and still have a no-access-at-all group.
I suspect most complicated requirements can be resolved with some combination of sudo and traditional permissions but it's not always straightforward and probably won't be exactly equivalent to the way you would do it in Windows.
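As a sketch of the sudo route (the group, runas user, and path here are all made up): you can give one group read-only access to a file it otherwise has no rights on, without touching the file's mode bits:

# /etc/sudoers.d/report-readers (hypothetical)
%readers ALL = (repowner) NOPASSWD: /bin/cat /srv/reports/report.txt

Members of "readers" then run "sudo -u repowner cat /srv/reports/report.txt" and get exactly that and nothing more. It works, but it clearly doesn't scale to the full matrix of rights described above.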
And those ACLs are available within OS X, as well as within Linux, Solaris, and FreeBSD (UFS2 and ZFS).
This complaint doesn't hold water. Those features are available within standard Unix environments (Solaris probably counts the most as a real Unix; OS X is technically certified Unix as well!).
So Unix file permissions can use ACLs. The default is POSIX file permissions, but they aren't the only ones available.
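For instance, the multi-group scenario described a few comments up maps onto OS X's ACLs roughly like this (a sketch: the group names are hypothetical, and ACL entries are evaluated before the POSIX mode bits):

chmod 000 report.txt                                      # "EVERYONE" gets nothing by default
chmod +a "group:readers allow read" report.txt            # read-only group
chmod +a "group:editors allow read,write" report.txt      # read and write...
chmod +a "group:editors deny delete" report.txt           # ...but NOT delete
chmod +a "group:secadmins allow readsecurity,writesecurity" report.txt   # may only change permissions
ls -le report.txt                                         # -e lists the ACL entries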
It's not a complaint, it's just the way it is. The windows (NTFS and later) default security model uses ACLs. Unix doesn't. This gives windows a few minor advantages. Yes, of course you can do ACLs on Unix. If you really need them there are plenty of ways to do it. But the limitation I described is, nevertheless, a limitation of the default unix permissions model.
It's mostly pointless to debate whether one is "better" than the other. There are advantages and disadvantages to both approaches, and it's trivial to screw up permissions either way.
The biggest advantage of unix permissions is the culture and history surrounding them, as well as the design and conventional use of the system itself. On unix, application developers, maintainers and administrators have a pretty good idea about how permissions should be set. Generally, the need to run as root is fairly well quarantined to system administration tasks. It's not perfect, but it's much better than what I remember of windows, and a quick search suggests the situation hasn't much improved. Here's a user who discovered a problem using Visual Studio; he was able to solve it by running as Administrator:
If a unix OS were to abandon too much of the conventional unix way of setting permissions (regardless of whether ACLs are used or not), you could begin introducing similar problems.
The old standard file permission system is the default on most unixy systems because it's easy to use and understand (more or less). That doesn't mean these systems don't have ACLs.
I would say it's because that's conventional: it's what users expect, it's what many applications expect, and, perhaps most importantly, having ACLs is just not that important. It's not necessarily that it's "easy to use and understand."
In Windows, it's there by default. I'm not claiming that windows is better or more secure, I am simply answering the question that was posed.
This is a good point. Getting ACLs to work - really and truly work in such a manner that they aren't overly restrictive or overly relaxed - is not trivial.
If you can attain security by trivial actions... that's a lot better than having to be really clever about it.
I discovered ACLs on some flavor of UNIX in the 1990s and tried to play with ACLs for a while. What I found was that a granular approach to security simply does not work for anything but trivial scenarios because of the amount of state that needs to be understood and managed. It turned out that in order to manage ACLs efficiently I would need additional tools to keep track of all the state and to discover potential problems in the configuration. The tools that came with the OS were not much help. Also, it took quite a while to communicate how the ACLs were set up, and why, to my colleagues.
I've always found compartmentation to be a far better strategy than granular control. Mostly because it means you can reason about a system at a much higher level and you do not need to keep a lot of knowledge about state in your head while doing so.
In fact, on most well-run UNIX systems I have seen, compartmentation seems to be the dominant strategy for managing security. The simplest form of it is to assign different users to different subsystems and to restrict access to these users as much as possible. For instance, if you run a database, you create a user owning all the data files managed by the database. You then, very selectively, expose only what is needed to interact with the database to other users. (Interestingly, you usually do not let the database user own the binaries, since there is no need for the database user to manage those files.)
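A minimal sketch of that pattern, with made-up names and paths:

useradd -r -d /srv/db/data -s /usr/sbin/nologin dbsvc    # dedicated, unprivileged service account
chown -R dbsvc:dbsvc /srv/db/data          # the data belongs to the service user
chmod 700 /srv/db/data                     # and nobody else gets to poke around in it
chown root:root /usr/local/bin/dbserver    # the binaries stay owned by root...
chmod 755 /usr/local/bin/dbserver          # ...so the service can run them but not modify them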
On various UNIXen, tools for offering compartmentation have been around for quite a while. Ranging from various forms of "jails" all the way to running virtual machines. I've even been involved in running a startup that sought to harden the Linux kernel in various ways to provide some tools to make compartmentation better (although this was never any sort of commercial success -- we ended up finding success in entirely different areas :-) )
My experience with operating systems and security is that it is extremely hard to make something that is both secure and user friendly. I do not expect operating systems that are appropriate for general consumer use to become particularly secure any time soon. We make security sacrifices because quite frankly we don't know how to reconcile these problems.
It is also my experience that anyone claiming that OS A is inherently more secure than OS B is usually full of shit. In particular, anyone claiming the opposite of what empirical knowledge suggests, is a moron. Empirical knowledge seems to suggest that there is more malware and more problems with malware on Windows than any other OS, thus the statement that Windows should somehow be more "secure" is pure nonsense. Yes, it may present a larger target and more effort may have gone into making windows more secure, but to claim that it IS effectively more secure is, to be quite frank, a little bit insulting since it is a departure from observable reality.
Note that I am not saying that Linux or OSX or FreeBSD is inherently more secure than Windows, but I will say that I think the traditional way of reasoning about security in UNIX environments is a lot simpler than in Windows environments.
And simplicity is extremely important in security.
I agree that this works well with isolated systems, but this becomes a problem in enterprise networks. A distinct advantage of ACLs in Windows is their integration with AD. While there are many problems with AD, there are also logistic problems with having 100 different users on 100 different servers with potentially 100 different passwords. This could easily lead to bad password policies.
The point he was trying to get across is that Apple shares attack surface with other Unix operating system vendors, which -- given his assessment of them as derelict in resolving vulnerabilities -- increases the harm their users are exposed to while Apple is sitting on fixes that other vendors have already written and deployed.
No piece of software is synonymous with insecurity -- except, perhaps, Sendmail. ;)
Your parentheses seem to imply that the OpenBSD team only started PF and then let go of development, which isn't the case at all. The OpenBSD team is still the lead developer of PF, and FreeBSD sources changes from "The Source". To the best of my knowledge, there are no notable PF forks around from which the OpenBSD team can, or ever has, sourced changes, but I'd be happy to learn otherwise if you have any accounts to share.
Because the developers of pfSense are FreeBSD users? I fail to see your point with that. And I beg to differ regarding the "no lead developer" claim. Indulge yourself by looking at the OpenBSD changelog from release to release, year by year, and notice how every single change, minor and major, stems from there and radiates out to, e.g., the PF port in FreeBSD.
Just as a curiosity: yesterday I watched a talk by Thomas Ptacek at some indie Mac dev conference where he showed, en passant, how some kludges used by Apple produced vulnerabilities in Mac OS X. It’s old, fixed stuff by now, but I was like “WTF?” all the same. Because it’s very stupid stuff from Apple.
Here’s the talk, slides (check slide 11), and related blog post:
> A lot of OS X users seem to have this idea that Apple hired only the best of the best when it came to programmers while Microsoft hired the cheapest and barely adequately skilled...
Is this really a commonly held belief? I've never encountered anyone expressing this opinion.
It’s possible some people might believe that, though perhaps not HN readers. But the quality of the management plays a very important role in the quality of the end result: Apple has Jobs and Microsoft has Ballmer. So Microsoft is at a disadvantage human-resource-wise.
As an engineer (though admittedly one at Microsoft), Steve Jobs seems like he'd be a /horrible/ boss. All appearances suggest that he doesn't care about good engineering, but rather that he cares about good user experience, damn the torpedoes.
As a former Apple engineer, I can confidently say that, while Jobs is the putative boss of everyone in the company, 99.9% of Apple engineers will never cross paths with him.
I know a fair few people at Microsoft, and elsewhere, and I've never seen evidence that the engineering talent distribution at Apple is really all that different from the talent distribution anywhere else. There are superstars and dolts in the expected proportions.
yeah, this lines up with my external perception as well, and the same is generally true of steveb at Microsoft; the only time I or most of my teammates ever see him is at the Company Meeting every year, and occasionally at engineering town halls.
that said, Steve Jobs seems (at least from external appearances) to have far more thorough top-down control over the company's engineering efforts than Steve Ballmer does; the highest I ever see engineering efforts come down from is our division director.
Looking in from the outside, I would hazard a guess that Microsoft is more Balkanized than Apple; there are many more products and the successful ones have been around for quite a while, allowing groups to pick up political capital that just isn't available, or rather, is expended differently at Apple.
I've heard a lot of stories of politicking at Microsoft (e.g. the Office project manager didn't want to implement handwriting recognition to add support for tablets, which hurt MS's early tablet OSes).
I compare that to Apple, which seems to have a top-down vision, from which all project behaviours and priorities descend. Lion's adding support for auto-save? You'd better believe that implementing auto-save support into iWork is a top priority, regardless of what the iWork PM thinks about it. That said, Apple seems to rarely hire people who don't share the same vision, and with that comes a certain uniformity of direction that tends to reduce inter-project scuffles.
Also, I get the sense that if (for example) the project manager for iWork was causing unnecessary friction with other teams instead of working with them towards a common goal, he'd be replaced with someone else who's more of a team player.
Ballmer certainly looks like a great boss, having great care for good engineering practices such as yelling, throwing chairs at people and being generally obnoxious.
Also, user experience is a part of good engineering.
Steve Jobs certainly looks like a great boss, calling the entire MobileMe team into an all-hands and asking them point blank why the fuck their software doesn't work.
user experience is indeed a part of good engineering, but it's not the be-all and end-all, and eventually you will /always/ run into a place where you must compromise between a system which is well-engineered and one that behaves in accordance with user expectation.
this is why OS X doesn't have full ASLR and DEP, because it can cause applications to start crashing at random because they were poorly written in a way that used to be invisible.
this is why UAC on Windows Vista is a terrible experience, because even trusted applications need to prompt the user to make sure they approve of them executing on an administrator token.
this is why our operating systems still have to reboot while applying security updates, because long-running services and the kernel have to be replaced and there's no good way to do it seamlessly yet.
About the rest, I wasn't praising Jobs as a great boss, just noting your equivalent isn't something to be proud of.
And I disagree that you will always need to compromise between good engineering and good UX; for example, you can certainly have ASLR and DEP with the same UX OS X currently has, since they don't add any burden on the user.
Of course sometimes you need to compromise, but not every time.
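To that point: ASLR and DEP are mostly properties of how binaries are built and loaded, invisible to the person using the machine. A sketch with common compiler flags, using a made-up source file:

cc -fPIE -pie -fstack-protector -O2 -o app app.c   # position-independent executable plus stack canaries
otool -hv app | grep PIE                           # on OS X, the Mach-O header records whether full ASLR applies

The cost, as noted upthread, falls on developers whose sloppy code starts crashing, not on users.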
Having something take over your PC is a terrible user experience. Tradeoffs may be necessary, including some that degrade the user experience in one place if they improve it overall.
UAC is, in principle, not at odds with equating good user experience with good engineering. It’s all about tradeoffs.
This is done to prevent malware faking the prompt and/or user consent. And it's not just "blacking out the display" - the whole thing is executed on another desktop (in OS terms, not in user terms):
http://blogs.msdn.com/b/uac/archive/2006/05/03/589561.aspx
>"and there's no good way to do it seamlessly yet."
Right. There are ways to do it, but not any /good/ ones. Good here meaning "while still letting the software execute efficiently and without a ton of added complexity"
I wouldn't disagree that good engineering includes good user experience. But I would have thought that good engineering would include a bit more than that.
Good UX engineering is good UX engineering. Software engineering / architecture / development in general is not necessarily the same.
An app can be beautifully engineered but have an awful UX. The inverse is less likely to be true (because bugs and obvious flaws like long delays and unresponsive UIs can quickly degrade UX), but still possible.
The problem I was alluding to in my original post is that the user is divorced from the engineering decisions -- as if there can be a better way of building something that doesn't take into account the very reason that it exists at all.
I sincerely hope that Microsoft can turn the ship. They've got lots of really smart people and a lot of cool ideas.
The author may have some good points, but this essay is so poorly organized that it's hard to tell what they are or put them into proper perspective. It's mostly a good argument for teaching essay-writing in school.
> The Unix Design is significantly less granular than that of Windows, not even having a basic ACL. The UNIX design came from a time when security was less of an issue and not taken as seriously as it did, and so does the job adequately. Windows NT (and later OSes) were actually designed with security in mind and this shows.
This comparison doesn't even make sense, pitting a decades-old UNIX design against a comparatively recently designed OS (Windows NT). POSIX permissions have stood the test of time and were, for the longest time, far better than anything available in Windows. Of course Windows NT has improved on what was available at the time.
That being said, Mac OS X has had ACLs since 10.4, so that argument goes right out the window. ACLs are enabled by default and they function as designed:
touch testing
chmod 700 testing
chmod +a "otheruser allow delete" testing
su - otheruser
ls -lahe testing    # -e shows the ACL entry alongside the POSIX bits
rm testing          # the ACL grants delete even though the mode bits say 700
> They often share vulnerabilities with core libraries in other UNIX like systems with samba and java being two examples.
That is because they use that exact open source software; it's a simple no-shit-Sherlock kind of deal. Luckily those are going away and won't be in Lion: Java will be an extra download, like Adobe Flash, and Samba won't be included by default because of the GPLv3.
Apple's policy regarding third-party software vulnerabilities could definitely be improved; it already has improved, but it could still be better. Ultimately, many of the third-party tools they ship are never used by consumers, and even though they may be exploitable they aren't accessible to an attacker (looking at you, PHP...).
> They are extremely difficult to deal with when trying to report a vulnerability, seemingly not having qualified people to accept such reports. Even if they do manage to accept a report and acknowledge the importance of an issue they can take anywhere from months to a year to actually fix it properly.
This has improved recently: they have a new head of security [1] and have increasingly shown that they are getting faster at closing bugs and shipping updates to fix issues. Look at the Pwn2Own iPhone bug: Apple was notified and an update was made available that fixed only that one flaw.
Do I think they are doing the best job? No; MSFT has them beat by a mile with their security response team (really impressive). However, the sentence quoted above makes it sound like this is still the case, which is no longer true.
--
It is a pretty good article in that it shows that there are certain issues Apple could definitely improve upon, but completely ignoring any developments in OS X over the past couple of years doesn't look good at all, especially when the flaws you are attempting to point out have already been fixed.
Whether ACLs are present or not makes little difference. When a user is logged in, he either has access to do something or he doesn't. Only one line of that ACL really matters, and that's the same on a system with permission control but no ACLs (such as traditional Unix).
If the user is able to escalate his privileges (whether with UAC or sudo, the OS doesn't matter) in order to install malware then he loses.
The recent malware (as I understood it) depended entirely on the user putting their password into an OS prompt granting permissions to the bad program. How can the OS draw the line between that and something safe like Growl or VirtualBox?
Stating the obvious: App Store-only software installs, and other DRM solutions. And yes, that has a price that people may or may not be willing to pay.
It's somewhat scary, but I'm starting to think that we will be forced to adopt something like that. Computers are used for serious stuff too (payments, medicine, things like that), and, apparently, way too many people can't be trusted to administer their computers securely. Right now people are mostly damaging their own lives, but if this starts happening to medical records, it's going to go beyond personal security.
It's too bad, really. Even if "developer programs" were free -- i.e., just required asking the corp for a developer key (this could be enforced at the state level) -- it would be more of a hassle than it should be.
So let's see an option in 10.8, aka Ocelot, in the Security preferences: a fairly hidden button called "Allow non-App Store app installs", with readable wording (unlike the iTunes EULA).
on a unix server, you have to be root to read everyone's data.
on an os x laptop, you can be the logged-in user and read everyone's data. file permissions don't really mean much when everything of importance on the system is owned by one user (which is running dozens of applications with large attack surfaces).
that's not really a criticism of mac os, because it's the same on a windows desktop. you need elevated privileges on both systems to be able to do certain things to the system, but if all you want to do is steal sensitive documents, spy on the system's webcam, launch DDoS scripts, or add a command to the startup/login sequence, there's no need to bother elevating privileges.
one way to fix that problem is to make the system actually use the file permissions and user separation that the system already has, so that safari is running as a separate user with no access to the operating user's home directory, and that itunes has no access to the machine's webcam.
i haven't really looked into the sandbox feature of lion, but i'm assuming it does pretty much that, just like ios' concept of each 3rd party application being segregated from each other and not able to read files it's not supposed to.
> on a unix server, you have to be root to read everyone's data.
> on an os x laptop, you can be the logged-in user and read everyone's data
Shenanigans. Unless you have the password of the logged in user, you can't read stuff belonging to other users. Further, if the user in question does not have an Admin account, you're shit out of luck even if you do know their password.
> Shenanigans. Unless you have the password of the logged in user, you can't read stuff belonging to other users
Their home directory defaults to world readable, and the default umask is set so documents created are world readable, so you can read things in their home directories.
The Desktop, Documents, and so on directories are 700, so things in those should be unreadable.
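A quick sketch of how to check and tighten this (the username is made up; paths assume a stock OS X install):

ls -ld /Users/*                 # home directories are typically drwxr-xr-x
ls -ld /Users/*/Documents       # while Desktop, Documents, etc. are drwx------
chmod 750 /Users/alice          # tighten a home directory
umask 027                       # stop creating world-readable files in the current shell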
"[Unix permissions] were much better than what was available in Windows for the longest time"? Try DOS. Windows 1.0 - Me were never multi-user operating systems, which was largely the purpose of having permissions, until the world realised just what a mistake it is to have full, unguarded permission to your system files.
MS's first multi-user OS was Windows NT in 1993, which shipped with ACLs.
Agreed; follow the link the author offers near the top of the article to Secunia. Of a few common OSes I looked at (Red Hat Enterprise 5, Windows XP Pro, Windows 7, OS X), OS X had the fewest advisories for 2009, 2010, and 2011, and most vulnerabilities seemed to be more benign in nature than those of the other OSes.
Perfect? Probably not, but it's still the OS I'm going to recommend to my mom.
Apple has fewer advisories because it's their standard operating procedure to sit on security bugs for several months and then patch them all at once, even if their contemporaries are patching them as they appear.
If you look at the numbers for OS X as opposed to Windows XP, OS X has 1,544 vulnerabilities in 153 advisories (~10.1 vulns/advisory) and Windows has 472 vulnerabilities in 358 advisories (~1.31 vulns/advisory).
Unless you have a good reason to believe that bugs in Windows are nearly eight times "more unique" than bugs in OS X, please don't compare advisories.
Well, in the years prior there were zero and now there is one. That's an increase of infinity percent!
Okay, snark aside, it is true that Apple should handle security bugs better, and "File Quarantine" looks rather rudimentary to me: http://support.apple.com/kb/HT3662
But as long as the "trojan botnets" he mentions are simple PHP scripts that were distributed years ago via pirated copies of Photoshop and can be killed simply by deleting the file and rebooting, I personally still feel pretty secure.
Apple did one thing very well: they ask for a password when doing something potentially harmful, but made sure that the password popup is rare enough that you won't be trained to blindly fill it in.
That one thing has more security value than any of the advanced security techniques listed in the article like "stack canaries" and "fine grained ACL".
It's too bad there are so many security consultants that focus on the technology instead of user behaviour. If they would just look at the statistics they'd see that >90% of security issues are not technology issues, they are behavioural issues.
Sure, it would be nice to have a few of those advanced security techniques in OS X if they don't cause too many usability or performance issues, but it will have very little effect on security as a whole.
However, you only need the password if you want to be root, and most of the stuff malware wants to do (including keylogging, which the article mentions; requiring root to intercept keyboards is only moderately useful if the regular user can gdb -p whatever app has the password field) does not require being root.
Oh, is this new in Lion or something? I've never seen that prompt before. (But in any case, there are other options such as clever use of DYLD_INSERT_LIBRARIES.)
My go-to bookmarklet for poor article design: Readable (not Readability). Someone posted it as a Show HN a few months ago and I haven't looked back since; it's incredibly fast and very customisable.
I don't see why this is discussed so much; afaik this article just says "Windows is more secure than OS X", mentions Mac Defender, and goes on about OS X's market share... The same story Mac users have heard for the last 10 years. Nothing new here, moving on, and remembering the days of Melissa, Kournikova, Sober, MyDoom etc...
I don't know any enterprise installations of Mac OS X Server that use AFP.
As for kerberos, that is painful on any platform. At the moment at work I am trying to figure out why Mac OS X takes 10 minutes to connect to a Windows Server 2003 based file share, all I see with Wireshark is a bunch of Kerberos stuff being thrown around, whereas Windows clients connect without issues, but without ever attempting to use Kerberos.
your Windows clients are probably using NTLM (or NTLMv2), Microsoft's old, terrible auth protocol that the Windows team eventually abandoned for Kerberos. there are policy settings you can change to force Kerberos; I'd suggest Googling to see if you can find them, and see if it breaks your Windows clients as bad as your OS X clients seem to be.
quite possibly. I'm not terribly familiar with Kerberos as a protocol, but I know I've definitely seen Kerberos login misbehave in a way that caused it to take multiple minutes and then time out.
We have some colocation clients who have a full cab of Xserves and Mac Pros (with OS X Server installed). One time, I asked what they run with all of that. They said "Ooh, we needed it to run Tomcat". Uhh....
I don't really understand the point of OSX Server beyond possibly render farms (for music / movies)
> I don't really understand the point of OSX Server beyond possibly render farms (for music / movies)
Small businesses, because they are very easy to manage.
More importantly though, Mac imaging. You can't run DeployStudio on anything but a Mac running OS X Server. So if you have more than 5-10 Macs to manage, having an OS X Server around is a no-brainer. It doesn't cost much and it makes managing & imaging Macs as simple as or simpler than managing PCs. This is by far its most legitimate use.
>Personally for me, malware is a minor threat with the impact being negligible as long as you follow basic security practices and can recognize when something looks out of place.
Likewise, with proper security knowledge, the holes that Apple leaves unpatched for months are "minor threats." For example, disabling Java in the web browser when there's a known vulnerability. It's an inconvenience, but so is having to always be on the watchout for things that are out of place.
Apple is not fantastic on security, but they are good enough for the current threat level, as long as you take basic security precautions.
with apologies to ESR: "with sharp enough eyes, all bugs are visible."
"the impact [is] negligible as long as you follow basic security practices and can recognize when something looks out of place" is a worthless statement, because the majority of users have repeatedly proven to be unable to do that (hence MacDefender, hence the largest families of malware on Windows being fake AV.)
it also makes it too easy to hand-wave away security threats. you got a trojan on your MacBook? you obviously weren't following basic security practices.
It is a pity the author does not include at least one Linux distro. Especially for the mentioned "targeted attacks", servers are the most likely targets.
And in the server market, OS X is hardly around, while the share of the various Linux servers is growing even larger than Windows'.
The whole article seems overly emotional and not objective at all. The only things I agree with are that ASLR and DEP are not implemented as well as they could be (though I have not looked at it myself).
found this to be salient; there's a lot of malice that can be done in plain sight of an ignorant user:
Root access is only needed if you want to modify the system in some way so as to avoid detection. Doing so is by no means necessary, however, and a lot of malware is more than happy to operate as a standard user, never once raising an elevation prompt while silently infecting or copying files, sending out data, doing processing, or whatever malicious thing it may do.
Does anyone have a version of this article with an even smaller font size? Maybe something that requires a microscope to read? Size 8 font isn't blinding enough.
One point I would add is that by default, Macs have Perl, Python and Ruby (I think). So it's easy to script malware or write portable tools. I'm not suggesting that these languages are insecure or should not be installed, only that a malware designer can pretty much count on having them available to use. This may make Mac/Linux cross-platform malware easier as well.
Those applications aren't launched or available from the outside. If the user runs/double clicks on something it is already game over. Social engineering attacks are never going away so long as humans are humans and want to see Anna Kournikova naked.
Is it only me who finds it funny that the author's name is allthatiswrong, given how many factual errors there are in there? :) I can't quite make up my mind whether the author is trolling or has just failed to read up on the topic he tries to school us in.
All things said and done, one of the biggest flaws is that both Unix and Mac give users the sense that they are secure and nothing can go wrong. That is the same reason Mac Defender worked.