How on earth can this be effective? It is trivial to customize binaries in the installer so that no two users have the same file checksum. With a bit of work, one can rig the download server so each user gets a completely different installer checksum as well.
I remember reading about polymorphic viruses back in the 1990s - not only did those have different checksums on each computer, they also had no long-ish substrings in common between two versions, making signature analysis ineffective. Did malware authors just forget about all of that and go back to the trivial “everyone gets the same image” strategy?
In short, MD5 sums and signatures are there to protect against the low-hanging fruit: the spray-and-pray type of malware that's pretty common. If someone wants to target you with uniquely-signatured malware, they can. Identifying it isn't going to be what stops it, but proper opsec can.
It seems to me like it's trivial to create a webserver which says "serve the same binary, but put a random ASCII string in bytes 40-48". Or make a malware installer which says "write the executable file to disk, but put a random value in bytes 80-88". Sure, it won't help against a good YARA rule, but it seems really easy to do, and it will frustrate researchers and even defeat some endpoint protection software.
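For illustration, a minimal sketch of what that could look like server-side (the payload filename, and the assumption that bytes 40-48 are unused slack in the file format, are both made up for the example):

    import random
    import string
    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAYLOAD = open("installer.bin", "rb").read()  # hypothetical installer image

    class UniqueHashHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Copy the payload and overwrite an (assumed) slack region with
            # random ASCII so every download gets a different checksum.
            body = bytearray(PAYLOAD)
            body[40:48] = "".join(random.choices(string.ascii_letters, k=8)).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), UniqueHashHandler).serve_forever()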
Could they do this? Sure, but it would be a lot more complex than the current method of just finding any target where they can upload files. And the payloads change very often anyway (sometimes daily), so there is no real need for them to change dynamically.
Anyone in particular you recommend following?
Many systems already do something similar, e.g. I think I've seen Chrome put speed bumps and scary warnings on rare files.
This forces malware authors to choose between the "hey this is the first time we see this file on the planet, it's probably bad" warning and serving the same hash to many victims.
A more extreme approach is binary whitelisting - if the hash isn't explicitly approved, it doesn't run at all.
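In code, whitelisting is conceptually just a default-deny set lookup before exec. A rough sketch (the allowlist contents and paths are hypothetical):

    import hashlib
    import subprocess
    import sys

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def run_if_approved(path, allowlist):
        # Default-deny: refuse anything whose hash isn't pre-approved.
        digest = sha256_of(path)
        if digest not in allowlist:
            sys.exit(f"blocked: {path} ({digest}) not on the allowlist")
        subprocess.run([path], check=True)

Real implementations (WDAC, Santa, etc.) enforce this in the kernel rather than in a wrapper, but the decision logic is the same.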
Hashes are also useful to uniquely identify what you're talking about. Even for a fully polymorphic sample, being able to tell other researchers/defenders "hey, I found ab512f... on my network, and it does X" is useful even if nobody else has the same hash, because then they can get a copy of that sample and analyze it themselves.
It feels like most malware is going backwards from the elaborate schemes of the past to "hey user, please run my malware".
Of course, state sponsored malware _can_ be the exception, although it often uses the same patterns, because they apparently work well enough against most targets.
You can’t limit people to opening only PDFs which you approved. Therefore if a PDF reader has an exploit (be it the ability to write to its code locations via a PDF payload, or a scripting language embedded within PDF), then you are pwned.
EXE files are far from the only files that allow you to embed code today. Heck, you can embed JS in an SVG image, and if you’re unlucky enough to display that with some not-so-smart image previewer then you’re done.
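To make the SVG point concrete, here's a rough sketch of checking an image for embedded script before handing it to a previewer (simplified; real SVG can hide script in more places, e.g. javascript: hrefs):

    import xml.etree.ElementTree as ET

    def svg_has_script(path):
        # Flag <script> elements and on* event-handler attributes, both of
        # which can carry JavaScript inside an SVG.
        for _, el in ET.iterparse(path):
            tag = el.tag.rsplit("}", 1)[-1]  # strip any XML namespace
            if tag.lower() == "script":
                return True
            if any(attr.lower().startswith("on") for attr in el.attrib):
                return True
        return False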
Fingerprinting is useless for that kind of stuff because it takes away the only good use of computers: sharing of new, unique information.
It raises the bar for the attacker from "trick the user into running your malware" to "have a zero-day for your PDF reader + an escape for whatever sandbox the reader may be put into + fileless malware to actually do something useful after they gained initial execution on the target machine".
Again, not claiming it's unbreakable, but it stops the vast majority of attacks (allowing you to spend time defending against the rest instead of reimaging machines infected with commodity malware).
> Did malware authors just forgot about all those and went back to trivial “everyone gets the same image” strategy?
No. But if you make your thing as polymorphic as possible and deliver it in an automated fashion, I take 1,000 samples and look at what doesn’t change to make a signature.
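Something like this sketch, for instance (the window size and the naive substring search are arbitrary choices; real tooling mines common substrings much more cleverly):

    def invariant_signatures(samples, window=16):
        # Keep only the byte windows from the first sample that appear
        # unchanged in every other sample. Naive and slow, but it's the idea.
        base, rest = samples[0], samples[1:]
        hits = []
        for i in range(0, len(base) - window, window):
            chunk = base[i:i + window]
            if all(chunk in other for other in rest):
                hits.append((i, chunk))
        return hits

    # samples = [open(p, "rb").read() for p in paths_to_1000_samples]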
Also these things exist as webservices these days and criminals pay per use to generate fresh undetected executables.
Just makes more sense to only release a new binary once it has been detected, even if it's just a button click and could be easily automated.
There are fully metamorphic viruses, where even the code-mutating part is changed. Algorithmically it's equivalent, but there's no "static" portion of the virus.
Never seen it play out that way. There's always something static in practice. See, you just talked about randomizing the code - but a Windows PE isn't just code. It also has an icon. And yes, I've seen AVs detect specific icons.
In theory it's easy to not have anything static, in practice, that's the cat and mouse game you're playing.
And that's how you get overzealous AVs marking executables because they happen to have PE headers...
I bet most researchers have tools to bin files together automatically based on the same MD5. Why make their life easier when this is so easy to defeat?
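That binning tool is only a few lines anyway; a sketch (the sample directory is hypothetical):

    import hashlib
    from collections import defaultdict
    from pathlib import Path

    def bin_by_md5(sample_dir):
        # Group collected samples by MD5 so identical files land together.
        bins = defaultdict(list)
        for path in Path(sample_dir).rglob("*"):
            if path.is_file():
                bins[hashlib.md5(path.read_bytes()).hexdigest()].append(path)
        return bins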
It's a pure whitelisting solution where every single executable and kernel driver needs to have an approved digital signature or matching hash value or they won't be permitted to run.
It's virtualization assisted and can't be disabled without rebooting and if you use a digitally signed policy and someone tries to remove it, the machine will refuse to boot.
The coolest thing is, it even expands to the scripting languages built-in to Windows so PowerShell execution is restricted according to policy etc.
In practice of course, it's a big pain in the ass to manage - a lot of software isn't digitally signed, etc.
Every single artifact of every program needs to be digitally signed or have a matching hash in the policy or they won't be permitted to run.
For example, suppose a software installer: the .msi itself is digitally signed so can easily be permitted to run... But then, during installation it unpacks an .exe into %temp% that isn't digitally signed and attempts to run that - oops, won't run. I've come across even Microsoft software that does this.
The built-in scripting languages do, PowerShell enters constrained language mode and IIRC something also happens to VBScript/JScript, haven't even looked at those.
A positive result means an immediate call to action, and a known type of malware. The chance that malware authors would produce a different piece of malware with the same MD5 is close to zero (though, if pulled off, it would be quite a practical joke).
At the end of https://redcanary.com/blog/clipping-silver-sparrows-wings/
For even more protection, I flat out locked the folders ~/Library/LaunchAgents and ~/Library/LaunchDaemons in Finder, though this could interfere with some software you use.
Also, Little Snitch will show you the connections it makes every hour. Before anyone says it - in macOS 11.2, Apple removed the exclusion list that allowed their own software to bypass firewalls.
> ~/Library/._insu (empty file used to signal the malware to delete itself)
> /tmp/agent.sh (shell script executed for installation callback)
> /tmp/version.json (file downloaded from S3 to determine execution flow)
> /tmp/version.plist (version.json converted into a property list)
1) All of the tools expect and produce MD5 because it is the convention. Computing MD5 hashes of every file on disk or passing through a network is a relatively common forensic operation right now, SHA256 is not.
2) IOCs are not intended to be used in scenarios in which a malicious collision presents a problem (no one would want to... mask their malware to still look like malware?) so there is little downside to carrying on the convention.
While I would recommend against MD5 in most modern applications, if nothing else to avoid having discussions like this all the time, before upsetting an entire ecosystem of tools it is important to consider whether or not the known weaknesses of MD5 actually pose a problem. In this case, a matching hash is the bad state, and so there is no real impact of a preimage attack.
Now, if you used it to verify whether a binary was _secure_, this would be problem. But in this case, the (still unlikely) possibility of a false positive is not really a threat.
Even that's probably fine. Collision attacks require the attacker to control both inputs. In the case of code signing this would mean the publisher is in on it, in which case you're already screwed.
Technically, no. It gets a lot easier when you only need to find any pair of inputs which produce a collision, but with computing power increasing a lot and MD5 being very broken, it is feasible that a determined attacker with a lot of compute power can create a collision with a fixed hash, or at least will be able to in the somewhat near future.
I'd agree that you're reasonably secure, but when you need security, you shouldn't bother with outdated tech when secure and future-proof alternatives are readily available.
That's a preimage attack, which is a much higher bar. Even for md5 there hasn't been much progress made: https://en.wikipedia.org/wiki/MD5#Preimage_vulnerability
If the NSA found a way to get the complexity down a bit more (they had a decade) and the target is valuable enough (imagine, for example, having a "valid" TrueCrypt executable), this is not in the realm of impossible anymore.
Now, of course, the NSA will probably not outcompute the Bitcoin blockchain, and transferring the hash rate is not that simple, but they do have a lot of power and cryptographers. What I'm trying to say is that the orders of magnitude are not in the "totally impossible" space anymore. Yes, you're probably safe, but given the availability of secure alternatives, there's really no reason not to get off the thin ice.
If MD5 was being used to verify a piece of software you actually want, it's not secure as it's not collision resistant.
But since we can be quite sure no one has made a file that shares an MD5 hash with this new strain of malware, MD5 is sufficient as a checksum in this use case.
You're correct to point out that newer hashes are still preferable though, simply to get out of the habit of using MD5 if nothing else. I assume you got downvoted because MD5's weaknesses aren't relevant in this specific instance. But still they could just have easily used SHA256.
You were downvoted (or, rather, I downvoted you) for your knee jerk "MD5 BAD!" comment without thinking through what the problems would be, and worse, you took an aggressive tone from the beginning ("I can't wait to hear the apologists") that just makes you sound like a jerk.
While your overall point is correct, there is a specific tactic where a collision attack would be useful: make two md5-colliding versions of the virus that behave subtly differently, in an attempt to make researchers throw out useful hypotheses about how it works because someone already determined that (the other version of) it didn't actually do that.
I don't really see how the weaknesses of MD5 are applicable here but I'd like to learn if I'm wrong.
We saw just recently the likely Russian backed attack on a wide net of US companies and even government agencies.
I doubt Macs are used often in US gov agencies but they are used in tech companies for example. That latest attack hit a lot of seemingly random US companies but if you want to destabilise and cause loss of trust it is a very effective strategy.
I happen to know someone who works high up in one of the affected companies, where the malware had been sitting on the server for a few months passively gathering information before the attack. They struck in the middle of the night local time and erased all the backups before they began running ransomware across the entire corporate network.
Thankfully the attack was caught within 10 minutes and they were able to recover fairly quickly once the dust settled, but they've got especially good security. Such an attack could have done far more damage to a company.
And if, like me, your first thought was "you're seriously saying they have good security but no off-site backups?" Yeah, I know. I'd bet they have regular off-site backups now though...
Makes the theory this is a nation state attack far more plausible then.
Also, I notice that Homebrew packages run just fine without needing to be signed by Apple - not sure how, but that's a possibility if you really don't want to jump through Apple's hoops.
I say the best thing to do is distribute your software using Homebrew then. As well as it being super convenient since it's effectively the same as apt or any other package manager common to other Unix systems, it bypasses Gatekeeper.
Got curious how and it's amazingly simple, it literally just provides an environment variable that deletes that "quarantine" xattr metadata.
Tell your users Homebrew is the supported installation method and you can skip right over Gatekeeper.
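For the curious, the "quarantine" bit is just an extended attribute, so stripping it is a single call. A sketch of the idea (if I remember right, Homebrew's actual knob is the --no-quarantine cask option via HOMEBREW_CASK_OPTS):

    import subprocess

    def strip_quarantine(path):
        # macOS tags downloads with the com.apple.quarantine xattr;
        # Gatekeeper only kicks in for files that still carry it.
        subprocess.run(["xattr", "-d", "com.apple.quarantine", path], check=False)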
It's an automated system.
Also, if you tell me your app is only installable via Homebrew, I'm not installing it. Comparing Homebrew to apt is like comparing a playdough-and-crayon sandwich with an actual sandwich.
Sure, they both look kind of similar at a distance, and technically you can eat both of them, but one is really not well thought out, and if you say you don't like how it tastes, the child who made it will get upset with you.
There are some ways Homebrew is actually more secure than apt. For example in order to do anything with apt you must give it superuser rights. The same is not true of Homebrew, which installs binaries in userspace and explicitly tells you to never use sudo.
A Homebrew installer is a simple Ruby script you can easily audit for yourself.
The packages are SHA256 signed to ensure code integrity.
You can point it at a specific repo you trust and tell it to get a package from there.
All downloads are done through a TLS connection, which is not the case for apt.
And of course the whole thing is open source.
I fail to see where the hate is coming from.
> Notarizing is not strict by any definition of the term, unless you consider "scans your software for malicious content, checks for code-signing issues" to be strict?
I mean, having to register as a developer, get a certificate to sign your apps, and still have to send off your software to Apple each time you update it before you can distribute it on your own website is pretty "strict" compared to every other OS.
It doesn't seem to do much to prevent malware in the wild either.
I still use brew (because it has more apps than MacPorts), but why in the world they made this decision rather than using, say, ~/Applications (the macOS recommended practice for software that only one user needs) or ~/homebrew is beyond me (granted, apt doesn't do this either, but I'm 99% sure you can do it with yum, and it's how Scoop works on Windows).
Can you name a single attack that requiring root for Homebrew can protect against?
> but why in the world they made this decision rather than using, say ~/Applications (the macOS recommended practice for software that only one user needs) or ~/homebrew is beyond me
Because it's not possible to distribute binary packages without using a predetermined prefix path. You can easily find devs bullied into explaining the same thing if you look into forums or issue trackers of any binary system package manager.
> granted, apt doesn't do this either, but I'm 99% sure that you can do it with yum
No, you can't. Yum, like any other binary package manager, doesn't let users choose their own installation path. There are exceptions, though: RPM packages can be marked as relocatable by the packager, but that's a rare case.
> and it is how scoop works on windows
Windows is an outlier here, because Windows programs are mostly relocatable by necessity. Scoop packages can't rely on shared paths unlike Unix packages. This is why you end up with so many copies of bash.exe on Windows.
> The packages are SHA256 signed to ensure code integrity.
And apt uses GPG signatures.
> You can point it at a specific repo you trust and tell it to get a package from there.
Exactly like apt?
> All downloads are done through a TLS connection, which is not the case for apt.
Since apt enforces GPG signatures by default, this could be a privacy issue but shouldn't be a security issue.
Unless you meant only the sudo/non-sudo point to be your claim of being better than apt, and the rest was just defending Homebrew?
Installing global software without superuser rights is a security failure, not a feature. You can argue about this, but you're wrong. Decades of good Unix security says Homebrew is doing it wrong.
Also your understanding of this problem is wrong: Homebrew does not install tools in "user space". I know the default path used to be "/usr/local" but that is not "user space". It's still global to the machine.
Apt is perfectly capable of downloading packages from an HTTPS server, or any of the other protocols supported by apt transports - because it's actually a problem that had some thought put into it, rather than just "hey, let's clone this git repo that keeps growing over time to everyone's machine".
My real issue with Homebrew is about how half-assed the approach is, and how the core developers essentially react like children when questioned/challenged about their solutions.
Initially Homebrew was source only: there was no binary distribution; everything was built locally, all the time. Because security is just a pesky annoyance, it does this all as the user - rather than the more sensible approach of building as a regular user and then installing as root. But like I say, pesky security.
Several years ago Homebrew added the concept of binary distribution. The problem is, they either drank a bottle of tequila each before implementing it, or they have literally never used another package manager before.
Without binary distribution in the picture, the logic for handling dependencies mostly worked OK. If you had a Homebrew package that depends on what apt would call a "virtual package" - i.e. something that is provided by multiple other packages - and you build from source, it will check if one of those dependencies is installed, and if not, build and install one - probably just the first one, I'd imagine.
When Homebrew added the ability to install prebuilt binary packages.... they never changed the dependency management (or if they did, they didn't change it to support the virtual package pattern, which is not exactly rare in real package management systems).
So if you have package A, which depends on Foo, and Foo is provided by both B and C, when you do a source install, it'll check if anything providing Foo is installed already. No? Ok, build and install something that provides Foo, so we'll use B. Now carry on and build and install A with a dependency on B.
In the same scenario, but with a binary install... Homebrew has already done all that "OK, what provides Foo... OK, build and install B, now proceed and build A with a dependency on B". So you do a binary install of A, having previously installed C, and all of a sudden it'll tell you it has to uninstall C - because the binary package doesn't have a dependency on Foo. It has a dependency on B.
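To spell the difference out, a toy model (package names are from the example above; this is not real Homebrew internals):

    providers = {"Foo": ["B", "C"]}  # Foo is virtual: either B or C provides it
    installed = {"C"}

    def source_install(pkg):
        # Source build: any already-installed provider of Foo satisfies it.
        if not any(p in installed for p in providers["Foo"]):
            installed.add(providers["Foo"][0])
        installed.add(pkg)

    def binary_install(pkg, baked_in_dep="B"):
        # The bottle was built against B specifically, so the recorded
        # dependency is B, and the already-installed C doesn't count:
        # C gets replaced instead of being accepted as a provider of Foo.
        if baked_in_dep not in installed:
            installed.discard("C")
            installed.add(baked_in_dep)
        installed.add(pkg)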
The suggested solution was to install from source... I don't know what they say now, because I know when a tool is not worth wasting my time on, and I promptly stopped using Brew when this came up.
This is what the Homebrew website says, right now, about building from source:
> Building from source takes a long time, is prone to fail, and is not supported.
The dependency scenario gets even more fucked if you want or need to provide a dependency from a different repository (called a "tap", because apparently nobody told these people that alcoholism is a thing, and over-worked analogies make you sound like an idiot)... You can't. You just can't. There is deliberately no way to satisfy dependencies from third-party repositories, overriding the "core" repo.
The project decided at some point that they need to know who installs what, and when, and what colour underwear they had on at the time..
For reference, Debian does also have a 'package tracking' concept too, the `popularity-contest` package. What Debian does, is ask the user if they'd like to provide package statistics, and then goes to quite long lengths to give the user multiple options to ensure that the data sent is anonymous, and stores that data on project-controlled server(s)....
What did Homebrew do? Oh. Right. They send data to Google Analytics, and it's enabled by default - opt-out, not opt-in.
Homebrew is a tire fire inside a dumpster fire, and any time the fire department turns up and says "hey it doesn't have to be this way" the project says "no no, we like it this way".
> I mean, having to register as a developer, get a certificate to sign your apps, and still have to send off your software to Apple each time you update it before you can distribute it on your own website is pretty "strict" compared to every other OS.
It's suddenly very clear to me why you think Homebrew is high quality software if you think signing your apps is some onerous task.
> It doesn't seem to do much to prevent malware in the wild either.
Can you point to some evidence of specifically malicious software that has passed the Notarisation process?
I'd consider "you can't ship software for people to run on their own machines without first uploading it to Apple to get their seal of approval" to be quite strict, regardless of what Apple actually does / looks at when you upload it to them. I don't care how low their bar is, I don't care that it's automated, I frankly wouldn't care if it was a complete automatic rubber-stamp with no checking at all - Apple forcing every developer to go through them is draconian.
That means it is no longer a general-purpose computer, but an extension of Apple's cloud.
Just as I check out software that hasn’t come from a ‘store’ on Linux before using it - and only set the execute permission if I’m happy - I do the same on macOS.
I was under the impression that any program with a hash that had not been seen yet must first be approved remotely by a central server before it is allowed to run.
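You can poke at the local half of this with spctl, the command-line front end to Gatekeeper's assessment; a rough sketch (the online hash lookup described above happens against Apple's notarization servers and isn't directly scriptable):

    import subprocess

    def gatekeeper_verdict(app_path):
        # Ask the local policy daemon whether it would let this app run;
        # spctl prints its "accepted"/"rejected" details on stderr.
        result = subprocess.run(
            ["spctl", "--assess", "--verbose", app_path],
            capture_output=True, text=True,
        )
        return result.returncode == 0, result.stderr.strip()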
I quit around 10.11, after watching things get more coddling and abusive for a couple of releases.
I still use and love my MBA, but I probably won't be returning to that world anytime soon for my primary desktop.
Not to mention Login Items under System Settings.
Finally, I hope it's obvious that if you're infected, for all you know all your legitimate-looking launch agents could be compromised and secretly run the malware in a background process upon execution.
Launch daemons are installed for the system only. Normal users can only run agents.
What? Even random hackforums bots have this feature.
The article apparently doesn't explain how to protect against the malware?
Cannot hurt to manually create ~/Library/._insu right? (not that it seems to offer great protection but I take it cannot hurt?)
Anyone got any idea as to how to harden OS X a bit against this malware?
There'd be no point manually creating the file; by the time the malware sees it, you are already infected.
My advice: keep your OS and browser and everything else updated, use uBlock Origin in the browser, and use a network-wide ad blocker (Pi Hole, AdGuard Home - personally I prefer the latter) with a few malware blocklists and keep them updated.
Malwarebytes were the ones who discovered how big this thing is so you can install that on your Mac and run scans.
You may also want to invest in Little Snitch which won't necessarily protect you against an infection but it will alert you to the calls the malware keeps making to its C&C servers. It's also entirely possible the self-destruct mechanism they found is triggered by such software being installed on the machine. Past Mac malware often removes itself if it detects Little Snitch.
And, obviously, don't install random software from shady sources, but I assume anyone on HN knows this already.
Bonus: Avoid a lot of low-quality content.
No word in this article about the infection vector.
Honestly, I'm tired of HN users downvoting anything critical of Apple. It only further confirms how much of an echo chamber this place can be.
For instance there were headlines around the internet not too long ago stating Android is more secure than iOS based on claims made by Zerodium.
Any Android user who read that may well have the same attitude you've described from iOS users. And it's potentially far more dangerous on Android because it allows you to sideload apps.
You can even extend it to Windows. Your average user will buy a laptop preinstalled with McAfee and think "no need to worry about viruses now because I've got an antivirus."
Don't get me wrong I agree it's perfectly reasonable to be critical of Apple when it's warranted, but we don't yet know if it is in this case. It's entirely possible (and fairly likely) this malware was delivered as a trojan using pop ups the user had to interact with. If that's the case you can't blame Apple for user error, especially when trojans exist for every single OS that allow the user to install software from the internet.
> most users perform two types of installs: relatively safe ones through the App Store, or venturing out into the wild west of the internet and bringing home a random binary that hopefully does what it says.
This seems to imply any software installed from the App Store is safe while anything from outside the App Store is dangerous.
Isn't the implication that App Store downloads must automatically be safe falling into the same trap you're criticising here?
Quotes from the article:
> Developer ID Saotia Seay (5834W6MYX3) – v1 bystander binary signature revoked by Apple
> Developer ID Julie Willey (MSZ3ZH74RK) – v2 bystander binary signature revoked by Apple
So both of these malware packages were signed by Apple.
Seems like relying on Apple's review processes to determine how safe a particular binary is only provides the same false sense of security you're describing.
 https://www.youtube.com/watch?v=bKgf5PaBzyg (sorry, couldn't resist)
Since this didn’t go through the App Store it probably wasn’t reviewed but the developer’s certificate would be checked when it’s run - hence the revocation now.
No kidding I saw that first hand while naively asking a harmless question about Cuda support on M1. Got downvoted to oblivion and added the comment later: https://news.ycombinator.com/item?id=26149344
You already have to be root to do this.
This tells you nothing about Mac security.
I'm an amateur when it comes to application security, yet even I know that you should mlock() your secret data and never pass secrets through command-line arguments. Both are trivial mistakes that Apple has made with FileVault. They even shipped code that would set the password hint to the actual password. How are their security standards so lax as to miss all of that in their encryption software alone?
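For reference, pinning a secret's pages so they can't be swapped to disk looks roughly like this via libc (a sketch; error handling and zeroing-on-free trimmed):

    import ctypes
    import ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    libc.mlock.argtypes = [ctypes.c_void_p, ctypes.c_size_t]

    def locked_secret_buffer(size=64):
        # mlock() the buffer's pages so the kernel never swaps the secret
        # to disk; munlock() and zero it when finished with the secret.
        buf = ctypes.create_string_buffer(size)
        if libc.mlock(ctypes.addressof(buf), ctypes.sizeof(buf)) != 0:
            raise OSError(ctypes.get_errno(), "mlock failed")
        return buf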
While relevant getting root is still different from getting the encryption keys.
It’s a very severe bug, for sure, but it wasn’t long-standing and looks like it only affected a small number of versions before it was fixed.
It sounds like you haven’t told them any technical details about the exploit, at all. If this is the case, then they have no way to tell you apart from a timewaster or scammer.
If all you have done is tell them you have an exploit and tried to schedule a meeting, it’s unlikely they’ll take you seriously.
I've done many bounty programs in the past. Companies will always choose to pay nothing or close to nothing when it is favorable to them. Apple refuses to share what the bounty ranges are given technical information about it, and asked me to submit the entirety of my research without having any idea of what the compensation may be. So they made it impossible for me and perhaps many others to help improve the security of their OS ethically.
For every 1 company that pays you in bug bounties, 10 don't. If you're a security researcher, you can't afford the possibility of getting nothing for months of research.
It's somewhere between $5,000 and these numbers (as $5,000 is the minimum).
I have been unsuccessful at confirming with Apple that a local privilege escalation (LPE) to root is in any one of those categories, even though it's a widely understood type of vulnerability. So I've been trying to get their response on a category or at least have some frame of reference so I can have a reasonable expectation of what the ranges are.
That is what Apple will not provide.
Do you have reason to believe that Apple doesn’t pay, or are you basing this on other companies behavior?
I've read some articles about Apple's bounty program from people who were less than thrilled with it, and Apple went back and revised the bounty payouts when it ended up being bad PR/in the news. In the bug bounty programs I've done, I found nothing but P1 vulnerabilities (LPE, RCE, SQLi, LFI). The payouts have always been far below minimum wage, which gave me perspective on how much I could have made working in fast food instead.
I have no reason to believe Apple will be more fair to researchers based on their track record, and their unwillingness to talk about it doesn't inspire any confidence.
I’m sure they get a bunch of bullshit and scams from people trying to sell them exploits, so the distrust is mutual.
This does represent a trust issue in the industry, but I don’t see how it demonstrates that they ‘Don’t care about security’.
That seems like bullshit to me.
Also claiming people are shills is against HN guidelines.
By the way, the reason why I mentioned this particular example is that it almost certainly proves malicious intent. It is not credible that a team of security engineers at Apple developed FileVault without thinking at all about locking memory for the passphrase, and then continued not to think about it for years. They basically only fixed it (and at first even in the wrong way) after it became folklore on every second Mac fan site. (Lost your FileVault password? Don't despair: just copy & paste this into the terminal.)
There is 0 reason to trust Apple on security.
I didn't make any such claim. You're seeing things that aren't there.
Encryption at rest is very important.
However, it’s worth putting different risks into perspective.
What you can do if you have root, simply is a different category of risk than what you can do without. Pointing that out insinuates nothing.
> By the way, the reason why I mentioned this particular example is that it almost certainly proves malicious intent.
Obviously not, because otherwise they wouldn’t have fixed it.
That you can find a security problem from the past that has now been fixed, is the precise opposite of evidence that a company doesn’t care about security.
Anyway, you're kind of missing my point. There are plenty of other examples of Apple not addressing security issues in time. Be that as it may, if you feel secure about your Mac, who am I to argue with that...
Yes, some people need worry about people entering their office to steal their home directories, but it’s not a simple thing to exploit.
Apple is quite slow to address certain issues, but on the whole end user security is still excellent for most end-users now.
And let’s remember - this is a fixed problem. Obviously they did care about it.
Not really. An encrypted home directory is mounted when the user is logged in and surfs the web. If there is a remote exploit, then the attacker can just read the unencrypted data.
And as I've said, they started caring about it after many years and when the exploit was on every Mac fan page.
Here's the typical cycle for problems reported on Apple products:
1. A few members post reports of the problem, report it to Apple
2. No response from Apple
3. Increased number of people report the issue
4. No response from Apple
5. Apple apologists dismiss the reports as very rare, the result of trolling, or exaggeration by drama queens
6. Even more reports of the problem
7. No response from Apple
8. News of the problem hits blogs
9. Apple apologists dismiss the blogs as simply engaging in clickbait
10. No response from Apple
11. Those affected by the issue threaten a class-action lawsuit
12. Apple apologists decry the "sue happy" nature of American consumers
13. Apple acknowledges the legitimacy of the problem
14. Apple apologists are silent
15. Apple releases an update to correct the problem
16. They set up a "program" to address the problem
17. Apple gains some positive publicity
18. Apple apologists applaud Apple for doing the "right thing" (for an issue that they said from day one was not actually an issue)
19. First-hand experience with the "program" reveals very strict guidelines and restrictions that greatly reduce the number of affected customers who can participate in the program.
And, of course, the whole comment is inane. The MacOS security model these days very effectively prevents apps from accessing data outside their assigned sandbox. Meanwhile, distributing outside the App Store still requires only a (strict) subset of the steps required for distribution in the App Store.
> Amazon Web Services and the Akamai content delivery network
Why isn't AWS investigating?
This kind of affair usually gets escalated to CEO level. Bezos will pick up the phone if Cook calls. But usual plebs business goes via abuse notices as I described.
For all we know, this software could just be quietly collecting wallet passwords waiting for an opportune moment to attack.
With the sophistication of red-team hackers from Russia, China, NK, and Iran, why would we want to use computers for such critical infrastructure as payments?
That feature still makes them desirable...especially in places like frozen Texas this past week.
There was no cell service and the place could only take cash. It was a 30 minute wait in the cold, but I used the water both for drinking, and so I could flush my toilets (city water was not working).
So if you disagree, fine - in fact, share your opinion - I’ll probably learn something. But downvoting it just shows ignorance.
It's only T-shirt weather if you're moving from one heated space to another.
> “To me, the most notable [thing] is that it was found on almost 30K macOS endpoints... and these are only endpoints the MalwareBytes can see, so the number is likely way higher,” Patrick Wardle [...] wrote in an Internet message.
So it seems MalwareBytes detected it on 30k customers' Macs.
“Once an hour, infected Macs check a control server to see if there are any new commands the malware should run or binaries to execute.”
I really need a new app. Any recommendations?
Recently I found out about Bitwarden and have been using it for a couple weeks. No regrets, it's great.
It uses E2EE to sync passwords between devices and the clients are all open source. It's also undergone multiple third party security audits.
Makes everything a billion times more convenient and I feel safe trusting it.
I’m not sure how relevant the difference is, considering most valuable data is owned by the user, but also because macOS has evolved the security model well beyond the Unix standard.
I trust Microsoft not to be literal malware, but I don't trust Microsoft to avoid doing stupid unnecessary crap to your Mac.
Moreover, it is also quite granular. You can allow access to "Downloads" but still have to grant permission for it to access "Documents" later.
Zoom used that hole to gain admin permissions and install itself before the user even completed the installation process if the user was admin (and we know most people use admin accounts as their main ones). I'm sure plenty of other malware has done this as well.
If this was simply delivered as a trojan with one of those fake "Flash Player needs updating" type popups, it could have very well abused that.
If it's installing without user interaction the attack vector is a far more advanced 0day exploit chain.
I'll be very interested to find out.
I am also curious about how they can get away with using AWS and Akamai as C&C. Surely now this malware has been found, those providers will just shut down the accounts being used? They'll also have some kind of trail towards whoever's behind it, it's not like AWS takes payment in crypto.
Can you link me to an article or a piece of documentation that would explain it to a non-mac-developer?
Or for a GUI option: Pacifist from https://www.charlessoft.com/
I also chown + chmod LaunchAgents and LaunchDaemons in both libraries so that only root may write to them.
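If you'd rather script that than click through Finder, it's roughly this, run as root (a sketch for the per-user library; /Library is analogous):

    import os
    from pathlib import Path

    for name in ("LaunchAgents", "LaunchDaemons"):
        d = Path.home() / "Library" / name
        d.mkdir(exist_ok=True)
        os.chown(d, 0, 0)   # root:wheel, so only root may modify the folder
        os.chmod(d, 0o755)  # world-readable/traversable, owner-writable only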
Kinda parallels some of the theories about COVID...