New malware found on 30k Macs has security pros stumped (arstechnica.com)
368 points by furcyd 16 days ago | 195 comments



Tangentially related to this post: I am surprised at the infosec community’s love of file checksums, like the MD5s here. There are endpoint tools that collect MD5s, network scanners that collect MD5s, and I have heard of actual infosec directors who, upon seeing a blog post like this, would carefully type the hashes into their endpoint management interface to make sure their org is not infected.

How on earth can this be effective? It is trivial to create customized binaries in the installer so that no two users have the same file checksum. With a bit of work, one can rig the download server so that each user gets a completely different installer checksum as well.

I remember reading about polymorphic viruses back in the 1990s - not only did those have a different checksum on each computer, they also had no long-ish substrings in common between two copies, making signature analysis ineffective. Did malware authors just forget about all of that and go back to the trivial “everyone gets the same image” strategy?


The infosec community at large is well aware of how unreliable MD5 checksums are for identifying malware. If anything, they are the absolute first line of defense: easy to implement quickly, with a decent enough chance of filtering out the low-hanging fruit. The biggest use of checksums between malware researchers is checking whether they have the same strain of malware as someone else.

Identification is mostly not based on checksums, but rather on things like YARA rules, where different identifying factors of a piece of malware are outlined and compared against binaries. This isn't foolproof either, but there is a rather large ecosystem of malware researchers out there constantly taking samples and releasing rules. I follow a lot of these folks on Twitter and the majority of what they post are their findings on the bajillionth strain of whatever malware is in vogue at the moment. This sort of thing is going to catch the majority of what comes at most people, and anything that slips past the first lines of detection usually gets picked up somewhere along the way and passed on to researchers who do an exceptional job of reversing and identifying new malware or strains of old ones. But of course the reliability of that whole ecosystem depends on sensible organizational security policy to start with.

In short, MD5 sums and signatures are there to protect against the low-hanging fruit, spray-and-pray type malware that's pretty common. If someone wants to target you with uniquely signatured malware, they can. Identifying it isn't going to be what stops it, but proper opsec can.


And that's what I don't understand! You say it "has a decent enough chance of filtering", and I believe you -- but this just seems so strange.

It seems to me that it would be trivial to create a webserver which says "serve the same binary, but put a random ASCII string in bytes 40-48". Or a malware installer which says "write the executable out to disk, but put a random value in bytes 80-88". Sure, it won't help against a good YARA rule, but it seems really easy to do, it will frustrate researchers, and it will even defeat some endpoint protection software, like [0] and [1].

[0] https://help.symantec.com/cs/ATP_3.2/ATP/v106632175_v1273003...

[1] https://docs.mcafee.com/bundle/network-security-platform-9.2...
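Concretely, something like this sketch would do it (mine and entirely hypothetical - the filename and offsets are made-up placeholders):

    import hashlib
    import os

    def mutate_payload(payload: bytes, offset: int = 40, length: int = 8) -> bytes:
        # Overwrite a range the program never reads (padding, an unused
        # string, ...) with random bytes; behavior is unchanged, but the
        # checksum of every generated copy is different.
        patched = bytearray(payload)
        patched[offset:offset + length] = os.urandom(length)
        return bytes(patched)

    original = open("installer.bin", "rb").read()  # hypothetical installer
    for _ in range(3):
        print(hashlib.md5(mutate_payload(original)).hexdigest())  # three different MD5s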


It's like scam/phishing e-mails with typos in them. Most wide-scale hacking is lowest-effort, looking for the lowest-hanging fruit. If you can hack enough servers for your purposes without worrying about checksum randomization, then worrying about it is just wasted effort. And you want targets with excessively shitty security postures, or else you might actually get tracked down and busted.


It's actually not as easy to do that as you'd think: criminals don't upload malware payloads to their own servers, they use hacked websites (old Wordpress installs, etc.) to spread the binaries.

Could they do this? Sure, but it would be a lot more complex than the current method of just finding any target where they can upload files, and the payloads change very often anyway (sometimes daily), so there is no real need for them to change dynamically.


Lucky for us, most criminals are lazy.


I don't think this comes from them being lazy. I think this comes from them not being aware of (1) the defense; and (2) the mitigation. It's an example of security through obscurity.


Or even if they know about the defense and the mitigation, it is additional work. In my work in the formal economy I rarely get to ship the technically best and most complete solution but instead a compromise 'MVP' that'll receive more work only if the problem proves to demand it. I expect the same holds true in the informal economy.


I think, like with many things, basic steps are not taken, through laziness, carelessness or ignorance.


> I follow a lot of these folks on Twitter and the majority of what they post are their findings on the bajillionth strain of whatever malware is in vogue at the moment.

Anyone in particular you recommend following?


krebsonsecurity, notdan, and donk_enby are all good cybersec follows. You can probably find others from the people they follow/RT.


@malwaremustd1e @malwrhunterteam @0xrb @capesandbox @malware_traffic


The "create customized binaries" issue can be bypassed by treating unknown hashes as suspect and blocking them.

Many systems already do something similar, e.g. I think I've seen Chrome put speed bumps and scary warnings on rare files.

This forces malware authors to choose between the "hey this is the first time we see this file on the planet, it's probably bad" warning and serving the same hash to many victims.

A more extreme approach is binary whitelisting - if the hash isn't explicitly approved, it doesn't run at all.

Hashes are also useful to uniquely identify what you're talking about. Even for a fully polymorphic sample, being able to tell other researchers/defenders "hey, I found ab512f... on my network, and it does X" is useful even if nobody else has the same hash, because then they can get a copy of that sample and analyze it themselves.
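As a rough sketch, the decision logic I have in mind looks something like this (the digest sets are hypothetical placeholders):

    import hashlib

    KNOWN_GOOD = {"<approved sha256 digests>"}  # hypothetical allowlist
    KNOWN_BAD = {"<published IoC digests>"}     # hypothetical blocklist

    def classify(path: str) -> str:
        digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
        if digest in KNOWN_BAD:
            return "block"  # matches a published indicator
        if digest in KNOWN_GOOD:
            return "allow"  # explicitly approved
        return "warn"       # first time this file is seen: treat as suspect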


Binaries... by binaries, do you mean PDFs and ZIP files? Because those are probably a more likely attack vector than exes.


In the end, most malware runs an executable. If it's a ZIP file, it probably contains a binary. If it's a PDF, yes, it could contain an exploit, but nowadays, it's much more likely that it contains a link for the user to click and download malware from. And if it does contain an exploit, there's a decent chance that the exploit will be used to download and install an executable.

It feels like most malware is going backwards from the elaborate schemes of the past to "hey user, please run my malware".

Of course, state sponsored malware _can_ be the exception, although it often uses the same patterns, because they apparently work well enough against most targets.


I don’t think you got the idea of what I am saying.

You can't limit people to opening only PDFs which you approved. Therefore, if a PDF reader has an exploit (be it the ability to write to its code locations via a PDF payload, or a scripting language embedded within PDF), then you are pwned.

EXE files are far from the only files that can embed code today. Heck, you can embed JS in an SVG image, and if you're unlucky enough to display that with some not-so-smart image previewer, then you're done.

Fingerprinting is useless for that kind of stuff because it takes away the only good use of computers: sharing of new, unique information.


As I said, exploits are out of scope for binary (i.e. executable) whitelisting and a malicious document could be opened even with binary whitelisting in place and would bypass it - but they're rare in practice.

It raises the bar for the attacker from "trick the user into running your malware" to "have a zero-day for your PDF reader + an escape for whatever sandbox the reader may be put into + fileless malware to actually do something useful after they gained initial execution on the target machine".

Again, not claiming it's unbreakable, but it stops the vast majority of attacks (allowing you to spend time defending against the rest instead of reimaging machines infected with commodity malware).


> How on earth can this be effective?

It's not.

> Did malware authors just forget about all of that and go back to the trivial “everyone gets the same image” strategy?

No. But if you make your thing as polymorphic as possible and deliver it in an automated way, I take 1000 samples and look at what doesn't change to make a signature.

Also, these things exist as web services these days, and criminals pay per use to generate fresh, undetected executables.

It just makes more sense to only release a new binary once the previous one has been detected, even if it's just a button click and could easily be automated.


> I take 1000 samples and look at what doesn't change to make a signature.

There are fully metamorphic viruses where even the code-mutating part is changed. Algorithmically it's equivalent, but there's no "static" portion of the virus.


> There are fully metamorphic viruses where even the code-mutating part is changed. Algorithmically it's equivalent, but there's no "static" portion of the virus.

I've never seen it play out that way. There's always something in practice. See, you just talked about randomizing the code - but a Windows PE isn't just code. It also has an icon. And yes, I've seen AVs detect specific icons.

In theory it's easy to not have anything static; in practice, that's the cat-and-mouse game you're playing.


Look into Mistfall. It would infect other executables by splitting them into basic blocks that end with some branching instruction, inserting metamorphic portions of itself in between the blocks, and fixing up all of the modified addresses. There literally isn't any "static" portion. Here's a mirror of the author's site; he is Russian, but there are a ton of English translations and articles that make for a great read.

http://z0mbie.daemonlab.org/


> But a Windows PE isn't just code. It also has an icon. And yes, I've seen AVs detect specific icons.

And that's how you get overzealous AVs marking executables because they happen to have PE headers...


Why not both? A trivial change (like an embedded timestamp in the file) for every user, and advanced polymorphism for the times when the malware is detected.

I bet most researchers have tools to bin files together automatically based on the same MD5. Why make their lives easier when this is so easy to defeat?


Because many AV systems assume that any unknown file (unknown to their cloud service) is probably doing this, and thus probably malware.


Note that on macOS many things must be signed, so modifying them produces a new cryptographic hash and breaks the signature. And if you’re trying to get through something like the notary service, you may not want to submit a bunch of sketchy requests for malware with tweaked hashes.


What they really need is a whitelist of checksums instead. Preferably not md5 though.


That is what Windows Defender Application Control does. It's probably the most cutting-edge solution on the market, actually.

It's a pure whitelisting solution where every single executable and kernel driver needs to have an approved digital signature or matching hash value or they won't be permitted to run.

It's virtualization-assisted and can't be disabled without rebooting, and if you use a digitally signed policy and someone tries to remove it, the machine will refuse to boot.

The coolest thing is, it even extends to the scripting languages built into Windows, so PowerShell execution is restricted according to policy, etc.

In practice, of course, it's a big pain in the ass to manage - a lot of software is not digitally signed, etc.

Every single artifact of every program needs to be digitally signed or have a matching hash in the policy or they won't be permitted to run.

Take a software installer, for example: the .msi itself is digitally signed, so it can easily be permitted to run... But then, during installation, it unpacks an .exe into %temp% that isn't digitally signed and attempts to run it - oops, won't run. I've come across even Microsoft software that does this.

https://docs.microsoft.com/en-us/windows/security/threat-pro...


How does it handle scripts? E.g. my virus is a Python script bundled with the known and clean Python executable.


It doesn't. Python would need to add support for it.

The built-in scripting languages do: PowerShell enters constrained language mode, and IIRC something also happens to VBScript/JScript - I haven't even looked at those.


A negative result means little, as you described.

A positive result means an immediate call to action, and a known type of malware. The chance that malware authors would produce a different piece of malware with the same MD5 is close to zero (though, if pulled off, it would be quite a practical joke).


That is simply a misuse of checksums. Checksums are for verifying that your binary matches the one you thought you were getting. The more people use them for identifying viruses, the more common polymorphic viruses will become, especially considering how trivial they are to implement.


I remember editing a virus to avoid detection; all I needed to do was swap 2 lines of assembly with each other (the order of execution for those 2 lines didn't matter).


The reality is, if you aren’t stupid, you can actually rob a bank without getting caught, it happens all the time.


> For those who want to check if their Mac has been infected, Red Canary provides indicators of compromise at the end of its report.

At the end of https://redcanary.com/blog/clipping-silver-sparrows-wings/


PROTIP: Select your LaunchAgents and LaunchDaemons folders in /Library and ~/Library, select Folder Actions Setup in the Services menu, and enable folder actions. You can use "add - new item alert.scpt" to be notified whenever a new item is added to those folders.

For even more protection, I flat out locked the folders ~/Library/LaunchAgents and ~/Library/LaunchDaemons in Finder, though this could interfere with some software you use.
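If you'd rather script it than use Folder Actions, a crude polling watcher along these lines (my own sketch, not the AppleScript mechanism) does roughly the same job:

    import time
    from pathlib import Path

    WATCHED = [
        Path.home() / "Library/LaunchAgents",
        Path("/Library/LaunchAgents"),
        Path("/Library/LaunchDaemons"),
    ]

    seen = {d: set(d.glob("*")) for d in WATCHED if d.exists()}
    while True:
        for d in seen:
            current = set(d.glob("*"))
            for item in current - seen[d]:
                print(f"New persistence item: {item}")  # surface an alert here
            seen[d] = current
        time.sleep(10)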


There’s also the fantastic app BlockBlock to help with this. Notifies you of changes to those folders and allows you to accept/deny whatever an app is trying to change.


Another option is Lingon X. It provides a GUI for launchd and includes a notification setting under Preferences > Notifications for items being added/removed.

https://www.peterborgapps.com/lingon/

https://www.peterborgapps.com/lingon/#preferencesnotificatio...


This is super useful, thanks for sharing. I wish there were something out there where I could always see/check which folders have changed and which programs are making external HTTP calls to dodgy IPs. Then block the good jesus out of them.


The same devs also make LuLu, an application firewall similar to Little Snitch:

https://github.com/objective-see/LuLu


That's great advice, never occurred to me despite knowing about folder actions. Thanks!


Thanks, this is a very useful tip - folder actions in general seem pretty useful.


Sounds like this was discovered by Malwarebytes, so if you have that installed, a scan should let you know too.

Also, Little Snitch will show you the connections it makes every hour. Before anyone says it - in macOS 11.2, Apple removed the exclusion list that allowed their own software to bypass firewalls.[1]

[1] https://blog.obdev.at/a-wall-without-a-hole/


Specifically, here's the list of indicators common to v1 & v2, quoted from the article:

> ~/Library/._insu (empty file used to signal the malware to delete itself)

> /tmp/agent.sh (shell script executed for installation callback)

> /tmp/version.json (file downloaded from S3 to determine execution flow)

> /tmp/version.plist (version.json converted into a property list)
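A trivial way to check those exact paths yourself (a sketch; the paths are taken from the quoted list):

    from pathlib import Path

    INDICATORS = [
        Path.home() / "Library/._insu",
        Path("/tmp/agent.sh"),
        Path("/tmp/version.json"),
        Path("/tmp/version.plist"),
    ]

    for p in INDICATORS:
        print(p, "-> FOUND" if p.exists() else "-> not present")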


[flagged]


I hate to be an apologist, but MD5 is the norm in IOCs for two reasons:

1) All of the tools expect and produce MD5 because it is the convention. Computing MD5 hashes of every file on disk or passing through a network is a relatively common forensic operation right now, SHA256 is not.

2) IOCs are not intended to be used in scenarios in which a malicious collision presents a problem (no one would want to... mask their malware to still look like malware?) so there is little downside to carrying on the convention.

While I would recommend against MD5 in most modern applications, if nothing else to avoid having discussions like this all the time, before upsetting an entire ecosystem of tools it is important to consider whether or not the known weaknesses of MD5 actually pose a problem. In this case, a matching hash is the bad state, and so there is no real impact of a preimage attack.


I didn't downvote you, but this use case of MD5 is fine - you want to verify whether the binary matches the one they have. You could spend a few hundred bucks on AWS to have a binary which matches that as well, but what would be the point - have users delete it?

Now, if you used it to verify whether a binary was _secure_, that would be a problem. But in this case, the (still unlikely) possibility of a false positive is not really a threat.


> Now, if you used it to verify whether a binary was _secure_, that would be a problem

Even that's probably fine. Collision attacks require the attacker to control both inputs. In the case of code signing this would mean the publisher is in on it, in which case you're already screwed.


> Collision attacks require the attacker to control both inputs.

Technically, no. It gets a lot easier when you only need to find any pair of inputs which produce a collision, but with computing power increasing a lot and md5 being very broken, it is feasible that a determined attacker with a lot of compute power can create a collision with a fixed hash, or at least will be able to in the somewhat near future.

I'd agree that you're reasonably secure, but when you need security, you shouldn't bother with outdated tech when secure and future-proof alternatives are readily available.


>it is feasible that a determined attacker with a lot of compute power can create a collision with a fixed hash

That's a preimage attack, which is a much higher bar. Even for md5 there hasn't been much progress made: https://en.wikipedia.org/wiki/MD5#Preimage_vulnerability


The referenced paper is from 2009 and gives a complexity of 2^123 (roughly 10^37). Bitcoin miners currently put out ~150 EH/s, so about 1.5x10^20 hashes per second. Judged only by the rounds, double-SHA256 (Bitcoin) takes 128 steps and MD5 takes 64, so let's assume the same hardware could do roughly 3x10^20 MD5 evaluations per second. Even at that rate, it would take on the order of a billion years to find a preimage.
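For the record, the back-of-envelope arithmetic (the ~3x10^20/s rate is my assumption from above, not a measured figure):

    # Time for a brute-force MD5 preimage at 2^123 work, assuming
    # Bitcoin-network-scale hardware retargeted at MD5 (~3e20 eval/s).
    work = 2 ** 123
    rate = 3e20
    years = work / rate / (3600 * 24 * 365)
    print(f"{years:.1e} years")  # ~1.1e9 years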

If the NSA has found a way to get the complexity down substantially (they've had a decade) and the target is valuable enough (imagine, for example, having a "valid" TrueCrypt executable), this starts to leave the realm of the unthinkable.

Now, of course, the NSA will probably not out-compute the Bitcoin blockchain, and transferring that hash rate to MD5 is not that simple, but they do have a lot of compute and a lot of cryptographers. What I'm trying to say is that you shouldn't bet on the margin staying comfortable. Yes, you're probably safe, but when you need security, there's really no reason not to get off the thin ice when secure alternatives are readily available.


Why spend a few hundred bucks on amazon when you could just wait a few days for your laptop to do the same ^.^


While MD5 hashes are insecure for hashing passwords or other sensitive data, they're still fine for verifying the integrity of data if you are simply verifying a file has not been corrupted.

If MD5 were being used to verify a piece of software you actually want, it would not be secure, as MD5 is not collision resistant.

But since we can be quite sure no one has made a file that shares an MD5 hash with this new strain of malware, MD5 is sufficient as a checksum in this use case.

You're correct to point out that newer hashes are still preferable, though, if only to get out of the habit of using MD5. I assume you got downvoted because MD5's weaknesses aren't relevant in this specific instance. But still, they could just as easily have used SHA256.


Because why not, what's the threat here? The purpose of the hash is so an investigator can easily check whether a given package file is this malware. Note that (a) it is of course trivial for the malware author to change something in the package to change the hash, so a different hash is no guarantee the package is clean, and (b) the collision problems of MD5 aren't really a problem here - why would someone else have an incentive to make their file falsely look like this malware?

You were downvoted (or, rather, I downvoted you) for your knee jerk "MD5 BAD!" comment without thinking through what the problems would be, and worse, you took an aggressive tone from the beginning ("I can't wait to hear the apologists") that just makes you sound like a jerk.


> (b) the collision problems of MD5 aren't really a problem here, as why would someone [] have an incentive to make their file falsely look like this malware.

While your overall point is correct, there is a specific tactic where a collision attack would be useful: make two MD5-colliding versions of the virus that behave subtly differently, in an attempt to make researchers throw out useful hypotheses about how it works, because someone already determined that (the other version of) it didn't actually do that.


Could you please explain why this specific use of MD5 is inappropriate?

I don't really see how the weaknesses of MD5 are applicable here but I'd like to learn if I'm wrong.


You are actually right. Given how one can forge an md5 of any file, some malware authors could match a "signature" so to speak of a very popular useful file, leading to mass false detections, for example.


MD5 is not broken in a way where that would be possible. The best that could be done by such a prankster would be to create two pieces of malware with the same MD5. I guess that could be confusing or something...


I knew a website that used MD5 to generate unique colors for usernames. There's a finite number of colors in the RGB colorspace, and it doesn't really matter if you get a collision, because it's just username colors. So there's definitely still a realistic place for MD5 in this world.
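Something like this sketch (hypothetical, but it captures the scheme):

    import hashlib

    def username_color(name: str) -> str:
        # First 3 digest bytes -> a deterministic RGB hex color.
        # A collision just means two users share a color; nothing breaks.
        return "#" + hashlib.md5(name.encode()).digest()[:3].hex()

    print(username_color("alice"))  # same name always gives the same color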


This is not a cryptographic use of hashing, but just simplification/identification. There is nothing to gain from being able to impersonate some malware because you could always just include the malware itself to match any hash.


A pretty polished delivery mechanism, including a native M1 binary, with no payload. Sounds like R&D in preparation for a real deployment, or possibly a demo for a package that will be sold to third parties.


It might also be a targeted attack. I can easily imagine a TLA using a wide net to get the target infected, but only delivering the payload to the intended target (which can be detected by IP, for example).


If this is the work of a nation state attacker, as sophisticated attacks often are these days, this is a very likely possibility.

We saw just recently the likely Russian backed attack on a wide net of US companies and even government agencies.

I doubt Macs are used often in US gov agencies, but they are used in tech companies, for example. That latest attack hit a lot of seemingly random US companies, but if you want to destabilise and cause loss of trust, it is a very effective strategy.

I happen to know someone who works high up in one of the affected companies, where the malware had been sitting on a server for a few months passively gathering information before the attack. They struck in the middle of the night local time and erased all the backups before they began running ransomware across the entire corporate network.

Thankfully the attack was caught within 10 minutes and they were able to recover fairly quickly once the dust settled, but they've got especially good security. Such an attack could have done far more damage to a company.

And if, like me, your first thought was "you're seriously saying they have good security but no off-site backups?" Yeah, I know. I'd bet they have regular off-site backups now though...


Macs on the desktop are quite common in the US government.


TIL. Honestly wouldn't have guessed that.

Makes the theory this is a nation state attack far more plausible then.


M1, 153 countries, AWS+Akamai as control infra? Yeah that has to be a tech demo.


They should rent this out as an installer service to Apple developers who are sick of Gatekeeper and complex app review requirements.


Aren't the app review requirements only necessary for submitting apps to the App Store? I'm pretty certain you can still distribute software through your own channels without Apple reviewing it; you just need to register as a developer with Apple so the app is signed, and then Gatekeeper doesn't get in the way.

Also, I notice that Homebrew packages run just fine without needing to be signed by Apple. Not sure how, but that's a possibility if you really don't want to jump through Apple's hoops.


Signing doesn't keep Gatekeeper away, you need to also get your app notarized, which involves uploading the app to Apple's servers.


Ah fair enough, didn't know they were so strict with software distributed outside the App Store.

I say the best thing to do is distribute your software using Homebrew then. As well as being super convenient - it's effectively the same as apt or any other package manager common on other Unix systems - it bypasses Gatekeeper.

Got curious how and it's amazingly simple, it literally just provides an environment variable that deletes that "quarantine" xattr metadata.[1]

Tell your users Homebrew is the supported installation method and you can skip right over Gatekeeper.

[1] https://github.com/Homebrew/homebrew-cask/issues/85164
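You can check the quarantine flag yourself with the stock macOS xattr tool; here's a small wrapper sketch (the app path is hypothetical):

    import subprocess

    def is_quarantined(path: str) -> bool:
        # `xattr -p com.apple.quarantine PATH` prints the attribute if it
        # is set and exits non-zero when it is absent.
        result = subprocess.run(
            ["xattr", "-p", "com.apple.quarantine", path],
            capture_output=True,
        )
        return result.returncode == 0

    print(is_quarantined("/Applications/SomeApp.app"))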


Notarizing is not strict by any definition of the term, unless you consider "scans your software for malicious content, checks for code-signing issues" to be strict?

It's an automated system.

Also, if you tell me your app is only installable via Homebrew, I'm not installing it. Comparing Homebrew to apt is like comparing a playdough-and-crayon sandwich with an actual sandwich.

Sure, they both look kind of similar at a distance, and technically you can eat both of them, but one is really not well thought out, and if you say you don't like how it tastes, the child who made it will get upset with you.


What is your specific beef with Homebrew? You insult it, but you don't provide any reason it's so inferior to apt.

There are some ways in which Homebrew is actually more secure than apt. For example, in order to do anything with apt you must give it superuser rights. The same is not true of Homebrew, which installs binaries in userspace and explicitly tells you never to use sudo.

A Homebrew installer is a simple Ruby script you can easily audit for yourself.

The packages have SHA256 checksums to ensure code integrity.

You can point it at a specific repo you trust and tell it to get a package from there.

All downloads are done through a TLS connection, which is not the case for apt.

And of course the whole thing is open source.

I fail to see where the hate is coming from.

> Notarizing is not strict by any definition of the term, unless you consider "scans your software for malicious content, checks for code-signing issues" to be strict?

I mean, having to register as a developer, get a certificate to sign your apps, and still send your software off to Apple each time you update it, before you can distribute it on your own website, is pretty "strict" compared to every other OS.

It doesn't seem to do much to prevent malware in the wild either.


Last I checked, Homebrew doesn't ask for root every time because it changes the permissions on /opt/homebrew (or /usr/local if you are on Intel) to allow you to install software as non-root. This is still extremely insecure, as you can now install/remove/upgrade software on the system without root's permission, which is annoying if more than one user uses the device. Not to mention that other applications you run can now also write to these directories and blow stuff up without your permission, whereas if the permissions were left at the defaults you'd at least get a password prompt.

I still use brew (because it has more apps than MacPorts), but why in the world they made this decision rather than using, say, ~/Applications (the macOS recommended location for software that only one user needs) or ~/homebrew is beyond me (granted, apt doesn't do this either, but I'm 99% sure you can do it with yum, and it is how Scoop works on Windows).


> This is still extremely insecure, as you can now install/remove/upgrade software on the system without root's permission, which is annoying if more than one user uses the device.

Can you name a single attack that requiring root for Homebrew can protect against?

> but why in the world they made this decision rather than using, say ~/Applications (the macOS recommended practice for software that only one user needs) or ~/homebrew is beyond me

Because it's not possible to distribute binary packages without using a predetermined prefix path. You can easily find devs bullied into explaining the same thing if you look into forums or issue trackers of any binary system package manager.

> granted, apt doesn't do this either, but I'm 99% sure that you can do it with yum

No, you can't. Yum, like any other binary package manager, doesn't let users choose their own installation path. There are exceptions, though: RPM packages can be marked as relocatable by the packagers, but that's a rare case.

> and it is how scoop works on windows

Windows is an outlier here, because Windows programs are mostly relocatable by necessity. Scoop packages can't rely on shared paths unlike Unix packages. This is why you end up with so many copies of bash.exe on Windows.


I don't have a stake in this fight, but some of those don't really seem like advantages over apt -

> The packages are SHA256 signed to ensure code integrity.

And apt uses GPG signatures.

> You can point it at a specific repo you trust and tell it to get a package from there.

Exactly like apt?

> All downloads are done through a TLS connection, which is not the case for apt.

Since apt enforces GPG signatures by default, this could be a privacy issue but shouldn't be a security issue.

Unless you meant only for the sudo/non-sudo to be your point on being better than apt and the rest was just defending homebrew?


Adding TLS into the picture introduces many extra failure modes. Examples: clock out of sync, wrong version of SSL, certificate signing problem. All of these things would cause your install to become non-upgradeable by a non-expert.


Homebrew is not a sentient being, so I don't think it's really possible to insult it.

Installing global software without superuser rights is a security failure, not a feature. You can argue about this, but you're wrong. Decades of good Unix security practice say Homebrew is doing it wrong.

Also, your understanding of this problem is wrong: Homebrew does not install tools in "user space". I know the default path used to be /usr/local, but that is not "user space" - it's still global to the machine.

Apt is perfectly capable of downloading packages from an HTTPS server, or via any of the other protocols supported by apt transports - because it's actually a problem that had some thought put into it, rather than just "hey, let's clone this git repo that keeps growing over time to everyone's machine".

My real issue with Homebrew is about how half-assed the approach is, and how the core developers essentially react like children when questioned/challenged about their solutions.

Initially Homebrew was source-only: there was no binary distribution; everything was built locally, all the time. Because security is just a pesky annoyance, it does all of this as the user - rather than the more sensible approach of building as a regular user and then installing as root. But like I say, pesky security.

Several years ago Homebrew added the concept of binary distribution. The problem is, they either drank a bottle of tequila each before implementing it, or they have literally never used another package manager before.

Without binary distribution in the picture, the logic for handling dependencies mostly worked OK. If you had a Homebrew package that depends on what apt would call a "virtual package" - i.e. something that is provided by multiple other packages - and you build from source, it will check whether one of those dependencies is installed, and if not, build and install one - probably just the first one, I'd imagine.

When Homebrew added the ability to install prebuilt binary packages... they never changed the dependency management (or if they did, they didn't change it to support the virtual-package pattern, which is not exactly rare in real package management systems).

So if you have package A, which depends on Foo, and Foo is provided by both B and C, when you do a source install, it'll check if anything providing Foo is installed already. No? Ok, build and install something that provides Foo, so we'll use B. Now carry on and build and install A with a dependency on B.

In the same scenario, when you want to do a binary install... Homebrew has already done all that "OK, what provides Foo... OK, build and install B, now proceed and build A with a dependency on B". So you do a binary install of A, having previously installed C, and all of a sudden it tells you it has to uninstall C - because the binary package doesn't have a dependency on Foo. It has a dependency on B.

The suggested solution was to install from source... I don't know what they say now, because I know when a tool is not worth wasting my time on, and I promptly stopped using brew when this came up.

This is what the Homebrew website says, right now, about building from source:

> Building from source takes a long time, is prone to fail, and is not supported.

The dependency scenario gets even more fucked if you want or need to provide a dependency from a different repository (called a "tap", because apparently nobody told these people that alcoholism is a thing, and over-worked analogies make you sound like an idiot)... You can't. You just can't. There is deliberately no way to satisfy dependencies from third-party repositories, overriding the "core" repo.

At some point the project decided that they need to know who installs what, and when, and what colour underwear they had on at the time...

For reference, Debian also has a package-tracking concept, the `popularity-contest` package. What Debian does is ask the user whether they'd like to provide package statistics, then go to quite some length to give the user multiple options to ensure that the data sent is anonymous, storing that data on project-controlled servers...

What did Homebrew do? Oh. Right. They send data to Google Analytics, and it's enabled by default (opt-out).

Homebrew is a tire fire inside a dumpster fire, and any time the fire department turns up and says "hey it doesn't have to be this way" the project says "no no, we like it this way".

> I mean, having to register as a developer, get a certificate to sign your apps, and still have to send off your software to Apple each time you update it before you can distribute it on your own website is pretty "strict" compared to every other OS.

It's suddenly very clear to me why you think Homebrew is high-quality software, if you think signing your apps is some onerous task.

> It doesn't seem to do much to prevent malware in the wild either.

Can you point to some evidence of specifically malicious software that has passed the Notarisation process?


> Notarizing is not strict by any definition of the term, unless you consider "scans your software for malicious content, checks for code-signing issues" to be strict?

I'd consider "you can't ship software for people to run on their own machines without first uploading it to Apple to get their seal of approval" to be quite strict, regardless of what Apple actually does / looks at when you upload it to them. I don't care how low their bar is, I don't care that it's automated, I frankly wouldn't care if it was a complete automatic rubber-stamp with no checking at all - Apple forcing every developer to go through them is draconian.


It does seem inconvenient, but it's also intended to help keep the platform - and therefore users - secure. I'm not sure the word 'draconian' fits here, especially considering its original meaning and historical uses.


It makes the device non-serviceable without a central authority. You could not do anything with it offline.

That means it is no longer a general-purpose computer, but an extension of Apple's cloud.


This isn't true. Software can be installed and used without going through this process, if the user explicitly allows it.

Just as if I download some software that hasn’t come from a ‘store’ on Linux I check it out before using it and only set execute permission if I’m happy, I do the same on MacOS.


Are you sure about that?

I was under the impression that any program with a hash that had not been seen yet must be first approved remotely by a central server before it is allowed to run:

https://sneak.berlin/20201112/your-computer-isnt-yours/


Yes. I flip the switch to allow such software almost daily. Probably lots of others on this site do the same. If you have the chance to use MacOS then you can try this for yourself.


I was a happy user beginning with Rhapsody, so believe me, I know how nice it can be.

I quit around 10.11, after watching things get more coddling and abusive for a couple of releases.

I still use and love my MBA, but I probably won't be returning to that world anytime soon for my primary desktop.


I see people mentioning ~/Library/LaunchAgents as a place to look for suspicious apps. I should mention that there are at least 4 places where a launch agent could start on macOS:

~/Library/LaunchDaemons
~/Library/LaunchAgents
/Library/LaunchDaemons
/Library/LaunchAgents

Not to mention Login Items under System Settings.

Finally, I hope it's obvious that if you're infected, for all you know all your legitimate-looking launch agents could themselves be compromised and secretly run the malware in a background process upon execution.
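To eyeball all of those locations at once, a simple enumeration sketch:

    from pathlib import Path

    LOCATIONS = [
        Path.home() / "Library/LaunchDaemons",
        Path.home() / "Library/LaunchAgents",
        Path("/Library/LaunchDaemons"),
        Path("/Library/LaunchAgents"),
    ]

    for d in LOCATIONS:
        if d.exists():
            print(f"== {d}")
            for plist in sorted(d.glob("*.plist")):
                print("  ", plist.name)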


This app will alert you when launch agents and other persistence items are added: https://objective-see.com/products/blockblock.html


> ~/Library/LaunchDaemons

Launch daemons are installed for the system only. Normal users can only run agents.


Actual source, not this watered-down Ars rewrite: https://redcanary.com/blog/clipping-silver-sparrows-wings/


So, without installing yet another app on my Mac - which could itself be riddled with malware disguised as an antivirus program, and is typically notorious for thrashing the machine - what's the best way to find out if my machine is infected?


Look at the files in the "Detection opportunities" section at the bottom of the article: https://redcanary.com/blog/clipping-silver-sparrows-wings/


> Also curious, the malware comes with a mechanism to completely remove itself, a capability that’s typically reserved for high-stealth operations.

What? Even random hackforums bots have this feature.


Any competent malware author can add that with some level of sophistication; it's a no-brainer. (Agreeing.)


> ~/Library/._insu (empty file used to signal the malware to delete itself)

The article apparently doesn't explain how to protect against the malware?

It cannot hurt to manually create ~/Library/._insu, right? (Not that it seems to offer great protection, but I take it it cannot hurt?)

Anyone got any idea as to how to harden OS X a bit against this malware?
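For what it's worth, creating the marker is a one-liner (a sketch; as the reply below notes, by the time the malware would check it, it's probably moot):

    from pathlib import Path

    # The reported self-delete signal: an empty file at ~/Library/._insu
    marker = Path.home() / "Library" / "._insu"
    marker.touch(exist_ok=True)
    print("created:", marker)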


We don't actually know what attack vector it's using so it's pretty much impossible to say at this point. It could be a simple trojan or it could be an advanced 0day chain. We have no clue.

There'd be no point manually creating the file, by the time the malware sees it, you are already infected.

My advice: keep your OS and browser and everything else updated, use uBlock Origin in the browser, and use a network-wide ad blocker (Pi Hole, AdGuard Home - personally I prefer the latter) with a few malware blocklists and keep them updated.

Malwarebytes were the ones who discovered how big this thing is so you can install that on your Mac and run scans.

You may also want to invest in Little Snitch, which won't necessarily protect you against an infection, but will alert you to the calls the malware keeps making to its C&C servers. It's also entirely possible the self-destruct mechanism they found is triggered by such software being installed on the machine; past Mac malware has often removed itself upon detecting Little Snitch.

And, obviously, don't install random software from shady sources, but I assume anyone on HN knows this already.


One more: browse the web without JS, except on trusted sites, and tell everyone else to sod off.

Bonus: Avoid a lot of low-quality content.


See the PROTIP by lapcatsoftware (several threads above).


Apple claims to be "secure", but is that really true? Is their fundamental design really different from anyone else's?

No word in this article about the infection vector.


As long as you can execute anything you want on your Mac, you can execute malicious code. Without the infection vector (which is probably not known yet), it is impossible to say whether it is user error or a security vulnerability.


Social engineering is always the most effective infection vector, and the more secure you think your device is, the more vulnerable you are. This is particularly the case on the higher end of the spectrum, where I've seen iPhone users click on obvious phishing scams with the justification of "oh, I'm on iOS, they can't get my data". I've seen a lot of the same mentality exist in the desktop Mac space, where most users perform two types of installs: relatively safe ones through the App Store, or venturing out into the wild west of the internet and bringing home a random binary that hopefully does what it says.

Honestly, I'm tired of HN users downvoting anything critical of Apple. It only further confirms how much of an echo chamber this place can be.


This attitude from users is a very real problem, but I'd argue it's platform agnostic.

For instance there were headlines around the internet not too long ago stating Android is more secure than iOS based on claims made by Zerodium.

Any Android user who read that may well have the same attitude you've described among iOS users. And it's potentially far more dangerous on Android, because Android allows you to sideload apps.

You can even extend it to Windows. Your average user will buy a laptop preinstalled with McAfee [1] and think "no need to worry about viruses now because I've got an antivirus."

Don't get me wrong I agree it's perfectly reasonable to be critical of Apple when it's warranted, but we don't yet know if it is in this case. It's entirely possible (and fairly likely) this malware was delivered as a trojan using pop ups the user had to interact with. If that's the case you can't blame Apple for user error, especially when trojans exist for every single OS that allow the user to install software from the internet.

> most users perform two types of installs: relatively safe ones through the App Store, or venturing out into the wild west of the internet and bringing home a random binary that hopefully does what it says.

This seems to imply any software installed from the App Store is safe while anything from outside the App Store is dangerous.

Isn't the implication that App Store downloads must automatically be safe falling into the same trap you're criticising here?

Quotes from the article:

> Developer ID Saotia Seay (5834W6MYX3) – v1 bystander binary signature revoked by Apple

> Developer ID Julie Willey (MSZ3ZH74RK) – v2 bystander binary signature revoked by Apple

So both of these malware packages were signed by Apple.

Seems like relying on Apple's review processes to determine how safe a particular binary is only provides the same false sense of security you're describing.

[1] https://www.youtube.com/watch?v=bKgf5PaBzyg (sorry, couldn't resist)


I'm curious whether more will come out of this, but it sounds like the attackers probably already compromised Julie or her employer (she appears to have worked for Tile, Oculus, and others) and just signed the app with her ID. I bet they have a ton of these credentials in their pockets from previous infections.

Since this didn’t go through the App Store it probably wasn’t reviewed but the developer’s certificate would be checked when it’s run - hence the revocation now.


Edit: I'm also pretty sure this means they breached her iCloud account entirely - as I am pretty sure you need to sign in to your account to sign applications. That's pretty scary, and I hope if it's possible that is the case, someone is looking into it!


> Honestly, I'm tired of HN users downvoting anything critical of Apple. It only further confirms how much of an echo chamber this place can be.

No kidding - I saw that first-hand while naively asking a harmless question about CUDA support on M1. Got downvoted to oblivion and added the comment later: https://news.ycombinator.com/item?id=26149344


[flagged]


> You used to be able to get the password of encrypted home directories from the page file with a simple grep command

You already have to be root to do this.

This tells you nothing about Mac security.


And home directory encryption only matters in the first place against an attacker who already has root or physical access. Leaving the password on disk right next to the encrypted content is on par with leaving the keys in a fancy locked safe: it completely nullifies the whole point of the added security.

I'm an amateur when it comes to application security, yet even I know that you should mlock() the memory holding your secret data and never pass secrets through command line arguments. Both are trivial mistakes that Apple has made with FileVault. They even shipped code that would set the password hint to the actual password. How are their security standards so lax that they missed all of that in their encryption software alone?


Isn't the page file on disk? Can't you just force a shutdown and then boot an alternative OS that ignores the file permissions and read it?


Who knows - the problem was fixed long ago.


Then why did you write in italics to emphasize that you already have to be root, if you were so unsure?


On many systems you were able to get root access by simply leaving the password empty.

While relevant, getting root is still different from getting the encryption keys.


What systems, and when?



Doesn’t seem like it was ‘many systems’.

It’s a very severe bug, for sure, but it wasn’t long-standing and looks like it only affected a small number of versions before it was fixed.


I'm a different security guy from the one above. I have a root exploit in macOS that Apple has refused to discuss with me for half a year. I told them it affected Big Sur (ever since the betas) and they were unfazed. I have reached out to everyone I know at Apple to schedule a demo or even a phone call, and my requests have been met with silence from their product security team. How is that for some perspective on Apple's security?


Apple routinely interacts with security researchers, and pays bug bounties.

It sounds like you haven’t told them any technical details about the exploit, at all. If this is the case, then they have no way to tell you apart from a timewaster or scammer.

If all you have done is tell them you have an exploit and tried to schedule a meeting, it’s unlikely they’ll take you seriously.


In fact, I have given them technical details about the capabilities and the affected macOS versions. I refuse to submit all of my research without them telling me roughly what the bounty ranges are for vulnerabilities of the same class/equivalent impact.

I've done many bounty programs in the past. Companies will always choose to pay nothing or close to nothing when it is favorable to them. Apple refuses to share what the bounty ranges are given technical information about it, and asked me to submit the entirety of my research without having any idea of what the compensation may be. So they made it impossible for me and perhaps many others to help improve the security of their OS ethically.

For every 1 company that pays you in bug bounties, 10 don't. If you're a security researcher, you can't afford the possibility of getting nothing for months of research.


https://developer.apple.com/security-bounty/payouts/

It's somewhere between $5,000 and these numbers (as $5,000 is the minimum).


$5,000 is the minimum for all categories.

I have been unsuccessful at confirming with Apple that a local privilege escalation (LPE) to root falls into any one of those categories, even though it's a widely understood type of vulnerability. So I've been trying to get their response on a category, or at least some frame of reference, so I can have a reasonable expectation of what the ranges are.

That is what Apple will not provide.


> For every 1 company that pays you in bug bounties, 10 don't.

Do you have reason to believe that Apple doesn’t pay, or are you basing this on other companies behavior?


A bit of both.

I've read some articles about Apple's bounty program from people who were less than thrilled with it, and Apple went back and revised the bounty payouts when it ended up being bad PR/in the news. In the bug bounty programs I've done, I found nothing but P1 vulnerabilities (LPE, RCE, SQLi, LFI). The payouts have always been far below minimum wage, which gave me perspective on how much I could have made working in fast food instead.

I have no reason to believe Apple will be more fair to researchers based on their track record, and their unwillingness to talk about it doesn't inspire any confidence.


Seems like an interesting dilemma for both you and Apple.

I’m sure they get a bunch of bullshit and scams from people trying to sell them exploits, so the distrust is mutual.

This does represent a trust issue in the industry, but I don’t see how it demonstrates that they ‘Don’t care about security’.


Publish your research


Can't do that if there's no way to keep my lights on and food on my table.


One of the oddest things about Apple's reaction is they come to Hacker News and try to bury people talking about real problems that real users see.


“Apple's reaction is they come to Hacker News and try to bury people talking about real problems that real users see.”

That seems like bullshit to me.

Also claiming people are shills is against HN guidelines.


You're thinking of the shadow file, which I don't believe OP is referring to.


Right, no need to encrypt anything, just don't set any read permissions. Problem solved!


Nobody is saying this other than you.


You insinuated it, though, by claiming that it wasn't a security problem as it required root. If you think a failure to encrypt (leaking the passphrase) is not a problem because the exploit requires root, then you understand absolutely nothing about security.

By the way, the reason I mentioned this particular example is that it almost certainly proves malicious intent. It is not credible that a team of security engineers at Apple developed FileVault without thinking at all about locking the memory holding the passphrase, and then continued not to think about it for years. They basically only fixed it (and at first even in the wrong way) after it became folklore on every second Mac fan site. (Lost your FileVault password? Don't despair: just copy & paste this into the terminal.)

There is 0 reason to trust Apple on security.


> You insinuated it, though by claiming that it wasn't a security problem as it required root

I did not make any such claim. You are seeing things that aren't there.

Encryption at rest is very important.

However, it’s worth putting different risks into perspective.

What you can do if you have root is simply a different category of risk from what you can do without. Pointing that out insinuates nothing.

> By the way, the reason why I mentioned this particular example is that it almost certainly proves malicious intent.

Obviously not, because otherwise they wouldn’t have fixed it.

That you can find a security problem from the past that has now been fixed, is the precise opposite of evidence that a company doesn’t care about security.


Okay, okay. You insinuated nothing. I stand corrected. Just a little side note: you didn't need root for the exploit; it was also trivial to boot from a CD or another OS and get the passphrase from the page file. Basically, it meant that FileVault was useless for many years.

Anyway, you're kind of missing my point. There are plenty of other examples of Apple not addressing security issues in time. Be that as it may, if you feel secure about your Mac, who am I to argue with that...


OK - physical access without root definitely makes it worse, but it's still not open to remote exploitation.

Yes, some people do need to worry about someone entering their office to steal their home directories, but it's not a simple thing to exploit.

Apple is quite slow to address certain issues, but on the whole, end-user security is still excellent for most users.


Well, the whole point of encrypted home directories is to secure the data when the user is not logged in and not attending the machine...


Sure - but not only against physical access, also against remote.

And let’s remember - this is a fixed problem. Obviously they did care about it.


> Sure - but not only against physical access, also against remote.

Not really. An encrypted home directory is mounted while the user is logged in and surfing the web. If there is a remote exploit, the attacker can just read the unencrypted data.

And as I've said, they only started caring about it after many years, once the exploit was on every Mac fan page.


Shamelessly copied from sracer [1]:

    Here's the typical cycle for problems reported on Apple products:

    1. A few members post reports of the problem, report it to Apple
    2. No response from Apple
    3. Increased number of people report the issue
    4. No response from Apple
    5. Apple apologists dismiss the reports as very rare, the result of trolling, or exaggeration by drama queens
    6. Even more reports of the problem
    7. No response from Apple
    8. News of the problem hits blogs
    9. Apple apologists dismiss the blogs as simply engaging in clickbait
    10. No response from Apple
    11. Those affected by the issue threaten a class-action lawsuit
    12. Apple apologists decry the "sue happy" nature of American consumers
    13. Apple acknowledges the legitimacy of the problem
    14. Apple apologists are silent
    15. Apple release an update to correct the problem
    or
    15. They set up a "program" to address the problem.
    16. Apple gains some positive publicity
    17. Apple apologists applaud Apple for doing the "right thing". (for an issue that they said from day-1 was not actually an issue)
    18. First hand experience with the “program” reveals very strict guidelines and restrictions that greatly reduce the number of affected customers that can participate in the program.
[1] https://forums.macrumors.com/threads/apple-faces-another-cla...


“You used to be able...” and “... with no attempt to fix” are somewhat contradictory.

And, of course, the whole comment is inane. The macOS security model these days very effectively prevents apps from accessing data outside their assigned sandbox. Meanwhile, distributing outside the App Store still requires only a (strict) subset of the steps required for distribution in the App Store.


It took them about three major OS updates to fix this problem.


Hold on:

> Amazon Web Services and the Akamai content delivery network

Why isn't AWS investigating?


They do forward abuse notices to whoever rents the infrastructure. Perhaps some kind of investigation happens if these are not acted upon.


Lol, so Apple would notify Amazon of some kind of advanced malware, and Amazon’s first step would be to notify the malware authors?


Generally speaking, malware authors build their command and control networks out of compromised computers owned by third parties. After all, they're already in the business of compromising computers, and using their own computers would leave an unnecessary trail back to them.


LOL, do you think whoever is managing the malware has themselves paid AWS to host it?

This kind of affair usually gets escalated to CEO level. Bezos will pick up the phone if Cook calls. But the usual plebs' business goes through abuse notices, as I described.


I did indeed think so and did not consider that the AWS servers are controlled via hacks/malware themselves! It just sounded like the malware authors rented AWS with their own credit cards...


Perhaps they are bound by a National Security Letter.


News like this reminds me why it will take a lot for cryptocurrencies to see wide adoption.

For all we know, this software could just be quietly collecting wallet passwords waiting for an opportune moment to attack.

With the sophistication of red team hackers from Russia, China, NK, and Iran, why would we want to trust computers for such critical infrastructure as payment?


I'm no crypto apologist, but dollars are hardly less digital than bitcoins at this point


You make a good point, but I can still use dollars in offline mode - i.e. the paper in my pocket - even when I am nowhere near a computer or communication device.

That feature still makes them desirable...especially in places like frozen Texas this past week.


For whoever downvoted this comment: you should have been there in 29°F weather when I was buying 6 gallons of bottled water with cash.

There was no cell service and the place could only take cash. It was a 30 minute wait in the cold, but I used the water both for drinking, and so I could flush my toilets (city water was not working).

So if you disagree, fine - in fact, share your opinion - I'll probably learn something. But downvoting alone just shows ignorance.


What am I missing? That's only like -2°C, which is practically t-shirt weather, isn't it?


Presumably OP is from Texas, meaning the wind chill was likely around 15F, roads were icy as hell, and there was no (or extremely limited) power or water available.


I'm quite cold-tolerant, but I would hardly consider below freezing (0C) to be T-shirt weather. I would need 3+ layers to feel comfortable in that long-term.

It's only T-shirt weather if you're moving from one heated space to another.


I’m guessing that’s a Texan.


Indeed I am, and live in Houston. Howdy!


Most types of dollar transactions are reversible. Stealing bitcoin is like someone getting mugged and having their cash stolen (not their credit cards and such).


Considerations about fingerprinting in this thread made me think that if I had crypto wallets, I would change the relevant file names and extensions, and patch the binaries so their hashes are unrecognizable to a stealer implant.
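
Roughly what I mean, as a toy sketch (the file name is hypothetical; appending trailing bytes changes every checksum of the file, though whether it still parses depends on the format, and it would break a signed binary's code signature, so test on a copy first):

    import hashlib
    import secrets

    path = "wallet.dat"  # hypothetical wallet file

    def md5_of(p):
        with open(p, "rb") as f:
            return hashlib.md5(f.read()).hexdigest()

    before = md5_of(path)
    # Append a few random bytes: many formats tolerate trailing data,
    # but strict parsers and code signatures will not.
    with open(path, "ab") as f:
        f.write(secrets.token_bytes(8))
    print(before, "->", md5_of(path))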


How do they get or approximate an infection count?


At the end of the article:

> “To me, the most notable [thing] is that it was found on almost 30K macOS endpoints... and these are only endpoints the MalwareBytes can see, so the number is likely way higher,” Patrick Wardle [...] wrote in an Internet message.

So it seems Malwarebytes detected it on 30k customers' Macs.


Yeah, so I kinda wonder what percentage of Macs run Malwarebytes.


Looks like the researchers are running a sinkhole that the malware phones home to. From there it's a matter of counting unique IPs and dropping cookies.
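
A back-of-the-envelope sketch of that counting (the log format and file name are hypothetical, and unique IPs only approximate machine counts, since NAT hides machines behind one IP and DHCP makes one machine look like several):

    from collections import defaultdict

    # Hypothetical sinkhole access log, one request per line:
    # "<date> <ip> <path>"
    daily_ips = defaultdict(set)
    with open("sinkhole.log") as log:
        for line in log:
            date, ip, _path = line.split(maxsplit=2)
            daily_ips[date].add(ip)

    # Distinct source IPs per day, a rough infection estimate.
    for date in sorted(daily_ips):
        print(date, len(daily_ips[date]))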


It's probably phoning home, although the article isn't clear on that, so perhaps the people controlling it are only using it in attacks targeting specific people or organisations.


From the second paragraph of TFA:

“Once an hour, infected Macs check a control server to see if there are any new commands the malware should run or binaries to execute.”


Oops, missed that. Then I don't see what stumps the pros so much.


Does hn shadow ban? I saw 1 comment count but none in the thread.


As far as I know, it only hides dead (-4 votes) or flagged comments. You can enable them with the "showdead" toggle in your profile, as the sister comment correctly pointed out (I'd recommend doing so, to see through biases a bit).


Yes, use the "showdead" toggle on your profile page.


I have showdead enabled, there are no dead comments on this thread, and the current comment count matches the number of visible comments. I suspect the GP comment's observation was due to a timing issue.


Or someone may have vouched for the comment in the meantime


Hmm, I dunno what it was, to be honest. I checked multiple times and refreshed, or at least I thought I did. Shrug. The app I'm using has no setting for showdead, so I wouldn't be surprised if the refresh didn't work, as it fails from time to time.

I really need a new app. Any recommendations?


Have you tried a web browser? Safari is nice.


Why does it need an app?


Mainly because I have and use 5 browsers and can never remember which one I logged into. There is not a chance that I'll remember my password. I wish. There are just way too many. Also, I don't use a pw manager on my phone for security reasons.


I don't like password managers in general, mostly because the ones that do all the syncing magic you want are closed source.

Recently I found out about Bitwarden and have been using it for a couple weeks. No regrets, it's great.

It uses E2EE to sync passwords between devices and the clients are all open source. It's also undergone multiple third party security audits.

Makes everything a billion times more convenient and I feel safe trusting it.


Modern phones, when configured correctly, are more secure than their desktop counterparts.


That's not a very useful assumption to make when you see a mismatched counter. It's almost always stale cache, like after comment deletion.


yes


That's why I'm always hesitant to execute .pkg installers. Luckily, the .app in most .pkg files can simply be extracted on the command line using xar.
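
Something like this, as a sketch (it assumes a flat .pkg whose component Payloads are the usual gzip-compressed cpio archives, which is the common case but not guaranteed):

    import os
    import subprocess
    import sys
    import tempfile

    pkg = os.path.abspath(sys.argv[1])  # path to the downloaded .pkg
    workdir = tempfile.mkdtemp()

    # Unpack the outer xar archive; no installer scripts are executed.
    subprocess.run(["xar", "-xf", pkg], cwd=workdir, check=True)

    # Each component package has a Payload archive holding the .app,
    # typically gzip-compressed cpio in a flat pkg.
    for root, _dirs, files in os.walk(workdir):
        if "Payload" in files:
            subprocess.run("gunzip -c Payload | cpio -idm",
                           shell=True, cwd=root, check=True)

    print("extracted under", workdir)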


Maybe this is a stupid question, but what makes you trust the app when you mistrust the installer? Or mistrust the installer when you trust the app? Shouldn't both come from the same more or less trustworthy source?


I believe the .pkg installer usually asks for a password and runs privileged, while moving the app at most uses root privileges to move the bundle, but never executes any external code with those privileges?

I'm not sure how relevant the difference is, considering most valuable data is owned by the user, but also because macOS has evolved its security model well beyond the Unix standard.


The installer needs root (admin) permission, which may be abused.


Having access to $HOME is already enough to produce irrecoverable damage, especially if there is an Internet connection.


As far as "malware" is concerned, there's no reason to trust an app over a pkg. Either one is going to get you. But there are legit companies that definitely "overpackage" their software. For example, Microsoft distributes Edge for Mac with a pkg installer, even though Edge is based on Chromium, and Google just distributes Chrome as a drag and drop app in a dmg. But you can extract the Edge app out of the pkg and just install that, skipping whatever junk Microsoft decides to do in their installer scripts.

I trust Microsoft not to be literal malware, but I don't trust Microsoft to avoid doing stupid unnecessary crap to your Mac.


I would consider mandatory "telemetry" to be malware-ish...


To add to blub's response, macOS operates on a permissions system similar to iOS. A new application has to ask permission the first time it tries to access anything in your home folder.

Moreover, it is quite granular. You can allow access to "Downloads" but still have to grant permission for it to access "Documents" later.


Only since Catalina.


Modern macOS doesn't give access to specific home subfolders to apps.


Which not everyone is using; this has only been enabled since Catalina.


A really great app for this is Suspicious Package:

https://www.mothersruin.com/software/SuspiciousPackage/


There's also a very easy attack vector with pkg installs where an installer can "run software to ensure it's compatible with this machine" that's been there since forever. I'm not sure whether it was removed in Big Sur, but I hope so.

Zoom used that hole to gain admin permissions and install itself before the user even completed the installation process if the user was admin[1] (and we know most people use admin accounts as their main ones). I'm sure plenty of other malware has done this as well.

If this was simply delivered as a trojan with one of those fake "Flash Player needs updating" type popups, it could have very well abused that.

If it's installing without user interaction, the attack vector is a far more advanced 0-day exploit chain.

I'll be very interested to find out.

I am also curious about how they can get away with using AWS and Akamai as C&C. Surely, now that this malware has been found, those providers will just shut down the accounts being used? They'll also have some kind of trail to whoever's behind it; it's not like AWS takes payment in crypto.

[1] https://twitter.com/c1truz_/status/1244737672930824193
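
If you want to see what an installer would run before double-clicking it, something like this works as a sketch. It relies on macOS's pkgutil --expand, which unpacks a flat package (including the Distribution file and any pre/postinstall scripts) without executing anything; the paths here are hypothetical:

    import pathlib
    import subprocess
    import sys

    pkg = sys.argv[1]                   # path to the suspect .pkg
    out = pathlib.Path("expanded_pkg")  # destination; must not exist yet

    # Unpack the flat package without running any of its code.
    subprocess.run(["pkgutil", "--expand", pkg, str(out)], check=True)

    # Show what would execute: the Distribution file's checks plus
    # each component's preinstall/postinstall scripts.
    for f in sorted(out.rglob("*")):
        if f.name == "Distribution" or "Scripts" in f.parts:
            print("--", f)
            if f.is_file():
                print(f.read_text(errors="replace"))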


Speculation, but I'd not be at all surprised if the C&C servers are compromised boxes that have other uses.


I like .app a bit more than .pkg, but I have no idea whether a .pkg installer without admin rights can actually do anything more than run a bunch of shell scripts before and after the installation.

Can you link me to an article or a piece of documentation that would explain it to a non-mac-developer?


>Luckily, the .app in most .pkg files can simply be extracted on the command line using xar

Or for a GUI option: Pacifist from https://www.charlessoft.com/


A .app can install a persistent service or the like just as easily as a .pkg, unless the .pkg is from an untrusted source but the .app is signed. But then again, why even install from an untrusted source without a signature?


The installer asks for root permission and (I believe) may abuse it, being able to modify system files for example.

I also chown + chmod LaunchAgents and LaunchDaemons in both libraries so that only root may write to them.
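
For reference, a minimal sketch of that lockdown (run as root; the paths assume a standard macOS layout, and this mirrors my own habit rather than any Apple recommendation):

    import os

    # Directories launchd scans for persistence items.
    launchd_dirs = [
        "/Library/LaunchAgents",
        "/Library/LaunchDaemons",
        os.path.expanduser("~/Library/LaunchAgents"),
    ]

    for d in launchd_dirs:
        if os.path.isdir(d):
            os.chown(d, 0, 0)   # owner root, group wheel
            os.chmod(d, 0o755)  # rwxr-xr-x: only root can add plists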


yeah but it also has the biologically useful ability to efficiently synthesize ATP so we may incorporate it into our germline rather than uninstalling it


It's probably North Korea trying to extract some Bitcoin. All their computer science resources are now focused on this 24/7. So be careful.


It's probably the CIA/Israel trying to do another Stuxnet. All their computer science resources are now focused on this 24/7. So be careful.


The fact that it can remove itself sounds like it might be a PoC that made it out of some lab and into the wild.

Kinda parallels some of the theories about COVID...



