If the creators read this, I suggest some ways of building trust. There’s no “about us”, no GitHub link, etc. It’s a random webpage that wants my personal details and sends me an “exe”. The overlap between people who understand what this tool does and people who would run that “exe” is pretty small.
Author of cyber scarecrow here. Thank you for your feedback, and you are 100% right. We also don't have a code signing certificate yet either; they are expensive for Windows. SmartScreen also triggers when you install it. I'd be wary of installing it myself as well, especially considering it runs as admin in order to create the fake indicators.
I have just added a bit of info about us on the website. I'm not sure what else we can do, really. It's a trust thing, the same as with any software or AV vendor.
Not much software promises to fend off attackers, asks for an email address before download, and creates a bunch of processes using a closed-source DLL whose existence can easily be checked.
Then again, not much malware that targets consumers at random checks for security software. You are more likely to see malware stop working if you fake the amount of RAM and CPU and your network driver vendor than if you have CrowdStrike, etc. running.
Concerning code signing: Azure has a somewhat new offering that allows you to sign code for Windows (SmartScreen compatible) without having an EV cert. It is called "Trusted Signing" [1], non-marketing docs [2]. The major gotcha is that currently you need to have a company or similar entity 3 years or older to get public trust. I tried it with a company younger than 3 years and was denied. You might have a company that fits that criteria or you might get lucky.
The major upside is the pricing: currently "free" [3] during testing, later about 10 USD/month. As there doesn't seem to be a revocation mechanism based on some docs I read, signed binaries might be valid even after a canceled subscription.
[3] You need a CC and they will likely charge you at some point. Also, I had to use some kind of business Azure/MS 365 account, which costs about 5 USD/month. Not sure about the exact lingo; I'm not an Azure/MS expert. The docs in [2] were enough for me to get through the process.
Don't you know, Microsoft doesn't believe in discounts. The evil empire runs a taxing system envied by the IRS itself. Entire industries have gone up in arms complaining that M$'s cloud price structure doesn't allow for third-party margins, and still they hold strong to it.
$70 isn't correct though. The cost was originally described upthread as ($10 per month) + ($5 per month), not ($10 per year) + ($5 per month).
That said, EV certs jumped in price over the past couple years. The total cost ends up being higher than the list price -- vendors tack on a non-trivial extra fee for the USB hardware token and shipping. All-inclusive I paid like $450 a year ago, and that was after getting a small repeat-customer discount.
So yes, Azure's service is substantially cheaper than an EV cert. And it also has the flexibility of being a monthly plan, rather than an annual commitment.
One more thing you could do is put the real name of any human being with any track record of professionalism, anywhere on the website. Currently you're:
- commenting under a pseudonymous profile
- asking for emails by saying "please email me. contact at cyberscarecrow.com"
- describing yourself in your FAQ entry for "Who are you?" by writing "We are cyber security researchers, living in the UK. We built cyber scarecrow to run on our own computers and decided to share it for others to use it too."
I frequently use pseudonymous profiles for various things but they are NOT a good way to establish trust.
It's a neat concept, although I imagine this'll be a cat and mouse endeavor that escalates very quickly. So, a suggestion - apply to the Open Technology Fund's Rapid Response Fund. I'd probably request the following in your position:
* code signing certificate funding
* consulting/assessment to harden the application or concept itself as well as to make it more robust (they'll probably route through Cure53)
* consulting/engineering to solve for the "malware detects this executable and decides that the other indicators can be ignored" problem, or consulting more generally on how to do this in a way that's more resilient.
If you wanted to fund this in some way without necessarily doing the typical founder slog, it might make sense to set up a 501(c)(3) in the US and then get funded by, or license this to, security tooling manufacturers so that it can be embedded into security tools - or to research the model with funding from across the security industry, so that malware groups' allergic reaction to security tooling can be exploited more systemically.
I imagine the final state of this effort might be that security companies could be willing to license decoy versions of their toolkits to everyone that are bitwise identical to actual running versions but then activate production functionality with the right key.
> consulting/engineering to solve for the "malware detects this executable and decides that the other indicators can be ignored" problem, or consulting more generally on how to do this in a way that's more resilient.
This would be a boon for security folk who analyze/reverse malware: they can add/simulate this tool in their VMs to ensure the malware being analyzed doesn't deactivate itself!
> decoy versions of their toolkits to everyone that are bitwise identical to actual running versions but then activate production functionality with the right key
I kinda think this functionality could be subverted into a kill switch for legit-licensed installs simply by altering the key.
Obviously this should be an open source tool that people can build for themselves. If you want to sell premium services or upgrades for it later, you need to have an open/free tier as well.
Also are you aware of the (very awesome) EDR evasion toolkit called scarecrow? Naming stuff is hard, I get that, but this collision is a bit much IMO.
> We also dont have a code signing certificate yet either, they are expensive for windows.
When someone is offering you a certificate and the only thing you have to do in order to get it is pay them a significant amount of money, that's a major red flag that it's either a scam or you're being extorted. Or both. In any case you should not pay them and neither should anyone else.
Besides paying money you also go through a (pretty simplistic) audit. It’s about the only way we have to know who published some code, which is important. If you can come up with a better way you should implement it and we’ll all follow.
As a side note, I’ve been trying to figure out how to get an EV code signing cert that isn’t tied to me (want to make a tool Microsoft won’t like and don’t want retaliation to hurt my business) but I haven’t come up with a way to do it - which is a good thing I suppose.
If said Craigslist rando likes getting police visits and potentially being criminally liable for helping you commit a felony ...
All code signing promises to give you the name of a real person or company that signed the binary. From there it's the end user's responsibility to decide if they trust that entity.
In practice the threat of the justice system makes any signed executable unlikely to be malicious. But that doesn't mean you have to uncritically trust a binary signed by Joe Hobo.
The threat is that if you sign malware with your name you will be quickly connected with said malware. If you don't live in a country that turns a blind eye to cyber crime that is a quick ticket to jail.
Of course people stealing other people's signing keys is an issue. But EV code signing certificates are pretty well protected (requiring either a hardware dongle or 2FA). It's not impossible for a highly sophisticated attacker, but it's a pretty high bar.
There’s an audit to go through where you (sort of) prove who you are. The system isn’t great, but if you can come up with something better there’s a lot of space to make software more secure for people.
Where is that additional info? It just says you're a group of security researchers, but there are no names, no verifiable credentials, nothing. You haven't really added any info that would contribute to any real trust.
Exactly. This continues to tell us absolutely nothing.
"Who are you?
We are cyber security researchers, living in the UK. We built cyber scarecrow to run on our own computers and decided to share it for others to use it too."
Something that would have built trust with me that I didn't find on the site was any mention of success rate. Surely CyberScarecrow has been tested against known malware to see if the process successfully thwarts an attack.
I'd suggest putting down the actual authors. If you're UK-based, there should really be no issue in putting down each of the people involved and what their background in the industry is. Otherwise this just looks like a v1 to get people interested, while v2 could include malware. Tbh it'd be quite a clever ploy if it is malware. Trust isn't built blindly; most smaller software creators have their details known. I'd suggest that if you want it to pick up traction, you have a full "about us" page.
You're collecting personal info and claiming to be in the UK: identifying the data controller would be a start, both for building trust and complying with GDPR.
How are you planning on preventing bad actors from identifying Scarecrow itself? Are you going to randomize the names/processes, etc., like anti-malware software does to install in stealth mode?
It is a cat and mouse game, and a security-by-obscurity practice. Not saying it won't work, but if it is open-sourced, how long before the malware catches on?
I'd be willing to bet good money that 99% of malware authors won't adapt, since 99% (more like 99.999%) of the billions of Windows users worldwide will not have this installed.
For the cat to care about the mouse it needs to at least be a good appetizer.
If I were to run a Windows computer, I wouldn't care what 99.999% of other people didn't do to make their computers safe. If it were something that I could do, then that's good enough for me. However, the best thing one can do to protect themselves from Windows malware is to not use Windows. This is the path I've chosen for myself.
I've worked at companies with horrendous security, where someone with just a bit of SQL injection experience could easily have carried off the data. Yet, since this was a custom in-house application and the off-the-shelf scanners did not work on it, this never happened; the only times the servers were hacked were when the company decided to host an (obviously never updated) grandfathered Joomla instance for a customer.
But even more simply, just setting your SSH port to something >10000 is enough to get away with a very mediocre password. It's really not about being a hard target; not being the easiest one is likely quite sufficient :)
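For reference, this is a one-line change in the SSH daemon config (the path and restart command assume a typical systemd-based Linux distro):

```
# /etc/ssh/sshd_config — pick any high, non-default port
Port 22222

# then reload the daemon, e.g.:
#   sudo systemctl restart sshd
```

Just remember to open the new port in your firewall before restarting, or you'll lock yourself out.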
> But even more simply, just setting your SSH port to something >10000 is enough to get away with a very mediocre password.
Given how easy and free tools like Wireguard are to setup now (thanks Tailscale!), I really don't understand why folks feel the need to map SSH access to a publicly exposed port at all anymore for the most part, even for throw away side projects.
I mostly agree, but even this leaves you exposed to new bugs found in SSH in the future, etc., if it's on an unpatched/forgotten server. I still think it's best (and really, really easy now with tools like Tailscale) to simply never expose the software to the wider world in the first place and only access it over WireGuard.
Fundamentally, it makes no sense to expose low-level server access mechanisms to anyone other than yourself/your team - there is almost never a need for this to sit listening on a public port.
Look into the Windows NT source code that was leaked. The if-else/switch statements in there are just another level of string-matching hell. It seems like software development has just become "let's jerry-rig it to make it work and forget about it." Pretty sure management (without a tech clue) has something to do with behaviour like this.
> Pretty sure management (without tech clue) have something to do behaviours like this.
Always the same bullshit with you people here. It could never possibly be that someone just built a sub-optimal system -- it HAD to be management fucking with our good intentions!
Well yeah. Left to their own devices, people want to build good stuff. It's when some dumb turd with his metrics and clueless plan shows up that things get screwy.
Author of scarecrow here. Our thinking is that if malware starts to adapt and check if scarecrow is installed, we are doing something right. We can then look to update the app to make it more difficult to spot - but it's then a cat and mouse game.
You had an answer canned for one part of the query. Why are you trying to release security software completely anonymously? This is insane - you want an incredible amount of trust from users but can’t even identify a company.
Simply, if users are as intelligent as you think, they’re too intelligent to use your product.
If you think that is what will make it a cat and mouse game instead of understanding it has been a cat and mouse game since the beginning of time, then you're not compelling me into thinking you're very experienced in this space.
It's not a cat and mouse game; it's a diver and shark game. In SCUBA training we joked that you had the "buddy system", where you always dive in pairs, because that way if you encounter a shark you don't have to outswim the shark, you only have to outswim your buddy.
A low-effort activity that makes you not be the low-hanging fruit can often be worth it. For example, back in the '90s I moved my SSH port from 22 to ... not telling you! It's pretty easy to scan for SSH servers on alternate ports, but basically none of the worms do that.
Some malware will catch on, some will not. It's a cost vs profit problem. Statistically, this will always decrease the number of possible malware samples that can be installed on the machine, but by what margin? Impossible to say.
A lot of security stuff is a bit ironic like that. "Give this antivirus software super-root access to your machine".. it depends on that software being trustworthy.
That's a problem with a lot of software and developers these days. An "About Me" section with a real face and presence is important and I don't mean anime characters and aliases either. Tell me who you are, put yourself out there.
I don't understand why the software is built the way it's built. Why would you want to implement licensing in the future for a software product that only creates fake processes and registry keys from a list: https://pastebin.com/JVZy4U5i .
The limitation to 3 processes and the license dialog make me feel uncomfortable using the software. All the processes are 14.1MB in size (and basically the scarecrow_process.dll - https://www.virustotal.com/gui/file/83ea1c039f031aa2b05a082c...). I just don't understand why you'd create such a complex piece of software when a PowerShell script could do exactly the same thing using fewer resources.

The science behind it only kinda makes sense. There is some malware that uses techniques to check whether those processes are running, but by no means is this a good way to keep yourself protected. Most common malware like credential stealers (RedLine, Vidar, blahblah) doesn't care about that, and they are by far the most common type of malware deployed. Even ransomware like LockBit doesn't care, even if a debugger is attached.

I think this mostly creates a false sense of security, and if you plan to grow a business out of this, it would probably only take hours until there was an open source option available. Don't get me wrong - I like the idea of creating new ways of defending against malware; what I don't like is the way you try to "sell" it.
They know that if this idea catches on, a dozen completely free imitations will crop up, so ... the time to grab whatever cash can be squeezed out of this is now.
McAfee/Norton/etc. could license signed "scarecrow" versions of their products for use with something like this, so that it's impossible for the malware to distinguish a scarecrow version of McAfee from the real thing (and they would get a cut/kickback).
I would pay a small amount for a scarecrow version of AV software if a) it had zero footprint on my system resources, and b) it really did scare away malware that checks for such things.
Either way, though, it makes malware more onerous to develop since it has to bundle in public keys in order to verify running processes are correctly signed.
Are you telling me this thing spawned 50 new processes on your computer? Could you zip up all the executable files and whatever it installed and upload it somewhere so we can analyze the assembly?
Is there a way for me to curl their executable into my UNIX terminal so I can read the assembly? Or does Any Run keep the samples to themselves? I know a lot about portable executable but very little about these online services.
To your point, I made this a few years ago using PowerShell. I just created a stub .exe using csc on install and renamed it to match a similar list of binary names. Maybe I will dig it up...
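A rough cross-platform sketch of the same idea in Python (the decoy names are illustrative; on Windows you'd copy a tiny stub .exe rather than reusing the interpreter binary):

```python
import os
import shutil
import subprocess
import sys

# Process names some malware is known to check for (illustrative examples)
DECOY_NAMES = ["vmtoolsd.exe", "procmon64.exe"]

def spawn_decoys(workdir):
    """Copy a harmless binary under each decoy name and leave it idling.

    Here we reuse the Python interpreter as the inert binary; any stub
    executable that just sleeps would work the same way.
    """
    procs = []
    for name in DECOY_NAMES:
        path = os.path.join(workdir, name)
        shutil.copy(sys.executable, path)  # decoy is just a renamed copy
        procs.append(subprocess.Popen(
            [path, "-c", "import time; time.sleep(3600)"]))
    return procs
```

Anything that enumerates process names will now see `vmtoolsd.exe` and friends running, at essentially zero cost.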
But this literally comes off as probably being malware itself.
If you're going to ship something like this, it needs to be open source, preferably with a GitHub pipeline so I can see the full build process.
You also run into the elephant repellent problem. The best defense to malware will always be regular backups and a willingness to wipe your computer if things go wrong.
Better known as the Elephant Repellant Fallacy — a claim that a preventative is working when, in fact, the thing it prevents rarely or never happens anyway.
"Hey you better buy my elephant repellant so you don't get attacked!"
'Okay.'
...
"So were you attacked?"
'No, I live in San Francisco and there are no wild elephants.'
I would assume there's only a small intersection between people who would download and install a Windows program from an unknown web page and those who are worried about malware.
Author of cyber scarecrow here. You are right, it's a trust thing. Completely understand if people wouldn't want to install it, and that's fine. It's the same for any software really. We just haven't built up the confidence or trust that a big established company has.
At this point, the simplest explanation is that it actually is malware. A more credible explanation than security researchers making something that looks this much like malware, but actually isn't.
Even the WHOIS response gives "Privacy service provided by Withheld for Privacy ehf" under the contact field. The developers claim to be living in the UK, but don't provide any legal identity - and it's not hard; you don't even need to be a British resident to start a shell company in Britain.
Lol, this website is registered to someone in Iceland, despite the assurance that it is a "security researcher living in the UK". I'm sure the results from this experiment will make a cool blog post about pwning tech savvy folks.
Hmm my Namecheap domains keep the location details even with WHOIS privacy enabled. To be fair they are 7+ years old so maybe something has changed in that time?
I guess if this gets enough attention, malware will just add more sophisticated checks and not just look at the exe name.
But on that note, I wondered the same thing at my last workplace where we'd only run windows in virtual machines. Sometimes these were quite outdated regarding system and browser updates, and some non-tech staff used them to browse random websites. They were never hit by any crypto malware and whatnot, which surprised me a lot at first, but at some point I realized the first thing you do as even a halfway decent malware author is checking whether you run in a virtualized environment.
> I guess if this gets enough attention, malware will just add more sophisticated checks and not just look at the exe name.
But more sophisticated detection means bigger payload (making the malware easier to detect) and more complexity (making the malware harder to make / maintain), so mission accomplished.
The more scarecrow is installed, the easier it gets for real security researchers to hide from these checks and detect viruses. So actually the dynamic helps security research.
“Sophisticated” detection can be as simple as checking rss and pcpu, the bullshit decoy processes probably aren’t wasting a lot of CPU and RAM, otherwise might as well run the real things; if they are, well, just avoid, who cares. So no, it’s not going to meaningfully complicate anything.
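The heuristic being described is trivial to sketch. A hypothetical decoy-spotting check: anything claiming to be a security/analysis product but using almost no memory and no CPU over a sampling window is probably fake (names and thresholds here are made up for illustration):

```python
# Names a decoy tool might impersonate (illustrative, not a real IOC list)
SECURITY_NAMES = {"vmtoolsd.exe", "procmon64.exe", "wireshark.exe"}

def looks_like_decoy(proc, min_rss_mb=50, min_cpu_pct=0.1):
    """proc: dict with 'name', 'rss_mb', and 'cpu_pct' sampled over a window.

    A real monitoring tool does work; a stub that only exists to be seen
    in the process list sits at near-zero RSS and CPU.
    """
    if proc["name"].lower() not in SECURITY_NAMES:
        return False
    return proc["rss_mb"] < min_rss_mb and proc["cpu_pct"] < min_cpu_pct
```

A 14MB idle stub fails both thresholds immediately, which is the parent's point: the check costs the malware author a handful of lines.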
Wouldn't that be more fragile though? CPU usage is not constant in time, so if - again - you're not sophisticated enough, you get more false negatives / positives, depending on which side of the heuristic you err.
This is only useful for dragnet malware targeting the masses, where false positives/negatives have low impact to begin with. High value targets can run the real programs if this is proven to have any effect — the average corporate IT can approve some more bloat for security, no problem. Also, you take a sample.
That's where I wonder about a tool like this interfering with legitimate software.
For example, I believe the anti-cheat software used by games like Fortnite looks for similar things -- my understanding is that it, too, will refuse to start when it is executing in a VM[0]. As a teenager (90s), I remember several applications/games refusing to start when I'd attached a tracing process to them. They did this to stop exactly what I was doing: trying to figure out how to defeat the software licensing code. I haven't had a need to do that since the turn of the century but I'd put $10 on that still being a thing.
So you end up with a "false positive", and like anti-virus software, it results in "denial of service." But does anti-virus's solution of "white list it" apply here? At least with their specific implementation, it's "on or off", but I wonder if it's even possible to alter the application in a way that could "white list a process so it doesn't see the 'malware defeat tricks' this exposes." If not, you'd just have to "turn off protection" when you were using that program. That might not be practical depending on the program. It's also not likely the vendor of that program will care that "an application which pretends it's doing things we don't like" breaks their application unless it represents a lot of their install base.
[0] I looked into it a few years ago b/c I run Tumbleweed and it's a game the kids enjoy (I'm not a huge fan but my gaming days have been behind me for a while, now) ... I had hoped to be able to expose my GPU to the VM enough to be able to play it but didn't bother trying after reading others' experiences.
What do you mean by 'legitimate software' exactly? If you described what a modern anti-cheat solution does to someone without telling them what it is, they'd automatically call it malware. The similarity really is uncanny. It almost feels like the difference between them is more of a technicality.
You are right, some games, especially multiplayer ones, will refuse to work in a VM to prevent cheating, but this is, of course, a business decision on their side. You can always construct software such that it ceases to function when it detects something suspicious on the system: some copy-protection schemes looked for a change in the network card's hardware ID, since developers presumed it was highly unlikely someone would swap network interfaces - but that stopped being reliable when people started using on-board interfaces that change with every motherboard replacement.
There is also a difference between using commercial software such as VMware and open-source options like QEMU or VirtualBox, as open source is easier to tailor to a specific purpose - in this case, cheating.
In the end, this approach works well for slowing down malware, as there is little risk for normal software in allowing itself to run inside a VM, in contrast to malware, which has to be coded to be extra paranoid in order to avoid as many tar pits as possible.
Why does malware “stop” if it sees AV? It sounds as if it wanted to live, which is absurd. A shady concept overall, because if you occasionally run malware on your PC, it’s already over.
Downloading a random exe from a no-name site/author to scare malware sounds like another crazy security recipe from your layman tech friend who installs registry cleaners and toggles random settings for “speed up”.
Take malware that is part of a botnet. Its initial payload is not necessarily damaging to the host, but is awaiting instructions to e.g. DDoS some future victim.
The authors will want the malware to spread as far and wide as it can on e.g. a corporate network. So they need to make a risk assessment; if the malware stays on the current computer, is the risk of detection (over time, as the AV software gets updates) higher than the opportunity to use this host for nefarious purposes later?
The list[1] of processes simulated by cyber scarecrow are mostly related to being in a virtual machine though. Utilities like procmon/regmon might indicate the system is being used by a techie. I guess the malware author's assumption is that these machines will be better managed and monitored than the desktop/laptop systems used by office workers.
Many pieces of malware are encrypted and obfuscated to prevent analysis. Often, they'll detect virtual machines to make it harder for people to analyse the malware. Plenty of malware hides the juicy bits in a second or third stage download that won't trigger if the dropper is loaded inside of a VM (or with a debugger attached, etc.).
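The environment check being evaded is often as naive as scanning the running process list for well-known analysis tools. A minimal sketch of the malware-side logic (tool names are illustrative examples, not an exhaustive list):

```python
# Process names commonly associated with VMs and malware analysis
# (illustrative examples)
ANALYSIS_TOOLS = {
    "vmtoolsd.exe",     # VMware guest tools
    "vboxservice.exe",  # VirtualBox guest additions
    "procmon64.exe",    # Sysinternals Process Monitor
    "wireshark.exe",    # packet capture
    "x64dbg.exe",       # debugger
}

def environment_looks_monitored(process_names):
    """Return True if any known analysis tool appears in the process list.

    Real malware would gather process_names via the Win32 API; droppers
    that see a match simply skip downloading the later stages.
    """
    return any(name.lower() in ANALYSIS_TOOLS for name in process_names)
```

Scarecrow-style tools work by making this check return True on ordinary machines, so the dropper never fetches its second stage.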
Similarly, there have also been malware that will deactivate itself when it detects signs of the computer being Russian; Russia doesn't really care about Russian hackers attacking foreign countries (but they'll crack down on malware spreading within Russia, when detected) so for Russian malware authors (and malware authors pretending to be Russian) it's a good idea not to spread to Russian computers. This has the funny side effect of simply adding a Russian keyboard layout being enough to prevent infection from some specific strains of malware.
This is less common among the "download trustedsteam.exe to update your whatsapp today" malware and random attack scripts and more likely to happen in targeted attacks at specific targets.
This tactic probably won't do anything against the kind of malware that's in pirated games and drive-by downloads (which is probably what most infections are) as I don't think the VM evasion tactics are necessary for those. It may help protect against the kind of malware human rights activists and journalists can face, though. I don't know if I'd trust this particular piece of software to do it, but it'll work in theory. I'm sure malware authors will update their code to detect this software if this approach ever takes off.
> Why does malware “stop” if it sees AV? Sounds as if it wanted to live, which is absurd.
Malware authors add in this feature so that it’s harder for researchers to figure out how it works. They want to make reverse engineering their code more difficult.
If these were laypeople that would then give up, sure.
But I'm surprised that it's even worth malware authors' time to put in these checks. I can't imagine there's even a single case of where it stopped malware researchers in the end. What, so it takes the researchers a few hours or a couple of days longer? Why would malware authors even bother?
(What I can understand is malware that will spread through as many types of systems as possible, but only "activate" the bad behavior on a specific type of system. But that's totally different -- a whitelist related to its intended purpose, not a blacklist to avoid security researchers.)
It's not about the usual AV software, but about "fake" systems used to try to detect and analyse malware. AV vendors and malware researchers in general use such honeypots to find malware that hasn't been identified yet.
This software seems to fake some indicators that malware uses to detect whether it's running on a "real system" or a honeypot.
It's not really about "normal" antivirus programs, but tools used by security researchers. It's well-known that more sophisticated malware often try to avoid scrutiny by not running, or masking their intended purpose if the environment looks "suspicious".
A paranoid online game like e.g. Test Drive Unlimited, might not launch because the OS says it's Windows Server 2008 (ask me how I know). A script in a Word document might not deliver its payload if there are no "recently opened documents".
The idea with this thing is to make the environment look suspicious by making it look like an environment where the malware is being deliberately executed in order to study its behaviour.
Even back in my script kiddy days, 10 years ago, I remember RATs and cryptors would all have a kill switch option if it detected it was running on a VM.
This +100. I can't just let some random exe run on my machine with nothing but claims from the author.
In my head, I'm also wondering why a botnet wouldn't just want to take over such a machine because they know for sure that it's a scarecrow. But security by obscurity is no way to instill trust here
Claims by an unidentified author(s) replying to comments with a 4-hour old HN account.. How did this make it to the front page other than the catchy name?
I heard you could do something very similar by installing the Russian keyboard layout and having it available as an option. A lot of malware from Russia won't run on computers with a Russian keyboard layout, because the authors only get in trouble with the law if the malware impacts Russian users.
One of the references in "How does it work" [1] mentioned that some hackers will not mess with computers that have a Russian keyboard, so you can add one to reduce your chance of getting hacked.
Hilarity aside, it would only work if you don't actually use multiple keyboards -- otherwise an additional one would make switching between layouts very annoying [*].
It also mentions some other changes, like adding RU keywords to your registry. Again, these measures would have side effects, since lots of software actually uses these registry entries for legit reasons. So I don't know if this Cyber Scarecrow product would have this problem, since it modifies the registry, too.
*: A little rant: as someone who uses three virtual keyboards (English, Chinese, Japanese), it is already a pain in the ass to switch between them, since MS does not follow a "last used" switching order (like alt+tab). Instead, it just switches in one direction.
> A little rant: as someone who uses three virtual keyboards (English, Chinese, Japanese), it is already a pain in the ass to switch between them, since MS does not follow a "last used" switching order (like alt+tab). Instead, it just switches in one direction.
Actually, I much prefer this order. Depending on what keyboard I currently use, I know exactly how often to switch instead of having to remember what I used previously. In fact, I don't even like this order when Alt+Tab'ing, it makes switching between more than two windows pretty inconsistent (yes, I know Windows+Number works, too).
Having "last used" order makes quickly switch between two windows very easy, which is something I personally use more. It's easier than pressing alt+tab/shift+alt+tab alternately.
To switch to the third window, you can use alt+tab+tab.
Small correction: not "some hackers", but some malware families (the difference being that the check is automatic). And honestly, not "some" but "most of them" :).
Though I often see this implemented by calling GetKeyboardLayout, so it will only work if you actually have the Russian (or a neighbouring) layout active when malware detonation happens.
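Concretely, the low word of the HKL handle that `GetKeyboardLayout` returns is the input language identifier (0x0419 for ru-RU). The check is a one-line mask; a sketch with the Win32 call itself stubbed out:

```python
RUSSIAN_LANG_ID = 0x0419  # LANGID for ru-RU
ENGLISH_US_LANG_ID = 0x0409  # LANGID for en-US, for comparison

def is_russian_layout(hkl):
    """hkl: the HKL value GetKeyboardLayout(0) would return on Windows.

    The low 16 bits of an HKL are the input language identifier, so
    masking with 0xFFFF and comparing against 0x0419 detects ru-RU.
    """
    return (hkl & 0xFFFF) == RUSSIAN_LANG_ID
```

Since `GetKeyboardLayout` reports the layout active in the foreground thread, merely having Russian installed but not selected can slip past strains that use this exact call.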
1. The Shift+Alt chord is obnoxiously unreliable, sensitive to which key comes down first, or something.
2. Japanese always comes up in A mode even though you last had it in あ mode.
3. Bad performance: sllllow language switching at times: you hit some keyboard sequence for changing languages or modes within a language, and nothing happens. This interacts with (2): did we hit an unreliable chord? Or is it just slow to respond?
I have to use a 3rd-party Japanese IME precisely because of (2). No idea why they haven't added an option for it to default to あ mode.
Also, in ANY modern Chinese IME (Microsoft or 3rd party), switching between English/中文 mode is simply pressing Shift once. You still have to use alt+` for that in the JP IME, which I find unbearable.
> When hackers install malicious software on a compromised victim, they first check to make sure its safe for them to run. They don't want to get caught and avoid computers that have security analysis [...] tools on them.
Game anti-cheat code makes similar checks (arguably it is malware, but that's beside the point). So running this might put you at risk of getting banned from your favourite game.
Good analogy, except putting up the sign actually works because there isn't any other layer around it... whereas putting up IOCs onto your Microsoft Windows OS will trigger Windows Defender, any SIEM, and generally speaking most security-oriented software worth its salt.
Fun concept, but this is security by obscurity. Other heuristics:
- providing fake manifests for hardware drivers commonly associated with virtual machines
- active process-inspector handles
- the presence of any software signed by Hex-Rays (the ini file is usually enough)
Malware uses signals to determine if it is running in a VM. If we can degrade those signals, malware authors will have to play a cat-and-mouse game trying to avoid VMs.
The less clear it is whether a process is running in a VM, the easier time security researchers will have testing exploits found in the wild.
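A minimal sketch of the kind of VM signal being discussed: the process names below are commonly associated with guest tooling, but the list and the check itself are illustrative, not drawn from any particular malware family.

```python
# Illustrative VM/sandbox indicator check. The names are well-known
# guest-tooling processes; the function is a sketch of the signal, not
# a reproduction of real malware code.
VM_INDICATORS = {
    "vboxservice.exe", "vboxtray.exe",  # VirtualBox guest additions
    "vmtoolsd.exe",                     # VMware Tools
    "qemu-ga.exe",                      # QEMU guest agent
}

def seems_virtualized(running_processes):
    """Case-insensitive match of running process names against indicators."""
    names = {p.lower() for p in running_processes}
    return bool(names & VM_INDICATORS)

print(seems_virtualized(["explorer.exe", "chrome.exe"]))       # → False
print(seems_virtualized(["explorer.exe", "VBoxService.exe"]))  # → True
```

Degrading the signal cuts both ways: a scarecrow makes a real machine look virtualized, while a researcher's VM tries to make itself look real; both are fighting over the same set of indicators.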
I really don't get why this would be a 71 MB installer that takes up 113 MB when installed. If it's literally just fake processes running with the right names, why couldn't this be a 100 KB installer?
Hahaha, it's such a lovely idea! Turning the opponent's detection against them, I very much dig it!
Here's a caveat though: attackers will at some point notice scarecrows and simply work around them. Now suuure, if you have a better lock than your neighbours, that decreases your chances of getting broken into, but in the end this is a classic "security by obscurity" measure. So if your time and computer/data are valuable, I would rather invest in other security measures (firewall, awareness training, backups, etc.).
I guess the indicators used largely overlap with the ones used by anti-cheat software, so you probably want to think twice before using that on your gaming pc :)
Krebs said that some malware checks for a Cyrillic keyboard to try and geo-target outside of the country of operation. This seems to be the same type of thing.
I get the idea, but the "science" is based on reports; it doesn't look like this has been tested against actual malware. It would be interesting to know how well it works.
Also, make it OSS and ask for donations. I'm not sure what your future earning model is, but it seems easy to replicate, and as pointed out several times, right now it asks users to blindly trust you.
As much as I'd love to see something like this everywhere, the problem is it's useless for everyone who loves to play online games or watch DRM-encumbered content, so the majority of the population... because DRM, anticheat and malware all fear the same set of tools/indicators.
Solution: temporary "game mode" that disables most protections that can impact DRM, or a custom rule engine that disables protections if some application is detected to be running (e.g. fortnite.exe or something), but this second method should be done manually by the user.
Costs a lot of cycles to run those for real, and it’s not super common to get infected with anything, so you’re wasting cycles for a small chance at avoiding it. This could be better since, I assume, it doesn’t do a lot of stuff.
Author of scarecrow here. The idea is that Cyber Scarecrow is just super easy and lightweight for anyone to use. Honeypot tech tends to need some good tech understanding to use (e.g. the CLI), and can be a bit heavyweight to always run in the background of your computer.
"Fake Processes. Scarecrow will create a number of background processes that don't do anything, but look like security research tools.
Fake registry entries. Scarecrow creates registry entries to make it look like security tools are installed on your computer."
I'd be interested to see this tested, there's tons of good malware repos out there like vx-underground's collections that can be used to test it.
If you don't wanna share the source, that's somewhat logical. But perhaps run a test against gigabytes of malware samples and let us know which ones actually query these process names / registry values you create and disable themselves as a result?
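The "fake processes" idea quoted above could be sketched roughly like this. The decoy name is hypothetical, and a real Windows implementation would copy a stub binary rather than symlinking the interpreter as this sketch does:

```python
import os
import subprocess
import sys
import tempfile

# Sketch of the "fake process" idea: make the Python interpreter show up
# in process listings under a security-tool-like name, doing nothing but
# sleeping. The decoy name is illustrative; a real implementation on
# Windows would install a copied stub executable instead of a symlink.
DECOY_NAME = "procmon64.exe"

def spawn_decoy(workdir):
    decoy = os.path.join(workdir, DECOY_NAME)
    os.symlink(sys.executable, decoy)  # inert alias for the interpreter
    # The decoy just sleeps; its only job is to exist with this name.
    return subprocess.Popen([decoy, "-c", "import time; time.sleep(60)"])

with tempfile.TemporaryDirectory() as d:
    proc = spawn_decoy(d)
    print(proc.poll() is None)  # → True while the decoy is alive
    proc.terminate()
    proc.wait()
```

Any malware that merely string-matches process names would see "procmon64.exe" in the listing; of course, anything that inspects the binary or its signature would see through it immediately, which is the cat-and-mouse risk discussed elsewhere in the thread.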
This is a really cool concept! Even if it's difficult to trust it as-is (for reasons stated ad nauseam in other comments), this might put gas on the fire of a so-far small area of malware research, which will be good for the community at large.
It's obviously an arms race when it comes to malware, but this could be a significant step forward on the defensive side, forcing malware developers to evolve their TTPs.
I decided to use Bitdefender a few months ago because I suspected my Mac had malware. I was right: there was adware in the Firefox files, so it did its job.
But my experience with the antivirus was horrible. When I first opened the app there were popups everywhere advertising their other products, and the overall UI didn't look trustworthy.
I am no security expert, so I’m asking: is this the best way to deal with malware?
It asks for our names and emails, provides an opaque exe and no source code, asks to be run as admin, pings home, doesn't say who they are or how many of them there are, and justifies it all with "trust me bro".
People, this is malware. Please don’t fall for it.
I don’t think it’s wise to leave this on the front page. I hope dang agrees and takes it down.
Many of the most dangerous threat actors simply don't care about getting caught. They are operated, financed and protected by nation states, and/or operate from geopolitical locations where law enforcement is lacking.
Sounds like a very interesting concept. I'd like to see someone actually test this though.
Try running this on a Windows PC with Windows Defender off and just Scarecrow running. You could use the MaleX test kit [1] or a set of malware such as theZoo collection [2], or something more current. I'd be very interested to see how many malware executables stop halfway through their installation after seeing a few bogus registry entries/background programs running. I'm not trying to imply it's worthless, but it needs some actual "real world" test results.
Author of scarecrow here. Sweet idea, thank you for sharing. What I would really like to do is have some sort of stats in the app that shows whether it has 'scared' away any malware. But I'm not sure how to do that and work out which other processes on the machine exited because they saw some Cyber Scarecrow indicators in the system's process listing.
I would assume with a minimalist program like yours, it wouldn't have the capability to detect whether anything malicious was running on the system. That kind of thing would require some more advanced trip wires that would notice when certain things were triggered when they shouldn't have been or a full blown AV detection engine.
I suppose it could work like Sysinternals Process Explorer/Autoruns/etc & submit running hashes to Virustotal.com or other databases, but there's always the likelihood of false positives with that.
If you search GitHub for "malware samples" there are loads of them. vx-underground also has a large collection [1]. So I would go through there and look for commonalities, to try and find what malware often triggers on at startup.
I'll just end with this example of an interesting form of trip wire I've seen in use on Windows PCs: ZoneAlarm makes an anti-ransomware tool whose name I can't recall. It placed hidden files and folders in every directory on the hard drive. It would then monitor whether anything tried to access them - as ransomware would attempt to encrypt them - and force-kill all running programs in an attempt to shut down the malware before it could encrypt the entire HDD.
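The canary-file trip wire described above could be sketched like this. The names are illustrative, and the real tool reacts to access attempts rather than polling for changes as this sketch does:

```python
import hashlib
import os
import tempfile

# Sketch of the canary-file ("trip wire") idea: drop a decoy file,
# remember its hash, and treat any change or deletion as a sign that
# something (e.g. ransomware) is rewriting files. The filename and the
# polling-based check are illustrative.
def _digest(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def plant_canary(directory, name=".canary.dat"):
    path = os.path.join(directory, name)
    with open(path, "wb") as f:
        f.write(os.urandom(256))  # random content, nothing to gain by reading it
    return path, _digest(path)

def canary_tripped(path, original_digest):
    """True if the canary was deleted or modified since planting."""
    if not os.path.exists(path):
        return True
    return _digest(path) != original_digest

with tempfile.TemporaryDirectory() as d:
    path, digest = plant_canary(d)
    print(canary_tripped(path, digest))  # → False: untouched
    with open(path, "wb") as f:          # simulate encryption-in-place
        f.write(b"ENCRYPTED")
    print(canary_tripped(path, digest))  # → True
```

A production tool would hook file-access events (e.g. a minifilter driver on Windows) so it can react before the rest of the disk is touched; polling after the fact, as above, only tells you the damage has started.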
While this is a really interesting idea, and assuming that it's actually completely safe, the irony is that it looks exactly like what I would expect a trojan to look like - somewhat vague promises of security that could be interpreted as snake oil, conveniently packaged as an EXE with scant information about who's behind it or what it does, and no way to verify any of it. No offense to the authors :)
I wonder if you can make malware think your language and keyboard layout is Russian without having to endure the setup, that's been known to deter some nasty stuff.
Get a PTR record for your IP, let it resolve to honeypot087.win.internal.security.example.com, that will make your IP less interesting... To some people
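The recon-side of that idea might look something like this; the substrings are purely illustrative:

```python
# Sketch of the idea above: an attacker doing recon might deprioritize
# IPs whose reverse-DNS (PTR) names smell like security infrastructure.
# The hint substrings are invented for the sketch.
HONEYPOT_HINTS = ("honeypot", "canary", "sinkhole", "tarpit", "security")

def ptr_looks_boring(ptr_name):
    """True if the PTR name contains any 'stay away' hint."""
    name = ptr_name.lower()
    return any(hint in name for hint in HONEYPOT_HINTS)

print(ptr_looks_boring("honeypot087.win.internal.security.example.com"))  # → True
print(ptr_looks_boring("dyn-82-11.resi.example.net"))                     # → False
```

As the parent says, this only deters the attackers who bother to look, and a suspiciously security-flavoured PTR name could just as easily attract the curious ones.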
legit, or best malware install attempt ever? assume all is good if you detect the cyberscarecrow process? how can this have a long-term effect?
If you have malware probing your processes to decide whether it can run, you have a very serious problem regardless of what it decides: there is an entrance to your systems you don't know about.
I call BS. "How it works" says: "When hackers install malicious software on a compromised victim, they first check to make sure its safe for them to run." The download asks for an e-mail and name; it does not seem to be multiplatform; and I would never install anything like this on my computer, not even in a dream, unless it were open source.
I'm a malware researcher and reverse engineer for a living. This is absolutely true, but oversimplified. Focus on
>They don't want to get caught and avoid computers that have security analysis or anti-malware tools on them.
Malware doesn't want to run in a sandbox environment (or in general when observed), because doing malicious things in the AV sandbox is a straight way to get blocked, and leaks C2 servers and other IoCs immediately. That's why most malware families[1] at least try to check if the machine they're running on is a sandbox/researcher pc/virtual machine.
I assume this is what this tool does. We joke at work that the easiest way to make your Windows immune to malware is to create a fake service and call it VBoxSVC.
[1] except, usually, ransomware, because ransomware is very straightforward and doesn't care about stealth anyway.
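Alongside process and service-name checks like the VBoxSVC joke above, sandbox detection often combines resource signals (small RAM/disk, few cores, short uptime). The thresholds below are invented for the sketch; real families use varied and shifting criteria:

```python
# Illustrative sandbox heuristic combining resource signals, in the
# spirit of the checks described above. Thresholds are invented for
# this sketch, not taken from any real malware family.
def looks_like_sandbox(ram_gb, cpu_count, disk_gb, uptime_minutes):
    suspicious = 0
    if ram_gb < 4:
        suspicious += 1  # analysis VMs are often given little memory
    if cpu_count < 2:
        suspicious += 1  # single-core guests are rare on real desktops
    if disk_gb < 60:
        suspicious += 1  # small virtual disks
    if uptime_minutes < 10:
        suspicious += 1  # freshly booted snapshot
    return suspicious >= 2  # require multiple signals to reduce false hits

print(looks_like_sandbox(2, 1, 40, 3))      # → True: classic analysis VM
print(looks_like_sandbox(16, 8, 500, 900))  # → False: typical desktop
```

This is also why, as noted earlier in the thread, faking your RAM/CPU/disk figures can be a more effective scarecrow than faking the presence of any particular AV product.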
It's very platform-dependent, because for each platform there are different ways in which a virus checks for markers that it's being analysed - for instance, if it's being run in a VM on Windows, it might check registry entries, check for guest-host drivers, or whatever. Still, I wouldn't trust something like this if it asks for PII, isn't open-source and leaves traces around on the disk.
Setting aside the concerns with this specific implementation and thinking more of "the idea" I think the biggest concern is this sort of application causing legitimate software to fail to run[0] and how one would "white-list" an application from seeing these "fake artifacts designed to trick malware."
The problem is that "the fake components" would have to be prevented from being detected by legitimate software, and the only way I can think to do that would be to execute everything in a sandbox that is capable of (a) hiding some contained running processes (the fake ones) from the rest of the OS, while (b) allowing the process that "sees the fake stuff" to be seen by everything else "like any old process."
Applying ACLs (and restricting white-listed processes) might work in some cases; might equally just be seen as a permissions problem and result in a nonsensical error (because the developers never imagined someone would change the permissions on an obvious key), or it might be that the "trick" employed is "Adding a Russian Keyboard" which can be very disruptive to the user "if they use more than one input language" or "is one of those places where a program may read from there never expecting to encounter an error."
A lot of this seems like it would require use of containerization -- docker/docker-like -- for Windows apps. I'm familiar with a few offerings here and there, but I've worked with none of them and I run Linux more than Windows these days. So my questions really boil down to:
Where's Windows containerization at? Would it be possible to run an application in a Docker or Docker-like container with a Windows kernel, whose environment can be controlled in a manner that is transparent to the application running within the container? Is there any other approach that would allow "non-white-listed applications" to run containerized and "see the Scarecrow artifacts", while allowing the white-listed applications[1] to run outside of the container in a manner that hides some of the processes within it? And could it do all of that in a way that couldn't be defeated by simply repeating the same "check" immediately after confirming an elevation dialog[2]?
Again, that's assuming "this is a brilliant idea" -- and there's some evidence that, as a concept at least, it would help (ignoring this particular implementation of the idea) -- but it still suffers from its own success: the extent to which it helps/is adopted equates to how long any of these techniques go undefeated. And just from the sense I get of the complexities required to "implement this in a manner that legitimate software won't fail, too", I suspect it will be easier to defeat a tool like this than it will be to protect against its defeat. In other words, the attacker is a healthy young cat chasing a tired old mouse.
[0] Anti-cheat being the most obvious, but those are often indistinguishable from malware. I'd encountered plenty of games/apps in the 90s that refused to run when I ran software to trace aspects of their memory interaction. I had some weird accounting app that somehow figured out when my own code (well, code I mostly borrowed from other implementations) was used for the same purpose.
[1] The assumption being that "a legitimate application which does these kinds of checks" is also likely to refuse to run within a container unless it's impossible to detect the container as reliably as everything else (and vendors are completely tolerant of false positives if the affected customers don't represent enough in terms of profit, or the solution is "don't run that unusual security software when you run ours").
[2] I've seen it enough with Easy Anti-Cheat that I just click "Yes" like a drone. There was at least one occasion when it popped up after I had installed some developer tooling but no game update had come down between launches. I have no idea why this happens -- it could have been "to fix something that software broke", but it could also have been "to re-evaluate the environment as an administrator, because something changed enough on the system to warrant re-checking that it is still compliant with the rules".
Doesn't exist. Not even UAC is a reliable security boundary. Likely, it will never exist.
> Is there any other approach which would allow for "non-white-listed applications" to run containerized and "see the Scarecrow artifacts",
Sounds a bit like WoW64. It should be easy enough to replicate this behaviour with a rootkit. However, the software would always be able to peek behind the curtain.
> In other words, the attacker is a healthy young cat chasing a tired old mouse.
I always thought of the attackers as the mice, and anti-malware folk as the cats.
I'm confused about the tradeoff of not running the software that you're pretending to be running. Most AV definitely feels like malware itself, so maybe that's your point? But it would probably be better to run good software than to fake bad software?
But there is no good software for defense. They either introduce obstacles while being barely useful or are useful, introduce obstacles for you and are proprietary and thus are malicious by design.
Outside of the authorship/open-source fears[0], this is one of the more interesting ideas to surface in anti-virus.
Facing reality: anti-malware tooling is inadequate -- so inadequate that I haven't found a reason to purchase it for the one Windows machine I still have. People say "Defender works well enough now!" and I think that's an adequate description: anti-malware has an impossible job, as evidenced by every vendor's failure to succeed at it. So why pay for it?
It's always a cat-and-mouse game. This is an interesting approach, though, because it could shift the balance a little bit. Anti-malware's biggest problem is successfully identifying a threat while minimally interfering with the performance of an application. A mess of techniques are used to optimize this but when a file has to be scanned, it's expensive. It'd be interesting to see if it'd be possible to eliminate some variants of malware from on-demand scanning "if this tool defeats the malware as effectively", pushing scanning for those variants to an asynchronous process that allows the executable to run while it is being scanned.
I can see a lot of problems with this kind of optimization[1]: it turns a "layer in the onion" into a replacement for an existing function, with more unknowns as far as attacks are concerned. Creating the environmental components required to "trick the malware" may be more expensive than just scanning. White-list scenarios may not be possible: I suspect anti-cheat services and potentially legitimate commercial software might be affected as well[2], and getting them to white-list a tool like this won't be easy unless the installed base is substantial. I suspect that hiding the artifacts this tool creates to trick malware from white-listed processes might be impossible.
For at least a brief moment, this might be a useful tool in preventing infections from unknown threats. Brief, because -- by the author's own admissions (FAQ) -- it will devolve into a cat-and-mouse game if the tool is popular enough. There's another cat-and-mouse game, though. If this technique isn't resource intensive while offering protection somewhere in line with what it would take to implement, all of the anti-virus vendors will implement it -- including Microsoft. And they will be seen by customers as far better equipped to play "cat" or at least "the choice you won't get fired over."
And that's where it makes a whole lot of sense to open-source the product. It's a clever idea with a lot of unknowns and a very low likelihood of being a business. Unless it's being integrated into a larger security suite (same business challenges, but you have something of "a full product" as far as your customers are concerned), its only value (outside of purely altruistic ones) would be either bringing people to a related business/service via the author's website, or promoting the author's skill set (for consulting/resume reasons). I'm not arrogant enough to say there's no way to make money from it, I just can't see one -- at least, not one that would make enough money to offset the cost of the "cat and mouse" game.
[0] Which, yeah, "I wouldn't run it on my computer" but I give the authors enough of the benefit of the doubt that "it's new"
[1] Not the least of which being that I do not author AV software so I have nothing to tell me that any of my assumptions about on-demand scanning are correct.
[2] It used to be a common practice to make reverse engineering more difficult.