Hacker News
Avoid Non-Microsoft Antivirus Software (ocallahan.org)
859 points by bzbarsky on Jan 26, 2017 | 374 comments

I also want to raise an alarm about a current AV practice, not mentioned in the article:

    AV products like Bitdefender will MITM
    your HTTPS connections by installing their
    own root certificates, by default and 
    without warnings
In the name of "security", this undermines the very purpose of HTTPS, knowingly endangering their users.

And consider that I, a highly technical and security conscious software developer, only noticed it because I saw green icons appearing in my search results and then noticed that Google's SSL certificate is now a fake. And I only noticed it because I know how this shit works and those green icons seemed suspicious.

And yes, I'm using the word "fake", because I doubt that companies like Bitdefender have to pass the same audits as a certificate authority, or that they have any deals whatsoever with Google. And it's a serious vulnerability, because their certificate's private key can get stolen and used by malicious software. Not to mention you now have to trust a third party with all of your secure connections, which includes your Google searches exposing your most secret desires, your Facebook and Slack chats, your bank account, everything. A third party that does not get the scrutiny of your open-source web browser.
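That check can be scripted. A minimal sketch in Python, assuming the AV product installed its root in the platform trust store; the list of issuer names is my own guess and certainly not exhaustive:

```python
import socket
import ssl

def issuer_common_name(host, port=443):
    """Fetch the leaf certificate served for `host` and return its issuer CN.

    An interception root installed in the platform trust store validates
    "successfully", so the giveaway is the issuer name, not a validation
    failure.
    """
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # getpeercert() returns the issuer as a tuple of RDN tuples
    issuer = {k: v for rdn in cert["issuer"] for (k, v) in rdn}
    return issuer.get("commonName", "")

def looks_intercepted(issuer_cn):
    """Heuristic: does the issuer CN name a known interception product?"""
    suspicious = ("bitdefender", "kaspersky", "avast", "eset")
    return any(s in issuer_cn.lower() for s in suspicious)
```

If `issuer_common_name("www.google.com")` comes back as anything other than a Google CA, some middlebox on your machine or network is re-signing your traffic.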

That's just preposterous and these products only survive because users are gullible and technically illiterate.

I've written and talked about this a couple of times. Each and every one of these products does some kind of TLS security degradation:

* https://blog.hboeck.de/archives/869-How-Kaspersky-makes-you-...

* https://media.ccc.de/v/camp2015-6833-tls_interception_consid...

Yes. Also, let's finally start a public discussion about AV companies making money by selling data (they do, either all of them or most).

Of course, being able to peek into HTTPS traffic gets them more data (specific URLs, not just whole sites).

_Everyone_ is collecting our data nowadays. Who's left to sell it to?

Not true. Google collects your searches. They don't sell your searches; they sell whatever they infer from your searches (your compiled and quite vague profile, and I know because I've interacted with their AdSense platform), because they'd be stupid to sell your actual searches, since those are their most valuable property.

Does anybody else know your search history? Besides the NSA, who I assume have access to all US-hosted data, no. And not even Google knows my most sensitive searches, because my private mode is a Tor Browser connecting to DuckDuckGo, answering for my porn needs mostly.

And I trust Google to keep my data safe more than I trust shady AV companies, because Google has hired a lot of security researchers, at their size all eyes are watching them and their behavior has been acceptable compared with that of others like Facebook.

Information security is all about compartmentalization ;-)

It often irks me when people say things like "Google sells all of your data to advertisers and you are the product!"

Not because there are no potential issues to discuss around ad-funded free services and data aggregation, but more because it's like clickbait (in that it oversimplifies a complex issue for emotional effect and makes discussion of actual issues more difficult).

Using algorithms to build a general profile in order to increase relevance is not the same as "selling your data". They sell access to your eyeballs in much the same way as a broadcast TV station or free alt-weekly does. The main difference is that by getting some sense of who you are and what you might be interested in, they can decrease (in theory) the amount of irrelevant ads that end up on your screen versus the traditional methods of just blanketing things with general ads or using cruder demographic info.

Lots of companies flat-out do sell your data, either in aggregate and somewhat anonymized or in full. I've not found anything yet that leads me to believe this is how Google runs their advertising business. To this day, my main concern with Google isn't so much with Google as it is with a malicious third party somehow gaining access to the info Google has on me.

You're right, Google hoards your data, instead of selling it. They are the ones buying.

That's not the point. The point is there is a dark market trade in your personal identifying data and metadata, with the ultimate goal of knowing everything about you. The more they know about you, the better they can advertise to you.

Advertisements on the Internet are bought and sold algorithmically on high speed marketplaces. Omniscience leads to better decisions in this environment. Therefore your data is precious to whomever owns it.

> Google sells all of your data to advertisers

I totally agree with that part, including the last sentence of your post. However, I don't quite see the difference between "you are the product" and "access to your eyeballs is the product"; it's just describing the same thing differently. That others are worse doesn't really change that; the ones who are better are the standard as far as I'm concerned.

Once personal data is collected it could be abused or used in a way you object to. The abuse could be perpetrated by a third party who got access to the data legitimately or illegitimately, or by an employee of the company, or by an owner of the company.

As the target of such surveillance and data collection, you just never really know how that data will be used, or by whom. It's optimistic, to put it mildly, to expect that data will only be used, by good people with the best intentions, for purposes you don't object to.

It's disingenuous to compare the targeting that Google allows with the largely untargeted advertising on TV or in a newspaper. These traditional advertising media also don't perform the same intensive tracking that online advertising does. There is really no similarity, other than the fact they both result in ad impressions.

>and their behavior has been acceptable compared with that of others like Facebook.

If you have the time, would you mind expanding on why you consider Google's behavior better than Facebook's? I find myself very wary of FB but much less so of Google, but I can't really explain why.

That's not just bias: Facebook has way more potential for doing you harm and has already used your data against your interests.

Facebook manipulates the emotions of its users: http://www.forbes.com/sites/kashmirhill/2014/06/28/facebook-...

MasterCard to access Facebook user data in order to make you spend more: http://www.theage.com.au/it-pro/business-it/mastercard-to-ac...

Facebook ads use your face for free: http://www.itworld.com/article/2746556/networking-hardware/f...

Facebook is inventing phony likes to promote stories you've never seen to your friends: https://web.archive.org/web/20161215092610/http://www.forbes...

Facebook guesses your race and uses it for targeting: https://arstechnica.com/information-technology/2016/03/faceb...

Facebook has been the main promoter of bogus news and misinformation in 2016: https://www.theguardian.com/technology/2016/sep/09/facebook-...

Facebook has started to collect WhatsApp data, in spite of the app's original policy: https://www.theguardian.com/technology/2016/aug/25/whatsapp-...

Facebook collects the texts that you don't send: https://arstechnica.com/business/2013/12/facebook-collects-c...

Facebook tracks and builds user profiles for people without accounts: https://www.theguardian.com/technology/2015/mar/31/facebook-...

Facebook's privacy settings have been designed with dark patterns, making it easy to publish by mistake: https://www.theguardian.com/technology/2016/jun/29/facebook-...

Facebook makes it easy to tell when you're asleep: https://www.washingtonpost.com/news/innovations/wp/2016/02/2...

A short story to support your links.

I'd used a fake name on FB since the first day I signed up, along with a photo of my favorite rock star. About a year ago, someone outed me, and FB locked my account and said that unless I emailed them a copy of my driver's license or some other form of identification that proved who I was, they would keep my account locked.

I thought, "Whatever, I'll just fire up a new one."

This past month I signed up a new account under a very generic name like "John Johnson". No problems. I never uploaded a photo, just connected with the handful of people (less than 5) and felt like, "ok, we're cool now".

Yesterday, I got the same message, and now the links you provided make a lot of sense as to why. FB really is after all of your data and essentially forces you to give it to them; otherwise they hold your account hostage. Both times, I really felt like my privacy was being trampled and that this was a big intrusion to get at my personal information. No way am I going to give them any identifiable information about me. I already gave up an account I had used for more or less a decade instead of coughing up my personal information and photo.

So yeah, I'm done with FB - not that I was ever a huge fan, but this past week just confirmed what I always suspected.

Sure that's scary. But it also sounds like 'anti-fraud'. How could we distinguish the two? They're slammed if they want authentic accounts; they're slammed if folks create large numbers of spam accounts. How do we suppose they could win in this scenario?

Just do what most social media platforms do.

Have algorithms that detect spam? Let users report on accounts being used to spam other users?

Sure, but if someone is abusing the system, that should be easy to ferret out without making everyone surrender all their personal information and identifying markers just to use a SOCIAL MEDIA platform free of spam.

I don't know this for a fact, but if I were FB I think I'd be fanatical about verifying real users to prevent people setting up social media PBNs. A holy grail of grey-hat SEO nowadays would be to control large networks of interlinked fake social media accounts, which could be used to promote content artificially. This kind of spam is potentially hard to detect since the networks could be very large and appear organic (to the point, theoretically, of having AI-driven "users" behind each one), so the first line of defence is to identify fake user accounts.

Thanks for the links. I had heard of some of those issues but the sleep tracking and unsent text collection were new to me.

I have the same feeling but I cannot find arguments for it: they make money by selling very similar things.

I think Google just markets its intrusions better than Facebook... Something to do with the public slogan "don't be evil". Indeed, "evil" is like "common sense": everybody has their own definition and reads into it whatever is most comforting. Seems like pure marketing.

Does anyone have an argument about whether this perceived Google/Facebook difference in privacy intrusion/protection is real?

Actually, I still have a problem when all my "actual searches" become someone else's "most valuable property".

Of course there are other, less intrusive search engines (DuckDuckGo, maybe Qwant), but unfortunately they are still less effective than Google for fine-grained or rare searches.

Well, I have a problem with that too, but then I'm talking about the average user, who is never going to install Tor in order to connect to DuckDuckGo. And to tell you the truth, I don't trust DuckDuckGo that much either, as they can always turn around and start collecting data without me knowing it. But for us, the technically inclined and privacy aware, there are always solutions.

But for the average user, until a better Google comes along, I think it's OK to trust Google with their searches. And compartmentalization is paramount to information security, my point being that trusting some other company besides Google with that data is not acceptable, which is why I find that intercepting HTTPS connections is simply wrong and evil, regardless of reasons. This besides the fact that intercepting HTTPS traffic increases the attack surface, making users less secure.

I see a distinct difference between DDG and Google.

We know Google does what it does. DDG's reason for existence is predicated on not doing so.

If DDG were found to be lying, I'd guess >80% of its customer base would evaporate overnight. It would mean destroying many years of branding, trust and relatively difficult cultivation of user browser defaults.

But that's worth gaming out - what would make it worth it to light all that on fire? About the only thing I can think of is a Lavabit-style conundrum, wherein our intelligence-overlords threaten someone's freedom. So, absolutely could happen, absolutely would come out.

So that's why I trust DDG to be less forthcoming with their logs.

When I started using DDG some years ago I had the same problem: not-quite-relevant search results. I don't think that's the situation anymore; the results are very good. And you don't have to worry about security.

If the search results aren't relevant, you can always add !g to the search and force it to go to Google. They may still not be as relevant, of course, because Google can't use their data about you to filter the results to what they think you wanted. Which I'm perfectly fine with. I'm happy with the allegedly sub-par results that are displayed because of my anonymity. Others may not be.

You are not paying for the search service; ergo, your searches are the product, and your search results are but a byproduct of that product.

I agree with ysavir: I do not mind being exposed to ads related to my current search, as I understand it is the price to pay for a free service... I mind my searches being stored and attached to my (not even anonymous) profile in a database.

And the "not even anonymous" part is not optional: to have a fully functioning phone, I can hardly escape declaring my full details to Google.

And this is clearly an "evil" choice by Google: I never had to create a Linux, Debian, Ubuntu or Mint account to keep my desktop computer up to date and install additional apps.

Ehhhhhh I'm paying for it by viewing the sponsored (and non-sponsored) ads.

Any tracking that isn't announced to the user is not a cost, but is espionage.

What kind of porn are you looking at that you feel the need to use Tor and anonymized search, is my question.

Not everyone. FOSS doesn't.

Ubuntu unity sells your searches in the desktop environment by default

No, Ubuntu doesn't, nor has it ever done so.

It connects online and offline searches, so it can show you results from online locations alongside local ones. The underlying assumption was that users increasingly see online and offline content as all part of the same world ("their content").

The commercial aspect was that it connected to places like Amazon. It made money for Canonical by using affiliate links if the user chose to make a purchase.

That is not the same as collecting the user's entire history, (anonymising and) then selling it to a third party, or presenting adverts based on that data.

The default is now off, as users felt that searches connecting to external services by default was an invasion of privacy; that's different from "selling searches".

Frankly, this closed off the last viable way for desktop Linux to secure a revenue stream large enough to employ enough full-time developers to keep up with the other platforms, in my personal opinion. FOSS doesn't change the dynamic that full-time developers cost real money.

Source: I worked at Canonical from the early days of the desktop, for ~10 years.

I'm sorry, but I think I trust Canonical's privacy policy as a source more than you:

"Unless you have opted out, we will also send your keystrokes as a search term to productsearch.ubuntu.com and selected third parties so that we may complement your search results with online search results from such third parties including: Facebook, Twitter, BBC and Amazon. Canonical and these selected third parties will collect your search terms and use them to provide you with search results while using Ubuntu."

Source: Ubuntu's third party privacy policy.

* The default was not off in 12.10.

I'm fine if you want to make money this way! That's why you're a company, people need to make money. My argument was that some OSS software sells your search results, one way or another. I didn't take a position in this argument (but you can hopefully guess my position).

Now, you mention that your users complained, which is true. It caused a huge amount of backlash from your established user base, many of whom contributed to OSS themselves and have seen their contributions monetized by Canonical (which is fine too, no worries). But beyond the users, it was pressure from the EFF that caused Canonical to buckle [1].

So, I don't care what Canonical's underlying assumptions were, I don't care whether it is disabled now, I don't care whether Unity showed affiliate links or not. It's all just distracting from the main point: search terms entered in Unity were sent, by default, to third-party servers!

[1] https://www.eff.org/deeplinks/2012/10/privacy-ubuntu-1210-am...

> The default was not off in 12.10

Which was four years ago. It's off now, and has been since 16.04 (the most recent LTS release, which shipped last year).

I agree the Amazon integration in the Dash was a mistake, but it's a mistake that has been fixed. It's simply not true anymore that "Ubuntu unity sells your searches in the desktop environment by default," and continuing to tell people so is deeply misleading.

You're still supporting it, so still selling: https://www.ubuntu.com/info/release-end-of-life

Perhaps I wasn't clear enough with my point. You stated that Canonical sold data ("unity sells your searches"). I was factually correcting you, because it doesn't and didn't. What you said in the previous comment, "search terms entered in Unity were sent, by default, to third-party servers!", is a true statement, though there's far more nuance behind it. The two things are not the same (Canonical didn't sell the searches), and in the context of the wider thread I felt it was confusing.

The rest of your points are personal opinions on users and data privacy, I shouldn't have commented on that area, I apologise. I see no value in getting drawn into discussing the strong emotions associated with this area as it never ends well :-)

Wow, that's yuck. Thanks for the heads up. :(

Not true, because even on Ubuntu 15.10, your keystrokes were being monetized.


I agree it's not the same as collecting user history and selling it to third parties. However, it's still against the expectations of most users of FOSS. These things, if they must be there at all, must absolutely be opt-in, never opt-out. Canonical deserved the backlash.

Yeah, agreed it was against user expectation - it's a great demonstration of what happens when organisations don't prepare their audience well or read the emotional response.

It's a personal frustration and professional regret that desktop Linux is too under-funded to compete on an equal footing; the business-model challenge feels intractable. Red Hat/SUSE, Mandriva and Canonical have tried different options, but there's been no sustained success that gets desktop Linux over 5% of the market. Perhaps Google will have more success with ChromeOS.

The basic problem is that none of them are interested in actually keeping APIs stable. Likely because their big income stream is support contracts.

Observe by contrast the efforts Google goes to with the Android APIs.

It had nothing to do with reading emotions or preparing expectations and everything to do with being stubborn, arrogant and aloof.

You're passionate and certain in your beliefs, probably from a strong philosophical basis; my opinions are formed by my practical experience and hard work for 10 years in this space. There's never been a good-quality discussion when philosophical certainty crashes against pragmatic experience!

You're clearly angry, but I don't deserve the implication of being called stubborn, arrogant or aloof.

I've already apologised to you for pushing your comment side-ways in the other thread - I'm not sure what else you'd like from me at this point.

I am certainly not calling you stubborn, arrogant or aloof, I'm terribly sorry if you got that impression and I definitely did not want to imply this.

Ubuntu is trying very hard to abide by the letter of Free Software but not the spirit. Is there a solution? Simple: don't use it. There are many other distros out there which do abide by the spirit as well as the letter.

Canonical have tried to do some very good things too, and deserve some credit for successfully making Linux more end-user-friendly. They're not terrible people, just folks with a different set of perspectives and incentives from mine.

It was a definite breach of trust. Back when it happened I had to get people to go through a lot of command line fiddling to fix it. I'm still mad that Canonical didn't publish an apology and a hotfix which made the behavior opt in.


I thought they changed that?

Not since 16

Oops, my bad. But at least they did, which was my point to begin with. I said goodbye to Ubuntu and their data raking after the notorious community discussion, even before the feature shipped. They lost my trust and goodwill with that move.

It's less prevalent and severe, but FOSS sometimes does: https://www.fsf.org/blogs/rms/ubuntu-spyware-what-to-do https://support.mozilla.org/en-US/kb/firefox-health-report-u...

There are probably lots of smaller examples; especially in Android.

That link says nothing about selling your data.

Also note that while Homebrew may be open-source, it is not "free software".

>>>> _Everyone_ is collecting our data nowadays. Who's left to sell it to?

I confirm that he is probably right about _collecting_ data. Yes, this most definitely includes FOSS software. If your qualifier for FOSS is not using GA or anything like that, then you are right; however, most of us probably still count brew as FOSS. Hope that helps.

BSD-2 license isn't free?

What makes it nonfree?

The fact that it uses GA contradicts the Free Software movement's definition of free software; it is, however, open source.

But not everyone has access to all of your browsing history; especially across browsers and devices.

Microsoft, Google, and/or Apple have that. The profit models of these big companies create some disincentives against selling this data on.

AV providers are often on much smaller margins and the return from selling this data or building their own products on it is much higher.

I wouldn't be surprised if ISPs and VPNs also sold on data.

_Everyone_ may be collecting data (I'm collecting steam player data, project pieces in alpha and beta stages), but not everyone has access to a lot of your data. I might be able to tie your player name to your identity, and know your work and sleep schedule, but this brings me no closer to your browser history.

Everyone is collecting as much data as possible, but few are in position to get all of the users' browsing data. Even fewer (if any?) make it available for sale. So there certainly IS interest in such data.

Marketing and ads companies?!

Some of this was in their EULA (which is subject to change), along with what they collected (which is subject to change with updates):



We recently published a paper on this exact issue, quantifying the degree to which AV / corporate middlebox systems degrade the security of HTTPS connections. The tl;dr is that we find an alarming amount of MITM on the public internet (5-10%), mostly due to AV/middleboxes, and it almost always degrades the security of the connection.


Many corporations do this on their networks so that they can inspect traffic for security purposes and outbound loss prevention. It's not uncommon today and seems to be gaining in popularity.

Edit: I don't mean to imply that it's the right or the wrong thing to do (it probably depends on the situation). Just stating what I have seen in industry.

That communication belongs to the company; the session is work product on a company-owned device. It feels icky if you didn't think about it that way, but it's implied by almost every employment agreement.

This is quite different than the AV vendor who does not own your communication from your own device.

I still wish most companies knew of, or had, a better "best practice" than just MITM interception certificates, because that approach is potentially brittle and is a threat to corporate security. If you already have all of your machines MITMed, then an attacker could gain access to the existing MITM certificate's private key, and who would ever know?

I know I'm a relative minority in the corporate IT world, but as a software developer downloading and uploading dependent libraries and the outputs of my development work, corporate MITM interception certificates absolutely scare me, both for my personal threat model and for the threat model of the projects that I work on.

>because that is potentially brittle

It is. I do work in a Fortune 500 occasionally, and have to use their MITM gateway (websense SSL intercept).

They haven't yet fixed the internal cert to not use SHA-1.

If you're using something other than a corporate windows desktop + browser, you have to install the root certificates manually.

They have to make manual exceptions for sites that do certificate pinning. When they miss a site, it creates issues. GitHub is broken for me... I have to use crazy workarounds.

If there were a movement to enable certificate pinning everywhere, it would be very disruptive for the Corporate MITM vendors.

Edit: They also have irritating "content filters". So, if I'm tasked to research options for a project, like say a VPN, I can't search from their network. It blocks pages talking about VPNS because there's a policy to block "websense proxy avoidance".
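For context, pinning just means the client stores a fingerprint out of band and refuses to connect when the served certificate stops matching it. A minimal stdlib-only Python sketch, assuming we pin the full certificate rather than the SPKI hash that HPKP-style pins use:

```python
import base64
import hashlib
import socket
import ssl

def pin_of(der_bytes):
    """Base64 SHA-256 of a DER-encoded certificate: the 'pin'."""
    return base64.b64encode(hashlib.sha256(der_bytes).digest()).decode()

def fetch_pin(host, port=443):
    """Connect and compute the pin of the certificate actually served.

    Behind a corporate MITM gateway this returns the pin of the
    gateway's forged certificate, not the origin's.
    """
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return pin_of(der)

def check_pin(host, expected_pin):
    """True only if the served certificate matches the stored pin."""
    return fetch_pin(host) == expected_pin
```

Because `check_pin` fails closed behind an interception proxy, widespread pinning is exactly what forces the MITM vendors to maintain those manual exception lists.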

Similar anecdote: an internal intercept certificate that Firefox outright refused to install to a trusted store because the cert seemed suspicious/insecure. (Not caught by corporate IT because of a Chrome monoculture, which is a different problem.)

As someone who's worked on corporate web proxies, I can also tell you there's usually someone who knows what they're doing administering them. Besides the "implied consent" of being on an employer network, you also have people at the company who know how to ensure bad SSL from the proxy -> website will not be permitted.

If the MITM function of Bitdefender isn't advertised, how can anyone consent to it, or knowledgeably ensure it's still enforcing connection resets on bad SSL certs?

Don't forget that if their software is changing the certificates for every HTTPS site you visit, they're probably doing it on the fly. This means that the private key is on your computer. Assuming they generate a per-install private key, that won't be a big issue; but if it's the same private key for all their installs, it could get pretty bad once someone extracts the key.

I am not disagreeing with you, but I want to point out that, usually, the certificate is generated locally during setup and then installed in the trusted certificates store. So no one else should have that certificate. I also assume there is an option somewhere to disable the MITM scanner.

> I also assume there is an option somewhere to disable the MITM scanner.

By default this is ON and users don't have the competence to recognize that this is in fact increasing the surface area for attacks and to disable it. The mere existence of a setting that is ON by default doesn't absolve such AV companies.

But speaking of Bitdefender in particular: I installed it on my wife's computer, disabled that option, confirmed the change survived a restart, and then one month later discovered that it was ON again, probably due to an automatic update. It's also an "admin" setting, and my wife's user account does not have admin privileges to turn it on or off.

So even with a setting in place, it's untrustworthy.

When I was using bitdefender, this was the first setting I disabled after installing it. Bitdefender also has a slew of other issues, including BSODing after installation.

Bitdefender likes to revert to default settings frequently. I prefer Windows Firewall (Win7) and have to turn off the Bitdefender firewall usually at least once a week.

What is the reason you're using the software at all? What is it achieving for you? Genuinely interested in the motivations.

Bitdefender's free version doesn't have the SSL MITM feature. Nor does it install a root cert, as far as I can see.

Microsoft doesn't exactly have a great record with root certificates either.

>Emergency Windows update revokes dozens of bogus Google, Yahoo SSL certificates


They revoked certs like this silently in the past, which makes it even worse.

Any references? I don't know what you're talking about.

I'm not a Windows user, haven't been a Windows user since 2001, my AV experience has been with the PCs of my family, whom I'm trying to keep safe.

But even if I were a Windows user, if you can't trust Microsoft, you can't trust their OS, at which point it would be better to use something else because security really depends on how trustworthy that OS and its vendor are. I do trust Microsoft more than I trust an AV vendor though.

You can manage pre-installed root certificates manually in Windows. As far as I've seen, there was nothing sinister in default Windows root CA list.

Microsoft just silently adds back root certs when you delete them (if they're trusted by Microsoft). Or at least it did so by default in WinXP.

That's hardly relevant for the average computer user. By default, root certs are updated automatically.

>As far as I've seen, there was nothing sinister in default Windows root CA list.

Are you in any way related to MS or is your memory just very short?

>Emergency Windows update revokes dozens of bogus Google, Yahoo SSL certificates


"Thursday's unscheduled update effectively blocks highly sensitive secure sockets layer (SSL) certificates covering 45 domains that hackers managed to generate after compromising systems operated by the National Informatics Centre (NIC) of India. That's an intermediate certificate authority (CA) whose certificates were automatically trusted by all supported versions of Windows"

I'd argue that's a problem in CA trust model, not MS. If you trust a certain CA, of course you trust their issued certificates by design. Currently, if some high tier CA f*cks up, there's no other way to invalidate their issued certificates than propagating CRLs and removing its certificate from the root CA stores manually (or by updates, as in MS case).

Got an example? I haven't heard anything about this and I'm genuinely curious.

There has been criticism of Microsoft quietly updating root certs.


Added a link to the post.

Don't most browsers have hooks for AV (and other plugins) to get into web traffic without having to mess with TLS?

No, I don't think so and if it does, please tell me which browser does it so I can keep away from it, because that defeats the purpose of TLS.

Either way, Bitdefender installs their own root certificate and generates their own for google.com. I've got proof if you want.

I don't think it defeats the purpose of TLS.

From Wikipedia: "TLS and SSL are cryptographic protocols that provide communications security over a computer network". Your host is not "the network" and it's expected to be your trusted asset.

If the AV software can't be trusted, that's another issue not addressed by TLS.

> No, I don't think so and if it does, please tell me which browser does it so I can keep away from it, because that defeats the purpose of TLS.

AVs generally run with complete permissions, and can do everything up to and including injecting their own code inside your browser's running process. Providing them with an API doesn't weaken the security, it just reduces the chances they'll screw the browser up.

That's for the Wireshark debugging use case.

Indeed it is, yet it can be used for other things as well. Such as an AV that would want to MITM everything without supplying its own CA.

Is some AV product using it for that?

Trend Micro OfficeScan, their enterprise offering, has plugins for Firefox.

Well, it had a plugin that got disabled by subsequent Firefox updates.

And that is using the NSS key dump files?

I have Bitdefender and was wondering how I can check whether it does this on my PC and Mac, and how I can disable it (if possible). Could you point me in the right direction?

Open https://www.google.com/ and see what cert you've got. If it's "Bitdefender something" instead of Google, uncheck "Scan SSL" in Bitdefender, then google how to remove a root cert from the trusted root cert store, in case the former doesn't do it for you.
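A quick way to do that check programmatically (a hedged sketch: the vendor-name list is just a guess at likely issuer strings, and `fetch_cert` needs network access when called):

```python
import socket
import ssl

def issuer_cn(cert):
    """Pull the issuer commonName out of the dict returned by getpeercert()."""
    for rdn in cert.get("issuer", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return ""

def looks_intercepted(cert):
    """Heuristic red flag: Google's real certs are not issued by an AV vendor."""
    cn = issuer_cn(cert).lower()
    return any(v in cn for v in ("bitdefender", "kaspersky", "avast", "eset"))

def fetch_cert(host="www.google.com", port=443):
    """Connect using the system trust store and return the leaf certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

If the AV's MITM is active, the connection will still validate (because the AV installed its root into the system store), but the issuer will name the AV vendor instead of a public CA.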

Browsers grudgingly support local MITM as an ugly half-ass solution, mostly because banks require it as part of their data loss prevention measures. Since there is no OS-provided API, there is no alternative that makes corporate clients happy.

Depending on the AV vendor, the MITM implementation will "give AV access to your SSL traffic" or "allow everyone to intercept it" (Symantec).

I feel like, instead of MITM'ing all TLS connections, antivirus companies could implement this same thing in a browser extension. If good ad blockers can prevent requests for ads from being completed, an antivirus extension should be able to do something similar, without having to tamper with the TLS connection between the browser and the site.

That being said, users would probably be much safer if they skipped the antivirus and just installed a decent ad blocker.

At least with Chrome, the extension API doesn't allow you to "peek" into the content. You do have the ability to see the URL before it's fetched[1], and block the fetch/redirect. But you can't see the data until it's too late.


Not to detract from the main point, but for Bitdefender specifically SSL MITM looks like a paid feature. Which is ironic. So, just use the free version. I am not sure if it sends out all the plain URLs though. If anybody knows for sure, please let us know.


I only recently noticed this when Google Chrome started marking even gmail.com as insecure on my dad's laptop.

Turned out that the Bitdefender license had expired and somehow this made certificate validation fail?

How does that work with Chrome's certificate pinning for Google? Do they do some runtime modification (e.g. DLL injection) to disable the check?

Cert pinning ignores root certs. This is by design :(

>The Chromium browser disables pinning for certificate chains with private root certificates to enable various corporate content inspection scanners and web debugging tools (such as mitmproxy or Fiddler). The RFC 7469 standard recommends disabling pinning violation reports for "user-defined" root certificates, where it is "acceptable" for the browser to disable pin validation.

The alternative would be no Chrome and/or Firefox at my workplace, and many others.

It does not work with certificates installed by the user.
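For context, the pin being compared is tiny: per RFC 7469, it's just the base64-encoded SHA-256 digest of the server's DER-encoded SubjectPublicKeyInfo. A minimal sketch (extracting the SPKI bytes from a live certificate is left out):

```python
import base64
import hashlib

def hpkp_pin(spki_der):
    """RFC 7469 pin: base64(SHA-256(DER-encoded SubjectPublicKeyInfo))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

def chain_matches_pins(chain_spkis, pins):
    """A pinned client accepts the chain only if some cert in it hashes to a
    pinned value -- unless, as noted above, the chain ends in a locally
    installed private root, in which case browsers skip this check entirely."""
    return any(hpkp_pin(spki) in pins for spki in chain_spkis)
```

That skip is exactly why an AV's forged google.com cert sails past Chrome's built-in pins.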

> In the name of "security", this undermines the very purpose of what HTTPS is about, knowingly endangering their users.

It doesn't have to be insecure. If the software that does the MITM checks the certificates correctly, I don't see how it would be worse than letting the browser handle it.

Not that I'd ever use an antivirus, of course.

It actually is worse. The problem comes down to: what does the interception do when it encounters an invalid certificate?

So, for example, a self-signed cert. Does it:

a) create a "valid" cert itself, hiding the error from the user? This is obviously dangerous

b) create an "invalid" self-signed cert. This is messy as a user will then see a self-signed cert from the A-V vendor, which they may be more or less inclined to trust

c) Pass the traffic through without inspection, missing any potential threats

And that's just one case. SSL/TLS interception is very hard to get right and easy to make the user's security worse as a result.

With Eset you get a message explaining the issue, similar to if your connection was blocked because malware was detected.

I don't think this practice is a big issue, because the local machine would have to be compromised for it to be an issue, in which case it's irrelevant because the game is over already. Also, the alternative is not scanning SSL traffic for malware, which has its own very real risks.

The issues I've mentioned (problems dealing with self-signed certs, Cert pinning and EV-SSL) don't have much to do with the client being compromised. They're examples of how SSL MITM (even assuming no implementation flaws) can damage user security by breaking the operation of SSL for user web access.

I think what the Blue Coat proxy at my work does in this case is just inspect the traffic and pass the invalid cert to the client, since it is invalid already. If the cert is valid, it replaces it with its own cert.

It can simply stop the connection and show the user, under the normal certificate, a message telling him there's a problem with the cert.

So to do that it's going to stop the user's browsing session, redirect them to a local web page and then present something to let them make a decision about carrying on? Not the best user experience in the world…

But remember, like I said, that's just one example of why it's a bad idea; there are others, e.g. what do you do about EV-SSL certificates? You can't fake the browser element for them (remember, this is the case where the A-V product hasn't hooked the browser), so where you want to MITM an EV-SSL connection you have to downgrade it to non-EV.

Also, what do you do with certificate pinning (either built into the browser or via HPKP headers)?

Of course there are lots of what-ifs, but how do you tamper with HTTPS traffic if you don't MITM?

Easy, you don't tamper with HTTPS traffic, it's innately a very bad idea.

Consider the goals being pursued. You're attempting to stop the user either downloading malicious content, or perhaps getting hit with a browser exploit, or possibly you're trying to stop users going to a "bad" site.

The first one can be covered off with traditional on-access scanning of files.

The second one is much better addressed by improvements in browser sandboxing or general app. security.

The third one can be handled at the DNS level with reputation-based block lists.
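That third approach is cheap to implement: a resolver-side reputation filter is essentially a suffix match against a blocklist. A minimal sketch (the blocklist contents here are made up):

```python
def blocked(hostname, blocklist):
    """Return True if the hostname or any parent domain is on the blocklist,
    mirroring how DNS-level reputation filters typically match."""
    labels = hostname.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in blocklist for i in range(len(labels)))
```

A resolver doing this returns NXDOMAIN (or a sinkhole address) for matches, with no need to touch TLS at all.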

That doesn't work for corporate communications, though. There are numerous use cases where a corporation must be able to penetrate HTTPS internally in order to comply with regulations, both for direct reasons such as regulations regarding corporate communications, and indirectly for things such as internal security, protection against insider threats, and a lot of other second-order issues like intrusion detection.

And, rolling back around to the main topic, preventing internal machines from being compromised by viruses, since a lot of people end up having to hit at least one website of some sort that has at least one tracking or advertising widget that could by three layers of indirection get compromised to serve viruses, which is, alas, not some sort of far out scenario nowadays, but just another day of the week on the web. (That is, even perfectly safe browsing habits can still get you owned on the modern web. And saying that a modern network can't count on "firewalls" and must have defense in depth still doesn't mean it's just peachy keen if an internal machine gets compromised.)

If you're going to insist those corporations can't penetrate HTTPS for compliance and security reasons, you're going to have to be willing to lift those restrictions and deal with it when their security fails. There's no two ways about this; either you grant them the necessary tools for compliance and security, or you stop complaining when they can't comply and aren't secure. (And at scale, let's be honest; the latter isn't on the table.)

I know corps need to do that, and they get to handle the trade-offs that it generates (although I'd argue that HTTPS interception doesn't in any way provide a panacea for the internal security issues you've mentioned).

The advantage is that they should have informed professional security people who can understand the trade-offs and make intelligent decisions about them.

Even then, this strategy fails against certificate pinning, which is becoming ever more common in the mobile and also web space, so corps need other solutions to those problems (likely endpoint-based).

However what we're talking about here is end-user A-V products and their use of HTTPS interception at a desktop level and the trade-offs that this forces on individual end users who are less equipped to handle this.

Realistically, the A-V product will likely choose to cause "less noise" for the user, so it won't present detailed technical information about the errors it's masking, potentially making the user's security worse.

If the alternative is not scanning SSL traffic for malware, then perhaps, if it's done correctly, it's not a bad compromise. For example, a broken upstream cert should just be treated the same as if malware was detected. I bet good AV would update revocation lists more often than the OS and browser do, too.

I don't think you understand how this works. They install a root certificate on your machine and do a MITM "attack" so they can scan the URLs and block some attacks (I remember when some forum had an embedded PDF carrying an exploit, and the antivirus blocked it).

Also, you have installed an application that has root access to the PC; if it were malicious it could do a lot more damage. It is ultimately a question of trust.

I created and installed my own root certificate because I don't want to click through the exception every time I open a new incognito window; it's especially annoying for WebSocket connections.

That is a bad idea.

If you MITM the connection locally it triples the computational cost for both encryption and handshake operations. Then more websites don't use TLS because it's three times as slow for the user.

It also prevents you from using a good cipher suite when the MITM doesn't support it even though the browser and the server both do, again reducing security or performance or both. And it's very easy to screw this up the other way and have the browser show a good secure connection with strong primitives and forward secrecy while the MITM is actually communicating with the server using export ciphers or RC4.

The existence of a trusted root private key on your machine exposes you to key compromise impersonation (KCI) against all servers. And key compromise is not even necessary if they use the same root private key for everyone, which has actually happened.

This is not a comprehensive list of the reasons why that is a bad idea.

OK, some of those are valid concerns, but I would argue that being infected trumps all of those. They have to get it right only once.

Compromising TLS is an infection vector. People regularly download programs from trusted websites and run them. Some apps automatically download updates from the vendor's site via TLS.

AV scanners do not have a 100% detection rate. Letting malware be where a trusted program is expected is how you get infected.

I don't think you understand how HTTPS works.

> so they can scan the urls, and block some attacks

The purpose of HTTPS is to provide a guarantee that your connection to Google is direct, with no intermediaries, such that (1) only Google knows your search query and (2) you get a guarantee that the received content is from Google.

And you get this guarantee from certificate authorities that have a good reputation and that are in business because they've proven they can keep their shit secure. And when one of them violates that trust, the OS / browser vendors can start to invalidate their certificates. AV companies are bypassing it all.

The blocking of attacks reasoning is kind of bullshit, because Google's Safe Browsing service and browser extensions maintained by a community like uBlock Origin are doing a better job of warning against potentially malicious websites. There are always vulnerabilities to exploit of course, though it's getting harder for those to pop up due to the modern sandboxing of browsers.

However I have yet to see evidence that AV software is doing a better job of catching those, because it's a whack-a-mole game and it's more likely that browser vendors find and fix those vulnerabilities faster than AV companies, because bugs get reported to browser vendors first. And sure, if you have Adobe Reader or Oracle Java installed as plugins in your browser, that's a huge risk, but it's actually easier to uninstall those and browsers have started disallowing plugins. Safari for example is disabling everything by default.

The problem with installing their own root certificate is precisely one of trust. Yes, you allow a piece of software to run with root permissions, for as long as you don't see it do stupid shit, like installing a root certificate, at which point all of that trust should be gone.

And that is because a custom root certificate that doesn't belong to a competent certificate authority cannot be trusted and will increase the attack surface. This is security 101.

"The purpose of HTTPS is to provide a guarantee that your connection to Google is direct, with no intermediaries, such that (1) only Google knows your search query and (2) you get a guarantee that the received content is from Google."

This is a popular misconception, but false. SSL creates an encrypted tunnel so that nothing "in the middle" can penetrate the tunnel (in theory), but there is no contradiction in the two sides of the tunnel delegating out their trust. There better not be, because in practice, there are almost always intermediates now on the real web. It is very common for a WAF or a load balancer to be the thing responsible for the SSL rather than the server generating the response, or you have CDNs or DOS prevention like Cloudflare doing the real work, etc. etc. There is no particular problem with the user doing the same thing. Sure, they can be irresponsible with it... well, so can the HTTPS server side, so, well, yeah? If you're going to declare a particular encryption technique unusably flawed because it could be used incorrectly and insecurely, you're not going to be encrypting very many things.

In fact, the very web connection you are reading this on, if I understand it correctly, was not actually encrypted by YCombinator. It uses a cert they own, but they're not the ones terminating the SSL connection; that's been delegated to a trusted third-party.

See also: https://news.ycombinator.com/item?id=8383466

Cloudflare et al don't terminate TLS using their own root certificates installed on the client, which means that doesn't expose you to KCI of all servers.

And they don't MITM the TLS connection, they terminate it. The difference being that the performance is better rather than worse (so more people use TLS instead of fewer), the server is aware of this happening so it isn't fooled into thinking the connection is using more secure ciphers than it actually is, there is no third party forcing lowest common denominator security between the three, etc.

There are no intermediaries; AVs MITM the traffic because otherwise they cannot scan the content or the URLs. If it were done remotely on the AV vendor's server then I would understand the objection, but it's doing it on your local machine, and you can disable it if you don't want it. If I had an antivirus installed on my PC and got infected by one of those drive-bys, I would be furious, because I was thinking I'm protected.

I don't know where you live, but there is a TON of different JavaScript injections in the wild via ad networks that are not caught by browser vendors, or are caught too late. Google does not do a deep scan of the page or files; it just has a blocklist of URLs. If I host my code on another URL or change a file slightly, only AV helps in this situation.

And remember they have to get it right only one time.

I still don't see why AV should scan websites. Get an ad blocker + JavaScript blocker (with a whitelist for trusted sites), and AV to scan local files. MITM'ing TLS creates more trouble and potential danger than it solves.

>The blocking of attacks reasoning is kind of bullshit

>I have yet to see evidence that AV software is doing a better job of catching those

I wouldn't be so one-sided. Imagine a fresh new variety of ransomware starts spreading. No one can catch it on day zero, but good AV can catch it on day one (OK, within the first week), while neither Google nor uBlock or the likes can.

Do you have any evidence for these claims? What's the concrete mechanism that allows AVs to observe and react to threats earlier than Google? (Since you allow up to 1 week of reaction time, I'll assume that you're not referring to heuristic detection methods.)

With a cloud reputation service, the entire AV user base (provided it's sufficiently large) turns into a global sensor network, along with the honeypots vendors maintain separately. This makes it possible (at the cost of users' privacy) to detect new emerging threats within hours, then acquire samples, analyze them and deploy new signatures within days.

Google can of course react equally fast. But the "signal delay" may be much higher, as users report only URLs they can immediately link to their troubles, e.g. malware that crashes the browser.

And second, what Google can do now is block only one attack vector, namely the web page.

Thinking rationally, chances are high that Google is seriously considering entering the AV business. They are in a highly advantageous position to do it successfully, with their user base, resources and AI tech.

Google owns VirusTotal, so they either already have a strong set of tagged samples to work with, or an incentive not to disrupt their partnerships with existing vendors.

Good point. A huge, high quality dataset to train their own AI-based malware detection engine. Someone must be doing this already, at least as a research project.

What's more, as third party antivirus software becomes increasingly irrelevant, many of these companies resort to harmful and even actively malicious tactics to stay in business. On the more benign end, you see an increase in 'safe web browsing' and such tools that parse javascript while browsing and somehow attempt to make it.. safer, I guess. My main experience with these things is when they randomly decide to block bits of code on our sites, breaking functionality for no discernible purpose.

Far worse are the lengths that a company like AVG will go to to get and keep their software installed on your computer. Their browser toolbars essentially take all the dirty tricks they've apparently learned dealing with malware to.. build a piece of malware. Honestly whether it's active malice, incompetence, or lack of motivation I don't know, but I do know I've spent hours trying to extract their stuff from people's browsers. (I should say here that I fully expect someone reading this has managed to uninstall an AVG toolbar with no issues. They have multiple different auxiliary tools to their antivirus, and I'm not sure specifically which one(s) caused me trouble personally. It's also likely that they're only a _real_ pain in certain circumstances. But regardless, if you google something like 'how uninstall avg' or 'avg malware' I'm sure you'll find many more examples.)

> I should say here that I fully expect someone reading this has managed to uninstall an AVG toolbar with no issues. They have multiple different auxiliary tools to their antivirus, and I'm not sure specifically which one(s) caused me trouble personally.

I can say this: I never had a problem with uninstalling a browser toolbar, or restoring the default search engine in the browser. What I always have problems with is getting rid of the AV software itself. Oh God, how hard it is sometimes.

Norton AV taking half an hour to uninstall is a known thing; I'm convinced they actually have some Sleep() calls in their code just to piss people off. But just last week I tried to get rid of Comodo AV (+ 2 bullshit pieces of software it installed) on my neighbour's computer. Took a while. The uninstaller didn't work (it reported "an error" and gave up), so ultimately I had to resort to manually deleting stuff until the uninstaller finally unlocked itself and cleaned up the rest.

I've been having similar experiences with all AV software in the past few years. They're a menace.

Ya, IIRC it wasn't specifically the AVG toolbar, but some other thing integrated into the browser. It refused to uninstall, and then even when I downloaded and ran the super-secret uninstaller from their site, it replaced itself on the next restart. Extremely frustrating.

A friend of mine worked as an on-site contractor for AVG. He claimed that the "toolbar department" in the company, working on the browser toolbar that displays ads, is as big as the "antivirus department" working on the engine (or bigger). It shouldn't be a surprise, since the toolbar is the main revenue source for the company.

Given how they often MITM the connection they would be able to do things like reorder Google search results. This would be a huge revenue stream. Can also sell browsing data to advertisers to target specific people.

Ok, disclaimer first: I've previously worked at Kaspersky Lab (incident response division). Now, I want to say that many of the incidents that we have investigated, would have been prevented by anti-virus software (in many cases AV software was deliberately disabled by user). And I'm talking about incidents that resulted in million-dollar thefts - not just cases of some user getting cryptolocker on their home computer. I agree that AV software is bloated and has very large, messy and barely maintainable codebase, but I disagree with people who say that "I have never used any AV products and in 10 years have never been infected with malware" - this attitude is careless, to say the least, and in corporate environment could lead to huge financial losses. There are many criminal groups that put serious effort in the development and distribution of malware - not just script kiddies, but professional programmers and hackers.

BTW, there are also region-specific malware - so for example I would rely more on Kaspersky for detection of malware targeted at Russian businesses, than Symantec or Microsoft AVs.

Just to play the devils' advocate, I do think that the attitude of "never use AV products" could work in corporate environment, provided the administrators are competent and draconian enough to counter-weight the absolute incompetence of users (because, frankly, the largest attack surface is the incompetence of the user):

use security policies of the domain to only allow whitelisted applications to be run;

restrict internet use to whitelisted destinations;

configure mail servers to accept only whitelisted sources, use DKIM/DMARC, and reject multipart messages.

Mandate usage of wired-only HID peripherals which are soldered to the port. Don't use wifi, and physically secure the access to network wires.

Glue shut all other computer ports.

Go all-out Saudi Arabian with people who don't comply with security policies and punish them by removing digits and public hangings for repeated offenses.

It's really that simple.

I work as a security consultant for a major tech company and my clients are almost always Fortune 500 (with some Fortune 100 companies, and at least one top-10 company). When they hire us, we get to learn everything about their security infrastructure.

The trend is clear: AV is out, Carbon Black (or Crowdstrike, etc) is in. This is especially prominent in the financial industry. My wife works at a tiny local bank and they're doing trials of Carbon Black.

AV is terrible software, the chemotherapy of the security world. It only exists because it's slightly better than the alternative, and if you don't have an active disease, it acts as a disease of its own. You're glad it's there when it saves your life, but you curse its name every day. Application whitelisting tools don't interfere with the day-to-day workings of your computer, but don't let the bad stuff in. You're only allowed to run the software you need to run, and nothing else.

It's not set-it-and-forget-it like AV, but it's a damn sight more effective and less annoying to the users.

Except AV started out the way Carbon Black or Cylance did (lean, effective, buzzworthy, etc.), like other popular applications. It was decades of feature creep, poor competition, out-of-control pricing, etc. that killed the AV industry.

I'm seeing the same thing today. Getting a trial of Cylance for a small environment seems next to impossible and when 3rd party testers test these apps, the false positive rates are terrible. Worse, they miss a lot of obvious malware traditional AV doesn't.

I am skeptical this technology is some silver bullet for the industry. I imagine cryptolocker changed the game, where it's now politically expedient to whitelist everything, be it application, driver, URL, etc., where in the past IT departments were told to pound sand because some executive couldn't install Bonzi Buddy on the weekend or whatever.

Once you have proper whitelisting then you can pretty much remove AV or go with a non-traditional AV product like the kinds you list or no AV at all. Whitelisting requires a centralized IT department, no BYOD, and a lot of other infrastructure and talent smaller organizations simply don't have. I suspect traditional AV is here to stay for rational reasons and the technology behind things like CB or Cylance will eventually be part of a traditional AV package.

Arguably, the heuristics behind Win10's more advanced SmartScreen are a poor man's version of this, and SmartScreen comes with every copy of Windows 10 (the Win7 version is actually very poor). I imagine there's a lot of anxiety about being acquired by these companies before traditional AV reverse-engineers what they do, or SmartScreen gets good enough to the point where you can run a flawed local AV and still get some world-class heuristics watching your back as well.

Whitelist-only works until it doesn't. All an attacker has to do is compromise one of the whitelisted apps (e.g. a web browser) and they will have infiltrated the device and perhaps the network. Certain institutions can tolerate operating as a digital supermax prison (law firms, banks, Government). Most can't. The future is likely some mix of network defense, whitelist/blacklist management, traditional AV for each device, VMs (less effective with migration of apps to cloud), and lots of user education.

I'm pretty sure no AV would help against targeted attacks on a high-profile target. If you have a multi-million-dollar business to secure, you're playing with a totally different risk model.

That's exactly what I had in mind when I read the GP. If third party AVs have a large and complex codebase with unknown or even known security flaws, they won't help much against targeted attacks or make them even easier.

On the other hand, AV usability is so bad you can't expect it to help "normal" people. All those popups do more harm than good when people start ignoring them.

Well, I agree that AV most likely wouldn't protect you against targeted attacks - but most of the attacks that we investigated were targeted quite broadly - phishing email campaigns targeting financial organizations (with address lists based on some hacked legitimate resources for accountants, for example). And usually these attacks succeeded because of insecure infrastructure, poorly trained admins, old, non-updated systems (some people still think using Windows XP on internet-connected computers is fine), and a lack of AV software.

"usually these attack succeeded because of insecure infrastructure, poorly trained admins, old, non-updating systems (some people still think using Windows XP on internet-connected computers is fine)"

In this case, there are much bigger problems than the lack of AV.

> most of the attacks that we investigated

Isn't that a case of the survivorship bias? Or at least the broader case of selection bias?

What do you mean, exactly? All I want to say is that while targeted attacks are the most difficult to defend against (well, by definition), it is the medium-sophistication attacks that cause the most damage (in my experience), just because of their volume. It's not some state-of-the-art APT malware; it's bundles of RATs + generic backdoors/keyloggers packed in SFX archives, which are usually quickly detected by most AVs (provided that the AV bases are regularly updated).

Maybe some of them thought they were fine because they were using AV software? I know what you mean, but the marketing departments of many AV vendors praise it like some kind of all-around solution. I'm pretty sure some people think they can get away with disabling updates etc. and then just buy AV software afterwards when they feel they can't handle their systems anymore.

Maybe the perception that you can achieve some kind of security through band-aid solutions is exactly the cause for the lacking security of many organizations?

> in many cases AV software was deliberately disabled by user

Right, because the only way AV software can ever be effective is if it blocks things that legitimate programs also do (if a given piece of functionality has no legitimate uses it wouldn't be in the OS in the first place) - so users get in the habit of disabling it. Installing a piece of software that e.g. stops you running any downloaded .exe files is useless - if you didn't want to run the .exe you wouldn't be trying to run it, and if you do want to run it you'll turn off the antivirus. If you just want to disallow it completely, you can do that at the OS level easily enough.

There is no magic that AV can do to make it any easier to tell legitimate software from not. Reactive scanning for specific threats is ineffective in the modern era - by the time AV knows about a new form of malware most of the damage has already been done. So all that AV can do is monitor what programs do and apply inherently unreliable heuristics, and maybe be more or less sensitive about those heuristics than the OS is.

The example with .exe files isn't a good one. Modern AVs may do a better job than just blocking them. I use Norton AV, which shows a report summary on newly downloaded files, based on which I can make an informed decision on whether to launch them or not (I personally launch immediately only trusted executables and google for any issues with the rest). The same can be done with all threats: AVs warn, provide some details and let users decide what to do.

> I use Norton AV, which shows a report summary on new downloaded files, based on which I can make informed decision on whether to launch it or not (I personally launch immediately only trusted executables and google for any issues of the rest).

Trusted in what sense? Does Norton maintain their own whitelist? Is there any reason to believe that whitelist would be any better than the digital signature check that's built into windows?

> based on which I can make informed decision on whether to launch it or not (I personally launch immediately only trusted executables and google for any issues of the rest). The same can be done with all threats: AVs warn, provide some details and let users decide what to do.

But what information can the AV offer that actually helps the user makes a better decision than they would have otherwise?

For apps, it looks like they have a whitelist based on usage statistics, so it's basically vetting by other users of NAV. It does not replace the digital signature check, but it's a good addition to it.

For other threats it can be a similar solution.

Any AV software is better than having none but that's not the point of the article. It specifically recommends Microsoft's AV and to stay clear of all the others.

I'm sure it's hard on all the AV vendors out there, but with Microsoft Security Essentials and Windows Defender I don't see the need for a third-party AV.

IMO, common sense, basic hygiene practices, a minimal education and a decent firewall go a longer way, and are much better than an AV could ever be.

For example the most common way people get infected is by installing software from unreliable sources and by not keeping their computer up to date. I'm pretty sure that learning to regularly update your OS and browser, learning to search, recognize and use the official sources for software, to stop doing software piracy for that matter, learning to not click on .exe files received in emails and to be suspicious of all attachments, learning to uninstall everything that infects your browser with useless plugins, I'm pretty sure such simple knowledge would cut 99.9% of all incidents.

Most software vulnerabilities in the wild are not novel, "zero day" exploits are not that common. This is why even though I hate Microsoft's recent update policies, on the other hand I understand their newfound aggressiveness in pushing those updates, as it is really frustrating that users ignore update warnings. I also appreciate Chrome's fast updates, which encouraged Firefox to do the same.

Forget even Windows Defender. The one and only "AV" a normal user will ever need is…

Google Safe Browsing.

Anything you download is already checked with Google, why waste CPU cycles on checking it again locally?

Google runs the largest advertising network in the world. Plenty of malware slips through the cracks every day, both downloadable apps/software/extensions as well as ads that lead to obvious scams. Facebook, Microsoft, Yahoo etc all suffer the same problems. I think these problems are likely unavoidable at that kind of scale. But I would never rely on these companies as the only (or even primary) line of defense.

Of course the primary line of defense is not running random crap executables.

I'd also recommend uBlock Origin or similar. The number of fake download links you see otherwise is scary.

You need an antivirus that can watch running programs for bad behavior. Polymorphic viruses have been around for decades and will defeat any simple blacklist. And the halting problem means you can't possibly categorize every program as being harmful or not by static analysis.
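A toy sketch of that point (hypothetical, nothing like real malware analysis): XOR-"packing" the same payload with a different key changes every byte of the file, so a blacklist keyed on file hashes sees two unrelated files even though the decoded behavior is identical.

```python
import hashlib

def polymorphic_variant(payload: bytes, key: int) -> bytes:
    """Toy 'packer': prepend a one-byte key and XOR-encode the payload."""
    return bytes([key]) + bytes(x ^ key for x in payload)

def decode(blob: bytes) -> bytes:
    """Undo the toy packing: XOR again with the stored key."""
    key = blob[0]
    return bytes(x ^ key for x in blob[1:])

payload = b"identical malicious behavior"
v1 = polymorphic_variant(payload, 0x5A)
v2 = polymorphic_variant(payload, 0x3C)

# Same behavior after decoding, but no shared signature to blacklist:
print(hashlib.sha256(v1).hexdigest() == hashlib.sha256(v2).hexdigest())  # False
print(decode(v1) == decode(v2) == payload)                               # True
```

Real polymorphic engines are vastly more elaborate, but the hash mismatch is the whole trick: the blacklist can never enumerate every encoding.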

One reason is that they simply do not perform as well on benchmarks. Another reason is that if there is only one AV vendor, it is a lot easier for malware developers to penetrate systems than if there are dozens of vendors.

Sounds like they're choosing their battles just like the US ones.

My takeaway here is not to trust either Russia- or US-based companies, as none of them will escape working with the secret services. China and India have plenty of exploitative AV-like software as well, mainly for mobile.

Are there any European AVs? Or Japanese? Or South African? I'd love to have something that keeps an eye on, e.g., Microsoft's products, because there's no doubt that they have backdoors and report home.

How does that make them the worst?

Commercial companies in free countries may be greedy or unethical, but they are generally predictable and usually follow the letter of the law.

A state controlled entity in authoritarian country is another story.

It's just about making a choice of which spy agency is going to get your data. NSA for western companies, KGB-or-whatever for Russian ones. If you live in the West, it may be worth considering both options.

Do you have any facts or reasonable suspicions about western AV companies collaborating with the NSA? From what I've heard, they aren't collaborating; the NSA researches AV vulnerabilities just like the bad guys do.

Besides, Russia considers itself in a state of "hybrid war" against the whole world. It sounds insane, but apparently that's what their government believes, and that's what their propaganda broadcasts. That's why an AV product made by a Russian state-controlled company carries some unique risks.

Since mid-December, a high-ranking Kaspersky manager, Ruslan Stoyanov, has been in jail for high treason. Do you know what kind of deal the KGB wants from him? I don't.

There are rumors that he's in jail for association with the "Shaltai Boltai" hacker group, which published the emails of D. Medvedev. Same for the FSB officer who supposedly worked with Stoyanov.

I’ve heard other rumors. I’ve heard he’s in jail because he failed to secure their systems from Ukrainian hackers and 1GB of confidential data was leaked: https://en.wikipedia.org/wiki/Surkov_leaks

But just hearing rumors doesn’t mean we know anything.

Besides the fact a US company probably cooperates with US intelligence, there are plenty of examples of companies outright breaking the law.

> the fact a US company probably cooperates with US intelligence

“the fact” and “probably” are mutually exclusive.

> plenty of examples of companies outright breaking the law

I know and that’s why I wrote “usually follow the letter of the law”. Majority of the companies follow the law, however.

> “the fact” and “probably” are mutually exclusive.

I don't see how; statements about probability can be factual, and we have plenty of evidence that Google, Microsoft, and US telcos cooperate with US intelligence; why should AV vendors be different?

As far as companies usually following the letter of the law... do they? What makes you so sure?

> statements about probability can be factual

In natural science or in medicine you can estimate that probability (thanks to control groups, multiple experiments, statistical methods, etc.). In such a context, a statement about probability can indeed be factual.

In general conversation or in a legal context, it can't. If you have facts, there's no "probably" because you know for sure. And if you don't, it can be your belief or your personal opinion, but not a fact.

> do they? What makes you so sure?

Over my career, I've worked at several US software companies. Lately, I've been working with various US companies as a contractor.

Multiple times, a company put a lot of effort and money into complying with the law: we redesigned our products, moved across states, trained employees to comply with various regulations, and so on. Having friends in the industry with similar observations, I conclude such things happen all the time.

It's a bit strange to go from a hyper-rational scientific stance on one point to using anecdotal evidence for another.

Well, what I've seen with my own eyes during 17 years in the industry is much more believable than the BS about evil corporations that the (mostly liberal) media wants me to believe. That's not evidence, but it's what makes me so sure.

Speaking of which, what makes you so sure plenty of US companies are breaking the law?

Do you ever read the newspaper? Do you remember Enron? Robosigning? Wells Fargo helping Mexican cartels launder money (https://www.theguardian.com/world/2011/apr/03/us-bank-mexico...)? Conflict minerals? Nestle and slave labor (https://www.theguardian.com/sustainable-business/2016/feb/01...)? There seems to be plenty of evidence that corporate malfeasance is a serious problem.

And, yes, there are certainly lots of compliance programs out there, but I'd argue those have more to do with avoiding enforcement action than necessarily adhering to the law. I'd guess Wells Fargo (Wachovia at the time) had a compliance department while they were laundering money for drug cartels and yet it still happened.

I find it eminently believable that many or even most US companies would comply with an illegal request from US intelligence agencies.

Just as I expected, your beliefs are 100% caused by the media.

The image of the world as it’s shown in the media is extremely biased. Watch this: https://www.ted.com/talks/hans_and_ola_rosling_how_not_to_be... The video is about inequality and education, but the topic of corporate crime is skewed just as much.

There are 6 million companies operating in the US, employing 115 million people. If the majority of them were breaking the law, you'd know about it not just from the media but also from some of those 115 million people who happen to be your friends and family.

> I find it eminently believable that many or even most US companies would comply with an illegal request

I don’t find it believable because I don’t see motivation for such compliance.

In an authoritarian state, a government can abruptly take away your business (https://en.wikipedia.org/wiki/Euroset) and optionally throw you in jail for 10 years (https://en.wikipedia.org/wiki/Yukos) if you don’t comply, and you can’t do anything against it. That’s a strong motivation to comply. I don’t see such motivation for western companies.

If your argument is "anecdotal evidence is much better than reported, sourced news" then I have to disagree, regardless of what the TED talk says. The examples are some of the biggest and most prestigious companies in the country and I'm gonna guess most of the low-level schmucks like me and my friends and family aren't in on the huge, illegal operations.

My argument is that you should distinguish two majorities: the majority of media-reported incidents, and the majority of real-life occurrences.

Those two are very different. If you don't distinguish between them, you'll come to absurd conclusions like "the majority of US drivers are drunk", "the vast majority of US citizens voted for Hillary", or "the majority of US companies are breaking the law".

> of the biggest and most prestigious companies in the country

Prestigious means nothing ‘coz it’s hard to measure that. But for biggest ones, here’s the list: https://en.wikipedia.org/wiki/List_of_largest_companies_by_r... Good luck finding Enron or Wells Fargo there.

Here’s some report on financial crimes in 2010-11: https://www.fbi.gov/stats-services/publications/financial-cr...

The total number of financial crime cases in 2011 was around 10,000. Even if we assume each case was against a different company (which gives us an upper estimate), that's merely 0.17% of US companies charged with financial crimes in 2011.

As you see, real data is pretty close to my anecdotal evidence.

And it's very far from your reported, sourced news.

As always, it depends on the product that you are referring to. Purely by coincidence, I installed [product] again a few weeks ago, after having used Defender since Windows 10 launched.

> see bugs in AV products listed in Google's Project Zero

All software has vulnerabilities, including Defender. Searching for [product] in Project Zero shows that only 3 vulnerabilities have been discovered (which is arguably a bad thing, but not according to this author) and it took, at most, 4 days for them to be resolved.

> if they make your product incredibly slow and bloated

This is precisely the reason that I have returned to [product]: performance. I'm running off an HDD and Defender saturates my HDD for a good 2 minutes after boot. I don't experience this with [product]. In addition, it has a "gaming mode" which allows you to further cut back on its activity (I have never needed it). Looking at objective tests, Defender fares quite poorly in both performance[1] and protection[2].

Additionally, a homogeneous market is an easy market to exploit. Let's assume that everyone took this advice and installed Defender. It is guaranteed that Defender has vulnerabilities. If you wanted to pwn as many machines as possible, you would only have to worry about exploiting a single AV.

This is just bad advice, I'm sticking with the competition (which may not always be [product]). There are bad players (McAfee, Norton) but that does not mean everyone sans Microsoft is utterly incompetent.

[1]: http://www.av-comparatives.org/wp-content/uploads/2016/05/av... [2]: https://www.av-test.org/en/antivirus/home-windows/windows-10...

>This is just bad advice, I'm sticking with the competition (which may not always be [product]).

This is my thinking as well. Microsoft's virus definitions are often worst in class, and the agent itself only seems to update its definitions daily or, at most, twice a day, while 3rd-party applications do so hourly or more. I've never seen MSE or Defender stop any ransomware attack. Not once. It just can't move fast enough to keep up.

Avast, Sophos, ESET, Panda, etc all trounce MS. Most of these are free for home and are largely trouble-free. Just because the author had a bad experience with Norton and McAfee doesn't mean the MS product is superior. I suspect the person who wrote this isn't a sysadmin who manages many users. The level at which MS can't keep up is embarrassing. I'm surprised to see this kind of thing at the top of HN.

My only compliment for MS is that SmartScreen is very aggressive in Win10 and will often flag suspicious executables correctly. I suspect the author is confusing SS with Defender. SS works because it's heuristics-based. Defender sucks because it's signature-based. The nice part is that these are two separate applications, so if you run Avast or ESET, you still get SS.

It's also worth mentioning that a lot of Win10 "privacy" guides, often linked on HN, recommend disabling SS. I can't stress enough how questionable a practice that is. SS is a proper security layer, and if sending MS a hash of an executable is such a problem for you, I suggest getting off Windows, as Windows does so much worse with regard to privacy even after following those guides.

> often flag suspicious executables correctly

The false-positive rate is embarrassing, though - especially with reputable open-source projects. Still, unblocking the file potentially gives a user more time to think about what they are doing.

> recommend disabling SS

The last one I saw left UAC turned off. Defender might not be the best (in addition to Windows 10 spying), but Microsoft really does have the best defaults otherwise.

> All software has vulnerabilities

Most software doesn't run in ring0. And most software doesn't actively break exploit mitigation techniques in other software either.

All software has vulnerabilities, but not all software has the egregious blunders that Tavis Ormandy finds --- so many, in such a short period of time.

Furthermore, most software provides value that offsets security risks. Since the entire value of AV products is to improve security, when they fail to do that, they're worse than useless.

The homogeneous market argument is weak. If a determined attacker wants to compromise as many machines as possible with a single attack, they'll come up with an exploit that passes all AV products.

Avira won the speed test? It reliably made every PC I installed it on 2-3 times slower and adds a few minutes to the boot time compared to MSE or whatever it is now called.

The real-time scanning mode of Windows Defender completely destroyed the performance of Cygwin Setup when accessing a mirror stored on my NAS, to take one example. I'm not talking about a few minutes extra; it issued loads of network requests for every signature verification Setup tried to do, the process was still non-responsive after several hours. Turned off realtime scanning, and it immediately finished.

Realtime monitoring has the biggest risk of performance degradation.

Is there any way to turn off Windows Defender without installing anything else?

Yeah, through the new Windows control panel ("Settings" -> "Update & Security" -> "Windows Defender"). Windows will hound you about that (and there is no way I know to turn off the nag).

I've heard if you disable the service directly, then the "Windows Defender reports that the service is turned off" message stops happening.

  sc.exe config "WinDefend" start= disabled
  sc.exe stop "WinDefend"

No good I'm afraid, gave me:

    [SC] OpenService FAILED 5:

    Access is denied.
(this is from an administrator command prompt)

For the record: you can turn it off using the group policy editor.

I thought it didn't just nag but turned itself back on after a while?

As far as I know, only if you fall for the dark-pattern nag. I haven't run into this, either because I haven't had it disabled for long enough or because Microsoft buckled and removed it.

>I'm running off an HDD and Defender saturates my HDD for a good 2 minutes after boot.

Yea, that's why I disable all AV - every install or clean build or untar or w/e brings the PC to a crawl. Haven't had problems yet.

This is my advice to everyone I know that gets a new Windows PC. Windows 10's built-in protection is more than adequate, and catches the majority of bad software - anything more is unnecessary, and many of the AV vendors are predatory.

It sucks that you cannot reset Windows to MS-vendor settings.

For example, if you get some Acer laptop and reset it using Windows' built-in functionality, it'll still be reset with all the bloatware - including the AV.

This should do it:


They've had similar tools since at least Windows 7 IIRC, if not XP - you've just always had to download them separately, and they've never been advertised with much enthusiasm. Probably trying to strike a balance between pleasing power-users and keeping the bundled-bloatware ecosystem happy, seeing as MS benefit financially from both.

Interesting, I will try this on my laptop this week and report back on whether it works.

Can't you just download a pristine Windows 10 ISO from Microsoft's website and install from that? I've done that on my Dell Precision and it works great.

You can! Just make sure it's the exact same edition and language that came with the laptop. It will just pick up the OEM key from the ACPI table.

The problem is that if it's not the same edition as is pre-installed you will lose your OEM license.

Microsoft released a tool[0] that does exactly that

[0]: https://www.microsoft.com/en-us/software-download/windows10s...

Uh, yes you can. Couple weeks ago I reset my Windows 10 MSI laptop to a clean state using nothing but Windows itself. Somewhere in Windows 10 restore options there's an option to format your HD and install only Windows and nothing else. You don't need a DVD nor a USB drive. A click of a button and off you go.

Didn't work for me.

For instances where it's not worth the time to reformat, I've always liked PC Decrapifier. Hard to forget the name once you've heard it :D


Yes, that was the point of the "Signature Windows" series laptops, which looked promising but didn't seem to actually go anywhere...

Windows 8/10 with the MS built-in protection or Linux + clamav

Sometimes I used CCleaner and/or Spybot to deal with something really nasty, but the MS stuff really does a good job (has anyone checked whether hell has frozen over?)

Which AV vendors are predatory? (Though Kaspersky makes me nervous).

What do you think about ESET?

I have 150-ish machines on ESET across a few different clients, and obviously my experience with it has been very good over the years in all aspects. ESET doesn't offer the biggest margins, but I stick with them because it doesn't cause support issues and I can count the number of infected ESET machines I've had to deal with on one hand.

McAfee in particular aggressively pushes "free trials" (often getting them bundled into the installers of unrelated software) that will then show scarily-worded warnings encouraging you to buy once the trial has expired.

I recommend going by test results and by the features that interest you most.

Kaspersky, BTW, has the best scores (See here: https://bestantivirus.reviews/tools/test-results-calculator) but Bitdefender and Avira are also great according to these tests, and Bitdefender is my choice, personally.

Kaspersky just worries me because of the Russian connections https://www.bloomberg.com/news/articles/2015-03-19/cybersecu...

It's sort of like Cloudflare protecting against DDoS attacks while also protecting booters.

My advice is similar these days when someone is buying a new PC, with the additional push that everyone should buy a "Signature Edition" PC [1]. Microsoft requires "Signature Edition" machines to be sold without additional software/bloatware. Friends/relatives can most easily buy such PCs from the Microsoft Store, but also some Best Buys and office stores will sell them if you ask.

[1] https://www.microsoftstore.com/store/msusa/en_US/cat/categor...

What stops the majority of bad software for my family is disabling the administrator account. They don't care about installing patches and antivirus updates anyway.

> At best, there is negligible evidence that major non-MS AV products give a net improvement in security.

I apologize for presenting an anecdote when data is needed, but I manage a Windows network with 100+ users, and on a daily basis Kaspersky catches 5-10 emails in Outlook with nasty attachments. It prevents my users from opening these innocuous-looking but nasty Invoice-Jan-2017.docx files. Without a good AV there is no way to know which Invoice-Jan-2017 has a virus/worm and which doesn't. Relying on the Office security features is not sufficient because actual vendors/customers regularly send us macro-enabled files.

Have you actually tried Defender and tested it against Kaspersky? Nobody is telling you to rely on just Office's security features; Defender is a full-fledged AV product built into Windows. I believe it and System Center Endpoint Protection are essentially the same product, and in fact, in Windows 10 Defender just applies SCEP policies instead of installing a new program.

Email attachments should be scanned by the mail server.

does not mean that the answer to "should I install this 'security' product?" is always "yes".

It does mean that "the server should stop x anyway" is not a reason for the client not to prevent x.

Does anyone on your network have a valid reason to execute Office macros? If not, disable them via group policy. Solves so many problems. See what @SwiftOnSecurity has to say on the topic, they manage thousands of users and it seems to work excellently.

This is the site she runs, with configuration guides. https://decentsecurity.com/

You should set up scanning and remove the virus at the source, on the mail server. End users should never receive these mails. Also, it's a bad habit to receive invoices by mail (who does that? And in a Word docx?)

Does Microsoft security essentials not scan an attachment when you download/open it?

> Invoice-Jan-2017.docx

Uh, docx files can hack my PC now? Is this a bug in MS Word, or does the docx format really have the ability to carry a virus?

There are exploits for pretty much every file format in existence. [1] There are also exploits that work by just having the e-mail arrive in your e-mail client without you having to even open the message. In fact the e-mail may not even reach your computer if you use some corporate proxy which has anti-virus installed. Project Zero revealed just recently a Norton/Symantec flaw where just sending the e-mail is enough for code execution. [2]

[1] Almost none of these are zero day though, so if you're up-to-date you'll be fine.

[2] https://googleprojectzero.blogspot.com.ee/2016/06/how-to-com...

It is a feature [0]. Microsoft Office products allow for "macros", which are Visual Basic code embedded within a document or a worksheet that can be used by developers to add extra functionality to their MS files (e.g. validate all data in a worksheet after a user clicks a specific button in the worksheet).

Just like any programming language, it could be used maliciously, and there is no easy way to distinguish which macro-enabled file is safe and which isn't (without going through the code yourself prior to enabling the functionality)

[0] https://support.office.com/en-us/article/Enable-or-disable-m...
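Since OOXML files are just ZIP archives, spotting a macro-enabled document is mechanically simple: macro-enabled files carry a vbaProject.bin entry holding the compiled VBA project. A minimal sketch (the toy archives below are structure-only stand-ins, not valid Word files):

```python
import io
import zipfile

def has_vba_macros(doc_bytes: bytes) -> bool:
    """Check an OOXML document (a ZIP archive) for an embedded VBA project."""
    with zipfile.ZipFile(io.BytesIO(doc_bytes)) as z:
        return any(name.endswith("vbaProject.bin") for name in z.namelist())

def toy_document(with_macros: bool) -> bytes:
    """Build a structure-only stand-in for a Word OOXML file."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("word/document.xml", "<w:document/>")
        if with_macros:
            z.writestr("word/vbaProject.bin", b"\x00")  # placeholder bytes
    return buf.getvalue()

print(has_vba_macros(toy_document(False)))  # False
print(has_vba_macros(toy_document(True)))   # True
```

Whether the macro is *safe* is of course a different question; this only tells you a VBA project is present at all.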

For this exact reason docx macros are disabled by default and you have to do some enabling. Presumably there are also more sophisticated exploits that don't rely on the user dismissing multiple security warnings.

These viruses show a blank docx file in macro-disabled mode with only one image, which says "Enable macros to view secure invoice" and shows a picture guide on how to enable macros. Some of them have better instructions than the user guides I write for my users.

Still, some user intervention is required. Assuming you found a vulnerability in Office, it'd be preferable to have a vector where the user just had to open the file.

Normally docx viruses are simply VBA scripts, but sometimes they exploit an ActiveX embed or an image-rendering bug.

However other times things like browsers do dumb stuff:

docx files and Silverlight files are both just zip files with completely different structures, meaning they can live together in the same file.

IE used to look at txt files that contained HTML tags and decide "hmm, maybe I should display that as HTML".

That meant that on sites which accepted txt and docx uploads (a lot of recruitment sites, etc.) you could upload a txt file that simply embedded the docx as a Silverlight component. When the admin looked at the txt file, it would run the code as the currently logged-in (admin) user.
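The "two formats in one ZIP" trick can be sketched in a few lines. The entry names below are the markers each format's loader looks for ([Content_Types].xml and word/document.xml for docx, AppManifest.xaml for a Silverlight .xap), but the contents are placeholder stand-ins:

```python
import io
import zipfile

# Build one archive carrying both a docx-style entry set and a
# Silverlight (.xap)-style manifest; each loader only inspects
# the entries it cares about, so the file can parse as either.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("[Content_Types].xml", "<Types/>")      # docx marker
    z.writestr("word/document.xml", "<w:document/>")   # docx body
    z.writestr("AppManifest.xaml", "<Deployment/>")    # xap marker

polyglot = buf.getvalue()
with zipfile.ZipFile(io.BytesIO(polyglot)) as z:
    names = set(z.namelist())

print({"word/document.xml", "AppManifest.xaml"} <= names)  # True
```

A real exploit would need valid contents for both formats, but ZIP's tolerance for extra entries is what makes the coexistence possible at all.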

An extraordinary number of Cryptolocker outbreaks were due to .docx files containing macros.

Yes, it has a default behaviour of "prompt to execute macros", but it happily shows the advice in the malicious document to "please click yes at this prompt to get a free iPhone", at which point the majority of users click "yes".

I think Swift on Security posted a tweet about this a while ago, with a screenshot of completely banning all Office macros via group policy.

Office macros are really useful, though.

.docx files can't contain macros

Correction: It's .doc files I've seen the majority of this behaviour in.

.docx files could contain macros just fine.

They cannot. Anything that has macros has to be docm.

Sorry, my bad. I meant files in OOXML format.

Probably .docx.exe or .docm (or whatever the macro enabled document extension is).

Are you the target audience for this blog post, though? As far as I can tell, the post talks about one-user setups.

I'd argue that the starting point in a corporate environment, where you can assume that users can be quite negligent, is fundamentally different from a one-user setup, especially since I agree that you can't "fix" the user in corporate.

You want the e-mails gone, not just a warning about it, but a warning is perfectly fine if you're one person and have an idea what you're doing.

I don't see any caveats in the article about the kind of user that the advice is intended for.

In your case, the av can run on the Linux mail server.

Do you scan on the endpoints or centralized on the mail server?

For managed networks you should configure Application Whitelisting. Users simply shouldn't be able to click on something and execute it.

Wouldn't it be preferable to catch those on the e-mail server?

It's irresponsible to make such a broad claim and back it up with really vague anecdotal evidence. Yes, there are a lot of lousy AV products that are at best a break-even for security, but there are some that don't suck and generally you have to pay for them - what a strange concept.

I'm not going to advocate for any particular vendor as I used to work for an AV company (and currently use a product from a competitor). But I can attest that I've used products that have caught threats that Windows Defender didn't, and many products also include a much more robust and configurable firewall.

It's annoying when someone else's lousy code breaks your own code. This happens to the sites I administer frequently, where we will randomly get blacklisted by some no-name AV product's web security feature. I understand the frustration when you have no control over this. But to conclude that all AV software is bad does not follow from the evidence given.

> really vague anecdotal evidence


> I can attest that I've used products that have caught threats that Windows Defender didn't

Since you brought it up, the latter statement sounds suspiciously like the very definition of "really vague anecdotal evidence". SCNR

Perhaps it is counter-anecdotal evidence showing the futility of anecdotal evidence as a whole.

I completely agree with you; I find this "disable antivirus" recommendation to be such bad advice! Yes, it may work for a tech-savvy or security-aware person. If you know what you're doing, you're much less likely to get into problems. It won't work for the general public though.

And the argument being made, that "for example, see bugs in AV products listed in Google's Project Zero. These bugs indicate that not only do these products open many attack vectors", could be made about any piece of software you install.

Actually, I don't think it could. A-V products inevitably insist on running with very high privileges on target machines, restricting the OS's ability to mitigate any vulnerabilities.

A-V products have also been shown by research from Google Project Zero to be doing very dangerous things (like running a local web server you can send commands to that are executed on the device).

When you combine high-privileged code with dangerous practices you get a very nasty set of risks that aren't present with most other software.

As there is an alternative that doesn't have similar problems (MS Defender) it seems sensible to recommend it.
