AV products like Bitdefender will MITM your HTTPS connections by installing their own root certificates, and they do this by default.
And consider that I, a highly technical and security-conscious software developer, only noticed because green icons started appearing in my search results, and when I looked closer, Google's SSL certificate turned out to be a fake. I only caught it because I know how this shit works and those green icons seemed suspicious.
And yes, I'm using the word "fake", because I doubt that companies like Bitdefender have to pass the same certifications as a certificate authority, or that they have any deals whatsoever with Google. And it's a serious vulnerability, because their certificate can get stolen and used by malicious software, not to mention that you now have to trust a third party with all of your secure connections: your Google searches exposing your most secret desires, your Facebook and Slack chats, your bank account, everything. A third party that does not have the scrutiny of your open-source web browser.
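A quick way to spot this kind of interception, by the way, is to look at who actually signed the certificate your client received. The following is only a sketch — the expected-issuer list is illustrative and the heuristic is crude — but a leaf certificate for google.com issued by an AV vendor is exactly the tell described above:

```python
import socket
import ssl

# Organizations we'd expect to see issuing google.com's certificate
# (illustrative list, not exhaustive or authoritative).
EXPECTED_ISSUERS = {"Google Trust Services", "Google Trust Services LLC"}

def issuer_org(cert):
    """Extract organizationName from the issuer of a certificate dict,
    in the nested-tuple format returned by ssl.SSLSocket.getpeercert()."""
    for rdn in cert.get("issuer", ()):
        for key, value in rdn:
            if key == "organizationName":
                return value
    return None

def looks_intercepted(cert):
    """Heuristic: flag the connection if the issuer isn't one we expect."""
    return issuer_org(cert) not in EXPECTED_ISSUERS

def fetch_cert(host, port=443, timeout=5):
    """Fetch the peer certificate for a host (requires network access)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

On a machine running an interception-enabled AV, `looks_intercepted(fetch_cert("www.google.com"))` would presumably come back `True`, because the chain terminates at the AV's private root instead of a public CA.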
That's just preposterous and these products only survive because users are gullible and technically illiterate.
Of course, being able to peek into HTTPS traffic gets them more data (specific URLs, not just whole sites).
Does anybody else know your search history? Besides the NSA, who I assume have access to all US-hosted data, no. And not even Google knows my most sensitive searches, because my private mode is a Tor Browser connecting to DuckDuckGo, answering for my porn needs mostly.
And I trust Google to keep my data safe more than I trust shady AV companies, because Google has hired a lot of security researchers, at their size all eyes are watching them and their behavior has been acceptable compared with that of others like Facebook.
Information security is all about compartmentalization ;-)
Not because there are no potential issues to discuss around ad-funded free services and data aggregation, but more because it's like clickbait (in that it oversimplifies a complex issue for emotional effect and makes discussion of actual issues more difficult).
Using algorithms to build a general profile in order to increase relevance is not the same as "selling your data". They sell access to your eyeballs in much the same way as a broadcast TV station or free alt-weekly does. The main difference is that by getting some sense of who you are and what you might be interested in, they can decrease (in theory) the amount of irrelevant ads that end up on your screen versus the traditional methods of just blanketing things with general ads or using cruder demographic info.
Lots of companies flat-out do sell your data, either in aggregate and somewhat anonymized or in full. I've not found anything yet that leads me to believe this is how Google runs their advertising business. To this day, my main concern with Google isn't so much with Google as it is with a malicious third party somehow gaining access to the info Google has on me.
That's not the point. The point is there is a dark market trade in your personal identifying data and metadata, with the ultimate goal of knowing everything about you. The more they know about you, the better they can advertise to you.
Advertisements on the Internet are bought and sold algorithmically on high speed marketplaces. Omniscience leads to better decisions in this environment. Therefore your data is precious to whomever owns it.
I totally agree with that part, including the last sentence of your post. However, I don't quite see the difference between "you are the product" and "access to your eyeballs is the product", it's just describing the same thing differently. That others are worse doesn't really change that, the ones who are better are the standard as far as I'm concerned.
As the target of such surveillance and data collection, you just never really know how that data will be used or who by. It's optimistic, to put it mildly, to expect that data will only be used for purposes you don't object to by good people with the best intentions.
If you have the time, would you mind expanding on why you consider Google's behavior better than Facebook's? I find myself very wary of FB but much less so of Google, but I can't really explain why.
Facebook manipulates the emotions of its users:
MasterCard to access Facebook user data in order to make you spend more:
Facebook ads use your face for free:
Facebook is inventing phony likes to promote stories you've never seen to your friends:
Facebook guesses your race and uses it for targeting:
Facebook has been the main promoter of bogus news and misinformation in 2016:
Facebook has started to collect WhatsApp data, in spite of the app's original policy:
Facebook collects the texts that you don't send:
Facebook tracks and builds user profiles for people without accounts:
Facebook's privacy settings have been designed with dark patterns, making it easy to publish by mistake:
Facebook makes it easy to tell when you're asleep:
I used a fake name on FB since the first day I signed up along with a photo of my favorite rock star. About a year ago, someone outed me and FB locked my account and said unless I emailed them a copy of my drivers license or some form of identification that proved who I was, they would keep my account locked.
I thought, "Whatever, I'll just fire up a new one."
This past month I signed up a new account under a very generic name like "John Johnson". No problems. I never uploaded a photo, just connected with the handful of people (less than 5) and felt like, "ok, we're cool now".
Yesterday, I got the same message, and now the links you provided make a lot of sense as to why. FB really is after all of your data and essentially forces you to give it to them; otherwise they hold your account hostage. Both times, I really felt like my privacy was being trampled and that this was a big intrusion to get at my personal information. No way am I going to give them any identifiable information about me. I already gave up an account I had used for roughly a decade instead of coughing up my personal information and photo.
So yeah, I'm done with FB - not that I was ever a huge fan, but this past week just confirmed what I always suspected.
Have algorithms that detect spam? Let users report on accounts being used to spam other users?
Sure, if I'm someone abusing the system, this should be easy to ferret out without having to surrender all your personal information and identifying markers just to make a SOCIAL MEDIA platform free of spam.
I think Google just markets its intrusions better than Facebook does... something to do with the public slogan "don't be evil". Indeed, "evil" is like "common sense": everybody has their own and understands it in whatever way is comfortable. Seems like pure marketing.
Does anyone have an argument about whether this different feeling of privacy intrusion/protection between Google and Facebook reflects reality?
Of course there are some other, less intrusive search engines (DuckDuckGo, maybe Qwant), but unfortunately they are still less effective than Google for fine-grained or rare searches.
But for the average user, until a better Google comes along, I think it's OK to trust Google with their searches. And compartmentalization is paramount to information security, my point being that trusting some other company besides Google with that data is not acceptable, which is why I find that intercepting HTTPS connections is simply wrong and evil, regardless of the reasons. That's besides the fact that intercepting HTTPS traffic increases the attack surface, making users less secure.
We know Google does what it does. DDG's reason for existence is predicated on not doing so.
If DDG were found to be lying, I'd guess >80% of its customer base would evaporate overnight. It would mean destroying many years of branding, trust and relatively difficult cultivation of user browser defaults.
But that's worth gaming out - what would make it worth it to light all that on fire? About the only thing I can think of is a Lavabit-style conundrum, wherein our intelligence-overlords threaten someone's freedom. So, absolutely could happen, absolutely would come out.
So that's why I trust DDG to be less forthcoming with their logs.
And "not even anonymous" is not an option: to have a fully functioning phone, I can hardly avoid declaring my full details to Google.
And this is clearly an "evil" choice by Google: I never had to create a Linux, Debian, Ubuntu or Mint account to keep my desktop computer up to date and install additional apps.
Any tracking that isn't announced to the user is not a cost; it's espionage.
It connected on-line and off-line searches, so it could show you results from on-line locations. The underlying assumption was that users increasingly see on-line and off-line content as all part of the same world ("their content").
The commercial aspect was that it connected to places like Amazon. It made money for Canonical by using affiliate links if the user chose to make a purchase.
That is not the same as collecting all the history of the user and (anonymising) then selling that to a third-party or presenting adverts based on that data.
The default is off as users felt searches by default connecting to external services was an invasion of privacy - that's different to "selling searches".
Frankly, this closed-off the last viable manner for desktop Linux to secure a wider revenue stream of sufficient size to drive employing enough full time developers to keep up with the other platforms, in my personal opinion. FOSS doesn't change the dynamic that full-time developers cost real money.
Source: I worked at Canonical from the early days of the desktop, for ~10 years.
"Unless you have opted out, we will also send your keystrokes as a search term to productsearch.ubuntu.com and selected third parties so that we may complement your search results with online search results from such third parties including: Facebook, Twitter, BBC and Amazon. Canonical and these selected third parties will collect your search terms and use them to provide you with search results while using Ubuntu."
* The default was not off in 12.10.
I'm fine if you want to make money this way! That's why you're a company, people need to make money. My argument was that some OSS software sells your search results, one way or another. I didn't take a position in this argument (but you can hopefully guess my position).
Now, you mention that your users complained, which is true. It caused a huge amount of backlash from your established user base, many of whom contributed to OSS themselves and have seen their contributions being monetized by Canonical (which is fine too, no worries). But beyond the users, it was the pressure from the EFF that caused Canonical to buckle.
So, I don't care what Canonical's underlying assumptions were, I don't care whether it is disabled now, I don't care whether Unity showed affiliate links or not. It's all just distracting from the main point: search terms entered in Unity were sent, by default, to third-party servers!
Which was four years ago. It's off now, and has been since 16.04 (the most recent LTS release, which shipped last year).
I agree the Amazon integration in the Dash was a mistake, but it's a mistake that has been fixed. It's simply not true anymore that "Ubuntu unity sells your searches in the desktop environment by default," and continuing to tell people so is deeply misleading.
The rest of your points are personal opinions on users and data privacy, I shouldn't have commented on that area, I apologise. I see no value in getting drawn into discussing the strong emotions associated with this area as it never ends well :-)
It's a personal frustration and professional regret that desktop Linux is under-funded to compete on an equal footing - the business model challenge feels intractable. RedHat/SUSE, Mandriva and Canonical have tried different options. But, there's been no sustained success that can get desktop Linux over 5% of market. Perhaps Google will have more success with ChromeOS.
Observe by contrast the efforts Google goes to with the Android APIs.
You're clearly angry, but I don't deserve the implication of being called stubborn, arrogant or aloof.
I've already apologised to you for pushing your comment side-ways in the other thread - I'm not sure what else you'd like from me at this point.
Canonical have tried to do some very good things too, and deserve some credit for successfully making Linux more end-user-friendly. They're not terrible people, just folks with a different set of perspectives and incentives from me.
There are probably lots of smaller examples; especially in Android.
Also note that while Homebrew may be open-source, it is not "free software".
I confirm that he is probably right about _collecting_ data. Yes, this most definitely includes FOSS software. If your qualifier for FOSS is not using GA or anything like that, then you are right; however, most of us probably still count brew as FOSS. Hope that helps.
Microsoft, Google, and/or Apple have that. The profit models of these big companies create some disincentive to sell this data on.
AV providers are often on much smaller margins and the return from selling this data or building their own products on it is much higher.
I wouldn't be surprised if ISPs and VPNs also sold on data.
Edit: I don't mean to imply that it's the right or the wrong thing to do (it probably depends on the situation). Just stating what I have seen in industry.
This is quite different than the AV vendor who does not own your communication from your own device.
I know I'm a relative minority in the corporate IT world, but as a software developer downloading and uploading dependencies and build outputs, corporate MITM interception certificates absolutely scare me, both for my personal threat model and for the threat model of the projects that I work on.
It is. I do work in a Fortune 500 occasionally, and have to use their MITM gateway (websense SSL intercept).
They haven't yet fixed the internal cert to not use SHA-1.
If you're using something other than a corporate windows desktop + browser, you have to install the root certificates manually.
They have to make manual exceptions for sites that do certificate pinning. When they miss a site, it creates issues. Github is broken for me...I have to use crazy workarounds.
If there were a movement to enable certificate pinning everywhere, it would be very disruptive for the Corporate MITM vendors.
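Certificate pinning, for the record, is easy to sketch (the host names and byte strings below are stand-ins): the client remembers a hash of the certificate it saw before and rejects anything else, which is exactly why an interception proxy minting its own certificates breaks pinned sites:

```python
import hashlib

def cert_fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

class PinStore:
    """Remembers which certificate fingerprints each host may present."""

    def __init__(self):
        self._pins = {}  # host -> set of hex fingerprints

    def pin(self, host, der_bytes):
        self._pins.setdefault(host, set()).add(cert_fingerprint(der_bytes))

    def check(self, host, der_bytes):
        """True if the host is unpinned, or the presented cert matches a pin."""
        pins = self._pins.get(host)
        return pins is None or cert_fingerprint(der_bytes) in pins

# The real certificate matches its pin; the proxy-minted one does not,
# even though the proxy's private root CA is "trusted" by the OS store.
store = PinStore()
real_cert = b"github's real DER bytes (stand-in)"
mitm_cert = b"proxy-minted DER bytes (stand-in)"
store.pin("github.com", real_cert)
assert store.check("github.com", real_cert)
assert not store.check("github.com", mitm_cert)
```

This is why MITM vendors need manual exceptions for pinned sites: no certificate they can mint will ever hash to the pinned value.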
Edit: They also have irritating "content filters". So, if I'm tasked with researching options for a project, say a VPN, I can't search from their network. It blocks pages talking about VPNs because there's a policy to block "websense proxy avoidance".
If the MITM function of Bitdefender isn't advertised, how can anyone consent to it, or knowledgeably ensure it's still enforcing connection resets on bad SSL certs?
By default this is ON and users don't have the competence to recognize that this is in fact increasing the surface area for attacks and to disable it. The mere existence of a setting that is ON by default doesn't absolve such AV companies.
But speaking of Bitdefender in particular, I installed it on my wife's computer, disabled that option, confirmed that it survived a restart, then one month later I discovered that it is ON again, probably due to an automatic update. It's also an "admin" setting and my wife's user account does not have admin privileges to turn it on or off.
So even with a setting in place, it's untrustworthy.
>Emergency Windows update revokes dozens of bogus Google, Yahoo SSL certificates
They revoked certs like this silently in the past which makes it even worse.
I'm not a Windows user, haven't been a Windows user since 2001, my AV experience has been with the PCs of my family, whom I'm trying to keep safe.
But even if I were a Windows user, if you can't trust Microsoft, you can't trust their OS, at which point it would be better to use something else because security really depends on how trustworthy that OS and its vendor are. I do trust Microsoft more than I trust an AV vendor though.
>As far as I've seen, there was nothing sinister in default Windows root CA list.
Are you in any way related to MS or is your memory just very short?
I'd argue that's a problem in CA trust model, not MS. If you trust a certain CA, of course you trust their issued certificates by design. Currently, if some high tier CA f*cks up, there's no other way to invalidate their issued certificates than propagating CRLs and removing its certificate from the root CA stores manually (or by updates, as in MS case).
Either way, Bitdefender installs their own root certificate and generates their own for google.com. I've got proof if you want.
From Wikipedia: "TLS and SSL are cryptographic protocols that provide communications security over a computer network". Your host is not "the network" and it's expected to be your trusted asset.
If the AV software can't be trusted, that's another issue not addressed by TLS.
AVs generally run with complete permissions, and can do everything up to and including injecting their own code inside your browser's running process. Providing them with an API doesn't weaken the security, it just reduces the chances they'll screw the browser up.
Well, it had a plugin that got disabled by following Firefox updates.
Depending on the AV vendor, the MITM implementation will "give AV access to your SSL traffic" or "allow everyone to intercept it" (Symantec).
That being said, users would probably be much safer if they skipped the antivirus and just installed a decent ad blocker.
I only recently noticed this when Google Chrome started marking even gmail.com as insecure on my dad's laptop.
Turned out that the Bitdefender license had expired, and somehow this made certificate validation fail?
>The Chromium browser disables pinning for certificate chains with private root certificates to enable various corporate content inspection scanners and web debugging tools (such as mitmproxy or Fiddler). The RFC 7469 standard recommends disabling pinning violation reports for "user-defined" root certificates, where it is "acceptable" for the browser to disable pin validation.
It doesn't have to be insecure. If the software that does the MITM checks the certificates correctly, I don't see how it would be worse than letting the browser handle it.
Not that I'd ever use an antivirus, of course.
So, for example, with a self-signed cert, does it:
a) create a "valid" cert itself, hiding the error from the user? This is obviously dangerous
b) create an "invalid" self-signed cert. This is messy as a user will then see a self-signed cert from the A-V vendor, which they may be more or less inclined to trust
c) Pass the traffic through without inspection, missing any potential threats
And that's just one case. SSL/TLS interception is very hard to get right and easy to make the user's security worse as a result.
I don't think this practice is a big issue, because the local machine would have to be compromised for it to be an issue, in which case it's irrelevant because the game is over already. Also, the alternative is not scanning SSL traffic for malware, which has its own very real risks.
But remember, like I said, that's just one example of why it's a bad idea; there are others. E.g., what do you do about EV-SSL certificates? You can't fake the browser element for them (remember, this is the case where the A-V product hasn't hooked the browser), so where you want to MITM an EV-SSL connection you have to downgrade it to non-EV.
Also, what do you do with certificate pinning (either built into the browser or via HPKP headers)?
Consider the goals that are trying to be achieved. You're attempting to stop the user either downloading malicious content or perhaps getting hit with a browser exploit or possibly you're trying to stop users going to a "bad" site.
The first one can be covered off with traditional on-access scanning of files.
The second one is much better addressed by improvements in browser sandboxing or general app security.
The third one can be handled at a DNS level with reputation based block lists.
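The DNS-level approach in that last point is simple to sketch (the blocklist entries here are invented): check the queried name and every parent domain against a reputation list, and refuse to resolve on a hit:

```python
def is_blocked(hostname, blocklist):
    """Return True if the hostname or any parent domain is blocklisted."""
    labels = hostname.lower().rstrip(".").split(".")
    # Check "evil.cdn.example.com", then "cdn.example.com", and so on,
    # so a listed domain also blocks everything beneath it.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in blocklist:
            return True
    return False

# Illustrative entries; a real resolver would pull these from a feed.
BLOCKLIST = {"malware-cdn.example", "phish.example.net"}

assert is_blocked("a.b.malware-cdn.example", BLOCKLIST)
assert is_blocked("phish.example.net", BLOCKLIST)
assert not is_blocked("example.net", BLOCKLIST)
```

Crucially, this works without touching the TLS session at all: you never need to see inside the connection to refuse to make it in the first place.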
And, rolling back around to the main topic, preventing internal machines from being compromised by viruses, since a lot of people end up having to hit at least one website of some sort that has at least one tracking or advertising widget that could by three layers of indirection get compromised to serve viruses, which is, alas, not some sort of far out scenario nowadays, but just another day of the week on the web. (That is, even perfectly safe browsing habits can still get you owned on the modern web. And saying that a modern network can't count on "firewalls" and must have defense in depth still doesn't mean it's just peachy keen if an internal machine gets compromised.)
If you're going to insist those corporations can't penetrate HTTPS for compliance and security reasons, you're going to have to be willing to lift those restrictions and deal with it when their security fails. There's no two ways about this; either you grant them the necessary tools for compliance and security, or you stop complaining when they can't comply and aren't secure. (And at scale, let's be honest; the latter isn't on the table.)
The advantage is that they should have informed professional security people who can understand the trade-offs and make intelligent decisions about them.
Even then this strategy fails against certificate pinning which is becoming ever more common in mobile and also web space, so corps need other solutions to those problems (likely endpoint based)
However what we're talking about here is end-user A-V products and their use of HTTPS interception at a desktop level and the trade-offs that this forces on individual end users who are less equipped to handle this.
Realistically the A-V product will likely choose to cause "less noise" for the user, so it won't present them detailed technical information about the errors it's masking, potentially making the user's security worse.
Also, you have installed an application that has root access to the PC; if it were malicious it could do a lot more damage. It is ultimately a question of trust.
I created and installed my own root certificate because I don't want to click through the exception every time I open a new incognito window; it's especially annoying for WebSocket connections.
If you MITM the connection locally it triples the computational cost for both encryption and handshake operations. Then more websites don't use TLS because it's three times as slow for the user.
It also prevents you from using a good cipher suite when the MITM doesn't support it even though the browser and the server both do, again reducing security or performance or both. And it's very easy to screw this up the other way and have the browser show a good secure connection with strong primitives and forward secrecy while the MITM is actually communicating with the server using export ciphers or RC4.
The existence of a trusted root private key on your machine exposes you to KCI of all servers. And key compromise is not even necessary if they use the same root private key for everyone, which has actually happened.
This is not a comprehensive list of the reasons why that is a bad idea.
AV scanners do not have a 100% detection rate. Letting malware be where a trusted program is expected is how you get infected.
> so they can scan the urls, and block some attacks
The purpose of HTTPS is to provide a guarantee that your connection to Google is direct, with no intermediaries, such that (1) only Google knows your search query and (2) you get a guarantee that the received content is from Google.
And you get this guarantee from certificate authorities that have a good reputation and that are in business because they've proven they can keep their shit secure. And when one of them violates that trust, the OS / browser vendors can start to invalidate their certificates. AV companies are bypassing it all.
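That revocation machinery is essentially a deny-list layered on an allow-list; a sketch, with all names and fingerprints made up for illustration:

```python
# Allow-list of trusted roots and deny-list of revoked leaf fingerprints
# (pushed out via OS/browser updates and CRLs in the real world).
TRUSTED_ROOTS = {"DigiCert Global Root", "GlobalSign Root CA"}
REVOKED_FINGERPRINTS = {"ab12deadbeef"}

def chain_trusted(root_name, leaf_fingerprint):
    """Accept a chain only if its root is in the trust store AND the
    leaf certificate hasn't been revoked."""
    return (root_name in TRUSTED_ROOTS
            and leaf_fingerprint not in REVOKED_FINGERPRINTS)

assert chain_trusted("DigiCert Global Root", "cc34")
# An AV vendor's private root isn't in the OS list...
assert not chain_trusted("Bitdefender Personal CA", "cc34")
# ...until its installer adds it there, which is exactly the complaint:
# the vendor grants itself a trust anchor the vetting process never saw.
assert not chain_trusted("DigiCert Global Root", "ab12deadbeef")
```

The point of the sketch: revocation only works on certificates that flow through the public CA system. A root added locally by an installer sits outside that entire feedback loop.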
The blocking of attacks reasoning is kind of bullshit, because Google's Safe Browsing service and browser extensions maintained by a community like uBlock Origin are doing a better job of warning against potentially malicious websites. There are always vulnerabilities to exploit of course, though it's getting harder for those to pop up due to the modern sandboxing of browsers.
However I have yet to see evidence that AV software is doing a better job of catching those, because it's a whack-a-mole game and it's more likely that browser vendors find and fix those vulnerabilities faster than AV companies, because bugs get reported to browser vendors first. And sure, if you have Adobe Reader or Oracle Java installed as plugins in your browser, that's a huge risk, but it's actually easier to uninstall those and browsers have started disallowing plugins. Safari for example is disabling everything by default.
The problem with installing their own root certificate is precisely one of trust. Yes, you allow a piece of software to run with root permissions, but only for as long as you don't see it do stupid shit, like installing a root certificate, at which point all of that trust should be gone.
And that is because a custom root certificate that doesn't belong to a competent certificate authority cannot be trusted and will increase the attack surface. This is security 101.
This is a popular misconception, but false. SSL creates an encrypted tunnel so that nothing "in the middle" can penetrate the tunnel (in theory), but there is no contradiction in the two sides of the tunnel delegating out their trust. There better not be, because in practice, there are almost always intermediates now on the real web. It is very common for a WAF or a load balancer to be the thing responsible for the SSL rather than the server generating the response, or you have CDNs or DOS prevention like Cloudflare doing the real work, etc. etc. There is no particular problem with the user doing the same thing. Sure, they can be irresponsible with it... well, so can the HTTPS server side, so, well, yeah? If you're going to declare a particular encryption technique unusably flawed because it could be used incorrectly and insecurely, you're not going to be encrypting very many things.
In fact, the very web connection you are reading this on, if I understand it correctly, was not actually encrypted by YCombinator. It uses a cert they own, but they're not the ones terminating the SSL connection; that's been delegated to a trusted third-party.
See also: https://news.ycombinator.com/item?id=8383466
And they don't MITM the TLS connection, they terminate it. The difference being that the performance is better rather than worse (so more people use TLS instead of fewer), the server is aware of this happening so it isn't fooled into thinking the connection is using more secure ciphers than it actually is, there is no third party forcing lowest common denominator security between the three, etc.
And remember they have to get it right only one time.
>I have yet to see evidence that AV software is doing a better job of catching those
I wouldn't be so one-sided. Imagine a fresh new variety of ransomware starts spreading. No one can catch it at day 0, but good AV can catch it on day 1 (OK, week 1), and neither Google nor uBlock nor the likes can.
Google can of course react equally fast. But the "signal delay" may be much higher, as users report only URLs they can immediately link to their troubles, e.g. malware that crashes the browser.
And second, what Google can do now is block only one attack vector, namely the web page.
Thinking rationally, chances are high that Google is seriously considering entering the AV business. They are in a highly advantageous position to do it successfully, with their user base, resources and AI tech.
Far worse are the lengths that a company like AVG will go to to get and keep their software installed on your computer. Their browser toolbars essentially take all the dirty tricks they've apparently learned dealing with malware to... build a piece of malware. Honestly, whether it's active malice, incompetence, or lack of motivation I don't know, but I do know I've spent hours trying to extract their stuff from people's browsers. (I should say here that I fully expect someone reading this has managed to uninstall an AVG toolbar with no issues. They have multiple auxiliary tools to their antivirus, and I'm not sure specifically which one(s) caused me trouble personally. It's also likely that they're only a _real_ pain in certain circumstances. But regardless, if you google something like 'how uninstall avg' or 'avg malware' I'm sure you'll find many more examples.)
I can say this: I never had a problem with uninstalling a browser toolbar, or restoring the default search engine in the browser. What I always have problems with, is getting rid of AV software itself. Oh God, how hard it is sometimes.
Norton AV taking half an hour to uninstall is a known thing; I'm convinced they actually have some Sleep() calls in their code just to piss people off. But just last week I tried to get rid of Comodo AV (+ 2 bullshit pieces of software it installed) on my neighbour's computer. Took a while. The uninstaller didn't work (it reported "an error" and gave up), so ultimately I had to resort to manually deleting stuff until the uninstaller finally unlocked itself and cleaned up the rest.
I've been having similar experiences with all AV software in the past few years. They're a menace.
BTW, there are also region-specific malware - so for example I would rely more on Kaspersky for detection of malware targeted at Russian businesses, than Symantec or Microsoft AVs.
use security policies of the domain to only allow whitelisted applications to be run;
restrict internet use to whitelisted destinations;
configure mail servers to accept only whitelisted sources, use DKIM/DMARC, and reject multipart messages.
Mandate usage of wired-only HID peripherals which are soldered to the port. Don't use wifi, and physically secure the access to network wires.
Glue shut all other computer ports.
Go all-out Saudi Arabian on people who don't comply with security policies: punish them by removing digits, with public hangings for repeat offenses.
It's really that simple.
The trend is clear: AV is out, Carbon Black (or Crowdstrike, etc) is in. This is especially prominent in the financial industry. My wife works at a tiny local bank and they're doing trials of Carbon Black.
AV is terrible software, the chemotherapy of the security world. It only exists because it's slightly better than the alternative, and if you don't have an active disease, it acts as a disease of its own. You're glad it's there when it saves your life, but you curse its name every day. Application whitelisting tools don't interfere with the day-to-day workings of your computer, but don't let the bad stuff in. You're only allowed to run the software you need to run, and nothing else.
It's not set-it-and-forget-it like AV, but it's a damn sight more effective and less annoying to the users.
I'm seeing the same thing today. Getting a trial of Cylance for a small environment seems next to impossible and when 3rd party testers test these apps, the false positive rates are terrible. Worse, they miss a lot of obvious malware traditional AV doesn't.
I am skeptical this technology is some silver bullet for the industry. I imagine cryptolocker changed the game to where it's politically expedient to whitelist everything, be it application, driver, URL, etc., where in the past IT departments were told to pound sand because some executive couldn't install BonziBuddy on the weekend or whatever.
Once you have proper whitelisting then you can pretty much remove AV or go with a non-traditional AV product like the kinds you list or no AV at all. Whitelisting requires a centralized IT department, no BYOD, and a lot of other infrastructure and talent smaller organizations simply don't have. I suspect traditional AV is here to stay for rational reasons and the technology behind things like CB or Cylance will eventually be part of a traditional AV package.
Arguably, the heuristics behind Win10's more advanced SmartScreen are a poor man's version of this, and SS comes with every copy of Windows 10 (the Win7 version is actually very poor). I imagine there's a lot of anxiety at these companies about being acquired before traditional AV reverse-engineers what they do, or before SmartScreen gets good enough that you can run a flawed local AV and still have some world-class heuristics watching your back.
On the other hand, AV usability is so bad you can't expect it to help "normal" people. All those popups do more harm than good when people start ignoring them.
In this case, there are much bigger problems than the lack of AV.
Isn't that a case of the survivorship bias? Or at least the broader case of selection bias?
Maybe the perception that you can achieve some kind of security through band-aid solutions is exactly the cause of the lax security at many organizations?
Right, because the only way AV software can ever be effective is if it blocks things that legitimate programs also do (if a given piece of functionality has no legitimate uses it wouldn't be in the OS in the first place) - so users get in the habit of disabling it. Installing a piece of software that e.g. stops you running any downloaded .exe files is useless - if you didn't want to run the .exe you wouldn't be trying to run it, and if you do want to run it you'll turn off the antivirus. If you just want to disallow it completely, you can do that at the OS level easily enough.
There is no magic that AV can do to make it any easier to tell legitimate software from not. Reactive scanning for specific threats is ineffective in the modern era - by the time AV knows about a new form of malware most of the damage has already been done. So all that AV can do is monitor what programs do and apply inherently unreliable heuristics, and maybe be more or less sensitive about those heuristics than the OS is.
Trusted in what sense? Does Norton maintain their own whitelist? Is there any reason to believe that whitelist would be any better than the digital signature check that's built into Windows?
> based on which I can make informed decision on whether to launch it or not (I personally launch immediately only trusted executables and google for any issues of the rest). The same can be done with all threats: AVs warn, provide some details and let users decide what to do.
But what information can the AV offer that actually helps the user make a better decision than they would have otherwise?
A similar approach can work for other threats.
I'm sure it's hard on all the AV vendors out there, but with Microsoft Security Essentials and Windows Defender I don't see the need for a third-party AV.
For example, the most common ways people get infected are installing software from unreliable sources and not keeping their computers up to date. Learning to regularly update your OS and browser, to search for, recognize and use the official sources for software (and to stop pirating software, for that matter), to never click .exe files received in emails and to be suspicious of all attachments, and to uninstall everything that infects your browser with useless plugins: I'm pretty sure such simple knowledge would cut 99.9% of all incidents.
Most software vulnerabilities in the wild are not novel; "zero day" exploits are not that common. This is why, even though I hate Microsoft's recent update policies, I understand their newfound aggressiveness in pushing those updates: it is really frustrating that users ignore update warnings. I also appreciate Chrome's fast updates, which encouraged Firefox to do the same.
Google Safe Browsing.
Anything you download is already checked with Google, why waste CPU cycles on checking it again locally?
My takeaway here is not to trust either Russia- or US-based companies, as none of them will escape working with the secret services. China and India have plenty of exploitative AV-like software as well, mainly for mobile.
Are there any European AVs? Or Japanese? Or South African? I'd love to have something that keeps an eye on, e.g., Microsoft's products, because there's no doubt that they have backdoors and report home.
A state controlled entity in authoritarian country is another story.
Besides, Russia considers itself in a state of "hybrid war" against the whole world. It sounds insane, but apparently that's what their government believes and what their propaganda broadcasts. That's why an AV product made by a Russian state-controlled company carries some unique risks.
A high-ranking Kaspersky manager, Ruslan Stoyanov, has been in jail for high treason since mid-December.
Do you know what kind of deal the KGB wants from him?
But just hearing rumors doesn’t mean we know anything.
“the fact” and “probably” are mutually exclusive.
> plenty of examples of companies outright breaking the law
I know, and that's why I wrote "usually follow the letter of the law". The majority of companies do follow the law, however.
I don't see how; statements about probability can be factual and we have plenty of evidence that Google, Microsoft, and US telcos do; why should AV vendors be different?
As far as companies usually following the letter of the law... do they? What makes you so sure?
In natural science or in medicine you can estimate that probability (thanks to control groups, multiple experiments, statistical methods, etc.). In such a context, a statement about probability can indeed be factual.
In general conversation, or in a legal context, they can't. If you have facts, there's no "probably", because you know for sure. And if you don't, it can be your belief, or your personal opinion, but not a fact.
> do they? What makes you so sure?
Over my career, I’ve worked in several US software companies. Lately, I’m working with various US companies as a contractor.
Multiple times, a company put a lot of effort and money into complying with the law: we redesigned our products, moved across states, trained employees to comply with various regulations, and so on. Having friends in the industry with similar observations, I conclude such things happen all the time.
Speaking of which, what makes you so sure plenty of US companies are breaking the law?
And, yes, there are certainly lots of compliance programs out there, but I'd argue those have more to do with avoiding enforcement action than necessarily adhering to the law. I'd guess Wells Fargo (Wachovia at the time) had a compliance department while they were laundering money for drug cartels and yet it still happened.
I find it eminently believable that many or even most US companies would comply with an illegal request from US intelligence agencies.
The image of the world as it’s shown in the media is extremely biased. Watch this:
The video is about inequality and education, but the topic of corporate crime is skewed just as much.
There are 6 million companies operating in the US, employing 115 million people. If the majority of them were breaking the law, you'd know about it not just from the media but also from some of those 115 million people who happen to be your friends and family.
> I find it eminently believable that many or even most US companies would comply with an illegal request
I don’t find it believable because I don’t see motivation for such compliance.
In an authoritarian state, a government can abruptly take away your business (https://en.wikipedia.org/wiki/Euroset) and optionally throw you in jail for 10 years (https://en.wikipedia.org/wiki/Yukos) if you don’t comply, and you can’t do anything against it. That’s a strong motivation to comply. I don’t see such motivation for western companies.
Those two are very different. If you don’t distinguish between them, you’ll come to absurd conclusions like “the majority of US drivers are drunk”, “vast majority of US citizens voted for Hillary”, or “the majority of US companies are breaking the law”.
> of the biggest and most prestigious companies in the country
Prestigious means nothing ‘coz it’s hard to measure that. But for biggest ones, here’s the list: https://en.wikipedia.org/wiki/List_of_largest_companies_by_r... Good luck finding Enron or Wells Fargo there.
The total number of financial crime cases in 2011 was around 10,000. Even if we assume each case was against a different company (which gives us an upper estimate), that's merely 0.17% of US companies charged with financial crimes in 2011.
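For what it's worth, the arithmetic behind that figure is straightforward:

```python
cases = 10_000         # financial crime cases in 2011
companies = 6_000_000  # companies operating in the US
# Upper bound: assume every case targeted a different company.
print(f"{cases / companies:.2%}")  # 0.17%
```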
As you can see, the real data is pretty close to my anecdotal evidence.
And it's very far from your reported, sourced news.
> see bugs in AV products listed in Google's Project Zero
All software has vulnerabilities, including Defender. Searching for [product] in Project Zero shows that only 3 vulnerabilities have been discovered (which is arguably a bad thing, but not according to this author) and it took, at most, 4 days for them to be resolved.
> if they make your product incredibly slow and bloated
This is precisely the reason that I have returned to [product]: performance. I'm running off an HDD and Defender saturates my HDD for a good 2 minutes after boot. I don't experience this with [product]. In addition, it has a "gaming mode" which allows you to further cut back on its activity (I have never needed it). Looking at objective tests, Defender fares quite poorly in both performance and protection.
Additionally, a homogeneous market is an easy market to exploit. Let's assume that everyone took this advice and installed Defender. It is guaranteed that Defender has vulnerabilities. If you wanted to pwn as many machines as possible, you would only have to worry about exploiting a single AV.
This is just bad advice, I'm sticking with the competition (which may not always be [product]). There are bad players (McAfee, Norton) but that does not mean everyone sans Microsoft is utterly incompetent.
This is my thinking as well. Microsoft's virus definitions are often worst in class, and the agent itself only seems to update its definitions daily or, at most, twice a day, while 3rd-party applications do so hourly or more. I've never seen MSE or Defender stop any ransomware attack. Not once. It just can't move fast enough to keep up.
Avast, Sophos, ESET, Panda, etc all trounce MS. Most of these are free for home and are largely trouble-free. Just because the author had a bad experience with Norton and McAfee doesn't mean the MS product is superior. I suspect the person who wrote this isn't a sysadmin who manages many users. The level at which MS can't keep up is embarrassing. I'm surprised to see this kind of thing at the top of HN.
My only compliment for MS is that SmartScreen is very aggressive in Win10 and will often flag suspicious executables correctly. I suspect the author is confusing SS with Defender. SS works because it's heuristics-based; Defender sucks because it's signature-based. The nice part is that these are two separate applications, so if you run Avast or ESET, you still get SS.
It's also worth mentioning that a lot of Win10 "privacy" guides, often linked on HN, recommend disabling SS. I can't stress enough how questionable a practice that is. SS is a proper security layer, and if sending MS a hash of an executable is such a problem for you, I suggest getting off Windows, as Windows does so much worse with regard to privacy even after following those guides.
The false-negative rate is embarrassing, though - especially with reputable open-source projects. Still, unblocking the file potentially gives a user more time to think about what they are doing.
> recommend disabling SS
The last one I saw left UAC turned off. Defender might not be the best (in addition to Windows 10 spying), but Microsoft really does have the best defaults otherwise.
Most software doesn't run in ring0. And most software doesn't actively break exploit mitigation techniques in other software either.
Furthermore, most software provides value that offsets security risks. Since the entire value of AV products is to improve security, when they fail to do that, they're worse than useless.
The homogeneous market argument is weak. If a determined attacker wants to compromise as many machines as possible with a single attack, they'll come up with an exploit that passes all AV products.
Realtime monitoring has the biggest risk of performance degradation.
sc.exe config "WinDefend" start= disabled
sc.exe stop "WinDefend"
[SC] OpenService FAILED 5:
Access is denied.
Yea, that's why I disable all AV - every install or clean build or untar or w/e brings the PC to a crawl. Haven't had problems yet.
For example if you get some Acer laptop and reset it using windows built-in functionality it'll still reset it with all the bloatware - including AV.
They've had similar tools since at least Windows 7 IIRC, if not XP - you've just always had to download them separately, and they've never been advertised with much enthusiasm. Probably trying to strike a balance between pleasing power-users and keeping the bundled-bloatware ecosystem happy, seeing as MS benefit financially from both.
Sometimes I used CCleaner and/or Spybot to deal with something really nasty, but the MS stuff really does do a good job (has anyone checked whether hell has frozen over?)
What do you think about ESET?
Kaspersky, BTW, has the best scores (See here: https://bestantivirus.reviews/tools/test-results-calculator) but Bitdefender and Avira are also great according to these tests, and Bitdefender is my choice, personally.
It's sort of like Cloudflare protecting against DDoS attacks while also protecting booters.
I apologize for presenting an anecdote when data is needed, but I manage a Windows network with 100+ users, and on a daily basis Kaspersky catches 5-10 emails in Outlook with nasty attachments. It prevents my users from opening those innocuous-looking but nasty Invoice-Jan-2017.docx files. Without a good AV there is no way to know which Invoice-Jan-2017 has a virus/worm and which doesn't. Relying on the Office security feature is not sufficient, because actual vendors/customers send us macro-enabled files regularly.
Is this a bug in MS Word, or does the docx format really have the ability to carry a virus?
Almost none of these are zero-days though, so if you're up to date you'll be fine.
Just like any programming language, it could be used maliciously, and there is no easy way to distinguish which macro-enabled file is safe and which isn't (without going through the code yourself prior to enabling the functionality)
However, other times things like browsers do dumb stuff:
docx files and Silverlight files are both just zip files with completely different internal structures, meaning the two can live together in the same file.
IE used to look at .txt files that contained HTML tags and decide "hmm, maybe I should display that as HTML".
That meant that on sites which accepted txt and docx uploads (a lot of recruitment sites, etc.) you could upload a txt file that simply embedded the docx as a Silverlight component. When the admin looked at the txt file, it would run the code as the currently logged-in (admin) user.
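This kind of polyglot is possible because zip readers locate the archive's central directory by scanning backwards from the end of the file, so arbitrary bytes can be prepended without breaking the archive. A minimal sketch of the principle, with Python's zipfile standing in for the docx/Silverlight readers:

```python
import io
import zipfile

# Build a tiny zip archive (stand-in for the docx/Silverlight payload).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("word/document.xml", "<w:document/>")
zip_bytes = buf.getvalue()

# Prepend plain text. A browser that content-sniffs sees the HTML;
# a zip reader, scanning from the end, still finds the archive intact.
polyglot = b"<html><b>looks like a harmless txt upload</b></html>\n" + zip_bytes

with zipfile.ZipFile(io.BytesIO(polyglot)) as z:
    print(z.namelist())  # ['word/document.xml']
```

The same file is thus simultaneously "text with HTML" to the sniffing browser and a valid archive to the plugin, which is exactly the mismatch the attack exploited.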
Yes, it has a default behaviour of "prompt to execute macros", but it happily shows the advice in the malicious document to "please click yes at this prompt to get a free iPhone", at which point the majority of users click "yes".
I'd argue that the starting point in a corporate environment, where you can assume that users can be quite negligent, is fundamentally different from a one-user setup, especially since I agree that you can't "fix" the user in corporate.
You want the e-mails gone, not just a warning about them; but a warning is perfectly fine if you're one person and have an idea what you're doing.
I'm not going to advocate for any particular vendor as I used to work for an AV company (and currently use a product from a competitor). But I can attest that I've used products that have caught threats that Windows Defender didn't, and many products also include a much more robust and configurable firewall.
It's annoying when someone else's lousy code breaks your own. This happens frequently to the sites I administer: we randomly get blacklisted by some no-name AV product's web-security feature. I understand the frustration when you have no control over this. But concluding that all AV software is bad does not follow from the evidence given.
> I can attest that I've used products that have caught threats that Windows Defender didn't
Since you brought it up, that latter statement sounds suspiciously like the very definition of "really vague anecdotal evidence". SCNR
And the argument being made, that "for example, see bugs in AV products listed in Google's Project Zero. These bugs indicate that not only do these products open many attack vectors", could be made for any piece of software you install.
AV products have also been shown by research from Google Project Zero to be doing very dangerous things (like running a local web server that you can send commands to which are executed on the device).
When you combine high-privileged code with dangerous practices you get a very nasty set of risks that aren't present with most other software.
As there is an alternative that doesn't have similar problems (MS Defender) it seems sensible to recommend it.