I guess what goes around comes around: http://surveillance.rsf.org/en/blue-coat-2/
Personally, I think maybe they got scared that someone might figure out that their code is worse than everything else on the planet, and so they want to try to put the clothes back onto the Emperor.
Yup, totally makes me want to buy copies.
"No, guys, security by obscurity totally works in this one case! Because it's us! Come on, you trust us right?"
This is an argument for factoring out the means of virus detection into a closed-source plugin/module, while opening the source of the rest of the code. Particularly since detection is presumably pure (i.e. functional programming notions of purity and referential transparency), and thus much less likely to be a source of vulnerabilities, compared to the rest of the client which actually interacts with the OS, disks/files, etc. and is therefore much more likely to be exploited. Because the vulnerability scanner would still be a closed-source binary blob, the public would need to trust the company that the blob is actually pure, but seeing that blob within the context of an open-source client which is handling I/O makes that trust easier.
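To make that split concrete, here's a minimal sketch in Python; the names (scan_pure, scan_file) are invented for illustration and not any vendor's actual API:

    from pathlib import Path

    def scan_pure(data: bytes) -> bool:
        # Stand-in for the closed-source detection blob: same bytes in,
        # same verdict out, no side effects, no OS/disk/network access.
        return b"EICAR" in data  # placeholder check, not real detection

    def scan_file(path: str) -> bool:
        # Open-source half: every bit of I/O lives here, in auditable code.
        return scan_pure(Path(path).read_bytes())

Auditors could then verify that the open half handles untrusted input safely, and would only need to trust the vendor's claim that the blob is pure.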
Yes, it makes it easier for malware creators to test their creations against the closed-source module before releasing their malware into the wild. But sophisticated malware writers are already doing that, by installing the anti-virus client into a VM, updating it, disconnecting it from networks, then loading the malware into the VM and seeing if the malware is detected or not. So malware writers don't gain that much from the opening of the rest of the codebase (unless they succeed in finding vulnerabilities that the rest of the world doesn't), and the white-hat public gains a much more trustworthy security tool.
But you're also forgetting that virus scanners can be a source of vulnerabilities and exploits themselves; I seem to remember one piece of malware that exploited a flaw in a virus scanner's decompression code to get a foothold. Keeping something a trade secret doesn't lessen the risk of the flaw existing.
The same can't really be said of Google's algorithm, as it's essentially a hugely complex black box, and you can barely interact with it. That's kind of like reverse engineering a chip purely using its inputs / outputs.
BTW, this is true. Seeing the source code versus having to go through assembly listings - I know which one I'd pick if I had to find logic bugs.
>Why is this not applicable to virus detection algorithms?
It's not an algorithm, but a heuristic. If you're looking for a suspect, you don't announce "I'm looking for someone 5 feet 5 inches tall with a buzz cut who drives a Ford and wears size 12 Nike sneakers". In much the same way, security via heuristics doesn't mean creating a perfect detection system, because a perfect system doesn't exist. They want to make the game harder to play by hiding the rules, not because they're sure they're going to win. This is a real, tangible benefit for the customers. There's nothing really special about it; we've been using such ideas for centuries.
I don't know what that means. Perhaps superficially there is some overlap, since both run on deterministic hardware, but a heuristic is completely different from an algorithm. It's a technique that can give you an imperfect answer to the question you're asking. An algorithm describes a method which, if followed, gives you the answer. Here is an AV heuristic that I made up just now (with a sketch of the scoring after the list):
-Is it encrypted? +1 point
-Does it contain self modifying/unpacking code? +1 point
-Does it call OS APIs to monitor running programs? +1 point
-Does it run at startup? +1 point
-Does it have no UI? +1 point
-Does it try to punch a hole through NAT? +1 point
-Does its process name contain random strings? +1 point
If you get > 5 points, hash the executable and send the hash/executable for analysis.
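A rough sketch of that scoring in Python; the trait flags are taken as given here, since actually detecting them is the hard part:

    import hashlib

    # The seven made-up checks above, reduced to trait flags. In a real
    # engine each flag would come from static/dynamic analysis; here they
    # are plain inputs, since the scoring is the point, not the detection.
    TRAITS = ["encrypted", "self_modifying", "monitors_processes",
              "runs_at_startup", "no_ui", "punches_nat", "random_proc_name"]

    def suspicion_score(traits):
        return sum(1 for t in TRAITS if t in traits)

    def maybe_flag(exe_bytes, traits):
        # "> 5 points": hash the executable, send hash/executable off.
        if suspicion_score(traits) > 5:
            return hashlib.sha256(exe_bytes).hexdigest()
        return None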
>Perhaps the entire AV model employed by Symantec is flawed.
Well, for one, the heuristic isn't the "entire AV model". But what makes you think the entire AV model is flawed? Every major OS uses parts of the AV model.
The vulnerability, I would say, is if I sent you to hell and you found a way to escape.
We're talking about downloadable software here, not a cloud service like google. Once a hostile nation state has access to your binaries (as they would with an installed product like A-V) they can just fuzz the A-V detection method to find bypasses.
Heck, that's what pentesters and red teamers do on a regular basis; A-V bypass is a common thing in that world, so if people at that level can do it, you can bet that nation-state actors can do it.
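A minimal sketch of such a bypass-fuzzing loop in Python; is_detected is a hypothetical stand-in for invoking the locally installed engine, and real work mutates only non-functional regions (padding, strings, packer stubs) so the sample still runs:

    import random

    def fuzz_for_bypass(payload: bytes, is_detected, attempts=100_000):
        for _ in range(attempts):
            if not is_detected(payload):
                return payload  # the scanner no longer flags this variant
            # Naive mutation for illustration: flip one random byte.
            i = random.randrange(len(payload))
            payload = payload[:i] + bytes([random.randrange(256)]) + payload[i + 1:]
        return None  # no bypass found within the budget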
We also did third-party security audits on a regular basis, but still wouldn't be comfortable allowing that to be done by other countries. Purely my own opinion here, but my concern wouldn't be a security one so much as an intellectual property one: it's pretty well known that other governments (China, Russia) have strong links to their commercial sectors and little regard for IP protection.
Don't forget that nation states also produce malware (recall Stuxnet?), and evading detection is substantially easier when you know exactly what to avoid doing.
Did they say they allow no audit or outside code review?
Or simply that nation states whose intelligence agencies actively subvert security solutions to compromise computers (the very things AV companies work to prevent) shouldn't have access to the very cookie jar they work to steal from?
Frankly, I have no idea why you'd let people review your source code who have a vested interest in finding exploits that they will use against people using your software.
Look at it from the perspective of those countries:
Symantec: "hey, buy all our security software, it's super-great"
Foreign Gov. Customer: "sure, can we check the source code first to see if there are any heinous security bugs or NSA backdoors?"
Symantec: "Oh gee, no, allowing you to see the source code of products we want you or companies in your country to run might compromise its security"
Foreign Gov. Customer: "..."
You can't please all customers when one customer wants you to protect them from another potential customer of yours; you have to pick a side and stick to it.
However, let's extrapolate: what if the same thing were applied to Apple or Microsoft, who sell a very large amount of software to countries like China?
Should they forbid China access to their source code due to concerns from US customers.....
Would their shareholders be happy if they did? China is a large market, and losing access to it would be bad for a company's financial health.
Foreign Gov. Customer: "But what I really want is for my tech spooks to scan pre-selected high value modules for already known and suspected zero day exploits for our own clandestine use"
Also, if Symantec won't trust its customers to that degree, why should a foreign government or its key industries trust Symantec software?
If a US audit firm audits US software, why should an international government trust that there isn't a US backdoor in there? Or that the US audits haven't uncovered issues that, instead of being patched, were handed to the NSA for later use by its TAO teams... (WannaCry, anyone?)
Obviously Symantec is free to withdraw from a given market, as it has here, but to suggest that trust is a one-way street seems, well, a bit unbalanced.
As an American, I certainly would not trust any non-American AV software.
I would assume that all AV made in another country is compromised by that country's government intelligence. That would be a safe assumption.
As an American, I would be safer using American AV because, despite what the anti-gov propaganda wants us to believe, it's far harder for the NSA to spy on Americans than non-Americans.
Regardless, this entire thread (and your post) seems to treat nation-state actors as inherently innocent, which is so blindly naive that it's difficult to rationally respond to.
But this is the nature of cyberwar: damaging, effective, widespread, yet invisible and plausibly deniable.
Symantec giving source to Russia should be seen as a violation of American national security at this point, because it gives a hostile foreign government a blueprint to attack US networks.
My point is that if the US treats foreign governments as dangerous, then those foreign governments should equally treat the US, including US software, as dangerous.
Given US software companies' international sales volumes, that's a massive existential threat to the US economy.
If China/Europe/Russia etc. stop using US software products, then what will happen to the profits of Microsoft/Google/Apple et al....
My other point was the apparent one-way nature of trust that I felt you were implying: that foreign governments should trust US software while at the same time accepting that those software companies do not trust them...
Would you trust Chinese software that was only ever allowed to be reviewed by Chinese auditors?
You should trust a firm to review your code not based on its nationality, but based on a wider set of criteria.
Included in those criteria, for me, would be whether or not the organization is committed to subverting your software through intelligence operations.
But, that's just an end-around because all countries with markets worth selling in have intelligence agencies which subvert AV and other software for clandestine purposes, so all nation states are excluded.
W.r.t. Chinese auditors: because of their oppressive and authoritarian government, which goes much further than western governments to control business and speech, and which has a much deeper history of subverting any control structure outside the Communist Party, I would certainly treat their work as suspect by nature. But if there were a Chinese auditing firm renowned for its quality, privacy, and separation from the government, I don't see why I wouldn't consider it.
Open it up to code review by only a small number of parties, mainly governments, and you are opening it up to a small set of people doing code review explicitly driven by the primary intention of finding vulnerabilities in it. This applies even to the US government, which routinely asks software vendors to delay patching or even disclosing 0-day vulnerabilities until it has sufficiently exploited them.
Allowing more scrutiny will work only if enough eyeballs with benevolent intentions are devoted to it. The best result would come from open-sourcing the whole thing, but that would not make business sense for the company.
Basically, you either open it up completely or you don't open it at all. Opening it up to a few government-funded hackers is probably the worst choice they could make.
Until the government comes knocking and can demand pretty much everything with your only option being a secret court that always sides with the government anyway.
Is it too much tinfoil to think that this isn't so much about "putting security over sales" than it is about "making sure that NSA backdoor remains hidden"?
There are courts, and rules, and sometimes rules mean you hand things over, but sometimes they mean you can say no.
"Only One Big Telecom CEO Refused To Cave To The NSA ... And He's Been In Jail For 4 Years"
"the government has avoided a trial in which the 65-year-old former executive planned to air what he says was his refusal, in 2001, to allow Qwest to participate in a National Security Agency program he believed was illegal."
He believes that the government only brought the action against him because he refused to divulge user data, but he is in jail because of insider trading.
"the NSA proposition to Qwest was nearly seven months before 9/11, according to Nacchio."
"In a bizarre twist, the judge in Nacchio's case, Edward Nottingham, was soon embroiled in scandal, accused of soliciting prostitutes and allegedly asking one to lie to investigators. He resigned and apologized, but wasn't prosecuted."
"Nacchio's conviction was overturned on appeal in a decision that found Judge Nottingham made key errors.
But the government got the conviction reinstated by a split judges' panel."
Unfortunately, running an anti-virus is an overly broad requirement in some industries to pass certifications and audits. It's one of the cases where "security" mandates and requirements lead to insecurity.
You either do what they want or die. Your choice.
This is not impractical for my situation, because I do not have a large throughput of dubious files, perhaps a couple every 6 months or so.
Surely, in this respect, it stands to reason that this section of the tech services industry is more robust in the face of such an encroachment. The only losers in such a situation are the likes of Symantec, who claim secrecy and obfuscation are a feature rather than a bug.
The fingerprints would have to be some sort of data, like regular expressions or another limited instruction set that can only parse the incoming file and cannot communicate with the outside world.
The company could automatically release fingerprints into the open after a time, say 6 months.
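Something like this sketch, where the fingerprint set is pure data and the matcher can, by construction, only read the buffer it is handed; the patterns and names are invented for illustration:

    import re

    FINGERPRINTS = {
        "Example.Dropper.A": re.compile(rb"MZ.{0,512}CreateRemoteThread", re.DOTALL),
        "Example.Packer.B":  re.compile(rb"UPX[0-9]"),
    }

    def match_fingerprints(data: bytes):
        # No sockets, no filesystem: just pattern matching over one buffer.
        return [name for name, pat in FINGERPRINTS.items() if pat.search(data)]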
This seems like a pretty good business model idea though!
Say what you want about Russians but they know how the game is played. And they are good at it too.
There's no guarantee that the source code you are looking at matches the binaries that are being distributed, is there?
Couldn't one compile from source and then compare blobs?
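Something like this sketch, assuming a reproducible build (bit-for-bit identical given the same toolchain; in practice timestamps, embedded paths, and compiler versions break a naive diff); the paths and build command here are hypothetical:

    import hashlib, subprocess

    def sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    subprocess.run(["make", "release"], check=True)  # build from the audited source
    ours = sha256("build/scanner.bin")               # what we just built
    theirs = sha256("vendor/scanner.bin")            # what the vendor ships
    print("match" if ours == theirs else "MISMATCH: binaries differ")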
Is there a link on this you recommend?
1) From a security point of view, put yourself in the shoes of the other states. The NSA and its friends have a well-proven history of backdoors and state-sponsored malware, from the Stuxnet/Flame family to the malware found embedded in hard-drive firmware (the story was on HN recently, I'll try to find it). So demanding a review is very normal; as a matter of fact, I'd say it's abnormal for a government to adopt a security product that holds administrative rights on the computer without first inspecting its code for backdoors. There is no such thing as a better state. I read in the comments "politically charged states". Well, from the point of view of a Russian, the US is a politically charged state. Keep it relative, ladies and gentlemen.
2) I've read people complaining about how "the way they scan"/"the way they do the detection" will be compromised. The way AV software works IN GENERAL doesn't differ much from one product to another. From a binary of the software, one can identify with "relative ease" (for threat actors sponsored by governments) when the unpacking happens, when decompression happens, the sandboxing, the hashing of blocks, etc. As for the parts that are unique to the AV, for example the watchdog parts and heuristics, these can also be reverse-engineered or simply obtained through classical spying.
So all in all, source code reviews are, in my opinion, a very necessary thing. Frankly, if a simple source code review is going to fundamentally break your AV software, there has to be something wrong with that product; setting aside the government looking at the source code, hundreds of devs have already looked at it.
Am I missing some key piece of information? Why is Symantec willing to allow the US government to review the code but not "foreign" states? What makes the US government the pinnacle of virtue and honor?
Furthermore, if you said "well, only private users can review the code," then every government would just ask its code reviewers to independently purchase private licenses. There's no way to keep government users out of source code reviews unless you block code reviews wholesale.
> These are secrets, or things necessary to defend
Backdoors from NSA?
"In a Shadowserver six-month test between June and December 2011, ClamAV detected over 75.45% of all viruses tested, putting it in fifth place behind AhnLab, Avira, BitDefender and Avast. AhnLab, the top antivirus, detected 80.28%."
> Can it intercept network traffic
"On Linux servers ClamAV can be run in daemon mode, servicing requests to scan files sent from other processes. These can include mail exchange programs, files on Samba shares, or packets of data passing through a proxy server (IPCop, for example, has an add-on called Copfilter which scans incoming packets for malicious data)."
It seems that there is a third party tool that provides heuristic detection.
From the article:
Tech companies have been under increasing pressure to allow the Russian government to examine source code, the closely guarded inner workings of software, in exchange for approvals to sell products in Russia.
There's also a very different mode to their relationships. Symantec is selling to the Russian government - bureaucracies, and it's selling black boxes. Russia is trying to leverage it, to give it advantage in a different mode - Intelligence. Both Symantec and the Russian intelligence agencies are in the infosec business. It's not that uncommon for businesses to do business despite competing in some areas - Samsung was a core iPhone supplier despite also making phones.
The source-code reviews by foreign states are not about checking to see 'if it works' - it's about checking that it doesn't include inserts from NSA etc..
This has nothing to do with 'security review' in the general sense of robustness, it's a 'state actor' thing.
I don't see how there is a way around this.
It's doubtful that Russia will allow them to sell this stuff without reviewing it, and the reverse is true as well: Russian state actors will surely use this 'review' as an opportunity to refine their own hacking tactics, etc.
I don't see any real way around this in the world in which we live.
Russians are going to have to make their own anti-virus. Which I would imagine they are capable of doing.
Can't speak for the US; here in Germany, Kaspersky tools are used at large companies responsible for critical infrastructure. Given their option for source code review, I still have a good impression of their tools compared to Symantec, which offers no option for review.
That is a terrible idea, as anyone who knows anything about Russia would tell you.
Using American software, you might be getting NSA's eyes on you.
Using Russian software, you'll probably have FSB's eyes on you.
So take your pick :)
Propaganda aside and looking at facts, one company is providing access to review their source code and the other one is not.
Which one of them is a terrible idea to purchase?
You gain the tacit approval of the product from an adversarial government customer, which signals other enterprises in the country they should feel comfortable buying the product.
I can understand the worry for anti-malware products from startups and innovative companies that are doing impressive work and surely need that secrecy to thrive against the market gorillas. But Symantec can't really be called innovative since 2009 or so, so the security-by-obscurity mantra doesn't make much sense unless there are other reasons.
I'd assume the US govt is as much at risk as any other.
Easier target than a nation state.
This practice is fundamentally flawed.