Symantec CEO says source code reviews by foreign states pose unacceptable risk (reuters.com)
171 points by ohjeez 9 days ago | 117 comments





Yeah, sure. That's the company which, according to Google back in March, has a huge mess in its own Certificate Authority nest, resulting in Google Chrome distrusting their certs: https://arstechnica.com/information-technology/2017/03/googl...

This. I would trust most non-security tech CEOs to give better security advice than the executives at Symantec.

The Symantec CEO has been in that position only since Symantec acquired Blue Coat last year, where he was previously CEO. The Symantec CA mess happened well before his current term.

Blue Coat, having problems with foreign governments being unfriendly?

I guess what goes around comes around: http://surveillance.rsf.org/en/blue-coat-2/


What a fantastic website that is! To add to your point: it's amazing how skewed one's morals can be when they believe they're doing the right thing by providing those countries with these kinds of tools.

More reasons "trust us" isn't acceptable anymore.

https://en.m.wikipedia.org/wiki/Unethical_human_experimentat...


In the case of Syria, this must surely have been used to help the regime identify people to be disappeared.

He's doing a bang-up job so far then!

Yup. The tire fire of the security industry is saying that letting other people look at the code is a security risk.

Personally, I think maybe they got scared that someone might figure out that their code is worse than everything else on the planet, and so they want to try to put the clothes back onto the Emperor.


Not sure that applies in this case. If the only party you allow to look at your code is one that's heavily invested in secretly hacking it, then you're probably going to give them some zero-days which they'll use maliciously. It's different if you let everyone look at it, since a white-hat might find something helpful. But this isn't a choice between everyone and no-one, it's between a known black-hat and no-one.

Why is it not a choice between everyone and no-one?

They have already divested their certificate business. http://investor.symantec.com/About/Investors/press-releases/...

So they're basically admitting that their antivirus tools aren't secure enough to handle a basic code review?

Yup, totally makes me want to buy copies.

"No, guys, security by obscurity totally works in this one case! Because it's us! Come on, you trust us right?"


To play devil's advocate, they may not be worried about vulnerabilities in their code but rather vulnerabilities in their method of virus detection, the same way Google doesn't share details about their search algorithm partly so it isn't gamed by spammers. Actually this is common in software that is meant to protect against sophisticated attackers. Blizzard and Valve used to have periodic mass bans but they would never say what exact action triggered a ban. In fact you would get no information and the ban itself may have come months after some hack was used so that crackers wouldn't know what specifically triggered it.

> To play devil's advocate, they may not be worried about vulnerabilities in their code but rather vulnerabilities in their method of virus detection

This is an argument for factoring out the means of virus detection into a closed-source plugin/module, while opening the source of the rest of the code. Particularly since detection is presumably pure (i.e. functional programming notions of purity and referential transparency), and thus much less likely to be a source of vulnerabilities, compared to the rest of the client which actually interacts with the OS, disks/files, etc. and is therefore much more likely to be exploited. Because the vulnerability scanner would still be a closed-source binary blob, the public would need to trust the company that the blob is actually pure, but seeing that blob within the context of an open-source client which is handling I/O makes that trust easier.
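
To make that concrete, here is a minimal sketch of the proposed split, assuming a hypothetical closed-source blob (libdetect.so) exposing a single pure entry point, while all file and OS interaction stays on the open-source side:

    import ctypes

    # Closed-source detection module: the only opaque part of the client.
    # "libdetect.so" and its symbol are hypothetical stand-ins.
    _blob = ctypes.CDLL("./libdetect.so")
    _blob.scan_buffer.argtypes = [ctypes.c_char_p, ctypes.c_size_t]
    _blob.scan_buffer.restype = ctypes.c_int   # 0 = clean, 1 = malicious

    def scan_file(path):
        """Open-source side: owns all file/OS interaction."""
        with open(path, "rb") as f:
            data = f.read()
        # The blob only ever sees an in-memory buffer: no file handles,
        # no sockets, nothing to exploit but the (pure) parser itself.
        return _blob.scan_buffer(data, len(data)) == 1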

Yes, it makes it easier for malware creators to test their creations against the closed-source module before releasing their malware into the wild. But sophisticated malware writers are already doing that, by installing the anti-virus client into a VM, updating it, disconnecting it from networks, then loading the malware into the VM and seeing if the malware is detected or not. So malware writers don't gain that much from the opening of the rest of the codebase (unless they succeed in finding vulnerabilities that the rest of the world doesn't), and the white-hat public gains a much more trustworthy security tool.


Well, that’s an argument they should have made! I think it’s extremely charitable to assume this is why, though, when every indication points to code-audit fearmongering.

But you're also forgetting that these virus scanners can be vulnerabilities and exploits in themselves; I seem to remember one virus exploited a flaw in the compression code of a virus scanner to install some kind of malware. Keeping something a trade secret doesn't exactly lessen the risk of such flaws existing.


What's the difference between vulnerabilities in code and vulnerabilities in virus detection? Isn't the virus detection done in code? Is security through obscurity valid for virus detection but not code?

I don't think the parent is talking about vulnerabilities, but about the fact that if you know how the antivirus engine works, it may be easier to write a virus able to avoid detection.

That makes sense, though I think there's still a large difference between virus detection and the ranking-algorithm comparison. The entirety of the virus detection code is running on the client's PC; surely it can be reverse engineered and understood fairly successfully?

The same can't really be said of Google's algorithm, as it's essentially a hugely complex black box, and you can barely interact with it. That's kind of like reverse engineering a chip purely using its inputs / outputs.


Sounds like a vulnerability. Isn't that how the argument went about source code? "If you know how the program works it may be easier to write an exploit." But then experience taught people that exposing source code to the bright sunlight by opening its source could actually make software more secure through many eyes finding holes. Why is this not applicable to virus detection algorithms?

Now that I think about it, you have a point. In general, when I think about a (software) vulnerability, I think about taking advantage of some bug or unforeseen behavior of the software. If the software is acting as intended but cannot protect you from a certain kind of issue, can we say it has a vulnerability? My answer was no before; now I am in doubt :-).

> "If you know how the program works it may be easier to write an exploit."

BTW, this is true. Seeing the source code versus having to go through assembly listings - I know which one I'd pick if I had to find logic bugs.

> Why is this not applicable to virus detection algorithms?

It's not an algorithm, but a heuristic. If you want to look for a suspect, you don't announce "I'm looking for someone 5 feet 5 inches tall with a buzz cut who drives a Ford and wears size 12 Nike sneakers". In much the same way, security via heuristics doesn't mean creating a perfect detection system, because such a system doesn't exist. They want to make the game harder to play by hiding the rules of the game, not because they're sure that they're going to win. This is a real, tangible benefit for the customers. There is nothing really special about it; we've been using such ideas for centuries.


How do you build a heuristic if not with an algorithm? Perhaps the entire AV model employed by Symantec is flawed.

> How do you build a heuristic if not with an algorithm?

I don't know what that means. Perhaps superficially there is some overlap, since both run on deterministic hardware, but a heuristic is completely different from an algorithm. It's a technique that can perhaps give you an imperfect answer to the question you're asking. An algorithm describes a method which, if followed, gives you the answer. Here is an AV heuristic that I made up just now:

- Is it encrypted? +1 point

- Does it contain self-modifying/unpacking code? +1 point

- Does it call OS APIs to monitor running programs? +1 point

- Does it run at startup? +1 point

- Does it have no UI? +1 point

- Does it try to punch a hole through NAT? +1 point

- Does its process name contain random strings? +1 point

If you get > 5 points, hash the executable and send the hash/executable for analysis.
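
A toy Python version of that scoring scheme (the trait names are hypothetical stand-ins for real static/dynamic analysis):

    import hashlib

    # Each trait corresponds to one "+1 point" question above. A real
    # engine would derive these by analyzing the binary; here they are
    # simply given as observed facts about the sample.
    TRAITS = ["encrypted", "self_modifying", "monitors_processes",
              "runs_at_startup", "no_ui", "punches_nat_hole",
              "random_process_name"]

    def suspicion_score(observed):
        return sum(1 for t in TRAITS if t in observed)

    def triage(executable, observed):
        # > 5 points: hash the executable and flag it for analysis
        if suspicion_score(observed) > 5:
            return hashlib.sha256(executable).hexdigest()
        return None

    # Six traits observed -> score 6 -> flagged
    print(triage(b"\x90\x90", {"encrypted", "self_modifying",
                               "monitors_processes", "runs_at_startup",
                               "no_ui", "punches_nat_hole"}))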

> Perhaps the entire AV model employed by Symantec is flawed.

Well, for one, the heuristic isn't the "entire AV model". But what makes you think the entire AV model is flawed? Every major OS uses parts of the AV model.


Because it's not that easy. If I write some code to detect whether you are a good human and whether you will go to hell or heaven, evaluating that would be hard for me. And if you had access to my source code, you could check what I am looking for and could maybe cheat.

What I would call a vulnerability is if I sent you to hell and you found a way to escape.


I can second that. A lot of virus detection basically boils down to detecting one particular substring, which is usually quite easily bypassed.

Somebody please reply to this. Both this comment and the above comment seem reasonable. I don't know what to believe!

To me, worrying about "vulnerabilities in their virus detection method" seems an unlikely motive.

We're talking about downloadable software here, not a cloud service like Google. Once a hostile nation state has access to your binaries (as it would with an installed product like A-V), it can just fuzz the A-V detection method to find bypasses.

Heck, that's what pentesters and red teamers do on a regular basis; A-V bypass is a common thing in that world, so if people at that level can do it, you can bet that nation-state actors can too.
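
A toy illustration of what fuzzing-for-bypass looks like, with a hypothetical substring signature standing in for a locally installed engine (real attackers would use semantics-preserving transforms like packing or re-encoding rather than blind bit flips):

    import random

    # Stand-in for invoking the locally installed AV engine as an oracle;
    # the "signature" here is just the hypothetical substring b"EVIL".
    def detected(payload):
        return b"EVIL" in payload

    def mutate(payload):
        i = random.randrange(len(payload))
        return payload[:i] + bytes([payload[i] ^ 0xFF]) + payload[i + 1:]

    payload = b"...EVIL..."
    attempts = 0
    while detected(payload):
        payload = mutate(payload)   # keep mutating until the oracle
        attempts += 1               # reports the sample as clean
    print("bypass after", attempts, "mutations:", payload)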


Yeah, when I worked at Malwarebytes we did not really care about this issue. If people are going to download it, they are going to reverse engineer it.

We also did third-party security audits on a regular basis, but still wouldn't be comfortable allowing that to be done by other countries. Purely my own opinion here, but my concern wouldn't be a security one so much as an intellectual property one: it's pretty well known that other governments (China, Russia) have strong links to their commercial sectors and little regard for IP protection.


I believe the latter post (obfuscating the method of detection) over incompetence.

Don't forget that nation states also produce malware (Recall Stuxnet?) [0] and evading detection is substantially easier when you know exactly what to avoid doing.

[0] https://en.m.wikipedia.org/wiki/Stuxnet


Evading detection is easy if you have the slightest clue of what you're doing. Antivirus evasion simply isn't difficult enough for this to be a reasonable explanation.

You're intentionally conflating "basic code review" with "politically charged state actor performing code review", which are not the same thing.

Did they say they allow no audit or outside code review?

Or simply that nation states with intelligence agencies that actively subvert security solutions to compromise computers (the very things AV companies work to prevent) shouldn't have access to the very cookie jar they work to steal from?

Frankly, I have no idea why you'd let people review your source code who have a vested interest in finding exploits that they will use against people using your software.


Usually companies allow source code review because they're trying to sell their solutions in the countries in question.

Look at it from the perspective of those countries:

Symantec "hey buy all our security software it's super-great"

Foreign Gov. Customer: "sure can we check the source code first to see if there are any heinous security bugs or NSA backdoors"

Symantec "Oh gee no, allowing to you see the source code of products we want you or companies in your country to run might compromise it's security"

Foreign Gov. Customer: "..."


They're completely okay with that response. What they're worried about is that customers in the U.S. government would consider their product more secure if they can ensure that potential attackers in, e.g., the Russian government don't have access to the source code.

You can't please all customers when one customer wants you to protect them from another potential customer of yours; you have to pick a side and stick to it.


Well, in this case it's not a big problem; as stated in the article, Symantec didn't do much business in Russia.

However, let's extrapolate and ask what would happen if the same thing were applied to Apple or Microsoft, who sell a very large amount of software to countries like China.

Should they forbid China access to their source code due to concerns from US customers?

Would their shareholders be happy if they did? China is a large market; loss of access to it would be bad for a company's financial health.


Symantec: "No, our code is audited professionally by the most reputable international firms along international standards of quality. You can review our audit reports and engage with the international body responsible for regulating audits to raise any concerns"

Foreign Gov. Customer: "But what I really want is for my tech spooks to scan pre-selected high value modules for already known and suspected zero day exploits for our own clandestine use"

Symantec: "...."


What "interational body for regulating audits" would that be, I'm not aware of any such body...?

Also, if Symantec won't trust their customers to that degree, why should a foreign government or its key industries trust Symantec software?

If a US audit firm audits US software, why should a foreign government trust that there isn't a US backdoor in there? Or that the US audits haven't uncovered issues which, instead of being patched, were handed to the NSA for later use by its TAO teams... (WannaCry, anyone?)

Obviously Symantec is free to withdraw from a given market, as they have here, but to suggest that trust is a one-way street seems, well, a bit unbalanced.


I do not personally believe that in today's world national security and software can be separated.

As an American, I certainly would not trust any non-American AV software.

I would assume that all AV made in another country is compromised by that country's government intelligence. That would be a safe assumption.

I would be safer using American AV, as an American, because despite what the anti-government propaganda wants us to believe, it's far harder for the NSA to spy on Americans than on non-Americans.

Regardless, this entire thread (and your post) seems to treat nation-state actors as inherently innocent, which is so blindly naive that it's difficult to rationally respond to.

But this is the nature of cyberwar: damaging, effective, widespread, yet invisible and plausibly deniable.

Symantec giving source to Russia should be seen as a violation of American national security at this point, because it gives a hostile foreign government a blueprint to attack US networks.


You've picked me up entirely incorrectly if you think I'm of the opinion that nation state actors are innocent.

My point is that if the US treats foreign governments as dangerous, then those foreign governments should equally treat the US, including US software, as dangerous.

Given US software companies' international sales volumes, that's a massive existential threat to the US economy.

If China/Europe/Russia etc. stop using US software products, then what will happen to the profits of Microsoft/Google/Apple et al.?

My other point was the apparent one-way nature of trust that I felt you were implying: that foreign governments should trust US software whilst at the same time accepting that those software companies do not trust them.


Well, we are all citizens of one country or another. So what exactly does outside code review mean? American code can only be reviewed by American code reviewers?

Would you trust Chinese software that was only ever allowed to be reviewed by Chinese auditors?


Private firms earn their reputations by their behavior. We have international/multi-national organizations and NGOs which can exist beyond the politics of the nation states they reside in.

You should trust a firm to review your code not based on its nationality, but based on a wider set of criteria.

Included in those criteria, for me, would be whether or not the organization is committed to subverting your software through intelligence operations.

But, that's just an end-around because all countries with markets worth selling in have intelligence agencies which subvert AV and other software for clandestine purposes, so all nation states are excluded.

W.r.t Chinese auditors, because of their oppressive and authoritarian government which goes so much further than western governments to control business and speech, and which has a much deeper history of subverting any control structure outside of the Communist Party, I would certainly treat their work as suspect by nature, but if there were a Chinese auditing firm renowned for its quality, privacy and separation from their government, I don't see why I wouldn't consider it.


Well, to be fair, they don't really have a choice in the matter.

Open it up to code review by only a small number of people, mainly governments, and you are opening it up to a small set of reviewers explicitly driven by the primary intention of finding vulnerabilities in it. This applies even to the US govt, which routinely requests that software vendors delay patching, or even disclosing, 0-day vulnerabilities until it has sufficiently exploited them.

Allowing more scrutiny will work only if enough eyeballs driven by benevolent intentions are devoted to it. The best results would come from open-sourcing the whole thing, but that would not make business sense for the company.

Basically, either you open it up completely or not at all. Opening it up to a few government-funded hackers is probably the worst choice they could make.


Obscurity is a valid part of some security schemes. It shouldn't be the only method, of course.

> “As a vendor here in the United States,” Clark said, “we are headquartered in a country where it is OK to say no.”

Until the government comes knocking and can demand pretty much everything with your only option being a secret court that always sides with the government anyway.

Is it too much tinfoil to think that this isn't so much about "putting security over sales" as it is about "making sure that NSA backdoor remains hidden"?


Just ask the former Qwest CEO about saying “no”.

What about Dreamhost (see yesterday's story)? Or the countless cases that end up with the government not getting what it wants?

There are courts, and rules, and sometimes rules mean you hand things over, but sometimes they mean you can say no.


Could you elaborate?

2013:

http://www.businessinsider.com/the-story-of-joseph-nacchio-a...

"Only One Big Telecom CEO Refused To Cave To The NSA ... And He's Been In Jail For 4 Years"

2015:

https://www.forbes.com/sites/janetnovack/2015/05/01/u-s-avoi...

"the government has avoided a trial in which the 65-year-old former executive planned to air what he says was his refusal, in 2001, to allow Qwest to participate in a National Security Agency program he believed was illegal."

2016:

http://fullmeasure.news/news/politics/encryption-battle


I think it's unfair not to clarify that he went to jail for insider trading.

He believes that the government only brought the action against him because he refused to divulge user data, but he is in jail because of insider trading.


Some details from the last link:

"the NSA proposition to Qwest was nearly seven months before 9/11, according to Nacchio."

"In a bizarre twist, the judge in Nacchio's case, Edward Nottingham, was soon embroiled in scandal, accused of soliciting prostitutes and allegedly asking one to lie to investigators. He resigned and apologized, but wasn't prosecuted."

"Nacchio's conviction was overturned on appeal in a decision that found Judge Nottingham made key errors.

But the government got the conviction reinstated by a split judges' panel."


Anti-virus products are a huge security risk.

Some might think this is a joke, but it is dead serious:

https://googleprojectzero.blogspot.com/2016/06/how-to-compro...

Unfortunately, running an anti-virus is an overly broad requirement in some industries to pass certifications and audits. It's one of the cases where "security" mandates and requirements lead to insecurity.


Our insurance company made us all install anti-virus software.

They didn't make you do anything. They either refused to insure you or would raise your rate if you didn't.

made, required, mandated... I don't see how this makes a significant difference.

No, they didn't hold a gun to their heads, pretty sure. I think it's pretty clear they made it a condition of not dropping them or not raising their premiums.

Eh. Holding a gun to your head doesn't _make_ you do anything.

You either do what they want or die. Your choice.


I have found it best to treat anti-virus products like malware themselves. They only get to live inside a VM for the sole purpose of antivirus scanning. This VM has access to several different antivirus products, and I use a battery of them (after updating signatures) to scan any file that I am leery of trusting. I delete the VM afterwards.

This is not impractical for my situation, because I do not have a large throughput of dubious files, perhaps a couple every 6 months or so.


There's also VirusTotal.

The article decries balkanization of tech services but it noticeably omits a middle path -- offering consulting services for open source software.

Surely, in this aspect, it stands to reason that this section of the tech services industry is more robust in the face of such an encroachment. The only losers in such a situation are the likes of Symantec, who claim secrecy and obfuscation are a feature rather than a bug.


It would make more sense to make the code open source but not free: anyone can see the code but nobody can use it.

The term open source implies the code is free. Simply having the source available does not make it open source.

And how would you enforce that? What would prevent anyone with access to the code from building it and using it? I don't see any way, except maybe stripping the code of significant parts.

You could keep the virus fingerprint database outside the codebase. Customers would then pay for access (and updates) to the fingerprints.

The fingerprints would have to be some sort of data, like regular expressions or another limited instruction set that can only parse the incoming file and not communicate with the outside world.

The company could automatically release fingerprints into the open after a time, say 6 months.
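
Sketched in Python, assuming a hypothetical feed format of one rule per line, with the name and the regex separated by a tab:

    import re

    def load_fingerprints(path):
        # Hypothetical paid feed: one rule per line, "name<TAB>regex"
        rules = []
        with open(path, "rb") as feed:
            for line in feed:
                name, _, pattern = line.rstrip(b"\n").partition(b"\t")
                rules.append((name.decode(), re.compile(pattern)))
        return rules

    def scan(data, rules):
        # The rule language can only match bytes: it has no way to touch
        # the filesystem or network, which is the point of the design.
        return [name for name, rx in rules if rx.search(data)]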


So... stripping the code of significant parts

This seems like a pretty good business model idea though!


Copyright laws?

I would think all serious clients would want to review the source code of any security-critical software that they intend to use. However, there is some argument that allowing only selected clients (Russia) to review the source, while denying the larger security community access to it, does pose a risk. And of course Symantec surely does not intend to imply that its code should be published.

Meanwhile, my own government, in its infinite stupidity, is storing confidential tax records on US-owned clouds. Apparently they haven't revised their 1950s policy that the Americans are the good guys.

Say what you want about the Russians, but they know how the game is played. And they are good at it, too.


Do you need to access source code in order to analyze software for backdoors? Shouldn't you be looking directly at the compiled machine code?

There's no guarantee that the source code you are looking at matches the binaries that are being distributed, is there?


> There's no guarantee that the source code you are looking at matches the binaries that are being distributed, is there?

Couldn't one compile from source and then compare blobs?
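
In miniature, yes (assuming the build is bit-for-bit reproducible, which, as noted below, is the hard part):

    import hashlib
    import sys

    def digest(path):
        # Stream the file through SHA-256 so large binaries are fine
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        # usage: compare_blobs.py <our-build> <vendor-binary>
        ours, theirs = sys.argv[1], sys.argv[2]
        print("match" if digest(ours) == digest(theirs) else "MISMATCH")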


I wonder how many commercial code bases/products can do that. Certainly an interesting proposal for verification purposes like this, but e.g. the recent Debian efforts show that it is not trivial.

> the recent Debian efforts show that it is not trivial

Is there a link on this you recommend?



https://reproducible-builds.org/docs/ (it's linked from the Debian site already posted by leni536, but wanted to explicitly point it out since it explains the issues and solutions very well)

If you want to find and develop exploits, having the source code certainly makes it easier - the expected use case here isn't the foreign gov't ensuring that Symantec software is safe so they can use it, but rather the foreign gov't developing exploits to circumvent Symantec software when attacking e.g. USA computers.

I think there are two points to consider:

1) From a security point of view, put yourself in the shoes of the other states. The NSA and its friends have a well-proven history of backdoors and state-sponsored malware, from the Stuxnet/Flame family to the backdoors found in hard-drive firmware (the story was on HN recently, I'll try to find it). So it is very normal; as a matter of fact, I'd say it's abnormal for a government to adopt a security product that holds administrative rights on the computer without first inspecting its code for backdoors. There is no such thing as a better state. I read in the comments "politically charged states". Well, from the point of view of a Russian, the US is a politically charged state. Keep it relative, ladies and gentlemen.

2) I've read people complaining about how "the way they scan"/"the way they do the detection" will be compromised. The way AV software works IN GENERAL doesn't differ much from one product to another. From a binary of the software, one can identify with "relative ease" (for threat actors sponsored by governments) when the unpacking happens, when decompression happens, the sandboxing, the block hashing, etc. As for the parts that are unique to the AV, for example the watchdog parts and the heuristics, these can also be reverse-engineered, or just obtained through classical spying.

So all in all, source code reviews are, in my opinion, a very necessary thing. Frankly, if a simple source code review is going to fundamentally break your AV software, there has to be something wrong with that product. Setting aside the government looking at the source code, hundreds of devs have already looked at it.


I have always found this fear over "foreign states" to be a bit odd. Sure, if you are an international company or a US government contractor it might be a concern. But for me, a natural-born US citizen who rarely if ever travels abroad, and never to China or Russia, there are limited reasons to fear those nation states. Sure, they could steal my identity and cause me some momentary financial harm, but the US government is empowered to put me in a cage, physically harm me, or even kill me. With the state of the legal system, and the massive number of laws and regulations that can be used to arrest practically anyone at any time, I have much, much more to fear from the US government obtaining my information than from the Russian or Chinese governments.

Am I missing some key piece of information? Why is Symantec willing to allow the US government to review the code but not "foreign" states? What makes the US government the pinnacle of virtue and honor?


It's interesting that Symantec claims to be denying governments the ability to review code for the safety of their end-users, but won't allow those same end-users the ability to review the code for themselves.

In many cases those governments are the end users. They're buying the same licenses as run-of-the-mill businesses.

Furthermore, if you said "well, only private users can review the code," then every government would just ask its code reviewers to independently purchase private licenses. There's no way to keep government users out of source code reviews unless you block code reviews entirely.


Without code review, how can a government make sure that the product doesn't contain backdoors?

> These are secrets, or things necessary to defend

Backdoors from NSA?


Learn from Israel - hack Kaspersky and do the code review from inside. :-)

Run ClamAV instead; it's built to run continuously on sensitive servers, handling untrusted data sent directly over email.

Is it comparable to commercial AV solutions? Can it intercept network traffic, run executables in a sandbox, use heuristics for detecting new viruses?

> Is it comparable to commercial AV solutions?

"In a Shadowserver six-month test between June and December 2011, ClamAV detected over 75.45% of all viruses tested, putting it in fifth place behind AhnLab, Avira, BitDefender and Avast. AhnLab, the top antivirus, detected 80.28%.[9]"

> Can it intercept network traffic

"On Linux servers ClamAV can be run in daemon mode, servicing requests to scan files sent from other processes. These can include mail exchange programs, files on Samba shares, or packets of data passing through a proxy server (IPCop, for example, has an add-on called Copfilter which scans incoming packets for malicious data)."

It seems that there is a third party tool that provides heuristic detection.

Source: Wikipedia
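
For the daemon mode, here is a minimal sketch using the third-party Python "clamd" package against a locally running clamd daemon; the socket path is the common Debian/Ubuntu default, so adjust for your distro:

    import io
    import clamd  # third-party package; assumes clamd is running locally

    cd = clamd.ClamdUnixSocket(path="/var/run/clamav/clamd.ctl")
    assert cd.ping() == "PONG"

    # EICAR test string: harmless, but every scanner must flag it
    eicar = (b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-"
             b"ANTIVIRUS-TEST-FILE!$H+H*")
    print(cd.instream(io.BytesIO(eicar)))
    # -> {'stream': ('FOUND', 'Eicar-Test-Signature')}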


ClamAV's detection ability is really a joke.

It's 94% as good as the best AV (75.45% vs 80.28% detection), so not too bad.

The truth is all AV is pretty bad.

They're not so arrogant about their code being bulletproof that they're willing to hand it to an adversary and say, "Sure, knock yourself out - see if you can find any holes"? Yeah, I'm not sure that I see a problem here.

If Symantec considers the government to be an adversary, then why are they trying to court them as a customer?

From the article:

Tech companies have been under increasing pressure to allow the Russian government to examine source code, the closely guarded inner workings of software, in exchange for approvals to sell products in Russia.


Customers are often your adversaries; in fact, it's almost always the case. They're looking to extract concessions, get cheaper goods and services, etc. The customer has an incentive to bleed as much as they can from the seller, and the seller likewise has an incentive to bleed as much as they can from the customer. Cooperation despite an adversarial relationship is the great benefit of capitalism, but that doesn't mean you can ignore the adversity.

There's also a very different mode to their relationship. Symantec is selling to the Russian government, i.e. bureaucracies, and it's selling black boxes. Russia is trying to leverage that to gain an advantage in a different mode: intelligence. Both Symantec and the Russian intelligence agencies are in the infosec business, and it's not that uncommon for businesses to do business despite competing in some areas; Samsung was a core iPhone supplier despite also making phones.


"They're not so arrogant about their code being bulletproof that"

The source-code reviews by foreign states are not about checking to see 'if it works'; they're about checking that it doesn't include inserts from the NSA etc.

This has nothing to do with 'security review' in the general sense of robustness, it's a 'state actor' thing.

I don't see how there is a way around this.

It's doubtful that Russia will allow them to sell this stuff without reviewing it, and the reverse is true as well: Russian state actors will surely use this 'review' as an opportunity to advance their own hacking tactics.

I don't see any real way around this in the world in which we live.

The Russians are going to have to make their own anti-virus, which I would imagine they are capable of doing.


Well, Kaspersky Lab is a Russian company.

Interestingly enough, the CEO of Kaspersky has already offered the source code for review in the US: https://www.engadget.com/2017/07/02/kaspersky-lab-offers-sou...

Can't speak for the US, but here in Germany the Kaspersky tools are used at large companies responsible for critical infrastructure. Given the option for source code review, I still have a better impression of their tools compared to Symantec, with no option for review.


> Can't speak for the US, but here in Germany the Kaspersky tools are used at large companies responsible for critical infrastructure.

That is a terrible idea, as anyone who knows anything about Russia would tell you.


Ya.

Using American software, you might be getting NSA's eyes on you.

Using Russian software, you'll probably have FSB's eyes on you.

So take your pick :)


Even in that sense, choosing is quite simple, since for all kinds of gov't usage (e.g. in the military of a NATO country) the NSA is an ally of Germany but the FSB is actively hostile to it.

Would you rather be East Germany or West Germany?

Germany shares closer and deeper ties with both countries than most would imagine these days.

Propaganda aside and looking at facts, one company is providing access to review their source code and the other one is not.

Which one of them is a terrible idea to purchase?


Would you expect an adversary to actually tell you the holes they find? If they don't, what do you gain from sharing source code with them?

You gain the chance to sell the product to the adversary that is also your customer.

You gain the tacit approval of the product from an adversarial government customer, which signals other enterprises in the country they should feel comfortable buying the product.


You lose the chance to sell to everyone else that considers the adversary a threat, because you have made it easier for the adversary to attack or evade your product than those of your competitors.

Not seeing the problem doesn't prevent it from being there. Security by obscurity has a track record of not working.

Of course it doesn't work. But handing your source code to your attacker works even less well than that.

Yes. Handing out code to everyone (open source) can provide many benefits, but there's no benefit to handing it over only to agents you do not expect to cooperate (foreign governments, or governments in general).

Plus the chance for competitors to completely understand and replicate your brand new techniques for malware exposure.

I can understand the worry about anti-malware products from startups and innovative companies that are demonstrating impressive work and surely need that secrecy to thrive against the market gorillas. But Symantec can't really have been called innovative since 2009 or so; the security-by-obscurity mantra doesn't make much sense unless there are other reasons.


Taking the hiss out of the snake oil, wouldn't they?

Welp, he's straight-up lying. The source code would not include their actual virus definitions, which are binary patterns, so screw these guys.

I'm sure that properly run foreign states consider unauditable software an unacceptable security risk as well. Between this nonsense and Symantec's history of security mishaps, I'll be making sure to avoid this company from now on. I'll also recommend against dealing with them if asked.

Ridiculous.


The article doesn't say anything about blocking foreign states specifically; they are blocking all code reviews (the US doesn't generally require them, however, at least on the record). Perhaps update the title?

I'd assume the US govt is as much a risk as any other.


Right, just trust the US govt to do the morally right thing.

protip: Nevada requires the source code of all the video casino games.

Easier target than a nation state.

This practice is fundamentally flawed.


The irony... Installing Symantec software poses an unacceptable risk.

Poor NSA... now they surely won't be able to obtain the source code.

Colluded people like to fight in public.

The enemy knows the system.

Foreign Government code reviews, not code reviews in general.

Added above. Thanks!


