There are diagnostics in our network switches that allow traffic to be replicated and sent to other ports with a different destination MAC (this isn't port mirroring; it's more like port redirecting). Clearly, in the hands of a bad guy, this could be used to set up a machine on the LAN that gets a copy of all the traffic. Is it a cyberwar beachhead? Probably not. Could it be exploited in an attack? Probably. Of course, if someone tried to route all that traffic outside the network into the transit network it would be pretty obvious. So not a good scenario.
Like the controller back door article on Ars last month, I suspect most of these things are diagnostic aids. You ask an engineer to test something, that something is buried inside a bunch of silicon, and the only way to test it is to build some stuff in there that lets you look at things.
Of course you can do this in a 'smart' way and in a 'stupid' way. When I started at Intel there were extra pads on the silicon that connected to these extra functions; you ordered a 'bond-out' chip where bonding wires (between the chip pins and the silicon) would be attached to them. All of the in-circuit emulators up to the 386 had a 'bond-out' version in the emulator pod that gave you access to the internal state of the chip. Others have pointed out the key for loading replacement microcode, another 'feature' for fixing bugs in the field and doing diagnostics.
So things which require either 'special' chips or attaching a JTAG probe directly to the part are generally OK in my book. Once you have physical access, nearly all bets are off.
It's an expensive way to compromise the enemy. Simpler to just build a piece of gear that looks and operates exactly like the original but is your own design. There were some counterfeit Cisco boxes like this in the channel for a bit. Of course they 'fail' when you try to update IOS. Still, the cost to exploit is lower, and more assured, than back-dooring silicon in a fab.
It's also pretty hard to add features to a chip without the designer of the chip being in on the game. Every transistor is accounted for by long verification and analysis, so 'extra' ones would show up. That limits the risk to the chip manufacturer being the 'bad guy' (and they are very traceable, so unlikely to do that).
None of this though should take away from the excellent work Cambridge is doing. The silicon analysis is really cutting edge stuff, and I think it would be useful for chip designers in verifying their masks are accurate too. If you could effectively 'decompile' the resulting silicon and verify it against your netlist, that would catch mask errors. And that would save anywhere from $100,000 to $2,000,000 depending on size of the mask.
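The 'decompile and verify against your netlist' idea can be sketched in miniature. This is a toy, assuming netlists reduced to (output, gate, inputs) tuples; a real extracted-vs-golden comparison would have to cope with renamed nets and would need graph matching, which this skips:

```python
# Toy sketch of netlist-vs-netlist comparison, the kind of check a
# silicon "decompiler" would enable. The line format and gate names
# are invented for illustration.

def parse_netlist(text):
    """Each line: <output> = <GATE>(<in1>, <in2>, ...)"""
    gates = set()
    for line in text.strip().splitlines():
        out, expr = [s.strip() for s in line.split("=")]
        gate, args = expr.split("(", 1)
        inputs = tuple(a.strip() for a in args.rstrip(")").split(","))
        gates.add((out, gate.strip(), inputs))
    return gates

golden = parse_netlist("""
    n1 = AND(a, b)
    n2 = OR(n1, c)
""")

extracted = parse_netlist("""
    n1 = AND(a, b)
    n2 = OR(n1, c)
    n3 = AND(n2, secret_enable)
""")

extra = extracted - golden    # gates present in silicon but not in the design
missing = golden - extracted  # gates lost (a mask error, perhaps)
print(extra)
```

Anything in `extra` is either a mask error or a planted feature; anything in `missing` is the expensive kind of mask error the comment describes.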
iOS == Apple's mobile OS
IOS == Internetwork OS (Cisco gear)
Mac == Macintosh
MAC == Media Access Control (Address), common in configuration of Cisco equipment...
If "==" was appropriate then there would be no problem of ambiguity. And there would be no need for the clarification.
Because meanings could never change with context. There could be no "misinterpretation". Only the truth table result of "false".
Which is more important in human communication: case-sensitivity or context-sensitivity?
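The "==" point is concrete in any case-sensitive language; a two-line Python illustration:

```python
# String equality is case-sensitive and context-free: to a program,
# iOS and IOS are simply different tokens, full stop.
print("iOS" == "IOS")  # False: the acronyms the thread disambiguated
print("Mac" == "MAC")  # False
# Human readers, by contrast, disambiguate by context, not by case.
```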
Human communication is not a computer program.
With no evidence either way, I am guessing that US intelligence (and others?) are loudly saying this is happening not because they can prove it in silicon, but because convincing human intelligence has told them so.
I happen to think that the techniques developed to analyze silicon are going to be useful all round, and that a backdoor in a chip does not make a successful cyber attack in the wild. But if these are real in-the-fab additions,
then the implications are so large that we should be prepared not to label this "it's just a test component" and should look at ideas like validating silicon (noted somewhere else below).
My instincts would be that in the absence of real evidence they are 'loudly saying this is happening' to beat the war drums, declare it as proof a 'cyberwar' is happening, are using it to get more funding and preparing for new draconian measures to control it both domestically and internationally.
Given what I know of silicon chip manufacturing, and the verification that goes on during and after manufacturing, I assert it would be extraordinarily difficult for a fab operator (like TSMC) to insert a back door without the designer/manufacturer knowing it.
I also brought up that in my experience adding back doors was certainly done to aid in testability. Sometimes those aids are done in a way that they cannot be used by third parties (bond-out chips) and sometimes they could be (JTAG access) but are obscured in some way.
Backdoor access in the firmware, however, is a much easier threat to actualize, as it doesn't involve silicon hacking per se. So that is a more credible threat. And I mentioned that we've seen counterfeit versions of 'name brand' products already, which would be a fairly straightforward threat.
I simply don't get it:
* What happens when Intel releases the next generation of chips? Apparently Intel needs to build a whole new fab plant at x billion - does the NSA?
* Do they trust the designs made by Intel? If not, what do they do? If Intel is introducing backdoors for the NSA, what guarantee is there those backdoors won't get used by someone else?
* If they do trust the design but don't trust the fab process, surely it is better to put armed guards in the fab room, or similar checks.
* And this is only for one generation of one class of chip. Do this for the chips in the CCTV cameras and the door locks and the ...
I'd assume the NSA's fabrication capability is more on the scale of the pilot plants that fabs build at each new process node. Some universities certainly have fabrication equipment testbeds as well, so the NSA effort may be on that modest scale too.
If I were tasked with the problems the NSA faces, I think I'd at least focus in on:
1. CMOS reverse engineering equipment that can shave down dies, image and analyze the structures, etc.
2. Small scale fabrication for extremely sensitive infrastructure. These roles probably aren't performance critical. E.g., if you have some microcontroller that plays a role in, say, nuclear weapon arming protocols, you need that to be pretty much beyond suspicion.
3. Some way of sampling commodity parts for unexpected behavior non-destructively. If this could be done efficiently enough, you could use it in combination with #1 to get reasonable confidence for off the shelf parts.
One thing I'd suspect is that if the NSA did find highly targeted flaws they probably wouldn't disseminate that fact unless absolutely necessary. Keep an adversary using a strategy you know rather than provoking improvement.
Personally I doubt the NSA forces backdoors into commodity chips. In theory there might be some way of introducing a flaw that would cripple specific large computations, like cryptanalysis of a particular code, or bias a particular random number generator. But that just seems too likely to backfire.
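As a toy illustration of why a biased RNG is a risky backdoor: even a mild tilt shows up under the crudest statistical check. The bias level, sample size, and threshold here are all invented for the sketch:

```python
import random

def bit_stream(n, p_one=0.5, seed=0):
    """Generate n bits; p_one > 0.5 models a subtly sabotaged RNG."""
    rng = random.Random(seed)
    return [1 if rng.random() < p_one else 0 for _ in range(n)]

def looks_fair(bits, tolerance=0.01):
    """Crude frequency test: is the fraction of ones near 1/2?"""
    frac = sum(bits) / len(bits)
    return abs(frac - 0.5) < tolerance

honest = bit_stream(100_000, p_one=0.50)
biased = bit_stream(100_000, p_one=0.52)  # a 2% tilt toward 1s

print(looks_fair(honest), looks_fair(biased))
```

At 100,000 samples the standard deviation of the fair fraction is about 0.0016, so a 2% tilt sits more than ten sigma out; any adversary who bothers to count bits notices.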
I'd always thought it was interesting that the Pentium FDIV bug was most easily found by code calculating twin primes. But there may be a mundane explanation for that rather than cloak-and-dagger stuff.
I did a lot of work for the UK Ministry of Defence and the US Department of Defense over the years on custom silicon and FPGA work, and the paranoia factor is scary. We had the layouts of everything bought in - even 74-series logic, which can pretty much be assumed to be inert. Samples were regularly decapped and scanned using an SEM to make sure the vendors weren't screwing us or integrating backdoors.
Every part was asset managed to hell as well. Every part was traceable to the point that every finger that poked it was known (I moved from engineering to writing the asset management systems before leaving).
The chain is only as strong as its weakest link.
A manufacturer outsourcing stuff has a hell of a lot of documentation to forge. Each screw, each washer, each resistor, has a batch number that it can be traced to.
Outing Valerie Plame Wilson as a CIA agent, a 100% treason charge
Bottom line is to make any sort of computer at a remotely competitive price, you're probably going to use some Asian parts. At least some parts. Then it's a matter of where you draw the line and the price vs. risk. How about a Chinese power supply? It all depends on where and how the device is being used. Then it also depends on the system not "promoting" that device to another purpose.
You can manage it all and make it from only 100% trusted sources, but you know what? It's insanely expensive and by the time you get a computer, there are ones on the market 6x better.
There's a relationship here.
This isn't particularly surprising - several large "real money" investors (pensions and the like) have had this sort of relationship with the Treasury. It enabled them to directly place bids on Treasury issuance without going through Primary broker-dealers (Wall Street). They were called "Directs" and would bid through the "TreasuryDirect" system.
Basically enabled the largest investors in USG securities to bypass the "commissions" other investors would pay to Wall Street firms.
I'm guessing that for some reason foreign accounts such as governments and sovereign funds had not been given access to this system, and after a point, the Chinese government investment funds (which are some of the largest in the world, both in terms of funds managed and funds committed to the US) laid bare the inconsistency that they'd been disallowed this simply on the grounds of being foreign.
This is especially important now as China has widened the Yuan trading band. You wouldn't want excessive FX volatility to manifest itself as a result of your foreign-reserve management decisions being (mis)-interpreted by investment banks.
Primary dealers are not allowed to charge
customers money to bid on their behalf at Treasury
auctions, so China isn't saving money by cutting out
Sometimes the typos are more fun than the whole thread.
Does Wall St. check for backdoors in hardware?
Where else are they going to get the chips in the quantities required since the US outsourced most of its commercial silicon foundries? Of the few remaining in the US, the largest is wholly owned by the Taiwanese company TSMC. Post-industrial economics is idiotic, and this is one of the major examples of why.
We (government contractor) pay a bunch extra to buy tools made in the US - even if they aren't normally manufactured here - just to satisfy the buy US provisions.
Typically what happens is that a subcontractor says it is US made, and then outsources to a foreign country and pockets the difference. Obviously there is money to be made there.
However, that is defrauding the government and those subcontractors can/will/do go to federal prison.
You say that like the US Military (among other US and non-US government victims) doesn't have the ability to dictate sourcing of parts to the point of driving the growth of domestic foundries.
This looks more like poor decision making, not a fact of "post-industrial economics".
Wouldn't claiming that post-industrial economics is not the problem, by suggesting military-funded industrial economics as the solution, amount to agreeing that post-industrial economics is the problem, even while claiming otherwise?
Have a look at this video from ChipWorks http://www.youtube.com/watch?v=Il5sTZKBLO0
See the schematics? They've created those from scratch by deconstructing the chip. (I can say with certainty that this is the case because I'm familiar with the original schematics for this part. The ChipWorks ones are much neater!)
Doing this for a larger, all-digital chip is substantially the same. In that case you can probably step up from identifying individual transistors and identify the standard cells directly, since they tend to have distinctive-looking gate structures.
In the cross section case you are going to see a few dozen transistors out of the millions in a design of any complexity.
It would be remotely feasible to discover some sort of shenanigans if you knew the exact layout of the design, which would basically mean you are a foundry yourself. In that case you might get lucky and spot some difference between the mask you made, and the mask that was used to produce the part under inspection.
But the scales involved make this not believable to me. It would be roughly like scanning the whole of, say, America, checking every street and intersection of every town, and comparing it against some known quantity to see if something changed in Springfield, Missouri.
Maybe somebody could automate this, but the chemical processes for removing layers are less than perfect. Those strands of metal stretching across the ASIC have some built-in tension, and if you remove the layer of glass above them, they tend to spring up and jumble. Good luck trying to do something with that.
(I guess there is no real advantage in keeping this obscured)
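If clean layer images could be had, the comparison step itself would be easy to automate - effectively an XOR of the imaged layer against the expected mask. A toy sketch, with tiny bit grids standing in for what would really be gigapixel SEM imagery of those jumbled metal layers:

```python
# Toy mask comparison: diff an "imaged" layer against the "expected"
# one and flag coordinates that differ. The 2D bit grids are invented
# stand-ins for real mask/SEM data.

expected = [
    "0110",
    "0110",
    "0000",
]
imaged = [
    "0110",
    "0111",  # one extra feature where the design has none
    "0000",
]

diffs = [
    (r, c)
    for r, row in enumerate(expected)
    for c, (a, b) in enumerate(zip(row, imaged[r]))
    if a != b
]
print(diffs)  # each tuple is a (row, col) where silicon != design
```

The comparison is trivial; as the comment says, the hard part is the wet chemistry that has to produce a clean, registered image of each layer first.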
Who's to say that manufacturing in China means the backdoor was injected by China? I would have thought the US is just as likely a source, given that the design came from there. Surely the US government would love having access to FPGAs in foreign systems?
The cynic in me says that Cambridge needs to keep poking, as they might find two backdoors: one inserted by the US, the other by China.
Power glitch detection, mechanisms to detect decapping/stripping, wire mesh shielding, protection against ultra-violet laser stimulation of transistors, ... are all important.
For those interested in further reading, Security Engineering by Ross Anderson contains a section on chip security. Another paper by Ross Anderson and Markus Kuhn (1996) provides additional background.
I can envision a scenario where this "backdoor" is actually part of the designed-in security features of the chip designed to prevent an unauthorized party from reading out the FPGA "programming" as it were. As such, it's conceivable that there might be multiple keys or even a series of "transport" or "default" keys that are similar to those found on ISO smartcards. What we might be looking at is a "feature" as opposed to a "backdoor."
In any case, this sort of thing only becomes a critical security breach if the application you're using the chip in depends on periodic (or boot-time) reprogramming of the FPGA. In either case, either the physical security or the trust chain of your firmware loads is broken. As we all know, key management and side channel attacks are the hardest part of implementing a secure crypto system, so is this really news?
Edit: they seem to have submitted a patent application for the process of sending test signals to a chip and monitoring it with an oscilloscope: http://www.sumobrain.com/patents/wipo/Integrated-circuit-inv...
Are there backdoors in silicon? Of course there are backdoors in silicon. Just like in software, most of them will be deniably accidental. It's unlikely we'll be able to trace most of them to deliberate sabotage, but the net effect will be the same.
Having set the stage, consider: the competency required to manually evaluate silicon packages is extraordinarily rare. Even if you wanted to shell out 6 figures for a competent superficial evaluation, you'd have a lot of trouble finding available Chris Tarnovskys to do the work.
If you have 50% of the competence of Tarnovsky and the ability to automate any significant portion of that work, you can probably write your own ticket.
So: what's the likelihood that any such person, with an actual affiliation to a respected EE/CS security program, would just be making stuff up?
"Look, the people you are after are the people you depend on. We boot your servers, we back up your drives, we write your applications, we maintain your kernels. We guard your data. Do not... fuck with us. "
Could secure hardware be bootstrapped? Could we use the embarrassment of riches we have in terms of number of transistors available to implement arrays of small and fast processors which can emulate security hardware and be programmed using formal verification? This way, we could concentrate all of our scrutiny on one unit, and change much of the hardware problem into a software one. It wouldn't be as fast or as cheap, but it might be fast enough and workably secure.
I'm less curious about whether overseas silicon is backdoored than I am in how exposed the attack/activation surface for those backdoors are.
"Our aim was to perform advanced code breaking and to see if there were any unexpected features on the chip" - er, what? So either they have some approach for turning silicon into a machine readable form, in which case "code breaking" makes no sense, or they're attacking the chip via its interfaces. Why mention both? Because "advanced code breaking" sounds cool.
"In other words, this backdoor access could be turned into an advanced Stuxnet weapon to attack potentially millions of systems" - advanced Stuxnet weapon? This is blatant namedropping, Stuxnet is irrelevant here being a piece of software.
"The scale and range of possible attacks has huge implications for National Security and public infrastructure." - "this is a general purpose chip that happens to be used in military applications".
"adaptable - scale up to include many types of chip" - implies there are complexity limits, so likely they've applied their process to some relatively simple piece of silicon, again suggesting some boring chip.
"found a previously unknown backdoor inserted by the manufacturer. This backdoor has a key, which we were able to extract." - hardly uncommon, in fact the Intel CPU I'm typing this on has such a feature - for encrypted microcode updates.
Until there are more details, this vague news article is just dressing.
Having said that, I take issue with almost every point you made:
* Both Chris Tarnovsky and Karsten Nohl have, supported so far as I know by none of the resources of a major university, given security conference talks on processes for "Turning silicon into machine-readable form". Nohl actually has an open source package to help do it. There's nothing incredible about that claim.
* I'm not sure I follow how the most famous act of computer-aided industrial espionage isn't germane to hardware backdoors. Researchers put their work into context so people outside the field will take it seriously.
* The military uses Microsoft Windows and Red Hat Linux, too, both of which are general-purpose packages. You think a universally distributed backdoor in either that had escaped detection until 2012 wouldn't be relevant to national security?
* Go read Tarnovsky's blog, where he has blogged about extracting keys from silicon.
The only point you've made here that I agree with is that the attack/activation surface of these illicit features is likely to be more important than anything else.
The point is that hardware reversing is not an incredible claim.
That claim about 99% of chips being manufactured in China is very easy to verify as being utterly false. I have to wonder about the trustworthiness of the rest.
- kryptiskt, http://news.ycombinator.com/item?id=4030818
It has actually not been "very easy" for me to verify this, but I did find something saying that in 2009, China had 9% of the world's production capacity, which makes me strongly doubt that they are now 99% of the actual manufacturing amount: http://www.manufacturingnews.com/news/10/0212/semiconductors...
@tptacek: Care to provide references for why Cambridge Security Lab is as big a deal as you're making them out to be, and why we should overlook this blatantly exaggerated fact they cited?
This security lab's tendency to exaggerate the seriousness of the security problem they've identified is exactly what is in question here.
I never evaluated any project at all, just asked why anyone should take you or this web page seriously, and you have been nothing but dismissive in response.
On top of that, he also has a history here of useful and insightful commentary on security issues. That's also why anyone should take him seriously.
The reason he's responding dismissively to you is probably that you keep attacking the OP for irrelevant niggles. The sort of reasoning you're employing here would lead someone who saw a speech by Albert Einstein to dismiss it by saying, "Bah, he can't even be bothered to do his hair well. Why should I think he does his research any better?" Attacking Einstein's hair does not make his ideas any less valid. If you had material objections to the OP, you'd probably get a more congenial response.
3: I will admit, I had read his other responses in this thread, and intentionally chose to provoke a dismissive response by presenting something on the verge of being immaterial. I even apologize to anyone at the Cambridge Security Lab for any disrespect.
I don't apologize for being irreverent towards tptacek and the Cambridge Security Lab. I still think my core point, "This security lab's tendency to exaggerate the seriousness of the security problem they've identified is exactly what is in question here.", was a totally material response to his original comment, "Cambridge Security Lab is not fucking around.". I also think (and intended) that even though I was trying to provoke him, my response was totally congenial and had a material point and therefore acceptable, while he should not have been so dismissive in response, to me and to everyone else.
...and http://www.lightbluetouchpaper.org/ ; I am unaffiliated with Cambridge other than knowing a few of the people there.
Taiwan is a major US ally, so if this backdoor is real, then there will be trouble. It would be best for all parties involved for this to turn out to be a false alarm.
Taiwan is not a "major" US ally; rather, the US is Taiwan's major ally. The US has significantly stronger alliances with several other countries in the region, such as Japan, South Korea, and the Philippines. Though, through an act of Congress, the US may (depending on the situation) have some obligation to aid Taiwan in its defense if attacked by mainland China.
As a random but relevant example, Foxconn is a Taiwanese company but much of its manufacturing capacity is on the Chinese mainland.
Taiwan is politically an ally of the US, but economically it is much more closely aligned with China.
"Renegade province" is the official stance, but you're right, it is much more complex. In practice Taiwan is autonomous, and the degree of interaction with the mainland is a big political issue -- there were no direct flights between Taiwan and the PRC until just a few years ago. And yet, as you say, Taiwan is economically interlocked with China.
It is interesting to see the discussion here and elsewhere focus on China as a bogeyman. I suppose the news fits into the narrative that has been constructed about Chinese espionage and such.
"These are good guys. This paper is the real deal."
I appreciate what you bring to HN, but that this is the top comment worries me, particularly when it comes to security of all things. There's valuable comments that are contrary to your opinion surrounding you, and I wish you'd explain your side a bit more clearly in cases like this.
It isn't just this particular instance that is driving my comment (in which I acknowledge your reputation, and nobody else's, a small oversight in your reply). The driving force is more your showing up in threads, saying something either plainly obvious or, worse, absolutely confusing, and then expecting your reputation to carry your comment the rest of the way. Most of the time, the reasoning behind your comment is completely unclear. It isn't avoiding drama to elaborate, it's making your point clearer and not relying upon a name you've created for yourself in this community when the rationale behind your opinion is unclear to those of us without your ability. The other comment that annoyed me recently, and most front of mind, was this one about nginx:
"This is a very bad bug, and you should fix it ASAP. Don't wait."
Two things here:
1. Thank you, Captain Obvious. What an enlightening comment.
2. What does "very bad" mean?
Think about what a novice admin walks away from that comment with. Yes, he upgrades, awesome. That's exactly what we expect of administrators. There's something more sinister underlying your end result, though, which is that you've trained an administrator to act on what you and other security professionals say when it comes to security, without any explanation or reason. Security would be a much better place if people started gaining the ability to think for themselves and understand the issue, and you're working to reverse that. I see this crap with bcrypt, too. "Just use bcrypt." "Why?" "Because smart people said so." Now what if you fuck up? What if you give bad advice? Half of this community is going to take you at face value, because you don't present supporting facts for your position to be debated openly. Because it's 'tiring'.
You are quite unmatched in the security arena with your technical prowess. There's no question of that. It's comments like these, however, that make me annoyed that you're using said reputation inappropriately, and any questioning you receive on the matter leads to single-sentence snark like the pointless gray comment in this thread. Before asking why I commented, consider your own comments and the different standard you hold your own commentary to in this forum.
: I fully expect a snarky reply to this hypothetical, so make it good.
I think if you use the search box at the bottom of the screen, you'll have no trouble at all finding thousands and thousands of words spelling out in great detail what I think about bcrypt.
I am a person, not a web service. You cannot file bugs every time I don't provide exactly the comments in exactly the tone you're looking for. Or, as you're amply demonstrating, you can, but it's unlikely to do you any good.
If tptacek or anyone else wants to speak in Zen koans and leave it up to motivated readers to figure out what he meant, so be it. No one owes anyone anything here, and you should think of it as a chance to sharpen your research and investigative skills.
But seriously, omit the entitlement and drama. It's the last thing HN needs.
found a previously unknown backdoor inserted by the manufacturer. This backdoor has a key...
It's hard to understand what this guy is talking about. Is he claiming that the manufacturer added additional hardware that the designers were unaware of? Or they made modifications to existing circuitry so it doesn't match the design? It would be very hard to do either without cooperation from the designers, especially given the paranoia of hardware engineers (and of defense hardware engineers, an entirely different level of paranoia). The question "are we manufacturing what we designed?" is one that is constantly asked throughout the lifetime of a part. In fact the answer, for individual parts, is often "no", because they can be defective. Still, the question is constantly asked with a variety of automated tools at all points of the manufacturing process.
Here's what I think he might have found: an additional fixed key introduced by the designers themselves into the chip, and having nothing special to do with the manufacturer. In other words, a deliberate backdoor.
That claim about 99% of chips being manufactured in China is very easy to verify as being utterly false. I have to wonder about the trustworthiness of the rest.
Sure, this stuff gets harder with modern technology, but it would be ridiculous to assume that manufacturers blindly click together chips and hope for the best because they can't inspect their work.
Maybe 99% of volume, but not type, is produced in China.
Or 99% of types are made in China, but with volume elsewhere.
Maybe 99% of this particular IC family are made in China, with the rest made elsewhere - that reading works.
The 99% does strike me as something that was thumbsucked.
The author would probably like to stay involved with this tech, or at least to be able to hand it off to CESG.
I assume CESG. Perhaps QinetiQ would do it?
I have no idea what they do. All those Qs? You've seen 007? They're the real Q department. I doubt they do laser beam watches.
- Assumes the Chinese put the backdoor in. There are plenty of others interested in backdoors.
- Assumes the designing company doesn't do any detailed production product checks. Not likely since this is a many, many billion dollar business.
- Claims a systemic problem but only notes one chip. That one FPGA could just have a design flaw. Need more details on the others.
- At the end it claims an investigation over ten years, but the fab world has greatly changed over ten years. Many microcontroller companies actually own their Chinese fabs now.
As a side note, if you discover something like this, don't assume you found something you weren't meant to find. Your discovery may just have made you found.
Whether any of those backdoors are deliberate is much less relevant than whether they're known to your adversaries. In the case of Chinese electronics engineering, your adversaries have the blueprints.
Do you really think it's likely that designers of bespoke silicon reliably decap, image, and analyze the finished products? I think you're attributing Intel/AMD-level wherewithal when, just like in software, a huge chunk of the market has nothing resembling the resources of the leading vendors.
Heed your own advice.
Show me some source, a schematic, or a technique that you're using, and then I might believe you, otherwise this is just FUD. They didn't even name the bloody chip.
How could the authors know the backdoor design is not the intent of American military?
Helion Technology Limited -- Helion Technology.
1) Say what you will about the military-industrial complex, but they do buy a load of physical products. When those are sourced domestically it has a lot of good spillover effects on the rest of the industry (see Steve Blank's Secret History of Silicon Valley).
2) I'd be far more worried about Intel, AMD, nVidia, Texas Instruments, et al, especially if I was a foreign procurement officer. The logic in those chips is incredibly complex and almost impossible to verify in any detail by a third party. Coincidentally, they're all US companies.
It's interesting to note that in the DPA/SPA world the standard model of operation is to develop a new attack and then patent the countermeasures ;)
It should be noted that this is "probably" not a backdoor in the traditional sense (intentionally planted by some nefarious government organisation), rather just bad, leaky design that has been identified by an improved attack methodology...
From this, I'd say that anyone who used the backdoor would basically be able to take over the chip completely. Which is somewhat scary, considering the author says it's used in weapons systems—hopefully the author's informed an intelligence agency with the specifics.
The configuration is commonly stored in a small serial eeprom (tiny 8-pin chip) and automatically read when the FPGA powers up. The content of this chip is often called "bitstream", this configuration eeprom/flash is sometimes also internal to the FPGA.
The key this configuration is encrypted with is supposed to be stored securely inside the FPGA, but they managed to extract it using undocumented commands on the "debug port" (JTAG) that the vendor explicitly claimed did not exist.
Note: This is an interface that normally is not easily accessible from the outside, but sometimes connected to a microcontroller to update the FPGA configuration.
Theoretically someone who gets access ("normal" computer backdoor over the network) to such a device might be able to re-program the chip thereby causing malfunction or add a flaw deliberately. The second scenario would be to decrypt the configuration information, "decompile" it and learn about secret algorithms or functions.
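The decrypt-and-decompile scenario is mechanical once the key is out. A sketch of the shape of it - the real part presumably uses a proper block cipher such as AES, but a toy XOR keystream keeps the snippet self-contained, and the key and bitstream bytes are invented:

```python
# Toy illustration of why an extracted bitstream key matters.
# Real FPGAs use a real cipher; this XOR keystream is a stand-in
# so the sketch needs no crypto library.

def keystream(key, length):
    return bytes(key[i % len(key)] for i in range(length))

def encrypt(plaintext, key):
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR is its own inverse

device_key = b"\x13\x37\xc0\xde"          # imagine this leaked via JTAG
bitstream = b"LUT config + routing bits"  # stand-in for the real config

ciphertext = encrypt(bitstream, device_key)

# Attacker with the extracted key recovers the design...
assert decrypt(ciphertext, device_key) == bitstream
# ...or re-encrypts a tampered config that the FPGA will happily load.
tampered = encrypt(b"malicious logic goes here", device_key)
```

Both halves of the threat in the comment fall out of the same key: read the configuration out (learn the secret algorithms), or write a valid-looking malicious one back.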
From the description I'm guessing an interface device that does something on the order of I2C/CAN/M on one end and external comms to the outside world on the other (why else would it require a "sophisticated encryption standard").
First, we must understand what these are used in: embedded systems. Typically, at the heart of most embedded systems you have two possibilities: a microcontroller or microprocessor, or an FPGA. The micros run some kind of firmware (an instruction stream fed to a processor architecture), which is completely different from an FPGA, which is actually a re-configurable transistor array implementing fixed digital logic. This transistor configuration is typically loaded from EEPROM on power-up - so it is stored/uploaded by the user somewhere after they've done some work in their CAD tool.
In either case, whether it be firmware written for a microprocessor-based system or the "firmware" for an FPGA (I forget what that logic routing configuration format is called - technically not firmware since it's not instructions), it is likely that whoever wrote it would want to protect it from being read, or protect their device from having another firmware loaded onto it. There are many schemes to do so; it is possible that this is what has been compromised.
But this frenemy war is not about taking advantage of these backdoors. That is the nuclear option. The war is about who has the potential to pwn the other.
BTW- I'm typing this on a Chinese netbook.