Interesting discussion. Some denial, some tin hat, some contemplative. I think I've had all of those emotions with this sort of thing.
There are diagnostics in our network switches that allow traffic to be replicated and sent to other ports with a different destination MAC (this isn't port mirroring; it's more like port redirecting). Clearly, in the hands of a bad guy, this could be used to set up a machine on the LAN that gets a copy of all the traffic. Is it a cyberwar beachhead? Probably not. Could it be exploited in an attack? Probably. Of course, if someone tried to route all that traffic outside the network into the transit network, it would be pretty obvious. So not a good scenario.
Like the controller back door article on Ars last month I suspect most of these things are diagnostic aids. You ask an engineer to test something and that something is buried inside a bunch of silicon and the only way to do that is to build some stuff in there that lets you look at things.
Of course you can do this in a 'smart' way, and in a 'stupid' way. When I started at Intel there were extra pads on the silicon that got to these extra functions; you ordered a 'bond-out' chip where bonding wires (between the chip pins and the silicon) would be attached to them. All of the in-circuit emulators up to the 386 had a 'bond-out' version in the emulator pod that gave you access to the internal state of the chip. Others have pointed out the key for loading replacement microcode, another 'feature' to fix bugs in the field and do diagnostics.
So things which require either 'special' chips or attaching a JTAG probe directly to the part are generally OK in my book. Once you have physical access, nearly all bets are off.
It's an expensive way to compromise the enemy. Simpler to just build a piece of gear that looks and operates exactly like the original but is your own design. There were some counterfeit Cisco boxes like this in the channel for a bit. Of course they 'fail' when you update IOS. Still, the cost to exploit is lower and more assured than back-dooring silicon in a fab.
It's also pretty hard to add features to a chip without the designer of the chip in on the game. Every transistor is accounted for by long verification and analysis, so 'extra' ones would show up. That limits the risk to the chip manufacturer being the 'bad guy' (and they are very traceable, so unlikely to do that).
None of this, though, should take away from the excellent work Cambridge is doing. The silicon analysis is really cutting-edge stuff, and I think it would be useful for chip designers in verifying their masks are accurate too. If you could effectively 'decompile' the resulting silicon and verify it against your netlist, that would catch mask errors. And that would save anywhere from $100,000 to $2,000,000 depending on the size of the mask set.
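To sketch what that 'decompile and verify' step might look like, here is a toy version in Python. It assumes the imaging side can already reduce recovered gates to (type, inputs, output) tuples; the gate and net names are invented for illustration, and real equivalence checkers work on canonicalized graphs, not name matching.

```python
# Toy sketch of netlist comparison: diff a "golden" netlist from the
# design database against one recovered from silicon imagery.
# All gate/net names here are hypothetical.

def canonicalize(netlist):
    """Reduce each gate to a comparable form: (type, input set, output)."""
    return {(g["type"], frozenset(g["inputs"]), g["output"]) for g in netlist}

def diff_netlists(golden, recovered):
    """Return gates missing from silicon and 'extra' gates not in the design."""
    g, r = canonicalize(golden), canonicalize(recovered)
    return {"missing": g - r, "extra": r - g}

golden = [
    {"type": "NAND2", "inputs": ["a", "b"], "output": "n1"},
    {"type": "INV",   "inputs": ["n1"],     "output": "y"},
]
# Recovered netlist with one unexplained extra gate -- the kind of
# discrepancy that would flag a mask error or inserted logic.
recovered = golden + [{"type": "NOR2", "inputs": ["y", "x"], "output": "n9"}]

result = diff_netlists(golden, recovered)
print(result["extra"])  # the one unexpected gate
```

The hard part, of course, is everything upstream of this diff: getting from decapped silicon to a reliable recovered netlist at all.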
To clarify for people reading: IOS is the name of Cisco's operating system for their routers and network switches. Apple licensed the trademark from Cisco when they renamed their mobile operating system.
Are we sure that "=" and not "==" is the correct operator?
If "==" was appropriate then there would be no problem of ambiguity. And there would be no need for the clarification.
Because meanings could never change with context. There could be no "misinterpretation". Only the truth table result of "false".
Which is more important in human communication: case-sensitivity or context-sensitivity?
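To make the distinction concrete (a generic illustration in Python, not code from the article):

```python
# '=' binds a name (a statement in Python, with no truth value);
# '==' asks a question and evaluates to True or False.
x = 5            # assignment: a side effect, not a comparison
print(x == 5)    # equality test: prints True
print(x == 4)    # equality test: prints False

# Python deliberately makes '=' illegal inside an expression, so the
# classic C slip `if (x = 0)` -- assigning where you meant to compare --
# is a syntax error here rather than a silent change of meaning.
```

That is exactly a language resolving ambiguity by syntax rather than leaving it to context.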
I'm sorry, I am not sure what you are saying here. It seems to be "this is far more likely to be a test engineer's backdoor that was not in the spec than a Chinese backdoor added at the fab."
With no evidence either way, I am guessing that US intelligence (and others?) are loudly saying this is happening not because they can prove it in silicon, but because convincing human intelligence has told them.
I happen to think that the techniques developed to analyse silicon are going to be useful all round, and that a backdoor in a chip does not make a successful cyber attack in the wild. But if these are real in-the-fab additions,
then the implications are so large that we should be prepared not to label this "it's just a test component" and look at ideas like validating silicon (noted somewhere else below).
> With no evidence either way, I am guessing that US intelligence (and others?) are loudly saying this is happening not because they can prove it in silicon, but because convincing human intelligence has told them
My instincts would be that, in the absence of real evidence, they are 'loudly saying this is happening' to beat the war drums, declare it proof that a 'cyberwar' is happening, use it to get more funding, and prepare new draconian measures to control it both domestically and internationally.
You are correct, there is no evidence either way. So one way to look at it is to consider what would have to be true for it to be installed by the 'fab' without the knowledge of the guy who designed the chip, vs installed by the chip designer.
Given what I know of silicon chip manufacturing, and the verification that goes on before, during, and after manufacturing, I assert it would be extraordinarily difficult for a fab operator (like TSMC) to insert a back door without the designer/manufacturer knowing it.
I also brought up that, in my experience, adding back doors was certainly done to aid testability. Sometimes those aids are done in a way that they cannot be used by third parties (bond-out chips), and sometimes they can be (JTAG access) but are obscured in some way.
Backdoor access in the firmware, however, is a much easier threat to actualize, as it doesn't involve silicon hacking per se. So that is a more credible threat. And I mentioned that we've already seen counterfeit versions of 'name brand' products, which would be a fairly straightforward attack.
Sorry, the NSA has its own billion-dollar fab so it can build copyrighted Intel clones?
I simply don't get it?
* what happens when Intel releases the next generation of chips? Apparently Intel needs to build a whole new fab plant at x billion - does the NSA?
* do they trust the designs made by Intel? If not, what do they do? If Intel is introducing backdoors for the NSA, what guarantee is there those backdoors won't get used by someone else?
* if they do trust the design but don't trust the fab process, surely it is better to put armed guards in the fab room or similar checks
* and this is only for one generation of one class of chip. Do this for the chips in the CCTV cameras and the door locks and the ...
Well, AFAIK the NSA doesn't publish what their fabs are capable of or what they do with them. Maybe someone here knows better?
I'd assume the NSA's fabrication capability is more on the scale of the pilot lines fabs build at each new process node. Some universities certainly have fabrication equipment testbeds as well, so the NSA effort may be of a similarly modest scale.
If I were tasked with the problems the NSA faces, I think I'd at least focus in on:
1. CMOS reverse engineering equipment that can shave down dies, image and analyze the structures, etc.
2. Small-scale fabrication for extremely sensitive infrastructure. These roles probably aren't performance critical. E.g., if you have some microcontroller that plays a role in, say, nuclear weapon arming protocols, you need that to be pretty much beyond suspicion.
3. Some way of sampling commodity parts for unexpected behavior non-destructively. If this could be done efficiently enough, you could use it in combination with #1 to get reasonable confidence for off the shelf parts.
One thing I'd suspect is that if the NSA did find highly targeted flaws they probably wouldn't disseminate that fact unless absolutely necessary. Keep an adversary using a strategy you know rather than provoking improvement.
Personally I doubt the NSA forces backdoors into commodity chips. In theory there might be some way of introducing a flaw that would cripple specific large computations, like cryptanalysis of a particular code, or biasing a particular random number generator. But that just seems too likely to backfire.
I'd always thought it was interesting that the Pentium FDIV bug was most easily found by code calculating twin primes. But there may be a mundane explanation for that rather than cloak-and-dagger stuff.
The bit that surprises the fuck out of me is that they're buying stuff in from China. I've never seen that - ever! They would buy expensive stuff fabbed specially in the US rather than import usually.
I did a lot of work for the UK Ministry of Defence and the US Department of Defense over the years on custom silicon and FPGA work, and the paranoia factor is scary. We had the layouts of everything bought in - even 74-series logic, which can pretty much be assumed to be inert. Samples were regularly decapped and scanned using an SEM to make sure the vendors weren't screwing us or integrating backdoors.
Every part was asset managed to hell as well. Every part was traceable to the point that every finger that poked it was known (I moved from engineering to writing the asset management systems before leaving).
I think the problem is that there are too many suppliers. Everyone wants to be a middle-man. If the government asks for domestic parts from its big contractors, the big ones ask their small ones, and they ask theirs, and so on. But at the end of the day someone realizes it's cheaper to outsource it - does so - and forges the documentation. As the part travels all the way back up the chain, each one says it's domestic.
They need custody chain management. I assumed they had one. And everyone who signed for it has security clearance, meaning they signed a paper that basically says "I understand that I'll go to jail for twenty years if I'm caught lying about anything".
There is this perpetual balancing game being played. The rules have changed for different US agencies over the years; originally it was only "100% made in the USA" or maybe some select partners (the UK and Israel perhaps), but due to competitive pressures and pricing for a lot of federal products, it's okay to assemble in the US and source parts from wherever. DoD has slightly stronger rules, and then really strong rules for some devices.
Bottom line is to make any sort of computer at a remotely competitive price, you're probably going to use some Asian parts. At least some parts. Then it's a matter of where you draw the line and the price vs. risk. How about a Chinese power supply? It all depends on where and how the device is being used. Then it also depends on the system not "promoting" that device to another purpose.
You can manage it all and make it from only 100% trusted sources, but you know what? It's insanely expensive and by the time you get a computer, there are ones on the market 6x better.
> "China was given a direct line around Wall Street to buy Treasuries directly from the USG"
This isn't particularly surprising - several large "real money" investors (pensions and the like) have had this sort of relationship with the Treasury. It enabled them to directly place bids on Treasury issuance without going through Primary broker-dealers (Wall Street). They were called "Directs" and would bid through the "TreasuryDirect" system.
Basically enabled the largest investors in USG securities to bypass the "commissions" other investors would pay to Wall Street firms.
I'm guessing that foreign accounts such as governments and sovereign funds had not been given access to this system for some reason, and after a point the Chinese government investment funds (which are some of the largest in the world, both in terms of funds managed and funds committed to the US) laid bare the inconsistency that they'd been disallowed this simply on the grounds of being foreign.
That line is available to any other central bank in the world without going through primary dealers at all - it's just that China is using it to avoid giving info of its UST holdings or buying/selling trends to the open market.
This is especially important now as China has widened the Yuan trading band. You wouldn't want excessive FX volatility to manifest itself as a result of your foreign-reserve management decisions being (mis)-interpreted by investment banks.
I'm not sure I see the problem. They let the biggest debt buyer cut out the middle man. I'm sure there was some backroom dealing going on there, but it seems like the biggest losers there are the middlemen who wound up getting cut out.
I'm not sure I can articulate it very well, but my general point regards the access given to core functionality in both military hardware and finance. This doesn't really comport with the US's relationship to China that is portrayed in the mainstream.
The bit that surprises the fuck out of me is that they're buying stuff in from China. I've never seen that - ever!
Where else are they going to get the chips in the quantities required since the US outsourced most of its commercial silicon foundries? Of the few remaining in the US, the largest is wholly owned by the Taiwanese company TSMC. Post-industrial economics is idiotic, and this is one of the major examples of why.
The US military might, but they are not in charge, and the military in general hasn't been in charge ever since the city machine subsumed the war machine.
Wouldn't claiming that post-industrial economics is not the problem, by suggesting military-funded industrial economics as the solution, amount to agreeing that post-industrial economics is the problem, even while claiming otherwise?
I simply do not believe one could find a "back door" looking at a chip in a SEM. It sounds to me like you are describing destructive physical analysis whose purpose is to make sure requisite manufacturing practices are being followed.
There are companies that specialize in reverse engineering schematics from silicon. It's entirely possible (albeit relatively expensive and time-consuming compared to good old-fashioned industrial espionage) to recover schematics from silicon.
See the schematics? They've created those from scratch by deconstructing the chip. (I can say with certainty that this is the case because I'm familiar with the original schematics for this part. The ChipWorks ones are much neater!)
Doing this for a larger, all-digital chip is substantially the same. In that case you can probably step up from identifying individual transistors and identify the standard cells directly, since they tend to have distinctive-looking gate structures.
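In spirit, picking standard cells out of a die image is template matching. A toy version on a binary grid (the layout and cell footprints below are entirely made up; a real flow works on noisy SEM imagery with alignment and scale correction):

```python
# Toy sketch of standard-cell identification as 2D template matching.
# The 2x2 "cell footprints" are invented for illustration only.

INV  = ((1, 0), (1, 1))   # hypothetical footprint of an inverter cell
NAND = ((1, 1), (1, 0))   # hypothetical footprint of a NAND cell

def find_cells(layout, template):
    """Slide the template over the layout grid; return top-left hit coords."""
    th, tw = len(template), len(template[0])
    hits = []
    for r in range(len(layout) - th + 1):
        for c in range(len(layout[0]) - tw + 1):
            window = tuple(tuple(layout[r + i][c + j] for j in range(tw))
                           for i in range(th))
            if window == template:
                hits.append((r, c))
    return hits

layout = [
    [1, 0, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
]
print(find_cells(layout, INV))    # inverter found at (0, 0)
print(find_cells(layout, NAND))   # NAND found at (0, 2)
```

Once every cell instance is located and labeled, recovering the wiring between them gives you the netlist.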
The way I read that is that they make the designs and look for circuitry that does not match the designs, which is presumed to be backdoors. I would think that defects and intentional backdoors would both be findable on an SEM. Do you think otherwise? I don't have a ton of experience with bare silicon, so I'd be interested to know if that's unreasonable.
Even then - the manufacturer could supply design documents that surreptitiously include backdoors... there's simply so much to look at when it comes to actual circuit schematics, I can't see how anyone would spot "backdoor" circuitry amongst everything else that is presumably legitimate. I don't know much about silicon, so maybe I'm wrong.
I do think it is unreasonable. With an SEM you only get to look at the surface of things, which is going to be either glass or metal or polysilicon. The only way to see a transistor in an SEM is if you chemically remove all the top layers (which are the connections between transistors), or perform a cross-section.
In the cross section case you are going to see a few dozen transistors out of the millions in a design of any complexity.
It would be remotely feasible to discover some sort of shenanigans if you knew the exact layout of the design, which would basically mean you are a foundry yourself. In that case you might get lucky and spot some difference between the mask you made, and the mask that was used to produce the part under inspection.
But the scales involved make this not believable to me. It would be roughly like scanning the whole of, say, America, checking every street and intersection of every town, and comparing it against some known quantity to see if something changed in Springfield, Missouri.
Maybe somebody could automate this, but the chemical processes for removing layers are less than perfect. The strands of metal stretching across the ASIC have some built-in tension, and if you remove the layer of glass above them, they tend to spring up and jumble. Good luck trying to do something with that.
Just to add... any "advanced" backdoor takes significant silicon and would be visible from the top of the device, as a significant design change would be required to accommodate it. Subtle flaws in designs are another thing altogether.
Ouch, as it would be quite easy to think that you are "safe" if you spin your own processor from scratch and run it on an FPGA.
Who's to say that manufacturing in China means the backdoor was injected by China? I would have thought the US is just as likely a source, given that the design came from there. Surely the US government would love having access to FPGAs in foreign systems?
The cynic in me says that Cambridge needs to keep poking, as they might find two backdoors: one inserted by the US, the other by China.
I see no mention of tamper-resistance/self-destruct features?
Power glitch detection, mechanisms to detect decapping/stripping, wire mesh shielding, protection against ultra-violet laser stimulation of transistors, ... are all important.
For those interested in further reading, Security Engineering by Ross Anderson contains a section on chip security. Another paper by Ross Anderson and Markus Kuhn (1996) provides additional background.
"Unlike SRAM-based FPGAs or conventional ASIC solutions, ProASIC3/E devices offer one of the highest levels of design security in the industry. In fact, ProASIC3/E devices bring new levels of security to the FPGA market place. An FPGA industry first, secure ISP is performed using the industry-standard 128-bit AES block cipher algorithm. Reprogramming can be securely performed in-system to support future design iterations and field upgrades with peace of mind that valuable IP cannot be compromised or copied." (ISP stands for in-system programming.)
Is there ANY chance that this is a bit of a tempest in a teapot?
I can envision a scenario where this "backdoor" is actually part of the designed-in security features of the chip designed to prevent an unauthorized party from reading out the FPGA "programming" as it were. As such, it's conceivable that there might be multiple keys or even a series of "transport" or "default" keys that are similar to those found on ISO smartcards. What we might be looking at is a "feature" as opposed to a "backdoor."
In any case, this sort of thing only becomes a critical security breach if the application you're using the chip in depends on periodic (or boot-time) reprogramming of the FPGA. In either case, either the physical security or the trust chain of your firmware loads is broken. As we all know, key management and side channel attacks are the hardest part of implementing a secure crypto system, so is this really news?
The language used in this article seems very much like the author has something to sell and is trying to create the impression that it is advanced and mysterious. The claims about improvements of many orders of magnitude in speed and cost as well as the unavailability of information and services to private individuals suggest to me that someone is trying to get a defense contract for some overhyped technology that won't really deliver what's promised.
Backdoors --- intentional, accidental, or (most typically) "deniably" accidental --- are extremely common in software of all kinds, from RTOS kernels to web stacks to third-party database wrapper libraries.
Are there backdoors in silicon? Of course there are backdoors in silicon. Just like in software, most of them will be deniably accidental. It's unlikely we'll be able to trace most of them to deliberate sabotage, but the net effect will be the same.
Having set the stage, consider: the competency required to manually evaluate silicon packages is extraordinarily rare. Even if you wanted to shell out 6 figures for a competent superficial evaluation, you'd have a lot of trouble finding available Chris Tarnovskys to do the work.
If you have 50% of the competence of Tarnovsky and the ability to automate any significant portion of that work, you can probably write your own ticket.
So: what's the likelihood that any such person, with an actual affiliation to a respected EE/CS security program, would just be making stuff up?
"Look, the people you are after are the people you depend on. We boot your servers, we back up your drives, we write your applications, we maintain your kernels. We guard your data. Do not... fuck with us. "
Having set the stage, consider: the competency required to manually evaluate silicon packages is extraordinarily rare. Even if you wanted to shell out 6 figures for a competent superficial evaluation, you'd have a lot of trouble finding available Chris Tarnovskys to do the work.
Could secure hardware be bootstrapped? Could we use the embarrassment of riches we have in terms of number of transistors available to implement arrays of small and fast processors which can emulate security hardware and be programmed using formal verification? This way, we could concentrate all of our scrutiny on one unit, and change much of the hardware problem into a software one. It wouldn't be as fast or as cheap, but it might be fast enough and workably secure.
I'm only going based on the tone of the writing and the content of the patent application; both are written like hype. He might actually be doing something novel, or he might just be trying to get attention for his company and not doing anything special relative to others in the field. There may be good reasons to avoid talking about details in his field, but when someone selling something does that, hype is a reasonable default explanation.
It sounds to me like grant-proposal language. I wouldn't call it hype, but it is meant to convince people that you have done something important, and you are deserving of more money to do further research.
Aside from this not being a very useful comment, I think there's good cause to assume this may be a little dressed up:
"Our aim was to perform advanced code breaking and to see if there were any unexpected features on the chip" - er, what? So either they have some approach for turning silicon into a machine-readable form, in which case "code breaking" makes no sense, or they're attacking the chip via its interfaces. Why mention both? Because "advanced code breaking" sounds cool.
"In other words, this backdoor access could be turned into an advanced Stuxnet weapon to attack potentially millions of systems" - advanced Stuxnet weapon? This is blatant namedropping; Stuxnet is irrelevant here, being a piece of software.
"The scale and range of possible attacks has huge implications for National Security and public infrastructure." - "this is a general purpose chip that happens to be used in military applications".
"adaptable - scale up to include many types of chip" - implies there are complexity limits, so likely they've applied their process to some relatively simple piece of silicon, again suggesting some boring chip.
"found a previously unknown backdoor inserted by the manufacturer. This backdoor has a key, which we were able to extract." - hardly uncommon, in fact the Intel CPU I'm typing this on has such a feature - for encrypted microcode updates.
Until there are more details, this vague news article is just dressing.
I assume most people on HN don't follow security and might not be familiar with the University of Cambridge's security program.
Having said that, I take issue with almost every point you made:
* Both Chris Tarnovsky and Karsten Nohl have, supported so far as I know by none of the resources of a major university, given security conference talks on processes for "Turning silicon into machine-readable form". Nohl actually has an open source package to help do it. There's nothing incredible about that claim.
* I'm not sure I follow how the most famous act of computer-aided industrial espionage isn't germane to hardware backdoors. Researchers put their work into context so people outside the field will take it seriously.
* The military uses Microsoft Windows and Red Hat Linux, too, both of which are general-purpose packages. You think a universally distributed backdoor in either that had escaped detection until 2012 wouldn't be relevant to national security?
* Go read Tarnovsky's blog, where he has blogged about extracting keys from silicon.
The only point you've made here that I agree with is that the attack/activation surface of these illicit features is likely to be more important than anything else.
If I had to guess, he's being funded by the military and he is definitely fishing for money. I've seen the same kind of language before. On the other hand, as the parent comment said, these researchers are reputable and we should assume that they've actually found something. This report was just written for a different audience (generals, not engineers).
Definitely written for generals, but if the claims are true (and it wouldn't be difficult for a general to send an engineer to check it out and report back), they definitely should be thrown some money for more research.
A hardware security researcher's inability to perform Gartner-correct market research is not relevant to his/her ability to decap, image, and analyze silicon, and thus not at all relevant to me. Wow do I ever not care about this particular gotcha.
Hacker News Protip: If you click tptacek's name, you can see his profile, which will inform you that he's a computer security professional. That's why anyone should take him seriously.
On top of that, he also has a history here of useful and insightful commentary on security issues. That's also why anyone should take him seriously.
The reason he's responding dismissively to you is probably that you keep attacking the OP for irrelevant niggles. The sort of reasoning you're employing here would lead someone who saw a speech by Albert Einstein to dismiss it by saying, "Bah, he can't even be bothered to do his hair well. Why should I think he does his research any better?" Attacking Einstein's hair does not make his ideas any less valid. If you had material objections to the OP, you'd probably get a more congenial response.
3: I will admit, I had read his other responses in this thread, and intentionally chose to provoke a dismissive response by presenting something on the verge of being immaterial. I even apologize to anyone at the Cambridge Security Lab for any disrespect.
I don't apologize for being irreverent towards tptacek and the Cambridge Security Lab. I still think my core point, "This security lab's tendency to exaggerate the seriousness of the security problem they've identified is exactly what is in question here.", was a totally material response to his original comment, "Cambridge Security Lab is not fucking around.". I also think (and intended) that even though I was trying to provoke him, my response was totally congenial and had a material point and therefore acceptable, while he should not have been so dismissive in response, to me and to everyone else.
According to China (the People's Republic of China), Taiwan (the Republic Of China) is a "renegade province." Both the PRC and the ROC claim that they are the legitimate government of China. In the US, ever since Nixon instituted the "two China" policy, China is always taken to mean the PRC. Perhaps the security researchers are not aware of this distinction, but I am also not familiar with how the issue is treated in the British press.
Taiwan is a major US ally, so if this backdoor is real, then there will be trouble. It would be best for all parties involved for this to turn out to be a false alarm.
Nixon acknowledged the "One China" policy, not two, when the US shifted diplomatic support from Taiwan (Republic of China) to mainland China (PRC).
Taiwan is not a "major" US ally; rather, the US is Taiwan's major ally. The US has several other regional countries it has significantly greater alliances with, such as Japan, South Korea, and the Philippines. Though through an act of Congress, the US may (depending on the situation) have some obligation to aid Taiwan in its defense if attacked by mainland China.
The parent's point is that PRC<->Taiwanese relations are more complex than they appear. Words like "renegade province" make it sound like they are sworn enemies, but the parent is right: it is much more complex than that.
As a random but relevant example, Foxconn is a Taiwanese company but much of its manufacturing capacity is on the Chinese mainland.
Taiwan is politically an ally of the US, but economically it is much more closely aligned with China.
OK, perhaps the parent presented a rhetorical question. I do think that this discussion needs to keep the two governments distinct, so I was trying to give some background. But it seems I may have just muddied the waters by not mentioning how relations are in practice.
"Renegade province" is the official stance, but you're right, it is much more complex. In practice Taiwan is autonomous, and the degree of interaction with the mainland is a big political issue -- there were no direct flights between Taiwan and the PRC until just a few years ago. And yet, as you say, Taiwan is economically interlocked with China.
It is interesting to see the discussion here and elsewhere focus on China as a bogeyman. I suppose the news fits into the narrative that has been constructed about Chinese espionage and such.
Cisco has been fighting with the Chinese over imitation network gear (branded as Cisco) for years. So rest assured that the switches are all back-doored too. It's a big problem. Really, tribes ought to forge their weapons at home.
I'm respectful of your qualifications, but annoyed when you use your credentials without qualification. A paranoid man might assume your comments are strategically placed to benefit parties you're aligned with, based on how little context there is here; I know better, others might not.
"These are good guys. This paper is the real deal."
I appreciate what you bring to HN, but that this is the top comment worries me, particularly when it comes to security of all things. There's valuable comments that are contrary to your opinion surrounding you, and I wish you'd explain your side a bit more clearly in cases like this.
There's really no debate that Cambridge has probably the top university hardware security analysis program in the world. They published attacks on the IBM 4758 Security Coprocessor, a bunch of attacks on specific smartcards, and are basically the standard bearer for (non classified) research into this kind of stuff. I think some of the chip companies (Intel, IBM) might have better resources for pure silicon debugging, but less security clue to go with it.
That wasn't the gist of my comment, but okay. I'm disappointed that you feel the rest of my comment wasn't worth your time, and you chose to attach to a throwaway hypothetical. Alas, it was worth a shot.
I find it exhausting to have to write things defensively just to ward off drama. The fact I had to offer was very simple: that the security group at Cambridge is very credible. And you knew that, because you acknowledged it in your comment. I'm left wondering why you commented at all.
Excellent! They are credible. Why? Because you said so? That is good information, but just hearing your reasoning would make your comment a lot more credible itself without your reputation, which many people aren't familiar with. Other people here are making the case that the motives behind this research aren't perfectly pure, so, presenting the unique insight that you have on the credibility of this group would be awesome and potentially shift the topic of conversation.
It isn't just this particular instance that is driving my comment (in which I acknowledge your reputation, and nobody else's, a small oversight in your reply). The driving force is more your showing up in threads, saying something either plainly obvious or, worse, absolutely confusing, and then expecting your reputation to carry your comment the rest of the way. Most of the time, the reasoning behind your comment is completely unclear. It isn't avoiding drama to elaborate, it's making your point clearer and not relying upon a name you've created for yourself in this community when the rationale behind your opinion is unclear to those of us without your ability. The other comment that annoyed me recently, and most front of mind, was this one about nginx:
"This is a very bad bug, and you should fix it ASAP. Don't wait."
Two things here:
1. Thank you, Captain Obvious. What an enlightening comment.
2. What does "very bad" mean?
The actual situation related to that vulnerability was much more complex, and the threat fairly small. You, however, glossed right over that and skipped to basically informing the lay to panic, then got really snarky when people questioned you on the motivation. I don't believe that making the lay panic is the right way to achieve greater security, regardless of your credentials, and this is one of those cases where the reasoning behind your comment would have gone a long way.
Think about what a novice admin walks away from that comment with. Yes, he upgrades, awesome. That's exactly what we expect of administrators. There's something more sinister underlying your end result, though, which is that you've trained an administrator to act on what you and other security professionals say when it comes to security, without any explanation or reason. Security would be a much better place if people started gaining the ability to think for themselves and understand the issue, and you're working to reverse that. I see this crap with bcrypt, too. "Just use bcrypt." "Why?" "Because smart people said so." Now what if you fuck up? What if you give bad advice? Half of this community is going to take you at face value, because you don't present supporting facts for your position to be debated openly. Because it's 'tiring'.
You are quite unmatched in the security arena with your technical prowess. There's no question of that. It's comments like these, however, that make me annoyed that you're using said reputation inappropriately, and any questioning you receive on the matter leads to single-sentence snark like the pointless gray comment in this thread. Before asking why I commented, consider your own comments and the different standard you hold your own commentary to in this forum.
I am sorry that it upset you that I was unable to share details about a bug, but I am unwilling to withhold heads-ups to the other people here running apps behind nginx just to save your feelings.
I think if you use the search box at the bottom of the screen, you'll have no trouble at all finding thousands and thousands of words spelling out in great detail what I think about bcrypt.
I am a person, not a web service. You cannot file bugs every time I don't provide exactly the comments in exactly the tone you're looking for. Or, as you're amply demonstrating, you can, but it's unlikely to do you any good.
I think you guys are talking on different tracks here. jsprinkles brings up a genuine concern. However, he goes ahead and calls your skill unmatched in your domain, and I respect that. Honestly, that was a bit of a shocker, but I will accept it. It's one thing to be cagey with details and unscientific with your "forum" approach. But it bothers me that you go ahead and underestimate and even disrespect your audience. I can only come to the conclusion that you don't understand. That's it; this is a very fitting response.
Pro-tip: If you don't understand or want more clarification, just ask. If you disagree, say why. It's not hard, and usually effective. Skip all the other BS.
If tptacek or anyone else wants to speak in Zen koans and leave it up to motivated readers to figure out what he meant, so be it. No one owes anyone anything here, and you should think of it as a chance to sharpen your research and investigative skills.
But seriously, omit the entitlement and drama. It's the last thing HN needs.
As a former chip designer I question the idea that the manufacturer introduced this backdoor (if indeed there is one).
found a previously unknown backdoor inserted by the manufacturer. This backdoor has a key...
It's hard to understand what this guy is talking about. Is he claiming that the manufacturer added additional hardware that the designers were unaware of? Or they made modifications to existing circuitry so it doesn't match the design? It would be very hard to do either without cooperation from the designers, especially given the paranoia of hardware engineers (and of defense hardware engineers, an entirely different level of paranoia). The question "are we manufacturing what we designed?" is one that is constantly asked throughout the lifetime of a part. In fact the answer, for individual parts, is often "no", because they can be defective. Still, the question is constantly asked with a variety of automated tools at all points of the manufacturing process.
Here's what I think he might have found: an additional fixed key introduced by the designers themselves into the chip, and having nothing special to do with the manufacturer. In other words, a deliberate backdoor.
To add, silicon can be, and obviously is, inspected by the manufacturer using optical and electron microscopes. There are companies specialized in reverse engineering chips or verifying existing components. Even hobbyists can grind down chips and figure out what the circuitry does, using nothing more than good optics and a digital camera, plus freely available software to stitch the images together and start analyzing traces from the picture.
Sure, this stuff gets harder with modern technology, but it would be ridiculous to assume that manufacturers blindly click together chips and hope for the best because they can't inspect their work.
To people complaining about the language - this reads more like a short briefing note for politicians or non-technical managers. That's why things like Stuxnet are mentioned; to give context and scale.
The author would probably like to stay involved with this tech, or at least to be able to hand it off to CESG.
 I assume CESG. Perhaps QinetiQ would do it?
 I have no idea what they do. All those Qs? You've seen 007? They're the real Q department. I doubt they do laser beam watches.
I'm skeptical. There are too many unsupported claims in this article. Off the top of my head:
- Assumes the Chinese put the backdoor in. There are plenty of others interested in backdoors.
- Assumes the designing company doesn't do any detailed checks of production parts. Not likely, since this is a many-billion-dollar business.
- Claims a systemic problem but only notes one chip. That one FPGA could just have a design flaw. Need more details on the others.
- At the end it claims an investigation spanning ten years, but the fab world has greatly changed over ten years. Many microcontroller companies actually own their Chinese fabs now.
As a side note, if you discover something like this, don't assume you found something you weren't meant to find. Your discovery may just have made you found.
Many (maybe most? I don't specialize here) backdoors are deniably accidental, a term I'm coining here to mean "could be sabotage, could be a development artifact".
Whether any of those backdoors are deliberate is much less relevant than whether they're known to your adversaries. In the case of Chinese electronics engineering, your adversaries have the blueprints.
Do you really think it's likely that designers of bespoke silicon reliably decap, image, and analyze the finished products? I think you're attributing Intel/AMD-level wherewithal when, just like in software, a huge chunk of the market has nothing resembling the resources of the leading vendors.
Hardware trust is something I've been wondering about for a while now. It's easy to hide a software bug. (As evidenced by the occasional blue moon story about somebody stumbling over one.) But a hardware bug just seems like a constant paranoia that can never be investigated without expensive tooling.
There's a bit of research going into this area right now. Verification strategies for hardware, etc. Another way around it is to bump up your integration of trusted FPGA platforms where you can write and use your own hardware in a potentially more trustworthy way.
1) Say what you will about the military-industrial complex, but they do buy a load of physical products. When those are sourced domestically it has a lot of good spillover effects on the rest of the industry (see Steve Blank's Secret History of Silicon Valley).
2) I'd be far more worried about Intel, AMD, nVidia, Texas Instruments, et al, especially if I was a foreign procurement officer. The logic in those chips is incredibly complex and almost impossible to verify in any detail by a third party. Coincidentally, they're all US companies.
This appears to be an improvement on a Differential Power Analysis (DPA) attack against an FPGA. Congrats to the guys who discovered it!
It's interesting to note that in the DPA/SPA world the standard model of operation is to develop a new attack and then patent the countermeasures ;)
It should be noted that this is "probably" not a backdoor in the traditional sense (intentionally planted by some nefarious government organisation), rather just bad, leaky design that has been identified by an improved attack methodology...
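For anyone unfamiliar with how DPA actually recovers a key, the core idea fits in a few lines: guess part of the key, predict the power consumption each guess would imply, and correlate the predictions against measured traces. Here's a toy simulation of that loop — the pseudorandom S-box, Hamming-weight leakage model, key value, and trace count are all illustrative stand-ins, not anything from the paper:

```python
import hashlib
import random

# Pseudorandom substitution box -- a stand-in for the AES S-box.
SBOX = [hashlib.sha256(bytes([x])).digest()[0] for x in range(256)]

def hw(x):
    """Hamming weight (number of set bits)."""
    return bin(x).count("1")

SECRET_KEY = 0x3C  # the key the "device" leaks; the attacker doesn't know it

random.seed(1)
plaintexts = [random.randrange(256) for _ in range(2000)]
# Simulated power traces: leakage proportional to the Hamming weight of the
# S-box output, plus Gaussian measurement noise.
traces = [hw(SBOX[p ^ SECRET_KEY]) + random.gauss(0, 1.0) for p in plaintexts]

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

def recover_key(plaintexts, traces):
    # For each key guess, predict the leakage and correlate against the
    # measured traces; the correct guess produces the strongest correlation.
    return max(range(256),
               key=lambda g: correlation([hw(SBOX[p ^ g]) for p in plaintexts],
                                         traces))

assert recover_key(plaintexts, traces) == SECRET_KEY
```

With 2000 noisy traces the correct guess stands out clearly; the countermeasures the vendors patent are mostly about flattening or randomizing exactly this correlation.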
First reaction to this for most, including myself, is that the U.S. is really f--ked. But if the U.S. found this out, odds are they had chips manufactured that looked like the Chinese version but really weren't, with the exception of some small detail, perhaps not on the chip but on the board, that would indicate that the chip was the "fixed" version.
But this frenemy war is not about taking advantage of these backdoors. That is the nuclear option. The war is about who has the potential to pwn the other.
"This backdoor has a key, which we were able to extract. If you use this key you can disable the chip or reprogram it at will, even if locked by the user with their own key."
From this, I'd say that anyone who used the backdoor would basically be able to take over the chip completely. Which is somewhat scary, considering the author says it's used in weapons systems—hopefully the author's informed an intelligence agency with the specifics.
They have procured programmable logic chips (FPGAs) with the feature that the configuration data that defines the function on power-up can be encrypted/signed.
The configuration is commonly stored in a small serial EEPROM (a tiny 8-pin chip) and automatically read when the FPGA powers up. The content of this chip is often called the "bitstream"; this configuration EEPROM/flash is sometimes also internal to the FPGA.
The key this configuration is encrypted with is supposed to be stored securely inside the FPGA, but they managed to extract it using undocumented commands on the "debug port" (JTAG) that the vendor explicitly claimed did not exist.
Note: This is an interface that normally is not easily accessible from the outside, but sometimes connected to a microcontroller to update the FPGA configuration.
Theoretically someone who gets access ("normal" computer backdoor over the network) to such a device might be able to re-program the chip thereby causing malfunction or add a flaw deliberately. The second scenario would be to decrypt the configuration information, "decompile" it and learn about secret algorithms or functions.
This particular chip is prevalent in many systems from weapons, nuclear power plants to public transport. In other words, this backdoor access could be turned into an advanced Stuxnet weapon to attack potentially millions of systems.
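To make concrete why key extraction is the whole ballgame here, consider a toy model of bitstream protection. The hash-counter keystream below is a deliberate stand-in for the AES these parts actually use, and the key and data are made up — but the structure of the failure is the same: once the device key comes out over JTAG, both confidentiality and integrity of the configuration collapse at once.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Hash-counter keystream -- a toy stand-in for the real cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_bitstream(key: bytes, bitstream: bytes) -> bytes:
    ks = keystream(key, len(bitstream))
    return bytes(a ^ b for a, b in zip(bitstream, ks))

decrypt_bitstream = encrypt_bitstream  # XOR stream cipher: same op both ways

device_key = b"key-burned-into-the-fpga"  # hypothetical
bitstream = b"...proprietary logic configuration..."

protected = encrypt_bitstream(device_key, bitstream)

# An attacker who extracts device_key (e.g. via an undocumented JTAG command)
# can read the "protected" configuration...
assert decrypt_bitstream(device_key, protected) == bitstream
# ...or forge a replacement configuration the device will happily load.
forged = encrypt_bitstream(device_key, b"...tampered logic...")
```

That last line is the Stuxnet-style scenario from the quote: with the key, re-programming the part with subtly altered logic is just another encrypt operation.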
From the description I'm guessing an interface device that does something in the order of I2C/CAN/M on one end and external comms to the outside world on the other (why else would it require a "sophisticated encryption standard"?).
I'm going to put my speculation hat on here. Others here have mentioned that the chip in question is an Actel FPGA.
First, we must understand what these are used in: embedded systems. Typically, at the heart of most embedded systems you have two possibilities: a microcontroller or microprocessor, or an FPGA. The micros run some kind of firmware (an instruction stream fed to a processor architecture), which is completely different from an FPGA; FPGAs are actually re-configurable transistor arrays that implement fixed digital logic. This transistor configuration is typically loaded from EEPROM on power-up, so it is stored/uploaded by a user somewhere after they've done some work in their CAD tool.
In either case, whether it be firmware written for a microprocessor-based system, or the "firmware" for an FPGA (I forget what that logic routing configuration format is called - technically not firmware since it's not instructions), it is likely that whoever wrote it would want to protect it from being read, or protect their device from having another firmware loaded onto it. There are many schemes to do so, and it is possible that this is what has been compromised.
Taken from text:
"This particular chip is prevalent in many systems from weapons, nuclear power plants to public transport. In other words, this backdoor access could be turned into an advanced Stuxnet weapon to attack potentially millions of systems."
Several months ago there was a report of a similar nature that mainstream Intel CPUs include a concealed (hyper-)hypervisor that appears to exist in China-produced chips but is absent from pre-production samples made by Intel themselves. I don't know where this all went, but it was some Russian guy who found it by accident, and he was largely dismissed as a loon and generally laughed at (though from what I could tell he did know a thing or two about hypervisors, systems programming and what not).
I reacted the same way to this news as to the news that an electrical distribution system was compromised over the Internet. That is, "are you kidding me?!". Just as it's stupid to connect certain critical systems to the public Internet, it's really silly to so loosely control military electronics sourcing.
Evidently, the military prefers to cut costs rather than have complete control over the manufacturing of their computer chips. Spending hundreds of millions on jets that have to be American-made is fine, but it's on the computer chips powering those jets, and pretty much all advanced military technology, that they have to save money.