
You have several ways to deal with trust issues in hardware:

1. Monitor hardware itself for bad behavior.

2. Monitor and restrict I/O to catch any leaks or evidence of attacks.

3. Use triple, diverse redundancy with voter algorithms for a given HW chip and function (see the sketch after this list).

4. Use a bunch of different chips while obfuscating which ones you're using.

5. Use a trusted process to make the FPGA, ASIC, or both.
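
To make No. 3 concrete, here's a minimal Python sketch of 2-of-3 voting over diverse implementations. Treating disagreement as a possible security event rather than a mere transient fault is my framing of the idea; the names and example values are illustrative, not any reference implementation:

    from collections import Counter

    def tmr_vote(outputs):
        # Majority vote over outputs from three diverse units. In a
        # security setting, disagreement is evidence of a fault or
        # possible subversion, so it is flagged rather than ignored.
        value, votes = Counter(outputs).most_common(1)[0]
        if votes < 2:
            raise RuntimeError("no majority: possible fault or subversion")
        if votes < len(outputs):
            dissenters = [i for i, o in enumerate(outputs) if o != value]
            print("warning: unit(s) %s disagreed" % dissenters)
        return value

    # Three diverse chips computing the same checksum:
    tmr_vote([0xDEAD, 0xDEAD, 0xDEAD])   # unanimous -> 0xDEAD
    tmr_vote([0xDEAD, 0xDEAD, 0xBEEF])   # flags unit 2, returns 0xDEAD

The same shape works whether the "units" are chips behind an interconnect or whole boards.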

I've mainly used Nos. 2-4, with No. 5 being the endgame. I have a method for No. 5 but can't publish it. Suffice it to say that almost all strategies involve obfuscation and shell games, where publishing them gives enemies an edge. Kerckhoffs's principle is wrong against nation-states: an obfuscated, diversified combination of proven methods is the best security strategy. And ASIC development is now so difficult and cutting-edge that knowing the processes themselves aren't being subverted is likely impossible.

So, my [unimplemented] strategy focuses on the process, people, and key steps. I can at least give an outline, as the core requirements are worth peer review and others' own innovations. We'd all benefit.

1. You must protect your end of the ASIC development.

1-1. Trusted people who won't screw you, with auditing that lets each potentially catch the others' schemes.

1-2. Trusted computers that haven't been compromised in software or physically.

1-3. Endpoint protection and energy gapping of those systems to protect the I.P. inside, with something like data diodes used to release files to fabs (see the sketch after this outline).

1-4. A way to ensure EDA tools haven't been subverted, in general or at least for you specifically.

2. CRITICAL and feasible. Protect the hand-off of your design details to the mask-making company.

3. Protect the process for making the masks.

3-1. Ensure, as in (1), security of their computers, tools, and processes.

3-2. Their interfaces should be set up so that they always do similar things, the same way, for similar types of chips with the same interfaces. Doing anything differently should signal caution or raise an alarm.

3-3. The physical handling of the mask should be how they always do it and/or automated where possible. Same principle as 3-2.

3-4. The mask-production company's ownership and location should be in a country with low corruption, whose government can't compel secret backdoors.

4. Protect the transfer of the mask to the fab.

5. Protect the fab process, or at least one set of production units, the same way as (3). Same security principles.

6. Protect the hand-off to the packaging companies.

7. Protect the packaging process. Same security principles as (3).

8. Protect the shipment to your customers.

9. Some of the above apply to PCB design, integration, testing, and shipment.
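
On the data-diode release in 1-3: here's a toy Python emulation of the one-way property. A real data diode enforces unidirectionality in hardware; plain UDP with no receive path merely models "no return channel". The host, port, and repetition scheme are illustrative assumptions:

    import hashlib
    import socket

    def release_file(path, host="fab-dropbox.example", port=9999):
        # One-way push: chunks go out, nothing ever flows back in.
        # With no ACKs possible, blind repetition stands in for
        # retransmission (real diodes use forward error correction).
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            seq = 0
            while True:
                chunk = f.read(1024)
                if not chunk:
                    break
                digest.update(chunk)
                frame = seq.to_bytes(4, "big") + chunk
                for _ in range(3):
                    sock.sendto(frame, (host, port))
                seq += 1
        # Publish the hash out-of-band so the fab can verify integrity.
        print("released", path, "sha256:", digest.hexdigest())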

So, there you have it. It's a bit easier than some people think in some ways: you don't really need to own a fab. However, you do have to understand how mask making and fabbing are done, be able to observe them, have some control over how tooling/software are done, and so on. There are plenty of parties and a lot of money involved in this. It will add cost to any project doing it, which means few will (competitiveness).

I mainly see it as something funded by governments, or by private parties seeking increased assurance for sales to government and security-critical sectors; it will almost certainly have to be subsidized. My hardware guru cleverly suggested that a bunch of smaller governments (eg G-88) might do it as a differentiator and for their own use, pooling their resources.

It's a large undertaking regardless. As far as specifics, I have a model for that, and I know one other high-assurance engineer who has one. Most people just do clever obfuscation tricks in their designs to detect modifications or brick the system when they're used, with optional R.E. of samples. I don't know those tricks, and it's too cat-and-mouse for me. I'm focused on fixing it at the source.

EDIT: I also wrote another essay tonight, on the cost of hardware engineering and ways to get it down for OSS hardware, in case you're interested:

https://news.ycombinator.com/item?id=10468534




I guess I still don't follow. Allow me to better specify the threat model I have in mind:

Consumer wants one computer system that he trusts. Consumer should be able to get one without having to trust any of the manufacturers or integrators. They should not be able to subvert the security of the system, assuming the published code and specs contain no errors. There should be no black boxes to trust.

Design team wants to make and provide open hardware. They want to service Consumer, and they want to do it in a way that Consumer does not need to trust any blackbox processes.

How does this happen? Note that I'm not asking about keeping the VHDL code secure, how to physically secure the shipment to the fab company, etc. I'm asking how Consumer, who gets one IC, can verify that the IC matches exactly with the published VHDL code and contains no backdoors.

It seems you mainly focus on how the design team can minimise the chances of subversion. That's a much lower bar and not really sufficient in my mind. There are still too many places to subvert, and the end consumer still needs to trust his vendor, which is the same situation we have today.

The bit about multiple independent implementations with voting (NASA-style) sounds extremely expensive and inefficient, but also very interesting for high-security systems. Are you aware of any projects implementing it for a general-purpose computer, specifically to prevent hardware backdooring (as opposed to for reliability)?

UPDATE: To clarify, as wording is important in these kinds of discussions: When something is described as 'trusted', that's a negative to me, as a 'trusted' component by definition can break the security of the system. We need a way to do this without 'trusted' components. So when you say 'Use a trusted process to make the FPGA, ASIC, or both.', that sounds like exactly what we have today - the consumer gets a black box, and no way to verify that it does what it's claimed to do. The black box must be 'trusted' because there's no other way. Me knowing that the UPS shipment containing the mask had an armed guard does not make me more likely to want to trust the chip.


"Design team wants to make and provide open hardware. They want to service Consumer, and they want to do it in a way that Consumer does not need to trust any blackbox processes. How does this happen?"

That was covered here: " I have a method for No 5 but can't publish it. Suffice it to say that almost all strategies involve obfuscation and shellgames where publishing it gives enemies an edge."

There are black box processes trusted and checked in my best scheme, though, with security ranging from probabilistic to strong-with-some-risks. Mainstream research [1] has a few components of mine. They're getting closer. DARPA is funding research right now into trying to solve the problem without trust in masks or fabs. We're not there yet. Further, the circuits are too small to see with a microscope, the equipment is too expensive, things like optical proximity correction algorithms are too secret, the properties of fabs vary too much, and there's too little demand to bring costs down to where just anyone can do it openly. Plus, even the tooling itself is black boxes within black boxes out of sheer necessity, due to its esoteric nature, constant innovation, competition, and patents on key tech.

Note: Seeing chip teardowns at 500nm-1um did make me come up with one method. I noted they could take pictures of circuits with a microscope. So, I figured circuit creators could create, distribute, and sign a reference image for what that should look like. The user could decap and photo some subset of their chips. They could use some kind of software to compare the two. If enough did this, a chip modification would be unlikely except as a denial-of-service attack. Alas, you stop being able to use visual methods around 250nm and it only gets harder [2] from there.
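
For what it's worth, here's a rough Python sketch of that comparison step. It assumes the two images are already registered (aligned and scaled), which is the hard part in practice, and that signature verification on the reference image happens elsewhere; the threshold is a made-up tolerance:

    import numpy as np
    from PIL import Image

    def compare_die_photos(reference_path, sample_path, threshold=0.05):
        ref = np.asarray(Image.open(reference_path).convert("L"), dtype=float)
        smp = np.asarray(Image.open(sample_path).convert("L"), dtype=float)
        if ref.shape != smp.shape:
            raise ValueError("images must be registered to the same geometry")
        # Normalize brightness/contrast, then measure the fraction of
        # pixels that differ strongly; added or removed structures show
        # up as clusters of differing pixels.
        ref = (ref - ref.mean()) / (ref.std() + 1e-9)
        smp = (smp - smp.mean()) / (smp.std() + 1e-9)
        diff_fraction = float(np.mean(np.abs(ref - smp) > 1.0))
        if diff_fraction > threshold:
            print("ALERT: %.1f%% of die area differs" % (100 * diff_fraction))
        else:
            print("OK: %.1f%% differing pixels" % (100 * diff_fraction))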

Very relevant is this statement by a hardware guru that inspired my methods, which embrace and secure black boxes instead of going for white boxes:

"To understand what is possible with a modern fab you'll need to understand cutting-edge lithography, advanced directional etching, organic chemistry, and physics that's not even nearly mature enough to be printed in any textbook. These skills are all combined to repeatedly create structures at 1/10th the wavelength of the light being used. Go back just 10 or 15 years and you'll find any number of real experts (with appropriate PhD qualifications) who were willing to publicly tell you just how impossible the task of creating 20nm structures was, yet here we are!

Not sure why you believe that owning the fab will suddenly give you these extremely rare technical skills. If you don't have the skills, and I mean really have the skills (really be someone that knows the subject and is capable of leading-edge innovation), then you must accept everything that your technologists tell you, even when they're intentionally lying. I can't see why this is any better than simply trusting someone else to properly run their fab and not intentionally subvert the chip creation process.

In the end it all comes down to human and organizational trust."

Very well said. It's still an argument for securing the machines they use and the transportation of designs/masks/chips. The critical processes, though, will boil down to you believing someone who claims expertise and claims to have your interests at heart. I'm not sure I've ever seen someone fully understand an electron microscope down to every wire. I assure you the stuff in any fabrication process, from masks to packaged ICs, is much more complex. Hence my framework for looking at it.

"how to physically secure the shipment to the fab company, etc. I'm asking how Consumer, who gets one IC, can verify that the IC matches exactly with the published VHDL code and contains no backdoors."

Now, for your other question, you'd have to arrange that with the fabs or mask makers. It would probably cost extra. I'm not sure, as I don't use the trusted foundry model [yet]. My interim solution is a combination of tricks that don't strictly require that but are mostly obfuscation. You'd need guards you can trust who can do good OPSEC, and it can never leave your sight at customs. You still have to trust the mask maker, fab, and packager. That's the big unknown, though, ain't it? The good news is that most of them have a profit incentive to crank out product fast at the lowest cost while minimizing any risks that hurt business. If they aren't attacking you or cooperating with attackers, it's probably for that reason.

"It seems you mainly focus on how the design team can minimise the chances of subversion. That's a much lower bar and not really sufficient in my mind."

That's semi-true. Re-read my model. The same one can protect the consumer with minor tweaks, because it maps to the whole lifecycle of ASIC design and production. One thing people can do is periodically have a company like ChipWorks tear a chip down to compare it to the published functionality. For patents and security, people will do that anyway if it's a successful product. So, as the Orange Book taught me long ago, I'm actually securing the overall process plus what I can of its deliverables. So long as the process stays in check, it naturally avoids all kinds of subversions and flaws. High-assurance design and evaluation by skilled, independent parties do the rest.

"The bit about multiple independent implementations with voting (NASA-style) sounds extremely expensive and inefficient, but also very interesting for high-security systems. Are you aware of any projects implementing it for a general-purpose computer, specifically to prevent hardware backdooring (as opposed to for reliability)?"

It's not extremely expensive: many embedded systems do it. It just takes extra hardware, an interconnect, and maybe one chip (COTS or custom) for the voting logic. These can all be embedded. Those of us doing it for security all did it custom on a per-project basis: no reference implementation that I know of. There are plenty of reference implementations of the basic scheme under phrases like triple modular redundancy, lockstep, voting-based protocols, recovery-oriented computing, etc. Look those up.

You can do the voting or error detection as real-time I/O steps, transactions, whatever. You can use whole systems, embedded boards, microcontrollers, FPGAs, and so on. The smaller and cheaper stuff has less functionality, with lower odds of subversion or weaknesses. It helps to use ISAs and interfaces with a ton of suppliers for the diversity and obfuscation part. If you're targeted, don't order with your name, address, or general location. A few examples of fault-tolerant architectures follow, with a sketch of the security-voting idea after the links. You're just modifying them to do security checks and preserve invariants instead of mere safety checks, although the safety tricks often help given the overlap.

App-layer, real-time embedded http://www.montenegros.de/sergio/public/SIES08v5.pdf

Onboard an ASIC in VHDL http://www.ijaet.org/media/Design-and-analysis-of-fault-tole...

FPGA scheme http://crc.stanford.edu/crc_papers/yuthesis.pdf

A survey of "intrusion-tolerant architectures" which give insight http://jcse.kiise.org/files/V7N4-04.pdf
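
As promised above, a sketch (in Python for brevity; the real thing would be a chip, board, or FPGA) of adapting voting from safety to security: run one transaction against diverse implementations, vote, and withhold any output that violates a stated invariant. The backends and invariant here are stand-ins of my own invention:

    def secure_transaction(tx, backends, invariant):
        results = [run(tx) for run in backends]   # diverse implementations
        majority = max(set(results), key=results.count)
        if results.count(majority) < 2:
            raise RuntimeError("no majority: halt and investigate")
        if not invariant(tx, majority):
            # Even a unanimous result is withheld if it breaks the
            # invariant -- the security check on top of mere voting.
            raise RuntimeError("invariant violated: output withheld")
        return majority

    # Three stand-ins for diverse adders; result must stay in 32 bits.
    backends = [lambda t: t[0] + t[1]] * 3
    print(secure_transaction((2, 3), backends, lambda t, r: 0 <= r < 2**32))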

"To clarify, as wording is important in these kinds of discussions: When something is described as 'trusted', that's a negative to me, as a 'trusted' component by definition can break the security of the system."

Oops. I resist grammar nazis but appreciate people catching wording that really affects understanding. That example is a mistake I intentionally try to avoid in most writing. I meant "trustworthy" and "trusted" combined. You can't avoid trusted people or processes in these things. The real goal should be to minimize the amount of trust necessary while increasing assurance in what you do trust. Same as for system design.

"Me knowing that the UPS shipment containing the mask had an armed guard does not make me more likely to want to trust the chip."

Sorry to tell you that it's not going to get better for you outside of making the sacrifices of above-style schemes, which are only probabilistic and have significant unknowns in the probabilities. Tool makers, fabs, and packagers must be semi-trusted in all schemes I can think of: the design must be turned into circuitry at some point. The best mix is putting detection, voting, or whatever is critical on an older node or custom wiring, which you can vet by eye if necessary. You can still do a lot with 350nm. Many high-assurance engineers use older hardware with hand-designed software between modern systems due to subversion risk. I have a survey [3] of that stuff, too. :)

Note: My hardware guru did have a suggestion I keep reconsidering. He said the most advanced nodes are so difficult [4] to use that they barely function at all. Plus, mods of an unknown design at the mask or wiring level are unlikely to work except in the most simplistic cases. I mean, they spend millions verifying circuits they understand, so arbitrary modifications to black boxes should be difficult. His advice, though expensive, was to use the most cutting-edge node in existence while protecting the transfer of the design and of the chips themselves, the idea being that subversion of the ASIC itself would fail or not even be tried due to difficulty. I like it more the more I think about it.

[1] https://www.cs.virginia.edu/~evans/talks/dssg.pptx

[2] https://www.iacr.org/archive/ches2009/57470361/57470361.pdf

[3] https://www.schneier.com/blog/archives/2013/09/surreptitious...

[4] http://electronicdesign.com/digital-ics/understanding-28-nm-...



