
The FPGAs could still be backdoored, and the WiFi chipset should receive a fair bit of suspicion. But I guess it is a good first step.



It’d be a pretty gargantuan task to backdoor the FPGA silicon itself. You’d have to have compromised Xilinx’s software and had some idea of what signals you want to tap. Kinda interesting to think about... I suppose that’s where open source tools for FPGAs would be nice.

The image? Sure, could easily be backdoored, but that’s what open source is for: auditability.

Edit: FPGA silicon is kinda backdoored by definition thanks to JTAG configurability/readability. (Barring cases where keys are used.) So I think the really interesting thing would be the addition of nefarious logic by the design tools.


Well, yes, but in the past this type of device would be floated as an attempt to fight state-level actors. And... it can't do that. That's all I'm pointing out.

Either the silicon, the synthesis software, or both could be compromised. Per leaked documents, what usually gets attacked is random number generation, but I'm sure there are more avenues.
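
The usual mitigation against an untrusted RNG is to never use its raw output directly, but to hash it together with independent entropy sources, so a backdoored generator can't single-handedly determine your keys. A rough sketch in Python (both source functions are illustrative stand-ins, not any particular platform's API):

    # Sketch: mix a suspect hardware RNG with an independent source;
    # the hash output is no weaker than the strongest input.
    import hashlib
    import os
    import time

    def hw_rng(n):
        # Stand-in for the suspect on-chip generator (RDRAND-style).
        return os.urandom(n)

    def jitter_entropy(n):
        # Timing jitter as a crude independent source.
        buf = bytearray()
        while len(buf) < n:
            t0 = time.perf_counter_ns()
            sum(range(1000))  # some work whose duration jitters
            buf += (time.perf_counter_ns() - t0).to_bytes(8, "little")
        return bytes(buf[:n])

    def mixed_random(n):
        # n up to 64 (SHA-512 digest size) for this toy version.
        return hashlib.sha512(hw_rng(64) + jitter_entropy(64)).digest()[:n]

    print(mixed_random(32).hex())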

You can usually turn off JTAG, but a JTAG or other debug interface that isn't permanently disabled is itself an exploit class.


What’s stopping the toolchain for your micro from being backdoored as well? The chain of trust has to start somewhere.


This is the likely target for the NSA. Intercepting supply chains for stock parts inside China is not their specialty. Further, bothering with custom hardware would require substantial resources and time to develop before it could even be deployed. Nobody is going to do that. Bunnie's compiler just changed checksum...

To fight such an attack, the output of deterministic builds running on geographically dispersed systems with disparate stacks (physical, cloud, newly purchased, multiple OSs, etc.) may be compared before release.
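
For instance, a minimal version of the comparison step (builder names and artifact paths here are hypothetical):

    # Compare artifacts from independent build environments; release
    # only on unanimity. Names and paths are illustrative.
    import hashlib
    import sys

    ARTIFACTS = {
        "builder-physical":  "out/physical/firmware.bin",
        "builder-cloud":     "out/cloud/firmware.bin",
        "builder-fresh-box": "out/fresh/firmware.bin",
    }

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    digests = {name: sha256(path) for name, path in ARTIFACTS.items()}
    for name, digest in digests.items():
        print(f"{digest}  {name}")

    if len(set(digests.values())) != 1:
        sys.exit("MISMATCH: some build environment produced different output")
    print("All builders agree (modulo a common-mode compromise).")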

The protected body of software should also include the firmware upload utilities.

Another attack, given the open source nature of the device, could be distributing cheap, compromised units broadly after the fact to ensure they are widely adopted.


>Intercepting supply chains for stock parts inside of China is not [the NSA's] specialty

>Another attack, given the open source nature of the device, could be distributing cheap, compromised units broadly after the fact to ensure they are widely adopted.

I like thinking about high-level threat models as much as the next guy, but these two statements seem to be at odds. Unless by "compromised units" you don't mean what I think you mean.


First was referring to supply chain interdiction for third party fabricators attempting to produce non-compromised units. Second was referring to active fabrication and distribution of compromised units to unsuspecting consumers.


Oh, ok. So it's the difference between opening the box to put in a wayward chip, versus starting a factory that makes units with the wayward chip to begin with. Fair enough.


The iCE40 chip used is supported by the icestorm/yosys/nextpnr stack.

But I do not know if the Xilinx bitstream reverse-engineering effort supports the particular Xilinx chip used here. That project is at a much earlier stage of development.

And I do have to wonder if an ECP5 (supported by the Trellis project) wouldn't have been able to do the job.
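
For reference, the fully open iCE40 flow is short enough to sketch (driving the tools from Python; top.v, the pin constraints, and the --up5k device flag are placeholders, not Precursor's actual build):

    # Sketch of the open iCE40 flow: synthesize with yosys,
    # place-and-route with nextpnr, pack a bitstream with icepack.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run(["yosys", "-p", "synth_ice40 -top top -json top.json", "top.v"])
    run(["nextpnr-ice40", "--up5k", "--json", "top.json",
         "--pcf", "top.pcf", "--asc", "top.asc"])
    run(["icepack", "top.asc", "top.bin"])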


That could be true even if you sourced and programmed them yourself, no? We already know that AMD's PSP and Intel's ME are effectively unauditable black boxes, for example.

How deep do you want to go down the rabbit hole? Are you capable of fabbing your own silicon?


Interestingly, bunnie's previous take has been that fabbing your own silicon is likely less secure than using an FPGA, due to supply chain risk.

If your ASIC is compromised in transit, it's total game over. If your FPGA is compromised in transit, the attacker has to have some knowledge of the target bitstream.

It has been argued that, by making it easy for end users to rearrange the bitstream, an FPGA can be more secure than an ASIC.

https://www.bunniestudios.com/blog/?p=5706


I looked through the link and that does not seem to be what he says. He is just talking about how much verification you need to do, like I am trying to do.

It is shortsighted to view making your own ASIC as less secure. The supply chain weaknesses exist whether or not you make your own hardware. Making your own hardware can potentially limit your exposure to systemic, baked-in vulnerabilities. Additionally, after-manufacture exploits (as in the NSA interdiction of Cisco router shipments) are easier for mass-produced goods.

There are likely edge cases involving small, easier to bribe manufacturers, but I'm not sure you can make broad generalizations on that possibility alone.


Calling it an "edge case" dismisses the fact that Facebook/Apple/Google are never going to make this kind of openly verifiable, secure device without a colossal shift in the market. We're not going to see mass-produced goods that target the same market as Precursor, so all there is in this space is "small, easier to bribe manufacturers".

It's great that Facebook/Apple/Google can securely manufacture devices with a supply chain that's verifiable by them, but externally, TouchID's security amounts to "we're Apple, trust us". That's not good enough for the target market for Precursor.


>That's not good enough for the target market for Precursor.

I know? Your two points are really disjoint.

>so all there is in this space is "small, easier to bribe manufacturers".

I think the number of people who will willingly take money for nefarious purposes is smaller than you think. I would worry more about misguided cooperation with law enforcement. See, for example, the case of the Russian who tried to bribe a Tesla employee to install malware. There may be factors like the employee's belief that the breach attempt would be caught, but it would be very easy to pass the employee's actions off as a mistake.

My real point is that between large manufacturers, which are likely to have moles or to cooperate with law enforcement, and smaller manufacturers, which are less likely to have moles but may be easier to compromise in general, it's probably about even, perhaps favoring smaller manufacturers depending on what you have available.

In any case you have trust issues with either one, so I don't see why pointing out "Trust us is not enough" is topical.

With a smaller manufacturer you are more likely to get in to see their facilities and can build a personal relationship. With a larger one that is likely impossible.


The article actually posted to this HN thread contradicts that, or at least implies a change of stance:

> We are also using the FPGA in Precursor to validate our SoC design, which will eventually give us the confidence we need to tape out a full-custom Betrusted ASIC, thereby lowering production costs while raising the bar on hardware security.


There is no change of stance, but there is a subtlety. The hair to split here is that between "security" and "trustability".

I'm defining "security" to include the ability of a device to keep a secret after it's been verified and provisioned. This is also known as "tamper resistance".

I'm defining "trustability" as the ability of one to draw the conclusion that the device in front of you is in fact the device you think it is. It's an essential pre-requisite for security, but it is not identical to "tamper resistance", which is probably what more people expect when they hear the word "security" (that is, security sounds more like a bullet proof vault than a correctly constructed system).

From the trustability standpoint, even a sophisticated technologist will typically have no evidence-based reason to trust any given chip, because they likely have no tools on hand that can verify its correct construction without simultaneously destroying the chip. For example, not many people have a ptychographic x-ray system at home.

On the other hand, you may have some reason to trust an FPGA, because with the tools in your home you can craft your own bitstreams and designs that incorporate countermeasures to potential exploits buried in the FPGA. It tips the balance of power from a "hands up I surrender" situation to a cat-and-mouse game. Furthermore, there is a limit to how deep the rabbit hole can go, because with sufficient countermeasures the circuitry required to backdoor the design without detection becomes larger than can fit within the raw size of the FPGA's silicon.
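
To make that concrete with the open iCE40 flow mentioned elsewhere in this thread: one simple countermeasure is to re-run place-and-route with a fresh seed for each device, so a silicon-level tap that depends on knowing where signals land in the fabric has to beat every possible layout. A sketch (nextpnr does accept a --seed option; the file names are placeholders):

    # Sketch: give each device its own bitstream layout by randomizing
    # the place-and-route seed.
    import secrets
    import subprocess

    seed = secrets.randbelow(2**31)  # fresh seed per provisioned device
    subprocess.run(["nextpnr-ice40", "--up5k",
                    "--json", "top.json", "--pcf", "top.pcf",
                    "--asc", f"top-{seed}.asc", "--seed", str(seed)],
                   check=True)
    print(f"placed and routed with seed {seed}")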

Thus, an FPGA is "more trustable" than an ASIC in the sense that there is any direct evidence-based reason at all for trusting it.

However, an FPGA is not necessarily more tamper-resistant than an ASIC. If your adversary has full physical possession of your device and no regard for leaving it intact, then they have a number of avenues to attack both the FPGA and the ASIC.

What this statement implies is that a properly designed ASIC will generally raise the bar on how hard it is to extract secrets compared to an FPGA, assuming an adversary with direct physical access to the device and no regard to evidence of tampering with the device.

More importantly, however, the ASIC will be cheaper. That is really the main point of that statement.


Wow, it's so cool to hear words that resonate. My own thoughts on trustability also run to minimalism but, in an attempt to reach something even better than cat-and-mouse, extend to measuring power draw. There is a nice sharing of concerns here: most users want their devices to run a long time, power draw scales with computation, and subversion of the kind you're talking about requires computation, and hence power draw. So if you have a baseline quiescent draw, you can measure your application(s), and anything above that is a possible threat.
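
Something like this toy check is what I have in mind (the sensor readings are simulated so the sketch runs; the numbers and the 3-sigma threshold are made up):

    # Sketch: flag power draw above the measured quiescent baseline plus
    # the profiled cost of known workloads; the excess is unexplained
    # computation.
    import random
    import statistics
    import time

    def read_power_mw():
        # Stand-in for a real power monitor (e.g., an I2C current-sense
        # chip); here we simulate a noisy ~500 mW rail.
        return random.gauss(500, 5)

    def sample(n=200, interval_s=0.01):
        readings = []
        for _ in range(n):
            readings.append(read_power_mw())
            time.sleep(interval_s)
        return readings

    # 1. Characterize quiescent draw with known workloads stopped.
    idle = sample()
    baseline = statistics.mean(idle) + 3 * statistics.stdev(idle)

    # 2. Later, anything persistently above baseline plus the profiled
    #    application budget is a possible threat.
    APP_BUDGET_MW = 120  # made-up profiled cost of the running app
    for reading in sample():
        if reading > baseline + APP_BUDGET_MW:
            print(f"ANOMALY: {reading:.0f} mW exceeds explained budget")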

Another area that I've been thinking about, but I see you haven't written about here, is the issue of boot and IO. I would love to see IO systems greatly simplified, to the point where input devices just write measurements to memory from power-on and stop at power-off, and output devices periodically check a fixed memory region for something to write out. If the input memory region were large enough, this would be a perfect opportunity for a poor man's circular buffer of arbitrary size, which has lots of applications. Indeed, you could have explicitly zero-copy use of input if you could guarantee that your process completes before the buffer is overwritten, which you can guarantee by just making it really big (or by careful tuning if you're memory constrained).
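
Here's roughly what I mean, as a toy in Python with a bytearray standing in for the fixed memory region (record and region sizes are made up, and the "device" is simulated):

    # Poor man's circular buffer over a fixed memory region: the device
    # appends fixed-size records and bumps a write counter; the reader
    # keeps its own counter and consumes records in place. Zero-copy
    # holds as long as the reader finishes before the writer laps it,
    # which you arrange by making REGION_RECORDS big.
    RECORD_SIZE = 16
    REGION_RECORDS = 4096
    region = bytearray(RECORD_SIZE * REGION_RECORDS)
    write_count = 0  # in hardware this too would sit at a fixed address

    def device_write(record: bytes):
        # Simulates the input device writing one record on its own clock.
        global write_count
        record = record[:RECORD_SIZE].ljust(RECORD_SIZE, b"\0")
        slot = write_count % REGION_RECORDS
        region[slot * RECORD_SIZE:(slot + 1) * RECORD_SIZE] = record
        write_count += 1

    def read_available(read_count: int):
        # Yields memoryviews into the region: zero-copy if we keep up.
        while read_count < write_count:
            if write_count - read_count > REGION_RECORDS:
                raise RuntimeError("lapped: reader fell a full buffer behind")
            slot = read_count % REGION_RECORDS
            yield memoryview(region)[slot * RECORD_SIZE:(slot + 1) * RECORD_SIZE]
            read_count += 1

    for i in range(5):
        device_write(f"sample {i}".encode())
    for rec in read_available(0):
        print(bytes(rec).rstrip(b"\0"))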

The goal of all of this is to embrace the modern era of computing, which is NOT memory constrained at all, and to build computers that hew more closely to their Platonic ideals. A system like yours seems the closest I've seen to this goal, modulo a few things mentioned above.

Cheers, and good luck (from a backer).


There's a small-scale IC fabrication system under development. It costs $5-30M, fits in an office, and can manufacture 0.5-inch wafers with CMOS and MEMS. The company provides training and rents out time on the machines.

https://www.yokogawa.com/yjp/solutions/solutions/minimal-fab...

HN post: https://news.ycombinator.com/item?id=24540562



