The 30-minute timeout is particularly mischievous. It's like they REALLY want to slow down any effort at patching out the ME.
Are we going to have to wait on an insider leak on what's the real deal here? Or have I completely missed out on a perfectly good excuse for what's going on?
Seriously, anyone who has actually worked in a real company knows that it is a huge amount of effort to get source code released to the public, and if any of it is licensed from third parties it is probably near-impossible.
Take microcode for example. At one time (as I understand it) microcode was not a signed blob. However companies wishing to hide details of their microarchitecture chose to encrypt it.
My guess is that these encrypted blobs grew first out of corporate closed source culture, which is strong in HW companies. If they are subverted with actively malicious code it was probably by secretive efforts, not the NSA simply propositioning the HW manufacturer.
Finally I'd like to point out that unless you design your CPU chip yourself and oversee the layout of it on the die, it is also possible that the semiconductor manufacturer you hire could embed their own nefarious processor within your design.
In practicality, I think running RISC V on an FPGA would have a very low risk of subversion. Though the FPGA design tools might add nefarious logic too.
If you think there's an evil NSA front for this type of stuff -- it's Absolute Software. Their bits have been embedded in most BIOS packages since the 90s, and nobody has heard of them.
You can wipe, encrypt, lock, view & kill processes, retrieve any file and view every file on the machine, and view hardware & software status and licensing. It also incorporates a bunch of other features, but those are what scare me most.
This is only made worse by the fact that it is readily exploitable: https://threatpost.com/millions-of-pcs-affected-by-mysteriou...
A past employer looked into the product and had a reasonably high-level engagement. We never got complete answers to many questions, and the company itself didn't feel particularly large. Granted, we disengaged when we couldn't make the ROI work -- we just don't lose many devices. It seems unusual that a teeny company from Vancouver that nobody has heard of can navigate the bureaucracy of massive PC vendors and Asian suppliers of motherboards and Android SoCs for decades.
It also seems weird when you consider that Intel, despite having a near monopoly on x86 and the ability to get other mega corps to put Intel stickers on things, (and even push them to make Atom phones that nobody wants!) gets comparatively little love for its management layer.
The reason Absolute is Vancouver-based, by the way, is that the Canadian government gives massive tax breaks to software companies, which is why a ton of point-of-sale and other software companies are based just to the north.
We have CJDNS (which Salsa20s all your data & can VPN legacy networks for you), fully FLOSS SBCs for under $20 each, and 802.11n and AC outdoor radios can be had for cheap; this is merely a community involvement problem.
I also work at a company that has exactly this focus - to sell, and eventually produce, devices that can be run with free software from top to bottom - but I don't see ourselves producing our own devices in the next 5 years, even if we were to become wildly successful.
The hope seems to lie with ARM for the moment - the C100 / C201 even have the Embedded Controller (EC) code available - but they do have plans to implement something similar to the ME, AFAIK.
Also, most people already are living with the thought that their computers are cracked/hacked/virused the moment they are connected to the internet - all my friends and relatives ask me to check their computer for viruses - almost none trust their computers or phones (especially Android phones, it seems). For such people, where this is the natural state of the world, it's very hard to imagine that they can change anything about it - and telling them that there are backdoors from the moment the laptop is assembled, doesn't help much.
OPi with no build flags:
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4] 0.00-120.00 sec    290 MBytes   20.3 Mbits/sec  165    sender
    [  4] 0.00-120.00 sec    290 MBytes   20.3 Mbits/sec         receiver
OPi with optimal build flags:
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4] 0.00-120.00 sec    366 MBytes   25.6 Mbits/sec  141    sender
    [  4] 0.00-120.00 sec    366 MBytes   25.6 Mbits/sec         receiver
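For reference, the delta between those two runs works out as follows (numbers copied straight from the iperf3 output above):

```python
# Throughput comparison of the two iperf3 runs quoted above.
baseline_mbits = 20.3   # OPi, no build flags
tuned_mbits = 25.6      # OPi, optimal build flags

speedup = (tuned_mbits - baseline_mbits) / baseline_mbits
print(f"throughput gain from build flags: {speedup:.1%}")
```

So the optimal flags buy roughly a quarter more throughput on the same hardware.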
What about the other powers? China, France, and Russia have their own NSAs that would be asked to provide solutions to protect all the PCs in the service of their own governments. What are they doing about it?
(see also https://en.wikipedia.org/wiki/Trusted_computing_base)
Although I recommend switching off x86, I don't buy the claim that we can't do another x86 without backdoors. For one, Intel or AMD might do a "semi-custom" design without one for a price. Second, Centaur has long been the 3rd player in x86 for low-power stuff sold by VIA. They'd probably do a high-performance design if incentivized. Third, many x86 players showed up over time that simply failed in the market or were acquired. Nothing is stopping another unless there's a legal restriction I'm unaware of.
There is also microcode, and many patches are issued through microcode.
Canonical presentation: REcon 2014 - Intel Management Engine Secrets (Igor Skochinsky) https://www.youtube.com/watch?v=4kCICUPc9_8
Decoding ME firmware in BIOS updates until Skylake (2015): http://io.netgarage.org/me/
Inevitably the complaint is, "Well if they have physical access you're screwed anyways." And I just don't understand how anyone can maintain that farce when the last year has shown that it's a genuine challenge even for the US FBI to unlock a mobile device without the owner's say-so, and it's getting harder all the time.
If you truly believe that physical access trumps any security, then you can never trust your hardware anyway, as it is exceptionally hard to prove it conforms to a spec.
For physical access, I thought the case was "If anyone has access to your device while unlocked, or locked but not disk-encrypted, consider it permanently compromised. If anyone has access to it while disk-encrypted, consider replacing it if you're very concerned." The 'permanently' bit is for unknown firmware compromise, and this position seems pretty sane.
But trusted computing modules are something else altogether. Even non-physical access can compromise them. There's some evidence that they can be compromised around a fully-encrypted disk. And checking whether they're compromised is effectively impossible.
Yes, it might be possible to execute trusted code around the module, if it never hits the machine in a vulnerable form. But that's slow, non-interactive, and virtually nonexistent at present. Right now, trusted computing modules do compromise machines at roughly Ring -3, with no real recourse.
Difficulty? Yes. But the FBI is not the NSA; they don't specialize in such attacks. It's like asking your plumber to do heart surgery. So they commissioned it to someone else who does, and boom, they had access.
Strong cryptographic security shouldn't have a pricetag any lower than "we feed all the hydrogen in the universe to black holes to harvest enough energy for the computations".
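That hyperbole is actually not far off. A back-of-envelope sanity check using the Landauer limit (the CMB temperature and the figure for the Sun's mass-energy are rough assumptions of mine, not from the thread):

```python
import math

# Landauer-limit estimate: minimum energy to merely step a counter
# through all 2^256 states, computing at the temperature of the
# cosmic microwave background.
k_B = 1.380649e-23       # Boltzmann constant, J/K
T = 2.7                  # CMB temperature, K (assumed lower bound)
states = 2 ** 256        # a 256-bit keyspace

energy_joules = k_B * T * math.log(2) * states
sun_mass_energy = 1.8e47  # rough E = mc^2 for the Sun, J

print(f"{energy_joules:.2e} J, i.e. roughly "
      f"{energy_joules / sun_mass_energy:.0e} Suns converted to pure energy")
```

Even at the thermodynamic floor, brute-forcing a 256-bit key costs on the order of ten million Suns' worth of mass-energy, so the "feed hydrogen to black holes" price tag is the right ballpark.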
And phone security is orthogonal to baked-in firmware signing keys. The only change you need is allowing the user to add their own signing keys maybe with the caveat that all data in the protected keystore gets destroyed in the process. Then you have freedom and secure boot in one package.
The signing keys are the issue, not the ring -1 management code.
> If you truly believe that physical access trumps any security, then you can never trust your hardware anyway, as it is exceptionally hard to prove it conforms to a spec.
Here are two simple attack scenarios.

The first:

1. compel a manufacturer to create spy firmware, signed with their signing key

2. get access to a device for a few minutes

3. flash patched firmware that exfiltrates the data once the device is unlocked by the user

4. return the device to the user / to where the user placed it

The second:

1. acquire the device

2. a) compel the manufacturer to create a firmware that bypasses the "delete on unlock failure" feature, or b) unsolder chips and apply silver needles to the flash controllers so you can read/restore internal key storage whenever it gets wiped

3. enumerate all N-digit passcodes until it is unlocked
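Step 3 is cheap once step 2 has defeated the attempt counter. A quick sketch of the worst-case search time (the per-attempt delay here is a made-up assumption; real secure elements enforce their own key-derivation latency):

```python
# Hypothetical cost of exhausting an N-digit PIN once the retry
# counter can be rewound. 80 ms/attempt is an assumed figure for a
# hardware-enforced key-derivation delay.
seconds_per_try = 0.08

for digits in (4, 6, 8):
    tries = 10 ** digits
    worst_case_hours = tries * seconds_per_try / 3600
    print(f"{digits}-digit PIN: {tries:>9} tries, "
          f"worst case {worst_case_hours:.1f} h")
```

A 4-digit PIN falls in well under an hour; even 8 digits is only a matter of months of unattended machine time, which is why the erase-on-failure counter is the real defense.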
Your argument is that no one can be trusted to make them. But my argument is that if you believe that, then you know you can't trust anyone to make anything, one way or the other.
Surely rather than botch the whole thing (because we don't like the vendors) we should start to propose stronger and more consumer-centric versions of these?
A skeptical and cynical part of me notes: given the tiny, tiny number of users who can actually verify their machines are what they say they are, all "trusted computing" rebukes do is argue against a tech that actually does mitigate real attacks for the vast majority of uses.
Because the thing you are asking for is not possible. You have a bad premise:
> And I just don't understand how anyone can maintain that farce when the last year has shown that it's a genuine challenge even for the US FBI to unlock a mobile device without the owners say-so and it's getting harder all the time.
Which has two flaws. First, it wasn't a challenge for them, they were just using it as an excuse to whine about the second one that actually is. And second, the only real security is math (encryption), but it doesn't require any special support from the hardware.
If you have full disk encryption with a strong passphrase and the device is currently locked (i.e. the key is not in memory), the only way to get that data is to have the passphrase or break the encryption, and breaking the encryption is not expected to be possible.
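To put "strong passphrase" in numbers, here is a sketch of the worst-case guess counts for a diceware-style passphrase (the 7776-word list is an assumption; the arithmetic is the same for any uniformly sampled wordlist):

```python
import math

# Guessing cost of a passphrase of `words` words drawn uniformly
# from a 7776-word (diceware-style) list.
wordlist_size = 7776

for words in (4, 6, 8):
    entropy_bits = words * math.log2(wordlist_size)
    print(f"{words} words: ~{entropy_bits:.0f} bits, "
          f"up to {wordlist_size ** words:.1e} guesses")
```

Six or more words puts the keyspace far beyond any offline attack, which is the "not expected to be possible" regime the comment describes.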
The problem is, if someone has physical access to your device and can compromise your firmware, they can record your passphrase the next time you unlock the device, and then they don't need to break the encryption.
But this is not a thing you can do anything about. If someone has physical access to your device they can just steal it and leave you with one that looks the same up to the point of you entering your passphrase and then transmits it to the attacker. Nothing about the original device can fix that because it isn't the original device.
Similarly they can install a surveillance device in your room that can record you entering your passphrase and then come back tomorrow to take your device.
The only answer to these attacks is physical security. Secure boot does nothing.
They bought a hack from another company for an old model of phone. For the latest model handset, the vendor claimed they had not yet hacked it (but were confident they would).
> The problem is, if someone has physical access to your device and can compromise your firmware, they can record your passphrase the next time you unlock the device, and then they don't need to break the encryption.
And if the device is tamper-resistant? I had this same conversation with a person who hated Yubikey. Nearly exactly the same. It made even less sense, because the entire point of the Yubikey is to be tamper proof.
> The only answer to these attacks is physical security. Secure boot does nothing.
I think maybe what I object to most is that essentially the only attacks considered in this discourse are attacks directly by nation-states at scale. Not only is it unclear that handsets and self-built computers are subject to these (at scale), it's doubly unclear that avoiding a TPM yourself would protect you from attacks against TPM-backdoored devices, since your information in an online world is stored on commodity clouds that probably DO have that hardware, and if an attack exists, it'll certainly be able to ignore FDE.
Even if we ignore that, TPM mitigates real attacks we see in the real world, and increases the difficulty of those attacks. Average consumers are a case that should be considered in the discourse.
Most people can't effectively harden themselves against nation-state-level attacks (if only because incarceration and interrogation exist and even physical security won't stop them), yet a nation-state-level attack involving a conspiracy between manufacturers and the NSA is the justification used to discredit the use of TPMs.
And for the dubious benefit of saying, "Well I have a spec and presumably this board is fully defined by this spec." Of course, truly verifying the board has no back doors is not made substantially easier by the absence of a TPM. So I have trouble believing that this is not an argument among different factions who want final say on a wide variety of consumer hardware.
That is missing the point, as this is not about the security of an individual against targeted attacks, but about the reliability of our governing structure. In order to harden a democracy against subversion by minorities, it's not necessary for each individual to be able to fend off an army. That does not mean that implanting every citizen with a centrally triggered kill device would be a good idea.
Also, no conspiracy is required: if there is a remote access key, say, that is a single point of failure that no company can defend once a nation state wants access to it, even if defending it may well be their intention.
There is actually an important distinction to make here too.
When you have something like Secure Boot, whose purpose is to make the device trustworthy to enter your passphrase into, it's completely impossible. You don't know if the device you're using is actually the same device, you don't know if someone is watching you, the thing it claims to do is not a thing it can actually accomplish.
But Apple does something separate from that. They have tamper-resistant hardware for storing keys, so that the hardware can store a strong key and enforce a maximum number of guess attempts for a weaker password/PIN.
The disadvantage of this is that it's pure attack surface compared with using a strong passphrase to begin with. If you have a strong passphrase the attacker has to break the encryption. If you have a weak PIN for hardware protecting a stronger key the attacker can break the encryption or break/backdoor the hardware or guess the weak PIN before hitting the maximum number of attempts.
The advantage is of course that it lets you use a PIN instead of a long passphrase, but there is also something else. That hardware doesn't need root. All it needs is to store a key while the device is locked and then spit it back out if you give the right PIN and erase it if you make too many bad attempts. No part of that inherently requires it to be at ring -3. It can be completely independent from all of that.
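That storage-plus-counter design can be sketched in a few lines. This is a toy model only: the `MAX_ATTEMPTS` value and the SHA-256 PIN check are assumptions of mine, and real parts implement all of this in tamper-resistant hardware with rate limiting and anti-replay counters.

```python
import hashlib
import secrets


class KeyEscrow:
    """Toy model of a PIN-gated key store: holds a strong key,
    releases it on the right PIN, and erases it after too many
    bad attempts. No ring -3 privileges needed -- it's pure
    storage plus a counter."""

    MAX_ATTEMPTS = 10  # assumed limit

    def __init__(self, pin: str):
        self._key = secrets.token_bytes(32)  # the strong disk key
        self._pin_hash = hashlib.sha256(pin.encode()).digest()
        self._attempts = 0

    def unlock(self, pin: str):
        if self._key is None:
            raise RuntimeError("key erased after too many bad attempts")
        if hashlib.sha256(pin.encode()).digest() == self._pin_hash:
            self._attempts = 0
            return self._key
        self._attempts += 1
        if self._attempts >= self.MAX_ATTEMPTS:
            self._key = None  # irreversible erase
        return None
```

Note what the sketch does not need: access to memory, the boot path, or the network. That independence is exactly the point being made above.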
> And if the device is tamper-resistant?
That's the problem with Secure Boot -- it doesn't matter. Stealing your passphrase by recording it is an attack that can be pulled off by a middle schooler with a nanny cam. It's easier to do that than to backdoor the firmware on a non-tamper-resistant computer, which at least requires you to know what "firmware" is. So what attack are we actually preventing at the cost of having untrusted and potentially vulnerable code at ring -3?
What isn't possible is to create hardware that can protect you against an attacker with unrestricted physical access.
>including Secure Boot, which even now requires FOSS users to purchase a license from Microsoft to boot FOSS on affected machines that lack an appropriate Secure Boot override."
Can someone explain this to me? Would this for instance be Lenovo laptops making a deal with Microsoft, since Windows is the default OS installed on these laptops? Is Microsoft mandating that all OEMs/hardware vendors configure Secure Boot with an MS signing key? Even if I order a laptop with no OS installed?
The signature database (db) and forbidden signature database (dbx) contain, respectively, a whitelist and a blacklist of keys, signatures, and hashes that are allowed or forbidden to run.
Updates to either of the above lists must be signed by a Key Exchange Key (KEK). Most implementations allow multiple Key Exchange Keys.
Updates to the list of Key Exchange Keys must be signed by the Platform Key (PK). Most implementations only allow 1 PK, and that PK is Microsoft's.
This means that any binary run on a secure boot machine with Microsoft's PK has a chain of trust rooted at Microsoft.
It may be possible to update the PK before transitioning the system to secure mode; but most consumer devices ship already in secure mode. This is different from simply disabling secure boot, which would still not allow you to update PK (for obvious reasons).
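The hierarchy above can be sketched as a chain of signature checks. This is illustrative only: real UEFI uses X.509 certificates and authenticated variable updates, and the HMAC here merely stands in for "signed by".

```python
import hashlib
import hmac


def signed(key: bytes, payload: bytes) -> bytes:
    # Stand-in for a real digital signature.
    return hmac.new(key, payload, hashlib.sha256).digest()


pk = b"platform-key"       # on consumer boards, usually Microsoft's
kek = b"key-exchange-key"
db, dbx = set(), set()     # allow-list / deny-list of binary hashes


def enroll_kek(new_kek: bytes, sig: bytes) -> bool:
    # KEK updates must be signed by the PK.
    return hmac.compare_digest(sig, signed(pk, new_kek))


def update_db(binary_hash: bytes, sig: bytes) -> bool:
    # db/dbx updates must be signed by some KEK.
    if hmac.compare_digest(sig, signed(kek, binary_hash)):
        db.add(binary_hash)
        return True
    return False


def may_boot(binary: bytes) -> bool:
    h = hashlib.sha256(binary).digest()
    return h in db and h not in dbx
```

Since only the PK holder can bless new KEKs, and only KEKs can change db/dbx, every bootable binary's chain of trust terminates at whoever holds the PK -- which is exactly the "rooted at Microsoft" observation above.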
EDIT: It appears that it is called "user mode" and "setup mode" instead of secure mode.
Also it seems that some systems allow you to re-enter setup mode from the "bios" .
On an unrelated note, what do we call the firmware-provided settings app now that it is no longer part of the BIOS?
>" Most implementations only allow 1 PK, and that PK is Microsoft's."
Isn't this a bit monopolistic and coercive though? "If you want the Microsoft hologram on your product, the PK has to be Microsoft's, and there can only be one PK." I can't believe this doesn't violate some type of anti-trust law.
There is no open standard that defines what a PC is. Linux and other operating systems are piggybacking on the Windows PC standard. If they want OEMs to manufacture hardware to their standards, they'll have to create their own "Linux-compatible" specification and persuade OEMs to follow it.
I disagree. I think the OEMs want to make hardware that consumer buy. I don't think they care one bit what OS consumers run on top of their hardware. In fact I would imagine OEMS would prefer to bring their products to market without consulting Microsoft at all.
It's coercive in the sense that the secure execution is predicated on there being only one PK, and the OEMs have to knuckle under to MS just to be considered a "potential" machine that Microsoft allows to run Windows.
>"Linux and other operating systems are piggybacking on the Windows PC standard."
What exactly is the "Windows PC standard"? I have never heard this term before. Linux didn't piggyback on Windows anything. Maybe you mean x86? x86 predates Windows.
No, but the OEM vendors, not MS, should be the owners of the sole PK allowed in the engine. MS doesn't make the hardware, yet they are in charge of it. I think that's the issue; it has nothing to do with a grand conspiracy.
Basically yes; it's required to get the Windows sticker. I haven't heard that MS charges money to sign bootloaders, though.
If you look at the UEFI requirements for Windows 10, specifically clauses 19 and 20, it says for non-ARM systems the user MUST be able to put Secure Boot into Custom signature-checking mode.
The whole story around Secure Boot could be understood (even without a tinfoil hat) as a part of a slippery slope to lock out alternative OSes, highly recommended post: https://www.phoronix.com/forums/forum/phoronix/general-discu...
This isn't actually true, is it?
In fact, many distributions don't support Secure Boot at all.
This is only true on x86. On arm, Microsoft's requirements state that it should not be possible for users to disable Secure Boot.
However, it doesn't look like they're going to come remotely close to hitting their funding goal. Fabricating chips is expensive.
I wonder if a Bitcoin shared mining setup could co-opt some of those hashes to brute force the keys.
This is currently the top-voted question with almost four thousand upvotes. AMD gave a noncommittal "we'll look into it" response, but now at least they're aware that a lot of people actually do care about things like this.
That'll build confidence that the others aren't compromised.
The costs could be reasonable (http://electronics.stackexchange.com/questions/7042/how-much...) under a kickstarter-style campaign.
I read about some new hard/software for secure boot, etc., but don't recall all the details now.
So, for a shorter approach,
suppose I just buy a processor from AMD, a motherboard from ASUS, hard disk drives from Western Digital, etc., and plug it all together for myself. So, then I'm the manufacturer or OEM of my computer.
Q. 1. For what the OP is talking about,
where do I have threats to privacy, control of my machine and its data, and security?
Q. 2. To use the machine I plugged together, do I have to get some keys from Microsoft?
Q. 3. Suppose I install operating systems from Microsoft, e.g., Windows 7 64 bit Professional, Windows 10, Windows Server or the database SQL Server. Then do I have to get keys from Microsoft?
Q. 4. Will the support processor and its software, whatever they are called, on their own without my knowledge or approval use the Internet to send/receive data from/to my computer, modify the data on my hard disks, etc.?
2. Depends on the motherboard and the BIOS written in the flash chip
3. No. They are already signed.
4. If somebody controls them and asks them to do so. All that's necessary is a LAN connection (or WiFi, but only with Intel chips) and power. The HDD is completely irrelevant, as is the OS.
Just checking, the PDF does mention the Unified Extensible Firmware Interface (UEFI) but not ARC or PCH.
That ASUS manual does mention that the UEFI BIOS does offer automatic updating of the BIOS version; that feature, if enabled, does seem to raise security concerns.
Looking at the UEFI page of Wikipedia at
> UEFI can support remote diagnostics and repair of computers, even with no operating system installed.
which seems to raise some security concerns. Also it does appear that some people trying to install an operating system might encounter some mud wrestling. Maybe what I'm intending to do with Microsoft's Windows 7, 10, and Server will be easy enough.
"Last year, the Russian government announced that it doesn't want to rely on Intel and AMD chips from the U.S. anymore and will focus more on using homegrown chips from Russia."
You can make an argument that it is still an improvement, because there are no obvious binary blobs required by the system. But in this case, I would recommend going for one of the many Chinese ARM cores -- at least you can buy them easily.
The early models implemented a proprietary VLIW architecture with enterprise-y features (like hardware-tagged pointers, probably borrowed-ish from Itanium) with a dynamic binary translation layer for x86 compatibility on top, not sure if they still do that or perhaps the other way around now.
That is the end-result, yes, but that wasn't the purpose: the purpose was to allow companies to keep track of their laptops--to remotely push out firmware updates, to inventory the hardware/asset list, etc. It was a convenience feature, essentially.
Of course, the end-result, as stated, is that you've got a complete black-box second processor that can do whatever it wants, even when your device is off.
This strikes me as the root problem here. How can one company be granted a monopoly on what is basically an instruction set? Particularly in the case of the instruction set our civilization runs on?
Since I can't understand how something like this could happen I don't understand why any replacement architecture wouldn't end up being controlled by a single entity.
I dislike the mandatory use of these features as much as the next nerd, but this is inaccurate FUD. Secure Boot is code in flash that checks the signature of whatever you try to boot against some rather complicated policy. It's regular code and would work more or less the same on any platform that runs machine code off of ROM or flash.
There's something that Intel calls, IIRC, "Verified Boot" that tries to prevent someone with an in-system programmer or desoldering skills from changing the flash, but that has nothing to do with the Management Engine either.
And FOSS users don't need to purchase any license from anyone. They can use a tool like Linux Foundation's PreLoader or Red Hat's shim (open source but awkward to modify because you need the signed binary to boot on a stock system) to boot anything they like. No negotiations, no license, no communication with MS at all.
> I dislike the mandatory use of these features as much as the next nerd, but this is inaccurate FUD. Secure Boot is code in flash that checks the signature of whatever you try to boot against some rather complicated policy. It's regular code and would work more or less the same on any platform that runs machine code off of ROM or flash.
"Regular code" doesn't mean it's not proprietary, and doesn't mean that it's not concerning for free software users.
> And FOSS users don't need to purchase any license from anyone. They can use a tool like Linux Foundation's PreLoader or Red Hat's shim (open source but awkward to modify because you need the signed binary to boot on a stock system) to boot anything they like. No negotiations, no license, no communication with MS at all.
Those preloaders are signed by Microsoft. While it is a good hack for distributions at the moment, it doesn't mean that Microsoft is no longer in the loop. They still have an incredibly worrying amount of control over what can run on modern hardware.
Which has essentially nothing to do with the article and isn't even Intel's fault in any meaningful sense.
I have yet to hear any explanation of the IME that makes sense without the presence of user-hostile intent.
The entirety of enterprise laptop management. Not because you don't want users to change their laptop. The point is to be able to run updates for the users.
Or consider the remote KVM option. Disregarding security, that is a sysadmin's wet dream. Being able to recover a system that can't boot saves a lot of boots on the ground.
It does not explain the 30 minute timer.
An innocent use would be: "If the ME is hung, turn it off and on again."
Why is the ME watchdog mandatory?
What innocent explanation details why Intel has chosen to deny me the option to consider the ME a security risk in my environment and disable it?
I used it a few years ago to automatically build classroom computers for training classes. The trainer would pick a configuration, and a complete server and workstation environment would be installed.
You can also do KVM from a powered down state, brick the device, or validate that management engines are present.
Is this an accurate description of what is happening? (I don't pay much attention to desktop systems: I spend most of my time concentrating on the ever-worsening mobile arena.) Do these "major distributions" come with a recent version of bash? As someone who develops software under the GPLv3 license, I would not want my software being distributed to these machines via this hack :/.
> Is this an accurate description of what is happening? (I don't pay much attention to desktop systems: I spend most of my time concentrating on the ever-worsening mobile arena.) Do these "major distributions" come with a recent version of bash? As someone who develops software under the GPLv3 license, I would not want my software being distributed to these machines via this hack :/.
It's not entirely accurate. Effectively what most modern distributions do is that they have a "shim" which is signed by Microsoft. That shim then enrols the distribution's own UEFI keys on the laptop. So their kernel is signed with both their own key and Microsoft's key. This means that you can modify your code without "permission" from Microsoft. openSUSE, Fedora and Debian all employ this tactic so that our distributions can boot on newer laptops.
Do I wish this wasn't necessary and that everything ran core boot? Yes. Is there a better way of handling this problem? Not as far as I know.
I'm pretty sure that neither of those is the case, however. Once the system boots using the signed loader, it's really just Linux, and you're free to replace the kernel and any bit of userspace as usual.
Furthermore, I seriously doubt that anyone is selling machines with Linux preinstalled that have Secure Boot which cannot be turned off - simply because that would become known pretty fast, and even aside from licensing issues, would elicit a very hostile reaction from the community (and hence many potential buyers).
Shim contains keys from Canonical or someone (I'm not sure), and verifies that GRUB has been signed by Canonical before running it. Then when GRUB runs the kernel, it calls back into shim first to verify the kernel has also been signed (Actually that last step wasn't enabled yet last time I checked).
So basically, until Microsoft changes their keys, by signing shim they've given Canonical permission to sign things. But unless you disable Secure Boot, you can't run a custom kernel unless you convince Canonical or Microsoft to sign it.
Of course the government wants this capability to access anyone's system, so I assume nothing will be done. This has to be one of the worst things that has happened in the history of computing.
EDIT: Handy for CBP use, I imagine.
>"While this architecture is extremely limited in performance, price"
Can anyone say why the performance of RISC-V is so lacking?
This is not easy to acquire or build. Some is wisdom from decades of design and iteration. Some is hundreds of thousands of engineering hours. Some is big money.
RISC-V is a new open-source project. Who knows how close they will ever get.
Nothing says the ISA itself is a barrier to performance on par with popular existing processors though. The RISC-V BOOM implementation is supposed to be close to an ARM Cortex A9 in performance.
"Because no one has manufactured a high performance RISC-V implementation yet" isn't answer enough for you? All that exists for purchase at the moment are microcontrollers aimed at the ARM Cortex-M market niche.
There's nothing about the ISA that says you couldn't make a deeply pipelined, six-way issue implementation with three levels of cache running at 4 GHz. But that fact doesn't make such a machine appear from nothing either.
The whole toolchain they are working with looks quite awesome tbh.
Commercially, the Freedom U500 platform seems to be really interesting:
You can take the RISC-V ISA specification or ready-made Verilog/VHDL and produce a working ASIC chip relatively easily, but it won't be fast.
If you want a fast general-purpose chip that competes toe-to-toe with AMD and Intel, it will take a huge amount of chip and physical design, simulation, and verification. AMD has spent tens of millions to get a new x86-compatible architecture out. Doing the same for RISC-V without enough demand would be economic suicide.
I remember a while back when Google was shopping around for Intel replacements (likely a negotiation tactic), people were saying they should buy the POWER division from IBM (IIRC). That would have been really interesting...
Funny, I was speaking to some IBM engineers a few months ago and brought up the POWER chips, and they kind of laughed and said something to the effect that the biggest use case for POWER was Google using it as a means of keeping Intel pricing in check.
> Secure Boot, which even now requires FOSS users to purchase a license from Microsoft to boot FOSS on affected machines that lack an appropriate Secure Boot override.
I recently installed rEFInd from source using a self-signed certificate (signed the binary with it and enrolled the key into the EFI using mokutil) and it worked. I certainly didn't have to pay MS. I do know that rEFInd provides a key of their own (using the distro's shim) that obviously has trust rooted at MS.
Not that I wouldn't like a world with no more blobs (or at least reproducible-build signed blobs). But I use a ton of software I don't have time to review. Why is solving this more important than, say, looking for RPC holes in docker?