I didn't know about it specifically, but I hit enough glitches on that terrible app when I was using it that it was obvious information was being sent that wasn't meant to be, along with performance issues atrocious enough that the whole thing felt crudely thrown together.
Pretty sure I flagged something or other as a security issue, but I can't recall what it was.
The UEFI boot services that the shell relies on aren't available after the bootloader or OS calls ExitBootServices() (the code is literally dropped out of RAM and those regions handed back to the OS), so this is not an easy thing to implement.
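To make that concrete, this is roughly what the handoff looks like in an EDK2-style bootloader (a minimal sketch: ExitToOs is a made-up name, and a real loader retries the whole sequence if the memory map changes between the GetMemoryMap() and ExitBootServices() calls):

    /* Sketch of the boot services handoff, EDK2-style. gBS is the
     * standard EDK2 boot services table pointer. */
    #include <Uefi.h>
    #include <Library/UefiBootServicesTableLib.h>

    EFI_STATUS ExitToOs(EFI_HANDLE ImageHandle)
    {
        UINTN MapSize = 0, MapKey, DescSize;
        UINT32 DescVersion;
        EFI_MEMORY_DESCRIPTOR *Map = NULL;
        EFI_STATUS Status;

        /* First call fails with EFI_BUFFER_TOO_SMALL and tells us
         * how big the memory map is. */
        gBS->GetMemoryMap(&MapSize, Map, &MapKey, &DescSize, &DescVersion);
        MapSize += 2 * DescSize; /* slack: AllocatePool changes the map */
        gBS->AllocatePool(EfiLoaderData, MapSize, (VOID **)&Map);
        Status = gBS->GetMemoryMap(&MapSize, Map, &MapKey, &DescSize, &DescVersion);
        if (EFI_ERROR(Status))
            return Status;

        /* MapKey proves the map hasn't changed since we fetched it. */
        Status = gBS->ExitBootServices(ImageHandle, MapKey);

        /* Past this point (on success) every gBS-> service is dead:
         * the boot services code and data regions now belong to the
         * OS, and anything that depended on them - like the shell -
         * can no longer run. Only runtime services (gRT->) survive. */
        return Status;
    }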
"Those who would give up essential liberty to purchase a little temporary safety, deserve neither liberty nor safety"
Is the ability to run an insecure bootloader on a system that has an installed OS with a security policy built around it not running insecure bootloaders an essential liberty? Let's say it is, for the sake of argument. Have you given up that freedom? Given that you can disable secure boot, or boot a live image and remove the SBAT entry, or boot an updated image and recover your existing install, I think it's hard to say that you've actually given it up. Is that security temporary? A well-maintained secure boot chain provides you long-term security against a variety of threats, so I don't think it's clearly temporary.
It's fine to disagree, but please don't do so by pretending that a misquote is meaningful.
As I understand it, the parent's question is not about Microsoft updating GRUB; it's about a person hit by the bug, and thus in a situation where Windows boots but Linux doesn't, using Windows to copy the correct file (probably extracted from an updated package for the Linux distribution they're using) to the correct place on the EFI partition by hand.
(The first obstacle would be that, AFAIK, the EFI system partition isn't mounted by default on Windows, but it's not hard to tell Windows to mount it and give it a drive letter; running mountvol S: /S from an elevated command prompt does exactly that.)
My guess is that it's not used by the Arc dGPUs, which have their own equivalent for it? But I guess it makes sense to use it for iGPUs.
With that said, it seems sketchy to send untrusted data to the ME, which is essentially an independent computer running an independent OS with the ability to keep persistent state. Seems like a security failure waiting to happen.
The exploitability being unclear doesn't justify the fact that the reference implementation had known weaknesses that were documented only in the issue tracker and in a README buried several directories deep and referenced from nowhere else! It also doesn't justify failing to proactively inform any of the clients making use of olm that these issues had been identified; now anyone who does feel this is inside their code's threat model has to rush to port to the new implementation after public disclosure.
Edit: but also, if you're going to argue that it's unclear these issues are remotely exploitable, it'd be helpful to discuss why. Are there external constraints that rate-limit them, such that the number of packets required is infeasible in any length of time? Is all the affected key material ephemeral and rotated before it's realistic to get a large enough sample? Just vibes?
Adding to Ryan's concerns, submitting to a bug bounty program often means accepting terms and conditions that constrain your ability to publicly disclose the issue if the vendor decides to be a dick about it. Depending on how much your career stands to benefit from the freedom to discuss an issue, the long-term financial upside of not going down the bug bounty path is potentially greater.
Not 100% true - a microcode-based CPU without microcode isn't able to execute anything, so CPUs will ship with an early version of the microcode that's then (as you say) updated during boot.
They could potentially do that to motherboards, but they could do that anyway (physical access would give you as much access to flash as this vulnerability does). But yes, CPUs should be fine in that respect.
The microcode store in the CPU itself shouldn't be writable (if it were, we wouldn't need to load microcode updates from the firmware or the OS on every boot), but there needs to be some microcode on there for the CPU to be able to execute the firmware code that updates the microcode. And yes, microcode is signed (and typically also encrypted). SMM shouldn't have any special level of access to the microcode; any persistence here is likely via the system firmware (which should, as a result, be caught by Platform Secure Boot on platforms where that's enabled).
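For anyone curious what "loading microcode from the OS" actually involves on Intel, it's a surprisingly small mechanism. This is a simplified ring-0 sketch following the Intel SDM's microcode update facility; apply_microcode and the MSR helpers are names I've made up, and the real Linux loader adds header validation, per-core application, and careful synchronization:

    /* Simplified Intel microcode load, per the SDM's "Microcode
     * Update Facilities" chapter. Must run in ring 0. */
    #include <stdint.h>

    #define MSR_IA32_BIOS_UPDT_TRIG 0x79
    #define MSR_IA32_BIOS_SIGN_ID   0x8b

    static inline void wrmsr(uint32_t msr, uint64_t val)
    {
        asm volatile("wrmsr" :: "c"(msr),
                     "a"((uint32_t)val), "d"((uint32_t)(val >> 32)));
    }

    static inline uint64_t rdmsr(uint32_t msr)
    {
        uint32_t lo, hi;
        asm volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
        return ((uint64_t)hi << 32) | lo;
    }

    /* 'update' points at a signed microcode blob; the payload starts
     * after the 48-byte header. The CPU verifies the signature (and
     * decrypts the payload) internally - a bad blob is simply
     * ignored. The loaded update lives in volatile storage and is
     * lost at reset, hence the reload on every boot. */
    static uint32_t apply_microcode(const uint8_t *update)
    {
        wrmsr(MSR_IA32_BIOS_UPDT_TRIG, (uint64_t)(uintptr_t)(update + 48));

        /* Read back the now-running revision: zero IA32_BIOS_SIGN_ID,
         * execute CPUID(1), then the high half holds the revision. */
        wrmsr(MSR_IA32_BIOS_SIGN_ID, 0);
        uint32_t eax = 1, ebx, ecx = 0, edx;
        asm volatile("cpuid" : "+a"(eax), "=b"(ebx), "+c"(ecx), "=d"(edx));
        return (uint32_t)(rdmsr(MSR_IA32_BIOS_SIGN_ID) >> 32);
    }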