Intel and AMD will continue to not provide options without backdoors, intentional or otherwise.
Everyone will continue to buy their product because the alternatives do not have competitive performance.
How did this get missed in QC? While I'm working on my little game, the first thing I do for each command/function I add is see whether null or empty inputs allow things that should not happen. I write the code, see if it compiles, and if it does I immediately test the function in-game with any potentially invalid string/argument I can think to give it. Isn't input testing/validation step #1 in QC for code?
Probably a focus on testing only at the UI level. You would have to have a tool that allowed you to edit or generate your own Auth headers to find it. Or have unit testing that fuzzed, or at least supplied obvious bad stuff like nulls or empty inputs.
Not arguing it's good, but it's not uncommon. I've seen lots of places with webapps where the vast majority of tests are selenium driving form inputs, clicks, etc. That misses things like this, or client side cookie manipulation, etc.
What almost certainly happened here is that, as someone else in this thread mentions, someone came by to fix an automated warning about the use of strcmp(a, b) and replaced it with strncmp(a, b, strlen(a)) instead of strncmp(a, b, strlen(b)). Easily 90% of code reviewers wouldn't catch that mistake (especially if it came along with a truckload of other such fixes), and as it's a maintenance change no new tests would have been written for it.
The only way to catch this would have been to already have written and deployed a test that expressly tested an empty string password. That's surely a good test to have, but come on, be honest: you've probably written a bunch of "password" style checkers in your career. Did you deploy a test and integrate it into CI for every one of them?
So you have to tamper with the Authorization header value the browser sends... not just enter an empty password in the login form.
No, not a single one. I tend to work on projects with a bit more complexity than that (including designing the required hardware on top of it if something suitable doesn't already exist.)
I guess they would have firewalled access, but there's still a bug/hole.
Perhaps I misunderstand the question, but AMT is not dependent on the system's OS, by design. AMT runs on Intel Management Engine, a platform on the computer with its own CPU, memory, OS, bus, caches, etc. ... and applications, including AMT. AMT is designed to be used when there is no functioning OS.
Think of it this way: You paid for one computer, but you got two!
including the nmap/curl commands
623, 664, 16992-16995
$ curl -v http://server:16992
> GET / HTTP/1.1
> Host: server:16992
> User-Agent: curl/7.47.0
> Accept: */*
< HTTP/1.1 303 See Other
< Location: /logon.htm
< Content-Length: 0
< Server: Intel(R) Active Management Technology 8.1.10
This shit is never clear, but did I have to actively turn this thing on to be vulnerable to the remote variant of this attack?