1. If you can do the attack entirely over USB, then you can completely take over a machine via USB and possibly add a very powerful firmware rootkit. The JTAG password seems relevant here.
2. If you get root code execution, you may be able to install a better rootkit. This might be able to extract supposedly hardware-secured crypto keys.
3. The “deterministic RNG” and/or inspection of RNG signals mode might be an interesting attack against SGX — SGX enclaves expect RDRAND to be secure. (If you can take over the ME, I think you can break some of SGX’s platform features, but I don’t think you break the core security guarantees.)
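Item 3 can be illustrated with a toy model (all names and details here are hypothetical, not Intel's actual design): software that trusts the hardware RNG directly has no way to notice when a debug mode makes its output deterministic.

```python
import hashlib
import os

class HardwareRNG:
    """Toy model of an on-die RNG with a hypothetical debug mode:
    in deterministic mode it replays a fixed, predictable sequence
    instead of using real entropy."""

    def __init__(self, deterministic=False):
        self.deterministic = deterministic
        self.counter = 0

    def rdrand(self) -> bytes:
        if self.deterministic:
            # Debug mode: fully predictable to anyone who knows the seed.
            self.counter += 1
            return hashlib.sha256(
                b"debug-seed" + self.counter.to_bytes(8, "big")).digest()
        return os.urandom(32)

# An enclave that trusts the RNG generates its "secret" key:
enclave_rng = HardwareRNG(deterministic=True)
secret_key = enclave_rng.rdrand()

# An attacker who flipped the debug mode replays the same sequence:
attacker_rng = HardwareRNG(deterministic=True)
recovered_key = attacker_rng.rdrand()

print(secret_key == recovered_key)  # True: the "random" key is recoverable
```

The point is that the attack leaves no architectural trace: the RNG still returns well-distributed-looking bytes, so code consuming them cannot detect the mode switch.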
I think the JTAG password is the nastiest bit here.
If the leak is bad enough, the license holder might urge distribution platforms to block certain models of Intel CPU, the ones against which these exploits are most successful, from playing certain content.
I doubt this will happen any time soon, but there might be indirect consequences of such seemingly inconsequential data leaks.
When compressing via x264 with good settings, I cannot tell the difference. Usually you can tell from blocking artifacts in areas of smooth color, like fog or out-of-focus backgrounds. Try pausing during those scenes.
Discussion from Sept 14, 2010: https://news.ycombinator.com/item?id=1689669
I assume not all chips have the same JTAG password. I assume JTAG can be accessed via software running on the processor itself, without requiring explicit physical access. I assume there is a hack that allows someone to obtain their processor's JTAG password; how that works is what I'm most interested in, I suppose.
The password is unique per CPU.
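The slides don't say how that uniqueness is achieved. One common scheme vendors use for per-unit debug unlock (purely a hypothetical sketch, not a claim about Intel's actual design) is to derive the password from a device identifier under a master secret:

```python
import hmac
import hashlib

# Hypothetical master secret held only by the vendor; NOT a real value.
MASTER_SECRET = b"vendor-master-secret"

def jtag_password(chip_id: bytes) -> bytes:
    """Derive a per-unit unlock password from a device identifier.

    With a scheme like this, leaking one chip's password reveals
    nothing about any other chip's password."""
    return hmac.new(MASTER_SECRET, chip_id, hashlib.sha256).digest()

pw_a = jtag_password(b"serial-0001")
pw_b = jtag_password(b"serial-0002")
print(pw_a != pw_b)                           # True: unique per chip
print(pw_a == jtag_password(b"serial-0001"))  # True: stable for one chip
```

Under such a scheme there is no per-chip "hack": you either compromise the master secret or brute-force each unit independently.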
There is no hack presented in the slides to get the password, and the researchers say directly that they couldn't get the password for their own system. Their evidence seems mostly based on this PDF:
There is nothing in the slides about accessing JTAG from the processor itself, and in the PDF above you can see the JTAG equipment on page 12.
That's about it. I'm no expert at all, just relaying what's in the slides.
It can be made like this:
Or you can buy one from Intel.
This VISA exploit relies in some way on SA-00086, which is a flaw in the ME that cannot be removed in later versions.
The me_cleaner script has been able to purge the ME on these older chipsets since October of last year. I cleaned two different HP desktops at that time, and uploaded the modified BIOS here:
These still suffer from Meltdown and the remediation does exact a performance penalty, but Core 2 is starting to appear as the most manageable risk of all their products.
SA-00086 as an exploit is available through the ME.
VISA is available for access by the (compromised) ME.
This seems threatening enough to me.
If you run the Linux ME reporting tool and the ME is detected, then the ME is awaiting provisioning and commands.
Every chipset has a CPU that runs the ME, separate from the main CPU.
> Intel doesn't publicly disclose the existence of Intel VISA and is extremely secretive about it
So all of my data and everything I do on my computer is flowing through this every day, and nobody knows what it is or what it does....
It's a MIPS microprocessor inside your CPU, completely undocumented, and it has access to the memory, peripherals, network interface, etc.
They were documented in the sense Intel publicly advertised them for years under AMT and vPro as enterprise features. That's why all the discussions on HN about whether Intel had backdoors or weakened randomness were funny. While people were "countering misinformation" here, Intel was publicly advertising backdoors in their chips to ease the management burden. I mean, I guess you could call them front doors with the publicity.
The sneaky part was how they started including them in all chips without a way to (a) buy chips without them or (b) know for sure you could turn them off. I immediately suspected NSA paying them off given most of this started in Trusted Computing Group activities which included classified sessions with NSA. They were always a stakeholder in that stuff. AMD did it, too.
Our only hope for x86 now is the Chinese company that's sharing AMD's chips. They might make a chip with no U.S. backdoors: only Chinese backdoors. If you're worried about local government but not I.P. theft, then the Chinese backdoors won't be any threat to you. Problem solved if the computers get here with no interdiction. Gotta do shell games.
Are you sure you didn't confuse that with the processor serial number (that Intel actually reversed their decision on)? https://news.ycombinator.com/item?id=10106870
TPM was (unfortunately?) far more positively received, likely because it was marketed as a security feature instead of a DRM feature --- and the same goes for a lot of other anti-user features today... the manufacturers have gotten smart about it.
This was unfortunate as it largely evaporated the middle ground who recognized that without some trusted base you also can’t recover from malware or have robust anti-theft measures. I wish the politics had been such that we ended up with a robust open-source implementation before so much shoddy, unreviewed code had shipped so widely.
Not necessarily. It's possible your local government has infiltrated the Chinese agencies that have access to your data. Maybe not likely, but possible. It's also possible the Chinese might choose to sell that data to your government.
Later versions of the Intel ME switched to an internally-designed 80486.
That's worse than nobody.
Flaws have been found in it. They are probably not as bad as Intel.
Leaving the VISA environment accessible to the ME is something that I hope AMD would never entertain, out of an abundance of caution for their customers.
Not sure there's any evidence this is likely to be true - except for the fact Intel processors are widely deployed.
We would hope that their architectural restraint would extend to their SP.
It's slightly ambiguous whether you mean 'the flaws in AMD IME-near-equivalent are not as bad as the flaws in the IME' or 'AMD's IME-near-equivalent is not as bad as the IME'.
On the 2nd interpretation, it seems that AMD's 'ME' is likely not as powerful/mature as Intel's ME. On the other hand, on certain machines the IME can be disabled or largely neutralised, while I know of no such possibility for AMD.
The maturity of both platforms is opaque and varying; the only reasonable feature most consumers would want is an off switch.
Am I the only one here who thinks "threat model" and isn't all too worried? Physical access = full control, I've long held that belief and am not happy that this freedom is continually being eroded in the name of ever-increasing ridiculous "security" (a lot of it against the user, for things like DRM, as mentioned in one of the other comments here.)
Debugging/"test mode" features are basically present in all modern CPUs. I would not bet that Intel is the only one.
Edit: such features have been present in CPUs dating back to the late 70s, so perhaps "modern" isn't needed: see http://forum.6502.org/viewtopic.php?f=8&t=3366 and http://e4aws.silverdr.com/hacks/6500_1/
> The researchers however disagreed with Intel's comments and reportedly said in an online discussion that the patched Intel firmware can be downgraded using Intel ME, making the chipset vulnerable and opening the door for accessing Intel VISA.
I agree that "physical access" is a threat that I'm not too concerned about¹. But the researchers don't seem to agree that that's the extent of the issue.
¹If physical access means "disassemble the case and do X". If physical access is "two seconds with an exposed USB port", then that's a different matter.
Made me wonder if an attack scenario would be feasible where compromised phones could be used for targeted attacks utilizing one of the USB vulnerabilities; essentially just lying dormant until the user plugs into something interesting to charge.
Assuming that physical-access vectors are often deemed low priority and less likely to be patched, and that testing whether an employee cares more about the BYOD security policy or their dying phone will not favor infosec, it seemed like an interesting way to get access to point-of-sale systems that I had not heard anyone talk about before.
Some other privilege-escalation vulnerability would have to be exploited to go beyond the usual app-malware capabilities, but I don't know enough about USB or mobile OSes to know if it would be possible even with root (maybe that's why no one talks about it).
No, the main thing is if VISA is for qualifying chips on the production line, why don’t they burn a fuse permanently disabling it on chips that pass QA?
That said, it's basically a giant signal-mux system run into a main block; you'd be pretty limited in what preselected signals are available at a time. People were also reasonably careful not to expose entire buses of signals at once.
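A rough sketch of that mux structure (the group names and signals here are made up for illustration; the real signal lists are what's undocumented):

```python
# Toy model: a wide mux routes one preselected group of internal
# signals at a time onto a narrow observation port.
SIGNAL_GROUPS = {
    "mem_bus":   ["addr", "data", "we"],
    "pcie_lane": ["tx", "rx", "status"],
}

class SignalMux:
    def __init__(self, groups):
        self.groups = groups
        self.selected = None

    def select(self, name):
        # Only one group reaches the observation port at a time,
        # which is why observation is limited to preselected signals
        # rather than arbitrary internal state.
        self.selected = name

    def observe(self):
        if self.selected is None:
            return []
        return self.groups[self.selected]

mux = SignalMux(SIGNAL_GROUPS)
mux.select("mem_bus")
print(mux.observe())  # ['addr', 'data', 'we']
```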
Speaking of documentation, while not fully open, Intel/AMD are already ahead of many others when it comes to documenting things. Look at the SoCs used in smartphones and tablets, or the infamous Raspberry Pi. AFAIK, Intel is not in the business of making secure cryptoprocessors like the ones used in payment cards and SIMs, but once again even with those you are very unlikely to find public documentation --- everything is NDA'd.
That said, don't get me wrong, I'm not advocating for leaving things undocumented; just explaining how it usually turns out that way.
They have entered a rather comparable space with stuff like TPM and SGX (and are largely using it for nefarious, consumer-hostile stuff like DRM and remote code attestation, of course).
Unless there are countermeasures that we don’t know about, this is a huge backdoor for such systems.
With DRM, the attackers are consumers. With enclaves, the attackers are Intel and government intelligence agencies.
You will always want layers of security, and even then security is an illusion: there will always be a way in, just with different costs to exploit, and thus a different risk/reward frontier.
Unless that service has monopoly power. SGX makes it far too easy for an entity to abuse monopoly power, by directly turning it into power over what their customers have to run. This is why this sort of thing should only ever be done via physical devices - "monopoly" over a single, dedicated device is far more obvious and less insidious.
People care because that means that when you run something in the "Cloud(tm)" you have to trust the cloud provider.
The goal of things like "SGX" was so that you could run things on machines under someone else's control and still trust the results.
Oblivious RAM, FHE, ZKSNARKs are to be considered when you don’t trust the hardware, the users, the cloud, administrators, etc.
Some combo of TEEs, formal verification, binary verification, FHE, etc. might keep your data secure for a little while...maybe.
An internal-only name leaked..."Visualization of Internal Signals Architecture"
Back in the 80s, such policies were dismissed as "security through obscurity". So much change, so little progress ...
May the market penalize them in kind.
Intel made innovative, security-enhanced CPUs such as the iAPX 432, i960, and Itanium. They lost billions as the market punished them for breaking backward compatibility, among a few other things: mainly backward compatibility with the insecure architecture the market depended on. The market also wanted chips that maximize speed per dollar, which always trades against security. The market also rarely picks secure products over insecure products whenever offered a choice.
So our current situation is the market's responsibility, not Intel's. The market punished Intel for trying to make better products. Say what you will about the two, the i960MC was a pretty nice design even if BiiN wasn't. A RISC chip with object-based protection and high availability should have taken off if the market really cared about reliability and security. Instead, the market rewarded whatever met its price/performance (x86/POWER) or price/performance/watt (ARM or MIPS) targets, punishing all else. It also built on those chips non-portably in a way that led to lock-in. Now it's reaping what it sowed, with everything done up to this point making any investment in secure chips or OSes a risk not worth taking for most suppliers.
Good news is that all the recent bad news is encouraging some companies to try secure offerings again. Might get more buyers this time. Still high risk, though.
For consumer hardware most buyers are completely oblivious to the situation but they are most likely familiar with the Intel brand. For enterprise the inertia is a lot higher and they are probably waiting to see that AMD manages to deliver for an extended period (multiple generations) before considering wider scale adoption.
And in fact the Ryzen 2200G/2400G are so cheap that the price-competitive offers are i3-based (or i5, but with less RAM, and 4 GB is just not enough in 2019, when Windows wants 2, Chrome wants 2, Office wants 2, ...). Ryzen 2xxx vs. i3 is simply not a fair fight; it's not even a fight at all.
I guess I'm usually so focused on high-end / server needs that it didn't fully hit me how Intel is getting eaten alive even in the low and middle market right now. And medium-sized businesses replacing office computers really don't care about the brand of their CPU; "cheaper, works better" is going to win every time.
I'm not sure there's enough of that going around now for AMD to gain any substantial market share fast. If Intel's response is anything short of Core 2 vs. Athlon, then it might go AMD's way.
Also Ryzen performance really falls off in 4C configurations due to lack of cache (its memory performance is not great and once you fall below a critical level of cache you start leaning harder on the memory for performance). The 2400G and 2200G models (which are the only models with graphics) are particularly heavily affected but the 1500X is not a stellar performer either compared to the 6C and 8C configurations. Normally they fall somewhere around Haswell-level IPC, but as a ballpark let's say the 2200G/2400G is more like Sandy Bridge.
Finally, a lot of the 1000-series chips are subject to a manufacturing fault that causes random segmentation faults under heavy load. Compilation is a reliable way to trigger it, but it can occur in anything, it's basically a uop cache bug. Supposedly fixed after mid-2017, and it's RMA'able if you have it, but a few people still find it in their later-production processors.
As a matter of philosophy, a lot of "office stuff" really prefers fast cores to many cores. An i7 4770 or even one of the new quad-core i3s (no HT) is better suited to typical office tasks on paper, and includes onboard graphics.
With that said, "office stuff" doesn't really need high performance to begin with. Ryzen performance is fine, even the weaker -G processors. But in turn, a lot of users would be OK with even less performance - Intel's Goldmont Plus atom processors are now hitting between Core2 and Nehalem level performance, and that's more than fine for a lot of office stuff. You can pick up a barebones J4005 NUC for like $130 now, or a J5005 NUC for ~$170. Add memory and an SSD and you're done.
The prices on the first-gen stuff are really appealing. I've seen the 1700 for as low as $130. And a home user is probably going to be using a discrete GPU anyway, so that cost penalty doesn't really matter. But, that's not a deal that would be institutionally purchased in volume, that's max quantity 5 from newegg, and companies usually don't buy discrete GPUs if they don't have to.
Are there any open, vetted computer system out there?
How do projects such as the Raspberry Pi fare in terms of security?
The CPU on that isn't the first thing that boots. There's a binary blob that boots the GPU first, and gets it to boot the CPU.
There is some progress on being able to boot without the mystery code, but it's not complete: https://hackaday.com/2017/01/14/blob-less-raspberry-pi-linux...
Broadcom offers very little documentation. I don't think an RPi would be a good security choice.
A Beaglebone Black has a better documented CPU and boot process, and can run stock, non custom Linux. Not specifically "secure", per se, but is at least easier to research. Can't vouch for it, but here's an attempt to make the BBB a secure environment: https://cryptotronix.com/products/cryptocape/
Define your limits.
For the everyday, you won't get far. Intel, AMD and others have all been shown to have problems like this, or at least things they specify for government agencies.
However, tiny computer architectures like MIPS, AVR and the like probably don't, simply because they tend to be too small: they don't have the memory for advanced backdoor techniques. But their memory is trivial to access if you have physical access.
Straight-up RISC-V is too new for its security to be truly trusted, but it looks fairly great... until you realise that almost everyone is going to add their own proprietary extensions to it, and those extensions may well include things like VISA.
The Raspberry Pi uses a Broadcom ARM chip. (Different Pi models ship different variants of the SoC, and the two main chips are vastly different.)
I don't have enough details about the particular chip to tell, but ARM does have its own remote management system. It may or may not be part of what Broadcom offers, and may or may not have undisclosed abilities on offer to clandestine agencies.
Bad analogy, because RISC-V is an instruction set, not a physical microarchitecture. Backdoors and side-channel attacks could still be possible when implementing RISC-V as a uarch.
The same applies to any ISA, be it MIPS, x86-64, POWER, Arm, SPARC, PA-RISC, Alpha, etc. They're just different programming languages implemented in hardware. And like software, hardware can have bugs too, though patching is much harder or impossible.
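A minimal sketch of that point: two implementations of the same toy ISA produce identical architectural results, while one of them keeps hidden internal state the ISA never mentions (the analogue of undocumented debug machinery).

```python
# Toy 3-register ISA: ("li", rd, imm) loads an immediate,
# ("add", rd, rs1, rs2) adds two registers.
PROGRAM = [("li", 0, 5), ("li", 1, 7), ("add", 2, 0, 1)]

def simple_impl(prog):
    """Straightforward implementation: does exactly what the ISA says."""
    regs = [0] * 4
    for op, *args in prog:
        if op == "li":
            regs[args[0]] = args[1]
        elif op == "add":
            regs[args[0]] = regs[args[1]] + regs[args[2]]
    return regs

def traced_impl(prog):
    """Same ISA contract, but this implementation also records hidden
    internal state invisible at the ISA level."""
    regs, hidden_trace = [0] * 4, []
    for op, *args in prog:
        hidden_trace.append((op, args))  # observability the ISA never mentions
        if op == "li":
            regs[args[0]] = args[1]
        elif op == "add":
            regs[args[0]] = regs[args[1]] + regs[args[2]]
    return regs  # architecturally identical result

print(simple_impl(PROGRAM) == traced_impl(PROGRAM))  # True
```

Nothing in the ISA's definition forbids or even describes the trace in the second implementation; that's exactly why conforming silicon can still carry features like VISA.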
What do you think are the reasons that a "safe-by-default" CPU design has not yet seen the light of day?
I could go back to really old systems, but relying on 'security through obscurity' does not really make sense, no?
Tradeoffs, and motivations, really.
This is going into the land of speculation and opinion, but I think that we did see some more secure CPUs in the past. I also expect that the military and some industries may have access to more secure versions of CPUs currently on the market.
Being more performant became a priority, which led to unsafe things that made Spectre and others possible.
And on the other hand, the more diverse and prolific technology becomes, the more interested governments are in accessing it surreptitiously. Many modern governments seem obsessed with collecting as much data as they can, so much that they can't even read it all.
So on one hand you have pressure from consumers to improve speed at any cost, and on the other you have state actors pressuring to keep the status quo of semi-leaky hardware.
I have no idea what kind of error-checking procedures the CPU industry uses, but it is hard to believe that Intel et al. do not use highly integrated proof tools that cover multiple design layers. I know that these are very complex systems, that you can't cover certain cases with proof tools, and that cost might be a factor at play. But I believe there is interest in private and public markets to get their hands on a stack that starts with a higher security threshold than what we have now.
There's certainly some interest, in some parts of industry.
NASA has an FPGA board made to be more rugged, for CubeSats, with multiple layers for preventing program corruption. I'd be shocked if it had a backdoor, and I'd expect parts of the system to make it more resistant to tampering.
Unfortunately, I also expect that if any chip becomes popular among consumers, state actors will throw their weight around and get their weaknesses in.
1. Take an Intel or AMD system and try to secure-ify it. This is what vendors like System76 and Purism do, especially aided by the coreboot project. I'm not an expert on the details, but an important goal in this space has been disabling the Intel ME, which I note in the original parent article seems to be part of this exploit (at least when used remotely). AMD has a corresponding parasite CPU-in-CPU; I'm not sure about progress on disabling it.
2. Use another CPU. Someone else mentioned the Raptor project; that's all I know of (but again, I'm not an expert). By far the most common consumer platform competing with Intel and AMD's x86 processors is ARM, used by phones, Raspberry Pis, and many embedded devices.
Note the CPU is only one kind of closed firmware that can have backdoors or insecurities....
The PlayStation 4 has an x86 chip, but it is not a PC. It cannot boot a mainline kernel because a lot of basic assumptions about how a PC works don't hold on the PS4 platform. The fail0verflow people have done some great talks on all the issues in porting Linux to the PS4.
The trouble with ARM is that it's not a standardized platform. Linux got so popular among developers and nerds in the 90s because you could just install it on any PC out there, and when people got hardware working, they could upstream their changes. Phone manufacturers patch the hell out of their kernels in terrible ways that can never be upstreamed, and include tons of binary blobs and shims. postmarketOS is making some progress here, but it's slow.
ARM phones don't have a standard BIOS. (with the exception of the Windows Mobile line, which has ARM+UEFI, yet Microsoft still won't release a bootloader unlock even though their phone is pretty much dead at this point). Some ARM devices use the devicetree standard for hardware identification/allocation, but it's still a mess.
I wrote about this before:
Indeed, the control and understanding of the CPU (and auxiliary chips) is key.
A naive question: what are the resources needed to get your own CPU architecture going, not necessarily state of the art, but older technology?
The fabrication technology is there, after all. And many a patent should have expired by now. Also, there must be some academic projects out there which build such complex systems.
They have a powerpc based desktop system where every component has its firmware open and available.
Used OpenCompute enterprise server from eBay, with LinuxBoot as BIOS.
Future: OpenCompute platform based on OpenPower CPU, Open System Firmware and an open-hardware implementation of Microsoft Cerberus / Google OpenTitan.
How far along are these Open Architecture Systems?
An alternative to me_cleaner on some systems (like the ThinkPad X200) is to replace the BIOS entirely with Coreboot. More recently, it was found that me_cleaner appears to let you remove or disable all or part of the Intel ME on those and some other architectures while still booting.
 My own notes from when I played with this, which was a mixed success (it was way too much headache to have a huge number of people each do): https://www.neilvandyke.org/coreboot/
Raptor Talos II comes to mind. Quite powerful, and fully auditable firmware.
Uses an IBM POWER9 cpu, no x86
You will pay dearly in cost, performance and pretty much every other metric that you care about though.
Am I the only one reading this and thinking - tho' not surprised - "there's a backdoor baked into Intel chips"?
I can post YouTube videos and cat pictures, but the link to this page gets refused. Here is a video of what happens: https://media.giphy.com/media/kiB8T8qeTHCVdUM2oZ/giphy.gif
Are we in China yet?
On the other side: stepping away from Twitter will isolate those who call attention to these things and just helps them further block the population from knowing what is happening.
Personally I'm curious what doors this will open for understanding microcode.
Can anyone really say, given the insane pit of complexity that x86 has become?
And stuff like Spectre and Meltdown is all about how they chose to implement it, specifically the optimizations. It is very possible to have an x86 CPU that doesn't try to be "smart" at all and just implements the x86 spec in the most straightforward way possible... so long as you're willing to pay the performance penalty. It looks like we aren't.
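A toy model of why those optimizations leak (purely illustrative, not real microarchitecture): a mispredicted load is rolled back architecturally, but the cache state it changed is not.

```python
# Simulated cache: the set of "warm" addresses.
cache = set()

def load(addr):
    # Microarchitectural side effect: the cache fill survives even
    # if the instruction's architectural result is later discarded.
    cache.add(addr)

def speculative_gadget(secret):
    # The branch mispredicts; the CPU speculatively executes a load
    # whose address depends on the secret, then rolls back the
    # architectural state, but not the cache.
    load(0x1000 + secret)

def probe():
    # A timing probe over candidate lines reveals which one is warm,
    # and therefore the secret the "discarded" load depended on.
    return [a - 0x1000 for a in cache if 0x1000 <= a < 0x1100]

speculative_gadget(42)
print(probe())  # [42]
```

A strictly in-order, non-speculative implementation would never execute the load at all, which is the performance/security trade-off described above.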
It appears Intel is trying to spin this as unexploitable (shocker!), citing a patch from last year. The researchers say there are multiple ways to turn this debug mode on besides those patched.
Also, the title needs to be changed to remove the bare word VISA; it could be read as a visa (travel document) or Visa (credit cards).
Zdnet titles it well “Researchers discover and abuse new undocumented feature in Intel chipsets”
Excerpt: "The complexity of x86-based systems has become so great that not even specialists can know everything."
We need a new CPU manufacturer. One that is built from the ground up on transparency, auditability, accountability, and Einstein's "as simple as possible but no simpler" maxim.
Such an entity cannot, CANNOT be a corporation.
The reason such an entity cannot be a corporation is that a corporation is bound by the legal structure of the country in which it operates, and that legal structure may create, one way or another, a door for secret agreements or secret hardware modifications between the company and the host government.
In other words, transparent, ethical engineers with NO conflicts of interest -- are no longer running the show... Corporate lawyers and Government lawyers are.
That has to change in the future...
How would that work? You want a government designed secure processor... why wouldn’t you figure that the reasons they are insecure now is because of gov involvement?
If you want an open-source processor, that's cool, and corporations can and already do that.
Now... with any solution proposed, verify to me that the silicon as designed is really the silicon made, esp when talking about 7nm.
I don't care who produces the processor, but whoever does must be open, transparent, accountable, auditable, etc. as enumerated above.
Whichever group does this must not be guided by lawyers who enter into secret agreements. That is, ALL legal aspects of this group must be open to the public as well.
You raise an excellent point, which is "how is silicon verified as being the same silicon that was designed?".
Usually silicon is verified through tests at various levels of abstraction (electrical, signal, logical, single instruction, programs, etc.) but none of these directly verify that the silicon produced is the silicon designed.
So, you are correct, that's a very real problem that needs a very real solution in the future...
I don't know the practical reality of any company permitting that, but it's a good idea. Some future chip manufacturing group will hopefully do that.
Audits of any form at any time by any party should be permitted, rather than denied.