Intel VISA Exploit Gives Access to Computer’s Entire Data, Researchers Show (ndtv.com)
431 points by DennisP 23 days ago | 163 comments



This doesn’t discuss when the attack matters. I see at least three ways:

1. If you can do the attack entirely over USB, then you can completely take over a machine via USB and possibly add a very powerful firmware rootkit. The JTAG password seems relevant here.

2. If you get root code execution, you may be able to install a better rootkit. This might be able to extract supposedly hardware-secured crypto keys.

3. The “deterministic RNG” and/or inspection of RNG signals mode might be an interesting attack against SGX — SGX enclaves expect RDRAND to be secure. (If you can take over the ME, I think you can break some of SGX’s platform features, but I don’t think you break the core security guarantees.)

I think the JTAG password is the nastiest bit here.
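On point 3, the reason a deterministic or observable RNG is so dangerous: nonce-based signature schemes (Schnorr, (EC)DSA) leak the private key outright if the per-signature randomness repeats. A minimal sketch with toy numbers — nothing here is real SGX or RDRAND code, and the challenge values merely stand in for message hashes:

```python
# Toy illustration of the point-3 worry: Schnorr/(EC)DSA-style signatures
# leak the private key if the "random" per-signature nonce k repeats.
# Tiny illustrative parameters only -- not real SGX, RDRAND, or crypto code.
q = 233                          # prime group order (toy-sized)

def sign(x, e, k):
    # Schnorr-style response s = k + e*x (mod q); the challenge e comes
    # from a hash of the message in the real scheme.
    return (k + e * x) % q

x = 57                           # victim's secret key
k = 101                          # nonce reused because the RNG is predictable
e1, e2 = 5, 9                    # stand-ins for two message hashes

s1 = sign(x, e1, k)
s2 = sign(x, e2, k)

# Two signatures under one nonce give s1 - s2 = (e1 - e2) * x (mod q),
# so anyone observing them solves for the key.
recovered = (s1 - s2) * pow(e1 - e2, -1, q) % q
print(recovered)                 # 57 -- the secret key
```

This is exactly why "RDRAND is secure" is a load-bearing assumption for enclaves.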


For the general consumer maybe, but for media distributors the story might be a bit more worrying. Some forms of DRM keep the keys in special areas of the processor such as these and any key leaks might allow for recording or piracy of very stringently controlled content.

If the leak is bad enough, license holders might urge distribution platforms to block certain models of Intel CPU, against which these exploits are most successful, from playing certain content.

I doubt this will happen any time soon, but there might just be indirect consequences of such data leaks.


Is there any content which was available for streaming yesterday and isn't available on torrents today? DRM stops "casual" pirates. People exploiting CPUs are out of scope.


The quality is a question. It's nicer to distribute the original digital copy rather than a decompressed-and-recompressed one.


I don't think the average pirate is going to distinguish on this level of quality difference. As long as it isn't a theater cam most people are going to be satisfied with the quality / price ratio.


At least when it comes to pirates in the anime scene, people are really astonishingly picky about encoding quality.


They'll be picky even if you capture the 'original' compressed stream, though.


Can you tell the difference between recompressed 1080p and the virgin source? What about 4k? I highly doubt I can but maybe I don't know what to look for.


Depends on the compression settings. It takes about 6x longer to compress with good quality using software x264 than with hardware encoders like Intel's Quick Sync, AMD's VCE, or Nvidia's NVENC.

When compressing via x264 with good settings, I cannot tell the difference. Usually you can tell from blocking artifacts in areas of smooth color, like fog or out-of-focus backgrounds. Try pausing during those scenes.
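Why smooth areas give the artifact away: block-based codecs quantize each block independently, so a coarse (fast hardware) preset turns a gentle gradient into stair-steps exactly at block boundaries. A purely illustrative 1-D sketch, not a real codec:

```python
# Purely illustrative 1-D sketch of blocking: block codecs quantize each
# block independently, so a coarse/fast preset turns a smooth gradient
# (fog, bokeh) into stair-steps exactly at the block boundaries.

signal = [i / 4 for i in range(64)]            # smooth ramp

def blocky_encode(sig, block=8, step=2.0):
    # crude "fast hardware preset": each block becomes one quantized level
    out = []
    for b in range(0, len(sig), block):
        chunk = sig[b:b + block]
        level = round(sum(chunk) / len(chunk) / step) * step
        out.extend([level] * len(chunk))
    return out

def boundary_jump(sig, block=8):
    # mean discontinuity measured only at the block boundaries
    jumps = [abs(sig[i] - sig[i - 1]) for i in range(block, len(sig), block)]
    return sum(jumps) / len(jumps)

decoded = blocky_encode(signal)
print(boundary_jump(signal))    # 0.25 -- the source ramp is smooth everywhere
print(boundary_jump(decoded))   # 2.0  -- visible steps right at block edges
```

Inside a block the decoded output is perfectly flat, which is why the damage only shows at block edges in slow gradients.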


True to some extent of course, but HD content seems to be better protected. Then again, HD video content also takes up large amounts of bandwidth and storage space (especially with the newer features that are being added over time), which means that we'd expect it to be less common on these platforms anyway.


In 2010 the HDCP master key was leaked, effectively killing DRM for HDMI. The KEY takeaway, pun intended: when you get the opportunity to use RSA or invent your own scheme, use RSA. https://en.m.wikipedia.org/wiki/High-bandwidth_Digital_Conte...
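For context on why rolling your own failed here: HDCP 1.x key exchange is linear in a secret master matrix, so leaking enough device keys lets anyone reconstruct the master key by solving linear equations. A toy model of that failure mode — a tiny 3x3 matrix instead of HDCP's real 40x40, with illustrative numbers:

```python
# Toy model of the HDCP-style design flaw: the key exchange is *linear*
# in a secret master matrix, so leaking a handful of device keys lets
# anyone solve for the master secret. Tiny 3x3 example over a small
# prime -- the real scheme used 40x40, but the math failure is the same.
Q, N = 251, 3

master = [[17, 201, 88], [45, 9, 132], [250, 3, 77]]   # the global secret

def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) % Q for row in M]

def solve(A, B):
    """Gauss-Jordan: solve A X = B (mod Q) for the N x N matrix X."""
    A = [row[:] for row in A]
    B = [row[:] for row in B]
    for col in range(N):
        piv = next(r for r in range(col, N) if A[r][col])
        A[col], A[piv] = A[piv], A[col]
        B[col], B[piv] = B[piv], B[col]
        inv = pow(A[col][col], -1, Q)
        A[col] = [a * inv % Q for a in A[col]]
        B[col] = [b * inv % Q for b in B[col]]
        for r in range(N):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * p) % Q for a, p in zip(A[r], A[col])]
                B[r] = [(b - f * p) % Q for b, p in zip(B[r], B[col])]
    return B

# Each device has a public vector v and a leaked private key u = master @ v.
vs = [[1, 0, 0], [0, 1, 0], [1, 1, 1]]
us = [matvec(master, v) for v in vs]

# Rows of vs/us give V^T M^T = U^T, so solving yields the transposed master.
rec_t = solve(vs, us)
recovered = [[rec_t[c][r] for c in range(N)] for r in range(N)]
print(recovered == master)       # True -- master key fully reconstructed
```

With RSA, by contrast, leaking any number of individual device keys reveals nothing about the signing key.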


It's always more desirable to grab data closer to the source, e.g. the compressed stream instead of stuff further downstream that made it through the playback pipeline to HDMI.


The quality will still be the same; you'll just have the burden of ripping massive unencoded raw dumps and the time to recompress them.


The recompression is a lossy step which you do not incur when extracting the already compressed data from BDs or VoD streams.


Yeah, better grab the already compressed copy and extract it without recompression and the DRMs.


> In 2010 the HDCP master key was leaked

Discussion from Sept 14, 2010: https://news.ycombinator.com/item?id=1689669


4K has also been mostly cracked, and it seems recently iTunes as well (for 4K).


"4K has been cracked" means absolutely nothing; it's a resolution. You probably mean one specific file format or distribution channel has been broken but I have no idea which from your comment.


I'm driving, but don't want to forget this later... do you think you could help me with a 30-second rundown on Intel JTAG?

I assume not all chips have the same JTAG password. I assume JTAG can be accessed via software running on the processor itself and not require explicit physical access. I assume there is a hack that allows someone to obtain their processor JTAG password, how that works is what I’m most interested in I suppose.


I skimmed the slides.

The password is unique per cpu.

There is no hack presented in the slides to get the password, and they directly say they couldn't get the password for their system. Their evidence seems mostly based on this pdf: https://web.archive.org/web/20190213084218/http://www.keenli...

There is nothing in the slides about accessing JTAG from the processor itself, and in the pdf above you can see the JTAG equipment on page 12.

That's about it. I'm no expert at all, just relaying what's in the slides.


JTAG requires physical access.


How so? Hack a net-connected USB device that the target is using.


The net-connected device would need to be connected with a USB 3.0 debugging cable. Also DCI would need to be enabled on the computer.



Yes, as I said, if DCI is already enabled and the device is connected via a USB 3.0 debug cable.

It can be made like this:

https://github.com/ptresearch/IntelTXE-PoC#preparing-the-usb...

Or you can buy one from Intel.


I would guess that many USB devices can make themselves look like debug cables with minor firmware modifications.


The Core 2 series on LGA775 is the last Intel chipset where the ME can be entirely disabled.

This VISA exploit relies in some way on SA-00086, which is a flaw in the ME that cannot be removed in later versions.

The me_cleaner script has been able to purge these older chipsets since October of last year. I cleaned two different HP desktops at that time, and uploaded the modified BIOS here:

https://github.com/corna/me_cleaner/issues/233

These still suffer from Meltdown and the remediation does exact a performance penalty, but Core 2 is starting to appear as the most manageable risk of all their products.


Curious what your threat model is.


The Intel ME can be accessed over the connected ethernet even when the PC is shut down, as long as the power supply is attached.

SA-00086 as an exploit is available through the ME.

VISA is available for access by the (compromised) ME.

This seems threatening enough to me.


I am always confused about this part. Are systems that don't support vPro/AMT/MEBX also accessible over the motherboard's Ethernet connection?


As far as I know, yes.

If you run the Linux ME reporting tool and the ME is identified, then the ME is awaiting provisioning and commands.

Every chipset has a separate processor that serves the ME, distinct from the main CPU.


All Intel systems contain ME hardware and firmware, but most consumer systems are not configured to support remote administration via the integrated ethernet port.


On older Intel chips it's possible to remove the ME and still have a functional CPU. Such Intel systems running Libreboot lack functional ME firmware.


No. Systems which don't advertise vPro support probably don't include the necessary firmware even if the hardware is capable of supporting it.


It may be fully there, just disabled in software. I was able to query the vPro firmware version from Linux on my laptop, which is "non-vPro", and I was able to use me_cleaner to remove most parts of the ME firmware and physically flash the SPI flash chip. Now it shows "machine is not in committed state" in the BIOS during boot, but everything works (except the ME, of course).


The Management Engine firmware blob for a non-vPro device is just under 2MB. For a vPro device it's 7MB. To me this hints that the functionality is not there rather than neutered. Of course, no one outside of Intel really knows.


The scariest line to me:

> Intel doesn't publicly disclose the existence of Intel VISA and is extremely secretive about it

So all of my data and everything I do on my computer is flowing through this every day, and nobody knows what it is or what it does....


You never heard of Intel ME or AMD PSP?

It's a mips microprocessor inside your cpu, completely undocumented and has access to the memory, peripherals, network interface etc.


"completely undocumented and has access to the memory, peripherals, network interface etc. "

They were documented in the sense Intel publicly advertised them for years under AMT and vPro as enterprise features. That's why all the discussions on HN about whether Intel had backdoors or weakened randomness were funny. While people were "countering misinformation" here, Intel was publicly advertising backdoors in their chips to ease the management burden. I mean, I guess you could call them front doors with the publicity.

The sneaky part was how they started including them in all chips without a way to (a) buy chips without them or (b) know for sure you could turn them off. I immediately suspected NSA paying them off given most of this started in Trusted Computing Group activities which included classified sessions with NSA. They were always a stakeholder in that stuff. AMD did it, too.

Our only hope for x86 now is the Chinese company that's sharing AMD's chips. They might make a chip with no U.S. backdoors: only Chinese backdoors. If you're worried about local government but not I.P. theft, then the Chinese backdoors won't be any threat to you. Problem solved if the computers get here with no interdiction. Gotta do shell games.


I am not surprised that their inclusion was sneaky. I recall when Intel attempted to market the TPM for the first time. The reaction was swift and very negative. Slashdot was not in favor of "security" achieved by including security holes and relying on the obscurity of the exploit details as the single point of failure. It was closer then to the era when the government was trying to mandate key escrow and Clipper chips, and back then Intel had to walk it back and not release it with a high profile. The most common worry at the time was that this would be used for hardware-based DRM in service of the entertainment industry.


> I recall when Intel attempted to market TPM for the first time. The reaction was swift and very negative.

Are you sure you didn't confuse that with the processor serial number (that Intel actually reversed their decision on)? https://news.ycombinator.com/item?id=10106870

TPM was (unfortunately?) far more positively received, likely because it was marketed as a security instead of DRM feature --- and the same goes for a lot of other antiuser features today... the manufacturers have gotten smart about it.


The TPM got almost universally negative coverage outside of the enterprise IT space, because there wasn't an obvious benefit to anyone else and there were many concerns that it would prevent alternative operating system installs, lead to unbreakable DRM, etc.

This was unfortunate as it largely evaporated the middle ground who recognized that without some trusted base you also can’t recover from malware or have robust anti-theft measures. I wish the politics had been such that we ended up with a robust open-source implementation before so much shoddy, unreviewed code had shipped so widely.


> "If you're worried about local government but not I.P. theft, then the Chinese backdoors won't be any threat to you."

Not necessarily. It's possible your local government has infiltrated the Chinese agencies that have access to your data. Maybe not likely, but possible. It's also possible the Chinese might choose to sell that data to your government.


It's all about decreasing the odds. US -> China -> CPU is longer than just US -> CPU.


Yes I agree. However the calculus may change for people in smaller countries that are diplomatically closer to China than America (increasing the chance that their government will do business with the Chinese government.)


The older Intel version was actually on the ARC architecture, which was embedded in the north bridge:

https://en.m.wikipedia.org/wiki/ARC_(processor)

Later versions of the Intel ME switched to an internally-designed 80486.


I've heard a lot about those; the quote refers to "VISA".


It's just a tiny piece of Intel's architecture. There's a ton of other components that your data also flows through that are equally undocumented.


Nobody? I'd say Intel knows. And more than likely NSA knows.

That's worse than nobody.


Does this mean AMD is inherently more secure?


AMD has an equivalent of the Intel ME that was called the "Platform Security Processor" and is now known as the "Secure Processor."

Flaws have been found in it. They are probably not as bad as Intel.

Leaving the VISA environment accessible to the ME is something that I hope AMD would never entertain, out of an abundance of caution for their customers.


> They are probably not as bad as Intel.

Not sure there's any evidence this is likely to be true - except for the fact Intel processors are widely deployed.


AMD did not suffer from Meltdown.

We would hope that their architectural restraint would extend to their SP.


> Flaws have been found in it. They are probably not as bad as Intel.

It's slightly ambiguous whether you mean 'the flaws in AMD IME-near-equivalent are not as bad as the flaws in the IME' or 'AMD's IME-near-equivalent is not as bad as the IME'.

On the 2nd interpretation, it seems that AMD's 'ME' is likely not as powerful/mature as Intel's ME. On the other hand, on certain machines the IME can be disabled or largely neutralised, while I know of no such possibility for AMD.


Intel ME just moved away from ARC processors to an internally-developed 486.

The maturity of both platforms is opaque, varying, and the only reasonable feature that most consumers would want is an off switch.


Test portions (debug architecture) are necessary for all chips. It is otherwise impossible to make sure what you have actually works as you intend.




And especially the proper paper.


> Intel underplayed the exploit and told ZDNet that the VISA issue requires physical access to the machines and the Intel-SA-00086 vulnerabilities have already been mitigated.

Am I the only one here who thinks "threat model" and isn't all too worried? Physical access = full control, I've long held that belief and am not happy that this freedom is continually being eroded in the name of ever-increasing ridiculous "security" (a lot of it against the user, for things like DRM, as mentioned in one of the other comments here.)

Debugging/"test mode" features are basically present in all modern CPUs. I would not bet that Intel is the only one.

Edit: such features have been present in CPUs dating back to the late 70s, so perhaps "modern" isn't needed: see http://forum.6502.org/viewtopic.php?f=8&t=3366 and http://e4aws.silverdr.com/hacks/6500_1/


The next sentence in the article,

> The researchers however disagreed with Intel's comments and reportedly said in an online discussion that the patched Intel firmware can be downgraded using Intel ME, making the chipset vulnerable and opening the door for accessing Intel VISA.

I agree that "physical access" is a threat that I'm not too concerned about¹. But the researchers don't seem to agree that that's the extent of the issue.

¹If physical access means "disassemble the case and do X". If physical access is "two seconds with an exposed USB port", then that's a different matter.


Rackable servers typically have a locking front cover to prevent that. You can buy CoverLocks for desktops.[0] This Kensington USB lock also looks interesting.[1] The keys are unique, by the way.

0) http://www.computersecurity.com/coverlock/

1) https://www.amazon.com/Kensington-Cable-Guard-Rectangular-K6...


I was shopping a few months back at a store with a particularly uninterested cashier, who was doing something on their phone and charging it using the USB port on the register. It struck me that you probably shouldn't be able to plug a personal device into that port, but that it is likely a common practice when nothing actually prevents it.

Made me wonder if an attack scenario would be feasible where compromised phones could be used for targeted attacks utilizing one of the USB vulnerabilities; essentially just laying dormant until the user plugs into something interesting to charge.

Assuming that physical-access vectors are often deemed low priority and less likely to be patched, and that testing whether an employee cares more about the BYOD security policy or their dying phone will not favor infosec, it seemed like an interesting way to get access to point-of-sale systems that I had not heard anyone talk about before.

Some other privilege-escalation vulnerability would have to be exploited to go beyond the usual app malware capabilities, but I don't know enough about USB or mobile OSes to know if it would be possible even with root (maybe that's why no one talks about it).


The main thing is "Why undocumented?", not threat model and physical or remote exploit.


> The main thing is "Why undocumented?", not threat model and physical or remote exploit.

No, the main thing is if VISA is for qualifying chips on the production line, why don’t they burn a fuse permanently disabling it on chips that pass QA?


VISA isn't for qualifying chips so much as it is for debugging chips, including customer returns. Long ago, access to limited versions of the signals was given to some customers under NDA, and customers had the ability to set customer fuses which would prevent Intel from unlocking the debug features. There's also an unsuppressable "debug enabled" bit hardwired to a lot of these functions, if I remember correctly, so the parent system can always see if somebody turned on the debug functions. Turning on debug functions also usually had the effect of disabling/corrupting important key material.

That said, it's basically a giant signal mux system run into a main block; you'd be pretty limited in what preselected signals are available at a time. People were also reasonably careful not to expose entire buses of signals at once.
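The mux arrangement described above can be caricatured like this; all names and the register layout are made up, purely to show the "one preselected group at a time" limitation and the sticky debug-enabled bit:

```python
# Made-up toy model of the mux description above: internal signal groups
# feed a selector, and only the group the select register points at is
# visible at the observation port at any one time.

internal_signals = {
    "bus_grp0": [1, 0, 1, 1],
    "bus_grp1": [0, 0, 1, 0],
    "rng_taps": [1, 1, 0, 1],
}

class DebugMux:
    def __init__(self, signals):
        self.signals = signals
        self.select = None          # nothing routed out by default
        self.debug_enabled = False  # the "somebody is looking" bit

    def route(self, group):
        self.debug_enabled = True   # turning on debug is always visible
        self.select = group

    def observe(self):
        # only the currently selected, preselected group is observable
        if self.select is None:
            return None
        return self.signals[self.select]

mux = DebugMux(internal_signals)
mux.route("rng_taps")
print(mux.observe())        # [1, 1, 0, 1] -- one preselected group at a time
print(mux.debug_enabled)    # True
```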


Yet another main question, because ...


Why undocumented? Because Intel is not RISC-V or even Arm. Do you expect other chip makers to document every internal detail of their chips? And yes, as chips grow in internal complexity, bugs like this one become inevitable - so perhaps we should be insisting on fully-documented designs.


There was nothing inevitable about this particular one, unless you mean it at the level of "people will always make mistakes and there's nothing you can do about it"


Testing/debugging features are intended for internal use only and can expose proprietary information about their implementation, so only Intel and possibly some motherboard manufacturers are allowed the knowledge.

Speaking of documentation, while not fully open, Intel/AMD are already ahead of many others when it comes to documenting things. Look at the SoCs used in smartphones and tablets, or the infamous Raspberry Pi. AFAIK, Intel is not in the business of making secure cryptoprocessors like the ones used in payment cards and SIMs, but once again even with those you are very unlikely to find public documentation --- everything is NDA'd.

That said, don't get me wrong, I'm not advocating for leaving things undocumented; just explaining how it usually turns out that way.


> Intel is not in the business of making secure cryptoprocessors like the ones used in payment cards and SIMs

They have entered a rather comparable space with stuff like TPM and SGX (and are largely using it for nefarious, consumer-hostile stuff like DRM and remote code attestation, of course).


You can also use SGX for privacy-preserving enclaves, which makes the combined existence and secrecy of this debugging stack more concerning.

Unless there are countermeasures that we don’t know about, this is a huge backdoor for such systems.

With DRM, the attackers are consumers. With enclaves, the attackers are Intel and government intelligence agencies.


SGX always involved trusting Intel. It was Intel basically telling a third party "yup, this machine is running the binary code you expect it to be running". Which is exactly why it's nefarious, of course - a third-party has no business forcing me to run their proprietary binaries on my own machine in order to access their service. If they want that kind of trust, they're still free to establish that by sending me a physical dedicated-use device that can authenticate itself to whatever service they care about.


SGX only guarantees the isolated memory region remains isolated. It doesn't guarantee the code running in it is secure or even leak-free. Microsoft Research has studied this and released a compiler for making stronger claims about code running inside the enclave, but I think this is still very much an open problem.

You will always want layers of security, and even then security is an illusion... there will always be a way, just with different costs to exploit and thus a different risk/reward frontier.


Hm, while I completely agree about not wanting proprietary garbage on my machine, I'm not sure about this logic. A third party can stipulate anything it likes, in order to use their service - you're always free not to use that service. Arguably, the ability to certify that you, the user, are indeed running the code you say you are is a feature provided to you, rather than a restriction imposed upon you. Not a feature I have interest in using, but nevertheless it's possible to frame it that way around. The fact that such code tends to be proprietary is (again, very arguably) besides the point.


> you're always free not to use that service

Unless that service has monopoly power. SGX makes it far too easy for an entity to abuse monopoly power, by directly turning it into power over what their customers have to run. This is why this sort of thing should only ever be done via physical devices - "monopoly" over a single, dedicated device is far more obvious and less insidious.


The processor in a payment card doesn't hold any valuable data that the bank doesn't already have, unlike a computer's CPU.


Based on the name, the tool likely makes it easy for others to figure out low-level details of processor implementations that Intel would prefer be trade secrets.


> Am I the only one here who thinks "threat model" and isn't all too worried? Physical access = full control, I've long held that belief and am not happy that this freedom is continually being eroded in the name of ever-increasing ridiculous "security" (a lot of it against the user, for things like DRM, as mentioned in one of the other comments here.)

People care because that means that when you run something in the "Cloud(tm)" you have to trust the cloud provider.

The goal of things like "SGX" was so that you could run things on machines under someone else's control and still trust the results.


SGX doesn’t guarantee results, just isolation. You would still need a protocol for confirming that the code was error free, didn’t leak data, not to mention side channel attacks and timing analysis.

Oblivious RAM, FHE, ZKSNARKs are to be considered when you don’t trust the hardware, the users, the cloud, administrators, etc.

Some combo of TEEs, formal verification, binary verification, FHE, etc. might keep your data secure for a little while...maybe.


With encrypted drives and TPM you can still raise the bar pretty high before physical access means you have control. You may have control of the hardware, but not of what’s running inside.
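The sealing idea works roughly like this: the TPM extends a running hash (a PCR) with each boot stage and will only release the disk key if the final measurement matches what the key was sealed against. A toy sketch of the concept, not the real TPM command set:

```python
# Toy sketch of measured boot + sealing: a TPM "seals" the disk key
# against measurements of the boot chain, so physical access alone does
# not yield the key -- you must also reproduce the measured software.
import hashlib

def measure(components):
    # PCR-style extend: order-sensitive running hash over each boot stage
    pcr = b"\x00" * 32
    for c in components:
        pcr = hashlib.sha256(pcr + hashlib.sha256(c).digest()).digest()
    return pcr

good_chain = [b"firmware-v1", b"bootloader-v5", b"kernel-v4.19"]
sealed_pcr = measure(good_chain)          # state the key was sealed to

def unseal(disk_key, chain):
    # release the key only if the boot chain measures to the sealed state
    return disk_key if measure(chain) == sealed_pcr else None

key = b"disk-encryption-key"
print(unseal(key, good_chain) == key)     # True  -- untampered boot
print(unseal(key, [b"firmware-v1", b"evil-bootloader", b"kernel-v4.19"]))  # None
```

Swapping in a tampered bootloader changes the measurement, so the sealed key is never released to it.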


Undocumented? A bug? No, a backdoor. Big brother (NSA, FSB, ...) inside [1]? There isn't any datasheet or documentation about "Intel Visualization of Internal Signals Architecture" on Google! Even documented technologies/features like Intel ME and AMT, or speculative execution (Spectre/Meltdown), are (were) backdoors, not bugs! We need open (free) source hardware (PCH, CPU, GPU, ...) with complete documentation.

[1] https://github.com/CHEF-KOCH/NSABlocklist/issues/31

[2] https://software.intel.com/sites/default/files/managed/d3/3c...


The name VISA makes all of this extremely confusing, as there are two other major overloaded meanings for the word


"Intel doesn't publicly disclose the existence of Intel VISA"

An internal-only name leaked..."Visualization of Internal Signals Architecture"


Yeah, I figured that out eventually, but for the first while I was mentally following multiple possible narratives


VISA should at least be capitalized like in the original title.


>VISA's documentation is subject to a non-disclosure agreement, and not available to the general public.

- https://www.zdnet.com/article/researchers-discover-and-abuse...

Back in the 80s, such policies were dismissed as "security through obscurity". So much change, so little progress ...


Clickbait article. Ignore it and read the slides instead:

https://i.blackhat.com/asia-19/Thu-March-28/bh-asia-Goryachy...


Intel's recent string of "what the fuck" failures is what happens when a monopolist emerges and goes unchallenged long enough to become lazy.

May the market penalize them in kind.


> May the market penalize them in kind.

Intel made innovative, security-enhanced CPUs such as the iAPX 432, i960, and Itanium. They lost billions as the market punished them for breaking backward compatibility, among a few other things. Mainly, backward compatibility with the insecure architecture the market depended on. The market also wanted chips that maximize speed per dollar, which always trades against security. And the market rarely picks secure products over insecure products when offered a choice.

So, our current situation is the market's responsibility, not Intel's. The market punished Intel for trying to make better products. Say what you will about the two, the i960MC was a pretty nice design even if BiiN wasn't. A RISC chip with object-based protection and high availability should have taken off if the market really cared about reliability and security. Instead, the market rewarded whatever met their price/performance (x86/POWER) or price/performance/watt (ARM or MIPS) targets, punishing all else. They also built on it non-portably in a way that led to lock-in. Now, they're reaping what they sowed, with everything they've done up to this point making any investment in secure chips or OSes a risk not worth taking for most suppliers.

Good news is that all the recent bad news is encouraging some companies to try secure offerings again. Might get more buyers this time. Still high risk, though.


The tech industry in general undervalues security (at least in my opinion). I agree part of the reason Intel does is about being a monopolist, but I see this as a bigger trend of fancy/shiny/innovative at the expense of solid/secure/stable.


I would argue that tech people overvalue security, since they think about what could happen if you're violated, and then they discuss the expected value of making the secure decision.


But Intel has stalled out pretty hard on fancy/shiny/innovative. They hit a brick wall with 7nm.


But you can still speed things up with shiny tricks like speculative execution...


...and here we are.


Speculative execution (branch prediction) is hardly flashy. It's been present in Intel chips since the Pentium (and in just about everyone else's) since the early 90s, a quarter of a century ago. This was back when die resolution was 600 nm.


Right, my point was that they're doing more than just reducing the nm.


Unfortunately the market probably won't. Intel still has enough clout and resources to incentivize OEMs to sell as many Intel systems as possible.

For consumer hardware most buyers are completely oblivious to the situation but they are most likely familiar with the Intel brand. For enterprise the inertia is a lot higher and they are probably waiting to see that AMD manages to deliver for an extended period (multiple generations) before considering wider scale adoption.


Anecdotal at best, but I upgrade a few dozen desktops a year for my customers during the early months of the year, and this year is the first in a long time that there was some AMD in my purchases; in fact, most of it was AMD. Not by choice, but because 1. it was available, and 2. it was the best option (for a branded sub-450-euro desktop with an SSD, the fight was mostly Ryzen 5 with 8 GB of RAM vs. i3 with 8 GB of RAM vs. i5 with 4 GB of RAM). Lenovo especially had some great offerings with Ryzen CPUs.


Security and price aside, I would expect Ryzen with its many cores to be excellent in office use. Is it?


Yes, but it's even better than you think, given that you can find Ryzen 3 at the price point that used to be modern "Pentium" territory; Windows 10, a browser with a few tabs, and Office with a couple of spreadsheets open just don't do well on two cores anymore.

And in fact the Ryzen 2200G/2400G is so cheap that the competing offers, price-wise, are i3-based (or i5, but with less RAM, and 4 GB is just not enough in 2019, when Windows wants 2, Chrome wants 2, Office wants 2, ...). Ryzen 2200G/2400G vs. i3 is simply not a fair fight; it's not even a fight at all.

I guess I'm usually so focused on high / super high / server needs that it didn't fully hit me how Intel was getting eaten alive even in the low / middle market right now. And medium-sized businesses replacing office computers really don't care about the brand of their CPU; "cheaper, works better" is going to win every time.


> And medium sized business replacing office computers really don't care about the brand of their cpu, "cheaper, works better" is going to win every time.

I'm not sure there's enough of that going around now for AMD to gain any substantial market share fast. If Intel's response is anything short of Core 2 vs. Athlon, then it might go AMD's way.


I think any modern CPU is enough for regular office use. You don't really need lots of cores for that. But if you can get them at a lower price point, lower power consumption maybe, and perhaps even better security then why not?


Most Ryzen chips lack integrated graphics, which drives up system costs. Also, while the chips are cheaper, saving $100 on the CPU doesn't necessarily make a big impact in total system costs, especially since you now need to buy a discrete GPU for $100 extra too.

Also Ryzen performance really falls off in 4C configurations due to lack of cache (its memory performance is not great and once you fall below a critical level of cache you start leaning harder on the memory for performance). The 2400G and 2200G models (which are the only models with graphics) are particularly heavily affected but the 1500X is not a stellar performer either compared to the 6C and 8C configurations. Normally they fall somewhere around Haswell-level IPC, but as a ballpark let's say the 2200G/2400G is more like Sandy Bridge.

Finally, a lot of the 1000-series chips are subject to a manufacturing fault that causes random segmentation faults under heavy load. Compilation is a reliable way to trigger it, but it can occur in anything, it's basically a uop cache bug. Supposedly fixed after mid-2017, and it's RMA'able if you have it, but a few people still find it in their later-production processors.

As a matter of philosophy, a lot of "office stuff" really prefers fast cores to many cores. An i7 4770 or even one of the new quad-core i3s (no HT) is better suited to typical office tasks on paper, and includes onboard graphics.

With that said, "office stuff" doesn't really need high performance to begin with. Ryzen performance is fine, even the weaker -G processors. But in turn, a lot of users would be OK with even less performance - Intel's Goldmont Plus atom processors are now hitting between Core2 and Nehalem level performance, and that's more than fine for a lot of office stuff. You can pick up a barebones J4005 NUC for like $130 now, or a J5005 NUC for ~$170. Add memory and an SSD and you're done.

The prices on the first-gen stuff are really appealing. I've seen the 1700 for as low as $130. And a home user is probably going to be using a discrete GPU anyway, so that cost penalty doesn't really matter. But, that's not a deal that would be institutionally purchased in volume, that's max quantity 5 from newegg, and companies usually don't buy discrete GPUs if they don't have to.


Also see: Boeing


Out of interest: What would one consider as the most secure, commercially available computer architecture?

Are there any open, vetted computer system out there?

How do projects such as the Raspberry Pi fare in terms of security?


"How do projects such as the Raspberry Pi fare in terms of security?"

The CPU on that isn't the first thing that boots. There's a binary blob that boots the GPU first, and gets it to boot the CPU.

There is some progress on being able to boot without the mystery code, but it's not complete: https://hackaday.com/2017/01/14/blob-less-raspberry-pi-linux...

Broadcom offers very little documentation. I don't think an RPi would be a good security choice.

A BeagleBone Black has a better-documented CPU and boot process, and can run stock, non-custom Linux. Not specifically "secure" per se, but at least easier to research. Can't vouch for it, but here's an attempt to make the BBB a secure environment: https://cryptotronix.com/products/cryptocape/


Thanks for the link!


> Out of interest: What would one consider as the most secure, commercially available computer architecture?

Define your limits.

For the everyday, you won't get far. Intel, AMD and others have all been shown to have problems like this, or at least things they specify for government agencies.

However, tiny computer architectures like MIPS, AVR and the like probably don't, simply because they tend to be too small. They don't have the memory for advanced backdoor techniques... but their memory is trivial to read out if you have physical access.

Straight-up RISC-V is too new to truly trust security, but looks fairly great... Until you realise that almost everyone is going to add their own proprietary extensions to it, and those extensions may well include things like VISA.

---

The Raspberry Pi uses a Broadcom ARM chip. (Different Pi models ship different versions of the SoC, and the two main chips are vastly different.)

I don't have enough details about the particular chip to tell, but ARM does have its own remote management system. It may or may not be part of what Broadcom offers, and may or may not have undisclosed abilities on offer to clandestine agencies.


> Straight-up RISC-V is too new to truly trust security, but looks fairly great...

Bad comparison, because RISC-V is an instruction set, not a physical microarchitecture. Backdoors and side-channel attacks could still be possible in any silicon implementation of RISC-V.

The same applies to any ISA, be it MIPS, x86-64, POWER, ARM, SPARC, PA-RISC, Alpha, etc. They're just different programming languages implemented in hardware. And like software, hardware can have bugs too, though patching is much harder or impossible.


Your quote cut off the parent right before they made roughly the same observation.


Not really - it was talking about proprietary extensions to the instruction set, but the real problem is at the implementation level. You can implement almost everything strictly to the spec and still have vulnerable side channels.


Most small microcontrollers are vulnerable to program memory extraction and replacement - even the supposedly secure ones.


Thanks for your elaboration.

What do you think are the reasons that a "safe-by-default" CPU design has not yet seen the light of day?

I could go back to really old systems, but it does not really make sense to rely on a 'security through obscurity' strategy, no?


> What do you think are the reasons that a "safe-by-default" CPU design has not yet seen the light of day?

Tradeoffs, and motivations, really.

This is going into the land of speculation and opinion, but I think that we did see some more secure CPUs in the past. I also expect that the military and some industries may have access to more secure versions of CPUs currently on the market.

Performance became one priority, which led to unsafe shortcuts that made Spectre and others possible.

And on the other hand, the more diverse and prolific technology becomes, the more interested governments become in accessing it surreptitiously. Many modern governments seem obsessed with collecting as much data as they can - so much that they can't even read it all.

So on one hand you have pressure from consumers to improve speed at any cost, and on the other you have state actors pressuring to keep the status quo of semi-leaky hardware.


Thanks. Your speculations sound reasonable.

I have no idea what kind of error-checking procedures the CPU industry uses, but it is hard to believe that Intel et al. do not use highly integrated proof tools that cover multiple design layers. I know these are very complex systems, that proof tools can't cover every case, and that cost is a factor at play - but I believe there is interest in private and public markets to get hold of a stack that starts with a higher security threshold than what we have now.
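To give a feel for what "checking an implementation against a spec" means at toy scale, here is a purely illustrative Python sketch (nothing like what Intel actually runs, which involves formal model checkers and theorem provers on vastly larger designs): exhaustively comparing a gate-level model of a 4-bit ripple-carry adder against the integer specification.

```python
# Toy illustration of design verification: exhaustively compare a
# gate-level model of a 4-bit ripple-carry adder against the integer
# spec. Real CPU verification uses formal tools on far larger designs
# where exhaustive simulation is impossible - hence proof tools.

def full_adder(a: int, b: int, cin: int):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add4(x: int, y: int) -> int:
    carry, out = 0, 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        out |= s << i
    return out | (carry << 4)

if __name__ == "__main__":
    ok = all(ripple_add4(x, y) == x + y for x in range(16) for y in range(16))
    print(ok)  # True
```

With 4-bit inputs there are only 256 cases; a real 64-bit datapath has too many states to enumerate, which is exactly why the industry needs proofs rather than simulation.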


> but I believe that there is interest in private and public markets to get hands on a stack that starts with a higher security threshold compared with what we have now.

There's certainly some interest, in some parts of industry.

NASA [0] has an FPGA board they made to be more rugged, for CubeSats, which had multiple layers for preventing program corruption. I'd be shocked if it had a backdoor into it, and would expect parts of the system would make it more resistant to tampering.

Unfortunately, I also expect that if any such chip becomes popular among consumers, state actors will throw their weight around to get weaknesses added.

[0] https://www.nasa.gov/mission_pages/station/research/experime...


There are two avenues:

1. Take an Intel or AMD system and try to secure-ify it. This is what vendors like system76[1] and Purism[2] do, especially aided by the coreboot project[3]. I'm not an expert on the details, but an important goal in this space has been disabling Intel ME[4], which I note in the original parent article seems to be part of this exploit (at least when used remotely). AMD has a corresponding parasite CPU-in-CPU (the Platform Security Processor); I'm not sure about progress on disabling it.

2. Use another CPU. Someone else mentioned the raptor project, that's all I know of (but again I'm not an expert). But by far the most common consumer CPU that competes with Intel and AMD, who make x86 processors, is the ARM platform used by phones, Raspberry Pis, and many embedded devices.

Note the CPU is only one kind of closed firmware that can have backdoors or insecurities....

[1] https://system76.com/

[2] https://puri.sm/

[3] https://www.coreboot.org/

[4] https://en.wikipedia.org/wiki/Intel_Management_Engine


With Intel/AMD x86_64 chips, you have a platform to go with it: the PC architecture. The BIOS/UEFI in this architecture is incredibly standard over the course of their offerings.

The PlayStation 4 has an x86 chip, but is not a PC. It cannot boot a mainline kernel because a lot of basic assumptions about how a PC works don't hold on the PS4 platform. The fail0verflow people have done some great talks on all the issues porting Linux to the PS4.

The trouble with ARM is that it's not a platform. Linux got so popular among developers and nerds in the 90s because you could just install it on any PC out there. When people got hardware working, they could upstream their changes. Phone manufacturers patch the hell out of their kernels in terrible ways that can never be upstreamed, and include tons of binary blobs and shims. postmarketOS is making some progress here, but it's slow.

ARM phones don't have a standard BIOS (with the exception of the Windows Phone line, which has ARM+UEFI, yet Microsoft still won't release a bootloader unlock even though the platform is pretty much dead at this point). Some ARM devices use the devicetree standard for hardware identification/allocation, but it's still a mess.

I wrote about this before:

https://penguindreams.org/blog/android-fragmentation/


Thanks, the info and post is helpful!


Hi, thanks for the links

Indeed, the control and understanding of the CPU (and auxiliary chips) is key.

A naive question: what are the resources needed to get your own CPU architecture going, not necessarily state of the art, but older technology?

The fabrication technology is there, after all. And many a patent should be expired by now. Also, there must be some academic projects out there which build such complex systems.


I don't know much about this but I guess you could check out RISC V


https://www.raptorcs.com/

They have a powerpc based desktop system where every component has its firmware open and available.


Also see https://www.raptorcs.com/BB/ for their latest, more affordable board called Blackbird.


The processors have complete internal configuration register documentation available, as well.


> NOTE: The proprietary NVIDIA® driver stack does not support OpenGL. There is no way to add OpenGL support to the proprietary driver stack. This system is designed for GPU compute, and while a minimal 2D framebuffer is supported 3D applications will fall back to non-accelerated LLVMPipe rendering. This may result in modern desktop environments, such as KDE and Gnome, failing to operate at an acceptable speed.

(No comment.)


That's their PowerAI node. Their regular hardware is sold with AMD workstation cards and uses amdgpu like everything else. (Talos II owner.)


Lenovo X230 with Heads, me_cleaner, Qubes.

Used OpenCompute enterprise server from eBay, with LinuxBoot as BIOS.

Future: OpenCompute platform based on OpenPower CPU, Open System Firmware and an open-hardware implementation of Microsoft Cerberus / Google OpenTitan.


Another server option is something based on ~2012 Opteron chips which have been freed, such as:

https://store.vikings.net/libre-friendly-hardware/the-server...


Interesting. Are there no known backdoors in the Lenovo X200/.../X230 Intel systems? I remember some bad news about Active Management Technologies.

How far along are these Open Architecture Systems?


The me_cleaner they mentioned is an attempt to remove/disable the Intel ME, including any AMT.

An alternative to me_cleaner on some systems (like the ThinkPad X200) is to replace the BIOS entirely with coreboot [1]. More recently, it was found that me_cleaner lets you remove/disable all or part of Intel ME on those and some other systems while still booting.

[1] My own notes from when I played with this, which was a mixed success (it was way too much headache to have a huge number of people each do): https://www.neilvandyke.org/coreboot/
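For the curious, a rough, heavily simplified sketch of the very first step tools like me_cleaner perform on an SPI flash dump: locating the ME/TXE firmware partition table, whose header begins with the ASCII magic "$FPT". The real tool also parses the flash descriptor and the partition entries before modifying anything; none of that is shown here.

```python
# Simplified sketch: find the ME partition table in a flash dump by its
# "$FPT" magic. me_cleaner does much more (parses the descriptor, walks
# partition entries, sets the disable bit); this is only the first step.

def find_fpt(dump: bytes) -> int:
    """Return the offset of the '$FPT' partition-table magic, or -1 if absent."""
    return dump.find(b"$FPT")

if __name__ == "__main__":
    fake_dump = bytes(0x1000) + b"$FPT" + bytes(0x100)  # synthetic dump
    print(find_fpt(fake_dump))  # 4096
```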


At least with the X200 you can run Libreboot, a fully blob-free subset of coreboot. The AMT stuff can be flashed off the chip.


>Are there any open, vetted computer system out there?

Raptor Talos II comes to mind. Quite powerful, and fully auditable firmware. Uses an IBM POWER9 cpu, no x86


The most certainty you are going to get is by taking an open source processor core and running it on an FPGA with an open toolchain. You won't be able to check the actual FPGA chip, though the risk seems minimal on that front compared to ASICs with the design baked in.

You will pay dearly in cost, performance and pretty much every other metric that you care about though.


There are many, but you'll sacrifice either price or performance to get it. The problem is that for hardware qualification/testing/development, remote management, etc many vendors have started opening up these backdoors.


> "Security researchers have discovered a previously unknown feature in the Intel chipsets, which could allow an attacker to intercept data from the computer memory. The feature called Intel Visualization of Internal Signals Architecture (Intel VISA) is said to be a utility that is bundled by the chipmaker for testing on the manufacturing lines. Although Intel doesn't publicly disclose the existence of Intel VISA and is extremely secretive about it, the researchers were able to find several ways to enable the feature on the Intel chipsets and capture the data from the CPU."

Am I the only one reading this and thinking - tho' not surprised - "there's a backdoor baked into Intel chips"?


Don't trust the hardware RNG, they said. And people pointed fingers and called them paranoid while laughing.
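For what it's worth, the standard defense is to never consume RDRAND output raw, but to mix it with independent entropy - roughly what the Linux kernel does by crediting RDRAND into its pool rather than using it directly. A minimal illustrative Python sketch (not production code):

```python
import hashlib
import os

# Illustrative sketch: hash untrusted hardware-RNG output together with
# independent OS entropy. Even a fully backdoored hw_bytes cannot make
# the result weaker than os.urandom alone, and vice versa.

def mixed_random(n: int, hw_bytes: bytes) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        h = hashlib.sha256()
        h.update(counter.to_bytes(8, "big"))  # domain-separate each block
        h.update(hw_bytes)                    # untrusted hardware RNG output
        h.update(os.urandom(32))              # independent OS entropy
        out += h.digest()
        counter += 1
    return out[:n]

if __name__ == "__main__":
    print(len(mixed_random(48, b"\x00" * 32)))  # 48
```

The point is that a compromised hardware RNG can only hurt you if it is your sole entropy source.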


For some odd reason Twitter won't let me share the link to the security disclosure page.

I can post YouTube videos and cat pictures, but the link to this page gets refused. Here is a video of what happens: https://media.giphy.com/media/kiB8T8qeTHCVdUM2oZ/giphy.gif

Are we in China yet?


Just delete your Twitter account and create a Mastodon account :)


True.

On the other side: stepping out of Twitter will isolate those who call attention to these things, and just makes it easier to keep the population from knowing what is happening.


I hope this Intel VISA exploit will shed some light on the inner workings of the Intel Management Engine: its memory states, software modules, their undocumented features, etc.


The slides mention that they were able to unlock JTAG for the IME cores, so I suspect we should finally be able to do research into it.

Personally I'm curious what doors this will open for understanding microcode.


Given the scale of some of the recent x86 exploits that went totally unnoticed by massive chipmakers like Intel (thinking mainly Spectre/Meltdown but this is pretty bad too), is it even possible to build a secure x86 processor?

Can anyone really say, given the insane pit of complexity that x86 has become?


This one is more like a deliberate backdoor that was left not fully secured. It doesn't seem to have anything to do with x86-the-instruction-set.

And stuff like Spectre and Meltdown is all about how they chose to implement it - specifically, optimizations. It is very possible to have an x86 CPU that doesn't try to be "smart" at all, and just implements the x86 spec in the most straightforward way possible... so long as you're willing to pay the perf penalty. It looks like we aren't.
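On the mitigation side, the branchless index-masking trick (the idea behind Linux's array_index_nospec()) is a good example of deliberately "not being smart". A sketch of the arithmetic in Python - illustrative only, since speculation is a property of compiled code on real CPUs, and Python's arbitrary-precision ints have to be masked to a fixed width:

```python
# Illustrative sketch of the branchless bounds-masking idea behind
# Linux's array_index_nospec(). In C this clamps an out-of-range index
# to 0 with plain integer math, so a mispredicted bounds check cannot
# speculatively index out of bounds. Python has no speculation; this
# only demonstrates the arithmetic.

def array_index_nospec(index: int, size: int, bits: int = 64) -> int:
    """Return index if 0 <= index < size, else 0, without branching on index."""
    m = (1 << bits) - 1
    # Top bit of (index | (size - 1 - index)) is clear iff index < size:
    # when index >= size, (size - 1 - index) wraps around and sets the top bit.
    v = (index | ((size - 1 - index) & m)) & m
    mask = (1 - (v >> (bits - 1))) * m  # all-ones when in range, else 0
    return index & mask

if __name__ == "__main__":
    print([array_index_nospec(i, 8) for i in (0, 3, 7, 8, 9)])  # [0, 3, 7, 0, 0]
```

The cost is a few extra ALU ops on every bounds-checked access - cheap, but it illustrates the general trade: every bit of "dumbness" you add back for safety is paid for in performance.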


This article appears to be blog spam after an interview ZDNet had with the researchers, which seems to include a lot more info on this.

It appears Intel is trying to downplay this as unexploitable (shocker!!!), citing a patch from last year. The researchers say there are multiple ways to turn this debug mode on besides those.


Oops, didn't notice that. The article I posted has two links other than the zdnet link, but they're also contained in the zdnet article. Totally support the admins repointing to this if they want:

https://www.zdnet.com/article/researchers-discover-and-abuse...


It looks like a shady website.

Also, the title needs to be changed to remove VISA. It could mean a visa (travel) or Visa (credit cards).

Zdnet titles it well “Researchers discover and abuse new undocumented feature in Intel chipsets”


https://www.blackhat.com/asia-19/briefings/schedule/index.ht...

Excerpt: "The complexity of x86-based systems has become so great that not even specialists can know everything."

We need a new CPU manufacturer. One that is built from the ground up on transparency, auditability, accountability, and Einstein's "as simple as possible but no simpler" maxim.

Such an entity cannot, CANNOT be a corporation.

The reason why such an entity cannot be a corporation is because a corporation is legally bound to the legal structure in the country in which it operates, and that legal structure may create, one way or another, a door for secret agreements / secret hardware modifications between that company and the host government.

In other words, transparent, ethical engineers with NO conflicts of interest -- are no longer running the show... Corporate lawyers and Government lawyers are.

That has to change in the future...


>Such an entity cannot, CANNOT be a corporation.

How would that work? You want a government-designed secure processor... why wouldn't you figure that the reason they are insecure now is government involvement?

If you want an open source processor... that's cool, and corporations can and already do that.

Now... with any solution proposed, verify to me that the silicon as designed is really the silicon made, esp when talking about 7nm.


One possible solution: https://libresilicon.com

I don't care who produces the processor, but whoever does must be open, transparent, accountable, auditable, etc. as enumerated above.

Whichever group does this must not be guided by lawyers who enter into secret agreements. That is, ALL legal aspects of this group must be open to the public as well.

You raise an excellent point, which is "how is silicon verified as being the same silicon that was designed?".

Usually silicon is verified through tests at various levels of abstraction (electrical, signal, logical, single instruction, programs, etc.) but none of these directly verify that the silicon produced is the silicon designed.

So, you are correct, that's a very real problem that needs a very real solution in the future...


Can I enter your factory and inspect the dies as they come off the line, before they are sealed? If yes, then you could indeed verify the silicon is as designed. Just as you can take a chip that does not work and diagnose the problem, this capability would seem to be necessary to build a chip (or a chip factory) in the first place. How would you troubleshoot the manufacturing process if you couldn't audit the product for quality/accuracy?


Completely random inspections of product in any stage of the manufacturing process by third parties is a good idea.

I don't know the practical reality of any company permitting that, but it's a good idea. Some future chip manufacturing group will hopefully do that.

Audits of any form at any time by any party should be permitted, rather than denied.


I was re-reading the 2017 AMT story, it is so strange. How is this even possible?

https://www.tenable.com/blog/rediscovering-the-intel-amt-vul...


If this is truly for assembly-line diagnostics, they should add a hardware self-destruct to the interconnect with those circuits: after VISA serves its intended purpose, fry it. Is that a reasonable and technically feasible option?


What exactly is 'Orange Mystery'? From the context of the article they make it sound like it is another known vulnerability, but I went looking and can't find anything related with that name.


Orange Mystery (their made-up name) is an intended way to get JTAG access to the TXE if the CPU is in manufacturing mode - something that only happens if an OEM forgets to turn it off before shipping. This was the case with Intel MacBooks before it was fixed. I haven't heard of other manufacturers making the same mistake.


IIRC CSME (TXE/ME) is implemented in the PCH, not the CPU. It's also pretty common for consumer gaming motherboard manufacturers to leave manufacturing mode on. I know at least MSI and Gigabyte do.


It's common for gaming motherboards to leave manufacturing mode on? Got any info on that, can't find anything on google about this.


How do I find out if this debug mode is enabled?


"VISA... it's everywhere you don't want to be..." <g>


I mean, you could look at it as a free 20-channel logic analyzer...


Getting real tired of hearing about how our monopolistic glorious chip-maker leaks data like a sieve, almost as if there's some sort of monetary benefit to doing so...


This is why we must all move to end-to-end encryption. We should limit "concentrations of information" just like concentrations of power.



