“Paranoid Mode” Compromise Recovery on Qubes OS (qubes-os.org)
66 points by jerheinze on Apr 29, 2017 | 14 comments



> …I don’t believe that advances in so called “safe languages” or anti-exploitation technology could significantly change this landscape. These approaches, while admittedly effective in many situations, especially against memory-corruption-based vulnerabilities, cannot address other broad categories of software vulnerabilities, such as security bugs in application logic, nor stop malicious (or compromised) vendors from building backdoors intentionally into their software.

True. But never underestimate how common memory corruption bugs are. It's fucking embarrassing just how common they are. Look at the Project Zero tracker. Just the first page of the newest issues: "double-free", "out-of-bounds write", "use-after-poison", "use-after-free", "kernel double free", "kernel memory corruption due to off-by-one", "kernel heap overflow", "kernel uaf due to double-release", "heap-buffer-overflow"… And it's these bugs that often lead to the scariest situation for regular users, "I just visited a web page and my browser got pwned".
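For a sense of the bug class, here's a minimal, contrived use-after-free in C; real ones hide behind far more indirection, but the core mistake is the same:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *session = malloc(32);            /* allocation A */
        strcpy(session, "user=alice;admin=0");

        free(session);                          /* object freed... */

        char *attacker = malloc(32);            /* ...allocator may hand back the same chunk */
        strcpy(attacker, "user=alice;admin=1");

        /* use-after-free: the stale pointer is still dereferenced,
           so this may print attacker-controlled data */
        printf("%s\n", session);
        return 0;
    }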


An excellent point that applies to almost any system:

> The inconvenient and somehow embarrassing truth for us – the malware experts – is that there does not exist any reliable method to determine if a given system is not compromised.


This is true for the x86-based desktops Qubes targets. You might get past it if you can tolerate a console-like experience on lower-risk hardware. One example is hardware whose state is read-only without physical modification of the machine, such as changing a jumper or flipping a switch. Alternatively, a combo of ROM and flash where the ROM is immutable but loads signed flash containing a correct-by-construction, heavily-pentested module. Apply that to the various chips in the design. Read-only memory for firmware protection dates back to a mainframe in the 1970's where you had to physically pull the old ROM out and put a new one in. The OS was built on top of the "Nucleus" API in that design for consistency. Microsoft mostly copied that in VerveOS's Nucleus, but minus the read-only firmware.
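To make the ROM-plus-signed-flash idea concrete, here's a toy, host-runnable sketch. All the names are hypothetical, and a plain byte compare stands in for the real signature verification a shipping design would use:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define FLASH_SIZE 16

    /* Mutable flash contents (in a real design: the firmware stage loaded from flash). */
    static uint8_t flash_image[FLASH_SIZE] = "trusted module";

    /* Value baked into the immutable ROM at manufacture time. */
    static const uint8_t rom_expected[FLASH_SIZE] = "trusted module";

    /* Stand-in for the cryptographic verification the ROM would really do. */
    static int rom_verify(const uint8_t *image, size_t len) {
        return memcmp(image, rom_expected, len) == 0;
    }

    int main(void) {
        if (!rom_verify(flash_image, FLASH_SIZE)) {
            puts("ROM: flash image rejected, refusing to boot");
            return 1;
        }
        puts("ROM: flash image accepted, handing off control");
        /* in the real design: jump into the verified flash stage here */
        return 0;
    }

The point is that the decision to run the flash stage lives in memory an attacker can't rewrite from software.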

An old trick that researchers and implementors should breathe more life into. The hardware companies prefer more mutable storage for financial reasons. Customers aren't big on replacing hardware, but security-focused ones might go with pluggable ROMs as long as the ROMs aren't changed often. Hence, correct-by-construction approaches that cause few to no defects.


> One example is where the state is read-only without physical modification of the machine such as changing a jumper or flipping a switch.

Joanna Rutkowska, the main developer of Qubes OS, has an article about it and is probably working hard on an implementation for x86 laptops:

https://blog.invisiblethings.org/2015/12/23/state_harmful.ht...


That's certainly an interesting design in this space. Being a laptop would make it more convenient. I think she should talk to the Purism or Minifree folks since their crowd will already pay a bit extra for free or more private stuff.


Purism already ships Qubes as an install option (and gives a small donation to Qubes if you opt to use it).

This is the reason I bought a Librem 13 laptop from them - they were already certified to work well with Qubes.


Yeah, it's one of the reasons I suggested them. They already support the project. They can build laptops. Maybe they can build a stateless laptop, too.


Is what you're describing essentially a trust anchor / root of trust, as implemented in TPMs? I realize that TPMs aren't implemented exactly as you describe, but they are designed to be vulnerable only to physical attack, not software attack.

I ignored the (negative) hype and looked into TPMs recently, and I encourage others to do the same.[0] They look like excellent solutions with the important exception that two of the three key hierarchies, the platform hierarchy and endorsement hierarchy, appear to be fundamentally in the vendor's and not in my control (the latter hierarchy can be disabled, as I understand it, but its functionality is then lost). It's surprising that enterprise IT would tolerate that - I'm not sure I will - but perhaps they can have the manufacturer deploy the corporation's keys in the roots of those hierarchies.

Also, the TPM's security as a trust anchor depends on its implementation. They look good in theory, but I have no idea if the various vendors actually implement them effectively.

[0] By far the best source I found is A Practical Guide to TPM 2.0: Using the Trusted Platform Module in the New Age of Security by Arthur and Challener. It's also recommended by the Trusted Computing Group, the authors of the TPM specification.


You're getting the idea. The difference is that what I described leaned more toward replaceable, open hardware and user control of the process than the TPM does. The TPM scheme was designed fairly well for an industry project, but it's meant to accommodate lowest cost, DRM, and a whole software stack.


Total bull. Take the filesystem offline and run Tripwire over it (assuming you recorded the baseline after a fresh install).
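To spell out what that actually checks, here's a toy version of the idea in C: hash a file and compare it against a baseline recorded when the system was known-good. The path and baseline value are hypothetical, and the non-cryptographic hash stands in for the proper digests the real tool uses:

    #include <stdio.h>
    #include <stdint.h>

    /* Toy Tripwire: FNV-1a over one file, compared to a recorded baseline. */
    static uint64_t fnv1a_file(const char *path) {
        FILE *f = fopen(path, "rb");
        if (!f) return 0;
        uint64_t h = 0xcbf29ce484222325ULL;
        int c;
        while ((c = fgetc(f)) != EOF) {
            h ^= (uint64_t)(unsigned char)c;
            h *= 0x100000001b3ULL;
        }
        fclose(f);
        return h;
    }

    int main(void) {
        const char *path = "/mnt/target/bin/login";  /* hypothetical offline-mounted path */
        uint64_t baseline = 0xdeadbeefdeadbeefULL;   /* hypothetical value from the fresh-install run */
        uint64_t now = fnv1a_file(path);
        printf("%s: %s\n", path, now == baseline ? "matches baseline" : "MODIFIED");
        return 0;
    }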

We solved this problem in the 90s. Try to keep up.

sigh


We had BIOS-based malware in the '90s, too.


Right. And what do you do when the filesystem comes back clean because the malware resides in some re-flashed firmware?

You can almost get a pass for being condescending ("try to keep up") if you know what you are talking about, but being both condescending AND wrong just makes you look foolish.


That doesn't help you if the code running outside your filesystem is compromised.

For example, a backdoor implanted in the disk firmware would be virtually undetectable for the vast majority of users.


> True, there is a number of conditions that can warn us that the system is compromised, but there is no limit on the number of checks that a system must pass in order to be deemed “clean”.

This is wrong. A computer's behaviour, even if it's allowed to access "true randomness", can be determined in finitely many steps. Sure, the upper bound on the number of steps is infeasibly big, but it's not without limit.

Practically, there might be no difference if you assume there is no limit, but excluding the possibility seems unjustified.



