DARPA's “Unhackable” Computer (eetimes.com)
120 points by deepnotderp on Dec 26, 2017 | 52 comments



I'm not familiar with the MORPHEUS proposal specifically, but I do have a tiny bit more knowledge of the SSITH program overall, and this has a terrible explanation of what it is. It's effectively about exploring (with working prototypes in mind) new instruction sets—or, in practice, modifying existing instruction sets—with mitigations for certain common classes of vulnerabilities implemented directly in hardware: specifically, vulnerabilities related to "permissions and privileges, buffer errors, resource management, information leakage, numeric errors, crypto errors, and code injection."

The original proposal says nothing about "unhackable"; in fact, it specifically quotes someone as saying that eliminating these hardware vulnerabilities would "…effectively close down more than 40% of the software doors intruders now have available to them"—a far cry from unhackability! It's pretty typical of both science reporting and marketing bluster to start from "addressing certain vulnerabilities to reduce intrusions by less than half" and end up at "unhackable".

The quotes here come from this announcement, from before the proposals were selected: https://www.darpa.mil/news-events/2017-04-10


Our research group at Georgia Tech applied for this grant but we were unfortunately not selected.

As far as I recall, $50m was allocated to the project over a 3-year period. Unlike NSF grants, DARPA grants typically require a working prototype by the end of the funding period. In this case, DARPA needs a fully functional implementation of the security scheme on a RISC-V processor, as well as a development toolchain that can be used to "secure" generic software and/or hardware applications.

At first glance, it sounds like this team is applying instruction set randomization at the micro-architecture level; as far as I know, this has been done before at a smaller scale. Our approach was to address each of the CWEs (vulnerability classes) with a different technique, which I think contributed to the rejection of our proposal.

Edit: Ignore the title. The goal of SSITH is lofty and likely impossible to achieve for all cases in practice. But this is how DARPA operates: they come up with (currently) far-fetched goals with the hope that one of the funded approaches strikes gold.


That's what I'm reading out of it. Anyone curious can look up "instruction set randomization" security, or combos of the words "security," "diversity," and "moving target." I had conceptual designs for doing it at microcode and/or RTL with a NISC-like approach.

Sorry your team didn't get picked. I wonder if any submissions came in from Draper or someone on the SAFE architecture. CHERI as well. They're already proven in other designs. Chopping down the bit size for CHERI while replacing BERI MIPS with Rocket RISC-V would seem straightforward. Throw in some optional enhancements.

I'll have to dig into this program after work to see more about the requirements and submissions.
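
For anyone who hasn't run into instruction set randomization before, here's a minimal toy sketch of the classic software-level version (not whatever MORPHEUS actually does; the key width and names are invented for illustration): code bytes get XORed with a per-process key at load time, and a trusted fetch path undoes it right before decode, so injected code that doesn't know the key turns into garbage.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Scramble the text segment once at load time with a per-process key. */
    static void isr_scramble(uint8_t *text, size_t len, uint8_t key) {
        for (size_t i = 0; i < len; i++)
            text[i] ^= key;
    }

    /* What a hardware decoder would do on every fetch: undo the scrambling. */
    static uint8_t isr_fetch(const uint8_t *text, size_t pc, uint8_t key) {
        return text[pc] ^ key;
    }

    int main(void) {
        uint8_t code[] = { 0x90, 0x90, 0xc3 };   /* stand-in "instructions" */
        isr_scramble(code, sizeof code, 0x5a);
        /* The in-memory bytes are now gibberish to injected code;
           only the keyed fetch path sees the real instructions. */
        printf("%02x\n", isr_fetch(code, 2, 0x5a));   /* prints c3 */
        return 0;
    }

Real implementations key per page or per process and do the XOR (or a stronger cipher) in the fetch path, but this is the core trick.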


I don't think it's ISA randomization if the EETimes article is accurate:

> Morpheus works its magic by constantly changing the location of the protective firmware with hardware that also constantly scrambles the location of stored passwords.

Constantly changing the location sounds like dynamic ASLR.

I looked at umich to see if there was anything more detailed but all I found was this press release which is pretty much the same article:

http://www.eecs.umich.edu/eecs/about/articles/2017/morpheus....


Todd Austin's group has some (old) publications on something close at least. Looks like they're forcing control flow to not use indirect jumps and branches somehow and using that to do dynamic ASLR.


Hey, that's super cool! I'd love to learn more about the work you guys are doing; would you mind pinging me?


I'm skeptical. As far as I can tell, their 'unhackable' solution is some form of super-advanced ASLR that moves 'some' kind of software around and decrypts and re-encrypts it on the fly. By continually encrypting with new keys and such, they are betting that an attacker won't have the time to find vulnerabilities. However, this all assumes the 'advanced ASLR' itself isn't vulnerable, and moreover they are not building software that has no bugs, but rather just throwing all the bugs behind a pretty big locked door.

Sure, it's a cool idea, but let's not call it unhackable.
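
To make the bet concrete, here's a toy sketch of what that "keep moving the target" loop looks like under my reading of the article (definitely not the actual MORPHEUS mechanism; the epoch count and names are invented): a region of sensitive state gets re-keyed every epoch, so anything the attacker learned about the old key or layout goes stale.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdlib.h>

    /* Re-encrypt a region under a fresh key: undo the old key, apply the new one. */
    static void rekey(uint8_t *region, size_t len,
                      uint8_t old_key, uint8_t new_key) {
        for (size_t i = 0; i < len; i++)
            region[i] = (uint8_t)((region[i] ^ old_key) ^ new_key);
    }

    /* Churn loop: every epoch the secrets move under a new key, so a leaked
       key or pointer is only useful for one epoch. The catch, as above:
       rekey() and its key storage are now the crown jewels. */
    static void churn(uint8_t *secrets, size_t len, unsigned epochs) {
        uint8_t key = 0;
        for (unsigned e = 0; e < epochs; e++) {
            uint8_t next = (uint8_t)rand();
            rekey(secrets, len, key, next);
            key = next;
        }
    }

    int main(void) {
        uint8_t secrets[16] = {0};
        churn(secrets, sizeof secrets, 1000);
        return 0;
    }
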


The "unhackable" name probably just comes from a sales pitch or an overenthousiastic journalist.

The Titanic wasn't called unsinkable by people who knew what they were talking about. Same thing for that exploit mitigation hardware.


Are we seriously trying to fix buffer overflows in hardware instead of just moving on from C?


Unclear what the article is really talking about. However, regardless of programming language, I do think that microarchitectures could do more for security. When going for memory safety, why stop at the highest level?

Frankly, if it were about memory safety, though, I think we could count C out. A microarchitecture with inherent protection against memory bugs would likely not be able to provide its advantages to vanilla C or existing C software.



Unfortunately, given the industry reliance upon standard C, I doubt capability machines will ever catch on.


CHERI is a capability machine designed for C programs, and it runs FreeBSD.


My point is more that it requires modifications to existing C code to use the full capabilities (no pun intended) of a capability machine.


I wonder if this could be done with an alternate stdlib: editing the usual heap-management functions and the like so they do the safety work behind the scenes.
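
Hardened allocators already do something like this, but as a rough sketch of the idea (the function names are made up, and a real version would do far more): wrap the allocation functions so each block carries out-of-band metadata that gets checked on free.

    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>

    #define CANARY 0xA5A5A5A5u

    /* Allocate room for a size header and a trailing canary around the
       caller's block, and hand back a pointer past the header. */
    void *safe_malloc(size_t n) {
        uint8_t *p = malloc(sizeof(size_t) + n + sizeof(uint32_t));
        if (!p) return NULL;
        memcpy(p, &n, sizeof(size_t));
        uint32_t c = CANARY;
        memcpy(p + sizeof(size_t) + n, &c, sizeof c);
        return p + sizeof(size_t);
    }

    /* On free, verify the canary; a linear overflow past the block tripped it. */
    void safe_free(void *ptr) {
        if (!ptr) return;
        uint8_t *p = (uint8_t *)ptr - sizeof(size_t);
        size_t n;
        memcpy(&n, p, sizeof(size_t));
        uint32_t c;
        memcpy(&c, (uint8_t *)ptr + n, sizeof c);
        if (c != CANARY) abort();   /* heap overflow detected */
        free(p);
    }

The hardware angle would be pushing those checks (or pointer bounds, as in CHERI) down so the wrapper doesn't have to pay for them in software.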


Why not get safety AND easy concurrency by going to Software Transactional Memory in hardware? I don't know the specifics of the overhead off the top of my head but I imagine we're getting close to the point where it's a reasonable cost to bear for the benefits it would bring...
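
Hardware transactional memory already shipped in the form of Intel TSX/RTM (with a rocky history of errata and microcode disablement). A minimal sketch of the concurrency half, assuming a TSX-capable CPU and gcc/clang with -mrtm; note it doesn't buy you memory safety on its own:

    #include <immintrin.h>

    static long counter;

    /* Try the update inside a hardware transaction; fall back to an
       atomic read-modify-write if the transaction aborts or TSX is off. */
    void txn_increment(void) {
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            counter++;          /* commits atomically at _xend() */
            _xend();
        } else {
            __sync_fetch_and_add(&counter, 1);
        }
    }
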


Do you know what the JVM, Node, and Python runtimes are written in? It's not like there aren't bugs in other software. When you have millions of lines of code, there is always going to be something that someone forgot to think about.

Unless you have a very trivial program, I think it's hard to make anything "unhackable."


>JVM

Well, here's a Java Virtual Machine written in Java: https://github.com/beehive-lab/Maxine-VM

Not to counter your point, but to keep the discussion going :)


> Do you know what the JVM, Node, and Python runtimes are written in?

That doesn't matter to the thesis that follows:

> It's not like there aren't bugs in other software.


There were/are architectures (Burroughs large systems, the iAPX 432, to some extent the AS/400) with fine-grained memory protection which essentially solved all the memory-safety issues. Apart from the fact that some such machines were terribly inefficient (e.g., the iAPX 432), the major reason why you don't see them much today is that there was no sane way to run existing C code (or any other code that expects a flat memory space, for that matter) on them while preserving the safety features.

So the bottom line is that such hardware architectures necessitate using more high-level languages than C.


I worked on Burroughs large systems - they had a stack architecture and memory locations were tagged by data type to ensure safe access. ALGOL scope rules were effectively enforced in hardware. The issue even then was that these rules did not work once you moved to COBOL, or C or other languages which had different scope rules / data types from ALGOL and so hacks had to be used which effectively gave up a lot of the safety features of the hardware.
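
For anyone who hasn't seen a tagged architecture, here's a rough software model of what the hardware gives you. On the Burroughs machines the tag bits were extra bits on every word, invisible to ordinary code; the names below are invented for illustration.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef enum { TAG_INT, TAG_PTR, TAG_CODE } tag_t;

    /* Every memory word carries a type tag alongside its bits. */
    typedef struct {
        tag_t    tag;
        uint64_t bits;
    } word_t;

    /* Model of the hardware check: an access with the wrong tag traps
       instead of silently reinterpreting the bits. */
    static uint64_t load_as(const word_t *w, tag_t expect) {
        if (w->tag != expect) {
            fprintf(stderr, "tag violation\n");
            abort();
        }
        return w->bits;
    }

    int main(void) {
        word_t w = { TAG_INT, 42 };
        printf("%llu\n", (unsigned long long)load_as(&w, TAG_INT)); /* ok */
        load_as(&w, TAG_PTR);   /* the "hardware" trap: wrong tag */
        return 0;
    }

The point the parent makes still stands: the scheme works only as long as your language's notion of types and scope matches what the tags encode.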


Security cannot be achieved by merely avoiding programming mistakes.


The question is not if it can be made perfect, but if it can be improved.


If you fix it in hardware you wouldn't have to rewrite everything in a different language. That sounds like a pretty good solution to me.


I feel like that $3.6 million in funding would go a lot further if spent on projects like the Rust language.


Rust's safety was partly inspired by exactly that kind of spending, on Cyclone, a safe dialect of C. The NSF and other organizations fund tons of work on clean-slate languages, type systems, formal methods, etc. The academic incentives usually lead to quickly thrown-together prototypes that aren't production-ready. Even the good ones rarely get used by programmers in general. OCaml was one of the exceptions. What gets adoption is normally a language/platform with a knock-off of their efforts, pushed by a big company or used in a "killer app." Bandwagons, in other words.

The consistently low adoption of clean-slate tech, along with lock-in effects, means the opposite is true: massive investment should go into making the most common tech more correct, secure, and recoverable, as simply as possible for users. If it is clean-slate, it should stay as close as possible to what people already know, with clean integrations. We can also continue to invest in ground-breaking stuff for niches that will use it (e.g., SPARK Ada for aerospace) and/or new bandwagons that might build on it (e.g., the Rust ecosystem).


I had forgotten Cyclone's name; thanks for the reminder. I've always felt kind of sad that some of the small features like int@ didn't make their way back into C. I'm glad at least C++ has non-null pointers via references (but with stricter semantics and potentially undesirable syntactic sugar) and fat pointers via span<T>, but that's useless when I have to write in C for legacy reasons.
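
For anyone who hasn't used Cyclone or span<T>, here's the fat-pointer idea modeled by hand in plain C (the names are invented; hardware variants like CHERI enforce the bounds for you): the pointer and its length travel together, and every access is checked.

    #include <stdlib.h>
    #include <stdio.h>

    /* A "fat" pointer: base pointer plus element count. */
    typedef struct {
        int    *ptr;
        size_t  len;
    } int_span;

    /* Bounds-checked access: going past the end traps instead of
       silently corrupting adjacent memory. */
    static int span_get(int_span s, size_t i) {
        if (i >= s.len)
            abort();
        return s.ptr[i];
    }

    int main(void) {
        int buf[4] = {1, 2, 3, 4};
        int_span s = { buf, 4 };
        printf("%d\n", span_get(s, 3));   /* fine */
        span_get(s, 4);                   /* aborts: out of bounds */
        return 0;
    }
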


Trevor Jim revived it a bit with virtualization for those wanting to play with it:

http://trevorjim.com/unfrozen-cyclone/


Oh, that looks interesting, thanks for the link!


So instead of spending money on a completely revolutionary approach that would add a layer of security on top of whatever protections an up-and-coming language already provides (a language that's already doing a tremendous job of raising awareness about the relationship between programming languages and security as it invents a new paradigm, the borrow checker), we should just hand the money to the latter?


Rust isn't a panacea. Research funding should go towards exploring a wide array of subjects. Doing anything else ends up with a monoculture.


Agreed. Although I do think that memory safe systems languages are seriously low hanging fruit.


It sounds like an oxymoron. Given the meaning of hacking today, it seems a computer is by definition hackable; if you can't hack it, what's the point? There's surely some mathematical way to prove that the usefulness of a computing system declines as its hackability is removed, such that the least hackable system resembles more or less a rock.


Eh? It's about ensuring that a program running on the computer cannot be altered or compromised in order to run unintended code.


Wouldn't abandoning the von Neumann architecture do this immediately? Store the code and data in separate memories. I'd think that would take out almost all exploits in one strike, no?
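
You don't get a true Harvard split on commodity hardware, but W^X is the usual approximation already in wide use: a page is either writable or executable, never both. A minimal POSIX sketch (error handling mostly elided); it stops plain code injection but not code-reuse attacks like ROP.

    #include <sys/mman.h>
    #include <string.h>

    /* Copy code into a writable page, then flip it to read+execute so it
       can never be modified and executed at the same time. */
    void *load_code(const unsigned char *code, size_t len) {
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return NULL;
        memcpy(p, code, len);
        mprotect(p, len, PROT_READ | PROT_EXEC);
        return p;
    }
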


It appears that they don't want any sacrifices... just a system to run existing code on top of. But yes, you are correct.


They talk about this in the article:

> DARPA’s stated goal of “hack resistance” appears to hedge a bit on whether truly unhackable hardware is achievable.


I find it difficult to parse out what's really going on here from the hodgepodge of attempted layman's explanations.


Not to mention the weird inaccuracies, like tying vPro (AMT) to Xeon.

Also no mention of TrustZone or similar existing techs for ensuring application integrity.


Labelling something "unhackable" is the quickest way to get some random person/group on the internet to hack it.


> Labelling something "unhackable" is the quickest way to get some random person/group on the internet to hack it.

Perhaps doing such penetration testing is actually what they (secretly) want people to do?


I actually laughed out loud at this sentence:

> Austin likens Morpheus’ defenses to requiring a would-be attacker to solve a new Rubik’s Cube every second to crack the chip’s security.

Hopefully that is a gross simplification for the benefit of less tech-savvy folks. A physical Rubik's Cube can nowadays be solved in 600 ms.


To be fair, 600ms is a long time for what I assume is an analogy for a hashing algorithm.

Maybe it would be a fun idea to have sensitive data encrypted in such a way that a part of the hashing algorithm involves solving a physical Rubik's cube.


I would say that 600ms is just a _mechanical_ limitation.


Giggle at https://www.youtube.com/watch?v=N1b6iPYj3YQ (some 2 moves at once).


I don't know enough about computer security, but I think the talk from Joanna Rutkowska about 'Towards (reasonably) trustworthy x86 laptops' [0] is basically pushing for 'hack resistance' from a different angle, with a stateless laptop. I guess that's focussing more on verifying the lack of hacks rather than making it unhackable.

[0]: https://media.ccc.de/v/32c3-7352-towards_reasonably_trustwor...


Yep. And Enigma is unbreakable, that's why it's totally safe.


What about the Mill architecture? https://millcomputing.com/docs/security/


Something as simple as tagged memory becoming a standard feature of the dominant CPU architectures would be a huge step forward for computer security.


what a load of shit. as if hardware bugs don't exist


I can't help but feel that the only "unhackable" computer is one without power that's been recently introduced to a sledgehammer.

Besides, if you wanted to hack some sort of computer system: wouldn't you just take someone's children and do unspeakable things to them until someone cracks? That seems more straightforward, and certainly more reliable, than depending on the intellect of an engineer ("hacker").


> Wouldn't you just take someone's children and do unspeakable things to them until someone cracks,

Eek. No need to get ghoulishly creative! We just call that https://en.wikipedia.org/wiki/Rubber-hose_cryptanalysis :-)



