
"Cache timing goes back to at least 2005 with Osvik and Tromer. This isn't a simple cache timing bug, though." (tptacek)

"Cache timing goes back to 2005 with Percival. I published a couple weeks before them. :-)" (cpercival)

Cache timing goes back to the VAX Security Kernel (early 1990's), designed for those A1 certification requirements that tptacek calls useless "red tape." One of the mandated techniques was covert-channel analysis of the whole system. They found lots of side channels in hardware and software, which they tried to mitigate. Hu found one in CPU caches, following up with a design to mitigate it; that was presented in 1992.

https://news.ycombinator.com/item?id=16083384

Since Hu's paper is paywalled, see (b) "cache-type covert timing channels" in his patent:

https://www.google.ch/patents/US5574912
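To make "cache-type covert timing channels" concrete, here's a minimal sketch of the idea in C. It's simplified and partly hypothetical: shared_line and threshold_cycles are names I made up, a real channel needs serializing fences, threshold calibration, and error correction, and I'm only using x86 primitives that actually exist (_mm_clflush, __rdtsc on GCC/Clang):

    #include <stdint.h>
    #include <x86intrin.h>  /* _mm_clflush, __rdtsc (GCC/Clang, x86) */

    /* A cache line the sender and receiver both have mapped,
       e.g. from a shared library. (Hypothetical name.) */
    extern volatile uint8_t shared_line[64];

    /* Sender: in each agreed time slot, encode 1 by caching the
       line and 0 by flushing it. */
    void send_bit(int bit) {
        if (bit)
            (void)shared_line[0];                    /* load: line cached */
        else
            _mm_clflush((const void *)shared_line);  /* line evicted      */
    }

    /* Receiver: time one load. A fast hit means the sender cached
       the line (1); a slow miss means it stayed flushed (0). */
    int receive_bit(uint64_t threshold_cycles) {
        uint64_t t0 = __rdtsc();
        (void)shared_line[0];
        uint64_t dt = __rdtsc() - t0;
        _mm_clflush((const void *)shared_line);      /* reset for next slot */
        return dt < threshold_cycles;
    }

That's the whole trick Hu was mitigating: two processes forbidden by the security policy from communicating can still signal through nothing but shared cache state and a clock.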

So, one of INFOSEC's founders (Paul Karger) built the first secure VMM for non-mainframe machines. The team followed the security-certification procedures, discovering a pile of threats that required fixes ranging from microcode changes for clean virtualization to mitigation of cache timing channels. They published that. Most security professionals outside the high-assurance sector and CompSci ignored and/or talked crap about their work, presumably without reading it. Those same folks later reported on virtualization stacks getting hit in the 2000's by the attack from 1992, on new software with weaker security than KVM/370 had in 1978. Now, another attack making waves uses the 1992 weakness combined with another problem they discovered by looking at what interacts with it. That might have been discovered earlier if they had done the same with x86, like high-assurance security (aka "red tape") did in 1995 for B3/A1 requirements, spotting the potential for SMM and cache issues:

https://pdfs.semanticscholar.org/2209/42809262c17b6631c0f653...

Note: high-assurance security avoided x86 wherever the market allowed for stuff like what's in that report. As the report notes with exemplar systems, the market often forced x86 on them to the detriment of security. Their identification of SMM as a potential attack vector preempted Invisible Things by quite a bit of lead time. That was typical in this kind of work, since TCSEC required thoroughness.

In 2016, one team surveyed the components of a modern CPU plus the research on them. Researchers had spotted branching as a potential timing channel pretty quickly after CPU's got mainstream attention.

https://eprint.iacr.org/2016/613.pdf
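For anyone who hasn't seen the branch channel, it's easy to show. A minimal, hypothetical sketch in C (function names are mine; real attacks measure far subtler differences than an early-exit compare, but the principle is the same):

    #include <stddef.h>

    /* Leaky: returns at the first mismatching byte, so runtime
       grows with how many leading bytes the guess gets right.
       Timing many guesses recovers the secret byte by byte. */
    int check_mac_leaky(const unsigned char *mac,
                        const unsigned char *guess, size_t n) {
        for (size_t i = 0; i < n; i++)
            if (mac[i] != guess[i])
                return 0;  /* secret-dependent branch */
        return 1;
    }

    /* Constant-time: no branch or exit depends on the secret. */
    int check_mac_ct(const unsigned char *mac,
                     const unsigned char *guess, size_t n) {
        unsigned char diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= (unsigned char)(mac[i] ^ guess[i]);
        return diff == 0;
    }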

So, a team following the B3 or A1 requirements of TCSEC for hardware, as in 1990-1995, would've identified the cache channels (as done in 1992) plus other risky components. They'd have applied a temporal or non-interference analysis like they did with secure TCB's in the 1990's to early 2000's. A combination of human eyeballs plus model-checking or proving for interference might have found the recent attacks, too, given the prior problems found in ordering or information-flow violations. This is a maybe, but I say focusing on interactions with known risks would've sped discovery with high probability. As far as resources go, it would be one team doing this on one grant, using standard certification techniques from the mid-1980's, on a CPU that others analyzed in the mid-1990's and found bad for security due to the cache leaking secrets, too many privileged modes, various components implemented poorly, and so on.
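For reference, the non-interference property those analyses check is, roughly, the Goguen-Meseguer condition (my paraphrase of the standard formulation): for every command sequence w, the low-level user's observations must be identical whether or not the high-level (secret) actions happened:

    \forall w \in A^{*}:\quad
      \mathit{out}(w, \mathit{Low}) \;=\; \mathit{out}(\mathit{purge}_{\mathit{High}}(w), \mathit{Low})

Here purge_High deletes all high-level commands from w. A timing channel is exactly a violation of this once "output" includes when answers arrive, not just what they say, which is why the temporal analyses mattered.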

I keep posting this stuff on HN, Lobste.rs, and so on since it's apparently (a) unknown to most new people in the security field for who knows what reason or (b) dismissed by some of them based on the recommendations of popular security professionals who have clearly never read any of it or built a secure hardware/software system. I'm assuming you were unaware of the prior work, given your scrypt work brilliantly addressed a problem at root cause much like Karger et al did when they approached security problems. The old work's importance is clear as I see, yet again, well-known security professionals citing attack vectors discovered, mitigated, and published in the 1990's like it was a 2005 thing. How much do you want to bet there are more problems they already solved in their security techniques and certifications that we'd profit from applying instead of ignoring?

I encourage all security professionals to read up on prior work in tagged/capability machines, MLS kernels, covert-channel analysis, secure virtualization, trusted paths, hardware security analysis, and so on. History keeps repeating. It's why I keep beating this drum on every forum.




If you read my 2005 paper, you'll see that I devoted a section to providing the background on covert channels, dating back to Lampson's 1973 paper on the topic. I was very much aware of that earlier work.

My paper was the first to demonstrate that microarchitectural side channels could be used to steal cryptologically significant information from another process, as opposed to using a covert channel to deliberately transmit information.


Hmm. It's possible you made a previously-unknown distinction, but I'm not sure. The Ware Report that started the INFOSEC field in 1970 put vulnerabilities in three categories: "accidental disclosures, deliberate penetration, and physical attack." The diagram in Figure 3 (p. 6), with its radiation and crosstalk risks, shows they were definitely considering hardware problems and side channels, at least for EMSEC. When talking about that stuff, they usually treat it as a side effect of program design rather than something deliberate.

https://csrc.nist.gov/csrc/media/publications/conference-pap...

Prior and current work usually models secure operation as a superset of safe/correct operation. Schell, Karger, and others prioritized defeating deliberate penetration with their mechanisms since (a) you had to design for malice from the beginning and (b) defeating one takes care of the other as a side effect. They'd consider the ability of any Sender to leak to any Receiver to be a vulnerability if that flow violates the security policy. That's something they might not have spelled out, since they habitually avoided accidental leaks with mechanisms. Then again, you might be right that they never thought of it while working on the superset model. It's possible. I'm leaning toward them already considering side channels to be covert channels, given descriptions from the time:

"A covert channel is typically a side effect of the proper functioning of software in the trusted computing base (TCB) of a multilevel system... Also, as we explain later, malicious users can exploit some special kinds of covert channels directly without using any Trojan horse at all."

"Avoiding all covert channels in multilevel processors would require static, delayed, or manual allocation of all the following resources: processor time, space in physical memory, service time from the memory bus, kernel service time, service time from all multilevel processes, and all storage within the address spaces of the kernel and the multilevel processes. We doubt that this can be achieved in a practical, general purpose processor. "

https://csrc.nist.gov/CSRC/media/Publications/conference-pap...

The description is of an incidental problem arising from normal software functioning that can be maliciously exploited with or without a Trojan horse. They focus on penetration attempts since that was the culture of the time (rightly so!), but they know it can be incidental. The second quote shows they also knew just how bad the problem was, with later work finding covert channels in all of those resources. Hu did the timing channels in caches that same year. Wray made an SRM replacement for timing channels the year before. They were all over this area, but without a clear solution that wouldn't kill the performance or pricing. We may never find one, whether we're talking timing channels or just secure sharing of physical resources.
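In code terms, the "without using any Trojan horse" case is just the victim's normal functioning doing the sending. A simplified, hypothetical sketch (table, victim_lookup, and recover_secret are names I made up; flush+reload style, with the same caveats about fences and calibration as any real measurement):

    #include <stdint.h>
    #include <x86intrin.h>  /* _mm_clflush, __rdtsc */

    #define LINE 64

    /* Victim: ordinary code with no intent to transmit anything,
       but its memory access pattern depends on a secret byte. */
    extern const uint8_t table[256 * LINE];

    uint8_t victim_lookup(uint8_t secret) {
        return table[(size_t)secret * LINE];  /* touches one line */
    }

    /* Attacker sharing the table (e.g. via a shared library):
       flush every line, let the victim run, then time reloads.
       The lone fast line reveals the secret index. */
    int recover_secret(uint64_t threshold_cycles) {
        for (int i = 0; i < 256; i++)
            _mm_clflush(&table[i * LINE]);

        /* ... victim_lookup(secret) runs here ... */

        for (int i = 0; i < 256; i++) {
            uint64_t t0 = __rdtsc();
            (void)*(volatile const uint8_t *)&table[i * LINE];
            if (__rdtsc() - t0 < threshold_cycles)
                return i;  /* cached => the victim touched it */
        }
        return -1;
    }

It's the same channel as the deliberate sender/receiver pair I sketched in my first comment; the only change is who does the sending. That's part of why I lean toward the old guard treating the side channel as a special case of the covert channel rather than a separate category.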

Now, as far as your work goes, I just read it again as a refresher. It seems to assume, not prove, that the prior research never considered incidental disclosure. Past that, you do a great job identifying and demonstrating the problem. I want to be extra clear here that I'm not claiming you didn't independently discover this or do something of value: I give researchers like you plenty of credit elsewhere for researching practical problems, identifying solutions, and sharing them. I'm also grateful for those like you who deploy alternatives to common tech, like scrypt and Tarsnap. Much respect.

My counter is directed at the misinformation rather than you personally. That's my usual activity. I'm showing this was a well-known problem with potential mitigations presented at security conferences; one product was actually built to avoid it; it was highly cited, with subsequent work in high security imitating some of its ideas; this prior work/research is not reaching the new people concerned about similar problems; some people in the security field are also discouraging or misrepresenting it on top of that; and I'm giving the forerunners their due credit plus raising awareness of that research to potentially speed up development of the next, new ideas. My theory is people like you might build even greater things if you know about prior discoveries of problems and solutions, esp the root causes behind multiple problems. That I keep seeing prior problems re-identified makes me think it's true.

So, I just wanted to make that clear, as I was mainly debunking this recent myth of cache-based timing channels being a 2005 problem. It was rediscovered in 2005, perhaps under a new focus on incidental leaks, in a field where the majority of breakers or professionals either didn't read much prior work or went out of their way to avoid it, depending on who they are. Others and I studying such work have also posted that specific project in many forums for around a decade. You'd think people would've checked out or tried to imitate something in the early secure VMM's or OS's by now when trying to figure out how to secure VMM's or OS's. For some reason, the majority of industry and FOSS don't. Your own conclusion echoes that problem of apathy:

"Sadly, in the six months since this work was first quietly circulated within the operating system security community, and the four months since it was first publicly disclosed, some vendors failed to provide any response."

In case you wondered, that was also true in the past. Only the vendors intending to certify under the higher levels of TCSEC looked for or mitigated covert channels. The general market didn't care. There's a reason: the regulations for acquisition said vendors wouldn't get paid their five-to-six-digit licensing fees unless they proved to evaluators that they applied the security techniques (e.g., covert-channel analysis). They also knew the evaluators would re-run what they could of the analyses and tests to look for bullshit. It's why I'm in favor of security regulations and certifications: they worked under TCSEC. Just gotta keep what worked while ditching the bullshit like excess paperwork, overly prescriptive requirements, and so on. DO-178B/DO-178C has been really good, too.

Whereas, why FOSS doesn't give a shit, I'm not sure. My hypothesis is cultural attitudes, how security knowledge disseminates in those groups, and rigorous analysis of simplified software not being fun to most developers versus the piles of features they can quickly throw together in their favorite language. Curious what your thoughts are on the FOSS side of it, given the FOSS model always had the highest potential for high security given its labor advantage. As far as high security goes, FOSS never delivered it even once, with all the strong FOSS made by private parties (esp. in academia) or companies that open-sourced it after the fact. Proprietary has them beat, from kernels to usable languages, several to nothing.



