Hacker News

A thought just came to my mind. Let's say 30 years ago, I said to a colleague with whom I shared access to some Unix systems, "You know, I can use 'ps' to see what processes you are running. If I know the details of certain flaws in those binaries, I may be able to run a custom binary simultaneously on the system and figure out some of your data!" Would he/she be surprised or alarmed?



30 years ago it was not uncommon for everybody to know the root password on Unix systems. Even if you didn’t, there were so many setuid holes that it didn’t matter.

A lot has changed since then.


This is a bit before my time but doesn't sound right to me - my ISP in 1994 certainly didn't give everyone the root password with our shell accounts, nor did my high school in 1996. Security holes were gaping by today's standards, but attackers were equally unsophisticated. Most setuid programs still required wheel access.


30 years ago timing attacks were unknown (at least publicly). The core idea of exfiltrating some secret based purely on specific input data to a program that was memory-safe and perfectly validating its inputs would have been a surprise on its own. Doing so entirely outside the process's direct view doubly so.
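The core idea is easy to demonstrate today. Below is a minimal sketch in Python (everything here — the secret, the comparison routine, the recovery loop — is a hypothetical illustration I'm adding, not anything from the systems discussed in this thread): a comparison that exits early on the first mismatch is memory-safe and validates its input perfectly, yet its running time leaks how much of a guess is correct. The loop-iteration count stands in for elapsed wall-clock time so the example is deterministic.

```python
SECRET = b"hunter2"  # hypothetical secret held by the "victim" routine

def insecure_compare(guess: bytes) -> tuple[bool, int]:
    """Early-exit comparison. Returns (match, work), where `work`
    counts compared bytes and stands in for elapsed time."""
    work = 0
    for a, b in zip(SECRET, guess):
        work += 1
        if a != b:
            return False, work
    return len(guess) == len(SECRET), work

def recover(length: int) -> bytes:
    """Recover the secret byte by byte, observing only (match, work) —
    i.e. only the return value and the 'timing', never SECRET directly."""
    known = b""
    for i in range(length):
        def score(c: int) -> tuple[int, bool]:
            guess = known + bytes([c]) + b"\x00" * (length - i - 1)
            ok, work = insecure_compare(guess)
            # a correct byte lets the comparison loop run one step further
            return (work, ok)
        known += bytes([max(range(256), key=score)])
    return known

print(recover(7))  # recovers b'hunter2' from timing alone
```

The fix, then and now, is the same in spirit: make the observable time independent of the secret (constant-time comparison), or deny the attacker a precise clock.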


The paper "A Retrospective on the VAX VMM Security Kernel" (1991) summarizes analyses of timing side channels in things like hard drive arm movements, citing prior art from 1977, in section VI.E, while explicitly addressing more conventional timing side channels with references to other 1991 papers. One of their solutions for timing side channels was fuzzy time (seem familiar?).
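The fuzzy-time idea can be sketched in a few lines (a hypothetical Python illustration on my part; the 10 ms grain and the coin-flip jitter are assumed values, not the VAX VMM design): instead of exposing a high-resolution clock, return time quantized to a coarse grain and randomly withhold the newest tick, so an observer cannot resolve events more finely than roughly one grain.

```python
import random
import time

GRAIN = 0.010  # 10 ms quantum -- an assumed value, not the VAX VMM's

def fuzzy_now() -> float:
    """A 'fuzzy' clock: real time rounded down to a coarse grain,
    with a coin flip deciding whether the most recent tick is visible
    yet, so sub-grain timing differences are unobservable."""
    ticks = int(time.monotonic() / GRAIN)
    if random.random() < 0.5:  # randomly withhold the newest tick
        ticks -= 1
    return ticks * GRAIN
```

A process that can only read `fuzzy_now()` loses exactly the precision a timing channel needs, at the cost of clock accuracy for legitimate uses.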


Interesting! I have always seen timing attacks cited as originating with Kocher (1996), but I suppose this is some bias toward the current Unix / PC / crypto world and away from mainframe development. The 1991 paper is quite clear, though the 1977 paper it cites just kind of punts on the issue (end of section 3.6):

> Hence, it will not be possible to transmit information over a covert communication channel at a high enough bandwidth to make such attempts worthwhile.

Skimming the citations, though, I'm not 100% sure it's the same thing as Kocher (1996), which has a more direct line to Meltdown/Spectre. The idea that intervals read from high-precision clocks could carry information is the same, but the KVM/370 paper in particular seems concerned with the channel being used for unmonitored communication between two malicious actors, or as a tool to learn something general about what other users of the system are doing, not exfiltration of the stored data itself across a security boundary with an oblivious user. The 1991 paper seems to sit somewhere in the middle.


Importantly, fuzzy time was referenced in the Light Pink Book [1] from the DoD Rainbow Series.

The problems of timing attacks in shared resource systems were well understood well before the Spectre mess.

1. https://fas.org/irp/nsa/rainbow/tg030.htm#5.0


I guess it was unknown not only to the public. Processors were so slow that most often you used one server for one application; those machines did not have enough compute power for multiple workloads simultaneously.


Timesharing was dominant from the 70s up to the start of the PC era, and never really went away on the server side. Performance gains vs. tool cost have been sublinear for decades now; servers used to do only a little less with a lot less.


Thirty years ago was 1990, so I'd guess probably not that surprised. Superscalar speculative processors were in their infancy but already understood, although I don't know if anyone had seriously considered the problems that have been exposed recently.


30 years ago the global economy wasn't running on shared servers. But even back then, if you also mentioned that that "data" was their bank transactions and the password to their account, I'm pretty sure they would be alarmed.



