Hacker News

Pandora's box was opened with the public disclosure of Spectre and Meltdown. Security researchers will continue to find new and better ways of attacking the security boundaries in processors, and there's unlikely to be an end to this any time soon. Exciting time to be in security, not such an exciting time to be a potential victim.

It reminds me of when the first buffer overflows were disclosed and they were followed by a massive rash of buffer overflow vulnerabilities that continued for over a decade.

Not arguing, just asking, but how has Pandora's box been opened with the disclosure of Spectre and Meltdown? We've had security researchers discovering and reporting vulnerabilities for as long as there have been computers, as far as I know?

I do agree that this won't end soon, though. It appears to me that many of the methods CPUs use for better performance are fundamentally flawed in their security, and it's not like we can expect the millions of affected machines to be upgraded to mitigate this.

Before Spectre and Meltdown were disclosed publicly, very very few security researchers were looking at the CPU level, beyond things like attacks against hardware virtualization functionality, TrustZone and friends, ME, etc. Those bugs had existed for ages and could've been found through thorough examination of the processor manuals, but nobody had really looked too hard there. These new bugs were found independently by many different researchers/groups, simply because their attention was drawn to looking at this stuff for the first time.

> and could've been found through thorough examination of the processor manuals, but nobody had really looked too hard there.

I don't think you're giving enough credit. The actual microarchitecture isn't documented much in those manuals, so looking hard at those wouldn't help without making a series of assumptions of how it all works. The authors of recent exploits have been diligently reverse engineering and making sensible guesses.

This person seems to think that security researchers are the only people looking for vulnerabilities, when in fact those who stand to profit significantly are apt to have known about these vulnerabilities long before researchers found them.

But like, those bugs were there. They might already have been exploited without being detected.

Your argument sounds entirely like an argument against responsible disclosure.

Opening Pandora's Box is a good thing. It gets the issues out in the air and visible so you know what more to look out for and what needs to be fixed.

Don't forget that Hope was also released when the box was opened.

I'm not reading anything against responsible disclosure in deaken's comments. How are you getting that?

The whole "since they published that it happened, we've had a bunch of disclosures" line, which is a typical "I don't feel safer when people talk openly about unfixed vulnerabilities" argument.

Err no, no it's not. It's that there's been a ton more attention there. We're no more or less safe than we were before, we simply didn't know about the bugs that were there.

(FYI, I've been a security researcher for 15+ years and work as the head of hacker education for HackerOne; I am very, very pro disclosure. :) )

Another security principle is involved: assume the worst case. If CPU vulnerabilities are a popular subject, they get fixed to some extent: it's much better than letting them be as a tool in the hands of private and government black hat hackers.

Security by obscurity is no security

That's cryptography at scale in a nutshell, and thus far it seems to be a fair enough countermeasure. Cryptography on an individual item with an unknown cipher/salt/hash can be treated as security via entropy, but with a big enough data set and some idea of the target content, things quickly devolve into security via obscurity, since the target content is discoverable with enough time and computational resources. Security via "untamperability" (quantum bits/state) is better; alas, we're not quite there yet.
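The "discoverable with enough time and computational resources" point can be shown with a toy sketch (Python, stdlib only; the XOR "cipher", the 3-digit PIN, and the message are invented for illustration and are not a real cryptosystem): once the secret has tiny entropy and the attacker has "some idea of the target content", exhaustive search recovers it immediately.

```python
import hashlib

def keystream(secret: str, n: int) -> bytes:
    # Derive n pseudorandom bytes from the secret (illustrative only).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(f"{secret}:{counter}".encode()).digest()
        counter += 1
    return out[:n]

def xor_encrypt(plaintext: bytes, secret: str) -> bytes:
    # XOR with the keystream; applying it twice decrypts.
    ks = keystream(secret, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

# The only "secret" is a hypothetical 3-digit PIN: tiny entropy.
ciphertext = xor_encrypt(b"msg: transfer $100 to bob", "427")

# The attacker knows the target content starts with "msg:" and can
# simply enumerate the entire keyspace.
for i in range(1000):
    pin = f"{i:03d}"
    guess = xor_encrypt(ciphertext, pin)  # XOR is its own inverse
    if guess.startswith(b"msg:"):
        print("recovered:", pin, guess)
        break
```

Real ciphers avoid this by using keys large enough that enumeration is infeasible; the "obscurity" of a small or guessable secret buys nothing against a motivated attacker.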

My biggest worry is that all currently known classical "secure" data sets, including encrypted but recorded internet communication, will become an open book a few decades from now. What insights will the powers that be choose to draw from it then, and how will that impact our future society? Food for thought.

This saying rubs me the wrong way, because confidentiality is one third of security (the C in the CIA triad). Obscurity is critical here, or this wouldn't be a vulnerability.

Confidentiality is a valid layer of security, however security solely by obscurity is wrong.

You can have unintentional exploits/vulnerabilities in free/open source software or hardware too.

The critical part is understanding that confidentiality is temporal.

All “secrets” are eventually revealed; security is about managing the risks and timing associated with these revelations.

The question is: how much needs to be kept confidential?

"Obscurity" generally refers to situations where "everything" is confidential. And when everything is a priority-one secret, nothing is, since people can't work like that.

Cryptography attempts to sequester the confidential data into a small number of bytes that can be protected, leaving the larger body of data (say, the algorithm) non-confidential.
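That sequestering can be sketched in a few lines of Python using the standard library's public, well-documented HMAC-SHA256 (the key value and message here are made up for illustration): the entire algorithm is open, and the only confidential material is the short key.

```python
import hashlib
import hmac
import secrets

# The algorithm (HMAC-SHA256) is public and standardized; the only
# thing that must stay confidential is this small key.
key = secrets.token_bytes(32)
message = b"the design, the code, and this message can all be public"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Anyone can run the same public algorithm, but without the key they
# cannot produce a valid tag for the message.
forged = hmac.new(b"wrong key", message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, forged))
```

This is Kerckhoffs's principle in miniature: the system stays secure even when everything about it except the key is public knowledge.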

_Please_ stop parroting this line incorrectly.

Security _ONLY_ through obscurity is not security. Obscurity is a perfectly valid layer to add to a system to help improve your overall security.
