Why did you not ask the same question when the link was posted 5 days ago? That was the second time it was submitted by the same person in as many days. The real question is who cares? If it gets voted to the front page it is because people want to see it...
Because prior to being posted a week ago, it had last been posted over a year earlier. And yes, you are right in pointing out that the same person submitted it twice in two days, but the first time it appears that nobody saw it. IMO, this is within the HN repost guidelines, but that's a matter of interpretation.
But I agree with you that "If it gets voted to the front page it is because people want to see it..."
14. Simple Network Time Protocol (SNTP)
Primary servers and clients complying with a subset of NTP, called
the Simple Network Time Protocol (SNTPv4) [RFC4330], do not need to
implement the mitigation algorithms described in Section 9 and following
sections. SNTP is intended for primary servers equipped with a single
reference clock, as well as for clients with a single upstream server
and no dependent clients. The fully developed NTPv4 implementation is
intended for secondary servers with multiple upstream servers and multiple
downstream servers or clients. Other than these considerations, NTP and
SNTP servers and clients are completely interoperable and can be intermixed
in NTP subnets.
Usually an SNTP client will add latency/2 to get the correct time, i.e. if a packet takes 350 msec for the round trip, then you'll want to add 175 msec to the wall-clock time in the packet to get the "current wall clock time".
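That correction can be sketched in a few lines (a simplified sketch with names of my own choosing; it assumes a symmetric network path, and a real SNTP client uses the four-timestamp offset formula from RFC 4330, which also cancels local clock offset):

```python
def estimate_wall_clock(server_time, send_time, recv_time):
    """Simplified SNTP-style correction (hypothetical helper):
    assuming a symmetric path, the server's timestamp is about
    round-trip/2 seconds stale by the time the reply arrives."""
    rtt = recv_time - send_time  # round trip, measured on the client's own clock
    return server_time + rtt / 2.0

# packet takes 350 ms to return -> add 175 ms to the server's timestamp
print(estimate_wall_clock(1000.000, 10.000, 10.350))
```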
It makes losing/leaking a private key less of a problem, because it restricts the leakage to a 24h window. It also makes (webserver) key revocation kind of useless, because the certificate is automatically invalid after 24h.
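The automatic expiry amounts to a simple validity-window check (a toy sketch with hypothetical helper names; a real client reads notBefore/notAfter out of the certificate itself):

```python
from datetime import datetime, timedelta, timezone

def cert_still_valid(not_before, lifetime=timedelta(hours=24), now=None):
    """Hypothetical check: a short-lived certificate is only accepted
    inside its 24h window, so a leaked private key ages out on its own."""
    if now is None:
        now = datetime.now(timezone.utc)
    return not_before <= now < not_before + lifetime

issued = datetime(2015, 6, 1, 12, 0, tzinfo=timezone.utc)
print(cert_still_valid(issued, now=issued + timedelta(hours=23)))  # -> True
print(cert_still_valid(issued, now=issued + timedelta(hours=25)))  # -> False
```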
Sure --- there's a neat paper by Franssen (2006) that demonstrates the formal equivalence between optimization problems and the social policy problems Arrow was concerned with. Basically, Arrow says that some constituents will always lose out under any social policy. Franssen showed that you can swap out "composite cost metric" for social policy and "components of the cost metric" for constituents, and the same arguments apply.
They're in the Bell link. One is an MLS workstation and one streams information out to many nodes. Both demand high security given their use case. Both are EAL4 garbage despite high assurance components available. He goes into more detail.
I am not sure this is a new threat; a user's list of known SSIDs has been a recognized threat to privacy for a long time. You do not even need to have an app installed on Alice's phone to track her location. All Eve has to do is listen for probe requests from Alice's laptop, and Eve can get a good picture of where Alice has been and more: "Show me your SSIDs and I'll tell you who you are"
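For illustration, pulling the SSID out of a captured probe request takes only a few lines once you have the frame's tagged parameters (a simplified sketch; a real sniffer also has to put the card in monitor mode and strip the radiotap and 802.11 headers first):

```python
def extract_ssid(tagged_params: bytes) -> str:
    """Walk the 802.11 information elements (1-byte ID, 1-byte length,
    then the body) and return the SSID element (ID 0), if present."""
    i = 0
    while i + 2 <= len(tagged_params):
        ie_id, ie_len = tagged_params[i], tagged_params[i + 1]
        body = tagged_params[i + 2 : i + 2 + ie_len]
        if ie_id == 0:  # SSID information element
            return body.decode("utf-8", errors="replace")
        i += 2 + ie_len
    return ""

# a probe request advertising the SSID "airport", followed by a rates element:
params = bytes([0, 7]) + b"airport" + bytes([1, 4, 0x82, 0x84, 0x8B, 0x96])
print(extract_ssid(params))  # -> airport
```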
1) You can circumvent that problem by randomizing your MAC between probes, as Apple already does, and that doesn't help with the threat we present.
2) SSIDs are not unique: when it says "airport", it can be any airport. When you have access to the MAC of the device, you can pinpoint it uniquely; that's the threat we present.
3) With the threat you link, you might theoretically be able to recover some of the past locations where the user did connect to WiFi. With the threat we present, you get the location history with a time resolution of up to 20 seconds, whether the user connects to WiFi or not, even if they disable WiFi, and you don't have to control any routers. I would say this constitutes a novelty.
==== EDIT ====
4) The link only mentions a theoretical possibility; we show that the threat is real, based on real data collected over six months about multiple people.
I wish there were an afl-fuzz-like tool for fuzzing network traffic. By afl-fuzz for network traffic I mean a general-purpose network fuzzer, not a protocol-specific fuzzer like the Codenomicon IPsec fuzzer. Does anyone know of anything close? Remotely close?
There seems to be significant overlap between what you would want from a network protocol fuzzer and a tool to reverse engineer a network protocol. Netzob is the only protocol-RE tool that I know of, and it seems that development has stalled.
You can attach to a process (-p pid) and then feed it an external initial input (from a pcap, or hand-crafted). Honggfuzz will modify it to maximize code coverage in the network server. I got pretty decent results with e.g. Apache (it must be executed with -X, so it doesn't fork/daemonize).
But what exactly are you fuzzing if you're not protocol-specific when you start? How do you eliminate false positives? You can be generative with a grammar or with some learning capability (even something as "dumb" as aligning similar traffic into clusters, for example).
I am not sure how you are distinguishing my theoretical protocol-agnostic fuzzer from afl. Is afl a jpg/png-specific fuzzer? One of the things that is great about afl is that it is not targeted at a specific file format: it can work with a small corpus or a thorough dictionary of the target format. E.g. I can give afl-fuzz a snippet or two of markdown and it will go nuts, or if I want to be exhaustive I can feed afl the afl "dictionary" from the commonmark testsuite.
How do you ever eliminate false positives when fuzzing? And what constitutes a false positive?
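The format-agnostic part is easy to illustrate: afl's power comes from coverage-guided feedback, but the mutation step itself knows nothing about the input format (a toy sketch in that spirit, not afl's actual mutator):

```python
import random

def mutate(data: bytes, rng: random.Random, n_flips: int = 4) -> bytes:
    """Format-agnostic mutation: XOR a few random bytes with random
    nonzero values, without knowing anything about the protocol."""
    buf = bytearray(data)
    for _ in range(n_flips):
        pos = rng.randrange(len(buf))
        buf[pos] ^= rng.randrange(1, 256)  # nonzero XOR always changes the byte
    return bytes(buf)

# seed input could be markdown, a jpg, or a captured protocol exchange:
seed = b"GET /index.html HTTP/1.1\r\n\r\n"
mutated = mutate(seed, random.Random(0))
print(mutated)
```

A coverage-guided fuzzer then keeps only the mutants that reach new code paths, which is how afl learns a format it was never told about.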