
Their primary mission is to defend (including both protecting US information security and gathering intelligence) - cases like this show that they have compromised their direct information security defence mission to further their offensive capabilities.


According[1] to Dan Geer[2], the intelligence community is all offense:

    Chris Inglis, recently retired NSA Deputy Director, remarked that if we were
    to score cyber the way we score soccer, the tally would be 462-456 twenty
    minutes into the game, i.e., all offense.  I will take his comment as confirming
    at the highest level not only the dual use nature of cybersecurity but also
    confirming that offense is where the innovations that only States can afford
    is going on.
[1] http://geer.tinho.net/geer.blackhat.6viii14.txt

[2] (among other credentials) CISO at In-Q-Tel


> There’s been speculation about whether the UK, China or the NSA are to blame — but today’s revelation strongly suggests that it might have been the US.

Why would the NSA infiltrate Juniper to change the Dual_EC_DRBG parameters, when the standard parameters are already exactly how they want them?

There is a good chance they noticed that their attacks against Dual_EC_DRBG weren't working - but revealing that pre-Snowden would have proved that they knew the private key and were exploiting it.

That said, I understand there was more than one back-door disclosed.


FWIW, I believe people have said that Juniper was always using non-standard Dual_EC_DRBG parameters. The backdoor changed those parameters to something else. So the NSA wouldn't have already had the parameters they wanted.


I think the dream of homomorphic encryption ever being practical (at least for what I would consider practical) is unrealistic.

Let f(x) represent the encryption of x.

I would assume that a practical system has the following properties:

  * It is possible to compose primitive operators and values to implement a +1 operator (i.e. an operator that adds one to an integer). Call this operator g(x), defined such that g(f(x)) = f(x + 1).
  * If the value of 1 is leaked (which might, for example, be a literal used in a position that is known to the attacker), that shouldn't compromise the entire scheme.
  * The scheme must provide a way to compare for equality so variable-length algorithms are possible. Define a function h such that h(f(x), f(y)) is true iff x = y.

By our assumptions, the attacker knows f(1). They can compute f(i + 1) = g(f(i)), and so they can compute as many small integers as they want. Suppose they have an unknown value u, that they know is a small integer, but they don't know which one. They can test h(u, i) for each i up to some limit to find the value of u. Hence, the encryption scheme is insecure.
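To make the attack concrete, here is a toy sketch (all names and the "encryption" function are made up for illustration; this is not a real homomorphic scheme). Given a known f(1), a homomorphic +1 operator g, and an equality test h, the attacker recovers any small integer by enumeration:

```python
SECRET_KEY = 1234577  # stand-in for whatever makes f hard to invert

M = 2**31 - 1

def f(x):
    """Deterministic 'encryption' of x (toy placeholder)."""
    return (x * 48271 + SECRET_KEY) % M

def g(cx):
    """Homomorphic +1 operator: g(f(x)) == f(x + 1)."""
    return (cx + 48271) % M

def h(cx, cy):
    """Equality test on ciphertexts (the fatal assumption)."""
    return cx == cy

def recover_small_int(cu, c1, limit=1000):
    """Attacker knows only f(1) and the public operators g and h."""
    ci = c1                  # starts as f(1)
    for i in range(1, limit):
        if h(cu, ci):
            return i
        ci = g(ci)           # advance f(i) -> f(i + 1)
    return None

u = 42                       # victim's secret small integer
print(recover_small_int(f(u), f(1)))  # -> 42
```

Nothing here depends on the details of f: any deterministic scheme with these two operators falls to the same enumeration.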


What you're missing is that the result of h(f(x), f(y)) is itself encrypted. They can't tell whether it's true or not.


"Define a function h such that h(f(x), f(y)) is true iff x = y"

If an attacker can compute such a function then any cryptosystem would be broken by the attack you give. The Goldwasser and Micali paper "Probabilistic Encryption" is a very important early result in cryptography about this fact.

The attack implies that no deterministic encryption scheme is secure.

"By our assumptions, the attacker knows f(1)".

There will not be a single value f(1) as secure encryption schemes cannot be deterministic.

There is nothing special about knowing one of the values of f(1) because modern cryptography assumes that the encryption algorithm is public.
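A minimal sketch of the point (toy nonce-plus-hash construction, purely illustrative, not a real cipher): with randomised encryption, two calls to f(1) produce unrelated ciphertexts, so ciphertext equality reveals nothing even though decryption still works.

```python
import hashlib
import secrets

KEY = secrets.token_bytes(16)

def f(x: int):
    """Randomised 'encryption': a fresh nonce on every call, so two
    encryptions of the same plaintext look unrelated."""
    nonce = secrets.token_bytes(16)
    pad = hashlib.sha256(KEY + nonce).digest()[:16]
    ct = bytes(a ^ b for a, b in zip(x.to_bytes(16, 'big'), pad))
    return nonce, ct

def decrypt(nonce: bytes, ct: bytes) -> int:
    pad = hashlib.sha256(KEY + nonce).digest()[:16]
    return int.from_bytes(bytes(a ^ b for a, b in zip(ct, pad)), 'big')

c1a, c1b = f(1), f(1)
print(c1a == c1b)           # almost certainly False: no single "f(1)" exists
print(decrypt(*c1a) == 1)   # but decryption still recovers the plaintext
```

This is why the dictionary attack above only rules out deterministic schemes, not randomised ones.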


'Native' usually means one of two more precisely defined terms:

  * Indigenous - it got there without any human intervention.
  * Endemic - it is not found anywhere else.
I imagine that many sea birds would be indigenous there at the very least.


I suspect the author of the article is American. Everyone thinks that their own accent is neutral and measures everyone else relative to that.

As a non-American, the mentioned singers don't sound American to me, although I would also agree that they sound neutral.

I suspect that a more reasonable explanation is that the phonemes used while singing are more universal than those used in normal speech, and so everyone perceives singing to be closer to their own accent.


> But what if the otherwise loathed real name policy could be turned to service this particular need?

The link between a real person and a Facebook account isn't secure - I could make an account with your name today without too much stress (no need to provide ID unless Facebook thinks your name isn't a real name).


I think the grandparent chose the wrong end of the stick with relating this to "famous" people, which, in turn, threw you off.

Sure, you can register an account in my name, but there are quite a number of people who will not be fooled: people who actually know me. People who know me in real life can tell whether an account is real or not, because they can tell whether I post about things I do, whether I post pictures that are...well, me.

In that case, they can be reasonably sure that the account in question is, in fact, my account. If I attach my GPG key to this account, they can thus also reasonably assume that the GPG key belongs to the account that belongs to me. This essentially gets you the online equivalent of a key-sharing party.


Yes, I deliberately chose the term "prominently visible" and not "celebrity". The context is different with PGP.

Maybe I should have used high-profile as the specifier in that sentence too.


While it seems (based on other comments) that the product mentioned here doesn't do that, if the phone has an alternative data link (e.g. GPRS / 3G / 4G) to the server that stores the credentials, it would be possible to make this more secure.

For example, suppose Alice wants to connect to Bob's 802.11g wifi hotspot using 802.11i-2004 (WPA2) authentication in PSK mode. Charlie and Bob have the password; neither wants to give it to Alice, but Charlie wants to help Alice access Bob's system.

The first step of the normal WPA2-PSK process is to take the password and generate the Pairwise Master Key (PMK) by putting it through a key derivation function (PBKDF2-SHA1). However, the PMK is essentially just a stretched version of the password - so we will assume Charlie also doesn't want to share that.

To continue with the connection, Alice needs to compute the same Pairwise Transient Key (PTK) as Bob. The PTK is a hash computed from the PMK, Alice's random nonce, Bob's random nonce, Alice's MAC address, and Bob's MAC address. Alice could send the latter four pieces of information to Charlie over the existing link; Charlie could then send the computed PTK back to Alice, allowing Alice to make the 802.11g connection without ever learning the PMK or the password.
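A sketch of the two derivations described above, following IEEE 802.11i as I understand it (function names are mine). Charlie's server would run derive_ptk from the stored PMK plus the four public values Alice sends, and return only the PTK:

```python
import hashlib
import hmac

def derive_pmk(password: str, ssid: str) -> bytes:
    # PMK = PBKDF2-HMAC-SHA1(password, SSID, 4096 iterations, 32 bytes)
    return hashlib.pbkdf2_hmac('sha1', password.encode(), ssid.encode(),
                               4096, 32)

def prf_384(key: bytes, label: bytes, data: bytes) -> bytes:
    # 802.11i PRF: concatenate HMAC-SHA1(key, label || 0x00 || data || i)
    out = b''
    i = 0
    while len(out) < 48:
        out += hmac.new(key, label + b'\x00' + data + bytes([i]),
                        hashlib.sha1).digest()
        i += 1
    return out[:48]

def derive_ptk(pmk: bytes, mac_a: bytes, mac_b: bytes,
               nonce_a: bytes, nonce_b: bytes) -> bytes:
    # MACs and nonces are ordered canonically (min || max), so either
    # party computes the same PTK regardless of argument order.
    data = (min(mac_a, mac_b) + max(mac_a, mac_b)
            + min(nonce_a, nonce_b) + max(nonce_a, nonce_b))
    return prf_384(pmk, b'Pairwise key expansion', data)
```

Note that only the PMK input is secret: the MACs and nonces all travel in the clear during the 4-way handshake anyway, which is what makes the relay-to-Charlie idea workable.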

A similar way of transferring enough information to allow the connection to continue is likely possible for other authentication modes like the various 802.1X options.

Of course, implementing this would probably require, at least, a rooted phone (for example, to replace the stock wpa_supplicant on Android).

Also, the server would need to store the password or PMK itself (either in plaintext or encrypted with a key that is kept available at all times to process incoming requests), so it would have a huge database of credentials that could be compromised.


> find a seemingly unrelated reason to terminate you (or make you leave).

That doesn't make sense. The guy who is going around harassing people is creating the liability, not the victim, and will probably create more liability in the future.

I would be very surprised if there was any jurisdiction in the world where terminating an employment contract terminates employer liability for harassment; a terminated former employee who has been harassed is a much bigger threat to company profits than a current one.

The only reason any reasonably competent HR department would fire someone for making a valid complaint of harassment would be if they were ordered to by their superiors to protect themselves or their management colleagues (which is, sadly, a risk in some companies).

Obviously, going to HR should not be someone's first move for a one-off comment. The person probably doesn't realise that they are being offensive, so a reasonable escalation is to tell them you are offended (and why) first, and if it continues, then go to HR. If it still continues and HR doesn't do anything meaningful to address the problem (or wrongfully fires you in retribution), then escalate by hiring an employment lawyer to bring a lawsuit against the company or, depending on your jurisdiction, by complaining to the relevant government body. This is fair to the person making the remark (they have a chance to learn that their conduct is offensive), to the company (they have a chance to know about and address the problem before facing any external action), and to the person who was offended (the problem is fixed one way or the other).

In this case, both sides were potentially in the wrong - throwing your drink on someone could be viewed as assault and is more serious than a single offensive remark. I think that had Kelly Ellis followed a reasonable pattern of escalation, there would be less harassment at Google now, all three parties involved (i.e. the two people and the corporation) would be better off, and we wouldn't be hearing about this now. Obviously it would be even better if the remark had never been made in the first place.

It makes me wonder whether Google HR gives employees training on not harassing people and on how to respond to harassment - that is pretty much standard at big companies of their size, and probably pays for itself in reduced liability (if they get sued out of the blue, they can point to the training to try to shift the blame to the person doing the harassing rather than the company - though that obviously won't work if the plaintiff complained to them and they did nothing).


> That's a failure at the management level, not the developer's level

Management doesn't necessarily know how much effort the work will take. They can ask developers to estimate it, but even then the estimates will not always be accurate.

The crux of movements like the agile movement (and management processes like scrum within it) is to make things more process-orientated - management ensures employees work for x hours a week following a process, and adapts based on the output they produce (increasing resources if it wants more output).

Product-orientated processes (where managers tell employees they have to produce y output, and after that they can slack off or go home if they want) might work if someone's job is to sit on a production line performing a well-understood process, but it does not work well for software. I think both businesses and developers are better off for the trend of abandoning such processes when it comes to software.

Given a process-orientated environment, it is the prerogative of management to make sure everyone works as close as possible to their contracted number of hours - someone who is working less should rightly get called out on it. They may have finished their current task, but they could help someone else with theirs, or work on improving processes / infrastructure / addressing tech debt to make likely future work easier.

Not everyone needs to work the same number of hours in a process-orientated environment, but if someone works less, that should be in their contract (and most likely they will be paid less than if they worked longer), not just the employee unilaterally deciding it.


It is a radar device so the speed of sound is irrelevant. The speed of light in air (and hence the refractive index) does depend slightly on temperature / pressure, but not by very much: https://physics.stackexchange.com/questions/6872/refractive-...


