So if an authenticated PayPal email pops up in your Gmail inbox saying you must do this and that to unlock your account, you may be more likely to comply, because DKIM lends it legitimacy.
I am reading HN on Chrome, but unless I go looking for what browser I use, I wouldn't know.
Always sad. People willing to discount countless hours of expertise and knowledge because a user doesn't know what the name of their browser is. As if that means anything.
For example, I really don't give a shit if my neurosurgeon is aware of what his browser is named. Nor would I dream of calling him stupid if he didn't. Chances are he knows leaps and bounds more than I do on most topics, just not casual desktop computing.
Likewise, discounting someone entirely because they're uncomfortable with or uninterested in computers is one of the most ridiculous, ignorant, and self-absorbed things you can do.
"Assume the user is stupid" isn't really being mean to users. It's just shorthand for "make everything as easy as possible. Sane defaults; great design; remove ambiguity; correct documentation; and so on."
I have mistakenly reported several Amazon security emails to their phishing team.
But I'm happy to continue backing up why this particular Wired headline is silly. DKIM is a cryptosystem backed by the insecure DNS. Mail has been spoofable since before RFC822. The whole idea behind phishing attacks is that you can't trust email. Nobody credible has ever suggested that DKIM resolves that problem; you will find no credible Internet security advice anywhere suggesting that a DKIM signature on a piece of email from Paypal or Chase means you should click on a link in that email and log into something.
So Gmail can simply deadpool hundreds of fake PayPal phishing emails. That doesn't mean the occasional one that gets through by fooling DKIM is authentic - but the security benefits exist.
Unfortunately, in quibbling over the headline, which you are free to do, you argued that anti-spam has nothing to do with security.
Simply put, keeping most of the spam that purports to be from paypal.com out of an inbox is a security issue, even if a determined spammer can thwart DKIM via DNS shenanigans.
US CERT seems to agree: http://www.kb.cert.org/vuls/id/268267
DKIM is one of many, many anti-spam mechanisms in place at Google (and presumably Yahoo). But of course, far fewer people would read and forward this Wired story if you had simply written "Mathematician Finds Weakness In One Google Anti-Spam Measure During Recruitment Attempt".
DKIM plays a key role in keeping phishing e-mails out of the inboxes of hundreds of millions of people who have no idea what PGP or an e-mail header is. It's a standard, not just a Google feature.
And if you'd read the story, you'd see that a number of companies fixed their weak crypto thanks to his efforts.
But, of course, far fewer people would upvote your comments if you didn't diss everything with a tone of condescension.
DKIM is an anti-spam mechanism. It does not authenticate the sender of an email message; to do that, use something like PGP. This is an interesting story, but it's not a story about a "massive net security hole". Mail on the Internet has always been spoofable.
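To make the distinction concrete, here's a minimal sketch in Python. The message, domain, selector, and signature values are all made up for illustration: the d= tag names the domain whose key signed the message, and that is entirely independent of what the From: line claims.

```python
from email.parser import Parser

# Hypothetical phishing message: the DKIM d= tag names the attacker's
# domain, while the From: line claims to be PayPal. The signature could
# verify perfectly and the From: line would still be a lie.
raw = """\
DKIM-Signature: v=1; a=rsa-sha256; d=evil.example; s=sel1;
 h=from:subject; bh=placeholder; b=placeholder
From: service@paypal.com
Subject: Unlock your account

Click here.
"""

msg = Parser().parsestr(raw)

def dkim_tags(header):
    """Parse the tag=value pairs of a DKIM-Signature header."""
    return dict(
        part.strip().split("=", 1)
        for part in header.replace("\n", "").split(";")
        if "=" in part
    )

tags = dkim_tags(msg["DKIM-Signature"])
print(tags["d"])    # signing domain: evil.example
print(msg["From"])  # claimed sender: service@paypal.com
```

A verifier can only tell you the signature matches evil.example's published key; deciding whether to trust evil.example is a separate problem DKIM does not solve.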
What else do you want me to say? Mail on the Internet is spoofable, with or without DKIM. I literally don't know what I can do to placate you at this point.
DKIM doesn't validate the From line. But where is the weakness in trusting that someone with Google's private key approved the entire content of a message signed with that key, and that my trust in Google should therefore extend to the content of that email?
Domains simply are not a meaningful security boundary. At no point in the life of the commercial Internet have they ever been. Yes, there are security mechanisms on the Internet that are simultaneously (a) important and (b) misguided enough to ignore this fact. Fortunately, the most important of them are in practice difficult to reliably and scalably exploit.
This whole story is a tempest in a teacup. As Matthew Green said on Twitter: a 512-bit DKIM key says more about how little Google cares about DKIM than it does about any laxity on Google's part. Google is not lax about security.
Since this thread is also dead, we'll have to wait until someone else tries to get to the top of the front page with a DKIM story to continue arguing about DKIM.
DNSSEC is a boondoggle, and so far as I know no popular application on the entire Internet relies on it for security. But we don't need to hash out DNSSEC vs. DNS to see why a cracked DKIM key isn't a major Internet security hole.
It was on HN too, of course: http://news.ycombinator.com/item?id=1442385
I was surprised then that anyone would be using a 512-bit RSA key in the wild, let alone now.
I never look at DKIM or SPF. If I really care about who a message comes from I use PGP. It's a handy input for learning spam filters that use it as one piece of information, but it has about a 10% failure rate (legit mail where the signature fails to verify because of message manipulation in transit). It's most useful if the domain it's matching against is also trusted in some way. For example, a good signature against google.com is likely to mean the mail is good; a good signature against frohfuwehfwo.biz is not very helpful unless we know that that domain always sends spam.
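A toy illustration of "one piece of information": in a scoring filter, a DKIM pass is just a weighted signal, and its weight depends on the reputation of the domain it matches. All the signals and weights below are invented for illustration, not taken from any real filter.

```python
# Toy spam-scoring filter: higher score = more spammy.
# Signals and weights are made up for illustration only.
TRUSTED_DOMAINS = {"google.com", "amazon.com"}

def spam_score(dkim_pass, dkim_domain, has_suspicious_link, unknown_sender):
    score = 0.0
    if dkim_pass and dkim_domain in TRUSTED_DOMAINS:
        score -= 3.0   # good signature from a trusted domain helps a lot
    elif dkim_pass:
        score -= 0.2   # good signature from frohfuwehfwo.biz means little
    if has_suspicious_link:
        score += 2.5
    if unknown_sender:
        score += 1.0
    return score

print(spam_score(True, "google.com", False, False))       # negative: ham
print(spam_score(True, "frohfuwehfwo.biz", True, True))   # positive: spammy
```

The point is that a valid signature never decides the question by itself; it just shifts the odds, and only meaningfully when the signing domain is itself trusted.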
Also, the most important piece of mail I ever received (from the Prime Minister's office) came without SPF or DKIM and I authenticated it by calling the office. In general, external authentication like that tends to reassure me.
Ha! That's optimistic.
It might be somewhat feasible if they wanted him as a security engineer, not a devops hire. Still, he assumed they had set up what was essentially an elaborate prank just to send a cold-call email to one of probably numerous potential candidates.
How likely is that? What would the risk-to-reward ratio be, considering that many unsolicited recruiting emails are not even read? Isn't it more plausible that it was a genuine mistake on their part? Google is not an infallible, omnipotent being, after all.
I called it my "cleverness attribution error" and wrote about it this summer: http://rachelbythebay.com/w/2012/06/19/attrib/
I've run into it in a few other places, too.
"I got my first glimpse of artificial intelligence on Feb. 10, 1996, at 4:45 p.m. EST, when in the first game of my match with Deep Blue, the computer nudged a pawn forward to a square where it could easily be captured. It was a wonderful and extremely human move. If I had been playing White, I might have offered this pawn sacrifice. It fractured Black's pawn structure and opened up the board. Although there did not appear to be a forced line of play that would allow recovery of the pawn, my instincts told me that with so many "loose" Black pawns and a somewhat exposed Black king, White could probably recover the material, with a better overall position to boot. "
That's about a move most computers of the time would find pretty quickly, and one most decent human players would intuitively have thought OK at first glance.
My theory is that a significant part of his game was based around human psychology, so he found it hard to grasp computers. He played computer-friendly risky openings as if to taunt the machine, and heavily talked up the influence of the programmers on Deep Blue to an almost paranoid extent.
The result was that he lost against Deep Blue when he should have won fairly easily if he'd been more disciplined.
I think you are being very diplomatic. :)
But seriously, this cleverness attribution error might be a big problem with large players like Google. Forum posters, blogs, and the digital versions of mainstream media all seem convinced Google is somehow special. That they know what they are doing, at every turn; that any silly mistake must be a "test". It's a potentially harmful meme: people assuming things like "genius" or "utmost competence" without requiring any proof.
Blind faith followers take note, because here we have _proof_ that Google makes mistakes too. Silly ones at that.
Yes, I can see it now: Iran endures crushing sanctions in order to pursue spam email program.
Hobbyists have been factoring 512-bit keys on a whim for a few years now, so....
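At toy scale the idea is easy to demo. Real 512-bit factoring is done with the general number field sieve using public tools (msieve, ggnfs), not anything like this, but the sketch below (Pollard's rho on a made-up tiny "modulus") shows how mechanical recovering the primes is once the modulus is small enough:

```python
import math
import random

# Pollard's rho: finds a nontrivial factor of a small composite n.
# Illustrative only; real 512-bit moduli fall to GNFS, not to this.
def pollard_rho(n):
    if n % 2 == 0:
        return 2
    while True:
        x = random.randrange(2, n)
        c = random.randrange(1, n)
        y, d = x, 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            y = (y * y + c) % n          # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                       # d == n means a bad cycle; retry
            return d

n = 101 * 103            # a toy "RSA modulus"
p = pollard_rho(n)
print(p, n // p)         # recovers the two primes
```

Once you have the primes you can reconstruct the private key and sign anything you like as that domain, which is exactly what the story describes at 512 bits.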
Wow, the guy's a monster.
Fluent in classical (and Levantine) Arabic, Chinese, Greek; Top Putnam score (twice), teacher, Christian missionary. Sounds like he's got drive.
1. Top Putnam score in Colorado. There's a pretty big difference between that, and say, top Putnam score in Massachusetts (which is more likely the same as top overall due to many Putnam Fellows coming from Harvard or MIT).
2. Elementary proficiency in Classical and Levantine Arabic, Mandarin Chinese, and Koine Greek
http://www.colorado.edu/news/series/cu-boulder-nobel-laureat... (add David Wineland to that list).
Top 50 or even top 200 overall, or whatever, is far more impressive than #1 in a state that has no reputation for high scores.
The rat race is not for everyone.
"But the government of Iran probably could"...At this point I stopped reading, as this article became propaganda.
Did you know this month is National Cyber Security Awareness Month, as advertised by the DHS?
Even if that was true (it's not), how could you know it without reading further?
However, that sentence "But the government of Iran probably could" made the preceding paragraphs appear to be a vehicle to deliver a meme (like a shaggy-dog story). The rest of the article could have been great, I just stopped reading.
The journalist could have made a neutral statement about what entities have the resources to crack a 768-bit key. But they or their editor chose not to.
Instead, everyone that reads the article will go away with the meme "Iran, if they wanted to, could crack 768-bit keys". Which is, by common definition, propaganda.
It might be unintentional, i.e. the journalist is riding a wave of popular opinion, which they should not do; or it might be an attempt to load the article with link bait.
How is that propaganda? You don't think most countries have that kind of computing power?
defn: "Information, esp. of a biased or misleading nature, used to promote or publicize a particular political cause or point of view"
2: the spreading of ideas, information, or rumor for the purpose of helping or injuring an institution, a cause, or a person
3 : ideas, facts, or allegations spread deliberately to further one's cause or to damage an opposing cause; also : a public action having such an effect
> You don't think most countries have that kind of computing power?
The quote itself states that: "...or a large group with sufficient computing resources could pull it off."
I think you may have misunderstood my point of view.
The point is, in his quote, the interviewee (Zachary Harris) singled out Iran.
As it stands it is an out-of-the-blue assertion.
Here's my take: DKIM is an attempt by _third parties_ (i.e. "email providers", not the author or the recipient of the message) to control who can send email (but guess what? anyone can send email, go figure). On the other hand, authentication (PGP) is an attempt to allow senders to sign messages and receivers to verify signatures (no third parties needed).
Bob printed his PGP public key on a card and gave it to Alice when they had lunch. He then signed an email message the following week using PGP and sent it to Alice. But Bob's "email provider" decided to block Bob's message because Bob didn't pay money to someone for the use of a "domain name" and Bob's "email provider" thought his email was "spam" because he hadn't been "authorized" (by paying money for use of a domain name) to send email.
RFC 4871:

"Signers MUST use RSA keys of at least 1024 bits for long-lived keys. Verifiers MUST be able to validate signatures with keys ranging from 512 bits to 2048 bits, and they MAY be able to validate signatures with larger keys. Verifier policies may use the length of the signing key as one metric for determining whether a signature is acceptable.

Factors that should influence the key size choice include the following:

o The practical constraint that large (e.g., 4096 bit) keys may not fit within a 512-byte DNS UDP response packet

o The security constraint that keys smaller than 1024 bits are subject to off-line attacks..."
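That 512-byte constraint is simple arithmetic to check. This sketch (stdlib only) sizes the DER SubjectPublicKeyInfo encoding of an RSA public key with e=65537 and then its base64 form, which is what goes in the DKIM p= tag; note the real 512-byte limit applies to the whole DNS response, so the practical budget is even tighter than these numbers suggest.

```python
# Estimate the base64 size of the DKIM "p=" value for an RSA key,
# assuming the standard DER SubjectPublicKeyInfo layout with e = 65537.

def der_hdr(content_len):
    """Tag byte plus DER length bytes for a given content length."""
    if content_len < 128:
        return 2
    return 2 + (content_len.bit_length() + 7) // 8

def spki_size(bits, exp_len=3):
    mod = bits // 8 + 1                 # leading 0x00: high bit is set
    rsa = der_hdr(mod) + mod + der_hdr(exp_len) + exp_len
    rsa = der_hdr(rsa) + rsa            # RSAPublicKey SEQUENCE
    bitstr = 1 + rsa                    # unused-bits byte + key
    bitstr = der_hdr(bitstr) + bitstr   # BIT STRING wrapper
    body = 15 + bitstr                  # 15-byte rsaEncryption AlgorithmIdentifier
    return der_hdr(body) + body

def b64_len(n):
    return (n + 2) // 3 * 4

for bits in (512, 1024, 2048, 4096):
    print(bits, b64_len(spki_size(bits)))
# A 4096-bit key needs 736 base64 characters for p= alone, which already
# blows the 512-byte UDP budget; 1024-bit fits comfortably at 216.
```

So the RFC's two constraints really do box you in from both sides: below 1024 bits you are in offline-attack territory, and at 4096 bits the record stops fitting in a single UDP response.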
Also, until you have interviewed, all positions are "unspecified". Many positions need to be filled and they don't pick one for you until they know what you can do.
Geography is not really considered to be an issue. Once SRE finds someone they really want they will help with relocation.
I had additionally heard rumors that recruiters were so silo'ed that they would actually just throw away a resume rather than route it, the reason being that they were in competition with all the other recruiters, and the worst performers (based strictly on a numbers game) didn't get their contracts renewed.
*May have been fewer than 5; it's been a few years, and I never really tried to memorize what was posted on the wall while I was at the urinal.
I think this is because SPF is still sometimes broken in practice. For example, it can fail when there is misconfigured e-mail forwarding (e.g., mail aliases) at _other_ people's servers, or with web forms that set the envelope sender to the "From" field of the web page...
Props to Google for fixing the problem instantly.
Weird that he thought the email was phony based on content. Who wouldn't want a computer savvy math genius on their team? Google has lots.
ECC keys may be stronger at shorter lengths, but that hardly means that key length isn't the problem. After all, using a longer key would fix this problem.
ECC may even be a better solution, as you say, but that doesn't mean that the problem isn't also one of an insufficiently long key.