(Switching back to my old account because rate limits.)
I wouldn't ever use something like FizzBuzz to assess a candidate. It would be more of "here's a mostly finished sample application with a corresponding SQL file, add this feature (e.g. a search bar for a blog) and fix any (intentionally introduced) security bugs you find".
They would be evaluated based on how successfully they complete the main task, and if they have an eye for finding/patching vulnerabilities, that's a bonus that can be used as a secondary selector if a lot of candidates pass. If no one finds the bugs, it won't be held against anyone.
That's how I'd approach it, personally. Something specific to the kind of work we're doing, but abstract enough to be approachable without a lot of insider knowledge.
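To make that concrete, here's a minimal sketch of the kind of intentionally introduced bug I have in mind for the blog-search feature (hypothetical Python/sqlite3 code, not an actual take-home): the naive version concatenates user input into SQL, and the expected fix is a parameterized query.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
    conn.execute("INSERT INTO posts (title) VALUES ('Hello world')")

    def search_posts_vulnerable(term):
        # The planted bug: string concatenation lets input like "' OR '1'='1"
        # rewrite the query -- classic SQL injection.
        query = "SELECT id, title FROM posts WHERE title LIKE '%" + term + "%'"
        return conn.execute(query).fetchall()

    def search_posts_fixed(term):
        # The expected fix: a parameterized query; the driver escapes the value.
        return conn.execute("SELECT id, title FROM posts WHERE title LIKE ?",
                            ("%" + term + "%",)).fetchall()

    print(search_posts_fixed("Hello"))  # [(1, 'Hello world')]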
> All true but my observation is that the companies that put candidates through multi-day-out-of-town interview processes can afford to miss out on the candidates that can't do it.
All companies can afford to waste less money than they currently do.
> Russian crypto experts don't fully trust DJB; they found that the last iteration of parameter selection by DJB for Curve25519 was a bit questionable.
Tell them to publish their findings and propose a better solution.
> Changes were made for "better performance", but no one found what exactly was sped up.
What "changes" exactly? The word "changes" implies there was an early draft with vastly different parameters.
> I don't know the details, but when the NSA tried to compromise curve parameters, it was almost always by adding such "performance optimizations".
If you don't know the details, try doing some research. Knowledge is healthy.
I really appreciate the level-headed discussion in this thread so far, especially the comment I'm replying to.
It's a stark contrast to the CFRG mailing list. (At least, so far, no one has tried to derail discussion here with "hey check out my custom cipher it's soooo secure but you need to compress the data before encrypting it or else you can observe a repeated structure out of it".)
I like 25519's school of thought. If you use the smallest possible value for a given performance/security goal, there's less room for conspiracy theory (provided the person making the theory understands what's even going on).
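To illustrate, here's a tiny sketch (Python, assuming sympy is installed) of the kind of check the "smallest value" principle invites: search for the smallest constant c such that 2^255 - c is prime. As far as I know it prints 19, the published Curve25519 field prime constant, but the point is that anyone can rerun the check themselves.

    from sympy import isprime

    # "Nothing up my sleeve" check for the Curve25519 field prime 2^255 - 19:
    # look for the smallest odd c with 2^255 - c prime. (Even c is hopeless,
    # since 2^255 - even is even.) Expected result, as far as I know: c = 19.
    for c in range(1, 100, 2):
        if isprime(2**255 - c):
            print("smallest c with 2^255 - c prime:", c)
            break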
Yeah, this is a common joke, especially when you feel ambivalent.
Examples:
"Do you want to go out for dinner tonight? Or should we stay in and save money?"
"Do you want cake or ice cream for dessert?"
"Should we prioritize this bug fix or focus on hitting our release target?"
Somebody might reply with tongue in cheek, "Yes", to indicate they agree with both parts, want both things, it's not a simple choice, etc.
FWIW, I encounter it more with older people. As a kid, I heard it often from uncles and grandparents.
The joke is in treating the "or" as the Boolean operator and collapsing it into a yes-no question.
Thus, the answer would be "no" if and only if legalization has had no effect whatsoever. As the asking of the question implicitly assumes that there has been an effect, answering yes therefore responds to the question in a very literal, truthful way, without actually conveying any new information.
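In code terms, the literal reading looks like this (a toy Python illustration):

    # Reading "has legalization been positive or negative?" as a Boolean
    # expression: it is False only when both sides are False, i.e. only if
    # legalization had no effect at all. Hence "yes" is truthful but useless.
    positive_effect = True   # assumed for illustration
    negative_effect = False

    print(positive_effect or negative_effect)  # True -> "yes"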
This is my favorite way to punish a poorly structured interrogatory. It forces the follow-up question, "Which one?" Which can then be answered with "both."
Some people say that there are no stupid questions, but clearly, some questions are more intelligent than others.
Don't you hate it when you ask someone if they want coffee or tea and they say "yes"?
I mean, strictly speaking they answered the question. They do want a coffee or a tea. They just haven't yet specified which one they would like most...
~Don't you hate it when you ask that question out of curiosity, and then the person you asked gets all mad that you didn't give them either coffee or tea afterward?~
The question has several possible responses. "No, thank you," means the person wants neither coffee nor tea. "Coffee, please," and "tea, please" mean just what you think. "Yes, please," means that either coffee or tea would be acceptable, and the respondent is indifferent to which one.
This usually means "I'll take one of whatever you're having," and is probably intended to be less burdensome to the host, by allowing them to choose what they would prefer to serve rather than forcing them to defer to their guest.
Of course, the person might be trying for a cheap laugh rather than politeness. In that case, it would be appropriate to wait a beat, chuckle, optionally make a flirtatious gesture (such as a wink or arm touch), then ask "So either one is fine?"
Person: "I'm starving and barely able to get by working for Yelp in SF."
Yelp: "You're fired." (Good luck paying rent without a job.)
Yelp CEO: "The cost of living is too high here, so we're going to instead move offices to Arizona and pay the same wage."
Does this mean that Yelp is going to...
a. Help all of its employees move to AZ where they can enjoy a lower cost of living?
b. Fire all of its employees and hire replacements in AZ?
c. Something else?
Because if they're going with option B, wow.
The cost of living in SF is one of the reasons I refuse to ever move there for work, but it seems like a scapegoat in this case. Why not just pay your employees a livable wage to begin with?
Eventually, probably. There's a reason why most companies don't scale their customer rep teams in high-cost areas like the Bay Area to begin with. Many companies either offshore or open their customer rep centers in places like Arizona or the Midwest, where low-wage work will still pay a living wage. It's inherently low-wage work, and paying significantly above market wage for low-skill labor is something businesses are understandably reluctant to do.
I think they would be more likely to do the transfer in a phased way.
For example, as they need more low-wage positions, they start hiring directly in Arizona; and when someone leaves the SF office, they don't backfill that role there but re-open the position in the other location.
Do this for some time and eventually only a few employees will be left in SF, who would be easier to let go.
So it's kind of like option B, except not all at once.
Cryptocat was a good concept (i.e. it was USABLE!), but the execution was flawed. It drew a lot of criticism, and Nadim made mistakes in handling some of his critics, creating a schism between him and the cryptographers who might have been able to help him. (Not all of this was his fault, of course.)
I hope that not only will this new product of his be developed with "A pure vision of democratized, pleasant secure messaging", but also that he has matured significantly. I hope that Cryptocat v3 will come out after it has been thoroughly audited by several reputable third parties.
A usable, secure messenger is a pretty important niche to fill. Cryptocat is a good concept (usability is important), but its implementation is flawed enough that it's worse than useless. With some TLC, better ground-up design, and thorough auditing? Pretty sweet. There isn't one messenger that fills every niche reasonably well (e.g. Tox is fragmented and hasn't been audited as far as I'm aware, Jitsi is clunky and relies heavily on outside services, Retroshare requires an intricate knowledge of GPG, several other messengers sit over Tor which requires education, and libpurple is garbage, etc.).
Ricochet sits over Tor and doesn't require much, if anything, in the way of education. The implementation details and use of Tor are virtually invisible to the end-user. It's nothing like using public key encryption to send an email, for instance, and more akin to AOL Instant Messenger from a user perspective.
Just to play devil's advocate for the sake of discussion I'll say, the main benefit is "education" for designers of actual secure messaging apps such as those from Open Whisper Systems.
I remember Moxie writing about intentionally using insecure messaging apps that have great UI for the purpose of learning what non-technical users want, and he then built stuff that was both secure and usable.
I think it's interesting how apps like Cryptocat (on one side) and those from e.g. Open Whisper Systems (on another) play off each other. Some secure messaging apps were pressured to up their UI game, and now some "usable" apps are pressured to up their security game or shut down. WhatsApp got X25519 via the TextSecure protocol, and now Cryptocat is shutting down. It sends a message that designers of new apps will be competing against successful deployments of messengers that are both secure and usable.
There are still things like Telegram that are apparently big, but I think the trend is clear.
Agreed. There must be a cryptographic aphorism along the lines of John Gall's famous saying about the provenance of complex systems that work.
Something like: "Usable secure systems are created by iterating from secure unusable systems, not by iterating from insecure usable systems." Someone must have said something like this before, and put it more eloquently.
Are you sure that the usability wasn't fundamentally insecure? Because in that case, whether or not it was usable is meaningless as a benchmark for a system that works.
I don't think so. I took a brief look at how it works. It was basically a centrally-hosted, shared-secret setup. I've built those before. Super easy to build and use compared to high-security, P2P apps with their trust management. Here was the user experience when I tried it:
1. Go to the right site. So, tell them to check domain and HTTPS.
2. Type in information you and other person agreed to preferably in person.
3. Chat.
Very, very usable. That could've been implemented in a simple, securely-coded app communicating over a secure tunnel with another simple app on a robust server. The crypto to do that sort of thing right (outside a browser) is pretty basic. One could even run the deployment server and untrusted storage separately so a complex TCB couldn't affect trusted app delivery or operation (availability aside).
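For a sense of how basic, here's a minimal sketch of the shared-secret core (Python, assuming the pyca/cryptography package; nonce management, transcript authentication, and everything else a real protocol needs are glossed over): derive a key from the secret both people agreed on, then use an AEAD for the messages.

    import os
    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    def derive_key(shared_secret, salt):
        # Stretch the human-agreed secret into a 32-byte key.
        # scrypt parameters are illustrative, not a recommendation.
        return hashlib.scrypt(shared_secret.encode(), salt=salt,
                              n=2**14, r=8, p=1, dklen=32)

    salt = os.urandom(16)    # would be shared along with the room name
    key = derive_key("correct horse battery staple", salt)
    aead = ChaCha20Poly1305(key)

    nonce = os.urandom(12)   # must never repeat under the same key
    ciphertext = aead.encrypt(nonce, b"hi, it's me", None)
    print(aead.decrypt(nonce, ciphertext, None))  # b"hi, it's me"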
Cryptocat's design was actually simpler than some high assurance systems of the past. That tells me it could be done robustly with a different implementation and protocol. Is it the best idea? Hell no for all kinds of reasons that start with centralization then get worse from there. Its usability can be recreated, though, in a more secure solution.
Note: As I said to tptacek, even the original with its security issues kept users safer than fads like Facebook Chat that spy on them. A fun, usable solution with better than average privacy is still a step up if used by the right people. Just gotta be clear to use something stronger (less fun) to stop hackers.
Secure from whom? A school IT admin? An abusive spouse? A kiddie with a wifi packet sniffer? From hackers going after email accounts? There are lots of threat models that CryptoCat was useful against; and in the long run, CryptoCat was right about usability.
How about "secure against the best funded, best staffed signals intelligence agency in the world"? Because that's the adversary Cryptocat ended up with. When it came out that Greenwald had used it for Snowden, Kobeissi was over the moon about it on Twitter; he treated it as an endorsement.
What's it matter if it's secure but not usable? That problem, aside from demand, is why almost everyone uses insecure messengers. Usability is more important if one is targeting the masses. Especially if the alternatives they find cool and usable are horribly insecure.
That leads to the other side: what is a secure messenger? Secure against WHOM? If it's hackers, then Cryptocat is entirely inappropriate, as it will be smashed. Yet the average person's threat model includes all kinds of snoops who might not have hacking skill, not to mention the service host. Especially in high school and college. Cryptocat would protect them from many of those, while its own problems would be found and improved over time. Widespread adoption of Cryptocat over services like Facebook Messenger, which stash and analyze the messages, would be a win for privacy.
So, the question is use case. I gave it a positive review for potential to get insecure crowd on something a little better. It was also fun thanks to good art. I just said they should clearly indicate it's not for stopping hackers, governments, etc. Plus keep links to good products that are. If people want those, they'll use them. If not, Cryptocat wasn't a bad fallback compared to straight-up invasive apps they were likely using.
People say things like this a lot. I understand why they say it. But, no. Emphatically, no.
Let me put a bullet right in the head of this argument in favor of Cryptocat and things like it:
In June 2013, Cryptocat was used by journalist Glenn Greenwald while in Hong Kong to meet NSA whistleblower Edward Snowden for the first time, after other encryption software failed to work.
And, you know what else? Guess what happened right around June 2013? Decryptocat.
I specifically said it shouldn't be used to stop hackers. Let's continue anyway as there's a lesson here.
"after other encryption software failed to work."
Nothing else worked because all your recommendations were unusable. The choice was between using Cryptocat, which might be private, and communicating in the open, which definitely wouldn't be. There was also a time window.
"Guess what happened right around June 2013?"
They completed the meeting without the NSA getting shit. Greenwald got the data. Snowden escaped. Comms remained private until an NSA analyst discovered both the intercepted data and Decryptocat. It worked.
Great story. Now, what app do you recommend for a future Greenwald that's so easy to correctly acquire and use that I could give my grandmother a 3-4 step flashcard and she could get through it without help and with minimum hassle? Cryptocat passed my granny test. Nothing else on a desktop has so far.
> I specifically said it shouldn't be used to stop hackers.
Script kiddies get their name because they only make use of easy-to-use tools written by knowledgeable "hackers" that perform tasks vastly beyond the understanding of the kiddie. If your "secure communications" software doesn't stop a sophisticated passive adversary, it doesn't stop anyone, because a sophisticated adversary will inevitably release a point-and-drool tool that anyone can use to unscramble your data. [0]
> They completed the meeting without the NSA getting shit. ... Comms remained private until an NSA analyst discovered both the intercepted data and Decryptocat.
So, then the NSA did "get shit". They may not have gotten it in a timely manner, but they did get the plaintext of the conversation.
> Now, what app do you recommend for a future Greenwald...
TextSecure/Signal has been around since 2010. It walks you through the setup process, so no need for flashcards. Unlike Cryptocat, its crypto has stood up to scrutiny. It doesn't currently meet your "on a desktop" search criteria but:
1) It seems reasonable to expect that most journalists possess either an iOS or Android smartphone.
2) There is a Signal desktop client in development that's currently in population-limited beta testing. From what people tell me about how WhatsApp handles the interaction between its mobile clients and desktop client, Signal's desktop client is every bit as easy to use as WhatsApp's.
[0] Granted, Decryptocat likely has to be used by someone running code in the Cryptocat datacenters, but this does not invalidate my objection to your assertion.
"If your "secure communications" software doesn't stop a sophisticated passive adversary, it doesn't stop anyone"
So every non-technical person right now who wants others' conversations in various insecure apps is running full surveillance on them, with control of their PCs/phones, just because the NSA and other teams are? And the NSA et al. turned all that into script-kiddie warez published openly with easy Google access? No, they're not. Those that are make up a tiny, tiny few. So your argument is simply wrong.
Mediocre solutions stop people all the time despite pros or talented people being able to defeat them. A subset of them get attack kits made by black hats or security professionals. A subset of those gets released into the wild. A tiny subset of laypersons find those and learn to wield them. Sometimes those tools require more access than they have, sometimes not. There's no all-or-nothing game with what happens using certain apps or security strategies. There's lots of variation in risk. Your threat model, what software you're using, and how you're using it matter a LOT in determining what will actually happen.
Incidentally, this is why Mac users felt immune to malware for so long despite lots of popularity, business data up for grabs, and terrible security. If your argument were correct, they would've gotten owned massively and regularly in botnets on par with Windows, if not worse. They didn't, though. The weakness and possibility of an attack didn't materialize into even large gains by hackers: just a little botnet or two in the PPC days. Laypersons certainly didn't know about ways to own them all with easy tools. Actually, across all proprietary and FOSS software in use, that appears to be an uncommon or rare event.
Note: I know people who to this day use PPC Macs and old software in a hardened configuration with backups. No evidence that anyone has trashed their systems so far. Plus, the laptop users would notice if lots of streaming were going on, given the terrible battery life of those machines. So your hypothesis has been failing for them for going on over a decade.
"So, then the NSA did "get shit". They may not have gotten it in a timely manner, but they did get the plaintext of the conversation."
The requirement was that the NSA not be able to understand the content of those messages for a period of time that covers their activity. The NSA's goal is to spot stuff like this before it becomes a huge problem. Greenwald et al's requirement passed while NSA's failed. NSA didn't get shit in terms of their goals. They also lost a LOT. :)
"TextSecure/Signal has been around since 2010. "
I asked for a desktop app usable right now. I thought that was a mobile app. It's good that you...
"There is a Signal desktop client in development "
...brought me a red herring that wouldn't have helped Greenwald then or laypeople now. (sighs) Oh well. At least your counter might be true in a future case once that materializes. I look forward to its release.
> ...brought me a red herring that wouldn't have helped Greenwald then or laypeople now.
Funny. I addressed this in my previous comment, but I guess you glossed over it:
> 1) It seems reasonable to expect that most journalists possess either an iOS or Android smartphone.
Your snark doesn't enhance the credibility of your objections.
> The requirement was that the NSA not be able to understand the content of those messages for a period of time that covers their activity.
Two things:
1) That's not what you said, though. You said "the NSA didn't get shit", when in fact, they did. In my reply to you, I even addressed the fact that it's possible they got the plaintext of the conversation long after the meeting. [0] Again, your snark doesn't do you credit.
2) Another goal of the NSA is storage of encrypted data for later decryption just in case a decryption method is found and the data is useful. The NSA does far more than just deal with information that has a very brief shelf life.
> Mediocre solutions stop people all the time despite pro's or talented people being able to defeat them.
Does a messaging system that XORs the message and addressing information with a hard-coded value meet your definition of "secure messenger" if its target audience is the everyday US citizen who communicates only to people within the US? Why or why not?
[0] But, in reality, we can't know that NSA wasn't aware of this vulnerability in CryptoCat at the time of the meeting. It's entirely possible that they had access to the plaintext of the conversation shortly after it happened.
"Funny. I addressed this in my previous comment, but I guess you glossed over it:"
No, I'm calling you on it. You're countering my claim that mediocre privacy is better than choosing no privacy if one is consciously aware that this is the choice. Your first counter...
"If your "secure communications" software doesn't stop a sophisticated passive adversary, it doesn't stop anyone, "
...was so ridiculous that you lost credibility instantly. I gave you the benefit of the doubt on the rest. The next part was a recommendation that basically confirmed my original claim that mediocre solutions were all you could think of, barring a future release of Signal. Once again, there was no effective counter to people using Cryptocat or other mediocre solutions when they had nothing else available that was usable. And, again, knowing it wasn't guaranteed to stop hackers: just delay them or stop lay attackers.
"ou said "the NSA didn't get shit", when in fact, they did."
OK. I see you were just quibbling over a technicality in a secondary claim. I stand corrected: the NSA did get shit, way after they needed it. Remember that Snowden knew they would get found out. The after-effect wasn't important, just the delay. My argument still stands even with that point corrected, given each party's goals.
"Does a messaging system that XORs the message and addressing information with a hard-coded value meet your definition of "secure messenger" if its target audience is the everyday US citizen who communicates only to people within the US? Why or why not?"
Yes, if the threat model is jocks snooping on a phone and the code is custom. No most of the time, because that's weaker than weak. A regular encryption algorithm people wouldn't know about, in a solution that's not popular? It will stop most snoops unless they straight-up hack it. A modification of an existing algorithm that preserves its security properties but obfuscates the change? That slows even nation-state attackers.
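To show why hard-coded XOR is "weaker than weak", a quick hypothetical Python sketch: a single known plaintext hands the attacker the key for every other message.

    def xor_bytes(data, key):
        # Repeating-key XOR, the "cipher" in question.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    HARDCODED_KEY = b"sekrit"   # the same for every user and message

    c1 = xor_bytes(b"meet me at noon", HARDCODED_KEY)
    c2 = xor_bytes(b"bring the files", HARDCODED_KEY)

    # Attacker knows (or guesses) one plaintext; XOR recovers the key stream:
    recovered = xor_bytes(c1, b"meet me at noon")[:len(HARDCODED_KEY)]
    print(xor_bytes(c2, recovered))   # b'bring the files'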
Like I claimed: the threat model and security goals determine what level of security is appropriate. Wise engineers tell people to default to something really good. The userbase our post is discussing is incapable of, or unwilling to put up with, what's good by our standards. For them, it's no protection, methods that sell them out, or methods that offer some protection while not selling them out. The third option sounds better than the other two. This is where solutions like Cryptocat (not XOR, lol) come in. They're easy, sometimes fun, enough for adoption to raise the baseline a bit. Or they remain niche, which is even better in this threat model.
So, I drop the bar a bit to provide them some protection rather than none. Recommendations depend on the person and situation. Some protection with no malicious provider is still better than none plus a malicious host. That's the crux of my argument.
"But, in reality, we can't know that NSA wasn't aware of this vulnerability in CryptoCat at the time of the meeting."
Even more support. If they weren't, the niche and barely functional thing did its job of showing them scrambled traffic they didn't auto-break and analyze. If they were, it bought the users time. Benefit either way over open communications despite this being outside my recommended use case. Some better than none.
Cryptocat was not secure. No argument there! Decryptocat was proof enough of that.
If a secure product could be as user-friendly as Cryptocat was while still being secure, then most peoples' communications would be more secure.
That's all I was saying. I'm not trying at all to hand-wave the proven insecurity. I'm saying that the only thing they got right was the one thing that secure products have consistently gotten wrong. (Barring Signal.)
"That's all I was saying. I'm not trying at all to hand-wave the proven insecurity. I'm saying that the only thing they got right was the one thing that secure products have consistently gotten wrong. "
That's my main claim. Usability and the setup phase were 90-100% of my positive remarks in my review. I just added that it still has value for crowds without tech-savvy opponents if (a) they won't use truly secure stuff due to hassle and (b) they are clearly informed that usable but weak tools can be breached. As one of a few interim solutions, if nothing else.
Sigh, I guess it's time to enable Click-to-play again[1]. I wish there were a way to just automatically pause on load, without needing to completely disable flash.
Firefox supports loading plugins like Flash on demand via the Add-ons settings page. It also has a setting, "media.autoplay.enabled", that prevents HTML5 media from playing automatically. However, some websites assume autoplay succeeded and misbehave; for example, YouTube's play/pause button state ends up backwards.
This requires your users to trust whichever OAuth providers you decide to integrate with. Sometimes, the set of "trusted OAuth providers" for your users is {}. What then?
> 99% of the websites that "require" me to create an account and log in don't need to store primary credentials for me
Why are you giving them valuable credentials? Give them a throw-away password (password managers are great for this).
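If you'd rather script it than click through a manager, something like this trivial sketch with Python's standard secrets module works: one independent random string per site, never reused.

    import secrets

    # A throw-away credential: losing it leaks nothing about your other
    # accounts, and it's not your "real" password anywhere.
    throwaway = secrets.token_urlsafe(24)   # 32 URL-safe random characters
    print(throwaway)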
I meant OpenID. I literally couldn't see I was saying the wrong thing. Everything you say about OAuth is true. I'm an idiot. I'm sorry for getting so blue in the face.
A hybrid between the two (common OAuth-style endpoints and any OpenID endpoint) is the best solution for everybody.
You don't integrate with a provider. You implement the protocol and let your users supply a URL. Layering on popular alternatives (Facebook, Google, etc.) helps, but use the Stack Exchange model. Let users do what they want to do.
That way users can be their own OAuth providers if they want.
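A rough sketch of what "let users supply a URL" can look like, using OpenID Connect's discovery convention (Python with the requests library; error handling and the rest of the flow omitted): the site fetches the provider's well-known metadata, so no hard-coded provider list is needed.

    import requests

    def discover_endpoints(issuer_url):
        # OpenID Connect discovery: providers publish their endpoints at a
        # well-known path, so any URL the user trusts can act as a provider.
        url = issuer_url.rstrip("/") + "/.well-known/openid-configuration"
        meta = requests.get(url).json()
        return meta["authorization_endpoint"], meta["token_endpoint"]

    # The user types their own provider's URL; the site discovers the rest.
    print(discover_endpoints("https://accounts.google.com"))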
My question was: "What if your users don't trust any of the existing providers on Earth?"
It's hard to make a blanket recommendation like that, even for "only 99%" of websites. Neither you nor the person building the website has any insight into whom the website's users trust.
Offer OAuth2 as an alternative to passwords: Great move.
Only offer OAuth2 and don't let people create an account: Questionable.
I don't understand why they would trust <crappy forum owner> over a dedicated authentication storage place, but that's their choice. And yes, there is also every possibility to offer direct credentials, per the Stack Exchange model (they host their own OAuth server and allow simple registrations).
> I don't understand why they would trust <crappy forum owner> over a dedicated authentication storage place but that's their choice.
What if <crappy forum owner> happens to be a security engineer, and <crappy forum> happens to be Silk Road 13?
The trust decisions people make are situational and nuanced. OAuth is great if that's where people invest their trust. Otherwise, you're outsourcing it for the user to a company they might fear.
No, you're saying "which of this limited set of companies are you going to authenticate with" instead. If you don't want to be guilty of taking away users' agency over their own trust decisions, you need to do one of two things:
1. Let every website on the Internet potentially be an OAuth provider.
2. Make OAuth optional.
If you follow option #2, then this article is still relevant because you need to handle passwords securely.
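And "handle passwords securely" means, at minimum, salted memory-hard hashing rather than storing anything reversible. A bare-bones sketch with Python's standard library (parameters illustrative, not a tuning recommendation):

    import os, hashlib, hmac

    def hash_password(password):
        salt = os.urandom(16)   # unique per user
        digest = hashlib.scrypt(password.encode(), salt=salt,
                                n=2**14, r=8, p=1, dklen=32)
        return salt, digest     # store both; never the password itself

    def verify_password(password, salt, digest):
        candidate = hashlib.scrypt(password.encode(), salt=salt,
                                   n=2**14, r=8, p=1, dklen=32)
        return hmac.compare_digest(candidate, digest)   # constant-time compare

    salt, digest = hash_password("hunter2")
    print(verify_password("hunter2", salt, digest))   # True
    print(verify_password("wrong", salt, digest))     # False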
Your first paragraph is like saying that using email forces somebody to use one of a "limited set of companies". It's nonsense. Again, if they don't like what's on offer, they can host their own, just like email! They can hire a company like yours to host their credentials with as many layers of security as they want. The user has ultimate choice.
Secondly, every website on the Internet is potentially an OAuth provider.
Not to mention that I have, on multiple occasions here, suggested that websites that consume OAuth should also provide it (like Stack Exchange).