There's the suggestion that an exploding feature is worthless, given your partner can just take a screenshot or video of what you sent.
This suggestion misses (1) that your relationship with a partner is disproportionately likely to be okay at the time you send something (i.e., you trust them THEN), and (2) that there's a whole different class of adversary who compromises your or your partner's devices in the future.
Snapchat, as far as I know, has none of the cryptographic implementation of Keybase. And yet it has likely protected hundreds of thousands of kids from severe bullying. Consider the teen girl who sends the goofy sexy pic to her boyfriend. Before the advent of exploding messages, he might've iMessaged or emailed that to a friend, just one friend, his best friend, out of pride. And that friend sent it to a few more, and so on. Not out of malice, but suddenly the whole school has seen her pic of god knows what and she literally wants to die. But with Snapchat, taking a screenshot is knowingly violating a social agreement. It's also violating the trust of his current girlfriend - everyone knows it's not okay to screenshot that shit. And the number of people who would do that is far smaller. Second, consider the far worse scenario: she dumps him a month later and until then he has been NiceGuy. But then he becomes r/niceguy, the guy who will look through the old pictures and spread them around.
Finally, let's not forget that your device can be compromised by loss, theft, or hackers, at any time. Exploding messages are gone when that happens.
People can be tricked, compelled, coerced, blackmailed, and hacked. Or just turn evil. All in the future. Which is what a timed message protects against. This is why Keybase is doing this. Paired with encryption it's quite powerful.
The primary threat is compromise of a device. Keybase allows you to revoke keys but that assumes you are aware that the device has been compromised. Which is already too late for sensitive messages.
The average user doesn’t understand data persistence, or secure destruction of data. Manafort is a good example of this. I wish apps just expired messages by default. I don’t understand why WhatsApp doesn’t have this feature.
Why do you want your messages deleted by default when you use one of these secure messaging clients?
You might worry about not being able to find something you said. Others worry about being able to find something they said.
I personally chose my defaults appropriately, with work stuff getting archived and everything else not even getting backed up. And realistically, even the work stuff is completely useless after a couple of years; a problem I have is not finding information, but finding current, useful information.
Is this true? (Asking with no implication of criticism or being a leading question - I just genuinely don't know the answer)
I can believe both that these teens were going to sext each other anyway and Snapchat is keeping them safer, or that they weren't going to and Snapchat has convinced them that it can be done more safely than it can actually be done.
Has anyone done studies on this? (Is it even possible to do studies? I suppose you'd either need information from Snapchat itself on how often they detect screenshots, or from high schools on bullying cases over time and whether Snapchat is involved + hope that bullying cases that get escalated to adults at high schools is a meaningful proxy for actual bullying.)
I'm inclined to buy your argument that because of the implementation making stored pictures not the default, and the social pressure not to take screenshots, probably Snapchat's disappearing messages are better than iMessage. But this seems like the sort of thing that's dangerous enough (in either direction! if the technology works and we refuse to deploy it, that's bad too) that hard data would be useful.
That isn't to say that Snapchat has removed the potential for spreading explicit content. As someone mentioned in another comment, screenshotting the snap circumvents the system. It's also just as easy to take a photo of the screen with another device -- an untraceable and permanent record of the photo.
As a whole, Snapchat has had a net positive effect for people my age. I can attest that teenagers make unwise decisions now and again, and Snapchat has helped in that those rash decisions are less likely to bite us in the future. While I don't have the data to back up the claim that hundreds of thousands of kids have been protected due to Snapchat's impermanence, I certainly wouldn't be surprised if it was true. It's the most popular social network in my demographic for a reason -- it oozes the ephemeral teenage spirit.
> "Risk compensation is a theory which suggests that people typically adjust their behavior in response to the perceived level of risk, becoming more careful where they sense greater risk and less careful if they feel more protected. Although usually small in comparison to the fundamental benefits of safety interventions, it may result in a lower net benefit than expected."
There's a book too, with a special emphasis on financial crises. https://www.theguardian.com/books/2015/oct/12/foolproof-greg...
> "In the run-up to the crash, consumers and even policymakers had come to believe that smart regulators and forward-thinking bankers had made the world of money a much safer place.
> "The fundamental insight of Ip’s new book, Foolproof, is that this very belief was a key factor in the lead up to the crash. When people believe they are safe, they take more risks – they drive faster, in motoring terms – and “speed makes everything worse”. Or as the economist Hyman Minsky, whose work Ip revisits, put it: “Stability is destabilising.”
There are applications in our field too:
- safety features for users might make them behave less safely (e.g. exploding messages)
- better reliability of systems might lead us to put more trust in them, leading to even bigger outages when they occur (e.g. centralising trust in cloud providers)
It's interesting to see things like Chaos Engineering (https://principlesofchaos.org/) introducing intentional "danger" into a system in order to improve system-wide stability. Of course, maybe Chaos Engineering will give us more trust in our systems which may lead us to take even bigger risks...
So, I'm okay with risk compensation if people are net doing better. I don't think that "if even one person is hurt by this, that's too much" is a meaningful basis for decisions, especially when there's a risk that even one person will be hurt by not doing the thing. So at the risk of reducing people to numbers, if, say, 100 teenagers send sexts when they otherwise wouldn't have and get screenshotted, but 1,000 teenagers send sexts when they otherwise would have sent them to a non-disappearing-by-default client, and now their photos don't get copied because of social pressure / high-but-not-impossible technical barriers, that still seems like a clear win.
That's the sort of data that I think would be very interesting to inform good engineering decisions, and also pretty impossible to get.
(I also don't know the answer!)
It's unclear how they protect images today, but they have never once mentioned any use of encryption.
I think it is possible that Snapchat has net caused more kids to get bullied as a result of ill-advised sexting, by being the company advising ill. I can see both arguments and I don't know which one is actually true.
Preventing phone-screen capture isn't something you can fully solve, but Snapchat could certainly afford to put their money where their mouth is and try to provide their users with a safer experience by cracking down on 3rd-party apps.
My exposure to Snapchat suggests that this is not the case. Screenshotters are treated more like rascals than felons. This may depend on the content of the message though. My incoming messages tend to be more silly faces than nudes.
Edit: Or rather, it is the case, but the social agreement is a lightly enforceable one. Closer to not holding an elevator door than eating a coworker's lunch.
But making a big deal about “exploding” is dangerously misleading: many users will make incorrect assumptions.
I’m not worried about screenshots; I’m worried about my plugin that archives all text inbound to me, which then requires me to respond to subpoenas, etc.
From a security standpoint, this feature should not impact behavior since it is meaningless. If users don’t understand this, then it will cause heartache.
To use your door analogy, it’s like telling someone that a door lock keeps people out when there’s an invisible teleporter that also gets installed with the door lock.
It’s a hard analogy to follow because me retaining information you sent me is different than me breaking into your house. If you send me info, it’s mine. The weird mental model is that you still control what you give to me.
Keep up the good work, guys!
Anyway, maybe it's just me, but I never communicate anything to anyone that would be hugely problematic if published. That is, for that persona. Which is carefully compartmentalized from other personas. So Mirimir has rather restrictive limits. My meatspace identity has even more restrictive limits. But some of my personas have no limits, and are basically throw-aways.
Edit: And that's basically how accounts work on HN, right? I mean, throwaway use seems quite common, and accepted.
If you're interested, I explore that and related issues in one of my series on the IVPN website. There's also an old guide on nesting VPNs and Tor with VMs. And a tribute to Kevin Mitnick, featuring onion SSH hosts for chaining.
The tl;dr is that compartmentalization is the key. At all levels. At physical levels such as hosts and VMs, LANs and vLANs, and uplinks and proxy chains. And at behavioral levels, such as interests, forums and social media, projects, and language and writing style.
Mirimir is my only main persona that writes about privacy issues. He has temporarily had a few secondary personas for particular projects, just for casual deniability. But none of my other personas have written at length in English.
You send a message to someone whom you trust (and therefore won't screenshot). If their device is later compromised, forward secrecy ensures the message can't be retrieved.
Even revoking the compromised device is insufficient, as they could retrieve your chat history long before the user realizes they've been pwned.
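The forward-secrecy idea above can be sketched as a toy hash ratchet: each message gets a one-time key derived from a chain key, the chain advances through a one-way hash, and used keys are deleted. This is an illustration only, not real cryptography (real protocols like Signal's Double Ratchet add DH exchanges and authenticated encryption); the XOR "cipher" and the `b"msg"`/`b"chain"` labels are assumptions for the sketch.

```python
# Toy sketch of forward secrecy via a hash ratchet (illustration only,
# NOT real crypto -- real protocols use e.g. Signal's Double Ratchet).
import hashlib


def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive a one-time message key, then advance the chain key."""
    msg_key = hashlib.sha256(chain_key + b"msg").digest()
    next_chain = hashlib.sha256(chain_key + b"chain").digest()
    return msg_key, next_chain


def xor_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR stands in for a real cipher; each key is used exactly once.
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))


chain = b"shared-secret-seed"
key1, chain = ratchet(chain)            # key for message 1
ct1 = xor_encrypt(key1, b"old secret")  # send the ciphertext
del key1                                # both sides delete the message key

# Later compromise: the attacker gets the *current* chain key and ct1.
# SHA-256 is one-way, so key1 cannot be re-derived from `chain`,
# and the old ciphertext stays unreadable.
```

The key deletion is the whole point: revocation only stops future messages, but deleting past message keys is what makes already-delivered messages unrecoverable.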
As publicized (by Keybase and every other platform), exploding messages appear to put control of post-receipt management in the hands of the sender. This is especially credible coming from Keybase, since you guys are educating a lot of people about what's possible with careful crypto (e.g. forward secrecy). This has risks... you mention the Snapchat user who was protected from bullying, but what about the teen who wouldn't have sent that pic in the first place but felt safer because of Snapchat -- only to be bullied over a screenshot anyway?
Your description here is that exploding messages make it easier for both sides to announce and abide by a social contract about deletion. A name like "flag messages for auto-delete" (I'm sure someone can do better) would set the right impression.
And I find it weird that you're comparing yourself with Snapchat. Snapchat is a casual app, targeted at a completely different audience than the people Keybase targets (at least that's the impression I got so far)
Also, Snapchat is a mobile-only product, which makes all the difference. It's much easier to detect screenshotting on mobile than on desktop. And as far as I know, Keybase is a desktop-first app. So it's kind of ridiculous that you're comparing yourself to Snapchat.
I don't know if you're aware of the above distinctions, but if you're not, there's something wrong here. You guys are supposed to be completely aware of all these subtle differences. And if you ARE aware of them, why are you making these claims while pretending there's nothing wrong?
I have nothing against Keybase, I'm just pointing out the faulty logic in this specific comment you're making (which happens to be hostile towards those who are just pointing out the issue with no trolling intent)
I don't think they're comparing themselves to Snapchat; I think they're using a hypothetical situation that everyone can understand in order to explain the threats that an "exploding message" protects against; Snapchat is used merely because the scenario is easy to understand.
I wouldn't mind messages that were flagged for automatic deletion after some time interval, if I were also provided with controls for when and when not to honor such requests. But currently Signal, SnapChat, Keybase, and others don't provide me with such a choice - they do what the sender requested, regardless of whether or not I approve.
It goes without saying that providing such an easily accessible option would almost certainly result in it being used at times in socially inappropriate or distasteful ways. But consider, do you really want to give up control of how your device behaves in an attempt to prevent others from behaving poorly? Perhaps applications should focus on providing practical security (ie facilitating, not forcing, automated removal), and leave the social aspects up to the humans to sort out.
I think it is very easy and useful. It is great to have something like this on Keybase.
Exploded messages are just replaced with an image of what people are calling 'ashes'.
Further conversation on KB about this points out that hashing the message would compromise the secrecy. I still think it would be a neat feature.
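The secrecy problem with hashing is easy to demonstrate: a plain hash of a short or guessable message can be reversed by enumeration, so publishing a hash of an exploded message (e.g. to verify a claimed screenshot) would leak it. A minimal sketch, with a hypothetical candidate list standing in for a real dictionary attack:

```python
# Sketch: why publishing a plain hash of an exploded message leaks it.
# A short or guessable message can be recovered by brute-force enumeration.
import hashlib

# The "ashes" hash, hypothetically published so others can verify screenshots.
published_hash = hashlib.sha256(b"meet at 7pm").hexdigest()

# An attacker enumerates plausible messages (real attacks use huge dictionaries).
candidates = [b"meet at 6pm", b"meet at 7pm", b"meet at 8pm"]
recovered = next(
    (c for c in candidates if hashlib.sha256(c).hexdigest() == published_hash),
    None,
)
print(recovered)  # b'meet at 7pm' -- the "exploded" secret is recovered
```

A keyed hash (HMAC with a key known only to the participants) would avoid the dictionary attack, but then only the participants could verify anything, which defeats the point of publishing it.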
I think it's a great feature if you think of it (exploding messages) not as an assurance against someone who shouldn't be trusted, but as an assurance that they won't forget to take out the trash.
"I have nothing to hide"
Because no one is trying to hurt you
I don't have time to read through the code right now, but I'd love to hear how they implemented exploding messages with untrustworthy clients.
I've thought about it a few times before, and it seems one of the few places that closed software has an advantage - You can't easily force third-party clients to delete messages.
If they've solved that, I'm really interested to learn how it works!!
If you don't trust the person at the other end, this is never going to work. It's more useful for "we both agree that we don't want a paper trail" kind of thing.
Text is the easiest thing to fake if you can identify the font used - any image editor will work. HN uses 9pt Verdana; even without using dev tools I could fake your post to say anything I wanted, since it would just be 9pt Verdana on a solid background set to wrap every 1050px.
See: https://www.youtube.com/watch?v=ohmajJTcpNk & https://www.youtube.com/watch?v=AmUC4m6w1wo
But it does not combat attacks that are not embarrassing but rather a release of information you are known to have but which is intended to be kept secret, or which is easily verified: if someone captures your social security number, private key, or home address in a screenshot, you'd better be really good at bluffing.
The whole point is to totally lower the bar for anyone to make a passable copy, thus removing all confidence that any screen shot is genuine.
Any DLP or DRM can be circumvented using analog means.
There are ways to mitigate (snapchat detects screenshots, etc) but no way to fully prevent - Someone could always use an external camera, etc.
I was just really hoping that they had come up with some sort of cool technical way to stop the ability to decode messages after XYZ time, even if they couldn't prevent it from being copied once decoded.
For example, imagine if a message were wrapped in two levels of encryption - Once from the user, and once with a key that you have to retrieve from keybase.io - If you weren't in the right time-window, you wouldn't be able to retrieve the second key.
There are lots of problems with that particular approach, which is why I was hoping they had come up with something awesome, not just asking the client nicely to delete it. It's a nice feature either way, though.
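The two-layer idea can be sketched as a server-side key escrow that only releases the outer key inside a time window. Everything here is hypothetical (this is not Keybase's design); it also shows one of the problems alluded to: you must trust the server to actually delete keys, and anyone who fetched the key in time can keep it forever.

```python
# Sketch of the hypothetical two-layer scheme: the inner layer is
# end-to-end encrypted by the sender; the outer key lives on a server
# that hands it out only inside a time window, then forgets it.
import time
from typing import Optional


class KeyEscrow:
    def __init__(self) -> None:
        self._keys: dict[str, tuple[bytes, float]] = {}  # msg_id -> (key, expires_at)

    def register(self, msg_id: str, key: bytes, ttl_seconds: float) -> None:
        self._keys[msg_id] = (key, time.time() + ttl_seconds)

    def fetch(self, msg_id: str) -> Optional[bytes]:
        entry = self._keys.get(msg_id)
        if entry is None:
            return None
        key, expires_at = entry
        if time.time() > expires_at:
            del self._keys[msg_id]  # window closed: server forgets the key
            return None
        return key


escrow = KeyEscrow()
escrow.register("msg-1", b"outer-key", ttl_seconds=0.05)
assert escrow.fetch("msg-1") == b"outer-key"  # inside the window: decryptable
time.sleep(0.1)
assert escrow.fetch("msg-1") is None          # window closed: key is gone
```

Even with honest deletion, a recipient who fetched the outer key during the window can decrypt and keep the plaintext, so this only raises the bar against *future* compromise, same as client-side deletion.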
Nonetheless, it'd be interesting if all this effort, which exists because of lobbying by major media rightsholders, were re-used for interpersonal communication.
https://en.wikipedia.org/wiki/Protected_Media_Path
https://arstechnica.com/gadgets/2016/11/netflix-4k-streaming...
Apparently this happens a lot [2-7... probably more]. Unfortunately, this renders Keybase unusable for me because, even though I still have my private key, I cannot access my laptop's Keybase when I install it.
> The reason of this restriction is pretty important: we want people to be able to think of devices by name - say, when seeing them in a list, or when talking about them - and never have to think about key fingerprints or id's. The only way to achieve this safely is to make a device<->name map immutable and global, using the sig chain. Otherwise there are endless caveats and visual explanations needed showing the evolution of a device name over time.
> Consider: if an intruder steals one or more of your keys and starts doing crap to your sig chain, they still can't change the definition of "iphone6s-white". It is set in stone, which is crucial to maintain the abstraction that "iphone6s-white" is a certain key.
I agree that it sucks to have your device name become permanently unusable; I've hit this myself a few times, and it's mildly annoying to find that I have to pick a new device name in Keybase even though my local name for the device hasn't changed. But removing this restriction opens up a security vulnerability.
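The quoted rationale amounts to an append-only map: once a name is bound to a key in the sig chain, it can never be rebound, even by someone who later steals your keys. A minimal sketch of that invariant (the class and names here are illustrative, not Keybase's actual code):

```python
# Sketch of an immutable device<->name map, per the quoted rationale:
# a binding, once written to the sig chain, can never be changed.
class DeviceNameMap:
    def __init__(self) -> None:
        self._bindings: dict[str, str] = {}  # name -> key_id, append-only

    def bind(self, name: str, key_id: str) -> None:
        if name in self._bindings:
            # Rebinding is rejected forever, even after revocation.
            raise ValueError(f"device name {name!r} is already bound")
        self._bindings[name] = key_id

    def lookup(self, name: str) -> str:
        return self._bindings[name]


chain = DeviceNameMap()
chain.bind("iphone6s-white", "key-abc123")
try:
    chain.bind("iphone6s-white", "key-evil999")  # intruder's rebind attempt
except ValueError:
    pass  # rejected: "iphone6s-white" still means key-abc123
assert chain.lookup("iphone6s-white") == "key-abc123"
```

The cost of that invariant is exactly the annoyance described: a revoked device's name is burned, and you have to pick a new one when you reinstall.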
In this day and age, I do not advise this. Check with your company's compliance officer or corporate counsel before doing anything that is designed to remove evidence of communication.
I send an exploding message, set for 1 day, to Bob.
Bob checks his chat a week from now.
Does Bob get the message? Or has it already exploded?
Does the timer begin when the message is sent or received?
This seems like the only sensible answer for group chats. And we can't have a different answer for 1-on-1 chats and group chats. That would confuse people. Not the kind of person who reads an FAQ such as yourself, of course.
So our answer is simple: you set a timer and the message is gone after that time.
What might be the reasoning here that Keybase won't start the timer on the receiving end, yet Signal does?
- Signal never deletes keys until after a message is read, because the key schedule and the message history are closely integrated. So if I send a "30 seconds" disappearing message, but you read it a month later, that will work. Keybase doesn't work that way; we delete keys on a fixed schedule, generally about a week. The "start the clock after sending" rule fits our key schedule better, without creating confusing cases at the one week boundary.
- Keybase is designed to support "very large" groups, like thousands of people. In that setting, a "start the clock after reading" rule would be a problem. It's unrealistic that all N thousand members will ever read a given message, and that would make deletion less reliable.
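The two timer policies described above can be sketched side by side. This is a simplification (real clients also sync state across devices); the function names and the day-based times are illustrative.

```python
# Sketch of the two expiry policies discussed above (simplified).
from typing import Optional


def expired_sender_clock(sent_at: float, ttl: float, now: float) -> bool:
    # Keybase-style: gone `ttl` after the *send* time, read or not.
    return now > sent_at + ttl


def expired_reader_clock(read_at: Optional[float], ttl: float, now: float) -> bool:
    # Signal-style: the clock only starts once the recipient *reads* it.
    return read_at is not None and now > read_at + ttl


# Bob reads a 1-day exploding message a week after it was sent (times in days).
sent, read, now, ttl = 0.0, 7.0, 7.5, 1.0
print(expired_sender_clock(sent, ttl, now))   # True: already gone under Keybase's rule
print(expired_reader_clock(read, ttl, now))   # False: half a day left under Signal's rule
```

The sender-clock rule also answers the Bob question above directly: if Bob checks his chat a week after a 1-day message was sent, it has already exploded.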
And it isn't just random forum discussions, major technologies like secure DNS, secure email, etc were held back years because nobody could agree on compromises needed to make improvements.
You see that video demonstrating the feature? Notice how you can read the content of the message which was supposedly deleted?
This feature just eliminates the messages in case the recipient or their device becomes compromised.
Let's say I work in IT and user Bob forgot his password again and needs a temporary reset. I can message him a temporary password that expires in 3 minutes so that he can log in and set a new one.
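That scenario layers nicely: the exploding message deletes the chat copy, and a server-side expiry deletes the password's usefulness. A minimal sketch of the second layer (the class and TTL value are illustrative, not any real system's API):

```python
# Sketch of the IT-reset scenario: a temporary password that stops
# working after 3 minutes, whether or not the chat message is deleted.
import secrets
import time


class TempPassword:
    def __init__(self, ttl_seconds: float = 180.0) -> None:
        # The value would be sent to Bob via the exploding message.
        self.value = secrets.token_urlsafe(12)
        self.expires_at = time.time() + ttl_seconds

    def check(self, attempt: str) -> bool:
        # Valid only if it matches AND the window hasn't closed.
        return attempt == self.value and time.time() <= self.expires_at


reset = TempPassword(ttl_seconds=180.0)
assert reset.check(reset.value)      # Bob logs in within 3 minutes
assert not reset.check("guess-123")  # wrong password never works
```

Even if the exploding message were screenshotted, the leaked password is worthless once the server-side window closes.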
I don't trust future bad actors who get hold of the device, including the receiver should they turn.
I keep asking for this. Google, Facebook, where is that feature?
- if your or their phone gets taken at a border crossing
- if your or their phone gets taken by the FBI (which is how they recovered encrypted WhatsApp/Signal chats from Michael Cohen's phone)
Months later the relationship turns sour, and they are fired or denied a promotion. They can't then go through the archives any more to take the screenshots.
Without authentication and without keeping the remote party secure, NOTHING will protect you (besides the threat of violence, I guess).
There is currently no secure DNA-based public-key encryption that makes it so only that person can read the message.
You have to show it to whoever is "authorized" at some point. We hope the keys are kept safe to assure us of that.