
Keybase is not softer than TOFU - malgorithms
https://keybase.io/blog/chat-apps-softer-than-tofu
======
el_cujo
"How often do resets happen? Answer: if you're using WhatsApp or Signal, all
the freaking time.

With those apps, you throw away the crypto and just start trusting the server:
(1) whenever you switch to a new phone; (2) whenever any partner switches to a
new phone; (3) when you factory-reset a phone; (4) when any partner factory-
resets a phone, (5) whenever you uninstall and reinstall the app, or (6) when
any partner uninstalls and reinstalls. If you have just dozens of contacts,
resets will affect you every few days."

I guess I don't have "dozens of contacts", but getting a new phone/resetting a
phone isn't really that common of a thing in my circle. I feel like for the
average user, they wouldn't do this with their phone more than like once every
year or two. So I guess if you have like 600 people you talk to on these apps
regularly then that works out to daily, but for me at least this isn't that
big of a deal and was pretty much understood from the outset.

~~~
flafla2
This isn't really the issue. From further down on the article:

""" There's a very effective attack here. Let's say Eve wants to break into
Alice and Bob's existing conversation, and can get in the middle between them.
Alice and Bob have been in contact for years, having long ago TOFU'ed.

Eve simply makes it look to Alice that Bob bought a new phone:

Bob (Eve): Hey Hey

Alice: Yo Bob! Looks like you got new safety numbers.

Bob (Eve): Yeah, I got the iPhone XS, nice phone, I'm really happy with it.
Let's exchange safety numbers at RWC 2020. Hey - do you have Caroline's
current address? Gonna surprise her while I'm in SF.

Alice: Bad call, Android 4 life! Yeah 555 Cozy Street.

So to call most E2E chat systems TOFU is far too generous. It's more like TADA
— Trust After Device Additions. This is a real, not artificial, problem, as
it creates an opportunity for malicious introductions into pre-existing
conversations. Unlike real TOFU...by the time someone is interested in your
TOFU conversation, they can't break in. With TADA, they can. """

The quote you linked is relevant because it means that you can't simply ignore
this problem; resets are _fairly_ common, common enough that you can't just
delete the key-loser's account (for example). However it doesn't have anything
to do with the actual security flaw (if we want to call it that, it's really
more of a UX / messaging problem) being discussed.

~~~
thinkloop
That exchange is not realistic; it's easy to ID family or friends by how they
converse. At worst you can ask a challenge question. The prompt that a
device has changed is perfect for heightening your senses for a quick ID. I
disagree that physical contact is necessary, as TFA (and industry lore) seem
to recommend:

> You must now reestablish identity, and in almost all cases, this means
> meeting in person and comparing "safety numbers" with every last one of your
> contacts

Perhaps the alert could be a little more alerty and in red, reading something
along the lines of: "Hey! Your buddy's safety code has changed; make sure they
sound normal and aren't acting weird and creepy asking for information."

~~~
dabernathy89
How many people are realistically going to pester their friends & relatives
with "challenge questions"? I bet even the majority of folks in the HN crowd
don't/wouldn't.

~~~
thinkloop
I'm not saying you have to know their first pet's name. I'm saying it's
usually pretty obvious through regular conversation if someone is who they say
they are, and worst case, if you're suspicious, you can ask about some shared
past event without them knowing that they're being challenged.

~~~
skybrian
You are basically saying that social engineering never works, but there are
many stories about how social engineering _sometimes_ works.

~~~
feanaro
Why does it have to _never_ work? No technique is perfect.

------
idlewords
Much of the criticism of how gently WhatsApp and Signal handle key resets
misses the mark. Widespread adoption of end-to-end encrypted messaging is an
effective countermeasure to passive collection and blanket surveillance. In
order to get that widespread adoption, you can't be showing people skull-and-
crossbones warnings every time they swap out a SIM card.

Speaking from my experience getting journalists and political campaigns set up
with Signal in 2017-2018, the early scary key-change warnings were off-putting
to people and made them reluctant to continue with the messenger. At the time,
I was in contact with 100-150 people via Signal and quickly ran out of
patience with anyone who insisted on a safety number check. But the UI at the
time encouraged that level of paranoia.

I continue to believe that making key changes as painless as possible for
users is the correct approach as long as there are ways to harden this
behavior in the settings, for the benefit of the far smaller set of people to
whom MITM attacks are a credible threat.

~~~
BostonEnginerd
I agree with your assessment about key change management. That said, I do
like the device history trail that Keybase uses. Keybase has a better multi-
device story - in that it has a multi-device story at all. I understand what
they're trying to do in preserving message history, but I prefer my
conversations to be ephemeral by default.

I'm OK with the compromise that Signal has made with key management -- there
are people that I really care to have private communication with, and people
who I prefer to have private communication with. I verify the former, and
don't bother with the latter unless we happen to be bored together in the same
room.

On a side note, thanks for your efforts over the last two years!

~~~
idlewords
The user base for Keybase and Signal is so different that it makes sense to
have such different behaviors.

Thanks very much for the kind words!

~~~
chb
I would have thought that these two groups would have more in common than not.

Who do you think constitutes these disparate user bases and how are their
perceived expectations different?

------
cyphar
Matrix handles this by exposing the device keys to the user so they can make
decisions about whether to trust new devices (and I believe identity key
changes mean you wouldn't be in your rooms anymore -- but in order to change
identity keys you would have to delete your entire Matrix account on the
homeserver).

If a new device has shown up, your messages will be blocked from being sent
until you verify the new device. To be fair, it is too easy to blaze past the
warning -- and it can happen often in large rooms. As a result, it's a little
bit cumbersome at the moment, but with device cross-signing coming down the
pipe and the new verification system (which is much better than Signal's IMHO
-- you just check both devices have the same string of 7 emoji on their
screen) it's getting a _lot_ better.
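For a flavor of how a short-authentication-string check like this works, here is a toy sketch. The emoji table and derivation below are purely illustrative assumptions, not Matrix's actual SAS spec (which derives the string via HKDF over the verification handshake and uses a standardized 64-emoji table):

```python
import hashlib

# Illustrative 8-symbol table; the real Matrix SAS spec uses 64 emoji.
EMOJI = ["🐶", "🐱", "🦁", "🐴", "🦄", "🐷", "🐘", "🐰"]

def sas_emoji(shared_secret: bytes, count: int = 7) -> list:
    """Map a shared secret to a short emoji string both devices can compare."""
    digest = hashlib.sha256(shared_secret).digest()
    return [EMOJI[b % len(EMOJI)] for b in digest[:count]]

# Both devices derive the same emoji from the same negotiated secret...
assert sas_emoji(b"shared-ecdh-secret") == sas_emoji(b"shared-ecdh-secret")
# ...while a MITM who negotiated a *different* secret with each side would
# (with overwhelming probability) produce a mismatching string on one screen.
```

The point of the design is that comparing seven pictures is something humans will actually do, unlike reading out sixty decimal digits.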

~~~
eeeeeeeeeeeee
I still think Keybase is right — multi-device or some kind of multi-trust
model is best so the key revocations aren’t happening so often. I remember
this problem with PGP and most people did not take the key verification
seriously.

And the problem with SSH that was pointed out in the article is funny now
because of cloud services where servers are constantly being destroyed, so
keys are changing frequently - unless you save and persist the server's host
key in your configuration. I've realized a lot of companies are simply not
doing that, which leads to people straight up ignoring host key verification
in their SSH config.

~~~
swiftcoder
SSH is even worse in many corporate environments, because you also have to
cross an SSH gateway to transit from the corp fabric into the prod fabric.

At AWS this meant we had to hop to a different gateway for each region, and
then onto the (stateless) production host which was recreated from scratch on
every deployment... There's no way a human was going to keep all those keys
straight, and everyone just disabled SSH host verification.

~~~
rocqua
This actually seems like the right use case for SSHFP records. They allow
storing the SSH key fingerprint in a DNSSEC-secured zone.

Normally this is considered rather weak, because of the reliance on DNSSEC.
However, in this case I think it's a very easy step to take that'll be miles
better than just trusting everything.

The stronger alternative is certificate-based SSH, but that is a _lot_ of
hassle, as it requires setting up a CA, handling revocation, and actually
distributing the certificates.
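As a rough sketch of the wiring (hostnames here are placeholders, and the fingerprint is elided):

```text
# On each server, print SSHFP resource records for its host keys:
#   $ ssh-keygen -r bastion.example.com
# which emits zone-file lines like:
#   bastion.example.com IN SSHFP 4 2 <sha256-fingerprint-hex>
# Publish those in the DNSSEC-signed zone, then on clients:

# ~/.ssh/config
Host *.example.com
    VerifyHostKeyDNS yes
```

With `VerifyHostKeyDNS yes`, OpenSSH checks the host key against the SSHFP record instead of prompting on first connect, so transient hosts can rotate keys as long as the zone is updated.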

~~~
toast0
It depends on how transient your servers are, but you can get pretty good
mileage out of a checked in known_hosts file.

------
DoubleMalt
I love Keybase.

Actually I tried to install the app a couple of days ago.

But there is no version on F-Droid, and the version on the Play Store has
Firebase Analytics baked in.

Is there a plan for a clean F-Droid version?

~~~
jxcl
They also have no (as far as I could find) way of installing Keybase without
root permissions on Linux. I tried looking for a way to install keybase
without "sudo dpkg -i keybase.deb" but had no luck. In the end, since the
people I'm working with use it, I had to spin up a VM to install it in so that
keybase wouldn't mess up my Debian installation.

~~~
kbModalduality
Keybase developer here.

Keybase uses root privileges only for making the magic /keybase directory
available, where you can access your KBFS files (the redirector allows
different users on the same system to see their own files). Keybase and KBFS
run as unprivileged daemons (via the systemd user manager where available).

As giancarlostoro mentioned, you can unpack the .deb file and run the binaries
out of there. If you put the binaries in your $PATH, you can even symlink the
systemd unit files to your ~/.config/systemd/user and use the systemd user
manager to manage your custom Keybase install. Note that the KBFS mount will
not be accessible at /keybase, but instead at another location writable by
your user (see [https://keybase.io/docs/linux-user-guide#configuring-
kbfs](https://keybase.io/docs/linux-user-guide#configuring-kbfs)).

Finally, our build script at
[https://github.com/keybase/client/blob/master/packaging/linu...](https://github.com/keybase/client/blob/master/packaging/linux/build_binaries.sh)
builds all components and assets without root and lets you handle the
packaging and install after that.

If you have questions or more custom use cases, feel free to join the
keybasefriends team to talk about it or make a GitHub issue!

~~~
jxcl
Thank you for your detailed response! It would be really nice to have
this information in the official documentation.

~~~
kbModalduality
Added: [https://keybase.io/docs/linux-user-guide#installing-
keybase-...](https://keybase.io/docs/linux-user-guide#installing-keybase-
without-root-privileges).

Let me know (I'm the Keybase user modalduality) if you have anything else to
add or any questions. Cheers!

------
sdenton4
(A very tiny fwiw): you /can/ create a backup in Signal and use it to transfer
seamlessly to a new device, without triggering new safety number checks. The
user flow sucks, but it is possible.

~~~
metildaa
Backups are only possible on Android, and the most recent non-beta Signal
builds have had this feature broken, making the backups you had useless.

Unless you're willing to open a bug and be a pest for 2 weeks (and still have
access to your old phone / leave it registered), I wouldn't plan on retaining
your messages or keys across phones. It's a huge weak point of Signal.

~~~
izacus
Even in that case, you could only back up to device storage, had to write down
(!) a huge set of numbers, and manually copy files around. The whole flow is
awful for any user - at least WhatsApp can sync your profile over
Google Drive.

Last time my phone died, I lost all of my Signal group memberships, because
the app is incapable of transferring those to a new phone without a backup...
and it's also incapable of doing the backup automatically. Those UX choices
continually baffle me - it's like the authors didn't learn anything from the
failure of PGP.

~~~
metildaa
Backups are low priority for Signal, and they don't want to make security
compromising choices like WhatsApp does with backing up.

It'd still be nice if the UX and reliability of restoring from backups could
be improved.

------
unsignedint
The requirement to use a phone number, let alone as an identifier, is a major
complaint I have with many messaging apps. It really limits the usable cases,
as I have plenty of people I would love to interact with but don't
necessarily want to give my phone number.

I love Keybase for this aspect, but something I don't like about it is its
device name handling. They don't allow freeing up old device names, so I
end up having 'MyLaptop', 'MyLaptop 1', 'MyLaptop 2'...

~~~
malgorithms
Oh interesting. I don't think we've talked about this decision publicly, so I
can write about it for a second. Not letting people re-use a device name is an
inconvenience, I admit, but arguably it's not like other cryptography
inconveniences, where people are confused, troubled, etc. We figured people
would say "huh, weird requirement" and pick a different name and move on.

The goal is a 1-1 mapping between devices (keys) and these names. So whenever
we need our UX to talk about a key, it can talk, safely, about it in terms of
device names. Once committed to your chain of signatures, "Laptop-Warhol"
means a specific device key, and it can't be used again. So, for example, if
one of your Keybase installs wants to tell you "oh, Laptop-Warhol just added a
new device, iPhone-Vangogh" then it doesn't need to look like this: "Key
34858234589234895897234598734 added key 90123845890230948234234324."

If Laptop-Warhol could mean multiple devices (keys), well then we'd need to
start talking about the keys. Which is a nightmare for usability.

A lot of this decision was driven by something we've seen with Apple devices.
Every now and then I'd get a popup on my computer - say when updating iOS -
that said something like "you just started using iMessage on a new device,
'chris's iphone'. if you don't know what this is you should freak your shit
out." Well - it has basically said that so many times with the same names over
and over again that I can safely assume it's a near-useless warning.

Note I mean unique to you; 2 different users on Keybase can name their devices
the same.

Generally speaking...it's been a goal from the beginning that names on Keybase
are meaningful. Similarly, if you look up "chris" in our Merkle tree (which
is pinned to Bitcoin), that leads to a deterministic chain of signatures.
Inside that chain, where I mention "work-imac-warhol", you're guaranteed to
see the same answer as I am. So "chris" is as good as a key fingerprint or
safety number. And so is my device name.
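The "deterministic chain" idea can be sketched as a toy hash-linked log. This is an illustrative sketch only: a real Keybase sigchain carries actual device signatures and Merkle-tree pinning, while the version below shows just the determinism property (every observer replaying from the first link computes the same history):

```python
import hashlib
import json

GENESIS = "0" * 64

def make_link(prev_hash: str, payload: dict) -> dict:
    """One statement in a toy sigchain: it commits to everything before it."""
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return {"prev": prev_hash, "payload": payload,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain: list) -> bool:
    """Replaying from the first link recomputes every hash, so two observers
    pinned to the same head necessarily see the same device history."""
    prev = GENESIS
    for link in chain:
        body = json.dumps({"prev": link["prev"], "payload": link["payload"]},
                          sort_keys=True)
        if link["prev"] != prev or \
                hashlib.sha256(body.encode()).hexdigest() != link["hash"]:
            return False
        prev = link["hash"]
    return True

chain, prev = [], GENESIS
for payload in ({"add_device": "Laptop-Warhol"},
                {"add_device": "iPhone-Vangogh"}):
    chain.append(make_link(prev, payload))
    prev = chain[-1]["hash"]

assert verify(chain)                       # everyone replays to the same state
chain[0]["payload"]["add_device"] = "Evil"
assert not verify(chain)                   # rewriting history is detectable
```

Because "Laptop-Warhol" is committed once into such a chain, the name can safely stand in for the key in every UI message afterwards.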

~~~
unsignedint
I understand where you are coming from, but my irritation comes from the fact
that a device cannot be revoked from my account with its name recycled (e.g.
after OS reinstalls). I don't really like naming my device MyMainPhone-2,
because logically it's not my second phone; it's the same device to me. If a
device that's _active_ already had that name, I would agree with your
decision that duplication shouldn't be allowed.

Maybe some people are more irritated by that than others (I'm certainly the
former type); perhaps consider an "advanced" option to remove existing names,
for those who know what they are doing?

~~~
swinglock
You're not the only one annoyed by this.

------
broahmed
Let's not forget what Signal and the Signal Protocol (used by WhatsApp) have
achieved: making end-to-end encrypted chat EASY and accessible for the masses,
for many of whom "security" is password123. It's important in our post-Snowden
world.

~~~
godelski
I'm not sure why this is downvoted. The reason WA is so popular is that it is
easy to use and works over wifi. Signal is not as easy to use (fewer features)
but more secure, so privacy-conscious people like it. But people don't like
switching to Signal because "it is hard". Making a more secure app is good,
but we have to ask: "do we want people using pretty good e2e, or do we want
to make the perfect app first?"

~~~
mikekchar
I don't think your question is as easy to answer as you might be implying. I
certainly don't know the answer. My main concern is: what is the downside to
"pretty good e2e"? Without understanding what that means, we can't evaluate
the situation.

"Good" is better than "best" if "best" is not available seems obvious, but it
certainly isn't true in a lot of cases. If I tell you that X secure and you
trust it to be secure, when it actually has problems -- that might be worse
than me telling you that X is not secure.

People are bad at evaluating risk. If I want to pass notes in class and don't
want my teacher to know what the note says if I get caught, then rot-13 is
probably "good enough". But if I'm a whistle-blower for a government agency,
my security needs are quite a bit higher. We can never make a perfect app, but
I'm not sure I could define what "good enough" looks like for the general
populace. It's completely reasonable to me that different groups have
different opinions on the matter -- and I think that's a good thing.

~~~
godelski
> If I tell you that X is secure and ...

Is Signal or WA in that regime? Literally the only reason I don't use WA is
because it is owned by FB. But as far as I'm aware both are cryptographically
secure. Yes, I'm aware WA has a metadata problem.

So what's good enough? I'd say that if you need a state actor to crack it,
I'll call it good enough, at least for the general populace. Any difficulty
beyond that is a bonus imo.

Clearly we can't get a perfect e2e app. So at what point in time do we say "we
also need other features that people want so that they'll use our app". That
doesn't mean "stop working on encryption" (you can never stop that) but "we're
at a good enough point to start targeting a larger market." I think something
like Signal is there. Stop focusing on the security geeks and bring in the
general public.

------
dcbadacd
The thing I find nicest about this article is that they published the audit
report.

[https://keybase.io/docs-
assets/blog/NCC_Group_Keybase_KB2018...](https://keybase.io/docs-
assets/blog/NCC_Group_Keybase_KB2018_Public_Report_2019-02-27_v1.3.pdf)

------
orblivion
The argument is that people don't bother to keep their verified connections up
to date. But come on, how often do MITM attacks happen? For those rare cases
where people are doing stuff important enough that it becomes a possibility, I
would guess the security-conscious individuals would become more diligent.

For the rest of us, it seems that doing it on occasion is still worth it. As I
understand it, Signal is designed to never indicate over the wire who has
checked safety numbers. Thus, a MITM anywhere on the network runs a risk of
being discovered, which is a cost in itself.

~~~
hopler
What's the point of using an encrypted chat at all if you don't care if it's
hacked?

~~~
octorian
A big benefit of end-to-end encryption is that it makes it impossible for the
service provider to suddenly start doing silent mass surveillance of their
userbase. A solution doesn't need to be 100% perfect to achieve this goal.

Somehow, nearly every article on the subject completely misses this, and
instead keeps moving goalposts on reasonable endpoint security.

~~~
pault
Exactly. I only use signal for chat, and I tell my family and friends that
they must install it if they want to send me text messages. It's not because I
fear being hacked, it's because I don't want highly personal conversations
about sex, drugs, chronic illness, and skeletons in the family closet getting
picked up by global passive surveillance.

~~~
_they
Yep. And Signal is as good as it gets for end users with zero technical
knowledge. I've had many family members and friends install it on their own
and not skip a beat.

------
josh2600
Show us the server:
[https://github.com/keybase/client/issues/6374](https://github.com/keybase/client/issues/6374)

~~~
malgorithms
It _seems_ you (someone from the Signal project?) are actively diverting from
the point, with what is ultimately a security theater request. Keybase's app -
which IS open source - doesn't trust the server at all. We could be running
anything server-side, regardless of what we do or don't publish. Meanwhile,
Signal's story is "you MUST trust our server, over and over again," as the
blog post explains. Unfortunately there's no way to know what's happening on
the server. So being like Signal and publishing your server source is strictly
worse than being like Keybase and not (yet?) publishing server-side source. At
any time, Signal could be throwing in these fake key upgrades, either due to
running other source code on purpose, or being forced to, or just plain
getting hacked. The most malicious Keybase server could not.

This comment may be of interest (we could release server code at some point,
and I will take this as a vote), but I hope people reading this aren't
distracted by Signal's flaw here.

[edit: chilled a bit!]

~~~
throwawaymath
_> But to be clear, you're actively diverting from the point.

...

I get why you showed up here, but you're really not addressing the point of
the post at all, and in fact you're trying to distract with the suggestion
that Signal's publication of that code protects people from this flaw. It
doesn't. At all._

Wow, that's a pretty hostile (and accusatory) response to a fair ask. This is
one (small) step removed from accusing someone of shilling/astroturfing.

Let your product stand on its own merits. If you have a good reason why you
won't open source Keybase's server implementation, own it. Don't undermine
requests to open source the code by publicly accusing people of supporting a
competing product.

The person you're replying to didn't make an argument in favor of Signal - or
any other competing product, for that matter. In my opinion, _your_ response
is actively distracting from _their_ request.

~~~
drexlspivey
Even if they publish their server code there is no way for anyone to verify
that it's the code they are actually running and it would be just a PR move.
If the client implementation is good there should be no way that the server
can compromise any message.

~~~
elagost
It's a step toward people running their own servers, either federated with
Keybase proper, or just as a personal instance. That would be valuable for
quite a number of enthusiasts. Federation (like email/XMPP) is a very
reasonable feature for any forward-looking communication platform.

~~~
sethgecko
Then this is not a request for transparency but for them to change their
business plan

------
miopa
I'm wondering why the article fails to mention that there is a sufficiently
good and easy mechanism to compare and verify the new safety numbers. You just
talk to your peer and read the numbers - and the peer can verify them.

This will fail when AI software gets really good at imitating voice in real-
time during casual talk, but we're not there yet (or - if that is my threat
model, I'll find an out of band way to verify)

~~~
zaroth
A sufficiently good mechanism from a cryptographic standpoint, but which from
a usability perspective totally falls down because people never do it.

It’s a very worthy goal to make these events rare and scary so that users
might actually bother reading those numbers out loud to each other.

------
Pxtl
I'm not a security guy, but wouldn't the most seamless approach be to encrypt
the key collection with a master password and store the encrypted key
collection on the server? So on a new device you'd download the encrypted key
collection and then decrypt it locally?

If they forget their password, they can re-upload it from a validated device
with a new master password.

~~~
syn0byte
Biggest issue is a single point of failure for total access to all devices.
Get/guess/beat out the master password and it's game over on _all_ connected
devices.

~~~
eridius
Not only that, but this also enables offline attacks on the password. If you
can compromise the Keybase server and grab the encrypted key collections, you
can then attack them at your leisure with whatever computing power you can
scrounge up, over whatever time duration you want. And when you break one, as
long as any of the included devices is still on the account, you'd have
complete access to everything.

Requiring existing devices to be actively involved in provisioning a new
device prevents all of this.

~~~
Pxtl
Ah, that makes more sense.

So in Keybase, what does device to device provisioning look like? "Hey, you've
just set up this device - a message has been sent to all your other devices,
OK the message and come back here and you'll be good to go"

~~~
nickik
You just scan a QR code. Or if you want to do an offline device, you can type
in a set of words that represent the key.

------
hprotagonist
I actually _do_ reverify safety numbers out-of-band every time they change.

~~~
threwawasy1228
I would actually like to see some numbers or a survey on how many people
actually do this. Anecdotally, I don't think I know a single person who ever
verifies safety numbers out of band. Of the approximately 50 people I talk
with regularly on Signal (not counting large group chats of people I don't
know as well), not a single person has ever tried to reverify me, nor have I
tried to reverify any of them.

Would be cool to see what the numbers are for reverifications.

~~~
hprotagonist
I normally don't do this in person, but some kind of out-of-band approach is
something I do every time.

One time i got a postcard from a friend with their safety number on it and
nothing else. I recognized their handwriting to complete the loop.

~~~
0xb100db1ade
Would you mind elaborating?

What you described sounded interesting.

------
solatic
Why can't we just have government certificate authorities for the average Joe?

Ultimately, while people may have very little trust for the government in
general, the one thing that people do trust the government for is establishing
identity. We use government ID papers to establish our right to work, to open
bank accounts, to enter legal agreements, and to cross borders. Why should
communications be any different? We don't need to trust the government with
the _content_ of the communications (and we shouldn't), by not providing the
government with the private keys. But why can't I get the government to sign a
public key for me?

The issue it raises is whether people will eventually get locked out of
society if the government decides to get antagonistic with somebody by
revoking their public key and refusing to issue a new one, given a society
where such a scheme is popular. But we don't have any sort of such protection
today - the government can seize your passport, seize your driving license,
freeze your bank accounts. A society in which the government solves identity
issues for the digital age is only a net improvement over the status quo.

~~~
dannyw
Because not everyone wants to communicate under a singular identity.

------
inetknght
> _Similarly, in SSH, if a remote host's key changes, it doesn't "just work,"
> it gets downright belligerent:_

Funny enough, I have ranted to friends/coworkers about sysadmins completely
replacing machines and not telling anyone. How do I know it happens? BECAUSE
OF THIS EXACT WARNING.

------
tosh
If there were an iPad app that worked in landscape mode, I'd be able to switch
most of my groups from other apps to Keybase.

------
xrd
Smart stuff and fun:

"Did you though, or did you just scroll down here?"

------
DINKDINK
If a single party in a chat has their keys reset, that's not a MITM attack,
because a MITM would need to rekey with both parties. If the two clients can
use at least one uncompromised channel to communicate that their
counterparty's keys have been reset, they might be able to detect a MITM.

Keybase's Cozy Street example is not a MITM attack (i.e. an attacker has
inserted themselves between two parties and can get the plaintext); it's just
impersonation.

In a real MITM attack, both Alice and Bob would get rekey notifications;
unless they both confirm out of band whenever they get a rekey notification,
a MITM attack is possible.

I also think that crypto is stuck in the early 90s in thinking real-world
meetups are the only way to authenticate keys. If you know what your chat
friend sounds and looks like, are willing to exchange a video, and don't think
your adversary can fake such a proof, a simple video of you reading your
pubkey/safety number is sufficient. Is that scalably practical? No, but it is
possible.

That said: keybase is doing cool novel work, I commend them for advancing the
state of the art.

------
grenoire
I don't know how much weight such warnings will hold, given how well we know
people ignore cookie warnings and the rest...

~~~
walrus01
Look at the number of non technical end users who will determinedly download
.EXE files, or run them from their mail client, and click through all of the
Windows 10 "do you really want to run this untrusted software?" warnings in
order to successfully install cryptolocker type malware on their computers.

If you give people a way to click "yes/accept/run" and they are determined to
accomplish what they think is their intended task, they will just blow through
any warnings.

~~~
foepys
You don't even need to observe the average end user. Just look at software
developers aka "technical experts" using npm, NuGet, Maven, and all the other
package managers. Digital signatures? Nope, just run the code on your machine,
please. Bonus points for allowing code execution in user context to
"configure" the package and placing executables in $PATH.

npm is exceptionally secretive here about what it will install as
dependencies, as it can reach tens of thousands of packages very quickly.

------
arendtio
Actually, I think it is wrong to call it TOFU, as it simply doesn't require
the user to opt in to anything. Instead, it seems more like the thing the XMPP
people call 'blind trust before verification' [1]. I am not quite sure if it
is exactly the same, as 'blind trust before verification' changes its
behavior as soon as you have explicitly verified the keys.

That way everybody can use somewhat e2e-encrypted messages, but if you really
care about the security, you validate your keys and get real, trusted e2e
encryption.

[1] [https://gultsch.de/trust.html](https://gultsch.de/trust.html)

------
h4t
I've been using Keybase for quite some time now and I absolutely love it. I
suggest everyone give it a serious try and check out some of the popular
public team chats before finalizing your opinion of it. I'm glad I did.

~~~
diaz
Besides chatting with some friends regularly, the best thing for me is how
easy it makes file sharing between multiple parties.

------
hopler
Keybase's UI is still bad. It prominently highlights the "Let them (the
eavesdropper) in" button, and the warning has so much red that it's hard to
read the text.

------
i_am_proteus
Their solution seems reasonable, but why not allow export of an (encrypted)
private key for use on other devices?

~~~
eridius
Keybase allows you (nay, encourages you) to set up a "paper key", which is a
private key that you store offline (e.g. print it out and stick it in a safe).
You can then use this paper key to provision new devices. This way if you lose
all of your devices simultaneously (or just have one device to begin with) you
don't need to go through an account reset to add a new device.

Note that even with this, any new device added using the paper key still
generates its own independent private key. Having per-device private keys is
important to be able to revoke devices, and to be able to track which devices
were responsible for any given action.
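
The scheme eridius describes can be sketched in miniature: a long-lived offline key (the "paper key") endorses each fresh per-device key, so devices can be added or revoked individually. This is an illustrative toy, not Keybase's actual API: real Keybase uses Ed25519 signatures in a public signature chain, while HMAC stands in here because the Python standard library has no asymmetric signing.

```python
import hashlib
import hmac
import os

paper_key = os.urandom(32)          # printed out and kept offline

def provision_device(name: str) -> dict:
    """Generate a fresh per-device key and endorse it with the paper key."""
    device_key = os.urandom(32)     # never leaves the device
    endorsement = hmac.new(paper_key, name.encode() + device_key,
                           hashlib.sha256).hexdigest()
    return {"name": name, "key": device_key, "endorsement": endorsement}

def is_endorsed(device: dict) -> bool:
    """Check a device's endorsement; a revoked/forged key fails this test."""
    expected = hmac.new(paper_key, device["name"].encode() + device["key"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, device["endorsement"])

laptop = provision_device("laptop")
phone = provision_device("phone")
assert is_endorsed(laptop) and is_endorsed(phone)
assert laptop["key"] != phone["key"]   # independent per-device keys
```

The point of the per-device split is visible in the last assertion: each device holds its own key, so revoking one device doesn't invalidate the others.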

~~~
i_am_proteus
To quote Kanye Omari West[0], "I like this."

[0][https://youtu.be/PsO6ZnUZI0g?t=95](https://youtu.be/PsO6ZnUZI0g?t=95)

------
chr1xzy
I can never get keybase to address this article about them:

"Keybase, we have a problem."
[https://freehuman.fr/posts/20238f40da8b01357816236af097d2ae](https://freehuman.fr/posts/20238f40da8b01357816236af097d2ae)

Maybe user /kbModalduality can

~~~
chupasaurus
A happy KB user, let's bring that article down:

1\. A local _service_ runs KBFS, which syncs data with the server. No one
stops you from disabling it on startup and running it only where you need it.

2\. Any app running without AppArmor/SELinux, or outside a network namespace,
could get your real address. The latter is relatively easy to set up: I run a
VPN by default, and all my apps run only inside a namespace that contains the
VPN tunnel device.

3\. Last time I checked I could make my own package.

4\. I don't have time to check it atm.

5\. Link for audit results is somewhere in the comments on this post.

6\. Works with uBlock on.

7\. Some paranoid ranting without looking at how KB works: it's a push/pull
model, and all messages are stored as files you could sync.

8\. Based on previous points.

9\. Each connection is under TLS. Plaintext messages could be read via RAM,
and if someone could read it, KB would be the last problem.

10\. You could make your own.

11\. Fixed. _# ls -l /keybase_ shows the symlink KBFS_NOT_RUNNING pointing to
/dev/null when KBFS is stopped, but shows the correct directories under the
user running the app.

12\. Hello GDPR.

13\. Like everyone in the world does.

14\. Bad outro.

------
peterwwillis
I like it! They should apply this to the problem of TOFU in HSTS. For many
HTTPS sites it's trivial to hijack them by getting a user to switch to a
different device, as the devices' apps often don't synchronize HSTS databases.

------
foxhop
Keybase is awesome.

I wish they would take my money as a customer. I don't want to see them go
away.

------
z3t4
Key rotation is a hard problem. One idea is to host the public key in a DNS
TXT record on a domain, for example yourname.com, which you can also use for
your email and blog.
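
A hypothetical sketch of z3t4's idea: publish a public key as a TXT record on a domain you control, and have clients parse it back out. The `_key` record name and `v=key1` format below are invented for illustration; real schemes in this spirit include OPENPGPKEY records (RFC 7929) and DKIM's TXT-record key publishing.

```python
import base64
import os

def make_txt_record(domain: str, pubkey: bytes) -> str:
    """Render a zone-file line carrying a base64-encoded public key."""
    encoded = base64.b64encode(pubkey).decode()
    return f'_key.{domain}. IN TXT "v=key1; p={encoded}"'

def parse_txt_value(value: str) -> bytes:
    """Recover the public key from the TXT record's quoted value."""
    fields = dict(part.strip().split("=", 1) for part in value.split(";"))
    assert fields.get("v") == "key1"
    return base64.b64decode(fields["p"])

pub = os.urandom(32)
record = make_txt_record("yourname.com", pub)
value = record.split('TXT "')[1].rstrip('"')
assert parse_txt_value(value) == pub
```

The hard part this sketch skips is the trust root: a DNS answer can be spoofed unless the zone is protected with DNSSEC, so the record alone only shifts the TOFU question onto the domain.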

------
throwawaymath
There are a few claims made here which I'd like to see clarified. In the
spirit of transparency, let me declare upfront that I use Signal but I am not
affiliated with them, and I don't really have a dog in the race here. I'll
reference the published NCC security review[1] for this comment. Overall I'm
happy to see a published cryptographic review of this protocol.

First, under "The Full Security Picture" heading in this article, it's claimed
that forward secrecy is supported via time-based exploding messages. Pages 19
and 20 of the NCC report explain that, "The default chat protocol does not
allow for forward secrecy since the same keys can retain indefinitely on a
users device." So forward secrecy is not assured by default under this chat
protocol, is that correct?

The NCC report goes on to say that, "Exploding messages introduce mechanisms
for message deletion and forward secrecy; however, it is not clear to the user
that keys and messages could remain on their device beyond the period
specified during message creation." I interpret this to mean that there _is_ a
way to assure forward secrecy - which is exploding messages - but you're not
making that explicit in this announcement. This seems a little disingenuous to
me because in your FAQ, the first answer criticizes WhatsApp for compromising
forward secrecy using the backup feature, but you don't have forward secrecy
enabled by default in your chat protocol.

Likewise, this announcement makes a point of mentioning how other apps require
you to trust the server due to resets, and why trusting the server is bad. But
page 20 of the NCC report explains that, "While the default Chat encryption
protocol does provide for message confidentiality and integrity, it does not
provide for security in the face of device and server compromise, as keys and
ciphertext are stored for a potentially indefinite period of time." So is it
correct to say that unless users specifically enable exploding messages for
their conversation (which is not the default), they actually _do_ need to
trust the server?

There are also a few drawbacks to the ephemeral messaging scheme NCC found
that I think should be explicitly disclosed, because they don't require too
much technical detail:

1\. There is no deniable authentication on the default chat protocol. While
exploding messages provide deniable authentication, this property fails in a
group with more than 100 participants. It's fair to question whether that's a
realistic place to expect deniable authentication, but it should probably be
called out.

2\. Exploding messages are based on the local client's system clock. Therefore
it's possible for an exploding message to be indefinitely retained on another
device by e.g. manipulating the local time.
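
Point 2 above is easy to see in a minimal sketch: if the "explode" check compares the message's expiry timestamp against the local clock, a client whose clock is rolled back never deletes the message. The function names here are illustrative, not Keybase's actual API.

```python
import time

def make_exploding_message(text: str, lifetime_s: float, now: float) -> dict:
    """Stamp a message with an absolute expiry time at send."""
    return {"text": text, "expires_at": now + lifetime_s}

def visible(msg: dict, local_clock: float) -> bool:
    """Expiry is judged purely against the receiving client's clock."""
    return local_clock < msg["expires_at"]

sent_at = time.time()
msg = make_exploding_message("secret", lifetime_s=60, now=sent_at)

honest_clock = sent_at + 120     # two minutes later: message has exploded
skewed_clock = sent_at - 86400   # clock rolled back a day: message retained
assert not visible(msg, honest_clock)
assert visible(msg, skewed_clock)
```

Since each recipient's client enforces expiry locally, the sender has no way to verify that deletion actually happened; a relative countdown started on receipt has the same weakness against a paused or frozen clock.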

___________________

1\. [https://keybase.io/docs-
assets/blog/NCC_Group_Keybase_KB2018...](https://keybase.io/docs-
assets/blog/NCC_Group_Keybase_KB2018_Public_Report_2019-02-27_v1.3.pdf)

~~~
maxtaco
Keybase affiliate here.

Correct: forward secrecy isn't on by default. We think there's a trade-off
here. With forward secrecy, your old messages won't be visible on a new
device, and users expect that history to be there, since Slack (and others)
make it seem natural. However, you can opt in to forward secrecy on a per-
message or per-conversation basis.

The report says "device _and_ server compromise." Decryption keys never leave
the user's client. What they mean is if: (1) the server's stored data is
compromised; (2) your phone is also compromised; and (3) the messages weren't
marked ephemeral; then the attacker might be able to read past messages, even
if the user tried to delete them (i.e., did Keybase really delete the
ciphertexts?). This line of reasoning is correct and one of the primary
motivations for key ratchets. I don't think the report is claiming that users
need to trust Keybase's server in general. They do need to trust Keybase to
delete messages that are marked deleted, which would mitigate the attack above
if conditions 1 through 3 are met.
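
The key ratchets maxtaco mentions can be sketched in a few lines: derive each message key from a chain key, then irreversibly advance the chain with a keyed hash, so compromising the current chain key reveals nothing about earlier message keys. This mimics the shape of Signal's symmetric-key ratchet; it is not Keybase's actual construction.

```python
import hashlib
import hmac

def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    """Return (message_key, next_chain_key); the caller discards old state."""
    message_key = hmac.new(chain_key, b"msg", hashlib.sha256).digest()
    next_chain = hmac.new(chain_key, b"chain", hashlib.sha256).digest()
    return message_key, next_chain

# Initial chain key would come from a real key exchange; a hash stands in.
ck = hashlib.sha256(b"shared secret from key exchange").digest()
keys = []
for _ in range(3):
    mk, ck = ratchet(ck)          # old chain key is overwritten
    keys.append(mk)

assert len(set(keys)) == 3        # every message gets a distinct key
```

Because the step from one chain key to the next is a one-way function, an attacker who captures the device state after message 3 cannot walk backward to the keys for messages 1 and 2, which is exactly the server-plus-device compromise scenario discussed above.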

~~~
throwawaymath
Thank you for clarifying that, especially the second point. Looks like I was
misunderstanding the report then.

~~~
maxtaco
Thank _you_ for taking the time to read the report!

------
ex3ndr
Having encrypted backups means that you are throwing away PFS.

------
wDcBKgt66V8WDs
Kind of OT, but the belligerent SSH message example they used is good comedy.

------
ReptileMan
Use threema. It has this problem (and the phone number one) solved.

~~~
cypherpunks01
I would argue that the benefit of solving this problem mentioned is far
outweighed by the large downsides of using proprietary software.

------
malgorithms
I don't know the moderation policy on title changes at HN, but I just changed
the title of the post. Internally at Keybase - and thanks to a conversation
with a peer - we've been feeling pretty guilty about calling out a specific
project that we think is basically the gold standard outside Keybase.

We'd rather focus on the positive solution to the problem (which Keybase has
implemented), rather than just pointing a giant finger at any other services
which have the problem we're trying to address. I think I personally will
sleep better tonight this way.

Still we want this conversation to continue.

~~~
sctb
Sleep well! We've updated the headline here.

~~~
malgorithms
<3

------
abhinai
In an ideal world, I would love to use a really secure app for communication.
However, my choices are limited when most of my friends are on WhatsApp or
Signal. That is just the unfortunate reality of the way social networks work.

~~~
idlewords
Both WhatsApp and Signal are secure apps for communication, and your friends
are right to use them.

------
fiatjaf
Very reasonable criticism. I'm now happy for never having used Signal.

