
Darknet Messenger Briar Releases Beta, Passes Security Audit - mwheeler
https://briarproject.org/news/2017-beta-released-security-audit.html
======
slim
I love the fact that the "build from source" section is for everyone, not just
developers. It's illustrated with screenshots.

[https://briarproject.org/building.html](https://briarproject.org/building.html)

------
tptacek
It's ironic that this update plays up how Briar "hides metadata" when the
audit found that the application deanonymizes its users by exposing DNS
lookups during RSS updates.

~~~
grote
The article says: "All the issues found by the audit have been addressed in
this beta release."

~~~
gcb0
so the current version isn't audited?

if they changed the code and design after the audit, then much worse bugs
might be hiding now, until that version is audited.

~~~
gandarojin
wtf :D

------
lawnchair_larry
As someone who does professional security audits, I would just like to say
that there is no such thing as "passing" a security audit. In fact, most pen
testing shops will carefully dance around actually making that claim in
writing for a customer, because they know they are going to look bad when a
bug is inevitably found in code they reviewed (and it's probably a dumb idea
for liability reasons too).

There are certain certifications with falsifiable conditions that can be
marked pass/fail. But, as I'm sure many folks here are aware, these are
incomplete and often completely dubious. They don't purport to be "security
audits".

What a real security audit tells you is that of the (probably 2-4) consultants
that looked at a product for a few weeks (probably 2-6), these were the
security bugs they found.

That alone contains little information, because the skill level and domain
expertise varies greatly among consultants and companies. I can guarantee that
if these results were withheld, and they gave the same codebase to another
reputable outfit, the set of findings would be very different. There would
likely be some overlap, particularly in the most obvious types of bugs, but
bug hunting is way closer to art than science.

I know nothing about this project, and my intent is not to create doubt, but
users of secure messaging apps should understand what an audit is and what it
isn't.

Like other commenters, I was surprised to see 3 days of looking at crypto. It
could be that the crypto is extremely simple and uses a few well understood
APIs in a straightforward way, so this isn't a guaranteed red flag by any
means, but it's a bit unusual.

And like any software, this is a 1 line patch away from being blown wide open.
With every commit, an audit becomes increasingly meaningless. Just ask
cperciva!

And perhaps I'm being cynical, but I always felt like the "conclusions"
section of the audit report has an unspoken purpose of walking back from
calling their baby ugly and keeping a decent rapport to ensure the possibility
of future business. Not that I think what Cure53 wrote was not genuine, but
there are natural incentives to be a little generous there. Again, I'm
speaking from experience writing those sections as well.

Edit: Basically what tptacek said.

~~~
fovc
I was not aware of the tarsnap issue, but for anyone else wondering, I think
it's this: [http://www.daemonology.net/blog/2011-01-18-tarsnap-
critical-...](http://www.daemonology.net/blog/2011-01-18-tarsnap-critical-
security-bug.html)

------
ycmbntrthrwaway
As for the audit[1], how would HTML sanitization on the sender's side protect
the reader? On page 12 they suggest adding "HTML sanitization" in the
onSendClick function. That is as lame as protecting against XSS with
client-side JavaScript: an attacker will simply remove this code and recompile
the app.

[1]
[https://briarproject.org/raw/BRP-01-report.pdf](https://briarproject.org/raw/BRP-01-report.pdf)
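The receiver-side fix this comment implies can be sketched in a few lines
(Python here for brevity; Briar itself is a Java/Android app, and
`html.escape` merely stands in for whatever the reader's client uses to render
text):

```python
import html

def render_message(untrusted_text: str) -> str:
    # Escape on the *reader's* side, at render time. Sanitizing in the
    # sender's onSendClick is useless: an attacker controls their own
    # client and can simply delete that code and recompile.
    return html.escape(untrusted_text)

# A hostile peer sends raw markup; the reader's client neutralizes it.
print(render_message('<img src=x onerror=alert(1)>'))
# → &lt;img src=x onerror=alert(1)&gt;
```

The general rule: sanitize at the trust boundary you control, which for a
messenger is the display path, not the compose path.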

------
captainmuon
This looks interesting, but I wonder how safe it is in the stated use case of
journalists and activists in an authoritarian country. It can use Tor, which
hides whom you are communicating with, but the fact that you are using Tor at
all sticks out like a sore thumb.

The authorities probably just have to flip a switch to put you under closer
surveillance if they see you use Tor. Or they'll just send someone to your
registered address and see what's going on.

What I really think would be cool would be a protocol based on massive
steganography and obfuscation. You would have kernels which tell it how to
wrap data in an innocent-looking container (HTTPS traffic, SMTP, IRC, cat
pictures and recipes over plain HTTP, DNS, ICMP pings, ...). Ideally, you
would have dozens. And they would be shareable between nodes. You could define
them in a DSL, and make them sandboxed and provable (that they round-trip,
i.e. can decode what they encode, and terminate properly - that restricts what
you can do in them though). You could even autogenerate the kernels. The last
two points would require a bit of R&D of course.

The goal would be to be able to create new "protocols" faster than authorities
can learn to detect them. Then wrap a regular encrypted protocol in this
obfuscation layer.
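A toy version of such a kernel, to make the idea concrete (my own
illustration, not anything Briar does): encode one payload bit per line as the
presence or absence of a trailing space, with exactly the round-trip property
the comment wants a kernel DSL to prove. It assumes the cover lines don't
already end in spaces.

```python
def encode(payload: bytes, cover_lines: list[str]) -> list[str]:
    # One bit per line, LSB-first: a trailing space means 1, none means 0.
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(cover_lines):
        raise ValueError("cover text too short for payload")
    return [line + (" " if i < len(bits) and bits[i] else "")
            for i, line in enumerate(cover_lines)]

def decode(lines: list[str], n_bytes: int) -> bytes:
    bits = [1 if line.endswith(" ") else 0 for line in lines]
    return bytes(sum(bits[b * 8 + i] << i for i in range(8))
                 for b in range(n_bytes))

# The round-trip property a kernel DSL would have to prove:
cover = [f"innocent line {i}" for i in range(32)]
assert decode(encode(b"hi", cover), 2) == b"hi"
```

A real kernel would carry ciphertext, not plaintext, and would need far more
care about capacity and statistical detectability, but the encode/decode/round-trip
shape is the part a DSL could actually verify.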

~~~
abrichr
Maybe I'm missing something, but if the protocol were well defined and open
source, it would be trivial to detect, no?

~~~
captainmuon
Not really, the idea would be to hide data by using different amounts of
spaces in text files, in the least significant bits of pixels in images, or in
the access pattern to a certain service. The data looks like legitimate
traffic. You could run the tool on absolutely all traffic, but that would be
computationally intensive. And the data you get out is still encrypted, so
ideally you can't tell if it is random (from extracting data where none is
hidden) or real encrypted data.
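The least-significant-bit trick mentioned above, sketched over a flat list of
8-bit pixel values (a simplification: real image formats, compression, and
detection statistics complicate this considerably):

```python
def lsb_embed(pixels: list[int], payload: bytes) -> list[int]:
    # Overwrite the low bit of successive pixels; each value changes by at most 1.
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def lsb_extract(pixels: list[int], n_bytes: int) -> bytes:
    bits = [p & 1 for p in pixels]
    return bytes(sum(bits[b * 8 + i] << i for i in range(8))
                 for b in range(n_bytes))
```

As the comment notes, extracting from a carrier with nothing hidden in it just
yields the image's own low bits, which is why the embedded payload should be
indistinguishable from random noise.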

Also, you would have dozens or hundreds of kernels, and you could generate
them by analyzing innocent traffic, or hiring a bunch of students to write
them quickly. My idea is that the kernels are not part of the source code per
se, but rather distributed by the protocol. To contact somebody you need to
speak a common kernel, but then they can send you new kernels automatically.
You could come up with a measure of how well kernels survive censorship and
use that to decide which to pass on.

It's a bit like auto updating malware, but for good :-). My only novel idea is
to make a DSL or bytecode for the kernels, so that you can prove that they are
benign and correct, and autogenerate them or use kernels from strangers. I
don't know at all if this is feasible or not, but I have a couple of ideas how
to make it work. Nowhere near a PoC yet, so this is all still wishful thinking
though.

~~~
PeterisP
"the idea would be to hide data by using different amounts of spaces in text
files, in the least significant bits of pixels in images, or in the access
pattern to a certain service" is not appropriate for the claimed use case,
i.e. activists in totalitarian regimes.

In such an environment, the traffic of suspected activists _will_ be analyzed.

Assuming the kernels are open, it's possible to see in analysis of certain
data that "amounts of spaces in text files, in the least significant bits of
pixels in images, or in the access pattern to a certain service" have encoded
information, even if the extracted information looks like random/encrypted
data. At this point you don't have plausible deniability, and rubber-hose
cryptanalysis can be used.

Switching to new kernels happens too late since you don't know when they've
identified a kernel until they start arresting people - it's not like they're
simply going to block it immediately.

i.e., the described service is resistant to _mass censorship_ and automated
filtering, but these use cases actually need to be able to resist attribution
and retaliation, which are quite different problems.

------
siberianbear
I downloaded the beta and installed it, but I guess I need to physically find
a friend who also installed it. I'm not in Silicon Valley, so I doubt that
will happen soon....

~~~
vonuebelgarten
Same problem here. Despite this being the best MITM-prevention strategy, it
will be hard to assemble a group of people.

Maybe allow copying/pasting keys but with a Signal-style post-validation or
something like a PGP wordlist to allow voice-based confirmation?
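A sketch of the wordlist idea, with a toy 8-entry list standing in for the
real PGP word list (which has two 256-entry lists, alternating by byte
position, so each word encodes one full byte and a swapped pair of bytes is
audible):

```python
# Toy list for illustration only; the real PGP word list has 2 x 256 entries.
WORDS = ["aardvark", "absurd", "accrue", "acme",
         "adrift", "adult", "afflict", "ahead"]

def speakable(fingerprint: bytes) -> str:
    # Map each byte to a word. With only 8 toy words this collapses
    # bytes mod 8 and is NOT collision-resistant -- a real list maps
    # each byte value to its own unique word.
    return " ".join(WORDS[b % len(WORDS)] for b in fingerprint)

print(speakable(bytes([0x00, 0x07, 0x02])))
# → aardvark ahead accrue
```

Two people could then read the resulting words to each other over a voice
call to confirm they hold the same key material.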

------
jancsika
When I see that one of the requirements for privacy-preserving software is to
have been in the same physical location as the person I need to connect with,
_while running said software_ , I immediately stop reading and move on to
other things.

I've done this for roughly five years.

Assuming I've never wanted to become a Debian developer, is there any
important piece of privacy-preserving software I've missed out on? Is there
likely to be any important privacy-preserving software I will miss out on in
the next five years?

Edit: clarification

~~~
yjftsjthsd-h
Short of web of trust, which has other issues, how else would you propose to
bootstrap?

~~~
jancsika
Using PKI, "winging it", etc.

Bitcoin - used PKI to download it. Now Bitcoin is reproducibly buildable, so
you can read the forum to see if any zealots notice different hashes (which
they certainly would unless you are personally being targeted in testing out
the software). Make some small transactions to see if it works.

Bitmessage - downloaded it a few days after the initial release. Sent a
message over Bitmessage to the Bitmessage author. Got one back from the
author.

Tor - trust that the directory servers are doing their jobs.

Signal - haven't used it, but if I did I'd piggy-back on phone numbers to
message people I already know.

git - used PKI to initially grab the code, trust my own dev machine as I've
made commits, occasionally posted commit hashes over various secure/insecure
mediums for various reasons (may have done this in person wrt a bug, can't
remember).
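The hash-checking step in the examples above is mechanical; a streaming
SHA-256 check looks like this (the digest you compare against comes from
out-of-band - signed release notes, a forum thread, a friend reading it over
the phone):

```python
import hashlib

def sha256_of(path: str) -> str:
    # Stream the file in chunks so a large download needn't fit in RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage -- filename and published digest are placeholders:
# if sha256_of("release.tar.gz") == published_digest:
#     proceed_with_install()
```

Reproducible builds make this meaningful: many independent builders publishing
the same digest is what lets "read the forum for mismatch reports" work.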

Notice that in all these cases, trying out the software (at least in the U.S.)
does not at all imply that you trust it. You could practice installing Tor 20
different times, on 20 different untrustworthy Windows machines and simply use
it to search for cat pictures. Then, the 21st time, you could take all kinds
of precautions and build a special box just for running Tor, armed with all
the first-hand knowledge about how it works and what its trade-offs are.

I can also completely fuck up something in git, get frustrated, and just
clone it again from a repo I don't have to trust, because I can check the
hashes and go on working.

Requiring physical proximity and a formal key exchange _before I can even use
the software_ simply cannot work IMO. It a) requires special planning,
coincidence, or proselytization to try out a working version of the app, b)
balloons the length of the engineering cycle and makes it hard to just start
over, and c) relies on in-person meetings, which implies a level of trust
between you, your keys, and your smartphones that neither party should take
for granted.

Also, it doesn't scale.

------
hamandcheese
As far as the audit, I feel like 13 days is surprisingly short. I base this on
my experience getting new jobs and familiarizing myself with new code bases.
Maybe I'm slow.

~~~
dsacco
13 days (let's call it two weeks, assuming full person-weeks of time) is not
atypical for an assessment. If you have multiple people working simultaneously
on a two week assessment, you can "comfortably" assess fairly complex
applications.

What _is_ surprising to me is that so little of that time was devoted to
cryptography. For a secure messenger that time should be ratcheted up a bit
(though the security infrastructure and general software implementation stuff
is also very important).

------
softwarelimits
"Darknet" is a brainwashing propaganda term. Please do not use it, thanks.

~~~
thinbeige
What is the right term then?

~~~
Angostura
Difficult to know in this context - what meaning do you think they were
trying to convey, assuming that “designed for illegal use” wasn't it?

~~~
PaulAJ
How about "secure against spying by governments and criminals"?

~~~
Angostura
So 'highly secure' would be better than darknet

~~~
meowface
Not quite, because that leaves out the important fact that it doesn't operate
over typical networks.

------
mxuribe
Wow, I had never heard of this project. I'm very much a fan of the Matrix
protocol and the associated Riot app...but - besides differences in protocol -
I really like the addition of blog posting and RSS feed reading. I mean, this
could sort of take off and become the new basis for social interaction -
beyond just "texting" securely with your contacts. I wish the makers of Briar
and Matrix would somehow combine superpowers into the perfect, unified stack!

Side note: Is there a way to export (basically archive offline) the content -
to be clear, only one's own content, not someone else's? I'd hate to have some
important messages lost if I were to lose my phone. Not saying I want a
central server...simply some method to archive my own stuff for safekeeping -
encrypted, of course.

Otherwise, briar seems really awesome!

------
Tepix
It's not yet available via F-Droid… is it planned?

~~~
goapunker
Yes. Briar will have its own F-Droid repo for the beta.

------
pasbesoin
More "Darknet". I almost passed this by.

I'm glad I took a peek. This is actually interesting to me.

 _...Briar is a secure messaging app for Android.

Unlike other popular apps, Briar does not require servers to work. It connects
users directly using a peer-to-peer network. This makes it resistant to
censorship and allows it to work even without internet access.

The app encrypts all data end-to-end and also hides metadata about who is
communicating. This is the next step in the evolution of secure messaging. No
communication ever enters the public internet. Everything is sent via the Tor
anonymity network or local networks._

------
RRRA
Where is the higher-level comparison of the crypto primitives to, say,
Axolotl, Noise, OMEMO, etc.?

------
bartread
I love the fact that this is a "darknet" messenger service called Briar:
"Black Briar". Now where have I heard that before?

------
JoeCoder_
Why not develop tox instead, which is open source, end to end encrypted, on
more platforms, and seemingly further along in general?

~~~
secfirstmd
Briar is also e2e and open source. It also has a ton of mesh networking
features that Tox doesn't have.

~~~
sldoliadis
Can you say a bit more about the mesh network features of Briar?

I've looked through a lot of the documentation and can't find anything other
than references to bluetooth and wi-fi, which is opaque to me.

I'm kind of wondering about something like briar, but that can connect over a
cjdns network if available... is that what it's doing?

------
raymond_goo
Can someone explain how it deals with routers and NAT? Does it use UDP hole
punching?

~~~
Deathmax
The app hosts a Tor hidden service which other peers can connect to. No NAT
punch through required as Tor will relay messages instead of a direct P2P
connection.
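The topology Deathmax describes can be illustrated with plain sockets (a toy
sketch, not Tor: the point is only that both peers dial *out* to a relay, so
neither needs an open inbound port or any hole punching):

```python
import socket
import threading

# Both peers make *outbound* connections to a relay, which pipes bytes
# between them. Tor layers onion routing and encryption on top of this
# same basic idea; hidden services never accept inbound connections.

def relay(server: socket.socket) -> None:
    a, _ = server.accept()   # first peer dials out (the "hidden service")
    b, _ = server.accept()   # second peer dials out (the "client")
    b.sendall(a.recv(1024))  # forward one message from service to client
    a.close()
    b.close()

server = socket.create_server(("127.0.0.1", 0))  # port 0 = any free port
port = server.getsockname()[1]
threading.Thread(target=relay, args=(server,), daemon=True).start()

service = socket.create_connection(("127.0.0.1", port))
client = socket.create_connection(("127.0.0.1", port))
service.sendall(b"hello from behind NAT")
received = client.recv(1024)
service.close()
client.close()
print(received)  # b'hello from behind NAT'
```

Because every connection in this picture is outbound, it traverses NAT the
same way ordinary web browsing does, at the cost of the relay's latency.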

------
silur
_sigh_ another p2p e2e encrypted messenger-of-the-week released

------
_nedR
How does this compare with the competition? Like Signal, for example?

~~~
niceplayer
I wonder how it compares to Threema. Signal's servers are located in the US,
and the service requires you to provide a phone number, which is a deal
breaker for me.

------
baby
"passes security audit". Is security audit an exam? What does passing mean?

~~~
QAPereo
Purely naively I would guess that it means during whatever audit they ran, no
signs of insecurity were observed. Maybe it would be better to say that it
didn't "fail" the audit?

~~~
sillysaurus3
You can't really fail an audit though. The point of an audit is to make your
application more secure. Using terms like pass/fail just reinforces a sense of
fear where there shouldn't be any.

A pentest consists of an analysis period, typically about a week. Then any
flaws in your app are communicated to you, along with steps to reproduce them.
When you feel you've fixed the issues, a retest is scheduled and the
pentesters verify that each flaw has been fixed.

A healthy application is one that's pentested on a regular basis. Ideally
after every release, though only big companies can afford that.

~~~
QAPereo
>You can't really fail an audit though. The point of an audit is to make your
application more secure. Using terms like pass/fail just reinforces a sense of
fear where there shouldn't be any.

I see, that's a good point I hadn't considered.

------
kobeya
Feedback: darknet has come to mean places where you go buy drugs online, not
p2p applications generally.

~~~
freshhawk
Does it? I know it has that meaning on the evening news, but even on network
television it means something like "the secret internet where crazy stuff is".

I mostly hear it used to mean what it is supposed to mean, but that is rarely
from non-technical people.

~~~
Veratyr
The problem is that as long as the name has that association at all, it's
going to be a way to attack Briar.

Darknet now: Anonymous place drugs sometimes happen

Darknet if it becomes bigger: That place where terrorists and criminals hide,
the FBI and NSA say it's a risk to national security and we have to stop it at
all costs! What do you mean it's just a secure messaging app? I'm not a
terrorist, I don't need anything like that!

~~~
the8472
You could also see it the other way around: your application is not secure
enough if terrorists, dissidents, drug dealers, whistleblowers and child
pornographers don't feel confident enough to use it.

Look at Tor.

~~~
lucaspiller
From an ethical point of view building this software must be difficult. On one
hand you are building something that advances technology and could be used to
help free people from an oppressive government, on the other hand you are also
building something that could be used (and if it works most likely will be
used) to aid acts that we all agree are morally wrong.

~~~
thaumasiotes
Ah, the essential ethical dilemma of building encryption software, Polaroid
cameras, and kitchen knives.

