https://ephemeral.pw/chat/ (Also written in Go)
If you are trusting the server to create or associate identities with keys, the end-to-end encryption is easily subvertible.
The question is - how does the first public key exchange happen? It has to be done outside of the site for it to be secure, and your private key must exist locally on your device - which contradicts the premise of these websites.
How to ensure the server doesn't get a hold of the private key is the issue (can you really trust the code you're running?).
What we need is end-to-end encryption with an open source client that only has to be downloaded and built/installed once (and in such a way that it's verifiably secure - think reproducible builds).
I am sure downloaded clients with end-to-end encryption exist, but it's definitely outside the scope of something as simple as Niltalk.
Is a secure platform really worth anything if it's unverifiable as secure? We have no way of knowing there are no traces left.
I'm genuinely interested in why people feel local clients are more secure than something running in a browser. It's something I came across when writing an SSH client that runs in the browser (www.minaterm.com).
I guess it's the potential for an HTML page to be updated over time so that it no longer reflects an audited version. It seems, however, that this is really a failing of our browsers. Perhaps an external service that verifies the hash of a page would help? But this would need browser support, of course.
A good question.
In order to have end-to-end security, you need some sort of secret that is only known on the end points (possibly negotiated over some sort of key exchange protocol), and it should be impossible for the server in the middle to have the secrets.
There is, therefore, no way to use the web through a conventional browser to create an end-to-end connection that the server doesn't have full access to. Browsers just aren't designed for this use case.
Note that nothing stops you from providing an HTTPS REST interface allowing full end-to-end encryption, used by a client that is capable of having local secrets and does not provide any way for a server to run code against it. It is specifically the browsers making this impossible.

I'd also observe this isn't necessarily fundamental; browsers could be changed to fix this, but... I'm not sure it would be a good idea. Browsers are already insanely complicated security environments that just barely work on the best of days. I'm not sure I want to add "secure-from-the-server secret storage" to the list of things a browser is supposed to be able to do.

(It is also possible certain extensions in the browser have already hacked together this ability, such as the video chat extensions - I haven't studied them to that detail - but AFAIK secure secret storage and key negotiation aren't generically and generally available.)
1) Don't really care about privacy. Might not want their chat on the front page of the papers, but aren't going to go to great lengths to achieve that.
2) Actually care about privacy and are informed. There's not many of these people, but they're trained to be wary of every outside dependency and opportunity for hostile code injection. Crypto running in the browser can be replaced any time you load it if the host is compromised - either in the technical sense or the legal sense. Yes, it could be hashed, but it isn't and there's no mechanism for this nor plans to build one.
Not to mention that the browser itself presents a pretty large attack surface.
That's kind of a shame. It would be nice if apps distributed over the web could be signed the same way they are from repositories.
> Not to mention that the browser itself presents a pretty large attack surface.
As does the operating system itself. I would have thought with a local (likely native) client, you just have one less layer to get through.
This sounds like a theoretical impossibility. The server's source code is by nature closed, and while the server could provide you a copy of the source with a signature, there's really no way for you to verify that the code you've been promised is the code that is running.
I don't see how it's a theoretical impossibility.
You already can distribute signed browser add-ons.
I'd also be less scared if it was running on my own server, but it's not clear to me that this is completely logical either.
If it is could you post a link to a public repo. Thanks :)
PS: The "source" link is in the footer of the website as well.
Here's another room (pass is dontclickdispose):
Last messages were about how the Dispose button is conspicuous and easy to press by mistake.
I'm planning on going in a different direction with the domain: using this functionality for private messaging on a platform that's also set up for chat rooms.
Right now, though, I'm investigating a Node.js and RethinkDB infrastructure, but that's also because I will need to persist data somehow.
Thanks for building this - it at least validates that someone else has similar ideas.
Coupled with absolutely no encryption of the messages in memory, I think "anonymous" would be a better term than "secure" for this.
I do have privacy concerns about this and agree they can eavesdrop if they wish. Increasing the bcrypt rounds from 5 to 15 would in no way help with any of that.
My main complaint, though, is that there's simply no reason to choose such a low number of rounds. Using the exact same code as in this app, 5 rounds takes 3551534 ns/op, 10 rounds takes 3583632 ns/op and 15 rounds takes 3623005 ns/op. In other words, it's only 2% slower to use 15 rounds than it is 5, and the default (10) is less than 1% slower.
> In other words, it's only 2% slower to use 15 rounds than it is 5, and the default (10) is less than 1% slower.
So are you arguing that your complaint is petty or isn't? Because this isn't helping your case.
Overall, your attack scenario is one where an attacker has just enough access to the machine to read data out of the Redis database, but not enough access to read memory in the web server, or at the point before bcrypt has been run in the process.
If Redis were persisted to disk you might have a valid point. As it stands, your argument doesn't make sense: if they can access Redis, they can access the pre-bcrypt passwords, making bcrypt's round count completely unimportant.
No. The unhashed passwords are not stored in redis. What I think you're missing is that there's a significant difficulty gap between connecting to, and reading data from, redis compared to gaining root access and reading arbitrary memory on the server.
> So are you arguing that your complaint is petty or isn't? Because this isn't helping your case.
You make a good point - even if it's not the one you were trying to make - and it's that my benchmark was not particularly helpful, as it measured per operation, not per hash.
You missed the point I was really trying to make, though, which is that the difference between 5 rounds and 15 (your choice, not mine - I probably wouldn't choose 15) isn't that significant when you're doing legitimate stuff, like hashing chatroom passwords. It is significant if you're brute-forcing.
Never claimed otherwise. They are stored in memory though. They're in the web-server process, and the process which actually conducts the bcrypt hashing.
> What I think you're missing is that there's a significant difficulty gap between connecting to, and reading data from, redis compared to gaining root access and reading arbitrary memory on the server.
You don't need to read arbitrary memory on the server, you only need to be in the same scope as the web app runs in.
> It is significant if you're brute-forcing.
If you're in a position to steal the bcrypt-ed passwords in this case, you're in a position to steal the plain text passwords (both in memory, both in the same scope, why waste time breaking bcrypt?).
If the author altered the code so it DID store passwords on the file system medium- to long-term, sure, it might be worthwhile increasing bcrypt's rounds. In the meantime, bcrypt is almost pointless in this case, as the plain text exists in the same execution scope and is accessible to processes with access to Redis.
For someone who has never dabbled with Go, how do I run Niltalk after all of the above is done?
As proven in this very thread, it doesn't really work. The idea that everyone can dispose of the room is interesting, but there should probably be an option so that only the creator (or the first to join) can dispose of the room, for public cases like this one.
So, for instance, I could generate a password, encrypt it with my partner's public key, then paste that in the message box, so theoretically only they could get access to the channel.
And create rooms that are meant for someone, so their public key is the index and their private key decrypts the message to get the password to the channel.
Public key encryption is definitely a good idea and could be an optional feature for an upcoming version.
Alas, you can't really know if the code on Github is actually the same running on their servers.
It doesn't need to be trust-based, and in fact shouldn't be trust-based, because even if I trust you, I also have to trust the people who could coerce or bypass you, or people who could maliciously access/modify your systems.
This is why end-to-end encryption is really the only way to make promises as a server about not reading / storing logs.
Cool project though.
And if you're not a small number of trusted participants (e.g. anonymous participants, not all of whom you trust, or enough people that one might make a mistake or delete before everyone is ready), that's not going to work - see the example rooms being created and disposed of above. They could easily allow two passwords on setup to avoid this, though: one for admins, one for posters.
On the button placement: it really would be better elsewhere. It isn't related to the text submission entry, so it belongs at the top somewhere, along with the sound toggle, which again is a room-level setting.
It would also be nice to disable 3DES ciphers and only allow ciphers with forward secrecy.
don't delete the room! lol
edit: this doesn't work on a public forum. some asshole always deletes it.
The honor and glory of Hacker News users will keep this from being closed.