Show HN: Free, instant, secure, disposable chat rooms built in Go (niltalk.com)
169 points by SatyajitSarangi 917 days ago | hide | past | web | 93 comments | favorite



I only feel safe using end-to-end encrypted chatrooms. Currently, niltalk can read every message. At the very least, AES-encrypting messages with the chatroom's password would reduce reliance on SSL. But it really should use public key crypto for a key exchange between users. This is what's done by other disposable chatrooms:

https://crypto.cat/

https://ephemeral.pw/chat/ (Also written in Go)


The problem with end-to-end encryption is not the encryption but the key-exchange (and especially so for multi-user setups).

If you are trusting the server to create or associate identities with keys, the end-to-end encryption is easily subvertible.


New keypairs would be generated on the client every time you join a chatroom. Another member of the chatroom sends you the shared_key encrypted by your public key. Server knows nothing, stores no keys. Keys exchanged between users.

Javascript crypto is still a problem though: http://matasano.com/articles/javascript-cryptography/

When you re-download the codebase on every use, there is no way to ensure integrity of the code. This is the reason cryptocat ships as a chrome extension, because it is downloaded once. Even with these issues, I'd take javascript crypto + open source over nothing (or just SSL).


> New keypairs would be generated on the client every time you join a chatroom. Another member of the chatroom sends you the shared_key encrypted by your public key. Server knows nothing, stores no keys. Keys exchanged between users.

The question is - how does the first public key exchange happen? It has to be done outside of the site for it to be secure, and your private key must exist locally on your device - which is contradictory to the premise of these websites.


It's asymmetric encryption. Even if the server got a hold of the public key, it would not be able to decrypt the contents.

How to ensure the server doesn't get a hold of the private key is the issue (can you really trust the code you're running?).


The bigger problem is "how do you ensure that the public key the server sent is actually the other user's, and not a MITM?".
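The usual mitigation is comparing a short fingerprint of the public key over a second channel (phone, in person). A minimal sketch, with a made-up `fingerprint` helper; group sizes and digest truncation here are arbitrary choices for readability, not a standard:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// fingerprint returns a short, human-comparable digest of a public key's
// bytes, meant to be read aloud over a second channel. If both sides see
// the same groups, no MITM swapped the key in transit.
func fingerprint(pubKeyDER []byte) string {
	sum := sha256.Sum256(pubKeyDER)
	hexStr := hex.EncodeToString(sum[:8]) // truncated for readability
	var groups []string
	for i := 0; i < len(hexStr); i += 4 {
		groups = append(groups, hexStr[i:i+4])
	}
	return strings.Join(groups, "-")
}

func main() {
	fmt.Println(fingerprint([]byte("example public key bytes")))
}
```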


Exactly, you have to exchange public-keys via another method - which is also potentially vulnerable.


But all forms of exchange are potentially vulnerable, the point of using multiple channels for authentication is to increase the challenge-space for potential attackers. Indeed the chief benefit of public key encryption is that the key can be exchanged over a multitude of channels and a compromise of just some of them does not jeopardize the entire operation. Perhaps we need more authentication systems where this is made implicit, with trust based on the number of different mediums the key is transferred over (or the number of different third party signers).


Use keybase.io


Is there a chat system that leverages keybase.io?


IIRC, PGP is not good for instant messaging.


PGP is acceptable for exchanging a symmetric key, though.


Have two buttons: "vote to keep open" and "vote to close". Colour them orange and blue. Release it to Reddit. Potential viral hit.


Have one button. Release to Reddit. Potential viral hit.


For anyone who doesn't understand this reference, there is a massive social experiment going on at Reddit: https://www.reddit.com/r/thebutton


You can checkout the public repo here: https://github.com/goniltalk/niltalk


Good work. Though I have to say I've seen so many of these web-based "secure, private, anonymous" chat services now, I've lost track.

What we need is end-to-end encryption with an open source client that just has to be downloaded and built/installed once (and in such a way that it's verifiably secure, think reproducible builds).


Author here. This is meant to be something super simple and instantly accessible. Start and finish a conversation in mere seconds if need be with no traces.

I am sure downloaded clients with end-to-end encryption exist, but it's definitely outside the scope of something as simple as Niltalk.


>no traces

Is a secure platform really worth anything if it's unverifiable as secure? We have no way of knowing there are no traces left.


why does the client need to be built locally? Are you inherently suspicious of anything delivered over HTTPS?

I'm genuinely interested in why people feel local clients are more secure than something running in a browser. It's something I came across when writing an ssh client in browser (www.minaterm.com).

I guess it's the potential for an HTML page to be updated over time so it no longer reflects an audited version. However, it seems that it's really a failing in our browsers that this is the case. Perhaps an external service that verifies the hash of a page would help? But this would need browser support, of course.

The only thing I could think of that could be implemented in current browsers was a small stub page which calculates and displays a hash of the HTML/Javascript to be launched. The stub would need to be small enough that a user could manually check that nothing malicious has been added here.


"why does the client need to be built locally? Are you inherently suspicious of anything delivered over HTTPS?"

A good question.

In order to have end-to-end security, you need some sort of secret that is only known on the end points (possibly negotiated over some sort of key exchange protocol), and it should be impossible for the server in the middle to have the secrets.

The core problem is that a webpage is really, really, really designed to be a representation of the server, sitting on a client sandbox. There is no built-in way for a web browser to inject anything into the connection that could be used for a security connection in such a way that the server can't see it.

All the local storage the page has access to, the server has access to. All the cookie data the page has access to, the server has access to. Anything else you can come up with that the page has access to, the server can either read or destructively set by sending down the correct HTTP or HTML.

There's no independent client "context" that can be passively, safely used by the page somehow, and in a world where the page is running javascript provided by the server it's not even particularly clear what could be "used" by the page without being something that the server could "use" by reading, then sending to the server.

There is, therefore, no way to use the web through a conventional browser to create an end-to-end connection that the server doesn't have full access to. Browsers just aren't designed for this use case.

Note nothing stops you from providing an HTTPS REST interface that would allow full end-to-end encryption, used by a client that is capable of having local secrets and does not provide any way for a server to run code against it. It is specifically the browsers making this impossible.

I'd also observe this isn't necessarily fundamental, browsers could be changed to fix this, but... I'm not sure it would be a good idea. Browsers are already insanely complicated security environments that just barely work on the best of days. Not sure I want to add "secure-from-the-server secret storage" to the list of things a browser is supposed to be able to do. (It is also possible certain extensions in the browser have already hacked together this ability, such as the video chat extensions, I haven't studied them to that detail, but AFAIK secure secret storage and key negotiation aren't generically and generally available.)


Users fall into two categories:

1) Don't really care about privacy. Might not want their chat on the front page of the papers, but aren't going to go to great lengths to achieve that.

2) Actually care about privacy and are informed. There's not many of these people, but they're trained to be wary of every outside dependency and opportunity for hostile code injection. Crypto running in the browser can be replaced any time you load it if the host is compromised - either in the technical sense or the legal sense. Yes, it could be hashed, but it isn't and there's no mechanism for this nor plans to build one.

Not to mention that the browser itself presents a pretty large attack surface.


> Yes, it could be hashed, but it isn't and there's no mechanism for this nor plans to build one.

That's kind of a shame. It would be nice if apps distributed over the web could be signed the same way they are from repositories.

> Not to mention that the browser itself presents a pretty large attack surface.

As does the operating system itself. I would have thought with a local (likely native) client, you just have one less layer to get through.


> That's kind of a shame. It would be nice if apps distributed over the web could be signed the same way they are from repositories.

This sounds like a theoretical impossibility. The server's source code is by nature closed, and while the server could provide you a copy of the source with a signature, there's really no way for you to verify that the code you've been promised is the code that is running.


A browser feature would be required that could calculate/display the hash of the delivered code and optionally verify it against a 3rd party server. Ideally you'd want to have particular versions signed as "audited" etc.

I don't see how it's a theoretical impossibility.


You're neglecting the server-side code. If you have access to the full source code to verify it, you're not describing a web service, you're describing a local application that happens to be implemented in a browser.

You already can distribute signed browser add-ons.


I was looking for something like minaterm the other day, trouble is I'd be scared to put my credentials into it. And when I think about it logically that isn't rational (putty can grab my credentials just as easily), but still.


It's not entirely irrational. If putty wants to grab your credentials they have to ship a broken binary that once downloaded exists forever and can be examined and reverse engineered in the wild. Someone running a web service (or someone who has compromised said service) can target a particular user for a single session and the evidence that an attack occurred will only exist until a few caches get cleared.


Yes, and I would also be scared to. It's interesting thinking about why, though. I think there's a significant social/psychological component to the decision.

I'd also be less scared if it was running on my own server, but it's not clear to me that this is completely logical either.


If the code can't change, what's the point of having it be delivered through the browser each time? Aren't you better off saving the bandwidth by downloading it once?


Have you been measuring the server load of this at all since this thread has been open? Very curious as to what that looks like.


Is this open-source?

If it is could you post a link to a public repo. Thanks :)


Author here. Yes of course - https://github.com/goniltalk/niltalk

PS: The "source" link is in the footer of the website as well.


Wait, you're the author but someone else submitted it as "Show HN"? That's... not how it's supposed to work, but hey, at least you're in the thread.

Here's another room (pass is dontclickdispose):

https://niltalk.com/r/8L5MD


And it's gone.

Last messages were about how the Dispose button is conspicuous and easy to press by mistake.


My apologies for being so unobservant :)



I've been killed kicking around the idea of doing something similar, in go, with the domain I own ChatFor.Us

JavaScript encryption, as others have mentioned, is the thing I was planning that's missing from yours.

I'm planning on going a different direction with the domain: this functionality for private messaging, on a platform set up for chat rooms as well.

Right now though I'm investigating a node.js and rethinkdb infrastructure, but that's also because I will need to persist data somehow.

Thanks for building this, at least validates that someone else has similar ideas.


Nice. From my crude and unscientific benchmarks, I've found that Go is able to handle a lot more concurrent WebSocket connections than Node.


I assumed it would. But I'm thinking socket.io would provide better client support, and I have been looking at meteor as a way to get something built fast. Going with a prototype first, as I have a lot less time to do this stuff these days. I'm about equally proficient in both Go and Node, which is to say I can stumble my way to a solution with both.


Not sure how the word killed got in there. Typing on phone


And this is secured how exactly?


Password-protected and no public listing, I assume. Nothing on secure data transfer, though.


The number of bcrypt rounds is extremely low, too[1]. While the Go bcrypt lib will actually accept a cost of 5, that seems an unreasonably low value to me.

Coupled with absolutely no encryption of the messages in memory, I think "anonymous" would be a better term than "secure" for this.

[1] https://github.com/goniltalk/niltalk/blob/master/api.go#L75
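For context on why a cost of 5 is low: bcrypt's cost parameter is an exponent, so the work is proportional to 2^cost. (The hashing itself lives in golang.org/x/crypto/bcrypt; the sketch below just illustrates the scaling with a stdlib-only stand-in.)

```go
package main

import "fmt"

// relativeWork models bcrypt's work factor: cost is an exponent, so
// going from cost 5 to the library default of 10 multiplies an
// attacker's brute-force effort by 32, and 5 -> 15 multiplies it by 1024.
func relativeWork(cost int) uint64 {
	return uint64(1) << uint(cost)
}

func main() {
	for _, c := range []int{5, 10, 15} {
		fmt.Printf("cost %2d -> %6dx the work of cost 0\n", c, relativeWork(c))
	}
	fmt.Println(relativeWork(15) / relativeWork(5)) // 1024
}
```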


Your bcrypt complaint is pretty petty. They aren't storing the hash on disk at all, and the chat rooms are only temporary.

I do have privacy concerns about this and agree they can eavesdrop if they wish. Increasing the bcrypt rounds from 5 to 15 would in no way help with any of that.


It's in a redis database, so it's not all that hard to get at, either, should someone compromise the system. "Not on disk" stops being such a good defence when it's stored in a semi-persistent DB.

My main complaint, though, is that there's simply no reason to choose such a low number of rounds. Using the exact same code as in this app, 5 rounds takes 3551534 ns/op, 10 rounds takes 3583632 ns/op and 15 rounds takes 3623005 ns/op. In other words, it's only 2% slower to use 15 rounds than it is 5, and the default (10) is less than 1% slower.


If someone has an active compromise on a running machine, they can intercept network traffic and bypass bcrypt completely.

> In other words, it's only 2% slower to use 15 rounds than it is 5, and the default (10) is less than 1% slower.

So are you arguing that your complaint is petty or isn't? Because this isn't helping your case.

Overall your attack scenario is where an attacker has just enough access to the machine to read memory in the redis database, but not enough access to read memory in the web-server, or at the point before bcrypt has been run in the process.

If redis were stored to disk you might have a valid point. As it stands, your argument doesn't actually make sense. If they can access Redis, they can access pre-bcrypt passwords, making bcrypt's rounds completely unimportant.


> If they can access Redis, they can access pre-bcrypt passwords, making bcrypt's rounds completely unimportant.

No. The unhashed passwords are not stored in redis. What I think you're missing is that there's a significant difficulty gap between connecting to, and reading data from, redis compared to gaining root access and reading arbitrary memory on the server.

> So are you arguing that your complaint is petty or isn't? Because this isn't helping your case.

You make a good point - even if it's not the one you were trying to make - and it's that my benchmark was not particularly helpful, as it measured per operation, not per hash.

You missed the point I was really trying to make, though, which is that the difference between 5 rounds and 15 (your choice, not mine - I probably wouldn't choose 15) isn't that significant when you're doing legitimate stuff, like hashing chatroom passwords. It is significant if you're brute-forcing.


> The unhashed passwords are not stored in redis.

Never claimed otherwise. They are stored in memory though. They're in the web-server process, and the process which actually conducts the bcrypt hashing.

> What I think you're missing is that there's a significant difficulty gap between connecting to, and reading data from, redis compared to gaining root access and reading arbitrary memory on the server.

You don't need to read arbitrary memory on the server, you only need to be in the same scope as the web app runs in.

> It is significant if you're brute-forcing.

If you're in a position to steal the bcrypt-ed passwords in this case, you're in a position to steal the plain text passwords (both in memory, both in the same scope, why waste time breaking bcrypt?).

If the author altered the code so it DID store on the file system medium- to long-term, sure, it might be worthwhile increasing bcrypt's rounds. In the meantime, bcrypt is almost pointless in this case, as plain text exists in the same execution scope and is accessible to processes with access to Redis.


> All communication happens over SSL. Niltalk doesn't record or log IP addresses, messages, or peer handles anywhere.


Message transmission is over SSL with no logging anywhere.


Yes except it's all plain-text on the server?


Yes, there is no end-to-end encryption as of now, although there is no persistence or storage of any sort on the server.


In RAM only, it looks like. But yes.


How does one run this? I installed Go and Redis, then ran "go get github.com/goniltalk/niltalk", which installed. That command created three directories under my $GOPATH, one of which has a 'niltalk' executable.

For someone who has never dabbled with Go, how do I run niltalk after all of the above is done?


The README has the full instructions. Edit the file config.json and then do "./run" on the terminal.


Thank you. More on the installation steps in a now-closed issue[0].

[0] https://github.com/goniltalk/niltalk/issues/3


Nice idea, sadly only as secure as https.


As all clients need a password to enter a room, the messages could be encrypted with that password. There are a lot of JS libraries that could do this, e.g. Triplesec
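The flow suggested here - derive a key from the room password, encrypt before sending - is straightforward with stdlib AES-GCM. Note the loud caveat in the code: the sha256-based `deriveKey` below is only a stand-in for a real slow KDF (scrypt, Argon2), and all the function names are illustrative.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"errors"
	"fmt"
)

// deriveKey turns the room password into an AES key. WARNING: sha256 is
// a stand-in for illustration; a real client should use a slow KDF such
// as scrypt or Argon2 so the password can't be brute-forced cheaply.
func deriveKey(password string, salt []byte) []byte {
	sum := sha256.Sum256(append([]byte(password), salt...))
	return sum[:]
}

// encryptMessage seals plaintext with AES-GCM, prepending the random
// nonce so the receiver can split it back off.
func encryptMessage(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

func decryptMessage(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(sealed) < gcm.NonceSize() {
		return nil, errors.New("ciphertext too short")
	}
	nonce, ct := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	key := deriveKey("room-password", []byte("per-room-salt"))
	sealed, _ := encryptMessage(key, []byte("hello"))
	plain, _ := decryptMessage(key, sealed)
	fmt.Println(string(plain)) // hello
}
```

The server would then only ever relay the sealed blobs; anyone without the room password sees ciphertext.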



Author here. Right now, it's only as secure as https, but I'll look into JS encryption. It's just a fun project that came out of some Go experiments.


Still would only be as secure as https if the client is downloading your JS crypto lib every visit.


Would it be safe to keep the crypto lib on the client somehow? Browser addon? local storage? How would we do that?



This is an awesome service. Thanks for making it available to everyone. Can I ask what the use case is for this? I talk to my friends using FB messenger or Google chat and my customers using a chat widget on our site, so I'm curious when I would use this.


Thanks! Use cases could be: a quick private convo at a workplace, taking a discussion on a public forum private (like HN or Reddit), talking to strangers (eg: Craigslist) without adding them to your FB or Google Talk, exchanging secrets with your friends without leaving logs on your Google Talk, etc. :)


The problem with taking a discussion from a public forum private is that, in the current state, everyone can choose to dispose of the room.

As proven in this very thread, it doesn't really work. The idea that everyone can dispose of the room is interesting, but there probably should be an option so that only the creator (or the first to join) can dispose of the room, for such public place cases.


It's actually meant for a small group of people to have private conversations and is not really ideal for taking a huge public discussion private. The idea of marking a peer the creator or making the first peer an owner complicates the whole privacy and security aspect.


Perhaps an option where a majority of users (if more than two) need to opt for deleting the room?


To make it even more instant (in terms of UX), I would display the message immediately so you don't get the little delay. From where I'm at, it's about 250 milliseconds from the point I hit ENTER to when I see the text displayed.


Author here. This is how it was meant to be but I somehow overlooked it. Thanks, will implement.


I have opened a room. https://niltalk.com/r/8CKyw I am not going to tell the password though. Let's see how long it lasts!


first off... this is great! I wonder if you could make it so when you create a room, you can attach a message.

So for instance I could generate a password, then encrypt it with my partner's public key, then paste that in the message box so theoretically only they could get access to the channel.

And, create rooms that are meant for someone, so their public key is the index and their private key decrypts the message to get the password into the channel.


Thanks :)

Public key encryption is definitely a good idea and could be an optional feature for an upcoming version.


hn chat time!

https://niltalk.com/r/bfT9W, hn-923732


Disposing of the room does remove it and kick everyone out, indeed. And then the link is invalid. Neat. According to the privacy page, it's all living in RAM only, so in theory there is no logging (https://niltalk.com/pages/privacy). Guess we can check the code and see for ourselves, of course.


> Guess we can check the code and see for ourselves, of course.

Alas, you can't really know if the code on Github is actually the same running on their servers.


Author here. I concur, just like any other open source software running as a hosted service. It has to be trust based.


http://www.daemonology.net/blog/2012-01-19-playing-chicken-w...

It doesn't need to be trust-based, and in fact shouldn't be trust-based, because even if I trust you, I also have to trust the people who could coerce or bypass you, or people who could maliciously access/modify your systems.

This is why end-to-end encryption is really the only way to make promises as a server about not reading / storing logs.


The dispose button is far too inviting where it is. I started to click it thinking it was the submit button till I read the text. Perhaps put it somewhere top right or someplace other than right beside the input box. Also, it's kind of weird that there is no admin for the forum, so any participant can delete it, I assume?

Cool project though.


Yes, that's by design. These rooms are meant to be completely ephemeral and private amongst small groups of peers. It's critical to ensure that anyone who is connected is able to quickly dispose of the room for security / privacy reasons. Once a room is created, there is no "admin" or "host" per se, just a short lived private space.


> It's critical to ensure that anyone who is connected is able to quickly dispose of the room for security / privacy reasons.

And if you're not a small number of trusted participants (e.g. anon participants, not all of whom you trust, or enough people that one might make a mistake or delete before everyone is ready), that's not going to work. See the example forums being created and deleted above. They could easily allow two passwords on setup though to avoid this - one for admins, one for posters.

On the button placement, it really would be better elsewhere - it is not related to the text submission entry, so it belongs at top somewhere, along with the sound, which again is a forum-level setting.


looks like the room got deleted. hn chat time over!


Really interesting to read through the source code and get an idea of how you're using Go to write APIs, thanks for sharing!


Deciding on a pattern for writing HTTP APIs in Go was a bit of a chore. Ended up using the `pat` library for chaining middleware. Quite extensible and lightweight. Also, using context to pass objects through the request chain is a neat trick.


This site uses insecure 1024 bit Diffie-Hellman parameters for Diffie-Hellman key exchange! Please fix!


Why was this comment downvoted? The NSA has built custom hardware to crack 1024 bit DH in a few days[1], so the site owner really should regenerate the DH parameters and use 2048 bits.

It would also be nice to disable 3DES ciphers and only allow ciphers with forward secrecy.

[1] http://blog.erratasec.com/2013/09/tor-is-still-dhe-1024-nsa-...


https://niltalk.com/r/h8XLk pw hnchat

don't delete the room! lol

edit: this doesn't work on a public forum. some asshole always deletes it.


Aaaaand it's gone


Let's keep this one open: https://niltalk.com/r/PEiMn hnhnhn

The honor and glory of Hacker News users will keep this from being closed.


gone.


I'm curious to see if this takes off, how long before the powers-that-be start to say something.


If the powers-that-be are annoyed, they'll just require the Niltalk operators to log the messages. It's not like they're encrypted end-to-end.



