You can then use idb.filesystem.js to add API support for Firefox etc. Search the file above for "is_chrome" for a few idb.filesystem.js-specific quirks.
Looking at that page, it looks like Firefox will ship with support in version 50?
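For anyone curious how the polyfill slots in, here's a rough sketch (assuming idb.filesystem.js is already included on the page and exposes the usual requestFileSystem/TEMPORARY shims; the `is_chrome` check below is a generic user-agent sniff, not the exact code from the file referenced above):

```js
// Hypothetical browser check, similar in spirit to the "is_chrome" quirks mentioned above.
var is_chrome = navigator.userAgent.toLowerCase().indexOf('chrome') > -1;

// With idb.filesystem.js loaded, window.requestFileSystem exists in Firefox too
// (backed by IndexedDB); Chrome exposes its prefixed native implementation.
window.requestFileSystem = window.requestFileSystem || window.webkitRequestFileSystem;

// Ask for ~1GB of temporary (non-persistent) storage.
window.requestFileSystem(window.TEMPORARY, 1024 * 1024 * 1024, function (fs) {
  console.log('Got a filesystem:', fs.name, '(native Chrome implementation?', is_chrome + ')');
}, function (err) {
  console.error('requestFileSystem failed', err);
});
```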
I'm using idb.filesystem.js on https://www.sharedrop.io, so only a very small part of the transferred file is stored in memory. But without asking users for permission (i.e. using non-persistent storage) you "only" get ~4GB (I'm not sure of the exact limit; I tested it with files up to 1.5GB).
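As a rough illustration of that approach (a sketch, not sharedrop.io's actual code): each received chunk gets appended to a file entry in temporary storage and can then be dropped from memory, so only the in-flight chunk is ever held in RAM.

```js
// Append one received chunk (an ArrayBuffer or Blob) to a file in temporary storage.
// `fs` is the FileSystem object obtained from requestFileSystem above.
function appendChunk(fs, fileName, chunk, done) {
  fs.root.getFile(fileName, { create: true }, function (fileEntry) {
    fileEntry.createWriter(function (writer) {
      writer.onwriteend = done;
      writer.onerror = done;
      writer.seek(writer.length);       // jump to the current end of the file
      writer.write(new Blob([chunk]));  // flush the chunk to disk
    });
  });
}
```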
A short story on that as a Defcon 22 badge competitor: when we reached the stage where we got the "Lorem ipsum" page, we first noticed that a bunch of the lines did not follow the "Lorem ipsum" text exactly and had strange capitalization. So we thought that the difference between the expected "Lorem ipsum" text and this text was the clue... We eventually figured out that if you pasted the entire block into Google Translate, something strange would pop out (it was relevant to another hint - https://www.defcon.org/1057/SarangHae/ - and then was useful again with what that email address returned).
Looks like Google updated their Latin translator to completely break the puzzle :)
Yup, it's possible. It doesn't appear to be the case here though, especially if the files have any sort of lifespan that isn't dependent on the user's browser staying open. Anyhow, I built something similar over at rtccopy.com, which does use WebRTC.
Chrome Canary is the only Chrome version that has working SCTP (reliable) data channel support at the moment. It's broken (and undetectably so) in every version before that (you can try; the website won't stop any version, it'll just display warnings).
I did have unreliable data channel support initially, but as both Firefox and Chrome now support reliable channels, I see no reason to keep that overhead/extra code around. Hopefully working reliable data channels in Chrome will reach the stable version soon!
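For reference, with the standard API a data channel is reliable and ordered by default; the old unreliable setup needs explicit options. A minimal sketch (the channel labels and the STUN server are just example values):

```js
var pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
});

// Reliable, ordered channel -- this is the default with SCTP-based data channels.
var reliable = pc.createDataChannel('file-transfer');

// Unreliable, unordered channel (the kind of extra code path mentioned above):
// drop ordering and give up after zero retransmits, UDP-style.
var unreliable = pc.createDataChannel('lossy', {
  ordered: false,
  maxRetransmits: 0
});
```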
- The only external JS loaded on this site is Google Analytics. Feel free to block it using something like NoScript :)
- WebRTC data channel connections on this site don't rely on the (optional) OTR layer alone. They have DTLS enabled within the browser; OTR just adds an authentication layer that DTLS currently lacks. So even if the OTR implementation here were completely compromised, the only possible attack would still be a MiTM on the DTLS channel.
- It's open source (https://github.com/erbbysam/webRTCCopy), so it's available to be hosted elsewhere and all of the libraries used could be re-downloaded.
It looks like this is leaking the room name to Google Analytics. After thinking about this more, I'm going to go ahead and remove that. I should be able to monitor the server itself to make sure it isn't getting overloaded.
Even if Google weren't currently sucking up this information, it would still have been a wise decision to remove it. They can change their JavaScript at any point without you noticing and start logging it. Malicious intent not required.
No, I currently just manually keep the site in sync with the repository. Not exactly the most professional system, but it got the job done. I'm going to look into that (I'm not sure exactly how it works, but I do need to keep a node.js server running as well for WebRTC negotiation), as well as potentially just installing git on the server and having it sync with the repository.
A minor note on 1. - rtccopy.com does (optionally) use OTR in JavaScript on top of the DTLS channel in order to guarantee identity (something not currently guaranteed by the DTLS channel alone).
Hi, I'm the author of rtccopy.com (https://github.com/erbbysam/webRTCCopy). Having just updated the site to support reliable connections, I was not aware that SCTP supported larger "chunk"/message sizes. Do you have a recommendation for a maximum size for Chrome or a spec for this?
I also still base64 encode files... I should probably update that as well.
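For what it's worth, the base64 step can be skipped entirely by setting the channel's binaryType and sending ArrayBuffers directly; a rough sketch (the chunk-reading helper and its parameters are just an illustration):

```js
channel.binaryType = 'arraybuffer';   // receive binary frames as ArrayBuffers

// Read one slice of a File and send it as raw binary -- avoids the ~33% base64 inflation.
function sendSlice(channel, file, offset, chunkSize) {
  var reader = new FileReader();
  reader.onload = function (e) {
    channel.send(e.target.result);    // e.target.result is an ArrayBuffer
  };
  reader.readAsArrayBuffer(file.slice(offset, offset + chunkSize));
}
```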
As far as I know, the only issue with large messages is that while they are being sent, they reside in memory. But that's true even if you break the message up into smaller pieces and call send() on all the chunks at once. So it's really a matter of how much data you pass to the browser via send() calls at any given time: the more you pass, the more gets buffered in memory.
I'm not really sure what the optimal chunk size is, but I think the real goal is to keep the buffered amount low but non-zero. Lower means less memory used, but zero means you missed out on bandwidth you could have used. Choosing a chunk size probably doesn't have a large impact on that, but you won't know for sure until you try different sizes. It would be interesting to see an article that experiments with different chunk sizes and buffered amounts and compares the results.
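A sketch of that idea (the chunk size and high-water mark below are untested guesses to tune, not recommendations): keep queuing chunks until bufferedAmount crosses a threshold, then back off and poll until the browser drains it.

```js
var CHUNK_SIZE = 64 * 1024;    // 64 KB per send() call -- arbitrary starting point
var HIGH_WATER = 1024 * 1024;  // pause once ~1 MB is sitting in the browser's buffer

function sendFileChunked(channel, buffer) {  // buffer is an ArrayBuffer
  var offset = 0;
  (function pump() {
    // Keep the buffered amount low but non-zero: stop queuing while the browser catches up.
    while (offset < buffer.byteLength && channel.bufferedAmount < HIGH_WATER) {
      channel.send(buffer.slice(offset, offset + CHUNK_SIZE));
      offset += CHUNK_SIZE;
    }
    if (offset < buffer.byteLength) {
      setTimeout(pump, 50);    // poll until bufferedAmount drops below the threshold
    }
  })();
}
```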