WebWormHole: Send files quickly using WebRTC (webwormhole.io)
636 points by pvsukale3 on April 29, 2020 | 159 comments

This is fantastic! Really nice work :)

The nice thing about WebRTC is this works (pretty much) everywhere! Someone could throw up Python/Android/iOS/Go/Web/C++ Clients really easily. That is really exciting.

Also just a HUGE fan of NAT Traversal/P2P in general. The less dependence we can have on others for sharing our data the better.

> The nice thing about WebRTC is this works (pretty much) everywhere! Someone could throw up Python/Android/iOS/Go/Web/C++ Clients really easily.

Huh, what did I miss? How do you make a WebRTC client in the language of your choice? The last time I checked for C++, I only found answers like "look at the Chrome source". lol.

Ooh, these are very cool. Do you know what sorts of things people are building with them?

This is only a personal anecdote, but I've had two occasions to use it at work:

- I wanted to set up SSH access to some machines on a complicated network in a shared office, over which I had no control. After trying various NAT traversal hacks, I realized WebRTC could do all of that for me, and set up SSH over WebRTC. Mine was a bunch of hacks, but some people seem to have built it properly now, e.g. https://github.com/mxseba/rtc-ssh

- I built some tech demos that needed to run on multiple OSes and use the webcam. Initially I relied on OpenCV, but I needed to dockerize things, and outside of Linux webcam device passthrough is a pain. Instead I made a simple webpage fetching the video feed and talking to a Python backend (using this great library: https://github.com/aiortc/aiortc), and it worked nicely. It was also surprisingly easy to set up.

I maintain https://github.com/pion/awesome-pion which has some of the things people in the community have built, and companies who are willing to publicize their work.

Lately I am seeing a lot of stuff in the broadcast space (mixer.com), robotics, and IoT. I am really enjoying the teleoperation stuff as well; people are remotely controlling cars/robots with WebRTC.

I'm also curious about what's being built.

Also https://github.com/RainwayApp/spitfire if you're making C# apps


Where be at the Node.js verzhun?

Thanks! There's already a Go client: https://github.com/saljam/webwormhole

I was really excited to see you are using Pion (I am the creator). If there is anything I can do to make it better, I am happy to help :) You should also join https://pion.ly/slack and share your project! I also posted WebWormHole on https://twitter.com/_pion.

I see you are a Recurser (me as well!). I was Summer 2012, I think? Are you doing this as a project during your batch? I would really love to do a talk/deep dive on WebRTC for Recursers sometime; you should float the idea to Sonali and get me in :p

Pion is great. Thanks for making it!

Definitely give a talk if you can. It's all remote currently!

Was it you on the GoTime podcast? If so, the enthusiasm for WebRTC and Go just motivated me a ton. Kudos.

Yes I was! Thanks for listening :)

I really want to get more Go developers into WebRTC/P2P; that was my goal for this year. It is hard to convince conference organizers that WebRTC is more interesting than all the DevOps stuff! I am fighting the good fight though.

Doing podcasts, beating the drum, and building these novel things on WebRTC will get you there. Keep it up!

WebRTC is great, but in practice there are still a lot of hurdles to implementing it, especially on mobile.

What makes this really interesting to me is that it uses a Go implementation of WebRTC on the server side. When I was playing with multiplayer networking in the browser ~1.5 years ago, there really didn't seem to be a lot of options for WebRTC servers. Great to see some progress here.

Also, it sounds like it's using streaming rather than loading the entire file, which would give it an advantage over https://file.pizza, which is what I usually recommend for p2p transfers.

If you like these types of tools, but don't require p2p (or can't use it because of NAT), I'll also plug my own https://patchbay.pub, which will let you do streamed transfers with nothing but curl.

I stumbled upon patchbay the other day and I thought it was really cool, now I just need to find a good excuse to toy with it.

Just a heads up: the homepage still references the old index.html, which you say in your blog article might have the wrong paths. Indeed, looking at the docs, all paths require a mode.

Do you have an example of where it's incorrect? I'm pretty sure it's the right index.html. Note that it's still very similar to the blog post.

There is no mention of the req/res flow (someone could POST to /req, expecting it to be an MPMC queue).

In the docs it seems MPMC queues should start with /queue, but it turns out that anything that's not /pubsub or /req,/res just works, so is it really needed?

Also, the docs don't specify that pb-method is available to get the requester's method in the request/respond protocol.

Anyway, those are just minor things. Thanks for the service, it really rocks.

Thanks! Several good points.

> In the docs it seems MPMC queues should start with /queue, but it turns out that anything that's not /pubsub or /req,/res just works, so is it really needed

/req/res was developed after the initial launch, since I had the idea later. It represents the most general form of the entire concept, since you can tunnel essentially any HTTP traffic through it. The plan is to change the default protocol to /req, which is why I changed the examples to /queue, to make the transition smoother. Lately I've been going back and forth on whether it's a good idea to make the switch, since most of the time /queue is what I want, and using /req/res involves complexity that isn't really in the spirit of patchbay. But just today I decided another project I'm working on will need the full HTTP capabilities, so I think I'm going to pull the trigger on it in the next couple weeks. There are several independent implementations of the /queue-style approach, so I think it's ok for patchbay.pub to take a slightly more feature-full approach.


You're right that the MPMC queue is a specific case of req/res where the responder doesn't switch to a channel for replying, so it does make sense to switch to it; nothing will be lost. You could even use the pubsub protocol by putting a query param (pubsub=true), instead of reserving a whole path prefix just for this protocol. I'd still keep pubsub because it's useful in the general case, especially since your initial itch to scratch, poor man's notifications, is done in pubsub mode.

(Side question: why use pubsub for notifications? You wouldn't want to lose the notification if no one is listening on the consumer side... but you also want to possibly send it to multiple consumers at the same time. Maybe there's space for something a bit different, like "As a producer I want to block until at least one consumer is here; if there are multiple, send to all of them".)

The only concern I'd have is that in the general case of req/res there's no "easy" cli tool to parse the request headers and a potentially streaming body, so it's harder to do a 1-liner (or a 5-liner) to process the input.

Thanks for the feedback

> You're right that MPMC queue is a specific case of req/res where responder doesn't switch to a channel for replying so it does make sense to switch to it; nothing will be lost

Not quite, unfortunately. The current implementation of req/res assumes the first path segment is the responder "id", and everything after that is the path to the file on that responder. So responders will shadow things and cause potentially unintuitive behavior for users who just want an MPMC queue. There may be ways to mitigate that though. I haven't thought it through.

> You could even use the pubsub protocol by putting a query param (pubsub=true), instead of reserving a whole path prefix just for this protocol

That's actually exactly how it worked originally. Once I started adding more protocols, I switched to the /proto/ API since it makes it clear right at the beginning of the URL how it works, whereas query params are at the end of a potentially long path. Still not 100% sure about this though. I've been thinking about switching to a pb-proto={res,queue,pubsub} param.

> (Side question: why use pubsub for notification ? You wouldn't want to lose the notification if no one is listening on the consumer side... but you also want to possibly send it to multiple consumers at the same time.

In practice I actually haven't been using pubsub for notifications. MPMC is almost always what I need. Since the chat example is mostly a toy, I'm really not sure pubsub is earning its complexity cost.

I suppose pubsub is still useful for streams of events (like webhooks) where it's not necessarily a disaster if the event gets dropped, but you definitely don't want the sender piling up blocked requests.

> Maybe there's space for something a bit different, like "As a producer I want to block until at least one consumer is here; if there are multpile, send to all of them")

That's an interesting idea. You'd still need a separate protocol for it, because you have to read the entire message into memory in order to send to multiple requesters, but it could be useful for sure.

> The only concern I'd have is that in the general case of req/res there's no "easy" cli tool to parse the request headers and a potentially streaming body, so it's harder to do a 1-liner (or a 5-liner) to process the input.

Yes, it pretty much requires a real script. I'm tempted to pull it out into a completely separate thing, but it turned out that MPMC is almost completely a subset of req/res, so it felt like a lot of duplication.

Thanks for taking the time to reply!

> I suppose pubsub is still useful for streams of events (like webhooks) where it's not necessarily a disaster if the event gets dropped, but you definitely don't want the sender piling up blocked requests.

Yeah, it seems to me the semantics here are not so much pubsub but rather "at-most-once". I think that kind of thing makes sense for frequent updates where you mostly care about the most recent value, like pings from a temperature sensor or similar.

> You'd still need a separate protocol for it, because you have to read the entire message into memory in order to send to multiple requesters

Actually, related to the previous point, that's an at-least-once thing: if there are one (or multiple) consumers, send to all of them; if there are none, wait for the first one, and once the first one is connected, send to it. There wouldn't be a need for much serializing in memory.

Regarding req/res: it feels like there's some overlap with the world of CGI; it's basically the same issue. Maybe it's possible to reuse or extract some of the existing libraries?

thoughts on send.firefox.com?

Seems like a solid option, and maybe better for recommending to people in general. Is it p2p? The nice thing about patchbay is that pretty much all servers ship with a client (curl), so you don't have to worry about installing anything even for transferring large files. It's even lower friction than SSH/rsync with ssh-agent because you don't have to type the destination path. Just run the receiver in the directory where you want the file to go. I used patchbay to transfer a 2GB file earlier today. It looks like this:


(sender)
curl https://patchbay.pub/anders/file.bin --data-binary @file.bin

(receiver)
curl https://patchbay.pub/anders/file.bin > file.bin

It's 2020, and people are elated to discover that it is possible to transfer a file directly between two systems on the Internet.

True story: I was giving a guest lecture on network virtualization at UCI and demoing ZeroTier. One student came up afterwards and asked me how traffic could flow between systems without "a cloud." Evidently the idea that data could just go directly from point A to point B was so utterly, completely foreign that they weren't aware the Internet could be used this way.

I blame NAT.

Treating end-users like second-class netizen consumers trained people to "need" the cloud to do perfectly normal peer-to-peer things.

I'm not sure I would blame NAT. You can easily disable that.

What you can't disable is the asymmetry of consumer internet connections (upload << download) and the fact that most consumer devices are not running (or connected to the internet) 24/7.

> I'm not sure I would blame NAT. You can easily disable that

This is predominantly a USA mentality. In the rest of the world widespread use of carrier-grade NAT predates mobile networks by decades.

Many residential ISPs don't hand out public IPv4 addresses or require extra payment for them. Some of those ISPs got their first IP block (or even single address!) from someone and never bothered with the whole "ask IANA for addresses" thing. It is multi-layer NAT all the way down.

Hmmm I suppose my view is strongly biased then. I've lived outside the US most of my life and have always had my own IPv4 address at home.

And sure, as IPv4 addresses are now exhausted, carrier-grade NAT is getting increasingly common. But I would have said the issue started way before that.

> In the rest of the world widespread use of carrier-grade NAT predates mobile networks by decades. […] Some of those ISPs got their first IP block (or even single address!) from someone and never bothered with whole "ask IANA for addresses" thing.

Do you happen to have a source here? Because carrier-grade NAT predating mobile networks by decades is news to me.

Also security: if consumer PCs were open to the internet, they would be constantly getting breached in even bigger numbers.

Consumer PCs are not the real issue by and large, IoT crap that can't even be updated is way more problematic. Having NAT by default helps screen out attacks to those devices.

Not NAT. Firewalls. You don't need NAT to have a firewall.

Yup. There is no NAT in my home, but there is a firewall. Every device in my home has a public IP, but some of them aren't allowed to talk to the outside world, or are restricted in who can/can't talk to them and how.

If I may ask, what's your setup? What router (and what software on it) are you using?

A huge part of the internet cannot just disable NAT. I experience carrier-grade NAT on my LTE connection. I only get a single public IPv4 address from the two ISPs available at my apartment. How do I "disable" NAT when I only get a single IP but have many devices to connect?

Anything that requires a manual step will be at a huge disadvantage compared to something that doesn't. People have better things to do with their life than read router manuals.

That's true. But this is a general problem. Even if you weren't sitting behind a NAT, you would still have to harden your firewall and so on if you wanted to run a server at home. So it's definitely not without work either way.

Carrier grade NAT...

Sure, when you're talking about mobile devices. (Or has it already become a thing with DSL, too?) In any case, the issue started much earlier, I'd say.

Maybe it's not very common everywhere, but here in Germany I have seen multiple ISPs deploying DS-Lite, which means you get CGNAT for IPv4. What's worse: if you demand a public IPv4 address you will get it, but you won't get IPv6 connectivity anymore. Why? No idea... Interestingly, if I use my own DOCSIS modem I get a true dual-stack connection, so this is not a technical problem for them per se. However, if you do that they will force you to use VoIP instead of the IMHO way more stable PacketCable you would get with their modem...

I have fiber with carrier grade NAT for some reason...

I blame Windows taking 20 years to include ssh.

My ISP really hates servers, to the point that they block ports below 1024 IIRC, even though I have a real dynamic public IP.

This is all about charging business customers 3-4X more for the same service. They don't want businesses to get residential class connections. Business links are unblocked.

Similar to one of my family members: she thought the Internet was a "thing" you just put stuff into, and that it was available to all. She had no idea Facebook has an actual computer somewhere receiving her queries.

"Drop.io was nominated for the Technical Achievement Award at the South By Southwest 11th Annual Web Awards in 2007." https://en.wikipedia.org/wiki/Drop.io

It keeps happening.

yay for UCI! Really interesting product in ZeroTier by the way.

But yes, I think there is a massive gap in knowledge here. It's apparent that a lot of the students aren't really interested in CS, just trying to get a degree and a solid job. I think with the competitiveness of college these days kids have lost the freedom to be curious or actually learn about the things that interest them.

I also think there's a wide divide in the quality of teachers, and it's a well-known problem within the department.

WebRTC is problematic if you're using a VPN service, with the VPN client running on the local machine. Quoting BrowserLeaks:[0]

> IP address detection using JavaScript. Starting work on WebRTC API, the web browser communicates with the STUN server and shares information about local and public IP addresses even if you are behind NAT and use a VPN or Proxy. This tool will show if your real public IP is leaking out.

However, if you run the VPN client on the router, there's no problem, because the local machine has no public IP address, just LAN and VPN interface addresses.

0) https://browserleaks.com/

Local IP Leak SOLVED:

Chrome (and maybe other browsers too) no longer shares the local IP address. It shares an mDNS address instead, generated and registered locally by the browser. That address is only useful if the peers are on the same network; otherwise it's useless. The local IP used to be provided to all peers, including malicious websites, and browsers that support mDNS have stopped doing that.

Public IP Leak in VPN SOLVED:

In Chromium version 48+, you can set webRTCIPHandlingPolicy to default_public_interface_only, which means that any VPN proxy will carry the WebRTC media (over UDP if it supports UDP, or else over TCP, which impacts transmission quality).

Your VPN provider just has to provide a Chrome extension to do the above or advise you to do that yourself. That way, the VPN's proxy IP address is what's visible to STUN, not the user's public IP address.

There is also a more elaborate way around it, but the above should work.

Local IP leakage has been fixed! WebRTC uses mDNS candidates now, so there is nothing that shows your 'local IP' anymore.

For 'Public IP' that sounds like a VPN configuration issue. Your WebRTC agent should be routing the STUN requests through the VPN (and getting that public IP). But this affects all software/protocols, so I don't think it's fair to ding WebRTC for this!

[0] https://bloggeek.me/psa-mdns-and-local-ice-candidates-are-co...

OK. But the thing to do is test, using https://browserleaks.com/ or whatever.

Sorry, it took me a while to catch up on the latest developments in WebRTC and write my response. If I had seen yours first, I would not have replied redundantly.

I like the approach of encrypting locally, uploading to the cloud, and sending the decryption key via a link.

That's the way Firefox Send does it.

It's open source, so you could run an instance of it if you wanted to.

This is a very different (and equally valid) use case. It does mean somebody (in this case Mozilla) has to spend money on storage to deliver the service.

Wormhole-style systems don't need to store the data because it's flowing from the sender to the recipient live.

For the tech-savvy who like to use the CLI, check out: https://github.com/timvisee/ffsend

I love that it uses chunks and streaming to transfer the file. So many of these just try and load the entire file at once so you can’t transfer much.

Haven't looked through the code yet. How does it handle stream backpressure with WebRTC?

WebRTC has built-in mechanisms to check how many bytes are buffered (RTCDataChannel.bufferedAmount), and you can set a low water mark (RTCDataChannel.bufferedAmountLowThreshold) so that an event handler (RTCDataChannel.onbufferedamountlow) fires when the buffered amount drops below that threshold, at which point you can resume sending.

WebRTC is sort of a combination of low-level and high-level APIs, but the ability to control backpressure ends up being very useful.

Thanks. Does bufferedAmount work better than it does on WebSockets[0]? I haven't had much luck with that.

[0]: https://github.com/websockets/ws/issues/492

Interesting note: this guy's choice of PAKE, CPace, was chosen about a week ago by the CFRG for use in IETF protocols. CPace is new, but that's a big vote of confidence for it.

Indeed! saljam asked me about a month ago what PAKE he should use with a Go implementation, and since there isn't a canonical one I put together a CPace implementation on top of ristretto255: filippo.io/cpace. It's a wonderfully simple algorithm when instantiated with a prime order group such as Ristretto.

There are some implementation notes in the README: https://github.com/FiloSottile/go-cpace-ristretto255

Nice spot, here's a link to the IETF draft spec for CPace mentioned.


IETF post announcing the chosen candidates


Candidate selection process


Implementation using libsodium https://github.com/jedisct1/cpace

Hey, the libsodium guy! Thanks a million for your work on that; I've really enjoyed using it. I actually ran across this the day after the CFRG meeting and was happy to see a respected implementer had already written a C version. Would you say it's mature enough to use yet?

Is this in the wasm version?

This is neat, but it seems like unlike with "real" Magic Wormhole, the server here can capture files by surreptitiously manipulating JS.

Absolutely true for the web interface if loaded from https://webwormhole.io. I'm open to any more suggestions here! https://github.com/saljam/webwormhole/issues/13

Someone mentioned the command line client. One can also build and serve the html/js/wasm from anywhere and it should still work, even with the same signalling server. It has pretty lax CORS for this reason.

IPFS would be a solution here, since the files are content-addressed. You'd have to fetch them locally, since a gateway could still manipulate the content, but it's easier to find a gateway you trust.

Forgive my ignorance, but how would an IPFS gateway interfere here? If you have the hash of the js file you need, you can verify the gateway gives you the right one, correct? Or are you referring to the case where IPNS is used so the actually content at the address can change?

If you go to the hassle of verifying the hash, yes, that's fine. I was talking about just loading and using the page, which can be tampered with (because the hash checking happens on the gateway).

You can host the code on your own server if you don't trust someone else's. The code is BSD licensed. It should work on a static website like GitHub Pages.

There is a native client as well, you don't have to use JS!


Although of course the people supplying your "real" Magic Wormhole might have surreptitiously altered that software to capture the file too...

I think these are similarly likely and have similar (but not identical) mitigations if you're worried you would really be a target for such shenanigans.

How exactly would they alter the desktop client I installed from a git clone?

If you wrote the client then they couldn't alter it, but then if you wrote the WWH site then they couldn't alter that either, so there's no difference.

If you're running code from a clone of somebody's git repo you're vulnerable to anything they did to that code, just as if you're running code from a web site you're vulnerable to anything they did in that site.

There are marginal differences, and I'm guessing the one you're really excited about is maybe the web site changes moment-by-moment to introduce and remove betrayal mechanics whereas your git clone doesn't change moment-by-moment. Of course that cuts both ways - bugs can be fixed in the site immediately and your clone doesn't magically get bugfixed.

But mostly I'm arguing these are the same problem: Do you trust some well-wisher who has seemingly no reason to betray you? You probably should, life is too short.

I'm worried about incentives and accidents. We've seen chrome plugins get sold to spammers after getting popular. We've seen AWS credentials accidentally leak into git repos. These are cases where the site might be ok one day and start serving something malicious the next. I do think installing a cli tool via your distro's package manager insulates you from these types of risks.

I would not let employees at my company use an externally hosted site like this to share secrets. I would have no problem if it were hosted internally by the company.

Nice, I made one of these a few years ago http://passfiles.com

Yours is a bit more polished than mine though. I didn't use QR codes either just good old fashioned urls.

There seem to be hundreds of these sites. They all do the same thing. Off the top of my head I can remember https://file.pizza

file.pizza and friends work on webtorrent, which doesn't do end-to-end encryption

EDIT: just learned that WebRTC actually does end-to-end encryption by default, so I'm wrong

I'm not as familiar with RTCPeerConnection as I'd like to be. Does it use the STUN server to get its real IP, and after that establish a completely peer-to-peer connection, so the web server has no further interaction with the peer stream?

If any of that is wrong, please enlighten me, I didn't realize peer to peer connections could be as simple as this.

That's right. When establishing a connection, each side enumerates a bunch of "ICE candidates", which is everything from your local LAN address to your outside-the-NAT IP discovered via STUN servers (with some restrictions for privacy reasons).

Once the ICE candidates are exchanged, each side starts spraying the other with STUN messages, working through ranked "candidate pairs" that could potentially make a connection, until one succeeds.

This is simplified. There are mechanics like "trickle ICE" and fallbacks to proxying via TURN servers.

Then there's the notion that this is all part of a WebRTC "standard", which is laughable because no browser follows the wild collection of RFCs that supposedly make up the standard; the only reason any of it works is an unwritten general consensus about what's required.

That is right!

The only extra thing STUN does is establish a hole punch. It isn't enough to just learn your public address; sending the STUN request also opens a temporary "port forward" back to the peer.

Last I checked (long ago) there is the option for a TURN fallback as well.

Neat, looks like a different backend and frontend implementation of the very similar magic-wormhole[0].

Now I wonder if anyone has made a web frontend of the original.

[0] https://github.com/warner/magic-wormhole

I made a minimal web API for some experimental stuff I was trying:


Not really hardened for production usage.

> ...it uses WebRTC to make the direct peer connections. This allows us to make use of WebRTC's NAT traversal tricks, as well as the fact that it can be used in browsers.

But I'm assuming it can't break through all NAT routers, right? A good portion of people still won't be able to use this?

A service usable by everyone would require STUN and TURN servers to be set up, no?

Or has WebRTC made advances I'm unaware of?

> But I'm assuming it can't break through all NAT routers, right? A good portion of people still won't be able to use this?

> A service usable by everyone would require STUN and TURN servers to be set up, no?

Anecdata, of course, but I haven't been able to reliably use any of these WebRTC based file transfer services (file.pizza, instant.io, etc etc). Testing mostly between two computers on the same subnet. Sometimes they work for a little while, at surprisingly low speeds (for two computers connected to the same wireless access point), sometimes I can let them sit for an hour and never get a connection. I've learned to not even bother trying them, it just wastes time.

That said, magic-wormhole (the original) works fine between the same devices, so maybe I'll see if something is somehow different about this implementation.

Edit: ah yes, this service hangs indefinitely on "connecting". You love to see it. (Firefox on Linux - firewall disabled specifically for this test - and Safari on macOS)

Edit: seems to be working in Chrome (Linux) to Firefox (Android). Not sure what the difference is.

Also didn't work for me on Firefox/Linux, between 2 tabs.

Hmm, maybe this is a classic "didn't bother testing on Firefox" situation. I wonder if anyone on a different OS can confirm.

I developed it mainly on Firefox on macOS. I'd love to figure out why it didn't work for you. Do you get anything on about:webrtc while trying to connect?

Thanks for the reply. I did the following for you:

1. Opened about:webrtc, clicked "start debugging".

2. Opened a WebWormhole on one tab.

3. Copy / pasted the code into WebWormhole on another tab. Got something like "invalid key".

4. Tried steps two and three again. Got the endless "connecting" message this time.

5. Stop debugging. No log file /tmp/WebRTC.log was created, so I clicked "save page". Used sed to replace my public IP address with x.x.x.x and uploaded here for you:


I hope this gives you enough information to fix the problem. I'd like to be able to use these tools too. I suppose it could be addon related, but another user confirmed the problem for Firefox / Linux. It would be useful to be able to detect various problems and report them to the user instead of hanging on "connecting".

This uses STUN servers to help it poke through NATs. (That's what I mean by "WebRTC's NAT traversal tricks")

There's no TURN server set for this, but it shouldn't be hard to add one. There are NATs where you'd need one to relay all the traffic, but these seem to be relatively rare nowadays. If anyone has any actual statistics on these I'd appreciate it!

> but these seem to be relatively rare nowadays

AT&T 5G uses Symmetric NAT. It's not rare if you have an iPhone or iPad with cellular. No way to do P2P without relaying traffic unless you want to "guess" the randomized port number, and, on that front, there are NAT-device-aware algorithms that can make that process faster.

We were promised IPv6 would make NATs unnecessary, but I believe service providers use NATs not simply to conserve the IPv4 space but to actively discourage using the service to host your own servers.

This man told the truth!

They are zealously pushing the "ever increasing speeds" of questionable benefit for the user - what for? So that commuters could watch 8k 120fps video while on a bus? Or rather to gather all kind of sensor data in real-time, audio and video included, from their human oil wells? To strip off people's clothes with millimetre wave imaging?

But making it easy for people to run their own home/mobile servers, share and cooperate without govporate oversight is clearly not on their agenda.

It's amazing what would be possible if NAT wasn't a thing. We will get there. Someday.

We are going backward. Newer 5G and fiber deployments where I live offer only IPv4 with carrier grade NAT. No IPv6, and no real IP unless you ask for one. (Not sure how long they will offer that to non-business subscribers.)

Which one? Which STUN server are you using?

The website uses Google's.

On command line it's an option and Google's is default. I'd like to make the signalling server also a STUN server at some point.

Oh that's interesting... I had no idea there were publicly available STUN servers like that.

But way back in 2014 a Google employee does seem to have confirmed that it's free to use, though it comes without guarantees.

[1] https://groups.google.com/d/msg/discuss-webrtc/shcPIaPxwo8/F...

I don't have any hard numbers, but I have heard of a ~85% ICE success rate without TURN. But you are right, in some cases WebRTC will fail without TURN. Just no one wants to pay to run those servers :)

I would love to see TCP hole punching in ICE, but it sounds like it is super hard to get right.

Consumer internet does a lot better; I bet lots of those failures come from government/military/medical networks.

You always need at least a STUN server, and in my experience that 85% isn't remotely true. For example, STUN-only never worked for me from mobile internet (I only tested some German providers).

> Just no one wants to pay to run those servers :)

No, it's the users that don't want their (meta)data inspected by a random third party in transit. This is why I don't use FilePizza.

How would TCP hole punching even work? TCP has state and a handshake; UDP doesn't.

UPnP can be used to set up port forwarding if the NAT gateway is configured correctly.

Sure, but I think it's a bit different: UPnP is like a remote control for your firewall.
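The question above about TCP is apt: TCP hole punching requires both peers' SYNs to cross in flight ("simultaneous open"), which is timing-sensitive and part of why it's hard to get right. The UDP trick that STUN/ICE rely on is much simpler because there is no handshake: both peers just fire datagrams at each other's public endpoint, and each outbound packet opens the NAT mapping that the other peer's packet then matches. A loopback-only sketch of that "both send first" pattern (no real NAT involved here, so this only illustrates the shape of the exchange):

```python
import socket

# Two UDP sockets standing in for the two peers. On the real internet, each
# would first learn the other's public ip:port via a signalling channel
# (e.g. the wormhole server), then both would start sending at once.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b.bind(("127.0.0.1", 0))

# Each peer fires a datagram at the other's endpoint ("punching").
a.sendto(b"hello from a", b.getsockname())
b.sendto(b"hello from b", a.getsockname())

print(a.recvfrom(1024)[0].decode())  # hello from b
print(b.recvfrom(1024)[0].decode())  # hello from a
a.close()
b.close()
```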

Very nice! I'm assuming this is based on the wonderful "Magic Wormhole"? Is it actually using that program under the hood?

No, it doesn't use any of the same protocols that Magic Wormhole uses.

I'm curious how this compares with the Dat project: https://docs.dat.foundation/docs/intro I've had issues using that with networks throwing NAT errors, but I need a secure P2P file solution for large data transfers. Wondering if this will do the trick.

So, I tested this but it was blocked by our firewall.

Why not generate the QR code client-side?

Folks that care about P2P want to see zero HTTP requests to your server after loading the basic resources.

The QR code is generated client side.

Why do I see a network transaction for it?

Chrome shows "blob:" URLs as network transactions, but they're not.

Anyone else get something like this? https://github.com/saljam/webwormhole/issues/27 I was running `ww send ~/Downloads/myfile.bin` and my friend went to recv it from browser.

This makes me ridiculously happy. Magic Wormhole is one of those tools that works so well that you want to use it even when you don't need to. So happy to see something like this, so I don't have to install wormhole on my wife's or my son's computers to send them stuff.

It is annoying that it removes the ID from the URL. It would be nice to bookmark my own code and I can just open it on multiple devices whenever I want to transfer a file. However I need to do a bit of gymnastics with the QR code to grab the URL.

Codes are intentionally single use, to limit the brute-force vector. And currently only two peers can connect at any given time. It would be interesting to figure out how to make it work with more than two peers!

This is nice and all, but would it be possible to make it a single-file static HTML page of a few kilobytes instead of the 2.7 MB https://webwormhole.io/util.wasm?

I want to host it on my router.

I've just used it. It's absolutely fantastic. Saved me from sending a 500MB file through a sharing service or having to create S3 temporary buckets or whatever other complicated method. Simple, works, perfect. Thank you!

file.pizza is another similar project

I once tried sending a 3 GB file through it, then kept wondering why my entire system became so sluggish. Turns out it loads the entire thing into RAM... The file didn't go through either.

I hope this one isn't like that.

Does HTML5 even let you read a file without loading it all into memory?

Edit: Looks like this is it. https://developer.mozilla.org/en-US/docs/Web/API/ReadableStr...

Edit 2: And yes, this is using it. https://github.com/saljam/webwormhole/blob/master/web/main.j...

You can also do this manually (and on older browsers) by creating a FileReader and only loading new chunks after old ones have been transferred. With async APIs like WebSockets and WebRTC this typically requires implementing your own backpressure to avoid blowing up browser memory. See for example how omnistreams does it[0].
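To illustrate the backpressure idea in a language-neutral way, here is a sketch with a made-up `FakeChannel` class standing in for a data channel's buffered-amount counter (all names here are hypothetical, not from WebWormHole or omnistreams); real browser code would asynchronously wait for a "buffer low" event rather than draining synchronously:

```python
import io

CHUNK_SIZE = 64 * 1024        # read this much per iteration
HIGH_WATER = 256 * 1024       # pause while this much is still unsent

class FakeChannel:
    """Stand-in for a data channel with a bufferedAmount-style counter."""
    def __init__(self):
        self.buffered = 0
        self.sent = bytearray()
    def send(self, chunk: bytes):
        self.buffered += len(chunk)
        self.sent += chunk
    def drain(self):
        self.buffered = 0     # pretend the network flushed everything

def send_file(fileobj, channel):
    """Read fixed-size chunks, pausing whenever too much is queued unsent."""
    while True:
        if channel.buffered >= HIGH_WATER:
            channel.drain()   # real code would await a "low buffer" event
            continue
        chunk = fileobj.read(CHUNK_SIZE)
        if not chunk:
            break
        channel.send(chunk)

data = bytes(range(256)) * 4096   # 1 MiB of test data
ch = FakeChannel()
send_file(io.BytesIO(data), ch)
assert bytes(ch.sent) == data
print("transferred", len(ch.sent), "bytes in", CHUNK_SIZE, "byte chunks")
```

The point is that memory use stays bounded by the high-water mark plus one chunk, regardless of file size, which is exactly what the naive "load the whole file, then send" approach fails at.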

[0]: https://github.com/omnistreams/omnistreams-filereader-js/blo...

Hm, apparently what you found is something else (i.e. https://github.com/saljam/webwormhole/issues/5).

There is still an open bug for large file transfers: https://github.com/saljam/webwormhole/issues/4

That's great! Inconsistent large-file support is also what has prevented me from using the various predecessors to this so far.

Same experience here. Every website I tried so far died when sending large files because it apparently loaded the whole thing into RAM, and then some. That might be fine on a system with 32 GB of memory, but my laptop with 8 GB dies when trying to send a 7 GB file.

The JavaScript implementation is creating a blob with a URL pointing to it so the user can save the file. I might be wrong, but I think that all the data is in browser memory before it is saved.

Browsers are pretty restrictive about writing to the file system.

An incomplete list of such projects: https://news.ycombinator.com/item?id=22274981

Do these other ones have native clients? WebWormHole has a Go client, so you can run it everywhere Go works! Unix/Windows/Mobile/Web covers a decent number of platforms :)

Is there a size limit on the files? This was the issue with all the WebRTC file-sending websites I've seen so far.

We need a distributed DB based on WebRTC.

Just curious, why do we need this?

I would love to see zero-barrier entry to creating a website (like Twitter / name your favorite site that is not a static site) that can run on people's laptops just by visiting a static JavaScript/HTML/CSS webpage. For this, you would need some decentralized backend storage solution akin to a SQL or NoSQL database.

IPFS is not a database, perhaps one could be built on top but then who pins the data and who runs the gateway? IPFS gateways are particularly confounding at this juncture. Until IPFS gets native browser support for pinning and gateways it cannot be the storage layer for a decentralized database.

This kind of technology could help people take back the internet (imo), and WebRTC goes a long way toward that goal. Even WebRTC doesn't deliver the full promise of what I am suggesting, however: it still requires a server that knows the IP of the other party so that you can connect directly (discovery). This would also need to be decentralized (somehow); perhaps peer discovery alone could be done over a bootstrapped P2P DHT or something equivalent.

IPFS probably has everything you need and a lot more

Check out GunDB.

Who pays for the STUN and TURN servers?

TURN servers are expensive, but there are plenty of free STUN servers. For example, Google offers free STUN servers.

Has the pandemic made ipv6 more popular somehow?

Of course. IPv6 is much more widely deployed to home networks, whereas businesses tend to go out of their way to disable it. So when people are at home all over the world, whether that's the Christmas period or this present crisis, it bumps up IPv6 numbers slightly.

Had to look it up because I did not know:

A STUN server is used to get an external network address. TURN servers are used to relay traffic if direct (peer to peer) connection fails.

Could someone share common practical usages of these servers today, to help me better grasp them as a starting point?
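As a concrete illustration of the STUN side: the server tells you your public address by returning it XOR'd with a fixed magic cookie, in an XOR-MAPPED-ADDRESS attribute. A small sketch of decoding that attribute value (IPv4 only, using canned data rather than a live query):

```python
import ipaddress
import struct

MAGIC_COOKIE = 0x2112A442  # fixed STUN constant from RFC 5389

def parse_xor_mapped_address(attr_value: bytes) -> tuple[str, int]:
    """Decode an IPv4 XOR-MAPPED-ADDRESS attribute value (RFC 5389 §15.2)."""
    _, family, xport = struct.unpack("!BBH", attr_value[:4])
    assert family == 0x01, "IPv4 only in this sketch"
    port = xport ^ (MAGIC_COOKIE >> 16)            # port is XOR'd with cookie's top 16 bits
    addr = struct.unpack("!I", attr_value[4:8])[0] ^ MAGIC_COOKIE
    return str(ipaddress.IPv4Address(addr)), port

# Canned attribute value for 203.0.113.7:54321, as a server would encode it.
encoded = struct.pack("!BBHI",
                      0, 0x01,
                      54321 ^ (MAGIC_COOKIE >> 16),
                      int(ipaddress.IPv4Address("203.0.113.7")) ^ MAGIC_COOKIE)
print(parse_xor_mapped_address(encoded))  # ('203.0.113.7', 54321)
```

The XOR step exists because some NATs rewrite any literal IP address they spot inside packet payloads; obfuscating it with the cookie keeps the reply intact.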

The Twilio page you posted was good.

I really like BlogGeek.me for WebRTC info. Here's an entry on the various servers involved.


Seems to work well sending images from my phone, but please let me select multiple files at once.

How secure is this?

From the GitHub repo (original caps):


I’ll rephrase my question - how secure is this attempting to be?

Conceptually it's the same design as Magic Wormhole though all the technologies are different.

It's just a PAKE; then you do a file transfer encrypted with the key you agreed on via the PAKE.

PAKEs are very human friendly: they leverage a relatively weak secret (like "Monopoly Vegetable") that humans can deal with to agree on a good-quality secret (like an effectively random 128-bit AES key), in such a way that both parties find out if the other party doesn't know the weak secret.

Because humans are easily bored, you can use rather weak secrets safely - it's a natural rate limit. An adversary who guesses almost right ("Cluedo Animal?") only gets told they're wrong, and after maybe two or three more attempts the legitimate parties are annoyed and refuse to keep trying, so their adversary is foiled.

Machines wouldn't naturally use something like this because if a machine has a secure channel to another machine it can just move the 128-bit AES key, not waste time with some weaker human-memorable secret.
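To make the PAKE flow concrete, here is a toy sketch in the style of SPAKE2, one well-known PAKE construction. This is illustrative only: the parameters are far too small to be secure, and it is not the actual scheme or parameters WebWormHole uses. Each side blinds its Diffie-Hellman share with the password; only if both used the same password do they derive the same strong key.

```python
import hashlib
import secrets

# Toy SPAKE2-style sketch with insecure parameters - illustration only.
p = 2305843009213693951          # 2**61 - 1, far too small for real use
g, M, N = 3, 5, 7                # M and N are public protocol constants

def pw_exponent(password: str) -> int:
    """Map the weak human secret to a group exponent."""
    return int.from_bytes(hashlib.sha256(password.encode()).digest(), "big") % (p - 1)

def key_from(shared: int, transcript: bytes) -> bytes:
    """Hash the shared group element with the transcript into a session key."""
    return hashlib.sha256(transcript + shared.to_bytes(8, "big")).digest()

def run_pake(pw_a: str, pw_b: str):
    wa, wb = pw_exponent(pw_a), pw_exponent(pw_b)
    x, y = secrets.randbelow(p - 2) + 1, secrets.randbelow(p - 2) + 1
    X = (pow(g, x, p) * pow(M, wa, p)) % p    # Alice -> Bob
    Y = (pow(g, y, p) * pow(N, wb, p)) % p    # Bob -> Alice
    transcript = X.to_bytes(8, "big") + Y.to_bytes(8, "big")
    # Each side strips off the password-blinded constant, then exponentiates,
    # so both arrive at g**(x*y) iff their passwords matched.
    ka = pow(Y * pow(pow(N, wa, p), -1, p) % p, x, p)
    kb = pow(X * pow(pow(M, wb, p), -1, p) % p, y, p)
    return key_from(ka, transcript), key_from(kb, transcript)

k1, k2 = run_pake("Monopoly Vegetable", "Monopoly Vegetable")
assert k1 == k2                      # same weak secret -> same strong key
k3, k4 = run_pake("Monopoly Vegetable", "Cluedo Animal")
assert k3 != k4                      # wrong guess -> keys disagree
```

Note the rate-limiting property described above: a wrong guess just produces mismatched keys, revealing nothing beyond "not it".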

This technology won't hide the IP addresses of those communicating.

A passive on-path adversary learns the size (perhaps not exactly but at least close) of the file transferred.

And of course an active adversary can prevent the file transfer by spamming the service with nonsense.

Come on, it didn't work with a 5.5 GB file :P

The command line version shouldn't have any trouble with large files. There's https://github.com/saljam/webwormhole/issues/4 to fix the web version. :)

ok will give it a try :)

It happily loops forever transferring an empty file :D

consuming enormous CPU resources
