I've used this for years when passing large files between systems in weird network environments, and it's almost always flawless.
For some more exotic testing, I was able to run my own magic wormhole relay[1], which let me tweak some things for faster/more reliable huge file copies. I still hate how often Google Drive will fall over when you throw a 10s-of-GB file at it.
> For some more exotic testing, I was able to run my own magic wormhole relay[1], which let me tweak some things for faster/more reliable huge file copies.
The lack of improvement in these tools is pretty devastating. There was a flurry of activity around PAKEs like 6 years ago now, but we're still missing:
* reliable hole punching so you don't need a slow relay server
* multiple simultaneous TCP streams (or a carefully designed UDP protocol) to get large amounts of data through long fat pipes quickly
Last time I tried using a wormhole to transmit a large amount of data, I was limited to 20 MB/sec thanks to the bandwidth-delay product. I ended up using plain old HTTP; with aria2c and multiple streams I maxed out a 1 Gbps line.
IMO there's no reason why PAKE tools shouldn't have completely displaced over-complicated stuff like Globus (proprietary) for long distance transfer of huge data, but here we are stuck in the past.
I overall agree, but "reliable hole punching" is an oxymoron. Hole punching is by definition an exploit of undefined behavior, and I don't see the specs getting updated to support it. UPnP IGD was supposed to be that, but well...
Well, with v6 you're down from NAT-hole-punching to firewall-hole-punching, which in principle should be as simple as arranging the IP:port pairs of both ends via the setup channel, and then sending a "SYN" packet in both directions at once.
Then, trying to use e.g. TCP Prague (or, I guess, its congestion control with UDP-native QUIC) as a scalable congestion controller, to take care of the throughput restrictions caused by a high bandwidth-delay product.
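To make that simultaneous-SYN idea concrete, here is a minimal Python sketch under the same assumptions (plain IPv6, no NAT; the addresses and port are placeholders, and a real tool would coordinate the timing and retries over the setup channel):

    import socket

    def simultaneous_open(local_addr, peer_addr, timeout=5):
        # Both peers call this at roughly the same moment, each binding to the
        # port it advertised over the setup channel. The outgoing SYNs open
        # pinholes in each side's stateful firewall, and TCP simultaneous open
        # completes the handshake.
        s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(local_addr)       # the advertised local IP:port
        s.settimeout(timeout)
        s.connect(peer_addr)     # a real implementation retries in a loop
        return s

    # side A: simultaneous_open(("2001:db8::1", 40000), ("2001:db8::2", 40000))
    # side B: simultaneous_open(("2001:db8::2", 40000), ("2001:db8::1", 40000))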
That's why protocols like this have what Wormhole calls a "mailbox server", which allows two ends separated by firewalls to do secure key exchange and agree upon a method for punching through directly. See also STUN: https://en.wikipedia.org/wiki/STUN
I'm working on a branch that considerably improves the current code and hole punching in it works like a swiss watch. If you're interested you should check out some of the features that work well already.
I (Electrical + Software Engineer) once worked for a physicist who believed that anything less than an order of magnitude was merely an engineering problem. He was usually correct.
I was taught the same. To not care a lot about things under an order of magnitude. Over the years when planning large software projects or assessing incidents and so on, the 1 order of magnitude threshold helped me often.
As a physicist, I think this is correct too :). You don't start to see problems with things under that, unless they are deviations from Standard Model predictions.
Not as far off as the casual reader might think: 20 MB/s vs 1 Gbps sounds like way more of a gap than the actual 160 Mbps vs 1 Gbps. One shouldn't mix bytes and bits in a direct comparison; pick one or the other, otherwise it's misleading/confusing.
In this case transferring the data at the slow rate would have taken more than a week, so it's no small difference. Actually one side had a 10 Gbps line, so if the other side had had faster networking I could easily have exceeded the limit and gotten the transfer done more than 6x faster.
I used the term "1 Gbps line" just because it's a well known quantity - the limitation of Gigabit Ethernet. The point wasn't that multiplexing TCP can get you 6x better speeds, it's that it improved the speed so much that the TCP bandwidth-delay product was no longer the limiting factor in the transfer.
Yeah, but with magic wormholes, you see, there could be other universes where that's not the case and 160 Mbps is close to 1024 Mbps, or 1000 Mbps, or whatever the cool kids call a gigabit nowadays.
As a protocol, TCP should be able to utilize a long fat pipe with a large enough receive window. You might want to check what window scaling factor is used and look for a tunable. I accept that some implementations may have limits beyond the protocol level. And even low levels of packet loss can severely affect the throughput of a single stream.
A bigger reason you want multiple streams is because most network providers use a stream identifier like the 5-tuple hash to spread traffic, and support single-stream bandwidth much lower than whatever aggregate they may advertise.
Yeah, that's the issue. I didn't have root permissions on either side. Moreover, a transfer tool should just work without requiring its users to have expert knowledge like this.
In this case, I checked the round-trip ping time, divided the buffer size by it, and the result agreed with the speeds I was seeing within ~5%, so it was not an issue with throttling. Actually, if I were a network provider interested in doing this, I would throttle on the 2-tuple as well.
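For a sense of the arithmetic, a back-of-the-envelope check (the 20 MB/s ceiling is from the thread above; the ~80 ms RTT and window size are made-up illustrative values):

    # One TCP stream can't move more than (window / RTT) per second.
    rtt = 0.080                      # seconds, round trip (assumed)
    window = 1.6e6                   # bytes of effective receive buffer (assumed)
    per_stream = window / rtt        # bytes per second for a single stream
    print(per_stream / 1e6, "MB/s")  # -> 20.0 MB/s; hence multiple streams
                                     # (or a bigger window) to fill 1 Gbps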
> you need a machine that can handle whatever link speeds you need
I would have expected the relay server to be used only for the initial handshake to punch through NAT, after which the transfer is P2P. Only in the case of some network restrictions does the data really flow through the relay. How could they afford running the free relay otherwise?
There are two servers. The "mailbox server" helps with handshakes and metadata transfers, and is super-low bandwidth, a few hundred bytes per connection. The "transit relay helper" is the one that handles the bulk data transfer iff the two sides were unable to establish a direct connection.
I've been meaning to find the time to add NAT-hole-punching for years, but haven't managed it yet. We'd use the mailbox server messages to help the two sides learn about the IP addresses to use. That would increase the percentage of transfers that avoid the relay, but the last I read, something like 20% of peer-pairs would still need the relay, because their NATs are too restrictive.
The relay usage hasn't been expensive enough to worry about, but if it gets more popular, that might change.
The folks on the wormhole-rs fork (who appear to share your Github organization? [1]) already have NAT punching working 95+% of the time in my testing, so maybe what they're doing could be ported over to the Python implementation.
The Rust implementation on Tailscale worked well for me. Except behind a layer 7 firewall you have to be quick to permit the connection or else it falls back.
I end up using a combination of scp, LocalSend, magic wormhole and sharedrop.io. Occasionally `python -m http.server` in a pinch for local downloads. It's unfortunate that this xkcd comic is still as relevant as it was in 2011: https://xkcd.com/949/
This is one of those amazing single feature utilities that does one thing incredibly well and goes completely unnoticed as it’s so good but also unremarkable. I should try to be more grateful for these brilliant creations.
It makes me uninterested in Firefox. I want webapps to be able to talk to files, but Mozilla thinks it's too dangerous. Even though we already have APIs to talk to the local filesystem, and the main difference is that this one isn't hideously slow.
This issue is one of those that, when people are screaming "why are folks using Chrome, why haven't we all switched to Firefox", I point to and say: because I want a good web, I want a fast web, I want a featureful web, and Mozilla definitely does not share my priorities.
If it doesn't do what you want, it's fair not to like it.
Just to offer a different perspective, though: I don't consider a lot of the things that Chrome does (bittorrent functionality in this case) to be part of "a good web", or really part of the web at all. I don't need my browser to be an operating system. I can use other apps to do other things.
It's much more important to me to avoid another Internet Explorer-like monoculture, and to have a browser that's relatively respectful of privacy.
Probably the main benefit of another browser engine (Ladybug?) entering the scene is that it would force Mozilla to come up with a more compelling sell than "not Chrome."
Do you mean Ladybird? It would be nice to see it become competitive with the others, at least. Diversity would be welcome.
I was a little surprised to learn that they'll use Swift for future development. It's not among the languages I usually think of for cross-platform work. On the other hand, maybe Ladybird using it will help drive improvement in that area.
Firefox can definitely download very large files with no problem, JS doesn't even need to be involved, downloading a file is ancient browser functionality.
The server sends an application/octet-stream response and a few other headers and it works.
It hasn’t been working for me for years, but somehow I always end up there when I need to transfer a large file. It’s just so easy to remember the name and url!
I've used wormhole once to move a 70 GB file. Couldn't possibly do that before. And yes, I know I used the bandwidth of the relay server, I donated to Debian immediately afterwards (they run the relay for the version in the apt package).
I run the relay server, but the Debian maintainer agreed to bake an alternate hostname into the packaged versions (a CNAME for the same address that the upstream git code uses), so we could change it easily if the cost ever got to be a burden. It hasn't been a problem so far, it moves 10-15 TB per month, but shares a bandwidth pool with other servers I'm renting anyways, so I've only ever had to pay an overage charge once. And TBH if someone made a donation to me, I'd just send it off to Debian anyways.
Every once in a while, somebody moves half a terabyte through it, and then I think I should either move to a slower-but-flat-rate provider, or implement some better rate-limiting code, or finally implement the protocol extension where clients state up front how much data they're going to transfer, and the server can say no. But so far it's never climbed the priority ranking high enough to take action on.
Hetzner.de has 1 Gbps unlimited or 10 Gbps with a 20 TB limit on their bare metal servers. And those can be bought very cheap if you don't need any special hardware.
It's unlikely they would let you run it full tilt the entire month. I'm not aware of any VPS providers that have a true unlimited data plan. Would love to be proven wrong.
I'm suffering from fatigue from all the political commercials in which every single Democrat apparently single-handedly reduced the price of insulin. As if government-mandated pricing were a good thing.
If something is overpriced, somebody should jump in and take advantage of a business opportunity. If nobody is jumping in, perhaps the item is not overpriced. Or perhaps there is some systemic issue preventing willing competitors from jumping in. Imagine if somebody tackled the real issue and it unclogged the plumbing for producers of all sorts of medicine beside insulin at the same time.
If a government mandates the sale of an item below the cost of production, they drive out all producers and that product disappears from the market. That is, unless they create some government subsidy or other graft to compensate the government-appointed winners. Any way you slice it, it is a recipe for disaster.
If parties are allowed to compete fairly with each other, somebody will offer a cheaper price. This is already the case with AWS. Consumers may decide that the cheaper product is somehow inferior, but that is not a problem that lawmakers should interfere in.
Interesting you should choose insulin, as it's made by ~3 companies, and 2002-2013 the price went up 6x, while the price of the inputs dropped. ISTR that right after that it went up another 3x to over $300/vial. Thankfully, I only needed a vial once every few months, it was for my cat.
"Evergreening", a process where the drug manufacturers slightly change the formula or delivery when one patent is running out, to gain a new patent, then stop manufacturing the old formula.
Not saying I want to see AWS bandwidth prices regulated (though I think they could come down and still make a massive profit). But in the case of insulin, the industry has left little choice but government intervention.
Except in insulin’s case all they did was cap out of pocket costs, meaning insurance takes up the rest of the bill…which means the rest of us pay for it - and worse yet, it effectively stops any pressure on those companies to lower prices. That’s both political pressure and market pressure. Why the hell would anyone care or use cheaper insulin now?
> If something is overpriced, somebody should jump in and take advantage of a business opportunity
Insulin is off patent. Anyone can in theory manufacture it, but the ROI is just not worth it even at the current prices. Manufacturing it is not easy, there are humongous amounts of regulation, and you will probably need to do a couple of clinical trials too... so you end up with an oligopoly of incumbents that nobody wants to challenge, and prices that are all aligned.
You disliked my idle thought so much that you needed to reply twice? :)
Between the various factors causing strong lock-in effects, their dominance, and the insanely high pricing of moving data out of AWS, I wouldn't be surprised if they got their antitrust moment within a few years.
Sorry. It wasn't personal. I just thought you deserved more than my initial terse response and some explanation of what bothered me: Layers of stupid laws on top of stupid laws that impede rational behavior instead of encouraging it.
>I'm beginning to think that the only feasible solution is changing the law.
Do you also think we should legislate the price of BMWs? You're not forced to buy AWS, there are plenty of alternatives, and the prices that AWS charges are well known. I'm not sure why the government should be involved other than a vague sense of "I want cheap stuff".
> You’ll get at least 20 TB of inclusive traffic for cloud servers at EU and US locations and 1 TB in Singapore. For each additional TB, we charge € 1.00 in the EU and US, and € 7.40 in Singapore. (Prices excl. VAT)
I also used to use Time4VPS, however they have gradually been raising prices, and the traffic I'd get before being throttled would be less than that of Contabo.
Not yet. The "Dilation" protocol (which is about 80% implemented) is intended to support WebRTC as a transport layer. IIRC it requires a public server to tell you about your external IP address, but magic-wormhole already has a server that could play that role. Once a side learns its own address, it can send it to the peer (via the encrypted tunnel, through the relay server), and then the WebRTC hole-punching protocol tries to make connections to the peer's public address. When both sides do the same thing at the same time, sometimes you can get a direct connection through the NAT boxes.
We don't have that yet, but the two sides attempt direct connections first (to all the private addresses they can find, which will include a public address if they aren't behind NAT). They both wait a couple of seconds before trying the relay, and the first successful negotiation wins, so in most cases it will use a direct connection if at all possible.
Do you do NAT hole punching, and/or port traversal like uPNP, NAT-PMP? I think for all but the most hostile networks the use of the relay server can be almost always avoided.
Yes, it relies on two servers, both of which I run. All connections use the "mailbox server", to exchange short messages, which are used to do the cryptographic negotiation, and then trade instructions like "I want to send you a file, please tell me what IP addresses to try".
Then, to send the bulk data, if the two sides can't establish a direct connection, they fall back to the "transit relay helper" server. You only need that one if both sides are behind NAT.
The client has addresses for both servers baked in, so everything works out-of-the-box, but you can override either one with CLI args or environment variables.
Both sides must use the same mailbox server. But they can use different transit relay helpers, since the helper's address just gets included in the "I want to send you a file" conversation. If I use `--transit-helper tcp:helperA.example.com:1234` and you use `--transit-helper tcp:helperB.example.com:1234`, then we'll both try all of:
* my public IP addresses
* your public IP addresses
* helperA (after a short delay)
* helperB (after a short delay)
and the first one to negotiate successfully will get used.
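To illustrate the "first successful negotiation wins" part, a rough asyncio sketch of racing candidate connections (hostnames and ports are placeholders; the real transit protocol also performs a handshake on each connection, and delays the relay candidates, before declaring a winner):

    import asyncio

    CANDIDATES = [
        ("192.0.2.10", 4001),              # my public address (placeholder)
        ("198.51.100.7", 4001),            # your public address (placeholder)
        ("helperA.example.com", 1234),     # relay helpers (placeholders)
        ("helperB.example.com", 1234),
    ]

    async def try_connect(host, port):
        reader, writer = await asyncio.open_connection(host, port)
        return host, port, reader, writer

    async def first_successful(candidates):
        tasks = [asyncio.create_task(try_connect(h, p)) for h, p in candidates]
        winner = None
        for fut in asyncio.as_completed(tasks):
            try:
                winner = await fut
                break
            except OSError:
                continue
        for t in tasks:
            t.cancel()                     # drop the losing candidates
        if winner is None:
            raise ConnectionError("no candidate connection succeeded")
        return winner

    # asyncio.run(first_successful(CANDIDATES))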
> since otherwise you just scp or rsync or sftp if you don't have the dual barrier
True, but wormhole also means you don't have to set up a pubkey ahead of time.
Can you turn magic wormhole into an API for receiving a JSON payload directly into your magic wormhole, on top of whatever you're running in FastAPI, to route that incoming wormhole listener?
There's a `wormhole send --text BLOB`, which doesn't bother with a bulk-data "transit" connection, and just drops a chunk of text on the receiving side's stdout.
You can also import the wormhole library directly and use its API to run whatever protocol you want. That mode uses the same kinds of codes as the file-sending tool, but with a different "application ID" so they aren't competing for the same short code numbers. https://github.com/magic-wormhole/magic-wormhole/blob/master... has details.
A technique like this is used to do "invites" in Magic Folder, and also in Tahoe-LAFS. That is, they speak a custom protocol over just the Mailbox server in order to do some secrets-exchanging. They never set up a "bulk transport" link.
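For the JSON-payload question above, a small sketch of the library's Deferred-style API (the application ID and message contents are made up; the mailbox URL is the project's default, but double-check it against the docs linked above):

    import json
    import wormhole
    from twisted.internet.defer import inlineCallbacks
    from twisted.internet.task import react

    APPID = "example.com/json-drop"                    # choose your own app ID
    MAILBOX = "ws://relay.magic-wormhole.io:4000/v1"   # default mailbox server

    @inlineCallbacks
    def send_json(reactor):
        w = wormhole.create(APPID, MAILBOX, reactor)
        w.allocate_code()
        code = yield w.get_code()
        print("tell the other side:", code)            # e.g. "4-purple-sausages"
        w.send_message(json.dumps({"hello": "world"}).encode("utf-8"))
        reply = yield w.get_message()                  # the peer's response
        print("peer said:", json.loads(reply))
        yield w.close()

    react(send_json)

The receiving side would call `w.set_code(code)` instead of `allocate_code()`, and could then route the decoded JSON into whatever FastAPI handler it likes.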
There is also a Haskell implementation, if that's of interest.
I love to learn about "non-file-transfer" use-cases for Magic Wormhole, so please connect via GitHub (or https://meejah.ca/contact)
All of them require an account on the other machine and aren't really suitable for a quick one-off file transfer from one computer to another that you don't own.
If I have a direct network connection I tend to go with:
python3 -m http.server
or
tar ...| nc
Neither of which is great, but at least you'll find them on many machines already preinstalled.
Not really.. the closest approximation would be if both sides set their `--transit-helper` to an unusable port like `tcp:localhost:9`. That would effectively remove the relay helpers from the negotiation list, leaving just the direct connection hints.
But you can't currently force that from one side: if you do that, but the other side doesn't override it too, then you'll both include their relay hint in the list.
Note that using the relay doesn't affect the security of the transfer: there's nothing the relay can do to violate your confidentiality (learn what you're sending) or integrity (cause you to receive something other than what the sender intended). The worst the relay can do is to prevent your transfer from happening entirely, or make it go slowly.
I use wormhole a lot, but I've been too lazy to figure out if it's as secure as ssh/scp, so I always gpg the file I'm transferring before putting it into wormhole.
It can't hurt, but it shouldn't be necessary. The client-side software establishes an encrypted connection with its peer, using an encryption scheme that should be just as secure [but see below] as what GPG or SSH will give you.
For GPG to add security, you also have to make sure the GPG key is transferred safely, which adds work to the transfer process. Either you're GPG-encrypting to a public key (which you must have copied from the receiving side to the sending side at some point), or you're using a symmetric-key passphrase (which you must generate randomly, to be secure, and then copy it from one side to the other).
I should note that magic-wormhole's encryption scheme is not post-quantum-secure. So if you've managed to get a GPG symmetric key transferred to both sides via PQ-secure pathways (I see that current OpenSSH 9.8 includes "kex: algorithm: sntrup761x25519-sha512@openssh.com", where NTRU is PQ-secure), then your extra GPG encryption will indeed provide you with security against a sufficiently-large quantum computer, whereas just magic-wormhole would be vulnerable.
When I ask if wormhole is as secure as ssh/scp I'm not really thinking about post-quantum security, I'm thinking about the number of brilliant people looking at the code and the number of brilliant people attempting to break it.
I just wanted to thank you for making it. I wanted it just to bootstrap VM’s on new machines. I ended up using it all the time for many things. Great project!
* Is there an app for it, where I can share the password via QR code, for when the data is too big for QR codes?
* What do you plan on doing regarding quantum computation? Switching to some PQ-safe cryptography, also to be safe against save-now-decrypt-later attacks?
* Is it possible to extend your protocol over more generic proxies like TURN servers?
None that I know of. It just uses a TCP connection to the mailbox server (with keepalives), and then TCP connections for the bulk-transfer transit phase, so I can't think of anything special that iptables would need to handle it well.
The encrypted connection is used to exchange IP addresses.. maybe you're thinking of the module that e.g. can modify FTP messages to replace the IP addresses with NAT-translated ones? Our encryption layer would prevent that, but we'd probably get more benefit from implementing WebRTC or a more general hole-punching scheme, than by having the kernel be able to fiddle with the addresses.
scp/rsync are great tools, but they require pre-coordination of keys. One side is the client, the other is the server. The client needs an account on the server machine (so the human on the client machine must provide an ssh pubkey to the human on the server machine, who must be root, and create a new account with `adduser`, and populate the ~/.ssh/authorized_keys file). And the client needs to know the server's correct hostkey to avoid server-impersonation attacks (so the human on the server machine must provide an ssh host pubkey to the human on the client machine, who puts it in their ~/.ssh/known_hosts file).
Once that's established, and assuming that the two machines can reach each other (the server isn't behind a NAT box), then the client can `scp` and `rsync` all they want.
Magic-wormhole doesn't require that coordination phase. The human sending the file runs `wormhole send FILENAME` and the tool prints a code. The human receiving the file runs `wormhole rx CODE`. The two programs handle the rest. You don't need a new account on the receiving machine. The CODE is much much shorter than the two pubkeys that an SSH client/server pair require, short enough that you can yell it across the room, just a number and two words, like "4-purple-sausages". And you only need to send the code in one direction, not both.
Currently, the wormhole programs don't remember anything about the connection they just established: it's one-shot, ephemeral. So if you want to send a second file later, you have to repeat the tell-your-friend-a-code dance (with a new code). We have plans to leverage the first connection into making subsequent ones easier to establish, but no code yet.
Incidentally, `wormhole ssh` is a subcommand to set up the ~/.ssh/authorized_keys file from a wormhole code, which might help get the best of both worlds, at least for repeated transfers.
Mostly no. ssh/rsync is compiled C code, so might be slightly faster than a Python-based `wormhole`, if you have a really fast connection to take advantage of. And rsync provides that lovely continue-from-interrupted-transfer feature that magic-wormhole currently lacks.
But wormhole has turned out to be more usable in some cases. I've had days where I'm sshed into a bastion host, then sshed from there into a server, then cd'd into a deep directory with lots of spaces and quotes and shell metacharacters in the path, and then found a file that I wanted to copy out. To do that with ssh, I have to first configure ProxyJump to let me reach the internal machine with a single ssh command, and then figure out how to escape the pathname correctly (which somehow never works for me). With `wormhole send` I get to skip all of that, at the cost of having to do it once per file.
I find myself using Send Anywhere [1] all the time. I couldn't find documentation on how the files are transferred or if they're uploaded to their cloud, but it's very handy. They claim the files are encrypted in transmission, but don't give details & could just be talking about SSL.[2]
When you choose the files you want to transfer, it gives you a 6 digit code or a QR code. Once you enter that, the files are transferred! It's available for most all major platforms, but isn't open source. [3]
I haven't read their privacy policy. Frankly, I'd rather not know...
I was going to say SnapDrop was discontinued, but I see it is back again. Thanks for the reminder. This must be the third or fourth time I have thought they pulled the plug, only to see it get fixed back to normal. Bravo, developers!
scp has the assumption that you have a login on the computers you're trying to share data from. wormhole allows for sharing with others without providing login access to the computer
Right. Also you may have to reconfigure some firewalls to use scp.
Typically, a firewall allows outbound connections without needing an explicit entry for the protocol, and in the case of magic wormhole, both sides are an outbound connection. So it passes right through.
If you've got security-minded folk managing that sort of thing for you, it's possible that magic wormhole will upset them for this reason. More for policy/compliance reasons than actual security ones.
Both problems can be worked around by having a third, general-purpose host where both source/destination hosts can scp to/from. Not quite as straightforward because you have to copy twice and do it from both sides, but has the benefit of not having to install bespoke software.
I think you could use an ssh tunnel between the intermediary and the destination such that the scp connection from the source makes it all the way through in one go, rather than leaving files on the intermediary. You'd be forwarding to the ssh port via ssh, so it would be a confusing bit of sshception.
If I tried to actually come up with the actual commands for this, I'm sure I'd burn a whole afternoon on fiddling with it.
I realize this is a dumb question, but what's a good way to do this between an iPhone and a MacBook? Airdrop is disabled (by policy), iCloud storage is full (because I'm lazy), and I use syncthing on every other device, but I haven't found a client I can use on my work iPhone.
I've been using sharedrop.io which also is open-source [1] and it works quite nice, I particularly like this one because I don't have to install any third-party app on any of the devices.
I think on Mac, Safari usually doesn't work as well as Chrome, but I've been able to transfer from Windows to iOS, Windows to macOS and macOS to iOS without installing a thing.
I like LocalSend and Landrop. The latter performed better when I sent large files. However, neither of them does it automatically; you have to manually do it every time, which is okay since they don’t claim to be sync software.
When python is not installed already (Windows pretty much) or the computer is the destination, I prefer https://github.com/sigoden/dufs, a single binary supporting uploads, folder .zip download, and even webdav
Signal 'note to self' works. I have several nts's...medical, links only, shopping...if I think of something on one device (pc/ android for me), it's on the other within seconds.
I tend to message myself a lot of things. Usually links not files, but it works and it doesn’t take me out of the headspace I’m occupying. Either Apple messages or slack.
Möbius Sync doesn't sync in the background; you must have the app in the foreground for it to function. So, not quite a proper substitute for Syncthing, but it may work for OP's use case.
Right, it does the best it can given the limitations of iOS, but "not instantaneous" can mean hours before synchronisation happens. When the phone's charging, or when the app is open, then it does tend to work almost as well as on Android.
Taildrop is neat, but Wormhole is much more flexible and much easier to use (if you're OK with a command line tool). We use Tailscale everywhere here and I still wormhole things all the time.
That is something I want to know too. Do these various "wormhole" apps use any common protocol between them or do they just all use the same words and branding for different things?
Magic Wormhole is the first, and is its own thing. It's called "Magic" Wormhole for a reason: it tends to blow people away the first time they use it. So lots of people have copied the UX, and some of them have copied the name as well. What you should use is Magic Wormhole itself, or, if you want a non-Python runtime for some reason, wormhole-william.
The Haskell implementation uses the same protocol as the Python implementation. The main difference is that there are some features the Python implementation has that the Haskell implementation still lacks (most notable "Dilation").
https://zynk.it is a new project I've been working on together with a small team aimed at delivering a truly easy, fast, efficient, unlimited, privacy-respecting and pain free file-sharing experience. It’s peer-to-peer, E2EE and avoids centralized storage, aligning with the ethos of control and transparency we often discuss here. It allows users to send and receive any file(s) or folder(s) without any limits whatsoever between any device/OS and any device/OS, send and forget, Zynk takes care of all the heavy lifting.
What I hope sets Zynk apart is that it is built to literally be used by anyone, be it a power user, or my mom.
One of my main goals with this project is to remove any pains associated with data transfer once and for all, for any use case.
I'm curious if this resonates with you—would you use it? What would make it indispensable for your workflows?
I'd be happy to discuss it more if anyone is interested. Feel free to sign up for early access on the site.
It's login/email walled. If you do want people to try it, the try button shouldn't immediately greet you with a popup to provide your full name and email address. I stopped at that point.
Point well taken. It was a bit too rushed and obviously not ready yet. I'll post about it once we finalize the site and make the whole value proposition clearer.
I'm assuming it won't be open-source? I don't really see why I would use a proprietary/non-FOSS version of this (magic wormhole).
The great thing about magic wormhole is that the protocol is open, and anyone can implement it for themselves.
For example, there is the reference implementation in Python, then there are implementations in Go, Rust and Haskell, plus Flutter bindings so you can use it in Flutter. There are multiple GUI implementations for all operating systems, even mobile and the web (via WASM). It has also been implemented into other open source projects like tmux or termshark. https://magic-wormhole.readthedocs.io/en/latest/ecosystem.ht...
Basically what I'm saying is, I'm locked to the applications you and your team have built. I couldn't "hack" something quickly together to integrate it into other things, I couldn't extend your clients by modifying the source code and I also couldn't verify that your code really does what it says (E2EE, privacy-respecting).
> What I hope sets Zynk apart is that it is built to literally be used by anyone, be it a power user, or my mom.
I'm sure that a more friendly UI/UX for non power users would be great, but IMO it would be even better if it used an open protocol like magic wormhole; this way the receiver does not also need to install a Zynk client, but can use whatever they are already using. For example, https://winden.app/about already exists, seems to be a very user-friendly UI, is open source and works without installing it.
Maybe I'm just too much of a "power user" (I use Linux on my computers/servers and a custom ROM on my phone) to understand what zynk could provide to me.
But I think (which means I don't have sources to back this up) the audience which does not care about e2ee/privacy already uses the solutions implemented into their OS (like AirDrop/Quick Share, share via iCloud/Google Drive/OneDrive/...) and from my experience the audience that cares about privacy/e2ee has a large overlap with the Open Source community which is more likely to use solutions like magic wormhole or croc.
We are not planning to open source it, but who knows what the future might bring.
I too love and appreciate open protocols and tools, heck, I also tend to gravitate towards that by default as well, but when something better comes up that's not open source and I can use it better/easier, I do.
We'll release a CLI for Windows, macOS and Linux which will be easy to use and flexible/scriptable, so you could use that to hack together anything you need.
You definitely are a power user. While I don't disagree on the P2P/privacy overlap with open source I do think the world has yet to have the final say about data transfer. Yes, literally countless tools and methods of moving data exist out there, but they aren't universal (AirDrop/Quickshare don't work between all platforms), they pretty much always have both random limits, and limitations, and in most cases aren't really efficient nor pain free -- we're trying to do better, hope we make it! :)
Stay tuned and if you or anyone else would like to give it a try in the mean time drop me a line, m <at> zynk.it
You're setting up a relay using two well-known domain names, it seems. And you're encrypting files that probably can't be decrypted using MITM, so you're sending all kinds of "red flags" if they use any of the many MITM detection tools.
To be fair, our offshore team was so bad with security ("doesn't work? Turn it off!") that it is unfortunately necessary. If I had a slightly different app, "magick wormhole", they'd likely use it if it had a pretty GUI.
Like if we didn’t have strict security policies in place how do you manage 500+ “developers” who have no repercussions? Part of it is getting the cheapest labor possible, part of it is security is hard to do right and part of it is english as a second language issue.
It is much easier to put everyone in an incredibly locked down environment than it is to have them decide what’s secure or not. If I were to fork this and internally use our own DNS and put a GUI wrapper and there’s a flaw in the implementation of magic wormhole I’d be in much more trouble than using Crowdstrike which no one will get fired for using for example.
I feel like this project could use a diagram. Maybe I'm a visual learner. I looked at the github link and the docs. It mentions that is uses a couple of servers. Without reading through the Installation steps I didn't grok how it works. A simple diagram with a couple of endpoints and how they connect would go a long way.
I don't recall where I saw it (either the Python or Rust implementation, most likely), but there was talk about trying to turn this into a browser plugin. I would love to be able to use my browser to send files to someone else (via their browser, if they so choose).
I don't know of one yet, but I tried to choose protocols (websockets) that were friendly to being hosted in a browser, or a browser plugin. The exception is the bulk-transfer protocol, that's pure TCP, which can't be used by plain web content, although I think browser plugins might be allowed to. The upcoming Dilation protocol should be more flexible on this dimension.
Slightly off-topic but this explainer article on how TailScale traverses NATs/firewalls is interesting.
Tailscale fools both endpoints into believing they are initiating the same network connection simultaneously, enabling a direct connection between two endpoints that would otherwise be impossible.
If you already have node and npm installed, and are happy running random binaries from the internet, you can `npx magic-wormhole` to run this without any further setup. It just wraps wormhole-william, which is the go implementation of magic-wormhole.
I put it on npm primarily so I could send things to other JS developers with an absolute minimum of fuss: one command, total, instead of installing a tool and then running the command.
I'm inexperienced and I see a lot of posts like this where open-source that's seemingly loved by the community just don't get the support that they seem to need to survive for a long time.
Why aren't people who know about this and hold important positions doing something about the ecosystem? What can people with no experience but care do to ensure the longevity of open source tools like this?
I'm reading so many good comments that I will be trying Magic Wormhole soon.
How about Warpinator [1]?
It's the application that I use simply because it came by default with my choice of Linux distro, and it works fine. Main use case for me is sending recently taken photos from my phone to the computer.
> The codes are short and human-pronounceable, using a phonetically-distinct wordlist. The receiving side offers tab-completion on the codewords, so usually only a few characters must be typed.
How does this work from a security perspective? Given the lack of apparent entropy can’t a malicious actor conceivably enter the correct phrase before the good actor?
The PAKE algorithm lets you spend an interactive roundtrip to buy a full-strength key out of a weak shared secret. An attacker can attempt to guess the passphrase, and their chances are non-zero (one out of 65536 with the default configuration), but when they guess wrong, the whole protocol shuts down, and the real participants have to start over again, with a new code. So the only way for the attacker to win is for you to restart over and over again until they get lucky. Kinda self-limiting that way :).
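For the curious, the PAKE step looks roughly like this with the python `spake2` package (both halves shown in one process for illustration; in the real protocol the two `start()` messages travel through the mailbox server):

    from spake2 import SPAKE2_A, SPAKE2_B

    # Both sides start from the same weak secret (the short wormhole code).
    sender = SPAKE2_A(b"purple-sausages")
    receiver = SPAKE2_B(b"purple-sausages")

    msg_s = sender.start()        # sent via the (untrusted) mailbox server
    msg_r = receiver.start()

    key_s = sender.finish(msg_r)
    key_r = receiver.finish(msg_s)
    assert key_s == key_r         # both ends derive the same strong session key

    # An attacker who guesses the wrong code derives a *different* key, the
    # key-confirmation step fails, and the protocol shuts down.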
An attack on the PAKE would involve the attacker seeing the secret value as it was transferred to the recipient and then beating the recipient to the handshake. So there is security value in being prompt in putting in the secret value at the receiving end.
That is as opposed to sending a public key or key fingerprint. In that case there would be little value to the attacker in seeing the transfer. They would have to MITM the transfer of the key itself. If you wanted to prevent the attacker from sending bogus files you would also have to transfer some sort of signing key.
So a short, time limited, secret vs a longer public value.
“An important property is that an eavesdropper or man-in-the-middle cannot obtain enough information to be able to brute-force guess a password without further interactions with the parties for each (few) guesses. This means that strong security can be obtained using weak passwords.”
Just wanted to say thank you to the maintainer and creator of magic wormhole. I had to help my nephew debug what had happened to his computer which didn't have any gui after a restart. Setting up magic wormhole to send files back and forth was a feasible solution over the phone.
To send a file from one computer to the other by internet (although often internet should not be needed for this), I think netcat should do. Use tar or something else like that if you want to send multiple files. Use programs to calculate the hash if you want to verify the hash of the file. For large files, it does help to make resumable transfer; fortunately, that can be done easily, too (you can see what the file size is and use the "tail" program to skip some).
You should not need HTTP, FTP, etc. You should be able to use something that works on any computer, such as just TCP/IP. Unfortunately, some systems (especially some Windows systems) will make that difficult. Using something more complicated, such as Magic Wormhole and other programs, means you will need two computers that support such a thing. I did once try to transfer a file from Windows to Linux and had to install ncat to do so; Windows deletes it by default if you try to do that, but I was able to make it not do that.
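A tiny sketch of that resume idea (port and filename are placeholders): the receiving side just appends to whatever is already on disk, and the sender skips the bytes the receiver already has, e.g. with `tail -c +<size+1> file | nc <host> 9000`:

    import socket

    # Append-only receiver: if a previous transfer died partway, re-run this and
    # have the sender skip the bytes already present in incoming.tar.
    with socket.create_server(("", 9000)) as srv:
        conn, _ = srv.accept()
        with conn, open("incoming.tar", "ab") as f:
            while chunk := conn.recv(65536):
                f.write(chunk)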
How do you transfer between computers behind NAT or firewalls without an intermediary?
If you have to install software anyway why not install wormhole directly?
Netcat works in some circumstances and it’s fine to use it in those. But wormhole covers different scenarios and your netcat proposal doesn’t cover or have advantages in some of them.
It is unfortunate that you would have to install netcat on Windows (I don't know why they don't include functions such as that built-in, perhaps it could be included as a function in PowerShell); but on some other systems it is likely to already be installed, and such a thing really ought to be included by default if the computer has TCP/IP included by default. This does not necessarily need to be netcat; it can be any program that does the same thing, and it is simple enough to include with TCP/IP based systems that other kinds of operating systems could potentially have their own variant, if they have the ability to connect to the internet.
However, it is true about considering NAT. Then you will need to set up the intermediary (which further affects compatibility), if you cannot connect the computers directly (which also ought to be possible with a null modem cable, but that also is often not available).
It also does not consider multiple files at once; how that should be handled will be different on different computers anyway, since the files are different on different computers.
Due to such things, other programs such as Magic Wormhole might help, although even then it will not necessarily work with all computers anyways, because then you will need a computer that is compatible with Magic Wormhole.
Another alternative, that might sometimes be suitable, would be LAN connections. This is not always suitable, but if they are, then you can use the LAN addressing directly, too.
I've been running Pairdrop - https://pairdrop.net/ on my home network and it works great to let my wife or visitors easily share files off phones or laptops when you need a GUI.
Do I understand correctly from this thread that there's still no good one reliable way to send files across networks where both participants are NAT'd ?
What about using OnionShare to solve the NAT'ing, or at least Tor for handshaking?
WebRTC (and the various hole-punching techniques listed elsewhere here) have mechanisms to help with most cases of both participants living behind NAT boxes. The remaining cases require some sort of relay that is willing to proxy the connection through the extra-strict NAT layers.
Tor is basically a distributed set of proxy servers, so using onion servers (aka Hidden Services) is a viable, albeit somewhat slow, way to manage even the strict NAT boxes.
If you have Tor installed, then `wormhole send --tor` will automatically use an onion service to do exactly that.
Reliability is fine; performance, depending on your demands, is not. If the coordination server is not up to the task of matching the throughput of the two peers, and a lot of bytes need to be transferred, then it's noticeable. But the tech at large works, depending on the implementation you choose to use. (I'm partial to Tailscale; although that's a different service, it lets me transfer files between computers (with additional SW) without manually mapping a port. They are using DERP for the coordination server.)
I find `syncthing` pretty useful for this kind of stuff. It's been around a long time and has a lot of different options for configuration and clients for every platform you could imagine, both UI based and command line.
On every *nix platform I would just install the `syncthing` package and use it quite easily. I've experimented with some wormhole stuff before and looked at this package some, but there would be a lot of extra steps involved because of the packaging choices.
The package was removed in Fedora 37 with the "replacement" being to use a Snap instead [1]. That doesn't make any sense because that platform is heavily invested in Flatpak, and it's very "against the grain." There are some other "Wormhole" apps on Flathub that are verified, but none of them are the same as this. Are they compatible protocol-wise or just named similar things? That's assuming you want to enter the game of "is this app safe or made by the same entity?"
I want to enjoy this project and others like it, but it's very confusing. The goal of these tools is to simplify transfer of files and to take most of the "pain" in doing that away. Yet, to actually use most of these tools in any meaningful way between two computers you would need to invest more time into getting this to run on those systems. My brain tells me to make this work you need a big button on the homepage for each well supported platform that just says "Download for Windows" along with a one-click solutions for various Linux platforms (one line command, Flatpak, AppImage, etc.)
syncthing and Magic Wormhole have different goals. Wormhole is very simple: it is a way (often the best way) to get a file from point A to point B, regardless of the connectivity (or lack thereof) between A and B, without accounts or configuration.
Syncthing also does relaying like that, NAT traversal, bridges, etc. all automatically. If it can, it will use local IP connections and go blazing fast. If for some reason your network configuration(s) don't allow for any valid connections, it will use a relay or bridge - either a default or one you setup - and work relatively fast as well.
"Connectivity" here needs to be taken at large: syncthing needs to have a folder already shared between two machines. There's some setup involved that is not automatic.
The entire setup phase of magic wormhole is "copy those 3 words" and boom you're done.
I think a lot of people talking about Syncthing must have used it in a different way in the past. It has a QR code now that encodes that 16-digit number or whatever that is used for the same purposes of key exchange and initial handshaking.
I use it with my wife's phone to transfer files between her drawing tablet and the Linux system she uses for Blender every day.
Yes, but even that is more involvement: you need to create a new specific folder, add a device, share that folder, accept on the receiver, copy the file in the folder, and wait for full sync in the web ui. It's a much longer setup that makes sense if you will synchronize files/folders with the same machine in the same folder, but that's not always the case. magic-wormhole removes all that back-and-forth dance for one-off sends.
Yeah, syncthing is awesome for repeated interaction.. once you've configured the two sides to know about each other, it's really flexible for doing an initial transfer, pushing just the new changes, pushing to multiple destinations, etc.
magic-wormhole doesn't need the initial configuration, but only lets you transfer one file (or one directory). So it's better for ad-hoc transfers, or for safely establishing the configuration data you need for a more long-term tool. The analogy might be that magic-wormhole is to syncthing as scp is to rsync.
The snap/flatpak thing is weird, and I share your discomfort with uncertain provenance of software delivered that way.
I wrote the original version in Python, and took advantage of a number of useful dependencies (Twisted, to begin with), but a consequence is that installing it requires a dozen other packages, plus everything that Python itself wants. I've watched multiple people express dismay when they do a `brew install magic-wormhole` and the screen fills with dependencies being downloaded. If I knew Go, or if Rust had existed when I first wrote it, I might have managed to produce a single-file executable, which would be a lot better for deployment/distribution purposes, like wormhole-william provides today.
Better than rsync (over SSH):
- You do not need an existing trust relationship (or trust on first use)
- Easier to punch holes through NAT/firewalls
- Easier for non-technical users
Worse than rsync (over SSH):
- Multi-file support is poor (basically it .zips up everything before even starting to transfer)
- Zero support for incremental transfers
- Cannot reuse existing trust relationships (and thus cannot be used non-interactively)
- More easily DoSed
- 1/65536 chance of connection being hijacked (by default)
- Higher CPU usage
wormhole is awesome, useful especially in ad-hoc scenarios. I use it often to copy files between systems when I can't use scp (because of no relevant entries in authorized_keys).
The protocol enumerates all the IPv4 addresses on each side, and attempts to connect to all of them, and the first successful handshake wins.
So if your VPN arrangement enables a direct connection, `wormhole send` will use that, which will be faster than going through the relay (and cheaper for the relay operator).
A basic VPN gets you safe connectivity between the two sides, but transferring a file still requires something extra, like a preconfigured ssh/scp account, or a webserver and the recipient running `curl`/`wget`. magic-wormhole is intended to make that last part easy, at least for one-off transfers: just `wormhole send FILENAME` on one side, and `wormhole receive [CODE]` on the other.
Right, I'm more commenting on the "have my friend slurp files" part: that sounds more like they want to set up a VPN with normal file sharing (e.g. full network shares) so their friend can just grab literally anything as long as it's in a network share. It didn't sound like they needed individual file transfers.
Magic-wormhole is really intended to help with the first-step "introduction" phase of a tool like that: start with two humans that can yell a codephase at each other, and finish with two computers that have a secure encrypted connection. Once you've got that connection, you can send whatever you want through it.
The `wormhole send` tool is a good demonstration of what you can do with that API, and a convenient tool in its own right, but wasn't designed to be the end-all-be-all of the file transfer universe, nor to be a building block for other tools layered on top.
The application you describe would be pretty cool (the UI might look more like dropping a file into a Slack DM chat window). But I'd recommend against using automated calls to `wormhole send` to accomplish it: you'd be cutting against the grain, and adding load to the mailbox server that everyone else uses. Instead, build a separate app or daemon, which can use the magic-wormhole API to perform just the introduction step. You'd push the "invite a peer" button on your app, it would display a wormhole code, you speak that to your pal, they push the "accept invitation" button on their app, type in the code, and then the two apps exchange keys/addresses. All subsequent transfers use those established keys, and don't need to use the wormhole code again. You should never need to perform a wormhole dance more than once per peer.
Correct. The wormhole code is a channel number (called a "nameplate") and a short secret, which defaults to 16 bits of entropy. The secret is used as the input to a PAKE, which only gives the other party (hopefully your intended recipient, but maybe an attacker) a single guess. The security of the protocol stems from the PAKE algorithm: yes, someone might jump into your conversation and attempt to guess the secret, but they're going to guess it incorrectly most of the time, and each time they fail, the connection is interrupted, and you (the sender) get an error. You'll probably give up well before they get a reasonable chance of success.
The secret can be any string you like, the protocol doesn't care, instead of "4-purple-sausages" it could be "4-65535" or "4-qtx", and have the same resistance to attack. The CLI encodes the secret as two words from the PGP word list, which was designed to be spoken and transcribed accurately even over a noisy voice channel (sort of like the Alpha/Bravo/Charlie/.. "military phonetic alphabet", except it's two alternating lists of 256 words each). In practice that pair of words is much easier to speak and listen and hold in your head for a minute or two than a random number, or the first two letters of each word divorced from the words themselves.
There are some provisions in the protocol (not yet implemented) to allow alternate word lists, so if the sender uses e.g. a French wordlist instead of the default English one, the receiving CLI learns about it early enough so that "wormhole rx" can auto-complete against the correct list. The server/attacker could learn which wordlist is in use, but still faces the same level of entropy about the PAKE secret itself.
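As a sketch of how little the code actually encodes (the word lists here are placeholders standing in for the real 256-entry PGP even/odd lists):

    import os

    LIST_ONE = [f"wordA{i:03d}" for i in range(256)]   # stand-in for one PGP list
    LIST_TWO = [f"wordB{i:03d}" for i in range(256)]   # stand-in for the other

    def make_code(nameplate: int, entropy_bytes: int = 2) -> str:
        secret = os.urandom(entropy_bytes)             # default: 2 bytes = 16 bits
        words = [
            (LIST_ONE if i % 2 == 0 else LIST_TWO)[b]
            for i, b in enumerate(secret)
        ]
        return "-".join([str(nameplate)] + words)

    print(make_code(4))   # something like "4-wordA113-wordB042"
                          # (the real tool prints e.g. "4-purple-sausages")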
I had the same reaction. It's a one time password, so it does not need to be as secure as a permanent one, but it still feels silly to have a long password that you can just type the first few characters and have it auto complete each word, versus just having a shorter password to begin with.
That said, I can see how autocompleting from the first three letters of each word for "beaver-grass-hypochondriac-shelf" might be easier for a human than typing "beagrahypshe".
Alas no. It's a one-shot file-transfer tool, and we don't store a copy of the encrypted data or anything. So the sender must stay running until the receiver has finished downloading. If you're comfortable with relying on servers for your security, then I believe wormhole.app offers the clickable zero-install link that you described.
Magic-wormhole can't use that approach, because our security model rules out reliance on servers for confidentiality or integrity. We could safely store ciphertext without violating the model, but you need an interactive protocol with the sender to get the decryption key (otherwise the wormhole code would be a lot larger), so it wouldn't improve the experience very much, and would cost a lot more to operate. The wormhole servers have trivial storage requirements, so the only real costs are bandwidth for the transit relay helper, for when the two sides can't make a direct connection.
It's actually possible via https://winden.app which uses the wormhole protocol (as opposed to wormhole.app, which is not affiliated with the magic wormhole project); once you upload a file there it will show you a link which is basically winden.app/#passphrase
As it uses the anchor tag # the passphrase doesn't even get sent to the server hosting the website, so it all happens client side.
Old but gold, I use it on the regular. The only thing that could be improved is an option for someone receiving the file to open it up on a website. file.pizza used to do this.
"I don't get why people use syncthing, can't they use bluetooth file transfer and USB cables like us oldschool peeps?"
read the doc.
SSH requires previous arrangement (you need to transfer the SSH key to your friend), magic wormhole is a way to arrange such a meeting without physical proximity.
Nope. It needs to contact the "mailbox server" to coordinate the rest of the protocol. Two machines with local connectivity (e.g. on the same LAN, but your WAN connection is broken) could still implement the second half of the protocol, where they use each other's IP addresses to make a direct connection, but without the first half they couldn't learn those addresses or exchange the key-negotiation messages.
We've sketched out some approaches to working in a disconnected environment like that, using local multicast and mDNS/ZeroConf/Bonjour to act as an alternate mailbox server (https://github.com/magic-wormhole/magic-wormhole/issues/48). There's still design work needed, though, and I fear it would degrade the experience for fully-connected nodes (extra timeouts), so it might want to be opt-in with a `--offline` flag on both sides.
I wonder if magic-wormhole could be implemented as a layer on top of Syncthing:
- Generate a short code
- Use the code as the seed to deterministically generate a Syncthing device key + config
Since the Syncthing device key could be generated deterministically, sharing the code with both sides would be enough to complete a dir/file transfer and then discard the keys.
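A sketch of just the seed-derivation step that idea needs (the function name and parameters are made up; an actual Syncthing identity is derived from a TLS certificate, so more plumbing would be required, and note that unlike a PAKE this derivation can be brute-forced offline, so the shared code would need real entropy rather than 16 bits):

    import hashlib

    def derive_seed(code: str, length: int = 32) -> bytes:
        # Stretch the short human-shared code into deterministic key material.
        return hashlib.pbkdf2_hmac(
            "sha256", code.encode(), b"syncthing-wormhole-demo", 200_000, dklen=length
        )

    seed = derive_seed("4-purple-sausages")
    print(seed.hex())   # same code on both sides -> same seed -> same device key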
[1] https://www.jeffgeerling.com/blog/2023/my-own-magic-wormhole...