Show HN: Low-latency jamming over the internet (sub.live)
236 points by weepy 87 days ago | 119 comments

Heya - I recently made a desktop app that lets you jam live with your friends over the internet.

You can read more about how it works here => https://sub.live - but essentially the latency we achieve means I can jam between the UK and Denmark with <20ms latency!

I also added video because it really helps to see the other person.

If you're interested, the tech stack is:

* Electron embedding a Svelte app
* WebSockets to a Node server to handle the users/rooms
* C++ sub-process that handles the UDP networking and audio

BTW, it's also free to use for both Windows and Mac.

Thanks for making this. It looks worth giving a try.

Some friends and I get together roughly weekly using Jamulus. We're playing mostly jazz standards. We have gotten latency down into the 20 ms range, when we are all in the same town. That's the ping time. Add another ~ 40 ms that seems to be eaten up by my local computer and wired home network.

With that said, online jamming is hard work. I'm the bassist. Fortunately I can set my double bass aside and use my electric bass, so I'm not hearing my acoustic and delayed sound at once. The wind instruments and singer, not so lucky. For the bassist it's a constant chore to hold the tempo, leaving little room for anything expressive.

I'm glad to be doing this; it beats not playing at all, but there is still no substitute for playing together in person.

I had a look into how Jamulus works under the hood. It’s more of a hub and spoke model with the central server receiving, buffering and resending the stream out to each client. This has some advantages (kinder to bandwidth especially as the number of clients increases) but it does mean that there are two sets of recv buffers (server and client) which need to be deep enough to avoid too many missed packets due to jitter. My personal experience was that even when running it all on a lan I couldn’t get much below 16-20ms with an ideal near zero latency network. Once you add in geographic latency and internet jitter then it goes up considerably.
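To make the buffering point concrete, here is a back-of-the-envelope one-way latency budget. Every number below is an illustrative assumption, not a measurement of Jamulus or sub.live:

```python
# Illustrative one-way latency budget; all figures are assumed round numbers.

def one_way_latency_ms(network_ms, jitter_buffers, buffer_ms,
                       codec_frame_ms, sound_io_ms):
    """Sum the main contributors to mouth-to-ear delay."""
    return network_ms + jitter_buffers * buffer_ms + codec_frame_ms + sound_io_ms

# Hub and spoke: the server AND the receiving client each keep a jitter buffer.
hub = one_way_latency_ms(network_ms=5, jitter_buffers=2, buffer_ms=5,
                         codec_frame_ms=2.5, sound_io_ms=3)

# Peer to peer: only the receiving client buffers.
p2p = one_way_latency_ms(network_ms=5, jitter_buffers=1, buffer_ms=5,
                         codec_frame_ms=2.5, sound_io_ms=3)

print(hub, p2p)  # the hub topology pays for one extra jitter buffer
```

Whatever the exact figures, the structural point stands: the second receive buffer in the hub model is a fixed cost that P2P avoids.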

(Full disclosure: Weepy and I played with Jamulus then he had a crack at building sub.live)

> Add another ~ 40 ms that seems to be eaten up by my local computer and wired home network.

Maybe you have a weak computer? Having 40ms being eaten up by your local network would be absolutely bananas, the latency should be closer to 1-3ms if not lower. If I ping it's not until the switch outside the city I live in that the latency comes up to above 10ms, so something sounds wrong/broken in your setup if it's really your local LAN having 40ms latency.

If you are on Windows, don't you need to use an ASIO soundcard?

Ah, it's possible analog31 isn't using an ASIO card. I assumed they were, since they seem to have tried jamming before, so I also assumed they were a musician, and I haven't met many musicians who don't use ASIO in the first place.

Ah, that's interesting. I'm using a USB audio interface. It claims to be ASIO compatible. Oddly enough I tried the same interface in different computers, and have tried other computers, but haven't dug much further than that. I've also tried all of this on a decent Ubuntu box, and I've now got it running on a Raspberry Pi 3, since the overall latency has been more or less the same in all cases.

We're all analog musicians playing alcohol powered instruments. ;-) Some of us are techies, others not. While I'm a techie, that side of my life has always been somewhat separate from the musical side, so I haven't paid that much attention to the digital technology until the pandemic came along. But I'm happy to learn, especially if something simple can make it work better.

You need to switch the drivers from WDM to ASIO. Most DAWs have this as an explicit choice with WDM as the default.

I think WASAPI has comparable latency to ASIO drivers. I can get around 20-30ms latency with WASAPI though ASIO takes me down to 10-15. Granted I'm using a dedicated USB audio interface though.

You should be able to easily sustain below 5ms buffer size, and RT_PREEMPT Linux has no problems keeping a few hundred microseconds of buffer filled (a suitable PCIe soundcard should easily make 100us buffer level reliable; that's 9.6 samples at 96kHz sample rate).

Yes, sub-ms latency is hard. But the infrastructure exists.
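The sample-rate arithmetic behind those figures is straightforward; a quick sanity check, using the values quoted above:

```python
# Relate audio buffer sizes to latency at a given sample rate.

def samples_for_ms(ms, sample_rate_hz):
    return ms * sample_rate_hz / 1000.0

def ms_for_samples(n_samples, sample_rate_hz):
    return 1000.0 * n_samples / sample_rate_hz

print(samples_for_ms(0.1, 96_000))   # 100 us at 96 kHz -> 9.6 samples, as stated
print(ms_for_samples(256, 48_000))   # a common 256-sample buffer at 48 kHz -> ~5.3 ms
```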

I did actually add a way to play a backing track for some drums or similar. It does make it much easier for everyone to stay together.

That would be nice, a click track. I suppose adding a second "user" that generates a click would do the job.

You could - but you can also just drag in an audio loop already.

When the server is in the same town, it's not hard for me to get 10-15 ms total latency in Jamulus. 60 ms is a lot - borderline unplayable. I get better latency than that over WiFi.

I may be missing something but this appears to me to be a WebRTC application. It is very easy to build something like this using any modern browser's WebRTC APIs, just disable audio processing in the media constraints and munge the opus SDP to stereo, maybe play with the network buffer setting if you want to lower latency but the audio hardware and physical distance is going matter more there.
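For reference, the SDP munge mentioned here is just a text edit on the Opus fmtp line. A minimal sketch; the SDP fragment is made up, and hard-coding payload type 111 (Chrome's usual choice for Opus) is an assumption - real code should read the rtpmap to find the payload number:

```python
import re

# Hypothetical SDP fragment; payload type 111 is an assumption.
SDP = ("m=audio 9 UDP/TLS/RTP/SAVPF 111\r\n"
       "a=rtpmap:111 opus/48000/2\r\n"
       "a=fmtp:111 minptime=10;useinbandfec=1\r\n")

def munge_opus_stereo(sdp):
    """Append stereo parameters to the Opus fmtp line if absent."""
    def add_params(match):
        line = match.group(0)
        for param in ("stereo=1", "sprop-stereo=1"):
            if param not in line:
                line += ";" + param
        return line
    return re.sub(r"a=fmtp:111 [^\r\n]*", add_params, sdp)

print(munge_opus_stereo(SDP))
```

The munge is applied to the SDP string between `createOffer`/`createAnswer` and `setLocalDescription` on the JavaScript side.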

The developer states that no other software was suitable, and also that it's the first of its kind. Both of those statements are not accurate: there is nothing innovative in sublive that isn't in any of the apps listed below. Props to the developer though for scratching an itch.


It uses WebRTC for the video, but the audio latency of WebRTC is too large and uncontrollable.

As stated in the post, the audio uses a custom C++ UDP solution. As far as I know it's the first video calling app with very low latency audio.

How do you deal with firewalls in that case? Sonobus has a similar problem that if you don't have accessible NAT you can't connect. You need a relay or central server which can get really expensive!

Basically a similar issue to Sonobus. A relay could work but it probably adds latency, and it certainly adds complexity. I may know a way to improve some NAT configurations but need to do more research.

Yeah, that's one reason I like JackTrip: you can optionally use servers to connect. Low latency is nice but it doesn't help in a performance when you can't connect to your group of musicians.

It would be fun to try to add a predictive layer to this:

- Given the score and what each person has just played, predict what the next few sounds are going to be
- Given a high frame rate video stream of a person, predict what the next note to be played will be

In the same way that Nvidia has extremely low bandwidth but high resolution video enabled by face keypoint tracking and facial reconstruction / puppeteering maybe there's a place for prediction and/or sound reconstruction from extremely low bitrate streams.*

* Obviously not the exact usecase here since a premium is being placed on not processing, but still fun to think about.

We're doing exactly this to teleoperate humanoid robots on high-latency networks!

Paper: https://arxiv.org/abs/2107.01281

Video: https://www.youtube.com/watch?v=N3u4ot3aIyQ

"We introduce a system in which a humanoid robot executes commands before it actually receives them, so that the visual feedback appears to be synchronized to the operator, whereas the robot executed the commands in the past. To do so, the robot continuously predicts future commands by querying a machine learning model that is trained on past trajectories and conditioned on the last received commands. In our experiments, an operator was able to successfully control a humanoid robot (32 degrees of freedom) with stochastic delays up to 2 seconds in several whole-body manipulation tasks, including reaching different targets, picking up, and placing a box at distinct locations."

Wow. I really like the positioning of your product: the audience and value proposition are crystal clear.

I'm going to share this with my musician friends and see what they have to say. I know they had challenges jamming at the start of the pandemic lockdown that stemmed from a) variance in internet connection strengths and b) innate latency in Zoom.

All the best & kudos for launching!

Thanks!

Hey, are you using Opus for audio? I remember the whole reason Opus was so cool is that it was the first codec that combined low latency with high quality. Most voice chat software uses Opus nowadays, so I'm surprised to hear existing solutions are so bad.

Yes Opus - the other benefit is that it can handle a degree of missing packets which is very handy.

And that feature adds ~20 msec more latency

You must have set it up wrong. It doesn't add any latency, but you do need a higher bitrate to get similar quality.

I googled "opus packet loss" and found https://www.asterisk.org/asterisk-opus-packet-loss-fec/, which says "an Opus encoder can embed redundant data about the preceding packet in-band in the current packet". That method would cause 1 packet of delay. Are you using something different, or avoiding the delay through another method like tiny packets?

Here's probably the answer: https://opus-codec.org/docs/opus_api-1.2/group__opus__decode...

"Lost packets can be replaced with loss concealment by calling the decoder with a null pointer and zero length for the missing packet."

This is very cool (former touring musician turned software engineer here!)

I'm curious about the networking/UDP side of this. How are you handling retransmits? Do you treat this like a video game would and just keep sending the latest data? Or doing something more advanced like forward error correction?

no retransmits - Opus handles a degree of packet loss pretty well. Not using FEC as it adds extra latency.

Would be amazing if this could be released as a Linux binary as well. Me and many of my friends are on a P2P mesh network, I'm sure we can push the latency down to below 20ms which would be amazing to see/hear!

Within the same city you can get <5ms

I'm guessing that most people reading this are not familiar with some of the other options available for this kind of thing. In particular, they likely don't know about Sonobus:


(and that's likely because it's libre software and the developer doesn't really do marketing)

I was about to mention Sonobus. Fun fact: under the hood it uses a fork of my AOO library (https://git.iem.at/cm/aoo/-/tree/develop). AOO is still alpha but I'm hoping to release a stable version by the end of the year.

Wow!! I'm one of those people - TIL about Sonobus. Thank you for that, because it's amazing, how-did-I-not-know software. With iOS and Android builds, too!

And it's also available as a VST plugin!

I'm very interested in this! Can you say how this compares to something like https://jamkazam.com? One of my issues with JK is that it seems to require all participants to be members, and to pay for relatively pricey memberships to participate. I'm sympathetic that nothing is free and I'm willing to pay for a good service, but I'd love to have a model where one person could pay for a session that others could join (with video) for free.

That said, I'm not sure if you plan to start charging.

I would like to make some money with it to continue the support, but it's not clear yet the best way forward. Currently it's donationware, but other models could be a subscription, or some sort of premium/freemium model.

Really I just want to see how people use it and figure it out from there.

One of the concerns I always have about things I love is that they'll work but eventually disappear. Is there a way to donate/pay/tier for this? I assume it comes at non-zero cost to you in terms of time, bandwidth, hosting, etc.

Development is never free, but this seems to be driven by a P2P protocol, so hosting/bandwidth costs should be close to zero (maybe the signalling server is hosted by the creator, but that should be relatively cheap to host). I agree that donations would be nice, but they won't prevent it from disappearing. The only solution I can think of that would make it truly unable to disappear is to open source the code base, either now or with a guarantee that the code base will be open sourced if the author loses interest.

There's a "buy me a coffee" on the page if you're feeling generous! It's true the cost is certainly the dev time not so much the servers. Though open sourcing doesn't protect things from dying - it's people not using them!

Awesome! If Linux binaries become available I'll for sure "buy you a coffee" :)

Open sourcing is more to enable people to be able to build the thing in the future with new libraries. OSes move forward every day and something that runs/builds today might not be able to run/build tomorrow, so open sourcing makes sure it's possible to change libraries/API usage if needed.

Hey @weepy. Awesome project! As a webrtc dev - interested to hear about your choices around opus packet durations, FEC percentages, ARQ strategies, jitterbuffer length and packet redundancy.

I used the standard 2.5ms. I know you can go lower if you want, but then you need a higher bitrate as it's "custom".

I turned off FEC as it adds latency.

Jitter buffer right now is just user controlled. A bit lame; I should make it automatic, but I need to get the right heuristic. With a LAN connection, the buffer can be as low as one or two packets.
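For what it's worth, one common heuristic for automating this is to size the buffer from observed inter-arrival jitter, e.g. mean plus a few standard deviations. A sketch of that idea, not sub.live's actual code:

```python
import math
import statistics

def target_buffer_packets(arrival_deltas_ms, packet_ms=2.5, k=3.0):
    """Size the jitter buffer to cover mean + k * stddev of the packet
    inter-arrival times. Returns a packet count, minimum 1."""
    mean = statistics.fmean(arrival_deltas_ms)
    spread = statistics.pstdev(arrival_deltas_ms)
    excess_ms = max(0.0, mean - packet_ms) + k * spread
    return max(1, math.ceil(excess_ms / packet_ms))

# Steady LAN: packets land every 2.5 ms like clockwork -> 1-packet buffer.
print(target_buffer_packets([2.5] * 50))
# Jittery link: same mean arrival rate but noisy -> a deeper buffer.
print(target_buffer_packets([1.0, 4.0, 2.0, 3.0, 0.5, 4.5] * 10))
```

The constant `k` trades dropouts against latency; a production heuristic would also decay the window so the buffer shrinks again after a jitter burst.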

I don't use any packet redundancy either and there's no ARQ as if you have to ask for a retransmit you've already lost the war!

How about you ?

Yeah, awesome. Have to say this is the best version of something like this that I have tried. The packet capture didn't seem to be RTP, are you using libwebrtc under the hood?

The system seems to have the settings right for realtime collab. In fact I found it much better for chatting too if you can guarantee they have a headset and good mic - I hate current-gen AEC algorithms. Interested to see how it fares in a 3+ participant setting.

I had a go at jamming with my brother yesterday. We had to adjust down the delay to 2ms (I think default was 5ms) in order to counteract the "lagging" effect you get when you lock into the remote beat. Once we had that tuned it worked really well. We would occasionally suffer burst losses, but you can play through it and it's still synced afterward which is the best you can do with an unreliable network.

Some feature requests:

- Join room by name (we were confused that there was a different code than the name we chose)
- Auto register new audio devices when you plug them in / edit in settings
- An easier way to join with multiple audio inputs. We had trouble setting up our instrument but also adding a mic for chatting
- Local recording of your end with some shared sync markers, so you could manually or automatically sync up post-hoc

The use case I use WebRTC for is slightly different, where quality >> latency, so we have it tuned to the other end and run the full gamut of FEC, ARQ, packet redundancy etc. Interesting to hear about others' solutions though!

Hey thanks.

You can also send the backing track ahead of time a little so that you both hear it at the same time!

Do you mean that you want to mix the inputs from multiple devices? This is possible - but you do get a little bit of extra latency of course.

It's supposed to refresh the audio devices when you "mouseenter" the select box, so perhaps there's a bug there.

Multitrack recording with a bigger buffer would be great.

Good you found the buffer settings. I'm surprised that you found a significant difference though between 2ms and 5ms?

I had assumed this would be a VST plug-in so you could integrate everyone into (for example) an Ableton Live set with whatever other audio/midi manipulation you want.

Using audio devices directly makes it possible for any audio application to use it, not just applications supporting VSTs. Makes it more flexible, not less. A VST could be built on top of the current solution, but if it was done vice-versa, it wouldn't.

Exactly. I have a prototype VST plugin which pipes the audio from the DAW into the application.

VST frameworks like JUCE export to VST or stand-alone. So you get both for free. Much easier and more flexible.

Also DAWs don’t typically allow you to interface with multiple audio devices. There are ways of doing it but they have major downsides.

Yes JUCE is da bomb

My experience with Jamulus might be instructive. Members of our group live in the same city (Kingston, Ontario) but use two ISPs. Packets between users on the same ISP were fine but packets from one ISP to the other were being routed via Toronto and then Chicago and ultimately back to Kingston. It's not the distance travelled that's the problem (speed of light), it's the latency introduced by each intermediate node.

Solved the problem by setting up a Jamulus server on AWS in Montreal. Both ISPs provide low-latency connections to Montreal, much better than one mile across town!

Of course each participant has to use ethernet rather than WiFi and has to use a low-latency audio device, not a laptop sound card.

Something similar was happening in Calgary where multiple ISPs in Calgary peered through Seattle and Toronto. Luckily YYCIX was formed and it appears that things are much better now: https://yycix.ca/talks/cuug-2013-06-18/mgp00001.html

How can you detect these node transfers? I run into similar problems often with WebRTC and I have found it troublesome to diagnose


Had the same experience in Sweden. Between certain pairs of ISPs, all traffic was routed via one of the capital cities (Stockholm, Copenhagen, Oslo), adding 10-20 ms and jitter. Solved it by setting up a VPS at a provider with good connections to all ISPs involved.

The issue with Jamulus is that it requires a central server - which means it needs to be close to everywhere . It also needs double buffering and double compression. P2P is the way forward here IMHO.

"requires a central server" is misleading. You can set up a server wherever; there's no central server. Except for the multiple-ISP issue I discussed, there's no reason access to a server is more latency than access between the "clients". The advantage of the server/client model is that the clients can be very lightweight: I personally use a Raspberry Pi 3 with an ultra-low latency DAC/ADC hat. Works fine. All the real computation is done on the server.

Well you do need to find a server midway between all the users, which is a hassle of course. I don't personally like the model either because it needs to decompress, mix and recompress all the streams on the server, and it also needs an extra jitter buffer. The only clear benefit to it AFAIK is that it scales better for larger groups: O(N) rather than O(N^2).
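The scaling difference is easy to put in numbers by counting the upstream audio streams each participant must send (a toy model, ignoring video):

```python
# Upstream audio streams each participant must send, per topology.

def p2p_upstreams(n_participants):
    # Everyone sends a stream to everyone else: O(N) per client, O(N^2) total.
    return n_participants - 1

def hub_upstreams(n_participants):
    # Everyone sends a single stream to the server: O(1) per client, O(N) total.
    return 1

for n in (2, 4, 8, 16):
    print(n, p2p_upstreams(n), hub_upstreams(n))
```

At duo or trio sizes the P2P upload cost is negligible, which is why mesh topologies suit small jam sessions; the hub model only starts to pay off as the group grows.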

"need to find a server midway between all the users" Nonsense. Did you read my post? The server is in Montreal; all the clients are 200 miles west of Montreal in Kingston. Physical distance isn't important. What is important is ping time. When one tries to connect in Jamulus, dozens of public servers are suggested with their ping times listed. If none are suitable or one wants a private server, one can set up a server anywhere that provides good ping times to all the intended clients. In the multiple-ISP scenario I had to deal with, a server "midway between all the users" would have been useless, as would any kind of server-less topology.

"The only clear benefit to it..."? As I suggested, a significant benefit of the Jamulus model is that any of the clients can be "thin"; most of the computation is done on the server. You may be able to improve on this but disparaging it with silly criticisms isn't going to help you; there are thousands maybe millions of satisfied Jamulus users around the world.

P2P doesn't require any computation on any server - it's essentially serverless. It doesn't make much sense to me at least to need to compress everything twice and have multiple buffers AND have to manage a server.

Well if any of the O(N^2) connections has unacceptable latency, you'd understand why a server is sensible. Reducing computation won't improve latency in the connections. One has no flexibility with clients but a server can be located wherever good connections are available.

There is flexibility - in the P2P model if one of the clients has bad latency - their buffer gets increased to prevent drop outs. How is a server better in this scenario?

> P2P is the way forward here IMHO.

But if any of the O(N^2) P2P links has unacceptable latency, eliminating the server would be counter-productive because the client-to-server connection may have less latency if the server is located appropriately.

Maybe I'm missing something but A->B is likely to be faster than A->S->B ?

It's good that latency is considered to be so critical, but for the same reason I'm sceptical that this would work well for the majority beyond quite local ranges with very good internet connections, i.e. some kind of fiber... which most people don't have, although I realise a lot of the tech crowd is unaware of this (most of the world's user end points are some kind of DSL or cell network with a 20-40ms minimum).

If anyone has tried playing a processing-heavy software synth on a PC in real time, you will have experienced how unplayable it is as soon as latency goes beyond 10-20ms - you can't play music if there is a noticeable delay between your fingers and ears, and we are much more sensitive to sound latency than to visual lag. It would be the same problem trying to play in time with each other.

I like the idea, but feel like the internet isn't there yet for the majority of users, and latency hasn't exactly been improving at a great pace. I expect it will still be useful in internet rich areas like city to city.

Actually I can jam with ease with my friends in London from Denmark. That's over 1000km. And if you don't believe me - check the testimonials on https://sub.live ^_^ Also you have a much lower tolerance of latency for your own actions than for other people's. I'm hopefully going to record a video this weekend to show everyone!

Also I live in a village in Denmark near Aarhus. I believe my connection goes via Copenhagen which is actually the wrong direction! Though it is true that Denmark is very well connected.

Yeah, the hop to the internet exchange point is the main issue with DSL or cable. I live in a major city and it takes around 10ms to hit the internet exchange point (using Ethernet the LAN latency is negligible) and thus I get a minimum of 20ms or so just connecting to a server hosted by someone in the same neighborhood. I've actually tried this experiment with some of my nearby friends - pinging each other's public IPs on DSL or cable takes around 20ms.

With fiber, you can hit the exchange point in 1-2ms. So I suppose this tool would work fine over fiber, assuming everyone is in the same city. I've even heard of people successfully using protocols like Dante (professional audio over IP intended for LAN) over gig fiber lines as well.

From my experience, a couple milliseconds of latency is fine for keyboard or guitar processing, but anything more than that starts to mess me up. There are artists who insist on using a full analog chain for monitoring for that reason, and refuse to use digital mixers or digital wireless systems that can add several ms of latency.

Why does fiber have lower latency to the exchange than copper? That surprises me.

I thought that ISPs generally run fiber for most of the distance, and coax is only used for the last few hundred meters. The speed of signal propagation is actually faster in copper than fiber, but nobody (afaik?) does long runs over copper so it’s a moot point.

Yes, the ISP runs fiber to the DSLAM or the CMTS, and copper is used for the final hop to your home. The latency comes from the buffering and processing required to get a clear signal over copper - for DSL I think you can get 7ms or so in a perfect scenario with fastpath, more if you use interleaving. For fiber you can pretty much treat it like an Ethernet connection, with latency determined almost solely by distance.

Yup I remember fastpath, it's a choice of low latency vs higher bandwidth. I'm not sure if ISPs still actually bother offering this option to end users anymore though? I mean it's all turned into scripted tier 1 support nonsense if you call an ISP these days.

I had to specifically ask for fastpath for my DSL connection, without it I get around 35ms to the exchange. You however do need a decent connection to the DSLAM for it to work properly without packet loss. You can get the same bandwidth over fastpath as interleaving if the line is clean.

Woah, what is this fastpath magic?!

Ahhhh interesting - so the latency comes from the noise in copper, which forces you to do extra work, which adds latency per bit? That makes sense - thanks!

True for many users. My mobile latency is ~100 ms, and that's still impressively quick to me. But I find DOCSIS to be very low latency where I live. My connection at home is ~10 ms to other home users on the same provider in this city, and generally within 20 ms anywhere in the southern part of the province and 40 ms most of the way across the continent. As long as they aren't on DSL or mobile on the other end, anyway.

This is super cool. I’ve been formulating a similar idea for a while now. I’m building a desktop utility for guitarists using clojurescript and Tauri. I also wonder if this can also capture ASIO audio streams in a useful way? My goal was to do something like that to allow streaming of processed audio.

Some differences in what I was planning and accompanying thoughts:

Clojurescript. I do like that it’s using Svelte, I just wanted more idiomatic support for datalog stuff for the purpose of building metadata-driven music theory tools. Svelte is super cool though, and is my go-to JS tool right now. There’s always Datalevin, a portable datalog implementation that I found recently. Currently I’m using a locally running XTDB instance for development, but for the final shippable I may switch to Datalevin. If anyone is interested in doing something similar you could try XTDB over http or figure out a nice way to interface with Datalevin from other languages.

Electron -> Tauri. Better native feel and the ability to hit Rust code directly. May not be worth it for this project since it seems like C++ is being used for some stuff. But for me Rust is a better fit. As a side note I think the Tauri team is working on support for interchangeable back ends, so soon you could replace Rust with Go or whatever. Tauri also makes including accompanying binaries easy. Not that I’m saying electron doesn’t, I have no idea.

Capturing ASIO streams. Super important for getting good sound for most people, allowing people to play audio through interfaces and mixers while still capturing it. I’m no expert in ASIO or audio streaming, but from my understanding capturing ASIO streams directly is tricky. ReaStream (a REAPER plugin) is the only thing I’ve found that lets this happen, and sadly it doesn’t work well with other DAWs. Why this would be useful IMO: people can stream audio while still listening to the processed output through whatever means they already do. Guitarists could process audio in a DAW or plugin and both listen to and stream that audio. People using DAWs can stream the output of the DAW’s master channel without compromising how they listen to it. I’m not saying sub.live doesn’t accomplish that, I just think it’s important either way. Typically this is the missing link that makes other methods of audio streaming difficult.

Open source. Makes me sad that it isn’t. Could have been a good building block and I definitely would have tried to be involved right away. Feel free to correct me if this is actually open source and I just misunderstood.


I haven't ruled out opensourcing, but honestly I already have limited time and in my experience open source takes _more_ time commitment (I get that you will get free help eventually).

I'm making a VST plugin to stream output from a DAW.

The problem with Tauri is that you have to support the native browser engine, rather than just Chrome, so it's more work to build and maintain.

Good luck with your project!


One thing you might be interested in is a company called NetworkNext. (No affiliation, I just think it’s a great idea.) They provide a private network marketplace that will give you pretty much optimized latency anywhere in the world.

>”We've been able to achieve sub 20ms latency from UK to Denmark - more than 1000km”

It’s pretty solid given that light would take ~3.3ms to travel that same distance.
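For reference, the propagation floor is easy to compute; light in fiber travels at roughly two-thirds of c (the velocity factor below is a typical assumption, not a measured value for this route):

```python
C_KM_PER_MS = 299.792   # speed of light in vacuum, km per millisecond
FIBER_FACTOR = 0.67     # typical velocity factor for optical fiber (assumed)

def propagation_ms(distance_km, one_way=True):
    t = distance_km / (C_KM_PER_MS * FIBER_FACTOR)
    return t if one_way else 2 * t

print(round(1000 / C_KM_PER_MS, 1))    # ~3.3 ms: vacuum, one way
print(round(propagation_ms(1000), 1))  # ~5.0 ms: fiber, one way
```

So of a sub-20ms budget over 1000km, roughly a quarter is spent on physics alone before any routing, buffering or codec delay.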

Honestly I was a bit blown away when I got it working the first time! It's kind of mental that I can capture audio and compress it - send it through all these layers and machines - and receive it 1000km away 20ms later. All on consumer-grade internet.

I don't get these realtime collaborative streaming services.

Streaming at home via wifi is already crappy, how should this ever work with more distance?

Don't use WiFi? Ethernet has way less latency. Combine that with everyone being in the same city/region and using a fiber network, and latency should be really good.

Otherwise, find friends that you have line-of-sight visibility with and set up a P2P WiFi link via antennas/radios; you'll get insanely good latency.

This has not been my experience, with a dedicated PCIe wifi card, the difference in latency to when I plug in the ethernet cable is about 1ms, if even that, to the point it's not worth running cables.

PCIe (vs USB) is but one of many factors. Which frequency is the wifi on? how busy is the spectrum there? What features does the router support (MIMO, Wifi 6, etc)? I'm happy for you that the difference between wifi and wired is 1 ms, but that experience is far from universal.

Ping is also just a single data point. Ping between two mediums can differ by only 1 ms yet come with hugely different throughput and jitter, which affects streaming over the Internet a great deal despite not being so obviously linked.

In perfect circumstances, yeah of course WiFi is stable. If you live in a dense urban area with tons of WiFi noise latency tends to jitter quite a bit. This isn't a problem with Ethernet.

Yeah, I shouldn't use tech that is basically used all over the world for convenient internet connections to access a new tech that is basically built on top of that.

Sounds reasonable...

I mean, ethernet/wired IS, also, a convenient standard tech used all over the world. There's any number of convenient tech out there, and not all of it is for every purpose. My Minivan is supremely convenient ubiquitous tech for my kids & family but I wouldn't use it on a racetrack. My WRX is fun for rallies but I wouldn't use it to haul cargo. My phone is a great convenient ubiquitous camera but I wouldn't use it to capture professional wedding events. etc.

WiFi is indeed a convenient tech used all around the world... for convenience, not performance. If you're gaming, if you need security, if you need reliable and consistent performance, Ethernet is pretty ubiquitous. I have a couple of wires from the Bell router to my gaming and music computers at home, and wifi for other usages.

When added to your extremely sarcastic tone to honest attempts to assist... what are you actually trying to achieve/learn here?

The sarcastic tone was because it felt like someone tried to help by saying "just replace your convenient infrastructure with something much less convenient and everything will be okay"

I get that wired is less convenient (I'm not going to argue the semantics of "most"), but until we can transmit power wirelessly from a distance for consumers, everyone is still dealing with wires to charge their devices. (Even if you have wireless charging, it's still not at a distance and you have to get it just right on the charging pad.)

Trying to convince everyone else that something as simple as plugging in a wire is horribly onerous, when they do it every day or so, is probably going to be an uphill battle, especially if it's for something people want. Where jamming with friends over the Internet ranks in your list of desires, versus never ever having to commit this horrible act of plugging in a wire, is up to you.

But that's exactly true. It's a matter of what you are optimizing for - convenience or performance.

There's any number of things WiFi isn't the perfect answer for.

Ultimately, in today's world, if you want to jam with musicians not in your room, some non-zero effort will be required. Presumably you have set up your audio interface, music interface, microphones, software, mixer, MIDI and ASIO, etc etc etc... Honestly, for me at least, Ethernet is the least inconvenient or nerdy or annoying aspect of a home studio :-)

Yes, you're probably right.

For me, cables are a huge issue, because my PC is in a room that's not directly connected to my apartment, where my internet connection is located.

"This won't work with the commodity technology I use" (wireless LAN)

"Here's some ways in which it can work using a slightly different but still as widely used commodity technology" (wired LAN)

"No, that is unreasonable, wires are unreasonable"

Like, it's SUCH a simple solution: use wires. That's how I can stream games from my computer to my parents' house at 1080p60 with only about 30ms latency over a domestic broadband connection, for example.

Also, sometimes you can get really lucky with wireless on 5 GHz - see streaming in VR as a prime example. The developer of Virtual Desktop for the Quest has done some great work getting a really solid low-latency VR feel; I've been able to stream games from rent-a-VM ShadowPC servers without feeling the lag.

OK, so pro audio is a different beast and is even more sensitive to latency than VR, but here people are having fun with this tech and clearly it's working for them!

It’s impossible to get some people to use wires. I work in an area where a high bandwidth, low latency connection is in theory mission critical for my customers and they still use WiFi 80% of the time.

> It’s impossible to get some people to use wires.

Indeed. Perhaps we don't worry about those people anymore. If you are interested in a high quality experience but don't want to satisfy the prerequisites, you are not entitled.

WiFi is never going to provide a reliable, ultra-low-latency streaming experience for the average person. This isn't a "we can but we won't" type of problem where, with just a bit of extra effort, we can somehow magically solve all of the deficits.

Yes, there are scenarios where WiFi is indistinguishable from ethernet, but those are exceedingly rare in my experience.

How is 30ms remotely a good latency here?

I have 14ms between my Quest/PC and it feels sluggish.

From the website:

> We've been able to achieve sub 20ms latency from UK to Denmark - more than 1000km !

That is pretty good latency for what it does - it's not just ICMP packets but sending/receiving audio and everything that comes with it.

How are you connecting your Quest and PC? Sounds awful; my latency to friends outside the city is shorter than that, and they are more than 10km away from me.

That's why I don't believe those numbers.

All I've seen in shorter distances was usually worse :(

this right here.

1 mile away, both of us have AT&T fiber, we're seeing 10-15ms response times (packet size ~1400).

100 miles away, both on AT&T fiber clocked in around 18-20ms

1 mile away, one on AT&T fiber the other on Comcast gigabit had response times around 24ms

100 miles away, one on AT&T fiber the other on Comcast gigabit had response times around 35ms

i'm hard pressed to believe you will find an american ISP that prioritizes low latency. verizon 5g home actually did some magic with their non-standard hardware where i was seeing 9ms to my in-town datacenter and 16ms to my out-of-state datacenter. when they replaced the hardware with the standards-compliant generation of hardware my latency increased to 30ms in-town (they also changed towers so that could be part of the problem, plus congestion during prime time)

i just moved where i am 6 blocks away from my friend who i did the 1 mile test with. i will retest this afternoon and update this comment

I see 3ms ping times between my friend and I on opposite sides of London, though we’re both on pretty decent broadband tech (me gfast, him some fancy hyperoptic plan). IIUC there are some ADSL technologies that add some minimum latency floor of 10+ ms. However the long distance stretches of backbone fibre shouldn’t necessarily add too much latency - e.g 100km of fibre at 2/3rd speed of light is an additional 1ms on your (two way) ping time
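
The back-of-envelope maths behind that last claim checks out: light in fibre travels at roughly 2/3 of c, so each 100 km of cable adds about 0.5 ms one way, or ~1 ms on the round-trip ping.

```python
C = 299_792.458        # speed of light in vacuum, km/s
V_FIBRE = C * 2 / 3    # ~200,000 km/s in glass

def fibre_rtt_ms(km):
    """Round-trip propagation delay over `km` of fibre, in milliseconds
    (propagation only - ignores routing, queuing and serialisation)."""
    return 2 * km / V_FIBRE * 1000

print(round(fibre_rtt_ms(100), 2))   # ~1.0 ms RTT per 100 km
print(round(fibre_rtt_ms(1000), 1))  # ~10 ms RTT for a 1000 km run
```

So on a UK-to-Denmark-scale path (~1000 km), the fibre itself only accounts for roughly 10 ms of round trip; the rest of any observed latency comes from access-network tech, routing hops and buffering.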

That sounds really crazy bad. Europe here, so I don't know much about the US - but I've had US people use it and report a good experience - so dunno.

Maybe because you refuse to use wires and instead use WiFi, which can have really bad latency? If you really, really, really want to use WiFi, get a dedicated PCIe/external WiFi adapter optimized for latency and you can get close to Ethernet latency.

It’s worth remembering that sound takes about 3ms to travel a metre. If you’re standing a few metres away from a drummer then the round trip time is already pushing 20-30 ms of latency, and yet you can still play together in a rehearsal room.
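
The arithmetic for that comparison, for anyone who wants to plug in their own room sizes:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def acoustic_delay_ms(metres):
    """One-way acoustic delay over `metres` of air, in milliseconds."""
    return metres / SPEED_OF_SOUND * 1000

print(round(acoustic_delay_ms(1), 1))   # ~2.9 ms per metre
print(round(acoustic_delay_ms(5) * 2))  # 5 m from the drummer, round trip: ~29 ms
```

Standing 5 m from the drummer already puts ~29 ms on the acoustic round trip, which is why a sub-20 ms network path is genuinely comparable to being in the same room.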

The suggestion of a P2P mesh is only if you physically could set that up.

Otherwise, using Ethernet + Fiber connection should be good enough for most. Only if you really wanna lower the latency should you invest in additional hardware.

You can play with Wifi - it's OK - but you get probably 5-10ms better latency with Wired LAN. Also I think you have a lower tolerance for latency of your own actions vs the latency of another person's.

Wifi might not be too bad, though it can spontaneously degrade badly if you get interference. Not great in the middle of a jam!

Well, you get that there's a need for it, and people are trying to find solutions to overcome the issue of latency; that's the whole point, isn't it?

I think https://endlesss.fm got the approach to jamming over the internet right.

Instead of trying to reduce latency (which you simply cannot push below the speed-of-light limit), endlesss implements a shared “multiplayer” 8-track looper.

Jammers add layers to the loop, which has a clock and supports sync via Ableton Live, external audio input, etc.

Plus, every addition to the looper is saved/versioned. You can move backwards through loops and export the audio stems. Its a great creative tool, and quick way to build on ideas with friends.

It’s way better than any of the live internet jamming software I’ve used, by far.

Endlesss is cool - in fact I know the founder. It's a different experience though - when it's live you feel someone else's presence compared with offline collab. Up to you what you prefer.

NINJAM also works on the same principle: https://cockos.com/ninjam/

  > The NINJAM client records and streams synchronized 
  > intervals of music between participants. Just as the 
  > interval finishes recording, it begins playing on 
  > everyone else's client. So when you play through an 
  > interval, you're playing along with the previous 
  > interval of everybody else, and they're playing along 
  > with your previous interval.
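
A toy model of that interval trick (my own sketch based on the description quoted above, not NINJAM's actual implementation): during interval N, each client plays back the other players' interval N-1, so nothing ever has to arrive in real time.

```python
def ninjam_mix(recorded, interval):
    """What a player hears during `interval`: each player's PREVIOUS
    recorded interval. `recorded` maps player name -> list of takes."""
    if interval == 0:
        return {}  # nothing recorded yet during the very first interval
    return {player: takes[interval - 1]
            for player, takes in recorded.items()}

recorded = {
    "guitar": ["g0", "g1", "g2"],
    "bass":   ["b0", "b1", "b2"],
}
print(ninjam_mix(recorded, 2))  # {'guitar': 'g1', 'bass': 'b1'}
```

The cost is that you are always reacting to what everyone played one interval ago, which is exactly why it feels different from truly live jamming.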

NINJAM is a bit different, in that it's not a looper. The timing is offset to keep the music sounding in sync to all clients, but what you play plays once and does not loop. In endlesss, phrases will loop until they are changed or removed.

Curious about using this to stream a jukebox playing analog records to myself

You can achieve that with a much simpler solution, look into RTMP.

you could, but you don't need the higher latency that comes with it.

This is revolutionary ...

Does anyone know of an alternative that keeps both audio & video in complete sync, but possibly pauses/buffers at times in order to maximize the segments where everything is kept in sync?

Not possible to be in complete sync and have ultra-low latency. You would need (at least) a jitter buffer.
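
For reference, a jitter buffer just trades a fixed amount of extra delay for tolerance to out-of-order and late packets. A minimal illustrative sketch (not from sub.live or any of the apps mentioned):

```python
class JitterBuffer:
    def __init__(self, depth_packets):
        self.depth = depth_packets  # extra playout delay, in packets
        self.buf = {}               # sequence number -> audio payload

    def push(self, seq, payload):
        """Store an arriving packet, whatever order it shows up in."""
        self.buf[seq] = payload

    def pop(self, now):
        """Play out the packet scheduled `depth` packets behind `now`.
        Returns None (an audible gap) if it never arrived in time."""
        return self.buf.pop(now - self.depth, None)

jb = JitterBuffer(depth_packets=2)
jb.push(0, "p0"); jb.push(2, "p2"); jb.push(1, "p1")  # packet 1 arrives late
print([jb.pop(t) for t in range(2, 5)])  # ['p0', 'p1', 'p2'] - reordered fine
```

A deeper buffer absorbs more jitter but adds that many packets of latency to every note, which is the whole tension in low-latency jamming software.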

Check out https://jacktrip.org/studio.html. We enable low-latency audio connections through open source technology developed at Stanford. We can scale to hundreds of users singing at the same time. Check out hundreds of choir members singing together: https://www.youtube.com/watch?v=SJgB5QmyDfU.
