Online Jamming and Concert Technology (stanford.edu)



I've used JamKazam (https://www.jamkazam.com) a few times recently to play folk music with some friends. The sort of music we play relies very heavily on listening to each other and improvising, and surprisingly it _just about_ works on JamKazam. It's certainly not perfect, but we're not going to be hanging out in pubs together any time soon.

The important number is round-trip latency, and we found that around 25ms is good enough (we're all quite used to playing with people of varying time-keeping ability anyway). I think there's scope to improve on that using a library like Roc (https://github.com/roc-project/roc) where you can target a specific latency. I've been meaning to play around with it, but to be honest I'd rather be playing music :)
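If you want to see how much of that budget is the network itself, a bare UDP echo between two machines gives a decent first estimate. A minimal sketch in Python (the peer address is a placeholder; this measures raw round-trip time before any audio buffering or encoding is added):

    # Minimal round-trip latency probe: run "server" on one bandmate's machine,
    # "client" on yours. Measures network RTT only -- no audio buffers, no codec.
    import socket, sys, time

    HOST, PORT = "192.0.2.10", 9999   # placeholder -- use your peer's address

    def server():
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", PORT))
        while True:
            data, addr = s.recvfrom(64)
            s.sendto(data, addr)       # echo straight back

    def client(count=20):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(1.0)
        rtts = []
        for i in range(count):
            t0 = time.perf_counter()
            s.sendto(str(i).encode(), (HOST, PORT))
            s.recvfrom(64)
            rtts.append((time.perf_counter() - t0) * 1000)
        print(f"median RTT: {sorted(rtts)[len(rtts) // 2]:.1f} ms")

    if __name__ == "__main__":
        server() if sys.argv[1:] == ["server"] else client()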


JamKazam founder here. Happy to come across a user!

We see that too: 25ms of one-way latency is about the max to stay in sync, and that budget has to cover both the internet path and the audio device encode/decode, so it gets eaten up quite fast!
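To give a feel for how fast it goes, here's the rough arithmetic (illustrative figures only, not our actual internals):

    # Rough one-way latency budget for networked jamming (illustrative numbers).
    SAMPLE_RATE = 48_000        # Hz
    BUFFER_FRAMES = 128         # a typical low-latency audio interface buffer

    adc_buffer  = BUFFER_FRAMES / SAMPLE_RATE * 1000   # capture buffer, ~2.7 ms
    codec_frame = 2.5                                  # one small codec frame, ms
    network     = 12.0                                 # one-way internet path, ms
    dac_buffer  = BUFFER_FRAMES / SAMPLE_RATE * 1000   # playback buffer, ~2.7 ms
    jitter_buf  = 3.0                                  # smoothing for packet jitter, ms

    total = adc_buffer + codec_frame + network + dac_buffer + jitter_buf
    print(f"one-way total: {total:.1f} ms of a 25 ms budget")   # ~22.8 ms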

We are looking at providing an optional premium networking service: a faster connection as an alternative to the open internet. Nothing too expensive; $10/month is the goal. Hope that gets you and your friends under that magic threshold if you try it out when it's available.


I've been experimenting with it too, with my funk band. Very quickly we realized that any latency was too high for us to have a hope of getting in the pocket, but it worked quite well for sloppier rock jams.

Since you're here, I'll ping you some feedback:

- The UX is, charitably, idiosyncratic. We all found it hellaciously difficult to get started, find each other, start a session without strangers popping into it, and manage audio (more on that below). The UI is honestly just bewilderingly weird.

- The audio handling is... counterintuitive too. I expect to be able to control my "monitor" mix, and have one person control the master mix. But that just isn't how it works. Instead of one channel strip per source, we each see a single fader per participant (even though each person has both a vocal mic and an instrument mic), and it seems to affect everyone's mix globally.

- Everyone slows down over the course of the song. We're all listening to each other, so the latency builds, and we all end up dragging horribly. The only solution we found was to have the drummer play to a click, which is miserable in our genre and generally no fun outside of a studio session (which is "work" anyway). (A toy model of this dragging effect is sketched at the end of this comment.)

I _really_ hope you're able to use some of the newfound interest you've got to inject some new life into the service. The core is so promising. Notwithstanding that feedback, I'd pay $10/mo for a non-social private version where I can host my own server, since all my bandmates are within a mile of me.
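For the curious, here's the crude model of the dragging effect I mentioned above -- my own illustration, not anything JamKazam does. If each player waits to hear the other before placing their own beat, every beat gets stretched by roughly the one-way delay:

    # Crude model of the "dragging" effect when everyone listens to everyone.
    # Real behaviour is messier, but the direction is the same.
    BPM = 120
    one_way_delay = 0.025                        # 25 ms

    nominal_period = 60 / BPM                    # 0.5 s per beat
    dragged_period = nominal_period + one_way_delay
    print(f"effective tempo: {60 / dragged_period:.1f} BPM")          # ~114.3 BPM

    # Over a 3-minute song (360 beats at 120 BPM) that stretch adds up:
    print(f"drift vs. a click: {360 * one_way_delay:.1f} s behind")   # 9.0 s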


I think JamKazam is helping some of my friends maintain what little is left of their sanity in the current climate, so thank you for that!

Unfortunately I might not be in many sessions for a while, as I replaced my last Windows machine with Linux this weekend, which I don't think you support yet, although I will have a go at getting it working in WINE :)


Hi! This has already been asked before, but hopefully adding my voice helps: does the JamKazam team support Linux? Currently we're using Jamulus for practicing, but I would like to compare it with JamKazam. Unfortunately, some of our members are *nix guys.

Do you plan to work on Linux support? Thanks!


Hi, I clicked on the "learn more" link for the JamBlaster and it's a dead link?


wince

We need to take that section/link down.

We did do a Kickstarter for the JamBlaster, made ~200 units, and shipped them.

https://www.kickstarter.com/projects/1091884999/jamblaster-t...

But we are not in a position to be focusing on custom hardware.

... a dedicated device is half the puzzle. That, plus a low-latency network connection to your peers. With those two you can get a reliable experience.


For people that are interested in putting something like this together, Bela might be a good starting point.

https://bela.io


Forgot to add: I never came across JackTrip whilst researching options for this, so I'm quite keen to have a play with that.


I know of a musician who has been performing online shows for the past couple of months. He does them solo, just singing and playing guitar. He streams video using OBS Studio and Crowdcast.io, and all of us attending the show participate through text chat. His video has a latency of about 30 seconds, presumably because it's using a typical RTMP + CDN setup, designed for non-interactive broadcasts. So sometimes after he finishes a song, there's an awkward moment, because even though we're sending our equivalent of applause through the chat, he won't see it for another 30 seconds. He knows about the latency, and he believes it's necessary for a reliable, high-quality stream. So he just tries to carry on like it's not there. But I wonder if it's possible to do better. FWIW, his most well-attended show so far had ~170 people watching, and the audience is usually much smaller.
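For a rough sense of where those 30 seconds come from, here is the back-of-the-envelope arithmetic for a typical RTMP-to-HLS pipeline (ballpark defaults, not measurements of his particular setup):

    # Glass-to-glass latency estimate for a typical RTMP -> CDN/HLS stream.
    # All figures are common ballpark defaults, not measurements of any service.
    encode_and_upload = 2      # seconds: encoder lookahead + RTMP send
    segment_duration  = 6      # seconds per HLS segment (a common default)
    segments_buffered = 3      # players commonly buffer about three segments
    cdn_and_player    = 2      # packaging, CDN edge, player startup slack

    latency = encode_and_upload + segment_duration * segments_buffered + cdn_and_player
    print(f"~{latency} s from performer to viewer")   # ~22 s, i.e. the 20-30 s range

Low-latency modes (LL-HLS, WebRTC-based streaming) mostly attack the segment-buffering term, which is why the more interactive platforms can get down to a few seconds.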


That sounds small enough that casting through Discord or Twitch may be an option. Those seem designed more for interactive broadcasts and thus lower latency.

I don't know at what point the scaling breaks down.


Twitch has a record of over 2M viewers of a single active stream. I don't think scale for most individual artists is a factor.


Ha! I'll take your word for it. You would know.


Jamulus is another take on this. It's server-based and uses lossy compression (Opus). People host public servers and the software has a server browser.

http://llcon.sourceforge.net/

http://llcon.sourceforge.net/PerformingBandRehearsalsontheIn...


We’ve been using JackTrip for a couple of years. I would recommend it to anyone looking into telematic performances.


Any amount of latency seems like too much when jamming with others. I could see how this would work with audio engines like SuperCollider, but I'm curious how one goes about this for live recordings.


Latency is kind of funny in the context of jamming. Sound only travels about a foot per millisecond. An orchestra pit is thirty or forty feet across. A big festival stage can be thirty feet across.

So we have these pretty ordinary situations where there's about thirty milliseconds of latency between when a drumstick strikes a head and a guitar player hears it. Of course, in a modern live pop performance there's all the crazy monitoring and latency compensation to try to make a football stadium acoustically comprehensible, but there is still the physical reality of how people normally play music together.

If I ping google.com from my house, on my crummy wifi, right now I'm getting about 10ms. That's roughly the latency a guitar player standing at the end of a 10-foot cord from their amp experiences between their pick plucking a string and the resulting sound reaching their eardrums.

Reality has latency.
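The foot-per-millisecond rule falls straight out of the speed of sound; a quick check:

    # Sound travels at roughly 343 m/s (~1125 ft/s), i.e. about 1.1 ft per ms.
    SPEED_OF_SOUND_FT_PER_S = 1125

    for distance_ft in (10, 30, 40):
        ms = distance_ft / SPEED_OF_SOUND_FT_PER_S * 1000
        print(f"{distance_ft:2d} ft  ->  {ms:4.1f} ms")
    # 10 ft ->  8.9 ms   (guitarist ten feet from their amp)
    # 30 ft -> 26.7 ms   (across a pit or stage)
    # 40 ft -> 35.6 ms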


Orchestras play (primarily) composed, prewritten music, where deviating from the script would mark you as a poor musician. They look to a single source (the conductor) for tempo and dynamic cues.

“Jamming” is much more dynamic, and uses a combination of audio and visual cues to work.

The problem with even fantastic network latency in the 10ms range is that it gets multiplied across the number of participants, and quickly turns into a shitshow.

The only approach I've seen that even sorta works is the one ninjam took, with a hard-coded, predetermined latency. It's not the same as a real improv session with real humans in the room, and it has obvious limitations, but it can at least give a little of the same experience without the uncanny valley.
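To put numbers on that approach (BPM and BPI are ninjam's actual settings; the figures are just examples): everyone hears everyone else exactly one interval late, so the delay is large but constant and lands on a bar line.

    # ninjam-style fixed latency: instead of minimizing delay, delay everyone
    # by exactly one interval so what you hear is always musically aligned.
    bpm = 120          # server tempo (an audible click keeps everyone on it)
    bpi = 16           # beats per interval, e.g. four bars of 4/4

    interval_seconds = bpi * 60 / bpm
    print(f"you hear the others exactly {interval_seconds:.1f} s behind")   # 8.0 s
    # Huge compared to 25 ms, but constant and on the downbeat.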


Reminds me of a college marching band. The drums and the brass section would gradually fall out of sync, and the sound would disintegrate into chaos until the conductor couldn't stand it and shouted at everyone. My guess is that the players weren't paying attention to the conductor at all.

But even with zero auditory latency, musicians (classical musicians at least) don't really use sound to synchronize, because the cue must come before the sound is made.

That's why in any competent orchestra, musicians rely on visual synchronization. For example, the string sections look at the concertmaster, who in turn communicates with the conductor and the soloist.

In chamber music, the musicians constantly look at, nod to, and gesture at each other to synchronize.

The internet is competing with light, not sound.


This is typically why marching band field arrangements put the drumline in back. The drumline plays with the conductor's hands, and everyone else plays with the beat of the drumline as it travels forward towards the audience, so it all arrives synchronized at the audience. The drumline hears everyone else as dragging, the brass thinks the woodwinds are dragging, but it all works out from the audience's perspective.


Yes, but reality also has a lot of established patterns for minimizing that latency so people can play together live.

Musicians in an orchestra pit or on a football field visually synchronize to a shared clock -- the conductor visibly keeping time. In a live setting, musicians use monitors (in-ear or wedges) that zip the sound of their bandmates to them at the speed of light.

I've never seen virtual jamming over a network without a shared clock actually work; once you get to 200ms round trip it just falls apart.


Have you seen it work well WITH a shared clock? Could you share details of the setup? Ty


I was also thinking that keeping the latency constant across the group is much more important than keeping it as low as possible. Musicians could deal with a 30ish ms delay once they get used to it, as they already do in the situations you mentioned. That's better than having it at 10ms most of the time but with sudden changes for different players.
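That constant-delay idea is essentially what a jitter buffer does: hold incoming packets long enough that even the slow ones arrive in time, then play everything out at one fixed delay. A minimal sketch of the idea (illustrative only, not any particular app's implementation):

    # Minimal jitter-buffer sketch: play each packet at a fixed delay after it
    # was sent, so listeners get constant latency instead of the network's jitter.
    import heapq, itertools

    TARGET_DELAY = 0.030             # 30 ms playout delay absorbs jitter spikes

    class JitterBuffer:
        def __init__(self):
            self._heap = []                  # (playout_time, seq, packet)
            self._seq = itertools.count()    # tie-breaker for equal times

        def push(self, packet, send_time):
            # A real implementation would drop or conceal packets that arrive
            # later than the target delay; this sketch just schedules them.
            heapq.heappush(self._heap, (send_time + TARGET_DELAY, next(self._seq), packet))

        def pop_due(self, now):
            """Return all packets whose fixed playout time has arrived, in order."""
            due = []
            while self._heap and self._heap[0][0] <= now:
                due.append(heapq.heappop(self._heap)[2])
            return due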


A couple of decades ago, when virtual instruments started to become realistic on home computers, I did extensive testing, and to my ear anything over 8ms is too much delay. Obviously one would like no delay at all, but there is a point beyond which you can hear the delay, and some folks will find that unacceptable.

Your ISP and their ISP add a certain amount of delay or lag that can't be avoided, but it's possible to improve the delay on your own side of the cable/DSL/fiber modem by using Ethernet instead of Wi-Fi. My Wi-Fi router (a slightly older, last-model Apple unit) introduces about 10-12ms all by itself.
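The audio-interface side of the budget is easy to put numbers on, since buffer latency is just buffer size divided by sample rate (standard buffer sizes below, not measurements of any particular device):

    # Latency contributed by the audio buffer alone: frames / sample_rate.
    SAMPLE_RATE = 44_100   # Hz

    for frames in (64, 128, 256, 512):
        ms = frames / SAMPLE_RATE * 1000
        print(f"{frames:3d}-frame buffer -> {ms:4.1f} ms per direction")
    # 64 -> 1.5 ms, 128 -> 2.9 ms, 256 -> 5.8 ms, 512 -> 11.6 ms
    # A 512-frame buffer alone blows through an 8 ms threshold before the
    # network or a Wi-Fi hop adds anything.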


On how "any latency is too much latency" when playing music together: I've read that even a very large room, with musicians far enough apart, can introduce noticeable latency and throw off the sense of being in sync.

What impressed me is how low the latency threshold is, if a distance of less than a hundred meters can already be too far for sound to travel between players.

I'm hopeful that latency will continue to be reduced as much as is technologically (and physically) possible, and that real-time jamming over the wire will keep improving.


You can use ninjam, Reaper, and Audio Hijack for audio syncing, but video syncing seems problematic.


ninjam works very well for audio syncing, but the problem I've run into is that (at least with the default settings) it seems to sync everything a measure behind, i.e. you play to what others played one measure ago. This works well for jamming, but playing a song with chord changes was hard to coordinate.


A session has two controls: the tempo (there is an audible click track to keep people in sync) and a repeat duration. To make it work, either (a) it has to be a modal jam (no chord changes), or (b) you play 12-bar blues and set the repeat duration to 12 bars.
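Putting a number on option (b), with an illustrative tempo:

    # Option (b): one interval = the whole 12-bar form, so everyone hears the
    # previous chorus while playing the current one -- aligned, just a chorus late.
    bpm, bars, beats_per_bar = 120, 12, 4
    delay = bars * beats_per_bar * 60 / bpm
    print(f"fixed delay: {delay:.0f} s (one full 12-bar chorus)")   # 24 s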




