
Show HN: Roc – Real-Time streaming over the network - gavv42
https://gavv.github.io/articles/roc-0.1/
======
sleavey
This is exciting. About 5 years ago I tried to set up a home-made Sonos clone
by using two Raspberry Pis to stream synchronised audio across my network. I
did get it to work, but it was a huge hassle finding the particular
combination of PulseAudio configuration flags to use, and I had to set up a
dedicated wireless network for the bandwidth because it fell over if I used
compression. I figured the best approach would be to write a PulseAudio module
in C but I never had the time or skill to do it.

I'll definitely be giving this a go!

~~~
geekuillaume
I've built something similar at home with three RPis and Snapcast [0]. It has
an integration with Librespot [1] that shows it as a Spotify destination. It
works really well!

[0] [https://github.com/badaix/snapcast](https://github.com/badaix/snapcast)
[1] [https://github.com/librespot-org/librespot](https://github.com/librespot-org/librespot)

~~~
sleavey
Wow, that's amazing! I'll definitely have a look at that first!

------
aaronarduino
What is the latency? I don't see any latency numbers listed on the linked
site. Latency would be my number one concern when using software like this.

~~~
rahimnathwani
You can choose the target latency. Presumably, the larger that value, the less
effect dropped packets and network jitter have on the quality of the output:

[https://roc-project.github.io/roc/docs/manuals/roc_recv.html](https://roc-project.github.io/roc/docs/manuals/roc_recv.html)
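The tradeoff can be shown with a toy simulation (illustrative only; this is not Roc's actual jitter-buffer algorithm, and the 20 ms packet size is an arbitrary assumption): each packet arrives with random network jitter, and the receiver can only play it if it has arrived by its playout deadline, i.e. its send time plus the target latency. A larger target latency leaves fewer missed deadlines.

```python
import random

def missed_deadlines(target_latency_ms, jitter_ms, n_packets=10000, seed=1):
    """Count packets that arrive after their playout deadline.

    Each packet i is sent at t = i * 20 ms (20 ms of audio per packet)
    and arrives after a random network delay in [0, jitter_ms].
    The receiver schedules playout at send time + target_latency_ms.
    """
    rng = random.Random(seed)
    misses = 0
    for i in range(n_packets):
        send_time = i * 20.0
        arrival = send_time + rng.uniform(0.0, jitter_ms)
        deadline = send_time + target_latency_ms
        if arrival > deadline:
            misses += 1
    return misses

# With 60 ms of jitter, a 30 ms target latency misses roughly half the
# deadlines, while a 100 ms target latency misses none.
print(missed_deadlines(30, 60))   # many late packets
print(missed_deadlines(100, 60))  # → 0
```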

~~~
gavv42
Right.

You can also configure the FEC block size (it should be smaller than the
target latency), the length of network packets, the length of internal audio
frames in the pipeline, the resampler window, and the I/O latency, e.g. the
PulseAudio buffer size. So basically you can configure all (or almost all)
parameters that affect the resulting latency.

I'll document these parameters and their configuration a bit later. (Currently
you can find all of them in the man page and in the API, but there is no
overview page explaining how exactly they affect the total latency.)
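A rough sketch of how these pieces add up (the numbers below are hypothetical, chosen only to show the arithmetic; the real defaults and the exact pipeline are described in Roc's man pages and API docs):

```python
# Hypothetical per-stage buffer sizes, in milliseconds of audio.
target_latency_ms = 200  # receiver-side jitter buffer
fec_block_ms = 100       # FEC block; must stay below the target latency
packet_len_ms = 5        # audio carried per network packet
frame_len_ms = 10        # internal processing frame in the pipeline
io_buffer_ms = 40        # I/O latency, e.g. a PulseAudio sink buffer

# The FEC block has to fit inside the target latency, otherwise repair
# packets arrive too late to be useful.
assert fec_block_ms < target_latency_ms

# A rough upper bound on end-to-end delay: the jitter buffer dominates,
# and the per-stage buffers add on top of it.
total_ms = target_latency_ms + packet_len_ms + frame_len_ms + io_buffer_ms
print(total_ms)  # → 255
```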

------
radarsat1
Wasn't there a way to do this with Jack? Ah yeah, there were a few attempts..
[http://jackaudio.org/faq/netjack.html](http://jackaudio.org/faq/netjack.html)

~~~
gavv42
AFAIK zita-njbridge is quite similar to Roc, but it's JACK-specific and has no
loss recovery.

------
vlaskovits
This seems pretty cool -- but if I understand correctly, the promise of this
is more constant/predictable latency than it is real-time.

------
holy_city
What sets ROC apart from an AES67 stack?

~~~
aaronarduino
Or Dante for that matter.

~~~
detaro
any pointers to open implementations of those two?

~~~
holy_city
AES67 is just an interoperability standard; it's more of a device-level
protocol than anything. You buy AES67-compatible gear for your application,
then use the vendors' tools like Dante Virtual Soundcard (so you can
essentially treat the networked audio system as a normal soundcard on your
machine through CoreAudio/WASAPI/JACK/etc).

It's actually pretty great, most of the time there's no need for a separate
API just to handle streaming. It "just works."

~~~
tootie
Doesn't Dante also require proprietary hardware?

~~~
holy_city
Kinda? You need hardware at some point. AES67 was all about creating an open
protocol for connecting different proprietary stuff, and frankly there's only
a handful of places where I've seen open hardware worth its salt in audio. If
you need high capacity, low latency audio over networked machines, you're
going to need proprietary hardware/software in the chain somewhere.

------
xmichael999
!RemindMe when this supports video

