PipeWire 0.3 – JACK compatibility layer with comparable performance to JACK2 (github.com)
128 points by c487bd62 39 days ago | 43 comments



I tried getting it running and working two weeks ago, and again today. I had some real problems getting it working. The compilation ran without a hitch, but getting JACK, ALSA or PulseAudio to work wasn't really possible. I wanted to open bug reports and read the wiki on GitHub, but they moved it to the freedesktop GitLab without updating the links. The remaining pages are also mostly out of date or contain little beyond plans.

Can't wait to get a seamless setup for it, but I guess 0.3 is still too early for early adopters...


Jack, while incredibly powerful, has been a huge hurdle for pro audio on Linux. I'm looking forward to Pipewire being the default audio interface. Hopefully it drives a revival of the Linux sound engineering community and new software projects. Specifically, the lack of a user friendly loop/sample-based recording tool like Ableton Live is glaring.


The reasons why a tool like Live exists or does not exist on a platform has essentially zero relationship to the APIs used for audio and MIDI I/O.

As others have noted, Bitwig already exists - in some ways, it is more powerful than Live (and in others, less). It is also proprietary.

Also, Live is not the kind of tool that most people would associate the term user friendly with. It's quite hard to get started using Live, despite the program being basically 100% awesomesauce. Live is extremely friendly towards a certain kind of workflow for "in-the-box" music production, and has fundamentally changed the entire zeitgeist surrounding making music with computers. But it's not a replacement for linear timeline DAWs (like ProTools, Logic, Sonar, Ardour etc. etc.) and has its own foibles and certainly its own complexities and limitations.

I anticipate that the next major release of Ardour will have some Live-like features starting to arrive.


> I anticipate that the next major release of Ardour will have some Live-like features starting to arrive.

This is pretty intriguing - is there a public roadmap for these Live-like features?


We don't have enough developer resources to make roadmaps a worthwhile thing to spend time on. I work on this stuff when I am inclined and when there don't seem to be more important things to work on.

Also, just to be clear, I said "the next major release" because in my own head, 6.0 is "almost upon us". What I meant was the next major release after 6.0.


> Specifically the lack of a user friendly loop/sample-based recording tool like Ableton Live is glaring.

Slowly I'm trying to get https://ossia.io there... any help appreciated :D


Wow, that's not a Live competitor, that's a new paradigm! Definitely going to check it out and see if I can help in any way!


It's a superset of loopers, timelines and patchers :) but I'm almost alone on it, so development is not as fast as it could be


Can you describe what some of your hurdles have been? Pipewire is probably not going to solve your issues as it's not a complete replacement for JACK [0] and uses the same API anyway. Realistically, if you're writing a new DAW, you're still going to want to use the JACK API. If your problems are driver-related, that definitely won't be solved by switching to a different audio daemon.

And for the record, the open source audio community is not dead. There is activity, but you have to know where to look.

[0]: https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/FAQ...
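For context on what "use the JACK API" means in practice: JACK is a pull-model API where the server invokes your process callback once per cycle and the callback must deliver exactly one period of frames, without blocking. A toy Python simulation of that contract (illustrative only - the real API is libjack's C interface, and these function names are made up):

```python
# Toy simulation of JACK's pull model: the "server" drives the
# client's process callback once per period; the callback must return
# exactly `nframes` samples every cycle, on time, every time.
import math

PERIOD_SIZE = 64          # frames per cycle (a typical low-latency setting)
SAMPLE_RATE = 48000

def make_sine_client(freq_hz):
    """Build a process callback that renders a sine wave."""
    phase = 0.0
    step = 2 * math.pi * freq_hz / SAMPLE_RATE
    def process(nframes):
        nonlocal phase
        buf = []
        for _ in range(nframes):
            buf.append(math.sin(phase))
            phase += step
        return buf
    return process

def run_cycles(process, n_cycles):
    """The 'server': pull one period from the client per cycle."""
    out = []
    for _ in range(n_cycles):
        period = process(PERIOD_SIZE)
        assert len(period) == PERIOD_SIZE  # the hard real-time contract
        out.extend(period)
    return out

audio = run_cycles(make_sine_client(440.0), n_cycles=4)
print(len(audio))  # 4 periods of 64 frames = 256 samples
```

With real libjack you would register the callback via jack_set_process_callback() and it would run in a real-time thread, which is why it must never block or allocate.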


To be fair, as the original author of JACK and the lead author of one of the main DAWs on Linux, we no longer encourage the use of JACK with Ardour.

JACK is an exceptionally powerful tool (if I do say so myself) but it is overkill for the majority (maybe even the vast majority) of users. We try to encourage most Ardour users to use its builtin ALSA audio/MIDI I/O support rather than JACK these days.


I am a JACK/Ardour user and I still use application-application routing regularly to do other things. There are some workflows that are hard to do inside any given DAW, not just Ardour. This results in me having to run multiple DAWs at once, so running Ardour directly on ALSA won't work. You're right though that this situation is overkill -- I want to help fix some of these issues but Ardour is a large project and is difficult for me to start contributing to at the moment. Porting JACK-native programs over into audio plugins also takes time. Unfortunately this means I will probably be stuck on JACK for a while.

But all that is beside the point anyway. I still stand by my statement that a new DAW should use the JACK API, for compatibility purposes. I would change my mind about this if it ever comes to the point where JACK support is removed from Ardour and the other major DAWs. Take that as you will.


Paul, first of all, let me thank you for the great software you've developed. As a long-time JACK user, can you point me to a more detailed write-up on why it is overkill?


There's no such writeup.

But look - most people don't actually want to connect multiple applications together to make music. Most people don't actually want to move audio between applications at all. As we get more and more (reasonably) good plugins available on Linux, the "monolithic" approach - do it all inside one program (e.g. a DAW or something a bit like it) is easier for most people (no complex state management) and closer to their pre-existing mental models.

If you do need/want to connect multiple applications together, then sure, JACK is great and better than more or less any other possible alternative for that purpose.

But most people don't want to do that, and increasingly do not need to either.


> do it all inside one program (e.g. a DAW or something a bit like it) is easier for most people (no complex state management) and closer to their pre-existing mental models.

This may be a generational thing. As someone who learned recording in traditional analogue studios, I find a modular approach using JACK to be much closer to my own mental model.


With the greatest of respect, maybe, going forward, it would be best for clarity to include a caveat along with your "JACK is probably overkill" statements that you are talking about users who don't require (or aren't interested in) inter-app audio/MIDI/CV routing?


Before giving others lessons you should get your facts straight. PipeWire is meant to be a complete replacement for not only JACK, but also PulseAudio. One of its goals is to merge the functionality of both of these and only provide a compatible API for apps as a convenience. It will be possible to write apps that use neither. Internally it's different from JACK and PulseAudio. Other goals include better latency and better handling of dynamically plugged audio interfaces, which is currently a mess with JACK. It's also an attempt at simplifying the Linux audio stack.


I'm going by what's in the FAQ that I linked. I already gave you the primary source of whatever facts I have so take it up with them, not me. If it's wrong, and you have the ability to edit it, can you fix it?


I have no skin in this game but I presume a DAW would not use any of the dynamic graph configuration / format conversion stuff and just generate its own PCM buffers in userspace. In this case JACK's primary draw over PA/alsalib/whatever is solely the lower latency, for responsiveness in live performances.
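To make "generate its own PCM buffers in userspace" concrete: a DAW typically sums its tracks into one output buffer itself and hands only the finished mix to the server, bypassing any server-side graph or format conversion. A minimal sketch of that summing-with-clipping step (illustrative, not any particular DAW's code):

```python
# Sum several 16-bit PCM tracks into a single output buffer, hard-
# clipping to the int16 range, the way a DAW's internal mix bus might
# before handing the result to JACK/ALSA.
INT16_MIN, INT16_MAX = -32768, 32767

def mix_tracks(tracks):
    """Mix equal-length lists of int16 samples into one buffer."""
    assert tracks and all(len(t) == len(tracks[0]) for t in tracks)
    out = []
    for frame in zip(*tracks):
        s = sum(frame)
        out.append(max(INT16_MIN, min(INT16_MAX, s)))  # hard clip
    return out

mixed = mix_tracks([[1000, 30000, -5], [2000, 30000, 5]])
print(mixed)  # [3000, 32767, 0] -- the middle frame clips
```

A real engine would mix in float and dither on export, but the principle is the same: all the signal processing happens inside the application, and the server only moves finished buffers.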

This FAQ makes Pipewire out to be very similar to JACK but with more features. It is full of references to how Pipewire can achieve low latency, e.g. in the "Is PipeWire another JACK implementation?" section it boasts: "Synchronous clients are providing data for the current processing cycle of the device(s). There is no extra period of latency."

Of your bullet points vs JACK, the first is a genuine latency issue but should be fixable ("we are not there yet"), maybe in the kernel driver or by special-casing this particularly simple path in Pipewire. The rest are CPU overheads but not really latency-specific, especially not in an ongoing way for an established / security-checked graph from the DAW.

I think I read this FAQ quite differently from you, I see a lot of understated optimism here and I think it is reasonable to expect Pipewire to supplant JACK (and obviously Pulseaudio too) in the future for all use cases. It will be a phenomenal simplification of the Linux A/V stack to get behind a single comprehensive implementation.


> In this case JACK's primary draw over PA/alsalib/whatever is solely the lower latency, for responsiveness in live performances.

I see you included ALSA as a library which you claim wouldn't hit as low a latency as using JACK. Is that what you meant to write? If so, what situation do you have in mind where sound arrives at your ear faster by way of app->JACK->$foo->speaker vs app->ALSA->speaker?

I ask because I'm only familiar with using JACK where $foo = ALSA backend. It stands to reason that JACK-handing-off-to-ALSA cannot possibly achieve a lower round-trip latency than going straight to ALSA. And the round-trip latency measurements I've read to compare those two routes confirm that reasoning.
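The round-trip numbers being compared here follow directly from buffer arithmetic: each buffering stage in the path holds some number of periods, and every extra period adds period_size / sample_rate of delay. A quick back-of-the-envelope calculation (the standard formula, not measured data):

```python
# Nominal output latency contributed by buffering:
# (frames buffered) / (sample rate).
def latency_ms(period_size, n_periods, sample_rate):
    return 1000.0 * period_size * n_periods / sample_rate

# A typical JACK low-latency setting: 64 frames, 2 periods @ 48 kHz
print(round(latency_ms(64, 2, 48000), 2))   # 2.67 ms

# Any extra hop that adds even one more 64-frame period pushes it up
print(round(latency_ms(64, 3, 48000), 2))   # 4.0 ms
```

This is why an extra daemon in the path can only ever match, never beat, the direct route for the same period settings - which is what the round-trip measurements confirm.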


You are correct for ALSA == the kernel driver model.

But using "alsa" as a userspace audio stack has its own latency, dmix in particular. A highly optimized .asoundrc might be competitive.

This feels to me like background knowledge from long ago, but I can't provide any specific source other than anecdotes like https://forum.audacityteam.org/viewtopic.php?p=234235#p23423... .
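For reference, "a highly optimized .asoundrc" here means hand-tuning dmix's slave period and buffer sizes, which default to fairly large values. Something along these lines (device name and numbers are examples only - check them against your hardware):

```
# ~/.asoundrc -- shrink dmix's buffering (values are illustrative)
pcm.lowlat {
    type dmix
    ipc_key 1024
    slave {
        pcm "hw:0,0"
        period_size 128
        buffer_size 512
        rate 48000
    }
}
```

Applications would then open the "lowlat" PCM instead of "default"; whether the hardware accepts those sizes is another matter.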


Don't get me wrong, I personally am totally in favor of Pipewire. The GP was making it sound like it's going to be some kind of panacea though, and I don't think that is going to happen. Pro audio is hard. It takes a long time to bring a project like this up to speed (I know because I've been waiting patiently for the video streaming stuff to stabilize). Pipewire is a good improvement for users but it doesn't bring any major features that a DAW is going to really want. And as Paul said, eventually as a DAW grows and becomes more monolithic you're going to get away from even wanting an audio server at all so you can have the lowest latency possible.

BTW from my perspective there already is a major project driving a revival of open source audio, and that project is called JUCE [0].

[0] https://github.com/WeAreROLI/JUCE


If only JUCE supported the open LV2 plugin format..

(And if opening up projects in Projucer and saving something out weren't an all-too-commonly required build step.)

I would encourage interested folk to check out DPF and/or come hang out in #lad and #lv2 on Freenode.

https://github.com/DISTRHO/DPF


Right, I actually avoid using JUCE for my personal projects for this reason :) But it has resulted in a ton of plugins being available that would otherwise not have had native Linux support.


>PipeWire is meant to be a complete replacement for not only JACK, but also PulseAudio.

A laughable pipe dream.


> Jack, while incredibly powerful has been a huge hurdle for pro audio on Linux.

For me JACK was a huge step forward. After 25+ years of recording on tape, JACK allowed me to switch to computer based recording ~14 years ago. It did away with a lot of the arbitrary restrictions on what connections were allowed in the audio software that was around at the time (and which made me unwilling to switch until that point).


Check out Bitwig


Does it work with a Launchpad? Or are there any other options that do? Ableton and VirtualDJ are literally the only pieces of software I keep Windows for.

Edit: side note, is there any software that works with a mixtrax3?


Bitwig was created by Ableton employees.


I wouldn't be too quick to assume that Pipewire will be the default anywhere.


Starting Jack and then patching stuff together was never an issue to me. It's not any more complicated than your imagined use-case for it, aka "patch program A into program B".

ALSA is horridly complex, and its complexity is related to Linuxy stuff, not to anything resembling the work your actual audio converters are doing, nor the task of getting data in and out of them!

A new layer added on top of ALSA will not fix this.

Nothing will fix this until some AUDIO people, who are NOT Posix Linuxy People write a realtime audio subsystem to REPLACE ALSA.

This will not occur anytime soon, for several reasons.

Both the Bela.io system and Elk's "brand new audio OS from the ground up" (LOL) utilize Linux but bypass ALSA to achieve their low-latency rock-solid audio pipeline. (the CTAG interface that Heinrich Langer designed appears to be the first example I can find of this approach)

They use Xenomai realtime kernel extensions to Linux to essentially run the audio driver as a real-time thread and the buffers your program writes to/reads from, are the actual buffers that DMA is moving the audio data in and out of to the ADC and DAC.

End of the day, audio isn't a posix API, and should never be considered as such. Want an abstraction? fine, wrap a callback as an audio server lol...

I had a Tascam USB audio interface that always sounded THIN on Linux and I wondered why, until I saw that the ALSA channel mapping had automagically applied a surround sound map to my 8 channels of output, which implied two lovely high-pass filters running at 75hz because the system was quite sure that's what everyone does with 8 channels of output... this despite the fact that ALSA saw 4 different stereo "devices" as my hardware interface presented itself...

IF ONLY this were merely a universally understood /etc/alsa.conf or something file... oh no no no! Linux People have SAVED us from audio configuration! Go look in /opt/didntknowthisfolderexisted/.confy/map.conf or maybe somewhere else... Add x-windows and the lack of any GUI application corresponding 1-to-1 with the "handy obscure utility" it purports to configure and control, and you have a match made in heaven!

Linux is an awesome server OS. It's a crap audio OS, and this is due to extreme cleverness and a total lack of paying attention to how any other audio APIs in the world work except OSS... OSS isn't even an audio API FFS! It's like a printer API... you might as well use CUPS to run your sound LOL


It's good news to see work being done on the Linux audio 'bottleneck'. I miss the ease of use and options available on the other platforms I left behind.


Trust me, the device audio/MIDI APIs are not the bottleneck.

Things like JACK (which tend to draw complaints) don't exist on other platforms (except ... by running JACK there), so complaining about its complexities (not that you did, explicitly) isn't really fair since that is based on an apples-to-oranges comparison.

I would guess that the options you miss are the result of application and plugin developers (not) being willing to include Linux in their target platforms. As cool as PipeWire might turn out to be, it isn't going to have much effect on those decisions.

Finally, lots of people seem to manage to forget how for years (decades, perhaps), high performance audio software on Windows required new device drivers (ASIO) for your audio hardware, because the ones that came with Windows couldn't do the job. This has mostly changed now, but ASIO is still with us nevertheless.


I don't know whether the audio APIs are a bottleneck for programmers, but for me complexity of _configuring_ ALSA[1] is a massive bottleneck to expecting _average users_ to be able to do prosumer audio on Linux.

With ASIO on Windows, all the user need do is install the driver and then select the ASIO device in their application. On Linux, if something is wrong with your ALSA config, you need to break out the text editor and become an expert in ALSA architecture and configuration. I've seen enough lost souls in this situation asking questions on mailing lists that I am convinced it is a major problem.

As someone interested in deploying an end-user application to Linux, the fact that there is no plug-and-play "it just works" solution[2] for low-latency audio is a big problem.

[1] More specifically: resolving misconfiguration.

[2] By this I mean a solution that works by design, not by luck.


Pipewire is probably not going to make your ALSA configuration issues go away. It's built on top of ALSA just like everything else. What it might do is make it easier to have low-latency audio across multiple devices. And it may save you some of the hassle of having to configure JACK and LADISH.

Would you care to mention what your device and particular misconfiguration problem is? I can't guarantee I can help, and if it turns out the issue is due to drivers then there's not really anything a sound server like Pulse/JACK/Pipewire can do about it. You can't design around that, it literally is just bad luck that you happened to have a device that is badly supported.


Thanks for offering to help. I'm okay at the moment. I was referring to other users reporting issues that turn out to be ALSA configuration problems (e.g. on the PortAudio mailing list). Sidenote: the fact that someone needs to offer help is part of the issue that I'm pointing at. Once a driver is installed on Windows or Mac OS I've never heard of someone experiencing misconfiguration of the audio subsystem as a whole.

The main ALSA misconfiguration issue I've personally encountered was not related to the audio drivers per se, but to codec mixer configuration. It was on RK3399 SBCs from FriendlyArm. In that case I had to read the codec data sheet to work out how to set all the ALSA mixer flags and parameters correctly to get audio routing to send a signal to the output jack.
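For anyone hitting the same class of problem: on these codec-based boards the fix usually comes down to a handful of mixer switches that enable the route from DAC to output jack. The control names below are hypothetical (they differ per codec - the real ones come from `amixer -c <card> scontrols` and the data sheet):

```
# Enable a DAC -> headphone route. Control names are made up;
# list the real ones with: amixer -c 0 scontrols
amixer -c 0 sset 'Left Output Mixer DAC' on
amixer -c 0 sset 'Right Output Mixer DAC' on
amixer -c 0 sset 'Headphone' 80% unmute
alsactl store    # persist the mixer state across reboots
```

Finding which switches matter is exactly the data-sheet archaeology described above; nothing in userspace can automate it without that information.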


I'm not sure what larger issue you're pointing at, and I don't think it makes sense to chalk this up to anything else; no sound server can magically figure out how to read that data sheet either. A random userspace daemon (that is optional) is just not the right place to put a fix for that. The burden for a fix likely falls on your distro to ship the correct udev rules, but figuring out what those are for all variations of hardware that use that driver is another story. And who knows, the Windows and Mac drivers could actually be doing something strange by default that just happens to work for most users, but isn't worth replicating if someone writing udev rules decides they want to do the "right" thing. See where I'm going with this? It's not just a quick fix, and I don't even know if it's the right one. Some other distro somewhere might even have a different way to do it. If you're really interested in making progress on this, you'll have to ask your distro's audio maintainers.


I think you put it well by writing "Pipewire is probably not going to make your ALSA configuration issues go away."

I never claimed there was a quick fix, but I kinda hoped that Pipewire/Redhat have the leverage to fix whatever needs to be fixed in the ALSA API to make userspace audio a seamless experience.

My naive impression is that the ALSA driver model is "broken"[1] -- most likely because many parts that need to seamlessly work together are atomized into different subsystems with no one organisation held responsible. As you seem to be suggesting, correct operation appears to depend on correct configuration by either the distro and/or the end user.

There should be nothing to configure. The driver architecture should be structured such that it is not possible to ship a "working driver" that then requires individual distro maintainers to intervene for the user to experience "working audio". For example, if the mixer hardware needs to be configured for correct operation, that configuration should be part of the driver or kernel, not some auxiliary file that may or may not be correct in a given distro.

[1] where by "broken" I mean that it is incapable of providing the kind of zero-configuration plug-and-play experience that is available on Windows with ASIO, or with CoreAudio.


Ross, the main issue with ALSA these days is not the API at all. It's the cases where the people responsible for the code cannot get (easy) access to the information required to program whatever hardware mixer is present in the device.

This can be true for devices following the Intel HDA "specification" (which is still so loosely constructed when it comes to the mixer that it's barely a specification at all). It's also true for many of the USB interfaces out there - the audio/MIDI I/O part works flawlessly on Linux (thanks to iOS' "no driver" requirement), but there's no way to configure the hardware mixer. MOTU went with a web-based configuration process for some of their devices, which is truly lovely since it works on any device with a web browser. But companies like Focusrite (and many more) continue to refuse to openly provide the information required for the ALSA internals to control the hardware mixer on these devices. In some cases, they have been reverse engineered, but often only partially.

Note that the same limitation applies when using those devices on iOS: you cannot configure them fully, unless the manufacturer makes an iOS version of the "device control panel".


Another reply from me ... another thing that ALSA doesn't do well, by itself, is allowing device sharing by multiple applications. In that regard, it is a lot like ASIO and at least one of the other N Windows audio driver models. On Linux, the general design decision surrounding this has been to use a user-space daemon to provide this functionality - an approach that eventually even Apple ended up with (they refactored stuff, and moved coreaudiod from the kernel into user space).

So that means that systems like PulseAudio, JACK and PipeWire are where the "seamless experience" are going to happen, not really in ALSA. To use the comparison with CoreAudio, ALSA operates (1) as if every application enables hog mode. Try that sometime on macOS ... and watch your "seamless experience" completely fall apart :)

This is where PipeWire does offer some real hope. Whereas PulseAudio and JACK deliberately target different use cases/workflows, PipeWire seeks to unify them in a similar way to what coreaudiod does on macOS (device sharing, multiple sample rates, buffer sizes, etc. etc.)

Note that there is a cost to this, even on macOS. You cannot get the absolute minimum device latency on macOS, which you can on Linux. But most (all?) users on macOS seem OK with this, either because they just don't know it or because they do and think that the convenience tradeoff is worth it. I imagine that if/when PipeWire reaches its full goals, it will impose a similar burden (though perhaps with an option to avoid it by disabling some functionality).

The key point, however, is that applications (2) are going to continue to use either the ALSA or the JACK API for doing audio/MIDI I/O. ALSA ... because it's there, and has several 3rd party libraries built on top of it to make things simpler for developers. JACK ... because it's insanely well designed [:))] and gets the job done, portably and efficiently.

(1) well, it mostly does. The library that apps actually use to talk to ALSA can do device sharing, but it's never been widely used for that, and the design is a bit ... well, it's not ideal.

(2) all apps except Skype and Firefox, which join the hall of shame for actually deciding to use the "native" PulseAudio API, something the original author of PulseAudio strongly recommended against.


If somebody wants to try to convince upstream to do it then they can, but that's a lot more involved a task than getting a distro to do it.


LADISH is old, NSM is the newer session management hotness.

http://non.tuxfamily.org/nsm/API.html


Does this mean we'll finally see Zoom screen sharing with Wayland? Hope the Zoom team gets this resolved soon.


Note this is developed by a Red Hat engineer, so this is an open source IBM contribution to Linux.



