Can't wait to get a seamless setup for it, but I guess 0.3 is still too early for early adopters...
As others have noted, Bitwig already exists - in some ways, it is more powerful than Live (and in others, less). It is also proprietary.
Also, Live is not the kind of tool that most people would associate the term "user friendly" with. It's quite hard to get started using Live, despite the program being basically 100% awesomesauce. Live is extremely friendly towards a certain kind of workflow for "in-the-box" music production, and has fundamentally changed the entire zeitgeist surrounding making music with computers. But it's not a replacement for linear timeline DAWs (like ProTools, Logic, Sonar, Ardour etc. etc.) and has its own foibles and certainly its own complexities and limitations.
I anticipate that the next major release of Ardour will have some Live-like features starting to arrive.
This is pretty intriguing - is there a public roadmap for these Live-like features?
Also, just to be clear, I said "the next major release" because in my own head, 6.0 is "almost upon us". What I meant was the next major release after 6.0.
Slowly I'm trying to get https://ossia.io there... any help appreciated :D
And for the record, the open source audio community is not dead. There is activity, but you have to know where to look.
JACK is an exceptionally powerful tool (if I do say so myself), but it is overkill for the majority (maybe even the vast majority) of users. We try to encourage most Ardour users to use its builtin ALSA audio/MIDI I/O support rather than JACK these days.
But all that is beside the point anyway. I still stand by my statement that a new DAW should use the JACK API, for compatibility purposes. I would change my mind about this if it ever comes to the point where JACK support is removed from Ardour and the other major DAWs. Take that as you will.
But look - most people don't actually want to connect multiple applications together to make music. Most people don't actually want to move audio between applications at all. As we get more and more (reasonably) good plugins available on Linux, the "monolithic" approach - do it all inside one program (e.g. a DAW or something a bit like it) is easier for most people (no complex state management) and closer to their pre-existing mental models.
If you do need/want to connect multiple applications together, then sure, JACK is great and better than more or less any other possible alternative for that purpose.
But most people don't want to do that, and increasingly do not need to either.
This may be a generational thing. As someone who learned recording in traditional analogue studios, I find a modular approach using JACK to be much closer to my own mental model.
This FAQ makes Pipewire out to be very similar to JACK but with more features. It is full of references to how Pipewire can achieve low latency, e.g. in the "is PipeWire another JACK implementation" section it boasts: "Synchronous clients are providing data for the current processing cycle of the device(s). There is no extra period of latency."
Of your bullet points vs JACK, the first is a genuine latency issue but should be fixable ("we are not there yet"), maybe in the kernel driver or by special-casing this particularly simple path in Pipewire. The rest are CPU overheads but not really latency-specific, especially not in an ongoing way for an established / security-checked graph from the DAW.
I think I read this FAQ quite differently from you, I see a lot of understated optimism here and I think it is reasonable to expect Pipewire to supplant JACK (and obviously Pulseaudio too) in the future for all use cases. It will be a phenomenal simplification of the Linux A/V stack to get behind a single comprehensive implementation.
I see you included ALSA as a library which you claim wouldn't hit as low a latency as using JACK. Is that what you meant to write? If so, what situation do you have in mind where sound arrives at your ear faster by way of app->JACK->$foo->speaker vs app->ALSA->speaker?
I ask because I'm only familiar with using JACK where $foo = ALSA backend. It stands to reason that JACK-handing-off-to-ALSA cannot possibly achieve a lower round-trip latency than going straight to ALSA. And the round-trip latency measurements I've read to compare those two routes confirm that reasoning.
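For intuition, the floor in either path comes from the same back-of-the-envelope arithmetic: buffered periods times frames per period, divided by sample rate. A quick sketch (the numbers are purely illustrative, not measurements):

```python
def buffer_latency_ms(periods: int, frames_per_period: int, sample_rate: int) -> float:
    """Worst-case time a sample sits in the ring buffer before reaching the DAC."""
    return periods * frames_per_period / sample_rate * 1000.0

# Aggressive settings: 2 periods of 64 frames at 48 kHz
print(f"{buffer_latency_ms(2, 64, 48_000):.2f} ms")   # 2.67 ms
# Conservative settings: 2 periods of 256 frames at 48 kHz
print(f"{buffer_latency_ms(2, 256, 48_000):.2f} ms")  # 10.67 ms
```

Whatever overhead the extra JACK hop adds (context switches, an extra copy) lands on top of this floor, which is why app->JACK->ALSA can't undercut app->ALSA at identical buffer settings.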
But using "alsa" as a userspace audio stack has its own latency, dmix in particular. A highly optimized .asoundrc might be competitive.
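For anyone curious, that kind of tuning looks roughly like this (the device name and numbers here are hypothetical; push buffer_size too low and many cards will just xrun):

```
# Hypothetical ~/.asoundrc: share hw:0,0 through dmix with small periods.
pcm.lowlat {
    type dmix
    ipc_key 1024            # any unique integer
    slave {
        pcm "hw:0,0"        # adjust to your card
        period_size 64
        buffer_size 128     # 2 periods
        rate 48000
    }
}

pcm.!default {
    type plug               # convert formats/rates as needed
    slave.pcm "lowlat"
}
```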
This feels to me like background knowledge from long ago, but I can't provide any specific source other than anecdotes like https://forum.audacityteam.org/viewtopic.php?p=234235#p23423... .
BTW from my perspective there already is a major project driving a revival of open source audio, and that project is called JUCE.
(And opening up projects in Projucer and saving out something wasn't an all too commonly required build step)
I would encourage interested folk to check out DPF and/or come hang out in #lad and #lv2 on Freenode.
A laughable pipe dream.
For me JACK was a huge step forward. After 25+ years of recording on tape, JACK allowed me to switch to computer based recording ~14 years ago. It did away with a lot of the arbitrary restrictions on what connections were allowed in the audio software that was around at the time (and which made me unwilling to switch until that point).
Edit: sidenote, is there any software that works with a mixtrax3?
ALSA is horridly complex, and its complexity is related to Linuxy stuff, not to anything resembling the work your actual audio converters are doing, nor the task of getting data in and out of them!
A new layer added on top of ALSA will not fix this.
Nothing will fix this until some AUDIO people, who are NOT Posix Linuxy People write a realtime audio subsystem to REPLACE ALSA.
This will not occur anytime soon, for several reasons.
Both the Bela.io system and Elk's "brand new audio OS from the ground up" (LOL) utilize Linux but bypass ALSA to achieve their low-latency rock-solid audio pipeline.
(the CTAG interface that Heinrich Langer designed appears to be the first example I can find of this approach)
They use Xenomai realtime kernel extensions to Linux to essentially run the audio driver as a real-time thread, and the buffers your program writes to/reads from are the actual buffers that DMA uses to move the audio data in and out of the ADC and DAC.
At the end of the day, audio isn't a POSIX API, and should never be considered as such. Want an abstraction? Fine, wrap a callback as an audio server lol...
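That callback model is simple enough to sketch in a few lines. Everything below is a toy (a plain list stands in for the DMA buffer, and there is no real device), but it shows the shape of the API: the server owns the clock and the buffers, and the application only fills blocks on demand:

```python
from typing import Callable, List

# Minimal sketch of a callback-style audio abstraction (the CoreAudio/JACK
# model): the "server" drives the cycle and the client supplies a render
# callback that fills each block of frames when asked.

RenderCallback = Callable[[int], List[float]]  # frames -> samples

class ToyAudioServer:
    def __init__(self, frames_per_block: int, callback: RenderCallback):
        self.frames_per_block = frames_per_block
        self.callback = callback

    def run_cycles(self, n_cycles: int) -> List[float]:
        """Invoke the callback once per block, as a hardware interrupt would."""
        out: List[float] = []
        for _ in range(n_cycles):
            block = self.callback(self.frames_per_block)
            assert len(block) == self.frames_per_block, "callback must fill the whole block"
            out.extend(block)
        return out

# A trivial client: render an ever-increasing ramp.
counter = 0
def ramp(frames: int) -> List[float]:
    global counter
    block = [float(counter + i) for i in range(frames)]
    counter += frames
    return block

server = ToyAudioServer(frames_per_block=4, callback=ramp)
print(server.run_cycles(2))  # [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
```

Real systems differ in where the buffers live and how the wakeup happens, but the pull-model contract ("fill this block, now, within the deadline") is the common core.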
I had a Tascam USB audio interface that always sounded THIN on Linux and I wondered why, until I saw that the ALSA channel mapping had automagically applied a surround sound map to my 8 channels of output, which implied two lovely high-pass filters running at 75hz because the system was quite sure that's what everyone does with 8 channels of output...
this despite the fact that ALSA saw 4 different stereo "devices" as my hardware interface presented itself...
IF ONLY this were merely a universally understood /etc/alsa.conf or something file... oh no no no!
Linux People have SAVED us from audio configuration!
Go look in /opt/didntknowthisfolderexisted/.confy/map.conf or maybe somewhere else...
Add x-windows and the lack of any GUI application corresponding 1-to-1 with the "handy obscure utility" it purports to configure and control, and you have a match made in heaven!
Linux is an awesome server OS.
It's a crap audio OS, and this is due to extreme cleverness and a total lack of paying attention to how any other audio APIs in the world work except OSS... OSS isn't even an audio API FFS! It's like a printer API... you might as well use CUPS to run your sound LOL
Things like JACK (which tend to draw complaints) don't exist on other platforms (except ... by running JACK there), so complaining about its complexities (not that you did, explicitly) isn't really fair since that is based on an apples-to-oranges comparison.
I would guess that the options you miss are the result of application and plugin developers (not) being willing to include Linux in their target platforms. As cool as PipeWire might turn out to be, it isn't going to have much effect on those decisions.
Finally, lots of people seem to manage to forget how for years (decades, perhaps), high performance audio software on Windows required new device drivers (ASIO) for your audio hardware, because the ones that came with Windows couldn't do the job. This has mostly changed now, but ASIO is still with us nevertheless.
With ASIO on Windows, all the user need do is install the driver and then select the ASIO device in their application. On Linux, if something is wrong with your ALSA config, you need to break out the text editor and become an expert in ALSA architecture and configuration. I've seen enough lost souls in this situation asking questions on mailing lists that I am convinced it is a major problem.
As someone interested in deploying an end-user application to Linux, the fact that there is no plug-and-play "it just works" solution for low-latency audio is a big problem.
 More specifically: resolving misconfiguration.
 By this I mean a solution that works by design, not by luck.
Would you care to mention what your device and particular misconfiguration problem is? I can't guarantee I can help, and if it turns out the issue is due to drivers then there's not really anything a sound server like Pulse/JACK/Pipewire can do about it. You can't design around that, it literally is just bad luck that you happened to have a device that is badly supported.
The main ALSA misconfiguration issues I've personally encountered were not related to the audio drivers per se, but to codec mixer configuration. It was on RK3399 SBCs from FriendlyArm. In that case I had to read the codec data sheet to work out how to set all the ALSA mixer flags and parameters correctly to get audio routing to send a signal to the output jack.
I never claimed there was a quick fix, but I kinda hoped that Pipewire/Redhat have the leverage to fix whatever needs to be fixed in the ALSA API to make userspace audio a seamless experience.
My naive impression is that the ALSA driver model is "broken" -- most likely because many parts that need to seamlessly work together are atomized into different subsystems with no one organisation held responsible. As you seem to be suggesting, correct operation appears to depend on correct configuration by either the distro and/or the end user.
There should be nothing to configure. The driver architecture should be structured such that it is not possible to ship a "working driver" that then requires individual distro maintainers to intervene for the user to experience "working audio". For example, if the mixer hardware needs to be configured for correct operation, that configuration should be part of the driver or kernel, not some auxiliary file that may or may not be correct in a given distro.
 where by "broken" I mean that it is incapable of providing the kind of zero-configuration plug-and-play experience that is available on Windows with ASIO, or with CoreAudio.
This can be true for devices following the Intel HDA "specification" (which is still so loosely constructed when it comes to the mixer that it's barely a specification at all). It's also true for many of the USB interfaces out there - the audio/MIDI I/O part works flawlessly on Linux (thanks to iOS' "no driver" requirement), but there's no way to configure the hardware mixer. MOTU went with a web-based configuration process for some of their devices, which is truly lovely since it works on any device with a web browser. But companies like Focusrite (and many more) continue to refuse to openly provide the information required for the ALSA internals to control the hardware mixer on these devices. In some cases, they have been reverse engineered, but often only partially.
Note that the same limitation applies when using those devices on iOS: you cannot configure them fully, unless the manufacturer makes an iOS version of the "device control panel".
So that means that systems like PulseAudio, JACK and PipeWire are where the "seamless experience" is going to happen, not really in ALSA. To use the comparison with CoreAudio, ALSA operates (1) as if every application enables hog mode. Try that sometime on macOS ... and watch your "seamless experience" completely fall apart :)
This is where PipeWire does offer some real hope. Whereas PulseAudio and JACK deliberately target different use cases/workflows, PipeWire seeks to unify them in a similar way to what coreaudiod does on macOS (device sharing, multiple sample rates, buffer sizes, etc. etc.)
Note that there is a cost to this, even on macOS. You cannot get the absolute minimum device latency on macOS, which you can on Linux. But most (all?) users on macOS seem OK with this, either because they just don't know it or because they do and think that the convenience tradeoff is worth it. I imagine that if/when PipeWire reaches its full goals, it will impose a similar burden (though perhaps with an option to avoid it by disabling some functionality).
The key point, however, is that applications (2) are going to continue to use either the ALSA or the JACK API for doing audio/MIDI I/O. ALSA ... because it's there, and has several 3rd party libraries built on top of it to make things simpler for developers. JACK ... because it's insanely well designed [:))] and gets the job done, portably and efficiently.
(1) well, it mostly does. The library that apps actually use to talk to ALSA can do device sharing, but it's never been widely used for that, and the design is a bit ... well, it's not ideal.
(2) all apps except Skype and Firefox, which join the hall of shame for actually deciding to use the "native" PulseAudio API, something the original author of PulseAudio strongly recommended against.