There are trade-offs: a real-time system, and likewise a low-latency system (which is what is offered here), will never be as efficient in terms of data throughput as a system that doesn't guarantee response times (or, as here, merely prioritizes fast response times).
You'll want a low-latency system for multimedia applications, games and other interactive systems. You'll want a real-time system when missing deadlines is cost-prohibitive or dangerous (the CNC machines mentioned below, robotics, etc.). You'll want a system that prioritizes throughput in all other cases.
The easiest way to make a real-time system is to statically partition the resources. Think of giving each of 4 processes 1/4 of the CPU: now you can guarantee each process will get that much CPU time, and thus can finish a job that takes X clock cycles in Y real-world nanoseconds.
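The guarantee in that static-partitioning example is just arithmetic; a toy sketch (all the numbers here are made up for illustration):

```python
def worst_case_seconds(job_cycles, cpu_hz, cpu_share):
    """Worst-case wall-clock time for a job needing `job_cycles` cycles,
    on a CPU running at `cpu_hz`, when the process is statically
    guaranteed the fraction `cpu_share` of that CPU."""
    return job_cycles / (cpu_hz * cpu_share)

# Four processes, each statically given 1/4 of a 1 GHz CPU:
# a 10-million-cycle job is guaranteed to finish within 40 ms,
# even though on an otherwise idle machine it could finish in 10 ms.
print(worst_case_seconds(10_000_000, 1_000_000_000, 0.25))  # 0.04
```

That gap between the guaranteed 40 ms and the achievable 10 ms is exactly the wasted capacity the next comment points out.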
Now consider that with 3 of the processes idle and 1 trying to do as much computation as possible, this is obviously wasteful.
Trying to be more efficient, while still remaining real time, gets progressively more complicated.
"Real-time" means that a process receives guarantees regarding its execution schedule. These guarantees require modified task schedulers, preclude certain performance optimisations and constrain what a process is allowed to do (and how much of it).
No multitasking is one constraint that comes to mind. A real-time system wants to be small and robust and to guarantee a fast response, no stuttering and so on. They're used in specialized problem domains. But you wouldn't need that on your laptop; you're fine with a sometimes imperceptible delay in exchange for all the goodies in a modern OS. Well, I like my OSes stripped down, but no matter how much you reduce them they'll never be real-time systems.
Fundamentally it's the trade-off between latency and throughput. Batching increases throughput at the cost of latency. Real time in this context means guaranteed low latencies, which means lower throughput.
I mean RHEL charges for software too.
But anybody can clone the source, plus Ubuntu Pro is free for up to 5 machines, and RHEL too is free for up to 16 machines.
If Ubuntu Pro gets big enough, we might actually see a Rocky-style Ubuntu Pro clone.
There is tremendous value in the software, and it probably took quite a bit of time to add that in. Just this week we saw a pretty high-visibility discussion about the difficulties in funding OSS.
Everyone has different opinions, but I've derived quite a bit of value from Canonical's F/OSS efforts/contributions and am supportive of them exploring ways to make money, even if that means keeping high-value-add features paid for a period of time.
Not sure if it answers your question, but music production also needs much lower latency than what you typically get in stock Ubuntu. Ubuntu Studio has a real-time kernel build.
> I would have thought that giving the audio process full priority would have the greatest benefit.
I think you're conflating two separate things. Giving full priority to an audio processing thread will ensure it is scheduled as much as possible, which increases throughput. This is crucial for things like rendering (audio or video), where the process takes on the order of minutes or hours and needs to be completed as fast as possible.
In other applications, such as synthesizers, virtual instruments (MIDI keyboard -> MIDI signal -> DAW -> sound wave), music notation etc., you care only about latency. The human brain is hard-wired to notice even the minutest latency in music, especially in the bass register; any millisecond of delay will be perceptible to highly trained professional musicians. To fix this, you need to tell the kernel that when it receives a signal, the thread cannot be scheduled out. Here, you don't care about throughput, but have extreme latency requirements.
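On Linux, that "don't schedule me out" request maps to the real-time scheduling classes (SCHED_FIFO/SCHED_RR). A minimal, Linux-only sketch in Python for illustration (real audio software would do this in C via `pthread_setschedparam`); it needs CAP_SYS_NICE or root, so it falls back gracefully:

```python
import os

def try_sched_fifo(priority=80):
    """Ask the kernel to put this process into the SCHED_FIFO real-time
    class: it then preempts all normal (SCHED_OTHER) threads and is not
    timesliced against them. Returns True on success, False if we lack
    the privilege (CAP_SYS_NICE / RLIMIT_RTPRIO)."""
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return True
    except PermissionError:
        return False

if try_sched_fifo():
    print("running under SCHED_FIFO")
else:
    print("no privilege; still under the default scheduler")
```

Note this only controls scheduling against other userspace threads; bounding delays from inside the kernel itself is what PREEMPT_RT is about.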
I, as a hobbyist composer, personally use a traditional Arch Linux setup for my studio. It's not real time at all. It honestly sucks; the latency is extremely noticeable, which makes some applications, such as playing on my keyboard and hearing soundfonts through the DAW, impossible, as the latency confuses me and prevents me from playing the keyboard properly. This is OK for me, as it doesn't disrupt my personal workflow (I have a very slow, calculated approach to composing, and when I need to improvise, I do it on my piano), so I didn't care about optimizing it. But if you're a professional musician and need good latency, something like this can be crucial.
Yeah, I don't think that'll work in practice. Unless your OS is real-time, what you described cannot work, because audio latency involves more than your DAW's threads. It also involves the OS polling MIDI data, or sending data to the audio driver. These need to be completed on a deadline. Realtime scheduling can be less efficient than fair scheduling, which is why, if your DAW had all the power to set its own scheduling, you'd only do compute and not enough IO. This means poor latency.
Audio is one of the most real-time applications. Most real-time applications have much looser timing requirements. If you are running a motor, the inertia of the whole system means that you really only need to adjust the speed every few seconds. If you are running a stepper motor you need much tighter timing on the steps, but that is typically done by a dedicated CPU, leaving the logic of what speed to run to something else.
Note, audio is about latency. Other real-time applications need much more CPU power, but they can accept more jitter and latency.
> Audio is one of the most real-time applications. Most real-time applications have much looser timing requirements. If you are running a motor, the inertia of the whole system means that you really only need to adjust the speed every few seconds.
Anecdotally, that's one of the "eureka" moments behind SpaceX's industry-disrupting low production costs. They realized they only really needed ±20ms precision for communication between different sensors/actuators, which made heavily multiplexed Ethernet buses feasible, and got rid of massive amounts of dedicated serial lines and related hardware.
13 years later, competitors are just about ready to catch up.
I would have thought that the operating system interfering with the scheduling of my DAW would increase latency. But since I know too little about how DAWs handle the scheduling I can only make some naive assumptions.
I want my audio processes to have full priority over any other task on my OS. I don't mind if my DAW increases the latency of other processes running on my PC.
It depends on the application. More serious music applications have a separate "render mode" which works in non-realtime. Also, "real-time" apps will use a rendering buffer of, say, 100 ms. Someone would need to benchmark how small the buffer needs to be for RT to matter.
Time resolution for MIDI input is not high enough for it to matter; it will be quantized to MIDI time anyway.
It is critical for audio effects and synthesizers that take live input and emit live output. These exist, but I'm not sure they are commonly used on Linux, or whether it's wise to run them with low resources.
I don't understand this perspective. I've seen musicians at all levels use all sorts of crazy instruments. The output of live audio processing may not be used in the final result, but it's very important for capturing a performance.
> not sure they are commonly used on linux or if it's wise running with low resources
Linux-based studios are the norm for many hobbyists and in educational settings where the goal is actually teaching and not just indoctrinating a new cohort of commercial software users. Also, who said anything about "low resources" and what does that even mean?
Linux-based studios are the norm for many hobbyists?
I don't think so. And what does that even mean?
There are some hobbyists using Linux for music production. But it's so far behind macOS and Windows in that regard that no sane person would use it for any reason other than preferring free software or using some esoteric piece of software that's only available on Linux.
I wish it were different, because then I could ditch Windows and move to Linux full stop.
100ms is too much. That's almost 1/16th at 120bpm. Anyone multitracking would try to use at most a 256 sample buffer, which gives a 6ms (extra) latency.
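For reference, the arithmetic behind those buffer numbers; a quick sketch assuming a 44.1 kHz sample rate (other rates shift the figures slightly):

```python
def buffer_latency_ms(samples, rate_hz=44100):
    """Added output latency, in milliseconds, from an audio buffer
    holding `samples` frames at sample rate `rate_hz`."""
    return 1000.0 * samples / rate_hz

print(round(buffer_latency_ms(256), 1))    # 5.8 -> fine for live tracking
print(round(buffer_latency_ms(4096), 1))   # 92.9 -> playback/mixing only
```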
Yes, but you can set a heavy project to say 4096 samples (which is then 96ms) when playback latency doesn't matter.
Sound travels about 2 m in 6 ms. IIRC, it's difficult for humans to perceive time differences smaller than 20 ms, so humans perceive sounds from within a radius of about 7 m as simultaneous.
It's not really a matter of how many resources a system has; without proper realtime scheduling, a thread might still be randomly delayed for just a few milliseconds too many due to scheduling screwups.
QNX, now owned by BlackBerry, was always billed as a Unix-like "real-time" OS; it gets used in automotive and similar control applications. QNX is closed source, so this could potentially be an open replacement.
This seems to be targeted at OEMs. There is little information regarding supported underlying hardware.
Some Rockwell (Allen-Bradley) PLCs, for example, now use commodity Intel hardware like i7 processors and Ethernet modules. These PLCs and I/O modules also provide deterministic time guarantees, especially important for motion applications. Since the OEM has control of the hardware, the software is tightly coupled to ensure deterministic performance.
PREEMPT_RT is still an out-of-tree patch set for the kernel. There has been significant progress in mainlining it, and for the last few years I've been hearing that it's in the final stretch of being fully mainlined.
However, I think it's still a little while away from being fully mainlined. The last issues I remember were with preemption in printk calls. Adding this level of preemption support to every subsystem is extremely difficult. That said, the patched kernels function pretty well. If you need proper real-time, ditch GNU/Linux and go with a real real-time kernel like QNX, VxWorks, etc. Linux will never be true real-time, and that's an acknowledged fact.
Where can one see updates on the status of the project to mainline PREEMPT_RT? In recent years there have often been articles/reports about its imminent mainlining.
From what I understood of all the explanations here, in layman's terms, a real-time system is very quick to respond to signals, but it doesn't guarantee the total amount of processing done for each signal, only that it will process as many signals as possible. A non-real-time system, by contrast, doesn't care much about the total number of signals; instead it wants to process as much as possible on each signal without blocking other signals too much.
For example, music production is low bandwidth (meaning that not a lot of data goes into each individual action), but it has a lot of signals that, when not processed immediately, make things sound off. Same for a space rocket: small packets of data stream from one subsystem to another, and things can go bad if the data gets delayed, but each data packet does not require huge amounts of processing; they just need to be processed ASAP. Both of these benefit from a real-time system.
On the other hand, when you are doing something like graphical rendering, heavy data processing like training machine learning models, mining crypto, simulations, and other types of processes that require a lot of data to be processed in one go and don't really mind if the data gets processed now or 1 ms later, then you're better off with a non-real-time system. Because if you had a real-time system for these tasks, you could actually see lower total performance for each macro task. Applications would be very snappy on your desktop, but you'd have to wait a bit longer for things to load fully on a real-time system doing medium to heavy data processing.
Games are maybe a mix of both: real-time for input processing and sound, non-real-time for graphical rendering (if you don't need extremely consistent frame rates, in the realm of <1% deviation or something), with in-game physics simulation being a mix of both.
Is that a fair summary?
But then there are low-latency systems? I don't really know how those differ from a real-time system, which should offer low latency by default because it tries to process signals as they come in.
I hope it is! It's the only one I actually understood!
As for gaming, I am really interested in whether there are benefits to real-time kernels, and whether it's possible to make a hybrid kernel that turns real-time only when needed.
That'd be very interesting, having a scheduler that can toggle between real-time and non real-time depending on the process. Although I'm not sure what that'd look like physically in terms of core usage.
It's important, as it allows you to use run-of-the-mill operating systems and programs while still having reliable scheduling.
If you make a car, for example, that shares its ECU between various systems, some of them time-critical, the whole affair needs to be able to schedule reliably so as to meet the deadlines. This is not doable with a run-of-the-mill OS.
Thus there are special operating systems that provide realtime capability, e.g.
https://en.wikipedia.org/wiki/ECos, but these are usually not POSIX systems and are generally incompatible with them. Also, they normally cost an arm and a leg to license.
And now Ubuntu can run on car hardware with time-critical software running on it.
Oh I'm well aware of what is generally meant by real time. It's also abused as a marketing term to the point where it loses some meaning.
What is the latency and timing jitter with Ubuntu real-time for a userland process? A kernel module? Are there any guarantees?
On a Cortex-M there are very clear ideas of what interrupt-to-ISR latency is going to look like, and while it's not easy, it's possible to determine timing bounds.
On a linux x86 machine with god knows what running? Yeah...
> On a linux x86 machine with god knows what running? Yeah...
Generally, the way Linux RT projects solve that is by scheduling the CPU between the realtime processes and "the rest of Linux", where the rest of Linux acts as a single low-priority best-effort process. How much work that one part is doing internally isn't really relevant.
Typically, the realtime processes can only communicate with "the rest" by some very limited channel, e.g. message-passing, so they're never hampered by kernel locks or such.
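A common shape for that limited channel is a bounded, non-blocking queue: the realtime side never takes a lock or waits, and simply drops the oldest data when the best-effort side falls behind. A toy sketch of the idea in Python (real RT systems would use a lock-free ring buffer in C; here `collections.deque` stands in, since its `append`/`popleft` are atomic in CPython):

```python
from collections import deque

# Bounded channel from a realtime producer to a best-effort consumer.
# maxlen makes the deque silently discard the oldest entry when full,
# so the producer can never block, whatever the consumer is doing.
channel = deque(maxlen=64)

def rt_producer(sample):
    channel.append(sample)          # never blocks, never takes a lock

def consumer_drain():
    """Best-effort side: pull everything currently queued."""
    out = []
    while True:
        try:
            out.append(channel.popleft())
        except IndexError:          # channel empty
            return out

for i in range(5):
    rt_producer(i)
print(consumer_drain())  # [0, 1, 2, 3, 4]
```

The design choice is that overload degrades into dropped messages on the slow side, never into the realtime side missing a deadline.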
I guess there is an almighty scheduler in the kernel which, at any given moment, is allowed to put your process to sleep, push all the registers onto some stack along with the instruction pointer, and do something else.
It's certainly not done by software yielding.
Linux is generally good enough for soft real time, where you can miss a deadline but really would prefer not to. Think "noticeable glitch" not "explosion". Soft real time and low latency are what this is all about.
As far as I've read, consumer variants of x86 CPUs are themselves not ever capable of hard real time; there's "management" stuff (System Management Mode, for example) that'll just decide to pause the CPU sometimes.
> The PREEMPT_RT patchset reduces the kernel latencies as required by the most exacting workloads, helping to ensure time-predictable task execution. Meeting stringent determinism requirements and upper-bounding execution time, Ubuntu with PREEMPT_RT makes the kernel more preemptive than mainline Linux.
It isn't very unclear imho. And it's not like PREEMPT_RT is some new thing; it's pretty well understood in the industry.
"more preemptive" ok but how does this bound latencies and timing jitter from event to handling?
E.g. I need to run a task at exactly every 100us, or within 10us of a gpio event, or ...
You get the idea. Linux RT is, in my mind, abusing a term to market itself better. There are absolutely zero guarantees against having your process indefinitely delayed.
It's less likely, but zero guarantees and no possibility of doing any sort of worst case determination. That's not "Real Time" that's just better scheduling.
> "more preemptive" ok but how does this bound latencies and timing jitter from event to handling?
Linux RT is basically just the preempt_rt patches.
On the normal Linux kernel side, many (all?) kthreads can't be preempted. This prevents the kernel from handling interrupts from IO in bounded time, since it has to wait for a kthread to yield. The preempt_rt patches make kthreads preemptible.
> E.g. I need to run a task at exactly every 100us, or within 10us of a gpio event, or ...
Allowing kthreads to be preempted lets the kernel scheduler run more deterministically and not be blocked by kthreads doing things. This allows the kernel to ensure a GPIO task gets serviced by a given deadline.
There's still spinlocks and critical sections which can block preemption, but they're minimized.
Linux RT provides an "execute my thread by a deadline" option.
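A requirement like "run every N microseconds" is usually implemented as a loop that sleeps until an absolute deadline rather than for a relative interval, so errors don't accumulate. A rough sketch in Python for illustration (1 ms period; a real implementation would use C with `clock_nanosleep(TIMER_ABSTIME)` and SCHED_DEADLINE or SCHED_FIFO; without an RT scheduling class, the measured lateness is exactly what's unbounded):

```python
import time

def run_periodic(task, period_ns, iterations):
    """Run `task` once per period, sleeping toward an absolute deadline.
    Returns the lateness (ns) of each activation past its deadline."""
    lateness = []
    deadline = time.monotonic_ns() + period_ns
    for _ in range(iterations):
        now = time.monotonic_ns()
        if now < deadline:
            time.sleep((deadline - now) / 1e9)
        lateness.append(time.monotonic_ns() - deadline)
        task()
        deadline += period_ns   # absolute deadlines: no drift accumulation
    return lateness

jitter = run_periodic(lambda: None, period_ns=1_000_000, iterations=50)
print(max(jitter), "ns worst-case lateness over 50 periods")
```

On a loaded desktop kernel the worst-case figure can spike by milliseconds; bounding it is the whole point of the preempt_rt work discussed above.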
> It's less likely, but zero guarantees and no possibility of doing any sort of worst case determination. That's not "Real Time" that's just better scheduling.
Hate to tell you, but that's every RTOS. The only one I know of that has a provably bounded scheduler is seL4. IMHO, that's cool.
But FreeRTOS, Zephyr? Nope, they do the same thing the Linux preempt_rt patches do. It's just that a smaller system makes it easier to ensure it isn't overloaded, with fewer kthreads to review for critical sections, etc.
You need to add some sources to your words so people can take you seriously.
If you just write something it does not qualify as "truth" just because it is written by you, you know?
They even invented a feature that can be used in every web browser today - it is called "URL" and is used to help you give users a quick method to read the "source of truth" that your words are based on.
Just use it and your words will have more power - it is like magic!
What are the disadvantages of a real time system?
I mean, real time sounds better than not real time, so why not include it in the standard Ubuntu distribution?