Ok so basically they are introducing annotations so that the compiler can reason about the code and warn the programmer about non-realtime usage.
When you think about it, it's a lot like a type system.
I haven't worked with realtime systems, but I have other constraints. E.g. I want the memory usage of a function to stay within X kilobytes, or I want an API call to return within a second, or I want to ensure there is no PII being sent to the logs.
I sincerely hope that in the future we'll have languages that cater to these kinds of constraints. Think function coloring on steroids. That way the compiler can help catch problems and we'd need way fewer tests.
Resource use (including time) is a kind of (side) effect, and when effects are modeled at the type-system level we talk about effect systems[0]. There's definitely quite a bit of interest in effects among the programming language design/theory crowd.
I only have grug brain, but one could call into WASM modules, each with its own tiny pre-allocated memory. There is also WUFFS, a language that is explicitly limited in several ways. I also feel like some of this could be done in Ada or one of the stricter functional languages.
Most languages don't offer a way to grow a stack frame arbitrarily at runtime, so it should be straightforward to compute an upper bound on any given function's stack usage. C is a bit harder: you need to forbid alloca, as well as goto and setjmp/longjmp (because I think you need to ensure that control flow is reducible in order to do this analysis).
But then the problem is that recursion exists in every language, so even if you know every function's frame size you can still get an arbitrary amount of stack usage out of recursive calls; you need to forbid recursion as well.
And that only gives you guarantees WRT the stack, so you'll probably also want to forbid general heap allocations (possibly replacing them with fixed-size static buffers).
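A toy sketch of that stack analysis in Python (the function names, frame sizes, and call graph below are made up): take each function's frame size plus the deepest of its callees, and treat any cycle in the call graph as "unbounded".

    # Hypothetical per-function frame sizes (bytes) and a static call graph;
    # a real tool would extract these from the compiler's output.
    FRAME_SIZES = {"main": 64, "parse": 256, "emit": 128, "helper": 32}
    CALLS = {"main": ["parse", "emit"], "parse": ["helper"], "emit": ["helper"], "helper": []}

    def worst_case_stack(fn, _visiting=frozenset()):
        """Upper bound on stack bytes used by calling fn; raises on recursion."""
        if fn in _visiting:
            raise ValueError(f"recursion through {fn!r}: stack usage is unbounded")
        deepest_callee = max(
            (worst_case_stack(c, _visiting | {fn}) for c in CALLS[fn]),
            default=0,
        )
        return FRAME_SIZES[fn] + deepest_callee

    print(worst_case_stack("main"))  # 64 + 256 + 32 = 352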
Well, I'm not sure about forbidding heap allocations; that would severely limit what you can do with a function. In low-level languages like Rust or C it would be difficult to keep track of the total size of heap allocations in a performant way, but in e.g. Python it should be possible to add some tracing so that a function can only allocate X bytes, and beyond that throw an error or log a warning.
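A rough sketch of that idea using Python's standard tracemalloc module (the 400 KB cap and the build_report function are made up). It only sees allocations that go through Python's allocator, and tracing adds real overhead, so it's more of a debugging aid than a guarantee:

    import functools
    import tracemalloc

    def allocation_cap(max_bytes):
        """Raise if a call's peak traced allocations exceed max_bytes."""
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                tracemalloc.start()
                try:
                    result = fn(*args, **kwargs)
                finally:
                    _current, peak = tracemalloc.get_traced_memory()
                    tracemalloc.stop()
                if peak > max_bytes:
                    raise MemoryError(f"{fn.__name__} peaked at {peak} bytes (cap {max_bytes})")
                return result
            return wrapper
        return decorator

    @allocation_cap(400 * 1024)           # hypothetical 400 KB budget
    def build_report(rows):
        return [row.upper() for row in rows]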
It would be great if we could mark some functions as non-Turing-complete, and avoid recursion. Would make it easier to reason about them.
A pattern in some embedded programming is to have an "arena" (just a bunch of pre-allocated memory) and then allow "malloc" (really a super simple malloc) from it. But "free" is a no-op and does nothing (it leaks all memory). Then you just release the whole arena when you are done. You reason about and/or test your code until it runs with a given arena size and call it a day. This way you can run "legacy" code written with allocations in mind, but allocation is super fast, with a computable upper bound on how long it takes in the worst case.
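A toy model of that bump-allocator pattern, sketched in Python for illustration (a real embedded version would be a few lines of C over a static byte array): alloc just advances an offset, free does nothing, and reset releases everything at once.

    class Arena:
        """Bump allocator over a pre-allocated buffer: alloc is O(1),
        free is a no-op, reset() releases everything at once."""
        def __init__(self, size):
            self._buf = bytearray(size)   # the pre-allocated "arena"
            self._offset = 0

        def alloc(self, nbytes):
            if self._offset + nbytes > len(self._buf):
                raise MemoryError("arena exhausted")
            block = memoryview(self._buf)[self._offset:self._offset + nbytes]
            self._offset += nbytes
            return block

        def free(self, _block):
            pass                          # deliberately does nothing

        def reset(self):
            self._offset = 0              # release the whole arena in one shot

    arena = Arena(64 * 1024)              # hypothetical 64 KB budget
    scratch = arena.alloc(256)
    scratch[:5] = b"hello"
    arena.free(scratch)                   # no-op
    arena.reset()                         # everything gone, arena reusable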
Recursion can cause problems (though it's easy to detect if you can build a call graph), but the harder problem in most cases is constructing that call graph in the face of function pointers and other runtime abstractions. A worst-case analysis is possible if the types are constrained enough to reduce the possible targets of such a call to a reasonably small set, but most static stack analysis tools bail on trying to analyse this at all.
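Building on the stack sketch further up the thread: if the types narrow an indirect call down to a small candidate set, the analysis can still take the worst case over that set. Everything below (names, frame sizes, the candidate set) is made up, and it assumes the graph is acyclic.

    # A call site is either a named callee or a set of candidate targets
    # for an indirect call.
    FRAME_SIZES = {"isr": 48, "decode_fast": 96, "decode_safe": 512, "log": 32}
    CALLS = {
        "isr": [{"decode_fast", "decode_safe"}],  # call through a function pointer
        "decode_fast": ["log"],
        "decode_safe": ["log"],
        "log": [],
    }

    def bound(fn):
        worst_child = 0
        for site in CALLS[fn]:
            targets = site if isinstance(site, set) else {site}
            worst_child = max(worst_child, max(bound(t) for t in targets))
        return FRAME_SIZES[fn] + worst_child

    print(bound("isr"))  # 48 + max(96, 512) + 32 = 592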