Unless it changed, L4 is used, with additional code co-located, as the baseband modem firmware on its own CPU. That's a small RTOS, not a multi-server microkernel-based OS.
EDIT: Also, had some fun in the past learning about the modem software on Qualcomm modems, back before iPhone or Android so nice to meet someone who worked on it :)
IIRC Linux stopped using this long ago, as the performance improvement was not worth the extra code (although Linux is not as heavy on IPC as L4 & co).
As an example of the wrong approach, x86's SYSENTER just switches to kernel mode and updates the program counter; it doesn't even save the return address. x86's interrupts are better: they push the clobbered state onto the stack.
We won't get another chance to do this right for a long, long time.
Leaving aside whether the behavior you propose qualifies as "right": no, I don't think there is any chance to change the encoding of "bool", any more than you can redefine the size of "char" or the value of CHAR_BIT. The world uses 8-bit bytes, the world uses two's complement (C recently wrote that into the standard), and the world uses 0 for false and 1 for true.
I come from x86, PPC, MIPS, and ARM, and on all these platforms "bool's true" is sometimes ~0 and sometimes 1. For example, SIMD comparisons on all these ISAs use ~0 for true, and I've fixed hundreds of bugs where people expected true to be represented by 1.
Even the RISC-V Vector ISA uses `~0` for true. So if anything, what would be completely backwards is for a new ISA to use 1 for true in some operations and ~0 in others. That's super confusing, and people get hit by it all the time.
Beyond the confusion of using different values for different operations: even for simple scalar code, 1-for-true means you can't just bitwise-AND with true to mask all bits, which is a super useful thing to do (so useful that it's why all SIMD ISAs use ~0, and why 1 for true was never an option for SIMD).
> (Whether that C code was allowed to make that assumption or not, it did.)
Citation needed. In C, using an integer in a logical operation yields false if the integer is 0, and true otherwise. Avoiding this is _super hard_, so I doubt your claim that C code is relying on this. Also, conversion to bool is handled by the compiler, which is required to produce the platform's false and true representations, and multiple true representations are allowed.
If you are serializing or deserializing raw bools, most such code packs 1 bit per bool. Code that serializes 4 bytes is mostly using integer types, and when reading those, testing against 0 is the most common thing to do. And code doing this needs to deal with endianness and the like anyway.
However, this ship has sailed for general purpose programming languages. 0 is false, all other values are interpreted as true, operations that create a bool create 1. As you say, that's just how the world works.
False = ~0 is just zany though.
That works perfectly fine if you write the type as an integer type and define 0 as "no error". You can't call that a C "bool" though.
In hardware, C's _Bool is just a scalar integer type (1 byte wide almost everywhere these days).
If you define 0 as false and everything else as true, you can emit much better machine code for all scalar comparisons. For example, for scalar == 0 or scalar != 0 (e.g. in null-pointer checks), the result is the scalar itself: the test is a no-op, and with a "branch on non-zero" instruction you can branch directly.
If you define true as some specific value, you need to actually test whether the scalar is zero and materialize that value otherwise. That goes from zero instructions to often two (e.g. if the hardware comparison returns 0xffff and you need to convert that to 1).
Correct. It was not my intent to imply otherwise. Like the person I replied to and I reiterated, that ship has sailed.
Welcome to VMS!
Has there been any discussion on this matter on risc-v mailing lists or somewhere? If the RISC-V Base ISA is now ratified, I think it is too late to change such things.
Edit: seems to be too late. According to the spec, the SLTx instructions "set the destination register to one or zero depending on whether the relation is true or not."
You can already do that: use XORI with immediate 1 to toggle a 0/1 boolean.
The ABI of your platform might require specific values for true and false, e.g., the x86-64 System V ABI explicitly requires 0 for false and 1 for true.
So you would need to define a new platform for doing these tests, and then to test some code, you would need to port it to this new platform. AFAICT, if you port that code correctly, everything would work, and if something doesn't work, then you didn't port that code correctly.
So this experiment feels moot.
> I'm almost certain the system won't boot.
Clang can't compile the Linux kernel, so I hope you mean some other kernel. Otherwise, without a kernel, the system won't even boot :P
Spoiler: Doom uses three values for bools: 0 for false, 1 for true, and -1 for not sure/unknown/error/something else.
However, you can achieve this trivially the normal way, by including simple instructions that select or combine vector inputs. If that's not a path they want to go down, it's also doable to take a slight performance hit and achieve the same with multiplication instructions.