It's not just about the OS; it's about the platform (hardware). Even with a fully-optimized RTOS on a standard PC platform, you might run into millisecond-scale latency due to non-maskable interrupts, cache misses, or bus contention (DMA bus-mastering kills any real-time guarantees). That's not even considering the stalls introduced by SMM (System Management Mode) interrupts or TPM operations.
I would think it would be difficult to match the low jitter of a simple architecture with a fixed cycle count per instruction, like AVR, on a superscalar, cached, speculative architecture like a modern x86 CPU. A hard RTOS is necessary, but perhaps not sufficient, on most x86 microarchitectures. Maybe you can disable some of that dynamic behavior via MSRs, but probably not all of it.