Because it's not just CUDA by itself; it's the whole ecosystem.
Most AI developers don't actually use CUDA directly; they use libraries like PyTorch that call CUDA under the hood to run work on the GPU in parallel.
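A minimal sketch of what that indirection looks like in practice: typical PyTorch code never touches CUDA APIs directly, which is exactly why the backend underneath matters so much.

```python
import torch

# Device-agnostic setup: the same code targets an NVIDIA GPU via CUDA
# if one is present, and silently falls back to CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(2, 3, device=device)
y = torch.randn(3, 4, device=device)

# The matmul is dispatched to a CUDA kernel (cuBLAS) on GPU,
# or to a CPU kernel otherwise; the user-facing code is identical.
z = x @ y
print(z.shape)
```

Any would-be CUDA replacement has to slot in behind this same interface, across every operator PyTorch supports, before most developers would even notice it exists.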
CUDA is pretty much the standard and is supported anywhere it's relevant.
Just creating an alternative is pretty meaningless if it isn't actually supported anywhere.
Adding support isn't easy, and there are also stability issues, bugs, etc. People want something that works and is reliable (= CUDA, since it's battle-tested).
That's the same flawed argument people have used to predict Huawei would replace Android or EUV.
China has an extremely large but fragmented community of AI researchers and developers, and CUDA competitors there face the same hurdles as those in the West.
They're the ones writing most of the open source AI code.