False positives and negatives are unlikely to be improved significantly by AI. They mostly come down to a trade-off between catching more cases and making sure the ones you catch are real, plus the limitations of the test itself. Sure, AI might make some tests marginally better by interpreting more variables more reliably, but the gains will be marginal rather than the problem being solved.
For example, take PSA, a blood test for prostate cancer: it doesn't matter how much AI you throw at it, it's just not a great test. It's commonly elevated for reasons other than cancer, and it's normal in a significant percentage of prostate cancers. You just have to live with its limitations.
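To make the trade-off concrete, here's a minimal sketch with made-up numbers (not real PSA performance figures) showing why, at low disease prevalence, a test with imperfect sensitivity and specificity produces a lot of false positives no matter where you set the cutoff: loosen the threshold and you flag lots of healthy people, tighten it and you miss more cancers.

```python
# Illustrative only: hypothetical sensitivity/specificity/prevalence values,
# not actual PSA statistics.

def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (positive predictive value, negative predictive value)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)   # chance a positive result is a real case
    npv = true_neg / (true_neg + false_neg)   # chance a negative result is truly negative
    return ppv, npv

# A "loose" cutoff catches more cancers but flags many healthy men;
# a "strict" cutoff flags fewer healthy men but misses more cancers.
for label, sens, spec in [("loose threshold", 0.90, 0.60),
                          ("strict threshold", 0.60, 0.90)]:
    ppv, npv = predictive_values(sens, spec, prevalence=0.05)
    print(f"{label}: PPV = {ppv:.0%}, NPV = {npv:.0%}")
```

With these assumed numbers the loose cutoff gives a PPV of roughly 11% (nine out of ten positives are false alarms) while the strict cutoff raises PPV to about 24% but misses a larger share of real cancers. Smarter interpretation can shift you along that curve a bit; it can't escape it.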