False positives undermine support for tools, LLM or not. I suspect that's what's going to kill off lots of these products, as humans endlessly chase after LLM hallucinations, much like the very early days of virus scanners that flagged every disk write as something needing human approval. While it's possible AI security tools can discover patterns of infiltration behavior that a human would overlook, that doesn't do much good if the reports are buried under mountains of intrusion fantasies. But lots of money will be made before decision makers realize that the vendor promises don't come to fruition and headcount has to increase to manage use of the tools.
Right, I specifically mentioned LLMs because they're not the correct tool for the job. There are plenty of application-specific models much better suited to the task, and hundreds of terabytes of logs they can be trained on.
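For illustration, here's a minimal sketch of what an application-specific model could look like here: unsupervised anomaly detection over log-derived features. The feature names and values are hypothetical placeholders, and this uses scikit-learn's IsolationForest as a stand-in, not any particular vendor's approach:

    # Minimal sketch: unsupervised anomaly detection over log-derived
    # features. Features and values below are hypothetical placeholders.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical per-event features parsed from logs:
    # [bytes_out, distinct_ports, failed_logins, off_hours (0/1)]
    baseline = np.array([
        [1200, 2, 0, 0],
        [900,  1, 0, 0],
        [1500, 3, 1, 0],
        [1100, 2, 0, 1],
    ])

    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(baseline)

    # predict() returns -1 for anomalies, 1 for inliers.
    new_events = np.array([
        [1000,  2,  0, 0],   # looks like baseline traffic
        [90000, 45, 12, 1],  # exfil-like burst: many ports, failed logins
    ])
    print(model.predict(new_events))  # e.g. [ 1 -1 ]

The point is just that a small, purpose-built detector with an explicit false-positive budget (the contamination parameter) is a more natural fit for this task than prompting an LLM over raw logs.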