The best vulnerability is one that is hard to detect because it looks like a bug. It's not inconceivable to train an LLM to silently slip vulnerabilities into generated code, and someone without much programming experience is unlikely to catch them.
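To illustrate the kind of thing I mean, here's a hypothetical sketch in C (the function name and buffer size are made up): a one-character slip in a bounds check that reads like a routine off-by-one bug but gives an attacker a one-byte overflow.

```c
#include <string.h>

#define BUF_SIZE 64

/* Hypothetical example: copies an attacker-supplied name into a
 * fixed-size buffer. The check uses `<=` instead of `<`, so when
 * len == BUF_SIZE the NUL terminator is written one byte past the
 * end of dst -- an easy change to skim past as an ordinary bug. */
int copy_name(char dst[BUF_SIZE], const char *src, size_t len)
{
    if (len <= BUF_SIZE) {      /* should be: len < BUF_SIZE */
        memcpy(dst, src, len);
        dst[len] = '\0';        /* writes dst[BUF_SIZE] when len == BUF_SIZE */
        return 0;
    }
    return -1;
}
```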
tl;dr it takes running untrusted code to a new level.
This is ultimately why I believe Microsoft and Apple will be the big winners. I suspect a lot of companies will want Microsoft and Apple to sign off on things, and Microsoft and Apple are going to make sure they get their cut. We may need a new layer above existing operating systems in the future to safeguard things.
Meh. Why would the model makers not take security seriously? The motivation not to be the company known to "silently slip vulnerabilities into generated code" seems fairly obvious.
People have always been able to slip in errors. I am confused why we assume that an LLM will, on average, be worse rather than better on this front, and I suspect a lot of residual human bias and copium.