Unfortunately, cyber attacks are an application that AI models should excel at. Mistakes that would be serious defects in normal software merely waste the attacker's resources, and it's often straightforward to verify directly whether an attack actually succeeded.
Meanwhile, AI coding seems likely to result in more security bugs being introduced into systems.
Maybe there's some story where everyone finds the security bugs with AI tools before the bad guys do, but I'm not very optimistic about how this will work out...
There are an infinite number of ways to write insecure or broken software. The number of ways to write correct, secure software is vanishingly small compared to the size of that problem space. Even AI tools don't stand a chance against odds like that.
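To make that concrete, here's a minimal sketch (the table, inputs, and function names are invented purely for illustration) of how many superficially "working" variants of the same code are insecure, while the safe form is a narrow target: a string-interpolated SQL query versus the parameterized one.

```python
import sqlite3

# Hypothetical schema and data, just for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

def lookup_insecure(name: str):
    # One of countless string-building variants that pass casual testing
    # with benign input but are injectable.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def lookup_secure(name: str):
    # The parameterized form: the narrow target that is actually safe.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(lookup_insecure("alice"))        # looks fine with benign input
print(lookup_insecure("' OR '1'='1"))  # injection: returns every row
print(lookup_secure("' OR '1'='1"))    # safe: returns nothing
```

The point isn't this specific bug class, just that the insecure variants vastly outnumber the secure ones, and nothing about plausible-looking generated code pushes it toward the secure ones by default.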