I can’t help but wonder about the reliability and security of future software.
Given the insane complexity of software, I think people will inevitably and increasingly leverage AI to simplify their development work.
Nevertheless, will this new type of AI-assisted coding produce superior solutions, or will future software artifacts become operational time bombs waiting to unleash chaos on the world when their defects reveal themselves?
I also wonder whether, if code becomes so complex that humans can no longer understand it, there may be a way to fool LLMs into creating backdoors in the software.
For example, an open-source LLM is produced, used everywhere, and subtly inserts malicious code. I’m not saying this is happening now, but it could happen.
Then, when AGI comes along, the concern would shift to understanding the motivations of the AI and how well they align with human ethics.
Interesting times ahead.