It sounds fine as long as you can fully trust the AI to do good work, right?
I don't think there's any current AI that is fully trustworthy this way though.
I wouldn't even put them at 50% trustworthy
I think we are going to see a plateau where they become 80% good, and every tiny bit of improvement past that point will be exponentially more difficult and expensive to achieve. I don't think we'll reach 100% reliable AI in any of our lifetimes.
I think we are going to reach a cliff where a certain type of old-school developer keeps saying, "it just can't write code like I can," while at the same time wondering why they can't land a job.
Current AI is likely already beyond 50% trustworthiness, whatever that means.
> "it just can't write code like I can" while at the same time wondering why they can't land a job
People had this same prediction about offshore development. Those old-school devs are able to find well-paying work fixing broken software churned out by overseas code sweatshops.
I predict that if you can read and understand code without the help of AI models, you will be in even higher demand, fixing the endless broken software built by AI-assisted coders who cannot function without that help.