The code is checked when it is pushed to the main repository... The code and the bug already exist; the developer is responsible for that.
Also, if you look at the YouTube video, you can see that 60% + 30% = 100% : https://www.youtube.com/watch?v=I5C4FUvDyCc&t=50s
I was hoping for some kind of godlike AI to help catch bugs before I write my code; now I'm disappointed...
They said it catches 60% of bugs and has 30% false alarms. Those are different measurements, the sum of true positives and false positives doesn't need to add up to 100%.
Anyway, 60% of the time, it works every time!
They did mention it does other things, such as suggesting improvements, so maybe it does provide more tangible value in those ways. Hard to say without more information!
No. Recall the wording:
"They said it catches 60% of bugs and has 30% false alarms. Those are different measurements, the sum of true positives and false positives doesn't need to add up to 100%."
Hence (nitpicking perhaps): 100% - 60% = 40% of the bugs (by their estimate) are not caught (false negatives).
The rate of false positives is 30%, that is, if a particular code excerpt has no bugs it might still be tagged as buggy.
Thus the final error rate, I guess, is (actual probability of a bug) × 40% + (1 - actual probability of a bug) × 30%.
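The back-of-envelope combination above is easy to sanity-check with a few lines of Python. The 10% bug prevalence in the example is an illustrative assumption, not a number from the thread:

```python
def overall_error_rate(p, recall=0.60, false_positive_rate=0.30):
    """Probability the tool mislabels a random code excerpt.

    Misses: the excerpt is buggy (prob. p) and the tool fails to
    catch it (prob. 1 - recall).
    False alarms: the excerpt is clean (prob. 1 - p) and the tool
    flags it anyway (prob. false_positive_rate).
    """
    misses = p * (1 - recall)
    false_alarms = (1 - p) * false_positive_rate
    return misses + false_alarms

# Example: if 10% of excerpts are buggy, the tool is wrong on
# 0.1 * 0.4 + 0.9 * 0.3 = 31% of excerpts.
print(round(overall_error_rate(0.10), 4))  # 0.31
```

Note that the false-alarm term dominates whenever bugs are rare, which is why a 30% false-positive rate is so painful in practice.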
But for application-level bugs, we're SOL... And luckily so, otherwise we would be jobless in a few years or so.
I don't think so. You know those painful conversations with people who "have a great idea", but can't actually make it? It's not because they don't have technical skills (although they don't) - it's because they're not trained in thinking to the level of detail that implementation actually requires.
Someone needs to "program the AI" (whatever that ends up meaning). That'll still be us, even if it doesn't require any code, because at some point that's what code actually is: telling something what you want it to do.
It's "the same" leap as from machine code to compilers, domain-specific languages, garbage collection, etc. You describe what you want to happen at some level of abstraction, and something interprets that to actually make it happen.
The level of detail you, as the programmer, have to directly describe goes down over time, but you still deal with far more details than most other people.
Maybe it's not such a bad thing if AI is not godlike.
Fascinating. Meanwhile, watch Fred (a friendly AI in Far Cry 5) fly the player directly over a known enemy outpost, crash the helicopter, then later crouch immediately in front of the player when the player is trying to stalk the outpost - to the point where the weakness of the AI becomes the main talking point of the video:
Totally agreed that one day this will make a difference, but in the meantime: game AI is still game AI.
Whenever a game has an absolutely heinous component to it, it’s clear that that particular bit was not playtested.
To be fair, if I get a message along the lines of "There's a 0.345 chance that this line will cause a bug", I'll want to know why the line will lead to a bug, which is going to be a lot harder to do than just throwing up a statistic.