Hacker News new | comments | show | ask | jobs | submit login
Ubisoft is using AI to catch bugs in games before devs make them (wired.co.uk)
40 points by helloworld 7 months ago | hide | past | web | favorite | 22 comments



The title is misleading!

The code is checked when it is pushed to the main repository... The code and the bug already exist; the developer is responsible for that.

Also, if you look at the YouTube video, you can see that 60% + 30% = 100%: https://www.youtube.com/watch?v=I5C4FUvDyCc&t=50s

I was hoping for some kind of godlike AI to help catch bugs before I write my code, now I'm disappointed...


> Also, if you look at the YouTube video, you can see that 60% + 30% = 100%

They said it catches 60% of bugs and has 30% false alarms. Those are different measurements, the sum of true positives and false positives doesn't need to add up to 100%.


Thank you for the clarification.

Anyway, 60% of the time, it works every time!


What it does mean is that unless 1/3 or more of your code is buggy, it will report more false positives than real bugs.
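That 1/3 threshold can be checked with some quick arithmetic. This is just an illustrative sketch using the 60% catch rate and 30% false-alarm rate quoted in the thread (interpreted as true-positive rate and false-positive rate); the function name and the 1000-excerpt sample are made up for the example:

```python
# Assumed figures from the video: 60% of real bugs are flagged
# (true-positive rate), 30% of clean excerpts are wrongly flagged
# (false-positive rate).
TPR = 0.60
FPR = 0.30

def flagged_counts(bug_rate, n=1000):
    """Expected true positives and false positives among n code excerpts,
    given the fraction of excerpts that actually contain a bug."""
    buggy = n * bug_rate
    clean = n * (1 - bug_rate)
    return buggy * TPR, clean * FPR

# Break-even point: bug_rate * 0.6 == (1 - bug_rate) * 0.3, i.e. bug_rate == 1/3.
tp, fp = flagged_counts(1 / 3)
print(tp, fp)  # equal: the flags are half real bugs, half noise

# At a (hypothetical) 10% bug rate, noise dominates:
tp, fp = flagged_counts(0.10)
print(tp, fp)  # roughly 60 true positives vs 270 false positives
```

So below a one-in-three bug rate, most of what the tool flags is noise, which is the point being made above.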


Yeah, viewed as a binary classifier, that accuracy is not super impressive imo. It could just be learning statistical patterns for code that is more likely to have issues, rather than identifying actual bugs.

They did mention it does other things, such as suggesting improvements, so maybe it provides more tangible value in those ways. Hard to say without more information!


So you're saying that 10% of time it says there's no bug? ;)


> So you're saying that 10% of time it says there's no bug? ;)

No. Recall the wording: "They said it catches 60% of bugs and has 30% false alarms. Those are different measurements, the sum of true positives and false positives doesn't need to add up to 100%."

Hence (nitpicking perhaps): 100% - 60% = 40% of the bugs (by their estimate) are not caught (false negatives). The rate of false positives is 30%; that is, if a particular code excerpt has no bugs, it might still be tagged as buggy.

Thus the final error rate, I guess, is (actual probability of a bug) x 40% + (1 - (actual probability of a bug)) x 30%.
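That formula is easy to sanity-check in a few lines. A minimal sketch, assuming "error" means either a missed bug (100% - 60% = 40% of buggy excerpts) or a false alarm (30% of clean excerpts); the function name and sample bug rates are invented for illustration:

```python
def error_rate(p, miss_rate=0.40, false_alarm_rate=0.30):
    """Overall misclassification rate for a true bug rate p:
    missed bugs (p * 40%) plus false alarms ((1 - p) * 30%)."""
    return p * miss_rate + (1 - p) * false_alarm_rate

# A few hypothetical bug rates:
for p in (0.01, 0.10, 0.50):
    print(f"bug rate {p:.0%}: overall error rate {error_rate(p):.1%}")
```

Note the curve is quite flat: the 30% false-alarm rate dominates unless bugs are very common.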


If the "godlike AI" could find bugs before you wrote the code, it may as well just write the code instead of you.


It might be interesting to apply machine learning to already-written code, correlating common mistakes with specific developers, so that the IDE could warn a developer when they're entering some area or process where they tend to make mistakes.


The problem with that is you get an opaque metric that's not very helpful; see: https://news.ycombinator.com/item?id=16569767


I mean, many elementary code smells (e.g. those related to null references) probably pop up over and over again in a sufficiently large code base, so an AI could reasonably be trained to detect them and thus prevent some of the bugs that arise from them.

But for application-level bugs, we're SOL... And luckily so, otherwise we would be jobless in a few years or so.


> And luckily so, otherwise we would be jobless in a few years or so.

I don't think so. You know those painful conversations with people who "have a great idea", but can't actually make it? It's not because they don't have technical skills (although they don't) - it's because they're not trained in thinking to the level of detail that implementation actually requires.

Someone needs to "program the AI" (whatever that ends up meaning). That'll still be us, even if it doesn't require any code, because at some level that's what code actually is: telling something what you want it to do.

It's "the same" leap as from machine code to compilers, domain-specific languages, garbage collection, etc. You describe what you want to happen at some level of abstraction, and something interprets that to actually make it happen.

The level of detail you, as the programmer, have to directly describe goes down over time, but you still deal with far more details than most other people.


An analogy here is NP problems - verifying a solution can be much easier than finding one, so it might be a lot easier to detect bugs than to generate bug-free code. Compilers actually do this on a regular basis, detecting bugs like typos and type errors.


I think the joke here is the implication that the AI can guess your code. But yeah, it would be hilarious if there were a complexity class for predicting what the problem solver is going to write.


> I was hoping for some kind of godlike AI to help catch bugs before I write my code, now I'm disappointed...

Maybe it's not such a bad thing if AI is not godlike.


Minority Report for software developers - sounds like a good recruiting-startup idea.


> When a player encounters a non-player character in Far Cry 5, two systems are at work: trust and morale. If you raise your weapon at someone you've never met before, they will react with distrust or fear, warning you to lower your gun. If the NPC recognises a lingering threat from you, it will launch an attack of its own, fearing for its own 'life'. When facing a group of enemies, as you pick off members of a gang, individual foes may realise they're outclassed, lose their thirst for combat, and attempt to flee as they see their 'friends' taken out. Elsewhere, animal companions will respond to player activity, cowering close to the ground unprompted when you crouch into stealth, for instance. It's the sort of work that adds depth and realism to the world.

Fascinating. Meanwhile, watch Fred (a friendly AI in Far Cry 5) fly the player directly over a known enemy outpost, crash the helicopter, then later crouch immediately in front of the player when the player is trying to stalk the outpost - to the point where the weakness of the AI becomes the main talking point of the video:

https://youtu.be/iauG9h6N-PQ?t=377

Totally agree that one day this will make a difference, but in the meantime: game AI is still game AI.


Programmed at 5am by some poor soul on crunch mode, after an 80-hour work week and 3 hours of sleep.

Whenever a game has an absolutely heinous component to it, it’s clear that that particular bit was not playtested.


It seems to contrast with the messaging from Ubisoft, though. Why doesn't the NPC fear the cult symbol all over the outpost? Why don't they fear the line of sight from the player's weapon?


TBH the behavior described in the quote has been around in various forms since at least Elder Scrolls: Oblivion (2006). It's interesting and (arguably) immersive, but almost always has unintended and sometimes hilarious side effects. For example, Kingdom Come: Deliverance (released last month) received much praise for letting the player start out as an average unknown guy... but after a while in game you build up enough reputation with some towns that setting foot in one leads to everyone in sight yelling greetings at the player. It also interacts in interesting ways with cutscenes [1] and the "respond to an armed player" behavior [2].

[1]: https://www.youtube.com/watch?v=vF_KxC_cOUU

[2]: https://www.youtube.com/watch?v=kuY2x_Dxr7Y


>> "The fact that when you show a programmer statistics that say 'hey, apparently you're making a bug!', you want him or her [to realise] that it's a tool to help and go faster."

To be fair, if I get a message along the lines of "There's a 0.345 chance that this line will cause a bug", I'll want to know why the line will lead to a bug - which is going to be a lot harder to provide than just throwing up a statistic.


If only there was some sort of way to statically encode and check your logic...oh wait.



