
> look over your code and understand and explain where you might have bugs.

This would certainly be interesting. I'm not aware of active research going on in this area (any pointers would be helpful!).

This would require an agent to have a thorough understanding of the logic you're trying to implement and to locate the piece of code where it silently fails. For that you'd again need a training dataset where the input is a piece of code and the supervision signal (the output) is the location of the bug. I could imagine some sort of self-supervision to tackle this initially, where you'd intentionally introduce bugs in your code to generate training data. But I'm not sure how far this can go!
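
A minimal sketch of that self-supervision idea, assuming simple operator swaps as the injected bugs (all names here are made up for illustration). The input is clean source; the label is the line where the bug was planted:

    import ast
    import random

    # Operator swaps that tend to produce plausible, silent bugs.
    SWAPS = {ast.Add: ast.Sub, ast.Lt: ast.LtE, ast.Gt: ast.GtE}

    def make_buggy_example(source: str, seed: int = 0):
        """Return (buggy_source, bug_lineno), or None if nothing is mutable."""
        tree = ast.parse(source)
        rng = random.Random(seed)
        candidates = [n for n in ast.walk(tree) if type(n) in SWAPS]
        if not candidates:
            return None
        victim = rng.choice(candidates)
        replacement = SWAPS[type(victim)]()
        # Swap the operator in place; the parent node (BinOp/Compare)
        # carries the line number we use as the supervision label.
        for parent in ast.walk(tree):
            for field, value in ast.iter_fields(parent):
                if value is victim:
                    setattr(parent, field, replacement)
                    return ast.unparse(tree), parent.lineno
                if isinstance(value, list) and victim in value:
                    value[value.index(victim)] = replacement
                    return ast.unparse(tree), parent.lineno
        return None

    clean = "def clamp(x, lo, hi):\n    if x < lo:\n        return lo\n    return x if x < hi else hi\n"
    print(make_buggy_example(clean))  # e.g. a variant with 'x <= lo' plus line 2

Whether models trained on synthetic mutations transfer to organic, human-written bugs is exactly the open question, but it's a cheap way to bootstrap a dataset.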




1. Generate test cases from function/class/method definitions.

2. Generate test cases from fuzz results.

3. Run the tests and walk outward from the symbols around the relevant stack trace frames (line numbers).

4. Mutate and run the test again (steps 2-4 are sketched after this list).

...
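
A rough pure-Python sketch of how steps 2-4 could fit together (the target function, its planted bug, and the single bit-flip mutator are all stand-ins):

    import random
    import traceback

    def target(data: bytes) -> None:
        # Toy system under test with a planted "silent boundary" bug.
        if len(data) > 2 and data[0] > 0x7F:
            raise ValueError("mishandled high byte")

    def mutate(data: bytes, rng: random.Random) -> bytes:
        # Flip one random bit; real fuzzers mix many mutation strategies.
        if not data:
            return bytes([rng.randrange(256)])
        i = rng.randrange(len(data))
        return data[:i] + bytes([data[i] ^ (1 << rng.randrange(8))]) + data[i + 1:]

    def fuzz(rounds: int = 10_000, seed: int = 0):
        rng = random.Random(seed)
        corpus = [b"", b"hello"]   # seed inputs, e.g. derived from definitions
        failures = []
        for _ in range(rounds):
            data = mutate(rng.choice(corpus), rng)   # step 4: mutate and rerun
            try:
                target(data)
                corpus.append(data)   # passing inputs grow the corpus
            except Exception as e:
                # Steps 2/3: keep the failing input plus the innermost stack
                # frame as the starting point to walk outward from.
                tb = traceback.extract_tb(e.__traceback__)[-1]
                failures.append((data, tb.filename, tb.lineno))
        return failures

    for data, filename, lineno in fuzz()[:3]:
        print(f"{data!r} fails at {filename}:{lineno}")

Each failure gives you exactly the (code, bug location) pair the parent comment wants as a supervision signal.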

Model-based Testing (MBT) https://en.wikipedia.org/wiki/Model-based_testing

> Models can also be constructed from completed systems
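
In that spirit, here's a hand-rolled model-based test: drive the system under test and a trivially-correct model with the same random operations and compare whatever is observable. BoundedStack and its planted off-by-one are made up for the demo:

    import random

    class BoundedStack:
        """System under test: a stack capped at `cap` items."""
        def __init__(self, cap: int):
            self.cap = cap
            self._items = []

        def push(self, x) -> bool:
            if len(self._items) <= self.cap:   # bug: should be strict <
                self._items.append(x)
                return True
            return False

        def pop(self):
            return self._items.pop() if self._items else None

    def run_model_test(rounds: int = 1_000, seed: int = 0) -> None:
        rng = random.Random(seed)
        sut = BoundedStack(cap=4)
        model = []   # a plain list serves as the trusted model
        for step in range(rounds):
            if rng.random() < 0.6:   # random walk over push/pop
                x = rng.randrange(100)
                expected = len(model) < sut.cap
                accepted = sut.push(x)
                if accepted:
                    model.append(x)
                assert accepted == expected, f"push diverged at step {step}"
            else:
                expected = model.pop() if model else None
                assert sut.pop() == expected, f"pop diverged at step {step}"

    run_model_test()   # trips the assertion: the off-by-one admits a 5th item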


> I'm not aware of active research going on in this area (any pointers would be helpful!).

Look at the Clang Static Analyzer. Xcode integrates it well.
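
It's exposed through the ordinary clang driver, so it's easy to poke at. A small hypothetical harness (the C snippet and temp-file plumbing are made up; `clang --analyze` is the real entry point, and diagnostics arrive on stderr like normal warnings):

    import os
    import subprocess
    import tempfile

    # Made-up snippet with the kind of bug the default checkers flag.
    BUGGY_C = r"""
    #include <stdlib.h>
    int main(void) {
        int *p = malloc(sizeof *p);
        if (p == NULL) return 1;
        free(p);
        return *p;   /* use after free */
    }
    """

    def analyze(source: str) -> str:
        with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
            f.write(source)
            path = f.name
        try:
            # `clang --analyze` runs the static analyzer instead of compiling
            # (it may also drop a report file in the working directory).
            result = subprocess.run(["clang", "--analyze", path],
                                    capture_output=True, text=True)
            return result.stderr
        finally:
            os.unlink(path)

    print(analyze(BUGGY_C))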



