Hacker News
Show HN: Magnify.dev – AI Code Security Scanner
2 points by CoolRequirement 44 days ago | 2 comments
Hi all! I'm looking for some wisdom on a side project I started out of interest. Magnify (magnify.dev) is an AI-powered code vulnerability scanner that aims to be simple, affordable, and automated. You upload a .zip file of your code and it returns a PDF report of potential security issues. I'm looking for opinions on a few things:

Would you use a code security scanning service? Do you value code security, or am I talking to a wall here?

What would be a fair price? I've currently priced it at 5 CAD (~3.6 freedom dollars) per 1k lines of code, but I'm willing to go quite a bit lower. What would you pay to have 1k lines of code scanned by AI?

What other features would interest you? An API, CI integration, etc.?

I'm hoping I can get some valuable opinions! Also, for the next 72 hours, I'll do scans for free so you can all try a demo: select the paid option and upload your code as a .zip file, but when you get to the payment page, just close it. Contact support with the email you used and I'll approve the task manually and send you a link to the analysis (it might take a few hours; I'm pretty busy during the day).

I genuinely believe the affordability of my project makes it useful -- professional code security audits often run ~1k USD per 1k lines of code, so Magnify, although not as effective (yet), is literally hundreds of times cheaper. Thank you all!




I think all of your questions are contingent on how good the tool is.

FWIW I have built a POC of this for my consulting company, to help identify possible vulnerabilities. With or without proper context, LLMs hallucinate vulnerabilities -- both plausible-sounding ones and outright false positives -- that you then have to check manually. That would be fine, except it works like a DDoS on your review time: you end up checking hundreds of findings for every thousand lines of code.

I am not aware of a good way to get LLMs to rank their reported vulnerabilities, so they often surface ones that are really unlikely while missing more complicated or nuanced ones.
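One partial workaround is to do the ranking outside the model: ask it to self-report a severity and a confidence per finding, then score and filter in plain code. A minimal sketch, assuming a hypothetical finding schema (severity/confidence fields are my invention, not anything the parent's POC or Magnify actually emits):

```python
# Hypothetical triage layer for LLM-reported findings.
# The Finding schema and the scoring weights are assumptions for illustration.
from dataclasses import dataclass

SEVERITY_WEIGHT = {"critical": 4, "high": 3, "medium": 2, "low": 1}

@dataclass
class Finding:
    file: str
    line: int
    rule: str          # e.g. "sql-injection"
    severity: str      # "critical" | "high" | "medium" | "low"
    confidence: float  # model's self-reported confidence, 0.0-1.0

def triage(findings, min_score=1.5):
    """Rank findings by severity x confidence; drop low-signal ones."""
    def score(f):
        return SEVERITY_WEIGHT[f.severity] * f.confidence
    kept = [f for f in findings if score(f) >= min_score]
    return sorted(kept, key=score, reverse=True)

findings = [
    Finding("app.py", 42, "sql-injection", "critical", 0.9),
    Finding("app.py", 7, "hardcoded-secret", "low", 0.3),
    Finding("auth.py", 15, "weak-hash", "medium", 0.8),
]
for f in triage(findings):
    print(f"{f.file}:{f.line} {f.rule}")
```

This doesn't make the model's confidence numbers trustworthy, but it at least caps the triage load by pushing the hundreds of marginal findings below a cutoff instead of into the report.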


Good point, thanks! Do you think LLMs will improve over time? As a POC (which is roughly where Magnify is now), there are a lot of false positives, and while it won't replace professionals, I think LLMs can already provide a baseline level of security review, and that should get better over time.



