The idea came from a simple problem: most teams have lots of API endpoints, but almost no one has realistic coverage. Writing and maintaining test collections takes forever, and scripts always fall out of sync.
The goal is to give engineers a rough but honest API health picture in ~2 minutes — without maintaining test files or writing code.
A fun surprise: I pointed Rentgen at ChatGPT’s API and found a few issues we genuinely didn’t expect to see in production. They were fixed immediately after reporting.
I would really appreciate feedback from the community:
• What categories of tests are missing?
• Which edge cases do you usually find manually?
• What would make this useful in your workflow?
This hits a real pain point. I've been on teams with hundreds of endpoints but maybe 10% had any tests beyond "does it return 200?" — and those inevitably rot as the API evolves.
Generating tests from a single cURL is smart because it meets devs where they already are. We're already firing off cURL commands during development; turning that into a test suite with zero extra effort is a nice DX win.
A few edge cases I’d love to see:
Rate limit testing (verify proper 429s with Retry-After)
Idempotency checks for POST/PUT endpoints
Unicode edge cases: emoji, RTL characters, null bytes in strings
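For the rate-limit case in particular, the check could look something like this minimal sketch. The `Response` shape and the validation rules are my assumptions for illustration, not Rentgen's internals:

```python
from dataclasses import dataclass, field
from email.utils import parsedate_to_datetime

@dataclass
class Response:
    """Minimal stand-in for an HTTP response."""
    status: int
    headers: dict = field(default_factory=dict)

def check_throttled_response(resp: Response) -> list[str]:
    """Validate that a throttled response is a proper 429 with a usable Retry-After."""
    findings = []
    if resp.status != 429:
        findings.append(f"expected 429, got {resp.status}")
    retry_after = resp.headers.get("Retry-After")
    if retry_after is None:
        findings.append("429 response is missing Retry-After")
    elif not retry_after.isdigit():
        # RFC 9110 allows Retry-After to be delay-seconds or an HTTP-date.
        try:
            parsedate_to_datetime(retry_after)
        except (TypeError, ValueError):
            findings.append(f"Retry-After is neither seconds nor an HTTP-date: {retry_after!r}")
    return findings
```

The idea is to keep hammering the endpoint until the server throttles, then run every throttled response through a check like this.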
The ChatGPT API finding is a great proof point. Nothing sells a security tool better than “we found something in production at a major company.”
Will try this on some internal APIs this week.
Thanks for the thoughtful suggestions; those are spot on.
Rate-limit testing and proper 429/Retry-After handling are definitely on the roadmap. Idempotency checks for POST/PUT are a great call too: a lot of APIs behave unpredictably there, and it’s one of those areas people rarely test systematically. Unicode/emoji/RTL input fuzzing is a fun one. Rentgen already generates trimming/whitespace/negative cases, but expanding into more string-weirdness categories makes total sense.
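To make the string-weirdness idea concrete, here’s the rough shape I have in mind. These values are illustrative examples, not the actual generator’s output:

```python
# Illustrative "string weirdness" corpus; each entry targets a different
# class of input-handling bug.
STRING_EDGE_CASES = [
    "caf\u00e9",              # non-ASCII accented characters
    "\U0001F600\U0001F600",   # emoji (astral-plane code points)
    "\u202Eevil.txt",         # right-to-left override character
    "null\x00byte",           # embedded null byte
    "  padded  ",             # leading/trailing whitespace
    "a" * 10_000,             # very long string
    "",                       # empty string
]

def fuzz_string_fields(payload: dict) -> list[dict]:
    """Return one payload variant per (string field, edge case) pair."""
    variants = []
    for key, value in payload.items():
        if isinstance(value, str):
            for case in STRING_EDGE_CASES:
                variants.append({**payload, key: case})
    return variants
```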
If you end up trying it on any internal APIs this week, I’d genuinely love to hear what it catches. The tool often surprises me in places I didn’t expect.
I tried Rentgen on one of our internal APIs and was genuinely impressed with the results. The tool is surprisingly powerful: it quickly exposed issues we hadn’t noticed before. The interface is straightforward, and from a single request payload you can generate hundreds of tests, covering security, performance, and various edge-case scenarios. Overall, a very useful and practical tool. Great work!
Rentgen takes one cURL request and generates:
• boundary tests (min/max, out-of-range)
• enum variation tests
• invalid/negative input cases
• trimming/whitespace cases
• structure/mapping validation
• reflection safety checks
• missing/incorrect security headers
• basic latency/load insights
• automatic bug-report templates
• and many others.
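As a rough illustration of the boundary-test idea: given an example payload and per-field limits, the generator emits both in-range edges and out-of-range probes. The field names and limits here are assumptions for the sketch, not how Rentgen actually infers them:

```python
def boundary_variants(payload: dict, limits: dict) -> list[dict]:
    """Given limits as field -> (min, max), emit edge and out-of-range payloads."""
    variants = []
    for fld, (lo, hi) in limits.items():
        if fld in payload:
            # Two in-range edges plus two out-of-range probes per bounded field.
            for probe in (lo, hi, lo - 1, hi + 1):
                variants.append({**payload, fld: probe})
    return variants
```

Each variant is then sent to the endpoint, and the response is checked against the expected accept/reject behaviour.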
GitHub: https://github.com/LiudasJan/Rentgen
Happy to answer anything about the engine design, how the generator works, reflection detection, or upcoming performance modules.