Stop leaking user data to OpenAI/Claude/Gemini (risk-mirror.vercel.app)
1 point by Raviteja_ 37 days ago | 4 comments


Hey all,

I'm the creator of the PII Firewall Edge API (currently on RapidAPI).

I saw a lot of devs struggling to implement safety guardrails correctly—most were just using basic regex or heavy LLMs that hallucinate.

So, I decided to package my API into a full-featured UI/Toolkit called Risk Mirror.

What it does: It sits between your users and your LLM (OpenAI/Anthropic) and strips out sensitive data before it leaves your server.

The Tech (Zero AI Inference): Instead of asking an LLM "is this safe?", I use:

* 152 PII Types: My custom engine covers everything from US Social Security numbers to Indian Aadhaar cards and HIPAA identifiers.

* Shannon Entropy: To detect high-entropy strings (API keys, passwords) that regex misses.

* Deterministic Rules: 100% consistency. No "maybe."

Why use this?

* It's Tested: The underlying API engine is already battle-tested.

* It's Fast: <10ms latency.

* Includes a 'Twin Dataset' generator for data scientists (redact CSVs securely).

Feedback welcome!
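For anyone curious how entropy-based secret detection works in principle, here is a minimal sketch. The function names and the `4.0` bits-per-character threshold are my own illustrative choices, not the actual Risk Mirror engine:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy of s in bits per character."""
    if not s:
        return 0.0
    n = len(s)
    # Sum -p * log2(p) over the observed character frequencies.
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

def looks_like_secret(token: str, min_len: int = 20, threshold: float = 4.0) -> bool:
    """Flag long, high-entropy tokens (API keys, passwords) that fixed regexes miss."""
    return len(token) >= min_len and shannon_entropy(token) > threshold
```

A random-looking 30+ character key scores near 5 bits/char and trips the threshold, while ordinary English prose sits around 3-4 and short words are excluded by the length check.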


Stop burning $20/mo on Claude credits. I unlocked the Prompt Optimizer for free.

If you're hitting the message cap on Claude/Cursor, you're sending too much fluff. "Please," "Thank you," and verbose context are eating 30% of your token budget.

I built Risk Mirror to mathematically compress prompts (removing filler, preserving logic).
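To illustrate the idea (not the actual Risk Mirror rules, which aren't public), here is a toy compressor. The `FILLER` phrase list is a hypothetical example of the kind of polite padding that gets stripped:

```python
import re

# Hypothetical filler phrases for illustration; a real rule set would be far larger.
FILLER = [
    r"\bplease\b",
    r"\bthank you\b",
    r"\bkind regards\b",
    r"\bif you don't mind\b",
    r"\bcould you\b",
]

def compress_prompt(prompt: str) -> str:
    """Strip polite filler and collapse whitespace, preserving the request itself."""
    out = prompt
    for pattern in FILLER:
        out = re.sub(pattern, "", out, flags=re.IGNORECASE)
    # Collapse runs of whitespace left behind by the removals.
    return re.sub(r"\s+", " ", out).strip()
```

Because the rules are deterministic string operations, the same prompt always compresses to the same output, unlike an LLM-based rewriter.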

It’s usually a Pro feature, but is completely free for now while I benchmark compression rates.

Free Tools Included:

* Prompt Optimizer (Save 40% tokens)

* Safe Share (Redact PII from LOGS/Text instantly)

* Risk Scanner (Check prompts before pasting)

* Clarity Analyzer (Fix vague inputs)

Grab it before I have to close the free tier.


If your Cursor/Claude credits vanish fast, it’s probably prompt bloat.

Polite filler + repeated context + messy JSON = wasted tokens.

Risk Mirror compresses prompts 20–40% without changing meaning.

More credits. Same results.

Try free: https://risk-mirror.vercel.app


Every time you paste a stack trace into ChatGPT, you might be leaking:

- User session tokens

- Database connection strings

- API keys from env variables

I built Risk Mirror to scan and redact sensitive data BEFORE it touches any AI.

It's deterministic (no AI used for scanning because that would defeat the purpose).
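The deterministic approach can be sketched like this. These three patterns are illustrative examples I wrote for this comment, not the production engine's 152-type rule set:

```python
import re

# Illustrative patterns only; a production engine needs far broader coverage.
PATTERNS = {
    "API_KEY":  re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "CONN_STR": re.compile(r"\b\w+://[^\s:@]+:[^\s@]+@[^\s]+"),
    "JWT":      re.compile(r"\beyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\b"),
}

def redact(text: str) -> str:
    """Replace each match with a labeled placeholder before text leaves the server."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text
```

Running `redact()` over a stack trace scrubs credentials like `postgres://admin:secret@host/db` or an `sk-...` key, and the labeled placeholders keep the log readable for debugging.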

Feedback welcome!



