
3 Tools you are overpaying for:

- Prompt Compressor ($10/mo)

- PII Redactor ($20/mo)

- Fake Data Gen ($15/mo)

Risk Mirror does all 3 in the browser.

The Text Suite is $0 right now.

No card required. No logs kept.

Try it for free. Feedback would be great!


Every time you paste a stack trace into ChatGPT, you might be leaking:

- User session tokens

- Database connection strings

- API keys from env variables

I built Risk Mirror to scan and redact sensitive data BEFORE it touches any AI.

It's deterministic (no AI used for scanning because that would defeat the purpose).
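For the curious, the deterministic approach is essentially an ordered table of rules. A minimal TypeScript sketch (the two patterns below are illustrative examples, not the actual rule set):

    // Deterministic redaction pass: plain regex rules, no model calls.
    // Both patterns are illustrative, not Risk Mirror's real rules.
    const RULES: Array<[RegExp, string]> = [
      [/postgres:\/\/\S+:\S+@\S+/g, "[REDACTED_DB_URL]"],            // connection strings
      [/\b(?:sk|ghp|AKIA)[A-Za-z0-9_-]{16,}\b/g, "[REDACTED_KEY]"],  // key-shaped tokens
    ];

    function redact(text: string): string {
      return RULES.reduce((out, [re, label]) => out.replace(re, label), text);
    }

    console.log(redact("DB_URL=postgres://admin:hunter2@db.internal:5432/prod"));
    // => "DB_URL=[REDACTED_DB_URL]"

Same input, same output, every time; that's the point of skipping the model.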

Feedback welcome!


Stop burning $20/mo on Claude credits. I unlocked the Prompt Optimizer for free.

If you're hitting the message cap on Claude/Cursor, you're sending too much fluff. "Please," "thank you," and verbose context are eating 30% of your token budget.

I built Risk Mirror to mathematically compress prompts (removing filler, preserving logic).

It’s usually a Pro feature, but is completely free for now while I benchmark compression rates.
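To give a flavor of what "removing filler, preserving logic" means, here's a toy sketch (the phrase list is made up; the real optimizer is more involved):

    // Toy filler stripper: drops courtesy phrases that cost tokens but add no signal.
    // The phrase list is illustrative, not the optimizer's actual dictionary.
    const FILLER: RegExp[] = [
      /\b(?:please|kindly|thank you|thanks in advance)\b[,.!]?\s*/gi,
      /\b(?:could you possibly|i was wondering if you could)\b\s*/gi,
    ];

    function stripFiller(prompt: string): string {
      return FILLER.reduce((out, re) => out.replace(re, ""), prompt).trim();
    }

    const before = "Please, could you possibly refactor this function? Thanks in advance!";
    console.log(stripFiller(before)); // "refactor this function?"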

Free Tools Included:

* Prompt Optimizer (Save 40% tokens)

* Safe Share (Redact PII from logs/text instantly)

* Risk Scanner (Check prompts before pasting)

* Clarity Analyzer (Fix vague inputs)

Grab it before I have to close the free tier.


If your Cursor/Claude credits vanish fast, it’s probably prompt bloat.

Polite filler + repeated context + messy JSON = wasted tokens.

Risk Mirror compresses prompts 20–40% without changing meaning.

More credits. Same results.
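One concrete example from the "messy JSON" bucket:

    // Pretty-printed JSON pasted into a prompt carries indentation you pay for.
    // Re-serializing without whitespace is lossless and often a big chunk of the saving.
    const pretty = JSON.stringify({ user: "alice", roles: ["admin", "dev"] }, null, 2);
    const minified = JSON.stringify(JSON.parse(pretty));
    console.log(pretty.length, "->", minified.length); // same data, fewer tokens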

Try free: https://risk-mirror.vercel.app


Hey all,

I'm the creator of the PII Firewall Edge API (currently on RapidAPI).

I saw a lot of devs struggling to implement safety guardrails correctly—most were just using basic regex or heavy LLMs that hallucinate.

So, I decided to package my API into a full-featured UI/Toolkit called Risk Mirror.

What it does: It sits between your users and your LLM (OpenAI/Anthropic) and strips out sensitive data before it leaves your server.

The Tech (Zero AI Inference): Instead of asking an LLM "is this safe?", I use:

- 152 PII types: my custom engine covers everything from US Social Security numbers to Indian Aadhaar cards and HIPAA identifiers.

- Shannon entropy: to detect high-entropy strings (API keys, passwords) that regex misses.

- Deterministic rules: 100% consistency. No "maybe."

Why use this?

- It's tested: the underlying API engine is already battle-tested.

- It's fast: <10ms latency.

It also includes a 'Twin Dataset' generator for data scientists (redact CSVs securely). Feedback welcome!
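To make the entropy point concrete: it's the standard Shannon formula over character frequencies. A minimal sketch (any cutoff you'd alarm on is a tuning choice, not shown here):

    // Shannon entropy in bits per character: random-looking secrets score high,
    // ordinary prose scores low.
    function shannonEntropy(s: string): number {
      const counts = new Map<string, number>();
      for (const ch of s) counts.set(ch, (counts.get(ch) ?? 0) + 1);
      let bits = 0;
      for (const n of counts.values()) {
        const p = n / s.length;
        bits -= p * Math.log2(p);
      }
      return bits;
    }

    console.log(shannonEntropy("hello hello hello").toFixed(2));  // ~2.22 (prose-like)
    console.log(shannonEntropy("sk-9fQz2xLmN8vRk4tW").toFixed(2)); // ~4.14 (key-like)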


Great catch! Emails with spaces around @ (like "test @ example.com") slip through. This is a classic obfuscation bypass.

The current pattern intentionally matches RFC 5321-compliant emails (no spaces). Adding support for spaced variants creates a trade-off: we would catch more bypass attempts, but also increase false positives on text like "send @ 5pm". I'll add this to the roadmap. Appreciate the feedback! This is exactly the kind of edge case I need to hear about to make the API better.
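For anyone curious what the trade-off looks like in practice (both patterns simplified well below the real ones):

    // Strict vs. lenient email detection. Tolerating spaces around "@" catches
    // obfuscated addresses, but starts flagging time expressions too.
    const strict = /[\w.+-]+@[\w-]+\.[\w.-]+/g;
    const lenient = /[\w.+-]+\s*@\s*[\w-]+\.[\w.-]+/g;

    const text = "mail me at test @ example.com or meet @ 5.30pm";
    console.log(text.match(strict));  // null: the spaced address slips through
    console.log(text.match(lenient)); // ["test @ example.com", "meet @ 5.30pm"]: caught, plus a false positive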


Quick technical notes for HN:

Why no AI?

The irony of sending PII to an AI model to detect PII is lost on most "privacy" APIs. This is pure algorithmic detection – the same approach your credit card company uses to validate card numbers.

What's validated (not just pattern-matched):

- Credit cards → Luhn checksum

- Aadhaar → Verhoeff (the algorithm that catches single-digit and transposition errors)

- IBAN → Mod 97 (same as banks use)

- Singapore NRIC → Mod 11 with offset

- Brazilian CPF → dual Mod 11
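Luhn, for reference, is only a few lines; a standard implementation (not the engine's exact code):

    // Standard Luhn checksum: double every second digit from the right,
    // subtract 9 from doubles over 9, and require the digit sum % 10 === 0.
    function luhnValid(input: string): boolean {
      const digits = input.replace(/\D/g, "");
      let sum = 0;
      for (let i = 0; i < digits.length; i++) {
        let d = Number(digits[digits.length - 1 - i]);
        if (i % 2 === 1) {
          d *= 2;
          if (d > 9) d -= 9;
        }
        sum += d;
      }
      return digits.length > 0 && sum % 10 === 0;
    }

    console.log(luhnValid("4539 1488 0343 6467")); // true: passes the checksum
    console.log(luhnValid("1234 5678 9012 3456")); // false: random digits rarely pass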

Latency breakdown:

- Heuristic scan: O(n) single pass for trigger characters (@, -, digits)

- Pattern matching: only runs if triggers are found

- Validation: only runs on pattern matches

- Total: 2-5ms for /fast, 5-15ms for /deep
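The gating step is about as simple as it sounds; roughly (the trigger set here is illustrative, and runPatternPass is a placeholder):

    // O(n) pre-scan: if no trigger characters appear, skip the expensive
    // pattern and validation passes entirely.
    function hasTriggers(text: string): boolean {
      for (const ch of text) {
        if (ch === "@" || ch === "-" || (ch >= "0" && ch <= "9")) return true;
      }
      return false;
    }

    // if (hasTriggers(input)) runPatternPass(input);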

False positive mitigation:

- "Order ID: 123-45-6789" won't trigger SSN (negative context)

- Timestamps won't match phone patterns (separator requirements)

- Random 16-digit numbers won't trigger credit card (Luhn must pass)
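The negative-context rule works roughly like this (the keyword list is illustrative, not the real context set):

    // Suppress an SSN-shaped match when the preceding text says it's
    // something else, e.g. an order or ticket number.
    const NEGATIVE = /\b(?:order|invoice|ticket|tracking)\s*(?:id|no|#)?\s*[:#]?\s*$/i;

    function treatAsSSN(text: string, matchIndex: number): boolean {
      const preceding = text.slice(Math.max(0, matchIndex - 24), matchIndex);
      return !NEGATIVE.test(preceding);
    }

    const sample = "Order ID: 123-45-6789";
    const m = /\b\d{3}-\d{2}-\d{4}\b/.exec(sample)!;
    console.log(treatAsSSN(sample, m.index)); // false: negative context suppresses it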


Hi HN! I built this after 3 months researching image-based attacks.

The problem: apps that accept user images typically just strip EXIF metadata. But this misses:

- Steganographic payloads (data hidden in pixel LSBs)

- Polyglot files (valid as both an image AND an executable)

- Image bombs (1x50000px files that exhaust memory)

My approach: Content Disarm & Reconstruction (CDR):

- Decode the image to a raw pixel buffer

- Completely discard the original container

- Rebuild a sterile PNG from scratch

Stack: Rust core → WebAssembly sandbox → Cloudflare Workers edge
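The production core is Rust compiled to WASM, but the decode-and-rebuild shape is easy to show with browser canvas APIs. A minimal sketch (the dimension cap is an illustrative guess; a real implementation would bound dimensions before fully decoding):

    // CDR in miniature: decode to raw pixels, discard the original container
    // (where polyglot payloads live), and re-encode a fresh PNG.
    async function sterilize(file: Blob): Promise<Blob> {
      const bitmap = await createImageBitmap(file); // rejects non-images
      if (bitmap.width * bitmap.height > 50_000_000) {
        throw new Error("image bomb: decoded dimensions too large");
      }
      const canvas = new OffscreenCanvas(bitmap.width, bitmap.height);
      canvas.getContext("2d")!.drawImage(bitmap, 0, 0); // pixels only
      return canvas.convertToBlob({ type: "image/png" }); // sterile container
    }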

Free tier: 100 requests/month on RapidAPI

Happy to answer questions about the architecture, threat model, or implementation!

