Hacker News | asekka1's comments

Hi HN,

I built TensorWall because I noticed how difficult it is to balance LLM freedom with production safety. It’s an open-source security layer designed to intercept, analyze, and filter prompts and responses in real time.

Key features:

PII Redaction: Automatically masks sensitive data before it reaches the model.

Prompt Injection Defense: Detects malicious patterns in user inputs.

Output Validation: Ensures the model stays within predefined constraints.

Framework Agnostic: Easy to integrate with existing Python stacks.
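To give a feel for the PII redaction step, here is a minimal regex-based sketch of the general technique (illustrative only; the pattern names and function are my own, not TensorWall's actual API):

```python
import re

# Illustrative patterns only; a production redactor would cover many more
# categories (names, addresses, card numbers) and likely add an NER model.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Mask matched PII spans before the prompt reaches the model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or 555-867-5309."))
# -> Contact [EMAIL] or [PHONE].
```

Because redaction runs in the proxy, the upstream model only ever sees the masked placeholders.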

I’m looking for feedback on the architecture and what specific security "walls" you'd like to see next.

Check it out here: https://github.com/datallmhub/TensorWall


Hi HN,

I built TensorWall, an open-source, self-hosted LLM gateway that sits between applications and LLM providers (OpenAI, Anthropic, local models).

It exposes an OpenAI-compatible HTTP API, so applications can integrate with LLMs from any programming language using a standard HTTP client.
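Since the gateway speaks the OpenAI chat-completions wire format, any plain HTTP client works. A stdlib-only sketch (the localhost port and API key are assumptions for illustration, not TensorWall's documented defaults):

```python
import json
import urllib.request

# Hypothetical gateway address; substitute your own deployment.
GATEWAY_URL = "http://localhost:8080/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-4o-mini") -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request aimed at the gateway."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer YOUR_GATEWAY_KEY",  # placeholder credential
        },
    )

# With a gateway running, the call is just:
# resp = urllib.request.urlopen(build_request("Hello"))
# print(json.load(resp)["choices"][0]["message"]["content"])
```

The same shape works from any language with an HTTP client, which is the point of exposing an OpenAI-compatible API rather than a bespoke SDK.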

The main goals are:

– controlling LLM costs (budgets, limits)

– enforcing security and governance rules

– avoiding vendor lock-in by routing requests across providers
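As a rough illustration of how provider routing and a pre-flight budget check can fit together in such a gateway (the table, names, and structure here are my own sketch, not TensorWall's internals):

```python
from dataclasses import dataclass

@dataclass
class Tenant:
    budget_usd: float        # spending cap for this tenant
    spent_usd: float = 0.0   # running total

# Map model-name prefixes to upstream providers; keeping this as
# configuration rather than code is one way to avoid vendor lock-in.
ROUTES = {"gpt-": "openai", "claude-": "anthropic", "llama-": "local"}

def route(model: str) -> str:
    for prefix, provider in ROUTES.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"no provider configured for {model!r}")

def authorize(tenant: Tenant, estimated_cost: float) -> bool:
    """Reject the call before proxying if it would exceed the budget."""
    return tenant.spent_usd + estimated_cost <= tenant.budget_usd

t = Tenant(budget_usd=10.0, spent_usd=9.99)
print(route("claude-3-haiku"))  # -> anthropic
print(authorize(t, 0.05))       # -> False
```

Rejecting over-budget requests at the proxy, before they reach a provider, is what makes hard cost limits enforceable.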

It’s not a SaaS. It’s fully self-hosted and works as a plain proxy.

I’ve deployed a demo on Hugging Face Spaces using mock providers to show the workflow, and the full source code is available on GitHub.

Feedback from people dealing with multi-LLM setups or production LLM governance would be very welcome.

