Show HN: Omnifact – Self-Hosted, Privacy-First AI Platform for Enterprise
9 points by flore2003 12 months ago | 15 comments
Hi HN, we're Patrick and Florian, and we created Omnifact out of frustration with the underwhelming impact of AI on business productivity. After working with large companies struggling to adopt AI in a meaningful way due to data privacy concerns and vendor lock-in, we decided to build the platform we wished we had.

Omnifact is designed for organizations that want to make use of generative AI and RAG without compromising data security or flexibility.

Here's the tech breakdown:

- Deployment Flexibility: Omnifact can be used as a SaaS in our Azure-based cloud, deployed within a company's private cloud infrastructure, or even in completely air-gapped environments in combination with a self-hosted LLM.

- Your Data Stays Yours: Omnifact protects customer data from third parties. Documents are processed locally using Omnifact's own data ingestion and embedding model. Additionally, our privacy filter (a custom AI model) masks sensitive information before prompts are sent to public LLMs.

- LLM Agnostic: We support OpenAI, Anthropic, Google, Mistral, or any locally hosted open-source LLM (e.g., Llama 3.1 70B). Switching between providers and models is easy and can be done as needed.
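To make the LLM-agnostic point concrete, here's a rough, illustrative sketch (not our actual code; the class, config keys, and backend names are made up for the example) of the kind of abstraction that lets you swap providers through configuration:

    from typing import Protocol

    class ChatBackend(Protocol):
        """Anything that can turn a prompt into a completion."""
        def complete(self, prompt: str) -> str: ...

    class HostedAPIBackend:
        """Hypothetical adapter for a hosted provider (OpenAI, Anthropic, ...)."""
        def __init__(self, api_key: str) -> None:
            self.api_key = api_key
        def complete(self, prompt: str) -> str:
            raise NotImplementedError("call the vendor SDK here")

    class SelfHostedBackend:
        """Hypothetical adapter for a self-hosted model (e.g. Llama 3.1 70B behind a local server)."""
        def __init__(self, base_url: str) -> None:
            self.base_url = base_url
        def complete(self, prompt: str) -> str:
            raise NotImplementedError("POST the prompt to the local inference server here")

    BACKENDS = {
        "hosted": lambda cfg: HostedAPIBackend(cfg["api_key"]),
        "self-hosted": lambda cfg: SelfHostedBackend(cfg["base_url"]),
    }

    def make_backend(cfg: dict) -> ChatBackend:
        # Switching providers is a config change, not a code change.
        return BACKENDS[cfg["provider"]](cfg)

The rest of the platform only ever sees the ChatBackend interface, which is what keeps switching models a configuration change rather than a rewrite.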

What's different about Omnifact?

Omnifact is built with a focus on ease-of-use, security, and adaptability:

- Usability is Key: Our goal is a UI that makes it easy for both technical and non-technical users to create and manage AI assistants.

- Privacy by Design: Omnifact offers granular control over data access and ensures compliance with even the strictest data privacy regulations.

- Future-Proof Foundation: The LLM-agnostic architecture allows companies to leverage the latest AI advancements without vendor lock-in.

Give it a spin at https://omnifact.ai

Let us know what you think! Feel free to email us at founders@omnifact.ai as well.

Our Vision:

We're not just building a platform for creating AI assistants, though. Our goal is to empower organizations to gradually move from simple AI augmentation to end-to-end automation. Starting with secure and private RAG, the vision is to build towards a future where AI can automate complex workflows, seamlessly integrating with existing systems and data. We believe that a robust, privacy-first platform, deployable locally within an organization's infrastructure, is an important building block for this future of AI automation.

Previous HN discussions:

- https://news.ycombinator.com/item?id=39952114 - Ask HN: Recommendations for Local LLMs in 2024: Private and Offline?

- https://news.ycombinator.com/item?id=40948971 & https://news.ycombinator.com/item?id=40837081 - Goldman Sachs on the lack of ROI for AI (due to the lack of enterprise automation)



This is interesting: using NER for PII/secret filtering as a bridge between secure systems and networked APIs.

Over the last six months I built what seems to be the world's first desktop application providing a vault with encryption at rest for secure offline interaction with local LLMs. First public release soon.

This idea of yours is pretty great. Something like it could open the door for me to market my app to enterprise customers from a couple of great new angles.


This sounds really interesting as a concept. Would love to learn more! Ping me via Patrick at omnifact.ai if you’d like to chat about this!


How does Omnifact's privacy filter stack up against more traditional privacy methods? Does it really keep data secure without making the AI less effective when using public LLMs?


Thanks for asking. The privacy filter was actually one of the first things we launched with: it leverages a custom-trained Named Entity Recognition (NER) model for data masking and context-aware content filtering, so sensitive information is replaced with placeholders while the AI's contextual accuracy is preserved. The platform also supports on-premise deployment and self-hosted LLMs, offering full data control and compliance with regulatory standards.
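To illustrate the general mask-then-unmask flow (a simplified toy example, not our actual filter or NER model; in the real system the entity spans come from the model, here they are passed in by hand to keep it self-contained):

    def mask(text: str, entities: dict[str, str]) -> tuple[str, dict[str, str]]:
        """entities maps surface form -> entity type, e.g. {"Jane Doe": "PERSON"}."""
        mapping: dict[str, str] = {}
        for i, (surface, label) in enumerate(entities.items()):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = surface
            text = text.replace(surface, placeholder)
        return text, mapping

    def unmask(text: str, mapping: dict[str, str]) -> str:
        for placeholder, surface in mapping.items():
            text = text.replace(placeholder, surface)
        return text

    prompt = "Summarize the contract between Jane Doe and Acme GmbH."
    masked, mapping = mask(prompt, {"Jane Doe": "PERSON", "Acme GmbH": "ORG"})
    # masked == "Summarize the contract between [PERSON_0] and [ORG_1]."
    # ...send `masked` to the public LLM, then restore the originals locally:
    answer = unmask("The contract obliges [PERSON_0] to deliver to [ORG_1].", mapping)

Because the placeholders keep the entity type, the model still "knows" it is dealing with a person and a company, which is what preserves contextual accuracy even though the real values never leave your environment.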


We're here all day to answer questions, hear your feedback, and have interesting discussions about GenAI in Enterprise. Looking forward!


Awesome idea! One question: how does the privacy filter work? Did you build it yourselves or are you using an external company for this?


That's a great question. So basically it's a mix of carefully crafted pattern matching and our own entity recognition model to detect sensitive information or PII. It's a small model that we ship as part of Omnifact so companies can run the whole platform on-premise. If you have any other questions about the privacy filter or the platform as a whole, let us know!
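For a flavor of what that hybrid looks like, here is a simplified sketch (with spaCy standing in for our own small entity model, and made-up patterns; not the actual filter):

    import re
    import spacy  # stand-in for the small NER model shipped with the platform

    # Deterministic patterns catch well-structured identifiers;
    # the NER model catches free-form entities like names and organizations.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    }

    nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

    def detect_sensitive(text: str) -> list[tuple[str, str]]:
        hits = [(m.group(), label)
                for label, rx in PATTERNS.items()
                for m in rx.finditer(text)]
        hits += [(ent.text, ent.label_)
                 for ent in nlp(text).ents
                 if ent.label_ in {"PERSON", "ORG", "GPE"}]
        return hits

    print(detect_sensitive(
        "Contact Jane Doe at jane.doe@acme.com about invoice DE44500105175407324931."))

Anything this returns gets replaced with a typed placeholder before the prompt leaves the trusted environment (see the masking example further up in the thread).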


Hi Patrick here, I'm one of the co-founders and happy to answer any questions that may come up here!


Amazing concept, love it


Thank you! Means a lot! Feel free to contact us with any questions you might have!


Is it up? Just getting a blank page.


Seems like we are affected by this: https://www.netlifystatus.com/incidents/yyz1bg27b8hv?u=vcpf7...

Very bad timing...


Issue seems to be resolved now!


Thanks, it works now. Nice, I might be in the market for this.


It looks like we are experiencing some issues with Netlify right now. Thanks for pointing that out.



