Hi HN, we're Patrick and Florian, and we created Omnifact out of frustration with the underwhelming impact of AI on business productivity. After working with large companies struggling to adopt AI in a meaningful way due to data privacy concerns and vendor lock-in, we decided to build the platform we wished we had.
Omnifact is designed for organizations that want to make use of generative AI and RAG without compromising data security or flexibility.
Here's the tech breakdown:
- Deployment Flexibility: Omnifact can run as a SaaS in our Azure-based cloud, be deployed inside a company's private cloud infrastructure, or operate in fully air-gapped environments paired with a self-hosted LLM.
- Your Data Stays Yours: Omnifact keeps customer data away from third parties. Documents are ingested and embedded locally using Omnifact's own pipeline and embedding model, and our privacy filter (a custom AI model) masks sensitive information before any prompt reaches a public LLM.
- LLM Agnostic: We support OpenAI, Anthropic, Google, Mistral, or any locally hosted open-source LLM (e.g., Llama 3.1 70B). Switching between providers and models is easy and can be done as needed.
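To make the last two points concrete, here is a rough sketch of how a privacy filter plus a provider-agnostic interface can fit together. Everything below is hypothetical illustration: the function names and provider stubs are ours, and Omnifact's real filter is a custom AI model, not the simple regex pass used here as a stand-in.

```python
import re

# Stand-in for the privacy filter: replace sensitive spans with placeholder
# tokens before a prompt ever leaves the local network. (Omnifact's actual
# filter is a custom model; regexes are used here only for illustration.)
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def mask_pii(text: str) -> str:
    """Mask sensitive values so public LLM providers never see raw data."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Provider-agnostic registry: each backend is just a callable, so switching
# from a hosted API to a local model is a one-key change. These lambdas are
# stubs standing in for real API / local-inference calls.
PROVIDERS = {
    "openai": lambda prompt: f"(openai) {prompt}",
    "local-llama": lambda prompt: f"(llama) {prompt}",
}

def complete(prompt: str, provider: str = "local-llama") -> str:
    safe_prompt = mask_pii(prompt)  # public providers only see masked text
    return PROVIDERS[provider](safe_prompt)

print(complete(
    "Contact jane.doe@example.com about invoice DE89370400440532013000",
    provider="openai",
))
```

The point of the shape: masking happens in one choke point (`complete`), so every provider behind the registry gets the same privacy guarantee, and swapping models never touches application code.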
What's different about Omnifact?
Omnifact is built with a focus on ease-of-use, security, and adaptability:
- Usability is Key: The UI is designed so that both technical and non-technical users can create and manage AI assistants.
- Privacy by Design: Omnifact offers granular control over data access and ensures compliance with even the strictest data privacy regulations.
- Future-Proof Foundation: The LLM-agnostic architecture allows companies to leverage the latest AI advancements without vendor lock-in.
Give it a spin at https://omnifact.ai
Let us know what you think! Feel free to email us at founders@omnifact.ai as well.
Our Vision:
We're not just building a platform for creating AI assistants, though. Our goal is to empower organizations to gradually move from simple AI augmentation to end-to-end automation. Starting with secure and private RAG, the vision is to build towards a future where AI can automate complex workflows, seamlessly integrating with existing systems and data. We believe that a robust, privacy-first platform, deployable locally within an organization's infrastructure, is an important building block for this future of AI automation.
Previous HN discussions:
- https://news.ycombinator.com/item?id=39952114 - Ask HN: Recommendations for Local LLMs in 2024: Private and Offline?
- https://news.ycombinator.com/item?id=40948971 & https://news.ycombinator.com/item?id=40837081 - Goldman Sachs on the lack of ROI for AI (attributed to the lack of enterprise automation)
Over the last six months I built what seems to be the world's first desktop application providing a vault with encryption at rest for secure offline interaction with local LLMs. First public release soon.
This idea of yours is pretty great. Something like it could open the door for me to market my app to enterprise customers from a couple of great new angles.