Hi HN! I'm Nicolas, co-founder of Lume, a seed-stage startup (https://www.lume.ai/).
At Lume, we use AI to automatically transform your source data into any desired target schema, so onboarding client data or integrating with new systems takes seconds rather than days or weeks. In other words, we use AI to automatically map data between any two schemas and return the transformed data to you.
We are live with customers and are just beginning to open the product up to more prospects. We don't have a sandbox yet, but here is a video walkthrough of how the product works: https://www.loom.com/share/c651b9de5dc8436e91da96f88e7256ec?.... And here is our documentation: https://docs.lume.ai. We would love to get you set up to test it, so please reach out.
Using Lume: we don't have self-serve yet. In the meantime, you can request full access to our API through the Request Access button at https://www.lume.ai. The form asks for basic information (e.g. your email) so that I can reach out and onboard you. Please mention you came from HN and I'll prioritize your request.
How our full API product works: through Lume's API, you specify your source data and target schema. Lume's engine, which combines AI and rule-based models, generates the necessary transformation logic under the hood and returns the transformed data in the response.
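To make that concrete, here is a rough sketch of what a request could look like. The endpoint path, field names, and response shape below are illustrative assumptions, not our exact API (the real reference is at https://docs.lume.ai):

    import requests

    # Illustrative only: hypothetical endpoint and payload shapes.
    resp = requests.post(
        "https://api.lume.ai/v1/transform",  # hypothetical path
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={
            # Source records in whatever shape the client sends them.
            "source_data": [{"FirstName": "Ada", "Surname": "Lovelace"}],
            # Target schema you want the data mapped into (JSON Schema style).
            "target_schema": {
                "type": "object",
                "properties": {
                    "first_name": {"type": "string"},
                    "last_name": {"type": "string"},
                },
            },
        },
    )
    result = resp.json()
    # result would hold the transformed records, e.g.
    # [{"first_name": "Ada", "last_name": "Lovelace"}], plus the generated mapping logic.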
We also support mapper deployment, which lets you edit and save the AI-generated mappers for important production use cases, so you can confidently reuse a static, deterministic mapper in your data pipelines.
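Continuing the same hypothetical sketch, reusing a saved mapper could look like this (again, the names and paths are assumptions for illustration, not our actual API):

    import requests

    # Hypothetical: run a previously reviewed and saved ("deployed") mapper
    # deterministically on a new batch, without re-generating the mapping.
    new_batch = [{"FirstName": "Grace", "Surname": "Hopper"}]
    resp = requests.post(
        "https://api.lume.ai/v1/mappers/acme_orders_v1/run",  # illustrative path
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={"source_data": new_batch},
    )
    transformed = resp.json()  # the same saved mapping logic is applied every time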
Our clients have three primary use cases:
- Ingest client data: each client you work with handles data differently. They name, format, and structure their data in their own way, which means you have to re-implement ingestion for each new client.
- Normalize data from disparate data systems: to deliver value, your team needs to connect to various data providers or handle legacy data. Building a pipeline for each one is time consuming, and even something as small as a column name difference between systems makes it burdensome to get started.
- Build and maintain data pipelines: creating separate pipelines that map to your target schema, whether for BI tooling, downstream data processing, or other purposes, means you have to manually create and maintain each of these schema-to-schema mappings.
We're still figuring out pricing, so we don't have it on our website yet - sorry. We wanted to share this even though it's still early.
We’d love your feedback, ideas & questions. Also, feel free to reach out to me directly at nicolas@lume.ai. Thank you.