I've been exploring combining multimodal models like GPT-4 Vision for image analysis with GPT-4 Turbo for generating written content based on those image descriptions.
Here we're showing a use case: building a tool with Graphlit for apartment managers that generates an instant inspection report purely from uploaded images.
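As a minimal sketch of the two-stage pattern described above: one request asks a vision model to describe each uploaded photo, and a second request turns those descriptions into a report. The prompts and model names are illustrative assumptions, not the tool's actual implementation; the message shape follows the OpenAI chat completions format for image inputs.

```python
def build_vision_request(image_urls):
    """Stage 1: ask a vision model to describe each uploaded apartment photo.
    Prompt text and model name are assumptions for illustration."""
    content = [{"type": "text",
                "text": "Describe the condition of the room in each photo, noting any damage."}]
    content += [{"type": "image_url", "image_url": {"url": u}} for u in image_urls]
    return {"model": "gpt-4-vision-preview",
            "messages": [{"role": "user", "content": content}]}

def build_report_request(descriptions):
    """Stage 2: ask a text model to turn the image descriptions into an
    inspection report. System prompt is a hypothetical placeholder."""
    joined = "\n".join(f"- {d}" for d in descriptions)
    return {
        "model": "gpt-4-turbo",
        "messages": [
            {"role": "system", "content": "You write apartment inspection reports."},
            {"role": "user",
             "content": f"Write an inspection report from these observations:\n{joined}"},
        ],
    }
```

Each payload would then be POSTed to the chat completions endpoint; keeping the payload construction separate makes the two stages easy to test offline.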
When building a chatbot or AI copilot that supports Q&A across ingested content, a common user problem is knowing where to start asking questions.
Have a look at Graphlit (https://www.graphlit.com): a managed, cloud-native platform for knowledge ingestion and retrieval. It offers a simple GraphQL API and a serverless backend that handles all the data ingestion, embeddings, and RAG LLM patterns. There's a free tier to try it out, no credit card required. (Disclaimer: I'm the founder.)
With Graphlit and Azure AI, market intelligence from Reddit can be automated, accelerating the time from raw posts to business insights.
This tutorial walks through ingesting Reddit posts into Graphlit by creating a Reddit feed along with a content workflow. The workflow describes how the content will be ingested, prepared, extracted, and enriched into the Graphlit knowledge graph.
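A rough sketch of what creating the Reddit feed looks like against the GraphQL API. The mutation and field names here (createFeed, FeedInput, reddit.subredditName, the workflow reference) are assumptions based on my reading of the Graphlit API, so treat this as a shape to adapt, not verified code:

```python
# Assumption: mutation and input field names mirror Graphlit's GraphQL schema;
# check the actual API reference before using.
CREATE_FEED_MUTATION = """
mutation CreateFeed($feed: FeedInput!) {
  createFeed(feed: $feed) { id name state }
}
"""

def build_create_feed_payload(subreddit, workflow_id=None):
    """Build the JSON body for a GraphQL request that creates a Reddit feed,
    optionally attaching a content workflow by id."""
    feed = {
        "name": f"r/{subreddit}",
        "type": "REDDIT",
        "reddit": {"subredditName": subreddit},
    }
    if workflow_id:
        # The workflow controls how feed content is prepared, extracted, enriched.
        feed["workflow"] = {"id": workflow_id}
    return {"query": CREATE_FEED_MUTATION, "variables": {"feed": feed}}
```

The returned dict can be sent as the JSON body of a POST to the Graphlit GraphQL endpoint with your API token.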
Once all content has been ingested, you can analyze the entities that Azure AI observed in the Reddit posts, and extract structured data via faceted queries for charting and analysis.
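Once the faceted query results come back, they can be aggregated into counts ready for charting. The result shape assumed here ({"facet": ..., "count": ...} rows) is a simplification of a faceted response, not Graphlit's exact schema:

```python
from collections import Counter

def tally_entity_facets(facet_results):
    """Aggregate faceted entity results into counts sorted for charting.
    Assumes rows shaped like {"facet": <entity name>, "count": <int>};
    adapt the field names to the real facet response."""
    counts = Counter()
    for item in facet_results:
        counts[item["facet"]] += item["count"]
    return counts.most_common()
```

The sorted (name, count) pairs feed directly into a bar chart, e.g. in Streamlit.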
Use the knowledge graph to provide greater context to your RAG pipeline, via GraphRAG.
Code: https://github.com/graphlit/graphlit-samples/tree/main/pytho... Demo: graphlit-samples-sharepoint-graph.streamlit.app