Launch HN: Inkeep (YC W23) – Copilot for Support (think Cursor for help desks)
117 points by engomez 71 days ago | 61 comments
Hi HN, We’re Nick and Robert, founders of Inkeep (https://inkeep.com). We help companies turn their content into AI support copilots. Until now, we’d focused on customer-facing experiences (e.g., you’ll find us as an “Ask AI” button in the docs of companies like Anthropic and Pinecone). Today, we’re excited to share our new copilot, Keep, which is designed specifically for support agents.

It’s a conversational sidebar you can use as an app for Zendesk or Chrome Extension for any support platform. There’s a demo video at https://vid.inkeep.com/cx-copilot or you can try the live sandbox with example tickets at https://try.inkeep.com/cx-copilot.

Why? Most AI support tools today are focused on trying to have AI answer customer questions before they even reach support teams (‘deflection’). We’d focused on that too. However, we heard from many of our customers that while they care about deflection, they care even more about providing high-quality, fast human support when users need it. Some teams don’t even want customer-facing AI at all and just want AI tools to help their team be more efficient. We created Keep with these scenarios in mind.

Keep does a few neat things we haven’t seen elsewhere:

1. Provides intelligent suggestions: if Keep is confident, it’ll create a draft answer and tell you the sources it used. If the ticket is long, it’ll summarize the conversation so far and outline the remaining to-dos. All automatic and contextual to the ticket.

2. Is fully conversational: ask for clarifications, revise draft answers, and iterate as needed.

3. Uses ‘Generative UI’: suggestions are rendered as glanceable, interactive UI components. For example, a draft answer has buttons like “Shorten” & “Concise” that prompt the AI to revise the answer. UI components are interwoven with normal text.

4. Turns tickets into FAQs: can generate an FAQ from a closed ticket and lets you iterate on it and save it when done.

5. Leverages many content types: uses your docs, help center, previous support tickets, Slack threads, etc.

We were inspired by tools like Cursor, Claude Artifacts, and v0. These experiences go beyond plain-text conversations by interweaving interactive code blocks or UI previews into their answers. This makes answers digestible and intuitive (and fun) to iterate on.
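
Here's a simplified sketch of that pattern (an illustration for this post, not our production component): the draft answer is rendered as an interactive card, and each button simply feeds a canned revision prompt back into the conversation.

    // DraftAnswerCard.tsx -- illustrative only
    type DraftAnswerProps = {
      answer: string;
      sources: string[];
      onRevise: (instruction: string) => void; // re-prompts the model and streams a new draft
    };

    export function DraftAnswerCard({ answer, sources, onRevise }: DraftAnswerProps) {
      return (
        <div className="draft-answer">
          <p>{answer}</p>
          <ul>
            {sources.map((url) => (
              <li key={url}>
                <a href={url}>{url}</a>
              </li>
            ))}
          </ul>
          {/* Each button is just a prebuilt prompt; the revised draft replaces the card body. */}
          <button onClick={() => onRevise("Shorten this answer")}>Shorten</button>
          <button onClick={() => onRevise("Make this more concise")}>Concise</button>
        </div>
      );
    }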

Some technical details, for those interested: We use the Vercel AI SDK to optimistically stream the React components via our Chat APIs, which are powered by Claude 3.5 Sonnet and our RAG service. Our APIs follow the OpenAI chat completion format, so they’re generally compatible with any LLM tooling. Our `inkeep-qa` API generates draft answers and the `inkeep-context` API generates structured outputs and tool calls (docs: https://go.inkeep.com/ai-api). For an example of how these APIs are used, check out our Intelligent Support Form example (demo: https://try.inkeep.com/ai-form, repo: https://quickstart.inkeep.com/ai-form).
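
To make "OpenAI-compatible" concrete, here's a minimal sketch of calling a chat completions endpoint with the official OpenAI Node SDK and streaming the result. The base URL and model name below are assumed placeholders; see the API docs linked above for the actual values and available models.

    // npm i openai
    import OpenAI from "openai";

    const client = new OpenAI({
      baseURL: "https://api.inkeep.com/v1", // assumed base URL; check the API docs
      apiKey: process.env.INKEEP_API_KEY!,
    });

    async function draftAnswer(ticketText: string) {
      // Stream a draft answer exactly as you would with any OpenAI-compatible API.
      const stream = await client.chat.completions.create({
        model: "inkeep-qa-model", // placeholder model name
        messages: [{ role: "user", content: ticketText }],
        stream: true,
      });
      for await (const chunk of stream) {
        process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
      }
    }

    draftAnswer("How do I rotate my API keys?");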

If you want to try Inkeep on your product's content, just fill out the form on our landing page. You’ll get a demo in your inbox powered by your public content — NO “call us”, “book a demo”, or “schedule a meeting” required. Note: we do check that your email domain matches your content to prevent spam.

Curious to hear about your experiences when working with customer/support questions and any ideas on how else the copilot could be useful for those scenarios.




We've been a customer for the past year (https://speakeasy.com/docs). I was honestly highly skeptical about putting a RAG-powered search in front of our documentation site instead of what we were using (FlexSearch / Nextra). I've been delighted to be proved wrong.

What I've learned is that while the majority of queries follow standard search patterns (i.e., users search for something that's covered by the documentation), a subset of queries aren't answerable by our documentation directly but only implied by it. I've seen firsthand that Inkeep serves a large part of that user segment and reduces our support burden.

As a very recent/specific example from last week, we had a community user generating a Terraform provider for an internal use case. By putting error messages from our CLI tooling into Inkeep's "Ask AI" feature, they discovered a nuance in "x-speakeasy-match" (the error message implied it created a circular reference, but didn't spell that out) and self-served a solution.

Inkeep effectively turned our documentation into a guided tutorial on our product, specific to the customer. Pretty strong ROI.


best way to frame the customer-facing AI: guided tutorials on-demand that can translate between user terminology and product lingo.


Strong agree. I increasingly feel like this is one of the major benefits of AI.


How are you measuring the confidence of the answers? This is one of the biggest challenges I've seen with AI: it provides wrong answers to hard questions, which wastes the user's time.


Take a look at https://docs.inkeep.com/ai-api/openai-chat-completion-endpoi...

tl;dr we define a JSON schema with a few semantic labels that represent a gradient in confidence. On our end we prompt it with certain examples and guidance for when to use each label. This is generally a better approach than e.g. asking an LLM to give a numeric score.

We also have trained embedding-based classifiers as non-LLM heuristics.
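
To make that concrete, here's a rough sketch of the shape using zod (the label names are illustrative, not our exact production schema):

    import { z } from "zod";

    // Each label gets its own prompt guidance and few-shot examples, so the model
    // picks a semantic bucket instead of inventing a numeric score.
    const AnswerSchema = z.object({
      answer: z.string(),
      confidence: z.enum([
        "very_confident",     // fully grounded in retrieved sources
        "somewhat_confident", // partially grounded; worth an agent's review
        "not_confident",      // weak or conflicting sources
        "no_sources",         // nothing relevant retrieved
      ]),
      sources: z.array(z.string().url()),
    });

    export type Answer = z.infer<typeof AnswerSchema>;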


Highly recommended. Congrats on the HN launch.

Inkeep works great at Pinecone and meaningfully reduced the number of support tickets with common questions/issues.

[1] https://support.pinecone.io/


that's the goal. And now with Keep we're aiming to help the support team answer the questions that do come through faster, too. Minimizing "time to answer" across the funnel is what we're after.


Very cool! We've been working on an open source solution for government that does something similar.

https://ai.gov.uk/projects/caddy/


We (at PostHog) have a very unique implementation of Inkeep in our community forums[1], and it's been a lot of fun working on a custom solution with the Inkeep team.

Our ultimate goal was to make our experience explicitly not feel like you're talking to AI.

So rather than trying to intercept questions from being posted to our forums, we trigger Inkeep _after_ a question is posted. If we're able to find an answer with a high degree of confidence, our "AI user" (Max) will show an answer within about 30 seconds.

The OP can then provide feedback that we're using to train further answers.

If the answer is marked as helpful, we display the answer publicly (and disclaim it as an AI response)[2]. If the answer is marked as _unhelpful_, the answer only shows to the OP and we review the feedback to figure out how we can improve (ie: do our docs need to be improved so Inkeep has better source material?).
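
In pseudocode, the flow is roughly this (hypothetical helper names, not our actual code):

    interface ForumQuestion { id: string; body: string; }
    interface AiResult { answer: string; confidence: "very_confident" | "somewhat_confident" | "not_confident"; }

    // Hypothetical helpers standing in for our forum backend and the Inkeep API.
    declare function askInkeep(question: string): Promise<AiResult>;
    declare function postReply(reply: {
      questionId: string;
      author: string;
      body: string;
      visibility: "op_only" | "public";
    }): Promise<void>;

    export async function onQuestionPosted(question: ForumQuestion) {
      const result = await askInkeep(question.body);

      // Stay silent unless the answer is high-confidence.
      if (result.confidence !== "very_confident") return;

      await postReply({
        questionId: question.id,
        author: "Max (AI)",
        body: result.answer,
        visibility: "op_only", // promoted to public only if the OP marks it helpful
      });
    }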

It's been fun getting creative with the Inkeep team on a solution that worked for our specific use case. I'm planning on rolling out Inkeep more broadly in other areas of our site as we verify that our highly confident answers are genuinely useful to our users.

IMO Inkeep has been the first AI solution that hasn't sucked – and that's high praise coming from me!

[1] posthog.com/community [2] https://posthog.com/questions/autocapture-event-bubbling


Lower the phone's quality settings so games run at 250 FPS


I tried the question "How do you track for failures of the service?" I had to drill down multiple times, but it did give me a good understanding of the service. I did notice that it was also giving results in JavaScript. Looks interesting, and I have problems with my own RAG app: https://www.securday.com


was this in the support copilot or our public-facing bot on our landing page? just for my own FYI, what did you mean by 'failures' of the service? We can look to create some content relating to that.


I first asked "How to track failures" and it gave me some code and showed how to look at analytics. Eventually I figured out that I had to explicitly track for hallucinations, up/down events, etc. So I was overly broad, but I was able to drill down and understand the system. Maybe I was unclear, but it taught me how your app works.


Ah understood. I can see how the term is a little fuzzy. Glad it highlighted how the product works though.


Get the best processor and add 100 GB of RAM


How is this different from the thousands of other competitors that pretty much promise the same thing? Sorry if that sounds dismissive of your product, but that's my first impression, and probably a lot of other people's too, so it would be good to get a good answer...


(I'm not from Inkeep but I've seen and used their product.)

I'm not sure how they do it but the answer quality and the UI is meaningfully better than all the other "chat with your docs"-type products I've tried.

In other words the promised outcome isn't very original but they've nailed the execution.


Feel free to correct me on that, but here's my understanding. The comprehensive support products cover four main sub-products:

1. FAQ/Knowledge bases with search functionality.

2. Conversational mediums and agent notifications (e.g., live chat widget, messenger support).

3. Ticket management systems and agent management, which is the core of Zendesk/Intercom. This is the most difficult to operationalize as it requires process architecture, SLA management, etc.

4. Orchestration and workflow management, which can be done inside #3, though some products are available as well.

Most new post-LLM startups target #2 but face platform risks as they rely on companies covering #1, #3, and #4 (e.g., Zendesk, Intercom, Gorgias).

I feel Inkeep is doing some combination of #2, but emphasizing that you can support clients wherever they are (e.g., GitHub, Discord, Slack) instead of asking them to submit tickets in a website widget.

Another issue for AI support startups is the verticalization/horizontal trap. Most LLMs require solid tuning per client, especially for enterprises like us. Startups often avoid this initially, opting for a more horizontal, general path (e.g., AI support for Shopify merchants). This is where enterprise players have an edge: companies like ServiceNow, Zoom, and Oracle offer both support products and implementation services.

Neat business imo.


are you implying that a custom implementation service for enterprises is a good business?


that's the reality of the post-LLM Customer Support business.


give the playground a shot and let me know what you think.

we answer 250k+ customer-facing questions/mo today for teams who really care about quality (Anthropic, Clerk, Pinecone, Postman) - we're bringing that same care and high bar to our copilot for support teams.

the generative UI and conversational aspects are quite different from other copilots we've seen.


We use them at getstream.io, the RAG on SDKs is way ahead of other platforms in this space.


I'm not associated with them but was browsing their docs and spotted this:

https://docs.inkeep.com/faqs/comparisons

Might help.


We are customers at windmill.dev and we are really happy with it. It also motivates us to write ever-better docs, since it means more questions can be answered completely by the bot.


Anyone downvoting this should know that other YC companies endorsing a product launch is a certified HN classic, and by downvoting it, you're violating a long and rich tradition.


windmill.dev looks like an awesome solution, thanks for the link :)


nothing like closing the content loop! try to make that direct.


Add 12 GB of RAM


How does this compare to Q for Business?

https://aws.amazon.com/q/business/


My understanding is that Q is a general-purpose internal AI/search service, similar to Glean or Microsoft's equivalents.

Our tool focuses on support use cases (customer-facing or internal-facing), which means we can go deep on workflows like detecting gaps in your documentation and focus our efforts on quality for those scenarios. The support copilot introduced here also generates dynamic UIs, so it goes beyond a normal chat interface.


Entry price of $150/month just to try it - regardless of volume.

Pretty sure most people will go to whoever has a free tier, and even that space will be competitive.


most folks will go to whoever is free, but ultimately the folks for whom the problem is valuable will go to whoever is best


Do you have any stats / benchmarks of how often it answers a question correctly that's present in the customer's documentation?


We measure the opposite now: the % of questions where the bot couldn't find documentation to answer and that documentation genuinely doesn't exist. This powers our insights reporting. We generally don't hear complaints about false positives here (i.e., cases where docs did exist), so anecdotally I'd guess 95%+, or else our reporting wouldn't be very useful.


Inkeep has been great for us. Congrats on the launch!


appreciate it!


how is this different to all the other identical "we attached ChatGPT to a web app" competitors?

how do you ensure that companies don't use this to make it impossible to actually contact a human? or do you feel that isn't in scope for a product that's encouraging companies to make it impossible to contact a human?


this one is designed for support agents - not to be customer-facing. The whole goal is to make it more scalable to provide high-touch, human-based support for teams who are keen on that.

Agree customer-facing AI has to be done in a tasteful and mindful way.


This is the first time I've seen someone use two instances of "x for y" where x is another product and y is the actual thing they do


The first one (Copilot) is quickly becoming a generic term, so one could argue that there's only one instance (Cursor) of that pattern.


Used Inkeep in my previous company. A+ team & support.

Nick & Rob are very strong leaders.


appreciate it!


Inkeep has been very solid for us and is running in our Discord. https://milvus.io/community

Robert spoke at our meetup and is awesome. https://www.youtube.com/watch?v=35JdjmiDvWI


The developer community of the Milvus vector database benefits a lot from the Inkeep "Ask AI" button in Discord and on the milvus.io website. As users, we're happy with Inkeep's rich feature set: integrations with GitHub/Discord and an admin tool for studying users' questions to identify issues in the product or documentation. These features are often overlooked when people talk about RAG solutions, but they turned out to be very important in our experience using RAG in a real-world scenario. The agentic workflow of Keep feels like a great addition to the existing core RAG functionality.


glad to hear!


How much did you spend on the domain??


Not too bad - $3k.


Ok that’s probably worth its value lol.


Pretty crowded market


There is a typo in the title of your example https://copilot-demo.inkeep.com/?ticketId=new-ticket

"Escelating to humans from Inkeep AI Slack bot" : "Escelating" --> "escalating"


ty!


Were any open source projects cloned in the making of this product?


Unrelated, but has Cursor achieved this kind of mind share?

Last week, I heard about a company (I think it was a YC company) describing itself as an "open source Cursor".

Also last week, a comment here on HN stuck with me: "I live inside Cursor" https://news.ycombinator.com/item?id=41651380

And now, the Cursor for Help Desk.

I've used Cursor, and I loved it. 10 to 1 over regular GitHub Copilot, and well worth the $20, even though I'm just a hobby programmer (management job during the day).

But... all this? It has become the reference point, just like we had "Uber for...", "Airbnb for...". It seems like it happened so fast.


Seems like it, whenever I ask mid/senior devs for suggestions on an AI copilot they all recommend Cursor. Haven't tried it though, my bottlenecks involve people and outdated docs ^_^


it is definitely all the rage in the AI community. what made it better than GitHub Copilot for you?


Good question, it took me a while to figure out why I preferred it over Copilot.

Number one was the "apply" suggestions from chat, but now I think Copilot has it too.

Number two is the suggestions while I'm typing that change multiple lines at once. Such a time saver.

Number three is how I can send entire files/folders as reference in the chat.

Also, it feels a bit snappier? The suggestions seem to come a bit faster than Copilot.

In terms of correctness, they're both equally good (and bad).

To me it's all about how much time I am saving. When I sit down 30-90 minutes 3-4 times a week to code, I just feel like it helps me to get more stuff done.


One thing that’s better: I’m converting a codebase to a new major library version with a lot of breaking changes. The suggestions in Cursor are working really well for this. You can make an edit to fix a call (or whatever), go to the next place and it suggests a similar edit, then just start hitting tab as it figures out where else you would want to do the same thing. It also seems to have recent edits in context so when you go to the next file it’s already primed to continue.


Have been a Cursor user for ~1yr. I think they just nailed the key workflows people wanted - i.e. the fully conversational sidebar with "apply" buttons and easily being able to attach the right files or code snippets you want.

VS Code may have caught up by now, but I haven't looked back.


We've built something like this at Helpjuice.com - we call it Swifty AI Chatbot. It's pretty cool to see companies building a completely open platform that works with everything. Nice work folks!

Upvoted – looking forward to supporting you guys more


It's not a good look to piggyback off competitors' launches like this. Let them have their moment.


Ahh, just when founders had finally stopped pitching their startups as "the Uber for X", now we get "Cursor for X".



