How HammerBot Works
We also want to share details about how HammerBot works on the backend. Right now, our developers are using OpenAI's text-davinci-003 model, part of the GPT-3.5 family, paired with a custom dataset built from our articles. The data is stored in a Weaviate vector database, and the bot is coded primarily in Python using LangChain, a framework that makes it easy to customize AI output.
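For the curious, here's a rough sketch of how that kind of stack can be wired together with LangChain's Python API. This is an illustration rather than HammerBot's actual code: the Weaviate URL, class name, and text field below are placeholders we made up for the example.

```python
import weaviate
from langchain.llms import OpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Weaviate

# Connect to the Weaviate instance that stores the article embeddings.
client = weaviate.Client(url="http://localhost:8080")  # placeholder URL

# Expose the Weaviate index to LangChain as a vector store.
vectorstore = Weaviate(
    client=client,
    index_name="Article",         # hypothetical class name
    text_key="content",           # hypothetical property holding article text
    embedding=OpenAIEmbeddings(),
)

# The completion model mentioned above.
llm = OpenAI(model_name="text-davinci-003", temperature=0)
```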
When you enter a prompt, the server queries the dataset stored in Weaviate to pull back the most relevant results. Those are then sent to the LLM to help it develop a consistent response to your question.
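Continuing from the snippet above, that retrieve-then-answer step looks roughly like LangChain's RetrievalQA pattern. Again, this is a sketch under our own assumptions, not HammerBot's exact code; the number of retrieved chunks and the sample question are invented.

```python
from langchain.chains import RetrievalQA

# Build a retrieval-augmented QA chain: the retriever pulls the most relevant
# article chunks out of Weaviate, and LangChain packs them into the prompt it
# sends to the LLM along with the user's question.
# `llm` and `vectorstore` come from the previous snippet.
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",  # "stuff" simply inserts the retrieved text into the prompt
    retriever=vectorstore.as_retriever(search_kwargs={"k": 4}),  # k is a guess
)

# The user's prompt flows through retrieval and then the LLM in a single call.
print(qa_chain.run("What's a good budget GPU for 1080p gaming?"))
```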
A lot of chatbots will talk to you about anything on Earth. But don't ask HammerBot for a knitting pattern! It's designed to have limits on what it will talk about; it focuses on the expertise you can only get from Tom's Hardware, so it may say it doesn't know or can't answer if your prompt falls outside of its training data.