Hey HN! My name is Andrew, and I'm thrilled to share with you a project I've been working on called Factful.
I'm a high school student with a passion for tackling misinformation online. Inspired by the need for more reliable content verification tools, I decided to create Factful, an AI-powered web app that helps individuals and organizations verify the content they read and publish.
Unlike traditional grammar checkers, Factful goes beyond grammar: it evaluates context, factuality, and coherence to assess the accuracy and credibility of content.
I believe that in today's information age, it's more crucial than ever to have tools that combat misinformation and promote content integrity. I'm excited to keep developing Factful, and a beta deployment (a little beyond the MVP) is free to try on our website. Thanks for taking the time to read about it; your feedback and support would mean the world to me!
- Most current LLMs are trained on large amounts of web data that itself contains facts, opinions, and misinformation. These are treated equally during training, so I would expect an LLM to get common facts right, but also to present pervasive opinions or misinformation as fact.
- LLMs "hallucinate" and tend not to know when to say "I don't know" or to not try to fact-check something that is not factual in nature.
...in short, I would expect an LLM to be an unreliable fact-checker, with the potential to do as much harm as good.