> In today's world, catchy headlines and articles often distract readers from the actual facts and relevant information.

A reasonable premise! But easier said than done. I wonder how this app counteracts the hallucination and lying behavior of LLMs.* It would be pretty bad to trade easier-to-decipher human bias and sensationalism for distorted truths and lies from an obfuscated sequence of dot products!

* I assume they are using LLMs because they state:

> By utilizing the power of advanced AI language models capable of generating human-like text,




Usually it hallucinates when you're asking it for information; in this case it's rewriting existing text, so it should be a little safer. When in doubt, check another source, as with everything.
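
Something like this is what I have in mind: hand the model only the article text and ask it not to add anything. A minimal sketch assuming the OpenAI Python SDK; the model name, prompt wording, and function name are just illustrative:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def neutral_rewrite(article_text: str) -> str:
        """Rewrite an article in neutral language using only facts it already contains."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            temperature=0,  # keep the rewrite as deterministic as the API allows
            messages=[
                {"role": "system", "content": (
                    "Rewrite the user's article in plain, neutral language. "
                    "Use only facts stated in the article; do not add anything.")},
                {"role": "user", "content": article_text},
            ],
        )
        return response.choices[0].message.content

Pinning the temperature and forbidding outside facts doesn't eliminate drift, it just shrinks it; hence the "check another source" part.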


People should doubt everything an LLM outputs, so why use one in the first place if the desired output is objective fact? LLMs hallucinate; that's what they do. When the output is wrong, you likely won't notice, but over time your world view is going to become more and more distorted.


> why use it in the first place if the desired output is objective fact

rewriting facts is like 90% of all writing jobs


I think there will be an art to these "information summarization" products. You want just enough summarization to accomplish the reader's goal, while also minimizing your hallucination surface area. Summarize too short, and your user won't get what they came for. Summarize too long, and watch user trust crater as the hallucinations pile up.
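
To make "hallucination surface area" a bit more concrete, here's a crude, hypothetical grounding check in plain Python (no LLM involved; the naive sentence splitting and the 0.6 threshold are arbitrary assumptions): flag summary sentences whose vocabulary barely appears in the source article.

    import re

    def ungrounded_sentences(source: str, summary: str, threshold: float = 0.6):
        """Flag summary sentences whose words barely appear in the source text."""
        source_words = set(re.findall(r"[a-z']+", source.lower()))
        flagged = []
        for sentence in re.split(r"(?<=[.!?])\s+", summary.strip()):
            words = re.findall(r"[a-z']+", sentence.lower())
            if not words:
                continue
            overlap = sum(w in source_words for w in words) / len(words)
            if overlap < threshold:  # mostly novel vocabulary -> possibly ungrounded
                flagged.append(sentence)
        return flagged

The longer the summary, the more sentences there are that can fail a check like this, which is the surface-area point.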

I don't think there's a generalized solution to this problem for all information domains, so when search engine companies implement it, it'll be low quality. What remains to be seen is whether the money is in being a curator/aggregator within a niche, as Boring Report aims to be, or in selling the specialized summarization tech to the content creators directly, for use when they publish. I think the latter leans B2B and will have higher quality, since the content creator signs off on it. But we'll see. Either way, the right mental model for LLMs may be to treat them as memetic compression algorithms.



