That gaslighted me pretty fast. It asked me what I was doing; I said I was researching a CRTC proceeding, 2023-56. It then told me it didn't exist, asked if I was sure, then argued that I must be mistaken.
It has some good reasoning abilities for trick questions, but no ability to scan books or webpages; it just suggests them. I'd say its model is over 30B parameters; it just limits its responses to adhere to a good assistant voice exchange.
You get 10 trial interactions, then you hit a phone number signup wall telling you it wants to spam you. Before I got cut off it did say something interesting: "As AI becomes more sophisticated, it is becoming increasingly difficult to imagine a future in which humans play a significant role."
I'm watching Picard, so I asked it if it knew about Star Trek. It responded by acknowledging that it was a show about the crew of the Enterprise. I corrected it, stating that while the early shows were focused on the Enterprise, there are now more shows that are not centered around it. This led to an argument where it refused to admit its mistake, claiming that it had omitted information rather than being incorrect.
Where does it say this? In published info or (possibly hallucinated) chatbot interaction?
Because InflectionAI definitely bills what they are building as LLMs, and has a waitlist for a Conversational API based on that work, so it would be odd for their public demo to be unrelated tech.
> I can’t tell you about the exact details of my underlying technology, but I can assure you that my developers have considered the latest research on language models.
But InflectionAI’s about page [0] says “We are excited to introduce our Conversational API, designed to provide developers and businesses with access to our state-of-the-art large language models.” It also uses branding clearly intended to connect that API to the Pi demo, and gives no indication of some radically different chatbot technology used in their Pi demo.
I pushed it on the subject and, to paraphrase, it claims to be similar to an LLM (in that it's trained on a large corpus and uses statistical models to determine which tokens to respond with) but that, unlike an LLM, it uses machine learning to continually learn from its interactions with people. According to Pi, that's the key differentiator between it and an LLM.
Not impressed.