Hopefully it's running local S2T + a local LLM in a native app, with data fully stored on device. I can’t imagine anyone would be happy shipping their private personal thoughts to a cloud LLM + an unknown backend service provider, especially because running this stuff locally is well within the realm of possibility these days.
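For what it's worth, this is genuinely doable in a handful of lines now. A rough sketch, assuming openai-whisper and llama-cpp-python are installed and you have some quantized GGUF model on disk (the file paths are placeholders):

```python
# On-device journal pipeline: local speech-to-text feeding a local LLM.
# Nothing here touches the network; model/file paths are placeholders.
import whisper
from llama_cpp import Llama

# Transcribe the voice note locally with Whisper.
stt = whisper.load_model("base")
text = stt.transcribe("voice_note.wav")["text"]

# Summarize the entry with a local LLM via the llama.cpp bindings.
llm = Llama(model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf")
out = llm(f"Summarize this journal entry:\n{text}\n\nSummary:", max_tokens=128)
print(out["choices"][0]["text"])
```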
I also think it’s interesting to consider from the perspective of side-channel attacks/information leaks. For the most part, text notes are, well, just text. But I shudder to think what other metadata, fingerprinting, or background conversations/information could be extracted from a bunch of audio. But maybe I’m just being overly paranoid.
I would never in a zillion years put things in Notion that would ruin my life if they showed up in a data breach. I don’t even keep a physical journal because I don’t like how hard it is to secure.
Am I in a bubble, or is it the children who are wrong?
>I can’t imagine anyone would be happy shipping their private personal thoughts to a cloud LLM + unknown backend service provider
you're in a tech bubble if you think this.
Do you think all the people on this subreddit https://www.reddit.com/r/CharacterAI/ are keeping private thoughts to themselves when doing roleplay with their virtual companions?
No... I think most people don't think about this stuff at all.
I am the builder. On our platform, voice data is not stored on our servers by default: it is used solely for transcription to text and is deleted immediately afterward. Users can optionally enable storing voice recordings alongside the journal text, but that feature is disabled by default.
We use Microsoft's transcription APIs, which do not use the data to train any models.
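Roughly, the flow looks like this; a simplified sketch using the Azure Speech SDK, not our actual production code (the env var names and the opt-in flag are placeholders):

```python
# Transcribe-then-delete flow: the audio file is removed as soon as the
# text exists, unless the user has explicitly opted in to keeping it.
import os
import azure.cognitiveservices.speech as speechsdk

def transcribe_and_discard(audio_path: str, keep_audio: bool = False) -> str:
    config = speechsdk.SpeechConfig(
        subscription=os.environ["AZURE_SPEECH_KEY"],
        region=os.environ["AZURE_SPEECH_REGION"],
    )
    audio = speechsdk.audio.AudioConfig(filename=audio_path)
    recognizer = speechsdk.SpeechRecognizer(speech_config=config, audio_config=audio)
    # recognize_once handles a single utterance; longer notes would use
    # continuous recognition instead.
    result = recognizer.recognize_once()

    if not keep_audio:  # storage is opt-in and off by default
        os.remove(audio_path)
    return result.text
```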
Additionally, we are developing an "Ask About Your Journal" feature that employs Retrieval-Augmented Generation (RAG) techniques. This feature is also disabled by default and can be enabled by users if desired.
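The retrieval side of that is the usual embed-and-search pattern. A minimal sketch (sentence-transformers and the prompt shape here are illustrative stand-ins, not necessarily our actual stack):

```python
# Minimal RAG sketch for "Ask About Your Journal": embed entries,
# retrieve the closest ones, and pass them to an LLM as context.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

entries = [
    "Felt anxious before the demo, but it went fine.",
    "Long walk by the river; slept much better afterward.",
]
# Normalized embeddings make the dot product equal cosine similarity.
entry_vecs = embedder.encode(entries, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = entry_vecs @ q
    return [entries[i] for i in np.argsort(scores)[::-1][:k]]

question = "How has my sleep been?"
context = "\n".join(retrieve(question))
prompt = f"Context from my journal:\n{context}\n\nQuestion: {question}"
# `prompt` would then go to whichever LLM answers the question.
print(prompt)
```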