This is a great idea! I'm building something very similar with https://practicalkit.com , which is the same concept done differently.
It will be interesting for me, trying to figure out how to differentiate from Claude Cowork in a meaningful way, but there's a lot of room here for competition, and no one application is likely to be "the best" at this. Having said that, I am sure Claude will be the category leader for quite a while, with first-mover advantage.
I'm currently rolling out my alpha, and am looking for investment & partners.
I remember one time at summer camp in the teen dorm I claimed that pain was an illusion, because it was subjective. A girl named Lisa picked up a wooden block and threw it at me. It hit my lip, which started bleeding, and she was immediately horrified at what she had done; but I had to acknowledge that subjective "reality" has an importance to me that objective reality does not.
Interestingly, I had just re-watched the House episode with the CIPA patient in S3, and it touched on this if you squint. The girl, having CIPA, effectively can't feel pain. She can't even feel getting 2nd-degree burns, and it's questionable whether she even felt them poking around in her head, or whether she used that to escape (and fall off a 2nd-story balcony). The only time she felt actual pain was seeing her mother relapse and be wheeled off for more surgery.
She cannot feel what should objectively cause her pain, because for her that subjective experience simply isn't there. Yet truly subjective pain, the pain that comes from emotional connection, is literally the worst pain she can feel.
The guy couldn't emotionally recognise his mother after seeing her and started calling her an imposter. But when he heard her voice over the telephone, he felt the emotional connection and said the person on the other end was indeed his mother. Emotional pathways provide salience information in conjunction with sensory pathways. Any disruption to emotional pathways can override even correct sensory data.
Pain actually has a lot of objective parts to it. There are real chemical and mechanical processes involved. You could even argue the subjective part might be smaller than people think. Mindset can change the experience, but different people might just have different "pain functions" to begin with.
Same idea with hunger and weight gain or loss. Hunger is a biological process. You can push through it, but people also experience it differently because their actual hunger mechanisms differ, not just because they "interpret" it differently.
I don't care about the objective parts; the chemical and mechanical processes would have been exactly the same if it had been Lisa's lip that was bruised and bleeding instead of my own, or the lip of another boy halfway around the world, but it wouldn't have mattered to me in the same way.
On the other hand, it opens up the opportunity to build a language that is extremely easy to use with LLMs. I suspect a lot of issues in LLM usage come from the fact that coding languages are built for humans.
If the abstractions are good, then the LLM has no problem writing the code. That's what we've noticed with Wasp, at least. It's a simple config language, and the rest is React/Node.js, so it works surprisingly well.
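As a rough, hypothetical illustration of the idea (plain TypeScript, not actual Wasp syntax): keep the declarative layer tiny and regular, so the model only has to fill in ordinary code behind it.

```typescript
// Hypothetical declarative layer: a small, regular surface that is easy
// for an LLM to generate correctly.
type PageSpec = { route: string; render: () => string };

const pages: PageSpec[] = [
  { route: "/",      render: () => "<h1>Home</h1>" },
  { route: "/tasks", render: () => "<ul><li>Buy milk</li></ul>" },
];

// The "rest" is ordinary code the model already writes well;
// the config above just pins down the app's structure.
const handle = (path: string): string =>
  pages.find((p) => p.route === path)?.render() ?? "<h1>Not found</h1>";

console.log(handle("/tasks")); // "<ul><li>Buy milk</li></ul>"
```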
You have a function that does A() and another function that does B().
Upon careful inspection, or after just writing/using them 10,000s of times[1], you realize they are both special cases of one general function f()[2]. Congrats, you're likely doing CT now, though barely scratching the surface.
Let's say you find a way to build a function factory that generates explicit instances of f() -> A() and f() -> B() at runtime for your different use cases as they are needed. You do this 100 times, 1,000 times[1], with many different functions, in many different contexts. You eventually realize that if all your functions and their signatures had the same structure[3], it would be quite easy to mix some (or all?) of them with each other, letting you handle a perhaps infinite amount of complexity in a way that's very clean to conceptualize and visualize (there's a small sketch of this after the footnotes). Isn't this just FP? Yes, they're very intimately related.
By this point you're 99.9999% doing CT now, but remember to shower regularly, touch grass etc.
CT formalized these structures with mathematical language, and it turns out that this line of thinking is very useful in many fields like ours (CS), Math, Physics, etc.
1. Which is what happened to me.
2. Which sometimes is a way more elegant and simple solution.
3. This term is fundamental and has way more meaning than what I could write here and what one would think on a first approach to it.
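A minimal sketch of that progression, with hypothetical names and TypeScript chosen purely for illustration: two "different" functions turn out to share one general shape, a tiny factory produces instances of it on demand, and because every function has the same structural signature, composing them becomes mechanical.

```typescript
// Two "different" functions that are really the same shape: string -> string.
const shoutName = (name: string): string => name.toUpperCase() + "!";
const maskEmail = (email: string): string => email.replace(/^[^@]+/, "***");

// The general f(): both are just transformations from A to B.
type Transform<A, B> = (a: A) => B;

// Composition is the "mixing" step: because every Transform has the same
// structural signature, any two with matching ends snap together.
const compose =
  <A, B, C>(g: Transform<B, C>, f: Transform<A, B>): Transform<A, C> =>
  (a: A) => g(f(a));

// A tiny function factory that builds explicit instances as needed.
const suffix = (s: string): Transform<string, string> => (x) => x + s;
const upper: Transform<string, string> = (x) => x.toUpperCase();

const shoutName2 = compose(suffix("!"), upper);   // same behaviour as shoutName
console.log(shoutName2("lisa"));                  // "LISA!"
console.log(maskEmail("lisa@example.com"));       // "***@example.com"
```

The specific functions don't matter; the point is the shared signature shape. Once everything is a Transform, composition is cheap, which is what makes the complexity "clean to conceptualize and visualize."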
UML doesn't give ideas for how to actually structure things. Category theory is primarily a theory of nice ways things can be put together or form relationships while maintaining invariants.
I've noticed a fundamental shift in how I engage with longform text — both in how I use it and how I perceive its purpose.
Longform content used to be something you navigated linearly, even when skimming. It was rich with meaning and nuance — each piece a territory to be explored and inhabited. Reading was a slow burn, a cognitive journey. It required attention, presence, patience.
But now, longform has become iconic — almost like an emoji. I treat it less as a continuous thread to follow, and more as a symbolic object. I copy and paste it across contexts, often without reading it deeply. When I do read, it's only to confirm that it’s the right kind of text — then I hand it off to an LLM-powered app like ChatGPT.
Longform is interactive now. The LLM is a responsive medium, giving tactile feedback with every tweak. Now I don't treat text as a finished work, but as raw material — tone, structure, rhythm, vibes — that I shape and reshape until it feels right. Longform is clay, and LLMs are the wheel that lets me mold it.
This shift marks a new cultural paradigm. Why read the book when the LLM can summarize it? Why write a letter when the model can draft it for you? Why manually build a coherent thought when the system can scaffold it in seconds?
The LLM collapses the boundary between form and meaning. Text, as a medium, becomes secondary — even optional. Whether it’s a paragraph, a bullet list, a table, or a poem, the surface format is interchangeable. What matters now is the semantic payload — the idea behind the words. In that sense, the psychology and capability of the LLM become part of the medium itself. Text is no longer the sole conduit for thought — it’s just one of many containers.
And in this way, we begin to inch toward something that feels more telepathic. Writing becomes less about precisely articulating your ideas, and more about transmitting a series of semantic impulses. The model does the rendering. The wheel spins. You mold. The sentence is no longer the unit of meaning — the semantic gesture is.
It’s neither good nor bad. Just different. The ground is unmistakably shifting. I almost titled this page "Writing Longform Is Now Hot. Reading Longform Is Now Cool." because, in McLuhanesque terms, the poles have reversed. Writing now requires less immersion — it’s high-definition, low-participation. Meanwhile, reading longform, in a world of endless summaries and context-pivoting, asks for more. It’s become a cold medium.
There’s a joke: “My boss used ChatGPT to write an email to me. I summarized it and wrote a response using ChatGPT. He summarized my reply and read that.” People say: "See? Humans are now just intermediaries for LLMs to talk to themselves."
But that’s not quite right.
It’s not that we’re conduits for the machines. It’s that the machines let us bypass the noise of language — and get closer to pure semantic truth. What we’re really doing is offloading the form of communication so we can focus on the content of it.
And that, I suspect, is only the beginning.
Soon, OpenAI, Anthropic, and others will lean into this realization — if they haven’t already — and build tools that let us pivot, summarize, and remix content while preserving its semantic core. We'll get closer and closer to an interface for meaning itself. Language will become translucent. Interpretation will become seamless.
It’s a common trope to say humans are becoming telepathic. But transformer models are perhaps the first real step in that direction. As they evolve, converting raw impulses — even internal thoughtforms — into structured communication will become less of a challenge and more of a given.
Eventually, we’ll realize that text, audio, and video are just skins — just surfaces — wrapped around the same thing: semantic meaning. And once we can capture and convey that directly, we’ll look back and see that this shift wasn’t about losing language, but about transcending it.
For me, it was a skill issue. Most people learn it when very young. Just repeated practice helped... and someone close to me coached me on things that seemed like common sense to others but were counterintuitive to me. Over time, my neurons rewired themselves. I'm fairly good at small talk now. People don't believe me when I say I couldn't even order pizza over the phone at one point.
Are you young enough to have grown up in a house without a land line by chance?
I think land lines are where many current adults (who grew up before cell phones were ubiquitous) learned a lot of that common sense, because in order to get in touch with anyone you had to be willing and able to make small talk with whoever picked up the phone first - chatty mothers, asshole brothers, mostly-deaf grandfathers, etc.