This is an experiment in using a markdown architecture, coordinated through an llms.txt file, to make a book and its associated content as legible as possible to LLMs.
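For context, the llms.txt convention (llmstxt.org) is a plain markdown index placed at a site's root that points LLMs to the canonical markdown sources. Below is a minimal sketch of what such a file might look like for an experiment like this one; the title, summary, and file paths are illustrative assumptions, not the actual contents of the repository.

```markdown
# Example Book Project

> Hypothetical summary: a book and its associated content published as
> plain markdown files so that LLMs can ingest them directly.

## Book

- [Full manuscript](https://example.com/book.md): the complete text as a single markdown file

## Optional

- [Supporting notes](https://example.com/notes.md): background material an LLM can skip when context is tight
```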
No, I think that to experience the illusion, it's enough to be a biological or artificial machine that processes data and reacts differently to the same data.
It's an issue! As someone who's worked on science policy for decades, including time in the Beltway, I can tell you that you cannot develop a robust and world-leading science policy strategy with technologists alone!
Perhaps you don't understand, peon. The technologist is there to take orders and execute. You thought you were being given input/a voice at the table? God, no. That's above your station. Now go on, shoo. Let the monied people talk. We'll call you when we need something done.
I find it depends on context. I write a lot as an academic and author. When I need to generate functional content that has a specific purpose (knowledge base, transfer of information, etc.), I will use AI where it makes sense. Where I write to explore ideas, develop my own thinking, and connect with others in a very relational way, I intentionally do not use AI. Plus, when I write this way, writing is an extension of my identity, and I'd rather not give that away!
I suspect it's the lack of consequences for getting mad that helps - there are no relationships that can be broken and then need mending. Not sure it's healthy, and yes, I do it when Claude simply does not get it or I know I could do better :)
My experience is that it all comes down to personal fit and feel. I switched from ChatGPT to Claude several months ago and much prefer it, although I do get frustrated at glitches and at hitting limits. But I'm a writer and academic, and the LLM fits my purpose better. For what I do, ChatGPT does not feel great to use.
The original files can be browsed on GitHub: https://github.com/2020science/spoileralert-wtf