The body of knowledge on Ericksonian hypnotherapy is pretty clear that language acts on Level 1 in ways orthogonal to, and sometimes even opposed to, conscious processing.
I became interested after being medically hypnotized for kidney stone pain. As the hypnotist spoke, I was consciously thinking: "this is dumb, it will never work." And yet it did.
That's exactly your point — I was fully conscious and "at home" the whole time, yet something was processing and acting on the language independently. The question is whether that something shares any computational properties with LLMs, not whether the whole system does.
"It's exactly your point — I was fully conscious and "at home" the whole time, yet something was processing and acting on the language independently."
It's unclear what you're referring to here. You were conscious, you wanted to think the thought "this is dumb, it will never work," and you thought it. What was the independent process?
I think you're creating a false dichotomy between meta-thinking and mere reflex, when in fact most conscious thinking is neither of those.
My understanding is that a hypnotized person is intensely focused on the hypnotist and highly suggestible, but can otherwise carry on a relatively normal conversation with them. And certainly an unhypnotized chattering person is still conscious, aware of the context as well as the subject of their speech. You may find the speech dull and tedious, may even call it "mindless" as an insult, yet it's honestly impossible to dispute that there's an active human mind at work.
I don't think we're far apart. My claim isn't that Level 1 is "mere reflex". It's that language can produce effects at a level that operates independently of (and sometimes in opposition to) conscious evaluation. The hypnosis example is just a clean demonstration of that separation.
Whether LLMs are useful models for studying that level is an empirical question. They're not conscious, but they do learn statistical regularities in language structure, which may be exactly what Level 1 is optimized for.
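For concreteness, here is about the simplest possible illustration of "learning statistical regularities in language structure": a toy bigram model in Python. The corpus and code are invented for this comment and bear no resemblance to how a production LLM actually works; the point is only that regularities of form can be picked up from counts alone, with no grasp of meaning.

```python
# Toy illustration (hypothetical, not any real system): a bigram counter
# that learns which word tends to follow which, from raw text alone.
from collections import Counter, defaultdict

corpus = "you are getting very sleepy . you are feeling calm . you are calm".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_distribution(word):
    """Estimated conditional distribution P(next | word) from the counts."""
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("are"))
# -> roughly {'getting': 0.33, 'feeling': 0.33, 'calm': 0.33}
# Regularities in form, learned with no understanding of meaning at all.
```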