> We humans often tend to follow a similar process, but we can actively choose to do real critical thinking instead.
> - it may confidently present a false conclusion to itself, then expand that conclusion into a whole thread
I want to know how that differs from human "real critical thinking", because I may be missing this function. How do you know whether what you thought of is true or false? I only know it because I think I know it. I have made a lot of mistakes in the past, with a lot of confidence.
> The more unique and interesting your conversation is, the less utility these models will have.
Yeah, that also happens with a lot of people I know.
> ... the result is likely to be logically coherent. It's even likely to be correct!
Yeah, a lot of training data made sure that what it outputs is as correct as possible. I still remember my own training over many days and nights to be able to multiply properly, with two different versions of the multiplication table and many wrong results until I got it right.
> I guess the crux of it is this: is it training or awareness?
I don't think LLMs are really aware (yet). But they do follow a logical reasoning method, even if an imperfect one.
Just a thought: when do you think about how and what you think (awareness of your own thoughts)? While you are actually thinking through a problem, or after that thinking is done? Maybe to be self-aware, AIs would need some "free-thinking time". Right now the instruction is "think about this problem and then immediately stop; do not think any more", and the training data discourages any "out-of-context" thinking, so they don't do it.
We know what true and false mean. An LLM knows what true and false are likely to be surrounded with.
The problem is that expressions of logic are written in many ways. Because we are talking about instances of natural language, they are often ambiguous. LLMs do not resolve ambiguity. Instead, they continue it with the most familiar patterns of writing. This works out when two things are true:
1. Everything written so far is constructed in a familiar writing pattern.
2. The familiar writing pattern that follows will not mix up the logic somehow.
The self-prompting, train-of-thought LLM pattern is good at keeping its exploration inside these two domains. It starts by attempting to phrase its prompt and context in a particular familiar structure, then continues to rephrase it with a pattern of structures that we expect to work.
Much of the logic we actually write is quite simple. The complexity is in the subjects we logically tie together. We also have some generalized preferences for how conditions, conclusions, etc. are structured around each other. This means we have imperfectly simplified the domain that the train-of-thought writing pattern is exploring. On top of that, the training corpus may include many instances of unfamiliar logical expressions, each followed by a restatement of that expression in a more familiar/compatible writing style. That can help trim the edge cases, but it isn't perfect.
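To make that concrete, here is a minimal sketch of the loop I mean (the `complete` function is a stand-in for whatever text-completion call you have; it is hypothetical, not a specific API):

```python
# Minimal sketch of the self-prompting, train-of-thought loop described above.
# `complete` is a stand-in for any text-completion call (hypothetical, not a
# real API): it takes a prompt string and returns the model's continuation.
from typing import Callable

def train_of_thought(question: str,
                     complete: Callable[[str], str],
                     steps: int = 4) -> str:
    # 1. Push the problem into a familiar writing structure, so later
    #    continuations stay inside patterns the model has seen many times.
    context = complete(
        "Restate the following problem plainly, listing what is given "
        "and what is asked:\n" + question
    )

    # 2. Keep extending within that structure. Each pass is just "continue
    #    the familiar pattern"; nothing here checks the logic itself.
    for _ in range(steps):
        context += "\n" + complete(
            context + "\nTherefore, the next step of the reasoning is:"
        )

    # 3. Ask for a conclusion phrased the way conclusions usually look.
    return complete(context + "\nSo, the final answer is:")
```

Every step in the sketch is still pattern continuation; the fixed structure only narrows which patterns get continued.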
---
What I'm trying to design is a way to actually resolve ambiguity and do real logical deduction from there. Because ambiguity cannot be resolved to a single correct result (that's what ambiguity means), my plan is to use an arbitrary backstory for disambiguation each time. This way, we could be intentional about the process instead of relying on the statistical familiarity of tokens to choose for us. We could also verify that the process itself is logically sound, and fix it where it breaks.
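Roughly, I picture something like the sketch below (all the names are hypothetical; it's an outline of the idea rather than an implementation): each ambiguous sentence gets an explicit set of readings, an arbitrarily chosen backstory picks one, and only then does deduction run on the now-unambiguous premises.

```python
# Rough sketch of the "arbitrary backstory" idea (hypothetical names, not an
# existing library). The disambiguating choice is made explicitly and recorded,
# instead of being made implicitly by token statistics.
import random
from dataclasses import dataclass

@dataclass
class Reading:
    paraphrase: str  # one unambiguous interpretation of the input sentence
    backstory: str   # the concrete scenario that justifies this interpretation

def disambiguate(sentence: str, readings: list[Reading],
                 rng: random.Random) -> Reading:
    # Ambiguity has no single correct resolution, so pick an arbitrary reading,
    # but keep the backstory attached so the choice stays inspectable.
    return rng.choice(readings)

def deduce(premises: list[str]) -> list[str]:
    # Placeholder: once every premise is unambiguous, this becomes ordinary
    # logical inference, which can be checked (and fixed) independently of
    # any language model.
    return premises

if __name__ == "__main__":
    readings = [
        Reading("He used a telescope to see the woman.",
                "He is an amateur astronomer standing on a hill."),
        Reading("He saw the woman who was holding a telescope.",
                "She had just bought the telescope at a flea market."),
    ]
    chosen = disambiguate("He saw the woman with the telescope.",
                          readings, random.Random(0))
    print(chosen.backstory, "->", chosen.paraphrase)
```

The backstory is arbitrary by design; what matters is that it is written down, so the deduction that follows is about one definite reading rather than a statistical blend of all of them.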