I believe it can do that, but there are two periods where it can "think":
1. During training, which is very expensive in terms of input text and money. On top of that, training runs often fail outright, which is why you need checkpoints, hyperparameter searches, and so on.
2. During inference, but here it doesn't have the arbitrary thinking and memory abilities a human does: it only has however much space fits in the input tokens plus its weights. Some thoughts can't fit even in an optimal model. See the sketch after this list.
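As a rough illustration of that second limit, here's a minimal sketch. The names (`tokenize`, `MAX_CONTEXT`) are hypothetical stand-ins, not any real model's API; actual limits and tokenizers vary by model:

```python
MAX_CONTEXT = 4096  # hypothetical cap on how many tokens the model can "see"

def tokenize(text: str) -> list[int]:
    # Stand-in tokenizer; real systems use subword schemes like BPE.
    return [ord(c) for c in text]

def prepare_input(conversation: str) -> list[int]:
    tokens = tokenize(conversation)
    # Anything beyond the window is simply dropped: the model has no
    # working memory besides these tokens and its frozen weights.
    return tokens[-MAX_CONTEXT:]
```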
GPT isn't just the model, though: it also has a sampling system, which is a regular computer program (as opposed to a learned one), and that does give it extra abilities.
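To make that last point concrete, here's a minimal sketch of what such a sampling loop might look like (hypothetical names; real systems add tricks like top-k/top-p filtering and repetition penalties). The loop itself is ordinary hand-written code wrapped around repeated forward passes of the learned model:

```python
import math
import random

def sample_next_token(logits: list[float], temperature: float = 0.8) -> int:
    # Softmax with temperature: plain arithmetic, nothing learned here.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(probs)), weights=probs)[0]

def generate(model, prompt_tokens: list[int], max_new: int) -> list[int]:
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        logits = model(tokens)  # one forward pass of the learned model
        tokens.append(sample_next_token(logits))
    return tokens
```

Because this wrapper is ordinary code, it can do things the weights alone can't, like enforcing stop sequences or biasing the output distribution.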