One solution would be to train the AI to generate notes to itself about each session, so that rather than reviewing the entire raw transcript, it could review its own condensed summary.
EDIT: Another solution would be to store the session logs separately and, before each session, fine-tune the model on your particular sessions; that could give it a "memory" as good as a typical therapist's.
Yeah, I was thinking you could basically take each window of 8192 tokens (or whatever the limit is) and compress it down to a much smaller summary, keep the compressed summary in the window, and then any time the model searches its previous summaries and gets a hit, it can decompress that summary back to the full text and use it. Basically, integrate search and compression into the context window itself.
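Roughly, the loop might look something like the sketch below. This is a minimal illustration of the idea, not anyone's actual implementation: `summarize` and `similarity` are hypothetical stand-ins for an LLM summarization call and an embedding-based search, and the token sizes are just the numbers from the comment above.

```python
# Sketch: keep short summaries in the context window, archive the full chunks
# outside it, and "decompress" a chunk only when a search over the summaries hits.
from dataclasses import dataclass, field

WINDOW_TOKENS = 8192    # size of one raw transcript chunk (from the comment above)
SUMMARY_TOKENS = 512    # assumed target size for its compressed summary


@dataclass
class SessionMemory:
    summaries: list[str] = field(default_factory=list)    # stays in the context window
    full_chunks: list[str] = field(default_factory=list)  # stored outside the window

    def ingest(self, chunk: str) -> None:
        """Compress a full window of transcript into a short note; archive the original."""
        self.summaries.append(summarize(chunk, max_tokens=SUMMARY_TOKENS))
        self.full_chunks.append(chunk)

    def recall(self, query: str, threshold: float = 0.5) -> list[str]:
        """Search the in-window summaries; on a hit, return the full archived chunk."""
        hits = []
        for i, summary in enumerate(self.summaries):
            if similarity(query, summary) >= threshold:
                hits.append(self.full_chunks[i])  # swap the summary for the full text
        return hits


def summarize(text: str, max_tokens: int) -> str:
    # Placeholder: in practice this would be an LLM call asking for condensed notes.
    return text[: max_tokens * 4]  # crude stand-in, ~4 characters per token


def similarity(query: str, summary: str) -> float:
    # Placeholder: a real system would use embeddings; here, simple word overlap.
    q, s = set(query.lower().split()), set(summary.lower().split())
    return len(q & s) / max(len(q), 1)
```

The point of the split is that only the summaries cost context-window tokens on every turn; the full chunks are paid for only when a search actually needs them.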