For a medical & caretaking project, I experimented with combining symbolic logic with LLMs to mitigate their tendency toward nondeterministic behavior and hallucinations. There is still a lot of work to be done, but it's a promising approach for situations that require higher reliability.
The core idea is to use an LLM to extract logic predicates from human input. Answer-set programming then derives additional information from these predicates reliably, based on expert knowledge encoded as rules. In addition, inserting only the known facts back into the prompt removes the need to provide the full conversation history, which mitigates hallucinations in longer conversations.
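To make the pipeline concrete, here is a minimal sketch of the idea in Python, using clingo (a common answer-set solver with Python bindings) for the rule-based derivation step. The predicates (`reports/2`, `takes/2`, `needs_followup/1`), the rule base, and the `llm_complete` helper are illustrative assumptions, not the actual project code.

```python
# pip install clingo
import clingo

# Expert knowledge encoded as answer-set programming rules.
# These predicates and rules are illustrative only.
RULES = """
% A patient who reports dizziness and takes blood-pressure medication
% needs a follow-up check.
needs_followup(P) :- reports(P, dizziness), takes(P, bp_medication).
#show needs_followup/1.
"""


def extract_facts(utterance: str) -> str:
    """Ask an LLM to translate free text into ASP facts.

    `llm_complete` stands in for whatever LLM client you use; the only
    contract is that the model returns facts such as
    `reports(alice, dizziness).`
    """
    prompt = (
        "Translate the following statement into answer-set programming "
        "facts using the predicates reports/2 and takes/2:\n" + utterance
    )
    return llm_complete(prompt)  # hypothetical LLM client call


def derive(facts: str) -> list[str]:
    """Combine LLM-extracted facts with the rule base and solve."""
    ctl = clingo.Control()
    ctl.add("base", [], RULES + facts)
    ctl.ground([("base", [])])
    derived: list[str] = []
    ctl.solve(
        on_model=lambda m: derived.extend(str(s) for s in m.symbols(shown=True))
    )
    return derived


# Example: facts an LLM might have extracted from a conversation.
facts = "reports(alice, dizziness). takes(alice, bp_medication)."
print(derive(facts))  # -> ['needs_followup(alice)']
```

The derived facts (here, `needs_followup(alice)`) are what gets fed back into the next prompt, so the LLM only ever sees a compact, verified set of facts instead of the raw conversation history.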