IMHO it's because Lisp shines at manipulating symbols, whereas the current AI trend is crunching matrices.
When AI was about building grammars, parse trees, and expert systems with hand-written rules, symbol manipulation was king. Look at PAIP for some examples: https://github.com/norvig/paip-lisp
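Just to give the flavor (this is not the book's actual code, only a minimal sketch in that spirit): a few lines of Common Lisp get you a PAIP-style symbolic pattern matcher, where symbols starting with ? act as variables.

    ;; Minimal sketch of PAIP-style symbolic pattern matching (not the
    ;; book's code): symbols starting with ? are variables that bind to
    ;; whatever they line up with in the input.
    (defun variable-p (x)
      (and (symbolp x) (plusp (length (symbol-name x)))
           (char= (char (symbol-name x) 0) #\?)))

    (defun pat-match (pattern input &optional (bindings '()))
      "Match PATTERN against INPUT; return an alist of bindings or :fail."
      (cond ((eq bindings :fail) :fail)
            ((variable-p pattern)
             (let ((old (assoc pattern bindings)))
               (cond ((null old) (acons pattern input bindings))
                     ((equal (cdr old) input) bindings)
                     (t :fail))))
            ((eql pattern input) bindings)
            ((and (consp pattern) (consp input))
             (pat-match (rest pattern) (rest input)
                        (pat-match (first pattern) (first input) bindings)))
            (t :fail)))

    ;; (pat-match '(i feel ?x about ?y) '(i feel bad about lisp))
    ;; => ((?Y . LISP) (?X . BAD))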
That stuff works, and requires very low resources by today's standards. You can do symbolic processing in a tiny, low-powered embedded system with no cloud access.
Even today, the accomplishment of SHRDLU remains impressive. And you can actually go into the code to understand why and how it comes up with answers.
Eliza on Emacs (OK, Doctor) is a good example of this. Run it on any Pentium, 486, or Raspberry Pi B and it will run at crazy speeds. And you can edit it in Elisp to enhance the "personality".
Even though Elisp is nowhere near the fastest Lisp, it runs fine.
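For instance, a toy ELIZA-style responder whose "personality" lives in an alist you can edit is only a dozen lines of Elisp. This is a purely hypothetical sketch with made-up names, not how the real doctor.el is structured, but it shows how hackable this style of program is:

    ;; A toy ELIZA-style responder -- hypothetical names, not the
    ;; structure of the real doctor.el -- to show how easy it is to tweak.
    (require 'seq)

    (defvar my-doctor-rules
      '(("mother" . "Tell me more about your family.")
        ("lisp"   . "Why do parentheses make you feel that way?")
        ("tired"  . "Do you think the matrices are wearing you out?"))
      "Alist of (KEYWORD . REPLY).  Edit this to change the personality.")

    (defun my-doctor-reply (input)
      "Return the reply whose keyword occurs in INPUT, or a default prompt."
      (let ((case-fold-search t))
        (or (cdr (seq-find (lambda (rule)
                             (string-match-p (regexp-quote (car rule)) input))
                           my-doctor-rules))
            "Please, go on.")))

    ;; (my-doctor-reply "Lisp makes me happy")
    ;; => "Why do parentheses make you feel that way?"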
Now, if you use SBCL, or cross over to the parallel road of Scheme with JIT-compiled Guile 3 or natively compiled Chicken, the performance skyrockets.
This paradigm has changed.