The issue you have circled around here is a bit deeper and more nuanced than trimming exponential code down to something more tractable. In fact, it's often worse, and O(2^n) can be a blessing!
The problem with logic programming is the 'logic' part.
Modern imperative and functional programming languages are constructive.
Logic programming is not, and the expressive power varies with the exact logic being used.
Elementary logic gives you the following intuition.
Propositional logic -- easy.
First order logic -- easy (with caveats).
And (some kinds of) second order logic -- only easy if you get lucky with the problem you are trying to solve.
For logic-based programming languages and systems, both the language implementer and the programmer have to be careful to support and use only language constructs that boil down to tractable computation.
This is much more difficult than it seems.
For example, when using SMT solvers you quickly learn that multiplication and division by constants are very fast, while the same operations on variables can be intractable: reasoning about x * C is easy, while x * y will often (though not always) hang the solver.
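To make the asymmetry concrete, here is a loose analogy in plain Python (not an actual SMT solver, and these function names are just for illustration): inverting multiplication by a known constant is a single division, while inverting a product of two unknowns gives you nothing to invert, so you fall back to searching candidate pairs.

```python
# Loose analogy (not an SMT solver): why x * C is easy and x * y is hard.

def solve_const_mult(c, k):
    """Solve x * c == k for integer x: one division, O(1)."""
    if c != 0 and k % c == 0:
        return k // c
    return None  # no integer solution

def solve_var_mult(k, bound):
    """Solve x * y == k for integers 1 <= x, y <= bound.
    There is no direct inverse, so we search candidate pairs."""
    return [(x, k // x) for x in range(1, bound + 1)
            if k % x == 0 and k // x <= bound]

print(solve_const_mult(5, 20))  # x * 5 == 20  ->  4
print(solve_var_mult(21, 21))   # x * y == 21  ->  [(1, 21), (3, 7), (7, 3), (21, 1)]
```

A real nonlinear solver is far cleverer than brute force, but the underlying shape is the same: the constant case has a closed-form inverse, the variable case has a search space.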
Of the logic programming languages, Prolog is actually pretty friendly when it comes to construction. You can easily force evaluation with an empty-head clause, i.e. ":- thing1(x), thing2(y).", and Prolog gives you a lot of tools to control the current database with assert/retract (handy for memoization). miniKanren (which core.logic is based on) is much more restrictive and really wants things to be pure, though it has the advantage of working as a small-runtime library inside an imperative language.
Neat, I wonder if this idea could be developed into a more refined programming experience where you spend time in the imperative world by default and only express some iterations as logic puzzles to be solved like this.
I've actually seen this usage pattern in the practical usage of logic programming libraries. Common Lisp's Screamer [1], for instance, or Clojure's less-mature core.logic [2].
Though to be fair, both of these libraries are in Lisp languages, which inherently support the design pattern of slipping into/out-of DSLs.
miniKanren/microKanren (which is what core.logic is based on) actually works really well as a library in most languages. Nothing about it requires macros to be easy to use, just higher-order functions.
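To illustrate the "just higher-order functions" point, here is a minimal microKanren-style sketch in Python. The names are hypothetical, and a real implementation adds lazy streams for fair search and proper fresh-variable management, but the core is exactly this: a goal is a function from a substitution to a list of substitutions, and the combinators are ordinary higher-order functions.

```python
# Minimal microKanren-style core: goals are higher-order functions,
# no macros required. Hypothetical names; real libraries add lazy
# streams and fresh-variable helpers.

def var(name):
    """A logic variable, represented as a tagged tuple."""
    return ('var', name)

def is_var(t):
    return isinstance(t, tuple) and len(t) == 2 and t[0] == 'var'

def walk(t, subst):
    """Follow variable bindings in the substitution."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    """Return an extended substitution, or None on failure."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    return None

def eq(a, b):
    """Goal: a unifies with b."""
    def goal(subst):
        s = unify(a, b, subst)
        return [s] if s is not None else []
    return goal

def disj(g1, g2):
    """Goal: g1 or g2 (naive list append, not fair interleaving)."""
    return lambda subst: g1(subst) + g2(subst)

def conj(g1, g2):
    """Goal: g1 and g2."""
    return lambda subst: [s2 for s1 in g1(subst) for s2 in g2(s1)]

# Usage: x == 5, and (y == x or y == 6).
x, y = var('x'), var('y')
goal = conj(eq(x, 5), disj(eq(y, x), eq(y, 6)))
for s in goal({}):
    print(walk(y, s))  # prints 5, then 6
```

This is essentially the embedding core.logic and its siblings build on; macros only enter the picture later, as surface sugar over these functions.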