It's perfect for declaring the known facts and letting it find the best possible answers automatically. Not as strict as Prolog, and with loops. Much easier to work with and extremely fast. I generate the facts automatically.
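As a rough illustration of that declare-facts-then-query style, here's a brute-force Python analogue (not Picat syntax; the `PARENT` facts and the grandparent relation are invented for the example):

```python
# Hypothetical facts, declared as plain data rather than computed.
PARENT = {("alice", "bob"), ("bob", "carol"), ("alice", "dan")}

def grandparents(facts):
    """Derive the grandparent relation by joining the parent facts on the
    shared middle person, the way a logic engine would."""
    return {(g, c) for (g, p) in facts for (q, c) in facts if p == q}

print(sorted(grandparents(PARENT)))  # [('alice', 'carol')]
```

In Picat (as in Prolog) the join is implicit in the rule body; the point is just that you state relations and let the engine search, instead of writing the traversal yourself.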
This quote from the front page reminds me of the motivation for Autograd (and other AD frameworks):
> just write down the loss function using a standard numerical library like Numpy, and Autograd will give you its gradient.
or even probabilistic programming languages like Stan, where you can write down a Bayesian model and get posterior samples.
Working backwards (as I know Stan but not Picat), I'd guess that to really put the language to work you need to be aware of the limits of the implementation, and how to dance around them.
This seems generally an important and underdocumented aspect of language characterization. I wonder how that might be improved?
With version 3.0, Picat is also mostly backward-compatible with Prolog.
In my opinion, Picat's biggest attraction is its facilities for constraint solving and optimization, which are in some respects state of the art.
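In Picat this kind of thing goes through its constraint modules (`cp`, `sat`, `mip`): you post constraints over decision variables and ask the solver to optimize. As a language-neutral sketch of that constrain-then-search pattern, here is a brute-force Python version of a tiny made-up optimization problem (a real solver would prune the search rather than enumerate):

```python
from itertools import product

# Toy problem (invented for illustration): choose integers x, y in 0..10
# maximizing 3x + 2y subject to x + y <= 10 and x <= 2y.
best = max(
    ((x, y) for x, y in product(range(11), repeat=2)
     if x + y <= 10 and x <= 2 * y),
    key=lambda p: 3 * p[0] + 2 * p[1],
)
print(best, 3 * best[0] + 2 * best[1])  # (6, 4) 26
```

The appeal of a constraint language is that you only write the middle part, the constraints and the objective; the enumeration strategy belongs to the solver.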