This quote from the front page reminds me of the motivation for Autograd (and other AD frameworks)
> just write down the loss function using a standard numerical library like Numpy, and Autograd will give you its gradient.
or even probabilistic programming languages like Stan, where you can write down a Bayesian model and get posterior samples.
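The "write the loss, get the gradient" idea can be sketched in a few lines of pure Python. This is a hypothetical toy reverse-mode AD, not Autograd's actual implementation — it just shows how overloading arithmetic lets a framework trace a computation and propagate gradients back through it:

```python
# Minimal reverse-mode automatic differentiation sketch (toy code,
# illustrating the idea behind Autograd, not its real internals).

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # (parent Var, local derivative) pairs
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Accumulate the gradient, then push it to each parent
        # weighted by the local derivative (the chain rule).
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# "Just write down the loss" with ordinary arithmetic...
x = Var(3.0)
loss = x * x + x      # f(x) = x^2 + x
loss.backward()       # ...and the gradient df/dx = 2x + 1 falls out
print(x.grad)         # 7.0 at x = 3
```

Real frameworks do the same thing over NumPy arrays, with a proper topological ordering instead of naive recursion, but the core trick is just this operator-overloading trace.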
Working backwards (as I know Stan but not Picat), I'd guess that to really put a language like this to work you need to be aware of the limits of the implementation, and how to dance around them.
This seems like an important and generally underdocumented aspect of language characterization. I wonder how that might be improved?