
> it allows [one] to directly convert the problem statement into an efficiently solvable declarative problem specification without inventing an imperative algorithm. -- Sergii Dymchenko
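
To make that concrete: not Picat, but the same "state the constraints, skip the imperative algorithm" idea, sketched with the Z3 solver's Python bindings (z3-solver on PyPI). SEND + MORE = MONEY:

    from z3 import Ints, Solver, Distinct, sat

    S, E, N, D, M, O, R, Y = Ints('S E N D M O R Y')
    s = Solver()
    for v in (S, E, N, D, M, O, R, Y):
        s.add(0 <= v, v <= 9)               # every letter is one digit
    s.add(Distinct(S, E, N, D, M, O, R, Y)) # all letters distinct
    s.add(S > 0, M > 0)                     # no leading zeros
    s.add(1000*S + 100*E + 10*N + D
          + 1000*M + 100*O + 10*R + E
          == 10000*M + 1000*O + 100*N + 10*E + Y)
    if s.check() == sat:
        print(s.model())                    # 9567 + 1085 = 10652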

This quote from the front page reminds me of the motivation for Autograd (and other AD frameworks):

> just write down the loss function using a standard numerical library like Numpy, and Autograd will give you its gradient.

or even probabilistic programming languages like Stan, where you can write down a Bayesian model and get posterior samples.
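
Concretely, the Autograd workflow is just this (toy data and a made-up least-squares loss; grad is the real API):

    import autograd.numpy as np   # drop-in wrapper around NumPy
    from autograd import grad

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([2.0, 4.0, 6.0])

    def loss(w):
        return np.sum((w * x - y) ** 2)   # ordinary NumPy code

    dloss = grad(loss)   # the gradient function, derived automatically
    print(dloss(1.5))    # d(loss)/dw at w = 1.5 -> -14.0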

Working backwards (as I know Stan but not Picat), I guess that to really put the language to work you need to be aware of the limits of the implementation and how to dance around them.

> to really put [a] language to work you need to be aware of the limits of the implementation and how to dance around them

This seems like a generally important and underdocumented aspect of language characterization. I wonder how that might be improved?


Well, instead of a language pretending its abstractions are watertight, be transparent: present them as leaky where they are. Pure functions are a good example of a transparent abstraction, since you can always step in, see what's happening, and step out. Strict functions are OK. Macros are OK. Beyond that, someone somewhere is hiding something and you have to figure it out. The same thing happens at the CPU level.
