
Barliman: Prototype of a smart text editor - adamnemecek
https://github.com/webyrd/Barliman
======
runeks

        > given a fully or mostly-specified definition of some function foo, Barliman will attempt to prove that a partially-specified test is consistent with, or inconsistent with, the definition of foo.
    

I feel like this is what GHC (the Haskell compiler) does for me. Where the
"test", in this case, is the type definition. And there is no room for error
here.

So it's possible that this idea makes more sense for languages without a
strong type system. In any case, trying to complete a partial function
definition without looking at the native types of the language means you miss
a lot of information, especially in the case of Haskell, where the types tell
you so much, if not everything, you need to know about a function.

~~~
will_byrd
Thank you for your comments!

I agree that type-directed synthesis would be very helpful. One of the
inspirations for Barliman is the type-and-example-directed program synthesis
by Peter-Michael Osera and Steve Zdancewic:

[http://www.cis.upenn.edu/~stevez/papers/OZ15.pdf](http://www.cis.upenn.edu/~stevez/papers/OZ15.pdf)

I want to experiment with a typed version of Scheme (a mini-ML with
s-expressions) to see how much types would improve synthesis. In previous work
with Michael Ballantyne, Michael showed that adding types could significantly
improve the synthesis capability of the relational interpreter used by
Barliman.

The miniKanren constraint logic system used by Barliman can be used to write
type checkers and type inferencers that can perform type inhabitation, similar
in spirit to the Djinn system mentioned by Cybiote. This is a topic Oleg
Kiselyov and Dan Friedman have explored, for example. I'm very interested in
improving our ability to write more sophisticated type inferencers in
miniKanren, and in using the inferencer to guide synthesis.
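
For a taste of what type inhabitation looks like, here is a minimal sketch of
a relational type inferencer for a tiny lambda-calculus fragment (my own
illustration, not the actual inferencers Oleg and Dan wrote; it assumes a
miniKanren that provides `symbolo` and `=/=`, as current implementations do):

    ;; (!-o gamma expr type): expr has type `type` in environment gamma
    (define (!-o gamma expr type)
      (conde
        ;; variable reference
        ((symbolo expr) (lookupo gamma expr type))
        ;; lambda abstraction
        ((fresh (x body tx tbody)
           (== expr `(lambda (,x) ,body))
           (== type `(-> ,tx ,tbody))
           (symbolo x)
           (!-o `((,x . ,tx) . ,gamma) body tbody)))
        ;; application
        ((fresh (rator rand trand)
           (== expr `(,rator ,rand))
           (!-o gamma rator `(-> ,trand ,type))
           (!-o gamma rand trand)))))

    ;; look up a variable's type in the environment
    (define (lookupo gamma x type)
      (fresh (y ty rest)
        (== gamma `((,y . ,ty) . ,rest))
        (conde
          ((== x y) (== type ty))
          ((=/= x y) (lookupo rest x type)))))

Running the relation "backwards"--`(run 1 (e) (!-o '() e '(-> A A)))`--performs
inhabitation, and should produce a term alpha-equivalent to the identity
function, `(lambda (_.0) _.0)`.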

Part of the motivation for Barliman is to explore what can be done for
dynamically typed languages. Perhaps something like soft typing or gradual
typing would be useful in this context.

Another idea I want to explore is using high-level properties, as in
property-based testing, or even software contracts, to help guide synthesis.

I should mention that Greg Rosenblatt has improved Barliman's synthesis
abilities recently, by specializing the underlying relational interpreter,
using techniques similar to those in the Mercury compiler. I need to update
the documentation to show the improved synthesis, which still needs work in
order to be useful.

I'm also working with Rob Zinkov and Evan Donahue on using stochastic search
to improve miniKanren's synthesis ability.

I'm very much open to collaboration on Barliman, synthesis, relational
interpreters/type inferencers, constraint logic programming, etc.

~~~
runeks
I guess I don't quite understand the root purpose of program synthesis. It's
my impression, as a programmer, that the difficult task is defining one's
program logic without ambiguity. If this is indeed the case, then program
synthesis already exists in the form of Coq, but it doesn't mean we've made
programming easier, it means we've created a tool that reveals _all_ possible
failure points of a program, forcing us to consider all possible outcomes. It
makes for a provably un-crashable program, but we pay for it in the time it
takes to create the specification/implementation (which at this point have
converged into one). If I can write a specification in a language that some
tool can transform into a source code implementation (which can then be
compiled and executed), then I wouldn't call that tool a "program synthesizer"
but rather a "compiler" (and the specification language would then be a
programming language).

~~~
will_byrd
Good questions!

> It's my impression, as a programmer, that the difficult task is defining
> ones program logic without ambiguity.

Certainly the unambiguous specification of a program's behavior is a difficult
task! Barliman's approach is to allow for the behavior of a function to be
_partially specified_. That is, the specification of behavior is _ambiguous_,
since the examples can only specify the intended behavior over a small number
of inputs/expected outputs. Many other synthesis tools allow for ambiguous
specification of behavior, and use heuristics such as parsimony (smallest
program is assumed to be the correct choice) to pick one of infinitely many
programs consistent with the partial specification.

The idea for Barliman is to make the writing/editing of the code interact with
synthesis-by-example, in order to give the programmer continuous feedback on
whether the current code is consistent with the tests, and hopefully to make a
reasonable guess at any missing code. As the programmer fills in the code for
a function, the search space shrinks. Still, there may be infinitely many
definitions that satisfy the partial specification, so Barliman sometimes
guesses a goofy definition!
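
To make that concrete, here is a sketch of the kind of miniKanren query that
underlies the interaction (assuming `evalo` is the relational interpreter; the
queries Barliman actually constructs differ in detail). The logic variable `q`
stands for a hole the programmer hasn't filled in yet, and the example test
constrains its completions:

    (run 1 (q)
      (evalo `(letrec ((append
                         (lambda (l s)
                           (if (null? l)
                               ,q  ; the hole
                               (cons (car l) (append (cdr l) s))))))
                (append '(1 2) '(3 4)))
             '(1 2 3 4)))

Both `s` and the constant `'(3 4)` are completions consistent with this one
test; a parsimony-biased search tends to return one of the smallest candidates
first, which is exactly where the goofy guesses come from.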

I want to explore adding high-level properties, contracts, optional types,
etc., to Barliman, to make it possible to specify the desired behavior in more
detail. However, I want to make all of these specifications _optional_, to
maintain the flexibility of a language like Scheme. I think this is still an
under-explored area in program synthesis, especially given how much code is
written in JavaScript, Python, Ruby, etc.

> If I can write a specification in a language that some tool can transform
> into a source code implementation (which can then be compiled and executed),
> then I wouldn't call that tool a "program synthesizer" but rather a
> "compiler" (and the specification language would then be a programming
> language).

It is true that if you completely specify/prove correct your program in Coq
(for example), you can extract a running program from that specification.
This is an interesting and powerful technique, and is in many ways similar to
a compiler, as you point out.

However, I don't think of this approach as program synthesis. In program
synthesis I might give a _non-constructive_ specification of a program's
behavior, and ask the computer to find such a program. For example, I could
completely specify what it means for a number `n` to be the square root of
another number `m` without giving an algorithm to find `n`. If I know of such
an algorithm, I may be able to prove it correct in Coq, and extract running
code from Coq. However, I can't expect that Coq will _find_ the algorithm for
me. To me this is the difference between proving code correct and performing
synthesis.

However, the division is somewhat murky--some forms of synthesis can be used
to prove correctness of the program. A well-known example in miniKanren (the
system Barliman uses for synthesis) is to generate a quine directly from the
mathematical definition of a quine. A quine in
Scheme is some expression `q` that evaluates to itself. In a relational Scheme
interpreter implemented in miniKanren, we can generate quines by running the
query:

(run n (q) (evalo q q))

Where `n` is the number of quines you would like to generate, `evalo` is the
name of the relational Scheme interpreter, and `run` is miniKanren's interface
operator. One of the quines generated is:

((lambda (_.0) (list _.0 (list 'quote _.0)))
 '(lambda (_.0) (list _.0 (list 'quote _.0))))

where `_.0` is a legal Scheme identifier. This expression is alpha-equivalent
to the most famous Scheme quine
([https://www.nyx.net/~gthompso/quine.htm](https://www.nyx.net/~gthompso/quine.htm)).

In the case of quine generation, we are _proving_ that the generated program
is a quine by synthesizing it from its definition. (Assuming that both
miniKanren and the relational Scheme interpreter are correct, of course!) We
have also synthesized fixpoint combinators from their definitions in a similar
fashion. Note that this approach is different from, but complementary to, the
synthesis-from-example approach used by Barliman.

If the quines example interests you, you might look at our paper and code from
the 2012 Scheme Workshop:

William E. Byrd, Eric Holk, and Daniel P. Friedman. miniKanren, Live and
Untagged: Quine Generation via Relational Interpreters (Programming Pearl).
Proceedings of the 2012 Workshop on Scheme and Functional Programming,
Copenhagen, Denmark, 2012.

[http://webyrd.net/quines/quines.pdf](http://webyrd.net/quines/quines.pdf)

[https://github.com/webyrd/2012-scheme-workshop-quines-paper-code](https://github.com/webyrd/2012-scheme-workshop-quines-paper-code)

------
fspeech
Byrd has been working on miniKanren for over a decade. He has shown before how
you can easily program miniKanren to produce quines through search.

The question I would have is whether producing specifications for trivial
tasks is just as hard, if not harder, than programming the same tasks. Often
the program is the specification.

~~~
lopatin
Agreed, because the problem domain/system dynamics needs to be encoded into
Scheme before synthesis can be useful for high level specs. If this kind of
programming style takes off, it could possibly be useful for communities to
supply "starter templates" for common patterns and system behaviors that they
manually reason about all the time.

~~~
will_byrd
Yes, using templates for common patterns could be helpful. Or even using
machine learning over existing code bases to infer such templates.

Similar to the use of templates is Matt Might's suggestion of using Barliman
for code repair--given an _almost correct_ definition of a function, and both
passing and failing tests, try to massage the code to make it pass all the
tests. I think templates could be useful here.

There are many approaches to synthesis, and Barliman's implementation is still
very naive. Experiments by Michael Ballantyne and Greg Rosenblatt have shown
that some synthesis tasks in miniKanren (used internally by Barliman) can be
sped up by factors of thousands to millions (by restricting the grammar and
using a type system, or by performing program specialization on the relational
Scheme interpreter, as in the Mercury compiler). Alas, since synthesis is
inherently exponential in terms of program size, a speedup of a million means
that you can synthesize only a slightly larger expression. And the techniques
Michael and Greg have used don't work for all problems, and can reduce the
generality of the programs that can be synthesized.

Still, given how many synthesis techniques, tricks, and optimizations that we
_haven't_ tried, I'm optimistic that Barliman's synthesis can be improved
enough to be practical for at least some uses. What those uses are, I'm not
sure.

------
will_byrd
Thank you for the interest in Barliman!

I'll try to answer all of the questions this afternoon.

Also, I'm very much open to collaboration on Barliman, synthesis, relational
interpreters/type inferencers, constraint logic programming, etc.! :)

~~~
adamnemecek
I have some questions.

a.) What do you think about the current/future state of logic programming?

b.) Do you think that logic programming can at some point be 'mainstream'? If
so, what steps need to happen for this to occur?

c.) Is there a particular area of problems that would really benefit from
being solved with logic programming? E.g. I was surprised to learn that
Windows used to have an embedded prolog interpreter for network configuration
and I realized that there must be more areas that could benefit from using
logic programming.

d.) Are there some areas where logic programming should never be used?

e.) What is the thing/insight that sparked your interest in this particular
research area?

~~~
will_byrd
Thanks for the post, the pull requests, and the questions!

I find that it's impossible to give enough context when answering high-level
questions such as these, so some of my answers will probably seem more
provocative or critical than I intend. Please ask me for more details/context
for any answers that seem especially ridiculous/stupid. I'm _not_ promising,
though, that the answers _aren't_ ridiculous/stupid! :)

I'll answer these questions in reverse order.

> e.) What is the thing/insight that sparked your interest in this particular
> research area?

I had zero interest in logic programming until I saw two programs.

The first was a relational type inferencer written in the original Kanren
language, capable of type inhabitation--that is, of synthesizing expressions
of a given type. Dan Friedman showed me this program in his B521 graduate
programming languages course at Indiana University in Fall 2003. He and Oleg
Kiselyov had been playing with these relational type inferencers; this made me
want to write an interpreter for Scheme in the same fashion, to generate
Scheme expressions that evaluate to a given value. It took us a long time to
figure out how to do this in a semi-reasonable way! Our first interesting
relational interpreter is described in this paper:

William E. Byrd, Eric Holk, and Daniel P. Friedman. miniKanren, Live and
Untagged: Quine Generation via Relational Interpreters (Programming Pearl).
Proceedings of the 2012 Workshop on Scheme and Functional Programming,
Copenhagen, Denmark, 2012.

[http://webyrd.net/quines/quines.pdf](http://webyrd.net/quines/quines.pdf)

[https://github.com/webyrd/2012-scheme-workshop-quines-paper-code](https://github.com/webyrd/2012-scheme-workshop-quines-paper-code)

This interpreter is the single most interesting and enjoyable program I've worked
on, and is the basis for Barliman's synthesis. I intend to keep improving this
program for the rest of my life.

The second program was the relational arithmetic system that Oleg Kiselyov
created, and which appears in chapters 7 and 8 of 'The Reasoned Schemer'. This
is the system that convinced me that non-trivial programs can be expressed as
true relations, allowing logic variables to be placed in arbitrary positions,
and that such programs could be efficient enough to produce interesting
answers in practice. A Prolog version of the arithmetic system can be found in
this paper:

Oleg Kiselyov, William E. Byrd, Daniel P. Friedman and Chung-chieh Shan. Pure,
declarative, and constructive arithmetic relations (declarative pearl). In
Proceedings of the 9th International Symposium on Functional and Logic
Programming, ed. Jacques Garrigue and Manuel Hermenegildo, pp. 64-80. LNCS
vol. 4989, Springer, 2008.

[http://webyrd.net/arithm/arithm.pdf](http://webyrd.net/arithm/arithm.pdf)

> d.) Are there some areas where logic programming should never be used?

"Never" is a very serious word! :) I don't think we know enough about _any_
form of programming to use words like "always" and "never." I agree with Gerry
Sussman and Alan Kay that we really don't have any idea of how to program.
That is doubly true for the kinds of relational programs we're exploring with
miniKanren.

Also, I don't really do logic programming, as it's commonly understood. (As in
Prolog programming, for example.) I'm interested in writing non-trivial
programs as _relations_, in which variables representing unknown sub-terms
can be placed anywhere, within any argument. Relational programming overlaps
with traditional logic programming in many ways, but really has a different
feel and different goals, and requires different techniques and different
features from the underlying language.

In the short term I would say that relational programming should be used for
everything, and nothing. 'Nothing,' since it is so difficult to write
efficient relational programs, or even to write queries that are guaranteed to
terminate. 'Everything,' since writing programs as relations is often
shocking, infuriating, and delightful, all at the same time.

For example, when we were working on the relational Scheme interpreter, we had
no idea that it could generate quines from their mathematical definition,
until Stu Halloway pointed this out to me and Dan Friedman at Clojure/conj.
Talk about a bolt from the blue! And later Michael Ballantyne showed that we
could treat the Scheme definition of `append` as a relation by running it
inside the relational Scheme interpreter, instead of writing a special
`appendo` relation directly in miniKanren. This is the technique used by
Barliman! I didn't realize either of these insights, even after many years of
writing relational interpreters. I love writing these short programs that can
surprise their creators in such deep ways, even after many years!
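
For readers who haven't seen it, here is the hand-written `appendo` relation
that Michael's technique makes unnecessary (a standard definition, shown as a
sketch):

    (define (appendo l s out)
      (conde
        ((== l '()) (== s out))
        ((fresh (a d res)
           (== l `(,a . ,d))
           (== out `(,a . ,res))
           (appendo d s res)))))

    ;; running "backwards" enumerates every way to split '(1 2 3)
    (run* (q)
      (fresh (l s)
        (== q `(,l ,s))
        (appendo l s '(1 2 3))))
    ;; => ((() (1 2 3)) ((1) (2 3)) ((1 2) (3)) ((1 2 3) ()))

Michael's insight is that you get the same relational behavior by handing the
ordinary Scheme definition of `append` to `evalo`, with logic variables in the
argument positions, so no special-purpose relation needs to be written at all.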

~~~
will_byrd
> c.) Is there a particular area of problems that would really benefit from
> being solved with logic programming? E.g. I was surprised to learn that
> Windows used to have an embedded prolog interpreter for network
> configuration and I realized that there must be more areas that could
> benefit from using logic programming.

I suppose the standard answer is: rule-based systems that have to perform some
type of logical inference. For example, UBS bank apparently wrote a rule-based
system in core.logic (David Nolen's Clojure library, originally based on
miniKanren, but which now has many extensions). My understanding is that the
UBS system was some sort of expert system, determining when certain financial
operations should occur. This is a very Prology application.

Like I said above, I'm much more interested in relational programming than in
logic programming. I'm not sure which problems benefit most from relational
programming, which is one reason I created the Barliman prototype. Nehal Patel
thinks that the generality/efficiency tradeoff inherent in the relational
interpreter approach to program synthesis makes it most useful for extremely
difficult or extremely general problems that cannot be readily expressed using
other techniques. This would be similar in spirit to techniques like Markov
Chain Monte Carlo--a very general, but potentially slow, technique that should
never be used (here's the "never" word!) if a faster, more specific
technique is applicable, but which can be used to express all manner of messy
computation.

> b.) Do you think that logic programming can at some point be 'mainstream'?
> If so, what steps need to happen for this to occur?

In the 1980s logic programming _was_ mainstream, at least in Japan!
([https://en.wikipedia.org/wiki/Fifth_generation_computer](https://en.wikipedia.org/wiki/Fifth_generation_computer))
LISP, Common Lisp, and Lisp machines, and all that, were also semi-mainstream,
in that companies and organizations were willing to spend real money on custom
hardware and programmers to develop expert systems.

LISP and Prolog were closely connected to good ol' fashioned symbolic AI.
During (that season of) the AI winter, the bottom fell out from under LISP and
Prolog.

LISP hasn't _really_ become mainstream, although Clojure is fairly popular.
But functional programming, and especially _ideas_ from functional
programming, has become much more mainstream in the past 10 years. Garbage
collection, closures, and even (semi-)hygienic macros are now standard
features--or at least standard extensions--in an increasing number of popular
or semi-popular languages.

There are at least some uses of logic programming that have started to become
popular again. Datalog is being used for static analysis, LogicBlox is selling
Datalog-based products to major corporations, Clojure's Datomic is (or
contains) a Datalog, and I know of at least one startup writing a custom
Datalog. So at least Datalog, which is closely related to Prolog, seems to be
gaining in popularity.

What hasn't taken off again is a full, general purpose logic programming
language, like Prolog. Maybe functional programming has to become more firmly
entrenched in industry before general purpose logic programming can become
popular, since logic programming is even higher-level, more abstract, and
further from the hardware than functional programming. Trying to understand
how the search and constraint solving work seems much more difficult for
most programmers than understanding how Lisp's `cons` works, for example.
Understanding monads seems closer in difficulty to understanding basic logic
programming--the proliferation of tutorials on the basic ideas of miniKanren
and core.logic reminds me a little of the multitude of monad tutorials.

There is also the applicability and performance issue. There is increasing
interest in using functional programming, or at least functional techniques,
in AAA video games, and there have been a few success stories. Still, using
Haskell to write Call of Duty 17 doesn't seem feasible right now. Similarly,
it's not clear how to write such a game using a general-purpose logic
language. I think what's clear, though, is that _parts_ of these games can be
a good match for functional or logic languages or DSLs.

> a.) What do you think about the current/future state of logic programming?

I'm disappointed by the extent to which Prolog is still dominant in the logic
programming community, especially for research. I have nothing against Prolog.
Having a mono-culture, though, stifles lots of ideas. (Thought experiment--
imagine what the state of programming languages research would be if all
research was done in...FORTRAN, LISP, Haskell, or any other single language,
no matter how great that language is.)

Curry, Mercury, Mozart/Oz, and miniKanren, of course, each have a different
take on logic programming than does Prolog, and the logic programming
community has never been entirely a mono-culture. But compared to functional
programming, for example, there are far fewer logic programming languages that
are at all popular, or that are used in research or industry.

I suspect that the dominance of Prolog is one reason that logic programming
isn't currently more popular. Haskell, Scheme, Common Lisp, Scala, Clojure,
F#, OCaml, Racket, etc., are active enough, and have vibrant enough
communities, that someone interested in functional programming can pick from a
wide variety of languages, with different philosophies, syntax, type systems,
tooling, etc. In the logic programming world it's still either Prolog...or
some essentially marginal language (in terms of popularity, not quality!) with
a tiny community.

I do think that many people are interested in the _ideas_ of logic
programming, especially now that functional programming ideas have become
mainstream, having infected even Java and C++. People are starting to look for
the next higher level of abstraction after traditional functional programming,
be it dependent types, proof assistants, logic programming, or whatever. And I
find the interest in Datalog, and the interest in miniKanren, promising.

Unless some language/killer-application combination appears out of the blue
(think Ruby/Rails), I suspect that for the foreseeable future it will be the
_ideas_ of logic programming, and perhaps a few logic programming DSLs, that
influence the broader programming community, and only for the early adopters.

Using multiple DSLs to express different paradigms within the same program
seems entirely reasonable to me. This is a very Lispy/Schemely/Rackety notion.
There is lots of interest in DSLs these days, and more languages are
supporting macro systems, even hygienic macro systems (at least for limited
notions of "hygienic"). I wouldn't be surprised if logic programming ideas
slip into programmers' toolkits through DSLs like miniKanren or embedded
Prologs. Something like an O'Reilly book on using logic programming DSLs to
solve common problems would go a long way towards making these techniques more
widely known.

As far as the future of relational programming, as opposed to logic
programming, I think we haven't even scratched the surface of what is
possible. I'm inspired by Alan Kay's saying that "the best way to predict the
future is to invent it." :)

~~~
adamnemecek
If you'd prefer to take this discussion to email we can.

> Thanks for the post, the pull requests, and the questions!

You are very welcome, lol. There might be more coming as time permits. I'll
keep an eye out on the progress.

> I'm not promising, though, that the answers aren't ridiculous/stupid! :-)

I've always felt like people are too afraid about not sharing their ridiculous
ideas anyway :-).

> I suppose the standard answer is: rule-based systems that have to perform
> some type of logical inference.

I think that my question was a bit too ambiguous. I think that I meant like an
area where logic programming isn't currently commonly used (to me, the network
configuration tool seems very out of place for example) but it would be a
match made in heaven. I'm aware of expert systems :-).

> In the 1980s logic programming was mainstream, at least in Japan!

I'm actually aware but for some reason it's not mainstream now.

> What hasn't taken off again is a full, general purpose logic programming
> language, like Prolog.

That's what I was asking about. E.g. would it make sense to e.g. write an OS
in a logic/relational language? What sort of CPU support would be required to
make this a serious effort? I think what I'm getting at is that I feel like
e.g. Prolog isn't truly general purpose. I get that it's turing complete but
large portion of programs are much slower in prolog (as you say yourself). Is
this something that's "inherent" to logic languages (like even if HW support
were better, will it always (i know i know, no absolutes but you get the gist)
be MUCH slower or could it be just slightly slower)?

Like I realize that they are slower because they are fundamentally doing a
search but I'm wondering if e.g (and I'm really just throwing ideas on the wall
here) it's possible to leverage type systems and hw support to considerably
reduce the runtime search space and do most of it on compile time. Are you
familiar with liquid types (you mention dependent types, so I'm guessing
yes?)?

In your work, you seem to be gravitating towards dynamically typed languages.
Wouldn't a lot of work be easier in like Haskell or some other typed language?
I'm sure you have good reasons for not having switched and I'm interested in
hearing those. Why are Lisps better for your work?

> "Never" is a very serious word! :-) I don't think we know enough about any
> form of programming to use words like "always" and "never."

I was actually thinking about wording it slightly less absolutely but I was
interested if there were any problems where logic programming would be a bad
idea no matter what so it would be basically "never".

> Unless some language/killer-application combination appears out of the blue
> (think Ruby/Rails),

I think that's part of what I was asking. What would be a "killer" app for
logic programming? Are there any problems out there that aren't currently
being solved with logical programming but would really benefit from them?

Btw can you recommend some literature on the intersection of
logic/constraint/relational programming and music? I've ordered this book
[https://amazon.com/dp/1848212887/](https://amazon.com/dp/1848212887/)

I'll check out the links you posted. I'm actually working full-time on this
music composition app (logic/relational programming isn't the main
contribution of the application but I'm just trying to see if there's some low
hanging fruit I could do easily and blow minds). If you have any sort of input
into that I'd be interested. I saw on your CV that you supervised some
undergrad research in this area so I'm guessing that you might not be
unfamiliar with that intersection.

I did see like the top google links when I searched for the combination of
those keywords (e.g. [https://vimeo.com/55677313](https://vimeo.com/55677313))
so I think I'm interested in some sort of meta analysis.

~~~
will_byrd
> If you'd prefer to take this discussion to email we can.

Sure. I'll take a crack at these followup questions first, though!

> I think that my question was a bit too ambiguous. I think that I meant like
> an area where logic programming isn't currently commonly used (to me, the
> network configuration tool seems very out of place for example) but it would
> be a match made in heaven. I'm aware of expert systems :-).

It doesn't surprise me to see Prolog in a configuration system, since
configuration is a standard use of expert systems--for example, Xcon
([https://en.wikipedia.org/wiki/Xcon](https://en.wikipedia.org/wiki/Xcon)).
Getting a manager at Microsoft to allow the use of Prolog is the surprising
thing to me! :)

> That's what I was asking about. E.g. would it make sense to e.g. write an OS
> in a logic/relational language? What sort of CPU support would be required
> to make this a serious effort? I think what I'm getting at is that I feel
> like e.g. Prolog isn't truly general purpose. I get that it's turing
> complete but large portion of programs are much slower in prolog (as you say
> yourself). Is this something that's "inherent" to logic languages (like even
> if HW support were better, will it always (i know i know, no absolutes but
> you get the gist) be MUCH slower or could it be just slightly slower)?

You can play all sorts of "extra-logical" tricks in Prolog to improve
performance, at the cost of giving up declarativeness. Still, Prolog is not
the language I'd reach for first when writing system software, as you point
out.

Have you looked at Mercury? Mercury is a logic programming language that
borrows heavily from Haskell. One cool thing about Mercury is that you define
high-level relations, specifying types and other annotations for each
relation. The Mercury compiler then compiles the high-level declarative
relations into lower-level, more efficient functional definitions. One
high-level relation might be compiled to many functions, which are dispatched
to at run-time based on the groundness of the arguments passed to the relation.
Mercury is specifically designed for system programming, and also has support
for parallelism.

I should point out that Mercury relations aren't _really_ full relations, like
they are in miniKanren. Mercury compiles away unifications for efficiency,
which means that Mercury can't handle more sophisticated uses of logic
variables. For example, the quine-generating query would be impossible to
specify in Mercury. But the resulting code is much more efficient.

Greg Rosenblatt has applied a few of the Mercury compiler techniques to the
relational interpreter used by Barliman, which has sped up some synthesis
queries by orders of magnitude. He suspects there are more gains to be made
from borrowing Mercury optimizations.

> Like I realize that they are slower because they are fundamentally doing a
> search but I'm wondering if e.g (and I'm really just throwing ideas on the
> wall here) it's possible to leverage type systems and hw support to
> considerably reduce the runtime search space and do most of it on compile
> time. Are you familiar with liquid types (you mention dependent types, so
> I'm guessing yes?)?

I'm not sure what you mean by hardware support, exactly. Are you talking about
using accelerators like GPUs or FPGAs? If so, as far as I can tell miniKanren
doesn't map well onto these accelerators, although the Intel Phi might be a
decent match.

I suspect that running synthesis tasks on a beefy server, or even on a
cluster, would result in a significant improvement to performance. Static
analysis and compiler optimizations would probably make a much bigger
difference, though.

I'm slightly familiar with liquid types. There may be ways for dependent types
to prune parts of the search space statically, although I'm not sure. I'd like
to explore combining miniKanren and Idris, to see where that leads.

> In your work, you seem to be gravitating towards dynamically typed
> languages. Wouldn't a lot of work be easier in like Haskell or some other
> typed language? I'm sure you have good reasons for not having switched and
> I'm interested in hearing those. Why are Lisps better for your work?

I like Scheme because it has very powerful meta-programming capabilities, it
is easy to write meta-circular interpreters, it is trivial to write DSLs
(without having to write parsers or whatever), and the abstract syntax and
concrete syntax are essentially the same. Being able to just call 'read' to
parse an expression, and being able to redefine built-in binding forms, for
example, can be extremely handy.
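
For instance (a trivial sketch; `open-input-string` is a common extension that
Chez Scheme provides):

    ;; concrete syntax is the abstract syntax:
    ;; `read` hands you the AST as an ordinary list
    (define expr (read (open-input-string "(lambda (x) (cons x '()))")))
    (car expr)   ; => lambda
    (cadr expr)  ; => (x)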

Haskell is a language chock full of interesting ideas. I don't love it,
though. The syntax, lack of macros, all the extensions in GHC, etc., make for
a language that is powerful but complex. I like languages that are powerful
but small, containing only a few concepts. miniKanren itself only has three
core operators; the relational interpreter uses an extended version of
miniKanren with six operators, but we hope to get that down to five.

In short, Scheme fits my programming style better. I'm not averse to using
other languages, though. And miniKanren has been ported to dozens of host
languages, including multiple Haskell implementations.

As far as the types making things easier, I'm not sure this is true. Many ML
and Haskell programmers have looked at miniKanren and said, "this would be so
much better with types!" After playing around with the implementation for a
while, they usually say, "taking advantage of the types without losing the
flavor of miniKanren is much harder than I thought." As far as I can tell, the
only _interesting_ embedding of miniKanren in a typed language is Dmitry
Boulytchev's OCanren
([https://github.com/dboulytchev/OCanren](https://github.com/dboulytchev/OCanren)).
Dmitry had to hack the OCaml type system at a low (and potentially unsafe)
level to make the embedding possible. I would be interested in exploring
OCanren for writing relational interpreters. I'm not exactly sure what it
would buy us, though.

There are lots of interesting connections between dependent types and logic
programming. I'd like to explore combining Idris and ideas from miniKanren,
for example. To me, combining Idris and miniKanren seems more interesting and
promising than combining Haskell and miniKanren, which has been done many
times without apparent benefit.

Now, in contrast to the host language used for the embedding of miniKanren,
implementing a typed language in the relational interpreter _definitely_ makes
synthesis easier. In other words, implementing a relational interpreter for an
ML-like language, using miniKanren embedded in Scheme, is probably far better
for synthesis than implementing a relational interpreter for Scheme, using
miniKanren embedded in Haskell.

I definitely plan to investigate relational interpreters for typed languages
(Michael Ballantyne has already done some work in this area, and showed great
improvements in synthesis). I'm also open to exploring embeddings of miniKanren
in typed languages. This seems like a much trickier problem than you might
think, though. OCanren looks like a promising start, and I look forward to
playing with it more.

> I was actually thinking about wording it slightly less absolutely but I was
> interested if there were any problems where logic programming would be a bad
> idea no matter what so it would be basically "never".

I think this largely depends on what you consider "logic programming". Mercury
seems applicable to systems programming problems that aren't an ideal match
for traditional Prolog, for example. And Curry allows functional programming
and logic programming to be blended in interesting ways that blur the lines
between the paradigms.

There are also approaches like Answer Set Programming, and even Answer Set
Programming Modulo Theories, that use powerful solvers to solve problems that
would be hopelessly inefficient with naive Prolog (or miniKanren) search.

So I'll stick with my original answer, and say it's far too soon to tell. :)
We do need new languages and approaches, though!

> I think that's part of what I was asking. What would be a "killer" app for
> logic programming? Are there any problems out there that aren't currently
> being solved with logical programming but would really benefit from them?

Such a tiny percentage of programmers have even _heard_ about Answer Set
Programming; it's possible there are killer apps for ASP lying around, just
waiting for someone to connect the ideas.

I suspect killer apps will be found for probabilistic logic programming, as
the languages and implementations improve, and as more people become
comfortable with probabilistic programming.

And, of course, I'm hoping program synthesis using logic programming will be a
killer app! :)

> Btw can you recommend some literature on the intersection of
> logic/constraint/relational programming and music? I've ordered this book
> [https://amazon.com/dp/1848212887/](https://amazon.com/dp/1848212887/)
>
> I'll check out the links you posted. I'm actually working full-time on this
> music composition app (logic/relational programming isn't the main
> contribution of the application but I'm just trying to see if there's some
> low hanging fruit I could do easily and blow minds). If you have any sort of
> input into that I'd be interested. I saw on your CV that you supervised some
> undergrad research in this area so I'm guessing that you might not be
> unfamiliar with that intersection.
>
> I did see like the top google links when I searched for the combination of
> those keywords (e.g. [https://vimeo.com/55677313](https://vimeo.com/55677313))
> so I think I'm interested in some sort of meta analysis.

I'd be happy to talk to you about this in more depth, maybe via hangouts. I
think it depends very much on what you want to do. For example, synthesis of
music that pleases humans is still an open problem. But other problems seem
more tractable.

------
YeGoblynQueenne
It would have been nice to see some more information on the way this works.
I'm not clear exactly what is being searched--some set of built-in functions
that are combined in the hopes of finding one that makes tests pass? A space
of function components?

It's cool but I'd like to know what it does exactly. Maybe the explanation is
spread around examples, but skimming through them I couldn't find it.

~~~
will_byrd
Barliman's synthesis is based on the "relational interpreter" approach
described in this 2012 Scheme Workshop paper:

William E. Byrd, Eric Holk, and Daniel P. Friedman. miniKanren, Live and
Untagged: Quine Generation via Relational Interpreters (Programming Pearl).
Proceedings of the 2012 Workshop on Scheme and Functional Programming,
Copenhagen, Denmark, 2012.

[http://webyrd.net/quines/quines.pdf](http://webyrd.net/quines/quines.pdf)

[https://github.com/webyrd/2012-scheme-workshop-quines-paper-code](https://github.com/webyrd/2012-scheme-workshop-quines-paper-code)

The basic idea is to write an interpreter as a _relation_ in a constraint
logic programming language (miniKanren, in this case). Since the interpreter
is a relation, there is no real distinction between "input" arguments and
"output" values. More generally, arguments to the relational interpreter can
be partially-instantiated terms that contain logic variables representing
unknown subexpressions.

Since the interpreter for "miniScheme" is fully relational, running the
interpreter "forwards" to generate a value from a given expressions, or
running the interpreter "backwards" to generate an expression that evaluates
to a given value, are essentially the same operation. The only difference is
where the logic variables are placed in the query to the relational
interpreter.
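
Concretely, the two directions look like this (a sketch; the exact output
depends on the interpreter's coverage):

    ;; "forwards": the logic variable is in the value position
    (run 1 (v) (evalo '(cons 1 '(2 3)) v))  ; => ((1 2 3))

    ;; "backwards": the logic variable is in the expression position
    (run 3 (e) (evalo e '(1 2 3)))
    ;; => three expressions that evaluate to (1 2 3); the first is
    ;;    typically the quoted constant '(1 2 3) itself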

But the approach is more general than just running the interpreter forward or
backward. Logic variables can be used in _both_ the expression to be
evaluated, and in the expected value. This is how we can generate quines
(expressions that evaluate to themselves) in miniKanren directly from the
mathematical definition of a quine, for example. The query

(run N (q) (evalo q q))

will produce N quines, including a quine that is alpha-equivalent to the
canonical Scheme quine

((lambda (x) (list x (list (quote quote) x)))
 (quote (lambda (x) (list x (list (quote quote) x)))))

This example shows some of the generality of the relational interpreter
approach--program inversion applied to a Scheme interpreter would not be able
to generate quines.

The search is done over all possible terms in "miniScheme," a Turing-complete
subset of Scheme that supports higher-order variadic functions, recursion,
lists and pairs, lexical scoping/shadowing, etc., and that has been extended
with pattern-matching.

Does that help?

------
lake99
> I'm starting with cocoa since I'm developing on a Mac

Does anyone here have an idea of how easy or difficult it would be to build
this on Linux? The GNUStep website says their framework closely follows Apple's
Cocoa. And yet, I am not familiar with any sizeable Mac app getting ported to
Linux via GNUStep.

~~~
will_byrd
Good question!

I'm not sure how easy it would be to compile Barliman under GNUStep. I would
like to have a version of Barliman that runs on different platforms, perhaps
using JavaScript/Clojurescript/whatever.

The GUI part of Barliman isn't that sophisticated. I think the trickier part
is managing the Chez Scheme processes that are being launched/killed in the
background. If there are N tests/examples, Barliman will launch N+2 processes
of Chez Scheme, in parallel, each one running a miniKanren query. Killing
these processes and starting new ones reliably is critical.

I'm not sure how easy it would be to launch and manage Chez Scheme instances
using Node.js, for example.

If you are interested in compiling Barliman under GNUStep, I'd be happy to
help. I'd be willing to try to rewrite parts of the code that aren't supported
by GNUStep, for example. I assume GNUStep supports Swift? If not, we would
have to rewrite Barliman in Objective-C, which would be fine by me.

I should mention that I've been thinking about separating the GUI completely
from process management, to make it easier to run synthesis processes on
remote machines, such as a beefy EC2 X1 instance. At that point, hopefully it
would be quite simple to write a new front end for Barliman.

Do you have any recommendations for cross-platform GUIs that can also launch
and manage parallel processes well?

Thanks!

~~~
lake99
> If you are interested in compiling Barliman under GNUStep, I'd be happy to
> help.

Thank you for being open to it, but since I'm not a Schemer (yet) I don't
see myself using it beyond playing with it for fun.

> I assume GNUStep supports Swift?

From the best I could gather[1], this appears to be work-in-progress.
GNUStep's Objective-C API is apparently at OSX-10.6 level.

> recommendations for cross-platform GUIs that can also launch and manage
> parallel processes well

Qt comes to mind. Its QProcess class[2] seems pretty neat to me.

[1]
[https://www.reddit.com/r/swift/comments/49r82f/swiftgnustep_support_on_linux/](https://www.reddit.com/r/swift/comments/49r82f/swiftgnustep_support_on_linux/)

[2]
[https://doc.qt.io/qt-5/qprocess.html](https://doc.qt.io/qt-5/qprocess.html)

------
tedmiston
Does anyone generate tests automatically?

I've been looking for something related to this. Whenever I design unit test
cases, I write out the common edge cases, let's say: valid input, invalid
input (wrong data type), invalid input (too long), etc, then translate them
into tests. I might write them in a way that the tests are the combination of
various dimensions of input, one per variable. For example, try input1=['a',
'aaa', None] x input2=[-1, 0, 1, None] for each combination knowing which I
expect to pass and fail.

Often the tests are similar (write one, duplicate, slightly change the logic),
and I've always felt like there has to be some kind of test generation
framework out there to do this, though I haven't heard of anyone actually
using one.
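
The combinatorial core of this is simple enough to sketch (here in Scheme,
since that's the thread's lingua franca, with `#f` standing in for the Nones
above):

    ;; every pairing of two input dimensions: 3 x 4 = 12 candidate cases
    (define (cartesian xs ys)
      (apply append
             (map (lambda (x) (map (lambda (y) (list x y)) ys)) xs)))

    (cartesian '("a" "aaa" #f) '(-1 0 1 #f))
    ;; => (("a" -1) ("a" 0) ("a" 1) ("a" #f) ("aaa" -1) ...)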

~~~
bryanh
Immediately QuickCheck comes to mind
([https://hackage.haskell.org/package/QuickCheck](https://hackage.haskell.org/package/QuickCheck)).

Folks have written (possibly less powerful) implementations in Python
([https://github.com/HypothesisWorks/hypothesis-python](https://github.com/HypothesisWorks/hypothesis-python)), Javascript
([https://github.com/jsverify/jsverify](https://github.com/jsverify/jsverify)),
etc.

FWIW I've found them fragile and lacking for my use cases - but I bet fuzzing
style testing tools could be pretty valuable in certain cases.

------
SwellJoe
Sort of a nitpick, but green always makes me think "tests passing", whereas
you've got it meaning "incomplete expression" (or something along those lines,
since you point out that it's not the same as syntax errors).

~~~
will_byrd
Thanks for the comment!

I agree! The newer interface includes an "Illegal sexpression" message, but
still uses the sickly green color. I need to fix this--any suggestions? :)

~~~
kragen
A wavy red underline is the established default for inline error reporting at
this point, for better or worse. That's probably the best thing to use unless
you're determined to make advances in graphical user interface layout, which —
I hope it isn't rude to point out — doesn't seem to be a goal of Barliman,
from looking at the screenshots.

If you're going to highlight errors with an error background color, I suggest
blue, which contrasts better with green than red or yellow can, especially if
you're somewhat red-green colorblind, like I am. Alternatively, maybe you
could reduce the contrast on the whole pane except for the error, as if you
were trying to scam the programmer into supplying their email address to
continue reading an article.

~~~
will_byrd
Good point! I like the idea of using a wavy red line. Thanks!

Yeah the user interface is pretty budget right now. :) I've been teaching
myself Swift and Cocoa programming, and still have lots to learn. And I've
been concentrating more on the "under the hood" pieces of Barliman. But I do
want to explore new interface and GUI ideas.

I suspect the interface will change completely in the future. Tom Gilray, for
example, has suggested using just a regular editor window, which Barliman
would parse to find the definitions, examples, optional type annotations, etc.
This would be closer in spirit to the Xcode interface.

One interface question is how to best specify the semantics for the language
under interpretation (for example, "miniScheme"). Currently the semantics are
specified by writing a "relational" interpreter in miniKanren. This is pretty
low level, though. I'd like to have a much higher-level specification language
that looks closer to the math for operational semantics.
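
To give a flavor of the current low-level style, here are a few clauses of the
sort such an interpreter contains (a sketch, not Barliman's actual `evalo`):

    (define (eval-expo expr env val)
      (conde
        ;; quoted datum: (quote d) evaluates to d
        ((fresh (d)
           (== expr `(quote ,d))
           (== val d)))
        ;; variable reference
        ((symbolo expr) (lookupo env expr val))
        ;; cons: evaluate both subexpressions
        ((fresh (e1 e2 v1 v2)
           (== expr `(cons ,e1 ,e2))
           (== val `(,v1 . ,v2))
           (eval-expo e1 env v1)
           (eval-expo e2 env v2)))))

    ;; environment lookup as a relation
    (define (lookupo env x val)
      (fresh (y v rest)
        (== env `((,y . ,v) . ,rest))
        (conde
          ((== x y) (== val v))
          ((=/= x y) (lookupo rest x val)))))

Each clause transcribes one rule of the operational semantics by hand; a
higher-level notation would generate clauses like these from something that
looks much more like the inference rules themselves.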

------
cheez
It wasn't clear to me how the "best guess" works. Did I miss it?

Indeed, if program synthesis can be done for even slightly non-trivial tasks
(find in list, etc) that would be a huge boon for StackOverflow answer bots.

~~~
will_byrd
I gave a short, semi-technical explanation of how Barliman performs synthesis
in my answer to `YeGoblynQueenne`. I'm not sure if that description answers
your question, though.

I agree that relatively modest synthesis could be useful; exploring modest
synthesis is why I made Barliman, although I hope Barliman's synthesis can be
made less modest! :)

------
gbersac
This is ambitious. I don't really believe it will be really useful one day, but
who knows, maybe it is the primitive form of a program which will replace
programmers in the future.

~~~
will_byrd
Thanks!

Yes, Barliman is ambitious. Both the synthesis capabilities and the interface
will need to improve before Barliman might be useful for practicing
programmers.

One question Barliman is trying to explore is whether relatively modest
synthesis abilities might be useful for some tasks. If the last 10% of a
function could be synthesized, for example, how useful would that be? Perhaps
a better question is, "for which tasks, if any, would modest synthesis be
useful"?

I don't know the answer to this question.

A tool like Barliman _might_ be useful in an educational setting. However,
this seems a bit like the calculator/Wolfram Alpha debate in teaching math. If
Barliman becomes good enough to synthesize many simple Scheme definitions from
examples, would that take away too many learning opportunities from students?
I suppose it would teach students to think of example tests, and perhaps even
properties, types, or contracts, for the functions they are defining. But it
does seem that the better synthesis works, the less useful Barliman might be
in a teaching setting.

One suggestion from Matt Matt is to use Barliman for code _repair_. For
example, say I define a function `foo`, along with several tests that `foo`
passes. Later I find that the definition of `foo` is incorrect, and come up
with several more tests that fail using the current definition. Barliman could
have an "auto repair" button that attempts to synthesize a correct definition
of `foo` from both the tests and the "almost correct" existing definition. I
have several ideas of how this might be done. For autorepair (and other
synthesis tasks) it would be helpful to extend Barliman to run synthesis tasks
remotely--for example, on an EC2 X1 instance with dozens of hardware threads.

For more advanced synthesis tasks, I think we'll need a more sophisticated
search. I was very impressed with Alpha Go's performance over a giant search
space that historically has been considered the domain of human experts, a
task that reminds me greatly of synthesis. I'm working with Rob Zinkov and
Evan Donahue on using stochastic search/Markov Chain Monte Carlo/etc. to try
to allow miniKanren's search to explore more promising areas in the search
space. One inspiring piece of related work is this 2013 ASPLOS paper by
Schkufza, Sharma, and Aiken on Stochastic Superoptimization:

[http://theory.stanford.edu/~aiken/publications/papers/asplos13.pdf](http://theory.stanford.edu/~aiken/publications/papers/asplos13.pdf)

I don't see Barliman or other synthesis tools replacing programmers any time
soon, but in 10 or 15 years I hope our tools will be assisting us with useful
synthesis tasks.

~~~
will_byrd
* Matt Matt -> Matt Might. Getting a little sleepy! ;)

------
YeGoblynQueenne
It's nice to see a logic programming subject on HN's first page--the second in
two days.

~~~
will_byrd
Cool! What was the other post?

~~~
YeGoblynQueenne
This one:

[https://news.ycombinator.com/item?id=12193694](https://news.ycombinator.com/item?id=12193694)

~~~
will_byrd
Very nice! Thanks.

You might be interested in this paper and code on the implementation of
microKanren, a minimal implementation of core miniKanren. The core
implementation is under 50 lines of Scheme code.

Jason Hemann and Daniel P. Friedman. microKanren: A Minimal Functional Core
for Relational Programming. In Proceedings of the 2013 Workshop on Scheme and
Functional Programming (Scheme '13), Alexandria, VA, 2013.

[http://webyrd.net/scheme-2013/papers/HemannMuKanren2013.pdf](http://webyrd.net/scheme-2013/papers/HemannMuKanren2013.pdf)

[https://github.com/jasonhemann/microKanren](https://github.com/jasonhemann/microKanren)
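
The heart of that implementation is just a walk over a substitution
(represented as an association list) plus structural unification. Here are
those two functions, essentially as they appear in the paper (`assp` is the
R6RS predicate-based assoc; `var?`, `var=?`, and `ext-s` are one-liners over
vectors in the paper):

    ;; find what a variable is bound to, chasing chains of bindings
    (define (walk u s)
      (let ((pr (and (var? u) (assp (lambda (v) (var=? u v)) s))))
        (if pr (walk (cdr pr) s) u)))

    ;; unify two terms, extending substitution s; #f signals failure
    (define (unify u v s)
      (let ((u (walk u s)) (v (walk v s)))
        (cond
          ((and (var? u) (var? v) (var=? u v)) s)
          ((var? u) (ext-s u v s))
          ((var? v) (ext-s v u s))
          ((and (pair? u) (pair? v))
           (let ((s (unify (car u) (car v) s)))
             (and s (unify (cdr u) (cdr v) s))))
          (else (and (eqv? u v) s)))))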

------
sdegutis
So, this is a text editor primarily for scheme, with a GUI for TDD built in?

~~~
chrismonsanto
It is an interactive program synthesizer, as it says in the overview.

~~~
will_byrd
Yes, it's a prototype editor with interactive example-based synthesis.

Barliman's synthesis can try to fill in "missing" sub-expressions in a Scheme
definition. However, there is also a TDD component that might not be obvious.
Barliman will attempt to run each test individually, even if the function has
not been fully defined. That is, Barliman will attempt to synthesize a
definition of the function to make each test pass. If Barliman can _prove_
that there is no way to fill in the function definition to make the test
pass, the test will display an error message.
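
Here is a sketch of how such a proof can fall out of unification (assuming
`evalo`; the hole `q` is the not-yet-written part of the body, and in cases
like this the refutation is finite):

    ;; the fixed part of the definition already forces the result to
    ;; start with the symbol a, so no completion of the hole can
    ;; produce (b)
    (run 1 (q)
      (evalo `(letrec ((foo (lambda (x) (cons 'a ,q))))
                (foo 1))
             '(b)))
    ;; => ()

Because the failure comes from unification rather than from exhausting an
infinite enumeration, it can be reported immediately.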

In other words, Barliman runs unit tests continually, and those tests can fail
_before_ you have finished defining the function being tested. Hopefully this
feedback will let the programmer know that they are heading down the wrong
path as soon as possible. Or, let the programmer double check the test, to
make sure the test is correct.

