

Mycroft – A distributed declarative logic language - enkiv2
https://github.com/enkiv2/mycroft

======
hellofunk
> PROLOG, partly for historical reasons, is quite slow.

This is a sweeping statement, and not a particularly accurate one. There are
many, many prolog systems available. Perhaps the free prolog compilers out
there are not stellar performers, but commercial production prolog systems
are unlikely to be slower than what is proposed here, and will probably
still be more feature-rich.

~~~
enkiv2
Speed is achievable in prolog in a variety of ways. One is to make use of cuts
and structure code so that it fails early. Another is to optimize the hell out
of the interpreter. Another is to rewrite prolog code in another language &
optimize that (which is pretty standard practice for developing expert
systems). But, ultimately, prolog is always going to be comparable in time
complexity to a depth-first search. Tricks with cuts and structuring code to
avoid deep recursion are, at best, equivalent to a time-consuming attempt to
sort a tree such that the most likely desired results are the first to be
searched -- which is to say, they don't change the underlying time complexity
of the process.
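
The clause-ordering point can be seen in a toy depth-first search (a sketch in Python, not Prolog; all names here are illustrative): putting the likely answer earlier in the search order reduces the average number of nodes visited, but an exhaustive failure still visits every node, so the underlying complexity is unchanged.

```python
# Toy illustration: reordering branches in a depth-first search changes
# how soon a solution is found on average, but the worst case (a failed
# query) still visits every node, so the O(n) complexity is unchanged.
def dfs(tree, target, visited=None):
    if visited is None:
        visited = []
    label, children = tree
    visited.append(label)
    if label == target:
        return visited
    for child in children:
        result = dfs(child, target, visited)
        if result is not None:
            return result
    return None  # exhaustive failure: every reachable node was visited

leafy = ("root", [("a", []), ("b", []), ("c", [])])
assert len(dfs(leafy, "a")) == 2    # target first in order: 2 visits
assert len(dfs(leafy, "c")) == 4    # target last: 4 visits, same algorithm
assert dfs(leafy, "zzz") is None    # missing target: all 4 nodes visited
```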

In the worst case, MYCROFT has the same time complexity as PROLOG. However,
because determinate predicates are annotated as such, memoization is possible
-- something PROLOG _cannot_ do without fairly complicated analysis. On top of
that, MYCROFT makes distribution of load trivial. Because of cut rules and
other mechanisms designed to optimize PROLOG code on single systems without
concurrency, any form of concurrency in standard PROLOG is difficult -- after
all, you can spend a bunch of cycles computing a result only to discover
that it was going to be discarded by another branch and never needed
to be computed in the first place. MYCROFT doesn't have this problem.
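
The memoization argument can be sketched in a few lines of Python (a hypothetical illustration, not Mycroft's actual implementation): a predicate declared determinate maps each argument tuple to exactly one result, so caching by arguments is sound, whereas a nondeterminate predicate may yield different bindings on backtracking and cannot be cached this way.

```python
# Hypothetical sketch: caching is sound only for determinate predicates,
# because they map each argument tuple to exactly one result.
def memoize_det(pred):
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = pred(*args)
        return cache[args]
    return wrapper

calls = 0

@memoize_det
def ancestor_depth(n):
    """Toy determinate predicate: depth of n in a linear chain."""
    global calls
    calls += 1
    return 0 if n == 0 else 1 + ancestor_depth(n - 1)

ancestor_depth(10)  # evaluates each of the 11 subgoals exactly once
ancestor_depth(10)  # second call is answered entirely from the cache
```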

Worst-case time complexity is equivalent in MYCROFT and PROLOG. However,
MYCROFT's average case approaches linear time as the number of computations
grows, even on non-optimized code, and it also approaches linear time on
non-optimized code as node count grows, even for a static number of
computations. PROLOG's average case on non-optimized code is the same as its
worst case. Any arbitrarily large constant factor added to MYCROFT's time is
irrelevant -- you can't optimize the depth-first search out of a depth-first
search by writing it in machine code.

There have been other PROLOG implementations (commercial ones) that did
automatic parallelism / distribution of tasks. I'm unaware of any that still
exist, but this was being researched back in the 80s. There are PROLOG-like
languages that have det vs. nondet notation (MIT's Church comes to mind). In
this sense, I'm not breaking new ground. I'm unaware of any other system that
combines det/nondet notation with aggressive distribution and memoization.

(This is not to say that the system is carefully optimized. Some behaviors are
quite naive -- intentionally, because clever solutions here are subject to
clever bugs, so it doesn't make sense to introduce them until the system is
more stable. The system is written in Lua specifically to make an ESP8266
port easier -- even though a prototype in C was written & would have been
more efficient. We round-robin requests rather than using a distribution
mechanism based on hash similarity, as originally planned, in order to avoid
dealing with potentially slow pure-Lua hash implementations that would prevent
the system from being portable. So, sure, it could be made faster. But it
already has a better average case than any standards-compliant PROLOG
system.)
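
The round-robin choice described above amounts to something like the following sketch (Python, with hypothetical names -- Mycroft itself is written in Lua): round-robin dispatch needs only a rotating counter, whereas hash-based placement would have to hash every goal, which is exactly the cost the pure-Lua port was avoiding.

```python
from itertools import cycle

# Hypothetical sketch, not Mycroft's actual code: dispatch each incoming
# goal to the next node in rotation, with no hashing of goal content.
class RoundRobinDispatcher:
    def __init__(self, nodes):
        self._nodes = cycle(nodes)

    def dispatch(self, goal):
        # Each goal goes to the next node in turn, regardless of content.
        return next(self._nodes)

nodes = ["node-a", "node-b", "node-c"]
rr = RoundRobinDispatcher(nodes)
assignments = [rr.dispatch(f"goal{i}") for i in range(6)]
# Load spreads evenly: each node receives exactly two of the six goals.
```

Hash-similarity placement would instead route a goal to the node most likely to hold cached results for related goals, improving memoization hit rates at the cost of hashing every request.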

