The dataflow is not totally clear to me—am I understanding correctly that the parser generator is implemented as a recursive-descent parser in 1400 lines of JS, which then compiles the example grammar in "grammar" into the top-down Forthish code in "grammar_error", which is then used to compile the Algolish code in "input" into the Lispish code in "output", which is then compiled into the BF in "result"?
I suspect that the initial grammar compilation could be achieved by a cut-down compiler-compiler along the lines of what I'm working on in http://canonical.org/~kragen/sw/dev3/meta5ix.m5, although Meta5ix itself is not currently bootstrappable from a human-written seed.
It is not so much a parser generator as an interpreting parser. It is based on https://fransfaase.github.io/ParserWorkshop/Online_inter_par... which I developed earlier this year.
My idea was not to use the input as the base of the seed, but the generated BrainFuck program. The idea is that there is a multitude of BrainFuck interpreters/compilers in a multitude of programming languages that one could use to verify that they are not malicious.
Also, I think it is easier to verify that a BrainFuck program is not malicious than a program in machine code. The x86 instruction set is rather large, I understand, with all its extensions. I understand that only a very restricted subset of instructions is used in the seed, but one still needs to verify this, which requires understanding the x86 instruction documentation. Verifying the BrainFuck program is not easy, but it does not require access to any external document. Besides the definition of the language, one only needs an ASCII table, which would also be needed for an x86 seed, BTW.
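To make the point about the multitude of small, checkable interpreters concrete, here is a minimal sketch of a BrainFuck interpreter in Python. This is my own illustrative code, not part of the project; the function name, the 30000-cell tape, and the 8-bit wrapping cells are assumptions (the language spec leaves cell size and tape length to the implementation). The whole trusted base fits in a couple dozen lines, which is what makes independent cross-verification practical:

```python
def bf(code, inp=""):
    # Precompute matching bracket positions so the main loop is a flat dispatch.
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape, ptr, pc, out, it = [0] * 30000, 0, 0, [], iter(inp)
    while pc < len(code):
        c = code[pc]
        if c == '+': tape[ptr] = (tape[ptr] + 1) % 256   # wrapping 8-bit cells
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '.': out.append(chr(tape[ptr]))
        elif c == ',': tape[ptr] = ord(next(it, '\0'))   # EOF reads as 0
        elif c == '[' and tape[ptr] == 0: pc = jumps[pc]
        elif c == ']' and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return ''.join(out)

# e.g. bf(",.", "A") echoes its input: "A"
```

A reviewer only needs the eight-command language definition and an ASCII table to convince themselves this (or any of the many existing interpreters) faithfully executes the seed program.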