You just have to write a problem and domain file in a very easy LISP-like syntax. It's standardized too, so you can swap planners later. I can give you examples if you want.
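To give a flavour of what those two files look like, here is a toy domain I made up for illustration (the names are invented, not from any benchmark suite):

```lisp
;; domain.pddl -- describes the actions available (illustrative toy example)
(define (domain switches)
  (:requirements :strips)
  (:predicates (on ?s) (off ?s))
  (:action flip-on
    :parameters (?s)
    :precondition (off ?s)
    :effect (and (on ?s) (not (off ?s)))))

;; problem.pddl -- describes one concrete instance: objects, start state, goal
(define (problem turn-both-on)
  (:domain switches)
  (:objects a b)
  (:init (off a) (off b))
  (:goal (and (on a) (on b))))
```

Because the domain and problem are separate files in a standard language, you can hand the same pair to different planners and compare.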
Disclaimer: I am collaborating with Maria Fox's group.
I know we live in the era of "there already is a library for everything", but for small problems it may still sometimes make sense to pick the ideas you like and code them directly. This can take less time than learning whatever huge family of encodings committees have agreed on, and it's less likely to bring in cross-language calling requirements, specific system requirements, or licensing requirements.
It's been battle-tested: all the issues in the article are solved (proper pruning, optimal results). Furthermore, it's likely to have fewer bugs than the author's code.
It's been tuned over years of competition use, so it probably runs faster than the author's, even though it has more features.
It uses a derivative of the same language the author suggested (STRIPS is the predecessor of PDDL), so it's basically the same interface but more general-purpose. Using a standardized language makes the overall system more maintainable.
There are no licensing requirements, because it's a shell script. If you want to modify the solver itself, then it's GPL, but if you just want to pipe data out of it, it's fine to invoke it as a separate process.
I agree programming this stuff yourself is useful for learning. But optimization is specialized and tangential to most companies' core competencies. Leave it to the professionals. You don't write your own SQL query engine, for example (which is quite a similar problem).
The author should not feel like they made an obvious mistake, though. The planning community is the worst at writing their software up in human-readable form. It's really very easy to use, but you would never guess that from their websites; their constant use of complex logical constructs in every other sentence is infuriating.
For OP's use case, which appears to have a max depth of maybe 3, invoking an external program is going to be a LOT slower than just running the search in-process, no matter how slowly that search is implemented.
You are basically saying the whole UNIX philosophy of specialised reusable programs chained together is a flawed design.
Spawning a program is not slow. Store the program's input files on a ramdisk to avoid disk IO and there will be no noticeable overhead.
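As a minimal sketch of that staging idea, assuming a Linux tmpfs mounted at /dev/shm (the planner binary and file contents below are placeholders, not a real invocation):

```python
import pathlib
import tempfile

# Stage planner inputs on a tmpfs mount so the child process reads them
# from RAM rather than disk. Fall back to the default temp dir if /dev/shm
# is not available (e.g. on non-Linux systems).
ram = pathlib.Path("/dev/shm")
base = pathlib.Path(tempfile.mkdtemp(dir=ram if ram.is_dir() else None))

domain = base / "domain.pddl"
problem = base / "problem.pddl"
domain.write_text("(define (domain d) (:predicates (p)))")
problem.write_text("(define (problem q) (:domain d) (:init (p)) (:goal (p)))")

# The planner would then be invoked as something like:
#   subprocess.run([planner_binary, str(domain), str(problem)])
# where planner_binary is whatever solver you installed (an assumption here).
print(f"staged inputs under {base}")
```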
Reusable piped programs are great for some things, especially in a productive shell language. But no, composing processes is not great for everything, and certainly not where performance is required. This is why FreeBSD is a monolithic kernel instead of a microkernel built from processes, for example.
Spawning a program is incredibly slow, even ignoring disk read times, compared to 3 levels of low-branching-factor search in any language.
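A rough way to sanity-check this claim yourself (Python is just for illustration; the tree walk is a stand-in for a tiny in-process planner, and the spawned process is a do-nothing interpreter):

```python
import subprocess
import sys
import time

def count_nodes(depth, branching):
    """Walk a depth-limited tree, counting nodes (a stand-in for a toy search)."""
    if depth == 0:
        return 1
    return 1 + sum(count_nodes(depth - 1, branching) for _ in range(branching))

# In-process: a full depth-3, branching-factor-3 tree walk.
t0 = time.perf_counter()
nodes = count_nodes(3, 3)  # 1 + 3 + 9 + 27 = 40 nodes
search_ms = (time.perf_counter() - t0) * 1000

# Out-of-process: spawn one trivial child process that does nothing.
t0 = time.perf_counter()
subprocess.run([sys.executable, "-c", ""], check=True)
spawn_ms = (time.perf_counter() - t0) * 1000

print(f"searched {nodes} nodes in {search_ms:.4f} ms; one spawn took {spawn_ms:.2f} ms")
```

On most machines the single spawn is orders of magnitude slower than the whole toy search, though the gap obviously narrows as the search space grows.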
I wrote a little STRIPS planner for the last iteration of the course and set it the task of sorting some boxes in an interactive Box2D world without actually understanding the simulated physics. The results can be quite amusing: http://fhars.github.io/boxworld/
A technical write-up is available here: http://dspace.mit.edu/handle/1721.1/6916