Some co-workers and I built a mini-language a long time ago too, but ours was for one of those BPM workflow products that were so in vogue ten years ago [1]. It only had a super buggy graphical interface that customers hated. Some of the largest customers had workflows with more than 100 steps/conditions, and we were losing money because we needed a full-time coder on-site at each customer just to program the workflows.
The database structure, however, was quite good, and I noticed that it was deleted and recreated each time the workflow was edited.
Adding a text mode was just a matter of parsing individual lines and persisting them in the database. No parser, just regex matching. Even an extra blank space was a syntax error. Something like this:
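A minimal sketch of that line-per-statement, regex-only approach. The STEP/COND syntax here is invented for illustration; the point is that each line must match one pattern exactly, so even a stray space fails:

```python
import re

# Hypothetical statement forms (not the actual product's grammar):
#   STEP <id> "<name>"
#   COND <id> "<expr>" THEN <step-id> ELSE <step-id>
LINE_PATTERNS = {
    "step": re.compile(r'^STEP (\d+) "([^"]+)"$'),
    "cond": re.compile(r'^COND (\d+) "([^"]+)" THEN (\d+) ELSE (\d+)$'),
}

def parse_line(line):
    """Match one line exactly; anything else (even an extra space) is an error."""
    for kind, pattern in LINE_PATTERNS.items():
        m = pattern.match(line)
        if m:
            # Each tuple maps straight onto a row in the workflow tables.
            return (kind, *m.groups())
    raise SyntaxError(f"bad line: {line!r}")
```

With the database schema already in place, "persisting" each parsed tuple is just an INSERT per line, which is why no real parser was needed.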
It was implemented as an easter egg. Using text mode disabled the graphical part, since we lost the positions of each box. But nobody ever complained about that, of course.
Many years ago I was working on a complex screen-scraper app (the good kind), and every site we scraped took ~1000 lines of error-prone code, on top of the usual problems of sites changing randomly under us and the need to respond quickly. They wanted to scale up to scraping dozens of sites, and we weren't given the resources, so I came up with a little XPath-like language, something like "//content-area/table/tbody/@foreach/tr/@get-value". After that, the largest scraper was a dozen lines of code.
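A toy interpreter for that kind of path language might look like the sketch below. It walks a nested-dict tree standing in for parsed HTML; "@foreach" iterates matching children and "@get-value" collects text. The semantics of the directives are my guess at the original design:

```python
# Toy interpreter for an XPath-like scraping language (semantics assumed).
# Nodes are dicts: {"tag": ..., "text": ..., "children": [...]}.
def run_path(node, steps):
    if not steps:
        return [node]
    head, rest = steps[0], steps[1:]
    if head == "@get-value":
        return [node.get("text", "")]
    if head == "@foreach":
        # Iterate every child matching the next segment, e.g. @foreach/tr.
        tag, rest = rest[0], rest[1:]
        out = []
        for child in node.get("children", []):
            if child.get("tag") == tag:
                out.extend(run_path(child, rest))
        return out
    # Plain segment: descend into the first child with that tag.
    for child in node.get("children", []):
        if child.get("tag") == head:
            return run_path(child, rest)
    return []

def scrape(tree, expr):
    # e.g. "//content-area/table/tbody/@foreach/tr/@get-value"
    return run_path(tree, expr.strip("/").split("/"))
```

An interpreter this small is easy to keep per-site scrapers down to a handful of expressions, which is where the "dozen lines" payoff comes from.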
I think it came down on the right side of reducing complexity rather than creating more, but I was let go shortly after for the heinous crime of not being at my desk at 9am.
This is a Most Righteous Answer, kissing cousin to The Correct Answer.
Repeating myself, sorry:
I've done a lot of electronic medical records work. At the time, SOAP, WSDL, and HL7 3.x (the XML format) reigned.
Techniques like yours are how our two-person team was able to run circles around our much larger partners. In other words, while they were stuck trying to update and recompile the XSDs, we just treated inbound data as screen scraping.
Sharing a tip of my own: while XPath expressions are mostly great, we migrated to globbing expressions. The wildcarding allowed our scrapers to be a lot more flexible and robust against all the chaotic mutant data we were receiving.
PS- IMHO, The Correct Answer is to have path-expression intrinsics built right into the language. What LINQ should have been. Imagine if Java had regex intrinsics like Perl, instead of that weird library. Same idea.
Rolled my own that I'd copy-paste around. I always ended up with my own graph model (e.g. HL7, XML DOM, parse trees) and added query methods to the node objects. I really dislike having separate query objects, preferring a tighter fit.
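A minimal sketch of that "queries live on the node" style, as opposed to a separate visitor or query object. The class and method names are illustrative, not from any of the codebases mentioned:

```python
# Query methods attached directly to the graph node, so callers write
# node.find_all("x") instead of Query(node).find_all("x").
class Node:
    def __init__(self, tag, text=None, children=None):
        self.tag = tag
        self.text = text
        self.children = children or []

    def walk(self):
        """Depth-first traversal of this node and all descendants."""
        yield self
        for child in self.children:
            yield from child.walk()

    def find_all(self, tag):
        return [n for n in self.walk() if n.tag == tag]

    def first(self, tag):
        matches = self.find_all(tag)
        return matches[0] if matches else None
```

The tighter fit shows up at call sites: any node you already hold is also a query root, so queries compose naturally as you descend the graph.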
[1] Looked similar to this: https://archibus.ro/wp-content/uploads/2017/07/GWE.png