Warning: this whole post is a blatant plug for my Open Source project
https://github.com/etiennesillon/ModelRunner

There is a lot of discussion around no-code platforms and why developers don’t like them. My view is that they can be very useful for quickly getting through the boring parts of a project, like creating master data management screens for example. So I’ve built my own version, which interprets models at run time and, it turns out, understands natural language queries too!
Hi, my name is Etienne. I love coding and I’ve been doing it for a few decades now, so I’d rather focus on code that keeps me interested. Unfortunately, I find that there is always a lot to code before I get to the interesting stuff. So, like every other half-decent programmer, I’ve always tried to automate as much as possible and to build reusable libraries by adding levels of indirection and parameters.
I’ve been doing this for so long now that my code has become ‘hyper’ parameterised, so much so that I had to store all the parameters in configuration files. These evolved into complete models, which are basically a mix of ER models and UML diagrams: they include Entities and Attributes but also support all UML relationships (plus Back References) as well as formulas in object notation like “Product.Name” and “Sum(OrderLines.Amount)”. I’ve even extended the idea to include workflow models that specify what happens when an object is created, updated or deleted, or when a prerequisite condition becomes true.
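To give you an idea, here is roughly what such a model could look like as a plain data structure. This is just an illustration I’ve made up for this post (the entities, the dict layout and the Price attribute are all invented), not ModelRunner’s actual format:

    # Hypothetical model, invented for illustration -- not ModelRunner's real format.
    model = {
        "entities": {
            "Product": {
                "attributes": {"Name": "String", "Price": "Decimal"},
            },
            "OrderLine": {
                "attributes": {"Quantity": "Integer", "Amount": "Decimal"},
                # UML-style association; the target side gets a back reference
                "relationships": {"Product": {"type": "association", "target": "Product"}},
                # Formula in object notation, navigating the relationship
                "formulas": {"Amount": "Product.Price * Quantity"},
            },
            "Order": {
                "relationships": {"OrderLines": {"type": "composition", "target": "OrderLine"}},
                # Aggregate formula over the contained order lines
                "formulas": {"Total": "Sum(OrderLines.Amount)"},
            },
        },
        # Workflow rules fired on lifecycle events or when a condition becomes true
        "workflows": [
            {"on": "Order.updated", "when": "Order.Total > 1000", "do": "FlagForReview"},
        ],
    }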
To simplify managing the models, I’ve written a graphical editor, starting with Eclipse GEF, but since I like to reinvent the wheel, I moved to plain HTML5/JS. To make it even easier, I’ve added Google Speech Recognition, so I can now design models by just talking to Chrome, and when I’m done, I can deploy them with one click or by saying something like ‘please deploy the application’. This creates a schema for the data, and the ‘meta’ application is then ready to offer standard, web-based data management screens.
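To make that deployment step concrete, here is a minimal sketch of how a schema could be derived from a model like the one above. The type mapping, naming conventions and foreign key strategy are all assumptions I’ve made for the example, not ModelRunner’s actual code, but it captures the idea of generating DDL straight from the entity definitions:

    # Minimal sketch: derive SQL DDL from a model dict like the one sketched above.
    TYPE_MAP = {"String": "VARCHAR(255)", "Integer": "INT", "Decimal": "DECIMAL(12,2)"}

    def schema_from_model(model):
        statements = []
        for name, entity in model["entities"].items():
            columns = ["id INT PRIMARY KEY"]
            for attr, attr_type in entity.get("attributes", {}).items():
                columns.append(f"{attr} {TYPE_MAP[attr_type]}")
            for rel, spec in entity.get("relationships", {}).items():
                # Simplistic: every relationship becomes a foreign key column
                columns.append(f"{rel}_id INT REFERENCES {spec['target']}(id)")
            statements.append(f"CREATE TABLE {name} ({', '.join(columns)});")
        return statements

    city_model = {"entities": {"City": {"attributes": {
        "Name": "String", "Postcode": "String", "Notes": "String"}}}}
    print("\n".join(schema_from_model(city_model)))
    # CREATE TABLE City (id INT PRIMARY KEY, Name VARCHAR(255),
    #                    Postcode VARCHAR(255), Notes VARCHAR(255));

The same model then keeps driving the generated data management screens at run time.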
At this stage you’re probably thinking “Great, you can design and deploy data-driven apps with your voice, so what?”
Ok, let’s move on to something more interesting then: what the ‘meta’ app can do because it has access to all the information in the model at run time, such as manipulating the data using natural language queries.
This works because having access to the semantics in the model bridges the current gap between Machine Learning based Natural Language Understanding systems, which are very flexible but mostly ignorant of the domain model, and old-fashioned back-end systems with very rigid APIs. You can find a more detailed discussion here: https://modeling-languages.com/modelrunner-open-source-no-co....
So I’ve also added Google Speech Recognition to the ‘meta’ application and I can now just speak to it and tell it to “create a city called Melbourne and set postcode to 3000 and set notes to the most liveable city in the world” or “get me a list of customers living in Sydney aged 40” which I think is pretty cool and almost justifies all the hours and late nights I’ve spent coding it!
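To show why the model makes this tractable, here is a toy interpreter for the “create … called … and set … to …” pattern. It’s a deliberately naive sketch (the regex grammar and the MODEL dict are assumptions for this post, nothing like the real parser), but it shows the key point: the entity and attribute vocabulary comes from the model at run time instead of being hardcoded into the NLU side:

    import re

    # Toy interpreter for "create a <entity> called <name> and set <attr> to <value> ..."
    # Deliberately naive: a sketch of the idea, not ModelRunner's actual parser.
    MODEL = {"City": {"Name", "Postcode", "Notes"}}  # entity -> known attributes (assumed)

    def interpret(utterance, model):
        m = re.match(r"create an? (\w+) called (.+?)(?: and set |$)", utterance, re.I)
        if not m:
            raise ValueError("unsupported command")
        # Resolve the spoken noun against the entity names defined in the model
        entity = next(e for e in model if e.lower() == m.group(1).lower())
        record = {"Name": m.group(2)}
        # Each "set <attr> to <value>" clause is resolved against the model's attributes
        for attr, value in re.findall(r"set (\w+) to (.+?)(?= and set |$)", utterance, re.I):
            field = next(a for a in model[entity] if a.lower() == attr.lower())
            record[field] = value
        return entity, record

    print(interpret(
        "create a city called Melbourne and set postcode to 3000 "
        "and set notes to the most liveable city in the world", MODEL))
    # -> ('City', {'Name': 'Melbourne', 'Postcode': '3000',
    #              'Notes': 'the most liveable city in the world'})

A query like “get me a list of customers living in Sydney aged 40” resolves the same way: “customers” maps to a Customer entity and the rest become filters over its attributes and relationships.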
I think this has some pretty obvious applications, like being able to manage your data on the go by just talking to your phone instead of trying to use a GUI on a small screen.
So, I highly recommend the parameterised indirection approach, but if you don’t have a lot of time to write your own code, you might want to have a look at mine; it’s all Open Source under an MIT license: https://github.com/etiennesillon/ModelRunner.
Or, if you just want to try it or watch a demo, head to https://modelrunner.org.
Now, it’s still very much a work in progress and I’ve spent more time on the core engine than on the UI, so if you try to break it, you probably will! But, if you give it a try, please let me know how you went!
Thank you!