Bugs are a function of number of engineers, features, moving parts, and significant lines of code.
No-code takes out engineers, moving parts, and SLoC from the equation (as far as the enterprises buying these solutions are concerned), but leaves a lot to be desired in terms of feature set; low-code brings down the number of engineers and SLoC, whilst providing flexibility in terms of bespoke feature sets.
Another reason: in essence, no-code and low-code are a natural extension of the cloud computing model, in which capex is traded away for opex. And economies of scale, over time, ensure that low-code/no-code is going to be cheaper yet more reliable than anything one (tech-enabled) enterprise can roll on their own.
Low/no code seems like it'll eliminate some types of bugs, but probably not the ones that really hurt (it does what I told it to do, but what I told it to do was nonsensical).
I firmly believe that the subtle bugs of the "this seems to work the way I intended" variety are far more nefarious.
Albeit this is something that can be alleviated with better tooling, which no-code stuff generally lacks (how do you debug your kizmit code?).
I remember spending many hours trying to debug what was going on with TeamCity and just ultimately wishing I could get a fully dumped XML describing the config rather than going through every single damn menu item/config dropdown. (I believe TeamCity only allowed a partial XML config of the current screen and not the whole build process.)
Since the logic operators and UI blocks are consistent and tested across all apps, code errors reduce over versions. But implementation errors still exist. This normally happens because the app builder does not fully understand the application data.
So by using a DSL we effectively reduce the moving parts. With the remaining problem being - does the developer understand the data - and as a business, this is where you want developers to spend their time to extract the most value.
 - https://github.com/lowdefy/lowdefy
I call this "metadata-driven programming", and try to use it whenever I can get away with it. It's amazing what you can pull off with a FSM, external configuration, and some ingenuity.
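As a minimal sketch of what "metadata-driven programming" can look like (all state and event names below are hypothetical examples): an FSM whose transition table lives in plain data, so it could be loaded from external configuration and changed without touching code.

```python
# Minimal metadata-driven FSM: the transition table is plain data,
# and could just as well be loaded from a JSON/YAML config file.
# All state and event names here are hypothetical examples.

TRANSITIONS = {
    ("draft", "submit"): "review",
    ("review", "approve"): "published",
    ("review", "reject"): "draft",
}

def step(state, event):
    """Return the next state, or raise if the event is not allowed."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")

state = "draft"
for event in ["submit", "approve"]:
    state = step(state, event)
print(state)  # published
```

The point is that the interpreter (`step`) stays tiny and stable, while all the behaviour that actually changes lives in data.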
React classes loaded up with logic are a common example of this: filling these components with logic fills them with bugs and makes them hard to test.
Once I experienced how well it works to abstract logic from components my code simplified immensely and writing good tests became easy. And then making the logic configurable truly elevated the capabilities of our “meta-driven programming”.
...which is interesting, right? SQL won. There are no competitors. Everybody uses SQL. But if you ask developers how they develop their interfaces, you'll get a million answers, ranging from DOS terminals, Electron apps, PWAs, .NET forms, whatever. Maybe what we need is SQL for the view layer to complement the relational backend? SQL is famously easy to pick up because it allows the user to describe what they want, not how to get it. React and friends get us closer to this goal of universal reactive programming, but nothing/nobody has capitalized on providing an all-in-one "put words in this box and get what you want" type application. I guess the assumption is that building such a language is functionally impossible, but I really wonder if that's true. The app-building app that ultimately wins is going to be the one that lets you build any app, not just an app that serves a particular niche or business process.
We have those already. They're called IDEs. It turns out there's only so much complexity you can hide if you need to describe what you want to do unambiguously enough to be executed by a machine. Something like Visual Basic is probably as far as you can bury complexity while still being widely useful. Go much further and you start to drastically constrain the range of apps that can be built with your app-building-app.
SQL is just the lowest-common-denominator notation for accessing and manipulating data by performing operations that can be represented by relational algebra and relational calculus. Fixating on the language itself misses the point that there is a theoretical underpinning to why data is handled this way in many practical applications.
I think it's mostly that Salesforce was founded by an Oracle executive (and is just generally a bastion of enterprise software mess).
The reality is that you still have to model some specific domain against which SQL queries can be written. I 100% agree with doing implementations in SQL, but you still need imperative code wiring your well-normalized interpretation of the matrix into the outside world.
That said, SQL is more powerful than most developers are aware. Using providers like SQLite allows authoring UDFs which blur the lines between code and SQL. The nastiest bits of our product are implemented by business owners in SQL now. We have written UDFs like format_date() which are simply thin wrappers for things like DateTime in C#. Developers just ensure stability of the schema/UDFs and any relevant code-to-SQL mappers. Also note the in-memory mode of operation for SQLite makes it a perfect fit for large numbers of ad-hoc projections in parallel.
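The same UDF pattern can be sketched in Python's sqlite3 module (the commenter's format_date() wraps C#; this is only an analogous sketch, and the table and column names are made up):

```python
import sqlite3
from datetime import datetime

def format_date(iso_string, fmt):
    """UDF: a thin wrapper around datetime.strftime, callable from SQL."""
    return datetime.fromisoformat(iso_string).strftime(fmt)

con = sqlite3.connect(":memory:")  # in-memory mode, as mentioned above
con.create_function("format_date", 2, format_date)

# Hypothetical schema, just to demonstrate calling the UDF from SQL.
con.execute("CREATE TABLE orders (id INTEGER, placed_at TEXT)")
con.execute("INSERT INTO orders VALUES (1, '2021-03-05T14:30:00')")

row = con.execute(
    "SELECT format_date(placed_at, '%d %b %Y') FROM orders"
).fetchone()
print(row[0])  # 05 Mar 2021
```

Once the UDF is registered, business users can call it from plain SQL without knowing (or caring) what language it is implemented in.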
The success of PowerApps is a huge part of the Microsoft Dynamics appeal in the ERP space for instance - and they seem to be winning this segment very quickly on the back of it with tremendous growth.
The fact a relatively inexperienced user can create a new phone app that is fully integrated into the company ERP is a real point of difference.
It's going to keep a lot of people employed keeping those things going as the platform shifts underneath you every six months, and/or yanking the Goldberg machines out into real code.
There are ways of writing less code over time, but they're based on the sort of solid abstractions that are really not that common. (This is how "high level" coding works.) But the typical "low code" or "no code" system is not built like that.
The reality is that some problems are fundamentally better suited to low code, and the solutions will be much simpler and easier to maintain if they are built as low-code (e.g. if I have MS Dynamics and want a new data-entry screen, doing it as a model-driven PowerApp is undeniably the best way to do this).
Other problems will either be too complex, a poor fit, or simply not possible to implement within low-code tools. I'm not going to start developing a mobile game in PowerApps!
I think the overall issue is "If all you have is a hammer, everything looks like a nail". The best thing is to recognise the relative strengths and weaknesses of the two approaches and pick the right tool for the job.
Counterexample: MS Access apps are often a maintenance nightmare.
They end up being a maintenance nightmare because they are undocumented and flooded with poorly written VBA, which is hardly low-code. A non-VBA Access Database is trivial to maintain, but VBA is a full programming language, so if that's heavily used it's really maintaining a VB6 Winforms application, which isn't what we would now typically call 'low code' in terms of this article.
(As an aside, I would personally argue that a completely undocumented MS Access database is actually slightly easier to maintain than an equally undocumented web application anyway. The reason these tend to be a 'maintenance nightmare' is that they are typically made in Shadow IT, not the technology itself, which is very simple.)
Low-code and no-code solutions are often created by shadow-IT.
Heck, while it is phrased differently, enabling Shadow IT is an explicit selling point of low- and especially “no”- code tools.
Of course, Shadow IT isn’t the tools’ fault; it is the organization that segregates IT into a separate hierarchy, often the whole way up to the C-suite.
If IT were a function integrated within each business unit, empowering it, rather than an external silo alternately constraining arbitrarily and granting favors, Shadow IT wouldn’t be a thing. Shadow IT is a symptom of a broken relationship between IT and the broader organization.
 but often not really “no”, just as Excel isn’t.
In that case the issue is Shadow IT then, not low-code tools.
We used to call those data entry screens "HTML forms". And hey, they're even text-based, so you can commit them to version control!
And although it's easy to write an HTML form, it tends to be slightly harder to write the code handling it on the backend to change the database, with full user/role-based authentication and validation - particularly if you are integrating it into your ERP.
If the use case is a form to interact with Microsoft Dynamics, something built in PowerApps model-driven apps will be much more robust, quicker to develop and more secure than an HTML form with some custom handler.
Writing less code should always be top of mind for any developer. But making the right abstraction decisions seems to be the true art of coding.
The question then becomes can we write a low code framework that does this well? - best we try :)
We ask mathematicians for help! https://math.mit.edu/~dspivak/teaching/sp18/7Sketches.pdf
IMO assuming low-code is just for people who don't know how to code is missing the power of low-code in the ERP/CRM space, and how that will be applicable to more spaces in the future.
Besides, I can program but I can also build a simple application in a low-code solution quicker and it will usually be more reliable without lots of testing.
IMO saying low-code is for people who don't know how to write actual code is like saying a typewriter is for people who don't know how to write with a pen - they are two different tools to get a job done, and sometimes a pen is better and sometimes a typewriter is better.
The people that excelled with it were programmers and others who clearly had the ability to write software. The people that struggled were the ones that couldn't decompose a problem into its constituent parts, that didn't have sufficient attention to detail.
It was a productive tool in the right hands, the only time we had problems were when it was mis-sold as a way to solve problems without needing problem solvers.
Most devs already use no-code and low-code, and they know when to use what.
You shouldn't ask a group of software developers what they find valuable in a no-code toolset.
I'd be happy to lose my economic position as a possessor of arcane skills if we can stop trying to construct multistory buildings out of mud and learn to frame walls properly.
Take for instance a common integration point for these applications - sending a sales order to a warehouse management system (WMS). Even if both systems have RESTful APIs there is no fixed standard for a sales order, so you need to at least do some mapping. Then maybe you find out the WMS needs a reference from the transport management system that has to be merged in, and this also has to go back to the ERP (which isn't part of the standard 'send order' integration). This is why middleware programs exist.
Enterprises often end up being bundles of random applications that all need to be informed about random stuff, and it's just not practical to expect the application vendors to know what applications need to be aware/informed of what.
In the more complex systems we interact with over 6 other systems, sometimes a lot more if the customer has done acquisitions but not consolidated IT systems (one potential customer had over 20 systems we'd have to integrate with, including 5-6 different WMS systems).
We found that it's usually easiest to just get the customer to tell us what their systems can provide or need, and then make the shims on our end. We've got a few customers running BizTalk and such but as far as I can figure they're mostly just an additional point of (frequent) failure.
Don't take that away from me.
Most ERPs or similar enterprise software will have some sort of mapping layer (e.g. MuleSoft), otherwise they just won't be able to roll their software out into the real world.
Let's say we have the following:
- Salesforce conforms to a standard and sends sales orders in format "Generic Order Format"
- WMS receives sales orders in format "WMS Order Format"
- TMS receives sales orders in format "TMS Order Format"
- TMS will update Salesforce on shipment progress in format "TMS Order update format"
- WMS will also update shipment progress to Salesforce in "WMS Order update format"
So without some sort of mapping layer, how would this work? How does Salesforce understand the formats? How does it even know what software needs the messages at what intervals? What if they are legacy systems that require an EOD batch? Insisting the other vendors change their software to conform to "Generic Order Format" is just moving the issue, and means you will never make any sales.
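The kind of mapping such a layer does can be sketched in a few lines. All field names below are hypothetical; real order formats (and the TMS reference that has to be merged back in) are far messier, which is exactly why middleware exists.

```python
# Toy mapping layer between a "Generic Order Format" and a
# "WMS Order Format". All field names are hypothetical examples.

def generic_to_wms(order: dict) -> dict:
    """Map a generic sales order into the shape the WMS expects."""
    return {
        "order_ref": order["id"],
        "lines": [
            {"sku": line["product_code"], "qty": line["quantity"]}
            for line in order["lines"]
        ],
        # The WMS also needs a transport reference that is not part of
        # the generic format -- it has to be merged in from the TMS.
        "tms_ref": order.get("tms_ref"),
    }

generic = {
    "id": "SO-1001",
    "lines": [{"product_code": "ABC", "quantity": 3}],
    "tms_ref": "TMS-77",
}
print(generic_to_wms(generic))
```

Every extra system multiplies the number of these shims, and each one encodes business knowledge that no single vendor could have anticipated.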
There's always a mapping layer, but that layer could itself be modularized and interchangeable. What I have some trouble getting is the assumption that this has to be managed by a single-vendor platform.
Well the article isn’t saying that the middleware looks after all of your enterprise mapping, it just looks after mapping to/from salesforce.
Salesforce have modularised it, and it is interchangeable - you don’t have to use mulesoft, but it’s another layer in their ecosystem. You want to manually integrate with web APIs? Sure, you have that option.
I don’t get the difference between what you are describing and what mulesoft is in this instance.
As more use cases pop up I actually expect the yaml to grow considerably complex, at which point you will need to reuse yaml blocks, reference other blocks and so on. Essentially it feels to me writing yaml is no different from coding using a high level library or framework.
What is really cool about writing these apps in YAML is that it enables all developers to build a web UI. Many backend developers have the need to put a UI in front of their services but have very little desire to learn React, webpack, css, etc. And for frontend developers, they often want to focus on the consumer side of their applications, yet spend a great deal of time building administrative features. Lowdefy enables all developers to build web apps without learning additional skills - it's almost like full stack "infrastructure as code".
> you will need to reuse yaml blocks, reference other blocks and so on. Essentially it feels to me writing yaml is no different from coding using a high level library or framework.
Absolutely. We have a _ref operator which enables you to template out parts of your YAML. This helps Lowdefy apps scale really well.
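The general idea behind such a _ref operator can be sketched as a tiny resolver: wherever a node is a reference, substitute the named fragment. This is only an illustration of the concept; Lowdefy's actual semantics (e.g. referencing separate YAML files) may differ.

```python
# Toy resolver for a _ref-style operator: wherever a dict is exactly
# {"_ref": "<name>"}, substitute the named fragment. A sketch of the
# general idea, not Lowdefy's actual implementation.

FRAGMENTS = {  # in practice these would live in separate YAML files
    "header": {"type": "Title", "properties": {"content": "My App"}},
}

def resolve(node):
    if isinstance(node, dict):
        if set(node) == {"_ref"}:
            return resolve(FRAGMENTS[node["_ref"]])
        return {key: resolve(value) for key, value in node.items()}
    if isinstance(node, list):
        return [resolve(item) for item in node]
    return node

page = {"blocks": [{"_ref": "header"}, {"type": "Paragraph"}]}
print(resolve(page)["blocks"][0]["type"])  # Title
```

Once you have reuse and references like this, the config really does start to behave like code in a high-level framework.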
It is true that one has to "think like a coder" to write Lowdefy apps. But I'm wondering if this can or should be removed at all when you are working with data. Even when using Excel I need to think like an Excel developer to develop a more complicated sheet.
> But, in this modern world, is Salesforce not just an expensive, $150 per employee deployment of Postgres?
If that’s the author’s final conclusion, it’s hard to take any of the article seriously, as it completely dismisses the value and complexity of the platform, and therefore of any layers on top, which was supposed to be part of the premise of the theory…