Until then, what we can do is build abstraction on top of abstraction over what computers can do. But that is code. There's a lot of room to make application development easier, to make it require less knowledge. In a lot of cases there's no good reason it isn't easier already. But it can't really become easy until the target stops moving; what computers can do changes too fast for people who want to build a good unified set of abstractions over it.
Yes, this! It's the inquiry into the way the data flows, what the application should do, and how to handle edge cases that is difficult for a lot of business applications. To say nothing of getting various stakeholders to agree. Code isn't fluid, it's calcified business process. That calcification requires regimented thought that developers can bring to the table (others can too, but I've seen it far more often with developers).
That's not to say that there aren't more and more domains that will be codeless. As another commenter says, if you can stay in a box, a codeless solution like Wix or wordpress.com can solve your problems quite well. But as soon as you step out...
Another issue that is never brought up is lifecycle and change management. This is a complex topic that developers spend a lot of time thinking about. Some codeless solutions version control behind the scenes, but testing and regressions are not really part of what the end user thinks about. Again, this is a question of maintainability and scale. Small companies may not need the overhead. Until they grow and suddenly they do.
Please pardon my ignorance, as I've never worked in software development. Could you please explain what you mean by 'overhead', and also how a lack of testing/regressions in low-code programs would be a roadblock as the company or application scales?
As an aside, I love your analogy of code as "calcified business process". I've heard several low-code 'success stories' of business users without CS experience learning how to build apps, and they often comment that one of the most difficult things was learning to think in terms of the rigidity of code. Sometimes I ponder whether the true value of low-code platforms is simply that they teach non-coders how to think about business-level problems in terms of logical systems, rather than fluidly, which is how they experience them.
What about adding a feature that changes a fundamental assumption of the program? E.g., since the very beginning, the product has only ever supported integration with Salesforce.com, so the whole system was programmed with that assumption in mind. Unfortunately, there's now a need to support other CRMs. As support for OtherCRM is added, how sure can you be that Salesforce.com support still works 100% perfectly? And then let's also add support for 30 more CRMs, because that's what the customers are using. If support isn't added for those 31 CRMs, then a (high-code) competitor could come along with a better product for cheaper.
Thus, making sure every change going into the program still works with every single one of those CRMs becomes a roadblock. Not insurmountable, but at some point, the roadblock becomes an albatross around the neck of the product, making it impossible to make any changes to the product.
Having tests (hopefully automated) will make sure that the product works with all the CRMs it claims to. But the overhead of writing tests takes a non-zero amount of time/resources, which (eventually) adds up.
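To make the overhead concrete, here's a minimal Python sketch (all the class and function names here are invented for illustration) of why every added CRM multiplies the surface that every future change has to be tested against:

```python
# Minimal sketch: each CRM gets an adapter with the same interface,
# and one shared regression suite runs against all of them.
# All names here are hypothetical, for illustration only.

class SalesforceAdapter:
    def create_contact(self, name, email):
        return {"name": name, "email": email, "source": "salesforce"}

class OtherCRMAdapter:
    def create_contact(self, name, email):
        return {"name": name, "email": email, "source": "othercrm"}

ADAPTERS = [SalesforceAdapter(), OtherCRMAdapter()]  # ...plus 30 more

def run_regression_suite():
    """Every change to the core product must pass this for every adapter."""
    failures = []
    for adapter in ADAPTERS:
        contact = adapter.create_contact("Ada", "ada@example.com")
        if contact.get("email") != "ada@example.com":
            failures.append(type(adapter).__name__)
    return failures

# The suite grows linearly with adapters, but the maintenance cost grows
# with (adapters x features x releases) -- that's the overhead.
```

The suite itself is cheap to run; the cost is that every one of those adapters has to keep passing it on every change, forever.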
Software is deliberate, yes. You need to know what you want to do at a pretty detailed level, but that's not the challenge. The challenge is that this new technology (what software can do) is a gamechanger for most traditional businesses. It creates a lot of new opportunities by enabling ventures you couldn't even imagine before, and every time you build more, you get more opportunities. Software requires you to reevaluate what you were doing previously. That's hard.
Software is also fluid. As you get more software, you see more opportunities. As you get more, you want more. Anything that tries to treat the domain as "static and understandable" is doomed to fail. Development is not the steady march towards the perfect system. Development is the discovery of what you can do with software, and how that changes the business processes that already exist.
I should revisit with more modern examples.
- dealing with the environment. My program always works perfectly until it moves away from my laptop.
- debugging. If you don't know how to code, good luck finding the error in your system.
- interfacing. Your codeless stuff is going to work fine for everything it already knows. But new stuff is coming. Somebody will need to code for that.
Now I do think that eventually AI will be able to manage all that, and that it will give humans a fuzzy codeless experience. But it will take A LOT of time before we get there.
I have a chipset that I'd like modeled, and can I have some trivial drivers by next Tuesday?
Modern computers are not trivial. They are probably the most complicated devices on the planet, full of features, edge conditions and screw cases that are not obvious even to very skilled developers. I've done some very fast platform bring-ups and gotten them to ship, projects organized to be very light-weight and agile (in the non-political sense) and it's still many person-years of effort and a bunch of time on the phone with vendor support, discussing chip-level bugs (can that model please include the bugs? That would help a lot!) No talk at all about pleasant abstractions, we need to ship this pile of silicon.
Visual interfaces are for toy programs that spend resources on making pretty illusions of how the world should work. Underneath the pretty glitz are piles of slain dragons and machinery that people take for granted and apparently think can be wished away. That's fine for end-users and application level folks, assuming they can get their jobs done without resorting to breaking the abstractions (and any sufficiently large or long-lived project will). But someone has to worry about memory management, concurrency and synchronization, all the drivers for fucked-up hardware that (say) locks up when you fire data at it too fast, what happens when the file system fills up or that configuration file is mangled by an upgrade. All that nasty stuff that the "drag a blue wire from point A to point B -- see, easy!" doesn't handle, and never will.
On one project we had a clock that would, very occasionally, go backwards in time. It took a calendar year of going back-and-forth with support and ultimately a phone call with the chip architect to figure out the problem (the workaround for the bug was unpleasantly expensive). It's easy to model a clock and come up with a set of expectations. It is much, much harder to make a clock work in the real world. You would not believe how hardfought some of this stuff is, for features that are "trivial".
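That's also why most platforms now expose a monotonic clock alongside wall-clock time, and why duration measurements should never use the wall clock. A small Python illustration:

```python
import time

# time.time() reports wall-clock time, which can jump backwards: NTP
# corrections, manual changes, or (as above) outright hardware bugs.
# time.monotonic() is guaranteed by the platform never to go backwards,
# which makes it the right tool for measuring durations.

def measure(fn):
    start = time.monotonic()
    fn()
    # With a monotonic clock this difference can never be negative;
    # with time.time() it occasionally could be.
    return time.monotonic() - start

elapsed = measure(lambda: sum(range(100_000)))
assert elapsed >= 0
```

Of course, this abstraction only exists because somebody already fought the hardware on your behalf.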
Visual programming is great when it works -- I've certainly done my share of it -- but we should not mistake ease-of-use in some domains for revolutionizing programming in general, or (worse) expecting the paradigm to work when things get complicated and large.
I was with you until this line. Is it really the case? x86 has been with us for... a while, and x64 is backwards compatible. We built an abstraction on top of that, a compiler that can take high level languages and turn it into machine instructions.
I think what people are looking for next is a compiler that can take something even higher level and "compile" it into high level language. The issue is that if you get too abstract, a standard compiler stops being able to make good guesses about your intent. However, if you built a "high level compiler" around specialized domain-specific use cases, then you might be able to make reasonable higher-level guesses about intent (though as with normal compilers, you'd need support for advanced developers to work around those assumptions).
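As a toy illustration of such a "higher-level compiler", here's a Python sketch that turns a narrow, domain-specific spec into ordinary generated code (the spec format is entirely made up):

```python
# A toy "higher-level compiler": it takes a domain-specific spec
# (here, a tiny query language) and emits ordinary Python source.
# Everything here is invented for illustration.

def compile_query(spec):
    """Turn {'filter': <expr>, 'select': <field>} into a Python function."""
    src = (
        "def query(rows):\n"
        f"    return [row[{spec['select']!r}] for row in rows "
        f"if {spec['filter']}]\n"
    )
    namespace = {}
    exec(src, namespace)  # the generated "high level language" code
    return namespace["query"]

adults = compile_query({"filter": "row['age'] >= 18", "select": "name"})
rows = [{"name": "Ada", "age": 36}, {"name": "Kim", "age": 12}]
print(adults(rows))  # -> ['Ada']
```

Because the spec is so constrained, the "compiler" can make good guesses about intent; loosen the domain and those guesses fall apart, which is exactly the problem described above.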
So some new "things computers can do" from the last 20 years:
- Fit in our pockets
- Communicate wirelessly between any two major population centers on the planet, with high bandwidth
- Store exabytes of data
- Do excellent pattern-matching of all sorts of data
- Be close to your customers, wherever they are on the planet, without you needing to have a ton of capital upfront
- Cost as little as $7 while still being compatible with our complex software stacks
- Be powerful enough to run an Electron application
- Monitor your blood glucose levels and deliver the right amount of insulin in real time
Probably the next great milestone is being able to run multiple Electron apps in parallel...
I usually snark on Node.js but not really in this case, I understand well when the platform imposes a moat and it's easier to just re-do than integrate. Ain't nobody got time for that... it's still something of a tragedy but it's not something we have an easy fix for.
Because the real world is like that, there is no possible programming language that is going to prevent the need for "if". Users may think they don't care about that, but the requirements are full of "if".
Loops you often don't need to code today, as long as you don't need precise control over execution. If you do need real control, you probably have to write the loop yourself instead of just using a comprehension.
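In Python, for instance, that dividing line shows up as comprehension vs. explicit loop (a generic sketch, nothing platform-specific):

```python
# Comprehension: fine when you just want "all the results".
squares = [n * n for n in range(10)]

# Explicit loop: needed once you want finer control over execution --
# early exit, state carried across iterations, side effects.
def squares_until_limit(limit):
    out = []
    for n in range(10):
        sq = n * n
        if sq > limit:
            break  # early exit can't be expressed in a comprehension
        out.append(sq)
    return out

print(squares_until_limit(20))  # -> [0, 1, 4, 9, 16]
```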
Data storage schemas: As databases scale, the efficiency of storage starts to add up to real money. The user may not care, but the finance people care a great deal.
No, not necessarily, depending on the business requirements. There's already logic programming, for example a language like Prolog. You the programmer input a very high-level description of the rules that your information must follow. The computer figures out all the imperative bits.
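For a rough feel of that declarative style, here is a toy Python imitation (real Prolog does far more, with unification and backtracking; this is only an illustration of the idea that you state rules and the engine derives the rest):

```python
# Facts: (relation, subject, object) tuples, stated declaratively.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def grandparent_rule(facts):
    """parent(X, Y) and parent(Y, Z) => grandparent(X, Z)."""
    derived = set()
    for (rel1, x, y1) in facts:
        for (rel2, y2, z) in facts:
            if rel1 == rel2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

print(grandparent_rule(facts))
# -> {('grandparent', 'alice', 'carol')}
```

The caller never writes the nested-loop "imperative bits" in real logic programming; the engine owns them, which is the appeal.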
I worked somewhere with a team of engineers who worked in LabView. They used LabView mainly for customized test equipment controllers and recorders and it worked well. They then branched into FPGAs for some straightforward dataflow signal processing and that also worked well. They then decided they were sick of how long it took software engineers to write device drivers for custom hardware and decided to do that in LabView with some tool that could generate C code from LabView. I got pulled into a meeting about it. I asked a couple important questions: how do you insert memory barriers and how can you write code that is ISR safe? It was clear the LabView folks had no idea about any of this. So I launched into a discussion about CPU architecture. They didn't really listen and everything was a hand wave, "LabView does that for you". It was clear they didn't want to hear my thoughts, so I let it go.
Fast forward 6 months and I get a frantic call from a manager if I had "bandwidth" to write a device driver in C for the same card where the LabView team was supposed to have written the driver.
A new project rolled in that didn't meet the expectations of their tooling, and suddenly they had to start writing native code plugins and so forth, and they were totally at sea. They started demanding engineering resources and invasive changes to the product so their test environment could handle the thing with minimal modifications.
I remember telling one particularly obstinate contractor that we were colossally uninterested in writing drivers for their environment, and that he was either going to learn C or lose his contract. I didn't see him again, maybe his management chain told him not to come 'round the software group again, or maybe he learned C and saved the day for his team.
"If this trend holds, and it seems like it does, pretty soon even a regular high school kid should be able to build the next Netflix using nothing more than an easy "click and drag" interface.
This is the promise low-code platforms are built on."
I work for a low-code software platform company and at least as far as we're concerned, I cannot scream "NOT EVEN CLOSE" loudly enough.
We're for building _business applications at scale_. Not Netflix, or the next great TODO app startup for teams, or any of that. Not the same ballpark, not even the same sport.
We're for building your company's HR app. Or an app to help Legal. Or an app to help field CSRs do front-line support. Line-of-business apps.
Maybe some of them are promising this "drag and drop your way to netflix" thing. I know we aren't, nor are any of our competitors in the low-code space.
The technically hard part is the infrastructure to unicast millions of videos at a time, but that's now available from several cloud providers.
The really hard part isn't technical, it's content. Disney might be able to build the next Netflix with a codeless interface to videos on the cloud, but a high school student wouldn't.
It isn't available at a price that lets you stay in business.
Once you've proven the idea, perhaps it's time to hire a team to optimize, but until then, you can focus on the rest of the business.
And in what similar cases is the technology not a good fit yet?
Wix and Squarespace are more than enough for many businesses' needs. AWS has blown through the roof. And more and more people are discovering what it feels like not to have to worry about hardware and VM configuration.
ForestAdmin is also a nice case in point, supposing you have the proper stack.
Sure, there will always be some who need developers, and most "solutions" so far haven't been up to the task. But I definitely wouldn't bet against these tools eventually being more than enough for most people and use cases.
For this, extensibility (via integrations and proper established languages; definitely not some absurd and ill-conceived "very friendly script language") will be key
Also, serverless is arguably more complex for a business person/analyst to grasp than just telling a developer to create something they imagined.
Or serverless is to those same web developers what codeless is to business people.
Edit: also, this is an analogy
I guess eventually a new gen of Excel comes out that can write code without IF, FOR loops, and function calls, but until then it’s safe to say anything writing those statements other than in code is pretty much snake oil.
As for the rest. No-code / low-code solutions don't just pass nice drawings / uml diagram to a gremlins-powered CPU. They rely (and will rely) on code behind the scenes.
Analogy still stands
Low-Code, No-Code is too immature to build a long term sustainable business. It will ebb and flow for decades to come. Always getting closer to solving this list of problems (and the problems are real) until one day we as a society learn to understand where the value boundaries are for digital products. I don't see that happening in the next 50 years.
Excel/sheets, wordpress, MS Access, moddable games, arguably even the web itself...
Modding feels like fixing a plane in flight, except someone else built it, it has two nosecones and three wings, and there are twenty other mechanics, some of whom are running around adjusting the things you just tuned up, some of whom aren't actually mechanics, they just like wrenches.
It feels closer to programming anarchy than any open source project I've participated in, and it requires being very comfortable with writing nothing but hacks. Honestly I think it requires a level of cleverness that you should not need in a programming day job, because only masses of technical debt that have gone critical need that much cleverness.
For context, my modding experience comes from Stardew Valley, Rimworld, XCOM 2, and BattleTech.
To get Salesforce you already need a trained consultant who you pay $$$, and yeah, those CEOs don't see it. To them, developers just do some magic and are expensive, whereas I don't understand what a Salesforce consultant would do, and he would also be expensive.
>Suddenly you find yourself looking for programmers who are well versed in the specific low-code platform that you bought, and it is a really hard task.
A general-purpose low/no-code tool may be a utopia. But task-specific low/no-code software can be very effective and successful. And yes, it does remove/reduce dependency on developers for non-technical people. For corporate users, dealing with internal IT departments is frequently a big hassle. Which is, by the way, one of the reasons why cloud apps are becoming so popular: they reduce dependency on in-house IT departments.
Although, for low/no-code software it's important to have a decent and reliable API to be able to integrate and interoperate with external apps and systems. With such an API it doesn't really matter whether the app was developed using written code or a visual tool.
Second, have good logging/auditing capabilities.
Third, play nicely with security/administration - support LDAP, SSL, etc.
I agree, TFA's author doesn't seem to grasp that some tasks, with constrained inputs and outputs, might be better served with a well-designed visual tool. ETL (like Yahoo Pipes and your project) seems well-fitted.
Decades ago I built children's literacy software. I built the building blocks (sometimes visual sprites, sometimes behavior blocks), and the artists and educators took those building blocks and built a series of products. It worked nicely.
1) People are what make coding hard. If the people involved (designers, execs, users) could fit themselves into the simple box of what more or less comes for "free" these days, then they could do so many things via GUI. Look at Cloudpipes, Zapier, If This Then That, and segment.io. Each integrates 100s (1000s?) of tools with each other. People like to think their business needs are unique, but most of the time it's just JSON being piped from one bucket to another.
2) Lots of companies w/ engineers and a big tech budget are not what I'd classify as a "Tech company", instead they're more of a "Product company". I often use this distinction -- is the thing that makes this company valuable the fact that their tech is better than what others have ever done? This looks like 10-100x more capable in a certain aspect. For example Wix is not a tech company -- it creates a mundane thing in a slightly more accessible way. Compare that with something like Tesla -- they applied technology to the automotive space to create many multiples of more efficient transport to unlock a whole new paradigm in their domain. Most of these "product" companies should be using boring tech and adapting themselves to the tech's edges so they can rapidly and inexpensively build products.
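The "JSON piped from one bucket to another" point above really is this small in the typical case; a Python sketch (all field names invented for illustration):

```python
import json

# Fetch a record from system A, rename a few fields, push it to
# system B. This is what most "integrations" boil down to.

def transform(record_from_a):
    """The typical integration: a small, boring field mapping."""
    return {
        "full_name": record_from_a["name"],
        "email_address": record_from_a["email"],
    }

incoming = json.loads('{"name": "Ada Lovelace", "email": "ada@example.com"}')
outgoing = json.dumps(transform(incoming))
print(outgoing)
```

Tools like Zapier are essentially a GUI over thousands of these mappings; the hard part is the long tail of fields and edge cases, not the piping.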
They compete with Excel, MS Access and possibly WordPress. They're never going to replace actual software development, because once the problem domain gets complex enough, the person behind the wheel needs pretty much all of the skills of a good software engineer. They're also pretty much all closed platforms, so only an idiot would use them to build a product. So they're stuck in the small-scale automation niche.
I still have regrets that I left because it was intellectually stimulating. It's just impossible* to work in a startup culture whilst being a GOOD parent to young children.
* When you can't stay late when needed and your colleagues are picking up the slack it feels like sh!t and makes you feel like a massive shit; especially when said colleagues are amazing about it.
There is a reason why you have textbooks in school and not a stack of photos: written language is much better suited to conveying deep and complex information. Sometimes both together can convey a complex idea better than either in a silo; however, without text to explain what an image is trying to show, the complexity the image can carry is severely limited.
Code is written language, and just happens to be various forms of specialized language we use to convey these open ended concepts. Visuals just can't be as communicative. So I firmly believe that visual blocks are relegated to small sets of problem space and not these open ended sets of problems.
There's also a reason why many of the best textbooks contain diagrams, images, charts and illustrations. Text is, in my own personal opinion, not the best way to convey deep and complex information of all types and sometimes its not best in isolation. Also, some people can learn through pure text and other people are more visual and find it much easier to understand things when stated visually. Personally, I find I often think in boxes and lines (relationships between things, I guess) so need pen and paper at hand when I need to think through a complex programming issue before I can turn into into textual code.
> So I firmly believe that visual blocks are relegated to small sets of problem space and not these open ended sets of problems.
Meh. I've written some open ended code in Max/MSP and I found it to be a really pleasant and productive experience. Sadly (at the time, at least), Max was too limited in features to really take it further (at the time it didn't have any support for data structures other than a few built-in ones; I see that it now does, dunno if I just missed it or if it's newer; also, no unit testing support was a big problem), but none of its limitations are limitations of visual languages, just of that particular implementation (which, to be fair, was never intended to be a general purpose language).
Some things really are better suited to text, of course (mathematical formulas certainly), but the best visual languages let you use text for those. And annotate things with text as you wish.
> visuals just can't be as communicative.
I guess I'm a visual thinker, because I completely disagree. Sure, visuals alone are bad at communicating, but it's never an all-or-nothing thing. I often find complex issues extremely difficult to get my head around with just text, but a nice diagram can make it trivial. "A picture is worth a thousand words" and all that.
No visual programming environment I've seen compares to the expressivity of writing code - but I also consider myself a visual thinker, and I believe there's still a lot of potential for imaginative rethinking of what it means to code. Text is just a subset of visual communication, and there's no reason why we need to limit ourselves to what can be typed - maybe we could include dynamic interactive symbols as new "words", or program by visually constructing diagrams that include code..
The sweetspot seems to me, visual languages that also let you use code.
Failing that, maybe a hybrid thing that lets me do mathematical code and pure algorithmic work textually, but to do all of the higher level architecture and coordination, asynchronous code and stream processing visually.
Personally, I really enjoyed my experience with Max/MSP, I found it quite liberating in many ways and much (but definitely not all) of it suited my way of thinking closely enough that I could bypass the pen-and-paper to figure out complex things. I also really liked not having to name things (until I wanted to, at least) which I found made experimenting with ideas, before they were solid enough to put names to things, was also quite interesting.
There are visual syntaxes that are genuinely useful, but really as an aid to reasoning more than anything - so we're not that far from the "literate programming/documentation" case. I include the sorts of diagrams we routinely use, e.g. in category theory (commutative diagrams, string diagrams etc.) under those.
> but to do all of the higher level architecture and coordination, asynchronous code and stream processing visually.
Thing is, you'll still want to enter all that stuff as text - not by fiddling with a frickin' mouse. And the authoritative version of that code should be plain text as well - the system should be smart enough to cope with outside edits and do the visual formatting and layout itself if needed.
This is why I said the ideal would be an unambiguous mapping: so that the textual and visual code are equivalent and you can convert between them as you please. Unfortunately, this dream is unlikely.
> Thing is, you’ll still want to enter all that stuff as text
Why, though? I’m a very keyboard-centric computer user. I use tiling window managers, keyboard-centric editors (vim & spacemacs) and generally have my setup such that I rarely need to use the mouse. But I don’t see the value of needing this input to be through text. My experience with Max and Synthmaker showed me that using the mouse can be quick, easy and efficient. I spent a few months working with Max and having to use the mouse did not bother me as much as I thought it would. In fact, it didn’t bother me at all.
I also dream of having this visual system on an ipad (because I’m wholly unsatisfied with the existing ipad programming tools, which are typically too text centric for a touchscreen) and would like to be able to use the touchscreen to draw my boxes and lines.
I really like this description of a possible language / IDE, where textual code is the "canonical" representation of the program, with visual layers of interaction to manipulate higher abstractions.
At the bottom, I still think it would be an abstract syntax tree, with textual and visual "views" into the program. As I've got VS Code open in front of me, a few examples come to mind:
- Minimap: a condensed visual overview and navigation of code structure
- Color picker: hover on a color value in code, and select from a color wheel, adjust transparency, with preview
- Type annotations: hover on a variable, function, class instance to see a tooltip showing their type definitions, reference link to source
These are all various ways to represent and interact with the same underlying code, and I imagine many more possibilities exist to extend the "visual programming" aspect.
A quick idea as an example, there could be a mode/view to see the entire program and its modules/dependency tree visually, move classes around, rename/refactor methods..
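Python's own ast module already works this way: the source text and the syntax tree are two views of the same program, and tooling converts between them. A quick sketch:

```python
import ast

# Source text is one view of the program; the syntax tree is another.
source = "total = price * quantity"
tree = ast.parse(source)

# A visual editor could render this tree as boxes and lines, let you
# rearrange it, then serialize it back to canonical text
# (ast.unparse requires Python 3.9+):
assert isinstance(tree.body[0], ast.Assign)
print(ast.unparse(tree))  # -> total = price * quantity
```

Refactoring tools and formatters already manipulate the tree rather than the text, so a "visual view" would be one more client of the same underlying structure.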
Whenever "codeless software" and similar topics come up, it always reminds me to re-study the work of Douglas Englebart, Ted Nelson, Alan Kay, Seymour Papert, Bret Victor .
I agree. What I meant with my comment was that the visual syntax shouldn't simply be a direct visual representation of the syntax tree, because if that's all it is, I'd rather just write in a Lisp with a graphical tree visualisation generator.
I definitely like the idea of having many possible views of the code, where a traditional textual language is just one of those.
> a few examples come to mind
That sounds quite similar to what LightTable attempted to achieve. It's nice and useful and I want to see more of it, but I don't think it goes far enough, at least for a discussion on visual languages ;)
> it always reminds me to re-study the work of Douglas Englebart, Ted Nelson, Alan Kay, Seymour Papert, Bret Victor
Absolutely! They've been a major influence on my own line of thinking on the subject.
As for spaghettification, I honestly don't see visual languages any different from textual languages in this respect. Yes, most visual code you see in the wild is horribly spaghettified, but the bulk of this code is also written by non-programmers who never learned software engineering principles (DRY, abstractions, proper naming, factoring, etc etc) and textual code written by the same class of people, in my personal experience, is just as spaghettified. Sometimes worse, even, because I can't put my finger on a line and trace it, I literally have to text search and mentally trace where named things connect.
PS: Even though I don't quite agree with you, I appreciate the discussion! This is how ideas that may eventually lead to a workable solution are formed :)
As the user/application reaches a certain level of advanced usage, visual/low-code representations tend to turn into a big ball of spaghetti. As you pointed out, the reason I think has much to do with how the user is not familiar with programming concepts and organizing logic.
On the other hand, I imagine it's possible to have a language/IDE that encourages good program organization naturally and intuitively, to guide and support managing complexity..
Well, this topic is deep and endlessly fascinating. Contrary to the originally posted article and its call of "doom", I'm more hopeful that the field of software development will continue to explore new and creative approaches.
I can't imagine this being possible with two humans looking at the same visual display/image and aligning on its "true meaning", let alone an automated system doing it. Reduce the problem down to a fixed set of visuals with mapped meanings onto an AST and you can get there, but then you have a vastly limited "language" of functions, and that explodes into a horrible and complex visual tree once you describe any non-trivial complexity.
For this category of software, there are increasingly better and more modern platforms and frameworks that don't require writing any code, such as Microsoft PowerApps, Mendix and Salesforce. That also explains the recent success of these companies.
I think it's worth reading the success stories of Mendix/OutSystems about using their low-code tools to build complex software, like hospital management systems or hospital analytics systems, and then trying to form an opinion.
Certainly not - any teenager could do it alone even more than twelve years ago. I remember using Notepad - before syntax highlighting was really a thing - to build a site in PHP and hosting it with Apache. The only cost was the domain registration fee.
Note that I don't write websites. I only have vague clues as to what issues real website developers face. The above is a very incomplete overview of the types of issues a website developer needs to figure out that don't apply to a webpage.
Today websites are common enough that people have figured out all the common issues and built tools. Now a website is easier because we know the solutions to common problems, so you can just use them.
I eventually sold it and maintained it for 7 years total and then it was abandoned because the site didn't catch on enough, and the game itself assimilated lots of the functionality provided by my in-game database (which did catch on massively with the players) so that successful service became redundant over time. But there you've got it: a website plus ancillary software and services, an entire distributed system, built by one person.
And I knew others who did similar stuff - lots and lots of them. It was very common back then to just write some web app by yourself, put it online and see if someone is interested enough to use it. Much more common than today, as it seems, because today pretty much anything already exists, or if it doesn't, it is probably because there is some kind of legal risk involved that you can't just take anymore as a solo person without a big company backing you up.
No web developer is charmed by the limited capabilities of Wix, but it's good enough for the static website many (if not most) businesses are looking for. A CRUD app made by a low-code platform may be limited in its functionality, but it's often good enough for a typical business need. To say it's doomed to fail is an overstatement.
I’ve built lots of applications without code (and teach others how @ https://www.makerpad.co/) one being a mini Netflix ‘clone', Airbnb ‘clone’ and Fiverr ‘clone’.
They may not be able to withstand Netflix level of users and usage but why should every startup/business have to aim for that. It could service hundreds/thousands of users and turnover great revenue for those running it.
while very valuable for learning, i find that folks building thematic maker sites (no code, react/node, rails, etc.) are very biased towards their own way and often haven't made a "real" product built on what they profess.
engineers would be better off using "no code" more often, and non-engineers would be better off learning just enough react, python, etc. to ship something.
otherwise, everything begins to look like a hammer.
The only reason these tutorials are built as 'clones' is to show the features for others to tweak for their own.
well, I used to work in a company where a lot of software was written in Max/MSP (https://1cyjknyddcx62agyb002-c74projects.s3.amazonaws.com/fi...). Sure, it wasn't very maintainable, but it allowed one person to iterate very fast.
I've also used some CRUD apps which were entirely made in MS Access (D&D 3.5 eTools :) ).
Services get more fine-grained every year, and composition of these services is where the money will be made in the future. AI might resolve some of the issues with finding the right services for the right use cases & even integrating them.
Custom, home-made services will be a conscious choice, and might be composed of other services that are publicly available or not. Everything else will use off-the-shelf services.
The truth is, this space just gets better every year. You can get more and more done with less and less. Sure, you still can't build arbitrary programs without at the very least a general-purpose programming language, but if you can constrain the solution space just a little bit you'd be very surprised at just how far you can get without needing to break out a text editor.
Tasks that are currently automated.
Tasks that you can automate via input from automated tasks, but are currently done manually.
Tasks which depend on other manual tasks.
Naive people tend to look at this and think "There are not a lot of tasks in the center there, soon we will have automated everything we could possibly automate!". But what actually happens is that every time someone automates something it opens up more things to be automated.
Example: Automated detection of location and good map APIs allowed Uber to automate away basically all the overhead of conventional taxi services. Of course, since those things were automated for everyone, it wasn't hard for conventional taxi services to start providing similar features, but that kind of automation wouldn't have been possible without all the automation efforts of the past decades.
That said, Excel has largely replaced traditional text-based languages in a significant set of domains, and I don't see any reason that pattern can't be repeated elsewhere. There was a time when summing a bunch of data required a "programmer", and today that is only true for unusually large datasets. You can certainly define Excel to be a programming language, but it doesn't feel that way to most of its users.
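To make the Excel point concrete: what a single `=SUM(B2:B4)` formula does once required a small program. A minimal illustrative sketch with made-up data (not from the original comment):

```python
# What a spreadsheet formula like =SUM(B2:B4) replaces:
# reading a column of (label, value) rows and summing it by hand.
rows = [("widgets", 12.5), ("gadgets", 7.25), ("gizmos", 3.0)]

# Sum the numeric column, ignoring the labels.
total = sum(price for _, price in rows)
print(total)  # 22.75
```

For most users the formula version of this never feels like "programming", which is exactly the parent's point.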
There are 2 main problems with this idea:
1) it is still code, so you have to think like a coder
2) any non-trivial routine fills pages of "diagram" and is arguably harder to follow than the 20 lines of code that would have sufficed.
An example of (1):
At a pretty big early 00's e-commerce site, we decided to let a couple managers implement some "promotions" (conditional discounts) using the commerce suite's drag-n-drop feature. Add an inkjet printer to your cart, get free photo paper. Great. But they forgot to add the reversal logic, so crafty customers quickly figured out how to get reams of free paper! Oops.
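The missing reversal logic is easy to sketch in code. A hypothetical minimal version of the promotion rule (names invented; the actual commerce suite used a drag-n-drop UI, not Python):

```python
# Hypothetical sketch of the promotion the managers configured:
# "add an inkjet printer to your cart, get free photo paper."

def apply_promotion(cart):
    """The rule as configured: add free paper when a printer is present.
    Missing: what to do when the printer is REMOVED."""
    if "inkjet_printer" in cart and "photo_paper_free" not in cart:
        cart.append("photo_paper_free")
    return cart

def apply_promotion_with_reversal(cart):
    """The fixed rule: also remove the free paper when the printer leaves."""
    if "inkjet_printer" in cart:
        if "photo_paper_free" not in cart:
            cart.append("photo_paper_free")
    elif "photo_paper_free" in cart:
        cart.remove("photo_paper_free")  # the reversal step that was forgotten
    return cart

# The exploit: add printer, receive paper, remove printer -- paper stays.
cart = apply_promotion(["inkjet_printer"])
cart.remove("inkjet_printer")
cart = apply_promotion(cart)                # free paper survives
cart = apply_promotion_with_reversal(cart)  # fixed rule removes it
```

The bug isn't in the drag-n-drop tool; it's that writing correct rules still requires thinking through every state transition, which is exactly the "you have to think like a coder" problem.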
I was consulting for companies using it as their main Java framework for all their work. While you can write Java code, most of the time you don't, because it gets generated (along with lots of XML), and you need special hardware, a dedicated IDE, a build server, etc...
Needless to say, the end result was in each case junk, sometimes so big that a dedicated Oracle tech crew couldn't fix it in 6 months. Any time you step outside the ADF comfort zone (which happens about 10 minutes after you've tried CRUD), you don't know what to do, or you hit amazing performance problems (one team was using a huge dedicated server that could easily have run 100+ services, just to run a single one badly).
There is nothing else as horrible as it is; it's even framed as an MVC framework. I would rather use Scratch or Lolcat.
Wait! Just because the code is graphical instead of textual doesn't mean it's "codeless" or that non-developers can use it effectively, in a way that the results are maintainable. For example, Max/MSP is used by musicians and artists, quite successfully (so... not doomed to fail!), but much of the code they generate is a complete mess. This isn't the fault of Max/MSP the language so much as the fact that its users aren't programmers and therefore never learned to reason about problems the way programmers do, or to learn software engineering best practices like... abstraction. Similarly bad textual code is also written by non-programmers, so this isn't just an issue with graphical tools.
You might argue that there's a difference between graphical code like in Max/MSP and purely model-based techniques that try to hide the code-ness even more, but I'm not qualified to comment on those tools because I've not used them.
> Why do people keep trying to breathe life into a solution that is so obviously doomed to fail
I've personally had great success with Max/MSP and Synthmaker, and plenty of people have had success with visual shader generation tools, Unreal Engine's Blueprints, SCADE Studio, and whatnot, so I don't think it's "obviously" doomed at all.
In other words, if you are making anything with real value that doesn't already exist, you are not going to be able to make it through some kind of WYSIWYG.
On the other hand, just like DSLs are sometimes appropriate, writing a _domain-specific_ WYSIWYG could be appropriate, especially if it's for internal use and not the product itself.
As long as what you’re doing is not Turing complete, I think WYSIWYG has value.
This has all happened before and will all happen again. Remember Visual Basic. It had all of the benefits and drawbacks of Low-Code. For some uses, it was great and exactly what was needed. For others, it was a horrible trainwreck.
That said, the author is right. It is true for any framework or tool, once you get outside what it does well, you are in for a world of hurt. He is also right in that none or almost none of the Low-Code tools of this generation will make it to the next. Anyone remember PowerBuilder? It was great for about 5 years and OK for about 5 more after that. It is long dead now. Same is true for Cold Fusion. What a great tool for about 5 years.
Every once in a while I still see VB apps out in the wild, but I don't know a person who will admit to knowing anything about VB. About 10 years after this current Low-Code revolution dies, another will start. Each time we get a little closer to actually understanding what Low-Code is really good at and keeping it in that box. At this rate humanity will have a great handle on Low-Code in 2457.
Admin dashboard is an interesting problem. To me, to build an MVP means to quickly scaffold an Admin dashboard. Then you can iterate the domain model until it fits the requirements.
Sadly, there aren't enough successful open source projects for quickly crafting a customizable admin dashboard.
Note: Codeless in this case means you configure the dashboard through a JSON file instead of a canvas.
"No-code" is a similar metaphor to "serverless" - of course there's still servers, but you don't have to worry about managing or scaling them. Or more accurately, you pay a premium for someone else to manage and scale them. For "no-code" platforms, someone is still writing and maintaining all of the code. They just give you an API (or a visual UI) to link everything together.
It's actually pretty amazing what you can build with Bubble, and there are a huge number of plugins.
They all fail to live up to the hype. I think it's because you have to have a good general understanding of how things work to properly build (and initially test) software.
Yes it's still better to have skilled engineers, but they're expensive and hard to find nowadays, and not every company is even capable of hiring them effectively.
The whole idea behind it was just like the article described; a WYSIWYG in which business people could draw boxes and lines and code would be generated for it.
However it was never that simple, plus Oracle eventually acquired it and added even more fluff onto it.
Thankfully I was developing around the CRM and just had to draw some lines to represent an API being consumed.
I later found out that what I just mentioned is a fourth generation language (4GL) and there are many of them out there.
ETL looks like something you could reduce to a signal processing problem. Likewise, general-purpose web development looks like something you could reduce to an ETL problem.
So what's the problem, why do these graphical programming languages fail? Because the problems they try to solve are not actually reducible to signal processing.
Look at Zapier, while they do (now) have some options that let you build your own functions, etc. their bread and butter is replacing custom cross service integrations with a handful of clicks and some structured data.
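The kind of custom cross-service integration those clicks replace is usually just glue code: poll one service, reshape a few fields, post to another. A hypothetical sketch (service URLs and field names invented for illustration; this is not Zapier's API):

```python
# Hypothetical glue script of the kind Zapier-style tools replace.
# All endpoints and field names here are made up.
import json
import urllib.request

def fetch_new_rows(source_url, since_id):
    """Poll the source service for records newer than since_id."""
    with urllib.request.urlopen(f"{source_url}?since={since_id}") as resp:
        return json.load(resp)

def map_fields(row):
    """The 'structured data' step: rename/reshape fields between services."""
    return {"title": row["subject"], "body": row["description"]}

def push_row(dest_url, payload):
    """Deliver the mapped record to the destination service."""
    req = urllib.request.Request(
        dest_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

None of this is hard, but it is exactly the kind of repetitive plumbing that a handful of clicks and a field-mapping UI can replace for the common cases.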
Graphical tools work great for creating static sites, because you mostly just care what they look like. If you have complex behavior to describe, however, typing will always be easier than creating a massive flow chart.
We already try and make coding as fast and easy as possible. The software industry is not a racket that is hiding some magic tool. If we had a magic bullet that any end-user could use to build their software, we would all be using it already.
What your approach does do is create a system that is easy to parallelize. You want to run on thousands of cores/threads? Make each "person" a thread, and there you go.
Well, with money, yes.
Built quickly to satisfy a short-term need.
There is a space for those but it doesn't fit all.
I think OP was trying too hard to be critical of popular things.
Wix is a website builder with database features; having a back end has purpose.
Using Excel as a traditional database is not what Excel is designed for. Excel is just often good enough at storing data that people use it like a database.
Anyway this seemed like a Pop Programming article. Lots of Python and coding love, bashing WYSIWYG and Microsoft products.
The other half pretends they are not, while frantically trying to migrate away from it with varying degrees of success.