Hacker News
Why Codeless Software Is Doomed to Fail (architectofworlds.com)
167 points by wessej 3 months ago | 146 comments



The article doesn't get to the reason that it's doomed to fail. Software developers translate between what humans want and what computers can do. The second part — what computers can do — is a well understood domain that can be modeled beautifully, and tasks involving it can be automated trivially. The first part, what humans want, is a profoundly complex anti-inductive domain that every human in history has spent their whole lives trying to understand, failing to but scratch the surface. Tasks involving concepts in the domain of human desires can, too, be automated, but the technology needed to do so would be AI-complete. In other words, if we had a technology that could automate the work of software developers, the same technology, with relatively minor changes, could be applied to automate any human occupation, including those of the people who want to automate software development.

Until then, what we can do is build abstraction on top of abstraction over what computers can do. But that is code. There's a lot of room to make application development easier, to make it require less knowledge. In a lot of cases there's no good reason it isn't easier already. But it can't really become easy until the target stops moving; what computers can do changes too fast for people who want to build a good unified set of abstractions over it.


> The first part, what humans want, is a profoundly complex anti-inductive domain that every human in history has spent their whole lives trying to understand, failing to but scratch the surface.

Yes, this! It's the inquiry into the way the data flows, what the application should do, and how to handle edge cases that is difficult for a lot of business applications. To say nothing of getting various stakeholders to agree. Code isn't fluid, it's calcified business process. That calcification requires regimented thought that developers can bring to the table (others can too, but I've seen it far more often with developers).

That's not to say that there aren't more and more domains that will be codeless. As another commenter says, if you can stay in a box, a codeless solution like Wix or wordpress.com can solve your problems quite well. But as soon as you step out...

Another issue that is never brought up is lifecycle and change management. This is a complex topic that developers spend a lot of time thinking about. Some codeless solutions version control behind the scenes, but testing and regressions are not really part of what the end user thinks about. Again, this is a question of maintainability and scale. Small companies may not need the overhead. Until they grow and suddenly they do.


> Some codeless solutions version control behind the scenes, but testing and regressions are not really part of what the end user thinks about. Again, this is a question of maintainability and scale. Small companies may not need the overhead. Until they grow and suddenly they do.

Please pardon my ignorance, as I've never worked in software development. Could you please explain what you mean by 'overhead', and also how a lack of testing/regressions in low-code programs would be a roadblock as the company or application scales?

As an aside, I love your analogy of code as "calcified business process". I've heard several low-code 'success stories' of business users without CS experience learning how to build apps, and they often comment that one of the most difficult things was learning to think in terms of the rigidity of code. Sometimes I ponder whether the true value of low-code platforms is simply that they teach non-coders how to think about business-level problems in terms of logical systems, rather than fluidly, which is how they experience them.


When a program becomes sufficiently complex, it's impossible to make a change without it affecting other parts of the program. Without tests, whether manual or automated, it's hard to know whether a change has caused old bugs to reappear or introduced new ones.

What about adding a feature that changes a fundamental assumption of the program? E.g., since the very beginning the product has only ever supported integration with Salesforce.com, so the whole system was programmed with that assumption in mind. Unfortunately, there's now a need to support other CRMs. As support for OtherCRM is added, how sure can you be that Salesforce.com support still works 100% perfectly? And then let's also add support for 30 more CRMs, because that's what the customers are using. If support isn't added for those 31 CRMs, then a (high-code) competitor could come along with a better product for cheaper.

Thus, making sure every change going into the program still works with every single one of those CRMs becomes a roadblock. Not insurmountable, but at some point the roadblock becomes an albatross around the product's neck, making it impossible to change anything.

Having tests (hopefully automated) will make sure that the product works with all the CRMs it claims to. But the overhead of writing tests takes a non-zero amount of time/resources, which (eventually) adds up.
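A minimal sketch of what such an automated suite can look like, in Python. The adapter classes and the `push_contact` contract are invented purely for illustration:

```python
# Hypothetical adapters: each CRM integration implements the same
# contract, so one regression suite can cover all of them.

class SalesforceAdapter:
    def push_contact(self, contact):
        # A real adapter would call the Salesforce API; here we just
        # echo back in the shape the rest of the system expects.
        return {"crm": "salesforce", "id": contact["email"]}

class OtherCRMAdapter:
    def push_contact(self, contact):
        return {"crm": "othercrm", "id": contact["email"]}

ADAPTERS = [SalesforceAdapter(), OtherCRMAdapter()]

def run_regression_suite():
    """Run the same contract check against every supported CRM."""
    contact = {"email": "jane@example.com"}
    supported = []
    for adapter in ADAPTERS:
        out = adapter.push_contact(contact)
        # The contract every adapter must keep honouring, release after release:
        assert "crm" in out and out["id"] == contact["email"]
        supported.append(out["crm"])
    return supported
```

The point is that adding CRM number 31 means adding one adapter, and the existing suite immediately tells you whether the other 30 still work.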


Thank you, that helped me understand the point much more clearly. It sounds like a cogent example of 'technical debt'; I'm aware of several software products that have tried to expand their capabilities by purchasing others' source code and have eventually run into similar roadblocks.


That's a very waterfall way of looking at it. "Business have requirements, we have software, if we just understand the requirements we can build the perfect system". I could hardly disagree less.

Software is deliberate, yes. You need to know what you want to do at a pretty detailed level, but that's not the challenge. The challenge is that this new technology (what software can do) is a gamechanger for most traditional businesses. It creates a lot of new opportunities by enabling ventures you couldn't even imagine before, and every time you build more, you get more opportunities. Software requires you to reevaluate what you were doing previously. That's hard.

Software is also fluid. As you get more software, you see more opportunities. As you get more, you want more. Anything that tries to treat the domain as "static and understandable" is doomed to fail. Development is not the steady march towards the perfect system. Development is the discovery of what you can do with software, and how that changes the business processes that already exist.


> Code isn't fluid, it's calcified business process. That calcification requires regimented thought that developers can bring to the table.

Well said.


Thanks! I actually wrote a blog post about this years ago: http://www.mooreds.com/wordpress/archives/46

I should revisit with more modern examples.


That and also:

- dealing with the environment. My program always works perfectly until it moves away from my laptop.

- debugging. If you don't know how to code, good luck to find the error in your system.

- interfacing. Your codeless stuff is going to work fine for everything it already knows. But new stuff is coming, and somebody will need to code for that.

Now, I do think that eventually AI will be able to manage all that, and that it will give humans a fuzzy codeless experience. But it will take A LOT of time before we get there.


> The second part — what computers can do — is a well understood domain that can be modeled beautifully, and tasks involving it can be automated trivially

I have a chipset that I'd like modeled, and can I have some trivial drivers by next Tuesday?

Modern computers are not trivial. They are probably the most complicated devices on the planet, full of features, edge conditions and screw cases that are not obvious even to very skilled developers. I've done some very fast platform bring-ups and gotten them to ship, projects organized to be very light-weight and agile (in the non-political sense) and it's still many person-years of effort and a bunch of time on the phone with vendor support, discussing chip-level bugs (can that model please include the bugs? That would help a lot!) No talk at all about pleasant abstractions, we need to ship this pile of silicon.

Visual interfaces are for toy programs that spend resources on making pretty illusions of how the world should work. Underneath the pretty glitz are piles of slain dragons and machinery that people take for granted and apparently think can be wished away. That's fine for end-users and application level folks, assuming they can get their jobs done without resorting to breaking the abstractions (and any sufficiently large or long-lived project will). But someone has to worry about memory management, concurrency and synchronization, all the drivers for fucked-up hardware that (say) locks up when you fire data at it too fast, what happens when the file system fills up or that configuration file is mangled by an upgrade. All that nasty stuff that the "drag a blue wire from point A to point B -- see, easy!" doesn't handle, and never will.

On one project we had a clock that would, very occasionally, go backwards in time. It took a calendar year of going back-and-forth with support and ultimately a phone call with the chip architect to figure out the problem (the workaround for the bug was unpleasantly expensive). It's easy to model a clock and come up with a set of expectations. It is much, much harder to make a clock work in the real world. You would not believe how hardfought some of this stuff is, for features that are "trivial".

Visual programming is great when it works -- I've certainly done my share of it -- but we should not mistake ease-of-use in some domains for revolutionizing programming in general, or (worse) expecting the paradigm to work when things get complicated and large.


Note that "trivial", as used in this context, probably means something more like "we know the domain pretty well", versus "can be completed in 5 days". There are still some ghosts, but I believe the OP's point was that it pales in comparison to the demons invoked when trying to "calcify the business process".


>what computers can do changes too fast for people who want to build a good unified set of abstractions over it.

I was with you until this line. Is it really the case? x86 has been with us for... a while, and x64 is backwards compatible. We built an abstraction on top of that: a compiler that can take a high-level language and turn it into machine instructions.

I think what people are looking for next is a compiler that can take something even higher level and "compile" it into high level language. The issue is that if you get too abstract, a standard compiler stops being able to make good guesses about your intent. However, if you built a "high level compiler" around specialized domain-specific use cases, then you might be able to make reasonable higher-level guesses about intent (though as with normal compilers, you'd need support for advanced developers to work around those assumptions).
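As a toy illustration of that idea (the spec format and generated validator are entirely made up), a "higher-level compiler" for one narrow domain might take a declarative description and produce the imperative checks for you:

```python
# A toy domain-specific "compiler": it takes a declarative field spec
# and emits an ordinary validator function. Everything here is a
# hypothetical sketch, not any real tool's format.

SPEC = {
    "age": {"type": int, "min": 0},
    "name": {"type": str},
}

def compile_validator(spec):
    """Generate a validator function from a declarative field spec."""
    def validate(record):
        for field, rules in spec.items():
            value = record.get(field)
            if not isinstance(value, rules["type"]):
                return False
            if "min" in rules and value < rules["min"]:
                return False
        return True
    return validate

validate = compile_validator(SPEC)
```

Because the domain is constrained, the "compiler" can make good guesses about intent; an escape hatch for advanced users would mean letting them replace the generated function with hand-written code.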


It's not only advances in hardware that affect what computers can do. Advances in software let computers do more things too, but we can't count those when considering automating all software development; it wouldn't make sense. And there are advances in the capabilities of computers that are neither hardware developments nor the job of software developers, namely business/legal and theoretical advancements.

So some new "things computers can do" from the last 20 years:

- Fit in our pockets

- Communicate wirelessly between any two major population centers on the planet, with high bandwidth

- Store exabytes of data

- Do excellent pattern-matching of all sorts of data

- Be close to your customers, wherever they are on the planet, without you needing to have a ton of capital upfront

- Cost as little as $7 while still being compatible with our complex software stacks

- Be powerful enough to run an Electron application

- Monitor your blood glucose levels and deliver the right amount of insulin in real time


> Be powerful enough to run an Electron application

Probably the next great milestone is being able to run multiple Electron apps in parallel...


Then a few years after that, be able to run electron apps well.


Like fusion, this will always be a few years out.


I would not put our hopes too high.


I was also hesitating over that line. Maybe it's implied that computers enter new business areas all the time, which certainly is true. I have been in the game long enough to notice the same patterns of explosive growth and associated API chaos, etc. Nowadays you can do everything in Node.js, for instance, but first you had to "reimplement all things!", except in many cases they didn't know (or care) they were reimplementing.

I usually snark on Node.js but not really in this case, I understand well when the platform imposes a moat and it's easier to just re-do than integrate. Ain't nobody got time for that... it's still something of a tragedy but it's not something we have an easy fix for.


I think you nailed it with the reference to a higher-than-high-level language being the next step. We went from binary programming (punch cards) to machine code to assembly to early low-level languages to high-level languages and then just kind of stopped there at the abstraction level. But do we really need to be manually coding in if statements, loops, data storage schemas, etc. over and over again when end users really don't care about any of those things? I hope eventually we'll get to a "super-level" language future where developers still input highly specific and organized needs, but not at the level we see today.


The real world has a whole lot of "ifs" in it. "If it's this hardware rev, do this." "If the employee is a manager with more than five reports, they need to take this training." "If the account is more than 30 days delinquent, then send a second notice. If it's more than 60 days delinquent, call. If it's more than 90 days, begin collections."

Because the real world is like that, there is no possible programming language that is going to prevent the need for "if". Users may think they don't care about that, but the requirements are full of "if".
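The delinquency rules above, written out directly (a sketch in Python, with an invented function name), are nothing but ifs:

```python
# The delinquency policy from the comment above, expressed directly.
# Even this "simple" business rule is a chain of ifs -- exactly the
# logic a no-code tool would still have to let someone express.

def collection_action(days_delinquent):
    if days_delinquent > 90:
        return "begin collections"
    if days_delinquent > 60:
        return "call"
    if days_delinquent > 30:
        return "send second notice"
    return "no action"
```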

Loops you don't need to code today - if you don't need too precise a control over execution. If you need real control, you probably need to write the loop instead of just using a comprehension.

Data storage schemas: As databases scale, efficiency of storage start to add up to real money. The user may not care, but the finance people care a great deal.


Maybe declarative programming could help with that, like Prolog/Datalog. I don't think you can go much higher than inputting only the rules and data of your app; the only thing higher than that I could imagine is an AI that could create a program written in a declarative language from only a subset of the rules, figuring out the rest by itself, which is pretty much what developers do.


> do we really need to be manually coding in if statements, loops, data storage schemas

No, not necessarily, depending on the business requirements. There's already logic programming, for example a language like Prolog. You the programmer input a very high-level description of the rules that your information must follow. The computer figures out all the imperative bits.
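A toy sketch of that idea in Python (a real Prolog system is far more general, and these facts are invented): you state parent facts, and a tiny engine derives the ancestor relation by itself:

```python
# Logic-programming in miniature: facts plus one rule
# ("ancestor" is "parent", taken transitively), with a naive engine
# that keeps deriving new pairs until nothing changes.

FACTS = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def ancestors(facts):
    """Derive all ancestor pairs from parent facts (transitive closure)."""
    derived = {(a, b) for rel, a, b in facts if rel == "parent"}
    changed = True
    while changed:
        changed = False
        for a, b in list(derived):
            for c, d in list(derived):
                if b == c and (a, d) not in derived:
                    derived.add((a, d))
                    changed = True
    return derived
```

The programmer supplied no loop over family trees and no search order; the "imperative bits" live in the engine.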


There's also block programming, e.g. you put puzzle pieces together. But mainstream languages are becoming really abstract. See for example Rust and JavaScript et al. with async/await, generators, coroutines, etc.


I've seen the exact scenario mentioned in the article play out, "the moment you need to create something in a domain that does not fit existing tools, you are already into the domain where you will eventually need programming done, and in that domain, you need a programming language, not a drawing tool."

I worked somewhere with a team of engineers who worked in LabView. They used LabView mainly for customized test equipment controllers and recorders and it worked well. They then branched into FPGAs for some straightforward dataflow signal processing and that also worked well. They then decided they were sick of how long it took software engineers to write device drivers for custom hardware and decided to do that in LabView with some tool that could generate C code from LabView. I got pulled into a meeting about it. I asked a couple important questions: how do you insert memory barriers and how can you write code that is ISR safe? It was clear the LabView folks had no idea about any of this. So I launched into a discussion about CPU architecture. They didn't really listen and everything was a hand wave, "LabView does that for you". It was clear they didn't want to hear my thoughts, so I let it go.

Fast forward 6 months and I get a frantic call from a manager if I had "bandwidth" to write a device driver in C for the same card where the LabView team was supposed to have written the driver.


Similar experience with a Q/A team using LabView. They had a tool they were comfortable with, and had inevitably created a fragile, horrid monster that looked like a plate of spaghetti. Their test framework was probably nice and intuitive for six months before rot set in, and then no one could understand it.

A new project rolled in that didn't meet the expectations of their tooling, and suddenly they had to start writing native code plugins and so forth, and they were totally at sea. They started demanding engineering resources and invasive changes to the product so their test environment could handle the thing with minimal modifications.

I remember telling one particularly obstinate contractor that we were colossally uninterested in writing drivers for their environment, and that he was either going to learn C or lose his contract. I didn't see him again, maybe his management chain told him not to come 'round the software group again, or maybe he learned C and saved the day for his team.


I gave up at:

"If this trend holds, and it seems like it does, pretty soon even a regular high school kid should be able to build the next Netflix using nothing more than an easy "click and drag" interface. This is the promise low-code platforms are built on."

I work for a low-code software platform company and at least as far as we're concerned, I cannot scream "NOT EVEN CLOSE" loudly enough.

We're for building _business applications at scale_. Not Netflix, or the next great TODO app startup for teams, or any of that. Not the same ballpark, not even the same sport.

We're for building your company's HR app. Or an app to help Legal. Or an app to help field CSRs do front-line support. Line-of-business apps.

Maybe some of them are promising this "drag and drop your way to netflix" thing. I know we aren't, nor are any of our competitors in the low-code space.


Netflix's interface isn't complicated. Surely some codeless platform is capable of something similar.

The technically hard part is the infrastructure to unicast millions of videos at a time, but that's now available from several cloud providers.

The really hard part isn't technical, it's content. Disney might be able to build the next Netflix with a codeless interface to videos on the cloud, but a high school student wouldn't.


>but that's now available from several cloud providers.

It isn't available at a price that lets you stay in business.


Well, that depends on how much you can charge for your service. You can be super inefficient if your product is unique and has a big enough market, and that's precisely what these models are good for.

Once you've proven the idea, perhaps it's time to hire a team to optimize, but until then, you can focus on the rest of the business.


Netflix actually moved from their own datacenters to AWS at one point. Though not the bulk streaming part.

https://media.netflix.com/en/company-blog/completing-the-net...


If someone could compete with Netflix and make it buildable through a click and drag interface, then they'd probably be directly competing with Netflix and not making less money basing their business on "codeless". :)


And yet... I've met sooo many non-technical startup founders who tried (and failed) to write their app in Wix (or similar). I don't know which one of them you're working for, but your industry as a whole is totally selling this dream.


Reminded me of an article estimating Netflix's annual AWS bill: $300M. Yup, drag and drop, easy peasy...


Could you elaborate on why business apps are different from other apps? Is one an easier problem than the other? Are they fundamentally different?


So-called 'business apps' is an umbrella term for relatively basic apps usually used for standard business functions. For instance, today I got sign-off on an app that will be used to store QA info on sales calls. The QA person will listen to a call and enter info into the system, which then lets them summarize info in reports and dashboards, print a QA sheet, etc. Everything stored in one place. It took me just a few days to do it all, and it is easy to customise.


What are some of the most complex apps built successfully using low-code tools?

And in what similar cases is the technology not a good fit yet?


Reminds me of what people used to say about website builders such as Dreamweaver, then about the "cloud", and more recently about "serverless".

Wix and Squarespace are more than enough for many businesses' needs. AWS has blown through the roof. And more and more people are discovering what it feels like not to have to worry about hardware and VM configuration.

ForestAdmin is also a nice case in point, supposing you have the proper stack.

Sure, there will always be some who need developers, and most "solutions" this far haven't been up to the task. But I definitely wouldn't bet that they won't ever be more than enough for most people and use cases.

For this, extensibility (via integrations and proper established languages; definitely not some absurd and ill-conceived "very friendly script language") will be key.


I don’t think the article is about “serverless”; it’s more about various BPM, RPA and similar tools and technologies that tend to pop up with snake-oil-type marketing every couple of years.

Also, serverless is arguably more complex for a business person/analyst to grasp than just telling a developer to create something they imagined.


Serverless is to sysadmins what codeless is to web developers (front and / or back).

Or serverless is to those same web developers what codeless is to business people.

Edit: also, this is an analogy


Yes, the analogy is similar; the gap, though, is much larger (even though it’s barely noticeable).

I guess eventually a new gen of Excel comes out that can write code without IF statements, FOR loops, and function calls, but until then it’s safe to say that anything writing those statements other than in code is pretty much snake oil.


The problem is that an IF ELSE statement, or even calling some functions, is easy, but most people are shit scared of it. A person of very average intelligence can do it, but most people will never dare even to try, hence the popularity of all sorts of drag & drop. Another problem is that if one isn't a developer, it is close to impossible to estimate how difficult a task could be. For instance, querying 1000 records in the database is easy, but doing the same for millions of records doesn't translate into "oh, let's just do it a few more times".


Oh, I definitely agree with you here. If it were just as "easy", AWS, GCP or Azure would have done it by now.


Serverless solutions still need someone to write the code that runs on the serverless platform. Serverless doesn't mean literally no servers; it just means the servers are 100% managed by the provider. For example, AWS Lambda still needs a developer to upload the code for the Lambda to run.
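For instance, a minimal Python Lambda handler; the platform runs it, but a developer still has to write it (the payload fields here are invented for illustration):

```python
# Minimal sketch of an AWS Lambda function handler in Python.
# AWS manages the servers, scaling, and invocation; the developer
# still supplies this code and uploads it.

def lambda_handler(event, context):
    # `event` carries the request payload; `context` carries runtime
    # metadata. This handler just builds a greeting from the payload.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}
```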


Yes. I know. This has been my job for the past months, and all my personal projects for the past 2 and a half years. Still, no sysadmin.

As for the rest. No-code / low-code solutions don't just pass nice drawings / uml diagram to a gremlins-powered CPU. They rely (and will rely) on code behind the scenes.

Analogy still stands


As someone who works at a no-code platform company, my view, confirmed by a look at our revenues, is that the author is way off. Low-code/no-code platforms are booming and will continue to boom for a long time. There are several reasons:

1) There is a huge backlog of business processes which can be made more efficient with apps.

2) There is a shortage of developers, but a surplus of knowledge workers.

3) IT departments are unable to solve all users' app needs.

4) Enterprises are managing far too many SaaS 'point solutions'.

5) Most importantly, ROI and TCO make low-code platforms very attractive.

6) The ability to quickly customize business process applications is incredibly important. Low-code platforms provide this.

7) Technologies like webhooks are making it easier for low-code app builders to create sophisticated apps tied to multiple cloud and on-prem data sources and systems.


I am amazed, everything you said is completely true and yet you are totally wrong. Sure your company is doing really well right now. You might even cash out or leave before it goes downhill but it will go downhill.

Low-Code, No-Code is too immature to build a long term sustainable business. It will ebb and flow for decades to come. Always getting closer to solving this list of problems (and the problems are real) until one day we as a society learn to understand where the value boundaries are for digital products. I don't see that happening in the next 50 years.


Unity3D game engine is a proof by counterexample that you can be lowcode for creatives and analysts while allowing engineers to fluidly drop down levels of abstraction and control the code, without big rewrites.


I think there are quite a few examples of successful low code platforms.

Excel/sheets, wordpress, MS Access, moddable games, arguably even the web itself...


I definitely agree with you there, though for moddable games... my experience has been the opposite of low code.

Modding feels like fixing a plane in flight, except someone else built it, it has two nosecones and three wings, and there are twenty other mechanics, some of whom are running around adjusting the things you just tuned up, some of whom aren't actually mechanics, they just like wrenches.

It feels closer to programming anarchy than any open source project I've participated in, and it requires being very comfortable with writing nothing but hacks. Honestly I think it requires a level of cleverness that you should not need in a programming day job, because only masses of technical debt that have gone critical need that much cleverness.

For context, my modding experience comes from Stardew Valley, Rimworld, XCOM 2, and BattleTech.


This is the key to it: knowing what low-code is good at and staying within that box, while having the ability to easily get outside that box when needed. Even with these successful examples, I have seen horrible abominations that should never have been made and are impossible to support, because we "had" to take that low-code tool one more step beyond its capability.


I work with Salesforce, which actively pushes 'point and click' configuration to naive CEOs while at the same time making billions from the ever-increasing development required to make all the stuff work. Codeless is fine for some genuinely vanilla cases, but anything beyond that requires code and will continue to for the foreseeable future. The only real thing that is changing the software field is ever-growing abstraction, which does simplify a lot of things. For instance, creating a simple CRUD web app nowadays is a relatively simple task compared to what it was 20 years ago. However, even with all the abstraction, the simplicity ends when complex business problems show up.


Ugh, just like my current CEO. The guy is "configuration over development", but he will not see the point where that configuration becomes development. In a year or two it will get so complicated that you will not be able to hire just any guy off the street; extensive training for 6 months will be needed, just like hiring a developer.

To get Salesforce working you already need a trained consultant to whom you pay $$$, and yeah, those CEOs don't see it. It's just that developers do some magic and are expensive, while it's unclear to them what a Salesforce consultant would do, and he would also be expensive.


100% this. As the author put it:

>Suddenly you find yourself looking for programmers who are well versed in the specific low-code platform that you bought, and it is a really hard task.


Yes, customisation of "codeless" platforms is typically narrow. You can easily customise the parts the developer anticipated that users would want to customise. When you need something else, customisation is limited and painful.


I make no-code ETL software for non-technical users (https://easymorph.com) and I don't think the article is correct.

A general-purpose low/no-code tool may be a utopia. But task-specific low/no-code software can be very effective and successful. And yes, it does remove/reduce dependency on developers for non-technical people. For corporate users, dealing with internal IT departments is frequently a big hassle, which is by the way one of the reasons cloud apps are becoming so popular: they reduce dependency on in-house IT departments.

That said, for low/no-code software it's important to have a decent and reliable API to be able to integrate and interoperate with external apps and systems. With such an API it doesn't really matter whether the app was developed using written code or a visual tool.

Second, have good logging/auditing capabilities.

Third, play nicely with security/administration - support LDAP, SSL, etc.


Nice product!

I agree, TFA's author doesn't seem to grasp that some tasks, with constrained inputs and outputs, might be better served with a well-designed visual tool. ETL (like Yahoo Pipes and your project) seems well-fitted.

Decades ago I built children's literacy software. I built the building blocks (sometimes visual sprites, sometimes behavior blocks), and the artists and educators took those building blocks and built a series of products. It worked nicely.


In my years in the industry I've noticed 2 things.

1) People are what make coding hard. If the people involved (designers, execs, users) could fit themselves into the simple box of what more or less comes for "free" these days, then they could do so many things via GUI. Look at cloudpipes, Zapier, If this then that, and segment.io. Each more or less integrates 100s (1000s?) of tools with each other. People like to think their business needs are unique, but most of the time it's just JSON being piped from one bucket to another.
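A sketch of that bucket-to-bucket piping in Python (the field names are invented): pull a record in one service's JSON shape, remap it to another's:

```python
# The heart of most "integration" work that tools like Zapier automate:
# reshape one service's JSON into another's. Field names here are
# hypothetical, purely for illustration.
import json

def transform(payload):
    """Map one service's JSON shape onto another's."""
    record = json.loads(payload)
    return json.dumps({
        "full_name": f'{record["first"]} {record["last"]}',
        "contact": record["email"],
    })
```

When a business's needs really do reduce to this, a GUI over the mapping is entirely adequate; the trouble starts when they don't.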

2) Lots of companies with engineers and a big tech budget are not what I'd classify as a "tech company"; they're more of a "product company". I often use this distinction: is the thing that makes this company valuable the fact that their tech is better than what others have ever done? That looks like being 10-100x more capable in a certain aspect. For example, Wix is not a tech company -- it creates a mundane thing in a slightly more accessible way. Compare that with something like Tesla -- they applied technology to the automotive space to create many multiples more efficient transport and unlock a whole new paradigm in their domain. Most of these "product" companies should be using boring tech and adapting themselves to the tech's edges so they can rapidly and inexpensively build products.


I worked in this domain for a couple of years at one of the most innovative players (www.triggre.com). These platforms fulfill a really important role - they allow a far larger class of people to automate their own day-to-day work.

They compete with Excel, MS Access and possibly WordPress. They're never going to replace actual software development, because once the problem domain gets complex enough, the person behind the wheel needs pretty much all of the skills of a good software engineer. They're also pretty much all closed platforms, so only an idiot would use them to build a product. So they're stuck in the small-scale automation niche.


I should add that it's a really interesting domain to work in. There is just so much to achieve there, and so many of the players are focusing on the wrong things that there are really big opportunities.

I still have regrets that I left because it was intellectually stimulating. It's just impossible* to work in a startup culture whilst being a GOOD parent to young children.

* When you can't stay late when needed and your colleagues are picking up the slack, you feel like a massive shit; especially when said colleagues are amazing about it.


I think the whole concept is meh. Can you create visual building blocks that are transformable to an AST? Yeah, that's no problem for a given fixed set of blocks. Can visual blocks be as communicative as written language/syntax for much more open problem spaces? Yeah, as long as you don't want to generate that AST, or as long as you hand-code the AST mapping to grow the set manually -- but at what point do these visual representations fail to convey the topic clearly?

There is a reason why you have textbooks in school and not a stack of photos. Written language is much better suited to convey deep and complex information. Sometimes both together can convey a complex idea better than either alone; however, without text to explain what the image is trying to show, the complexity an image can carry is severely limited.

Code is written language -- various forms of specialized language we use to convey these open-ended concepts. Visuals just can't be as communicative. So I firmly believe that visual blocks are relegated to small sets of problem space and not these open-ended sets of problems.


> There is a reason why you have textbooks in school and not a stack of photos. Written language is much better suited to convey deep and complex information.

There's also a reason why many of the best textbooks contain diagrams, images, charts and illustrations. Text is, in my own personal opinion, not the best way to convey deep and complex information of all types, and sometimes it's not best in isolation. Also, some people can learn through pure text while others are more visual and find it much easier to understand things when stated visually. Personally, I find I often think in boxes and lines (relationships between things, I guess), so I need pen and paper at hand when I have to think through a complex programming issue before I can turn it into textual code.

> So I firmly believe that visual blocks are relegated to small sets of problem space and not these open ended sets of problems.

Meh. I've written some open-ended code in Max/MSP and I found it to be a really pleasant and productive experience. Sadly (at the time, at least), Max was too limited in features to really take it further (at the time it didn't have any support for data structures other than a few built-in ones; I see that it now does, dunno if I just missed it or if it's newer; also, no unit testing support was a big problem), but none of its limitations are limitations of visual languages, just of that particular implementation (which, to be fair, was never intended to be a general-purpose language).

Some things really are better suited to text, of course (mathematical formulas certainly), but the best visual languages let you use text for those. And annotate things with text as you wish.

> visuals just can't be as communicative.

I guess I'm a visual thinker, because I completely disagree. Sure, visuals alone are bad at communicating, but it's never an all-or-nothing thing. I often find complex issues extremely difficult to get my head around with just text, but a nice diagram can make them trivial. "A picture is worth a thousand words" and all that.


I'm with you that pure text can be limiting as a medium for thinking and communication, and that using a wider range of visual expression like diagrams and illustrations can dramatically enhance understanding.

No visual programming environment I've seen compares to the expressivity of writing code - but I also consider myself a visual thinker, and I believe there's still a lot of potential for imaginative rethinking of what it means to code. Text is just a subset of visual communication, and there's no reason why we need to limit ourselves to what can be typed - maybe we could include dynamic interactive symbols as new "words", or program by visually constructing diagrams that include code..

The sweetspot seems to me, visual languages that also let you use code.


The ideal would be a language which has a non-ambiguous mapping between both textual and visual, so that you can freely switch between them as you wish, but I’ve never seen such a thing that was satisfactory and haven’t been able to come up with something myself. Having a visual AST or flowchart version of an otherwise textual language just doesn’t do it for me.

Failing that, maybe a hybrid thing that lets me do mathematical code and pure algorithmic work textually, but to do all of the higher level architecture and coordination, asynchronous code and stream processing visually.

Personally, I really enjoyed my experience with Max/MSP, I found it quite liberating in many ways and much (but definitely not all) of it suited my way of thinking closely enough that I could bypass the pen-and-paper to figure out complex things. I also really liked not having to name things (until I wanted to, at least) which I found made experimenting with ideas, before they were solid enough to put names to things, was also quite interesting.


A "non ambiguous mapping", in the typical case, would be way too complex on the visual side, and way too trivial on the textual side. I.e. it would be used, by and large, either for unreadable spaghetti visuals or for high-level, almost trivial descriptions, the sorts which we routinely sketch on whiteboards anyway. Actually it's sort of nice to generate these sketches from code/textual syntax, but that's basically a sort of literate programming, and this is where its utility ends.

There are visual syntaxes that are genuinely useful, but really as an aid to reasoning more than anything - so we're not that far from the "literate programming/documentation" case. I include the sorts of diagrams we routinely use, e.g. in category theory (commutative diagrams, string diagrams etc.) under those.

> but to do all of the higher level architecture and coordination, asynchronous code and stream processing visually.

Thing is, you'll still want to enter all that stuff as text - not by fiddling with a frickin' mouse. And the authoritative version of that code should be plain text as well - the system should be smart enough to cope with outside edits and do the visual formatting and layout itself if needed.


> the authoritative version of that code should be plain text as well

This is why I said the ideal would be a non ambiguous mapping: so that the text and visual code is equal and you can convert between them as you please. Unfortunately, this dream is unlikely.

> Thing is, you’ll still want to enter all that stuff as text

Why, though? I’m a very keyboard-centric computer user. I use tiling window managers, keyboard-centric editors (vim & spacemacs) and generally have my setup such that I rarely need to use the mouse. But I don’t see the value of needing this input to be through text. My experience with Max and Synthmaker showed me that using the mouse can be quick, easy and efficient. I spent a few months working with Max and having to use the mouse did not bother me as much as I thought it would. In fact, it didn’t bother me at all.

I also dream of having this visual system on an ipad (because I’m wholly unsatisfied with the existing ipad programming tools, which are typically too text centric for a touchscreen) and would like to be able to use the touchscreen to draw my boxes and lines.


> a hybrid thing that lets me do mathematical code and pure algorithmic work textually, but to do all of the higher level architecture and coordination, asynchronous code and stream processing visually

I really like this description of a possible language / IDE, where textual code is the "canonical" representation of the program, with visual layers of interaction to manipulate higher abstractions.

At the bottom, I still think it would be an abstract syntax tree, with textual and visual "views" into the program. As I've got VS Code open in front of me, a few examples come to mind:

- Minimap: a condensed visual overview and navigation of code structure

- Color picker: hover on a color value in code, and select from a color wheel, adjust transparency, with preview

- Type annotations: hover on a variable, function, class instance to see a tooltip showing their type definitions, reference link to source

These are all various ways to represent and interact with the same underlying code, and I imagine many more possibilities exist to extend the "visual programming" aspect.

A quick idea as an example, there could be a mode/view to see the entire program and its modules/dependency tree visually, move classes around, rename/refactor methods..

Whenever "codeless software" and similar topics come up, it always reminds me to re-study the work of Douglas Engelbart, Ted Nelson, Alan Kay, Seymour Papert, and Bret Victor [0].

[0] http://worrydream.com/#!2/LadderOfAbstraction


> I still think it would be an abstract syntax tree

I agree. What I meant with my comment was that the visual syntax shouldn't simply be a direct visual representation of the syntax tree, because if that's all it is, I'd rather just write in a Lisp with a graphical tree visualisation generator.

I definitely like the idea of having many possible views of the code, where a traditional textual language is just one of those.

> a few examples come to mind

That sounds quite similar to what LightTable attempted to achieve. It's nice and useful and I want to see more of it, but I don't think it goes far enough, at least for a discussion on visual languages ;)

> it always reminds me to re-study the work of Douglas Engelbart, Ted Nelson, Alan Kay, Seymour Papert, Bret Victor

Absolutely! They've been a major influence on my own line of thinking on the subject.


I do think that the "iPad" (tablet computing in general) is the very best case for "visual" interfaces to programming, yes. Although even then, you would have to find some way to keep spaghettification in check! It might also be that the best case there is more of something FORTH-like (albeit with types and other convenience features we do expect these days): minimize syntax, while making the most use of what's ultimately a rather limited input bandwidth.


You make a good point about other syntaxes like FORTH-likes. That's an interesting concept that I hadn't considered! (Note, for a more "modern" take on FORTH, check out Factor[1])

As for spaghettification, I honestly don't see visual languages as any different from textual languages in this respect. Yes, most visual code you see in the wild is horribly spaghettified, but the bulk of this code is also written by non-programmers who never learned software engineering principles (DRY, abstraction, proper naming, factoring, etc.), and textual code written by the same class of people, in my personal experience, is just as spaghettified. Sometimes worse, even, because I can't put my finger on a line and trace it; I literally have to text-search and mentally trace where named things connect.

PS: Even though I don't quite agree with you, I appreciate the discussion! This is how ideas that may eventually lead to a workable solution are formed :)

[1] https://factorcode.org/


On spaghettification (great word, it's like a constant fight against entropy): I see a parallel between visual/codeless programming and domain-specific languages (esp. of "low code" category).

As the user/application reaches a certain level of advanced usage, visual/low-code representations tend to turn into a big ball of spaghetti. As you pointed out, the reason I think has much to do with how the user is not familiar with programming concepts and organizing logic.

On the other hand, I imagine it's possible to have a language/IDE that encourages good program organization naturally and intuitively, to guide and support managing complexity..

Well, this topic is deep and endlessly fascinating. Contrary to the originally posted article and its call of "doom", I'm more hopeful that the field of software development will continue to explore new and creative approaches.


> This is why I said the ideal would be a non ambiguous mapping: so that the text and visual code is equal and you can convert between them as you please. Unfortunately, this dream is unlikely.

I can't imagine this being possible with two humans looking at the same visual display/image and aligning on its "true meaning", let alone an automated system doing it. Reduce the problem down to a fixed set of visuals with mapped meanings to an AST and you can get there, but then you have a vastly limited "language" of functions, and that explodes into a horribly complex visual tree for describing anything of non-trivial complexity.


A lot of the "software" currently being developed consists of relatively simple CRUD applications. These kinds of applications shouldn't require a developer to build. (Most developers also don't enjoy creating them, so that's a good match...)

For this category of software, there are increasingly capable, modern platforms and frameworks that don't require writing any code, such as Microsoft PowerApps, Mendix and Salesforce. That also explains the recent success of these companies.


I agree.

I think it's worth reading the success stories of Mendix/OutSystems about using their low-code tools to build complex software, like hospital management systems or hospital analytics systems, and then trying to form an opinion.

https://ryortho.com/breaking/kermit-using-mendix-helping-hos...

https://www.outsystems.com/blog/introducing-sapphire-hospita...


> A little more than 12 years ago, building and hosting a website was a serious undertaking, costing thousands, and requiring many experts.

Certainly not - any teenager could do it alone even more than twelve years ago. I remember using Notepad - before syntax highlighting was really a thing - to build a site in PHP and hosting it with Apache. The only cost was the domain registration fee.


You built a webpage, not a website. A website is more than a collection of pages; it is the interactions between the pages. It is about saving the shopping cart (what if the user opens several tabs and adds things in each tab?). It is about updating the user on stock levels, which change as other users order items -- without giving away data on the other users, who might not want the public to be able to guess what they ordered. It is about saving session state. It is about the user coming back the next day to check shipping.

Note that I don't write websites. I only have vague clues as to what issues real website developers face. The above is a very incomplete overview of the types of issues a website developer needs to figure out that don't apply to a webpage.

Today websites are common enough that people have figured out all the common issues and built tools. Now a website is easier because we know of solutions to common problems, so you can just use them.


Plenty of solo amateur web developers, some of them teenagers, were doing this stuff alone 12+ years ago. I was one of them. I'm not saying my code was elegant or secure, but it tried to solve the problems you have described (maybe not working in multiple tabs, that wasn't popular at the time). You could develop sites with sessions and databases in PHP using Notepad, an FTP client, and a web host with a database management admin panel. I didn't find out about version control until years later.


I wrote stuff like that while I was studying, 12 years ago, and even before during my teenage time. Did everything - design, code, frontend, backend, hosting and operations. Probably my biggest success was a database site for the game World of Warcraft, which also had a distributed data collection mechanism by which hundreds of thousands of users could upload data gathered while playing the game to my site, where it would all be distilled into a database, of which a special, minified copy was then compiled and offered to be downloaded by the players right into the game, to be used while playing as a knowledge base. And alongside of that, people could query the database via a web frontend that used all the latest shit (AJAX was a big deal back then, reactive layouts were in their infancy, but I had one, and I even wrote a 3D model viewer in the browser and something like Google Maps to view pre-rendered maps of the game world that looked like satellite images). That thing was 60k LOC Java (data processing and website), 30k LOC Lua (for the addon in the game), about 5k LOC ActionScript, some hundred lines of PHP and Bash scripts, and about 5-10k LOC of C++ for the native client to do data uploads and downloads.

I eventually sold it and maintained it for 7 years total and then it was abandoned because the site didn't catch on enough, and the game itself assimilated lots of the functionality provided by my in-game database (which did catch on massively with the players) so that successful service became redundant over time. But there you've got it: a website plus ancillary software and services, an entire distributed system, built by one person.

And I knew others who did similar stuff - lots and lots of them. It was very common back then to just write some web app by yourself, put it online and see if someone is interested enough to use it. Much more common than today, as it seems, because today pretty much anything already exists, or if it doesn't, it is probably because there is some kind of legal risk involved that you can't just take anymore as a solo person without a big company backing you up.


Why do you assume indentit didn't build a site with authentication and session state and such? PHP itself gives you a lot of tools for handling requests and storing state, and 12 years ago you already had RoR-inspired frameworks like CakePHP.


I can assure you that syntax highlighting really was a thing before PHP and Apache was a thing. Maybe not on anything that came with Notepad, though.


Indeed, Wikipedia dates it back as far as 1982 (which is further back than I would have guessed.)


i was that teenager 12 years ago building websites on notepad.... good times before a proper IDE.


I think the author is limiting his perspective to that of a software engineer.

No web developer is charmed by the limited capabilities of Wix, but it's good enough for the static website many (if not most) businesses are looking for. A CRUD app made by a low-code platform may be limited in its functionality, but it's often good enough for a typical business need. To say it's doomed to fail is an overstatement.


I disagree with codeless software being doomed to fail.

I’ve built lots of applications without code (and teach others how @ https://www.makerpad.co/) one being a mini Netflix ‘clone', Airbnb ‘clone’ and Fiverr ‘clone’.

They may not be able to withstand Netflix-level users and usage, but why should every startup/business have to aim for that? They could serve hundreds or thousands of users and turn over great revenue for those running them.


how many products with users have you built without code?

while very valuable for learning, i find that folks building thematic maker sites (no code, react/node, rails, etc.) are very biased towards their own way and often haven't made a "real" product built on what they profess.

engineers would be better off using "no code" more often, and non-engineers would be better off learning just enough react, python, etc. to ship something.

otherwise, everything begins to look like a hammer.


Few legitimate businesses want clones of other consumer tools.


no of course not. But many are built in the same way.

The only reason these tutorials are built as 'clones' is to show the features for others to tweak for their own purposes.


I think that's the point of the article - the moment you try to "tweak" a clone/"codeless"/"softwareless" tool with customizations, you end up writing code, hiring developers, and eventually scrapping and rewriting said clone.


> Every couple of years, the hope surfaces that a simple graphical interface will replace teams of developers. Business people to quickly and easily create beautiful expressions of their ideas and launch them into production seamlessly. A handful of startups in every generation take up this challenge, and they mostly fail.

well, I used to work at a company where a lot of the software was written in Max/MSP (https://1cyjknyddcx62agyb002-c74projects.s3.amazonaws.com/fi...). Sure, it wasn't very maintainable, but it allowed one person to iterate very fast.

I've also used some CRUD apps which were entirely made in MS Access (D&D 3.5 eTools :) ).


Thanks for this great example. I love MaxMSP and I participate in NYU ITP Camp every year where I am amazed at how many creative problems I can solve using MaxMSP. You are right about these solutions not scaling as enterprise applications, but they are valuable within the right context and the ability to share sketches is also great for learning to go deeper. If you don't mind sharing, what's the company that uses MaxMSP in their workflow? I love finding these creative examples and am curious for added context.


I think there is a natural threshold of inherent complexity in any problem to solve. Codeless or low-code software can only go so far and can never succeed beyond this threshold. It is the same point where the abstractions of the codeless software become leaky [1]. And because codeless or low-code means heavy abstraction, users will hit this problem quite early, even with simple problems.

[1] https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...


The author is considering the wrong context.

Services get more fine-grained every year, and composition of these services is where the money will be made in the future. AI might resolve some of the issues with finding the right services for the right use cases & even integrating them.

Custom, home-made services will be a conscious choice, and might be composed of other services that are publicly available or not. Everything else will use off-the-shelf services.

My .02€


I dunno, I see the pendulum swinging back again. Or it might just be my contrariness. But building anything decent out of stitching together various third-party services with the baling twine and duct-tape of HTTP... it's not going to end well, and I think people are realising this.


SquareSpace would like to have a word with you.

The truth is, this space just gets better every year. You can get more and more done with less and less. Sure, you still can't build arbitrary programs without at the very least a general-purpose programming language, but if you can constrain the solution space just a little bit you'd be very surprised at just how far you can get without needing to break out a text editor.


There are 3 kinds of tasks in society.

Tasks that are currently automated. Tasks that could be automated using input from automated tasks but are currently done manually. Tasks that depend on other manual tasks.

Naive people tend to look at this and think "There are not a lot of tasks in the center there, soon we will have automated everything we could possibly automate!". But what actually happens is that every time someone automates something it opens up more things to be automated.

Example: Automated location detection and good map APIs allowed Uber to automate away basically all the overhead of conventional taxi services. Of course, since those things were automated for everyone, it wasn't hard for conventional taxi services to start providing similar features, but that kind of automation wouldn't have been possible without all the automation efforts of the past decades.


This article seems to be responding to the strawman argument that all code will be replaced by visual tools. I don't think this is true, any more than that high level programming languages have replaced all uses of assembly language. At the end of the day, there are applications that care enough about performance that you need to hire someone who deeply understands the microarchitecture of your chip.

That said, Excel has largely replaced traditional text-based languages in a significant set of domains, and I don't see any reason that pattern can't be repeated elsewhere. There was a time when summing a bunch of data required a "programmer"; today that is only true for unusually large datasets. You can certainly define Excel as a programming language, but it doesn't feel that way to most of its users.


Back in the 90's some guy wrote a paper explaining that programming languages are in fact _design_ languages; whereas in most other kinds of engineering, drawings are the superior design tool, in programming, language works better. Empirically/experimentally speaking, you always end up with a more concise description when you describe a program in language. I suppose this is because programming derives heavily from the domains of formal logic and mathematics, which are also heavy on language rather than pictures, and maybe more specifically because you aren't usually modeling physical _things_. But the key thing is to recognize that you don't write programs, you write descriptions of programs - the compiler/interpreter is usually doing the rest.


Vendors have been selling various iterations of the "non-technical people can draw diagrams on the screen and then run it" idea for decades.

There are 2 main problems with this idea: 1) it is still code, so you have to think like a coder; 2) any non-trivial routine fills pages of "diagram" and is arguably harder to follow than the 20 lines of code that would have sufficed.

An example of (1): At a pretty big early 00's e-commerce site, we decided to let a couple managers implement some "promotions" (conditional discounts) using the commerce suite's drag-n-drop feature. Add an inkjet printer to your cart, get free photo paper. Great. But they forgot to add the reversal logic, so crafty customers quickly figured out how to get reams of free paper! Oops.
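That anecdote has a concrete fix worth sketching: instead of imperatively adding the free item when the printer goes in (and forgetting to take it out again), derive the promo items from the paid cart on every change, so the reversal falls out for free. The SKUs and cart model below are invented for illustration, not from the original system.

```python
PRINTER = "inkjet-printer"
FREE_PAPER = "photo-paper"

def promo_items(paid_cart: list) -> list:
    # Recompute free items from scratch on every cart change. The
    # reversal logic is then automatic: remove the printer and the
    # free paper disappears with it.
    return [FREE_PAPER] if PRINTER in paid_cart else []

cart = [PRINTER]
print(promo_items(cart))   # the discount applies
cart.remove(PRINTER)
print(promo_items(cart))   # ...and reverses itself
```

The drag-n-drop tool in the story only let the managers express the "add" half of this rule, which is exactly problem (1): you still have to think like a coder about the inverse case, whether or not you're typing code.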


I used Oracle ADF by force :S

I was consulting for companies using it as their main Java framework. While you can write Java code, most of the time you don't, because it gets generated (along with lots of XML), and you need special hardware, an IDE, a build server, etc...

Needless to say, the end result was junk in each case, sometimes so bad that a dedicated Oracle tech crew couldn't fix it in 6 months. Any time you step outside the ADF comfort zone (which happens about 10 minutes after you've tried CRUD), you don't know what to do, or you have amazing performance problems (the team was using a huge dedicated server which could easily have run 100+ services, yet struggled to run the single one).

There is nothing else as horrible, and it's even framed as an MVC framework. I would rather use Scratch or Lolcat.


> Every couple of years, the hope surfaces that a simple graphical interface will replace teams of developers

Wait! Just because the code is graphical instead of textual doesn't mean it's "codeless" or that non-developers can use it effectively, in a way that the results are maintainable. For example, Max/MSP is used by musicians and artists, quite successfully (so... not doomed to fail!), but much of the code they generate is a complete mess. This isn't the fault of Max/MSP the language so much as that the users aren't programmers and therefore never learned to reason about problems the way a programmer does, or to apply software engineering best practices like... abstraction. Similarly bad textual code is also written by non-programmers, so this isn't just an issue with graphical tools.

You might argue that there's a difference between graphical code like in Max/MSP and purely model-based techniques that try to hide the code-ness even more, but I'm not qualified to comment on those tools because I've not used them.

> Why do people keep trying to breathe life into a solution that is so obviously doomed to fail

I've personally had great success with Max/MSP and Synthmaker, and plenty of people have had success with visual shader generation tools, Unreal Engine's Blueprints, SCADE Studio and whatnot, so I don't think it's "obviously" doomed at all.


> The reality is that whenever you are building software with any level of custom functionality, as in it does not come in the box, you are going to need people who are comfortable writing code.

In other words, if you are making anything with real value that doesn't already exist, you are not going to be able to make it through some kind of WYSIWYG.

On the other hand, just like DSLs are sometimes appropriate, writing a _domain-specific_ WYSIWYG could be appropriate, especially if it's for internal use and not the product itself.


I developed a graphical drag and drop bioinformatics app for the biologists at the hospital I used to work at. I’m not sure if it was more efficient than a command line for a skilled practitioner, but it ended up working pretty well all things considered.

As long as what you’re doing is not Turing complete, I think WYSIWYG has value.


Funny to read how something that will make millions or billions of dollars while creating tens of thousands of applications is a failure. Sure low-code isn't perfect and is being oversold by the people that benefit from it. What commercial software isn't oversold, overpromised, and underdelivered? Name just one if you can.

This has all happened before and will all happen again. Remember Visual Basic. It had all of the benefits and drawbacks of Low-Code. For some uses, it was great and exactly what was needed. For others, it was a horrible trainwreck.

That said, the author is right. It is true for any framework or tool, once you get outside what it does well, you are in for a world of hurt. He is also right in that none or almost none of the Low-Code tools of this generation will make it to the next. Anyone remember PowerBuilder? It was great for about 5 years and OK for about 5 more after that. It is long dead now. Same is true for Cold Fusion. What a great tool for about 5 years.

Every once in a while I still see VB apps out in the wild, but I don't know a person who will admit to knowing anything about VB. About 10 years after this current Low-Code revolution dies, another will start. Each time we get a little closer to actually understanding what Low-Code is really good at and keeping it in that box. At this rate humanity will have a great handle on Low-Code in 2457.


An anti-example is admin dashboard generation, with the ability to customize the parts that actually require customization.

Admin dashboard is an interesting problem. To me, to build an MVP means to quickly scaffold an Admin dashboard. Then you can iterate the domain model until it fits the requirements.

Sadly, there isn't yet a successful open-source project for quickly crafting a customizable admin dashboard.

Note: Codeless in this case means you configure the dashboard through a JSON file instead of a canvas.
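As a rough sketch of what such a JSON config could look like -- the schema below is entirely invented for illustration, since every real dashboard generator defines its own -- one resource might be declared as:

```json
{
  "resource": "customers",
  "list": {
    "columns": ["id", "email", "created_at"],
    "sortable": ["created_at"]
  },
  "form": {
    "fields": [
      { "name": "email", "type": "string", "required": true },
      { "name": "plan", "type": "enum", "options": ["free", "pro"] }
    ]
  },
  "permissions": { "delete": ["admin"] }
}
```

The generator scaffolds the CRUD screens from this, and the "customize some parts" escape hatch would then be overriding a single field or view with hand-written code.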


Most machine-learning models and AI's are "codeless software" as well, in a very real sense (while the architectures/abstract models that specify their parameterizations and the format of their inputs and outputs may be a sort of code, the estimated/"learned" parameters themselves are not!) I generally assume that they too are doomed to fail, and for the very same reasons.


Block diagram based "codeless" software might not be a one-solution fits all, but it's certainly effective in a number of domains. I'm currently making the transition from writing physical simulation code in Fortran, Python, and C++ to Simulink with great effect. And "codeless" Labview has long dominated equipment controllers in laboratory settings.


If you like Simulink, you will love Modelica. Steep learning curve, but it blew my mind once I understood it.


Coming at this from a slightly different angle - I'm working on a document generation service [1], and I have an integration with Bubble [2]. I actually have quite a few customers on Bubble who are using my service to fill out PDF documents.

"No-code" is a similar metaphor to "serverless": of course there are still servers, but you don't have to worry about managing or scaling them. Or more accurately, you pay a premium for someone else to manage and scale them. For "no-code" platforms, someone is still writing and maintaining all of the code. They just give you an API (or a visual UI) to link everything together.

It's actually pretty amazing what you can build with Bubble, and there are a huge number of plugins [3].

[1] https://formapi.io

[2] https://bubble.is

[3] https://bubble.is/plugins


Ugh. I got on board in 1990, COBOL was still around and 4GLs too. Nowadays there are focused efforts to cater to 'citizen developers' with pared down scripts.

They all fail to live up to the hype. I think it's because you have to have a good general understanding of how things work to properly build (and initially test) software.


I am under the impression that low code platforms thrive because you need less skilled people to work with them, and still are able to build your app/whatever.

Yes it's still better to have skilled engineers, but they're expensive and hard to find nowadays, and not every company is even capable of hiring them effectively.


You can add a (2018):

<!-- <span class="meta">Posted by <a href="#">Start Bootstrap</a> on August 24, 2018</span> -->


In my first job as a grad I had to work with this absolutely wonderful software called Siebel CRM.

The whole idea behind it was just like the article described: a WYSIWYG in which business people could draw boxes and lines, and code would be generated from it.

However it was never that simple, plus Oracle eventually acquired it and added even more fluff onto it.

Thankfully I was developing around the CRM and just had to draw some lines to represent an API being consumed.

I later found out that what I just mentioned is a fourth generation language (4GL) and there are many of them out there.


Declarative configuration can work great if you understand the domain really well, you cover it, and it doesn't change very much or very fast or in fundamental ways that upend the problem.


They are successful for programming signal processors. For example Digital Audio Workstations, Shader Graphs, and UDK's Kismet.

ETL looks like something you could reduce to a signal processing problem. Likewise, general-purpose web development looks like something you could reduce to an ETL problem.

So what's the problem, why do these graphical programming languages fail? Because the problems they try to solve are not actually reducible to signal processing.
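For contrast, here's what the block-diagram model assumes when it does work: every stage is a pure transform, and a program is just stages wired in sequence. A toy Python sketch of that pipeline-of-stages idea (all names invented):

```python
from functools import reduce

def pipeline(*stages):
    """Compose stages left-to-right, like blocks wired in a diagram."""
    return lambda x: reduce(lambda acc, stage: stage(acc), stages, x)

# A toy ETL-ish flow: parse -> filter -> aggregate.
process = pipeline(
    lambda rows: [r.split(",") for r in rows],      # extract
    lambda rows: [r for r in rows if r[1] != ""],   # transform
    lambda rows: {r[0]: r[1] for r in rows},        # load
)

print(process(["a,1", "b,", "c,3"]))  # {'a': '1', 'c': '3'}
```

Real applications break this model as soon as stages need shared state, error recovery, or feedback loops, which is the commenter's point about irreducibility.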


Code is simply the best UI we have for building software


Super interesting article that has reflected some conversations at past workplaces with coworkers. I can't seem to find who actually wrote the article. The About and Contact pages are blank, and there's no attribution on the article. This is also the only post for the entire site. I can't find a date for when this was written. Is @wessej the author of the article?


I think it's slightly more nuanced: it's "codeless software that tries to replicate all the use cases and functions of coded software" that is doomed to fail.

Look at Zapier: while they do (now) have some options that let you build your own functions, etc., their bread and butter is replacing custom cross-service integrations with a handful of clicks and some structured data.


If you want to express what something looks like, draw a picture. If you want to express how something behaves, use words.

Graphical tools work great for creating static sites, because you mostly just care what they look like. If you have complex behavior to describe, however, typing will always be easier than creating a massive flow chart.


As a product manager, part of me dreams the dream of software I can deliver to clients without long drawn-out meetings and discussions with engineers on the pros/cons of different approaches, hacks vs. the right way, etc. If anyone is based in London, UK and working on something in this field, contact me.


I always think of it like this:

We already try and make coding as fast and easy as possible. The software industry is not a racket that is hiding some magic tool. If we had a magic bullet that any end-user could use to build their software, we would all be using it already.


There is a "shadow IT" project happening right now in my company where they are taking my database and dumping it into Salesforce so they can do low-code. The new CEO does not think we need expensive IT people. I have my popcorn.


"Diagrams will replace code" = "Flowcharts will replace novels"


Is there actually a platform where I could read a flowchart or story-board version of a novel? I can think of quite a few novels where this would have made my reading experience much better.


Comics / graphic novels.


Twine gives you something similar, but I think you are looking for something else.


Thanks, Twine looks cool. I've never heard of it before. It is kind of what I was thinking about - being able to read a story (in this case a branching one) by going over the diagram. Actually, thinking of it, Detroit: Become Human had a similar nice display of narrative choices.


That is kind of what a comic book is, isn't it?


...and there's a reason why comic books, like visual programs, are relatively rare outside of niche industries.


I don't think we are too far off and AI will be the key. Sometimes I like to remember that anything a computer can do can be done by thousands of people in an office. Take a program you've worked on and think about how you would have implemented it with people instead of code, where each person is a function and the data gets passed on a piece of paper. The designed workflow would be easily described to people, and any exact processing would need to be demonstrated to the worker. The super fine details of processing are usually pretty easy to understand as well as the high level. It should just take some AI assisted translating to find the appropriate processing models for the high level requests, with the ability to learn by demonstration for the edge cases.


I don't think you've solved the "low code" issue at all. You still have to define the process to the same level of detail; you're just thinking about it differently.

What your approach does do is create a system that is easy to parallelize. You want to run on thousands of cores/threads? Make each "person" a thread, and there you go.


I mean, thousands of people are highly concurrent while computers are not, so I don’t think the analogy really holds. If you had people as “functions” and organized them like a computer program, only maybe one person out of a thousand would be doing anything at one time.


It is a mental exercise to re-frame the problem. Sure, it would be inefficient, but the point is that every stage could be done with reinforcement learning from a high-level abstract design without ever dealing with 'code'.


A middle ground, maybe: define function parameter types and return types and write a bunch of tests, then have a fuzzing compiler generate the actual function code, perhaps with the help of machine learning.
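A pure brute-force toy of that idea (no ML, just enumeration): treat the tests as the spec and search a tiny space of candidate functions until one passes them all. Everything here is invented for illustration.

```python
import itertools
import operator

# "Specification by tests": input/output pairs the generated
# function must satisfy. The hidden target here is x*2 + 1.
tests = [(0, 1), (1, 3), (5, 11)]

# Tiny search space of candidates of the form f(x) = op2(op1(x, a), b).
ops = [operator.add, operator.mul]
consts = range(4)

def synthesize(tests):
    """Brute-force search for a candidate passing every test."""
    for op1, a, op2, b in itertools.product(ops, consts, ops, consts):
        f = lambda x, op1=op1, a=a, op2=op2, b=b: op2(op1(x, a), b)
        if all(f(i) == o for i, o in tests):
            return f
    return None  # no candidate in the search space

f = synthesize(tests)
print(f(10))  # 21, i.e. 10*2 + 1
```

Which also illustrates the objection in the sibling comment: the synthesized function is only as trustworthy as the tests, since any behavior the tests don't pin down is up for grabs.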


How will you be able to trust such generated code? What if you forgot/mistyped one of your tests?


Same way you trust current compilers.


>At first glance, the idea has merit. There are more than enough examples of tasks that used to be really hard, but today are simple. One such example is the task of building a website.

Well, with money, yes.


I think the author is right, but I am trying anyway to make a low code system at yazz.com. Doomed to fail, but some projects are interesting to make!


It is doomed to fail because logic can't be abstracted away, and that's the very foundation of programming and what makes it hard.


It can't be abstracted away, but it can be abstracted and raised to a higher level. Not all logic can in all cases, of course, but you can certainly abstract many cases of domain-specific logic to something higher level that you might be able to represent in a low/no code system.
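A toy sketch of what "raising domain logic to a higher level" can look like: rules expressed as data, interpreted by a small generic engine. The rule vocabulary here (`required`, `min`) is invented for illustration, not taken from any real low-code product.

```python
# Domain rules as data: this is the part a no-code UI could expose.
RULES = [
    {"field": "email", "check": "required"},
    {"field": "age",   "check": "min", "value": 18},
]

# The engine: fixed code written once by programmers.
CHECKS = {
    "required": lambda v, rule: v not in (None, ""),
    "min":      lambda v, rule: v is not None and v >= rule["value"],
}

def validate(record, rules=RULES):
    """Return the fields whose declared rule failed."""
    return [r["field"] for r in rules
            if not CHECKS[r["check"]](record.get(r["field"]), r)]

print(validate({"email": "a@b.com", "age": 17}))  # ['age']
```

The logic hasn't disappeared; it has moved into the `CHECKS` vocabulary, and anything outside that vocabulary still requires a programmer.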


Higher-level building blocks still need logic to be assembled just the same. If they don't, they are just configuration. That's why in every one of these "no-code" use cases, they ended up hiring programmers to build it and keep it running. And then the programmers spend the rest of their days cursing whoever built the system that ties their hands.


Low-code feels like déjà vu. I call them throwaway applications:

built quickly to satisfy a short-term need.

There is a space for those, but it doesn't fit everything.


Holy grails still do not exist, but for some use cases the old tools worked fine. A more balanced take: https://nocomplexity.com/nocode-solutions/


At some point we're going to run out of lollipops.


My counterpoint is Excel.


Excel


Excel is a bad database?

I think the OP was trying too hard to be critical of popular things.

Wix is a website builder with database features; having a back end has a purpose.

Using Excel as a traditional database is not what Excel is for; it's just often good enough at storing data that people use it like one.

Anyway, this seemed like a pop-programming article: lots of Python and coding love, and bashing of WYSIWYG and Microsoft products.


Half of the financial industry runs on Excel.

The other half pretends it doesn't, while frantically trying to migrate away from it with varying degrees of success.



