Has UML died without anyone noticing? (garba.org)
656 points by azhenley 50 days ago | 586 comments



UML's promise was that with detailed enough diagrams, writing code would be trivial, or the code could even be generated automatically (there are UML tools that can generate code). It was developed during a time when there was a push to make Software Engineering a licensed profession. UML was going to be the "blueprints" of code, and software architects would develop UML diagrams much as building architects create blueprints for houses. But as it turned out, that was a false premise. The real blueprints for software ended up being the code itself. And the legacy of UML lives on in simpler box-and-arrow diagrams.


IBM used to push the adoption of their business process software for exactly the same reason. They imagined that "business process experts" would use UML to construct the entire business process, and then the software (based on WebSphere Application Developer, an Eclipse-based IDE) would generate all the execution code, including deployment scripts. The irony is that the UML itself became more complex than the code, and dozens of layers of exception traces were simply incomprehensible to engineers, let alone to "business process experts". To add insult to injury, IBM mandated generating tons of EJBs. Even thinking of that induces a migraine.

P.S. It's surprising that those who advocate that UML is better than code didn't understand that the essential complexity would not go away simply because we switched languages, as essential complexity lies in precisely specifying a system. Neither did they understand that a programming language offers more powerful constructs and tools for managing complexity than UML does.


Well said.

That is why I am fundamentally skeptical of the current push for Low Code or even No Code. It seems like people just don't really learn from the past.


> Low Code or even No Code

I look at the code I'm writing (mostly C# these days) and consider the _information-theoretic_ view of it, and I see a major problem: even syntactically terse languages (like C# compared to its stablemate VB.NET) still require excessive, even redundant code in many places. For example, prior to C# 9.0, defining an immutable class required you to repeat names three times and types twice (constructor parameters, class properties, and assignment in the constructor body), which alone is a huge time-sink.

The most tedious work I do right now is adding "one more" scalar or complex data-member that has to travel from Point A in Method 1 to Point B in Method 2 - I wish I could ctrl+click in my IDE and say "magically write code that expresses the movement of this scalar piece of data from here to here" and that would save me so much time.

At least in modern languages like C# 9.0, Kotlin, and Swift (and C++ with heavy abuse of templates) a lot of the tedium can be eliminated - but not in the granddaddy of OOP languages: Java. Still to this day, I have absolutely no idea how people can write Java for line-of-business applications (its bread-and-butter!) and remain sane from having to manually manage data-flow, implementing Java Beans, and manually writing by-hand getter-and-setter methods...
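
As a rough Java illustration of the kind of repetition being described (the class and field names here are made up; the same shape exists in pre-9.0 C#):

  // Pre-records style: "createdAt" has to be typed in the field declaration,
  // the constructor parameter, the constructor assignment, and the accessor.
  final class AuditEntry {
      private final String user;
      private final long createdAt;

      AuditEntry(String user, long createdAt) {
          this.user = user;
          this.createdAt = createdAt;
      }

      String user() { return user; }
      long createdAt() { return createdAt; }
  }

  // Java 16+ records (or C# 9.0 records) collapse that into one declaration:
  record AuditRecord(String user, long createdAt) {}

Adding "one more" field means touching every one of those places in the first version, and exactly one place in the second.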


> but not in the granddaddy of OOP languages: Java. Still to this day, I have absolutely no idea how people can write Java for line-of-business applications (its bread-and-butter!) and remain sane from having to manually manage data-flow, implementing Java Beans, and manually writing by-hand getter-and-setter methods...

Because... We don't. IDEs and code generators have replaced a lot of the more stupid boilerplate. Not that there isn't a lot of stupid boilerplate in Java but it's been greatly reduced by tooling.

Still, I don't work with Java because I like it, I work with Java because it works and has a great ecosystem. The tooling around it makes it bearable, and without it I'd definitely have jumped ship a long time ago.

I'm not only a Java developer, I've worked with Go, Python, Clojure, Ruby, JavaScript, Objective-C, PHP and down to ASP 3.0. Java is still the language that employed me the most and the longest, I have no love for it apart from the huge ecosystem (and the JVM gets some kudos) but it works well for larger codebases with fluid teams.


Ward Cunningham once noted that because a programmer can input incomplete, or minimal, code, then push a few buttons to autogenerate the rest, there's a smaller language inside Java. Since he made that remark a number of other JVM languages have sprung up that try to be less verbose. One of them is Groovy, which uses type inference and other semantics to reduce duplication (or "stuttering", as Go calls it).

The issue now with Java is that it's such a big language and it's accumulated so much from many different paradigms that experience in "Java" doesn't always transfer across different companies or teams. Some teams hew closely to OO design and patterns, others use a lot of annotations and dependency injection, still others have gone fully functional since Java 8.

And then there are shops like one of my employers, where a large codebase and poor teamwork have resulted in a mishmash of all of the above, plus some sections that have no organizing principles at all.


> Some teams hew closely to OO design and patterns

I maintain that design-patterns are just well-established workarounds for limitations baked into the language - and we get used to them so easily that we rarely question why, having built up an entire mechanism for programming on top of a programming language, we never improve the underlying language to render those design-patterns obsolete. (I guess the language vendors are in cahoots with Addison-Wesley...)

For example, we wouldn’t need the Visitor Pattern if a language supported dynamic double-dispatch. We wouldn’t need the adapter, facade, or decorator patterns if a language supported structural typing and/or interface forwarding. We wouldn’t even need to ever use inheritance as a concept if languages separated interface from implementation, and so on.
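
To make the Visitor example concrete: the pattern is essentially a hand-rolled second dispatch, because the language only dispatches on the receiver's runtime type and every element class has to bounce the call back to the visitor. A minimal Java sketch (hypothetical Shape/Circle/Square names):

  interface Shape { <R> R accept(Visitor<R> v); }

  interface Visitor<R> {
      R visitCircle(Circle c);
      R visitSquare(Square s);
  }

  // The first dispatch picks the right accept(); the explicit call back
  // into the visitor is the second dispatch we had to write by hand.
  record Circle(double radius) implements Shape {
      public <R> R accept(Visitor<R> v) { return v.visitCircle(this); }
  }

  record Square(double side) implements Shape {
      public <R> R accept(Visitor<R> v) { return v.visitSquare(this); }
  }

With language-level double dispatch (or even exhaustive pattern matching over sealed types) the accept/visit plumbing disappears.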

Better answer: the only real OOP system is Smalltalk.


> design-patterns are just well-established workarounds for limitations baked into the language

Strict functional programmers have been saying that for years. They may be workarounds, but as patterns they have value in allowing one programmer to structure code in a way that is recognizable to another, even months later. You could say that a steering wheel, gas pedal, and brakes are workarounds for limitations baked into the automobile that we wouldn't need if cars could drive themselves, but you'd still value the fact that the steering wheel and the rest of the controls generally look and work the same across vehicles.


Right you are - but my point is that language designers (especially Java’s) aren’t evolving their languages to render the more tedious design-patterns obsolete - instead they seem to accept the DPs are here to stay.

Take the singleton pattern for example. It's not perfect: it only works when constructors can be marked as private and/or when reflection can't be used to invoke the constructor to create a runtime-legal second instance. A better long-term solution is to have the language itself natively support passing a reference to static state, which completely eliminates the risk of the private constructor being invoked anyway - but that hasn't happened.
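
For concreteness, this is roughly the loophole being described, sketched in Java (Config is a made-up class; whether setAccessible succeeds depends on module and security settings, but in a plain classpath application it typically does):

  import java.lang.reflect.Constructor;

  final class Config {
      static final Config INSTANCE = new Config();
      private Config() {}   // "only one instance", supposedly
  }

  class SingletonLoophole {
      public static void main(String[] args) throws Exception {
          Constructor<Config> ctor = Config.class.getDeclaredConstructor();
          ctor.setAccessible(true);                // defeat the private modifier
          Config second = ctor.newInstance();      // a runtime-legal second instance
          System.out.println(second == Config.INSTANCE);   // false
      }
  }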

OOP Design Patterns are like JavaScript polyfills: they enable things that should be part of the native platform. They’re fine to keep around for a few years when they’re new, but when you’re still using what is essentially an aftermarket add-on for 5+, 10+ or even 25+ years you need to ask if it’s the way things should be or not...


Patterns are commonly (but not only) symptoms of missing features in the language, but at the same time they are just vocabulary.

Patterns exist in functional programming as well, any map/reduce operation is a pattern, any monad is a pattern. It's a proven way to achieve a goal, it's easy to compartmentalise under a moniker and refer to the whole repeatable chunk with a name.

Unfortunately a lot of people only learn how to properly apply design patterns after doing it wrong and/or overdoing it (mea culpa here!). It's easy to spot the bad smells after you've been burnt 2-3 times.


If map and reduce were design patterns, you’d be writing out the iteration bits every time you used them. Instead map and reduce are abstractions, and you only have to plug in your unique functions and values.
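
In Java terms that's the difference between re-typing a loop and calling into a library: the iteration lives in the stream implementation and you only supply the functions (a trivial sketch):

  import java.util.List;

  class MapReduceDemo {
      public static void main(String[] args) {
          int total = List.of(1, 2, 3, 4).stream()
                  .map(n -> n * n)            // plug in the mapping function
                  .reduce(0, Integer::sum);   // plug in the combining function
          System.out.println(total);          // 30
      }
  }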


piva00 probably meant that implementing map and reduce for a data structure, rather than the use of them, is a design pattern.


That was my point, thanks for clarifying it :)


This reminds me of a talk by Rich Hickey [0], where he introduces the Transducer pattern, which is actually an abstraction for map/reduce/fold/filter etc.

(But I'm not trying to invalidate your claim that patterns exist in FP in general, only that specific case. Afaik, the Transducer abstraction isn't even widely-known nor used.)

[0]: https://www.youtube.com/watch?v=6mTbuzafcII


What resources would you recommend for learning more about design patterns that are essentially missing language features?


By writing enough code using design-patterns that you see a pattern to the usage (and abusage) of design-patterns - and get plenty of experience with other languages and paradigms where said patterns are irrelevant.



I'd agree with that statement. I feel that I have settled in a few frameworks that I consider modern but mature and stable, with a good job market. For that I had to learn these smaller languages inside Java.

And not only smaller languages, but the tool set as well. Maven and Gradle for a start, and Gradle is its own universe of little quirks and "what the fuck" moments. IDEs have a learning curve (but I learned vim and emacs before using any IDE, so I know how steep learning curves work), and if you manage to use their features they can immensely help your productivity. Frameworks such as Spring have enough pull that it's easy to find interesting projects with them; I think the direction of Spring Boot is pretty good for modern tech enterprises, at least for part of the stack.

Boring technology has its place. It's a hassle that you have to learn a pretty big set of tools to be productive in Java, but when you do, you can actually accomplish a lot with multiple teams at a scale of hundreds to thousands of engineers.

You can also shoot yourself in the foot pretty easily, at massive scale, if the people making decisions are architecture astronauts rather than battle-hardened engineers who have suffered through incomprehensible code and piles upon piles of mismatched technologies and failed frameworks. Keeping the tech stack simple, boring and focused on a small set of tools has its benefits at scale.


Groovy is a nightmare. We now reject candidates who advocate for it in production during interviews, and we warn those who say they'd never dare use it there but do use it for testing.

Who cares if you have to write setters and getters by clicking a button in IntelliJ, or have to make your types explicit rather than ask every subsequent reader to use their brain as a type-inference compiler.

Typing clear code isn't a problem Groovy should have solved by making it all implicit, at the cost of having non-compiled production code. Code should never fail at runtime because you made a function name typo that the compiler couldn't tell you about at any other time for free.


A lot of this also applies to other interpreted languages widely used in production, such as Ruby, Python and Clojure.

You don’t like interpreted languages in production, fine. But rejecting candidates who think differently from you just creates a monoculture and reduces the chance that you learn anything new, beyond reinforcing your own convictions.


Millions of lines of production Python code do just fine by doing the thing you should have been doing all along: testing.


How rational this is depends on your testing culture. With a comprehensive test suite, the guarantees offered by a compiler are not very important. The software goes through all its runtime paces before production anyway.

If you’re not going to write any tests, then obviously compile time is a crucial line of defense.

Most shops will be somewhere in the middle where compiler guarantees offer a real but marginal benefit, to be weighed against other tradeoffs.


The point is not "let's just not write any tests". With a compiler that offers meaningful guarantees, you can write more worthwhile tests than "does this function always take/return integers".


If you’re passing or returning values of the wrong type, it’s going to blow up one of your tests. Asserting on a value implicitly asserts its type. Passing a value and not getting a runtime error for your trouble, pretty strongly indicates that it’s the right type.


Writing tests instead of utilizing the compiler is wasted time and effort. And it is one of the worst kinds of code duplication, because you are reimplementing all the type-checking, bounds-checking, etc. Usually badly, buggy and again and again. And since usually the test suite doesn't have tests for the tests, you will only notice if something breaks in the most inopportune occasion possible.


In unit testing for dynamically typed languages, very rarely do you make explicit type checks. The type checking naturally falls out of testing for correct behavior.

The "tests for the tests" is code coverage.


> Code should never fail at runtime because you made a function name typo that the compiler couldn't tell you about at any other time for free.

So you also reject all dynamic and weakly typed languages? No JavaScript, Objective-C, PHP, Python, Ruby, Lisp, or Tcl? No type coercion at runtime?

> at the cost of having non-compiled production code

Groovy can be compiled to the same bytecode as Java.


If the language doesn't help you with a function name typo, that's a crap dynamic language. Not only is that not a feature or benefit of dynamic typing, but it fuels unfair strawman arguments against dynamic languages.

Here is something I made largely for my own use:

  This is the TXR Lisp interactive listener of TXR 257.
  Quit with :quit or Ctrl-D on an empty line. Ctrl-X ? for cheatsheet.
  TXR is enteric coated to release over 24 hours of lasting relief.
  1> (file-put-string "test.tl" "(foo (cons a))")
  t
  2> (compile-file "test.tl")
  * test.tl:1: warning: cons: too few arguments: needs 2, given 1
  * test.tl:1: warning: unbound variable a
  * test.tl:1: warning: unbound function foo
  * expr-2:1: variable a is not defined
That's still a strawman; it only scratches the surface of what can be diagnosed.


For any non-trivial program (more than a couple of files/pages long), dynamic and weak typing are a no-go.


How strongly typed is C? Are non-trivial programs in C a no-go? Would a LISP program more than a couple of pages long be forbidden?


There is tooling that patches C's shortcomings, and there's a reason better-typed languages (e.g. Rust) are making strides in C's domain.


TIOBE gives C 14.32% (1) against Rust 0.49% (29), so "strides" might be over-egging the pudding.


Historic vs future trends.


Which tools patch C's shortcomings?


Static analysis tools and subsets like MISRA C.


Do you have a good one in mind? All the open source tools I've tried are MISRAble


Nope, I haven't been working with C recently.


I see where you're coming from, having used Groovy for a few years (in Grails, mostly) but I think you're also overstating your case.

Groovy is very quirky, and it's good at turning compile errors into runtime errors (@CompileStatic negates many of its advantages like multiple dispatch) and at making IDE refactoring less effective. But then again, so is Spring. This got better when Java configuration was introduced... and then came Spring Boot, which is a super leaky abstraction, with default configuration that automatically backs off depending on arbitrary conditions (!). And yet people find it valuable, because it reduces boilerplate.

These days, I use Groovy mostly in testing (and Gradle), and it can really make tests more expressive.


> IDEs and code generators have replaced a lot of the more stupid boilerplate. Not that there isn't a lot of stupid boilerplate in Java but it's been greatly reduced by tooling.

I've been dealing with Java for about four years in an academic setting, not professional. When observing Java code in workplaces, the code bases have universally been bloated monstrosities composed mainly of anti-patterns - but that was hopefully down to the era these applications were written in (J2EE, Struts, etc).

My experience is that:

* Auto-generated code is still visual (and debugging) noise

* Annotations like Lombok's are great, but are "magic" abstractions, which I find to be problematic. They add cognitive load, because each specific library has its own assumptions about how you want to use your code, as opposed to built-in language constructs.

* Especially for Lombok, I can't help but think adding @Getter and @Setter to otherwise private fields is a poor workaround for a simple language deficiency: not having C#'s properties { get; set; }. I feel the same about libraries that try to circumvent runtime erasure of generic types.

* Compared to C#'s ASP.NET (core), I find fluent syntax configuration with sane defaults and "manual" DI configuration much more manageable and maintainable than auto-magic DI as in Spring, because at least it's explicit code.

* Java tooling (and if previous points weren't, this is definitely a subjective stance) just seems inferior and to set a low bar in terms of developer experience - maven or gradle compared to nuget, javac compared to dotnet CLI, executable packaging... As to other tooling, I suppose you're mainly referring to the IntelliJ suite?

I think C# (and by extension, Kotlin) had the right idea in seeking a balance through removing as much boilerplate as possible in base language constructs. Adding libraries is fine, but shouldn't be a workaround to evolving the language.


I agree with all of your points, including C# and Kotlin approach.

At the same time, a lot of that is an artefact of decisions made early in Java's life, such as backwards compatibility.

The same goes for tooling: a lot of tools are products of their time - Ant, Maven and now Gradle.

Lombok has its pros and cons, I use it sometimes but had issues with Lombok and Java upgrades due to how Lombok writes out its bytecode.

I haven't touched javac in more than a decade so I don't really care about it, it's been abstracted away by my build tools (and I agree, the build tools are less-than-optimal).

Again, I agree with all the criticism; at the same time, given how large the Java installed base is, and having gone through the Python 2 vs Python 3 migration path, debacles, etc., I still prefer to have all this cruft that is known, well discussed online, and documented in its quirks, rather than a moving target of language features over the last 20-25 years.

Java is too big due to all the evolution it went through. Could it have taken different paths and modernised the language? Yes, at the expense of some core design decisions made at the beginning. Do I agree with all those decisions? Nope, but who agrees with all the design decisions made by their programming language's designers?


> Compared to C#'s ASP.NET (core), I find fluent syntax configuration with sane defaults and "manual" DI configuration much more manageable and maintainable than auto-magic DI as in Spring, because at least it's explicit code.

Fluent syntax, as it exists today, needs to die in a fire. It's horrible. It's abusing a core computer-science concept (return values) and turning it into something that exists to only save a few keystrokes.

1. You have no way of knowing if the return-value is the same object as the subject or a new instance or something else.

2. It doesn't work with return-type covariance.

3. You can't use it with methods that return void.

4. You can't (easily) save an intermediate result to a separate variable.

5. You can't (easily) conditionally call some methods at runtime.

6. There is no transparency about to what extent a method mutates its subject or not. This is a huge problem with the `ConfigureX`/`UseY`/`AddZ` methods in .NET Core - I always have to whip-out ILSpy so I can see what's really going on inside the method.

Some libraries, like Linq and Roslyn's config use immutable builder objects - but others like ConfigureServices use mutable builders. Sometimes you'll find both types in the same method-call chain (e.g. Serilog and ImageProcessor).

What languages need is to bring back the "With" syntax that JavaScript and VB used to have - and better annotations or flow analysis, so that the compiler/editor/IDE can warn you if you're introducing unwanted mutations or unintentionally discarding a new immutable return value.


> exists to only save a few keystrokes.

It does that, but it also makes your code read more like natural language. Perhaps I was careless in my wording, as I meant to point to manual, explicit configuration rather than fluent syntax per se.

As to your bullet points: I can see where you're coming from. I still think it's better than the invisible side effects and invisible method calls you get with annotations.

> What languages need is to bring back the "With" syntax that JavaScript and VB used to have

As far as I know, With... End With is a weird cross between "using" in C# and object initialisers. How does that help prevent mutations? One of the code examples (0) even explicitly mentions:

   With theCustomer
        .Name = "Coho Vineyard"
        .URL = "http://www.cohovineyard.com/"
        .City = "Redmond"
   End With
I honestly don't see the big difference with either:

   var customer = new Customer {
       Name = "Coho Vineyard",
       URL = "http://www.cohovineyard.com/",
       City = "Redmond"
   };
or:

   var customer = Customer
      .Name("Coho Vineyard")
      .URL("http://www.cohovineyard.com/")
      .City("Redmond")
      .Build();
[0] https://docs.microsoft.com/en-us/dotnet/visual-basic/languag...


AutoValue makes a lot of this easier, but you're right that vanilla Java is quite verbose, much more than needed.


"The most tedious work I do right now is adding "one more" scalar or complex data-member that has to travel from Point A in Method 1 to Point B in Method 2 - I wish I could ctrl+click in my IDE and say "magically write code that expresses the movement of this scalar piece of data from here to here" and that would save me so much time."

All of this is possible today and not even that hard (though it's harder than meets the eye, there's a lot of issues that description glosses over that you have to deal with, especially in conventionally-imperative languages). The main problem you face is that the resulting code base is so far up the abstraction ladder that you need above-average programmers to even touch it. (I am assuming that this is merely a particular example of a class of such improvements you would like made.) This is essentially the same reason why Haskell isn't ever going to break out of its niche. You can easily create things like this with it, but you're not going to be hiring off the street to get people to work with it.

Or, to put it another way, a non-trivial reason if not the dominant reason we don't see code written to this level of abstraction is the cognitive limitations of the humans writing it.

I know HN doesn't really like this point sometimes, to which I'd ask anyone complaining if they've mentored someone a year or two out of college and fairly average. You can't build a software engineering industry out of the assumption that John Carmack is your minimum skill level.


Rich Hickey and Clojure have a low-tech solution for you: use maps. This “wiring through all the layers” problem is basically self-imposed by the use of static typing for data objects. Instead you should mostly pass around maps, validate that the keys you care about are present, and pass along the rest.

Of course your Java peers aren’t going to be happy about this, so in some ways a new language is needed to establish a context for this norm. But the limitation isn’t physical, not even in Java.
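
A rough Java rendering of that idea, just to make the mechanics concrete (idiomatic in Clojure, very much against the grain in Java; the key names are invented):

  import java.util.Map;

  class MapPassing {
      // Each layer reads only the keys it cares about and passes the whole
      // map along, so adding "one more" field changes no intermediate signatures.
      static void handle(Map<String, Object> request) {
          if (!request.containsKey("userId")) {
              throw new IllegalArgumentException("userId missing");
          }
          process(request);
      }

      static void process(Map<String, Object> request) {
          System.out.println("processing for user " + request.get("userId"));
      }
  }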


I use java8 at my dayjob.

It's been years since I did any of that, as Lombok and spring boot generate that code for you.

Immutable classes get a @Value and maybe @Builder annotation, mutable ones get @Setter etc.
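
For anyone who hasn't used Lombok, the shape of that is roughly this (a minimal sketch; Lombok generates the constructor, getters, equals/hashCode, toString and the builder at compile time):

  import lombok.Builder;
  import lombok.Value;

  @Value
  @Builder
  class Customer {
      String name;
      String city;
  }

  // Customer c = Customer.builder().name("Ada").city("London").build();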

Spring's auto-wiring for beans is sadly magical, however, so it's much harder to figure out what went wrong if you have any issues...


It's weird to me how you find the magic of spring sad while you find the magic of Lombok acceptable.

Lombok requires that you use a supported build system and IDE and while all the currently relevant ones are supported that is no guarantee. Needs plugins and agents that support your various tools' versions including the JVM itself. I've been in that hell before with AspectJ and the aspectJ compiler vs eclipse plugin (version incompatibilities that made it impossible to work efficiently until they fixed it all up).

Disclaimer: last company we used Lombok. Current company we are switching certain things to Kotlin instead. data classes FTW for example. I do miss magic builders. Builders are awesome. Building the builder is tedious ;)


Lombok magic doesn’t span across files. Look at the class, see the annotations, and as long as you have even a trivial understanding of what Lombok is, you can grok it. It’s basically like a syntax extension.

Spring on the other hand... autowired values everywhere, and at least for me (who doesn’t work with Spring day in and day out) it’s very difficult to understand where they come from.


Don't get me wrong, I've used Lombok and liked it from the working with it and what it saves you aspect.

We do use Spring and I've used it for a very long time now. Nothing about wiring is magic or incomprehensible if you do it right. Unfortunately there are a lot of projects out there that use it in exactly the wrong way, if you ask me, and in those cases I'd agree with you.

I used to be in a company where we used XML config and everything was wired explicitly. The XML part sucked but with SpringIDE (eclipse at the time) it was Ctrl-clickable to find what's what.

We use Java config with Spring at my current company and I can Ctrl-click my way through it all and find what's what. There's a small corner of 'package-scan'ed stuff that is evil but we are cleaning that up.


Java already has records, better than Kotlin's data classes.


How exactly are they better than Kotlin's data classes?

Aside from the fact Java only released non-preview support for records one month ago, they:

  - Don't support inheritance
  - Can't be mutable
  - Don't have a copy method
While Java records are nice, Kotlin data classes are strictly more capable than Java records.
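
To make the copy-method point concrete: with a Java record today you end up hand-writing "wither" methods for the copy-with-one-change use case, which Kotlin's generated copy() gives you for free (a sketch with made-up names):

  record Counter(String rubric, int count) {
      Counter withCount(int newCount) {
          // manual "copy with one field changed"
          return new Counter(rubric, newCount);
      }
  }

  // Counter next = current.withCount(current.count() + 1);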


FWIW I think that whether someone wants to use mutable objects or swears by immutability should be their choice, especially for interoperability with legacy code. It can be much easier to 'just go with the flow and be careful' in a legacy code base vs trying to have a clear separation of where immutability has been introduced already and where we still don't use it. Not everything is green field (in fact most stuff isn't) and not every company gives you enough time to always Do The Right Thing (TM).

Copying objects is a well known need and there are countless libraries that try to help you with it. All with their own problems, notably runtime errors vs. compile time safety or pervasive use of reflection.


Why would you need a copy method for an immutable record?


When applying events, for instance. In F#, you could do:

  match msg with
  | IncreaseCounter cnt ->
      { model with Count = model.Count + cnt }
  | DecreaseCounter cnt ->
      { model with Count = model.Count - cnt }
  | ResetCounter ->
      { model with Count = 0 }
  | ChangeRubric name ->
      { model with Rubric = name; Count = 0 }
The "with" says: copy the original record by value, but change these fields. For completeness' sake: F# also has implicit returns, so the bit between brackets is the function's return value.


Python's dataclasses have a "make a copy with these fields changed" function that's rippingly useful for immutable records.


How would inheritance, mutability and an arbitrary, presumably wrong and misguided "copy method" be a feature?


Why do you think copy methods are "presumably wrong and misguided"?

For the rest, I agree that in 99% of the cases inheritance and mutability are not needed if you're using greenfield Kotlin libraries. But they are unfortunately often necessary in the Java world.

Mutable data classes are especially useful for reducing boilerplate when creating classes that implement the Fluent Builder pattern, which is unfortunately quite necessary if you don't have a copy method...


Copy methods are in the works, and Java's pattern matching will be better than Kotlin's.


Lombok is great, but Immutables is even better! https://github.com/immutables/immutables


> the granddaddy of OOP languages: Java

I disagree with this statement. Java just happened to become popular due to its C style syntax and various accidents of history.

Smalltalk is the true granddaddy of OOP languages, and none of the issues you talk about apply there.


> Still to this day, I have absolutely no idea how people can write Java for line-of-business applications (its bread-and-butter!) and remain sane from having to manually manage data-flow, implementing Java Beans, and manually writing by-hand getter-and-setter methods...

Lombok, at the very least, eliminates “manually writing by-hand getter-and-setter methods” (https://projectlombok.org/).


Thank you. I stopped using Java on a regular basis around late-2009-ish, which was before Lombok became really popular (as far as I know) so it is encouraging to hear that writing Java in-practice isn’t as bad as I feared.

Still... I feel strongly that Java eventually needs to adopt object-properties and reified-generics for it to stay relevant - otherwise it offers fewer and fewer advantages over competing languages - at least for greenfield projects at first, and eventually it’ll start being _uncool_ and fail to attract newer and younger devs to keep the ecosystem alive. Then we’ll end up with the next COBOL (well, more like the next Pascal/Delphi...)


That seems unnecessarily harsh towards Pascal/Delphi. I’d stick with the COBOL metaphor.

I’d also add that while Java does suck in some fairly obvious ways, most languages suck and at least Java can actually run concurrent threads in parallel.


Lombok ends up having all the costs of using a better JVM language (you still need to integrate it into your coverage tools, code analyzers etc.) but with few of the benefits. I used to use Lombok but in the end it was easier and better to just use Scala.


That’s fair. When I was writing Java code I wanted desperately to evaluate Kotlin, for the obvious reasons you’d expect, but there was not an easy Lombok-to-Kotlin migration path.

I probably would not choose Java with Lombok for a greenfield project today, were it up to me. But if I was forced to use Java, I would use Lombok. I was forced to use Java and I did use Lombok, and it didn’t really suck that bad.


I don't think any reasonable decisionmaker would approve Lombok and not Kotlin or Scala. (But I'm aware that many large organisations end up making unreasonable decisions).

The gap between post-8 Java and Kotlin is pretty small yeah. Though you have to write a lot of async plumbing yourself, and not having delegation is a real pain.


> I don't think any reasonable decisionmaker would approve Lombok and not Kotlin or Scala. (But I'm aware that many large organisations end up making unreasonable decisions).

For whatever reason, it’s a lot easier for most organizations to sign off on using a specific library for an existing programming language, even one as transformative as Lombok, than to sign off on using a different programming language, even one as backwards-compatible as Kotlin. Often they are categorically different decisions in terms of management’s interest in micromanaging them: they might default-allow you to include libraries and default-disallow you to write code in a different language.

In this respect, Lombok is really handy for a very common form of unreasonable organization :)


But when you read a novel, it's full of excessive and redundant prose.

You don't write code for the machine, you write it for your team. It's fine if it's nice and comfortable, a bit repetitive and fluffy, rather than terse and to the point.

Your code is only read by humans.


This is exactly my grudge against boilerplate. Code is read much more often than it is written.

I don't care if you hand-coded all those buckets of accessors or your IDE generated them - that's irrelevant to the fact that they're still overwhelmingly useless noise. Which I need to read through, which I need to review in PR diffs, skim in "find symbol usage" output, class interface outlines, javadocs, etc. etc. - all that 10x as often as during writing. Somehow I'm expected to learn to ignore meaningless code, while producing it is fine?..

Remember the point made in the recent "green languages, brown languages" post here on HN? The insight for me there was the source of the "rewrite it from scratch" urge that should be very familiar to engineers working in the field. It comes from incomprehensible code or weak code-reading skills. Either way, boilerplate does nothing but harm.

So no, while I agree on your point that code exists principally to be read by humans (and as a nice secondary bonus, executed by machines) -- I disagree that boilerplate is "fine" whatever its incarnation. It's not, because it directly damages the primary purpose of code: its readability.


> Still to this day, I have absolutely no idea how people can write Java for line-of-business applications (its bread-and-butter!) and remain sane from having to manually manage data-flow, implementing Java Beans, and manually writing by-hand getter-and-setter methods...

My long-standing view is that Java's strict and verbose OOP syntax and semantics are an interface for IDEs. People who are hand-coding are practically guaranteed to be baffled by the verbosity, but they forget that Java development was the driver for IDE evolution (afaik), such that now we have ‘extract method’ and similar magic that understands code structure and semantics.

More specifically, OOP works, or should work, as an interface for IDEs that allows one to (semi-)programmatically manipulate entities on a higher level, closer to the architecture or the problem domain.

Like you, I wondered if this manipulation can be harnessed and customized, preferably in a simpler way than giving in to the whole OOP/IDE/static-typing tangle and without writing AST-manipulating plugins for the IDE. In those musings I ended up with the feeling that Lisps might answer this very wish, with their macros and hopefully some kind of static transformations. Which are of course manipulating ASTs, but it seems to be done somewhat easier. Alas, predictably I've had no chance of doing any significant work in a Lisp, so far.


FYI, for C# you have ReSharper, which lets you add a parameter to a method and automatically propagate it up through several levels of callers based on references. Guess sometimes all you need to know are the right tools.


Could you give a concrete example of your problem in a gist or something? I'm curious if it's solvable in C# as is. Sounds like it may be the kind of thing I'm approaching with reflection and attributes right now.


> granddaddy of OOP languages: Java.

Nowhere near. There was a lot of OOP hype in the early 90s with Smalltalk and C++, so Java just went all-in (everything is in a class) on that trend of the times.


If you're writing in Java, why not write in Kotlin? They're sufficiently compatible you can have Java and Kotlin files in the same directory structure, compiled with the same compiler.


this resonates so much with me


Well, in the case of Java there definitely are ways to minimize the boilerplate. Some of the more common ones that i use:

  - Lombok library ( https://projectlombok.org/ ) generates all of the getters and setters, toString, equals, hashCode and other methods that classes typically should have (JetBrains IDEs allow you to generate them with a few clicks as well)
  - MapStruct library ( https://mapstruct.org/ ) allows mapping between two types, between UserEntity and UserDto for allowing mostly similar objects or ones that are largely the same yet should be separate for domain purposes
  - Spring Boot framework ( https://spring.io/projects/spring-boot ) allows getting rid of some of the XML that's so prevalent in enterprise Java, even regular Spring, and allows more configuration to be done within the code itself (as well as offers a variety of pluggable packages, such as a Tomcat starter to launch the app inside of an embedded Tomcat instance)
  - JetBrains IDE ( https://www.jetbrains.com/ ) allows generating constructors, setters/getters, equals/hashCode, toString (well, these are covered by Lombok), tests, as well as allows for a variety of refactoring actions, such as extracting interfaces, extracting selected code into its own method and replacing duplicated bits, extracting variables and converting between lambdas and also implementing functional interfaces, as well as generating all of the methods that must be implemented for interfaces etc.
  - Codota plugin ( https://www.codota.com/ ) offers some autocomplete improvements, to order them by how often other people used any of the available options, though personally there was a non-insignificant performance hit when using it
As far as i know, there is a rich ecosystem for Java that allows treating the codebase as a live collection of a variety of abstractions which can be interacted with in more or less automated ways, as opposed to just bunches of overly verbose code (which it still can be at the same time). Personally, i really like it, since i can generate JPA annotations for database objects after feeding some tools information about where the DB is and allowing them to do the rest, as well as generate web service code from WSDL (though i haven't used SOAP in a while and no one uses WADL sadly, though OpenAPI will get there).

And then there's attempts like JHipster ( https://www.jhipster.tech/ ) which are more opinionated, but still interesting to look at. I think that model driven development and generative tooling is a bit underrated, though i also do believe that much of that could be done in other, less structured languages, like Python (though that may take more effort) and probably done better in more sophisticated languages, such as Rust.


Low Code/No Code solutions don’t work because the people involved in implementing solutions are rarely engineers themselves. Most (good) engineers have learned through training and/or experience, well, engineering things, like edge cases, error handling, user experience, efficiency, testing, maintainability, automated testing, and a plethora of other subtle and obvious aspects of system design. I know this quite well because I’ve worked with these so-called low-code and no-code platforms and every one of them I have seen end up having to be taken over by experienced engineers who have been brought in to fix (or in some cases completely rebuild) a poorly-designed system. These platforms typically suffer the “last mile” problem as well, requiring someone to write actual code.


And there's been the business process engine craze in between. BPEL comes to mind which also has 'visual editors' for the business people to use.

It's too complex for them, and then you pay software engineers to use BPEL instead. Which is just a worse language to actually program in than the underlying system.

Or any other number of 'process engines' which give you a worse language to describe your actual process in and then you need to do stupidly convoluted things to do simple things. But hey, we didn't have to code!


I worked on a Pega project once. There was nothing in there that the business people would be able to touch, especially after the requirements exceeded the capabilities of Pega’s primitives. One of the local friendly FTEs (the dev work was contracted out) would’ve been happy to use C#/ASP.Net web forms like everything else in the org.


> Low Code/No Code solutions don’t work because the people involved in implementing solutions are rarely engineers themselves.

In my experience these are usually too complex for non engineers to understand and incredibly frustrating for engineers to use.


Some people see past tries at something as proof that it will never work. Others see past tries as someone having the right idea, but the wrong implementation.

Imagine how many tried flying before we "invented flight", and how many said "oh how they won't learn from the past".


I think that's a fair point. The way I see it, going from requirements (even visual ones) to working system would require strong AI, as any sufficiently powerful visual environment would wind up being Turing complete.

Which means that no code is either use case bounded or claiming something roughly on par with a break through. The first is common enough and where I imagine most low/no code offerings fall when the hype is stripped away. The hype seems to promise something on par with the second and I think that's where the dismissive attitude comes from.


Functional and declarative programming are mostly about specifying requirements directly. You don't need an AI to do it; in fact that would be the wrong tool (AI is good for fuzzy problems and inference - not following a spec).

An extreme example of this are logic and verification systems like prolog and TLA+.

There is a sweet spot of low code I haven't seen explored yet, which is a declarative system that is not Turing complete. That would be an interesting avenue to explore.


Business requirements still have a measure of ambiguity and "do what I mean" to them. They are more formal than natural language, sure, but fall far short of the formalism of a declarative programming language. This is a big part of the business partnership underlying most Agile methodologies. If the formal spec could be handed off, then it would be and Waterfall would work better in enterprise settings. Instead, the team is constantly requiring feedback on the requirements.

So I guess I still see declarative languages as being part of the tech stack and something tantamount to AI being needed to handle all the "do what I mean" that accompanies business process documentation.


I think honestly the problem is a lack of tech literacy. I've seen spec sheets that are glorified database and json schemas in a spreadsheet, put together by BAs and translated by hand.

It could be done directly if every BA had enough programming knowledge to put together schemas and run CLI tools to verify them.


> had enough programming knowledge to put together schemas and run CLI tools to verify them.

That's quite a lot of programming knowledge. It makes some sense to decouple the business-oriented from the more technical roles - BA's trying their hand at coding is how you get mammoth Excel spreadsheets and other big-balls-of-mud.


I'd prefer not to.


Not sure if Prolog or formal methods are good examples here, as they are pretty hard programming languages. Yes, they can be used to specify a system, but they also require human ingenuity, aka strong intelligence, to get right. Prolog may be easy for some people, but I did spend an inordinate amount of time understanding how to use cut properly, and how to avoid infinite loops caused by ill-specified conditions in my mutually recursive definitions.

As for formal methods, oh, where shall I even begin? The amount of time it takes to turn something intuitive into correct predicate logic can be prohibitive for most professionals. HN used to feature Eric Hehner's Practical Theory of Programming. I actually read through his book. I could well spend hours specifying a search condition even though I could solve the search problem itself in a few minutes. And have you checked out the model checking patterns (http://people.cs.ksu.edu/~dwyer/spec-patterns.ORIGINAL)? I honestly don't know how a mere mortal like me could spend my best days figuring out how to correctly specify something as simple as "an event will eventually happen between an event Q and an event P". Just for fun, the CTL specification is as follows:

  G(Q -> !E[!R U (!P & !R & EX(P & E[!R U (!P & !R & EX(P & E[!R U (!P & !R & EX(P & !R & EF(R)))]))]))])

As I said, formally specifying a system, no matter what tools one uses, is essential complexity.


I want to see formal methods used more in lieu of writing standards documents. If you go read a standards document, say for 5G wireless, you'll find it's largely formal specification awkwardly stated in English and ad hoc diagrams. It would be better to just write out something formal (with textual annotations as a fallback) and have a way to translate that to readable language.


True, but on the flip side imagine how many people tried transmuting lead into gold, thinking the previous attempts simply used the wrong approach.


They were right. The previous attempts did in fact use the wrong approach, and people have now successfully turned lead into gold. The only problem is that it’s too expensive to be worth doing.


I don't agree. If you could ask a prime Newton if he'd be satisfied converting lead into gold in a cost prohibitive manner I would bet any amount of money his answer would be a quick "no". The goal of alchemy was to convert lead into gold in a way that made the discoverer rich, it's just not proper to say the second part explicitly, but I believe most people understand it that way.


> The goal of alchemy was to convert lead into gold in a way that made the discoverer rich

That definitely needs a citation. The Wikipedia page mentions no such motivation, and describes Alchemy as a proto-scientific endeavor aimed at understanding the natural world.


However, the analogy is still accurate, because the right approach involved several steps which no one thought were conceivably part of the solution: "understand how forces work at macro scales", "understand electricity", "understand magnetism", "develop a mathematical framework for summing tiny localized effects over large and irregular shapes", "develop a mathematical framework for understanding how continuous distributions evolve based on simple rules", "learn to look accurately at extremely small things", "learn to distinguish between approximate and exact numerical relationships", "develop a mathematical framework for understanding the large-scale interaction of huge numbers of tiny components", and so on.

If you went back in time to an age where people were working hard on changing lead into gold and your mission was to help them succeed as soon as possible, your best bet would probably be something like teaching them the decimal place value system, or how to express algebraic problems as geometric ones. But if you also told people that this knowledge was the key to solving the two problems they were working on, "how to make very pure versions of a substance", and "how to understand what makes specific types of matter different" you would reasonably have been regarded as deluded.


> However, the analogy is still accurate, because the right approach involved several steps which no one thought were conceivably part of the solution

I don’t see how that follows. It’s just a truism that nobody figured out how to do it until someone finally did. The fact that the path wasn’t obvious at various points in the past seems irrelevant.

> But if you also told people that this knowledge was the key to solving the two problems they were working on, "how to make very pure versions of a substance", and "how to understand what makes specific types of matter different" you would reasonably have been regarded as deluded.

If they were listening to you at all, it’s not at all obvious why this part would sound deluded.

How is it any more exotic than any of the failed alchemies?

As far as I can see they were all quite abstract.

This one, which happens to be correct, is no less so.


'It' (most of 17th, 18th and 19th and some early 20th century mathematics, chemistry and physics) is clearly a lot more abstract than the failed alchemies.

The point is that 'just keep trying' would not have been a good strategy.


'just keep trying' is a straw man that nobody has mentioned until just now.

Yes, it’s possible that there are some concepts we have yet to think of.

See: https://numinous.productions/ttft/

The explanation of how hard it would be to come up with Arabic numerals if you didn’t already have them covers this.

However the point here is that we can learn from this, and we now know a lot more about how to do hard things.

‘It’s too hard and we should give up’ (the countervailing straw man) is even less supported by history than ‘keep trying’.


To be honest we actually figured even this one out.

It just requires a particle accelerator and the amounts are so tiny that it’s ridiculously expensive. But hey, we can still do it!


Either outcome is possible.

Many alchemists tried to turn copper into gold as well. They might as well have thought that their predecessors were just unlucky in using the wrong implementation.


And they were correct: we do know how to turn copper to gold now; it's just prohibitively expensive.


It would have been sad if we had given up on AI/ML ideas due to the failure of AI back in the 80s/early 90s. See: https://en.wikipedia.org/wiki/Fifth_generation_computer


I think this is only partially true. There are aspects of coding which can be abstracted away, either because they're essentially boilerplate or because a simpler description of the solution is sufficient. Ideally if a more complex description is required, one can drill down into the simplified low-code description and add sufficient complexity to solve the problem.

I mean, couldn't many of the existing frameworks be described as low-code wrappers around more complex work flows and concepts?


> many of the existing frameworks be described as low-code wrappers around more complex work flows and concepts

Using frameworks, you are still using the language itself to command the framework. For example, if someone claims to be a React programmer, nobody would assume they don't know JavaScript.

So to efficiently use one framework, you should master both the language + framework. In other words, the complexity not only remains, but also accumulates.

But this is contradictory to low/no code's selling point, as they are targeting non-programmers.


This only goes so far, though, with frameworks. In my experience, the vast majority of people that make claims about a particular framework do not understand the abstractions they are building upon. In fact it is sufficiently reliable that I have found this to be an excellent hiring signal, to probe how well a person understands the abstractions in frameworks they use, but also to probe how they diagnose and fix holes in their own knowledge.


No Code is mostly a code word for outsourcing. You use their app to get it started, realize it won’t meet your requirements, and then pay them to work on it forever. Unless it’s just a marketing Website.


No Code, I feel, could also be viewed as a compromise for selling dev tools to the mass market. You can sell a "cheap complete*" solution in exchange for the overhead of also dealing with customer issues / inevitably helping train a dedicated person for them to maintain their app. Then you have customers who need dev tool support, as intended.


Everybody promises their no-code solution is going to adapt to the way your enterprise already works, but the truth is you kind of have to go the other way around if you don't want misery.


I work at Stacker (YC S20) [0] and the approach we're taking to deliver on this promise is to start with the data.

That is, we let you take spreadsheets you already use and build more powerful collaborative tools on top of them, without code.

If you take the premise that a tool has a 'data' part, and an 'app' part, and that the data models the process, and the app more just controls how data is accessed, presented, the UX, etc, you might see why I'm so excited about this approach -- if you take the data from spreadsheets that are already being used to model a process in action, by definition you don't have to change your process at all.

[0] https://stackerhq.com


About 30 years ago one of my managers used to say "get the data model right and the application writes itself" and I have found that to be mostly true. What I have also often found is that people who create spreadsheets in business don't understand data modeling and even if the spreadsheet solves some business problem it's often very brittle and hard to change and adapt or generalize.


The spreadsheet structure point is an interesting challenge - I think often a spreadsheet ends up as the de facto model of a process, but often with, as you say, some redundancy, excessive flattening, and other structural issues that can make it more difficult to build an app around.

The nice thing, though, is that shifting this structure around does not mean changing the process being modelled - it's more just a necessary part of making a more powerful tool to support it.

It's as you say: since the process is known, it's usually very clear exactly how the app should be, which under our model can inform how to shift the structure of the spreadsheet accordingly in a pretty practical way. It's cool to see the same thing work in both directions!


From my experience working with some business-side using spreadsheets: yes, usually spreadsheets end as the de facto model of a process but not necessarily an efficient model or an easily replicable one.

In banks I know of some long-living spreadsheets that have been patched so much that it takes a real human weeks to months of work to disentangle the mess of macros and recalculations into a streamlined script/process. Sometimes the resulting model diverges in quirky ways that are manually patched; I've seen divergences due to datetime issues (time zones, summer time, leap days, incorrect assumptions about dates, etc.) that were only noticed through side-effects of side-effects, and the manual patching didn't help at all to debug the root cause.

I think that spreadsheets are incredibly powerful, but the main reason for that power is that they are quite free-flowing, and that invites human creativity to solve problems with the limited set of tools and knowledge some users have - and some of those users are in quite high technical positions, using spreadsheets daily for years.

I believe you might have a killer product but I had so many headaches with spreadsheets that I wouldn't like to be working in that space.


My first project out of college was working on an internal metrics tool for a company. Their prior one was basically Excel; a guy who was due to retire had, back in the 90s, written an entire DSL with VBA, that could load files, run various functions on them, and output graphs.

Thing is, no one except him knew the DSL; everyone in the company relied on it, but they relied on him to write the DSL to compile their inputs into the output they wanted.

The rewrite included an actual database, proper data ingestion, and a nice clean frontend. The methods of aggregating data were reduced and standardized, as were the types of simulations and metrics that could be reported on; the flexibility was drastically reduced. However, the practical usage for everyone was drastically increased, because it moved from "we can't do anything different unless we get (the expensive, due-to-retire person's) time" to "we can do it ourselves."

I'm very jaded toward no/low code, in general, and that experience is partly the reason why. There isn't a sweet spot, that I've seen, that allows for non-technical people to have the control they want. And that was true even with spreadsheets.


The less nice thing, though, is that the model of the process you're starting from -- the actual spreadsheet -- has, as you say, these structural problems. And since (some speculation here, but I very much suspect that) many different processes, after having been so mangled, will end up in the same redundant, excessively-flattened structure, you can't determine from the spreadsheet alone which of these different processes it is supposed to encapsulate.

So before you can start "shifting this structure around" you'll still have to go through a standard business analysis process to find out what you are going to shift it into. And if you're already doing that... Well, then most of your promise of automation is out the window already, so what's the use of having the actual implementation done in some weird newfangled "no-code" or "low-code" tool?


This!

Understand the data, and the application is simple and easy to understand. Start with a flashy GUI, and your data is a mess.


So basically MS Access as a service?


Salesforce is kinda nailing it (and I say that as a Salesforce code crafter, not one of their no-code users).


Some strange comments in here about Low Code, as if they weren’t already successful. There are easily hundreds of apps successfully making use of Low Code to solve problems for people. Some Marketing Automation tools have had them for 10+ Years. Integration tools are also often Low Code.


Have any examples? (Genuine question, not an attack!) I've mostly ignored the Low/No Code stuff for a long time.


Microsoft's Power Platform is a low code framework which works well, and generates a massive amount of revenue, as do Salesforce and some others. I recently designed and implemented a complex 500-user child protection application with PP that has been live for a year now. It was highly successful, and the time and cost taken to deliver it was far less than the cost of a hand-written solution. That said, there is still quite a lot of custom code required for most enterprise-level solutions, even with the most mature low code platforms. Low code is not a panacea, and the same issues of how to represent requirements and design arise in the low code world as in the high code world. Low code platforms will continue to mature and improve. Maybe AI will catch up with them one day, but I'd be surprised if that happens anytime soon.


In 7 months, over 8,000 apps were created with Budibase. Many of these are either data based (admin panels on top of databases), or process based (approval apps).

Budibase is built for internal tools so the apps are behind a log in / portal.


Microsoft Access and Claris FileMaker have nearly 3 decades of success at low-code.

I genuinely think they get a bad rap because you never hear about the really successful apps, but the problematic ones need a real programmer to sort out.


Integration - IFTTT, Zapier, Boomi, Jitterbit, Workato, and on and on.

Marketing Automation - Marketo, Klaviyo, Mailchimp as a few examples.

Airtable is another that has low-code automation built in


Comparing UML and No-code is apples to oranges. UML is about generic abstraction without actual implementation. No-code is about domain-specific implementation done using simplified high-level visual constructs instead of general-purpose programming languages. In other words, No-code is programming (just not text-based), while UML is modelling.


Low-code/no-code is simply the CASE tools of the late 80s, or the UML->production system fully automated pipeline of the late 90s/early 2000s, given new life and a shiny coat of paint. The same problems apply and the same people keep buying the same damn snake oil.

Once you can specify your procedures, requirements, and constraints in a way that is specific enough for a computer to read and act meaningfully on, the elements of your specification method become isomorphic to constructs in some programming language. So you've replaced typed-in keywords with clickable symbols or buttons or controls -- but you've in no wise reduced the headwork of programming or made the programming go away.


A spreadsheet is a low code app. Less flexible than a general purpose programming language, but still extremely valuable.

If low code apps simply provide more flexibility and maintainability than a spreadsheet, they’re already winning.


With the recent Lambda announcement, Excel is going to have a cool escape hatch for doing more complicated stuff right in the sheet without needing to dive into VBA. I honestly thought I would never get excited about a feature being added to Excel, but I was wrong. This looks friggin’ awesome!

https://www.microsoft.com/en-us/research/blog/lambda-the-ult...


Nonsense. No/lo-code is the polar opposite of UML. No-code is the ultimate Agile: it's exciting and fun to build with, because it's alive, because it executes immediately, resulting in a powerful iterative feedback loop. UML is static and tedious to build, because any gratification is delayed so far into the future. I know because I've done both; in fact I've invested all of my wealth (many millions) into developing a No-code ERP/CRM platform, and it's incredible - it will launch this year.


I strongly believe that every tool, every invention, can only innovate/be innovative in one area.

As such what we need is to have the right building blocks and abstraction layers which at the end will mean that something like no or low code will work. It will only work when all the tools that underpin it have been crystallised over the years.

This happens on every layer and is fundamentally why we see so much work repeated, but slightly different. Every language makes different trade offs, therefore all the libraries implement the same functionality but just a bit different this time.

Every once in a while something like UML (too complicated, e.g., due to the use of EJBs), Business Works (too slow), etc comes along which has promise and offers value at the time but just misses the boat to survive until the next generation of revised underlying tools.


I think a certain set of problems requires the thinking of someone who knows how complex systems work and where the pitfalls are.

You can even see this with stepped covid restrictions that rely on infection numbers crossing a certain threshold. Most engineers would immediately see the real-world consequences that arise from the lack of hysteresis.

Similar things happen with input sanitization, deciding on formats etc.

Some stuff just takes experience. The actual writing of the code is not the problem, the knowledge of what to do and what to avoid is.


They're trying to commodify writing code. It would bring down the cost of programmers significantly. They won't succeed, I think. Instead AI will probably beat them to it.


What sometimes gets forgotten is that even a programming language _is_ a model! And it is a model that fits exactly all the details that are necessary to construct an actual program with all the little special cases.

Modeling all constraints and runtime behaviors in UML is cumbersome and hard to understand. UML could be used for showing larger building blocks or complex flows (e. g. with sequence diagrams), but it is a bad fit to model a complete program in it.


It did justify an army of billable systems analysts as they tried to deskill their engineers.


> The irony is that the UML itself becomes more complex than code...

I forgot the law's name but the said law states that "a machine of a certain complexity cannot build a machine more complex than itself". So, a car factory is more complex than a car itself. Similarly a code generator cannot produce something more complex than itself.

This is why you need to merge things of certain complexities to build something more complex.


> I forgot the law's name but the said law states that "a machine of a certain complexity cannot build a machine more complex than itself"

Isn't this disproved by the evolution of reproductive organisms? That's not necessarily more complex of course, but it's pretty obvious that over a large enough time scale successive machines can become more complex.


Could the problem simply be that UML was poorly designed for the job?

Just because certain information has an essential complexity doesn’t mean that different representations are equivalently complex. There’s an essential complexity to the layout of the London Underground, but it would be considerably worse if you had to represent it with plain text files instead of maps or diagrams.

I agree that UML was wrong-headed and simply ignored this complexity instead of addressing it seriously, but I wouldn’t be surprised if it turned out that some diagram-based programming language would be useful for something.


Simulink, labview, and gnu radio are all fairly well adopted though they all have their problems. They are all notably not oop, at least not idiomatically. In fact, as most blocks are stateless other than their configuration, you might say the paradigm is closest to functional. UML, with the exception of flow charts was very OOP centric. In a world where I still come across people who use OO as a synonym for "good" I can see why they created it that way, but a tool that shapes your design space can be limiting.

The lack of detail in the models means autogenerators have to have lots of configuration for each item in the diagram. People would brag that 95% of their code was autogen, and I would realize they had spent dozens of hours figuring out ways to use checkboxes and menu selections to generate the code they wanted. Instead of typing it. And all the hideous autogen code was a nightmare to step through in a debugger. Large labview projects aren't any better really, but they are popular.


> UML, with the exception of flow charts was very OOP centric. In a world where I still come across people who use OO as a synonym for "good" I can see why they created it that way, but a tool that shapes your design space can be limiting.

To be fair, that can’t be the only reason UML sucks, or even the main reason, because the same is true of Java, and while Java sucks, it’s a much better programming language than UML.

I think a lot of it really does come down to a denial of essential complexity. If you designed a visual programming language in such a way as to actually accept and handle essential complexity you’d be on a better track.

> The lack of detail in the models means autogenerators have to have lots of configuration for each item in the diagram. People would brag that 95% of their code was autogen, and I would realize they had spent dozens of hours figuring out ways to use checkboxes and menu selections to generate the code they wanted. Instead of typing it. And all the hideous autogen code was a nightmare to step through in a debugger. Large labview projects aren't any better really, but they are popular.

I haven’t worked with these systems you’re discussing, but it sounds like people would be better off handwriting more of the code and only using the visual tools for the part that they’re actually better for. In much the same way that a Wikipedia article about elephants includes photographs of elephants instead of merely relying on long-winded textual descriptions of their appearance, while still having lots of text for the sorts of things text is good for representing.

I think maybe GUI interface builders or HyperCard might be another example of this hybrid approach. I think some incarnations of SmallTalk included similar ideas.


The hybrid approach failed miserably for us, but that could be the tool, it forced a lot of things through the GUI with no option to bypass. I noted elsewhere that diagrams are natural for civil engineering and notation is natural for logic and other branches of math. That is because of the "essential complexity". I think it is possible there will be a great general purpose programming diagram tool some day, but I've never seen one that wasn't either very domain specific, or downright terrible.


It’s a hard problem and most people who try to solve it don’t have realistic expectations or the right intentions.


What I have found with GNU radio is that pretty soon, you wanna drop out of the 'graph' view, because you want to dynamically reconfigure your graph.


I've never gotten that complicated with it; before that point I just switch to C++ using UHD or Soapy. I only use GNU Radio to poke around and maybe prototype a little.


> Just because certain information has an essential complexity doesn’t mean that different representations are equivalently complex. There’s an essential complexity to the layout of the London Underground, but it would be considerably worse if you had to represent it with plain text files instead of maps or diagrams.

In general, yes. I was only commenting on the assumption that UML can take the place of a general-purpose programming language, at least in the domain of business automation, or the idea that sufficient modeling can replace coding given current technology.


Martin Fowler discusses this here: https://martinfowler.com/bliki/UmlMode.html

> UML was going to be the "blueprints" of code, and software architects would develop UML diagrams similar to how building architects create blueprints for houses. But as it turned out, that was a false premise.

True. The blueprint is the code. The brick and mortar construction is done by compilers.


Perhaps the better choice was to have automated tools to turn source code into understandable business diagrams to allow business analysts to partner with software engineers, instead of the other way around.


There are tools to turn code into diagrams.

I don't know of any that can make it understandable, however. I think that would be a very difficult task, even for quite small, well-designed programs.

Code has a lot of relationships. In 2D it looks like a mess.


What useful information is this going to convey to the business person, and how is it going to be better than a conversation with you?


There's value in visual representations of complex things. See for example Edward Tufte. People probably just think of him as visualizing data, but his book Visual Explanations goes into many other areas.

My preference these days is to use a text-based representation to generate diagrams with tools like Structurizr DSL, WebSequenceDiagrams, PlantUML and Graphviz. The source for the diagram(s) can be kept in the repo with the executable code and versioned.

Maybe in another decade we'll get some tools that can take the executable source code, along with all the deployment descriptors, the Kubernetes charts, Terraform configuration files, shell scripts, and so on and generate meaningful visualizations.
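
To illustrate the workflow (just a rough sketch driving Graphviz from Python's graphviz package; the component names here are made up), the diagram "source" is a few lines of text you can commit alongside the code and regenerate at build time:

    # render_diagram.py - regenerate the context diagram from this text description.
    # Assumes `pip install graphviz` plus the Graphviz binaries on PATH.
    from graphviz import Digraph

    g = Digraph("context", format="png")
    g.node("web", "Web App")            # hypothetical components
    g.node("api", "Orders API")
    g.node("db", "Orders DB")
    g.edge("web", "api", label="HTTPS/JSON")
    g.edge("api", "db", label="SQL")
    g.render("architecture", cleanup=True)  # writes architecture.png next to the source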


Business continuity and inheritability of business applications by both the business and developer.


I think its main promise was to sell training and materials to large companies. Nice work if you can get it.

Here's one that enabled some people to work a rich vein for a while.

https://en.wikipedia.org/wiki/Shlaer-Mellor_method

Added: As an old person, there's no point in listening to me though. From where I sit, the main improvement over the last 40 years has been the widespread adoption of third party libraries. You'd be surprised at the things that had to be written from scratch. ...I just thought of another difference over time: the population of programming hobbyists who became professionals. It would kill me to write software for free.


The Wikipedia article you posted is very relevant to me because I watched a division of the Air Force waste tens of millions of dollars trying to implement that approach for their collection of IT systems.

The consultants got very rich with their cartoon drawings on the walls and nothing was produced for the taxpayers naturally.

A complete failure but because it's the federal government nobody was held accountable and the person in charge changed the success criteria enough to be able to cancel it and call it complete.


Watched (and still watching) the same thing in the Navy. :(


> From where I sit, the main improvement over the last 40 years has been the widespread adoption of third party libraries.

I'd say, as a direct corollary, the prevalence of API-driven services have been a welcome change.


I used one of those tools at one point, Rational Rose.

It was possible to get it to generate code, as I recall it basically gave you stubs of classes and methods and you would then go and code the implementation of each method.

It seemed like it could save you some amount of effort at basic boilerplate stuff but at the cost of putting the same or more effort into the UML.

UML was the swan song of the classic waterfall SDLC gang. Agile and TDD came along and nobody looked back.


The modern incarnation of agile has very little to do with what is written in the agile manifesto.

This parody site is a fair characterization of what the current situation is: http://programming-motherfucker.com/

My favorite quote there is:

> We are tired of being told we're socialy awkward idiots who need to be manipulated to work... because none of the 10 managers on the project can do... Programming, Motherfucker.

Developers are better at self-organizing than people think. That is the real driving force in modern software. You can eliminate all the ceremonies of scrum and still have a functioning team. You can remove the scrum master and in some cases even the product manager and still have a functioning team.

Despite popular belief: developers CAN understand the product. In some cases developers can understand the product better than a product manager can. Also, developers can have a good approximation to what the customer wants, even without talking to the customer, by just looking at analytics data, log aggregations and bug reports.

The modern incarnation of agile gives too much power to product people and disempowers engineers, turning companies into tech debt mills. The original incarnation of agile empowered engineers and allowed them to collectively negotiate with the product manager.


Agreed. Original agile connected developers and customers through valuing software features. Today's agile is whip-cracking, with the value of features obscured from developers who waste customer time doing unjustifiable work. It's more of a social game than one that benefits customers, so it cannot last forever.


Thanks. I think you've done a better job of describing the situation than I did.

The original agile manifesto emphasized collaboration with the customer. Scrum defined roles such as product manager and scrum master, and then the industry injected even more roles in between the customer and the developer...

Fast forward to 2021, we have "agile" developers that have never met a customer. So much for customer collaboration.


> Also, developers can have a good approximation to what the customer wants, even without talking to the customer,

Yep, sure, maybe we don't even need clients. Just developers creating products for themselves, or just coding for the sake of coding. I've seen that too many times. That's why we need product people.


The largest software companies in existence were born during a time when the role of product manager did not even exist.

Just like the largest corporations in existence became profitable and expansive before they hired MBAs.

Product managers and MBAs are the best examples of the Texas sharpshooter fallacy: you shoot at a wall and then paint a target around the bullet holes. Being successful in those roles is about painting targets around any successful initiative and claiming it was your idea.

Powerpoint presentations are not reality, picking up the customer service phone, auditing the code, talking to internal and external users and keeping in touch with reality is.

https://youtu.be/4JVJdKnbZu8

https://youtu.be/P4VBqTViEx4

https://youtu.be/Y6P8qdanszw

"We hire smart people to tell us what to do". - Steve Jobs


I dunno I disagree—the fact is a lot of people just dive into coding and don’t spend much time with design.

There’s a ton of value in the idea of diagramming code and then generating sources. UML is a starting point but the journey is far from over.

The more appropriate idea is that you create documentation in the form of diagrams for free. Just like in TDD you get unit-tests for free.

Folks always talk about self-documenting code—and that’s great. But what about conveying a complex system to a new team of engineers? Diagrams are frankly priceless if done well.

Also, looking at something like Kubernetes where a declarative YAML file generates magic underneath is somewhat similar. A step beyond what we have would be nice diagramming capabilities over the YAML to auto generate the plumbing underneath.

Personally, I think future advances in development _will_ be done as higher level ideas—pictures worth a thousand lines of code—AI will do the rest.


> The more appropriate idea is that you create documentation in the form of diagrams for free.

The problem is the diagrams are hard to create and hard to update and usually don't remain synchronized to the code. If there was a good way to create documents from the code (perhaps with some annotations required), it could just be a Make target and boom, free(ish) documentation.


I'm working on this, for Ruby, Python, and Java - https://appland.com/docs/get-started.html - would love to get your feedback.


I’ve recently gotten reacquainted with Doxygen, and it allows you to embed PlantUML source right in your source code or Markdown files. Easy to write, easy to update, and stored as plaintext right in your source tree. I don’t love Doxygen, but it’s doing a great job at what I’m trying to do (document a complex C++ project)


What about org-mode? I am using plantUML in org-mode.


I use PlantUML in org-mode as well! That's actually where I started, but found it much easier to recruit other people to write their own documentation (with sequence and state diagrams) by not requiring them to use Emacs :)


I feel like my ideal workflow would be a middle ground between doing the design up front and just jumping into coding. Before you start coding you don't have much of an idea of what problems you will run into, resulting in diagrams based on the wrong assumptions. But with code it's easy to lose track of the high-level structure of what you are writing. Writing code, then diagramming the high-level structure, and then going back to fix the code seems like a good way to go.


Absolutely! That is similar to artists doing thumbnail sketches to figure out the composition; then once things are reasonably worked out, the chosen composition can be worked onto the final canvas; then the details follow.

That is a nice benefit of good development frameworks: how easy is it to explore new ideas? And frankly that’s why there’s an uptick in higher level languages.


> diagramming capabilities over [Kubernetes] YAML

Has been tried: https://github.com/CATechnologiesTest/yipee


Granted, but you can do the same and much more with different methods, and avoid fighting the frustrating, unreliable and time-consuming UML tools altogether.


I can’t argue _for_ any particular tool—just that the concept is a really good concept.

I think a lot of it comes down to folks not bothering to invest in making good tools—or developers sticking to their current workflow.

That is why I relate it to Test-Driven Development because it does require a shift in process. But the end result, I think, could be very rewarding.


I used RR at uni in the early 2000s. It felt very clunky even then. It was also a pig to use - somewhere along the line it became known as Crashional Rose.


Yep, they taught us RR in uni in the early 2000s as well, this was in New Zealand.


Yes. We had a joke that either RR wasn't properly software-engineered the way RR proponents demand, so they weren't even dogfooding. Or it was and the process clearly doesn't work, even for its proponents. Because obviously, RR was crap.


I had the same experience in 1999. I never understood how that software could be so bad. A student using something in a student manner shouldn't be able to crash the thing in multiple different ways. It would be like if I opened Blender as a newbie, hit a button somewhere, and clicked in the viewport and it hard-crashed. I'm sure it's possible to crash Blender, but for a newbie it ought to be harder than that!

And then for this to be held up as the example of how software engineering was going to be in the future is just icing on the cake.

I have many and sundry quibbles with various things I was taught in university ~20 years ago, but most of them I at least understand where they were coming from, and some of them have simply been superseded, of course. But that software engineering course with Rational Rose has the distinction that, in hindsight, I don't think I agree with a single thing they taught.


The problem with tools that generate code is that they are often unidirectional. If there is no way to get code changes to propagate back to the visual model, the latter is likely to fall into disrepair pretty quickly.


It could be possible to do something interesting in this space, where UML is used to generate template code, and later on another tool could extract UML from the code, compare it to the baseline, and flag any discrepancies. From there, you can either sign off on the discrepancies (and replace your hand-made UML with the extracted one) or fix your code. Bit of a kludge, but at least automatic verification is possible, unlike with plain documentation.
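
To make that concrete, here's a minimal sketch (in Python, with made-up class and method names) of the verification half: extract a crude structural model from the live code and diff it against the baseline the diagrams claim.

    import inspect

    # Hypothetical baseline: the class -> public-method structure the UML model claims.
    BASELINE = {
        "Order": {"add_item", "total"},
        "Invoice": {"render"},
    }

    def extract_model(module):
        """Pull a crude structural model (class -> public methods) out of live code."""
        model = {}
        for name, cls in inspect.getmembers(module, inspect.isclass):
            if cls.__module__ != module.__name__:
                continue  # ignore classes imported from elsewhere
            model[name] = {m for m, _ in inspect.getmembers(cls, inspect.isfunction)
                           if not m.startswith("_")}
        return model

    def find_discrepancies(baseline, extracted):
        """Yield (class, missing-in-code, missing-in-model) tuples to sign off on or fix."""
        for cls in baseline.keys() | extracted.keys():
            want, have = baseline.get(cls, set()), extracted.get(cls, set())
            if want != have:
                yield cls, want - have, have - want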


If only it were that simple, but code is so expressive that you can't really create the UML from it as easily as the other way around. You can just do too much stuff in code that the UML generator would simply not understand. Or you'd basically have to code in a very specific manner. Not fun. Of course, since I last tried it they have probably gotten better at it.

I even remember back in university you'd have to write your custom code _in_ the UML tools dialogs if you didn't want it to be overwritten next time you tried to generate code. Of course these were just simple text boxes. Horrible dev experience.


The trick is propagating feedback from tests "backwards" to the model - not code changes - preserving the normal loop of receiving requirements, writing code (in this case the "model"), compiling it to something else (including generated code that fools would be tempted to modify) and running the program.


Honeywell "automatically generated" all of the flight code for the James Webb Space Telescope using Rational Rose in the early 2000s. They were still trying to fix the code when I was at NASA Goddard in the mid-2010s.


> UML was the swan song of the classic waterfall SDLC gang. Agile and TDD came along and nobody looked back.

Don't the UML and Agile and TDD 'gangs' overlap? Robert Martin has evangelised both.


The intersection space in that Venn diagram is generally annotated as "$$$".


The difference I was trying to highlight is that UML (at least in my experience) was still very much focused on "big design up front" and production of design artifacts (vast numbers of diagrams) that agile and TDD approaches explicitly rejected.

I don't remember rapid iteration being a part of any UML-based methodology that I ever used. By the time the diagrams were complete enough to capture implementation details, they were too unwieldy. Did any UML tools support common refactorings, or would you have to manually change potentially dozens of affected diagrams?


But that's the point - how are the same group recommending massive complex paper designs up front, and also agile methodology?


Opinions and ideologies change over time; here's a hypothetical timeline that answers your question:

===A====B==C=======>

A: (around 1997) We are hopeful that the methodology called UML will solve the software engineering problem.

B: We have tried that UML methodology and it doesn't solve the problem it said it would. We should try something else.

C: (February 2001) We have an ideology based on what we have found to solve the problem in practice, let's make a manifesto.


Maybe I'm cynical, but it seems like these people are in a farcical cycle of repeatedly inventing some new master theory of programming only to find it's actually a disaster a few years later and then to switch to something apparently diametrically opposed.

Of course, they have a book on sale to explain the new idea...

How much time was wasted and how many projects were damaged by the bad idea of UML and design up-front that they were pushing as hard as they could less than two decades ago? How many developers are being stressed by endless manic sprinting and micro-managing processes under the name of Agile?

Maybe they should stop? Or apply some actual science? Some of this in-group call themselves scientists but all they do is pontificate. I'm not really sure many of them spend much time actually programming.


> I used one of those tools at one point, Rational Rose. It was possible to get it to generate code,

If you could wait the hours it took to do so. God that was the most resource hungry piece of software I've ever had the displeasure of using.


> UML was the swan song of the classic waterfall SDLC gang.

The Unified Modeling Language (UML) was the graphical language for Object-Oriented Analysis and Design (OOAD).

Classic waterfall was more of a top down, divide and conquer approach to software design and development.


My impression as an undergrad learning UML was that it gave an architectural level view - so the effort would go in at a different level of thinking, not just in a different part of the process?


Yeah it turned into that before kinda dying. Now in the "real world" those tools are only used for diagrams that explain what's called a "slice" of the architecture, but no one really gives a whole architectural view of a system on UML. Not even for a simple component.

But the glimpse you got of it as an undergrad was UML trying to give its last kick before dying. The whole quest for a formal definition and a standard for it doesn't make sense if you only want to use it to give an architectural-level view.


Hey, I remember that. Code generation is cool in that you have now made it an upstream part of your tool chain. No changes allowed at the code level.


Years ago I had a contract with IBM so I got Rose for free. It had really neat demos but once you started using it, it was basically useless or worse. You got a few stub classes and then spent the rest of the time keeping the models in sync with reality.

I think only Clearcase had a bigger negative impact on productivity than Rose.


The thing is, I found it quite paradoxical (to stay polite) that you'd spend time drafting something that is not precise, not data that helps you, but just a document in a tool.

The model-driven thing was nice, but it was never good enough to actually help with code. It was also deeply rooted in the crippled Java days, so it was full of verbose diagrams representing overly verbose classes.

To exaggerate a bit, I'd rather spend time writing property-based tests and a few types in Haskell, in a way.


There can be a stage between "I have kind of an idea of what this is supposed to be" and "I'm ready to code this", where you think carefully about what this thing is actually supposed to be, and how it's supposed to behave and interact. It's not amiss to think for a bit before creating the code.

I'd rather spend some time making sure I'm building the right thing, rather than testing that what I built correctly does the wrong thing.

On the other hand, if you want to argue that UML is not the optimal way to do that, you could make a case. It makes you think through some questions, but those may not be the only questions, and there may be other ways of thinking through those areas than drawing diagrams.

And if you want to iterate your designs, UML is a painful way to do so. You'd want to design in some other medium that is easier to change. (Maybe something text based?) But if you're thinking through all the design issues in another medium, and iterating the design in that other medium, then why produce the UML at the end? To communicate the design to other people - that's the point of UML. But if you can communicate the design better using something else (like maybe the medium you actually design in), then why produce the UML?


That assumes that before you have a thing in your hand (a working program with expected input and output), you can exactly describe how that thing should act, what it should look like, what the input and output should be (and not be), and have that be successful - and structured correctly internally - the first time.

In my 25ish years of experience writing code? That has happened for a non trivial task exactly zero times.

If the idea is you could refactor the UML (and hence generated code) to adjust, since none of the tools are able to generate functional code (stubs and simple templates yes, but not much more than that), that means it would need to refactor a bunch of human manipulated and generated code without breaking it. Which I think is well beyond even our current capabilities.


It's weird to read this, because building architects and designers do exactly that: they have to make tremendous efforts to design complex systems (think an airport or a hospital) before they lay down a single brick. Somehow this idealization and planning step is impossible for software developers.


Those engineers have the good fortune to be working in a fairly constrained space. New materials and building techniques become viable slowly over time.

Software developers are able to build abstractions out of thin air and put them out into the world incredibly quickly. The value proposition of some of these abstractions are big enough that it enables _other_ value propositions. The result of that is that our "materials" are often new, poorly documented, and poorly understood. Certainly my experience writing software is that I am asked to interact with large abstractions that are only a few years old.

Conversely, when I sit in a meeting with a bunch of very senior mechanical engineers every one of them has memorized all of the relevant properties of every building material they might want to use for some project: steel, concrete, etc. Because it's so static, knowing them is table stakes.

I'd say this difference in changing "materials" is a big source of this discrepancy.


Also, the construction industry is a huge mess, and anyone telling you that things don’t go over budget, get torn out because someone messed up a unit conversion somewhere, burn down during construction because someone didn’t follow some basic rules, or turn out to be nearly uninhabitable once complete because of something that should have been obvious at the beginning - is just ignorant. These happen on a not infrequent basis.

The big difference is no one really tries new things that often in construction, because for the most part people have enough difficulty just making the normal run of the mill stuff work - and people who have the energy to try often end up in jail or bankrupt.

In Software, we’re so young we end up doing mostly new things all the time. Our problems are simple enough and bend to logic enough too, that we usually get away with it.

If you’ve ever poured a footing for a building, then had the slump test fail on the concrete afterwards you’ll sorely be wishing for a mere refactoring of a JavaScript spaghetti codebase under a deadline.


Decades of coding here.

Buildings neither are Turing complete, nor do the building blocks become obsolete every few years.

The closest analogue to software development is legislation.

Even the best-written rules can have unintended consequences, and so we have tools to make the behavior ever more precise and less error-prone. But it's never foolproof.

Also and like legislation, it’s the edge cases that balloon a proof of concept into monstrous sizes.

In some respects, for software to advance some components need to be less powerful. But we have this fetish for inventing yet another Turing complete language in the pro space, just because, and bolting on a million features.

It’s unnecessarily tiresome.


Hah! The funny part is, you think they don’t mess this up all the time, but they do! We all have experiences with buildings that are impossible to navigate, have weird maintenance issues (toilets always backing up, A/C a nightmare, rooms too small, rooms too big, not enough useful space, etc). Buildings get redrawn constantly during construction, and they rarely match the plans. Cost overruns are endemic, as are scheduling issues.

They’re also using literally thousands of years of deeply ingrained cultural rules and expectations focusing on making living in and building structures effective (it’s one of the core tenets of civilization afterall), supported by an army of inspectors, design specialists, contractors (themselves leveraging thousands of years of passed down and deeply baked in expertise in everything from bricklaying, to concrete work, to framing).

All that for what, functionally, is a box we put things in, including ourselves, that we prefer provides some basic services to a decent standard and isn’t too ugly.


I remember watching a documentary on architecture, and the speaker, who was offering a different approach, said that for much of architecture, the never-look-back mantra was the unspoken rule of the day.

You'd design and build a building, and that was it. If the roof leaked (common on building-like pieces of art), you didn't want to know about it. If the interior was changed to actually work for the buildings occupants, you didn't want to know -- that'd mean that your beautiful design has been marred.

All this suggests to me that some of these designs are done without deeply considering the needs of the people affected, and realizing that those needs change, and worse, without learning from the mistakes and successes of the past.

[Note that I am not arguing about the merits of how software is, was, or should be designed.]


At the beginning of my career I worked in AEC on the planning side. It was well understood that whatever the Architects had designed would be entirely redone by engineers afterwards and then by on-site engineers and then by tradespeople after that in the implementation. No one really understands what's going on in a reasonably-sized building.


Addressing the real needs of people is hard, and gets in the way of being famous and changing the world - a mindset I’ve seen more than a few times in designers. All of them pretty senior? So I guess it was working for them?


A ways back, the president at a multi-discipline engineering consulting firm I worked in made an interesting point. If you give ten EEs a hardware task, they will come back with something that looks similar. If you give ten software engineers a software task, they will come back with ten completely different things. I think this is because in software there are so many possible ways to do something, and so much richness, that writing software is a very different from making hardware, or architecting a building, to go along with the parent comment.


It's not that software engineers are incapable of doing the same when required (e.g. in the firmware for NASA's Mars rovers), but that usually software engineers don't do that because there is a better alternative.

If architects could build a house multiple times a day while slightly rearranging the layout every time they'd do that in a heartbeat.


Bingo! Architects don't get to debug their building with the press of a button.


There are quite a few responses, but I still want to point out a main difference more clearly:

There are natural-intelligence (human) agents translating the diagram to "code" (bricks).

There is a lot of problem fixing going on done by the construction crews, cursing at the architects (sometimes, or just going with the flow and what comes with the job).

That is the same with software:

If you give good developers diagrams, those human agents too will be able to produce useful software from them, no matter the flaws in the diagrams, as long as they understand the intent and are motivated to solve the problems.


The constraints are different. If compilation took 3-5 years to complete, software would look more like civil engineering.

The goal of 3d printing and the like is to make mechanical engineering more like software so you can get a tight iteration loop


Good point. If you remember the days when compiling your program meant taking a deck of punch cards to the data center, handing them off to an operator, and then waiting a few hours for the result, you spent a lot more time planning your code and running it through your mental line-level debugger than you do today.


The interesting question is how they organize these tremendous design efforts before laying the first brick. In software, there just is no construction phase after the design phase.


Nod, we don’t have to deal with the pesky issues of moving actual physical objects to be in specific contact in certain ways with other physical objects. Once we figure out a design that compiles (in commercial construction, that would be a plan that passes validation/sign off), we’re ‘done’ except for the pesky bug fixing, iteration, follow up, etc.

Writing code is very close to what 90% of the drafting work is for construction (aka some relatively junior person figuring out exactly how many bricks would fit in this space, and how thick it would need to be, to meet the requirements his senior person told him he had to meet - and trying a couple other options when that is obviously BS that doesn’t work the first few times, and then everyone refactoring things when it turns out the first idea causes too many issues and the architect’s grand plan for a 50 ft open span in an area is impossible with current materials).


I suspect it probably is possible, if you're willing to spend enough time. However, it's also true that the cost of a building's architect changing his mind in medias res is far higher than the software developer's. It is not necessarily the case that the best way to approach one discipline is also the best way to approach the other, just because we happen to have decided both should be called "engineering."


They do, however, make computer models and simulations to understand the problem. Programmers do that as well, by coding parts of the problem, running it to simulate usage, seeing how it works, and adjusting accordingly. No bricks need to be laid for software engineers to work either.


I start to think that this step is actually the code. An architect has to specify things because the drawing is not the building, while for programming the 'drawing' actually is already the program.


Buildings are naturally described by drawings; logic is naturally described by notation. We wouldn't ask a civil engineer to design a building using prose, and so we should not ask a computer engineer to describe logic using boxes and arrows.


On the contrary, not only is this planning step not impossible in current programming practice, it's universal, or very nearly so. Almost nobody programs by hex-editing machine code anymore. We just edit the design, often in a language like Golang or C++, and then tell the compiler to start "laying the bricks," which it finishes typically in a few seconds to a few minutes. If we don't like the result, we change the design and rebuild part or all of it according to the new design.

More modern systems like LuaJIT, SpiderMonkey, and HotSpot are even more radical, constantly tearing down and rebuilding parts of the machine code while the program is running. Programs built with them are more like living things than buildings, with osteoclasts constantly digesting bones while osteoblasts build them. In these systems we just send the plans—our source code, or a sparser form of it—to the end-user to be gardened and nurtured. Then, just as osteoblasts build denser bone where its strength is most needed, the JIT builds higher-performance code for the cases that automatic profiling shows are most performance-critical to that user.

— ⁂ —

Soon architects will be able to do their work in the same way.

Like Microsoft programmers in the 01990s, they'll do a "nightly build" of the current design with a swarm of IoT 3-D printers. Consider the 10,000 tonnes of structural steel that make up the Walt Disney Concert Hall in Los Angeles, which seats 2265 people. After the 16-year construction project, it was discovered that reflection from the concave surface was creating deadly hot spots on the sidewalk and nearby condos, requiring some expensive rework.

If each assembler can bolt a kilogram of steel onto the growing structure every 8 seconds, then 2000 assemblers can rebuild it from source in a bit over 11 hours. In the morning, like programmers, the architects can walk through the structure, swing wrecking balls at it to verify their structural integrity calculations, and see how the light falls, and, importantly, notice the sidewalk hotspots. Perhaps another 2000 printers using other materials can add acoustic panels and glazing, so the architects can see how the acoustics of the space work. Perhaps they can try out smaller changes while inside the space using a direct-manipulation interface, changing the thickness of a wall or the angle of an overhang, while being careful not to stand underneath.

In the afternoon, when the architects have gone home, the assemblers begin the work of garbage collection of the parts of the structure whose design has been changed, so the next nightly build reflects the latest updates. As night falls, they begin to rebuild. The build engineer sings softly to them by the moonlight, alert for signs of trouble that could stall the build.

— ⁂ —

Today that isn't practical—the nightly build machine for a single architectural firm would cost several billion dollars. But that machinery itself will come down in cost as we learn to bring the exuberant living abundance of software to other engineering disciplines.

To do ten "load builds" in the 16 years the Walt Disney Concert Hall took, you'd only need two assemblers, perhaps costing a couple million dollars at today's prices; they'd be able to complete each successive prototype building in 15 months.

Suppose prices come down and you can afford 32 assemblers, each placing a kilogram of steel every 8 seconds. Now you can do a "monthly build", which is roughly what I did when I joined a C++ project in 01996 as the build engineer. Or you can build 10:1 reduced scale models (big enough to fit 22 people, in this case) a thousand times as fast. Incremental recompilation on the C++ project allowed individual developers to test their incremental changes to the design, and similarly this kind of automation could allow individual architects to test their incremental changes to the building, though perhaps not all at the same time—the full-scale building would be like an "integration test server".

Suppose prices come down further and you can afford 512 such assemblers. Now you're not quite to the point of being able to do nightly builds, but you can do a couple of builds a week, and you can rebuild a fourth of the Walt Disney Concert Hall overnight.

Suppose prices come down further and you can afford 8192 assemblers. Now you can rebuild the building several times a day. You can totally remodel the concert hall between the morning concert and the afternoon concert.

Suppose prices come down further and you can afford 131072 assemblers. Now you can rebuild the concert hall in 10 minutes. There's no longer any need to leave it built; you can set it up in a park on a whim for a concert, or remodel it into a cruise ship.

Suppose prices come down further and you can afford 2097152 assemblers. Now totally rebuilding the concert hall takes about 30 seconds, and you can adapt it dynamically to the desires and practices of whoever is using it at the moment. This is where modern software development practice is: my browser spends 30 seconds recompiling Fecebutt's UI with SpiderMonkey every time I open the damn page. At this point the "assemblers" are the concert hall; they weigh 5 kg each and link their little hands together to form dynamic, ephemeral structures.

Suppose the assemblers singing kumbaya shrink further; now each weighs only 300 g, and they are capable of acrobatically catapulting one another into the shape of the Walt Disney Concert Hall, or any other ten-thousand-tonne steel structure you like, in a few seconds.

(Wouldn't this waste a lot of energy? Probably not, though it depends on the efficiency of the machinery; the energy cost of lifting ten thousand tonnes an average of ten meters off the ground is about a gigajoule, 270 kWh; at 4¢/kWh that's US$11. In theory you can recoup that energy when you bring the structure back down, but lots of existing technology loses a factor of 10 or 100 to friction. Even at a factor of 100, though, the energy cost is unlikely to be significant compared to construction costs today.)
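
(Spelling that back-of-the-envelope estimate out, assuming g ≈ 9.8 m/s²:)

    E = mgh \approx 10^{7}\,\mathrm{kg} \times 9.8\,\mathrm{m\,s^{-2}} \times 10\,\mathrm{m}
      \approx 9.8\times10^{8}\,\mathrm{J} \approx 272\,\mathrm{kWh};
    \qquad 272\,\mathrm{kWh} \times \$0.04/\mathrm{kWh} \approx \$11.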

— ⁂ —

But tell me more about how programmers need to plan more to reduce the cost of construction mistakes?


Enterprise UML modelling tools certainly allow for complete development, not only stubs.


> You'd want to design in some other medium

That's why I really like PlantUML [1].

It generates UML diagrams from a simple text markup language.

Much quicker to iterate on, easy to put into a repo and share or collaborate.

Still not something you would use to design your whole code structure, but great for brainstorming or drafting once you internalized the language a bit.

[1] https://plantuml.com/


Completely agree with this sentiment: don't include every detail in your UML, but use it instead to straighten out your high-level ideas. PlantUML is also my go-to for this.


I want to add an important affordance of PlantUML: accessibility.

Using visual diagrams is shutting out vision-impaired developers from ever participating in your process. Maybe you don't have any on the team now, but that could change.

PlantUML is screen-reader compatible, and it does a pretty good job of laying out the content of a diagram in a way that "reads right".

I don't think purely-visual diagrams are an appropriate part of modern development for this reason, not without a diligent effort to make an alt-text which conveys the same information. With PlantUML, you get the alt-text for free.


> > To exaggerate a bit, I'd rather spend time writing property-based tests and a few types in Haskell, in a way.

> I'd rather spend some time making sure I'm building the right thing, rather than testing that what I built correctly does the wrong thing.

I don't believe the GP was saying to use tests instead of planning. They were saying to use the tests as planning.

They called out property-based testing in which you describe behavior of the system as a set of rules, such as `f(x) % 2 == 0`, and the test harness tests many inputs trying to find the simplest example that fails that criteria.

They also called out defining types (in their chosen language, not a step removed in a UML diagram), which allows you to think about how the data is shaped before you write an implementation that forces a shape.
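
For instance, a property-based test along those lines might look like the following (a minimal sketch using Python's hypothesis library; `double` is just a hypothetical stand-in for whatever f is under test):

    # pip install hypothesis pytest; run with: pytest
    from hypothesis import given, strategies as st

    def double(x: int) -> int:        # stand-in for the f(x) above
        return 2 * x

    @given(st.integers())             # the harness generates many inputs...
    def test_result_is_even(x):
        assert double(x) % 2 == 0     # ...and shrinks any failure to a minimal example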


I agree completely with your first two paragraphs, but UML, in my opinion, failed to support that approach. Its primary failure is that it neither captured nor communicated the rationale behind the requirements, the answers to "why this?", "why this, instead of that?" and "is this right? is it sufficient?" Answering these sorts of question is central to the production of requirements and also to understanding them, but with UML these questions and their answers are treated like scaffolding, taken away from the result before its delivery.

One might argue that UML could support the capture of such information, but what matters is that this rarely, if ever, was done. It is not the sort of information suited to being presented diagrammatically, or at least not by the sort of diagrams that made it into UML.

One might also argue that no other requirements specification method centered on these features has made it into mainstream software development. Some people here, for example, have argued that the code is a statement of requirements, and code also lacks these features. It does not follow, however, that therefore UML should have succeeded.

Ultimately, UML was an added layer offering insufficient benefits to justify its costs. Its benefits were insufficient because it was predicated on the false assumption that requirements can be adequately captured by a sufficient number of simple declarative statements about how things must be, and that the process of specifying requirements is primarily a matter of making such statements.


It certainly isn't the optimal way. Imagine the UML for a metaclass that creates classes, or for composition/trait based object definitions.

The good UML diagrams are sequence and maybe use case.


Why would you ever not want to iterate your design? Doing is the fastest way of learning. The details can drive a design, so that if you don't remove all ambiguity, you will create an architecture that won't actually work. The problem people who just jump in face, is that they do not abandon their bad prototype and begin again, instead clinging to faulty architecture which leaves them in the same boat as someone who made an architecture unaware of the details.


Agreed ... the thing that bothers me about UML is that it has displaced better, smaller-bore tooling in a significant way. The idea of thinking-before-coding work is of course completely necessary.


Model driven isn't dead though, it has transformed. It's all about text models now. The only thing you really see is people using clicky-clicky tools to make databases.


I've always really disliked UML because it tries to strictly encode a whole lot of information into diagrams, which is way too rigid and opaque for me. My eyes just glaze over when I see UML.

I don't want to have to search "what does double arrow mean UML" in order to understand a proposal. I don't want an arrow to mean something that I couldn't learn somewhere else. I'd rather have a loose informal reference diagram alongside a couple paragraphs describing the system more formally. That way, the important information can be emphasized, the unnecessary information can be glossed over, and the diagram acts as a big-picture aide rather than some kind of formal semantic notation.


That's not a fault of UML. I can't read Greek but that doesn't make it unreadable.

Everything is hard to read before you know how to read it.


Yea but, why do we have to speak in Greek in the first place?


So consulting companies can make bank teaching you how.

After convincing the CIO that 'GREEK' is the silver bullet to their problems.

/s


Because that's the language the laws are written in. If you don't speak Greek, you don't know what the law is, so you'll break it. That's a bad idea.

If you're living in the Byzantine Empire, that is.


Same can be said about English with regards to programming. Most people don't have English as a native language.


Yeah that's why I normally write documentation in English instead of Greek. That way, when people read it, they don't need to learn a new language.

Besides, that's only half of my criticism. Greek is at least a full language where you have the flexibility to phrase things however you want and inject detail wherever you need. UML is a very rigid language which makes it hard to emphasize certain elements over others. A text has a reading order and a logical progression; UML is spaghetti.

If you're gonna write your docs in a different language, at least pick a good one.


Had UML succeeded at that goal, I think it's funny that hackers would have probably built a text based language to generate the UML diagrams.


Those tools exists already. I used PlantUML[0] at my programming course. UML is nonsense, of course, but it was part of the curriculum, and it was more tolerable doing UML in Vim than in a graphical point-and-click editor.

[0]: https://plantuml.com/


> UML is nonsense

Eh, UML is a tool. Pretending to document fully, or to treat UML as ground truth, is a fool's errand, of course, but since our daily job involves taming complexity, any system of knowledge partitioning that one can assume everyone else understands is a godsend.


I'm glad you've found UML useful.


It's inevitable. Most digital FPGA and ASIC development is done with HDLs despite the availability of schematic entry systems. 2D representations of behavior do not scale, are hostile to collaboration, and suffer from vendor lockin.


PlantUML & other tools have existed for quite a long time.


PlantUML is an excellent tool for creating visual representations of system behaviors. Because diagrams are generated from plaintext, they’re easy to maintain and version control. I use it often when designing new features and systems. You don’t need to pay attention to UML semantics to create valuable diagrams.


PlantUML also integrates pretty well with different wikis. The textual representation is saved in the wiki and the image is generated on-demand.


100% - that's my primary usage of it. It's easy to encode a complex diagram and just as easy to change it later.


Hackers wouldn't touch UML with a 4000000000000 kilometer pole.


When I was in uni recently, I was learning UML and wondered why all the FOSS tools for it sucked. I quickly worked out it's because no FOSS programmer actually uses or cares about UML.


It would be difficult to do really anything with such a pole.


UML is the virtual thing, so it will be a virtual pole, of course.


Sure they would. As a widely recognizable set of boxes and arrow shapes, it's useful for the kind of doodling that you might want to show someone else later :).


Challenge accepted.


UML is a language or notation. It isn't dead, and I consider it still useful because a standardized notation means you don't have to explain your notation again and again. Or worse, you forget to explain the notation and people are left confused about what that arrow or box actually means.

The promise "that with detailed enough diagrams, writing code would be trivial or even could be automatically generated" was made by "model-driven something". The idea behind that gets reinvented often. The latest one is called No-Code or Low-Code.


The problem isn't UML itself but the fact that the UML definition is separate from the code. I still see that there can be some use in visualizing parts of the code with UML.


A related one that people keep reinventing is “dataflow programming”, where programs would get more expressive if we didn’t call and return stuff but instead data just moved around those arrows on a graph. That’s like code generation but you actually execute the graph.

I guess it does work for Excel.
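To make the idea concrete, here's a rough Python sketch of the spreadsheet-ish flavor of dataflow (all node names and numbers are made up): nodes are either plain values or functions of other nodes, and results get pulled along the edges instead of being passed around by call/return:

    # Nodes are either raw values or (function, input-node-names) pairs.
    graph = {
        "price":    10.0,
        "quantity": 3,
        "subtotal": (lambda p, q: p * q, ["price", "quantity"]),
        "total":    (lambda s: s * 1.2, ["subtotal"]),   # add 20% tax
    }

    def evaluate(node, graph, cache=None):
        """Pull a value through the graph, computing upstream nodes first."""
        if cache is None:
            cache = {}
        if node not in cache:
            spec = graph[node]
            if isinstance(spec, tuple):                  # a computed node
                fn, inputs = spec
                spec = fn(*(evaluate(i, graph, cache) for i in inputs))
            cache[node] = spec
        return cache[node]

    print(evaluate("total", graph))   # 36.0

The Excel comparison fits: every cell is a node, and recalculation is just re-pulling the graph when a source value changes.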


I never understood the appeal of UML. Hardware design had been done almost entirely with schematics, but then in the mid 90s HDLs (Hardware Description Languages) and logic synthesis offered increased productivity for hardware designers. Now, other than very high-level block diagrams, hardware design is almost completely textual. UML seemed like schematics for software and a step backward.


That's the right perspective to look at it from. It wasn't just increased productivity which brought us to HDLs; it was the sheer impossibility of understanding or keeping track of ever larger and more complex designs with schematics. With today's software systems we have exactly the same problem (but computer scientists apparently prefer to reinvent things rather than study anything old). UML (and with version 2 also SysML) finally got an equivalent textual representation, but it was much too late.


Completely agreed. As for why the code is the blueprint: good code design requires the ability to switch between high-level details (how do I structure the major components of the system?) and super low-level details (e.g. the application of a particular algorithm to a certain problem, or the various hacks and workarounds one often has to write when dealing with certain third-party systems). The lower-level details simply cannot be extrapolated from a high-level structure.


To extend the construction analogy a bit: typical architectural drawings aren't buildable as-is. They often omit key details (member composition is rarely even mentioned, same with member sizing!). Stamped civil-engineering plans will often omit anything outside the core structural elements being certified (so good luck figuring out the size of the beam you're supposed to put somewhere if it isn't a core load-bearing element). Huge portions of construction are based on decades of (inconsistent) experience, in-the-field improvisation, cargo-culting, and gut feel. The smaller / less big-corp the job, the more true this is.


I remember "Booch Blobs." The original system diagrammer was Grady Booch, and he used these little "clouds." I think Rational Ro$e used them. He ended up throwing in with Ivar Jacobsen, and they came up with UML (which Jacobsen started). More boring, but also more practical.

I use "pseudo-UML," as a "quick and dirty" diagrammer, but only when I want to do things like this: https://littlegreenviper.com/miscellany/swiftwater/the-curio...

I don't bother with the "official" definitions of UML. I kinda make it up as I go along.


> I think Rational Ro$e used them

Yes, prior to version 4.

> He ended up throwing in with Ivar Jacobson

Jacobson had a proven method and toolset at that time, called Objectory, which was superior to Rose. Unfortunately they killed the tool. It seems to be an imperative of history that mediocrity prevails.


The funny thing (having lived through that time) was that while it was very trendy to pretend construction was not a giant disaster most of the time - from a planning, delays, cost-overruns, etc. perspective - that was pretty clearly not the case even then, if we'd looked even a little!

Like many things, the big promise was a fad, but we learned some valuable things out of all of it, and some still survive.


Software Engineering is a licensed profession in several countries.

In Portugal I cannot sign a legally binding contract as Eng. SoAndSo without having been licensed to do so.

Naturally, plenty of people without such duties never take the final exam; however, the universities where they studied had to be certified by the engineering order anyway.


I don't think I have ever met a software engineer in Portugal who is in the "Ordem de Engenheiros". It's far more common with civil engineers, materials engineers and such, because for them it is indeed legally required.

That may also be true for some areas, but you can definitely sign a contract for software development with just a generic business license.


There are certainly creative ways to sign the contract in order to avoid that requirement; after all, we belong to the European nations that tend to get creative when it's time to comply with the law.

For example, I knew some consulting shops that had one poor soul who signed all the contracts and hoped for the best.


> But as it turned out, that was a false premise. The real blueprints for software ended up being the code itself. And the legacy of UML lives on in simpler boxes and arrow diagrams.

IMO the bad rap UML gets is undeserved. The value of a detailed design in UML may be limited. But high level design elements like use case diagrams, sequence diagrams, activity diagrams - these are super useful.

Simpler "boxes and arrow diagrams" are fine but it's nice to have some consistency in the visual representation of these elements.


> It was developed during a time when there was a push to make Software Engineering a licensed profession

Regardless of the merits (or lack thereof) of the original push, I do want to see greater accountability and oversight of safety-critical systems and related ones, such as IoT systems. The idea of a licensed/chartered engineer having to sign off on a project (and so put their personal professional reputation at stake) is something I support in the aftermath of things like Boeing's MCAS snafus - or the problems with Fujitsu's "Horizon" system - and so on.

I don't want occupational gatekeeping like we see with the AMA and the trope of licensed nail parlours, but we need to learn from how other engineering professions, like civil engineering and aviation engineering, have all instituted a formal and legally recognized sign-off process, which I feel is sorely lacking in the entire software engineering industry.


The challenge is that when you're doing enough detailed design up front, you are back to waterfall, so it's not really compatible with the agile methods of today.


Honestly, I think complex architectures are best demonstrated as diagrams - and those can be developed in an agile fashion. Stable, well-thought-out architectures can't be slapped together without nice diagrams. There are a ton of folks who just "start coding" to get a feature going, but when someone else takes over the project, how are they to learn the code? Diagrams are always the best way for me - and there are limits to what Doxygen says, depending on how bad the implementation is.

The main point of UML is to tackle both diagramming/architecture AND force the basic coding to reflect the diagrams. It forces code and documentation to both reflect the architectural truth.

This doesn’t have anything to do with agile methodologies, as any task can follow agile workflow.


I'm kind of missing the opposite: a tool that can draw a diagram out of the code - one that, by dropping some details but preserving the important stuff, like which objects travel through which functions, can give you a better understanding of what architecture your program/system actually has.

Because it might be different from the architecture you think you have, and some bugs or opportunities for improvement might be more easily spotted through this different lens.
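For what it's worth, the static half of this is easy to prototype. Here's a toy Python sketch (not a real tool - the file name is a placeholder) that walks a module's AST and emits PlantUML class/inheritance source; the dynamic "which objects travel through which functions" part is the genuinely hard bit:

    import ast

    def classes_to_plantuml(path):
        """Emit PlantUML source for the classes and inheritance in one file."""
        tree = ast.parse(open(path).read())
        lines = ["@startuml"]
        for node in ast.walk(tree):
            if isinstance(node, ast.ClassDef):
                lines.append(f"class {node.name}")
                for base in node.bases:
                    if isinstance(base, ast.Name):   # skip dotted base names
                        lines.append(f"{base.id} <|-- {node.name}")
        lines.append("@enduml")
        return "\n".join(lines)

    print(classes_to_plantuml("orders.py"))   # render the output with PlantUML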


See this [1] article by Martin Fowler. UML promised, and pushed for, ever more control over how much of the program could be sketched/modelled with it.

1. https://martinfowler.com/bliki/UmlMode.html


I'm missing "UML as a satelite map of the wilderness"


It seems fundamentally obvious to me that this line of thinking is bogus. If you move enough of the complexity to UML diagrams of course the code will be simpler - because all the complexity is now in the UML diagrams.

That doesn't make the complexity go away: you have to do just as much work writing UML diagrams as you did writing code before, but now you're expressing your complexity in cumbersome visual designers rather than in code.

If you're going to shift business logic/data from code to any other format you need to demonstrate that that other format is somehow better for representing that information, you can't just pretend that because it's not expressed in code any more you've gotten rid of it.


> writing code would be trivial or even could be automatically generated (there are UML tools that can generate code)

Ironically, a lot of documentation systems now do the opposite: take your code and produce UML diagrams from it.


Indeed. The problem is not with the diagrams, so much, as with the requirement for people to draw them. The code is the source of truth and the only true representation of the program.

There is value in diagrams - but that value is highest when the diagrams are derived directly from code. That is why I am making https://appland.com/docs - get the benefit of interactive code diagrams, generated automatically from executing code (not just static analysis, which is too weak to handle the dynamic behavior of modern frameworks).


I use it exclusively during refactoring to try and spot coupling, or to figure out somebody else’s code with a sequence diagram. It’s handy for that. It would be weird to use it for up-front design but I guess you could


I invested much of my early career in model driven architecture with UML. Including diagramming tools, dev rel, XMI transforms, code generation, etc.

I couldn’t have summarized it better than you just did. Thank you.


this is what I do... I have a UML document that describes the database schema and instead of autogenerating it, I run a compile-time check to verify that the UML is in sync with the schema.
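Roughly (with the names changed and everything simplified to a hypothetical SQLite/PlantUML setup - this is a sketch, not my actual tooling), the shape of the check is: parse the column names out of the UML source and diff them against what the database reports, failing the build on any mismatch:

    import re
    import sqlite3

    def columns_in_uml(uml_text, table):
        """Field names listed inside an 'entity <table> { ... }' block."""
        block = re.search(rf"entity\s+{table}\s*\{{(.*?)\}}", uml_text, re.S)
        if not block:
            return set()
        return {line.strip().split()[0].lstrip("*")
                for line in block.group(1).splitlines() if line.strip()}

    def columns_in_db(conn, table):
        return {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}

    uml = open("schema.puml").read()
    conn = sqlite3.connect("app.db")
    for table in ("users", "orders"):
        if columns_in_uml(uml, table) != columns_in_db(conn, table):
            raise SystemExit(f"UML and database disagree on table {table!r}")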


Nice! What tools do you use to achieve this?



Honestly, UML would have been awesome for backend architecture. Most backend architecture diagrams I see are logos next to boxes connected by arrows, which is fine for a high-level view but extremely difficult to automate.


Part of the reason for this is that until you get to larger system components, changing the code is relatively cheap, so there's less need for a formal design.


Wow, that is perfectly worded. In 2001 I spent a great deal of time making these UML diagrams for my boss. I didn't understand what this had to do with writing code. I hated it. I tried to argue it was pointless - let's just skip these and start writing code... and there were crickets.

