UML's promise was that, with detailed enough diagrams, writing code would become trivial or could even be automated (there are UML tools that generate code). It was developed during a time when there was a push to make software engineering a licensed profession. UML was going to be the "blueprints" of code, and software architects would develop UML diagrams much as building architects create blueprints for houses. But as it turned out, that was a false premise. The real blueprints for software ended up being the code itself. And the legacy of UML lives on in simpler box-and-arrow diagrams.
IBM used to push the adoption of their business process software for exactly the same reason. They imagined that "business process experts" would use UML to construct the entire business process, and that the software (based on WebSphere Application Developer, an Eclipse-based IDE) would then generate all the execution code, including deployment scripts. The irony is that the UML itself became more complex than code, and the dozens of layers of exception traces were simply incomprehensible to engineers, let alone to "business process experts". To add insult to injury, IBM mandated generating tons of EJBs. Even thinking of that induces a migraine.
P.S. It's surprising that those who advocate that UML is better than code didn't understand that the essential complexity would not go away simply because we switched languages, as essential complexity lies in precisely specifying a system. Neither did they understand that a programming language offers more powerful constructs and tools for managing complexity than UML does.
I look at the code I'm writing (mostly C# these days) and consider the _information-theoretic_ view of it, and I see a major problem: even syntactically terse languages (like C# compared to its stablemate VB.NET) still require excessive, even redundant, code in many places. For example, prior to C# 9.0, defining an immutable class required you to repeat names three times and types twice (constructor parameters, class properties, and assignments in the constructor body), which alone is a huge time-sink.
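As a rough illustration of that repetition (the Money type here is made up), compare a hand-written pre-C# 9 immutable class with the C# 9 positional record that expresses the same shape:

    // Pre-C# 9: each member name appears three times (property, parameter, assignment)
    // and each type twice.
    public sealed class Money
    {
        public string Currency { get; }
        public decimal Amount { get; }

        public Money(string currency, decimal amount)
        {
            Currency = currency;
            Amount = amount;
        }
    }

    // C# 9: a positional record declares the same immutable shape once
    // (named differently here only so both can sit in one snippet).
    public sealed record MoneyRecord(string Currency, decimal Amount);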
The most tedious work I do right now is adding "one more" scalar or complex data-member that has to travel from Point A in Method 1 to Point B in Method 2 - I wish I could ctrl+click in my IDE and say "magically write code that expresses the movement of this scalar piece of data from here to here" and that would save me so much time.
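To make that concrete, here's a tiny hypothetical sketch of the "one more field" tax in C#: carrying a traceId from the top method to the bottom one means editing every signature in between by hand, even in layers that only forward it.

    using System;

    class Pipeline
    {
        static void Handle(string payload, string traceId) => Process(payload, traceId);
        static void Process(string payload, string traceId) => Save(payload, traceId);   // only forwards it
        static void Save(string payload, string traceId) =>
            Console.WriteLine($"[{traceId}] saved {payload}");

        static void Main() => Handle("order-42", "trace-7");
    }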
At least in modern languages like C# 9.0, Kotlin, and Swift (and C++ with heavy abuse of templates) a lot of the tedium can be eliminated - but not in the granddaddy of OOP languages: Java. Still to this day, I have absolutely no idea how people can write Java for line-of-business applications (its bread-and-butter!) and remain sane from having to manually manage data-flow, implementing Java Beans, and manually writing by-hand getter-and-setter methods...
> but not in the granddaddy of OOP languages: Java. Still to this day, I have absolutely no idea how people can write Java for line-of-business applications (its bread-and-butter!) and remain sane from having to manually manage data-flow, implementing Java Beans, and manually writing by-hand getter-and-setter methods...
Because... We don't. IDEs and code generators have replaced a lot of the more stupid boilerplate. Not that there isn't a lot of stupid boilerplate in Java but it's been greatly reduced by tooling.
Still, I don't work with Java because I like it; I work with Java because it works and has a great ecosystem. The tooling around it makes it bearable, and without it I'd definitely have jumped ship a long time ago.
I'm not only a Java developer, I've worked with Go, Python, Clojure, Ruby, JavaScript, Objective-C, PHP and down to ASP 3.0. Java is still the language that employed me the most and the longest, I have no love for it apart from the huge ecosystem (and the JVM gets some kudos) but it works well for larger codebases with fluid teams.
Ward Cunningham once noted that because a programmer can input incomplete, or minimal, code and then push a few buttons to autogenerate the rest, there's effectively a smaller language inside Java. Since he made that remark, a number of other JVM languages have sprung up that try to be less verbose. One of them is Groovy, which uses type inference and other semantics to reduce duplication (or "stuttering", as Go calls it).
The issue now with Java is that it's such a big language and it's accumulated so much from many different paradigms that experience in "Java" doesn't always transfer across different companies or teams. Some teams hew closely to OO design and patterns, others use a lot of annotations and dependency injection, still others have gone fully functional since Java 8.
And then there are shops like one of my employers, where a large codebase and poor teamwork has resulted in a mishmash of all of the above, plus some sections that have no organizing principles at all.
> Some teams hew closely to OO design and patterns
I maintain that design-patterns are just well-established workarounds for limitations baked into the language - and we get used to them so easily that we rarely question why we've built up an entire mechanism for programming on top of a programming language instead of improving the underlying language to render those design-patterns obsolete. (I guess the language vendors are in cahoots with Addison-Wesley...)
For example, we wouldn’t need the Visitor Pattern if a language supported dynamic double-dispatch. We wouldn’t need the adapter, facade, or decorator patterns if a language supported structural typing and/or interface forwarding. We wouldn’t even need to ever use inheritance as a concept if languages separated interface from implementation, and so on.
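To make the Visitor point concrete: in a language with type patterns (C# shown here, with made-up shape types), the dispatch that Visitor exists to simulate collapses into one switch. It isn't true dynamic double dispatch, but it removes the Accept/Visit boilerplate for the common case:

    using System;

    public abstract record Shape;
    public sealed record Circle(double Radius) : Shape;
    public sealed record Rectangle(double Width, double Height) : Shape;

    static class Demo
    {
        // One switch over the shapes replaces an Accept method on every shape
        // plus a Visit overload on every visitor.
        static double Area(Shape shape) => shape switch
        {
            Circle c    => Math.PI * c.Radius * c.Radius,
            Rectangle r => r.Width * r.Height,
            _           => throw new ArgumentException("unknown shape", nameof(shape))
        };

        static void Main() => Console.WriteLine(Area(new Rectangle(2, 3)));   // 6
    }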
Better answer: the only real OOP system is Smalltalk.
> design-patterns are just well-established workarounds for limitations baked into the language
Strict functional programmers have been saying that for years. They may be workarounds, but as patterns they have value in allowing one programmer to structure code in a way that is recognizable to another, even months later. You could say that a steering wheel, gas pedal, and brakes are workarounds for limitations baked into the automobile, which we wouldn't need if cars could drive themselves - but you'd still value the fact that the steering wheel and the rest of the controls generally look and work the same across vehicles.
Right you are - but my point is that language designers (especially Java’s) aren’t evolving their languages to render the more tedious design-patterns obsolete - instead they seem to accept the DPs are here to stay.
Take the singleton pattern, for example. It's not perfect: it only works when constructors can be marked as private and/or when reflection can't be used to invoke the constructor and create a second, runtime-legal instance. A better long-term solution is to have the language itself natively support passing a reference to static state, which completely eliminates the risk of a private-constructor invocation - but that hasn't happened.
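For what it's worth, the reflection loophole is easy to demonstrate in C# (the Config type is invented for the sketch):

    using System;

    public sealed class Config
    {
        public static readonly Config Instance = new Config();
        private Config() { }
    }

    class Demo
    {
        static void Main()
        {
            // The private constructor doesn't stop reflection from minting a second "singleton".
            var second = (Config)Activator.CreateInstance(typeof(Config), nonPublic: true);
            Console.WriteLine(ReferenceEquals(Config.Instance, second));   // False
        }
    }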
OOP Design Patterns are like JavaScript polyfills: they enable things that should be part of the native platform. They’re fine to keep around for a few years when they’re new, but when you’re still using what is essentially an aftermarket add-on for 5+, 10+ or even 25+ years you need to ask if it’s the way things should be or not...
Patterns are commonly (but not only) symptoms of missing features in the language, but at the same time they are also just vocabulary.
Patterns exist in functional programming as well, any map/reduce operation is a pattern, any monad is a pattern. It's a proven way to achieve a goal, it's easy to compartmentalise under a moniker and refer to the whole repeatable chunk with a name.
Unfortunately a lot of people only learn how to properly apply design patterns after doing it wrong and/or overdoing it (mea culpa here!). It's easy to spot the bad smells after you've been burnt 2-3 times.
If map and reduce were design patterns, you’d be writing out the iteration bits every time you used them. Instead map and reduce are abstractions, and you only have to plug in your unique functions and values.
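A small C# sketch of that distinction, using a made-up array of amounts: the "pattern" version spells out the iteration machinery at every use site, while the abstraction only takes the parts that vary.

    using System;
    using System.Linq;

    class Demo
    {
        static void Main()
        {
            var amounts = new[] { 10, 25, 7 };

            // As a "pattern": the loop is re-written at every call site.
            var total = 0;
            foreach (var amount in amounts) total += amount;

            // As an abstraction: only the seed and the combining function are supplied.
            var total2 = amounts.Aggregate(0, (acc, amount) => acc + amount);

            Console.WriteLine($"{total} {total2}");   // 42 42
        }
    }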
This reminds me of a talk by Rich Hickey [0], where he introduces the Transducer pattern, which is actually an abstraction for map/reduce/fold/filter etc.
(But I'm not trying to invalidate your claim that patterns exist in FP in general, only that specific case. Afaik, the Transducer abstraction isn't even widely-known nor used.)
By writing enough code using design-patterns that you see a pattern to the usage (and abusage) of design-patterns - and get plenty of experience with other languages and paradigms where said patterns are irrelevant.
I'd agree with that statement. I feel that I have settled on a few frameworks that I consider modern but mature and stable, with a good job market. For that I had to learn these smaller languages inside Java.
And not only smaller languages, but the tool set as well. Maven and Gradle for a start, and Gradle is its own universe of little quirks and "what the fuck" moments. IDEs have a learning curve (but I learned vim and emacs before using any IDE, so I know how steep learning curves work), and if you manage to use their features they can help your productivity immensely. Frameworks such as Spring have enough pull that it's easy to find interesting projects using them; I think the direction of Spring Boot is pretty good for modern tech enterprises, at least for part of the stack.
Boring technology has its place. It's a hassle that you have to learn a pretty big set of tools to be productive in Java, but when you do, you can actually accomplish a lot with multiple teams at a scale of hundreds to thousands of engineers.
You can also shoot yourself in the foot pretty easily, at massive scale, if the people making decisions are architecture astronauts and not battle-hardened engineers who have suffered through incomprehensible code and piles upon piles of mismatched technologies and failed frameworks. Keeping the tech stack simple, boring, and focused on a small set of tools has its benefits at scale.
Groovy is a nightmare. We now reject candidates who advocate for it in production during interviews, and we warn those who say they'd never dare use it there but do use it for testing.
Who cares if you have to write setters and getters by clicking a button in IntelliJ, or have to make your types explicit rather than asking every subsequent reader to use their brain as a type-inference compiler?
Typing clear code isn't a problem Groovy should have solved by making it all implicit, at the cost of having uncompiled production code. Code should never fail at runtime because of a function-name typo that the tooling could have told you about earlier for free.
A lot of this also applies to other interpreted languages widely used in production, such as Ruby, Python and Clojure.
You don’t like interpreted languages in production, fine. But rejecting candidates who think differently from you just creates a monoculture and reduces the chance that you learn anything new, beyond reinforcing your own convictions.
How rational this is depends on your testing culture. With a comprehensive test suite, the guarantees offered by a compiler are not very important. The software goes through all its runtime paces before production anyway.
If you’re not going to write any tests, then obviously compile time is a crucial line of defense.
Most shops will be somewhere in the middle where compiler guarantees offer a real but marginal benefit, to be weighed against other tradeoffs.
The point is not "let's just not write any tests". With a compiler that offers meaningful guarantees, you can write more worthwhile tests than "does this function always take/return integers".
If you’re passing or returning values of the wrong type, it’s going to blow up one of your tests. Asserting on a value implicitly asserts its type. Passing a value and not getting a runtime error for your trouble, pretty strongly indicates that it’s the right type.
Writing tests instead of utilizing the compiler is wasted time and effort. And it is one of the worst kinds of code duplication, because you are reimplementing all the type-checking, bounds-checking, etc. Usually badly, buggy and again and again. And since usually the test suite doesn't have tests for the tests, you will only notice if something breaks in the most inopportune occasion possible.
In unit testing for dynamically typed languages, very rarely do you make explicit type checks. The type checking naturally falls out of testing for correct behavior.
If the language doesn't help you with a function name typo, that's a crap dynamic language. Not only is that not a feature or benefit of dynamic typing, it fuels unfair strawman arguments against it.
Here is something I made largely for my own use:
This is the TXR Lisp interactive listener of TXR 257.
Quit with :quit or Ctrl-D on an empty line. Ctrl-X ? for cheatsheet.
TXR is enteric coated to release over 24 hours of lasting relief.
1> (file-put-string "test.tl" "(foo (cons a))")
t
2> (compile-file "test.tl")
* test.tl:1: warning: cons: too few arguments: needs 2, given 1
* test.tl:1: warning: unbound variable a
* test.tl:1: warning: unbound function foo
* expr-2:1: variable a is not defined
That's still a strawman; it only scratches the surface of what can be diagnosed.
I see where you're coming from, having used Groovy for a few years (in Grails, mostly) but I think you're also overstating your case.
Groovy is very quirky, it's good at turning compile errors into runtime errors (@CompileStatic negates many of its advantages, like multiple dispatch), and it makes IDE refactoring less effective. But then again, so is Spring. This got better when Java configuration was introduced... and then came Spring Boot, which is a super leaky abstraction, with default configuration that automatically backs off depending on arbitrary conditions (!). And yet people find it valuable, because it reduces boilerplate.
These days, I use Groovy mostly in testing (and Gradle), and it can really make tests more expressive.
> IDEs and code generators have replaced a lot of the more stupid boilerplate. Not that there isn't a lot of stupid boilerplate in Java but it's been greatly reduced by tooling.
I've been dealing with Java for about four years in an academic setting, not a professional one. Whenever I've observed Java code in workplaces, the codebases have universally been bloated monstrosities composed mainly of anti-patterns - but hopefully that was down to the era those applications were written in (J2EE, Struts, etc.).
My experience is that:
* Auto-generated code is still visual (and debugging) noise
* Annotations like Lombok's are great, but are "magic" abstractions, which I find to be problematic. They add cognitive load, because each specific library has its own assumptions about how you want to use your code, as opposed to built-in language constructs.
* Especially for Lombok, I can't help but think adding @Getter and @Setter to otherwise private fields is a poor workaround for a simple language deficiency: not having C#'s properties { get; set; }. I feel the same about libraries that try to circumvent runtime erasure of generic types.
* Compared to C#'s ASP.NET (core), I find fluent syntax configuration with sane defaults and "manual" DI configuration much more manageable and maintainable than auto-magic DI as in Spring, because at least it's explicit code.
* Java tooling (and if previous points weren't, this is definitely a subjective stance) just seems inferior and to set a low bar in terms of developer experience - maven or gradle compared to nuget, javac compared to dotnet CLI, executable packaging... As to other tooling, I suppose you're mainly referring to the IntelliJ suite?
I think C# (and by extension, Kotlin) had the right idea in seeking a balance through removing as much boilerplate as possible in base language constructs. Adding libraries is fine, but shouldn't be a workaround to evolving the language.
I agree with all of your points, including C# and Kotlin approach.
At the same time, a lot of that is an artefact of decisions made early in Java's life, such as backwards compatibility.
The same goes for tooling: a lot of tools are products of their time - Ant, Maven, and now Gradle.
Lombok has its pros and cons, I use it sometimes but had issues with Lombok and Java upgrades due to how Lombok writes out its bytecode.
I haven't touched javac in more than a decade so I don't really care about it, it's been abstracted away by my build tools (and I agree, the build tools are less-than-optimal).
Again, I agree with all the criticisms. At the same time, given how large the Java installed base is, and having gone through the Python 2 vs Python 3 migration path, debacles, etc., I still prefer to have all this cruft that is known, well discussed online, and documented in its quirks, rather than a moving target of language features over the last 20-25 years.
Java is too big due to all the evolution it went through. Could it have taken different paths and modernised the language? Yes, at the expense of some core design decisions made at the beginning. Do I agree with all those decisions? Nope, but who agrees with every design decision made by their programming language's designers?
> Compared to C#'s ASP.NET (core), I find fluent syntax configuration with sane defaults and "manual" DI configuration much more manageable and maintainable than auto-magic DI as in Spring, because at least it's explicit code.
Fluent-syntax, as it exists today, needs to die in a fire. It's horrible. It abuses a core computer-science concept (return values), turning it into something that exists only to save a few keystrokes.
1. You have no way of knowing if the return-value is the same object as the subject or a new instance or something else.
2. It doesn't work with return-type covariance.
3. You can't use it with methods that return void.
4. You can't (easily) save an intermediate result to a separate variable.
5. You can't (easily) conditionally call some methods at runtime.
6. There is no transparency about to what extent a method mutates its subject or not. This is a huge problem with the `ConfigureX`/`UseY`/`AddZ` methods in .NET Core - I always have to whip-out ILSpy so I can see what's really going on inside the method.
Some libraries, like LINQ and Roslyn's configuration, use immutable builder objects - but others, like ConfigureServices, use mutable builders. Sometimes you'll find both kinds in the same method-call chain (e.g. Serilog and ImageProcessor).
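Points 1 and 6 are easy to show with two everyday .NET chains that look identical at the call site but behave differently underneath:

    using System;
    using System.Text;

    class Demo
    {
        static void Main()
        {
            // Mutable fluent style: every Append returns the very same StringBuilder instance.
            var sb = new StringBuilder().Append("a").Append("b");

            // Immutable fluent style: every call returns a brand-new string.
            var s = "a".Insert(1, "b").ToUpperInvariant();

            // Nothing in the chain's syntax tells the reader which of the two is happening.
            Console.WriteLine($"{sb} {s}");   // ab AB
        }
    }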
What languages need is to bring back the "With" syntax that JavaScript and VB used to have - and better annotations or flow analysis, so that the compiler/editor/IDE can warn you if you're introducing unwanted mutations or unintentionally discarding a new immutable return value.
It does that, but it also makes your code read more like natural language. Perhaps I was careless in my wording, as I meant to point to manual, explicit configuration rather than fluent syntax per se.
As to your bullet points: I can see where you're coming from. I still think it's better than the invisible side effects and invisible method calls you get with annotations.
> What languages need is to bring back the "With" syntax that JavaScript and VB used to have
As far as I know, With... End With is a weird cross between "using" in C# and object initialisers. How does that help prevent mutations? One of the code examples (0) even explicitly mentions:
With theCustomer
.Name = "Coho Vineyard"
.URL = "http://www.cohovineyard.com/"
.City = "Redmond"
End With
I honestly don't see the big difference with either:
var customer = new Customer {
Name = "Coho Vineyard",
URL = "http://www.cohovineyard.com/",
City = "Redmond"
};
or:
var customer = Customer
.Name("Coho Vineyard")
.URL("http://www.cohovineyard.com/")
.City("Redmond")
.Build();
"The most tedious work I do right now is adding "one more" scalar or complex data-member that has to travel from Point A in Method 1 to Point B in Method 2 - I wish I could ctrl+click in my IDE and say "magically write code that expresses the movement of this scalar piece of data from here to here" and that would save me so much time."
All of this is possible today and not even that hard (though it's harder than it looks; there are a lot of issues that description glosses over that you have to deal with, especially in conventionally-imperative languages). The main problem you face is that the resulting code base is so far up the abstraction ladder that you need above-average programmers to even touch it. (I am assuming that this is merely a particular example of a class of such improvements you would like made.) This is essentially the same reason why Haskell isn't ever going to break out of its niche. You can easily create things like this with it, but you're not going to be hiring off the street to get people to work with it.
Or, to put it another way, a non-trivial reason if not the dominant reason we don't see code written to this level of abstraction is the cognitive limitations of the humans writing it.
I know HN doesn't really like this point sometimes, to which I'd ask anyone complaining if they've mentored someone a year or two out of college and fairly average. You can't build a software engineering industry out of the assumption that John Carmack is your minimum skill level.
Rich Hickey and Clojure have a low-tech solution for you: use maps. This “wiring through all the layers” problem is basically self-imposed by the use of static typing for data objects. Instead you should mostly pass around maps, validate that the keys you care about are present, and pass along the rest.
Of course your Java peers aren’t going to be happy about this, so in some ways a new language is needed to establish a context for this norm. But the limitation isn’t physical, not even in Java.
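A loose, statically typed caricature of the idea (names invented; Clojure folks would of course just use real maps and spec): each layer validates only the keys it cares about and forwards the whole map, so adding "one more" field upstream doesn't force edits in between.

    using System;
    using System.Collections.Generic;

    class Demo
    {
        // Validate only the keys this layer cares about; forward the rest untouched.
        static IReadOnlyDictionary<string, object> HandleOrder(IReadOnlyDictionary<string, object> msg)
        {
            if (!msg.ContainsKey("customerId"))
                throw new ArgumentException("customerId is required");
            return msg;   // adding "one more" field upstream needs no edit here
        }

        static void Main()
        {
            var order = new Dictionary<string, object> { ["customerId"] = 42, ["note"] = "gift wrap" };
            Console.WriteLine(HandleOrder(order).Count);   // 2
        }
    }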
It's weird to me how you find the magic of spring sad while you find the magic of Lombok acceptable.
Lombok requires that you use a supported build system and IDE and while all the currently relevant ones are supported that is no guarantee. Needs plugins and agents that support your various tools' versions including the JVM itself. I've been in that hell before with AspectJ and the aspectJ compiler vs eclipse plugin (version incompatibilities that made it impossible to work efficiently until they fixed it all up).
Disclaimer: last company we used Lombok. Current company we are switching certain things to Kotlin instead. data classes FTW for example. I do miss magic builders. Builders are awesome. Building the builder is tedious ;)
Lombok magic doesn’t span across files. Look at the class, see the annotations, and as long as you have even a trivial understanding of what Lombok is, you can grok it. It’s basically like a syntax extension.
Spring on the other hand... autowired values everywhere, and at least for me (who doesn’t work with Spring day in and day out) it’s very difficult to understand where they come from.
Don't get me wrong, I've used Lombok and liked it from the working with it and what it saves you aspect.
We do use Spring and I've used it for a very long time now. Nothing about wiring is magic or hard to understand if you do it right. Unfortunately there are a lot of projects out there that use it in exactly the wrong way, if you ask me, and then I'd agree with you.
I used to be in a company where we used XML config and everything was wired explicitly. The XML part sucked but with SpringIDE (eclipse at the time) it was Ctrl-clickable to find what's what.
We use Java config with Spring at my current company and I can Ctrl-click my way through it all and find what's what. There's a small corner of 'package-scan'ed stuff that is evil but we are cleaning that up.
FWIW I think that whether someone wants to use mutable objects or swears by immutability should be their choice, especially for interoperability with legacy code. It can be much easier to 'just go with the flow and be careful' in a legacy code base vs trying to have a clear separation of where immutability has been introduced already and where we still don't use it. Not everything is green field (in fact most stuff isn't) and not every company gives you enough time to always Do The Right Thing (TM).
Copying objects is a well known need and there are countless libraries that try to help you with it. All with their own problems, notably runtime errors vs. compile time safety or pervasive use of reflection.
When applying events, for instance, in F# you could do:
match msg with
| IncreaseCounter cnt ->
{ model with Count = model.Count + cnt }
| DecreaseCounter cnt ->
{ model with Count = model.Count - cnt }
| ResetCounter ->
{ model with Count = 0 }
| ChangeRubric name ->
{ model with Rubric = name; Count = 0 }
The "with" says: copy the original record by value, but change these fields. For completeness' sake: F# also has implicit returns, so the bit between brackets is the function's return value.
Why do you think copy methods are "presumably wrong and misguided"?
For the rest, I agree that in 99% of the cases inheritance and mutability are not needed if you're using greenfield Kotlin libraries. But they are unfortunately often necessary in the Java world.
Mutable data classes are especially quite useful for reducing boilerplate when creating classes that implement the Fluent Builder pattern, which is unfortunately quite necessary if you don't have a copy method...
> Still to this day, I have absolutely no idea how people can write Java for line-of-business applications (its bread-and-butter!) and remain sane from having to manually manage data-flow, implementing Java Beans, and manually writing by-hand getter-and-setter methods...
Lombok, at the very least, eliminates “manually writing by-hand getter-and-setter methods” (https://projectlombok.org/).
Thank you. I stopped using Java on a regular basis around late-2009-ish, which was before Lombok became really popular (as far as I know) so it is encouraging to hear that writing Java in-practice isn’t as bad as I feared.
Still... I feel strongly that Java eventually needs to adopt object-properties and reified-generics for it to stay relevant - otherwise it offers fewer and fewer advantages over competing languages - at least for greenfield projects at first, and eventually it’ll start being _uncool_ and fail to attract newer and younger devs to keep the ecosystem alive. Then we’ll end up with the next COBOL (well, more like the next Pascal/Delphi...)
That seems unnecessarily harsh towards Pascal/Delphi. I’d stick with the COBOL metaphor.
I’d also add that while Java does suck in some fairly obvious ways, most languages suck and at least Java can actually run concurrent threads in parallel.
Lombok ends up having all the costs of using a better JVM language (you still need to integrate it into your coverage tools, code analyzers etc.) but with few of the benefits. I used to use Lombok but in the end it was easier and better to just use Scala.
That’s fair. When I was writing Java code I wanted desperately to evaluate Kotlin, for the obvious reasons you’d expect, but there was not an easy Lombok-to-Kotlin migration path.
I probably would not choose Java with Lombok for a greenfield project today, were it up to me. But if I was forced to use Java, I would use Lombok. I was forced to use Java and I did use Lombok, and it didn’t really suck that bad.
I don't think any reasonable decisionmaker would approve Lombok and not Kotlin or Scala. (But I'm aware that many large organisations end up making unreasonable decisions).
The gap between post-8 Java and Kotlin is pretty small yeah. Though you have to write a lot of async plumbing yourself, and not having delegation is a real pain.
> I don't think any reasonable decisionmaker would approve Lombok and not Kotlin or Scala. (But I'm aware that many large organisations end up making unreasonable decisions).
For whatever reason, it’s a lot easier for most organizations to sign off on using a specific library for an existing programming language, even one as transformative as Lombok, than to sign off on using a different programming language, even one as backwards-compatible as Kotlin. Often they are categorically different decisions in terms of management’s interest in micromanaging them: they might default-allow you to include libraries and default-disallow you to write code in a different language.
In this respect, Lombok is really handy for a very common form of unreasonable organization :)
But when you read a novel, it's full of excessive and redundant code.
You don't write code for the machine, you write it for your team. It's fine if it's nice and comfortable, a bit repeated and fluffy, rather than terse and to the point.
This is exactly my grudge with boilerplate. Code is being read much more often than written.
I don't care if you hand-coded all those buckets of accessors or your IDE generated them -- that's irrelevant: they're still overwhelmingly useless noise. Noise I need to read through, review in PR diffs, skim in "find symbol usage" output, class interface outlines, javadocs, etc. etc. -- all of that 10x as often as during writing. Somehow I'm expected to learn to ignore meaningless code, while producing it is fine?..
Remember the point made in the recent "green languages, brown languages" post here on HN? The insight for me there was the source of the "rewrite it from scratch" urge, which should be very familiar to engineers working in the field. It comes from incomprehensible code or weak code-reading skills. Either way, boilerplate does nothing but harm.
So no, while I agree on your point that code exists principally to be read by humans (and as a nice secondary bonus, executed by machines) -- I disagree that boilerplate is "fine" whatever its incarnation. It's not, because it directly damages the primary purpose of code: its readability.
> Still to this day, I have absolutely no idea how people can write Java for line-of-business applications (its bread-and-butter!) and remain sane from having to manually manage data-flow, implementing Java Beans, and manually writing by-hand getter-and-setter methods...
My long-standing view is that Java's strict and verbose OOP syntax and semantics are an interface for IDEs. People who are hand-coding are practically guaranteed to be baffled by the verbosity, but they forget that Java development was the driver for IDE evolution (afaik), such that now we have ‘extract method’ and similar magic that understands code structure and semantics.
More specifically, OOP works, or should work, as an interface for IDEs that allows one to (semi-)programmatically manipulate entities on a higher level, closer to the architecture or the problem domain.
Like you, I wondered if this manipulation can be harnessed and customized, preferably in a simpler way than giving in to the whole OOP/IDE/static-typing tangle and without writing AST-manipulating plugins for the IDE. In those musings I ended up with the feeling that Lisps might answer this very wish, with their macros and hopefully some kind of static transformations. Which are of course manipulating ASTs, but it seems to be done somewhat easier. Alas, predictably I've had no chance of doing any significant work in a Lisp, so far.
FYI, for C# you have ReSharper, which lets you add a parameter to a method and automatically propagate it up through several levels of callers based on references. Sometimes all you need to know is the right tool.
Nowhere near. There was a lot of OOP hype in the early 90s with Smalltalk and C++, so Java just went all-in (everything is in a class) on that trend of the times.
Could you give a concrete example of your problem in a gist or something? I'm curious if it's solvable in C# as is. Sounds like it may be the kind of thing I'm approaching with reflection and attributes right now.
If you're writing in Java, why not write in Kotlin? They're sufficiently compatible you can have Java and Kotlin files in the same directory structure, compiled with the same compiler.
Well, in the case of Java there definitely are ways to minimize the boilerplate. Some of the more common ones that i use:
- Lombok library ( https://projectlombok.org/ ) generates all of the getters and setters, toString, equals, hashCode and other methods that classes typically should have (JetBrains IDEs allow you to generate them with a few clicks as well)
- MapStruct library ( https://mapstruct.org/ ) allows mapping between two types, e.g. between a UserEntity and a UserDto, for objects that are mostly similar or largely the same yet should be kept separate for domain purposes
- Spring Boot framework ( https://spring.io/projects/spring-boot ) allows getting rid of some of the XML that's so prevalent in enterprise Java, even regular Spring, and allows more configuration to be done within the code itself (as well as offers a variety of pluggable packages, such as a Tomcat starter to launch the app inside of an embedded Tomcat instance)
- JetBrains IDEs ( https://www.jetbrains.com/ ) allow generating constructors, setters/getters, equals/hashCode, toString (well, those are covered by Lombok), and tests, as well as a variety of refactoring actions, such as extracting interfaces, extracting selected code into its own method and replacing duplicated bits, extracting variables, converting between lambdas and implementations of functional interfaces, and generating all of the methods that must be implemented for interfaces, etc.
- Codota plugin ( https://www.codota.com/ ) offers some autocomplete improvements, ordering suggestions by how often other people used the available options, though personally I noticed a not-insignificant performance hit when using it
As far as I know, there is a rich ecosystem for Java that lets you treat the codebase as a live collection of abstractions which can be interacted with in more or less automated ways, as opposed to just bunches of overly verbose code (which it can still be at the same time). Personally, I really like it, since I can generate JPA annotations for database objects after feeding some tools information about where the DB is and letting them do the rest, as well as generate web service code from WSDL (though I haven't used SOAP in a while, and no one uses WADL sadly, though OpenAPI will get there).
And then there are attempts like JHipster ( https://www.jhipster.tech/ ) which are more opinionated, but still interesting to look at. I think that model-driven development and generative tooling are a bit underrated, though I also believe that much of this could be done in other, less structured languages, like Python (though that may take more effort), and probably done better in more sophisticated languages, such as Rust.
Low Code/No Code solutions don't work because the people involved in implementing them are rarely engineers themselves. Most (good) engineers have learned, through training and/or experience, how to engineer things: edge cases, error handling, user experience, efficiency, maintainability, automated testing, and a plethora of other subtle and obvious aspects of system design. I know this quite well because I've worked with these so-called low-code and no-code platforms, and every one of them I have seen ended up having to be taken over by experienced engineers brought in to fix (or in some cases completely rebuild) a poorly designed system. These platforms typically suffer from the "last mile" problem as well, requiring someone to write actual code.
And there's been the business process engine craze in between. BPEL comes to mind which also has 'visual editors' for the business people to use.
It's too complex for them, and then you pay software engineers to use BPEL instead - which is just a worse language to actually program in than the underlying system.
Or any number of other 'process engines' which give you a worse language to describe your actual process in, where you then need to do stupidly convoluted things to accomplish simple ones. But hey, we didn't have to code!
I worked on a Pega project once. There was nothing in there that the business people would be able to touch, especially after the requirements exceeded the capabilities of Pega’s primitives. One of the local friendly FTEs (the dev work was contracted out) would’ve been happy to use C#/ASP.Net web forms like everything else in the org.
Some people see past tries at something as proof that something will never work. Others see past tries at someone having the right idea, but wrong implementation.
Imagine how many tried flying before we "invented flight", and how many said "oh how they won't learn from the past".
I think that's a fair point. The way I see it, going from requirements (even visual ones) to working system would require strong AI, as any sufficiently powerful visual environment would wind up being Turing complete.
Which means that no-code is either bounded to specific use cases or claiming something roughly on par with a breakthrough. The first is common enough, and it's where I imagine most low/no-code offerings fall when the hype is stripped away. The hype seems to promise something on par with the second, and I think that's where the dismissive attitude comes from.
Functional and declarative programming are mostly specifying requirements directly. You don't need an AI to do it; in fact that would be the wrong tool (AI is good for fuzzy problems and inference - not following a spec).
An extreme example of this are logic and verification systems like prolog and TLA+.
There is a sweet spot of low code I haven't seen explored yet, which is a declarative system that is not Turing complete. That would be an interesting avenue to explore.
Business requirements still have a measure of ambiguity and "do what I mean" to them. They are more formal than natural language, sure, but fall far short of the formalism of a declarative programming language. This is a big part of the business partnership underlying most Agile methodologies. If the formal spec could be handed off, then it would be and Waterfall would work better in enterprise settings. Instead, the team is constantly requiring feedback on the requirements.
So I guess I still see declarative languages as being part of the tech stack and something tantamount to AI being needed to handle all the "do what I mean" that accompanies business process documentation.
I think honestly the problem is a lack of tech literacy. I've seen spec sheets that are glorified database and json schemas in a spreadsheet, put together by BAs and translated by hand.
It could be done directly if every BA had enough programming knowledge to put together schemas and run CLI tools to verify them.
> had enough programming knowledge to put together schemas and run CLI tools to verify them.
That's quite a lot of programming knowledge. It makes some sense to decouple the business-oriented roles from the more technical ones - BAs trying their hand at coding is how you get mammoth Excel spreadsheets and other big balls of mud.
Not sure if Prolog or formal methods are good examples here, as they are pretty hard programming languages. Yes, they can be used to specify a system, but they also require human ingenuity, aka strong intelligence, to get right. Prolog may be easy for some people, but I spent an inordinate amount of time understanding how to use cut properly, and how to avoid infinite loops caused by ill-specified conditions in my mutually recursive definitions.
As for formal methods, oh, where shall I even begin? The amount of time it takes to turn something intuitive into correct predicate logic can be prohibitive for most professionals. HN used to feature Eric Hehner's Practical Theory of Programming. I actually read through his book. I could well spend hours specifying a search condition even though I could solve the search problem itself in a few minutes. And have you checked out the model-checking specification patterns (http://people.cs.ksu.edu/~dwyer/spec-patterns.ORIGINAL)? I honestly don't know how a mere mortal like me could spend my best days figuring out how to correctly specify something as simple as "an event will eventually occur between an event Q and an event P". Just for fun, the CTL specification is as follows:
I want to see formal methods used more in lieu of writing standards documents. If you go read a standards document, say for 5G wireless, you'll find it's largely formal specification awkwardly stated in English and ad hoc diagrams. It would be better to just write out something formal (with textual annotations as a fallback) and have a way to translate that to readable language.
They were right. The previous attempts did in fact use the wrong approach, and people have now successfully turned lead into gold. The only problem is that it’s too expensive to be worth doing.
I don't agree. If you could ask a prime Newton if he'd be satisfied converting lead into gold in a cost prohibitive manner I would bet any amount of money his answer would be a quick "no". The goal of alchemy was to convert lead into gold in a way that made the discoverer rich, it's just not proper to say the second part explicitly, but I believe most people understand it that way.
> The goal of alchemy was to convert lead into gold in a way that made the discoverer rich
That definitely needs a citation. The Wikipedia page mentions no such motivation, and describes Alchemy as a proto-scientific endeavor aimed at understanding the natural world.
However, the analogy is still accurate, because the right approach involved several steps which no one thought were conceivably part of the solution: "understand how forces work at macro scales", "understand electricity", "understand magnetism", "develop a mathematical framework for summing tiny localized effects over large and irregular shapes", "develop a mathematical framework for understanding how continuous distributions evolve based on simple rules", "learn to look accurately at extremely small things", "learn to distinguish between approximate and exact numerical relationships", "develop a mathematical framework for understanding the large-scale interaction of huge numbers of tiny components", and so on.
If you went back in time to an age where people were working hard on changing lead into gold and your mission was to help them succeed as soon as possible, your best bet would probably be something like teaching them the decimal place value system, or how to express algebraic problems as geometric ones. But if you also told people that this knowledge was the key to solving the two problems they were working on, "how to make very pure versions of a substance", and "how to understand what makes specific types of matter different" you would reasonably have been regarded as deluded.
> However, the analogy is still accurate, because the right approach involved several steps which no one thought were conceivably part of the solution
I don’t see how that follows. It’s just a truism that nobody figured out how to do it until someone finally did. The fact that the path wasn’t obvious at various points in the past seems irrelevant.
> But if you also told people that this knowledge was the key to solving the two problems they were working on, "how to make very pure versions of a substance", and "how to understand what makes specific types of matter different" you would reasonably have been regarded as deluded.
If they were listening to you at all, it’s not at all obvious why this part would sound deluded.
How is it any more exotic than any of the failed alchemies?
'It' (most of 17th, 18th and 19th and some early 20th century mathematics, chemistry and physics) is clearly a lot more abstract than the failed alchemies.
The point is that 'just keep trying' would not have been a good strategy.
Many alchemists tried to turn copper to gold as well. They might as well think that their predecessors are just unlucky by using the wrong implementation.
No Code is mostly a code word for outsourcing. You use their app to get it started, realize it won’t meet your requirements, and then pay them to work on it forever. Unless it’s just a marketing Website.
No Code, I feel, could also be viewed as a compromise for selling dev tools to a mass market. You can sell a "cheap, complete*" solution in exchange for the overhead of also dealing with customer issues / inevitably helping train a dedicated person to maintain their app. Then you have customers who need dev tool support, as intended.
I think this is only partially true. There are aspects of coding which can be abstracted away, either because they're essentially boilerplate or because a simpler description of the solution is sufficient. Ideally if a more complex description is required, one can drill down into the simplified low-code description and add sufficient complexity to solve the problem.
I mean, couldn't many of the existing frameworks be described as low-code wrappers around more complex work flows and concepts?
> many of the existing frameworks be described as low-code wrappers around more complex work flows and concepts
Using frameworks, you are still using the language itself to drive the framework. For example, if someone describes themselves as a React programmer, nobody would assume they didn't know JavaScript.
So to efficiently use one framework, you should master both the language + framework. In other words, the complexity not only remains, but also accumulates.
But this is contradictory to low/no code's selling point, as they are targeting non-programmers.
This only goes so far, though, with frameworks. In my experience, the vast majority of people that make claims about a particular framework do not understand the abstractions they are building upon. In fact it is sufficiently reliable that I have found this to be an excellent hiring signal, to probe how well a person understands the abstractions in frameworks they use, but also to probe how they diagnose and fix holes in their own knowledge.
Everybody promises their no-code solution is going to adopt to the way your enterprise already works, but the truth is you kind of have to go the other way around if you don't want misery.
I work at Stacker (YC S20) [0] and the approach we're taking to deliver on this promise is to start with the data.
That is, we let you take spreadsheets you already use and build more powerful collaborative tools on top of them, without code.
If you take the premise that a tool has a 'data' part and an 'app' part, and that the data models the process while the app mostly controls how the data is accessed and presented, the UX, etc., you might see why I'm so excited about this approach -- if you take the data from spreadsheets that are already being used to model a process in action, by definition you don't have to change your process at all.
About 30 years ago one of my managers used to say "get the data model right and the application writes itself" and I have found that to be mostly true. What I have also often found is that people who create spreadsheets in business don't understand data modeling and even if the spreadsheet solves some business problem it's often very brittle and hard to change and adapt or generalize.
The spreadsheet structure point is an interesting challenge - I think a spreadsheet often ends up as the de facto model of a process, but often with, as you say, some redundancy, excessive flattening, and other structural issues that can make it more difficult to build an app around.
The nice thing, though, is that shifting this structure around does not mean changing the process being modelled - it's more just a necessary part of making a more powerful tool to support it.
It's as you say: since the process is known, it's usually very clear exactly what the app should be, which under our model can inform how to shift the structure of the spreadsheet accordingly in a pretty practical way. It's cool to see the same thing work in both directions!
From my experience working with some business-side users of spreadsheets: yes, spreadsheets usually end up as the de facto model of a process, but not necessarily an efficient model or an easily replicable one.
In banks I know of long-lived spreadsheets that have been patched so much that it takes a real human weeks to months of work to disentangle the mess of macros and recalculations into a streamlined script/process. Sometimes the resulting model diverges in quirky ways that are manually patched; I've seen divergences due to datetime issues (time zones, summer time, leap days, incorrect assumptions about dates, etc.) that were noticed only through side-effects of side-effects, and the manual patching didn't help at all in debugging the root cause.
I think that spreadsheets are incredibly powerful, but the main reason for that power is that they are quite free-flowing, which invites human creativity to solve problems with the limited set of tools and knowledge some users have - and some of those users are in quite high technical positions, using spreadsheets daily for years.
I believe you might have a killer product but I had so many headaches with spreadsheets that I wouldn't like to be working in that space.
My first project out of college was working on an internal metrics tool for a company. Their prior one was basically Excel; a guy who was due to retire had, back in the 90s, written an entire DSL with VBA, that could load files, run various functions on them, and output graphs.
Thing is, no one except him knew the DSL; everyone in the company relied on it, but they relied on him to write the DSL to compile their inputs into the output they wanted.
The rewrite included an actual database, proper data ingestion, and a nice clean frontend. The methods of aggregating data were reduced and standardized, as were the types of simulations and metrics that could be reported on; the flexibility was drastically reduced. However, the practical usage for everyone was drastically increased, because it moved from "we can't do anything different unless we get (the expensive, due-to-retire person's) time" to "we can do it ourselves".
I'm very jaded toward no/low code, in general, and that experience is partly the reason why. There isn't a sweet spot, that I've seen, that allows for non-technical people to have the control they want. And that was true even with spreadsheets.
The less nice thing, though, is that the model of the process you're starting from -- the actual spreadsheet -- has, as you say, these structural problems. And since (some speculation here, but I very much suspect that) many different processes, after having been so mangled, will end up in the same redundant, excessively-flattened structure, you can't determine from the spreadsheet alone which of these different processes it is supposed to encapsulate.
So before you can start "shifting this structure around" you'll still have to go through a standard business analysis process to find out what you are going to shift it into. And if you're already doing that... Well, then most of your promise of automation is out the window already, so what's the use of having the actual implementation done in some weird newfangled "no-code" or "low-code" tool?
Some strange comments in here about Low Code, as if they weren’t already successful. There are easily hundreds of apps successfully making use of Low Code to solve problems for people. Some Marketing Automation tools have had them for 10+ Years. Integration tools are also often Low Code.
Microsoft's Power Platform is a low code framework which works well, and generates a massive amount of revenue, as does Salesforce and some others. I recently designed and implemented a complex 500 user child protection application with PP that has been live for a year now. It was highly successful, and the time and cost taken to deliver it was far less than the cost of a hand written solution. That said, there is still quite a lot of custom code required for most enterprise level solutions even with the most mature low code platforms. Low code is not a panacea, and the same issues of how to represent requirements and design arise in the low code world as in the high code world. Low code platforms will continue to mature and improve. Maybe AI will catch them one day, but I'd be surprised if that happens anytime soon.
In 7 months, over 8,000 apps were created with Budibase. Many of these are either data based (admin panels on top of databases), or process based (approval apps).
Budibase is built for internal tools, so the apps sit behind a login / portal.
Microsoft Access and Claris FileMaker have nearly 3 decades of success at low-code.
I genuinely think they get a bad rap because you never hear about the really successful apps, but the problematic ones need a real programmer to sort them out.
Nonsense, No/lo-code is the polar opposite of UML, No-code is the ultimate Agile, it’s exciting and fun to build with, because it’s alive, because it executes immediately, resulting in a powerful iterative feedback loop. UML is static and tedious to build, because any gratification is delayed so far into the future.
I know because I’ve done both, in fact I’ve invested all of my wealth (many millions) into developing a No-code ERP/CRM platform and it’s incredible - will launch this year.
Comparing UML and No-code is apples to oranges. UML is about generic abstraction without actual implementation. No-code is about domain-specific implementation done using simplified high-level visual constructs instead of general-purpose programming languages. In other words, No-code is programming (just not text-based), while UML is modelling.
Low-code/no-code is simply the CASE tools of the late 80s, or the UML->production system fully automated pipeline of the late 90s/early 2000s, given new life and a shiny coat of paint. The same problems apply and the same people keep buying the same damn snake oil.
Once you can specify your procedures, requirements, and constraints in a way that is specific enough for a computer to read and act meaningfully on, the elements of your specification method become isomorphic to constructs in some programming language. So you've replaced typed-in keywords with clickable symbols or buttons or controls -- but you've in no wise reduced the headwork of programming or made the programming go away.
With the recent Lambda announcement, Excel is going to have a cool escape hatch for doing more complicated stuff right in the sheet without needing to dive into VBA. I honestly thought I would never get excited about a feature being added to Excel, but I was wrong. This looks friggin’ awesome!
I strongly believe that every tool, every invention, can only innovate/be innovative in one area.
As such what we need is to have the right building blocks and abstraction layers which at the end will mean that something like no or low code will work. It will only work when all the tools that underpin it have been crystallised over the years.
This happens on every layer and is fundamentally why we see so much work repeated, but slightly different. Every language makes different trade offs, therefore all the libraries implement the same functionality but just a bit different this time.
Every once in a while something like UML (too complicated, e.g. due to the use of EJBs), Business Works (too slow), etc. comes along that has promise and offers value at the time, but just misses the boat to survive until the next generation of revised underlying tools.
I think a certain set of problems requires the thinking of someone who knows how complex systems work and where the pitfalls are.
You can even see this with stepped covid restrictions that rely on infection numbers crossing a certain threshold. Most engineers would immediately see the real-world consequences that arise from the lack of hysteresis.
Similar things happen with input sanitization, deciding on formats etc.
Some stuff just takes experience. The actual writing of the code is not the problem, the knowledge of what to do and what to avoid is.
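As a toy sketch of that hysteresis point (the thresholds are made up): a single trigger level flips the rule on and off every time the number wobbles around it, whereas separate tighten/relax levels don't.

    using System;

    class Restrictions
    {
        // Naive rule: one threshold, so incidence hovering around 100 toggles constantly.
        static bool Naive(double incidence) => incidence > 100;

        // With hysteresis: tighten above 100, but only relax again once clearly below 80.
        static bool WithHysteresis(double incidence, bool currentlyOn) =>
            currentlyOn ? incidence > 80 : incidence > 100;

        static void Main()
        {
            var on = false;
            foreach (var incidence in new[] { 98.0, 101, 99, 102, 97 })
            {
                on = WithHysteresis(incidence, on);
                Console.WriteLine($"{incidence}: naive={Naive(incidence)} hysteresis={on}");
            }
        }
    }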
They're trying to commodify writing code. It would bring down the cost of programmers significantly. They won't succeed, I think. Instead AI will probably beat them to it.
What sometimes gets forgotten is that even a programming language _is_ a model! And it is a model that fits exactly all the details that are necessary to construct an actual program with all the little special cases.
Modeling all constraints and runtime behaviors in UML is cumbersome and hard to understand. UML can be useful for showing larger building blocks or complex flows (e.g. with sequence diagrams), but it is a bad fit for modeling a complete program.
> The irony is that the UML itself becomes more complex than code...
I forget the law's name, but it states that "a machine of a certain complexity cannot build a machine more complex than itself". So a car factory is more complex than the car itself. Similarly, a code generator cannot produce something more complex than itself.
This is why you need to merge things of certain complexities to build something more complex.
> I forget the law's name, but it states that "a machine of a certain complexity cannot build a machine more complex than itself"
Isn't this disproved by the evolution of reproductive organisms? That's not necessarily more complex of course, but it's pretty obvious that over a large enough time scale successive machines can become more complex.
Could the problem simply be that UML was poorly designed for the job?
Just because certain information has an essential complexity doesn’t mean that different representations are equivalently complex. There’s an essential complexity to the layout of the London Underground, but it would be considerably worse if you had to represent it with plain text files instead of maps or diagrams.
I agree that UML was wrong-headed and simply ignored this complexity instead of addressing it seriously, but I wouldn’t be surprised if it turned out that some diagram-based programming language would be useful for something.
Simulink, LabVIEW, and GNU Radio are all fairly well adopted, though they all have their problems. They are all notably not OOP, at least not idiomatically. In fact, as most blocks are stateless other than their configuration, you might say the paradigm is closest to functional. UML, with the exception of flow charts, was very OOP-centric. In a world where I still come across people who use OO as a synonym for "good" I can see why they created it that way, but a tool that shapes your design space can be limiting.
The lack of detail in the models means autogenerators have to have lots of configuration for each item in the diagram. People would brag that 95% of their code was autogen, and I would realize they had spent dozens of hours figuring out ways to use checkboxes and menu selections to generate the code they wanted. Instead of typing it. And all the hideous autogen code was a nightmare to step through in a debugger. Large LabVIEW projects aren't any better really, but they are popular.
> UML, with the exception of flow charts, was very OOP-centric. In a world where I still come across people who use OO as a synonym for "good" I can see why they created it that way, but a tool that shapes your design space can be limiting.
To be fair, that can’t be the only reason UML sucks, or even the main reason, because the same is true of Java, and while Java sucks, it’s a much better programming language than UML.
I think a lot of it really does come down to a denial of essential complexity. If you designed a visual programming language in such a way as to actually accept and handle essential complexity you’d be on a better track.
> The lack of detail in the models means autogenerators have to have lots of configuration for each item in the diagram. People would brag that 95% of their code was autogen, and I would realize they had spent dozens of hours figuring out ways to use checkboxes and menu selections to generate the code they wanted. Instead of typing it. And all the hideous autogen code was a nightmare to step through in a debugger. Large LabVIEW projects aren't any better really, but they are popular.
I haven’t worked with these systems you’re discussing, but it sounds like people would be better off handwriting more of the code and only using the visual tools for the part that they’re actually better for. In much the same way that a Wikipedia article about elephants includes photographs of elephants instead of merely relying on long-winded textual descriptions of their appearance, while still having lots of text for the sorts of things text is good for representing.
I think maybe GUI interface builders or HyperCard might be another example of this hybrid approach. I think some incarnations of Smalltalk included similar ideas.
The hybrid approach failed miserably for us, but that could be the tool, it forced a lot of things through the GUI with no option to bypass. I noted elsewhere that diagrams are natural for civil engineering and notation is natural for logic and other branches of math. That is because of the "essential complexity". I think it is possible there will be a great general purpose programming diagram tool some day, but I've never seen one that wasn't either very domain specific, or downright terrible.
I've never gotten that complicated with it - before that point I just switch to C++ using UHD or Soapy. I only use GNU Radio to poke around and maybe prototype a little.
> Just because certain information has an essential complexity doesn’t mean that different representations are equivalently complex. There’s an essential complexity to the layout of the London Underground, but it would be considerably worse if you had to represent it with plain text files instead of maps or diagrams.
In general, yes. I was only commenting on the assumption that UML can take the place of a general-purpose programming language, at least in the domain of business automation, or the idea that sufficient modeling can replace coding given the current technology.
> UML was going to be the "blueprints" of code, and software architects would develop UML diagrams similar to how building architects create blueprints for houses. But as it turned out, that was a false premise.
True. The blueprint is the code. The brick and mortar construction is done by compilers.
Perhaps the better choice was to have automated tools to turn source code into understandable business diagrams to allow business analysts to partner with software engineers, instead of the other way around.
I don't know of any that can make it understandable, however. I think that would be a very difficult task, even for quite small, well-designed programs.
Code has a lot of relationships. In 2D it looks like a mess.
There's value in visual representations of complex things. See for example Edward Tufte. People probably just think of him as visualizing data, but his book Visual Explanations goes into many other areas.
My preference these days is to use a text-based representation to generate diagrams with tools like Structurizr DSL, WebSequenceDiagrams, PlantUML and Graphviz. The source for the diagram(s) can be kept in the repo with the executable code and versioned.
Maybe in another decade we'll get some tools that can take the executable source code, along with all the deployment descriptors, the Kubernetes charts, Terraform configuration files, shell scripts, and so on and generate meaningful visualizations.
I used one of those tools at one point, Rational Rose.
It was possible to get it to generate code, as I recall it basically gave you stubs of classes and methods and you would then go and code the implementation of each method.
It seemed like it could save you some amount of effort at basic boilerplate stuff but at the cost of putting the same or more effort into the UML.
UML was the swan song of the classic waterfall SDLC gang. Agile and TDD came along and nobody looked back.
> We are tired of being told we're socially awkward idiots who need to be manipulated to work... because none of the 10 managers on the project can do... Programming, Motherfucker.
Developers are better at self-organizing than people think. That is the real driving force in modern software. You can eliminate all the ceremonies of scrum and still have a functioning team. You can remove the scrum master and in some cases even the product manager and still have a functioning team.
Despite popular belief: developers CAN understand the product. In some cases developers can understand the product better than a product manager can. Also, developers can have a good approximation to what the customer wants, even without talking to the customer, by just looking at analytics data, log aggregations and bug reports.
The modern incarnation of agile gives too much power to product people and disempowers engineers, turning companies into tech debt mills. The original incarnation of agile empowered engineers and allowed them to collectively negotiate with the product manager.
Agreed. Original agile connected developers and customers through valuing software features. Today's agile is whip-cracking, with the value of features obscured from developers who waste customer time doing unjustifiable work. It's more of a social game than one that benefits customers, so it cannot last forever.
Thanks. I think you've done a better job of describing the situation than I did.
The original agile manifesto emphasized collaboration with the customer. Scrum defined roles such as product manager and scrum master, and then the industry injected even more roles in between the customer and the developer...
Fast forward to 2021, we have "agile" developers that have never met a customer. So much for customer collaboration.
> Also, developers can have a good approximation to what the customer wants, even without talking to the customer,
Yep, sure, maybe we don't even need clients. Just developers creating products for themselves. Or just coding for the sake of coding. I've seen that too many times. That's why we need product people.
The largest software companies in existence were born during a time when the role of product manager did not even exist.
Just like the largest corporations in existence became profitable and expansive before they hired MBAs.
Product managers and MBAs are the best examples of the Texas Sharpshooter fallacy. You shoot into a wall and then paint a target around the bullet holes. Being successful in those roles is about painting targets around any successful initiative and claiming it was your idea.
PowerPoint presentations are not reality; picking up the customer service phone, auditing the code, talking to internal and external users and keeping in touch with reality is.
I dunno, I disagree - the fact is a lot of people just dive into coding and don't spend much time on design.
There’s a ton of value in the idea of diagramming code and then generating sources. UML is a starting point but the journey is far from over.
The more appropriate idea is that you create documentation in the form of diagrams for free. Just like in TDD you get unit-tests for free.
Folks always talk about self-documenting code—and that’s great. But what about conveying a complex system to a new team of engineers? Diagrams are frankly priceless if done well.
Also, looking at something like Kubernetes where a declarative YAML file generates magic underneath is somewhat similar. A step beyond what we have would be nice diagramming capabilities over the YAML to auto generate the plumbing underneath.
Personally, I think future advances in development _will_ be done as higher level ideas—pictures worth a thousand lines of code—AI will do the rest.
> The more appropriate idea is that you create documentation in the form of diagrams for free.
The problem is the diagrams are hard to create and hard to update and usually don't remain synchronized to the code. If there was a good way to create documents from the code (perhaps with some annotations required), it could just be a Make target and boom, free(ish) documentation.
I’ve recently gotten reacquainted with Doxygen, and it allows you to embed PlantUML source right in your source code or Markdown files. Easy to write, easy to update, and stored as plaintext right in your source tree. I don’t love Doxygen, but it’s doing a great job at what I’m trying to do (document a complex C++ project)
I use PlantUML in org-mode as well! That's actually where I started, but found it much easier to recruit other people to write their own documentation (with sequence and state diagrams) by not requiring them to use Emacs :)
I feel like my ideal workflow would be a middle ground between doing the design up front and just jumping into coding. Before you start coding, I feel like you don't have much of an idea of what problems you will run into, resulting in diagrams based on the wrong assumptions. But with code it's easy to lose track of the high-level structure of what you are writing. Writing code, then diagramming the high-level structure, and then going back to fix the code seems like a good way to go.
Absolutely! That is similar to artists doing thumbnail sketches to figure out the composition; then once things are reasonably worked out, the chosen composition can be worked onto the final canvas; then the details follow.
That is a nice benefit of good development frameworks: how easy is it to explore new ideas? And frankly that’s why there’s an uptick in higher level languages.
Granted, but you can do the same and much more with different methods, and avoid fighting the frustrating, unreliable and time-consuming UML tools altogether.
I used RR at Uni in the early 2000s. It felt very clunky even then. It was also a pig to use - somewhere along the line it became known as Crashional Rose.
Yes. We had a joke that either RR wasn't properly software-engineered the way RR proponents demand, so they weren't even dogfooding, or it was, and the process clearly doesn't work even for its proponents. Because obviously, RR was crap.
I had the same experience in 1999. I never understood how that software could be so bad. A student using something in a student manner shouldn't be able to crash the thing in multiple different ways. It would be like if I opened Blender as a newbie, hit a button somewhere, and clicked in the viewport and it hard-crashed. I'm sure it's possible to crash Blender, but for a newbie it ought to be harder than that!
And then for this to be held up as the example of how software engineering was going to be in the future is just icing on the cake.
I have many and sundry quibbles with various things I was taught in university ~20 years ago, but for most of them I at least understand where they were coming from, and some of them have simply been superseded, of course. But that software engineering course with Rational Rose has the distinction that, in hindsight, I don't think I agree with a single thing they taught.
The problem with tools that generate code is that they are often unidirectional. If there is no way to get code changes to propagate back to the visual model, the latter is likely to fall into disrepair pretty quickly.
It could be possible to do something interesting in this space, where UML can be used to generate template code, and later on, another tool could extract UML from the code, compare it to the baseline, and flag any discrepancies. From there, you can either sign off on the discrepancies (and replace your hand-made UML with the extracted one) or fix your code. Bit of a kludge, but at least automatic verification is possible, unlike with documentation.
If only it were that simple, but code is so expressive that you can't really create the UML from it as easily as the other way around. You can just do too much stuff in code that the UML generator would simply not understand at all. Or you'd have to basically code in a very specific manner. Not fun. Of course, since I last tried it they have probably gotten better at it.
I even remember back in university you'd have to write your custom code _in_ the UML tool's dialogs if you didn't want it to be overwritten the next time you tried to generate code. Of course, these were just simple text boxes. A horrible dev experience.
The trick is propagating feedback from tests "backwards" to the model - not code changes - preserving the normal loop of receiving requirements, writing code (in this case the "model"), compiling it to something else (including generated code that fools would be tempted to modify) and running the program.
Honeywell "automatically generated" all of the flight code for the James Webb Space Telescope using Rational Rose in the early 2000s. They were still trying to fix the code when I was at NASA Goddard in the mid-2010s.
The difference I was trying to highlight is that UML (at least in my experience) was still very much focused on "big design up front" and production of design artifacts (vast numbers of diagrams) that agile and TDD approaches explicitly rejected.
I don't remember rapid iteration being a part of any UML-based methodology that I ever used. By the time the diagrams were complete enough to capture implementation details, they were too unwieldy. Did any UML tools support common refactorings, or would you have to manually change potentially dozens of affected diagrams?
Maybe I'm cynical, but it seems like these people are in a farcical cycle of repeatedly inventing some new master theory of programming only to find it's actually a disaster a few years later and then to switch to something apparently diametrically opposed.
Of course, they have a book on sale to explain the new idea...
How much time was wasted and how many projects were damaged by the bad idea of UML and design up-front that they were pushing as hard as they could less than two decades ago? How many developers are being stressed by endless manic sprinting and micro-managing processes under the name of Agile?
Maybe they should stop? Or apply some actual science? Some of this in-group call themselves scientists but all they do is pontificate. I'm not really sure many of them spend much time actually programming.
My impression as an undergrad learning UML was that it gave an architectural level view - so the effort would go in at a different level of thinking, not just in a different part of the process?
Yeah it turned into that before kinda dying. Now in the "real world" those tools are only used for diagrams that explain what's called a "slice" of the architecture, but no one really gives a whole architectural view of a system in UML. Not even for a simple component.
But the glimpse you got of it as an undergrad was UML trying to give its last kick before dying. The whole quest for a formal definition and a standard for it doesn't make sense if you only want to use it to give an architectural level view.
Years ago I had a contract with IBM so I got Rose for free. It had really neat demos but once you started using it, it was basically useless or worse. You got a few stub classes and then spent the rest of the time keeping the models in sync with reality.
I think only ClearCase had a bigger negative impact on productivity than Rose.
The thing is, I found it quite paradoxical (to stay polite) that you'd spend time drafting something that is not precise, not data to help you, but just a document in a tool.
The model-driven thing was nice, but it was never good enough to actually help with code. It was also deeply rooted in the crippled Java days, so it was full of verbose diagrams representing overly verbose classes.
To hyperbolize a bit, I'd rather spend time writing property-based tests and a few types in Haskell, in a way.
There can be a stage between "I have kind of an idea of what this is supposed to be" and "I'm ready to code this", where you think carefully about what this thing is actually supposed to be, and how it's supposed to behave and interact. It's not amiss to think for a bit before creating the code.
I'd rather spend some time making sure I'm building the right thing, rather than testing that what I built correctly does the wrong thing.
On the other hand, if you want to argue that UML is not the optimal way to do that, you could make a case. It makes you think through some questions, but those may not be the only questions, and there may be other ways of thinking through those areas than drawing diagrams.
And if you want to iterate your designs, UML is a painful way to do so. You'd want to design in some other medium that is easier to change. (Maybe something text based?) But if you're thinking through all the design issues in another medium, and iterating the design in that other medium, then why produce the UML at the end? To communicate the design to other people - that's the point of UML. But if you can communicate the design better using something else (like maybe the medium you actually design in), then why produce the UML?
That assumes that before you have a thing in your hand (a working program with expected input and output), you can exactly describe how that thing should act, what it should look like, what the input and output should be (and not be), and have that be successful - and structured correctly internally - the first time.
In my 25ish years of experience writing code? That has happened for a non trivial task exactly zero times.
If the idea is that you could refactor the UML (and hence the generated code) to adjust, then, since none of the tools are able to generate functional code (stubs and simple templates, yes, but not much more than that), the tool would need to refactor a bunch of human-modified, generated code without breaking it. Which I think is well beyond even our current capabilities.
It's weird to read this, because building architects and designers do exactly that: they have to make tremendous efforts to design complex systems (think an airport or a hospital) before they lay down a single brick. Somehow this idealization and planning step is impossible for software developers.
Those engineers have the good fortune to be working in a fairly constrained space. New materials and building techniques become viable slowly over time.
Software developers are able to build abstractions out of thin air and put them out into the world incredibly quickly. The value proposition of some of these abstractions is big enough that it enables _other_ value propositions. The result of that is that our "materials" are often new, poorly documented, and poorly understood. Certainly my experience writing software is that I am asked to interact with large abstractions that are only a few years old.
Conversely, when I sit in a meeting with a bunch of very senior mechanical engineers, every one of them has memorized all of the relevant properties of every building material they might want to use for some project: steel, concrete, etc. Because it's all so static, knowing them is table stakes.
I'd say this difference in changing "materials" is a big source of this discrepancy.
Also, the construction industry is a huge mess, and anyone telling you that things don’t go over budget, get torn out because someone messed up a unit conversion somewhere, burn down during construction because someone didn’t follow some basic rules, or turn out to be nearly uninhabitable once complete because of something that should have been obvious at the beginning - is just ignorant. These happen on a not infrequent basis.
The big difference is no one really tries new things that often in construction, because for the most part people have enough difficulty just making the normal run of the mill stuff work - and people who have the energy to try often end up in jail or bankrupt.
In Software, we’re so young we end up doing mostly new things all the time. Our problems are simple enough and bend to logic enough too, that we usually get away with it.
If you’ve ever poured a footing for a building, then had the slump test fail on the concrete afterwards you’ll sorely be wishing for a mere refactoring of a JavaScript spaghetti codebase under a deadline.
Buildings neither are Turing complete, nor do the building blocks become obsolete every few years.
The closest analogue to software development is legislation.
Even the best-written rules can have unintended consequences, and so we have tools to make the behavior ever more precise and less error-prone. But it's never foolproof.
Also and like legislation, it’s the edge cases that balloon a proof of concept into monstrous sizes.
In some respects, for software to advance some components need to be less powerful. But we have this fetish for inventing yet another Turing complete language in the pro space, just because, and bolting on a million features.
Hah! The funny part is, you think they don’t mess this up all the time, but they do! We all have experiences with buildings that are impossible to navigate, have weird maintenance issues (toilets always backing up, A/C a nightmare, rooms too small, rooms too big, not enough useful space, etc). Buildings get redrawn constantly during construction, and they rarely match the plans. Cost overruns are endemic, as are scheduling issues.
They’re also using literally thousands of years of deeply ingrained cultural rules and expectations focusing on making living in and building structures effective (it’s one of the core tenets of civilization after all), supported by an army of inspectors, design specialists, contractors (themselves leveraging thousands of years of passed-down and deeply baked-in expertise in everything from bricklaying, to concrete work, to framing).
All that for what, functionally, is a box we put things in, including ourselves, that we prefer provides some basic services to a decent standard and isn’t too ugly.
I remember watching a documentary on architecture, and the speaker, who was offering a different approach, said that for much of architecture, the never-look-back mantra was the unspoken rule of the day.
You'd design and build a building, and that was it. If the roof leaked (common on building-like pieces of art), you didn't want to know about it. If the interior was changed to actually work for the building's occupants, you didn't want to know -- that'd mean that your beautiful design had been marred.
All this suggests to me that some of these designs are done without deeply considering the needs of the people affected, and realizing that those needs change, and worse, without learning from the mistakes and successes of the past.
[Note that I am not arguing about the merits of how software is, was, or should be designed.]
At the beginning of my career I worked in AEC on the planning side. It was well understood that whatever the Architects had designed would be entirely redone by engineers afterwards and then by on-site engineers and then by tradespeople after that in the implementation. No one really understands what's going on in a reasonably-sized building.
Addressing the real needs of people is hard, and gets in the way of being famous and changing the world - a mindset I’ve seen more than a few times in designers. All of them pretty senior? So I guess it was working for them?
A ways back, the president at a multi-discipline engineering consulting firm I worked in made an interesting point. If you give ten EEs a hardware task, they will come back with something that looks similar. If you give ten software engineers a software task, they will come back with ten completely different things. I think this is because in software there are so many possible ways to do something, and so much richness, that writing software is very different from making hardware or architecting a building, to go along with the parent comment.
It's not that software engineers are not capable of doing the same when required (e.g. in the firmware for NASA's Mars rovers, etc.), but that usually software engineers don't do that, because there is a better alternative.
If architects could build a house multiple times a day while slightly rearranging the layout every time they'd do that in a heartbeat.
There are quite a few responses, but I still want to point out a main difference more clearly:
There are natural-intelligence (human) agents translating the diagram to "code" (bricks).
There is a lot of problem fixing going on done by the construction crews, cursing at the architects (sometimes, or just going with the flow and what comes with the job).
That is the same with software:
If you give good developers diagrams, those human agents too will be able to produce useful software from them, no matter the flaws in the diagrams, as long as they understand the intent and are motivated to solve the problems.
Good point. If you remember the days when compiling your program meant taking a deck of punch cards to the data center, handing them off to an operator, and then waiting a few hours for the result, you spent a lot more time planning your code and running it through your mental line-level debugger than you do today.
The interesting question is how they organize these tremendous design efforts before laying the first brick. In software, there just is no construction phase after the design phase.
Nod, we don’t have to deal with the pesky issues of moving actual physical objects to be in specific contact in certain ways with other physical objects. Once we figure out a design that compiles (in commercial construction, that would be a plan that passes validation/sign off), we’re ‘done’ except for the pesky bug fixing, iteration, follow up, etc.
Writing code is very close to what 90% of the drafting work is for construction (aka some relatively junior person figuring out exactly how many bricks would fit in this space, and how thick it would need to be, to meet the requirements his senior person told him he had to meet - and trying a couple other options when that is obviously BS that doesn’t work the first few times, and then everyone refactoring things when it turns out the first idea causes too many issues and the architect’s grand plan for a 50 ft open span in an area is impossible with current materials).
I suspect it probably is possible, if you're willing to spend enough time. However, it's also true that the cost of a building's architect changing his mind in medias res is far higher than the software developer's. It is not necessarily the case that the best way to approach one discipline is also the best way to approach the other, just because we happen to have decided both should be called "engineering."
They do, however, make computer models and simulations to understand the problem. Programmers do that as well, by coding parts of the problem, running it to simulate usage to see how it works, and adjusting accordingly. No bricks need to be laid for software engineers to work either.
I start to think that this step is actually the code. An architect has to specify things because the drawing is not the building, while for programming the 'drawing' actually is already the program.
Buildings are naturally described by drawings; logic is naturally described by notation. We wouldn't ask a civil engineer to design a building using prose, and so we should not ask a computer engineer to describe logic using boxes and arrows.
On the contrary, not only is this planning step not impossible in current programming practice, it's universal, or very nearly so. Almost nobody programs by hex-editing machine code anymore. We just edit the design, often in a language like Golang or C++, and then tell the compiler to start "laying the bricks," which it finishes typically in a few seconds to a few minutes. If we don't like the result, we change the design and rebuild part or all of it according to the new design.
More modern systems like LuaJIT, SpiderMonkey, and HotSpot are even more radical, constantly tearing down and rebuilding parts of the machine code while the program is running. Programs built with them are more like living things than buildings, with osteoclasts constantly digesting bones while osteoblasts build them. In these systems we just send the plans—our source code, or a sparser form of it—to the end-user to be gardened and nurtured. Then, just as osteoblasts build denser bone where its strength is most needed, the JIT builds higher-performance code for the cases that automatic profiling shows are most performance-critical to that user.
— ⁂ —
Soon architects will be able to do their work in the same way.
Like Microsoft programmers in the 01990s, they'll do a "nightly build" of the current design with a swarm of IoT 3-D printers. Consider the 10,000 tonnes of structural steel that make up the Walt Disney Concert Hall in Los Angeles, which seats 2265 people. After the 16-year construction project, it was discovered that reflection from the concave surface was creating deadly hot spots on the sidewalk and nearby condos, requiring some expensive rework.
If each assembler can bolt a kilogram of steel onto the growing structure every 8 seconds, then 2000 assemblers can rebuild it from source in a bit over 11 hours. In the morning, like programmers, the architects can walk through the structure, swing wrecking balls at it to verify their structural integrity calculations, and see how the light falls, and, importantly, notice the sidewalk hotspots. Perhaps another 2000 printers using other materials can add acoustic panels and glazing, so the architects can see how the acoustics of the space work. Perhaps they can try out smaller changes while inside the space using a direct-manipulation interface, changing the thickness of a wall or the angle of an overhang, while being careful not to stand underneath.
In the afternoon, when the architects have gone home, the assemblers begin the work of garbage collection of the parts of the structure whose design has been changed, so the next nightly build reflects the latest updates. As night falls, they begin to rebuild. The build engineer sings softly to them by the moonlight, alert for signs of trouble that could stall the build.
— ⁂ —
Today that isn't practical—the nightly build machine for a single architectural firm would cost several billion dollars. But that machinery itself will come down in cost as we learn to bring the exuberant living abundance of software to other engineering disciplines.
To do ten "load builds" in the 16 years the Walt Disney Concert Hall took, you'd only need two assemblers, perhaps costing a couple million dollars at today's prices; they'd be able to complete each successive prototype building in 15 months.
Suppose prices come down and you can afford 32 assemblers, each placing a kilogram of steel every 8 seconds. Now you can do a "monthly build", which is roughly what I did when I joined a C++ project in 01996 as the build engineer. Or you can build 10:1 reduced scale models (big enough to fit 22 people, in this case) a thousand times as fast. Incremental recompilation on the C++ project allowed individual developers to test their incremental changes to the design, and similarly this kind of automation could allow individual architects to test their incremental changes to the building, though perhaps not all at the same time—the full-scale building would be like an "integration test server".
Suppose prices come down further and you can afford 512 such assemblers. Now you're not quite to the point of being able to do nightly builds, but you can do a couple of builds a week, and you can rebuild a fourth of the Walt Disney Concert Hall overnight.
Suppose prices come down further and you can afford 8192 assemblers. Now you can rebuild the building several times a day. You can totally remodel the concert hall between the morning concert and the afternoon concert.
Suppose prices come down further and you can afford 131072 assemblers. Now you can rebuild the concert hall in 10 minutes. There's no longer any need to leave it built; you can set it up in a park on a whim for a concert, or remodel it into a cruise ship.
Suppose prices come down further and you can afford 2097152 assemblers. Now totally rebuilding the concert hall takes about 30 seconds, and you can adapt it dynamically to the desires and practices of whoever is using it at the moment. This is where modern software development practice is: my browser spends 30 seconds recompiling Fecebutt's UI with SpiderMonkey every time I open the damn page. At this point the "assemblers" are the concert hall; they weigh 5 kg each and link their little hands together to form dynamic, ephemeral structures.
Suppose the assemblers singing kumbaya shrink further; now each weighs only 300 g, and they are capable of acrobatically catapulting one another into the shape of the Walt Disney Concert Hall, or any other ten-thousand-tonne steel structure you like, in a few seconds.
(Wouldn't this waste a lot of energy? Probably not, though it depends on the efficiency of the machinery; the energy cost of lifting ten thousand tonnes an average of ten meters off the ground is about a gigajoule, 270 kWh; at 4¢/kWh that's US$11. In theory you can recoup that energy when you bring the structure back down, but lots of existing technology loses a factor of 10 or 100 to friction. Even at a factor of 100, though, the energy cost is unlikely to be significant compared to construction costs today.)
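(To spell out that arithmetic as a rough back-of-the-envelope check, taking g ≈ 9.8 m/s²: lifting m = 10^7 kg by h = 10 m costs E = m·g·h ≈ 10^7 kg × 9.8 m/s² × 10 m ≈ 9.8 × 10^8 J, i.e. just under a gigajoule; dividing by 3.6 × 10^6 J/kWh gives ≈ 272 kWh, and 272 kWh × US$0.04/kWh ≈ US$11.)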
— ⁂ —
But tell me more about how programmers need to plan more to reduce the cost of construction mistakes?
It generates UML diagrams from a simple text markup language.
Much quicker to iterate on, easy to put into a repo and share or collaborate.
Still not something you would use to design your whole code structure, but great for brainstorming or drafting once you internalized the language a bit.
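To give a flavour of the markup, here is a minimal sketch of a PlantUML sequence diagram (the participants and messages are invented for illustration, not taken from any real system):

    @startuml
    actor User
    User -> "Order Service" : place order
    "Order Service" -> "Payment Service" : charge card
    "Payment Service" --> "Order Service" : payment confirmed
    "Order Service" --> User : order accepted
    @enduml

Feed that text to PlantUML (the jar, an editor plugin, or the online server) and it renders the corresponding diagram; change a line of text and the picture updates with it, which is what makes it so easy to keep in the repo next to the code.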
Completely agree with this sentiment: Don't include every detail in your UML, but use it instead to straighten out your high-level ideas. PlantUML is also my go-to for this.
I want to add an important affordance of PlantUML: accessibility.
Using visual diagrams is shutting out vision-impaired developers from ever participating in your process. Maybe you don't have any on the team now, but that could change.
PlantUML is screen-reader compatible, and it does a pretty good job of laying out the content of a diagram in a way that "reads right".
I don't think purely-visual diagrams are an appropriate part of modern development for this reason, not without a diligent effort to make an alt-text which conveys the same information. With PlantUML, you get the alt-text for free.
> > To hyperbolize a bit, I'd rather spend time writing property-based tests and a few types in Haskell, in a way.
> I'd rather spend some time making sure I'm building the right thing, rather than testing that what I built correctly does the wrong thing.
I don't believe the GP was saying to use tests instead of planning. They were saying to use the tests as planning.
They called out property-based testing, in which you describe the behavior of the system as a set of rules, such as `f(x) % 2 == 0`, and the test harness tests many inputs, trying to find the simplest example that fails that criterion.
They also called out defining types (in their chosen language, not a step removed in a UML diagram), which allows you to think about how the data is shaped before you write an implementation that forces a shape.
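As a minimal sketch of what that looks like in practice (using QuickCheck, one common Haskell property-testing library; the function `double` and its property are made up for illustration, the point is the shape of the workflow):

    import Test.QuickCheck

    -- a hypothetical function under test
    double :: Int -> Int
    double x = x * 2

    -- the rule from above: the result should always be even
    prop_doubleIsEven :: Int -> Bool
    prop_doubleIsEven x = even (double x)

    -- QuickCheck generates many random inputs and, when a property fails,
    -- shrinks the input to the simplest counterexample it can find.
    main :: IO ()
    main = quickCheck prop_doubleIsEven

Defining the types first (here just Int -> Int, but the same goes for richer record types) forces you to decide the shape of the data before an implementation locks it in, which is the other half of the point above.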
I agree completely with your first two paragraphs, but UML, in my opinion, failed to support that approach. Its primary failure is that it neither captured nor communicated the rationale behind the requirements, the answers to "why this?", "why this, instead of that?" and "is this right? is it sufficient?" Answering these sorts of question is central to the production of requirements and also to understanding them, but with UML these questions and their answers are treated like scaffolding, taken away from the result before its delivery.
One might argue that UML could support the capture of such information, but what matters is that this rarely, if ever, was done. It is not the sort of information suited to being presented diagrammatically, or at least not by the sort of diagrams that made it into UML.
One might also argue that no other requirements specification method centered on these features has made it into mainstream software development. Some people here, for example, have argued that the code is a statement of requirements, and code also lacks these features. It does not follow, however, that therefore UML should have succeeded.
Ultimately, UML was an added layer offering insufficient benefits to justify its costs. Its benefits were insufficient because it was predicated on the false assumption that requirements can be adequately captured by a sufficient number of simple declarative statements about how things must be, and that the process of specifying requirements is primarily a matter of making such statements.
Why would you ever not want to iterate your design? Doing is the fastest way of learning. The details can drive a design, so that if you don't remove all ambiguity, you will create an architecture that won't actually work. The problem people who just jump in face, is that they do not abandon their bad prototype and begin again, instead clinging to faulty architecture which leaves them in the same boat as someone who made an architecture unaware of the details.
Agreed ... the thing that bothers me about UML is that it has displaced better, smaller-bore tooling in a significant way. The idea of thinking-before-coding work is of course completely necessary.
Model driven isn't dead though, it has transformed. It's all about text models now. The only thing you really see is people using clicky-clicky tools to make databases.
I've always really disliked UML because it tries to strictly encode a whole lot of information into diagrams, which is way too rigid and opaque for me. My eyes just glaze over when I see UML.
I don't want to have to search "what does double arrow mean UML" in order to understand a proposal. I don't want an arrow to mean something that I couldn't learn somewhere else. I'd rather have a loose informal reference diagram alongside a couple paragraphs describing the system more formally. That way, the important information can be emphasized, the unnecessary information can be glossed over, and the diagram acts as a big-picture aide rather than some kind of formal semantic notation.
Yeah that's why I normally write documentation in English instead of Greek. That way, when people read it, they don't need to learn a new language.
Besides, that's only half of my criticism. Greek is at least a full language where you have the flexibility to phrase things however you want and inject detail wherever you need. UML is a very rigid language which makes it hard to emphasize certain elements over others. A text has a reading order and a logical progression; UML is spaghetti.
If you're gonna write your docs in a different language, at least pick a good one.
Those tools exist already. I used PlantUML[0] in my programming course. UML is nonsense, of course, but it was part of the curriculum, and it was more tolerable doing UML in Vim than in a graphical point-and-click editor.
Eh, UML is a tool. Pretending to document fully, or even to treat UML as ground truth, is a fool's errand, of course, but since our daily job involves taming complexity, any system of knowledge partitioning that one can assume everyone else understands is a godsend.
It's inevitable. Most digital FPGA and ASIC development is done with HDLs despite the availability of schematic entry systems. 2D representations of behavior do not scale, are hostile to collaboration, and suffer from vendor lockin.
PlantUML is an excellent tool for creating visual representations of system behaviors. Because diagrams are generated from plaintext, they’re easy to maintain and version control. I use it often when designing new features and systems. You don’t need to pay attention to UML semantics to create valuable diagrams.
When I was in uni recently, learning UML, I wondered why all the FOSS tools for UML sucked. I quickly worked out it's because no FOSS programmer actually uses or cares about UML.
Sure they would. As a widely recognizable set of boxes and arrow shapes, it's useful for the kind of doodling that you might want to show someone else later :).
UML is a language or notation. It isn't dead and I consider it still useful because a standardized notation means you don't have to explain your notation again and again. Or worse you forget to explain the notation and people are confused what that arrow or box actually means.
The promise "that with detailed enough diagrams, writing code would be trivial or even could be automatically generated" was made by "model-driven something". The idea behind that gets reinvented often. The latest one is called No-Code or Low-Code.
The problem isn't UML itself but the fact that the UML definition is separate from the code. I still see that there can be some use in visualizing parts of the code using UML.
A related one that people keep reinventing is “dataflow programming”, where programs would get more expressive if we didn’t call and return stuff but instead data just moved around those arrows on a graph. That’s like code generation but you actually execute the graph.
I never understood the appeal of UML. Hardware design had been done almost entirely with schematics, but then in the mid 90s HDLs (Hardware Description Languages) and logic synthesis offered increased productivity for hardware designers. Now, other than very high-level block diagrams, hardware design is almost completely textual. UML seemed like schematics for software, and a step backward.
That's the right perspective from which to look at it. It wasn't just increased productivity that brought us to HDLs; it was the sheer impossibility of understanding or keeping track of ever larger and more complex designs with schematics. With today's software systems we have exactly the same problem (but computer scientists apparently prefer to retry and reinvent things rather than study anything old). UML (and, with version 2, also SysML) finally had an equivalent textual representation, but it was much too late.
Completely agreed. As for why the code is the blueprint - good code design requires the ability to switch between high-level details (how do I structure these high-level components of the system) and super low-level details (e.g. the application of a particular algorithm to a certain problem, or the various hacks and workarounds one often has to write when dealing with certain third-party systems). The lower-level details simply cannot be extrapolated from a high-level structure.
To extend the construction analogy a bit - typical architectural drawings aren’t buildable as is. They often miss key details (member composition is rarely even mentioned, same with even member sizing!). Stamped civil engineering plans will often miss anything which is outside of the core structural elements being certified (so good luck figuring out the size of the beam you’re supposed to put somewhere if it isn’t a core load bearing element). Huge portions of construction are based off decades of (inconsistent) experience, in the field improvisation, cargo culting, and gut feel. The smaller/less big Corp the job, the more true this is.
I remember "Booch Blobs." The original system diagrammer was Grady Booch, and he used these little "clouds." I think Rational Ro$e used them. He ended up throwing in with Ivar Jacobsen, and they came up with UML (which Jacobsen started). More boring, but also more practical.
Jacobson had a proven method and toolset at that time, called Objectory, which was superior to Rose. Unfortunately they killed the tool. It seems to be an imperative of history that mediocrity prevails.
The funny thing (having lived through that time) was that it was very trendy to pretend that construction was not a giant disaster most of the time from a planning, delay, and cost-overrun perspective - when it pretty clearly was, even then, if we'd looked even a little!
Like many things, the big promise was a fad, but we learned some valuable things out of all of it, and some still survive.
Software Engineering is a licensed profession in several countries.
In Portugal I cannot sign a legally binding contract with "Eng. SoAndSo" without having been licensed to do so.
Naturally, plenty of people without such duties never do the final exam; however, the universities where they studied had to be certified by the engineering order anyway.
I don't think I've ever met a software engineer in Portugal who is in the "Ordem dos Engenheiros". It's far more common - because they are indeed legally bound - with civil engineers, materials engineers and such.
That may also be true for some areas, but you can definitely sign a contract for software development with just a generic business license.
There are certainly creative ways to sign the contract in order to avoid that requirement, after all we belong to the European nations that tend to be creative when it is time to comply with the law.
For example, I knew some consulting shops that had one poor soul that signed all contracts and hoped for the best.
> But as it turned out, that was a false premise. The real blueprints for software ended up being the code itself. And the legacy of UML lives on in simpler boxes and arrow diagrams.
IMO the bad rap UML gets is undeserved. The value of a detailed design in UML may be limited. But high level design elements like use case diagrams, sequence diagrams, activity diagrams - these are super useful.
Simpler "boxes and arrow diagrams" are fine but it's nice to have some consistency in the visual representation of these elements.
> It was developed during a time when there was a push to make Software Engineering a licensed profession
Regardless of the merits (or lack thereof) of the original push, I do want to see greater accountability and oversight of safety-critical systems and related, such as IoT systems - the idea of having a licensed/chartered engineer having to sign-off on a project (and so putting their personal professional reputation at stake) is something I support in the aftermath of things like Boeing's MCAS snafus - or the problems with Fujitsu's "Horizon" system - and so on.
I don't want occupational gatekeeping like we see with the AMA and the trope of licensed nail parlours, but we need to learn from how other engineering professions, like civil-engineering and aviation engineering, have all instituted a formal and legally-recognized sign-off process which I feel is sorely lacking in the entire software engineering industry.
The challenge is that when you're doing enough detailed design ahead of time, you are back to waterfall, so it's not really compatible with the agile methods of today.
Honestly, I think complex architectures are best demonstrated as diagrams—and those can be developed in an agile fashion. Stable, well-thought-out architectures can’t be slapped together without nice diagrams. There’s a ton of folks who just “start coding” to get a feature going, but when someone else takes over the project, how are they to learn the code? Diagrams are always the best way for me—and there are limits on what Doxygen says, depending on how bad the implementation is.
The main point of UML is to tackle both diagramming/architecture AND to force the basic code to reflect the diagrams. It forces code and documentation to both reflect the architectural truth.
This doesn’t have anything to do with agile methodologies, as any task can follow agile workflow.
I'm kind of missing something opposite: a tool that can draw a diagram out of the code and that, by dropping some details but preserving important stuff like which objects travel through which functions, can give you a better understanding of what architecture your program/system actually has.
Because it might be different from the architecture you think you have, and some bugs or opportunities for improvement might be more easily spotted through this different lens.
It seems fundamentally obvious to me that this line of thinking is bogus. If you move enough of the complexity to UML diagrams of course the code will be simpler - because all the complexity is now in the UML diagrams.
That doesn't make the complexity go away, you have to do just as much work writing UML diagrams as you did writing code before, but now you're expressing your complexity in cumbersome visual designers rather than code.
If you're going to shift business logic/data from code to any other format you need to demonstrate that that other format is somehow better for representing that information, you can't just pretend that because it's not expressed in code any more you've gotten rid of it.
Indeed. The problem is not with the diagrams, so much, as with the requirement for people to draw them. The code is the source of truth and the only true representation of the program.
There is value in diagrams - but that value is highest when the diagrams are derived directly from code. That is why I am making https://appland.com/docs - get the benefit of interactive code diagrams, generated automatically from executing code (not just static analysis, which is too weak to handle the dynamic behavior of modern frameworks).
I use it exclusively during refactoring to try and spot coupling, or to figure out somebody else’s code with a sequence diagram. It’s handy for that. It would be weird to use it for up-front design but I guess you could
this is what I do... I have a UML document that describes the database schema and instead of autogenerating it, I run a compile-time check to verify that the UML is in sync with the schema.
Honestly, UML would have been awesome for backend architecture. Most backend architecture diagrams I see are logos next to boxes connected by arrows, which is fine for a high-level view but extremely difficult to automate.
Part of the reason for this is that until you get to larger system components, changing the code is relatively cheap, so there's less need for a formal design.
wow, that is perfectly worded. In 2001 I spent a great deal of time making these UML diagrams for my boss. I didn't understand what this had to do with writing code. I hated it. I tried to argue it was pointless - let's just skip these and start writing code... and there were crickets.
Added: As an old person, there's no point in listening to me though. From where I sit, the main improvement over the last 40 years has been the widespread adoption of third-party libraries. You'd be surprised at the things that had to be written from scratch. ...I just thought of another difference over time: the population of programming hobbyists who became professionals. It would kill me to write software for free.
The Wikipedia article you posted is very relevant to me because I watched a division of the Air Force waste tens of millions of dollars trying to implement that approach for their collection of IT systems.
The consultants got very rich with their cartoon drawings on the walls and nothing was produced for the taxpayers naturally.
A complete failure but because it's the federal government nobody was held accountable and the person in charge changed the success criteria enough to be able to cancel it and call it complete.
The reason we stopped with UML and other front heavy planning methodologies is that what we planned was always wrong.
Software projects always used to go massively over budget, were delivered late, and usually didn't meet the customers' requirements anyway.
It turns out that customers don't know what they want, or at least can't articulate it properly to business analysts.
By delivering early and often, we're not only releasing value to our customers early and often but we're getting feedback, allowing us and the customers to go on a journey of understanding their needs.
Also, it turns out that engineering teams know better how to develop a piece of software than architects.
We solve the complexity problems by breaking a project down into (as he says) "pizza team" size components and focus on defining the interfaces between those components, rather than go into the weeds of how information flows within the code. This leads to other complexities, of course, but they don't lead to the same sort of delivery problems that large, monolithic designs have.
All in all, a switch away from a design heavy approach has improved delivery, value to our customers, and enabled staff to be more productive.
I think the type of software being written has changed over time. In the past, companies had to write complete software that would be shipped on a CD - you couldn't deliver an incomplete product and then change it later. Nowadays, the majority of software lives online and can be redeployed in a short time. Delivering early and often is possible now, but it wasn't in the past for the majority of projects.
Yep, also we're leveraging components that other organisations build more. For instance, we used to build our own messaging systems, now we use open source projects like Kafka. We used to write our own wire protocols, now we use HTTP endpoints with JSON payloads. We used to write our own authentication systems, now we use authentication libraries. Our patterns have standardised to the point that we don't need to design and document nearly as much as we used to.
Exactly this. You never know your problem domain as well as you think you do.
I always try to start a new project assuming everything could change. Paradoxically, this approach tends to produce the same net result as assuming nothing will change. If you have no idea how requirements will evolve over time, there's no point guessing. Just write the code needed for the first batch of requirements, and nothing else.
It is orders of magnitude easier to add a layer of abstraction to a simple system when new requirements demand it vs. remove a layer of abstraction later when you finally realize you don't need it. The latter is often impossible.
But what was perhaps lost along the way was communicating inner workings through something other than code.
I think UML and its likes still have immense value between developers and teams to communicate complex processes in a way that is easy to understand at a glance and facilitate shared understanding.
Sure, but you don't have to go as heavy-handed as UML. You have to wonder whether a visual diagram is the best way to go into detail, or whether we should use other ways of delivering that information.
Sure, but agile is about breaking up work into small chunks and showcasing value in those small chunks. If you're doing it properly you'll be able to look at your velocity and know that you're falling behind schedule. Or, your customers will be able to see early that the thing you're building doesn't suit their needs.
It means you can plan for and adjust early, rather than the week before a deadline.
Basically, we can't plan and estimate very well in tech, but it's easier to be accurate in our plans and estimates in smaller chunks than in larger.
That said, agile is about delivery rather than project management, and there always needs to be a larger roadmap to track features against - which is how we know whether we're going to be late or not.
> We solve the complexity problems by breaking a project down into (as he says) "pizza team" size components
The value of carefully-planned design is above the "pizza-team" level - that's the whole point of planning a system "in the large". The 'in the large' bit implies larger than pizza size.
"Getting feedback early and often" can only apply at the level of an individual component.
Sure, but the higher level you go the less detail you need. UML gets right down into the weeds, this is where design as code and documentation as code comes into play.
Even with agile processes you will probably need to write some kind of design document, and chances are that this document will contain more than one sequence diagram or UML class diagram. You still need to communicate with your team members and reach some sort of consensus on what you are going to do and what the service will look like.
Yes, but at a higher level and less formally than UML.
A sequence diagram doesn't need to be ordered in a time sequence like UML, it can be good enough to just articulate and number the flow of information on a normal solution diagram.
A class diagram assumes we're using an OO language - we need to define schemas for data, but not classes. The engineering team knows how to structure their code, as an architect all we need to worry about are the boundaries.
UML suffers from false precision. Nobody cares about most of the specific rules and grammar of the language because the output is designed to be generally readable by a lay audience. The time you spend making your diagram compliant with UML is better spent making and remaking good general diagrams.
Also, the level of detail that UML demands always turns out to be imprecise once coders actually implement the spec. So it requires enough formality to be a PITA to model, yet still falls just short of the specifics a programmer will need once they get to actually building out the product.
I always treated UML as a window into the domain, not as a precursor to the code.
With just a few Use Case diagrams, some class diagrams, and maybe some sequence diagrams, you could communicate what the final system should do and how people will interact with it.
The details of how to implement would be derived from the architecture used. I always found even a few basic diagrams with little detail (classes without fields) very useful and miss them now that people don't want to "waste time".
The idea was if you found that the model was lacking in some way, you would update the model and regenerate your code. In practice that just didn't work, or at least it didn't result in saving any time or in producing better software.
Exactly. Tools that make it easy to build visually appealing diagrams appeared. I remember how fun it was diagramming in Visio for the first time.
Why would someone (especially someone non-technical) spend the time to learn and write what's essentially code to make a diagram when the alternative is drag and drop? That's not to say UML is without value, but to me it comes down to the difference between CLI and GUI tools: when the latter is broadly available to the masses, the former is only going to be used by power users who want the flexibility.
Another personal nit: I've never seen a "pretty" UML diagram. The value of aesthetics is obviously not critical, but if I'm looking to make a nice diagram to show my boss and the options are UML and Whimsical, I'm going with Whimsical every time.
I’m of the opposite predilection. I want to be able to manage my architectural diagrams in code review. I like having a tool where I can edit it in one pane and have the diagram be a live preview beside it. Ideally the wiki tool would be able to render it.
GUI tools I find aren’t as good for iteration/living in source control/embedding in a wiki nicely. “As code” ecosystem is very immature - automatic live preview is frequently non-existent, resulting diagrams are ugly and hard to layout.
PlantUML is great but suffers from having a very difficult to customize layout, finicky styling, ugly default style, and a bit of a mess of a language that’s evolved organically.
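Still, to make the "as code" idea concrete: a minimal PlantUML sequence diagram is just a text file (the participants and messages below are invented for illustration), so it diffs cleanly in code review and can be rendered by a wiki plugin or a local preview pane:

    @startuml
    ' hypothetical checkout flow, purely for illustration
    actor User
    participant "Web UI" as UI
    participant "Order Service" as Orders
    database "Orders DB" as DB

    User -> UI : submit order
    UI -> Orders : POST /orders
    Orders -> DB : INSERT order row
    DB --> Orders : ok
    Orders --> UI : 201 Created
    UI --> User : confirmation page
    @enduml

The same file can be re-rendered automatically whenever it changes, which is most of the appeal over a GUI tool.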
> Why would someone (especially someone non-technical) spend the time to learn and write what's essentially code to make a diagram when the alternative is drag and drop?
An advantage, depending on context, is that it brings the diagrams closer to the code that they're modeling, making it possible to version control them and include the diagram generation in documentation build automation.
They're not modeling code in many cases, they're modeling processes, products, and user flows. If you're building an app, your diagram is going to look a lot more like the things your users see in order rather than the code to make those UIs appear. E.g., the code for react-dom-router routing is going to look very different from the actual navigation patterns built on it (tree structure versus linear flow).
The primary need for diagrams is drifting further from the use case you're describing.
A software engineering course I once took categorized modeling/specification techniques in three groups: informal, pseudo-formal and formal.
Informal specs are just prose with maybe some diagrams. They are very helpful and not too labor-intensive to produce, so their usefulness is obvious, but they lack precision. Formal specs use some formalism to exactly define (often in some mathematics-inspired notation such as Z) the behavior of a program. Very useful if you have it, but labor intensive to produce, not too many people can read or write these comfortably, and keeping up with changing requirements is hard.
I think the selling point of pseudo-formal notation, like UML, was that it would be a "best of both worlds", where you get most of the benefit of formal specification, while the workload is more similar to informal specification.
Instead, my impression is that it's the worst of both worlds. The additional work isn't quite as high as for formal specification, but you get very little benefit for the additional work: the preciseness of your specification is still much closer to informal specs than to formal specs.
I see UML as the worst of both worlds in a slightly different opposition: ad-hoc diagrams, which aren't necessarily informal (they can be formal enough with limited, improvised concepts) versus standard diagrams, which offer no guarantee of being formal rather than vague.
The vocabulary of UML offers a useful common ground (for example, distinguishing metamodels and class diagrams from object diagrams), but fancy UML features beyond the most basic diagrams would be not only time-consuming but off topic on a whiteboard, and too cumbersome and fundamentally informal for their intended uses of detailed design and model-driven code generation, with standardization providing strictly negative value.
I think it was Kent Beck who proposed the tongue-in-cheek Galactic Markup Language, GML, as his preferred alternative. It has very clear semantics: a box is a box and an arrow is an arrow.
Personally I always found that the less formalistic diagrams communicate much better since you can focus on providing the essential skeleton for your ideas rather than drowning in details that are only relevant in some contexts.
Nevertheless it would be useful if everyone knew which "arrows" mean subtyping vs. reference vs. ownership vs. instantiation, and which boxes mean action vs. object/class, etc. If you only have "arrow" and "box", you have to explain what you mean by them in the specific instance every time.
Also, UML does have some nice graphical notation like ——(o—— to denote interface and implementation (sockets and plugs).
> explain what you mean by them in the specific instance every time.
this is a feature, not a bug.
The diagram isn't a substitute for the conversation. It's an aid for the conversation (and a way of remembering what was said afterwards).
The mistake was always trying to come up with a diagramming methodology that could communicate the entire design without the need for people to talk to each other.
I’d add generating diagrams from the real code, database models, etc. as an alternative, too. General high-level diagrams don’t need the level of detail which UML requires but it’s inevitably not enough detail or accurate enough to understand the system using just the UML diagrams, so you end up with a bunch of expensive time wasted creating diagrams which don’t satisfy either audience.
A problem of UML, even worse in UML 2, is that it was developed for UML tooling vendors (IBM Rhapsody, Enterprise Architect, etc.). It is as if one compiler vendor designed and promoted a Universal Programming Language, declared it an industry standard that supersedes all alternatives, and had an interest in locking in all programmers.
UML is just all around horrible. I can't think of a single situation where it provides any benefit. It is a huge pain in the ass to the developer, and it just inspires false confidence in managers
Good riddance. UML is part of the infinite-complexity infinitely-layered culture around the Java language in the late 90's and early 00's. Where OO-supremacist "system architects" were trying to split software development into the first-class flying, conference-attending chosen few who could do modelling, and the unwashed masses who were to be replaceable, underpaid, be-glad-you-earn-more-than-minimum-wage outsourced code monkeys.
Mentally I have this bundled in with 4GLs and the perennial visual programming environments, revolving around the idea that the act of coding was the expensive part rather than the application complexity.
I understand the appeal of saying you can ship the diagram rather than writing code based on it but it’s always felt like the kind of thing which leads to a demo cliff where the simple parts are appealing but anything non-trivial becomes unmanageable.
> revolving around the idea that the act of coding was the expensive part rather than the application complexity.
I just can’t wrap my head around this idea. If the act of coding isn’t the expensive part, then why should I have to spend so much effort coding and why can’t my effort be focused on the application complexity?
I agree that UML and past visual programming environments weren’t doing a good job of splitting these concepts, but I don’t buy that this means it’s impossible to split this.
There’s definitely room to shift the work but I think it’s really easy to look at the amount of time spent writing code after the specification is “done” and say that it’s overhead without questioning how much of it was really due to errors, gaps, or misunderstandings in the spec (thinking memorably of a couple of projects “overruns” where the managers who thought the plan was done were wildly out of touch with their direct reports who’d been excluded from the planning process).
I think a lot of business people also liked the idea of having one expensive architect giving diagrams to cheaper coders (probably in a different country), avoiding the higher pay and social status they would otherwise need to offer to hire decent programmers. Something like UML is appealing if you want something you can look at and declare ready to hand over for implementation, and the numbers salespeople toss around for projected cost savings can make it appealing to pretend really hard that it's that simple.
The UML specification is 796 pages[0]. It is amazing how humanity can take simple concepts like flow charts and inflate them into insane, hyper detailed legalistic rule books that almost no one understands or uses.
This is the real answer. The effects of UML will live on forever, but a formal language for it has been dead for a long time in common use, though some of course will still use the exact formality.
The value of UMLish things is just visually diagramming data and systems. Boxes with arrows and some data inside is typically enough to do the trick. Lucidchart is thriving despite many not actually writing explicit UML on it.
Why is this sentiment so prevalent? I'm thinking more and more that simplistic and vague informal diagramming is a great source of confusion.
UML statecharts, specifically, capture a lot of behaviour unambiguously, though they are hard to draw by hand (due to their hierarchical nesting, some kind of collapse/expand system is required to draw them in a sane manner).
I don't find it a source of confusion, because my boxes and lines diagrams are explicitly meant to be sketches, and are presented as such.
UML, as others have said, is both too precise about things that are still vague at sketch time and too vague about things I want to be precise about to be useful. And once I have a real system, the real complexity in the system always makes for either an insanely huge diagram with lines everywhere, or an imprecise one full of lies and omissions.
Trying to program with UML is basically like trying to write a system of 10k lines by first writing them all down, and only then beginning to try to compile it. It's not going to produce good results, not just because of the endless little errors, but especially the big ones that you could have discovered much sooner if you were using something more real than UML. UML is superficially easier to work with than direct code, but in depth, I think, inferior to it.
Some of the notation is sometimes useful, but to use it as intended is crazy.
> simplistic and vague informal diagramming is a great source of confusion
It definitely is, but are UML statecharts the right solution? Why not go the whole nine yards and use a full-blown formal modelling environment, like TLA+? After all, using UML in a formal fashion would take more or less the same amount of effort as learning TLA+, which moreover:
* Is textual (making it easier to version-control and collaborate upon)
Because the more mathematically rigorous and pure a description is, the more likely it is that your programmers won't carefully read the spec, or won't fully understand the exact meaning of the UML arrows.
It doesn't help that every UML diagram I've ever seen is ugly. There are always way too many jagged lines, and seemingly random attachment points. (Example: https://tallyfy.com/wp-content/uploads/2018/02/Class-Diagram...). In the little triangle, one line goes straight down, and one goes to the side. Is going to the side special? Or just a random artifact of the diagram? Without being an expert, it's hard to know.
With written-language descriptions, it's easier to be rigorous enough for all programmers.
In my experience (startup product development), product requirements and constraints are constantly shifting in order to support new use cases. In my practice, that has translated into only formalizing specifications after the fundamental needs of a system are set in stone, which is usually after release. When I’ve worked on contract-based projects, it’s been much easier to formalize beforehand, but I suspect most developers on HN are involved in active product development.
That's true, but at least before implementation, the behaviour to be implemented should be known to some extent. UML statecharts remain a good method to document behaviour.
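As a rough sketch of what that can look like in practice (the connection-handling states below are made up), a nested statechart is compact to write in PlantUML:

    @startuml
    ' invented connection lifecycle, for illustration only
    [*] --> Idle
    Idle --> Connecting : connect()
    Connecting --> Connected : handshake ok
    Connecting --> Idle : timeout / retry
    Connected --> Idle : disconnect()

    state Connected {
      [*] --> Authenticating
      Authenticating --> Ready : auth ok
      Ready --> Authenticating : token expired
    }
    @enduml

The nesting mirrors the hierarchical statecharts mentioned above, without having to lay anything out by hand.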
The reality is that humans have a rough sweet spot for the amount of complexity they can reasonably consume, and a high-level, somewhat vague diagram that has been cherry picked by someone who intimately knows a system is a great introduction.
This is not to say that blueprint-style UML has no value, it's just comparatively rarer, since it requires much more time investment. However, the two do not address the same needs.
I've been trying to build diagrams to familiarize myself with new systems, but I wish there was a UML-lite that worked well for generalized diagrams in Python.
Instead, I spend 70% of the time just trying to figure out a good diagram approach.
Yup, I’ve always referred to UML as ‘a formal standard for back-of-the-envelope sketches’. It demands too much precision and rigor to be used during the early parts of a project when you want to just get people on the same page about roughly how things should be structured without baking in assumptions too early. Then in the later part of a project when you’re talking about modifying an existing system, reexpressing the existing semantics accurately in UML first demands unnecessary pedantry.
When you draw a diagram to communicate something about a system, being able to choose which things are important to you right now and which things aren’t is a feature of informal diagramming, not a bug.
I learned UML years ago but found PowerPoint superior for explaining what needed to be done. It is so much easier to sketch using PPT than even Visio, which makes me pull my hair out.
> Explanation should do things that the other parts of the documentation do not. It’s not the place of an explanation to instruct the user in how to do something. Nor should it provide technical description. These functions of documentation are already taken care of in other sections.
Where UML goes wrong is that it tries to be a very detailed technical description. Usually, you just want a reference guide for that sort of thing. And reference docs are rarely maintained manually, at least not with any accuracy. I've yet to find a good system that can generate diagrams for technical reference that's usable.
It's hard to say what fits and what does not in a diagram, because it depends on the point you're trying to make. But I usually find that "explanations" are really tricky to get right. UML's a handy place to start thinking, but you're almost always going to cut out a lot of detail to make an explanation easy to understand.
Same here. I still think UML is a great tool, but few have the need to follow the standard to the letter. Often I don’t fully understand a problem before poking at it for a while.
As I write this I can't help but wonder if perhaps it would be useful to return to UML after doing some exploration and before starting the real implementation. Still, everyone on a project would need a refresher on UML; most of us have forgotten most of the details.
UML really died because modern software development is less like designing an aircraft than it is like running a research program to design a new kind of aircraft. The platforms programmers build on are so vast and complicated that their behaviour is not fully known to the programmer. So to design a specific application, the programmer has to perform a series of experiments on the platform to determine the manner in which it will have to be used to support the objectives of the application. When those experiments are complete, the programmer can spend some time in design, but for the most part programming-in-the-small is akin to conducting research and then doing a little design. For large systems, someone has to do design, but the best way of doing such design is through whitepapers, large-scale experiments, and possibly simulations and analysis. UML isn't particularly useful, beyond establishing a common vocabulary for diagrams -- a task that can be done more simply and informally.
Yep. Software development is closer to biology than mathematics now. Instead of some first principles that help us understand a system, we poke and prod it.
I don't know about the rest of it, but sequence diagrams are alive and well and are basically in pretty much every design doc I write and read.
They are very very useful for letting people quickly gain a shared understanding for what a system is supposed to do (at a high level). They are extremely beneficial IME.
If you are designing a system and need other people to understand key parts of how it all hangs together (e.g. during review, implementation, or maintenance), and you are not using sequence diagrams, I'd urge you to strongly consider doing so.
In my experience, any attempt to describe/document/plan a system using UML inevitably turns into an outdated mess that no one is willing to waste time on updating. Eventually it becomes worse than useless since it contains objectively wrong information.
I don’t think the “using UML” phrase is significant in that statement.
It seems weird that “No one is willing to waste time” updating diagrams, yet everybody who encounters a new large code base starts drawing diagrams.
It's not clear to me whether making it easier to update diagrams, or making it required (say, by having the diagrams be part of the source code, without which the code won't compile - there have been several attempts at writing such systems, but none has become popular), will help there. Maybe the act of drawing such diagrams (as opposed to looking at them) is essential for learning a code base.
I usually start drawing diagrams because the existing ones are difficult to read.
This is another problem with UML in practice, that drawing diagrams is a skill, a bit similar to writing code in the sense that it is possible (and quite frequent) to have "spaghetti diagrams" with dozens of classes on one screen connected by long zigzag lines you need to carefully trace by finger.
The solution, as usual, is "divide and conquer", where you create a separate diagram for user management, another diagram for invoices, yet another for... whatever the application does. Okay, people do it partially, like they split 100 classes into two diagrams with 50 classes/tables each, but good design would be more like 10 diagrams with 10 classes/tables each.
Another problem is that the UML language cannot capture things specific to the project. For example, suppose that 80% of your classes have fields like "date_created", "date_modified" etc. How are you going to handle this? If you write those fields everywhere, you get lots of repetition. If you don't write them, you omit potentially useful information.
With an informal diagram language, you could create a project-specific convention, for example a small clock icon in the corner would imply presence of "date_created" and "date_modified" fields -- and the icon itself would also be explained somewhere. A bit like graphical domain-specific language.
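For what it's worth, a stereotype is one way to approximate that kind of project-specific convention in PlantUML (the classes and fields here are invented, and <<timestamped>> is assumed to be a convention documented elsewhere in the project, not anything standard):

    @startuml
    ' project convention (hypothetical): <<timestamped>> implies
    ' date_created and date_modified fields, omitted from the diagram
    class Customer <<timestamped>> {
      +id : UUID
      +name : String
    }
    class Invoice <<timestamped>> {
      +id : UUID
      +total : Decimal
    }
    Customer "1" --> "0..*" Invoice : owns
    @enduml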
> With an informal diagram language, you could create a project-specific convention, for example a small clock icon in the corner would imply presence of "date_created" and "date_modified" fields -- and the icon itself would also be explained somewhere.
The UML spec actually has support for this. It defines a special kind of 'inheritance' between diagram elements and ways to represent it, including iconically.
> This is another problem with UML in practice, that drawing diagrams is a skill, a bit similar to writing code in the sense that it is possible (and quite frequent) to have "spaghetti diagrams" with dozens of classes on one screen connected by long zigzag lines you need to carefully trace by finger.
This still comes to a head with Unreal Engine Blueprints.
On that note, I really wish we had tooling for automatically generating diagrams from existing code. Kind of an interactive query language:
"Starting here [main.cpp :: Start()], how do I end up here [Frobnificator.cpp :: FrobnifyQuux()]?" - should give me a sequence diagram like this one: https://s.plantuml.com/imgw/img-f05ecfe23bf545656a3a1f66b2a9.... And then I should be able to narrow it down, skip selected functions or classes.
I once spent over a day drawing such a diagram (much larger) for the core system of a large codebase I started working on. I did it by manually stepping through a codebase and making notes as I went, in PlantUML format. It ended up being tremendously helpful, as parsing a diagram like this is much faster than parsing textual representation. And a year later, when I had to revisit that part of the code again, despite being not entirely accurate, the diagram was still very helpful.
Same for other types of diagrams - class, state, component, activity, timing diagrams. The tools we have all have enough information (e.g. Clang definitely understands C++ codebases well enough that it should be able to spit out the aforementioned sequence diagram between two given points), but I'm not seeing them used this way.
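For reference, the hand-traced diagram described above can live as plain text; a stripped-down sketch (only Start() and FrobnifyQuux() come from the example above, the intermediate Scheduler participant is invented) might look like:

    @startuml
    ' manually traced call path; names partly hypothetical
    participant "main.cpp\nStart()" as Main
    participant "Scheduler" as Sched
    participant "Frobnificator.cpp\nFrobnifyQuux()" as Frob

    Main -> Sched : RunPipeline()
    Sched -> Frob : FrobnifyQuux(job)
    Frob --> Sched : result
    Sched --> Main : done
    @enduml

A tool that emitted something like this from the AST (as Clang plausibly could) would remove the day of manual stepping, which is the point being made.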
> It seems weird that “No one is willing to waste time” updating diagrams, yet everybody who encounters a new large code base starts drawing diagrams.
[citation needed]. I got a new job fairly recently, we’ve got a massive codebase, and I never had the urge to draw any diagrams. (We’ve got zero diagrams as part of our docs as well.)
> yet everybody who encounters a new large code base starts drawing diagrams
I draw a lot of diagrams when exploring code, but they are almost never UML, because usually I look at code searching for a particular piece of information, and only draw what is relevant to that search, with varying levels of detail.
If I was to draw everything (no matter the format) - it would become a huge mess with signal-to-noise ratio very close to 0, at that point I might as well just read the whole codebase.
My personal experience is that system-level architecture diagrams are useful (i.e. which service/application talks to which service/database/message queue etc.) but code/class-level ones are not. As to why - because the code tends to change a lot more often than the overall architecture of a platform. Requirements change (or become better understood), code is refactored once the engineers get a better feel for the problem they are solving, new tools come along, etc. Code-level diagrams have a habit of going out of date quickly, or (even worse) discouraging people from making useful changes based on new information.
To me, it seems that in their rush to criticize agile, the author failed to consider _why_ agile-like practices killed off UML.
I used to develop next-gen high-end OO CASE tools, and am big into systems analysis/design/etc., visual thinking and communication, etc.
A few random comments on latest impressions of UML, from trying to get something done last week - not a rigorous look at UML...
I happened to skim parts of the UML spec (796-page PDF) last Thursday/Friday, having not seen UML in a while.
I appreciate all the goodness from various methodologists that was incorporated into UML. And, especially as a tools developer, I also appreciate the semi-formal metamodels, which have a few key uses, most of them for tools developers.
Unfortunately, the particular UML features I needed last Thu/Fri (for a cross-org high-level process&architecture model) didn't seem to be supported sufficiently well (or at all) by any of the first half-dozen UML tools I tried. (I tried open source, and then lightweight Web SaaS. People generally do that, maybe more SaaS first, and want to get started rapidly. I'm not going to casually install a huge closed CASE platform, without checking the SaaSes and open source.)
By Friday afternoon, finding all these tools hindered more than they helped, I said "fiddlesticks this barrier situation", and decided to just use our GSuite Draw. Although, on the surface, it appears to be pretty much 1980s shared-whiteboard simple-drawing CSCW (with a little 1990s Visio diagramming innovation), I was able to express enough of what I needed without too much effort, and I was even able to reason from the diagram a little as I was manipulating it.
To make GSuite Draw viable for my immediate purpose of starting to work through a model, I did have to use a metamodel and notation that was easier to draw without special tool support, but it worked for this particular model.
I'd still use a specialized UML tool for the most familiar Rumbaugh-ian static object modeling. And, if the tool supported them well, I'd also use for Harel state modeling, maybe event traces, maybe Jacobson use case modeling, and some other UML ones (e.g., Activity Diagrams, especially with the object flows, not just flowchart control flows).
I wouldn't generalize much last week's experience, to adoption by developers in general, since I'm a fringe person who knows specific metamodel features that exist, which I want to select for a particular need -- rather than being a new person dropped in front of a particular kind of UML diagram, and told to start expressing my system in whatever terms it gives me.
I haven't done a rigorous survey of current method&tool practice and offerings, but I'm starting to form a suspicion that adoption and execution challenges for UML-ish things today are very similar to what they were 20 years ago. The markets certainly seem to have changed in some ways, but even some of those ways (e.g., more widespread framework reuse), were already being talked about for modeling 20 years ago.
Sequence diagrams are probably the most useful UML diagram. I don't know of a simpler way to specify and document workflows, no matter the scale (from function to whole system)
Now that I think about it, sequence and state diagrams are pretty much the only programming-related diagrams I see used in the documentation at my telecom job. I even see sequence diagrams written in code comments from time to time, they're always appreciated.
UML is closely associated with Object Oriented programming. In the 2000s, OO programming was all the rage, and having a mental notebook of “design patterns” was considered the hallmark of a real software engineer.
Since then, OO has lost a lot of its luster, the languages most associated with it like Java and C++ are considered somewhat clunky and uncool. Functional programming took over as the “smart” paradigm to talk about, even if in practice most popular programming languages borrow a mix of imperative, OO, and functional features.
UML was also popular in a time when you could make a SQL database and just use that. The introduction of more database varieties complicates things.
Besides OO, UML was also strongly associated with “visual” programming done by non programmers. This paradigm is doomed to be reinvented and pushed as the next big thing every 3 years until eternity, which we currently see with whatever the latest no code trend is, but in the 2000s was associated with UML.
Finally, language independent data formats like JSON, Thrift, and protobufs got popular. Why make a UML diagram when I can just make the actual data structures then immediately have that as a binary format?
To the extent UML means "data modeling", people still do that. But it became associated with a lot of cultural baggage, and most of that baggage was on the losing end of a lot of technology mindshare battles.
I don't agree with the posting. In my opinion, UML had a rebirth in the 2010s with the growth in popularity of PlantUML. Note that PlantUML is referenced already in this thread, and to the positive sentiments already expressed I can add that PlantUML fits much better than most earlier UML-related software for the purpose of network modelling. While the original UML standard was not specifically designed for representing networks, PlantUML has a network diagram type built in. And the extensibility of PlantUML allows it to cover many use cases beyond that. I imagine that with a certain amount of tenacity applied, it would be possible to create a PlantUML library for "masala diagrams" as well.
Also, "agile" shops usually love that tool, since it requires minimal learning curve for any person on a team to start producing nice-looking diagrams.
You sketch out simple diagrams of major modules and how they interact.
That gives an understanding of what goes where and how things interact at high level.
You don't detail each function/attribute/call/... or you'd go insane, and it would be useless / you'd waste your readers' time. (If they want this level of detail they can check the code.)
It's also useful when doing new big features where you need approval/feedback/advice. One simple class like diagram to show how you structured your code is generally useful for people you present it to to understand what you want. Some use cases like sequence diagrams are also useful. Then you detail out further in whatever is agreed in the company.
Sometimes it's useful to describe flows for really complicated bugs with a sketch like a sequence diagram, to make sure you understand and found the actual root cause, and to show that.
There's quite a lot of hate in the comments for UML, it seems. And while a lot of it is quite valid, I have to say that sometimes I miss _some_ of what UML could do, especially in the form of documentation. Sometimes a simple sequence diagram is able to communicate the behavior of a system much more clearly than tens of pages of disjointed prose... Just my opinion.
Sequence diagrams are the one UML diagram I regularly still produce to communicate behavior both to other developers and less technically minded people. I personally find them great (as long as you don’t get carried away in very minute notation)
Nothing is stopping you from creating diagrams, but UML is not required to create useful diagrams. Most concepts can be diagrammed using only boxes and lines.
This could only be said by someone who hasn't sat through their Nth architectural overview where half the time is spent defining just exactly what was and was not intended by a particular line+arrow between two boxes when a simple sequence diagram would have been crystal clear.
Both simple box+line diagrams and sequence diagrams are entirely dependent on the creator's skill at creating diagrams. I've seen plenty of confusing sequence diagrams. I guess it also depends on whether your team find it useful that diagrams depict low-level highly detailed interactions as opposed to higher level architectural concepts.
In my view one of the biggest problems of UML was that it's ugly and not very intuitive. For a while I jumped on the UML bandwagon and developed a lot of diagrams of the various types. It turned out that even very experienced developers didn't understand them, not to mention less technical people.
Now I just draw a few boxes with arrows and usually get the message across.
UML as code generator also made for really neat demos but in practice the diagrams for complex systems were harder to read than the code.
If I had one wish it would be nice if there was a standard for embedding and editing diagrams in different software like Word or Powerpoint or the web. It seems you always end up copying/pasting stuff while often losing the original after a while.
It would be nice if Word had an interpreter for this. You enter the model text and Word renders it. Same for Graphviz. My life would be much simpler if Word rendered them.
Yeah came here to say that. I use sequence diagrams a lot to illustrate complex processes before starting to implement components and stuff like that. But they were "co-opted" by UML, the idea is older than the whole mega-effort to make UML a thing and automate software engineering.
I like the concept of 'Masala diagrams' that OP introduces - I've drawn these a lot, and the UML-trained part of me is always slightly uneasy that I'm combining (say) data and process flow on the same diagram, or that some of my boxes are physical machines and some are programs, etc. But the thing is, you shape your masala diagram to communicate whatever the thing is you want to communicate. You'd need 3 or 4 UML diagrams to cover the same thing, and no-one would really care that you'd bothered to keep to the conventions.
IMO diagrams work best in tandem with actual docs. The diagram is an abstract — you glance at it to get a high-level understanding, and if you need to deal with the details you can look at the writing. UML always gives me a headache; I hate having to squint at a diagram to deduce details that I could just be reading about.
In theory, UML diagrams sound good. In practice, having to maintain them is a PITA. I quickly abandoned them as soon as I went above 10 classes, which even a basic OOP codebase will exceed quite early on.
$2.1 Billion - That is how much IBM paid for the acquisition of Rational Software in 2003.
The "Three Amigo's" (Grady Booch, James Rumbaugh, and Ivar Jacobson) had different modelling methods and decided to "Unify". Most the the UML hype was about adoption and market penetration in order to get to an exit. Others were caught up in the hype and made some money too, such as Martin Fowler, who wrote UML distilled and went on to agile and Thoughtworks. Special mention to the now-unknown Alistair Cockburn who wrote a book and did a lot of consulting around the confusion of use cases.
In today's language, Rational Rose would be the "Uber of Enterprise Modelling" and UML and RUP would be it's moat. Or something - Hacker News would be very congratulatory of their achievements.
The question is less about why we don't use UML anymore (A: the three amigos got their exit), but why nothing has come along to replace it (A: it's not needed).
UML sucks because it takes forever and if your software changes you have to change your UML. I just use tooling that allows me to generate diagrams on the fly based on the existing code.
In the planning phases I just use Draw.io, but that always ends up out of date and irrelevant once the system actually exist.
I find this kind of comment ironic (you're not the only one to make it); it's this kind of thinking which led to UML becoming a complex, over-detailed beast, rather than a tool to consistently model intentions.
If you model at the correct level of detail, you don't have pain keeping the diagrams in sync; once you start to model the implementation details, you need to keep adding more complexity to your model to keep it in sync :)
A lot of others have talked about the UML aspect of this post, but I want to speak to something that was more in the latent space of my reading of the post and its dislike for agile trends and a kind of supposition of something being lost by losing UML.
It seems to me that people emphasize the success of engineering disciplines outside of software engineering as a way to show the weakness of software engineering, but I think this way of looking at software engineering artifacts is misleading.
Optimization algorithms have some pretty dismal outcomes when you choose to analyze at a particular point in time, especially when you choose a failing artifact, because those failures are absolutely intentional and critical to the proper functioning of many types of optimization processes.
As an example, generative discriminating processes, like GANs and like Software Creation + Chaos Engineering as popularized by Netflix, thrive on the creation of failure. Yet if you focus on the examples of failure you're missing the forest for the tree whose death is fertilizing the soil of the forest and resulting in a thriving ecosystem. Yes there was failure, but that failure wasn't an indication that the practice itself wasn't holistically valid. The failures were the means by which success was obtained. This falls directly out of the mathematics of learning.
The field envy of other engineering disciplines is misplaced. The limit of these optimization processes when expressed in code tends to be machine learning. These optimization processes result in learned creations of superior quality to what is currently produced and it seems highly likely that most fields that don't embrace this will find themselves out-competed by disruptive upstarts who aren't blind to the benefit of new technologies and workflows. Likely, some of these upstarts will be startup founders who frequent this site.
When you look at a time slice, failures look terrible. When you look at a longer period of time: failure is smarter than success. Risk aversion is self-destruction; die to live or you'll live to die.
I avoid even trying to use UML , because it just ends up in an argument about whether I’ve used it correctly, and is a distraction from whether the information displayed is correct.
It is also like using fancy vocabulary. You might sound smart and all, but most people will think you are being silly, and there will be plenty who won't understand you, which creates its own problems.
When you make a perfect REST API, there will be someone who doesn't understand it, just tries things, and makes a big mess.
The same goes if you create beautiful, perfect UML diagrams: if no one else is proficient at reading them and it is only you, there is no advantage to working this way.
I don't think it is. The article isn't saying that diagramming is dead; it's saying UML is dead.
Using boxes and arrows isn't UML (full UML), and as you say, it's wasting time to go "full UML", which is why it died. Diagramming will live on, even if inspired by UML.
He calls it masala in his post. But you know what? Masala is just UML. Because the L stands for language, and languages evolve when used (or die when not used, not when used differently)
UML has been on life support for a long time because it served no real purpose for either engineers nor business users.
Any UML diagram that accurately captured complex business requirements was no easier to understand than the pages of code, the databases deployed, and the gazillion other things that are required to run any modern corporation.
Anecdotal data point: no, it hasn't. I still use it in every project that we make at my day job, including system documentation, implementation proposals, and communicating design choices to other people.
That's what I tend to see as well. Used around documentation and drawings. Used much less in any context around real code...little code->uml, and almost no uml->code.
There were a bunch of academics in a certain rich EU country that dumped a lot of research dollars into UML (via their own research and their control of funding agencies and committees). That they’ve mostly retired by now probably has a lot to do with rapidly declining interest in UML.
My Software Engineering teacher "taught us" that UML is used everywhere in CS and business. Looking up their CV, they only had two years of experience outside academia, and that was 20 years ago.
UML - I know next to nothing about UML - but what I do know is the language was invented first and then people came around and tried to give semantics to the language. Well, in other words what that means is that the language was invented first and it really didn't mean anything. And then, later on, people came around to try to figure out what it meant. Well, that's not the way to design a specification language. The importance of a specification language is to specify something precisely, and therefore what you write - the specification you write - has to have a precise, rigorous meaning. - Leslie Lamport
UML: a language that was invented first and then people came around to try to get semantics. - Leslie Lamport
UML: fuzzy pictures of boxes and arrows. - Leslie Lamport
People use UML, things like UML, to model programs, but it's not clear how to translate them in to sequences of states, for concurrency. If you cannot translate them in to sequences of states, it means you don't understand them, and it may mean that there's nothing there. You know, there are lots of people selling snake-oil, drawing boxes and arrows that make you feel good, but ultimately have no real meaning. If something is really meaningful you should be able to express it in mathematics. - Leslie Lamport
No, UML, we will not miss you. Maybe architectural astronauts will miss it, but hardly anyone else.
UML was just another bad idea that wanted to reduce the complexity on business logic part but what it did in reality was:
- Add enormous complexity of UML on business logic
- Add enormous overengineering and complexity on "legoland" (code blocks that needed to behave by the UML spec)
- Add enormous complexity to the coding part (developers suddenly were no longer aware of the whole process, and were not only making stupid mistakes but were unable to optimize what could be optimized).
I was working on a 200-million-revenue project that was destroyed by an architectural astronaut with "political connections" to upper management. In the end the whole dev team was frustrated (the ones who brought the project to 200 million), top people were leaving, and due to the "generalization" of each "block", each feature took at least twice the time to deliver, on top of all the overhead of actually getting confirmation from the UML "experts" about what they actually need.
The further you go from the ground, the harder it is to see the details. Once you float so high in space that Earth is only a ball, all problems / complexity / people / ... disappear. But it doesn't mean they are not there. You just can't see them. You are just playing with the ball.
The funny thing is that people just never learn; I bet there are a few thousand projects running right now that want to do something similar to what UML did.
Architects that have authority over development but work at arm's length from the actual development effort aren't UML's fault, and they haven't gone away from the bureaucratic environments they have always been found in just because UML got less popular. They may now communicate (and force developers to communicate with them) using diagrams with ad hoc symbology and inconsistent annotations rather than a structured visual language, but they still exist.
It’s about communication. Occasionally one needs to communicate complex structures to an audience with unclear capabilities. The ability to fall back to something that one can reasonably hope is understood more or less the same way by both the writer and the reader is invaluable. If you are not dealing with complex structures or have other ways for communicating them or share another language with the reader (what do these boxes _mean_?), UML is not useful.
I replaced UML with Archimate, I hated where UML was heading and the concept of dropping code/reverse engineering from source code - the idea was novel enough but definitively snake oil. Architecture is important, this is nicely achieved with Archimate imo. What works for me might not work for you. I still use some elements from UML but usually on a whiteboard as either use-cases, simple sequence charts or state diagrams. The code is self documenting.
No silver bullet. No silver bullet. No silver bullet.
As Alan Kay said, programming is a "pop culture", more interested in the latest fads. The fact that we have so many programming languages, many of them relatively recent, proves we don't really know what we're doing.
Through my tinkerings with Raspberry Pis, I began to develop an interest in electronics. I recently decided to actually study a book on GCSE Electronics (not sure what the US equivalent of that is called).
What it highlighted to me is that there are certain fundamental, timeless principles. Now, electronics is a relatively young discipline, but you can still go back decades. A lot of components we have now were actually available then, and ALL of the mathematical principles behind designing circuits remain the same.
Programming is the strangest invention of mankind. It seems to have resisted many attempts at codification of fundamental principles.
The reason UML died is that most code is written by people who do not care about bureaucracy or much else than writing code.
UML was born so that there would be a language to specify the design so that somebody else could implement it, or at the very least that's how I have seen it used.
But this doesn't make sense in most cases. We have learned that the more efficient way to write software is to have one person both design and implement that design. And this means UML has been relegated to serving as documentation only.
Which, as we know, nobody likes to do, and few do.
UML is still strong in organizations that write code to order, or where comprehensive documentation is a formal output of the process. That was more common in the past, but these days the companies that shape our development consciousness have decided (and rightly so) that developing their products is a core part of their business.
I think the biggest issue with UML is that it helps with things that don't need such ceremony, and doesn't help with things where you would need it.
Modeling database structures, class diagrams, and interactions between components is usually something any senior developer can do on a napkin, without all the hassle, certifications, $500+ per seat tools, and unique terminology.
The trickiest things are designed much better, when you just open REPL and start playing with the design until you get it down. It's highly creative, speculative and flexible. So exactly the opposite of UML.
In programming, the most exciting parts are things you couldn't productively solve with UML.
And calling such diagrams Masala is a huge misunderstanding of why people mix contexts, views, and representations. That mixing is exactly the creative, speculative, and flexible part you need.
UML diagrams are blueprints, but blueprints are not the thing, just like maps are not the land. Talk to anyone in construction and their first complaint is often that the blueprints are wrong. They may have missing or extra space, call for impossible things, have changed since the project began, or be flat-out wrong. Whether the blueprints are actually wrong or were simply not followed is up for debate.
So if UML is blueprints, it’s not that they failed. It’s that the creators misunderstood the actual fidelity of blueprints. They will never be perfect enough to replace implementation.
Sometimes I wish for more formal planning, and that real design could be done before code. But instead of building a design process, programmers immediately saw a way to automatically generate code, and in that they made UML useless.
To some extent UML did succeed: it is the de facto way to describe class diagrams, state machines etc. That's not to be underestimated.
I went to university just before UML hit it "big". For some reason we learned three different ways to describe a class diagram graphically. Let's skip the debate about whether that belongs in a university course; it doesn't. But it does show how confusing things were, even to lecturers.
Now, if you were to bring me a class diagram with a completely made-up notation I'd immediately ask you to use UML notation next time.
It's not a big success, but a success nevertheless.
I think the big hope for UML was code generation, so UML->code, but the problem was, when manual fine-tuning of the generated code was needed, there was no functioning process of code->uml.
Why would I spend an hour on building a diagram to generate code if I can type the same code in five minutes? I can read both. This is from real-world experience with Rational tools.
I suspect the idea was to lower the skill ceiling to make it accessible to non-programmers or more cynically in a belief it would cut labor costs. It doesn't work out that way but overspeccing the wrong details is a persistent temptation, perhaps related to psychology of the leaders for reasons similar to micromanaging.
UML is a near-deprecated aspect of what was once called Computer-Aided Software Engineering (CASE).
UML became the canonical iconic notation that replaced the multitude of -cough- guru-inspired systems and software notations that preceded it.
As is documented in the rest of this thread, UML belongs to the age of mainframe and mini-computer programming projects with ubiquitous RDBMS backends.
While the notations it consolidated were much lighter-weight and easier to use [AND more intuitive!], UML drowned in formalizing the corporate complexity of all of them.
And as if that weren't a complex enough ambition, UML attempted to do rules-based and heuristic enforcement of Object-Oriented programming just to ensure its rejection by pedestrian programming staffs.
Today, it's a convoluted mess that virtually no one tries to untangle in a professional setting.
Intelligent IDEs and the left-shift of DevOps utilities have eaten the CASE tools' lunch. For sketching, draw.io is superior by leaps and bounds to the heavyweight, commercial drawing products.
While that explains the UML-narcolepsy effect, the broader issue has always been the lack of systems and program design education.
It still is important to convey system and programming information beyond the cohort of programmers. I can tell you from experience that many accountable company business actors have NO IDEA what these systems actually look like in all the important ways.
What's missing is Computer-Aided Business Engineering tools.
UML is the same line of thinking as J2EE which is the worst thing I’ve come across in my 25+ year career. J2EE took software complexity and made it not only more complex but catastrophically tedious. And then the “experts” were no longer software developers but bureaucrats. It was the perfect way to destroy Java and enterprise software.
UML was along the same lines as J2EE that rewarded bureaucracy and tedium.
UML is a relic of the worst parts of enterprise software back in the mid-2000s, and I’m glad it’s completely dead.
Back in the 70's, some contracts required that a flowchart for the code be delivered along with the code. The idea was programmers would prepare a flow chart as a specification, then implement the code according to the flow chart.
What actually happened was programmers developed the code in Fortran, then ran a program that read the Fortran code and produced a flow chart on the printer.
After a while with this, the flow chart requirements were abandoned.
UML as a way of detailing the low-level implementation of a system was, IMHO, a really bad idea. I say that as someone who once drew up all the GoF patterns in UML in Borland Together and then tweaked the generated Java code to end up with running Java code for what the patterns described.
But there is a world of difference between "UML as a sketch" and using a diagram to constrain what actual code your co-workers actually write.
Sure – Fowler's UML Distilled is still handily placed on my bookshelf, but even from when I first read it, I knew that I was never going to draw class diagrams from three perspectives (conceptual, specification and implementation – leaving aside the GoF exercise above). And in the 20+ years since UML became a standard, I don't think I ever did.
But turning it around, what diagrams do you draw when
- You are in front of a whiteboard and are trying to communicate with half a dozen (maybe microservice-based) teams that you need X to happen before Y, but the daft business requires you to pass a new bit of data through 3 of the systems to implement this.
- You are in front of whiteboard with some of the newer members of your development team trying to work through what parts of their development can run in parallel and what parts must be serial (because the product owner is new and can’t get to that level of detail in words or drawings)
- You want to explain to the business that one of "these" will own many of "those" unless (if you are getting ambitious in your whiteboarding) the guard condition is that it is a pink moon or it's Aries ascendant
Me – I want one "diagramming language" to realise my witterings in. And that one "language" was always sketched UML for the last 20 years.
Sure – if I was a database designer implementing a problem with a relational database I might well use an ER diagram. But that is simply acknowledging that diagrams are so much better than words for sharing ideas, and that the diagrams should be appropriate to the level of abstraction being dealt with.
As a previously enthusiastic Scala user, I'd say that the role for UML has been _very slightly_ decreased by the shift to functional from OO. But not much. The functional example of "map X to Y except when ..." is really about implementation detail and I think most people are happy not diagramming at that level now.
If something dies without anyone noticing, it has no value whatsoever.
Good riddance.
As a new software developer at IBM, UML scared the hell out of me. It gave an aura of needless complexity to software problems that needed to be tackled by breaking them down into manageable chunks and applying time-tested solution patterns.
And the whole Architect thing in IBM, oh my god. They were literally seen as demi-gods that everyone aspired to be. Really bad entry into software engineering for me.
It is not uncommon these days to see UML sequence diagrams, for example...
I would describe the situation as:
- UML offers a lot of stuff, but only some of that became widely adopted.
- When stuff gets really ugly and complicated, UML can help by providing a high level view of what's going on in ways that are really hard to achieve just from memory.
Sometimes I use PlantUML to create diagrams to keep track of things. PlantUML is a format that is source control friendly, so I can check it in my docs folder.
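For example (a minimal sketch with made-up participant names), a sequence diagram in PlantUML is just plain text, so it diffs and reviews like any other file in the docs folder:

```
@startuml
actor User
participant "Web UI" as UI
participant "Order Service" as Svc
database "Orders DB" as DB

User -> UI : submit order
UI -> Svc : POST /orders
Svc -> DB : INSERT order row
DB --> Svc : ok
Svc --> UI : 201 Created
UI --> User : confirmation
@enduml
```

Rendering is a separate step (the plantuml tool or an editor plugin), so the text stays the artifact under version control.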
This is just false. People just need a better UML IDE, and to stop thinking you need 100% UML coverage. UML is supposed to save time: if it doesn't, don't do it; when it does, you're wasting time by not using it. UML is still the go-to for creating state machines. I use them all the time in software and computer engineering. I've been using StarUML myself for years and it's still usable with the unregistered free version; you just have to put cards behind the UML to block out the watermark. I can't recommend StarUML enough. If enough people start buying it they'll put out more updates – it's been years since the last update. Evidence UML is not popular right now.
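To illustrate the state-machine case, here is a minimal sketch written as PlantUML text rather than in StarUML (the states are hypothetical, just a generic job lifecycle):

```
@startuml
[*] --> Idle
Idle --> Running : start
Running --> Paused : pause
Paused --> Running : resume
Running --> Failed : error
Running --> Done : finish
Failed --> Idle : reset
Done --> [*]
@enduml
```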
If you're trying to break up a project into work for a large team, UML makes sense. Most people are clueless about how to do Test Driven Development right. I personally start with use case scenarios, then write console tests with dummy users named after my UML actors. By writing the console test with mock JSON data, you can quickly build high quality apps. The problem is when you have to update your UML, it becomes a time sink. Really we should have IDE and AI tools to reverse-annotate the UML models. And CV tools to turn drawings on the backs of napkins into UML.
I have not tried PlantUML but it looks really cool, and it's my style. Compared to StarUML, I would argue it's probably better for your creativity to type stuff out by hand, but StarUML's CAD-like interface is very good. I personally run a startup that does pen-and-paper GitHub/Markdown integrations and I'm working towards converting my I am You Language (IMUL) to PlantUML. It's competing with other products for attention though. I think that's the end argument about UML: it competes for time resources. I need help.
I don't remember all of UML anymore, but I still use sequence diagrams, activity diagrams and state diagrams. Using something standard is just easier than having to think about how to represent something, and then explain your representation to someone else.
During the design phase, I'll usually write PlantUML code in IntelliJ and have the real-time preview fullscreen on another monitor. It's a nice setup.
It has mostly died with the current generation of engineers, due to a common problem.
We tend to go to either extreme: UML all the things (model-driven design), massively over-specify things before you write code. Then we react by saying this thing doesn't work when you try to do non-sensible things with it, it's terrible, anyone who uses it doesn't get it.
The next gen of devs come along and think it's just for old folks who aren't down with the times.
Meanwhile, some people who never fell into the trap of the extremes carry on using it when useful and in a sensible way. Perhaps they even try to show some newer devs the benefits, but now have to get past these preconceptions, even though the newer guys haven't any experience. These people are now often leading large engineering groups, and often want a basic overview of designs (not to the minutest detail), and really don't care about your classes, but please give me a sequence diagram so I can see how this microservice is actually going to fit in, and show me you have thought through the state transitions of the data.
I don’t get the example given here that shows three steps joined by arrows and claims that the diagram is ambiguous about the order of execution of the steps.
The steps are called step 1, step 2 and step 3. The arrows between them must imply some kind of sequencing or dependency. If the person who drew the diagram didn’t mean to communicate that the execution sequence was step 1, then 2, then 3, they should have given the steps different names.
Simple dependency or data flow graphs like that are great for recognizing redundant dependencies, for example - like noticing ‘step 3 uses the results of step 1 and step 2, but step 2 depends on step 1 so we can just simplify this down to a sequential process - or we need to break the dependency of step 2 on step 1 so we can parallelize them’
This is notation to help you use pattern recognition and visualize structure to make better decisions, not to capture the final design. Diagrams are for helping with the ‘working out’ phase, not expressing the solution.
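As a rough sketch of that kind of dependency graph (purely illustrative, using the step names from the example above), drawn here in PlantUML:

```
@startuml
rectangle "Step 1" as S1
rectangle "Step 2" as S2
rectangle "Step 3" as S3

' Step 3 consumes results from both earlier steps,
' but Step 2 already depends on Step 1...
S1 --> S2 : result of step 1
S1 --> S3 : result of step 1
S2 --> S3 : result of step 2
' ...so either run 1 -> 2 -> 3 sequentially,
' or break the S1 -> S2 edge to parallelize.
@enduml
```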
I don't think it'll die entirely anytime soon, but it has certainly lost mind-share – especially in the fully by-the-book form. Vaguely UML-like ad hoc notations are way more common, and will survive even longer. (And my interactions with people aiming to do it properly have involved more ambiguity than I would expect from a good standard.)
UML started from the premise that software is a model of the world. But what software really is, is a model of a low-level program in assembler or binary that we want to construct. The proponents of UML didn't realize that you must either model the real world or model the software; they are not the same.
"Modeling software" really means writing a program in a high-level programming language.
The failure of UML was its schizophrenia: was it made for modeling the world, or for modeling programs?
If its creators had realized that this is the case, they would have asked: "How can we create a more high-level programming language than the existing ones?"
But UML creators were happy to claim that they have a great "modeling language" because then there was no real need to make it executable. The fact that some applications could produce some Java-code from a UML diagram does not mean they could create full running programs from it.
In a way, UML has survived in game engine "noodle graphs" (see Blueprints in Unreal Engine for instance). Those are more concerned with describing realtime data flow instead of component design, but the pros and cons are quite similar (simple things are simple, but complex things grow out of control).
Ah yes. That horribly failed project back in the day. Quoted 8 weeks to generate a requirements document and particular experimental prototypes for certain narrow areas to explore those requirements. Allowed two weeks for requirements gathering. The rest was then expected to be cycles of requirements and prototyping. This was all readily spelt out in the contract.
I made the mistake of using UML. They argued about lines and shapes of the sequence diagrams for around 3 days. Then it was fonts. Then it was colors. They wanted images. They wanted all sorts of nonsense in the diagrams. But the actual content? Apparently not so important.
Three weeks passed without a single line of code and the requirements gathering still wasn't complete or even approaching a first draft. I aborted it before this project had a chance to go into actual development which was the next stage and a separate contract. I learnt many valuable lessons. Including the wise counsel of having a contract with exit clauses. Thank you to the lawyer who wrote it up at relatively minor cost. The same lawyer who apparently was a waste of time according to colleagues at the time. Colleagues I later abandoned when they made other stupid decisions. Always get contracts verified by legal representation. Sounds obvious. But apparently not.
UML wasn't actually the issue. I just lacked the crayons and experience to stop what would later come to be termed "bike-shedding". Because the diagrams etc. were "human readable", it was deemed that anyone could have an opinion. So they did. Repeatedly. Ad nauseam.
Later projects went better when I limited the size of requirements gathering and who could make changes. Experience built up and my BS detector got a lot better.
But I kind of didn't mind UML as such. I actually prefer waterfall. Real waterfall is cyclic and resembles agile. But no one ever does it that way. They think it locks things in at each stage and the deliverables are like unchangeable stone tablets from on high. Ugh. So painful when done that way.
Speaking of UML, there was a general "software engineering" course at my uni with a section on UML diagramming. I remember creating these absolute beasts of UML diagrams that looked more like spider webs. It was fun, but absolutely useless in the real world lol
My personal view: kill the garbage that is UML with fire, and all the attempts at formal modeling. Design is too rich of a field to be modeled formally.
I disagree with the author's apparent take that all there is is either formal UML-type thought systems or a mess of design-less fluffy user stories. There is room for both.
But a thoughtful design represents and communicates deep thought, and formal design systems like UML, in my experience, do not help people communicate deep thoughts. The best design documents that I've read were very well-written prose, with occasional diagrams for illustration, or even code snippets, but not trying to convey too much in a formal manner which would've made the doc less readable not more.
I think all of us who studied CS (at least over 10 years ago, no idea what's being taught now) hated UML.
That said, there have been plenty of times where I have used certain kinds of UML.
For example, sequence diagrams [1] are a great way of modelling certain processes between actors and objects within your system. I even used one a few months ago to help a client understand a complicated part of the system we had developed. I tend to think more junior CS graduates might end up pulling this kind of thing together using sticky notes on some web 2.0 board.
Yes. UML was stupid, over-complicated, unproven, horrible to work with, and if that wasn't bad enough, it was mostly people who didn't know how to code very well that advocated for it.
It's been dead for a while now. Just being brutally honest.
I tried UML years ago with little success. I recently started working with BPMN 2 with some success -- swimlanes are easy to explain and make sense to most people. ERD's are still my favorite notation and really stood the test of time!
I think UML is not an efficient way for me to pick up information. I learned it at university and all, but I don't think I ever actually used it on the job. When I look up a design pattern, I try to understand it from a UML diagram, but almost immediately jump to Wikipedia for example code, as I consider it way more digestible. Not a coincidence, since I deal with code every day, and with UML diagrams almost never.
For states, I like plain old state diagrams.
Same goes for creating a design: I prefer sketching code in Notepad in some kind of Java/C++/Python Frankenstein over drawing UML.
I only find the Doxygen-generated inheritance hierarchies to be useful.
I've always thought of OOP as structuring a program the same way you'd structure a business. There are distinct divisions with responsibilities and internal state, with somewhat well defined ways to communicate with each other.
Part of why OOP got so popular in software businesses is because it removes an impedance mismatch between the functions of business and the software it produces.
In this view, UML is a kind of intermediate bytecode between business planning and software planning.
Where it falls apart, IMO, is that UML is too granular for business functions, and it enforces a probably suboptimal OOP way of approaching technical problems.
I've never encountered UML in the wild, but could say a few good words for SysML, which is built on top of UML.
Working on embedded software / hardware projects I've found SysML to be quite useful for systems design and documentation. Mostly BDD, IBD, Sequence, and State Machine diagrams. Parametric diagrams will probably come in handy at some point, as well.
SysML users tend to be in less vocal industries, especially those making expensive hardware with lots of up-front design, e.g. defense, aero, medical, etc. I expect we'll keep using it until something better comes along.
1. Diagram-driven code didn't pan out as well as advocates hoped (and to the extent it was useful, UML wasn’t a great language for it; BPMN was less bad at this.)
2. Modeling and analysis became devalued, adversely impacting modeling languages
3. OMG also ended up as the owner of BPMN (which was also affected by the first two factors) which has enormous overlap in function with UML. So, with a reduced role for visual modeling languages you've got two competing standards in the area from the same organization, neither one of which is really adapted to current usage patterns.
It's not been my observation that the overlap is that big, having done lots of BPM work using BPMN in a former life. BPMN is for describing much higher level processes than what UML is for.
> BPMN is for describing much higher level processes than what UML is for.
BPMN's process-oriented focus is often viewed as both more general (tech-neutral) and more suitable for high-level use than UML's OO focus, and there are more low-level implementation diagrams in UML (e.g., Class diagrams) and more high-level context diagrams in BPMN (e.g., Conversation), but there is still significant overlap. BPMN Choreography diagrams serve exactly the role of UML sequence diagrams. UML Activity diagrams and BPMN Orchestration diagrams cover very similar space.
We're using it via an executable process engine at my current company on a project that's not yet in production, so I'm still on the fence. I have no idea if anyone from the "business" side will ever look at these diagrams, and if it's just the developers looking at them, then why not look directly at the code with the support of a modern IDE?
It works well for business processes, e.g. here's one I did: capture credit application; do auto check for credit worthiness; do rules check for amount requested vs credit worthiness to trigger auto approve, auto reject or a manual approval gate; submit approved loan details to the mainframe for money issuance.
The tools I used generated a system using storage-backed messaging between every step for reliability, and then I could plug in a UI or an integration where needed, and generating monitoring screens to illustrate where in the process things are, cycle times, etc etc.
If you use it like that, as a config language you can use to drive creation of other things, it can work well, and it's a pretty mature standard, so it can describe lots of possibilities.
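As a rough illustration of that credit flow (sketched here as a PlantUML activity diagram, not the actual BPMN the process engine executes; the decision labels are paraphrased from the description above):

```
@startuml
start
:Capture credit application;
:Automatic credit-worthiness check;
if (Amount within auto-approve rules?) then (yes)
  :Auto approve;
elseif (Clearly outside credit worthiness?) then (yes)
  :Auto reject;
  stop
else (borderline)
  :Manual approval gate;
endif
:Submit approved loan details to mainframe;
stop
@enduml
```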
She found that programmers don't use it and architects barely use it.
My favorite quote: "There was a tendency for informants in large organizations who did not themselves use UML, or who used it selectively, to assume that colleagues in other roles were likely to use it more."
There was a big UML fashion at the company where I work: they even hired a specialist in the subject and ran training. A few years later the specialist left, and no one did UML anymore.
I (I should say we) have used UML class diagrams to build object oriented systems. The class diagrams are something a reasonably sophisticated, but non-programmer, person can understand, so they can participate in the design phase. Of course there's lots that comes after that design phase, but it set the stage.
In case it's of interest, the things we were modeling were bilingual dictionaries, and morphology and phonology (linguistics), all of which had a ton of structure we needed to model.
This fall was inevitable. Software components were being invented faster than their representations as business components could keep up.
I remember facing this dilemma early on in the 2000s. When I was writing JSP (Java Server Pages), we did not have an easy way to ‘model’ a JSP as a software entity in UML. A JSP did server-side logic as well as frontend. Then a forum suggested that we model what happens underneath – a JSP is compiled into a Servlet – and represent them as ‘JSP_Front’ and ‘JSP_Back’ as the accurate representation.
I like drawing data flow with UML flow diagrams when mapping out integrations in IT projects, such as this[0]. But other than that I have never seen anyone use them. I could never find a use.
In my college years I've been told a lot of things by my profs. ORMs will kill RDBs, UML is the shit, Agile is the future of all software engineering etc. I could never see most of those things happening but always assumed that they're smarter so they must know what they're talking about. It's kind of satisfying but also profoundly sad that they were both wrong and failed to instill a reflex of questioning authority in their students.
UML died because while people were buying UML to live the dream that programming could now be reduced to charting classes, real programming doesn’t work like that. All it added was some meaningless documentation that looked impressive but got out of date approximately immediately after the first class was implemented. For a while people would still pretend to use UML (even though they actually didn’t), but eventually we didn’t even bother pretending.
It's alive and well in enterprise projects where Solution Architects need to micro-manage frequently-replaced, Acceptance-Criteria-driven, non-thinking contractors.
The one great thing that came out of UML was sequence diagrams. I don't recall sequence diagrams being a thing before UML, but they are actually very useful for describing interaction flows.
What UML did was to provide software engineers with a common language when illustrating designs on a whiteboard. I guess there wasn't any money in stopping there, so they had to write a spec and them have very expensive software being built to support this vision.
Other people in this thread have expressed it, but the reason it's not useful is because most developers are doing something like UML, but it's not UML - so you get none of the benefits of the Unified part.
When I see a good UML diagram I know immediately what the bit of modelled code does. When I see a weird, cobbled together diagram of bits of code the creator thinks defines the state, I have gained nothing of value from viewing the diagram.
In my experience, when I see a good UML diagram I know I am seeing a picture of how someone thought something was going to work before they (or more likely someone else) actually built it. It might or might not accurately reflect how the code actually works. In particular constraints the diagram claims exist may or may not actually be enforced in the code. The UML diagram says a Foo only exists for the lifetime of the Bar that references it; well, this memory dump from production says otherwise, so I don’t care what your diagram says, the code is what matters.
I don't agree with the author's use of masala, as it's cultural appropriation. Using 'masala diagrams' to derogatorily describe software modeling is totally unnecessary. Why not simply call them blended diagrams or milkshake diagrams or literally anything else that describes a combination of stuff? Picking masala is totally out of context and isn't productive in the year 2021; let's not associate race with opinion.
There's an impedance mismatch between UML and actual programming languages. Idiomatic Python is different from idiomatic Java, and trying to use UML as it was originally intended – as a detailed, formal specification of a program – is quite at odds with that. You either end up having to ignore 90% of the diagram, only using the high-level details, or you end up with programmatically generated garbage code.
UML died alongside the waterfall method. UML, especially defining the method names et al, before you coded anything was an extreme of the waterfall method. I think generating UML from code could have been useful. But I was much more likely to grasp more of the system from a paragraph or two or well-written text. Well-written UML rarely helped. Roughly sketched boxes with arrows and a bit of text are the most useful.
Well, I still use some UML diagrams fairly often. Not so much for permanent or comprehensive system documentation, but to help me think through specific features or behaviors, and communicate these designs to others. I usually use a smallish subset of UML, because this usage doesn't require deep details.
I did give up on WYSIWYG tools for this, though. PlantUML support in VSCode is pretty good.
My entire experience with UML was as a high school aged kid who'd just found out about it, but despite my best internet searching, could find absolutely no resources on learning or using it. So I just gave it up and this post is pretty much the first I've heard about it since c. 2002.
So... yeah, sounds like it. Everyone I know that needs something UML could do just uses Draw.io these days.
I've been at it 8 years and never actually seen anyone use it, so I think of it as a relic of an era before my time.
I don't really agree with the premise that this is a big tragedy though. There's a danger of getting carried away with the analogy to traditional engineering but the cost of changing things in software is much lower, so exploratory work is much more viable.
The problem I found was that no matter how careful we were in drawing the UML there was always one box on the diagram that turned out to be the "pandora box". When someone started work on it it suddenly became apparent that all the complexity of the system lurked within - and we could never be sure which one it was or how to get rid of it/them.
This is what happens when writing documentation would take as much or more time than writing the code. We've all seen that most "no-code" auto-generating approaches to software development did not work except for simple solutions, and the irony was that the people who had the time and manpower for UML were the ones with the most complex software needs.
The trouble with boxes-and-lines is that they work fine for small problems and collapse in a mess of overlapping nonsense for complex situations.
This is not a UML-specific thing. Any visualisation made of boxes and lines that is not a strict acyclic single-parent hierarchy is doomed to failure for anything complex.
Ironically, UML diagrams are still very useful for their immediate purpose, whatever that is. I frequently use timing diagrams just for my own sake because having to articulate something always adds clarity, showing you where you were handwaving something in your head. It's the draw.io version of rubber ducky debugging.
Past a certain scale it just seems easier to read text than visual diagrams, and since apps have tended to get more complex over time, we're reaching that scale more often.
I've worked at two companies that started with database schema diagrams, but gave up on it. They got too complex to be useful, and people stopped looking at them.
Whenever my venture studio briefs developers on a new product that needs to be developed, we do so in what we call a high-definition UML, which contains all the logic but is centered around fairly high-definition UI wireframes, sometimes even design.
It's the best method we have found so far to brief developers and for them to follow.
The problem with UML is that it could only represent certain functionality... and counted on code generation / translation to make it happen. It was an extra step. It was much more useful for analyzing and diagraming code, but that too was limited by whatever analysis was being done to generate UML.
Rational Rose was the most expensive, worst piece of software I've ever had to use. It made me completely lose faith in formal design methods as a process, both because the tool sucked, but if the gurus of professional engineering did their best and produced junk, you lose faith.
GitLab supported Mermaid.js rendering for a while but last I checked that was broken.
That showed the most promise for a practical use of UML.
I have no idea what that article is trying to get at about multi-dimensional diagrams.
I can make masala in UML, there's literally nothing stopping me.
Fortune magazine predicted this back in 1992. They did a study showing that the growth of developers graduating from college was either flat or linear, while the growth of computing needs was exponential. The end result would be an exponential decrease in discipline in software as amateurs, hackers, trades people and what not necessarily fill the void. There is no other outcome. The challenge is for those areas such as military arsenals, power grids, healthcare and other safety domains to firewall off from the undisciplined masses and enforce discipline. However, Fortune pointed out that capitalism doesn't reward safety when it's cheaper to hire good lawyers and litigate. So Fortune also predicted an explosion in torts and tort lawyers. Vernor Vinge wrote about this in his novels. He supposed that rationalization would ensue and that undisciplined code would be seen as security through obscurity, a Gordian knot of convolution. One possible scenario is more-or-less Skynet, where a few computing centers replace millions of devices. This makes sense from a capitalism perspective, and governments can run a few tightly controlled central services. So those are the choices, given human nature. What's not a choice is an increase in discipline because, you know, human nature.
> In a model in which you pour user stories into a sausage machine, and you get a demo at the end of it (or a feature production release in a DevOps shop!) there is no room for purposeful, structured problem analysis anymore.
Rational was set to conquer the world. Never a good idea. However, I still use some of their ideas and the StarUML diagramming tool to help document preliminary design, use cases, deployment, etc. It makes communicating with non-tech people way simpler.
I use UML now just to make sense of the huge complexity in code (front-end dev): activity charts for logic, sequence charts for debugging, state diagrams for UI states. I have found it invaluable.
But I am the only developer in a team of >100 who does.
I started college back in 2016. I saw uml from day one. Never actually used it outside college. We used a lot of those tools that can generate diagrams from code and vice versa, but I personally never liked it.
I personally have adopted sequence diagrams in explaining processes to experts in non-programming fields with great success. Can't imagine using UML in daily coding practice, though.
The software engineering industry has simply decayed into chaos ... it’s mostly the blind leading the blind ... abstract thinking and design have been replaced by random walks ....
I'm a software engineer and use UML (specifically PlantUML) to produce diagrams for complex algorithms but I treat it more as a diagram tool than anything else.
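For instance, a small PlantUML activity diagram for a hypothetical batch-validation loop, used purely as a picture of the control flow rather than as a formal model:

```
@startuml
start
:Open input and output files;
while (More records?) is (yes)
  :Read next record;
  if (Record valid?) then (yes)
    :Transform and write to output;
  else (no)
    :Append to error report;
  endif
endwhile (no)
:Flush output and close files;
stop
@enduml
```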
As a tip, don't try and learn every minute detail; think 20% gets you 80% of the way, and the rest is probably not worth it for you. If it's useful to clarify your own thinking that's good, but it's probably lost on anyone you want to communicate it with.
no, I got a recruiter contacting me for some UML thing recently. Well, it was the Paris metro company, so I guess UML is dead, and they just got their "UML project" together.
Agile killed off UML. Pre-agile, people were either doing really slow iterations (spiral development, rational unified, etc.) or just trying to do waterfall because that's what people learned in school. Either way that meant obsessing over requirements for a while and then doing an actual design phase before you did any coding. And of course that required lots of diagrams and documentation, if only to give management some sense of progress (which to be fair was often the main point). UML merely tried to standardize that practice. And then IBM bought Rational around the same time the Agile Manifesto was signed.
IBM buying Rational emphasized how corporate things had gotten and drove home the point the Agile Manifesto signees were trying to make, which was basically about eliminating inefficiencies and bad decision-making from software development.
Extreme programming introduced the notion of a sprint measured in weeks and the notion of having working code at the end of each sprint. Later scrum, kanban and other agile methodologies copied at least those parts of the process.
Whichever your flavor of Agile is, two weeks does not leave a lot of room for maintaining artifacts that don't evolve along with code. Like UML diagrams. You draw a nice diagram, then you code for a week or so, and then your diagram is out of date. You could fix it by dedicating time to that. But you'd need to do so every two weeks. Or you just skip the part where you obsess about having diagrams and just move on straight to just worry about how to best run sprints.
Extreme Programming was extreme because it intentionally skipped both requirements specifications and design documents as things because they realized that requirements were a combination of wrong and out of date because of things you learned during the project. And consequently, the designs were wrong as well. You do a little bit of requirements and design each sprint. That's called a planning and estimation meeting. Requirements are called user stories now. And since there is only so much you can do to a software system in two weeks, you don't need a lot of UML documents to describe the changes you are going to make.
Along with Extreme Programming came tools like white boards, camera phones, and wikis. You use them to do a little white board session with typically some simple diagrams, you snap a photo of the whiteboard, and then put it on a wiki where no-one ever looks at it again. Good enough if you are doing two week sprints. The second you wipe the board, it fulfilled its purpose: sharing and communicating ideas and getting some consensus.
So, that's why UML died. It's no longer needed. Too expensive to create, more expensive to update, limited value once you have it, very limited shelf life once you stop maintaining it.
These days UML is the kind of thing I mostly expect to find in xkcd strips. I have been writing software for a few years and I haven't seen anyone use it non ironically outside of academia.
The versions of ISO 10303 I'm using are based on EXPRESS and EXPRESS-G; didn't see any SysML there; this even applies to ISO 10303-233, the systems engineering AP of STEP. There is a tendency towards RDF/OWL driven by the ISO 15926 series though.
I wrote "is moving". Nothing has been published yet that has been developed using SysML, the first standard will be the next edition of one of ISO 10303-242 or 239 or the first edition of ISO 10303-243.
Can you please be specific and provide us with current links or official documents which support your statement? I know that there are some exponents (mostly with computer science, not systems engineering background) who don't like EXPRESS and would rather use SysML, but these are just opinions, not the official roadmap which affects the ISO 10303 standard series. It's possible that there will be an official mapping from EXPRESS to SysML (as far as possible) in future to automatically generate SysML diagrams (in addition to EXPRESS-G). But it's very unlikely that SysML will replace EXPRESS or be used as an equivalent alternative. EDIT: I remember similar discussions in the nineties when UML supporters questioned the appropriateness of EXPRESS.
There are some tools [1] to allow the "harvesting" of existing EXPRESS models into SysML and to export XSD, JSON or EXPRESS schemas for delivery with the published standards. There will also be a way to map between existing models and new ones. The driver for this is to try to provide a way for ISO 10303-242 and ISO 10303-239 to interoperate better.
SysML is also replacing the use of IDEF0 for the high-level requirements diagrams, finding IDEF0 software was getting too hard.
That's apparently a tool to exchange UML diagrams using XMI. There is even an EXPRESS meta model by the OMG (to automatically generate UML projections from EXPRESS specifications, not updated since 2015). No conclusions can be drawn from this about the roadmap of the ISO working groups regarding future editions of ISO 10303. However, you had referred to minutes of TC184/SC4 where decisions regarding the use of SysML should have been made. These minutes, or an official roadmap from ISO, where the use of SysML is foreseen, would interest me (and a lot of others too, I suppose).
Have a closer look at those tools, they do everything that I described and are what we are using in the document production process.
I'm not copying committee working documents from ISO livelink to somewhere else. If you want to read them then join the working group, Switzerland used to be a Participating Member back in the 90s.
> I'm not copying committee working documents from ISO livelink to somewhere else
Well, then it makes little sense to refer to these minutes as evidence. From our discussion, I conclude that there is no reason to believe (absent evidence to the contrary) that "STEP (ISO 10303) is moving to use SysML to define the models for the collection of standards", neither as a replacement of nor as an equivalent alternative to EXPRESS. As a user of the standards, I do not care much which tools the working groups internally use to develop the standards.
The authors of -243 use the JSON schema as an input to OpenAPI tools to build webservices. The Application Protocol model with definitions of what things mean is in SysML, this will also be included in the package for the standard.
A new XML exchange format for models defined in SysML is being defined in ISO 10303-15.
Just had a look at the new ISO 10303-1:2021 (second edition, replacing the first edition from 1994), which appeared in March. EXPRESS is still the primary description language and will obviously remain with us for a long time to come. Section 6.3.3 explicates "EXPRESS models provide the basis for all specifications of product information in ISO 10303". SysML is not even mentioned (in contrast to e.g. UML). Section 6.3.6 states "ISO/TS 10303-25 does not map all EXPRESS constructs to the UML meta-model, because that meta-model does not support all the corresponding EXPRESS concepts. The specified mapping is a one-way mapping from EXPRESS into the UML Interchange Meta-model". There is no reason to assume that SysML is more expressive than UML in this regard. ISO 10303-11 was last confirmed in December 2019.
ISO 10303-15 is apparently a technical specification on how to transform SysML XMI to XML Schema (XSD) format.
A decision was made to have ISO 10303-1e2 just reflect what has been published to date, then start work immediately on ISO 10303-1e3 that will also contain the description of how SysML will be used.
The main use case of ISO 10303-15 is to be able to validate XML exchange files in the new format.
I expect it will take many years until the mentioned technical specifications become regular standards. I would be very surprised to see a third edition of ISO 10303-1 within the next eight to ten years. From what I've seen up to now is that SysML might get a similar position as UML in the todays version. Even if there is a feasible mapping from SysML XMI to EXPRESS, the official description language will still be EXPRESS in future editions of the ISO 10303 series. Fads come and go; we saw this with UML, and we will see this with SysML; SysML is simply not suited to replace EXPRESS, even less than UML. I bet that in seven years at the latest, SysML will be eclipsed by a new fad.
If I were actually surprised, ISO/TC 184/SC 4 would have done an extremely poor job in terms of reporting and public relations. But let's continue this discussion in five years, then we'll know more.
I think it highly depends on your work context. I wouldn't exactly call it dead, but it's firmly in the trough of disillusionment. Funnily enough, I think the author's analysis is a bit off, or rather misses essential points. They somewhat blame procedural changes for the demise of UML.
The problem they don't speak about is that UML is not universal; it has serious flaws.
# Complexity as a Modeling Language
It can only be understood by those extensively trained in UML. Really, show someone a UML diagram and they won't understand it. What are those arrows supposed to mean? The boxes, etc. The dozen or so different diagram types don't help either.
In fact, I have seen so many faulty diagrams – activity diagrams that contain errors and mistakes because the semantics were not properly understood – which just confirms how complex UML is. Furthermore, I was in UML corporate trainings and the instructor made mistakes about UML activity diagram semantics, etc. It's that bad. From that perspective, thinking that a business person may be able to understand it is wishful thinking.
# Object-Oriented Paradigm and the 'Encoding' Problem
UML at its core assumes object orientation. That means class diagrams replace Entity-Relationship diagrams. Class diagrams may work as an 'encoding' of ER diagrams, but it's a fairly complex one. This is a pattern that repeats for a number of applications of UML. It is possible to model a lot in UML, including by using its extension mechanisms, but the results really look like a strange encoding and it's hard to read these diagrams.
Overall, UML has replaced a zoo of diagram types that were more or less established, and the results are often not pretty. I once did a deep dive into diagrams of the '70s and '80s, and overall a lot of it felt refreshing.
# Model-based Engineering and Tool Vendor Takeover
I haven't been involved in UML prior to version 2, but overall it feels like UML is a vehicle for vendors of UML tooling to sell their products, and as such, UML has adopted the idea of model-based engineering.
The naive assumption of a software engineer about UML is that a couple of Visio/draw.io/Dia diagrams will do; over time one learns that it's hard to maintain these models, and the fix is to use a modelling tool such as Enterprise Architect or Rhapsody that contains a whole tree of UML entities from which you derive your diagrams as viewpoints. It's like falling down a rabbit hole. If you manage to avoid this rabbit hole, it just means you are probably only using a small subset of UML.
I think this led to UML becoming increasingly complex, and the language standard should probably have been limited in a way that keeps it useful without the tooling.
# Lack of Block Diagrams and Data-Flow diagrams
If you look at ad-hoc (non-UML) diagrams in open-source documentation and corporate ppt slide decks, or look at diagrams your colleagues draw on a whiteboard, what one really quite often sees is diagrams that show data flow, as in: "Frontend in the Browser requests data from the backend service, backend service retrieves data from the SQL database and the object-storage system". These simple relationships are remarkably hard to nicely communicate in UML. Hence, if you intuitively start to describe a Software system, UML will not make it easy.
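For comparison, about the closest quick approximation in PlantUML is an informal deployment-style sketch (hypothetical names); note that it is really just boxes and arrows with data-flow labels rather than any one canonical UML diagram type:

```
@startuml
node "Browser frontend" as FE
node "Backend service" as BE
database "SQL database" as SQL
cloud "Object storage" as OBJ

FE --> BE : request page data
BE --> SQL : query rows
BE --> OBJ : fetch blobs
@enduml
```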
# What's the alternative?
I am personally a huge fan of FMC (http://fmc-modeling.org). I am the first to admit that it is a little obscure, but it offers everything I consistently need:
* block diagrams for data-flow and system interactions, structure
* Petri nets for behaviour
* E/R diagrams for data structures.
Surprisingly, the diagrams seem to be fairly approachable for engineers and business people alike. A lot of the micro design decisions for the FMC diagrams have been done right. In block diagrams, arrows point in the direction of data flow, everyone can understand that. In E/R diagrams, the 1:N relations also have arrows pointing to the `1`, that's somewhat easy to grasp. Petri nets are equivalent to UML's activity diagrams, but don't omit the drawing of 'places'. With places being drawn, people have at least the chance to grasp the semantics and firing rules (that are also present in UML's activity diagrams but people just don't make that connection).
The trick is to tie your UML to your code with a few scripts. After a point, if you're manually interacting with it, you're doing it wrong. A lot like the AWS console.
Actual engineers still use it. Not the ones we started calling engineers because they know a bit of Linux and kubernetes.
I actually saw a kid a few years ago with a huge UML book, a student at the local uni. Being self-taught I had no idea what it was so I had to look it up. I can't even come close to working with those actual engineers, I just mess around in Bash and automate stuff you can learn for free using open source.
Not sure why you included the unnecessary gate-keeping. If you do the same things engineers do, you are an engineer. You may not be a good one, but you still count.
> I can't even come close to working with those actual engineers,
Then how do you know they still use UML if you've never been around them?