Hacker News

Of all the things I read at uni, UML is the thing I've felt the least use for - even when designing new systems. I've had more use for things I never thought I'd need, like Rayleigh scattering and processor design.


I think most software engineers need to draw a class diagram from time to time. Maybe there are a lot of unnecessary details in the UML spec, but it certainly doesn't hurt to agree that a hollow triangle arrowhead means parent/child, a plain arrowhead means an association, and a diamond at the owning end means aggregation or composition (ownership).
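
For example, roughly sketched in Mermaid class-diagram syntax (the class names are made up; only the relationship arrows matter here):

    classDiagram
        %% hollow triangle at the parent end: inheritance (parent/child)
        Animal <|-- Dog
        %% filled diamond at the owning end: composition (exclusive ownership)
        Car *-- Engine
        %% hollow diamond at the owning end: aggregation (shared ownership)
        Car o-- Driver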

As the sibling comment says, sequence diagrams are often useful too. I've used them a few times for illustrating messages between threads, and for showing the relationship between async tasks in structured concurrency. Again, maybe there are murky corners to UML sequence diagrams that are rarely needed, but the broad idea is very helpful.
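
A rough sketch of what I mean, in Mermaid sequence-diagram syntax (the participant names are hypothetical):

    sequenceDiagram
        participant Main as MainThread
        participant Worker as WorkerTask
        Main->>Worker: submit job
        activate Worker
        Worker-->>Main: progress update
        Worker-->>Main: result
        deactivate Worker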


True, but I don't bother with a unified system, just a Mermaid diagram. I work in web though, so perhaps it would be different if I went back to embedded (which I did for only a short while) or something else where a project is planned in its entirety rather than growing organically / reacting to customers' needs, trends, and the whims of management.

I just looked at Mermaid and it seems to be very close to the UML I meant in my previous comment. Just look at this class diagram [1]: triangle-ended arrows for parent/child, the classic UML class box of name/attributes/methods, stereotypes in <<double angle brackets>>, etc. The text even mentions UML. I'm not a JS dev so I tend to use PlantUML instead - which is also UML-based, as the name implies.
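
For instance, the Mermaid source for a minimal diagram of that kind might look like this (the classes are invented):

    classDiagram
        class Shape {
            <<abstract>>
            +int x
            +int y
            +draw()
        }
        Shape <|-- Circle
        class Circle {
            +int radius
            +draw()
        }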

I'm not sure what you mean by "unified system". If you mean some sort of giant data store of design/architecture where different diagrams are linked to each other, then I'm certainly NOT advocating that. "Archimate experience" is basically a red flag against both a person and the organisation they work for IMO.

(I once briefly contracted for a large company and bumped into a "software architect" in a kitchenette one day. What's your software development background, I asked him. He said: oh no, I can't code. D-: He spent all day fussing with diagrams that surely would be ignored by anyone doing the actual work.)

[1] https://mermaid.js.org/syntax/classDiagram.html


The "unified" UML system refers to things like Rational Rose (also mentioned indirectly several comments up), which would reflect over the code to auto-build diagrams and also auto-build/auto-update code from the diagrams.

I've been at this for 16 years. In those 16 years I've seen one planned project that stuck anywhere near its initial plan. They always grow with the whims of someone.

> I think most software engineers need to draw a class diagram from time to time.

Sounds a lot like RegEx to me: if you use something often then obviously learn it but if you need it maybe a dozen or two dozen times per year, then perhaps there’s less need to do a deep dive outside of personal interest.


UML was a buzzword, but a sequence diagram can sometimes replace a few hundred words of dry text. People think best in 2d.

Sure, but you're talking "mildly useful", rather than "replaced programmers 30 years ago, programmers don't exist anymore".

(Also, I'm _fairly_ sure that sequence diagrams didn't originate with UML; it just adopted them.)


>People think best in 2d.

No, they don't. Some people do. Some people think best in sentences, paragraphs, and sections of structured text. Diagrams mean next to nothing to me.

Some graphs, as in representations of actual mathematical graphs, do have meaning though - if a graph is really the best data structure to describe a particular problem space.

on edit: added in "representations of" as I worried people might misunderstand.


FWIW, you're likely right here; not everyone is a visual thinker.

Still, what both you and GP should be able to agree on is that code - not pseudocode, simplified code, draft code, but actual code of a program - is one of the worst possible representations to be thinking and working in.

It's dumb that we're still stuck with this paradigm; it's a great lead anchor chained to our ankles, preventing us from being able to handle complexity better.


> code - not pseudocode, simplified code, draft code, but actual code of a program - is one of the worst possible representations to be thinking and working in.

It depends on the language. In my experience, well-written Lisp with judicious macros can come close to fitting the way I think of a problem. But some language with tons of boilerplate? No, not at all.


As a die-hard Lisper, I still disagree. Yes, Lisp can go further than anything else to eliminate boilerplate, but you're still locked into a single representation. The moment you switch to a different task - especially one that actually cares about the boilerplate you've hidden, and not the logic you've exposed - you're fighting an even harder battle.

That's what I mean by the Pareto frontier: the choices made by various current-generation languages and coding methodologies (including the choices you make as a macro author) all promote readability for some tasks at the expense of readability for other tasks. We're just shifting the difficulty around the time of day, not actually eliminating it.

To break through that and actually make progress, we need to embrace working in different, problem-specific views, instead of on the underlying shared single-source-of-truth plaintext code directly.


IMHO there's usually a lot of necessary complexity that is irrelevant to the actual problem: logging, observability, error handling, authn/authz, secret management, adapting data to interfaces for passing to other services, etc.

Diagrams and pseudocode allow you to push those inconveniences into the background and focus on the flows that matter.


Precisely that. As you say, this complexity is both necessary and irrelevant to the actual problem.

Now, I claim that the main thing that's stopping advancement in our field is that we're making a choice up front on what is relevant and what's not.

The "actual problem" changes from programmer to programmer, and from one hour to the next. In the morning, I might be tweaking the business logic; at noon, I might be debugging some bug across the abstraction layers; in the afternoon, I might be reworking the error handling across the module; and just as I leave for the day, I might need to spend 30 minutes discussing an architecture issue with the team. All those things demand completely different perspectives; for each, different things are relevant and different things are just noise. But right now, we're stuck looking at the same artifact (the plaintext code base), and trying to make every possible thing readable simultaneously to at least some degree.

I claim this is a wrong approach that's been keeping us stuck for too long now.


I'd love this to be possible. We're analyzing projections from the solution space to the understandability plane when discussing systems - but going the other way, from all existing projections to the solution space, is what we do when we actually build software. If you're saying you want to synthesize systems from projections, LLMs are the closest thing we've got and... it maybe sometimes works.

Yeah, LLMs seem like they'll allow us to side-step the difficult parts by synthesizing projections instead of maintaining them. I.e. instead of having a well-defined way to go back and forth between a specific view and underlying code (e.g. "all the methods in all the classes in this module, as a database", or "this code, but with error handling elided", or "this code, but only with types and error handling", or "how components link together, as a graph", etc.), we can just tell LLMs to synthesize the views, and apply changes we make in them to the underlying code, and expect that to mostly work - even today.

It's just a hell of an expensive way to get around doing it. But then maybe at least a real demonstration will convince people of the utility of, and need for, doing it properly.

But then, by that time, LLMs will take over all software development anyway, making this topic moot.


OK, but my reference to sentences, paragraphs, and sections was not about code but rather documentation.

Oops, evidently I got downvoted because I don't think best in 2d and that is bad. Classy as always, HN.
