Create a full-detail schematic of the system in version-controlled UML.
At some point, "deploy" the UML by printing it into a 4 cm-thick binder of paper, then distribute these binders to the head architects.
Iterate on the UML until the architects are happy. (The architects spent many years trying to auto-generate code from the UML diagrams and have the results "fleshed out" by lowest-bidder consultants, though this never really worked. Their stated goal was to write no code in house at all: nothing but UML.)
Begin implementing the system in house with auto-generated code from the binder-of-UML as a baseline, after the lowest-bidder consultants had failed.
Quickly get into big fights between the coders-on-the-ground and management when it was found that the UML diagrams contained major architectural flaws and the UML-phase would not, actually, account for 80% of the project's duration and budget. Needless to say, more than half of the projects failed entirely.
This experience nearly made me leave the industry, before I discovered that there was plenty of software being written in a saner way. This was more than a decade ago, but to this day, just seeing UML diagrams turns my stomach.
This goal was promoted heavily in the late '90s and early '00s. Some tools could supposedly "round-trip" between UML and code, but I never saw that actually work. As originally conceived, UML was supposed to foster communication and there are times when a UML diagram is the perfect means to communicate the intent. I tend to use sequence diagrams and ER diagrams the most (yes - I know ER diagrams aren't UML but they're roughly analogous to class diagrams and what we're usually trying to flesh out is the relationship between models).
As an aside, there were/are graphical programming languages. I wrote a couple of Windows and Mac utilities in ProGraph during the '90s. The ProGraph environment lives on as Marten, which I recently played with on Linux. It's not nearly as clean as I remember it being.
The other useful diagram TFA doesn't cover is the finite state machine diagram. Handy, plus, they look cool.
As an aside, I find it's much, much easier to do with pen & paper than in any tool I've ever used (looking at you, OmniGraffle & Visio!)
> UML was supposed to foster communication
> could supposedly "round-trip" between UML and code, but I never saw that actually work
In practice in most industries, most folks can probably get away with just knowing the is-a/has-a type stuff and 1..N, * arity, etc. Swimlanes are also somewhat useful as a concept (but so are regular flowcharts).
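As a quick illustration, the is-a/has-a distinction and 1..N multiplicity mentioned above map directly onto code; here's a minimal Python sketch with invented class names:

```python
# Invented classes to illustrate the is-a / has-a distinction
# and the 1..N multiplicity a class diagram would show.

class Account:                      # base class
    pass

class SavingsAccount(Account):      # "is-a": the inheritance arrow in UML
    pass

class Customer:                     # "has-a": aggregation/composition
    def __init__(self):
        self.accounts: list[Account] = []   # 1..N multiplicity

c = Customer()
c.accounts.append(SavingsAccount())
print(isinstance(c.accounts[0], Account))   # True
```

That's roughly the whole vocabulary most day-to-day diagrams need.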
There's a whole other ton of UML (see 5/7th of Rational Rose around the early 2000s, which has probably gone all deep and crazy by now, etc.) that is, IMHO, pretty skippable.
I've not even seen some of the "stuff you have to know" (ball and socket, etc) used.
I do think that designs get better when you can draw them, as when they get hard to draw, it becomes conceptually obvious there may be problems.
UML generation code tools running against 1300+ Java classes and making some huge diagram covered up in arrows definitely doesn't work too well though :)
If you don't think the documentation is useful, please get rid of it. But blaming UML for your documentation problems is like blaming English for forcing you to rewrite comments that no longer make sense.
My sense is that people associate bad processes and bad teams with the tools that they were using. Crappy tools exist, and people have had some pretty dumb ideas about how to use UML, but UML is useful as a way to express aspects of your design that may lend themselves to a more object-oriented approach. This is something for which no other widespread diagramming language exists.
Whether you're using an iterative or waterfall approach, at some point it is necessary to provide some context for your software units, like explaining how they are associated or what their areas of responsibility are. That's all the documentation does.
ER diagrams are super useful, too.
The point at which I started to realize the usefulness of diagrams was when I started working on projects that were more than a few hours' worth of coding (my first internship). UML provides a way to visually represent how your code relates both to itself (classes, interfaces, objects) and to the overall project, for both the developer AND the business analysts who have no idea what inheritance or polymorphism mean. I've included an early draft of a piece of a rather large data model. It won't mean anything to you without the overall diagram, but you see some relationships between the customer's account, application and product: http://s18.postimg.org/gxfkjytft/diag.png
You probably had an example like grocery store checkout software, hotel booking software, or library cataloging software. A UML diagram can show to everyone all the possible routes a customer can take, such as starting points, ending points, resume milestones, activities that are and are not allowed, transient and permanent data, and everything in between. This all applies to the academic scenarios listed above, but it's (likely) more in depth than an assignment you'll have when learning about UML.
Just keep in mind that a large project will have multiple developers of varying expertise, along with business people who likely cannot write "Hello World!" in any language. UML attempts to bridge the gap between you, your superiors, your subordinates, and the non-tech side of the project.
We met with them, licensed their system and began to make UML. After several weeks we started to generate code, but it wouldn't run. Several more weeks and we had running code, but were missing key functional requirements. We pulled the plug after 6 months. It turns out that UML class and state diagrams are just not robust enough to explain specific functional details. The system generated thousands of lines of code to do simple things.
We ended up writing it ourselves in ASP, it took us about 3 weeks to code everything.
Nuevis went out of business in the original dot-com bust.
The biggest relief in starting to use XP was that the methodology described how we actually worked, not how we were supposed to work.
We simply stuck the Accenture clowns alone in a room with Rational Rose, and we never heard from them again (nor did we ever see any UML). Best case scenario.
After all, that's probably what it was designed with.
I used to do product reviews for a magazine, and Rose was the only review software I decided not to keep. And I'd have felt bad giving it to anyone else.
From that time on, any mention of Rose in a job description automatically disqualified it.
It was tolerable. But changes were a bit expensive, and we kept a reference machine for builds, which was not very nice. The environment was hard to replicate - behind the scenes was a sandbox of Smalltalk and that was sort of ugly.
We did a port to Rose, but Rose was not, frankly, as capable.
What's funny-peculiar is - as described in your experience - the whole idea of "architects that can't code." At the very least there's apparently no market for tools to enable that.
Now, I'm happy to find out that that is not indeed how things HAVE to work, and I'm a lot happier and less conflicted. And now that we have many prominent examples of more functional and humane corporate and engineering cultures, I'm sad for all those people still toiling away in dilbert-land.
You can also look around for medium-sized places where software is the primary product; they can typically afford some Dilbertiness, but not too much, or they get eaten by somebody else. Depends on your tolerance, and on your ability to carve your own defensible niche out.
But either way, the key is certainly to start looking. No, working in Dilbert is not inevitable, but you'll have to take positive action if you are stuck there now.
As I see it, the crucial difference between Dilbert and non-Dilbert is responsiveness to unhappiness and the ability to learn from failure.
In Dilbert-land, an entire engineering department can grow to be silently miserable. Outside of Dilbert-land, many people would raise their voices and with enough protest, major change would happen. That's because management knows that bad morale is deadly, and can't afford to restaff after mass departures. And outside Dilbert-land, people tend to not see themselves as trapped, and really will, indeed, quit.
Also, medium-sized agencies simply cannot afford to have more than 10% of their projects utterly fail - their cash-flow is too limited, and they live on their reputation and good relationships with clients. Unlike the Dilberts, they simply can't walk straight into failure over and over.
Let's be honest, those of us who have built careers on the "modern-web-stack running in AWS" tend to tilt in the bearded-hipster direction. I've interviewed people coming out of corporate behemoths, and they're typically older, have families, have a more conservative demeanor, and are a few years behind the HN curve, tech-wise. It takes some effort to cross that cultural gap and recognize the decades of engineering and inter-personal experience that some of these people have.
If that's not in the cards, maybe you could look slightly farther afield in your geographical area than you would normally, or look farther afield in your technical area than you normally would.
It might also help to work on some either open source or other public-facing projects (assuming you can and they don't conflict with your current employment, etc.). Then you show that you have experience other than just what you might have from your day job.
The printing thing stopped when online help got better and more ubiquitous. Now, I usually read docs on the web and only buy a book when I want to majorly invest in learning a new technology.
In fact I had burned them to a CD-ROM so I could pass them out easily to co-workers...
Better than printing them out, but still unthinkable now.
There. My 20+ years of experience in software architecture in various fields from games to networking tells me that you now know enough to work out the classes and their relationships in a large software system.
Don't fuss around with "aggregation" or "composition" or whatever. Don't spell out functions (though occasionally I'll jot one below a line to remind myself what the salient feature of the dependency is). And by no means write the class properties, their types, or their access specifiers (public, protected...)—this is way too much detail. A UML diagram is useful in modeling broad object relationships in a system. If you want to work out what properties a class should have, write the damn class. Any software developer worth his salt can figure out the code from a high-level diagram; don't write the code for him. Or do, but then don't call it an architectural diagram.
I know there's a whole culture of software development where architects design code but don't dirty their hands with writing it, then hand it off to underlings who type it up for them, and so on down some kind of techno-bureaucracy Great Chain of Being. Rubbish. Code architecture is a thing and some kind of diagramming is helpful, but UML as such is the sort of busywork and IRS-style hierarchism that marks bloated government jobs, not real productivity or real teamwork.
Give UML a miss and use something very, very simple.
I'd rather outline the major components of a system by drawing (on real paper) simple boxes and lines, or write the code that implements the system.
Not sure what code-as-picture achieves - it generally has worse tooling (less editable, less versionable, etc.) and tends to be used by 'architects' who don't write code, only for that UML to be essentially ignored by the coders on the ground.
The problem is that complex systems need complex design. As your design gets more sophisticated, you need more sophisticated ways to communicate the design. Concepts like dependency, multiplicity, inheritance, inclusion, asynchronous message, exception, and timed event are all things you can represent with boxes and lines, but unless you adorn the diagram with a huge amount of text you're not going to capture the distinctions between those different types of relationship.
It's not unlike any other language: I can use the word thought to describe something in my head that isn't physically manifested, but words like lie, memory, hypothesis, idea, belief, image, nostalgia, forecast, imagination are all used to distinguish between types of thoughts. If I used only simple words like "thought" then I wouldn't be able to express anything more sophisticated while still being concise.
In short, if you want to graphically describe something complex, you can either use a sophisticated visual language to do it concisely and unambiguously, or you can use a simple visual language and write tonnes of documentation to support it, with all the problems of attention, ambiguity, and rot that go along with it.
> The problem is that complex systems need complex design. As your design gets more sophisticated, you need more sophisticated ways to communicate the design
No. Complex systems need more thinking and refactoring until they are less complex.
It's incredibly naive (or at best disingenuous) to assume that complex systems can be reduced by thinking and refactoring. Perhaps some can, but when you consider many of the software artefacts we use daily (seeing as that's the context of this subthread), such as operating systems, virtualization platforms, or even good ol' clunky enterprise applications, you can't just wish the complexity away with a bit of hard work.
I'd be pretty surprised if any of the code that I am using to read this was designed with detailed UML diagrams.
It wouldn't have been possible at all if the pieces that make it up weren't loosely coupled.
>It's incredibly naive (or at best disingenuous) to assume that complex systems can be reduced by thinking and refactoring.
It's naive to assume that you can't. Loose coupling pretty much only comes as a result of extensive refactoring.
Google for "linux kernel diagram". It's quite well designed, displays loose coupling, high cohesion, and all the other things we like about good complex systems. If I want to dig in and start working on a module, I'd still have to understand the relationship between all the bits I work directly with (and probably some that are further away).
Don't confuse loose coupling with low complexity.
We already have a perfectly good way of representing the details of a system - it's called source code.
Interestingly enough, that's a bit of a false dichotomy.
Imagine it's your first day on the team, and there's thirty classes (or thirty functions) in this software package, and you want to understand their inheritance/composition, what's a good way to take a step back and just see how they relate to each other? You can browse some interface files or some source files, accumulating a model in your head. But the visual throughput into your brain is just so much better when you can see it in a diagram.
Regardless of whether a human draws the UML representation of the design in advance of the implementation or a machine deduces this from the existing source code, you can still benefit from a class diagram that will cut out the noise of the implementation itself. As long as we're drawing a diagram, why not have a common way to represent the concepts we use in software?
> perfectly good way of representing the details of a system - it's called source code.
> simply use a tool to generate a class diagram from the code
Once you resolve to generate that class diagram you have a choice in how to represent it. UML is just one way, proposed as a standard way, to represent that visually.
The problem of UML is that it tries to be the language of all these levels of design (and even different stages of design!), to the point that the highest-level design document is trying to be something so completely specified that there's no abstraction.
One of the most constructive things you can do when designing software as a part of a team is to review these diagrams with the team. Inevitably, if you create boxes and lines, you'll need some sort of legend that identifies the relationships established by the visual style of boxes and lines. UML is intended to be a common legend, like the schematic representation of circuits (resistors, capacitors, diodes, grounds, etc).
> Not sure what code-as-picture achieves - it's generally has worse tooling (less editable, less versionable, etc.) and tends to be used by 'architects' who don't write code, only for that UML to be essentially ignored by the coders on the ground.
An error that happens more often than we'd like to admit is when we totally mis-assess the requirements or omit big portions of an element's design. Sketching a quick diagram that shows "the UserAccount has a user_name, user_id, and a password_hash element" allows the team to sanity check the design approach -- "Oh, but a UserAccount also needs an email_addr element." "Are you kidding? How would we email them, there's no interface for that on this system." It's sad, but sometimes entire (enormous) features like this don't get caught until much later in the design cycle.
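The same sanity check works as a quick code sketch; here's the hypothetical UserAccount from the example above as a Python dataclass (the field names come from the comment, everything else is invented for illustration):

```python
from dataclasses import dataclass

# The hypothetical UserAccount from the example above, sketched as code.
# Reviewing even this much in a group can surface a missing field early.
@dataclass
class UserAccount:
    user_id: int
    user_name: str
    password_hash: str
    # Review question from the design session: does this system
    # also need an email_addr field, and if so, who sends the mail?

acct = UserAccount(user_id=1, user_name="alice", password_hash="x")
```

Whether the team reviews a diagram or a ten-line dataclass, the point is the same: make the shape of the data visible before it's buried in implementation.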
There are definitely better techniques than waiting until someone quibbles a field in a lines-and-boxes design session to pull that kind of insight out though. Just sitting around and telling stories about what the user will be able to do, and how they will be able to do it can work wonders.
But of course, that is exactly the opposite of what UML set out to achieve: a system so formal and complete that it could be used to generate code automatically.
Believe it or not, the customers understood it perfectly after some minutes of drawing and deleting lines. There was never a single complaint about that module, and only minor bugs, none affecting the functionality.
I hated use case diagrams, only useful for CYA purposes. The class diagrams that I've seen in the wild were E/R models translated to UML, but they made up the bulk of the documentation for some projects, so it was better than nothing.
The traceability systems from requirements to complete UML diagrams to code were a bunch of good intentions that fortunately never materialized.
I found it to be a pain as a student (it clearly was a fad at that point), but every time I go back to shipped code in mainstream paradigms, I understand why people went this way. It's as boring as it is intractable. We don't want to code at that level, we want abstractions, but these paradigms didn't help, so we needed diagrams. And it's mostly an offspring of OOP, which helped reduce C++-like complexity, but not for free.
IMO UML boils down to a visual type checker. We can layout things more efficiently than with linear text, and can appreciate the relationships between parts. But in a way, that's what a type checker does. I remember never needing a diagram (for medium single person programs that is) in Eclipse since it could look at the whole system for me and show contradictions or suggest options.
I'm now very regularly tempted to do things with a bit of empty interface logic checking, FP-ish minded. You can still write this as text, and as long as the types flow logically it's OK.
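In that spirit, here's a rough Python sketch of what "the types flow logically" might mean in practice: a structural interface that a type checker (mypy, an IDE) can verify, standing in for the boxes and arrows. All names are invented:

```python
from typing import Protocol

# The relationship a class diagram would draw as an arrow is expressed
# here as a type; a checker flags any contradiction automatically.

class Store(Protocol):
    def get(self, key: str) -> str: ...

class MemoryStore:
    """One concrete Store; satisfies the protocol structurally."""
    def __init__(self) -> None:
        self.data: dict[str, str] = {}
    def get(self, key: str) -> str:
        return self.data[key]

def lookup(store: Store, key: str) -> str:
    # Any Store-shaped object flows through here; no diagram needed
    # to see the dependency, the signature states it.
    return store.get(key)

s = MemoryStore()
s.data["a"] = "1"
print(lookup(s, "a"))   # prints 1
```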
ps: this was also influenced a bit by posts like these https://www.google.com/search?q=f+sharp+uml
I used to work on a CASE tool early in my career. I was always struck by the fact that we never ate our own dog food; that is, we never employed the tool we were working on to guide its own construction. This was the first time my eyes were opened to the fact that perhaps there was a lot of junk being built :)
CASE itself I believe was an attempt to replicate for assembling bits some of the benefits CAD/CAM was bringing to assembling atoms.
Although the latter is partly because the idea of doing 'SE' without the 'CA' is now inconceivable, even to senior management.
Same about dogfooding. When I realized RSA wasn't (at least when I was working on it) developed with RSA or any UML I felt weird... Doubly weird since other tools like IBM Jazz (team collaboration) used themselves to ensure a certain dose of value.
Nobody forces you to use software to draw UML diagrams and nobody forces you to include every aspect of your objects in the UML diagram. The basic idea of UML is pretty sane, many people just overdo it.
Informal UML diagrams - dynamic box-and-line relationship diagrams scrawled on a whiteboard where one arrow has been drawn in extra thick to emphasize a point; sequence diagrams where one colleague has drawn the sequence arrows as they work today and another has scribbled over the top the arrows as they should work in a different dry-wipe color; those are an appropriate use for "sort-of-UML" - as tools in rapid communication of ideas. My phone's full of photos of that kind of UML. But at that level of formality, nobody should mind if you get the open or filled diamond arrowheads wrong, or you use arrows for one to many relationships instead of inheritance, so long as everybody working on the diagram understands what is meant in that moment.
UML is a formal standard for back-of-the-envelope doodles, and that is exactly as ridiculous as it sounds.
However, I found that a good sequence diagram here and there, plus a deployment diagram, could really ease up communication when comparing the critical points of two or more different solutions - if done at the right level of detail.
I get your opinion from a programmer point of view. However from the big picture point of view, either from the architect or from the client (if she (even partially) gets UML), it could be really useful.
E.g. let's consider working for a large financial institute, designing and/or implementing very difficult workflows / algorithms what would you use instead? In my experience most of the clients don't take the time to read half (if at all) of the agreed spec, not to mention understanding. In that case an activity/state diagram really helped a lot.
And on "not versionable": you are right only to an extent (e.g. Visio diagrams exported to JPG...). However, there are tools/methods which can be versioned (e.g. generating UML via Graphviz/dot, and AFAIK Enterprise Architect supports versioning too).
I understand you, but the problem with simple drawings is that they are not a broadly understood common language.
Another good use case for UML is the visualization of complex flows in frameworks.
++ E-R (entity-relationship) diagrams. I find it easier to look at boxes for each table and follow the lines signifying relationships to other boxes. The "crow's feet" can signify 1-to-many. The diagram is easier than reading a sequential list of SQL CREATE TABLE statements, making a mental note of "FOREIGN KEY" strings, and mentally backtracking to the parent table.
++ swim lanes to show how the "state" of a system is supposed to change with "time". This can succinctly show how data "flows" without having to actually run a program and watch variables in a debugger to manually recreate the "swim lane" in your head.
++ truth tables to summarize combinations of valid/invalid business rules and associated side effects. A grid is easier than parsing a long (and often nested) list of if/then/else/switch statements.
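To make the truth-table point concrete, here's a toy Python decision table standing in for nested conditionals; the business rules are invented for illustration:

```python
# Invented business rules as a decision table instead of nested
# if/else: (is_member, in_stock) -> action. The grid makes every
# combination explicit, so a missing case is visible at a glance.
RULES = {
    (True,  True):  "ship",
    (True,  False): "backorder",
    (False, True):  "require_prepayment",
    (False, False): "reject",
}

def decide(is_member: bool, in_stock: bool) -> str:
    return RULES[(is_member, in_stock)]

print(decide(True, False))   # backorder
```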
As for UML, the notation never seems to be that succinct or helpful to me. On the surface level, it seems that UML (for code) should have the same return-on-investment as E-R (for db tables) but it doesn't in my experience.
I also wonder if there is a cultural component to UML usage. It doesn't seem like tech companies (such as Microsoft/Google/Amazon/Ubisoft/etc) internally use UML as a prerequisite step for building systems. On the other hand, I could see more UML usage at non-tech companies (such as banks/manufacturing/government) building line-of-business CRUD apps. Grady Booch (Booch & UML notation) did consulting about software methodology at non-tech companies so that may have been a factor.
I believe all of these statements to be true.
I had a contract many years ago with a large insurer. Their development process basically consisted of drawing really complex UML diagrams, then hitting the Big Red Button and having the modeling tool generate 40,000 lines of framework code. The chief architect explained to me that really the only work required was just a tiny bit of business logic in the appropriate places.
Fortunately I was not part of the main dev team, which for some strange reason (at least in the lead architect's mind) had the damnedest time with this system. My job was to create an internal permissions system. Given app X, user Y, and action Z, was the action allowed or not.
I looked at the problem for a while, and no matter how I thought about it, to me I had three lookup tables and one method. Boom, I'm done.
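The commenter doesn't show the code, but the shape described (three lookup tables and one method) might look something like this Python sketch, with all tables and names invented:

```python
# Three lookup tables: apps, users, and the permissions joining them.
apps = {"billing", "reports"}
users = {"alice", "bob"}
permissions = {
    ("billing", "alice"): {"read", "write"},
    ("reports", "bob"):   {"read"},
}

def is_allowed(app: str, user: str, action: str) -> bool:
    """Given app X, user Y, and action Z: is the action allowed?"""
    return action in permissions.get((app, user), set())

print(is_allowed("billing", "alice", "write"))  # True
print(is_allowed("reports", "bob", "write"))    # False
```

Around 20 lines, as the commenter says - no 40,000-line framework required.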
The lead architect wanted me to still draw a diagram with one class, push the button, and get the 40,000 lines of code. For some reason, this did not appeal to me.
Took me about 3 weeks to convince him that really 20 lines or so of code was all we needed. I still had to draw the diagram, though.
That's the horror story -- one among dozens I have. But on the flip side, I've been with teams that interviewed the customer while sketching out a domain model. Since we all understood UML, a quick and lightweight sketch using the appropriate notation got agreement on a ton of detail just taking 30 minutes or so. That would have been very difficult using a conversation or word processor. Sketching without some common lightweight understanding could have led to rookie errors.
There is nothing in this world better for getting quick technical agreement on complex problems than group sketching using lightweight UML. The trick is sticking to the absolute minimum.
In my career, I have only seen a few things that caused religious fervor on any significant scale: XML and UML. But I would like to know if the level of specification around these languages is the off-putting aspect, or if it is something else. Did mathematicians of the Newton-Leibniz era exchange letters denouncing proofs or trolling with irreducible polynomials?
Now something like Lucidchart is a great way to knock out some swim lanes or ER diagrams without the nonsense of automatically generated code. You can use the diagrams to get your team and the client on the same page without UML becoming a religion.
Particularly, for my personal projects, I use the use case diagram to map the requirements and the features my application will have, associated with my prototypes. Other diagrams, like the class diagram, I usually use just to map the domain before developing the persistence layer. This is how all my projects start, even if I am working alone. It works for me and it's part of my creative process.
All the "Software Engineering" classes at university were just UML. For many years I thought SE was just that: UML.
When I did my first job interview I was rather relieved that the hiring manager wanted me to draw UML. He was very pleased with my UML skills, but he always said I shouldn't draw it with all the details; just the basics are enough :D
That was 2011... I never had to use UML in an interview again. But I often used it as a documentation tool for a big PHP codebase I had to manage.
UML and ER diagrams are often an overkill when designing an app. But they are nice to visualize what is going on in legacy stuff.
Thing is, the core elements of UML are very useful in communicating a design or an idea. Class diagrams are a great way to discuss an OO-ish codebase in front of a whiteboard (or any data model, really). When you do that, it really helps when everybody knows that an arrow in static UML diagram types means "dependency" and not "the data flows from here to there".
Similarly, I still haven't seen a better way to visualise state than with a UML state chart.
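For what it's worth, the information a state chart draws as circles and arrows also fits in a transition table in code; here's a minimal Python sketch of a made-up order workflow:

```python
# A made-up order workflow as a transition table:
# (current_state, event) -> next_state. This is exactly what a UML
# state chart depicts, just in tabular rather than graphical form.
TRANSITIONS = {
    ("new",  "pay"):    "paid",
    ("paid", "ship"):   "shipped",
    ("new",  "cancel"): "cancelled",
}

def step(state: str, event: str) -> str:
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition for {event!r} in state {state!r}")

state = step("new", "pay")
state = step(state, "ship")
print(state)   # shipped
```

The diagram is still easier to take in at a glance, which is the commenter's point; the table is just the diffable, testable twin.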
It's also very nice if you can draw a UML object diagram that people understand (looks like a class diagram, except you basically draw a hypothetical runtime situation of instantiated classes and you underline the object identifier names). This works best when people understand that the picture on the left is a class diagram (design time) and the one on the right is an object diagram (runtime example) of the same classes. This is not complicated stuff, but it doesn't really work as well when half the team thinks UML is for losers.
Now, bear with me, I'll be the first to agree, UML is a bloated piece of shit. Package diagrams, wtf, who needs that? Use case diagrams that show which use cases are specified, instead of how the use cases go - seriously? Activity diagrams so you can draw a 5-line method on an entire sheet of paper, big fucking what the hell were you guys thinking?? Why do I even know what this stuff is? What a waste of time - even the decent diagram types have 60% bullshit syntax and only 40% useful stuff. And message sequence charts are nice enough for protocols but impossible to draw right.
But to dismiss UML just because some enterprise architects went a little overboard in 2002 is a bit like dismissing all of OOP because 15-level inheritance hierarchies used to be hip.
I wish we could agree on a tiny subset of UML that actually makes sense, and all learn that. This post makes a good start for class diagrams, although IMO even the ball-and-socket notation is overblown nonsense from a time long gone. Maybe we should do this, and give it a separate name.
On a mildly related note, one thing I like about OOP is that you can draw pictures of it easily. Does anyone here know of a good way to visualize functional code structure? You can draw a dependency chart of modules of functions but that only gets you so far.
Years ago I co-authored a book on UML, but the only UML diagrams that I still use are sequence diagrams which I think are great for explaining interactions between objects or separate services.
I live in the Boston Rt 128 area and I pass OMG's building all the time, and I just have no idea how they are still in business (they are near TripAdvisor's new, completely awesome building).
I wonder how many massive companies continuously donate to OMG and do not realize it.
However, UML was designed as a standard, and near-UML is not UML. Ergo, UML is useless.
I feel better already.