Create a full-detail schematic of the system in version-controlled UML.
At some point, "deploy" the UML by printing it into a 4 cm-thick binder of paper, then distribute these binders to the head architects.
Iterate on the UML until the architects are happy. (The architects had spent many years trying to auto-generate code from the UML diagrams and have the results "fleshed out" by lowest-bidder consultants, though this never really worked. Their stated goal was to write no code in house at all: nothing but UML.)
Begin implementing the system in house with auto-generated code from the binder-of-UML as a baseline, after the lowest-bidder consultants had failed.
Quickly get into big fights between the coders-on-the-ground and management when it was found that the UML diagrams contained major architectural flaws and the UML-phase would not, actually, account for 80% of the project's duration and budget. Needless to say, more than half of the projects failed entirely.
This experience nearly made me leave the industry, before I discovered that there was plenty of software being written in a saner way. This was more than a decade ago, but to this day, just seeing UML diagrams turns my stomach.
This goal was promoted heavily in the late '90s and early '00s. Some tools could supposedly "round-trip" between UML and code, but I never saw that actually work. As originally conceived, UML was supposed to foster communication and there are times when a UML diagram is the perfect means to communicate the intent. I tend to use sequence diagrams and ER diagrams the most (yes - I know ER diagrams aren't UML but they're roughly analogous to class diagrams and what we're usually trying to flesh out is the relationship between models).
As an aside, there were/are graphical programming languages. I wrote a couple of Windows and Mac utilities in Prograph during the '90s. The Prograph environment lives on as Marten, which I recently played with on Linux. It's not nearly as clean as I remember it being.
The other useful diagram TFA doesn't cover is the finite state machine diagram. Handy; plus, they look cool.
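A state machine diagram translates almost mechanically into a transition table, which is part of why the diagrams are so handy. A minimal sketch (states and events are made up for illustration):

```python
# Transition table: (current state, event) -> next state.
# Each entry corresponds to one arrow in the state machine diagram.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

def step(state, event):
    """Return the next state, or raise if the diagram has no such arrow."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition from {state!r} on {event!r}")
```

The nice property is that the code and the drawing stay in one-to-one correspondence: if it's hard to draw, the table gets ugly too.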
As an aside, I find it's much, much easier to do with pen & paper than in any tool I've ever used (looking at you, OmniGraffle & Visio!)
> UML was supposed to foster communication
> could supposedly "round-trip" between UML and code, but I never saw that actually work
In practice in most industries, most folks can probably get away with just knowing the ISA/HASA type stuff and 1..N, * arity, etc. Swimlanes are also somewhat useful as a concept (but so are regular flowcharts).
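That small core maps directly onto ordinary code. A minimal sketch (the classes are made up, not from any real model):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Engine:
    horsepower: int

@dataclass
class Wheel:
    position: str

@dataclass
class Vehicle:
    # HASA (composition): every Vehicle owns exactly one Engine (1..1).
    engine: Engine
    # 1..N multiplicity: a Vehicle has one or more Wheels.
    wheels: List[Wheel] = field(default_factory=list)

# ISA (inheritance): in a class diagram this is the hollow-triangle arrow
# from Truck up to Vehicle.
class Truck(Vehicle):
    pass
```

That handful of relationships covers most of what people actually put on whiteboards.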
There's a whole other ton of UML (see 5/7ths of Rational Rose around the early 2000s, which has probably gone even deeper and crazier since) that is, IMHO, pretty skippable.
I've not even seen some of the "stuff you have to know" (ball and socket, etc) used.
I do think that designs get better when you can draw them, as when they get hard to draw, it becomes conceptually obvious there may be problems.
UML generation tools run against 1300+ Java classes, producing one huge diagram covered in arrows, definitely don't work too well though :)
If you don't think the documentation is useful, please get rid of it. But blaming UML for your documentation problems is like blaming English for forcing you to rewrite comments that no longer make sense.
My sense is that people associate bad processes and bad teams with the tools that they were using. Crappy tools exist, and people have had some pretty dumb ideas about how to use UML, but UML is useful as a way to express aspects of your design that lend themselves to a more object-oriented approach. This is something for which no other widespread diagramming language exists.
Whether you're using an iterative or waterfall approach, at some point it is necessary to provide some context for your software units, like explaining how they are associated or what their areas of responsibility are. That's all the documentation does.
ER diagrams are super useful, too.
The point at which I started to realize the usefulness of diagrams was when I started working on projects that were more than a few hours worth of coding (my first internship). UML provides a way to visually represent how your code will both relate to itself (classes, interfaces, objects) and the overall project to both the developer AND the business analysts who have no idea what inheritance or polymorphism mean. I've included an early draft of a piece of a rather large data model. It won't mean anything to you without the overall diagram, but you see some relationships between the customer's account, application and product: http://s18.postimg.org/gxfkjytft/diag.png
You probably had an example like grocery store checkout software, hotel booking software, or library cataloging software. A UML diagram can show everyone all the possible routes a customer can take, such as starting points, ending points, resume milestones, activities that are and are not allowed, transient and permanent data, and everything in between. This all applies to the academic scenarios listed above, but it's likely more in depth than an assignment you'll have when learning about UML.
Just keep in mind that large projects will have multiple developers of varying expertise, plus business people who likely cannot write "Hello World!" in any language. UML attempts to bridge the gap between you, your superiors, your subordinates, and the non-tech side of the project.
We met with them, licensed their system and began to make UML. After several weeks we started to generate code, but it wouldn't run. Several more weeks and we had running code, but were missing key functional requirements. We pulled the plug after 6 months. It turns out that UML class and state diagrams are just not robust enough to explain specific functional details. The system generated thousands of lines of code to do simple things.
We ended up writing it ourselves in ASP, it took us about 3 weeks to code everything.
Nuevis went out of business in the original dot-com bust.
The biggest relief in starting to use XP was that the methodology described how we actually worked, not how we were supposed to work.
We simply stuck the Accenture clowns alone in a room with Rational Rose, and we never heard from them again (nor did we ever see any UML). Best case scenario.
After all, that's probably what it was designed with.
I used to do product reviews for a magazine, and Rose was the only review software I decided not to keep. And I'd have felt bad giving it to anyone else.
From that time on, any mention of Rose in a job description automatically disqualified it.
It was tolerable. But changes were a bit expensive, and we kept a reference machine for builds, which was not very nice. The environment was hard to replicate - behind the scenes was a sandbox of Smalltalk and that was sort of ugly.
We did a port to Rose, but Rose was not, frankly, as capable.
What's funny-peculiar is - as described in your experience - the whole idea of "architects that can't code." At the very least there's apparently no market for tools to enable that.
Now, I'm happy to find out that that is indeed not how things HAVE to work, and I'm a lot happier and less conflicted. And now that we have many prominent examples of more functional and humane corporate and engineering cultures, I'm sad for all those people still toiling away in Dilbert-land.
You can also look around for medium-sized places where software is the primary product; they can typically afford some Dilbertiness, but not too much, or they get eaten by somebody else. Depends on your tolerance, and on your ability to carve your own defensible niche out.
But either way, the key is certainly to start looking. No, working in Dilbert is not inevitable, but you'll have to take positive action if you are stuck there now.
As I see it, the crucial difference between Dilbert and non-Dilbert is responsiveness to unhappiness and the ability to learn from failure.
In Dilbert-land, an entire engineering department can grow to be silently miserable. Outside of Dilbert-land, many people would raise their voices and with enough protest, major change would happen. That's because management knows that bad morale is deadly, and can't afford to restaff after mass departures. And outside Dilbert-land, people tend to not see themselves as trapped, and really will, indeed, quit.
Also, medium-sized agencies simply cannot afford to have more than 10% of their projects utterly fail - their cash-flow is too limited, and they live on their reputation and good relationships with clients. Unlike the Dilberts, they simply can't walk straight into failure over and over.
Let's be honest, those of us who have built careers on the "modern-web-stack running in AWS" tend to tilt in the bearded-hipster direction. I've interviewed people coming out of corporate behemoths, and they're typically older, have families, have a more conservative demeanor, and are a few years behind the HN curve, tech-wise. It takes some effort to cross that cultural gap and recognize the decades of engineering and inter-personal experience that some of these people have.
If that's not in the cards, maybe you could look slightly farther afield in your geographical area than you would normally, or look farther afield in your technical area than you normally would.
It might also help to work on some either open source or other public-facing projects (assuming you can and they don't conflict with your current employment, etc.). Then you show that you have experience other than just what you might have from your day job.
The printing thing stopped when online help got better and more ubiquitous. Now I usually read docs on the web and only buy a book when I want to invest seriously in learning a new technology.
In fact, I had burned them to a CD-ROM so I could pass them out easily to co-workers...
Better than printing them out, but still unthinkable now.