I would say category theory is the wrong title for this, though. Category theory is occupied with the algebras that emerge when you put other algebras into the same space and look at the mappings among them. It emerged in algebraic topology, where you would construct a family of algebras describing aspects of a geometric structure and then establish connections among them in order to track the effects of operations on those geometric structures. Categories gave you one algebra, functors gave you families of algebras, and natural transformations gave you operations on the underlying spaces.
Many of the algebras described here, and others that I have found useful, emerged earlier in abstract algebra or universal algebra (monoids, lattices) or in parallel in combinatorics (magmas).
The category-theoretic view is often an inconvenient setting. Lattices, for example, are best dealt with simply as sets with operations that obey specific axioms. You get theorems from the axioms, and if you need one of those theorems to apply to your system, you make sure you implement it in a way that obeys those axioms.
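To make the "set with operations obeying axioms" view concrete, here is a minimal Python sketch of my own (not from the comment above): the powerset lattice, with meet as intersection and join as union, and the lattice axioms checked by brute force.

```python
# Hypothetical illustration: the powerset of {1, 2, 3} as a lattice,
# i.e. a set (the "carrier") with two operations obeying the axioms.
from itertools import combinations

def powerset(xs):
    """All subsets of xs, as frozensets."""
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

carrier = powerset([1, 2, 3])
meet = frozenset.intersection   # greatest lower bound
join = frozenset.union          # least upper bound

# You get theorems from the axioms, so first check the axioms hold:
for a in carrier:
    for b in carrier:
        assert meet(a, b) == meet(b, a)             # commutativity
        assert join(a, b) == join(b, a)
        assert meet(a, a) == a and join(a, a) == a  # idempotence
        assert join(a, meet(a, b)) == a             # absorption
        assert meet(a, join(a, b)) == a
        for c in carrier:
            assert meet(meet(a, b), c) == meet(a, meet(b, c))  # associativity
            assert join(join(a, b), c) == join(a, join(b, c))
```

No functors or natural transformations in sight; the axioms alone do the work.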
Or, if you have an algebra close to what you need, you may be able to modify it in the abstract setting to keep the properties you want. I ended up doing this with lattices to describe tagging regions of genomes.
The agreement part is that the algebraic structures that naturally arise in my programming experience are indeed ultra-basic: monoids, lattices, semirings, partial orders, and so on. (When I need elementary linear algebra, things are getting fancy!)
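As a sketch of my own (not from the comment) of how ultra-basic these can be: a monoid is just an associative operation with an identity, which is exactly what a generic fold needs.

```python
# Hypothetical illustration: a monoid is a set with an associative
# operation and an identity element; that is all a generic fold needs.
from functools import reduce

def mconcat(op, identity, xs):
    """Fold xs with a monoid; the identity covers the empty list."""
    return reduce(op, xs, identity)

add = lambda a, b: a + b

assert mconcat(add, 0, [1, 2, 3, 4]) == 10               # (int, +, 0)
assert mconcat(add, "", ["fo", "ob", "ar"]) == "foobar"  # (str, +, "")

# Associativity is what lets you split the work anywhere and combine
# partial results, e.g. for parallel or incremental reductions:
xs = list(range(100))
assert mconcat(add, 0, xs[:37]) + mconcat(add, 0, xs[37:]) == mconcat(add, 0, xs)
```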
The disagreement arises from the fact that the right home for these basic concepts is often surprisingly exotic. For example, it's often the case that what you need is not a monoid in the category of sets, but a monoid in the category of sets and relations, or in the category of partial orders and monotone functions.
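A hedged sketch of what the ordered case amounts to (my illustration, and `check_monotone` is my own brute-force helper, not a proof): a monoid in partial orders and monotone functions is, roughly, an ordered carrier whose operation respects the order in each argument.

```python
# Hypothetical illustration: spot-check that an operation is monotone
# in each argument on a finite sample of an ordered carrier.
def check_monotone(op, elems):
    return all(op(a, b) <= op(a2, b2)
               for a in elems for a2 in elems
               for b in elems for b2 in elems
               if a <= a2 and b <= b2)

# (N, +, 0) and (N, max, 0) qualify: the operation is monotone.
assert check_monotone(lambda a, b: a + b, range(8))
assert check_monotone(max, range(8))

# Subtraction does not: it is antitone in its second argument, so it
# cannot be the operation of a monoid in this ordered setting.
assert not check_monotone(lambda a, b: a - b, range(8))
```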
Another example is when you want to represent syntax trees with scoped binding structure. Now a syntax tree is the dumbest possible algebraic structure (it's a free algebra with all generators and no relations), but when you have binding you need a free algebra in the category of presheaves over finite sets and injections.
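One concrete shadow of the presheaf story, in a sketch of my own: lambda terms in de Bruijn form, where a term makes sense relative to a scope of n free variables and an injection between scopes acts on terms by renaming.

```python
# Hypothetical illustration: scoped lambda terms with de Bruijn
# indices, and renaming along a map between scopes.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    ix: int              # de Bruijn index into the current scope

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

@dataclass(frozen=True)
class Lam:
    body: object         # binds index 0 inside its body

def rename(t, rho):
    """Act on term t with a scope map rho (intended to be injective)."""
    if isinstance(t, Var):
        return Var(rho(t.ix))
    if isinstance(t, App):
        return App(rename(t.fn, rho), rename(t.arg, rho))
    if isinstance(t, Lam):
        # Under a binder, index 0 is the bound variable; shift rho past it.
        return Lam(rename(t.body, lambda i: 0 if i == 0 else rho(i - 1) + 1))
    raise TypeError(t)

def weaken(t):
    """Rename along the injection that adds one unused free variable."""
    return rename(t, lambda i: i + 1)

# \x. x y   (y free: index 0 outside the binder, 1 inside it)
term = Lam(App(Var(0), Var(1)))
assert weaken(term) == Lam(App(Var(0), Var(2)))
```

The free variables only ever get renamed along injections, which is exactly the functoriality the presheaf formulation packages up.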
As it happens, today I was thinking about semirings, and how to apply them to discrete event systems, and was fascinated to see that both algebraic geometers (Alexander Grothendieck) and computer scientists (Robin Milner) were doing some of the same sorts of tricks to transfer algebraic structures (like rings) over to settings lacking an inverse (like semirings)--rather than take the direct approach (as you mentioned in your lattice example):
There are many times in mathematics where some class of objects has a nice additive structure, except that one can’t always invert elements. One approach is to say, “oh well,” and develop the theory of monoids; another, which we’ll discuss today, adds in formal inverses, in effect pretending that they exist, and uses them to recover something useful.
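To make the semiring/discrete-event connection concrete (an illustration of my own, not from either comment): in the min-plus, a.k.a. tropical, semiring, "addition" is min and "multiplication" is +, and the usual matrix algebra computes shortest paths, with no additive inverses anywhere.

```python
# Hypothetical illustration: the min-plus (tropical) semiring, with
# min as addition (identity: infinity) and + as multiplication
# (identity: 0). Matrix algebra over it computes shortest paths.
INF = float("inf")

def minplus_matmul(A, B):
    """Matrix product with (min, +) in place of (+, *)."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Edge weights of a 3-node graph (INF = no edge, 0 on the diagonal).
D = [[0,     5, INF],
     [INF,   0,   2],
     [1,   INF,   0]]

# Squaring gives the cheapest paths using at most two edges.
D2 = minplus_matmul(D, D)
assert D2[0][2] == 7   # 0 -> 1 -> 2 costs 5 + 2
assert D2[1][0] == 3   # 1 -> 2 -> 0 costs 2 + 1
```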
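And the "formal inverses" trick the quote describes is the Grothendieck group construction. A minimal sketch of mine for the monoid (N, +), where it manufactures the integers as formal differences:

```python
# Hypothetical illustration: the Grothendieck group construction on
# (N, +). A formal difference a - b is the pair (a, b), and
# (a, b) ~ (c, d) exactly when a + d == b + c; this recovers Z.
def diff_equal(p, q):
    (a, b), (c, d) = p, q
    return a + d == b + c

def diff_add(p, q):
    (a, b), (c, d) = p, q
    return (a + c, b + d)

def diff_neg(p):
    a, b = p
    return (b, a)          # the formal inverse we "pretend" exists

three = (5, 2)                       # represents 5 - 2 = 3
assert diff_equal(three, (3, 0))     # same integer, canonical form
assert diff_equal(diff_add(three, diff_neg(three)), (0, 0))  # 3 + (-3) ~ 0
```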
So I disagree; I think the title is right. If you don't go as far as category theory, then you've stopped short of the biggest pay-off.
That is an interesting description of design patterns, and not one I've heard before.