This might be my own confirmation bias, but this is my takeaway: point of view, and the constraints you see or believe exist, are the main determinants of choices, not whether some particular pattern or feature of a language is intrinsically good.
The older I get and the more code I write, the more I find this to be true. I change my own mind about things I used to argue fiercely over and things I thought were tautologically true.
I think (hope) this is making me more open-minded as I go, more willing to listen to opposing points of view, and more able to ask questions about what someone else's constraints are rather than debating the results. But, who knows, I might be wrong.
Pointing back at yourself over 10 years is pointing to a different place, sure, but it is also a different person.
Static typing is useful, especially in large projects, if it provides the right guarantees.
OTOH, whether it does that depends on the type system. Go, Java, Pony, Rust, and Haskell are all static, but their type systems offer very different capabilities. If a type system imposes a lot of ceremony and fails to provide the guarantees your project actually needs, it's a pure burden. If it's low-ceremony and provides guarantees important to your project, it's a clear, undiluted benefit. Reality often falls somewhere in between.
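As a hedged illustration of what "guarantees" can mean here (using Rust, one of the languages named above): making the possibility of absence part of the type turns a forgotten missing-value check into a compile error rather than a runtime surprise. The `port_for` function and config data below are invented for the example.

```rust
use std::collections::HashMap;

// A lookup that may fail returns Option<u16>, not a value that might be null.
fn port_for(config: &HashMap<String, u16>, service: &str) -> Option<u16> {
    config.get(service).copied()
}

fn main() {
    let mut config = HashMap::new();
    config.insert("web".to_string(), 8080u16);

    // The compiler forces a decision about the missing case here;
    // using the result directly as a number would not type-check.
    let port = port_for(&config, "db").unwrap_or(5432);
    println!("db port: {}", port);
}
```

Whether that check is worth the ceremony is exactly the project-dependent tradeoff being discussed.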
If you are writing a small one-time script to accomplish a task, that kind of protection is clearly of low value.
If you are trying to write or maintain a system intended to last 20 years and keep bugs out of 100 million lines of code, every kind of check that can be automated has extremely high value.
Most projects are somewhere between these two extremes. Where the cutoff point lies at which strong static typing starts to help is what we should be debating, not its inherent value, as Dahart suggested.
Maybe a few type errors will still slip by, but you'll have found and fixed so many other kinds of errors much earlier. Kinds of errors that benefit from being caught immediately instead of festering because they passed a type checker. (I've never really found a type error to be a catastrophic, oh-I-wish-we'd-found-this-sooner kind of bug. You fix it and move on. It's not dissimilar to fixing the various null pointer exceptions that plague lots of corporate Java code.)
To me your obvious claim is not obvious at all, because the tradeoff space is so much richer than what type systems alone cover. We're not even touching on what you can do with specs and other processes that happen before you write code in any language, nor on other language tradeoffs: immutability, everything-is-a-value, various language features (recall that Java only recently got streams and lambdas), expressiveness (when your code is basically pseudocode without much ceremony, or better yet a DSL, there are far fewer places for bugs to hide)... Typing just doesn't tell that much of the story.