The way I've dealt with complexity in large code bases is by being fearless about refactoring. Refactoring may not reduce the complexity of what the software does, but it tremendously reduces the complexity of understanding the code base, by realigning the structure of the code with the actual problems being solved.
Refactoring gets a lot less scary when you have greater confidence in the low-level correctness of the code.
On your second point, yes, I have found that FP has some, shall we say, interesting jargon. But I have trouble thinking of succinct names for a lot of FP constructs that are nonetheless useful, such as monads. A lot of the more colloquial terms that come up in brainstorming sessions might even undermine understanding by suggesting a false equivalence. I think the same argument can be made for mathematical notation.
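To make the monad point concrete: here's a minimal sketch of a Maybe-style monad in Python (the names `Maybe`, `bind`, `parse_int`, and `reciprocal` are my own illustration, not anyone's library). The useful part is hard to name colloquially: `bind` chains steps that can each fail, so the "absent" case propagates without nested None checks.

```python
from typing import Callable, Generic, Optional, TypeVar

T = TypeVar("T")
U = TypeVar("U")

class Maybe(Generic[T]):
    """Minimal Maybe monad sketch: a value that may be absent."""
    def __init__(self, value: Optional[T]):
        self._value = value

    def bind(self, f: Callable[[T], "Maybe[U]"]) -> "Maybe[U]":
        # Short-circuit: an absent value propagates without calling f.
        if self._value is None:
            return Maybe(None)
        return f(self._value)

    def get_or(self, default):
        return self._value if self._value is not None else default

def parse_int(s: str) -> Maybe[int]:
    try:
        return Maybe(int(s))
    except ValueError:
        return Maybe(None)

def reciprocal(n: int) -> Maybe[float]:
    return Maybe(None) if n == 0 else Maybe(1.0 / n)

# Chaining steps that can each fail, with no nested None checks:
result = parse_int("4").bind(reciprocal)     # holds 0.25
failed = parse_int("zero").bind(reciprocal)  # holds nothing
```

Try naming that pattern something friendlier than "monad" and you end up with terms like "box" or "wrapper" that suggest the wrong equivalence, which is exactly the problem.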
In summary, I'd turn your last sentence around a bit: yes, the #1 problem is complexity, but you can reduce complexity significantly by applying correctness, modularity, and other programming 'buzzwords'.
You can rail against complexity itself, but I think we're probably on the bottom end of a very large complexity slope over the next decades. So we'll need better and better constructs to deal with it.
At that point you're talking about multiple codebases, and the complexity becomes managing transactions, data transformations, and contracts across discrete processes.
I'm not sure how that's germane to the discussion at hand. In fact, to the opposite point, I've found that in multi-organization refactors and designs, functional programming continues to be a useful mine for concepts that simplify thinking about data transformations, immutability, and data contracts.
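As a sketch of what I mean (the `Order` type and `apply_discount` are made up for illustration): treating a cross-process contract as an immutable record means transformations return new values instead of mutating shared state, which is an FP idea that ports directly to multi-codebase designs.

```python
from dataclasses import dataclass, replace

# An immutable record standing in for a cross-process data contract.
@dataclass(frozen=True)
class Order:
    order_id: str
    amount_cents: int
    currency: str

def apply_discount(order: Order, pct: int) -> Order:
    # Pure transformation: the input Order is left untouched.
    discounted = order.amount_cents * (100 - pct) // 100
    return replace(order, amount_cents=discounted)

original = Order("o-1", 1000, "USD")
updated = apply_discount(original, 10)
# original still has amount_cents == 1000; updated has 900
```

Nothing about this requires an FP language; it's the concepts (immutability, pure transformations) that carry over, which is my point.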