Another issue is that, if you are talking about small-to-medium sized applications, then clearly there are differences between languages. For example, it is pretty clear that writing a script is easier in Perl than in C, or that writing a medium-sized expert system is easier in Lisp than in Pascal.
However, if you consider large-scale applications (100k+ LOC), then I don't believe there is any real difference between writing in C, C++, Java, or Common Lisp, as long as the programmers have deep experience with the language used.
Just notice that the language is not going to solve the large-scale problem by itself. As long as the language has tools for creating abstractions, the code will have about the same complexity no matter what. Whether that complexity is encapsulated in simple concepts (structs and functions) or higher-level concepts (closures and continuations) depends on the taste of the developers and on the language used.
The choice of abstraction does matter. If you use weak ones, your productivity takes a serious hit: your program will be bigger, more complex, and have more errors (squared).
C++ abstractions, for instance, are incredibly weak. Take the function abstraction, which isn't even complete: you have no way to write an anonymous function the way you write a literal integer. Higher-level concepts, as you call them, aren't more complicated than the "simple" ones. Often they are just less familiar and more consistent.
: Anonymous functions should actually be called "literal functions":
(fun x -> 7 * x + 42) -- a literal function
357                   -- a literal integer
2 + 3                 -- an expression which yields an integer
f . g                 -- an expression which yields a function
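The same idea can be sketched in Python, where a lambda is written inline exactly like a numeric literal and functions compose as ordinary values (the names `compose`, `inc`, and `square` here are just illustrative):

```python
# A function literal is a value you can write inline,
# the same way you write a literal integer.
double_plus_42 = lambda x: 7 * x + 42   # a literal function
n = 357                                 # a literal integer

def compose(f, g):
    """Yields a new function: the analogue of f . g."""
    return lambda x: f(g(x))

inc = lambda x: x + 1
square = lambda x: x * x
h = compose(square, inc)   # h(x) == (x + 1) ** 2

print(double_plus_42(1))   # 49
print(h(3))                # 16
```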
There are two kinds of design patterns: architectural patterns (e.g. MVC) and language patterns (e.g. Iterator). In the language I use for work, C#, we have to use a lot of both kinds. Often the "all code must be in a class or a method" way the language works gets in the way (I can only imagine it's much worse in Java). In 100k LOC, I bet 25-30% of it (a random "from the gut" guess) must be language patterns (e.g. Visitor).
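To make the "language pattern" point concrete, here is a minimal sketch of the Visitor pattern over a hypothetical expression tree (the names `Num`, `Add`, and `Evaluator` are invented for illustration, not from any real codebase). Notice how much of the code is dispatch plumbing rather than the problem itself:

```python
# Visitor language pattern: every node type needs an accept() method,
# and every operation needs a visitor class with one method per node type.

class Num:
    def __init__(self, value):
        self.value = value
    def accept(self, visitor):
        return visitor.visit_num(self)

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right
    def accept(self, visitor):
        return visitor.visit_add(self)

class Evaluator:
    def visit_num(self, node):
        return node.value
    def visit_add(self, node):
        return node.left.accept(self) + node.right.accept(self)

expr = Add(Num(2), Add(Num(3), Num(4)))
print(expr.accept(Evaluator()))  # 9
```

Adding a new operation means writing a whole new visitor class; adding a new node type means touching every visitor.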
Once you realize that almost none (if any) of the language patterns are needed in Lisp, you realize that in 100k LOC you only work with the problem. It happens to me pretty often that I run into a situation where there are two possible representations of something, both with problems, and I realize that in Lisp I wouldn't even have noticed the situation, because I could have used a more natural solution right from the start (CLOS's generic-function approach makes all the difference in the world here).
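To see why generic functions dissolve the pattern, here is the same toy evaluator rewritten with `functools.singledispatch`, a single-dispatch approximation of CLOS generic functions available in Python's standard library (again, `Num`, `Add`, and `evaluate` are illustrative names):

```python
# With generic functions, the accept()/visit_*() plumbing disappears:
# you attach behavior to types directly, and new operations are just
# new generic functions.
from functools import singledispatch

class Num:
    def __init__(self, value):
        self.value = value

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right

@singledispatch
def evaluate(node):
    raise TypeError(f"no evaluate method for {type(node).__name__}")

@evaluate.register(Num)
def _(node):
    return node.value

@evaluate.register(Add)
def _(node):
    return evaluate(node.left) + evaluate(node.right)

print(evaluate(Add(Num(2), Add(Num(3), Num(4)))))  # 9
```

The node classes now carry no dispatch code at all; the Visitor "pattern" has become an ordinary language feature.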
Keep in mind I've used C++ and its descendants longer than I've used Lisp.
I've worked on heavily used programs in several languages (including Lisp), and frankly, in my experience, the quality of the programmer matters far more than the framework they start from.
Can you provide a source for the 30% figure?