This is why we will not have, and should not want, a "sufficiently smart compiler," but instead a "sufficiently predictable compiler." If the compiler is an enormously complex inference system, then tiny language-level changes can result in huge performance changes (see "space leaks" in Haskell, or "auto-vectorization" in basically anything). The solution isn't adding more knobs to the compiler; it's adding easier inter-language communication. Scripting languages like Matlab, Perl, Python, and R have been doing this right for decades: make a good effort at a specific domain (math, text, statistics), and make it easy to call into lower-level code for the key pieces.
A simple example: Most high-level array implementations have a reserve method that lets you specify the number of items you expect to add. This allows all the required memory to be allocated up front, avoiding repeated re-allocations and the copies they entail.
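A minimal sketch of the idea in Java, where ArrayList's initial-capacity constructor plays the role of reserve (the class name and sizes here are just illustrative):

```java
import java.util.ArrayList;

public class ReserveDemo {
    public static void main(String[] args) {
        int n = 100_000;
        // Without a capacity hint, the backing array fills up and is
        // re-allocated and copied roughly log2(n) times as items are added.
        ArrayList<Integer> grown = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            grown.add(i);
        }

        // With the hint (the constructor argument is ArrayList's version
        // of reserve), the memory is allocated once, up front.
        ArrayList<Integer> reserved = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            reserved.add(i);
        }

        // The visible result is identical; only the allocation pattern
        // differs -- which is exactly why it is safe for a human (or a
        // library) to add the hint, and dangerous for a compiler to guess.
        System.out.println(grown.equals(reserved)); // prints "true"
    }
}
```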
In many cases, it would be trivial for a compiler to automatically determine the size of the array and add a reserve call automatically (because the entries are added in a loop with an easily-determined iteration count). Yet compilers cannot be allowed to do that, because doing so changes the visible behaviour of the program.
However, the library in which the array is implemented is free to define the semantics of the array. So that library could provide some hint to the compiler, in essence a library-provided optimization pass, which allows such reserve calls to be added automatically.
I'm not saying that coming up with a good DSL to describe such optimizations is easy, but it may well be worthwhile in the long run.
You actually get huge performance gains just from improving locality. A lot of deeper optimizations are possible too, but I'm not sure what compilers nowadays actually do -- it has been a long time since I worked on a compiler team!
Such information might be problematic to integrate into a legacy project, but in a language which allowed for optional type annotations, such information could be fed into programmer tools and more easily analyzed. (Example: this code here where the value isn't an int -- do we really need that, or could we move the error checking elsewhere, so we could just say it's an int?)
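A hypothetical sketch of that refactoring in Java (the method names are invented): move the error checking to the boundary so the rest of the code can treat the value as a plain int.

```java
public class AnnotationSketch {
    // Before: the result is "an int or an error string", so no type
    // annotation (and no tool) can call it an int.
    static Object parseAgeLoose(String s) {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException e) {
            return "not a number: " + s;
        }
    }

    // After: the error is handled here, at the boundary, so downstream
    // code -- and an optional annotation -- can just say int.
    static int parseAgeStrict(String s, int fallback) {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException e) {
            return fallback;
        }
    }
}
```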
There are two cases I can think of where the static typing in the most common languages is insufficient. The first is union types--you can't easily express a data structure like JSON in Java (although one could argue this is a feature, not a bug). The second case is where you need to map types to instances of those types, something akin to a Map&lt;Class&lt;T&gt;, T&gt; in Java syntax. While you can generally express the latter in a method signature (e.g., public &lt;T&gt; T get(Class&lt;T&gt; clazz)), there's no way to implement that method without resorting to subverting the type system.
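A minimal sketch of that second case -- a map from types to instances (what Effective Java calls a typesafe heterogeneous container). The public API is fully type-checked, but the implementation has to store plain Object and cast on the way out, which is the "subverting the type system" part. The class name TypeMap is just illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class TypeMap {
    // The value type has to be Object: no Java type relates each
    // Class<T> key to its own T value inside one map.
    private final Map<Class<?>, Object> entries = new HashMap<>();

    public <T> void put(Class<T> type, T instance) {
        entries.put(type, instance);
    }

    public <T> T get(Class<T> type) {
        // Class.cast performs the dynamic cast the static type system
        // can't prove: safe here because put() only stores a value
        // under its own Class key, but the compiler can't see that.
        return type.cast(entries.get(type));
    }
}
```

Usage looks fully static from the outside: `map.put(String.class, "hi"); String s = map.get(String.class);` compiles with no casts at the call site.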
Dynamic typing can be beneficial if you would otherwise be stuck subverting the type system more often than following it, although that's as much an argument for a more powerful static type system as it is one for dynamic typing.