Yes, this is the current trend, and the results don't really justify the cost.
Which is why I (roughly) agree with Daniel Bernstein's polemic "The Death of Optimizing Compilers" https://cr.yp.to/talks/2015.04.16/slides-djb-20150416-a4.pdf
(I frequently make code 10x to 1000x or more faster, and compilers' contributions to that total tend to be fairly minimal, though larger than zero and usually worthwhile. But they're not worth the current shenanigans, and not worth having no idea how the code is going to turn out.)
Moreover, this explains the NASA coding rule that all loops shall have an explicit upper bound.
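In practice the rule looks something like this; a minimal sketch, assuming a hypothetical Node type and MAX_NODES bound (the rule itself is, as far as I know, Rule 2 of the JPL "Power of Ten" guidelines):

    // Hedged illustration: Node and MAX_NODES are invented for this example.
    // The explicit bound guarantees termination even on a corrupted (cyclic)
    // list, which is what makes the worst-case cost easy to reason about.
    #include <cstddef>

    struct Node { int value; Node* next; };

    constexpr std::size_t MAX_NODES = 1024; // explicit, statically known bound

    int sum_list(const Node* head) {
        int sum = 0;
        std::size_t steps = 0;
        for (const Node* n = head; n != nullptr; n = n->next) {
            if (++steps > MAX_NODES) break; // every loop gets a hard upper bound
            sum += n->value;
        }
        return sum;
    }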
The deeper issue seems to be: if you write code in a fashion that lets you reason easily about its performance, you might not get optimal performance in every case, but you do establish a minimum level of performance.

So performance stability is more important than reaching peak performance, because peak performance can easily be destroyed by accident in future versions of the software.

The only exceptions are well-defined tasks with stable input/output contracts (e.g. numeric primitives like matrix multiplication or the Fourier transform), where the whole point of a new version is performance improvement and nothing else.
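To make the stability point concrete, here is a minimal sketch (an invented example, not from any particular codebase): both functions compute the same sum, but only one has a performance floor that doesn't depend on the optimizer.

    #include <cstdint>

    // Relies on the compiler performing tail-call elimination. If a future
    // compiler version (or a plain debug build) stops doing it, stack usage
    // silently degrades from O(1) to O(n), and large n will crash.
    std::int64_t sum_recursive(std::int64_t n, std::int64_t acc = 0) {
        if (n == 0) return acc;
        return sum_recursive(n - 1, acc + n);
    }

    // Performance-stable by construction: constant stack and linear time no
    // matter what the optimizer does. Not optimal (n * (n + 1) / 2 would be),
    // but the minimum level of performance is guaranteed.
    std::int64_t sum_loop(std::int64_t n) {
        std::int64_t acc = 0;
        for (std::int64_t i = 1; i <= n; ++i) acc += i;
        return acc;
    }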
It seems clear to me that, at the present state of the art, most developers are better at picking algorithms, while compilers are better at picking low-level instructions. There are some exceptions, but things like GotoBLAS are clearly exceptions and not the rule.
At least claims that something "will be optimized away" can readily be tested empirically, in a way that most people will listen to. I have no problem testing the / operator, though I have never had holders of such beliefs accept my results. People claiming something like "virtual function calls can be optimized away", on the other hand, can handle it when I demonstrate that inside a single binary they can be, but at places like library boundaries they cannot.
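A minimal sketch of that demonstration (class names invented; the exact outcome depends on compiler, flags, and whether LTO is enabled):

    struct Shape {
        virtual ~Shape() = default;
        virtual double area() const = 0;
    };

    struct Square final : Shape { // 'final': no subclass can override area()
        double side = 1.0;
        double area() const override { return side * side; }
    };

    double local_case(const Square& s) {
        // The static type is Square, which is final, so the compiler can
        // prove the target and emit a direct (even inlined) call.
        return s.area();
    }

    double boundary_case(const Shape& s) {
        // If 's' arrives across a library boundary, the concrete type is
        // unknowable at compile time, so the indirect call through the
        // vtable has to stay.
        return s.area();
    }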
This is one thing I dislike about C and C++ culture: there is a tendency to micro-optimize code like that without even profiling it, or without it having any actual impact on the application being built.

So one ends up with endless bikeshedding discussions about how to write code instead of writing it.
A few years ago, some Ruby dev claimed that declaring methods without parentheses would let the Ruby parser parse method declarations faster. He never measured it; he just thought the parens were ugly, and a ton of gem devs removed theirs claiming a speed-up. It wasn't until a few years later that someone benchmarked it and found either no difference at all, or that removing them cost the tiniest amount.
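For what it's worth, settling claims like that rarely takes more than a few lines. A hedged sketch of such a micro-benchmark in C++ (variant_a and variant_b are placeholders for whatever is in dispute; here they echo the division question from earlier in the thread, and a real measurement would still want repeated runs and a proper harness):

    #include <chrono>
    #include <cstdint>
    #include <cstdio>

    std::uint64_t variant_a(std::uint64_t x) { return x / 8; }  // what you'd write
    std::uint64_t variant_b(std::uint64_t x) { return x >> 3; } // the hand "optimization"

    template <typename F>
    double ns_per_call(F f) {
        constexpr std::uint64_t N = 50'000'000;
        volatile std::uint64_t sink = 0; // keeps the calls from being optimized out
        auto start = std::chrono::steady_clock::now();
        for (std::uint64_t i = 0; i < N; ++i) sink = sink + f(i);
        auto stop = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::nano>(stop - start).count() / N;
    }

    int main() {
        std::printf("variant_a: %.2f ns/call\n", ns_per_call(variant_a));
        std::printf("variant_b: %.2f ns/call\n", ns_per_call(variant_b));
    }

On any optimizing compiler the two variants typically compile to the same instruction, so the measured difference comes out as noise, which is rather the point.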