It does when you multiply it across millions of CPUs. Saving 100 watts per CPU across 3 million CPUs is 300 megawatts, enough to shut down the equivalent of an entire power plant.
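A quick back-of-the-envelope sketch of that figure (the per-CPU saving and fleet size are the numbers quoted above, not measured data):

```python
# Rough fleet-level power saving, using the figures from the comment above.
watts_saved_per_cpu = 100        # assumed per-CPU saving
cpu_count = 3_000_000            # assumed fleet size

total_watts = watts_saved_per_cpu * cpu_count
print(f"{total_watts / 1e6:.0f} MW saved")        # 300 MW

# Over a year of continuous operation:
kwh_per_year = (total_watts / 1000) * 24 * 365
print(f"{kwh_per_year / 1e9:.2f} TWh/year")       # ~2.63 TWh/year
```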
I'm all for efficiency, but if we are to progress as a civilization we need to find a way to keep power from holding us back. We can get to a point where we have nearly unlimited power with minimal impact on the environment around us. That needs to happen as soon as possible.
There are an estimated 3 billion people in the world who still rely on solid fuel (wood, coal, peat moss) to cook food and generate heat [1]. We are extremely far away from being able to use 100% electricity to meet two basic needs for the whole population. From that perspective power-plant-scale computing is an incredible luxury.
But that won't happen before all M2/M3 Macs are obsolete. Having goals like that is great, but until we reach that point, efficiency is still important.
Performance per TDP is always relevant. Sure, individuals may decide they just want to maximize some other metric, but that's a different decision entirely. It's like comparing a motorcycle and a truck without mentioning everything else that differs between the two.
TDP is the wrong measure here. The i9-13900K and i7-13700K both have a 125 W TDP and a 253 W PL2 spec. By that calculation, the 13900K is more performant per watt of TDP, so it's "better" for efficiency.
Computation per kWh (or rate of computation per kW of actual draw) is the right efficiency metric, not anything divided by TDP (thermal design power).
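A minimal sketch of why the two metrics can disagree, using made-up benchmark scores, measured power, and runtimes (none of these numbers are real measurements):

```python
# Hypothetical chips: identical TDP on paper, different real-world draw and throughput.
chips = {
    #          (benchmark score, TDP in W, avg measured power in W, runtime in s)
    "chip_a": (30000, 125, 250, 600),
    "chip_b": (24000, 125, 150, 700),
}

for name, (score, tdp_w, avg_power_w, runtime_s) in chips.items():
    energy_kwh = avg_power_w * runtime_s / 3.6e6   # joules -> kWh
    print(f"{name}: score/TDP = {score / tdp_w:.0f}, "
          f"score/kWh = {score / energy_kwh:.0f}")

# chip_a "wins" score/TDP (240 vs 192), but chip_b gets more work done per kWh
# (~823,000 vs ~720,000), because TDP says nothing about what the chip
# actually drew while running the workload.
```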
If you have a workload that pushes the 13700K and 13900K to their turbo limits (and have ensured that those limits are actually configured to be the same), then you really will find the 13900K getting more work done for the same power and energy. That's how the extra cores help in power-limited scenarios.
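Concretely, under a shared power cap, energy per job is just the cap times the runtime, so the part that finishes sooner also uses less energy. A toy illustration (only the 253 W cap comes from the spec above; the runtimes are invented):

```python
# Both parts pinned to the same 253 W package power limit; runtimes are hypothetical.
power_limit_w = 253

runtimes_s = {
    "i7-13700K": 130,   # assumed time to finish the job
    "i9-13900K": 110,   # assumed: more cores at lower clocks finish sooner
}

for part, seconds in runtimes_s.items():
    energy_wh = power_limit_w * seconds / 3600
    print(f"{part}: {energy_wh:.1f} Wh per job at {power_limit_w} W")

# Same power draw during the run, shorter runtime on the higher-core-count part,
# so less energy per completed job: higher computation per kWh.
```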