The power usage gap in single-threaded tasks is much smaller than the gap between the nominal TDPs of the CPUs.
A desktop CPU at 5 to 6 GHz in single-threaded mode has a power consumption in the range of 20 W to 30 W per core.
I do not know about the latest Apple CPU, but for the older Apple smartphone CPUs the power consumption in single-threaded mode was between 4 W and 5 W per core, similar to the power consumption of an Intel or AMD CPU core when running at clock frequencies between 3 GHz and 4 GHz.
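As a rough back-of-the-envelope check (using only the per-core figures quoted above, which of course vary with chip, binning and workload), the single-threaded power gap comes out to only a few times, nothing like the gap in nominal TDPs:

    # Illustrative only: per-core single-thread power figures quoted above.
    desktop_core_w = (20, 30)  # desktop core at 5-6 GHz
    apple_core_w = (4, 5)      # older Apple smartphone core

    # Ratio of desktop to Apple per-core power in single-threaded mode.
    low = desktop_core_w[0] / apple_core_w[1]   # 20 W / 5 W = 4x
    high = desktop_core_w[1] / apple_core_w[0]  # 30 W / 4 W = 7.5x
    print(f"ST per-core power gap: roughly {low:.1f}x to {high:.1f}x")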
Engineering samples of Arrow Lake S running at 5.7 GHz have demonstrated slightly higher single-thread GB6 scores (e.g. 3450), while the AMD 9950X at 5.7 GHz has demonstrated scores that are only insignificantly lower (e.g. 3418).
For all practical purposes, the current Intel and AMD desktop CPUs (Arrow Lake S with Lion Cove cores, e.g. the 285K, and Granite Ridge with Zen 5 cores, e.g. the 9950X) and the new Apple A18 Pro can be considered to have the same single-thread speed, because the differences between them do not exceed 1%.
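For example, taking the two sample scores above at face value (the A18 Pro score is not quoted here, so this only checks the Intel/AMD pair, and individual runs scatter more than this), the relative gap works out to well under 1%:

    # Example GB6 single-thread scores quoted above.
    arrow_lake_s = 3450   # Arrow Lake S engineering sample at 5.7 GHz
    ryzen_9950x = 3418    # AMD 9950X at 5.7 GHz

    gap = (arrow_lake_s - ryzen_9950x) / ryzen_9950x * 100
    print(f"Relative ST GB6 gap: {gap:.2f}%")  # ~0.94%, below the 1% threshold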
Of course, it is very impressive that Apple's smartphone CPU matches the single-thread speed of the desktop CPUs, but for multithreaded applications performance depends very little on microarchitecture and is mostly determined by the CMOS manufacturing process and the configured power limits, so in that regime no smartphone CPU can approach the throughput of a desktop CPU.
A sprinter who runs very fast at the start of a race is technically faster at the moment of measurement than a marathon runner who paces themselves over a prolonged period, even though the marathon runner finishes ahead at the finish line.
Isn’t it fun to measure things and create feel-good narratives when it’s advantageous to your plot?
The catch is that it's a mobile processor with very limited cooling, so it's only good for bursty loads, which happens to be a good fit for the kind of stuff people do on phones.
If you're in Lightroom all day, or compiling code all day, the Intel chip is probably faster.