The entire problem with byte swaps is that you need them precisely when your platform's native byte order differs from that of the data you are reading.
You know the byte order of the data. But the tricky part is, what is the byte order of the platform?
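
For concreteness, the approach the article advocates sidesteps the platform question entirely: decode the bytes arithmetically, in the order the wire format specifies, and never ask what the host's order is. A minimal sketch in C (the function name is mine):

    #include <stdint.h>

    /* Decode a 32-bit little-endian value from a byte buffer.
       Correct on any host, whatever its native byte order:
       the shifts fix the meaning of each byte, so no
       endianness test or swap is ever needed. */
    uint32_t load_le32(const unsigned char *p)
    {
        return (uint32_t)p[0]
             | (uint32_t)p[1] << 8
             | (uint32_t)p[2] << 16
             | (uint32_t)p[3] << 24;
    }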
The shift-based code will always be correct, but you can't just assume that the compiler will optimize the shifts into a byte-swap instruction. If you look at the article you will see that it tries to no-true-Scotsman that concern away by talking about a "good modern compiler".
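
If you really can't bet on the optimizer, the usual fallback is an explicit swap behind a compile-time endianness test. A sketch assuming GCC/Clang (`__BYTE_ORDER__` and `__builtin_bswap32` are extensions of those compilers, not standard C):

    #include <stdint.h>
    #include <string.h>

    /* Native-order load via memcpy (alignment-safe), then
       swap only if the host is big-endian. */
    uint32_t load_le32_explicit(const unsigned char *p)
    {
        uint32_t v;
        memcpy(&v, p, sizeof v);
    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
        v = __builtin_bswap32(v);
    #endif
        return v;
    }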
And what exactly is the problem there? Are you going to be writing code that a) is built with a compiler weird enough to miss this optimisation, but also b) does byte swapping in a performance-critical section?
(I'm not sure how to answer the question... what do you mean, "when?")