This code converts a bunch of bytes into an integer by bit shifting, so it's specific to the binary representation of integers in your language, and to how types of various sizes and signedness interact.
Note that this doesn't depend on your CPU, but rather on how integers and their byte representations are specified in the language.
I think it's quite reasonable to expect the programmer to know what that means, assuming they're writing bit-fiddling code like this.
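For concreteness, here's a minimal sketch of the pattern in Rust (an illustrative stand-in, not the code under discussion; `be_bytes_to_u32` is a made-up name):

```rust
// Illustrative sketch: assemble a u32 from four big-endian bytes.
// The result is fully determined by the language's integer rules,
// not by the host CPU's endianness.
fn be_bytes_to_u32(b: [u8; 4]) -> u32 {
    ((b[0] as u32) << 24)
        | ((b[1] as u32) << 16)
        | ((b[2] as u32) << 8)
        | (b[3] as u32)
}

fn main() {
    // The `as u32` casts are the crux: in C, the bytes would be
    // implicitly promoted to a *signed* int before shifting, and
    // shifting a byte >= 0x80 left by 24 is then a classic bug.
    println!("{:#010x}", be_bytes_to_u32([0x12, 0x34, 0x56, 0x78]));
}
```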
None of that argues against abstraction. It's perfectly reasonable to want to write code that uses left bit-shift and works on unsigned and signed 8-, 16-, 32-, and 64-bit integers.
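That kind of abstraction is easy to sketch in Rust with generics. This is a sketch under stated caveats, not a production decoder: `be_decode` is a made-up name, it covers the 16/32/64-bit widths but not `i8` (there's no `i8: From<u8>`), and decoding a value with the sign bit set into a signed type overflows the shift in debug builds -- exactly the size/signedness interactions mentioned above.

```rust
use std::ops::{BitOr, Shl};

// Width-generic big-endian decoder: works for any integer type that
// can be built up from u8 by shift-and-OR (u16/u32/u64, i16/i32/i64).
fn be_decode<T>(bytes: &[u8]) -> T
where
    T: From<u8> + Shl<u32, Output = T> + BitOr<Output = T>,
{
    bytes
        .iter()
        .fold(T::from(0u8), |acc, &b| (acc << 8) | T::from(b))
}

fn main() {
    let x: u16 = be_decode(&[0x12, 0x34]);
    let y: u64 = be_decode(&[0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08]);
    let z: i32 = be_decode(&[0x00, 0x12, 0x34, 0x56]); // sign bit clear, so safe
    println!("{:#x} {:#x} {:#x}", x, y, z);
}
```

(In practice Rust already ships this as `u32::from_be_bytes` and friends, but the generic version shows the shape of the abstraction.)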