It makes more sense if you think of your program as schematics for building a machine. Following that metaphor, the type system can help you find flaws in your design. When you go to build the machine from your schematics, you're still constrained by the real world, unfortunately. Reading the CPU temperature would just be an IO request: it leaves your program, the runtime gathers the information, and then it feeds the result back into your program.
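For concreteness, here's a minimal sketch of what that request might look like in Haskell (Linux-only; the sysfs path and the millidegree format are assumptions and vary by machine). The point is just that the temperature reading arrives through IO like any other interaction with the outside world:

```haskell
-- Minimal sketch: reading the CPU temperature is just another IO action.
-- The sysfs path below is an assumption (Linux-specific, machine-dependent).

-- | sysfs reports millidegrees Celsius; convert to degrees.
readCpuTemp :: IO Double
readCpuTemp = do
  raw <- readFile "/sys/class/thermal/thermal_zone0/temp"
  pure (fromIntegral (read raw :: Integer) / 1000)

main :: IO ()
main = do
  t <- readCpuTemp
  putStrLn ("CPU temperature: " ++ show t ++ " °C")
```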
I personally like to think of my machine running in a lab, with IO being scientists running around taking data out, doing work, then feeding it back into the machine. :D
You missed my point. I'm not talking about reading the CPU temperature from the program - that is easily represented, as you say. I'm talking about the generation of heat from the very act of running the program. I don't think there is any reasonable label for that other than "side effect". It's an unavoidable effect of running the program; it is not in any sense why you run the program; it doesn't show up anywhere in the types; the language gives you no guarantees about it; &c, &c.
You are going to have to point at something more specific than the article as a whole. At a skim, it seems to support my interpretation perfectly fine. It starts off:
"In computer science, a function or expression is said to have a side effect if, in addition to returning a value, it also modifies some state or has an observable interaction with calling functions or the outside world."
The increase in CPU temperature is certainly an "observable interaction with [...] the outside world". Given that no language I'm aware of gives you any kind of guarantee about the semantics of this interaction, it would be poor engineering to rely on it, but that doesn't mean it doesn't happen or that it is not a side effect.
I think I'm comfortable with that. Then you push the "that we care about" question down to the level of correctness and optimization of implementations.