The reason for this is that the CDN can keep "hot" TCP connections open to your backend origin servers.
Normally, when a user connects to your website, best case they have to send a SYN, receive a SYN/ACK, send their request along with the final ACK (hoping it fits in a single segment), and then get their response. That's two round trips before any data arrives.
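That cold-connection exchange can be sketched as a timeline. This is purely an illustration of the steps just described, not real networking code:

```python
# Illustrative timeline of a cold HTTP-over-TCP request. Each line is
# one one-way trip; a client->server message answered by a
# server->client message is one round trip.
TIMELINE = [
    "client -> server: SYN",
    "server -> client: SYN/ACK",
    "client -> server: ACK carrying the HTTP request",
    "server -> client: HTTP response",
]

def round_trips(timeline):
    # Two one-way trips make one round trip
    return len(timeline) // 2

print(round_trips(TIMELINE))  # 2
```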
Then, if the file is "large", the TCP congestion window starts out at a small default size (the initial window is only a handful of segments) and only grows over the life of the connection, as acknowledged packets make it back across all that distance and latency. This is TCP slow start.
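A toy model makes the warm-up cost concrete. Nothing below is a real TCP stack: the window-doubles-per-RTT rule, the 1460-byte segment size, and the example window sizes are my simplifying assumptions.

```python
# Sketch: how many round trips pure slow start needs to move a payload,
# assuming the window doubles every RTT, no loss, and 1460-byte segments.
MSS = 1460  # assumed max segment size in bytes

def rtts_to_send(payload_bytes, initial_window_segments):
    """Count RTTs until the payload is fully sent under pure slow start."""
    window = initial_window_segments
    sent = 0
    rtts = 0
    while sent < payload_bytes:
        sent += window * MSS
        window *= 2  # slow start: window doubles each RTT
        rtts += 1
    return rtts

# A 200 KB response from a cold connection (old-style initial window: 4 segments)
print(rtts_to_send(200_000, 4))   # 6
# The same response over a warmed-up connection (say, a 64-segment window)
print(rtts_to_send(200_000, 64))  # 2
```

At a 150 ms RTT, that is the difference between roughly 900 ms and 300 ms for the body alone.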
However, a CDN node likely already has a connection open to your origin, saving an entire round trip. Meanwhile, if your response is somewhat heavy (multiple kilobytes), that old connection will already have a large congestion window (bandwidth between the CDN node and your origin server is likely high, even if the last mile to the user isn't), so you get much higher throughput and don't get stuck waiting for ACKs.
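Here is a rough back-of-the-envelope model of what connection reuse buys. All the numbers are assumptions for illustration: a 150 ms RTT to the origin, a fresh congestion window of 4 segments versus a warmed-up window of 64, and a simple doubling slow-start rule.

```python
RTT_MS = 150  # assumed round-trip time to the origin
MSS = 1460    # assumed max segment size in bytes

def transfer_rtts(payload_bytes, window_segments):
    """RTTs to deliver a payload under a toy slow-start model."""
    window, sent, rtts = window_segments, 0, 0
    while sent < payload_bytes:
        sent += window * MSS
        window *= 2  # window doubles each RTT
        rtts += 1
    return rtts

def request_ms(payload_bytes, pooled):
    handshake = 0 if pooled else 1  # 3-way handshake costs one extra RTT
    window = 64 if pooled else 4    # warm vs fresh congestion window
    return (handshake + transfer_rtts(payload_bytes, window)) * RTT_MS

# 50 KB response: fresh connection vs reused "hot" connection
print(request_ms(50_000, pooled=False))  # 750
print(request_ms(50_000, pooled=True))   # 150
```

Under these assumptions the pooled connection is five times faster for the same bytes, which is the whole point of keeping those connections hot.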
In fact, once you start thinking at the TCP level, you realize all sorts of things. Even if you don't have existing reusable connections, the window will still warm up faster for the CDN, since its connection to your origin is lower latency and likely more stable, yielding greater total bandwidth. And, even more hilariously, even if there is no "last mile" effect at all, putting a server halfway between two endpoints will improve their performance, due to these same windowing issues.
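That midpoint effect can be sketched with the same toy slow-start model. The assumptions are mine: a 200 ms end-to-end RTT split into two 100 ms hops, a relay that streams bytes onward as they arrive (so both hops warm up concurrently and the second hop only adds its one-way latency), and the usual doubling rule.

```python
MSS = 1460  # assumed max segment size in bytes

def transfer_rtts(payload_bytes, initial_window=4):
    """RTTs to deliver a payload under a toy slow-start model."""
    window, sent, rtts = initial_window, 0, 0
    while sent < payload_bytes:
        sent += window * MSS
        window *= 2  # window doubles each RTT
        rtts += 1
    return rtts

PAYLOAD = 200_000  # a 200 KB response

# Direct: slow start paced by the full 200 ms round trip
direct_ms = transfer_rtts(PAYLOAD) * 200

# Via a midpoint relay: slow start is paced by a 100 ms hop, so the
# window grows twice as fast in wall-clock time; the relay streams the
# data onward, adding roughly one 100 ms hop of forwarding latency
relayed_ms = transfer_rtts(PAYLOAD) * 100 + 100

print(direct_ms, relayed_ms)  # 1200 700
```

Same endpoints, same bytes, and the relayed path still wins, purely because the window grows per round trip and the round trips got shorter.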
Another thing that helps over long distances, if you use a large enough, well-managed CDN, is the interconnectedness of the CDN's nodes. Google probably has a much better pipe between its point of presence here and the one nearest Qatar (to use your example), so much so that "my server -> them -> then again through their fat pipe -> user in Qatar" may well be faster than "me -> user in Qatar via other ISPs and international peering arrangements".