PyTorch has become dominant in research because of its API: both its stability and its eager execution mode.
TF has become dominant in industry because (a) it came out several years before PyTorch, and industry is slow to move, and (b) it supported a lot of production use cases (mobile, serving, removing Python overhead) that PyTorch didn't for a long time.
If only articles were as concise as your summary, I would enjoy reading them; as long as they run many pages, I have no time to read beyond the titles, abstracts, conclusions, and comments.
BTW, I've also read (here on HN) that PyTorch trains much faster than TensorFlow does.
PyTorch wraps THNN, not Torch. Moreover, even if that were true, it wouldn't matter: practically none of the runtime comes from Python in the first place; it is dominated by the underlying C implementation.
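To see why the Python layer is negligible, here is a minimal stdlib-only sketch (not PyTorch itself; `sorted()` on a large list stands in for a C-implemented kernel, and an empty Python function stands in for the per-op dispatch overhead):

```python
import timeit

# Stand-in "kernel input": a large list, so the C-level work is substantial.
data = list(range(1_000_000))

def dispatch():
    # An empty Python call: an analogue for per-op Python dispatch overhead.
    pass

# Average time of one no-op Python call.
call_time = timeit.timeit(dispatch, number=100_000) / 100_000

# Average time of one bulk operation implemented in C (CPython's sort).
kernel_time = timeit.timeit(lambda: sorted(data), number=10) / 10

print(f"Python call overhead: {call_time * 1e6:.3f} us")
print(f"C-level kernel time:  {kernel_time * 1e3:.3f} ms")
```

On any recent machine, the C-level bulk operation is several orders of magnitude slower than a single Python call, so for reasonably sized tensors the Python dispatch cost rounds to roughly 0% of total runtime.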