Usually 20-30%. You get faster cores but no LRDIMM support, so you're effectively capped at 128GB of ECC UDIMM, or 256GB at best if you're lucky enough to find 32GB ECC UDIMM modules. EPYC has a 4TB ECC LRDIMM ceiling, and the new TR on TRX80 might have the same ceiling as well. I'm glad AMD offers TR at all: they make far less money on it than on EPYC, but it's a great marketing tool for them. I'm running some TRs in Deep Learning rigs on Linux (PCIe slots are what matter most there), and they're great; Titan RTXs and Teslas run without any issue. But Zen 2 should give me much better performance on classical ML with Intel MKL/BLAS in PySpark/scikit-learn, so I can't wait to get some.
Intel makes rather pessimistic assumptions about AMD: their runtime uses the CPUID vendor string to pick which code path to run, and ignores the CPU's actual feature flags for floating point, vector extensions, etc.
So if you want to compare performance fairly, I'd use gcc (or at least a non-Intel compiler) and one of the MKL-like libraries (ACML, GotoBLAS, OpenBLAS, etc.). AMD has been contributing directly to various projects to optimize for AMD CPUs. They used to have their own compiler (with a lineage going through SGI -> Cray -> PathScale, or similar), but since then I believe they've been contributing to GCC, LLVM, and various libraries.