Statisticians, academics, math enthusiasts, and People Who Spend Too Much Time Building Tables and Graphs want concrete blocks. They want to build a case or a report or something that suggests a point using a mathematical model. They deal with "statistical significance" because that's how the tools they understand work. Thus A/B testing.
Web workers, businesses, advertisers, people looking for better conversion rates, etc. want to eat. They want apples. Talking about statistical significance, trial sizes, and frequentist measures doesn't help if they don't get more apples. The article showed that MAB gets more apples than A/B, and by a wide margin.
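The "more apples" claim is easy to check yourself with a toy simulation. The sketch below is an assumption-laden illustration, not the article's exact setup: two hypothetical arms with made-up conversion rates (5% and 10%), a plain 50/50 A/B split, and a basic epsilon-greedy bandit that exploits the current best arm 90% of the time.

```python
import random

def simulate(trials=10000, rates=(0.05, 0.10), epsilon=0.1, seed=42):
    """Compare total conversions ("apples") from a fixed 50/50 A/B split
    vs. an epsilon-greedy multi-armed bandit on the same two arms.
    The rates and epsilon here are illustrative, not from the article."""
    rng = random.Random(seed)

    # A/B: split traffic evenly for the whole run, learning nothing en route.
    ab_wins = sum(rng.random() < rates[i % 2] for i in range(trials))

    # MAB: explore a random arm epsilon of the time, otherwise exploit
    # whichever arm currently has the best observed conversion rate.
    shows = [0, 0]
    wins = [0, 0]
    for _ in range(trials):
        if rng.random() < epsilon or 0 in shows:
            arm = rng.randrange(2)  # explore (or seed both arms once)
        else:
            arm = max((0, 1), key=lambda a: wins[a] / shows[a])  # exploit
        shows[arm] += 1
        if rng.random() < rates[arm]:
            wins[arm] += 1

    return ab_wins, sum(wins)
```

Under these assumptions the bandit funnels most traffic to the 10% arm, so its total conversions land well above the split test's, which is stuck averaging the two rates.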
Comparing the two isn't just missing the point; it's drawing an equivalency between "statistical significance" (whatever the tools say it means) and actual real-world significance, a.k.a. better results. If A/B testing can't even produce better results in simulation, the proper word for that is worse.
(And don't even get me started about trying to run a statistical experiment until significant results are found... someday that practice will be labeled as fraud.)