Some strategies I've seen conference reviewers use and that I use to quickly evaluate whether a paper is worth a deeper dive:
1. Figures. Can you understand the figure from the caption or the associated text blurb? Is it neat and labeled correctly? I've found that excellent figures are a strong signal that the authors put a lot of thought and work into the paper. If you look at Yann LeCun's early works, they all have really excellent diagrams with clear descriptions. Some of his figures are so classic that they regularly appear in ML presentations to this day.
2. Read the related work section. A lot of people seem to skip this, but it can be dynamite if you know the field. You'll probably be aware of other papers referenced there, and when the authors point out differences and similarities to other methods, it gives you a great idea of whether this paper's approach is worth understanding.
3. Save the Method and Results sections for last. For me, and I would assume most people, these heavily technical sections are very time consuming to understand and ought to be read last. Read the abstract, intro, related work, and conclusion first. I would bet I discard half of the papers I pick up without ever reading the middle chunk.
4. Conference or journal it was published in. IEEE, NIPS, CVPR are all places where the best stuff in my area gets published. Papers not from top shops should have to do more to earn your attention; strong figures are the signal I usually fall back on.
One thing I would not do, though I used to, is worry about the number of citations a paper has. Awesome new research, and old work that has only recently become important, won't have many citations at first.
This is just my list. I'd appreciate other filters people have for reading papers!