1. Understand the task the authors are solving, how they solved it, and their results. Anybody with a basic understanding of the domain should be able to do this by reading the abstract/intro and conclusion. If you find yourself having trouble here, you probably just need more background in the field.
2. Gain some intuition for why their method works. This is one of the hardest parts to figure out how to do, and probably the one most people stumble on. Really, this is basically the entirety of what you're trying to do when you learn math. There are also varying levels of intuition: "I get why this might work", "I get why this works", and "I get why it's impossible that this doesn't work", in order of difficulty. The more background you have, the easier this intuition is to grasp. Alternatively, you can bootstrap your intuition by reading other people's blog posts, talking to somebody who understands the paper, asking the authors, playing around with your own implementations, etc. I'm not sure anybody has good answers for this stage. Personally, I really like good blog posts from bloggers I know are good, but unfortunately, many papers do not have blog posts attached :(
3. Finally, there's the strict mathematical rigor part. These levels aren't really strict; oftentimes I'll treat math I'm not familiar with as a black-box theorem. If you don't have the math background for these proofs, there's usually no better recourse than learning the subject properly.
Luckily, many ML papers barely have any mathematical proofs :)
Alex Irpan has a great explanation of the Wasserstein GAN paper here: https://www.alexirpan.com/2017/02/22/wasserstein-gan.html
If you're looking for interactive blog posts, distill.pub probably has the best: https://distill.pub/2017/research-debt/
And one final note: with many papers (especially papers that aren't math papers), getting to step 2 is often surprisingly simple; the simplicity is just hidden behind a lot of cruft. That said, it's wise to be careful about so-called "intuitive explanations" of a concept. If somebody gives an "intuitive" explanation for why X is true, but that explanation doesn't also explain why !X is false, it's not very useful.