Given the small size of the laser projector, I imagine the natural movement of a hand-held phone would produce significant displacement of the projected dots over a 1 s interval? Have you tried integrating the 15 frames to see what it looks like?
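(If anyone wants to eyeball that, here's a minimal sketch of the kind of integration I mean, assuming you've already exported ~1 s of frames as grayscale images; the file names and the loader are just placeholders, not anything from the actual project.)

    import numpy as np
    import imageio.v3 as iio  # placeholder loader; any image reader works

    # Load ~1 s of dot-pattern frames (15 at 15 fps) as float arrays.
    # The file names here are hypothetical.
    frames = [iio.imread(f"frame_{i:02d}.png").astype(np.float32)
              for i in range(15)]

    stack = np.stack(frames)       # shape: (15, H, W)
    mean_img = stack.mean(axis=0)  # dots blur out if the phone moved
    max_img = stack.max(axis=0)    # shows the full smear of each dot's path

    # Streaks in max_img mean hand-held jitter displaced the dots by more
    # than a pixel or two over the 1 s window.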
Minute, subpixel movements can ironically give you MORE resolution if you process them over time, though you'd probably need some sort of "anchor" points to align the frames against.
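(That's basically multi-frame super-resolution. A rough sketch of the idea, assuming the frames only translate between shots and using frame 0 as the "anchor"; this is just an illustration with skimage/scipy, not anyone's actual pipeline.)

    import numpy as np
    from scipy.ndimage import zoom, shift
    from skimage.registration import phase_cross_correlation

    UP = 4  # work on a 4x finer grid than the sensor

    def superres(frames):
        """Naive multi-frame super-resolution: register each frame to the
        first one with subpixel accuracy, then average on an upsampled grid."""
        ref = frames[0]
        accum = np.zeros((ref.shape[0] * UP, ref.shape[1] * UP), dtype=np.float64)
        for f in frames:
            # Subpixel translation estimate; the "anchor" here is simply
            # frame 0 (feature-based anchors would handle rotation better).
            (dy, dx), _, _ = phase_cross_correlation(ref, f, upsample_factor=20)
            up = zoom(f.astype(np.float64), UP, order=1)     # bilinear upsample
            accum += shift(up, (dy * UP, dx * UP), order=1)  # align to reference
        return accum / len(frames)

On real hand-held footage you'd want rotation/homography registration rather than pure translation, but even this crude version shows why the jitter can help rather than hurt.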
I think the irony being implied is that normally, when you're shooting video and your camera is jittering, you're effectively losing resolution compared to a static camera because of motion blur, whereas this depth mapping actually benefits from minute movements. Looking at individual frames of video isn't quite the same as combining them into a single sharper image, but that's the counterintuitive point I think they were driving at.
It gives you about 15 fps of depth data