Not the OP but I'll take a stab: I see "waiting faster" as meaning roughly "check the status of" faster.
For example, you have lots of concurrent tasks, each waiting on slow external IO. Each task needs its IO to finish before it can make forward progress, and at any given moment it's unlikely more than a couple of them can. So most of the time, you end up checking on tasks that aren't ready to do anything because their IO isn't done; you're waiting on them to become ready.
Now, if you can do that "waiting" (really, checking whether each task is ready for work) faster, you can spend more of your machine time on whatever work _is_ ready to be done, rather than on figuring out which tasks are ready.
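A minimal Python sketch of what I mean, with a hypothetical `fake_io` coroutine standing in for a real DB call or HTTP request: the event loop parks all the tasks and lets the OS wake it when any IO completes, instead of checking each task in turn.

```python
import asyncio
import time

async def fake_io(task_id: int, delay: float) -> int:
    # Stand-in for slow external IO (a DB call, an HTTP request, ...).
    await asyncio.sleep(delay)
    return task_id

async def main() -> float:
    start = time.monotonic()
    # The event loop "waits" on all ten tasks at once: each is parked
    # until its IO is ready, so no time is burned polling tasks that
    # can't make progress yet.
    await asyncio.gather(*(fake_io(i, 0.1) for i in range(10)))
    return time.monotonic() - start

elapsed = asyncio.run(main())
# The ten 0.1 s waits overlap, so the total is ~0.1 s, not ~1 s.
print(f"{elapsed:.2f}s")
```

The payoff is that "waiting" on ten tasks costs about the same as waiting on one.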
Threads make sense in the opposite scenario: when you have lots of work that _is_ ready, and you just need to chew through it as fast as possible. E.g. numbers to crunch, data to search through, etc.
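The opposite case, sketched with a thread pool and a made-up `crunch` function: every chunk of work is ready right now, so you just want workers chewing through the queue. (Caveat: in CPython the GIL limits the speedup for pure-Python number crunching; the same pattern in C, Rust, Go, etc. uses all cores.)

```python
from concurrent.futures import ThreadPoolExecutor

def crunch(chunk: range) -> int:
    # Stand-in for CPU-bound work: every item is ready to process now,
    # so there's nothing to "wait" on, just work to get through.
    return sum(x * x for x in chunk)

chunks = [range(i * 1000, (i + 1) * 1000) for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    # Workers pull ready chunks off the queue as fast as they finish.
    total = sum(pool.map(crunch, chunks))
print(total)
```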
I'd love it if someone has a more illustrative metaphor to explain this; this is just how I think about it.