So to my mind you have to have some notion of what is 'forcing' the change before you can really say that things will change.
To use a current example, 'touch'. The force here is twofold: one, the amount of keyboard interaction you need to consume content is much less than the amount you need to create it; and two, keyboards take up space that could be filled with other features. Touch became credible when you could use it exclusively to operate the device in an acceptable way. It's why touch failed to displace keyboards on the original Tablet PCs (you needed the keyboard too often), and it's why the iPad without a keyboard is a lot less productive for processing email.
So 'post touch' needs, by my reasoning, some force behind it if it is going to displace touch. And we can look at those forces and see where they are coming from.
Clearly people talking to their devices is cool, but annoying to others on the train and potentially embarrassing. That is an example of a force which doesn't allow voice to displace touch. But the Myo device seems to be operable reasonably privately if it is sensitive enough. The Leap lets you do gestures locally for action at a distance; I could see that having some pull if people continue with large displays at a distance, but being less effective if the trend becomes many touchable displays close to you. I would say Kinect is a sort of mixed bag here: great for games, a huge win for robotic vision, but less durable as a new general-purpose interaction method.
It will be fun to watch. Just hope my toy budget can keep up!
These are clearly pretty arguable statements, but they are here to serve as illustrations of the forces pushing change on people rather than a quantitative measure of those forces.
Hopefully subvocal recognition can improve enough that it will solve this particular problem. Researchers have already created non-invasive forms of electronic signal relay that could be used for this as well.
It definitely will be fun to watch. I'm with you on the skepticism about video capture devices like Kinect being the solution to non-touch interfaces. We'll see though :D
Moreover, I imagine (okay, hope) that intense miniaturization is going to one day produce something like "Google Contact Lenses", which are going to be even more restrictive in the sort of interactions they permit.
And anything that gets popular for Google Glass is probably also going to be good for existing contexts like cars.
I don't know exactly what this is going to be, but it'll be cool.