Look at the terrible touch-driven applications and you'll probably find that a good number of them assume the same interaction paradigm you'd find with a keyboard and mouse. The excellent touch applications make good use of the fact that you have more than one finger and can perform gestures with them, that fingers get in the way of on-screen content, and that you don't want to tap multiple times to make something happen (finding a file, say).
It'll take a shift in thinking to make post-touch interaction effective (whether that's gesture, vision tracking, thought, etc.). If we think of a computer as a desktop with folders on it, or as an 80x24 terminal, we're looking at it the wrong way (an extension of the "if you see a stylus, they blew it" principle, I guess).
There are, though, UIs where item organization happens without any user input at all, like Genius playlists in Apple iTunes. That might be the future.