I find it amazing that when faced with a general question about a "security" feature, the median internet tech nerd responds with an attitude of absolute paranoia (cf. 4096-bit RSA keys, multi-word passphrase choices, SSH key-forwarding pedantry, general NSA tinfoil-hattery...)
Except when confronted with an Apple product. Then it's all "Nah bro, relax. No way could you lift a fingerprint from a glossy phone screen". :)
I'll say it for the third time: it's a cute feature (like face unlock was before it). Use it and enjoy it. If you honestly think you're buying a serious security mechanism, you're simply wrong.
You see two different classes of responses because there are two different use cases.
There's the security we geeks advocate for ourselves and our own implementations (often things we only have to set up and maintain infrequently), and then there's the security normals actually use (often things they have to authenticate with several times a day).
And I must have missed it, if anyone's been arguing this is a serious security mechanism. As far as I've seen, it's been lauded as (not much) better than a passcode but, primarily, convenient enough to get people to use it instead of nothing, raising the overall security of a still-fairly-insecure population.
And you may want to re-read the discussion over the faked-print attacks. It isn't about (im)possibility. It's about the time, expertise and equipment involved and the likelihood of success being too expensive to be worthwhile for gaining access to most phones. 
And if we're wearing our "serious" security hats, I still don't see any reason to worry too much about print faking, as its core assumption is a skilled attacker who has unfettered physical access to our device, unbeknownst to us and beyond our control. And at that point, the game is already over.
CCC themselves, with ideal source prints, had to significantly complicate their process to generate fakes that worked with suitable consistency. So even if you think suitable source prints grow on trees, the point about significant skill, equipment, time and resources remains.
It's not at all clear that the absolute paranoiacs and the people saying that it's unlikely that any but a vanishingly small number of regular people will ever have Touch ID hacked are from the same set.
When you say it's not "a serious security mechanism", it sounds as if that's defined in some absolute terms. But if hacking it costs hundreds of times more than the possible payoff (which appears to be the case for nearly anybody but James Bond), then it acts as a serious security mechanism in that user's context. Literally nobody is going to make a mold of my finger to unlock my iPhone; they'd have to be absolutely insane to think that was worthwhile. So it's a serious security mechanism for me. Would it be a serious security mechanism to cover nuclear launch codes? Of course not.
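The cost-vs-payoff point can be sketched as a toy expected-value comparison. Every number below is a made-up placeholder (not a real estimate of faked-print attack costs); the point is only the shape of the reasoning:

```python
# Toy threat-model arithmetic: attacking is rational only when the
# attacker's expected payoff exceeds their expected cost.
# All figures are hypothetical placeholders, not measured values.

attack_cost = 2000.0        # skilled labor, equipment, time (USD)
success_probability = 0.3   # faked-print attacks fail often
payoff_if_success = 50.0    # value of unlocking a random person's phone

expected_payoff = success_probability * payoff_if_success
rational_to_attack = expected_payoff > attack_cost  # False for these numbers
```

For an ordinary target the inequality fails by orders of magnitude, which is exactly the "serious for my context" argument: the mechanism doesn't need to be unbreakable, just uneconomical to break.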
> When you say it's not "a serious security mechanism", it sounds as if that's defined in some absolute terms.
You have to understand that the practice of cryptography has always had a military basis; the commercial/private use is ancillary.
So, what's "a serious security mechanism?" Presume you're a military commander during active war, whose battle plans are intercepted by an opposing nation. What is the likelihood, given the opposing nation believes your plan will lead to their complete destruction, that they'll be able to break the security in time to execute a counter-operation? A serious security mechanism is anything that reduces that likelihood.
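That military definition reduces to a time comparison: the expected time to break the mechanism versus the window in which the intercepted plan is still actionable. A minimal sketch, with purely hypothetical adversary numbers:

```python
# Toy "serious in the military sense" check: expected time to brute-force
# the protection vs. how long the intercepted plan stays useful.
# The keyspace and guessing rate below are illustrative assumptions.

keyspace = 2 ** 128                  # e.g. a 128-bit symmetric key
guesses_per_second = 1e12            # hypothetical adversary capability
seconds_per_year = 365.25 * 24 * 3600

# On average a brute-force search covers half the keyspace.
expected_years_to_break = (keyspace / 2) / guesses_per_second / seconds_per_year

plan_useful_for_years = 1 / 365.25   # say the battle plan matters for one day

serious = expected_years_to_break > plan_useful_for_years  # True here
```

Under that framing a fingerprint sensor clearly fails (the "keyspace" is one latent print left on the device itself), while even modest modern cryptography passes with absurd margin, which is why the two camps talk past each other.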