Since adding them, our Google Webmaster Tools has been much "happier".
Differentiating content based on the user agent and including a "Vary: User-Agent" header in responses effectively disables HTTP caching. Because there are so many distinct user-agent strings, neither the server's output cache nor any CDN/intermediary cache can be keyed effectively, so caching does little to reduce request-processing load. That is a very poor trade-off, and typically unacceptable.
If you must serve dynamic content based on user agent, the third option on the cheat sheet is probably better: use rel=canonical with separate URLs per device class. On each request the server still sniffs the device class from the user-agent string; if the sniffed class does not match the one the URL designates, it issues a temporary 302 redirect to the device-specific URL, and otherwise it serves the appropriate HTML. This takes a little more programming effort, but it is usually worth it to keep both caching and SEO.
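To make that concrete, here is a minimal Flask-style sketch of the sniff-and-redirect flow. The route paths, the crude user-agent check, and the template names are all placeholders I made up for illustration, not part of the original setup:

```python
# Minimal sketch: separate URLs per device class, with a 302 redirect
# when the sniffed class doesn't match the class the URL designates.
from flask import Flask, redirect, render_template, request

app = Flask(__name__)

MOBILE_HINTS = ("iphone", "android", "mobile")  # deliberately crude sniffing


def sniff_device_class(user_agent: str) -> str:
    ua = (user_agent or "").lower()
    return "mobile" if any(hint in ua for hint in MOBILE_HINTS) else "desktop"


@app.route("/article/<slug>")      # desktop URL
@app.route("/m/article/<slug>")    # mobile URL (separate URL per device class)
def article(slug):
    url_class = "mobile" if request.path.startswith("/m/") else "desktop"
    sniffed = sniff_device_class(request.headers.get("User-Agent", ""))

    if sniffed != url_class:
        # Temporary redirect to the URL for the sniffed device class.
        target = f"/m/article/{slug}" if sniffed == "mobile" else f"/article/{slug}"
        return redirect(target, code=302)

    # Serve the appropriate HTML; each variant's <head> carries the
    # rel=canonical annotation so search engines tie the URLs together.
    template = "article_mobile.html" if url_class == "mobile" else "article_desktop.html"
    return render_template(template, slug=slug)
```

Because each URL always returns the same HTML, responses stay cacheable without a Vary: User-Agent header.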
(I dislike having URLs for essentially the same content vary by domain or path, so I distinguish them with a simple "?lite" query parameter instead. It's also very nice to have the server honor an override cookie when sniffing the device class, which the user can set through UI in the site header/footer.)
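As a rough sketch of how that variation might plug into the handler above, the override cookie is checked before the user agent and the "?lite" parameter replaces the separate path. The cookie name and values here are assumptions for illustration:

```python
# Sketch of the "?lite" + override-cookie variant; call these inside a
# request handler like the one above.
from flask import request


def effective_device_class() -> str:
    # An explicit user choice (set via UI in the site header/footer) wins.
    override = request.cookies.get("device_class")  # assumed cookie name
    if override in ("mobile", "desktop"):
        return override
    ua = request.headers.get("User-Agent", "").lower()
    return "mobile" if "mobile" in ua else "desktop"


def url_device_class() -> str:
    # "?lite" marks the lightweight variant; the path itself stays the same.
    return "mobile" if "lite" in request.args else "desktop"
```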