Additionally, the second paragraph:
We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.
This implies they think AI will be used for hostile purposes. Such as wiping out the human race, maybe? It is just uninformed people making uninformed decisions and then informing other uninformed people of said decisions as if they were informed.
And yes, AI will definitely be used for all sorts of purposes, including hostile ones. Just like anything else, really: financial manipulation, spying, intelligent military devices, cracking infrastructure security, etc.
These are realistic concerns, but we shouldn't fall for the Skynet red herring. We can have problems with ethical AI use even if it's not a self-aware, superhuman superintelligence.
The downside is that much of the research is probably kept secret for business advantage. The public releases are more of a PR and hiring strategy than anything else, in my opinion. By sending papers to conferences, Google's employees get to know researchers and can attract them to Google.
Others say there's nothing to worry about: Google and Facebook are just today's equivalent of Bell Labs, which made numerous contributions to computer technology without causing much harm.
EDIT: I have to agree with _delirium's skepticism towards them doing much in that regard though.