I consider myself to be very left of center, but I can't imagine what form of 'democratic control' you think is necessary over the research that Google and Facebook do.
I do not fault Google or Facebook for planning on time-scales longer than most governments. Governments ought to be doing this level of long-term planning, but are not (at least publicly).
At the same time, I can see the basis for some anxiety, because it's not hard to imagine proprietary research going a few steps further and developing some sort of general intelligence, or even a limited but extremely high-powered intelligence, that would confer an overwhelming commercial advantage, and/or a political one. Suppose, as an exercise, that one developed an algorithm to maximize persuasiveness by first leading readers/listeners into a quiescent, semi-hypnotic state and then making the commercial or political pitch. There's certainly a potential for abuse.
In Europe this sort of thing tends to bring up the precautionary principle, the idea that you shouldn't do something without oversight and demonstrated minimization of risk. I think that's highly limiting, but expect some pushback against Google over this. Of course, I don't think democracy is all that wonderful either but then I'm a bit of a misanthrope.
I agree that it is good, but even though the scientific theories and algorithms seem to be "open", access to both the computing power and the data sets of Google is not.
So one could replicate these experiments, just not on the scale that Google does. I'm not at all sure whether it's practically possible for a single (really clever) person with a high-end CPU/GPU machine (and possibly some $$$ for cloud-computing instances) to replicate something similar to the results in this blog post.
The recognition nets used in the blog post seem to be trained on a tremendous number of examples, which is what gives them the ability to "hallucinate" (or classify) such a great variety of animal species, for instance.
It's very possible.
GoogLeNet is an example in Caffe (models/bvlc_googlenet): GoogLeNet trained on ILSVRC 2012, almost exactly as described in "Going Deeper with Convolutions" by Szegedy et al. in ILSVRC 2014. (Trained by Sergio Guadarrama, @sguada.)
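For what it's worth, the core "inceptionism" trick as the blog post describes it is just gradient ascent on the input image to amplify a chosen layer's activations, and that optimization loop itself needs nothing like Google-scale compute once a trained net is in hand. A minimal sketch of that loop, with a toy random linear "layer" standing in for the real network (the filter bank, learning rate, and mean-normalization constant here are all illustrative, not Google's):

```python
import numpy as np

# Hypothetical stand-in for one trained layer: a fixed random filter
# bank. The real experiments use a deep net like GoogLeNet; the point
# here is only the gradient-ascent loop on the *input*.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))  # 16 "filters" over a 64-pixel "image"

def activation(x):
    # Layer response whose magnitude we want to amplify.
    return W @ x

def dream_step(x, lr=0.01):
    # Gradient of 0.5 * ||W x||^2 with respect to x is W^T (W x).
    grad = W.T @ activation(x)
    # Normalize the gradient scale, then take an ascent step.
    return x + lr * grad / (np.abs(grad).mean() + 1e-8)

x = rng.standard_normal(64) * 0.1
before = np.linalg.norm(activation(x))
for _ in range(100):
    x = dream_step(x)
after = np.linalg.norm(activation(x))
print(before, "->", after)
```

Repeating this on a real conv layer (plus the multi-scale and jitter tricks) is where the published results come from; the expensive part is training the recognition net in the first place, not dreaming with it.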
Knowing that the most common parallel effect of induced hallucination via psychotropics is ego-loss (complete loss of subjective self-identity), maybe they need to try the completely inverse process in order to create a sense of ego in a machine... Because what is real intelligence but one's sense of self?