Here is a February 2012 study from Pew Internet, showing that 66% of American internet users use social media.
Our data, in addition to the survey shared on HN earlier, shows very similar statistics.
I couldn't agree more with your points about the importance of survey design. Our hope with GCS is to provide a real-time, affordable, iterative mechanism for conducting this type of research.
The challenges faced when benchmarking against Pew and others are, I believe, an indication of the value of continually collecting and updating data.
That company took no issue with running both polls (telephone, n=1030, "representative of people living in Germany"), and merely pointed out the differences in the questions when asked for comment afterwards. The second poll was mostly run to prove exactly that point.
It seems they take money for whatever poll they're asked to do, no matter how useless. They probably even optimize it to confirm the bias of their client - since polls are usually used for (internal) marketing purposes, I'd assume that's usually desired, too.
I wouldn't expect Gallup to be any different. That's their business.
Also, if you're close to a university, check out their social science departments for courses on survey/questionnaire design that you might be able to audit. There will usually be something in that area in social psych courses, for example.
The biggest mistakes I see are:
* Surveys being too long. Every additional question you have makes it less likely that people will bother to complete it.
* Not tracking abandonment rates. High abandonment rates mean that people cannot complete (because you've messed up a question design), or don't want to complete because of a perceived bias, or because it's too long, etc. It's a sign of a bug you need to fix.
* Trying to do too much in a single survey. If you have eight assumptions in your product that you're trying to explore do eight small surveys rather than one large one.
* Questions that assume the answer. You're often running a survey because you hope people answer in a particular way ("Yes! People are going to be interested in my product!"). Get somebody else to read the survey and see if they can figure out from the questions what answers you want. If they can - rephrase the questions.
* Questions with no correct answer. For example "Do you prefer to log in with Facebook or LinkedIn? yes/no". I can't answer that truthfully (real answer: "it depends"). People who can't answer questions truthfully either "lie" or abandon the survey. Both bias your results. Look at each of your questions and think whether there is another way of answering it that you don't include.
* A preference for quantitative rather than qualitative data. Checkboxes and radio buttons and hard numbers make it easy to draw pretty graphs and fool yourself into thinking you're being scientific. The key info you need is often in the "soft" written answers.
* Doing surveys too early when you should actually be talking to people.
* More of a form design issue - but people tend to fit their answers to the box size. If you have a small text entry box people are more likely to give small answers, which may be less useful than the longer answer you would get from a bigger text area.
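To make the abandonment-rate point above concrete, here's a minimal sketch of computing per-question drop-off from response logs. The log format (one dict of answers per respondent) and the question list are assumptions for illustration, not any particular survey tool's API:

```python
# Hypothetical sketch: per-question abandonment from raw response logs.
# Assumes each response is a dict of question-id -> answer, and that a
# respondent who abandoned simply has no entries for later questions.

QUESTIONS = ["q1", "q2", "q3", "q4"]  # asked in this order

def abandonment_by_question(responses):
    """Return, for each question, the fraction of starters who never reached it."""
    started = len(responses)
    rates = {}
    for q in QUESTIONS:
        reached = sum(1 for r in responses if q in r)
        rates[q] = 1 - reached / started if started else 0.0
    return rates

responses = [
    {"q1": "yes", "q2": "no", "q3": "maybe", "q4": "yes"},  # completed
    {"q1": "no", "q2": "yes"},                              # dropped after q2
    {"q1": "yes"},                                          # dropped after q1
]
print(abandonment_by_question(responses))
```

A sharp jump in the rate at one question (here, between q1 and q2, or q2 and q3) is the "bug" signal: that specific question is confusing people or putting them off.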
I'll shut up now since I should be doing actual work :-)
Will Evans has a nice series of slides on UX Research basics. There's one that covers some bits of survey design that you might find useful http://www.slideshare.net/willevans/introduction-to-user-exp...
It's the difference between speculating and actually making money. It's the difference between making a survey (the beginning, perhaps, of an indirect path to something... who knows what) and making a sale (direct path to booking revenue).
Programmers have a particular love for indirection. And the web has a particular love for speculation. Can we sell surveys? Yes, I think we can.
I'm sorry, did you think that there was science happening in the HN self-promotional echo chamber?