Oh, very well. Thank you for showing that the scientific community disagrees with me, and for providing evidence for your point. I can't say I actually agree with those criteria, especially since one is wholly impossible: there is never a possibility of 'no fear of reprisal' when the reprisal can take the form of conflicting with one's self-identity. To be honest, to accept a self-report I'd have to see the following:
1) A study which shows that the questions themselves do not introduce bias. An actual study, where multiple groups of participants were asked the same questions in different forms so as to prove the language of the question cannot influence the result. Of course, this would cause every questionnaire and interview study to fail, because the language does indeed affect the results and is thus a confounding variable (one which cannot be controlled for without pretending that some language "just doesn't affect people" while still functioning as language).
2) A proof that the demographic of the sample was controlled for all controllable factors other than those measured. For instance, in this study it wouldn't be good enough to test for the correlation between gender and friendship satisfaction by just getting a bunch of men and women: they'd all have to be the same class, race, wealth etc.
3) The study must not draw causal conclusions, nor interpret its results as causative. This is really quite self-explanatory: correlation does not imply causation. Yet, especially in sociology and psychology, this logical maxim seems to get forgotten amongst the excitement of having produced a study.
I'm sure there are more objections, but you've already put up with me arrogantly berating the scientific community for 3 points now. If I were allowed to edit my post to state that the scientific community disagrees with me regarding the validity of the 2007 study, I would.
As for an experimental methodology for studying friendship, I can't say that I can think of any studies which would do so and get past an ethics committee (bloody ethicists), but making the study longitudinal over childhood through to young adulthood would help, as it would show what age-bound variables affect the output. It might just be that young adult men are, for instance, too busy developing a career to have friends, or too busy drinking beer to have friends, or whatever; either way, making it longitudinal would allow some of the uncontrollable confounding variables (such as life experiences) to become more apparent.
1) Questions that introduce bias are known as leading questions, and researchers have devised multiple methods of avoiding that - including, as Dan noted, asking the same question more than once with different wording, and using only neutral language. Also, keeping questions simple, clear, specific and brief - with no implicit assumptions or loaded phrases.
2) Good research controls as many variables as possible. The more uncontrolled the variables are, the less valid the data is - but this applies to all studies, not just self-reports.
3) Correlation ≠ causation is rarely forgotten in the actual research - the discussion sections of research in reputable journals are overly modest at best, noting the limitations and weaknesses of the study and typically making few claims for generalizability. Mass media reports, however, tend to take more than a few liberties.
I agree any valid study of friendship has to be longitudinal - the issue becomes one of measurement. You do not trust self-reports, yet how else could it be measured? Hire a researcher to follow people around? Ask them to carry an audio recorder with them every day for a few years?
The only practical alternative I can think of is to ask their close friends or relatives. However, this may be unnecessary because research has already compared self and other reports on a sensitive issue (life satisfaction) and found a high correlation (1).
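For what it's worth, agreement in that kind of comparison is usually summarised as a correlation coefficient between the two sets of ratings. A minimal sketch in Python of what that calculation looks like - the scores below are made up for illustration and are not the data from the cited study:

```python
# Minimal sketch: quantifying self/informant agreement as a Pearson correlation.
# The scores are invented for illustration; they are NOT from the cited study.
from statistics import mean, stdev

self_report = [7, 4, 8, 5, 6, 9, 3, 7]    # hypothetical life-satisfaction scores (self)
other_report = [6, 5, 8, 4, 6, 8, 4, 7]   # hypothetical scores from a close friend/relative

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

print(f"self/other agreement: r = {pearson_r(self_report, other_report):.2f}")
```

A high r in a real sample is what "high correlation" in that literature refers to.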
And finally - although unscientific, the high upvote count of this article suggests that it hit a nerve and that many here are unsatisfied with the quality of their friendships. It matches my own experience, and that of my brother, my father, and most of the other men I know - more than enough to suggest that something is not quite right, and that it warrants a thoughtful discussion rather than being dismissed out of hand.
> And finally - although unscientific, the high upvote count of this article suggests that it hit a nerve and that many here are unsatisfied with the quality of their friendships.
See, you used the term 'suggests' rather than 'proves' because you know that claiming a stronger relationship between upvotes and motive would be affirming the consequent. But this is precisely the sort of weasel-wording which I've seen in observational studies, and it seems deliberately crafted to trick an unwary reader ill-versed in logic into misinterpreting 'suggests' as 'proves'. Of course, we both know that we cannot infer anything from a consequent other than that one of the possible antecedents must have occurred, and we both know that the set of possible antecedents in this case -- motives for clicking upvote -- is huge, and thus nothing meaningful can be inferred about which one actually drove the clicks. I'm happy to have a discussion about almost anything, but if someone comes to the party with nonsense evidence pretending the question has already been studied and decided, I'm going to call them on it.
I also feel you've dodged every point I've raised (or perhaps I didn't explain my objections very well). With regard to #1, the issue wasn't that I think researchers are deliberately crafting leading questions, but that in order for the study to be valid they'd have to show that their questions either do not lead and thus aren't confounding (which I've argued is impossible), or that they lead predictably and thus can be countered in the analysis (which I've also argued is impossible).
With #2 you're correct that this is an issue for all studies, but it's a particularly large issue for studies of things which are irreducibly complex, like people. Since we can't (easily) take specific facets of a person and study those in isolation from the rest of the person, controlling confounding variables becomes a much bigger issue. Even in other observational sciences we can usually demonstrate the core parts of our assumptions in a controlled, experimental manner. For instance, in the study of global warming, we can demonstrate in a controlled, experimental way that the combustion of fossil fuels releases CO2. With studies of human behaviour this is rarely possible.
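To illustrate why this worries me, here's a toy simulation - the variables and numbers are purely invented - showing how a single unmeasured confounder can manufacture a strong correlation between two things that never influence each other:

```python
# Toy simulation (purely illustrative): two variables that never causally
# interact still end up strongly correlated because both are driven by a
# hidden confounder. Unless that confounder is measured and controlled for,
# an observational study cannot tell this apart from a real causal link.
import random
from statistics import mean, stdev

random.seed(0)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

n = 1000
free_time = [random.gauss(0, 1) for _ in range(n)]                  # unmeasured confounder
hours_with_friends = [t + random.gauss(0, 0.5) for t in free_time]  # driven only by free_time
satisfaction = [t + random.gauss(0, 0.5) for t in free_time]        # also driven only by free_time

r = pearson_r(hours_with_friends, satisfaction)
print(f"r = {r:.2f}")  # comes out strongly positive despite no direct causal link
```

With people, the "free_time" slot could be any of dozens of unmeasured life circumstances, which is exactly the problem.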
With #3 you're correct that the media is far guiltier of this than the scientists, but I'd argue that scientists need to be more vocal about this issue. I appreciate this treads a fine line between asking for more scientific social responsibility and holding scientists responsible for the behaviour of society, but I feel this is a valid concern given the way that politicians like to fund studies such as these to validate their personal opinions. The reason I believe this is important isn't that I think scientists are trying to dupe us, far from it, but that it worries me that as the burden of proof for a posteriori reasoning drifts away from the strongly codified and philosophically justified rules of empiricism and falsifiability, scientists move from being discoverers of truth to yet another controllable authority figure.
Also, thank you again for citing evidence for your point. I apologise that I have not done so, but I seriously doubt any scientists actually agree with me here. Having read your linked study, I would say that while its result stands to reason, it doesn't really prove the point it claims to prove. If you set out to prove that self-reporting isn't invalidated by confounding variables, and you do so by invoking self-reporting which contains almost exactly the same confounding variables, then you can't really claim to have proved anything. Relatives and friends of a sample in such a study would be just as likely to change their answers, consciously or subconsciously, to avoid internal conflict, and because they're tied to the subjects in a way that would produce similar personalities and similar self-identity reprisals if the subjects' life choices were cast into doubt, it's not a large leap of logic that their changed answers would usually shift along the same lines as the subjects' own.
Again, I can't really think of a better way of studying complex issues like human behaviour, but since we started at the point of 'science agrees self-reporting is fine' and are now at 'we agree it is the best we can get', I feel we're moving in the right direction. I do agree that well-controlled self-report studies are probably the best we can get in this field, it just seems to me that the best we can get isn't as valid as the best we can get in experimental sciences, and should be noted as such.
About 1) - don't most forms ask questions more than once using different wording? This helps eliminate people just filling the form out randomly, but couldn't it also help keep language neutral?
Yes, this is correct. Most self-reporting relies on having the same question asked in different ways and places to catch people whose inconsistent answers suggest they should be removed from the sample.
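To be clear about the mechanism I'm conceding here, a toy sketch of that kind of consistency screen, assuming a made-up 1-5 agreement scale where one item is reverse-worded (the item names, threshold and data are all invented for illustration):

```python
# Toy sketch of a consistency screen: the same construct asked twice,
# once directly and once reverse-worded. Respondents whose two answers
# disagree by more than a threshold get flagged for removal.
# Scale, item names, threshold and data are illustrative assumptions only.

SCALE_MAX = 5  # items answered on a 1-5 agreement scale

respondents = {
    "A": {"satisfied_with_friendships": 4, "often_feel_friendless": 2},  # consistent
    "B": {"satisfied_with_friendships": 5, "often_feel_friendless": 5},  # contradictory
    "C": {"satisfied_with_friendships": 2, "often_feel_friendless": 4},  # consistent
}

def inconsistent(answers, threshold=2):
    """Reverse-code the negatively worded item and measure disagreement."""
    direct = answers["satisfied_with_friendships"]
    reversed_item = (SCALE_MAX + 1) - answers["often_feel_friendless"]
    return abs(direct - reversed_item) >= threshold

flagged = [name for name, answers in respondents.items() if inconsistent(answers)]
print("flagged for inconsistent answers:", flagged)
```

This catches careless or random responders well enough; what it can't catch is the systematic effect of the wording itself, which is my objection below.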
However, my objection is that I don't believe language can be easily classified in terms of the response it'll elicit. Obviously, one can (usually) correctly guess the response that'll be received if one were to run up to a stranger and yell "You're a [swearword of choice here]", yet the fact that I've had to preface this with the modifier 'usually' betrays my point; some people will get aggressive if you swear at them, some will laugh, some will respond in kind, and so on. My concern is that if we can't even predict the effect of language in its most obvious state, we probably can't predict its effect in subtler states.
This unpredictability of language leaves us in a tricky position when it comes to asking questions in a self-reporting study. In order to solve that one objection, we'd have to come up with a method of using language which manages to communicate its point without that point eliciting an emotional response. This is further complicated by the fact that people are complex beasts, with internal and external factors playing into how they behave, such that a question phrased neutrally for one person would probably not be so neutral for another. This also makes avoiding 'fear of reprisal' for one's response to a question impossible, as we can only remove external reprisal. It would not be possible for us to, for instance, remove the internal upheaval of a conflicted homosexual admitting to a survey that they were gay.