The goal is for the student to gain some experience performing their own research, wherever that may lead them. This is actually a pretty impressive piece of independent work for an undergrad.
"Correct methodology" and "real research" are very subjective things. We often judge too harshly, and as long as the paper is not misleading, you can judge it's merits directly without resorting to non-science.
Peer review should sort out the bad science from the good. The above link is just laziness: they have correlated p-values with low-quality research, and have decided to use that correlation as a crap filter in lieu of better, but more expensive, peer review.
The article submitted was not peer reviewed, and could very well be bad science, but I think accusations of non-science are not very productive and do not provide feedback on how to do better.
> The above link is just laziness: they have correlated P values with low quality research,
My point was rather that there was so much low-quality research that they decided to try to filter it by banning a statistical tool. Are you saying the prevalence of low-quality research had nothing to do with methodological problems? Maybe it's not low-quality, since, you know, it's "subjective"...
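And the base-rate problem with leaning on significance alone is easy to demonstrate. A quick sketch of my own (assuming a plain two-sample t-test and the usual 0.05 cutoff; needs numpy and scipy):

```python
# Illustration: under a true null, ~5% of experiments still come out
# "significant" at p < 0.05. A literature selected on significance
# therefore accumulates spurious findings even with honest analyses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    # Two samples drawn from the SAME distribution: no real effect exists.
    a = rng.normal(size=30)
    b = rng.normal(size=30)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives / n_experiments:.1%} significant despite no real effect")
# Prints roughly 5%, by construction of the threshold.
```

That's exactly the kind of pattern that makes an editor start treating p-values as a proxy for quality, for better or worse.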
> The article submitted was not peer reviewed, and could very well be bad science, but I think accusations of non-science are not very productive and do not provide feedback in how to do better.
You are addressing a point I never made. What I said was (quote): "The advisors should have encouraged a smaller and more manageable scope with a more thorough methodology."
Because an important part of undergraduate education is in fact learning to tell good research from bad. The advisors aren't really benefiting the students by skipping over methodological problems and letting them do research "wherever that may lead them", because that only encourages methodologically sloppy work in the future.