
Maps of subjective feelings - cheeaun
http://www.pnas.org/content/early/2018/08/27/1807390115
======
scienterrific
This is the most bullshit I've seen packed into a scientific publication
wrapper in quite a while.

Look at this graph:

[http://www.pnas.org/content/pnas/early/2018/08/27/1807390115...](http://www.pnas.org/content/pnas/early/2018/08/27/1807390115/F2.large.jpg)

What?

And this image here is like an uncanny valley of robot perception trying to
comprehend the vagaries of sensation:

[http://www.pnas.org/content/pnas/early/2018/08/27/1807390115...](http://www.pnas.org/content/pnas/early/2018/08/27/1807390115/F3.large.jpg)

Why thank you, robo-droid! _Very_ intuitive! I'm putting this on the
refrigerator!

I feel like I just accidentally read an algorithmically generated ad-tech
article from a chum bucket click-funnel content farm.

~~~
ivraatiems
Can you tell us a little bit about your scientific background which would help
us understand your qualifications to make that statement?

I'm really tired of non-subject-matter experts on HN coming in and declaring
things that require very specific expertise bullshit because they didn't
immediately understand or agree with them on a first pass.

I'm not saying you're doing that, but given how argumentative your comment is,
that's an awful lot like what it seems.

~~~
scienterrific
I can tell you how _unscientific_ the process of this study was:

    A total of 1,026 participants took part in online
    surveys where we assessed:
    
    (i)   for each feeling, the intensity of four
          hypothesized basic dimensions,
    
    (ii)  subjectively experienced similarity of the 100
          feelings, and
    
    (iii) topography of bodily sensations associated with
          each feeling.

They mechanical turked some online surveys, asking non-experts for this data
on the basis that it's subjective. Then they performed some canned statistical
analysis du jour on it, so that the paper would have some catchy data
visualizations, and padded it out to 1,000 words.

So, first and foremost, this is a free-association study, with all the rigor
of a Freudian interpretation for responses to a Rorschach test.

But beyond that, why would anyone hold the opinion that this is science?
Online surveys for 1,000 people, to gather opinions on the best way to
associate words in English? That’s what this research accomplished.

And lastly, let’s imagine the potential applications for these results. How
will they guide us to better predictions in the future? If I look at the
information collected here, can I draw more accurate conclusions about the
world, and improve practical decisions, now that I’ve been informed by these
results?

No. No I can’t. This is the kind of analysis that simply trusts the respondent
to complete an online survey in good faith. You can’t run the world like that.
You can’t drop ten cents on a 30-minute multiple-choice questionnaire, and use
that as your foundation for new insights.

They basically asked people to generate a tackling-dummy data set for them by
completing a survey, so that they could run a sort algorithm on it, and apply
a graph to an Excel spreadsheet of answer counts. Except they did it in an
academic setting, so that somehow validates these results with de facto
authority? Not in my world.

~~~
ivraatiems
I'm sorry for the late response.

I asked whether you had a scientific background that would help me understand
your authority to make the claims you're making. Though you didn't answer that
question, it's clear to me that you don't have one. It also doesn't really
seem like you read the study beyond the abstract.

> You can’t drop ten cents on a 30 minute multiple choice questionnaire, and
> use that as your foundation for new insights.

Why not? There is evidence that these questionnaires work. You assert
repeatedly that the study "can't" do things a certain way. But how do you know
that? What basis do you have, other than your own intuition, to say that
something is being done right or wrong in a field about which you know
nothing?

> So, first and foremost, this is a free-association study, with all the rigor
> of a Freudian interpretation for responses to a Rorschach test.

What expertise do you have that gives you the ability to make that claim? Why
should I believe you when you say that, yet should _not_ believe the authors,
who have more experience and more qualifications than you do, and who have
cited numerous sources written by other such persons, when they say the
opposite? If you'd read the paper in its entirety, you'd see that the authors
cite numerous sources (as is common in studies like these) to explain why
their metrics and methods are valid. All research is built on the back of the
research that comes before it.

But even assuming you read only the abstract, there are numerous, rigorous
formats for these kinds of surveys which have been formulated and have been
found to be valid. Here's an example of a review which covers the validity of
Mechanical Turk:
[https://www.sciencedirect.com/science/article/pii/S074756321...](https://www.sciencedirect.com/science/article/pii/S074756321730506X)

I'm sure you can find surface-level issues with that review by reading its
abstract, too, but the scientific process isn't about common sense or what you
think is likely to be true. It's about what is found to be true after testing,
which is what that review covers and is what this study did.

> Then they performed some canned statistical analysis du jour on it, so that
> the paper would have some catchy data visualizations, and padded it out to
> 1,000 words.

That you don't understand an analysis or don't know what it means or does
doesn't mean it is "canned" or inapplicable, and doesn't mean it isn't valid
for this use. What about the nature of the analysis makes it inappropriate?
Once again, the authors have clearly indicated where the analysis comes from
and why they did it, so, can you dig down and explain why that choice of
analysis is flawed?

> They basically asked people to generate a tackling dummy data set for them
> by completing a survey, so that they could run a sort algorithm on it, and
> apply a graph to an Excel spreadsheet of answer counts. Except they did it
> in an academic setting, so that some how validates these results with
> defacto authority? Not in my world.

You don't seem to understand the scientific process, and you don't seem to
have bothered to put any effort into learning about it, either. I'm not
contending this study is perfect, accurate, or even totally valid. But I do
contend that your critique of it is a series of hollow assertions that follow
from a conclusion you reached based on a gut feeling about what is and isn't
science, and that, if nothing else, is not scientific.

What you are doing to social science is the equivalent of a marketing manager
declaring he knows how to do a software engineer's job better because he read
the client-side code and can use a web browser and "it can't be that hard." I
urge you to reconsider, and spend some time actually learning about how the
scientific method is applied to social science, if only so you can better
critique it.

