
Investigating Causal Effects of Mental Events in Cognitive Neuroscience - lainon
http://philsci-archive.pitt.edu/14507/
======
danielam
Related articles worth reading:

[0] http://edwardfeser.blogspot.com/2011/01/against-neurobabble.html?m=1

[1] http://edwardfeser.blogspot.com/2012/03/reading-rosenberg-part-viii.html?m=1

[2] http://edwardfeser.blogspot.com/2017/01/revisiting-ross-on-immateriality-of.html?m=1

------
taneq
This is something that has always irked me about the "black box" arguments
regarding artificial neural nets. Sure, it's hard to interpret exactly "why"
an ANN gives the result it gives (other than "it did the maths and that was
the result"), but people always gloss over the fact that we really don't know
much at all about how the human brain generates its results. We just accept
individual introspection as ground truth when, as any psychologist will tell
you, it's anything but reliable.

~~~
forapurpose
> We just accept individual introspection as ground truth

Isn't a fundamental of scientific method to not accept introspection, but to
require publicly observable, reproducible evidence?

~~~
taneq
Yeah, but if a driverless car does something stupid, people want an exact
diagnosis ("sensor X failed", "the vision algorithm misidentified Ms. Whatsit
as a paper bag due to a bug on line 714", etc.), whereas a human driver can say
"I thought they were going to brake" and we just accept that as the reason the
person did it.

~~~
YeGoblynQueenne
The difference is that neural nets are technological artifacts that we design
and manufacture, unlike our own minds. We generally make sure that when we
design and create such things, we know how to control them.

For example, a car braking system that randomly failed to stop the car under
conditions that were impossible to understand would be unacceptable.

When a component of a safety-critical system fails, we want to know why it
fails. Not because we're curious, but because we want to avoid failure in the
future. If the component is a black box that can't be reasoned about, then we
can't know whether it will fail again in the future.

Also: "Ms Whatsit"? In view of the Uber accident, that is extremely
insensitive.

