Again, I think that the sensor itself is detecting the frequency.
The actual signal isn't "direct".
IIRC from reading up on haptics, different tactile sensors are sensitive to different ranges of stimuli. Different cell bodies sense different effects and frequency ranges and send an encoded signal.
We tend to forget that the underlying technology is very different and needs different encoding, signalling, and computation approaches. The main sentiment is that our nervous system is quite slow compared to our electronics and needs different design approaches and novel encodings to get things done.
In much the same way, our electronics can't engage with visible optical phenomena "directly": visible light frequencies are far too high to sample and transmit electronically, so the sensor has to handle that transformation itself.
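To put rough numbers on that (mine, not from any source; the figures are illustrative round values): by Nyquist, directly sampling a signal needs a sample rate of at least twice its highest frequency, and a quick Python sketch shows how far short even a fast ADC falls:

    # Rough Nyquist sanity check: can electronics sample visible light directly?
    # Frequencies are illustrative round numbers, not measurements.
    visible_light_hz = 5.5e14        # ~550 nm green light, ~550 THz
    fast_adc_hz = 1e10               # an aggressive ~10 GSa/s ADC

    nyquist_rate = 2 * visible_light_hz  # minimum rate to sample it directly
    shortfall = nyquist_rate / fast_adc_hz

    print(f"Required sample rate: {nyquist_rate:.1e} Hz")
    print(f"Shortfall vs a 10 GSa/s ADC: ~{shortfall:.0e}x")
    # A photodiode (or a cone cell) instead integrates intensity / responds
    # to delivered energy, side-stepping direct sampling entirely.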
Right, I know it isn't direct. The ability to introduce aliasing and fool this perception system is the basis for haptic feedback systems. I was responding more narrowly to the topic of a frequency domain transform as you get from the cochlea. I didn't mean to suggest that there are not other "hardware" signal transforms or feature extraction happening prior to the nervous system signal processing.
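To make the aliasing point concrete, here's a minimal Python sketch (generic, not modeled on any particular haptic system): a vibration sampled well below its Nyquist rate is indistinguishable from a much lower frequency, which is roughly the trick a haptic system can play on a slow sensing channel:

    import numpy as np

    # A 230 Hz vibration "sampled" by a channel that only updates at 200 Hz
    # aliases down to an apparent 30 Hz signal.
    true_freq = 230.0      # Hz, actual stimulus
    sample_rate = 200.0    # Hz, slow sampling channel

    t = np.arange(0, 1.0, 1.0 / sample_rate)
    samples = np.sin(2 * np.pi * true_freq * t)

    # The alias lands at |true_freq - k * sample_rate| for the nearest
    # integer k; here |230 - 1 * 200| = 30 Hz.
    k = round(true_freq / sample_rate)
    alias_freq = abs(true_freq - k * sample_rate)
    reconstructed = np.sin(2 * np.pi * alias_freq * t)

    print(f"Apparent frequency: {alias_freq} Hz")
    print("Max sample mismatch:", np.max(np.abs(samples - reconstructed)))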
As I understand it, the touch sensing structures behave more like zeroth-, first-, and second-derivative detectors (deformation/tension, velocity, acceleration), and these coincidentally have different frequency sensitivity curves. My reading is that in some ways this might be more similar to how our visual system integrates different photon detection signals to interpret color. Rather than the clear frequency domain transform of the cochlea, there are broadly overlapping sensitivity curves with only 2-3 peaks.
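A toy sketch of that "few broadly overlapping curves" idea (the channel tunings below are made up for illustration, not physiological data): three broadly tuned channels each respond to a single stimulus frequency, and the ratio of their responses pins it down even though no individual channel performs a spectral transform:

    import numpy as np

    # Three broadly overlapping Gaussian sensitivity curves (made-up tuning),
    # loosely analogous to mechanoreceptor channels or cone photoreceptors.
    peaks = np.array([5.0, 50.0, 250.0])    # Hz, channel "best" frequencies
    widths = np.array([10.0, 60.0, 200.0])  # Hz, broad tuning

    def channel_responses(stimulus_hz):
        """Response of each channel to a pure-frequency stimulus."""
        return np.exp(-0.5 * ((stimulus_hz - peaks) / widths) ** 2)

    # Two different stimuli produce distinct response *ratios*, so the
    # population encodes frequency without an explicit spectral transform.
    for f in (20.0, 150.0):
        r = channel_responses(f)
        print(f"{f:6.1f} Hz -> responses {np.round(r, 3)}")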