Phoneme- and Word-Based Learning of English Words Presented to the Skin (fb.com)
36 points by godelmachine 10 months ago | 13 comments

I was reading about a research lab somewhere in Madison that would attach electrodes to the tongue and send electric impulses based on a real-time camera feed. The blind person's brain would then adapt and start "seeing" with the tongue. The general idea was that the eyes are just gates that transform light into electric impulses, so why not use the tongue to deliver those impulses instead? They achieved pretty amazing results.

I vaguely recall reading something like that before as well; this is where I read it initially: https://www.scientificamerican.com/article/device-lets-blind...

As a person suffering from a disease which often causes blindness, I am particularly keen on the sensory-replacement science that we are dipping our toes into.

Looks like it's no longer just research but a commercial product: the PoNS device. http://heliusmedical.com/index.php/divisions/neurohabilitati...

This thing has been having trials since at least 2009, yet apparently hasn't gone mainstream yet. More suspiciously, the scope seems to have shifted from "provides a means for people to 'see'" to "helps people reacquire motor control in tenuous and difficult-to-assess ways". Has it actually been shown to do anything or is it just modern snake oil? A quick Google search finds lots of hype but nothing conclusive...

Well, I read about the lab and neurostimulation in Norman Doidge's book "The Brain's Way of Healing". I checked his website, and there's a message saying the FDA might approve the device by the end of 2018.

Here is the message:

PoNS UPDATE, January 2018. Information on availability of the PoNS will now be through the manufacturer of the device, Helius Medical Technologies. In brief, some good news is that the final patients in the studies required for FDA approval finished their treatment in May and July 2017. The two studies which are the necessary prerequisites for FDA approval have now been completed, and a final package with the results is currently being prepared by Helius for FDA submission. But the PoNS cannot be made available to the public until FDA approval comes through. The latest guesstimate we have heard is that it could take until the end of 2018 for the FDA to release its decision.

This may seem confusing, because the PoNS was available to patients who were in the well-known studies (for instance, the U.S. Military study of its use for treating traumatic brain injury and the Montreal Neurological Institute study of its use for multiple sclerosis patients). But now that the studies are complete, the PoNS cannot be available to anyone until approved by the FDA. We know this is frustrating for those hoping to get access to a PoNS, and who had hoped it would be available by now, but this pace is not unusual in the approval of new cutting-edge devices.

Other news is that there is a migration of PoNS development activity to Helius. Because the PoNS studies have been completed, the Tactile Communication and Neurorehabilitation Lab that opened in 1992 and developed the PoNS and many other inventions has been closed, and the three scientists who invented the PoNS, Yuri Danilov, PhD, Kurt Kaczmarek, PhD, and Mitch Tyler, PhD, are now consulting for Helius on how to refine it. The TCNL lab website still has 50 research papers related to the PoNS posted on it, here.

As in they would actually have the subjective experience of sight? Or would they just become aware of the three dimensional layout of the objects?

Seems like you could answer that question by wearing a blindfold and using the device for a few months. But I doubt you are going to find a volunteer.

That said, if you use a tool for long enough it seems to gain a sense of touch. Your brain is just interpreting what you feel in your hands, but from the brain's perspective it's just signals either way and the shortcut is useful.

Many blind people have previously had normal vision. So if such a study featured them, they would be able to answer that question.

That's only half the story: what happens when a sighted person takes off their blindfold after learning how to use the device? I picture some form of synesthesia, but it's hard to say.

The most interesting thing would be if their eyes stopped being integrated for a while. https://m.youtube.com/watch?v=MFzDaBzBlL0

This looks very interesting. I was amused on my first skim-through to see:

"One-hundred common English words were selectedfor the present study (see Table 1; the 8 groups are explained later in Sec. Error! Reference source not found.)"

Is this study referring to something other than braille, or is braille included under "English Words Presented to the Skin"?

"Previous studies have explored the role of training in the use of haptic devices. For example, a study concerned with the acquisition of Braille by sighted learners has demonstrated that a corresponding visual display was beneficial for the acquisition of haptic Braille [13]. This result suggests the use of a visual display in the current study, corresponding to the activation of vibrators on the tactual display. In addition, the efficacy of correct-answer feedback for perceptual learning tasks is well-established [14], indicating that correct-answer feedback should be employed in the learning procedure. With the phonemic-based tactual display designed for conveying English words, we consider two training approaches to training: phoneme-based and word-based [15]. The phoneme-based approach, which operates on a “bottom-up” theory of learning, concentrates on maximizing the individual’s ability to discriminate between and identify the individual sound patterns of speech [15]. The word-based approach is based on a “topdown” theory of learning. It bypasses the training of basic phoneme patterns and starts with words directly [15]. Previous studies of speech training have employed single or combined approaches [16-18]; however, these studies have not led to definitive conclusions for choosing one approach over another."

AIUI, the ultimate goal of this work is to present spoken, rather than written, words to the skin. The idea is that you can process spoken language into phonemes, and then present those to the skin, although in this study, the phonemes were generated directly.

I assume the belief is that it's easier to reliably encode speech to phonemes in real time than it is to encode it to text.
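To make the pipeline concrete, here's a minimal sketch of the idea: take a word as a phoneme sequence and turn it into timed vibrator-activation frames on a tactile array. The phoneme inventory and vibrator assignments below are invented for illustration; the study's actual phoneme-to-tactor encoding is different and far more carefully designed.

```python
# Hypothetical sketch: map a phoneme sequence to frames of active
# vibrators on a tactile array. The phoneme codes and vibrator
# assignments here are made up for illustration only.

# Each phoneme activates a subset of a 4x6 vibrator grid,
# identified by integer indices 0..23 (all assignments invented).
PHONEME_TO_VIBRATORS = {
    "K":  {0, 5},
    "AE": {7, 8, 13},
    "T":  {2, 3},
    "S":  {10, 15},
    "IH": {6, 12},
}

def encode_word(phonemes, frame_ms=200):
    """Turn a phoneme sequence into sequential, timed activation frames."""
    frames = []
    t = 0
    for p in phonemes:
        vibs = PHONEME_TO_VIBRATORS.get(p)
        if vibs is None:
            raise ValueError(f"no tactile code for phoneme {p!r}")
        frames.append({"start_ms": t, "duration_ms": frame_ms,
                       "vibrators": sorted(vibs)})
        t += frame_ms
    return frames

# "cat" as /K AE T/ becomes three sequential frames:
frames = encode_word(["K", "AE", "T"])
for f in frames:
    print(f)
```

The point of the sketch is the shape of the problem: a speech recognizer only has to reach the phoneme level, not full orthography, before handing off to the tactile encoder.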
