The blind man who taught himself to see (mensjournal.com)
388 points by philk on March 3, 2011 | 84 comments



"Running into a pole is a drag, but never being allowed to run into a pole is a disaster," he writes. "Pain is part of the price of freedom."

Pain is part of the price of freedom. That's a truth that could stand to be applied more widely.


Kish figures it would require $15 million to prove whether or not his idea is feasible. He fears he’ll never get the opportunity.

Sounds like he needs some real angels, those who measure the success of their investment in something bigger than dollars. How does one go about finding and pitching to them?


Can't he use lean startup principles to launch with his existing device that offers an improvement over a cane, instead of raising $15 MM to offer a perfect product that lets blind people play tennis? If I were blind, I would surely pay thousands of dollars for the K-Sonar device if it could enhance my sensory input beyond "what I feel with my cane within 2 feet" to "everything around me within 15 feet".

If he got the K-Sonar approved as a medical device, I think he'd be able to get paid by deep-pocketed insurance companies too.

Improving the range (and doing research on inner-ear microphone implants) could be done in later versions (if mainstream blind customers really are interested in playing tennis, which I am skeptical about). If v1 makes people more independent, that's a huge deal already!

I bet he could get pretty far just by reallocating his existing $200k/year budget to commercializing the technology, which has the potential to be embraced by a lot more people than the blind people he is introduced to (only 10% of whom even get good at echolocation-through-clicking).


Part of it involves surgery - which might be tricky to do incrementally.

It also doesn't sound as though he knows much about startup methodology or has the personality or desire to solve the bootstrapping problem. He seems like a domain expert but not an entrepreneur.

I guess that means a lot of blind people will have fewer opportunities.


>If I were blind, I would surely pay thousands of dollars for the K-Sonar device if it could enhance my sensory input beyond "what I feel with my cane within 2 feet" to "everything around me within 15 feet".

There is an easily (and cheaply - I'd say tens of dollars; see prices for ultrasonic parking sensors) implementable "virtual cane": a laser (as in a laser pointer) plus a receiver plus a small chip that rescales the sensor's time-of-flight output to the hand, from the light-speed scale (10 m / 3×10^8 m/s ≈ 3×10^-8 s) to a tactile-feedback scale of tenths of a second. The same can be done with ultrasound. The two can even be paired, for precision and for cases of, say, surfaces that reflect light (or sound) poorly.
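To make the rescaling concrete, here's a toy Python sketch of the idea. read_round_trip_seconds() and pulse_motor() are hypothetical placeholders for the range-sensor and vibration-motor drivers, not any real API:

    import time

    SPEED_OF_LIGHT = 3.0e8   # m/s, for a laser time-of-flight sensor
    MAX_RANGE_M = 10.0       # ignore anything farther than this
    MAX_DELAY_S = 0.5        # tactile scale: tenths of a second

    def distance_from_round_trip(t_round_trip_s):
        # Time of flight covers the path out and back, so halve it.
        return SPEED_OF_LIGHT * t_round_trip_s / 2.0

    def tactile_delay(distance_m):
        # Linearly rescale 0..10 m onto 0..0.5 s, so nearer obstacles
        # produce faster (shorter-delay) vibration pulses.
        return MAX_DELAY_S * min(distance_m, MAX_RANGE_M) / MAX_RANGE_M

    # Hypothetical main loop:
    # while True:
    #     d = distance_from_round_trip(read_round_trip_seconds())
    #     time.sleep(tactile_delay(d))
    #     pulse_motor()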

The main problem, I think, is that blind people have their hands full just trying to live their lives, while the sighted don't care, or aren't passionate enough, to cross the threshold into actually doing something. The guy in the article is just a miraculous exception.


Unfortunately, medical devices don't really allow incremental development. Studies are incredibly expensive, and relatively minor changes can completely invalidate them.

Even Steve Blank recognizes the model doesn't fit for medical devices.


For the first time in my life, I truly wish I had the money to be an angel investor.


Peter Thiel? He's a transhumanist, and this seems pretty doable. Anyway, if he hasn't pitched Thiel yet, he should; the worst that could happen is that Peter says "no, that's not how I want to spend my money," and that'd be fine.


The Bill & Melinda Gates Foundation might be interested too. They're interested in medicine and technology; that would be a good fit.


Kickstarter.


Kickstarter is a great platform for sales and marketing once you've got a functional and scalable prototype. It's not a great place to raise exploratory / research funding.

If you look at the history of Kickstarter's most successful projects, they tend to resemble pre-sales more than "donations." There's usually a product delivered at X level of individual funding, and 90% of the funding comes in at X level.

This model is fantastic for launching new products that are ready to go upon receipt of funding. It's not as successful for raising money for R&D.


Didn't Diaspora raise all of that cash through Kickstarter with basically nothing in return?


I think the jury's still out on Diaspora. I'll give them the benefit of the doubt for the time being. That's my polite and respectful way of saying yes to you here.

But it's the case in point. Diaspora raised what would otherwise have been VC or angel dollars. Kickstarter isn't the platform for that. It's a bootstrapping platform optimized for product sales.

The people who succeed on Kickstarter have a) finished or finishable products that b) can be sold via "donations" on Kickstarter. This is because the patrons/backers on Kickstarter are far more likely to fork over their $50 and receive something tangible in return than they are to donate $50 to support a dream.

I'm a big fan of the Kickstarter concept. Huge fan. But human nature dictates the limits and strengths of its use cases. Your audience on Kickstarter is consumers, not financiers. These are consumers who receive no equity or charitable tax deduction for their "donations," and who will thus "donate" primarily in exchange for privileged or early access to a cool product. Or maybe to get their names in the back of the book. Whatever. Point is, they want to buy something. Inevitably, then, it becomes a sales and marketing platform and not a venture capital platform.

It's theoretically possible that the next Apple would come out of Kickstarter, but not the next Facebook.


Here's a teenager who could do the same thing: http://www.youtube.com/watch?v=YBv79LKfMt4

The video is from when he was 14; he died of cancer at 16.


I saw Ben's story on a documentary or news program a few years ago, and it was the first thing I thought of reading this article. I had no idea he had died - a sad note to a series of uplifting life stories.


Site with more information: http://www.benunderwood.com/


Daniel Kish was featured in Derren Brown's 'Misdirection' series. The clip here shows him using echolocation to describe the shape of a car and ride a bike: http://www.youtube.com/watch?v=MjFcvixIRxs

'You might think of it like being in a choir or an orchestra... The world is like a living symphony... This car is kind of like an instrument, except that, instead of making its own sound, it's reflecting sounds I'm making.'

Extraordinary.


Indeed! Unfortunately, we will never be able to practice echolocation. Imagine if we were able to give him back his vision... what would he be able to do from there!


Why won't we be able to practice echolocation? There's nothing stopping you (apart from the fact that it's not very useful when you can see)...


Well, when you grow up without any vision, your other senses develop at a much faster rate, especially hearing. Because we grew up with vision, we know what it is to see, and our ears will never have the capability to develop the same way.


The article mentions that he has successfully taught echolocation to someone who lost his sight at 14 years old, so you seem to be underestimating the powers of the human brain.


Then it's a field we must take advantage of. We're barely using the full capacity of our senses; we're too lazy with our eyes. Trying to develop the human senses is something we should pursue in the future... I'm sure there's a little budget for that in the 3 trillion they spend on military ventures!


He's not the only one. Here's a documentary about a boy who uses the same technique: http://www.youtube.com/watch?v=qLziFMF4DHA

I once saw a test of this on some popular-science TV show. They blindfolded a few people and had them walk toward a door (or something like that) to check whether they'd stop before reaching it, and somehow they managed to do it. We can use echolocation by nature; we just don't develop the skill.


You see Daniel Kish in the doc as well: http://www.youtube.com/watch?v=3Px-aPnk4ZU


Please please please don't link to pages that open the print dialog :(


I'd rather have a single page which opened the print dialog than have multiple pages.


It's probably the only all-in-one-page version of the article.

Although I do agree the dialog popping up at the beginning is annoying, it's only one click (or key) to dismiss it; I prefer that to having to click on several "next page" links.


Then surely it makes sense to link to the root page which has a (single-click) link to the print page?


Which then opens the print dialog, so actually two :P


Only if you're horrified by print popups.


"He’s tired of being told that the blind are best served by staying close to home..."

Kudos to him. It's amazing how people have the ability to reprogram their bodies in order to overcome their handicaps.

Reminds me of a story "Blind Teen Gamer Obliterates Foes" http://www.wired.com/gaming/gamingreviews/news/2005/07/68333


I really enjoyed this article. I do take issue with a couple of sentences:

"We hear in stereo 3-D." This is not true unless we move our heads. We hear in one dimension, left-to-right.

"We hear better than we see." This is an apples-to-monkeys comparison.


> "We hear in stereo 3-D." This is not true unless we move our heads. We hear in one dimension, left-to-right.

Put on some headphones and listen to this and tell me you cannot visualize it in 3-D:

http://www.youtube.com/watch?v=YuxaOO6PsAw

This is a binaural recording. More info on these here: http://en.wikipedia.org/wiki/Binaural_recording

You can hear in multiple dimensions because, as the original article mentions, there is some post-processing in your mind due to the time delay between sounds entering your left ear and sounds entering your right. Just because your ears are aligned on a single axis does not mean that you cannot perceive in multiple axes. No moving of your head is necessary to perceive in more than one dimension.


It actually also works because a sound registers differently based on the direction it travels through your own head (i.e., your skull itself is an occlusion source) - your brain recognizes front vs. back based on how your own head dulls the sound.


I assume the shape of our ears also affects the sounds, depending on the location of the sound. This converts vertical location into frequency shaping.

And I wonder, why else would we have such oddly shaped ears?


You would be correct. The shaping of the pinna (and the resulting sound reflections) helps us determine the height of sound sources, even if they're right on our midline.


I listened to these so much several years ago (in particular one with somebody shaking a matchbox) that my brain apparently realized, to some extent, that it was being tricked. Ever since then, recordings like this don't work properly. Anything that's supposed to sound like it's coming from in front of me gets flipped around so that it's behind me. It's very strange.


We can hear true 3D. There are actually a couple of ways in which we can sense the direction of sound. One is loudness, which obviously only works for left-right. This is loudness only in the sense of sound-gets-more-silent-farther-from-the-source.

Another is the phase difference of the sounds in both ears. Phase difference comes from the fact that sound arriving at one ear had a longer way to travel than sound arriving at the other ear. It tells a more complete story of direction than loudness does, and it is very precise.

Thirdly, there is frequency response. Both our head and our ears absorb different frequencies at different angles. Also, there are some reflections from our shoulders and chest.

Taken all that together, normal people can detect the origin of sound sources to about one degree of precision.
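To put numbers on the phase/time-difference cue, here's a toy Python sketch using the standard far-field approximation ITD = (d / c) * sin(theta), with an ear spacing of roughly 0.21 m (my own back-of-the-envelope numbers):

    import math

    EAR_SPACING_M = 0.21     # approximate distance between the ears
    SPEED_OF_SOUND = 343.0   # m/s in air

    def itd_from_angle(theta_deg):
        # Interaural time difference for a far-field source at
        # theta degrees off the midline.
        return EAR_SPACING_M / SPEED_OF_SOUND * math.sin(math.radians(theta_deg))

    def angle_from_itd(itd_s):
        # Invert the approximation: bearing in degrees from a measured ITD.
        s = max(-1.0, min(1.0, itd_s * SPEED_OF_SOUND / EAR_SPACING_M))
        return math.degrees(math.asin(s))

    # A source 30 degrees off-center arrives ~0.3 ms earlier at the near ear:
    print(itd_from_angle(30))         # ~0.000306 s
    print(angle_from_itd(0.000306))   # ~30 degrees

Note this cue alone only resolves left-right; the frequency-response cues are what break the front-back ambiguity.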

But, there is more. Close your eyes and have another person talk while turning his head in different directions. You can clearly detect the direction he is talking to. At the same time, you can roughly sense the size and type of the room you are in. Also, you can do this while being in really noisy environments (say, a car in traffic with the radio playing and the kids in the back).

Talking about seeing vs. hearing: The response time of the ear is about 10-50 ms. It usually takes 200-500 ms to make sense of something you see. The human ear has a frequency resolution of about one Hz (at <500 Hz), a dynamic range of about 120 dB and a frequency range of about 10 octaves. The eye can only detect a very small dynamic range in comparison, and about one octave of wavelengths. But most importantly, the eye can only detect averages of three distinct, fixed frequency ranges (red, green, blue; red and green overlap 90%). The ear has floating detection windows and uses an arbitrary amount of different windows at any one time. That is, the ear can detect any spectral distribution with great precision, while the eye only detects three distinct spectral windows.
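Quick sanity check on those "10 octaves" and "120 dB" figures, assuming a 20 Hz - 20 kHz hearing range:

    import math

    # Octaves spanned by 20 Hz - 20 kHz:
    print(math.log2(20_000 / 20))   # ~9.97, i.e. about 10 octaves

    # 120 dB of dynamic range as a ratio:
    print(10 ** (120 / 10))         # 1e12 in power terms
    print(10 ** (120 / 20))         # 1e6 in amplitude terms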

So in a lot of ways, the ear is physically way more precise than the eye. Of course, signal processing makes all the difference and the amount of signal processing going on in our acoustical and visual brain centers is just staggering. There is nothing in the technological world that even comes close to that.


The eye ends up taking in a lot of information from every point you look at though, whereas the ear is always listening in every direction at once. Because of the kind of processing in the eye and brain, our three types of cone cells actually end up capturing most of the information encoded in the different reflectance spectra of the kinds of objects that naturally occur in the world (most of the variation occurring in typical objects in the wavelength range we can detect): today with synthetic materials and various lighting sources metamerism is noticeable in special edge cases, but just wandering around it usually doesn’t matter too much. After all, a decent percentage of males get along just fine as dichromats, and many don’t even realize they’re missing anything until adulthood (because even with two sensor types there is a lot of variation to pick up).

I think it’s pretty silly to argue that hearing can pick up more detailed information about the world than vision can: from vision, we can figure out shape, orientation, distance, texture, gloss, material, lighting, direction and speed of extremely tiny motion, etc., and we can do it for fine details of everything we look at, thereby constructing a tremendously detailed model of our environment without needing to directly touch every part of every thing. Anyway, both hearing and vision are extremely sophisticated. The two gather quite dramatically different kinds of information. Neither should be underestimated.


Of course, this is completely true. But note that I did not talk about the amount of information but the precision of information. Also note that every cone cell is equivalent to one hearing cell in the cochlea, of which there are plenty, too. (Only in the eye, they are distributed spatially while in the cochlea, they are distributed by frequency)

But all this is really not as important as the signal processing that makes sense of it. There are interesting connections between hearing and seeing. If you watch TV and someone at your side turns his head to you in order to speak, you will notice and shift your attention to him. You will think that you saw his head movement in the corner of your eye. But truth is, many people wearing hearing aids will not notice the same situation, for the simple reason that you actually did not see his head movement, you heard it.

There are many more examples where things like this happen. What you perceive is different from what your senses detect. All these intricate combinations of sensory information are the really interesting part.

Another fun thing about hearing: The human ear can detect very low sound pressure levels. Actually, it will detect a displacement of the eardrum of about the diameter of an air molecule. In a way, this is saying that the ear can detect the impact of individual air molecules on the ear drum (not really true, but in the ballpark). That is freaking amazing.


"a dynamic range of about 120 dB" Sound of a jackhammer at 1m has sound pressure level of approximately 100dB (relative to 20 uPa). Leaves rustling gently in the wind have sound pressure level about 10dB. If our hearing actually had 120dB dynamic range we'd be able to easily hear the rustling of the leaves while jackhammer is pounding right next to us, not to speak of normal conversation which roughly has level of 50-60dB.

As an analogy to radio equipment we might say that human ears have automatic gain control which spans 120dB (or as audio comparison we can adjust the volume on that span) but the range which we can discern two sources (the instantaneous dynamic range) is about 30dB or less.

Also, about the frequency resolution of the ear vs. the eye: the bandwidths are on completely different scales. Visible light spans about 385 THz, which means the bandwidth of that signal is 19,250,000,000 times the bandwidth spanned by our ears. You cannot do a simple octave-based comparison, as the amount of information depends not on the number of octaves but only on bandwidth. You are correct in the sense that the eye uses this information in quite a limited fashion; however, the actual processing is not in the normal frequency domain but in the spatial domain (the eye is not just one sensor but craploads of them).
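(Sanity check on that ratio, assuming ~385 THz of visible-light bandwidth and ~20 kHz of hearing bandwidth:

    light_bandwidth_hz = 385e12    # visible light, ~400-785 THz
    hearing_bandwidth_hz = 20e3    # hearing, ~20 Hz - 20 kHz

    print(light_bandwidth_hz / hearing_bandwidth_hz)   # 1.925e10 = 19,250,000,000

)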


http://en.wikipedia.org/wiki/Impulse_response#Acoustic_and_a...

(eta: just posting the link seems kinda snarky -- this ties into your post, but i don't have anything else to add)


Yes and no.

There's something funny going on with front-back perception. Normally it shouldn't be doable, but the external part of the ear is like a direction-dependent frequency filter. When the brain hears a familiar sound, but the spectrum is skewed just like this, it goes "aha, this comes from the front side". If the spectrum is skewed the other way, it goes "this comes from behind".

It's not 100% reliable, but it works sometimes.


http://en.wikipedia.org/wiki/Sound_localization

There's a lot more to it than that, and it's not just something that occasionally works. The outer-ear notch filtering is also used in vertical localization.


Also: people who depend on behind-the-ear hearing aids do not get the benefit of these echoes, which means that, for example, it’s harder for them to distinguish the voice of someone talking to them from background noise or from echoes off the walls.


>We hear in one dimension, left-to-right.

While our localization abilities vary by axis, we do have them in all three directions due to the combination of several mechanisms, none of which are actually 'left-to-right'.

http://en.wikipedia.org/wiki/Sound_localization


How do you "explain" to someone who was born blind that he is "blind"?


On Reddit recently, in an "I am blind, ask me anything" thread, this comment appeared:

"Do you see blackness all the time?"

"No, I don't know what blackness is; I see the same as you see out of the bottom of your feet".


Wow. What a beautiful answer that perfectly illustrates the point. Thanks.


Possibly too late to be seen now, but I've been wondering about this too; for instance, how do you talk about windows?

Does a blind person who has a sense of the size of a room have any idea that windows are transparent and offer a view over a vast area of cityscape/fields/oceans/length of road/etc.?

Standing on a hilltop or in a viewing tower, you can see people in the landscape, but too small to see their faces or identify them - still identifiable as people by shape and gait.

I often automatically turn to Google images for an idea of what something I'm unfamiliar with "is", having to revert to a dictionary and textual description would feel like a step backwards, and that's a very recent (last 5 years) development.


Beautiful question.

You could also ask if blind people are able to perceive vision in dreams. To which the answer, I believe, is that they simply cannot, due to the fact that their brains have no notion of a visual image.


'...dreaming is a gradual cognitive achievement that requires the development of visual and spatial skills and other forms of imagistic skills as well.'[1]

Those who've never seen, or who were blinded very early, have auditory dreams. (They dream in sounds.) Those who were blinded after the age of around seven -- when the ability to form mental images necessary for dreaming develops -- are able to dream in pictures.

[1] http://psych.ucsc.edu/dreams/Library/kerr_2004.html


I'm not sure about that. Sight happens in the brain. When the eyes don't work, that doesn't have to mean the visual part of the brain doesn't work.

Related: http://news.ycombinator.com/item?id=2185429


I wonder how a brain that has never "seen" an image would perceive technology that allows the blind to see, using this:

http://vision.wicab.com/technology/


I suspect we commonly underestimate blind people's mental capacities.

One can argue that 'seeing' goes beyond the perception of lights and colors. It's also about shape, so I guess a blind person can perfectly 'visualize' a rectangle, a cube, etc. And probably do geometry, or even 3D-object rotations.

Can anyone confirm this?


I read something recently by a blind guy who said that sometimes he perceived some amount of light in his dreams. This was because his blindness is not complete blindness. There are varying degrees of it. In his case, he could detect some light changes.


This breaks my heart.


This, along with sign language, is something I, as a sighted, hearing person, want to learn. (Even with fully operational eyes you can't see in the dark, or in smoke. Even with fully operational ears you can't hold a conversation beyond shouting distance.)


My great-grandfather went blind years before I was born. When I was 5, he pointed out birds by kind, in flight and at a distance, and built all kinds of house furniture and did home repair. Strangely to me, he was somehow able to 'see' the tape measure; maybe he just had a keen sense of measurements. A few years later, his sight started coming back, he said; then he passed. This is not such a great story, just an example that the human brain and personal encouragement can go a long way. Hats off to Mr. Kish.


Higher frequencies, as bats use, give finer resolution. This seems impossible for humans, but since we have cochlear implants, there's no reason they couldn't respond to a shifted audio spectrum. Or, just have an external hearing aid that frequency-shifts the sounds (and cancels out the original), together with a higher-frequency sound source.
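For what it's worth, the frequency-shifting part is a standard DSP trick: single-sideband shifting via the analytic signal (Hilbert transform). A minimal sketch, assuming you just want to slide an ultrasonic band down into the audible range:

    import numpy as np
    from scipy.signal import hilbert

    def frequency_shift(x, shift_hz, sample_rate):
        # Shift every frequency component of x by a fixed offset
        # (down for negative shift_hz), heterodyne-style, rather
        # than scaling frequencies as resampling would.
        analytic = hilbert(x)
        t = np.arange(len(x)) / sample_rate
        return np.real(analytic * np.exp(2j * np.pi * shift_hz * t))

    # Example: a 30 kHz tone sampled at 96 kHz, shifted down to 5 kHz.
    fs = 96_000
    t = np.arange(fs) / fs
    ultrasonic = np.sin(2 * np.pi * 30_000 * t)
    audible = frequency_shift(ultrasonic, -25_000, fs)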

OTOH, eye implants are probably not that far off.

> Just the auditory cortex of a human brain is many times larger than the entire brain of a bat.

This really startled me. It seems plausible that we could do as well, with early training etc. Plus, I imagine that some of the higher-order processing of the occipital cortex would also be seconded.


If you're interested in reading more about pushing the limits of remapping the human brain, check out The Brain That Changes Itself by Doidge. Fascinating.


>Others, like a commenter on the National Federation of the Blind’s listserv, consider him “disgraceful” for promoting behavior such as tongue clicking that could be seen as off-putting and abnormal.

Appalling. Forcing people to remain crippled because solving it might be "off-putting" or "abnormal" [1]. Blind people being able to do all the things the sighted can do (and more, in some cases) is worth hearing a few clicks now and again.

[1] I say "solving" because Kish doesn't seem to be at a strict disadvantage. He has disadvantages and he has advantages in areas we don't (e.g. finding the way out of a car park, "seeing" around corners, etc.).


I know I've only been thinking about the problem for about an hour, but it seems like you could hack together a solution involving a Kinect, with its depth data processed into audio output, for a lot less than $15 million. The Kinect's range isn't 1,000 ft (I think more like 10, tops), but it's a "the future will inevitably upgrade" part. Audio output would be in normal hearing ranges, no surgery would be required, and it could be adapted to different users.
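Something like this, as a toy sketch. Assume you already have a depth frame as a NumPy array (the Kinect plumbing is elided); sweep it left to right, mapping horizontal position to stereo pan and distance to pitch:

    import numpy as np

    SAMPLE_RATE = 44_100

    def column_tone(depth_m, pan, duration_s=0.05):
        # One short stereo tone per image column: nearer obstacles
        # get a higher pitch; pan in [-1, 1] runs left to right.
        freq = 200 + 2000 / max(depth_m, 0.5)
        t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
        tone = np.sin(2 * np.pi * freq * t)
        return np.stack([tone * (1 - pan) / 2, tone * (1 + pan) / 2], axis=1)

    def sweep_frame(depth_frame):
        # Scan the (rows x cols) depth frame, one tone per column,
        # using the nearest obstacle in each column.
        cols = depth_frame.shape[1]
        return np.concatenate([
            column_tone(float(depth_frame[:, c].min()), 2 * c / (cols - 1) - 1)
            for c in range(cols)
        ])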


He's Daredevil!


For those who don't know - Daredevil is a comic book character who is blind but has trained his other senses to compensate.

http://en.m.wikipedia.org/wiki/Daredevil_(Marvel_Comics)

Not an amazing contribution to the thread but relevant and hardly worthy of punishing with downvotes.


Filtering non-contributions with downvotes is part of how Hacker News is supposed to work. Punishment is beside the point.


Pretty amazing. It's quite difficult for us to understand echolocation since we're able to see, but it's also difficult for him to understand vision since he can't. I think vision is the laziest sense we have - it's too easy. The body has great potential to use our senses to the max; we're just not using it.


Is it possible to learn echolocation without being blind? As a kid, my doctors were worried my deteriorating vision would leave me blind, so I learned Braille (that never happened). I wonder whether I could have learned this skill while sighted (when your auditory senses aren't compensating for loss of sight).


What the...? How is this even possible? Am I the only one doubting the story?


It's a pretty well-documented phenomenon.

http://en.wikipedia.org/wiki/Human_echolocation


Here's a post with a video to quell your doubting mind: http://news.ycombinator.com/item?id=2284276

(Direct Link: http://www.youtube.com/watch?v=YBv79LKfMt4)


I've read such a story about somebody else before (I think it was somebody else - younger, and colored). So it seems the same trick has been invented several times by blind people.

Still very cool.


Amazing that his mind adapted so that he in fact "sees" the echoes as flashes of light. I don't think this is a fake story, since what we see happens in the mind, not in our eyes.


No more print articles, please. I'm tired of print dialogs popping up, un-asked-for. Use Readability if you don't like the layout.

http://www.mensjournal.com/the-blind-man-who-taught-himself-...


It was probably done to get the article onto one page, instead of the five pages of the normal version.


Incidentally, I think Readability attempts to get the whole article onto one page via JS.


Wow. What an incredibly uplifting story.


Indeed! If only he got more mainstream support, this could be expanded to more people.


Not sure why he wouldn't use this; it seems vastly superior to the system he uses:

http://www.youtube.com/watch?v=8xRgfaUJkdM&feature=playe...


I don't know about that. While that system is clearly easier to use, from what I've heard about echolocation it provides much more detail than that system.

If you look at my other post, I linked to a video of a kid who used echolocation. He could distinguish objects - he walked past two garbage cans in the street and said, "Is that a car? [clicks his tongue several times] No, it's a garbage can."


I used the software I linked to above (I'm not blind); you can download it to a laptop and plug in a webcam and earphones. I could tell the difference between a car and a garbage can with no practice at all. Can he click with his tongue and tell whether the picture on the wall is of a square or a circle? No, but this software can enable him to do that.

I'd like to get his opinion of this: http://www.seeingwithsound.com/


I stand corrected. It's hard to tell from the video.


You wouldn't be able to hear someone walking behind you, or to your side, or around a corner. This emulates a low-refresh-rate, very limited field of vision at the cost of reduced hearing. What he's interested in is absolute gains in hearing. I don't think trying to emulate a worse copy of vision fits with his philosophy.


So incredible. I would love to learn to do this myself - seeing the world through sound must be so different. If I were a billionaire, I'd invest that $15M and hope I could learn.



