
A Kenyan Engineer Who Created Gloves That Turn Sign Language into Speech - MaysonL
https://www.becauseofthemwecan.com/blogs/culture/meet-the-kenyan-engineer-who-created-gloves-that-turn-sign-language-into-audible-speech
======
throwaway77384
[https://mimugloves.com/](https://mimugloves.com/)

I worked with these folks for a while. I also work with the deaf community.

Sign language gloves are an interesting idea, but they don't work. Sign
language relies heavily on facial expressions and body language beyond the
hands.

This was also tried here before: [https://www.huffingtonpost.com/entry/navid-azodi-and-thomas-pryor-signaloud-gloves-translate-american-sign-language-into-speech-text_us_571fb38ae4b0f309baeee06d?guccounter=1](https://www.huffingtonpost.com/entry/navid-azodi-and-thomas-pryor-signaloud-gloves-translate-american-sign-language-into-speech-text_us_571fb38ae4b0f309baeee06d?guccounter=1)

But deaf people aren't actually that keen on these solutions, as I found out
when I proposed this to some of them myself.

~~~
philipps
You are absolutely correct, and while the solutions don't work, that is only
part of the problem: the fact that engineers think they can come in and "fix"
communication for the deaf community is a bigger issue. Directly involving
your users (as you did) is so important when designing for someone with
different needs and preferences. I hope other well-meaning, talented engineers
read your comment and take it to heart.

~~~
mettamage
By reading your comment, which made that very clear, I did!

Taken to <3

I think more people should take it to heart. If you want to influence cultures
for the better, you need to interact with them. It's obvious when you know it,
less obvious when you don't.

------
dbt00
Seems like a fun hardware hack, but if he wants his niece to be able to
communicate effectively he should probably just learn frickin' sign language.
It's unlikely that these can read a full vocabulary rather than just the
alphabet, which is incredibly limiting.

So, cool, looks like fun to hack on, almost certainly not newsworthy.

~~~
mav3rick
This is such a negative attitude. A person tried something new in the
accessibility space, so what if you think it's trivial.

~~~
ceejayoz
There's a long history of folks trying "something new" in the accessibility
space and discovering that, had they talked to the people with impairments
they're trying to help, they'd have learned there are significant problems
with the approach.

As you can imagine, this external savior thing can be pretty frustrating to
the people who actually live their lives with a particular impairment.

------
bigmit37
I really want to play around with sensors and create projects similar to this.
I am fascinated with sensors.

Is this something that is possible with an Arduino or Raspberry Pi? Or is
there something else I should look into?

~~~
tehlike
You should be able to do it with a camera and TensorFlow, training on images
of gestures.

------
anonytrary
So you have sensors all over the fingers, use that information to get the
current gesture vector, then find the basis vector that makes up the majority
of the current gesture, look up the word in a Map(gesture -> word), then pass
the word to a TTS engine. That'd be a toy model. What they actually do is
probably more nuanced and precise.
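
A toy version of that in Python might look like the following; the gesture
vectors and vocabulary are invented, and pyttsx3 is just one convenient
offline TTS library:

    import numpy as np
    import pyttsx3  # offline text-to-speech

    # Invented calibration data: one reference vector per known gesture.
    # A real glove would have many more sensor channels and gestures.
    GESTURES = {
        "hello":     np.array([0.9, 0.1, 0.1, 0.1, 0.1]),
        "thank you": np.array([0.1, 0.9, 0.8, 0.1, 0.1]),
        "yes":       np.array([0.8, 0.8, 0.1, 0.1, 0.1]),
    }

    def classify(reading):
        # Pick the reference gesture with the highest cosine similarity.
        def cos(a, b):
            return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return max(GESTURES, key=lambda w: cos(GESTURES[w], reading))

    reading = np.array([0.85, 0.15, 0.05, 0.1, 0.1])  # fake sensor frame
    engine = pyttsx3.init()
    engine.say(classify(reading))  # -> speaks "hello"
    engine.runAndWait()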

------
TeMPOraL
Even though this isn't going to be useful for interacting with deaf community,
if you can turn gestures into words easily, I feel military might be
interested. Also possibly divers, and other people working in conditions where
they can't or shouldn't speak.

------
selimthegrim
Can someone do it the other way? UC Berkeley was forced to take down countless
hours of recorded lectures because they were violating federal law by not
having a provision for blind or deaf people to listen to them.

~~~
lsiebert
UC Berkeley was the subject of a Justice Department investigation which stated
they weren't in compliance with the law, but they were never actually sued or
subject to a court order to take down anything. They made a unilateral
decision.

They decided not to pay for captioning, image enhancement, or audio
descriptions for their old lecture content, even though the Justice Department
letter said:

"UC Berkeley is not, however, required to take any action that it can
demonstrate would result in a fundamental alteration in the nature of its
service, program or activity or in undue financial and administrative
burdens".

They could arguably have fulfilled their requirements by working to update
material based on actual demand, and by having a process to request updating
such material. Court-ordered consent decrees are negotiated and approved by a
judge, and they could have attempted to negotiate something to address the
complaint.

UC Berkeley is a public institution, with expenses paid in large part by
federal and state funds, and those funds come with requirements not to
discriminate on the basis of disability. Instead of actually addressing that
discrimination in a meaningful way for their content, they chose to simply
stop providing the service. You can have your opinion on whether that's the
right decision, but they weren't forced to make one decision or another.

And per UC Berkeley, it wasn't just the Justice Department letter, but also
the content's "limited use" online and a desire "to better protect instructor
intellectual property from 'pirates'".
[https://news.berkeley.edu/2017/03/01/course-capture/](https://news.berkeley.edu/2017/03/01/course-capture/)

------
lsiebert
I may have written too much. TLDR: This is typing English letters with hand
gestures for each letter, not sign language translation, which is HARD.

There have been devices like this since the 80s, because it basically does
hand shapes, not signs (which have movement, grammar, spatial references, body
placement references, and other aspects that a hand-and-wrist device won't
capture).

This looks suitable for fingerspelling only. (Think: instead of words,
spelling out everything you want to say using the letters of the alphabet, and
you'll see why this isn't a full solution. Bear in mind that sign languages
are distinct languages with their own grammar and syntax, not just English
words as signs, so imagine spelling out what you want to say, except it's
transliterated French using Latin declensions or something.)
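
In software terms, fingerspelling output is just a character stream with
pauses as word boundaries; a toy sketch, with classify_letter standing in for
a hypothetical per-frame hand-shape classifier on the glove:

    # Fingerspelling produces letters, not signs: words only emerge
    # when the signer pauses. classify_letter is a made-up stand-in
    # for the glove's per-frame hand-shape classifier.
    def spell_out(frames, classify_letter):
        word = []
        for frame in frames:
            letter = classify_letter(frame)
            if letter is None:  # pause = word boundary
                if word:
                    yield "".join(word)
                    word = []
            else:
                word.append(letter)
        if word:
            yield "".join(word)

    # e.g. list(spell_out(frames, classifier)) -> ["HELLO", "WORLD"]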

Better than nothing, sure, but hardly translating sign language.

Can you do it with a camera and machine learning? I very much doubt it. I am
not sure current gesture segmentation methods could pick out discrete signs or
even hand shapes in different orientations, given that there isn't a neutral
state one returns to between "words".

Then you need to do gesture classification, differentiating based on hand
shapes (which can transform during a sign; the nature of the transition, its
speed, the size of the overall gesture, incorporated facial language, etc. are
all meaningful, and can be modified by previous or future signs, spatial
designations, etc.), and THEN you need to do stateful, real-time machine
translation from the decoded signs into English (or possibly Swahili), because
a nation's sign language isn't English (or whatever your local language is)
with the words signed; it's its own thing, and there may not be a one-to-one
correspondence between words.
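
For a sense of the shape of the problem, here's what those stages look like as
a Python skeleton; every stub is a placeholder for a largely unsolved research
problem, not an available library:

    # Skeleton of the stages described above. Each stub stands in for
    # a hard research problem; none of this exists off the shelf.

    def segment_signs(frames):
        # Find sign boundaries in a continuous stream with no neutral
        # state between "words".
        raise NotImplementedError

    def classify_sign(segment):
        # Decode hand shape plus its transition, speed, gesture size,
        # facial language, and spatial references into a gloss.
        raise NotImplementedError

    def translate_glosses(glosses):
        # Stateful translation of the gloss sequence into English,
        # reordering grammar and resolving earlier spatial references.
        raise NotImplementedError

    def translate_sign_video(frames):
        segments = segment_signs(frames)
        glosses = [classify_sign(seg) for seg in segments]
        return translate_glosses(glosses)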

In ASL (and I can't speak to other sign languages), you often put the subject
of the sentence first (i.e. instead of saying "who's the new guy in the purple
tie?" the grammar is more like "GUY THERE, WEARING TIE PURPLE, NEW, WHO?").
And that's a simple example, where gender is explicit and there are no past or
future tenses, etc. Of course there's also code switching, and formalized
systems for conveying English like Signing Exact English.

Oh, did I mention that no sign language has a written form? Linguists and
others can create code designations based on what they see as sign-word
correspondences, but the same sign can be modified: the sign you use for a
pretty face can be inflected to indicate degree of beauty, so the sign for
gorgeous and the sign for pretty need to be differentiated by software.

Oh, and if you got it working for one country... you got it working for one
country. British Sign Language and American Sign Language are radically
different, and basically every country has its own sign language, though some
have shared roots. Even in the US, because of segregated schools for the deaf,
there's a Black ASL dialect.

Anyway, we might have the technology at this point to do sign language
translation (though probably not), but even if we did, the funding to get it
from working on a set of sign language phrases in an MIT or Carnegie Mellon
lab to something that can actually translate sign language isn't there.

If someone wanted to start camera classification for gestures, I'd start with
recognizing flipping the bird fast enough to blur it in a live video feed,
because people might actually pay for that.
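
A rough cut of that last idea, using MediaPipe hand landmarks (a real library)
with a crude "middle finger extended, neighbors curled" heuristic; the
upright-hand assumption and blur size are made up:

    import cv2
    import mediapipe as mp

    hands = mp.solutions.hands.Hands(max_num_hands=2)

    def is_bird(lm):
        # A fingertip above its middle joint (smaller y in image
        # coordinates) counts as extended; assumes an upright hand.
        ext = lambda tip, pip: lm[tip].y < lm[pip].y
        return (ext(12, 10) and not ext(8, 6)
                and not ext(16, 14) and not ext(20, 18))

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        for hand in results.multi_hand_landmarks or []:
            lm = hand.landmark
            if is_bird(lm):
                # Blur the hand's bounding box.
                h, w = frame.shape[:2]
                xs = [int(p.x * w) for p in lm]
                ys = [int(p.y * h) for p in lm]
                x0, x1 = max(min(xs), 0), min(max(xs), w)
                y0, y1 = max(min(ys), 0), min(max(ys), h)
                frame[y0:y1, x0:x1] = cv2.GaussianBlur(
                    frame[y0:y1, x0:x1], (51, 51), 0)
        cv2.imshow("feed", frame)
        if cv2.waitKey(1) == 27:  # Esc quits
            break
    cap.release()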

