Hacker News
My Journey From Blindness to Building a Fully Conversational User Interface (medium.com)
114 points by evanh2002 on Mar 2, 2016 | 41 comments



I got so excited when I read this headline because I thought there was a product I could get for the 80-year-old blind lady I work with that would let her read her email by talking to her computer, but alas, it's just for shopping.

Computers are presently absolutely terrible for the visually impaired and there is no product I can find that even begins to address it.

As the article said, screen readers right now are ass-backwards: they might read to you, but you still need to be able to see to move the mouse and do anything with it.


Blind users don't need to use a mouse; they can navigate by tabbing or double-swiping. This is why web accessibility is so important, and while the basics are relatively easy to implement, most developers seem to skip or forget them.

Screen reader accessibility can be achieved by coding pages in clean, standards-based HTML. alt attributes are basic, but they're often missing or have unclear text. Form elements need to be properly labelled, links need clear descriptions, etc.
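For example, here's a minimal sketch of those basics (the names and URLs are just illustrative):

    <!-- Descriptive alt text, not a filename or "image" -->
    <img src="logo.png" alt="Conversant Labs logo">

    <!-- Purely decorative images get an empty alt so screen readers skip them -->
    <img src="divider.png" alt="">

    <!-- Form controls tied to their labels via for/id -->
    <label for="email">Email address</label>
    <input type="email" id="email" name="email">

    <!-- Link text that makes sense when read out of context -->
    <a href="/orders/history">View your order history</a>
    <!-- rather than: <a href="/orders/history">click here</a> -->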

I agree that screen readers aren't the greatest technology, but the truth is developers of websites and applications aren't doing their part to make their products accessible.


Yeah, but you have to build a business that makes money first. It's still a step forward. Every time someone solves a problem with voice, it becomes easier for the next person.

Some in the hacker community are working on voice solutions for programming with off-the-shelf software, for example:

https://github.com/melling/ErgonomicNotes/blob/master/progra...

Someone could probably use the same solution to help the visually impaired.

Throw in new motion-gesture technology, like Intel's: http://www.intel.com/content/www/us/en/architecture-and-tech...

And soon we'll all have more natural ways to interact with computers.


I have no real knowledge, but I imagine that gesture interfaces would be massively worse than a keyboard for the visually impaired because of the lack of tactile feedback.


Not necessarily. Successfully thinking about this stuff requires stripping interactions all the way back to their intent. The intent of either a key press or gesture is to change some state and the intent of the feedback is to communicate both the change and the new state.

The tactile feedback from a key press is at best only a small proportion of that. You get to know you mushed a key, but not what (if anything) it changed. Assistive tech using keyboard input will generally fill in that gap with voice feedback. Gesture input with haptic and audio/voice feedback should be usable in some scenarios. I'm currently working on a couple of iOS apps in the fitness space that have this as a key part of their UI.


You can provide audible feedback when you move your hands. Many gestures don't provide any tactile feedback anyway: you move your hands or fingers, and something happens on the screen. Google's new chip could probably help interpret a series of gestures:

http://www.youtube.com/watch?v=0QNiZfSsPc0


Apple's UX has been pretty impressive, although I've never tried to use it full-time:

http://www.phonearena.com/news/Blind-man-demonstrates-the-ef...

http://www.siriuserguide.com/siri-dictation-guide/

You can either dictate an email, or record a "voice memo" and email the sound.

http://www.cnet.com/how-to/voice-dictation-is-now-done-in-re...


Sorry for the let down. I feel her pain. We're working on it, and should have something for her soon. Email is definitely high on our list of tasks that need a conversational layer.


Completely agree. My brother recently lost his sight. I've been working on tools to address some smaller, specific pain-points for him; but general purpose computing without vision is beyond frustrating...


Have you looked into NVDA?


While their first offering is for shopping, they are also producing an SDK to help others develop similar interfaces: https://github.com/ConversantLabs/SayKitSDK


Always great to see innovation in this space; it will be really interesting to see how this approach pans out.

I've mixed feelings about conversational UI personally but one of the things that makes accessibility so hard is that there really is no 'one size fits all' solution.

As the author signposts, people who lose their sight later in life tend to struggle a great deal more than those of us lucky enough never to have known anything different - we had the advantage of naturally developing all the appropriate strategies and skills while our brains were still plastic.

That group would doubtless benefit from an assistive tech stack that 'just works' and doesn't require a bunch of training and knowledge acquisition to get even the simplest things done.


Wow, just reading your story makes me frustrated. I can't imagine trying to program without being able to see my screen.

How do you anticipate programming as you become legally blind? Will you start dictating code and having it read back to you?


There are a lot of professional developers who are blind. They use accessible IDEs paired with a screen reader. Here is a good description: http://stackoverflow.com/a/453758/319013

In the future, I think there is a lot of potential for pairing a Conversational UI and a natural language programming...language. Eve looks really promising, and could change the way that everyone writes software, and not just the blind: http://eve-lang.com/

Here is a demo: https://youtu.be/VZQoAKJPbh8?t=46m52s


Anyone else experience a psychosomatic reaction from this story? My eyes started to ache just reading it. What a terrifying prospect.


Okay, this is really more nitpicking than anything else, but this passage struck me as odd:

> When it came to the people I met, many who have been blind since they were very young and who have worse vision than I will ever have, I was blown away. The term disabled does not apply to them. They are extremely independent, good natured, and very successful in whatever field they chose to be in.

Instead of saying that the term disabled does not apply to successful etc. people, I think a better phrasing would be that meeting these people changed what he considers "disabled" to mean or imply.

Other than that, I like the overall message. I do think that it's a good idea to focus on making purpose-built applications for vision (or otherwise) impaired people, maybe even trying to grow a small niche ecosystem.


Agreed, thanks for saying this. I'm always a bit weirded out when someone tries to tell me I'm not disabled because I'm successful, so of course I'm able. I get that, probably from a pure linguistic perspective, the word "disabled" has negative connotations. In actuality though, there's a rich disability community. Chicago (I think, anyway) has an annual disability pride parade. We live in a culture that tries to strip lots away from us when we're disabled, and sometimes it's all we can do to claw some of that back. So while someone may mean well by claiming that someone who is successful or competent shouldn't be called disabled, it takes away that person's autonomy and ability to self-identify, and also removes success and competence from what people think of when they think "disabled." IOW, if someone can say that the word "disabled" doesn't apply to a person because they do something well, others in the set of disabled people have it a bit worse off because they're both faced with a challenge and not able to handle it well.

That said, I didn't think this was the author's intent. But I can't count how many times someone didn't think I was blind because I wrote well, and they couldn't conceive of how someone who couldn't see could write a coherent sentence. :) Try applying for jobs in that expectational landscape and, well, it's challenging and is why I don't even like working for others anymore.


>I'm always a bit weirded out when someone tries to tell me I'm not disabled because I'm successful, so of course I'm able.

I usually ask those people if they'll let me drive their car.


This really wasn't my intent. I sincerely apologize for communicating otherwise.

I identify as blind or low-viz and wouldn't want to take away from others who also identify as such or as disabled.

That line was meant to be a response to the fears that it seemed my family and friends had about my diagnosis: fears that I wouldn't be able to remain independent or be successful. You know how loved ones generally fear the worst possible scenario. Anyway, wording it this way was terrible, and I'm sorry. I'll see if I can make an edit and rephrase things.


Thanks, I know it wasn't. Likewise, my intent wasn't to make you feel bad. I imagine this landscape is all very new to you, and recognize that it's probably lots to come to grips with and navigate. To that end, kudos for finding a problem and running with it. :)


Meh. It is a bit nitpicky, but the sentiment is appreciated. The language around this is a minefield, and some are pickier and quicker to take offence than others.

I coach at a tennis club for players with low or no vision, and we have a standing joke about how we're technically classed as 'vulnerable adults', which is very much not the impression that people take away from meeting us for the first time. It's funny from where we're standing.


So I'm not trying to discount the author's company or experiences, but as a blind person myself, the tone of this article is confusing. My takeaway is that apps, screen readers and our current concept of computing are supposed to be terrible for me and difficult to use, but I don't get that. I don't doubt that someone newly blind after years of productive computer use or someone blind after a lifetime of being sighted would find things challenging. But if I suddenly found myself in a wheelchair, I'd likely find aspects of my daily life post-transition challenging, and would try not to write articles that made general statements about every wheelchair user's experience and challenges. :)

Are screen readers difficult to use? Sure, but no more so than typical desktop environments. Describing them as "requiring special training from organizations like the Lighthouse for the Blind" feels like a bit of a stretch. Would some folks require training? Sure, just as they might require training to learn OS X if they're familiar with Windows, but that's true of anyone sighted or not, and hardly a requirement. Also, while I agree that licensing costs for some screen readers are enormous, just about every mobile and desktop platform has at least one free, competent alternative. Someone newly blind or blind after years of sighted life might certainly need training, but in general I find picking up a new screen reader to be no different than learning any other semi-complex app. If using Word/LibreOffice is part of your job then you sit down, learn it and become competent with it over time, or you get trained. Categorizing screen readers as being different and worse in this regard feels unfair.

"All of the time and energy that goes into creating the perfect user experience for an app is wasted, or even worse, adversely impacting the experience for blind users." Again, I strongly disagree with this. The problem is that UX time and energy is misspent almost exclusively on visual aesthetics, or even worse, on a certain type of visual aesthetic. For instance, say you use icon fonts because they're a cool thing to do. That choice has accessibility impact beyond blindness. Had a UI/UX designer focused on universal design and on crafting experiences that both look well and perform nicely from a keyboard/switch user's perspective, that interface will be visually appealing, nice to use via the keyboard, easy for users with low hand dexterity using switch controllers, etc.

Again, kudos to you for driving innovation in this space. I wish access tech didn't always feel like an industry stuck a few decades in the past, or wasn't being driven forward by companies with a medical-model concept of disability who view us as a market to be pitied rather than sold to. To that end, I like seeing exciting tech in this space, even if I can't imagine ever preferring to shop by a conversational interface. But I read headlines like "Apps are a nightmare for the visually impaired" and think "huh, I'm building an app for the visually impaired and hope it isn't a nightmare." And I worry that, rather than learning about how universal design can build experiences that look and work well for everyone, software developers and companies will read this post and think "I don't have to make my app accessible because blind people are using special custom interfaces anyway, so when a blind person tells me my app isn't accessible, I can pretend ARIA attributes are hard and write them off." Again, not trying to be negative, I just wish folks understood that we don't need separate interfaces and experiences.


I definitely agree with you that the last thing I would want readers to take away from this article is that they don't have to worry about accessibility or universal design. Until we have better tools, we should be providing the best possible experience with the tools we do have.

I also agree that we shouldn't be creating separate experiences for the blind. I think it's generally acknowledged that they end up being worse than a combined interface, never getting the resources or new features that experiences for sighted users get.

Where we seem to disagree is on the role that screen readers play in limiting the usability of technology. On the one hand, they are amazing because they provide access to technology that would otherwise not exist. On the other hand, by virtue of the way they function (mapping a two-dimensional visual experience into a one-dimensional stream of audio), using a screen reader can only be so efficient.

This lack of usability puts access to technology beyond the reach of many who are less tech-savvy than you or I, and given that the vast majority of people losing their vision in the US are the elderly, there are a lot of people who fall into that category. What's worse, the rate of vision loss is set to double as the baby boomers age.

I totally agree that the medical model of accessibility sucks, but I think screen readers fall into that category. They seek to adapt an experience designed for others to the needs of the disabled. Conversational interfaces have the potential to create a consumer-quality experience that by its very nature is accessible (at least to the blind). And accessible by default is the best possible outcome.


It's interesting to read that you conceptualize screen readers as rendering a 2-D environment as audio. I'm a very visual/spatial person, but I've always conceptualized them as rendering a tree of GUI widgets, rather than a visual environment. I guess it's the difference between thinking of my desk as a visual collection of objects, and more as an object with an Arduino/RPI in the top drawer, papers and folders in the second, etc. Not saying either is wrong, just that maybe it's a matter of conceptualizing UIs as groups of collected and organized widgets, rather than as laid out on a map. I've come to enjoy developing with React because I can say "here's my workspace for a given task. It has a toolbar containing these related functions, these two loosely-related larger workspaces, etc." Then I let a visual designer come along after and make things look better. :)
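In markup terms, that widget-tree structure might render to something like this (a plain-HTML sketch; the landmark labels are invented for illustration):

    <main aria-label="Task workspace">
      <div role="toolbar" aria-label="Related functions">
        <button>Save</button>
        <button>Share</button>
      </div>
      <section aria-label="Primary workspace">...</section>
      <section aria-label="Secondary workspace">...</section>
    </main>

A screen reader can walk that as a tree of labelled regions regardless of how a designer later lays it out visually.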

Anyhow, I look forward to reading more about your SDK. Where can I learn more? I'm building an app that could benefit from a conversational UI on top of the traditional one and would be interested in reading up on what you offer, particularly as it's meant for blind users too.


You can check our SDK out at developer.conversantlabs.com. It's currently in a developer preview. Send me an email at chris@conversantlabs.com. It would be great to talk more. If our conversation in this thread is any indication, I think we'll have a pretty good discussion :)


I conceptualise them as a non-visual means of surfacing an n-dimensional information architecture. But I'm just weird like that.

One thing screen readers are super good at is exposing shitty IA design, which is regrettably common.

That said, it cuts both ways. There is a public transport app in the UK (Traveline GB) that as a low viz (legally blind) user I find incredibly frustrating to use, but my no viz pals absolutely love.

In this case it seems the IA is there but the visual interface to it is worse than what voiceover exposes.

Accessibility is hard.


> Are screen readers difficult to use? Sure, but no more so than typical desktop environments.

That's a very low bar. Expert computer users like you and me have no problem with a screen that has even thousands of pieces of information on it. We use contextual cues to help us navigate that space, relying on good information design. But most people find current computers overly complicated. I think one of the reasons smartphones have eclipsed PCs is that the 4 inch screen forces you to cut down to a more human scale of choice complexity.

But that's why I am finding myself agreeing with OP that conversational interfaces are the future, for the same reason. They will force us to design down even further, into a one-dimensional UI. Choice, choice, choice, choice, etc.

That UI will be a slight simplification of current phone UIs, and of course multiple threads can be shown on the screen at once to allow better use of screen real estate for those who need it. Essentially, you just add lots of these one-dimensional UIs to the screen and multi-task. But the capital-T Truth is that even without a screen reader, using a desktop computer, and indeed vision itself, is a largely linear process. Experts get fast at it, but we still must scan about to discover new information and places to interact. I think we will be surprised how much of our computational life actually becomes conversational.

One of the most exciting things to me about conversational interfaces is that writing automation software against them is far simpler than writing automation software for a big, complex GUI. And building AIs that live in them is also easier.

In general, I think there is something very important happening in the space of UI for beginners, mobile users, AIs, and people with disabilities. A kind of crystallization of intent. I'm only hearing people like OP start to talk about it now, but I actually think this is the next big wave in UI. And I do agree with you that most time spent improving UIs is wasted in exactly the way you describe.


That's fair, but my takeaway from this article was that screen readers were this whole other kind of thing that was so much more difficult to use. I could see that being true for someone who never used computers, or for someone used to using them a certain way and having to transition. And I worry that people will read it and come away with the impression that, since screen readers are hard, accessibility is harder and just isn't something they have time/money/expertise for. In actuality, much of accessibility is just using things as they were intended to be used, and attaching textual labels to visual-only elements. If web developers just used <button/> and <a/> instead of <div/>s for interactive elements, there's 90% of accessibility right there. You may be amazed at how hard of a sell just that much is. If the practice of attaching click/keyboard handlers to a <div/> were to be wiped from the face of the earth today and forever, I'd instantly love the web a lot more. :)
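To illustrate the button-vs-div point (a minimal sketch; the handler name is made up):

    <!-- The div pattern: not focusable, not announced as a button, and
         ignores Enter/Space unless you bolt all of that on yourself -->
    <div class="btn" onclick="addToCart()">Add to cart</div>

    <!-- The native element: keyboard focus, correct role, and
         activation behaviour for free -->
    <button type="button" onclick="addToCart()">Add to cart</button>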

So yes, I generally agree that conversational interfaces are a good direction, and I'd like to incorporate them into the app I'm currently building. I just worry that the tone of this article makes the current state of the art sound terrible, when it's less the state of the art and more the implementation details of that state that makes things difficult.


The reason those details are so rarely done right is that the UI frameworks are designed for 2D and serial navigation is an afterthought.

If you build conversation into the core of your UI framework in an intrinsic way, it means everything written in that framework will be done right. Every 2D interaction has a first class conversational analog because conversation is the base interaction primitive and all of the 2D interactions exist within that space.


Really great article. It's inspiring to watch someone face their problem head-on but stay focused on how it helps other people. This technology will certainly also have applications beyond the visually impaired community.


I was very excited when I read the title, but was sort of disappointed when I found out it was mostly shopping. But I do think that they are paving the way for something special, and I cannot wait to see where this is in 5 years' time.


I've been told my eyes show signs of eventual macular degeneration. Someday I will not be able to use this computer. I don't know when it will happen. I will be blind before I die.

I might have 10 years, 20, 30... we don't know.


That's bad news, and I'm sorry. However, there's no reason you won't be able to use a computer. I have been blind since birth and use a computer and a smartphone every day. NVDA is an excellent open source screen reader for Windows. There are screen readers for other operating systems too. If you like Emacs, Emacspeak makes it talk. On Linux there are Orca, Speakup, etc.


Very very cool. I've met Chris and he is a super nice guy.


Great story! They're definitely paving the way when it comes to accessibility. Need to look more at their SDK...


I thought this article was very insightful and I cannot imagine where this will be 5 years down the road.


This is an interesting approach. It does, though, make me think of the command line... how many tasks can we do, right now, with the command line and a screen reader? Certainly covers stuff like email, at least for somewhat technical users.


Well, $1800 for a single license. I don't think so: http://www.nvaccess.org/


JAWS seems to be the standard that big businesses target; it costs around $1000.

Although NVDA is free, it's not as high a priority for some reason.


Well done to Target for allowing them to build a product.


This is really cool. The key point is your last line:

> For the first time accessible technology for the blind can drive innovation for everyone else, rather than having to play catchup, and that’s pretty cool

I wouldn't say it's the first time, though. Many solutions designed to address accessibility needs result in improved UX for all users. My team puts a lot of emphasis on accessibility for this reason. Plug: I'm currently hiring a web accessibility engineer too, so if you're interested, find me on Twitter @kenwarner




