Ask HN: Resources for blind developers and sysadmins?
197 points by PakG1 on Nov 24, 2018 | hide | past | web | favorite | 44 comments
Two weeks ago, I woke up one morning with slightly blurry vision and a big, opaque, dark circle getting in the way of many things (especially if I close my right eye). Visited the doctor and I'm told that I have macular degeneration in my right eye and a retina problem in my left eye. It's surprising because I'm not yet 40. Doctor said I won't go blind and prescribed me some medicine for an initial treatment. I'm now looking into treatment options and researching this. I am supposed to go back to the doctor soon for a checkup. The dark circle has never left me; in fact, it's now been joined by a transparent, shimmering circle.

I'm not worried about going blind based on the doctor's opinion, but I am concerned that I will have to deal with long-term vision issues. Currently, I cannot read the text on my screen if I close my right eye, as the dark circle gets in the way of everything.

It left me with an interesting question. I'd read about blind people working as developers and sysadmins before. But I never really thought about it, because my vision was so good. I'm curious now, depending on how bad my eyes get, how would I transition so that I'd be able to continue my chosen profession? Are there blind developers and sysadmins here who can toss a few resources my way?

Much appreciated to the HN community in advance, thanks. I'm currently based in Asia, so about to sleep soon, hopefully I'll wake up with a big load of comments to read. ;)

Blind software engineer here. Sorry to hear about your eyesight issues; dealing with that is always hard, but the good news is that there is life here. I've been working on a bunch of open source projects and recently found a full-time job as a developer. I know almost a dozen blind software developers working in big IT companies.

Let me introduce you to the tools you can use. There are basically two types: screen magnifiers for those with low vision, and screenreaders for those without any usable vision. Judging by your description, you might want to start with a magnifier. In the unfortunate case that your vision keeps getting worse, you can consider switching to a screenreader. Magnifiers are obviously easier to learn, but you can write code and do all the good stuff with screenreaders too. I am not that familiar with magnifiers, but I've heard that the most popular one is called ZoomText and it is for Windows. Macs have their own magnifier; not sure about Linux. In the world of screenreaders you have NVDA and JAWS for Windows, VoiceOver for Mac, Orca for Linux, and also Emacspeak, which is an accessibility extension for Emacs. You can also use phones with a screenreader: there is VoiceOver for iPhone and TalkBack for Android.

One suggestion: try to find an organisation in your country that deals with blind and low-vision people (ask your eye doctor if he knows any). They might be very useful in your situation, as they can recommend a lot of tools and resources specific to your country, such as white cane training if you need it, guide dogs, and braille training; they can also probably teach you how to use screenreaders and magnifiers. Feel free to contact me if you have any more questions: anton <dot> malykh <at> gmail. And good luck!

Blind developer/sysadmin here as well. I completely agree with the above comment, and would add to it:

I'm not sure what platform you currently use, but if it isn't Windows, you're in for some challenges if you're blind or visually impaired. I'm currently using Linux because I'm a console-based developer (I mean, I still use Firefox and the GUI, but I run most dev tools from the console.) This generally works well enough for me until I need an IDE, because my current codebase is C#/CSHTML/JavaScript/HTML/CSS/TypeScript, and I can't juggle all that state in my head without something like an IDE to help out. And, as far as I know, there's nothing like that available and accessible for Linux. My hope is that VSCode can help a bit, but Electron apps are currently inaccessible under Linux, which rules out a whole bunch of traditionally useful tools (no "Electron is bloated" arguments here please, this is something else entirely.) If you're a Linux user who mostly uses GTK/Gecko-based tools and the console, and those GTK-based tools don't use crazy custom widgets, you can make Linux work. But Qt is challenging, Electron is out, and Chromium/Chrome are inaccessible (though I hear that is changing soon.) Windows offers lots more tooling, plus access to Electron apps, which opens up many more possibilities for non-console-based development.

But anyway, if you can at all transition your workflow to Windows before things get more challenging, I think you'll be in a better place. I'm about to ditch Linux soon for a high-end PC which will run VMs for anything I might need. Running a pile of VMs is probably the best solution to this issue, but second best would almost certainly be Windows. macOS and Linux trail behind, unfortunately.

I also wish you good luck. Happy to chat if I can help more. nolan at thewordnerd dot info

I'm a totally blind developer and have never gotten Linux to work very well. What distro do you use? I'm also curious whether you run Linux in VMs with speech? I think part of my problem may be bad audio support in VirtualBox, and I may have better luck with VMware. I use Windows 10 every day with Jaws and Windows Subsystem for Linux. WSL has noticeably increased my productivity compared to Cygwin due to the larger volume of software that runs in it and no longer having to spend time compiling tools from source. I find NVDA to be a pretty good screen reader and you could probably use it full time. I've used Jaws for over 20 years though, so it's worth paying the $100 a year to keep up to date rather than spending the time getting used to a new screen reader. It's also nice to have a company you can call for support and talk to someone instantly when your entire ability to do your job depends on screen reading software.

I use Fedora. Most of my issues seem like potential leaks/performance degradations in the accessibility stack. Performance drops over time, and doesn't seem to improve when apps are killed/restarted. The only components I can't restart are a11y-related, and the only way to get performance back is to reboot or log out and back in. Maybe the issue is gnome-shell, but I can't help but feel there'd be a revolt if everyone experienced the performance issues I do, so, fairly or not, I blame accessibility.

Hi! I'm nosing in here but I'm a partially blind dev who recently came back to Linux and have had great success with the KDE desktop, specifically Kubuntu 18.10 in my case.

They have excellent accessibility support baked in by default.

Things are getting better all around in modern Linux, though, because in recent Ubuntu releases you can enable voice narration, full screen zoom (my show-stopper must-have) and screen readers from the login screen.
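On a GNOME-based desktop, those same switches can also be flipped from a terminal. This is a hedged config sketch, not official guidance: the schema and key names below come from GNOME's a11y settings and may differ across releases, so verify them with `gsettings list-recursively org.gnome.desktop.a11y.applications` on your own system first.

```shell
# Toggle the built-in magnifier and the Orca screen reader via gsettings.
# Key names assume GNOME's org.gnome.desktop.a11y.applications schema.
gsettings set org.gnome.desktop.a11y.applications screen-magnifier-enabled true
gsettings set org.gnome.desktop.a11y.applications screen-reader-enabled true
```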

Good luck!

I've heard iOS is generally good for people with visual impairments. Do you have experience with iOS/OSX and are you able to comment on their friendliness?

OSX has been an accessibility leader for a very long time. Recently, however, everybody else has been catching up.

I was away from Linux and Windows for 10 years because neither provided key chorded full screen zoom by default and now they do.

So, in short, accessibility support in OSX is good, but no longer best in breed, so choose the OS you're most comfortable with generally.

iOS accessibility is excellent. Screen zoom, VoiceOver, high contrast, etc.

Many thanks, may reach out in the future. :)

I would toss in one other note about OS X. Consider enabling the "Speak Selection" or highlight-to-speak option. As the name implies, it reads selected text aloud.

My wife is low vision, but not so much that she needs a full screenreader. VoiceOver is overkill for her and has a steep learning/adjustment curve. Highlight-to-speak has been good enough thus far.

> Sorry to hear about your eyesight issues, dealing with that is always hard, but the good news is that there is life here.

You're awesome. Reading this made me tear up a little bit. There were a few blind software engineers at my previous workplace and I've always found it admirable. I wish you all the best.

As a software engineer, one of my greatest fears is losing my eyesight. I am currently not showing any signs of macular degeneration but having had LASIK, I am under increased risk of that happening. Had I known more, I would have kept my cornea intact. Now I feel slightly worried whenever I have problems focusing on text at night or getting blurry vision for a brief moment on occasion when I blink as these might be early signs.

I have a serious question to ask you.

Do blind software engineers work exclusively on systems-type applications or does there exist technology for blind developers to create UIs?

I'm a blind developer and my job is all back-end database and REST API development. I have done UI development on internal applications where function mattered more than looks. It's pretty easy to create basic UIs using HTML since everything is in code. Specifically, I used Grails to generate UI scaffolding as a starting point that I was then able to modify. Once I got everything working, another developer spent a day adding some CSS and fixing a few rough edges.

I mostly work in the backends, but I've written a few UIs too, either in HTML/PHP or in wxPython. I could probably write UIs in other languages/technologies too if I had to. They probably don't look very nice, but they are functional. Every now and then something would glitch and all my buttons would end up one on top of the other or something like that, so I'd ask someone sighted to check for that.

Sorry for being a bit off-topic and not directly answering your questions, but your story resonated with my experience. Maybe it will help you, maybe not :)

I've had surgeries on both my eyes for retinal detachment before I was 22 years old (30ish now). I am now dealing with myopia of -16 due to bad eyes and post-surgery effects. I don't want to insinuate that you have the same conditions, but I dealt with similar symptoms. Sudden loss of vision, dark circles and dark spots drifting across your vision is a scary and distressing experience.

The thought process you describe happened to me as well and is still in progress. Having such issues "early" in life poses questions and anxiety about what might happen later. Over the years I navigated this with two ways of dealing with it. The first was like you today: pro-active preparation for the day I might go blind, by learning braille, using a computer with very poor or no sight, and thinking about possible career transitions.

I was lucky enough that the surgeries went well and my sight "kind of" stabilized. Over the years I adopted a more "optimistic" approach. Even with a bad start, going blind is still only a possibility, not a certainty. I decided to live my life as fully as possible with my current sight and avoid ending up in an anxiety spiral about it. Who knows, I might still have my sight for life. And by the time it might deteriorate, technology and medical science will have improved, hopefully increasing the chances in treatment, tools and possibilities.

Good luck for everything.

I know 'ed' isn't commonly regarded as the peak of accessible interface design, but it occurred to me recently that it was at least designed under the assumption that the user wouldn't be able to see their document, and that retrieving/printing content from the document would be relatively expensive and inconvenient.

I wonder therefore if ed might be a surprisingly accessible editor for users with vision issues, or maybe it's a case of users without vision issues simply finding it "no less hellish".
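To make that concrete: ed's whole interaction model is "nothing is shown unless you ask for it." Since ed itself isn't installed by default on many distros, this sketch drives the same command language (line addresses, `p`, `s///`) through sed, which inherited it; `doc.txt` is just a throwaway example file.

```shell
# ed assumes you can't see the buffer, so output is print-on-demand only.
# sed inherited ed's command language; these lines mirror an ed session.
printf 'hello world\ngoodbye world\n' > doc.txt
sed -n '1p' doc.txt                # ed's "1p": print line 1, only on request
sed -i 's/world/there/' doc.txt    # ed's ",s/world/there/": substitute on every line
sed -n '1,$p' doc.txt              # ed's ",p": print the whole buffer
```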

See Edbrowse[0], a web browser and text editor designed around ed's user interface principles. It's made by a blind computer user.

[0]: http://edbrowse.org

Lovely response, thx.

The philosophy of edbrowse, plus some light computing history, is in a beautiful article linked from that page.


Another blind developer chipping in, although most things really have already been said. I have only recently (over the last 4-5 years) started programming professionally, and I have certainly run into my fair share of hurdles along the way. However, by figuring out workarounds or alternatives to inaccessible tools, you can certainly be efficient in this space without using your eyes at all. I have no experience using magnification software as I am fully blind myself, but like others I'd also be more than happy to talk about the little tricks of the trade I've picked up over the years. Feel free to reach out on Twitter or so ( @zersiax ).

Sorry, I don't mean to hijack the thread, but I knew some prolific visually-impaired developers in my youth, and never had the wherewithal to ask them questions about their toolchain.

1. what can we as engineers without visual impairments do to make our work more accessible and easier to collaborate on?

2. are there tools that you wish existed / could make your life easier, but that would require a significant capital outlay / massive time commitment? of course you're a dev and can "scratch that itch" yourself, but there are only so many hours in the day and you have a day job.

I had a friend who was red-green colorblind who pointed out to me that a lot of the color schemes I was using in my talks were hard for him to quickly understand, and it was (pardon the pun) quite eye-opening.

Many thanks, may do so in the future. :)

On a tangent to this, I do 6-8 hours a week "volunteer" work with a friend of mine who is an audio engineer. (It's actually paid work, but I've never raised my price over the past 14 years from when I was 18, so it's essentially just minimum wage/spending money and keeps the relationship professional.)

He uses Windows, Window Eyes and NVDA, Cubase and Mixbus and an absolute shitload of custom-written AutoHotkey macros and C# applications which I've written over the past 10 years.

He's amazingly effective (and for certain workflows, with the help of the macros, far more effective than sighted people) and has recorded, mixed and produced a broad range of music.

My point is, it sucks, but there are some incredible people, doing incredible things, who are completely blind.

Good luck!

Hi there, sorry to hear about your sudden vision loss. I'm also visually impaired with a similar condition as yours.

I'm also a software developer, mainly using Visual Studio, and I use the Windows Magnifier in docked mode at the top of the screen.

My usable vision in my left (bad) eye, is similar to looking through a frosted glass pane, my right eye is better and allows me to use a pc as long as I zoom in on things.

Some things or adjustments I have made:

1) At work and at home I use a 27" 4K monitor. This is so that I can zoom in on things without it becoming terribly blurry. You could go bigger than this, say like a 40", but I've never tried it.

2) I also often wear sunglasses in the office to block some glare from ambient light and from the screen. This dims things quite a bit, but you get used to it. I can touch type, but mostly only the letters on the keyboard, so I sometimes struggle with the punctuation or programming symbols while I have the sunglasses on. I guess I could get a keyboard with backlit keys.

3) At home I use Safari's Reader mode A LOT, this helps reduce clutter, and allows me to zoom text automatically to a standard font and size. I always joke by saying I wish the web would go back to black on white html :)

4) I take a daily vitamin called Ocuvite, formulated with Lutein.

As for day-to-day development tasks, it's not that bad; however, it's important to make your colleagues aware of your condition so that they can understand and accommodate you. Whenever I go over to someone's desk to look at something on their PC, they will almost always zoom in for me to see.

During meetings or presentations, I stand right up against the side of the TV or projector screen.

At the risk of rambling on too much, I hope you find what works for you!


Sorry, I forgot to mention you should watch this video by Abrar Sheikh: https://www.youtube.com/watch?v=L3WpnG49XLc

What he's achieved is quite impressive.

There was a discussion about this a while back at a Tetra Society meeting: http://www.tetrasociety.org/.

One idea I thought promising was audio games and household assistance delivered through devices like Google Home and Amazon Echo. They could be programmed and controlled through a natural-language system like Inform7, which is used for interactive fiction.

I do not know of any working tools for developing these, but I heard of a BBC experiment called "The Inspection Chamber" that allows playing audio games solely through voice. There is an article at https://www.bbc.co.uk/taster/pilots/inspection-chamber

Apologies if the non-software-related comment is unwelcome.

I know some people who were afflicted by macular degeneration at a younger age. They speak positively about supplements that contain all three of lutein, zeaxanthin and meso-zeaxanthin[1], which have shown some solid results[2] in slowing the progression of dry MD and preventing its progression to wet MD.

Wishing you the very best of luck managing this!

1. Macushield or Macushield Gold (AREDS formula): https://www.macushield.com/
2. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4963902/

I worked with a blind software engineer for quite some time. He's tweeted a little bit about it in the past[1], he's also one of my favourite people I've worked with, he's incredibly kind and I'm sure he wouldn't mind if you reached out. Wishing you well.

[1] https://twitter.com/ezufelt/status/1003070722879651840

Thanks everyone, HN rocks as usual. :)

This is a pretty broad topic, and many people have spent their lives working in the digital inclusion and accessibility space, so I don't mean for this to even remotely be exhaustive, but at a high-level, I can offer some thoughts that may be of help.

Let's consider two groups for now, though there are many others: folks who are blind or have pretty impaired vision, and those who have some usable vision. For those of us who are blind or mostly blind, we use speech and Braille to interact with technology instead of a visual user interface (UI). The program that translates the UI into speech/braille is typically a screenreader. On Windows, the two popular screenreaders are Jaws and NVDA. Jaws is proprietary, costing upwards of $1K to $1.5K depending on add-ons, etc., and was widely considered the best game in town for the longest time. Around 11 years ago, two blind guys out of Australia, Mick and Jamie (Jamie now works for Mozilla), rightfully thought it was ridiculous that an already disadvantaged audience should have to pay more than what most folks pay for their computers just to access them, and so they built the free and open source NVDA screenreader, which stands for NonVisual Desktop Access. Anyways, on Mac, there's the built-in VoiceOver screenreader from Apple, though the mobile instantiation of this screenreader, the one built into every single iOS, watchOS, and tvOS device, tends to be superior to its desktop counterpart, which is suboptimal when it comes to surfing the web, interacting with programs, etc., in my and others' opinion.

For our second group, there's screen enlargement. Hell, back in the DOS days, I would have my nose on the glass of the CRT monitor to read the text, which I enlarged a bit because my brother showed me the 'mode 40' command, which did 40 columns per line instead of 80, thereby doubling the width of characters and, at that time, making them readable to me. Now, I rely purely on screenreading-based approaches. Programs like the magnifier built into Windows and Mac, ZoomText, and others exist to enlarge the contents of the display, change foreground and background colors far past OS capabilities, offer auto-panning and focus-tracking, and generally act as you would expect a smart magnifying glass to behave. Again, shout-out to Apple here: all of their devices have this feature built in.

So, in Linux land, we've got the Orca screenreader. Joanie has done a truly incredible job managing that project and the volunteer community is awesome, but it's not anywhere near as resourced as NVDA, for example. Therefore, I consider Orca a labor of love and the best tool around for accessing the GNOME desktop, but it's not as fully-featured as NVDA, Jaws, etc., again IMHO. On Linux natively, there is also the SpeakUp screen reader, which patches directly into the kernel and intercepts input/output buffers. SpeakUp has the advantage of getting you speech earlier in the boot process, and it supported/supports a myriad of hardware synthesizers, which is still a big deal for a subset of screenreading users. It does not work in GUI environments, though. I'd be remiss if I did not point out Emacspeak, though it is a self-voicing Emacs environment with all the advantages and disadvantages therein. GNOME, and maybe KDE?, also has magnification, but it's been a little while since I've played with this, so I'm not sure if it's just using magnification primitives of basic video card driver stuff, actually manipulating underlying windowing primitives, etc.

So, I know that's a ton of background/lead-in, and I apologize, but I just wanted to contextualize some of these answers. Also, note that I'm leaving out a solid 90% of things.

For sys-admin stuff, back when I did a lot more of it, I would use my Windows laptop and then ssh to access an army of machines running various operating systems, mostly Linux. I would write scripts in Perl, later Python, to automate things for me, as most sysadmins do. I would usually find a way to edit on my Windows PC, and then transfer the files to the right place. Over the years, this has seen me explore everything from Samba shares, SFTP-mapped drives, two terminals open with an up-arrow of an scp command prepped to go, shared drives with multiple OS environments accessing them like in the case of VMware mirroring out a disk, etc., etc. The reason for this is that editing on Windows tends to be far more accessible than relying on the interplay of nano or vi within an ssh window. Don't get me wrong, if I need to dip in and change a hosts file, quickly edit an nginx config, go modify a crontab entry, whatever... those are doable with a nano or vi invocation in my ssh window, but for more complex editing (think back to going through all the lines of php.ini or making sure fstab is all good, etc.) I would bring the file to Windows, edit it and send it back. Yes, this is as annoying as it sounds, and I used to be a wizard with fromdos and the tofrodos tools, etc.
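The "bring the file to Windows, edit, and send back" loop usually ends with a line-ending cleanup, which is where fromdos/tofrodos came in. A minimal sketch of just that step, using sed where those tools aren't installed; the filename here is only a stand-in for whatever config made the round trip.

```shell
# A file edited on Windows typically comes back with CRLF line endings;
# strip the carriage returns before it goes live on the Linux side
# (the fromdos/dos2unix equivalent).
printf 'server_name example.com;\r\n' > nginx.conf
sed -i 's/\r$//' nginx.conf
```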

Anything web-based is of course fraught with the question of whether or not it's accessible. Hell, HN isn't exactly great in this space, having some truly avoidable and bad web accessibility (just saying... it would take less than a day to make this website so much easier to use with a screen reader). For what I mean by web accessibility, take a look at WCAG 2.1 by the W3C. I'll put links at the bottom of this message for as many things as I can think of. I mention web accessibility, though, because of tools like phpMyAdmin, SQLBuddy, the web interfaces on routers and switches and PDUs, etc. There again, I like Ubiquiti products, for example, since I can ssh into them and go to town.

Other sysadmin tasks involve things like port scanning, network traffic capture, setting up permissions, installing applications, etc., which are all done in similar ways, but it's the "how" that's different, not the "what", if that makes sense. I use nmap just like you for port scanning, but I may do stuff like redirect the output to a growing file and then use Notepad or WordPad to consume that file as it grows, or 'tail -f' it with greps sprinkled into the pipe chain just to make it less noisy.
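The redirect-and-filter pattern just described can be sketched like this, run against a pre-captured file so it terminates cleanly; on a live scan you would point the long-running command at the file and swap `cat` for `tail -f`. The scan output below is simulated, not real nmap results.

```shell
# Simulated results stand in for a long-running nmap run writing to a log.
printf '22/tcp open  ssh\n25/tcp closed smtp\n80/tcp open  http\n' > scan.log
# Filtered pass: only the lines worth listening to reach the screen reader.
cat scan.log | grep --line-buffered 'open'
```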

Speaking of output being noisy, conventions matter, right? So, for example, imagine parsing a web access log with speech. You can visually just gloss over the timestamp and host information to get to the crux of what you want to look for, but I have to listen to most/all of that content to get to what I want. To this end, I would either specify different format strings to put important information up front on the line (but then I've got non-standard logs), or do creative things with awk/gawk/perl one-liners, etc.
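One concrete shape this takes (the log line below is a made-up combined-format example, not from a real server): an awk one-liner that pulls the status code and request to the front, so speech reaches the interesting part before the host and timestamp noise.

```shell
# Split a combined-format access-log line on double quotes: $2 is the request,
# $3 holds the status and byte count, $1 the host/timestamp preamble.
line='203.0.113.5 - - [24/Nov/2018:10:00:01 +0000] "GET /index.html HTTP/1.1" 200 512'
echo "$line" | awk -F'"' '{ split($3, s, " "); print s[1], $2, "from", $1 }'
```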

I feel this has turned more into high-level thoughts and less tools, so maybe I should end with apologizing that I'm not sure I even remotely answered your question, but felt this info could add some context to the discussion. If you are looking for highly specific strategies and targeted solutions, I would be happy to speak with you one-on-one over email, voice, etc. I believe strongly in paying things forward, and there have been many folks who have helped me along the way.

Fun fact, I wrote this in Outlook's new email window, spell-checked it a bit, and then pasted it into HN's edit field. Something that may seem totally weird, but a type of computer interaction I don't even think about anymore, even though it's probably quite different than how most folks use various applications, web or otherwise. The reason for this is that the affordances of web edit fields, even when some accessibility has been taken into account, are minimal in comparison to far more accessible interactions like the new message window of Outlook.

Some links:

WCAG 2.1: https://www.w3.org/TR/WCAG21/

Orca Screenreader: https://help.gnome.org/users/orca/stable/

NVDA Screenreader: https://www.nvaccess.org/about-nvda/

SpeakUp Project: http://www.linux-speakup.org/

JAWS Screenreader: https://www.freedomscientific.com/Products/Blindness/JAWS

ZoomText Magnifier: https://www.zoomtext.com/

Emacspeak: http://emacspeak.sourceforge.net/

Cheers, and I do wish you all the best with your vision. Thank you for asking this question on HN. I hope others contribute to the conversation and this gets some increased awareness.

Man, I feel like half this post was my sysadmin playbook. I eventually got sick of explaining why it was such a pain in the ass to crack open vim and edit remote files, and why it was much easier to edit locally and scp back. I credit interactive fiction for the problem-solving skills, and gopher for the boost to memorization (gopher because if you wanted to find that primitive search engine 12 levels deep, you had to memorize lots of strings of numbers to navigate those mazes of menus. Not much different than how we use appliances these days, really, though it's hell to manage if you're stoned or not 100% on your game. :)

Yeah, I hear you. Honestly the smart-home/IoT movement has been very beneficial, not because of the sorry excuses for automation being sold right now, but because it enforces a certain level of pre-2005-web-like openness. I knew all the file system structures and specific web paths back in the day because it was just necessary to memorize them and easy to access things if you had specifics, whereas now everything is cryptographic strings, super long addresses like IPv6 ones, and based on dynamic interactions, so that deep URLs are pretty much worthless to memorize except maybe on sites like GitHub, where the pattern is reasonable. The reason for my comment on smart-home stuff bringing back some of this openness is only that at least now there's a backdoor, so to speak. When a microwave doesn't talk and also has very little tactile differentiation on the buttons, at least you can use your Google Home or Amazon Alexa to tell that thing to warm up a plate for 3 minutes. Same goes for toaster ovens, rice cookers, water boilers, sous vide circulators, thermostats, locks, lights, etc. Yet another of the countless examples of why having things be open and have an API makes them better for absolutely everyone, especially since it facilitates use cases not able to be foreseen at inception. And, yes, I do believe all of that can be done in a secure and sensible way. There is no need for security to be antithetical to inclusion, which is a pretty popular misconception.

Speaking of wacky stuff to do to access Linux and text editing, I remember back when Jaws and even NVDA had far worse cursor tracking in the Windows CMD window, so you couldn't be sure that when your screen reader said it was beside a certain character, it actually was. This is something I argue no sighted user of a computer has probably ever had to deal with, and never for a duration of years. So what I would do is type 'z' and then route the screenreader's secondary cursor to make sure that the 'z' I added was added where it should have been, if my cursor was in fact where the screen reader said. Just imagine needing to text edit where you can't even trust the position of the system caret... fun times!

Have you tried reaching out to hn@ycombinator.com with some guidance?

May be worth a try.

I'll send them an email, though it's been my experience that it's the type of issue that doesn't get much love without another impetus e.g. pressing need from high-paying customer, legal pressure, compliance, etc. Having said that, I've had success reaching out to other companies before, so wish me luck.

Well, if they don't listen, just do some public shaming in a blog post and submit it to HN. ;)

This Ask showed up on a Google Group I happen to co-own called Blind Dev Works. So far, it's been very low traffic, but it is a resource that already exists intended to support the needs of blind developers.

You and any other blind developers here (or potential allies) are welcome to join. Just send a join request. I'll approve it.


(I'm visually impaired, but not blind. I'm not really a programmer either, though I still hope to be one someday.)

Thanks, I'll join soon, depending on my prognosis. Hope it's not bad to say this, but prefer to not join too many things to limit the volume of emails going into my inbox! :)

Whatever works for you.

Check out the resources at http://benetech.org for blind and vision impaired people. Benetech is located in Palo Alto, CA USA. https://en.wikipedia.org/wiki/Benetech

Considering vision problems: I wonder if VR headsets could be hacked to be a useful screen-magnifier. The screen would not necessarily be represented like a big movie screen, but you could move around the screen using head motions. Not sure about the ergonomics of this, but it might work better than solutions on standard computer screens.

I have a strong vision impairment and tried this when the Oculus first came out. I found the lag was a real problem and lost interest. There are a few vendors working in this space whose hardware is slowly getting better...


You may find this previous item about a blind developer using Visual(!) Studio interesting: https://news.ycombinator.com/item?id=14347908

Read up on how blue light plus retinal can knock out photoreceptor cells, especially in your 50s and 60s, when your immune system goes downhill. You can reduce the amount of blue light on your smartphones and monitors.

A very interesting thread about this on HN recently:


Parent link is: (Ask HN: How should a programming language accommodate disabled programmers?)

And another related thread:

https://news.ycombinator.com/item?id=18478776 (Ask HN: Blind programmers, what can I do to make my code easier for you?)

One more resource for visually impaired developers is program-l@freelists.org
