devinprater's comments | Hacker News

Lol, last night, on a forked, accessible version of Termux that I vibecoded into existence, with Emacs and Emacspeak vibejiggered to work on Termux, I vibecoded, with gptel-agent, an Emacspeak package that speaks when the model asks for tool calls, and that automatically speaks any explanatory text after all the tools are called and the edits are made. All on my phone with a Bluetooth keyboard. It's so easy, even a blind man can do it! :)
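The hook is tiny, by the way. Here's a rough sketch of the response-speaking half, assuming Emacspeak is loaded; gptel's `gptel-post-response-functions` hook is real, but everything prefixed `my/` is illustrative, not the actual package:

```elisp
;; Sketch: speak gptel's explanatory text through Emacspeak once a
;; response lands. BEG and END are the response bounds gptel passes
;; to this abnormal hook.
(require 'gptel)

(defun my/gptel-speak-response (beg end)
  "Read the model's reply aloud via Emacspeak's TTS layer."
  (when (fboundp 'dtk-speak)  ; only if Emacspeak is around
    (dtk-speak (buffer-substring-no-properties beg end))))

(add-hook 'gptel-post-response-functions #'my/gptel-speak-response)
```

Speaking on tool-call requests works the same way, just hung off whatever hook gptel-agent exposes for tool approval.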

And because it's all controlled by me, I can tell it how the package should speak and what it should ignore, and I'm not stuck with whatever some sighted person at some big company thinks a blind person wants. Everything should ideally be open source, and at the very least be hackable.

All that to say, AI has helped me out a ton. Now I can be as productive as Emacs, a Linux terminal, and maybe one day a Linux GUI with real Firefox and such, allow. And it would have *never* happened without AI.

So let's please do continue bringing on the AI. Make it smart and local, so I can have continuous AI descriptions right on my phone, with the ability to screen share or even agent-control my phone to get around inaccessible apps. Oh, and fix AI app accessibility so the app sends output to screen readers when I type to it, because I hate talking to my phone, and not every blind person wants to speak all the time. Ugh, I hate that stereotype.


This is so amazing! I am stunned and deeply interested in how you set it up and in your workflow. To me it sounds like you are already living in the future many of us sighted people imagine with AI.

I am not sure if you are able to use it, but I saw Droidrun.ai (https://www.droidrun.ai/) the other day; its agents should be able to drive the phone.


I'm glad I have ChatGPT to turn that image with benchmarks into an accessible table lol. I like Claude Code, but their accessibility in anything other than the accidental accessibility of the CLI is frustrating. Try it: load a screen reader like VoiceOver for Mac (because I know most programmers use Macs) and go to claude.ai. In the "Write your prompt to Claude" box, type something like "What will the weather be like tomorrow?" and press Enter/Return. Then close your eyes for a good 30 seconds, and within those 30 seconds, tell me how you'd know whether the model has given a reply. Then try the same thing with ChatGPT. I would /love/ to be proven wrong.


Thanks for sharing! Just tried it for the first time. Anthropic should really do better.


Over the past month, with vibe-coding, I've:

* Made Termux accessible enough for me to use.

* Made a MUD client for Emacs.

* Gotten Emacs and Emacspeak working on Termux.

* Gotten XFCE to run with Orca and AT-SPI communicating, making the desktop environment accessible.

None of this would have happened without AI. Of course, it's only useful for the few people who are blind, use Android, and love Linux and Emacs and such. But it's improved my life a ton. I can do actual work on my phone. I've got Org-mode, the calendar, Org-journal, desktop Chromium, etc., all on my phone. And if AI dies tomorrow, I'll still have it. The code is all there for me to learn from, tweak, and update.

I just use one agent, Codex. I don't do the agent swarms yet.


I'm completely blind. I like Linux. I've started to love Android since getting a Samsung and getting rid of OnePlus, because accessibility. Termux is cool, but its accessibility wasn't. So, I had Gemini wrangle it into shape in my fork of Termux [1].

Now it reads (usually) only newly incoming text, I can feel around the screen to read a line at a time, and cursor tracking works well enough. Then I got Emacs and Emacspeak working, having Gemini build DecTalk (a TTS engine) for Termux and get the Emacspeak DecTalk speech server working with it. I'm still amazed that, with a Bluetooth keyboard, I have Linux, and Emacs, in my pocket. I can write Org and Markdown, read EPUB books in Emacs with Nov.el, look at an actual calendar rather than just a list of events, and even use Gemini CLI and Claude Code, all on my phone! This is proof that phones, with enough freedom, can be workstations, even more so if I can get Orca working on a desktop environment in Termux-GUI. But even with just Emacs and the shell, I can do quite a bit.

Then I decided to go wild and make a MUD client for Emacs/Emacspeak, since accessible ones for Android are limited, and I didn't trust my hacks to Termux to handle Tintin++ very well. So, Emacs with Emacspeak it was, and Elmud [2] was born.

Elmud has a few cool features. First of all, since Emacspeak has voice-lock, like font-lock but for TTS, ANSI colors can be "heard"; red, for example, becomes a deeper voice. Also, a few MUD clients on Windows have sound packs, which make them sound more like a modern video game while still being text-based. I got a few of those working with Elmud: you just load one of the supported MUDs, and the sound pack is downloaded and installed for you. It's easy and simple. And honestly, that's what I want my tools to provide: something that I, or anyone else who chooses to use them, can easily get the most out of.
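If you're curious what the voice-lock trick looks like under the hood, here's a rough sketch of the "red sounds deeper" idea, assuming Emacs 28+ ANSI faces (`ansi-color-red`) and Emacspeak's `personality` text property; the function name is mine, and the real Elmud code differs:

```elisp
;; Sketch: give red ANSI text a deeper Emacspeak voice by mapping the
;; `ansi-color-red' face to the stock `voice-bolden' personality.
(defun my/voicify-red (beg end)
  "Attach a deeper voice to red-faced text between BEG and END."
  (save-excursion
    (goto-char beg)
    (while (< (point) end)
      (let ((next (next-single-property-change (point) 'face nil end)))
        ;; Simplified: real buffers may carry face lists, not bare symbols.
        (when (eq (get-text-property (point) 'face) 'ansi-color-red)
          (put-text-property (point) next 'personality 'voice-bolden))
        (goto-char next)))))
```

Emacspeak's speech servers read the `personality` property the way font-lock reads faces, so the color survives the trip to TTS.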

None of this would have been possible without AI. None of it would have been done; it would have remained a dream. And yes, it was all vibe-coded, mostly with Codex 5.2 on high thinking. And yes, the code may look awful. But honestly, how many closed-source programs look just as bad, or even worse, under the cover of compilation?

[1] https://github.com/devinprater/Talking-termux-app

[2] https://github.com/devinprater/elmud


Nah, just Microsoft Copilot. No OS.


No way. There's just no way.


I hate that so much. When blind people try to start JAWS (the screen reader) by typing "jaws" into the Start menu and pressing Enter, it will sometimes pull up a Bing page about Jaws the movie instead, and the blind person is just sitting there waiting for the screen reader to start. I tell people to use the Run dialog for that reason. Sucks, but that's what you have to do in the age of inshittisoft.


They are apparently replacing the run dialog with a new "Modern Run" dialog, so we can look forward to that also not working properly:

https://www.windowscentral.com/microsoft/windows-11/after-30...


The only sane tool remaining in Windows is Run :( I won't even touch this shitty OS without Run.


This is purely insane. Doesn't Microsoft violate accessibility laws in some jurisdictions because of this?


"rules for thee not for me"


Linux is getting even more accessible. I'm thinking of Elementary OS, which not only posted about their accessibility work but linked to the articles that really fired things up. I'm a Fedora guy, mainly because I want the latest Orca, AT-SPI2, and such, so I don't feel like an Ubuntu derivative would work as well.

So I installed Fedora on my work machine and found that I can still get all of my work done. Well, except the parts that require testing accessibility with Windows screen readers or helping with Windows-related issues.

The only things I miss now are the many addons made for NVDA, especially the ones for image descriptions. But if I can get something working with Wayland, I could probably vibe-code some of them. Thank goodness for Claude Code.


A ton of the studies that colleges, universities, and corporations do on blind people pay in gift cards, usually $20 or so for a good 40 minutes of your time.


What are you trying to sell me again? :)


His keen eye and intellect: valuable traits as a laborer in a knowledge economy.

