Their custom UI framework might be all fun and games for now, but that will probably change once they realize they need to implement accessibility. Doing this in a custom framework without sacrificing performance won't be easy and is going to require lots of messy, per-platform work. It's not like it's optional for them either. It would be for a simple editor that you can just decide not to use, but they're positioning Zed as a collaboration tool, so making sure that everybody on a given dev team can use it is going to be crucial.

I wish developers finally learned this lesson. As a screen reader user, I'm sick of all these "modern" tools written in Rust (and yes, it's nearly always Rust) where VoiceOver just sees an empty window. It's far easier to dig yourself out of the trap of no accessibility when all you need to do is slap a few ARIA labels on your buttons and sort out some focus issues than when you need to expose every single control to every single OS.

At least there's AccessKit[1] now, which might make the work a bit easier, though I'm not sure how suitable it is for something as big as an editor.

[1] https://accesskit.dev/
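
To make "expose every single control to every single OS" concrete, here's a very rough sketch (hypothetical types; AccessKit's real API differs) of what a custom framework has to maintain alongside the pixels it draws, and keep in sync on every UI change, for each platform it supports:

    // A parallel, semantic description of the UI, updated whenever the UI changes.
    struct Node {
        role: &'static str,   // "button", "text input", ...
        label: String,        // what the screen reader announces
        children: Vec<u64>,   // child node ids, in reading order
    }

    struct TreeUpdate {
        changed: Vec<(u64, Node)>,  // nodes added or modified since the last update
        focus: Option<u64>,         // which node currently has keyboard focus
    }

    trait PlatformAdapter {
        // macOS (NSAccessibility), Windows (UI Automation) and Linux (AT-SPI)
        // each need their own implementation of this.
        fn push_update(&mut self, update: TreeUpdate);
    }

AccessKit's whole job is to provide those platform adapters, so a framework only has to build the semantic tree once.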




Here's their only blurb on accessibility:

> Currently, many of Zed's themes are largely inaccessible. We are working on a new accessible theme system, which will launch with Zed 1.0

> A11y (accessibility) in Zed will be a long project. Likely lasting far beyond 1.0. Due to GPUI being written from the ground up we don't have access to the same a11y features that Swift, Web-based apps or [insert other language] does.

> Making Zed accessible will be a joint effort between things on the Zed side, and building out features in GPUI.

> For now, you can join this discussion to talk further about a11y in Zed: Accessibility (a11y) in Zed [links to: https://github.com/zed-industries/zed/pull/1297]

And that link is useless; it goes to a GitHub issue about back and forward buttons.

https://zed.dev/docs/themes

https://github.com/zed-industries/zed/pull/1297 - the link they use that's supposed to be for accessibility discussions. It appears it's supposed to be a link to this: https://github.com/zed-industries/zed/discussions/6576

So they've thought about it, but they haven't actually done it yet.


> So they've thought about it, but they haven't actually done it yet.

In my experience, a11y shouldn't be an afterthought; it needs to be baked in from the beginning. Bolting it on later results in hacky, harder-to-maintain code.


100% this.

If you think about it, doing accessibility right -- in particular screen reader work -- requires thinking very carefully about the data model behind the presentation: what you need to declare, and when. And doing that thinking could actually force engineers into building UI frameworks that are accessible not only to the visually or hearing impaired, but to broader systems as a whole.

It's hard work that has to get done, but it's not particularly sexy.


I think it could be sexy! I honestly think a graphical user interface designed from first principles to be accessible would also be kind of incredible for everyone else to use, much in the way many accessibility improvements (in the US at least), like curb ramps and subtitles, make things better for everyone as a side effect. Honestly, I think an ideally accessible interface might end up working like a modern, easier-on-the-eyes version of Symbolics Genera. Think about it: every UI element on the screen retrievably connected to its underlying data representation, and imbued with a ton of semantic metadata designed to actively facilitate further interactions with the UI (maybe, for instance, a generalized way to attach metadata about related UI elements to each element, so the UI itself forms a sort of semantic hypertext web you can navigate behind the scenes), plus composability, rearrangeability, and scriptability! That would probably be a massive boon for disabled people and also amazing for us nerds. Perhaps it would even focus on representing the interface in terms of that metadata, with the visual elements being abbreviations/overlays on top of it, sort of like how Emacs does graphics and GUI elements?
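
A rough sketch of what that could look like, with entirely hypothetical names, just to make the idea concrete: if every element carries a record like this, the visual layer, the screen reader view, and a scripting interface can all be derived from the same data.

    type ElementId = u64;

    enum Role { Button, Slider, CodeLine, Panel }
    enum Action { Activate, Increment, Decrement, Edit }
    enum Relation { LabelledBy, Controls, DescribedBy }

    // Each UI element describes itself semantically and links back to the
    // data it presents, instead of being an anonymous pile of pixels.
    struct Element {
        id: ElementId,
        role: Role,
        label: String,                        // what gets displayed and spoken
        value: Option<String>,                // current value, if any
        actions: Vec<Action>,                 // what you can do to it
        related: Vec<(Relation, ElementId)>,  // the "semantic hypertext web"
        model_path: String,                   // pointer back to the underlying data
    }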


Yes I think you and I are on the same wavelength.


I'm not surprised that most Rust GUIs are not a11y friendly; there is no established GUI library yet, and none of them are what I would call mature.

Not long ago, there weren't any GUI libraries that weren't just bindings to an existing C framework or in a proof-of-concept stage of their lifecycle.

I'm sure this situation will improve in the future, and I understand the frustration of someone who relies on a11y features, but you need to understand that everyone will first try to build a solid GUI library before they start adding accessibility functionality.


System76 is making headway on a11y for Cosmic Desktop. They are doing a lot of good work in the Rust GUI space.


Agreed, I think they might shape the first mainstream Rust GUI library.

They are doing an amazing job with the whole desktop environment. I'm yet to check out their work first-hand, but so far it looks very promising.


> I wish developers finally learned this lesson.

From a product perspective, re-inventing the wheel for something that — at best, many years from now — will be at parity with native presentation layers in terms of performance, a11y support, user experience, etc. is normally considered a risky move. Many startups have failed in part from pouring resources into shiny non-differentiators.

The only examples of successful products that use non-native UIs either (1) leverage web technologies or mature frameworks like Qt, or (2) are Blender (age 30). Apple did this with iTunes, but iTunes felt unpleasant on Windows, and people used iTunes for Windows in spite of this. I understand the appeal of creating frameworks like GPUI, but the article doesn't explain the relationship to the problem Zed is trying to solve.


There's also Zoom, which apparently uses an internal, heavily-modified fork of some Chinese UI framework on Windows (and Qt on everything else). They did go the extra mile and add accessibility support, though; their American clients, particularly the ones in government, healthcare and education, didn't really give them any other option.

There's also Google Docs, which uses weird tricks instead of rendering straight to DOM. They didn't even bother implementing accessibility in that layer, something which would probably have been impossible back then. Instead, they offer an accessibility mode and mark their entire UI as hidden to assistive technologies. When the accessibility mode is on, all speech is generated by a micro screen reader implemented in Google Docs directly, and the generated messages are sent as text to be spoken by your real screen reader. This is an ugly hack that doesn't really support braille displays very well, so they later implemented yet another layer of ugly hacks that retrofits the document on top of an actual DOM.

Your point still stands though, these are exceptions that prove the rule.


Interesting, thanks! It looks like Figma (a clear success) also took the "just give me a canvas" route:

"Pulling this off was really hard; we’ve basically ended up building a browser inside a browser. […] Instead of attempting to get one of these to work, we implemented everything from scratch using WebGL. Our renderer is a highly-optimized tile-based engine with support for masking, blurring, dithered gradients, blend modes, nested layer opacity, and more. All rendering is done on the GPU and is fully anti-aliased. Internally our code looks a lot like a browser inside a browser; we have our own DOM, our own compositor, our own text layout engine, and we’re thinking about adding a render tree just like the one browsers use to render HTML." https://www.figma.com/blog/building-a-professional-design-to...

The value proposition of a "boil the ocean" approach to UX frameworks is clearer for browser-based apps than native apps. That said, 7 years in, Figma apparently has a long way to go:

"To repeat a familiar refrain: We still have a lot of work to do! As we continue to improve access to our own products, we’re also advancing our understanding of what our users need to design accessibly." https://www.figma.com/blog/announcing-figjam-screen-reader-s...


> The only examples of successful products that use non-native UIs either (1) leverage web technologies or mature frameworks like Qt, or (2) are Blender (age 30). Apple did this with iTunes, but iTunes felt unpleasant on Windows, and people used iTunes for Windows in spite of this

Successful non-native UIs? Microsoft Office. Every single Adobe product, including the ones they got from Macromedia. I feel most professional software falls into this category. Spotify originally launched with a completely custom UI. On Windows it is more difficult to name successful products that used a native UI than those that did not.


Huh, is Office not just Win32?

I always assumed it was since the accessibility is there, but these are all Microsoft APIs after all, so they might just have implemented them on their own.


Yes, but I think those examples fall under (1), no?


Don’t want to downplay the concern. With modern machine learning, is there an opportunity to build a better accessibility tool?

Something that works on any set of pixels and sees like a human does?

I.e., one that parses text with OCR.


That would be useful if the normal way fails, but it would always be slower and less accurate than intentionally implemented accessibility.


Honest question out of curiosity-

Are there possibly AI-based solutions that can add assistance at a more generic level, without requiring software that has "deep" knowledge of the window architecture (and the actual text in it, etc.)? My understanding is that this is how tech like VoiceOver works- it knows the actual window definition and all the elements of it at a programmatic level and can take advantage of that.

I'm not asking if they're already available (although that would be a nice option), but for projects like this that wanted the speed of rendering on the GPU (at the possible cost of, as you said, VoiceOver seeing an "empty window"), it would be at least a fallback position.

(So does this mean that ALL content that renders through the GPU, such as games, is inaccessible to you? If so, I'm sorry...)

Not sure if this might help you but I have an Apple shortcut defined on my iPhone called "GPT Explains" that is activated by a double-tap on the back of the phone (which you can assign, as you probably know, in Accessibility settings)- it takes a screenshot, ships it off to OpenAI and returns with a description of what it's seeing, any to-English translation of non-English text, and any counterarguments to any claims made in a meme, etc. (yeah, the prompt for this is kinda wicked, lol). If this is helpful to you, I can give you a link after I remove my OpenAI key (you'd have to provide your own).

EDIT: I made a copy of it without the API key: https://www.icloud.com/shortcuts/0d063c6810d74a35a017e5a5f69...


iOS already has such a feature, but it's obviously not 100% accurate, not real time, and not great for battery life.

It's good enough if you have to click a broken "I accept your terms and conditions" checkbox, but nowhere near good enough to daily drive your phone with. In other words, a band-aid solution for when everything else fails, mostly for situations where the app you're trying to use is mostly usable, but has an accessibility barrier preventing you from carrying out a crucial step somewhere.

There's also vocr on Mac (and equivalent solutions on Windows), which recently got some AI features, but it doesn't even recognize control types (a very basic feature of any screen reader), just text. Again, good enough to get you through most installers where all you're doing is clicking "next" ten times, probably good enough to get you through the first-run experience in a VM that doesn't yet have a screen reader installed, at one tenth the speed of a sighted person, but that's about it.


Adding control type recognition seems like THE most trivial add-on feature to implement with an AI-powered solution. I mean, at that point it's just about training data... "Here are 100 different button looks. Here are 100 different radio button looks. Here are 100 different checkboxes. Here are 100 different dropdown menus." etc.


[flagged]


Judging from the current state of things, which to VoiceOver is an empty window with no elements whatsoever, if they have thought about accessibility, they definitely don't consider it a priority. With such a monumental task, I'd be willing to excuse some slip-ups, but there was literally zero work put into this.


Saying “zero work put into this” and noting they didn’t prioritize it is different from implying they’re naive and unaware of how complex and intricate accessibility can be. I’m not actually interested in whether they support it or not - I just think your comment is a poor and/or lazy attempt at dunking on them. It contributes to making discussion on this site less interesting and turning it into /. 3.0.

(For what it’s worth, if I were building a next-gen attempt at developer tooling, I would punt on accessibility at the start as well. It sucks, but that’s a much smaller segment of the market that you don’t _need_ to serve immediately. It only matters that you eventually get there.)


Also, even if they did use native widgets that come with integrated accessibility features, would those actually work as intended for a multi-user collaborative editor like Zed?

Imagine a group of four people live collaborating on a file in Zed. How do you present the actions that are being taken by everyone so that a screen reader can understand what is going on?

Mute everyone except the user? Speak every keystroke pressed by everyone all of the time? Announce line changes made by others when they pause for a while / when they move to another line?

In short, I think if accessibility for screen readers were to happen for Zed it would take a monumental amount of effort to make it usable, regardless of whether they are using native widgets or not.
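
For what it's worth, the "announce when they pause or move to another line" option could at least be prototyped with a small amount of state. A rough sketch (hypothetical names, println! standing in for a real screen reader announcement, and definitely not how Zed actually works): buffer remote edits per collaborator and only speak a summary once that collaborator goes quiet or jumps to a different line.

    use std::collections::HashMap;
    use std::time::{Duration, Instant};

    struct PendingEdit {
        line: u32,
        last_activity: Instant,
    }

    struct RemoteAnnouncer {
        pending: HashMap<String, PendingEdit>, // keyed by collaborator name
        quiet_period: Duration,
    }

    impl RemoteAnnouncer {
        // Called for every edit made by a remote collaborator.
        fn on_remote_edit(&mut self, who: &str, line: u32) {
            let now = Instant::now();
            let entry = self
                .pending
                .entry(who.to_string())
                .or_insert(PendingEdit { line, last_activity: now });
            if entry.line != line {
                // They moved to another line: announce the one they just left.
                println!("{} edited line {}", who, entry.line);
                entry.line = line;
            }
            entry.last_activity = now;
        }

        // Called periodically; flushes announcements for collaborators who paused.
        fn tick(&mut self) {
            let now = Instant::now();
            let quiet = self.quiet_period;
            self.pending.retain(|who, edit| {
                if now.duration_since(edit.last_activity) >= quiet {
                    println!("{} edited line {}", who, edit.line);
                    false // announced, drop the entry
                } else {
                    true
                }
            });
        }
    }

Even that toy version raises the hard questions: how much detail to speak, how to handle braille, and how to avoid drowning out the user's own typing.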


You would not believe the hubris of our profession in general around "how hard can writing a GUI toolkit be?" and also how little most engineers seem to think about accessibility when scheduling and estimating and architecting.

Backdrop: helped ship accessibility features onto the Google "Nest" Home Hub, somewhat last minute. And then watched it all get re-written for Fuchsia w/ Flutter (which of course had no accessibility story yet) and, yeah, last minute again.


No, I’m aware of how people downplay the scope involved in building a GUI framework. I’ve written your comment no less than (I estimate) 10 times on this very site.

I give these devs the benefit of the doubt given their prior body of work.


I think it's likely they have. But is there anything specific where they've written or spoken about Zed and accessibility?


I just posted a quote from this page: https://zed.dev/docs/themes

They've thought about it, and have a "follow this link to discuss" that links to the wrong thing so you can't actually follow the link to discuss it. Unless you're interested in a closed issue regarding back and forward buttons.


They’ve only spent 10 years writing text editors. They might not have heard of accessibility before. /s


To be honest, the previous editor they worked on was Atom, and they didn't care then either, even though their job would have been much easier there. It's definitely in the realm of possibility that they have next to no accessibility experience.


Why bother with accessibility? If a disabled person wants to use an IDE, they can use a different one??


I'll take this seriously since lots of people probably wonder this even if they don't bother to ask it.

Disability isn't a permanent state that you start with. It's something that can happen to you 5 years into your career, or 15. It can also be temporary - you break your leg and now you need crutches, a cane or a wheelchair until you heal, for example.

Accessibility also helps people who you wouldn't traditionally classify as disabled: Designing UI to be usable one-handed is obviously good for people who have one hand, but some people may be temporarily or situationally one-handed. Not just because they broke an arm and it's in the cast, but perhaps they have to hold a baby in one arm, or their other hand is holding a grocery bag, or they're lying in bed on their side.

Closed captions in multimedia software or content are obviously helpful for the deaf, but people who are in a loud nightclub or on a loud construction site could also benefit from captions, even if their ears work fine.

So, ultimately: Why should someone who's used to using a given editor have to switch any time their circumstances change? The developers of the editor could just put the effort in to begin with.


It can take an instant to become disabled: there are no permanent and distinct sets of "disabled" and "not-disabled" people.


Not to defend GP, but if I suddenly went blind, I really don't know if it would take longer to learn how to use my existing tools with a screen reader or to learn new tools better designed for it. It would be a completely new and foreign workflow either way.


This is not about what tools you want to use, but what tools you're forced to use by your team.

If this were a simple, offline editor, a decision not to focus on accessibility would be far easier to swallow. But they seem to be heavily promoting their collaboration features. If those on your team collaborate using Zed and expect you to do the same, other tools aren't an option.


Fair point. I'm not used to being forced to use particular tools so I didn't think of that.


Have you considered that disability is not always permanent? What if you were temporarily blind? Or could only see magnified or high-contrast UIs? Or you broke both arms, but your feet are fine for using your USB driving Sim pedals as an input device for 10 weeks while your arms heal? Would you still want to learn new workflows to be used over a few months only?

A11y isn't about helping one set of users (those who have completely lost their sight); it's about helping people across a whole spectrum of accessibility challenges - not by prescribing boxed solutions, but by giving the user options customizable to their specific needs.


I did consider temporary disabilities, and don't see how it changes anything I said (which is, again, not much).


The time spent learning new tools & workflows isn't worth it if they are only to be used for a limited period. It's much better to use the old tools one is familiar with in those cases.


You're still assuming that learning the old tool in a new way is faster. That's specifically what I'm questioning.


Accessibility helps all users, not just disabled users.


Which is to say, we are all (temporarily, situationally, eventually) disabled in some way


I have most commonly heard this phrased as, we are all temporarily able-bodied.


Which accessibility features are you most commonly using? Just wondering what the most-used accessibility features are that new GUIs don't have.


> what are the most used accessibility features

ramps (by parents with babies in their strollers), subtitles (by people learning languages or in loud environments), audio description (by truck drivers who want to watch Netflix but can't look at the screen), audiobooks (initially designed for the blind, later picked up by the mainstream market), OCR (same story), text-to-speech, speech-to-text and voice assistants (same story again), talking elevators (because it turns out they're actually convenient), accessibility labels on buttons (in end-to-end testing, because they change far less often than CSS classes), I could go on for hours.

For user interfaces specifically, programmatic access is also used by automation tools like Auto ID or Autohotkey, testing frameworks (there's no way to do end-to-end testing without this), and sometimes even scrapers and ad blockers.


Focus and focus management working as expected


Getting ratioed because people think “my app must follow accessibility for… oh.. uh… because big company does so we must do same ooga booga smoothbrain incapable of critical thinking” Waste of time unless you are aiming your application AT people who use screen readers e.g. medical or public sector. EVEN THEN has anyone actually tried? Even accessible websites are garbage.


This is not Twitter


I'm guessing this was downvoted for being rude, but I think there is a valid question here. It looks like Zed is putting a lot of work into minimizing the latency between a key being typed and feedback being displayed on a visual interface which is easily parsed by a sighted user.

If a programmer is using audio for feedback, then there is probably some impedance mismatch in translating a visual interface into an audio description. Shouldn't there be a much better audio encoding of the document? There would also be many more wasted cycles pushing around pixels, which the programmer will never see. An editor made specifically for visually impaired programmers, unencumbered by the constraints of a visual representation, would be able to explore the solution space much better than Zed.


This has been tried in Emacspeak[1] and doesn't work that well in practice. I'm in the blind community and know plenty of blind programmers, none of whom seriously use Emacspeak. VS Code is all the rage now, and for good reason: their accessibility story is excellent, and they even have audio cues for important actions (like focusing on a line with an error) now.

[1] https://emacspeak.blogspot.com



