Even on the Web, in almost 40 years of developing software, there was only a single project where it actually mattered; the customer was part of a government organization.
Like writing secure software, until this becomes a legal requirement no matter what, it will keep being ignored by the large community of developers.
> until this becomes a legal requirement no matter what, it will keep being ignored by the large community of developers.
"no matter what"? One of the more popular uses for these immediate-mode UI toolkits is to create interfaces for video games, either debug UIs or the menus painted over the game's main scene/content. I don't think people who need a screen reader are going to be playing a twitch-reaction shooter game.
I agree that these shouldn't be used for general use applications, but I strongly disagree with the sentiment that somehow all programs should be forced to work with screen readers. Some domains and applications are primarily visual and don't really translate well to textual interaction. I think these kinds of toolkits work best with those kinds of applications.
You shouldn't use these to write the next Discord or Slack or Firefox or LibreOffice etc. -- but I don't see a problem with making a debug UI or a menu for an action video game with an immediate mode toolkit.
You would be surprised at what people do with screen readers, but yes, that probably won’t work for first person shooters.
Realistically, however, even if this is only/mostly used for games (if so, why doesn’t https://github.com/Immediate-Mode-UI/Nuklear even mention the word game?), many, if not most, of them probably will be turn-based, because it’s much easier to write such games.
Also, accessibility doesn’t imply screen reader. It also includes high contrast, larger fonts, tabbing through controls (e.g. to support users with Parkinson’s or other motor disabilities), etc. Nowadays, a GUI library should pick up the settings for those from the OS.
I would think the same thing, but playing video games blindfolded is a thing. Look up ponktus's Super Metroid 100% blindfolded run, and zallard1/sinister1's 2p1c blindfolded Punchout, for examples.
If it's possible for someone to beat these games blindfolded (at a competitive pace, no less!), then it's possible for a blind person to beat them too.
The old consoles didn't have screen readers, but watching zallard1 play Wii Punchout blindfolded, I can see where they would help. The fights are amazing, yet it's painful to watch him navigate the menu system.
That’s up to the person or company creating the video game to decide though. It can’t be a legal requirement any more than you could require painters to also produce a 3D model of their work so the visually impaired can enjoy it too.
This seems a bit short-sighted. Should we not legally require accessible entrances to shops, banks, etc., because it should be up to the people who own the building to decide whether it's worth the massive expense of making it accessible for only a few customers?
A videogame is (almost always) an unnecessary, discretionary waste of time for everyone who uses it. A storefront probably serves a purpose; a bank obviously serves a purpose. It might not be that easy to draw the line but it's obvious that "game" is on one side and "bank" the other.
It absolutely can be a legal requirement, though I am not arguing that it should be one. I was specifically addressing the comment "I don't think people who need a screen reader are going to be playing a twitch-reaction shooter game." Apologies for being unclear.
The people playing games blindfolded have typically first beaten them countless times with normal sight. You will usually see this with games whose core gameplay elements are not influenced by any significant randomness.
Outside of that lack of randomness, I doubt there are any purposeful or even accidental affordances towards blind players in those games.
Woah hang on there buddy, this is an open source project often used for video games. If you want accessibility so bad that you think it should be a “legal requirement” why don’t you write your own accessible GUI tool kit instead of complaining on the internet?
I feel there are plenty of reasons to not write your own anything, from prioritizing other projects, not having time, or just not having all of the required expertise. As a student I'm lucky enough to be able to drop basically everything and work on this one cool project idea so long as I get my essays in on time, but that just doesn't seem to be something universally applicable.
> instead of complaining on the internet
Where else would you prefer they complain? I agree it may be more effective to open an issue on the repo, but does that also count as "complaining on the internet"? Talking to people is how we change things. In this case, that's contributing to the usability of technology that grows ever more central in our lives.
Personally, I think a focus on accessibility is a great focus to have, and it should be obligatory if it isn't voluntarily universal. There is no reason for our society to provide more opportunities to humans with perfect vision than to humans with impaired vision.
Are there any options, besides Qt, that are accessibility friendly? I don't consider anything but Qt for this very reason (GTK on Windows/MacOS is not accessibility-aware or whatever you want to call it, to my knowledge.)
And I generally like Qt, but I can see how you might consider it heavy and a bit unwieldy.
I believe that .NET MAUI, when it drops, will be accessibility-friendly from the get-go. Of course, if Qt is heavy for one's needs, then I imagine .NET is, too. And of course there's Electron.
My impression, from looking into this a bit a few months ago, is that cross-platform accessibility is just a huge effort, and may be beyond the reach of projects that lack commercial backing.
Perhaps more technical than you intended, but on Linux it interfaces with screen readers via AT-SPI over DBus (or you can use the older libatk, IIRC). Also, as other people mentioned, high contrast, big text, and other specific display settings have to be figured out from the information passed by the OS / display server / WM.
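To make that concrete: the information a toolkit has to push over AT-SPI is essentially a tree of roles, names and states that the screen reader walks. A toy Python model of that data (this is not the real pyatspi/libatspi API; the class and role names here are purely illustrative):

```python
# Toy model of the semantic tree a toolkit exposes to a screen reader.
# Roles/states loosely mirror ATK's vocabulary; NOT the real AT-SPI API.
from dataclasses import dataclass, field


@dataclass
class AccessibleNode:
    role: str                                   # e.g. "push button", "dialog"
    name: str                                   # the label a screen reader speaks
    states: set = field(default_factory=set)    # e.g. {"focusable", "enabled"}
    children: list = field(default_factory=list)

    def walk(self):
        """Depth-first traversal, roughly how a reader navigates the tree."""
        yield self
        for child in self.children:
            yield from child.walk()


dialog = AccessibleNode("dialog", "Save changes?", {"modal"}, [
    AccessibleNode("push button", "Save", {"focusable", "enabled"}),
    AccessibleNode("push button", "Discard", {"focusable", "enabled"}),
])

# What a reader might announce while tabbing through the dialog:
spoken = [f"{n.name}, {n.role}" for n in dialog.walk()]
```

The point is that an immediate-mode toolkit, which rebuilds its UI every frame, has no such retained tree by default; producing and diffing one is exactly the work accessibility support adds.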
It's never openly stated, though, so there are a lot of people who DO use it for other things than games, and then end up with a completely non-accessible app.
There are lots of non-gaming apps that would be near impossible to use as a visually impaired person. Among them: image/video editors, most DAWs (digital audio workstations), etc.
The thing that makes it more unsuitable for video games, IMHO, is the lack of IME support, which absolutely is a concern for video games, as many gamedevs have found out The Hard Way.
It really depends on the game. Too many games rely on visuals as the primary form of output in a way that simply cannot be made particularly accessible, outside of the obvious things, which many newer games at least seem to address to various extents, as shown in the GMTK videos: colorblind modes, large text[1], on-screen icons for sounds, no quick-time events or other twitch controls.
Having said that, I think The Last of Us 2 needs to be studied more as they appear to have done an amazing job with accessibility, given that a blind player was able to complete it. Maybe other developers can learn from that.
[1] And a personal pet peeve of mine: pixel art games using pixellated fonts. I find most pixel fonts extremely difficult to read. When I posted about that on r/gamedev once I just got yelled at because "artistic vision", but your artistic vision is useless if I can't play the game. At least give me the option to use a normal, crisp font.
I don't know why you are being downvoted, since you state a correct viewpoint IMHO. Accessibility is not just about screen reading but also about options that make our interaction with the virtual product more pleasant (e.g. a resizeable UI, a non-transparent background under subtitles). There are also many indie, slow-paced games without any kind of voice acting (well, most tycoons), where even Windows Narrator will be enough. Please note, you don't have to be a blind person to use assistive software - there are a lot of people wearing corrective glasses who might use a screen reader.
Will I get scolded for not having implemented accessibility features into my yet-to-be-released game? Pull requests implementing those will be welcome though.
No, you won't. All I'm saying is, games are apps, and they should also acquire accessibility features. Many games do, and we can do better than the current status quo.
"mostly for video games, so not really a concern" should be downvoted, not the people who say that games should think about accessibility.
As usual, though, this statement gets posted without any real references on what actually needs to be done to make a graphical user interface accessible: rules like providing a high-contrast color scheme for people with reduced eyesight. I tried to find resources about accessibility a while back to actively work towards it, but couldn't really find anything official, or only things behind paywalls.
I'm not sure if the commenters posting about accessibility have a disability themselves or are speaking up for people with disabilities. The reality, however, is that in total it's a small percentage of users, and without guidelines, or a disability of your own, accessibility is really hard to work towards; especially given the variety and range of disabilities, at least from what I've seen in games, and the lack of good guidelines and accessible interfaces for assistive tools. So requiring an unfunded GUI library to have a high level of accessibility, or labeling it worthless otherwise, is a somewhat cheap way of judging these libraries.
Good points overall. The reason nobody posts examples or references is because it's way too damn hard right now.
Current accessibility APIs are tightly coupled (conceptually and logically) to APIs that originated in the 80s: Win32 and Cocoa.
If you're only using native widgets, full accessibility is virtually automatic. But as soon as you need even minimal customisation, you have to interact with extremely verbose APIs. Cocoa's API is much better; MS's UI Automation API is very arcane and complicated, and even MS employees acknowledge that.
On top of that, even if you're using the APIs as intended, the examples provided by Microsoft are low-quality and are not a good starting point for implementing accessibility in non-native UIs.
Thus, only giant corporations have the resources to fully re-implement accessibility in non-native applications. Google can do it for Flutter and Chromium (Electron). The Qt Company for Qt. Facebook for React Native. But single developers just don't have the capacity to do it in their lightweight libraries.
What we need is a smaller lower-level accessibility API that gives accessibility to game engines, non-native UI toolkits, TUIs and command line apps. But I don't think there's much incentive coming from OS makers to do it.
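A rough sketch of what such a smaller API could look like: instead of implementing dozens of COM/AT-SPI interfaces, the toolkit pushes one flat semantic-tree update per frame, and a platform adapter translates it to UIA / AT-SPI / NSAccessibility on its behalf. Everything below is hypothetical; no such API is assumed to exist here:

```python
# Hypothetical minimal "push" accessibility API for non-native toolkits.
# The toolkit emits this alongside its draw list; a platform layer would
# translate it to the native accessibility stack. Pure sketch.
import json


def push_tree_update(nodes, focus_id):
    """Serialize one frame's semantic tree for a (hypothetical) adapter."""
    return json.dumps({"focus": focus_id, "nodes": nodes}, sort_keys=True)


# An immediate-mode toolkit could rebuild and push this every frame:
frame = push_tree_update(
    nodes=[
        {"id": 1, "role": "window", "name": "Settings", "children": [2]},
        {"id": 2, "role": "checkbox", "name": "High contrast", "checked": False},
    ],
    focus_id=2,
)
```

The appeal of this shape for immediate-mode toolkits is that "rebuild everything each frame" is already their model; the hard part (diffing frames and speaking the native protocols) would live once in the shared adapter rather than in every toolkit.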
Take, for instance, a web browser. You could implement one using ncurses and the like, as links, lynx, etc. do. But to the screen reader this is just a terminal window with a bunch of text.
A GUI can help the screen reader know which part to read, when, and how it relates to other parts of the GUI.
Also, as a blind person, you live in a world of people who see. You cannot expect every developer to take care of and cater to your needs. A GUI toolkit that handles this automatically for the developer means the dev can just continue doing their thing, while blind people can benefit from it as well.
I have no experience with assistive software, but I suppose non-native UI libraries should be using the native OS toolkit (e.g. [0][1]) or specific API libraries targeting NVDA, JAWS, Orca and others (the same idea is shared in an answer on SO [2]). I guess web browsers and other native GUIs just do that behind the scenes.
Edit: IAccessible2 [3] seems to cover both Windows and Linux, whereas Apple's AppKit provides its own accessibility-focused UI elements [4]. Flutter also has something similar, called Semantics [5].
Are there any screen readers that can model a system from images? Immediate mode GUIs can deliver multiple frames per second, so it seems like there would be plenty of data points from which to build a dynamic model of the system.
This is possible, at least for restricted domains: I've personally written image-processing software for text extraction and application steering from high-frequency screenshots of a Windows app that didn't have an automation API.
Also: The DeepMind Starcraft 2 AI plays at a high level in real-time from, AIUI, an image stream.
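The locate-then-act loop behind that kind of screenshot steering can be illustrated with a toy "screen" of characters. A real pipeline would use actual screen capture plus an OCR engine (e.g. Tesseract) and synthetic input events; this sketch only shows the shape of the loop:

```python
# Toy version of steering an app from screenshots: the "screen" is a
# character grid, and we locate a label's position to target a click.
# Real pipelines substitute screen capture + OCR for this text search.
SCREEN = [
    "+----------------+",
    "| [ OK ] [Cancel]|",
    "+----------------+",
]


def find_label(screen, label):
    """Return (row, col) of a text label on the rasterized screen, or None."""
    for row, line in enumerate(screen):
        col = line.find(label)
        if col != -1:
            return (row, col)
    return None


pos = find_label(SCREEN, "Cancel")  # where a synthetic click would be aimed
```

In practice this is exactly where such approaches get brittle: OCR errors, moving layouts, and custom-drawn controls all break the text search, which is why a proper semantic API beats screen scraping whenever one exists.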
It works surprisingly well; however, it quickly falls apart if nonstandard controls are introduced. If you have a custom control that does not behave at all like a standard control would, you'll quickly see its limits. Its OCR is also pretty good, but errors do still happen, which make apps unusable. Don't get me wrong, it is actually amazing and works much better than I expected, but it's obviously no match for a proper implementation.