AI is transforming many aspects of our lives, and its potential to enhance accessibility is particularly exciting for those who still struggle to participate fully. Recently, at the Annual Conference for Inclusion Enterprises in Germany, I discussed how AI tools like ChatGPT can help. I am particularly impressed by how easily AI can translate between different media formats: voicing written text, subtitling spoken words, or describing visual inputs. I also found out that PowerPoint already supports live subtitling and translation out of the box.
So, where we previously needed a dedicated person to translate, everyone can now have a personal assistant that makes information, and the world, accessible in the way they process it best.
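To make those media translations concrete, here is a minimal sketch using OpenAI's Python SDK. The model names, file names, and image URL are my assumptions for illustration; any comparable transcription, text-to-speech, and vision models would work just as well.

```python
# Minimal sketch of translating between media formats with OpenAI's
# Python SDK. Assumes the openai package (v1+) is installed and the
# OPENAI_API_KEY environment variable is set; file names are placeholders.
from openai import OpenAI

client = OpenAI()

# Speech -> text: subtitle/transcribe a recording.
with open("meeting.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)
print(transcript.text)

# Text -> speech: voice written text for someone who cannot read it.
speech = client.audio.speech.create(
    model="tts-1", voice="alloy", input=transcript.text
)
speech.write_to_file("spoken.mp3")

# Image -> text: describe a visual input.
description = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image for a blind user."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(description.choices[0].message.content)
```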
Bridging the Media Divide
While screen readers have long translated text into braille or a (rather robotic) voice, these older techniques were still linear or search-driven: users had to listen to or read a lot, because scanning information is much harder. LLMs, however, let users ask for specific information directly, and they can explain details or add context beyond what the original source provided. No need to read through an entire page just to find the contact details, for example.
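As a toy illustration of that targeted querying, the sketch below asks the model one specific question about a page instead of reading the whole thing aloud. The file name, model choice, and prompt wording are placeholder assumptions.

```python
# Sketch: ask a targeted question about a page instead of scanning it all.
# Assumes the openai package and OPENAI_API_KEY; company_page.txt stands in
# for whatever text a screen reader or browser extension extracted.
from openai import OpenAI

client = OpenAI()

with open("company_page.txt") as f:
    page_text = f.read()

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer only from the provided text."},
        {"role": "user", "content": f"What are the contact details?\n\n{page_text}"},
    ],
)
print(answer.choices[0].message.content)
```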
Benefits for Everyone
Accessibility is often neglected when there is no benefit for people without disabilities as well, but luckily we can all profit from these efforts. For example, the startup xrai.glass provides an app that works on the phone but also with smart glasses (Affiliate link) to subtitle conversations in real time. It was originally aimed at deaf and hard-of-hearing users, but since LLMs are also great at translating into other languages, everyone now has a good use case for smart glasses: understanding conversations in a foreign language. Have a look at their video demonstration here:
GPTs for Simple Language
Sensory input is not the only limitation we can experience; cognitive limitations also hinder many people from communicating and understanding effectively. Here as well, LLMs are a fantastic tool for translating text and speech into “simple language,” so that more people can understand it. Of all the things I demonstrated, a GPT that translates text into a simple version surprisingly drew the most interest.
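A custom GPT is essentially a system prompt under the hood, so the idea can be rebuilt with a few lines against the API. The prompt wording below is my own assumption, not the exact one I demonstrated:

```python
# Sketch of a "simple language" translator, rebuilt as a plain API call.
# Assumes the openai package and OPENAI_API_KEY; the system prompt is a
# hypothetical reconstruction of what a custom GPT would contain.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Rewrite the user's text in simple language: short sentences, "
    "common words, one idea per sentence, no jargon. Keep all facts."
)

def simplify(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(simplify("The remuneration will be disbursed subsequent to verification."))
```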
A Glimpse into the Future
Large language models are a key technology for greater social and professional inclusion of individuals with disabilities, and they should be promoted and possibly subsidized as such. If you know anyone with a disability, please introduce them to these possibilities if they are not yet aware of them. The mobile version of ChatGPT, for example, has a great voice interface. The app Be My Eyes uses OpenAI's GPT-4 to handle visual tasks that previously required volunteers, like finding or reading things. Smart glasses, as mentioned, are great for people hard of hearing; if you are tired of yelling at your grandparents, these make a cool gift. The latest demos from OpenAI and Apple Intelligence also show how quickly progress is happening right now. They might replace a lot of apps that currently address the needs of disabled users.
If you know of any cool use cases, apps, or developments regarding AI and accessibility, be sure to leave a comment!