
The never-ending hype of AI
In the middle of technological evolution, it is often hard to tell what will affect us the most in the long run. Every now and then, a new gadget is described as a game changer. Some of these innovations disappear and are forgotten, while others actually turn out to have a major impact on society or individuals’ lives.
Very few of us can predict technology correctly in advance, which is just a sign of us being human. So, will AI “change everything” when it comes to accessibility, or is it just another bump in the road (or a glitch in the matrix)?
At the recent edition of the M-Enabling Summit – an annual policy- and technology-oriented global event on digital accessibility, run by the UN initiative G3ict in Washington DC – AI seemed to sneak into almost every session. It is, of course, important, not least because of the potential risk it poses to the disability community. Many panels were genuinely good, informative and thought-provoking, but not every speaker had something brilliant or new to say about AI. After a while, the discussions became predictable.
The week before the Summit, I led a workshop on AI and accessibility in Brussels, as part of a study that Digital Europe is conducting on behalf of the European Commission. The discussions were very hands-on and relevant, but they may have exhausted the part of my brain still open to new perspectives on the potential of AI.
A blessing and a curse
One way to make the discussion more tangible is to be more precise about specific implementations. Speech recognition, for example, has many use cases and is already widespread on the market. We can all clearly see the potential, as well as the problems when it comes to small language areas, speech impairments, accents, and bias.
A recurring topic is the opportunity for personalisation: how systems, interfaces and browsers could automatically understand how a specific user would like information to be presented. At first glance, this may sound like a huge step forward for persons with disabilities. But do people realise the potential privacy issues arising from such functionality? I personally think that most of us, regardless of ability, click “Yes” or “OK” far too often without reading or understanding the fine print. Who will make sure that end users do not end up in a situation where no insurance company will accept them as customers because of leaked personal information?
“AI needs to be overseen by humans”, “AI output needs to be quality assured manually” and similar self-evident statements are often made. But what do they mean in reality? How do we ensure that people – who are just as imperfect as any technology – make good decisions based on an unreliable source?
Learn from the best – stay out of the worst
If you only have time to read up on one source, my recommendation is to look into what Jutta Treviranus, Professor at the Inclusive Design Research Centre at OCAD University, Toronto, Canada, is doing in the field. Her focus is on AI ethics, and on how the statistical analysis that AI is based on risks amplifying, accelerating and automating inequities. Her arguments are always grounded in research, and you are sure to be inspired by Jutta’s work.
A more down-to-earth reflection is that accessibility is one of the topics where it becomes obvious that machine learning needs tonnes of data. There are simply not enough reliable sources out there for you to let ChatGPT lead the way on accessibility. Use it with care – and only for topics where you know enough to tell when your AI buddy is inventing things.
Susanna Laurin, Managing Director and Chair, Funka Foundation