Artificial Intelligence Challenge 2: Making User Interfaces Accessible and Usable for Everyone

Why is it difficult?

Digital user interfaces based on graphics, touch, or voice have proliferated in the Internet era alongside the growth of web and mobile apps. Most of these interfaces are designed without considering people with different types, degrees, and combinations of disabilities, and it is impractical to train every designer in the world to address accessibility in all of their designs. Can we use AI to create an Info-Bot that understands digital interfaces and how people use them? If so, we could completely redefine and re-imagine accessibility by combining this open-source, public-access Info-Bot with ‘Individual User Interface Generators’ (IUIGs) tuned to individuals with particular disabilities, such as blindness or deafness. Such a daunting task would require mastering many dimensions of artificial intelligence to work in the real world across static, dynamic, and virtual interfaces.

What is the impact?

Achieving the vision of the Info-Bot and IUIGs would enormously expand the coverage and accessibility of user interfaces across all types, degrees, and combinations of ability and disability, literacy, digital literacy, memory, and capacity to learn new things. It would allow companies and designers to create simple interfaces that the Info-Bot can understand, with assurance that the IUIGs will convert their design into the product each individual needs. More importantly, it would make information and communication technologies (ICT) accessible to people who cannot access them today, or who lose their ability to use new (and old) ICT as they age.