Making Technology Accessible

Assisting the Deaf and Blind in a Digital Age

Richard Ladner and graduate student Sangyun Hahn (seattlepi.com)

Can you imagine a blind person taking their cane, walking to the car, telling it where to, and it goes there? I think that’s going to happen.
Richard Ladner

In the 21st century, UW has emerged as a center of research on making computing technology and digital communication accessible to people with disabilities.

In 2002, a blind graduate student joined the Computer Science and Engineering Department at UW and began working with Richard Ladner. Ladner, who was raised by two deaf parents, noticed that the student could not access the figures in his textbooks, and together they began researching how to produce tactile representations of figures quickly for blind readers. The result was the Tactile Graphics Assistant (TGA), which uses machine learning to scan a figure, detect the text it contains, and reproduce that text in Braille. Ladner developed TGA with help from graduate students Sangyun Hahn, Chandrika Jayant, and Lauren Milne, as well as undergraduate students, including Catherine Baker.
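
The substitution step at the heart of TGA can be illustrated with a short sketch. The Python snippet below is a hypothetical simplification, not the project's actual code: it assumes OCR has already extracted a label from a figure and maps it letter-for-letter to Grade 1 Braille Unicode cells, whereas real Braille translation also handles contractions, numbers, and capitals.

    # Simplified sketch: replace an OCR-extracted figure label with
    # Braille Unicode cells (Grade 1, letters only).
    BRAILLE = {
        'a': '\u2801', 'b': '\u2803', 'c': '\u2809', 'd': '\u2819',
        'e': '\u2811', 'f': '\u280b', 'g': '\u281b', 'h': '\u2813',
        'i': '\u280a', 'j': '\u281a', 'k': '\u2805', 'l': '\u2807',
        'm': '\u280d', 'n': '\u281d', 'o': '\u2815', 'p': '\u280f',
        'q': '\u281f', 'r': '\u2817', 's': '\u280e', 't': '\u281e',
        'u': '\u2825', 'v': '\u2827', 'w': '\u283a', 'x': '\u282d',
        'y': '\u283d', 'z': '\u2835',
    }

    def to_braille(label: str) -> str:
        """Translate an extracted figure label into Braille cells."""
        return ''.join(BRAILLE.get(ch, ' ') for ch in label.lower())

    print(to_braille('axis'))  # -> ⠁⠭⠊⠎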

While certain steps in the TGA process must be checked by hand, the technology can convert all the figures in a textbook, with their text rendered in Braille, in a matter of minutes, making those figures accessible to blind students. Since not all blind people read Braille, the next step was to also produce QR codes that, when scanned with a smartphone, play the figure's text aloud.
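
The QR-code step admits an equally small sketch. The snippet below uses the open-source qrcode Python package; the package choice, the sample caption, and the file name are assumptions made for illustration, not the project's actual tooling.

    # Encode a figure's text in a QR code so a smartphone can scan the
    # code and read the text aloud. Assumes: pip install qrcode[pil]
    import qrcode

    figure_text = "Figure 3.1: Force diagram of a block on an inclined plane."
    img = qrcode.make(figure_text)   # build the QR symbol
    img.save("figure_3_1_qr.png")    # printed alongside the tactile figure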

Tactile Graphics Assistant

Student using MobileASL (wired.com)

Having begun research on TGA, Ladner built a research program in accessible computing between 2004 and 2008. One of his first students, Jeff Bigham, developed WebAnywhere, which lets blind people use the internet from any computer without installing screen-reader software. WebAnywhere is a website hosted on UW servers: it fetches a requested webpage, inserts code into it, and delivers a modified page whose text can be read aloud. WebAnywhere is available to anyone and has been made available in many languages other than English, and because it is open source, developers outside UW have been able to improve the service.
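
That fetch-inject-deliver loop can be sketched in a few lines. The Python server below is purely illustrative: the reader.js script is hypothetical, and the real WebAnywhere also handled navigation, text-to-speech, and keyboard interaction.

    # Toy proxy in the spirit of WebAnywhere: fetch a requested page
    # server-side, inject a (hypothetical) screen-reader script, and
    # serve the modified page back to the browser.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs
    from urllib.request import urlopen

    READER_TAG = b'<script src="/static/reader.js"></script>'  # hypothetical

    class ProxyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Expect requests like /?url=http://example.com/
            qs = parse_qs(urlparse(self.path).query)
            target = qs.get("url", ["http://example.com/"])[0]
            html = urlopen(target).read()
            # Inject the reader script just before </body>.
            html = html.replace(b"</body>", READER_TAG + b"</body>")
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(html)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), ProxyHandler).serve_forever()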

Ladner had collaborated for several years with Electrical Engineering Professor Eve Riskin on video compression; that work, combined with Ladner's knowledge of deaf accessibility issues and American Sign Language (ASL), led them to develop MobileASL with graduate students Jessica Tran, Jaehong Chon, Neva Cherniavsky, Anna Cavender, and Rahul Vanam, along with many undergraduate students. Begun in the early years of smartphones, the project sought to make ASL video communication practical by finding the lowest bit rate at which signing remains legible, while minimizing data usage and maximizing battery life. To reduce bandwidth requirements further, the team developed video analysis algorithms that detect motion and lower the frame rate when no one is actively signing.
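
The motion-triggered frame-rate idea can be sketched as follows. This numpy snippet is an illustration under stated assumptions, using mean absolute pixel difference as a crude motion score; MobileASL's actual activity detectors and encoder integration were considerably more sophisticated.

    # Sketch: keep the full frame rate while someone signs, and send
    # only occasional frames during idle stretches.
    import numpy as np

    MOTION_THRESHOLD = 4.0   # assumed mean-absolute-difference cutoff
    IDLE_KEEP_EVERY = 5      # during idle stretches, send 1 frame in 5

    def select_frames(frames):
        """Yield (index, frame) pairs chosen for transmission."""
        prev, idle_run = None, 0
        for i, frame in enumerate(frames):
            if prev is None:
                yield i, frame          # always send the first frame
            else:
                # Mean absolute pixel difference as a crude motion score.
                motion = np.mean(np.abs(frame.astype(float) - prev.astype(float)))
                if motion >= MOTION_THRESHOLD:
                    idle_run = 0
                    yield i, frame      # signing: keep full frame rate
                else:
                    idle_run += 1
                    if idle_run % IDLE_KEEP_EVERY == 0:
                        yield i, frame  # idle: occasional refresh frame
            prev = frame

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        base = rng.integers(0, 255, (240, 320), dtype=np.uint8)
        idle = [base + rng.integers(0, 2, base.shape, dtype=np.uint8) for _ in range(15)]
        active = [rng.integers(0, 255, base.shape, dtype=np.uint8) for _ in range(15)]
        kept = [i for i, _ in select_frames(idle + active)]
        print("transmitted frames:", kept)  # sparse early, dense later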

As technology has improved, particularly with 4G mobile networks, the need for a dedicated MobileASL program has diminished. Nonetheless, the research raised awareness of the need to make cell phones accessible to deaf users. When Apple introduced FaceTime in 2010, one of its first advertisements featured two people communicating in ASL.

This video gives a brief demonstration of how the MobileASL project works.

Additional Resources

Tactile Graphics Assistant

MobileASL

GeekWire Article