Most Web users have encountered CAPTCHAs, though they may not recognize the acronym for Completely Automated Public Turing Test to tell Computers and Humans Apart. A CAPTCHA is a challenge-response test—typically comprising an obscured sequence of letters and digits—designed to determine whether the user is human.
"Traditional visual CAPTCHAs, or twisted text, are supposed to be something that only a human who can see can understand," says Jonathan Lazar, associate professor of computer and information sciences and director of TU's Universal Usability Laboratory. "The visual clutter discourages image recognition by automated viruses and bots."
As crucial as they are to thwarting spam, CAPTCHAs present a formidable obstacle to blind or visually impaired users. That's where Lazar and a former student enter the picture.
"Jon Holman, an undergraduate major in computer information systems, approached me and said he wanted to study blind users," Lazar says.
With Lazar's encouragement—and the help of the National Federation of the Blind—Holman convened a focus group on online security. He discovered that CAPTCHAs were one of the biggest usability hurdles for this group.
"He wanted to identify alternatives that would be equally usable by people with and without impairments," Lazar adds.
Holman, Lazar and Jinjuan (Heidi) Feng, assistant professor of computer and information sciences, subsequently developed a CAPTCHA prototype that combined a non-textual picture with an equivalent sound clip. A sighted person could use the picture or the sound clip as easily as the typical twisted text, but, more important, blind users could rely on the sound clip, and deaf users on the picture. And because it used no text at all, the audio/visual CAPTCHA should also be harder for text-recognition software to crack.
Using a trial-and-error process, the trio coordinated audio and visual cues. Images and sound clips had to be clear, unique and obvious, such as matching a picture of a piano to the sound of a piano, or the sound of rain to a picture of rain. The choices focused primarily on musical instruments, animal sounds and weather sounds. Some of the image-sound combinations proved problematic during the user-evaluation phase, when blind users found it difficult to identify the sound of a "gently grunting" pig.
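The matching idea the team describes—pairing an image and a sound clip of the same concept so that either cue alone identifies the answer—can be sketched in a few lines of code. This is a minimal illustration only; the concept names, file paths, and class design below are assumptions for the sketch, not details of the actual Towson prototype.

```python
import random

# Illustrative concept list: each entry pairs an image and a sound clip of
# the same thing, so a user can answer from whichever cue they can perceive.
# Names and file paths are hypothetical.
CONCEPTS = {
    "piano": {"image": "piano.png", "audio": "piano.wav"},
    "rain":  {"image": "rain.png",  "audio": "rain.wav"},
    "dog":   {"image": "dog.png",   "audio": "dog_bark.wav"},
}

class AudioVisualCaptcha:
    """Sketch of a non-textual CAPTCHA: one answer, two equivalent cues."""

    def __init__(self, concepts=CONCEPTS):
        self.concepts = concepts
        self.answer = None

    def new_challenge(self):
        """Pick a random concept and return both cues for the same answer."""
        self.answer = random.choice(list(self.concepts))
        cues = self.concepts[self.answer]
        return cues["image"], cues["audio"]

    def verify(self, response):
        """Accept the answer regardless of which cue the user relied on."""
        return response.strip().lower() == self.answer

captcha = AudioVisualCaptcha()
image_file, audio_file = captcha.new_challenge()
# A sighted user identifies the image; a blind user identifies the sound;
# a deaf user identifies the image. All type the same one-word answer.
```

The key design point the sketch captures is that the server stores a single expected answer per challenge, so the two cues are interchangeable rather than requiring both.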
Holman presented a demo of his prototype, "Developing Usable CAPTCHAs for Blind Users," last fall at the ACM ASSETS conference in Tempe, Ariz.
"Jon had a really great experience at ASSETS," says Lazar. "He was the only undergraduate there researching human-computer interaction. Everyone else was a doctoral candidate."
Holman has since earned his degree, but Lazar, Feng and their colleague Harry Hochheiser have four graduate students working on accessibility-related projects, including a more stable and robust version of the original CAPTCHA prototype.
Are real-world applications in the offing? Lazar sees great potential in the TU CAPTCHA project, but cautions that more work remains to be done.
"This was a great project that combined two strengths here at Towson: computer security and computer accessibility," he adds. "We have a long history of doing this type of practical research in partnership with community-based organizations, and we're going to continue to do it."
The team of researchers presented a paper, "Investigating the Security-Related Challenges of Blind Users on the Web," at the Cambridge (U.K.) Workshop on Universal Access and Assistive Technology during the spring of 2008. The work has been published by Springer-Verlag as a chapter in the book Designing Inclusive Futures.
After graduating with honors from Towson University in December 2007, Jon Holman moved to Northern Virginia and now works for Unisys in the Federal Systems Group.