Computer science and engineering researchers Kostas Bekris and Eelke Folmer of the University of Nevada, Reno presented their indoor navigation system for people with visual impairments at two national conferences in the past two weeks. The researchers explained how a combination of human-computer interaction and motion-planning research was used to build a low-cost, accessible navigation system, called Navatar, which can run on a standard smartphone. "Existing indoor navigation systems typically require the use of expensive and heavy sensors, or equipping rooms and hallways with radio-frequency tags that can be detected by a handheld reader and which are used to determine the user's location," Bekris, of the College of Engineering's Robotics Research Lab, said. "This has often made the implementation of such systems prohibitively expensive, with few systems having been deployed."
Instead, the University of Nevada, Reno navigation system uses digital 2D architectural maps that are already available for many buildings, along with low-cost sensors, such as accelerometers and compasses, that are available in most smartphones, to guide users with visual impairments. The system locates and tracks the user inside the building, finds the most suitable path based on the user's specific needs, and gives step-by-step instructions to the destination.
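Finding "the most suitable path" on a 2D architectural map is a classic shortest-path problem. The sketch below is illustrative only (the map, node names, and the `avoid` parameter are assumptions, not Navatar's actual data structures): it runs Dijkstra's algorithm over a small graph of hallway intersections and doors, and lets the planner exclude nodes such as stairs for a user who needs a step-free route.

```python
import heapq

# Hypothetical indoor map: nodes are hallway intersections, doors, and
# elevators; edge weights are walking distances in meters (illustrative).
GRAPH = {
    "entrance": {"hall_A": 10.0},
    "hall_A":   {"entrance": 10.0, "hall_B": 8.0, "room_101": 3.0},
    "hall_B":   {"hall_A": 8.0, "elevator": 5.0, "stairs": 4.0},
    "room_101": {"hall_A": 3.0},
    "elevator": {"hall_B": 5.0},
    "stairs":   {"hall_B": 4.0},
}

def shortest_path(graph, start, goal, avoid=()):
    """Dijkstra's algorithm; `avoid` skips nodes (e.g. stairs) to
    tailor the route to the user's needs."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return list(reversed(path)), d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph[node].items():
            if nbr in avoid:
                continue
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(queue, (nd, nbr))
    return None, float("inf")

path, meters = shortest_path(GRAPH, "entrance", "elevator", avoid={"stairs"})
# path: entrance -> hall_A -> hall_B -> elevator, 23 meters in total
```

Each edge of the resulting path can then be translated into one spoken instruction, which is what makes a plain shortest-path route usable as turn-by-turn guidance.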
"However, the smartphone's sensors, which are used to calculate how many steps the user has executed and her orientation, tend to pick up false signals," Folmer, who has developed exercise video games for the blind, said. "To synchronize the location, our system combines probabilistic algorithms with the natural capability of people with visual impairments to detect landmarks in their environment through touch, such as corridor intersections, doors, stairs and elevators."
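One standard probabilistic algorithm for fusing noisy dead reckoning with occasional landmark confirmations is a particle filter. The toy sketch below is an assumption about the general technique, not Navatar's implementation: particles are position hypotheses along a one-dimensional corridor, each detected step advances them with a noisy stride length, and a confirmed door landmark reweights and resamples them toward the known door positions on the map.

```python
import math
import random

random.seed(0)  # deterministic for the sake of the example

# Illustrative corridor model: positions in meters from the corridor start.
DOORS = [4.0, 9.0, 15.0]          # landmark positions known from the map
STEP_LEN, STEP_NOISE = 0.7, 0.15  # assumed stride length and noise (m)

particles = [random.uniform(0.0, 1.0) for _ in range(500)]

def predict(particles):
    """Motion update: one detected step moves every particle forward
    by a noisy stride length (dead reckoning)."""
    return [p + random.gauss(STEP_LEN, STEP_NOISE) for p in particles]

def landmark_update(particles, sigma=0.5):
    """Measurement update: the user confirmed touching a door, so weight
    each particle by closeness to the nearest known door, then resample."""
    weights = []
    for p in particles:
        d = min(abs(p - door) for door in DOORS)
        weights.append(math.exp(-d * d / (2 * sigma * sigma)))
    total = sum(weights)
    weights = [w / total for w in weights]
    return random.choices(particles, weights=weights, k=len(particles))

# Simulate roughly six detected steps, then a confirmed door landmark.
for _ in range(6):
    particles = predict(particles)
particles = landmark_update(particles)
estimate = sum(particles) / len(particles)  # clusters near the 4 m door
```

The key idea matches the quote: the drifting step counter alone would accumulate error, but each tactile landmark confirmation snaps the probability mass back onto a map feature.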
Folmer explained that because touch-screen devices are challenging for users with visual impairments, directions are provided using synthetic speech, and users confirm the presence of a landmark verbally or by pressing a button on the phone or on a Bluetooth headset. A benefit of this approach is that the user can leave the phone in their pocket, leaving both hands free for using a cane and recognizing tactile landmarks.
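The eyes-free interaction loop described above can be sketched as a simple narration cycle: speak one instruction, block until the user confirms the landmark, then continue. Everything here is hypothetical (the route text, the `confirm` callback standing in for a button press or voice reply, and the list standing in for text-to-speech output):

```python
# Hypothetical route: (spoken instruction, landmark to confirm) pairs.
ROUTE = [
    ("Walk straight about 10 meters", "hallway intersection"),
    ("Turn right and walk 8 meters", "elevator door"),
]

def narrate(route, confirm):
    """Speak each instruction, then wait for the user to confirm the
    landmark. `confirm(landmark)` is injected so the sketch stays
    testable; in practice it would block on a button press or a
    verbal reply."""
    spoken = []
    for instruction, landmark in route:
        spoken.append(instruction)  # would go to a text-to-speech engine
        confirm(landmark)           # button press or verbal confirmation
    spoken.append("You have arrived")
    return spoken

log = narrate(ROUTE, confirm=lambda landmark: None)
```

Injecting the confirmation step as a callback reflects the design choice in the article: the phone never needs to be looked at or held, since input is a single button or a spoken word.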
"This is a very cool mix of disciplines, using the user as a sensor combined with sophisticated localization algorithms from the field of robotics," Folmer, of the University's Computer Science Engineering Human-Computer Interaction Lab, said.