Normal vision is essentially a spatial sense that often relies upon touch and movement during and after development; there is often a correlation between how an object looks and how it feels. Moreover, as a child's senses develop, there is cross-referencing between the various senses. Indeed, where the links between the senses are not made, there may be developmental problems or delays. This should be taken into consideration when training new users of visual prosthetics, artificial retinas, or bionic eyes, suggest researchers in Australia. Writing in the International Journal of Autonomous and Adaptive Communications Systems, a team at Monash University explain that haptic devices, technologies that simulate the feel of an object, should be used as early as possible in children fitted with visual prosthetics, and also for older congenitally blind and late-blind people. The haptic device can provide supplementary or redundant information that allows cross-referencing with the visual input from the prosthetic. This, George van Doorn and colleagues suggest, will help train the brain more effectively to understand the electrical input it is receiving from the prosthetic.
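To make the idea of cross-referencing concrete, the sketch below is purely illustrative, with hypothetical numbers and function names that are not taken from the Monash work: it fits a simple linear calibration so that raw readings from a visual prosthetic can be interpreted in the units that haptic measurements already provide.

```python
# Illustrative sketch (not the researchers' method): use haptic measurements
# as a reference to calibrate the raw output of a visual prosthetic.
# All data and names below are hypothetical.

def fit_linear_calibration(visual_readings, haptic_sizes_cm):
    """Least-squares fit of haptic_size ~= a * visual_reading + b."""
    n = len(visual_readings)
    mean_v = sum(visual_readings) / n
    mean_h = sum(haptic_sizes_cm) / n
    cov = sum((v - mean_v) * (h - mean_h)
              for v, h in zip(visual_readings, haptic_sizes_cm))
    var = sum((v - mean_v) ** 2 for v in visual_readings)
    a = cov / var
    b = mean_h - a * mean_v
    return a, b

# Hypothetical training pairs: raw prosthetic output vs. size felt by hand.
visual = [12.0, 20.0, 33.0, 41.0]   # arbitrary device units
haptic = [3.1, 5.0, 8.2, 10.1]      # centimetres, measured by touch
a, b = fit_linear_calibration(visual, haptic)
print(f"estimated size = {a:.2f} * reading + {b:.2f} cm")
```

Once such a mapping is learned from touch, the visual channel alone can be read in familiar units, which is the sense in which the redundant haptic input helps interpret the prosthetic's signal.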
The input to the brain from any of our senses is ultimately an electrochemical signal; no actual light, sounds, odor molecules or other stimuli enter the brain. During infancy, the brain learns to interpret these different signals. However, the brain can be retrained to "understand" inputs from seemingly odd places. For instance, researchers grafted an electronic retina, not dissimilar to a low-resolution digital camera, to a patient's tongue and then helped the patient learn how to interpret patterns of light hitting the sensor, even though the electrical signals reach the brain from receptors in the tongue.
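The following minimal sketch assumes a simple average-pooling scheme and made-up frame data; it only shows, in broad strokes, how a camera image can be reduced to a coarse grid of stimulation levels in the spirit of such low-resolution displays, and is not the actual device's processing.

```python
# Minimal sketch (assumptions, not the real device): map a camera frame to a
# coarse grid of stimulation levels, the way a low-resolution tongue display
# or artificial retina reduces an image to a few dozen points.

def downsample(frame, out_rows, out_cols):
    """Average-pool a 2-D list of brightness values (0-255) into a small grid."""
    in_rows, in_cols = len(frame), len(frame[0])
    rh, cw = in_rows // out_rows, in_cols // out_cols
    grid = []
    for r in range(out_rows):
        row = []
        for c in range(out_cols):
            block = [frame[r * rh + i][c * cw + j]
                     for i in range(rh) for j in range(cw)]
            row.append(sum(block) / len(block))
        grid.append(row)
    return grid

# Hypothetical 8x8 camera frame reduced to a 4x4 "electrode" grid.
frame = [[(r * c * 4) % 256 for c in range(8)] for r in range(8)]
electrodes = downsample(frame, 4, 4)
for row in electrodes:
    print(["%.0f" % v for v in row])
```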
At the moment, artificial retinas are very low resolution, a small array of a few dozen pixels, whereas a digital camera might have millions of pixels in its sensor. One can imagine that during the next few years artificial retinas will become more sophisticated and their resolution will increase. The limiting factor is the ability of the brain to be retrained to understand the input from these devices. Van Doorn and colleagues Barry Richardson and Dianne Wuillemin, experts in virtual reality, bionics and tactile technologies, are now investigating how a haptic device might help. They suggest that exploiting multisensory processes will allow cross-calibration of information from the environment as well as assist in teaching recipients of visual prosthetics to filter out noise, just as the brains of sighted individuals are able to do when looking at an object or scene.
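As a rough illustration of why redundant inputs help with filtering out noise, the sketch below applies textbook inverse-variance weighting to fuse two hypothetical estimates of the same quantity, one visual and one haptic; this is a generic sensor-fusion example, not the researchers' algorithm.

```python
# Illustrative sketch of cross-calibration and noise filtering: combining two
# noisy, redundant estimates (visual and haptic) with inverse-variance
# weighting gives a less noisy estimate than either sense alone.
# The figures below are hypothetical.

def fuse(estimate_v, var_v, estimate_h, var_h):
    """Inverse-variance weighted combination of two independent estimates."""
    w_v, w_h = 1.0 / var_v, 1.0 / var_h
    fused = (w_v * estimate_v + w_h * estimate_h) / (w_v + w_h)
    fused_var = 1.0 / (w_v + w_h)
    return fused, fused_var

# Hypothetical readings of the same edge position (in cm) from each sense.
visual_est, visual_var = 9.4, 1.0    # prosthetic: noisier
haptic_est, haptic_var = 10.1, 0.25  # touch: more reliable here
value, variance = fuse(visual_est, visual_var, haptic_est, haptic_var)
print(f"fused estimate: {value:.2f} cm (variance {variance:.2f})")
```

The fused variance is always smaller than either input variance, which is one simple way the redundancy between touch and prosthetic vision can suppress noise.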
The concepts are not unrelated to the ability of Braille readers to "see" text and deaf people to "hear" sign language. There are, however, critical periods in development when the brain is most receptive and plastic. Even poor sensory information is better than none at all, the team explains, provided that the different inputs, from a visual prosthetic and a haptic device, for instance, correlate and all tell the same story about the world. "The inescapable conclusion is that, if the aim of a bionic eye, or equivalent, is to restore functional vision in the young or less young, then a visual prosthetic must operate in a multimodal context in which haptics will be a major player," the researchers conclude.