What if we could use our ears to see the world around us instead of our eyes? The suggestion may sound ludicrous, but thanks to the work of Dr Michael Proulx and colleagues we may be a step closer to achieving the seemingly impossible.
The concept of using our ears to process visual information stems from the theory of neural plasticity, which emerged approximately 40 years ago. Neural plasticity is the idea that the brain can adapt to changes such as sensory deprivation. Before this discovery, neuroscientists believed that each of our senses was tied to a fixed region of the brain, and it was therefore assumed that the region used for visual processing lay dormant in blind people. In fact, the brain may recruit that visual processing region to handle other kinds of information.
The pioneering work of Sadato and colleagues provided an example confirming this idea of plasticity in the brain. Blind subjects were asked to carry out a Braille reading task while undergoing functional MRI. Remarkably, although the subjects were using their hands rather than their eyes, there was extensive activation in the primary visual areas of the brain during the task.
Since then, researchers have been inspired to carry out further work on sensory substitution: providing information for a missing sense, such as vision in blind people, through an alternative, intact sense such as touch or hearing. This was first tested in the 1960s, when blind subjects were asked to sit in a dental chair fitted with moving pins along the back. A camera took in visual information, and the pixels of the camera image were translated into pin movements on the chair, producing a massaging motion along the subject's back: a white area caused a pin to push into the subject, while a dark area caused the pin to stay back. Remarkably, subjects learned to use this device and soon had the sensation of being able to 'see' what was in front of the camera.
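The pixel-to-pin mapping described above can be sketched in a few lines of code. This is a minimal illustration of the idea, not the original hardware; the grid size and the 0.5 brightness threshold are illustrative assumptions.

```python
def pixels_to_pins(image, threshold=0.5):
    """Map a 2D grid of brightness values (0.0-1.0) to pin states.

    True  -> pin pushes into the user's back (bright/white area)
    False -> pin stays retracted (dark area)
    """
    return [[pixel > threshold for pixel in row] for row in image]

# Example: a 3x3 frame with a bright vertical bar in the middle column.
frame = [
    [0.1, 0.9, 0.1],
    [0.1, 0.9, 0.1],
    [0.1, 0.9, 0.1],
]
pins = pixels_to_pins(frame)
# The middle column of pins extends, so the user feels a vertical line.
```

The essential point is that the mapping is direct and spatial: the geometry of the image is preserved on the skin, which is what lets the brain learn to reinterpret the touch sensations as 'sight'.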
Of course, it would not be convenient to walk around with a dental chair strapped to your back, so there has been much work since then to improve this model and make it more practical for everyday life. This led to the development of the vOICe (the capital letters stand for "Oh I See") programme, developed by the Dutch programmer Peter Meijer in the early 1990s. Unlike the dental chair, this version translates visual information into sound. It works as follows. As in the first model, a camera is used, but this time it is fitted to a pair of sunglasses. The camera scans an image from left to right every second and sends the information to a computer running the vOICe programme, which transmits sounds describing the image to the individual through a set of headphones. Objects to the individual's left are heard through the left ear, while objects to their right are heard through the right ear. The brightness of the image is conveyed by the loudness of the sound, and the height of a pixel in the image is conveyed by pitch: a pixel high in the image produces a high tone, while a pixel low in the image produces a low tone. Although the resulting sounds are extremely complex, basic training over a period of about three weeks has been shown to let participants learn to interpret them, correctly identifying at least 75% of simple images.
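The mapping just described can be sketched as code. This is a simplified illustration of the principle, not Meijer's actual implementation; the frequency range, exponential pitch spacing, and one-second sweep are illustrative assumptions.

```python
def column_to_tones(column, f_low=500.0, f_high=5000.0):
    """Turn one image column (top-to-bottom brightness values, 0.0-1.0)
    into (frequency_hz, amplitude) pairs, one tone per pixel."""
    n = len(column)
    tones = []
    for row, brightness in enumerate(column):
        # Row 0 is the top of the image, so it gets the highest frequency.
        frac = 1.0 - row / (n - 1) if n > 1 else 1.0
        # Exponential spacing between f_low and f_high matches how we hear pitch.
        freq = f_low * (f_high / f_low) ** frac
        tones.append((freq, brightness))  # loudness follows brightness
    return tones

def scan_image(image, duration_s=1.0):
    """Sweep left to right across the image over `duration_s` seconds,
    yielding (time_offset, stereo_pan, tones) for each column."""
    width = len(image[0])
    for x in range(width):
        column = [row[x] for row in image]
        # Pan from -1.0 (left ear) to +1.0 (right ear) as the scan moves right.
        pan = -1.0 + 2.0 * x / (width - 1) if width > 1 else 0.0
        yield (x * duration_s / width, pan, column_to_tones(column))
```

The sketch makes the three mappings explicit: horizontal position becomes time and stereo pan, vertical position becomes pitch, and brightness becomes loudness.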
Now Proulx, along with his team in Düsseldorf, has tried to take this research further, looking at the recognition of real objects such as a plate or a teapot. Both blind and blindfolded participants came in for training once a day for three weeks. They soon learnt to detect important features of the sounds, such as the twittering noise produced by light reflecting off metallic or glass objects, which helped them distinguish between very similar items such as a comb and a brush. Participants were first asked to categorise various household objects using the camera, identifying them as belonging to rooms such as the kitchen or bathroom. After further training, they had to identify the objects more specifically. Encouragingly, 60% of both blind and blindfolded participants were able to complete these tasks by the end of the training period, suggesting it may take only a matter of weeks for blind people to learn to regain their 'sight'.
As exciting as the progress of this software has been, it is hoped that in future it can be applied to even more challenging tasks, such as finding an empty chair in a room. This, however, would also require the individual to carry out a visual search, a perceptual task requiring attention. Many researchers have already focussed their efforts on understanding visual search, but it is hoped that future work will combine research on auditory and visual search, giving individuals who have lost their sight the best chance of being able to 'see' again.