Probing the Differences Between Organic Vision and Machine Vision

For as long as machine vision has existed, we have known that the human visual system is far superior to anything we create for our robotic and artificial intelligence systems. In recent years, however, it has increasingly been presumed that our machine vision systems are rapidly closing the gap, given the vast strides made in all areas of visual object recognition, optical navigation, and visually triggered reflexes.

What we lacked was a definitive study of what the human visual system can do and, more importantly, how it does it. Countless studies have of course been performed by medical experts around the globe, but almost all of them from a medical perspective. Of the remainder, most have been neurological, concentrating on specific sections of the brain. A truly holistic assessment of the human visual system from a programming perspective had not been undertaken.

That has now begun to change, hence this article. Five researchers at UC Santa Barbara have been working to close that gap, focussing on the differences between human and machine vision systems from an algorithmic, programming and architecture perspective.

Miguel Eckstein, UC Santa Barbara professor of psychological and brain sciences, together with researchers Tim Preston, Koel Das, Barry Giesbrecht, and Fei Guo, focussed their efforts on the greatest difference between human and machine vision systems – the raw speed of visual search. Their paper, "Feature-Independent Neural Coding of Target Detection during Search of Natural Scenes," has been published in the Journal of Neuroscience.

"Our daily lives are comprised of little searches that are constantly changing, depending on what we need to do," said Eckstein. "So the idea is, where does that take place in the brain?"

A large part of the human brain is dedicated to vision, with different parts involved in processing the many visual properties of the world. Some parts are stimulated by colour, others by motion, yet others by shape.
However, those parts of the brain tell only a part of the story. What Eckstein and co-authors wanted to determine was how we decide whether the target object we are looking for is actually in the scene, how difficult the search is, and how we know we've found what we were looking for. By reverse-engineering this information, they hope to radically improve our machine vision systems by making use of the same methods the human brain uses.

They found their answers in the dorsal frontoparietal network, a region of the brain that roughly corresponds to the top of one's head, and is also associated with properties such as attention and eye movements. In the parts of the human brain used earlier in the processing stream, regions stimulated by specific features like colour, motion, and direction are a major part of the search, just as they are for our artificial counterparts. However, in the dorsal frontoparietal network, activity is not confined to any specific features of the object. Instead, something very different is going on, which is likely the key to the increased speed of human visual search.


From left to right: the MRI machine used to determine areas of activity in the subjects' brains; researcher Tim Preston; Associate Professor of Psychological & Brain Sciences Barry Giesbrecht; and Professor of Psychological & Brain Sciences Miguel P. Eckstein.

"It's flexible," said Eckstein. Using 18 observers, an MRI machine, and hundreds of photos of scenes flashed before the observers with instructions to look for certain items, the scientists monitored their subjects' brain activity. By watching the intraparietal sulcus (IPS), located within the dorsal frontoparietal network, the researchers were able to note not only whether their subjects found the objects, but also how confident they were in their finds.

The IPS region would be stimulated even if the object was not there, said Eckstein, but the pattern of activity would not be the same as it would be had the object actually existed in the scene. The pattern of activity was consistent, even though the 368 different objects the subjects searched for were defined by very different visual features. This, Eckstein said, indicates that the IPS did not rely on the presence of any fixed feature to determine the presence or absence of various objects. Other visual regions did not show this consistent pattern of activity across objects.
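A toy sketch may help make that finding concrete: if "target present" really is coded independently of the target's features, a classifier trained to tell present from absent activity patterns for one set of searched objects should transfer to searches for entirely different objects. Everything below (the simulated voxel data, the shared "presence" axis, the use of scikit-learn) is our own illustrative assumption, not the study's analysis code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_voxels = 200

# Assumed generative model: every search, whatever the object, expresses
# 'target present' along the same shared pattern axis, plus noise.
present_axis = rng.standard_normal(n_voxels)

def simulate_trials(n, present):
    base = present_axis if present else np.zeros(n_voxels)
    return base + 2.0 * rng.standard_normal((n, n_voxels))

# Train on searches for one set of objects...
X_train = np.vstack([simulate_trials(100, True), simulate_trials(100, False)])
y_train = np.array([1] * 100 + [0] * 100)

# ...then test on searches for entirely different objects. Under the
# feature-independent model they share the same presence axis, so the
# classifier should transfer with accuracy well above chance.
X_test = np.vstack([simulate_trials(100, True), simulate_trials(100, False)])
y_test = np.array([1] * 100 + [0] * 100)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("cross-object decoding accuracy:", clf.score(X_test, y_test))
```

A feature-dependent region, by contrast, would use a different pattern axis for each object, and the cross-object accuracy would collapse towards chance.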

"As you go further up in processing, the neurons are less interested in a specific feature, but they're more interested in whatever is behaviourally relevant to you at the moment," said Eckstein. Thus, a search for an apple, for instance, would make red, green, and rounded shapes relevant. If the search was for your car keys, the interparietal sulcus would now be interested in gold, silver, and key-type shapes and not interested in green, red, and rounded shapes.

In other words, the brain is using pattern association and associative memory, drawing on internal reasoning and past experience to narrow the list of possible targets drastically, far beyond the speed a standard visual search would allow. This is where our artificial systems let themselves down: they concentrate solely on the visual data they are presented with, whereas the brain is, in effect, multithreaded. One thread examines the image data while another considers the object being searched for, associating it with additional likely properties beyond the simple shape of the item; these properties are fed back to the thread running the visual search, allowing it to discard candidates that do not match this new data.
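A minimal sketch of that two-process idea, with every object name, property, and memory entry below purely hypothetical: one process scans candidate regions while an associative-memory lookup supplies the likely properties of the target, which are fed back to prune candidates before any detailed matching.

```python
# Hypothetical associative memory: likely properties of search targets,
# learned from past experience rather than from the current image.
ASSOCIATIVE_MEMORY = {
    "car keys": {"colours": {"silver", "gold"}, "shape": "elongated"},
    "apple":    {"colours": {"red", "green"},   "shape": "rounded"},
}

def search(candidates, target):
    # 'Second thread': recall likely properties of the target...
    expected = ASSOCIATIVE_MEMORY[target]
    # ...and feed them back to the scanning 'thread', which discards
    # candidates cheaply, before any expensive detailed matching.
    return [c for c in candidates
            if c["colour"] in expected["colours"]
            and c["shape"] == expected["shape"]]

scene = [
    {"label": "apple", "colour": "red",    "shape": "rounded"},
    {"label": "keys",  "colour": "silver", "shape": "elongated"},
    {"label": "ball",  "colour": "blue",   "shape": "rounded"},
]
print(search(scene, "car keys"))  # only the silver, elongated candidate survives
```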

"For visual search to be efficient, we want those visual features related to what we are looking for to elicit strong responses in our brain and not others that are not related to our search, and are distracting," Eckstein added. "Our results suggest that this is what is achieved in the intraparietal sulcus, and allows for efficient visual search."

References

UCSB Study Reveals Brain Functions During Visual Searches

Feature-Independent Neural Coding of Target Detection during Search of Natural Scenes (Subscription Required)
