Comparing deep brain networks: Can they ‘see’ as well as humans? | India News

BENGALURU: A new study from the IISc Center for Neuroscience (CNS) has examined how well deep neural networks – machine learning systems inspired by the network of neurons in the human brain – compare with the human brain in visual perception.
Highlighting that deep neural networks can be trained to perform specific tasks, researchers say they have played a pivotal role in helping scientists understand how our brain perceives the things we see.
“Although deep networks have evolved significantly over the last decade, they are still nowhere near performing as well as the human brain in perceiving visual cues. In a recent study, SP Arun, an associate professor at CNS, and his team compared various qualitative properties of these deep networks with those of the human brain,” IISc said in a statement.
Deep networks, while a good model for understanding how the human brain visualizes objects, work differently from it, IISc says, adding that while complex computations may be trivial for these networks, some tasks that are relatively easy for humans can be difficult for them to complete.
“In the current study, published in Nature Communications, Arun and his team sought to understand which visual tasks these networks can perform naturally by virtue of their architecture, and which require further training. The team studied 13 different perceptual effects and uncovered previously unknown qualitative differences between deep networks and the human brain,” the statement reads.
An example, IISc says, was the Thatcher effect – a phenomenon where human beings find it easier to recognize local feature changes in an upright image, but this becomes difficult when the image is flipped upside down.
Deep networks trained to recognize upright faces showed the Thatcher effect, unlike networks trained on object recognition. Another visual property of the human brain, called mirror confusion, was also tested on these networks. For humans, mirror reflections along the vertical axis appear more similar than those along the horizontal axis. The researchers found that deep networks also show stronger mirror confusion for vertically reflected images than for horizontally reflected ones.
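To make the mirror-confusion test concrete, here is a minimal illustrative sketch (not the study's actual method or networks): it compares a toy image's feature vector against its vertical and horizontal mirror reflections. The `features` function is a made-up stand-in for a deep network's feature layer.

```python
import numpy as np

def features(img):
    # Hypothetical stand-in for a network's feature layer:
    # column and row intensity profiles, concatenated.
    return np.concatenate([img.mean(axis=0), img.mean(axis=1)])

def dissimilarity(a, b):
    # Euclidean distance between feature vectors.
    return np.linalg.norm(features(a) - features(b))

rng = np.random.default_rng(0)
img = rng.random((8, 8))  # toy 8x8 "image"

vertical_mirror = np.fliplr(img)    # reflection about the vertical axis
horizontal_mirror = np.flipud(img)  # reflection about the horizontal axis

d_v = dissimilarity(img, vertical_mirror)
d_h = dissimilarity(img, horizontal_mirror)
print(f"distance to vertical mirror:   {d_v:.3f}")
print(f"distance to horizontal mirror: {d_h:.3f}")
```

In the study's terms, mirror confusion would show up as the vertical-mirror distance being systematically smaller than the horizontal one across many images; with a real trained network in place of the toy `features`, that is the comparison being run.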
“Another phenomenon unique to the human brain is that it takes in the big picture first. This is known as the global advantage effect. For example, in an image of a tree, our brain would first see the tree as a whole before noticing the details of the leaves in it,” explains Georgin Jacob, first author and PhD student at CNS.
Surprisingly, he said, the neural networks showed a local advantage. This means that, unlike the brain, the networks focus on the finer details of an image first. Thus, although these neural networks and the human brain perform the same object recognition tasks, the steps the two follow are very different.
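The classic probe for global-versus-local processing is a Navon-style stimulus: a large letter built out of small copies of a different letter. The short sketch below generates one as text (the names and shapes here are my own; the study used image stimuli).

```python
# A large "H" drawn using small "s" characters: the global shape (H) and
# the local elements (s) disagree, so what you report first reveals
# whether processing is global-first or local-first.
H_SHAPE = [
    "X...X",
    "X...X",
    "XXXXX",
    "X...X",
    "X...X",
]

def navon(global_shape, local_char):
    # Replace each "X" in the template with the local character.
    return "\n".join(
        "".join(local_char if c == "X" else " " for c in row)
        for row in global_shape
    )

print(navon(H_SHAPE, "s"))
```

A human viewer typically reports the global "H" before the small "s" elements (the global advantage); the study's finding is that deep networks weight the local elements more heavily.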
Arun, the study’s senior author, says recognizing these differences can push researchers closer to making these networks more brain-like. Such analyses can help researchers build more robust neural networks that not only perform better but are also immune to “adversarial attacks” that aim to derail them.
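For readers unfamiliar with the term, an adversarial attack adds a tiny, carefully chosen perturbation to an input so that a model misjudges it. Below is a minimal sketch in the style of the fast gradient sign method, run against a toy linear model (the weights and input are made up; real attacks target trained deep networks).

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=16)  # toy model weights
x = rng.normal(size=16)  # toy input "image"

def predict(x):
    # Probability the toy model assigns to class 1 (logistic score).
    return 1.0 / (1.0 + np.exp(-w @ x))

# For this linear model, the gradient of the score w.r.t. the input is w.
# The attack nudges every pixel by epsilon in the direction that lowers
# the score, while keeping each change imperceptibly small.
epsilon = 0.25
x_adv = x - epsilon * np.sign(w)

print(f"clean score:       {predict(x):.3f}")
print(f"adversarial score: {predict(x_adv):.3f}")
```

The perturbation is bounded by `epsilon` per pixel, yet it reliably drags the score down; making networks whose judgments do not flip under such nudges is the robustness goal the article mentions.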