Researchers from the Indian Institute of Science (IISc) have found essential qualitative differences between the human brain and deep neural networks in their study, and suggest that these gaps could be narrowed by training the deep networks on larger datasets, incorporating more constraints, or modifying the network architecture.
The team from the Centre for Neuroscience (CNS) studied 13 different perceptual effects and found that convolutional or deep neural networks whose object representations are broadly aligned with the brain still differ from humans in systematic ways. “Many studies have been showing similarities between deep networks and brain networks, but none have really looked at systematic differences,” said SP Arun, Associate Professor at CNS and senior author of the study, in a note from the institute. Identifying these differences can push us closer to making these networks more brain-like, he added.
In their paper, “Qualitative similarities and differences in visual object representations between brains and deep networks” (Nature Communications, 2021), Georgin Jacob, RT Pramod, Harish Katti and SP Arun noted that while deep neural networks have revolutionized computer vision, and their object representations across layers broadly correlate with visual cortical areas of the brain, whether these representations exhibit qualitative patterns seen in human perception or brain representations remains unresolved. During their study, they found that phenomena such as the Thatcher effect, mirror confusion, Weber’s law, relative size, multiple object normalization and correlated sparseness were present in deep neural networks trained for object recognition.
However, phenomena such as 3D shape processing, surface invariance, occlusion, natural parts and the global advantage were absent in the trained networks. Explaining one of these effects, the global advantage, Georgin Jacob, first author and PhD student at CNS, said: “For example, in a tree image, our brain would see the tree as a whole before noticing the details of the leaves. Similarly, when presented with an image of a face, humans first look at the face as a whole, and then focus on finer details like the eyes, nose and mouth. Amazingly, the networks showed a local advantage. This means that, unlike the brain, the networks focus on the finer details of an image first. So while these neural networks and the human brain perform the same object recognition tasks, the steps followed by the two are very different.” The study provides suggestions for what could be incorporated into deep networks to make them more brain-like.
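The kind of measurement behind such comparisons can be sketched in a few lines: present a network with image pairs that differ either in their global arrangement or in their local details, and compare the distances between the resulting feature vectors. The sketch below is purely illustrative, assuming a random linear projection as a stand-in for a real network layer and random patterns as stand-ins for the study's actual stimuli; the function names and the index formula are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for one layer of a trained network:
# a fixed random linear projection from flattened pixels to features.
# The study probed real CNN layers; this only illustrates the procedure.
IMG, FEAT = 32 * 32, 256
W = rng.standard_normal((FEAT, IMG)) / np.sqrt(IMG)

def features(img):
    """Map a 32x32 image to a feature vector (toy 'layer activation')."""
    return W @ img.ravel()

def dist(a, b):
    """Euclidean distance between two images in feature space."""
    return float(np.linalg.norm(features(a) - features(b)))

# Toy stimuli: one change alters the whole-image arrangement,
# the other alters only a small local patch.
base = rng.standard_normal((32, 32))
global_change = base + 0.5 * rng.standard_normal((32, 32))  # global structure altered
local_change = base.copy()
local_change[:4, :4] += 0.5 * rng.standard_normal((4, 4))   # only local details altered

d_global = dist(base, global_change)
d_local = dist(base, local_change)

# An index of this form is positive when the layer is more sensitive to
# global changes (brain-like) and negative when it is more sensitive to
# local ones (the tendency the study reports for trained networks).
gai = (d_global - d_local) / (d_global + d_local)
print(f"d_global={d_global:.3f}, d_local={d_local:.3f}, index={gai:.3f}")
```

With a real pretrained network in place of the random projection, the same distance comparison can be repeated layer by layer to see where, if anywhere, a global advantage emerges.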