BENGALURU: A new study from the Centre for Neuroscience (CNS) at the Indian Institute of Science (IISc) has explored how closely deep neural networks (machine learning systems inspired by the networks of neurons in the human brain) match the human brain in visual perception.
Pointing out that deep neural networks can be trained to perform specific tasks, the researchers say they have played a key role in helping scientists understand how our brains perceive the things we see.
“Even though deep networks have evolved significantly over the last decade, they are still nowhere close to matching the human brain in perceiving visual cues. In a recent study, SP Arun, an associate professor at the CNS, and his team compared several qualitative properties of these deep networks with those of the human brain,” IISc said in a statement.
Deep networks, while a good model for understanding how the human brain visualises objects, work differently from it, IISc said, adding that while complex computations are trivial for them, certain tasks that are relatively easy for humans can be difficult for these networks to complete.
“In the current study, published in Nature Communications, Arun and his team tried to understand which visual tasks these networks can perform naturally by virtue of their architecture and which require additional training. The team studied 13 different perceptual effects and uncovered hitherto unknown qualitative differences between deep networks and the human brain,” the statement says.
One example, IISc said, is the Thatcher effect: a phenomenon where humans find it easier to recognise changes in local features in an upright image, but find this difficult when the image is turned upside down.
Deep networks trained to recognise upright faces showed the Thatcher effect, unlike networks trained to recognise objects. Another visual property of the human brain, called mirror confusion, was also tested in these networks. To humans, mirror reflections along the vertical axis appear more similar than those along the horizontal axis. The researchers found that deep networks likewise show stronger mirror confusion for vertically reflected images than for horizontally reflected ones.
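At its core, testing mirror confusion amounts to measuring how similar a network's responses are for an image and each of its mirrored versions. The sketch below is a minimal, hypothetical illustration of that comparison: a flattened array stands in for a network's feature activations (the study used responses from trained deep networks, not raw pixels, so no effect should be expected here), and cosine similarity quantifies how alike the two reflections are.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened feature arrays."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-in for an image's feature map (the study used real
# deep-network activations; random values carry no mirror structure).
rng = np.random.default_rng(0)
features = rng.random((8, 8))

# Reflection about the vertical axis = left-right flip;
# reflection about the horizontal axis = up-down flip.
v_mirror = np.fliplr(features)
h_mirror = np.flipud(features)

sim_vertical = cosine_similarity(features, v_mirror)
sim_horizontal = cosine_similarity(features, h_mirror)

# Mirror confusion in the study's sense would show up as
# sim_vertical > sim_horizontal across many images, when the
# features come from a trained network.
print(sim_vertical, sim_horizontal)
```

This only shows the shape of the measurement; in the actual experiments the similarity is computed over the responses of many units in a trained network and averaged over many images.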
“Another peculiar phenomenon of the human brain is that it first focuses on coarser details. This is known as the global advantage effect. For example, in an image of a tree, our brain would first see the tree as a whole before noticing the details of its leaves,” explains Georgin Jacob, first author and doctoral student at the CNS.
Surprisingly, he said, the neural networks showed a local advantage: unlike the brain, the networks focus first on the finer details of an image. So although these neural networks and the human brain perform the same object-recognition tasks, the steps the two follow are very different.
Arun, the study’s senior author, says identifying these differences may bring researchers closer to making these networks more brain-like. Such analyses can help researchers build more robust neural networks that not only perform better but are also immune to “adversarial attacks” that aim to derail them.