Two neuroscientists have designed a model that mirrors human visual learning. They say that computer-based artificial intelligence (AI) can work more like human intelligence when programmed to use a faster technique for learning new things.
The research was conducted by Maximilian Riesenhuber, PhD, professor of neuroscience at Georgetown University Medical Center, and Joshua Rule, PhD, a postdoctoral scholar at UC Berkeley, and was published in the journal Frontiers in Computational Neuroscience. It explains how the new method increases the ability of AI software to learn new visual concepts quickly.
Riesenhuber says, “Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples. We can get computers to learn much better from a few examples by leveraging prior learning in a way that we think mirrors what the brain is doing.”
He explains that humans are capable of learning new visual concepts quickly from sparse data; sometimes a single example is enough. Even three- to four-month-old babies can easily recognize zebras and distinguish them from cats, giraffes, and horses. Artificial intelligence software, by contrast, usually needs to “see” many examples of the same object before it can identify and learn it.
Making Artificial Intelligence learn faster
The standard approach in artificial intelligence software identifies an object using only low- and intermediate-level information, such as shape and colour. The redesigned software makes a significant change: it instead identifies relationships between whole visual categories.
Riesenhuber says, “The computational power of the brain’s hierarchy lies in the potential to simplify learning by leveraging previously learned representations from a databank, as it were, full of concepts about objects.”
The two neuroscientists found that artificial neural networks learned new visual concepts significantly faster when their method was applied: the networks represent new objects in terms of previously learned concepts.
Rule explains, “Rather than learning high-level concepts in terms of low-level visual features, our approach explains them in terms of other high-level concepts. It is like saying that a platypus looks a bit like a duck, a beaver, and a sea otter.”
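This idea of describing a new concept through its similarity to already-learned, high-level concepts can be illustrated with a minimal sketch. The code below is not the authors' actual model; the concept names, feature vectors, and the cosine-similarity "concept code" are all illustrative assumptions. A single example of a hypothetical new concept ("platypus") is re-described as a vector of similarities to known concepts ("duck", "beaver", "sea otter"), and a test input is then matched against that stored code.

```python
# Minimal illustrative sketch (NOT the published model): represent a new
# concept as a vector of similarities to previously learned concepts,
# rather than in terms of raw low-level visual features.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "previously learned" concept prototypes in a feature space.
known_concepts = {
    "duck": rng.normal(0.0, 1.0, 16),
    "beaver": rng.normal(2.0, 1.0, 16),
    "sea_otter": rng.normal(-2.0, 1.0, 16),
}

def concept_code(features, concepts):
    """Re-describe an input as cosine similarities to each known concept."""
    names = sorted(concepts)
    sims = np.array([
        np.dot(features, concepts[n])
        / (np.linalg.norm(features) * np.linalg.norm(concepts[n]))
        for n in names
    ])
    return names, sims

# One example of the new concept is enough to store a concept-level code:
# the platypus example is (by construction) a blend of duck and beaver.
platypus_example = 0.5 * known_concepts["duck"] + 0.5 * known_concepts["beaver"]
_, platypus_code = concept_code(platypus_example, known_concepts)

# A new input is recognized by comparing its concept code to the stored one.
test_input = 0.6 * known_concepts["duck"] + 0.4 * known_concepts["beaver"]
_, test_code = concept_code(test_input, known_concepts)
distance = np.linalg.norm(test_code - platypus_code)
print(f"distance in concept space: {distance:.3f}")
```

Because both inputs are similar blends of the same known concepts, their three-dimensional concept codes end up close together, so the new input is matched from just one stored example.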
Human visual concept learning rests on a brain architecture that builds on the neural networks involved in object recognition. The anterior temporal lobe is thought to contain “abstract” concept representations that go beyond shape. These complex neural hierarchies for visual recognition are what allow humans to learn new tasks by leveraging prior learning, which is crucial.
“By reusing these concepts, you can more easily learn new concepts, new meaning, such as the fact that a zebra is simply a horse with a different stripe,” Riesenhuber says.
The scientists say that despite advances in artificial intelligence, the human visual system remains the gold standard in its ability to generalize from few examples, cope with image variations, and understand scenes.
“Our findings not only suggest techniques that could help computers learn more quickly and efficiently, they could also lead to improved neuroscience experiments aimed at understanding how people learn so quickly, which is not yet well understood,” Riesenhuber concludes.