
A new architecture that combines deep neural networks and vector-symbolic models

Researchers from IBM Research Zurich and ETH Zurich have recently developed and presented a neuro-vector-symbolic architecture (NVSA). This architecture synergistically combines two powerful mechanisms: deep neural networks (DNNs) for the visual perception frontend and vector-symbolic architectures (VSAs) for the probabilistic reasoning backend. Their architecture, presented in the journal Nature Machine Intelligence, overcomes limitations of both approaches and solves progressive matrices and other reasoning tasks more effectively.

Currently, neither deep neural networks nor symbolic artificial intelligence (AI) alone demonstrates the level of intelligence that we observe in humans. A key reason is that neural networks struggle to keep the representations of distinct objects separate within a shared distributed representation, a difficulty known as the binding problem. Symbolic AI, on the other hand, suffers from rule explosion: the number of candidate rules to search grows combinatorially. These two problems are central to neuro-symbolic AI, which aims to combine the best of both paradigms.

The neuro-vector-symbolic architecture (NVSA) is specifically designed to address these two problems by exploiting the powerful VSA operators over high-dimensional distributed representations, which serve as a common language between neural networks and symbolic AI. NVSA combines deep neural networks, known for their proficiency in perception tasks, with the VSA mechanism.

VSA is a computational model that uses high-dimensional distributed vectors and their algebraic properties to perform symbolic computations. In a VSA, all representations, from atomic symbols to compositional structures, are high-dimensional holographic vectors of the same fixed dimensionality.

VSA representations can be composed, decomposed, probed, and transformed in various ways using a set of well-defined operations, including binding, unbinding, bundling (superposition), permutation, inverse permutation, and associative memory lookup. These compositional and transparent characteristics make VSA well suited to analogical reasoning, but VSA has no perception module to process raw sensory inputs. It requires a perception frontend that parses the scene and supplies the symbolic representations that reasoning operates on.
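To make these operations concrete, here is a minimal sketch of one common VSA instantiation (bipolar vectors, where binding is elementwise multiplication and bundling is a majority vote). This is an illustration of the general VSA mechanism, not the specific model used inside NVSA; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high dimensionality makes independent random vectors quasi-orthogonal

def random_hv():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding: elementwise multiply; self-inverse, so bind(bind(a, b), b) == a."""
    return a * b

def bundle(*vs):
    """Bundling (superposition): elementwise majority sign; ties become 0."""
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):
    """Normalized dot product: near 0 for unrelated vectors, 1 for identical ones."""
    return a @ b / D

# Encode the compositional structure "red circle" as role-filler bindings.
COLOR, SHAPE = random_hv(), random_hv()   # roles
red, circle = random_hv(), random_hv()    # fillers
scene = bundle(bind(COLOR, red), bind(SHAPE, circle))

# Unbinding queries the structure: "what is the color of the scene?"
query = bind(scene, COLOR)
print(sim(query, red))     # clearly positive: the query decodes to "red"
print(sim(query, circle))  # near zero: "circle" is not the color
```

Because binding is self-inverse and random hypervectors are nearly orthogonal, decoding a component of a composite vector reduces to one multiply and one dot product, which is the transparency the article refers to.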

When developing NVSA, the researchers focused on visual abstract reasoning problems, specifically the widely used IQ tests known as Raven's Progressive Matrices.

Raven's Progressive Matrices are tests designed to assess the level of intellectual development and abstract thinking skills. They evaluate the capacity for systematic, planned, and methodical intellectual activity, as well as general logical reasoning. Each test item presents a matrix of related panels in which one or more panels are missing, and respondents must identify the missing panel from several candidate answers. This requires advanced reasoning abilities, such as detecting abstract relationships between objects in their shape, size, color, number, or other attributes.
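As a toy illustration (not taken from the paper), consider a single RPM-style rule, an arithmetic progression applied to one attribute such as object count. The real tests layer several such rules across shape, size, color, number, and position; the function names below are hypothetical.

```python
def follows_progression(row):
    """True if the three panel values form an arithmetic progression."""
    a, b, c = row
    return (b - a) == (c - b)

def pick_answer(matrix_rows, last_row_prefix, candidates):
    """Pick the candidate that completes the last row under the same rule."""
    # Infer the common difference from the complete rows.
    diffs = {row[1] - row[0] for row in matrix_rows}
    assert len(diffs) == 1, "toy solver assumes one shared progression rule"
    d = diffs.pop()
    target = last_row_prefix[1] + d
    return next(c for c in candidates if c == target)

rows = [(1, 2, 3), (2, 3, 4)]            # object counts in the first two rows
answer = pick_answer(rows, (3, 4), [2, 4, 5, 6])
print(answer)  # 5: the last row continues the +1 progression
```

A solver must both perceive the attribute values and abduce which rule governs them, which is exactly the perception/reasoning split NVSA is built around.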

In initial evaluations, NVSA proved highly effective at solving Raven's Progressive Matrices. Compared with modern deep neural networks and neuro-symbolic approaches, NVSA set a new record average accuracy of 87.7% on the RAVEN dataset. It also achieved the highest accuracy, 88.1%, on the I-RAVEN dataset, where most deep learning approaches suffer significant accuracy drops, averaging below 50%. In addition, NVSA's reasoning runs 244 times faster than functionally equivalent symbolic logical reasoning.

To solve Raven's Matrices with a symbolic approach, probabilistic abduction is applied: the method searches for a solution in a space defined by prior knowledge about the test. That prior knowledge is represented in symbolic form by enumerating all possible rule realizations that could govern a Raven's test. Searching for a solution then requires traversing all valid combinations, computing the probabilities of the rules, and accumulating their sums. These calculations are computationally intensive and become a bottleneck, because the number of combinations is too large to test exhaustively.
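The following sketch shows why the exhaustive search is costly (the setup, names, and numbers are hypothetical, not the paper's implementation). Perception gives each panel a probability distribution over attribute values; abduction scores each candidate rule by summing, over every value assignment consistent with that rule, the product of the per-panel probabilities.

```python
from itertools import product

VALUES = range(4)  # toy attribute with four possible values

def is_consistent(rule, assignment):
    """Does a row of three panel values satisfy the rule?"""
    a, b, c = assignment
    if rule == "constant":
        return a == b == c
    if rule == "progression":
        return b == a + 1 and c == b + 1
    return False

def rule_score(rule, panel_dists):
    """Exhaustively enumerate all value assignments -- the bottleneck."""
    return sum(
        panel_dists[0][a] * panel_dists[1][b] * panel_dists[2][c]
        for a, b, c in product(VALUES, repeat=3)
        if is_consistent(rule, (a, b, c))
    )

# Perception is fairly sure the row reads 0, 1, 2.
dists = [
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.1, 0.7, 0.1],
]
print(rule_score("progression", dists))  # dominated by 0.7 * 0.7 * 0.7
print(rule_score("constant", dists))     # much smaller
```

Here the loop visits 4^3 assignments per rule; with many attributes, richer value ranges, and nine panels, the number of combinations grows far beyond what can be enumerated.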

NVSA avoids this bottleneck because it can perform such extensive probabilistic computations in just one vector operation. This allows it to solve tasks like Raven's Progressive Matrices faster and more accurately than other AI approaches based solely on deep neural networks or on symbolic reasoning. It is the first demonstration that probabilistic reasoning can be executed efficiently using distributed representations and VSA operators.
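One simplified reading of this trick can be sketched as follows (the details here are assumptions for illustration, not NVSA's actual encoding): a probability distribution over symbols is held as a probability-weighted superposition of quasi-orthogonal hypervectors, so a single dot product with the superposition of rule-consistent symbols approximates the accumulated sum of their probabilities, replacing an explicit loop over combinations.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000
# One quasi-orthogonal bipolar hypervector per attribute value.
symbols = rng.choice([-1.0, 1.0], size=(4, D))

probs = np.array([0.1, 0.6, 0.2, 0.1])  # perception's belief over the values
belief = probs @ symbols                # probability-weighted superposition

consistent = symbols[1] + symbols[2]    # values allowed by some hypothetical rule
estimate = belief @ consistent / D      # ONE dot product instead of a loop

print(estimate)                         # close to probs[1] + probs[2] = 0.8,
print(probs[1] + probs[2])              # up to small crosstalk noise
```

Because the cross terms between different random hypervectors are close to zero, the summation over consistent assignments comes out "for free" from the algebra, which is where the reported speedup over explicit symbolic enumeration originates.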

NVSA is an important step towards integrating different AI paradigms into a unified framework for solving tasks related to both perception and higher-level reasoning. The architecture has shown great promise in efficiently and swiftly solving complex logical problems. In the future, it can be further tested and applied to various other problems, potentially inspiring researchers to develop similar approaches.

The library that implements NVSA functions is available on GitHub.

You can find a complete example of solving Raven's Matrices here.