Google’s DeepMind developed an IQ test for AI models

The search for "generalisation" in AI is somewhat hindered by the lack of a reliable way to test for it, so a recent paper from Google's DeepMind team offers an interesting insight into the thinking of teams pursuing this goal. The team generated a set of tests containing patterns with abstract relationships, both between elements within a pattern and between sets of patterns. Within each set, specific elements are missing, and the researchers found that a model's ability to complete the patterns correlated strongly with its core performance. Whether this provides a general way to test models remains to be seen and is the subject of further work.
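To make the setup concrete, here is a toy sketch of a "complete the missing panel" puzzle. This is an illustration only, not DeepMind's actual procedurally generated dataset: it reduces each panel to a single number (a shape count) and uses one rule (a fixed arithmetic step per row), where the real tests use visual relations over shape attributes.

```python
# Toy sketch of a pattern-completion puzzle (illustration only; not
# DeepMind's dataset). Each panel is just a shape count, and each row
# follows the rule: the count increases by a fixed step.
import random

def make_puzzle(rng):
    """Build a 3x3 grid of panels, then blank the final panel."""
    grid = []
    for _ in range(3):
        start = rng.randint(1, 4)   # count in the row's first panel
        step = rng.randint(1, 3)    # fixed increase across the row
        grid.append([start, start + step, start + 2 * step])
    answer = grid[2][2]
    grid[2][2] = None               # the missing element to complete
    return grid, answer

def solve(grid):
    """Infer the row's constant step and complete the missing panel."""
    a, b, _ = grid[2]
    return b + (b - a)

rng = random.Random(0)
puzzle, answer = make_puzzle(rng)
assert solve(puzzle) == answer
```

A model is scored on how often it fills in the blank correctly; the paper's point is that doing so requires inferring the abstract rule rather than matching surface features.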
In a new paper, researchers at Google subsidiary DeepMind tested the ability of machine learning models to reason abstractly, like humans.