NIXSolutions: DeepMind’s AI Achieves Math Olympiad Milestone

Google DeepMind, a London-based subsidiary of Google specializing in artificial intelligence (AI) research, has unveiled AlphaProof and AlphaGeometry 2, AI models that can solve complex mathematical problems beyond the reach of current systems.

Advanced Mathematical Reasoning Abilities

Solving mathematical problems that demand advanced reasoning is currently beyond the capabilities of most AI systems. Such problems require forming and applying abstractions, hierarchical planning, setting subgoals, backtracking, and finding new solution paths, all of which pose significant challenges for AI. Both of DeepMind's new models, however, possess the mathematical reasoning abilities needed to tackle problems of this kind.

AlphaProof was developed using reinforcement learning and taught itself to prove mathematical statements in the formal programming language Lean. It couples a pre-trained language model with AlphaZero, the reinforcement learning algorithm that previously taught itself to play chess, shogi, and Go. AlphaGeometry 2, on the other hand, is an improved version of AlphaGeometry, the geometry-solving AI system introduced in January.
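
To give a sense of what "proving a statement in Lean" means, here is a toy example (a minimal sketch for readers unfamiliar with Lean, not an output of AlphaProof): a simple statement about natural numbers together with a proof that the Lean 4 compiler can check mechanically.

    -- Illustrative only: commutativity of addition on the natural
    -- numbers, proved by appealing to a lemma from Lean's library.
    -- The proofs AlphaProof produces take this same machine-checkable
    -- form, only far longer and far harder to find.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b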

While AlphaProof was trained to solve problems across a wide range of mathematical topics, AlphaGeometry 2 is optimized for problems involving the movement of objects and equations concerning angles, ratios, and distances. Because AlphaGeometry 2 was trained on significantly more synthetic data than its predecessor, it can handle much more complex geometry problems, notes NIXSolutions.

Testing and Validation

To test the capabilities of these new AI systems, Google DeepMind researchers gave them the six problems from this year's International Mathematical Olympiad (IMO) and asked them to prove their answers correct. AlphaProof solved two algebra problems and one number theory problem, one of which was the hardest in the Olympiad, while AlphaGeometry 2 solved the geometry problem. The two combinatorics problems remained unsolved.

Renowned mathematicians Tim Gowers and Joseph Myers checked the solutions submitted by the systems. They awarded each of the four correct answers the maximum of seven points, giving the systems a total of 28 points out of a possible 42 (six problems at seven points each). A human participant with the same score would have earned a silver medal, finishing just one point short of the gold threshold of 29.

This marks the first time an AI system has achieved a medal-winning result on IMO problems. “As a mathematician, I find this very impressive and a significant leap over what was previously possible,” Gowers said during a press conference.

Creating AI systems that can solve complex mathematical problems could pave the way for exciting human-AI collaborations, says Katie Collins, a researcher at the University of Cambridge. This, in turn, could help us learn more about how we humans do math. “There’s still a lot we don’t know about how humans solve complex mathematical problems,” she says. We’ll keep you updated on how these developments unfold and impact the field of mathematics.