One of the most effective tools modern computer science has offered the world is the neural network. These simplified models of the brain are highly effective at recognizing patterns in data and at helping us make decisions in complex tasks. The fact that neural networks can accomplish many tasks that seem to require sentience has led many to believe that these learning algorithms can imitate human intelligence as well, albeit on a much smaller scale.
For mathematics, that belief long looked unfounded: it had seemed that these learning algorithms lack the basic reasoning ability required for tasks such as symbolic mathematics. That was the belief. Now, François Charton and Guillaume Lample at Facebook AI Research have trained a neural network to perform mathematical operations ranging from addition to integration. The problems were randomly generated by another algorithm and then fed to the neural network to be solved.
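To make the setup concrete, here is a toy sketch of what such a problem generator could look like: random expression trees built from a small operator set. The operator vocabulary, depth limit, and function names below are illustrative assumptions, not the researchers' actual code.

```python
import random

OPS = ["+", "-", "*"]          # toy operator set (assumption)
LEAVES = ["x", "1", "2", "3"]  # toy variables and constants (assumption)

def random_expression(depth, rng):
    """Recursively build a random arithmetic expression as a string."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(LEAVES)
    op = rng.choice(OPS)
    left = random_expression(depth - 1, rng)
    right = random_expression(depth - 1, rng)
    return f"({left} {op} {right})"

# Generate a small batch of random problems with a fixed seed.
rng = random.Random(0)
problems = [random_expression(3, rng) for _ in range(5)]
for p in problems:
    print(p)
```

In the actual research, generators of this flavor produced expressions paired with their known solutions, giving the network an essentially unlimited supply of training examples.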
The most exciting part of this breakthrough is the unconventional methodology used to achieve it. The researchers borrowed the basic machinery of natural language processing (NLP), essentially treating every problem as a sentence. The structure is simple: the network treats the variables as nouns and the mathematical operators as verbs. Once it has made sense of the expression, it 'translates' it into a solution.
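The "expression as sentence" idea can be sketched in a few lines: a symbolic expression is represented as a tree, then flattened into a sequence of tokens that a sequence-to-sequence model could read, much like the words of a sentence. The tuple-based tree format and token names here are illustrative assumptions, not the paper's exact encoding.

```python
def to_prefix(node):
    """Flatten an expression tree into a list of tokens (prefix notation)."""
    if isinstance(node, tuple):   # internal node: (operator, operand, ...)
        op, *args = node
        tokens = [op]
        for arg in args:
            tokens.extend(to_prefix(arg))
        return tokens
    return [str(node)]            # leaf: a variable or constant

# The expression 3*x + 2 as a tree: add(mul(3, x), 2)
expr = ("add", ("mul", 3, "x"), 2)
print(" ".join(to_prefix(expr)))  # → add mul 3 x 2
```

Because prefix notation needs no parentheses, the resulting token stream is unambiguous, which makes it a natural 'sentence' for a translation-style model to consume and emit.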
The neural network displayed an accuracy of 95%, much higher than the other algorithms it was compared against. A set of 500 problems was given to the neural network and to the other algorithms alike. The network did not perform as well on certain types of differential equations, where its accuracy dropped to as low as 40% on the harder ones.
AI has come a long way since the days of Deep Blue. This transformation is owed to the relentless effort of researchers, such as those at Facebook, who have been steadily chipping away at the limitations of the current generation of tools at our disposal.