Highlights

  • The new AI solution is built around a “hide and seek” game model.
  • Researchers at Microsoft explained that the program combines two models, both trained “independently” and “without labeled data.”
  • Initial training and testing were done on Python code; once trained, the models were evaluated on real-life bugs.

A research team at Microsoft has developed an Artificial Intelligence (AI) solution called BugLab, which they believe will allow software developers to debug their programs quickly and accurately.

The new AI solution works along the lines of a “hide and seek” game. Its training setup is similar to the way Generative Adversarial Networks (GANs) are trained.

Researchers Miltos Allamanis (Principal Researcher) and Marc Brockschmidt (Senior Principal Research Manager) described the new solution in a detailed blog post. They explained how the two networks were created and pitted against one another, much like a game of hide and seek.

Competition

The two networks work together to introduce and locate bugs, large and small, in existing code. The first is trained to insert bugs into already-written code, while the second is trained to find them. As the two models compete, each gains experience; eventually the detector reaches a point where it can identify flaws hidden in real code, as sketched below.
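
To make the “hide and seek” structure more concrete, here is a minimal Python sketch of one adversarial round. This is not Microsoft's BugLab implementation; the BugSelector and BugDetector classes and their heuristics are hypothetical stand-ins that only illustrate how a bug-inserting model and a bug-finding model could be played against each other on unlabeled code.

```python
# Illustrative sketch only: NOT the BugLab implementation.
# The classes and heuristics below are hypothetical stand-ins.

class BugSelector:
    """The 'hider': applies a small rewrite that introduces a bug."""
    def rewrite(self, code: str) -> str:
        # Hypothetical rewrite: weaken a comparison operator if one is present.
        return code.replace("<=", "<", 1) if "<=" in code else code

class BugDetector:
    """The 'seeker': guesses whether the code was rewritten."""
    def predict_buggy(self, code: str) -> bool:
        # Hypothetical heuristic standing in for a learned model.
        return "<" in code and "<=" not in code

def training_step(selector: BugSelector, detector: BugDetector, snippet: str) -> bool:
    """One round of hide and seek on a single unlabeled snippet."""
    buggy = selector.rewrite(snippet)       # selector hides a bug
    found = detector.predict_buggy(buggy)   # detector tries to find it
    # In the real system both models would be updated from this signal:
    # the detector is rewarded for catching the bug, the selector for
    # producing bugs the detector misses.
    return found

if __name__ == "__main__":
    snippet = "if index <= len(items):\n    process(items[index])"
    caught = training_step(BugSelector(), BugDetector(), snippet)
    print("Detector caught the planted bug:", caught)
```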

The researchers explained that the program combines the two models, both trained “independently” and “without labeled data.” They noted that while the approach could in principle be taught to identify arbitrarily complex bugs, doing so remains beyond the reach of modern AI methods. Instead, they concentrated on commonly appearing bugs, such as incorrect comparisons, variable misuses, incorrect Boolean operators, and similar issues.
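
To make those bug categories concrete, the hypothetical Python snippet below shows one example of each; the function names and logic are illustrative and are not taken from the BugLab dataset.

```python
# Illustrative examples of the bug categories named above (hypothetical code).

def index_in_range(items, index):
    # Incorrect comparison: should be `index < len(items)`;
    # `<=` allows an out-of-range access.
    return index <= len(items)

def total_price(price, tax, discount):
    # Variable misuse: the second `tax` should have been `discount`.
    return price + tax - tax

def can_ship(in_stock, address_valid):
    # Incorrect Boolean operator: `or` should be `and`, otherwise an
    # order ships when only one condition holds.
    return in_stock or address_valid
```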

Detecting false alarms along the way

Initially, training was done on Python code; once the models were trained, they were tested on real-life bugs.

“To measure performance, we manually annotate a small dataset of bugs from packages in the Python Package Index with such bugs and show that models trained with our “hide-and-seek” method are up to 30% better compared to other alternatives, e.g., detectors trained with randomly inserted bugs,” the blog added.

The researchers found the results promising, as roughly a quarter (26%) of the bugs could be identified and fixed automatically. Furthermore, 19 of the flaws discovered were previously unknown. However, there were also false positives, which means further work is needed before such a procedure can be used in practice.