Highlights:

  • Uncertainty, as measured by Bean Machine, can highlight a model’s limits and potential failure points.
  • Bean Machine lets its Bean Machine Graph (BMG) backend handle the work of probabilistic modelling, inferring the possible distributions for predictions from the model’s declaration.

Meta, formerly Facebook, announced the release of Bean Machine, a probabilistic programming system that can apparently make it easier to represent and learn about uncertainties in AI models. Available in early beta, Bean Machine can be used to discover unobserved properties of a model via automatic, “uncertainty-aware” learning algorithms.

“Bean Machine is inspired by a physical device for visualizing probability distributions, a pre-computing example of a probabilistic system,” the Meta researchers behind Bean Machine explained in a blog post. They added, “We on the Bean Machine development team believe that the usability of a system forms the bedrock for its success, and we’ve taken care to center Bean Machine’s design around a declarative philosophy within the PyTorch ecosystem.”

Modelling uncertainty

Deep learning models are often criticized for being overconfident, even when their predictions are wrong. Epistemic uncertainty captures what a model doesn’t know because its training data was insufficient, while aleatoric uncertainty arises from the natural randomness of observations. Epistemic uncertainty decreases as training samples accumulate, but no amount of additional data can reduce aleatoric uncertainty.
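The distinction can be made concrete with a short sketch in plain Python. It uses a standard conjugate normal-normal update for estimating a mean with known observation noise; the noise and prior values are illustrative assumptions, not anything from Bean Machine:

```python
# Illustrative (made-up) values: irreducible observation noise and a
# prior belief about the quantity being estimated.
NOISE_SD = 2.0   # aleatoric: randomness inherent in each observation
PRIOR_SD = 5.0   # initial epistemic uncertainty before seeing any data

def posterior_sd(n):
    """Epistemic uncertainty about the mean after n observations
    (standard conjugate normal-normal update with known noise)."""
    precision = 1 / PRIOR_SD**2 + n / NOISE_SD**2
    return (1 / precision) ** 0.5

def predictive_sd(n):
    """Total uncertainty of the next observation: epistemic + aleatoric."""
    return (posterior_sd(n) ** 2 + NOISE_SD**2) ** 0.5

for n in (1, 10, 1000):
    print(f"n={n:5d}  epistemic={posterior_sd(n):.3f}  "
          f"predictive={predictive_sd(n):.3f}")
# Epistemic uncertainty shrinks toward zero as n grows, but predictive
# uncertainty never falls below NOISE_SD, the aleatoric floor.
```

Running the loop shows the epistemic term collapsing with more data while the total predictive uncertainty levels off at the noise floor, which is exactly the asymmetry the two terms describe.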

Probabilistic modelling, the technique Bean Machine adopts, can measure both kinds of uncertainty by accounting for the impact of random events when predicting future outcomes. Compared with other machine learning approaches, probabilistic modelling provides benefits such as expressivity, uncertainty estimation, and interpretability.

Analysts who leverage it can understand not just an AI system’s prediction but also the relative likelihood of other possible predictions. Probabilistic modelling also makes it easier to match the structure of a model to the structure of a problem, so users can interpret why a particular prediction was made, which helps during model development.

Bean Machine, which is built on top of Meta’s PyTorch machine learning framework and Bean Machine Graph (BMG), a custom C++ backend, allows data scientists to write the math for a model directly in Python. BMG then handles the work of probabilistic modelling, inferring the possible distributions for predictions from the model’s declaration.
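One way to picture that division of labor is a toy version of it in plain Python. This is not Bean Machine’s actual API, only a sketch of the declarative idea: the user declares a prior and a likelihood, and a generic routine infers the posterior distribution, here by brute-force grid approximation, which a backend like BMG would replace with efficient samplers:

```python
# Hand-rolled sketch of the declarative workflow (not Bean Machine's API):
# the modeller declares the pieces, the inference routine does the work.

def prior(theta):
    """Uniform prior over a coin's heads probability (an assumption)."""
    return 1.0

def likelihood(theta, heads, flips):
    """Unnormalized binomial likelihood of `heads` in `flips` tosses."""
    return theta**heads * (1 - theta) ** (flips - heads)

def infer(heads, flips, grid_size=1001):
    """Grid approximation of the posterior: the 'work of probabilistic
    modelling' that a dedicated backend automates at scale."""
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    weights = [prior(t) * likelihood(t, heads, flips) for t in grid]
    total = sum(weights)
    return grid, [w / total for w in weights]

grid, post = infer(heads=7, flips=10)
mean = sum(t * p for t, p in zip(grid, post))
print(f"posterior mean = {mean:.3f}")
# With a uniform prior and 7 heads in 10 flips, the exact posterior
# mean is 8/12, roughly 0.667 (Laplace's rule of succession).
```

The point of the sketch is the separation: the model declaration (`prior`, `likelihood`) reads like the domain, while the inference machinery is generic and reusable.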

Uncertainty, as measured by Bean Machine, can highlight a model’s limits and potential failure points. For instance, uncertainty conveys the margin of error for a house price prediction model, or the confidence of a model designed to predict whether a new app feature will outperform the old one.
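For the house price case, such a margin of error can be read directly off posterior-predictive samples. The snippet below is a generic plain-Python illustration with invented numbers, not output from Bean Machine:

```python
import random
import statistics

random.seed(42)

# Hypothetical posterior-predictive samples for a house price model:
# each sample is one plausible price given everything the model learned.
samples = sorted(random.gauss(350_000, 40_000) for _ in range(10_000))

def credible_interval(sorted_samples, level=0.95):
    """Central interval covering `level` of the sampled distribution,
    read off as a direct 'margin of error' for the prediction."""
    lo_idx = int((1 - level) / 2 * len(sorted_samples))
    hi_idx = int((1 + level) / 2 * len(sorted_samples)) - 1
    return sorted_samples[lo_idx], sorted_samples[hi_idx]

lo, hi = credible_interval(samples)
print(f"point estimate: {statistics.median(samples):,.0f}")
print(f"95% interval:   {lo:,.0f} to {hi:,.0f}")
```

Reporting the interval alongside the point estimate is what turns a single number into an uncertainty-aware prediction.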

Underscoring the importance of the concept, a recent Harvard study found that showing uncertainty metrics to both people with a machine learning background and non-experts had an equalizing effect on how much they relied on AI predictions. While building trust in AI may never be as simple as presenting metrics, awareness of a model’s drawbacks could go a long way toward protecting people from the limitations of machine learning.

Bean Machine quantifies predictions “with reliable measures of uncertainty in the form of probability distributions … It’s easy to encode a rich model directly in source code, [and because] the model matches the domain, one can query intermediate learned properties within the model,” Meta continued. “This, we hope, makes using Bean Machine simple and intuitive — whether that’s authoring a model, or advanced tinkering with its learning strategies.”