Highlights

  • The newly introduced tool helps technical teams prioritize, adjust, and address issues in the data or the algorithm itself.
  • Arize Bias Tracing currently works with classification models and will expand to other use cases over time.

Arize, a maker of Artificial Intelligence (AI) observability tools, has launched Bias Tracing, a new tool that identifies the root cause of bias in Machine Learning (ML) pipelines. This will enable teams to prioritize, adjust, and address issues in the data or the algorithm itself.

For a long time, firms have relied on observability and distributed tracing to improve application performance, troubleshoot bugs, and identify security vulnerabilities. Arize is part of a small cadre of companies trying to adapt these techniques to AI monitoring.

Observability monitors complex infrastructure at scale by analyzing log data. Tracing reconstructs a digital twin of the application logic and data flow in complex applications. Bias Tracing applies similar methodology to map AI processing flows spanning data sources, feature engineering, training, and deployment. When bias is found, it helps data managers, scientists, and engineers identify and fix the underlying cause of the problem.

“This type of analysis is powerful in areas like healthcare or finance, given the real-world implications in terms of health outcomes or lending decisions,” said Aparna Dhinakaran, Arize Co-founder and Chief Product Officer.

The root cause of AI bias

Arize’s AI observability platform provides tools for tracking AI performance and analyzing model drift. The new Bias Tracing capabilities can automatically detect which model inputs and slices contribute most to bias encountered in production and identify its root cause.

Dhinakaran said that the Bias Tracing launch builds on Judea Pearl’s groundbreaking work on causal AI, which is at the forefront of explainable AI and AI fairness. Pearl’s work on causal AI focuses on teaching machines to learn cause and effect rather than merely statistical correlations. For example, instead of just correlating a protected attribute with outcomes, a machine must also be able to reason about whether that protected attribute is the cause of an adverse outcome.

Drilling down to the root cause

One example of a fairness metric Arize uses is recall parity. Recall parity compares the model’s sensitivity, its ability to correctly predict true positives, for a sensitive group against a base group.

Consider a regional healthcare provider that wants to ensure its models assess healthcare needs equally well for Latinx patients (the ‘sensitive’ group) and Caucasian patients (the base group).

If recall parity falls outside the 0.8 to 1.25 thresholds (known as the four-fifths rule), it may signify that Latinx patients are not receiving the same level of needed follow-up care as Caucasian patients, resulting in different levels of future hospitalization and health outcomes.
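To make the metric concrete, here is a minimal sketch (not Arize’s implementation) of computing recall parity between a sensitive group and a base group and applying the four-fifths rule; the labels, predictions, and group values are hypothetical:

```python
import numpy as np

def recall(y_true, y_pred):
    """Recall (sensitivity): share of actual positives the model correctly predicts."""
    true_pos = np.sum((y_true == 1) & (y_pred == 1))
    actual_pos = np.sum(y_true == 1)
    return true_pos / actual_pos if actual_pos else float("nan")

def recall_parity(y_true, y_pred, group, sensitive, base):
    """Ratio of recall for the sensitive group to recall for the base group."""
    sens, bas = group == sensitive, group == base
    return recall(y_true[sens], y_pred[sens]) / recall(y_true[bas], y_pred[bas])

# Hypothetical production labels, predictions, and group membership.
y_true = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])
group  = np.array(["latinx", "caucasian", "latinx", "caucasian", "latinx",
                   "caucasian", "latinx", "caucasian", "latinx", "caucasian"])

parity = recall_parity(y_true, y_pred, group, sensitive="latinx", base="caucasian")
# Four-fifths rule: flag the model if parity falls outside [0.8, 1.25].
flagged = not (0.8 <= parity <= 1.25)
print(f"recall parity = {parity:.2f}, flagged = {flagged}")
```

In this toy data the sensitive group’s recall is half the base group’s, so the parity of 0.5 falls below 0.8 and the model is flagged for review.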

“Distributing healthcare in a representative way is especially important when an algorithm determines an assistive treatment intervention that is only available to a small fraction of patients,” Dhinakaran said.

Arize helps an organization find that there is an overall problem, then drill a step deeper to see that the differential impact is particularly severe for specific cohorts, for example Latinx women, Latinx patients over the age of 50, or Latinx patients in specific regions. By highlighting the cohorts where model unfairness is potentially most severe, ML teams know how to tackle the issue by changing or retraining the model accordingly.
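A rough sketch of that drill-down, again illustrative rather than Arize’s actual product code, is to slice the sensitive group into intersectional cohorts and rank them by recall parity against the base group; the column names and data below are made up for the example:

```python
import pandas as pd

# Hypothetical production log: true labels, predictions, and patient attributes.
df = pd.DataFrame({
    "y_true":    [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1],
    "y_pred":    [1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1],
    "ethnicity": ["latinx"] * 6 + ["caucasian"] * 6,
    "sex":       ["F", "F", "M", "M", "F", "M", "F", "M", "F", "M", "F", "M"],
    "age_band":  ["50+", "<50", "50+", "<50", "50+", "<50"] * 2,
})

def group_recall(g):
    """Recall within one slice of the data."""
    positives = g[g["y_true"] == 1]
    return (positives["y_pred"] == 1).mean() if len(positives) else float("nan")

base_recall = group_recall(df[df["ethnicity"] == "caucasian"])

# Recall parity for each intersectional cohort of the sensitive group,
# sorted so the most impacted cohorts surface first.
cohorts = (
    df[df["ethnicity"] == "latinx"]
    .groupby(["sex", "age_band"])
    .apply(group_recall)
    .div(base_recall)
    .sort_values()
)
print(cohorts)
```

The cohorts with the lowest parity values are the ones a team would investigate or retrain on first.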

Arize Bias Tracing currently works with classification models and aims to expand to other use cases over time.

WhyLabs, Censius, and DataRobot are other companies working on AI observability, while firms such as Fiddler and SAS are building tools to improve AI explainability.