
Managing Discrimination Risk of Machine Learning and AI Models

October 8, 2021

Businesses frequently rely upon predictive models to support or make decisions about investments, hiring and retention of employees, financial reporting, customer service (including granting credit, pricing, and marketing), customer relationship management, capital adequacy, and various other purposes. As a general matter, the use of any model entails “model risk,” which can be defined as “the potential for adverse consequences from decisions based on incorrect or misused model outputs and reports.” The potential sources of such risk include, but are not limited to, design flaws; errors in assumptions, mathematics, or programming; data errors; errors in model implementation; or the misapplication of models to purposes for which they were not designed. Model risk also encompasses the risk that a model’s use will result in the violation of laws or regulations (such as prohibitions on discrimination), or will cause costly reputational harm. Increased model complexity, uncertainty about data inputs and assumptions, a greater extent of model use, and a greater potential impact of the model all increase model risk.

The risk of unknowing and unintentional discrimination is an increasing concern as complex machine learning and so-called artificial or automated intelligence (AI) models are applied to ever-expanding sets of available data to make decisions about consumers and actual or potential employees. Because allegations of discrimination can be damaging to a business enterprise—in terms of both direct financial costs and reputational damage—businesses relying on models with potential discrimination risks need to take care to identify, investigate, and manage those risks. In this article published in the American Bar Association Journal of Labor & Employment Law, David Skanderson discusses key considerations in managing the discrimination risk posed by predictive models, based on the author’s experience as a quantitative economist in financial services, and explains how concepts of model risk management that have been developed in the financial sector may be applied to managing discrimination risk (and other business risks) in other sectors. The article starts with a non-technical overview of machine learning models, followed by a discussion of considerations in evaluating models for fairness, before turning to the subject of model risk management.
