Manage AI Bias Instead of Trying to Eliminate It
Article


To remediate the bias built into AI data, companies can take a three-step approach.


Editorial Rating

8

Qualities

  • Scientific
  • Applicable
  • Insider's Take

Recommendation

Increasingly, businesses are turning to AI systems to automate their processes, their data analysis and their interactions with employees and customers. As a result, fairness has emerged as a knotty problem. AI systems inherently reflect and perpetuate the pervasive bias in their data sets – a flaw that has no mathematical solution. In a succinct, implementable guide, AI risk governance expert Sian Townson recommends a three-step approach to managing AI systems’ intractable biases.

Summary

AI systems inevitably perpetuate bias.

Artificial intelligence (AI) systems generalize from existing data, and because bias pervades historical data, AI will echo and perpetuate that bias. AI bias arises in part because, typically, less input data exists for minority groups. As a result, AI systems produce less accurate results for members of minority groups – a problem that affects algorithms for medical treatment, credit decisions, fraud detection, marketing and text interpretation.
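The link between sparse data and lower accuracy can be seen in a toy experiment. The sketch below is not from the article; it assumes synthetic data and scikit-learn, and simply trains one model on a data set that under-represents a "minority" group whose underlying pattern differs slightly, then measures accuracy per group.

```python
# Illustrative sketch only (synthetic data, scikit-learn assumed): a single model
# trained on a data set dominated by one group tends to be less accurate for the
# under-represented group whose underlying pattern differs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two synthetic features; the group's true decision threshold moves with `shift`.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > shift).astype(int)
    return X, y

# The majority group dominates the training data; the minority group is sparse.
X_maj, y_maj = make_group(5000, shift=0.0)
X_min, y_min = make_group(250, shift=1.5)
X_train = np.vstack([X_maj, X_min])
y_train = np.concatenate([y_maj, y_min])

model = LogisticRegression().fit(X_train, y_train)

# Fresh test samples per group: accuracy is typically noticeably lower for the
# minority group, because the model mostly learned the majority pattern.
for name, shift in [("majority", 0.0), ("minority", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    print(name, round(accuracy_score(y_test, model.predict(X_test)), 3))
```

Because only about 5% of the training examples come from the minority group, the fitted decision boundary tracks the majority pattern, and the minority group pays for it in accuracy.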

No method exists to prevent AI from perpetuating bias that exists in historical data. Mathematically, it can’t be done. AI excels at discovering patterns, and bias permeates data sets too deeply...
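The mathematical impossibility the author alludes to echoes well-known results in algorithmic fairness: when two groups have different base rates for an outcome, a classifier cannot equalize error rates and predictive value across both at once. The short sketch below is a generic illustration of that result, not a method from the article.

```python
# Illustration (not from the article) of a standard fairness impossibility result:
# when two groups have different base rates, equalizing error rates across groups
# forces their precision (positive predictive value) to differ.
def precision(tpr, fpr, base_rate):
    # Precision implied by a true-positive rate, false-positive rate and base rate.
    true_positives = tpr * base_rate
    false_positives = fpr * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

TPR, FPR = 0.80, 0.10  # identical error rates enforced for both groups

for group, base_rate in [("group A", 0.30), ("group B", 0.10)]:
    print(group, round(precision(TPR, FPR, base_rate), 3))

# Prints roughly 0.774 for group A and 0.471 for group B: with unequal base rates,
# at least one common fairness criterion has to give way.
```

With identical true- and false-positive rates, the implied precision still diverges, so some notion of fairness must be traded off whenever base rates differ – which is why the author argues for managing bias rather than trying to eliminate it.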

About the Author

Sian Townson is an expert in AI risk and governance frameworks and holds a doctorate in mathematical modeling from Oxford University. She is a partner at the global consultancy Oliver Wyman.