Machine learning is becoming ubiquitous. This branch of artificial intelligence teaches a computer program what correct output looks like by example. That power raises questions about fair outcomes for the people machine learning (ML) affects. Software engineer and attorney Aileen Nielsen examines different kinds of fairness and how training data and algorithms can promote them. For those developing machine learning models, she provides useful examples in Python.
Fairness is about who gets what, and how that’s decided.
Every new technology creates victims along with progress. Information technology improves life, but it can prey on users’ time and attention or become a tool for nefarious purposes. Often, unfairness takes the form of violations of community norms, as when large-scale use of a technology – from drones to bots targeting dating apps – becomes a nuisance.
Software developers should pay attention to the differences between equity and equality, and between security and privacy. Ignoring fairness exposes companies to legal trouble and consumer backlash. Laws in the United States, Europe and China set standards for some of these concerns. However, a fairness mandate need not dampen innovation; it can stimulate new ideas in mathematics, computer science and law.
People tend to prefer equity over equality. Equity implies that people should not receive different treatment because they belong to a certain group – direct discrimination. Nor should a practice disproportionately burden or benefit a specific group – indirect discrimination.
But equity ...
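One common way to test for indirect discrimination is to compare a model’s positive-outcome rates across groups – so-called demographic parity. The sketch below is a minimal illustration using hypothetical loan-approval data, not an example from Nielsen’s book; the 0.8 cutoff reflects the “four-fifths rule” that US employment regulators use to flag potential disparate impact.

```python
import pandas as pd

# Hypothetical loan decisions (1 = approved, 0 = denied) for two groups.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group.
rates = df.groupby("group")["approved"].mean()

# Disparate-impact ratio: minimum group rate divided by maximum.
# The "four-fifths rule" treats ratios below 0.8 as a red flag.
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: outcome rates differ enough to suggest indirect discrimination.")
```

Here group A is approved 75% of the time and group B only 25%, giving a ratio of 0.33 – well below the 0.8 threshold, so these decisions would warrant closer scrutiny.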