You may be tempted to dismiss instances of machine bias as “glitches.” However, they’re structural and reflective of real-world racism, sexism and ableism, says data journalism professor Meredith Broussard. Technology should work for everyone – nobody should feel barred from using technology based on their skin color, gender, age or ability. Broussard presents several case studies of machine bias, detailing the harm it’s caused in areas including policing and health care. She urges Big Tech to embrace accountability and work toward the public interest.
Machine bias is a structural problem requiring complex solutions.
People assume computers can solve social problems, but this isn’t always true. Machines can only calculate mathematical fairness, which is different from social fairness. A programmer may devise a mathematically sound solution, but that doesn’t mean the resulting algorithm is free of bias or that its decisions are neutral. Programmers are humans who bring their biases – such as those rooted in racism, privilege, self-delusion and greed – to work with them. The belief that technology will solve social problems indicates “technochauvinism,” as it ignores the fact that machine bias exists and that equality often differs from justice or equity.
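The gap between mathematical and social fairness can be made concrete with a toy sketch. Here, a model is "fair" by one metric – overall accuracy – yet performs far worse for one group than another. All data and group labels below are invented purely for illustration:

```python
# Hypothetical sketch: a model can look mathematically fair on one
# metric (overall accuracy) while failing another (equal accuracy
# across groups). Every row below is invented for illustration.

# Each tuple: (group, true_label, predicted_label)
predictions = [
    ("A", 1, 1), ("A", 0, 0), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 1, 0),
]

def accuracy(rows):
    """Fraction of rows where the prediction matches the true label."""
    return sum(true == pred for _, true, pred in rows) / len(rows)

overall = accuracy(predictions)                              # 0.75
group_a = accuracy([r for r in predictions if r[0] == "A"])  # 1.0
group_b = accuracy([r for r in predictions if r[0] == "B"])  # 0.5

# 75% accuracy sounds acceptable in aggregate, but group B bears
# all of the errors – the aggregate number hides the disparity.
print(overall, group_a, group_b)
```

Disaggregating a single headline metric by group is one simple way such hidden disparities surface.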
People rarely build biased technology intentionally. Most engineers probably assume – incorrectly – that they’re building a “neutral” technology. As an example of machine bias, consider the video of a racist soap dispenser that went viral in 2017: A darker-skinned man found that the dispenser didn’t recognize his hands as human hands, because it had only been calibrated to recognize lighter-skinned hands and thus...
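The dispenser failure can be sketched as a calibration problem. Automatic dispensers typically fire when a sensor detects enough reflected light; if the firing threshold is tuned only on lighter skin, which reflects more light, darker skin can fall below the cutoff. The threshold and intensity values below are invented for illustration:

```python
# Hypothetical sketch of how a fixed sensor threshold encodes bias.
# The cutoff is imagined as having been calibrated only on
# light-skinned test hands; all numbers are invented.

CALIBRATED_THRESHOLD = 0.45  # tuned on lighter-skinned hands only

def dispenses_soap(reflected_intensity: float) -> bool:
    """Return True if the sensor reading clears the firing cutoff."""
    return reflected_intensity > CALIBRATED_THRESHOLD

# Lighter skin reflects more light, so it clears the cutoff;
# darker skin reflects less and is never "seen" by the device.
print(dispenses_soap(0.62))  # lighter-skinned hand -> True
print(dispenses_soap(0.31))  # darker-skinned hand  -> False
```

No malice is required anywhere in this code – the bias lives entirely in which hands were present when the threshold was chosen.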
Meredith Broussard is a data journalist and the author of multiple books, including Artificial Unintelligence: How Computers Misunderstand the World. She’s also an associate professor at New York University’s Arthur L. Carter Journalism Institute and a research director at the NYU Alliance for Public Interest Technology.