Biased AI Is Another Sign We Need to Solve the Cybersecurity Diversity Problem
Article


Editorial Rating

7

Qualities

  • Analytical
  • Applicable
  • Concrete Examples

Recommendation

Organizations use artificial intelligence (AI) to enhance their cybersecurity. AI can recognize patterns, detect unusual behavior, and help flag potential risks; however, AI carries risks of its own. Jasmine Henry, a journalist specializing in analytics and information security, explains that AI often reflects the human prejudices of the teams that produce it. Henry examines the causes of AI bias and offers ideas for building more diverse cybersecurity teams.

Take-Aways

  • Organizations increasingly use artificial intelligence (AI) to augment their cybersecurity, but AI applications inherit the human prejudices of the people who develop them.
  • Building a more diverse workforce can help cybersecurity teams recognize AI bias and create better security AI.

About the Author

Jasmine Henry is a journalist specializing in analytics and information security. Her work has appeared in Forbes, Time, and dozens of other publications, and she frequently writes about emerging technology trends.