As AI technology quickly becomes essential to business, it is vital to handle it in a way that puts people first and makes responsible use of AI a cornerstone of innovation. In this Microsoft AI Business School podcast, David Carmona, general manager for AI and innovation, discusses with other experts how to put responsible AI into action.
Considering the pace with which AI is advancing, it’s critical to retain society’s trust.
Natasha Crampton, Microsoft’s chief responsible AI officer, thinks companies must treat AI like security and privacy, as a “core element of trust.”
Sarah Bird, Microsoft’s former leader of responsible AI for Azure Machine Learning, believes that internal communications should begin with discussions of ethics, technology and society. Crampton adds that open communication is central, and that organizations must be humble enough to admit they don’t always have the necessary answers.
Responsible AI begins with organizations defining their values through a set of principles.
Microsoft’s 2018 book The Future Computed established responsible AI principles, delineating the company’s approach to its challenges. Carmona thinks every company must “take a stand” in how it will address these challenges. Microsoft formed a committee known as Aether, which stands for AI Ethics and Effects on Engineering and Research. Aether serves as a think tank that promotes discussion between people who have a...
Host David Carmona is general manager for AI and innovation at Microsoft. In this Microsoft AI Business School podcast, he interviews guests Sarah Bird, former leader of responsible AI for Azure Machine Learning; Natasha Crampton, Microsoft’s chief responsible AI officer; and Nick McQuire, who leads CCS Insight’s enterprise and artificial intelligence research.