Ethical Guidelines for AI, Machine Learning Development

Microsoft's Diana Kelley Explains How to Create Responsible New Technologies
Enterprises need to consider ethical guidelines when creating new types of artificial intelligence and machine learning, says Diana Kelley of Microsoft, who explains how companies can create responsible new technologies.
"We want to make sure we create artificial intelligence and machine learning tools that we can trust, having these ethical components that have been addressed and that would include things like transparency and security and privacy and ensuring that they have been created resiliently to prevent against misuse," Kelley says.
In a video interview at Information Security Media Group's recent Fraud and Breach Summit in Seattle, Kelley discusses:
- How ethics needs to play a role in the development of AI and machine learning;
- How to remove the bias that is sometimes built into these algorithms;
- The roles these technologies play in helping security teams keep pace with their workloads.
Kelley, cybersecurity field CTO for Microsoft, is a cybersecurity architect, practitioner, executive adviser and author. At Microsoft, she leverages her more than 25 years of cyber risk and security experience to provide advice and guidance to CSOs, CIOs and CISOs at some of the world's largest companies, and she is a contributor to the Microsoft Security Intelligence Report. Previously, she was the global executive security adviser at IBM Security. Kelley is a faculty member with IANS Research, an industry mentor at the CyberSecurity Factory and a guest lecturer in Boston College's Master of Science in Cybersecurity program.