Robust Governance, Standards Needed for AI Adoption at Scale

IAPP's Ashley Casovan on Training and Certification Methods for AI Governance
Last year during his testimony to the U.S. Congress, Sam Altman, CEO of OpenAI, advised governments to regulate artificial intelligence. "Regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful AI models," he said.
The proliferation and adoption of generative AI models have introduced significant data privacy and protection concerns, including the unauthorized synthesis of realistic yet fake data, potential privacy infringements through the inadvertent disclosure of sensitive information, and the amplification of biases present in training data. The creation of deepfake content raises the stakes further by threatening reputations and enabling the spread of misinformation.
The AI Governance Center - part of the International Association of Privacy Professionals, or IAPP - is dedicated to raising awareness about global AI governance and regulations. The center comprises 45 organizations and is actively engaged in developing training and certification programs aimed at addressing AI-related challenges.
"The EU AI Act and other similar acts should focus on a risk-based approach to address the emerging risks from AI systems. There is a need to establish an AI governance officer," said Ashley Casovan, board member at the Responsible AI Institute and managing director of the IAPP AI Governance Center.
In this video interview with Information Security Media Group, Casovan also discussed:
- The changing risk landscape of AI systems;
- The need to continuously develop governance models;
- How to train and reskill professionals to better govern AI.
Casovan serves on the board of the Responsible AI Institute. She has more than 10 years of experience as a public servant and has been actively involved in Canada's Open Data and Open Government Community as a community champion and implementer.