The Role of Technology in AI Governance
Key Takeaways
The risks of AI
- AI adoption has increased dramatically including a sharp rise in the use of generative AI
- It’s most commonly used in marketing and sales, content support, product and service development, and IT
- There is a herd mentality among companies adopting AI
- 63% of companies say inaccuracy is a challenge when using AI, as are intellectual property infringement and cybersecurity
- Boards must weigh up potential AI use cases and then prioritise
The new regulations
- EU AI Act: Recent thinking is that the EU is leading the way on AI regulation, much as it did with GDPR
- Initially, some countries/regions aimed to regulate AI based on the computing power the AI required to function
- The four categories of risk:
- Unacceptable risk: social scoring, facial recognition, dark pattern AI, manipulation
- High risk: transportation systems, safety, employment, education access, border control, justice systems
- Limited risk: AI systems with specific transparency requirements such as chatbots, emotion recognition systems
- Minimal risk: AI enabled video games, spam filters
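The tiered structure above lends itself to a first-pass triage step before any formal legal assessment. Below is a minimal sketch; the keyword lists are illustrative stand-ins, not the Act's legal definitions, and real classification requires legal review:

```python
# Illustrative triage of AI use cases into the EU AI Act's four risk tiers.
# Keyword lists are hypothetical examples, not the Act's legal definitions.

RISK_TIERS = {
    "unacceptable": {"social scoring", "dark patterns", "manipulation"},
    "high": {"employment screening", "border control", "education access"},
    "limited": {"chatbot", "emotion recognition"},
    "minimal": {"spam filter", "video game"},
}

def triage(use_case: str) -> str:
    """Return the first risk tier whose example keywords match the use case."""
    text = use_case.lower()
    for tier, keywords in RISK_TIERS.items():
        if any(keyword in text for keyword in keywords):
            return tier
    return "unclassified - needs legal review"

print(triage("Customer-support chatbot"))                   # limited
print(triage("CV-ranking tool for employment screening"))   # high
```

A lookup like this only flags candidates for review; the point is to make the board's prioritisation exercise systematic rather than ad hoc.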
The key principles
- Transparency: organisations must implement clear AI decision-making models and ensure auditability for regulators and stakeholders
- In reality, there is a huge amount of work involved in this
- Accountability: Companies should have clear governance structures that monitor AI impact and ensure responsibility for errors or harm
- Privacy: Strong data governance protocols and compliance with regulations (e.g. GDPR)
- Fairness: Regular audits to detect and prevent biases in AI models
- Safety: Use reliable, well-tested systems
- Sustainability
- Human dignity
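The fairness audits mentioned above often start with a simple metric such as demographic parity: the gap in positive-outcome rates between groups. A minimal sketch, using made-up loan-approval data and an illustrative 0.1 threshold (neither comes from the source):

```python
# Minimal fairness-audit sketch: demographic parity difference, i.e. the
# gap in positive-outcome rates between two groups. Data and the 0.1
# threshold are illustrative assumptions only.

def positive_rate(outcomes):
    """Share of positive outcomes (1 = approved) in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical loan-approval outcomes for two demographic groups
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

gap = demographic_parity_gap(group_a, group_b)
print(f"Parity gap: {gap:.3f}")
if gap > 0.1:                         # illustrative audit threshold
    print("Flag for bias review")
```

Regular audits would run checks like this on each model release, so a widening gap surfaces before it becomes a regulatory or reputational problem.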
What does this mean for corporate governance?
- Understand AI decisions: The best thing that companies can do is get hands-on with AI and get clarity about what it can do
- Document AI processes: Maintain records of data, models and updates for traceability
- Enable auditability
- Communicate clearly: Provide simple, non-technical explanations of AI outputs to stakeholders
- Align with new regulations
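Documenting AI processes for traceability can start with an append-only record per model version. The sketch below assumes a hypothetical schema (the field names are not from any standard); a content hash makes later tampering detectable:

```python
# Sketch of a traceability record for an AI model, assuming a simple
# append-only JSON log. Field names are illustrative, not a standard schema.
import hashlib
import json
from datetime import datetime, timezone

def model_record(name, version, training_data_ref, notes):
    """Build a timestamped, hash-sealed record of a model release."""
    payload = {
        "model": name,
        "version": version,
        "training_data": training_data_ref,
        "notes": notes,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    # Hash the record contents so later edits are detectable
    payload["sha256"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return payload

record = model_record("credit-scorer", "2.1.0",
                      "s3://datasets/loans-2024q4", "Retrained after bias audit")
print(json.dumps(record, indent=2))
```

Keeping records like this per data refresh and model update is what makes the "enable auditability" bullet practical rather than aspirational.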
What exactly is trustworthy AI?
- Legal
- Technically robust
- Ethical
What is ethical AI?
- This is a question that big companies are now trying to tackle; it is no longer just academic
- Although everyone is talking about it, it doesn’t mean we’ve arrived at any consensus
Key points for corporate governance
- Accountability and risk management: Article 9 (accountability and responsibility) and Article 16 (risk and impact management framework) of the Council of Europe Framework Convention on AI. Continuously monitor risks and document adverse impacts of AI systems
- Transparency and ethical use of AI: Article 8 (transparency and oversight). Explainable AI: can we explain what the system is doing, and do we actually know?
- Data privacy and non-discrimination: Article 11 (privacy and personal data protection) and Article 10 (equality and non-discrimination). AI systems must protect personal data and treat people equitably.
The role of technology in AI governance
- There are tools helping companies here, starting with AI model monitoring: these tools track a range of metrics to help you explain what your AI is doing
- IBM's tools help with auditability and traceability
- Google's tools focus on bias detection and fairness
- H2O Eval Studio detects when AI is making up information (hallucinations)
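Model-monitoring tools like those above typically track drift metrics, comparing how model inputs or scores are distributed in production against the distribution seen at training time. One widely used metric is the Population Stability Index (PSI); the bucket proportions below are invented for illustration:

```python
# Illustrative model-monitoring metric: Population Stability Index (PSI),
# which quantifies drift between training-time and live score
# distributions. The bucket proportions are made-up example data.
import math

def psi(expected, actual):
    """PSI over pre-bucketed proportions; > 0.2 is often read as major drift."""
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)   # avoid log(0) on empty buckets
        a = max(a, 1e-6)
        total += (a - e) * math.log(a / e)
    return total

train_dist = [0.10, 0.20, 0.40, 0.20, 0.10]  # score buckets at training time
live_dist  = [0.05, 0.15, 0.35, 0.25, 0.20]  # same buckets in production

print(f"PSI = {psi(train_dist, live_dist):.3f}")
```

Monitoring dashboards recompute metrics like this on a schedule, alerting the governance team when drift crosses an agreed threshold.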
About
This Webinar
Join us for this insightful webinar on AI in governance, proudly hosted by the Corporate Governance Institute in collaboration with Duke Corporate Education. As part of our ongoing partnership, we’re excited to introduce the Certified Corporate Governance Institute Professional course—an exclusive live, online programme delivered by industry leaders at Duke Corporate Education.
In the webinar, we will explore the critical role of technology in AI governance, providing practical insights into how organisations can leverage cutting-edge tools to ensure compliance, transparency, and ethical AI use. We’ll also discuss the implications of the EU AI Act, offering guidance on aligning your governance practices with new regulatory requirements.
Your key takeaways from this webinar will be:
– Using AI Governance Technologies:
Gain a clear understanding of the essential tools and technologies that support AI governance, making them accessible and actionable for your organisation.
– Navigating Regulatory Compliance:
Learn how to strategically implement AI governance technologies to meet the demands of the EU AI Act and other emerging regulations, ensuring your AI initiatives are both innovative and compliant.
– Balancing Innovation with Control:
Explore best practices for integrating AI governance tools into your corporate strategy, enabling responsible and controlled use of AI while driving business growth.
This Speaker
Clark Boyd is CEO and founder of AI marketing simulations company Novela. He is also a digital strategy consultant, author, and trainer. Over the last 12 years, he has devised and implemented international strategies for brands including American Express, Adidas, and General Motors.
Today, Clark works with business schools at the University of Cambridge, Imperial College London, and Columbia University to design and deliver their executive-education courses on data analytics and digital marketing. He is also a faculty professor of entrepreneurship and management at Hult International Business School.
Clark is a certified Google trainer and runs Google workshops across Europe and the Middle East. He has delivered keynote speeches on AI at leadership events in Latin America, Europe, and the US.