News analysis

How will boards govern AI?

by ACCA & EY


Do you trust AI? As a business manager or board member, are you concerned about AI being integrated into your business systems?

Over the last few years, the field of AI governance has grown rapidly, presenting an immediate challenge for boards and business leaders: ensuring responsible AI applications and practices.

Boards are now responsible for understanding and managing the risks associated with AI and ensuring that a company’s AI systems are developed and deployed responsibly and ethically.

Stay compliant, stay competitive

Build a better future with the Diploma in Corporate Governance.

A response to the UK government’s white paper on AI regulation

The ACCA (Association of Chartered Certified Accountants) and EY Global have written a report responding to the UK government’s white paper on AI regulation.

The report, ‘Building the foundations for trusted AI’, sets out their perspective and recommendations on the government’s approach to AI regulation, particularly the role of audit and assurance in building trust in AI systems.


As professional organisations in the field of accounting and assurance, the ACCA and EY have expertise in ethical practices, governance, and assurance, which can contribute to the responsible deployment of AI.

The report contributes to the ongoing dialogue and refinement of the UK’s AI regulatory regime.

Should boards be concerned about the rise of AI in business?

Yes. Boards must understand and manage the risks associated with AI and ensure that the company’s AI systems are developed and deployed responsibly and ethically.

The UK Corporate Governance Code and the UK Companies Act set out principles of good practice for boards, including assessing and managing emerging and principal risks and presenting a fair and understandable assessment of the company’s position and prospects.

The ACCA and EY emphasise that boards should consider how they can work with management teams to ensure the workforce is equipped with the skills to deploy and monitor AI effectively.

Executive business managers should be aware of the AI-related risks that may arise from third-party products or services with AI system components and ensure appropriate checks and compliance with standards are in place.

The board should play a key role in overseeing AI governance and ensuring that AI systems are aligned with the company’s strategic objectives, risk appetite, and ethical considerations.

A pro-innovation approach to AI

The report supports the government’s pro-innovation approach to AI regulation and emphasises the importance of building trust in AI through ethical practices.

The report also highlights the role of audit and assurance in ensuring the ethical deployment of AI and provides recommendations for policymakers.

It welcomes the government’s white paper as a starting point for ongoing dialogue and refinement of the UK’s AI regulatory regime.

What key principles are outlined in the UK government’s white paper on AI regulation?

The key principles outlined in the UK government’s white paper on AI regulation are:

  1. Safety, security, and robustness.
  2. Appropriate transparency and explainability.
  3. Fairness.
  4. Accountability and governance.
  5. Contestability and redress. 

These principles provide a framework for regulating AI systems and ensuring they are developed and used responsibly.

How does the ACCA report emphasise the role of audit and assurance in building trust in AI systems?

The report emphasises that the audit and assurance profession can play a vital role in building trust in AI systems.

It states that if the public is to trust AI, they need more information about the AI models being used, and the profession is well placed to provide that information.

The report also mentions that assurance techniques and technical standards can support the development and implementation of trustworthy AI alongside regulation.

It calls for a “toolbox of assurance techniques” to measure, evaluate, and communicate the trustworthiness of AI systems across their development and deployment life cycle.

These techniques include impact assessment, audit, performance testing, and formal verification methods.

Furthermore, the report acknowledges the efforts of the UK’s AI Standards Hub in building a community supporting AI standards, which includes the involvement of the audit and assurance profession.

It recognises that existing codes and standards relevant to professional accountants can contribute to developing and deploying trustworthy AI.

Overall, the report recognises the critical role of audit and assurance in businesses that develop and deploy AI models, ensuring compliance with regulations and ethics policies, managing data appropriately, and communicating with boards about technology-related risks and exposures.


“We highly support the efforts of the UK’s AI Standards Hub, which is helping to build a community supporting AI standards by facilitating knowledge sharing, capacity building and research,” says the ACCA.

Boards must present a ‘fair, balanced and understandable assessment of the company’s position and prospects’, an obligation that extends to the company’s use of AI.

What are some of the concerns raised about the government’s approach to AI regulation, and what issues does it fail to address?

Some concerns raised about the white paper’s approach to AI regulation include:

  1. Lack of clarity on a dedicated AI regulator: The white paper does not propose a specific AI regulator, which has raised concerns about accountability and oversight. Stakeholders have called for a designated entity responsible for AI oversight to avoid uncertainty and inconsistency in AI regulation.
  2. Delay in statutory footing for cross-cutting principles: The white paper does not initially put its cross-cutting principles on a statutory footing, raising concerns about regulatory certainty. This delay may leave businesses without the clarity they need, potentially delaying upskilling and preparation for future compliance, particularly among small and medium-sized entities (SMEs).
  3. Insufficient guidance on ethics and trust: The white paper acknowledges the importance of accountability, ethics, and trust in AI but does not provide detailed guidance. Stakeholders have called for additional support and clarity in navigating ethical considerations and building confidence in AI systems.
  4. Inadequate regulation of foundational models: The white paper acknowledges the complexities associated with foundational models, such as large language models (LLMs), but does not set out a clear regulatory approach for them. Concerns have been raised about the challenges of regulating foundational models and their potentially transformative effects on business ecosystems.
  5. Limited consideration of environmental impact and job displacement: The white paper does not provide a comprehensive view of the environmental impact of AI or address the social considerations related to job displacement. Concerns have been raised about the need for policy work to define the government’s view on these issues and ensure the responsible deployment of AI.

Overall, while the UK government’s white paper sets out a framework for AI regulation, there are concerns about the lack of clarity, guidance, and specific provisions in certain areas, which may impact the effectiveness and comprehensiveness of the regulatory approach.

University credit-rated Diploma in Corporate Governance

Globally recognised and industry approved.

Tags
AI governance
Audit
Boardroom Documentation
Risk
Trust