News analysis
Does your company need a policy for AI like ChatGPT?
Does your company have a policy for ChatGPT use? It probably wasn’t on any firm’s to-do list a year ago. How fast things can change in management and governance.
‘Chat’, as some call it, is a generative AI tool that has become a household name. Supporters love it for its cutting-edge ability to generate human-like content and save time on certain menial tasks.
Critics are nervous, however. They don’t like the pace of change it brings, nor the potential impact it could have on day-to-day business.
Whatever your company thinks about it, one thing is clear: we are rapidly approaching the stage where boards and management need a policy for ChatGPT and other forms of generative AI.
By creating a comprehensive policy around the use of AI in the workplace, a board of directors can help ensure that the technology is used effectively and ethically, and that employees are trained to use it safely and responsibly.
Read more: What is generative AI?
Quick reminder: what is ChatGPT?
ChatGPT is a generative AI tool that has surged in popularity this year. Its strength lies in its ability to generate content in response to human prompts.
Generally, this content is factual, relevant, and delivered in a way that reads as though a human wrote it.
You can read more about it here.
Why has ChatGPT become so popular?
- Its uniqueness. Its groundbreaking ability to converse naturally with humans sets it apart from competitors.
- Its usefulness. ChatGPT has been used to write articles, marketing content, essays (something schools and colleges are rapidly trying to gain control over), and even computer code.
- Its newsworthiness. The above points have earned the system front-page publicity worldwide, ensuring more people would read about it and try it out.
Is ChatGPT already a big player in business?
It appears so.
A new report from Korn Ferry estimates that nearly half of surveyed businesses already use ChatGPT to complete work tasks, and 80% said the system was “legitimate and beneficial.”
In other words, ChatGPT has only been in the public realm for months, yet it has already found a home in a huge proportion of offices.
“They’re figuring out ways to make generative AI work for them,” said Esther Colwill, president of Korn Ferry’s Global Technology, Communications, and Professional Services practice.
How are businesses using ChatGPT?
In all sorts of ways, and that versatility is part of the system’s appeal. Examples include:
- Writing templates for online content.
- Customer service correspondence.
- Writing code.
- Writing sales pitches.
- Summarising long reports.
- Analysing business trends.
Supporters of the system maintain that it doesn’t signal the replacement of traditional workers; rather, it gives them a time-saving tool the likes of which they have never seen before. In other words, it’s opening new doors.
Sounds great, so why the urgent need for a policy for ChatGPT?
Because as good as ChatGPT looks at first glance, the system also has its share of limitations that could cause problems if left unchecked. Many of these limitations stem from the information bank available to it.
- That bank does not keep up with any news cycle. The most recent information could be months, if not years, old. This means any ChatGPT-produced content could ignore the most recent relevant events.
- The information bank can include biased sources. ChatGPT could misinterpret these as hard facts and present them as such.
- The bank may contain sensitive data, which ChatGPT could deem fair game for widespread publishing. If organisations publish ChatGPT-produced content containing such data, they could be held liable.
In addition, the system (like any other tech) can make simple errors that might be challenging to spot.
All of this adds up to a scenario where something useful could become something harmful if the right controls aren’t put in place.
A ChatGPT policy is a good idea
Limitations are just one reason why companies should create a policy for ChatGPT. The other is the pace of its popularity: many will undoubtedly feel they need help to keep up and make sound decisions about its use.
Policies help correct this imbalance. They allow corporate leaders to decide what ChatGPT is helpful for, and when it should be avoided.
They also ensure that a business neither shies away from what is undoubtedly a significant new player in the business world nor goes in blind.
As with any new corporate movement, strategy is crucial. Make sure you form yours soon.
Read more: How to start using ChatGPT
What should a ChatGPT or AI usage policy contain?
Here are some areas that a board of directors could consider when creating an AI policy for their workplace:
- Data privacy and security: A policy should be put in place that outlines how the company will collect, store, and protect the data used by AI systems. This includes ensuring that only authorised personnel can access data and that it is stored securely.
- Bias and discrimination: AI systems can reflect and amplify human biases and prejudices. The policy should address how the company will ensure that AI systems do not discriminate against individuals or groups based on protected characteristics such as race, gender, or age.
- Transparency and explainability: The policy should require that AI systems used in the workplace are transparent and explainable. This means that employees should be able to understand how AI decisions are made and why specific outcomes are generated.
- Employee training: The policy should require that all employees who work with AI systems are trained on how to use them effectively and ethically. This includes understanding the limitations of the technology and the potential impact on their work.
- Accountability and responsibility: The policy should clearly define who is responsible for AI systems’ decisions in the workplace. This includes holding individuals and departments accountable for the outcomes generated by AI systems.
- Ethical considerations: The policy should address ethical concerns surrounding the use of AI in the workplace, such as the potential impact on employment and the ethical use of AI in decision-making.
- Continuous monitoring and improvement: The policy should require ongoing monitoring and modification of AI systems used in the workplace to ensure that they are functioning as intended and are not causing unintended consequences.