AI isn’t coming, it’s here – what are the biggest concerns?

by Dan Byrne

As AI takes the world by storm, David Duffy, CEO and co-founder of the Corporate Governance Institute, discusses the top concerns for businesses.

A Korn Ferry report estimates that nearly half of surveyed businesses already use ChatGPT to complete work tasks and 80% say the system is “legitimate and beneficial.” In other words, it has only been in the public realm for months, yet ChatGPT has already found a home in many offices.

OpenAI's ChatGPT is only one of many AI tools being rapidly adopted by businesses. Here are some of the growing concerns about the ethical dilemmas this technology brings if it is not regulated to an appropriate standard.

Will AI make my job redundant? 

Job security is difficult to measure right now because, despite the hype and constant chatter about AI and job losses, we are still in the early days and don't know the actual impact this technology will have on the workforce. That said, the revolutionary nature of AI means there will inevitably be a shift in how we work. Some may lose jobs; some may need upskilling or other training. The ethical dilemma is how a company balances these changes with its embrace of AI.

Supporters of the system maintain that it doesn't signal a replacement of traditional workers, but rather gives them a time-saving tool the likes of which they have never seen before. In other words, it is opening new doors. However, it is understandable that fear abounds when companies like BT announce predictions of replacing around 10,000 workers with AI by the end of the decade.

AI can’t access personal data… right?  

AIs can draw on any information held in the public domain of the internet, and that is a lot of data. Despite best efforts to safeguard personal information, some of it can still become accessible. Perhaps a person entered information on a website without a second thought, believing it was private. Or maybe the information was exposed as part of a wider data leak. In essence, AI cannot distinguish between sensitive data and data deemed fair for widespread use. If sensitive data is in its knowledge bank, it will be handled like any other information.

For example, Samsung banned ChatGPT for company use after an employee leaked confidential data while using it for a task, leaving the data permanently in the AI's language bank and potentially accessible to others. Apple, JPMorgan, Deutsche Bank, and Verizon have also announced bans on ChatGPT for various reasons, but mostly to protect against employees who might use AI and unintentionally jeopardise private company information by doing so.

To complicate things further, organisations or employees who use sensitive data collected by AI can be held liable, highlighting the importance of AI policies within businesses.

Will we witness an increase in misinformation? 

AI knowledge banks do not keep up with the news cycle; their most recent information could be months, if not years, old. This means ChatGPT-produced content could ignore the most recent and relevant events. Its information bank can also include biased sources, as the internet contains an endless wave of biased news. ChatGPT could misinterpret these as hard facts and present them as such to an unsuspecting user.

Sam Altman, CEO of OpenAI, told a congressional hearing in Washington in May that the latest models of AI technology could easily manipulate users, saying, “the general ability of these models to manipulate and persuade, to provide one-on-one interactive disinformation is a significant area of concern.”

Eliot Higgins, the founder of Bellingcat, an independent investigative collective, used an AI image generator to create fake images of Donald Trump being arrested in New York. The tweet has since been retweeted and liked by thousands, and it is one of many similar incidents, fuelling fears about what the future holds for misinformation and deepfakes.

In response, Prof Michael Wooldridge, director of foundation AI research at the UK's Alan Turing Institute, noted that similar fears were widespread when Photoshop first became popular, yet the public eventually learned to distinguish what was real from what was manipulated.

AI feels different from Photoshop, however, as its capabilities keep growing. Some fear we will reach a point where we can no longer believe anything we encounter on the web.

What’s the bottom line?

With AI developing rapidly and more companies racing to build their own AI tools, it is clear that, whether you like it or not, AI is already here, and it is only a matter of time before businesses that do not adapt to the new technology feel they are being left behind competitively. However, change can be good if done correctly, and AI has the potential to enhance our working lives. That is why implementing a comprehensive policy on the use of AI in the workplace is vital for today's businesses. The board of directors should ensure the technology is used effectively and ethically, and that employees are trained to use it safely and responsibly.

Even with these rapid developments, regulation cannot seem to keep up. Businesses need to push for regulation and governance in AI. Having board members fully educated on the latest developments will ensure both businesses and employees are better protected.

ENDS

Notes to editors

The Corporate Governance Institute is the global leader in the education and certification of existing and aspiring boardroom directors. The Institute provides board directors with education and certification to the highest standards. Working with leading board members and industry practitioners, we create and deliver world-class education and certification for the modern board director.

The Corporate Governance Institute arms our graduates with the tools and expertise required to be highly effective board members.