AI Ethics

Contributed by Paige McAllister, SPHR, SHRM-SCP, Vice President for Compliance — The Workplace Advisors

Conversations about artificial intelligence (AI) have been everywhere recently. 

According to the Pew Research Center, 62 percent of Americans believe AI will have a major impact on workers, but only 28 percent believe it will impact them directly. AI is already affecting employees: 4,000 jobs were lost to AI in May 2023, the first time AI was cited as a reason for layoffs.

While most of the recent conversation involves AI-generated content, other forms of AI have been used in the workplace for some time. 

AI-Generated Content

In the workplace, chatbots can be used to research topics and to generate content such as policies, procedures, emails, letters, and disciplinary actions. On the positive side, AI used for HR purposes can help effectively address legal requirements, uncomfortable topics, and messages intended for general audiences. However, AI has also been shown to generate content that lacks empathy, is non-specific, disregards the privacy of others, offers no face-to-face interaction, or contradicts itself. Asking the same question in different ways can also produce different results, which may further complicate or confuse the issue.

Beyond these concerns are the inherent limitations of chatbots, which are built on large language models that draw on many available data sources. The end results are only as good and valid as the data they reference, and that data is not always accurate. For example, Wikipedia is an often-used source but, because it relies on user-generated content, it has been estimated to be only about 80 percent accurate. In some cases, chatbots have also fabricated their own reference material and then cited it to support an answer that is incorrect or entirely fictional. 

Suggested Actions to Take Before Using AI

As tools develop and improve, AI will find a place in most workplaces. As you determine how AI will be allowed in your workplace, consider taking the following actions:

Research your AI tools: Learn what AI is and how it is incorporated into the tools you use now or may rely on in the future. If you choose to use AI tools, be sure you understand their validity and limitations. For example, if you plan to use virtual analysis of recorded interviews, understand the science behind it, including whether the tool has been properly tested to remove implicit bias.

Establish policies and procedures on AI use: Draft a policy outlining when and how AI can and cannot be used. Include clear statements prohibiting discrimination and the disclosure of confidential information. While the policy can be general enough to cover any AI, develop specific procedures and expectations as you introduce each AI tool.

Train employees and managers: As you expand the use of AI tools in your company, train your employees and managers on when and how to use them properly and legally. Instruct users on what is and is not allowed, as well as on expectations such as reviewing and fact-checking all content before releasing it and personalizing any letter to an employee or customer.