Abstract: Chatbots such as ChatGPT are highly sophisticated artificial intelligence (AI) conversational agents that play an important role in many sectors and are increasingly integrated into organizational functions ranging from customer support to recruitment and professional development. Alongside the growing adoption of chatbots is a growing focus on diversity, equity, and inclusion (DEI). This article presents a series of chatbot examples and discusses their associated DEI implications, such as those related to AI bias. From the cases reviewed in this article, we extract a framework of DEI safeguards comprising input safeguards, design safeguards, and functional safeguards. This framework can be used by those involved in developing AI-based chatbots or managing their use to ensure that chatbots support, rather than weaken, organizational DEI initiatives and strategies.

Lay summary (by Claude 3 Sonnet): Chatbots like ChatGPT are advanced artificial intelligence (AI) programs that can hold conversations and are being used more and more in businesses for tasks like customer service, hiring, and employee training. As chatbots become more common, there is also increasing focus on diversity, equity, and inclusion (DEI) in organizations. This article looks at examples of chatbots and discusses the DEI implications they raise, such as the potential for AI bias. From analyzing these cases, the authors develop a framework of safeguards to protect DEI when using chatbots. It includes safeguards around the data inputs used to train the chatbot, how the chatbot is designed, and how it actually functions. The framework can guide developers creating AI chatbots and managers deploying them, helping ensure the chatbots support, and do not undermine, an organization’s diversity, equity, and inclusion efforts and policies. The goal is to harness the power of chatbots while mitigating potential negative impacts on DEI.