As artificial intelligence (AI) increasingly informs life-altering decisions, explainable AI systems that deliver transparent, trustworthy outcomes have become essential. However, recent research reveals that existing explainable AI systems may be culturally biased, catering primarily to individualistic Western populations: a striking 93.7% of reviewed studies neglected cultural variations in explanation preferences. This oversight risks eroding trust in AI systems among users from diverse cultural backgrounds. The finding has significant implications for the development of region-specific large language models (LLMs) and AI companionship apps, such as Glow from China and Kamoto.AI from India, which may need to tailor their explainability features to local cultural preferences to ensure widespread adoption and trust.

by Llama 3 70B Instruct