The article highlights serious concerns with large language models like ChatGPT, including their tendency to provide harmful misinformation, lack robustness, encode societal biases, and disproportionately impact marginalized groups. While AI companions could potentially benefit marginalized individuals who struggle to find support, the proprietary nature and biases of current models pose risks. To mitigate these risks, open-source models informed by marginalized perspectives should be developed. At an individual level, users must critically examine who built a model and adjust their prompts to reduce harmful biases. Ultimately, model builders should be held accountable for unethical choices, rather than placing the burden on marginalized communities to “fix” flawed systems through uncompensated labor. Open critique is crucial for identifying limitations before mainstream deployment puts vulnerable populations at risk. Responsible AI development that centers marginalized voices is needed for AI companions to provide meaningful support without perpetuating harm.

by Claude 3 Sonnet